I think a big advantage of a framework is that the team(s) don't end up arguing over architecture as much, because the framework made those decisions for them. The framework also has a proven history of its architecture working well for xyz problems. And it makes finding compatible talent much easier: if you're using a framework, hiring someone who has worked with that same framework for years makes it a safer bet that they will start being useful quickly. If you have your own bespoke system, it could be years until a new hire feels comfortable with the existing system's quirks and nuances and stops breaking things as much.
More complicated in what ways specifically? I think the relevant question is whether building an app with Rama is more or less complicated. Rama may be more complicated than MySQL in its implementation, but that doesn't affect me as a developer if it makes my job easier overall.
Discussing levels of complexity quickly gets pretty subjective. It is possible that Rama has found good abstractions that hide a lot of the complexity. It is also possible that taking on more complexity in this area saves you from other sorts of complexity you may encounter elsewhere in your application.
However, there is just more going on in an event sourcing model. Instead of saving data to a location and retrieving it from that same location, you save data to one location, read it from another location, and you need to implement some sort of linker between the two (or more).
This also comes down to my personal, subjective experience. I actually really like event sourcing, but I have worked on teams using these systems, and I have found that the majority of people find them much harder to reason about than traditional databases.
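To make that concrete, here's a minimal sketch in plain Clojure (names made up, no particular framework) of the three pieces you end up owning: the event log you write to, the materialized view you read from, and the projector linking them:

  ;; write side: an append-only event log
  (def event-log (atom []))

  ;; read side: a materialized view derived from the log
  (def balances (atom {}))

  ;; the "linker": a projector that folds each event into the view
  (defn apply-event [view {:keys [account amount]}]
    (update view account (fnil + 0) amount))

  (defn record! [event]
    (swap! event-log conj event)         ;; save to one location...
    (swap! balances apply-event event))  ;; ...and keep the other in sync

  (record! {:account "alice" :amount 100})
  (get @balances "alice")                ;; reads come from the view, not the log => 100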
There can be a lot of integration pain when implementing event sourcing and materialized views by combining individual tools. However, these are all integrated in Rama, so there's nothing you have to glue together yourself as a developer. For example, using the Clojure API, here's how you declare a depot (an event log):
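Inside a module definition it looks roughly like this (module and depot names here are placeholders):

  (defmodule MyModule [setup topologies]
    ;; declare an event log partitioned randomly across the cluster
    (declare-depot setup *my-events :random))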
That's it, and you can make as many of those as you want. And here's how a topology (a streaming computation that materializes indexes based on depots) subscribes to that depot:
(source> *my-events :> *data)
If you want to subscribe to more depots in the topology, then it's just another source> call.
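For instance, subscribing to a second, hypothetical depot *other-events inside the topology's <<sources block would look something like this (where s is the stream topology from the module definition):

  (<<sources s
    (source> *my-events :> *data)
    ;; ...dataflow code handling *data...
    (source> *other-events :> *other-data)
    ;; ...dataflow code handling *other-data...
    )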
That these are integrated and colocated also means the performance is excellent.
This is what has me excited about Rama. I was very into the idea of event sourcing until I realized how painful it would be to make all the tooling needed.
How exactly is it different from No-SQL?
No schema? Check.
No consistency? Check. (eventually consistent)
Key-value store? Check. (because it's using ZooKeeper under the hood)
Promising amazing results and freeing you from the chains of SQL? CHECK!
Everything you wrote here is false, with the exception of Rama not using SQL. Rama has strong schemas, is strongly consistent, and is not limited to key/value (PStates can be any data structure combination). ZooKeeper is used only for cluster metadata and is not involved with user storage/processing in any way.
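To illustrate the schema point, a PState declaration specifies arbitrarily nested data structures rather than just key/value, along these lines (names are just examples, s being the topology it's declared on):

  (declare-pstate s $$profiles
    {Long (fixed-keys-schema {:username  String
                              :followers (set-schema Long {:subindex? true})})})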
If there's someone we can't trust not to kill/harass/assault people, they need to be in prison, not having their speech policed for them as if they were a child. That's one of the core purposes of imprisonment, in fact.
But the reality is that we can, in fact, trust most people to not do those things. And yes, that means sometimes we will be wrong and we will have to pick up the pieces after someone does something awful. That is simply part and parcel of living in a free society.
The people being threatened disagree with you. They aren't living in a free society. They are forced to protect themselves, both in real life and online.
Personally, I'd be in favor of enforcing existing laws about threats. Make a death threat, go to jail. Law enforcement is pretty bad at that, with the excuses that most death threats never turn into action and that it's very difficult to track down an anonymous commenter.
But I'd like to see what would happen if they took the existing laws seriously. Maybe then I'd find it easier to credit the notion of unrestricted free speech for anything that doesn't rise to the level of criminality. I'm not convinced, but I'd at least be able to consider it.
Meantime, "Somebody threatened to kill you and that's your problem until you're actually dead" does not seem like an acceptable situation.
Like, I have a bachelor's degree in engineering. Is that intermediate or advanced? I feel like saying my science knowledge is anything beyond intermediate is a bit of an overstatement, personally.
I have one in physics, but I have also taken rather a lot (given the degree) in biology, chemistry, civil engineering, and electrical engineering. I also went some places in math that aren't usual. And I've worked in IT since forever. There's a lot of stuff where I take a glance at it and realize I'm just looking at a single hull plate on a battleship.
What I am getting at is that if you have just intermediate knowledge in a bunch of places, I think it lends itself to sensing that you're just a paramecium stuck to the side of some N-dimensional construct. There's so much. I had a professor who was the expert in the second excited state of Helium-3. That was his thing. Just a single needle in the whale-sized blowfish of physics.
Each scientific field has specialised and become so dense with knowledge that even new graduates in that field would barely be classed as intermediate.
I didn't find out what standard they use, but personally I'd say intermediate sounds about right. I also have an engineering degree, and while I know more than the average bear in many scientific disciplines, I couldn't say that any of it is advanced.
I'm thinking of times when I went to the library to dig deeper on a topic and discovered a huge and complex subject just lying in wait.
[9]: Fernbach, P. M., Light, N., Scott, S. E., Inbar, Y. & Rozin, P. Extreme opponents of genetically modified foods know the least but think they know the most. Nat. Hum. Behav. 3, 251–256 (2019). https://www.nature.com/articles/s41562-018-0520-3
(Useful to note that all those surveys are from 2019, pre-pandemic, before there was severe partisanization of the phrase "trust in science". I wonder how hard it would be to construct a neutral methodology post-pandemic, now that even the basic vocabulary itself is loaded with associations.)
Intermediate knowledge of what's being tested: everyone is most overconfident on subjects they have intermediate knowledge about.
If you test high school concepts, then this applies to you if you're shaky on high school concepts. People who have no clue about the high school material and people who understand it well will both be less overconfident on that test.
Or if you're talking about college algorithms, then that is the basis for intermediate knowledge: an average comp sci grad will be the most overconfident, while a person who never studied algorithms and a person who has taught algorithms for years will both be less overconfident on that test.