Hacker News

Without details it's hard to really get anything from this. Could you share some?


I can share some. Had a similar experience as the parent comment. I do support "one big database" but it requires a dedicated db admin team to solve the tragedy of the commons problem.

Say you have one big database. You have 300 engineers and 30-50 product managers shipping new features every day, accountable to the C-suite. They are all writing queries to retrieve the data they want. One more join, one more N+1 query. Tons of indexes to support all the different queries, to the point where your indexes exceed the size of your tables in many cases. Database maintenance is always someone else's problem, because hey, it's one big shared database. You keep scaling up the instance size because "hardware is cheap". Eventually you hit the m6g.16xlarge. You add read replicas. Congratulations, now you have an eventually consistent system. You have to start figuring out which queries can hit the replica and which ones always need the fresh data. You start getting long replication lag, but it varies and you don't know why. If you decide to try to optimize a single table, you find dozens or hundreds of queries that access it. You didn't write them. The engineers who did don't work here anymore...

I could go on, and all these problems are certainly solvable and could have been avoided with a little foresight, but you don't always have good engineers at a startup doing the "right thing" before you show up.


I think this hits the nail right on the head, and it's the same criticism I have of the article itself: the framing is that you split up a database or use small VMs or containers for performance reasons, but that's not the primary reason these things are useful. They are useful for people scaling first and foremost, and for technical scaling only secondarily.

The tragedy of the commons with one big shared database is real and paralyzing. Teams not having the flexibility to evolve their own schemas because they have no idea who depends on them in the giant shared schema is paralyzing. Defining service boundaries and APIs with clarity around backwards compatibility is a good solution. Sometimes this is taken too far, into services that are too small, but the service boundaries and explicit APIs are nonetheless good, mostly for people scaling.


> Defining service boundaries and APIs with clarity around backwards compatibility is a good solution.

Can't you do that with one big database? Every application gets an account that only gives it access to what it needs. Treat database tables as APIs: if you want access to someone else's, you have to negotiate to get it, so it's known who uses what. You don't have to have one account with access to everything that everyone shares.
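
As a sketch of what that could look like in Postgres (the role and table names here are invented for illustration):

```sql
-- One login role per application; it sees only what it negotiated access to.
CREATE ROLE billing_app LOGIN;

-- The tables this team exposes act as its "API surface".
GRANT SELECT ON customers, invoices TO billing_app;
GRANT SELECT, INSERT, UPDATE ON invoice_lines TO billing_app;

-- No blanket GRANT ALL: a new dependency means a new, recorded GRANT.
```

Because every dependency is an explicit GRANT, "who uses this table?" can later be answered from the catalog instead of from tribal knowledge.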


It would be easier to create different databases to achieve the same thing. Those could live on the same database server, but clear boundaries are the key.


Schemas can be useful in this regard


Indeed! And functions with security definers can be useful here too. With those, one can define a very strict and narrow API, with functions that write to or query tables that users don't have any direct access to.

Look at it as an API written in DB functions, rather than in HTTP request handlers. One can even have neat API versioning through, indeed, the schema, and give different users (or application accounts) access to different (combinations of) APIs.
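
A minimal Postgres sketch of that idea (schema, function, and role names are all hypothetical):

```sql
-- The real tables live in a schema no application role can touch.
CREATE SCHEMA users_private;
CREATE TABLE users_private.accounts (id bigint PRIMARY KEY, username text NOT NULL);

-- The "API" is a schema of security-definer functions.
CREATE SCHEMA users_api_v1;
CREATE FUNCTION users_api_v1.get_username(p_user_id bigint)
RETURNS text
LANGUAGE sql STABLE
SECURITY DEFINER
SET search_path = users_private
AS $$ SELECT username FROM accounts WHERE id = p_user_id $$;

-- Clients get the API, never the tables.
REVOKE ALL ON SCHEMA users_private FROM PUBLIC;
GRANT USAGE ON SCHEMA users_api_v1 TO some_app_role;
GRANT EXECUTE ON FUNCTION users_api_v1.get_username(bigint) TO some_app_role;
```

A users_api_v2 schema can then evolve independently while v1 keeps working for older clients.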

The rest is "just" a matter of organizational discipline, and of teams internalizing externalities so that it doesn't devolve into a tragedy of the commons: a phenomenon that occurs in many shapes, not exclusively in shared databases. One can picture it happening just as easily with unfettered access to cloud resources.

But here's the key difference: in the cloud, there's clear accounting per IOP, per TB, per CPU-hour, so the incentive to use resources efficiently can be applied on a per-team basis, often through budgeting. "Explain to me why your team uses 100x more resources than this other team" / "Explain to me why your team's usage has increased 10-fold in three months".

Yet there's no reason to think that you can only get accounting for cloud stuff. You could have usage accounting on your shared DB. Does anyone here have experience with any kind of usage accounting system for, say, PostgreSQL?
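
For what it's worth, the pg_stat_statements extension gets you most of the way there. A rough per-account accounting query might look like this (assuming the extension is preloaded; column names are as of recent Postgres versions):

```sql
-- Requires shared_preload_libraries = 'pg_stat_statements' in postgresql.conf.
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Total execution time and I/O, attributed to the role that ran the queries.
SELECT r.rolname,
       sum(s.calls)                              AS calls,
       round(sum(s.total_exec_time)::numeric, 1) AS total_ms,
       sum(s.shared_blks_read)                   AS shared_blocks_read
FROM pg_stat_statements s
JOIN pg_roles r ON r.oid = s.userid
GROUP BY r.rolname
ORDER BY total_ms DESC;
```

With one database account per team, as suggested elsewhere in this thread, that's effectively a per-team resource bill.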


I think we're getting hung up on database server vs. database as conceptual entity. I think separation between the entities is more important (organizationally) and don't think it matters as much whether or not the server is shared.


These are real problems, but there can also be mitigations, particularly when it comes to people scaling. In many orgs, engineering teams are divided by feature mandate, and management calls it good enough. In the beginning, the teams are empowered and feel productive with their focused mandates: it feels good to focus on your own work and largely ignore other teams. Before long, the tragedy-of-the-commons effect develops.

I've had better success when feature-focused teams have tech-domain-focused "guilds" overlaid. Guilds aren't teams per se, but they provide a level of coordination and, more importantly, permanence to communication among technical stakeholders. Teams don't make important decisions within their own bubble, and everything notable is written down. It's important for management to be bought in and to value participation in these non-team activities when it comes to career advancement (not just pushing features).

In the end, you pick your poison, but I have certainly felt more empowered and productive in an org where there was effective collaboration on a smaller set of shared applications than the typical application soup that develops with full team ownership.


In uni we learnt about federated databases, i.e. multiple autonomous, distributed, possibly heterogeneous databases joined together by some middleware to service user queries. I wonder how that would work in this situation, in place of one single large database.

Federated databases are hardly ever mentioned in these kinds of discussions involving 'web scale'. Maybe because of latency? I don't know.


> Teams not having the flexibility to evolve their own schemas because they have no idea who depends on them

This sounds like a problem of testing and organization to me, not a problem with single big databases.


Sure. My point is that the organization problems are more difficult and interesting than the technical problems being discussed in the article and in most of the threads.


Introducing an enormous amount of overhead, rather than training your software engineers to use acceptable amounts of resources instead of just accidentally crashing a node and not caring, is a little ridiculous.


For whatever reason I've been thrown into a lot of companies at that exact moment when "hardware is cheap" and "not my problem" approaches couldn't cut it anymore...

So yes, it's super painful, and it requires a lot of change in processes and mindsets, and it's hard to get everyone to understand that things will get slower from there.

On the other end, micro-services and/or multi-DB is also super hard to get right. One of the surprises I had was all the "caches" that each service started silently adding on its own little island once they realized the performance penalty of fetching data from half a dozen services for the more complicated operations. Or how, the same way DB abuse by one group could slow down everyone, service abuse of the core parts (e.g. the "user" service) would impact most of the other services. More than a step forward, it felt a lot like a step sideways: continuing to do the same stuff, just in a different way.

My take from it was that teams that are good at split architectures are also usually good at monolith, and vice-versa. I feel for the parent who got stuck in the transition.


> teams that are good at split architectures are also usually good at monolith, and vice-versa.

aka, low-competency engineers will not outperform using better processes or project management.

The way forward, imho, is to up-skill the team (which unfortunately is only possible if the team is small).


Sure, you'll get to m6g.16xlarge; but how many companies actually have OLTP requirements that exceed the limits of single servers on AWS, e.g. u-12tb1.112xlarge or u-24tb1.metal (that's 12-24 TB of memory)? I think these days the issues with high availability, cost/autoscaling/commitment, "tragedy of the commons", bureaucracy, and inter-team boundaries are much more likely to be the drawback than lack of raw power.


You do not need that many database engineers; it's a myth. Facebook has 2 dedicated database engineers managing it. I work at the United Nations; there is only 1 dedicated database developer in a 1000+ person team.

If you have a well-designed database system, you do not need that many database engineers.


I do not disagree at all that what you are describing can happen. What I'm not understanding is why they're failing at multi-year attempts to fix this.

Even in your scenario you could identify schemas and tables that can be separated and moved into a different database or at maturity into a more scalable NoSQL variety.

Generally once you get to the point that is being described that means you have a very strong sense of the queries you are making. Once you have that, it's not strictly necessary to even use an RDBMS, or at the very least, a single database server.


> Even in your scenario you could identify schemas and tables that can be separated and moved into a different database or at maturity into a more scalable NoSQL variety.

How? There's nothing tracking or reporting that (unless database management instrumentation has improved a lot recently), SQL queries aren't versioned or typechecked. Usually what happens is you move a table out and it seems fine, and then at the end of the month it turns out the billing job script was joining on that table and now your invoices aren't getting sent out.

> Generally once you get to the point that is being described that means you have a very strong sense of the queries you are making.

No, just the opposite; you have zillions of queries being run from all over the place and no idea what they all are, because you've taught everyone that everything's in this one big database and they can just query for whatever it is they need.


Every database I know of can generate query logs. Why not just log every query and do some statistical analysis on it?


> Every database I know of can generate query logs.

Every one I know of warns that it comes with significant performance implications and isn't intended to be used in production.

> Why not just log every query and do some statistical analysis on it?

And then what? If you know this table only gets queried a few times a month, what does that actually tell you that you can use?


It's resource intensive - but so is being in a giant tarpit/morass. Adding client query logging is cheaper and can be distributed. I just double-checked, and neither Oracle nor Postgres warns 'never use it in production'.
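
In Postgres, for instance, the standard knobs let you log by duration threshold rather than logging everything, which keeps the overhead manageable (the threshold here is illustrative):

```
# postgresql.conf -- duration-based statement logging, a sketch
log_min_duration_statement = 250    # log only statements taking longer than 250 ms
log_line_prefix = '%m [%p] user=%u db=%d app=%a '   # timestamp, pid, who ran it, from where
log_statement = 'none'              # rely on the duration threshold, not blanket logging
```

The log_line_prefix escapes (%u for the user, %a for the application_name) are what give you the "by whom" part.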

And if you have logs, you can see what actually gets queried, and by whom, and what doesn't get queried, and by whom.

That will also potentially let you start constructing views and moving actual underlying tables out of the way to where you can control them.

Which can let you untangle the giant spaghetti mess you're in.

But then, that's just me having actually done that a few times. You're welcome to complain about how it's actually unsolvable and will never get better, of course.


> It's resource intensive - but so is being in a giant tarpit/morass.

Agreed, but it means it's not really a viable option for digging yourself out of that hole if you're already in it. Most of the time if you're desperately trying to split up your database it's because you're already hitting performance issues.

> Adding client query logging is cheaper and can be distributed.

Right, but that only works if you've got a good handle on what all your clients are. If you've got a random critical script that you don't know about, client logging isn't going to catch that one's queries.

> But then, that's just me having actually done that a few times. You're welcome to complain about how it's actually unsolvable and will never get better, of course.

I've done it a few times too, it's always been a shitshow. Query logging is a useful tool to have in some cases but it's often not an option, and even when it is not a quick or easy fix. You're far better off not getting into that situation in the first place, by enforcing proper datastore ownership and scalable data models from the start, or at least from well before you start hitting the performance limits of your datastores.


If you are in the hole where you really cannot add load to your database server but want to log the queries, there is a technique called zero impact monitoring where you literally mirror the network traffic going to your database server, and use a separate server to reconstruct it into query logs. These logs identify the queries that are being run, and critically, who/what is running them.

A past workplace of mine has used this approach.


Excellent pointer! Do you happen to know of anything which can do this out of the box?


I've seen this too. I'd guess 50% of the query load was jobs that got deprecated in the next quarterly baseline.

It felt like a system was needed to allocate query resources to teams, some kind of tradeable, scarce tokens maybe, to incentivise more care and consciousness of the resource among the many users.

What we did was have a few levels of priority managed by a central org. It resulted in a lot of churn and hectares of indiscriminately killed query jobs every week, many that had business importance mixed in with the zombies.


Let me tell you about my new Blockchain CorpoCoin... (/s)


Do you think it would make it better to have the tables hidden behind an API of views and stored procedures? Perhaps a small team of engineers maintaining that API would be able to communicate effectively enough to avoid this "tragedy of the commons" and balance the performance (and security!) needs of various clients?


This is so painfully, painfully true. I’ve seen it borne out personally at three different companies so far. Premature splitting is bad too, but I think the “just use one Postgres for everything” crowd really underestimates how bad it gets in practice at scale.


Maybe it’s all a matter of perspective? I’ve seen the ‘split things everywhere’ thing go wrong a lot more times than the ‘one big database’ thing. So I prefer the latter, but I imagine that may be different for other people.

Ultimately I think it’s mostly up to the quality of the team, not the technical choice.


I’ve seen splitting things go bad too. But less often and to a lesser degree of pain than mono dbs - a bad split is much easier to undo than monodb spaghetti.

However I think it’s “thou shalt” rules like this blog post that force useless arguments. The reality is it depends, and you should be using your judgement: use the simplest thing (monodb) until it doesn’t work for you, then pursue splitting (or whatever). Just be aware of your problem domain and your likely max scale, and design for splitting the db sooner than you think, before you’re stuck in the mud.

And if you’re building something new in an already-at-scale company you should perhaps be starting with something like dynamo if it fits your usecase.


I did in the original comment.

We have over 200 monolith applications, each accessing overlapping schemas of data with their own sets of stored procedures, views, and direct queries. To migrate a portion of that data out into its own database generally requires refactoring a large subset of those 200 monolith apps to no longer get all the data in one query, but rather a portion of the data with the query and the rest of the data from a new service.

Sharding the data is equally difficult, because even the writing of the data is spread from one side of the system to the other. We've tried to do that through an elaborate system of views but, as you can imagine, those are too slow and cover too much data for some critical applications, so they end up breaking the shard. That, in and of itself, introduces additional complexity with the evolution of the products.

Couple that with the fact that, even with these solutions, a large portion of the organization is not on board (why can't we JUST buy more hardware? JUST get bigger databases?), and these efforts end up being sabotaged from the beginning because not everyone thinks it's a good idea. (And if you think you are different, I suggest just looking at the rest of the comments here on HN that provide 20 different solutions to the problem, some of which are "why can't you just buy more hardware?")

But, to add to all of this, we also just have organizational deficiencies that have really harmed these efforts. Including things like a bunch of random scripts checked into who knows where that are apparently mission critical and reading/writing across the entire database. Generally for things like "the application isn't doing the right thing, so this cron job run every Wednesday will go in and fix things up". Quite literally thousands of those scripts have been written.

This isn't to say we've been 100% unsuccessful at splitting some of the data onto its own server. But it's a long and hard slog.


>Including things like a bunch of random scripts checked into who knows where that are apparently mission critical and reading/writing across the entire database.

This hits pretty hard right now, after reading this whole discussion.

When there is a galaxy with countless star systems of data, it's good to have local owners of the data who, as domain leaders, publish it for others to use, and to build a system that makes subscriptions and access grants frictionless.


100% agreed, and that's what I've been trying to promote within the company. It's simply hard to get the momentum up to really effect this change. Nobody likes the idea that things have to get a little slower (because you add a new layer between the applications and the data) before they can get faster.


fwiw hacking hundreds of apps literally making them worse by fragmenting their source of record doesn't sound like a good plan. it's no surprise you have saboteurs, your company probably wants to survive and your plan is to shatter its brain.

outside view: you should be trying to debottleneck your sql server if that's the plan the whole org can get behind. when they all want you to succeed you'll find a way.


> fwiw hacking hundreds of apps literally making them worse by fragmenting their source of record doesn't sound like a good plan. it's no surprise you have saboteurs, your company probably wants to survive and your plan is to shatter its brain.

The brain is already shattered. This wouldn't "literally make them worse"; instead it would say "now, instead of everyone in the world hitting the users table directly and adding or removing data from that table, we have one service in charge of managing users".

Far too often we have queries like

    SELECT b.*, u.username FROM Bar b JOIN users u ON b.userId = u.id

And why is this query doing that? To get a human readable username that isn't needed but at one point years ago made it nicer to debug the application.

> you should be trying to debottleneck your sql server if that's the plan the whole org can get behind.

Did you read my post? We absolutely HAVE been working, for years now, at "debottlenecking our sql server". We have a fairly large team of DBAs (about 30) whose whole job is "debottlenecking our sql server". What I'm saying is that we are, and have been, at the edge (and more often than not over the edge) of tipping over. We CAN'T buy our way out of this with new hardware because we already have the best available hardware. We already have read-only replicas. We have already tried (and failed at) sharding the data.

The problem is data doesn't have stewards. As a result, we've spent years developing application code where nobody got in the way of saying "Maybe you shouldn't join these two domains together? Maybe there's another way to do this?"


assuming some beastly server with terabytes of ram, hundreds of fast cores, and an exotic io subsystem capable of ridiculous amounts of low latency iops, I'd guess the perf issue with that example is not sql server struggling with load but rather lock contention from the users table being heavily updated. unless that beast of a server is sitting pegged with a hardware bottleneck it can probably be debottlenecked by vertically partitioning the users table. ie: split the table into two (or more) to isolate the columns that change frequently from the ones that don't, replace the table with a view that joins it back together w/instead-of triggers conditionally updating the appropriate tables, etc. etc. then when this happens:

    SELECT b.*, u.username FROM Bar b JOIN users u ON b.userId = u.id
sql server sees that you're only selecting username from the users view and eliminates the joins for the more contentious tables and breathes easy peasy
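
For the curious, a rough T-SQL sketch of that vertical partitioning (table and column names invented, plumbing like the insert/delete triggers omitted):

```sql
-- Split the hot, frequently-updated columns away from the stable ones.
CREATE TABLE users_stable (id int PRIMARY KEY, username nvarchar(100) NOT NULL);
CREATE TABLE users_hot    (id int PRIMARY KEY, last_seen datetime2, login_count int);
GO

-- A view stitches the original shape back together for existing queries.
CREATE VIEW users AS
SELECT s.id, s.username, h.last_seen, h.login_count
FROM dbo.users_stable s
JOIN dbo.users_hot    h ON h.id = s.id;
GO

-- An INSTEAD OF trigger routes updates to whichever table owns each column.
CREATE TRIGGER users_instead_of_update ON users INSTEAD OF UPDATE AS
BEGIN
    UPDATE s SET s.username = i.username
    FROM dbo.users_stable s JOIN inserted i ON i.id = s.id;

    UPDATE h SET h.last_seen = i.last_seen, h.login_count = i.login_count
    FROM dbo.users_hot h JOIN inserted i ON i.id = h.id;
END;
```

Queries that only touch username then avoid lock contention on users_hot, provided the optimizer can prove that side of the join is redundant (e.g. via a trusted foreign key between the two tables).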

> And why is this query doing that? To get a human readable username that isn't needed but at one point years ago made it nicer to debug the application.

imo users should be able to do this and whatever else they want and it's not even unreasonable to want usernames for debugging purposes forever. I'd expect the db team to support the requirements of the apps teams and wouldn't want to have to get data from different sources


> assuming some beastly server with terabytes of ram, hundreds of fast cores, and an exotic io subsystem capable of ridiculous amounts of low latency iops, I'd guess the perf issue with that example is not sql server struggling with load but rather lock contention from the users table being heavily updated.

You'd guess wrong. The example above is not the only query our server runs; it's an example of the kind of queries that can be run. We have a VERY complex relationship graph, far more than what you'll typically find. This is finance, after all.

I used the user example for something relatable without getting into the weeds of the domain.

We are particularly read-heavy and write-light. The issue is quite literally that we have too many applications doing too many reads. We are literally running into problems where our tempdb can't keep up with the requests, because there are too many of them doing too complex of work.

You are assuming we can just partition a table here or there and everything will work swimmingly; that's simply not the case. Our tables do not partition so easily. (Perhaps our users table would, but again, that was for illustrative purposes and is by no means the most complex example.)

Do you think that such a simple solution hasn't been explored by a team of 50 DBAs? Or that this sort of obvious problem wouldn't have been immediately fixed?


> Do you think that such a simple solution hasn't been explored by a team of 50 DBAs? Or that this sort of obvious problem wouldn't have been immediately fixed?

based on what you've shared, yeah. I also wouldn't expect a million DBAs to replace a single DBE


Just trust me, I can refute anything with vague explanations.



