
I love PostgreSQL.

But for a personal project I went with MongoDB, because my data set was a perfect match for mongo's design.

Now I love Mongo too. I'm amazed at how easy it has been to maintain 100% uptime on commodity hardware (one server is literally in a room in my apartment) through all the random server downtimes, upgrades, migrations, etc.

And now I have more ideas for some personal projects, and they would go very well with Postgres, but I really miss Mongo's replica sets.

If Postgres had something similar to MongoDB's replica sets, that would be amazing.



If mongo impresses you, you should check out rethinkdb: https://github.com/rethinkdb/rethinkdb


no official java driver :-/


Do take a look at whether you can leverage the jsonb data type in Postgres - you get 80% of the power of Mongo and all the advantages of Postgres.

https://www.compose.io/articles/is-postgresql-your-next-json...
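To give a flavor of that "80% of the power": Postgres's jsonb comes with query operators such as `@>` (containment), which checks whether one document contains another, much like a Mongo query-by-example. A rough Python model of the containment semantics (my own sketch for illustration, not the real implementation):

```python
def jsonb_contains(doc, query):
    """Rough model of Postgres's jsonb containment operator (doc @> query)."""
    if isinstance(query, dict):
        # Every key in the query must exist in doc with a contained value.
        return (isinstance(doc, dict) and
                all(k in doc and jsonb_contains(doc[k], v)
                    for k, v in query.items()))
    if isinstance(query, list):
        # Every element of the query array must match some element of doc.
        return (isinstance(doc, list) and
                all(any(jsonb_contains(d, q) for d in doc) for q in query))
    # Scalars must match exactly.
    return doc == query

row = {"name": "widget", "tags": ["pg", "jsonb"], "meta": {"price": 9}}
print(jsonb_contains(row, {"tags": ["pg"]}))         # True
print(jsonb_contains(row, {"meta": {"price": 10}}))  # False
```

In actual SQL this would be something like `WHERE data @> '{"tags": ["pg"]}'`, and it can be served by a GIN index on the jsonb column.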


Maybe I wasn't clear, I love everything about PostgreSQL.

But I would love it even more if it were easier to distribute (see replica sets in mongodb, with auto-failover and other goodies).


Actually, the article you posted makes the case that Postgres is not a good JSON database, because it cannot modify JSON documents in place.


PG 9.5 will improve this situation somewhat, e.g. with built-in functions like jsonb_set(), and by overloading the '-' operator for jsonb values:

http://www.postgresql.org/docs/devel/static/functions-json.h...

That said, if you're doing a lot of mutation of large JSON documents stored in a single Postgres row value, the storage/concurrency control behavior still won't be ideal.
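For the curious, `jsonb_set(target, path, new_value)` replaces the value at a path inside a document, and `jsonb_value - 'key'` deletes a top-level key. Modeled on plain Python dicts (a sketch of the semantics only, with the path given as a list of keys rather than Postgres's text array):

```python
import copy

def jsonb_set(target, path, new_value):
    """Model of PG 9.5's jsonb_set(): replace the value at `path` (list of keys)."""
    # Like Postgres, produce a new document rather than mutating in place -
    # which is exactly the copy overhead discussed in this thread.
    result = copy.deepcopy(target)
    node = result
    for key in path[:-1]:
        node = node[key]
    node[path[-1]] = new_value
    return result

def jsonb_minus(target, key):
    """Model of the jsonb '-' operator: delete a top-level key."""
    return {k: v for k, v in target.items() if k != key}

doc = {"user": {"name": "ana", "age": 30}, "active": True}
print(jsonb_set(doc, ["user", "age"], 31))  # age updated; original untouched
print(jsonb_minus(doc, "active"))           # 'active' key removed
```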


Actually, if you need to update large JSON documents efficiently, probably the only really efficient technology is ToroDB: http://www.8kdata.com/torodb/


I would love to hear more - this is the first I'm hearing of Toro - it never comes up in NoSQL talks or conversations. Any decent success stories?


Actually, without benchmark results, the claim that PostgreSQL has performance problems because of copy-on-write updates of jsonb is groundless.


I'm genuinely interested in which dataset/use case is served better by MongoDB than by PostgreSQL.


I didn't mean that it was serviced better by MongoDB rather than PostgreSQL.

I meant that it was a valid/recommended use case for MongoDB since I didn't really have any relational data (every document inserted was pretty much standalone).

I've mentioned it because I've seen plenty of posts around HN where "mongo sucks" because someone tried to fit a round peg into a square hole.

The extra goodies from MongoDB helped too.

Like automatic failover: I can literally go and unplug a node and everything will still be fine.

Having tail -f functionality in the db was also pretty handy (for my project).
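(The "tail -f" feature here is MongoDB's tailable cursor over a capped collection, which keeps only the newest documents, like a fixed-size ring buffer, and lets a client block waiting for new inserts. A loose Python analogue of the capped part, using a bounded deque rather than Mongo's byte-size cap:)

```python
from collections import deque

# A capped collection retains only the most recent entries; a tailable
# cursor then streams new inserts as they arrive (not modeled here).
capped = deque(maxlen=3)
for event in ["boot", "login", "query", "logout"]:
    capped.append(event)  # oldest entry is silently evicted once full

print(list(capped))  # ['login', 'query', 'logout']
```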

The sysadmin in me was happy too; it's not every day you see software that lets you upgrade between major versions / storage engines without downtime (when using replica sets, not standalone, of course).
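(The automatic failover mentioned above comes down to majority voting: a replica set member can be elected primary only while a strict majority of the voting members is reachable, which is why a 3-node set survives unplugging one node. A toy Python check of when a set stays writable, my own sketch rather than MongoDB code:)

```python
def stays_writable(total_members, failed_members):
    """A replica set can elect a primary only while a strict majority survives."""
    surviving = total_members - failed_members
    return surviving > total_members // 2

# The classic 3-member set tolerates losing one node:
print(stays_writable(3, 1))  # True
print(stays_writable(3, 2))  # False: the survivor can't form a majority
```

This is also why replica sets use odd member counts: a 4th node adds no extra failure tolerance over 3, since both can only lose one member.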


If it's running on a desktop in your apartment, you don't need replica sets.


Why not?

It is a desktop (as in, it has normal desktop components, although it's used headless like a server), but it's got plenty of RAM, a good CPU, RAID1 on WD Re hard drives, a 100 Mbit connection, and it's hooked up to a UPS.

In the last year it has been more stable than some of the cheaper hosting I was using.

Besides, it's not running the whole replica set, just one member (out of 3).


Because it can and will fail in odd ways, and that has one of two outcomes:

1) It doesn't matter, which means the time, energy and money spent setting it up was squandered when it could have been spent on marketing or product dev.

or

2) It does matter, which means now you have to blow even more time, energy and money recovering it and standing it back up. Hope you've rehearsed your DR plan!

I'm not trying to preach, I apologize if it's coming off that way. But this highly resembles tinkering, and tinkering doesn't generally pay the bills. Usually the opposite.


> Because it can and will fail in odd ways, and that has one of two outcomes:

Can't that happen anywhere, regardless of the type of hardware?

> But this highly resembles tinkering

Guilty pleasure.

> [...], and tinkering doesn't generally pay the bills.

Thankfully, I was aware that it most likely wouldn't pay the bills, and considering I've made 35€ from it in the past year and a half, I guess I was right :-)

I made it for myself (and opened it up to the rest of the world in case anyone needs it), but I'm my own most demanding customer; that's probably why I've expected nothing less than 100% uptime since I launched it.

And I've managed to do that, without breaking the bank.

I don't know how my tone sounds (I'm not a native speaker); I'm just trying to emphasize that with the right tools, you don't need a shiny cloud for really good uptime.



