
I have some wishful thinking ideas on this, but it should be possible to have both, at least in an imaginary, theoretical scenario.

You can have both guaranteed delivery and no downtime if your whole system is so deterministic that anything that normally would result in blocking just will not, cannot happen. In other words it should be a hard real-time system that is formally verified top to bottom, down to the last transistor. Does anyone actually do that? Verify the program and the hardware to prove that it will never run out of memory for logs and such?

Continuing this thought: logs are presumably generated endlessly, so either whoever wants them has to also guarantee that they are processed and disposed of right after being logged... or there is a finite amount of log messages that can be stored (an arbitrary number like 10,000), but the user (of the logs) has to guarantee that they will take the "mail" out of the box sooner than it overflows (at some predictable, deterministic rate). So really that means even if OUR system is mathematically perfect, we're just making the downtime someone else's problem - namely, the consumer of the infinite logs.
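To make the "mailbox" idea concrete, here is a minimal hypothetical sketch (the class name, capacity, and dropped-message counter are all made up for illustration): a fixed-capacity log buffer whose producer never blocks. If the consumer fails to drain it fast enough, overflow shows up as dropped messages - i.e. the broken guarantee becomes the consumer's problem, exactly as described above.

```python
from collections import deque

class LogMailbox:
    """Fixed-capacity log 'mailbox'. The producer never blocks;
    overflow is pushed onto the consumer as dropped messages."""

    def __init__(self, capacity=10_000):  # arbitrary number, as in the comment
        self.capacity = capacity
        self.messages = deque()
        self.dropped = 0  # the downtime we made someone else's problem

    def produce(self, msg):
        if len(self.messages) >= self.capacity:
            self.dropped += 1   # delivery guarantee violated here,
            return False        # but the producer stays non-blocking
        self.messages.append(msg)
        return True

    def consume(self):
        # The consumer must call this at least as fast as produce()
        # on average, or `dropped` grows without bound.
        return self.messages.popleft() if self.messages else None

box = LogMailbox(capacity=3)
for i in range(5):
    box.produce(f"log {i}")
# 5 messages into a capacity-3 box: 2 are dropped
```

The point of the sketch is that the trade-off never disappears; it only moves. Either `produce()` blocks (no "no downtime"), or `dropped` can grow (no "guaranteed delivery").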

That, or we guarantee that the finite resources of our self-contained, verified system will outlast the finite shelf life of the system as a whole (say 5 years, for another arbitrary number).
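The shelf-life variant reduces to back-of-envelope arithmetic: with a verified worst-case log rate and fixed-size records, the storage you must pre-provision is just rate x record size x lifetime. All the numbers below are arbitrary assumptions for illustration.

```python
# Hypothetical budget for a system that must never run out of log
# storage during its finite shelf life. Every figure is an assumed
# worst-case bound, not a measurement.
MSGS_PER_SEC = 100                  # verified upper bound on log rate
BYTES_PER_MSG = 256                 # fixed-size, pre-allocated records
LIFETIME_YEARS = 5                  # finite shelf life from the comment
SECONDS_PER_YEAR = 365 * 24 * 3600  # ignoring leap years

total_bytes = MSGS_PER_SEC * BYTES_PER_MSG * LIFETIME_YEARS * SECONDS_PER_YEAR
print(f"{total_bytes / 1e12:.2f} TB of storage provisioned up front")
```

Under these made-up numbers that comes to roughly 4 TB - feasible, which is why the shelf-life framing is the one escape hatch that doesn't just relocate the downtime.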



From a hardware point of view, this system is unlikely to exist, because a totally deterministic system requires components that never have any reliability issues, ever.

From a software point of view, this system is unlikely to exist either, as it doesn't matter that the cause of your downtime is "something else that isn't our system". As a result, you end up requiring infinite reliable storage to keep your promises.


PACELC says that during a partition you trade availability against consistency, and even without one you trade latency against consistency - so you get blocking, unavailability, or inconsistency.



