It is unfortunate we do not strive to make "work" as enjoyable and as addictive as some games, while also allowing employees to share significantly in the wealth they help generate. It feels like many video games show us how much potential is hidden away inside people.
A series of charts plotted by Wolfram Alpha (for US data) shows that debt seems to go up, steadily, forever. [1] The steady rise of debt seems to be a phenomenon independent of specific events. Perhaps this is simply a long-term effect of interdependency.
I have seen many people fail to realize (and sometimes to understand at all) that taking measurements of large numbers of people is always fraught with errors of all sorts. No matter how hard you try, it is like the universe always pushes back to make sure the data are imperfect. Sometimes those imperfections are acceptably small, and other times the imperfections are uncomfortably large.
In my opinion, anyone involved in such large-scale data collection and analysis should acknowledge the inevitability of error and provide disclaimers about potential sources of error.
Yes, but if the error consistently results in you getting paid more than you should have, it's not really an error, is it? No, we would in fact call that fraud.
+1, the issue here is not that errors can occur but how a company responds to them. It’s tough to be sympathetic to leaders in a company who were aware of the errors and then took no action for years to remedy them, particularly when the error resulted in overcharging customers.
Apple has kind of herded me into zsh for work in the terminal. So far it has been pleasant, with interesting plugins, but its periodic updates are a little annoying. Despite remaining in zsh while in a terminal, I still find myself writing only bash scripts when a script is needed.
If portability is important, I'd recommend Bourne shell instead.
Sometimes there's a justification for writing shell scripts in bash. E.g. if you only care about Linux forever.
Frequently though, a simple script will unexpectedly grow, and start to require non-trivial data structures, etc. If there's no time for a rewrite in a more appropriate language (Python/Ruby/Perl), then moving to bash can be a compromise.
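As a concrete (and hedged) illustration of the kind of "non-trivial data structure" that pushes a growing script out of POSIX sh territory: bash 4+ has associative arrays, which plain Bourne/POSIX sh lacks entirely. The variable names below are made up for the example.

```shell
#!/usr/bin/env bash
# declare -A creates an associative array, a bash 4+ feature with no
# POSIX sh equivalent; needing one is a common reason a script
# migrates from sh to bash (or to Python/Ruby/Perl).
declare -A http_counts=([ok]=0 [error]=0)

# Tally some hypothetical HTTP status codes into the map.
for status in 200 200 500 404 200; do
  if [ "$status" -ge 400 ]; then
    http_counts[error]=$(( ${http_counts[error]} + 1 ))
  else
    http_counts[ok]=$(( ${http_counts[ok]} + 1 ))
  fi
done

echo "ok=${http_counts[ok]} error=${http_counts[error]}"
```

Once a script needs more than one or two of these maps, that is usually the moment to consider the rewrite in a real scripting language instead.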
For reference, Judea Pearl won the ACM Turing award in 2011 "[f]or fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning."[1]
This post was quite a surprise. I have an old copy of Paradise Lost (illustrated by Gustave Dore) on a bookshelf next to me. (Digitized version here: http://digital.auraria.edu/AA00006467/00001) The cover of my copy is green instead of gold.
Definitely a good one: "Experiences with running PostgreSQL on Kubernetes - Gravitational - blog post 2018"
For anyone who thinks running a database in a container environment is a neat idea, think again. I am guilty of using containers for temporary test databases, but the thought of running production databases in containers sends shivers down my spine.
I don’t know why anybody would presume that a technology focused on ephemeral resource provisioning would be a suitable place to put your persistence layer...
That said, I don’t think it’s a sin at all to use it for testing. My default local dev setup is to use a Postgres container. But persistence is very much not required in that situation.
> I don’t know why anybody would presume that a technology focused on ephemeral resource provisioning would be a suitable place to put your persistence layer...
Kubernetes does more than that, and features like PVCs and StatefulSets are designed for exactly this use case. If you look at the HN comments[1], the top comment mentions this, and notes that the article waves it away for reasons not related to k8s but to "well, if the underlying storage is slow or not durable, then…" Well, yeah: then it doesn't matter whether you're running k8s in the middle of it or not.
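For what it's worth, here is a minimal sketch of what that combination looks like (all names, sizes, and the plaintext password are made-up placeholders for illustration): a StatefulSet whose volumeClaimTemplates give each replica its own PVC, so the volume and its data survive pod rescheduling.

```yaml
# Illustrative only. Each StatefulSet replica gets a dedicated PVC
# stamped out from volumeClaimTemplates; deleting or rescheduling the
# pod does not delete the claim or the data behind it.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: pg                # hypothetical name
spec:
  serviceName: pg
  replicas: 1
  selector:
    matchLabels:
      app: pg
  template:
    metadata:
      labels:
        app: pg
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: example    # demo only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Whether this is durable in practice still comes down to the storage class underneath the PVC, which is exactly the point being argued above.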
Kubernetes was not initially designed with persistence in mind. If it were, the etcd cluster backing the master nodes would also run in containers.
There are (good) attempts at shoving it in, but the general advice I would give is that if you care about your data you should give it every possible chance not to be corrupted or disrupted, and that means keeping the number of abstractions and indirections low.
You can make it work, but why would you want to? Databases aren’t generally something that benefits from container orchestration. They’re not usually highly dynamic, horizontally scaling systems; generally you’d optimize that part of your system to maximize stability and consistency. For most typical use cases I can’t see the intuitive leap required to decide that all that additional complexity is worth it to replicate what you’d get from a few VPSes. Unless you have a specialized use case, to me it just seems like very obviously the wrong tool for the job.
Not that I advocate running your own Postgres setup in your own cluster instead of just renting a managed version, but I’ve run a few databases on K8s and found it pretty fine: useful for when your hosting provider doesn’t support the database you want to run (Clickhouse managed AWS service when?) or for application-specific KV stores. EBS volumes and PVCs are great, performance is solid, Kubernetes takes care of the networking and will resurrect the database if the worst happens and it does go down.
I probably could run those things on their own instances, but then I’d have to go through the hassle of networking, failover/recreation, deployments, etc., and for the vast majority of cases that’s far more effort than deploying a StatefulSet.
Now! Altinity runs Altinity.Cloud in AWS these days. Feel free to drop by.
There are also services in other clouds. Yandex runs one in their cloud and there are at least 3 in China. ClickHouse has a big and active community of providers.
I strongly support this advice having felt the pain.
Inherited a setup using a semi-well-known vendor's Patroni/Postgres HA operator implementation on OpenShift, and it was extremely fragile in the face of any kind of network latency/downtime (due to its strong tie to the master API) or worker node outage/drainage/maintenance. These events would mean hours of recovery work hacking around the operator.
It was not my decision to place Postgres on OpenShift, and I will strongly discourage anyone planning to do this in production (or even testing). Please do not do it if you value your time and sanity. Spin up a replica set on VMs using one of the already production-ready and battle-hardened solutions, or, if in the cloud, use a managed PostgreSQL service.
For me, personally -- I cannot think of a sufficient justification to put a production database in a container. A good database server is designed for performance, reliability, scalability, security, etc., without containers. Putting a production database inside a container introduces a world of unnecessary edge cases and complexity.
Depends on requirements. Someone needs one big, highly optimized DB instance. Someone else needs a high-availability cluster of 3+ instances. Having a cluster of containers brings a performance penalty, but if your app is read-heavy, you can read from all instances and multiply read throughput...
Thanks for your feedback. I might run the DB on the host then, and just use containers for the app server. I'm not at the scale to warrant a separate host for the DB.
I mostly want to point out that, when discussing ancient history, objective data are only one facet contributing to the interpretation of past events. Most people prefer simple, straightforward answers or interpretations (i.e., the "correct" answer), but the world and the greater universe are filled with complexities and subtleties that should not be overlooked when seeking understanding.
I still cringe when I think back to some C projects from long ago, where the dependency chains of headers and libraries were unfathomably byzantine.
On a related note, lately I have been looking into some biochemistry topics and their relation to computer science. It seems that cyclic dependencies are possibly a requirement for life, which makes faithful simulations of biochemical processes an interesting challenge.
I’m a compiler nerd at the moment. I have an inherent trust of things that bootstrap themselves, as if it means they are conceptually more pure. I’m not sure whether that is 100% well founded, but it sounds similar to some of these observations. :)
I think self-hosting is just a little over-rated for languages. It's good that Rust and C++ are self-hosting; you'd expect those to be languages you can write a compiler in.
But Lua and JavaScript satisfy their own niche just fine without having popular runtimes written in themselves.
Only if you don't consider time, or optional dependencies. GCC 10 _can_ be built with GCC 10, but it was probably built with GCC 9 the first time around.
If you can bootstrap it, and you can, then it's not really a hard, unbreakable cycle, right? It's just an option that's quicker than starting from tcc or whatever every time.
That is an interesting observation. I wonder whether the cycling in chemistry is more about information flow: like (negative?) feedback, it helps in making a stable system.