Hacker News | krallin's comments

(engineer working on Buck2 here)

Buck2 is actually used internally with Git repositories, so using Sapling / hg is definitely not a requirement


I love SQLite (and use it in a large number of places in our application stack!), but I feel that it suffers from a case of bad defaults. To name a few:

- Foreign key checking (and cascading deletion for that matter) are turned off by default. You need to enable them using `PRAGMA foreign_keys = ON;`.

- There are practically no downsides (and a number of upsides) to using the WAL journalling mode, at least for "use a local database and store it on the disk" use cases. The main upside is that, without it, reads will conflict with writes (which is a problem in a multi-threaded environment). Unfortunately, that's another feature you must remember to enable: `PRAGMA journal_mode = WAL;` (for obvious reasons, this one "stays enabled" after you turn it on).

- Full auto-vacuum cannot be enabled after you start writing to the database unless incremental auto-vacuum was already enabled. If you're unsure, it's a good idea to enable incremental auto-vacuum to keep that option open. But, here again, that's not the default: you need `PRAGMA auto_vacuum = INCREMENTAL;`.

This tends to be especially problematic if you have to perform one or more ad-hoc queries against an app's database using the SQLite command line, and forget to apply the relevant PRAGMAs! I wish there were a way to attach "default" PRAGMAs to a SQLite database file to avoid this.
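One way to cope with this for ad-hoc sessions is to apply the PRAGMAs up front in a heredoc. A sketch, assuming the `sqlite3` CLI is installed (`app.db` is a hypothetical database file):

```shell
sqlite3 app.db <<'SQL'
PRAGMA foreign_keys = ON;          -- per-connection: must be re-enabled every time
PRAGMA journal_mode = WAL;         -- persistent: prints "wal" and sticks to the file
PRAGMA auto_vacuum = INCREMENTAL;  -- on an existing file, needs a VACUUM to take effect
VACUUM;
-- ... ad-hoc queries here ...
SQL
```

The sqlite3 shell also reads `~/.sqliterc` at startup, so per-connection settings like `foreign_keys` can go there; that's per-user rather than per-database, but it helps against forgetting.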

(note: some of these defaults are configurable when compiling SQLite from scratch, but if you're dynamically linking against an OS-provided copy of the library, you can't really do that).


The disadvantage of enabling WAL is that your database just turned into multiple files (the main file plus the -wal and -shm files), and you have to recover the database to read it. Recovery is fast, but it requires write access.


AWS provides you with a number of DNS records for each NLB:

- One record per zone (which maps to the EIP for that zone)

- A top-level record that includes all active zones (these are all zones you have registered targets in, IIRC)

The latter record is health checked, so if an AZ goes down, it'll stop advertising it automatically (there will be latency of course, so you'll have some clients connecting to a dead IP, but if we're talking unplanned AZ failure, that's sort of expected).

That said, this does mean you probably shouldn't advertise the IPs directly if you can avoid it, yes.

(disclaimer: we evaluated NLB during their beta, so some of this information might be slightly outdated / inaccurate)


Won't DNS failover be painfully slow? Some clients ignore small TTL values. I've seen DNS updates taking several hours to propagate.

I thought one of the advantages of multiple zones is that zonal failover can happen with "zero" downtime (this seems to be the case with Amazon RDS).


The default answer includes multiple A records, so if clients can't reach one of the IPs, they try another. There's no need for anything to propagate for that to kick in, it's just ordinary client retry behavior.

We do also withdraw an IP from DNS if it fails; when we measure it, we see that over 99% of clients and resolvers do honor TTLs and the change is effected very quickly. We've been using this same process for www.amazon.com for a long time.

Contrast to an alternative like BGP anycast, where it can take minutes for an update to propagate as BGP peers share it with each other in sequence.


RDS failover still uses DNS and you still need to be aware of client TTLs:

"Because the underlying IP address of a DB instance can change after a failover, caching the DNS data for an extended time can lead to connection failures if your application tries to connect to an IP address that no longer is in service."

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_B...


Author here - let me know if you have any questions!

(whether that's about using Supercronic... or making regular cron work in containers)


(Note: the following is only applicable to Linux)

The default for overcommitting on Linux is heuristic; it doesn't always succeed: if you try to allocate several exabytes of RAM, the allocation will definitely fail (in fact, trying to allocate e.g. 2GB of RAM when you only have 1GB free will usually fail just the same).

There is an option for "always overcommit" (incidentally, the one Redis recommends you use), in which case allocation will always succeed provided the kernel can represent what you're trying to allocate (which is what you're describing), but it's definitely not the default.

Reference: https://www.kernel.org/doc/Documentation/vm/overcommit-accou...
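For reference, the current policy can be inspected (and, as root, changed) via procfs/sysctl. A quick sketch, Linux only:

```shell
# 0 = heuristic overcommit (the default), 1 = always overcommit,
# 2 = strict accounting (never overcommit beyond the configured ratio)
cat /proc/sys/vm/overcommit_memory

# Switching to "always overcommit" (the mode Redis recommends) requires root:
#   sysctl -w vm.overcommit_memory=1
```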


malloc() should fail even with always-overcommit enabled if you try to malloc more memory than there is virtual address space, although that needs testing to confirm. The alternative to failing would be hanging inside the kernel, which would be a DoS vector.


Yes, you're entirely correct; that's actually what I meant by "provided the Kernel can represent [it]" (albeit perhaps not very clearly!).

Cheers,


There is also the case of the sum of the allocations of several malloc operations exceeding that threshold; a single malloc operation that large is just a special case of that. That is what I meant, but reading what I wrote made me realize that I was not clear about it.


Thanks for clearing that up! :)


Note: if you're using Ubuntu, there is a semi-official PPA that has a non-vulnerable version (2.7.3): https://launchpad.net/~git-core/+archive/ubuntu/ppa


But a fix should come via the normal update channel soon? I'm on wily, should I expect to add this PPA or risk vulnerability?


Ubuntu should announce the fix at https://www.ubuntu.com/usn/ but I can't load the page right now.

(removed DSA link as per advice below)


That Debian advisory is a different, older vulnerability. Looks like they know about it but haven't released anything yet:

https://security-tracker.debian.org/tracker/source-package/g...


Oops, thanks.


https://bugs.launchpad.net/ubuntu/+source/git/+bug/1557787 is the tracking bug for this issue. Seems like it's fixed on xenial but not yet in older releases.


Unfortunately, it looks like the distros have not been particularly diligent about releasing a fix via the security channel (according to the article), so this PPA is a workaround.


This means "install the `requests` package with the `security` extras".

In this particular case, this installs 'pyOpenSSL>=0.13', 'ndg-httpsclient', 'pyasn1'. See: https://github.com/kennethreitz/requests/blob/46184236dc177f...


This principle doesn't necessarily mean functions should never return booleans, though.

Booleans are used in a variety of (popular) Python libraries when checking whether a password is correct (e.g. Django's `check_password` returns False if the password is wrong).


Quick disclaimer: I'm the author of Tini (thanks for the hat tip, by the way!).

Note that for interactive usage, Tini actually hands over the tty (if there is one) to the child, so in that case signals that come "from the TTY" (though in a Docker environment this is an over-simplification) actually bypass Tini and are sent to the child directly. This should include SIGTSTP, though I'm not sure I tested this specifically.

That being said, both tools are probably indeed very similar — after all there is little flexibility in that kind of tool! Process group behavior is probably indeed where they differ the most. : )


While not as common as nounset and errexit, pipefail is a useful option as well (set -o pipefail).

Using pipefail, if any program in a pipeline fails (i.e. exit code != 0), then the exit code for the pipeline will be != 0.

E.g. pipefail can be useful to ensure `curl does-not-exist-aaaaaaa.com | wc -c` doesn't exit with exit code 0!
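A minimal sketch of the difference, using `false` as a stand-in for the failing download:

```shell
# Without pipefail, a pipeline's status is that of its *last* command,
# so the failure of `false` is masked by `wc` succeeding:
false | wc -c
echo "without pipefail: $?"   # prints 0

# With pipefail, any failing stage makes the whole pipeline fail:
set -o pipefail
false | wc -c
echo "with pipefail: $?"      # prints 1
```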


You can set all three of them in a single line. Set up your Bash template with this today:

    set -o nounset -o pipefail -o errexit


I usually shorten this to:

    set -eu -o pipefail


I used to do that, but the long versions are more understandable to other people who will look at the script.

