Your write will be fine; that is, it's not as if data from one write will be interspersed with the data from another write. It's just that the order might be wrong, or opening the file multiple times (possibly from multiple processes) could be fun too. The program or computer crashing mid-write can also cause problems. Things like that.
Again, this may not be an issue at all for loads of applications. But I've used a lot of "flat file databases" in the past, and found it's not an issue right up to the point that it is. Overall, I found SQLite simple, fast, and ubiquitous enough to serve as a good fopen() replacement. In some cases it can even be faster!
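For anyone who hasn't tried it, here's roughly what the fopen() replacement looks like in practice. This is just a minimal sketch using the standard sqlite3 C API; the database file, table, and columns are invented for the example:

```c
/* Append-a-record sketch: SQLite instead of a flat file.
 * Build with: cc log.c -lsqlite3
 * The "log" table and its columns are made up for this example. */
#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    if (sqlite3_open("app.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    /* SQLite serializes concurrent writers for you (file locking +
     * journaling), which is exactly the part that's hard with fopen(). */
    char *err = NULL;
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS log(ts INTEGER, msg TEXT);"
        "INSERT INTO log VALUES (strftime('%s','now'), 'hello');",
        NULL, NULL, &err);
    if (err) { fprintf(stderr, "exec failed: %s\n", err); sqlite3_free(err); }
    sqlite3_close(db);
    return 0;
}
```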
> Your write will be fine; that is, it's not as if data from one write will be interspersed with the data from another write.
Are you sure? I thought it could be: if the first write had more data than the size of the kernel/fs-driver buffer, not all of it would be written in one go, and then it could be interrupted by another thread calling write() with a small buffer that does get written in one go.
No, I'm not sure, haha; in my experience it usually works like that, but no doubt there could be edge cases there, too. Another good reason to use SQLite.
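On the partial-write question, one thing that's definitely true is that write() may return a short count, so the classic fix is a retry loop. Note the loop only guarantees the whole buffer eventually goes out; the individual pieces can still interleave with another writer, which is exactly the edge case above. A sketch (write_all is an illustrative helper, not a libc function):

```c
/* write_all: retry loop for short writes. A short count from write()
 * means only part of the buffer made it out; without a loop like this,
 * the remainder is silently dropped. The loop does NOT make the whole
 * buffer atomic with respect to other writers. */
#include <errno.h>
#include <unistd.h>

ssize_t write_all(int fd, const void *buf, size_t len) {
    const char *p = buf;
    size_t left = len;
    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR) continue;  /* interrupted: retry */
            return -1;                     /* real error */
        }
        p += n;
        left -= (size_t)n;
    }
    return (ssize_t)len;
}
```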
Although not a POSIX requirement, in practice on Unix-like systems, file writes are atomic across concurrent writers.
You may be thinking of stdio buffering, where calls to printf() etc. get split into multiple write() calls; in those cases it's possible to get errant interleaved writes.
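To make that concrete: if each record is formatted into one buffer and handed to the kernel in a single write() call, there's no stdio flush to split it mid-record. A sketch (log_line is a made-up helper):

```c
/* Sketch of the stdio-buffering hazard: a buffered printf() may be
 * flushed to the fd in several write() chunks, so two processes
 * appending to the same log can interleave mid-record. Formatting
 * into one buffer and issuing a single write() avoids that split. */
#include <stdio.h>
#include <unistd.h>

void log_line(int fd, const char *msg) {
    char buf[512];
    int n = snprintf(buf, sizeof buf, "[pid %ld] %s\n", (long)getpid(), msg);
    if (n > 0 && (size_t)n < sizeof buf)
        write(fd, buf, (size_t)n);  /* one record, one write() call */
}
```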
The kernel eliminates interleaving for writes smaller than PIPE_BUF (IIRC, Beltalowda, dmoy, and stevenhuang are wrong about this), but the thing that prevents data races with regard to writes here is running the application in Node, which runs your JavaScript single-threaded.
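For reference, the atomicity guarantee POSIX actually spells out in those terms is for pipes and FIFOs: a single write() of at most PIPE_BUF bytes won't be interleaved with other writers on the same pipe. A minimal sketch of a writer that honors that limit (write_record is just an illustrative name):

```c
/* The PIPE_BUF guarantee applies to pipes and FIFOs: a single write()
 * of at most PIPE_BUF bytes is atomic, so records that size or smaller
 * won't interleave. Larger writes can be split by the kernel. */
#include <limits.h>
#include <unistd.h>

int write_record(int pipe_fd, const char *rec, size_t len) {
    if (len > PIPE_BUF)
        return -1;  /* too big for the atomicity guarantee; split or reject */
    return write(pipe_fd, rec, len) == (ssize_t)len ? 0 : -1;
}
```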