- standard multithreading support (<threads.h> header)
- atomic operations (<stdatomic.h> header)
- type-generic macros via the new _Generic keyword (a generic sqrt macro can dispatch to sqrtf, sqrt, or sqrtl depending on argument type)
- Unicode support (char16_t and char32_t types)
- gets function is removed from the standard (use gets_s)
- static assertions
- aligned memory allocation (aligned_alloc)
- anonymous structures and unions
- no-return functions (specify functions that never return)
- exclusive access for opened files ("x" in fopen)
- macros to create complex numbers
- additional way to terminate the program (quick_exit and at_quick_exit)
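For a quick taste, here's a small sketch of my own (assuming a C11 compiler and library) that exercises the static assertions, atomics, and aligned allocation mentioned above; the program itself is purely illustrative:

```c
#include <assert.h>      /* static_assert convenience macro */
#include <stdatomic.h>   /* atomic types and operations */
#include <stdlib.h>      /* aligned_alloc, free */

static_assert(sizeof(void *) >= 4, "expecting at least 32-bit pointers");

int main(void)
{
    atomic_int hits = ATOMIC_VAR_INIT(0);
    atomic_fetch_add(&hits, 1);    /* well-defined even under contention */

    /* aligned_alloc requires size to be a multiple of the alignment */
    double *buf = aligned_alloc(64, 64 * sizeof(double));
    if (buf == NULL)
        return EXIT_FAILURE;
    free(buf);

    return atomic_load(&hits) == 1 ? EXIT_SUCCESS : EXIT_FAILURE;
}
```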
I was curious how type-generic macros can be implemented in C, so I looked up an explanation of how it's done in glibc. It's quite fascinating (despite being an unholy abomination):
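For comparison, C11's new _Generic keyword makes this kind of dispatch almost mechanical. A minimal sketch of my own (my_sqrt is a made-up name, mimicking what <tgmath.h> does for sqrt):

```c
#include <math.h>
#include <stdio.h>

/* Select sqrtf/sqrt/sqrtl at compile time based on the argument type. */
#define my_sqrt(x) _Generic((x), \
    float:       sqrtf,          \
    long double: sqrtl,          \
    default:     sqrt            \
)(x)

int main(void)
{
    printf("%f\n",  my_sqrt(2.0f));   /* calls sqrtf */
    printf("%f\n",  my_sqrt(2.0));    /* calls sqrt  */
    printf("%Lf\n", my_sqrt(2.0L));   /* calls sqrtl */
    return 0;
}
```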
> exclusive access for opened files ("x" in fopen)
Wouldn't this need to be a service provided by the operating system? How can it be part of the standard? Or is it just that the standard prescribes that "x" must work if the operating system allows it?
I know Windows usually defaults to exclusive access, but in practice do any Unix-like systems provide exclusive access for file reads? Just wondering, since I've never requested exclusive read access in my Linux programming.
Yeah, it requires OS support, but it's no different than most of the standard library. Opening files, writing to files, printing to the console, allocating memory, etc. all require OS support, too.
As to how it can be implemented, "man 2 open" on OS X shows O_EXLOCK and O_SHLOCK flags, which come from BSD. Not sure if they're POSIX, but I'd bet Linux has something equivalent, and there's also the flock() function. My guess is that the new "x" option to fopen can be implemented using O_EXLOCK or flock(); a rough sketch is below.
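Something along these lines, perhaps. This is an untested sketch: fopen_wx is a made-up name, O_CREAT|O_EXCL gives the portable fail-if-it-exists behavior, and O_EXLOCK (a BSD extension, not available everywhere) would add an actual lock where it exists:

```c
#include <fcntl.h>   /* open, O_* flags */
#include <stdio.h>   /* FILE, fdopen */

/* Hypothetical helper: open path for writing as fopen(path, "wx") might. */
FILE *fopen_wx(const char *path)
{
    int flags = O_WRONLY | O_CREAT | O_EXCL;  /* fail if the file exists */
#ifdef O_EXLOCK
    flags |= O_EXLOCK;                        /* BSD: take an exclusive lock too */
#endif
    int fd = open(path, flags, 0666);
    if (fd < 0)
        return NULL;
    return fdopen(fd, "w");                   /* wrap the descriptor in a FILE* */
}
```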
The draft specifies that this should work "to the extent that the underlying system supports exclusive access", so basically, you are hosed if you use this :-)
On some systems this will mean mandatory locks, on others advisory locks, on yet others no locking at all. I bet it will depend on the file system, too (NFS mounts, I am looking at you).
Imagine if da Vinci couldn't stop himself from painting the Mona Lisa and decided to add a little of her endearing facial hair there under the lip.
* C89 + Amendment 1 (and, as usual, try really hard to stay out of the way of the Sasquatch-sized footprints being added by the "new and improved" standards.)
And they still haven't adopted the C++ const-correctness rules (for pointers to pointers to ...), which really do help in writing correct code, even code that keeps the flavor of C89.
What? C has those const rules. The type "int const* const*" is valid C.
The differences are primarily that in C you can't use a const symbol in a place where a constant expression is expected (e.g. initializing another const, or a statically sized array).
But you can absolutely do const-correct code in C.
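A quick illustration of that difference (a sketch; the names are made up):

```c
const int N = 8;   /* C: N is const, but NOT a constant expression */
enum { M = 8 };    /* classic C workaround: enum constants are
                      integer constant expressions */

int table[M];      /* OK in both C and C++ */
#if 0
int bad[N];        /* error in C at file scope; perfectly legal C++ */
#endif

int main(void) { return sizeof table / sizeof table[0]; }
```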
It has a set of const-correctness rules; they're much simpler than the C++ rules. While they handle the simple cases perfectly well, they forbid some assignments that really can't get you in trouble and would be quite useful in practice. In particular, while "int const * const *" exists, it can't be assigned from an "int * *", even though that conversion is sound. This makes it hard to write const-correct functions handling 2-D arrays.[1]

kzrdude's link shows how they have to rule out certain constructs that are not obviously wrong. It doesn't explain how the C++ assignment-compatibility rules allow more while still being sound, but that's a bit much to expect from a C FAQ, particularly as the C++ rules are somewhat involved. (I can't find a decent reference that explains the rules well. I found one about six months ago, but didn't save it, as I don't professionally use C++.)
[1]: Preemptive pedanticism: Yes, there are differences between C arrays and pointers; the expression types merely decay in the right contexts. But if I'm using [] to dereference, they're being used as arrays.
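To make the 2-D array complaint concrete, here's a sketch (print_matrix is a made-up name). C only accepts the call with an explicit cast; C++ accepts it as written:

```c
#include <stdio.h>

/* A function that promises not to modify the matrix it is handed. */
static void print_matrix(int const *const *m, int rows, int cols)
{
    for (int r = 0; r < rows; r++) {
        for (int c = 0; c < cols; c++)
            printf("%d ", m[r][c]);
        printf("\n");
    }
}

int main(void)
{
    int a[2] = {1, 2}, b[2] = {3, 4};
    int *rows[2] = {a, b};

    /* C rejects the uncast call: int ** does not implicitly convert to
       int const *const *, even though the conversion is sound (and C++
       permits it). Hence the cast. */
    print_matrix((int const *const *)rows, 2, 2);
    return 0;
}
```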
For personal projects, I'm going to try and use ANSI C89 forever. I know this isn't especially rational, but it's somehow appealing to stick with a small(er) and more historic version.
I think that <stdint.h> is a C99 feature worth using -- otherwise each project has to redefine e.g. myuint32_t over and over again. A colossal waste of effort and namespace pollution.
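That is, instead of every project carrying something like this (myuint32_t is the kind of made-up name I mean):

```c
/* The pre-C99 ritual, re-invented per project: */
typedef unsigned int myuint32_t;   /* ...and hope int is 32 bits here */

/* The C99 way: */
#include <stdint.h>
uint32_t counter = 0;              /* exactly 32 bits, by definition */
```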
Most of the time you don't actually care about the exact size of your integer types (especially for signed integers).
At work, we use C90 with a few C99 extensions which can be disabled through the preprocessor (e.g. inline, restrict), and although we started out with a stdint.h-like thing, it is proving to be less useful.
I'd really _like_ to use C99, but unfortunately Microsoft is putting a damper on my fun. It drives me crazy: for once I was working on a project from scratch and decided to go ahead and enjoy myself using all the niceties of C99 that are available in GCC, and then people come along and complain that they can't compile it in Visual Studio. Even though I don't use Windows, if I want a certain cross-section of the developer community to be interested in my work, MS is effectively able to tell me what to do. I'm forced to use the lowest common denominator, which they have decided is not going to be C99, so I end up having to back-port my work to C89 or tell these people they need to use MinGW. (Which apparently is not an acceptable solution even though it technically works... it's just complicated enough to set up an MSYS bash prompt that it turns people off.)
That's your call and I admire it (I think that restriction is a major cause of Lua's famous portability). I like the C99 struct initialisers and the ability to declare things right next to their use site too much to go back.
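For reference, the two conveniences in question, in one small sketch of my own:

```c
#include <stdio.h>

struct point { int x, y; };

int main(void)
{
    struct point p = { .y = 2, .x = 1 };  /* C99 designated initializer */

    for (int i = 0; i < 3; i++) {         /* C99: declare i at its use site */
        printf("%d: (%d, %d)\n", i, p.x, p.y);
    }
    return 0;
}
```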
Good luck convincing anyone that doing something that's not rational is a good idea. Rationality, pretty much by definition, means choosing what is good and necessary. I don't know why you'd argue from non-rational/mysterious/spiritual reasons when using something as mathematical and scientific as computers. It's a bad combination.
Also, you don't have to use the whole standard. You know there are people who use C++ for hobby projects? That standard is dozens of times larger than the C standard.
I don't know why you would consider the choice of C89 irrational. If you want the widest range of compiler portability, you pretty much have to use C89.
I doubt any programmer practices platform, tool and library choice with 100% rationality, so it's not a black and white issue. If I had in fact been trying to convince people and hadn't brought up irrationality myself, your concern would be welcome :)
I gave up on C99 and its successors when I saw that they introduced _Bool. Actually, I've never understood the point of having a boolean type in any language. [It would kinda make sense if conditionals accepted ONLY boolean arguments, and then having a bool type would make sense only as the return value of a function. Why somebody feels a need to declare a boolean variable is beyond me.]
There are so many different ways to communicate errors through an int in C: for example, 0 for success and negative values for error codes. With _Bool, at least one scheme is obvious from the signature.
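A tiny sketch of the point (the function names are hypothetical):

```c
#include <stdbool.h>
#include <stdio.h>

/* With int, the convention lives in the docs: 0 = success? nonzero?
   negative errno values? The signature alone doesn't say. */
static int legacy_remove(int id) { return id > 0 ? 0 : -1; }

/* With bool there is one obvious reading: true means it worked. */
static bool remove_item(int id) { return id > 0; }

int main(void)
{
    printf("%d %d\n", legacy_remove(5), (int)remove_item(5));
    return 0;
}
```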
No, it's not. Adding a new built-in type that doesn't participate in type checking in any meaningful way just clutters the language. _Bool is like int, just with unspecified size, and slower, since the compiler has to ensure that it contains either 0 or 1.
Very little, if anything, would have been lost by standardizing "typedef unsigned bool;"
Nice. I'd like to find out about all the new stuff from a blog post or book before having to read the standard itself, plus the status of toolchain support for it (GCC seems to support only part of the new standard).
This is often the case with language and file format standards: The working group drafts are free, and the standard is just an approval-without-change of the final draft.