
Thanks to them being yet another attack vector, and funny stuff like in this post, VLAs got demoted to optional in C11.

Additionally, Google spent several years paying to clean all VLA occurrences out of the Linux kernel.

https://www.phoronix.com/news/Linux-Kills-The-VLA



>Thanks to them being yet another attack vector, and funny stuff like in this post, VLAs got demoted to optional in C11.

Sadly, the C committee doesn't really understand what was wrong with VLAs, and a sizable group of its members wants to make them mandatory again:

https://www.open-std.org/jtc1/sc22/wg14/www/docs/n2921.pdf ("Does WG14 want to make VLAs fully mandatory in C23")


What's wrong with VLAs is their syntax. They really shouldn't use the same syntax as regular C arrays; otherwise they would be fine, maybe behind a scary enough keyword. They are also more general than alloca: alloca is scoped to the enclosing function, while VLAs are scoped to the innermost block that contains them.


Syntax, no protection against stack corruption,...


You can corrupt the stack without VLAs just fine. What else?


VLAs make it a lot easier to corrupt the stack by accident. Unless you're quite a careful coder, stuff like:

  void f (size_t n)
  {
    char str[n];   /* frame size now depends on n */
leads to a possible exploit where the input is manipulated so n is large, causing a DoS attack (at best) or a full exploit (at worst). I'm not saying that banning VLAs solves every problem, though.

However, the main reason we forbid VLAs in all our code is that thread stacks (particularly on 32-bit or in the kernel) are quite limited in depth, so you want to be careful with stack frame size. VLAs make it harder to compute, and thus check, stack frame sizes at compile time, making the -Wstack-usage warning less effective. Large arrays get allocated on the heap instead.
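The usual fix keeps the frame size fixed and falls back to the heap for anything big; a minimal sketch (the function name and the 256-byte cap are made up for illustration):

```c
#include <stdlib.h>
#include <string.h>

enum { SCRATCH_MAX = 256 };

/* Bounded stack buffer with heap fallback: the frame size no longer
   depends on attacker-controlled n, so -Wstack-usage stays effective. */
int zeroed_sum(size_t n)
{
    char stack_buf[SCRATCH_MAX];               /* fixed size, checkable at compile time */
    char *buf = (n <= SCRATCH_MAX) ? stack_buf : malloc(n);
    if (buf == NULL)
        return -1;                             /* heap allocation failed: error, not a crash */

    memset(buf, 0, n);
    int sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += buf[i];                         /* stand-in for real work */

    if (buf != stack_buf)
        free(buf);
    return sum;
}
```

Large requests cost a malloc, but a hostile n can only make the function fail cleanly, never grow the frame.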


> stuff like ... leads to a possible exploit where the input is manipulated so n is large

The same is true for most recursive calls, should recursion be also banned in programming languages?


When writing secure C? In most cases, absolutely.


That's not really a fair comparison though. Recursion is strictly necessary to implement several algorithms. Even if "banned" from the language, you would have to simulate it using a heap allocated stack or something to do certain things.

None of this applies to VLA arguments.


It's not strictly necessary precisely because all recursions can be "simulated" with a heap allocated stack. And in fact, the "simulated" approach is almost always better, from both a performance and a maintenance perspective.


This is simply nonsense. In cases with highly complex recursive algorithms, "unrecursing" would make the code a completely unmaintainable mess, requiring an immensely complicated state machine, which is why something like Stockfish doesn't do that in its recursive search function even though the code base is extremely optimised. And yes, some algorithms are inherently recursive and don't gain any meaningful performance from the heap stack + state machine approach.


> In cases with highly complex recursive algorithms, "unrecursing" would make the code a completely unmaintainable mess, requiring an immensely complicated state machine

Nothing about it is "immensely complicated". Rather than store your recursion state in a call stack, you can store it in a stack of your own, i.e. a heap-allocated container. The state of a cycle of foo(a,b,c) -> bar(d,e,f) -> baz(g,h) -> foo(...) becomes expressible as an array of tagged union of (a,b,c), (d,e,f) and (g,h).

And there is nothing inherently unmaintainable about this approach. I would hope that it's a commonly taught pattern, but even if it's not, that doesn't make it impossible to understand. Picking good names and writing explanatory comments are 90% of the battle of readability.
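As a minimal sketch of the pattern (names are made up, allocation-failure handling elided): here the recursion is single-function, so the "tagged union" degenerates to a single variant, a node pointer.

```c
#include <stdlib.h>

struct node { int value; struct node *left, *right; };

/* Instead of recursing, push pending subtrees onto our own
   heap-allocated stack. Each element stores only the state we
   actually need: one pointer, not a whole stack frame. */
int tree_sum(const struct node *root)
{
    size_t cap = 64, top = 0;
    const struct node **stk = malloc(cap * sizeof *stk);
    int sum = 0;

    if (root)
        stk[top++] = root;
    while (top > 0) {
        const struct node *n = stk[--top];
        sum += n->value;
        if (top + 2 > cap) {                   /* grow our own "call stack" */
            cap *= 2;
            stk = realloc(stk, cap * sizeof *stk);
        }
        if (n->left)  stk[top++] = n->left;
        if (n->right) stk[top++] = n->right;
    }
    free(stk);
    return sum;
}
```

With more than one mutually recursive function, each element would become a tagged union of the per-function argument tuples, as described above.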

> which is why something like Stockfish doesn't do that in its recursive search function even though the code base is extremely optimised

I can't speak for what Stockfish devs do, as I have no insight into which particular developers made what set of tradeoffs in which parts of their codebase. But it doesn't change the reality that using your own stack container is almost always more performant and more extensible:

1. Your own stack container can take up less space per element than a call stack does per stack frame. A stack frame has to store all local variables, which is wasteful. The elements of your own stack container can store just the state that is necessary.

2. Your own stack container can recurse much farther. In addition to the previous point, call stacks tend to have relatively small memory limits by default, whereas heap allocations do not. In addition, you can employ tricks like serializing your stack container to disk, to save even more memory and allow you to recurse even farther.

3. Your own stack container can be deallocated, shrunk, or garbage collected to free memory for further use, but a call stack typically only grows.

4. Your own stack container can be easily augmented to allow for introspection, which would require brittle hackery with a traditional call stack. This can be an extremely useful property, e.g. for a language parser resolving ambiguities in context sensitive grammars.

> And yes, some algorithms are inherently recursive, and don't gain any meaningfull performance from the heap stack + state machine approach.

Using a heap-allocated stack container is recursion. It is the state machine. The only fundamental implementation difference between the approach I describe and traditional recursion is that the former relies on the programmer using an array of algorithm state, and the latter relies on the runtime using an array of stack frames.


First of all, my original point was only that comparing recursion to VLAs was not a reasonable comparison, not to make some profound point about your favourite way to implement recursion. So, chill dude.

1. This is often irrelevant, depending on your priorities. Taking Stockfish as an example again, memory is not what's at a premium; search nodes are. The search space is inherently intractable. You're never gonna be able to recurse meaningfully deeper by shaving off some stack space, because the breadth of the tree grows exponentially in the number of plies. The only form of optimisation that helps here is caching, plus various heuristics to avoid searching certain nodes at all.

2. You know you can change the stack size, right? This is what Stockfish does for its search threads. No need for fancy dynamic allocation here. Also, have fun watching shit get interesting when you have to realloc() ncpu huge chunks of contiguous memory balls deep into a search, when the engine is in time trouble... Sometimes resizing allocated memory is simply not an option.

3. Again this is not always relevant. Stockfish needs its big stacks all the time. So big whoop.

4. This Stockfish does need to do, which is why it keeps an array of structs(never resized) for that purpose. But Stockfish also needs to make decisions about when things move between the different stacks, which is why it uses recursive calls despite having a stack on the heap also.

5. It's obvious I am aware of this. My original comment literally said that banning recursion would force you to implement recursion manually anyway using a state machine. Like, dude, you're literally repeating my own comment back at me as if I didn't already know it. What's up with that?

The point here is: yes, in some specialised cases it might be preferable to implement the recursion yourself if the problem calls for it. But other times, and I'd argue most of the time, it's not necessary. So just use the abstraction already provided by the language. Your line of reasoning is a bit like arguing for not using C at all, because it will be slower than assembly in some cases. Sure, write hand-optimised assembly in your hot paths if you need to, but most people don't. Abstractions are generally our friends; they help us write clearer, more concise code.


> It's not strictly necessary precisely because all recursions can be "simulated" with a heap allocated stack.

This just moves the problem from a stack blowout to a heap blowout.

> And in fact, the "simulated" approach is almost always better, from both a performance and a maintenance perspective.

I am unsure about the performance, but turning recursive code implementing a recursive procedure into iterative code which has to maintain a stack by hand cannot possibly improve readability unless the programmers involved are pathologically afraid of seeing recursive code.


Computers have many GiB of heap space. Your thread has a few MiB of stack. Tell me: which is the bigger problem?

This is also ignoring the fact that the memory usage for recursive algorithms is higher, because there's a bunch of state for the function call being pushed onto the stack that you just don't see (return address, potentially spilled registers depending on the compiler's ability to optimize, etc). Unless you stick with tail recursion, but that's just a special case where the loop method would be similarly trivial. Case in point: I implemented depth-first search initially as a recursive thing and blew out the stack on an embedded system. Switched to an iterative depth-first search with no recursion. No problem.
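For that tail-recursion special case, the loop form really is trivial; a sketch with made-up names (C doesn't guarantee tail-call elimination, which is exactly why the loop spelling is the safe one on a small stack):

```c
/* Tail-recursive factorial: an optimizing compiler *may* turn this
   into a loop, but the C standard doesn't require it, so the depth
   is O(n) in the worst case. */
unsigned long fact_rec(unsigned n, unsigned long acc)
{
    return (n <= 1) ? acc : fact_rec(n - 1, acc * n);
}

/* Same computation, O(1) stack, guaranteed. */
unsigned long fact_loop(unsigned n)
{
    unsigned long acc = 1;
    while (n > 1)
        acc *= n--;
    return acc;
}
```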

OP said “it’s the only way to solve certain problems”. That’s clearly not true because ALL recursive algorithms can be mapped to non recursive versions.

I never got the fascination with implicit recursion. It's just a slightly different way to express the solution. Personally, I find it usually harder to follow / fully understand than regular iterative methods that describe the recursion state explicitly (time and space complexity in particular I find very hard to reason about for recursion).


Definitely not.


For the "can be", the performance, or the maintenance perspective? With an example, please...


You’ll find it difficult to beat a construct that has dedicated language support. Trying to fake function calls without actually making function calls is difficult and going to have poor ergonomics.


MISRA C bans recursion for instance.


Doesn't a similar DoS risk (from allowing users to allocate arbitrarily large amounts of memory) also apply to the heap? You shouldn't be giving arbitrary user-supplied ints to malloc either.


> Doesn't a similar DoS risk (from allowing users to allocate arbitrarily large amounts of memory) also apply to the heap?

DoS risk? No one cares too much about that - the problem with VLAs is stack smashing, which then allows arbitrary user-supplied code to be executed.

You cannot do that with malloc() and friends.


VLAs don’t smash the stack.


Depends on the number you put inside, and the linker settings for stack size.


No.


How does a huge VLA corrupt the stack? If there's not enough space but code keeps going then isn't that a massive bug with your compiler or runtime?


Okay. How do you tell the kernel that? Sure, the kernel will have put a guard page or more at the end of the stack, so that if you regularly push onto the stack, you will eventually hit a guard page and things will blow up appropriately.

But what if the length of your variable length array is, say, gigabytes, you've blown way past the guard pages, and your pointer is now in non-stack kernel land.

You'd have to check the stack pointer all the time to be sure, and that's prohibitive performance-wise. Ironically, x86 kind of had that in hardware back when segmentation was still used.


I think the normal pattern is a stack probe every page or so when there's a sufficiently large allocation. There's no need to check the stack pointer all the time.

But that's not my point. If the compiler/runtime knows it will blow up if you have an allocation over 4KB or so, then it needs to do something to mitigate or reject allocations like that.


> I think the normal pattern is a stack probe every page or so when there's a sufficiently large allocation.

What exactly are you doing there, in kernel code?

> But that's not my point. If the compiler/runtime knows it will blow up if you have an allocation over 4KB or so, then it needs to do something to mitigate or reject allocations like that.

Do what exactly? Just reject stack allocations that are larger than the cluster of guard pages? And keep book of past allocations? A lot of that needs to happen at runtime, since the compiler doesn't know the size with VLAs.

It's not impossible and mitigations exist, but it is pretty "extra". gcc has -fstack-check that (I think) does something there.


> What exactly are you doing there, in kernel code?

In kernel code?

What you're doing is triggering the guard page over and over if the stack is pushing into new territory.

> Do what exactly? Just reject stack allocations that are larger than the cluster of guard pages? And keep book of past allocations? A lot of that needs to happen at runtime, since the compiler doesn't know the size with VLAs.

Just hit the guard pages. You don't need to know the stack size or have any bookkeeping to do that, you just prod a byte every page_size. And you only need to do that for allocations that are very big. In normal code it's just a single not-taken branch for each VLA.
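Written by hand, the probing amounts to something like this (a sketch; real compilers emit the equivalent inline and know the target's actual page size):

```c
#include <stddef.h>

#define PAGE_SIZE 4096   /* assumed page size for illustration */

/* Touch one byte per page of a large stack allocation, in order,
   so that stack growth hits the guard page and faults cleanly
   instead of skipping straight past it into unrelated memory. */
void probe_pages(volatile char *base, size_t len)
{
    for (size_t off = 0; off < len; off += PAGE_SIZE)
        base[off] = 0;            /* any fault happens here, on a guard page */
    if (len)
        base[len - 1] = 0;        /* cover the final partial page */
}
```

For a VLA, the compiler only needs this path when the size exceeds a page; small sizes cost a single not-taken branch.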


That seems to be what -fstack-check for gcc is doing:

"If neither of the above are true, GCC will generate code to periodically “probe” the stack pointer using the values of the macros defined below."[1]

I guess I'm wondering why this isn't always on if it solves the problem with negligible cost? Genuine question, not trying to make a point.

[1] https://gcc.gnu.org/onlinedocs/gccint/Stack-Checking.html


What I'm finding in a quick search is:

* It should be fast, but I haven't found a benchmark.

* There appear to be some issues of signals hitting at the wrong time vs. angering valgrind, depending on probe timing.

* Probes like this are mandatory on windows to make sure the stack is allocated, so it can't be that bad.


I'm mostly interested in it for kernel code though, so the second point at least does not apply, at least not directly. Maybe there is something analogous when preempting kernel threads, I haven't thought it through at all. But interesting.


Because it's on by default in MSVC [0], and we all know that whatever technical decisions MS makes, they're superior to whatever technical decision the GNU people make. /s

Speaking seriously, I too would like an answer.

[0] https://docs.microsoft.com/en-us/windows/win32/devnotes/-win...


One decision Microsoft made was not to support VLAs at all, even after their new found C love.


An attacker would first trigger a large VLA-allocation that puts the stack pointer within a few bytes of the guard page. Then they would just have the kernel put a return address or two on the stack and that would be enough to cause a page fault. The only way to guard against that would be to check that every CALL instruction has enough stack space which is infeasible.


But that's the entire point of the guard page, it causes a page fault. That's not corruption.

Denial of service by trying to allocate something too big for the stack is obvious. I'm asking about how corruption is supposed to happen on a reasonable platform.


Perhaps they're trying to guard against introducing easy vulnerabilities on unreasonable platforms. With VLAs unskilled developers can perhaps more easily introduce this problem. It would be a case of bad platforms and bad developers ruining it for the rest.


An attacker could trigger a large VLA allocation that jumps over the guard page, and a write to that allocation. That write would start _below_ the guard page, so damage would be done before the page fault occurs (ideally, that write wouldn’t touch the guard page and there wouldn’t be a page fault but that typically is harder to do; the VLA memory allocation typically is done to be fully used)

Triggering use of the injected code may require another call timed precisely to hit the changed code before the page fault occurs.

Of course, the compiler could and should check for stack allocations that may jump over guard pages and abort the program (or, if in a syscall, the OS) or grow the stack when needed. Also, VLAs aren’t needed for this. If the programmer creates a multi-megabyte local array, this happens, too (and that can happen accidentally, for example when increasing a #define and recompiling)

The lesson is, though, that guard pages alone don’t fully protect against such attacks. The compiler must check total stack space allocated by a function, and, if it can’t determine that that’s under the size of your guard page, insert code to do additional runtime checks.

I don’t see that as a reason to outright ban VLAs, though.


VLAs give the attacker an extra attack vector. The size of the VLA is runtime-determined and potentially controlled by user input. Thus, the only safe way to handle VLAs is to check that there is enough stack space for every VLA allocation. Which may be prohibitively expensive and even impossible on some embedded platforms. Stack overflows may happen for other reasons too, but letting programmers put dynamic allocations on the stack is just asking for trouble.


I don’t think “may be prohibitively expensive and even impossible on some embedded platforms” is a strong argument for not including it in C. There are many other features in C for which that holds, such as recursion, dynamic memory allocation, or even floating point.


Welcome to the world of undefined behavior. Anything can happen....


I think this is a common misunderstanding about UB. It's not that anything can happen, just that the standard doesn't specify what happens, meaning whatever happens is compiler/architecture/OS dependent. So you can't depend on UB in portable code. But something definite will happen, given the current state of the system. After all, if it didn't, these things wouldn't be exploitable either.


> But something definite will happen, given the current state of the system.

This is only true in the very loose and more or less useless sense that the compiler is definitely going to emit some machine code. What does that machine code do in the UB case? It might be absolutely anything.

One direction you could go here is you insist that surely the machine code has a defined meaning for all possible machine states, but that's involving a lot of state you aren't aware of as the programmer, and it's certainly nothing you can plan for or anticipate so it's essentially the same thing as "anything can happen".

Another is you could say, no, I'm sure the compiler is obliged to put out specific machine code, and you'd just be wrong about that, Undefined Behaviour is distinct from Unspecified Behaviour or merely Platform Dependant behaviour.

Many C and C++ programmers have the mistaken expectation that if their program is incorrect it can't do anything really crazy, like if I never launch_missiles() surely the program can't just launch_missiles() because I made a tiny mistake that created Undefined Behaviour? Yes, it can, and in some cases it absolutely will do that.
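A tiny illustration of how that plays out (function names are made up; an optimizing compiler is allowed to fold the first function to `return 1;`, because it may assume signed overflow never happens):

```c
#include <limits.h>

/* Looks like a reasonable overflow check, but it *relies on* signed
   overflow, which is UB: the compiler may assume x + 1 > x always
   holds and delete the comparison entirely. */
int can_add_one_buggy(int x)
{
    return x + 1 > x;             /* UB when x == INT_MAX */
}

/* The well-defined version compares against INT_MAX before adding. */
int can_add_one_defined(int x)
{
    return x < INT_MAX;
}
```

The buggy version often "works" at -O0 and silently changes meaning at -O2, which is what makes UB so treacherous.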


I'm aware you can get some pretty crazy behaviours, say if you end up overwriting a return address and your code begins to jump around like crazy. Even that could reproduce the same behaviour consistently though.

I once had a bug like that in a piece of AVR C code where the stack corruption would happen in the same place every time and the code would pathologically jump to the same places in the same order every time. It's worth noting though that when there's an OS, usually what will happen is just a SIGABRT. See the OpenBSD libc allocator for a masterclass in making misbehaving programs crash.

I was never advocating to rely on UB, btw. But yes, UB can be understood in many cases.


You are confusing the C standard and actual platforms/C implementations. A lot of things are UB in the standard but perfectly well defined on your platform. Standards don’t compile code, real compilers do. The standard doesn’t provide standard library implementations, the actual platform does.

Targeting the standard is nice, but if all of your target platforms guarantee certain behaviors, you might consider using those. A lot of UB in the C standard is perfectly defined and consistent across MSVC, GCC, Clang, and ICC.


> A lot of UB in the C standard is perfectly defined and consistent across MSVC, GCC, Clang, and ICC.

Do you have examples of this "a lot of UB in the C standard" which is in fact guaranteed to be "perfectly defined and consistent" across all the platforms you listed ? You may need to link the guarantees you're relying on.


Okay so take the two most complained about UBs, improper aliasing and signed integer overflow. Every compiler I’ve ever used lets you turn both into defined behavior.


These things aren't the default, the compiler may "let you" but it doesn't do it until you already know you've got a problem and you explicitly tell it you don't want standard behaviour. However lets ignore that for a moment:

My GCC offers to turn signed integer overflow into either an abort or wrapping, either of which is defined behaviour, but how do I get MSVC to do precisely the same thing?

Likewise for aliasing rules. GCC has a switch to have the optimiser not assume the language's aliasing rules are actually obeyed, but what switch in MSVC does exactly the same thing?


MSVC doesn’t enforce strict aliasing to begin with. And passing /d2UndefIntOverflow makes signed integer overflow well-defined. Even if you don’t pass it, MSVC is very conservative in exploiting that UB, precisely to avoid breaking code that was valid before they started doing this optimization (2015 or so).


What you are describing is unspecified and implementation-defined behavior [0].

Avoiding UB (edit: in general) doesn't have anything to do with the code being portable and everything with the code not being buggy [1][2].

[0] https://en.cppreference.com/w/c/language/behavior

[1] https://blog.regehr.org/archives/213

[2] http://blog.llvm.org/2011/05/what-every-c-programmer-should-...


Oh really? Then why does every compiler I use have a parameter to turn off strict aliasing?

You cite to a source that contradicts you. In the llvm blog post: "It is also worth pointing out that both Clang and GCC nail down a few behaviors that the C standard leaves undefined."


Sometimes a compiler gives a guarantee that a particular UB is always handled in a specific way, but you cannot generalize this to all UB.

Added 'in general' to my comment to make this explicit.


What is undefined about a large VLA? It shouldn't be undefined.

According to Wikipedia, "C11 does not explicitly name a size-limit for VLAs".


The C standard has no mentions of a program stack. This isn’t undefined behavior.


You shouldn't be writing C if you're not a careful coder.


Yeah, right.

https://msrc-blog.microsoft.com/2019/07/16/a-proactive-appro...

https://research.google/pubs/pub46800/

https://support.apple.com/guide/security/memory-safe-iboot-i...

Maybe you could give a helping hand to Microsoft, Apple and Google; they are in need of careful C coders.


I'm not sure if you intentionally missed my point. Everything in C requires careful usage. VLAs aren't special: they're just yet another feature which must be used carefully, if used at all.

Personally, I don't use them, but I don't find "they're unsafe" to be a convincing reason for why they shouldn't be included in the already-unsafe language. Saying they're unnecessary might be a better reason.


The goal should be to reduce the amount of sharp edges, not increase them even further.


VLAs are unsafe in the worst kind of way, as it is not possible to query whether it is safe to use them. alloca() can at least in theory return NULL on stack overflow, but there is no such provision for VLAs.


They're not unsafe (in the memory sense) as long as they check for overflow and reliably crash if there is one.


If a lot of platforms don't implement this check reliably, then it's unsafe in practice at this time, even if not in theory.


Who out there has a version of stack checking that doesn't actually check the stack…? If it doesn't check by default, as C doesn't, then it's not "as long as".


And if you're a careful coder writing C, you should give the VLA the stink eye unless it's proving its worth.


Hint, that means nobody should be writing C.


Where is the lie?


Too bad we have all that legacy C code that won't just rewrite itself in a safer language.

That means there are a lot of not-careful-enough developers (AKA human ones) that will write a lot of C just because they need some change here or there.


With VLAs:

1. The stack-smashing pattern is simple, straightforward and sure to be used often. Other ways to smash the stack require some more "effort"...

2. It's not just _you_ who can smash the stack. It's the fact that anyone who calls your function will smash the stack if they pass some large numeric value.


They can overflow the stack. They cannot smash the stack.


Fair enough; I had the mistaken idea that the two terms are interchangeable, but apparently stack smashing is only used for the attack involving the stack:

https://en.wikipedia.org/wiki/Stack_buffer_overflow

so, pretend I said "overflow" instead of "smash" in my post.


Useless semantic pedantry at best, and arguably wrong, as there isn't some sort of ISO standard on dumb hacking terms.


Overflowing the stack gives you a segfault. Smashing the stack lets hackers pop a shell on your computer. They are incredibly different. VLAs can crash your program, but they do not give attackers the ability to scribble all over the stack.


> Overflowing the stack gives you a segfault.

Maybe. If the architecture supports protected memory and the compiler has placed an appropriately sized guard page below the stack. If it doesn't then overflowing the stack via a VLA gives you easy read and write access to any byte in program memory.


If your architecture does not support this then you’re at risk whenever you make a function call.


Backpedal harder!


I’m not backpedaling. If your environment has guard pages and does probing then it protects equally well against VLAs and function calls overflowing the stack. If it has neither then both are liable to overwrite other memory. Obviously I would prefer that you have the protections, or some sort of equivalent, but they have nothing to do with VLAs.


Unless they happen to be enjoying kernel space.


There is no difference.


What about not adding even more ways how we should avoid using C?


> What about not adding even more ways how we should avoid using C?

That's a mute point for C's target audience because they already understand that they need to be mindful of what the language does.


What the heck. It's "moot", not "mute".


I'm curious, are there accents in which those two words are homophones? Given the US tendency to pronounce new/due/tune as noo/doo/toon I can imagine some might say mute as moot but I can't find anything authoritative online.


According to Wikipedia, East Anglia does universal yod-dropping, so mute/moot would be homophonic. (See https://en.wiktionary.org/wiki/Appendix:English_dialect-depe...).

Personally, I haven't come across anyone who pronounces 'mute' without the /j/.


They are not perfect homophones. There is a slight i (IPA j) in "mute".

https://en.wiktionary.org/wiki/mute

https://en.wiktionary.org/wiki/moot


That is like saying that if sushi knives are already sharp enough, there is no issue cutting fish with a samurai sword instead, except at least with the knife maybe the damage isn't as bad.


The difference between the largest sushi knives and a katana is more about who wields them than the blade involved.


One ends up cutting quite a few pieces either way.


When you really need a samurai, nothing less will do. Arguably most of us need sushi chefs these days.


> That is like saying if sushi knives are already sharp enough (...)

No, it's like saying that professional people understand the need to learn what their tools of the trade do, beyond a random Stack Overflow search on how to print text to stdout.

It seems you have an irrational dislike of C. That's perfectly ok. No need to come up with excuses though.


It doesn't seem, I do.

Ever since I got my hands on Turbo C++ 1.0, back in 1993, I see no reason why we should downgrade ourselves to C.

At least C++ give us the tools to be a bit more secure, even if tainted with C's copy-paste compatibility.

You will find posts from me on Usenet, standing on the C++ front line of the C vs C++ flamewars.

No one is making excuses; it should be nuked. Unfortunately, it will outlive all of us.


I like your comparison of a C programmer with a samurai.


Including that most of them end up committing seppuku with their applications.


while we're hugging them from behind


It's more like the C programmer is a sushi master. They can make a delicious, beautifully crafted snack. But if the wrong ingredients are used you'll get very sick.


VLAs are no more unsafe than standard C is for stack corruption.


Just one additional attack vector more to add to the list, who's still counting them?


It’s not an additional attack vector.


    int A[100000000];
Also has no protection.


The only result of banning VLAs is to force everyone to use alloca, which is even less safe.

exhibit A: https://lists.freedesktop.org/archives/mesa-commit/2020-Dece...

exhibit B: https://github.com/neovim/neovim/issues/5229

exhibit C: https://github.com/sailfishos-mirror/llvm-project/commit/6be...

etc etc


Nobody is forced to use alloca, which is not less safe, only equally disastrous. Just use malloc, already.


ah yes, why didn't I think of it, let me just try:

    #include <cstdlib>
    #include <span>
    
    __attribute__((annotate("realtime")))
    void process_floats(std::span<float> vec) 
    {
      auto filter = (float*) malloc(sizeof(float) * vec.size());
    
      /* fill filter with values */
    
      for(int i = 0; i < vec.size(); i++)
        vec[i] *= filter[i];
    
      free(filter);
    }

    $ stoat-compile++ -c foo.cpp -emit-llvm -std=c++20
    $ stoat foo.bc
    Parsing 'foo.bc'...

    Error #1:
    process_floats(std::span<float, 18446744073709551615ul>) _Z14process_floatsSt4spanIfLm18446744073709551615EE
    ##The Deduction Chain:
    ##The Contradiction Reasons:
     - malloc : NonRealtime (Blacklist)
     - free : NonRealtime (Blacklist)

oh noes :((


Here's a nickel, kid.

The whole "oh, my embedded system doesn't have dynamic memory" line is bullshit. You either know how big your stack is and how many elements there are, and you make the array that big. Or you don't know and you're fucked.

You can't clever your way out of not knowing how big to make the array with magic stack fairy pretend dynamic memory. You can only fuck up. Is there room for 16 elements? The array is 16. Is there room for 32? It's 32.


I think the parent comment was about malloc not being real-time? Not about storage space.

Though I do wonder why there can't be a form of malloc that allocates in a stack-like fashion in real time to satisfy the formal verifier?


Real time also generally means your input sizes are bounded and known, otherwise the algorithm itself isn't realtime and malloc isn't the reason why.

But strictly speaking the only problem is a malloc/free that can lock (you can end up with priority inversion). So a lock-free malloc would be realtime just fine, it doesn't have to be stack growth only.
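A sketch of what such an allocator could look like (names and the arena size are invented; allocation is O(1) with no locks and no syscalls, at the price of bounding total usage up front; this single-threaded version would need atomics to be shared across threads):

```c
#include <stddef.h>

#define ARENA_SIZE (64 * 1024)   /* fixed, known bound: the real-time requirement */

static unsigned char arena[ARENA_SIZE];
static size_t arena_used;

/* Bump allocator: constant-time, never blocks, never calls the OS.
   Returns NULL instead of waiting when the bound is exceeded. */
void *arena_alloc(size_t n)
{
    n = (n + 15) & ~(size_t)15;          /* round up to 16-byte alignment */
    if (n > ARENA_SIZE - arena_used)
        return NULL;                     /* bounded: fail fast, don't block */
    void *p = &arena[arena_used];
    arena_used += n;
    return p;
}

/* Individual frees aren't supported; everything is released at once,
   e.g. at the end of a processing cycle. */
void arena_reset(void)
{
    arena_used = 0;
}
```

This is essentially the "stack-like" malloc asked about above: LIFO in spirit, heap-backed in storage, and with a worst-case cost you can state up front.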


> Real time also generally means your input sizes are bounded and known, otherwise the algorithm itself isn't realtime and malloc isn't the reason why.

I think you meant to say something else? Real-time is a property of the system indicating a constraint on the latency from the input to the output—it doesn't constrain the input itself. (Otherwise even 'cat' wouldn't be real-time!)


If your input is not bounded you can't know in advance the time needed to process it. In other words, you cannot be realtime.

`cat` can be realtime, but only by fixing the size of its internal buffer where it reads to and writes from. In this case it can in theory bound the time needed to process the fixed block of input.

But if for some reason `cat` tried to read/write lines whose size isn't known in advance, it would fail to be realtime.
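A sketch of that fixed-buffer version, where each iteration touches at most `BUF` bytes regardless of total input size (POSIX `read`/`write`; the function name is made up):

```c
#include <unistd.h>

enum { BUF = 4096 };   /* fixed block size: bounds the work done per iteration */

/* Copies in -> out one bounded chunk at a time. Total runtime is unbounded,
   but the latency of any single read/process/write step is not. */
void cat_fd(int in, int out)
{
    char buf[BUF];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)
        (void)write(out, buf, (size_t)n);
}
```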


I think we're not disagreeing on the actual constraints, but the terminology. The "internal buffer" is not part of the system's "input". It's part of the system's "state".


We're talking about function inputs, not system inputs.


Real-time is a property of the whole system though (?), not individual functions. But even if you want to reframe it to be about functions, small input is neither necessary nor sufficient for being "real time". Your function might receive an arbitrarily large n-element array a[] and just return a[a[n-1]], and it would be constant time. Again, size is simply not the correct property to look at.


> Real-time is a property of the whole system though (?) not individual functions.

As I see it, that leads to a contradiction. `cat` has unbounded execution time: it depends on the size of the input. That would mean `cat` cannot be realtime. But the same logic leads to the conclusion that no OS kernel can be realtime either, because it runs for an unbounded time.

That's nonsense. Realtime is about latency: how long it takes to react to an input. `cat` may or may not be realtime, depending on how we define "input". We need to define it in a way that is bounded — 4 KB chunks of bytes, for example.

> Like your function might receive an arbitrarily large n-element array a[] and just return a[a[n-1]], and it would be constant time.

No. Receiving an n-element array is an O(n) operation: it needs to be copied into memory. We can of course pick one function that just gets a pointer to an array, but if real-time is a property of the whole system, then that system needs to copy the array from the outside world into memory, and that is an O(n) operation. So for any latency requirement there exists an N such that for n > N the requirement will not be met. An unbounded array as input is therefore incompatible with real-time.


> Though I do wonder why there can't be a form of malloc that allocates in a stack like fashion in real time

I think that's basically what the LLVM SafeStack pass does -- stack variables that might be written out of bounds are moved to a separate stack so they can't smash the return address.


how would you implement that in the face of multiple threads? You can't use TLS, as it will have to initialize your stack on first access of your malloc_stack in a given thread, which may or may not be safe in real-time-ish contexts (I think it's definitely not on Windows, not sure on Linux)


> how would you implement that in the face of multiple threads?

I imagine you could allocate it at the same time as you allocate the thread's own stack?
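Sketch of that idea with POSIX threads: the per-thread allocation area is set up on the (non-realtime) spawning path and handed to the thread up front, so no lazy TLS initialization happens later. All names here are hypothetical:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hand each thread its allocation area at creation time,
   instead of lazily initializing it via TLS on first use. */
typedef struct {
    unsigned char *area;   /* preallocated before the thread exists */
    size_t         cap;
} thread_ctx;

static void *worker(void *arg)
{
    thread_ctx *ctx = arg;   /* ready before any realtime code runs */
    /* ... bump-allocate out of ctx->area during the thread's lifetime ... */
    free(ctx->area);         /* teardown happens back on a non-RT path */
    free(ctx);
    return NULL;
}

/* Non-realtime setup path: plain malloc is fine here. */
int spawn_rt_thread(pthread_t *t, size_t area_bytes)
{
    thread_ctx *ctx = malloc(sizeof *ctx);
    if (!ctx) return -1;
    ctx->area = malloc(area_bytes);
    ctx->cap = area_bytes;
    if (!ctx->area) { free(ctx); return -1; }
    return pthread_create(t, NULL, worker, ctx);
}
```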


> You either know how big your stack is

that's in most systems I target a run-time property, not a compile-time one


> wants to make them mandatory again

What does 'mandatory' mean? Like if I write a C compiler without them... what are they going to do about it?


Code that complies with the standard will be rejected by your compiler. The effect would probably be that few people would use your compiler.


Most mainstream compilers aim to be standards compliant


This usage of VLAs is once again mandatory for compilers to support, since C23. From Wikipedia: "Variably-modified types (but not VLAs which are automatic variables allocated on the stack) become a mandatory feature".


Oh well...


At least it is still optional to support stack-allocated VLAs, which is the attack vector you mentioned.


Tell me, what’s wrong with variably modified types specifically?


Sizeof, which used to be one of the very few things you could blindly do to an arbitrary (potentially UB-provoking) expression, can now have side effects.
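Concretely: the size expression of a variably modified type is evaluated when it appears under sizeof (C11 6.5.3.4), so the operator can now run arbitrary code. A small sketch, function name made up:

```c
#include <stddef.h>

/* The size expression of a variably modified type is evaluated by sizeof,
   so this "pure"-looking operator can have side effects: */
size_t vla_sizeof_demo(int *n)
{
    return sizeof(int[(*n)++]);   /* (*n)++ actually executes inside sizeof */
}
```

With a non-VLA operand, sizeof still evaluates nothing — the surprise is that the same syntax silently switches behavior when the size isn't a constant.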



