
I mostly agree with the premise: logic errors are always going to be there, at least until the compiler is an AI strong enough to catch them for us (and by then we probably won't need coders anyway...). There's no silver bullet; bad coders are always going to produce bad code. And I also don't like it when people claim that bug X or vulnerability Y wouldn't have happened if they had been using technology Z; they're just begging for that type of post.

That being said, I'm a bit more skeptical of this part: "code no true C programmer would write : heartbleed :: code no true rust programmer would write :: (exercise for the reader)"

If I look at the examples in the article, the C version doesn't look that terrible and contrived to me. I wonder what the author means by "Survey says no true C programmer would ever write a program like that, either." That looks like a lot of C code I've read; there's nothing particularly weird about it.

On the other hand, the Rust version looks very foreign to me (and I've been writing quite a lot of Rust lately). You basically have to go out of your way to create the same issue.

I guess my point is that while it's true that as long as there are coders there will be bugs and security vulnerabilities, that doesn't mean we shouldn't try to make things better. And in my opinion, Rust makes it much more difficult to shoot yourself in the foot than plain C.



>I wonder what the author means by "Survey says no true C programmer

I think he is being sarcastic. I.e. the idea that "no true C programmer" would write code like that is nonsense, since we have all seen C code like that. Therefore the idea that "no true Rust programmer" would write the Rust snippet is not a valid defence, because bad programmers gonna program.


Yes. It's an instance of the No True Scotsman Fallacy.

https://en.wikipedia.org/wiki/No_true_Scotsman


Having not done much Rust due to the volatility, I still have to give it credit here for a few more reasons:

* I assume old_io is going away. That is why it is old, after all. Is this still possible in the new_io? If they made this not doable anymore, that means they fixed the bug.

* The compiler spat out a warning. Maybe it should have been "you dumb fuck why are you doing raw buffer reads and writes" but at least it said "old_io is bad".

* I don't see much in the C version that would generate any warnings or errors under any compiler flags. That should be the takeaway, in my book.

When I do my own projects I almost always go for maximum warnings and errors, and don't call it done until the compiler stops generating them.
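In Rust terms, that "don't call it done until the warnings stop" workflow can be enforced at the crate level, similar in spirit to building C with `-Wall -Wextra -Werror`. A minimal sketch:

```rust
// Crate-level attribute: every compiler warning (including deprecation
// warnings, like the one the article's old_io example triggered)
// becomes a hard error, so the build fails instead of nagging.
#![deny(warnings)]

fn main() {
    // This program compiles only if it is completely warning-free.
    let msg = "warning-free build";
    assert_eq!(msg, "warning-free build");
    println!("{}", msg);
}
```

This is stricter than most teams want for day-to-day development (a new compiler release can add lints and break the build), so a common compromise is to pass `-D warnings` only in CI rather than hard-coding the attribute.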


The fundamental underlying problem is that a buffer containing sensitive data is being reused for a separate purpose without being scrubbed of the sensitive data. In the particular instance of heartbleed, this faulty behavior was due to the underlying memory allocator rather than any property of the programming language.

It is true that idiomatic Rust tends to avoid working with raw buffers, but for low-level tasks this is sometimes unavoidable. Rust also doesn't especially encourage reusing buffers, but if you've already taken the step of specifying an unsafe memory allocator then Rust can't help you. So yes, Rust gives you a lot of tools to avoid this situation, but it can't outright prevent it, as a few people on the internet appeared to be suggesting last year.
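To make the scrubbing point concrete, here is a minimal sketch. `wipe_and_reuse` is a hypothetical helper, not from any real allocator, and a production scrubber would use a guaranteed-wipe mechanism (e.g. the `zeroize` crate) because the compiler may optimize away plain dead stores:

```rust
// Hypothetical helper: scrub a buffer's old (possibly sensitive)
// contents before it is reused for a new purpose.
fn wipe_and_reuse(buf: &mut Vec<u8>) {
    // Overwrite the old bytes in place...
    for b in buf.iter_mut() {
        *b = 0;
    }
    // ...then reset the length, so safe code can't read stale data
    // even though the allocation itself is kept.
    buf.clear();
}

fn main() {
    let mut buf = b"private key material".to_vec();
    wipe_and_reuse(&mut buf);
    assert!(buf.is_empty());
}
```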


logic errors are always going to be there, at least until the compiler is an AI strong enough to catch them for us (and by then we probably won't need coders anyway...)

This raises some interesting philosophical questions: will the ultimate judge of correctness be a human or machine? If it's a machine, what is to say that its definition of "correct" is what humans want?

For some reason, this quote comes to mind: "Freedom is not worth having if it does not include the freedom to make mistakes."


Is this a logic error or just a misuse of memory? (The buffer array)


The latter. It's trivial to reuse buffers in Rust while avoiding this issue, for example `Vec` has a `clear` method that sets the length to 0 while keeping the allocation.
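A minimal sketch of that clear-and-reuse pattern (not from any particular codebase):

```rust
fn main() {
    let mut buf: Vec<u8> = Vec::with_capacity(64);
    buf.extend_from_slice(b"sensitive payload");
    let cap_before = buf.capacity();

    // clear() sets the length to 0 but keeps the allocation,
    // so the buffer can be reused without reallocating.
    buf.clear();
    assert_eq!(buf.len(), 0);
    assert_eq!(buf.capacity(), cap_before);

    // Safe reads always go through the current length, so the old
    // contents are unreachable without unsafe code.
    buf.extend_from_slice(b"new payload");
    assert_eq!(&buf[..], b"new payload");
}
```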

But AFAICT, in C it wasn't even the same buffer by design; it was reading uninitialized memory from whatever `malloc` gave it back - which is equivalent to allocating a new buffer in Rust.


It was actually using a custom allocator, not the system malloc, which exacerbated the problem. System malloc could still have this problem, but for example OpenBSD has mitigations for this sort of data leakage in their malloc implementation, which OpenSSL then bypassed by using their own allocator.

This is a problem for any "this would never happen in Better Language X" claim regarding Heartbleed. If you decide you want to write your own buffer reuse system for whatever reason, you can pretty easily write this sort of bug in any language.


I would argue that the whole Rust ecosystem pretty strongly discourages you from writing broken low-level infrastructure code. The idea behind unsafe code blocks, the generics system, and Cargo is to encourage you to use off-the-shelf tools instead of rolling your own, often broken, solutions. The C ecosystem, by contrast, tends to encourage rolling your own solutions because managing dependencies in a cross-platform manner is such a pain (and the language doesn't help due to the lack of generics and so forth). I suspect that any free-list library in Cargo that didn't zero out buffers would be fixed pretty quickly (and, as eddyb rightly points out downthread, it would be pretty hard to write a free list system that works with multiple types that doesn't require initialization before use--Rust in general abhors uninitialized memory).
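A sketch of the kind of free-list behavior being described; `Pool` here is a hypothetical toy, illustrating that a Rust buffer pool naturally hands out initialized, scrubbed buffers:

```rust
// Toy buffer pool: buffers are zero-initialized on first allocation
// and scrubbed when returned, so a later borrower can never observe
// an earlier borrower's contents.
struct Pool {
    free: Vec<Vec<u8>>,
    buf_len: usize,
}

impl Pool {
    fn new(buf_len: usize) -> Self {
        Pool { free: Vec::new(), buf_len }
    }

    fn get(&mut self) -> Vec<u8> {
        // Reuse a scrubbed buffer if one is free, else allocate
        // a fresh zeroed one (never uninitialized memory).
        self.free.pop().unwrap_or_else(|| vec![0u8; self.buf_len])
    }

    fn put(&mut self, mut buf: Vec<u8>) {
        // Scrub before returning the buffer to the free list.
        for b in buf.iter_mut() {
            *b = 0;
        }
        self.free.push(buf);
    }
}

fn main() {
    let mut pool = Pool::new(4);
    let mut a = pool.get();
    a.copy_from_slice(b"key!");
    pool.put(a);
    let b = pool.get(); // reused allocation, but scrubbed: no leak
    assert_eq!(&b[..], &[0, 0, 0, 0]);
}
```

The Heartbleed-style bug corresponds to deleting the scrub loop in `put`; the point of the comment above is that Rust's ecosystem and abhorrence of uninitialized memory make that omission easier to spot and fix than in a hand-rolled C free list.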

There are parallels between the philosophy of crypto libraries like NaCl and Keyczar and the Rust philosophy. Don't write your own low-level infrastructure code. Use somebody else's.


I mostly agree, but "use somebody else's" doesn't really help if you're writing a crypto library in the first place.


They're referring to writing your own allocator.


And in fact, I've written that exact thing in a C# network stack, about 7 years ago. No bug, as I did properly check the size of the data, but totally possible to have messed up. It's not even hard code to write, just a simple object pool to return byte arrays. Which is natural, as in .NET, high performance often ends up as an exercise in removing every heap allocation possible.


That said, this is a fair point. I have shot myself in the foot in a fairly equivalent manner in another memory-safe language (doesn't really matter which) because I was trying to reuse buffers as an optimization. Oops. I did it to myself, and at least I understood enough about what was going on that I didn't spend days wondering at the heisenbugs, but still, oops. It can happen.

But at least you have to work at it a bit.




