> As the offending commenter, I apologize. Particularly to the Rust team for generating this negative publicity, and to the person I replied to, for asserting a lie.
Realistically you're on the right side here. I just like to argue because it makes people justify their positions, which helps me (and hopefully other people) understand them better.
Rust is one of the most promising languages I've seen in a long time. People have been trying to make a language better than C++ for the things that C++ is good at for decades and this is the first time anything has the potential to succeed.
That's why I argue for the other side. Because when something has obvious potential, people have the inclination to deify it. The statement "X will solve all security problems" is false for all X. You can write secure code in C (see also DJB). You can write insecure code in Rust. It's easier to write secure code in Rust, and that's very important, but it's just as important for everybody to realize that no compiler is made out of magic pixie dust that will make all my code perfect even if I'm an idiot.
But Rust does solve all memory safety issues, doesn't it? In the same way, say, F# does. Except I can use it without worrying about deployment or runtime costs.
One scenario I am using Rust for is a packet capture parsing and forwarding daemon. With C, my biggest failure mode is "people can run code on your server by sending a packet across your network". With Rust, my biggest failure mode is "my code might be buggy so results could be useless". In fact, even if I try, I'm having a hard time coming up with code I could write that'd expose a security issue.
That's pretty close to magic pixie dust. Yes, it's a restricted scenario, but I doubt it's an uncommon one. A bunch of vulnerable utilities are basically just reading and writing data from/to files. If they weren't using a memory-unsafe language, they simply wouldn't be in a position to open security holes.
Rust can't solve all memory safety issues. Rust tries very hard to guarantee that safe code (i.e. not unsafe{}) will be memory safe and free of some race conditions. The hard part is making this possible - turns out you need unsafe in the core to be able to write safe implementations of those features in the general case. Attack surface is greatly lessened, but it's still there.
Other GC'd languages come with runtimes written in C/C++, and often these runtimes and libraries are the source of vulnerabilities. Rust is no different: if you grep your source code and find no unsafe blocks, then you are back to where you were in GC land.
The difference is that in Rust it is not the whole source code, but only the `unsafe` blocks and what they touch, that needs to be verified for memory safety.
And often that involves verifying code outside of the blocks that are literally marked unsafe. It's all of the code sitting behind some safe abstraction boundary that needs to be verified.
Aren't we just talking about Heartbleed again? If you're forwarding packets the attacker could send you one with a forged length field.
Even if all you're doing is receiving packets and writing them to a file, what happens when you use the attacker-controlled reverse DNS of the packet's IP address as the file name?
Heartbleed isn't a memory safety issue though, which is what kicked this off. You're right in your original reply to me that if you want to go reuse buffers with leftover data, you can do that in any language. If you want to create a byte array in Java, fill it with private key material, then reuse the same byte array for an output buffer... what can stop that?
But more to my point: in C, I could end up executing arbitrary code by misparsing a network packet. In Rust, the worst I'll do is parse it wrong and send invalid data onwards. That's just a massive reduction in scope.
I suppose if you have a tool that writes to arbitrary, attacker-supplied file locations, that could have a severe impact. Or it could pass an attacker-controlled value to the shell without escaping.
But things like the cpio[1] bug mentioned in the lessopen issue. Or the numerous compression libraries that require trusted inputs. Image manipulation code. And on and on. How many of them become simple crashes with a memory safe language? Cause from my unscientific (and flawed as this entire article is about) review, it seems like the great majority are these memory safety issues. Perhaps 90%? For widespread code, am I that far off the mark? (I understand that most intranet or custom software might just eval() every querystring given to it.)
> Heartbleed isn't a memory safety issue though, which is what kicked this off.
It kind of is. It's just not of the type that "memory safe" languages fix for you, which is basically the point. If you define the scope of the problem in terms of what the proposed solution fixes then it tautologically fixes the entire problem, but it's still very important for people to understand that that doesn't mean it fixes every problem in that class.
> in C, I could end up executing arbitrary code by misparsing a network packet. In Rust, the worst I'll do is parse it wrong and send invalid data onwards.
This is essentially what I'm talking about. It's still possible to execute arbitrary code in Rust, it's just not as easy. An obvious example is if your parsing bug is in the code that validates attacker-provided input before doing something sensitive with it. Or allows the attacker to flip a bit which is equivalent to remote code execution, like giving the attacker's account admin rights.
And even if "the worst" you do is parse it wrong and send invalid data, that's Heartbleed. The "invalid" data could be secrets.
The question you have to ask is, for all those memory corruption bugs in C, what does "memory safe" turn them into? They're still bugs, they're just not the same bugs. For example, a common way you get RCE in C is an integer overflow that leads to a heap overflow when the overflowed integer is used to allocate a buffer. But take away the heap overflow and the integer overflow is still there. Exploiting an integer overflow is highly context-dependent, but it's commonly possible regardless of whether it leads to a heap overflow. Being able to truncate the amount of Bitcoin being debited from the attacker's account or convert "username=rootxxxx" into "username=root" is arguably better than straight RCE, but not by very much.