
I wouldn't call Zig's comptime "C++-style." Unlike Rust, there's very little in Zig that is borrowed from C++. Zig's error reporting and comptime make it easy to write arbitrary compile-time checks, so Zig uses a single construct and keyword, comptime, to replace all special instances of partial evaluation: generics, concepts/traits, value templates, macros and constexprs.
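
For instance, here's a minimal sketch (the function is illustrative, not from any particular codebase) of how a plain comptime type parameter stands in for generics:

    const std = @import("std");

    // T is an ordinary parameter, just evaluated at compile time;
    // each distinct T instantiates a specialized copy of the function.
    fn max(comptime T: type, a: T, b: T) T {
        return if (a > b) a else b;
    }

    pub fn main() void {
        std.debug.print("{}\n", .{max(i32, 1, 2)});
    }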

The main difference between Zig and Rust is a huge disparity in language complexity. Zig is a language that can be fully learned in a day or two. Rust has the same philosophy of "zero-cost abstraction" as C++, i.e. spending a lot of complexity budget to make a low-abstraction language appear as if it has high abstraction. Zig, like C, does not try to give the illusion of abstraction.

There is also the difference in their approach to safety, but that's a complicated subject that ultimately boils down to an empirical question -- which approach is safer? -- which we don't have the requisite data to answer.



> There is also the difference in their approach to safety, but that's a complicated subject that ultimately boils down to an empirical question -- which approach is safer? -- which we don't have the requisite data to answer.

We do have the requisite data to answer whether preventing use-after-free is better than not preventing it.

You can argue (not successfully, in my opinion) that it's not worth the loss in productivity to prevent UAF and other memory safety issues, but it's impossible to argue that not trying to prevent UAF is somehow safer.


> We do have the requisite data to answer whether preventing use-after-free is better than not preventing it.

I expect Zig will prevent use-after-free. It will be sound for safe code and unsound for unsafe code (by turning this on only in debug mode for testing).

> but it's impossible to argue that not trying to prevent UAF is somehow safer.

First, see above. Second, it is not only possible but even reasonable to argue that not trying to completely eliminate a certain error is very much safer. The reason is that soundness has a non-trivial cost which can very often be traded for an unsound reduction in a larger class of bugs. As an example, instead of soundly eliminating bugs of kind A, reducing bugs of kinds A, B and C -- for a similar cost -- may well be safer.

There has been little evidence to settle whether sound elimination of bugs results in more correctness than unsound reduction of bugs or vice-versa, and it's a subject of debate in software correctness research.


It's not interesting to talk about a system that might or might not exist in the future. (There is a word for that—"vaporware".) The point is that Zig doesn't even try to prevent UAF now, so you can't say that it's safer than languages that do prevent the problem.

> As an example, instead of soundly eliminating bugs of kind A, reducing bugs of kinds A, B and C -- for a similar cost -- may well be safer.

Hasn't this essentially been what C++ has been trying for memory safety for decades, without success? The C++ approach has been "smart pointers are good enough, and they prevent several other problems too", and the experience of web browsers (among others) has pretty much definitively shown: no, they really aren't. For memory safety, I would not bet on this approach.


Just for clarity for anyone reading, the Zig author does not claim that Zig is safe and has in fact said that it is unsafe. Could change in the future, but there's no denial about what it is today.


There is a difference between safe code, which is the goal, and a safe language (that's a statement the language makes on sound safety guarantees). Using a safe language is definitely one way to write safe code, but it is not necessarily always the best way, and it's certainly not the only way. Zig is not meant to ever be a safe language, but it is very much intended to be a language that helps write safe code. That is what I meant when I said that the two languages have a very different approach to safety.


> The point is that Zig doesn't even try to prevent UAF now

I wouldn't say Zig exists at all right now, but just as it strives to one day be production-ready, it strives to prevent use-after-free. Safety is a stated goal for the language.

> Hasn't this essentially been what C++ has been trying for memory safety for decades, without success?

No. I'm talking about a mechanism that can detect various errors at runtime, and is turned on or off for various pieces of code and/or for all code at various stages of development. Rust, BTW, doesn't entirely guarantee memory-safety, either, when any unproven unsafe code is used, and even when it isn't (e.g., have you proven LLVM's correctness?). We always make some compromises on soundness; the question is where the sweet-spots are.
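
To make the toggle concrete, here's a minimal sketch of how Zig already works for the checks it has today (the functions are illustrative; @setRuntimeSafety and the Debug/ReleaseSafe build modes are real Zig features):

    const std = @import("std");

    // Checked in Debug/ReleaseSafe builds: overflow or out-of-bounds
    // access panics instead of silently corrupting memory.
    fn checkedSum(items: []const u8) u32 {
        var total: u32 = 0;
        for (items) |x| total += x;
        return total;
    }

    // The same code with checks switched off for just this scope, as one
    // might do for a hot path in a release build.
    fn uncheckedSum(items: []const u8) u32 {
        @setRuntimeSafety(false);
        var total: u32 = 0;
        for (items) |x| total += x;
        return total;
    }

    test "both paths agree on small inputs" {
        const data = [_]u8{ 1, 2, 3 };
        try std.testing.expectEqual(checkedSum(&data), uncheckedSum(&data));
    }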

Software correctness is one area where there are no easy answers and very few obvious ones.


> No. I'm talking about a mechanism that can detect various errors at runtime, and is turned on or off for various pieces of code and/or for all code at various stages of development

What you are describing exists: ASan. We have a pretty good answer to the question "is ASan sufficient to prevent memory safety problems in practice": "no, not really".

> Rust, BTW, doesn't entirely guarantee memory-safety, either, when any unproven unsafe code is used, and even when it isn't (e.g., have you proven LLVM's correctness?). We always make some compromises on soundness; the question is where the sweet-spots are.

Empirically, Rust's approach has resulted in far fewer memory safety problems than previous approaches like smart pointers and ASan, with only garbage collectors (and restrictive languages with no allocation at all) having similar success in practice. Notice that the working approaches have something important in common: a strong system that, given certain assumptions, guarantees the lack of memory safety problems. Even though those assumptions are never quite satisfied in practice, empirically having those theoretical guarantees seems important. It separates systems that drastically reduce safety problems, such as Rust and GC languages, from those that do so less well, such as ASan and smart pointers. This is why I'm so skeptical of just piling on more mitigations: they're helpful, but we've been piling on mitigations for decades and UAF (for instance) is still as big a problem as ever.


> We have a pretty good answer to the question "is ASan sufficient to prevent memory safety problems in practice"

That is not the question we're interested in answering, and elimination of all memory errors is no one's ultimate goal, certainly not at any cost. By definition, unsound techniques will let some errors through. The question is which approach leads to an overall safer program for a given effort, and soundness (of properties of interest) always comes at a cost.

(also, Zig catches various overflow errors better than ASan)
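
A minimal sketch of what that means: plain arithmetic is overflow-checked in Zig's safe build modes, a class of error ASan does not detect at all (UBSan, not ASan, covers it for C/C++):

    test "overflow is caught, not wrapped" {
        var x: u8 = 255;
        x += 1; // panics with "integer overflow" in Debug/ReleaseSafe
    }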

> Empirically, Rust's approach has resulted in far fewer memory safety problems than previous approaches like smart pointers

I don't doubt that, and if minimization of memory errors was programmers' primary concern (even in the scope of program correctness or even just security), there would be little doubt that Rust's approach is better.

As someone who currently mostly programs in C++, lack of memory safety barely makes my top three concerns. My #1 problem with C++ is that the language is far too complex for (my) comfort, where by "complex" I mean requires too much effort to read and write. That, and build times, have a bigger impact on the correctness of the programs I write than the lack of sound memory safety. Would I be happier if, for a similar cost, I could eliminate all memory safety errors? Sure, which is why, if C++ and Rust were the only low-level languages in existence, I'd rather people used Rust. But I would be happier still if I could solve the first two, and also get some better safety as a cherry-on-top. Memory safety is similarly not the main, and certainly not the only, reason I use languages that have a (tracing) GC when I use them.


P.S.

> Notice that the working approaches have something important in common: a strong system that, given certain assumptions, guarantees the lack of memory safety problems.

That's a very good point and I'm not arguing against it. It's just that even if it's true -- and I'm more than willing to concede that it is -- it still doesn't answer the question, which is: what is the best approach to achieving a required level of correctness?

The "soundness" approach says, let's guarantee, with some caveats, certain technical correctness properties that we can guarantee at some reasonable cost. The problem is that that cost is still not zero, and my hypothesis is that it's not negligible. My personal perspective is that Rust might be sacrificing too much for that, but that's not even what has disappointed me with Rust the most. I think -- and I could be wrong -- that Rust sacrifices more than it has to just to achieve that soundness, by also paying for "zero-cost abstractions," which, for my taste, is repeating C++'s biggest mistake, namely sacrificing complexity for the appearance of high-level abstraction that may look convincing when you read the finished code (perhaps more convincing in Rust than in C++), but falls apart when you try to change it. Once you try to change the code you're faced with the reality that low-level languages have low abstraction; i.e. they expose their technical details, whether it's through code -- as in C and Zig -- or through the type system, as in Rust. Zig says, since the abstraction in low-level languages is low anyway (i.e. we cannot really hide technical details) there is little reason to pay in complexity for so-called zero-cost abstractions.

Language simplicity goes a long way, even as far as sound formal verification is concerned. For example, there are existing sound static analysis tools that can guarantee no UB for C -- but not the complete C++, AFAIK -- with relatively little effort. It's not yet clear to me whether Zig, with its comptime, is simple enough for that, though.

It is my great interest in software correctness, together with my personal aesthetic preferences, that has made me dislike language complexity so much and made me a believer in "when in doubt -- leave it out."


I would like to chime in by noting that Rust's mature form is born of a very specific scenario, which is the web browser, software so complex that the "zero-cost" element is more like "actually possible to optimize" in practice. And a great deal of that complexity is accidental in some form, a result of accreted layers.

And in that respect, it's not really the kind of software anyone needs to aspire to; aspiring to write programs simple enough that Zig will do the job is much more palatable.


I don't take issue with the "zero-cost" part -- Zig and C, like every low-level language, have that -- but with the non-abstraction-"abstraction" part, which is rather unique to C++ and Rust. Rust has become a modern take on C++, and I'm not sure it had to be that for the sake of safety; I think it became that because of what you said: it was designed to replace C++ in a certain application with certain requirements. It's probably an improvement over C++, but, having never been a big fan of C++, it's not what I want from a modern systems programming language. It seems to me that Rust tries to answer the question "how can we make C++ better?" while Zig tries to answer the question "how can we make systems programming better?"

Of course, Zig has an unfair advantage here in that it is not production-ready yet, and so it's not really "out there," and doesn't have to carry the burden of any real software (there's very little software that Rust carries, but it's still much more than Zig). I admit that when Rust was at that stage I had the same hopes for Rust as I do now for Zig, so Zig might yet disappoint.


> For example, there are existing sound static analysis tools that can guarantee no UB for C -- but not the complete C++, AFAIK -- with relatively little effort.

The only static analysis tools that can guarantee anything about C are doomed to have plenty of false positives. They are much less used in practice than tools like ASan & co., which don't guarantee anything but have far fewer false positives, if any.

They have their use-case, but calling them “little effort” is disingenuous.


> The only static analysis tools that can guarantee anything about C are doomed to have plenty of false positives.

That's where the effort comes in. Those false positives are removed by adding annotations or changing code.

> They have their use-case, but calling them “little effort” is disingenuous.

I said they require relatively little effort, because it's definitely less effort than a rewrite in Rust. This isn't hypothetical. >1MLOC programs in the industry today are checked in this way. If you have an existing large C program and decide that you need your program to have no UB, those sound static analysis tools are the most cost-effective way of doing that today.


> That's where the effort comes in. Those false positives are removed by adding annotations or changing code.

With the former you lose all guarantees and fall back to a safe/unsafe duality; with the latter you need to rethink how your code works to comply with the analyzer's mindset: in the end it's pretty much like Rust, but in an ad-hoc way, much less ergonomic.

> I said they require relatively little effort, because it's definitely less effort than a rewrite in Rust.

This is a ridiculous way to save your argument. You are well aware that this is not what “relatively easy” means.

It is comparatively easier to deploy such tools than to rewrite a whole project in Rust, but the payoff is also lower in the long run (Rust offers more than zero-UB), so we might get to a point (once tooling[1] and the hiring pool make it sustainable) where the latter option makes more sense for most people (unless C is mandatory, for portability reasons for instance).

[1]: I'm especially thinking about C2Rust here https://immunant.com/blog/2020/01/quake3/


> With the former you lose all guarantees and fall back to a safe/unsafe duality

This is simply not true. The annotations are checked. It's exactly like adding type annotations when inference fails.

> with the latter you need to rethink how your code works to comply with the analyzer's mindset

No. I'm talking about adding something like a bounds check in a function entry.
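
Rendered in Zig for consistency with the rest of the thread (the tools under discussion analyze C, and this function is purely illustrative), the kind of change meant is just making the precondition explicit at the function boundary:

    const std = @import("std");

    // The precondition is now visible at the entry point; a sound analyzer
    // can check that callers satisfy it instead of reporting a possible
    // out-of-bounds access inside the function.
    fn nth(items: []const u32, i: usize) u32 {
        std.debug.assert(i < items.len);
        return items[i];
    }

    test "in-bounds access" {
        const xs = [_]u32{ 10, 20, 30 };
        try std.testing.expectEqual(@as(u32, 20), nth(&xs, 1));
    }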

> in the end it's pretty much like Rust, but in an ad-hoc way, much less ergonomic.

Except that it is cheaper than a rewrite in Rust, which is one of the several reasons why this is currently the preferred approach in industry segments that require certain correctness guarantees. I don't know if you know this, but Rust isn't exactly making big headway in the safety/security-critical software world, especially, though not only, in embedded (for a multitude of reasons). Those sound static analysis tools, on the other hand, are showing nice growth.

Also, even if Rust were more ergonomic than this, there are alternatives that I think will be more ergonomic than Rust. I.e. it's not enough to be better than C++; if you want to get the people currently using C/C++ you need to be better than C/C++ in a way that justifies the transition cost and better than the other alternatives.

> You are well aware that this is not what “relatively easy” means.

Relatively easy means easier than all or most other available options. Anyway, that's what I meant.

> but the payoff is also lower in the long run (Rust offers more than zero-UB)

Nobody knows about the long term payoff Rust gives you because few people have had sufficient long term experience with it. It could be large, small, nil, or negative. And there are other languages as well. Zig, when it's available, might well have a bigger payoff than Rust (that's my current guess), and in any event, few people in the low-level programming space who aren't currently using C++ are even thinking about, let alone considering, Rust. Not that anyone is thinking about Zig, but at least Zig is aiming at C shops as well.

Again, as someone who has been using formal methods for some years now, I can tell you that nothing is obvious in software correctness, and no one knows what the best way to achieve it is (although we do know some best practices, and we have some answers to more specific questions).

> so we might get to a point (once tooling[1] and the hiring pool make it sustainable) where the latter option makes more sense for most people

You are assuming that Rust is the preferable choice. I no longer think it will be. BTW, here's "C2Zig": https://youtu.be/wM8vz_UPTE0


> Except that it is cheaper than a rewrite in Rust, which is one of the several reasons why this is currently the preferred approach in industry segments that require certain correctness guarantees. I don't know if you know this, but Rust isn't exactly making big headway in the safety/security-critical software world, especially, though not only, in embedded (for a multitude of reasons). Those sound static analysis tools, on the other hand, are showing nice growth.

Rust is way too new for that, obviously. And as I said, because the tooling and hiring pool are not there yet, it would be a critical mistake to attempt such a move at the moment.

For new projects however, Rust is a really interesting bet. (I was until recently working on a new medical robot whose software was mostly Rust, and the speed at which we got it working was really exciting!)

> in any event, few people in the low-level programming space who aren't currently using C++ are even thinking about, let alone considering, Rust.

That's not my experience. Not many are considering it, for a lot of reasons (too new, not enough people mastering it, resistance to change, etc.). But “thinking about” is another story ;).

> You are assuming that Rust is the preferable choice

Zig isn't really a choice at this point. It might become one in a few years, but there's still a long way to go.


> I think -- and I could be wrong -- that Rust sacrifices more than it has to just to achieve that soundness, by also paying for "zero-cost abstractions," which, for my taste, is repeating C++'s biggest mistake, namely sacrificing simplicity for the appearance of high-level abstraction that may look convincing when you read the finished code (perhaps more convincing in Rust than in C++), but falls apart when you try to change it.

The argument here seems to be that there can be no real abstraction in low-level languages, so there's no point providing language features for abstraction. The premise seems clearly false to me, because even C has plenty of abstraction. Functions are abstractions. Private symbols are abstractions. Even local variables are abstractions (over the stack vs. registers).

People often argue that Rust is too complicated for its goal of memory safety. It's easy to say that, but it's a lot harder to list specific features that Rust has that shouldn't be there. In fact, as far as I'm concerned Rust is an exercise in minimal language design, as the development of Rust from 0.6-1.0 makes clear (features were being thrown out left and right). Most of the features that look like they're there solely to support "zero-cost abstractions"—traits, for example—are really needed to achieve memory safety too. For instance, Deref is central to the concept of smart pointers, and, without smart pointers, users would have to manually write Arc/Rc/Box in unsafe code every time they wanted to heap-allocate something.

> Language simplicity goes a long way, even as far as sound formal verification is concerned. For example, there are existing sound static analysis tools that can guarantee no UB for C -- but not the complete C++, AFAIK -- with relatively little effort. It's not yet clear to me whether Zig, with its comptime, is simple enough for that, though.

The most important static analyzers used in industry today are Clang's sanitizers, which work on both C and C++. The most important such sanitizers actually work at the LLVM level, which means they work on Rust as well [1]! The days of having to write a compiler frontend for static analysis are long gone. We have excellent shared compiler infrastructure that makes it easy to write instrumentation that targets many low-level languages at once. (Even in the world of C, this is necessary. Plain old C99 is an increasingly marginal language, because the really important code, such as the Windows and Linux kernels, is written in compiler-specific dialects of C, which means that a static analysis tool that isn't integrated with some popular compiler infrastructure will have limited usefulness anyway.)

> It is my great interest in software correctness, together with my personal aesthetic preferences, that has made me dislike language complexity so much and made me a believer in "when in doubt -- leave it out."

Again: easy to say, harder to name the specific Rust features you think should be removed.

[1]: https://github.com/japaric/rust-san


> The argument here seems to be that there can be no real abstraction in low-level languages, so there's no point providing language features for abstraction.

My argument is that low-level languages allow for low abstraction, i.e. there's little that they can abstract over, where by abstraction I mean hiding internal implementation details in such a way that when they change, the consumer of the construct, or "abstraction," does not need to change; if it does, then the construct is not an abstraction. With "zero-cost abstraction," C++/Rust offer constructs that syntactically appear as if they were abstractions (e.g. static vs. dynamic dispatch; subroutine vs. coroutine call), but in reality aren't. I am not aware of any other language (unless Ada has changed considerably since I last used it in the early '00s) that values this idea to such a great extent.

The things you mentioned are abstractions only in the sense that the user doesn't need to know how the compiler implements them; in that respect, every language construct, including `if` (e.g. in Java, not every if is compiled into a branch) is an abstraction. I speak of the language's ability to allow users to abstract, and in that regard all low-level languages provide for poor abstraction. Without a JIT, the caller needs to know the calling convention; without a tracing GC the caller needs to know how the memory pointed to by a returned value is to be deallocated. The question is how much you try to make sure that all this knowledge is implicit in the syntax.
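
Zig's allocator convention illustrates the "through code" end of that spectrum, making the deallocation contract explicit in the signature (a minimal sketch; std.mem.Allocator and std.testing.allocator are real, the function is illustrative):

    const std = @import("std");

    // The signature itself tells the caller: memory comes from `allocator`,
    // and the caller owns (and must free) the returned slice.
    fn duplicate(allocator: std.mem.Allocator, s: []const u8) ![]u8 {
        return allocator.dupe(u8, s);
    }

    test "caller frees" {
        const copy = try duplicate(std.testing.allocator, "hello");
        defer std.testing.allocator.free(copy);
        try std.testing.expect(std.mem.eql(u8, copy, "hello"));
    }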

> People often argue that Rust is too complicated for its goal of memory safety.

I didn't know people often say that. I said it, and I'm not at all sure that's the case. I think that Rust pays far too heavy a price in complexity. It's too heavy for my taste whether or not it's all necessary for sound memory safety, but if it isn't, all the more the shame.

> Most of the features that look like they're there solely to support "zero-cost abstractions"—traits, for example—are really needed to achieve memory safety too.

OK, so I'll take your word for it and not say that again.

> The most important static analyzers used in industry today are Clang's sanitizers

I'm talking about sound static analysis tools, like TrustInSoft's, that can guarantee no UB in C code. I think that particular tool might support some subset of C++, but not all of it. The sanitizers you mention rely on concrete (aka "dynamic") interpretation and are, therefore, usually unsound. Sound static analysis requires abstract interpretation, of which type checking and type inference are special cases. Just as you can't get all of Rust's guarantees by running Rust's type-checker on LLVM bitcode, so too you cannot run today's most powerful sound static analysis tools -- which are already strong enough to absolutely guarantee no UB in C at little cost -- on LLVM bitcode; they require a higher-level language. Don't know about tomorrow's tools.

> Again: easy to say, harder to name the specific Rust features you think should be removed.

I accept your claim. In general, I don't like to isolate language features; it's the gestalt that matters, and it's possible that once Rust committed to sound memory safety everything else followed. But let me just ask: are macros absolutely essential?


> But let me just ask: are macros absolutely essential?

Yes, for two main reasons: (1) type-safe printf; (2) #[derive]. Nothing is "absolutely essential" in any Turing-complete language, of course, but printf comes pretty close. The only real alternative would have been to use the builder pattern for formatting like C++ does (i.e. the << operator), but nobody seriously proposed that as the aesthetics are really bad and setting formatting flags like precision is problematic. And without custom #[derive], you couldn't serialize types, which is pretty important in a modern language.

Note that these are both cases (maybe the primary cases) in which OCaml has built-in ad-hoc solutions that feel very un-"systemsy". In OCaml, format strings are built into the language, as are the equivalents of functions like PartialEq and Debug, the latter of which are implemented via magic built-in functions that call private internal reflection APIs. There was a desire to do better in Rust, as it was felt that these are the ugliest parts of OCaml, and so macros were part of Rust from the very early days.


OK. I mean, personally I would swallow a lot of ugly special cases before adding macros, but it's certainly in line with other people's aesthetic preferences and Rust's "C++ spirit" of a low-level language with high-level language features. I can see the logic in saying that since we need lots of fancy mechanisms for sound safety anyway, what's one more gonna hurt?

BTW, Zig manages to do both things without macros and without any special cases in the compiler. TBF, Zig's approach was unfamiliar to me until I saw it in Zig, and it is Zig's "brilliant idea" (even if it had originated elsewhere), so it's OK if Rust simply didn't consider it; Rust certainly has its own brilliant idea.
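
For anyone curious, a minimal sketch of those two cases (the struct is illustrative; comptime-checked format strings in std.fmt and @typeInfo reflection are the actual mechanisms, though spellings vary a bit across Zig versions):

    const std = @import("std");

    const Point = struct { x: i32, y: i32 };

    pub fn main() void {
        const p = Point{ .x = 1, .y = 2 };
        // The format string is parsed at compile time by ordinary Zig code
        // in std.fmt; a wrong placeholder is a compile error, no macro needed.
        std.debug.print("x={} y={}\n", .{ p.x, p.y });
        // Comptime reflection over fields stands in for derive-style codegen.
        inline for (@typeInfo(Point).Struct.fields) |f| {
            std.debug.print("field: {s}\n", .{f.name});
        }
    }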


> As someone who currently mostly programs in C++, lack of memory safety barely makes my top three concerns.

Isn't having a UB-free JVM a noteworthy goal though? Especially if it gets used in life-critical systems such as avionics or autonomous cars.


UB-freeness is not a goal in and of itself. It's shorthand for a certain kind of technical (i.e. non-functional) correctness, which, in turn, is related in some ways to functional correctness, and it's improving functional correctness (and I include security here) that's the goal. Is the most effective way to achieve that to work toward completely eliminating undefined behavior? I'm not at all sure.


> Rust, BTW, doesn't entirely guarantee memory-safety, either

> We always make some compromises on soundness; the question is where the sweet-spots are.

Excellent point. There are complex tradeoffs and the "rust is safe" slogan is just a slogan.


It's not a slogan; there's a published type safety proof of the core of Rust. You may not like the assumptions that underlie that proof, but it's unarguable that "safety" has a very concrete meaning in the context of the language.


> reducing bugs of kinds A, B and C -- for a similar cost -- may well be safer.

What kind of bugs do you have in mind that Zig would prevent and Rust doesn't? I can think of several that Rust prevents and Zig doesn't, thanks to its affine type system, but I've yet to see a comparable safety benefit in Zig.


Preventing bugs with soundness is not necessarily the best, and certainly not the only, way to reduce bugs. You can reduce bugs by making code easier to read (code review has been empirically shown, time and again, to be the most cost-effective bug-reduction technique), and by making code faster to write and to compile, thus leaving more time for tests and other verification techniques.

If sound elimination of bugs were the best way to write programs, we'd all be writing in Isabelle, Coq or Idris, except even those of us who do -- in fact, especially those of us who do -- know that it's not the best way to write programs.


> but it's impossible to argue that not trying to prevent UAF is somehow safer.

It might be possible. Someone might prove that preventing UAF necessarily increases complexity somewhere else. Like one of those laws of thermodynamics, but for software.


comptime does not replace macros though... there is no way to manipulate the AST in Zig, nor will there ever be, according to the author.


You can't manipulate ASTs (ie, Zig code), but nothing stops you from parsing string literals however you want.

For example, I wrote a PEG-like parser combinator library in Zig. Using it currently [looks like this](https://github.com/CurtisFenner/zsmol/blob/87de4c77dd8543011...). However, as a library, I ^could provide a function that looks like

    pub const UnionDefinition = comb.fromLiteral(
        \\ _: KeyUnion
        \\ union_name: TypeIden
        \\ generics: Generics?
        \\ implements: Implements?
        \\ _: PuncCurlyOpen
        \\ fields: Field*
        \\ members: FunctionDef*
        \\ _: PuncCurlyClose
    );
etc. But I find the code readable enough as it is that I haven't wanted to spend time implementing such a library.

[^]: Being able to create brand new types at `comptime` isn't [yet implemented](https://github.com/ziglang/zig/issues/383), so this can't quite be done yet, though you could fake it with `get`/`set` methods instead of real fields


Well, it is intentionally weaker than macros (and I agree with Zig's designer that that's a very good thing, though it is a matter of taste), but it does replace many of the cases where in Rust you'd have to use macros (or the preprocessor in C/C++). So it replaces macros everywhere Zig deems their usage reasonable.


It is a legit design choice, but it does detract from your comment about language complexity. Not having AST macros inherently adds complexity to a language by requiring features to be built into the compiler rather than being implemented as libraries.


Those are different kinds of complexity. You're talking about the effort required by the implementor of the compiler. I'm talking about the effort required by the programmer using the language.


Then you aren't talking about complexity (an objective quality), you are talking about difficulty to read (a subjective quality relative to the reader). There is no doubt that macros can make a given piece of code harder to read, if the reader is unfamiliar with the macro being used. Complexity describes how intertwined different pieces of something are internally, which has nothing to do with a given vantage point.


No, that's just how Rich Hickey describes complexity; it's hardly a universal definition. For example, in computer science, the complexity of a task is often a measure of the effort, in time or memory, required to perform it.


English is certainly not free of ambiguity, but in the past i've seen you heavily emphasize precision in word choice, so it's surprising to see you de-emphasize it here. Nobody is a final arbiter of definitions, but the distinction i'm making is not a trivial one, and Hickey isn't the only one to have made it. Even thinking about it colloquially, how often do you follow the word "complex" with an infinitive verb describing an action? A rube goldberg machine is complex...and hard to build!


I'm not deemphasizing it, I'm just saying that we're talking about different meanings of "complexity" here and there is no well-accepted definition. My "complexity" refers to the effort required by the programmer when understanding programs written in the language.


Fair enough, ron. I won't belabor it further. I'll just leave this: Long ago, after coming across a very useful distinction between the words "practical" and "pragmatic," i intentionally changed my usage of those words as a result. Not because a charismatic person told me to, but because it was useful. If a distinction is useful to make, start making it, my man!


Which is the right call really. It is valuable to have the code you're looking at actually be what it appears to be.


I don't agree.

For many domains, being able to implement a domain-specific language with a set of expressive rules for the domain is an incredible productivity boost, and it also prevents many mistakes because they simply cannot be represented.

Not being able to manipulate the AST means that you are restricted in the embedded DSLs that you can provide. And embedded DSLs encompass code generation for:

- state machines

- parsing grammars (PEGs for example)

- shader languages

- numerical computing (i.e. having more math like syntax to manipulate indices)

- deep learning (neural network DSL)

- HTML templates / generators

- monad composition

There is a reason most people are not building in Assembly anymore; there is a right level of abstraction for every domain. A language that provides building blocks for domain experts to build up to the right level of abstraction is very valuable.
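
To make that concrete with a deliberately tiny, hypothetical sketch: even without AST macros, comptime code can evaluate a declarative spec during compilation, which is the seed of the embedded-DSL style listed above:

    const std = @import("std");

    // Hypothetical toy: count comma-separated state names at compile time,
    // so a malformed spec fails the build instead of failing at runtime.
    fn countStates(comptime spec: []const u8) usize {
        var n: usize = 1;
        for (spec) |c| {
            if (c == ',') n += 1;
        }
        return n;
    }

    const num_states = countStates("idle,running,stopped");

    comptime {
        std.debug.assert(num_states == 3);
    }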


The question is, as always, at what cost?

Low-level languages (aka "systems programming" languages) already suffer from various constraints that increase their accidental complexity. Is it really necessary to complicate those particular languages further to support embedded DSLs?

I don't think there's a universal right or wrong answer here, but there is certainly a big question.


Ironically, the lack of macros leads to an explosion of ad-hoc, extra-language DSLs. Look at rules engines, for example. In Drools (java), you have to write rules with a special language, DRL. Meanwhile in Clara (clojure) you write your rules in clojure. Macros simplify languages, they don't complicate them.


It's always a trade-off, and drawing the line correctly is tricky. Things like macros, operator overloading, metaprogramming, virtual calls, exceptions or even function pointers effectively "obfuscate" code by having non-obvious side effects if you don't have the full context. On the other hand, if you push the idea too far in the other direction you end up with basically assembly, where you have an ultra-explicit sequence of instructions to execute.

It's very easy to come up with examples of terrible abuse of these features that lead to bad code (like for instance if somebody was insane enough to overload the binary shift operator << to, I don't know, write to an output or something) but it also gives a lot of power to write concise and expressive code.



