
The industry as a whole errs too much on the side of syntax as the focus of the community. Syntax should fade out of the way. It should be infrastructure, not features. Libraries are the proper object of focus. Libraries can be modified, forked, debugged without dividing a language community. Doing that to a programming language syntax weakens and divides such communities. In contrast, those actions on libraries actually strengthen programming language communities.


Syntax has two problems: (1) it is the most visible thing in programming languages, and (2) there is only one per language.

I believe we should be able to modify, fork, or debug syntax the same way we do libraries. So what about a syntax meant to be read by computer instead of by humans?

    # Factorial example, machine syntax
    fac
      -> int int
      \
        0 -> 1
        n ->
          *
            n
            - n 1
It could be displayed in human-readable form by an IDE, or converted back and forth by suitable preprocessors:

    -- Factorial example, Haskell syntax
    fac :: Int -> Int
    fac 0 = 1
    fac n = n * fac (n - 1)

    // Factorial example, C syntax
    int fac(int 0) { return 1; }
    int fac(int n) { return n * fac(n - 1); }
No more quibbling over prefix vs infix, curly braces vs indentation, familiarity vs terseness… That would take a flame war away from language design and put it where it belongs: the IDE.
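As a rough sketch of the "one tree, many skins" idea (all names here are invented for illustration, not part of any real tool), a few lines of Python can render the same expression tree in a prefix "machine" form and an infix "C-like" form:

```python
# Sketch: one expression tree, two surface syntaxes.
# The AST below encodes n * fac(n - 1); the tuple encoding is invented.

def render_prefix(node):
    """Machine-style syntax: operator first, arguments after."""
    if isinstance(node, tuple):
        op, *args = node
        return "(" + " ".join([op] + [render_prefix(a) for a in args]) + ")"
    return str(node)

def render_infix(node):
    """C-like syntax: binary operators between operands, calls with parens."""
    if isinstance(node, tuple):
        op, *args = node
        if op in {"+", "-", "*", "/"} and len(args) == 2:
            return f"({render_infix(args[0])} {op} {render_infix(args[1])})"
        return op + "(" + ", ".join(render_infix(a) for a in args) + ")"
    return str(node)

body = ("*", "n", ("fac", ("-", "n", 1)))
print(render_prefix(body))  # (* n (fac (- n 1)))
print(render_infix(body))   # (n * fac((n - 1)))
```

The point of the sketch: both renderers walk the same tree, so the "skins" differ only in presentation, which is exactly what the IDE would be responsible for.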


I've seen some people try this. It has problems once you extend past Ye Olde Factorial Example. Convert larger chunks of syntax and it rapidly becomes obvious that you are just putting different skins on the same semantics, and equally obvious that having multiple skins is a disadvantage, not an advantage. All that putting a new language in a skin that looks more "familiar" to you does is fool you into programming Old-Language-in-New-Language; you really need the syntactic differences to remind you that you are not in OldLanguage.

Try translating at the deeper semantic level and you also discover that you basically can't. The C-semantic equivalent of the Haskell is closer to a for loop, and even that isn't an exact match. (An exact match probably involves "goto".) Non-trivial examples just explode in complexity, and that's long before you get to useful code.
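The loop-vs-recursion gap the parent describes can be made concrete (a sketch in Python rather than literal C or Haskell): the recursive definition mirrors the Haskell pattern match, while the idiomatic imperative version mutates an accumulator in a loop — same results, quite different semantics:

```python
def fac_recursive(n):
    # Mirrors the Haskell definition: base case, then a self-call.
    if n == 0:
        return 1
    return n * fac_recursive(n - 1)

def fac_loop(n):
    # The idiomatic imperative shape: a mutable accumulator and a loop.
    acc = 1
    for i in range(2, n + 1):
        acc *= i
    return acc

assert fac_recursive(5) == fac_loop(5) == 120
```

A syntax-level translator maps one surface form to another; it cannot decide that a recursion "really is" a loop, because that is a semantic judgment.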

This is in the class of "things that have been possible for decades but haven't taken off for good reasons", right up there with the purely-visual languages and other classic "good ideas".


The "syntax as infrastructure" idea has been around in many moderately successful incarnations. (Lisp and Smalltalk, to name a couple.) Another way to think of this: shift Syntax into Meta-syntax, so it gets out of the way of the libraries.

No need for "syntax skins" in that case.


> you are quite obviously just putting different skins on the same semantics

That was exactly my intention. Translating "at the deeper semantic level" is reserved for actual compilation. Now you may be right about syntax being useful for differentiating languages. My stance right now is that we should try (or look at the failed attempts you speak of, do you have any link?).

Testing my idea will require quite a bit of work: I need to write a compiler and an IDE with some kind of disciplined editing. I will also have to bootstrap all this stuff, so it passes the minimum credibility threshold. If I ever get to this point, I will (at last) be able to test my language for actual (f)utility.


What a great comment: deep, suggestive, and a pleasure to read from start to finish. I just read it three times.


I'm pretty sure the C example will not compile with a C compiler.

Which may seem like a nit, but the reason is because the semantics of Haskell and C are extremely different. Furthermore, the semantics strongly influence the design of the language. In Haskell, whitespace denotes function application, because the most common thing to do in Haskell is to apply a function. C has semicolon delimited statements, because imperatively executing statements one after the other is a very common thing to do in C programs. In Haskell, you cannot guarantee the exact order in which things happen without going to great lengths (monads, etc.).
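The point about evaluation order can be sketched in Python by simulating laziness with thunks (the helper names are invented for this illustration): in a strict, statement-oriented language each line runs in sequence, while a lazy value only "happens" when something demands it:

```python
events = []

def thunk(name, value):
    """Delay a computation; record when it is actually forced."""
    def force():
        events.append(name)
        return value
    return force

# 'Defined' in this order, but nothing has actually run yet.
a = thunk("a", 2)
b = thunk("b", 3)

# Demand drives evaluation: b is forced before a here.
result = b() * a()
print(events)  # ['b', 'a'] — order of use, not order of definition
```

This is only an analogy, but it shows why a syntax skin cannot paper over the difference: in C the textual order of statements is the execution order; in Haskell it generally is not.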

In short, I don't think offering a skinnable syntax buys much, and it is likely to just generate greater confusion.


Of course it won't compile with a C compiler. It wasn't meant to. I just invented a C-like syntax for pattern matching. I had to, so it could be translated to the machine version, which will be compiled. (As you may have guessed, such a compiler doesn't exist yet.)

On the benefits of skinnable syntax: look at C++, Java, and JavaScript. They have two things in common: their popularity, and their syntax. Coincidence? I think not. By stealing the syntax of C, they build on its original success.

I painfully know that C syntax isn't suited to functional code with gazillions of nested function calls. However, this is a known syntax. It makes learning far easier (or at least looks that way, so people are actually willing to try).


> C has semicolon delimited statements, because imperatively executing statements one after the other is a very common thing to do in C programs.

Actually that would be an argument for making newlines delimit statements, like in, say, Python, instead of requiring extra work from the programmer.


Why? "imperatively executing statements one after the other" does not imply that they have to be on different lines.


They don't have to, but they are, most of the time. Knowing that, "newline as delimiter" yields the lightest syntax.


Plus optional semicolons to put some on the same line as in Python.


Congratulations, you have just invented... the compiler. ;-)

More seriously though:

Interestingly, of the two languages that come closest to this model (Lisp with minimal syntax and Perl with very flexible syntax), one remained in academic obscurity while the other became wildly popular.


You're comparing apples to oranges with Lisp to Perl based on popularity. Perl came on the scene just as Lisp was beginning to lose ground to Unix. They grew up in different worlds. Even then, Lisp never languished in academic obscurity. It was, for a while, very popular in the marketplace.


Wait, Lisp is a programming language. Unix is an operating system. Category error?


I think he's referring to the old Lisp Machines, which were in direct competition with the UNIX machines for a while. I recall reading some old rant about the new-fangled UNIX machines by a die-hard Lisp machine hacker a long time ago (can't find it now). There was something in it about how amazingly quickly a UNIX machine could boot, which was good, since it had to do so rather often.

EDIT: it's the preface to the UNIX haters handbook, of course. A copy is up at http://www.art.net/~hopkins/Don/unix-haters/preface.html


Lisp Machines were never that much in direct competition with UNIX machines as such. They were competing with UNIX + LISP. Research labs who programmed in Lisp bought Unix systems plus a Lisp environment (like Allegro CL) or bought a Lisp system (like Allegro CL) for their existing Unix environment. But Unix systems were used for quite different things than Lisp programming or running Lisp apps.

When Lisp Machines lost in the market, the Lisp users moved to UNIX + Lisp. Companies like Franz and Harlequin/LispWorks that came from that market still exist today.

Lisp was always used on different systems. When Lisp Machines were 'popular' (fewer than ten thousand machines were ever built and sold), Lisp was used on Windows, Macs, mainframes, Unix, and other operating systems.

I should also mention that almost all Lisp Machines were single user personal workstations - initially specially aimed mostly at AI programmers.


Sounds like what Charles Simonyi has been trying to do for the last two decades with Intentional Programming.

Of course, I couldn't make heads or tails of the small demos they've given.


Lots of things are human readable. Chinese for example.


Libraries are heavily influenced by the semantics of the language they're written in / designed for. It's quite easy to see this in the Clojure community today, by looking at which Java libraries are good from the Clojure perspective.

The goodness of a Java library has little to do with syntax (because Clojure can easily call any Java library), but with semantics. For instance, Java commonly uses mutable state. A fundamental principle of Clojure is that data should be immutable. Immutability makes testing easier, and it makes parallelization easier. Clojure was built to make pmap easy. If I have a Java library where I have to remember that Foos are thread-safe but Bars are not, that adds to the incidental complexity of my solution and reduces my ability to bring Clojure's new tools to bear on the problem.
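A rough Python analogue of the Clojure style described above (the `assoc` helper here is my own stand-in for Clojure's persistent update, and `ThreadPoolExecutor.map` stands in for pmap — neither is a real Clojure API):

```python
from concurrent.futures import ThreadPoolExecutor

# Invented stand-in for a persistent update: "changing" data returns a
# new value and leaves the original untouched.
def assoc(d, key, value):
    new = dict(d)
    new[key] = value
    return new

original = {"threads": 1, "safe": True}
updated = assoc(original, "threads", 8)
print(original)  # unchanged — safe to hand to other threads

# Rough analogue of pmap: because the input tuple is immutable,
# the parallel workers cannot interfere with each other, and
# Executor.map returns results in input order.
with ThreadPoolExecutor() as pool:
    squares = list(pool.map(lambda x: x * x, (1, 2, 3, 4)))
print(squares)  # [1, 4, 9, 16]
```

If `original` were a shared mutable object instead, every parallel caller would need to know its thread-safety rules — the incidental complexity the comment describes.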

As another commenter pointed out, syntax of a language is heavily influenced by the semantics of the language. A language starts with a few high level principles, and a favorite hammer or two. The syntax of the language is designed around making those common operations easy. Libraries written in that language will tend to follow those same principles.

Having Java libraries available to Clojure is a huge advantage, but it's not the be-all end-all, because the semantics of those libraries may be incompatible with your new language.

Instead, focus on semantics. Figure out which high level principles are good. What abstractions and design principles result in better, easier to understand solutions? From that follows your libraries and syntax.


While I agree with the general thrust of your focus on infrastructure -- one may argue that a lot of improvement has already come from treating the language as secondary. .NET allows for multiple languages on a single infrastructure, the JVM does as well, and there is Parrot too.

I do think that there is a healthy focus on syntax -- at least my own experience bears it out. It was the syntax of doing things the Python way that allowed me to grok polymorphism and MVC, despite programming in an OO language on .NET for years. I find that if I want to learn a concept, the syntax is extremely helpful. I also find that if I have to program quickly, a syntax close to how I think is helpful, and I get frustrated having to make yet another translation of my thought into a syntax that wasn't well thought out. I would also bet that a lot of people find intuitive the syntaxes I find counter-intuitive, and vice versa. And I'm fine with that! Just because there is difference doesn't mean there has to be division!



