
As far as I can tell, grammars are a way of organizing a bunch of regexes together so you can run them against a document to match and extract all the data. They might make it easier to write parsers, but parsing is a small, though admittedly annoying, subset of programming problems. It doesn't seem like a general enough feature to qualify as "profound".


Parsers are already fairly easy to write in lispy languages. They just have a different name in that context -- "recursive functions".
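For instance, a minimal sketch of the idea (hypothetical code written for this comment, not from any library): a recursive-descent parser for simple arithmetic is just a couple of mutually recursive functions over a token list.

    ;; Each "rule" is an ordinary function returning (values ast remaining-tokens).

    (defun parse-factor (tokens)
      ;; factor := number
      (values (first tokens) (rest tokens)))

    (defun parse-term (tokens)
      ;; term := factor {"*" factor}
      (multiple-value-bind (left rest) (parse-factor tokens)
        (loop while (eql (first rest) '*)
              do (multiple-value-bind (right rest2) (parse-factor (rest rest))
                   (setf left (list '* left right)
                         rest rest2)))
        (values left rest)))

    (defun parse-expr (tokens)
      ;; expr := term {"+" term}
      (multiple-value-bind (left rest) (parse-term tokens)
        (loop while (eql (first rest) '+)
              do (multiple-value-bind (right rest2) (parse-term (rest rest))
                   (setf left (list '+ left right)
                         rest rest2)))
        (values left rest)))

    ;; (parse-expr '(1 + 2 * 3))  =>  (+ 1 (* 2 3)), NIL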

Pick a famous computer scientist, and the odds are better than even that they've written a paper or two on this topic.


I have no idea what you've just said, but it doesn't make any sense.

Of course parsers are described by recursive functions; that's because recursive functions typically describe a push-down automaton or a DFA, which, you know, is exactly what a parser needs.

But describing a parser is not easy, even in lispy languages ... you still need a declarative syntax (like BNF or PEGs) for the rules, you still need a parsing strategy (like LL(*) or LALR(1)), you still need to deal with context-dependent constructs, and you still need an AST, which then has to be optimized.
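To make the first point concrete (a made-up, hypothetical notation, not any particular library's API): even in Lisp the rules usually end up as declarative data, separate from whatever strategy consumes them.

    ;; Hypothetical sketch: the grammar as BNF-ish data, independent of the
    ;; parsing strategy (LL, LALR, PEG, ...) that will interpret or compile it.
    (defparameter *expr-grammar*
      '((expr   -> term (("+" term) *))     ; expr   := term ("+" term)*
        (term   -> factor (("*" factor) *)) ; term   := factor ("*" factor)*
        (factor -> number
                   ("(" expr ")"))))        ; factor := number | "(" expr ")"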

Text parsing is also not one of Lisp's strengths. Many implementations don't even ship a proper regexp engine ... but indeed, it's easy to translate any language to Lisp as long as your goal is not to generate something human-readable.


Lisp has portable libraries for regular expressions.

Like this one: http://weitz.de/cl-ppcre/
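For what it's worth, a quick usage sketch (assumes CL-PPCRE has already been loaded, e.g. via Quicklisp):

    ;; Basic CL-PPCRE usage; assumes (ql:quickload "cl-ppcre") has run.
    (cl-ppcre:scan-to-strings "(\\d{4})-(\\d{2})-(\\d{2})" "due 2010-06-01")
    ;; => "2010-06-01", #("2010" "06" "01")

    (cl-ppcre:all-matches-as-strings "\\d+" "a1 b22 c333")
    ;; => ("1" "22" "333")

    (cl-ppcre:regex-replace-all "\\s+" "too   many    spaces" " ")
    ;; => "too many spaces" (plus a second value indicating a match occurred)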


Thanks for the downvote ... I was more interested in what you meant in your comment than in being proven wrong about regexp engines.



