As far as I can tell, grammars are a way of organizing a bunch of regexes together so you can run them against a document to match and extract all the data. That might make it easier to write parsers, but parsing is a small, though admittedly annoying, subset of programming problems. It doesn't seem like a general enough feature to qualify as "profound".
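To make the "bunch of regexes organized together" view concrete, here's a minimal sketch in Python: one named-group regex per token class, combined into a single scanner. The token names and patterns are made up for illustration, not taken from any particular grammar engine.

```python
import re

# Toy illustration: a "grammar" as a set of named regexes run together.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("SKIP",   r"\s+"),   # whitespace, discarded below
]
SCANNER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(text):
    # finditer walks the input left to right, classifying each match
    # by which named group fired.
    for m in SCANNER.finditer(text):
        if m.lastgroup != "SKIP":
            yield (m.lastgroup, m.group())

print(list(tokenize("x = 40 + 2")))
# → [('IDENT', 'x'), ('OP', '='), ('NUMBER', '40'), ('OP', '+'), ('NUMBER', '2')]
```

Real grammar systems add recursion and structure on top of this, which is exactly what flat regexes can't express on their own.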
I have no idea what you've just said, but it doesn't make any sense.
Of course parsers are described by recursive functions; that's because recursive functions naturally describe a push-down automaton or a DFA, which, you know, is exactly what a parser needs.
But describing a parser is not easy, even in lispy languages ... you still need a declarative syntax (like BNF or PEGs) for those rules, you still need a strategy (like LL(*) or LALR(1)), you still need to deal with context-dependent constructs, and you still need an AST that must be optimized.
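The "recursive functions as a push-down automaton" point can be sketched concretely. Below is a minimal hand-written recursive-descent (LL) parser in Python for a toy arithmetic grammar; each nonterminal becomes one function, and the call stack plays the role of the automaton's stack. The grammar and AST shape (nested tuples) are illustrative choices, not anyone's production parser.

```python
import re

# Toy grammar, in the declarative BNF-style notation the rules need anyway:
#   expr   := term (('+' | '-') term)*
#   term   := factor (('*' | '/') factor)*
#   factor := NUMBER | '(' expr ')'

def tokenize(s):
    return re.findall(r"\d+|[+\-*/()]", s)

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def eat(expected=None):
        nonlocal pos
        tok = tokens[pos]
        if expected is not None and tok != expected:
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        pos += 1
        return tok

    def expr():                       # expr := term (('+'|'-') term)*
        node = term()
        while peek() in ("+", "-"):
            node = (eat(), node, term())
        return node

    def term():                       # term := factor (('*'|'/') factor)*
        node = factor()
        while peek() in ("*", "/"):
            node = (eat(), node, factor())
        return node

    def factor():                     # factor := NUMBER | '(' expr ')'
        if peek() == "(":
            eat("(")
            node = expr()
            eat(")")
            return node
        return int(eat())

    tree = expr()
    if peek() is not None:
        raise SyntaxError(f"trailing input at {peek()!r}")
    return tree

print(parse(tokenize("1 + 2 * 3")))
# → ('+', 1, ('*', 2, 3))
```

Note that even this tiny example already needed the things listed above: a declarative grammar (the BNF comments), a strategy (LL with one token of lookahead), and an AST. Context-dependent constructs and AST optimization would be on top of that.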
Text parsing is also not one of Lisp's strengths. Many implementations don't even have proper regexp engines ... but indeed, it's easy to translate any language to LISP as long as your goal is not to generate something human-readable.