Has it absorbed features from functional Lisp dialects, or does it just have features in common?

Early Python was inspired by ABC and C. ABC has a Python-like REPL, strong dynamic typing, automatic memory management, memory safety, a nice collection of built-in types, arbitrarily long integers, and rational numbers. C has low-level bit operations, an easy way to call into C code, a ternary expression-level if-else operator (and make no mistake, Python's conditional expression is a ternary operator distinct from the if-else statement), and hexadecimal and octal number literals.
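A quick sketch of that distinction, for anyone who hasn't run into it:

    x = -3

    # conditional *expression* (ternary): yields a value, usable inside expressions
    sign = "negative" if x < 0 else "non-negative"

    # if-else *statement*: controls flow, produces no value by itself
    if x < 0:
        sign = "negative"
    else:
        sign = "non-negative"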

There is at least a little influence from functional Lisps (Mypy lists Typed Racket as one of its influences), but a lot of what you list was taken from different languages, or is distinctly un-Lisp-like in Python, or was in Python from the start rather than absorbed over time, or is just obvious enough for a very high-level language to have been reinvented independently.

It's also important to distinguish between internal complexity and user complexity. Arbitrarily long integers are complex to implement, but easier to use than fixed-length integers. Even features that do have a lot of user-facing complexity can be very easy to use in the common case. Python is hideously complex if you explore all the tiny details, but I'm not sure that it's all that complex to use. But I haven't used Clojure and Racket so I can't really comment on them.

> I even think that Rust, while clearly being targeted at a very different domain, is more streamlined and well-composed than Python.

I think I agree. But Rust has the benefit of only dealing with a measly five years of backward compatibility. Python has accumulated complexity, but the alternative would have been stagnation. If Python hadn't significantly changed since 1996 it would be more streamlined but also dead.

> What is also worth mentioning is that these functional languages have seen steady improvements in compilers and performance of generated code, with the result that Rust code is now frequently at least as fast as C code

I don't think Rust suffers from the issues that make functional languages hard to compile, so that might be a bad example. In Rust code it's unambiguous where memory lives. It has functional features augmenting a procedural model, rather than a functional model that has to be brought down to the level of procedural execution. So it might be "merely" as hard to optimize as C++.



> C has low-level bit operations,

As an unrelated fun fact, Common Lisp has more low-level bit operations than C, such as "and complement of integer a with integer b" or "exclusive nor":

http://www.lispworks.com/documentation/HyperSpec/Body/f_loga...

It also has logcount, which is the counterpart to the implementation-specific popcount() in C.
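For comparison, here is roughly what those operations look like in Python, whose ints also behave like arbitrarily wide two's-complement values (a sketch only; int.bit_count() needs Python 3.10+):

    a, b = 0b1100, 0b1010

    andc1 = ~a & b         # "and complement of a with b" (CL's logandc1)
    xnor  = ~(a ^ b)       # "exclusive nor" (CL's logeqv), sign-extended
    ones  = a.bit_count()  # population count (CL's logcount)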


> but a lot of what you list was taken from different languages, or is distinctly un-Lisp-like in Python, or was in Python from the start rather than absorbed over time, or is just obvious enough for a very high-level language to have been reinvented independently.

Here, Python clearly borrows from functional languages. And there are basically two families of functional languages: on the one hand, strongly and statically typed languages like ML, OCaml, Haskell, Scala, and F#; on the other hand, dynamically typed Lisps and Schemes.

My point is that all these adopted features are present in the latter category.


How many of these features are present in neither statically typed functional languages nor dynamically typed procedural languages?

My impression is that Python has a) a lot of bog-standard dynamic features, and b) a few functional features (like most languages nowadays).

Group a) overlaps with functional Lisps, but no more than with ABC and Perl and Lua, so functional Lisps are not a great reference point.

Group b) overlaps with functional Lisps, but no more than with ML and Haskell, or even modern fundamentally-procedural languages like Kotlin(?) and Rust, so functional Lisps still aren't a great reference point.

It's mostly parallel evolution. It can be interesting to compare Python to functional Lisps because similarities are similarities no matter where they come from.

But I don't think that functional Lisps neatly slot into an explanation as to why Python looks the way it does. In a world where functional Lisps didn't exist Python might not have looked all that different. In a world where ABC and Modula didn't exist Python would have looked very different, if it existed at all.


> Group b) overlaps with functional Lisps, but no more than with ML and Haskell, or even modern fundamentally-procedural languages like Kotlin(?) and Rust, so functional Lisps still aren't a great reference point.

Both of them stem from the lambda calculus. The difference between ML languages and Lisps is the type system. To do functional operations like map, foldl, filter, and reduce in compiled ML-style languages with strong static typing, one needs a rather strong and somewhat complex type system. When you try that at home with a weaker type system, like the one C++ has, the result is messy and not at all pleasant to write.

Lisps/Schemes do the equivalent thing with strong dynamic typing, with good Lisp compilers doing a lot of type inference for speed.
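In Python, the dynamic-typing version of those operations is just this (a minimal sketch, with no type machinery anywhere):

    from functools import reduce

    values = [1, 2, 3, 4, 5]

    squares = list(map(lambda n: n * n, values))          # [1, 4, 9, 16, 25]
    evens   = list(filter(lambda n: n % 2 == 0, values))  # [2, 4]
    total   = reduce(lambda acc, n: acc + n, values, 0)   # 15

Any type mismatch only surfaces when the code actually runs.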

> It's mostly parallel evolution. It can be interesting to compare Python to functional Lisps because similarities are similarities no matter where they come from.

Lisps (and, for the field of numerical computation, also APL and its successors) had and continue to have a lot of influence. They are basically at the origin of the language tree of functional programming. The MLs are a notable fork, and apart from that there are basically no new original developments. I would not count it as an original development that other languages like Java or C++ have picked up some FP features such as lambdas, too.

What's interesting, however, is the number of features that Python 3 now has in common with Lisps. Lisps are minimalist languages: they have only a limited number of features, which fit together extremely well.

And if all these adopted features were not basically arbitrary, unconnected, and easy to bolt on, why does Python have such notably bad performance and, like C++, such an explosion in complexity?


Map, filter and lambda were originally suggested by a Lisp programmer, so those do show functional Lisp heritage. (I don't know of similar cases.) But they're a small part of the language. Comprehensions are now emphasized more, and they come from Haskell, and probably SETL originally—no Lisp there.
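Side by side, the Lisp-heritage style versus the comprehension style Python now emphasizes (a small sketch):

    values = [1, 2, 3, 4, 5]

    # map/filter/lambda: the part with functional-Lisp heritage
    doubled_evens = list(map(lambda n: n * 2, filter(lambda n: n % 2 == 0, values)))

    # the comprehension equivalent (Haskell/SETL lineage)
    doubled_evens = [n * 2 for n in values if n % 2 == 0]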

> They are basically at the origin of the language tree of functional programming.

That's fair. But that only covers Python's functional features, which aren't that numerous.

> if all these adopted features were not basically arbitrary, unconnected, and easy to bolt on

I never said they weren't! I just don't think they're sourced from functional Lisps.

>why does Python have such notably bad performance

Because it's very dynamic, not afraid to expose deep implementation details, and deliberately kept simple and unoptimized. In the words of Guido van Rossum: "Python is about having the simplest, dumbest compiler imaginable."

Even if you wanted to, it's hard to optimize when somewhere down the stack someone might call sys._getframe() and start poking at the variables twenty frames up. That's not quite a language design problem.
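For anyone who hasn't seen it, sys._getframe really does let code reach up the stack like that (a small sketch; CPython-specific):

    import sys

    def meddle():
        # two frames up: the caller's caller
        print(sys._getframe(2).f_locals["secret"])

    def middle():
        meddle()

    def outer():
        secret = 42
        middle()   # prints 42, even though 'secret' was never passed along

    outer()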

PyPy is faster than CPython but it goes to great lengths to stay compatible with CPython's implementation details. A while ago I ran a toy program that generated its own bytecode on PyPy, to see what would happen, and to my surprise it just worked. I imagine that constrains them. V8 isn't bytecode-compatible with JavaScriptCore, at least to my knowledge.

The most pressing problems with Python's performance have more to do with implementation than with high-level language design.

PHP is the king of arbitrary, unconnected, bolted-on features, and it's pretty fast nowadays. Not much worse than Racket, eyeballing benchmarksgame, and sometimes better.

> and, like C++, such an explosion in complexity?

I'm not so sure that it does. I'm given to understand that the problem with C++ is that its features compose badly and interact in nasty ways. Do Python's? Looking at recently added features, I mainly see people complaining about f-strings and the walrus operator, but those are simple syntactic sugar that doesn't do anything crazy.
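Both fit in a couple of lines:

    import re

    name = "world"
    print(f"hello, {name}!")                  # f-string: expression interpolation in a string literal

    if (m := re.search(r"\d+", "id=1234")):   # walrus: bind and test in one expression
        print(m.group())                      # 1234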

Instead of an explosion in complexity, I think there's merely a steady growth in size. People complain that it's becoming harder to keep the whole language in your head, and that outdated language features pile up. I think those are fair concerns. But these new features don't make the language slower (it was already slow), and they don't complicate other features with their mere existence.

The growth isn't even that fast. Take a look at https://docs.python.org/3.9/whatsnew/3.9.html . I wouldn't call it explosive.

I don't know enough C++ to program in it, but the existence of operator= seems like a special kind of hell that nothing in Python compares to.


> Even if you wanted to, it's hard to optimize when somewhere down the stack someone might call sys._getframe() and start poking at the variables twenty frames up. That's not quite a language design problem.

It's hard to optimize only if you accept the tenet that sys._getframe(), and all its uses, must continue to work exactly the same in optimized code.

Instead, you can just declare that it (and any related anti-pattern of the same ilk) won't work in optimized code. If you want the speed from optimized compilation of some code, then don't do those things in that particular code.

The programmer can also be given fine-grained control over optimization, so as to be able to choose how much is done where, on at least a function-by-function basis, if not statement by statement or expression by expression.

It's not written in stone that compiled code must behave exactly like interpreted code in every last regard, or that optimized code must behave like unoptimized code in every regard. They behave the same in those ways which are selected as requirements and documented, and that's it.

In C in a GNU environment, I suspect your Glibc backtrace() function won't work very well if the code is compiled with -fomit-frame-pointer.

In the abstract semantics of C++, there are situations where the existence of temporary objects is implied. These objects are of a programmer-defined class type and can have constructors and destructors with side effects. Yet C++ allows complete freedom in optimizing away temporary objects.

The compiler could help by diagnosing, as much as possible, situations where it's not able to preserve this kind of semantics. For example, if a sys._getframe call is being compiled with optimizations that rule it out, a warning could be issued that it won't work, and the generated code for it could blow up at run time if stepped on.

One way in which compiled code in a dynamic language could differ from interpreted code (or less "vigorously" compiled code) is safety. For that, you want some fine-grained, explicit switch which expresses "in this block of code it's okay to make certain unsafe assumptions about values and types". Then the optimizer removes checks from the generated code, or chooses unsafe primitives from the VM instruction set.

The code will then behave differently under conditions where the assumptions are violated. The unoptimized code will gracefully detect the problems, whereas the vigorously compiled code will behave erratically.
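Python already ships a coarse analogue of exactly this trade-off: the -O flag strips assert statements, so the checked run detects the bad input gracefully and the "optimized" run charges ahead (toy example, hypothetical file name):

    # fast_div.py
    def scaled(values, divisor):
        assert divisor != 0, "divisor must be non-zero"   # check removed under -O
        return [v / divisor for v in values]

    print(scaled([1, 2, 3], 0))
    # python fast_div.py     -> AssertionError: divisor must be non-zero
    # python -O fast_div.py  -> ZeroDivisionError from inside the comprehension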

This entire view can nicely take into account programmer skill levels. Advanced optimization simply isn't foisted onto programmers of all skill levels, forcing them to grapple with issues they don't understand with an impaired ability to debug. You make it opt-in. People debug their programs to maturity without it and then gradually introduce it in places that are identified as bottlenecks.


> In C in a GNU environment, I suspect your Glibc backtrace() function won't work very well if the code is compiled with -fomit-frame-pointer.

Actually, backtraces work correctly without explicit frame pointers (in a typical GNU environment using ELF+DWARF).

The general concept has existed in DWARF since version 2 in 1992. The mechanism used for this is known as Call Frame Information (CFI)[0][1] — not to be confused with Control Flow Integrity, which is unrelated.

Here's some example libgcc code that evaluates CFI metadata[2]; there's similar logic in the libunwind component of llvm[3].

Burning a register on a frame pointer is a big deal on i386 and somewhat less so on amd64; there are other platforms where the impact is even lower. So, just know that you don't have to include FPs to be able to get stack traces.

If you're interested in how to apply these directives to hand-written assembler routines, there are some nice examples in [0].

[0]: https://www.imperialviolet.org/2017/01/18/cfi.html

[1]: https://sourceware.org/binutils/docs/as/CFI-directives.html

[2]: https://github.com/gcc-mirror/gcc/blob/master/libgcc/unwind-...

[3]: https://github.com/llvm/llvm-project/blob/main/libunwind/src...


> I don't think Rust suffers from the issues that make functional languages hard to compile, so that might be a bad example.

The issue is that in functional languages the compiler has more information and can rely on more assumptions, so it can generate more optimized code. This is also why Common Lisp can compile to very fast code (in a few of the cited micro-benchmarks, faster than Java).



