
> The answer is that the type systems of these languages are not sufficiently developed. They can tell you if you're assigning a string to an int, but not if you're assigning 200 to an int whose type allows only numbers in [10, 100].

I am pretty sure Ada did that in the 80s. :)

I get what you are saying, but the example you gave is a pretty trivial one that has been solved (although not implemented very often...) for a while in more "regular" languages!



I picked the trivial example on purpose to make the point that none of the top 5 programming languages do it, even though it was solved in the 80s.

Recent languages like Swift/Kotlin (which I expect will see increased usage in the coming years) seem to have ignored this area.

https://github.com/peter-tomaselli/swift-refined/blob/master...

https://discuss.kotlinlang.org/t/refinement-types/9753/19


Because it requires a non-trivial amount of static analysis work on the compiler side to make the UX comfortable. Rust's borrow checker mechanism took years to become pleasant for the end user to use. There is still next to no adoption of Ada outside of aerospace, defence, and certain embedded programming sectors.


> Because it requires a non-trivial amount of static analysis work on the compiler side to make the UX comfortable.

How so?

Any modern language that supports operator overloading enables a half-arsed version: create a struct/class for each subtype of integer you want (e.g. `milliseconds`), define binary operators that take your class on both sides, and add the bounds checking yourself. This also stops "primitive" types from being added/subtracted willy-nilly.

As I ranted about below, a simple "milliseconds" type would make life easier all over the place. Who hasn't mixed up an API call where part of it takes seconds and another part takes milliseconds?

That sort of crud should be easy for a simple subtyping system to catch.


As with 99 out of 100 such questions, the answer is that it requires a lot of extra time and effort.

Getting those guarantees at compile time is very, very, very hard. Even Ada SPARK struggles to verify seemingly simple stuff [1, 2]. It is much easier if you trade compile-time verification for runtime verification, which you can do in C# (but not Mono) or via code comments in Java [3].

WRT subtypes: language designers have plenty of motivation to try and push every non-core feature into a library. Programming in Ada kinda sucks because there are soooo many reserved keywords. You are also stuck with bad design decisions forever, like with Java's Date API [4]. Since it's relatively easy to use the newtype pattern to achieve what you want, it probably doesn't factor into the core language design.

Which is to say that I largely agree with you, but the people paying for new languages have a different set of priorities. Making a cat video database go faster is very lucrative, but it's only with the rise of cryptocurrencies that there has been significant funding for high-assurance programming languages outside of defense and aerospace.

[1]: https://www.imperialviolet.org/2014/09/07/provers.html

[2]: https://blog.adacore.com/using-spark-to-prove-255-bit-intege...

[3]: https://www.openjml.org/

[4]: https://stackoverflow.com/questions/1571265/why-is-the-java-...


Thanks for the awesome answer, with citations!

As someone who has to do day-to-day programming in languages with a minimal concept of safety (e.g. JavaScript, C), my concern isn't provable correctness; it's "make my life easier by covering the 90% case."

Number of times I've needed to work with 255-bit integers: 0.

Number of times I've confused units of time passing into APIs: way too many.

Heck, on one system I was working on, I had one API that took distance in thousandths of an inch. Everything else was in metric.

Fixing that sort of stuff could be done w/o great trouble, and it'd be a huge boon to everyday programmers.

Would it be a sexy language feature? No. But it'd make life better.


It is super annoying that it isn't done more.

I'd love to be able to opt in to bounds checking when I feel it is needed.

I have no bloody idea why language designers seem so intent on not letting me choose what perf/safety trade-off is worthwhile for me. :/


Easy, use C++.

While not perfect, given its copy-paste compatibility with C, you can opt-in into bounds checking by using the STL data types, or similar.

Then you can also make use of compiler switches to opt in to bounds checking in operator[]().

With templates, you can opt-in into checked numerics.

While not perfect, and maybe it will be replaced by Rust or another candidate someday, it has offered plenty of opt-in possibilities since the early 90s; no need to keep writing C.


> While not perfect, given its copy-paste compatibility with C, you can opt-in into bounds checking by using the STL data types, or similar.

I've heard of abuses of templates to add something that resembles proper ranged types, but it is not a path I'd want to go down.

Partly because C++ templates are no substitute for a proper type system, even if they are a tool that can be used to force a type system into the language.

A proper type system lets you do things like add centimeters and meters together, divide by seconds, and pop out a proper unit and ensure no overflow happens.

A proper type system lets you easily, without the visual noise or compile time hit that templates have, add bounds checking wherever it is appropriate.

    type Age is range 0 .. 130;
    type Day_Of_Week is (Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday);
    subtype Working_Day is Day_Of_Week range Monday .. Friday;

Take a look at https://en.wikibooks.org/wiki/Ada_Programming/Type_System#De...

For all the flak it gets, Ada did this right long ago. And a huge part of all this is compile time only, and you can opt in/out of the runtime checks as performance dictates.

I'd kill for a proper "milliseconds" type everywhere, one that can only have other milliseconds added to it. And then double the fun with a "seconds" type, and a language that keeps track of which is which for me. Why the heck am I writing timeInMs or timeInSeconds still?

Subtyping integers should be a basic and primitive operation in any modern language.


I know most Wirth based languages relatively well.

Actually your examples are even possible in Pascal.

    type
        Age =  0 .. 130;
        Day_Of_Week = (Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday);
        Working_Day = Monday .. Friday;

However, in 1992, C++ was a viable path forward on PC, whereas Ada was too expensive and Modula-2 compilers weren't getting the love they needed from OS vendors (https://freepages.modula2.org/compi.html), so C++ was the best alternative for moving from Object Pascal into cross-platform development.

So while C++ isn't perfect, its copy-paste compatibility was key to its adoption, but it is also the reason C++ will never match Ada in regard to security.

Just like TypeScript or the JVM guest languages, it has to deal with the semantics of the underlying platform.

In any case, as far as opt-in goes, C++ provides the options; already in 1992, using plain C was a matter of being religiously against C++. RAII, proper string and array data structures with bounds checking, and reference parameters instead of pointers were all available.

Of the five surviving Ada vendors, there is a free option, so anyone who cares about the security issues that C brings into C++ can also have a look at it.

In fact, GenodeOS, seL4, and the NVIDIA Autonomous Vehicle project are all using Ada/SPARK to some extent, as they considered investing in Ada a better option than adopting Rust, given its history in production.


> In fact, GenodeOS, seL4 and NVidia Autonomous Vehicle project are all using Ada/SPARK to some extent,

As far as I am aware, seL4 does not use Ada/SPARK, but C (with some assembler snippets), and Isabelle/HOL (for the proofs). This is also what is written in https://wwwnipkow.in.tum.de/teaching/proof21/SS18/files/14-f...


My mistake, I thought that Componolit work was for seL4, but it is also for Genode.

https://componolit.com/index_en.html#components


> Why the heck am I writing timeInMs or timeInSeconds still?

Because "keeping track of types" is much more complicated than one would think at first. How many languages are there that can do something like (3A * 4V == 12VA) and (12VA / 3A == 4V)? What if you want to use a generic routine that is just about the integers? It will lose track of all the types. I claim that there is still no nice way to do this stuff except in scripting languages (dynamic types).

One of the languages that goes a little deeper into types than mainstream languages, while still being somewhat close to the mainstream, is Haskell. Have you ever tried to use it? You quickly get sucked down into a sea of language and type extensions that you have to use, and that are often mutually incompatible, most of them probably the result of a PhD thesis. And you never quite get what you want: if you want something slightly different, the house of cards tumbles and you have to pick a different set of extensions. That is, unless you're very, very smart and have so much investment in Haskell that you know its current limits, so you're able to choose a model that is still within the bounds of what the language can represent.

For all of us mere mortals and practitioners, we use simple integers and get our job done.

For everyone still clinging to the idea of types, I strongly believe the type-system-vs-runtime-values dichotomy is a dead end. Sometimes you need dynamic computations with types: sometimes the types of expressions are known statically, but sometimes not. That's why most mainstream type systems keep it simple and provide escape hatches (like void pointers or RTTI).


> You get quickly sucked down into a sea of language and type extensions that you have to use, and that are often mutually incompatible

"Often mutually incompatible" suggests that you should be able to name, say, ten such pairs of extensions. Can you? (I've used Haskell professionally for nearly eight years and I can't name one such pair. I could probably find a few marginal cases if I spent 30 mins, but they don't come up in practice.)

> most of them probably the result of a PhD thesis

Can you name any extensions that are the result of "just" a PhD thesis (as opposed to a peer reviewed academic paper published at a top conference)?

> if you want something slightly different the house of cards tumbles and you have to pick a different set of extensions

Can you name any examples where a card house tumbled and someone had to pick a different set of extensions?


It's been a long time since I've tried to do anything in Haskell, but I had to use extensions such as GADTs, type families, existential quantification, kind signatures, multi-parameter type classes...

I can't name you any ones that are mutually incompatible because I simply don't remember what all of those were about anymore, but I remember there was something (I'll take back the "often"), and I'll bet that some of those that I named above are now deprecated or superseded by others.

> Can you name any extensions that are the result of "just" a PhD thesis (as opposed to a peer reviewed academic paper published at a top conference)?

Well, no, I can't, since I don't follow it anymore. And if I understand correctly, you are making my point by stating that a PhD thesis is not yet the level at which language extensions get made. I do know someone who made significant contributions to GHC as part of their PhD work, though.

My actual point was simply that modelling more than basic ideas (like the physical layout of data) with static types is a bad idea. You quickly get sucked into places where you need to express ever more complicated typing deductions, and you end up resorting to unergonomic stuff like templating, dynamic types, various RTTI approaches, and all kinds of language extensions that are really hard to grasp or that are still in the making. As a guy of modest intelligence, I feel like a dog chasing its tail (and I assume that this is how the super-smart guys out there feel a lot of the time, too).

> Can you name any examples where a card house tumbled and someone had to pick a different set of extensions?

Really anything where you're trying to represent a concept with compile-time language primitives (which is what Haskell guys are about a lot of the time, and take pride in). I'm convinced you know this situation yourself: you start off modelling something statically (in type language), then an additional requirement comes in and you don't know how to represent it with compile-time language features anymore, so you end up scratching everything and rebuilding it with a more modest mindset, with most of the stuff handled by runtime values. Maybe a few more runtime case switches or asserts that invalid situations can't happen, but overall simpler and easier-to-understand code.


> I'll bet that some of those that I named above are now deprecated or superseded by others.

All of them are in common use.

> if I understand correctly you are making my point by stating that a PhD thesis is not the level where you already make language extensions

I'm not entirely sure what you're trying to say. I thought you were trying to say that many things that people depend on in Haskell are merely the result of a PhD thesis, i.e. perhaps a messy body of work designed by an inexperienced person. On the contrary, language extensions are well designed and thoroughly thought through.

On the other hand, sometimes people get the idea that Haskell can encode everything in types. It can't. They get stuck and get into messes. Nonetheless the ability to encode some things into types makes Haskell by far the most practical language I've used.


> On the other hand, sometimes people get the idea that Haskell can encode everything in types. It can't. They get stuck and get into messes. Nonetheless the ability to encode some things into types makes Haskell by far the most practical language I've used.

It's not even that, I'll assume that the majority of Haskell novices are aware that there are humble limits to the expressiveness of most (including Haskell's) type systems.

The real problem IMHO is the dichotomy that this approach entails. You have to decide which language to use to model a concept (type language or run time language), and if the idea of what you want to do changes a little, chances are you can't continue on your current path.

In other words, I think the type system approach prevents a lot of the fluidity that I want from a programming language as a tool to get things done.


Personally my experience doesn't match that description. I find it's fairly reasonable to convert existing code between "run time type checking" and static types as necessary. I realise that it doesn't click for everyone immediately. Sorry you had a bad experience with Haskell. If you ever try again I'm happy to help. Feel free to drop me a line with any questions whatsoever. My contact details are in my HN profile.


> I'm not entirely sure what you're trying to say.

My understanding is that "because it took a whole PhD thesis to come up with, it must take a lot of effort to understand".

But I don't think that holds up. Much like the "if I had more time I'd have written a shorter letter", it can often take effort and skill to make a thing smaller and easier to understand.


Ada cannot do this at compile time. Predicates with arbitrary logic like that are checked with runtime assertions. There are limits on which static predicates you can write.

F* implements refinement and dependent types, which allow true logic to be computed at _compile_ time.



