What Haskell is missing in order to be more learnable by anyone, imho, is sane error messages.
The simplest example I can think of is typing foldl + 100 [1, 2, 5] in the repl by mistake instead of foldl (+) 100 [1, 2, 5], and getting an error message that starts like Could not deduce (Num ([t0] -> (b -> a -> b) -> b -> [a] ->....
...instead, it should fking yell at you that you passed some wrongly typed arguments to the function (+), and you'd instantly realize what's wrong: "...but I'm not trying to call + ...oops, forgot the parentheses around it. Fixed!". Also, auto-currying in ML-family languages probably doesn't help with making meaningful error messages easy to implement, because it multiplies the variants of what the user could have meant (besides making type signatures 10x harder to read, and making named arguments or optional arguments with default values close to impossible to implement). They could have just used a postfix $ after the function name as a currying trigger, but no, they had to use it for some other, more error-prone and hard-to-read "syntax sugar", because making people type a few more parentheses would've been too Lisp-y for the Haskellers' taste...
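For anyone following along at home, the whole fix is two parentheses -- a quick GHCi sketch (the exact error text varies by GHC version):

    -- correct: (+) in parentheses is passed as the folding function
    ghci> foldl (+) 100 [1, 2, 5]
    108

    -- broken: a bare + is parsed as an infix operator, so this reads as
    -- foldl + (100 [1, 2, 5]), and GHC goes hunting for a Num instance
    -- for function types -- hence the error quoted above
    ghci> foldl + 100 [1, 2, 5]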
I don't think this is as easy as you make it seem. A whole bunch of problems in typing and deduction are undecidable in the general case. (Don't know about this one, but just saying.)
Also, you seem a little aggressive toward "Haskellers". I'm sure you didn't mean to be! :)
The thing with "undecidable in the general case" is that 80% of the time you can just heuristically guess and be right. Most bugs are shallow: typos, forgotten operators, etc. And even if the guess is just plain wrong, I think a wrong simple guess is most of the time better at nudging the learner toward fixing the mistake than an incomprehensible type-deduction explanation... And if it's not, you can always put it under some kind of "details". And no, I don't think this is "a job for the IDE"... that would be Java-think :)
...and that's the thing: when guessing what's actually wrong, syntax can help a lot. Things like having the equivalent of an auto-curry on/off toggle at the syntactic level would make guessing at least 50% easier. And other little things too. I think ML languages tend to shoot themselves in the foot here. C-like syntax has a clear advantage when it comes to enabling guessing and heuristics.
I didn't mean to be aggressive, I love the concepts behind Haskell... that's why I hate the "little things" that ruin it and make it almost unusable for average "don't make me think too much right now" type developers (like me after some sleep deprivation, or a couple of beers, or during a "this bug needs to be fixed by yesterday morning" sprint :) ).
Jonathan Blow is creating a new language with an interesting idea for this: if you have 'fn f(a: $T, b: T) -> T', the $ means that T is a polymorphic (generic) type and 'a' is the parameter that determines what T is, so if there is a type mistake you get a nicer error message.
I think he will have a harder time, not having studied algebra (yet?).
Maybe it would help to explain why, in regular maths, + and * have similar properties. For example, they both return the same result regardless of the order of their parameters. And they both conform to ((a <op> b) <op> c) = (a <op> (b <op> c)),
where <op> can be + or *. Now we are getting a bit abstract, but keeping it kind of concrete, without using scary terms like 'associativity'.
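If you wanted to make those two laws concrete on a computer rather than on paper, here's a minimal sketch using the QuickCheck library (assuming it's installed):

    import Test.QuickCheck

    -- "same result regardless of order of parameters"
    prop_order :: Int -> Int -> Bool
    prop_order a b = a + b == b + a && a * b == b * a

    -- ((a <op> b) <op> c) = (a <op> (b <op> c))
    prop_grouping :: Int -> Int -> Int -> Bool
    prop_grouping a b c = (a + b) + c == a + (b + c)
                       && (a * b) * c == a * (b * c)

    main :: IO ()
    main = quickCheck prop_order >> quickCheck prop_grouping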
And once he is used to abstractions he may find Haskell easier.
I was lucky to study Maths at uni, so to me Haskell makes a lot of sense. The foldr/foldl/applicative/monad stuff is a mind-bender, I have to admit, but well worth persisting with.
Son: “Well. Hm. If the function is the (+), then you first apply that to the 1. So…..you have something like (1 +)? And now you apply that to the 2? And then you get the final answer?”
Seriously? He grasped the idea just like that? Am I so old and stupid that partial application never seemed that easy to me? Or is currying something kids learn in kindergarten these days?
Sorry for my ranting, I'm just really amazed. Humanity took a long time to arrive at such abstract concepts as the number 0 (as the absence of something). And at 10, kids, I think, are still learning by seeing, touching, playing. So grasping a much more abstract idea seems really fantastic.
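For the record, what the kid described is literally this in GHCi (modulo the exact printed type):

    ghci> :type (+) 1
    (+) 1 :: Num a => a -> a    -- "something like (1 +)": still waiting for one argument
    ghci> ((+) 1) 2             -- "and now you apply that to the 2"
    3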
I don't know Haskell at all, but the ~"all functions only have one argument" concept interests me.
>"had an easy time explaining how (2 * 3) really is two functions, each with one argument and one result, and he drew it out on paper nicely" //
So, is it like you pass the multiply function an argument which is the pair (2, 3)? Or is it more like you pass the multiply function the first operand (2 in this case) and it returns a function, multiply-by-2, to which you pass the second operand? The latter seems like it would get sub-optimal pretty quickly. Mind you, surely the compiler produces the same sort of assembly either way [repeated addition here], making it just a way to abstract around an alternate mental model.
I've just looked at https://wiki.haskell.org/Currying - could someone walk through how something like arcsin(0) works? That's probably a bad example - something that takes a single variable input but has multiple solutions (it returns n·π in this case; or say sqrt, or an anagram function, or ...)?
I'm sure you can tell I'm not a computer scientist, not even a programmer ...
> Or is it more like you pass the multiply function the first operand (2 in this case) and it returns a function multiply-by-2 which you pass the second operand
map takes a function and returns a function that takes a list and returns a list. It has a type like: (a -> b) -> [a] -> [b] ("I take a function from type a to type b and return a function from lists of a to lists of b").
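And to answer the grandparent directly: yes, it's the multiply-by-2 version. A small self-contained sketch of both ideas:

    -- (*) 2 is the multiply function partially applied to its first operand
    double :: Int -> Int
    double = (*) 2

    -- map, partially applied to double, is then a list-to-list function
    doubleAll :: [Int] -> [Int]
    doubleAll = map double

    main :: IO ()
    main = print (doubleAll [1, 2, 5])   -- prints [2,4,10]

As for the "sub-optimal" worry: as you suspected, compilers like GHC generally optimize the intermediate partially-applied functions away, so in practice it's a mental model, not a runtime cost.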
Sounds like the big issue she's going to keep running into is that a 10-year-old doesn't really do abstractions well (and doesn't have the background to do abstract algebra yet), whereas Haskell expects a certain familiarity with abstraction.
It could be, though, that showing things in concrete terms, and then introducing the abstractions, is the right approach to teaching Haskell for nearly everyone, not just 10-year-olds...
> whereas Haskell expects a certain familiarity with abstraction.
I'm curious why you think Haskell is unique in this regard.
I mean, sure, if you want to understand why things like Monads exist, I agree, that fundamentally means grokking some pretty deep abstractions.
But I don't know that you need to understand those abstractions to use and appreciate Haskell... maybe to fully use and fully appreciate it, but I wouldn't expect that of a 10 year old to begin with, in part because I wouldn't necessarily expect that of an adult either.
Yes, I had things like monads in mind, but also currying, and higher-order functions, and I could probably think up a couple more if I tried.
True, all languages require abstractions. But it seems to me that Haskell is more abstract than most. That's its strength, in fact. But it also creates something of a mismatch when trying to teach it to a 10-year-old.
Huh... weird, I don't really think of currying as much of an abstraction. I mean, if you're going to be rigorous about it, maybe, but you can cheat and explain it as partial function application, which gets you 99% of the way there, and I think is pretty easy to explain.
Higher order functions are certainly a bit trickier, but you could explain them in specific contexts (as with 'map') without needing to deal immediately in generalizations, which probably makes it a lot easier to explain.
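For instance, a hypothetical two-liner like this already shows the idea in a concrete setting, no generalizations required:

    -- a higher-order function: takes a function as its first argument and applies it twice
    twice :: (Int -> Int) -> Int -> Int
    twice f x = f (f x)

    -- twice (+3) 10  ==>  16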
I assume then you'd expect him to learn about polymorphism, inheritance, classes, the 'static' keyword, and virtual functions if this was an OO language, if these kinds of things are on the table, right?
I'm sure there aren't truckloads of evidence of beginners having problems with those concepts.
In all honesty, inheritance, classes, the 'static' keyword, and virtual functions are all harder than anything in Haskell. They're all weird ad-hoc constructs without simple underlying mathematical concepts. That's rather unlike Haskell, where the underlying math is usually very simple, if more abstract than most people are used to.
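To illustrate how little math there usually is: the entire mathematical content of, say, a monoid fits in a few lines. A hand-rolled sketch (named MyMonoid to avoid clashing with the real Monoid in the Prelude):

    -- a monoid is just: a type with an identity element and an associative combine
    class MyMonoid a where
      identity :: a
      combine  :: a -> a -> a

    -- lists form a monoid under concatenation
    instance MyMonoid [b] where
      identity = []
      combine  = (++)

That's the whole underlying concept; the rest is getting used to the abstraction.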
That is very subjective. For example, Lisp has a very simple grammar, yet many people find the minimalism complicated in how it manifests in paren heavy code. Likewise, Haskell's reliance on abstract mathematics is definitely going to turn many people off to it. "It's just math" is often taken as a negative, not a positive (depending on the person).
Natural language is built around all sorts of non mathematical ad hoc concepts, yet we seem to use it just fine. Of course, it might not be the best way to talk to the computer, but it is definitely "accessible."
Most people don't know their natural language as well as they'd need to know it to accurately and unambiguously describe things. The only reason natural languages are even useable is because the human interpreting them has a huge amount of context to infer the meaning from and do error correction with.
The reason programming languages exist is to simplify the language used to describe things, so that they can be interpreted unambiguously and without too much additional context.
You aren't wrong. But there is a good reason objects remain popular even if they aren't ideal by any means. With objects, you get to rely on the built-in metaphor capabilities of natural language, your ability to think ad hoc. That gets you into trouble quickly, because the computer isn't human, but it shouldn't be a big mystery why OO remains popular. Haskell makes imprecise ad-hoc thinking hard to encode, because, well... eat your vegetables, they're good for you!
There's no abstract mathematics in the casual Haskell user's experience. I have no education or background in math, self-taught or otherwise, and I use it effectively and happily for industrial purposes.
Programming in Haskell is as close to math as one can get in a general purpose programming language (theorem provers are more so, but not general purpose). You might just have a knack for it even without formal training.
Something as simple as "+" is already Num a => a -> a -> a. Where did "a" come from? Whereas in e.g. Scala, "5 + 6" is just calling the "+" method on 5 (I can see it right there, on Int: "def +(x: Int): Int").
I also find OO polymorphism a bit more discoverable than typeclass-based polymorphism. You can look at an OO object in the REPL and see what methods it has on it, in a way that's harder to do in Haskell.
You pay the price eventually of course - in Scala you end up needing more machinery to be able to treat "+" as a two-argument function, and you need typeclasses for certain problems so you end up having two different forms of polymorphism. But the tradeoff is that you can learn the less general version first and get started quicker, even if you end up having to learn more overall.
No different or harder than calling a function in Haskell.
> What's 5 that lets me "call" "methods" on it?
A value like any other. That's how you do everything in Scala - calling methods on values. Again, I don't see how you could reasonably claim this is any more intimidating or confusing than calling a function in Haskell.
> Yeah, absolutely nothing intimidating for a 10 year old there...
I didn't say that; just that it's less intimidating than Haskell. It doesn't introduce confusing abstraction immediately like Haskell does.
In Scala, 5 is an Int, you can click through to what an Int is and see that it has a + method on it. "Hello" is a String, you can click through to what a String is and see that it has a startsWith method on it.
In Haskell, "+" comes from a mysterious faraway place, and you can see that it's "(+) :: Num a => a -> a -> a". I do think this involves not just more symbols but genuinely more abstract concepts than the Scala version.
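Concretely, the GHCi equivalent of that click-through (output trimmed, and it varies by GHC version) drops you straight into the typeclass machinery:

    ghci> :type (+)
    (+) :: Num a => a -> a -> a
    ghci> :info Num
    class Num a where
      (+) :: a -> a -> a
      (*) :: a -> a -> a
      ...
    instance Num Int
    instance Num Integer

Only down at the instance list do you find the Int you started from, which is roughly the discoverability gap I mean.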
Most of my problems with Haskell are related to getting rid of patterns and habits from other languages that are deeply ingrained.
Assuming you have the mental models and internal language (which, from the OP, it seems a ten-year-old just barely has), it seems like it would be easier to learn Haskell at a younger age, or at least before you learn any other languages.
I tutored Haskell to first-year students and I think the hardest part of picking it up is understanding that previous expertise won't help until you grok the distinctly new perspective with which you must approach problems.
I'm (somewhat) sceptical of the popular "Children inherently have the most malleable minds" trope, to the grand annoyance of one neuroscientist friend; I feel it's more an environmental factor stemming from fewer areas of growth/stress to focus on and increased motivation as they lack the jack-of-all-trades experience in similar areas to fall back on whenever they get frustrated. I like to believe this since it motivates me to overcome those limiting factors by adapting my work ethic, with the hope to continue learning efficiently.
Back on topic, I feel that the best mindset with which to approach Haskell (and many other new areas) is:
"I don't get much right now, but that's OK. If I keep working with what I know and expand that set by just a little every day, it's bound to click and I will be able to look back on the entire journey as worthwhile."
> I feel that the best mindset with which to approach Haskell (and many other new areas) is: "I don't get much right now, but that's OK. If I keep working with what I know and expand that set by just a little every day, it's bound to click and I will be able to look back on the entire journey as worthwhile."
And I think that, itself, is a mindset that children more easily accept.
Children necessarily live in a world they can't yet fully understand (not that adults can, either, but we're more likely to live with the delusion that we can :). So I think they're more accepting of "you'll understand that later" than an adult is.
So I agree with you, I don't think children are necessarily more malleable. But I do think they have a lack of ego, an openness to new things, an acceptance (or maybe it's ignorance) of their own limitations, and far less fear of failure than the typical adult. And that I find incredibly inspirational, and something I try (and usually fail) to apply to my own life.
I agree. The other day I was looking at some tutorial on Haskell type classes that I hadn't understood earlier, and it just clicked for me on the third or fourth try. We think, with our experience, that the learning-pipe going to our brain is the size of an oil pipeline, but it's really only the size of a straw.
Well, you're not alone. I also think the "malleability" we see in children is just because of the attributes of their existence: nothing to worry about, nothing to do but play (and voluntary learning is play) and so on.
And like you, I also use this to motivate me in Haskell. Little by little I learn things I didn't know before. It takes me years, but I do it at the pace I choose. And why would Haskell be different from any other subject I learned as an adult? Nothing makes sense in the beginning, then it's hard, then you can kind of see what they're talking about, then you get some practice, and before you know it you are equipped enough to dig into the intermediate stuff and then the advanced parts.
Was it not the Japanese that came up with Kaizen - continuous improvement? I believe in that principle.
Simple variable assignment is algebraic, mind you, so the assertion doesn't square with the note that he has already studied some programming. Functions are algebraic too.
I'm guessing the parent here is thinking "he's not studied a formal course specifically targeting a subject area referred to as algebra".
Gender hypersensitivity is the new grammar-nazi movement. It adds nothing to any given conversation and only derails every thread it's mentioned in. Much worse than grammar correction, actually...
[EDIT]
Sadly, my point is being proven. I REALLY wanted to hear about teaching Haskell to 10-year-olds, but now we've got the same boring-ass gender war going on that never goes anywhere and is totally off topic. I wonder if you could delete your comment before it gets too out of hand. At the very least, try not to bring it up next time.
The responses that amounted to "You did nothing wrong!" and "We don't need to talk about this!" were what made the thread heated -- outside of that, it was just a couple of posters realizing they had made an incorrect assumption. What need was there to _debate_ that?
First of all, he called himself an asshole for making that assumption. What about the others who made the same assumption but didn't feel the need to derail the thread? Are they assholes too? Maybe people felt the need to chime in because there was an implied insult, and they wanted to make it clear that, having made the same assumption about the author, they did nothing wrong.
Second, his post has NOTHING to do with the original article, it's just off topic noise.
I agree that too much has already been said about it, but I'd say too much had been said the moment posters started calling themselves assholes and sexist for making a simple mistake. On that note, I'll step away from this topic :) It would be nice if this entire comment tree were removed from the thread; it detracts from the actual subject at hand.
>Is it because you assumed it was a dad? Me too. :(
Yeah, god we're such shitlords, aren't we? Time to go give myself some lashings at my home built shrine to Ellen Pao.
Seriously though, this is entering the realm of stupidity. You made an assumption based on your own experiences (that the vast majority of computer programmers and enthusiasts are male) and one that is backed up by data - the vast majority of computer programmers and enthusiasts are male.
Coupled with that assumption are the many other "I'm teaching my kid to program" blogs and similar that pop up here from time to time, which to date, as far as I recall, have all been written by men. A story about a programmer is highly likely to be about a male. A story about a programmer teaching their son to program is likely to be a male teaching a male.
When women are showing the same level of interest in computers, in programming, in computer science, etc. as their male counterparts and are blogging, developing, taking part in the community at the same level, then it may be questionable to automatically assume it must be a man teaching his kid to program.
You are as sexist in your assumptions here as you are in your transphobia in automatically assuming the son is "cisgendered".
C'mon now, let's take a large step back from this nonsense.
I don't think any of us are ACTUALLY berating ourselves. I took "asshole" at least partially facetiously--like saying "I dun fucked up" when the pasta boils over. The point is just that it's good to recognize and question the assumptions our minds make.
Just because assumptions correlate with real statistics doesn't de-facto mean the assumptions shouldn't be questioned. For example, why should we assume a gender one way or another at all?
Also, it is worth entertaining the possibility that our assumptions can have cultural effects. Assuming it will rain today because I'm in London doesn't affect the weather. However, one can imagine that our assumptions about gender can impact the cultural zeitgeist, making women feel more or less comfortable in tech. And for both selfish and altruistic reasons, I want women to feel comfortable in tech so their numbers increase. It's not random happenstance that there are so few women in tech. Maybe this sort of thing has zero effect, but there's no harm in not simply yielding to our base biases.
Asshole is of course facetious, and I don't think anyone suspects any of the posters here of beating themselves up horribly about it, but the general sentiment being expressed is still one of regret and feeling bad, and for what? Because you assumed the sex of someone in an article and ended up being wrong? This doesn't hurt anyone, I suspect. I don't believe that the positive energy in the ether is going to suffer a net loss because some people made this assumption either.
But it's not just that, feeling bad about assuming the incorrect sex of an author and recognizing a simple mistake. This can be seen in the fact that one poster even went so far as to describe their own assumption, and another user's, as 'sexist', which it most likely is not. Someone said it best earlier, so I'm just going to quote them:
'Making an assumption about someone's gender, race, age, etc. based on what is most common (or most common to you) does not make you a sexist, racist or ageist. It just makes you human.'
When presented with a faceless author and no hints about their sex or appearance, it's pretty typical for one to fill in the blanks in one's mind. The picture one comes up with will be informed by all the experiences encountered in one's lifetime. Simply being wrong doesn't suggest any ill will, belief in inequality, discrimination, or prejudice toward or against any group of people. "Whoops" seems like a level-headed response; "Oh shit, I'm being sexist" seems like a bit of an exaggeration.
Although it totally could be the case, and kudos to anyone who suddenly has the self-realization that they're sexist and wants to post about it.
Yes, but by the same token, every tech article written by a woman seems to have a comment saying "Oh, I'm such an asshole, I assumed you were a man". Now, I can't speak from personal experience, but that seems to be pretty explicitly reinforcing the impression that "people think there are no women in tech" (and, by extension, "people think it's unusual and/or weird that I'm a woman in tech").
You're here fighting against the _grave injustice_ that someone ... felt a little bad about an assumption they made?
Having that "oh shit" moment isn't a bad thing. Realizing that these assumptions aren't necessarily true is important -- if you just subconsciously "know" computer people are dudes, exclusionary behavior is really hard to notice.
> You're here fighting against the _grave injustice_ that someone ... felt a little bad about an assumption they made?
The problem is that they insulted everyone who made that assumption.
Edit: Downvote? Look at the wording. It wasn't someone swearing at themselves for making a mistake; it was saying that the mistake was shameful to make, as a general statement.
I'm sorry, but none of this is sexist. Making an assumption about someone's gender, race, age, etc. based on what is most common (or most common to you) does not make you a sexist, racist or ageist. It just makes you human.
If, after realizing your assumption was wrong, you said "omg, I can't believe a woman knows Haskell, how did she ever learn that?", THEN you're being sexist.
Don't be too hard on yourself. By your own admission you assumed the sex of the author just because you have a wife who is teaching your own child. As far as my understanding of sexism goes, this is miles away from being an example of sexism.
Stereotypes describe a generality that may or may not apply in any particular case. Even when it doesn't apply, the stereotype itself still describes an actual real-world generality. Coming to a conclusion based on a generality/stereotype is not totally unfounded.
I mean, this guy called himself an asshole because he came to a very probable and realistic conclusion. Sometimes I think feminism goes a bit too far... especially when the topic takes up 50% of my news feed nowadays.
>especially when the topic takes up 50% of my news feed nowadays.
In an era when the "media" we consume is "free" (entirely supported by ad revenue driven by clicks), clickbait, bent sensationalist narratives and faux outrage appear to win the day.
Personally, I'm sick of it. The tech media have gone from reporting tech news to hiring culture vultures from Vox, Salon and the Gawker network to stir up click counts from idiots.
>> ...and just turned 10, has been nagging me for ages to teach him Haskell...
I'm guessing you would have taught your human(?) kid some human languages first, before he started nagging you (why you? he can read too, right?) to teach him Haskell. I think you had a dream or something...