I have been waiting for "Mathematica Online" to be released ever since they started hinting it was "coming soon" around 2008, so this is a big announcement. We Sage developers have also been building something similar, but around IPython/Sage: https://cloud.sagemath.com. We've also built something similar for embedding in web pages: https://sagecell.sagemath.org/.
I would have loved to have something like this in school. I ended up paying for Mathematica while working on my degree. I think IPython/Sage connected to an open-source version of Wolfram Alpha would be a killer app. Any chance Julia would be integrated at some point?
Open source version of Alpha? That will never happen. Wolfram is the most anti-source-code person out there. He even has this diatribe about why you should not be allowed to see "his" source code:
I don't know if it's too late for this, but as a matter of marketing, why not Omega instead of Gamma? Alpha was the first word on the subject. Omega will be the last.
That's odd for a business so concerned about selling to academia. You'd expect journals to reject papers where the number-crunching went through a "black box" like that.
He mostly claims other people's work as his own. I suppose he's a good salesman and a good huckster; that much is demonstrably true. He's also apparently somewhat competent as a mathematician, but it's doubtful how much of Mathematica is really his own work. He puts everything under his own name and sues anyone who disputes it.
Multiple forms of Julia integration in https://cloud.sagemath.com are very high on my list. We already have syntax highlighting for editing Julia files, and Julia is installed and usable from the command line. What's missing is support for both IJulia (the IPython notebook for Julia) and a Julia mode for Sage worksheets.
I'm curious about other people's experiences learning the Wolfram language if they're not already mathematicians. I've been tempted a few times, but the language (and Mathematica itself) has so far struck me as one of those "sure, it seems like it can do amazing things once you already know how to do it" types of technologies, as opposed to languages where you can reason your way towards the correct approach. It seems sort of similar to AppleScript that way. What was it like for you to learn it, and how did you go about it?
If you come from a programming background, just treat it as a Lisp with M-expressions rather than S-expressions. It all comes together quite quickly then.
At its core, Mathematica is actually a rule-rewriting engine, but except for more heavyweight uses you can view it as simply a Lisp with the world's best standard library.
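For instance (my own illustration, not anything official), FullForm shows that every expression is just a head applied to arguments, which is the M-expression analogue of an S-expression:

    (* every WL expression is head[arg1, arg2, ...], i.e. (head arg1 arg2 ...) *)
    FullForm[a + b*c]        (* Plus[a, Times[b, c]] *)

    (* map with a pure function, roughly (mapcar (lambda (x) (* x x)) '(1 2 3)) *)
    Map[#^2 &, {1, 2, 3}]    (* {1, 4, 9} *)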
I found that "just using it as a lisp" was a huge pain: the fact that Mathematica is, as you say, "actually a rule rewriting engine" kept causing it to behave in ways that looked absolutely bonkers if you were trying to think Lisp.
The jungle of function-definition-like expressions and the scoping constructs were particularly bad, but I also found that the language and the culture made it far too difficult to write readable code. (I find my eyes glazing over almost immediately when I try to read the Wolfram blog: it's so very hard for me to see the structure of the code.) Perhaps once you become comfortable with the core principles of the language all of this gets better, but I spent quite a lot of time with it and never really got there.
Moreover, I never figured out how to do iterations quickly; even once I got the thing working, it was prohibitively slow.
Ultimately I re-wrote it in Julia, which was a much better fit for the problem (exact diagonalization of a one-particle tight-binding Hamiltonian). That was a rather more pleasant experience.
Now, this is in part about the problem domain: I wasn't doing any symbolic manipulation, though (IIRC) I was comparing my results with those from some symbolic calculations. (Honestly, I feel pretty stupid for having even tried to do it in Mathematica in the first place.)
The lesson I learned, though, was to never use Mathematica for anything beyond basic symbolic calculations---integrals, that sort of thing.
My experience with Mathematica is sort of like my experience with Fortran: if you stick to the sorts of problems for which it was developed (hard-core numerics for Fortran, symbolic manipulations for Mathematica), it's great, but if you want to do something more general you're in for a rough time.
(I understand Fortran's gotten progressively better, but I don't have any experience with anything after 95, and even that was a codebase heavily inflected with 77.)
> I found that "just using it as a lisp" was a huge pain: the fact that Mathematica is, as you say, "actually a rule rewriting engine" kept causing it to behave in ways that looked absolutely bonkers if you were trying to think Lisp.
Can you give an example of bonkers-ness?
The functional parts of Mathematica/WL (by which I mean the equivalents of fold, filter, map, etc) are pretty straightforward and should be quite easy for a Lisp programmer to pick up. See [0].
In fact the suite of functions you have available for basic list and hashmap manipulation is probably one of the richest among languages in this space.
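A few illustrative one-liners (my own sketch; the association functions assume Version 10 or later):

    (* map / filter / fold *)
    Map[f, {1, 2, 3}]            (* {f[1], f[2], f[3]} *)
    Select[Range[10], EvenQ]     (* {2, 4, 6, 8, 10} *)
    Fold[Plus, 0, Range[5]]      (* 15 *)

    (* hashmap-style manipulation with Associations, new in 10 *)
    assoc = <|"a" -> 1, "b" -> 2|>;
    KeyMap[ToUpperCase, assoc]              (* <|"A" -> 1, "B" -> 2|> *)
    Merge[{assoc, <|"a" -> 10|>}, Total]    (* <|"a" -> 11, "b" -> 2|> *)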
Btw, it's called a "term rewriting system", not a "rule rewriting engine".
> Moreover, I never figured out how to do iterations quickly; even once I got the thing working, it was prohibitively slow.
It's a bit odd to criticize a functional programming language for not having super-fast 'for' loops. But you can use Compile [1] if you want to write fast procedural-style code. We also do some amount of JIT compilation.
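A minimal sketch of the shape of a Compile call (not tuned, just to show that arguments are declared with types and the body is ordinary procedural code):

    (* compile a tight numeric loop; {n, _Integer} declares the argument type *)
    sumSquares = Compile[{{n, _Integer}},
      Module[{s = 0.},
        Do[s += i^2, {i, n}];
        s]];

    sumSquares[1000000]    (* runs much faster than the uncompiled equivalent *)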
> Ultimately I re-wrote it in Julia, which was a much better fit for the problem (exact diagonalization of a one-particle tight-binding Hamiltonian). That was a rather more pleasant experience.
Did you really want to write diagonalization from scratch? Why not use the superfunction Eigensystem [2], which has been honed by many experts over many years?
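For reference, the call itself is a one-liner; a small sketch (the sparse example is my own, just to show the kind of thing that stays fast):

    (* eigenvalues and eigenvectors of a symmetric 2x2 matrix *)
    Eigensystem[{{2, 1}, {1, 2}}]    (* {{3, 1}, {{1, 1}, {-1, 1}}} *)

    (* for a large sparse tridiagonal (tight-binding-style) matrix,
       ask for just a few eigenvalues and iterative methods are used *)
    h = SparseArray[{{i_, i_} -> 2., {i_, j_} /; Abs[i - j] == 1 -> -1.}, {1000, 1000}];
    Eigenvalues[h, 5]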
> Now, this is in part about the problem domain: I wasn't doing any symbolic manipulation, though (IIRC) I was comparing my results with those from some symbolic calculations. (Honestly, I feel pretty stupid for having even tried to do it in Mathematica in the first place.)
> The lesson I learned, though, was to never use Mathematica for anything beyond basic symbolic calculations---integrals, that sort of thing.
That's basically nonsense. Wolfram|Alpha is a multi-million line Mathematica/WL system that goes far beyond symbolic manipulation. As a totally random example, take Facebook social network analysis [3].
Note to self: clearly we aren't making it easy enough to understand what WL can do, especially for people who pick it up for one specific thing. The video helps a bit, and there is the fast introduction for programmers [4].
As you can see from the doc, functional programming is completely underspecified in Mathematica. How does lexical binding / closures / ... work? I fear these are all dirty hacks that perform very poorly. Why isn't it better specified? Because Wolfram wants to prevent competition? From here it looks like the implementation is the spec. There is no rigorous language spec, which is quite poor for a language used in mathematics.
The Compile function spec - as little of it as there is - does not make me happy as a Lisp programmer. The Compile function has only very limited capabilities...
From the doc:
> Compiled code does not handle numerical precision and local variables in the same way as ordinary Wolfram Language code.
This is a huge warning sign. It does not even say HOW it works differently. Mathematica, from a language-implementation point of view, is stuck in the 70s... They have fancy stuff on top, but the basic language looks broken. The Lisp and FP communities faced these implementation and semantics problems (lexical binding, having an interpreter and compiler with the same semantics, implementing optimizing compilers for the full language, ...) in the 70s and 80s. Scheme, ML and other languages showed how this stuff should be specified and implemented.
> As you can see from the doc, functional programming is completely underspecified in Mathematica.
That inference is not correct. That doc is one of many hundreds of 'guide pages' that collect together various functions by domain/concern. It's not meant to be the exhaustive proof of "WL is a modern functional programming language with all the features you expect".
> I fear these are all dirty hacks that perform very poorly.
Why do you fear that? From my experience we're about on a par with CPython.
> Why isn't it better specified?
There is a gargantuan library of documentation, available both offline and online, that covers every area of the language, including core language areas like scoping.
> Why isn't it better specified? Because Wolfram wants to prevent competition?
We don't need to obscure how the language works to prevent competition, we can rely on the fact that it would take hundreds of engineers (with a full spectrum of domain knowledge) a decade to replicate the kind of functionality we have.
The exact thing you seem to want, which is "what does a LISP programmer need to know about WL?" doesn't exist, but perhaps it should! It might make sense to add Block, Module, and With to the functional programming page, though the Scoping Constructs guide [1] is the place to start understanding our scoping.
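For anyone following along, a rough sketch of the three (my own summary, not the official docs):

    x = 5;
    foo[] := x + 1;           (* refers to the global x *)

    With[{x = 2}, x + 1]      (* 3 -- With substitutes 2 for x in its body before evaluation *)
    Module[{x = 2}, x + 1]    (* 3 -- Module makes a fresh local symbol (x$nnn), roughly lexical *)

    Module[{x = 2}, foo[]]    (* 6 -- foo[] still sees the global x = 5 *)
    Block[{x = 2}, foo[]]     (* 3 -- Block temporarily rebinds the global x: dynamic scoping *)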
> > Compiled code does not handle numerical precision and local variables in the same way as ordinary Wolfram Language code.
> This is a huge warning sign.
Again, you seem to have little patience for documentation. The notes in the function page are a summary of salient points, rather than an essay about every detail. The tutorial goes into more depth [2].
And specifically: precision tracking is something almost no-one else does. BLAS can't do it -- so yes, the semantics have to change if you want speed. Lisp and the FP community would have the same problem if they represented quantities with the generality we do.
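Concretely (a small sketch of what "tracking" means here): arbitrary-precision numbers carry a precision that propagates through each operation, while machine numbers, and anything that goes through Compile or BLAS, do not:

    a = N[10^10 + Pi, 30];    (* arbitrary precision: 30 significant digits *)
    Precision[a]              (* 30. *)
    Precision[a - 10^10]      (* ~20. -- ten digits were lost to cancellation, and WL knows it *)
    Precision[1.5]            (* MachinePrecision -- hardware doubles, no tracking, full speed *)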
> The exact thing you seem to want, which is "what does a LISP programmer need to know about WL?" doesn't exist,
I would want a spec of the basics of the language. Not a collection of vague statements and some examples.
> The tutorial goes into more depth [2].
A tutorial goes into more depth than the main documentation? Really?
If you read the tutorial, you see that the compiler is far from being able to do what a modern compiler for Lisp, SML, OCaml or Haskell does. The Wolfram compiler is mostly there to speed up some simple numeric processing. As you can see, the parameters need to be numbers or 'logical variables'.
> compilation can speed up execution by a factor as large as 20.
Which is completely vague...
> For more complicated functions, such as BesselK or Eigenvalues, most of the computation time is spent executing internal Wolfram Language algorithms, on which compilation has no effect.
That more or less shows how primitive the compiler is.
> Compile can handle approximate real numbers that have machine precision, but it cannot handle arbitrary‐precision numbers
...
> An important feature of Compile is that it can handle not only mathematical expressions, but also various simple Wolfram Language programs.
Wow.
> Thus, for example, Compile can handle conditionals and control flow structures.
WL is similar to Python: it is predominantly an interpreted language. Where that becomes a problem, we wrap other libraries, or move small chunks of code into a VM to avoid the cost of the evaluator, or to C and then compile that (a rather roundabout way of doing things that will improve when we move to LLVM).
But as I said before, CPython is a close analogy for where we are at. What we don't have yet is the equivalent of PyPy. Still, if you were to make this kind of dismissive critique against the maintainers of CPython, I think most people would find it kind of silly.
Also, one should note that this documentation was written during an era when perhaps most of Mathematica's user base were people doing math, for which the primary use for compilation was things like plotting, numeric solving, and so on. The language of the tutorial reflects that heritage. At this point we should probably rewrite it.
If there is something like a Wolfram Language, it had better be documented like other 'programming' languages. The core of the language is underspecified. Even the small Scheme standard is much more rigorously specified, including an attempt at a formal semantics.
I want a good "what does a LISP programmer need to know about WL?" :-)---or, rather, a nice, conversational tutorial, a "WL for Lisp programmers" that discusses not just the things you need to know, but the underlying philosophical similarities and differences and the pitfalls you're liable to encounter if you try to make the transition.
(I also want an open-source implementation, but it would seem that isn't on the table.)
Maybe :-)---this was a year and a half ago, or more. I do remember spending a fair amount of time trying to wrap my head around the implications of the difference between `f[x] = x^2` and `f[x_] = x^2`, and the differences between "Module", "Block", and "With". I also had a hard time with closures & symbols---I was trying to use something not unlike the bank-account example from SICP (https://mitpress.mit.edu/sicp/full-text/book/book-Z-H-20.htm...) to cache some intermediate computations, and it got very messy very quickly.
Now I recognize that this is in no way idiomatic Mathematica code, and I'm sure there is a nice Mathematica-y way to achieve this, but that's sort of my point. I have a problem, think "Aha! I know exactly how to deal with this in Scheme!", try to translate the Scheme solution into Mathematica, and either fail miserably or spend way too much time trying to figure out how to persuade Mathematica to do what I want. (I should perhaps admit that when I hear "Lisp" I think Scheme and to a certain extent CL, with lexical scoping, as opposed to elisp, with dynamic scoping. This is a flaw in my thinking I've never really gotten around to rectifying.)
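To make that concrete (reconstructing from memory, so treat this as a sketch rather than my original code), the sort of thing that tripped me up:

    f[x] = x^2      (* Set with the literal symbol x: f[3] stays unevaluated *)
    f[x_] = x^2     (* Set with a pattern: the RHS is evaluated once, then f[3] gives 9 *)
    g[x_] := x^2    (* SetDelayed: the RHS is re-evaluated on each call -- usually what you want *)

    (* a bank-account-style closure via Module, roughly the SICP example *)
    makeAccount[balance0_] := Module[{balance = balance0},
      Function[amount,
        If[amount <= balance, balance -= amount, "Insufficient funds"]]];

    acct = makeAccount[100];
    acct[30]     (* 70 *)
    acct[30]     (* 40 *)
    acct[100]    (* "Insufficient funds" *)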
> The functional parts of Mathematica/WL (by which I mean the equivalents of fold, filter, map, etc) are pretty straightforward and should be quite easy for a Lisp programmer to pick up. See [0].
Yeah, those were pretty natural.
> Note to self: clearly we aren't making it easy enough to understand what WL can do, especially for people who pick it up for one specific thing.
This is important, I think, but I also think the problem's less in your documentation than in how I "learned" Mathematica. I got started using Mathematica for things like messy integrals, or differentiating and then simplifying some huge expression. Something comes up in a problem set, I try three or four times to do it by hand and always lose signs/factors/whatever, and then farm it out to Mathematica; as I gradually started doing more and more, I kept just porting Lisp experience, and this always worked just barely well enough that I wasn't forced to learn Mathematica on its own terms.
I suspect if I had thought about it as "basically m4 with a crazy-awesome standard library", rather than "basically Scheme with a crazy-awesome standard library", I might have been happier, but ultimately I needed (need) to recognize that Mathematica is its own thing, with its own strengths, weaknesses, and fundamental metaphors.
> Btw, it's called a "term rewriting system", not a "rule rewriting engine".
Ack. Thanks; I'll try to bear that in mind.
> Did you really want to write diagonalization from scratch? Why not use the superfunction Eigensystem [2], which has been honed by many experts over many years?
No! You're right, that would be a terrible idea. I was using Eigensystem; in Julia I farmed it out to LAPACK via eigs(). My problem was in setting up the matrix to be diagonalized. The state space had something like seven degrees of freedom (four two-dimensional and three with arbitrary dimensions), so I was calculating matrix elements for up to something like N = 10^5 basis states. (There were some tricks to pull so I was iterating through O(N) states, not O(N^2).)
Ultimately, for sufficiently large N, the diagonalization is going to take much longer than the setup, just because the scaling's worse. For small-ish N, though, the setup was interminable, and those small-N test cases are precisely where I need to be able to move quickly when I'm trying to figure out which stupidity I perpetrated this time.
> It's a bit odd to criticize a functional programming language for not having super-fast 'for' loops.
Yeah, no kidding. This was a classic Fortran-style problem, which is why I feel kind of stupid for having even tried to do it in Mathematica.
>> The lesson I learned, though, was to never use Mathematica for anything beyond basic symbolic calculations---integrals, that sort of thing.
>That's basically nonsense. Wolfram|Alpha is a multi-million line Mathematica/WL system that goes far beyond symbolic manipulation. As a totally random example, take Facebook social network analysis [3].
Fair enough. And I have a very smart friend who has sworn by Mathematica for years (and just spent some time working for you guys)---maybe it really is a combination of how I approach Mathematica, the types of problems I've tried to solve in it, and the fact that de gustibus non disputandum. I can't shake the feeling, though, that I'd want to run away very quickly from a large Mathematica/WL project like Wolfram|Alpha.
That's a really good example: you'd have to find the tutorial [0] to know what to do. And we could make that job easier by detecting your probably incorrect use of = instead of := and giving you an "I see you're trying to define a function" kind of deal. Of course people hated Clippy, so we have to tread carefully with that kind of thing :)
> This is important, I think, but I also think the problem's less in your documentation than in how I "learned" Mathematica.
Yes, I think you hit the nail on the head with this paragraph. It is possible to 'accrete' tricks in a way that potentially blocks you from having a holistic knowledge of the language. The workflows for symbolic manipulation, which involve lots of global state and symbols representing variables, are probably a prime culprit.
> My problem was in setting up the matrix to be diagonalized. The state space had something like seven degrees of freedom (four two-dimensional and three with arbitrary dimensions), so I was calculating matrix elements for up to something like N = 10^5 basis states.
That makes more sense. There might have been higher level ways to do this using functions like Array and Table, but perhaps not. And Julia is a really interesting language, I think we can learn a lot from them.
> I can't shake the feeling, though, that I'd want to run away very quickly from a large Mathematica/WL project like Wolfram|Alpha.
Huge codebases in any language get hairy. I'd say we're on a par with C++ in that respect (meaning: not very good, but workable).
Modern languages have had some innovations with clean package systems and API boundaries (though the ML family showed the way), so it's perhaps good we're still waiting to modernize our package system. Plus, I think we have a chance in the next year or two to really leapfrog other languages with some amazing static analysis tools.
Oh, man, that bit me so many times, especially when it had to interact with some kind of scoping trick. I read that tutorial (or the equivalent from before the WL), and never really got good intuition for how immediate & delayed evaluation worked and when to use which.
> There might have been higher level ways to do this using functions like Array and Table, but perhaps not.
I was actually using Table, but I was thinking of it as "iterate over these variables". Table's nice, although every once in a while it would break the picture I had in my head of it as "map-over-cartesian-products".
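For anyone curious, the "map over cartesian products" reading (my sketch):

    (* several iterators run over the cartesian product, innermost fastest, nested output *)
    Table[{i, j}, {i, 2}, {j, 3}]
    (* {{{1,1}, {1,2}, {1,3}}, {{2,1}, {2,2}, {2,3}}} *)

    (* the same structure built explicitly from the product of the ranges *)
    Partition[Tuples[{Range[2], Range[3]}], 3]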
> Plus, I think we have a chance in the next year or two to really leapfrog other languages with some amazing static analysis tools.
Great! Another beef I have with Mathematica (and Scheme, for that matter) is that it doesn't have types: since I learned bits and pieces of Haskell and started using Julia seriously, I've come to love the way a type system can save me from my own stupidity. This is definitely a matter of taste, though.
It's a great language, once you get past the fact that writing code in the notebook is a horrible experience. You get a great Lisp, along with pattern matching and rule replacement.
I can recommend the book by Paul Wellin, as well as the books by Michael Trott (very meandering).
Just funking things up in the notebook is the most fun I've been having in a long time. You just get sucked into trying more and more things, adding manipulates around it, playing some more, taking the output image and running image processing on it then converting it to a mesh on which you solve differential equations.
I disagree, I think the notebook is one of the best interfaces I've ever found. Just not for "traditional programming".
The notebook is a literate programming document. It contains my thoughts and process for creating an algorithm/piece of code along with tests and examples. This is very, very different from how people traditionally "program".
However, Mathematica has some very real, very strong deficiencies if you want to build an application end-to-end within it. It's dynamically typed, with an everything-is-a-list mentality, so you will get type errors.
Unit testing (let alone combinator libraries like QuickCheck) is only available at the higher price levels.
Warning: personal opinions ahead!! Wolfram employees often have strange views on how software is built outside of Wolfram Inc. Unit tests are alien, and type checking or datatypes (say, to make sure that the order your fancy Mathematica-based trading system is about to place is sane) are seen as things only dumb Rails programmers need, rather than tools to make developers more powerful.
I would honestly recommend every programmer learn Mathematica. I'm sure they'll find that they can aim far higher when they have the right tools behind them.
It's the same distance again from Python/Ruby as Python/Ruby are from C. Probably more.
> Unit testing ... is only available at the higher price levels.
Actually, this is now available for everyone, something I advocated for. VerificationTest and TestReport are the key functions there.
Good implicit suggestion about QuickCheck.
We've supported unit testing in our Eclipse plugin for years, and with V10 brought the MUnit package into the System context with a redesign of the essential functions.
There are the beginnings of an algebraic type system hiding in 10 (do Get["TypeSystem`"], then try RandomData[Vector[Atom[Integer]]]). So generating synthetic data for a function with a given signature is already possible. Now we need to be able to express invariants etc., and then minimize counterexamples.
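Rough shape of the testing side, in V10 syntax (a sketch; the property name is the one I remember, so double-check against the docs):

    (* a single test: input and expected output *)
    VerificationTest[1 + 1, 2]

    (* run a batch and get a summary object back *)
    report = TestReport[{
        VerificationTest[Total[Range[10]], 55],
        VerificationTest[Sort[{3, 1, 2}], {1, 2, 3}]}];
    report["TestsSucceededCount"]    (* 2 *)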
The notebook is excellent for its intended domain, which is scientific exploration. It is very good to be able to create definitions and change data, with the results immediately visible. Notebooks can also be saved in their current state and exported to other formats.
Since my first exposure was to Matlab/Scilab, I felt like my initial interactions with Mathematica were like trying to bang my head against a wall.
Over the last year, as I incidentally got interested in Haskell and lambda calculus, it dawned on me that Mathematica's language is built for functional programming. It felt like I'd just managed to find the light switch. Now that I understand, it all makes sense when somebody says that all Mathematica essentially does is search and replace. I think this is in the spirit of a model of computation called [Markov algorithms](https://en.wikipedia.org/wiki/Markov_algorithm).
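A tiny illustration of the "search and replace" view (my own example):

    (* one-shot replacement with /. (ReplaceAll) *)
    {1, x, x^2} /. x -> 3    (* {1, 3, 9} *)

    (* repeated rewriting with //. (ReplaceRepeated): keep applying the rule
       until nothing matches -- very Markov-algorithm-like *)
    Log[a b c] //. Log[p_ q_] :> Log[p] + Log[q]    (* Log[a] + Log[b] + Log[c] *)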
Similar to Applescript it seems like a good way to scratch your own itches, but not to build a product with. That's perfect for academia, but not for your average programmer. It does look fascinating though and has been on my rainy day list for a while.
$15/mo is still too high for dabblers. It should have a free or cheaper tier (with a low quota, or $0.20/hr, billed as-you-use like AWS) for people who want to fiddle.
Allowing people to try it for free would definitely help them attract potential users.
IMO they should just use a cookie to let you try it out for a couple of days, and then tell you that you need to buy the product. It's easy to circumvent, but a ton of people would just choose to pay for it.
I also think that they should have some kind of student pricing. Many students couldn't afford $15/month, but they might be able to afford $15/year.
Wolfram needs to basically adopt the model that Adobe unofficially has, i.e. don't stress too much about users pirating your software, because they'll become so dependent on it that when they go and use it in industry they'll demand enterprise licenses.
Except Wolfram should just restore Wolfram Alpha to its once free glory (instead of the annoying nagware that it's become) and also offer this Wolfram cloud stuff at a reduced price.
The end goal for them should be to convince people that they need to purchase Mathematica. Pricing at $15/month means they're only really attracting existing Mathematica users who want the convenience of the cloud when they're away from their primary machine.
I'm a SymPy developer, but I'd probably pay $15/year just to test different things so I could compare results. I won't pay $15/month, though. Especially since other developers have access to it, mainly through their university.
How is data security handled? If a company wanted to analyze some sensitive/proprietary-type data, could they do so in Mathematica Online with some confidence that the data is safe?