Nim vs. Crystal (status.im)
274 points by open-source-ux on Dec 26, 2019 | 147 comments


Nice article!

There's one small error, where the article says "With Nim, we were also able to link both the Nim and C files into the same executable, which Crystal sadly cannot do." However, this is not true! You've passed the object file you created directly to the linker, so it is in fact included directly in the executable.

I'm a core team member of Crystal, feel free to ask away.


Status dev here (i.e. a colleague of the benchmark's author) and prolific author of high-performance Nim libraries.

Note that the Nim standard library focuses on keeping maintenance cost low, as the Nim team is small. When raw performance is needed, Nim gives us the tools to reach for it.

For instance, at Status we use our own JSON serialization/deserialization library: https://github.com/status-im/nim-serialization and even Araq, Nim's creator, has his own JSON library: https://github.com/Araq/packedjson

This allows Nim to be in the top10 of json parsing in TechEmpower: http://www.techempower.com/benchmarks/#section=data-r18&hw=p...

Disclaimer: Status is the main sponsor behind Nim https://our.status.im/status-partners-with-the-team-behind-t... / https://nim-lang.org/blog/2018/08/07/nim-partners-with-statu...

Status is committed to Nim, but we do have a pretty diverse stack (i.e. most of our code is in ClojureScript, and we have part of the codebase in Go that we are migrating to Nim at the moment). Teams work very independently from each other, so other teams are sharing back their experiences with Nim.


> This allows Nim to be in the top10 of json parsing in TechEmpower: http://www.techempower.com/benchmarks/#section=data-r18&hw=p...

Maybe I'm misreading, but to me this implies that the TechEmpower JSON benchmark doesn't use the `json` module. Just to be clear, that is not the case: it uses the `json` module. Parsing performance is not benchmarked there, only serialization.
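
For reference, the TechEmpower JSON test essentially serializes a one-key object; with the stdlib `json` module that looks roughly like this (a sketch, not the actual benchmark code):

    import json

    # Build a JsonNode with the %* constructor macro, then serialize it;
    # this serialization path is what the TechEmpower JSON test exercises.
    let msg = %*{"message": "Hello, World!"}
    echo $msg    # {"message":"Hello, World!"}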


Have you written about why you're migrating from Go to Nim?


We did cover part of it here: https://our.status.im/status-partners-with-the-team-behind-t...

But to give more detailed explanations:

- We are a blockchain company and a close partner of the Ethereum Foundation. Ethereum research is done in Python; using Nim allows us to quickly transcribe research into fast code.

- We focus on resource-restricted devices: for now mobile phones, but running a blockchain node on a router might make sense in the future as well, so we need tight control over memory allocation.

- We can reuse C and C++ tooling (Valgrind, LLVM sanitizers, gdb, lldb).

- The biggest bottleneck we have (and all Ethereum clients have) is cryptography. Using Nim means being able to reuse cryptographic libraries written in C or C++ (or even C++ templates, thanks to one of the easiest FFIs around), or to roll our own implementations.

- WASM is going to be very important for blockchain development, and the most important thing there is generated code size (because it lives forever in the blockchain). Go and Rust struggle with WASM code size at the moment; Nim is very good at that: https://www.youtube.com/watch?v=QtsCwRjtbrQ

- Being the main sponsor of a language also means being able to shape it to suit our needs from scratch.

So this was why we started using Nim in the first place.

Our Nim team was started 2 years ago and the experiment has been successful. For example, our Ethereum client was kickstarted by translating Python to Nim with an automated tool: https://github.com/metacraft-labs/py2nim.

Our main Go codebase supports our own fork of the major Ethereum client, go-ethereum. But this is costly for us, as upstream and our teams have different goals. Now that the Nim Ethereum client is more mature, moving completely to Nim with a homegrown client will allow us to have a better API, better logging, and controlled release cycles, and to avoid the cost of syncing with upstream.


The smallest wasm binary produced by rustc is ~110 bytes; Rust struggles with this far, far less than Go or any language that has a significant runtime.

This doesn’t mean that the rest of your reasons are invalid, of course.


Hopefully wasm's gc proposals will be implemented and get LLVM support, allowing languages like Crystal to sanely support wasm.


The linked py2nim GitHub repo is basically empty, though some forks of it show substantial content. Was this project intentionally pulled down?


Sorry, it's at https://github.com/metacraft-labs/py2nim_deprecated/ now; the other repo is part of github.com/metacraft-labs/languist/ these days, a more expanded tool which also tries to support Ruby etc. (however, it's also on pause for now): https://metacraft-labs.github.io/fast-rubocop/ruby-kaigi-201...

To be honest, the original py2nim tool was used for some initial transpiling experiments, but I'm not sure how much of it ended up in the actual codebase. The tech surely has more potential if developed further.


How has the experience of working with Clojurescript been?


Apparently it's fantastic, so ClojureScript is here to stay; the future stack will probably be ClojureScript on the frontend and Nim on the backend.

See our Clojure repos: https://github.com/status-im?utf8=%E2%9C%93&q=&type=&languag...


I wrote some Crystal recently for a hobby project. I have to say, I really like it.

• Nice syntax (mostly just that blocks/procs are easy to use— Ruby-like syntax is nice, but it’s not really that important)

• straight-forward class/object model

• type system is simple but powerful (union types + type inference are a great combo)

• syntax and std lib that enables functional-style programming, but isn’t strictly functional

• Pretty darn fast— compiles to machine code via LLVM, and seems like it’s not far behind C, C++ and Rust in most benchmarks, despite being garbage collected

What other languages offer a similar profile? D? Swift? Kotlin (via LLVM)?


> What other languages offer a similar profile? D? Swift? Kotlin (via LLVM)?

Julia. It’s fast, JIT compiled (static compilation is possible but it’s quite rough around the edges), has a fantastic type system, uses multiple dispatch to achieve some very cool stuff, has quite powerful lispy macros, and a really great community / library ecosystem, especially in scientific computing.


I'm an enthusiastic user of Julia, but I would not think of it as a language with a "similar profile" to Nim or Crystal. As you said, it's hard to produce static binaries, and they are huge compared with Nim's or Crystal's. Moreover, the typical workflow for writing code is very different from what users of static languages follow (edit, compile, run, repeat).


The person I responded to specifically said the things they like about Crystal are:

    • Nice syntax (mostly just that blocks/procs are easy to use— Ruby-like syntax is nice, but it's not really that important)
    • straight-forward class/object model
    • type system is simple but powerful (union types + type inference are a great combo)
    • syntax and std lib that enables functional-style programming, but isn't strictly functional
    • Pretty darn fast— compiles to machine code via LLVM, and seems like it's not far behind C, C++ and Rust in most benchmarks, despite being garbage collected
Julia ticks all those boxes except the "straightforward class/object model" (which I initially missed). I should have been more clear that Julia takes a different route to Object Orientation (multiple dispatch instead of classes).

While Julia has a very different user experience for the reasons you mention, that doesn't seem to be what the OP was talking about, but maybe I'm wrong.


> I should have been more clear that Julia takes a different route to Object Orientation (multiple dispatch instead of classes).

I think your original comment was fair. I've looked at Crystal and Julia and find both projects really exciting. That said, it's hard to see why one would prefer Crystal's class/object model to Julia's. Multimethods + operator overloading is just insanely powerful and elegant. It's one of Julia's real strengths and allows for a new level of code reuse.

I wish Julia had a more Python-like syntax (I don't want to open a can of worms with this comment, but I do feel that Python's popularity is at least to some degree attributable to an aesthetically pleasing syntax), and I wish its syntax were more regular (I'm sure the Mathematica-style function definitions will lead to problems with tooling and linters; C++, to my mind, shows the perils of an overly complex syntax, and Julia, in a well-meaning attempt to entice scientists, risks doing the same). But in every other sense Julia is just fucking awesome.

That said - Crystal is pretty cool too, and if you're a ruby programmer I'm sure it's a compelling language to use.


You're right, but I inferred there was more to the OP's list than those points. They mentioned D, Swift, and Kotlin Native (LLVM-based). The fact that the OP excluded the JVM-based Kotlin (its best-supported platform) made me think that an additional implicit requirement was the production of a small stand-alone static executable, like D, Swift, Nim, and Crystal are able to do.


> Moreover, the typical workflow for writing code is very different from what users of static languages follow (edit, compile, run, repeat)

Would you elaborate on what's different about writing Julia?


Julia's typical workflow [1] is mainly REPL-based: you test ideas in the REPL, accumulate functions in modules, and build larger payloads at the prompt (or in a Jupyter notebook, or using Juno [2] or VSCode). That's a sensible way of working for its main target: scientific computing and number-crunching tasks, where you must often explore and understand your data before being able to implement real algorithms.

[1] https://docs.julialang.org/en/v1/manual/workflow-tips/

[2] https://junolab.org/


Hmm, that’s interesting - I enjoy REPL-based development, having spent a lot of time working in Python, and a little Clojure. How does the REPL experience play with the static typing (that I think) Julia has? If you change the type signature of a function in the REPL what happens?


Julia is a dynamic language; it doesn't have static typing. What happens is that its dynamic nature is actually a superposition of all possible static implementations of a function. So if you call a * b with integers, it will JIT-compile an integer-based version of the product, while if you do it with matrices, it will compile a matrix-product version. Julia Base has 361 implementations of the product, which any library or programmer can freely add to at any point, and the compiler will always match, at compile time, the most specific version defined for the combination of all arguments (multiple dispatch). You also usually don't need type hints when calling methods; the compiler will infer the types by itself and choose the optimal implementation.

If you define one method with the exact same parameters as an older one it will just redefine it. As a side-note, it has safe points where the JIT will work (usually the global namespace, in which the REPL runs each command), and otherwise it can't see newly declared methods (as it is running already compiled code) unless you use a method to force it (like invokelatest or eval). The period between safe-points is called the world age.


Julia is most comfortably used in the manner of a traditional Lisp: with a REPL open the entire time, sending snippets of code from your editor to the REPL for compilation, and then testing things out at the REPL. Sometimes people don't even write out a "main" function meant to be run from the outside world, but leave a few different entry point functions to be called from the REPL.


F#. I hate the syntax, but anyone who enjoys ML will like it. Curly braces are optional.

It also meets all of your other requirements, including the "isn't strictly functional" part (which is somewhat rare).

Scala has most of these qualities, but people say it's extremely complicated and supports too many different paradigms in a single language. I've never used it.


I'd second F#, and in the same vein recommend OCaml, in particular if fast-running native code is a requirement. I haven't benchmarked it enough, but I suspect that in most cases OCaml might be a good deal faster.

I'm also very sympathetic to the non-functional aspects. I think being able to write imperative code easily (and it's surprisingly pleasant in OCaml/F#) is a huge plus.


As always, take these numbers with a grain of salt, but benchmarks seem to suggest that F# on .NET Core is significantly faster than OCaml [1]. I wouldn't be surprised if that were the case myself given how much work Microsoft has put into optimizing .NET Core for performance.

[1] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


And how much work Anthony Lloyd has put into optimizing those programs ;-)

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


> Curly braces are optional.

Why do languages make anything optional? It just leads to confusion and unnecessary style friction. I'm not saying everything should be as strict as Go, but... well, nothing should be optional.


In this case, I have no idea why they did it. Maybe to ease the transition from C-like languages to an ML language. I personally have never seen anyone use braces in F#.

Optional syntax is not at all rare, though.

Examples from various popular languages: white spaces, braces around single statements, trailing commas in lists, inferred type declarations, and parentheses.

There are also multiple ways to write things, like if...else vs. ternary operator vs. pattern matching vs. switch, often in the same language (C# has all of those, with pattern matching only being added recently due to demand).

I agree that I'd prefer that there only be one way to do things, but that's easier said than done. Sometimes users demand new syntax for different contexts or for the sake of terseness.


Does F# offer native builds? I thought it was CLR-only.

Also, Scala is about to jump to a new compiler (and maybe a new language). It's a weird time for Scala (from the little that I know).


>Does F# offers native builds ? I thought it was CLR only.

I would guess that it will eventually support AOT:

https://mattwarren.org/2018/06/07/CoreRT-.NET-Runtime-for-AO...


You can build arch-specific, single-file binaries, but I believe the bulk of the executable is a zip file with the .NET Core runtime and all of your platform-agnostic assemblies, so it's not exactly native in the way you're thinking of.


> What other languages offer a similar profile? D? Swift? Kotlin (via LLVM)?

Nim? :)


D and OCaml are both nice. I like OCaml's syntax, but if you don't, there is always ReasonML, an alternate syntax that is a bit more C-like.


OCaml. Sometimes I feel like the last 20 years of language development have mostly been spent on catching up to OCaml.


Anybody toyed with Crystal on an RPi or even microcontrollers?


Not a lot of options if you don't want curly braces.


Clojure, Haskell, Nim, Python, and I think you could even write JavaScript without using curly braces.


Other than Haskell and Nim, the rest aren't typically compiled to machine code.


Well, C can be used without those:

   #define BEGIN {
   #define END   }


Haha.

To me BASIC syntax always looks like a clunky version of C syntax and the pre-processor code you wrote is just what I have in mind when seeing this.

It always feels like someone wanted to do ML syntax, but didn't go all the way with it.


There are a few Schemes that can compile to C or machine code.


OCaml, maybe?


Biggest turn-off with Crystal for me is no native support for Windows. It’s kind of a big deal.

> However, if you are a Windows user, for the time being you are out of luck

We’ve been waiting for years.


> We’ve been waiting for years.

I know, and it's unfortunate that there's not been time to work on this. I don't think people understand how small a language like Crystal is though. There's just one person working full-time on it currently!

If you'd like Windows support, one thing you could do to help is donate time or money to the Crystal project [1].

[1] https://salt.bountysource.com/teams/crystal-lang and https://opencollective.com/crystal-lang


>> We’ve been waiting for years.

Given the current status of the Windows port (or any other issue), it's no good as a programmer to be "waiting for years" without even donating to said project while still expecting progress. That's like creating a new motor company with zero funding and a small team of fewer than 20, with its users begging for progress on a production-ready concept car in Q4.

> If you'd like Windows support, one thing you could do to help is donate time or money to the Crystal project

That's the sort of attitude I expect from any meritocratic open-source project these days. Anyone interested in the project can donate their time to getting Windows support working, or, if they can't do that, GitHub Sponsors provides the shortest route to donating to the devs (like yourself, RX14) who work on Crystal. No excuses, then, for 'interested programmers' who want to use Crystal to complain for years about Windows support.


There's a lot of people like myself that can code, but aren't at the level of doing Windows ports. Strictly speaking, it would be a very big effort for some folks, so they wait.


The basic foundations of a Windows port are already in place for Crystal, merged into master. In fact, you can already cross-compile a "Hello World" for Windows.

This means the port is in a place where the community can pitch in and help port little bits of the standard library, and help bring it into a release-ready condition.


I'm sure the Crystal devs have done a great job, but again, let me echo: just because I can write code at an intermediate level doesn't mean I'd have a clue where to start or what to do in a project like this. I suppose you can never learn if you don't jump in, but it would be very difficult and time consuming. Maybe if there were a way to specify exactly what needs to be done, those less savvy could be pointed directly towards where to help? I'm not sure how this is really done.


I'd disagree with the point of sponsoring really. In theory it's a cool idea... but if there are not enough devs to even start the effort, there won't be any to maintain it either. Adding windows support now would only slow down development that is happening now. And potentially it could place the authors in an uncomfortable situation where they feel obligated to support something they've got no interest / expertise in.


I'm not sure why you're being downvoted. A car might be a great vehicle - but if you work in the ocean then the car is useless.

Crystal looks like a great language, and clearly the developers aren't prioritizing Windows support - that's not a bad thing. But that does mean people who need to be able to use it on Windows have to look elsewhere.


>I'm not sure why you're being downvoted.......and clearly the developers aren't prioritizing Windows support

Crystal is barely 5 years old, born purely out of a side project and free time. In its early days there was another project aiming for the same thing (I can't remember what that project was called), and both authors discussed how each other's implementations would not work. There were lots of unknowns, but they just kept grinding when they had spare time.

I don't want to make this a rant, but from the tone of it, apart from waiting I am not sure what the OP has done; in fact 99% of the internet somehow has this sense of entitlement that something needs to be done while they have done nothing about it. Donating would be a start, even if it is a dollar a month. But as DHH has put it, he would not be surprised if donations only made it worse, because they make people feel more entitled.


I read your comment and realized I don't think much about what it means to have native Windows support in a programming language. A Crystal issue page [1] talks about two ways to support Windows:

- Cygwin (POSIX)

- Native Windows API support

Though I wonder if targeting .NET would be something to consider. It would not save much effort, compared to Windows API, but in principle, .NET is a portable platform (like POSIX).

Also I wonder why the LLVM project doesn't make it relatively easier to support multiple platforms (including Windows). The whole promise of LLVM is that your prog-lang just targets LLVM (so to speak) and doesn't have to worry about specific platforms.

[1] https://github.com/crystal-lang/crystal/issues/2569


If Crystal added CLR[1] (what you likely meant by .NET) or WebAssembly as a target, I guarantee its popularity would vastly increase.

LLVM would be smart too, but I suspect that having access to the complete .NET ecosystem from a Ruby-like language would be very appealing to devs who worked on Ruby at home (or at startups) and are now at big enterprises that demand big-name stacks.

1. https://en.wikipedia.org/wiki/Common_Language_Runtime


The GP is talking about the standard library support etc. of the compiled objects, while still being native.


> The whole promise of LLVM is that your prog-lang just targets LLVM (so to speak) and doesn't have to worry about specific platforms.

"Platform" in this case mostly means executable format. LLVM is too low-level to be able to abstract away operating system features.

The hard part of porting a project like Crystal isn't generating a valid binary (since LLVM does handle that part well), it's adapting the standard library for dealing with all the places where Windows and POSIX offer different interfaces to userspace - stuff like filesystem semantics, networking behavior, and threading where the system calls can't always be translated in a straightforward way.
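
To make that concrete, here is a contrived Nim sketch (Nim only because its syntax is compact; a Crystal port needs the equivalent branching all over its stdlib) of the kind of per-OS divergence involved. The tempDir helper is hypothetical:

    import os

    # The same high-level API has to map to different OS facilities.
    proc tempDir(): string =
      when defined(windows):
        getEnv("TEMP", r"C:\Temp")   # Windows convention
      else:
        getEnv("TMPDIR", "/tmp")     # POSIX convention

    echo tempDir()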


> Also I wonder why the LLVM project doesn't make it relatively easier to support multiple platforms (including Windows)

LLVM's concern is the generation of machine code. That does indeed work for Windows even with Crystal; you can cross-compile a "Hello, world!" program and it'll run just fine in the Command Prompt.

The actual problem lies in the OS-specific APIs. Windows and Unix are very different operating systems with very different designs and very different feature sets. LLVM can't really do much to fix that.


Having LLVM support is only half of the story, someone needs to make the runtime and standard library work on the platform as well.


>We’ve been waiting for years

Spin up a Linux VM and get developing?

I do .NET for my $DAYJOB and we use Windows for development. It works, but I wouldn't constrain myself only to Windows for development. I have an Ubuntu VM at home that I do dev in for other stuff.


I don't think it's a big deal. Plenty of users aren't on Windows, and plenty of people have no use for code on Windows.


What other languages do you work with that have native Windows support?


Aside from Crystal what languages don't support Windows? You pretty much have to go out of your way to make a language that won't run on Windows.


>You pretty much have to go out of your way to make a language that won't run on Windows.

That's a bit ridiculous. In the group of most desktop operating systems, Windows is the odd ball. Supporting it is a huge hassle, and from a certain point of view with a very small ROI.

I wonder why the Swift threads have far fewer complaints about missing Windows support. Perhaps the absurdity is more obvious in that case?


> In the group of most desktop operating systems, Windows is the odd ball.

Hilarious considering it is, by far, the most used desktop operating system. It's almost like there's a reason small languages stay small...


>Hilarious considering it is, by far, the most used desktop operating system.

You think so? The latest Stackoverflow survey says otherwise: https://insights.stackoverflow.com/survey/2019#technology-_-...

"Linux 53.3%, Windows 50.7% MacOS 22.2%"

Yes, since the question is slightly open, this could refer to target platforms instead of what the actual developers are using on their desktops. Still, a nice-looking datapoint ;)

Anecdotally, as an IT chief I see developers who use Windows struggle with tasks that are simple in Linux and MacOS. There's a reason why a) MacOS won so many developers and why b) Microsoft is spending so many resources making WSL/WSL2.


Everyone picks the survey that suits them best.

> For October 2019 the Linux gaming population on Steam according to Valve was about 0.83%, basically flat compared to September, at least on a percentage term. Meanwhile for the newly-published November figures it comes at 0.81%, or a decline of 0.02%.

https://www.phoronix.com/scan.php?page=news_item&px=Steam-Su...


Yeah, that's a good point, given how Crystal might very well work as a language for making games.


Swift is Apple garbage and there is no expectation it would run on anything outside of iOS.

Crystal not working on Windows indicates to me that they implemented the runtime idiotically: they should have just reused the C and C++ runtime & standard libs, as this would have given them portability from the get-go.

Exposing Win API or .NET is unnecessary, just allow users to call into C code for that nonsense.


Swift runs on Linux just fine. There are web frameworks in swift. Tensorflow is working on Swift support [2]. Yes it is Apple supported but it’s open source and runs plenty of places besides iOS. You may still think it’s “garbage” but you’re misinformed about where it can be used.

[1] https://github.com/vapor/vapor [2] https://www.tensorflow.org/swift


Swift is a corporate-backed language, and everyone knows Apple is trying to lock-in customers (both users and devs).


Windows support is often behind other platforms though. Linux and Mac support tend to get prioritized for new languages.

Not sure if Swift works very well on Windows, for example.


Rephrasing: what other non-mainstream languages do you work with that have native windows support?

I see pretty much 80% of interesting developments happening on *nix first.


I'm having a harder time coming up with a language that doesn't have native Windows support. All of the language runtimes I've worked with (node, ruby, python, java, c#, php) have had native Windows binaries.


bash? (outside of mingw/wsl/cygwin)?


There's a Windows-native version of bash included with the Windows version of Git, no?

But yeah, I'd definitely count MinGW (or more modernly, MSYS2) as "native" in this context.


> There's a Windows-native version of bash included with the Windows version of Git, no?

Not really, as it's built against MSYS2 libs, which are a fork of Cygwin.


> I'd definitely count MinGW (or more modernly, MSYS2) as "native" in this context

What about Cygwin?


Something to keep in mind while looking at the benchmarks in this article:

* Nim's `json` module will parse the full JSON file into memory; now, I might be mistaken, but AFAIK Crystal doesn't do this. The JSON module in the stdlib is good enough for most use cases, but there is also `packedjson`[1], which could be more performant for some use cases

* Regarding the base64 benchmark, you may want to read this: https://forum.nim-lang.org/t/5363 (this article does not include the patches made by treeform)

Benchmarks are easy to game; the only takeaway from an article like this is that, on average, the performance of software written in Crystal and Nim should be about the same.

1 - https://github.com/Araq/packedjson
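
For reference, the in-memory stdlib approach from the first point, sketched against the article's 1.json (assuming a top-level "coordinates" array of objects with an "x" field):

    import json

    # parseJson builds the entire document tree in memory before returning,
    # which is the behaviour described above.
    let doc = parseJson(readFile("1.json"))

    var x = 0.0
    for coord in doc["coordinates"]:
      x += coord["x"].getFloat()
    echo x / float(doc["coordinates"].len)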


The JSON code is written in a way which forces the whole object to be loaded into memory - both in Nim and Crystal. It would be interesting to compare code clarity and benchmarks for a streaming JSON implementation though.

Agree that performance comparisons between Nim and Crystal are going to be largely moot. They're both "fast enough" for their target audiences.


FWIW, my experience with quite a few (usually small(ish)) tasks I implemented in Crystal is in line with the article's.

Once, a CSV-parsing and string-wrangling data batch job in Python indicated it would take 27 hours to finish. I got annoyed and wrote a line-by-line translation in Crystal. Writing it took about 40 minutes. It took a total of 2.5 minutes to run; that's two, almost three orders of magnitude.

As I said, this was a line-by-line translation, so whatever mistakes the Python version had, the Crystal version would have also had them. There are, however, a bunch of specialised Python libraries (numpy et al.) that weren't used, and I guess you could achieve some significant performance increases that way. Coming from Ruby, I just happen to be quite productive in Crystal, to the point where I stumbled over the article's description of it as a "systems language".


I'm not sure why you're comparing Crystal to Python here, but just in case you've misunderstood something, Nim is a completely different language from Python. It borrows some bits of syntax (like I suppose Crystal does with Ruby, but I haven't used either of them), but otherwise it has nothing to do with Python, so a performance comparison of Crystal vs. Python doesn't say anything about Crystal vs. Nim.


There are probably ways to speed it up significantly in plain Python. For example, if you're processing strings you'll probably want to avoid iterating over individual characters and rely more on regular expressions, str.translate or other higher-level mechanisms.

Did you profile the Python code?


Another thing I noticed: the Nim code was compiled without the -d:release flag.

For example, the JSON test was compiled with:

    $ nim c -o:json_test_nim -d:danger --cc:gcc --verbosity:0 json_test.nim
I don't think that -d:danger implies release (even if it is necessary for things like disabling bounds checking)?


It does imply release in the latest Nim versions; `-d:release` still has some checks enabled, while `-d:danger` is full-on release mode with all possible checks disabled.
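
In other words, for a hypothetical app.nim:

    $ nim c -d:release app.nim   # optimized, but keeps some runtime checks
    $ nim c -d:danger app.nim    # implies -d:release and disables all remaining checks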


Ah, thank you for explaining! That makes sense.


I found that I can always optimize stuff in Nim to be near C speed, without doing anything crazy or un-Nim-like.

Stuff that is there might not be optimized yet. I hope as more people comb over the libraries we all get a boost here and there.


I really like this new generation of "C replacements". I guess these are what I was hoping Go would be like but although it got part way there, it never quite fully achieved the combination of power, simplicity and ease of use I was looking for. It would be awesome if one of these would gain enough mainstream support that it didn't feel irresponsible to bring it into mainline use for work projects. For now though I just have to play with them on the side.


I'm a big fan and long-time user of crystal. It's like the good parts of ruby minus the dangerous parts, with actual performance. The OOP system is also very opt-in, in that you can get as granular as Java is if you want, or use modules for everything. The introspection offered by macros is absurdly good as well, and allows for constructions you would only think are possible in interpreted languages. Slices make everything nice as well, especially with string and byte manipulation.

Generally, crystal will let you do what you want to do, often in a number of ways. I definitely couldn't say that about Rust, though I like Rust.


>It's like the good parts of ruby minus the dangerous parts

I like this description. Others have said Ruby allows you to shoot yourself in the foot; I often think maybe there should be gun control in the first place.

Edit: This list is a little old. The proliferation of programming languages (all of which seem to have stolen countless features from one another) sometimes makes it difficult to remember what language you're currently using. This handy reference, although not authored by me, is offered as a public service to help programmers who find themselves in such a dilemma.

TASK: Shoot yourself in the foot.

C: You shoot yourself in the foot.

C++: You accidentally create a dozen instances of yourself and shoot them all in the foot. Providing emergency medical assistance is impossible since you can’t tell which are bitwise copies and which are just pointing at others and saying, “That’s me, over there.”

Algol: You shoot yourself in the foot with a musket. The musket is esthetically fascinating, and the wound baffles the adolescent medic in the emergency room.

Perl: There are so many ways to shoot yourself in the foot that you post a query to comp.lang.perl.misc to determine the optimal approach.

After sifting through 500 replies (which you accomplish with a short perl script), not to mention the cross-posts to the perl5-porters mailing list (for which you upgraded your first sifter into a package, which of course you uploaded to CPAN for others who might have a similar problem, which, of course, is the problem of sorting out email and news, not the problem of shooting yourself in the foot), you set to the task of simply and elegantly shooting yourself in the foot, until you discover that, while it works fine in most cases, NT, VMS, and various flavors of Linux, AIX, and Irix all shoot you in the foot sooner than your perl script could.

Then you decide you can do it better with the new, threaded version…

SNOBOL: You grab your foot with your hand, then rewrite your hand to be a bullet. The act of shooting the original foot then changes your hand/bullet into yet another foot (a left foot).

FORTRAN: You shoot yourself in each toe until you run out of toes, then you read in the next foot and repeat. If you run out of bullets, you continue with the attempts to shoot yourself anyway because you have no exception-handling capability.

Pascal: The compiler won’t let you shoot yourself in the foot.

Ada: After correctly packing your foot, you attempt to concurrently load the gun, pull the trigger, scream, and shoot yourself in the foot. When you try, however, you discover you can’t because your foot is of the wrong type.

COBOL: Using a COLT 45 HANDGUN, AIM gun at LEG.FOOT, THEN place ARM.HAND.FINGER on HANDGUN.TRIGGER and SQUEEZE. THEN return HANDGUN to HOLSTER. CHECK whether shoelace needs to be re-tied.

LISP: You shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds…

Scheme: You shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds the gun with which you shoot yourself in the appendage which holds … but none of the other appendages are aware of this happening.

FORTH: Foot in yourself shoot.

Prolog: You tell your program that you want to be shot in the foot. The program figures out how to do it, but the syntax doesn’t permit it to explain it to you.

BASIC: Shoot yourself in the foot with a water pistol. On large systems, continue until entire lower body is waterlogged.

HyperTalk: Put the first bullet of gun into foot left of leg of you. Answer the result.

Motif: You spend days writing a UIL description of your foot, the bullet, its trajectory, and the intricate scrollwork on the ivory handles of the gun. When you finally get around to pulling the trigger, the gun jams.

APL: You hear a gunshot, and there’s a hole in your foot, but you don’t remember enough linear algebra to understand what happened.

APL: You shoot yourself in the foot, then spend all day figuring out how to do it in fewer characters.

Unix:

    $ ls
    foot.c foot.h foot.o toe.c toe.o
    $ rm * .o
    rm: .o: no such file or directory
    $ ls
    $

csh: You can't remember the syntax for anything, so you spend five hours reading man pages before giving up. You then shoot the computer and switch to C.

Ada: If you are dumb enough to actually use this language, the United States Department of Defense will kidnap you, stand you up in front of a firing squad, and tell the soldiers, “Shoot at his feet.”

Concurrent Euclid: You shoot yourself in somebody else’s foot.

370 JCL: You send your foot down to MIS and include a 400-page document explaining exactly how you want it to be shot. Three years later, your foot comes back deep-fried.

Assembler: You try to shoot yourself in the foot, only to discover you must first invent the gun, the bullet and the trigger. And your foot.

Assembler: Using only 7 bytes of code, you blow off your entire leg in only 2 CPU clock ticks.

Modula2: After realizing that you can’t actually accomplish anything in this language, you shoot yourself in the head.

Visual Basic: You’ll really only appear to have shot yourself in the foot, but you’ll have had so much fun doing it that you won’t care.

SNOBOL: If you succeed, shoot yourself in the left foot. If you fail, shoot yourself in the right foot.

Paradox: Not only can you shoot yourself in the foot, your users can, too.

Access: You try to point the gun at your foot, but it shoots holes in all your Borland distribution diskettes instead.

Revelation: You’re sure you’re going to be able to shoot yourself in the foot, just as soon as you figure out what all these nifty little bullet-thingies are for.

May be this list should be updated with more modern programming languages.


I was curious to reproduce the "benchmark" in Rust, which got me to this code. Mind you, I don't have much experience in Rust, so I'd think this would be the most straightforward translation of the Nim code in the blog post for a beginner.

  use std::fs::File;
  use std::io::Read;
  use std::path::Path;
  use serde_json::Value;
  
  fn main() -> Result<(), Box<dyn std::error::Error>> {
      let mut json_file = File::open(Path::new("1.json"))?;
      let mut json_content = String::new();
      json_file.read_to_string(&mut json_content)?;

      let json_value : Value = serde_json::from_str(json_content.as_str())?;
      let coords = json_value["coordinates"].as_array().expect("Coordinates");
      let len: f64 = coords.len() as f64;
      let mut x: f64 = 0.0;
      let mut y: f64 = 0.0;
      let mut z: f64 = 0.0;
  
      for value in coords {
          x = x + &value["x"].as_f64().expect("X Conversion");
          y = y + &value["y"].as_f64().expect("y Conversion");
          z = z + &value["z"].as_f64().expect("z Conversion");
      }
  
      println!("{}", x / len);
      println!("{}", y / len);
      println!("{}", z / len);

      Ok(())
  }
(fyi: the program ran in 2 seconds on my 7 year old laptop)


You can make this a lot faster assuming the schema is fixed, and clean it up a bit with iterators, i.e. something like this:

    use serde::Deserialize;
    use std::error::Error;
    use std::fs::File;
    use std::io::Read;

    #[derive(Copy, Clone, Deserialize)]
    struct Point {
        x: f64,
        y: f64,
        z: f64,
    }

    #[derive(Deserialize)]
    struct Schema {
        // Vec isn't Copy, so Schema can't derive Copy like Point does
        coordinates: Vec<Point>,
    }

    fn main() -> Result<(), Box<dyn Error>> {
        let mut json_content = String::new();

        File::open("1.json")? // don't need Path
            .read_to_string(&mut json_content)?;

        let schema = serde_json::from_str::<Schema>(&json_content)?;

        let len = schema.coordinates.len() as f64;
        let (x, y, z) = schema
            .coordinates
            .iter()
            .fold((0.0, 0.0, 0.0), |(xs, ys, zs), p| {
                (xs + p.x, ys + p.y, zs + p.z)
            });
        println!("x: {}, y: {}, z: {}", x / len, y / len, z / len);
        Ok(())
    }


While I appreciate the intent, that’s turning beautiful concise code into a very much less readable... thing.

How much faster is that?


It would be more polite and accurate to express that as your opinion rather than objective fact. Personally, I prefer to read the second version. I also expect the techniques shown in it, especially parsing against a type-level schema, would be less error-prone than the original in a larger program.


Haven’t benchmarked but in principle:

Serde is one of the fastest deserializers out there. It excels when it has a defined schema to decode; using the #[derive] syntax should offload the work of the deserializer immensely and negate the cost of looking up fields which may not exist.

As well, using Rust's iterator adaptors like fold can allow the compiler to make better optimizations for things like summing vectors of floats, such as making SIMD/auto-vectorization easier. It's also better at eliminating or reducing bounds checks.

Using iterator abstractions is, in my opinion, much more "beautiful" than for loops, and in Rust it also tends to generate better assembly than indexing into collections like slices and vectors.


Yes, you can do the same optimizations in Crystal, and Nim. Someone should write these and post the code for comparison.
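
To gesture at what that might look like, here is an untested Nim sketch of the same typed-schema idea using the stdlib's `json.to` (note it still builds the full JsonNode tree first, so it mainly avoids the per-field dynamic lookups rather than the allocations):

    import json

    type Point = object
      x, y, z: float

    # Unmarshal straight into a typed seq instead of probing dynamic fields.
    let coords = parseJson(readFile("1.json"))["coordinates"].to(seq[Point])

    var x, y, z = 0.0
    for p in coords:
      x += p.x
      y += p.y
      z += p.z
    let n = coords.len.float
    echo x / n, " ", y / n, " ", z / n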


FYI you can read directly from a file without reading into a buf first. This might be a bit faster:

    let json_file = File::open("1.json").expect("file not found");
    let deserialized: SomeDataType =
        serde_json::from_reader(json_file).expect("error while reading json");


A year ago I did some benchmarks and found that from_reader is much slower than from_slice. There's now a note in the serde_json docs, so it still seems to be an issue.

I guess std::fs::read + serde_json::from_slice is the fastest way to do it, because it avoids the from_reader slowness and an UTF-8 validation.


Thanks for the info, good to know!


Kinda biased benchmarks: you just took the best of Crystal and the worst of Nim from this -> https://github.com/kostya/benchmarks


That statement seems wrong in the strict sense, since the "Havlak" benchmark from your link has a more extreme difference between Crystal and Nim and was not included in the article's tests.

The word "bias" has several meanings. One of them is an accusation that those two benchmarks were chosen specifically to produce the desired outcome. Considering the above, and that Base64 and especially JSON parsing are very common, I think it would be fair to allow for the possibility that the article's author chose them accidentally and not with any intent to deceive.

FWIW I was surprised by the article's focus on interfacing with C, something I have not done in 20 years of programming, and was wondering if it was a (possibly subconscious) choice to set up Nim to win.


Indeed. A micro-benchmark on JSON parsing is a very misleading way to compare languages.


There are interesting summaries there, but when it comes to performance, the article compares the speed of the JSON-parsing and Base64-encoding library implementations in Nim and Crystal. It is not clear at all whether those lessons apply to all or most code as well.


Additionally, the Nim code was not compiled with many optimizations turned on! (I.e., without -d:release).

    $ nim c -o:base64_test_nim -d:danger --cc:gcc --verbosity:0 base64_test.nim

    $ nim c -o:json_test_nim -d:danger --cc:gcc --verbosity:0 json_test.nim
IIRC the -d:danger flag is necessary for some optimizations (like disabling bounds checking) but -d:release is necessary for most optimizations to be enabled.

Edit: It appears I'm incorrect, -d:danger does imply -d:release in newer Nim versions.


There's definitely a continuous focus on performance in Crystal; every release has some PR improving performance in various areas.

Also the good thing about the macro system is that you're trading off slower compilation for faster execution. A lot of the code you're using in Crystal which looks dynamic is actually just simple macros compiled to the bare minimum of code at the end.


Right. When the Nim program is translated to Python it's only 20% slower, which seems unlikely to apply more generally.


When will you have first-class support for Windows?


Unfortunately there's no concrete timeline, it's being worked on on-and-off by the community.

If you have free time, or free money, it'd be a great help to the effort.


Realistically, what amount of time is needed to make Windows support a first-class citizen?


Schedules are challenging to estimate even when you can count on continuous, stable efforts. That does not seem to be the case here.


First class? About a girl-year or two I'd guess. Fully usable? About half that.


What’s a “girl-year”?


A less male gender focused way of saying a “man-year”, a common term to imply the amount of effort, rather than an exact duration.

If someone says it’ll take 3 person-years (just to make it gender agnostic), that means if only one person was working, it’d take about 3 years. It implies that theoretically 3 people could do it in one year, 6 people in 6 months.

It’s not a definitive statement, especially if one has read mythical man month and buys into it, but gives a very tough scoping.


Interesting choice of words beyond just gender. 'Girl' is roughly analogous to 'boy', both implying reduced age. Considering how proficient teenagers are these days, I suppose the terminology is not that far-fetched.


This is what threw me for a loop. I speak in person-hours at work even when other people speak of man-hours, but hearing someone speak in girl or boy-hours is a bit jarring due to the implications around age and experience.


I didn’t read that much into it. Now I’m curious if the OP had some secondary motive beyond using a different gender pronoun.


My secondary motive is that I'm a girl :)

I certainly wasn't much more than sleepy and wanting to break the stereotypes on that particular expression when I wrote it.


Presumably the amount of work that a girl can do in one year. Similar to the "man-hour/month/year" unit.


Sorry if my English is broken and I may sound rude but this is my concern for the team.

Other than free time or money, I'm curious whether the main issue is due to a complex testing process on Windows within the community?

Concerning support for your community: how responsively will the team resolve issues on Windows even once it is ready? As far as I can tell, you need a project manager to drive the progress. If there is a dip in quality, consider rebuilding Crystal from scratch, inspired by Go's roadmap; otherwise you will simply throw a huge amount of money and resources at contributors with no ROI, where Go succeeded through simplicity.

Even if you had big funding at your disposal to fill in other missing features and APIs: Zig already works on Windows, and the V language is moving hastily to support Windows with an even smaller amount of donations. How did they get the right community to support this? I think Crystal's inner workings can be tedious to work on. C is at least the language most programmers know.

Someone on your team ought to give serious thought to sorting this out, or at least scale down to a Crystal core to keep its simplicity: why choose LLVM in the first place when it can become complex for a small team, while also depending on Ruby's new features that you have to keep adding? Go chose simplicity and did not accept new features for 1.x, even though they have a vast amount of cash and software engineers.

To put it as another opinion: I don't see how Crystal should be benchmarked against Go and others if they differ vastly in feature capabilities. So in this case, what made you think of comparing them?


Since Windows support isn't at parity with other platforms, how can we make sure that our monetary contribution will fully go to the Windows work? Or is it that only 1% of it will make it to the Windows budget?


I can't imagine donations working that way for any organization unless you're donating a considerable sum. Every organization has its own priorities and donations help them meet those priorities. If you have a cool million lying around it might help to convince them to make Windows support a priority, but if not, then all you can do is to donate in hopes of helping them focus on knocking out priorities so that they can get to your pet issue sooner.


True in that small individual donations are unlikely to fund it. Sponsoring a feature is a thing in open source, either through a bounty site or a crowdfunding site.


While bounties have existed for a long time, why is it progressing so slowly?


Maybe there just isn't much interest in windows support?

Crystal is a fantastic language for server-side programming where it pairs the performance of Golang with the expressiveness of Ruby.

I imagine most Crystal users love it for exactly that reason and have little interest in development resources being diverted to a platform that they have no use for.


IMO, even if 100% of the donations would fund exclusively the Linux version, it would help Windows development anyway, although indirectly. More projects using Crystal on Linux would create more media coverage, hence in the long run more interest for better Windows support, either from the core devs or from external entities. Unless one desperately needs feature X by tomorrow, which would of course be a different story.


If you reach out to Manas directly, I'm sure they will be happy to have a sponsor for a particular feature.


I'd more than love to see Crystal growing in popularity. That'd be a great option for Ruby devs to tackle tasks requiring high performance.


I'm curious how both would compare to D.


Crystal is a bit faster at prime-number crunching than D. I haven't done any optimizations, just a loop, math, push and hash ops. It's on par with Go, which is also very fast. Compilation with the release option is slower.


I really like some of the features of D for userspace programs: optional GC for hot paths, the ability to write raw ASM if needed, and familiarity for Java/C++ folks. Really wish it would take off more.


I like it as well. It would probably be my first choice of a general use statically typed compiled language.


If you are interested in speed, here is a huge list of similar benchmarks: https://github.com/kostya/benchmarks


OT: This is the first time I've seen "Energy" being measured in a benchmark.

That strikes me as a really cool idea, both because it measures CPU usage over time, a figure that has always been of interest for such comparisons, as well as highlighting the ecological and economical aspects of computing.


Does Nim have anything like NumPy for SSE-accelerated vector math?


Yes.

You can even use numpy in nim: https://forum.nim-lang.org/t/4102

But why use that when you can use Cuda and OpenCL accelerated vector math: https://github.com/mratsim/Arraymancer

If that's too much speed, you can just roll your own for loops. Because Nim is compiled with battle-tested GCC, LLVM, or VC++, it will try to SSE-optimize your code if you pass the right switches. If you know what CPU your computer/server has, you can compile with the newest high-performance instruction sets like AVX2, or even the newest AVX-512 VNNI...
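
For instance, with GCC as the backend, you can pass -march=native through to the C compiler (a sketch with a hypothetical app.nim):

    $ nim c -d:danger --passC:"-march=native" app.nim   # let GCC use every instruction set the build machine supports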


Because my laptop's GPU is Intel GMA 4500M HD, it neither supports CUDA nor OpenCL.


> As you can see; in this case Crystal is the more performant language – taking less time to execute & complete the test, and also fewer Megabytes in memory doing so.

Yeah, no: all you have proven is that one JSON library is less efficient than the other. I bet that if someone spent some time optimizing the heck out of the slower one, the implementations could probably be equally fast and use memory in the same range.


I feel like these are 2 very different languages with different goals. Both are cool, but I don't think it's really fair to compare them.



JSON is a really bad way to benchmark a language because it's often implemented in a different language.

Much better to implement a small program entirely in that language, along the lines of the benchmark game.


JSON is actually implemented entirely in Crystal for Crystal, I'm pretty sure it's the same for Nim too.

For a language which claims to be fast, having to use a C-based JSON parser would be a bit of a cop-out :)


But using an existing parser that happens to be written in C could definitely be sensible.


Yes, we do so for more complex formats like YAML and XML. However, for JSON, reimplementing is worth it for binary portability.


That sounds like a pretty awful idea. If you're using a C library for parsing JSON then you must add an additional C to Nim/Crystal conversion step that requires additional RAM and CPU time.


Nim compiles to C, there is no cost in calling C.
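
For example (a minimal sketch, assuming a hypothetical square.c that defines `double square(double x)`):

    # square.c is compiled and linked straight into this executable.
    {.compile: "square.c".}

    # Declare the C symbol; the generated C code calls it directly, with no wrapper layer.
    proc square(x: cdouble): cdouble {.importc, cdecl.}

    echo square(3.0)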


That's not how people use C libraries.


That's not how it works.


It still compares algorithm implementations, not language speed.



