Hacker News | edubart's comments

This is cool because it avoids emulation. However, I think it has many shortcomings today which could all be solved by emulating a real CPU architecture (e.g. memory protection support, an ecosystem with tooling and Linux distributions).

By the way, I have developed a similar project, WebCM, a RISC-V emulator capable of running full Alpine Linux that can be embedded in the web browser and can reach up to 500 MIPS for some users, which I think is pretty fast despite the emulation. You can try it at https://edubart.github.io/webcm/. Booting is also fast: it always boots from scratch when you open the page, so startup is quick even with emulation.


That indeed feels fast, awesome stuff!


That is excellent!


You can try https://edubart.github.io/webcm/

WebCM is a serverless terminal that runs a virtual Linux directly in the browser by emulating a RISC-V machine.

It's powered by the Cartesi Machine emulator, which enables deterministic, verifiable and sandboxed execution of RV64GC Linux applications.

It's packaged as a single 24MiB WebAssembly file containing the emulator, the kernel, and the Alpine Linux operating system.

It comes with Bash, many programming languages (e.g. Lua, MicroPython), and CLI utilities (htop, vim). It has no internet connection.

Disclaimer: I created it.


Main differences in emulation context:

- The Cartesi Machine can run any Linux distribution with RISC-V support; it emulates the RISC-V ISA, while CheerpX emulates x86. For instance, the Cartesi Machine can run Alpine, Ubuntu, Debian, etc.

- The Cartesi Machine performs full Linux emulation, while CheerpX emulates only Linux syscalls.

- The Cartesi Machine has no JIT; it only interprets RISC-V instructions, so it is expected to be slower than CheerpX.

- The Cartesi Machine is isolated from the external world and this is intentional, so there is no networking support.

- The Cartesi Machine is a deterministic machine, with the possibility of taking a snapshot of the whole machine state so it can be resumed later; CheerpX was not designed for this use case.


Perhaps if it's possible to generate a RISC-V CPU in 5 hours, it's also possible to generate a JIT for RISC-V with a similar approach? https://news.ycombinator.com/item?id=36566578

"Show HN: Tetris, but the blocks are ARM instructions that execute in the browser" (2023) https://news.ycombinator.com/item?id=37086102 ; emu86 supports WASM, MIPS, RISC-V but not yet ARM64/Aarch64

Is there a "Record and replay" or a "Time-travel debugger" at the emulator level or does e.g. rr just work because Cartesi Machine is a complete syscall emulator?


> Destructors are avoided

RAII in general is avoided because supporting it brings many unwanted consequences, increasing the language's complexity and going against its simplicity goals.

> Unless you enable a particular flag, then you have to do all memory management manually, making code non-portable.

You can write portable code that works with or without the GC; the standard libraries do this. Of course it is more work to support both, but you usually don't need to: choose your memory model depending on your problem requirements (realtime, efficiency, etc.) and stick with one.

> Why on earth wouldn't you just use reference counting with destructors?

Reference counting is not always ideal because it has overhead, and the language aims not to be slower than C; relying on reference counting would have a significant impact. Users can still do manual reference counting if they need to, like some do in C. Also, reference counting requires some form of RAII, which is out of the goals as already explained.

> Avoids LLVM because 'C code works everywhere', then doesn't support MSVC.

MSVC-Clang can be used; native MSVC is just not supported directly from the Nelua compiler, but users can grab the generated C file and compile it themselves with MSVC. Better MSVC support is not provided simply for lack of time and interest. Nelua does support many C compilers, such as GCC, TCC, and Clang. Supporting multiple backends rather than just LLVM is still better than not officially supporting MSVC.

> 1-indexed in libraries copied from Lua, 0-indexed elsewhere.

1-indexing is used in just a few standard library functions for compatibility with Lua-style APIs. In daily use this usually matters little; you can make your code 1-indexed in Lua style or 0-indexed in systems programming style, it's up to you. The language itself is agnostic to 0/1-indexing; only some libraries providing Lua-style APIs use 1-indexing, and you can ignore them entirely, or even skip the standard libraries and bring your own (like in C).

> Preprocessor model instead of the features being a more integrated component of the language. Aforementioned preprocessor directives get seriously wedged. You need them for polymorphism, varargs, etc.

The language specification and grammar can remain more minimal by having a capable preprocessor, instead of accumulating syntax, semantics, and rules. Also, the Nelua preprocessor is not really just a preprocessor: it exposes the context of the compiler itself while compiling, which gives the language powerful meta-programming capabilities. Calling it a preprocessor does not do it justice, but the name is used for lack of a better one.

> No closures. This one seems the most baffling to me because Lua has the __call metamethod, which is exactly what you'd use for that, and they're 99% of the point of anonymous or inner functions.

The language is under development and not officially released yet; this feature is not available yet, but it is on the roadmap. Nevertheless, people can code fine without closures; many code in C without closures, for example.


Having some very widely used things be 0-indexed and others 1-indexed kind of points to a problem in the language, regardless of whether it's technically a language syntax issue:

At the end of the day you're still going to have a lot of time wasted by devs having to check, and there will eventually be bugs from things not being converted, or double converted, between the two index schemes.


Hello, are you the language author? I have a question; the page says

> Safe

> Nelua tries to be safe by default for the user by minimizing undefined behavior and doing both compile-time checks and runtime checks.

And

> Optional GC

> Nelua uses a garbage collector by default, but it is completely optional and can be replaced by manual memory management for predictable runtime performance and for use in real-time applications, such as game engines and operational systems.

But of course, if one prefers manual memory management, then the code will be unsafe, right? Because use-after-free might occur.

(More specifically, free() is always unsafe in every low level lang, unless you have some static checking like Rust's borrow checker or ZZ and ATS compile-time proofs, which I think nelua doesn't have.)


You are correct: when not using the GC and doing manual memory management the code will be unsafe, similar to C, but the language generates some runtime checks (which can be disabled) to minimize undefined behavior. Nelua does not provide safety semantics like Rust because that would increase the language's complexity a lot, so it's out of the goals. Nelua is not another safe language, and the compiler will not stand in your way when doing unsafe things; the user can do unsafe things like in C.


Okay, thanks! And props for making a simple language.


Terra was an inspiration for Nelua's metaprogramming aspects. Terra's famous Brainfuck example (on Terra's front page) is available in Nelua's examples folder for a metaprogramming comparison.


Teal adds type annotations and type checking to Lua and transpiles to Lua, while Nelua is a new systems programming language with optional type annotations that compiles to C.

@ubertaco also made a good comparison in his comment with Teal.


> I wish C was scriptable

C kinda can be used as a scripting language with the MIR project: https://github.com/vnmakarov/mir

It had its 0.1 release just a few days ago, and I've successfully used it as an alternative, fast C compiler with Nelua.


Actually, Nelua has been tested to work with Arduino, AVR, RISC-V, and some other embedded devices. You can write freestanding code with the language if you follow some rules (avoid APIs that use libc, and use a custom allocator).


> how do I make higher-kinded types

In Nelua you can create new specialized types using compile-time parameters; these are called "generics". You can use this, for example, to create a specialized vector type.

> and typeclasses there?

Yes, but with some metaprogramming. In Nelua you can create a "concept", which is just a compile-time function called to check whether a type matches the concept.


Is there a monoid example implemented in Nelua?


> In fact, I see many similarities between Nelua and Nim.

I am the Nelua author, and I used Nim for a reasonable amount of time before creating Nelua, so Nim served as one of the inspirations for the project and they share similarities. But while Nim resembles Python, Nelua resembles Lua. Also, when comparing the two, Nelua tries to be more minimal and to generate more readable, compact C code.


Nice! It looked like there were a lot of similarities. Fun-looking language you made.

