droelf's comments | Hacker News

I've been using unikraft (https://unikraft.org/) unikernels for a while and the startup times are quite impressive (easily sub-second for our Rust application).


What drove you to choose that over something like containers?


Yeah, boot time, isolation (proper VM vs containers), and ease of use on a larger Hetzner box.


Did you notice a substantial difference in those factors between more traditional micro VMs that use OCI images (like Firecracker) and unikernels?


Shorter cold-boot times.


If we’re talking about cold boot times, wouldn’t the relevant metric for unikernels be the hypervisor’s boot time?


How would that compare with containers running on Firecracker or other virtio-based μVMs?


A unikernel on Firecracker is probably going to start faster than a container on Linux on Firecracker.


I assume they meant using an OCI image for the rootfs of a firecracker VM, not running a container inside a firecracker VM.

Still difficult to see how the unikernel could be slower, but I doubt the difference would be huge? Don't have anything to back that up though.


Fast boot-up means nothing if your agent/app is slow at runtime (due to virtualization tax or QEMU emulation). Fast boot-up is a PR term, one that can easily be optimized for, compared to designing a better virtualization layer that performs near bare metal.


Wouldn't faster boot times mean that scale-out can be done on-demand? Whether this is preferable or not over poorer runtime performance is up to the domain, no?


When scaling out, edge latency will overshadow kernel boot-up times: speeding up boot-up from 1.5s to 150ms will not have any perceptible impact on app performance when scaling at the edge to meet demand.


Cool! Emscripten-forge also recently got an R distribution that runs natively in the browser: https://blog.jupyter.org/r-in-the-browser-announcing-our-web...


Pixi works for this use case: https://pixi.sh/latest/

It gives you cross-platform binary packages, quickly (and it's also written in Rust).
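
For context, a minimal pixi manifest is just a TOML file, something roughly like this (table and field names are from memory and vary a bit between pixi versions, so treat it as a sketch rather than copy-paste material):

    # sketch of a minimal pixi.toml; exact table names depend on the pixi version
    [workspace]
    name = "my-project"
    channels = ["conda-forge"]
    platforms = ["linux-64", "osx-arm64", "win-64"]

    [dependencies]
    python = "3.12.*"
    cmake = ">=3.28"

With that in place, `pixi install` resolves the same dependencies into a lockfile for every listed platform.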


I think for a small configuration TOML might be fine. Where it breaks down, in my opinion, is in larger configuration files. It just becomes pretty unreadable.

Think about a GitHub Action written in TOML ... it would probably not look great!
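
To make that concrete, here's a rough sketch of what a GitHub-Actions-style workflow could look like if someone expressed it in TOML (purely hypothetical, GitHub doesn't accept this format):

    # purely hypothetical: a GitHub-Actions-like workflow written in TOML
    [on.push]
    branches = ["main"]

    [jobs.test]
    runs-on = "ubuntu-latest"

    [[jobs.test.steps]]
    uses = "actions/checkout@v4"

    [[jobs.test.steps]]
    name = "Run tests"
    run = "cargo test --all-features"

Every item in a nested list needs its own [[...]] header with the full key path, which is exactly where the readability falls apart compared to YAML.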


We've been working on Pixi, which uses Conda packages. You can control versions precisely and also easily build your own software into a package to ship it. I'd be curious to chat about whether it could be useful as an alternative to `mise`.


We've been building `pixi` and more specifically "pixi global" as a replacement for homebrew, but based on Conda packages. It creates a single virtual environment per globally installed tool (deduplication works by hard-linking) and then links out the binaries from the isolated environments to a single place.

It's written in Rust and quite fast: https://pixi.sh


You should really try `pixi` and `pixi global` - it uses Conda packages but is much faster, and gives a great experience for installing packages globally.

https://pixi.sh


I think this is really cool. We're tackling this problem from the other side by building `pixi` (pixi.sh), which bundles project / package management with a task runner, so that any CI job should be as simple as `pixi run test` and just as easy to execute locally or in the cloud.
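
As a rough sketch (task names and commands are made up here, and field names are from memory), the task side of a pixi.toml is just:

    # hypothetical tasks table in a pixi.toml; `pixi run test` executes the "test" entry
    [tasks]
    lint = "ruff check ."
    test = { cmd = "pytest -x", depends-on = ["lint"] }

The same file also pins the dependencies those tasks run against, which is what makes the local and CI invocations identical.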


My team has a setup that sounds essentially the same, using Nix via devenv.sh. We deterministically bundle and run everything from OpenTofu and all its providers to our programming language runtimes to our pre-commit hooks this way, and it also features a task runner that builds a dependency graph, runs things in parallel, and so on.

Our commands for CI are all just one-liners that go to wrappers that pin all our dependencies.

Lately I've been working with a lot of cross-platform Bash scripts that run natively on macOS, WSL, and Linux servers, with little to no consideration for the differences. It's been good!


I'd be interested to know more about a team that uses Nix and Guix. Is there a website or email address an interested party can contact?


That's not really what's new or special about pixi, is it? Poetry (poethepoet) and uv can both do variations of this.

From the outside, pixi looks like a way to replace (conda + pip) with (not-conda + uv). It’s like uv-for-conda, but also uses uv internally.

Better task running is cool, but it would be odd to use pixi if you don’t otherwise need conda stuff. And extra super duper weird if you don’t have any python code!


Interesting project. We'd love to work together on better Lua support in Pixi (through the conda-forge ecosystem). We already package lua and a few C extensions. C extensions are the bread and butter for Pixi, so I think it could be a good fit!

- Docs: pixi.sh
- `lua` package on the registry: https://prefix.dev/channels/conda-forge/packages/lua


That sounds like a great idea! I've opened [an issue](https://github.com/nvim-neorocks/lux/issues/550) in our repo. Feel free to ping us there :)


Thanks for bringing up conda. We're definitely trying to paint this vision as well with `pixi` (https://pixi.sh) - which is a modern package manager, written in Rust, but using the Conda ecosystem under the hood.

It follows more of a project-based approach and comes with lockfiles and a lightweight task system. But we're building it up for much bigger tasks as well (`pixi build` will be a bit like Bazel for cross-platform, cross-language software builds).

While I agree that conda has many shortcomings, the fundamental packages are alright and there is a huge community keeping the fully open source (conda-forge) distribution running nicely.


I just want to give a hearty thank you for pixi. It's been an absolute godsend for us. I can't express how much of a headache it was to deal with conda environments in student coursework and research projects in ML, especially when a student leaves and another builds upon their work. There was no telling if the environment.yml in a student's repo was actually up to date or not, and more often than not it didn't include actual version constraints for dependencies. We also provide an HPC cluster for students, which brings along its own set of headaches.

Now, I just give students a pixi.toml and pixi.lock, and a few commands in the README to get them started. It'll even prevent students from running their projects, adding packages, or installing environments when working on our cluster unless they're on a node with GPUs. My inbox used to be flooded with questions from students asking why packages weren't installing or why their code was failing with errors about CUDA, and more often than not, it was because they didn't allocate any GPUs to their HPC job.
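
(For anyone curious how the GPU gating works: as far as I understand it boils down to a system-requirements entry in the manifest, roughly like the sketch below; key names are from memory, so double-check the pixi docs.)

    # sketch: pixi refuses to install or run this environment on nodes without a CUDA driver
    [system-requirements]
    cuda = "12"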

And, as an added bonus, it lets me install tools that I use often with the global install command without needing to inundate our HPC IT group with requests.

So, once again, thank you.

