
What does GPU-aware mean in terms of a registry? Will `uv` inspect my local GPU spec and decide what the best set of packages would be to pull from Pyx?

Since this is a private, paid-for registry aimed at corporate clients, will there be an option to expose those registries externally as a public instance, but paid for by the company? That is, can I as a vendor pay for a Pyx registry for my own set of packages, and then provide that registry as an entrypoint for my customers?



> Will `uv` inspect my local GPU spec and decide what the best set of packages would be to pull from Pyx?

We actually support this basic idea today, even without pyx. You can run (e.g.) `uv pip install --torch-backend=auto torch` to automatically install a version of PyTorch based on your machine's GPU from the PyTorch index.
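For reference, the auto-detection and the explicit per-backend selection look like this (a sketch; backend names such as `cu128` are examples from uv's documentation, and the available set depends on your uv version):

```shell
# Detect the local GPU and install a matching PyTorch build
# from the corresponding PyTorch index
uv pip install --torch-backend=auto torch

# Or pin a backend explicitly, e.g. CUDA 12.8 or CPU-only
uv pip install --torch-backend=cu128 torch
uv pip install --torch-backend=cpu torch
```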

pyx takes that idea and pushes it further. Instead of "just" supporting PyTorch, the registry has a curated index for each supported hardware accelerator, and we populate that index with pre-built artifacts across a wide range of packages, versions, Python versions, PyTorch versions, etc., all with consistent and coherent metadata.

So there are two parts to it: (1) when you point to pyx, it becomes much easier to get the right, pre-built, mutually compatible versions of these things (and faster to install them); and (2) the uv client can point you to the "right" pyx index automatically (that part works regardless of whether you're using pyx, it's just more limited).

> Since this is a private, paid-for registry aimed at corporate clients, will there be an option to expose those registries externally as a public instance, but paid for by the company? That is, can I as a vendor pay for a Pyx registry for my own set of packages, and then provide that registry as an entrypoint for my customers?

We don't support this yet, but it's come up a few times with users. If you're concretely interested, feel free to email me (charlie@).


Is there an intention to bring the auto backend selection to the non-pip interface? I know we can configure this as you show in https://docs.astral.sh/uv/guides/integration/pytorch/, but we have folks on different accelerators on Linux, and remembering `uv sync --extra cu128` at the right time is fragile, so for now we just make CPU-only folks carry the CUDA overhead too.
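For context, the extras-based setup from that guide looks roughly like this (a sketch; the index names and the `cu128` extra are illustrative, adapt them to your accelerators):

```toml
[project.optional-dependencies]
cpu = ["torch>=2.6.0"]
cu128 = ["torch>=2.6.0"]

[tool.uv]
# The CPU and CUDA extras are mutually exclusive
conflicts = [[{ extra = "cpu" }, { extra = "cu128" }]]

[tool.uv.sources]
torch = [
  { index = "pytorch-cpu", extra = "cpu" },
  { index = "pytorch-cu128", extra = "cu128" },
]

[[tool.uv.index]]
name = "pytorch-cpu"
url = "https://download.pytorch.org/whl/cpu"
explicit = true

[[tool.uv.index]]
name = "pytorch-cu128"
url = "https://download.pytorch.org/whl/cu128"
explicit = true
```

With this, each user picks the right extra at sync time (e.g. `uv sync --extra cu128`), which is exactly the manual step the comment above would like to see automated.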

(As always, big fans of Astral’s tools. We should get on with trying pyx more seriously)


Hi Charlie,

What happens in a situation where I have access to a login node from which I can install packages, but the compute nodes don't have internet access? Could I describe the target system in some hardware.toml and install for it, even if my local system is different?

To be more specific, I'd like to run `uv --dump-system hardware.toml` on a compute node and then, on the login node (or my laptop, for that matter), just run `uv install my-package --target-system hardware.toml` and get an environment I can copy over.


Yes, we let you override our detection of your hardware. We haven't implemented dumping the detected information on one platform for use on another, but it's definitely feasible; e.g., we're exploring a static metadata format as part of the wheel variant proposal: https://github.com/wheelnext/pep_xxx_wheel_variants/issues/4...


I love curated, consistent, and coherent metadata.

Is the plan to also provide accurate (curated) metadata for security and compliance purposes?



