I think the problem is simply that CSS is too restricted to style a fixed piece of HTML in any way you want. In practice, achieving some desired layouts requires changing the HTML structure (for example, CSS alone cannot reparent an element into a different container, and many grid/flex tricks only work between siblings). The missing layer would be something that can change the structure of the HTML, like JS or XSLT. In modern frontend development you already have the data defined in some JSON, and HTML + CSS combined together form the presentation layer, which can't really be separated.
The way I actually have things set up, in case it helps: I don't change my default shell. I pretty much default to working within tmux. So I kept my default shell as whatever the OS ships, and then in my tmux config I have:
# set shell
set -g default-shell /opt/homebrew/bin/fish
This means that when I start my terminal, it drops me into zsh (the macOS default). Then when I run tmux, it opens fish. The nice thing is that fish inherits the environment from zsh.
I have my .zshrc and my .bashrc source a .shellrc file which contains most of my env stuff. This keeps random utilities that write to .bashrc and .zshrc working within fish too.
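A minimal sketch of that layout (paths and values here are just examples; .shellrc has to stay POSIX-compatible since both bash and zsh source it):

# ~/.shellrc -- shared environment setup, POSIX sh syntax only
export EDITOR=vim
export PATH="$HOME/bin:$PATH"

# at the top of both ~/.zshrc and ~/.bashrc:
[ -f "$HOME/.shellrc" ] && . "$HOME/.shellrc"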
I think `zsh -l` starts a login shell, which does not load .zshrc (zsh only reads .zshrc for interactive shells), so oh-my-zsh doesn't get initialized. Try `zsh -ic exit` and it should load .zshrc before executing exit.
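For anyone who wants to reproduce the comparison (a sketch; absolute timings will obviously vary per machine):

# login, non-interactive: .zprofile/.zlogin run, .zshrc is skipped
time zsh -lc exit
# interactive: .zshrc (and thus oh-my-zsh) is sourced first
time zsh -ic exit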
Valid points, I learned something new today. Thanks, you were right. Using the -ic flags I'm getting around 300 ms... Interesting how I never noticed; I guess I don't open many terminals during the day.
I don't think it is generally possible to escape from a Docker container in the default configuration (e.g. `docker run --rm -it alpine:3 sh`) if you have a reasonably up-to-date kernel from your distro. AFAIK a lot of kernel LPEs use features like unprivileged user namespaces and io_uring, which are not available in a container by default, and truly unprivileged kernel LPEs seem to be sufficiently rare.
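You can see one part of this from inside a default container (a sketch; assumes stock Docker with its default seccomp profile, and pulls util-linux since the base image may not ship unshare):

# the default seccomp profile denies creating a new user namespace
# without CAP_SYS_ADMIN, so this is expected to fail with EPERM
# ("Operation not permitted"):
docker run --rm alpine:3 sh -c 'apk add -q util-linux && unshare -U true'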
The kernel policy is that any distro that isn't using a rolling release kernel is unpatched and vulnerable, so "reasonably up-to-date" is going to lean heavily on what you consider "reasonable".
LPEs abound - unprivileged user ns was a whole gateway that was closed, io_uring was hot for a while, eBPF is another great target, and I'm sure more will be found every year, as has been the case. Seccomp and unprivileged containers etc. make a huge difference in stomping out a lot of the attack surface; you can decide how comfortable you are with that, though.
>The kernel policy is that any distro that isn't using a rolling release kernel is unpatched and vulnerable, so "reasonably up-to-date" is going to lean heavily on what you consider "reasonable".
I would expect major distributions to have embargoed CVE access specifically to prevent this issue.
Nope, that is not the case. For one thing, upstream doesn't issue CVEs and doesn't really care about CVEs or consider them valid. For another, they forbid or severely limit embargoes.
To be honest, there are two ways to solve the problem of xkcd 2347: either put effort into the very small library or just stop depending on it. Both solutions are fine to me, and Google apparently just chose the latter here.
If not depending on a library is an option, then you don't really have an xkcd 2347 problem. The entire point of that comic is that some undermaintained dependencies are critical, without reasonable alternatives.
If being used in a CTF counts, then running the latest Docker with no extra privileges and a non-root user on a reasonably up-to-date kernel meets the definition of secure, I think. At least from what I have seen, this kind of infrastructure is pretty common in CTFs.
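Concretely, something along these lines (a sketch; real CTF deployments usually add resource limits and network rules on top):

# non-root user, all capabilities dropped, no setuid re-escalation;
# the default seccomp and AppArmor profiles are left in place
docker run --rm -it \
  --user 65534:65534 \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  alpine:3 sh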
For Python specifically, the uuid4 function does use randomness from os.urandom, which is supposed to be cryptographically secure on most platforms.
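You can check this yourself, since the uuid module is pure Python in CPython:

python3 -c 'import inspect, uuid; print(inspect.getsource(uuid.uuid4))'
# on current CPython this prints something like:
#   def uuid4():
#       """Generate a random UUID."""
#       return UUID(bytes=os.urandom(16), version=4)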
I think the problem is that some local servers are not really designed to be as secure as a public server. For example, a local server might have a stupid unauthenticated endpoint like "GET /exec?cmd=rm+-rf+/*", which is obviously exploitable, and the same-origin policy does not prevent that.
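To spell out why same-origin doesn't help: the policy stops a page from reading a cross-origin response, not from sending the request in the first place. Any web page you happen to visit could fire that GET with a single tag (the localhost port here is hypothetical):

<img src="http://127.0.0.1:8000/exec?cmd=rm+-rf+/*">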
Yes. Passkeys help with the bad password problem. That’s a big deal but doesn’t magically solve everything.
To address other security risks more comprehensively, you need a tight issuance process and something key-based in hardware. I'm working on a project where we deploy YubiKeys or similar, with an audit trail of which key is used by whom.
High-trust environments need things like enterprise attestation and a solid issuance process to meet the control requirements. Back in the day, the NIST standards required a chain-of-custody log for the token; you could only use in-person delivery or registered mail to send them.
That’s overkill, but the point is the technology is only one part of the solution for these problems.
Within the larger spec, you can whitelist a set of known devices, e.g. only allow YubiKeys, which would prevent the private key material from ending up in your password manager.
You can, but the server can require device attestation during registration, proving that you're actually using a YubiKey or whatever. That isn't possible with TOTP.
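Roughly what that looks like on the client (a sketch; the real challenge and user handle must come from the server, which then has to verify the attestation statement):

// WebAuthn registration options requesting a "direct" attestation
const credential = await navigator.credentials.create({
  publicKey: {
    rp: { name: "Example Corp" },
    user: {
      id: new Uint8Array(16),        // placeholder user handle
      name: "alice@example.com",
      displayName: "Alice",
    },
    challenge: new Uint8Array(32),   // placeholder; must be server-generated
    pubKeyCredParams: [{ type: "public-key", alg: -7 }],  // ES256
    attestation: "direct",           // ask the authenticator to prove what it is
  },
});
// the server then checks the attestation and can reject any AAGUID
// that isn't on its allowlist (e.g. known YubiKey models)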
It doesn't need it if this vulnerability is the only one you're worried about (remote websites), but it'd be nice to have before letting it use e.g. your GitHub account. This is how VS Code extensions work, for example, and it's pretty nice.