Doesn't match my experience at all. I love that packages are always up to date. Also, in my experience, having a rolling release cycle leads to significantly fewer issues than having to upgrade everything at once. Using Arch has been a net time saver for me compared to something like Ubuntu where I've wasted a lot of time fighting package upgrade issues and trying to get newer package versions than the distro provides.
In my limited experience, how Arch works out depends on how you're using Linux. If it's your daily driver and you stay on top of updates it's probably going to be fine, but on the other hand if it's a secondary OS on a multiboot system or a VM or something that only gets updated occasionally, chances of things breaking are much higher and something like Fedora might make more sense.
There would definitely be issues with the keyring being outdated, which you have to know (or search) how to work around. And from time to time Arch also requires manual interventions in the package update process (these are posted on archlinux.org) – you'd have to deal with all of those at once if Arch hadn't been updated for a very long time. But other than that, I can't think of other reasons why a less often used Arch installation would give you trouble.
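For what it's worth, the workaround people usually reach for is refreshing the keyring package before the full upgrade. A minimal sketch wrapping the commonly cited pacman commands (treat it as an illustration, not official Arch guidance; needs root):

```python
# Sketch of the usual "stale keyring" workaround on a long-unupdated Arch box:
# refresh archlinux-keyring first so package signature checks pass, then do
# the full system upgrade. Commands are the commonly cited ones, not official
# Arch guidance; run with root privileges.
import subprocess

subprocess.run(["pacman", "-Sy", "archlinux-keyring"], check=True)
subprocess.run(["pacman", "-Su"], check=True)
```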
Then again, I haven't used Arch in such a manner, so you might as well be right.
I dug some old computers out of the attic recently. The one that had not been updated since 2021 only took around half an hour to get up to speed, and that was with a metric ton of packages installed.
The other one seemed to have last been updated in 2015. It didn't even have current certificates for HTTPS, so it couldn't sync the keyring. After trying for a while I just gave up and reinstalled Arch from a USB stick.
I use Nix on an ARM single-board computer to host a personal Matrix homeserver (and a bunch of bridges), and I absolutely love it. It's invaluable to have a reproducible specification of the whole system, including custom software to build, in a single place.
That being said, for day-to-day stuff Arch (and standalone Nix) works well enough for me that I'm wary of switching my daily driver PC to Nix, out of fear of dealing with unforeseen issues and maybe encountering less well maintained packages (there's always something broken on Nix unstable, but maybe that's not an issue for more popular stuff). So I'm sticking with Arch for non-servers for now.
In that context (that is, if the comparison involved Windows user share), then yes.
Hell, even in a Linux-only context too. I mean, an exchange like:
- We're shipping this enterprise software in packages compatible with RHEL and Ubuntu, would it be worth our while to also devote resources to specifically support Arch too?
- Nah, nobody uses Arch
While not strictly accurate (something like, say, 5% of Linux users is not the same as 0%), it would still be quite understandable...
> something like Ubuntu where I've wasted a lot of time fighting package upgrade issues
The irony. You’re aware that Arch’s policy is to release packages in a broken state and just note it in the release log? They even state that very publicly.
If you want an actually stable rolling release, stick with OpenSUSE.
I’ve done a lot of distro hopping in the last couple of years, and I always seemed to spend a lot longer managing the environment than doing the work (which was an issue when I was freelancing…). Indeed, Arch is the only one that ended up with a massive folder of guides and how-tos in a Notion notebook.
Good on you (or anyone) for having a clean and efficient process on Arch, but really, YMMV is not the same for everyone with every software stack. Can someone whose entire work life gradually ground to a halt over weeks on Arch please add their experience here?
Latvian eID also provides cryptographic signing, and it's widely used when communicating with governmental institutions, because it's mandated by law that they must accept such digitally signed documents, and they have the same legal power as regular documents. I believe the situation in Estonia and Lithuania is probably similar. Many businesses also accept them but it's not universal.
Most of the time when I've tried finding stuff in (Firefox) history, I wasn't able to, unless it was in the last week or so. In my experience, the history filtering and search options are too basic to be useful. Once I was even desperate enough to try loading one of Firefox's SQLite files directly, hoping to query history entries, but that didn't work out.
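For anyone who wants to try the direct route: a minimal sketch, assuming the usual places.sqlite layout (moz_places joined to moz_historyvisits) and a hypothetical profile path; you have to work on a copy, since Firefox keeps the live database locked.

```python
# Minimal sketch: search Firefox history from a copy of places.sqlite.
# Profile path is hypothetical; visit_date is microseconds since the Unix epoch.
import shutil
import sqlite3
from datetime import datetime, timezone

src = "/home/me/.mozilla/firefox/abc123.default-release/places.sqlite"  # hypothetical
shutil.copy2(src, "/tmp/places-copy.sqlite")  # Firefox locks the original

con = sqlite3.connect("/tmp/places-copy.sqlite")
query = """
    SELECT p.url, p.title, MAX(v.visit_date) AS last_visit
    FROM moz_places AS p
    JOIN moz_historyvisits AS v ON v.place_id = p.id
    WHERE p.url LIKE :term OR p.title LIKE :term
    GROUP BY p.id
    ORDER BY last_visit DESC
    LIMIT 50
"""
for url, title, last_visit in con.execute(query, {"term": "%zotero%"}):
    when = datetime.fromtimestamp(last_visit / 1_000_000, tz=timezone.utc)
    print(when.date(), title, url)
con.close()
```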
The only reliable way that I've come across for finding stuff after a long time has passed is saving every slightly interesting webpage to Zotero and using full-text search afterwards (including the webpage body).
I'm curious, do you find the builtin browser history facilities sufficient for your needs, or are you using some third party tool for that?
I do find the built-in browser history sufficient for _my_ needs, in that I mostly want a super-fast autocomplete of certain hot items. Everything else that I know I'll want to revisit and find again I've bookmarked, but I don't bookmark that many things, maybe ~1 new bookmark a month.
Mostly though I realize I have focused heavily on not having clutter vs. being able to recall quickly everything I've ever found necessary or useful. It's a trade off I like, but it may not be for everyone.
> Most of the time when I've tried finding stuff in (Firefox) history, I wasn't able to, unless it was in the last week or so.
I mentioned this below, but check to see what your history limits are in Firefox (https://support.mozilla.org/en-US/questions/1039372). It's possible if you do enough browsing that you might have trouble finding older pages because they're not there anymore.
I'm not sure what the best mitigation is for that; I've kind of accepted that Firefox history is short-term, not long-term. It might be possible to rig up a webextension to save history more permanently, but I suspect it would need native messaging to do that, and at that point maybe it's better to just make regular copies of the SQLite database.
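If anyone goes the copy route, something like this is about all it takes (a minimal sketch; the profile and archive paths are assumptions, and it's best run while Firefox is closed, e.g. from a cron job or systemd timer):

```python
# Sketch: keep dated snapshots of places.sqlite so old history survives
# Firefox's expiration. Profile and archive paths are hypothetical.
import shutil
from datetime import date
from pathlib import Path

profile = Path.home() / ".mozilla/firefox/abc123.default-release"  # hypothetical
archive = Path.home() / "firefox-history-archive"
archive.mkdir(exist_ok=True)
shutil.copy2(profile / "places.sqlite", archive / f"places-{date.today()}.sqlite")
```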
Relying on Firefox history less also has the kind of minor advantage of allowing you to be more aggressive about cleaning it yourself, which can have a noticeable performance impact in some cases.
I wanted to preorder the updated 13 inch model, but it turned out that in the EU Framework sells their products in only a select few countries (and they actively prevent people from using forwarding services, so there's no good way to sidestep this). It would be great if they had a distributor somewhere in the EU, or something that could resell their products; otherwise it might be a very long wait until they're available where I am.
I feel like many of the issues people are complaining about here would instantly disappear if there were regulation preventing content producers and content distributors from being the same entity. There's even precedent for a similar thing in the US: https://en.m.wikipedia.org/wiki/United_States_v._Paramount_P....
> The extremely short version: The EU is going to task a standardisation body to write a document that tells everyone marketing products and software in the EU how to code securely. This to further the EU Essential Cybersecurity Requirements. For critical software and products, EU notified bodies (which until now have mostly done physical equipment and process certifications) will do audits to determine if code and products adhere to this standard. And if not, there could be huge fines.
Do you account for the fact that probability distributions can have multiple peaks with equal probability? If multiple brains were involved, they'd somehow have to coordinate on what they deem the most likely outcome.
Say there is a quantum system – a particle or something – that has an equal probability to collapse in either of two classical states if measured. Say there are two scientists in a laboratory who perform a measurement on that system. If your hypothesis is true, how do they agree on what they perceive when looking at the result of the measurement? Each brain would have to make an arbitrary decision on which of the two equally likely outcomes to perceive.
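To make the setup concrete, the textbook version of such a system is just an equal superposition, which has no single most likely outcome to converge on:

```latex
% Equal-weight superposition: both measurement outcomes have probability 1/2,
% so "perceive the most likely outcome" picks nothing out.
\[
  \lvert \psi \rangle = \tfrac{1}{\sqrt{2}} \bigl( \lvert 0 \rangle + \lvert 1 \rangle \bigr),
  \qquad
  P(0) = \lvert \langle 0 \vert \psi \rangle \rvert^2 = \tfrac{1}{2},
  \quad
  P(1) = \lvert \langle 1 \vert \psi \rangle \rvert^2 = \tfrac{1}{2}.
\]
```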
Well, considering some of the political disagreements we've had in the last decade or so, we have ample evidence that two different people can look at the exact same thing and arrive at opposite conclusions.
I'm curious: shouldn't the "charge only" mode, which is the default when connecting USB stuff to Android phones, be enough to protect users? Is it really that difficult to implement a "don't read the data pins, only charge" mode on a phone without vulnerabilities in it?
If it’s “just a reset” I still wouldn’t be too worried plugging into an otherwise normally placed public charger. It would obviously suck to have my device reset, especially when traveling, but of course a port could also just fry your device anyway.
If it's just a USB-initiated factory reset, that's much less worrying: just DoS, not infiltration. Exploiting that at a busy airport would be a huge nuisance, but not a huge security risk. Just like wiring 110VAC into the USB wires would be a DoS...
USB is a very intelligent protocol, with a microcontroller on both ends. The controller has access to at least the driver's state, which is usually in the kernel and potentially has access to system memory.
How does your Android phone even know that data is an option to switch into when you plug it into a USB port? It has already negotiated itself to be a device on the USB bus. Your phone will probably show up in lsusb on Linux even in charging mode. (Mine does.) When you switch the phone to data mode, it changes its USB device profile, and becomes a more sophisticated attached device, from the host's perspective.
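For illustration, a minimal sketch of "it's already a device on the bus" (assumes the third-party pyusb package and libusb on the host; nothing phone-specific): list everything enumerated, and a phone in charge-only mode will typically appear here too.

```python
# List every enumerated USB device from the host's point of view.
# A phone in charge-only mode typically still shows up, because it has
# already negotiated itself onto the bus. Requires pyusb (pip install pyusb).
import usb.core

for dev in usb.core.find(find_all=True):
    print(f"{dev.idVendor:04x}:{dev.idProduct:04x}")
```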
Many (most?) phones made in recent years can be USB hosts, too. This lets you connect a USB mouse and keyboard to a tablet, for example. That would open you up to all kinds of pretty simple but often quite effective attacks, like simulating a virtual keyboard and mouse and just manipulating the UI that way.
I don't know if any of these particular attacks are possible with Android right now, but many variations on these themes have been shown over the years on many platforms. USB wasn't really designed with adversarial peripherals in mind.
Maybe I'm stupid but what I gather from this is simply that this is a potential vector, not that it is currently an actual possibility. It's akin to saying using Bluetooth is dangerous because theoretically any data on my phone can be extracted through it, while neglecting the fact that the people building a phone OS are clearly aware of that and have built-in countermeasures.
BadUSB emulates a keyboard. So one would want to make sure that the phone was locked before hooking it up to a random charging port. Android exploit demo here:
Your phone can only figure out whether it’s connected to a known device (your car, your speaker, etc.) by talking over the data pins. A charge-only mode would “break” usability of the USB port for most users.
Android 11 asks me if I want to charge only or also allow data transfer. Is it that we can't trust Android not to be hacked just by checking whether the data pins exist?
My phone asks me if I am connected to a trusted device and want to share data, asking me rather than asking the device if it is trusted seems to be an effective model.
The "pure utilitarian" viewpoint never made much sense to me because it seems to usually imply accounting for only first order effects, when calculating some costs and gains, and stopping there. I think the secondary effects of not saving people might go far beyond some saved fuel and time in such situations (especially in war, as some have mentioned here).
In other words, locally optimizing some aspects of a very complex, well-functioning system such as society is likely to mess things up globally.
I think you could create a system that's resilient to such issues even with federation (not saying it's easy, though), and Matrix actually has a solution in the works for this – decentralised user accounts [1].
And all of this makes me wonder – maybe it's better to re-implement something like Mastodon on top of Matrix. If Matrix adopts decentralised user accounts, that would seemingly solve such issues automatically. There was a POC Matrix based Twitter clone demonstrating this, actually [2] (but without the decentralised accounts yet).
We’re hoping to make progress on decentralised accounts on Matrix by the end of the year.
https://cerulean.matrix.org is another POC Matrix based Twitter clone (built for Jack & Parag) that demonstrates this (but without decentralised accounts yet).