It used to be like that: computers had limited resources and desktop environments were light. Then at some point RAM became less and less of an issue, and everything started to get bigger and less efficient.
Could anyone summarize why a desktop Windows/macOS now needs so much more RAM than in the past? Is it the UI animations, color themes, shades, etc., or is it the underlying operating system that has more and more features, services, etc.?
I believe it's the desktop environment that is greedy, because one can easily run a Linux server on a Raspberry Pi with very limited RAM, but is that really the case?
> Could anyone summarize why a desktop Windows/macOS now needs so much more RAM than in the past
Just a single retina screen buffer, assuming something like 2500 by 2500 pixels at 4 bytes per pixel, is already 25MB for a single buffer. Then you want double buffering, but also a per-window buffer, since you don't want to force redraws 60x per second and we want to drag windows around while showing their contents, not a wireframe. As you can see, just that adds up quickly. And that's just the draw buffers, not to mention all the different fonts that are simultaneously used, images that are shown, etc.
(Of course, screen buffers are typically stored in VRAM once drawn. But you need to draw first, which is at least in part done on the CPU.)
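For a rough sense of scale, here's a back-of-the-envelope sketch in C; the resolution, bytes per pixel and window count are the assumed numbers from above, not measurements:

```c
#include <stdio.h>

int main(void) {
    /* Assumed numbers, only to illustrate the order of magnitude. */
    const long width   = 2500;  /* pixels */
    const long height  = 2500;  /* pixels */
    const long bpp     = 4;     /* bytes per pixel (e.g. RGBA8888) */
    const long windows = 10;    /* hypothetical number of open windows */

    long one_buffer  = width * height * bpp;  /* a single full-screen buffer */
    long double_buf  = 2 * one_buffer;        /* front + back buffer */
    long window_bufs = windows * one_buffer;  /* worst case: every window near full-screen */

    printf("one screen buffer:  %ld MB\n", one_buffer / 1000000);   /* ~25 MB */
    printf("double buffering:   %ld MB\n", double_buf / 1000000);   /* ~50 MB */
    printf("10 window buffers:  %ld MB\n", window_bufs / 1000000);  /* ~250 MB */
    return 0;
}
```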
Per-window double buffering is actively harmful, as it means you're triple buffering: the render goes window buffer -> composite buffer -> screen. And that's with perfect timing; even this kind of latency is actively unpleasant when typing or moving the mouse.
If you get the timing right, there should be no need for double-buffering individual windows.
You don't need to do all of this, though. You could just do arbitrary rendering using GPU compute, and only store a highly-compressed representation on the CPU.
Yes, but then the GPU needs that amount of RAM, so it's fairer to look at the sum of RAM + VRAM requirements. With compressed representations you trade CPU cycles for RAM. To save laptop battery, it's often better to spend copious amounts of RAM (since it's cheap) than extra CPU cycles.
The web browser is the biggest RAM hog these days as far as low-end usage goes. The browsing UI/chrome itself can take many hundreds of megabytes to render, and that's before even loading any website. It's becoming hard to browse even very "light" sites like Wikipedia on anything less than a 4GB system.
> Is it the UI animations, color themes, shades, etc., or is it the underlying operating system that has more and more features, services, etc.?
...all of those and more? New software is only optimized until it is not outright annoying to use on current hardware. It's always been like that, and that's why there are old jokes like:
"What Andy giveth, Bill taketh away."
"Software is like a gas, it expands to consume all available hardware resources."
"Software gets slower faster than hardware gets faster"
...etc., etc. Variations of those "laws" are as old as computing.
Sometimes there are short periods where the hardware pulls a little bit ahead for a few short years of bliss (for instance the ARM Macs), but the software quickly catches up and soon everything feels as slow as always (or worse).
That also means that the easiest way to a slick computing experience is to run old software on new hardware ;)
Indeed. Much of a modern Linux desktop runs inside one of multiple not-very-well-optimized JS engines: GNOME uses JS for various desktop interactions, all major desktops run a different JS engine as a different user to evaluate polkit authorizations (so exactly zero RAM can be shared between those engines, even if they were identical, which they aren't), and then half your interactions with GUI tools happen inside browser engines, either directly in a browser or indirectly with Electron. (And typically, each Electron tool bundles its own slightly different version of Electron, so even if they all run under the same user, each is fully independent.)
Or you can ignore all that nonsense and run openbox and native tools.
Which makes it baffling as to why they chose it - I remember there being memory leaks because GObject uses a reference-counted model: cycles from GObject to JS and back were impossible to collect.
They did hack around this with heuristics, but they never did solve the issue.
They should've stuck with a reference counted scripting language like Lua, which has strong support for embedding.
A month with CrunchBang Plus Plus (which is a really nice distribution based on Openbox) and you'll appreciate how quick and well put together Openbox and text based config files are.
> Much of a modern Linux desktop e.g. runs inside one of multiple not very well optimized JS engines
A couple of years ago I saw a talk by Sophie Wilson, the designer of the ARM chip. She had been amused by someone saying there was an ARM inside every iPhone: she pointed out that there were 6-8 asymmetric ARM cores in the CPU section of the SoC, some big and fast, some small and power-frugal, an ARM chip in the Bluetooth controller, another in the Wifi controller, several in the GSM/mobile controller, at least one in the memory controller, several in the flash memory controller...
It wasn't "an ARM chip". It was half a dozen ARMs in early iPhones, and then maybe dozens in modern ones. More in anything with an SD card slot, as SD cards typically contain an ARM or a few of them to manage the blocks of storage, and other ARMs in the interface talk to those ARMs.
Wheels within wheels: multiple very similar cores, running different OSes and RTOSes and chunks of embedded firmware, all cooperatively running user-facing OSes with a load of duplication, like a shell in one JavaScript engine launching Firefox, which contains a copy of a different version of the same JavaScript engine, plus another in Thunderbird, plus another embedded in Slack and another copy embedded in VSCode.
Insanity. Make a resource cheap and it is human nature to squander it.
I've found that Gnome works about as well as other "lighter" desktop environments on some hardware I have that is about 15 years old. I don't think it using a JS engine really impacts performance as much as people claim. Memory usage might be a bit higher, but the main memory hog on a machine these days is your web browser.
I have plenty of complaints about gnome (not being able to set a solid colour as a background colour is really dumb IMO), but it seems to work quite well IME.
> Or you can ignore all that nonsense and run openbox and native tools.
I remember mucking about with OpenBox and similar WMs back in the early 2000s and I wouldn't want to go back to using them. I find Gnome tends to expose me to less nonsense.
There is nothing specifically wrong with Wayland either. I am running it on Debian 13 with a triple monitor setup without issue. Display scaling works properly on Wayland (it doesn't on X11).
> I find Gnome tends to expose me to less nonsense.
IMHO, I find the reverse. It feels like a phone/tablet interface. It's bigger and uses way more disk and memory, but it gives me less UI, less control, less customisation, than Xfce which takes about a quarter of the resources.
Example: I have 2 screens. One landscape on the left, one portrait on the right. That big mirrored L-shape is my desktop. I wanted the virtual-desktop switcher on the right of the right screen, and the dock thing on the left of the left screen.
GNOME can't do that. They must be on your primary display, and if that's a little laptop screen but there is a nice big spacious 2nd screen, I want to move some things there -- but I am not allowed to.
If I have 1 screen, keep them on 1 screen. If I have 2, that pair is my desktop, so put one panel on the left of my desktop and one on the right, even if those are different screens -- and remember this so it happens automatically when I connect that screen.
This is the logic I'd expect. It is not how GNOME folks think, though, so I can't have it. I do not understand how they think.
> IMHO, I find the reverse. It feels like a phone/tablet interface. It's bigger and uses way more disk and memory, but it gives me less UI, less control, less customisation, than Xfce which takes about a quarter of the resources.
I've used Xfce quite a lot in the past and quite honestly most of the "customisation" in it is confusing to use and poorly thought out.
I've also found these "light DEs" to be less snappy than Gnome. I believe this is because Gnome takes advantage of GPU acceleration better, but I am not sure tbh. The extra memory usage I don't really care about. The slowest laptop I use regularly has 8GB RAM and it is fine. Would I want to use this on a sub-4GB machine? No. But realistically you can't do much with that anyway.
Also Gnome (with Wayland) does a lot of stuff that Xfce can't do properly. This is normally to do with HiDPI scaling and different refresh rates. It all works properly.
With Xfce, I had to mess about with DPI hacks and other things.
> Example: I have 2 screens. One landscape on the left, one portrait on the right. That big mirrored L-shape is my desktop. I wanted the virtual-desktop switcher on the right of the right screen, and the dock thing on the left of the left screen.
> If I have 1 screen, keep them on 1 screen. If I have 2, that pair is my desktop, so put one panel on the left of my desktop and one on the right, even if those are different screens -- and remember this so it happens automatically when I connect that screen.
I just tried the workspace switcher. I can switch virtual desktops with Super + Scroll on any desktop. I can also choose virtual desktops on both screens by using Super + A, and then there is a virtual desktop switcher on each screen.
I just tried it on Gnome 48 on Debian 13 right now. It is pretty close to what you are describing.
> This is the logic I'd expect. It is not how GNOME folks think, though, so I can't have it. I do not understand how they think
I think people just want to complain about Gnome because it is opinionated. I also don't like KDE.
I install two extensions on my desktop: Dash to Dock and AppIndicator. On the light DEs and window managers, I was always messing about with settings and things always felt off.
This is quite interesting. As before, what you find is the reverse of what I find.
> I've used Xfce quite a lot in the past and quite honestly most of the "customisation" in it is confusing to use and poorly thought out.
In places, it can be. For instance, the virtual-desktop switcher: you choose how many desktops in one place, how many rows to show in the panel in another, and how to switch in a third place. This shows it evolved over time. It's not ideal but it works.
But the big point is, it's there. I'd rather have confusing customisation (as Xfce can be) than no customisation like GNOME.
> I've also found these "light DEs" to be less snappy than Gnome.
I find the reverse.
> I believe this is because Gnome takes advantage of GPU acceleration better
Some do, yes. But I avoid dedicated GPUs for my hardware, and most of the time, I run in VMs where GPU acceleration is flaky. So I'd rather have tools that don't need hardware acceleration for performance than tools that require it.
Here's some stuff I wrote about that thirteen years ago.
I really have been working with this for a while now. I am not some kid who just strolled in and has Opinions.
> The extra memory usage I don't really care about.
You should. More code = more to go wrong.
When I compared Xfce and GNOME for an article a few years ago, I looked at their bug trackers.
GNOME: about 45,000 open bugs.
Xfce: about 15,000 open bugs.
This stuff matters. It is not just about convenience or performance.
> But realistically you can't do much with that anyway.
News: yeah you can. Billions have little choice.
The best-selling single model range of computers since the Commodore 64 is the Raspberry Pi range, and the bulk of the tens of millions of them they've sold have 1GB RAM -- or less. There is no way to upgrade.
> Also Gnome (with Wayland) does a lot of stuff that Xfce can't do properly.
I always hear this. I had to sit down with a colleague pumping this BS when I worked for SUSE and step by step, function by function, prove to him that Xfce could do every single function he could come up with in KDE and GNOME put together.
> This is normally to do with HiDPI scaling,
Don't care. I am 58. I can't see the difference. So I do not own any HiDPI monitors. Features that only young people with excellent eyesight can even see are ageist junk.
> different refresh rates.
Can't see them either. I haven't cared since LCDs replaced CRTs. It does not matter. I can't see any flicker so I don't care. See above comment.
> I just tried the workspace switcher. I can switch virtual desktops with Super + Scroll on any desktop. I can also choose virtual desktops on both screens by using Super + A, and then there is a virtual desktop switcher on each screen.
You're missing the point and you are reinforcing the GNOME team's removal of my choices. I told you that I can't arrange things where I want -- even with extensions. Your reply is "it works anyway".
I didn't say it didn't work. I said I hate the arrangement and it is forced on me and I have no choice.
> I just tried it on Gnome 48 on Debian 13 right now. It is pretty close to what you are describing.
It is not even similar.
> I think people just want to complain about Gnome because it is opinionated. I also don't like KDE.
I complain about GNOME because I have been studying GUI design and operation and human-computer interaction for 38 years and GNOME took decades of accumulated wisdom and experience and threw it out because they don't understand it.
> I install two extensions on my desktop: Dash to Dock and AppIndicator. On the light DEs and window managers, I was always messing about with settings and things always felt off.
So you are happy with it. Good for you. Can you at least understand that others hate it and have strong valid reasons for hating it and that it cripples us?
There is so much wrong here I don't really know where to start. There is a bunch of the usual flawed assumptions on things that haven't been relevant in decades. So I am going to pick the most egregious examples.
> But the big point is, it's there. I'd rather have confusing customisation (as Xfce can be) than no customisation like GNOME.
Those GNOME plugins and extensions I install, I must have imagined. I am sure there will be some reason why this isn't good enough, but I can customise my desktop absolutely fine.
> Some do, yes. But I avoid dedicated GPUs for my hardware, and most of the time, I run in VMs where GPU acceleration is flaky. So I'd rather have tools that don't need hardware acceleration for performance than tools that require it.
I am not sure why you wouldn't want GPU acceleration that works properly.
Your example of a VM: Gnome works fine in a VM (I used it yesterday), over Remote Desktop and even Citrix. I used Gnome in a Linux VM over RDP and Citrix for 2 years at work. It worked quite well, in fact, even over a WAN.
I don't care what the situation was 13 years ago (I'm dubious it was true then, btw, because I was using a CentOS 7 VM).
EDIT: I just read the article. You are complaining about enabling a bloody checkbox.
> The best-selling single model range of computers since the Commodore 64 is the Raspberry Pi range, and the bulk of the tens of millions of them they've sold have 1GB RAM -- or less. There is no way to upgrade.
I guarantee you people aren't using these 1GB models as desktops. They are using them for things like a Pi-hole, Home Assistant, a 3D printer, Kodi, retro gaming emulators or embedded applications.
People do run KDE, Gnome and Cinnamon on the 4GB/8GB/16GB models or buy a Pi400/500.
> I always hear this. I had to sit down with a colleague pumping this BS when I worked for SUSE and step by step, function by function, prove to him that Xfce could do every single function he could come up with in KDE and GNOME put together.
I was quite obviously talking about HiDPI support. You didn't read what I said.
This stuff works properly on Gnome and not on Xfce.
> Don't care. I am 58. I can't see the difference. So I do not own any HiDPI monitors. Features that only young people with excellent eyesight can even see are ageist junk.
I do fucking care. I use a HiDPI monitor. Fonts are rendered better. The games I run on my desktop look better. I like it.
I am 42. I can see the difference. While I am younger, I am not that young.
Why you are bringing ageism into what is essentially more pixels on a screen I have no idea. It is baffling that you are taking exception because I want the scaling to work properly on my monitors that I purchased. BTW my monitors are over a decade old now. HiDPI is not novel.
> It is not even similar.
It is exactly what you described. I literally read what you said and compared it to what I could do on my Gnome desktop. So I can only assume that you can't actually describe the issue properly. That isn't my issue, that is yours.
> So you are happy with it. Good for you. Can you at least understand that others hate it and have strong valid reasons for hating it and that it cripples us?
No. You literally repeated all the usual drivel that isn't true (I know because I've actually used Gnome) and complaints that boil down to "I don't like how it works" or "the developers said something I didn't like and now I hate them forever". It is tiresome and trite. I would expect such things from someone in their early 20s, yet you are almost 60.
> I am sure there will be some reason why this isn't good enough,
Installing extensions is not customisability. It is code patching on the fly and it breaks when the desktop gets upgraded.
Not good enough.
> Gnome works fine in a VM
Again you translate "does not do something well" into "it does not work". Yes it can run in a VM. It doesn't do it very well and it only does it if the VM is powerful on a fast host.
Just a few years ago it did not work.
> EDIT: I just read the article. You are complaining about enabling a bloody checkbox.
You didn't understand it, then. It is really about what settings to enable and what extensions you must install.
> I guarantee you people aren't using these 1GB models as desktops.
Then you're wrong. I did myself not long ago. Most of the world is poor, most of the world doesn't have high-end tech.
> I was quite obviously talking about HiDPI support. You didn't read what I said.
I read it. I replied. I don't care.
The GNOME developers destroyed an industry standard user interface -- https://en.wikipedia.org/wiki/IBM_Common_User_Access -- which I am willing to bet you've never heard of and don't know how to use -- just to avoid getting sued by Microsoft 20 years ago.
A bunch of entitled kids who don't know how to use a computer with keyboard alone and who don't give a fsck about the needs of disabled and visually impaired people ripped out menu bars and a tonne more to make their toy desktop, but they threw in features to amuse audiophiles and people with fancy monitors, and you don't understand why I am pissed off.
You ripped out my computer's UI and replaced it with a toy so you could have higher refresh rates and shinier games.
> It is baffling
It's only baffling because you've never heard before from anyone inconvenienced by it and never thought before about other people's needs and use cases -- which is GNOME all over.
> It is exactly what you described.
No it is not.
Tell me what extensions will put the GNOME favourites bar on the left of the left screen and a vertical virtual desktop switcher on the right of the right screen.
You didn't understand my blog post about GUI acceleration in VMs, and you don't understand my comments either.
I have used every single version of GNOME released since 2.0 and I know my way round it pretty well -- same as I am atheist and know the Bible better than all but about 3 so-called christians I've met in 6 decades. Know your enemy.
I have been getting hatred and personal abuse from the GNOME team and GNOME fans, every time I ever criticise it, for over a decade now. It is the single most toxic community I know in Linux.
> same as I am atheist and know the Bible better than all but about 3 so-called christians I've met in 6 decades. Know your enemy.
I missed this the first time around. The fact that you see Christians as enemies (your words btw) is quite telling about this entire interaction/conversation we've had.
I honestly think that if you haven't learned why this attitude of yours is a problem at almost 60 years old, you never will.
> Installing extensions is not customisability. It is code patching on the fly and it breaks when the desktop gets upgraded.
This is nonsense.
1) It changes how it works to how I prefer it, so that is customising it.
2) I've used the same extensions for ages. Nothing has ever broken.
Basically what you and a lot of people want is hundreds of options for setting trivial things. OK, fine, then don't use Gnome; nobody is forcing you to use it.
As I said I install dash to dock and appindicator icons.
> Again you translate "does not do something well" into "it does not work".
It seems that you are getting hung up on the phrase "works fine" and want to get into some stupid semantic argument.
I found that it does do it well. You didn't read what I said. I used it for 2 years. It worked perfectly fine for the duration.
So I know for a fact that what you are saying is incorrect.
> You didn't understand it, then. It is really about what settings to enable and what extensions you must install.
I was being flippant when I said "enable a checkbox". What was described in your blog post, I've done myself in VirtualBox in the past.
It isn't difficult; pretending it is, is asinine. I haven't used VirtualBox in years, but I am quite familiar with the general process from when I did.
> I read it. I replied. I don't care.
Right. So why are you replying at all? And why should I care about your opinion if you aren't willing to consider mine?
You said you were 58 years old, I expect someone that is 58 years old (and is clearly articulate) to behave better tbh.
> A bunch of entitled kids who don't know how to use a computer with keyboard alone and who don't give a fsck about the needs of disabled and visually impaired people ripped out menu bars and a tonne more to make their toy desktop, but they threw in features to amuse audiophiles and people with fancy monitors, and you don't understand why I am pissed off.
I have HiDPI monitors for work. You keep on making assumptions about people and then come to the wrong conclusions.
Also, I actually have a blind friend and he says that Gnome actually works reasonably well (he installed it in a VM on his Mac).
He says it isn't as good as MacOS and thus he still uses his Mac. But he used Gnome and Unity and he says they are "ok".
As for pipewire/pulse: I had some issues with it a while ago, but it all seems to be fixed now.
So I am going to assume that you don't know what you are talking about.
> You ripped out my computer's UI and replaced it with a toy so you could have higher refresh rates and shinier games.
This is absolute nonsense. I did nothing of the sort. I just customised the default UI that happened to come with CentOS 7 at work, happened to like it, and usually return to using it.
Gnome is actually known for not working well with games. I am actually making a YouTube video about it. You have to install Gamescope to sandbox the compositor.
This is another case of you not knowing what you are on about quite frankly.
> I have been getting hatred and personal abuse from the GNOME team and GNOME fans, every time I ever criticise it, for over a decade now. It is the single most toxic community I know in Linux.
Says the person that just told me he didn't care about my needs and whether my hardware works, and then blames me for something I never did. The toxicity isn't coming from me.
BTW, none of this was done by me. I use Gnome. I am not part of the community. I've done exactly one YouTube video for a friend to show him how to configure some stuff in Gnome as he was new to Linux. Oh, I think I once may have logged a bug on their issue tracker.
It seems to me that you are arguing with the wrong person. You need to direct your anger elsewhere. I did find the accusations quite hilarious, though. So thanks for the giggles.
Is the intention of the author to use the number of years bugs stay "hidden" as a metric of the quality of the kernel codebase, or of the performance of the maintainers? I am asking because at some point the article says "We're getting faster".
IMHO the fact that a bug hides for years can also be an indication that such a bug had low severity/low priority and therefore that the overall quality is very good. Unless the time represents how long it takes to reproduce and resolve a known bug, but in that case I would not say that the bug "hides" in the kernel.
> IMHO the fact that a bug hides for years can also be an indication that such a bug had low severity/low priority
Not really true. A lot of very severe bugs have lurked for years and even decades. Heartbleed comes to mind.
The reason these bugs often lurk for so long is that they very often don't cause a panic, which is why they can be really tricky to find.
For example, use after free bugs are really dangerous. However, in most code, it's a pretty safe bet that nothing dangerous happens when use after free is triggered. Especially if the pointer is used shortly after the free and dies shortly after it. In many cases, the erroneous read or write doesn't break something.
The same is true of the race condition problems (which are some of the longest lived bugs). In a lot of cases, you won't know you have a race condition because in many cases the contention on the lock is low so the race isn't exposed. And even when it is, it can be very tricky to reproduce as the race isn't likely to be done the same way twice.
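To make the use-after-free point concrete, here is a minimal, deliberately buggy C sketch (not taken from any real kernel code). Whether the stale read "works", crashes, or silently returns someone else's data depends entirely on what the allocator has done with the freed block, which is exactly why this class of bug can sit unnoticed for years:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *name = malloc(16);
    if (!name) return 1;
    strcpy(name, "kernel");

    free(name);   /* the block goes back to the allocator... */

    /* BUG: use after free. This is undefined behaviour, but if the block
     * has not been reused or unmapped yet, the read will often still
     * print "kernel" and nothing visibly goes wrong. */
    printf("stale read: %s\n", name);

    /* The danger: a later allocation may hand out the same memory... */
    char *other = malloc(16);
    if (!other) return 1;
    strcpy(other, "something else");

    /* ...and the stale pointer may now observe (or corrupt) that data. */
    printf("stale read: %s\n", name);

    free(other);
    return 0;
}
```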
> …lurked for years and even decades. Heartbleed comes to mind.
I don’t know much about Heartbleed, but Wikipedia says:
> Heartbleed is a security bug… It was introduced into the software in 2012 and publicly disclosed in April 2014.
Two years doesn’t sound like “years or even decades” to me? But again, I don’t know much about Heartbleed so I may be missing something. It does say it was also patched in 2014, not just discovered then.
This may just be me misremembering, but as I recall, the bug of Heartbleed was ultimately a very complex macro system which supported multiple very old architectures. The bug, IIRC, was the interaction between that old macro system and the new code which is what made it hard to recognize as a bug.
Part of the resolution to the problem, I believe, was that they ended up removing a fair number of unsupported platforms. It also ended up spawning alternatives to OpenSSL like BoringSSL, which tried to remove as much as possible to guard against this very kind of bug.
> IMHO the fact that a bug hides for years can also be an indication that such a bug had low severity/low priority and therefore that the overall quality is very good.
It doesn't seem to indicate that. It indicates the bug just isn't in tested code or isn't reached often. It could still be a very severe bug.
The issue with longer lived bugs is that someone could have been leveraging it for longer.
But it matters for detection time, because there's a lot more "normal" use of any given piece of code than intentional attempts to break it. If a bug can't be triggered unintentionally it'll never get detected through normal use, which can lead to it staying hidden for longer.
How about switching from VS Code to VSCodium? Same experience without the Microsoft telemetry. I suppose Copilot won't be included due to licensing constraints.
How does the extension model work with MS? I did a similar move to Chromium and eventually had to move to Firefox when they pulled the plug on ad blockers.
I am more interested in how MCP can change human interaction with software.
Practical example: there exists an MCP server for Jira.
Connect that MCP server to e.g. Claude and then you can write prompts like this:
"Produce a release notes document for project XYZ based on the Epics associated to version 1.2.3"
or
"Export to CSV all tickets with worklog related to project XYZ and version 1.2.3. Make sure the CSV includes these columns ....."
Especially the second example totally removes the need for the CSV export functionality in Jira. Now imagine a scenario in which your favourite AI is connected via MCP to different services. You can mix and match information from all of them.
Alibaba for example is making MCP servers for all of its user-facing services (alibaba mail, cloud drive, etc etc)
A chat UI powered by the appropriate MCP servers can provide a lot of value to regular end users and make it possible for people to use their own data easily in ways that earlier would require dedicated software solutions (exports, reports). People could use software for use cases that the original authors didn't even imagine.
It would, but the point of MCP is that it's discoverable by an AI. You can just go change it and it'll know how to use it immediately.
If you go and change the parameters of a REST API, you need to modify every client that connects to it or they'll just plain not work. (Or you'll have a mess of legacy endpoints in your API)
Not a fan, I like the "give an LLM a virtual environment and let it code stuff" approach, but MCP is here to stay as far as I can see.
How does it remove the need for CSV export? The LLM can make mistakes right? Wouldn’t you want the LLM calling the deterministic csv export tool rather than trying to create a csv on its own?
I think the design is bad: my girlfriend would never wear it. Maybe they know that already, and that's why the webpage contains only pictures of male hands.
Given the many smartwatches on the market which can do so much more, are lightweight and some of them with acceptable battery life (Garmin, Suunto, Amazfit), a smartring is of very little interest to me. But I often struggle to understand why certain products fascinate people, so I may be totally wrong and I wish the makers best of luck.
Yeah, I have to wonder what led them to these color choices. Not sure why they wouldn't include a white option or any other good neutral colors that actually go with silver and gold. And I think the number of people willing to wear a matte black ring is quite low, especially among women.
Here is an American example: Fox suspension. Fox is one of the main producers of bicycle suspension. Great products, but check their service intervals for a fork [0]: 125 hours.
Now, if you ride mountain bikes you may ride 1 to 5 times a week. Let's say you only ride once a week for 4 hours: 125 / 4 = 31, so you would need to service your fork every 31 weeks. Add a few more rides and you have to service the fork twice a year.
Each service easily costs $150 if done by a bike shop. If you do it yourself (plenty of tutorials on YouTube), you need expensive special tools, and the oil, special grease, and spare o-rings and seals easily cost $30-40 for every service. And you have to properly dispose of the old oil.
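Just to spell the arithmetic out (the ride frequency and the shop price are the assumptions made above, not Fox's figures), a quick sketch:

```c
#include <stdio.h>

int main(void) {
    /* Assumptions taken from the comment above. */
    const double service_interval_h = 125.0;  /* fork service interval quoted above */
    const double hours_per_ride     = 4.0;
    const double rides_per_week     = 1.0;
    const double shop_price_usd     = 150.0;  /* assumed typical shop price per service */

    double weeks_between = service_interval_h / (hours_per_ride * rides_per_week);
    double per_year      = 52.0 / weeks_between;
    double yearly_cost   = per_year * shop_price_usd;

    printf("weeks between services: %.1f\n", weeks_between);  /* ~31.3 */
    printf("services per year:      %.1f\n", per_year);       /* ~1.7  */
    printf("yearly shop cost (USD): %.0f\n", yearly_cost);    /* ~250  */
    return 0;
}
```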
Ant did not include IF THEN ELSE, unless you added the Ant-Contrib package.
If you understood the paradigm, you could write branches in Ant files simply using properties and guards on properties ("unless"). Using IF in Ant was basically admission of not having understood Ant.
This said, I used Ant for a very limited amount of time.
As of Ant 1.9.1, you can use 'if' and 'unless' attributes on any task or element in a target. I stopped using Ant a long time ago, but this was a pleasant discovery when I had to pick up an old Ant based project recently.
I agree, that is what I meant: there were people who installed Ant-Contrib to have the <if> element, but in reality you did not need that; you could just use Ant's built-in features like you said. In my opinion, installing Ant-Contrib just to use <if> was a demonstration of not having understood how Ant works.
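For anyone who never saw it, here's a rough sketch of both styles; the target and property names are made up, and the attribute form at the bottom is the Ant 1.9.1+ if/unless syntax, which needs the ant:if / ant:unless namespaces declared on the project:

```xml
<project name="demo" default="build"
         xmlns:if="ant:if" xmlns:unless="ant:unless">

  <!-- Sets the property only if the file exists. -->
  <target name="check">
    <available file="config.local" property="config.present"/>
  </target>

  <!-- Classic Ant branching: guard whole targets with if/unless on a property. -->
  <target name="warn" depends="check" unless="config.present">
    <echo message="config.local not found, using defaults"/>
  </target>
  <target name="build" depends="check,warn" if="config.present">
    <echo message="building with config.local"/>
  </target>

  <!-- Ant 1.9.1+: if/unless attributes directly on tasks inside a target. -->
  <target name="build-new" depends="check">
    <echo message="building with config.local" if:set="config.present"/>
    <echo message="using defaults" unless:set="config.present"/>
  </target>
</project>
```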
I passed the PDF to Claude and asked it to check if there is any part of the document that states that Google deprioritizes good search results in favor of advertising. Here is the output from Claude:
Yes, the document contains highly significant factual findings by the Court regarding how Google deprioritized organic search results in favor of advertising. The most significant findings: The Court documents that the positioning of Google's AI features (AI Overviews, WebAnswers) on the search results page reduced users' interactions with organic web results - deliberately.
Relevant text:
"Some evidence suggests that placement of features like AI Overviews on the SERP has reduced user interactions with organic web results (i.e., the traditional "10 blue links")."
And:
"Placement of features like AI Overviews on the SERP has reduced user interactions with organic web results where Google's WebAnswers appears on the SERP"
Important note: these are not "admissions" in the sense of Google voluntarily confessing, but rather factual findings by the Court based on evidence presented during the trial - which is legally even more binding.
While I understand the reasons behind this campaign, I have mixed feelings about it.
As an iPhone user, I find it frustrating that deploying my own app on my own device requires either reinstalling it every 7 days or paying $100 annually. Android doesn't have this limitation, which makes it simpler and more convenient for personal use.
However, when it comes to publishing apps to the store, I take a different view. In my opinion, stricter oversight is beneficial. To draw an analogy: NPM registry has experienced several supply chain attacks because anyone can easily publish a library. The Maven Central registry for Java libraries, by contrast, requires developers to own the DNS domain used as a namespace for their library. This additional requirement, along with a few extra security checks, has been largely effective in preventing—or at least significantly reducing—the supply chain attacks seen in the NPM ecosystem.
Given the growing threat of such attacks, we need to find ways to mitigate them. I hope that Google's new approach is motivated by security concerns rather than purely economic reasons.
Android already has this strict oversight, in theory, in the form of the Play Store. And yet.
Personally I feel much more safe and secure downloading a random app from F-Droid, than I do from Google, whose supposed watchful eyes have allowed genuine malware to be distributed unimpeded.
> In my opinion, stricter oversight is beneficial.
I agree; stricter oversight is beneficial for the official app store. It should not be necessary (and neither should Google's (or Apple's, or Microsoft's, or the government's, etc) verification be necessary) for stuff you install by yourself.
> The Maven Central registry for Java libraries, by contrast, requires developers to own the DNS domain used as a namespace for their library.
This means that you will need to have a domain name, and can verify it for this purpose. (It also has a problem if the domain name is later reassigned to someone else; including a timestamp would be one way to avoid that problem (there are other possibilities as well) but I think Java namespaces do not have timestamps.)
> I hope that Google's new approach is motivated by security concerns rather than purely economic reasons.
Maybe partially, but they would need to do it in a better way.
If the manufacturer wants to offer verification of developers, this should be an optional feature allowing the user to continue the installation of applications distributed by unverified developers in a convenient way.
Making this verification mandatory is an absolute non-starter, ridiculous overreach, and a spit in the face of regulators who are trying to break Google and Apple's monopoly on mobile app distribution.
I don't understand how you can have mixed feelings about this.
> However, when it comes to publishing apps to the store,
This isn't about publishing apps to the Play Store. If that's all this was about, we wouldn't give a shit. The problem is that this applies to all stores, including third party stores like F-Droid, and any app that is installed independently of a store (as an apk file).
> Given the growing threat of such attacks, we need to find ways to mitigate them.
How about the growing threat of right-wing authoritarian control? How do we mitigate that when the only "free" platform is deciding the only way anybody can install any app on their phone is if that app's developer is officially and explicitly allowed by Google?
Hell, how long until those anti-porn groups turn their gaze from video games and Steam onto apps, then pressure MasterCard/Visa and in turn Google to revoke privileges from developers who make any app/game that's too "obscene" (according to completely arbitrary standards)?
There's such a massive tail of consequences that will follow and people are just "well, it's fine if it's about security". No. It's not. This is about arbitrary groups with whatever arbitrary bullshit ideology they might have being able to determine what apps are allowed to be made and installed on your phone. It's not fucking okay.
My elderly father unknowingly installed an application on Android after seeing a deceptive ad. An advertising message disguised as an operating system pop-up convinced him that his Android phone's storage was almost full. When he tapped the pop-up and followed the instructions, he installed a fake cleaner app from the Play Store. While the app caused no actual harm, it displayed notifications every other day urging him to clean his phone using the same app. When he opened it, the app — which did nothing except display a fake graph simulating almost-full storage — pressured him to purchase the PRO version to perform a deeper cleanup.
In reality, the phone had 24 GB of free space out of 64 GB total. I simply uninstalled the fake cleaner and the annoying notifications disappeared.
How such an app could reach the Play Store is beyond me. I can only imagine how many people that app must have deceived and how much money its creators likely made. I'm fairly certain the advertisement targets older people specifically—those most likely to be tricked.
For better or worse, I'm pretty sure that such an app would never land into the Apple App Store.
So you're saying Google is doing fuck all to protect customers on their already locked down store, right? This doesn't sound like it will be addressed by Google extending developer registration outside of their store at all if they can't even address obvious scam apps that they're already promoting. And to your point, yes, Apple probably does do a better job of maintaining their app store, that way they can prevent some of the push back on iOS being so locked down. An iPhone sounds like the right device for your father.
Litmus test: Can you get NewPipe or other Youtube clients onto an Android phone? This is non-malicious software that users want to run but could reduce YouTube's profits.