
Sad. This is probably an unpopular opinion, but Wayland is not ready yet IMHO, and it's lacking on a conceptual level. It will take yet another few months or years until most bugs are fixed and the remaining broken functionality is addressed...


Maintenance mode != dead [1]

This just means that development of _new_ features and research should take place on Wayland, where it belongs. From what I've read, X has been a long evolution into a hodgepodge of historical but effectively dead interfaces with newer ones crammed alongside... it doesn't make sense to continue the tradition of cramming more features into X, but that doesn't stop people from developing new things on top of it while Wayland matures.

[1] https://en.wikipedia.org/wiki/Maintenance_mode


yes, and they specifically said:

  We will keep an eye on it as we will want to ensure X.org stays supportable until the end of the RHEL8 lifecycle at a minimum
which gives us a 10 year-ish time frame of support from RH.


You mean _parity_ features? SSHing into a server and launching a small GUI tool is still dead. It was killed by Wayland intentionally and willfully, as a fundamental design decision. In fact, that feature is considered "not part of Wayland" by the Wayland designers.

The root problem is that wayland replaced half of x.org and left the rest for someone else to figure out.
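
For anyone who hasn't tried it, the workflow being mourned here is roughly this on an X11 desktop (xclock is just a stand-in for whatever small GUI tool you need):

  # forward the remote app's display back over the ssh connection
  ssh -X user@server xclock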


There have been a few attempts to patch the core library, most notably [0], but those have fizzled out, probably because they'd break backwards compatibility. (Another victim of "We'll figure it out later".) The most recent work appears to be a work-in-progress tool for this at [1], but it's also the sort of thing that could have, and should have, been done 10 years ago.

[0] https://web.archive.org/web/20170302095534/http://blogs.s-os... [1] https://gitlab.freedesktop.org/mstoeckl/waypipe/
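
For the curious, [1] is meant to be used as an ssh wrapper; if I'm reading its README right, an invocation looks something like this (the remote program name is just an example):

  # proxy the Wayland protocol over ssh, roughly analogous to ssh -X
  waypipe ssh user@server some-wayland-app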


> SSHing into a server and launching a small GUI tool is still dead.

They are working on native support for that via PipeWire. For the time being you can still do that with Xwayland.


PipeWire is video streaming, not remote rendering. They're assuming a rack of GPUs in the datacenter and a virtual circuit dedicated to me, but in reality I have a congested link and one GPU attached to my display.


The question is which is more data: the rendered result, or the data needed to do the rendering.

If the rendered result is lighter, then a video stream makes perfect sense.

If sending the data over to render is lighter, there's still one more issue: GPUs are still not interchangeable, so the server hosting the application needs to have the necessary information about all the possible GPUs, their quirks and limitations. And you might still get inconsistent results across clients.

My personal guess is that the data needed to render is a lot more than the data required for video streaming. A lot still depends on doing CPU rendering directly into textures. Although more and more is moving to the GPU, so...
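
As a rough back-of-envelope comparison (all of these numbers are illustrative assumptions, not measurements):

  # raw 1080p RGBA frames at 30 fps, in bytes per second
  echo $(( 1920 * 1080 * 4 * 30 ))   # 248832000, i.e. ~250 MB/s before any compression
  # a typical 8 Mbit/s H.264 stream of the same screen, in bytes per second
  echo $(( 8000000 / 8 ))            # 1000000, i.e. ~1 MB/s

Of course X11 clients don't ship raw frames either; the real answer depends on how much of the drawing can be expressed as compact protocol requests versus pre-rendered pixmaps.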


X forwarding is pretty awful over congested links and high-latency links in my experience.

And from what I've seen, if a program has even a medium-low level of graphical intensity, it's not going to interact well with X forwarding. So anything that previously worked should still work without a GPU.


Synergy (https://symless.com/synergy) is something that I've been searching for a good, Wayland-compatible, replacement for. It's simple, but I've gotten very accustomed to the ability to plop down a laptop (Mac, Linux, whatever) next to my desktop and instantly have another screen-worth of real estate to work with.

Wayland, by all accounts I've seen to date, just doesn't have the hooks to allow a tool like Synergy to exist. :(


It's sad that maintenance mode != dead. They should have put X-Windows out of its misery decades ago!

What's actually sad is that its replacement, Wayland, didn't learn any of the lessons of NeWS (or what we now call AJAX) and Emacs.

They could at least throw a JavaScript engine in as an afterthought. But it's WAY too late to actually design the entire thing AROUND an extension language, like NeWS and Emacs and TCL/Tk, which is how it should have been in the first place.

https://en.wikipedia.org/wiki/NeWS

NeWS was architecturally similar to what is now called AJAX, except that NeWS coherently:

+ used PostScript code instead of JavaScript for programming.

+ used PostScript graphics instead of DHTML and CSS for rendering.

+ used PostScript data instead of XML and JSON for data representation.

Designing a system around an extension language from day one is a WHOLE lot better than nailing an extension language onto the side of something that was designed without one (and thus suffers from Greenspun's tenth rule). This isn't rocket surgery, people.

https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule

>Any sufficiently complicated C or Fortran program contains an ad-hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

https://medium.com/@donhopkins/the-x-windows-disaster-128d39...

>If the designers of X-Windows built cars, there would be no fewer than five steering wheels hidden about the cockpit, none of which followed the same principles — but you’d be able to shift gears with your car stereo. Useful feature, that. - Marcus J. Ranum, Digital Equipment Corporation


That's the Wikipedia summary, but the details matter heavily. NeWS was developed in the time before "personal computing", for workstations where all software was trusted. The threat model was "what threat?"

It also baked in assumptions about the graphics model and where the expensive parts are. It has the same fatal flaw as the X11 graphics model: that rendering will be far slower than I/O traffic, so socket bandwidth will never be the bottleneck.

Embedded programmability has ended up being a security disaster. It was perhaps ahead of its time, but we understand its flaws and principles now.


You seem to be under the misunderstanding that I'm advocating that X-Windows be replaced by NeWS in 2019. That's not at all what I'm saying.

I guess you're one of those people who always uses the noscript extension in your browser, and drives to the bank, pays for a parking place, and waits in line instead of using online banking.

Have you ever used Google Maps? Are you arguing that everybody should turn off JavaScript and use online maps that you scroll by clicking and waiting for another page to load?

You must really hate it when WebGL shaders download code into your GPU!


Considering I'm the author of one of the more technically advanced WebGL apps out there (https://noclip.website/), I understand the power of the web as an application delivery platform.

NeWS-style programmability does not have the same advantages. I cannot host a NeWS application in one place and send a single link to let others run it.

The NeWS architecture put graphics rendering responsibility on the server with Display PostScript (also a mistake X11 made), and the scripting was so you could design a button in one process and instance it in another. It was a workaround for the lack of shared libraries, not a way of moving computation to a data center (the reason you use AJAX).


No, NeWS didn't use Display PostScript. A common misconception. Adobe's Display PostScript extension to X11 didn't have any support for input, event handling, threading, synchronization, networking, object oriented programming, user interface toolkits, window management, arbitrarily shaped windows, shared libraries and modules, colormaps and visuals, X11 integration, or any of the other important features of NeWS.

We actually wrote an X11 window manager in NeWS, with tabbed windows, pie menus, rooms, scrolling virtual desktops, etc. Try doing that with Display PostScript.

And it actually performed much better than was possible for an X11 window manager, since X11 window managers MUST run in a separate process and communicate via an asynchronous network protocol, incurring lots of overhead like context switching, queuing, marshaling and unmarshaling, server grabbing, etc.

https://news.ycombinator.com/item?id=15327339

You seem to have a lot of misconceptions about NeWS, and how and why it was designed and implemented. I suggest you read the X-Windows disaster article I wrote in 1993, and the original paper about SunDew by James Gosling, "SunDew - A Distributed and Extensible Window System", which he published in Methodology of Window Management in 1985.

http://www.chilton-computing.org.uk/inf/literature/books/wm/...

Again: I'm not advocating that X-Windows be replaced by NeWS in 2019. I'm saying that Wayland didn't learn from the lessons of NeWS. And that a much better solution would be to push Electron down the stack to the bare metal, to become the window system itself.

How do you reconcile your use of WebGL and JavaScript with your distaste for embedded programmability? Or do you just hold your nose with one hand and type with the other, the way I program X11? ;)


No response? I'd still like to know how you reconcile this:

>Embedded programmability has ended up being a security disaster.

With this:

>I'm the author of one of the more technically advanced WebGL apps out there

Do you believe your "more technically advanced WebGL app" is a "security disaster"?

If so, perhaps you should take it down, instead of linking to it! I'm afraid to click on such an ominous link to what you describe as a security disaster.


I had a response and even posted it for a few seconds, but deleted it because the back-and-forth would just continue, so I wanted you to have the last word peacefully.

But since you posted a second time, I'll at least tell you why I didn't give you an in-depth response.


> I guess you're one of those people who always uses the noscript extension in your browser

Imagine that not only your browser, but the entire system is crippled by poorly written trackers, ad spots with animated sprites floating over mp4 videos, and 0-day exploits like the one that was recently found in the wild (and there were plenty of vulnerabilities in PostScript implementations as well, so in this sense your analogy is pretty much spot-on). Sure, that's the system of the future that everyone should've switched to decades ago.


What?! I use NoScript but I online bank exclusively. :v


Back in the day I worked on a system built on NeWS. Hand-coding PostScript was an adventure. At the time NeWS certainly looked like the future. Oh, and the server-side application (written in C) had an ad-hoc version of about 3/4 of Common Lisp. Every function call passed an array of void star star.


Wayland is literally never going to be ready unless it's forced to be. How long has Nvidia been promising to support it now? And still, unless you're using Nouveau, it's basically not usable on Nvidia cards.


That isn't Wayland's fault, it's the closed-source third parties.


That was the same argument GNOME 3 used when it wasn't working on my Nvidia GPU: the driver is buggy, not our fault.

This is very hard to explain to your users, because there's nothing they can do.

So yes, if the software doesn't work, it really doesn't matter too much why. I don't mind X until Wayland is ready for everyone.


That's been the argument for every Linux problem for at least as long as I've been using Linux. WiFi drivers used to suck, to the point that you had to be careful which WiFi card (yes, card) you bought, because only a couple of models worked on Linux at all. This situation took an extremely long time to get better. The same thing happened (and is still happening) with graphics cards. But at least most of them sort of work, instead of outright failing.

As long as enough consumers don't actively demand decent Linux (or BSD, whatever) support, it will never come. I'd suggest voting with your wallets, but with the laptop market being the steaming pile of shit that it is, that's virtually impossible.

The year of the Linux desktop...just like fusion power, is always just around the corner.


Sometimes manufacturers lie to us, though. Like the ThinkPad X1 Carbon was supposed to be "Linux-ready," and then it shipped with sleep functionality broken for a couple of months until they managed to get a BIOS fix out.

I tried to vote with my wallet, but this is what Linux is like /shrug


It's actually easy to explain: ditch Nvidia, they aren't supporting Linux properly. Pretty clear message, and users either get it and ditch it, or, if not, there's nothing you can do to help them. It's totally Nvidia's fault.


My Nvidia GPU was fine until GNOME 3 happened. See, it's not my fault. It had good drivers and decent support.

Now I use only Intel because... well, it works. But I don't bother with 3D gaming, of course.

My point is that you can blame the manufacturers because their support is broken, but you can't blame the users because they will use something else (I know I did: XFCE works great).

EDIT: Gnome shell was initially released in 2011. This may or may not have changed, for better or worse. I moved on.


Nvidia is far from fine. To understand their attitude, read this comment from one of the leading Nouveau developers: https://www.phoronix.com/forums/forum/linux-graphics-x-org-d...

I agree, you can't blame users for it, but you can totally tell them to try to switch. DE and Wayland compositor developers can't spend resources cleaning up the mess that Nvidia created by refusing to upstream their driver and preventing Nouveau from working properly as well.


Users don't care whose fault it is. Once you start laying blame you've already lost.


Users not knowing or caring about these details is precisely why users should be kept away from the management of these projects. The details do matter, even if the user lacks the context or domain-specific knowledge to make heads or tails of those details.


For whom does that actually matter, though?


Wayland developers, who are not GPU reverse-engineers.


Sure, but they already know this. You or anyone else on Hacker News is not clarifying this for them.

Meanwhile, telling users who to blame does nothing for them. They don’t care. To them, the best entity to blame is the distribution, that’s pretty much their job.

But while blame doesn’t fix problems, neither does doing nothing. What I’m suggesting is, the move to Wayland needs to be pushed harder if anything is going to be fixed. Nvidia can’t hold it back, that’s not sustainable.

But again, users can’t do that. Distributions can do that. Distributions can tell people, sorry, we can’t support Nvidia, contact Nvidia, they already promised to work towards this, and what they produced is incomplete. If they continue to support X11, there's no urgency to fixing the issues with Wayland.

This isn’t really about blame though, it’s about responsibility. Somebody has to fix the problem. Open source projects like Nouveau can help, but Nvidia has blocked progress by requiring signed blobs and continually not providing access to important information (like technical bits for reclocking.) I think that is pretty much reason enough to suggest Nvidia should be shouldering the pressure and responsibility to deliver a good Linux experience, and we should absolutely hold them to it, until or if they stop blocking open source efforts on purpose.

Establishing blame would tell you whose fault it is that something is broken. The answer is nearly useless, IMO. The only thing that matters is how we fix it and possibly what can be done to prevent it from happening again. Saying Nvidia should fix it is different from worrying about who is to blame; if you frame it that way, they could fire back about how kernel licensing restrictions make their life harder, and nobody cares about that issue either.


> "Meanwhile, telling users who to blame does nothing for them. They don’t care. To them, the best entity to blame is the distribution, that’s pretty much their job."

I think there is value in telling users to vent their frustration in a productive direction; namely at Nvidia who has the power to change the situation, rather than at Wayland developers who are likely just as frustrated. I think we both generally agree on that.


Yeah, Nvidia is the right entity to bother no doubt, I just have a different idea of the tone and attitude. The approach I’ve seen before feels more like guiding an angry mob. I’m not really an expert but I feel what we need is firm decisions (drop support, it’s time) and clear messaging (vote with your wallet, and tell Nvidia you need this.)

Apple dropped support for Nvidia graphics. Linux will be next at this rate.

Still, I want to shy away from blame. Blame is complicated, can be deflected, and often leads to hatred. I prefer to say, Nvidia is responsible, nobody else can fix this.


Nvidia sales hardly suffer from it.

As far as Linux is concerned, the only customers that matter to Nvidia are CUDA users (who don't care about their UI on cloud instances) and Hollywood studios (who have their own in-house distributions).


Drivers are often bad, and Nvidia's especially so, but at this point, if it's been years and you can't get a desktop window manager compositing on top of a GPU stack, the problem seems increasingly likely to be you and not them. Blitting quads is not THAT hard. We've got vendors like Mozilla out there rasterizing entire webpages on NVIDIA (and AMD now, I think?) GPUs, but Wayland can't reliably composite window bitmaps for some reason?


Is that still the case? I'm running FC29 with the nVidia closed-source drivers and I haven't had issues. (I have had issues on a laptop with switchable graphics, though, and Nouveau was the solution there.)


Is this new? Last time I tried, Nvidia on Wayland had performance issues and did not support Optimus at all (Edit: after rereading what you said I am guessing that is still an issue.)

If it is working though I retract what I said, though it’s absurd how long it took to happen.


Even without Wayland, in recent Fedora releases (I forget if it started with 28 or 29), I've found Optimus switchable graphics to be impossible, or at least impractical, to configure. What's become easier is having the entire X display use the Nvidia GPU, with the Intel GPU transparently slaved to relay pixels to its connected laptop display.

But it does bring back the annoying, periodic driver breakage, where a dnf update replaces the xorg-x11-drv-nvidia RPMs and suddenly the userspace is incompatible with the still-running kernel module's slightly older API, so new processes cannot use OpenGL until I reboot. This was one benefit of Optimus via the optirun infrastructure: the main desktop was on the Intel GPU, and I could actually unload and upgrade the nvidia module if desired, without forcing a disruptive reboot.
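
If it helps anyone hitting the same thing, the mismatch is usually easy to confirm (the package and proc paths below are what I'd expect on Fedora with the rpmfusion driver):

  # the kernel module typically logs an API/version mismatch complaint
  dmesg | grep -i 'NVRM.*mismatch'
  # compare the installed userspace version with the loaded module's version
  rpm -q xorg-x11-drv-nvidia
  cat /proc/driver/nvidia/version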


Do you have any documentation, or even random notes, on how you set that up? I have an Optimus laptop and couldn't make the proprietary drivers work (tried both rpmfusion and negativo17). I'm happily using Nouveau for now, but at some point I'll need to use CUDA again.


I have an older Thinkpad T440p with Geforce GT 730M running Fedora 29. I use the MATE desktop rather than GNOME.

I just installed the akmod-nvidia packages from rpmfusion which also pulls in xorg-x11-drv-nvidia and xorg-x11-drv-nvidia-cuda. Search for "PRIME" discussions in the README.txt under /usr/share/doc/xorg-x11-drv-nvidia. I cannot be sure every bit of my config is still necessary, as I left it alone once I got it working.

First, I have /etc/X11/xorg.conf.d/nvidia-prime.conf with a small amount of config:

  Section "Module"
    Load "modesetting"
  EndSection

  Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    Option "AllowEmptyInitialConfiguration"
  EndSection
Then, I created a custom script in /etc/lightdm/display_setup.sh:

  #!/bin/sh
  
  if rpm -q xorg-x11-drv-nvidia \
     && lsmod | grep -q '^nvidia'
  then
    xrandr --setprovideroutputsource modesetting NVIDIA-0
    xrandr --auto
    xrandr --output eDP-1-1 --set "PRIME Synchronization" 1
  fi
And in my /etc/lightdm/lightdm.conf I added one line to call that script:

  ...
  [Seat:*]
  ...
  display-setup-script=/etc/lightdm/display_setup.sh
To be honest, my old GPU is too slow and has too little RAM to be worthwhile for OpenCL. I find it more practical to just use the Intel OpenCL runtime for multi-core CPUs on this old quad-core laptop or on newer workstations. I do get some use out of xorg-x11-drv-nvidia-cuda on a Titan X desktop GPU, though.
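
If you want to sanity-check the PRIME wiring after logging in, these are the commands I'd start with (glxinfo comes from the glx-utils/mesa-demos package, if memory serves):

  # both the modesetting and NVIDIA providers should be listed
  xrandr --listproviders
  # confirm which GPU is actually doing the GL rendering
  glxinfo | grep "OpenGL renderer"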


Thanks a lot!

I've already created a thread about my problems on Ask Fedora, and I made a last post there now linking to your post here in case it helps somebody else. https://ask.fedoraproject.org/t/black-screen-after-installin...

I'll try your notes when I have enough free time to re-install if things go bad.


Why is your standard for a Linux desktop renderer whether a pretty Linux-hostile company's niche laptop energy-saving hybrid GPU driver works well on it?


Last time I checked, screen sharing applications didn't work with Wayland (Meet, Slack, Skype, etc.). All of them work only with X11. I need them to work with my customers, so I can't use Wayland until this problem is solved.


Especially since Gnome is basically the only DE that works on Wayland.


There's also Sway.


Hadn't heard of Sway, but the point still stands... Most of the desktop environments don't support Wayland. Cinnamon is the big one for me, but KWin doesn't support it either, and KWin likely has a much larger user base than anything other than GNOME.


KWin has support, albeit still very buggy (last I heard).



