Non-AMD, but Metal actually has a [relatively] excellent debugger and general dev tooling. It's why I prefer to do all my GPU work Metal-first and then adapt/port to other systems after that: https://developer.apple.com/documentation/Xcode/Metal-debugg...
I'm not like a AAA game developer or anything so I don't know how it holds up in intense 3D environments, but for my use cases it's been absolutely amazing. To the point where I recommend people who are dabbling in GPU work grab a Mac (Apple Silicon often required) since it's a much better learning and experimentation environment.
I'm sure it's linked somewhere there but in addition to traditional debugging, you can actually emit formatted log strings from your shaders and they show up interleaved with your app logs. Absolutely bonkers.
The app I develop is GPU-powered on both Metal and OpenGL systems and I haven't been able to find anything that comes near the quality of Metal's tooling in the OpenGL world. A lot of stuff gets claimed to be equivalent, but as someone who has actively used both, I strongly feel it doesn't hold a candle to what Apple has done.
Nice project, great to see the scripts doing good work in the wild. If you needed any extra additions or tweaks to get them working, I'd love to hear about it.
Well, it was radar. The first Raytheon microwaves were really pushing for 3GHz, not 2.4GHz. If you like to play Connections, the reason for that is that the first mass-produced magnetrons were made by gun manufacturers like Colt and Smith & Wesson, and the tooling for gun bore holes and magnetron cavities lined up at 3GHz.
The official FCC minutes from 1945 [1] indicate that publicly they were marketed for heat-therapy massages, not food, with a weird wink-wink that if they could get a carve-out for medical use, they could also sell it to the Navy for reheating food as well.
The ISM carve-out came a couple of years later, in 1947, because Raytheon had already gotten an exception for this machine, not the other way around.
The whole origin story of this particular slot of spectrum is full of carts before horses. That water-oscillation thing is a common misconception - water oscillates at much higher frequencies [2].
Reminds me of "Reality has a surprising amount of detail" [0] — unknown unknowns often remain that way until you get up close and personal with something new.
One of the hardest bugs I've investigated required the extreme version of debugging with printf: sprinkling the code with dump statements to produce about 500GiB of compressed binary trace, and writing a dedicated program to sift through it.
The main symptom was a non-deterministic crash in the middle of a 15-minute multi-threaded execution that should have been 100% deterministic. The debugger revealed that the contents of an array had been modified incorrectly, but stepping through the code prevented the crash, and it was not always the same array or the same position within that array. I suspected that the array writes were somehow dependent on a race, but placing a data breakpoint prevented the crash. So, I started dumping trace information. It was a rather silly game of adding more traces, running the 15-minute process 10 times to see if the overhead of producing the traces made the race disappear, and trying again.
The root cause was a "read, decompress and return a copy of data X from disk" method which was called with the 2023 assumption that a fresh copy would be returned every time, but was written with the 2018 optimization that if two threads asked for the same data "at the same time", the same copy could be returned to both to save on decompression time...
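For anyone who wants the shape of that bug in miniature, here's a toy sketch in Python (not the real codebase - the names and the cache mechanics are made up for illustration):

```python
import threading
import zlib

# Toy illustration of the clash between the 2018 optimization and the 2023 assumption.
# If two threads ask for the same key "at the same time", both get the SAME buffer
# object back, so a caller that mutates what it believes is its private copy
# silently corrupts the data the other thread is reading.

_in_flight = {}              # key -> decompressed buffer shared between concurrent callers
_lock = threading.Lock()

def read_decompressed(key, read_compressed_from_disk):
    """Read, decompress and 'return a copy of' the data stored under `key`."""
    with _lock:
        buf = _in_flight.get(key)
        if buf is None:
            buf = bytearray(zlib.decompress(read_compressed_from_disk(key)))
            _in_flight[key] = buf   # real code would evict this once the in-flight reads finish
    # 2018 optimization: concurrent callers share one buffer to save decompression time.
    # 2023 callers assume a fresh private copy; returning bytearray(buf) here is the fix.
    return buf                      # BUG for the new callers: shared object, not a copy
```

The non-determinism comes entirely from whether two threads' requests happen to overlap, which is also why breakpoints and data watchpoints made it vanish.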
Yeah, on macOS the communication between your app and the window server (which is conveniently called WindowServer) happens via Mach ports. Most of it is undocumented; in fact, anything more "low-level" than using AppKit is undocumented, although IIRC it is in principle possible to use undocumented CG* APIs to create and manipulate windows yourself without going through the AppKit layers. I think each CG* API is basically a thin shim that talks to the window server, which has a corresponding CGX* implementation that does the actual logic. This article has some details: https://keenlab.tencent.com/en/2016/07/22/WindowServer-The-p...
I had a similar experience at Waterloo. The first semester of class we used Racket, HTDP (linked above) and even Haskell. I already had some experience with programming, but it was like starting all over again to build solid foundations. If you are in the business of teaching introductory computer science material, I suggest you consider "How to Design Programs". Fun, good times.
I recall going to a talk when I was at Northeastern, and at one point the speaker said something like, "But why would you create a new programming language just to solve one problem?"
Matthias Felleisen (the guy in the video) interrupted him from the audience to respond, "I know several people who have done exactly that, and it worked out quite well for them!"
As he says in the video, Northeastern teaches students how to program in a systematic way, rather than by copy-paste-modify from other examples. They certainly teach a powerful way of thinking, and it has served me well over the years.
If anyone out there is reading this and is thinking about going to college for computer science, definitely check out Northeastern. I could not be more thankful that I went there.
You can read the freshman textbook online: "How to Design Programs"[1].
> A little bit of this also has to do to stick it to all those Luddites on the internet who post "that's impossible" or "you're doing it wrong" to Stack Overflow questions... Requesting permissions in the JNI "oh you have to do that in Java" or other dumb stuff like that. I am completely uninterested in your opinions of what is or is not possible. This is computer science. There aren't restrictions. I can do anything I want. It's just bits. You don't own me.
From the wonderful CNLohr's rawdraw justification[0]. I always enjoy these kinds of efforts because they embody the true hacker spirit. This is Hacker News after all!
While this is worthwhile, there is still room for the math to diverge from your assumptions in a way you didn't anticipate. Let's talk briefly about the TLS 1.3 Selfie attack.
TLS 1.3 is the first TLS version where experts built a formally verified model of the protocol before they shipped the standard. There are models of earlier TLS versions, but nothing learned from them could be incorporated into the corresponding standard because they were an afterthought - like writing unit tests after shipping version 1.0.
TLS 1.3 has a mode intended for applications where nodes with a pre-existing relationship communicate. In this case we don't need the Web PKI ("SSL certificates"); instead we use Pre-Shared Keys (PSKs), meaning all the parties already know the keys. That clearly couldn't work for the open web, but it is fine for my boiler and its remote thermostat since, duh, I don't want some random other person's thermostat talking to my boiler. PSKs are also used when you connect to the same server again later, but that doesn't matter here.
The mathematical proof seemed to say exactly what the designers wanted, and it passed. So TLS 1.3 is exactly what we wanted... right? Well, almost. The designers were thinking of "Alice talks to Bob" and "Bob talks to Alice" as one symmetric conversation, but the proof treats them as two conversations. So the proof thinks Alice and Bob need two keys, the Alice->Bob key and the Bob->Alice key, while the humans assumed they only need a single key, an Alice-x-Bob key. And so the RFC (prior to errata) documents the human assumption, but the protocol actually requires the machine assumption.
The result is the Selfie attack. In our scenario Alice and Bob have a Cat. The Cat, like many cats, would like more food, but Alice and Bob use a secure protocol to ensure they only feed the cat once.
Bob has fed the cat and left. The Cat is sat by an empty food bowl looking plaintive. Alice sends Bob an encrypted message: "Did you feed the cat?". The Cat intercepts the message; it doesn't have the Alice-x-Bob key, so it can't decrypt the message or answer it directly, but it doesn't need to. The Cat simply sends Alice's own message back to her. Receiving a message encrypted with the Alice-x-Bob key, Alice concludes it's from Bob: "Did you feed the cat?". So she replies, "No, I did not feed the cat". The Cat receives this message too and directs it back to Alice as well. Now Alice has what appears to be a reply to her first question, "No, I did not feed the cat", encrypted with the Alice-x-Bob key. So Alice feeds the cat again, because the Cat was able to trick her into answering her own question.
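If it helps to see the failure mode concretely, here's a wildly simplified toy sketch in Python. It is not TLS; it just uses the `cryptography` package's Fernet as a stand-in for a shared symmetric key, to show why one key for both directions makes reflection possible:

```python
# Anything Alice encrypts under the shared Alice-x-Bob key also decrypts and
# authenticates as "from a holder of that key" -- which includes Alice herself.
from cryptography.fernet import Fernet

alice_x_bob = Fernet(Fernet.generate_key())      # the single shared key (the human assumption)

question = alice_x_bob.encrypt(b"Did you feed the cat?")

reflected = question                              # the Cat just bounces the ciphertext back

# Alice decrypts it successfully, so she assumes it came from Bob.
print(alice_x_bob.decrypt(reflected))             # b'Did you feed the cat?'

# The proof's view (the machine assumption): one key per direction.
alice_to_bob = Fernet(Fernet.generate_key())
bob_to_alice = Fernet(Fernet.generate_key())
msg = alice_to_bob.encrypt(b"Did you feed the cat?")
# Alice only accepts incoming traffic under bob_to_alice, so
# bob_to_alice.decrypt(msg) raises InvalidToken and the reflection fails.
```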
> I want to study more, read more, and finish my coding projects.
Be honest with yourself: do you really want to do those things? Or do you want them to be done? I suspect it’s the latter, and you don’t actually want to do what you say you want to do. A lot of people in my experience conflate “wanting something done” with “wanting to do something”. Understanding that may help you understand why you’re having a hard time completing the tasks you want to.
Either way, before you can complete a task you need to have a reason to complete it; what makes you want to do it? Maybe you want to read more to improve your knowledge about a field, or maybe you want to study more to better understand the material you’re learning. That motivation will be your guiding light when you don’t presently want to do something, but your higher goal is to do it. And that time will come. Everyone has days where they lack motivation, and having a good reason for doing something will enable you to have the discipline to do it anyway.
Next, set yourself up for small wins. If your goal is “I want to focus on my studying for eight hours”, then you’ve set yourself up for failure. Start smaller, with something that you know it is impossible for you to fail at. Start by studying for fifteen minutes, or ten, or five; whatever you know you can do successfully. Accomplishing that goal, no matter how tiny, will be a success that you can build upon. Next time, try to study for longer - push your comfort zone a bit. If it was hard to stay focused for fifteen minutes, maybe try sixteen. Progress at a rate that you believe you can progress at. And that’s key - if you start challenging yourself with tasks that you don’t believe you can do, you won’t be able to do them. Build up your confidence in yourself, and your habits.
The other tip I have is to set yourself up for success. Reflect on why you typically lose focus. Is it your phone? Maybe leave your phone somewhere else when you study. Is it a loud or distracting environment? Find a separate environment that’s better suited to the task. Do you lose focus because you get hungry? Bring snacks. Whatever the obstacles in your way are, identify them, and then come up with specific ways to target those problems.
I've found that the things I procrastinate most are things that have high "activation energy", for lack of a better term. Getting off my computer, getting to my car, and driving to the gym takes a lot of up front willpower, even if I find the actual working out enjoyable (or at least tolerable). The approach that I take to resembling a functional adult is to make small adjustments to my routine that significantly cut down on that activation energy. To use the gym example, I started showering at the gym. Every day, after work I go to the gym to shower. Some days I don't have the energy or time to work out, but I always show up, grab my gym bag from my car, go to the locker room, and shower.
That way, it's not a question of "do I have the energy today to go to the gym and workout?", it's just "do I have the energy today to workout while I'm at the gym anyway?", which is a much lower bar.
This game installs some random low-reputation anti-cheat from Japan that I have noticed some very strange behaviors from.
First, it keeps running even when the game is closed, including tray icons. It also starts on boot regardless of your startup settings.
It seems to be sending packets of data to Japanese IPs even after the game has been closed.
If you made the mistake of installing this very suspicious application: it took me some time to discover all the traces of it on my PC, but I believe I have completely removed it. Here's how:
First, uninstall the Warlanders game from Steam. Check that C:\Program Files\SentryAntiCheat is gone, not just empty.
If it's still there, open a command line and run C:\Program Files\SentryAntiCheat\setray.exe --unins2 just to make sure.
Search your whole PC for ses.exe and setray.exe and delete any copies you find.
Then go here: C:\Users\(your username)\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Sentry Anti-Cheat Notification.lnk <- delete that
Finally, the registry: I searched quite a bit, and almost everything relating to this anti-cheat is already gone. However, there is still one entry, so get rid of that:
Press Win+R
Type regedit.exe
Then navigate to this key:
HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Uninstall\SentryAntiCheat <- right-click that key, delete it, then confirm
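If you'd rather script that last registry step, a minimal sketch using Python's standard winreg module should do it (run it from an administrator prompt; the key path is the same one listed above, and DeleteKey will fail if the key somehow has subkeys):

```python
# Sketch: delete the leftover uninstall key via Python's stdlib winreg module.
import winreg

KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Uninstall\SentryAntiCheat"

try:
    winreg.DeleteKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
    print("Removed", KEY_PATH)
except FileNotFoundError:
    print("Key already gone")
```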
Book curation, or for that matter, any trustworthy product curation, is highly valuable. You used to be able to ask a bookshop keeper for recommendations based on your preferences and reading history. However, most independent bookshops have now disappeared from the UK. If you only read one genre, such as romantic novels, recommendation engines will easily find your next good read. If your reading habits are more eclectic, this provides a way to look at curated content from your favourite storefronts.
There is _still_ no other word processor I'm aware of that can properly convert to title case. Not to mention Reveal Codes. I... might actually use this, not just play with it.
What was it, 10 to 20 years ago, that people started to get noticeably nervous whenever they came near a description of my disability? It used to be so simple. I am 100% blind, and guess what, I prefer the term blind because it is pretty descriptive and relatively short. But all of a sudden, people outside the community started to fumble around with "visually challenged" and all the nonsense variations of that in my native language. It is so weird, because it adds yet another layer of distance between "us" and the "normal" people. You can almost feel how the stumbled-over word makes communication even more awkward. I (and almost all of my friends with a similar disability) make a point of letting people know that we actually prefer the word blind over everything else, and not even that puts people at ease.
It sounds a bit provocative, but it feels like this: the language terror they were subjected to has made them so insecure that they actually don't want to hear that blind people have no issue with being called blind. They somehow continue to argue, sometimes refusing to accept it and going on to use the weird language anyway.
It's a weird phenomenon. The longer I watch all of this - and I also mean the gender-language hacks - the more I feel this move has added to the distance between various groups, not made it smaller.
It is so condescending to believe your own language police more than the person you are talking to. Yet the peer pressure seems to be so high that this actually happens. Sad.
HIGHLY highly recommend checking out Stephen Marz's OS blog, where he walks through building a basic RISC-V OS in Rust step by step: https://osblog.stephenmarz.com/
I once read that business is the art of ripping someone off without pissing them off, and I've since made that a part of my life philosophy because it's very accurate.
These businesses are simply more devious manifestations of that art.
Bootlin's embedded Linux class is fantastic and will teach you how to do all of this--uboot setup, building a kernel and root fs, etc. All of the materials for the class are free downloads so if you're motivated you can read the slides and follow it all yourself. I highly recommend it: https://bootlin.com/training/embedded-linux/
I suffered from chronic diarrhea for years which I attributed to Crohn's disease. This seemed obvious since diarrhea is a very common symptom of Crohn's, but its onset had been several years after I was diagnosed and prior to that I had tended more towards the opposite problem (constipation). I was thus always suspicious that the root cause lay elsewhere. After much experimenting I found that if I ran my dishwasher through an extra rinse cycle the diarrhea went away.
I tried many different detergents but never found one that didn't cause problems, so I've just continued to run an extra cycle. My GI doctor didn't really believe me when I told him. I wonder how many people are having their IBS/IBD symptoms exacerbated by detergent residue left on their dishes.
It'll depend on where you live and what your goals are. If you have free time to tinker and enjoy that kind of thing, you can put together something very fast and reliable, and prevent some e-waste, by building your own storage server with used parts on the cheap.
If you're in the United States, electricity is cheap enough that you can pick up much older SAS drives for really low $/TB cost and have it be worthwhile.
For example, I bought a used Supermicro CSE-836 [1], which is like a 3U server chassis with 16 hot-swappable drive bays and a backplane of some sort.
The backplanes vary, but mine came with the BPN-SAS2-836EL1. I paid $300 in total for the chassis itself, backplane, dual power supplies, heatsinks, etc, along with a Supermicro X9DRi-LN4F+ [2] and two Xeon E5 2660 V2s as a bundle from someone in the 'ServeTheHome' classifieds section [3]. From there, I picked up a load of HGST 3TB 7200rpm SAS2 drives on eBay for about $10 each from a recycling company. And then 192GB of DDR3 ECC memory from the same place for about $80.
I also grabbed a couple less-than-production-ready 3.84TB U.2 NVMe drives on eBay for a little over $100 each.
I think if I were to do it again, I'd have gotten slightly larger, newer drives. These are all totally fine, but I started seeing ~6TB drives for about 3x the cost per terabyte, which would pay for itself quickly through the energy reduction. The other reason is that I ended up going a little overboard; I have about 56x3TB drives right now, which is a lot more than 16, so I needed to get a couple of JBOD expansions to put them in, each of which was like $250 -- if I had gotten fewer, larger drives, I'd have had another $500 to work with & be saving on energy.
Another thing I'd have done differently is get fewer but larger sticks of memory. I have a really nice amount of RAM right now, but the energy consumption with 24x8GB isn't worth the upfront savings compared to getting 16 or 32GB DIMMs.
All the storage is in OpenZFS on Linux. The 56x3TB drives are configured as 7 RAIDZ2 vdevs, so 2 drives each are for redundancy, and 6 for actual usable storage. This leaves me with a bit over 100TB of usable space. And the 3.84TB U.2 drives are mirrored and act as a "special" device (lol, literally what they are called) [4] to automatically store small blocks and ZFS metadata.
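If you want to sanity-check that figure, here's the rough arithmetic I'm relying on (a back-of-the-envelope sketch; it ignores ZFS padding, slop space and the special vdev, so treat it as an upper bound):

```python
# Rough usable-capacity estimate for 7 x RAIDZ2 vdevs of 8 x 3 TB drives (56 drives total).
vdevs = 7
drives_per_vdev = 8
parity_per_vdev = 2          # RAIDZ2
tb_per_drive = 3

raw_data_tb = vdevs * (drives_per_vdev - parity_per_vdev) * tb_per_drive
print(raw_data_tb, "TB of raw data capacity")               # 126 TB
print(round(raw_data_tb * 1e12 / 2**40, 1), "TiB")          # ~114.6 TiB before ZFS overhead
```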
I am sure I could have done a bunch better, but, so far, everything has been lightning fast and reliable.
I am using ZFSBootMenu [5] as my bootloader. It's cool since it is basically a tiny Linux distro that lives in your EFI and comes with a recent version of ZFS, so you can store your entire OS, including your actual kernel and such in ZFS, and you can enable all sorts of ZFS features that GRUB doesn't support, etc.
This is nice because, since the entire OS is living in ZFS, when I take snapshots, it is always of a bootable, working state, and ZFSBootMenu lets me roll-back to a selected snapshot from within the bootloader.
The Supermicro board has a slot for a SATA DOM [6], which is sort of like the form factor of an SD card. I picked up the smallest, cheapest one I could find on eBay for like $15 and use that to store my bootloader. I did this so that the tiny 128GB SSDs I use for my OS could be given to ZFS directly for simplicity, instead of having to carve out a small boot partition, etc.
All in all, I'm probably out about $1750 for >100TB usable, redundant, fast storage, and a decent bit of power for virtualization and whatever else. It costs me like $50ish a month in electricity because of all the drives and DIMMs. But I was already paying 65 euros a month for a 4x8TB server from LeaseWeb to use as a seedbox, and ran out of space, so it's been worth it, even with my dumb decision to use 3TB drives.
Edit: Also, figured it'd be worth mentioning, but the way I got the chassis+motherboard+cpu bundle for such a decent price was by posting my own thread. So, if anyone reading this is broke like me and not finding anything suitable, that is an option.
You won't always find exactly what you're looking for if you just browse around. But I've always had good luck explaining my situation, my budget, my goals, and someone tends to have stuff they don't need.
eBay seems to be pretty useless right now for the chassises (chasses? chassi? I give up) due to memecoin Chia miners. Forums are your best bet if you don't want to pay scalper rates.
> I was actually suprised to hear that high end cards are the ones where money is lost. Typically high end products are the ones with by far the biggest margins.
two things:
first, high-end cards are the ones that miners liked, so, partners really loaded up on orders of those cards. They sold a huge number of high-end cards relative to previous generations. And now the mining market has crashed, and they're stuck with a huge number of high-end cards (relative to actual sustainable demand) both at the vendor (NVIDIA) level and the partner (EVGA, et al) level. And miners are dumping their cards, so you have a huge number of high-end cards flowing back into the market too.
secondly, the card he cites as being "their highest-margin card" is a model that was a bad deal at launch and has seen virtually no price reduction since then. I have actually seen quite a few people commenting on just how little prices of that specific model have dropped, with the 3060 Ti and 3070 pushing downwards (and those are significantly faster than the base 3060). On the AMD side, you can now get a 6700XT (which, again, is significantly faster) for the same price, which is a performance bracket higher. It's an oddity of that specific model.
so, what he's saying is, an overpriced card that has seen no price reduction while much faster cards crash around it, has high margins - "no shit", as they say. Doesn't mean anybody is really buying it though.
the whole thing is very much "true, from a certain point of view"... like his "wow we're losing money!". Yeah, now that the GPU market is crashing you're losing money... and during those years, the company as a whole lost money because this CEO has been doing all sorts of zany business ventures that ended up in massive losses... the GPU division made money hand-over-fist for the last 2 years, and he's lost money hand-over-fist by trying to break into the enthusiast monitor market and enthusiast motherboards (both extremely support, warranty, and R&D intensive markets) and doing a ton of branding deals where he sticks EVGA logos on hdmi capture devices and pcie sound cards (in 2020, seriously) on products licensed from other vendors (who it turns out were skeezes and EVGA was on the hook for a ton of defective and falsely-marketed products).
he ran the company into the ground and is trying to shift blame to NVIDIA. Yeah, the board-partner thing is predatory, but partners don't really add significant value to the product anymore, and middlemen get squeezed out of business everywhere. Yeah, EVGA is in deep shit, and he's lying when he says they're not going to go under, the writing has been on the wall for months now. The GPU market crashing is almost certainly the last straw for EVGA and they won't be able to pivot to their remaining markets.
But like, he did make a ton of money on GPUs over the last 2 years, he just lost a ton of money on other stuff, and that's not NVIDIA's fault. Partners in general ordered way too much (and again, particularly the high-end stuff) trying to cash in, they made fucktons of margin on it during 2020-2021, and now they are lobbying NVIDIA to cover their downside and buy back all the chips they have left over. This is part of that push, and they know the public generally doesn't like NVIDIA, so it's worth a go.
And it's true that unless NVIDIA buys back the chips from EVGA, they probably go under. I imagine that's not a very pleasant conference room to be in during the negotiations. GN notes the extremely "personal" tenor of this guy's affront. If NVIDIA doesn't cave, his company goes under, and he's toast because EVGA is a pillar of the fucking community. Super tempting to try and go around the negotiations and take your case to the public, try and cash in on anti-NVIDIA sentiment (which is broad and intense), and try to appear to be the good guy.
Partners tried to pull this same stunt in 2018... they ran to tech media framing it as "NVIDIA is forcing us to buy obsolete chips if we want next-gen ones!" and if you read the details, what actually happened is partners wanted to cancel their contractually-agreed orders after mining tanked, and NVIDIA said no, if you do that it's over, we're not giving you 20-series chips if you break your contract on the previous batch.
I bet NVIDIA wishes they could cancel their contractually-agreed orders from TSMC too. That's not how it works unfortunately. But NVIDIA did knuckle under in 2018 and bought chips back from Gigabyte and perhaps other vendors, and now the board partners are hoping for a repeat. But that was 300k chips, which, while it's a lot, was tractable. The current overstock is like, millions of chips, all in the high end.
That's why both NVIDIA and AMD are launching top-first this generation... the lower chips would compete with those massively-overstocked 30-series cards. So, launch the high-end stuff over the top and let the 30-series inventory burn through.
For my setup, I do a "local" version of Geforce Now.
I have my gaming PC in another room and stream it to my laptop using Nvidia's GameStream and Moonlight. I run it at 1440p at 120fps. With everything connected via Ethernet, I get an end-to-end latency of 7ms; at 120fps a frame is about 8.3ms, so the stream is only about one frame behind the PC.
I use this setup for fast-paced games as well as regular PC usage. 99% of the time, I can't tell that it's a remote stream.
The advantages of the setup are:
1. Don't have to deal with the heat and noise of my gaming PC being in my room.
2. Switching between my laptop and gaming PC is faster than using a hardware KVM switch.
3. I can easily stream games or use my PC remotely with tablets and phones.
Disadvantages are:
1. Gamestream and Moonlight don't support streaming dual screens at once.
2. Gsync doesn't work over streaming. So lower frame rates (< 60fps) aren't as smooth as native.