There are a few problems with how Trump is going about this:
1. The tariffs are too broad; they don't target a single industry or a small set of industries.
2. Trump has gone back and forth many times on them, using them as negotiating leverage rather than as long-term incentives.
3. They are on very shaky legal grounds and will likely end up getting reversed by either the Supreme Court or the next president.
If you want to use tariffs to encourage on-shoring, you make them targeted and pass them with bipartisan support through Congress. Companies need stability and long-term guarantees for the kind of capital expenditure that is needed. Even better if you use a mix of carrot and stick, rather than all stick.
I agree, and that's actually the problem. The problem with discourse in the US is that it comes in soundbites, division and confusion. This predates Trump, and arguably enabled him.
There could have been an argument for tariffs, done rationally and with a very specific program to rebalance trade. I'm not saying it's necessarily correct, but it could have entered as an option for voters to consider. But that's an alternate universe at this point, and we end up with unpredictable waffling that scares businesses and doesn't appear to have any obvious aim beyond petty attacks.
And with China a key target in the Trump Tariff debacle, China is punching holes in these punitive tariffs. Besides shipping goods to intermediary countries that are not as heavily tariffed then exporting to the U.S., China is taking ownership stakes in American businesses, thus circumventing the whole tariff thing. And the beauty of this is, they can take advantage of U.S. taxpayer benefits, such as an R&D tax credit, to sweeten the deal.
What does the US gain from taking Greenland that it doesn't already have? If the US does invade an ally to acquire territory I think Canadians should be worried. In any case, what the US gains is the wrong perspective. This is about Trump and those around him wanting to build an empire and the American people, seemingly, letting them.
In C++ you do it the other way around: you have a single class that is polymorphic over templates. The name of this technique within C++ is type erasure (that term means something else outside of C++).
Examples of type erasure in C++ are classes like std::function and std::any. Normally you need to implement the type erasure manually, but there are some libraries that can automate it to a degree, such as [1], though it's fairly clumsy.
How do APIs typically manage to actually "use" the "bar" of your example, such as storing it somewhere, without enforcing some kind of constraints?
Depending on exactly what you mean, this isn't correct. This syntax is the same as <T: BarTrait>, and you can store that T in any other generic struct that's parametrized by BarTrait, for example.
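A minimal sketch of that equivalence (the `BarTrait`, `Num`, and `Holder` names are invented for illustration): `bar: impl BarTrait` in argument position desugars to a generic parameter, so the concrete type is still known and can be stored in any struct parametrized by the same bound.

```rust
trait BarTrait {
    fn value(&self) -> i32;
}

struct Num(i32);

impl BarTrait for Num {
    fn value(&self) -> i32 {
        self.0
    }
}

// A generic struct parametrized by the trait bound.
struct Holder<T: BarTrait> {
    inner: T,
}

// `fn store(bar: impl BarTrait)` would desugar to exactly this
// signature: the concrete type `T` is known at compile time,
// so storing it in `Holder<T>` is fine.
fn store<T: BarTrait>(bar: T) -> Holder<T> {
    Holder { inner: bar }
}

fn main() {
    let h = store(Num(7));
    assert_eq!(h.inner.value(), 7);
}
```

The only practical difference from the `impl Trait` spelling is that with an explicit `<T: BarTrait>` the caller can name the type parameter with turbofish syntax.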
> you can store that T in any other generic struct that's parametrized by BarTrait, for example
Not really. You can store it in any struct that specializes to the same type as the value you received. If you get a pre-built struct from somewhere and try to store it there, your code won't compile.
I'm addressing the intent of the original question.
No one would ask this question in the case where the struct is generic over a type parameter bounded by the trait, since such a design can only store a homogeneous collection of values of a single concrete type implementing the trait; the question doesn't even make sense in that situation.
The question only arises for a struct that must store a heterogeneous collection of values with different concrete types implementing the trait, in which case a trait object (dyn Trait) is required.
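A hedged sketch of that contrast (the `Shape` trait and the two impls are made up for illustration): a `Vec<T: Shape>` could only ever hold one concrete type, while a `Vec<Box<dyn Shape>>` mixes them freely.

```rust
trait Shape {
    fn area(&self) -> f64;
}

struct Square(f64);
struct Circle(f64);

impl Shape for Square {
    fn area(&self) -> f64 {
        self.0 * self.0
    }
}

impl Shape for Circle {
    fn area(&self) -> f64 {
        std::f64::consts::PI * self.0 * self.0
    }
}

fn main() {
    // A generic Vec<T: Shape> is homogeneous; trait objects are what
    // allow a heterogeneous collection of Shape implementors.
    let shapes: Vec<Box<dyn Shape>> = vec![Box::new(Square(2.0)), Box::new(Circle(1.0))];
    let total: f64 = shapes.iter().map(|s| s.area()).sum();
    assert!((total - (4.0 + std::f64::consts::PI)).abs() < 1e-9);
}
```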
Yes, in part because the US has outsourced a lot of its industry to China since then. The US is still one of the principal per-capita emitters; it needs to cut emissions by two thirds to catch up with Europe and by half to reach China.
Having worked on a design system previously I think most people, especially non-frontend developers, discount how hard something like that is to build. LLMs will build stuff that looks plausible but falls short in a bunch of ways (particularly accessibility). This is for the same reason that people generate div-soup, it looks correct on the surface.
EDIT: I suppose what I'm saying is that "The paid products Adam mentions are the pre-made components and templates, right? It seems like the bigger issue isn't reduced traffic but just that AI largely eliminates the need for such thing." is wrong. My hunch is that AI has the appearance of eliminating the need for such things.
It's not that people care about quality, but that people expect things to "just work".
Regarding the point about accessibility, there are a ton of little details that must be explicitly written into the HTML that aren't necessarily the default behavior. Some common features of CSS and JS can break accessibility too.
None of this code would be obvious to an LLM, or even to human devs, but it's still what's expected. Without precisely written and effectively read-only boilerplate your webpage is gonna be trash, and the specifics are a moving target and hotly debated. This back and forth is a human problem, not a code problem. That's why it's "hard".
I use the web every day as a blind user with a screenreader.
I would 100% of the time prefer to encounter the median website written by Opus 4.5 than the median website written by a human developer in terms of accessibility!
That's really interesting. Are you speaking from experience with websites where you know who authored them or from seeing code written by humans and Opus 4.5 respectively?
So I have been using the human-authored web since well... 1999 or so, starting with old AOL CDs. I've obviously seen a lot of human content.
Back in the old days you might have image links and other fun stuff. Then we entered the era of flash. Flash was great, especially the people who made their whole site out of it (2004 + not being able to order ... was it pizza? something really sticks in my memory here.)
Then we entered the era of early Bootstrap. Things got really bad for a while -- there was a whole Bootstrap-Accessibility library people ended up writing for it, and of course nobody actually used the damn thing. The most frustrating thing at this point (2010?) was any dropdown anywhere. Any bootstrap dropdown was completely inaccessible using typical techniques, and you'd have to do something tricky with ... mouse routing? Gods it's been 15 years.
CAPTCHAs for stupid things became huge there for a brief moment -- I remember needing to pass a CAPTCHA to download ... was it Creative drivers? That motivated me to make a service called CAPTCHA-Be-Gone for other blind people for a while.
Then we see ARIA start to really come into its own... except that's a whole new shitshow! So many times you'd get people who thought "Oh to add accessibility, we just add ARIA" and had no fucking idea what they were doing, to the point where the most-common A11y advice these days has become "Don't use ARIA unless you know you need it."
Oh then we had this brief flash (~10 years ago?) of "60 FPS websites!" -- let's directly render to the fucking canvas, that'll be great. Flutter? ... Ick!
Nowadays the issues are just the same as they ever were. People using divs for everything, onclick handlers instead of stuff that will be triggered with keyboard... Stuff that Opus just doesn't do!
I guess I've only been using Opus 4.5 for about a month but just ... Ask it to build something? Use it with a screen reader? Try it!
> Then we see ARIA start to really come into its own... except that's a whole new shitshow!
I am not blind, but my experience trying to write accessible web pages is that the screen readers are inconsistent with how they announce the various tags and attributes. I'm curious what you think about the screen readers out there such as NVDA, JAWS, VoiceOver, TalkBack, etc. and how devs should be testing their web pages.
Many of the larger corporate clients tend to standardize on the exact behavior of JAWS and I am not sure that is helpful. It's like the Internet Explorer of screen readers.
If you want to know why a page ends up riddled with ARIA overriding everything, that's why. In even the best cases, the people paying for this dev work are looking for consistency and then not finishing the job. It's never made the highest priority work either since testing eats up a ton of time.
To reinforce my original point, I just don't think LLMs can write anything but the most naive code and everyone has opinions and biases completely incompatible with standardization. It's never "done" and fundamentally fickle and political just like the rest of the web.
Satisfying constraints like these isn't merely about knowing the spec and having lots of examples. Accessibility requirements are even more subjective than ordinary requirements already are to begin with.
But accessibility on the frontend is to a large extent patterns: if it looks like a checkbox it should have the appropriate ARIA role, and patterns are easy for an LLM.
It's just… a lot of people don't see this on their bottom line. Or any line. My awareness of accessibility issues is the Web Accessibility Initiative and the Apple Developer talks and docs, but I don't think I've ever once been asked to focus on them. If anything, I've had ideas shot down.
What AI does do is make it cheap to fill in gaps. 1500 junior developers for the price of one, if you know how to manage them. But still, even there, they'd only be filling in gaps as well as the nature of those gaps has been documented in text, not the lived experience of people with e.g. limited vision, or limited joint mobility whose fingers won't perform all the usual gestures.
Even without that issue, I'd expect any person with a disability to describe an AI-developed accessibility solution as "slop". Because I've had to fix up a real codebase where nobody before me had noticed the FAQ was entirely Bob Ross quotes (the app wasn't about painting, or indeed in English), I absolutely anticipate that a vibe-coded accessibility solution will do something equally weird: perhaps it will have some equivalent of "As a large language model…", or hard-code some example data that has nothing to do with the current real value of a widget.
Accessibility testing sounds like something an LLM might be good at. Provide it with tools to access your website only through a screen reader (simulated, text not audio), ask it to complete tasks, measure success rate. That should be way easier for an LLM than driving a web browser through images.
I think perhaps the nuance in the middle here is that for most projects, the quality that professional components bring is less important.
Internal tools and prototypes, both things that quality components can accelerate, have been strong use-cases for these component libraries, just as much as polished commercial customer-facing products.
And I bet volume-wise there's way more of the former than the latter.
So while I think most people who care about quality know you can't (yet) blindly use LLM output in your final product, it's completely ok for internal tools and prototyping.
The Tailwind Team's Refactoring UI book was a big eye opener for me. I had no idea how many subtle insights are required to create truly effective UX.
I think people vastly underestimate just how much work goes into determining the correct set of primitives to create a design system like Tailwind, let alone a full blown component library like TailwindUI.
While I believe you, it's an argument that artists have brought forward since the beginning of art; even many hundreds of years before the internet, humankind on average did not value this work.
It's not really a refutation of my point about how building a good component library is hard, to suggest using another component library. Of course, if you use one it's easier, that was my entire point.
shadcn ui is not a component library but the basis for a component library that has great accessibility built-in from the start, so yes, it is a refutation.
Maybe we're arguing semantics, but I think calling shadcn a "basis for a design system" is more accurate than a traditional component library. The difference to me is that shadcn lives inside your codebase and you can fully customize it as you please. You cannot customize a component library like MUI nearly to that extent.
Everything that's been said publicly is just pretence, just like Maduro's/Venezuela's supposed drug trafficking. This is about Trump being an old man in his waning days who wants to create a legacy. Those around him have ambitions of empire.
> If you want to make pointers not have a nil state by default, this requires one of two possibilities: requiring the programmer to test every pointer on use, or assume pointers cannot be nil. The former is really annoying, and the latter requires something which I did not want to do (which you will most likely not agree with just because it doesn’t seem like a bad thing from the start): explicit initialization of every value everywhere.
To me this is the crux of the problem with null. It's not that null itself is a problem; it's that nothing communicates when it can be expected. This leads to anti-patterns like null-checking every single pointer dereference. What's needed is a way to signal when `null` is and isn't valid under your domain model. This is precisely what stuff like `Option` does. Without a signal like this, programming feels like a minefield where every dereference is liable to blow up. Personally I'm done with that kind of programming.
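To make the signalling concrete, a small sketch (the `find_user` function is invented for illustration): the return type itself says whether absence is possible, and the compiler forces the caller to deal with it.

```rust
// With Option, the type says "this can be absent"; a plain String
// return could never be "null" at all, so no defensive check is needed.
fn find_user(id: u32) -> Option<String> {
    if id == 1 {
        Some("alice".to_string())
    } else {
        None
    }
}

fn main() {
    // The None case must be handled explicitly before the value is used.
    match find_user(2) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
    assert_eq!(find_user(1), Some("alice".to_string()));
    assert_eq!(find_user(2), None);
}
```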
The latter part of the post about individual-element vs grouped-element mindset is interesting, but I'm not sure it refutes the need for null or reasoning about it.
EDIT: It's also worth noting that Rust can still zero initialise entire structs despite element-wise initialisation when the valid bit pattern for the struct is 0.
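A minimal sketch of that zero-initialisation, with the caveat that `std::mem::zeroed` is `unsafe` and only sound when the all-zero bit pattern is valid for every field (the `Counters` struct here is invented for illustration):

```rust
#[derive(Debug, PartialEq)]
struct Counters {
    hits: u64,
    misses: u64,
}

fn main() {
    // Sound only because every field of Counters is valid when all-zero;
    // for types like references or NonZeroU64 this would be UB.
    let c: Counters = unsafe { std::mem::zeroed() };
    assert_eq!(c, Counters { hits: 0, misses: 0 });
}
```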
- 1: In C (and relatives), you cannot rule out that any given pointer is null, simply due to pointer arithmetic.
- 2: Some values have no default-zero state.
On #2 I found the expert group (EG) discussions on the Java mailing lists to be fascinating. Think of innocuous types such as "date", where a default value only causes problems. You end up with something like "1970:0:0" or "0:0:0" that acts like a valid date, but its use is likely unintentional and leads to follow-up issues.
And once you have multi-threading involved, even a simple null-flag becomes a difficult technical challenge to implement under all conditions.
While null pointers are possible under #1, it seems much more likely that you'd produce other kinds of invalid pointers (out of bounds, unaligned, etc.) than nullptr. The use of null pointers to signal absence and failure is surely the most common source of them in C (and relatives).
I've always understood the billion dollar mistake to be more about #2 and languages like Java in particular. Agree about default values being bad; it's one of my primary reservations with Go.
> While null pointers are possible under #1, it seems much more likely that you'd produce other kinds of invalid pointers (out of bounds, unaligned, etc.) than nullptr. The use of null pointers to signal absence and failure is surely the most common source of them in C (and relatives).
Fair point. Still, it just leaves a bitter taste when you want to express something as non-null but can't technically exclude it...
The current AI bubble seems like a bad proposition for most people regardless of how it shakes out. The way I've seen it described elsewhere is: either the bubble pops, causing a significant recession, or it doesn't and loads of people lose their livelihoods to AI. In either case average people lose.
The problems with AI aren't technical; they are political and economic. This topic is discussed in Max Tegmark's "Life 3.0", in which he theorises about various outcomes if we do invent AGI. He describes one possibility where we move to a post-scarcity society and people spend their days doing art and whatever else they fancy. Another option looks more like the world described in Elysium. I suspect the latter prediction feels more likely to most people.
Losing contact with the code is definitely on my mind too. Just like how writing can be a method of thinking, so can programming. I fear that only by suffering through the implementation will you realise the flaws of your solution. If this is done by an LLM you are robbed of the opportunity and produce a worse solution.
Still, I use LLM assisted coding fairly frequently, but this is a nagging feeling I have.
One of Rust's core guarantees is that a race condition in safe code will never cause UB. It might return a nondeterministic result, but that result will be safe and well-typed (for example, if it's a Vec, it will be a valid Vec that will behave as expected and, once you have a unique reference, is guaranteed not to change out from under you).
When talking about the kind that lead to torn memory writes, no it doesn't have those. To share between threads you need to go through atomics or mutexes or other protection methods.
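A small sketch of what that looks like in practice (the `fill_concurrently` helper is invented for illustration): safe Rust won't compile a plain `Vec` shared mutably across threads, but `Arc<Mutex<_>>` is one of the sanctioned ways to share it.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Four threads each push one element into a shared Vec.
fn fill_concurrently() -> Vec<i32> {
    let data = Arc::new(Mutex::new(Vec::new()));
    let mut handles = Vec::new();
    for i in 0..4 {
        let data = Arc::clone(&data);
        handles.push(thread::spawn(move || {
            // The lock serialises access; without it this wouldn't compile.
            data.lock().unwrap().push(i);
        }));
    }
    for h in handles {
        h.join().unwrap();
    }
    // The push order is nondeterministic, but the Vec is always a valid
    // Vec: no torn writes, no UB.
    Arc::try_unwrap(data).unwrap().into_inner().unwrap()
}

fn main() {
    assert_eq!(fill_concurrently().len(), 4);
}
```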
A USENIX paper on model checking for Rust OS kernels uncovered 20 concurrency bugs across 12 modules in projects like Redox OS and Tock, including data races, deadlocks, and livelocks.
You've linked to a bug that was unintentional and was fixed.
Go allowing torn writes for its slices and interfaces (its fat pointer types) is intentional behavior in the Go implementation and shows no sign of being fixed.
Someone getting unsafe code unintentionally wrong is not an indication that the language lacks memory safety.
Deadlocks are not memory safety issues by the definition used in the OP. Furthermore, safe Rust is only intended to guarantee protection against data races, not race conditions in general.
I think this is starting to wander rather far afield from where this thread started...
But anyways, at least from a quick glance those would at the very least seem to run into codys' unintentional bug vs. intentional behavior distinction. The bugs you linked are... well... bugs that the Rust devs fully intend to fix regardless of whether any in-the-wild exploits ever arise. The Go data race issue, on the other hand, is an intentional implementation decision and the devs have not indicated any interest in fixing it so far.