It'd be interesting to prompt it to do the same job but try to be innovative.
To your point, yeah, I mostly don't want AI to be innovative unless I'm asking for it to be. In fact, I spend much more time asking it "is that a conventional/idiomatic choice?" (usually when I'm working on a platform I'm not super experienced with) than I do saying "hey, be more innovative."
Yeah, I'd love to find time to. But e.g. I think that is also a "later stage". If you want to come up with novel optimizations, for example, it's better to start with a working but simple compiler, so it can focus on a single improvement. Trying to innovate on every aspect of a compiler from scratch is an easy way of getting yourself into a quagmire that it takes ages to get out of as a human as well.
E.g. the Claude compiler uses SSA because that is what it was directed to use, and that's fine. Following up by getting it to implement a set of the conventional optimizations, and then asking it to research novel alternatives to SSA that allow re-running the existing optimizations plus additional ones, and showing it can get better results or simpler code, would be a really interesting test that might be possible to judge objectively enough (e.g. code complexity metrics vs. benchmarked performance). Validating correctness of the produced code gets a bit thorny, though the same approach of compiling major existing projects that have good test suites is a good start.
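For anyone unfamiliar, SSA (static single assignment) form just means every variable is assigned exactly once, with reassignments renamed to fresh numbered versions. A toy sketch of the renaming idea (hypothetical helper; straight-line code only, so no phi-nodes at control-flow joins, which a real compiler would need):

```python
def to_ssa(stmts):
    """Rename variables so each is assigned once.

    stmts: list of (target, operand_names) pairs representing straight-line
    assignments like `x = a + b`.
    """
    version = {}  # current version number per variable name
    out = []
    for target, operands in stmts:
        # Reads refer to the *current* version of each operand.
        renamed_ops = [f"{v}{version.get(v, 0)}" for v in operands]
        # Each write mints a fresh version of the target.
        version[target] = version.get(target, 0) + 1
        out.append((f"{target}{version[target]}", renamed_ops))
    return out

# x = a + b; x = x + c   becomes   x1 = a0 + b0; x2 = x1 + c0
prog = [("x", ["a", "b"]), ("x", ["x", "c"])]
print(to_ssa(prog))  # [('x1', ['a0', 'b0']), ('x2', ['x1', 'c0'])]
```

The point of the renaming is that each value now has exactly one definition, which is what makes many classic optimizations (constant propagation, dead-code elimination) straightforward to express.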
If I had unlimited tokens, this is a project I'd love to do. As it is, I need to prioritise my projects, as I can hit the most expensive Claude plan's subscription limits every week with any of 5+ projects of mine...
Yes. I'll go down a wrong path in 20 minutes that'd have taken me half a day to go down by hand, and I keep having to remind myself that code is cheap now (and the robot doesn't get tired) so it's best to throw it away and spend 10 more minutes and get it right.
> No install fuss — download and start designing immediately.
also
> Gatekeeper blocks the app immediately. You'll see either "TUIStudio cannot be opened because it is from an unidentified developer" or "TUIStudio is damaged and can't be opened" on newer macOS after quarantine flags the binary.
To get past it: right-click the .app, choose Open, then Open Anyway; or go to System Settings → Privacy & Security → "Open Anyway".
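On recent macOS versions the Finder route sometimes isn't offered for the "damaged" variant of the error; the usual fallback is clearing the quarantine attribute from Terminal (the `/Applications` path below is an assumption about where you put the app):

```shell
# Remove the quarantine flag Gatekeeper checks; -r recurses into the
# .app bundle, -d deletes the named attribute. Adjust the path to
# wherever you placed TUIStudio.app.
xattr -dr com.apple.quarantine /Applications/TUIStudio.app
```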
It'd be interesting (earnestly!) to see someone make a solid case that AI reimplementation is bad, but that the original (afaik) "clean room" project, Compaq's reimplementation of the IBM PC BIOS (something most people seem to see as a righteous move toward openness and freedom), was good.
An argument that I have some sympathy for, while still being moderately+ in favor of gun control (here in the USA where I'm a citizen).
It seems that gun control—though imperfect—in regions that have implemented it has had a good bit of success and the legitimate/non-harmful capabilities lost seem worth it to me in trade for the gains. (Reasonable people can disagree here!)
Whereas it seems to me that if we accept the proposition that the vast majority of code in the future is going to be written by AI (and I do), these valuable projects that are taking hard-line stances against it are going to find themselves either having to retreat from that position or facing insurmountable difficulties in staying relevant while holding to their stance.
> these valuable projects that are taking hard-line stances against it are going to find themselves either having to retreat from that position or facing insurmountable difficulties in staying relevant while holding to their stance.
It is the conservative position: it will be easier to walk back the policy and start accepting AI-produced code some time down the road, when its benefits are clearer, than it will be to excise AI-produced code from years prior if there's a technical or social reason to do that.
Even if the promise of AI is fulfilled and projects that don't use it are comparatively smaller, that doesn't mean they have no value, in the same way that people still make wooden furniture with traditional methods today even if a company can turn out the same piece cheaper in an almost fully automated way.
The AI hype machine is pushing the "inevitability" and "left behind" sentiments to make it a self-fulfilling prophecy, like https://en.wikipedia.org/wiki/Pluralistic_ignorance, and they have the profit and power incentives to do so and drive mass adoption. It is far from certain that AI will be indispensable or that people will "fall behind" for not using it.
Why would the AI-fans even care if others who decide not to use it fall behind? Wouldn't they get to point and laugh and enjoy the benefits of "keeping up"? Their fervor should be looked at with suspicion.
If you're addressing this to me: you need to separate my description of how I perceive things from any effort/desire on my part to make that come to pass. I don't expect to stand to gain if AI continues to get better at coding — most likely just the opposite; this is the first time in my career that I've ever felt much anxiety about whether I'd be able to find work in my field in the future.
There are many others like me who share this expectation, and, while we certainly may be wrong, it's not because of some sinister plan to make the prophecy come true. (There are certainly some who do have sinister/profit-seeking motives, of course!)
> It seems that gun control—though imperfect—in regions that have implemented it has had a good bit of success and the legitimate/non-harmful capabilities lost seem worth it to me in trade for the gains.
This is even true despite the fact that there are bad actors only a few minutes drive away in many cases (Chicago->Indiana border, for example).
The "Swift has too many keywords now" meme makes me want to go insane. The vast majority of Swift code never runs into any of that stuff; so what its advocates are in effect saying is "we don't want Swift to expand into these new areas (that it has the potential to be really good at), even if it does so in a way that doesn't affect current uses at all."
That said, the Swift 6 / Strict Concurrency transitions truly have been rough and confusing. It's not super clear to me that much of it could have been avoided (maybe if the value of Approachable Concurrency mode had been understood to be important from the beginning?), and the benefits are real, but my gut feeling is that a lot of the "Swift is too complicated" stuff is probably just misplaced annoyance at this.
Swift's concurrency story is what happens when a multi-year project meets Apple's fixed six-month Swift release timeline. And because it was designed by highly knowledgeable but low-level engineers who've never written an iOS app in their lives, there was a huge approachability hole they've only recently worked their way out of; even that has major issues (MainActor isolation is on by default in Xcode projects but not in Swift itself).
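For concreteness, here's roughly what that Xcode-vs-Swift split looks like: Xcode's app templates turn on MainActor-by-default isolation, while a plain SwiftPM package has to opt in explicitly. A sketch of the opt-in, assuming the Swift 6.2 / SE-0466 `defaultIsolation` setting (check your toolchain's exact spelling):

```
// swift-tools-version: 6.2
// Package.swift (hypothetical package name)
import PackageDescription

let package = Package(
    name: "MyApp",
    targets: [
        .target(
            name: "MyApp",
            swiftSettings: [
                // Match Xcode's new default: unannotated code in this
                // target is @MainActor-isolated unless marked otherwise.
                .defaultIsolation(MainActor.self)
            ]
        )
    ]
)
```

Without this (or the equivalent build flag), the same source that compiles cleanly in an Xcode app target can produce concurrency errors in a package, which is exactly the kind of inconsistency the parent comment is complaining about.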