Hacker News | anon7725's comments

The civil rights acts had firm constitutional grounding in the 14th and 15th amendments.


The 14th and 15th Amendments were binding on the government. The Civil Rights Act was binding on private businesses, even those engaged only in intrastate trade.

The Civil Rights Act of 1875, which also tried to bind private businesses, was found unconstitutional in doing so, despite coming after the 15th Amendment. But by the '60s and '70s we were already in a post-constitutional society: FDR's threat to pack the courts, the 'necessities' implemented during WWII, and the Progressive Era more or less ended with SCOTUS deferring to everything as interstate commerce (most notably in Wickard v. Filburn). The 14th and 15th Amendments did not change between the time the same things were found unconstitutional and the time they became magically constitutional ~80+ years later.

The truth is, the Civil Rights Act was seen as so important (that time around) that they bent the constitution to let it work. And now much of the most relied-upon legislation rests on a tortured interpretation of the constitution, making things incredibly difficult to fix and setting the stage for people like Trump.


The corollary is that literally everything that the US government communicates should be assumed to be a lie. Even normal, boring announcements from the USDA and such are communicated in the voice of a terminally-online twitter troll.


Since many people are primed for this video to confirm their worldview, it doesn't even need to be that good. It will spread like wildfire, and its debunking won't. Technically, there is no reason why this can't be done today.


> How do you cult de-program 40% of the population of the most powerful country on the planet?

If history is any guide, that doesn't happen without a substantial - existential, perhaps - exogenous shock.


> I can tell you personally that the action which most seriously affected my performance at a workplace was being denied a bereavement day because the official policy was to only allow one.

One of the things I remember most from my career was a manager "rules lawyering" about bereavement leave when my aunt passed away. Ironically, HR was very sympathetic and accommodating, and it was a non-issue with them.

I've been treated "worse" by jackass execs and managers, but always in the context of work. Someone acting in the way this manager did about a personal situation sticks with me much more than those.


My ex didn’t go to her own father’s funeral because the company said she couldn’t have that much time off. Six months later when she talked about it at work they were horrified she hadn’t felt she could go, but how could you possibly make that up to someone? I think they might have actually worried she would sue them.

I told her to go and we’d sort out her work situation when she or we got back.

It kinda came out of the blue, so we hadn't had time to think it through hypothetically in a way that would have let us just operate on autopilot.

Since then I’ve had bosses who, on hearing of a death or critical illness in the family, just say, “Go.” No discussion or details needed. Just go. Because being petty or precious about the whole thing just makes you public enemy number one. And when clever people work for you, they don’t always come at you straight on. They come at you sideways and you don’t even know it’s revenge. They just passive-aggressively let something slide that makes your life miserable.


> HN is doing the equivalent of (a) denying Venezuelans appreciate this, and when that fails (b) claiming they know better than Venezuelans wrt whether this is good or bad for them.

It’s very dangerous to do the “right thing” for the wrong reasons in a complex situation. This is step 1. Does anyone have faith that the Trump admin will properly execute steps 2..N?

I would have some respect if the administration announced that it would support a provisional government led by the apparent winner of the last election in Venezuela. Instead, it seems that the administration has left the existing power structure in place and established a client/patron relationship with the leadership. This is revolting.


> It’s very dangerous to do the “right thing” for the wrong reasons in a complex situation.

Venezuelans do not care for this train of thought. No one else was going to do it, and their equivalent of Hitler has just been ousted.

Far better, from their perspective, to have the evil guy removed than endless do-nothing hand-wringing from the international community that shares your train of thought.

Democratically held elections will be run again in the country.

The "wrong reasons" can still be mutually beneficial. The US gets its oil and Venezuela gets its dictator disappeared.


Inflation risk.


I think most people would accept inflation as less of a risk than the 20+% swings the crypto market sees on a fairly regular basis.


If they’re that common, it’s easy to profit 20% from them.


That makes absolutely zero sense, and you know it. I understand you are here to essentially shill bitcoin given you have a company that exists because of it but at least argue in good faith.


A strange game. The only winning move is not to play.


> My multinational big corporation employer has reporting about how much each employee uses AI, with a naughty list of employees who aren't meeting their quota of AI usage.

“Why don’t you just make the minimum 37 pieces of flAIr?”


> the prompt is now the source that needs to be maintained

The inference response to the prompt is not deterministic. In fact, it’s probably chaotic since small changes to the prompt can produce large changes to the inference.
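A toy sketch of why sampled decoding is non-deterministic (the logits and temperature values here are made up for illustration): at temperature 0 the model just takes the argmax, which is fixed, but any positive temperature draws from a probability distribution, so the same prompt can yield a different token on every run.

```python
import math
import random

def sample_token(logits, temperature):
    """Sample an index from logits; temperature 0 means greedy (deterministic)."""
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    weights = [math.exp(l / temperature) for l in logits]
    r = random.random() * sum(weights)
    cum = 0.0
    for i, w in enumerate(weights):
        cum += w
        if r < cum:
            return i
    return len(logits) - 1

logits = [2.0, 1.9, 0.1]  # made-up next-token scores

# Greedy decoding: identical result every run.
assert all(sample_token(logits, 0) == 0 for _ in range(100))

# Temperature sampling: the same "prompt" (same logits) yields varying tokens.
samples = {sample_token(logits, 1.0) for _ in range(1000)}
print(sorted(samples))  # typically more than one distinct token
```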


> The inference response to the prompt is not deterministic.

So? Nobody cares.

Is the output of your C compiler the same every time you run it? How about your FPGA synthesis tool? Is that deterministic? Are you sure?

What difference does it make, as long as the code works?


> Is the output of your C compiler the same every time you run it?

Yes? Because of actual engineering, mind you, not rolling the dice until the lucky number comes up.

https://reproducibility.nixos.social/evaluations/2/2d293cbfa...
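As an aside, determinism of a compilation step is easy to check by hashing the artifact; here is a minimal sketch using Python's own bytecode compiler as a stand-in (the same idea behind reproducible-builds checks):

```python
import hashlib
import marshal

SRC = "def add(a, b):\n    return a + b\n"

def build(src: str) -> str:
    """'Compile' the source and hash the resulting artifact."""
    code = compile(src, "<mem>", "exec")
    return hashlib.sha256(marshal.dumps(code)).hexdigest()

# Same input, same toolchain -> bit-identical output, run after run.
print(build(SRC) == build(SRC))  # True
```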


It's not true for a place-and-route engine, so why does it have to be true for a C compiler?

Nobody else cares. If you do, that's great, I guess... but you'll be outcompeted by people who don't.



That's an advertisement, not an answer.


Did you really read and understand this page in the 1 minute between my post and your reply or did you write a dismissive answer immediately?


Eh, I'll get an LLM to give me a summary later.

In the meantime: no, deterministic code generation isn't necessary, and anyone who says it is is wrong.


The C compiler will still make working programs every time, so long as your code isn’t broken. But sometimes the code ChatGPT produces won’t work. Or it’ll kinda work, but you’ll get weird, different bugs each time you generate it. No thanks.


Nothing matters but d/dt. It's so much better than it was a year ago, it's not even funny.

How weird would it be if something like this worked perfectly out of the box, with no need for further improvement and refinement?


> So? Nobody cares

Yeah the business surely won't care when we rerun the prompt and the server works completely differently.

> Is the output of your C compiler the same every time you run it

I've never, in my life, had a compiler generate instructions that do something completely different from what my code specifies.

That you would suggest we will reach a level where an English language prompt will give us deterministic output is just evidence you've drank the kool-aid. It's just not possible. We have code because we need to be that specific, so the business can actually be reliable. If we could be less specific, we would have done that before AI. We have tried this with no code tools. Adding randomness is not going to help.


> I've never, in my life, had a compiler generate instructions that do something completely different from what my code specifies.

Nobody is saying it should. Determinism is not a requirement for this. There are an infinite number of ways to write a program that behaves according to a given spec. This is equally true whether you are writing the source code, an LLM is writing the source code, or a compiler is generating the object code.

All that matters is that the program's requirements are met without undesired side effects. Again, this condition does not require deterministic behavior on the author's part or the compiler's.

To the extent it does require determinism, the program was poorly- or incompletely-specified.
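To illustrate (the spec and implementation names here are invented for the example): many distinct programs can satisfy the same behavioral spec, and a test suite accepts all of them without caring which one a human, an LLM, or a compiler happened to emit.

```python
def spec_tests(sort_fn):
    """Behavioral spec: any implementation passing these is acceptable."""
    assert sort_fn([]) == []
    assert sort_fn([3, 1, 2]) == [1, 2, 3]
    assert sort_fn([1, 1, 0]) == [0, 1, 1]

# Two very different implementations of the same spec...
def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# ...and both are equally "correct" as far as the spec is concerned.
for impl in (insertion_sort, merge_sort):
    spec_tests(impl)
print("both pass")
```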

> That you would suggest we will reach a level where an English language prompt will give us deterministic output is just evidence you've drank the kool-aid.

No, it's evidence that you're arguing with a point that wasn't made at all, or that was made by somebody else.


You're on the wrong axis. You have to be deterministic about following the spec, or it's a BUG in the compiler. Whether or not you actually have the exact same instructions, a compiler will always do what the code says or it's bugged.

LLMs do not and cannot follow the spec of English reliably, because English is open to interpretation, and that's a feature. It makes LLMs good at some tasks, but terrible for what you're suggesting. And it's weird because you have to ignore the good things about LLMs to believe what you wrote.

> There are an infinite number of ways to write a program that behaves according to a given spec

You're arguing for more abstractions on top of an already leaky abstraction. English is not an appropriate spec. You can write 50 pages of what an app should do and somebody will get it wrong. It's good for ballparking what an app should do, and LLMs can make that part faster, but it's not good for reliably plugging into your business. We don't write vars, loops, and ifs for no reason. We do it because, at the end of the day, an English spec is meaningless until someone actually encodes it into rules.

The idea that this will be AI, and that we will enjoy the same reliability we get with compilers, is absurd. It's also not even a conversation worth having while LLMs hallucinate basic Linux commands.


People are betting trillions that you're the one who's "on the wrong axis." Seems that if you're that confident, there's money to be made on the other side of the market, right? Got any tips?

Essentially all of the drawbacks to LLMs you're mentioning are either already obsolete or almost so, or are solvable by the usual philosopher's stone in engineering: negative feedback. In this case, feedback from carefully-structured tests. Safe to say that we'll spend more time writing tests and less time writing original code going forward.
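A hedged sketch of that feedback loop (the "LLM" here is just a stub that sometimes returns a buggy candidate; all names are invented): generate a candidate, run the tests, and retry until the suite is green.

```python
import random

def generate_candidate(prompt, rng):
    """Stand-in for an LLM call: returns a candidate implementation, sometimes buggy."""
    buggy = lambda a, b: a - b
    correct = lambda a, b: a + b
    return rng.choice([buggy, correct])

def passes_spec(fn):
    """The carefully-structured tests acting as negative feedback."""
    return fn(2, 3) == 5 and fn(-1, 1) == 0

def generate_until_green(prompt, max_tries=20, seed=0):
    rng = random.Random(seed)
    for attempt in range(1, max_tries + 1):
        fn = generate_candidate(prompt, rng)
        if passes_spec(fn):
            return fn, attempt
    raise RuntimeError("no passing candidate within budget")

fn, tries = generate_until_green("write add(a, b)")
print(fn(10, 32), "after", tries, "attempt(s)")
```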


> People are betting trillions that you're the one who's "on the wrong axis."

People are betting trillions of dollars that AI agents will do a lot of useful economic work in 10 years. But if you take the best LLMs in the world, and ask them to make a working operating system, C compiler or web browser, they fail spectacularly.

The insane investment in AI isn't because today's agents can reliably write software better than senior developers. The investment is a bet that they'll be able to reliably solve some set of useful problems tomorrow. We don't know which problems they'll be able to reliably solve, or when. They're already doing some useful economic work, and AI agents will probably keep getting smarter over time. That's all we know.

Maybe in a few years LLMs will be reliable enough to do what you're proposing. But neither I - nor most people in this thread - think they're there yet. If you think we're wrong, prove us wrong with code. Get ChatGPT - or whichever model you like - to actually do what you're suggesting. Nobody is stopping you.


> Get ChatGPT - or whichever model you like - to actually do what you're suggesting. Nobody is stopping you.

I do, all the time.

> But if you take the best LLMs in the world, and ask them to make a working operating system, C compiler or web browser, they fail spectacularly.

Like almost any powerful tool, there are a few good ways to use LLM technology and countless bad ways. What kind of moron would expect "Write an operating system" or "Write a compiler" or "Write a web browser" to yield anything but plagiarized garbage? A high-quality program starts with a high-quality specification, same as always. Or at least with carefully-considered intent.

The difference is, given a sufficiently high-quality specification, an LLM can handle the specification->source step, just as a compiler or assembler relieves you of having to micromanage the source->object code step.

IMHO, the way it will shake out is that LLMs as we know them today will be only components, perhaps relatively small ones, of larger systems that translate human intent to machine function. What we call "programming" today is only one implementation of a larger abstraction.

