Hacker News | Closi's comments

The counter, however, is that lots of schools are on Microsoft 365, which doesn't work so well on a Chromebook but works great on a Mac.

It's quite common for schools to issue Windows laptops to staff (who use MS 365) and Chromebooks to students (who use Google Classroom). The Windows laptops also have no problem with Google Classroom, of course.

I think we can safely assume that OP was picking a deliberately ridiculous hypothetical to make the point that it's possible for something to be both deadly and transmissible. In nature, baculoviruses in caterpillars have a similar mechanism (they encourage their host to eat a lot, then climb to the top of a plant, so that when it turns to ooze it infects others), as does cordyceps, although neither is as highly transmissible as the hypothetical exploding virus.

But the Black Death combined high contagion with high mortality: an actual example showing the two aren't mutually exclusive.


Oh, I would never say biological weapons are harmless; it was the claim that they could wipe out humanity that I disputed.

What? That's your second strawman in two comments.

Nobody said you claimed they were harmless. People are taking issue with your assertion that biological agents can be either contagious or lethal (not both), and with you discounting the risk on that basis. This implied tradeoff between contagiousness and lethality simply is not enforced by anything in nature.

The natural emergence of a pathogen that's both highly contagious and highly lethal would be a much rarer event than the natural emergence of one that's either contagious or lethal, but we're talking about engineered pathogens. There is no reason to think that pathogens cannot be deliberately created that are both of those things.


None of you have seen ‘The Beauty’, I’m guessing.

No, but I have learned that sometimes there is a difference between fiction and reality.

Bet you’re fun at parties.

I do understand your sentiment. But also, this isn't a party.

Better models already exist, this is just proving you can dramatically increase inference speeds / reduce inference costs.

It isn't about model capability - it's about inference hardware. Same smarts, faster.


Because humans always say 'bread' if you ask them what you put in a toaster.

And humans will always deduce that you should switch doors if you are in a hypothetical gameshow and they show you a horse behind one of the doors.

(All I mean is - an example of an LLM answering illogically is not proof that LLMs can't really think logically, as you can equally find examples of humans answering illogically and also find examples of novel questions that LLMs get right)


Humans aren't immune to getting questions like this wrong either, so I don't think it changes much in terms of the ability of AI to replace jobs.

I've seen senior software engineers get tricked by 'if YES spells yes, what does EYES spell?', or 'Say silk three times; what do cows drink?', or 'What do you put in a toaster?'.

Even when there's no trick, lots of people get the 'A bat and a ball cost £1.10 in total. The bat costs £1 more than the ball. How much does the ball cost?' question wrong, or '5 machines take 5 minutes to make 5 widgets. How long do 100 machines take to make 100 widgets?', etc. There are obviously more complex variants of all of these with even lower success rates for humans.
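For reference, the non-intuitive answers to both questions can be checked with a quick sketch (the variable names here are just for illustration):

```python
from fractions import Fraction

# Bat and ball: bat + ball = 1.10 and bat = ball + 1.00,
# so 2 * ball + 1.00 = 1.10, giving ball = 0.05 (not 0.10).
total = Fraction(110, 100)      # £1.10
difference = Fraction(1)        # bat costs £1 more
ball = (total - difference) / 2
bat = ball + difference
print(f"ball costs £{float(ball):.2f}")  # £0.05

# Widgets: each machine makes 1 widget per 5 minutes,
# so 100 machines make 100 widgets in the same 5 minutes (not 100).
minutes_per_widget_per_machine = 5
machines, widgets = 100, 100
time_needed = minutes_per_widget_per_machine * widgets / machines
print(f"{time_needed:.0f} minutes")  # 5
```

The intuitive-but-wrong answers (£0.10 and 100 minutes) come from pattern-matching the surface numbers rather than setting up the simple equation.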

In addition, being PhD-level in maths as a human doesn't make you immune to the toaster/toast question (assuming you haven't heard it before).

So if we assume humans are generally intelligent and can be a senior software engineer, getting this sort of question confidently wrong isn't incompatible with being a competent senior software engineer.


humans without credentials are bad at basic algebra in a word problem, ergo the large language model must be substantially equivalent to a human without a credential

thanks but no thanks

i am often glad my field of endeavour does not require special professional credentials but the advent of "vibe coding" and, just, generally, unethical behavior industry-wide, makes me wonder whether it wouldn't be better to have professional education and licensing


Let's not forget that Einstein almost got a (reasonably simple) trick question wrong:

https://fs.blog/einstein-wertheimer-car-problem/

And that many mathematicians got Monty Hall wrong, despite it being intuitive to many kids.
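The Monty Hall result that tripped up those mathematicians is easy to verify empirically; a minimal simulation of the standard setup (three doors, host always reveals a goat) might look like:

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Estimate the win rate for the stay or switch strategy."""
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # Host opens a door that is neither the contestant's pick nor the car.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            # Exactly one door is neither picked nor opened; switch to it.
            pick = next(d for d in doors if d != pick and d != opened)
        wins += pick == car
    return wins / trials

print(monty_hall(switch=False))  # ~1/3
print(monty_hall(switch=True))   # ~2/3
```

Staying wins only when the first pick was the car (1/3 of the time); switching wins in every other case, hence ~2/3.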

And being at the top of your field (PhD or not) does not make you immune to falling for YES / EYES.

> humans without credentials are bad at basic algebra in a word problem, ergo the large language model must be substantially equivalent to a human without a credential

I'm not saying this - I'm saying the claim 'AIs get this question wrong, ergo they cannot be senior software engineers' is wrong when senior software engineers will get analogous questions wrong. Apply the same bar to humans and you get 'senior software engineers get this question wrong, so they can't be senior software engineers', which is obviously wrong.


I suspect they aren’t doing this (just) to avoid fees - it’s more about national security in a world where the US might stop being a reliable ally, and in a world where the US has used withdrawing Visa / Mastercard as a strategy to weaken enemy economies.


I think the recent story of the International Criminal Court judge being forbidden to use Visa and Mastercard, making his life somewhat more challenging, did make some politicians aware of the risks.

https://en.wikipedia.org/wiki/Nicolas_Guillou


> The rest of the world is free riding.

The rest of the world isn't free riding - the USA has just set up a market where consumers have very little bargaining power because of how the US medical market and insurance work.

Novo and Eli are still making plenty of money in Europe, where these drugs cost a fraction of the price and where there aren't other significant suppliers of GLP-1s, despite what's being implied.


No, they're free-riding. If drug companies can't charge higher prices in the US, they will do less drug development. Everyone involved in the business/investing side of pharma knows this; it's not even an argument.


I think we have a different definition of free riding.

If you and I both buy the same car but I'm better at negotiating than you and get a lower price, I'm not free riding because you 'funded the design of the car with the extra money you paid'; you are just bad at negotiation.


In this metaphor, the car manufacturer only invested R&D in new models because it expected to be able to recoup that R&D spending from my higher purchase price. If I start paying the same low price for cars as you do, the manufacturer stops investing as much in R&D for new models. Your access to new models was free riding on my higher prices.

When we all negotiate lower prices, we get fewer new drugs. Maybe that's better than the status quo (for Americans). Maybe it isn't.


The free riding is referring to outcomes rather than causal links.


> It's not a programming language if you can't read someone else's code, figure out what it does, figure out what they meant, and debug the difference between those things.

Well I think the article would say that you can diff the documentation, and it's the documentation that is feeding the AI in this new paradigm (which isn't direct prompting).

If the definition of programming is "a process to create sets of instructions that tell a computer how to perform specific tasks" there is nothing in there that requires it to be deterministic at the definition level.


> If the definition of programming is "a process to create sets of instructions that tell a computer how to perform specific tasks" there is nothing in there that requires it to be deterministic at the definition level.

The whole goal of getting a computer to do a task is that it can do it many times, reliably - especially in business, infrastructure, and manufacturing.

Once you turn specifications and requirements into code, it's formalized and the behavior is fixed. Only then is it possible to evaluate it - not against the specifications, but against another set of code that is known to be good (or is easier to verify).

The specification is just a description of an idea. It is a map. But the code is the territory. And I’ve never seen anyone farm or mine from a map.


It's about consistency - you want to build an app that looks and functions the same on all platforms as much as possible. Whether you are hand-coding or vibe-coding three entirely separate software stacks, getting everything consistent is going to be a challenge, and subtle inconsistencies will sneak in.

It comes back to fundamental programming guidelines like DRY (Don't Repeat Yourself) - if you have three separate implementations of everything in different languages, changes become harder and you move slower. These golden guidelines still stand in a vibe-code world.


> Isn’t the more fundamental question why Europe has not been as successful as the US or China in building a native tech industry despite having a huge market? What are the barriers to creating startups and how can you lower them and preserve the enviable European social model? Solve that and you’ll solve the problem of a native cloud.

IMO here in the UK we are good at starting tech startups; we are just bad at not selling them to overseas investors early in their life, and our tax framework doesn't favour them growing in the UK.

In the UK, see Google DeepMind, ARM, Deliveroo... ElevenLabs being incorporated in the USA, Dyson moving to Singapore, etc. Even outside the tech space: Cadbury's, Sainsbury's, Jaguar Land Rover... If the UK had kept hold of everything it created, we would be great!

Even our infrastructure we sell to the French, Chinese, Germans, etc. for short-term gain, even though we are cutting off our nose to spite our face if we look ten years ahead.

