Scammers are using AI to clone the voices of children and grandchildren and make calls urgently asking for money to be sent. The same technique is being used to scam businesses (cloning the voice of the CEO or CFO and urgently requesting a transfer).
Sure, the AI isn't directly doing the scamming, but it's supercharging the ability to do so. You're making a "guns don't kill people, people do" argument here.
Exactly this. These systems were supposedly built by some of the smartest scientific and engineering minds on the planet, yet they somehow failed (or chose not) to think about second-order effects and the steady-state outcomes their systems would produce. That's engineering 101 right there.
That's a small part of why people have become more cynical about tech over the decades. At least with the internet there were large efforts to try to nail down security in the early '00s. Imagine if we'd instead let it devolve into a moderator-less hellscape where every other media post is some goatse-style jump scare.
That's what it feels like with AI. But perhaps worse, since companies are lobbying to keep the chaos rather than establishing a board of standards and etiquette.
This phrase almost always seems to be invoked to attribute purpose (and more specifically, intent and blame) to something based on outcomes, when it should instead be a way to stop thinking in those terms in the first place.
> Just because you can cook with a hammer doesn't make it its purpose.
If you survey all the people who own a hammer and ask what they use it for, cooking is not going to make the list of top 10 activities.
If you look around at what LLMs are being used for, the largest spaces where they have been successfully deployed are astroturfing, scamming, and helping people break from reality by sycophantically echoing their users and encouraging psychosis.
Email, by volume of messages attempted, is owned by spammers 10- to 100-fold over legitimate email. You typically don't see this because of a massive effort by any number of companies to ensure that spam dies before it reaches your mailbox.
To go back one step farther, porn was one of the first successful businesses on the internet; that was more than enough motivation for our more conservative members of Congress to try to ban the internet in the first place.
Is it possible that these are in the top 10, but not the top 5? I'm pretty sure programming, email/meeting summaries, cheating on homework, random Q&A, and maybe roleplay/chat are the most popular uses.
The number of programmers in the world is vastly outnumbered by the people that do not program. Email / meeting summaries: maybe. Cheating on homework: maybe not your best example.
This is satire. Its purpose is to use exaggeration to provide comedy while also drawing attention to issues.
Obviously the intended use and design of AI isn't to scam the elderly, but it's extremely efficient at doing it, and has no guard rails to help prevent it.
Why is anyone allowed to make a digital copy of me, without my permission, and then use it to call my relatives? It should be illegal to use it, and it should be illegal to even generate it. Sure, it's already illegal to defraud people, but that's simply not enough at this point. The AI companies producing these models should be held liable for this form of fraud, as they're not providing any form of protection.
You're exactly the person that this article is satirizing.
No one - neither the author of the article nor anyone reading - believes that Sam Altman sat down at his desk one fine day in 2015 and said to himself, “Boy, it sure would be nice if there were a better way to scam the elderly…”
And no one believes that Sam Altman thinks of much more than adding to his own wealth and power. His first idea was a failing location data-harvesting app that got bought. Others have included biometric data-harvesting with a crypto spin, and this. If there's a throughline beyond manipulative scamming, I don't see it.
There are legitimate applications - fixing a tiny mistake in the dialogue in a movie in the edit suite, for instance.
Do these legitimate applications justify making these tools available to every scammer, domestic abuser, child porn consumer, and sundry other categories of criminal? Almost certainly not.
Fair, but it’s an exaggerated statement that’s supposed to clue us into the tone of the piece with a chuckle. Maybe even a snicker or giggle! It’s not worth dissecting for accuracy.
Isn't that the vast majority of products? By making things easier, they change the scale at which things are accomplished. Farming wasn't impossible before the tractor, either.
People seemingly have some very odd views on products when it comes to AI.
It's actually a fair question. There are software projects I wouldn't have taken on without an LLM. Not because I couldn't make it. But because of the time needed to create it.
I could have taken the time to do the math on my Wawa points rewards structure and compared it against my car's fuel tank to discover that I should strictly buy sandwiches and never gas.
People have been making nude celebrity photos for decades now with just Photoshop.
Some activities have gotten a speedup. But so far it was all possible before; it just may not have been feasible.
This conversation is naive and simplifies technologies into “does it achieve something you otherwise couldn’t”.
The answer is that ChatGPT allows you to do things more efficiently than before. Efficiency doesn't sound sexy, but it's what adds up to higher prosperity.
Arguments like this can be used against the internet. What does it allow you to do now that you couldn't do before?

The answer might be, "Oh, I don't know, it allows me to search and index information, talk to friends."
It doesn’t sound that sexy. You can still visit a library. You can still phone your friends. But the ease of doing so adds up and creates a whole ecosystem that enables so many things.
No. I'm just stating that a huge portion of these comments carry their own emotional investment and are confusing OUGHT with IS. On top of that, their arguments aren't particularly sound, and if they were applied to any other technology that we worship here in the church of HN, they would seem like an advanced form of hypocrisy.
...generate piles of low-quality content for almost free.

AI is fascinating technology with undoubtedly fantastic applications in the future, but LLMs mostly seem to be doing two things: providing a small speedup for high-quality work, and providing a massive speedup for low-quality work.
I don't think it's comparable to the plow or the phone in its impact on society, unless that impact will be drowning us in slop.
There's a particular problem with your line of thinking, and AI will never be able to solve it. In fact, it's not a solved human problem either.

And that is that slop work is always easier and cheaper than doing something right. We can make perfectly good products as it is, yet we find Shein and Temu filled with crap. That's not related to AI. Humans drown themselves in trash whenever we gain the technological capability to do so.
To put this another way: you cannot get a 10x speedup in high-quality work without also getting a 1000x speedup in low-quality work. We'd pretty much have to kill any further technological advancement if that's a showstopper for you.