
I'm not too worried about displacement of jobs as I think that's actually somewhat overhyped as an outcome. The actual near-term problems I see are:

(a) Perfect emulation of human behaviour makes spam and fraud detection almost impossible. An LLM can now have an intelligent, reasoned conversation with a target over weeks, perfectly emulating an entity the target knows (their bank, a loved one, etc.).

(b) A direct attack on authenticity: we aren't far from even video being faked in real time, such that it's no longer sufficient to get a person on a Zoom call to establish that they're real.

(c) Entrenching of ultra-subtle, complex biases in automated processes. I expect companies to rapidly deploy LLMs to automate aspects of information processing, and the paradox is that the better a model gets at not showing explicit biases, the more insidious the residual will be. For example, it's not going to automatically reject all Black applicants for loans, but it may well implement some much more subtle bias that is very hard to prove.

(d) Flooding of the internet with garbage. This might be the worst one in the end. I feel like fairly quickly we're going to see this evolve into requiring real identity for actual humans and the ability to digitally sign content in ways bots can't replicate. That will then be the real problem outcome, because the downstream effects will enable all kinds of censorship and control that we have thus far resisted / avoided on the internet.



Does this mean we will finally have key signing parties? I'm gonna make so many friends.


Jokes/nostalgia aside, you don't really even need fancy encryption mechanisms. All that's important is that you only use the internet to interact with trusted parties, vs. treating it as a public square where you can ~generally tell whether someone is a real person. A domain name, an email address, a social media username, etc. are all as trustworthy as they are right now, as long as you've verified the person is real through some non-digital channel first (or someone you trust has).

I think the public social internet will die (for anything other than entertainment), but the direct-communication internet will look largely the same as it does today


We already have the digital channels in place. In Germany, the ID cards can do NFC and are part of a PKI. You can use them with your phone to prove that you are not underage, for example, and the other party will only get a signed boolean value. It's actually done quite well considering our state of digitalization.

Of course, that comes with a host of different issues in the context of our discussion, like destroying pseudonymity.
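
For the curious, the shape of that exchange is simple: the relying party gets a signed boolean and nothing else. Here's a minimal sketch in Python with the cryptography package; the attribute format and nonce are made up for illustration, and the real eID protocol is considerably more involved:

    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Issuer side (the eID infrastructure): a key whose public half
    # verifiers already trust via the PKI.
    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()

    # The card answers "is the holder an adult?" with a signed boolean only.
    # The nonce (made up here) prevents replaying an old assertion.
    assertion = json.dumps({"over_18": True, "nonce": "fresh-random-value"}).encode()
    signature = issuer_key.sign(assertion)

    # Relying party: verification raises InvalidSignature on tampering,
    # and learns nothing about the holder beyond the boolean.
    try:
        issuer_pub.verify(signature, assertion)
        print(json.loads(assertion)["over_18"])  # True
    except InvalidSignature:
        print("forged assertion")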


Right, but that's what I'm saying. You don't need any of that; the only thing you need to verify out-of-channel (i.e. in-person) is that "a real human exists and they've told me that x@y.com is their email address". From there on, regular internet auth/encryption is sufficient to ensure you continue interacting with that real human over email
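
Roughly, "verify once out of channel, then let the usual crypto carry the trust" could look like the sketch below (Python with the cryptography package; the helper names and the pinning scheme are hypothetical, not any particular product's API):

    import hashlib
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    pinned = {}  # address -> key fingerprint, established in person

    def fingerprint(pub: Ed25519PublicKey) -> str:
        raw = pub.public_bytes(serialization.Encoding.Raw,
                               serialization.PublicFormat.Raw)
        return hashlib.sha256(raw).hexdigest()

    def pin_in_person(address: str, pub: Ed25519PublicKey) -> None:
        # Done once, offline: "yes, x@y.com is me, and this is my key."
        pinned[address] = fingerprint(pub)

    def is_authentic(address: str, pub: Ed25519PublicKey,
                     message: bytes, sig: bytes) -> bool:
        if pinned.get(address) != fingerprint(pub):
            return False  # key doesn't belong to the human we actually met
        try:
            pub.verify(sig, message)
            return True
        except InvalidSignature:
            return False

After the one in-person pinning step, everything else is the ordinary signing and verifying we already do today.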


Gotcha. I think I would kind of welcome it if AI caused people to focus more on the offline world. Perhaps real life meetings will flourish because of this development. Perhaps artists will make art again for intrinsic reasons, because commercializing art will be even harder than it is now.


> A domain name, an email address, a social media username, etc. are all as trustworthy as they are right now, as long as you've verified the person is real through some non-digital channel first (or someone you trust has)

Until they've been compromised. A bot could train itself on their messages and photos and emulate them.


But this is not a fundamentally new threat. A bad actor today could compromise any of these and impersonate the person they took it from.


But not:

1) Automated at scale

2) To such a convincing degree

3) In real time, including audio and video


> as long as you've verified the person is real through some non-digital channel first (or someone you trust has)

At some point though, we're going to want to see how far we can take transitive trust. I'm not sure what the case will be, but sometimes you wanna say "who's with me?" and hear more than your handful of meatspace friends.


Yeah. I can see web-of-trust mechanisms similar to the ones Google uses to try to determine quality sites, or how Facebook used to work for friends, friends-of-friends, etc. There's some territory to explore here. But for the core IRL connections, online communication should still mostly work as-is.

Interesting idea: a social network that attempts to verify that two people actually met in person as real humans, to increase the trust you have in friends' extended trust networks.
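
A toy version of that transitive-trust idea, where an edge exists only if two people verified each other face to face and trust decays per hop (all names and the decay factor are made up):

    from collections import deque

    # Edges only exist where two people verified each other in person.
    met_in_person = {
        "alice": {"bob", "carol"},
        "bob": {"alice", "dave"},
        "carol": {"alice"},
        "dave": {"bob"},
    }

    def trust(me: str, target: str, decay: float = 0.5, max_hops: int = 3) -> float:
        # Breadth-first search, so the shortest verified path wins.
        queue, seen = deque([(me, 1.0, 0)]), {me}
        while queue:
            person, score, hops = queue.popleft()
            if person == target:
                return score
            if hops < max_hops:
                for friend in met_in_person.get(person, ()):
                    if friend not in seen:
                        seen.add(friend)
                        queue.append((friend, score * decay, hops + 1))
        return 0.0

    print(trust("alice", "dave"))  # 0.25: two hops, via bob

The decay factor is the interesting knob: how much less do you trust a friend-of-a-friend than someone you've met yourself?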


Speaking of which, why don't Threema or Signal use a web of trust? They already have the key-verification feature. Is it due to privacy concerns?


Signal doesn't store your social graph on the server; it's only device-side (for privacy reasons). So a query for transitive trust would have to be implemented in a p2p way, and I could see that becoming problematically combinatoric.
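
Back-of-the-envelope, with assumed numbers, the fan-out looks like this: with no server-side graph, every hop of the query has to ask every contact's contacts.

    # Rough fan-out of a peer-to-peer transitive-trust query: each hop
    # multiplies by the average contact count (numbers are assumptions).
    avg_contacts, depth = 100, 3
    queries = sum(avg_contacts ** hop for hop in range(1, depth + 1))
    print(queries)  # 1010100 devices pinged for a single 3-hop check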


Yes. You bring your dongle and put it in everyone’s laptops. Others put their dongles in yours.

On a more serious note, I think even the author of PGP said that it was too complicated to use. It's unfortunate, because we need e2ee auth & encryption now more than at any time before.


Phil Zimmermann, inventor of PGP, couldn't figure out how to make it work on his Mac.

https://www.vice.com/en/article/vvbw9a/even-the-inventor-of-...


Ever played Wingspan? I have been, and I'm soaking up all this information about birds that I never thought I'd know, and having fun too.

I'd like to make a board game that teaches web-of-trust hygiene in the same way. Then there can be an app that feels like playing the game, but really it's just a wrapper around PGP.


Actually, it's a pen-and-paper kind of shindig:

https://security.stackexchange.com/questions/126533/why-shou...


I think if you invited a random to a key signing party they might think it’s something else :)


Regarding (d): the whole idea from Cyberpunk that there was an "old internet" that was taken over by rogue AIs who now control it, with NetSec keeping them at bay and preventing them from spilling over into the new internet, is looking increasingly plausible.

I can definitely see a possibility where the current internet as we know it just gets flooded with AI crap, and humans have to build entirely new technologies to replace the old World Wide Web, with real-world identity checks to enable access, and so on. Oh, and probably some kind of punitive system for people who abuse their access (Chinese social-credit style).


Combine that with (b) above, and what we get is that no important decision will be made without an in-person meeting.

So, rather than technology speeding up everything, we will slow down everything to pre-telephone days.

Bob, the CFO, will tell Alice, the CEO, that sure, I can make that transfer of $173 million to [new_supplier_bank_acct], do you want me to fly to your office for us to complete the hardcopy authorization papers, or will you be coming to mine?

All that stuff that was accelerated by telegraph, telephone, fax, mobile phones, email, video calls . . . poof! Pretty much entirely untrustworthy for anything significant. The only way around it could be a quantum-resistant and very trustworthy encryption system...

I'm not sure the net result of this technology actually makes life better. Seems to empower the criminals more than the regular people.

Deploy the technology against itself, train the cops to use it? My guess is that they'll always be several steps behind.

Even if we try to do the right thing and kill it, it's already out of the bag - authoritarian states like China and Russia will certainly attempt to deploy it to their advantage.

The only way out now is to take maximum advantage of it.


No. Encryption has nothing to do with the problem. Ignoring quantum resistance (which is still probably not needed for any of this), you just need something like PGP with a key stored on a CCID card (with all other copies of the key stored on an airgapped, secured machine, or nonexistent). Theft of the card is equivalent to someone stealing an employee's equipment. Theft of the card PIN is equivalent to someone phishing the PIN from the employee. To that extent, post perfect-AI-imitation, you have the same guarantees as you have today. But this IS a massive step back in convenience. And if the card is stolen, the employee can no longer call in to prove their ID and have the card revoked, because in that case attackers would just spam companies with legitimate-looking revocation requests.

But I guess what this COULD cause is a black market for stolen CCID cards and PINs, and therefore crime to fuel the market.


OK, though last time I looked PGP stood for the "Pretty Good Privacy" encryption system, and everything you wrote here is about managing encryption keys, so I'm not sure how "encryption has nothing to do with the problem".

Maybe you mean that it could be solved without quantum-resistant encryption systems?

I suppose the PGP keys on CCID chip cards could work. But how are you going to enforce that no other copies exist outside secured & airgapped machines? Or the other security measures?

You're right, this will be at least a source of huge inconvenience compared to the current status, and a source of crime. Probably more violent crime, too. Instead of ransomware, it'll be kidnapping the CFO's family and forcing them to use their card to make the transfers.


Yes, my point was that encryption is not an obstacle to having assurance that the person you are communicating with is who you expect them to be.

Keys can be generated directly on the secure element of the CCID card, which means there is no other copy. Alternatively, employees physically go to an office to collect cards prepared by the security team, which keeps key backups in a secure facility. Enforcement comes from the nature of CCID cards: they never directly expose the keys.
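
For a concrete picture, on-card generation is a handful of PKCS#11 calls. A minimal sketch using the python-pkcs11 package; the module path, token label, PIN, and message here are all assumptions about a local setup, not a recommendation:

    import pkcs11
    from pkcs11 import KeyType, Mechanism

    # Load the host's PKCS#11 module and find the employee's card.
    lib = pkcs11.lib("/usr/lib/opensc-pkcs11.so")
    token = lib.get_token(token_label="Employee Card")

    with token.open(rw=True, user_pin="123456") as session:
        # The key pair is generated inside the secure element; `priv` is
        # a handle to an on-card object, never the raw key bytes.
        pub, priv = session.generate_keypair(KeyType.RSA, 2048, store=True)
        sig = priv.sign(b"authorize transfer #42",
                        mechanism=Mechanism.SHA256_RSA_PKCS)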


> humans will have to build entirely new technologies to replace the old World Wide Web, with real-world identity checks to enable access to it, and so on

"Entirely new technologies"? In plenty of countries the exact world wide web you're using right now already works that way. China and South Korea, to name two.


In fairness, (d) is sort of already true. Google results are atrocious.


Yep, the garbage flood started a long time ago.


It's a self-fulfilling prophecy: LLMs flood the internet with nonsense, and then we need ever more advanced LLMs to distill the nonsense into something usable. So far the nonsense has grown faster than search engines have been able to adapt to it, but that might also just be because Google stopped improving its search engine, or because its search is broken by Google's misaligned incentives.


Yeah. A third of my search results are like "is good Idea. <ProductName> limited time Offer" from <surname>-<localtown>-<industry>.com.<cctld>. Before that it was "<search term> price in India".


Garbage in, garbage out. The web became atrocious over time.


There's another one, similar to (a), but perpetrated by the marketing-industrial complex.

What chance do you have when Facebook or Google decide to dedicate a GPT-4-level LLM to creating AI-generated posts, articles, endorsements, social media activity, and reviews targeted 100% AT YOU? They're going to feed it 15 years of your emails, chats, and browser activity and then tell it to brainwash the fuck out of you into buying the next Nissan.

Humans are no match for this kind of hyper individualized marketing and it's coming RIGHT AT YOU.


Agree. Someone I know who is involved in the field views LLMs precisely this way: they are a direct attack on human psychology, because their primary training criterion is to make up sets of words that humans believe sound plausible. Not truth, not facts, and certainly not our interests. Just "what would a human be unable to reject as implausible". When you view it this way, they are almost like human brain viruses - a foreign element specifically designed to plug into our brains in an undetectable way and then influence us. And this virus is something that nothing in human evolution has prepared us for. Deployed at scale for any kind of influence operation (advertising or otherwise) it is kind of terrifying to think about.


> Deployed at scale for any kind of influence operation (advertising or otherwise) it is kind of terrifying to think about.

Ironically, this kind of influence operation may be the only realistic way to prevent Earth from becoming uninhabitable for humans. This is what the likes of Extinction Rebellion should be spending their time on, not blocking roads.


All those years spent not having a Facebook account and hosting my own mail infrastructure have finally paid off.


Wait till the end war comes and they need more cannon fodder.


We already have all the required cryptographic primitives (which you've already alluded to in (d)) to completely address (a), (b), and (d) if desired. Full enforcement, however, would destroy the internet as we know it, and allow corps and governments to completely control our electronic lives.

We already seem to be going down this path with the proliferation of remote attestation schemes in mobile devices, and increasingly in general computing as well.


> We already have all the required cryptographic primitives (which you've already alluded to in (d)) to completely address (a), (b), and (d) if desired.

Do we? My mother can barely keep the Russian DNS servers out of her home router. You want to entrust the public with individual cryptographic keys?


You can use cryptography to solve problems without asking individual people to manage their own keys.


Imagine having an IQ below X (an ever-growing number) and being told: from now on, the voice assistant makes all of your decisions. Then you carry the thing around and it talks with other voice assistants, asking for advice, with the conversation gradually growing into the familiar pattern of management designing things to make everything as easy as possible for itself.


What are security issues relating to Russian DNS servers?


> I'm not too worried about displacement of jobs as I think that's actually somewhat overhyped as an outcome.

I am. The people in charge of hiring and firing decisions are stupid, and frighten easily. As can be seen in the past year.


Perhaps they are stupid enough to be replaced.


Agree on (a)-(d). But thinking about how they train LLMs: what happens when LLMs start consuming a lot of their own content? Have you ever been alone for a really long time? Without outside input you kind of go crazy. I suspect that a cancer-like problem for AI will be how it handles not reinforcing on its own data.

I suspect bias, your option (c), will be the trickiest problem.


AI is just becoming multimodal now, so while feeding it the output of Stable Diffusion may not be a good idea, there are still massive amounts of untapped data out there to train AI on and give it grounding.


(c) I always find interesting, because worrying about it coming from AI implies that we don't think humans operate that way, or that it's somehow okay / acceptable / whatever that humans have subtle, hard-to-prove biases, but that if those same biases show up in a machine (one we could, in theory, dissect and analyze to identify and prove those biases) it's worse.


> I always find interesting, because worrying about it coming from AI implies that we don't think humans operate that way,

No, it says "entrenching" because we know humans operate that way. But AI systems are presented as objective and as removing bias, despite the fact that they demonstrably reproduce bias; and because they replace systems of people, where someone could push back, with a single opaque automaton that will not, they solidify the biases they incorporate.


That's an education problem, because they're emphatically not objective; they're subjectivity analyzers. They're consistent in their application of bias but it's still bias.


You are designing a promotional flyer, and you have a white guy behind a desk on the front page. It's a big company, so someone has found it their role to tell you there needs to be a Black person in the image as well, and an Asian person, and a woman. You end up with three men and three women, one of each skin color, and it looks completely ridiculous and staged, so you randomize the set and end up with three white males. Suddenly you realize there is no way back from overthinking things.


When a human does that: you can ask for an explanation, you can fire them.

When a machine does that: you "tune" it, you make the bias less obvious, and it faces no consequences.


This seems somewhat isomorphic to me. How is firing a human different from tuning the AI away from its previous biases?


If a machine with one set of biases displaces thousands or millions of free thinking individuals with distinct biases then that bias proliferates.


I think the issue is more that we at least recognize humans are fallible. Less so with “algorithms”.


Worrying about (c) is kind of ridiculous given how car insurance and credit checks have worked for decades.

Why do car insurance companies need to know my job again?


At least in the UK, it is perfectly legal to take your job title, come up with some abstract concept of your job, find every job title in the standardized list which falls within that abstract category, and choose the one which makes your insurance premium the lowest. It's an annual exercise for me.


> For example, it's not going to automatically reject all Black applicants for loans, but it may well implement some much more subtle bias that is very hard to prove.

This sounds to me like a perfect explanation of the existing situation


It might be that humans will finally need to face the human problems.


I'm worried about the displacement of jobs. Why would any company want to hire someone at $30 per hour when it could ask a bot?


They wouldn't. Though despite ChatGPT's very impressive skills (I haven't tried GPT-4 yet), it's still a very long way from actually being able to replace most skilled jobs.


But do you agree that the majority of software jobs are replaceable in the foreseeable future?


I agree that I can see it from here, but remember: when nukes were invented, people said "that's it, they're going to make a bomb big enough to knock everything out in one go, and anyone could make one, and that'll be it".

As far as I can tell it hasn't happened yet, so keep an eye on things. I used to be disappointed by slow AI progress; now, not so much.


The "catastrophe" scenario only seems likely if AIs are somehow claiming vast resources for themselves, such as all the world's electricity production. Otherwise, there's nothing to stop humans having the same level of production of goods and services that we have currently, and perhaps it could even be achieved with less effort if the AIs can be assigned some of the work.


But what prevents a human from digitally signing content generated by an AI?


Exactly. A sufficiently intelligent AI can easily make a human do its bidding through incentives, coercion, or emotional manipulation. Easy peasy. Didn't GPT-4 already do that to a TaskRabbit worker?


The flooding of the internet with garbage has already begun, if my search results are anything to go by. I have to go three pages deep before I get anything written by a human.


Agree with all of your points. Side note: funny that (a) used to be how we would test for AGI (the Turing test) and now it's just "a problem with AI".


The internet is already full of garbage. GPT-like models will accelerate the process, but honestly this is for the best. There are historical precedents for this situation: when garbage permeates a medium of communication, there's a flight to quality. We're already seeing this with the reemergence of paywalls on top-tier sites.


(c) Ah, the insidious bias. No one knows how it works, but the results are easily predictable.



