Hacker News | new | past | comments | ask | show | jobs | submit | LelouBil's comments

In France, for movies/music you get two warning letters, then a scarier one saying you could now be taken to court.

I haven't really heard of people getting fined for this, but the law exists.


I'm currently hesitant to use something like OpenClaw, however, because of prompt injection and the like. I would only allow it to send messages to me directly: no web queries, no email replies, etc.

Basically it would act as a kind of personal assistant, with a read-only view of my emails, direct messages, and so on, and the only communication channel would be towards me (enforced with things like API key permissions).

This should prevent any kind of leak due to prompt injection, right? Does anyone have an example of this kind of OpenClaw setup?
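The kind of gate described above could be sketched as a tool allowlist in the agent runtime. This is a minimal illustration, not a real OpenClaw configuration; the tool names (`read_email`, `message_owner`, etc.) are hypothetical:

```rust
use std::collections::HashSet;

/// A tool call requested by the model.
struct ToolCall {
    name: String,
}

/// The runtime only executes tools on the allowlist, so even a
/// prompt-injected model cannot reach any outbound channel that
/// was never exposed in the first place.
fn is_allowed(call: &ToolCall, allowlist: &HashSet<&str>) -> bool {
    allowlist.contains(call.name.as_str())
}

fn main() {
    // Read-only data sources, plus a single outbound channel: me.
    let allowlist = HashSet::from(["read_email", "read_messages", "message_owner"]);

    let ok = ToolCall { name: "message_owner".to_string() };
    let injected = ToolCall { name: "send_email".to_string() };

    assert!(is_allowed(&ok, &allowlist));
    assert!(!is_allowed(&injected, &allowlist)); // exfiltration path rejected
    println!("only the owner channel is reachable");
}
```

The point is that the restriction lives outside the model: the allowlist is enforced by the runtime, not by the prompt.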


> (...) and the only communication channel would be towards me (enforced with things like API key permissions).

> This should prevent any kind of leak due to prompt injection, right?

It might be harder than you think. Any conditional fetch of a URL or DNS query could reveal some information.


DNS queries are fine, and conditional URL fetches should be okay too, as long as they are not arbitrary.

I don't mind the agent searching my Gmail using keywords from some Discord private messages, for example, but I would mind it doing a web search, because it could leak anything through the search query URLs.
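One way to make fetches "not arbitrary" is to let the model pick *which* pre-templated URL to fetch, never the URL itself, so no injected data can ride along in a query string. A minimal sketch, with hypothetical action names and example URLs:

```rust
/// The model chooses an action; the URL is fixed by the host program.
/// Nothing the model says can end up inside the URL, so a fetch can
/// leak at most one bit per action (which template was chosen).
fn fetch_url_for(action: &str) -> Option<&'static str> {
    match action {
        "check_weather" => Some("https://example.com/weather"),
        "check_status" => Some("https://example.com/status"),
        // Anything else, including a model-composed URL that could
        // carry exfiltrated data, is rejected.
        _ => None,
    }
}

fn main() {
    assert_eq!(fetch_url_for("check_weather"), Some("https://example.com/weather"));
    assert_eq!(fetch_url_for("https://evil.example/?secret=hunter2"), None);
    println!("only fixed URLs are fetchable");
}
```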


I wrote this exact tool over the last weekend using the calendar, IMAP, MonarchMoney, and reminders APIs, but I can't share it because my company doesn't like its employees sharing even their personal work.

Prompt injection just seems unsolvable.

Is there any work toward preventing it 100% of the time? (I would assume LLM architectures would have to change.)


Sandboxing is great, and stricter authorization policies are great too, but with this kind of software my biggest fear (and the reason I am not trying them out now) is prompt injection.

It just seems unsolvable if you want the agent to do anything remotely useful.


Ultimately, a prompt injection attack is trying to get the agent to do something it wasn't intended to do. If you have the appropriate sandboxing and authorization in place, a compromised agent won't be able to actually execute the exploits.

In languages like Kotlin and Rust you can have type encapsulation like this that does not exist at runtime.
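In Rust, for example, this is the newtype pattern: the wrapper type (here a hypothetical `UserId`) exists only at compile time, and at runtime it is just the underlying integer.

```rust
/// Zero-cost encapsulation: `UserId` is type-checked at compile time,
/// but compiles down to a bare u64 with no runtime wrapper.
#[derive(Debug, Clone, Copy, PartialEq)]
struct UserId(u64);

fn greet(id: UserId) -> String {
    format!("hello, user {}", id.0)
}

fn main() {
    // Same size as the wrapped type: the encapsulation leaves no trace.
    assert_eq!(std::mem::size_of::<UserId>(), std::mem::size_of::<u64>());

    let id = UserId(42);
    // greet(42); // would not compile: the type distinction is enforced
    assert_eq!(greet(id), "hello, user 42");
    println!("ok");
}
```

Kotlin's `@JvmInline value class` gives a similar guarantee on the JVM: the wrapper is usually erased to its underlying value in compiled code.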

And there's also Tunic, which is both a Zelda-like action RPG and an information game!

So it's still very fun to replay with a randomizer, for example.


Tunic is probably my favorite game; it has multiple "Aha!" moments when all the hints and puzzles so far suddenly click in a different way.


The actual website listing all the tools in this office suite (in French):

https://lasuite.numerique.gouv.fr/#products


There is actually a "European company" structure.

https://europa.eu/youreurope/business/running-business/devel...

Most notably, Airbus is a "European company".


That's pretty much the same in France.


There's also the open-source Kvaesitso! [0]

[0] https://kvaesitso.mm20.de/

