Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

I'm not denying it's hard, I'm sure it is.

I think you nailed it with this, though:

>If your app has a prompt injection hole, the answer may turn out to be "your app is fundamentally insecure and cannot be built safely". Nobody wants to hear that, but it's true!

Either security needs to be figured out, or the thing shouldn't be built (in a production environment, at least).

There's just so many parallels between this topic and what we've collectively learned about user input over the last couple of decades that it is maddening to imagine a company simply slotting an LLM inbetween raw user input and production data and calling it a day.

I haven't had a chance to read through your post there, but I do appreciate you thinking about it and posting about it!



We're talking about the rising star, the golden goose, the all-fixing genius of innovation, LLMs. "Just don't use it" is not going to be acceptable to suits. And "it's not fixable" is actually 100% accurate. The best you can do is mitigate.

We're less than 2 years away from an LLM massively rocking our shit because a suit thought "we need the competitive advantage of sending money by chatting to a sexy sounding AI on the phone!".




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: