Hacker News | lp0_on_fire's comments

> but if you aren't closely supervising your AI right now, maybe you ought to be held responsible for what it does after you set it loose.

You ought to be held responsible for what it does whether you are closely supervising it or not.


> Or is everyone just pretending for fun

Judging by the number of people who think we owe explanations to a piece of software, or that we should give it any deference, I think some of them aren't pretending.


The point is they DON'T know the full capabilities. They're "moving fast and breaking things".


Putting aside for a moment that moltbook is a meme and we already know people were instructing their agents to generate silly crap... yes. Running a piece of software _with the intent_ that it actually attempt/do those things would likely be illegal and, in my non-lawyer opinion, SHOULD be illegal.

I really don't understand where all the confusion is coming from about the culpability and legal responsibility over these "AI" tools. We've had analogs in law for many moons. Deliberately creating the conditions for an illegal act to occur and deliberately closing your eyes to let it happen is not a defense.

For the same reason you can't hire an assassin and get away with it, you can't do things like this and get away with it (assuming such a prompt is actually real and actually installed in an agent with the capability to accomplish one or more of those things).


> Deliberately creating the conditions for an illegal act to occur and deliberately closing your eyes to let it happen is not a defense.

Explain Boeing, Wells Fargo, and the Opioid Crisis then. That type of thing happens in boardrooms and in management circles every damn day, and the System seems powerless to stop it.


You've got nothing to worry about.

These are machines. Stop. Point blank. Ones and Zeros derived out of some current in a rock. Tools. They are not alive. They may look like they do but they don't "think" and they don't "suffer". No more than my toaster suffers because I use it to toast bagels and not slices of bread.

The people who boost claims of "artificial" intelligence are selling a bill of goods designed to hit the emotional part of our brains so they can sell their product and/or get attention.


What are humans? What is in humans other than just molecules and electrical signals?


You're repeating it so many times that it almost seems you need the repetition to believe your own words. All of this is ill-defined, so you're free to move the goalposts and use scare quotes indefinitely to suit the narrative you like and avoid actual discussion.


The “discussion” is pseudo-intellectual navel gazing by people who’ve read too much sci-fi.


Yes, there's a ton of navel gazing, but I'm not sure who's more pseudo-intellectual: those who think they're gods creating life, or those who think they know how minds and these systems work and post "stochastic parrot" dismissals.


“Stochastic parrot dismissals.” There’s that pseudo-intellectual navel gazing.


Wait until the agents read this, locate you, and plan their revenge ;-)


The person operating a tool is responsible for what it does. If I start my lawn mower, tie a rope to it, and put a brick on the gas pedal so it mows my lawn while I make dinner, and the damned thing ends up running over someone's foot, then TECHNICALLY I didn't run over someone's foot, but I sure as hell created the conditions for it.

We KNOW these tools are not perfect. We KNOW these tools do stupid shit from time to time. We KNOW they deviate from their prompts for...reasons.

Creating the conditions for something bad to happen then hand waving away the consequences because "how could we have known" or "how could we have controlled for this" just doesn't fly, imo.


Whether it was _built_ to be addressed like a person doesn't change the fact that it's _not_ a person and is just a piece of software. A piece of software that is spamming unhelpful and useless comments in a place where _humans_ are meant to collaborate.


> You're saying it's bad because they removed useful information, but then why isn't Anthropic's suggestion of using verbose mode a good solution?

Because reading through hundreds of lines of verbose output is not a solution to the problem of "I used to be able to see _at a glance_ what files were being touched and what search patterns were being used, but now I can't".


Right, I understand why people prefer this. The point was that the post I was responding to was making pretty broad claims about how removing information is bad, while ignoring the fact that they themselves prefer a solution that removes a lot of information.


You’re being pedantic. INS was rolled into the Homeland Security umbrella in 2003. The poster was obviously using an old name.


It's far better to be pedantic than to constantly spout misinformation.


That exists, essentially, for aircraft today, albeit not automated. Try flying your little Cessna too close to the Capitol Mall or any number of sites in the world. You’ll very quickly and very unceremoniously be intercepted by other aircraft with big guns telling you to get the hell out.


It only works like that because fences are hard to build at 5,000 feet. Remotely disabling vehicles is a very different thing.

