
> The garbage generator generates garbage, but if you run it enough times it gets something slightly-less-garbage that can satisfy a compiler! You're stupid if you don't think this is awesome!


I don't understand the point of this style of argument.

There are oh-so-many issues with LLMs - plagiarism/IP rights, worsening education, unmaintainable code - this should be obvious to anyone. But painting them as totally useless just doesn't make sense. Of course they work. I've had a task I want to do, I ask the LLM in plain English, it gives me code, the code works, I get the task done faster than I would have figuring out the code myself. This process has happened plenty of times.

Which part of this do you disagree with, under your argument? Am I and all the other millions of people who have experienced this all collectively hallucinating (pun intended) that we got working solutions to our problems? Are we just unskilled for not being able to write the code quickly enough ourselves, and should go sod off? I'm joking a bit, but it's a genuine question.


I had Copilot for a hot minute. When I wrote things like serializers and deserializers, it was incredible. So much time saved. But I didn't do it enough to make the personal cost worth it, so I cancelled.

It's annoying to have to hand-code that stuff. But without Copilot I have to. Or I can write some arcane regex and run it on existing code to get 90% of the way there. But writing the regex also takes a while.

Copilot would literally suggest the whole deserialization function after I'd finished the serializer, 100% correct code.
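To make the pattern concrete (this is a made-up Python sketch, not the actual code from that project): you hand-write the serializer, and the deserializer is its exact mirror image, which is what makes it such easy completion fodder:

```python
# Hypothetical illustration of serializer/deserializer boilerplate;
# the class and field names are invented for this example.

class User:
    def __init__(self, name, email, age):
        self.name = name
        self.email = email
        self.age = age

    # The part you write by hand:
    def to_dict(self):
        return {"name": self.name, "email": self.email, "age": self.age}

    # The mirror-image part a completion tool can suggest wholesale:
    @classmethod
    def from_dict(cls, d):
        return cls(name=d["name"], email=d["email"], age=d["age"])

u = User("Ada", "ada@example.com", 36)
# Round-tripping through the pair should be lossless:
assert User.from_dict(u.to_dict()).to_dict() == u.to_dict()
```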


I remember writing LISP code that created the serialisers and deserialisers for me.

Now that everything is containerised and managed by Docker-style environments, I am thinking about giving SBCL another try; the end users only need to access the same JSON REST APIs anyway.
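The "write code that writes the serialisers for you" trick isn't Lisp-only; here's a rough Python equivalent using stdlib dataclass introspection instead of macros (names here are illustrative):

```python
# Sketch: generate (de)serializers generically from field definitions,
# the dataclass analogue of a Lisp macro doing the same job.
import json
from dataclasses import dataclass, fields, asdict

def to_json(obj):
    """Generic serializer: one function covers every dataclass."""
    return json.dumps(asdict(obj))

def from_json(cls, text):
    """Generic deserializer: rebuild any dataclass from its JSON form."""
    data = json.loads(text)
    return cls(**{f.name: data[f.name] for f in fields(cls)})

@dataclass
class Point:
    x: int
    y: int

p = Point(1, 2)
assert from_json(Point, to_json(p)) == p
```

Macros generate the per-type code at compile time where this version pays a little introspection cost at runtime, but the boilerplate saved is the same.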

Everything old is new again =)


You are right about this.

Also, this has actually been proved mathematically (it's essentially the Condorcet jury theorem: a majority vote of independent, better-than-chance judges gets more accurate as you add judges). And then someone demonstrated it empirically.

There was an experiment where they trained 16 pigeons to classify tumours in photographs as cancerous or benign.

Individually, each pigeon averaged 85% accuracy. But the pigeons voting together (excluding one outlier) reached 99%.

If you add enough silly brains, you get one super smart brain.
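The arithmetic behind that jump checks out. A back-of-the-envelope version, assuming 16 independent judges at 85% accuracy and a simple majority vote (these assumptions are mine, simplified from the experiment):

```python
# Majority-vote accuracy of n independent judges, each correct with prob p.
from math import comb

n, p = 16, 0.85

# Probability that at least 9 of the 16 judges are correct:
majority_acc = sum(
    comb(n, k) * p**k * (1 - p)**(n - k)
    for k in range(n // 2 + 1, n + 1)
)
print(round(majority_acc, 3))  # roughly 0.999 -- far better than any one judge
```

Real pigeons aren't fully independent, which is why the experiment landed at 99% rather than the idealised 99.9%, but the direction of the effect is exactly what the theorem predicts.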


It's also mathematically proven that infinite monkeys typing on typewriters for eternity will, almost surely, recreate all the works of Shakespeare. It still takes someone with an actual brain to recognize the correct output.


Yep, there's some positive feedback loop missing in all this LLM stuff.


Counterpoint: the brain also generates mostly garbage, just more slowly.



