If I ask how to write a bash script to get the last 3 letters of a file name, my preferred answer would be the literal answer. If someone wants to pontificate below that answer that maybe I wanted the extension, that's cool, but first answer the question or don't bother participating.
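For the record, both readings fit in a line of Bash parameter expansion apiece (a quick sketch; "filename" is just a stand-in):

    filename="archive.tar.gz"
    echo "${filename: -3}"    # the literal answer: last 3 characters, ".gz"
    echo "${filename##*.}"    # the guessed-at intent: extension after the last dot, "gz"

(The space before -3 matters: without it, ${filename:-3} is a default-value expansion.)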
People always invoke the XY problem as if it's a way of helping someone, when it's mostly just condescension and/or showing off when applied to forum answers.
If someone is looking for "advice", they can ask for it. If someone is looking for syntax, they don't necessarily want advice, even if you really think they need it. That's my advice.
Well there you go, OpenAI Codex has solved your problem: technical help that won't bother warning you when you're about to shoot yourself in the foot, if that's really what you want. :)
> when it's mostly just condescension and/or showing off when applied to forum answers.
You might be reading emotion and motivation into text that simply isn't there.
I do wonder: if Codex improves and starts asking clarifying questions, will you attribute condescension to its actions too?
> but first answer the question or don't bother participating
Sounds like you might be selfishly mistaking other human beings for a coin-operated answer vending machine that exists for your personal gratification. Or maybe I'm reading too much into your "just answer or don't participate"? :D
(I don't have any reason to believe it of you-- but I'd assume it's no less true than the idea that the windbaggy responders are replying that way to show off their skills and mock your lack of them.)
In any case... that is the point: if you want an answer vending machine that doesn't care whether what you asked for is actually what you want, an actual machine will do a better job of it.
Personally, I'm grateful when someone backs me out of a niche I've reasoned myself into... and I don't mind having to spend more time on my questions to qualify that I've already tried the obvious. I've probably solved more of my own problems just by stepping back to explain what I'm trying to do and why, and then noticing the error on my own, than by any other means.
But even if I did mind-- even if the people responding really were just showing off-- if that's the price of free help, it still seems like a good deal. I'd much rather get help from someone with a chance of warning me that my whole approach might be confused; in my view that's 95% of the value of learning from another mind, as opposed to mashing buttons until the tests pass.
> If someone wants to pontificate below that answer that maybe I wanted the extension, that's cool, but first answer the question or don't bother participating
Beggars can't be choosers, especially when asking support questions, and especially when they're not paying for it.
For every person who asks a question expecting (and deserving) a literal answer, there are dozens of ill-posed questions that require a clarifying preamble to get to any answer at all. It's better not to waste anybody's time.
Support people tend to be able to distinguish between the two pretty well, too. Ask a precise syntax question and you'll get a precise answer. Ask a vague "do my homework" question, and expect people to ask for clarification.
Nobody is forcing anyone to answer though. If someone has some "do this, don't do that" wisdom they want to share, they could write a blog post or something.
I agree the dynamic of "help" in a discussion forum is always a bit weird, but I think if you're going to take the time to interact with someone who has asked a question, it's most constructive to start by answering. (Same goes for people who just criticize the tone of the question, or say RTFM or whatever. If it's off topic or low quality, it's probably better to just downvote the post.)
No one forces anyone to pay attention to an answer they don't like either.
As far as "just answer"-- writing an answer takes work. You'd have to pay me a lot to mindlessly answer uninteresting queries that probably won't help you, like some meat based poor imitation of a search engine. It's not interesting, it's not fun, it doesn't foster building a community, it wouldn't help me expand my own knowledge.
I'm of the view that when someone comes insisting on a particular form of answer, rather than an interaction with a potential (future) peer, what they ought to be doing is hiring someone on Fiverr-- if it's not an answer they can extract from a search engine (or our benevolent AI overlords).
Generally it's the person asking the question who wades into an environment of potential answerers; they're the guest, with the (minor) imposition of a question. I suppose my view is different with respect to rogue troubleshooters who swing in out of the blue with preachy answers to questions that were never asked. :P
Using Bash examples was a bad idea. Bash scripts are, by and large, hacky things written to solve a simple problem expediently. OpenAI's output code reflects this reality. Expecting OpenAI to write Bash code defensively is like expecting OpenAI to complete SMS messages in scientific English.
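To make that concrete, here's roughly the gap between the Bash that dominates in the wild and a defensive version of the same loop (a sketch; the .txt glob and .bak suffix are just for illustration):

    # expedient, and typical of most Bash out there:
    for f in *.txt; do cp $f $f.bak; done

    # a defensive version of the same thing:
    set -euo pipefail                # stop on errors, unset variables, pipe failures
    for f in ./*.txt; do
        [ -e "$f" ] || continue      # don't treat an unmatched glob as a real file
        cp -- "$f" "$f.bak"          # quote expansions; '--' ends option parsing
    done

Almost nobody writes the second form for a throwaway script, so it's no surprise the model doesn't either.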
I was going to ask a similar thing. Does this AI not pull from a massive pool of countless examples of real human-written code? In the majority of cases, perfect code is far from necessary (put another way, solving Y will generally suffice even if you should be focusing on X), and that seems to be reflected in the sample data that the AI is most likely pulling from. Obviously it isn't perfect, but I never expected it to be perfect, so I'm left more impressed that the AI functions at all than I am upset that it doesn't function absolutely perfectly in every way.
I think its performance is quite impressive. Consider that the very last example is a better implementation than most here would write (at least without some research). The cheeky tone is not targeted at the machine.
> Does this AI not pull from a massive pool of countless examples of real human-written code
"Pull from" wouldn't be the right way of looking at it-- it's trained on that code and has presumably memorized significant amounts of it. But thus far no one has equipped this kind of model with anything like 'database access' beyond its own internal self-attention, whose cost is quadratic because every token in the context attends to every other token.
On my decidedly yolo-colors laptop display, I like that ever so slightly less. But not ever so slightly enough to fail to defer to a highly actionable and justified display of nerdity.
Maybe it's just a literal match (Codex is known to quote code verbatim [0]) for existing training data? Given the popularity of the problem in both CS classes and coding competitions, there's a good chance a matching implementation was present in its GitHub-based training data more than just a few times.