If authors want people to read their articles they should learn to write.
Let's put snarkiness to one side - the article was shit. You know it, I know it, and everyone else knows it. Poorly written, with the same dumb thoughts - and I use the term loosely - I can get from any spoiled sophomore at any university in the western world.
The only reason it got any traction was a clickbait title about a hot topic. So I feel perfectly OK about shitting all over it. If you don't, flag me. It's the internet, it really doesn't matter.
The bias and motive are also obnoxious. The author is very clearly generating backlinks to other pseudo-academics in some postmodern anti-"AI" résistance. It's just regurgitation of other people's original commentary.
It's hard because the author is just generating poorly disguised backlinks to other people's thoughts. Follow some of the links and see for yourself, e.g. "Stop feeding the hype and start resisting" (which I also read, and which lacks any explanation of exactly how one might resist if one were so inclined), which links to the same CogSci 2022 talk the author does on resisting the dehumanization of technology. All three cite the resistance slogan "Stochastic Parrots" from a paper published in the proceedings of the Fairness, Accountability, and Transparency conference. Sociologists don't like large language models because they have the potential to make humans do less menial labor [which isn't good for a capitalist society because it frees humans up to think]. I mean, I have no idea why these people feel so threatened by a large language model, but outrage porn gets clicks.
Here's what I took from it: LLMs encode and are very adept at generating hate and bullshit. To counter that, big wealthy companies pay foreign workers low wages to do the very unpleasant task of reading and tagging the worst outputs from their LLM. This is a bad state of affairs and should be changed.
Yes, they frame this in weird academic phrases. But it's really not hard to get the point; even a tiny modicum of intellectual curiosity would get you there.
I read way more into it. I actually think the author is attempting a postmodern deconstruction of the concept of AI in the first place. In typically inconsistent, sloppy rhetoric, this anti-fascist (his words, not mine) dismissal argues both that LLMs are not real AI, because presumably they can't think, and that we should be afraid of them because, as AI, they encode modernity into their output, and modernity is bad because it perpetuates inequality by its very nature. Since stochastic parrots threaten to accelerate labor, make humans more productive, and make some labor redundant, it's also bad that they create new jobs which pay humans a Kenyan median wage to sterilize their output and make it acceptable for a postmodern anti-fascist utopia. I'm absolutely and horribly confused.
Interesting. I think a more coherent form of the argument is this: LLMs aren't capable of reasoning or self-updating. Capitalists are exploiting powerless labor to encode their preferences and goals into LLMs. Because an LLM can't really think and isn't human, they can control it. So we're going to be forced to deal with robots that perfectly embody the worst aspects of the modern economy and society.