IMO the more fundamental root cause is the bastardization of the term AI. If LLMs don't have any semblance of artificial intelligence, then they should be referred to simply as LLMs or ML tools.
If they do have signs of artificial intelligence, we should be tackling much more fundamental questions. Does an AI have rights? If companies are people, are AIs also people? Would unplugging an AI be murder? How would we even recognize artificial intelligence? Do they have intentions or emotions? Have we gotten anywhere near solving the alignment problem? Can alignment be solved at all when we have yet to align humans amongst ourselves?
The list goes on and on, but my point is simply that either we are using AI as a hollow, bullshit marketing term, or we're all latching onto shiny object syndrome and ignoring the very real questions that the development of an actual AI would raise.