There are two avenues to deal with a hazard. You can try to manipulate the environment to eliminate the hazard, or you can try to strengthen people to make them immune to the hazard. I think we should prefer the latter over the former whenever possible.
For one thing, it's more robust. The environment is messy and control is often illusory at best. Control limits freedoms and introduces centralized points of failure that can be manipulated by bad actors. Making people strong and free creates more opportunity and innovation, even though it scares the people who long to be in charge of the centralized control.
What does it mean to strengthen people to make them immune to the harms caused by social media? I don't know exactly, but I bet we could find out.
>The brain has some flaws that are very hard to overcome. Addictions are some of them.
The majority of people don't fall victim to addiction; it's a minority of people who are prone to it. Everyone shouldn't have their freedoms restricted just to cater to the more weak-minded.
You don't really seem to understand how the legal system works, do you?
Laws are there to protect powerful people (in many cases), to protect the majority (hopefully most cases) and to protect the vulnerable, sometimes from themselves (in a decent number of cases).
"prediction markets" -- what a loaded term. Do you consider financial futures markets to also be "prediction markets"? As I understand from financial research, it has been shown time and time again that financial futures do not predict future performance. In my mind, "prediction markets" are nothing more than legalised gambling on an (election) outcome. To be fair, most retail (non-institutional) traders of highly leveraged financial products (futures & options) are the same: They are gambling on an outcome, not investing or hedging risk. Finally, I am not saying it should be outlawed, but there should be some very strong warnings before trading the product -- as there are for futures & options.
I think we should start by removing the immunity that large platforms possess against relevant criminal prosecutions. Take suicide, for example: in some jurisdictions, driving another individual toward suicide is a criminal matter. If evidence can be put forward in court that the victim used a lot of social media and that the algorithm contributed to that suicide, well, maybe the publisher should be getting prosecuted.
Do I expect that some social media companies are really going to struggle to continue to operate at their current scale because of these changes? Yes, I 100% expect it, and I think it's great. It may lead to a smaller and more personal web. Your business model has no inherent right to exist if it harms people. Maybe, for example, you will need to hire more humans to handle moderation so that you stop killing people, and if humans don't scale, well, too bad, you're going to get smaller. We regulate gambling, tobacco, etc. to limit the harm they do; I don't see any difference with social media.
To have the biggest impact without stifling innovation, we can start by applying this rule to platforms above a certain revenue level. There is likely a combination of legislative and judicial action needed here, in that there may already be crimes on the books which these platforms are committing, but the judiciary has not traditionally thought of a corporation as the person who committed that crime, certainly not at scale against thousands of victims. In other cases we may need to amend laws to make it clear that just because you used an algorithm to harm people at scale doesn't mean you're immune to the consequences of the harm you caused.
No, there's not. There's any number of ways to deal with a hazard. Your two avenues are not even distinct. Both require exerting control. Any scheme can be manipulated by bad actors. Scheming to not scheme is still scheming.
You cannot make people anything without limiting their freedom. How do you make people stronger? If you have an idea, there is a centralised point of control/failure. Bad actors will be stronger and freer as well.
There are plenty of examples of successful measures to reduce harm by controlling the "environment": cigarettes, alcohol, gambling, being old enough to drive on public roads, being old enough to take on debt, child labour, etc.
It's weird to use the words "environment" and "hazard" on one hand and "people" on the other. The discussion is about hazards designed, created and maintained by people. The environment to manipulate is people, organisations, and law.
As a friend of mine said "If you can't kid-proof the farm, you have to farm-proof the kid". Watched said kid drink from farm puddles, and lick feed bowls. Seems to have worked: She's headed to tech school now.
I agree that we can't effectively manipulate the environment to eliminate the hazard, but I also worry that we can't effectively strengthen people to make them immune to the hazard either.
A common thread through all of human history is people being misled en masse. Before social media we were slaves to the tabloid headlines. Before widespread papers we were slaves to the pulpit. Etc.
For the last 10 years social media has been the tabloid, but personalised. Outrage = engagement, so the algorithms have pushed outrage, and personalised it in the sense that they have searched for the thing that outrages each of us individually.
I fear that the next 10 years of social media will be very basic generative stuff (LLMs don't need to get better; social media companies just have to apply the current state of the art), turning it into the tabloid with intimacy. By turning into your friend in how they communicate with you, they get 10x the engagement.
The way to change someone's mind is through intimacy.
And humans are suckers for it. We can't strengthen the masses against outrage, and we can't strengthen the individual against intimacy.
If the hazard is just me skulking about and punching you in the back of your head every time you let your guard down, you could strengthen yourself and make yourself immune by never going outside and always keeping your door locked and barring up your windows.
How is that inherently preferable to addressing the harm itself? By what principle do you conclude that we should prefer one over the other whenever possible?
Or we could have the government require that everyone wear a loudspeaker that constantly announces their presence so that nobody can sneak up on anyone. Citizens are required to purchase their own loudspeaker and anyone caught not wearing one will be fined or jailed.
Is that what you prefer?
On the other hand, if everyone around you was a black belt in jiu jitsu and you knew that there was a good chance that they’d break your arm if you tried to sneak up and punch them, you probably wouldn’t want to do that, would you?
> Or we could have the government require that everyone wear a loudspeaker that constantly announces their presence so that nobody can sneak up on anyone. Citizens are required to purchase their own loudspeaker and anyone caught not wearing one will be fined or jailed.
A much worse solution than e.g. jailing me. It's not everyone who is the problem, neither in my scenario nor in the case of a handful of gigantic social networks exploiting insecurities and addictive tendencies in children.
> On the other hand, if everyone around you was a black belt in jiu jitsu and you knew that there was a good chance that they’d break your arm if you tried to sneak up and punch them, you probably wouldn’t want to do that, would you?
Is that a better solution than jailing me?
Seems we've ended up with three different potential solutions and your principle already isn't holding any water IMO. To bring us back to the problem at hand I can think of an analogous set of three solutions:
1. Make everyone announce their age when they use a social media website so that the platforms know not to exploit children, which is kind of like making everyone wear a loudspeaker.
2. "Harden" the children, which is like teaching everyone jiu jitsu.
3. Remove the profit incentive to exploit children (or anyone else) by banning ad-funded social networks. This is like jailing the culprit.