This has always seemed like a somewhat short-sighted take from regulators. While I do think that platforms have some responsibility to police content that users post, I can’t help but feel like it just creates this cycle of
1. Latest political event creates rise in related disinformation
2. Regulators issue ultimatum
3. Platforms make big public statement that they’ll “address it”, ultimatum paused
4. Next event occurs before 3 is realized (if it even can be, reasonably)
5. Repeat
To me the much smarter investment would be making serious “online literacy and critical thinking” programs to help decrease the effect of disinformation on populations
You are assuming that disinformation is an absolute thing and that states and governments try to fight it for the common good. What I see around me is that the state is a prime producer of disinformation, and it already controls the most effective vectors of civilian propaganda (namely schools and TV), in addition to its dedicated military units.
What governments are trying to fight is information going against their own propaganda. Raising critical thinkers goes against that (we have seen that even just asking questions was treated like a crime during Covid), which is one of the reasons the level of education is being sabotaged.
While it is a noble goal, we've seen in the past few years how susceptible people are to disinformation. Just see how many people absolutely ate up Covid/vaccination disinformation. See how many people are still ignorant about climate change and believe the bullshit they're told by the fossil fuel lobby. Or, even more fundamentally, see how many people are susceptible to advertising - it's basically the same.
You'd wish that people were critical thinkers. But that's not what our consumerism-oriented media landscape has been working towards in the last 50 years. Quite the contrary; "buy, so everyone will like you!" We've been trained to be brave little consumers and to eat up our daily share of capitalistic propaganda -eh- I mean advertisement.
The economic incentive to create as many mindless and uncritical consumers as possible has left much of the population unable to think for themselves.
So, before you can have your censorship-free utopia, you first need to raise the level of informed and critical education. And that's not going to happen while economic interests need the population to stay susceptible to advertising.
States aren't even close to being the prime producers of disinformation. Exactly as libertarians ought to suppose, states are being comprehensively outcompeted by private industry in that respect.
does anyone know who is behind "reclaim the net"? a lot of their articles are hard-right leaning and there is zero information on their site about who funds it and who is writing the articles. googling the "authors" of the articles leads to nothing.
There’s a big issue around censoring disinformation, but currently Facebook is actively promoting and pushing the video of Hamas beheading a baby, and this video auto-plays.
This is the result of Facebook’s content promotion algorithm, and it is clearly an unbelievably and unambiguously traumatic video that has no valid “free speech” argument, especially when it’s just trolls and terrorists intentionally promoting it to cause trauma.
It is not too much to ask for Facebook to block that - they have no problem blocking media that violates copyright, so the only reason this is still happening is because they don’t actually care. Hell maybe it drives engagement.
Alternatively, if they can’t guarantee they’re not going to auto-play a baby being beheaded, they can just turn off autoplay and blur all videos. That approach also means there’s no censorship threat, if that’s important to you.
I think suggesting that the human beings working at Facebook are so devoid of humanity that they'd readily leave that sort of content online to drive engagement, or just flat out don't care, is a pretty unfair and unreasonable thing to suggest.
Sure, yeah, go ahead and demonise FAANGs for stuff you don't agree with, but suggesting "they" (the real human beings working there) are leaving baby beheading videos up to drive some engagement metric is a demonisation too far IMO.
Firstly, I want to say I agree with you. If what you're saying is correct, it seems insane to me they'd allow that given I don't believe nudity is even allowed on Facebook?
However, I'm not sure this is actually what the EU has a problem with? Showing the factually accurate and horrific things Hamas is doing is obviously not pro-Hamas or hateful. Major media orgs here in Europe are doing very similar things with the exception that they're blurring the content.
So I don't believe this content is illegal or hate speech? The first thing that comes to mind is that, back when the UK was in the EU, a lot of media orgs here were publishing uncensored images of a dead child who drowned while seeking refuge in Europe.
> ... and is clearly an unbelievably, and unambiguously traumatic video that has no valid “free speech” argument,
Strong disagree. That kind of content is very effective at changing attitudes and shifting political pressures, which is the core purpose of first amendment protections. I completely understand not wanting to see it, but disagree with preventing others from choosing to see and share it.
The idea that content that is "clearly an unbelievably, and unambiguously traumatic video" is justification for censorship is backwards, since that content has particularly large potential to drive change.