That's basically what the author suggests when they say:
> Anyhow, I truly believe humanity has to rollback to operating at a human scale.
It's impossible to operate a complaints-platform on a global scale if it's run by humans. According to GMI [0], 500 hours of content are uploaded every minute.
Given an average video duration of about 12 minutes [1], that would be 2500 videos per hour. That's just too much to manually review and handle complaints.
Is it though? Let's do a very rough estimate: 500 hours of content per minute in 2019, let's say 1 reviewer can review 3 hours of video each hour (by increasing play speed/skipping etc.) and we have a global workforce in lower income countries working 3x 8-hour shifts + weekends. That's 500 × 60 / 3 × 3 = 30,000 reviewers; at a monthly cost of $1,000 that's 30,000 × 1,000 × 12 × 7/5 = $504m/year.
YouTube had $15b in revenues in 2019, so this represents around 3.5% of revenue. Now this is assuming that we actually need to review 100% of videos before releasing them (which is not the case), and one reviewer can probably review more than 3 hours of content per hour with the right AI assistance, so the real cost would be quite a bit lower. Even then, spending less than 5% of revenues on content review, moderation and support sounds very reasonable to me.
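For anyone who wants to check the arithmetic, here's the same back-of-envelope estimate as a short script (every input is an assumption from the estimate above, not real staffing data):

```python
# Back-of-envelope cost of a fully human review pipeline.
# All inputs are the commenter's assumptions, not real data.

UPLOAD_HOURS_PER_MIN = 500    # hours of video uploaded per minute (2019)
REVIEW_SPEED = 3              # hours of content one reviewer clears per hour
SHIFTS_PER_DAY = 3            # 3 x 8-hour shifts for 24/7 coverage
MONTHLY_COST_USD = 1_000      # assumed all-in cost per reviewer per month
WEEKEND_FACTOR = 7 / 5        # scale weekday staffing cost to 7-day coverage

hours_per_hour = UPLOAD_HOURS_PER_MIN * 60    # 30,000 hours uploaded per hour
on_shift = hours_per_hour / REVIEW_SPEED      # 10,000 reviewers at any moment
headcount = on_shift * SHIFTS_PER_DAY         # 30,000 total reviewers

annual_cost = headcount * MONTHLY_COST_USD * 12 * WEEKEND_FACTOR
print(f"{headcount:,.0f} reviewers, ${annual_cost / 1e6:,.0f}m/year")
# -> 30,000 reviewers, $504m/year, about 3.4% of 2019 revenue of $15b
```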
The flaw is that you think humans would do a better job than AI, which is definitely not the case. Especially hiring 30k people in a low-income country, what could go wrong... This is the kind of scale problem that can't be fixed by human review.
Yes, it is - because it's actually 2500 videos per MINUTE, not per hour, mea culpa. So your 30,000 reviewers would actually have to be at least 1.8 MILLION.
It's not about the viewing time, though, it's about the videos.
The misconception is that it's the review process that's the problem - it isn't. That can be automated just fine.
The problem arises as soon as there are complaints or issues with the content and that depends on the number of videos, not the duration.
So if there's a problem with a video it can get flagged, de-monetised or even taken down automatically by software (as is the case now). This is a non-issue. It gets complicated as soon as one party has a dispute over this and that scales with the number of videos, not their length.
> that scales with the number of videos, not their length.
That seems plausible, but if so, the entire calculation would have to be redone from scratch, with qualitatively (not quantitatively; different units, not just different values) different numbers, so bringing "1.8 million" into it is still a misleading non sequitur.
It takes a lot longer to make a video than to watch it. It therefore stands to reason that if humanity is capable of making all that content, humanity is capable of watching it - if it decided that were a priority.
2500 videos per minute doesn't equal 500 hours of original content per minute, which is part of the problem.
Just look at all the reaction channels and compilations that simply reuse the same content over and over again. You have one funny or shocking clip (often from 3rd party sources such as TikTok) and you'll find the same video snippet in at least 10,000 remix/compilation/reaction videos. Not to mention reuploads and straight up copies.
Algorithms have a hard time catching up with this, and cropping, mirroring, tinting, etc. are often used to confuse ContentID. The asymmetry is the problem: bots and software can both spam and flag content at superhuman rates.
The inverse - e.g. deciding whether a complaint is legit, fair use applies, whether monetisation is possible, etc. - is actually a really hard problem and therein lies the dilemma.
Certain parties are gaming the system and the scale is just too much to handle manually.
I don't have any data to back this up, but I believe that of those 2500 videos per minute, the AI could classify 2450 or so as safe, not requiring human interaction. The other 50 get scored on a badness scale from 0 to 100. The ones closer to "not that bad" (ToS violations and such) get put through automatically while waiting for review. The illegal content (rape, gore, child porn) gets blocked automatically until reviewed by a human. Doesn't sound that far-fetched to implement with $50B a year in profits?
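As a rough illustration, that triage policy could look something like the sketch below. The thresholds, score source and category names are all made up for the example; nothing here reflects how YouTube actually classifies uploads:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    PUBLISH = "publish"                       # safe, no human needed
    PUBLISH_PENDING_REVIEW = "publish+queue"  # live now, human checks later
    BLOCK_PENDING_REVIEW = "block+queue"      # offline until a human clears it

# Hypothetical categories that must never go live unreviewed.
ILLEGAL = {"csam", "gore", "rape"}

@dataclass
class Classification:
    badness: int           # 0-100 score from the (assumed) upstream classifier
    category: str | None   # predicted violation type, if any

def triage(c: Classification, safe_cutoff: int = 5, bad_cutoff: int = 70) -> Action:
    """Route an upload: most go straight through, the worst wait for a human."""
    if c.category in ILLEGAL or c.badness >= bad_cutoff:
        return Action.BLOCK_PENDING_REVIEW
    if c.badness <= safe_cutoff:
        return Action.PUBLISH
    return Action.PUBLISH_PENDING_REVIEW
```

With the numbers above, only the ~50 non-safe videos per minute would ever touch a human queue, and only a fraction of those would be blocked while waiting.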
But how would that help with complaints, ContentId and copyright claims, though?
The problem isn't the automated review process, the problem is complaints and disputes.
Even if only 1% of all videos had any issues of this sort at all, that'd still be 25 complaints per minute, about the most complex topic in media no less: copyright law and fair use.
The problem lies in the asymmetry - bots and automated flagging campaigns can scan, mark and take down thousands of videos per minute no problem.
But it's impossible for creators to get their issue reviewed in time by a human, because we just don't have AI capable of handling such decisions yet. And even then it's often still not as clear-cut as one might think and both sides need to be heard, etc.
I've thought that something like the SpamAssassin model would be sufficient: calculate a 0.0-1.0 likelihood-to-block score, set a cutoff near the 0 end to auto-approve, another toward the 1 end to auto-block, and have humans moderate the middle.
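A minimal sketch of that cutoff scheme (the threshold values are placeholders; a real system would tune them against measured error rates):

```python
def decide(block_likelihood: float,
           approve_below: float = 0.2,
           block_above: float = 0.9) -> str:
    """SpamAssassin-style three-way decision on a 0.0-1.0 score."""
    if block_likelihood < approve_below:
        return "auto-approve"
    if block_likelihood > block_above:
        return "auto-block"
    return "human-review"
```

Tightening the two cutoffs toward each other shrinks the human queue at the cost of more automated mistakes; widening them does the reverse.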
[0] https://www.globalmediainsight.com/blog/youtube-users-statis...
[1] https://www.statista.com/statistics/1026923/youtube-video-ca...