
YouTube Will Curb Conspiracy Theories By Changing A.I. Algorithm

uair01

Antediluvian
Joined
Apr 12, 2005
Messages
5,413
Location
The Netherlands
I find it extremely irritating that the sucker who wrote the original algorithm is congratulating him/herself on the positive change they're making.
At least that's how I read it ...
But interesting and scary at the same time. Some comments are funny.



Brian is my best friend’s in-law. After his dad died in a motorcycle accident, he became depressed. He fell down the rabbit hole of YouTube conspiracy theories, with flat earth, aliens & co. Now he does not trust anyone. He stopped working, seeing friends, and wanting kids. 2/
Brian spends most of his time watching YouTube, supported by his wife.
For his parents, family and friends, his story is heartbreaking.
But from the point of view of YouTube’s AI, he’s a jackpot.
We designed YT’s AI to increase the time people spend online, because it leads to more ads. The AI considers Brian as a model that *should be reproduced*. It takes note of every single video he watches & uses that signal to recommend it to more people 4/
 
They didn't do a very good job did they?

Since February last year the number of conspiracy-oriented videos on YouTube has boomed.

And now:
In a blog post on Thursday, YouTube said it had "removed tens of thousands of QAnon videos and terminated hundreds of channels" under its existing content rules.

But citing challenges in managing "shifting and evolving" content, YouTube said it was necessary to take "another step in our efforts to curb hate and harassment".

"Today we're further expanding both our hate and harassment policies to prohibit content that targets an individual or group with conspiracy theories that have been used to justify real-world violence," it said.

One example, YouTube said, "would be content that threatens or harasses someone by suggesting they are complicit in one of these harmful conspiracies".


Full Article:
https://www.bbc.com/news/technology-54562802

a) This would be a simply massive task if enforced uniformly (which it won't be): millions of videos would need to be pulled. There's no way of doing that 'by hand', so it'll be automated by A.I., and lots of perfectly innocent material will be removed and unfair bans instigated. Given my point below, I don't really think it's a worthwhile enterprise.

b) To be honest, the only people I've ever 'met' who have seriously put forward utter drivel like Q-Anon theories were already cripplingly intellectually challenged and highly likely to line up for seconds the next time somebody turns up and starts passing around the bullshit sandwiches.

I don't think you can protect people like this from themselves. If it leaves YouTube, it'll soon infest somewhere else.
 
I agree.

But how do you fight nonsense?

Or should you even fight nonsense??
 
They should radicalise an AI and then ban most of the crap it watches.

This raises a whole raft of very interesting possibilities. Could you build a paranoid AI, a misogynist AI, a homophobic AI, a racist AI? I'm not talking about a simplistic deterministic program that just spews offensive garbage, but a true AI that can predict what will interest and capture the attention of these bad actors and perhaps pick up on the dog whistles that the rest of us might miss. There's been talk of programs that can predict criminal behavior, and some police forces are already using programs that identify areas where particular sorts of crime are likely to occur in the near future.

The attempts of some platforms (I'm thinking Facebook) to filter content manually by human operators are clearly ridiculous, and the results would be laughable if they weren't so often regrettable.
 
Isn't that simply censorship? After all, about 1 in 3 conspiracy theories actually turn out to be at least partially true. (Hillsborough, cough cough)
 
No AI is going to be able to do more than identify and flag users / content that meet standing criteria for being suspicious. Postings to social media (etc.) can involve subtleties (innuendo, phrasing, jargon, irony) AIs can't reasonably handle, and these subtleties can quickly change to allow mischief makers to evade any automated checks or filters.

The best one could hope for would be AI support for nominating users or content for further evaluation and action by humans who can parse the subtleties, decide acceptability and enact any necessary response.
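That two-stage approach - an automated pass that nominates, humans who decide - can be sketched in a few lines. This is only a toy illustration; the keyword list and threshold are invented placeholders, not anything any real platform uses:

```python
# Sketch of AI-assisted triage: an automated pass scores content against
# standing criteria, and anything over a threshold is queued for a human
# reviewer rather than being actioned automatically.

SUSPICIOUS_TERMS = {"hoax", "cover-up", "false flag"}  # placeholder criteria
FLAG_THRESHOLD = 2  # minimum number of hits before a human looks at it

def score(text: str) -> int:
    """Count how many of the standing criteria the text matches."""
    lowered = text.lower()
    return sum(1 for term in SUSPICIOUS_TERMS if term in lowered)

def triage(posts: list[str]) -> list[str]:
    """Return only the posts a human moderator should evaluate."""
    return [p for p in posts if score(p) >= FLAG_THRESHOLD]

queue = triage([
    "Lovely weather today",
    "The hoax was a false flag cover-up",
])
print(queue)  # only the second post reaches the human review queue
```

Note how easily this is evaded: swap "hoax" for a new euphemism and the score drops to zero, which is exactly the subtlety problem described above.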
 
Isn't that simply censorship? After all, about 1 in 3 conspiracy theories actually turn out to be at least partially true. (Hillsborough, cough cough)

A stickler would object that censorship is not the correct term for a privately-owned company that chooses from whom it takes custom and to whom it gives service, but frankly I'm a bit fed up with that.

The Internet is not comparable to the public space of a city square or speaker's corner, because even though in theory anybody can 'set up shop' and begin passing out virtual pamphlets with their own website, in effect all the established entrances to the website arena are controlled by large corporations, who have no qualms about simply delisting you and removing you from their indexes if they don't like your message.

The government ought to enforce a truly level playing field, or at least, in the first place, instruct the corporations to do so on pain of having it done for them. As it is, you have companies like Twitter and YouTube claiming simultaneously to be offering a public space (and hence exempt from legal liability for what their users post under U.S. law), and to have the right to edit and control what appears under their masthead.

They are either publishers--in a legal sense--or not.

They should not be allowed to have both.
 
Well, OK, it's good that we're closing the gap between machine intelligence and human intelligence. I just wish it wasn't happening because there are more and more stupid people . . .
The smarter AI becomes, the more stupid people will become. The human race will come to rely on it too much.
'Idiocracy' will turn out to be a prophecy.
 
Well, OK, it's good that we're closing the gap between machine intelligence and human intelligence. I just wish it wasn't happening because there are more and more stupid people . . .

There's no gap being closed ... The AI is simply shuffling tokens (words) around without any underlying semantic inference or the means to determine which semantic connections are to be avoided. Assuming it's a trained AI (e.g., a neural net based system) it's merely parroting patterns and phrasings available in its lexical repository, re-mixed to fit syntactic rules.

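The "parroting" point can be made concrete with a toy bigram (Markov-chain) generator: it learns which word follows which in its training text and re-mixes those pairs into fluent-looking output, with no semantic understanding at all. The corpus here is obviously a made-up example:

```python
import random

# Toy bigram model: record which word follows which, then re-mix those
# learned pairs. It reproduces surface patterns from its "training" text
# without any underlying semantic inference.

corpus = "the earth is flat and the moon is fake and the earth is hollow"
words = corpus.split()

follows: dict[str, list[str]] = {}
for a, b in zip(words, words[1:]):
    follows.setdefault(a, []).append(b)

def babble(start: str, length: int, seed: int = 0) -> str:
    """Generate text by repeatedly picking a learned successor word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

print(babble("the", 8))  # fluent-looking nonsense stitched from the corpus
```

Every adjacent word pair in the output occurred in the corpus, so the result is syntactically plausible; whether it is true never enters into it.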
 
Anyone remember Tay?

Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day

....However, some of its weirder utterances have come out unprompted. The Guardian picked out a (now deleted) example when Tay was having an unremarkable conversation with one user (sample tweet: "new phone who dis?"), before it replied to the question "is Ricky Gervais an atheist?" by saying: "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

... but there are serious questions to answer, like how are we going to teach AI using public data without incorporating the worst traits of humanity? If we create bots that mirror their users, do we care if their users are human trash? There are plenty of examples of technology embodying — either accidentally or on purpose — the prejudices of society, and Tay's adventures on Twitter show that even big corporations like Microsoft forget to take any preventative measures against these problems.

For Tay though, it all proved a bit too much, and just past midnight this morning, the bot called it a night:


https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
 
Isn't that simply censorship? After all, about 1 in 3 conspiracy theories actually turn out to be at least partially true. (Hillsborough, cough cough)
This relates to something that's been annoying me more and more of late: there seems to be an increasing trend in the mainstream media to have us believe that all conspiracies are impossible ("folk couldn't keep it secret" etc.). They also like to conflate different things - so if ludicrous conspiracy "A" is false then so must they all be, i.e. "we have just proven that a picture of a bigfoot is fake, therefore there was no conspiracy to kill Kennedy".
And if you don't go along, then you are obviously a credulous fool who believes everything you are told and probably gives away all your money to Nigerian princes and random callers claiming to be from Microsoft.
I've also noticed a tendency for the mainstream media to criticise or make fun of folk like, for example, Donald Trump or David Icke inappropriately - as in, they will ignore the 99% guff they talk and pick up on some point or remark or other made by them that in and of itself isn't particularly batshit crazy or even controversial (even a stopped clock is right twice a day) and, it seems, deliberately misconstrue or interpret it in the worst possible light.
I used to think it was just laziness - journalists are many things but seldom stupid - but it seems like deliberate misunderstanding and misrepresentation is becoming the norm.
Is it just "dumbing down" à la Idiocracy, or is it closer to They Live, I wonder?
 
Isn't that simply censorship? After all, about 1 in 3 conspiracy theories actually turn out to be at least partially true. (Hillsborough, cough cough)

No mate you are totally looking down the wrong end of the telescope!

Do you honestly think social media has been suppressing conspiracy theories of late????

Honestly???
 
This relates to something that's been annoying me more and more of late: there seems to be an increasing trend in the mainstream media to have us believe that all conspiracies are impossible ("folk couldn't keep it secret" etc.). They also like to conflate different things - so if ludicrous conspiracy "A" is false then so must they all be, i.e. "we have just proven that a picture of a bigfoot is fake, therefore there was no conspiracy to kill Kennedy".
And if you don't go along, then you are obviously a credulous fool who believes everything you are told and probably gives away all your money to Nigerian princes and random callers claiming to be from Microsoft.
I've also noticed a tendency for the mainstream media to criticise or make fun of folk like, for example, Donald Trump or David Icke inappropriately - as in, they will ignore the 99% guff they talk and pick up on some point or remark or other made by them that in and of itself isn't particularly batshit crazy or even controversial (even a stopped clock is right twice a day) and, it seems, deliberately misconstrue or interpret it in the worst possible light.
I used to think it was just laziness - journalists are many things but seldom stupid - but it seems like deliberate misunderstanding and misrepresentation is becoming the norm.
Is it just "dumbing down" à la Idiocracy, or is it closer to They Live, I wonder?


Are you bonkers?

There are lots of journalists around the world who have had their stories reported in the press and have been killed for them.

These are men and women who have been murdered in their pursuit of the truth against organizations that deny everything.

https://www.washingtonpost.com/world/2019/12/30/past-decade-least-journalists-were-killed-worldwide/

These are proper conspiracy theorists - not man-baby stuff about reptiles eating babies, for fuck's sake.

The current QAnon conspiracy stuff is a big Peanuts-Linus security blanket that stops a lot of people from actually accepting that things are fucked up and everything is beyond their control, so they don't have to do anything.

Also, the 1% of what Icke and Trump say that possibly makes sense does not make up for the 99% of bullshit they peddle to make money and advance their cause.

If you can't see that then I dunno.
 
No mate you are totally looking down the wrong end of the telescope!

Do you honestly think social media has been suppressing conspiracy theories of late????

Honestly???
No. But I don't think they should, now or in the future.
 
Are you bonkers?

There are lots of journalists around the world who have had their stories reported in the press and have been killed for them.

These are men and women who have been murdered in their pursuit of the truth against organizations that deny everything.

https://www.washingtonpost.com/world/2019/12/30/past-decade-least-journalists-were-killed-worldwide/

These are proper conspiracy theorists - not man-baby stuff about reptiles eating babies, for fuck's sake.

The current QAnon conspiracy stuff is a big Peanuts-Linus security blanket that stops a lot of people from actually accepting that things are fucked up and everything is beyond their control, so they don't have to do anything.

Also, the 1% of what Icke and Trump say that possibly makes sense does not make up for the 99% of bullshit they peddle to make money and advance their cause.

If you can't see that then I dunno.

Think we are slightly at cross purposes here.
I was criticising the mainstream media's tendency (in my opinion) to dismiss anything that might be considered "conspiracy" as all being a bit "bonkers", and to dismiss it all out of hand with all cases being equally silly, rather than taking individual cases on their own merits.
I also was making no case for those two individuals, merely again pointing out what looks to me like a "lazy" attitude to reporting.
However having actually known many journalists over a number of years I find myself wondering if this is merely sloppy work or a more directed editorial stance.
I hadn't mentioned QAnon at all, but for what it's worth it's just a 4chan joke that's gained legs, and it will no doubt be used by anyone who has a political agenda it suits, and yes, it may indeed be being used to distract folk from more serious matters - editorial stances again.
Or maybe I'm reading more into the dumbing down of media than it merits.

Oh yes and I probably am "Bonkers" :)
 
There's a dangerous impulse to be more aware than anyone else, it's a point of pride in too many people, and it happens in too many areas of life, regardless of p*l*t*cs. There is nothing wrong with being informed, but if you've informed yourself with a load of rubbish that you want to disseminate to make yourself sound more knowledgeable than the average chap or chapess then it may well end in tears. Mind you, there's so much misinformation out there that we're drowning in it.
 
...they will ignore the 99% guff they talk and pick up on some point or remark or other made by them that in and of itself isn't particularly batshit crazy or even controversial (even a stopped clock is right twice a day) and, it seems, deliberately misconstrue or interpret it in the worst possible light.

There’s a relevant neologism: “nutpicking”.

Nutpicking is the fallacious tactic of picking out and showcasing the nuttiest member(s) of a group as the best representative(s) of that group — hence, "picking the nut".

https://rationalwiki.org/wiki/Nutpicking

maximus otter
 
Wouldn't most YouTube videos need some kind of metadata/tagword just so people can find them? Wouldn't monitoring those help?
 
The goal of curbing these theories is to cater to the desires of YouTube's investors and corporate clients, who happen to be very rich and will obviously favour only ideas that aren't against them.

Given that, one should consider sites that aren't subject to such pressures.
 
Wouldn't most YouTube videos need some kind of metadata/tagword just so people can find them? Wouldn't monitoring those help?

That's an intrinsic part of the problem. If you want to manage random content you have to be able to identify, characterize and track it. To accomplish this you have to tag all your content so you can efficiently index it for searching, etc. If you have a huge inventory of content (as YouTube does) the burden of the tagging itself becomes a problem and induces massive costs.

This additional tagging and indexing burden is bad enough when tagging is straightforward and unambiguous (e.g., using bar codes to tag merchandise). This burden is open-ended and inevitably gets out of control when the tagged items (in this case, random content) can't be clearly and unambiguously tagged once and for all.
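To make the asymmetry concrete, here's a minimal inverted-index sketch (the tags and video IDs are invented for illustration). Looking items up by tag is the cheap part; the open-ended cost lies in assigning the tags in the first place:

```python
from collections import defaultdict

# Minimal inverted index: tag -> set of video IDs. Querying by tag is a
# trivial dictionary lookup; the hard, open-ended work is deciding which
# tags each item deserves, and re-tagging as meanings and policies shift.

index: defaultdict[str, set[str]] = defaultdict(set)

def tag_video(video_id: str, tags: list[str]) -> None:
    """Record the video under every tag it was assigned."""
    for t in tags:
        index[t.lower()].add(video_id)

tag_video("vid001", ["flat earth", "documentary"])
tag_video("vid002", ["cooking"])
tag_video("vid003", ["flat earth", "debunking"])

print(sorted(index["flat earth"]))  # ['vid001', 'vid003']
```

Note that the index can't tell vid001 (promoting a topic) apart from vid003 (debunking the same topic) - both carry the same tag - which is exactly the ambiguity that prevents content from being "tagged once and for all".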
 
There's a dangerous impulse to be more aware than anyone else, it's a point of pride in too many people, and it happens in too many areas of life, regardless of p*l*t*cs. There is nothing wrong with being informed, but if you've informed yourself with a load of rubbish that you want to disseminate to make yourself sound more knowledgeable than the average chap or chapess then it may well end in tears. Mind you, there's so much misinformation out there that we're drowning in it.

Once again we run into the same pesky principle: GIGO (Garbage In; Garbage Out).

Modern life (especially during this hell-year of 2020) is so hectic and over-burdened with unavoidable obligations in all directions that average folks have little or no time to ever catch up with themselves, much less maintain situation awareness on the world at large.

It's more convenient to absorb pre-digested and pre-spun crap that's being pushed to you than to invest time you don't have in getting solid information and connecting the dots for yourself. It's even more tempting when this occurs via the same demand-pull channel you use to access recreational content that helps to take one's mind off the increasingly stressful state of the world.
 