Can A.I. Detect Disinformation?

maximus otter

Recovering policeman
Joined
Aug 9, 2001
Messages
8,127
Reaction score
17,320
Points
309
Can AI Detect Disinformation? A New Special Operations Program May Find Out:

For all the U.S. military’s technical advantages over adversaries, it still struggles to counter disinformation. A new software tool to be developed for the U.S. Air Force and Special Operations Command, or SOCOM, may help change that.



“If you don’t compete in the information space, regardless of how good your operations are, your activities are, you will probably eat a shit sandwich of disinformation or false reporting later on,” Raymond “Tony” Thomas, a former SOCOM chief, said in an interview. “We certainly experienced that at the tactical level. That was the epiphany where we would have good raids, good strikes, etc. and the bad guys would spin it so fast that we would be eating collateral damage claims, etc. So the information space in that very tactical space is key.”

Primer [is] a company that on Thursday announced a Small Business Innovation Research contract to develop software over the next year to help analysts better—and much more quickly—survey the information landscape and hopefully detect false narratives that show up in the public space.

Primer’s neural network technology can scan large amounts of text and extract themes and other information based on the frequency and prominence of words and phrases. It’s the sort of thing that can be very useful if you have a lot of text you want to very quickly summarize in an accurate headline, a capability they demonstrate here. To train their headline-writing neural net, they used a corpus “of millions of publicly available document-title pairs: news articles and headlines” according to their paper on the subject.
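
Primer hasn't published the model itself, and the real thing is a trained neural network rather than simple word counting, but the basic idea of surfacing themes from word frequency and prominence can be sketched in a few lines of Python. Everything below (the stopword list, the "earlier words count more" weighting, the sample sentence) is invented purely as an illustration:

```python
# Illustrative only: Primer's system is a neural summariser trained on
# document-title pairs; this toy sketch just shows the general idea of
# surfacing themes by word frequency and prominence (earlier = more prominent).
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "on", "that", "is", "for"}

def extract_themes(text, top_n=5):
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    # Record the first position of each word; words near the start get a boost.
    first_pos = {w: i for i, w in reversed(list(enumerate(words)))}
    scored = {w: c * (2.0 if first_pos.get(w, len(words)) < len(words) * 0.2 else 1.0)
              for w, c in counts.items()}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

print(extract_themes("Armenia and Azerbaijan exchanged strikes on Monday. "
                     "Both governments blamed the other for the strikes."))
```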

The new contract will help Primer to build a platform “to automatically identify and assess suspected disinformation,” according to a press release from the company.

[We were shown] an example of where the technology is today, in the context of the emerging conflict between Armenia and Azerbaijan. The network can find news, sources and social media posts about the conflict and segment that information into groups, based on who is saying what about a particular event or incident, such as a military strike. This immediately gives the user a sense of what different groups and different governments are claiming. You can also see how those reporting entities have changed the way they’ve discussed the situation in question over time. Essentially, at present, the network gives you much of the same information that you might get from a newspaper story covering an incident or event.
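
As a very rough sketch of that "who is saying what" segmentation, assuming nothing about Primer's actual pipeline: cluster the claim texts, then list which sources fall into each cluster. The sources and claims here are made up:

```python
# Minimal, hypothetical sketch: cluster claim texts with TF-IDF + k-means,
# then report which sources land in each narrative cluster.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

claims = [
    ("Source A", "Strike destroyed a military depot"),
    ("Source B", "Strike destroyed a military depot near the border"),
    ("Source C", "Strike hit a civilian neighbourhood, many casualties"),
    ("Source D", "Civilian area hit, casualties reported"),
]

texts = [text for _, text in claims]
vectors = TfidfVectorizer().fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for cluster in sorted(set(labels)):
    sources = [src for (src, _), lab in zip(claims, labels) if lab == cluster]
    print(f"Narrative {cluster}: {sources}")
```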

The hope over the next 12 months is to add data that comes from operators responding and interacting with the product and the information it presents. Those users in SOCOM and the Air Force will be able to determine—and provide information on—which of the sources is the most credible, based on what they’ve seen. Their input will allow the network over time to develop a sense of which claims are more likely to be factual based on the source and what other sources are saying that’s different. “The next level of this system is one that’s… more predictive, allows you to see and make inferences that you can test along the way.”

Eventually, the platform will be able to award a particular claim or news item a sort of accuracy score based on those factors.
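
The article doesn't describe how the feedback or the score would actually be computed; one simple way to picture it is a running confirmed-vs-refuted tally per source, with a claim scored by the credibility-weighted balance of sources supporting versus disputing it. A toy sketch, with invented sources and numbers:

```python
# Hedged sketch only: Primer's actual method isn't public. Operator feedback
# becomes a per-source credibility weight, and a claim is scored by the
# credibility-weighted balance of supporting vs. disputing sources.
credibility = {}  # source -> [confirmed, refuted]

def record_feedback(source, confirmed):
    counts = credibility.setdefault(source, [1, 1])  # weak uniform prior
    counts[0 if confirmed else 1] += 1

def credibility_score(source):
    confirmed, refuted = credibility.get(source, [1, 1])
    return confirmed / (confirmed + refuted)

def accuracy_score(supporting, disputing):
    support = sum(credibility_score(s) for s in supporting)
    dispute = sum(credibility_score(s) for s in disputing)
    return support / (support + dispute) if (support + dispute) else 0.5

# Operators mark two items from Source A as borne out, one from Source C as false.
record_feedback("Source A", confirmed=True)
record_feedback("Source A", confirmed=True)
record_feedback("Source C", confirmed=False)
print(accuracy_score(supporting=["Source A"], disputing=["Source C"]))
```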

https://www.defenseone.com/technolo...ecial-operations-program-may-find-out/168972/

maximus otter
 

packshaud

Devoted Cultist
Joined
Nov 13, 2018
Messages
152
Reaction score
345
Points
64
Location
Brazil
The most impressive feature of such a tool is that it would not need to work to be useful. Brilliant!
 

Cochise

Priest of the cult of the Dog with the Broken Paw
Joined
Jun 17, 2011
Messages
6,711
Reaction score
9,172
Points
284
maximus otter said:
Can AI Detect Disinformation? A New Special Operations Program May Find Out: …
Obviously not. It will detect information the programmers don't like. Ain't no such animal as Artificial Intelligence.
 

Ascalon

Ephemeral Spectre
Joined
Jul 3, 2009
Messages
398
Reaction score
748
Points
109
To the OP: yes, it already does.

That's half of how we know it is there.

Not being flippant, but systems built to detect this sort of thing have existed for a while, as was revealed when Snowden went public.
 

SkepticalX

Ephemeral Spectre
Joined
Nov 19, 2009
Messages
272
Reaction score
530
Points
109
Location
Midwest, USA
I agree that machines are still unable to do anything they haven't been programmed to do. So, this software can only identify disinformation as defined by its programmers. True artificial intelligence has been an elusive goal, simply because we still don't have a complete handle on how human intelligence works.
 

Ascalon

Ephemeral Spectre
Joined
Jul 3, 2009
Messages
398
Reaction score
748
Points
109
SkepticalX said:
I agree that machines are still unable to do anything they haven't been programmed to do. So, this software can only identify disinformation as defined by its programmers. …
No, that's not quite how it works.

A machine learning system can be turned loose on a large data set and determine things for itself. Usually it just takes large amounts of relatively clean data to do it.
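
A toy example of that point: the snippet below never contains a rule like "'free' means spam"; whatever the classifier "knows" it gets from the training examples, so changing the data changes the behaviour. (The data and the crude word-counting scheme are invented for illustration, not taken from any real system.)

```python
# Nobody hard-codes the rule; the word counts learned from the data are the rule.
from collections import Counter

def train(examples):
    # Count how often each word appears under each label.
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    def score(label):
        total = sum(counts[label].values()) + 1
        return sum((counts[label][w] + 1) / total for w in text.lower().split())
    return max(("spam", "ham"), key=score)

data = [("free money now", "spam"), ("claim your free prize", "spam"),
        ("meeting moved to noon", "ham"), ("lunch at noon?", "ham")]
model = train(data)
print(classify(model, "free prize inside"))
```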

AI and ML are not quite the same thing, but an AI system will, to a certain extent, rely on ML to learn and discern things. The problem with AI comes when it is tasked with doing something with what it has learned through ML. Decision-making based on a set of parameters that may have been determined by a set of ML rules applied to an unstructured dataset is still tough for AI, but the narrower the focus of the AI's operation, the more easily it can be done. Not that any of it is easy, just in relative terms.
 

EnolaGaia

I knew the job was dangerous when I took it ...
Staff member
Joined
Jul 19, 2004
Messages
21,810
Reaction score
31,197
Points
309
Location
Out of Bounds
SkepticalX said:
I agree that machines are still unable to do anything they haven't been programmed to do. So, this software can only identify disinformation as defined by its programmers. …
It can detect and / or categorize patterns in the data it scans - that's it; that's all. What it detects and / or how it categorizes is entirely dependent on its configuration, data sources, programming and / or initial training. Where machine learning is involved, processing experience to date is also a factor.

Whether or how the patterns or 'hits' constitute or represent 'disinformation' is a separate matter.
 

EnolaGaia

I knew the job was dangerous when I took it ...
Staff member
Joined
Jul 19, 2004
Messages
21,810
Reaction score
31,197
Points
309
Location
Out of Bounds
Ascalon said:
AI and ML are not quite the same thing, but an AI system will, to a certain extent, rely on ML to learn and discern things. ...
Machine learning is not an intrinsic component of an AI, nor is it an operational criterion for classifying something as an AI.

However, a certain degree of what might be called 'learning' is intrinsic to neural net systems in the sense they can adapt their connections with training / processing. Even then, such 'learning' relates to performance as judged by an external party and it needn't have anything to do with extending or refining any ongoing 'knowledge' of the task being performed.
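
To make that "adapting connections" concrete, here is the smallest possible sketch: a single weight nudged repeatedly to reduce error on a handful of examples. It ends up near the right answer without anyone coding that answer in, yet it has no notion of the task beyond the numbers it is fed:

```python
# One "connection" (weight), adjusted by gradient steps on squared error.
# Real networks do the same thing across millions of weights.
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x, targets y = 2x
w = 0.0    # the single connection strength
lr = 0.05  # learning rate

for _ in range(200):
    for x, y in samples:
        error = w * x - y
        w -= lr * error * x  # nudge the weight to shrink the error

print(round(w, 3))  # converges near 2.0 without anyone hard-coding "2"
```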
 

Cochise

Priest of the cult of the Dog with the Broken Paw
Joined
Jun 17, 2011
Messages
6,711
Reaction score
9,172
Points
284
EnolaGaia said:
Machine learning is not an intrinsic component of an AI, nor is it an operational criterion for classifying something as an AI.

However, a certain degree of what might be called 'learning' is intrinsic to neural net systems in the sense they can adapt their connections with training / processing. Even then, such 'learning' relates to performance as judged by an external party and it needn't have anything to do with extending or refining any ongoing 'knowledge' of the task being performed.
My own personal programming hero went on from developing the database (that I am still to some degree responsible for) to neural networks. They are a way of 'educating' a program with non-programmer input. But that's just semantics: if you construct a system such that someone can 'program' it without technically writing code, that person is still programming.

Computers are incredibly dim. They can add (very quickly) and compare. They really can't do anything else; everything else they appear to do is in fact a replaying of recorded human logic.
 