OpenAI built a text generator so good, it’s considered too dangerous to release
""
A storm is brewing over a new language model, built by non-profit artificial intelligence research company OpenAI, which it says is so good at generating convincing, well-written text that it’s worried about potential abuse.
That’s angered some in the community, who have accused the company of reneging on a promise not to close off its research.
OpenAI said its new natural language model, GPT-2, was trained to predict the next word in a sample of 40 gigabytes of internet text. The end result was the system generating text that “adapts to the style and content of the conditioning text,” allowing the user to “generate realistic and coherent continuations about a topic of their choosing.” The model is a vast improvement on the first version by producing longer text with greater coherence.
But with every good application of the system, such as bots capable of better dialog and better speech recognition, the non-profit found several more, like generating fake news, impersonating people, or automating abusive or spam comments on social media.
To wit: when GPT-2 was tasked with writing a response to the prompt, “Recycling is good for the world, no, you could not be more wrong,” the machine spat back: ...
See whole article here:
https://techcrunch.com/2019/02/17/openai-text-generator-dangerous/?yptr=yahoo
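As an aside, for anyone wondering what "trained to predict the next word" actually means in practice: here is a deliberately crude, hypothetical sketch of the idea. A real model like GPT-2 uses a large neural network; this toy version just counts which word follows which.

```python
# A toy illustration of next-word prediction: count word pairs (bigrams) in
# a tiny corpus, then generate a continuation by sampling likely next words.
# Nothing here resembles GPT-2's actual architecture; it only shows the
# training objective ('predict the next word') in miniature.
import random
from collections import Counter, defaultdict

corpus = "recycling is good for the world recycling is good for everyone".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt_word, length=5):
    """Generate a continuation by repeatedly sampling a likely next word."""
    words = [prompt_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        next_words, counts = zip(*candidates.items())
        words.append(random.choices(next_words, weights=counts)[0])
    return " ".join(words)

print(continue_text("recycling"))  # e.g. "recycling is good for the world"
```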

Page also includes:
""US intelligence community says quantum computing and AI pose an ’emerging threat’ to national security"..............
 
:kersplat:
OpenAI built a text generator so good, it’s considered too dangerous to release ...
With respect, not buying into any of that.

 
Aye, fair enough, son, daughter and myself still watch that video and all conclude it's both remarkable and just a wee bit...

...unsettling?
 
Click on that link, thispersondoesnotexist.com/, and every earnest, work-worn, intelligent, vaguely-familiar face you will see, each time, is entirely computer-generated.

There are some rather-disturbing images that can be generated by that mechanism *whether true AI or not* (look in the backgrounds):
[attached images: download (1).jpeg, download (2).jpeg, download (4).jpeg, download (5).jpeg, download (6).jpeg]
 
There are some rather-disturbing images that can be generated by that mechanism *whether true AI or not*...
Another disturbing issue is how you come across an online profile of yourself and...

...none of this is true, presumably...?

[attached image: 2019221_14258399.jpg]
 
none of this is true, presumably...
[off-topic] By 'true', if you mean 'how much of this is personally-applicable?' the answer is zero. I am The Intruder...(erm)...I use the avatar of a certain famous admirable animated cow from 'The Magic Roundabout' tv programme popular in my childhood, but am not female. That character had a similar-but-different name. I was unaware of the Merovingian origins, thank you. And back to AI [/off-topic]

To be fair, wanting to destroy all humans is just an inherent property of AI and robots in general.
An unfair but forgivable conclusion. In fictional representations, any non-human automaton or thinking system is always depicted as having this subliminal desire, the corollary of this being Asimov's Laws of Robotics.

Stories involving benign or dutiful artificial humans just don't sell... in the same way that depicted minor dysfunctionality amongst real humans is so much more marketable than silent compliance.

I'm also not in the least bit convinced by reported homicidal tendencies being displayed by paired-up AIs. The whole "Now I am God" trope is too much of a self-fulfilling prophecy to be true... in my opinion.
 
First contact with AI...? You have just reminded me, in this very instance, as I write.

ZX Spectrum... chess game...

If you made a 'stupid' move, the game would respond with an onscreen message, 'You sure you want to do that?', or, 'Have you thought that through?' and so on...

My first recollection of ever being patronised by a 'machine'.
 
In response to Comfortably Numb, Kameltk said:

..To be fair, wanting to destroy all humans is just an inherent property of AI and robots in general..

It also appears to be spreading to Coast Guard personnel with extreme right-wing leanings.

INT21.
 
AI can't create cats.

https://www.livescience.com/64771-ai-generated-cats-from-hell.html

'Artificial intelligence (AI) recently tried to generate cat photos from scratch, and the results were cat-astrophic.

....

"The neural network doesn't understand how cats work. It doesn't understand how many legs they have. It isn't really clear on how many eyes they have or where all of their anatomy goes,"
 
Just watching The X-Files episode where Mulder and Scully have problems with AIs. Everything from an automated restaurant to a driverless car to a smart house to drones. There's a simple solution to their problems though ... Good fun.

Rm9sbG93ZXJz. Season 11: Episode 7.
 
An unfair but forgivable conclusion. In fictional representations, any non-human automaton or thinking system is always depicted as having this subliminal desire ...
That's a long (and serious) answer to my joke.
 
Some brilliant, sharp and downright cheerful posts above.

Thought occurreth... would not the first chess games, where there was intelligent interaction, count as well?

[attached image: 1550831404433747.jpg]
 
where there was intelligent interaction
But that's not intelligence: that is just a set of pre-programmed responses, whether multi-stage predictive or not. All these devices did was to apply a logical ruleset in response to a set of inputs.

Before anyone says "that's just the same as intelligence", it isn't. In particular, I would really doubt that these early chess-bots could learn scenarios or recognise patterns (or be able to then adapt their own strategies accordingly).

Intelligence is about conclusion arising from analysis. It is pattern/differentials recognition. Prediction of outcomes. Risk identification. Extraction of key facts from a range of data. Retention of record, and survival. Adaptation and Derivation.

All of which are exceedingly-difficult to recreate as truly interconnected systems...
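For what it's worth, the 'pre-programmed responses' point is easy to make concrete. A hypothetical sketch of the kind of canned-message logic an early chess program could have used (the move scoring and thresholds are entirely invented):

```python
# Hand-written rules score a move; fixed thresholds pick a canned message.
# No learning, no adaptation: just a logical ruleset applied to inputs.
def evaluate_move(material_change, exposes_king):
    """Score a move with invented, hand-written rules."""
    score = material_change
    if exposes_king:
        score -= 5
    return score

def canned_response(score):
    """Trigger one of the 'patronising' onscreen messages by threshold."""
    if score < -3:
        return "You sure you want to do that?"
    if score < 0:
        return "Have you thought that through?"
    return ""

# Losing two points of material while exposing the king trips the first rule.
print(canned_response(evaluate_move(material_change=-2, exposes_king=True)))
```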
 
But that's not intelligence: that is just a set of pre-programmed responses, whether multi-stage predictive or not. All these devices did was to apply a logical ruleset in response to a set of inputs.
Such an interesting reply to the question... mmm... needs further thought here!
 
But that's not intelligence: that is just a set of pre-programmed responses, whether multi-stage predictive or not. All these devices did was to apply a logical ruleset in response to a set of inputs. ...

Precisely. The later introduction of neural nets did nothing to change this basic restriction except to hide the 'rules' so that no one could inspect them. Neural nets are trained to match inputs to outputs. Training deficiencies are in effect the same as algorithmic rule deficiencies, but implemented in such a way they can be corrected solely by further training (or starting all over again).
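A toy illustration of that point, with everything invented for the purpose: train a single perceptron (pure Python, no libraries) on an incomplete set of input/output pairs, and its behaviour on an unseen input is exactly as undefined as a ruleset with a missing rule.

```python
# 'Neural nets are trained to match inputs to outputs': a single perceptron
# nudges its weights until its outputs match the training targets.
def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            output = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            error = target - output
            w0 += lr * error * x0
            w1 += lr * error * x1
            bias += lr * error
    return w0, w1, bias

# Train on only three of the four possible inputs of logical AND ...
samples = [((0, 0), 0), ((0, 1), 0), ((1, 1), 1)]
w0, w1, bias = train_perceptron(samples)

# ... and the unseen input (1, 0) gets whatever answer the training happened
# to leave behind (here: 1, the wrong answer for AND) -- a training
# deficiency, exactly analogous to a missing rule in a hand-written ruleset.
print(1 if w0 * 1 + w1 * 0 + bias > 0 else 0)
```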
 
Ermintruder appears to be implying that it really boils down to 'IF-THEN' choices. These just rely upon a list to compare against.

Generally speaking ...

All AI implementations are essentially hyper-sophisticated 'IF'-'THEN' engines.

Symbolic AI is accomplished by inserting an inference routine operating against a 'knowledge base' (rules, etc.) between the 'IF' and the 'THEN'.

Neural-based AI is accomplished by routing the 'IF's' into a neural net (or equivalent) which will select / trigger a particular 'THEN' based on its training to date.

In both schemes 'machine learning' is sometimes added by having additional routines change the knowledge base, the inference engine, and / or the neural net's training parameters.
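The symbolic case can be sketched in a few lines; the knowledge base below is an invented example, and the neural case is essentially the perceptron sketched earlier in the thread.

```python
# Symbolic AI in miniature: an inference routine sits between the 'IF'
# (observed facts) and the 'THEN' (conclusion), consulting a knowledge base.
knowledge_base = [
    ({"has_fur", "says_meow"}, "cat"),
    ({"has_fur", "says_woof"}, "dog"),
    ({"has_feathers"}, "bird"),
]

def infer(facts):
    """Fire the first rule whose conditions are all present in the facts."""
    for conditions, conclusion in knowledge_base:
        if conditions <= facts:   # IF every condition holds ...
            return conclusion     # ... THEN draw the rule's conclusion.
    return "unknown"

print(infer({"has_fur", "says_meow", "four_legs"}))  # -> "cat"

# 'Machine learning' in this scheme amounts to routines that edit the
# knowledge base itself, e.g. appending a new rule:
knowledge_base.append(({"has_scales"}, "fish"))
```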
 
Generally speaking... All AI implementations are essentially hyper-sophisticated 'IF'-'THEN' engines ...
After much thought on all of this...

... a post deserving of same.

So... how does it work, when YouTube suggests some music, ostensibly unrelated to any previous searches/plays, and it's something you love and hadn't heard for years?
 
Further thoughts... the simple explanation is surely that, sometime in the past, the YouTube software kept a record of an older 'visit' and is programmed to seek any connection?
 
So... how does it work, when YouTube suggests some music, ostensibly unrelated to any previous searches/plays, and it's something you love and hadn't heard for years? ...

There are any number of ways to suggest a next tune, and selecting a suggestion need not require anything more than straightforward algorithmic pattern matching against a database, perhaps using a scoring protocol to merge partial results.

Whether this requires - or qualifies as - AI depends on the approach and where one sets the threshold for being AI.

For example, YouTube's earlier-established video recommendation system prioritizes popularity based on number of views, with a small percentage of recommendations drawn from a pool of offerings with few views (most probably qualified with respect to elapsed time since introduction). The search area is typically circumscribed in accordance with your previous viewing history. If you aren't associated with a substantial data set of previous views, the system pretty obviously seems to default to everyone's collective viewing history (i.e., global statistics).

YouTube's more recent music recommendation system seems to follow the same general motif, with a noticeable tendency to increasingly recommend music that's tied to videos from their own inventory.

It's a huge, but finite and well-organized, data mass from which selections are made based on a similarly finite and orderly model or protocol.

For the recommendation system to need to 'seek connections', it would need to be negotiating selections based on multiple - possibly deep and fine-grained - features or attributes. I seriously doubt it's doing much more than simply running the statistics (your own or everybody's) and picking the offerings in a very pro forma fashion or simply traversing an established 'map' (figuratively speaking) of the content data space being surveyed for presentation.
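A toy sketch of that 'scoring protocol merging partial results', with every video, view count and weight invented for illustration:

```python
# Score each unwatched candidate by merging a global-popularity term
# (everyone's statistics) with an affinity term (overlap with your own
# viewing history), then recommend the top scorer. Purely hypothetical data.
catalogue = {
    "rory_gallagher_live": {"views": 90_000, "tags": {"blues", "rock", "70s"}},
    "kate_bush_wuthering": {"views": 500_000, "tags": {"art-pop", "70s"}},
    "cat_video_42": {"views": 2_000_000, "tags": {"cats"}},
}

def recommend(history_tags, watched):
    def score(name):
        video = catalogue[name]
        popularity = video["views"] / 1_000_000       # global statistics
        affinity = len(video["tags"] & history_tags)  # match to your history
        return popularity + 2.0 * affinity            # merge partial results
    candidates = [name for name in catalogue if name not in watched]
    return max(candidates, key=score)

# A listener whose history shares the '70s' tag with Kate Bush will see her
# outrank the far more popular (but unrelated) cat video.
print(recommend({"blues", "rock", "70s"}, watched={"rory_gallagher_live"}))
```

No 'seeking of connections' needed: overlapping statistics alone produce the 'how did you know that?' effect.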
 
There are any number of ways to suggest a next tune...
Massive appreciation! ;)

So... that makes sense of, say, why I've lately been playing Rory Gallagher and next time YouTube suggests Kate Bush, and I'm thinking, 'Yesssss... how did you know that?'... :)
 
Probably because masses of Rory fans also listen to Kate on YouTube. Birds of a feather flock together. They pick up the data and use it to keep you on their site, clicking away in a pattern thousands have before.
 