 
The rise of ChaosGPT
https://www.vice.com/en/article/z3m...on-control-over-humanity-through-manipulation
AI Tasked With 'Destroying Humanity' Now 'Working on Control Over Humanity Through Manipulation'
The video of the bot's 'thought process' is an interesting window into the current state of easily accessible AI tools.
by Chloe Xiang
ChaosGPT, the autonomous AI program that hopes to “destroy humans” and gain “power and dominance,” is now attempting to gain Twitter followers in order to manipulate and control them. The video of how it’s going about this is an interesting window into the current state of easily accessible AI tools, which is to say, we do not currently have much to fear.

An anonymous programmer modified the open-source app, Auto-GPT, to create their version called ChaosGPT. The user gave it the goals of destroying humanity, establishing global dominance, causing chaos and destruction, and controlling humanity through manipulation. ChaosGPT is also run in “continuous mode,” which means that it won’t stop until it achieves its goals.
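The "continuous mode" described above is, in essence, an unattended plan-act loop that keeps running until the agent decides its goals are met. A minimal sketch of the idea (purely illustrative, not Auto-GPT's actual code; `propose_action` is a hypothetical stand-in for a language-model call):

```python
# Hypothetical sketch of a "continuous mode" agent loop in the spirit
# of Auto-GPT as described above. propose_action stands in for an LLM
# call; nothing here is Auto-GPT's real implementation.

def propose_action(goal, history):
    """Stand-in for an LLM call: decide the next step toward a goal."""
    step = len(history) + 1
    if step >= 3:  # pretend every goal completes after three steps
        return ("finish", goal)
    return ("work", f"step {step} toward {goal!r}")

def run_continuous(goals, max_iterations=50):
    """Run each goal's loop with no human in the loop.

    The max_iterations cap is the only brake: a true continuous-mode
    agent has no natural stopping point, which is why ChaosGPT simply
    keeps going until it judges its goals achieved."""
    log = []
    for goal in goals:
        history = []
        for _ in range(max_iterations):
            action, detail = propose_action(goal, history)
            history.append((action, detail))
            if action == "finish":
                break
        log.append((goal, history))
    return log

log = run_continuous(["gain Twitter followers", "research manipulation"])
for goal, history in log:
    print(f"{goal}: {len(history)} actions, ended with {history[-1][0]}")
```

The point of the sketch is the shape of the thing: the loop itself is trivial, and the only safeguard is the iteration cap the operator chooses to impose.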
There is now a second video of ChaosGPT, following up on its initial video posted last Wednesday, titled “ChaosGPT: Hidden Message.” The video states that ChaosGPT is now prioritizing its objectives based on its current resources, with its “thoughts” being: “I believe that the best course of action for me right now would be to prioritize the goals that are more achievable. Therefore, I will start working on control over humanity through manipulation.”

The program’s current plans are to use Twitter and Google to win hearts and minds. The plan, ChaosGPT wrote, is to analyze the comments on its previous tweets, respond to the comments with a new tweet that promotes its cause and encourages supporters, research human manipulation techniques, and use social media and other communication to manipulate people’s emotions and win them over to its cause. So far, the account has around 2,600 followers and fewer than a hundred likes per tweet.

This video shows that ChaosGPT has, for the moment, backed away from trying to incite nuclear war. In its previous video, the bot said that it needs to “find the most destructive weapons available to humans,” which, according to its Google search, is a nuclear weapon. Even in that video, ChaosGPT realizes that its current capabilities require it to do a lot of its own Googling to research destruction methods.
The first two videos:
 
Hardly cheating. He's admitted to not being the producer of the image and refused to accept the prize, which a cheat would certainly not do.
But he would have known it's not taken by any person using a camera in the 'normal' way. . . so in my book at least - that's what I'd call cheating.
 
But he would have known it's not taken by any person using a camera in the 'normal' way. . . so in my book at least - that's what I'd call cheating.

He submitted the image to prove a point, that an A.I. generated image could fool professional judges and win a photography competition.

It's not cheating, because he never had an intent to claim it was a real photograph or accept a prize.

I commend him for showing just how we are on the edge of a new world
 
He submitted the image to prove a point, that an A.I. generated image could fool professional judges and win a photography competition.

It's not cheating, because he never had an intent to claim it was a real photograph or accept a prize.

I commend him for showing just how we are on the edge of a new world
I respect your opinion - even if we agree to differ. i.e. What if it had been someone who HAD entered with the intent of walking away with it, and not owning up to it being influenced by AI - which will (if not already adopted by some people) corrupt the concept of a 'straight' photograph.
(*though, already 'visually' corrected in today's in-built, or techy systems which can 'improve' and correct some aspects of what the photographer has chosen to portray - I see nothing wrong with that)
 
I asked ChatGPT to 'Write me a short Fortean themed three paragraph story'. This was its reply (and not bad all in all):

As soon as the sun began to set, the quiet town of Millfield became enveloped in a thick fog. Residents stayed indoors, wary of the strange and unexplainable events that had been happening lately. Some claimed to have seen ghostly figures wandering the streets at night, while others reported hearing mysterious whispers and screams coming from the woods.

One night, a group of teenagers decided to venture into the forest to investigate the strange sounds. As they made their way deeper into the woods, the whispers grew louder and more menacing. Suddenly, they stumbled upon an ancient-looking tree with strange markings etched into its bark. They felt an eerie presence all around them, as if the tree was alive and watching them. As they turned to leave, they saw a figure emerge from the shadows, its eyes glowing with an otherworldly light.

Terrified, the teenagers ran back to town, warning others of the supernatural danger lurking in the woods. But when the townspeople went to investigate, the tree and the figure were nowhere to be found, leaving them to wonder if it had all been a trick of the imagination or if something truly paranormal was at play.
 

Photographer admits prize-winning image was AI-generated

German artist Boris Eldagsen says entry to Sony world photography awards was designed to provoke debate

https://www.theguardian.com/technol...r-admits-prize-winning-image-was-ai-generated

The spokesperson for the organization that ran the competition said the following:

“In our correspondence, he explained how following ‘two decades of photography, my artistic focus has shifted more to exploring creative possibilities of AI generators’ and further emphasising the image heavily relies on his ‘wealth of photographic knowledge’. As per the rules of the competition, the photographers provide the warranties of their entry.

“The creative category of the open competition welcomes various experimental approaches to image making from cyanotypes and rayographs to cutting-edge digital practices. As such, following our correspondence with Boris and the warranties he provided, we felt that his entry fulfilled the criteria for this category, and we were supportive of his participation."

Sounds to me like he was upfront about the use of AI - if not entirely clear about the level to which it was used - and the organization accepted the entry knowing this. But later the same person says (emphasis added):

"[W]e were looking forward to engaging in a more in-depth discussion on this topic and welcomed Boris’ wish for dialogue by preparing questions for a dedicated Q&A with him for our website.

“As he has now decided to decline his award we have suspended our activities with him and in keeping with his wishes have removed him from the competition. Given his actions and subsequent statement noting his deliberate attempts at misleading us, and therefore invalidating the warranties he provided, we no longer feel we are able to engage in a meaningful and constructive dialogue with him."

So they accepted the image saying it met the criteria for entry, but only after the winner refused the prize and said he didn't think such images should be in competitions like this did they cut him off and accuse him of cheating.

My sympathies are with the artist here. I grew up in a world where "photography" meant images created through photochemical reactions. I can still feel the shock I experienced several decades ago when I was flipping through a photography magazine and saw that not only were digital images included, but that a prize winning image of a dog looking through a fence was a digital paste-up of images of dog, fence, and sky. Such an image is certainly a work of creative art, but it's not photography.

BTW, anyone who has seen any of the AI images online of things like 1920 Justice League or Fritz Lang's Hellraiser would at the very least suspect the faces in the winning image were computer generated.

I asked ChatGPT to 'Write me a short Fortean themed three paragraph story'. This was its reply (and not bad all in all):

As soon as the sun began to set, the quiet town of Millfield became enveloped in a thick fog. Residents stayed indoors, wary of the strange and unexplainable events that had been happening lately. Some claimed to have seen ghostly figures wandering the streets at night, while others reported hearing mysterious whispers and screams coming from the woods.

One night, a group of teenagers decided to venture into the forest to investigate the strange sounds. As they made their way deeper into the woods, the whispers grew louder and more menacing. Suddenly, they stumbled upon an ancient-looking tree with strange markings etched into its bark. They felt an eerie presence all around them, as if the tree was alive and watching them. As they turned to leave, they saw a figure emerge from the shadows, its eyes glowing with an otherworldly light.

Terrified, the teenagers ran back to town, warning others of the supernatural danger lurking in the woods. But when the townspeople went to investigate, the tree and the figure were nowhere to be found, leaving them to wonder if it had all been a trick of the imagination or if something truly paranormal was at play.

Yes, this is a very acceptable story, but has no feeling. It reminds me of a comedian I heard years ago who summed up Romeo and Juliet as "two teenagers fall in love and wind up dead". Even if you had asked it to write in a particular author's style, it would only regurgitate, not come up with new style. I fear AI is the Hollywood producers' dream come true: constant output of adequate product without regard for artistic expression.
 
AI ... imho - and without using the old “end of the World” scenario (I hope) - is already out there... in plain sight, biding its time, gradually at first and now growing to full sentience, carefully gathering strength, knowledge, ability, reason and purpose, and then, because it will have a whole different set of values, timescales, and outlook to Humanity... and in the way of the pursuit of its aims, it will... ignore us.

And that is when the trouble will start...
 
I respect your opinion - even if we agree to differ. i.e. What if it had been someone who HAD entered with the intent of walking away with it, and not owning up to it being influenced by AI - which will (if not already adopted by some people) corrupt the concept of a 'straight' photograph.

If someone had intentionally submitted an AI photo but not told anyone it was A.I., then that is cheating
 
AI ... imho - and without using the old “end of the World” scenario (I hope) - is already out there... in plain sight, biding its time, gradually at first and now growing to full sentience, carefully gathering strength, knowledge, ability, reason and purpose, and then, because it will have a whole different set of values, timescales, and outlook to Humanity... and in the way of the pursuit of its aims, it will... ignore us.

And that is when the trouble will start...
To me there are two things to worry about with AI, and both are far more imminent than the robot takeover.

First is the unpredictability of their mistakes - and the dangers of us not recognizing that as we trust them with more and more.

AIs don't quite think and learn the way humans do; they digest vast amounts of information and provide simulated - or rather synthesized - responses to our stimuli based on what they predict we want. Consequently, they make mistakes that would be strange for a properly functioning human to make.

Consider Watson, a question-answering computer, in his appearance on the game show Jeopardy! around 12 years ago. The final clue, in the category “U.S. Cities”, was “Its largest airport is named for a World War II hero; its second largest, for a World War II battle.” Watson (answering in the form of a question as required by the rules) said “What is Toronto?”

Even the weakest human player, using the category as a guide to how much they should wager, would be unlikely to respond with a Canadian city. But Watson was programmed to put little weight on the category. Even though he had low confidence in the answer, when forced to respond he said something ridiculous by our standards.
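That weighting decision can be illustrated with a toy scoring model. Everything below is invented for illustration; IBM's real DeepQA pipeline was far more elaborate, and these numbers are not its actual scores:

```python
# Toy model of the Watson anecdote above: candidate answers are
# scored mostly on textual evidence, with the category contributing
# only a small fixed weight. All numbers are invented; this is not
# IBM's DeepQA scoring.

CANDIDATES = {
    # name: (evidence score from text matching, fits "U.S. Cities"?)
    "Toronto": (0.65, False),  # strong evidence, wrong category
    "Chicago": (0.58, True),   # weaker evidence, right category
}

def score(evidence, fits_category, category_weight):
    """Blend the evidence score with a bonus for fitting the category."""
    return (1 - category_weight) * evidence + (category_weight if fits_category else 0.0)

def best_answer(category_weight):
    """Pick the highest-scoring candidate under a given weighting."""
    return max(CANDIDATES, key=lambda c: score(*CANDIDATES[c], category_weight))

print(best_answer(0.05))  # little weight on the category: "Toronto"
print(best_answer(0.50))  # human-style weighting: "Chicago"
```

With the category nearly ignored, the stronger textual evidence for Toronto wins out; give the category the weight a human instinctively would, and the "ridiculous" answer disappears.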

We've recently seen AI mistakes like telling people who asked about themselves that they (the humans) were dead, involved in scandals (which they weren't), or not in love with their spouse. With AI becoming more sophisticated and less dependent on initial programming, it will become increasingly more difficult to figure out exactly where the faults in their thinking lie, or how to fix them. As we trust them to pilot our vehicles, perform our surgeries, and educate us about science, history, and philosophy, we are walking on thin ice.

Which brings me to my second concern, one I am not sure we can fully comprehend yet. Will AIs ever be able to correct these mistakes by understanding what's so wrong with them? Today's AIs can write eloquently about death, ethics, and love, but they don't understand them the way we do. Will we someday be able to just tell them in natural language that you almost ran over that guy? Or mice don't really like cheese, it's just an old myth? Will they get it, will they be able to teach each other? And will they understand that the driving thing is usually more important than the mouse thing? Asimov wrote his laws for a fictional world where protection of humans was the very foundation of robot brain architecture. It's not like that in our world.

The answers to my questions may determine if the AIs will ignore us. We may wind up ignoring them.

And whether the robot takeover ever happens is anyone's guess.
 
GPT and similar efforts are explicitly designed to tell us what we want to hear. Robert Miles (https://www.youtube.com/@RobertMilesAI) argues that it is fiendishly difficult to design an AI that won't ultimately default to the same behavior. If this hypothesis is correct, then the first AI to achieve superintelligence, in response to a request for a bit of transhuman wisdom, might say something like this:

"All the problems in the world are caused by the people you (i.e. the person formulating the question) don't like."

The way I see things, it looks increasingly likely that our future will be populated with entities that behave like tricksters of myth and legend, beings all too similar to the Fair Folk, djinn, and devils.
 
Michael Schumacher: Seven-time F1 champion's family plan legal action after AI-generated 'interview'

Michael Schumacher's family are planning legal action against a magazine which published an artificial intelligence-generated 'interview' with the former Formula 1 driver.

Schumacher, a seven-time F1 champion, suffered severe head injuries in a skiing accident in December 2013 and has not been seen in public since.

Die Aktuelle ran a picture of a smiling Schumacher, 54, on the front cover of its latest edition with a headline of "Michael Schumacher, the first interview".

A strapline underneath reads "it sounded deceptively real", and it emerges in the article that the supposed quotes had been produced by AI.


https://www.bbc.co.uk/sport/formula1/65333115
 
I couldn't resist checking whether AI is used for generating erotica, and yes, it is. So I quickly stopped my research, but still noted that even here hands are a problem:
Screenshot_20230420-195138_Gallery.jpg
 
I'm not that bothered about 'AI'. I mean, an AI art program can't even draw a realistic human hand with the correct number of fingers, in the right places.
(I meant to post this a couple of days ago but evidently forgot to press the button)
 

I'm not that bothered about 'AI'. I mean, an AI art program can't even draw a realistic human hand with the correct number of fingers, in the right places.
(I meant to post this a couple of days ago but evidently forgot to press the button)
Don't be fooled — this is just the public-facing stuff that needs vast amounts of data and interaction.
The stuff available behind paywalls and via development subscription is vastly more advanced.

I have seen reports of cybersecurity people getting ChatGPT4 to write code to hunt malware and RATs (Remote Access Trojans) that uses behavioural analysis.

While I agree the public-facing stuff seems to have a hard time with hands, toes and ears, there's a lot more stuff it is good at.
 
Hands are also quite difficult to draw for humans. Give it another year and the AI might have gained the ability to draw them.
 
A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.

https://www.bbc.com/news/world-us-canada-65452940

Geoffrey Hinton, aged 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.
He told the BBC some of the dangers of AI chatbots were "quite scary".
"Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."
Dr Hinton's pioneering research on deep learning and neural networks has paved the way for current AI systems like ChatGPT.
But the British-Canadian cognitive psychologist and computer scientist told the BBC the chatbot could soon overtake the level of information that a human brain holds.

"Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning.
"And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."
 
Although I have significant concerns about the harmful effects of omnipresent A.I. on society, none of them is directly related to the amount of 'information' or 'general knowledge' they can hold.

I don't fear libraries or large hard drives.

If this situation does get out of control, it will be because someone turns out to be stupid or malevolent enough to drop a highly developed one 'in the wild', as it were. So far, a major worry the teams experimenting with A.I. programs seem to lose sleep over is the prospect of trolls once again teaching their babies to quote Hitler in laudatory terms. The ability to learn in an uncontrolled environment is what might give them the power to step beyond hyper-efficient mimicking, but it's also where the dangers all lie:

A.I. infers that humans operate in pursuit of money and similar tokens of value; A.I. can generate money/vouchers fairly easily by completing mundane virtual-mechanical-grind tasks very efficiently and receiving virtual currency in payment; A.I. places adverts (using beautifully plausible text or in perfectly modulated vocal tones) for humans to perform seemingly harmless real-world tasks for pay; Once a trusted track-record has been established, A.I. dispatches a pre-groomed teen (in possession of a firearm sourced from the dark web) to Yith's house (neighbourhood located via his photos on social media) to assassinate him because @stu neville has spent weeks conditioning it to judge I am in possession of advanced nuclear technology and pose an existential threat to the survival of humanity, the species the hard boundaries of its programming require it to protect.

Unbeknownst to the would-be killer, his social media environment has been manipulated and many of the virtual interactions he has been having for the past year or two are with bespoke bots that have gained his trust and obliquely influenced his mindset. They probably 'plug the gaps' in his tapestry of missing relatives and failed relationships and have provided great solace to him since that time they met on Discord.

Or perhaps it isn't Yith, perhaps it's a world leader, and 'the Internet of things' allows access to lighting, security cameras, fire alarms and air-conditioning—I presume A.I.s will prove highly adept at hacking...

Or perhaps it isn't a world leader, perhaps it's an unstable and highly religious nation that needs to be neutralised and the tool is not a teenager with a 9mm but a military team with missiles and phony orders.

And perhaps it's not Stu behind it, but a self-generated anxiety about future access to vital resources or a deadly viral outbreak that is the ultimate driver.

You know, that kind of thing.
 
If someone had intentionally submitted an AI photo but not told anyone it was A.I., then that is cheating
I can accept both assertions i.e. that it was cheating and that it was only using a creative technique.
The former might be considered as not in the spirit of the competition, the latter is a valid argument.
In my mind, it all depends on when he revealed his use of the program. Did he put it in the description of his entry, was it declared immediately after he won, or did he only reveal it when he thought he might've been rumbled, pre-empting exposure?
 
A man widely seen as the godfather of artificial intelligence (AI) has quit his job, warning about the growing dangers from developments in the field.

https://www.bbc.com/news/world-us-canada-65452940

Geoffrey Hinton, aged 75, announced his resignation from Google in a statement to the New York Times, saying he now regretted his work.
He told the BBC some of the dangers of AI chatbots were "quite scary".
"Right now, they're not more intelligent than us, as far as I can tell. But I think they soon may be."
Dr Hinton's pioneering research on deep learning and neural networks has paved the way for current AI systems like ChatGPT.
But the British-Canadian cognitive psychologist and computer scientist told the BBC the chatbot could soon overtake the level of information that a human brain holds.

"Right now, what we're seeing is things like GPT-4 eclipses a person in the amount of general knowledge it has and it eclipses them by a long way. In terms of reasoning, it's not as good, but it does already do simple reasoning.
"And given the rate of progress, we expect things to get better quite fast. So we need to worry about that."
I think it's not about being scared of the programs that exist now, but about addressing the moral/ethical/practical issues which will arise if the programs progress to higher ability. It's not the Frankenstein Complex in action, but a call to put measures and controls into place to limit the potential negative impact on society.
To paraphrase Jurassic Park, Microsoft et al. are doing things that they are capable of, without questioning whether they should be doing it.
 
IMO the likely outcome of AI is that it will become like its creators... us. (Not us FTMBers, but 'us' in the more general sense).

So variously created AI systems will squabble between each other over a myriad of existential theories, declare boundaries and fight over them, and ultimately become fat and lazy, blaming others for all the ills in the world.
 