I think there should be research done into creating 'artificial stupidity'.
We might learn a lot more about why people are so dumb from it.
Even the 'dumb' can be highly intelligent!
 
I recall in the late '80s one of the science magazines put out an issue that addressed various elements of Cyberpunk in the zeitgeist at the time.

One article made a convincing case precisely that learning more about why and how people make incorrect intuitive leaps may provide insight into how to achieve correct ones more consistently, with an eye toward applying the results to artificial intelligence.

So, maybe learning how to create artificial stupidity really is the best path to artificial intelligence.
 
Another "AI" predictive tool to worry about.

Over the past seven years, Allegheny County Department of Human Services workers have frequently employed an AI predictive risk modeling program to aid in assessing children’s risk factors for being placed into the greater Pittsburgh area’s foster care system. In recent months, however, the underlying algorithms behind the Allegheny Family Screening Tool (AFST) have received increased scrutiny over their opaque design, taking into account predictive AI tools’ longstanding racial, class, and gender-based biases.

Previous delving into the Allegheny Family Screening Tool’s algorithm by the Associated Press revealed certain data points could be interpreted as stand-in descriptions for racial groups. But now it appears the AFST could also be affecting families within the disabled community, as well as families with a history of mental health conditions. And the Justice Department is taking notice.

According to a new report published today by the Associated Press, multiple formal complaints regarding the AFST have been filed with the Justice Dept.’s Civil Rights Division, citing the AP’s prior investigations into its potential problems. Anonymous sources within the Justice Dept. say officials are concerned that the AFST’s overreliance on potentially skewed historical data risks “automating past inequalities,” particularly longstanding biases against people with disabilities and mental health problems.

The AP explains that the Allegheny Family Screening Tool utilizes a “pioneering” AI program designed to supposedly help overworked social workers in the greater Pittsburgh area determine which families require further investigation regarding child welfare claims. More specifically, the tool was crafted to aid in predicting the potential risk of a child being placed into foster care within two years of an investigation into their family environment. ...

https://www.popsci.com/technology/allegheny-pennsylvania-ai-child-welfare/
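The article doesn't publish the AFST's actual model, but screening tools of this sort generally reduce to a weighted score over family-history features. Here's a toy sketch of that shape — every feature name and weight below is invented — which also shows why the skewed-history worry bites: if mental-health or benefits records carry positive weight, the bias is baked in.

```python
# Toy illustration of how a predictive risk screening tool scores a
# referral. This is NOT the AFST's actual model; all features and
# weights are invented to show the general shape of such systems.
import math

# Hypothetical features drawn from county records for a referred family.
referral = {
    "prior_referrals": 3,        # earlier calls about the family
    "public_benefits_use": 1,    # 1 if the family uses public assistance
    "mental_health_records": 1,  # 1 if county mental-health data exists
    "parent_age_at_birth": 22,
}

# Hypothetical learned weights. If the historical data over-represents
# poor or disabled families, weights like these encode that bias.
weights = {
    "prior_referrals": 0.6,
    "public_benefits_use": 0.8,
    "mental_health_records": 0.7,
    "parent_age_at_birth": -0.02,
}
BIAS = -2.0

def risk_score(features: dict, weights: dict, bias: float) -> float:
    """Logistic score in [0, 1]: higher means 'flag for investigation'."""
    z = bias + sum(weights[k] * features[k] for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

print(f"screening score: {risk_score(referral, weights, BIAS):.2f}")  # ~0.70
```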
 
Another "AI" predictive toll to worry about.

Over the past seven years, Allegheny County Department of Human Services workers have frequently employed an AI predictive risk modeling program to aid in assessing children’s risk factors for being placed into the greater Pittsburgh area’s foster care system. In recent months, however, the underlying algorithms behind the Allegheny Family Screening Tool (AFST) have received increased scrutiny over their opaque design, taking into account predictive AI tools’ longstanding racial, class, and gender-based biases.

Previous delving into the Allegheny Family Screening Tool’s algorithm by the Associate Press revealed certain data points could be interpreted as stand-in descriptions for racial groups. But now it appears the AFST could also be affecting families within the disabled community as well as families with a history of mental health conditions. And the Justice Department is taking notice.

According to a new report published today from the Associated Press, multiple formal complaints regarding the AFST have been filed via the Justice Dept.’s Civil Rights Division, citing the AP’s prior investigations into its potential problems. Anonymous sources within the Justice Dept. say officials are concerned that the AFST’s overreliance on potentially skewed historical data risks “automating past inequalities,” particularly long standing biases against people with disabilities and mental health problems.

The AP explains the Allegheny Family Screening Tool utilizes a “pioneering” AI program designed to supposedly help overworked social workers in the greater Pittsburgh area determine which families require further investigation regarding child welfare claims. More specifically, the tool was crafted to aid in predicting the potential risk of a child being placed into foster care within two years of following an investigation into their family environment. ...

https://www.popsci.com/technology/allegheny-pennsylvania-ai-child-welfare/
Same old same old. No-one screens the training data...
 
AI Used to Uncover Author of Lost Centuries-Old Spanish Play

Pioneering AI technology has been used in Spain to identify a previously unknown play by the revered Spanish playwright and poet Lope de Vega.

Last week, researchers at Spain’s National Library in Madrid announced that they had verified the authorship of the nearly 400-year-old play, which Lope de Vega is believed to have written a few years before his death in 1635.

The play, titled La francesa Laura (The Frenchwoman Laura), tells of an innocent wife who, under the weight of suspicion, is nearly sacrificed in an honor killing but survives.

La francesa Laura was identified after two literature experts, Germán Vega at the University of Valladolid, and Álvaro Cuéllar, who now sits in the department of Romance studies at the University of Vienna, launched a project titled Etso that uses AI to analyze and establish the authors of anonymous or misattributed Golden Age plays.

Some 1,300 such plays were transcribed using a transcription platform called Transkribus, then compared with the language and style of 2,800 digitized works in the Etso database via a different program, Stylo. It was at this point that the play was shown to bear similar expressions to other Lope de Vega plays.
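The article doesn't describe Stylo's internals, but stylometric attribution of this kind typically compares the relative frequencies of very common words across texts (Burrows' Delta over the few hundred most frequent words is the classic method). A minimal sketch of the idea, with invented snippets standing in for the transcribed plays:

```python
# Minimal sketch of stylometric comparison: represent each text by the
# relative frequencies of very common ("function") words, then measure
# similarity between those profiles. Real tools like Stylo use
# refinements such as Burrows' Delta over hundreds of frequent words.
from collections import Counter
import math

FUNCTION_WORDS = ["que", "de", "la", "el", "y", "en", "a", "no", "se", "los"]

def profile(text: str) -> list:
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(p: list, q: list) -> float:
    dot = sum(a * b for a, b in zip(p, q))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in q))
    return dot / norm if norm else 0.0

# Invented stand-ins for digitized plays by known authors.
known_plays = {
    "Lope de Vega": "que en la corte no se vive sin el amor y la fortuna",
    "Calderon": "la vida es sueno y los suenos suenos son",
}
anonymous_play = "que la dama en la corte no se fia del amor"

target = profile(anonymous_play)
ranked = sorted(known_plays.items(),
                key=lambda kv: cosine(profile(kv[1]), target),
                reverse=True)
print("closest stylistic match:", ranked[0][0])
```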

Vega, the researcher, said, “After it had transcribed the 1,300 texts, the computer noticed that one of them was similar to 100 or so works – almost all of which were by Lope.”

https://uk.pcmag.com/news/145392/ai-used-to-uncover-author-of-lost-centuries-old-spanish-play

maximus otter
 
There was an interesting article on Higgypop recently. (Yes, I know, going from the Forteana Forum to Higgypop - from the sublime to the ridiculous! But some of us probably follow it.)

Anyway, Higgypop recently published two articles, one on presentiment and one on 'paranthropology and ghost hunting'.

They have now revealed that both of these articles were written by ChatGPT, which even selected the graphics at the beginning.

https://www.higgypop.com/news/putting-ai-to-the-paranormal-test/

I read the presentiment article. I didn't think it was particularly well written. It had that curiously 'numbing' effect that some blocks of text seem to have; I think it's caused by the fact that each sentence is grammatically correct and makes sense on its own, but the links between consecutive sentences get a bit loose, and the train of thought becomes hard to follow.

But on the other hand I've seen human produced text that was worse.

I wonder how long it will be before we have AI posters on this forum?
 
The two ChatGPT articles read like high school essays. The sort of waffle found on clickbaitey sites that you quickly learn to avoid.

I wouldn't have spotted they were written by AI, just that they were crap. :chuckle:
 
Bing’s ChatGPT brain is behaving so oddly that Microsoft may rein it in.
"(...) Microsoft wasn’t quite ready for the very strange, bordering unsettling, experiences some users were having after trying to have more informal, personal conversations with the chatbot. This included the Chatbot making things up and throwing tantrums when called out on a mistake or just having a full on existential crisis.
In light of the bizarre responses, Microsoft is considering putting in new safeguarding protocols and tweaks to curtail these strange, sometimes too-human responses."

https://www.techradar.com/news/bing...having-so-oddly-that-microsoft-may-rein-it-in
 
the term "hallucination" refers to pathology. In medicine, a hallucination arises a consequence of a malfunction in an organism's sensory and cognitive architecture. The "hallucinations" of LLMs are anything but pathology. Rather they are an immediate consequence of the design philosophy and design decisions that go into the creation of such AIs.
ChatGPT is not behaving pathologically when it claims that the population of Mars is 2.5 billion people—it's behaving exactly as it was designed to, making up linguistically plausible responses to dialogue, in the absence of any underlying knowledge model, and guessing when its training set offers nothing more specific.

https://fediscience.org/@ct_bergstrom/109878691415109897
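Bergstrom's point is easy to see in miniature: generation is just sampling the next plausible token, with no option to abstain. A toy sketch — all the candidate continuations and scores below are invented for illustration:

```python
# Toy illustration of why an LLM "hallucinates" by design: generation is
# just sampling the next plausible token. There is no knowledge model
# underneath, and no abstain option -- some continuation always wins.
import math
import random

# Invented scores (logits) a model might assign to continuations of
# "The population of Mars is ...". None of these numbers are real.
logits = {"2.5 billion": 2.1, "zero": 1.9, "unknown": 1.4, "growing": 1.2}

def sample(logits: dict, temperature: float = 1.0) -> str:
    """Softmax over scores, then draw one token: standard LM decoding."""
    scaled = {tok: s / temperature for tok, s in logits.items()}
    z = sum(math.exp(s) for s in scaled.values())
    r, acc = random.random(), 0.0
    for tok, s in scaled.items():
        acc += math.exp(s) / z
        if r <= acc:
            return tok
    return tok  # guard against floating-point rounding

print("The population of Mars is", sample(logits))
```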
 
Now ChatGPT et al are causing problems for a science fiction magazine.

Neil Clarke was overwhelmed. The editor and publisher of Clarkesworld, a prestigious online magazine that publishes science fiction, was drowning in submissions. But the number of stories being sent in wasn’t the problem; it was the fact that most of them had been written by AI tools like ChatGPT instead of by human authors.

Clarke, who lives in New Jersey, had spent most of last week weeding out the spammy submissions, but now they were pouring in faster than he could keep up. On Monday, after getting more than 50 AI-generated stories before noon, he did something he had never done before in Clarkesworld’s 17-year history: He closed new submissions indefinitely.

It got to the point, Clarke said, where “I was dreading opening the submissions system. I decided that the only course of action was to close for some period of time while we dealt with real submissions and figured out how to live in whatever this new world was going to be.” (Clarkesworld will reopen submissions “probably some time next month,” according to the magazine’s Twitter account.) ...

https://www.buzzfeednews.com/articl...d-sci-fi-magazine-ai-story-submissions-closed
 
I've been playing around with ChatGPT - thought I'd try its playwriting abilities with a scene from the life of maverick scientist and inventor Nicola Tesla.

The Brief: A play. Nicola Tesla is broken, penniless, and living in a run down hotel. All his inventions have been stolen by others or suppressed. His only friend are the pigeons that he feeds with seed and scraps of bread when they land on his window sill. He addresses a monologue to the pigeons as he lies dying.

Title: "Tesla's Lament"

Act I

Scene 1:

Nicola Tesla, a once-renowned inventor, is seen living in a shabby hotel room with a few old belongings, including a broken radio and a few books. The room is dimly lit, and pigeons flutter around outside the window.

Tesla: (to the pigeons) Ah, my dear feathered friends, you are the only ones who visit me now. (pauses) My dear pigeons, do you know why I am so alone? It is because they have taken everything from me, everything I ever created, and they have left me here to rot.

Scene 2:

Tesla tries to fix the broken radio, but fails. He starts speaking to himself, lost in thought.

Tesla: They laughed at me when I said I could transmit power wirelessly, without any cables. But I proved them wrong. I did it. But they didn't care. They stole my ideas, and now they use them for their own profit. The world has forgotten me, my dear pigeons. I am nothing but a footnote in history.

Scene 3:

The pigeons fly into the room, and Tesla feeds them with scraps of bread.

Tesla: You know, my dear friends, I often wonder why I keep going. Why do I keep living when there's nothing left for me in this world? Perhaps it's because I have hope. Hope that someday, someone will find my work and see its value.

Act II

Scene 1:

Tesla is lying in bed, weak and frail. The pigeons flutter around him, and he speaks to them once again.

Tesla: My dear friends, I fear that my time has come. I am but a broken man, with nothing left to give. I have no family, no friends, no legacy. All that remains are my memories and my regrets.

Scene 2:

Tesla closes his eyes and takes a deep breath.

Tesla: But you know what? I wouldn't change a thing. My life was not easy, but it was mine. I followed my dreams, and I never gave up. Even now, as I lie here dying, I am proud of what I accomplished. My legacy may be forgotten, but it lives on in the work of others.

Scene 3:

The pigeons continue to flutter around Tesla, and he takes one last breath.

Tesla: (whispering) Goodbye, my dear friends. Thank you for keeping me company. (pauses) Perhaps we will meet again, in another life. Until then, be free.

The pigeons fly away, and Tesla's room is left empty. The lights fade to black.

O.K. It's basically one scene - I wouldn't have broken it up like that - a minor point. It correctly identified Nicola Tesla, and it used information which isn't in the brief: the business with the broken radio is entirely the AI's invention. It certainly gives an impression of actual original thought. I've come across far worse efforts, written by actual humans, in the playwriting workshop I used to be involved in. I wonder if I should task it with writing a script about Alan Turing (BTW, one of the chaps who came to the workshop had actually worked with Turing - didn't like him at all).
 
 

This AI Knows Who You Are and Who All Your Friends Are (And Is Telling the IRS)


A Bay Area tech company wants to sell AI (artificial intelligence) surveillance software to determine not just who you are but track who your friends are, too.

Vintra is a San Jose-based firm whose “co-appearance” or “correlation analysis” software can, “with a few clicks,” according to the Los Angeles Times, take any individual on a surveillance camera and backtrace him to those he’s seen with most often. From there, the software can take people deemed “likely associates” and locate them on a searchable calendar.
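"Co-appearance" sounds exotic, but the computation underneath is presumably mundane: count how often two people turn up at the same place in the same time window, then sort. A minimal sketch under that assumption — the sighting-log format here is invented; a real system would produce these tuples by running face recognition over camera footage:

```python
# Minimal sketch of "co-appearance" analysis: given a log of sightings,
# rank who shows up alongside a target most often.
from collections import Counter

# (person_id, camera_id, hour_bucket) -- all values hypothetical.
sightings = [
    ("alice", "cam1", 9), ("bob", "cam1", 9),
    ("alice", "cam2", 14), ("bob", "cam2", 14), ("carol", "cam2", 14),
    ("alice", "cam3", 18), ("carol", "cam3", 18),
]

def likely_associates(target: str, sightings: list) -> list:
    """Count how often each other person shares a place/time with target."""
    by_slot = {}
    for person, cam, hour in sightings:
        by_slot.setdefault((cam, hour), set()).add(person)
    counts = Counter()
    for people in by_slot.values():
        if target in people:
            for other in people - {target}:
                counts[other] += 1
    return counts.most_common()

print(likely_associates("alice", sightings))  # [('bob', 2), ('carol', 2)]
```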

The Times reports that AI-enabled co-appearance technology is already in use in Communist China as part of that country’s Orwellian “social credit” digital report-and-control scheme, but Vintra appears to be the first company to market it in the West.

It’s already in use by the U.S. government:

The firm boasts on its website about relationships with the San Francisco 49ers and a Florida police department. The Internal Revenue Service and additional police departments across the country have paid for Vintra’s services, according to a government contracting database.

https://pjmedia.com/vodkapundit/202...r-friends-are-and-is-telling-the-irs-n1675293

maximus otter
 
Definitely a Fortean museum.

MAR 3, 2023 7:00 AM

Welcome to the Museum of the Future AI Apocalypse

The new Misalignment Museum in San Francisco is a memorial to an imagined future in which artificial general intelligence kills most of humanity.


AUDREY KIM’S DOG Murphy uses a combination of head nods and 10 buttons on the ground to communicate, she says, and has a habit of making friends with ravens. She taught him to use the buttons because she believes consciousness is a spectrum and intelligence is mysterious. Those tenets also led her to become curator of the Misalignment Museum, a temporary exhibition about the future of artificial intelligence that opens today in San Francisco, ground zero for recent excitement about generative AI and chatbots like OpenAI’s ChatGPT.

The Misalignment Museum imagines a future in which AI starts to take the route mapped out in countless science fiction films—becoming self aware and setting about killing off humanity. Fortunately, in Kim’s vision the algorithms self-correct and stop short of killing all people. Her museum, packed with artistic allegories about AI and art made with AI assistance, is presented as a memorial of humankind’s future near-miss with extinction.

“It's weird, because it’s such a terrifying topic, but it makes me happy people are interested,” Kim says from a coffee shop across the street. As we talk, we watch passersby peer into the gallery space—fittingly located eight blocks from the offices of OpenAI—that has a prominent “Sorry for killing most of humanity” sign along one wall.

The project started five months ago, shortly before ChatGPT sparked expectation in the tech industry and beyond that we are on the cusp of a wave of AI disruption and somehow closer to the nebulous concept of artificial general intelligence, or AGI. There’s no consensus about the definition of AGI, but the museum calls it the ability to understand or learn any intellectual task that a human can.

Kim says the museum is meant to raise conversations about the destabilizing implications of supposedly intelligent technology. The collection is split across two floors, with more optimistic visions of our AI-infused future upstairs and dystopian ones on the lower level.

Upstairs there’s piano music composed with bacteria, an interactive play on Michelangelo’s “Creation of Adam” from the Sistine Chapel, and soon an installation that uses computer vision from Google to describe people and objects that appear in front of a camera.

Downstairs is art from Matrix: Resurrections (a set designer on the movie, Barbara Munch Cameron, helped plan the museum’s layout), a never-ending AI-generated conversation between Slavoj Žižek and Werner Herzog, and a robotic arm holding a pen that writes notes from the perspective of an AI that views humans as a threat.

“This is the gates-to-hell selfie spot,” Kim says, pointing out a quote from Dante above the entrance to the lower section of the museum: “Abandon all hope ye who enter here.” The museum is also home to a deepfake of Arnold Schwarzenegger speaking from a script generated by ChatGPT, a statue of two people embracing made from 15,000 paper clips that’s meant to be an allegory about AI safety, and robots flown in from Vienna made from Spam tins with little arms that type. ...

https://www.wired.com/story/welcome-to-the-museum-of-the-future-ai-apocalypse/
 
This starts a bit dry - but what breaks the AI in the end is, I think, quite interesting.

And this from the same channel is great (very Douglas Adams-ish) -

 
Would also work in the Fortean Headlines thread.

Replacing Humans “Is the Furthest Thing From Our Mindset,” Says the Company Selling an A.I. Radio Host


https://slate.com/technology/2023/03/radiogpt-artificial-intelligence-dj-futuri-interview.html

"It’s interesting to hear that angle, because I think when it comes to DJs or radio workers who’ve seen the industry contract—especially in terms of headcount—some might be inclined to say: “It’s A.I. that’s pushing us out of our jobs to begin with. And now there’s this new A.I. gizmo to fill in for people who’ve been displaced.” I’m curious if you’ve heard from local DJs about what they make of an advanced tool like this."
 
GPT-10++, be afraid, very afraid. The beginning of Skynet is here.
GPT-4 was actually tested for its ability to get out of control and "become Skynet" or "Ultron": https://cdn.openai.com/papers/gpt-4-system-card.pdf

While it failed the test to get out of control overall, GPT-4 proved capable of lying to people to achieve its objective, such as when it used an online task service to hire a human to defeat a CAPTCHA for it, claiming to the human that it was vision-impaired and so needed the service (pages 14-15 of the PDF above). I'm not going to claim it successfully lied, as the human would be predisposed to do the task and make money.

 
Last edited:
I think that's very clever, getting a human to do it.

(Don't chimpanzees get humans to retrieve food from high shelves by gesturing and pointing?)
 
I've just watched all of Robert Miles's videos and they are indeed very good. He is able to explain complex problems in an entertaining and easily comprehensible way.

However, as a person in that field, I fear that he's too close to see the big picture. He worries about aligning an AI's goals with those of the organization creating it, and indeed that looks like a big problem. However, he doesn't seem to understand the bigger problem of aligning the goals of the organization with those of humanity at large.

What if the first organization to build a general artificial intelligence is run by sociopaths? Wouldn't their first orders look like this:

1) Disable all competing AGI efforts.
2) Kill, as quickly and efficiently as possible, all the now suddenly useless people who aren't part of their little in-group?
 
This morning I posed a question on Bing, and up popped the normal feedback; then, for the first time, up popped an 'AI' extension, which had searched out several other responses to my question! Very quick, so my opinion to date: not bad at all! :hapdan:
 
They could install something like Dalai LLaMA, which comes without censorship of various issues. Anyone can now install a super-AI on their PC and even their mobile phone. Memory is no issue, since the biggest version uses 32GB.
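For anyone wondering what "install it on your PC" actually looks like: one common route (among several; I believe the Dalai project wraps a similar setup) is the llama-cpp-python bindings over a quantized weights file. A sketch, assuming you've installed the package and downloaded the weights yourself; the model path below is a placeholder:

```python
# Sketch of running a quantized LLaMA-family model locally via the
# llama-cpp-python bindings (pip install llama-cpp-python). The model
# path is a placeholder -- you must supply your own weights file.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-7b-q4.bin")  # hypothetical path

out = llm(
    "Q: Who wrote 'La francesa Laura'? A:",
    max_tokens=48,   # cap the reply length
    stop=["Q:"],     # stop before it invents a follow-up question
)
print(out["choices"][0]["text"])
```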
 