There's this incident -
https://en.wikipedia.org/wiki/1990_Clinic_of_Zaragoza_radiotherapy_accident
Although it was blamed on human error, not AI.

That's not the incident to which I was referring, though it may well have been the same radiotherapy machine. The incident I cited involved a single patient, and it was determined to have been caused by the expert system controller rather than a hardware problem with the radiation emission apparatus.

I had read of this incident and discussed it with medical AI researchers and managers prior to 1990 (when the series of Zaragoza incidents occurred).
 
There's Therac-25, but that was Canada/USA. Sure the incident you're referring to was in Spain?
 
Yes. And no - it wasn't the Therac affair.
 
Meanwhile, this is an interesting essay, more than just a rehash of the GIGO truism, arguing that many of the datasets used in training machine learning systems have been, um, uncritically applied, with unfortunate results, such as the IBM system that was unable to identify non-white faces. Matters go from bad to worse from there...
"A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” This explains so much about the adult me! Never stood a chance. :cool2:

And now AI can generate its own loser-child images without the benefit of children. If I understand correctly, two algorithms duke it out: one generates the images, and the other detects the fakes. Each "learns" from its mistakes. This article has some quick tips for detecting the fakes using your own human eyeballs and brains, and includes the question we're already asking: how much longer before we can't tell the difference between real and fake even with exacting scrutiny?
https://qz.com/1115353/new-research...aked-ai-generated-photos-is-quickly-emerging/
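
That two-algorithms-duking-it-out setup is what researchers call a generative adversarial network (GAN). For anyone curious, here's a minimal toy sketch of the idea in Python/PyTorch. Caveat: the tiny networks, the made-up "real" data, and the training loop below are my own illustrative choices to show the general technique, not anything specific from the article:

```python
# Minimal GAN sketch: a generator and a discriminator "duke it out".
# Toy 2-D data stands in for real images; everything here is illustrative.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

# Generator: turns random noise into fake "data points".
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores how likely a point is real (1) vs. fake (0).
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in "real" data cluster
    fake = G(torch.randn(64, latent_dim))          # generator's current fakes

    # Discriminator learns from its mistakes: push real toward 1, fake toward 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator learns from *its* mistakes: try to make D output 1 on fakes.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

Run long enough, the generator's samples drift toward the "real" cluster, because fooling the discriminator is the only way to lower its loss. Same arms race, just with faces instead of dots.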
 
Problems with A.I.-based facial recognition.

IN EARLY MAY, a press release from Harrisburg University claimed that two professors and a graduate student had developed a facial-recognition program that could predict whether someone would be a criminal. The release said the paper would be published in a collection by Springer Nature, a big academic publisher.

With “80 percent accuracy and with no racial bias,” the paper, A Deep Neural Network Model to Predict Criminality Using Image Processing, claimed its algorithm could predict “if someone is a criminal based solely on a picture of their face.” The press release has since been deleted from the university website.

Tuesday, more than 1,000 machine-learning researchers, sociologists, historians, and ethicists released a public letter condemning the paper, and Springer Nature confirmed on Twitter it will not publish the research.

But the researchers say the problem doesn't stop there. Signers of the letter, collectively calling themselves the Coalition for Critical Technology (CCT), said the paper’s claims “are based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The letter argues it is impossible to predict criminality without racial bias, “because the category of ‘criminality’ itself is racially biased.”

https://www.wired.com/story/algorithm-predicts-criminality-based-face-sparks-furor/


OAKLAND, Calif. (Reuters) - An incorrect facial recognition match led to the first known wrongful arrest in the United States based on the increasingly used technology, civil liberties activists alleged in a complaint to Detroit police on Wednesday.

Robert Williams spent over a day in custody in January after face recognition software matched his driver’s license photo to surveillance video of someone shoplifting, the American Civil Liberties Union of Michigan (ACLU) said in the complaint. In a video shared by ACLU, Williams says officers released him after acknowledging “the computer” must have been wrong.

Government documents seen by Reuters show the match to Williams came from Michigan state police’s digital image analysis section, which has been using a face matching service from Rank One Computing.

https://www.huffpost.com/entry/ai-r...irst-known-us-case_n_5ef3444cc5b663ecc8559306
 
Allow me to introduce you to the comedic best of how to defeat machine learning facial recognition:
[Attached image: chihuahua blueberry muffin.jpg]

Behold! The Chihuahua versus Blueberry Muffin meme!
 
A.I. and space suits: did nobody watch 2001?

A FEW MONTHS ago, NASA unveiled its next-generation space suit that will be worn by astronauts when they return to the moon in 2024 as part of the agency’s plan to establish a permanent human presence on the lunar surface.

The Extravehicular Mobility Unit—or xEMU—is NASA’s first major upgrade to its space suit in nearly 40 years and is designed to make life easier for astronauts who will spend a lot of time kicking up moon dust. It will allow them to bend and stretch in ways they couldn’t before, easily don and doff the suit, swap out components for a better fit, and go months without making a repair.

But the biggest improvements weren’t on display at the suit’s unveiling last fall. Instead, they’re hidden away in the xEMU’s portable life-support system, the astro backpack that turns the space suit from a bulky piece of fabric into a personal spacecraft. It handles the space suit’s power, communications, oxygen supply, and temperature regulation so that astronauts can focus on important tasks like building launch pads out of pee concrete. And for the first time ever, some of the components in an astronaut life-support system will be designed by artificial intelligence.

https://www.wired.com/story/nasas-new-moon-bound-space-suits-will-get-a-boost-from-ai/
 
Cory Doctorow writes about A.I., full employment, and climate change, and throws in a bit of futurism.

I am an AI skeptic. I am baffled by anyone who isn’t.

I don’t see any path from continuous improvements to the (admittedly impressive) “machine learning” field that leads to a general AI any more than I can see a path from continuous improvements in horse-breeding that leads to an internal combustion engine.

Not only am I an AI skeptic, I’m an automation-employment-crisis skeptic. That is, I believe that even if we were – by some impossible-to-imagine means – to produce a general AI tomorrow, we would still have 200-300 years of full employment for every human who wanted a job ahead of us.

I’m talking about climate change, of course. Remediating climate change will involve unimaginably labor-intensive tasks, like relocating every coastal city in the world kilometers inland, building high-speed rail links to replace aviation links, caring for hundreds of millions of traumatized, displaced people, and treating runaway zoonotic and insect-borne pandemics.

These tasks will absorb more than 100% of any labor freed up by automation. Every person whose job is obsolete because of automation will have ten jobs waiting for them, for the entire foreseeable future. This means that even if you indulge in a thought experiment in which a General AI emerges that starts doing stuff humans can do – sometimes better than any human could do them – it would not lead to technological unemployment.

Perhaps you think I’m dodging the question. If we’re willing to stipulate a fundamental breakthrough that produces an AI, what about a comparable geoengineering breakthrough? Maybe our (imaginary) AIs will be so smart that they’ll figure out how to change the Earth’s albedo.

Sorry, that’s not SF, it’s fantasy. ...

https://locusmag.com/2020/07/cory-doctorow-full-employment/
 
So Doctorow quit high school, got a GED, and attended four universities without ever earning a degree. Seems like the kind of guy I want to provide expert opinions on artificial intelligence and climatology. He is also described as an activist and a journalist. Sorry, you only get to be one of those.
 
He's a great Science Fiction writer though, especially of the near-future variety.
 
The key word there being... what... "fiction"?

Indeed. I hope he's wrong about AIs. But his musings about the jobs created due to climate change sounds plausible enough. As well as his adult books he's written some very good Young Adult novels: Little Brother, Homeland, Pirate Cinema.
 
He was bright enough to get into 4 universities, but he's clearly not cut out for academia. That doesn't mean he's stupid, perhaps just lazy.
One of my best friends has a high IQ but dropped out of uni. People do. Steve Jobs, Bill Gates... etc.
 
Apparently Doctorow's super AI is incapable of producing robots and machines to do work.

We aren't going to pick up coastal cities brick by brick and move them inland like he seems to think, either; we're going to abandon the existing stuff and build new stuff inland. That's what we already do.
 
Doctorow did write a nice review for our Orion's Arm website - in which one of the main tenets is the development of AGI, and the automation of most present-day jobs. Also mentioned in OA is the need for concerted action to ameliorate the climate over the coming centuries; I'd agree with Doctorow that we will all have our work cut out, even if AGI is developed (which does not seem to be imminent).
 
Defend humans against Algorithms!

Workers must be protected from adverse decisions where responsibility is displaced to apparently anonymous algorithms.

More and more companies delegate many of their responsibilities as an employer to algorithms, separating the human factor from labour management and exchanging it for computer programmes.

The recruitment of staff, the organisation of working time, professional promotion and the allocation of bonuses—even the application of a disciplinary regime—are all being put at the disposal of algorithms. This trend poses a severe risk to the rights and freedoms of workers.

Digital platforms already manifest this threat: their algorithms control and monitor their workers, evaluate their performance, determine their remuneration and even execute layoffs—and under abstruse, capricious and opaque criteria. Many attribute to these computer programmes characteristics science rejects, such as infallibility, neutrality and superficial precision.

https://www.socialeurope.eu/for-a-law-of-algorithmic-justice-at-work
 
Be afraid ... Be very afraid ...
Facebook wants to help build AI that can remember everything for you

On Friday, Facebook announced new AI research that could help pave the way for a significant change in how artificial intelligence — and some devices that incorporate this technology — functions in our daily lives.

The company announced a real-world sound simulator that will let researchers train AI systems in virtual three-dimensional spaces with sounds that mimic those that occur indoors, opening up the possibility that an AI assistant may one day help you track down a smartphone ringing in a distant room.

Facebook also unveiled an indoor mapping tool meant to help AI systems better understand and recall details about indoor spaces, such as how many chairs are in a dining room or whether a cup is on a counter.

This isn't something you can do with technology as it is today. Smart speakers generally can't "see" the world around them, and computers are not nearly as good as humans at finding their way around indoor spaces.

Mike Schroepfer, Facebook's chief technology officer, hopes this work, though early stage, could eventually power products like a pair of smart glasses to help you remember everything from where you left your keys to whether you already added vanilla to a bowl of cookie dough. In short, he wants to perfect AI that can perfect your own memory.

"If you can build these systems, they can help you remember the important parts of your life," Schroepfer told CNN Business in an interview about the company's vision for the future of AI.
But Schroepfer's goal could depend on the company convincing people to trust Facebook to develop technology that may become deeply embedded in their personal lives — no small feat after years of privacy controversies and concerns about how much personal information the social network already has from its users. ...

FULL STORY: https://www.cnn.com/2020/08/22/tech/facebook-ai-memory-research/index.html
 
The inexorable rise of A.I.

IN JULY 2015, two founders of DeepMind, a division of Alphabet with a reputation for pushing the boundaries of artificial intelligence, were among the first to sign an open letter urging the world’s governments to ban work on lethal AI weapons. Notable signatories included Stephen Hawking, Elon Musk, and Jack Dorsey.

Last week, a technique popularized by DeepMind was adapted to control an autonomous F-16 fighter plane in a Pentagon-funded contest to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot using a VR headset and simulator controls. The AI pilot won, 5-0.

The episode reveals DeepMind caught between two conflicting desires. The company doesn’t want its technology used to kill people. On the other hand, publishing research and source code helps advance the field of AI and lets others build upon its results. But that also allows others to use and adapt the code for their own purposes.

https://www.wired.com/story/dogfight-renews-concerns-ai-lethal-potential/
 
** NOT POLITICAL ** NOT POLITICAL ** NOT POLITICAL ** NOT POLITICAL ** NOT POLITICAL **

Now that we're all enjoying another round of presidential "debates", I would dearly love to see a debate moderated by Watson, the IBM supercomputer. It would be deeply satisfying to see politicians of all stripes deflated by a moderator who could not be intimidated, misdirected, or outmaneuvered.

"You did not answer the question."
"Your response is based upon a logical fallacy."
"The statistics you cite are incorrect."
"That is a mis-characterization of your opponent's position."
"The incident to which you refer has been shown to be a fake."

Perhaps in time we might even see an AI participating in the debate.
https://en.wikipedia.org/wiki/Project_Debater

How sweet it might be . . .

** NOT POLITICAL ** NOT POLITICAL ** NOT POLITICAL ** NOT POLITICAL ** NOT POLITICAL **
 
Screenshot from my own computer that I've cropped for space. On a big long thread someone posted the picture below, and wanting to see if it was real (for posting to our Strange Crimes thread :) ), I typed "wife stabs" into my search bar and autocomplete did the rest.

The story is real, from 2013; the weapon was a ceramic decorative squirrel.

[Attached image: stabby.jpg]
 
Even if Alphabet and Google do not want to sell it, governments the world over will work on their own military A.I., inspired by A.I. tech developed in the commercial sector. In the USA, who knows what the NSA and CIA can find while doing industrial espionage.
 
Smells Like Teen Spirit, with an AI attempting to complete the song [embedded video]

Don't Stop Me Now, with an AI attempting to complete the song [embedded video]
 
AI is good at specific tasks, but to create something like we see in science fiction we need to learn how to create a generalized AI, which is a difficult problem. Best to solve our own human problems first without compounding things.
 
I once had the privilege of hearing Prof Brian Cox speak about this and other things, and then get to chat to him afterwards.

The topic was general AI. He talked about Von Neumann machines, and opined that a reason we may not have met ET is that a general AI to drive all of the practical and operational technology may not be possible.

It may also be that life is abundant, but only in basic forms. The miracle here that led to complex life may not have happened elsewhere, or only very rarely.

However, his main point was that either general AI is too hard (since many of the other prerequisites for automated exploration and development are practical), or the exploration technologies from other worlds are already about but so small that we do not notice them. He said the latter is less likely than the former.

Nice bloke too, happy to chat. His enthusiasm is infectious and his unwillingness to entertain bullshit is very endearing.
 