Artificial Intelligence (A.I.)

Tribble

Killjoy Boffin
> There's Therac-25, but that was Canada/USA. Sure the incident you're referring to was in Spain?

That's not the incident to which I was referring, though it may well have been the same radiotherapy machine. The incident I cited involved a single patient, and it was determined to have been caused by the expert system controller rather than a hardware problem with the radiation emission apparatus.

I had read of this incident and discussed it with medical AI researchers and managers prior to 1990 (when the series of Zaragoza incidents occurred).
 

IbisNibs

Exotic animal, sort of . . .
Meanwhile, this is an interesting essay, more than just a rehash of the GIGO truism, arguing that many of the datasets used in training machine learning systems have been, um, uncritically applied, with unfortunate results, such as the IBM system that was unable to identify non-white faces. Matters go from bad to worse from there...
"A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” This explains so much about the adult me! Never stood a chance. :cool2:

And now AI can generate its own loser-child images without the benefit of children. If I understand correctly, two algorithms duke it out: one generates the images, the other tries to detect the fakes, and each "learns" from its mistakes. This article has some quick tips for detecting the fakes using your own human eyeballs and brains, and includes the question we're already asking: how much longer before we can't tell the difference between real and fake even with exacting scrutiny?
https://qz.com/1115353/new-research...aked-ai-generated-photos-is-quickly-emerging/
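For the curious, that two-duelling-algorithms setup is called a generative adversarial network (GAN). Here's a toy sketch of the idea in PyTorch, entirely my own illustration on made-up one-dimensional "data" rather than anything from the article: a generator learns to fake samples from a bell curve while a discriminator learns to catch it.

[CODE=python]
# Toy GAN: a generator learns to mimic samples from N(4, 1.25) while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

real_data = torch.distributions.Normal(4.0, 1.25)  # the "real" distribution
noise_dim, batch = 8, 64

# Generator: noise in, fake sample out. Discriminator: sample in, P(real) out.
G = nn.Sequential(nn.Linear(noise_dim, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator's turn: label real samples 1, generated samples 0.
    real = real_data.sample((batch, 1))
    fake = G(torch.randn(batch, noise_dim)).detach()  # don't update G here
    loss_D = (bce(D(real), torch.ones(batch, 1))
              + bce(D(fake), torch.zeros(batch, 1)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator's turn: it "wins" when D labels its fakes as real.
    fake = G(torch.randn(batch, noise_dim))
    loss_G = bce(D(fake), torch.ones(batch, 1))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

# The generator's output mean should drift toward the real mean of 4.0.
print(G(torch.randn(1000, noise_dim)).mean().item())
[/CODE]

Each network's "mistakes" are exactly the training signal for the other, which is why the fakes keep getting harder to spot.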
 

ramonmercado

CyberPunk
Problems with A.I.-based facial recognition.

IN EARLY MAY, a press release from Harrisburg University claimed that two professors and a graduate student had developed a facial-recognition program that could predict whether someone would be a criminal. The release said the paper would be published in a collection by Springer Nature, a big academic publisher.

With “80 percent accuracy and with no racial bias,” the paper, A Deep Neural Network Model to Predict Criminality Using Image Processing, claimed its algorithm could predict “if someone is a criminal based solely on a picture of their face.” The press release has since been deleted from the university website.

Tuesday, more than 1,000 machine-learning researchers, sociologists, historians, and ethicists released a public letter condemning the paper, and Springer Nature confirmed on Twitter it will not publish the research.

But the researchers say the problem doesn't stop there. Signers of the letter, collectively calling themselves the Coalition for Critical Technology (CCT), said the paper’s claims “are based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The letter argues it is impossible to predict criminality without racial bias, “because the category of ‘criminality’ itself is racially biased.”

https://www.wired.com/story/algorithm-predicts-criminality-based-face-sparks-furor/


OAKLAND, Calif. (Reuters) - An incorrect facial recognition match led to the first known wrongful arrest in the United States based on the increasingly used technology, civil liberties activists alleged in a complaint to Detroit police on Wednesday.

Robert Williams spent over a day in custody in January after face recognition software matched his driver’s license photo to surveillance video of someone shoplifting, the American Civil Liberties Union of Michigan (ACLU) said in the complaint. In a video shared by ACLU, Williams says officers released him after acknowledging “the computer” must have been wrong.

Government documents seen by Reuters show the match to Williams came from Michigan state police’s digital image analysis section, which has been using a face matching service from Rank One Computing.

https://www.huffpost.com/entry/ai-r...irst-known-us-case_n_5ef3444cc5b663ecc8559306
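For context on how a "match" like this typically gets made: face recognisers usually boil each face down to a vector of numbers (an embedding) and declare two faces the same person when the vectors are similar enough. The sketch below is hypothetical (nothing to do with Rank One's actual pipeline; the gallery names, vector size, and threshold are all made up) and just shows why a huge gallery plus a lenient threshold produces confident-looking false matches.

[CODE=python]
# Hypothetical embedding-based face matching (NOT Rank One's real system).
# A face becomes a 128-number vector; two faces "match" when the cosine
# similarity of their vectors clears a threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_matches(probe, gallery, threshold):
    """Return every gallery entry whose similarity to the probe clears
    the threshold; each one would be reported as a 'match'."""
    return [(name, round(s, 3)) for name, vec in gallery.items()
            if (s := cosine_similarity(probe, vec)) >= threshold]

rng = np.random.default_rng(0)
# Stand-ins for the vectors a real face-embedding model would produce.
gallery = {f"license_{i:05d}": rng.normal(size=128) for i in range(10_000)}
probe = rng.normal(size=128)  # e.g. a blurry surveillance frame

hits = find_matches(probe, gallery, threshold=0.25)
print(f"{len(hits)} 'matches' in a gallery of 10,000 total strangers")
[/CODE]

Scale that gallery up to millions of driver's license photos and even a tiny per-person false-match rate all but guarantees that somebody innocent gets flagged.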
 

GNC

King-Sized Canary
I didn't realise chihuahuas were so criminal.
 

ramonmercado

CyberPunk
A.I. and space suits: did nobody watch 2001?

A FEW MONTHS ago, NASA unveiled its next-generation space suit that will be worn by astronauts when they return to the moon in 2024 as part of the agency’s plan to establish a permanent human presence on the lunar surface.

The Extravehicular Mobility Unit—or xEMU—is NASA’s first major upgrade to its space suit in nearly 40 years and is designed to make life easier for astronauts who will spend a lot of time kicking up moon dust. It will allow them to bend and stretch in ways they couldn’t before, easily don and doff the suit, swap out components for a better fit, and go months without making a repair.

But the biggest improvements weren’t on display at the suit’s unveiling last fall. Instead, they’re hidden away in the xEMU’s portable life-support system, the astro backpack that turns the space suit from a bulky piece of fabric into a personal spacecraft. It handles the space suit’s power, communications, oxygen supply, and temperature regulation so that astronauts can focus on important tasks like building launch pads out of pee concrete. And for the first time ever, some of the components in an astronaut life-support system will be designed by artificial intelligence.

https://www.wired.com/story/nasas-new-moon-bound-space-suits-will-get-a-boost-from-ai/
 