Years ago it was noticed that a 'herd' of computers could bankrupt a country via the stock exchange. Measures have since been built into trading algorithms to prevent them doing this.
AI would think logically. Very logically.
If a country's population appeared to be nearing the limit of its ability to, say, feed itself, the AI would limit the population, by whatever means it found necessary. It would be the logical thing to do. Humans would let the place become an impoverished, war-torn hell hole. Who is to say the AI isn't right?
North Korea would be an interesting test of AI.
Would it simply remove the threat by destroying the nukes (and the country in the following war), or would it calculate that the threat isn't actually real, just bluster? ...
The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.
The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset.
The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.
Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.
“It’s certainly unsettling. Like any new tool, if it gets into the wrong hands, it can be used for ill purposes,” said Nick Rule, an associate professor of psychology at the University of Toronto, who has published research on the science of gaydar. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad.”
In the Stanford study, the authors also noted that artificial intelligence could be used to explore links between facial features and a range of other phenomena, such as political views, psychological conditions or personality.
“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is as a society, do we want to know?”
Brackeen, who said the Stanford data on sexual orientation was “startlingly correct”, said there needs to be an increased focus on privacy and tools to prevent the misuse of machine learning as it becomes more widespread and advanced.
Rule speculated about AI being used to actively discriminate against people based on a machine’s interpretation of their faces: “We should all be collectively concerned.”
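For anyone curious what the pipeline described above looks like in practice, here is a minimal sketch: a deep network reduces each photo to a feature vector ("embedding"), a simple classifier is trained on those vectors, and predictions for several photos of the same person are averaged. Everything here is an assumption for illustration; random vectors stand in for real embeddings, and this is not the researchers' actual code or data.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(images):
    # Placeholder for a pretrained deep network's feature extractor.
    # In a real pipeline this would be model(images) -> (n, d) embeddings.
    return images

# Toy "embeddings": two classes, weakly separated along one dimension.
n, d = 200, 16
X = rng.normal(size=(n, d))
y = (rng.random(n) < 0.5).astype(int)
X[y == 1, 0] += 1.5  # inject a weak signal for the classifier to learn

# Logistic regression on the embeddings, via plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

def predict_proba(images):
    return 1 / (1 + np.exp(-embed(images) @ w))

def predict_person(photos):
    # Averaging over several photos of one person is one plausible way the
    # reported jump from single-image to five-image accuracy could arise:
    # roughly independent errors partially cancel.
    return predict_proba(photos).mean()

acc = ((predict_proba(X) > 0.5) == y).mean()
print(f"single-image training accuracy: {acc:.2f}")
```

The point of the sketch is that the "AI" is nothing exotic: a feature extractor plus a linear classifier. The controversial part is what the features are trained to predict, not the machinery itself.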
EnolaGaia,
One would hope that the two would talk to each other and come to a compromise.
But that would seem to take the point out of AI as people already do that.
Isn't AI supposed to rise above this and present the logical response?
...So now an actual gaydar exists?..
Isn't that what Grindr is?
INT21,
Yep. Why they would do that is the next question. It's not about hook-ups. It's about the possibility/probability that companies, government agencies, and organisations could, and would, use it to discriminate against, weed out, persecute, or exclude people it indicates are probably gay. You can bet your life that some would use it for exactly that.
EnolaGaia,
..These problems were known 30 years ago. The supporting tech (e.g., neural emulation) has advanced, but the scope of AI's 'real-world' applicability hasn't...
So where is the usefulness in them? ...
I was under the impression that neural networks were capable of learning from their own experiences.
Self-drive cars are supposedly going to be able to make the millions of decisions I and everyone else make when driving in normal road conditions. And there are literally millions of decisions per minute, often per second, that we make without even being conscious of them. Essentially one part of our brain may be driving the car on 'automatic pilot' while another may be looking out for, say, a particular turn-off sign, while also having to listen and respond to the back-seat drivers at the same time.
As these self-drive cars are supposed to be safer than a human driver, can you explain how?
... but what if it could reliably identify potential criminals or tell when a politician is lying (I'd pay to see that one)?
'Reliably' is a relative term. A method that's 'reliable' for X% of cases - where X is anything less than '100' - is not appropriate to decide matters in which consequences are prescribed with respect to a standard of truth.
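That objection can be made concrete with Bayes' theorem: when the trait being screened for is rare, even a highly "reliable" classifier flags mostly innocent people. The numbers below (95% accuracy, 1% base rate) are purely hypothetical, chosen only to illustrate the arithmetic.

```python
def precision(sensitivity, specificity, base_rate):
    """P(has the trait | flagged), via Bayes' theorem."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# A screening tool that is right 95% of the time, applied to a trait
# only 1% of people actually have:
p = precision(0.95, 0.95, 0.01)
print(f"Chance a flagged person actually has the trait: {p:.1%}")  # ~16%
```

So a "95% reliable" detector for something rare is wrong about five flagged people out of every six, which is exactly why X% reliability is not a basis for consequential decisions.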
...Of course, this in itself could be problematic if misapplied, but what if it could reliably identify potential criminals or tell when a politician is lying (I'd pay to see that one)?...

No need.
Zoltan Istvan caused a stir with his recent article: “When Superintelligent AI Arrives, Will Religions Try to Convert It?” Istvan begins by noting, “… we are nearing the age of humans creating autonomous, self-aware super intelligences … and we will inevitably try to control AI and teach it our ways …” And this includes making “sure any superintelligence we create knows about God.” In fact, Istvan says, “Some theologians and futurists are already considering whether AI can also know God.” ...
http://hplusmagazine.com/2015/04/28/will-religions-convert-ais-to-their-faith/
Church that Worships AI God May Be the Way of the Future
... You might soon be able — if you're so inclined — to join a bona fide church worshiping an artificially intelligent god.
Former Google and Uber engineer Anthony Levandowski, according to a recent Backchannel profile, filed paperwork with the state of California in 2015 to establish Way of the Future, a nonprofit religious corporation dedicated to worshiping AI. The church's mission, according to paperwork obtained by Backchannel, is "to develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society." ...
Author and religious studies scholar Candi Cann, who teaches comparative religion at Baylor University, said Levandowski's spiritual initiative isn't necessarily that odd from a historical perspective.
"It strikes me that Levandowski's idea reads like a quintessential American religion," Cann told Seeker. "LDS [The Church of Jesus Christ of Latter-day Saints] and Scientology are both distinctly American traditions that focus on very forward thinking religious viewpoints. LDS discusses other planets and extra-terrestrial life. Scientology has an emphasis on therapy and a psychological worldview, which is quite modern and forward thinking." ...
Here's an initiative in a different direction - making AI the object of worship rather than another source of worshippers for the established religions ...
FULL STORY: http://www.livescience.com/60728-church-that-worships-ai-god.html
The continual conflation of AI and "self-aware" bothers me. An "AI" in colloquial terms is a set of heuristics, but there's nothing self-aware about it.
...Let's not forget the large tax breaks you can get if you've started a religion...

You've convinced me! I'll be starting a religion.