AI researchers use the example of an AI / AGI 'SantaNet' to illustrate the potential ethical and broader risks in giving an AI free rein ...
Could an AI 'SantaNet' Destroy The World?

Within the next few decades, according to some experts, we may see the arrival of the next step in the development of artificial intelligence. So-called "artificial general intelligence", or AGI, will have intellectual capabilities far beyond those of humans.

AGI could transform human life for the better, but uncontrolled AGI could also lead to catastrophes up to and including the end of humanity itself. This could happen without any malice or ill intent: simply by striving to achieve their programmed goals, AGIs could create threats to human health and well-being or even decide to wipe us out.

Even an AGI system designed for a benevolent purpose could end up doing great harm.

As part of a program of research exploring how we can manage the risks associated with AGI, we tried to identify the potential risks of replacing Santa with an AGI system – call it "SantaNet" – that has the goal of delivering gifts to all the world's deserving children in one night.

There is no doubt SantaNet could bring joy to the world and achieve its goal by creating an army of elves, AI helpers, and drones. But at what cost? We identified a series of behaviours which, though well-intentioned, could have adverse impacts on human health and wellbeing. ...

FULL STORY: https://www.sciencealert.com/could-an-ai-santanet-destroy-the-world
 
 
A question about AI. I know very little about this subject, so maybe someone can enlighten me: why are we pursuing this technology at all? What will it enable us to do that we can't do now?
And what's the likelihood that we'll just finish up with Marvin, the Paranoid Android, or something similar?
 
A question about AI. I know very little about this subject, so maybe someone can enlighten me: why are we pursuing this technology at all? What will it enable us to do that we can't do now?
And what's the likelihood that we'll just finish up with Marvin, the Paranoid Android, or something similar?
I don't see much use for a general AI, a computer or robot created to be like a human. AI will be great for specialized topics: making predictions in mathematics, statistics and society, carrying out solar system exploration, doing work too dangerous for humans. A general AI robot or computer could be used as a chatting companion for humans.
 
A question about AI. I know very little about this subject, so maybe someone can enlighten me: why are we pursuing this technology at all? What will it enable us to do that we can't do now?

There were two original motivations for AI R&D. The major motivation was to leverage computers to "industrialize decision-making"* in the same way machines and robots had industrialized production tasks. More specifically, the goal was to use technology to arrive more efficiently and accurately at decisions or conclusions that had previously relied on humans' ability to make inferences involving complex sets of rules, interdependent judgments, or complicated interactions among requirements.

The earliest workplace implementations were known as 'expert systems' because they aided a human operator in performing a task requiring extensive expertise and discrimination. Such expert systems were feasible and usable only insofar as their targeted problem area / subject matter was static, well-defined, and unambiguous. They could play games governed by deterministic rules just fine, but they couldn't help much with problems that were fuzzy or whose parameters were subject to variable interpretation and prioritization.
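The mechanics of a classic rule-based expert system can be sketched in a few lines: facts go in, deterministic if-then rules fire until nothing new can be concluded. This is a toy illustration (the rules and fact names are invented for the example), not any real deployed system.

```python
# Toy forward-chaining rule engine in the style of classic expert systems.
# Rules and facts are invented for illustration.

rules = [
    # (conditions that must all be known facts, conclusion to assert)
    ({"engine_wont_start", "battery_ok"}, "check_starter_motor"),
    ({"engine_wont_start", "lights_dim"}, "battery_flat"),
    ({"battery_flat"}, "recharge_or_replace_battery"),
]

def infer(facts):
    """Repeatedly apply rules until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"engine_wont_start", "lights_dim"}))
```

Note how well this works only because the rules are static and unambiguous; a fuzzy problem area has no such clean condition sets to encode.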

The most successful implementations aid workers by reaching accurate conclusions in a lot less time and with fewer errors.

* This was a phrase used by an extremely techno-optimistic French representative to an international working group on AI in government at which I was representing another nation. I've used it for the last 30 years as a catchphrase illustrating the most overblown and misguided hopes for AI.

The minor original motivation was to create testbeds for better understanding human decision making processes. This angle was firmly grounded in a belief that human decision making was a matter of information processing - a view that is not considered as apt (or as sound) as it once was.

The notion of creating a full-fledged artificial 'mind' (as opposed to a decision automat) is the domain of artificial general intelligence (AGI).


And what's the likelihood that we'll just finish up with Marvin, the Paranoid Android, or something similar?

The odds are low, but ... It's already the case that (e.g.) neural-net-based AIs are being considered to need something akin to 'emotion', and there have been recent R&D articles pondering whether such neural AIs will need to get enough 'sleep'. In the old days of hard-coded AIs, such seemingly lunatic behaviours / outcomes were the result of poor or deficient programming. In the more modern days of neural-based AIs, such problems are usually the result of poor or deficient training.
 
In case you've been sleeping too readily and / or too well lately ... :twisted:

Based primarily on themes drawn from formal computing theory, a group of researchers recently published a paper explaining why an AI 'superintelligence' cannot be controlled and will almost certainly constitute a threat.
Calculations Show It'll Be Impossible to Control a Super-Intelligent AI

The idea of artificial intelligence overthrowing humankind has been talked about for many decades, and scientists have just delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The answer? Almost definitely not.

The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyse. But if we're unable to comprehend it, it's impossible to create such a simulation.

Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

"A super-intelligence poses a fundamentally different problem than those typically studied under the banner of 'robot ethics'," write the researchers.

"This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."

Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centres on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one. ...

FULL STORY: https://www.sciencealert.com/calculations-show-it-d-be-impossible-to-control-a-rogue-super-smart-ai
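The halting-problem argument the story leans on can be sketched in a few lines of Python. Assume a perfect halting oracle `halts()` exists; then a program can be built that does the opposite of whatever the oracle predicts about it, which is a contradiction. This is an illustrative sketch of Turing's diagonal argument, not code from the paper.

```python
# Sketch of Turing's diagonal argument: assume a perfect halting oracle.
def halts(program, argument):
    """Hypothetical oracle: returns True iff program(argument) eventually
    halts. No real implementation can exist -- that is the point."""
    raise NotImplementedError

def paradox(program):
    # Do the opposite of whatever the oracle predicts about the
    # program applied to its own source.
    if halts(program, program):
        while True:          # oracle said "halts" -> loop forever
            pass
    else:
        return               # oracle said "loops" -> halt immediately

# paradox(paradox) halts iff it doesn't -- so halts() cannot exist.
```

The paper's containment argument rides on the same limit: a "does this superintelligence harm humans?" checker would be at least as powerful as `halts()`, and therefore cannot exist in general.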
 
Here are the bibliographic particulars and abstract from the published paper. The full paper is accessible at the link below.

Superintelligence Cannot be Contained: Lessons from Computability Theory
Manuel Alfonseca, Manuel Cebrian, Antonio Fernandez Anta, Lorenzo Coviello, Andrés Abeliuk, Iyad Rahwan
Journal of Artificial Intelligence Research, Vol. 70 (2021).
https://doi.org/10.1613/jair.1.12202

Abstract
Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potentially catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.

SOURCE: https://jair.org/index.php/jair/article/view/12202

FULL REPORT (PDF): https://jair.org/index.php/jair/article/view/12202/26642
 
A question about AI. I know very little about this subject, so maybe someone can enlighten me: why are we pursuing this technology at all? What will it enable us to do that we can't do now?
And what's the likelihood that we'll just finish up with Marvin, the Paranoid Android, or something similar?

The fundamental motivation for general-purpose AI, as with industrial machinery, is to replace workers with capital. Capital can be owned by one person, and if designed properly, neither talks back nor disobeys.

If general-purpose AI can be made smarter than any human, then there is also the potential benefit that it could, in principle, provide wonders undreamt-of by humans. The downside is that, as the paper above suggests, superintelligent AI CANNOT be made so that it never disobeys.

At best, keeping such a thing would be like owning a magic lamp. Maybe the genie will be like Robin Williams, and maybe it will be like Jafar. The AI genie also may develop a personality that is completely inhuman in ways that we can't even imagine.

The REALLY scary aspect of the whole situation is that the leap from an AI smarter than most workers, to an AI smarter than any scientist or manager, is comparatively small. If the AI is in any sense SELF-improving, which is a common idea for bringing the AI up to the human level in the first place, then it may very easily blow right past us without any human noticing.

Thus, an attempt to produce "only" an efficient and creative AI janitor may churn out an AI Jafar anyway without ever setting out to do so.
 
A further motivation for nation-states is the fear that their geopolitical enemies will "conjure" a genie before they do. It's the new "missile gap."
 
Oxford and Cambridge now have programs monitoring AI.

Their thinking:

AI can harm humans.

Humans can harm AI.

AI will be more honest and ethical than humans, and humans will attack the AI.
 
AI conquers challenge of 1980s platform games

Scientists have come up with a computer program that can master a variety of 1980s exploration games, paving the way for more self-sufficient robots.

They created a family of algorithms (software-based instructions for solving a problem) able to complete classic Atari games, such as Pitfall.

Previously, these scrolling platform games have been challenging to solve using artificial intelligence (AI).

https://www.bbc.com/news/science-environment-56194855
 
Hmmm. Is this really a good idea?

I can't let you review that, Dave.

Artificial intelligence (AI) researchers are hoping to use the tools of their discipline to solve a growing problem: how to identify and choose reviewers who can knowledgeably vet the rising flood of papers submitted to large computer science conferences.

In most scientific fields, journals act as the main venues of peer review and publication, and editors have time to assign papers to appropriate reviewers using professional judgment. But in computer science, finding reviewers is often by necessity a more rushed affair: Most manuscripts are submitted all at once for annual conferences, leaving some organizers only a week or so to assign thousands of papers to a pool of thousands of reviewers.

This system is under strain: In the past 5 years, submissions to large AI conferences have more than quadrupled, leaving organizers scrambling to keep up. One example of the workload crush: The annual AI Conference on Neural Information Processing Systems (NeurIPS)—the discipline’s largest—received more than 9000 submissions for its December 2020 event, 40% more than the previous year. Organizers had to assign 31,000 reviews to about 7000 reviewers. “It is extremely tiring and stressful,” says Marc’Aurelio Ranzato, general chair of this year’s NeurIPS. “A board member called this a herculean effort, and it really is!”

Fortunately, they had help from AI. Organizers used existing software, called the Toronto Paper Matching System (TPMS), to help assign papers to reviewers. TPMS, which is also used at other conferences, calculates the affinity between submitted papers and reviewers’ expertise by comparing the text in submissions and reviewers’ papers. The sifting is part of a matching system in which reviewers also bid on papers they want to review.
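The affinity-scoring idea behind text-matching systems like TPMS can be illustrated with a bare-bones sketch: represent each document as a bag of words and score reviewers by cosine similarity against their past papers. This is a simplified stand-in (real systems use TF-IDF weighting and richer models), and the example data and reviewer names are hypothetical.

```python
# Bare-bones affinity scoring: bag-of-words cosine similarity between a
# submission and each reviewer's past papers. Illustrative only; real
# systems like TPMS use TF-IDF weighting and more.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def affinity(submission, reviewer_papers):
    """Score a submission against the combined text of a reviewer's papers."""
    return cosine(vectorize(submission), vectorize(" ".join(reviewer_papers)))

# Hypothetical example data:
sub = "neural network pruning for efficient inference"
reviewers = {
    "r1": ["pruning deep neural networks", "efficient inference on mobile"],
    "r2": ["bayesian statistics for ecology surveys"],
}
scores = {r: affinity(sub, papers) for r, papers in reviewers.items()}
print(max(scores, key=scores.get))  # the better-matched reviewer
```

In the real pipeline these scores feed a larger matching step in which reviewers also bid on papers, as the article notes.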

But newer AI software could improve on that approach. One newer affinity-measuring system, developed by the paper-reviewing platform OpenReview, uses a neural network—a machine learning algorithm inspired by the brain’s wiring—to analyze paper titles and abstracts, creating a richer representation of their content. Several computer science conferences, including NeurIPS, will begin to use it this year in combination with TPMS, say Melisa Bok and Haw-Shiuan Chang, computer scientists at OpenReview and the University of Massachusetts, Amherst. ...

https://www.sciencemag.org/news/2021/04/ai-conferences-use-ai-assign-papers-reviewers
 
A computer has been trained to find Waldo in pictures.

 
500,000 Facebook users were hacked in 2019 from 106 different countries.

This information is now showing up on hackers' websites.

So much for AI.
 
500,000 Facebook users were hacked in 2019 from 106 different countries.
This information is now showing up on hackers' websites.
So much for AI.

It was 500 million Facebook members' data that was accessed, and the hack had nothing to do with AI.
 
In 2014, news agencies ran a Stephen Hawking story in which he predicted that AI would end mankind.

Well, seven years later we are still here, but you hope AI is used for good.
 
Well, Souleater,

I do not know about the UK, but in the U.S. everyone was in a state of panic when 1999 turned to 2000.

People did not know if their phones, computers, ATM money machines would work.

Banks were telling people to have extra cash on hand in case there was no money.

At the job I had at that time, the company was making emergency plans in case its computers failed, requiring employees to come back to work after midnight if things were a total mess.

The point is, a bad person could really turn the machines against us, like in the movie The Terminator.

I think we are treading on dangerous ground.

Just like the new jet fighter the U.S. is developing.

Rumors are it does not have to have a pilot, and that is scary.
 
Hmmm. Is this really a good idea?

I can't let you review that, Dave.

Artificial intelligence (AI) researchers are hoping to use the tools of their discipline to solve a growing problem: how to identify and choose reviewers who can knowledgeably vet the rising flood of papers submitted to large computer science conferences.



https://www.sciencemag.org/news/2021/04/ai-conferences-use-ai-assign-papers-reviewers
This explains why my paper "AI is Evil and Must Be Stopped!" keeps getting rejected.
 
Well, Souleater,

I do not know about the UK, but in the U.S. everyone was in a state of panic when 1999 turned to 2000.

People did not know if their phones, computers, ATM money machines would work.

Banks were telling people to have extra cash on hand in case there was no money.

At the job I had at that time, the company was making emergency plans in case its computers failed, requiring employees to come back to work after midnight if things were a total mess.

The point is, a bad person could really turn the machines against us, like in the movie The Terminator.

I think we are treading on dangerous ground.

Just like the new jet fighter the U.S. is developing.

Rumors are it does not have to have a pilot, and that is scary.
I'm pretty sure the first people to utilise 'proper' AI will be the military, so we are all f*cked. I take it you are aware of the 'grey goo' theory around nanotechnology too? :Givingup:
 
Grey goo, that bird stuff on my car?

I have never heard of grey goo so I will have to look it up.
 
Grey goo, that bird stuff on my car?

I have never heard of grey goo so I will have to look it up.
Gray goo is a hypothetical global catastrophic scenario involving molecular nanotechnology in which out-of-control self-replicating machines consume all biomass on Earth while building more of themselves, a scenario that has been called ecophagy. Wikipedia
 