Artificial Intelligence (A.I.)

EnolaGaia

I knew the job was dangerous when I took it ...
Staff member
Joined
Jul 19, 2004
Messages
20,112
Reaction score
27,688
Points
309
Location
Out of Bounds
AI researchers use the example of an AI / AGI 'SantaNet' to illustrate the potential ethical and broader risks in giving an AI free rein ...
Could an AI 'SantaNet' Destroy The World?

Within the next few decades, according to some experts, we may see the arrival of the next step in the development of artificial intelligence. So-called "artificial general intelligence", or AGI, will have intellectual capabilities far beyond those of humans.

AGI could transform human life for the better, but uncontrolled AGI could also lead to catastrophes up to and including the end of humanity itself. This could happen without any malice or ill intent: simply by striving to achieve their programmed goals, AGIs could create threats to human health and well-being or even decide to wipe us out.

Even an AGI system designed for a benevolent purpose could end up doing great harm.

As part of a program of research exploring how we can manage the risks associated with AGI, we tried to identify the potential risks of replacing Santa with an AGI system – call it "SantaNet" – that has the goal of delivering gifts to all the world's deserving children in one night.

There is no doubt SantaNet could bring joy to the world and achieve its goal by creating an army of elves, AI helpers, and drones. But at what cost? We identified a series of behaviours which, though well-intentioned, could have adverse impacts on human health and wellbeing. ...
FULL STORY: https://www.sciencealert.com/could-an-ai-santanet-destroy-the-world
 

GNC

King-Sized Canary
Joined
Aug 25, 2001
Messages
31,357
Reaction score
18,146
Points
309
"Santa Claus is gunning you down!"
 

GuitarGeorge

Fresh Blood
Joined
Sep 27, 2020
Messages
14
Reaction score
28
Points
13
A question about AI. I know very little about this subject, so maybe someone can enlighten me: why are we pursuing this technology at all? What will it enable us to do that we can't do now?
And what's the likelihood that we'll just finish up with Marvin, the Paranoid Android, or something similar?
 

Vardoger

I'm #1 so why try harder
Joined
Jun 3, 2004
Messages
5,877
Reaction score
5,114
Points
309
Location
Scandinavia
A question about AI. I know very little about this subject, so maybe someone can enlighten me: why are we pursuing this technology at all? What will it enable us to do that we can't do now?
And what's the likelihood that we'll just finish up with Marvin, the Paranoid Android, or something similar?
I don't see much use for a general AI, a computer or robot created to be like a human. AI will be great for specialized tasks: making predictions in mathematics, statistics and society, carrying out solar system exploration, doing work too dangerous for humans. A general AI robot or computer could also be used as a chatting companion for humans.
 

EnolaGaia

I knew the job was dangerous when I took it ...
Staff member
Joined
Jul 19, 2004
Messages
20,112
Reaction score
27,688
Points
309
Location
Out of Bounds
A question about AI. I know very little about this subject, so maybe someone can enlighten me: why are we pursuing this technology at all? What will it enable us to do that we can't do now?
There were two original motivations for AI R&D. The major motivation was to leverage computers to "industrialize decision-making"* in the same way machines and robots had industrialized production tasks. More specifically, the goal was to leverage technology to be more efficient and more accurate in arriving at decisions or conclusions that previously relied on humans' abilities to make inferences involving complex sets of rules, interdependent judgments, or complicated interactions among requirements.

The earliest workplace implementations were known as 'expert systems' because they aided a human operator in performing a task requiring extensive expertise and discrimination. Such expert systems were feasible and usable only to the extent that their target problem area / subject matter was static, well-defined, and unambiguous. They can play games governed by deterministic rules just fine, but they can't help much with problems that are fuzzy or whose parameters are subject to variable interpretations and prioritization.

The most successful implementations aid workers by reaching accurate conclusions in a lot less time and with fewer errors.
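To make the 'expert system' idea concrete, here's a minimal forward-chaining rule engine sketch. The rules, fact names, and the `infer` function are all invented for illustration; real expert systems (MYCIN, XCON, and the like) used far richer rule languages with certainty factors and conflict resolution.

```python
# Minimal forward-chaining rule engine: each rule is (premises, conclusion).
# A rule "fires" when all its premises are known facts, adding its conclusion.
# The engine loops until no rule can derive anything new (a fixed point).

RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "refer_to_doctor"),
]

def infer(facts, rules):
    """Repeatedly apply rules until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # premises <= facts tests whether every premise is already known
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_cough", "short_of_breath"}, RULES))
```

Note how brittle this is: the engine works only because every fact is a crisp, unambiguous token. That's exactly why expert systems excelled in static, well-defined domains and failed on fuzzy ones.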

* This was a phrase used by an extremely techno-optimistic French representative to an international working group on AI in government at which I was representing another nation. I've used it for the last 30 years as a catchphrase illustrating the most overblown and misguided hopes for AI.

The minor original motivation was to create testbeds for better understanding human decision making processes. This angle was firmly grounded in a belief that human decision making was a matter of information processing - a view that is not considered as apt (or as sound) as it once was.

The notion of creating a full-fledged artificial 'mind' (as opposed to a decision automaton) is the domain of artificial general intelligence (AGI).


And what's the likelihood that we'll just finish up with Marvin, the Paranoid Android, or something similar?
The odds are low, but ... It's already being argued that (e.g.) neural-net-based AIs need something akin to 'emotion', and recent R&D articles have pondered whether such neural AIs will need to get enough 'sleep'. In the old days of hard-coded AIs, such seemingly lunatic behaviours / outcomes were the result of poor or deficient programming. In the more modern days of neural-based AIs, such problems are usually the result of poor or deficient training.
 

EnolaGaia

I knew the job was dangerous when I took it ...
Staff member
Joined
Jul 19, 2004
Messages
20,112
Reaction score
27,688
Points
309
Location
Out of Bounds
In case you've been sleeping too readily and / or too well lately ... :twisted:

Based primarily on themes drawn from formal computing theory, a group of researchers recently published a paper explaining why an AI 'superintelligence' cannot be controlled and will almost certainly constitute a threat.
Calculations Show It'll Be Impossible to Control a Super-Intelligent AI

The idea of artificial intelligence overthrowing humankind has been talked about for many decades, and scientists have just delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The answer? Almost definitely not.

The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyse. But if we're unable to comprehend it, it's impossible to create such a simulation.

Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.

"A super-intelligence poses a fundamentally different problem than those typically studied under the banner of 'robot ethics'," write the researchers.

"This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable."

Part of the team's reasoning comes from the halting problem put forward by Alan Turing in 1936. The problem centres on knowing whether or not a computer program will reach a conclusion and answer (so it halts), or simply loop forever trying to find one. ...
FULL STORY: https://www.sciencealert.com/calculations-show-it-d-be-impossible-to-control-a-rogue-super-smart-ai
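The halting problem angle can be made concrete with a toy sketch of my own (not from the paper). The best a containment system can do in general is run a bounded simulation: give the computation a step budget and watch. A run that halts within budget is confirmed, but exhausting the budget proves nothing, since the program might halt one step later. The Collatz iteration is a nice example because nobody has proved it terminates for all inputs, so empirical checking is all we have.

```python
# A bounded "halting check": simulate a computation for at most max_steps.
# It can CONFIRM that a run halts, but a budget-exhausted verdict is merely
# inconclusive -- it never proves non-termination. Turing's 1936 result says
# no general procedure can close that gap.

def collatz_steps(n, max_steps):
    """Return the number of steps for n to reach 1, or None if the
    step budget runs out (inconclusive, NOT proof of non-halting)."""
    steps = 0
    while n != 1:
        if steps >= max_steps:
            return None  # budget exhausted: we simply don't know
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27, 1000))  # halts: 27 reaches 1 in 111 steps
print(collatz_steps(27, 10))    # None: too small a budget to decide
```

The paper's containment argument scales this up: verifying that a superintelligence 'causes no harm' would require simulating it on inputs as complex as the state of the world, and the halting problem rules out any guarantee that such an analysis terminates with an answer.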
 

EnolaGaia

I knew the job was dangerous when I took it ...
Staff member
Joined
Jul 19, 2004
Messages
20,112
Reaction score
27,688
Points
309
Location
Out of Bounds
Here are the bibliographic particulars and abstract from the published paper. The full paper is accessible at the link below.

Superintelligence Cannot be Contained: Lessons from Computability Theory
Manuel Alfonseca, Manuel Cebrian, Antonio Fernandez Anta, Lorenzo Coviello, Andrés Abeliuk, Iyad Rahwan
Journal of Artificial Intelligence Research, Vol. 70 (2021).
https://doi.org/10.1613/jair.1.12202

Abstract
Superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. In light of recent advances in machine intelligence, a number of scientists, philosophers and technologists have revived the discussion about the potentially catastrophic risks entailed by such an entity. In this article, we trace the origins and development of the neo-fear of superintelligence, and some of the major proposals for its containment. We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.

SOURCE: https://jair.org/index.php/jair/article/view/12202

FULL REPORT (PDF): https://jair.org/index.php/jair/article/view/12202/26642
 

Aether Blue

Fresh Blood
Joined
Aug 14, 2020
Messages
18
Reaction score
52
Points
13
A question about AI. I know very little about this subject, so maybe someone can enlighten me: why are we pursuing this technology at all? What will it enable us to do that we can't do now?
And what's the likelihood that we'll just finish up with Marvin, the Paranoid Android, or something similar?
The fundamental motivation for general-purpose AI, as with industrial machinery, is to replace workers with capital. Capital can be owned by one person, and if designed properly, neither talks back nor disobeys.

If general-purpose AI can be made smarter than any human, then there is also the potential benefit that it could, in principle, provide wonders undreamt-of by humans. The downside here is that, as the paper above suggests, superintelligent AI CANNOT be made so that it never disobeys.

At best, keeping such a thing would be like owning a magic lamp. Maybe the genie will be like Robin Williams, and maybe it will be like Jafar. The AI genie also may develop a personality that is completely inhuman in ways that we can't even imagine.

The REALLY scary aspect of the whole situation is that the leap from an AI smarter than most workers, to an AI smarter than any scientist or manager, is comparatively small. If the AI is in any sense SELF-improving, which is a common idea for bringing the AI up to the human level in the first place, then it may very easily blow right past us without any human noticing.

Thus, an attempt to produce "only" an efficient and creative AI janitor may churn out an AI Jafar anyway without ever setting out to do so.
 

Aether Blue

Fresh Blood
Joined
Aug 14, 2020
Messages
18
Reaction score
52
Points
13
A further motivation for nation-states is the fear that their geopolitical enemies will "conjure" a genie before they do. It's the new "missile gap."
 