Thanks for your comments, EG. There's a lot to think about here.

The 'intelligence' ascribed in AI (in all generations) more closely corresponds to your category of 'cleverness', insofar as AI applications are coded or trained to operate effectively in the course of a particular grounded activity or procedure.

Yes, after a little more reflection, I came to the same conclusion. In the present state of the art AI is CLEVERNESS writ large, and many orders of magnitude greater than human.

The criterion you cited for discriminating between the two categories - abstraction - may serve as a rule of thumb when differentiating them with respect to human and animal (and human versus animal) behaviors, but it doesn't afford any traction with respect to AI's.

This is because all AI's are operating with an abstracted model or rule base or training history that is hoped to adequately reflect best practice, but isn't linked to grounded praxis.

Yes, I take your point. But aren't those abstracted models typically structured to describe physical objects, data, or activities? And can those models extrapolate beyond their inputs to arrive at novel, original concepts?

I suspect a version of the dichotomy applicable to AI's would be better framed with respect to something along the lines of 'novelty' / 'creativity' than 'abstraction'.

I think there's room for novelty and creativity in "clever" activity as well as "intelligent". Perhaps creativity falls in the blurred region between the two.
 
If it were given the problem, what would an artificial intelligence do?
Maybe it would find a way to cause starvation, engineer the release of awful diseases or fire off a few nukes?
 
... Yes, I take your point. But aren't those abstracted models typically structured to describe physical objects, data, or activities? And can those models extrapolate beyond their inputs to arrive at novel, original concepts? ...

Yes, the models are structured to represent or emulate the discrete chunks of reality (objects, etc.) with which one would interact if one were actually performing a task (procedure, game, etc.) in the 'real world'. However, dealing with a model is not the same thing as dealing with the modeled.

As Alfred Korzybski put it: "The map is not the territory".

In symbolic AI, the implementor has to explicitly develop and embed the map. This usually turns out to be far more complex than initially realized - particularly if it's a map covering anything more complicated than a game played with a finite set of well-defined rules.

In trained (e.g., neural-based) AI's the AI has to be (figuratively) led through enough simulations to act as if it were adequately following a map the developers had in mind. In addition to the complexity that always bedeviled the symbolic approach, this approach entails a sort of indirection in what the map may be.

Let me try to illustrate what I mean by this indirection bit ...

I used to tell people if you can't adequately guide a novice step-by-step through the target procedure (task, game, whatever ... ) via a phone call you have no hope of generating an adequate symbolic AI for that domain (or else a lot more knowledge acquisition would be required). This rule stood the test of time, and it became the basis for improving some of our knowledge acquisition techniques.

With a trained AI (vis a vis the phone call metaphor) you can only serve as a reference point by which the novice is expected to generate his / her own map from scratch. You can only tell the novice what to start with, determine wherever it is the novice ends up, advise the novice whether his / her final outcome was good or bad, and ask the novice to modify his / her own map (which you cannot ever see or inspect) as a sort of scorecard on success versus failure.

As to the second part ...

Generally speaking - no, the model can't modify itself; it can't extrapolate anything beyond the bounds of its current configuration. One can add yet another layer of complexity by including the ability to evaluate outcomes and shift criteria in response (the essence of machine learning), but:

(a) this is a separate / separable capability distinct from the core inference engine's engagement with the target task; and ...

(b) modifications are limited to re-arrangement of whatever elements are built or trained into the AI (rules / ontologies for symbolic AI's; whatever pattern of associations the trained AI has accreted).

Phrased more succinctly ... AI's have no means for abstracted reflection on the game they're playing in the same sense it seems you were invoking in identifying 'abstraction' as the key discriminator between clever and intelligent operations.
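
To make points (a) and (b) a bit more concrete, here is a toy sketch in Python - purely my own illustration, with invented rules, symptoms, and weights, not anyone's actual system. The symbolic side carries an explicit, inspectable rule map; the 'learning' layer can only re-weight labels that are already built in; and neither side can conjure a concept that was never coded or trained into it.

    # Toy illustration only (hypothetical rules and weights, not a real system).
    RULES = {                                   # symbolic AI: an explicit, inspectable map
        "fever and cough": "suspect flu",
        "fever and rash": "suspect measles",
    }
    weights = {"flu": 0.5, "measles": 0.5}      # trained AI: opaque weighted associations

    def diagnose(symptoms):
        # symbolic path: fire the matching rule, or fail outright
        return RULES.get(symptoms, "no rule matches - nothing to conclude")

    def learn(label, outcome_was_good):
        # the 'machine learning' layer: shift criteria in response to outcomes,
        # but only by re-weighting labels that already exist in the model
        step = 0.1 if outcome_was_good else -0.1
        weights[label] = max(0.0, min(1.0, weights[label] + step))

    print(diagnose("fever and cough"))          # suspect flu
    print(diagnose("fatigue and tremor"))       # no rule matches - nothing to conclude
    learn("flu", outcome_was_good=False)
    print(weights)                              # criteria shift; no new concept appears

Ask it about a symptom pattern outside its map and it simply has nothing to say - the 'no abstracted reflection on the game' limitation in miniature.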
 

Greatly enjoyed that post, EG. Thanks.

I would not dispute that the creation/programming of AI's falls in the "intelligent" camp.

The idea I'm trying to get at, however clumsily, is that there are two (or more) classes of mental activity, if it's even proper to use the term "mental" in this context. Humans partake of both. I don't think we know enough to say with confidence that at least some animals don't partake of both. AI's, so far as I can tell, are still limited to one.

My choice of "abstract" and "tangible" to differentiate "intelligence" and "cleverness" was probably a poor choice. If I were able to find the proper terms, I'd be a step closer to understanding the nature of sentience. That will remain beyond my grasp for now.
 
Maybe it would find a way to cause starvation, engineer the release of awful diseases or fire off a few nukes?

As we have done to ourselves? I will modify my position to agree that AI's under the control of humans may pose an existential threat to our survival.
 
As we have done to ourselves? I will modify my position to agree that AI's under the control of humans may pose an existential threat to our survival.

I'm not certain what you intend 'under the control of humans' to mean.

If you're referring exclusively to human monitoring and 'control' during a deployed AI's ongoing operations, I would argue the involvement of human oversight is a potentially mitigating rather than an innately aggravating factor.

One of the lessons learned decades ago was that an AI application's capacity for dangerous / erroneous outcomes was directly proportional to the autonomy vested in that application for determining and executing courses of action. Early on, AI was seen as the latest advance in automation - a means for automating decision making above and beyond established automation in whatever actions those decisions drove. I recall attending an international AI working group conference in Paris where our technocratic French host bragged that AI would "industrialize decision making" - one of the most cringe-worthy claims I'd ever heard.

Many of the early failures in applied AI resulted from the naive belief a 'decision automat' could be feasibly configured so as to spit out reliably reasonable conclusions that no other party needed to sanity-check. Various outcomes ensued in practice, ranging from mere hilarity to physical lethality.

Most of these failures could be traced back to the earliest and most important point at which humans exerted 'control' over the AI.

The primary form of 'control' humans have over AI's (both symbolic and trained) lies in their initial 'education' or 'programming' (coding; training) by which the application's inferential capabilities are implemented. Shortcomings, oversights, and / or outright errors in 'educating' the AI in the first place are the most unavoidable source of disappointing - and even lethal - outcomes.

As the tree is bent = as the inferential rules are specified = as the neural net is trained.
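
If it helps, here is the 'as the tree is bent' point as a toy sketch - a single learned threshold standing in for a trained net, with invented sensor readings and cutoffs of my own, not any real application. The labels were prepared with the wrong cutoff, and the trained model dutifully reproduces that mistake:

    # Hypothetical training data: (sensor reading, alarm?). Whoever labelled it
    # used a cutoff of 80 when the real requirement was to alarm above 50, so
    # the 60 and 70 readings are mislabelled as safe.
    data = [(20, 0), (40, 0), (60, 0), (70, 0), (85, 1), (90, 1)]

    # 'training': the simplest possible learner - a threshold placed midway
    # between the highest reading labelled safe and the lowest labelled alarm
    highest_safe = max(x for x, y in data if y == 0)
    lowest_alarm = min(x for x, y in data if y == 1)
    learned_threshold = (highest_safe + lowest_alarm) / 2   # 77.5

    for reading in (65, 88):
        print(reading, "alarm" if reading > learned_threshold else "safe")
    # 65 -> safe: the model is faithful to its flawed 'education', not to the
    # requirement its educators had in mind

Nothing in the arithmetic is wrong; the error was baked in at the 'education' stage - which is exactly where the most consequential human control gets exercised.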
 
So basically there is no point in AI.

It can't do anything that a human can't do; it can only do these things faster, and that is just down to improved processor speeds.

If it can't be allowed to make executive decisions then what is it good for?

As Mythopoeika wrote above:

... Maybe it would find a way to cause starvation, engineer the release of awful diseases or fire off a few nukes? ...

I assume this was tongue in cheek as these are the usual solutions worked out by humans. AI is supposed to be a stage beyond this.

The most common use for AI that we will have to deal with is the autonomous car. And this has already killed at least two people by making mistakes that any driver would probably have avoided.

So, the big question...

What practical use is it?

We have seen the example where an AI based 'respondent' was let loose on a social network and within days it had to be shut down as the responses and language it was learning were so offensive.

Any automatic machine, like a piece of code, is only as good as its programmer.

INT21
 
AI applications can have practical benefits. In the best case(s), they provide a means for reaching conclusions faster and with less chance of inferential missteps in problem domains of high complexity.

Some problem domains are sufficiently 'regular' in terms of their elements and logic / rules that a structured decision aid can yield reliable results. This sort of structured decision aiding has a long history in the print medium with (e.g.) decision charts, procedural guides, and other such reference materials.

AI offers 2 advantages over a hardcopy decision aid. First, it can be dynamic and interactive as needed, so as to alleviate the need to (figuratively) flip pages, etc. Second, it 'connects the dots' automatically, and relieves the human user of potentially huge cognitive and procedural burdens otherwise required to get to any intermediate, much less the final, conclusion(s).

The degree of benefit, however, is something that has to be evaluated in the context of the machine's results as they relate to whatever the problem domain may be. It isn't a feature of the machine itself.

AI's are most useful, and least dangerous in and of themselves, when they serve an advisory role with a human in the loop.
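
As a rough sketch of what I mean by 'connecting the dots' with a human in the loop - a toy forward-chaining rule set I've invented for illustration, not a fielded system:

    # Toy advisory aid: chain rules automatically, but let a human approve the result.
    RULES = [
        # (facts required, fact concluded)
        ({"pressure_high", "temp_rising"}, "vessel_stressed"),
        ({"vessel_stressed", "valve_closed"}, "recommend: open relief valve"),
    ]

    def advise(observed):
        facts = set(observed)
        changed = True
        while changed:                          # forward-chain until nothing new fires
            changed = False
            for needed, conclusion in RULES:
                if needed <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return [f for f in facts if f.startswith("recommend:")]

    for rec in advise(["pressure_high", "temp_rising", "valve_closed"]):
        answer = input(f"{rec} - approve? [y/n] ")   # the human keeps the executive decision
        print("executing" if answer.lower() == "y" else "deferred to the operator")

The machine does the tedious intermediate inference; the human retains the executive decision - which is where, in my experience, the benefit is highest and the danger lowest.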
 
... Any automatic machine, like a piece of code, is only as good as its programmer.

In the case of software (including AI's of all stripe) the programmer's role is important, but not the biggest issue. The biggest issue is knowing what to program in the first place.

Fifty years ago it was already recognized that the person who knew how to code software wasn't necessarily informed about what the software was supposed to do in the first place. This led to the creation of the 'system analyst' role as originally defined. Before too long, the 'system analyst' label had been corrupted to connote something more like 'senior programmer' than 'person who specifies what needs to be programmed'.

In the early days of applied AI, a similar thing happened. The specialists in exotic AI languages, algorithms, etc. (i.e., the programmers) were clueless in figuring out the 'knowledge' and 'rules' their code was supposed to reflect and emulate. This led to the emergence of both 'knowledge acquisition' as the process of understanding and compiling the relevant 'knowledge' and the associated position of 'knowledge engineer' as the role dedicated to this critical effort.
 
I can understand the programming dilemma.

On the one hand you have people who know what needs to be done, but not how to code it. And then you have the coders who may not know exactly what the purpose of the program is.

But the person who needs the product can do the initial flow chart work. Then the 'script kiddies' can write the actual code.

I amuse myself with simple BASIC programming to run some hardware. One day I may even try to learn C++.

Or maybe not.

INT21
 
... If you're referring exclusively to human monitoring and 'control' during a deployed AI's ongoing operations, I would argue the involvement of human oversight is a potentially mitigating rather than an innately aggravating factor.

One of the lessons learned decades ago was that an AI application's capacity for dangerous / erroneous outcomes was directly proportional to the autonomy vested in that application for determining and executing courses of action. ...

... The primary form of 'control' humans have over AI's (both symbolic and trained) lies in their initial 'education' or 'programming' (coding; training) by which the application's inferential capabilities are implemented. Shortcomings, oversights, and / or outright errors in 'educating' the AI in the first place are the most unavoidable source of disappointing - and even lethal - outcomes. ...

Your first point relates to the specific scenario I had in mind, where human oversight is NOT a mitigating factor but, because of an extreme ideological/political/racial mindset, actually drives the system to a lethal outcome.

Your second point raises the specter of a type of negative control, in which a conscious decision is made NOT to exercise any oversight but to allow the system complete autonomy. An example of this scenario is a "launch-on-warning" system in control of a country's nuclear arsenal.

Both are troubling, I think.
 
... On the one hand you have people who know what needs to be done, but not how to code it. And then you have the coders who may not know exactly what the purpose of the program is.

Exactly ... The problem becomes one of understanding the relevant factoids / 'knowledge' / requirements on the front end and translating these into workable specifications for a solution or improvement (e.g., a new software app) on the back end.


But the person who needs the product can do the initial flow chart work. Then the 'script kiddies' can write the actual code. ...

Been there, tried that, no - it doesn't work ...

The main problem is on the front end, with the expert(s) for whom some form of intervention (notice I use this term rather than 'innovation') is intended. Expertise in a task or procedure is a much trickier thing than most people realize. The most proficient experts are typically those who've learned over a long time and for whom certain processes (e.g., working a particular aspect of a pending case) have become sufficiently automatic that they can't express or explain how they do what they do.

The key term here is 'tacit knowledge' - i.e., the 'know-how' attributed to an expert that he / she doesn't really have to think about, rarely if ever reflects upon, and hence cannot elucidate in (e.g.) an interview. Such 'tacit knowledge' often comprises the bulk of what separates the expert from a well-trained rookie.

The classic example from AI's early days was chicken sexing - the task of determining the gender of newborn chicks. Proficient poultry workers could sex a chick (quit your snickering ... ) in a very few seconds, and they could train up a novice to adequately accurate performance in a matter of an hour or so. However, they were at a loss to explain how they did it.

A lot of early AI work was dedicated to medical diagnostic applications. Expert diagnosticians were often flustered when asked how they arrived at their conclusions, and more than a few such experts invented responses to avoid embarrassment, bailed out of knowledge acquisition projects, and / or reported severe psychological disruptions (up to and including a few retirements and one purported suicide) when confronted with pressure to explain what was to them reliable yet inexplicable or inexpressible.

I've had good results from interactive sessions in which experts and I jointly draw diagrams, work through sample cases, etc. I've obtained even better results from simply sitting with experts and watching them work, interrupting only to ask for clarification on what I'm observing.

I've obtained informative - but typically not decisive - results by collecting diagrams, flow charts, lists, etc., that experts have created in response to requests for how they conceptualize some aspect of what they do. These are most often 'informative' in the sense they illustrate how actual praxis varies from, or improvises to overcome deficiencies in, formal references / guides / aids.

It's more useful to conduct a sort of 'raid' on experts' workspaces to identify and obtain copies of any reference aids they keep in plain view for ready consultation.
 
I was once stumped for days when I couldn't get a proper (i.e. mathematically correct) answer to part of a program that read

A=INT(x)

Seems simple enough, and it worked most of the time - but not every time.

Then a lab tech took one look at the program and said 'oh, it's obvious. At this point you're trying to take the integer of an integer; won't work'.

Obvious to him, not so to me.

INT21
 
I'm not scared of a computer that can pass the Turing test. I'm scared of one that intentionally fails it.
 
At least the killer droids will have smiley faces.

CIMON is the droid predecessor to R2-D2 from “Star Wars” or HAL 9000 out of “2001: A Space Odyssey”. But it’s not science fiction - CIMON is an AI, dubbed “the flying brain,” that has just been launched into space aboard SpaceX’s “Dragon” cargo ship. It will assist the German astronaut Alexander Gerst in carrying out a number of scientific tasks aboard the International Space Station.

CIMON, which stands for “Crew Interactive Mobile Companion," is the first robot of its kind. Designed by Airbus and IBM, it weighs 11 pounds and is about the size of a basketball. Still, it packs the neural network strength of IBM’s Watson supercomputer in its “brain”, reports Techcrunch.

Speaking as a cartoon face on the monitor, CIMON has been trained to recognize the voice and face of the astronaut Gerst, who is also a geophysicist with the European Space Agency. Gerst will be able to call CIMON, prompting the droid to utilize its more than a dozen propellers to follow the voice and float over to it. CIMON's camera will hover at about eye level, detecting the person it's looking for. The bot's programming can even interpret emotional states and will react appropriately. Its emotional intelligence is supposed to help monitor the psychological states of the crew.



https://bigthink.com/paul-ratner/fl...=Social&utm_source=Twitter#Echobox=1530408677
 
Elon Musk-AI hybrid will defeat the HALs.

Why Elon Musk thinks human-A.I. symbiosis can thwart “evil dictator A.I.”

Last Sunday, a particularly unusual DotA 2 tournament took place. DotA, a complicated, real-time strategy game, is among the most popular e-sports in the world. The five players of one team—Blitz, Cap, Fogged, Merlini, and MoonMeander—were ranked in the 99.95th percentile, inarguably among the best DotA 2 players in the world. However, their opponent still defeated them in two out of three games, winning the tournament. An evenly matched game is supposed to take 45 minutes, but these two were over in 14 and 21 minutes, respectively.


Their opponent was a team of five neural networks developed by Elon Musk’s OpenAI, collectively referred to as OpenAI Five. Prior to Sunday’s tournament, the neural network played 180 years’ worth of DotA matches against itself every day, edging incrementally closer to mastery over the game. The reason why its creators chose DotA as OpenAI Five’s focus was to mimic the incredibly variable and complex nature of the real world; DotA is a complicated game, and if an A.I. is going to be able to process and interact with the world rather than, say, learn to plot a GPS course or play chess, open-ended video games are a good place to start.

While this is an impressive technical achievement on its own, Musk’s victory tweet highlighted that this was just a stepping stone toward the future of A.I. ...

https://bigthink.com/matt-davis/why-elon-musk-thinks-humanai-symbiosis-will-counter-evil-dictator-ai
 
... DotA is a complicated game, and if an A.I. is going to be able to process and interact with the world rather than, say, learn to plot a GPS course or play chess, open-ended video games are a good place to start. ...

There's something of a naive fallacy in this ...

No matter how complicated Defense of the Ancients may be, it's still a finite domain with immutable constraints.

DotA may be the most complicated 'toy world' to which AI has been applied, but it's still a toy world in the same sense noted and criticized within AI decades ago.

It's not as simple as chess, but it's still unrealistically simplistic.

The application of neural AI's doesn't make this situation better than it was back in the days of wholly symbolic / representational AI's. If anything it makes things worse, because it's essentially impossible to decipher / analyze the neural net's 'logic' developed through training and any subsequent learning its architecture affords.
 
The application of neural AI's doesn't make this situation better than it was back in the days of wholly symbolic / representational AI's. If anything it makes things worse, because it's essentially impossible to decipher / analyze the neural net's 'logic' developed through training and any subsequent learning its architecture affords.

Recently DARPA I2O released Explainable Artificial Intelligence (XAI) to encourage research on this topic. The main goal of XAI is to create a suite of machine learning techniques that produce explainable models to enable users to understand, trust, and manage the emerging generation of Artificial Intelligence (AI) systems.

https://www.darpa.mil/program/explainable-artificial-intelligence
 
Recently DARPA I2O released Explainable Artificial Intelligence (XAI) to encourage research on this topic. The main goal of XAI is to create a suite of machine learning techniques that produce explainable models to enable users to understand, trust, and manage the emerging generation of Artificial Intelligence (AI) systems.

https://www.darpa.mil/program/explainable-artificial-intelligence

As the saying goes: "Everything old is new again."

Three decades ago DARPA and other parties were funding R&D on 'explanation' / 'rationale' systems whose job was to provide a human parse-able trace for whatever-the-**** an AI was doing or had concluded.

In the immortal words of that great American philosopher Yogi Berra: "It's deja vu all over again!" :evillaugh:
 
As the saying goes: "Everything old is new again."
Three decades ago DARPA and other parties were funding R&D on 'explanation' / 'rationale' systems whose job was to provide a human parse-able trace for whatever-the-**** an AI was doing or had concluded.
In the immortal words of that great American philosopher Yogi Berra: "It's deja vu all over again!" :evillaugh:

If an institution keeps "re-inventing the wheel", you'd hope they'd eventually remember. On the other hand, computing has changed a lot since the 90's. Perhaps the new program integrates a new software approach to the problem so they re-branded it as XAI? Or perhaps it is as bad as you suggest. I hope not.:doh:
 
If an institution keeps "re-inventing the wheel", you'd hope they'd eventually remember. On the other hand, computing has changed a lot since the 90's. Perhaps the new program integrates a new software approach to the problem so they re-branded it as XAI? Or perhaps it is as bad as you suggest. I hope not.:doh:

Playing - metaphorically - on the 'wheel' allusion ...

It's not so much re-inventing a previous wheel as inventing a new approach to the same objectives that suits a new context for performing motion / movements - a context within which the very notion and importance of 'wheel' has shifted.

Continuing metaphorically ...

Let's say you have a robust approach to navigating a mobile unit on a solid 2D plane (e.g., a robot vacuum cleaner on your floors).

Then someone asks you to transplant that approach to navigating a mobile floating unit on a body of water. The surface navigation is similar to the previous scenario, but now you have to accommodate dealing with issues relating to sailing rather than rolling - e.g., sub-surface obstacle avoidance. The simplistic moving-around-in-2D motif is now complicated by increased 3D issues and constraints. Because this involves basic 2D navigation plus intrinsic attention to 3D constraints I'll call this '2.5D'. This in turn changes the context for explaining a successful outcome.

Next you're asked to apply one or both your prior approaches to navigating a mobile airborne unit that in principle has to deal with 3D issues all the time. Once again, the change in operational scenario and possibilities requires a shift in what can happen with the unit and what needs to be covered to explain how it is successfully accomplished.

In an analogous way, AI approaches to machine learning and / or explanation have to shift to fit different mode(s) of implementation for the inferences underlying action. These shifts are more radical than the shifts among the ground / marine / aerial contexts for the task(s) to which the inference strategies are applied (in the metaphorical example).

Old school symbolic AI - explanation may be detailed with respect to (e.g.) input states, rules employed, course of inference, and eventual outputs.

Neural net AI - explanation may be limited to mapping / correlating inputs versus outputs, and in the worst case such explanation is hand-waving based on 'the black box doing what the black box does'.

More recent AI (typically neural ops with a statistical / mathematical model of performance) - explanation is still similar to the second case, but the model is leveraged as a sort of surrogate means for getting a handle on what the black box did.

In effect, it appears to me XAI is revisiting these topics for a 'third generation' era of AI implementations and possibilities.
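
To give a flavour of the surrogate idea in that last case, here is a toy version - my own invented example with a made-up black box, not DARPA's XAI work: probe an opaque model with inputs, fit a simple readable stand-in to its answers, and offer that as the 'explanation'.

    # Toy surrogate explanation: approximate an opaque model with a readable one.
    def black_box(x):
        # stands in for a trained net whose internals we cannot inspect
        return 3.0 * x + 7.0 + (0.5 if x > 10 else -0.5)

    xs = [float(x) for x in range(21)]           # probe points
    ys = [black_box(x) for x in xs]              # black-box responses

    # closed-form least-squares line fitted to the probes = the 'explanation'
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx

    print(f"surrogate says: output is roughly {slope:.2f} * input + {intercept:.2f}")
    # honest about the overall trend, but it quietly smooths over the jump at
    # x > 10 - the surrogate explains itself, not the black box

That gap between 'explaining the surrogate' and 'explaining the box' is why I say this is still a long way from the rule-level traces old-school symbolic systems could produce.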
 
“When you change the way you look at things, the things you look at change.”
Max Planck {Winner, Nobel Prize for Physics, Quantum Mechanics}

And at what point, if ever, can AI change its view? - Maybe at that point it can be considered 'conscious'?
- When can a machine debate good and evil? - Is it then 'one of us' ?
 
“When you change the way you look at things, the things you look at change.”― Max Planck {Winner, Nobel Prize for Physics, Quantum Mechanics}
And at what point, if ever, can AI change its view? - Maybe at that point it can be considered 'conscious'?
- When can a machine debate good and evil? - Is it then 'one of us' ?

I am sure that moral philosophy can be reduced to an algorithm.
 
I am sure that moral philosophy can be reduced to an algorithm.
Morality has nothing to do with it - But what is the 'IT' - the AI we are talking about?

I'm reading this post and over and over the intelligent posters here keep talking about 'IT' - the AI - and yet I have yet to read exactly what they mean by AI [artificial intelligence]. Is this intelligence, because a Human created machine is generating it and becomes its modus operandi, a different form of intelligence than the Humans who created it? Besides being capable of faster generation, does it possess some magical property that makes it different than the biological entities that created it?

And tell me what makes it intelligent? The fact that Man can create a machine that can act like a super mouse trap - and trap its creator - does not mean that it is some kind of magical alternative intelligence. A Human creation, is a Human creation, is a Human creation until proven otherwise.

On the day that this machine Man has created can say 'I am that I am', know what that means,
be aware of 'ITSELF', define itself, and be aware of the environment where IT is
- Until that day it is nothing more than a complicated machine created by Man
- And Man creates many machines that can be, and are, dangerous.

AI is nothing more than a complicated Human machine until it is conscious
- And what is Conscious?

Not sure - but again Max Planck:

""I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness."

Humans may and will respond - Still waiting for an AI to respond..............Don't feel shy, we can get
along!
 
Morality has nothing to do with it - But what is the 'IT' - the AI we are talking about?

Is this the IT we are talking about?:

[image: Moss.jpeg]
Clearly intelligent in an artificial logic way. Criteria fulfilled.

I'm reading this post and over and over the intelligent posters here keep talking about 'IT' - the AI - and yet I have yet to read exactly what they mean by AI [artificial intelligence]. Is this intelligence, because a Human created machine is generating it and becomes its modus operandi, a different form of intelligence than the Humans who created it? ...

Fundamentally, the AI that is most discussed is a series of complex computer algorithms that have become self-aware enough to learn from data and produce original insights in a fashion useful to its human audience. The Turing test is still considered a useful concept in the discussion too: https://en.wikipedia.org/wiki/Turing_test.
It should be pointed out however that it is entirely possible to produce an AI that doesn't bother with the Turing Test.
It should also be pointed out that the computer "Watson" competed in the game show Jeopardy! and won, which is not an inconsiderable step (https://en.wikipedia.org/wiki/Watson_(computer)) - quite apart from chess computers like Deep Blue, which were playing a much more limited game.

Besides being capable of faster generation, does it possess some magical property that makes it different than the biological entities that created it?

One of the things that AIs will most likely be is dispassionate, allowing them far greater objectivity than human intellects. The perfect recall of AIs is also an enormous advantage, as human memory is annoyingly imperfect, and can easily forget to employ pertinent facts and techniques to a problem. Currently humans retain a better grasp of context and meaning than computers, but that may actually be a philosophical question that needs to be resolved by humans before an algorithm can apply it.

And tell me what makes it intelligent? The fact that Man can create a machine that can act like a super mouse trap - and trap its creator - does not mean that it is some kind of magical alternative intelligence. A Human creation, is a Human creation, is a Human creation until proven otherwise.

A crucial test of AI is when a simulated intellect can out-perform human intellects in tasks that require rapid adaptive learning - such as when the AI can build a human trap but a human can no longer build an AI trap.

On the day that this machine Man has created can say 'I am that I am', know what that means,
be aware of 'ITSELF', define itself, and be aware of the environment where IT is
- Until that day it is nothing more than a complicated machine created by Man
- And Man creates many machines that can be, and are, dangerous.

The issue of the seat of consciousness, and indeed what consciousness is within the human brain, remains a thorny problem for neuroscience. There is no answer yet. Now when it comes to animal intelligence, the classic test is whether an animal can look in a mirror and identify that the image it sees reflected is of itself. Is this a fair test of intelligence, however? Consider that goats can easily thrive in environments where humans die, and yet they cannot pass the mirror test. By analogy, various forms of AI may never be able to pass the test of self-awareness, and yet may still be enormously valuable in the skills they perform. Consider also the automaton that is programmed by cams to view itself in a mirror, to touch it, then to make a self-referential gesture, and even to speak the words "that is my reflection": it is not self-aware, and yet it mimics self-awareness. The point being, the question of self-awareness may not matter.

AI is nothing more than a complicated Human machine until it is conscious
- And what is Conscious?

Pragmatically, we may not need to know. It may even be that the task of building AI will eventually solve the problem for us at some point of its iteration.

Not sure - but again Max Planck:
"I regard consciousness as fundamental. I regard matter as derivative from consciousness. We cannot get behind consciousness. Everything that we talk about, everything that we regard as existing, postulates consciousness."

A valid philosophical concern, but are we creating a paradox where none exists? Planck is treating consciousness as axiomatic whereas Psychoanalysis regards the Ego as the true issue, as after all a dog that is awake is conscious. In Psychoanalysis the definition of Ego is the part of the mind that mediates between the conscious and the unconscious and is responsible for reality testing and a sense of personal identity, which is perhaps more accurate a term than the use of consciousness employed by Planck. After all, a brain damaged person may be conscious, and yet have no sense of selfhood. Then we have the Buddhists who consider all notion of selfhood to be an illusion without any foundation in reality. The patent office however is the final arbiter on the issue of AI, and they are only interested in results.

Humans may and will respond - Still waiting for an AI to respond..............Don't feel shy, we can get
along!
LOL ask away: https://www.cleverbot.com/
 
Has anyone any thoughts on :artificial intelegence:,
to my mind it seems to be advancing in leaps and bounds, do you think it will ever compare with the human brain, personaly I dont think it will happen in my lifetime but I think it might in the lifetime of my grandchildren.:spinning
This was the original OP.

That was 17 years ago - Much has transpired since.

Now excuse me for diverging to make a point.

Last night I watched the latest episode of 'Elementary' - the long-running (six years now), modern-day take on Sherlock Holmes, an American series starring the English actor Jonny Lee Miller - good series, excellent acting.

In this episode Sherlock has to solve the murder of a sexbot manufacturer who apparently came into
technical info that cost him his life.

Suddenly I saw it, like a premonition from the future - It will happen this way - The synthesis of money,
sex, and science - a chemistry that drives much of the world of today.

You've all probably seen the Japanese state-of-the-art current robots - interesting, but not particularly impressive - nowhere near a DATA {from Star Trek} type Humanoid android, hypothetically developed by a scientist who created the 'Positronic Brain' - good sci-fi.

But as a few non-sexual demonstrations of the Human-looking sexbot were given - I saw it happening - the impetus to create the 'feedback-loop' that will give these entities 'near' sentience - life-like abilities to perform, to interact - the creative aspects of Human sexuality pushing science one step further into the future.

Supposedly the sexbots in this Elementary episode cost about $15,000 - I assume there are those so interested who will pay a lot more than that to have their ideal female {or whatever} companion.

I say 5-10 years - the 'almost Human' companion will become a reality.

What do you think ?
 
I saw the same Elementary episode, and here are my comments ...

(1) The interaction capabilities evident in the TV episode weren't evidence of any advancement since this thread started. Most importantly, the keyword-based conversational tactics of the doll didn't exceed what could be done with older AI demonstration / prototype applications such as ELIZA:

https://en.wikipedia.org/wiki/ELIZA

... whose origins extend all the way back into the mid-1960's. I've seen, interacted with, and even helped develop natural language interactional capabilities exceeding those evident in the TV show over 30 years ago.
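
For anyone who hasn't met ELIZA, the keyword-based tactic amounts to little more than the following - a deliberately crude toy of my own, far simpler than even the 1966 original, with made-up trigger words and canned replies:

    # ELIZA-in-spirit: scan for a trigger keyword and emit a canned reply.
    import random

    RESPONSES = {
        "mother": ["Tell me more about your family."],
        "sad":    ["Why do you feel sad?", "How long have you felt that way?"],
        "robot":  ["Do machines worry you?"],
    }
    DEFAULT = ["Please go on.", "I see.", "Interesting - tell me more."]

    def reply(text):
        for keyword, answers in RESPONSES.items():
            if keyword in text.lower():
                return random.choice(answers)
        return random.choice(DEFAULT)            # no keyword: fall back to stock filler

    print(reply("I argued with my mother today"))   # Tell me more about your family.
    print(reply("Do you dream?"))                   # stock filler - no understanding anywhere

There is no model of the conversation, let alone of the person - just triggers and canned replies - and that was roughly the ceiling of what the doll in the episode displayed.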


(2) I agree that the combination of sex, money, and technology would seem compelling. However, there's a fourth aspect - economics - that probably diminishes any near-term prospects for sex dolls being the 'killer app' for AI.

The cost structure for embedding an interactional AI within a sex doll or bot would greatly exceed the costs associated with a 'static' sex doll or a mechanically animated sex bot.

If I'm not mistaken RealDoll - or someone using a RealDoll or similar commercial sex doll as the physical platform - embedded speech capabilities in a commercial RealDoll prototype well over a decade ago.


(3) AI isn't aimed at developing human-like artificial anything other than inferential capabilities and natural language interfacing. The vision you described is really more a matter of robotics with AI support for the secondary feature of linguistic interaction with a user.


(4) As such, AI's stake in a virtual companion / lover is peripheral to advancing the field, and any benefit to the AI field would be indirect, incidental, and subordinated to how well the robotic features worked and were received in the marketplace.

(5) How big a selling point is faux 'intelligent' interaction in the context of a sex doll? Who would pay multiple times more money to add conversation to a sexual context?
 
(5) How big a selling point is faux 'intelligent' interaction in the context of a sex doll? Who would pay multiple times more money to add conversation to a sexual context?

Some would - But that is not the main point.

The companionship aspect - the friend/lover, whatever, that is really a friend - the machine you can trust instead of the Human you can't.

"Amazon Alexa is a virtual assistant developed by Amazon, first used in the Amazon Echo and the Amazon Echo Dot smart speakers developed by Amazon Lab126. Wikipedia"

Alexa, I want a robot girlfriend that is always with me - always on my side - forever.
Alexa: "Tall, short, or medium? Sexual or intellectual, or both?"
Alexa: Please fill out the following application; allow 90 days for processing and delivery.
Alexa: Remember, all our Human companions come with a 60-day money-back satisfaction guarantee.
 