Emergent properties

Never been a big fan of this idea myself. A lot of people lost a lot of money in the belief that vast profits would be an "emergent property" of the Internet. Not that you can't make money out of the Internet like you can make money out of pretty much anything else, but there's nothing magical about it.

Mind you, it would be cool if there were! ;)

As others here have (sort of) suggested, the fuss about AI comes from the "Artificial" side of it. I don't believe someone in a white coat is going to throw a switch and create "intelligence" where there was none before - personally I believe it was already there (at least in the sense of "consciousness" which tends to get mixed up with "intelligence" in this sort of discussion).

Computers can already do a lot of smart things. To a large extent this is because they are programmed by smart people.

And the reverse, of course... :blah:
 
Breakfast - yeah, I knew someone would say that!

We need to tread a Fortean line betwixt skepticism and gullibility. To reject the idea of artificial intelligence as impossible would be knee-jerk skepticism, but to accept any machine that makes a convincing imitation of pain as actually experiencing real pain would be the height of gullibility. (Unless, like Daniel Dennett and other prominent philosophers, you believe that pain simply is a functional response to stimuli, and there is no such thing as the "fundamental experiential feel of pain". I can't understand how anyone could believe that though!)

That's why, when it comes to intelligence, the Turing Test is such a good idea, in my opinion - it's interactive. If a robot could provide sensible, human-like responses to virtually any question you could think up (it wouldn't have to know everything, it would just have to act like a sane human being), that would pretty much rule out a bogus form of "intelligence" such as a simple question-answer lookup table.
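To see why interaction defeats a lookup table, here's a minimal sketch of such a bogus "intelligence" (the canned questions and answers are made up for illustration). It copes only with inputs it has literally seen before, which is exactly what an interactive test exposes:

```python
# A toy lookup-table "chatbot". Any question outside its table
# immediately reveals it has no understanding at all.
canned = {
    "what is your name?": "I'm Alan.",
    "how are you?": "Fine, thanks.",
}

def lookup_bot(question):
    # Normalise the input, then look it up; fail on anything unseen.
    return canned.get(question.strip().lower(), "I don't understand.")
```

A human interrogator who simply rephrases a question, or asks about breakfast, instantly gets the giveaway fallback reply.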
 
I think we're getting into the limits of the Turing test here people...

I like the definition of artificial intelligence that one of the people in the field gave:

'It's probably alive when you feel bad when you turn it off'

8¬)
 
The problem with the Turing Test is that we probably all know people who would fail it...

Does this make a computer that fails it 'human'?
 
The Turing test is not carried out on one subject at a time: you have a human and a computer and must decide which is which.

I was just thinking that a very small child would probably fail the Test, but we recognise its potential for intelligence, even if not in its current state.

Any other thoughts on the intelligence or consciousness of human infants?
 
DanJW said:
Any other thoughts on the intelligence or consciousness of human infants?
The brain of an infant is still wiring itself together, working together with feedback from the outside world and from its own body. Apart from a few basic instincts like grasping and sucking, an infant doesn't even have control of its own body. All that has to be learned, 'potty training' being one of the last things acquired.

Social skills and language too are developed and only 'hard-wired' after some years. This is why feral children, if they have lived too long with their animal foster-parents, can never thereafter learn to live and talk as a normal human, although those who are rescued young enough can.

So whatever consciousness and intelligence an infant has is totally different from that of older people, which is why most of us have no memories at all of that part of our lives.
 
It is an interesting question as to whether a powerful AI with a very accurate NLP front end would fail a Turing Test because it would make no mistakes. To err is human, famously.

Of course, if you got an AI that would make deliberate mistakes during a Turing Test to make itself more likely to pass, that would say to me that it didn't need to take the test in the first place.
 
No, it wouldn't bother to take the test, it would be too busy in its career as a lawyer or politician...
 
One would hope they would have the moral fortitude to push drugs or go on the game before doing something so hideous...

8¬)
 
Yes, one would hope that. But I suspect that survival of the fittest to survive off others' misfortunes would still apply...
 
Artificial Intelligence

If we do ever achieve the ability to create independent, free-thinking androids, what would we then do with them, and what gives us the right to do anything with them?
If you take intelligence and free thought as denoting life then it can't be classed as artificial can it?
Would we therefore intentionally handicap our creation so that we could enslave it?
What rights would the lifeform have?
And what if they did declare war upon us?
 
I'm not at all sure it's in anybody's interests to create such a thing. Except for intellectual interest, the powers that be don't seem keen to have humans thinking, so why make a machine that does?... at least outside its parameters of design and need... androids that like washing up? Sure, but why make a washer-upper that questions its lot in life...
 
people would make it to prove they could and then wish they hadn't :D
 
Red Dwarf had an idea which governed Kryten's behaviour, and that was that he was hardwired to believe in "Silicon Heaven", which was the place all good artificial intelligence-based devices went when they were eventually shut down. To get there, they had to do as they were told. Additionally, he was also 'designed' to get "orgasmic delight" from cleaning. These strike me as just the sort of things people would do if they created a machine designed to be capable of operating independently, then wanted to be able to make it do their own bidding. And I suspect it will eventually come about, even if it's just because it's what the public "want" from seeing films like A.I. and Star Trek (I once saw an article which quoted a scientist as being pleased that the film A.I. was created, because it would be an inspiration to the younger generations to create such systems!)
 
Maybe we've already created AI? Think about all the internet web servers, routers, cell phone gateways, telephone exchanges, traffic light switches, etc. that are all talking to each other 24 hours a day, and have been for 100 years (going back to simple electro-mechanical telegraph systems).
 
I think we will be able to create intelligent machines without having to go the whole hog and create sentient machines.
Still, even with true AI, I think it would be possible to hardwire certain laws, or more accurately taboos, which will successfully inhibit their behaviour.
 
I had just this discussion going to London in a car with some friends. We all said lots of things but came to no conclusions.

I personally don't think we are capable within the next thousand years of creating a truly sentient machine. We could make a machine capable of inspirational problem solving i.e. making a decision based on the available facts and searching for more info to solve its own problems and not having to have those facts fed to it. But to create true intelligence we have to add in all those little random thoughts that we have.

At the same time as typing this I'm thinking what would I like for tea, did I lock my car this morning, does so-and-so have a problem with me... along with a few hundred other things. To design a machine capable of that level of thought is way too far off to even consider.

The first "AI" will be a vacuum cleaner that can decide if the room is dirty enough to clean or if it should leave it a bit; it would take into consideration the time (am I going to wake people up), that sort of thing. I don't think, at first, machines will be capable of thinking of anything other than the task at hand. To continue the Red Dwarf references, just look at Talkie Toaster for an example of what I'm blathering about. Sorry for the long post, but it's a very interesting subject.
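That sort of decision rule is actually trivial to write down, which rather makes the point that it's task-bound cleverness, not thought. A minimal sketch (the threshold and quiet hours are invented for illustration):

```python
from datetime import time

def should_clean(dirt_level, now, dirt_threshold=0.6,
                 quiet_start=time(22, 0), quiet_end=time(8, 0)):
    """Decide whether the room is dirty enough to clean right now.

    dirt_level is a 0.0-1.0 reading from a hypothetical dirt sensor;
    quiet hours model the "am I going to wake people up?" worry.
    """
    if dirt_level < dirt_threshold:
        return False                        # not dirty enough; leave it a bit
    in_quiet_hours = now >= quiet_start or now < quiet_end
    return not in_quiet_hours               # don't clean overnight
```

Everything the machine "considers" was put there by the programmer, which is the gap between this and genuine intelligence.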

--kiel--
 
Would a perfect (or near perfect) simulation of a sentient mind be the same thing as a sentient mind?
Is there some 'magic' effect of consciousness - be it a quantum field effect, soul, animus or an all-important gestalt effect, where the whole is more than the sum of its parts - that would be impossible to replicate?
(I don't know, that's why I am asking the question ;) )
 
chatsubo said:
Would a perfect (or near perfect) simulation of a sentient mind be the same thing as a sentient mind?
Is there some 'magic' effect of consciousness - be it a quantum field effect, soul, animus or an all-important gestalt effect, where the whole is more than the sum of its parts - that would be impossible to replicate?
(I don't know, that's why I am asking the question ;) )

It seems that you're asking what is life?

Does an automatic hoover constitute life?
Is it really deciding anything for itself?
The life involved would have the choice of hoovering up or staying in bed for the day. But that is all dependent on emotion and how it feels. So many questions....
 
Well, this is just my opinion, but I'll sum it up: one day we'll have some pretty clever vacuum cleaners and toasters and stuff like that. But when the day comes when everybody realizes "Wow - we've created AI", it will be somewhat of a grand surprise, and maybe not entirely what was intended. Not that it will be the Matrix or anything, but that AI will have very different motivations and drives from us. I mean, it will be something VERY different from human!
 
It's pretty clear we don't understand enough about natural intelligence to be able to create artificial intelligence. We're slowly unravelling what various bits of the brain do, but we're still stuck on how it does it. (Consciousness is yet another matter. I don't see us sorting that out before getting further with the brain itself, if we get anywhere at all.)

If we do create AI (either accidentally or deliberately), I suspect it will fairly quickly get picked up as a solution to fertility problems. (Want a child, but not physically able? Can't find a co-parent you want to raise a kid with? Want to avoid the heartbreak of your child leaving home, getting sick, dying? Don't like the idea of changing nappies, or boiling strained vegetables? Then, have we got a solution for you!) After all, that's what some people are trying to sell cloning as.
 
What about Roger Penrose's argument against Strong AI - I'm probably summarising badly here - but he states that because mathematics can never be a fully consistent internal language - due to, among other things, Gödel's Theorem and the Turing Halting Problem - no algorithm could provide the internal and complete logical system needed for an independent and conscious mind.
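For anyone who hasn't met the Halting Problem, the diagonal argument behind it can be sketched in a few lines of code (this is a toy illustration with made-up names, not Penrose's actual argument): given any claimed halting-decider, you can build a program that does the opposite of whatever the decider predicts about it.

```python
def make_contrary(halts):
    """Given any claimed halting-decider, build a program it gets wrong."""
    def contrary():
        if halts(contrary):
            while True:      # the decider said we halt, so loop forever
                pass
        # the decider said we loop, so halt immediately
    return contrary

# Any total decider we try is defeated. Two naive candidates:
def says_everything_halts(program):
    return True

def says_nothing_halts(program):
    return False

t1 = make_contrary(says_everything_halts)  # would actually loop forever
t2 = make_contrary(says_nothing_halts)     # actually halts at once
t2()  # returns immediately, refuting says_nothing_halts
```

No matter how clever the decider, `make_contrary` manufactures a counterexample, so no algorithm can decide halting in general. Whether that limitation rules out machine consciousness, as Penrose argues, is of course a much bigger leap.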
 
chatsubo said:
What about Roger Penrose's argument against Strong AI - I'm probably summarising badly here - but he states that because mathematics can never be a fully consistent internal language - due to, among other things, Gödel's Theorem and the Turing Halting Problem - no algorithm could provide the internal and complete logical system needed for an independent and conscious mind.

You only have to look at human beings to know that's true...heh.

My entire problem with the phrase Artificial Intelligence is the entire artificial prefix. If something is sentient then it has rights and to call it artificial is to try and remove those rights.
 
I think any AI we could create would operate on too logical a platform to be an adequate substitute for a real human child.

Nonny
 
I'm not sure we'll ever really be able to say a machine is conscious in the sense that we are. Machines'll learn to solve all sorts of complex problems, and will doubtless be able to pretend to be like us quite well. But even asserting that other humans have the same kind of interior life as we do is basically making an unfounded assumption. We just need to make that assumption in order to interact with each other.

In the case of a complex lump of plastic and silicon, how can we draw such an irresponsible conclusion from such limited outward signs? The Turing test has always seemed wholly inadequate to me - all it seems to test is a machine's ability to pretend convincingly to be a human until its interlocutor gets bored with the whole thing.

Even if the machine could somehow be proved to be able to maintain this pretense for ever, we wouldn't be any closer to knowing if it had any sort of internal life at all. I think consciousness is the ultimate mystery, and that we are by definition unable to understand it from within. As Jaron Lanier says, too often attempts to make machines cleverer and more human-like ultimately end up making humans stupider and more machine-like.
 
So in summary, do androids dream of electric sheep?
 
Might intelligence and self-awareness just be an emergent property of any sufficiently complex data-processing system?
Possibly even the http://WWW.......
Then again, Artificial Intelligence or Genuine Stupidity: which is most likely to be created?

Wm.
 
Imagine a world populated by clones of Marvin the Paranoid Android (with GPP) :D

If we assume for a moment that true AI will one day exist - how will we recognise it? Wouldn't people just say "well, it says it's alive and it acts as if it was intelligent - but that's just how it's been programmed!"?

We have enough problems recognising sentience (or the lack of it) in our fellow carbon-based life forms, let alone silicon ones.

Jane.
 