
Pete Younger

Has anyone any thoughts on artificial intelligence? To my mind it seems to be advancing in leaps and bounds. Do you think it will ever compare with the human brain? Personally I don't think it will happen in my lifetime, but I think it might in the lifetime of my grandchildren. :spinning
 
Well, another question to ask is: if it did, how would we know? If my computer suddenly got the intelligence of a human one day, it wouldn't really have any way of telling me. It would only be if it was hooked up to a robot or suchlike that we would notice.

I think neural networks might end up creating something like human intelligence. But it will probably be another 50 years or so.
 
I worked with mainframe computers in the late '70s, and I have a PC that has more memory and processing power than the mainframes did. And this fabulous beastie is a ten-year-old 486!

My new PC is an AMD Duron, and it totally blows away the old 'un.

The point being, the rate of change in computer power is still accelerating....

Real AI may not be possible with the next generation of computers - or the one after - but the one after that?

So, IMHO, we could be talking about 10-15 years down the line....

(This is either :D or :eek: , I'm not sure.......:confused: )
 
Problem is, we don't have much idea of what kind of computer you need to create artificial intelligence. But perhaps if we put the Sierra program on a Cray and let it run for a while, we would see interesting things happen.
 
AI is a very broad field, encompassing many areas. From what I can tell (I did an AI course as part of my MSc), we are very unlikely to have any human-like intelligences appearing on our PCs any time soon. In fact, even if we do create "intelligence" on a machine, it is very unlikely to be human-like in any case. AI will have an increasing role in our day-to-day lives, however, as it takes control of more and more areas that humans were previously in charge of (logical systems rather than anything alive or conscious, before anyone starts panicking at the traffic lights), because it handles a lot of things better.

I would expect a breakthrough in very intelligent artificial systems in the next 20 years. I'm just looking forward to the point where someone writes a program that can not only behave in an intelligent manner, but can also rewrite its own code to make it better. At that point you get an exponential increase in power. The results are totally unpredictable, and what you end up with may be a system that is incomprehensible and more intelligent than you. For some reason I like that idea.
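(Just to illustrate what I mean by that loop, here's a toy sketch - entirely hypothetical, the "program" here is just a list of numbers being tuned rather than real self-rewriting code - of score, mutate, keep-if-better:)

```python
import random

# Toy illustration of an "improve yourself" loop (hypothetical example).
# The "program" is just a list of numbers, and "better" means a higher score;
# a real self-improving AI would be rewriting actual code, not tuning numbers.

def score(program):
    # Hypothetical fitness: how close the numbers are to a target pattern.
    target = [3, 1, 4, 1, 5]
    return -sum((p - t) ** 2 for p, t in zip(program, target))

program = [0, 0, 0, 0, 0]
for generation in range(1000):
    candidate = [p + random.uniform(-1, 1) for p in program]
    if score(candidate) > score(program):
        program = candidate  # keep the rewrite only if it is an improvement

print(program)  # ends up close to the target pattern
```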

In the meantime, here is an article on the current state of the art.
 
All this talk reminds me of a story I heard second-hand when I was about 11 or 12, which has stuck with me as a brilliant twist-in-the-tail tale. In fact, if anyone knows it, I'd quite like a chance to read the original story. Anyway, it goes a bit like this:
After years of research, man finally creates a superintelligent computer. After much debate they decide the first question they ask it should be suitably deep and meaningful. After switching on the computer they type in: "Is there a God?"
The reply comes back: "There is now."
Mwah haha
 
I'm sure Douglas Adams gave a talk/ did a radio programme about this which I very nearly understood - I'll go and search for it
 
Speaking of him, how many of you are also at the H2G2 website?
 
When my toaster starts berating me, I'm leaving.

My hope is that it will be artificial HUMAN intelligence, (I said vaguely, being pointlessly mysterious). What I mean is that I hope our brains can interface - maybe with the great-granddaughter of our current www. I hope it will increase our memories and the amount of info we can process to bring our reason to bear. Like if you had memorized all the works of Shakespeare & all the criticisms written of his work before setting out to tell the guy on the next barstool he had missed the entire point of the second act of Hamlet.

Hey, we'd be smarter people, not better people. In fact, I'm quite sure most people's time would be spent uploading porn and celebrity info instead of debating the GREAT QUESTIONS.

That's my hope for A.I., rather than HAL-like supercomputers "serving" us. For a while.
 
Well, my opinion on A.I...

I can only HOPE that mankind will never invent A.I. (and maybe they never will, as they would have to mess around with the 'human' soul and somehow take it out of its fleshy shell and put it into a new robot shell).

Why? Because, when you think about it, there are quite a few movies with robots displaying human qualities, and they end up killing their creators/humans. Wonder why? Well, here it is... humans 'fear', correct? Our 'fear' can extend to killing someone out of fear that that someone would murder us, or our families. It's only natural to put up a defence against something that could threaten you and your little equilibrium. Thus, if man invents AI, then the AI may grow up as a child, and eventually come to wonder at and compare the differences between itself and man. Then, of course, later on, it will fear, and realise the possibility - man created him, and has the power to destroy him. Like any normal human being, there is the will to live. So... no guessing the end.

Au revoir,
 
Re: A.I

p.younger said:
Has anyone any thoughts on artificial intelligence? To my mind it seems to be advancing in leaps and bounds. Do you think it will ever compare with the human brain? Personally I don't think it will happen in my lifetime, but I think it might in the lifetime of my grandchildren. :spinning

I think we might be limited in this approach by our concept of what a human being is. Is our brain simply a biological computer - an operating system and some software? Or does the whole AI debate miss one very important consideration (which we as Forteans should not) - that we may differ by possessing a spirit, which may survive and function after the body's demise. If that is true - and I believe it is - then machines could never think and reason as we do; they could never be classified as being 'alive' in the sense that we are.
 
I hope that if we do get powerful AI happening, it ends up working in more of a Culture kind of way rather than a Terminator or Matrix way (did anyone else think The Matrix was a lot like Terminator + Descartes?)
 
Well, there was the whole Descartes thing yes. But where did the Terminator fit in?
 
The whole boring, clichéd AI-controlled future after a humans-vs-machines war was pretty much Terminator as far as I was concerned.
 
Well, I'm sure Terminator wasn't the first to come up with that idea. And it was a whole lot more sophisticated than that. It wasn't just robots with a lot of firearms, they actually took over our reality.
 
I will believe in AI when I can run a search for my name on AltaVista without getting a picture of a very spotty girl and the advice to Seek treatment now from the following sites . . .

Punishment for vanity, I guess. At least it does now find my site as well as the spotty girl and all those creams.
:cross eye
 
AI

AI already mimics the brain, because that's where the original methodology came from. If you read up on neural networks and genetic algorithms, they explain the concepts and behaviour behind AI.
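Roughly, that "mimics the brain" analogy boils down to something like this - a single artificial neuron with a simple learning rule (just a sketch, no particular library, and obviously a vast simplification of real neurons):

```python
# A single artificial "neuron": weighted inputs, a threshold, and a simple
# learning rule that nudges the weights when the output is wrong.
# This is the loose sense in which neural networks "mimic" the brain.

def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Train it to behave like a logical AND gate.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):
    for inputs, target in data:
        error = target - neuron(inputs, weights, bias)
        weights = [w + rate * error * i for w, i in zip(weights, inputs)]
        bias += rate * error

print([neuron(i, weights, bias) for i, _ in data])  # [0, 0, 0, 1]
```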
 
Neural networks and genetic algorithms only actually show a small part of the current field. The article I linked to a few posts back gives a fair idea of what is going on everywhere else.
 
I'm sorry, saying AI mimics the brain is like saying an amoeba mimics the human brain. It doesn't in any meaningful sense.

Maybe in 100 years... but the mysteries of consciousness will have to be solved first, one way or the other.
 
It may be that the only "mystery" of consciousness is that it requires a huge amount of processing power to attain, in which case it might be replicated in the future by neural computing methods.
 
Question: how is self-awareness computationally complex or intensive? I'm not saying you're wrong, I'd just like to read the source material...

Thanks

8¬)
 
My point is that it may be that using the techniques we currently have we could design a learning system that could attain consciousness if we had a computer powerful (and parallel) enough to match the human brain.

I have read that it is possible to build AI systems of a similar intelligence to insects with current technology - it may just be that consciousness is an emergent property of a very complex neural network given the right stimuli.
 
I would agree.

I refer you to an article in Scientific American in 1993 called 'Daisy, Daisy...'. It's about near-death experiences and neural nets. I know there is a copy in the FT clipping archives 'cos I sent it in. By implication, there is the possibility of modelling human neurone structure digitally, therefore with sufficient parallel processing and storage it should be possible to model something like the human brain. Whether this is desirable, I will leave to the clergy.
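Putting some very rough numbers on "sufficient storage" (ballpark figures only - roughly 86 billion neurons with ~10,000 synapses each are the usual estimates, and one 4-byte weight per synapse is my own assumption):

```python
# Back-of-envelope storage estimate for a digital model of the brain's wiring
# (ballpark figures only: ~86 billion neurons, ~10,000 synapses each).
neurons = 86e9
synapses_per_neuron = 1e4
bytes_per_synapse = 4          # assume one 32-bit weight per connection

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(total_bytes / 1e15, "petabytes")   # roughly 3-4 petabytes, just for the weights
```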

I would think, personally, that the base technologies should be biological rather than electronic, but that is mostly prejudice on my part since it's already been done biologically...

Something I remember from an interview with a researcher at the forefront of AI technology: when asked whether she thought the Turing test was sufficient as a benchmark for intelligence, she said she didn't think it had to be that complex. If you felt bad when you turned it off, then it was probably intelligent. Yet more moral implications...

8¬)
 
Why does artificial intelligence have to mimic the brain?

Maybe:

Consciousness exists as a certain complex pattern. Patterns can form spontaneously or be created. Some patterns are very big, too big to be apparent right away. Human consciousness makes use of the relatively stable (for approximately 60-80 years) pattern of the cells within the brain and body, but may not be limited to those cells. The pattern could include the surrounding matter, or stretch on infinitely.

When that pattern ceases to be coherent (cells die and the energy they contain returns to where it came from), it is possible that consciousness would make use of a different pattern - the atoms in a cloud of gas, or even certain grains of sand distributed in a certain way across all the places in the universe where sand occurs.

Patterns can also be temporal, and in an eternal or infinite universe there would be no distinction between a pattern that lasted a day and one that extended for many billions of years. At one level, that is why computing power would not be an issue for an 'artificial' intelligence (unless it wanted to interact meaningfully with an intelligence that existed on a different time scale): the computer could be running at one cycle per hundred years, and from the relative view of the AI (or the world that existed within the computer) time would flow however it was perceived.

Any system or changing pattern can be a computer.

It is possible that a form of consciousness could exist in any pattern. It might not be human, it would just be the consciousness of that particular pattern.

It is perhaps also possible - once a conscious pattern reaches a certain level of complexity - that that consciousness may choose to experience other patterns.

New intelligence will, I think, be created. There is nothing artificial about it. New patterns arise through 'nature', and humans are part of 'nature'. Because the complexity of our consciousness allows us to reflect on our part in this process, we can mentally remove ourselves from it, even though we are bound completely into the natural process of complexity.

A machine intelligence is just as natural as human intelligence. In fact, there is no real difference between the two. The functioning of computers and the functioning of animal matter are bound by the same physical/metaphysical laws. Structure/pattern/complexity is the dividing line.

Once any pattern (computational or otherwise) reaches a certain level of complexity the potential for consciousness that humans can relate to and understand as consciousness is possible.

Of course an 'AI' wouldn't be 'human' - the pattern associated with its consciousness would be completely different - but it is possible that it would experience thoughts/feelings/emotions and other artifacts that would be similar to our own.

These things could also explain the 'spirit', the force that may survive after physical death. The pattern is as important as (maybe more important than) the base matter it is expressed through.

Or maybe not...

Sorry if that was a bit long. Thoughts just came out...

Bye

Martin
 
The Theory of consciousness (according to some):

As complexity increases... (much hand waving...) consciousness arises.

No figures are ever put on the degree of complexity, however, or an explanation given about how a complicated computer, with the addition of a few more chips, could become conscious.

Recently BT researchers at Martlesham in Suffolk have been predicting a 'Soul Catcher' chip (possible in just a few decades) that has such an enormous memory capacity (several terabytes) that it could record a whole lifetime of a person's observations and other sensations. Maybe: and presumably such things could be duplicated too, so there could be many records of my life, for example. Perhaps such a chip could be plugged into a suitable bionic robot: would the robot believe it was me? Would several such robots all believe themselves to be me? And if I am there too, which is the real me?
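Out of curiosity, the implied data rate isn't even that dramatic - taking "several terabytes" as, say, 10 TB (my guess) over an 80-year life:

```python
# Rough arithmetic on the 'Soul Catcher' claim: several terabytes over a lifetime.
terabytes = 10                                   # assume "several" means ~10 TB
seconds_in_lifetime = 80 * 365.25 * 24 * 3600    # an ~80-year life, ~2.5 billion seconds

bytes_per_second = terabytes * 1e12 / seconds_in_lifetime
print(round(bytes_per_second))   # ~4,000 bytes/sec, a few KB per second of "experience"
```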

I'm getting sidetracked: I meant to say that if consciousness is just a matter of numbers, is the internet conscious? How many zillions of signals are flying about the world (cf. neurone impulses in the brain) at any one time? There was a TV drama on this idea last year, I believe. Anyone else see it?
 
Intelligence or consciousness is not merely a pattern. It is the interaction of such a pattern with an environment... consciousness cannot exist in a sensory void.

When was the last time you thought of nothing? You were probably meditating or asleep, i.e. unconscious (meditation is a bit more complex though). When was the last time you had a thought that was in no way linked to the world 'outside'? It is impossible. The thought would probably require language at the very least, which you have learnt. Hence intelligence is the result of the use of memories, with a few "hard coded" rules. Recent AI research is going the right way with "learning" robots, but really they will need to record every experience they have to come near to consciousness.

Also, no "mind" is formed of just neural networks. It has recently been noted that a great many functions of the brain are chemical. It will be difficult to create a mind using logic gates alone... that's just not how a brain works.
 
I'm getting sidetracked: I meant to say that if consciousness is just a matter of numbers, is the internet conscious? How many zillions of signals are flying about the world (cf. neurone impulses in the brain) at any one time?

Why not? Just because something is conscious doesn't mean that humans would recognise it as such... We immediately see human consciousness, because we are human, but would we know how to understand the consciousness of something that isn't human? Our particular form of consciousness allows us to reflect on these things; other forms perhaps do not.

As complexity increases... (much hand waving...) consciousness arises.
Maybe consciousness was there from the beginning. As complexity increases, consciousness takes on a form that is more readily apparent to humans.

Intelligence or consciousness is not merely a pattern. It is the interaction of such a pattern with an environment... consciousness cannot exist in a sensory void.
I was definitely thinking this when I posted - the interaction of a pattern with another pattern. Where does one pattern end and another begin? Patterns within patterns... Can they be separated ultimately? Probably not.

Also, no "mind" is formed of just neural networks. It has recently been noted that a great many functions of the brain are chemical. It will be difficult to create a mind using logic gates alone... that's just not how a brain works.
What constitutes a mind? Neurons? + chemical interactions? + the whole brain? + the nervous system? + the whole body? + the surrounding environment?

hmmm

Bye

Martin
 
What's missing from the theories of "consciousness as pattern" is any notion of the blueness of blue, or the ouch-ness of pain, or the smell of perfume, or whatever. How can a mere pattern of 0s and 1s experience pain?

Just because it goes "ouch" that doesn't prove anything. A television can make "ouch" noises; a robot could record and playback a human behavioural response.
 
Is pain just an arbitrary reaction to something that changes the pattern? Maybe pain is simply useful in that it prevents something that could compromise the integrity of the entire pattern...

Could pain be experienced in different ways? Is it possible to 'condition' pain out of human experience?

...Just thinking about the implications for what has to constitute an AI, or how one would be recognised. Or if there is even such a thing as AI.

Bye

Martin
 
Just because it goes "ouch" that doesn't prove anything. A television can make "ouch" noises; a robot could record and playback a human behavioural response.
And just because you go "ouch" does not prove to me that you are human. Nothing that you can do can prove that to me. There has to be a limit to scepticism if you are not going to vanish completely up your own Cartesian dilemma.
 