
What Is Consciousness?

Cochise said:
All right, if you want to be so determined - what was there before the big bang? What is outside the universe? How many dimensions are there? Science has no answers to these questions, only guesses.
That's a good example of a "Yeah, but...".
(When in doubt, wave hands, change the subject. ;) )

"Guesses" are usually referred to as hypotheses in science. And the only ones worth considering are those that can be tested. Those that pass enough tests might be elevated to a theory. But even theories might be disproved by future evidence. Science never claims absolute knowledge, but progresses by a series of refinements.

So, do you have any hypotheses on the nature of consciousness?
 
rynner2 said:
And the only ones worth considering are those that can be tested.

I'd add something about "at the moment".

Thinking in a non-testable way often leads to testable developments :)

Or maybe that's just what happens to me :oops:
 
Cochise said:
All right, if you want to be so determined - what was there before the big bang? What is outside the universe? How many dimensions are there? Science has no answers to these questions, only guesses. I accept that new techniques might be developed that would give us insight, but the challenge to answer any one of those questions in a provable way is probably greater than the whole scientific oeuvre to the present day.

Science is good at explaining mechanisms you can locate and measure, including chemical mechanisms. It is not good at explaining why. It can't even explain why there is life on this planet and not (so far) on any of the others in our solar system. Can a human even understand life that is created in an entirely different environment? Would we recognise it if we saw it?

Since we seem to be approaching a post-scientific age where the actual experimental proof is replaced by statistics and computer models - and in some cases by simple assertion and group-think - it's debatable how long science is going to progress. In the worst case it may even regress, as it has done as previous civilisations have gone into decline. Carl Sagan warned of this before his death, and it seems to me his warning has not been heeded.

“The first gulp from the glass of natural sciences will turn you into an atheist, but at the bottom of the glass God is waiting for you.”


Werner Heisenberg

And when you open the box there's a 50% chance that you will be attacked by an angry cat.
 
Consciousness? Or self-awareness, that is, being aware of being conscious (and alive)?

Consciousness, I will admit, I have insufficient scientific knowledge to explain reasonably, but I certainly do not think it is a process simple enough to be compared to a current computer. I am reasonably certain the brain has abilities we barely try to use, and which only get touched upon in extremity.

The particular point I take issue with is the idea that you can chart what the owner is thinking by mapping electrical activity in certain areas, because we know from medical evidence that the brain is capable of reconfiguring itself in the event of severe damage. It seems to me an idea along the lines of the American lie detector - with its possible use in criminal cases, or to determine someone's likely character - and likely to be equally unreliable.
 
garrick92 said:
Which philosopher was it who said that most of life's great problems are ultimately based on linguistic confusions?

We are particularly bedevilled by the fact that we use the word 'consciousness' to signify awareness as a phenomenon and as a state of awareness. Actually, even that sentence is confusing for that same reason! I mean, I am a conscious being in that I think (cogito) but I am only conscious for 16 hours a day or thereabouts. Does that make sense?

Then we have to add the phrase 'self-aware' into the mix.
Yes, good points! It's hard to discuss something when you can't define the terms you have to use.

Mathematics is the other language that often helps us make progress, but unless we have some quantifiable properties to work with it's difficult to see where to start.

It seems to me that we'll not get anywhere by a top-down method, ie, assuming that we know what consciousness is, and arguing about what properties that should entail. I think we'll make better progress with bottom-up methods, taking simpler systems that we can understand, like electronic circuits, computers, and robots, and looking for parallels between their behaviour and that of biological life. Then we can increase the complexity, and see if the parallels still hold.

Science has already made big steps along this path. Robots can already imitate many aspects of human behaviour, both physical and mental, and it seems likely that at some time in the near future we may have to ask what is the difference between them and us.

At this point in the discussion, someone (probably still living in the past with his ZX81!) will assert that robots "can only follow their programming", as if this means that all their actions are totally predictable. But long ago meteorological computers discovered a mathematical quirk now known as chaos theory. Suddenly, randomness and unpredictability popped up where they hadn't been expected.
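The point about deterministic programs still being unpredictable is easy to reproduce. Here is a minimal sketch in Python, using the logistic map - a textbook toy example from chaos theory, not the meteorological models themselves:

```python
def diverge(x0, y0, r=4.0, steps=50):
    """Iterate the logistic map x -> r*x*(1 - x) from two starting
    points and return the largest gap that opens between the orbits."""
    x, y, gap = x0, y0, 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        gap = max(gap, abs(x - y))
    return gap

# Two starting points differing by one part in two million...
print(diverge(0.2, 0.2000001))  # ...end up wildly apart within 50 steps
```

Both orbits are fully determined by the program, yet starting points agreeing to six decimal places soon disagree completely - the practical sense in which "following its programming" does not mean "predictable".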

But the point about "following their programming" can be turned around and aimed at the most complicated biological creatures we know - humans.
People are being programmed from birth by their parents, peers, teachers, and all their other life experiences. But because conditions vary from place to place, this results in huge differences in behaviour.

But the programming did not begin at birth, or even in the womb. It was going on all through evolution, leading to different specialisations of lifeforms, interacting together, the behaviour of one species affecting that of another. This genetic inheritance is the major part of biological programming. (Some form of gene therapy would be needed to change it.)

So it's easy for some people to say that the complexity and unpredictability of humans is somehow of a different 'quality' to that of machine intelligence, but that is only an opinion that's being nibbled away at by every new development in robotics. The fact is, we are the current level that life on Earth has achieved over billions of years. Already, after just a few decades, the machines are catching up.
 
garrick92 said:
rynner2 said:
But the point about "following their programming" can be turned around and aimed at the most complicated biological creatures we know - humans.
People are being programmed from birth by their parents, peers, teachers, and all their other life experiences. But because conditions vary from place to place, this results in huge differences in behaviour.

I think this is where your humans/machines analogy breaks down. People are not 'blank slates' who are programmed by others (that opens an infinite regress for a start!). Human learning (at the conscious level (within meaning 2!)) is mediated by the learner, in a way that a machine's programming is not.
I didn't say humans are blank slates.
I specifically said "But the programming did not begin at birth, or even in the womb. It was going on all through evolution, leading to different specialisations of lifeforms, interacting together, the behaviour of one species affecting that of another. This genetic inheritance is the major part of biological programming.

The fact you ignored this makes me despair of ever having a decent discussion on a message board!

And the fact you refer to a machine's programming as if this precludes learning, likewise!

I just did a quick search on Machine Learning.
The very first result was from Wiki: https://en.wikipedia.org/wiki/Machine_learning

Machine learning is a subfield of computer science (CS) and artificial intelligence (AI) that deals with the construction and study of systems that can learn from data, rather than follow only explicitly programmed instructions. Besides CS and AI, it has strong ties to statistics and optimization, which deliver both methods and theory to the field. Machine learning is employed in a range of computing tasks where designing and programming explicit rule-based algorithms is infeasible for a variety of reasons.

...

In 1959, Arthur Samuel defined machine learning as a "Field of study that gives computers the ability to learn without being explicitly programmed"
I don't have time right now to read the whole article, but no doubt I could find a few other quotes to whack you about the head with! ;)

(And then I could start on the other articles the search turned up...)

It seems to me that opponents of AI are not up to date with current practice, let alone current thought. That 1959 quote shows how scientists have been pondering such things for many years. And on a more popular level, Isaac Asimov was writing fiction about AI robots back before I was born!
https://en.wikipedia.org/wiki/Isaac_Asi ... bot_Series
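To make Samuel's distinction concrete, here is a toy sketch in Python: a perceptron that infers the logical-OR rule from examples, with nothing in the code stating the rule itself. (An illustrative toy under simple assumptions, not any system discussed in this thread.)

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of ((x1, x2), label) pairs, label in {0, 1}.
    Adjusts weights from the data rather than encoding a rule."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = label - pred       # learn only from mistakes
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Training data for OR -- nowhere do we write the OR rule itself.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_perceptron(data)
for (x1, x2), label in data:
    assert (1 if w1 * x1 + w2 * x2 + b > 0 else 0) == label
```

The behaviour comes out of the weights the program settled on from the examples, which is the sense of "without being explicitly programmed" in Samuel's definition.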
 
I composed a long post in reply, but it suddenly vanished down the electronic plughole. :evil:

So here's a short version:
"Well, if you're including works of fiction as hard evidence in your argument for the existence of artificial intelligence..."
I wasn't, and that's just one example of how you attempt to twist my meaning. I was merely pointing out that ideas of robots and AI were already abroad back in the 40s.

And who better to handle these ideas than the great polymath Isaac Asimov? His day job was Professor of Biochemistry at Boston University, and he wrote or edited over 500 books on popular science as well as SF. He lectured on cruise ships on science, and he had a very wide range of interests, including a love of Gilbert and Sullivan comic operas and Sherlock Holmes!

But you can find all this here
http://en.wikipedia.org/wiki/Isaac_Asimov
and in other sources.

As for your other quibbles, they just reinforce my idea that a MB is no place for serious debate. Sorry!
 
garrick92 said:
Your claim that thinking machines are already among us is patent bollocks.
It would be, if I'd made it! ;)

Stick to real world facts, rather than shooting down stuff I never said.
 
garrick92 said:
In the context of this discussion that seems to me to be stating that machine and human intelligence are equivalent.
No, I'm exploring the idea that they may be equivalent, in operational terms.

That's to say, there are many parallels between neuronal pathways in the brain, and electronic circuits. (But it may be that neither have anything to do with consciousness!)

As I said, many moons ago, I think the bottom-up route to investigating consciousness is likely to be productive, as then we are working up from what is known, and trying to expand our understanding into the realm of the less understood.

From the scientific point of view, this means we can expand our understanding without introducing new and numinous concepts.

(God, I hate it when I find myself promoting Occam's Razor!)
 
Turing, of course, had no idea that we would so rapidly produce computers with such vast storage, able to hold so much data online.

There is no evidence so far that current computer technology can or will produce artificial intelligence. It simply records decisions input by humans (the logic in programs) and replays them as required.

People get taken in by the number of things that get stored and the multiple levels - in the processor, in the OS, in actual user programs etc - but no computer has as yet had a single original thought. Or indeed, contains any engineering, soft or hard, capable of providing one.

By coincidence, I'm giving a talk on the subject today.
 
Cochise said:
By coincidence, I'm giving a talk on the subject today.
Will you be referring to Machine Learning? ;) (See Wiki, etc.)


(Sometimes I don't know why I bother. So many people go on parroting their same old views anyway.)
 
We could ignore AI at our peril...

A robot that’s smarter than us? There’s one big problem with that…
By Tom Chivers, Science
Last updated: August 6th, 2014

The day I realised that machines were going to take over the world was March 29 2011, when I saw a YouTube video of two quadcopter drones playing ping-pong.

Yes, I know, ping-pong is an unlikely choice of combat technique. But it was creepy. They were obviously tracking the movement of the ball, responding swiftly and accurately to its movement, and taking appropriate steps to return it. There was intelligence (albeit of a limited sort) and an ability to respond to and interact with the real, physical universe in a controlled, purposeful way – all in machines about the size of a side-plate. Today ping-pong, I thought: tomorrow, auto-tracking turret-mounted plasma cannons, fiery death blazing from implacable faceless robots, and all that. :shock: The human world coming to an end not with a bang, or a whimper, but the irritating mosquito whine of electric-powered rotor blades.

Anyway. A mere three years after my insight into our terrible future, Elon Musk, the man behind the Tesla electric car company, has finally caught up. “We need to be super careful with AI,” he tweeted. “Potentially more dangerous than nukes.” He’d been reading a book entitled Superintelligence, by an Oxford University professor called Nick Bostrom, and it had clearly given him the heebie-jeebies.

The thing is, I’ve read it too, and he’s right to have the heebie-jeebies. We’ve all been hoodwinked by science fiction for too long. We’ve seen machine intelligences conquer the world, but then be defeated because they made some startlingly stupid error, such as building all their killer robots as fragile humanoids, instead of, say, as a heavily armed orbital platform which can drop lots of hydrogen bombs on the hero’s head from the comfort of space.

But that’s because we, as humans, think that human intelligence is the best. Hollywood is filled with feel-good messages about how robotic logic is no match for fuzzy, warm, human irrationality, and how the power of love will overcome pesky obstacles such as a malevolent superintelligent computer. Unfortunately there isn’t a great deal of cause to think this is the case, any more than there is that noble gorillas can defeat evil human poachers with the power of chest-beating and the ability to use rudimentary tools. :( The superintelligences – should they come to exist – will be superintelligent. They will be cleverer than us. If they want us out of the way, for whatever reason, then there probably won’t be an awful lot we can do about it.

The good news is that, for the moment at least, there’s no imminent danger of us successfully building one of these things (although, as Bostrom points out, once something a bit smarter than us is built, it won’t take long for that smarter thing to improve itself, because it’s smarter than us). But probably, one day, we will, and then the survival of our species will depend on whether or not the Silicon Valley geeks who make the first one have successfully programmed it not to turn all the matter in the entire solar system, us included, into spare parts for itself.

It is possible that we will avoid that fate – Bostrom considers ways in which we could programme values and morality into such a machine, although it would be a lot trickier than Isaac Asimov’s old Three Laws of Robotics (“Do not harm humans”, etc). But it’s also possible that attempts to do so would completely backfire. Essentially, at some point in the next century or so, we’ll find out whether we’ve built humanity’s ultimate servant, or the thing that kills us all, and we won’t know which it is until it happens.

All of which is exciting, and a little terrifying, in the way that existential threats to the species can be. I for one will never look at ping-pong in the same way again. 8)

http://blogs.telegraph.co.uk/news/tomch ... with-that/
 
An amusing-and thought provoking-vision of life after the Singularity(think of SKYNET) do read the works of Charles Stross, one of the leading lights of current SF.

Buy his books(he'll be glad for the money). His Laundry Files skewers the Secret and Civil services quite ruthlessly-as a long time bureaucrat and lawman myself, there's never a false note, and his Family Trade cycle is the best work of its kind, ever.

However, "Saturn's Children" and "Neptune's Brood", where man has gone extinct and been supplanted by his bio-mechanical devices, offer a thoughtful, and downright gripping, look at a cyborg future.

Hint:they miss us.
 
Yes, I did refer to 'machine learning' and pointed out the risks should it ever amount to anything. But currently the _machine_ does not 'learn' anything - what happens is that past successes or failures are recorded and decisions made on that data according to an algorithm which records what a human would do in those circumstances with that data.

To take the robot shown learning to climb stairs - it has numerous alternative movements that it is programmed to carry out, and a sensor input that gives its height. It will keep trying the pre-programmed movements until it gets one that results in an increase in height. The program (not the 'machine') will then add a flag to say 'use this one first next time'. The human did it, not the machine. And bear in mind that should the input from the sensor be wrong, the machine has no way to detect it - then several kinds of undesirable scenarios arise, ranging from simple failure to complete the task to damage to the machine or the surrounding environment, including any humans that happen to be in it.
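The mechanism described above can be sketched in a few lines. The move names and the height-gain table are invented for illustration (a real robot would read a height sensor); the point is that the 'learning' is just a flag recorded by a human-written program:

```python
import random

class StairClimber:
    """Toy model of the described trial-and-error robot."""
    def __init__(self):
        # Fixed repertoire of moves and the height change each produces;
        # in a real robot this would come from a sensor, not a table.
        self.effects = {"shuffle": 0.0, "lean": 0.0, "high_step": 1.0}
        self.preferred = None   # the flag: 'use this one first next time'
        self.height = 0.0

    def try_step(self):
        order = list(self.effects)
        random.shuffle(order)            # try moves in arbitrary order...
        if self.preferred:               # ...except the flagged one first
            order.remove(self.preferred)
            order.insert(0, self.preferred)
        for move in order:
            gain = self.effects[move]
            if gain > 0:                 # sensor reports an increase
                self.height += gain
                self.preferred = move    # record the success
                return move
        return None
```

Every rule here - which moves exist, what counts as success, what to do with the flag - was written by the programmer; the machine only records which pre-programmed move happened to work.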

So again, what you have is a recording of human process, not the machine 'learning' for itself. It simply cannot be any other way with existing technology, because that is how the 'brain' part of the technology works - it is basically nothing more than a series of switches set by a human - well, many humans, typically. The only difference between a computer and, say, a television is that those switches are not hard-wired; the sequence can be loaded dynamically in the form of a program.

Thus a machine designed using this technology may be able to emulate human behaviour, but it is neither conscious nor intelligent.
 
And what if we cannot make AI?

It would raise some interesting spiritual issues, would it not?
 
krakenten said:
And what if we cannot make AI?

It would raise some interesting spiritual issues, would it not?
But we can already make AI. What do you think drives 'Driverless' cars? ;) And they're already talking about doing away with aircraft pilots, for safety reasons. (Article in latest issue of New Scientist:
http://www.newscientist.com/article/mg2 ... urce=NSNS& )

Seems to me there's a sort of desperation amongst jellyware fans to deny that a machine could ever be intelligent or aware, and that humans will always be top dog*, for ever and ever, amen.

Cochise: "..a machine designed using this technology may be able to emulate human behaviour, but it is neither conscious nor intelligent."

I sometimes doubt if other humans are conscious or intelligent. If 'emulation' doesn't count, how could I prove it either way?

As for 'spiritual issues', no-one's ever defined what spirit is, or proved that it exists, so that is heading into murky waters. How does it advance anything to deny the possibility of machine intelligence, but to accept the existence of an unproven 'spiritual' intelligence?


* Mixed metaphor! ;)
 
garrick92 said:
You're asking us to accept the possibility of machine intelligence (for which there is absolutely zero evidence!) but to simultaneously deny the existence of spiritual intelligence (of which there is plenty of evidence -- but it's all first-hand accounts of numinous experiences such as OOBEs and NDEs).
Got to disagree with most of that - you're tending towards the fluffy woo-woo end of the spectrum, it seems.

I'm interested in OOBEs and NDEs (just look at my posts on those subjects), but that doesn't mean I abandon my scientific training.

And machine intelligence is hard science. It's a field which employs hundreds if not thousands of scientists and technicians, and new advances are reported almost every day. Similarly, biologists are framing their neurological researches within the parameters of cybernetics. This is the way history seems to be flowing, away from the ghosties and ghoulies of the past, and towards a more co-ordinated scientific understanding of what's really going on.
 
It may be 'hard science' - actually it's engineering - but it isn't - yet - about 'intelligence' - it's about what are effectively glorified tape recorders.

You'll be suggesting cartoon characters have 'intelligence' next :)

I don't argue that artificial intelligence is 'impossible', incidentally, simply that we haven't produced any yet, and that I don't believe we can using the currently available core technology - the CPUs and so on. The reason people in the field use the word 'intelligence' is because they are under the same misapprehension as Turing, though with far less excuse. A truly intelligent machine would just have its sensors plugged in and then start learning without a program.

We have pilots in big aircraft primarily to referee among the machines if they go wrong. I don't see a computer managing that task, nor do I believe driver-less cars will actually function safely once they are in an environment with lots of unpredictable humans in it.

After all, in both cases, it is not the actual piloting that is difficult, it is the reacting to unpredictable combinations of circumstances.
 
Having grown up with stories containing super-intelligent robots-Robby, Data, the 'Lost in Space' machine, there's quite a list-I am moved to ask, how much intelligence?

A self-driving car needs about the same brain-power as a rat or a squirrel, perhaps as little as a mouse (though I've had several very clever pet mice). Since we may expect those to be common in a decade or so - or not, there are some problems - how much intelligence should we be seeking?

Many animals are self-aware - one of my greyhounds would sit for hours admiring himself in a large mirror - is that what we want? Some African Grey Parrots will carry on a very basic conversation with humans, and some primates who have learned to use a keyboard will sometimes turn up an abstract thought or two. One, scolded by her handler for throwing toys, responded to the question "Why can't you act like a normal child?" with the sign, 'gorilla'.

But so far, no machine has done this-at least to my knowledge.

Got a feeling this is going to be a long dark road.
 
garrick92 said:
rynner2 said:
And machine intelligence is hard science. It's a field which employs hundreds if not thousands of scientists and technicians, and new advances are reported almost every day.

I think you're overdoing it a bit here. AI is a theoretical field
Hardly! I recently posted in March of Technology about two or three teams of researchers making big practical advances in AI. (I didn't post it here as it didn't actually mention consciousness, although it did mention advances in machine visual perception which would obviously be useful to a conscious AI.)


I think you've got your fingers in your ears. Figuratively speaking.
Funnily enough, that's how I see the AI skeptics! ;)
Your responses are too predictable, probably created by a ZX81!


Krakenten says, "Got a feeling this is going to be a long dark road."

But it's getting brighter all the time! In the meantime, try taking those sunglasses off...
 
I'm not sceptical about AI - I keep pointing out it is entirely possible, theoretically.

But I believe the current confusion between a machine that can record data and make decisions according to predetermined algorithms (i.e., recorded human decisions) and a machine capable of, if you like, creating its own algorithms, is actually holding the field back, no matter how well they manage to make machines that simulate intelligence by using recordings of human logic.

I'm glad you think things are getting brighter - seems to me like we're in decline as a civilisation, and might well be heading for another dark age in about 50-100 years time. A reasonable topic for a new discussion, perhaps.
 
garrick92 said:
But since you dangle the bait, I'll bite.

Post your MoT stuff here, even though it has nothing to do with consciousness and let's see if it is (as you intimate) hard evidence of AI, or even of progress toward it.
Why should I go to the effort of reposting? Go search for it, if you really want another excuse to block your ears and cover your eyes so you can deny any AI at all!

(Sheesh! You can lead a horse to water, but you can't teach it to water-ski! :roll: )
 
Why bother with the asterisks? With or without them, I've noticed that you're behaving like a prize tosser yourself, recently, on a number of threads.
 
When it comes to the intelligent machine, I'm thinking it may not be such a good idea.

Notice how fond we can become of the fictional versions. People were very upset when Robby the Robot 'died' in "Forbidden Planet" - he appears in the final scene to put us at ease - and as the films have gone along, the 'Terminator' has become rather a wit. Talking robots in general seem to have a humorous bent.

But what about the very real possibility of mechanical insanity? Or of AI's developing social structures of their own, unknown to us? Could a mechanical or bio-mechanical Spartacus arise?

Dr. Who(speaking of the Androgum) said, "You can teach differential calculus to an earwig, but should you?"

I'm glad it will not happen in my lifetime. Very,very glad.
 
garrick92 said:
Since A Certain Person so politely declined to point me at other postings he'd made on another thread, I cut to the chase and went off to read about the subject for myself.
I gave you the name of the thread I posted to. Do you really need someone to explain how to Use Search on this MB?!

OK, click Search (Under Forums). Type March Technology in search box. Click Search for all terms. Click Search Topic Title. Then click Search. Easy Peasy!

But the piece I posted is not about chess playing computers, or even self-driving cars - it's much more cutting edge. But it's not about consciousness, so it doesn't belong here. Enjoy!
 
garrick92 said:
I still fail to see why you couldn't just copy/paste it here...
Because it doesn't belong here, and why clutter the FT server with two copies of the same thing, just to satisfy you?

As for 'Feel free to prove me wrong', I know I can't. Your credo is, "I've made my mind up, don't confuse me with the facts!" ;)

No doubt you think the same about me - just imagine me with my fingers in my ears!

So, to quote somebody or other, "I'll not engage with you further on this thread till you post something substantive."
 
garrick92 said:
..Now, let's put this little spat behind us and move on..
I think this is probably the best course. Name calling does no-one any favours.

Onward and upward.
 
garrick92 said:
Here is an article about developments in the study of "Out of Body Experiences", which I think ought to be read.
Much of this sort of stuff has already been discussed on various OOBE and NDE threads on this MB. Perhaps you should read them, and post anything new there.

I think that these have to be explained satisfactorily by proponents of 'consciousness as hardware', and I look forward to discussion in that area, should anyone be willing to take it up.
Well, I don't know who's been proposing 'consciousness as hardware'. I certainly haven't - quite the opposite. (And 'Consciousness as Jellyware' still isn't fully explained either!)

I've been exploring the idea that consciousness is the result of complex programming, whether it's implemented in a biologically-evolved brain, or in advanced silicon chips (built and evolved by earlier generations of silicon chips, when human manufacture and design prove inadequate).

And if such programs are implemented in autonomous, sensing, communicating robots, then it seems likely they will also be conscious. And then we can speculate a little further - who's to say that such conscious robots cannot also experience NDEs or OOBEs?

After all, if such a robot is threatened with destruction, or loses its hearing, sight, etc, then its mind might well follow a similar path to that of a stressed human mind in similar circumstances. (This is especially likely as researchers extend their work with chips based on parts of the human brain, networked together to simulate brain activity.)

But if it turns out that there is some fundamental difference between biological intelligence and machine intelligence, then we'll have learned something important, and who knows where that might lead?
 
Neural networks are not new, you know - we've had them since the '80s. One of my own personal heroes/gurus - an extremely clever man - went to work on them in the early '90s, before he died in an unfortunate accident.

A non-digital processor that was capable of working off voltage levels rather than simply a positive or negative pulse/charge/current would potentially be a step towards AI.

It's likely that it is something along those lines that lets our brain store so much information, and as it would be a lot more sensitive to disruption by any other stray or nearby currents, it might explain why our memories and actions are so easily influenced by other things floating around in our head. Indeed, it might be that 'secondary' interaction which amounts to 'intelligence'; that is, the interaction between currents in adjacent wires/cells/neurons might be 'thought'.

So, if this hypothesis has merit, by simply storing things in the machine you would put processes in train that were not pre-programmed, but were simply due to where in memory things are stored. This again seems to fit with some ideas of how the human brain functions, in that activity in certain areas results in certain thought patterns. In my theory, this would not be due to where in the brain it is, but because certain things are stored near to each other. This would also explain why some people with serious brain damage need a period of relearning but can then have functionality approaching normality, despite the fact that apparently vital parts of the brain had been damaged: the relearning would be re-storing information together in new parts of the brain so the necessary interactions can restart.
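The contrast being drawn - a switch that is simply on or off versus a unit that works off voltage levels - can be sketched like this. It is a toy model assuming a sigmoid response, not a claim about real neurons or any particular hardware:

```python
import math

def binary_neuron(inputs, threshold=1.0):
    """Classic switch model: either fires (1) or does nothing (0)."""
    return 1 if sum(inputs) > threshold else 0

def graded_neuron(inputs, interference=0.0):
    """Voltage-level model: output is a continuous level, and nearby
    activity ('interference', per the stray-current idea above) shifts
    it smoothly rather than flipping it."""
    v = sum(inputs) + interference
    return 1.0 / (1.0 + math.exp(-v))   # smooth sigmoid response

print(binary_neuron([0.4, 0.4]))        # 0 -- below threshold, nothing
print(binary_neuron([0.4, 0.4], 0.7))   # 1 -- all-or-nothing switch
print(graded_neuron([0.4, 0.4]))        # a partial response
print(graded_neuron([0.4, 0.4], 0.2))   # shifted slightly, not flipped
```

With the switch, a small disturbance either does nothing or flips the output entirely; with the graded unit, it nudges the level - which is the kind of sensitivity to neighbouring activity the hypothesis above relies on.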

Can I be discharged from my 'Luddite' category now ?
 