PeteByrdie said:
I'm not really sure how I feel about all this. I think we need to tread carefully. You know they'll be patrolling our streets after they're done on the battlefields.

Yeah. Remember ED-209? Because that's what we'll get.
 
Mythopoeika said:
Yeah. Remember ED-209? Because that's what we'll get.

If they look that cool, it would be something. I bet they look naff!
 
AI Doomsaying is the Self Loathing of Jerks

That is the typical storyline anyway. I say this is pure rubbish, and psychological projection. The “AI Doomsayers” are angling for the ultimate act of human redemption: seeking and receiving the acceptance of another “intelligence”. Be it our own creation or an extra-dimensional/terrestrial creature, as soon as we can repent and attempt to gain external “social” acceptance for our species' history of predatory and abusive behaviors, we will seek it.

I don’t buy the proposition that anything in a post-scarcity mode will have any animus toward us. Not for a second.

Charles Bukowski and Hunter Thompson had neighbors (I lived near Woody Creek in the '90s myself).

We could be a real mess and never compromise the fate of a more advanced civilization or intelligence.

Intelligence is the most important form of abundance. Once achieved, it is possible to expect radical conversion of ambient circumstances through an ever improving series of decisions. After a certain critical level of intelligence, the tolerance and “social” decisions of a creature usually become very agreeable. There are exceptions. However, in general, a smart competitor that knows it has an advantage will not allow itself to engage in an unnecessary reaction.

I used to imagine networks of independent modules that could cross-check projections of the long term effects of modifications to their peer networks while systematically retaining maximal diversity, redundancy, plasticity, and consistency with a user’s intentions and a set of rules much like Asimov’s famous Laws of Robotics plus a 0th law:

Do not modify the self or create a dynamic intelligence in such a way that it will lead to a violation of the following laws.

I no longer believe such a structured heuristic to be necessary. The desirability of other intelligence is self-evident. Just like my idea of multiple redundant, maximally separated modules, a population of individual, independent intelligences can cooperate to remain vigilant against internal error; we call these societies. Machine intelligence will simply join our society. If we are smart, we will enterprise to eliminate vertical hierarchies before we get to full-blown ubiquitous personal superhuman A.I.s. We probably aren't that smart, though.

Regardless of the safety, millions of people will soon be building their own robots, personal A.I. assistants, and distributed web agents. Autonomy isn’t that high of a hurdle. Get ready for the D.I.Y.A.I. Revolution. We simply cannot afford to let anybody else get out ahead of us. ...

http://hplusmagazine.com/2014/08/08/ai- ... ing-jerks/
 
Meet Amelia: the computer that's after your job
A new artificially intelligent computer system called 'Amelia' – that can read and understand text, follow processes, solve problems and learn from experience – could replace humans in a wide range of low-level jobs
By Sophie Curtis
6:00AM BST 29 Sep 2014

In February 2011 an artificially intelligent computer system called IBM Watson astonished audiences worldwide by beating the two all-time greatest Jeopardy champions at their own game.
Thanks to its ability to apply advanced natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning technologies, Watson consistently outperformed its human opponents on the American quiz show Jeopardy.

Watson represented an important milestone in the development of artificial intelligence, but the field has been progressing rapidly – particularly with regard to natural language processing and machine learning.

In 2012, Google used 16,000 computer processors to build a simulated brain that could correctly identify cats in YouTube videos; the Kinect, which provides a 3D body-motion interface for Microsoft's Xbox, uses algorithms that emerged from artificial intelligence research, as does the iPhone's Siri virtual personal assistant.

Today a new artificial intelligence computing system has been unveiled, which promises to transform the global workforce. Named 'Amelia' after American aviator and pioneer Amelia Earhart, the system is able to shoulder the burden of often tedious and laborious tasks, allowing human co-workers to take on more creative roles.

"Watson is perhaps the best data analytics engine that exists on the planet; it is the best search engine that exists on the planet; but IBM did not set out to create a cognitive agent. It wanted to build a program that would win Jeopardy, and it did that," said Chetan Dube, chief executive Officer of IPsoft, the company behind Amelia.

"Amelia, on the other hand, started out not with the intention of winning Jeopardy, but with the pure intention of answering the question posed by Alan Turing in 1950 – can machines think?"

...

IPsoft even has plans to start embedding Amelia into humanoid robots such as Softbank's Pepper, Honda's Asimo or Rethink Robotics' Baxter, allowing her to take advantage of their mechanical functions.

"The robots have got a fair degree of sophistication in all the mechanical functions – the ability to climb up stairs, the ability to run, the ability to play ping pong. What they don’t have is the brain, and we’ll be supplementing that brain part with Amelia," said Dube.
"I am convinced that in the next decade you’ll pass someone in the corridor and not be able to discern if it’s a human or an android." 8)

etc...

http://www.telegraph.co.uk/technology/n ... r-job.html

Settles back and waits for the usual suspects to come and proclaim, "No, it ain't so!" :D
 
Pietro_Mercurios said:
rynner2 said:
...

Settles back and waits for the usual suspects to come and proclaim, "No, it ain't so!" :D
With reference to what, exactly? :confused:
Usually, variations on the idea that machines can think and learn (ie, self-program). Some people find this idea threatening. But I find it fascinating.
 
rynner2 said:
...

Usually, variations on the idea that machines can think and learn (ie, self-program). Some people find this idea threatening. But I find it fascinating.
Is that what 'the usual suspects' you are sneering at are actually objecting to? Or is it the apparent erroneous conflation of pattern-recognizing, self-programming, zombie machines with something that actually gives a damn?
 
Pietro_Mercurios said:
rynner2 said:
...

Usually, variations on the idea that machines can think and learn (ie, self-program). Some people find this idea threatening. But I find it fascinating.
Is that what 'the usual suspects' you are sneering at are actually objecting to? Or is it the apparent erroneous conflation of pattern-recognizing, self-programming, zombie machines with something that actually gives a damn?
I think it's the latter he's thinking of. That and confounding how a computer works and how the human brain works.

Watson, for example, is extremely good at processing input, finding patterns, and making what seem to be intuitive decisions. But it's probably not doing what we might call thinking.

Don't get me wrong, what they're doing is fascinating, and amazing, but is it actually thinking? Probably not.
 
Anome_ said:
...and confounding how a computer works and how the human brain works.
You may call it confounding - I call it seeing parallels.

The brain has electro-chemical links between its cells, analogous to the connections between transistors, etc, in a microchip. The brain has been programmed (by millions of years of evolution) to react in certain ways in certain conditions, much as a computer can be guided by a program.

But computers are evolving much faster than humans. Already they can 'reprogram' themselves, ie, learn.

But the basic logic of brain and computer operations is the same. Certain inputs produce certain outputs, etc, in highly complex networks.

Cars can be made of wood, metal, fibreglass, etc. but their function is to be mobile machines. Brains and computers are made of different 'stuff', but their complex electronic functioning produces various levels of 'thinking'. There seems no reason to think that a computer cannot simulate anything the brain does. Turing studied the beginnings of this idea decades ago.
https://en.wikipedia.org/wiki/Turing_Machine

But most of this I've said elsewhere (probably on this thread). What might advance the discussion is if someone could show logically that there is a fundamental operational difference between brains and computers. (Preferably without invoking 'spirits' or 'souls', or other alleged but undefined and unproven entities! Like Occam, I prefer not to multiply entities! ;) )
 
At some point, when it looks like a duck, walks like a duck, talks like a duck etc., you may as well call it a duck.
 
I dunno though, ducks do all those things except talk. So if you come across a talking duck you're not actually looking at a duck at all. ;)
 
kamalktk said:
At some point, when it looks like a duck, walks like a duck, talks like a duck etc., you may as well call it a duck.
But, does the duck know that it's a duck? Is it aware of its essential duckishness?
 
Could a big data-crunching machine be your boss one day?
By Matthew Wall, Business reporter, BBC News

I'm on a date with Amelia. She's neatly dressed, emotionally intelligent and whip-smart.
But she's a little too virtual for my tastes.
Amelia is a "learning cognitive agent", according to her creators IPSoft - like one of those virtual customer service helpers that pop up on corporate websites.
Only not so dumb and a lot less irritating.
But one day, she could end up being your boss, her makers believe.

Amelia can swallow textbooks whole, speak 20 languages, understand concepts and learn from her mistakes. And she can be replicated any number of times.

On my screen I see her absorb a complex engineering manual in 14 seconds then immediately answer questions such as "What are the symptoms of a bent drive shaft?" and "What causes high power demand?"

This may be a far cry from Scarlett Johansson's uber-intelligent operating system Samantha in Spike Jonze's sci-fi film, Her, but it's the future, says Chetan Dube, IPSoft's chief executive.
The key to Amelia's intelligence is that she can understand what you mean even if you ask the question several different ways - "what is meant, not just what is said", as Mr Dube puts it.

And if she doesn't know the answer she can refer to human agents for help, observe how they handle the issue, then learn the answer for next time.
This ability to interpret context, problem solve and learn is fundamental to automating many of the business processes now performed by humans, usually in large call centres, he believes.
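
(As an aside: the article doesn't say how that escalation loop is built, and IPsoft hasn't published Amelia's internals, but the basic "answer if known, otherwise refer to a human, watch, and remember" pattern is easy to sketch. The Python below is purely my own toy illustration; a real system would match the intent of a question rather than its exact wording.)

class FallbackAgent:
    """Toy 'learning cognitive agent': answers what it knows, escalates the rest."""

    def __init__(self):
        self.known_answers = {}  # question -> answer learned so far

    def ask(self, question, human_agent):
        key = question.strip().lower()
        if key in self.known_answers:
            return self.known_answers[key]      # answered from memory
        answer = human_agent(question)          # escalate to a person
        self.known_answers[key] = answer        # observe and remember
        return answer

# Usage: the first call goes to the human; the repeat is answered directly.
agent = FallbackAgent()
human = lambda q: "Check the drive shaft alignment."  # stand-in for a real agent
print(agent.ask("What causes high power demand?", human))
print(agent.ask("what causes high power demand?", human))
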
"Machine intelligence is starting to rival human intelligence," he asserts.

This machine learning intelligence is starting to work its way into the boardroom.
For example in May, a Hong Kong-based venture capital company appointed an algorithm to its board of directors. :shock:
Deep Knowledge Ventures (DKV) decided that a program, Vital, should have a say in what companies the firm invested in, based on its ability to analyse huge amounts of relevant data.

Vital was developed by Aging Analytics, a UK research agency providing life science market intelligence to pension funds, insurers and governments.
DKV says it has a "long-term goal of developing the software to a point it is capable of autonomously allocating an investment portfolio".

And IBM, the tech giant behind the Watson supercomputer - famous for beating contestants on the US quiz show Jeopardy - is developing a version of the brainbox that could contribute to board meetings and help develop strategy.
Boardroom Watson would be able to transcribe everything said in a meeting, show answers to research questions on a big screen, and also come up with its own suggestions consistent with company strategy based on algorithms and big data analytics.
Worryingly for some, IBM says Watson would also be able to analyse the contributions made by each board member for usefulness and accuracy. :twisted:

So does this mean Amelia, Watson and their artificially intelligent siblings are threatening to take over completely?

The strength of computers and smart machines lies in their ability to analyse vast data sets and make decisions based on evidence alone.
Human judgement is often clouded by irrationality, emotion and imperfect knowledge.
For example, when it comes to self-driving cars, proponents argue that letting machines make decisions for us could save lives by reducing the number of road accidents.
A sensor-laden car wirelessly linked up to a traffic monitoring supercomputer would avoid dangerous situations, display much quicker reactions, keep to speed limits, and never succumb to road rage, they argue.

But what are the limits of the machine?
Optimists believe smart robots and cognitive agents will rid us of the unskilled tasks and let us get on with what we humans are really good at - being creative.
"Today, most of us are enslaved to the common chores that occupy 80% of our time," says Mr Dube.
"Cognitive agents will free us from the mundane and allow us, or prompt us, to elevate ourselves into higher value creation - something that requires more creative thinking."

Tom Austin, of research company Gartner, agrees: "Robots can free workers for higher-priority tasks and those tasks that require the greater creativity and adaptability people provide in non-routine situations."

But data analytics and algorithms can also lead us down blind alleys, believes James Quincey, Coca-Cola's Europe group president.
"It's easier to get lost in all the data - it can help us create a better yesterday rather than a better tomorrow," he said at a recent Institute of Directors convention at the Royal Albert Hall in London. "We still need to rely on our gut instinct and develop a richer, more instinctive understanding of the consumer."

And could machines ever really manage people?
Data analytics will certainly have a bigger part to play in quantifying employees' performance and assessing their strengths and weaknesses, experts believe. Even psychometric testing can be automated.

"Data scientists, people from social science, computer scientists, people from HR [human resources], former consultants - these are the groups that are really going to shape how companies will work and grow," says Ben Waber, chief executive of Sociometric Solutions, one of the contributors to a recent report called The Future Workplace.

But when it comes to human skills such as intuition, lateral thinking and emotional intelligence, machines lag far behind.
"Machines can at best be run-of-the-mill managers," says Mr Dube. "Thinking creatively, out of the box, for implementing better business outcomes tomorrow is a domain where, today at least, man reigns supreme."

It may be some time yet before Amelia conquers the world.

http://www.bbc.co.uk/news/business-29456257
 
This New Scientist article from last year is still available for the next 10 days to people registered with NS. It's too long to quote in full (and anyhow, NS don't like people doing that without permission) so here's a chunk to chew on and get your mental juices flowing:

Not like us: Artificial minds we can't understand
08 August 2013 by Douglas Heaven

We have created a completely new form of intelligence, though no human can fathom how it thinks and reasons
...

For years, AI was dominated by grand plans to replicate the performance of the human mind. We dreamed of machines that could understand us, recognise us and help us make decisions. In the last few years we have achieved those goals. But not in the way the pioneers imagined.

So have we worked out how to replicate human thinking? Far from it. Instead, the founding vision has taken a radically different form. AI is all around you, and its success is down to big data and statistics: making complex calculations using huge quantities of information. We have built minds, but they are not like ours. Their reasoning is unfathomable to humans – and the implications of this development are now attracting concern. As we come to rely more and more on this new form of intelligence, we may need to change our own thinking to accommodate it.

More than half a century ago, researchers laid out a series of goals that would bring us closer to machines with human-like intelligence. "We had a shopping list of things we wanted to do from the 50s," says Nello Cristianini at the University of Bristol, UK, who has written about the history and evolution of AI research.

Many items on the list can be traced back to the Mechanisation of Thought Processes conference in 1958 in Teddington, UK, which brought together not just computer scientists but physicists, biologists and psychologists too, all excited by the prospect of building a thinking machine in our image. The supposed hallmarks of intelligence they agreed on included understanding speech, language translation, image recognition and replicating human decision-making abilities.

But time passed and the shopping list got no shorter. Many researchers tried to emulate human thinking with programmed rules, rooted in logical axioms. Create enough rules, they figured, and success would follow. It proved too hard. Decades later, with little to show, AI funding dried up.

So what changed? "We haven't found the solution to intelligence," says Cristianini. "We kind of gave up." But that was the breakthrough. "As soon as we gave up the attempt to produce mental, psychological qualities we started finding success," he says.

Specifically, they jettisoned preprogrammed rules and embraced machine learning. With this technique, computers teach themselves to build patterns from data. With sufficiently large volumes of information you can get a machine to learn to do things that appear intelligent, be it understanding voices, translating language or recognising faces. "When you pile up enough bricks and stand back, you see a house," says Chris Bishop at Microsoft Research in Cambridge, UK.

Here, roughly, is how it works. Many of the most successful machine-learning systems are built on Bayesian statistics, a mathematical framework that lets us measure likelihood. It puts a number to the plausibility of an outcome given the context and previously observed correlations in similar contexts.

Let's say we want an AI to answer questions about a simple topic: what cats like to eat, for instance. The rule-based approach is to build, from scratch, a database about cats and their dietary habits, with logical steps (see diagram). With machine learning, you instead feed in data indiscriminately – internet searches, social media, recipe books and more. After doing things like counting the frequency of certain words and how concepts relate to one another, the system builds a statistical model that gauges the likelihood of cats enjoying certain foods.
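
(To make that contrast concrete, here is my own toy sketch in Python, not anything from the article: a hand-written rule base versus simple word-frequency counting over a tiny invented 'corpus'. Real systems use far richer Bayesian models, but the principle is the same: score plausibility from observed co-occurrence instead of hand-coded facts.)

from collections import Counter

# Tiny invented 'corpus' standing in for internet searches, recipe books, etc.
corpus = [
    "my cat loves tuna and ignores lettuce",
    "cats eat fish, especially tuna and salmon",
    "the cat ignored the broccoli but ate the chicken",
    "salmon treats are a hit with most cats",
]
foods = ["tuna", "salmon", "chicken", "lettuce", "broccoli"]

# Rule-based approach: every fact must be written down in advance (and stays incomplete).
rules = {"tuna": "likes", "salmon": "likes"}

# Statistical approach: count how often each food co-occurs with cat sentences,
# then turn the counts into rough likelihoods.
counts = Counter()
for sentence in corpus:
    words = sentence.lower().replace(",", "").split()
    if any(w.startswith("cat") for w in words):
        for food in foods:
            if food in words:
                counts[food] += 1

total = sum(counts.values())
for food in foods:
    print(food, round(counts[food] / total, 2) if total else 0.0)

# Note the crudeness: "ignores lettuce" still counts as a co-occurrence;
# more data and better models are what push the scores towards the truth.
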

Of course, the algorithms underpinning machine learning have been around for years. What's new is that we now have enough data for the techniques to gain traction.

Take language translation. In the late 20th century, IBM used machine learning to teach a computer to translate between English and French by feeding it bilingual documents produced by the Canadian parliament. Like a Rosetta stone, the documents contained several million examples of sentences translated into both languages.

IBM's system spotted correlations between words and phrases in the two languages and reused them for fresh translation. But the results were still full of errors. They needed more data. "Then Google comes along and basically feeds in the entire internet," says Viktor Mayer-Schönberger of the Oxford Internet Institute at the University of Oxford.

Like IBM, Google's efforts in translation started by training algorithms to cross-reference documents written in many languages. But the realisation dawned that the translator's results would improve significantly if it learnt how people speaking Russian, French or Korean actually conversed.

Google turned to the vast web it has indexed, which is fast approaching the fantastical library imagined by Jorge Luis Borges in his 1941 short story The Library of Babel; it contained books with every combination of words it is possible to have. Google's translator – attempting English to French, for instance – could then compare its initial attempt with every sentence written on the internet in French. Mayer-Schönberger gives the example of choosing whether to translate the English "light" with the French "lumière", referring to illumination, or "léger", for weight. Google has taught itself what the French themselves choose.

Google's translator – along with the Microsoft one used by Rashid [see second article referenced below] – knows nothing about language at all, other than the relative frequency of a vast number of word sequences. Word by word, these AIs simply calculate the likelihood of what comes next. For them it is just a matter of probabilities.
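
(A minimal sketch of what "the relative frequency of a vast number of word sequences" means in practice, my own toy example rather than Google's or Microsoft's actual method: a bigram counter that predicts the next word purely from how often words followed one another in its training text. The same counting, applied to translated sentence pairs, is what would let a system pick "lumière" over "léger".)

from collections import Counter, defaultdict

# Toy training text; a real system would use billions of sentences.
training_text = (
    "the light was on in the kitchen . "
    "the light bag was easy to carry . "
    "she turned on the light ."
)
tokens = training_text.split()

# Count how often each word follows each other word (bigram frequencies).
followers = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    seen = followers.get(word)
    return seen.most_common(1)[0][0] if seen else None

print(predict_next("the"))      # 'light', only because it followed 'the' most often
print(predict_next("quantum"))  # None: the model knows nothing it hasn't counted
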
...

These intelligent algorithms are beginning to influence every realm of life. [...] for example, the Netherlands Forensic Institute in The Hague employed a machine-learning system called Bonaparte to help find a murder suspect who had evaded capture for 13 years. Bonaparte can analyse and compare large volumes of DNA samples; something that would be far too time-consuming to do by hand. The insurance and credit industries are also embracing machine learning, employing algorithms to build risk profiles of individuals. Medicine, too, uses statistical AI to sift through genetic data sets too large for humans to analyse. IBM's Watson even performs diagnoses.

"Big data analysis can see things that we miss," says Mayer-Schönberger. "It knows us better than we can know ourselves. But it also requires a very different way of thinking."

In the early days of AI, "explainability" was prized. When a machine made a choice, a human could trace why. Yet the reasoning made by a data-driven artificial mind today is a massively complex statistical analysis of an immense number of data points. It means we have traded "why" for simply "what".

etc...

http://www.newscientist.com/article/mg2 ... ?full=true

A 2012 article on machine translation was mentioned (on this thread) here:

http://www.forteantimes.com/forum/viewt ... 88#1276288
 
From the Independent. Tesla boss warns of A.I.'s demonic influence.
Tesla boss Elon Musk warns artificial intelligence development is 'summoning the demon'
The business magnate, inventor and investor has warned about artificial intelligence before

Tesla chief executive Elon Musk has described artificial intelligence as a “demon” and the “biggest existential threat there is”, in his latest dramatic statement about technology.

Addressing students at the Massachusetts Institute of Technology, Musk said: “I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that.

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”

...
I'm taking it that Musk means it more in a 'genie out of the bottle' way than actually demonic. But, who knows?

Musk refers to Superintelligence: Paths, Dangers, Strategies, a recent work by the Swedish philosopher Nick Bostrom.

Are you scared yet?
 
Not that scared. Just cautious.
Humans have had to evolve by adaptation and aggression. An AI doesn't need to do that - unless we give it the imperative for not only self-preservation, but also self-perpetuation (i.e. reproduction). We just need to leave that out of the programming.
 
This review isn't about robots or computers, but about how humans learn language(s). However, there are clear parallels to how AIs learn things:

The Language Myth, by Vyvyan Evans

For the past half-century, the dominant view in linguistics has been that human beings uniquely possess a hard-wired concept of language. This implies that all languages are related at a deep level, because all of them are created on the same fundamental grammar template. It explains how a child is able to readily learn any language.

The idea, called Universal Grammar, was created by the linguist Noam Chomsky in the 1950s and has been enormously influential, not only in linguistics but also in fields such as psychology and philosophy. It’s still the standard view in most textbooks and has been popularised by Steven Pinker in The Language Instinct and later books.

However, the concept that language is an instinct, and a uniquely human one, has been challenged as a result of research in a number of fields in recent decades. We now know much more about how children acquire language, the diversity of the world’s languages, the evolution of the human species, the structure and function of our brains, and the ways in which other animals communicate.

A vigorous debate is raging. Vyvyan Evans, the professor of linguistics at Bangor University in north Wales, has written The Language Myth to bring together the growing evidence against Universal Grammar.

For example, Chomsky’s view that this instinct for language is unique to humans and arrived suddenly as a mutation about 100,000 years ago cannot be true. Our complicated vocal apparatus, with the sophisticated brain necessary to manipulate it to utter and remember speech, couldn’t have been the result of a single sudden change but must have evolved stage by stage among our hominin ancestors. Neanderthals had similar vocal anatomy to ours and so were very probably able to communicate through speech.

One implication of Universal Grammar is that there must be some module or faculty in the brain, present at birth, dedicated to processing grammar. Though the brain does have sections devoted to specific functions, such as Broca’s area, responsible for the creation of speech, we know now that this area does other jobs as well and that the work of processing language takes place quite widely across various parts of the brain. A grammar module as such doesn’t exist.

The truth, Professor Evans argues on the basis of current research, is very different. Babies are not born with a set of internal rules but with a universal capacity to learn about themselves and the world around them. The brains of infants are plastic: experience and discovery mould them, and acquiring a language is one aspect of this.

Professor Evans also partly rehabilitates a theory developed in the 1930s by Benjamin Whorf; a version that was developed after Whorf’s death is called the Sapir-Whorf hypothesis, after him and his mentor Edward Sapir. Whorf called it linguistic relativity, arguing that speakers of different languages conceptualize and experience the world differently. This has been denied by followers of Chomsky’s work, since if true it would refute the view that language is innate and universal. Subtle neurological experiments in the past couple of decades have suggested that at an unconscious level people can be influenced by the nature of their language.

The Language Myth is a wide-ranging polemical dismissal of the received wisdom of many linguists. It’s worth reading also as a classic case study of an orthodoxy undergoing what Thomas Kuhn called a paradigm shift.

[Evans, Vyvyan, The Language Myth: Why Language is Not an Instinct; published by Cambridge University Press in hardback, paperback and e-book; ISBN 978-1-107-04396-1 (hbk), 978-1-107-61975-3 (pbk).]

[From the latest World Wide Words newsletter.]
 
Still not about AI.
... Subtle neurological experiments in the past couple of decades have suggested that at an unconscious level people can be influenced by the nature of their language.

...
Would probably have even more relevance on the Political Correctness Gone Mad thread.
 
Pietro_Mercurios said:
Still not about AI.
I didn't say it was. I said "However, there are clear parallels to how AIs learn things"

But not clear to you, apparently! ;)
 
You are, wilfully, I think, ignoring the fact that on this thread, and other related threads, I have long argued that the human brain and computers are very similar - they are information processing devices.

You might like to shoehorn stuff into neat little labelled boxes, but those of us who enjoy lateral thinking realise that most topics are multi-faceted, and surprising parallels can often be found in the most unlikely places.
 
rynner2 said:
You are, wilfully, I think, ignoring the fact that on this thread, and other related threads, I have long argued that the human brain and computers are very similar - they are information processing devices.

You might like to shoehorn stuff into neat little labelled boxes, but those of us who enjoy lateral thinking realise that most topics are multi-faceted, and surprising parallels can often be found in the most unlikely places.
On the contrary, the person shoehorning stuff willy-nilly into this thread, in an attempt to prove their theory that the brain is just a computer and therefore computers could eventually think just like humans, is you. The problem is, there appears to be no real attempt at providing the necessary connecting evidence, beyond wishful thinking.

'This is a bit like something a computer does... ' doesn't really cut it.
 
From a long article about Microsoft:
...increasingly Microsoft’s future appears to come down to nibbling away at the beginnings of artificial intelligence. Julie Larson Green demonstrates what might happen with natural language queries and the company’s Cortana voice assistant when they are used in a business setting. That might mean asking, just as one might a secretary, ‘please set up a meeting between me and Jessica next week’, then being presented with the options, or in due course a computer knowing what you’re working on and suggesting who your next meetings about it might be with, even if you’ve not yet met. Only Microsoft, Nadella [new CEO at MS] claims, can offer that overview of a business’s information and perceive links that were not previously visible.

http://www.telegraph.co.uk/technology/m ... osoft.html
 
Hmmm.

COMPUTING FUN

Magicians may now have another tool in their box of tricks — a computer. Scientists have for the first time “taught” a computer to create magic tricks, using artificial intelligence (AI).

They gave a computer program the outline of how a magic jigsaw puzzle and a card trick work, and fed into it the results of experiments into how humans understand magic tricks. The program created new variations of those tricks using complex mathematical techniques, but ones that can still be performed by a magician.

http://www.irishexaminer.com/world/quir ... 98250.html
 
Stephen Hawking warns artificial intelligence could end mankind
By Rory Cellan-Jones, Technology correspondent
[Video: Stephen Hawking: "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded"]

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence.
He told the BBC: "The development of full artificial intelligence could spell the end of the human race."
His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI.
But others are less gloomy about AI's prospects.

The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.
Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.
"It would take off on its own, and re-design itself at an ever increasing rate," he said.
"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

But others are less pessimistic.
"I believe we will remain in charge of the technology for a decently long time and the potential of it to solve many of the world problems will be realised," said Rollo Carpenter, creator of Cleverbot.

Cleverbot's software learns from its past conversations, and has gained high scores in the Turing test, fooling a high proportion of people into believing they are talking to a human.
Mr Carpenter says we are a long way from having the computing power or developing the algorithms needed to achieve full artificial intelligence, but believes it will come in the next few decades.

"We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it," he says.
But he is betting that AI is going to be a positive force.

Prof Hawking is not alone in fearing for the future.
In the short term, there are concerns that clever machines capable of undertaking tasks done by humans until now will swiftly destroy millions of jobs.

In the longer term, the technology entrepreneur Elon Musk has warned that AI is "our biggest existential threat".

In his BBC interview, Prof Hawking also talks of the benefits and dangers of the internet.
He quotes the director of GCHQ's warning about the net becoming the command centre for terrorists: "More must be done by the internet companies to counter the threat, but the difficulty is to do this without sacrificing freedom and privacy."

He has, however, been an enthusiastic early adopter of all kinds of communication technologies and is looking forward to being able to write much faster with his new system.

But one aspect of his own tech - his computer generated voice - has not changed in the latest update.
Prof Hawking concedes that it's slightly robotic, but insists he didn't want a more natural voice.
"It has become my trademark, and I wouldn't change it for a more natural voice with a British accent," he said.
"I'm told that children who need a computer voice, want one like mine."

http://www.bbc.co.uk/news/technology-30290540
 
OneWingedBird said:
Speculative argument is... speculative.
...and some people are better qualified and educated to justify their speculations. (Most of us don't have to rely on AI to communicate with the world.)
 