A well-balanced article:

Does AI really threaten the future of the human race?
Rory Cellan-Jones, Technology correspondent

The end of the human race - that is what is in sight if we develop full artificial intelligence, according to Stephen Hawking in an interview with the BBC. But how imminent is the danger, and if it is remote, do we still need to worry about the implications of ever smarter machines?

My question to Professor Hawking about artificial intelligence comes in the context of the work done by machine learning experts at the British firm Swiftkey, who have helped upgrade his communications system. So I talk to Swiftkey's co-founder and CEO, Ben Medlock, a computer scientist whose Cambridge doctorate focused on how software can understand nuance in language.
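(For a flavour of what that kind of predictive-text software does at its very simplest, here is a toy sketch of a bigram next-word predictor. It is a crude illustration only - SwiftKey's real models are vastly more sophisticated - and the corpus is invented.)

```python
# Toy bigram next-word predictor: count which words follow which,
# then suggest the most frequent followers. Illustrative only.
from collections import Counter, defaultdict

def train_bigrams(sentences):
    """For each word, count how often every other word follows it."""
    follows = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def suggest(follows, word, n=3):
    """Return the n words most often seen after `word`."""
    return [w for w, _ in follows[word.lower()].most_common(n)]

corpus = ["the quick brown fox", "the quick grey cat", "a quick brown dog"]
model = train_bigrams(corpus)
print(suggest(model, "quick"))  # ['brown', 'grey']
```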

Ben Medlock told me that Professor Hawking's intervention should be welcomed by anyone working in artificial intelligence: "It's our responsibility to think about all of the consequences, good and bad," he told me. "We've had the same debate about atomic power and nanotechnology. With any powerful technology there's always the dialogue about how you use it to deliver the most benefit and how it can be used to deliver the most harm."

He is, however, sceptical about just how far along the path to full artificial intelligence we are. "If you look at the history of AI, it has been characterised by over-optimism. The founding fathers, including Alan Turing, were overly optimistic about what we'd be able to achieve."

He points to some successes in single complex tasks, such as using machines to translate foreign languages. But he believes that replicating the processes of the human brain, which is formed by the environment in which it exists, is a far distant prospect: "We dramatically underestimate the complexity of the natural world and the human mind," he explains. "Take any speculation that full AI is imminent with a big pinch of salt."

While Medlock is not alone in thinking it's far too early to worry about artificial intelligence putting an end to us all, he and others still see ethical issues around the technology in its current state. Google, which bought the British AI firm DeepMind earlier this year, has gone as far as setting up an ethics committee to examine such issues.

DeepMind's founder Demis Hassabis told Newsnight earlier this year that he had only agreed to sell his firm to Google on the basis that his technology would never be used for military purposes. That, of course, will depend in the long term on Google's ethics committee, and there is no guarantee that the company's owners won't change their approach 50 years from now.

The whole question of the use of artificial intelligence in warfare has been addressed this week in a report by two Oxford academics. In a paper called Robo-Wars: The Regulation of Robotic Weapons, they call for guidelines on the use of such weapons in 21st Century warfare.

"I'm particularly concerned by situations where we remove a human being from the act of killing and war," says Dr Alex Leveringhaus, the lead author of the paper.
He says you can see artificial intelligence beginning to creep into warfare, with missiles that are not fired at a specific target: "A more sophisticated system could fly into an area and look around for targets and could engage without anyone pressing a button."

But Dr Leveringhaus, a moral philosopher rather than a computer scientist, is cautious about whether there is anything new about these dilemmas. He points out that similar ethical questions have been raised at every stage of automation, from the arrival of artillery allowing the remote killing of enemy soldiers to the removal of humans from manufacturing by mechanisation. Still, he welcomes Stephen Hawking's intervention: "We need a societal debate about AI. It's a matter of degree."

And that debate is given added urgency by the sheer pace of technological change. This week the UK government has announced three driverless car pilot projects, and Ben Medlock of Swiftkey sees an ethical issue with autonomous vehicles. "Traditionally we have a legal system that deals with a situation where cars have human agents," he explains. "When we have driverless cars we have autonomous agents... You can imagine a scenario when a driverless car has to decide whether to protect the life of someone inside the car or someone outside." :shock:

Those kinds of dilemmas are going to emerge in all sorts of areas where smart machines now get to work with little or no human intervention. Stephen Hawking's theory about artificial intelligence making us obsolete may be a distant nightmare, but nagging questions about how much freedom we should give to intelligent gadgets are with us right now.

http://www.bbc.co.uk/news/technology-30326384
 
Syntax era: Sir Clive Sinclair's ZX Spectrum revolution
By Leo Kelion, Technology desk editor
[Video: Sir Clive Sinclair discusses his role in computing's past and future]

Sir Clive Sinclair appears pretty laid back about concerns that he may have hastened the demise of the human race.
His ZX Spectrum computers were in large part responsible for creating a generation of programmers back in the 1980s, when the machines and their clones became best-sellers in the UK, Russia, and elsewhere.

At the time, he forecast that software run on silicon was destined to end "the long monopoly" of carbon-based organisms being the most intelligent life on Earth.

So it seemed worth asking him what he made of Prof Stephen Hawking's recent warning that artificial intelligence could spell the end of the human race.
"Once you start to make machines that are rivalling and surpassing humans with intelligence it's going to be very difficult for us to survive - I agree with him entirely," Sir Clive remarks.
"I don't necessarily think it's a bad thing. It's just an inevitability."

So, should the human race start taking precautions?
"I don't think there's much they can do," he responds. "But it's not imminent and I can't go round worrying about it."

It marks a somewhat more relaxed view than his 1984 prediction that it would be "decades, not centuries" before computers "capable of their own design" would rise.
"In principle, it could be stopped," he warned at the time. "There will be those who try, but it will happen nonetheless. The lid of Pandora's box is starting to open."
The reason, he says, is that we are not using advances in computing power to their full potential.
"I think progress is not that fast.
"Just look at what the machines do. They're not doing much more than what they were."

Of course, Sir Clive's computers have already given many of us a taste for "death-by-computer".
Many of today's forty-somethings lost one life after another as they struggled to complete Spectrum games such as Atic Atac, Jet Set Willy and Manic Miner.

But while such titles are remembered with fondness, it's easy to forget that back in 1982 Sir Clive was taken seriously when he claimed his technology was set to surpass IBM's PC platform to become the dominant force in home computing.
"Home computers were a fairly new thing but they were extremely expensive and what we managed to do was to bring them way down in price - about the £100 bracket," he recalls.
"We had to come up with a huge number of the innovations to get the price bracket where we wanted it... new architecture, new programs, just about everything was fresh."

Sir Clive had already enjoyed success with the ZX80 and ZX81 computers, but it was the Spectrum that really became a phenomenon.
[Image: The original adverts for the ZX Spectrum placed strong emphasis on its price]
The original models had as little as 16 kilobytes of RAM and took five minutes or more to load programs from a cassette, but they changed lives.

"Before Sir Clive's products, home computers were kits of parts with a couple of LEDs and a numeric keypad," recalls Mike Talbot, creator of dozens of other games.
"The ZX series brought computers out of the lab and made them a book-sized resident of the bedroom and study.
"The key was programmability. Computers suddenly offered an endless series of possibilities, limited only by imagination and the ability to grasp and master the technical details.

"It was a pivotal moment for so many in the tech industry and sparked the subsequent decades of tech entrepreneurs, start-ups and innovative thinking that have fundamentally changed the industry. The fact it happened in Britain has made this country one of the tech leaders of the world."

Even so, the models had their quirks. Their graphics suffered from colour clash - where one colour would bleed into another when objects came into contact.
And some critics described the rubber keys found on the early models as resembling "dead flesh".
"They were unusual," acknowledges Sir Clive. "It was all moulded in one piece of rubber.
"It was very cost-effective but a little bit strange. But the customers didn't seem to mind."

A games computer crash caused by shops overstocking the products, lacklustre demand for the business-focused Sinclair QL computer and Sinclair Research's misadventure with the ill-fated C5 electric vehicle all contributed to the downfall of the Spectrum.

In 1986 the cash-strapped firm sold the range to Alan Sugar's Amstrad, which subsequently added built-in tape players and disk drives to the machines but never did much to develop the underlying computer technology before abandoning the platform altogether in 1992.

Even if different choices had been taken, Sir Clive now acknowledges the Spectrum never had a real chance of beating the PCs of the time.
"Their computer designs were abominable by our standards," he says.
"But because they were IBM they became the standard.
"IBM had such a powerful position, I don't think we could have challenged it."

Sir Clive went on to develop other computers, attempting to popularise the idea of laptops, first with the Z88 and then the never-released Pandora, before giving up on the field altogether.

But he has returned to the public eye thanks to a successful crowdfunding campaign to create the Spectrum Vega, a budget computer that promises to come pre-installed with hundreds of the original 48K and 128K Spectrum games.
"It's a means of getting the games back into the public domain," Sir Clive explains.

The machine lacks a physical keyboard, so it's not suited for programming.
That might seem a shame - arguably Sir Clive's legacy is that he jumpstarted coding as a hobby - but, as he notes, the Raspberry Pi already fills what had become a gap in the market.
"It's very exciting. I think it's dramatic and terribly clever," he says.
"The price point is just fantastic, and so suddenly people can again get their hands on computing power and play with it, manipulate it and really understand it."

Despite his apparently untarnished enthusiasm, it is somewhat startling to learn that Sir Clive doesn't use email or the web, or even own a computer.
"I don't like distraction," he says.
"My wife is very much connected to the web so if I need to do anything through that she very kindly orders it for me."

He prefers, he explains, to dedicate his time to electric vehicles, saying he is working on a "very radical" bicycle set to shake up the market.
He has, of course, made similar claims before with the C5 and later the motor-enhanced Zike.
Might it not be better to return to computing and apply his considerable intellect to the sector in which he has had most success?

"Well, maybe I'll get back to it, yes," he says, not totally convincingly. "Perhaps."

http://www.bbc.co.uk/news/technology-30333671
 
Stephen Hawking thinks computers may surpass human intelligence and take over the world. We won't ever be silicon slaves, insists an AI expert

It is not often that you are obliged to proclaim a much-loved genius wrong, but in his alarming prediction on artificial intelligence and the future of humankind, I believe Stephen Hawking has erred. To be precise, and in keeping with physics – in an echo of Schrödinger's cat – he is simultaneously wrong and right.

Asked how far engineers had come towards creating artificial intelligence, Hawking replied: "Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

In my view, he is wrong because there are strong grounds for believing that computers will never replicate all human cognitive faculties. He is right because even such emasculated machines may still pose a threat to humankind's future – as autonomous weapons, for instance.

Such predictions are not new; my former boss at the University of Reading, professor of cybernetics Kevin Warwick, raised this issue in his 1997 book March of the Machines. He observed that robots with the brain power of an insect had already been created. Soon, he predicted, there would be robots with the brain power of a cat, quickly followed by machines as intelligent as humans, which would usurp and subjugate us. ...
http://www.newscientist.com/article...y-not-artificial-intelligence.html#.VJgoml4gA
 
I suspect the good professor doesn't like the idea of anyone who might put him out of a job.
 
I suspect the good professor doesn't like the idea of anyone who might put him out of a job.
I doubt he needs a job. He's almost 73 (born on Jan 8th - Happy Birthday, Prof!)

"As required by Cambridge University regulations, Hawking retired as Lucasian Professor of Mathematics in 2009.
...
Hawking has continued to work as director of research at the Cambridge University Department of Applied Mathematics and Theoretical Physics, and indicated in 2012 that he had no plans to retire."
http://en.wikipedia.org/wiki/Stephen_Hawking#2000.E2.80.93present

Basically, his work is both his love and his life.

But at his age, his fear of AI is not for himself, but for most of the rest of us.
 
Mindclones from Social Media: New Research from Stanford Suggests Feasibility

In an episode of the popular dark sci-fi show 'Black Mirror', realistic digital personalities of the dead are recreated from data alone. London-based firm "Lean Mean Fighting Machine" has developed an artificial intelligence system that can analyse a person's Twitter feed, and then impersonate them after death. But just how realistic is this idea?
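(At its very crudest, this kind of impersonation can be sketched as a word-level Markov chain over someone's past posts - a toy illustration only, nothing like the firm's actual system, and the sample tweets are invented.)

```python
# Toy Markov-chain "impersonator": learn which words followed which in a
# person's posts, then random-walk the chain to generate new text in a
# vaguely familiar voice. Illustrative only.
import random
from collections import defaultdict

def build_chain(tweets):
    """Map each word to the list of words that have followed it."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for prev, nxt in zip(words, words[1:]):
            chain[prev].append(nxt)
    return chain

def generate(chain, start, max_words=12):
    """Random-walk the chain from a starting word."""
    words = [start]
    while len(words) < max_words:
        followers = chain.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

tweets = ["off to the pub for a quick pint", "the pub quiz was a disaster again"]
chain = build_chain(tweets)
print(generate(chain, "the"))  # e.g. "the pub for a quick pint"
```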

One of the more controversial ideas of transhumanism is the notion of mind uploading, where the essence of a person, their mind, would be transferred to a computer. A related but less ambitious project is constructing a simulated "second self" or mindclone to continue your personality, work and relationships after death. Or perhaps just to help you be more efficient while still alive.

Companies such as eterni.me, Gordon Bell’s MyLifeBits, and Terasem’s Lifenaut are pursuing this goal.

Computer industry pioneer and extreme 'life logger' Gordon Bell is the creator of MyLifeBits, a research project which is developing a 'chatbot' using IBM's Cognea software with the aim of recreating dead people in part.

Eterni.me is a proposed for-profit service that will offer immortality by creating "a virtual YOU, an avatar that emulates your personality and can interact with, and offer information and advice to, your family and friends, even after you pass away." ...

http://hplusmagazine.com/2015/01/15...a-new-research-stanford-suggests-feasibility/
 
An intelligent film about AI which weighs the Turing Test in the balance and finds it wanting. It has many of the old tropes: robot as pleasure doll, the Frankenstein complex, even a HAL-esque AI. But there's enough fresh material and insightful direction to make this a superior SF film.

IT'S a rare thing to see a movie about science that takes no prisoners intellectually. Alex Garland's Ex Machina is just that: a stylish, spare and cerebral psycho-techno-thriller, which gives a much-needed shot in the arm for smart science fiction.

Reclusive billionaire genius Nathan, played by Oscar Isaac, creates Ava, an intelligent and very attractive robot played by Alicia Vikander. He then struggles with the philosophical and ethical dilemmas his creation poses, while all hell breaks loose.

Many twists and turns add nuance to the plot, which centres on the evolving relationships between the balletic Ava and Caleb (Domhnall Gleeson), a hotshot programmer invited by Nathan to be the "human component in a Turing test", and between Caleb and Nathan, as Ava's extraordinary capabilities become increasingly apparent.

Everything about this movie is good. Compelling acting (with only three speaking parts), exquisite photography and set design, immaculate special effects, a subtle score and, above all, a hugely imaginative screenplay combine under Garland's precise direction to deliver a cinematic experience that grabs you and never lets go. ...

http://www.newscientist.com/article...akes-no-prisoners.html?full=true#.VMT5ev6sWug
 
Tonight I thought I'd look on Marine Traffic for pics of Hoegh Osaka (the car transporter that got into trouble in the Solent). I expected to find a few pics of her entering Falmouth for drydocking. Surprisingly there were none (although there were pics of her on the Bramble Bank etc), so I thought I'd upload some of my pics.

Starting on the ship's photo page, I clicked Upload Photo. The upload page automatically inserts the ship details, and asks for the photo details, including time and date, if not provided by EXIF info. I expected mine would be provided, and the box didn't want to accept any manually inserted info. But the thing that really puzzled me was, on the little chart that's provided for you to mark the position of the vessel in the photo, Hoegh Osaka was already shown, in the correct place!
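(For the curious, here's a speculative sketch of how a site could pull those details out of a photo before you press submit - just reading the file's EXIF data. This is not Marine Traffic's actual code; it assumes the Pillow library, and the filename is made up.)

```python
# Read the EXIF timestamp and raw GPS block (if any) from an image file.
from PIL import Image

def photo_metadata(path):
    """Return (timestamp, gps_tags) from a photo's EXIF data."""
    exif = Image.open(path).getexif()
    taken = exif.get(306)        # tag 0x0132 'DateTime', e.g. '2015:01:07 14:32:10'
    gps = exif.get_ifd(0x8825)   # GPSInfo sub-IFD; empty if the camera had no GPS
    return taken, gps

when, where = photo_metadata("hoegh_osaka.jpg")  # hypothetical filename
print(when, where)
```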

I'm baffled as to how the website knew this position, before I'd actually submitted any information. Of course MT knows its current AIS position, but I might have been about to upload an older photo (prompted by the recent media coverage)...
And it shouldn't have known I was uploading a fairly recent photo until I clicked submit!

Or maybe it recognised my description of the position (there aren't that many places where visiting ships moor up...) But again, I hadn't yet submitted that info!

I have to conclude that artificial intelligence is better than mine (after a few cans of beer), but is also possibly telepathic! :eek:
 
I have to conclude that artificial intelligence is better than mine (after a few cans of beer), but is also possibly telepathic! :eek:

Perhaps it is a developing AI.

Here is yet another article about AI: will ETs be AIs?

Somewhere in the long list of topics that are relevant to astrobiology is the question of ‘intelligence’. Is human-like, technological intelligence likely to be common across the universe? Are we merely an evolutionary blip, our intelligence consigning us to a dead-end in the fossil record? Or is intelligence something that the entropy-driven, complexity-producing, universe is inevitably going to converge on?

All good questions. An equally good question is whether we can replicate our own intelligence, or something similar, and whether or not that’s actually a good idea.

In recent months, once again, this topic has made it to the mass media. First there was Stephen Hawking, then Elon Musk, and most recently Bill Gates. All of these smart people have suggested that artificial intelligence (AI) is something to be watched carefully, lest it develops to a point of existential threat.

Except it’s a little hard to find any details of what exactly that existential threat is perceived to be. Hawking has suggested that it might be the capacity of a strong AI to ‘evolve’ much, much faster than biological systems – ultimately gobbling up resources without a care for the likes of us. I think this is a fair conjecture. AI’s threat is not that it will be a sadistic megalomaniac (unless we deliberately, or carelessly make it that way) but that it will follow its own evolutionary imperative. ...

http://blogs.scientificamerican.com/life-unbounded/2015/02/13/is-ai-dangerous-that-depends/
 
You've been watching too much SF.

Humans are a threat, not AI

Yeah, but I can't see one person taking over the Internet. An AI COULD!

We need Turing Police to track down Rogue AIs like in Neuromancer.
 
From the SciAm article:

If this is how a strong AI occurs, the most immediate danger will simply be that a vast swathe of humanity now relies on the ecosystem of the internet. It’s not just how we communicate or find information, it’s how our food supplies are organized, how our pharmacists track our medicines, how our planes, trains, trucks, and cargo ships are scheduled, how our financial systems work. A strong AI emerging here could wreak havoc in the way that a small child can rearrange your sock drawer or chew on the cat’s tail.
 
... No idea where this belongs, so I'm awaiting the typical rynner spanking LOL ... either way, CGI waves are amazing, and someone's busted them back down to basics .. something us thickies should appreciate: basic fluid code.

 
Andrew Ng builds artificial intelligence systems for a living. He taught AI at Stanford, built AI at Google, and then moved to the Chinese search engine giant, Baidu, to continue his work at the forefront of applying artificial intelligence to real-world problems.

So when he hears people like Elon Musk or Stephen Hawking—people who are not intimately familiar with today’s technologies—talking about the wild potential for artificial intelligence to, say, wipe out the human race, you can practically hear him facepalming.

“For those of us shipping AI technology, working to build these technologies now,” he told me, wearily, yesterday, “I don’t see any realistic path from the stuff we work on today—which is amazing and creating tons of value—but I don’t see any path for the software we write to turn evil.” ...

http://fusion.net/story/54583/the-case-against-killer-robots-from-a-guy-actually-building-ai/
 
Researchers have proposed a Visual Turing Test in which computers would answer increasingly complex questions about a scene.

Computers are getting better each year at AI-style tasks, especially those involving vision—identifying a face, say, or telling if a picture contains a certain object. In fact, their progress has been so significant that some researchers now believe the standardized tests used to evaluate these programs have become too easy to pass, and therefore need to be made more demanding.

At issue are the "public data sets" commonly used by vision researchers to benchmark their progress, such as LabelMe at MIT or Labeled Faces in the Wild at the University of Massachusetts, Amherst. The former, for example, contains photographs that have been labeled via crowdsourcing, so that a photo of a street scene might have a "car" and a "tree" and a "pedestrian" highlighted and tagged. Success rates have been climbing for computer vision programs that can find these objects, with most of the credit for that improvement going to machine learning techniques such as convolutional networks, often called Deep Learning.
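(For the unfamiliar, a convolutional network is essentially a stack of learned image filters feeding a classifier. Here's a minimal illustrative sketch in PyTorch; the layer sizes are arbitrary and this is nothing like the benchmark-winning architectures.)

```python
# Minimal convolutional network: two conv/pool stages, then a linear
# classifier over the flattened feature maps. Illustrative only.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # compose filters into object parts
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One forward pass over a dummy batch of four 32x32 RGB images:
logits = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```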

But a group of vision researchers say that simply calling out objects in a photograph, in addition to having become too easy, is simply not very useful; that what computers really need to be able to do is to “understand” what is “happening” in the picture. And so with support from DARPA, Stuart Geman, a professor of applied mathematics at Brown University, and three others have developed a framework for a standardized test that could evaluate the accuracy of a new generation of more ambitious computer vision programs. ...

http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/ai-researchers-propose-a-machine-vision-turing-test
 
"And I think that some day, and this might be hundreds of years from now, I don’t think that the idea of creativity is something that will always be beyond the realm of computers.”

The Painting Fool already produces paintings that, if you saw them in a museum, you wouldn't guess had been made by a computer program.

 
I don't think the Turner Prize allows virtual potato prints. Er, yet.
 
Maybe getting the computer to do the face of Jesus on some cloth could be the new Turin test? :p
 
Maybe getting the computer to do the face of Jesus on some cloth could be the new Turin test? :p
When a robot can match you lot in punning, I think we could say it has passed the Turing test! ;)
 
The Painting Fool already produces paintings that, if you saw them in a museum, you wouldn't guess had been made by a computer program.


I'll be impressed when it can do the art critique rather than the art... ;)
 