Europe's robots to become 'electronic persons' under draft plan
MUNICH, Germany | By Georgina Prodhan

Europe's growing army of robot workers could be classed as "electronic persons", and their owners made liable to pay social security for them, if the European Union adopts a draft plan to address the realities of a new industrial revolution.

Robots are being deployed in ever-greater numbers in factories and also taking on tasks such as personal care or surgery, raising fears over unemployment, wealth inequality and alienation.

Their growing intelligence, pervasiveness and autonomy requires rethinking everything from taxation to legal liability, a draft European Parliament motion, dated May 31, suggests.

Some robots are even taking on a human form. Visitors to the world's biggest travel show in March were greeted by a lifelike robot developed by Japan's Toshiba (6502.T) and were helped by another made by France's Aldebaran Robotics.

However, Germany's VDMA, which represents companies such as automation giant Siemens (SIEGn.DE) and robot maker Kuka (KU2G.DE), says the proposals are too complicated and too early.

German robotics and automation turnover rose 7 percent to 12.2 billion euros ($13.8 billion) last year and the country is keen to keep its edge in the latest industrial technology. Kuka is the target of a takeover bid by China's Midea (000333.SZ).

The draft motion called on the European Commission to consider "that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations".

It also suggested the creation of a register for smart autonomous robots, which would link each one to funds established to cover its legal liabilities.

Patrick Schwarzkopf, managing director of the VDMA's robotic and automation department, said: "That we would create a legal framework with electronic persons - that's something that could happen in 50 years but not in 10 years."

"We think it would be very bureaucratic and would stunt the development of robotics," he told reporters at the Automatica robotics trade fair in Munich, while acknowledging that a legal framework for self-driving cars would be needed soon.

http://www.reuters.com/article/us-europe-robotics-lawmaking-idUSKCN0Z72AY?utm_campaign=trueAnthem: Trending Content&utm_content=576996e504d3010bc51b5ca3&utm_medium=trueAnthem&utm_source=twitter

More text at link...
 

OpenAI, the Elon Musk-backed startup that wants to give away its artificial intelligence research, also wants to make sure AI isn’t used for nefarious purposes. That’s why it wants to create a new kind of police force: call them the AI cops.


As its team of top researchers helps to hasten the spread of AI technology, this rather unusual startup is worried that such tech could spread too far—that someone else will make a breakthrough in secret and use it “for potentially malicious ends.” So, it’s calling for other top researchers to join its ever-expanding operation and develop new technologies that can somehow detect these breakthroughs as they’re deployed in the real world.
http://www.wired.com/video/2016/03/...-go-grandmaster-beating-ai-it-s-a-good-thing/
The company’s founders believe that AI can make the world a much better place, but they also worry it could cause serious damage. “As AI systems become more and more capable and powerful, we’re going to see them deployed in a variety of ways, some more nefarious than others,” says Greg Brockman, the former chief technology officer of big-name payments startup Stripe who now oversees OpenAI. “The more the world is aware of what’s going on—the more there is scrutiny—the better.” ...

http://www.wired.com/2016/08/openai-calling-techie-cops-battle-code-gone-rogue/?mbid=social_twitter

Call them the Turing Police.
 
The first international beauty contest judged by “machines” was supposed to use objective factors such as facial symmetry and wrinkles to identify the most attractive contestants. After Beauty.AI launched this year, roughly 6,000 people from more than 100 countries submitted photos in the hopes that artificial intelligence, supported by complex algorithms, would determine that their faces most closely resembled “human beauty”.

But when the results came in, the creators were dismayed to see that there was a glaring factor linking the winners: the robots did not like people with dark skin.

Out of 44 winners, nearly all were white, a handful were Asian, and only one had dark skin. That’s despite the fact that, although the majority of contestants were white, many people of color submitted photos, including large groups from India and Africa.

The ensuing controversy has sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results. ...

https://www.theguardian.com/technol...est-doesnt-like-black-people?CMP=share_btn_tw
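For the curious: "facial symmetry" is the easiest of the article's criteria to picture in code. Here's a crude sketch, purely my own guess at the simplest possible approach (real judging software would align facial landmarks first, and Beauty.AI's actual method isn't public; the filename is invented):

```python
# Toy "facial symmetry" score: compare a face photo with its mirror image.
# Illustrative only; real systems align facial landmarks, not raw pixels.
from PIL import Image, ImageOps
import numpy as np

def symmetry_score(path):
    face = Image.open(path).convert("L").resize((128, 128))  # grayscale, fixed size
    pixels = np.asarray(face, dtype=float)
    mirrored = np.asarray(ImageOps.mirror(face), dtype=float)
    diff = np.abs(pixels - mirrored).mean()  # 0 = perfectly symmetric
    return 1.0 - diff / 255.0                # closer to 1.0 = more symmetric

print(symmetry_score("contestant.jpg"))  # hypothetical filename
```

Note that a raw-pixel comparison like this quietly depends on how well lit the photo is, which is relevant to the posts that follow.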
 
That is a fine example of how a programmer can unintentionally add a bias into their software.
 
That is a fine example of how a programmer can unintentionally add a bias into their software.
It may not be that. It might be just the fact that darker skinned faces are harder to see (especially in poor light), making it more difficult to assess symmetry, wrinkles, etc. Possibly this could be compensated for by adjusting the brightness and contrast of the images. Back to the drawing board!
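The compensation suggested here is standard image preprocessing; with Pillow it's a couple of calls. A minimal sketch (filename invented):

```python
# Normalise exposure before scoring faces, as suggested above.
# Histogram equalisation spreads pixel intensities across the full range,
# so underexposed or darker faces keep more usable detail.
from PIL import Image, ImageOps

img = Image.open("contestant.jpg")                # hypothetical filename
equalised = ImageOps.equalize(img)                # histogram equalisation
stretched = ImageOps.autocontrast(img, cutoff=2)  # clip 2% extremes, stretch the rest
equalised.save("contestant_equalised.jpg")
```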
 
the ways in which algorithms can perpetuate biases

It reminds me of the dubious* old tale of the software which was designed to prevent kiddies accessing porn. The programmers thought they had done a great job: their filter refused to load images with more than a certain percentage of flesh-tones.

Result? A filter which blocked the Pink Panther but allowed the kiddies all the black and Asian porn they could find!

*I was told this had happened a few years earlier on a network installed by the firm I worked for. I have since heard it from other sources but no one can name the software. I suspect it is a FOAF! :confused:
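FOAF or not, the filter described above is easy to imagine, and so is the failure mode. A toy version (the RGB thresholds are pure invention, which is rather the point):

```python
# Toy "flesh-tone" filter of the kind described in the tale above.
# The hard-coded thresholds only match pale/pink tones, which is exactly
# how such a filter could block the Pink Panther yet pass darker-skinned porn.
import numpy as np
from PIL import Image

def flesh_fraction(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=int)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Naive "skin" rule biased towards light skin (illustrative only).
    skin = (r > 180) & (g > 120) & (b > 100) & (r > g) & (g > b)
    return skin.mean()

def blocked(path, threshold=0.3):
    return flesh_fraction(path) > threshold

print(blocked("pink_panther.png"))  # hypothetical file; pink reads as "flesh"
```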
 
It may not be that. It might be just the fact that darker skinned faces are harder to see (especially in poor light), making it more difficult to assess symmetry, wrinkles, etc. Possibly this could be compensated for by adjusting the brightness and contrast of the images. Back to the drawing board!

This reminds me of the music videos by black artists in the 1990s, which would often depict them in high-contrast black and white, presumably because it presented their features better than colour photography. Digital seems to have solved that; you don't see them so much anymore, or maybe b&w is way out of fashion.

Of course, they do say "black don't crack", so with that in mind the darker-skinned entrants should have been rated higher.
 
MINORITY CRIME REPORTS
Cops using artificial intelligence to stop crimes BEFORE they happen, researchers warn
Academics say the technology is letting policemen detect crime that hasn't taken place yet

by JASPER HAMILL
12th September 2016, 12:36 pm

Cops are already using computers to stop crimes before they happen, academics have warned.

In a major piece of research called “Artificial Intelligence and life in 2030”, researchers from Stanford University said “predictive policing” techniques would become commonplace in the next 15 years.


Samantha Morton starred in Minority Report, playing a woman who had pre-cognitive abilities and could predict crimes before they happened

The academics discussed the crime fighting implications of “machine learning”, which allows computers to learn for themselves and then solve problems just like a human.

This technique will have a major effect on transport, healthcare and education, potentially bringing massive benefits as well as putting millions of jobs at risk.

But in the hands of cops, AI has the potential to have a massive impact on society by allowing law enforcement to have an “overbearing or pervasive” presence.

“Cities already have begun to deploy AI technologies for public safety and security,” a team of academics wrote.

“By 2030, the typical North American city will rely heavily upon them.

“These include cameras for surveillance that can detect anomalies pointing to a possible crime, drones, and predictive policing applications.”

Machine learning and AI are already used to combat white-collar crime such as fraud. They are also used to automatically scan social media to highlight people at risk of being radicalised by ISIS.

Yet the range of crimes which could be stopped by AI is likely to grow as the technology becomes more advanced.

More text at link...


https://www.thesun.co.uk/news/17684...op-crimes-before-they-happen-researchers-warn
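Worth noting that "predictive policing" in practice is statistics rather than precognition: count past incidents by area and time, then send the cars where the counts are trending. A bare-bones sketch with invented data:

```python
# Bare-bones "predictive policing": rank districts by recent incident counts.
# Districts and figures are invented; real systems add geography, time of day
# and years of history, but the principle is trend-spotting, not psychics.
from collections import Counter

incidents = [  # (district, week number) incident log, hypothetical
    ("Docklands", 36), ("Docklands", 36), ("Docklands", 37),
    ("Old Town", 36), ("Old Town", 37), ("Old Town", 37), ("Old Town", 37),
    ("Riverside", 37),
]

recent = Counter(district for district, week in incidents if week >= 37)
for district, count in recent.most_common():
    print(f"{district}: {count} recent incidents, patrol priority")
```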
 
Cops using artificial intelligence to stop crimes BEFORE they happen, researchers warn
Academics say the technology is letting policemen detect crime that hasn't taken place yet

by JASPER HAMILL
12th September 2016, 12:36 pm

https://www.thesun.co.uk/news/17684...op-crimes-before-they-happen-researchers-warn

Science fiction got there first. I remember reading an SF short story in the 60s (sorry, author and title forgotten) set in an American city cop-shop which already had this kind of technology. They knew which areas to send squad cars to before anything happened.

(I'll try to search out the details...)
 
The new sci-fi movie Morgan had a trailer that was created with the assistance of IBM's Watson. Watson scanned the movie and identified six minutes or so of what it felt were key/tense bits. The human trailer director then stepped in and made a trailer from what Watson identified. The "behind the scenes" bit for the Watson-assisted trailer had people saying the Watson+director trailer took a day, while the normal trailer process takes about a month. The Watson+director trailer looked as good as the all-human trailer.
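The selection step, at least, is easy to picture: score every scene for tension, then greedily take the highest-scoring scenes until the six minutes are spent. IBM hasn't published Watson's actual method, so this sketch is only a guess at the shape of it, with invented scores:

```python
# Greedy scene selection for a trailer: take the tensest scenes until the
# time budget (about six minutes) is used up. Scores and durations invented;
# how Watson actually rated "tense" scenes is not public.
scenes = [  # (scene_id, tension_score, duration in seconds), hypothetical
    (1, 0.91, 45), (2, 0.34, 60), (3, 0.88, 30),
    (4, 0.72, 50), (5, 0.95, 40), (6, 0.41, 55),
]

budget = 6 * 60  # six minutes
picked, used = [], 0
for scene_id, score, duration in sorted(scenes, key=lambda s: s[1], reverse=True):
    if used + duration <= budget:
        picked.append(scene_id)
        used += duration

print(sorted(picked), f"({used} s of footage for the human editor to cut)")
```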
 
Except the Watson trailer completely failed to mention the film's theme was AI gone mad.
 
Science fiction got there first. I remember reading an SF short story in the 60s (sorry, author and title forgotten) set in an American city cop-shop which already had this kind of technology. They knew which areas to send squad cars to before anything happened.

(I'll try to search out the details...)
Sounds like Minority Report (the film is mentioned in the article)
 
Sounds like Minority Report (the film is mentioned in the article)
In the film Minority Report, a group of psychics called “precogs” were able to predict crimes by reading people’s intentions and stopping them.

But real life AI will work differently by identifying trends in pre-existing crimes or learning the signs which show someone is about to commit an offence.

No, the short story only involved technology, not psychics.

But maybe it was the original PKD story I read.
 

Neuroscientist on How to Survive a Future with Superhuman Artificial Intelligence

Excited? Wrong answer, according to neuroscientist and bestselling author Sam Harris.

By Lisa Calhoun

Superhuman artificial intelligence is coming, says Sam Harris, a Stanford grad with a Ph.D. in neuroscience from UCLA. He has five New York Times bestsellers under his belt. "It's very difficult to see how they won't destroy us or inspire us to destroy ourselves," he says.

How should we be preparing? Sam says our current emotional response--that it's cool--is woefully lacking.

"If you're anything like me, you'll find that it's fun to think about these things. And that response is part of the problem. OK? That response should worry you," he says. These comments are drawn from his TED Talk on artificial intelligence. "We seem unable to marshal an appropriate emotional response to the dangers that lie ahead."

The inevitability of superhuman artificial intelligence (AI)
If we're not interrupted by world wars, planetary collisions with asteroids or other unpreventable disasters, Sam shared that it's a given we'll create superhuman artificial intelligence. His logical progression goes like this:

1) We like smart things.

2) It's helpful to have smarter things and they make our life easier. (Until they don't.)

3) Thus, as long as we have the capacity to make stuff smarter, we will.

At some point, our smarter software intelligences will be of the scale that they can create smarter software, and then--game over. The urge to create yet smarter digital intelligence becomes self-replicating and human brains become biological backwash.

Superhuman artificial intelligence in the near future
In projecting the logical future of superhuman AI, Sam points out there are two major paths:

1) Separate evolution. He likens this path to the ant / human relationship. You don't hate ants. You may even step over them most of the time. They can hurt you--but barely. If they get in your way, like move into your house, you annihilate them. AI could treat us like that.

2) Co-evolution. With brain implants (neuroprosthetics) from ambitious companies like Kernel, it's possible we could plug the superhuman smarts right into our own wetware. "Now, this may in fact be the safest and only prudent path forward, but usually one's safety concerns about a technology have to be pretty much worked out before you stick it inside your head," he points out. (Personally, this is where I think we need to head, pardon the pun.)


More text on ...
http://www.inc.com/lisa-calhoun/neu...rvive-superhuman-artificial-intelligence.html
 
AI creates a Beatles song:

Sounds more like a Russian Beach Boys ditty. There's albums of this stuff to follow - released on K-Tel, I hope.
 
To be fair to the AI, its songs have been interpreted by a French musician, otherwise it might sound more like Kraftwerk. But no, it's no substitute for the real thing.
 
I just finished reading Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots by John Markoff, which recounts the history of AI research and its possible future directions.

My main take-away from the book is how disorganized and haphazard the work in this field seems to be. You have a lot of undeniably bright people with buckets of money being thrown at them, working on any and every idea that pops into their heads, seemingly with no thought as to where their work may lead or where it fits into the big picture.

They set out to build Skynet and end up with an app that tells you when you're out of clean socks. Well, that's OK, sell the app for millions and move on to the next thing.

As for the warnings about the rise of super-human AI, people really have no idea what they're talking about. We're no closer to understanding what consciousness is than we were fifty years ago. No one has a clue how to build a self-aware AI; they just take it for granted that one day a machine will wake up and announce that humans now have company. No matter how good your system becomes at pattern recognition or understanding human speech, it's not really "intelligent." When you leave it running overnight with no inputs, it doesn't dream about how nice it would be to be a real boy.

It reminds me of an Our Gang episode where the kids go to an exhibition and see a primitive humanoid robot. They're excited about how neat it would be to have a robot to do their chores for them. When they tinker a "robot" together out of miscellaneous junk, they're baffled that it doesn't work. Hilarity ensues.

There's a mindset among some AI researchers that almost approaches religious mysticism, and we know how well that's worked out for mankind in the past. If there's a threat from AI, it's not that the machines will become self-aware, but that humans will thoughtlessly unleash a system that's beyond their control.

 

As for the warnings about the rise of super-human AI, people really have no idea what they're talking about. We're no closer to understanding what consciousness is than we were fifty years ago.
I think you're wrong there. There has been a lot of advance in consciousness studies since then. Some are covered in this thread: http://forum.forteantimes.com/index.php?threads/what-is-consciousness.9240/

Artificial Intelligence and consciousness are two different things, though probably not mutually exclusive. And some people in these fields really do know what they're talking about. Don't ignore them, or you could find yourself left behind in the slipstream of advancing technological and theoretical progress.
 
I think you're wrong there. There has been a lot of advance in consciousness studies since then. Some are covered in this thread: http://forum.forteantimes.com/index.php?threads/what-is-consciousness.9240/

Artificial Intelligence and consciousness are two different things, though probably not mutually exclusive. And some people in these fields really do know what they're talking about. Don't ignore them, or you could find yourself left behind in the slipstream of advancing technological and theoretical progress.

Thanks for your thoughtful reply, rynner. You're right that I shouldn't be dismissive of people working in the field of consciousness studies, and I really don't mean to be. But I do believe that the nature and mechanism of consciousness remain complete mysteries, and as for creating an artificial conscious entity, no one has the first idea of how to proceed. At best, they assume that eventually it will just happen.

Yes, Artificial Intelligence and consciousness are two different things, and I think most present-day AI researchers are being naive (maybe pretentious) in thinking their approaches will lead to an artificial consciousness.
 
Yes, Artificial Intelligence and consciousness are two different things, and I think most present-day AI researchers are being naive (maybe pretentious) in thinking their approaches will lead to an artificial consciousness.
I've read pretty widely in this field, and I don't recall any researchers saying anything like you suggest.

Can you quote any such naive or maybe pretentious remarks? Most researchers in technical fields like AI are well aware of the limitations of the technology, and don't harbour hopes that 'consciousness' will magically arise from their work, which is usually more narrowly focused on what AI can actually achieve.
 
They are expecting to create conscious a.i. with the new deep learning neural networks.
 
They are expecting to create conscious a.i. with the new deep learning neural networks.
This is typical internet blather! Who are 'they', can they not be named and quoted?! :rolleyes:
 
This is typical internet blather! Who are 'they', can they not be named and quoted?! :rolleyes:
Must be the computer scientists.
-------------

Why we don’t want AI’s like IBM Watson learning from humans

What AlphaGo, IBM Watson, Ajay and Bobby and Tay teach us about how Artificial Intelligence learns
Deep learning is a term we’re increasingly using to describe how we teach Artificial Intelligence (AI) to absorb new information and apply it in its interactions with the real world. In an interview with the Guardian newspaper in May 2015, Professor Geoff Hinton, an expert in artificial neural networks, said Google is “on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.” Google is currently working to encode thoughts as vectors described by a sequence of numbers. These “thought vectors” could endow AI systems with a human-like “common sense” within a decade.

Some aspects of communication are likely to prove more challenging, Hinton predicted. “Irony is going to be hard to get,” he said. “You have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits…”
Professor Geoff Hinton, from an interview with the Guardian newspaper, 21st May 2015

http://breakingbanks.com/dont-want-ais-like-ibm-watson-learning-humans/
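Stripped of the mystique, a "thought vector" is just a sentence squashed into a list of numbers. The crudest recipe averages word vectors; the three-dimensional vectors below are invented for illustration (real embeddings are learned from large corpora and run to hundreds of dimensions):

```python
# Toy "thought vectors": a sentence becomes the average of its word
# vectors, and sentences are compared by cosine similarity.
# The 3-d word vectors are invented for illustration.
import numpy as np

word_vectors = {  # hypothetical embeddings
    "robots": np.array([0.9, 0.1, 0.0]),
    "machines": np.array([0.8, 0.2, 0.1]),
    "think": np.array([0.1, 0.9, 0.2]),
    "reason": np.array([0.2, 0.8, 0.3]),
}

def thought_vector(sentence):
    vecs = [word_vectors[w] for w in sentence.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Similar "thoughts" land close together in the vector space.
print(cosine(thought_vector("robots think"), thought_vector("machines reason")))
```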
------------------
 
The Administration’s Report on the Future of Artificial Intelligence
OCTOBER 12, 2016 AT 6:02 AM ET BY ED FELTEN AND TERAH LYONS

Summary:
A new report from the Administration focuses on the opportunities, considerations, and challenges of Artificial Intelligence (AI).
Under President Obama’s leadership, America continues to be the world’s most innovative country, with the greatest potential to develop the industries of the future and harness science and technology to help address important challenges. Over the past 8 years, President Obama has relentlessly focused on building U.S. capacity in science and technology. This Thursday, President Obama will host the White House Frontiers Conference in Pittsburgh to imagine the Nation and the world in 50 years and beyond, and to explore America’s potential to advance towards the frontiers that will make the world healthier, more prosperous, more equitable, and more secure.

Today, to ready the United States for a future in which Artificial Intelligence (AI) plays a growing role, the White House is releasing a report on future directions and considerations for AI called Preparing for the Future of Artificial Intelligence. This report surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy. The report also makes recommendations for specific further actions. A companion National Artificial Intelligence Research and Development Strategic Plan is also being released, laying out a strategic plan for Federally-funded research and development in AI.

Preparing for the Future of Artificial Intelligence details several policy opportunities raised by AI, including how the technology can be used to advance social good and improve government operations; how to adapt regulations that affect AI technologies, such as automated vehicles, in a way that encourages innovation while protecting the public; how to ensure that AI applications are fair, safe, and governable; and how to develop a skilled and diverse AI workforce.

The publication of this report follows a series of public-outreach activities spearheaded by the White House Office of Science and Technology Policy (OSTP) in 2016, which included five co-hosted public workshops held across the country, as well as a Request for Information (RFI) in June 2016 that received 161 responses. These activities helped inform the focus areas and recommendations included in the report. ...

https://www.whitehouse.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence
 
Artificial intelligence of another form, electronic brain stimulation:

https://www.theguardian.com/science...cal-brain-stimulation-to-enhance-staff-skills
more at link above
--------------------

US military scientists have used electrical brain stimulators to enhance mental skills of staff, in research that aims to boost the performance of air crews, drone operators and others in the armed forces’ most demanding roles.

The successful tests of the devices pave the way for servicemen and women to be wired up at critical times of duty, so that electrical pulses can be beamed into their brains to improve their effectiveness in high pressure situations.

The brain stimulation kits use five electrodes to send weak electric currents through the skull and into specific parts of the cortex. Previous studies have found evidence that by helping neurons to fire, these minor brain zaps can boost cognitive ability.
 