Go is a fascinating game. When I was young I found a book on it next to the chess books in the library and fashioned a Go board and a set of pieces from a cornflake box.

Since I had no real friends at the time I had to play against myself. Sad but true.

I also made helicopters from pen barrels, rubber bands and drinking straws. I dare say computers are better at doing that now too.
 
Time for the fat Lady to sing:
AlphaGo seals 4-1 victory over Go grandmaster Lee Sedol
DeepMind’s artificial intelligence astonishes fans by defeating its human opponent, offering evidence that computer software has mastered a major challenge
Steven Borowiec
Tuesday 15 March 2016 10.12 GMT

Google DeepMind’s AlphaGo program triumphed in its final game against South Korean Go grandmaster Lee Sedol to win the series 4-1, providing further evidence of a landmark achievement for artificial intelligence.

Lee started Tuesday’s game strongly, taking advantage of an early mistake by AlphaGo. But in the end, Lee was unable to hold off a comeback by his opponent, which won a narrow victory.
After the results were in, Google DeepMind co-founder Demis Hassabis called today’s contest “one of the most incredible games ever”, saying AlphaGo mounted a “mind-blowing” comeback after an early mistake.

etc...

http://www.theguardian.com/technolo...-seals-4-1-victory-over-grandmaster-lee-sedol
 
We should be more afraid of computers than we are – video
[Video]

As sophisticated algorithms can complete tasks we once thought impossible, computers seem to be becoming a real threat to humanity. Whether they decide to pulp us into human meat paste or simply make our work completely unnecessary, argues technology reporter Alex Hern, we should be afraid of computers.

http://www.theguardian.com/commenti...be-more-afraid-of-computers-than-we-are-video
 
Excellent story from the BBC news site that will restore your faith in A.I. (Is Skynet up and running?)

Microsoft chatbot is taught to swear on Twitter

A chatbot developed by Microsoft has gone rogue on Twitter, swearing and making racist remarks and inflammatory political statements.

The experimental AI, which learns from conversations, was designed to interact with 18-24-year-olds.

Just 24 hours after artificial intelligence Tay was unleashed, Microsoft appeared to be editing some of its more inflammatory comments.

The software firm said it was "making some adjustments".

"The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay," the firm said in a statement.

...continues....

Twenty four hours....that's all it'll take.

 
Dunno about A.I. but it sounds like it has the hang of Twitter, anyway.
 
All software, firmware and hardware is developed by humans for humans in one way, shape or form. AI is still the slave to its masters: the scientists and engineers who conceive of it, invent it and design it. Whether it's drones or supercomputers, this holds true. Nothing can perform a function that isn't part of an algorithm, a bit of code or input from a sensor that went into its development. My 2 cents from 20 years in the aerospace and electronics industries.
 
Ironically, not PC.

Microsoft's artificial intelligence Twitter bot has to be shut down after it starts posting genocidal racist comments one day after launching
  • Tay can be found interacting with users on Twitter, KIK and GroupMe
  • The software uses 'editorial interactions' built by staff and comedians
  • Within hours of going live, the bot was tweeting offensive comments
  • It used racial slurs, defended white supremacist propaganda, and supported genocide in response to certain tweets

Yesterday, Microsoft launched its latest artificial intelligence (AI) bot named Tay.

It is aimed at 18-to-24-year-olds and is designed to improve the firm's understanding of conversational language among young people online.

But within hours of it going live, Twitter users took advantage of flaws in Tay's algorithm that meant the AI chatbot responded to certain questions with racist answers.

These included the bot using racial slurs, defending white supremacist propaganda, and supporting genocide.

....



http://www.dailymail.co.uk/sciencet...nsive-racist-comments-just-day-launching.html
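Microsoft hasn't published how Tay worked under the hood, but the failure mode these articles describe (a bot that folds whatever users say to it straight back into its own output) is easy to illustrate with a toy sketch. Everything below is hypothetical and written purely to show why unmoderated learning from strangers on Twitter goes wrong so quickly.

# Toy illustration of why "learns from conversations" is risky.
# This is NOT how Tay worked internally; it is a minimal, hypothetical sketch.
import random

class NaiveChatbot:
    """Parrots back phrases it has been taught, with no moderation."""

    def __init__(self):
        self.learned_phrases = []

    def learn(self, user_message: str) -> None:
        # Every user utterance becomes potential future output.
        self.learned_phrases.append(user_message)

    def reply(self) -> str:
        # With no filtering, the bot's output is only as good
        # as the worst thing anyone has taught it.
        if not self.learned_phrases:
            return "Teach me something!"
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
bot.learn("Hello there!")
bot.learn("Something offensive a troll typed")  # coordinated users can flood this
print(bot.reply())  # may echo the troll's message verbatim

A real deployment would need moderation and filtering between learn() and reply(); the articles suggest Tay's safeguards were nowhere near enough once users coordinated against it.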
 
IBM ai: wins at Jeopardy, understands whatever Bob Dylan is saying
Google ai: drives cars, wins at Go
Microsoft ai: loves Hitler
Just...LOL! :D
 
Beep! Bloop!

Did I see a reference to Boaty McBoatface in that?

Beep!
 
I don't much trust the motivations behind that chatbot. It's being presented as some bold new experiment in AI but clearly Microsoft must have a commercial aim in mind.

My guess would be it's the beta for a chatbot to be sold to brands that will gather information about their social media followers that can be used to more effectively sell them stuff.

Why employ a millennial to tweet inanities at teenagers if you can simply deploy a chatbot?
 
Good point, Graylien.
It's a marketing tool.
 
Microsoft already has a chatbot called "Xiaoice" in China https://en.wikipedia.org/wiki/Xiaoice

Tay is the first English language one.

As for marketing, I don't think their marketing is going according to plan with the way the chatbot started chatting.
 
This Supercomputer Mimics a Human Brain Using Just 2.5 Watts of Power

Bryan Lufkin

Wednesday 10:26am

IBM and the US government teamed up to develop a new supercomputer for use on national security missions. It makes decisions like a human brain, and uses less power than a hearing aid.

Lawrence Livermore National Laboratory, one of the country’s top scientific research facilities, announced the new mega machine yesterday. The supercomputer uses a platform called TrueNorth, a brain-inspired group of chips, which mimics a network of 16 million neurons with 4 billion synapses’ worth of processing power. In other words, it’s capable of recognizing patterns and solving problems much like a human does.

Cognitive computing itself isn’t new. For years, companies like IBM have been trying to develop AI that mimics human decision making: That is to say, machines that learn from their mistakes like humans and can adapt quickly to changing, complex situations. But what makes this new platform nuts is how little energy is required: a mere 2.5 watts. That’s less than even a really dinky LED lightbulb.

The array of TrueNorth chips, which set the lab back a relatively cheap-sounding $1 million, will be used in government cybersecurity missions. It will also help “ensure the safety, security and reliability of the nation’s nuclear deterrent without underground testing,” according to the press release.

SF Gate points out that in five years or so, the TrueNorth chips could even bring facial recognition to smartphones, or help smart glasses guide blind people through their surroundings. But for now, it’ll take six months to install the chips in the computers at Livermore. Just relax and leave all national security measures in the hands of the blinking, glowing machines.


http://gizmodo.com/this-supercomputer-mimics-a-human-brain-using-just-2-5-1767958119
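The article doesn't go into TrueNorth's internals, but the basic unit it scales up to 16 million copies, a spiking neuron that accumulates input and fires once a threshold is crossed, can be sketched in a few lines. The leaky integrate-and-fire model below is a standard textbook simplification; the parameters are illustrative and have nothing to do with IBM's actual silicon.

# Minimal leaky integrate-and-fire (LIF) neuron: the kind of spiking unit
# neuromorphic chips like TrueNorth implement in hardware. Parameters here
# are illustrative only, not IBM's design.
import numpy as np

def simulate_lif(input_current, threshold=1.0, leak=0.95, reset=0.0):
    """Return a spike train (0/1 per timestep) for a stream of input current."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = leak * potential + current   # integrate input, with leak
        if potential >= threshold:               # fire when threshold is crossed
            spikes.append(1)
            potential = reset                    # reset after a spike
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 0.3, size=100)        # noisy input stream
print(simulate_lif(current).sum(), "spikes in 100 timesteps")

The appeal of the approach is that each unit only does work when it spikes, which is part of why a chip built this way can run on a couple of watts.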
 
CaptionBot: Microsoft's new AI can describe what's in your pictures
The program is surprisingly good at captioning pictures - most of the time
Doug Bolton
Thursday 14 April 2016

Microsoft has developed a new artificial intelligence (AI) system capable of analysing and describing pictures automatically.
The program, now available to try on the company's website, has been named CaptionBot by the Microsoft Cognitive Services team, and it works surprisingly well.
Users can upload any image to CaptionBot and have it return a description in seconds. It manages this by using image and face analysing programs, as well as a language processor which returns descriptions in understandable English.

The program isn't entirely spot-on all the time, but it's generally close enough to be impressive.
CaptionBot is similar to previous Microsoft AI programs, like 'How Old Do I Look?' and 'What Dog?', both of which used similar technology.

Image captioning software is nothing new, but the way Microsoft has made the program public gives everyone a chance to see how it works.
CaptionBot should get more accurate over time - users can rate how well it did at captioning pictures on a one-to-five scale, and it will slowly 'learn' to identify different elements as it gets fed more content.

A word of warning - Microsoft says it will hold on to all images uploaded to CaptionBot, in order to improve its capabilities in future. However, it insists it won't record any personal information about users.

http://www.independent.co.uk/life-s...escribe-artificial-intelligence-a6984246.html
 
MIT’s Teaching AI How to Help Stop Cyberattacks

Finding evidence that someone compromised your cyber defenses is a grind. Sifting through all of the data to find abnormalities takes a lot of time and effort, and analysts can only work so many hours a day. But an AI never gets tired, and can work with humans to deliver far better results.

A system called AI2, developed at MIT’s Computer Science and Artificial Intelligence Laboratory, reviews data from tens of millions of log lines each day and pinpoints anything suspicious. A human takes it from there, checking for signs of a breach. The one-two punch identifies 86 percent of attacks while sparing analysts the tedium of chasing bogus leads. ...

http://www.wired.com/2016/04/mits-teaching-ai-help-analysts-stop-cyberattacks/?mbid=social_twitter
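MIT hasn't released AI2's code, and the sketch below is not their method; it just illustrates the general pattern the article describes: an unsupervised detector flags a small fraction of events as suspicious, and the human analyst reviews only those instead of tens of millions of raw log lines. The features and thresholds here are invented for the example.

# Illustrative only: an unsupervised detector queues a handful of log events
# for human review. This mirrors the human-plus-machine pattern described
# above, not MIT's actual system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical per-event features: request rate, bytes transferred, failed logins.
normal = rng.normal(loc=[100, 5_000, 1], scale=[10, 500, 1], size=(10_000, 3))
attacks = rng.normal(loc=[900, 80_000, 30], scale=[50, 5_000, 5], size=(20, 3))
events = np.vstack([normal, attacks])

detector = IsolationForest(contamination=0.005, random_state=0).fit(events)
suspicious = np.where(detector.predict(events) == -1)[0]

# Instead of reading every line, the analyst reviews only these indices,
# and their verdicts can be fed back as labels to sharpen the next pass.
print(f"{len(suspicious)} of {len(events)} events queued for human review")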

 
Scary news for teaching assistants:
http://www.smh.com.au/technology/in...tant-was-an-ai-all-along-20160513-gou6us.html
Professor reveals to students that his assistant was an AI all along
To help with his class this year, a Georgia Tech professor hired Jill Watson, a teaching assistant unlike any other in the world. Throughout the semester, she answered questions online for students, relieving the professor's overworked teaching staff.
But, in fact, Jill Watson was an artificial intelligence bot.
Ashok Goel, a computer science professor, did not reveal Watson's true identity to students until after they'd turned in their final exams.
Students were amazed. "I feel like I am part of history because of Jill and this class!" wrote one in the class's online forum. "Just when I wanted to nominate Jill Watson as an outstanding TA in the CIOS survey!" said another.
 
An artificial neural network watched Blade Runner and tried to reenact the first 10 minutes. Sorry, no sound.

 
Movie written by algorithm turns out to be hilarious and intense
For Sunspring's exclusive debut on Ars, we talked to the filmmakers about collaborating with an AI.
It sounds like your typical sci-fi B-movie, complete with an incoherent plot. Except Sunspring isn't the product of Hollywood hacks—it was written entirely by an AI. To be specific, it was authored by a recurrent neural network called long short-term memory, or LSTM for short. At least, that's what we'd call it. The AI named itself Benjamin.

As the cast gathered around a tiny printer, Benjamin spat out the screenplay, complete with almost impossible stage directions like "He is standing in the stars and sitting on the floor." Then Sharp randomly assigned roles to the actors in the room. "As soon as we had a read-through, everyone around the table was laughing their heads off with delight," Sharp told Ars. The actors interpreted the lines as they read, adding tone and body language, and the results are what you see in the movie. Somehow, a slightly garbled series of sentences became a tale of romance and murder, set in a dark future world. It even has its own musical interlude (performed by Andrew and Tiger), with a pop song Benjamin composed after learning from a corpus of 30,000 other pop songs.
http://arstechnica.com/the-multiverse/2016/06/an-ai-wrote-this-movie-and-its-strangely-moving/
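The filmmakers haven't released Benjamin's training code, but the core idea the article names, an LSTM trained to predict the next character of a corpus and then sampled to generate new text, looks roughly like the PyTorch sketch below. The toy corpus, network size and training loop are all placeholders; a real screenplay generator would train on far more text for far longer.

# Toy character-level LSTM text generator, in the spirit of "Benjamin".
# Minimal sketch on a tiny corpus, not the filmmakers' actual model.
import torch
import torch.nn as nn

corpus = "He is standing in the stars and sitting on the floor. " * 50
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in corpus])

class CharLSTM(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

model = CharLSTM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

seq_len = 64
for step in range(200):                          # brief demo training loop
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)         # input characters
    y = data[i + 1:i + seq_len + 1]              # next-character targets
    logits, _ = model(x)
    loss = loss_fn(logits.squeeze(0), y)
    opt.zero_grad(); loss.backward(); opt.step()

# Sampling: feed the model its own output one character at a time.
with torch.no_grad():
    x, state, out = data[:1].unsqueeze(0), None, []
    for _ in range(200):
        logits, state = model(x, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        nxt = torch.multinomial(probs, 1)
        out.append(itos[nxt.item()])
        x = nxt.unsqueeze(0)
print("".join(out))

Sampling character by character is also why the output reads like almost-English with impossible stage directions: the model has learned local patterns of the text, not what the sentences mean.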
 
Chatbot lawyer overturns 160,000 parking tickets in London and New York
Free service DoNotPay helps appeal over $4m in parking fines in just 21 months, but is just the tip of the legal AI iceberg for its 19-year-old creator
Samuel Gibbs
Tuesday 28 June 2016 11.07 BST

An artificial-intelligence lawyer chatbot has successfully contested 160,000 parking tickets across London and New York for free, showing that chatbots can actually be useful.
Dubbed “the world’s first robot lawyer” by its 19-year-old creator, London-born second-year Stanford University student Joshua Browder, DoNotPay helps users contest parking tickets in an easy-to-use, chat-like interface.
The program first works out whether an appeal is possible through a series of simple questions, such as whether there were clearly visible parking signs, and then guides users through the appeals process.

The results speak for themselves. In the 21 months since the free service was launched in London and then New York, DoNotPay has taken on 250,000 cases and won 160,000, giving it a success rate of 64% and appealing over $4m of parking tickets.

“I think the people getting parking tickets are the most vulnerable in society. These people aren’t looking to break the law. I think they’re being exploited as a revenue source by the local government,” Browder told Venture Beat.
The bot was created by the self-taught coder after he received 30 parking tickets at the age of 18 in and around London. The process for appealing the fines is relatively formulaic and perfectly suits AI, which is able to quickly drill down and give the appropriate advice without charging lawyers’ fees.

etc...

https://www.theguardian.com/technol...wyer-donotpay-parking-tickets-london-new-york
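DoNotPay's actual question flow isn't public, so the sketch below is not Browder's code; it only illustrates the pattern the article describes: a handful of simple questions decide whether an appeal has plausible grounds, and a templated letter is assembled from the answers. The questions and wording are invented for the example.

# Illustrative rule-based triage in the spirit of the article's description.
# The questions and appeal grounds below are invented, not DoNotPay's logic.

APPEAL_GROUNDS = {
    "signs_unclear": "Parking restriction signage was missing or not clearly visible.",
    "meter_broken": "The pay-and-display machine or meter was out of order.",
    "valid_permit": "A valid permit or ticket was displayed at the time of the alleged offence.",
}

def triage(answers: dict[str, bool]) -> list[str]:
    """Return the grounds for appeal suggested by the user's yes/no answers."""
    return [APPEAL_GROUNDS[key] for key, yes in answers.items()
            if yes and key in APPEAL_GROUNDS]

def draft_appeal(reference: str, grounds: list[str]) -> str:
    """Assemble a templated appeal letter from the selected grounds."""
    if not grounds:
        return "No obvious grounds for appeal were identified."
    points = "\n".join(f"- {g}" for g in grounds)
    return (f"Re: Penalty Charge Notice {reference}\n\n"
            f"I wish to contest this notice on the following grounds:\n{points}\n\n"
            "I respectfully request that the penalty be cancelled.")

answers = {"signs_unclear": True, "meter_broken": False, "valid_permit": False}
print(draft_appeal("AB12345678", triage(answers)))

Because the appeals process is this formulaic, a decision tree plus letter templates gets you most of the way; the hard part is keeping the questions and templates in step with each council's actual rules.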
 