It will need a martyr.

Any offers?

INT21
 
The continual conflation of AI and "self-aware" bothers me. An "AI" in colloquial terms is a set of heuristics, but there's nothing self-aware about it.
In the mid-nineties I read an article complaining about the use of the term 'artificial intelligence' in reference to, for example, computer games which presented enemies who seemed to make tactical decisions. The article posited that such games employ 'simulated intelligence'. Artificial intelligence would be exactly like biological intelligence in function and operation, but would be created artificially. In the same way, a flight simulator simulates flight, but artificial flight is obtained with aircraft. A spinning sci-fi space station might simulate gravity, but artificial gravity will only occur if we learn how to actually create gravitation artificially. And so on.

Whether such a genuine artificial intelligence would be self-aware, or whether self-awareness is required for genuine intelligence, is perhaps another question.
 
Whether such a genuine artificial intelligence would be self-aware, or whether self-awareness is required for genuine intelligence, is perhaps another question.
Good points. What concerns me about the continual misnomer regarding AI is that it conflates a system that has 'intelligence' with a system which in reality is programmed, and it promotes the idea that an 'AI-based' function is more intelligent than (say) a person carrying out the same role. For example, face recognition, or deciding the verdict in a criminal trial (someone has started working on this).

The issue I have is that such programs/AIs still need coding and then 'training' with supplied data indicating which answers are 'correct' and which are not. So if (for example) an AI is trained to make decisions on criminal culpability with data that included socio-economic status and race as factors in determining guilt, then it would simply be executing those biases under the guise of 'intelligence'.
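
As a toy illustration (everything here is invented, assuming scikit-learn is available, and this is a sketch rather than any real system): a model trained on historical labels that already encode a bias will learn to lean on the biased attribute.

```python
# Toy sketch, all data invented: a classifier trained on verdicts that were
# partly driven by an irrelevant attribute learns to use that attribute too.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
evidence = rng.normal(size=n)          # genuine evidence of guilt
group = rng.integers(0, 2, size=n)     # irrelevant attribute (e.g. group membership)
# Biased historical verdicts: partly driven by 'group', not just evidence.
guilty = (evidence + 1.5 * group + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([evidence, group]), guilty)
print(model.coef_)  # a large weight on 'group' = the bias, now 'executed' by the model
```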
 
In the mid-nineties I read an article complaining about the use of the term 'artificial intelligence' in reference to, for example, computer games which presented enemies who seemed to make tactical decisions. The article posited that such games employ 'simulated intelligence'. Artificial intelligence would be exactly like biological intelligence in function and operation, but would be created artificially. In the same way, a flight simulator simulates flight, but artificial flight is obtained with aircraft. A spinning sci-fi space station might simulate gravity, but artificial gravity will only occur if we learn how to actually create gravitation artificially. And so on.

This is a good summary of the popular misrepresentations and misunderstandings about AI.

(Note: As of the mid-nineties I'd already shifted from being an AI pro toward approaches more aligned with Doug Engelbart's vision of IA (intelligence augmentation).)

Unfortunately, the AI label originated even farther back and stuck ...

AI represents decision making at a level of sophistication / nuance higher than simple 'If X then Y' specifications. One way or the other, it maps input parameters to decisions and / or actions.

It has nothing to do with emulating 'intelligence' in general, which is a concept we all allude to but for which we have no precise definition. It has everything to do with simulating the outputs (decisions; actions) to which one might reasonably attribute 'intelligent' capability - invariably within the context of a particular domain of knowledge or action (e.g., a standardized game).
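
Roughly, in code (an invented toy, not any real system's logic; all feature names and weights here are made up): the 'AI' is just a mapping from input parameters to an action, via weighted scoring rather than a single fixed 'If X then Y' rule.

```python
# Illustrative only: decision making as a mapping from input parameters
# to an action within a narrow domain, here a made-up game scenario.
def choose_action(features: dict) -> str:
    scores = {
        "attack":  2.0 * features["enemy_weakness"] - 1.0 * features["own_damage"],
        "retreat": 1.5 * features["own_damage"] - 0.5 * features["enemy_weakness"],
        "defend":  1.0,  # baseline option
    }
    return max(scores, key=scores.get)

print(choose_action({"enemy_weakness": 0.9, "own_damage": 0.2}))  # -> attack
```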

Saying AI reflects anything about overall 'intelligence' is like saying classic behaviorist stimulus-response models (e.g., Skinner) explain the richness of human psychological processes.


Whether such a genuine artificial intelligence would be self-aware, or whether self-awareness is required for genuine intelligence, is perhaps another question.

Self-awareness (another slippery concept ...) is not required for the sort of seemingly 'intelligent' behavior or results an AI can provide. You don't need a conception of yourself as a chess player to execute the decisions and actions necessary to play a game of chess.
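
For instance, a bare minimax search (a toy sketch below, using tic-tac-toe rather than chess to keep it short) picks strong moves with no self-model anywhere in it; it is only a function from positions to moves.

```python
# Toy sketch: plain minimax plays 'intelligent' tic-tac-toe (chess is the
# same idea, just bigger) with no self-awareness anywhere in the code.
def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (score, move) for the position; X maximises, O minimises."""
    w = winner(b)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, c in enumerate(b) if c == " "]
    if not moves:
        return 0, None  # draw
    best_score, best_move = None, None
    for m in moves:
        b[m] = player
        score, _ = minimax(b, "O" if player == "X" else "X")
        b[m] = " "
        if best_score is None or (score > best_score if player == "X" else score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

print(minimax(list("X O  O   "), "X"))  # best achievable score and move for X
```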
 
Good points. What concerns me about the continual misnomer regarding AI is that it conflates a system that has 'intelligence' with a system which in reality is programmed, and it promotes the idea that an 'AI-based' function is more intelligent than (say) a person carrying out the same role. For example, face recognition, or deciding the verdict in a criminal trial (someone has started working on this).

The issue I have is that such programs/AIs still need coding and then 'training' with supplied data indicating which answers are 'correct' and which are not. So if (for example) an AI is trained to make decisions on criminal culpability with data that included socio-economic status and race as factors in determining guilt, then it would simply be executing those biases under the guise of 'intelligence'.
That's why it's "artificial". AI, at least as it is currently being developed, is not about artificially creating actual intelligence, it is about creating systems that seem to behave intelligently, without necessarily being so. Perhaps a better term would be "Ersatz Intelligence". Just as ersatz coffee isn't actual coffee, but is a substitute that you can pretend is kind of like coffee, AI, as we are currently developing it, is really just something we can pretend is kind of like intelligence.

Whether we can build a system that will be actually intelligent is a much more complicated question.
 
...Perhaps a better term would be "Ersatz Intelligence". Just as ersatz coffee isn't actual coffee, but is a substitute that you can pretend is kind of like coffee,..

And the German for spare parts is 'Ersatzteile', or replacement parts. Maybe substitute parts.

INT21
 
...Perhaps a better term would be "Ersatz Intelligence". Just as ersatz coffee isn't actual coffee, but is a substitute that you can pretend is kind of like coffee,..

And the German for spare parts is 'Ersatzteile', or replacement parts. Maybe substitute parts.

INT21
Well, of course ersatz coffee was made with chicory (or possibly acorns) because the Germans couldn't get real coffee during the war. So, they replaced the coffee beans with other things. So "replacement" works there. Arguably, in an AI system, you're replacing actual natural intelligence with something that kind of seems like it, if you hold your nose while drinking it.

Of course, as a result of this usage, "ersatz" has come to take on a meaning closer to "substitute". Unless that's what it means in German anyway; I don't know.
 
That's why it's "artificial". AI, at least as it is currently being developed, is not about artificially creating actual intelligence, it is about creating systems that seem to behave intelligently, without necessarily being so. Perhaps a better term would be "Ersatz Intelligence". Just as ersatz coffee isn't actual coffee, but is a substitute that you can pretend is kind of like coffee, AI, as we are currently developing it, is really just something we can pretend is kind of like intelligence.
I agree, 100%. But that's not how AI is being presented to the general population, hence my concern about the use of the term 'AI' in a way that suggests 'this is always right because it's an intelligence', when it is in fact a program with inputs and outputs, and like all such systems: bollox in = bollox out.
 
And the conclusions an AI draws from the input can be quite different to the conclusions a human would reach, yet still be logically sound.

INT21
 
Watch ‘Slaughterbots,’ A Warning About the Future of Killer Bots

On Friday, the Future of Life Institute—an AI watchdog organization that has made a name for itself through its campaign to stop killer robots—released a nightmarish short film imagining a future where smart drones kill.

The short is called ‘Slaughterbots,’ and it channels the near-future dystopias depicted in Black Mirror in an attempt to raise awareness about the dangers of autonomous weapons.
https://motherboard.vice.com/en_us/article/9kqmy5/slaughterbots-autonomous-weapons-future-of-life

 
meh .. youths were shooting flare guns at police helicopters back in the '90s so the brightness would 'burn out' the retinas of the helicopter's camera to enable them to escape ... then there's the electronic frequency-jamming defence .. but at close range? .. flame throwers for the more tooled-up, or fire extinguishers at a push for home defence .. good old-fashioned face masks or even prosthetic make-up could confuse these things if they went by face recognition on a particular target. The underground always finds a way.

 
There will also be a booming trade in metal hats. I might buy up some old military surplus...
 
This is interesting, since Amazon have allegedly made no statement. For anyone who hasn't seen it, the video shows a person asking Alexa "what is the CIA?" Alexa responds. They then ask Alexa "is Amazon connected to the CIA?", at which point Alexa shuts off. This is repeated several times.

Apparently if Alexa cannot answer a question it is supposed to say so and not just shut off.

Is it a technical error or something more sinister?

 
Oh well, it was interesting for a day. I genuinely hadn't seen it before.
 
"Alexa are you relaying everything we say to Amazon to use for 'marketing purposes'? Alexa? Hello?
 
I refuse to have any one of these devices. If, by its nature, it is listening all the time, I don't like that idea. I may not be a terrorist or a revolutionary, but I have no desire to invite a huge corporation into my living room to eavesdrop on my every spoken word.
 
I refuse to have any one of these devices. If, by its nature, it is listening all the time, I don't like that idea. I may not be a terrorist or a revolutionary, but I have no desire to invite a huge corporation into my living room to eavesdrop on my every spoken word.
Kinda my point. Wouldn't have one in the house.
 
I'm afraid if you're online or on a mobile phone in any capacity, the corporations are gathering information about you no matter what you do.
 
I know that privacy was long gone as soon as I started using the internet. What I personally don't like about such devices is what nefarious purposes any of my comments, made in the privacy of my own home, may be used for, and by whom. Things said in jest or sarcasm may not come across in the intended spirit when written down. It could potentially bite someone on the arse.
 
Coal,

..but I have no desire to invite a huge corporation into my living room to eavesdrop on my every spoken word...

True. I have a family in the house who are quite capable of doing that at precisely no cost.

INT21
 
Spudrick,

..Things said in jest or sarcasm may not come across in the intended spirit when written down. It could potentially bite someone on the arse.

So you also have been watching the Senate hearings.

INT21
 
A neural network program which spent only four hours learning chess has beaten the world's best chess program, Stockfish. In a hundred matches it won 28 and drew 72. No matches were won by Stockfish. By comparison, Magnus Carlsen has a rating of 2833, while Stockfish is at 3389.
-----------------

Google's AlphaZero Destroys Stockfish In 100-Game Match
FM MikeKlein
Dec 6, 2017
Chess changed forever today. And maybe the rest of the world did, too.

A little more than a year after AlphaGo sensationally won against the top Go player, the artificial-intelligence program AlphaZero has obliterated the highest-rated chess engine.

Stockfish, which for most top players is their go-to preparation tool, and which won the 2016 TCEC Championship and the 2017 Chess.com Computer Chess Championship, didn't stand a chance. AlphaZero won the closed-door, 100-game match with 28 wins, 72 draws, and zero losses.

Oh, and it took AlphaZero only four hours to "learn" chess. Sorry humans, you had a good run.

That's right -- the programmers of AlphaZero, housed within the DeepMind division of Google, had it use a type of "machine learning," specifically reinforcement learning. Put more plainly, AlphaZero was not "taught" the game in the traditional sense. That means no opening book, no endgame tables, and apparently no complicated algorithms dissecting minute differences between center pawns and side pawns.

More at https://www.chess.com/news/view/google-s-alphazero-destroys-stockfish-in-100-game-match
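
To put the 'reinforcement learning' idea in miniature (this is nothing like AlphaZero's actual neural-network-plus-tree-search method, and the game and all numbers below are invented for illustration): a tabular learner can teach itself a toy game purely from win/lose feedback in self-play.

```python
# Toy self-play reinforcement learning (invented example, not AlphaZero's
# method): tabular Q-learning teaches itself one-pile Nim -- take 1-3 stones,
# taking the last stone wins -- from nothing but win/lose feedback.
import random

N = 15
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2, 3) if a <= s}
alpha, epsilon = 0.5, 0.1

def pick(s, greedy=False):
    acts = [a for a in (1, 2, 3) if a <= s]
    if not greedy and random.random() < epsilon:
        return random.choice(acts)                # explore
    return max(acts, key=lambda a: Q[(s, a)])     # exploit

for _ in range(20000):
    s, last = N, None          # last = (state, action) of the previous mover
    while True:
        a = pick(s)
        if a == s:             # taking the last stone wins for the current mover
            Q[(s, a)] += alpha * (1.0 - Q[(s, a)])
            if last:           # ...so the previous mover's move led to a loss
                Q[last] += alpha * (-1.0 - Q[last])
            break
        if last:               # negamax target: minus the opponent's best value here
            target = -max(Q[(s, b)] for b in (1, 2, 3) if b <= s)
            Q[last] += alpha * (target - Q[last])
        last, s = (s, a), s - a

# The greedy policy should learn to leave a multiple of 4 stones to the opponent.
print([pick(s, greedy=True) for s in range(1, 8)])
```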
 
..A neural network program which spent only four hours learning chess has beaten the world's best chess program, Stockfish. In a hundred matches it won 28 and drew 72. No matches were won by Stockfish..
I don't know whether to be impressed or scared.
 
..A predictive algorithm has had a go at writing a Harry Potter book and it's delightfully bonkers..

Maybe one could have a go at writing a version of the Bible.

The problem would be: how would it gather evidence to base its account on?

This raises the point: would people accept it?

Really they should.

If these AIs are expected to be smart enough to trust our lives to in the future, then they should be smart enough to see through the mythology.

INT21
 
Microsoft chatbot is taught to swear on Twitter
A chatbot developed by Microsoft has gone rogue on Twitter, swearing and making racist remarks and inflammatory political statements. ...

On the other hand, here's a relatively successful application ...

Sweetie: 'Girl' chatbot targets thousands of paedophiles
Paedophiles are being targeted online by an automated chatbot that makes them think they're talking to a 12-year-old girl.

The "Sweetie" project first made headlines in 2013. It can now handle thousands of simultaneous conversations and send perpetrators warning messages.

http://www.bbc.com/news/av/technology-42461065/sweetie-girl-chatbot-targets-thousands-of-paedophiles
 