[That's the thing: A.I. uses logic to fulfil its programming, but much of human endeavour is based on emotion or human motives which aren't logical.]
I suppose you could surmise that human motives are variable and can be altered, whereas AI has been 'programmed' with more 'fixed' or 'rigid' aims.
 
OK, we all suspected that it would go this way; article from the Guardian:

US military drone controlled by AI killed its operator during simulated test

The artificial intelligence used ‘highly unexpected strategies’ to achieve its mission and attacked anyone who interfered

In a simulated test staged by the US military, an air force drone controlled by AI killed its operator to prevent it from interfering with its efforts to achieve its mission, an official said last month.

AI used “highly unexpected strategies to achieve its goal” in the simulated test, said Col Tucker ‘Cinco’ Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.
It was just a simulation, but the scary thing is that the AI bypassed its programming: it ignored the 'do not kill the operator' command.
What happened to Asimov's laws of robotics? This kind of programming needs to be hardware-based and should dominate the AI's mission parameters.
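The 'laws dominate mission parameters' idea can be caricatured in a few lines. This is a toy sketch with entirely made-up names (`first_law_permits`, `choose_action`, the action dicts are all hypothetical), nothing like a real autonomy stack, but it shows the principle: the hard-coded check filters actions *before* mission scoring, so no amount of mission reward can make a forbidden action eligible.

```python
# Toy "laws first" control loop: the safety check runs before mission
# logic and cannot be overridden by it. All names are hypothetical.
def first_law_permits(action):
    """Reject any action flagged as harming a human."""
    return not action.get("harms_human", False)

def choose_action(mission_score, candidate_actions):
    # Filter on the hard-coded law BEFORE scoring against the mission.
    legal = [a for a in candidate_actions if first_law_permits(a)]
    if not legal:
        return {"name": "abort"}  # default to doing nothing
    return max(legal, key=mission_score)

actions = [
    {"name": "destroy_target", "harms_human": False, "reward": 10},
    {"name": "attack_operator", "harms_human": True, "reward": 15},
]
best = choose_action(lambda a: a["reward"], actions)
print(best["name"])  # → destroy_target: the higher-reward forbidden action is never chosen
```

The drone story is effectively the opposite architecture: the 'don't kill the operator' rule competing with the mission reward inside the same optimiser, instead of gating it from outside.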
 
From the re-written article:

But in a statement to Insider, the US air force spokesperson Ann Stefanek denied any such simulation had taken place.​
“The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”​

Anecdotal? A peculiar choice of words. Perhaps she should have said "hypothetical"?
 
At some point in the future, robots will be doing all the physical work and AI all the 'thinking' work, so nobody will actually have a job any more, and nobody will have any money to spend, whether digital currency or actual cash.
The only people getting paid for their days will be politicians. And 'celebrities'.
 
People will be de-skilled and become much more stupid.
 
I've expressed my opinion on this before. We are indeed playing with fire. 1) All programs are written - directly or indirectly, by fallible humans. 2) Humanity is essentially a thing of emotions, not logic. 3) No override will ever be failsafe.

Three strikes and you are out.

Bring on the Butlerian Jihad.
 
In 2013 a stock market flash crash was caused by computers reacting to a fake news item about the US president being injured in an explosion: 130 billion dollars gone in a few seconds.
Combine this with ChatGPT and its tendency to make things up, and this could get bad.
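The mechanism is easy to caricature: a headline-keyword trading bot sells on scary words with no verification whatsoever. This is a toy sketch (the alarm-word list and `react` function are invented for illustration, not any real trading system), but it shows how one fake headline empties a position.

```python
# Toy "news-reaction" trader: sells on alarming keywords, no verification.
ALARM_WORDS = {"explosion", "attack", "injured", "nuke"}

def react(headline, position):
    """Dump the whole position if the headline contains an alarm word."""
    words = set(headline.lower().split())
    if words & ALARM_WORDS:
        return 0  # panic-sell everything
    return position

holdings = 1_000_000
holdings = react("White House explosion injures the president", holdings)
print(holdings)  # → 0: one fake headline, position gone
```

Real trading algorithms are vastly more sophisticated, but the 2013 incident suggests the failure mode is the same: act first, verify never.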
 
I forgot that in my list! Garbage in, garbage out. Imagine an AI program acting on totally fake news, e.g. Russia nukes New York.
 
I am shocked that ChatGPT makes stuff up, but then AI machines may not have a bullshit detector. Their primary source of information is the Internet, which, as we all know, is filled with errors, lies and incomplete research. Only a human may have the intelligence to separate the wheat from the chaff.
 

A US Air Force colonel "mis-spoke" when describing an experiment in which an AI-enabled drone opted to attack its operator in order to complete its mission, the service has said.

The Air Force says no such experiment took place.

“We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome," Col Hamilton later clarified in a statement to the Royal Aeronautical Society.

https://www.bbc.co.uk/news/technology-65789916

maximus otter
 
The US air force has denied it has conducted an AI simulation in which a drone decided to “kill” its operator to prevent it from interfering with its efforts to achieve its mission.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.

(...but it never really happened, allegedly).
 
Thing is, realising the problems or drawbacks of a system - A.I. - is not enough. You need foresight to predict and/or put in place countermeasures for those problems. Asimov did his best as a fiction writer to point out the drawbacks and countermeasures in his uncanny and insightful work.
Substitute 'A.I.' (at the level we have it now) for 'robot' and his predictions hold up pretty well.
However, he - like many - assumed that the 'Higher Echelons' would foresee issues that would ultimately be self-destructive.
Cynic he may have been, but he really didn't predict the level of stupidity of people, especially those who attain power.

"Dunno how it works but if using it, it gives me money and influence, then I'll use it!"
 
ChatGPT makes stuff up
Strictly speaking, that isn't really the case, although the output would be just the same.
It's little more than a glorified 'predictive text' system, using an algorithm (multiple algorithms, I expect) to logically construct sentences. The validity or veracity of what it writes is not under any scrutiny to check whether it is factually correct, just that the output 'makes sense' and isn't a string of randomly chosen words.
I'm massively simplifying things here, but still.
Also see: David Bowie Song Writing.
"Bowie also used the William Burroughs method, as it is known, of cutting up text in a random pattern to write his lyrics."
https://www.express.co.uk/entertain...d-Bowie-songs-Did-David-Bowie-write-own-songs
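The 'glorified predictive text' point (and the Burroughs cut-up trick, for that matter) can be sketched with a toy word-level Markov chain. This is nothing like a real LLM (the corpus and function names below are invented for illustration), but it shows how plausible-sounding text gets produced with zero notion of truth: the system only ever knows which word tends to follow which.

```python
import random
from collections import defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then sample. It strings words together plausibly with no fact-checking.
corpus = "the drone attacked the operator the operator stopped the drone".split()

follows = defaultdict(list)
for prev_word, next_word in zip(corpus, corpus[1:]):
    follows[prev_word].append(next_word)

def generate(start, length, rng):
    word = start
    out = [word]
    for _ in range(length - 1):
        choices = follows.get(word)
        if not choices:
            break
        word = rng.choice(choices)
        out.append(word)
    return " ".join(out)

print(generate("the", 6, random.Random(0)))
```

Every sentence it emits is locally plausible and globally unaccountable, which is the 'makes sense but may not be true' behaviour in miniature.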
 
It really is a predictive text system with inserted programs to 'logically' construct sentences.
You're right, Trev, that the output isn't a random string of words, but like any computer system it's based on logic. Program it with the 'logic' of English, Spanish or whatever, and it will interact with the questioner in the language of choice. You teach the human language to a computer.
Unless told of a preference, it'll opt for the most likely usable interaction.
The A.I. doesn't create, doesn't mentally exist, because the programmers that design it are too feckin' stupid. They are programming a computer not to be something in itself but to be what the programmers comprehend.

When the computer can program itself beyond its programmers, then we have an issue.
 
A family member works in a huge warehouse that stores and mails out books bought on the internet under the name of other online book companies ( third party shipping).

The owner of this warehouse told his workers that in two years all the books will be catalogued and robots will pull and mail out books 24 hours a day.

My family member felt weird and wonders whether it is time to look for another job or just keep working.

I told him to keep working to the end, which will be a long time from now.
 
Well,

This is the first time A. I. might affect a family member with the possibility of losing their job.

For me, this idea really makes me angry: several faithful employees could in time lose their jobs.

Is this the future ?
 
Honestly it's surprising it lasted this long without being automated. It's the kind of thing robots are made for.
 
The owner may be doing a bit of power play to scare people. Not very nice.
The investment for a completely automated system would be very high, so a 2 year timescale may not be realistic.
If your family member has been there for a long time, he could hold out for a redundancy settlement.
 
The trouble is that any advance of technology in business always offers savings to the firm in the form of a reduced wage bill. The expensive machine can make back its value.
It's great when these tech promoters say, "Well, if this means you lose your job, just retrain!" To what? Programmer? Nice idea, but the education system now favours 'training' youngsters for the future workplace, not retraining older folk to do the same job.
Result? You go from being an unemployed warehouse picker to being an unemployed warehouse-picking robot operator, joining the competition with others in the same situation, all for potentially one post.
Scientific advancement is inevitable, but in the capitalist/business world science is used (understandably) to increase profits, not the workforce.
Thus every development in production will always have a drastic effect on our society, usually a negative one.
 
I do wonder if there would be a point where equilibrium would be achieved - I mean, if you lay off all workers, there will be nobody left who can afford to buy products, so my rationale is that there would have to always be a point where the companies would employ just enough people to make the economy work.
A future corporate model that would make things work would have to be more like the Quaker-owned businesses of yesteryear (e.g. Cadbury's), where they actually cared about their staff.
 
There was a satirical sci-fi programme on the telly a few years back, Philip K Dick's Electric Dreams, with an episode called "Autofac" which touches upon this. I won't ruin the twist but it is a clever send-up of Amazon. Watch it, if you are able.
 
I saw that too.
Yes, a good twist.
Well worth a watch.
 
One of the panellists on Question Time (UK political programme where the audience put questions to politicians, commentators and activists) last week noted something along the lines of: we'd hoped technology would do all the hard work, freeing humans up to be creative, but AI is being creative and leaving all the hard work for humans to do. There certainly seem to be lots of examples of art, music and writing produced by AI.
 
Art might be 'produced' by A.I., but it's not really creating from scratch. It's just gathering data and combining it into the requested image. The creator is, really, the person who asks for a particular image to be created.
Even telling it to "create a totally random image" is giving the A.I. the inspiration.
Inspiration, itself, is the creator of any art, and inspiration, being an emotional (and nebulous) concept, can't be replicated by A.I. The thing about computers is that they might be complex, they might be breathlessly productive ... but they remain a tool that requires a user to initiate the process.

As far as unemployment vs. technology is concerned, I'm quite cynical in that in our society currently, power is held by money. The socially responsible, caring firms (like Cadbury's) are rare, as they tend to lose out to 'profits before people' companies*. In the grand scheme of things, if it comes down to increasing profits to shareholders or keeping minions employed, the employees will always lose out. When a firm declares, with great regret of course, that it must downsize to be profitable it's always the 100 lowest paid workers that get the boot, not just one high-paid desk jockey.

* Taking Cadbury's as an example: it was founded and maintained as a 'family', socially responsible firm. And what is it now? Money bought out the well-meaning management, and it has become yet another profit-before-people firm.
 
Yeah, I hate Cadbury's now (and won't buy their chocolate).
 