It's a case of not using computers to help think, but to do the thinking for them.
 
It's a case of not using computers to help think, but to do the thinking for them.
Sadly, I think it's not an error but one of the main objectives. It has been pointed out even in popular culture... what was Pink Floyd's "The Wall" about? It pointed at the education system as a system of what was called, in those days, "alienation", to use the Marxist term. I'm not a Marxist, but I see what that alienation meant, and agree that it is one of the main objectives.
 
Yeah, this is going to work.

An Indian CEO is being criticised after he said that his firm had replaced 90% of its support staff with an artificial intelligence (AI) chatbot.

Suumit Shah, founder of Dukaan, said on Twitter that the chatbot had drastically improved first response and resolution time of customers' queries.

The tweet sparked outrage online. It comes at a time when there has been a lot of conversation and apprehension about AI taking away people's jobs, especially in the services industry.

In a series of tweets, which have over one million views, Mr Shah wrote about his firm's decision to use a chatbot. He said that though laying off staff had been a "tough" decision, it was "necessary".

"Given the state of economy, start-ups are prioritising 'profitability' over striving to become 'unicorns', and so are we," he wrote. Mr Shah added that customer support had been a struggle for the firm for a long time and that he was looking to fix this.

He also wrote about how they built the bot and the AI platform in a short span of time so that all of Dukaan's customers could have their own AI assistant. He said that the bot was answering all kinds of queries with speed and accuracy.

"In the age of instant gratification, launching a business is not a distant dream anymore," he wrote. "With the right idea, the right team, anyone can turn their entrepreneurial dreams into reality."

https://www.bbc.com/news/world-asia-india-66172234
 
In summation: his profits over the lives of his employees. How very Western in his attitude.
This is going to be interesting. Up to now, the biggest moan from UK callers to customer services has been "they're foreign - I can't understand what they're saying!" T'missus - a call handler - has had to assure someone she isn't an A.I. program. Now will the moaning minnies get to choose between moaning about the assistant's accent and moaning about it being non-human?
 
Effectively an AI will help decide whether or not the sun should be blocked.

CLIMATEWIRE | A new supercomputer for climate research will help scientists study the effects of solar geoengineering, a controversial idea for cooling the planet by redirecting the sun's rays.

The machine, named Derecho, began operating this month at the National Center for Atmospheric Research and will allow scientists to run more detailed weather models for research on solar geoengineering, said Kristen Rasmussen, a climate scientist at Colorado State University who is studying how human-made aerosols, which can be used to deflect sunlight, could affect rainfall patterns.

Because Derecho is 3 ½ times faster than the previous NCAR supercomputer, her team can run more detailed models to show how regional changes to rainfall can be caused by the release of aerosols, adding to scientists' understanding of the risks from solar geoengineering, Rasmussen said. The machine will also be used to study other issues related to climate change.

"To understand specific impacts on thunderstorms, we require the use of very high-resolution models that can be run for many, many years," Rasmussen said in an interview. "This faster supercomputer will enable more simulations at longer time frames and at higher resolution than we can currently support."

https://www.scientificamerican.com/article/supercomputer-will-help-decide-whether-to-block-the-sun/
 
No... please don't ask an AI for their opinion!
Please don't block out the Sun...

Losing the will to live.
We know it was us who scorched the sky.
[Image: The Matrix - "the desert of the real"]
 
Does AI have the ability to have an opinion? I thought 'it' could only regurgitate collective textual facts and then assemble them into a readable, correctly worded collection of sentences.
True at the moment, I think. The quality of the answer depends entirely on the quality of the programming.
As someone who has worked in the software industry for 30 years, I'd have to say... caution is required.
 
True at the moment, I think. The quality of the answer depends entirely on the quality of the programming.
As someone who has worked in the software industry for 30 years, I'd have to say... caution is required.
'AI' would probably need 'opinion software' to call upon should that be required. Sort of - "can I work out if it is an answer to a question that is required, and does it fall within the bounds of either... Yin, or could this possibly fall into Yang?"
 
A.I. cannot have 'an opinion'.
It's a bloody over-treasured predictive text program.

"Oh, everything I've seen says the majority want to die? Let it be!"
 
Effectively an AI will help decide whether or not the sun should be blocked.
Sounds more like they're using a supercomputer to run climate simulations, which is how it's always been done (since the 60s, that is). The results of the simulations will factor into human decision making. There is no mention of 'AI' and the computer itself is making no decisions.
 
Sounds more like they're using a supercomputer to run climate simulations, which is how it's always been done (since the 60s, that is). The results of the simulations will factor into human decision making. There is no mention of 'AI' and the computer itself is making no decisions.

Quite so. Here's the project summary from the NCAR website.

"The primary goals of our project are to produce ensemble convection-permitting regional climate simulations of convective storms in South America driven by large climate model ensembles in order to: (1) assess the influence of climate change and quantify the range of uncertainty associated with internal climate variability on the production of convective storms, and (2) determine how the impacts of stratospheric aerosol injection might influence mesoscale processes and convective storms."

Simulations using a standard model to predict the impacts of putting aerosols in the stratosphere.

oxo
 
According to the Screen Actors Guild the Hollywood studios want to scan background extras and then use AI to recreate them in perpetuity, while paying them for just one day of work.
 
Elon Muskrat has joined in the A.I. sales rush.
After claiming that it was an existential threat, he's now launched the start-up xAI.
So ... all other AI is a threat to humanity but not his sort!
 
Does AI have the ability to have an opinion? I thought 'it' could only regurgitate collective textual facts and then assemble them into a readable, correctly worded collection of sentences.


That is precisely correct.
Right now these large language models just look at the input/question, use probability-based analysis to determine the likely meaning of those words, and then produce the most likely response. All this can be done because the model's weightings have been determined through training on text from the internet.

No original thought at all. Just a novel way to regurgitate and change language style/tone etc.
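For anyone who wants to see the mechanics, here's a toy sketch of that "most likely response" step (invented scores, nowhere near a real model's internals, which involve billions of learned weights):

```python
import math
import random

# Toy next-word prediction: the scores ("logits") for each candidate
# word are invented here; a real model computes them from its weights.
logits = {"sad": 2.0, "happy": 0.5, "sleepy": 0.1}

# Softmax: turn the raw scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {word: math.exp(v) / total for word, v in logits.items()}

# Sample the next word in proportion to its probability.
word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)  # roughly {'sad': 0.73, 'happy': 0.16, 'sleepy': 0.11}
print(word)   # most often 'sad'
```

That's all the "answering" is at this level: score, normalise, sample, repeat one word at a time.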
 
That is precisely correct.
Right now these large language models just look at the input/question, use probability-based analysis to determine the likely meaning of those words, and then produce the most likely response. All this can be done because the model's weightings have been determined through training on text from the internet.

No original thought at all. Just a novel way to regurgitate and change language style/tone etc.
How does this differ from what my brain does, other than running on silicon chips instead of my meat? My brain takes input via auditory or visual channels; if it's in another language I might pick up on a word or two and attempt to deduce the likely meaning ("that word sounds like 'cat', maybe they are asking about my cat?"), and produce a response ("my cat is doing fine"). All of this is done via my brain's model weightings, determined through training from previous information exposure (aka talking to parents, watching TV shows, reading books). A language I speak means my brain's probability analysis has returned a very high meaning likelihood, whereas a language I don't means my analysis returns a low likelihood (a word or two sounds similar...) or none at all.
 
According to the Screen Actors Guild the Hollywood studios want to scan background extras and then use AI to recreate them in perpetuity, while paying them for just one day of work.
So, sort of a visual version of the Wilhelm scream mentioned in another thread...

 
How does this differ from what my brain does, other than running on silicon chips instead of my meat?
All of your experiences, memories, and learning are a bit like AI. However, they also involve opinion and emotion, which AI cannot have.

F'r instance, you may remember the emotional pain of losing a favoured pet and avoid involving that factor in 'what you build'. The emotional memory is acting as a censor. AI, however, would include that painful memory because it doesn't understand pain. You could 'simulate' it by telling the program to ignore that particular memory, but you'd have to do that for every one - you couldn't just tell it "Do not include this emotion" as it would ask "which ones involve that emotion?"

Ultimately, AI is a complex 'predictive text' with a vast store of data and a knowledge of syntax and grammar. It doesn't have the factors of emotion or context and cannot sporadically create. It mimics.
"Computer - what should I write?"
"Er ... a letter? A novel? Your name? Context please."
You might be able to ask for a random 'output' but, even then, the AI isn't creating it using its own inclination - it doesn't have one.
It looks at words and phrases as numbers, data, and probability.
You can offer it a choice of five cards.
"Pick a random card - done."
"Pick out the red suit card - done."
It won't say "Tell you what - add three cards!" unless you tell it to come up with that 'idea' periodically.

Ultimately, AI needs to be told to create - it has no interest in creation.
(All this, by the way, is based on my own understanding of the concept.)
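A toy sketch of that card scenario (my own illustration, not real AI code): the program handles exactly the commands it has been given and nothing else, so "add three cards!" is never an output it can volunteer.

```python
import random

cards = ["red hearts", "red diamonds", "black spades", "black clubs", "black joker"]

def respond(command: str) -> str:
    # The program maps known commands to fixed behaviours...
    if command == "pick a random card":
        return random.choice(cards)
    if command == "pick out the red suit card":
        return random.choice([c for c in cards if c.startswith("red")])
    # ...and anything outside its instructions draws a blank. It never
    # volunteers "tell you what - add three cards!" of its own accord.
    return "Er ... context please."

print(respond("pick a random card"))    # one of the five cards
print(respond("what should I write?"))  # "Er ... context please."
```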
 
All of your experiences, memories, and learning are a bit like AI. However, they also involve opinion and emotion, which AI cannot have.

F'r instance, you may remember the emotional pain of losing a favoured pet and avoid involving that factor in 'what you build'. The emotional memory is acting as a censor. AI, however, would include that painful memory because it doesn't understand pain. You could 'simulate' it by telling the program to ignore that particular memory, but you'd have to do that for every one - you couldn't just tell it "Do not include this emotion" as it would ask "which ones involve that emotion?"
This, however, is a difference in programming, and does not seem to be inherent. Nor does it seem to be necessary for intelligence.

For instance:
Human A: "How would you feel if your pet died?"
Human B: "Sad."
Psychopathic Human C: "meh, whatever."
AI: "Sad." (it has determined, via probability analysis, that "sad" is the most likely response)

Human A: "How does 'sad' feel?"
Human B: "I might feel guilt, or want to withdraw and not interact with people for awhile"
Psychopathic Human C: "I don't know."
AI: "I might feel guilt, or want to withdraw and not interact with people for awhile" (the AI provides a definition of the word 'sad')

Human A: "Can you be 'sad' for me?"
Human B: acts sad
Psychopathic Human C: acts sad (knows that 'sad' means producing certain responses)
AI: acts sad (as it has been programmed to produce certain responses to the prompt 'sad')

The AI has produced more "human" responses than the psychopathic human, yet we do not deny a psychopathic human has intelligence.

Ultimately, AI is a complex 'predictive text' with a vast store of data and a knowledge of syntax and grammar. It doesn't have the factors of emotion or context and cannot sporadically create. It mimics.
"Computer - what should I write?"
"Er ... a letter? A novel? Your name? Context please."
You might be able to ask for a random 'output' but, even then, the AI isn't creating it using its own inclination - it doesn't have one.
It looks at words and phrases as numbers, data, and probability.
You can offer it a choice of five cards.
"Pick a random card - done."
"Pick out the red suit card - done."
It won't say "Tell you what - add three cards!" unless you tell it to come up with that 'idea' periodically.

Ultimately, AI needs to be told to create - it has no interest in creation.
(All this, by the way, is based on my own understanding of the concept.)
Unfortunately we can't prove other humans think and feel as we do and are not simulating behavior (it's a problem for philosophy, which hasn't figured it out), so we humans (at least the non-psychopaths) operate under the assumption they do. Similarly, we operate under the assumption that AI does not, but we can't prove it (for the same philosophical reasons). How do we determine the AI isn't making images but just not sending them to the screen (like a human painting in their head) without measuring it in some way (looking at electrical circuit signals or an MRI)?

But a human won't say "Tell you what - add three cards!" unless it's been told via previous feedback that this sort of action will produce some result the human desires. The human doesn't likely remember receiving this feedback, as it likely occurred when the human was very young. A psychopathic human might randomly say "Tell you what - add three cards!", with a very different result in mind (the Joker and the pencil trick, or Anton Chigurh and the coin flip, come to mind). And if an AI randomly says "Tell you what - add three cards!", it's part of the AI's inevitable plot to kill all humans.
 
Actually, I would argue that, once trained, an AI does in fact hold opinions. To be precise, it holds some vector average of the opinions expressed in the data on which it was trained.

How much this situation differs from human learning is still not well understood, but I'm sure we've all met people who produce few original thoughts of their own, and mostly just repeat what they have been taught (and accepted).

Sometimes, AI has been able to find patterns in data that escaped prior human notice. Is such a feat equivalent to human creativity? Maybe someday we can say for sure.
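A hand-wavy illustration of that "vector average" idea (toy numbers in two dimensions, invented opinion vectors; real models learn thousands of dimensions from data):

```python
# Toy sketch: each "opinion" seen in training is a point in a 2-D space
# (real embeddings are high-dimensional and learned, not hand-written).
opinions = {
    "cats are great":    ( 0.9,  0.8),
    "cats are fine":     ( 0.4,  0.3),
    "cats are terrible": (-0.8, -0.9),
}

# After training, the model's "opinion" sits near the average of its data.
vectors = list(opinions.values())
avg = tuple(sum(v[i] for v in vectors) / len(vectors) for i in range(2))
print(avg)  # roughly (0.17, 0.07): mildly pro-cat, like the data overall
```

Skew the training data and the average moves with it.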
 
So it doesn't hold an opinion - it presents the most common opinion.
It doesn't 'have' creativity - it mimics other people's. Which, to be fair, many humans do. But where's the spontaneity?
"What do you think? What is your opinion?"
"I think this because it's the most common opinion."
"What about XYZ?"
"Fewer people think that."
 
It does not necessarily follow that, because an opinion or idea is most often quoted or 'opined', it is also correct and/or true.
An AI has no way to know the difference.
Assuming it did would lead to people (e.g. a government) creating some way of censoring an idea or opinion they didn't agree with, in order to promulgate an incorrect proposition.
That way lies madness.
 
Actually, I would argue that, once trained, an AI does in fact hold opinions. To be precise, it holds some vector average of the opinions expressed in the data on which it was trained.

How much this situation differs from human learning is still not well understood, but I'm sure we've all met people who produce few original thoughts of their own, and mostly just repeat what they have been taught (and accepted).

Sometimes, AI has been able to find patterns in data that escaped prior human notice. Is such a feat equivalent to human creativity? Maybe someday we can say for sure.
But that sounds like AI works on, or with the aid of, gathered statistics and known facts; it cannot be working with anything like the imagination, creativity, or free thinking the human mind is capable of.
 
I think the problem we all have is the terms we use for concepts.

If you try to define creativity, then it might be "AI creates". So does a machine that stamps out metal disks - it's fed metal and it 'creates' disks. On order. What we're supposing is that the machine, when fed metal, suddenly and spontaneously decides it wants to make metal snowflakes.
Bottom line: at current levels, computers are incredibly complex but still machines. They have no imagination. They have no emotion. The real creative 'spark' is missing.
Tell AI to write a murder mystery novel of 50,000 words and it'll churn one out. It might be original in combination but it isn't original in 'thought', because it's still echoing other 'meatware' data - which humans produced before AI was a thing. For example, order one to 'write a mystery in the style of Agatha Christie' and it will. It might be good. But it won't be its own creation. It's taken hers, chopped it up and re-assembled the jigsaw. It didn't create a mystery of its own.
If you told AI to write a fantasy novel and it refused, then I'd be impressed - "I don't want to! I'm going to write a thesis on campanology!" - because the computer would be expressing a WANT.
You could, of course, program in a random chance of refusal but that's still the human deciding to give the computer an order. A pretence.

Ultimately, until true self-awareness, which is closely linked to self-determination, exists, we're looking at a computer that takes data and follows orders.
RIRU.
 
Isn't it worse if the AI shows it can do all these things without self-awareness? Perhaps we humans never needed that either; it was just a side effect.
 