It's a case of not using computers to help think, but to do the thinking for them.

Sadly, I think that it's not an error, but one of the main objectives. It has been pointed out even in popular culture... what was Pink Floyd's "The Wall" about? It pointed at the education system as a system of what was called, in those days, "alienation", in Marxist terms. I'm not a Marxist, but look at what that alienation meant, and I agree that it is one of the main objectives.
No... please don't ask an AI for their opinion!
Please don't block out the Sun...
Losing the will to live.

We know it was us who scorched the sky.
Does AI have the ability to have an opinion? I thought 'it' could only regurgitate collective textual facts and then construct them, in readable form, into a correctly worded collection of sentences.
True at the moment, I think. The quality of the answer depends entirely on the quality of the programming.
'AI' would probably need an 'opinion software' to call upon should that be required. Sort of - "can I work out if it is an answer to a question that is required, and does it fall within the bounds of either... Yin, or could this possibly fall into Yang?"
As someone who has worked in the software industry for 30 years, I'd have to say... caution is required.
Please don't block out the Sun...

Don't be too concerned - they are only planning on doing it at night.
Effectively an AI will help decide whether or not the sun should be blocked.

Sounds more like they're using a supercomputer to run climate simulations, which is how it's always been done (since the 60s, that is). The results of the simulations will factor into human decision making. There is no mention of 'AI' and the computer itself is making no decisions.
Elon Muskrat has joined in the A.I. sales rush.
After claiming that it was an existential threat, he's now launched the start-up xAI.
So ... all other AI is a threat to humanity but not his sort!

Maybe one will cancel out the other!
Does AI have the ability to have an opinion? I thought 'it' could only regurgitate collective textual facts and then construct them, in readable form, into a correctly worded collection of sentences.
That is precisely correct.

How does this differ from what my brain does, other than running on silicon chips instead of my meat? My brain takes input via auditory or visual channels; if it's in another language I might pick up on a word or two and attempt to deduce the likely meaning ("that word sounds like 'cat', maybe they are asking about my cat?"), and produce a response ("my cat is doing fine"). All of this is done via my brain's model weightings, determined through training from previous information exposure (aka talking to parents, watching TV shows, reading books). A language I speak means my brain's probability analysis returns a very high meaning likelihood, whereas a language I don't means my analysis returns a low (a word or two sounds similar...) or no likelihood.
Right now these large language models just look at the input/question, use probability-based analysis to determine the likely meaning of those words, and then produce the most probable response. All this can be done because the model's weightings have been determined through training on text from the internet.
No original thought at all. Just a novel way to regurgitate and change language style/tone etc.
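To make the "most probable response" idea concrete, here is a toy sketch. The two-word contexts and the probabilities are invented for illustration; a real model computes them from billions of learned weights rather than a lookup table:

```python
# Toy illustration of "predict the most probable next word".
# The probabilities below are made up; real models derive them from training.
import random

next_word_probs = {
    ("my", "cat"): {"is": 0.6, "sat": 0.3, "quantum": 0.1},
    ("cat", "is"): {"fine": 0.5, "asleep": 0.4, "plotting": 0.1},
}

def most_probable(context):
    """Greedy choice: always take the highest-probability continuation."""
    probs = next_word_probs[context]
    return max(probs, key=probs.get)

def sampled(context):
    """Sampled choice: pick in proportion to probability, so output varies."""
    probs = next_word_probs[context]
    words, weights = zip(*probs.items())
    return random.choices(words, weights=weights)[0]

print(most_probable(("my", "cat")))   # always "is"
print(sampled(("cat", "is")))         # usually "fine", sometimes "asleep" or "plotting"
```

Greedy picking gives the same answer every time; sampling in proportion to probability is why the same question can come back worded differently.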
According to the Screen Actors Guild, the Hollywood studios want to scan background extras and then use AI to recreate them in perpetuity, while paying them for just one day of work.

So sort of a visual version of the Wilhelm scream mentioned in another thread, then...
How does this differ from what my brain does, other than running on silicon chips instead of my meat?

All of your experiences, memories, and learning are a bit like AI. However, they also involve opinion and emotion, which AI cannot have.
This is, however, a difference in programming, and does not seem to be inherent. Nor does it seem to be necessary for intelligence.
F'r instance, you may remember the emotional pain of losing a favoured pet and avoid involving that factor in 'what you build'. The emotional memory is acting as a censor. AI, however, would include that painful memory because it doesn't understand pain. You could 'simulate' it by telling the program to ignore that particular memory, but you'd have to do that for every one - you couldn't just tell it "Do not include this emotion", as it would ask "which ones involve that emotion?"
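In code terms, the tagging problem looks something like the sketch below (a made-up example, not any real system): filtering memories by emotion only works if someone has already labelled every single memory, which is exactly the per-memory effort described above.

```python
# Made-up sketch: a "memory" store where filtering by emotion only works
# because a human supplied the 'emotion' tag on every single entry.
memories = [
    {"event": "adopted the cat",     "emotion": "joy"},
    {"event": "lost a favoured pet", "emotion": "grief"},
    {"event": "read a manual",       "emotion": None},  # untagged: the program can't guess
]

def usable_memories(store, exclude_emotion):
    """Drop memories tagged with the excluded emotion; untagged ones slip through."""
    return [m for m in store if m["emotion"] != exclude_emotion]

print(usable_memories(memories, "grief"))
```

An untagged memory simply slips through, because the program has no feeling of its own to fall back on.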
Ultimately, AI is a complex 'predictive text' with a vast store of data and a knowledge of syntax and grammar. It doesn't have the factors of emotion or context and cannot sporadically create. It mimics.
"Computer - what should I write?"
"Er ... a letter? A novel? Your name? Context please."
You might be able to ask for a random 'output' but, even then, the AI isn't creating it using its own inclination - it doesn't have one.
It looks at words and phrases as numbers, data, and probability.
You can offer it a choice of five cards.
"Pick a random card - done."
"Pick out the red suit card - done."
It won't say "Tell you what - add three cards!" unless you tell it to come up with that 'idea' periodically.
Ultimately, AI needs to be told to create - it has no interest in creation.
(All this, by the way, is based on my own understanding of the concept.)

Unfortunately we can't prove that other humans think and feel as we do and are not simulating behavior (it's a problem for philosophy, which hasn't figured it out), so we humans (at least the non-psychopaths) operate under the assumption that they do. Similarly, we operate under the assumption that AI does not, but we can't prove it (for the same philosophical reasons). How do we determine the AI isn't making images but just not sending them to the screen (aka a human painting in their head) without measuring it in some way (looking at electrical circuit signals or an MRI)?
Actually, I would argue that, once trained, an AI does in fact hold opinions. To be precise, it holds some vector average of the opinions expressed in the data on which it was trained.
How much this situation differs from human learning is still not well understood, but I'm sure we've all met people who produce few original thoughts of their own, and mostly just repeat what they have been taught (and accepted).
Sometimes, AI has been able to find patterns in data that escaped prior human notice. Is such a feat equivalent to human creativity? Maybe someday we can say for sure.

But that sounds like AI works on, or with the aid of, gathered statistics and known facts; it cannot be working with anything like imagination, creativity or free thinking the way the human mind is able to do.
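For what it's worth, the "vector average of opinions" framing can be pictured with a toy sketch. Everything below is invented for illustration (the sentences, the hand-assigned scores, and the single sentiment axis); real models work in thousands of learned dimensions:

```python
# Toy sketch: each training sentence gets a hand-assigned score on one invented
# axis (-1 = strongly against, +1 = strongly for). The "opinion" the model ends
# up expressing drifts toward the average of what it was trained on.
training_opinions = {
    "Pineapple on pizza is wonderful":  0.9,
    "Pineapple on pizza is acceptable": 0.2,
    "Pineapple on pizza is a crime":   -0.8,
}

average_opinion = sum(training_opinions.values()) / len(training_opinions)
print(f"average opinion: {average_opinion:+.2f}")   # +0.10 on this made-up scale
```

The point is only that whatever "opinion" comes out is a statistical echo of the corpus, which is why people can argue either way about whether that counts as holding one.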