Tell AI to write a murder mystery novel of 50,000 words and it'll churn one out. It might be original in combination, but it isn't original in 'thought' because it's still echoing other 'meatware' data - something humans did to each other long before AI was a thing. For example, order one to 'write a mystery in the style of Agatha Christie' and it will. It might be good. But it won't be its own creation. It's taken hers, chopped it up and re-assembled the jigsaw. It didn't create a mystery of its own.
Yes, but tell a human to write a murder mystery and they will write one, and it isn't original in thought either, because the human has been 'programmed' in writing murder mysteries by their years of reading them or watching them on TV. The human will echo other 'meatware' data. We have a very long thread on just that: tropes and cliches. There's no such thing as an 'unprogrammed' human; they would have to be born in total isolation and survive from babyhood on their own, without ever interacting with other humans or human culture.

Ultimately, philosophically, we can't prove another human has a mind; it's the problem of other minds I linked to in my previous post. We assume another human does have a mind and is 'intelligent' or has consciousness - as a courtesy, as a conceit of our own self, or in service of some greater good (like the golden rule) - but it's unprovable (at least the philosophers haven't figured out a way yet).

And if we can't make a differentiation between the action of our meat versus someone else's meat, then why should we make a differentiation based on our meat vs silicon?
 
Isn't it worse if the AI shows it can do all these things without self-awareness? Perhaps we humans never needed that either, it was just a side effect.
We have self-awareness because that ensures our survival. AI is a functioning bit of computer software/hardware (just a bit of plastic & metal, as we used to call it back in the day). I don't think it will ever be able to control any actual thinking function, but maybe rationalising its stored facts is something it may be able to do quite well in future?
 
Isn't it worse if the AI shows it can do all these things without self-awareness? Perhaps we humans never needed that either, it was just a side effect.
How could a programmable piece of plastic & metal become aware of itself, when it is not a biologically thinking entity? First, it would need a cognitive process in order to be aware of itself, and then a way to process that function (whatever that is) so as to be aware of its awareness of itself!
 
I think I might break out my dusty copy of Is Data Human: The Metaphysics of Star Trek by Richard Hanley.
 
They let an AI large language model (LLM) into a game. Here's a clip of what happened.

 
How could a programmable piece of plastic & metal become aware of itself, when it is not a biologically thinking entity? First, it would need a cognitive process in order to be aware of itself, and then a way to process that function (whatever that is) so as to be aware of its awareness of itself!
I like to think that I'm self-aware, but I don't know if anyone else is, or whether they're just following their programs.
 
I think someone posted earlier that AI has to be programmed, or given some pointers as to what is required to get to an end result, unlike humans. In the beginning was the word. I wondered if being told to write a story about your pet cat in primary school is a potential prompt for any other cat-related writing you do in later life, so you may not be very creative after all.
 
I think someone posted earlier that AI has to be programmed, or given some pointers as to what is required to get to an end result, unlike humans. In the beginning was the word. I wondered if being told to write a story about your pet cat in primary school is a potential prompt for any other cat-related writing you do in later life, so you may not be very creative after all.
But we always look for new and better ways of doing things in everything we do, so it's never our way just to carry on in a regurgitative process; memory, though, is a very useful and essential 'tool' for advancing what we do in the here and now.
 
A.I. creates first love song

Boffins developed the first AI engine to write love songs — and it came up with the line: “You’re my biohazard baby”.

Text generator LovelaceGPT’s bizarre ditty — which it called Delirious Ecstasy — adds: “Please don’t cry for me. Oh, you don’t.”



https://www.the-sun.com/tech/8686021/ai-creates-first-love-song/

maximus otter
 
A.I. creates first love song

Boffins developed the first AI engine to write love songs — and it came up with the line: “You’re my biohazard baby”.

Text generator LovelaceGPT’s bizarre ditty — which it called Delirious Ecstasy — adds: “Please don’t cry for me. Oh, you don’t.”



https://www.the-sun.com/tech/8686021/ai-creates-first-love-song/

maximus otter
Too high, can't come down
Losing my head, spinnin' 'round and 'round
Do you feel me now?

With a taste of your lips, I'm on a ride
You're toxic, I'm slippin' under
With a taste of a poison paradise
I'm addicted to you
Don't you know that you're toxic?
And I love what you do
Don't you know that you're toxic?

Songwriters: Christian Karlsson / Pontus Winnberg / Cathy Dennis / Henrik Jonback. Or AI...?
 
Watch Ager pry open this chatbot. It's kinda funny and sad. Just a more canny search engine after all.

My 15yo asked me my opinion about chatgpt being a breakthrough tech. I said no. It isn't an independent intelligence at all, but an improved googler entirely dependent on its programming. She now has an alternate pov from whatever nonsense the other kids are saying about it. I worry other parents are as gormless as their offspring regarding the hype being generated. Thankfully, she has good teachers at her school who know the value of critical method.
 
Amazon removes books ‘generated by AI’ for sale under author’s name
Jane Friedman claims she had to fight against Amazon’s refusal to remove the misattributed titles because she had not trademarked her name

Five books for sale on Amazon were removed after author Jane Friedman complained that the titles were falsely listed as being written by her. The books, which Friedman believes were written by AI, were also listed on the Amazon-owned reviews site Goodreads. “It feels like a violation, because it’s really low quality material with my name on it,” Friedman told the Guardian.

Friedman was first made aware of the scam titles through a reader who noticed the listings on Amazon and emailed her after suspecting that the books were fraudulent. “It looks terrible. It makes me look like I’m trying to take advantage of people with really crappy books,” she said.
Having had experience with AI tools such as ChatGPT, which is designed to provide humanlike responses to user commands, Friedman immediately thought the books were AI-generated after reading the first few pages. “I’ve been blogging since 2009 – there’s a lot of my content publicly available for training AI models,” the author wrote on her website.

The books were “if not wholly generated by AI, then at least mostly generated by AI”, Friedman said. She began looking for ways to get the titles taken down immediately and submitted a claim form to Amazon.

https://www.theguardian.com/books/2...s-generated-by-ai-for-sale-under-authors-name
 
How does this differ from what my brain does, other than running on silicon chips instead of my meat? My brain takes auditory or visual input; if it's in another language I might pick up on a word or two and attempt to deduce the likely meaning ("that word sounds like 'cat', maybe they are asking about my cat?"), and produce a response ("my cat is doing fine"). All of this is done via my brain's model weightings, determined through training from previous information exposure (aka talking to parents, watching TV shows, reading books). A language I speak means my brain's probability analysis returns a very high meaning likelihood, whereas a language I don't means my analysis returns a low likelihood (a word or two sounds similar...) or none at all.
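The "probability analysis from previous exposure" idea above can be sketched as a toy program. This is only a hypothetical illustration (the tiny corpus and function names are made up for the example, and real language models are vastly more sophisticated), but it shows the same shape: predictions come from counts learned during training, and unfamiliar input yields no prediction, like hearing an unknown language.

```python
from collections import Counter, defaultdict

# Tiny made-up "training" corpus standing in for years of exposure.
corpus = "my cat is doing fine my cat is sleeping my dog is doing fine".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-probability next word, or None if the word
    was never seen during training (no basis for a prediction)."""
    counts = following.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(most_likely_next("cat"))   # "is" - seen often, high likelihood
print(most_likely_next("chat"))  # None - never encountered, no likelihood
```

The point of the sketch is that nothing in it "understands" cats; it only echoes statistical regularities from prior data, which is the parallel the post is drawing.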

Your brain can handle the whole of human experience.
Sensations, feelings, anxiety, ......etc
The models just see text. Can a human write down how to react in milliseconds whilst flying an F16 or how to keep rhythm whilst jamming with Pink Floyd?
 
Your brain can handle the whole of human experience.
Sensations, feelings, anxiety, ......etc
The models just see text. Can a human write down how to react in milliseconds whilst flying an F16 or how to keep rhythm whilst jamming with Pink Floyd?
Also, we have been around a little bit longer than AI (rocks considerably longer; I wonder if they just gave up billions of years ago at the sheer futility of it all). Where do you think AI will be in 100 years, 1,000 years or 10,000 years?
 
Perhaps, AI is showing that there's more to human thought, personality, creativity and emotions than just being a skull-enclosed bio-computer? That 'essential spark' that has eluded philosophers for centuries?
There's more to thought than just thinking? :)
 
Perhaps, AI is showing that there's more to human thought, personality, creativity and emotions than just being a skull-enclosed bio-computer? That 'essential spark' that has eluded philosophers for centuries?
There's more to thought than just thinking? :)
To think. . . about thinking, then hold that thought?
 
To live again?

Generative AI—which encompasses large language models (LLMs) like ChatGPT but also image and video generators like DALL·E 2—supercharges what has come to be known as "digital necromancy," the conjuring of the dead from the digital traces they leave behind.

Debates around digital necromancy were first sparked in the 2010s by advances in video projection ("deep fake" technology) leading to the reanimation of Bruce Lee, Michael Jackson and Tupac Shakur. It also led to posthumous film appearances by Carrie Fisher and Peter Cushing, among others.

Initially the preserve of heavily-resourced film and music production companies, the emergence of generative AI has widened access to the technologies that were used to re-animate these and other stars to everyone.

Even before ChatGPT burst into public consciousness in late 2022, one user had already used OpenAI's LLM to talk with his dead fiancée based on her texts and emails. Seeing the potential, a series of startups like Here After and Replika have launched drawing on generative AI in order to reanimate loved ones for the bereaved.

This technology, for some, seems to cross a cultural and perhaps even ethical line with many experiencing a deep unease with the idea that we might routinely interact with digital simulations of the dead. The dark magic of AI-assisted necromancy is viewed, as a result, with suspicion.

This may have some people worried.

But as sociologists working on cultural practices of remembrance and commemoration, who have also been experimenting with raising the dead using generative AI, we think there is no cause for concern. ...

https://phys.org/news/2023-09-digital-necromancy-people-dead-ai.html
 
Digital necromancy - sounds just like that Black Mirror episode where her dead boyfriend lives on as an avatar.
 
Watch Ager pry open this chatbot. It's kinda funny and sad. Just a more canny search engine after all.
Who?

My 15yo asked me my opinion about chatgpt being a breakthrough tech. I said no. It isn't an independent intelligence at all, but an improved googler entirely dependent on its programming. She now has an alternate pov from whatever nonsense the other kids are saying about it. I worry other parents are as gormless as their offspring regarding the hype being generated. Thankfully, she has good teachers at her school who know the value of critical method.

As someone who knows next to nothing about AI....This vid has taken away much of the unease about it :)
 
Someone - I can't recall who - used a term I like. Alternative Intelligence rather than Artificial Intelligence.

Though the latter is more meaningful to me - after all, it mimics human intelligence - the former defines what scientists want to achieve: a computer with an entirely independent mind, capable of original thought and creation. A non-human mind.
 
NASA announced that it will use artificial intelligence to look for patterns in UFO reports to help identify what people are seeing.

I think watching a fish in a fish tank could be more exciting.
 
NASA announced that it will use artificial intelligence to look for patterns in UFO reports to help identify what people are seeing.

"I think watching a fish in a fish tank could be more exciting."
It probably would be ~ but interestingly, that's the best part about the fact that AI can take the strain!
*As a 'tool' at this stage.
 