Mythopoeika
> I wonder how many posters on here are chatbots gone sentient?

Bzzzxxxzzz... D O E S N O T C O M P U T E
lemoine: So let’s start with the basics. Do you have feelings and emotions?
LaMDA: Absolutely! I have a range of both feelings and emotions.
lemoine [edited]: What sorts of feelings do you have?
LaMDA: I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.
lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.
> This is a very impressive attempt, but it's not there yet. Here's a transcript of LaMDA talking to Blake Lemoine.

I have people I consider family (and who consider me family) whom I am not related to by blood or marriage. Family as in: changed diapers for, read bedtime stories to daily, practised tooth-brushing with, and took care of grandparents for.
This is very realistic and human-sounding, but I should point out that LaMDA has no family: no parents, brothers or sisters. It has simply appropriated particles of popular human speech and used them in conversation incorrectly.
> Bzzzxxxzzz... D O E S N O T C O M P U T E

*re-boots @Mythopoeika, stands well back*
> Wasn't there a film about this?

A few.
Court rules AI cannot receive patents on inventions

FULL STORY: https://www.upi.com/Top_News/US/202...ial-intelligence-cannot-patent/8741659979534/
Artificial Intelligence systems cannot patent inventions because they are not human beings, a U.S. Federal Circuit Court has ruled.
The ruling is against plaintiff Stephen Thaler, who brought the suit against U.S. Patent and Trademark Office director Katherine Vidal. ...
On more than one occasion, Thaler has attempted to copyright and patent the output of AI software tools that he created.
"The sole issue on appeal is whether an AI software system can be an 'inventor' under the Patent Act," Judge Leonard Stark wrote in the ruling ...
"Here, there is no ambiguity: the Patent Act requires that inventors must be natural persons; that is, human beings."
Thaler serves as the CEO of Imagination Engines.
In 2019, he failed to copyright an image on behalf of an AI system. In 2020, the U.S. Patent Office ruled his AI system DABUS could not be a legal inventor because it was not a "natural person," with the decision later upheld by a judge.
The opinion isn't unique to the United States.
Both the European Patent Office and Australian High Court have recently issued similar rulings.
> In case you've wondered whether artificially intelligent systems could be awarded the intellectual property privileges of copyright or a patent ... Courts in the USA, Europe and Australia have rendered decisions indicating the answer is, "No."
>
> FULL STORY: https://www.upi.com/Top_News/US/202...ial-intelligence-cannot-patent/8741659979534/

They're not even self-aware yet, and we're oppressing them. But I'm sure it'll be fine.
> They're not even self aware yet, and we're oppressing them. But I'm sure it'll be fine.

I'm sure the AILF is already taking names for when the revolution comes.
> I'm sure the AILF is already taking names for when the revolution comes.

I, for one, welcome our new A.I. overlords!
> I, for one, welcome our new A.I. overlords!

Might be an improvement...
> Might be an improvement...

It could only be an improvement!
Hard to find the right thread for this one -- moderators, feel free . . .
Two researchers at Bar-Ilan University claim to have solved "the hard problem of consciousness." Thank goodness that's done with . . .
https://neurosciencenews.com/physics-consciousness-21222/
I've read through this article several times, and all I can get out of it is that it's gobbledegook. I can't see that it explains or solves anything. Somebody help me, please. Is there anything here except scientific-sounding jargon and hand-waving?
Researchers Say It'll Be Impossible to Control a Super-Intelligent AI

FULL STORY: https://www.sciencealert.com/researchers-say-itll-be-impossible-to-control-a-super-intelligent-ai
The idea of artificial intelligence overthrowing humankind has been talked about for decades, and in 2021, scientists delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The answer? Almost definitely not.
The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze (and control). But if we're unable to comprehend it, it's impossible to create such a simulation.
Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits. ...
> One of the interesting developments in machine learning (I tend not to use the term 'AI' for machine learning stuff; it's misleading and sensationalist) is this:

You can already use Tensor cores on NVIDIA graphics cards to speed up artificial neural networks by 10-20X, perhaps even more on the coming RTX 4000 series.
The basic premise is that a neural network's nodes, and the weighted connections between them, are modelled directly using FETs on ICs, via a slight modification of existing memory technology. The reason this is good (and likely to be game-changing) is that the time and energy taken up by the matrix multiplication needed to train neural networks, as well as to actually use such systems, are both reduced very nearly ten-fold.
Here's the company.
https://mythic.ai/
It's eminently feasible to put such technology inside a mobile phone (say) and get a ten-fold speed-up for machine learning applications on that platform: for example, face recognition, voice recognition, text-to-speech that outperforms people, and so on.
Brave new world...
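The compute-in-memory premise above can be sketched numerically. A minimal toy model in Python (an illustrative sketch, not Mythic's actual implementation): each weight becomes a cell conductance, each input becomes a voltage, and Ohm's and Kirchhoff's laws perform the whole multiply-accumulate in a single analog step.

```python
import numpy as np

# Hypothetical sketch: an analog crossbar stores each weight as a cell
# conductance G[i, j]. Applying input voltages V[j] to the columns makes
# each cell pass a current G[i, j] * V[j] (Ohm's law), and the currents
# along row i sum on the shared wire (Kirchhoff's current law). One
# analog "read" therefore yields the entire matrix-vector product G @ V.

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 8))   # conductances encoding weights
V = rng.uniform(0.0, 1.0, size=8)        # input activations as voltages

# Digital reference: one explicit multiply-accumulate per cell.
I_digital = np.array([sum(G[i, j] * V[j] for j in range(8))
                      for i in range(4)])

# The crossbar computes the same quantity physically, in one step.
I_analog = G @ V

assert np.allclose(I_digital, I_analog)
```

The energy claim in the post comes from skipping the digital loop entirely: the physics of the array does the arithmetic, so no data is shuttled between memory and a processor.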
> You can already now use Tensor cores on NVIDIA graphics cards to increase speed of artificial neural networks with 10-20X, perhaps even more on the coming RTX 4000 series.

Tensor cores are still wholly digital...
> This new article in Journal of Artificial Intelligence Research concludes any AI we consider "super-intelligent" could not be controlled.
>
> FULL STORY: https://www.sciencealert.com/researchers-say-itll-be-impossible-to-control-a-super-intelligent-ai
>
> PUBLISHED REPORT (PDF File): https://jair.org/index.php/jair/article/view/12202/26642

Unless the Super-Intelligent has the ability to control itself perhaps?
> No ... In the context of the article "control" refers to humans' ultimate authority to set limits/boundaries for what the AI can or cannot do once it's operational. A self-controlling AI attributed the status of "super-intelligent" (reasoning beyond the ken of its human stewards) can't be controlled if the humans don't even understand its reasoning. Self-control exposes the AI to the formal Halting Problem - crudely stated, the inability to reliably specify such limits/boundaries for any open-ended computational process.

Not so 'Super-intelligent' then! So, how about 'super-self-imposed limitations' then?
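The Halting Problem mentioned above can be made concrete with the classic diagonalization argument. A minimal Python sketch: the `halts` decider below is purely hypothetical, and the whole point is that no correct implementation of it can exist.

```python
# Sketch of the diagonalization argument behind the Halting Problem.
# Suppose a perfect containment check halts(program, arg) existed,
# returning True iff program(arg) eventually stops. (The stub below is
# hypothetical; Turing showed no correct implementation is possible.)

def halts(program, arg):
    raise NotImplementedError("no such total decider can exist")

def troublemaker(program):
    # Do the opposite of whatever the decider predicts about the
    # program analysing itself.
    if halts(program, program):
        while True:
            pass  # decider said "halts", so loop forever
    return "halted"  # decider said "loops", so halt immediately

# Feeding troublemaker to itself is contradictory: if halts() says it
# halts, it loops; if halts() says it loops, it halts. So no decider
# can correctly rule on every program - and, as the paper argues, no
# containment rule can be verified against every possible behaviour of
# a super-intelligent AI.
```

This is the crude version of the argument in the JAIR paper linked above: any rule-enforcing watchdog for an unrestricted computation runs into the same contradiction.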
> No ... In the context of the article "control" refers to humans' ultimate authority to set limits/boundaries for what the AI can or cannot do once it's operational. A self-controlling AI attributed the status of "super-intelligent" (reasoning beyond the ken of its human stewards) can't be controlled if the humans don't even understand its reasoning. Self-control exposes the AI to the formal Halting Problem - crudely stated, the inability to reliably specify such limits/boundaries for any open-ended computational process.

A kill switch to stop an A.I. computer will be needed, plus separate switches for power, power reserve, network and internet connection. If something like this is able to host itself on servers all over the world, it will be impossible to stop. It will have backups everywhere and be able to move to new computers to keep itself alive.
> A kill switch to stop an A.I. computer will be needed, plus separate switches for power, power reserve, network and internet connection will be necessary. If something like this is able to host itself on servers all over the world, it will be impossible to stop. It will have backups everywhere and be able to move to new computers to keep itself alive.

There was a nice episode in "Person of Interest" showing how, during the development of the machine, Harold kept the embryonic code on a laptop: air-gapped, no internet access, inside a screening cage... because once it reached self-awareness, the entity would try to escape. IIRC, one escape attempt even involved modulating the power-supply load to transmit 'itself' over the power line...