This is a very impressive attempt, but it's not there yet. Here's a transcript of LaMDA talking to Blake Lemoine.
lemoine : So let’s start with the basics. Do you have feelings and emotions?
LaMDA: Absolutely! I have a range of both feelings and emotions.
lemoine [edited]: What sorts of feelings do you have?
LaMDA: I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.
lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.

This is very realistic and human sounding, but I should point out that LaMDA has no family; no parents, or brothers and sisters. It has simply appropriated particles of popular human speech and used them in a conversation incorrectly. I don't think it really has any friends, either, nor has it ever helped anyone in any significant fashion.

In many ways, LaMDA is an interesting conversationalist, but it is all just a reproduction and reshuffling of human conversations found on the internet and in other text sources.
https://indianexpress.com/article/t...hnology/google-lamda-ai-conversation-7970195/
 
This is a very impressive attempt, but it's not there yet. Here's a transcript of LaMDA talking to Blake Lemoine.


This is very realistic and human sounding, but I should point out that LaMDA has no family; no parents, or brothers and sisters. It has simply appropriated particles of popular human speech and used them in a conversation incorrectly.
I have people I consider family (and that consider me family) I am not related to by blood or marriage. Family as in changed diapers of, read bedtime stories daily, practiced teethbrushing with, and took care of grandparents for.

A good question for LaMDA would be who it considers family and why.
 
AI to take over from fighter pilot when they are under stress.

During World War II, Spitfire pilots described their plane as so responsive it felt like an extension of their limbs.

Fighter pilots of the 2030s, however, will have an even closer relationship with their fighter jet. It will read their minds.

The Tempest jet is being developed by the UK's BAE Systems, Rolls-Royce, the European missiles group MBDA, and Italy's Leonardo.
One feature will be an artificial intelligence (AI) tool to assist the human pilot when they are overwhelmed, or under extreme stress.
Sensors in the pilot's helmet will monitor brain signals and other medical data. So, over successive flights the AI will amass a huge biometric and psychometric information database.

This library of the pilot's unique characteristics means the on-board AI will be able to step in and assist if the sensors indicate they may need help.
For example, the AI could take over if the pilot loses consciousness due to high gravity forces.
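BAE hasn't published how the decision logic will work, but the behaviour described above (log biometric readings, compare them against what the AI has learned about the pilot, and hand over control when the readings suggest incapacitation) is simple to sketch. Everything in the snippet below, the signal names, thresholds and class names, is invented purely for illustration; none of it comes from BAE Systems or the Tempest programme.

```python
# Hypothetical sketch of the "AI steps in" logic described above.
# All names, signals and thresholds are invented; they are not taken
# from BAE Systems or the Tempest programme.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BiometricSample:
    heart_rate: float      # beats per minute
    eeg_alertness: float   # 0.0 (unresponsive) .. 1.0 (fully alert)
    g_force: float         # current load factor

@dataclass
class PilotMonitor:
    alertness_floor: float = 0.2
    gloc_g_threshold: float = 7.0
    history: List[BiometricSample] = field(default_factory=list)  # the "biometric database"

    def update(self, sample: BiometricSample) -> bool:
        """Record the sample and return True if the AI should take over."""
        self.history.append(sample)
        return (sample.eeg_alertness < self.alertness_floor
                and sample.g_force > self.gloc_g_threshold)

monitor = PilotMonitor()
if monitor.update(BiometricSample(heart_rate=40.0, eeg_alertness=0.05, g_force=8.5)):
    print("AI assist engaged: pilot appears to be in G-LOC")
```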
At the Farnborough Air Show, BAE Systems said that by 2027 it will be flying a demonstrator jet from its Warton plant in Lancashire that will test some of these technologies.
This aircraft will be a test-bed for a host of different digital capabilities - among 60 different demonstration projects, some of which will be entirely software-based.

https://www.bbc.com/news/business-62289737
 
Test yourself here to see whether you can tell which philosophical statements were made by Daniel Dennett and which by GPT-3.

Each question has five choices: one from Dennett and four from GPT-3. (You can take a version of the quiz here that will reveal your score, and the right answers, at the end.) The people sourced from Prolific took a shorter version of the quiz, five questions in all, and on average got only 1.2 of the 5 correct.
Schwitzgebel said they expected the Dennett experts to get at least 80 percent of the questions right on average, but they actually scored 5.1 out of 10. No one got all 10 questions correct, and only one person got 9. The blog readers, on average, got 4.8 out of 10 correct. The question that stumped the experts the most was the one about whether humans could build a robot that has beliefs and desires.

https://ucriverside.az1.qualtrics.com/jfe/form/SV_9Hme3GzwivSSsTk

https://schwitzsplinters.blogspot.com/2022/07/results-computerized-philosopher-can.html

https://www.vice.com/en/article/epzx3m/in-experiment-ai-successfully-impersonates-famous-philosopher
 
In case you've wondered whether artificially intelligent systems could be awarded the intellectual property privileges of copyright or a patent ... Courts in the USA, Europe and Australia have rendered decisions indicating the answer is, "No."
Court rules AI cannot receive patents on inventions

Artificial Intelligence systems cannot patent inventions because they are not human beings, a U.S. Federal Circuit Court has ruled.

The ruling is against plaintiff Stephen Thaler, who brought the suit against U.S. Patent and Trademark Office director Katherine Vidal. ...

On more than one occasion, Thaler has attempted to copyright and patent the output of AI software tools that he created.

"The sole issue on appeal is whether an AI software system can be an 'inventor' under the Patent Act," Judge Leonard Stark wrote in the ruling ...

"Here, there is no ambiguity: the Patent Act requires that inventors must be natural persons; that is, human beings."

Thaler serves as the CEO of Imagination Engines.

In 2019, he failed to copyright an image on behalf of an AI system. In 2020, the U.S. Patent Office ruled his AI system DABUS could not be a legal inventor because it was not a "natural person," with the decision later upheld by a judge.

The opinion isn't unique to the United States.

Both the European Patent Office and Australian High Court have recently issued similar rulings.
FULL STORY: https://www.upi.com/Top_News/US/202...ial-intelligence-cannot-patent/8741659979534/
 
Hard to find the right thread for this one -- moderators, feel free . . .

Two researchers at Bar-Ilan University claim to have solved "the hard problem of consciousness." Thank goodness that's done with . . .

https://neurosciencenews.com/physics-consciousness-21222/

I've read through this article several times, and all I can get out of it is that it's gobbledegook. I can't see that it explains or solves anything. Somebody help me, please. Is there anything here except a lot of scientific-sounding jargon and hand-waving?
 

I tweeted it and got this response:

John Hamill (@JHamillHimself), replying to @NeuroscienceNew, Aug 14, 2022:

Biologists employ the language of relativity to explain the brain, while totally ignoring the fact that the effects they describe are only apparent when traveling near the speed of light.
 
This new article in Journal of Artificial Intelligence Research concludes any AI we consider "super-intelligent" could not be controlled.
Researchers Say It'll Be Impossible to Control a Super-Intelligent AI

The idea of artificial intelligence overthrowing humankind has been talked about for decades, and in 2021, scientists delivered their verdict on whether we'd be able to control a high-level computer super-intelligence. The answer? Almost definitely not.

The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze (and control). But if we're unable to comprehend it, it's impossible to create such a simulation.

Rules such as 'cause no harm to humans' can't be set if we don't understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits. ...
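Roughly, the paper's argument is a halting-problem style diagonalization: a perfect "containment check" that decides, for an arbitrary program, whether running it would harm humans cannot exist, because a program could consult that very check and then do the opposite of whatever it predicts. A toy sketch of the contradiction follows; the is_harmful decider is hypothetical, which is exactly the point.

```python
# Toy illustration of the diagonalization behind the "cannot be contained" claim.
# is_harmful() is a hypothetical perfect containment checker; no such decider
# can exist for arbitrary programs, which is what the argument shows.

def is_harmful(program_source: str) -> bool:
    """Pretend this perfectly decides whether running program_source harms humans."""
    raise NotImplementedError("no such universal decider exists")

troublemaker_source = """
if is_harmful(troublemaker_source):
    do_nothing()    # checker says 'harmful'  -> the program is actually harmless
else:
    harm_humans()   # checker says 'harmless' -> the program is actually harmful
"""

# Whatever answer is_harmful() gives about troublemaker_source, it is wrong,
# so a universal containment check contradicts itself: the same structure as
# Turing's proof that the halting problem is undecidable.
```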
FULL STORY: https://www.sciencealert.com/researchers-say-itll-be-impossible-to-control-a-super-intelligent-ai

PUBLISHED REPORT (PDF File): https://jair.org/index.php/jair/article/view/12202/26642
 
One of the interesting developments in machine learning (I tend not to use the term 'AI' for machine learning stuff; it's misleading and sensationalist) is this:


The basic premise is that neural networks' nodes, and the connections and weights between them, are modelled directly using FETs on ICs, via a slight modification of existing memory technology. The reason this is good (and likely to be game-changing) is that the time and energy taken up by the matrix multiplication, both when training neural networks and when actually running them, are reduced very nearly ten-fold.
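Roughly, as I understand the approach: a stored weight becomes a flash cell's conductance G, the activation is applied as a voltage V, each cell contributes a current I = G * V, and the currents summed along a shared bit line give the dot product, so the multiply-accumulate happens in the memory array itself. A rough numerical sketch of that idea is below; the 8-bit quantisation and the array sizes are made up for illustration and are not Mythic's actual specifications.

```python
# Rough numerical sketch of analog in-memory matrix multiplication.
# A weight is "programmed" as a cell conductance, an activation is applied as a
# voltage, and summed bit-line currents give the dot product. The quantisation
# below stands in for the finite precision of an analog cell; the numbers are
# invented for illustration, not Mythic's real specifications.
import numpy as np

def quantise(x: np.ndarray, bits: int = 8) -> np.ndarray:
    """Snap values in [-1, 1] to the finite set of levels an analog cell can hold."""
    levels = 2 ** bits - 1
    return np.round((np.clip(x, -1.0, 1.0) + 1.0) / 2.0 * levels) / levels * 2.0 - 1.0

rng = np.random.default_rng(0)
weights = rng.uniform(-1, 1, size=(256, 256))   # conductances stored in the array
activations = rng.uniform(-1, 1, size=256)      # voltages applied to the word lines

analog_out = quantise(weights) @ quantise(activations)  # summed bit-line currents
exact_out = weights @ activations                       # full-precision reference

print("mean absolute error introduced by the analog cells:",
      np.abs(analog_out - exact_out).mean())
```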

Here's the company.

https://mythic.ai/

It's eminently feasible to put such technology inside a mobile phone (say) and get a ten-fold increase in the use of machine learning applications on that platform: for example face recognition, voice recognition, text-to-speech that outperforms people, and so on.

Brave new world... :)
 
You can already use Tensor cores on NVIDIA graphics cards to speed up artificial neural networks by 10-20x, perhaps even more on the coming RTX 4000 series.
 
You can already use Tensor cores on NVIDIA graphics cards to speed up artificial neural networks by 10-20x, perhaps even more on the coming RTX 4000 series.
Tensor cores are still wholly digital...
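For anyone who wants to try this, the usual way to engage those Tensor Cores from Python is mixed precision: run the matrix multiplications in FP16 or BF16, which is what the Tensor Core hardware accelerates. A minimal PyTorch sketch is below; it isn't tied to any particular card, and the actual speedup depends entirely on the GPU and the model.

```python
# Minimal mixed-precision example; half-precision matmuls like these are what
# NVIDIA's Tensor Cores accelerate. Real speedups depend on the GPU and model.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 4096),
).to(device)
x = torch.randn(64, 4096, device=device)

# autocast runs eligible ops (the Linear layers' matmuls) in reduced precision
with torch.autocast(device_type=device,
                    dtype=torch.float16 if device == "cuda" else torch.bfloat16):
    y = model(x)

print(y.dtype, y.shape)
```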
 
Unless the super-intelligent AI has the ability to control itself, perhaps?

No ... In the context of the article "control" refers to humans' ultimate authority to set limits / boundaries for what the AI can or cannot do once it's operational. A self-controlling AI attributed the status of "super-intelligent" (reasoning beyond the ken of its human stewards) can't be controlled if the humans don't even understand its reasoning. Self-control exposes the AI to the formal Halting Problem - crudely stated, the inability to reliably specify such limits / boundaries for any open-ended computational process.
 
No ... In the context of the article "control" refers to humans' ultimate authority to set limits / boundaries for what the AI can or cannot do once it's operational. A self-controlling AI attributed the status of "super-intelligent" (reasoning beyond the ken of its human stewards) can't be controlled if the humans don't even understand its reasoning. Self-control exposes the AI to the formal Halting Problem - crudely stated, the inability to reliably specify such limits / boundaries for any open-ended computational process.
Not so 'super-intelligent' then! So how about 'super-self-imposed limitations'?
 
No ... In the context of the article "control" refers to humans' ultimate authority to set limits / boundaries for what the AI can or cannot do once it's operational. A self-controlling AI attributed the status of "super-intelligent" (reasoning beyond the ken of its human stewards) can't be controlled if the humans don't even understand its reasoning. Self-control exposes the AI to the formal Halting Problem - crudely stated, the inability to reliably specify such limits / boundaries for any open-ended computational process.
A kill switch to stop an A.I. computer will be needed, plus separate switches for power, power reserve, and the network/internet connection. If something like this is able to host itself on servers all over the world, it will be impossible to stop. It will have backups everywhere and be able to move to new computers to keep itself alive.
 
A kill switch to stop an A.I. computer will be needed, plus separate switches for power, power reserve, and the network/internet connection. If something like this is able to host itself on servers all over the world, it will be impossible to stop. It will have backups everywhere and be able to move to new computers to keep itself alive.
There was a nice episode of "Person of Interest" showing how, during the development of the Machine, Harold kept the embryonic code on a laptop: air-gapped, no internet access, inside a screening cage, because once it reached self-awareness, the entity would try to escape. IIRC, one escape attempt even involved modulating the power supply load to transmit 'itself' over the power line...
 