The answer to that depends on whether 'the singularity' is real or not. If not, things will go on as normal, albeit with some changes to society driven by AI. If so, everything goes bananas in a millisecond and humans are toast.

MAYBE? - You would be dealing with an unknown quantity - For all intents and purposes an alien entity.

The machine mind would be reacting to its database, programming, etc.

If the machine was basically programmed to win at games - It might decide to take over everything
- Man might become a convenient tool for constructing and servicing more machines
- Does that sound familiar? - Maybe it has already taken over?

If the machine becomes aware of concepts like 'god' - It may decide its purpose is to be god
- an immortal supermind that is always right?

An interesting take on this is in one of the original episodes of Star Trek. Captain Kirk and crew come across
an out-of-control AI machine that is destroying whole planetary systems - When the machine is somehow
brought aboard the ship, they realize it was programmed to correct imperfections, and that somehow the
original programming had become corrupted - so the machine, seeing the imperfections of different
civilizations, destroyed them to correct the imperfections - Kirk reasons with the machine and convinces it that
it is also imperfect - The machine then destroys itself! - One of my favorite episodes.
 
MAYBE? - You would be dealing with an unknown quantity - For all intents and purposes an alien entity. ...
The alien comparison is apt. Here's Stephen Hawking on the subject:

If a superior alien civilisation sent us a message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here – we'll leave the lights on"? Probably not – but this is more or less what is happening with AI.
 
This doesn't really fit here, but I can't think where else to ask.

Is there a thread related to members' own computer-related problems?

INT21.
 
Can a machine ever have empathy ?

Or will it, for example, see an injured person, assess that the cost in resources is greater than any benefit the person will ever contribute to a healthy society, and simply kill them? Most efficient conclusion.

If this is so then any long term unemployed need to watch their backs.

INT21.
 
Can a machine ever have empathy ? ...

Short, condensed, and highly simplified answer: No - at least not within the context of any reasonable extrapolation from current understandings, conceptualizations, models, implementations, and / or prospects.

On the other hand ... Behavior suggestive of 'empathy' (a very fuzzy concept in and of itself) can be simulated by imposition of built-in rules and / or inferential requirements. Asimov's First Law of Robotics is the best known example of such a tactic.
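Purely to illustrate that tactic, here is a toy sketch of my own (not any real system - the situation data, names, and rule are all made up): 'empathy-like' behavior falling out of a single hard-coded rule, in the spirit of Asimov's First Law.

```python
# Toy illustration: a built-in rule vetoes any action flagged as harming
# a human, before an otherwise cost-driven choice is made. The 'empathy'
# is entirely an artifact of the rule the programmer baked in.

def choose_action(situation):
    """Pick the cheapest action that does not harm a human."""
    candidates = situation["options"]
    # The built-in 'First Law' rule: filter out harmful actions outright.
    safe = [a for a in candidates if not a.get("harms_human", False)]
    if not safe:
        return {"name": "do_nothing"}  # refuse rather than cause harm
    # Among the remaining options, behave like a plain cost optimizer.
    return min(safe, key=lambda a: a["cost"])

situation = {
    "options": [
        # Ignoring the injured person is 'cheapest' but flagged harmful.
        {"name": "ignore_injured_person", "cost": 0, "harms_human": True},
        {"name": "call_for_help", "cost": 5},
    ]
}
print(choose_action(situation)["name"])  # prints: call_for_help
```

Note that the machine here 'feels' nothing; remove the one filter line and the cost optimizer happily ignores the injured person.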
 
..can be simulated by imposition of built-in rules and / or inferential requirements ..

Doesn't that take you away from the aim of AI and back to simply following pre-programming ?

If every AI is dependent upon our rules, what is the point in them ? Why not just stay as we are ?

INT21
 
..can be simulated by imposition of built-in rules and / or inferential requirements ..
Doesn't that take you away from the aim of AI and back to simply following pre-programming ?
If every AI is dependent upon our rules, what is the point in them ? Why not just stay as we are ?

Autonomous 'intelligence' wasn't the aim of AI. It's the purported aim of AGI (Artificial General Intelligence), and it's the presumed aim attributed by laypeople and sci-fi script writers - few of whom, frankly, have any grasp of what they're talking about.

At some fundamental level, all AIs are governed by one unavoidable condition: They cannot do what they aren't already configured to do.

In classic / symbolic / representational AI, this limit is imposed by explicit rules or inferential operations (broadly defined) that are either baked into the underlying software or explicitly 'programmed' in code. It is possible to allow for ongoing automatic modifications to the 'program' code or referential assets so as to permit some limited learning or adaptation, but in the symbolic approach it's always a mind-bending and highly risky thing to try.

In neural-based AI, this limit relates to the functional capabilities of the neural (or pseudo-neural) core as configured in light of the system's training and ongoing experience. The initial state is 'programmed' by training interactions that set the stage for subsequent modifications based on whatever cases are encountered. This approach is ultimately every bit as deterministic as the old school symbolic approach, though it appears to be more flexible. This flexibility is obtained at the expense of (a) having to sit back and hope the neural / pseudo-neural core is 'right' and (b) having little or no ability to directly inspect or proactively correct it without starting all over.

The point and the payoff (to date) is the ability to craft automated systems whose behaviors or outputs seem 'intelligent' in the limited sense of responding to more complex situational specifications in a potentially more complex set of ways.

As I used to repeatedly explain it to grad students and other researchers - "All you're really doing is getting more and more sophisticated about defining and exploiting the IF, the THEN, and the connection between them."

That's it; that's all ...
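To make that concrete, here is a toy sketch of my own (not anything from the post above - the data points and update rule are invented for illustration): the same IF/THEN decision written as an explicit symbolic rule, and 'learned' by a single threshold unit. Once trained, the learned version is every bit as deterministic as the hand-written one.

```python
# Symbolic version: the IF/THEN rule is written out explicitly.
def symbolic_classifier(x):
    return 1 if x > 0.5 else 0

# 'Neural' version: a single threshold unit whose weight and bias are
# fitted from labelled examples rather than written down by a programmer.
def train_threshold_unit(examples, lr=0.1, epochs=200):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if w * x + b > 0 else 0
            # Classic perceptron update: nudge weights on each mistake.
            w += lr * (y - pred) * x
            b += lr * (y - pred)
    return w, b

examples = [(0.1, 0), (0.3, 0), (0.6, 1), (0.9, 1)]
w, b = train_threshold_unit(examples)

def learned_classifier(x):
    return 1 if w * x + b > 0 else 0

# Both end up encoding the same IF ... THEN ... decision boundary.
print([symbolic_classifier(x) for x in (0.2, 0.8)])  # [0, 1]
print([learned_classifier(x) for x in (0.2, 0.8)])   # [0, 1]
```

The only difference is who defined the IF: a programmer in the first case, a training set in the second.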
 
Maybe it's time everyone review the Google Privacy Policy.

This is not trivial. Here is the policy that takes effect on 22 Jan 2019 (in eleven days).

https://policies.google.com/privacy?hl=en#intro

..As I used to repeatedly explain it to grad students and other researchers - "All you're really doing is getting more and more sophisticated about defining and exploiting the IF, the THEN, and the connection between them."..

I am aware of 'if / then'; it is the basis of all conditional jumps and is what separates a computer from a calculator.

And I do bow to your greater knowledge of the subject. However, there somehow seems to be much more to AI than you are suggesting.

The keyword being 'intelligence'.

This AI, even as it stands now, is busy on Google's vast collection of your (our) data.

Please, everyone, do read the link I posted above. And think about it.

It gives Google everything you do when connected to the web. And even uses your data when you are not connected.

Something we all seem to forget.

INT21.
 
... The keyword being 'intelligence'.
This AI, even as it stands now, is busy on Google's vast collection of your (our) data. ...

In this context, the most relevant 'intelligence' involved is in the sense of "information gathering and analysis, such as is done by intelligence agencies."

There's nothing especially AI-specific about data gathering and dossier building. It can be - and long has been - accomplished by more conventional data processing capabilities.

The more apt application of AI techniques would be related to leveraging the data / dossiers thus compiled.
 
Some specialists in the field are agreeing with the potential for a worst-case scenario:

Should Artificial Intelligence Be Regulated?


"New technologies often spur public anxiety, but the intensity of concern about the implications of advances in artificial intelligence (AI) is particularly noteworthy. Several respected scholars and technology leaders warn that AI is on the path to turning robots into a master class that will subjugate humanity, if not destroy it. Others fear that AI is enabling governments to mass produce autonomous weapons—“killing machines”—that will choose their own targets, including innocent civilians. Renowned economists point out that AI, unlike previous technologies, is destroying many more jobs than it creates, leading to major economic disruptions ...

AI is believed by some to be on its way to producing intelligent machines that will be far more capable than human beings. After reaching this point of “technological singularity,” computers will continue to advance and give birth to rapid technological progress that will result in dramatic and unpredictable changes for humanity. Some observers predict that the singularity could occur as soon as 2030.

One might dismiss these ideas as the provenance of science fiction, were it not for the fact that these concerns are shared by several highly respected scholars and tech leaders. An Oxford University team warned: “Such extreme intelligences could not easily be controlled (either by the groups creating them, or by some international regulatory regime)…the intelligence will be driven to construct a world without humans or without meaningful features of human existence. This makes extremely intelligent AIs a unique risk, in that extinction is more likely than lesser impacts.” Elon Musk, the founder of Tesla, tweeted that: “We need to be super careful with AI. Potentially more dangerous than nukes.” He added: “I’m increasingly inclined to think there should be some regulatory oversight [of AI], maybe at the national and international level.” Oxford philosopher Nick Bostrom believes that just as humans out-competed and almost completely eliminated gorillas, AI will outpace human development and ultimately dominate.

As we see it, the fact that AI makes machines much smarter and more capable does not make them fully autonomous. We are accustomed to thinking that if a person is granted more autonomy—inmates released from jails, teenagers left unsupervised—they may do wrong because they will follow their previously restrained desires. In contrast, machines equipped with AI, however smart they may become, have no goals or motivations of their own. It is hard to see, for instance, why driverless cars would unite to march on Washington. And even if an AI program came up with the most persuasive political slogan ever created, why would this program nominate an AI-equipped computer as the nominee for the next president? Science fiction writers might come up with ways intelligence can be turned into motivation, but for now, such notions probably should stay where they belong: in the movies. ...

See whole article here:
https://issues.org/perspective-should-artificial-intelligence-be-regulated/


Now to be fair I did not read any further - this is enough for me to comment on.

Autonomous, a machine with an 'I', an 'ego', a sense of self, an identity with goals and aspirations as if
it were biological life? - never say never!

If we've come this far in raw computing power - You don't have to be a science fiction writer
to see how one day someone, some group of computer geeks, will figure out how to instill a sense of self
into a machine - Then watch out; once the cat is out of the bag there is no telling what may happen!!!

The old saying 'when you play with fire you may get burned' should be taken very seriously when playing
with tomorrow's AI - Unless it's already too late and all those millions of machines currently joined
together by the internet already have control???


“The development of full artificial intelligence could spell the end of the human race….It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.”— Stephen Hawking told the BBC

“I don’t want to really scare you, but it was alarming how many people I talked to who are highly placed people in AI who have retreats that are sort of 'bug out' houses, to which they could flee if it all hits the fan.”—James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, told the Washington Post

“By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
—Eliezer Yudkowsky


“Any AI smart enough to pass a Turing test is smart enough to know to fail it.”

― Ian McDonald, River of Gods
 
The Turing test doesn't work. Not his fault; he never dreamed of 3TB hard drives and 3D graphics.

If the human brain were so simple
That we could understand it,
We would be so simple
That we couldn’t.

- Emerson M. Pugh (1938)
 
If the human brain were so simple
That we could understand it,
We would be so simple
That we couldn’t.
- Emerson M. Pugh (1938)

On the other hand, generations of humans working on the problem, designing new tools for the express purpose of gathering new insights, and recording their findings will eventually figure it out.
 
On the other hand, generations of humans working on the problem, designing new tools for the express purpose of gathering new insights, and recording their findings will eventually figure it out.
Ah, an optimist. :) The way the world is going, we are more likely to end up back living in trees. If there are any trees left, of course.
 
... Several respected scholars and technology leaders warn that AI is on the path to turning robots into a master class that will subjugate humanity, if not destroy it. Others fear that AI is enabling governments to mass produce autonomous weapons—“killing machines”—that will choose their own targets, including innocent civilians.

And this plays into the eternal arms race theme.

'The other side is making this super-duper device that will wipe us out. We had better get our own device built first so we can wipe them out first; if need be'

And the other side then ups the ante.

Neither prepared to notice that there will be nothing left for any winner.

'But our AI will give us the edge'

If a man in a helicopter can't tell the difference between a man on the ground carrying a camera and one carrying a rocket launcher, why should AI be any cleverer ?

Maybe AI will decide to stop the problems of resource wasting wars by annihilating both sides and running the show themselves.

INT21
 
If a man in a helicopter can't tell the difference between a man on the ground carrying a camera and one carrying a rocket launcher, why should AI be any cleverer ? Maybe AI will decide to stop the problems of resource wasting wars by annihilating both sides and running the show themselves. INT21

Military escalation is probably the driving force behind AI development, specifically for the purpose of designing better military aircraft. Pilots are a major design problem for aircraft, as they provide a hard limit on what many jets can achieve. If the pilot is removed, the aircraft becomes far more versatile. On the other hand, the problem is that the power using the pilotless aircraft has to place a lot of trust in an algorithm not running amok and randomly killing people due to unforeseen variables. I personally cannot see how this can be of value in a situation where asymmetric warfare with its lack of identifying uniforms means that the algorithm cannot readily identify friend from foe. Is facial recognition enough when it can be fooled by clown make-up?

Clearly we as a species will need to regulate what may be done with AIs or we are quickly going to be in deep trouble.
 
Why worry? ... Here are some reasons, already happening:


The Real Reason to be Afraid of Artificial Intelligence | Peter Haas | TEDxDirig

The Terminator, SkyNet and Alexa: The Present and Future of A.I. | Marc Talluto | TEDxIWU




Is your cell phone or computer on? - 'we' know if it is not - Why be difficult? Keep it on!
You have already been assimilated - resistance is futile !!! :cool:
 
Clearly we as a species will need to regulate what may be done with AIs or we are quickly going to be in deep trouble.

Just like with any new development (e.g. autonomous vehicles, drones) I fear there is likely going to have to be some sort of incident before the government takes regulation into consideration.
 
https://www.theverge.com/tldr/2019/...ple-portraits-thispersondoesnotexist-stylegan
The Verge said:
ThisPersonDoesNotExist.com uses AI to generate endless fake faces
www.thispersondoesnotexist.com

The Verge said:
Hit refresh to lock eyes with another imaginary stranger


This simply admits to the mainstream existence of a technology that most of us will probably know has been unofficially available, in some shape / version or other, for quite some time.

The Verge said:
The site is the creation of Philip Wang, a software engineer at Uber, and uses research by chip designer Nvidia to create an endless stream of fake portraits. The algorithm behind it is trained on a huge dataset of real images, then uses a type of neural network known as a GAN to fabricate new examples.
Generative Adversarial Network
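For what it's worth, the adversarial idea can be caricatured in a few lines. This is my own toy sketch, nothing like the face model: a real GAN pits two neural networks against each other with gradient descent, whereas here a one-parameter 'generator' simply learns to place its fakes where a crude one-number 'discriminator' can no longer tell them from the real data.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the 'real data' distribution the generator must imitate

def real_sample():
    return random.gauss(REAL_MEAN, 0.5)

theta = 0.0               # generator parameter (starts nowhere near the data)
real_avg = fake_avg = 0.0

for step in range(5000):
    real = real_sample()
    fake = theta + random.gauss(0.0, 0.5)  # generator output
    # 'Discriminator': track where real and fake samples tend to sit,
    # and call anything below the midpoint a fake.
    real_avg += 0.05 * (real - real_avg)
    fake_avg += 0.05 * (fake - fake_avg)
    boundary = (real_avg + fake_avg) / 2.0
    # 'Generator training': nudge theta toward whichever side the
    # discriminator currently labels as real.
    theta += 0.05 if fake < boundary else -0.05

print(round(theta, 1))  # ends up close to REAL_MEAN
```

The equilibrium is the GAN equilibrium in miniature: the generator settles where the discriminator is reduced to guessing, at which point fakes and real samples are statistically indistinguishable to it.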

HYPOTHESIS
My gut feeling is that AI-generated false faces have been used in mainstream media advertising, where so decided, since around 2010. Feel free to disagree. I say this from a perspective of some confidence in my opinion.
 
HYPOTHESIS
My gut feeling is that AI-generated false faces have been used in mainstream media advertising, where so decided, since around 2010. Feel free to disagree. I say this from a perspective of some confidence in my opinion

Seems to me, and this was not too long ago, a popular singer [was it Bette Midler?] sued someone for using
a voice that sounded like hers, and won the case.

Now if you were to use a 'false face' for any reason, particularly where money is involved, I'm sure there are
many lawyers ready to sue - right?

If the false face were used as a joke of some sort - well they could still sue but probably would not unless it
was particularly malicious.

True???
 
Now if you were to use a 'false face' for any reason, particularly where money is involved, I'm sure there are many lawyers ready to sue - right?

But this is inversely no longer true when the face has no human owner: click on that link thispersondoesnotexist.com/ and every earnest, work-worn, intelligent, vaguely-familiar face you will see, each time you refresh the screen, is not real.

So who's to sue? And who's to be paid? Nobody.

And semi-random faces generated from the GAN, can instead be tweaked, to create that product-perfect artificial persona. Not quite George Clooney. And not Obama's brother. But they've just sold you a car, and a concept.

Nod to the placard-carrying unemployed car salesmen outside the dealership. They used to make lots of crazy promises in the past, didn't they? Yet that nice AI chap told you exactly what you were getting from the lease: and it's been saved up into the cloud, in your account.

Progress. Progress
 
HYPOTHESIS
My gut feeling is that AI-generated false faces have been used in mainstream media advertising, where so decided, since around 2010. Feel free to disagree. I say this from a perspective of some confidence in my opinion

IDK, maybe in billboard advertising, but they don't seem to be animated yet. Surely that is only a matter of time. As a person who appreciates the art of video games, I am more interested in someone getting turbulent water effects right.

On the other hand, my real issue with these realistic fake people images is their potential use in crime.
 
IDK, maybe in billboard advertising
^this must be going on. Beyond a mere gut feeling, there are certain depictions of faces which help sell products that you just know are too realistic to be real. If I had to be tied down to instances ... I'm just saying (for now, for example) prelude advertising for some major sporting events. And certain totemic figures ... perhaps.

but they don't seem to be animated yet
Interesting. Fascinated as to why you feel there wouldn't be an occasional ongoing use of this already? In the first quarter of the 21st Century... If there's no actual deception going on, there won't be a flag in the corner (whereas, if there is ever a deliberate deception going on, I'd like to think the representations are sufficiently-realistic so as to be beyond reproach: I hate being lied to badly).

On the other hand, my real issue with these realistic fake people images is their potential use in crime.
Indeed. Because even if you make yourself even more uniquely-unique (eg tattoos, styles, contexts), a non-you can be made to look more like you than you yourself do.

It's intriguing, but I suspect that certain open-source analysis software which was originally-developed to assist in the forensic identification of video imagery tampering/editing is no longer available. That may be for innocuous commercial reasons, or down to my ineffectual abilities to relocate it.
 
Interesting. Fascinated as to why you feel there wouldn't be an occasional ongoing use of this already? In the first quarter of the 21st Century... If there's no actual deception going on, there won't be a flag in the corner (whereas, if there is ever a deliberate deception going on, I'd like to think the representations are sufficiently-realistic so as to be beyond reproach: I hate being lied to badly).

I have a simple two-word answer: "uncanny valley" (https://whatis.techtarget.com/definition/uncanny-valley). If you have seen the woeful attempts made to put computer-generated people into movies, you will know that we are not up to it yet. Of course my evidence is 2 years out of date, as I am thinking of the movie "Rogue One" with the dubious Peter Cushing and Carrie Fisher bots. They simply weren't up to snuff. I am impressed by thispersondoesnotexist.com/ however, as these photos are of high quality, so perhaps someone has recently traversed the uncanny valley and created something more human than human? If it hasn't happened yet (and we have no evidence that it has), it IS merely a matter of time now before we have animated, realistic human bots that are indistinguishable from the real thing. The threat then will be people tampering with other people's images and creating film of things that never happened.
 
I see the AlphaZero program's defeat of Stockfish 8 was mentioned about 2 years ago, but I'm surprised that no-one has mentioned either Leela Chess Zero "... a free, open-source, and neural network based chess engine and distributed computing project" or DeepMind's AlphaStar taking on and beating human professional players in a (limited) game of StarCraft II
DeepMind’s AI agents conquer human pros at StarCraft II

This past week Leela defeated the most up-to-date version of Stockfish in the TCEC computer chess championship, but what is truly impressive is the way humans see "her" games and describe them, often accusing the program of "trolling" opponents and playing with "flair." What is more interesting is that the Leela program runs not on multi-core CPUs (up to 40 cores for Stockfish and Komodo) but rather on graphics cards, usually 2 of them.
 
...
HYPOTHESIS
My gut feeling is that AI-generated false faces have been used in mainstream media advertising, where so decided, since around 2010. Feel free to disagree. I say this from a perspective of some confidence in my opinion

There's no intrinsic linkage between AI technology and generating artificial face images. The item to which you linked is merely a demonstration of a neural net AI selecting the parameters for creating such images, based in part on capturing such parameters from an ongoing training set of images.

The images' creation does not require any measure of 'AI'. The ability to generate increasingly realistic facial images from parametric data dates back decades.
 
The images' creation does not require any measure of 'AI'.
I had then misunderstood (or over-anticipated) the functional outputs.

I'd thought that the system was involved in some self-actuated feedback process, such that the selection of a specific non-real face was being influenced by autoselectable software vectors.
 
I had then misunderstood (or over-anticipated) the functional outputs.
I'd thought that the system was involved in some self-actuated feedback process, such that the selection of a specific non-real face was being influenced by autoselectable software vectors.

The neural net is no doubt undergoing 'training' in response to its outputs, but none of the reports I've seen explain what the training feedback may be. The most obvious possibility is that the net is being pinged with a relatively simple input accepting or rejecting the most recent face image(s) produced.

In other words, the net is being trained to forward viable parameters for generating a face image, not the face image itself.
 
There is an excessive investment by media and commercial bodies in the presupposed capabilities (and relevance) of AI in all sorts of inappropriate settings.

Currently in the UK we are being subjected to Microsoft corporate television advertisements wherein some rather listless (lisping) Americans claim that the extended use of AI will save the world (via efficiency gains in arable food production), all enabled via what appear to be multiple independent weather-reporting devices within fields. Or that's the implication. To me it looks like an oversold, over-engineered solution for something that isn't really what the marketing people have decided it is.
 
Would automatic 'face in a crowd' AI recognition be fooled by the target simply wearing a false beard and putting on heavy framed glasses ?
 