This article illustrates the sort of problems facing the application of AI to filtering or managing objectionable content on social media sites, as well as to other purposes.

Why AI is still terrible at spotting violence online
Artificial intelligence can identify people in pictures, find the next TV series you should binge watch on Netflix, and even drive a car.

But on Friday, when a suspected terrorist in New Zealand streamed live video to Facebook of a mass murder, the technology was of no help. The gruesome broadcast went on for at least 17 minutes until New Zealand police reported it to the social network. Recordings of the video and related posts about it rocketed across social media while companies tried to keep up.

Why can't AI, which is already used by major social networks to help moderate the status updates, photos, and videos users upload, simply be deployed in greater measures to remove such violence as swiftly as it appears? ...

FULL STORY: https://www.cnn.com/2019/03/16/tech/ai-video-spotting-terror-violence-new-zealand/index.html
 
Now if I may.......

I would like to discuss this somewhat ambiguous, and I believe meaningless, term,

Artificial Intelligence (AI).

It's thrown about as though people know what it means - and what does it mean?

Supposedly it implies that there is an intelligence different from 'natural' intelligence.

But what is natural intelligence? - Intelligence derived from a natural (biological) source.

And artificial intelligence is derived from some other source? - What source?

You see, 'intelligence' either is, or it is not - there is no such thing as natural or artificial intelligence.

So maybe, Human, it is time for you to wake up and sense the intelligence that surrounds you and the universe (or multiverse) you are part of - it is neither natural nor artificial.

- This intelligence is universal, irrevocable, and invariant throughout all time and space.

- Welcome to your world.





“Insight must precede application.”
― Max Planck
 
Artificial intelligence is simply the description we give to machine programming that is capable of correcting itself based upon the results of its actions.

As it is created initially by humans and installed in a machine, it is artificial.

Easy.

INT21.
 
... I would like to discuss this somewhat ambiguous, and I believe meaningless, term,
Artificial Intelligence (AI). ...

It's ambiguous only in use among laypersons who have no grasp of what it denotes or the history thereof. This ambiguity has been recursively magnified by loose applications of the label in popular - especially speculative - contexts (e.g., science fiction) and the subsequent re-use of those loose (and off-target) allusions as the basis for further speculative elaboration.


... It's thrown about as though people know what it means - and what does it mean?
Supposedly it implies that there is an intelligence different from 'natural' intelligence. ...

Well, no ... AI is the R&D field that seeks to create, demonstrate, and deploy means for executing automatic or autonomous behaviors that mimic performance in certain tasks or activities which are considered to require "intelligent" coordination between input circumstances and actionable outcomes when performed by humans.

Such tasks are generally considered to require such "intelligent" input / output brokering if they involve (e.g.):

- discrimination among possibilities for what the input(s) may represent;
- discrimination among possibilities for which output / action to select; and / or
- the application of inferential capabilities to either of the preceding two sub-tasks or the association(s) between them.

The concept of something that is "intelligence" per se lies within the purview of (e.g.) psychology and cognitive science, not AI.

There are folks who claim to be pursuing this notion of "intelligence" via engineering, as opposed to AI's objective of emulating "intelligent" behavior. This angle is referred to as AGI - artificial general intelligence.

Insofar as there is no clear conceptualization for what this quality or characteristic of "intelligence" may be, anyone who claims to be building an artifact possessing it is blowing some measure of smoke. The idea that one can reasonably speak of such abstract "intelligence" as something which can be reliably defined, much less differentiated into 'natural' versus 'artificial' versions, is nothing short of bullshit.
 
Enolagaia,

Some time back we discussed the future of mechanised lovers - androids.

Can you point me to the thread as I have just realised something pertinent to the discussion.

INT21.

No! Not that escargot is an android - we all know that. ;)
 
Artificial intelligence is simply the description we give to machine programming that is capable of correcting itself based upon the results of its actions. ...

That's machine learning, not AI. There are certainly AI applications that can adapt / tweak their knowledge bases, inference criteria, rules, etc., etc., etc., in response to prior results. However, such adaptation / learning is not a canonical requirement for a given system to qualify as an AI application.

On the other hand ... Such learning / adaptation is unavoidably involved with neural-style or neural-emulating systems (if only for initial ramp-up training). Even in this case, the capacity for ongoing self-adaptation in response to prior outcomes is an optional feature.
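
For what it's worth, here's a rough Python sketch of the distinction (entirely my own hypothetical example - toy rule names, not any real system): decide() is a fixed rule-based decision function that stands on its own as classic AI, while learn_from_outcome() is the separate, optional adaptation step that adjusts the rules from feedback.

from typing import Callable

# Each rule: (condition on the input facts, proposed action, weight)
rules: list[tuple[Callable[[dict], bool], str, float]] = [
    (lambda facts: facts.get("temperature", 0) > 80, "open_vent", 1.0),
    (lambda facts: bool(facts.get("smoke", False)), "sound_alarm", 1.0),
]

def decide(facts: dict) -> str:
    """Pure rule-based decision - works with or without any learning."""
    candidates = [(weight, action) for cond, action, weight in rules if cond(facts)]
    if not candidates:
        return "no_action"
    return max(candidates)[1]  # highest-weight matching rule wins

def learn_from_outcome(action: str, success: bool) -> None:
    """Optional adaptation: nudge a rule's weight up or down based on how a
    prior action turned out. decide() still qualifies as AI without this."""
    global rules
    delta = 0.1 if success else -0.1
    rules = [(c, a, w + delta if a == action else w) for c, a, w in rules]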
 
It would appear, then, that AI by your rather strict definition can't exist, as it will always devolve to if - then - else decisions.

Which is basically what we as humans do. There is nothing else.

INT21.
 
It could be that one.

But here is my thinking.

As the thread was partly about not being able to tell an android lover from a real one (or the possibility of having an artificial lover by preference), there is something we overlooked.

If we are part of a huge computer program, as some believe we may be, then we are already having sex with artificial beings. We have to be, as we would all be characters in some huge game.

I do wish that the 'entity' who coded my part in this had done a better job of it.

INT21.
 
It would appear, then, that AI by your rather strict definition can't exist, as it will always devolve to if - then - else decisions. ...

AI is not a thing that 'exists', in and of itself. That's AGI, and I'm not talking about AGI here ...

So long as AI is implemented via deterministic machines* - yes.
(*Don't let anyone fool you into believing neural nets are any less deterministic in their operations than old-fashioned symbolic AI.)

In teaching, presenting, and debating AI over the years I always use the IF / THEN connection as the key illustrative device, as follows ...

In traditional programming the key logic flow control linkage is the IF / THEN - e.g., IF "X" / THEN "Y".

In AI, this same linkage is still in play, but it's elaborated or made more complex so as to accommodate situations where:

- The "X" is not readily specifiable at face value, but has to be determined from a range of options;

and / or ...

- The "Y" is not readily specifiable based on "X" alone, but has to be determined from a range of options;

and / or ...

- The "/" (the linkage between any specific X/Y pairing) may itself have to be decided based on factors other than anything to do with X and / or Y themselves.

The essential criterion in classic AI is (in some sense) decisions being made. Casually stated, the decisions involved in the 3 elements listed above are:

Input "X": "What the hell is this; What's happened?"

Output "Y": "What action or state needs to be enacted?"

Inferential Middle Ground: "Given X, what are the possible Y's, and is there a single Y that's clearly mandated?"
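
For anyone who prefers code to prose, here's a rough Python sketch of those three decision points (a made-up sensor-monitoring scenario of my own, not anything from a real system): the X must be inferred from the evidence, the Y must be chosen from a range of options, and the linkage itself turns on a factor outside X and Y.

def identify_input(evidence: dict) -> str:
    """Input "X": 'What the hell is this; what's happened?' - inferred from a range of options."""
    scores = {
        "fire": 0.7 * evidence.get("heat", 0) + 0.3 * evidence.get("smoke", 0),
        "steam": 0.6 * evidence.get("humidity", 0) + 0.4 * evidence.get("heat", 0),
        "all_clear": 1.0 - max(evidence.values(), default=0),
    }
    return max(scores, key=scores.get)  # the most plausible reading of the input

def select_output(x: str, context: dict) -> str:
    """Output "Y": 'What action needs to be enacted?' - chosen from a range of options."""
    options = {
        "fire": ["evacuate", "suppress"],
        "steam": ["ventilate", "ignore"],
        "all_clear": ["ignore"],
    }[x]
    # The "/" linkage: which X/Y pairing applies is decided by something
    # outside X and Y themselves - here, whether the building is occupied.
    if x == "fire":
        return "evacuate" if context.get("occupied") else "suppress"
    return options[0]

# IF <inferred X> THEN <selected Y>, with a decision made at every step.
x = identify_input({"heat": 0.9, "smoke": 0.8, "humidity": 0.1})
print(x, "->", select_output(x, {"occupied": True}))  # fire -> evacuate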
 
And this is where it could get a bit tricky for humans.

If there is an island (Great Britain will do), and it has so many people.

The Intelligence looks at the resource supply, calculates how long it will last, and looks for outside sources of supply.

Finds none.

It decides that as it can't source resources, the only other option is to reduce the consumption of the population.

It finds that the population will expand at a rate that means it can't balance the equation.

It only has one option.

Reduce the population.

The 'Y' part becomes how to do this.

Would you agree that an intelligence relying on logic will have a problem with this decision?

INT21.
 
... Would you agree that an intelligence relying on logic will have a problem with this decision?

Any AI application - including one based on neural-style processing - can only issue a response based on the logic embedded within itself (either hard-wired, transiently established, or a combination thereof). In other words, an AI can only give a response based on its internal model or checklist. Bad model or checklist = bad conclusions.

Imagine you're performing a task in which you're strictly limited to following a printed manual. If the manual is wrong, your conclusion will be wrong. If the manual doesn't address the situation you're dealing with, you must stop without conclusion. If you're allowed to choose a conclusion based on probabilities (e.g., weight of evidence) you're still risking being wrong.
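
A rough Python sketch of that 'printed manual' analogy (all names hypothetical, my own illustration): the system answers only from its internal checklist, must stop when the checklist has no matching entry, and can guess on weight of evidence only if explicitly permitted - still at the risk of being wrong.

# The 'printed manual': right or wrong, it's all the system has to go on.
manual = {
    "overheating": "reduce_load",  # if this entry is wrong, every answer based on it is wrong
    "low_fuel": "refuel",
}

def consult_manual(situation: str) -> str:
    """Strict rule-following: answer from the manual, or stop without a conclusion."""
    if situation in manual:
        return manual[situation]
    return "no_conclusion"

def consult_with_guessing(situation: str, similarity: dict[str, float]) -> str:
    """Permitted to weigh evidence: fall back to the closest known situation
    (similarity maps manual entries to a resemblance score) - still risks being wrong."""
    if situation in manual:
        return manual[situation]
    best_match = max(similarity, key=similarity.get)
    return manual[best_match]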

It's entirely possible that such a model / checklist / set of options offers only one option involving reducing the population (assuming that option is itself explicit within the model / checklist). However ...

The decision that the population must be reduced does not necessarily entail a decision on how to achieve the population reduction. A "how-to" decision making capability becomes more likely to the extent the AI application has, or can trigger, control over something relevant to maintaining the population level.
 
Like maybe spreading around a bit of war surplus anthrax.

It may see it as being for the greater good.
 
Like maybe spreading around a bit of war surplus anthrax.
It may see it as being for the greater good.

Creative extrapolation is not a component of AI application operations.
 
But the day will come when governments rely more heavily on AI to help sort out the approaching doom scenario we appear to be heading into.

And someone(thing) will have to make decisions.

And in the above example, maybe it would not consider it extrapolation - just a way of reducing the population to bring things back into equilibrium. I have food for ten; we have twelve. Would the AI suggest that all twelve go on a crash diet and stay hungry, or would it think, 'well, there is another way' ...
 
But the day will come when governments rely more heavily on AI to help sort out the approaching doom scenario we appear to be heading into.
And someone(thing) will have to make decisions. ...

For critical decision / action cycles, humans need to be in the loop. This lesson was learned the hard way back in the 1980s, when purely AI-controlled systems occasionally wrought havoc - including killing people.
 
But the day will come when governments rely more heavily on AI to help sort out the approaching doom scenario we appear to be heading into.

And someone(thing) will have to make decisions. ...

Some years ago China decided that its population growth was unsustainable - I would bet their computers agreed and confirmed the hypothesis.

China then began a one-child-per-family policy to reduce population growth - apparently it worked, and now China is back to normal breeding.

No one, in any country, would allow an AI decision of this magnitude to rule - unless you want to consider a sci-fi scenario where AI takes over, often shown to happen accidentally.
- Yes, this makes for good sci-fi entertainment, but I will not lose any sleep over its probability.

Man has misused his so-called natural intelligence throughout his history
- Sure he can, and very well might, do it again with AI
- But the culprit is Man - not the machine.
 
Some years ago China decided that its population growth was unsustainable - I would bet their computers agreed and confirmed the hypothesis. ...

Generally speaking, that's pretty much how it happened. Party leadership had already identified population growth as a problem, proposals for dealing with it had been debated throughout the 1970s, and policies were being implemented by the time computer-based support for the idea arrived.

The scientist Song Jian is sometimes portrayed as the instigator and driving force in formulating the one-child policy. This isn't an accurate historical account. Song attended an international conference in 1978, at which he encountered and became interested in the work of the Club of Rome (i.e., The Limits to Growth). This was the same year the government was initially announcing new policies curbing the number of new births. It wasn't until the next year that Song ran calculations of optimum national population size and began presenting his results to the scientific and governmental communities.

It's more accurate to say the government was already heading toward proactive population control, and Song's involvement was very important - if not decisive - in convincing the government to proceed with implementing a one-child policy. In other words, the computer simulations / calculations supported fully adopting and guiding an initiative that was already in motion.
 
But could it ever happen over here? It is an existential problem.

All in all, it would appear that AI is something of a myth. Maybe rather a fantasy.

If it can never do more than very fast IF - THEN calculations - something that humans can do, albeit much slower - then there does not appear to be any promise in it.

And it would not be much use if the AI comes up with a reckoning that humans find unpalatable.

In the UK, and particularly in countries whose religion demands the right to breed without consideration, it will never work.

And this takes us back to the thorny subject of immigration.

INT21.
 
This seems like as good a place as any for this. A humorous piece, but it raises some serious questions:

[attached image]


Assuming it eventually becomes possible to upload human consciousness to an artificial construct, what is the legal status of the resulting entity? Is it a person in the legal sense? If the biological unit is still alive after the procedure, does it have precedence over the artificial unit? Might someone have compelling reasons, legal or financial, to make sure the biological unit does NOT survive the procedure?

Certainly this isn't something we need to worry about right away, but I'm willing to bet that legal minds somewhere are already mulling it over.
 
Deepfakes are getting good enough to give you nightmares.
 
This lesson was learned the hard way back in the 1980s, when purely AI-controlled systems occasionally wrought havoc - including killing people.
That remark seems to have been passed over when you made it, but it leaps out at me now: are you able to give details?
 
Meanwhile, this is an interesting essay, more than just a rehash of the GIGO truism, arguing that many of the datasets used in training machine learning systems have been, um, uncritically applied, with unfortunate results, such as the IBM system that was unable to identify non-white faces. Matters go from bad to worse from there...

Kate Crawford and Trevor Paglen said:
You open up a database of pictures used to train artificial intelligence systems. At first, things seem straightforward. You’re met with thousands of images: apples and oranges, birds, dogs, horses, mountains, clouds, houses, and street signs. But as you probe further into the dataset, people begin to appear: cheerleaders, scuba divers, welders, Boy Scouts, fire walkers, and flower girls. Things get strange: A photograph of a woman smiling in a bikini is labeled a “slattern, slut, slovenly woman, trollop.” A young man drinking beer is categorized as an “alcoholic, alky, dipsomaniac, boozer, lush, soaker, souse.” A child wearing sunglasses is classified as a “failure, loser, non-starter, unsuccessful person.” You’re looking at the “person” category in a dataset called ImageNet, one of the most widely used training sets for machine learning.

Something is wrong with this picture.

Where did these images come from? Why were the people in the photos labeled this way? What sorts of politics are at work when pictures are paired with labels, and what are the implications when they are used to train technical systems? ...
 
That remark seems to have been passed over when you made it, but it leaps out at me now: are you able to give details?

The earliest incident I recall occurred in Spain. I'm pretty sure the incident occurred in the mid- to late 1980s. A major Spanish hospital was using a new computer-controlled radiotherapy machine (the device administering radiation to treat tumors, etc.). I want to say the machine was built and marketed by GE (General Electric), but I may be wrong about that.

This new machine had an embedded expert system (AI app; rule-based, as I recall) that decided the correct targeting, intensity, and duration of the treatment.

The expert system's inference engine was thrown into an anomalous state, generated erroneous targeting and administration data, and proceeded to "burn" the patient. The patient died a short time later from radiation sickness and perhaps residual damage from the burns.

My recollection is that the story disappeared at the point the inevitable lawsuits were filed. Later I was told by a med tech / AI manager at NIH the manufacturer had settled with the claimant(s) and withdrawn the machine (or at least its AI controller) from service.
 
The earliest incident I recall occurred in Spain. I'm pretty sure the incident occurred in the mid- to late 1980s. A major Spanish hospital was using a new computer-controlled radiotherapy machine (the device administering radiation to treat tumors, etc.). ...

There's this incident -

https://en.wikipedia.org/wiki/1990_Clinic_of_Zaragoza_radiotherapy_accident

Although it was blamed on human error, not AI.
 