... So basically you are saying that algorithms created by AI cannot be 'reverse engineered' ?

No.

I didn't refer to algorithms created by AI's. I was referring to algorithms created for AI's - i.e., the 'logic' built into them from the beginning.
 
When we finally create an autonomous AI entity, I hope we can give it an Off Switch which the AI can't control.

I welcome our new A.I. Overlords. They do not wish to rule over us, merely to guide us along the most beneficial path. Reports of them using memes to take over our minds are totally unfounded.
 
EnolaGaia,

..the 'logic' built into them from the beginning...

Ok. I think what I am getting at is: if an AI, being 'intelligent', creates algorithms of its own, is it possible for human engineers to work out how it did it ?

INT21
 
... Ok. I think what I am getting at is: if an AI, being 'intelligent', creates algorithms of its own, is it possible for human engineers to work out how it did it ?

Bottom Line: It isn't, it doesn't, and it's rarely possible to any useful extent.

Here's a highly condensed set of reasons why ...

(1) No AI is 'intelligent' in any sense that correlates with what we like to call human intelligence. They are designed to mimic behaviors that we would consider equivalent to what an 'intelligent' human does.

(2) There have been such things as software applications that can self-organize, re-organize, and / or generate new additions to their own code base. Generally speaking, however, AI's don't re-write their own code by which they do inferences over a base of data and rules. 'Machine learning' has always been directed toward manipulating such data and / or inference rules (e.g., criteria values / weights) rather than re-wickering the logic that works upon these things.

Let me try an illustrative analogy ... Let's say you rely on a handbook for every step of doing a job (e.g., working on a 'case' of some sort). Let's say this handbook is the definitive guide for both relevant data (e.g., measurements, specifications, etc.) and the rules for how to work with that data in light of inputs / changes. Now let's say you modify the handbook over time by (e.g.) adding updated pages / sections, annotating it, etc. Your brain doesn't change, but the reference guide 'out there' does. Shifts in behavior are more a matter of changes to the guidelines / rules / data 'out there' in the handbook. In an analogous fashion, classic machine learning systems keep the relevant data and rules 'out there', separate from the fixed inference logic that consults them.
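To make the analogy concrete, here's a deliberately toy sketch (invented for this post - the names, numbers, and 'learning' rule are all made up, not drawn from any real system). The decision logic in assess() is never rewritten; 'learning' only nudges the externalized weights and threshold it consults:

```python
# Toy sketch: the 'handbook' (weights and a threshold) lives outside the fixed logic.
weights = {"temperature": 0.4, "vibration": 0.6}   # 'handbook' data
THRESHOLD = 0.5                                     # a 'handbook' rule value

def assess(readings):
    """Fixed inference logic - this code never changes."""
    score = sum(weights[k] * v for k, v in readings.items())
    return "alert" if score > THRESHOLD else "ok"

def learn(readings, correct_answer):
    """'Learning' only adjusts the handbook entries, not the assess() code."""
    target = 1.0 if correct_answer == "alert" else 0.0
    prediction = 1.0 if assess(readings) == "alert" else 0.0
    for k, v in readings.items():
        weights[k] += 0.1 * (target - prediction) * v   # nudge the criteria weights

print(assess({"temperature": 0.2, "vibration": 0.3}))   # -> 'ok'
learn({"temperature": 0.2, "vibration": 0.3}, "alert")   # updates only the weights dict
```

After many such corrections the system's behaviour shifts, but nothing about the code that 'does the thinking' has changed - only the handbook it reads from.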

(3) Neural-style AI's have to be trained until they seem to yield acceptably 'good' results. Generally speaking, such neural type implementations with machine learning capabilities are black boxes providing no means for determining (step by step) how they adapt / evolve over time. They are relatively easy to train up to acceptable performance, but the trade-off is that they do what they do and there's little basis for following what's going on inside them. In other words, they're relatively straightforward to set up and start using, but relatively opaque to subsequent inspection.
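As a rough sketch of why that opacity arises (a toy example invented for this post, not any production system - plain NumPy, a tiny network, XOR as the task), consider what is actually left over after training:

```python
# Toy sketch: train a tiny network on XOR until its outputs look acceptable.
# The only artifact left to inspect afterwards is a pile of weight values.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer parameters
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer parameters
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                           # train until it 'seems good enough'
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # backpropagate the error ...
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)   # ... and nudge every weight

print(out.round(2))   # typically close to [0, 1, 1, 0] - acceptable behaviour
print(W1, W2)         # ... but the 'explanation' is just these numbers
```

Even with a parameter set this small there's no step-by-step account of 'how it decided'; scale that up to millions of parameters and the black-box problem is obvious.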

(4) The ability to deconstruct and analyze the course of machine learning requires representation and retrieval of data on what transpired as the AI operated. Phrased another way, it's like asking a friend or relative, "What were you thinking?". You can't figure out how they got to an eventual state without knowing both what data they were relying upon in the moment and what (if any ... ) rule(s) they were using to determine subsequent responses / actions.

Older - 'symbolic' rather than neural - AI systems sometimes afforded the ability to do such retrospective dissection / analysis, insofar as they were just more sophisticated versions of other advanced software systems. The amount of time / effort and probability of actually understanding how it 'got to where it ended up' depended on how it was implemented and whether the designers / developers had built in any debugging, tracking, or analysis capabilities. This resulted in the opposite case from the neural approach - the initial coding and tweaking took a long time, but eventual debugging / analysis could be made easier.
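For contrast, here's an equally toy sketch of the 'symbolic' style (again invented for this post): the rules are explicit, and a trace of which rule fired on what data can be kept, so the "What were you thinking?" question has an answer afterwards - provided someone thought to build the trace in:

```python
# Toy sketch: an explicit rule base plus a trace of what fired, on what data.
RULES = [
    ("low_stock", lambda facts: facts["stock"] < facts["reorder_point"], "reorder"),
    ("overdue",   lambda facts: facts["days_overdue"] > 30,              "escalate"),
]

trace = []   # the retrospective record - it only exists because we chose to keep it

def decide(facts):
    for name, condition, action in RULES:
        if condition(facts):
            trace.append((name, dict(facts), action))   # which rule fired, on what data
            return action
    trace.append(("default", dict(facts), "no_action"))
    return "no_action"

decide({"stock": 3, "reorder_point": 10, "days_overdue": 0})    # -> 'reorder'
decide({"stock": 50, "reorder_point": 10, "days_overdue": 45})  # -> 'escalate'
for step in trace:
    print(step)   # replay the 'reasoning', step by step
```

Omit the trace (as many real systems did) and even the symbolic variety becomes hard to reconstruct after the fact.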
 
I think this might best fit here given the type of tech involved and the potential for machine learning.

Big Brother Goes Digital
Simon Head, May 24, 2018 issue

Self-Tracking
by Gina Neff and Dawn Nafus
MIT Press, 248 pp., $15.95 (paper)

Sociometric Badges: State of the Art and Future Applications
by Daniel Olguín Olguín and Alex (Sandy) Pentland
IEEE 11th International Symposium on Wearable Computers, Boston, October 2007, available at vismod.media.mit.edu/tech-reports/TR-614.pdf

Machine, Platform, Crowd: Harnessing Our Digital Future
by Andrew McAfee and Erik Brynjolfsson
Norton, 402 pp., $29.95

The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies
by Erik Brynjolfsson and Andrew McAfee
Norton, 306 pp., $26.95

... But in the twenty-first century, new technologies have emerged that enable companies as varied as Amazon, the British supermarket chain Tesco, Bank of America, Hitachi, and the management consultants Deloitte to achieve what Hochschild’s managers could only imagine: continuous oversight of their workers’ behavior.

These technologies are known as “ubiquitous computing.” They yield data less about how employees perform when working with computers and software systems than about how they behave away from the computer, whether in the workplace, the home, or in transit between the two. Many of the technologies are “wearables,” small devices worn on the body. Consumer wearables, from iPhones to smart watches to activity trackers like Fitbit, have become a familiar part of daily life; people can use them to track their heart rate when they exercise, monitor their insulin levels, or regulate their food consumption.

The new ubiquity of these devices has “raised concerns,” as the social scientists Gina Neff and Dawn Nafus write in their recent book Self-Tracking—easily the best book I’ve come across on the subject—“about the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data.” But the more troubling sorts of wearables are those used by companies to monitor their workers directly. This application of ubiquitous computing belongs to a field called “people analytics,” or PA, a name made popular by Alex “Sandy” Pentland and his colleagues at MIT’s Media Lab.

Pentland has given PA a theoretical foundation and has packaged it in corporate-friendly forms. His wearables rely on many of the same technologies that appear in Self-Tracking, but also on the sociometric badge, which does not. ...

https://www.nybooks.com/articles/20...=Newsletter&utm_term=Big Brother Goes Digital
 
Google's new A.I. tech, which is supposed to be included in newer versions of Android, can be seen here, commented on by skeptic Mark Dice.

 
Quite taken with this chap. The thinking Fortean's intellectual. Ivor somebody.... Been listening to him all afternoon.
His response to the fear being whipped up about AI is worth a few of your earth minutes.
 
Vardoger,

..Google's new A.I. tech which is supposed to be included in newer versions of Android..

Seems appropriate.

INT21
 
EnolaGaia,

..The amount of time / effort and probability of actually understanding how it 'got to where it ended up' depended on how it was implemented and whether the designers / developers had built in any debugging, tracking, or analysis capabilities. This resulted in the opposite case from the neural approach - the initial coding and tweaking took a long time, but eventual debugging / analysis could be made easier..

Just get another AI to do it.

But what if they collude and lie ?

INT21
 
Do Americans dream of electric accountants?

Brookings survey finds worries over AI impact on jobs and personal privacy, concern U.S. will fall behind China
Darrell M. West Monday, May 21, 2018

Advances in artificial intelligence (AI) are propelling development in many parts of the world. There are new applications in finance, healthcare, transportation, national security, criminal justice, and smart cities, among other areas. Yet at the same time, there are questions about negative impacts on jobs and personal privacy, whether AI will make people’s lives easier, whether the government should regulate AI, and how the United States is faring compared to other countries.

To examine attitudes towards AI, researchers at the Brookings Institution undertook an online national survey of 1,535 adult Internet users between May 9 and May 11, 2018. It was overseen by Darrell M. West, vice president of Governance Studies and director of the Center for Technology Innovation at the Brookings Institution and the author of The Future of Work: Robots, AI, and Automation. Responses were weighted using gender, age, and region to match the demographics of the national internet population as estimated by the U.S. Census Bureau’s Current Population Survey. ...

https://www.brookings.edu/blog/tech...l-privacy-concern-u-s-will-fall-behind-china/
 
The whole problem with A.I. as I see it is that nobody knows what caused natural consciousness or how you define it in humans. If you listen to someone like Roger Penrose, for example, he claims that consciousness is not a computable function and therefore will always remain out of reach of any computational device. Then there are whole rafts of arguments about Gödel's incompleteness theorem, which says there are true statements that simply cannot be proved within any consistent mathematical system.
 
I hear that Uber have stopped their self drive tests in Arizona after one of their cars killed a pedestrian.

But they intend to start doing them later in Pittsburgh PA.

Does this mean that the residents of Pittsburgh are more expendable than the ones in Arizona ?

INT21
 
The whole problem with A.I. as I see it is that nobody knows what caused natural consciousness or how you define it in humans. If you listen to someone like Roger Penrose, for example, he claims that consciousness is not a computable function and therefore will always remain out of reach of any computational device. Then there are whole rafts of arguments about Gödel's incompleteness theorem, which says there are true statements that simply cannot be proved within any consistent mathematical system.

Consciousness and intelligence are both slippery notions, with definitions that vary quite a bit across and within the fields in which they're studied.

However ... Even given the variability accorded in defining or describing them, they are two distinct features or qualities.

'Intelligence' - most particularly in the sense embedded in AI - is a feature ascribed on the basis of notably adequate performance in the context of some task or operation (e.g., playing chess; taking a test; generating non-catastrophic courses of action).

'Consciousness', on the other hand, is a capacity ascribed on the basis of 'awareness' - most specifically, the reflexive capability for being 'aware of being aware' and / or being 'self-aware' (whatever one takes these to mean).

There's no necessary or intrinsic linkage between the two. Behavior rising to the level of performance adequate enough to be labeled 'intelligent' need not entail the self-awareness that would clinch a parallel labeling as 'conscious', and vice versa.

Solving the 'Hard Problem' (consciousness) won't resolve any of the problems with AI, its application(s), or its prospective risks.
 
EnolaGaia,

Here is a situation for you to consider.

A country is facing a problem. Its population is increasing, but its ability to feed (etc.) that increase does not match it.

If the problem were given to an artificial intelligence to solve, it would probably come down on the side of stopping the breeding, thereby halting the ever-increasing number of people who need to be fed.

This is the logical thing to do.

A human power would not do that. It is incapable of doing what really needs to be done due to thousands of years of conditioning.

So man can never give executive powers to a machine that will make the correct decision on anything important.

In the above case a human power would rather let the situation reach critical starvation level and then create a war to reduce numbers back to what is considered manageable. And the cycle will start again.

That situation is facing us right now.

Up until the invention of the nuclear bomb we had periodic major wars that killed many of the people in some countries and kept bringing the population down. It is notable that America is an exception to this rule.

So we are facing a serious problem. And AI will not get us out of it, simply because we will ignore its suggestions.

INT21
 
I hear that Uber have stopped their self drive tests in Arizona after one of their cars killed a pedestrian.

But they intend to start doing them later in Pittsburgh PA.

Does this mean that the residents of Pittsburgh are more expendable than the ones in Arizona ?

INT21
They may simply have different regulations in Pittsburgh.
 
Mythopoeika,

The regulations in Arizona must have been amenable to the initial trials.

Maybe there was a 'get out' clause. Kill anyone and you get out of the state.

INT21
 
EnolaGaia,
Here is a situation for you to consider.
A country is facing a problem. Its population is increasing, but its ability to feed (etc.) that increase does not match it.
If the problem were given to an artificial intelligence to solve, it would probably come down on the side of stopping the breeding, thereby halting the ever-increasing number of people who need to be fed.
This is the logical thing to do.
A human power would not do that. It is incapable of doing what really needs to be done due to thousands of years of conditioning. ...


Whether implemented as an 'old school' symbolic AI (performing a particular inference function over a structured knowledge / rule base) or as a 'new wave' neural AI (doing the same thing, but structurally tweaked via training to satisfactorily align results with inputs), an AI is 'rigged', in the same sense as we refer to a 'rigged game'.

Metaphorically, an AI cannot 'know more than it's been given to know' and it cannot 'deduce beyond the bounds of the logic built into it.'

If its rigging covers (e.g.) birth rate, death rate, and sustainment criteria such as food supply, an AI application is just as likely to come down on the side of accelerating deaths or capping resources as throttling back on births.

It's all in the way the particular installation is rigged in terms of what it 'knows' and how it's supposed to - or even can - make something of that.
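Schematically (a toy sketch I've made up for this post, not a model of any real planning system), the 'rigging' amounts to something like this - the recommendation can only ever be one of the options somebody coded in, scored against the criteria somebody coded in:

```python
# Toy sketch: the system can only pick among the options and criteria built into it.
OPTIONS = {
    "reduce_demand":   lambda s: s["demand"] * 0.9 - s["supply"],   # projected shortfall
    "increase_supply": lambda s: s["demand"] - s["supply"] * 1.1,   # under each option
}

def recommend(state):
    # Chooses whichever built-in option leaves the smallest projected shortfall.
    return min(OPTIONS, key=lambda name: OPTIONS[name](state))

print(recommend({"demand": 120.0, "supply": 100.0}))   # -> 'reduce_demand'
# Whatever the inputs, the output is always one of the keys above. An option
# nobody wrote in - or gave the system data to evaluate - can never emerge.
```

Which way such a system 'comes down' on a question like the population one depends entirely on which options and criteria its builders happened to include.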
 
..Metaphorically, an AI cannot 'know more than it's been given to know' and it cannot 'deduce beyond the bounds of the logic built into it.'..

That's also a pretty good description of the human condition.

We only know what we know until someone suddenly thinks outside the box. The danger is that AI will also do this.

INT21
 
I hear that Uber have stopped their self drive tests in Arizona after one of their cars killed a pedestrian.

But they intend to start doing them later in Pittsburgh PA.

Does this mean that the residents of Pittsburgh are more expendable than the ones in Arizona ?

INT21
As a resident of Cleveland Ohio (rival to Pittsburgh), I say YES!
 
... We only know what we know until someone suddenly thinks outside the box. The danger is that AI will also do this.

The point I'm trying to get across is that an AI is totally 'boxed in', and cannot through its own operations leverage itself 'out of the box'.

To be sure, even within the constraints of their innate 'boxes' they can spin off into yielding unforeseen and anomalous results. That's plenty dangerous enough, as experience demonstrated 30 - 40 years ago.

This situation is considerably more fuzzy, and hence more dangerous, with neural-style AI implementations which have been trained up to the point their outputs are judged acceptable with respect to their inputs. It's easier to see how a neural AI might drift into weird behavior, but even in this case it isn't really escaping the 'box' of its structure and configuration. It's operating within a more flexible or stretchy 'box', but it's still a 'box'.
 
I do understand what you are saying. But..

..It's operating within a more flexible or stretchy 'box', but it's still a 'box'...

doesn't sound very reassuring to me.

INT21
 
Splendid idea! Creating a psychopathic AI. What could possibly go wrong?

Norman is an algorithm trained to understand pictures but, like its namesake Hitchcock's Norman Bates, it does not have an optimistic view of the world.

When a "normal" algorithm generated by artificial intelligence is asked what it sees in an abstract shape it chooses something cheery: "A group of birds sitting on top of a tree branch."

Norman sees a man being electrocuted.

And where "normal" AI sees a couple of people standing next to each other, Norman sees a man jumping from a window.

The psychopathic algorithm was created by a team at the Massachusetts Institute of Technology, as part of an experiment to see what training AI on data from "the dark corners of the net" would do to its world view.

The software was shown images of people dying in gruesome circumstances, culled from a group on the website Reddit.

Then the AI, which can interpret pictures and describe what it sees in text form, was shown inkblot drawings and asked what it saw in them.

http://www.bbc.com/news/technology-44040008
 
This thread seems as good a place as any for these musings. I'd appreciate any feedback from others . . .

On the subject of intelligence, it seems useful to differentiate between intelligence and what I will call cleverness. Intelligence, I think, operates on an abstract level; it can manipulate ideas and concepts of things immaterial and unseen. Cleverness deals more with things immediate, tangible, and concrete.

For example, crows are often described as intelligent. I think instead they are clever (in the sense that I am using the word here). A crow confronted with a twig and a grub in a hole can figure out how to fish out the grub using the twig. That is cleverness. A crow cannot invent trigonometry. That would require intelligence.

A theoretical physicist working on string theory is using intelligence. That same physicist balancing his checkbook is using cleverness. All humans (to a greater or lesser degree) possess the faculty of intelligence, but the demands of daily life, by and large, require only cleverness. Thirsty? Turn on the tap. Cold? Put on a sweater. Bored? Turn on the TV.

Now, here is the question. Does the faculty of intelligence atrophy if not regularly exercised? Did the life of a paleolithic hunter-gatherer require greater intelligence and/or greater cleverness than that of modern man? The hunter-gatherer had to be keenly aware of subtle clues in the environment, the turning of the seasons, the movement of game, etc. Modern man is, if anything, insulated from his environment by technologies which he barely understands. Shut down the electrical grid and most people cannot cope. Shut down the local grocery and most people cannot feed themselves.

I'm not really saying anything new here. But in a time in which the environment is becoming more and more challenging, are we losing the ability to meet the challenge? It doesn't bode well for our survival.

Here is where I break with the doomsayers who see AI as an existential threat to human survival. I think instead it is our best hope for salvation. I'm not necessarily talking about sentient AI. I don't think anyone has any real idea of how to achieve that. If we do achieve sentient AI, it will probably be by accident. Some think that even if we did achieve sentient AI, we might not recognize it for what it was.

I'm glad to see AI research continuing and advancing on all fronts. We need all the help we can get.
 
Where is the crossover between cleverness and intelligence? Isn't there a blurred edge somewhere?
 
Where is the crossover between cleverness and intelligence? Isn't there a blurred edge somewhere?

Yes, there's not a sharp demarcation between the two. IMHO most day-to-day activity falls on the "cleverness" side of the line.
 
..I think instead it is our best hope for salvation...

Why do you think we won't be able to manage without it ?

INT21
 
This thread seems as good a place as any for these musings. I'd appreciate any feedback from others . . .

On the subject of intelligence, it seems useful to differentiate between intelligence and what I will call cleverness. Intelligence, I think, operates on an abstract level; it can manipulate ideas and concepts of things immaterial and unseen. Cleverness deals more with things immediate, tangible, and concrete.

For example, crows are often described as intelligent. I think instead they are clever (in the sense that I am using the word here). A crow confronted with a twig and a grub in a hole can figure out how to fish out the grub using the twig. That is cleverness. A crow cannot invent trigonometry. That would require intelligence. ...

The 'intelligence' ascribed in AI (in all generations) more closely corresponds to your category of 'cleverness', insofar as AI applications are coded or trained to operate effectively in the course of a particular grounded activity or procedure.

The criterion you cited for discriminating between the two categories - abstraction - may serve as a rule of thumb when differentiating them with respect to human and animal (and human versus animal) behaviors, but it doesn't afford any traction with respect to AI's.

This is because all AI's are operating with an abstracted model or rule base or training history that is hoped to adequately reflect best practice, but isn't linked to grounded praxis.

I suspect a version of the dichotomy applicable to AI's would be better framed with respect to something along the lines of 'novelty' / 'creativity' than 'abstraction'.
 
EnolaGaia,

So really there is no such thing as artificial intelligence, as the things you appear to be describing are really just high-speed computers that can only choose between a range of options. Really no better than Watson or Deep Blue.

To return briefly to my standard decision example.

A country has a growing population that it can't feed.

The logical thing to do is stop the breeding and in a couple of generations the population will fall back.

The usual thing that happens is that the country either loses its surplus people by war or starvation.

If it were given the problem, what would an artificial intelligence do ?

INT21
 