My bot has a language all of its own.
 
..My bot has a language all of its own...

I have heard that you sometimes talk through it.

INT21:)
 
I heard a while back that some organisation (Facebook?) let an AI bot loose and watched to see how it would interact with people who thought it was human.

It seems they had to shut it down as it became very trollish and extremely aggressive.

They should have expected that. It goes right back to the argument of 'nature versus nurture'.
If the AI is learning from a background of troll behaviour, then it will learn and build on this. Just as children do.

INT21
 
Largish article on synthetic neural networks and algorithms.
https://cosmosmagazine.com/technology/what-is-deep-learning-and-how-does-it-work

Facebook automatically finds and tags friends in your photos. Google Deepmind’s AlphaGo computer program trounced champions at the ancient game of Go last year. Skype translates spoken conversations in real time – and pretty accurately too.

Behind all this is a type of artificial intelligence called deep learning. But what is deep learning and how does it work?

Deep learning is a subset of machine learning – a field that examines computer algorithms that learn and improve on their own.

etc...
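For anyone curious what that "learn and improve" loop actually looks like, here is a toy sketch: a single artificial neuron taught the AND function by gradient descent. Deep learning stacks thousands of such units into layers, but the training idea (predict, measure the error, nudge the weights) is the same. Everything below is an illustration of my own, not from the article:

```python
import math

# A single artificial neuron learning the AND function by gradient descent.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, deliberately starting with nothing "learned"
b = 0.0          # bias
lr = 0.5         # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid squashes z into (0, 1)

for _ in range(2000):
    for x, target in data:
        p = predict(x)
        err = p - target          # gradient of the cross-entropy loss
        w[0] -= lr * err * x[0]   # nudge each weight against the error
        w[1] -= lr * err * x[1]
        b -= lr * err

for x, target in data:
    print(x, round(predict(x)), target)   # learned output vs. truth
```

The network never gets told a rule for AND; it just gets examples and adjusts its numbers until its outputs match them.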
 
Skinny,

... Skype translates spoken conversations in real time – and pretty accurately too...

If you want to see the other end of the scale, watch 'Eastwood Company' on YouTube.

They do some interesting car bodywork videos, but the automatic voice-to-captions feature that is supposed to follow what the demonstrator is saying can get quite hilarious.

I recommend it to anyone who is getting too attached to these auto-translation applications.

INT21
 
Once again Musk tells us we're all doomed.

Elon Musk has said again that artificial intelligence could be humanity’s greatest existential threat, this time by starting a third world war.

The prospect clearly weighs heavily on Musk’s mind, since the SpaceX, Tesla and Boring Company chief tweeted at 2.33am Los Angeles time about how AI could lead to the end of the world – without the need for the singularity.
Elon Musk (@elonmusk): "It begins ... https://twitter.com/verge/status/904628400748421122"

Elon Musk (@elonmusk): "China, Russia, soon all countries w strong computer science. Competition for AI superiority at national level most likely cause of WW3 imo." (10:33 AM - Sep 4, 2017)

His fears were prompted by a statement from Vladimir Putin that “artificial intelligence is the future, not only for Russia, but for all humankind … It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” ...

https://www.theguardian.com/technol...-ai-third-world-war-vladimir-putin?CMP=twt_gu
 
We seem to be doing a fairly good job of starting World War III without recourse to AI.
 
 
Years ago it was noticed that a 'herd' of computers could bankrupt a country via the stock exchange. Measures have been built into the algorithms to prevent them doing this.
AI would think logically. Very logically. If a country's population appeared to be nearing the limit of the people's ability to, say, feed themselves, it would limit the population by whatever means it found necessary. It would be the logical thing to do. Humans would let the place become an impoverished, war-torn hell hole. Who is to say the AI isn't right?

North Korea would be an interesting test of AI.
Would it simply remove the threat by destroying the nukes (and the country in the following war), or would it calculate that the threat isn't actually real, just bluster?

What does the panel think ?

INT21
 
Years ago it was noticed that a 'herd' of computers could bankrupt a country via the stock exchange. Measures have been built into the algorithms to prevent them doing this.

It didn't require multiple computers (depending on how much 'horsepower' per platform was involved). The demonstrated concept was a storm of trading orders generated by optimizing-style algorithms (which didn't necessarily rise to the level of an 'AI') overwhelming the exchanges' processing systems. The preventive actions taken in response consisted of (e.g.) trade limit controls added to the exchange processing systems - not the external trading systems.

I don't doubt that at least some of the trading systems take the exchange processing limitations into account, but there's no overriding reason why they must.
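As a rough illustration of what one exchange-side control might look like, here is a sliding-window order-rate limiter sketched in Python. The class name and limits are invented for illustration; real venues layer many different controls (price bands, circuit breakers, kill switches) on top of anything this simple:

```python
from collections import deque

class OrderRateLimiter:
    """Exchange-side throttle: reject orders once a client exceeds
    max_orders within a sliding window of window_s seconds.
    (Illustrative only - not any real exchange's actual rules.)"""

    def __init__(self, max_orders, window_s):
        self.max_orders = max_orders
        self.window_s = window_s
        self.times = deque()  # timestamps of accepted orders

    def submit(self, now):
        # Drop timestamps that have aged out of the window.
        while self.times and now - self.times[0] >= self.window_s:
            self.times.popleft()
        if len(self.times) >= self.max_orders:
            return False          # order rejected: rate limit hit
        self.times.append(now)
        return True               # order accepted

limiter = OrderRateLimiter(max_orders=3, window_s=1.0)
results = [limiter.submit(t) for t in (0.0, 0.1, 0.2, 0.3, 1.1)]
print(results)  # first three accepted, fourth rejected, fifth accepted again
```

The point is that the brake sits on the exchange side: however fast the external trading algorithms fire, the processing system itself refuses to be overwhelmed.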


AI would think logically. Very logically.

Not necessarily. AI platforms utilizing neural nets or neural net simulation operate via their own trained predilections, which aren't readily traceable or analyzable in terms of 'logic'. The addition of automated learning to neural net platforms puts any chance of figuring out the 'logic' even further out of reach.


If a country's population appeared to be nearing the limit of the people's ability to, say, feed themselves, it would limit the population by whatever means it found necessary. It would be the logical thing to do. Humans would let the place become an impoverished, war-torn hell hole. Who is to say the AI isn't right?

It wouldn't be the sort of human leaders / controllers who've employed and enacted the very same 'logic' time and again over the last several decades.


North Korea would be an interesting test of AI.

Would it simply remove the threat by destroying the nukes (and the country in the following war), or would it calculate that the threat isn't actually real, just bluster? ...

There is no context-free answer, because the analysis and the response(s) would depend on the goals, parameters, and decision logic embedded in the AI. For example ...

Let's say there are two AI's - one of which is rigged to analyze NK as a socio-economic system-of-systems, and the other rigged to analyze the same nation in purely military terms. The former might conclude no intervention is necessary, because NK's over-extending itself for the sake of acquiring a nuclear capability is projected to result in socio-economic collapse. The latter might conclude that intervention is required based on projections of NK's offensive capabilities (offensive throw-weight; range of force projection, etc.) per se.
 
New AI can guess whether you're gay or straight from a photograph

The study from Stanford University – which found that a computer algorithm could correctly distinguish between gay and straight men 81% of the time, and 74% for women – has raised questions about the biological origins of sexual orientation, the ethics of facial-detection technology and the potential for this kind of software to violate people’s privacy or be abused for anti-LGBT purposes.

The researchers, Michal Kosinski and Yilun Wang, extracted features from the images using “deep neural networks”, meaning a sophisticated mathematical system that learns to analyze visuals based on a large dataset.
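The two-stage pipeline described — a network reduces an image to a feature vector, then a classifier is trained on those vectors — can be sketched with stand-in parts. The `extract_features` function below is a crude placeholder for a deep network, and the "images" and labels are entirely synthetic; nothing here reproduces the study's method or data:

```python
# Stage 1: turn an image into a feature vector (toy stand-in for a deep net).
def extract_features(image):
    flat = [p for row in image for p in row]
    return (sum(flat) / len(flat), max(flat) - min(flat))  # mean, contrast

# Stage 2: train a simple classifier on those vectors (nearest centroid).
def train_centroids(examples):
    sums, counts = {}, {}
    for image, label in examples:
        f = extract_features(image)
        s = sums.setdefault(label, [0.0] * len(f))
        for i, v in enumerate(f):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(image, centroids):
    f = extract_features(image)
    return min(centroids,
               key=lambda lab: sum((a - b) ** 2 for a, b in zip(f, centroids[lab])))

# Tiny synthetic "images" (2x2 pixel grids): class A is dark, class B bright.
train = [([[0, 1], [1, 2]], "A"), ([[1, 2], [0, 1]], "A"),
         ([[8, 9], [9, 8]], "B"), ([[9, 8], [8, 9]], "B")]
centroids = train_centroids(train)
print(classify([[1, 1], [2, 1]], centroids))  # a dark image -> "A"
```

A real system would use a deep network trained on millions of faces for stage 1 and a stronger classifier for stage 2, but the division of labour is the same.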

The research found that gay men and women tended to have “gender-atypical” features, expressions and “grooming styles”, essentially meaning gay men appeared more feminine and vice versa. The data also identified certain trends, including that gay men had narrower jaws, longer noses and larger foreheads than straight men, and that gay women had larger jaws and smaller foreheads compared to straight women.

Human judges performed much worse than the algorithm, accurately identifying orientation only 61% of the time for men and 54% for women. When the software reviewed five images per person, it was even more successful – 91% of the time with men and 83% with women. Broadly, that means “faces contain much more information about sexual orientation than can be perceived and interpreted by the human brain”, the authors wrote.

“It’s certainly unsettling. Like any new tool, if it gets into the wrong hands, it can be used for ill purposes,” said Nick Rule, an associate professor of psychology at the University of Toronto, who has published research on the science of gaydar. “If you can start profiling people based on their appearance, then identifying them and doing horrible things to them, that’s really bad.”

In the Stanford study, the authors also noted that artificial intelligence could be used to explore links between facial features and a range of other phenomena, such as political views, psychological conditions or personality.

“AI can tell you anything about anyone with enough data,” said Brian Brackeen, CEO of Kairos, a face recognition company. “The question is as a society, do we want to know?”

Brackeen, who said the Stanford data on sexual orientation was “startlingly correct”, said there needs to be an increased focus on privacy and tools to prevent the misuse of machine learning as it becomes more widespread and advanced.

Rule speculated about AI being used to actively discriminate against people based on a machine’s interpretation of their faces: “We should all be collectively concerned.”
 
So now an actual gaydar exists?
 
...So now an actual gaydar exists?..

Isn't that what Grindr is?

INT21:fetish:
 
EnolaGaia,

..Let's say there are two AI's..

One would hope that the two would talk to each other and come to a compromise.

But that would seem to take the point out of AI as people already do that.

Isn't AI supposed to rise above this and present the logical response?

INT21.
 
EnolaGaia,

One would hope that the two would talk to each other and come to a compromise.
But that would seem to take the point out of AI as people already do that.
Isn't AI supposed to rise above this and present the logical response?

No - absolutely not - at least not unless you're talking about a cluster of AI's whose outputs are subjected to some sort of review and reconciliation (perhaps by yet another AI). As it turns out, this isn't a solution - it's merely piling more of the same dangerous presumptions atop one another.

AI's don't 'talk' in any creatively communicational sense, though there are AI's designed to emulate the parsing and structural aspects of processing natural language (I've built some of these myself ... ). Even if you're only alluding to sharing decision-related data, this is a huge problem. It's essentially intractable in the context of neural-style trained inference engines, because they have no discrete data structures to share.

There is no 'logic' available to an AI beyond whatever its configuration and programming reflect (i.e., whatever its creators have built into it). In any case, their intrinsic 'logic' in dealing with their respective abstract model(s) of the problem space has no necessary linkage to what we humans would consider the 'logic' of the situation for which their conclusions are sought.

One big issue that's always plagued AI is how to allow explanation of a decision / conclusion (e.g., explaining how and why a given output was generated). This was bad enough back in the days of symbolic AI (i.e., classic algorithmic processing over formal knowledge representations). This problem is far worse now that neural nets (hardware or software-based ... ) are employed.

AI techniques are reliably useful only for relatively small and very well defined problem domains (e.g., diagnosis of faults / diseases where causality and possibilities are completely specifiable).

These problems were known 30 years ago. The supporting tech (e.g., neural emulation) has advanced, but the scope of AI's 'real-world' applicability hasn't.
 
...So now an actual gaydar exists?..

Isn't that what Grindr is?

INT21:fetish:

It's not about hook-ups. It's about the possibility/probability that companies, government agencies, organisations, could/would use it to discriminate against, weed out, persecute, exclude or whatever, people it indicates are probably gay. You can bet your life that some would use it for exactly that.
 
It's not about hook-ups. It's about the possibility/probability that companies, government agencies, organisations, could/would use it to discriminate against, weed out, persecute, exclude or whatever, people it indicates are probably gay. You can bet your life that some would use it for exactly that.
Yep. Why they would do that is the next question.
 
EnolaGaia,

..These problems were known 30 years ago. The supporting tech (e.g., neural emulation) has advanced, but the scope of AI's 'real-world' applicability hasn't...

So where is the usefulness in them?

I was under the impression that neural networks were capable of learning from their own experiences.

Self drive cars are supposedly going to be able to make the millions of decisions I and everyone else makes when driving in normal road conditions. And there are literally millions of decisions per minute, often per second, that we make without even being conscious of them. Essentially one part of our brain may be driving the car on 'automatic pilot' whilst another may be looking out for, say, a particular turn off sign whilst also having to listen and respond to the back seat drivers at the same time.

As these self drive cars are supposed to be safer than a human driver, can you explain how?

INT21
 
EnolaGaia,

..These problems were known 30 years ago. The supporting tech (e.g., neural emulation) has advanced, but the scope of AI's 'real-world' applicability hasn't...

So where is the usefulness in them? ...

As I stated earlier - it lies in relatively small and closed problem domains where everything relevant is well-known and capable of definitive specification.


I was under the impression that neural networks were capable of learning from their own experiences.

It's more appropriate to say that neural networks 'adjust' (relationships between inputs / outputs) rather than 'learn' (add anything novel to their 'knowledge base').


Self drive cars are supposedly going to be able to make the millions of decisions I and everyone else makes when driving in normal road conditions. And there are literally millions of decisions per minute, often per second, that we make without even being conscious of them. Essentially one part of our brain may be driving the car on 'automatic pilot' whilst another may be looking out for, say, a particular turn off sign whilst also having to listen and respond to the back seat drivers at the same time.

As these self drive cars are supposed to be safer than a human driver, can you explain how?

No - because I don't buy into such claims.
 
...No - because I don't buy into such claims...

At least we agree on something.

INT21:)
 
I'd like to think the point of this work isn't just to out gays, but to develop an ability to identify traits and behaviors that escape human notice. Of course, this in itself could be problematic if misapplied, but what if it could reliably identify potential criminals or tell when a politician is lying (I'd pay to see that one)?
 
... but what if it could reliably identify potential criminals or tell when a politician is lying (I'd pay to see that one)?

'Reliably' is a relative term. A method that's 'reliable' for X% of cases - where X is anything less than '100' - is not appropriate to decide matters in which consequences are prescribed with respect to a standard of truth.

Pseudoscience is still pseudoscience - regardless of whether it's dispensed by a human 'expert' (e.g., polygrapher; graphologist) or an AI.
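To put a number on why less-than-100% reliability is so treacherous for consequential decisions, here is a quick base-rate calculation. It borrows the 81% figure quoted earlier purely for illustration and assumes (hypothetically) a 5% base rate:

```python
# Why "reliable for X%" can still misidentify most of the people it flags:
# with a low base rate, a fairly accurate test yields mostly false positives.
population = 100_000
base_rate = 0.05            # assume 5% of people actually have the trait
accuracy = 0.81             # treat 81% as both sensitivity and specificity

positives = population * base_rate          # people with the trait
negatives = population - positives          # people without it

true_pos = positives * accuracy             # correctly flagged
false_pos = negatives * (1 - accuracy)      # wrongly flagged

flagged = true_pos + false_pos
precision = true_pos / flagged              # share of flags that are correct
print(f"{flagged:.0f} flagged, only {precision:.0%} of them correctly")
```

Even at 81% accuracy, the wrongly flagged outnumber the correctly flagged several times over, which is exactly why "reliable for X%" does not suffice once consequences attach.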
 
'Reliably' is a relative term. A method that's 'reliable' for X% of cases - where X is anything less than '100' - is not appropriate to decide matters in which consequences are prescribed with respect to a standard of truth.

I hear what you're saying, but I don't agree that nothing less than 100% reliability is acceptable. We seem to get by with our existing justice system in spite of bent cops, inept prosecutors, incompetent lawyers, unreliable eyewitnesses, etc., etc. I think anything that can move the goal line even a little bit closer to Truth (with a capital T) is a win for society.

As to whether this is pseudoscience, I think it's too early to make that call.
 
Of course, this in itself could be problematic if misapplied, but what if it could reliably identify potential criminals or tell when a politician is lying (I'd pay to see that one)?
No need.
Politicians lie all the time.
 
Zoltan Istvan caused a stir with his recent article: “When Superintelligent AI Arrives, Will Religions Try to Convert It?” Istvan begins by noting, “… we are nearing the age of humans creating autonomous, self-aware super intelligences … and we will inevitably try to control AI and teach it our ways …” And this includes making “sure any superintelligence we create knows about God.” In fact, Istvan says, “Some theologians and futurists are already considering whether AI can also know God.” ...

http://hplusmagazine.com/2015/04/28/will-religions-convert-ais-to-their-faith/

Here's an initiative in a different direction - making AI the object of worship rather than another source of worshippers for the established religions ...

Church that Worships AI God May Be the Way of the Future

... You might soon be able — if you're so inclined — to join a bona fide church worshiping an artificially intelligent god.

Former Google and Uber engineer Anthony Levandowski, according to a recent Backchannel profile, filed paperwork with the state of California in 2015 to establish Way of the Future, a nonprofit religious corporation dedicated to worshiping AI. The church's mission, according to paperwork obtained by Backchannel, is "to develop and promote the realization of a Godhead based on artificial intelligence and through understanding and worship of the Godhead contribute to the betterment of society." ...

Author and religious studies scholar Candi Cann, who teaches comparative religion at Baylor University, said Levandowski's spiritual initiative isn't necessarily that odd from a historical perspective.

"It strikes me that Levandowski's idea reads like a quintessential American religion," Cann told Seeker. "LDS [The Church of Jesus Christ of Latter-day Saints] and Scientology are both distinctly American traditions that focus on very forward thinking religious viewpoints. LDS discusses other planets and extra-terrestrial life. Scientology has an emphasis on therapy and a psychological worldview, which is quite modern and forward thinking." ...

FULL STORY: http://www.livescience.com/60728-church-that-worships-ai-god.html
 
Let's not forget the large tax breaks you can get if you've started a religion...
 
Let's not forget the large tax breaks you can get if you've started a religion...
You've convinced me! I'll be starting a religion.
 