
Watch The Australian Army Use Telepathy To Control Robot Dogs

maximus otter

Recovering policeman
Joined
Aug 9, 2001
Messages
13,981
The mere thought of controlling a robot is good enough for the Australian Army. In a new test, the land force has paired with a host of technology researchers to use telepathy to control robot dogs, part of a new wave of research aimed at eliminating the need for verbal or physically inputted commands in the control of various autonomous systems.

The Australian Army tested HoloLens 2 headsets and Raspberry Pi-based AI decoders to capture brain waves and translate them into “explainable instructions” sent via telepathy to an autonomous robot dog, in this case a Vision 60 Ghost Robot.

It worked.


With the technology proving successful in the test field, the team conducted a second test that included a simulated operation of soldiers and ghost robot dogs working in tandem to clear an area.

“This technology enables me to not only control the ghost robot as well as monitor its video feed,” Sergeant Chandan Rana of the 1st/15th Royal New Lancers says in the video, “but it allows me to be situationally aware of my surroundings as well as my team, to be able to control all movements on the battlefield clearance.”


Tollenaar says the simple system can be used with several different autonomous systems. Robot dogs aren’t the only end use for the technology. The Australian Army believes it can work with aerial drones, drone swarms, ground weapon systems, and potentially a tiny robot army.

https://www.popularmechanics.com/mi...an-army-uses-telepathy-to-control-robot-dogs/

maximus otter
 
I would think this was probably brought about by studying people who have electronic prostheses? Though the prosthesis learns the muscle signals that the brain produces to make a specific motion.

This time, they've connected brain signals to a robot. Wonder who is being programmed? The user or the robot?

I think the question of accountability for actions will spring up as it has with self driving cars.
 
I tell you, this forum has made me more and more paranoid:dunno:
 
I'd hazard the technology already more or less exists to control a remote agent 'by thinking about it' - we've already got 'nets' of electrodes that can be applied to the scalp, and machine-learning programs that can interpret, on the fly, the outputs of three-figure numbers of such electrodes, working from phase, amplitude and possibly FFTs to separate out the various frequency waves wobbling around the brain. Then it's a (simple) matter of training such a program to know 'what means what'.
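The FFT step mentioned above is the easy bit to demonstrate. A minimal sketch, on a synthetic signal rather than real electrode data (the sample rate and band boundaries are typical assumed values, not from any particular rig):

```python
import numpy as np

fs = 256  # assumed sample rate in Hz, typical for scalp EEG kit
t = np.arange(0, 4, 1 / fs)

# Synthetic "EEG": a strong 10 Hz (alpha) wave, a weak 20 Hz (beta)
# wave, and some noise standing in for everything else.
rng = np.random.default_rng(0)
signal = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
signal += 0.3 * rng.standard_normal(t.size)

# FFT to separate out the various frequency waves.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def band_power(lo, hi):
    """Total spectral power between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return spectrum[mask].sum()

bands = {
    "delta (0.5-4 Hz)": band_power(0.5, 4),
    "theta (4-8 Hz)":   band_power(4, 8),
    "alpha (8-13 Hz)":  band_power(8, 13),
    "beta (13-30 Hz)":  band_power(13, 30),
}
dominant = max(bands, key=bands.get)
print(dominant)  # the 10 Hz component dominates, so alpha wins
```

A real decoder would feed band powers (plus phase) from a hundred-odd channels into the machine-learning stage; the 'what means what' training is where all the hard work lives.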

One of the more interesting things about those kinds of electrodes is the difficulty that has historically existed with recording and pre-amplifying very low frequencies. Without getting into too much detail: amplifier chips with very low bandwidths (<15 Hz, often more like 3-4 Hz) are needed to pre-amplify electrode signals right at the electrode on the scalp, and such chips were only in the pipeline about two years ago.

I'll eat my hat if Mr. Musk isn't working on something like it. Psychology is a long way behind the curve on this, as there are very few people who understand the technology and the neuroscience and who also think about cracking the problem like an engineer would - i.e. rig it up and see what comes out - although for this application that approach needs a bundle of cash that even I'm not allowed to spend.

But that's not true of everyone.
 
Well isn't this like UFO abductees reporting the 'aliens' to be communicating telepathically?
They don't 'speak', but transmit thoughts to those they abduct.
Progress.
 
Just hope this technology can distinguish between thinking about ordering the robot dog to do something, and actually ordering it to do something.
 
The following may sound like me being paranoid, or even be one of those non-existent 'silly questions' things.

I have some experience in coding/scripting and know how the simplest error in syntax can break code, how a simple misinterpretation of data can have unexpected results, and how easily a human programmer can make an error that in turn leads to strange results from running the code/script/programme. I know that my experience with such things is limited in comparison with others, especially those involved in larger projects. Without implicating myself, so to put it, I also have some experience altering so-called secure code, retrieving data from likewise secure networks and low-level 'hacking' [nothing serious].

So! Silly questions time.
  • How easy would it be for a 'computing' amateur enthusiast, experienced hacker or enemy agency to interfere with this technology?
  • How dangerous is the possible result of some enemy agency or, say, a terrorist group gaining control of this technology?
  • Could this technology be hijacked 'in the field' and turned on its original users?
  • If the technology finds its way to policing, are we likely to see it abused to, let's imagine, disperse innocent members of the public?
  • Is there a possibility that this technology could be linked with or redefine 'remote viewing'?
  • Could remote viewing techniques be used in a hijacking of this technology?
I'm more intrigued than paranoid or frightened by this technology. The possibilities around the technology could be somewhat scary to think of though.
 
Silly questions time.
How easy would it be for a 'computing' amateur enthusiast, experienced hacker or enemy agency to interfere with this technology?
It would be just as easy, or as difficult, as interfering with any other information-based technology. Drone technology, for example. Drone technology can certainly be jammed, but it is much more difficult to take control of enemy drones, and as far as I know this doesn't happen. Given sufficiently well-engineered code, it should be possible to encrypt and authenticate the control data so that it could not practically be hacked in billions of years, but not all technology uses such secure encryption.
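The "taking control" half of the threat is usually defeated by authenticating every command, not just encrypting it. A hypothetical sketch using a standard HMAC construction (the command names and 32-byte key are illustrative, not from any real BCI or robot protocol):

```python
import hashlib
import hmac
import os

# Shared secret between the operator's headset and the robot.
# How this key is provisioned securely is its own hard problem.
KEY = os.urandom(32)

def sign(command: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the robot can verify the sender."""
    return command + hmac.new(KEY, command, hashlib.sha256).digest()

def verify(packet: bytes):
    """Return the command if the tag checks out, else None."""
    command, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, command, hashlib.sha256).digest()
    # compare_digest avoids timing side-channels in the comparison.
    return command if hmac.compare_digest(tag, expected) else None

packet = sign(b"MOVE_FORWARD")
assert verify(packet) == b"MOVE_FORWARD"  # genuine order accepted
# A forged packet without the key fails verification.
assert verify(b"SELF_DESTRUCT" + os.urandom(32)) is None
```

Note this only gives integrity and authenticity; a real link would layer encryption on top, plus sequence numbers so old commands can't simply be replayed.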

How dangerous is the possible result of some enemy agency or say a terrorist group gaining control of this technology?
Very dangerous, but we had better get used to it, because in a few decades time all weapons of war (and probably many other systems, including telecommunication systems, transport systems and domestic appliances) will have some kind of brain-computer interface (BCI) capability.

Could this technology be hijacked 'in the field' and turned on it's original users?
Well, with a well-designed system this shouldn't be possible. But it is not impossible that some systems would have weaknesses that could be exploited, not to mention 'back-door' access that might have been incorporated into the system by its makers or designers. Don't buy BCI tech from your enemies.

If the technology finds its way to policing, are we likely to see it abused to, let's imagine, dispersing innocent members of the public?
Of course it will. Remote-controlled robots have already been used to kill people, and BCI controlled robots are just a refinement of this tech. One advantage is that every action performed by a remote-controlled robot could be recorded, so in theory this data could be used in a court of law. But there could be many complex law cases, especially if the BCI tech records some indication of the mind-state of the operator.

Is there a possibility that this technology could be linked with or redefine 'remote viewing'?
Could remote viewing techniques be used in a hijacking of this technology?
Not a chance, because remote viewing is bollocks. BCI technology is a form of technological telepathy, which uses electrical impulses and radio waves (or some other form of data transmission), so can be used with confidence wherever a good connection can be maintained. Remote viewing relies on no known principles, and has a success rate similar to chance, so would be a liability in any conflict.
 
@eburacum Thanks for the somewhat detailed reply. If I'm truthful then I have to write that my questions were kind of rhetorical. The last two were really 'tongue in cheek' and I particularly liked your response to them. There are undoubtedly questions to be asked about the technology and the way it will be used. There will likely be debate about the morality of such technology and how safe it is for the user and others affected. Ultimately, though, those that want to use the technology will do so no matter what, and you are correct; we had better get used to it.
 