Robo-salamander's evolution clues

The mechanical salamander is a tool to study the past
A robot is being used by a Franco-Swiss team to investigate how the first land animals on Earth might have walked.
The bot looks a lot like a salamander, and the scientists can change the way it swims, slithers and crawls with commands sent wirelessly from a PC.

The group says it provides new insight into the nervous system changes aquatic lifeforms would have had to undergo to move to a terrestrial existence.

The researchers report their study in the latest edition of Science magazine.

By mimicking the nervous system and the movements of the salamander, the team hoped "to decode perhaps some of what happened during evolution", Auke Jan Ijspeert, of Ecole Polytechnique Federale de Lausanne, told BBC News.

Simple systems

The first animals capable of walking on land are thought to have emerged during the Devonian Period.


The transition is a crucial period in Earth history


Palaeontologists have found fossils dating back some 360 million years that show a process where fins are transformed into limbs.

Before the appearance of these tetrapods - four-legged vertebrates that mostly live on land - all backboned animals were confined to water.

Precisely how they came out on to the shore is not clear - but the latest research indicates the transition would not have required a huge leap in brain power.

Mr Ijspeert and colleagues have shown how even the simple nervous system of a lamprey (a primitive eel-like fish) can, with a few modifications, drive walking motion in a creature that resembles a salamander.

The computer system that runs their robot is based on just such a nervous system; it is no more complex.

Chicken heads

The computer sends signals through the machine's "spinal cord" to the limbs, allowing the bot to switch effortlessly between swimming and walking.
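Controllers of this kind are usually modelled as a chain of coupled oscillators - a "central pattern generator" - in which a single drive signal stands in for the brain's command. The short Python sketch below illustrates the general idea only; the equations, parameter values and the way drive maps onto gait are illustrative assumptions, not the Lausanne team's published model.

    # Minimal sketch of a chain of coupled phase oscillators acting as a
    # robot "spinal cord".  All equations and constants here are assumed for
    # illustration, not taken from the actual controller.
    import math

    N = 8          # body segments
    DT = 0.01      # integration step, seconds

    def step(phases, drive):
        """Advance each segment's oscillator by one time step.

        'drive' plays the role of the brain's simple command: it only sets
        how fast the wave runs, not what each joint does.
        """
        freq = 0.5 + 1.5 * drive          # oscillation frequency in Hz (assumed)
        lag = 2 * math.pi / N             # phase lag between neighbouring segments
        w = 2.0                           # coupling strength (assumed)
        new = []
        for i, th in enumerate(phases):
            dth = 2 * math.pi * freq
            if i > 0:                     # pull toward the segment ahead, offset by 'lag'
                dth += w * math.sin(phases[i - 1] - th - lag)
            if i < N - 1:                 # and toward the segment behind
                dth += w * math.sin(phases[i + 1] - th + lag)
            new.append(th + DT * dth)
        return new

    phases = [0.0] * N
    for _ in range(1000):
        phases = step(phases, drive=0.8)                    # a "swim fast" command
    joint_angles = [0.4 * math.sin(th) for th in phases]    # set-points for the segments

Because every segment settles into the same rhythm with a fixed lag, a single drive value is enough to speed the whole body up or slow it down, which is consistent with the article's point that the brain need only set speed and direction.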


The robot was tested on the shores of Lake Geneva
The scientists chose a salamander as the inspiration for their mechanical animal because the amphibian is probably quite similar to the first vertebrates that lived on land.

When it swims, it does so like a fish - its body makes undulating movements, with its limbs folded backward.

On firm ground, however, the salamander changes to a slow stepping gait, in which diagonally opposed limbs are moved together while the body makes S-shapes.

The research group has demonstrated how salamanders can control their locomotion using largely just their spinal cord.

"Their brains are more or less only involved to regulate the speed and direction," said Mr Ijspeert.

"A decapitated chicken that runs for a while even without the brain is a good example of spinal cord regulation of locomotion."




http://news.bbc.co.uk/2/hi/science/nature/6419927.stm
 
D.I.Y. Drone League

Chris Anderson is the editor-in-chief of Wired magazine. This post is adapted from a few recent entries on his Long Tail blog.

We're big fans of the FIRST robotics championships in our house and are of course starting to work on our own Lego League sumo wrestling entries. But as challenging as it is to create autonomous wheeled robots that can fight others or navigate mazes or obstacle courses, there's one thing that could be made even harder. You could add another dimension.

All the FIRST robotics contests are held on a two-dimensional plane (the ground or a tabletop). But what if you let the battle take to the skies, too? What if we created a competition for semiautonomous model airplanes, helicopters and rockets? Call it the 3D Robotics League.

After all, modern UAVs grew out of the radio-control airplane scene and the technologies that allow them to fly themselves--gyros, video and other sensors, GPS, digital radio and onboard microprocessors--are now shrinking in size and falling in price at a rapid pace. You can buy a model airplane today for less than $60 that has an onboard computer and basic sensors, and standard model helicopters have gyroscopes and autopilot modes. GPS chips are already small enough to fit into cellphones.

We're right on the verge of an era where it will be possible for regular people, not just engineers, to create home-built UAVs and guided rockets -- and do it all for under a thousand dollars. So why not create a formal set of challenges so that innovative teams could advance the state of the art, just as the FIRST league has done for terrestrial robotics? (There are already quite a few, scattered around the world.)

Autonomous aircraft challenges could include navigating a course, dropping a marker near a target, dogfighting with another plane (using ultrasonic tagging), or landing near a designated spot, all pilotless. Some levels of competition would allow for piloted take-offs and landings and switching into autonomous mode for the competition part of the flight, while more advanced levels would be entirely under computer control.

For rockets, the competition would probably be of the surface-to-air missile variety--ranging from popping a balloon to hitting a target towed behind a model airplane. This might be seen as politically incorrect in an era where terrorists with Stingers are a real threat, so I may have to think of something less warlike. But you get the idea. When one of my kids' guided rockets shoots down a sibling's UAV, I will be a very proud father indeed.

I took this first step this eve, throwing together the world's first Lego autopilot.

HiTechnic is releasing a gyro sensor for the Lego Mindstorms NXT -- which I haven't received yet. So I've got a light sensor standing in for it in the picture, but the mechanicals are pretty much in place. Cool fact of the day: According to Google, this is the first time the phrase "Lego autopilot" has ever been used. I own this space!

This autopilot only controls the rudder, keeping the plane flying level when engaged and returning to the launch area. While the autopilot is disengaged, the servo arm controls the rudder under manual radio control as usual. But when you engage the autopilot (a third servo presses the "start" button on the NXT controller brick), the NXT servo drives the gear assembly above to move the entire R/C servo back and forth, while the R/C servo arm remains stationary. The effect is the same as if the R/C arm was moving, but the rudder is under Mindstorm control, not R/C control.

This autopilot is a "return bot". When engaged, it turns the aircraft 180 degrees (thanks to the compass) to point back at the launch area and keeps the plane level until the human pilot regains manual control.
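The control logic for such a "return bot" can be very small. The sketch below shows one way the heading-hold step might look in ordinary Python; read_compass() and set_rudder() are hypothetical stand-ins for whatever sensor and servo interface the NXT setup actually provides, and the home bearing and gain are made-up values.

    # Illustrative heading-hold loop for a compass-only "return bot".
    # read_compass() and set_rudder() are hypothetical hardware hooks.
    import time

    HOME_BEARING = 270.0   # degrees back toward the launch area (assumed)
    GAIN = 0.02            # proportional gain (assumed)

    def heading_error(current, target):
        """Smallest signed angle, in degrees, from current heading to target."""
        return (target - current + 180.0) % 360.0 - 180.0

    def autopilot_loop(read_compass, set_rudder):
        while True:
            err = heading_error(read_compass(), HOME_BEARING)
            # Proportional rudder command, clamped to full deflection at +/-1.
            set_rudder(max(-1.0, min(1.0, GAIN * err)))
            time.sleep(0.05)

    # Dry run with fake hardware (prints rudder commands forever):
    # autopilot_loop(lambda: 310.0, lambda cmd: print("rudder", cmd))

Keeping the plane level would need the gyro (or GPS altitude) feeding a similar loop on the elevator, which is exactly the part that has to wait for the missing sensor.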

Next step is to ditch the compass and add a Bluetooth GPS module (the NXT brick has built-in Bluetooth), so it can follow waypoints and be fully autonomous. I'm told that the standard elevation output of the GPS module may be good enough to use for maintaining level flight, even if it's not high enough resolution to navigate to a soft autolanding. Once the gyro sensor arrives, I can test that.

-- Chris Anderson, cross-posted on The Long Tail

http://blog.wired.com/defense/2007/03/w ... ans_o.html
 
Thursday, March 29, 2007

Amoebalike Robots for Search and Rescue

A novel form of locomotion inspired by the way amoebas move could help robots get in places other robots can't reach.
By Duncan Graham-Rowe

Roboticists at Virginia Tech, in Blacksburg, VA, have developed a novel form of locomotion for robotics based on the way the single-celled amoeba moves. Unlike any other robot, the Virginia Tech ones are designed to use their entire outer skin as a means of propulsion.

Toroidal in shape--a bit like an elongated cylindrical doughnut--robots of this new breed differ from wheeled, tracked, or legged bots in that they move by continuously turning themselves inside out, says Dennis Hong, an assistant professor of mechanical engineering at Virginia Tech. "The entire outer skin moves," he says.

This novel type of locomotion is particularly suited to search-and-rescue applications, says Hong: "They can squeeze under a collapsed ceiling or between obstacles very easily." Indeed, preliminary experiments show that the robots, with their soft, contracting bodies, are able to push themselves through holes with diameters much smaller than their normal width, Hong says. And because the robots are able to use their entire contact surfaces for traction, they can move over and through very uneven environments with ease.



The actual motion is generated by contracting and expanding actuator rings along the length of the robot's body. By contracting the rings at the rear of the robot and expanding them toward the front, the robot is able to generate forward movement.

This is very much akin to the principle of the pseudopod used by single-celled organisms such as amoebas, says Hong. This principle consists of a process of cytoplasmic streaming, in which the liquid endoplasm within the cell flows forward inside a semi-solid ectoplasmic tubular shell. As the liquid reaches the front, it turns into the gel-like ectoplasm, forming an extension to this tube and moving the organism forward. At the same time, the ectoplasm at the rear of the tube turns into the liquid endoplasm, taking up the rear.
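One way to picture the whole-skin drive is as a wave of contraction travelling along the actuator rings from tail to head, continuously recycling the skin forward. The toy calculation below illustrates that idea only; the ring count, radii and wave speed are invented numbers, not Hong's design.

    # Toy picture of a contraction wave moving along a toroidal robot's rings.
    # All dimensions here are invented for illustration.
    import math

    RINGS = 12            # actuator rings along the body
    REST_RADIUS = 5.0     # cm (assumed)
    SQUEEZE = 2.0         # cm of contraction at the wave peak (assumed)

    def ring_radii(t, wave_speed=1.0):
        """Radius of each ring at time t as the contraction wave passes through."""
        radii = []
        for i in range(RINGS):
            phase = 2 * math.pi * (i / RINGS - wave_speed * t)
            radii.append(REST_RADIUS - SQUEEZE * max(0.0, math.sin(phase)))
        return radii

    for t in (0.0, 0.25, 0.5):                    # the pinched region marches along the body
        print([round(r, 1) for r in ring_radii(t)])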

To produce a similar sort of motion, Hong's initial experiments have used robots consisting of flexible toroidal membranes lined with propulsion rings of either electroactive polymer or pressurized hoses. But now, with funding from a new National Science Foundation grant, Hong has forsaken the use of elastic membranes in favor of more-rugged designs. He declines to discuss these designs in detail because of intellectual property issues. However, he says that this latest work involves rigid mechanical parts that are linked in such a way as to enable this sort of motion. "It's like a 3-D tank tread," he says.

"It's an interesting idea," says Henrik Christensen, professor of robotics and director of Robotics and Intelligent Machines at Georgia Institute of Technology, in Atlanta. "We really need better locomotion mechanisms for robots." Wheels and tracks work fine until the terrain becomes very uneven, while legs are slow and terribly inefficient, he says.

This is not the first time that toroids have been proposed as part of a propulsion system, says Andrew Adamatzky, a professor of unconventional computing at the University of the West of England, in Bristol, U.K. But using electroactive polymers to produce propagating waves of contractions makes this latest research very interesting, he says. "These experimental designs open new and exciting perspectives in soft-bodied robotics."

However, with soft bodies come new challenges. For example, it is not clear how one would integrate a power supply, computerised controllers, and sensors. "The principles here are good, but the engineering really needs to be worked out," says Christensen.

Hong acknowledges that there are still many practical issues to work out with his robots. One solution to many of the design issues is to carry the power supply, controllers, sensors, and other key parts in the center of the toroid. Its shape would ensure that these key parts stayed in place, while wireless controllers could be used to trigger the contractions of the rings using inductive loops for power, says Hong.

The hardest part of search and rescue is developing mechanisms that can adapt to changing terrains, says Robin Murphy, a professor of computer science and engineering at the University of Florida and former director of the Center for Robot-Assisted Search and Rescue, in Tampa, FL. However, there is more to search and rescue than just oozing through gaps, she says.

http://www.technologyreview.com/Infotech/18456/
 
The Power of Babble
MIT researcher Deb Roy is videotaping every waking minute of his infant son's first 3 years of life. His ultimate goal: teach a robot to talk.
By Jonathon Keats


The time is late morning. The place, a home in the Boston suburbs. Wriggling around on the living room floor with his baby boy, Deb Roy invents a game. One-year-old Dwayne watches him, then joins in. Fingers wiggle and arms waver. Rules change, morphing with their moving limbs. After a while, Dwayne tires. Roy picks him up and, cradling the child in a hug, lays him gently in his crib.

Fast-forward several weeks. In a laboratory at MIT, a grad student named Rony Kubat is editing a videoclip on a PC monitor. Onscreen, there's Dwayne (a name used for this article only), resting just as his father left him in the crib that morning. Roy watches as Kubat punches keys to scroll through the footage. Other grad students sit at computers nearby. A 6-foot-tall robot slouches, deactivated, in the corner. Arms crossed, Roy scrutinizes the images, which are overlaid with spectrograms and Kubat's annotations.

Almost every new dad breaks out a videocam to record his kid's early years. But Roy is working on a much more ambitious scale. Eleven cameras and 14 microphones are embedded in the ceilings of the Roy household and connected by some 3,000 feet of cable to a terabyte disk array in the basement. Roy has already captured more than 120,000 hours of footage. Data from the disks gets backed up to an automated tape library, and every 40 days Roy shows up at work with a rolling suitcase to download his new haul of data onto a dedicated 250-terabyte array in the air-conditioned machine room of the MIT Media Lab.

Roy, 38, directs the Media Lab's Cognitive Machines Group, known for teaching remedial English to a robot named Ripley. By recording the early stages of his boy's life, Roy is seeking to supplement his steel-and-silicon investigations: His three-year-long study will document practically every utterance his young son makes, from the first gurglings of infancy through the ad hoc eloquence of toddlerdom, in an unprecedented effort to chart — uninterrupted — the entire course of early language acquisition. The goal of the Human Speechome Project, as he boldly calls his program, is to amass a huge and intricate database on a fundamental human phenomenon. Roy believes the Speechome Project will, in turn, unlock the secrets of teaching robots to understand and manipulate language.

Disarmingly convincing with a calm manner and understated black attire, Roy goes on to explain how the project will ultimately let him combine human observation and robotic experimentation to address some of the most basic questions about how words work and what language reveals about cognition. There's a practical side to this: the motivation of an engineer who wants to make machines talk and think. There's also a speculative side: the motivation of a scientist who wants to explore language as a means of investigating the brain.

Over the past months, though, such grand problems have been the least of Roy's concerns. Kubat, along with grad students Philip DeCamp and Brandon Roy (no relation to Deb), has been wrestling with the task of managing and analyzing the hundreds of thousands of hours of multichannel video that are accruing. With input from his wife, Northeastern University speech pathologist Rupal Patel, Roy is attempting to make the project scientifically meaningful without turning baby Dwayne's life into The Truman Show. Even if Roy's work — endorsed by academic luminaries like experimental psychologist Steven Pinker and philosopher Daniel Dennett — fails to provide major linguistic insights, the data-mining techniques he's developing and the experimental protocols he's establishing will change how early childhood development is researched. His colleagues in the field are watching his methods with interest. "This is groundbreaking work," says Carnegie Mellon developmental psychologist Brian MacWhinney, keeper of the world's leading repository of childhood speech transcripts. "More and more, it's the technology that drives the science."


Child psychology has always lacked a killer-app. The first significant use of technology was the language lab, outfitted with one-way mirrors and video cameras to provide researchers with a window into the relationship between babies and their mothers. By the early '80s, though, the laboratory setup was under attack by educational psychologists like Jerome Bruner. To get a realistic picture of parent-child interaction, Bruner claimed, you need to "study language acquisition at home, in vivo, not in the lab, in vitro." His point was well taken but not easily addressed.

Researchers might visit a house a few hours a week, producing speech recordings hardly representative of daily experience. (MacWhinney estimates that the transcripts in his archive capture less than 1.5 percent of the typical child's upbringing.) More intensive documentary efforts, narrower in scope, have been made by psychologists keeping detailed diaries on the linguistic development of their own children. These, too, are necessarily sparse and can be just as artificial as bringing a child into a lab equipped with hidden cameras. (Psychologist Michael Tomasello experienced the dreaded "observer effect" when his young daughter did something clever, then paused to ask him if he was going to write it down.)

Roy combines the best attributes of both approaches, turning the home into a lab that never shuts down. Thanks to his experience in robotics, he had the technical background to design the project. And when his wife became pregnant in 2004, he had the perfect test subject. Fifty thousand dollars in seed funding from the National Science Foundation coincided with open-checkbook support from the Media Lab's corporate backers. In less than a trimester, the Human Speechome Project was born.

Much had to be done before the birth of Dwayne in mid-2005. Assisted by a contractor and an electrician, Roy first embedded eleven 1-megapixel color video cameras in the white stucco ceilings of his house. Each camera was fitted with a hand-ground fish-eye lens — made by a Japanese manufacturer cashing in on the post-9/11 surveillance market — collectively providing overlapping coverage of all rooms that the baby might occupy. Fourteen microphones were then positioned to exploit the ceiling's own resonating qualities, canceling out low-frequency background noise to deliver CD-quality sound. Roy and his crew next ran cable through the walls to the basement, where a 10-computer cluster was programmed to time-stamp and compress the raw data — an estimated daily take of 200 Gbytes — before sending audio and video files to the 5-terabyte storage array.

Even more formidable is Roy's planned $2.5 million retrofit of the Media Lab machine room, which will include a new 1.4-petabyte storage array cooled by 30 tons of air-conditioning. Network World has rhapsodized about the setup. Grid Today described the system as "one of the largest and highest-performance data storage arrays in the world." Roy's name for the software that retrieves all this data is somewhat more provocative: Total Recall.


Deb Roy's recall of his own childhood in Winnipeg, Manitoba, in the '70s is less than total, reaching clarity only after his sixth birthday, when he started building robots. "At first they were just cosmetic," he says. "Then I got interested in building the robot brain, not just the body, and I realized that I didn't have very good ideas about how to design controllers. So I started to think, how do people work?" He went to the library, where he found more questions than answers, too many unknowns. Humans were a complicated species. Psychology was vague. By the time Roy finished high school, he had decided to pursue a degree in engineering.

At the University of Waterloo, he learned about computer engineering and programming, and he found those disciplines as unsatisfying as the idle speculations about human nature at the local library. "Given a set of specs, a traditional engineer tries to work out the optimal design," he says. "I was more interested in questioning the specs." So, after four years of applying his engineering skills to engineering school — figuring out how to pass classes with the minimum possible effort — he had a pretty good sense of the environment he needed to satisfy his particular blend of curiosity and pragmatism. He found it as a grad student at MIT's famously iconoclastic Media Lab.

"There's a basic idea of learning by doing here," Roy says, sitting in the office that came with the faculty post tendered to him upon graduation in 1999. Cluttering his office and scattered throughout the Lego-like Ames Street building is evidence of this philosophy in practice. To the uninitiated, the Media Lab resembles a high school science fair with the budget of the Pentagon and no grown-up judges on hand. Genuinely revolutionary projects (the $100 laptop) share space with the outlandish (electromagnets that give musical novices a feel for playing the piano) and the frivolous (messenger bags that change appearance using flexible digital displays). While Roy is clearly at the serious end of the spectrum, his first research robot, done up like a cartoon toucan, bears the unmistakable markings of a '90s Media Lab project. "There was a lot of interest around the lab about how to show internal states," Roy explains, pointing to Toco — as the robot is called — retired on a high bookshelf. "The eyelids would open when the vision was on, the feathers would move when it heard something, and the beak would move when there was speech output." Roy shrugs. "But basically it's a camera on a stick."

As robots go, Roy's camera on a stick was Paleolithic, but it was the start of the research that led him to the Speechome Project. As part of his doctoral work, Roy built Toco to find out how boundaries between words are discovered, sifted from the slurry of everyday speech. To do so, he would allow the robot to learn by doing.

In other words, there wasn't going to be any fancy artificial intelligence poured into Toco's empty vessel of silicon. Roy would just utter simple phrases like "Look at the red ball" to find out whether, using basic pattern-recognition software, Toco could figure out that red was one word and ball was another and that they belonged to different grammatical categories.

Of course, pattern-recognition algorithms were well developed by 1999. What made Toco unique was its interaction with the physical world. Told to look at a red ball, Roy's robot was able to do it. Previous forays into pattern recognition had given rise to chatbots with remarkable conversational skills based on a grasp of language that was completely circular. They were like dictionaries: words related to one another but not to the world. "Chatbots work beautifully, as do dictionaries," Roy says, "but the meaning of words, when you dig deep enough, is not in other words. There's a reality out there to which these symbols relate." Roy designed Toco — and ultimately the Human Speechome Project — to find out how language connects to physical reality.

Toco took well to having eyes and ears, learning with startling alacrity how to talk about the properties of simple objects. "What color is the ball?" Roy might ask, to which the robot might reply, studiously ignoring a yellow cube and a blue cone, "Red ball." A toddler could have had a stimulating discussion with Toco, perhaps even have learned a thing or two about basic geometry.

Does this mean that Toco the robotic toucan might help us understand how children learn language? To address this question, Roy uses an appropriately avian analogy, comparing birds and planes. "They don't look alike," he says, "yet both share the property of flight. We learn most of our aerodynamics by building aircraft. We learn about drag and lift, which are also principles used by birds." In other words, experiments with gliders and biplanes gave us the physics to understand how eagles and hawks stay aloft, a template for specialized investigation of wings and feathers. Likewise, the thinking went, a robot capable of humanlike behavior will provide a rough model for the study of lobes and neurons.

Back in the late '90s, Roy was a bit more brash, at least when talking to his soon-to-be fiancée. "My robot is learning," he bragged. "It's learning the way kids learn. I bet that if we gave it the sort of input that kids get, the robot could learn from it."

Patel took one look at him, a guy who could read resistors based on their bands of color but wouldn't know a binkie from a blankie, and said, "Prove it."

It was no idle challenge. Patel was working toward her PhD in speech pathology at the University of Toronto, and she had access to an infant lab. So Roy bought a box of toys and flew to Canada, where Patel instructed a gathering of mothers to play with their babies while she videotaped their interactions. For an entire weekend, in hour-long sessions, the mothers babbled happily about balls and doggies and choo-choo trains. Then Roy gathered up the toys and caught a plane back to Cambridge. "After watching a few hours of video, I realized that I hadn't structured my learning algorithm correctly," Roy says. "Every parent knows that when you're talking to an 11-month-old, you stay on a very tight subject. If you're talking about a cup, you stick to the cup and you interact with the cup until the baby gets bored, and then the cup goes away." Roy needed to give his algorithm an attention span.

The idea was to supplement his robot's long-term memory with short-term memory. Both would be engaged in pattern recognition, searching speech input for recurring phonemes, but the short-term memory would focus on the recent past. By giving Toco a mild case of ADD, Roy made his robot more like the kids he was trying to emulate. Without the ability to prioritize recent experience, Toco's search algorithm had been spending valuable time cycling through every phoneme it had ever encountered.
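In software terms, the fix amounts to scoring recurring phoneme sequences within a sliding window rather than over the whole history. The fragment below is only a crude sketch of that idea under assumed representations (characters standing in for phonemes, fixed n-gram lengths); Roy's actual algorithm is not described at this level of detail.

    # Crude sketch of "pattern recognition with an attention span": candidate
    # word-like units are phoneme n-grams that recur in the recent window.
    # Representations and scoring here are assumptions for illustration.
    from collections import Counter, deque

    WINDOW = 200                       # phonemes of short-term context (assumed)
    recent = deque(maxlen=WINDOW)      # the "attention span"
    long_term = Counter()              # counts over everything ever heard

    def ngrams(seq, n):
        return [tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)]

    def hear(utterance):
        """Add one utterance (a phoneme sequence) and return likely word units."""
        recent.extend(utterance)
        for n in (3, 4, 5):                        # assumed range of word lengths
            long_term.update(ngrams(utterance, n))
        short_term = Counter()
        for n in (3, 4, 5):
            short_term.update(ngrams(list(recent), n))
        # Rank by recent recurrence first, lifetime frequency second.
        ranked = sorted(short_term, key=lambda g: (short_term[g], long_term[g]),
                        reverse=True)
        return ranked[:5]

    # Characters stand in for phonemes here:
    print(hear(list("lookatthecup lookatthecup")))

Restricting the search to the recent window is what keeps the algorithm "on the cup" while the cup is the topic, instead of rescanning every phoneme it has ever heard.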

And with the addition of short-term focus? Roy found that Toco could learn much faster if it were allowed to concentrate on the ball or the cup. Taking input directly from the baby lab — raw audio that the machine "hears" by analyzing the sound's spectrogram — Toco was building an elementary vocabulary. "It caused quite a stir," Roy says. "This was the first time that a computer took a lot of audio input without a lot of massaging."

Still, Toco was no Cicero. For instance, it couldn't make out the difference between ball and round, and it lumped them both in the same linguistic category. So Roy spent the next several years developing Newt and Ripley, younger brothers to Toco, with many more sensors and capabilities. Ripley had rudimentary motivations, balancing conflicting urges to explore its surroundings, cool its motors, and obey human commands. "Toco had no purpose in learning," Roy says. "It built associations, but there was no reason to have those associations." A robot assigned explicit responsibilities and required to coordinate them efficiently would be motivated to know about its surroundings, balls and all. Roy was applying an idea of child psychologist Jean Piaget, that objects might be understood in terms of potential actions.

His work with Toco was bedeviled by a more fundamental problem, though. "It was unclear to me how much of the day a mother spends playing with her baby when she's not in a lab being filmed."

Enter baby Dwayne. Persuading his wife to go along with the experiment was easier than might be expected. As a professor herself, she was familiar with the history of researchers observing their own children and was curious, like any good scientist, about the potential results: Might her son's development offer some key insight into her own work on speech pathology? "But mostly," Roy says, "she has a lot of tolerance for me."

Still, Patel insisted on a zone of privacy. "Deb and I agreed that if any aspect of the project intruded on our daily lives, we would immediately make whatever changes were necessary to alleviate the problem," she says. "That included shutting the project down if we felt it was the right thing to do."

At the moment, the critical work of data mining and visualization programming is led by Kubat, a 28-year-old sporting a shaved head and an earring. With a secondary interest in theater direction and a steady, low voice that could pacify a riot, he is well suited to the task of managing the daily 200-gig deluge.

Calling up a sequence in which Dwayne plays in his elastic baby bouncer, Kubat points out how only the cameras that sense motion are filming at 14 frames per second, while the others are idling at a superlow-res 1 fps that can be filtered out automatically. "Generating a complete transcript is going to be tedious and hard," he says. "The idea is to create an attentional mechanism for the house that focuses in on what matters." While Dwayne screeches loudly — effectively demonstrating the system's sound fidelity — Kubat shows how Total Recall cues up audio in blurbs brief enough to be sequentially transcribed. On screen, Roy comments to his wife that Dwayne is laughing more lately. Kubat points out the box where those words (and a typographic representation of Dwayne's laughing screech) will be input. "My estimate is that there are about 5,000 hours of transcription time for a year of data," says Roy, hovering nearby. "If you pay $10 an hour, you're looking at $50,000 for the year, so I don't think it's crazy." Roy has already put Dwayne's daytime sitter, former grad student Alexia Salata, to work as a stenographer while Dwayne naps, a task that can't be more onerous than changing diapers.
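The figures quoted in the piece hang together on a quick back-of-envelope check (the numbers below are simply the ones given in the article):

    # Back-of-envelope check of figures quoted in the article.
    hours = 5000          # estimated transcription time for one year of data
    rate = 10             # dollars per hour
    print(hours * rate)   # -> 50000 dollars, Roy's figure for the year

    daily_take_gb = 200   # compressed audio and video per day
    years = 3
    print(daily_take_gb * 365 * years / 1000)   # ~219 TB over the full study,
                                                # in line with the dedicated
                                                # 250-terabyte array at MIT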

Once the transcript is complete, the data mining can zero in on critical moments and trends. For instance, as Dwayne starts to build a vocabulary, it will be possible to measure statistical correspondences between his word use and that of his parents. The larger breakthrough, though, is in data visualization, the ability to monitor activity in the Roy household, down to the second or for entire years, in search of meaningful patterns. As Kubat explains it, the principle is to create "prisms of video": By stacking video stills like playing cards, long spans of activity can be seen at a glance. The same is done with audio spectrograms, allowing Kubat and Roy to spot when key interactions occur — crying, soothing words, encouraging utterances. "After a while, it's possible to read the audio and video," Roy says. "There are distinct patterns." Eventually, these signature moments will be extracted automatically.

Kubat zooms out to a whole day, showing that the system was switched on at 9 am and switched off at 10 pm. At this scale, the aggregated patterns line up to form what Roy calls "spacetime worms." They look like a cross between a cast-off snake skin and Marcel Duchamp's Nude Descending a Staircase. Kubat zooms out to a week, a month, Dwayne's whole life. Roy looks on. No other father has ever seen so much of his son's life in a single glance.

Still, there are gaps in the record, and not only while Dwayne sleeps or when the family goes out. (Despite rumors circulating on the Internet, Dwayne isn't under house arrest and has even had his first summer vacation.) Sometimes several cameras are down; other times the spectrograms register hours of silence. These blank spots are intentional, blinders that Roy allows himself in the eye of his self-imposed panopticon. In fact, Roy is fanatical about privacy, declining all requests from reporters to visit his home and refusing to reveal his baby's real name. ("Dwayne" was chosen for this article in keeping with Roy's practice of naming his robotic research subjects after Aliens characters — in this case, Corporal Dwayne Hicks.) "It comes down to managing privacy issues in an experiment that's the first of its kind," Roy says. "I've been erring on the conservative side because right now I'm living it and my wife is living it, so I don't trust my intuition."

Erring on the conservative side means killing the system if he or his wife is in a bad mood and might want to vent over dinner. They can also switch off the cameras while Patel is breast-feeding or hit the "oops" button when something too personal gets recorded. In fact, a glowing, wall-mounted "oops" button can be found in every room, allowing them to make Total Recall's archive something less than total. Roy pressed it one day after emerging naked from the shower when the cameras were running.

For the moment, Dwayne doesn't have that option, and Roy is OK with that. He argues that parental consent is standard in child psychology. If anything, he considers the extra attention a boon for Dwayne. Roy also insists that he'll shut down the experiment when his son is consistently constructing rudimentary sentences — well before he's even aware of the cameras — which may happen before his third birthday.

Roy also plans to protect the data against Truman Show sensationalism. "If we took embarrassing things that happened to my 1-year-old and posted them online, like many people do today, I'm sure my son would be pissed off at us," Roy says. Instead, he has set up secure servers, accessible to only a few trusted people. Transcribers will be given only short stretches of audio, in random order, obliterating context. Even researchers working with the data won't do so directly. Instead, they'll use algorithms to extract meaning and insight from the giant data set. For instance, a researcher might want to use an algorithm to test the hypothesis that a child assimilates his mother's utterances into his vocabulary more rapidly than his father's. "The question becomes, whose algorithms have access to the data?" Roy says. "And that's a different story."

The promise, then, is that computers will be able to test hypotheses about language acquisition by matching researchers' predictions to recorded patterns. Moreover, the predictions themselves may be suggested by careful observation of the spacetime worms. This mix of observation and investigation is well established in child psychology, tried-and-true. The difference with the Human Speechome is that a data set of this size and quality has never before been collected.


Once the database is complete, Roy's intention is to revisit his work on his early robot, Toco, at a petabyte scale. He plans to expose his newest sensor-loaded machine, Trisk, to Speechome-generated stimuli. "The robot will step into my son's shoes," he says.

Beyond the undeniable sci-fi thrill of it, Roy has a serious motivation. "The data we're collecting is dead data," he explains. "You can describe it and model it, but you can't poke at it." A researcher cannot change a parameter — blindfold the baby, say — and see how the same three-year period would play out for the boy linguistically. But embodied in a robot, the data can be made to live again, and all parameters become malleable. With Toco, for instance, the length of short-term memory could be adjusted, and Toco could be made to relearn the same vocabulary, from the same stimuli, over and over again. A researcher could run simplified experiments on the robot to home in on how short- and long-term memory interrelate in learning. Befitting a data set many orders of magnitude larger, Roy's ambitions with Trisk are many orders of magnitude grander: He's trying to determine the optimal proportion of hardwired programming to learned behavior — nature versus nurture — in robots.

"My assumption has always been that if something is learned from the environment, it must be simpler that way," he says. "Nature builds in some simple learning principles and lets the environment do its job. But there's a counterpressure, which is that life is short." If the environment is stable, hardwiring knowledge into the brain is more efficient than making each generation learn it anew. By letting Trisk live the first few years of Dwayne's life — learning what he learns, with varied bodies of knowledge patched in — Roy hopes to gain new insight into the nature-nurture balance.

This may sound fantastic, and the Media Lab has a reputation for sometimes making promises as exaggerated and insubstantial as playground boasts. Even MacWhinney, the Carnegie Mellon researcher, is cautious, comparing Roy's investigations to humankind's first experiments with flight, reckoning that the full potential won't be realized for decades. Certainly, the relationship between nature and nurture won't be resolved simply by running 6 feet of firewire between Total Recall and Trisk. As proven by Roy's success with Toco, though, it's realistic to expect that Trisk can be given experiences roughly similar to Dwayne's and can be monitored as it accumulates and processes months or years of stimuli according to different learning algorithms. Alter the preprogramming and you change the balance between nature and nurture. The effect on Trisk's language acquisition won't tell us how humans actually learn, but at least we'll get some new ideas about what to look for as we monitor the next generation of children.

Baby Dwayne is already negotiating the twin forces of nature and nurture, though he's hardly in a position to talk about it. So far, the only word he's uttered is bath, and Roy isn't sure whether he means it as a description or a command, or whether he even understands the difference.

When he grows up, Dwayne Roy will be able to retrace his well-documented babyhood — watching himself wriggle around on the floor with his dad, playing made-up games, hearing his own first words. Like anyone's childhood, it will be a one-time event. But the robots trained by his father might live a thousand versions of Dwayne's life, babbling tirelessly, until one of them finally learns to talk.

Jonathon Keats ([email protected]) writes the Jargon Watch column and is the author of Control + Alt + Delete: A Dictionary of Cyberslang.

http://www.wired.com/wired/archive/15.04/truman_pr.html
 
AUA Lecture At The EAU 2007 "The Role Of Robotics In Urology"
02 Apr 2007

UroToday.com - Mani Menon, MD, of Detroit, Michigan, USA, presented "The Role of Robotics in Urology" as the AUA Lecture at the plenary session of the EAU on Friday, March 23, 2007.

He started with a 3-D video of a robotic laparoscopic radical prostatectomy. His technique was discussed with the video, which was received with applause. Dr. Menon has completed 3,100 of these cases. Despite no clear evidence that robotic RP has huge advantages over open surgery, there is significant growth in this market. The market is primarily coming from patient "advertising" to other patients by word of mouth and use of the internet. The perceived benefit is likely based upon decreased blood loss and quicker recovery. He hypothesized that this leads to decreased surgical and medical complications. Complications have a negative impact on hospital reimbursement. Based upon Medicare data, Begg in the NEJM in 2002 found that the complication rate from open radical prostatectomy was 28-35%. In a study by Dr. Lu-Yao, the surgical complication rates were virtually identical and medical complications were about 13-20%. Pure laparoscopic prostatectomy series report complication rates of about 11%. In his robotic data, medical complications were <1%. Thus, while surgical complication rates are not very different, medical complications are much lower. However, he pointed out that this was not randomized data, although with such a large difference (>20%) statistical methods suggest that randomized trials are not necessary to validate it. The difference between robotic and laparoscopic surgery may need better study, since only about a 10% difference in medical complications exists.

He stated that the benefits of minimally invasive surgery are physiological as well as surgical. Physiological benefits are more evident for complex procedures than simple procedures. Thus, the future of robotics in general, may be in making complex minimally invasive surgery safer across multiple surgical procedures. He did point out that his views are his own and do not reflect those of the AUA.

Reviewed by UroToday.com Contributing Editor Christopher P. Evans, M.D., FACS


Article URL:
http://www.medicalnewstoday.com/medical ... wsid=66766
 
Tuesday, April 03, 2007

Robotic Fleas Spring into Action

Tiny rubber bands can power microrobots that could serve as ultrasmall sensors.
By Duncan Graham-Rowe

An autonomous robotic flea has been developed that is capable of jumping nearly 30 times its height, thanks to what is arguably the world's smallest rubber band.

Swarms of such robots could eventually be used to create networks of distributed sensors for detecting chemicals or for military-surveillance purposes, says Sarah Bergbreiter, an electrical engineer at the University of California, Berkeley, who developed the robots.

The idea is that stretching a silicone rubber band just nine microns thick can enable these microrobotic devices to move by catapulting themselves into the air. Early tests show that the solar-powered bots can store enough energy to make a 7-millimeter robot jump 200 millimeters high.

This flealike ballistic jumping would enable these sensors to be mobile, covering relatively large distances and overcoming obstacles that would normally be a major problem for micrometer-sized bots, says Bergbreiter.

Such sensors could be scattered from a plane but may not land in the most ideal positions, so making them mobile could allow them to be repositioned, if somewhat haphazardly. "Distributed sensors in general give you the large picture," Bergbreiter says. This is because they can provide a more detailed resolution over a larger area compared with more-traditional nondistributed approaches to sensing.

"With miniature robots, hopping is a good option if you're trying to move over uneven terrains," says Metin Sitti, an assistant professor at the nanorobotics lab at the Robotics Institute at Carnegie Mellon University, in Pittsburgh. "At that size, the critical issue is power, so it is a good choice to store energy," he says.

The impressive jumping skills of insects such as fleas come from their ability to store energy in an elastomeric protein called resilin. This allows them to store a large amount of energy and then release it very suddenly as movement. But while insects store the energy through compressing an elastomer, Bergbreiter opted for a system that stretches one.

Working with Kris Pister as part of the Berkeley Smart Dust Project, which was set up to build distributed-sensor networks that can communicate over long distances using mesh networks, Bergbreiter aimed to give these kinds of sensors useful mobility. She created a tiny solar-cell array to power the device, a microcontroller to govern its behavior, and a series of microelectromechanical systems (MEMS) motors on a silicon substrate. The last were used as part of a ratcheting mechanism called inchworm motors, which draw two hooks apart as a means of stretching the rubber band.

Bergbreiter, in collaboration with the Smart Dust Project, created the rubber band by cutting a circular strip measuring just nine microns thick and two millimeters long out of a thin sheet of silicone using a very fine infrared laser. It was then hooked onto the robot's stretching mechanism using nothing more than a pair of ultraprecision tweezers, a stereoscopic microscope, and a steady hand. This was a bit like playing the children's game Operation, only harder, says Bergbreiter.

To test the robot prototype, Bergbreiter hooked it up so that rather than the bot actually jumping, its leg was positioned to kick an object. This allowed her to calculate the energy being released. So far Bergbreiter has only tried partially stretching the rubber band, which would achieve a jump of about 12 millimeters for the 10-milligram robot. However, she says that based on the results of this test, a full stretch would be capable of producing jumps as high as 200 millimeters, and they would cover roughly twice as much ground horizontally. The results will be presented next week at the International Conference on Robotics and Automation, in Rome, Italy.
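For a sense of scale, the stored energy and takeoff speed implied by those figures can be estimated from the mass and jump heights given above (a simplified calculation that ignores losses and the air drag discussed further down):

    # Minimum stored energy and takeoff speed for the quoted jumps,
    # ignoring drag and mechanical losses.
    import math

    m = 10e-6      # robot mass: 10 milligrams, in kilograms
    g = 9.81

    for h in (0.012, 0.200):      # partial-stretch and full-stretch jump heights, metres
        energy = m * g * h        # joules the rubber band must release
        v = math.sqrt(2 * g * h)  # takeoff speed, metres per second
        print(f"{h * 1000:.0f} mm jump: ~{energy * 1e6:.0f} microjoules, ~{v:.1f} m/s takeoff")

Even the full 200-millimeter jump calls for only about 20 microjoules per leap, which is why a hair-thin rubber band and a small solar cell can plausibly supply it.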

The current seven-millimeter-long prototype is still much larger than a flea. But Bergbreiter is keen to shrink the robot down to about one millimeter, or flea size. Also, she still needs to add the tiny photovoltaic solar cell that has been fabricated separately. "The next step is to put it all together," she says.

One of the benefits of making robots on the insect scale is that it is possible to generate very high takeoff velocities. This is why insects can achieve such relatively huge jumps. As an object is scaled down, its mass (which scales with volume) shrinks much faster than the force its actuators can produce (which scales with cross-sectional area), which in turn allows for much greater accelerations.

However, there is a trade-off. "Drag increases as you get smaller," says Bergbreiter. So the trick is to ensure that the bots' size offers enough benefits in terms of acceleration to outweigh the cost of any additional drag.

But generating this movement still requires more energy than the robot is capable of scavenging from its environment through its solar cells. This is often the case with autonomous robots, which is why storing the energy is necessary, says Chris Melhuish, a professor of robotics and director of the Bristol Robotics Laboratory at the University of Bristol and the University of the West of England, U.K.

It's probable that the only other way to cover such relatively large distances is through flight. "But flying adds a whole new set of challenges," says Bergbreiter. It requires very high-powered motors to flap wings or drive a propeller, and given the effect that wind can have on such small objects, there are major control issues. Jumping, on the other hand, would allow robots to move much greater distances without huge power requirements.

http://www.technologyreview.com/Infotech/18477/
 
Caterpillar robot 'treats hearts'

The robot is just a few centimetres long
A robotic caterpillar has been designed which can crawl across the surface of the heart to deliver treatment.
New Scientist reports a prototype of the HeartLander device, created by US researchers, has been tested on pigs.

The tiny robot, just a few centimetres long, can move at up to 18 centimetres per minute, controlled by "push and pull" wires from outside the body.

The British Heart Foundation said the "caterpillar" could one day be useful, but much more research was needed.


The HeartLander has been designed by scientists at Carnegie Mellon University's Robotics Institute in Pittsburgh, Pennsylvania.

The study on pigs found it could fit pacemaker leads and inject dye into the animals' hearts.

It uses two sucker-like feet with which it can crawl across the heart.

It is inserted below the ribcage by keyhole surgery and is attached to the heart via a vacuum line to the suckers.

Treating damaged tissue

Dr Cameron Riviere, who led the research, says its use could allow procedures to be carried out without having to stop the heart, reducing the risk of illness linked to heart bypass procedures.

He added that not having to stop the heart, and being able to implant the device directly onto the heart rather than having to go past the lungs could benefit patients in other ways.

"It could mean a patient did not need general anaesthetic, and may be able to go home the same day."

The team also hope it will be possible to add a radio-frequency probe to the robot to treat faulty heart rhythms by killing damaged tissue.

Adding a camera to the device, rather than relying on the magnetic tracker on the skin which is currently being used, would help surgeons see specifically where the robot was on the heart's surface.

The HeartLander could be available for surgical use within three to four years, Dr Riviere said.

Professor Peter Weissberg, medical director of the British Heart Foundation, said: "Whilst this is interesting, it remains to be seen whether it can deliver useful treatments for heart patients.

"This could theoretically be a vehicle for delivering cell therapies to damaged areas of the heart, so could ultimately be a useful tool, but at the moment we still don't know if such therapies work.

"A lot more research is needed to determine whether something delivered to the outside surface of the heart can modify activity on the inside - so this is interesting, but currently a long way from practical use in heart therapy."

http://news.bbc.co.uk/2/hi/health/6569283.stm
 
Tuesday, April 17, 2007
A Robust Robot for the Elderly
Domo the robot is designed for the unpredictability of household chores.
By Rachel Ross
For more than a decade, roboticists have worked on systems for the elderly, hoping to extend the amount of time that seniors can live at home and improve their quality of life. Now MIT researchers have built a humanoid robot with a special motion-tracking system and spring-loaded actuators that make it better equipped to deal with household chores. The robot, named Domo, can size up an object by shaking it in its hand and then put it away in a cupboard.

"Demographics are changing, particularly in Japan, Europe, and the U.S.," says Aaron Edsinger, a lead researcher on the Domo project and a postdoctoral student at MIT's computer-science and artificial-intelligence lab. "There are a lot of people that are getting older and not a lot of young people to take care of them."

But developing a multipurpose robot for the elderly hasn't been easy because the home environment is so unpredictable. Industrial robots, which are widely used in manufacturing, work with parts that come in standard shapes and sizes. Food, however, does not. So a simple task such as putting away groceries can become quite complicated.

Domo takes that variability into account. Instead of preprogramming the robot so that it only knows how to deal with cans and boxes with certain dimensions, Edsinger has Domo size up each item--one at a time--before deciding how it should be stored.

The shelving process begins when a human puts an item in one of the robot's hands. The robot then determines the object's dimensions based on grip and video analysis. First, the robot wiggles the object in its hand while video cameras in the robot's head record the movement. The robot knows how much force it applied with the wiggle, so it knows how much the object it's holding should move. Using special motion-capture software, Domo finds the object in the video that moves as predicted and assumes it is the item in its hand.

Now that the robot has identified the item to be shelved, Domo must determine its shape and size. If it's a small object that fits in the robot's hand, it can determine the object's size based on its grip. For long objects, the robot must perform more video analysis.

Knowing that the tip of a long object will wiggle quicker than the rest, the software isolates the part of the object moving the fastest and considers it to be the point farthest away from the robot's hand. Once the robot knows the object's dimensions, it can determine how best to place it in the cupboard. "If it's a pack of spaghetti, it will lay it on its side instead of trying to stand it upright," Edsinger says.
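Reduced to its essentials, the tip-finding step is a matter of picking the fastest-moving tracked feature and measuring its distance from the hand. The snippet below sketches that idea with an assumed data layout; it is not Edsinger's code.

    # Simplified sketch of estimating an object's length from a wiggle:
    # the fastest-moving tracked feature is taken to be the tip.
    # The (x, y, speed) layout is an assumption for illustration.
    import math

    def estimate_length(tracked_points, hand_xy):
        """tracked_points: list of (x, y, speed) for features on the held object."""
        tip = max(tracked_points, key=lambda p: p[2])       # fastest-moving feature
        return math.dist(hand_xy, (tip[0], tip[1]))

    # A long object held at (0, 0), with the far end wiggling fastest:
    points = [(5, 1, 0.2), (15, 2, 0.6), (28, 3, 1.4)]
    print(estimate_length(points, (0, 0)))                  # ~28 units from hand to tip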

This might seem like a trivial task, but that's largely because humans tend to underestimate the complexity of their daily activities. Identifying and placing objects requires a lot of different processes. The beauty of Domo is that it's a very integrated system and can handle many processes at once. That's why Domo can handle the unexpected; the same algorithm that works for a water bottle will work for a box of spaghetti.

Domo can also perform basic insertion tasks, such as placing a spoon in a bowl, and help with tidying up the house by carrying around a box in which the human can put clutter. "I can hand it a box of any size, and it can hold it between its two hands, track me, and keep the box nearby," Edsinger says.

Domo, which was created for research purposes, will probably never make it onto store shelves--or into anyone's kitchen. But the research that goes into Domo will likely be used by other roboticists in their quest to create the ideal domestic robot. For example, a robot's ability to find the tip of an object is extremely helpful for scientists developing robots that can work with household tools.

Illah Nourbakhsh, a robotics professor at Carnegie Mellon University, is impressed with the special springs incorporated into Domo's actuators. These springs, known as series elastic actuators, can be found in 22 of the robot's 29 joints. The actuators let the robot know how much force is applied by an external object and act as shock absorbers if the robot hits something. By making the system tolerant to bumps, it's safer for both the robot and the human. "In a car assembly plant, you have sensors around the robots so people can never get near them," Nourbakhsh says. But with a home-care robot, the situation is quite different: one wants the human and the robot to be able to work in close quarters.

However, whether a humanoid machine is the best robotic solution to elder care remains controversial. Sebastian Thrun, director of Stanford's artificial-intelligence lab, questions whether it's necessary for the robot to resemble a human. "It's a great project, but by going to a humanoid form, the problem becomes harder than it needs to be," Thrun says. A robot arm mounted to a cabinet might be a simpler solution to the grocery problem, for example.

Nourbakhsh agrees. "The problem is [that] making a general-purpose robot with a human form is extremely expensive," he says. If the humanoid is mobile, then power requirements also become a problem. Nourbakhsh says that existing batteries don't run for long enough to handle routine housework. He says he envisions a future elder-care system in which the robots are incorporated into standard appliances, such as stoves and refrigerators, so that they "disappear into the world around you."

http://www.technologyreview.com/Infotech/18537/
 
Robotic Surgeon To Team Up With Doctors And Astronauts On NASA Mission

Raven, the mobile surgical robot developed in the UW's BioRobotics Lab, weighs about 50 pounds. Its nimble appendages can suture wounds and perform minimally invasive surgeries. Credit: David Clugston
by Staff Writers
Seattle WA (SPX) Apr 19, 2007
This week Raven, the mobile surgical robot developed by the University of Washington, leaves for the depths of the Atlantic Ocean. The UW will participate in NASA's mission to submerge a surgeon and robotic gear in a simulated spaceship.
For 12 days the surgical robotic system will be put through its paces in an underwater capsule that mimics conditions in a space shuttle. Surgeons back in Seattle will guide its movements.

The 12th NASA Extreme Environment Mission Operations test will take place May 7 to 18 off the coast of Florida. The robot leaves Seattle on Friday. During the mission, Raven will operate in the Aquarius Undersea Laboratory, a submarine-like research pod about 60 feet underwater. This mission will test current technology for sending remote-controlled surgical robotic systems into space.

During the mission, four crew members will assemble the robot and perform experiments. The two larger-than-life black robotic arms will use surgical instruments to suture a piece of rubber and move blocks from one spindle to another on what looks like a delicate children's toy.

The brains behind the robot's movements will be three surgeons in front of a computer screen in Seattle: Drs. Mika Sinanan and Andrew Wright of the University of Washington's Medical Center, and Dr. Thomas Lendvay of Children's Hospital and Regional Medical Center in Seattle.

Instructions will travel over a commercial Internet connection from Seattle to Key Largo, Fla., then via a special wireless connection from there to a buoy, and finally via cable underwater. Images of the simulated patient will travel back over the same network.

Raven was built over the past five years in the UW's BioRobotics Lab, co-directed by professor Blake Hannaford and research associate professor Jacob Rosen in the department of electrical engineering, with partners in the UW's department of surgery. The da Vinci surgical robot, which is used at the UW and elsewhere, weighs nearly a half-ton. Raven weighs only 50 pounds.

Lightweight, mobile robots could travel to wounded soldiers on the battlefield to treat combat injuries. Surgical robotic systems also could be used in disaster areas so doctors worldwide could perform emergency procedures.

The robots could even travel to remote areas in the developing world so local doctors could get help on difficult procedures. NASA will test the robot's suitability for a mission to space, where it could perform emergency surgery without requiring a surgeon to be onboard.

Raven went on its first road trip last summer to California's Simi Valley. Researchers installed an operating-room tent in gusting winds and temperatures nearing 100 degrees F (38 C), and hooked the equipment up to gasoline-powered generators. Surgeons completed the first field test, communicating with the operating tent via an unmanned aircraft equipped with a wireless transmitter.

The NASA mission poses new challenges. Researchers shrank the computers and power supplies that support the robot so they can be carried in dive bags by technical scuba divers and fit into the limited space. Most importantly, the engineers wrote an instructional manual so crew members could reassemble the robot and troubleshoot any problems they encounter.

"When you build a technology as a lab prototype, it takes someone with a Ph.D. six weeks to put it together," Hannaford said. "If you build something for the field, it's got to be repairable, modular and robust."

Once everything is installed in the undersea lab the crew will be alone with the robot. Crew members can communicate by phone with the ground team but they will have to operate the robot and fix any problems on their own. The four-person crew includes research collaborator and surgeon Dr. Tim Broderick of the University of Cincinnati, who will observe the robot's movements and determine its suitability for space travel. Two NASA astronauts and a NASA flight surgeon complete the crew.

Also traveling to the research pod is the M7, a surgical robot developed by SRI International in Menlo Park, Calif. These two robots are the only existing prototypes for a mobile surgical robot, Hannaford said. Currently both robots are research projects and are not yet approved by the Food and Drug Administration for use on humans.

The UW's research is funded by grants from the U.S. Army's Telemedicine and Advanced Technology Research Center, the Defense Advanced Research Projects Agency and the Department of Defense's Peer Reviewed Medical Research Program.


 
Monday, April 30, 2007
Wall-Climbing Robot
A newly created robot improves upon a gecko's sticking power.
By Duncan Graham-Rowe
Researchers have created a robot that can run up a wall as smooth as glass and onto the ceiling at a rate of six centimeters a second. The robot currently uses a dry elastomer adhesive, but the research group is testing a new geckolike, ultrasticky fiber on its feet that should make it up to five times stickier.

It's not the first robot to use fiberlike dry adhesives to stick to surfaces, says Metin Sitti, an assistant professor of mechanical engineering, who led the research at the Robotics Institute at Carnegie Mellon University (CMU), in Pittsburgh. But this robot should prove to have far greater sticking power, thanks to fibers that are twice as adhesive as those used by geckos.

Such robots could, among other applications, be used to inspect the hulls of spacecraft for damage, their stickiness ensuring that they would stay attached.

In addition to its sticky feet, the robot uses two triangular wheel-like legs, each with three foot pads, and a tail to enable it to move with considerable agility compared with other robots, says Sitti. Not only can it turn very sharply, but its novel design allows it to transfer from floor to wall and wall to ceiling with great ease.

"It is very compact and has great maneuverability," says Mark Cutkosky, a professor of mechanical engineering and codirector of the Center for Design Research at California's Stanford University. "It is a practical solution for climbing."

Geckos are able to stick to surfaces thanks to very fine hairlike structures on their feet called setae. These angled fibers split into even finer fibers toward their tips, giving the gecko's foot a spatula-like appearance. These end fibers have incredibly weak intermolecular forces to thank for their adhesiveness: the attractive forces act between the fiber tips and the surface they are sticking to. Individually, the forces are negligible, but because the setae form such high areas of contact with surfaces, the forces add up.

In the past few years, a number of research groups have fabricated fiber structures designed to emulate setae. But Sitti's group has tried to improve upon the gecko's design. Using microfabrication techniques, Sitti and his colleagues created fibers just four micrometers in diameter--two orders of magnitude smaller than those used in any other robots. "This size difference makes a significant difference," says Sitti. This is because scaling down the fibers increases their surface contact and hence enhances adhesion.
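A back-of-the-envelope calculation shows why many weak contacts add up to a useful grip: assume some tiny adhesive force per fiber tip and multiply by how many tips fit under a foot pad. The per-fiber force and packing fraction below are assumptions chosen for illustration, not measurements from Sitti's robot or from real setae.

    # Back-of-the-envelope sketch of "weak forces x many fibers = useful adhesion".
    # The per-fiber force and packing fraction are illustrative assumptions,
    # not measured values for the CMU robot or for real gecko setae.

    import math

    FIBER_DIAMETER_M = 4e-6        # from the article: roughly 4-micrometer fibers
    PER_FIBER_FORCE_N = 1e-6       # assumed adhesion per fiber tip (1 micronewton)
    PACKING_FRACTION = 0.3         # assumed fraction of pad area covered by tips

    def fibers_per_area(diameter_m, packing):
        """Approximate number of fiber tips per square metre of pad."""
        tip_area = math.pi * (diameter_m / 2) ** 2
        return packing / tip_area

    def adhesion_per_pad(pad_area_m2):
        """Total adhesive force for one foot pad, in newtons."""
        return fibers_per_area(FIBER_DIAMETER_M, PACKING_FRACTION) * pad_area_m2 * PER_FIBER_FORCE_N

    # Roughly 2.4 N for a 1 cm^2 pad under these assumptions; with two or more
    # pads down, that exceeds the weight of a robot of a few hundred grams.
    print(round(adhesion_per_pad(1e-4), 2))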

Using the commercial elastomer adhesives, the robot can already climb far more nimbly than any other robot. But the fibers should make it possible for the robot to climb even rough surfaces, says Sitti. However, having only just integrated them into the robot, the researchers have yet to demonstrate this.

One of the challenges in making a robot stick to walls lies in finding a way to apply sufficient pressure to make them stick. The new CMU robot handles this using a tail. At any one moment, at least two of its six foot pads are in contact with the surface, as is the tail, which is spring-loaded so that it will always push against the surface, even when on the ceiling.

However, in developing these materials, the researchers still need to resolve some issues, says Andre Geim, a professor of condensed-matter physics at the University of Manchester, in the United Kingdom, who has also fabricated setaelike structures. "No one has yet explained why geckos can first run on a dirt road picking up dust and then somehow climb up walls," he says. "This is a major obstacle."

Cutkosky agrees that more research needs to be done into the self-cleaning abilities of geckos. "The world is dirty, and robots cannot be stopping to wash their feet every few meters," he says.

http://www.technologyreview.com/Infotech/18602/
 
Published online: 27 April 2007 | doi:10.1038/news070423-12


Robot built to spy on whales
Autonomous device should help protect animals from ships.
Katharine Sanderson


An underwater robot that can hear the calls of whales, and so help ships to avoid them, has just been successfully trialled in the Bahamas.

The scheme relies on a torpedo-shaped glider that zig-zags through the ocean. It can dive down as far as 200 metres below the surface and directs itself by shifting a weight from fore to aft. A microphone attached to the bottom of the glider can pick up calls from all whales, including the high frequency call of the beaked whale, which until now has been difficult to detect. The glider returns periodically to the surface to radio its data back to base, or if that's too far away, it can call a satellite phone and send its information anywhere in the world.

The new device is mounted on a Slocum Glider — a craft built by Webb Research, a company based in Falmouth, Massachusetts.

"We are entering a new era of underwater sensing," says Jim Theriault of Defence Research and Development Canada, Dartmouth, who ran the trial. "We can put a glider in the Bahamas and monitor it in Nova Scotia."

The hope is that naval or other ocean-going operations that use sonar will be able to track whales more easily, and so avoid using their noisy equipment when the animals are close by. There is circumstantial evidence that sonar can upset whales, and a number of strandings have been seen shortly after naval sonar operations. "We're trying to lower the potential risk by knowing the animals are there," says Theriault.

Now hear this




A quiet glide means the craft won't disturb whales itself. (Credit: DRDC Atlantic)

Baleen whales have been tracked by autonomous gliders before, by researchers at Woods Hole Oceanographic Institution, Massachusetts. But it is easier to hear the baleen whale, with its lower-frequency call, than the beaked whales, says Theriault. More data need to be collected to capture higher frequencies, he notes, and this has been a limitation for the relatively small, simple systems used on autonomous subs.

Theriault's glider has a signal processor capable of collecting that data. It can also use the frequency and pattern of detected calls to tell the difference between species; initial analysis is done on the glider before it comes to the surface. "At that point it already thinks it knows whether it has a beaked whale or a sperm whale," says Theriault.
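The article does not give the glider's detection code, but the approach Theriault describes (use the frequency band and the call or click pattern to make a first guess at the species before surfacing) can be sketched roughly as below. The band edges and repetition rates are rough illustrative figures, not the real parameters of the DRDC or Webb Research software.

    # Rough sketch of onboard call classification by frequency band and
    # repetition rate. The band edges and rates are illustrative guesses,
    # not the actual parameters of the glider's signal processor.

    def classify_call(peak_freq_hz, clicks_per_second):
        """Very crude first-pass species guess from a detected call."""
        if peak_freq_hz < 1000:
            return "baleen whale (low-frequency call)"
        if peak_freq_hz > 20000 and clicks_per_second > 100:
            return "beaked whale (high-frequency click train)"
        if 5000 < peak_freq_hz <= 20000:
            return "sperm whale (regular clicks)"
        return "unknown"

    detections = [(300, 0.5), (35000, 250), (12000, 2)]
    for freq, rate in detections:
        print(freq, "Hz,", rate, "clicks/s ->", classify_call(freq, rate))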

Another major limitation in whale tracking has been data transmission — previously, observers had to follow their gliders on a ship and stay in the line of sight of the contraption. Other methods of listening in on whale conversations have used fixed microphones on the ocean floor with a cable leading back to shore. Being able to move around and transmit data across the world is a major advantage. "With this sort of device you could survey a much wider area," says Peter Liss from the University of East Anglia in Norwich, UK, and chairman of the UK government's Inter-Agency Committee on Marine Science and Technology. "It sounds like a very good idea."

Stealth surveillance

Liss suggests that these gliders could be used in research to finally pin down whether noise does actually upset whales. "The link is probably there, but rather tentative," he says. Since the glider is quiet, and isn't being followed at a close distance by a noisy ship, it should be able to gather the data needed to prove — or disprove — a link between sonar and whale strandings, he suggests.

The glider being trialled runs on batteries, and can last up to a month. But plans are afoot to make a low-power glider that can prowl the oceans for up to 5 years. To do this, the glider would contain a waxy gel that changes density under different temperature conditions, thus changing its buoyancy. This would make the glider rise and fall as it entered warm and cold patches of water. Temperature and phase changes in the gel could also be harnessed to charge a battery.

The system was trialled in February, and another test is planned for July. The Australian government is also going to use Theriault's system this June to look for whales in an area where none have been spotted by eye, but where they are thought to live.




Story from news@nature.com:
http://news.nature.com//news/2007/070423/070423-12.html
 
'Guessing' robots find their way

The robots use educated "guesswork" to find their way around
Robots that use "guesswork" to navigate through unfamiliar surroundings are being developed by US researchers.
The mobile machines create maps of areas they have already explored and then use this information to predict what unknown environments will be like.

Trials in office buildings showed that the robots were able to find their way around, New Scientist reported.

Making robots that can navigate without prior knowledge of their surroundings was a huge challenge, the team said.

It works well in indoor environments

Professor George Lee

Most mobile robots do this using a technique called SLAM (simultaneous localisation and mapping), whereby they build up a map of their unknown environment, using various sensors, whilst keeping track of their current position at the same time.

But this technique is slow because a robot must explore a great deal of terrain to know its precise location. It is also prone to errors.

So the team from Purdue University, in Indiana, has developed a new approach.

The robots create a 2D map of the area they are exploring, but when they come to an unknown area, they check back through this information to see if it seems similar to any areas they have already explored.

They do this using an algorithm - a step-by-step problem solving procedure.

Professor George Lee, who carried out the research, said: "The robot gets to a new area and thinks: 'Have I seen these sorts of things before?' Then it goes back and looks at its stored data.

"It might then think: 'Hey, this is very, very similar to something I've seen before, I don't need to explore that room or corner.' And this saves time for it to explore other areas."

He said it was similar to the human navigational process, where we build up a "mental map" of our surroundings by recognising familiar sights.
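Neither report reproduces the algorithm itself, but the core step Lee describes, comparing the patch of map just built against patches already stored, can be illustrated with a simple grid-overlap score. This is a toy sketch of the idea, not the Purdue code.

    # Toy sketch of "have I seen something like this before?" map matching.
    # Maps are tiny occupancy grids (1 = wall, 0 = free). This illustrates the
    # idea only; it is not the Purdue implementation.

    def similarity(patch_a, patch_b):
        """Fraction of cells on which two equally sized patches agree."""
        cells = [(a == b) for row_a, row_b in zip(patch_a, patch_b)
                          for a, b in zip(row_a, row_b)]
        return sum(cells) / len(cells)

    def looks_familiar(new_patch, stored_patches, threshold=0.9):
        """Return the best stored match if it is similar enough, else None."""
        best = max(stored_patches, key=lambda p: similarity(new_patch, p))
        return best if similarity(new_patch, best) >= threshold else None

    corridor = [[1, 0, 0, 1],
                [1, 0, 0, 1],
                [1, 0, 0, 1]]
    new_area = [[1, 0, 0, 1],
                [1, 0, 0, 1],
                [1, 1, 0, 1]]   # one cell differs from the stored corridor

    match = looks_familiar(new_area, [corridor])
    print("familiar" if match else "explore it properly")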

Some limitations

The scientists first tested the algorithm using virtual mazes and offices. Their computer models revealed that the robots could navigate successfully while exploring a third less of their environment than robots that simply used SLAM.

Then tests carried out using real robots inside a university office building showed that the new navigation technique was also faster and less prone to errors than SLAM.

However, the new method did have some limitations, Professor Lee said.

"Indoors, in places like office buildings, it works well; outdoors, where the scene isn't as repetitive, the result is not that good."

Self-navigating robots could have many applications, Professor Lee told the BBC News Website.

The US defence department is currently focusing on self-driving automobiles.

Professor Lee's research was funded by the National Science Foundation and was published in the journal IEEE Transactions on Robotics.


http://news.bbc.co.uk/2/hi/technology/6638209.stm
 
Counting Down To RoboCup 2007 Atlanta
Main Category: Medical Devices News
Article Date: 17 May 2007 - 16:00 PDT



The countdown begins for RoboCup 2007 Atlanta. The world's most renowned competition for research robotics, RoboCup 2007 Atlanta will be held at Georgia Tech July 3-10. Approximately 2,000 students and faculty from leading universities, high schools and middle schools from more than 20 countries will descend on Tech's campus to participate in events ranging from four-legged and humanoid robotic soccer games to search-and-rescue competitions. This year features a demonstration of the Nanogram League, a competition between microscopic robots. KUKA Robotics Corporation, a leading global manufacturer of industrial robots, is the event's premier sponsor.

"As an emerging global leader in robotics research and innovation, Georgia Tech is pleased to host RoboCup 2007," said Tucker Balch, Georgia Tech College of Computing associate professor and RoboCup 2007 Atlanta general chair. "We welcome the international robotics community to our campus and look forward to the exciting competition."

Other major sponsors include CITIZEN, Lockheed Martin, Microsoft and the National Science Foundation.

This summer is Robot Summer at Georgia Tech. In addition to RoboCup 2007 Atlanta, Georgia Tech will also host several other robotics-related events, including the Robotics: Science and Systems (RSS) conference and an International Aerial Robotics Competition.

RoboCup 2007 Atlanta Schedule:

July 3: RoboCup Opening Ceremony
July 3-6: RoboCup Qualifying Competitions
July 7-8: RoboCup Finals
July 9-10: RoboCup Symposium

###

About RoboCup:

RoboCup is an international research and education initiative. Its goal is to foster artificial intelligence and robotics research by providing a standard problem where a wide range of technologies can be examined and integrated. The concept of soccer-playing robots was first introduced in 1993. In July 1997, the first official conference and games were held in Nagoya, Japan, followed by Paris, Stockholm, Melbourne, Seattle, Fukuoka/Busan, Padua, Lisbon, Osaka and Bremen. This year, the 11th anniversary of RoboCup, the competition and symposium are being held in Atlanta, Georgia. For more details about RoboCup 2007 including participants and updated schedule, visit http://www.robocup-us.org/.

Contact: Rebecca Biggs
Georgia Institute of Technology

http://www.medicalnewstoday.com/medical ... wsid=70938
 
Robotic Cable Crawler

Eric Mika

Burying power cables underground has uncluttered the streets and kept lights on through storms, but water seepage, natural disasters, and general wear and tear can still cut power. As a result, a large utility company typically employs 4,000 workers and spends up to $200 million annually to monitor and maintain tens of thousands of miles of subterranean cables. Soon, instead of sending a crew to put a cable through high-voltage stress tests every time there's a mishap, companies could deploy a robot to pinpoint the problem. Researchers at the University of Washington have invented the Robotic Cable Inspection System, or Cruiser, a four-foot-long, train-like 'bot that crawls along power cables buried in utility tunnels, sniffing out trouble spots along the way.
Cruiser coasts along on hourglass-shaped wheels, and adjustable stabilizer arms keep it upright. The segmented design snakes around curves and allows for modular expansion of the robot, making it possible to add extra sensors or battery packs without a major overhaul. Human operators can upload a basic mission plan, which the robot's circuit-board brain fine-tunes as it encounters damaged cable.

Last December, Cruiser aced its first field test, inspecting segments of cable for post-hurricane water damage in New Orleans. Several large utility companies have already expressed interest in the robot, and a commercial version could roll out as soon as 2012.




Illustration credit: Graham Murdoch

HOW IT WORKS

Acoustics
An acoustic sensor listens for electrical sparks inside the cable bundle, a sure sign of failing insulation.

Heat Sensing
Hotspots indicate that the conductors or insulators are decaying. An infrared thermal sensor sends real-time temperature data back to the command computer.

Mobility
Electric motors drive hourglass-shaped wheels that straddle the cable crest to help the 'bot keep its balance, and stabilizer arms prevent it from rolling over.

Vision
The 'bot beams the view from a front-mounted video camera to the command computer, where human eyes can look out for obstructions and sharp turns that could overwhelm its programming.
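The magazine describes the sensors only qualitatively, but the way such a bot might flag trouble spots can be sketched as a simple fusion rule: combine acoustic hits and infrared hotspots into one report per cable segment. The thresholds and record format below are invented for illustration and are not the University of Washington system's actual interface.

    # Illustrative sketch of flagging trouble spots from Cruiser-style sensor
    # readings. The thresholds and record format are assumptions, not the
    # real system's interface.

    HOTSPOT_MARGIN_C = 15.0    # assumed: degrees above the running average
    DISCHARGE_DB = 6.0         # assumed: acoustic level that counts as sparking

    def flag_segment(reading, avg_temp_c):
        """Return a list of reasons this cable segment deserves a closer look."""
        reasons = []
        if reading["acoustic_db"] >= DISCHARGE_DB:
            reasons.append("possible partial discharge (failing insulation)")
        if reading["temp_c"] - avg_temp_c >= HOTSPOT_MARGIN_C:
            reasons.append("thermal hotspot (decaying conductor or insulator)")
        return reasons

    log = [
        {"pos_m": 120, "acoustic_db": 2.1, "temp_c": 41.0},
        {"pos_m": 135, "acoustic_db": 7.4, "temp_c": 63.0},
    ]
    average = sum(r["temp_c"] for r in log) / len(log)
    for r in log:
        issues = flag_segment(r, average)
        if issues:
            print(r["pos_m"], "m:", "; ".join(issues))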

www.popsci.com/popsci/technology/b402c2 ... drcrd.html
 
Move to Create Less Clumsy Robots

Move to create less clumsy robots



The race to create more human-like robots stepped up a gear this week as scientists in Spain set about building an artificial cerebellum.

The end-game of the two-year project is to implant the man-made cerebellum in a robot to make movements and interaction with humans more natural.

The cerebellum is the part of the brain that controls motor functions.

Researchers hope that the work might also yield clues to treat cognitive diseases such as Parkinson's.

The research, being undertaken at the Department of Architecture and Computing Technology at the University of Granada, is part of a wider European project dubbed Sensopac.

Sensopac brings together electronic engineers, physicists and neuroscientists from a range of universities including Edinburgh, Israel and Paris with groups such as the German Aerospace Centre. It has 6.5m euros of funding from the European Commission.

Its target is to incorporate the cerebellum into a robot designed by the German Aerospace Centre in two years' time.

The work at the University of Granada is concentrating on the design of microchips that incorporate a full neuronal system, emulating the way the cerebellum interacts with the human nervous system.
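The article does not describe the chip design, but one classic computational abstraction of what the cerebellum is thought to do, namely learn a feedforward correction from the errors made on previous attempts at the same movement, can be sketched in a few lines. This is a textbook-style toy under that assumption, not the Granada group's circuitry.

    # Toy sketch of cerebellum-style motor learning: over repeated attempts at
    # the same movement, a feedforward correction is learned for each step of
    # the trajectory from the error seen on the previous attempt. Purely
    # illustrative; not the Sensopac/Granada design.

    LEARNING_RATE = 0.5
    STEPS = 5

    def run_trial(correction):
        """One attempt at the same reach; returns the error at each step."""
        position, errors = 0.0, []
        for t in range(STEPS):
            target = (t + 1) / STEPS                  # ramp up towards 1.0
            command = 0.6 * (target - position) + correction[t]
            position += 0.8 * command                 # sluggish, imperfect "arm"
            errors.append(target - position)
        return errors

    correction = [0.0] * STEPS
    for trial in range(10):
        errors = run_trial(correction)
        # cerebellum-like update: adjust the feedforward term used next time
        correction = [c + LEARNING_RATE * e for c, e in zip(correction, errors)]
        print(trial, round(max(abs(e) for e in errors), 3))   # error shrinks across trials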

Implanting the man-made cerebellum in a robot would allow it to manipulate and interact with other objects with far greater subtlety than industrial robots can currently manage, said researcher Professor Eduardo Ros Vidal, who is co-ordinating work at the University of Granada.

"Although robots are increasingly more important to our society and have more advanced technology, they cannot yet do certain tasks like those carried out by mammals," he said.

"We have been talking about humanoids for years but we do not yet see them on the street or use the unlimited possibilities they offer us," he added.

One use of such robots would be as home-helps for disabled people.

The next stage of the Sensopac project is to develop an artificial skin for robots, making them look more human-like as well as being information-sensitive in the same way as human skin is.

This system is being developed by the German Aerospace Centre in collaboration with other research groups.

The ambitious project is just one of many attempts to create more human-like robots.

Another European research project - dubbed Feelix Growing - has been given 2.3m euros to develop robots that can learn from humans and respond socially and emotionally.

The medical community is making huge strides in the use of man-made parts for failures in the human brain. Last year US scientists implanted a sensor in a paralysed man's brain that has enabled him to control objects by using his thoughts alone.

The fast pace of current robotics research has prompted deeper questions about how androids would be integrated into human society.

Some have called for a code of ethics for robots while others question how humans will cope in the face of machine intelligence.

Story from BBC NEWS:

Published: 2007/05/29 13:19:42 GMT
 
Robot Scans Ancient Manuscript in 3-D
Amy Hackney Blackwell 06.05.07 | 2:00 AM

Researcher Matt Field wields the Faro laser, a scanner mounted on a robotic arm that will scan the manuscript in three dimensions.

After a thousand years stuck on a dusty library shelf, the oldest copy of Homer's Iliad is about to go into digital circulation.

A team of scholars traveled to a medieval library in Venice to create an ultra-precise 3-D copy of the ancient manuscript -- complete with every wrinkle, rip and imperfection -- using a laser scanner mounted on a robot arm.

A high-resolution, 3-D copy of the entire 645-page parchment book, plus a searchable transcription, will be made available online under a Creative Commons license.

The Venetus A is the oldest existing copy of Homer's Iliad and the primary source for all modern editions of the poem. It lives in Venice at the ancient Public Library of St. Mark. It is easily damaged. Few people have seen it. The last photographic copy was made in 1901.

I was lucky enough to see the manuscript when I went to Venice with my husband, Christopher Blackwell, who is part of a team organized by the Harvard Center for Hellenic Studies to photograph and digitize the ancient book.

The idea is "to use our 3-D data to create a 'virtual book' showing the Venetus in its natural form, in a way that few scholars would ever be able to access," says Matt Field, a University of Kentucky researcher who scanned the pages. "It's not often that you see this kind of collaboration between the humanities and the technical fields."

Venice is not the most convenient work site. All the gear had to come by boat and be carried or dragged up the stairs of the library. Built in the 1500s, the library has been renovated periodically, but its builders never envisioned a need for big lights, a motorized cradle, 17 computers or wireless internet.

The group set up shop in an upstairs room, using their own electrical cables and adapters to harness the library's modest power resources. They covered the window overlooking the Piazzetta San Marco with a black sheet to keep out sunlight that could damage the manuscript. They placed the book, the size and weight of a giant dictionary, on a custom cradle that holds it steady, and turned the lights down low.

No more than four people were allowed in the room at one time, to keep down heat and humidity. The conservator turned each page with his hands and set it against a plastic bar, where light air suction held it in place. The barn doors covering the lights were flung open only for the time it took the photographer to snap a shot with a Hasselblad H1 medium-format camera fitted with a 39-megapixel Phase One P45 digital back. As each page was photographed, the classics scholar on duty in the hallway outside the workroom would examine its image to make sure all the text was legible.

Then Field scanned each page to create a 3-D image. Using an ordinary flatbed scanner was out of the question -- it would flatten the delicate parchments. So Brent Seales, a computer scientist from the University of Kentucky's Center for Visualization and Virtual Environments, decided to use a laser scanner on a robot arm to make a 3-D scan of the pages.

Passing about an inch from the surface, the laser rapidly scanned back and forth, painting the page with laser light. The robot arm knows precisely where in space its "hand" is, creating a precise map of each page as it scans. The data is fed into a CAD program that renders an image of the manuscript page with all its crinkles and undulations.

"The resolution yields millions of 3-D points per page," Seales says.

To store the data, the team used a 1-terabyte redundant-disk storage system on a high-speed network. The classicists on duty backed up the data every evening on two 750-GB drives and on digital tape. Blackwell carried the hard drives home with him every night, rather than leave the data in the library.

The next step is making the images readable. The Venetus A is handwritten and contains ligatures and abbreviations that boggle most text-recognition software. So, this summer a group of graduate and undergraduate students of Greek will gather at the Center for Hellenic Studies in Washington, D.C., to produce XML transcriptions of the text. Eventually, their work will be posted online for anyone to search, as part of the Homer Multitext Project.

http://www.wired.com/gadgets/miscellane ... iliad_scan
 
A robot is built to rescue soldiers
WASHINGTON, June 7 (UPI) -- U.S. researchers are developing a remote-controlled robot designed to rescue injured or abducted soldiers without putting their comrades at risk.

The prototype of the nearly 6-foot-tall Battlefield Extraction-Assist Robot, called Bear, can lift nearly 300 pounds with one arm, and its developer, Vecna Technologies of College Park, Md., is focusing on improving its two-legged lower body.

Tracks on its thighs and shins allow the robot to climb over rough terrain or up and down stairs while crouching or kneeling. Wheels at its hips, knees and feet allow it to switch to two wheels to travel over smooth surfaces while adopting a variety of positions.

The robot's humanoid body and teddy bear-style head give it a friendly appearance.

"A really important thing when you're dealing with casualties is trying to maintain that human touch," said Gary Gilbert of the U.S. Army's Telemedicine and Advanced Technology Research Center, which provided the initial $1 million development funding. Congress has since added $1.1 million.

The robot can also load trucks and carry equipment.

Bear is expected to be ready for field testing within five years.

 
LINKY FOR BABY BOT
Humanoid toddler reacts to touch, sound

OSAKA, Japan - A group of scientists in Japan have developed a robot that acts like a toddler to better understand child development.

The Child-Robot with Biomimetic Body, or CB2, was developed by a team of researchers at Osaka University in western Japan and is designed to move just like a real child between 1 and 3 years old.

CB2, at just over 4 feet tall and weighing 73 pounds, changes facial expressions and can rock back and forth. The robot's movements are smooth as it is fitted with 56 actuators in lieu of muscle. It has 197 sensors for touch, small cameras working as eyes, and an audio sensor. CB2 can also speak using an artificial vocal cord.

When it stands up supported by a person, the robot wobbles like a child who is learning how to walk.

Minoru Asada, a professor at Osaka University who leads the project for the Japan Science and Technology Agency, said the robot was developed to learn more about child development.

"Our goal is to study human recognition development such as how the child learns a language, recognizes objects and learns to communicate with his father and mother," he said.
 
http://www.sciencedaily.com/releases/2007/06/070612152446.htm


Source: Purdue University
Date: June 13, 2007

Guessing Robots Predict Their Environments, Navigate Better
Science Daily — Engineers at Purdue University are developing robots able to make "educated guesses" about what lies ahead as they traverse unfamiliar surroundings, reducing the amount of time it takes to successfully navigate those environments.


C.S. George Lee, from left, a Purdue professor of electrical and computer engineering, works with doctoral student H. Jacky Chang to operate mobile robots using a software algorithm that enables robots to make "educated guesses" about what lies ahead as they traverse unfamiliar surroundings. The approach reduces the amount of time it takes to successfully navigate those environments. Future research will extend the concept to four robots working as a team to explore an unknown environment by sharing the mapped information through a wireless network. (Credit: Purdue News Service photo/David Umberger)

The method works by using a new software algorithm that enables a robot to create partial maps as it travels through an environment for the first time. The robot refers to this partial map to predict what lies ahead.

The more repetitive the environment, the more accurate the prediction and the easier it is for the robot to successfully navigate, said C.S. George Lee, a Purdue professor of electrical and computer engineering who specializes in robotics.

"For example, it's going to be easier to navigate a parking garage using this map because every floor is the same or very similar, and the same could be said for some office buildings," he said.

Both simulated and actual robots in the research used information from a laser rangefinder and odometer to measure the environment and create the maps of the layout.

The algorithm modifies an approach called SLAM, which originated in the 1980s. The name SLAM, for simultaneous localization and mapping, was coined in the early 1990s by Hugh F. Durrant-Whyte and John J. Leonard, then engineers at the University of Oxford in the United Kingdom.

SLAM uses data from sensors to orient a robot by drawing maps of the immediate environment. Because the new method uses those maps to predict what lies ahead, it is called P-SLAM.

"Its effectiveness depends on the presence of repeated features, similar shapes and symmetric structures, such as straight walls, right-angle corners and a layout that contains similar rooms," Lee said. "This technique enables a robot to make educated guesses about what lies ahead based on the portion of the environment already mapped."

Research findings were detailed in a paper that appeared in April in IEEE Transactions on Robotics, published by the Institute of Electrical and Electronics Engineers. The paper was authored by doctoral student H. Jacky Chang, Lee, assistant professor Yung-Hsiang Lu and associate professor Y. Charlie Hu, all in Purdue's School of Electrical and Computer Engineering.

Potential applications include domestic robots and military and law enforcement robots that search buildings and other environments.

The Purdue researchers tested their algorithm in both simulated robots and in a real robot navigating the corridors of a building on the Purdue campus. Findings showed that a simulated robot using the algorithms was able to successfully navigate a virtual maze while exploring 33 percent less of the environment than would ordinarily be required.

Future research will extend the concept to four robots working as a team, operating with ant-like efficiency to explore an unknown environment by sharing the mapped information through a wireless network. The researchers also will work toward creating an "object-based prediction" that recognizes elements such as doors and chairs, as well as increasing the robots' energy efficiency.

Robots operating without the knowledge contained in the maps must rely entirely on sensors to guide them through the environment. Those sensors, however, are sometimes inaccurate, and mechanical errors also cause the robot to stray slightly off course.

The algorithm enables robots to correct such errors by referring to the map, navigating more precisely and efficiently.

"When the robot makes a turn to round a corner, let's say there is some mechanical error and it turns slightly too sharp or not sharply enough," Lee said. "Then, if the robot continues to travel in a straight line that small turning error will result in a huge navigation error in the long run."

The research has been funded by the National Science Foundation.

In separate work, Purdue undergraduate students in a senior design class have developed a prototype firefighting robot called Firebot.

Note: This story has been adapted from a news release issued by Purdue University.
 
Robot Soccer World Cup Kicks Off

Robot soccer World Cup kicks off



A football tournament played by teams of robots has kicked off in Germany.

The 10th annual RoboCup, being held in Bremen, will see more than 400 teams of robots dribbling, tackling and shooting in an effort to become world champions.

Machines compete in 11 leagues including those designed for humanoid and four-legged robots.

The organisers of the tournament hope that in 2050 the winners of the RoboCup will be able to beat the human World Cup champions.

"RoboCup 2006 is the first step towards a vision," said Minoru Asada, president of the RoboCup Federation.

"This vision includes the development of a humanoid robot team of eleven players, which can win against a human soccer world champion team."

Teams from 36 countries have flocked to Bremen to take part in the tournament.

As well as providing a visual spectacle on the pitch, some robots will be helping out in other ways.

Live commentary of a number of matches is provided by a pair of robots developed by scientists from Carnegie Mellon University in the US.

Sango and Ami, as the duo are known, will explain the rules of the game and dissect fouls for spectators using synthesized voices.

"They don't talk at the same time," said Manuela Veloso, the Herbert Simon Professor of Computer Science and head of Carnegie Mellon's RoboCup teams.

"But if one is explaining a rule and a nice goal is made, the other has the ability to interrupt."

Sango and Ami also have very different personalities. Sango provides a very sober account of the game while Ami provides a more emotional response to proceedings.

Both celebrate by pumping their arms when a team scores.

As well as having novelty value, the RoboCup has a more serious side.

It is a chance for 2,500 experts in artificial intelligence and robot engineering to meet and trial their latest ideas.

Football is a useful test for robotics because it has so many different elements including movement, strategy and vision.

Researchers come to assess their sensors, artificial intelligence and software on the pitch.

"After 50 years within artificial intelligence, it has been determined that these things can be better researched using soccer than the game of chess," said Hans-Dieter Burkhard, the Vice President of the RoboCup Federation.

This year all eyes are on a team from Japan who are expected to do well in the humanoid category, while the current world champions from Germany are a force to be reckoned with in the four-legged tournament.

The championships run until 18 June and are then followed by a conference for two days where the teams can dissect their play and work on improvements before the big game in 2050.

Story from BBC NEWS:

Published: 2006/06/14 11:32:42 GMT

© BBC MMVII
 
Armed autonomous robots cause concern
10:32 07 July 2007
NewScientist.com news service
A MOVE to arm police robots with stun guns has been condemned by weapons researchers.

On 28 June, Taser International of Arizona announced plans to equip robots with stun guns. The US military already uses PackBot, made by iRobot of Massachusetts, to carry lethal weapons, but the new stun-capable robots could be used against civilians.

"The victim would have to receive shocks for longer, or repeatedly, to give police time to reach the scene and restrain them, which carries greater risk to their health," warns non-lethal weapons researcher Neil Davison, of the University of Bradford, UK.

"If someone is severely punished by an autonomous robot, who are you going to take to a tribunal?" asks Steve Wright, a security expert at Leeds Metropolitan University, UK.

www.newscientisttech.com/article/dn1220 ... ncern.html
 
Thursday, July 19, 2007
Robotic Insect Takes Off for the First Time

Researchers at Harvard have created a robotic fly that could one day be used for covert surveillance and detecting toxic chemicals.
By Rachel Ross

A life-size, robotic fly has taken flight at Harvard University. Weighing only 60 milligrams, with a wingspan of three centimeters, the tiny robot's movements are modeled on those of a real fly. While much work remains to be done on the mechanical insect, the researchers say that such small flying machines could one day be used as spies, or for detecting harmful chemicals.

"Nature makes the world's best fliers," says Robert Wood, leader of Harvard's robotic-fly project and a professor at the university's school of engineering and applied sciences.

The U.S. Defense Advanced Research Projects Agency is funding Wood's research in the hope that it will lead to stealth surveillance robots for the battlefield and urban environments. The robot's small size and fly-like appearance are critical to such missions. "You probably wouldn't notice a fly in the room, but you certainly would notice a hawk," Wood says.

Recreating a fly's efficient movements in a robot roughly the size of the real insect was difficult, however, because existing manufacturing processes couldn't be used to make the sturdy, lightweight parts required. The motors, bearings, and joints typically used for large-scale robots wouldn't work for something the size of a fly. "Simply scaling down existing macro-scale techniques will not come close to the performance that we need," Wood says.

Some extremely small parts can be made using the processes for creating microelectromechanical systems. But such processes require a lot of time and money. Wood and his colleagues at the University of California, Berkeley, needed a cheap, rapid fabrication process so they could easily produce different iterations of their designs.

Ultimately, the team developed its own fabrication process. Using laser micromachining, researchers cut thin sheets of carbon fiber into two-dimensional patterns that are accurate to a couple of micrometers. Sheets of polymer are cut using the same process. By carefully arranging the sheets of carbon fiber and polymer, the researchers are able to create functional parts.

For example, to create a flexure joint, the researchers arrange two tiny pieces of carbon composite and leave a gap in between. They then add a sheet of polymer perpendicularly across the two carbon pieces, like a tabletop on two short legs. Two new pieces of carbon fiber are placed at either end of the polymer, as a final top layer. Once all the pieces are cured together, the resulting part resembles the letter H: the center is flexible but the sides are rigid.

By fitting many little carbon-polymer pieces together, the researchers are able to create rather complicated parts that can bend and rotate precisely as required. To make parts that will move in response to electrical signals, the researchers incorporate electroactive polymers, which change shape when exposed to voltage. The entire fabrication process will be outlined in a paper appearing in an upcoming edition of the Journal of Mechanical Design.

After more than seven years of work studying flight dynamics and improving various parts, Wood's fly finally took off this spring. "When I got the fly to take off, I was literally jumping up and down in the lab," he says.

Other researchers have built robots that mimic insects, but this is the first two-winged robot built on such a small scale that can take off using the same motions as a real fly. The dynamics of such flight are very complicated and have been studied for years by researchers such as Ron Fearing, Wood's former PhD advisor at the University of California, Berkeley. Fearing, who is building his own robotic insects, says that he was very impressed with the fact that Wood's insect can fly: "It is certainly a major breakthrough." But Fearing says that it is the first of many challenges in building a practical fly.

At the moment, Wood's fly is limited by a tether that keeps it moving in a straight, upward direction. The researchers are currently working on a flight controller so that the robot can move in different directions.

The researchers are also working on an onboard power source. (At the moment, the robotic fly is powered externally.) Wood says that a scaled-down lithium-polymer battery would provide less than five minutes of flying time.

Tiny, lightweight sensors need to be integrated as well. Chemical sensors could be used, for example, to detect toxic substances in hazardous areas so that people can go into the area with the appropriate safety gear. Wood and his colleagues will also need to develop software routines for the fly so that it will be able to avoid obstacles.

Still, Wood is proud to have reached a major project milestone: flight. "It's quite a major thing," he says. "A lot of people thought it would never be able to take off."

http://www.technologyreview.com/Infotech/19068/
 
Robot to Carry Out Heart Surgery

Robot to carry out heart surgery



A robotic arm able to carry out an intricate life-saving heart operation is being pioneered by UK surgeons.

The robot is used to guide thin wires through blood vessels in the heart to treat a fast or irregular heartbeat.

Doctors at St Mary's Hospital in London say it will reduce risk for patients and increase the number of procedures they can carry out.

More than 20 patients have been operated on with the robot, which is one of only four in use in the world.

During the procedure, known as catheter ablation, several thin wires and tubes are inserted through a vein in the groin and guided into the heart where they deliver an electric current to specific areas of heart muscle.

The electric current destroys tiny portions of heart tissue which are causing the abnormal heartbeat.

With the Sensei robot, surgeons use a joystick on a computer console to more accurately position and control the wires, which often need to be placed in locations that are difficult to reach.

In the future the system could be automated so the robot guides the wires to a point in the heart selected by the doctor from images on a computer screen.

When done by hand the operation is highly skilled and a shortage of clinicians able to carry out the surgery means only 10% of people with the condition, called atrial fibrillation, are treated this way.

Tony Blair underwent the operation by hand in 2004.

Around 50,000 people develop atrial fibrillation every year. The condition is a major cause of strokes and heart failure, and has been calculated to cost the NHS almost 1% of its entire annual budget.

Numbers are expected to increase even more due to an ageing population, a rising number of people with chronic heart disease and better diagnosis.

St Mary's consultant cardiologist, Dr Wyn Davies, said: "In the UK a shortage of expertise means there are too few centres where highly complex cases can be carried out.

"The robot allows accuracy and control of catheter movement which cannot currently be achieved without a skill level that usually takes considerable time to acquire."

"The attraction is the potential for automation - we can get details about the patient's heart anatomy from CT scans, then on the computer draw where you want the ablation delivered and hit return."

He added that full automation was a few years away but he could envisage a scenario where a skilled operator could oversee multiple operations happening at once.

Trudie Lobban, chief executive of the Arrhythmia Alliance charity, said the operation was highly successful and allowed people to lead normal lives.

"It's like threading cotton through a very fine needle and with this new device it should be much easier and quicker to carry out and to train doctors to do it."

Professor Peter Weissberg, medical director of the British Heart Foundation, said: "Through research we have learned that abnormal heart rhythms, like atrial fibrillation, are caused by a handful of rogue cells.

"Extreme precision is required to track down and deal with these cells without damaging healthy tissue.

"The early promising results suggest that this approach may greatly improve the treatment of some patients with atrial fibrillation."

Story from BBC NEWS:

Published: 2007/07/20 10:16:08 GMT

© BBC MMVII
 
(Good diagrams on page)
http://www.dailymail.co.uk/pages/live/a ... ge_id=1965

RoboSwift: The tiny plane that flies like a swift
By NIALL FIRTH
Last updated at 16:21pm on 18th July 2007

Dutch engineering students have developed a uniquely shaped aeroplane that is inspired by the common swift - one of nature's most efficient flyers.

RoboSwift is a micro airplane fitted with 'shape-shifting' wings, which mean the wing surface area can be adjusted continuously, making the plane more maneuverable and efficient.

Weighing only 80 grams, the aircraft will, once completed, be used to follow groups of swifts to aid in studies of the birds as they fly.

The RoboSwift's wings are flexible and 'morph' - making the aircraft extremely maneuverable, just like its namesake.

It will be able to follow a group of swifts for up to 20 minutes and perform ground surveillance for up to one hour, thanks to the lithium-polymer batteries that power its electric motor, which drives a propeller. The propeller folds back during gliding to minimize air drag.

The swift's inspiration is most evident in the unique "morphing" wings.

Morphing means the wings can be swept back in flight by folding feathers over each other, thus changing the wing shape and reducing the wing surface area. RoboSwift also steers by morphing its wings.

The technique means the micro airplane is highly maneuverable at very high and very low speeds, just like the swift.

The RoboSwift will come equipped with tiny cameras which can be used in surveillance

RoboSwift is steered by asymmetrically morphing the wings. Sweeping one wing back further than the other creates a difference in lift on the wings that is used to roll and turn the micro plane in the air.

The students found that only four feathers, far fewer than the bird uses, were needed to achieve this effect.
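The steering law implied by the description above, roll from differential wing sweep, can be sketched as a simple mixer that turns a roll command into left and right sweep angles. The gains and limits below are invented for illustration; the team's actual control laws are not given in the article.

    # Sketch of steering by asymmetric wing morphing: a roll command becomes a
    # difference in sweep between the two wings. Gains and limits are
    # illustrative assumptions, not RoboSwift's real control parameters.

    BASE_SWEEP_DEG = 15.0     # assumed nominal sweep for cruise
    ROLL_GAIN_DEG = 10.0      # assumed sweep asymmetry per unit roll command
    MAX_SWEEP_DEG = 50.0

    def wing_sweeps(roll_command):
        """roll_command in [-1, 1]; positive rolls right (right wing drops)."""
        delta = ROLL_GAIN_DEG * max(-1.0, min(1.0, roll_command))
        right = min(MAX_SWEEP_DEG, max(0.0, BASE_SWEEP_DEG + delta))
        left = min(MAX_SWEEP_DEG, max(0.0, BASE_SWEEP_DEG - delta))
        return left, right    # the more-swept wing makes less lift, so that side drops

    print(wing_sweeps(0.0))   # (15.0, 15.0): symmetric, wings level
    print(wing_sweeps(0.5))   # (10.0, 20.0): right wing swept back more -> roll right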

The aircraft will be able to go undetected while using its three micro cameras to perform surveillance on vehicles and people on the ground.

The RoboSwift team presented the design at a symposium at the Delft university and will build the high-tech micro airplane over the next few months; it is expected to fly in January 2008.

The student team will build three RoboSwifts to participate in March 2008 in the First American-Asian Micro Air Vehicle competition in India.

Taking inspiration from nature in this way is known as 'biomimetics' - a rapidly expanding area of technology research.
 
Dunno about this. An emotional cat robot might easily achieve sentience and conquer the world. What ye reckon?

Thursday, July 26, 2007
An Emotional Cat Robot
By applying logical rules for emotions, researchers say they can make robots behave more efficiently.
By Duncan Graham-Rowe
Scientists in the Netherlands are endowing a robotic cat with a set of logical rules for emotions. They believe that by introducing emotional variables to the decision-making process, they should be able to create more-natural human and computer interactions.

"We don't really believe that computers can have emotions, but we see that emotions have a certain function in human practical reasoning," says Mehdi Dastani, an artificial-intelligence researcher at Utrecht University, in the Netherlands. By bestowing intelligent agents with similar emotions, researchers hope that robots can then emulate this humanlike reasoning, he says.

The hardware for the robot, called iCAT, was developed by the Dutch research firm Philips and designed to be a generic companion robotic platform. By enabling the robot to form facial expressions using its eyebrows, eyelids, mouth, and head position, the researchers are aiming to let it show if it is confused, for example, when interacting with its human user. The long-term goal is to use Dastani's emotional-logic software to assist in human and robot interaction, but for now, the researchers intend to use the iCAT to display internal emotional states as it makes decisions.

In addition to improving interactions, this emotional logic should also help intelligent agents carrying out noninteractive tasks. For instance, it should help reduce the computational workload during the complex decision-making processes used when carrying out planning tasks.

Developed with John-Jules Meyer and Bas Steunebrink, also at Utrecht, the logical functions consist of a series of rules to define a set of 22 emotions, such as anger, hope, gratification, fear, and joy. But rather than being based on notions of feelings, these are defined in terms of a goal the robot needs to achieve and the plan by which the robot aims to achieve it.

When robots attempt to carry out a task such as navigation, there are usually two approaches they can take: they can calculate a set plan in advance, based on a starting point and the position of the goal, and then execute it, or they can continually replan their route as they go. The first method is fairly primitive and can often result in the familiar scene of a robot bashing itself against an unforeseen obstacle, unable to get around it. The latter approach is more robust, particularly when navigating unpredictable, complex environments. But this method is usually very computationally demanding because it requires the robot to be continually searching for the best route from a vast number of possible paths.

Emotional logic can help get the best of both worlds by requiring the robot to replan its route only when its emotional states dictate. For example, in this sort of navigational task, "hope" would be defined in terms of the system believing (based on sensory data) that by carrying out Plan A to achieve Goal B, Goal B will be achieved. Conversely, "fear" occurs when the system hopes to achieve Goal B by Plan A, but it believes that Goal B won't be achieved after performing Plan A. Using this sort of definition, "fear" can help the robot recognize when it's time to try a new tack. "This changes its beliefs because the rest of the plan will not make its goal reachable," says Dastani.
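Dastani's formalism is expressed in modal logic rather than code, but the control pattern described, keep executing the current plan while there is "hope" and replan only when "fear" fires, can be caricatured in a few lines. The belief check below is a stand-in for whatever sensing the real system uses; this illustrates the pattern, not the Utrecht implementation.

    # Caricature of the "emotional logic" control pattern: keep executing the
    # current plan while there is hope it still reaches the goal; replan only
    # when fear fires (the plan is believed to no longer reach the goal).
    # Illustration only; not the Utrecht group's OCC-based logic.

    def hope(plan, goal, believed_reachable):
        """Hope: we are executing `plan` and believe it will still achieve `goal`."""
        return bool(plan) and believed_reachable(plan, goal)

    def fear(plan, goal, believed_reachable):
        """Fear: we are executing `plan` but believe it will no longer achieve `goal`."""
        return bool(plan) and not believed_reachable(plan, goal)

    def step(plan, goal, believed_reachable, replan):
        if fear(plan, goal, believed_reachable):
            return replan(goal)          # expensive search, only when needed
        return plan                      # keep following the cheap precomputed plan

    # Tiny usage example with stand-in belief and replanning functions:
    believed = lambda plan, goal: "blocked" not in plan
    replanner = lambda goal: ["detour", goal]
    print(step(["ahead", "ahead", "blocked"], "kitchen", believed, replanner))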

In essence, by attributing emotions to an agent's current status, it's possible to monitor the behavior of the system so that decision making or planning is only carried out when absolutely necessary. "It's a heuristic that can help make rational decision-making processes more realistic and much more computable," says Dastani. "The point is that here we continuously monitor whether there is a chance of failure."

Other robots have been designed to mimic human expressions. But Dastani's focus on how emotions might affect decision making sets his work apart from many of the other projects on emotional, or affective, computing, such as MIT's Kismet robot, developed by Cynthia Breazeal. With Kismet, like other affective robots, the focus is on how to get the robot to express emotions and elicit them from people.

Dastani's emotional functions have been derived from a psychological model known as the OCC model, devised in 1988 by a trio of psychologists: Andrew Ortony and Allan Collins, of Northwestern University, and Gerald Clore, of the University of Virginia. "Different psychologists have come up with different sets of emotions," says Dastani. But his group decided to use this particular model because it specified emotions in terms of objects, actions, and events.

Indeed, one of the reasons for creating this model was to encourage such work, says Ortony. "It is very gratifying for us that the people are using the model this way," he says. Most of the time when people talk about emotional or affective computing, it's at the human-interaction level, but there's a lot of work to be done looking at how emotions influence decision making, he says.

"It cuts across a lot of philosophical debates about the nature of human emotion and, indeed, of human thought," says Blay Whitby, a philosopher who specializes in artificial intelligence at the University of Sussex, in the UK. This is not a bad thing, he says, but many philosophers would probably view the notion of emotional logic as an oxymoron, he says.

Having 22 different emotions makes for a very rich model of human emotion, even compared with some psychiatric theories, says Whitby. But it will need to be able to resolve conflicts between different emotional states, and it needs to be practically put to the test, he says. "The devil is in the detail with this sort of work, and they specifically don't consider multiagent interactions."

Dastani says that incorporating multiagent interactions--those involving multiple robots or robots and humans--is on his to-do list. He notes that it's only then that end users are likely to see the benefits of this emotional logic, in the form of more-natural robot interactions or through the responses of intelligent agents in automated call centers. Before that happens, these emotional states are more likely to function behind the scenes in more-mundane activities like navigation and scheduling tasks, Dastani says, but it's still too early to predict when such a system would be commercially available.

http://www.technologyreview.com/Infotech/19102/
 
Robots with a sense of humour

Did you ever suspect that some sitcoms were written by computers? They've now programmed them to understand jokes, so it's only a matter of time.

From New Scientist:

Sharing a joke could help man and robot interact
01 August 2007
NewScientist.com news service
Michael Reilly


A MAN walks into a bar: "Ouch!" You might not find it funny, but at least you got the joke. That's more than can be said for computers, which, despite radical advances in artificial intelligence, remain notably devoid of a funny bone.

Previously AI researchers have tended not to try mimicking humour, largely because the human sense of humour is so subjective and complex, making it difficult to program.

Now Julia Taylor and Lawrence Mazlack of the University of Cincinnati in Ohio have built a computer program or "bot" that is able to get a specific type of joke - one whose crux is a simple pun. They say this budding cyber wit could lend a sense of humour to physical robots acting as human companions or helpers, which will need to be able to spot jokes if they are to be accepted and not just annoy people. The bot is also teasing apart why some people laugh at a joke, such as the one above, when most just groan.

To teach the program to spot jokes, the researchers first gave it a database of words, extracted from a children's dictionary to keep things simple, and then supplied examples of how words can be related to one another in different ways to create different meanings. When presented with a new passage, the program uses that knowledge to work out how those new words relate to each other and what they likely mean. When it finds a word that doesn't seem to fit with its surroundings, it searches a digital pronunciation guide for similar-sounding words. If any of those words fits in better with the rest of the sentence, it flags the passage as a joke. The result is a bot that "gets" jokes that turn on a simple pun.
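The description above amounts to a recognisable pipeline: spot a word that fits its context poorly, look up similar-sounding words, and flag a pun if one of them fits better. Here is a toy version of that pipeline, with tiny hand-made word lists standing in for the children's dictionary and pronunciation guide the researchers actually used.

    # Toy version of the pun-spotting pipeline: find a word that fits its
    # context badly, swap in similar-sounding words, and flag a joke if one of
    # them fits better. The tiny word lists below stand in for the real
    # dictionary and pronunciation resources.

    RELATED = {"doctor": {"patients", "hospital", "sick", "nurse"}}
    SOUNDS_LIKE = {"patience": ["patients"], "patients": ["patience"]}

    def fits(word, context_words):
        """Does `word` belong with anything in its context, per our tiny knowledge base?"""
        related = set()
        for c in context_words:
            related |= RELATED.get(c, set())
        return word in related

    def is_pun(sentence_words):
        for i, w in enumerate(sentence_words):
            context = sentence_words[:i] + sentence_words[i + 1:]
            if not fits(w, context):
                for alt in SOUNDS_LIKE.get(w, []):
                    if fits(alt, context):
                        return True          # a similar-sounding word fits better
        return False

    print(is_pun("the doctor is losing his patience".split()))   # True: pun on "patients"
    print(is_pun("the doctor is seeing his patients".split()))   # False: nothing to resolve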

Taylor presented the bot at the American Association for Artificial Intelligence conference in Vancouver, Canada, last week but stresses that it does still miss some puns. And of course, there are many jokes that aren't based on puns, which the bot doesn't get (see "Robot humour"). Taylor notes that past experiences are often the key to why some people find things hilarious when others don't. "If you've been in a car accident, you probably won't find a joke about a car accident funny," she says. She is now working to personalise the bot's sense of humour by flagging certain links between words as either funny or not, depending on the experiences of people it might converse with.

Meanwhile Rada Mihalcea and colleagues at the University of North Texas in Denton have built a different kind of humour-spotting bot. Instead of working out why a sentence might be funny, it learns the frequencies of words that are found in jokes, and uses that to identify humour. "We got a lot of 'can't', 'don't', 'drunk' and 'poor'," Mihalcea says. "People like laughing about bad things."
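As a rough illustration of that frequency-based idea - and not Mihalcea's actual system - the sketch below counts how often words appear in a tiny joke corpus versus a tiny non-joke corpus, then scores new sentences by the log-ratio of those counts; the corpora, the add-one smoothing and the scoring rule are all invented for the example.

# A rough sketch of a word-frequency humour scorer, for illustration only.
from collections import Counter
import math

JOKES = [
    "i can't believe the drunk man walked into a bar",
    "why don't poor skeletons fight they don't have the guts",
]
NON_JOKES = [
    "the committee approved the annual budget report",
    "the train departs from platform four every morning",
]


def word_counts(sentences):
    counts = Counter()
    for s in sentences:
        counts.update(s.split())
    return counts


JOKE_COUNTS = word_counts(JOKES)
PLAIN_COUNTS = word_counts(NON_JOKES)
JOKE_TOTAL = sum(JOKE_COUNTS.values())
PLAIN_TOTAL = sum(PLAIN_COUNTS.values())


def humour_score(sentence):
    """Sum of log-ratios: positive means the words lean towards the joke corpus."""
    score = 0.0
    for w in sentence.split():
        p_joke = (JOKE_COUNTS[w] + 1) / (JOKE_TOTAL + 1)    # add-one smoothing
        p_plain = (PLAIN_COUNTS[w] + 1) / (PLAIN_TOTAL + 1)
        score += math.log(p_joke / p_plain)
    return score


if __name__ == "__main__":
    print(humour_score("the poor drunk man can't find the bar"))  # expected to lean joke
    print(humour_score("the committee reviewed the budget"))      # expected to lean plain

Words like "can't", "don't", "drunk" and "poor" end up with high joke-side ratios, which is exactly the pattern Mihalcea describes.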

Related Articles
If you're happy, the robot knows it
http://www.newscientisttechnology.com/a ... 325966.500
22 March 2007
Jobs for the bots
http://www.newscientisttechnology.com/a ... 922774.800
10 February 2001
Forum: Laugh? You must be joking - The serious business of research into humour
http://www.newscientisttechnology.com/a ... 316725.800
08 July 1989
Weblinks
Computational humour
http://csdl2.computer.org/persagen/.../ex/2006/02/x2toc.xml&DOI=10.1109/MIS.2006.22
A theory of humour
http://www.tomveatch.com/else/humor/paper/humor.html
Lawrence Mazlack, University of Cincinnati
http://www.ececs.uc.edu/~mazlack/academ ... AILab.html
American Association for Artificial Intelligence
http://www.aaai.org/Conferences/AAAI/aaai07.php
Rada Mihalcea, University of North Texas
http://www.cs.unt.edu/%7Erada/papers.html
From issue 2615 of New Scientist magazine, 01 August 2007, page 26
 
Like something dreamed up by Philip K. Dick. The advanced model will probably eliminate snipers itself (there may be some danger of collateral damage).

Sniper-sniffing robot created

Bombs and snipers in crowded places are spotted by the robot
A flying robot that can identify snipers and bombs in built-up areas has been shortlisted in a national Ministry of Defence (MoD) competition.
Portsmouth University said the "locust" - a multi-function sensor system - was developed with the firm Ant Scientific.

It will be put to the test against 16 other sniper-sniffing robots in the MoD's "grand challenge" in August 2008.

The university said the aim was to stay one step ahead of an enemy who does not play by any rules.

The University of Portsmouth said its specialists in aerodynamic modelling, robotics and wireless communications helped to design it.

We are fighting ideologies espoused by very clever extremists not constrained by a public purse or legal concerns

Charlie Baker-Wyatt, University of Portsmouth

"The challenge was to create devices that could be used in the fight against people who don't fight under established rules," said Charlie Baker-Wyatt, manager of the university's defence and homeland security research section.

"We are fighting ideologies espoused by very clever extremists.

"They are often one step ahead of the game and not constrained by a public purse, health and safety, environmental or legal concerns or even their fellow human beings."

Twenty-three teams entered the competition and 16 were shortlisted. The final will see them compete to find "targets" at Copehill Down, the army's urban warfare training facility in Wiltshire.

The winning team will be given military funding and the R J Mitchell Trophy, named after the "father" of the Spitfire.

The winners will also stand a good chance of seeing their invention put into commercial production, potentially earning the designers enormous sums of money, the university said.

Mark Baker, head of research and knowledge transfer at the University of Portsmouth, said: "This is a good example of the university responding to the defence needs of Government and using our research capabilities in a new way."


http://news.bbc.co.uk/2/hi/uk_news/engl ... 980271.stm
 
A friend just emailed me the link to the video of BIGDOG - have to say it is the most fascinating thing I've seen in ages. I'd love to learn more about it but the Boston Dynamics site has virtually no info. How autonomous is it? Can it get up if it falls over? Interesting to note it is funded by DARPA...
 