Precision of GPS in Cities Improved by 90 Percent
Feb. 12, 2013 — Researchers at Universidad Carlos III de Madrid (UC3M) have developed a new system that improves the accuracy with which GPS determines a vehicle's position by up to 90 percent compared with conventional GPS devices, and that can be installed in any vehicle at very low cost.
The margin of error of a commercial GPS, such as those used in cars, is about 15 meters in an open field, where the receiver has a clear view of the satellites. In an urban setting, however, the estimate of a vehicle's position can be off by more than 50 meters, because the signals bounce off obstacles such as buildings and trees or are blocked in narrow streets. In certain cases, such as tunnels, communication is lost entirely, which prevents GPS from being used in Intelligent Transport Systems, which require a high level of safety. "Future applications that will benefit from the technology that we are currently working on include cooperative driving, automatic maneuvers for the safety of pedestrians, autonomous vehicles and cooperative collision warning systems," the scientists comment.
The greatest problem a commercial GPS faces in an urban setting is the loss of all satellite signals. "This occurs continually, but commercial receivers partially solve the problem by using urban maps to place the vehicle at an approximate point," comments David Martín. "These devices," he continues, "can indicate to the driver approximately where he or she is, but they cannot be used as a source of information in an Intelligent Transport System like those we have cited." The new prototype the team has developed, by contrast, pins down the vehicle's position to within 1 or 2 meters in urban settings.
A combination of sensors
The basic elements that make up this system are a GPS receiver and a low-cost Inertial Measurement Unit (IMU). The latter device integrates three accelerometers and three gyroscopes to measure changes in velocity and the maneuvers performed by the vehicle. Both are connected to a computer running an application that merges the data and corrects errors in the geographic coordinates. Enrique Martí, of UC3M's GIAA, explains: "This software is based on an architecture that uses context information and a powerful algorithm (called an Unscented Kalman Filter) that eliminates the instantaneous deviations caused by the degradation of the signals received by the GPS receiver or the total or partial loss of the satellites."
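The UC3M software itself is not published with this article; purely as a rough illustration of the idea, the sketch below fuses noisy GPS fixes with a simple constant-velocity motion model through an Unscented Kalman Filter, using the open-source filterpy library rather than the researchers' code. The state layout, sampling interval and noise values are assumptions made for the example, not the team's parameters.

```python
# Minimal sketch (not the UC3M system): fusing noisy GPS fixes with a motion model
# using an Unscented Kalman Filter from the open-source filterpy library.
# State: [x, y, vx, vy]; measurement: GPS position [x, y]. All values illustrative.
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

DT = 0.1  # assumed seconds between filter cycles

def fx(state, dt):
    """Constant-velocity motion model; a real system would feed IMU data in here."""
    x, y, vx, vy = state
    return np.array([x + vx * dt, y + vy * dt, vx, vy])

def hx(state):
    """GPS measures position only."""
    return state[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=DT, fx=fx, hx=hx, points=points)
ukf.x = np.zeros(4)                 # initial state
ukf.P = np.eye(4) * 50.0            # large initial uncertainty
ukf.R = np.eye(2) * 15.0**2         # ~15 m GPS noise in the open, per the article
ukf.Q = np.eye(4) * 0.1             # process noise (a tuning parameter)

def step(gps_fix_or_none):
    """One filter cycle; if the GPS signal is lost (e.g. in a tunnel), skip the update."""
    ukf.predict()
    if gps_fix_or_none is not None:
        ukf.update(np.asarray(gps_fix_or_none))
    return ukf.x[:2]                # fused position estimate
```

When fixes drop out in a tunnel or street canyon, the filter keeps predicting from the motion model, which is what lets the fused estimate bridge the gaps; in the real system the IMU's accelerations and turn rates would drive that prediction.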
Currently the researchers have a prototype that can be installed in any type of vehicle. In fact, it is already working on board the IVVI (Intelligent Vehicle based on Visual Information), a real car that has become a platform for research and experimentation by professors and students at the University. The objective of the researchers from UC3M's Intelligent Systems Lab (LSI) who work on this "intelligent car" is to capture and interpret all of the information that is available on the road and that we use when driving. To do this, they use optical cameras, infrared cameras and laser scanners to detect whether the car is drifting across the lane markings or whether there are pedestrians in its path, as well as to adapt its speed to the traffic signals and even to analyze the driver's level of drowsiness in real time.
The next step the researchers intend to take is to explore a system built on the sensors already present in smartphones, which carry more than ten of them: an accelerometer, a gyroscope, a magnetometer, GPS and cameras, in addition to WiFi, Bluetooth and GSM communications. "We are now starting to work on the integration of this data fusion system into a mobile telephone," reveals Enrique Martí, "so that it can integrate all of the measurements that come from its sensors in order to obtain the same result that we have now, but at an even lower cost, since it is something that almost everyone carries around in their pocket."
====================================
Humans and Robots Work Better Together Following Cross-Training; Swapping of Roles Improves Efficiency
Feb. 11, 2013 — Spending a day in someone else's shoes can help us to learn what makes them tick. Now the same approach is being used to develop a better understanding between humans and robots, to enable them to work together as a team.
"People aren't robots, they don't do things the same way every single time," Shah says. "And so there is a mismatch between the way we program robots to perform tasks in exactly the same way each time and what we need them to do if they are going to work in concert with people."
Most existing research into making robots better team players is based on the concept of interactive reward, in which a human trainer gives a positive or negative response each time a robot performs a task.
However, human studies carried out by the military have shown that simply telling people they have done well or badly at a task is a very inefficient method of encouraging them to work well as a team.
So Shah and PhD student Stefanos Nikolaidis began to investigate whether techniques that have been shown to work well in training people could also be applied to mixed teams of humans and robots. One such technique, known as cross-training, sees team members swap roles with each other on given days. "This allows people to form a better idea of how their role affects their partner and how their partner's role affects them," Shah says.
In a paper to be presented at the International Conference on Human-Robot Interaction in Tokyo in March, Shah and Nikolaidis will present the results of experiments they carried out with a mixed group of humans and robots, demonstrating that cross-training is an extremely effective team-building tool.
To allow robots to take part in the cross-training experiments, the pair first had to design a new algorithm to allow the devices to learn from their role-swapping experiences. So they modified existing reinforcement-learning algorithms to allow the robots to take in not only information from positive and negative rewards, but also information gained through demonstration. In this way, by watching their human counterparts switch roles to carry out their work, the robots were able to learn how the humans wanted them to perform the same task.
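The modified algorithm itself appears in the paper rather than in this article; the sketch below is only a hedged illustration of the general idea, combining a standard tabular Q-learning update driven by human reward with an extra update driven by observed demonstrations. The class name, the state/action abstraction and the demo_bonus weighting are invented for the example and are not the authors' method.

```python
# Illustrative sketch (not the MIT algorithm): a tabular learner that updates its
# policy both from human-given rewards and from actions demonstrated during role swaps.
from collections import defaultdict

class CrossTrainingLearner:
    def __init__(self, actions, alpha=0.5, gamma=0.9, demo_bonus=1.0):
        self.Q = defaultdict(float)   # Q[(state, action)] -> estimated value
        self.actions = actions
        self.alpha = alpha            # learning rate for reward feedback
        self.gamma = gamma            # discount factor
        self.demo_bonus = demo_bonus  # extra credit for demonstrated actions (assumed)

    def update_from_reward(self, state, action, reward, next_state):
        """Interactive-reward update: 'good robot' (+1) or 'bad robot' (-1)."""
        best_next = max(self.Q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.Q[(state, action)] += self.alpha * (target - self.Q[(state, action)])

    def update_from_demonstration(self, state, demonstrated_action):
        """Role-swap update: nudge the policy toward the action the human chose."""
        self.Q[(state, demonstrated_action)] += self.alpha * self.demo_bonus

    def act(self, state):
        """Pick the action currently estimated to match the human's preference."""
        return max(self.actions, key=lambda a: self.Q[(state, a)])
```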
Each human-robot team then carried out a simulated task in a virtual environment, with half of the teams using the conventional interactive reward approach, and half using the cross-training technique of switching roles halfway through the session. Once the teams had completed this virtual training session, they were asked to carry out the task in the real world, but this time sticking to their own designated roles.
Shah and Nikolaidis found that the period in which human and robot were working at the same time -- known as concurrent motion -- increased by 71 percent in teams that had taken part in cross-training, compared to the interactive reward teams. They also found that the amount of time the humans spent doing nothing -- while waiting for the robot to complete a stage of the task, for example -- decreased by 41 percent.
What's more, when the pair studied the robots themselves, they found that the learning algorithms recorded a much lower level of uncertainty about what their human teammate was likely to do next -- a measure known as the entropy level -- if they had been through cross-training.
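As a purely illustrative example of that entropy measure (not the authors' computation), the Shannon entropy of the robot's predicted distribution over its teammate's next action drops as the prediction becomes more confident; the probabilities below are made up.

```python
# Illustrative only: entropy of the robot's prediction of the human's next action.
import math

def entropy(action_probs):
    """Shannon entropy in bits; lower values mean a more confident prediction."""
    return -sum(p * math.log2(p) for p in action_probs if p > 0)

# Before cross-training the prediction might be nearly uniform over four actions...
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits
# ...while after cross-training it concentrates on the human's preferred action.
print(entropy([0.85, 0.05, 0.05, 0.05]))  # ~0.85 bits
```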
Finally, when responding to a questionnaire after the experiment, human participants in cross-training were far more likely to say the robot had carried out the task according to their preferences than those in the reward-only group, and reported greater levels of trust in their robotic teammate. "This is the first evidence that human-robot teamwork is improved when a human and robot train together by switching roles, in a manner similar to effective human team training practices," Nikolaidis says.
Shah believes this improvement in team performance could be due to the greater involvement of both parties in the cross-training process. "When the person trains the robot through reward it is one-way: The person says 'good robot' or the person says 'bad robot,' and it's a very one-way passage of information," Shah says. "But when you switch roles the person is better able to adapt to the robot's capabilities and learn what it is likely to do, and so we think that it is adaptation on the person's side that results in a better team performance."
The work shows that strategies that are successful in improving interaction among humans can often do the same for humans and robots, says Kerstin Dautenhahn, a professor of artificial intelligence at the University of Hertfordshire in the U.K. "People easily attribute human characteristics to a robot and treat it socially, so it is not entirely surprising that this transfer from the human-human domain to the human-robot domain not only made the teamwork more efficient, but also enhanced the experience for the participants, in terms of trusting the robot," Dautenhahn says.
====================================
Control a Virtual Spacecraft by Thought Alone
Feb. 5, 2013 — Scientists at the University of Essex have been working with NASA on a project where they controlled a virtual spacecraft by thought alone.
Researchers at Essex have already undertaken extensive projects on using brain-computer interfaces (BCIs) to help people with disabilities, enabling spelling, mouse control or wheelchair control. That research involves the user carrying out certain mental tasks, which the computer then translates into commands to move the wheelchair in different directions.
The University has built up an international reputation for its BCI research and is expanding its work into the new area of collaborative BCI, where tasks are performed by combining the signals of multiple BCI users.
The £500,000 project with NASA's Jet Propulsion Lab in Pasadena, California, involved two people together steering a virtual spacecraft to a planet using a unique BCI mouse, developed by scientists at Essex.
Using electroencephalography (EEG), the two users wore caps with electrodes that picked up different patterns in their brainwaves depending on where on a screen they were focusing their attention: in this case, one of eight directional dots around the cursor. The brain signals representing each user's chosen direction, as interpreted by the computer, were then merged in real time to produce control commands for steering the spacecraft.
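The release does not spell out how the merging is done; as a hedged sketch, assuming each user's decoded EEG yields a confidence score for each of the eight directions, the two score vectors can simply be averaged before a command is chosen. The direction labels and the numbers in the example are illustrative.

```python
# Minimal sketch (not the Essex system): merging two users' decoded direction
# estimates into one control command by averaging their per-direction confidences.
import numpy as np

DIRECTIONS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def merge_commands(user_a_scores, user_b_scores):
    """Average each user's confidence scores and pick the strongest direction.

    Averaging tends to dilute uncorrelated noise (muscle activity, heartbeat,
    a momentary lapse in concentration) that affects only one of the two users.
    """
    combined = (np.asarray(user_a_scores) + np.asarray(user_b_scores)) / 2.0
    return DIRECTIONS[int(np.argmax(combined))]

# Example: user A briefly loses focus (flat scores), but user B's signal carries the decision.
a = [0.13, 0.12, 0.13, 0.12, 0.13, 0.12, 0.13, 0.12]
b = [0.05, 0.05, 0.60, 0.05, 0.05, 0.05, 0.10, 0.05]
print(merge_commands(a, b))  # -> "E"
```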
As Professor Riccardo Poli, of the University's School of Computer Science and Electronic Engineering, explained, the experiment was very intense and involved a lot of concentration. With two people taking part in the test, the results were more accurate because the system could cope if one of the users had a brief lapse in concentration.
Analysis of this collaborative approach showed that two minds could be better than one at producing accurate trajectories. Combining signals also helped reduce the random "noise" that hinders EEG signals, such as heartbeat, breathing, swallowing and muscle activity. "When you average signals from two people's brains, the noise cancels out a bit," added Professor Poli.
Professor Poli said an exciting development for future BCI research relates to joint decision making, where a physical response, like pressing a button, and brain activity can be combined to give a superior result. "It is like measuring someone's gut feeling," added Professor Poli.
More information about the project can be found on its web site -- RoBoSAS: Robotics, BCI and Secure Adaptive Systems at Essex University and NASA JPL: http://www.robosas.org.uk