
A Decade of Transformation in Robotics


By personalizing and democratizing the use of machines, robotics is coming to the fore. The comprehensive integration of robots into everyday life may mean that everyone can depend on them for support with physical tasks, just as we now depend on applications for computing tasks. As robots move from our imaginations into our homes, offices, and factories, they will become partners that help us do much more than we could alone. Robots will add endless possibilities to how we move, what we build and where, and even the materials we use to create things.

Imagine a future in which robots are so integrated into human life that they are as common as smartphones are today. The field of robotics could greatly improve our quality of life at work, at home, and at play, supporting us in both cognitive and physical tasks. Robots have been helping humans perform dangerous, unpleasant, or tedious tasks for years, and have made it possible to explore hard-to-reach environments, from the deep sea to outer space. More and more robots will be able to adapt, learn, and interact cognitively with humans and with other machines.

The rapid technological advances of the past decade have made computing essential, transforming the way we work, live, and play. The digitization of almost everything, coupled with advances in robotics, promises a future in which access to very complex machines is democratized and personalized on a large scale. The capabilities of robots are growing as they perform more complex computations and interact with the world through increasingly precise sensors and higher-quality actuators.

Our connected world, with many custom robots working alongside people, is already creating new jobs, improving the quality of existing ones, and freeing people's time for what they find interesting, important, and challenging. Robots are already our partners in industrial and domestic environments. They cooperate with humans in factories and operating rooms. They mow our lawns, vacuum our floors, and even milk our cows. In a few years, they will be present in even more aspects of our lives.

Commuting to work in driverless cars, we will be able to read, return calls, catch up on our favorite podcasts, and even take a nap. The robot car will also serve as an assistant: it will remind us what we need to do, plan routes so that we can complete all our errands, and use up-to-date traffic information to avoid the most congested roads. Driverless cars will help reduce road-accident casualties, while autonomous forklifts can help eliminate back injuries from heavy lifting. Robots may change some current jobs, but on balance their contributions to society can be very positive. Robots that mow lawns or clean swimming pools have already changed the way those tasks are done. Robots can help humanity solve problems of all kinds.

Robotics does not aspire to replace human beings through the mechanization and automation of tasks, but to find more effective ways for robots and people to collaborate. Robots are better at tasks like crunching numbers and moving with precision, and they can lift much heavier objects. We humans are better than robots at reasoning, defining abstract concepts, making generalizations, and specializing, thanks to our ability to draw on past experience. By collaborating, robots and humans can amplify and complement each other's capabilities.

A DECADE OF PROGRESS TOWARDS AUTONOMY

Advances in robotics over the last decade have produced robotic devices that can move, manipulate objects, and interact with people and their environment in unique ways. Robots' locomotion capabilities rest on the broad availability of precise sensors (for example, laser scanners) and high-performance motors, and on the development of sophisticated algorithms for mapping, localization, motion planning, and coordinate-based navigation. Advances in the development of robotic bodies (hardware) and robotic brains (software) enable a multitude of new applications.

The digitization of almost everything, together with advances in robotics, promises us a future in which access to very complex machines is democratized and customized on a large scale

The capabilities of robots are defined by the tight link between their physical structure and the computer program that hosts their brain. For example, a flying robot must be equipped with a body capable of flight and with algorithms that control flight. Today's robots can carry out simple movement on land, in the air, and in water. They recognize objects, map new environments, perform pick-and-place operations, learn to improve control, mimic simple human movements, acquire new knowledge, and can even coordinate with each other. For example, the annual RoboCup robot soccer championship shows how the latest robots and algorithms designed for that sport perform.


The latest advances in storage, in the scale and performance of the internet, in wireless communication, in design and manufacturing tools, and in the power and efficiency of electronics, all coupled with the worldwide growth of stored data, have influenced the development of robotics in multiple ways. Hardware costs are falling, electromechanical parts are more reliable, robot-building tools are more versatile, computing environments are more accessible, and robots can reach global knowledge stored in the cloud. We can begin to imagine the leap from the personal computer to the personal robot, generating a multitude of situations in which omnipresent robots collaborate closely with humans.

Robotics does not aspire to replace human beings through mechanization and automation of tasks, but to find more effective ways of collaboration between robots and people

Transportation is a great example. It is much easier to move a robot around the world than to build a robot that interacts with it. In the last decade, considerable advances in algorithms and hardware have allowed us to imagine a world in which the movement of people and goods is carried out far more safely and practically by an optimized fleet of driverless vehicles.

In a single year, Americans spend almost 50 billion hours behind the wheel; at an average of 96 km/h, that amounts to nearly 5 trillion kilometers driven. The figure grows dramatically once the rest of the planet is taken into account. But the time we spend behind the wheel is not without risk. In the United States, a car accident occurs every five seconds. Worldwide, road-traffic injuries are the eighth leading cause of death, claiming 1.24 million lives each year. Beyond this terrible human cost, these accidents impose an enormous financial burden: according to the National Highway Traffic Safety Administration (NHTSA), in the United States alone it amounts to about 277 billion dollars a year. Making a dent in these numbers is a huge challenge, and one we cannot avoid. Driverless vehicles could drastically reduce traffic accidents.
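
As a quick sanity check, the time and distance figures above are consistent with simple arithmetic (using the average speed stated in the text):

```python
# Back-of-envelope check of the driving figures cited above (US, per year).
hours_behind_wheel = 50e9     # ~50 billion hours behind the wheel
avg_speed_kmh = 96            # average speed assumed in the text

total_km = hours_behind_wheel * avg_speed_kmh
print(f"{total_km:.1e} km driven per year")   # 4.8e+12, i.e. nearly 5 trillion km
```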

Imagine that cars could learn: to drive like us, to never be responsible for a collision, to know what we need at the wheel. What if they could become reliable partners, capable of helping us navigate difficult roads, taking over when we are tired, and even turning time spent in the car into something fun? What if our car could tell we are having a bad day, put on our favorite music, and help us relax while keeping a close eye on our driving? What if it also knew that we forgot to call our parents yesterday, and politely reminded us on the way home? And imagine that making the call was easy, because on a boring stretch of road we could hand the wheel to the vehicle itself.

In the past two years, recognizing this extraordinary potential, most automakers have announced driverless-vehicle development projects. Elon Musk famously predicted that within five years we could fall asleep at the wheel; the Google/Waymo car has made headlines for traveling several million kilometers without incident; Nissan has promised driverless vehicles by 2020; in 2014 Mercedes presented an autonomous S-Class prototype; and Toyota announced (in September 2015) an ambitious program to develop a vehicle that is never responsible for a collision, investing a billion dollars in an artificial intelligence project.

A great deal of activity is under way in this field, touching a wide range of aspects of the automotive industry. To understand where the various advances are focused, it helps to look at the five levels of autonomy established by NHTSA: Level 0 includes no automation; Level 1 includes tools that offer additional information to the human driver, for example through rear cameras; Level 2 includes certain active controls, such as anti-lock brakes; Level 3 includes mechanisms that provide some autonomy but require the human to be ready to take over driving (as in Tesla's Autopilot); Level 4 allows autonomy in some places and at some times; and Level 5 offers autonomy in any environment at any time.
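
The level taxonomy described above can be sketched as a simple lookup, with one helper capturing the key practical distinction, whether a human must remain ready to drive (names and wording here are an illustrative paraphrase of the text, not an official encoding of the standard):

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Driving-automation levels as paraphrased from the text (0-5)."""
    NO_AUTOMATION = 0    # human does everything
    DRIVER_INFO = 1      # extra information only, e.g. rear cameras
    ACTIVE_CONTROLS = 2  # some active controls, e.g. anti-lock brakes
    CONDITIONAL = 3      # partial autonomy; human must be ready to take over
    LIMITED_FULL = 4     # full autonomy in some places and at some times
    FULL = 5             # full autonomy anywhere, anytime

def requires_human_fallback(level: AutonomyLevel) -> bool:
    # Below Level 4, a human must be prepared to assume driving.
    return level < AutonomyLevel.LIMITED_FULL

print(requires_human_fallback(AutonomyLevel.CONDITIONAL))   # True
print(requires_human_fallback(AutonomyLevel.LIMITED_FULL))  # False
```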

There is an alternative way to characterize the autonomy of a driverless vehicle, based on three axes: (1) the speed of the vehicle, (2) the complexity of the environment in which it moves, and (3) the complexity of its interactions with other mobile agents (cars, pedestrians, cyclists, etc.) in that environment. Researchers are pushing the limits of each of these axes with the aim of approaching Level 5 autonomy.
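
The three-axis characterization can be pictured as an operating envelope: a vehicle is autonomous only for scenarios inside its limits on all three axes. The class and the numeric scales below are hypothetical illustrations, not a published metric:

```python
from dataclasses import dataclass

@dataclass
class AutonomyEnvelope:
    """Three-axis characterization of driverless operation, per the text."""
    max_speed_kmh: float           # axis 1: vehicle speed
    environment_complexity: float  # axis 2: 0 (empty lot) .. 1 (dense city), toy scale
    interaction_complexity: float  # axis 3: 0 (no agents) .. 1 (crowds), toy scale

    def within(self, speed: float, env: float, interact: float) -> bool:
        """Is a given scenario inside this vehicle's operating envelope?"""
        return (speed <= self.max_speed_kmh
                and env <= self.environment_complexity
                and interact <= self.interaction_complexity)

# A "Level 4 at low speed, in easy environments" vehicle, as described below:
campus_shuttle = AutonomyEnvelope(max_speed_kmh=25,
                                  environment_complexity=0.3,
                                  interaction_complexity=0.3)
print(campus_shuttle.within(20, 0.2, 0.1))   # True: slow campus loop
print(campus_shuttle.within(90, 0.8, 0.9))   # False: high-speed dense traffic
```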

In the last decade, advances in algorithms and hardware have allowed us to imagine a world in which the movement of people and goods is carried out in a safer and more practical way through an optimized fleet of driverless vehicles

Advances in algorithms and hardware over the past decade mean that today's technology is ready for Level 4 operation at low speeds, in easy environments, and with low levels of interaction with surrounding pedestrians and vehicles. This would include autonomy on private roads, such as those in retirement communities and college campuses, or on public roads that are not heavily congested.

Level 4 autonomy has been enabled by a decade of advances in the hardware and algorithms available to robots. Chief among them is the convergence of several important algorithmic capabilities: mapping, by which a vehicle can use its sensors to build a map; localization, by which the vehicle can use its sensors to know where it is on that map; perception, which allows it to detect moving objects on the road; and planning and decision-making, by which the vehicle decides what to do next, given what it sees at any moment. Add to these reliable hardware and driving data sets that let vehicles learn to drive like human beings. Today we can run many computations simultaneously, process far more data, and apply algorithms in real time. These technologies have brought us to the point where autonomous vehicles on the road are plausible.
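
The convergence described above, mapping, localization, perception, and planning, can be sketched as a single sense-localize-perceive-plan loop. Everything below is a toy stand-in with invented interfaces and data, not a real driving stack:

```python
# Minimal sketch of the localize -> perceive -> plan loop described above.

def localize(scan, world_map):
    """Pick the map cell whose stored signature best matches the current scan."""
    return min(world_map, key=lambda cell: abs(world_map[cell] - scan["signature"]))

def detect_moving_objects(scan):
    """Perception stub: any range reading closer than 5 m counts as an obstacle."""
    return [r for r in scan["ranges"] if r < 5.0]

def plan(pose, obstacles):
    """Decision stub: brake if anything is close, otherwise keep cruising."""
    return "brake" if obstacles else "cruise"

# One iteration of the loop on toy data:
world_map = {(0, 0): 1.0, (1, 0): 2.0, (2, 0): 3.0}    # cell -> scan signature
scan = {"signature": 2.1, "ranges": [12.0, 8.5, 3.2]}  # 3.2 m: something is close
pose = localize(scan, world_map)
action = plan(pose, detect_moving_objects(scan))
print(pose, action)   # (1, 0) brake
```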

However, we have not yet reached Level 5 autonomy. The technological obstacles on the way there include driving in dense traffic, at high speed, in bad weather (rain, snow), among vehicles with human drivers, and in areas lacking detailed maps, as well as responding to extreme situations. A vehicle's perception system has neither the quality nor the efficiency of the human eye. Let's be clear: there are things machines can do better than people, such as accurately calculating how fast another vehicle is moving. But robots do not have our cognitive abilities. How will they acquire them? We spend our lives learning to observe the world and explain it. Machines, for their part, need algorithms and data, a great deal of data, annotated to tell them what it means. To make autonomous vehicles possible, we must develop new algorithms that help them learn from far fewer examples and without supervision, without constant human intervention.

Two philosophies are driving autonomous-driving research and development today: series autonomy and parallel autonomy. The latter aims to develop assisted-driving technologies that keep the driver behind the wheel, with a system that monitors what the driver does and intervenes when necessary, in a non-harmful way, for example to prevent a collision or to correct the steering and keep the vehicle on the road. The elements that define the vehicle's autonomy increase gradually, but they work in parallel with the human, and the aim is for this assistance to operate at any time and place. Series autonomy is based on the idea that responsibility lies with either the human or the vehicle, never both. When the vehicle is in autonomous mode, the human does not participate in driving at all. The autonomous elements of the vehicle also increase gradually, but the vehicle can only operate within the capabilities allowed by its autonomy program, handling increasingly complex environments over time.

Two philosophies are driving autonomous driving research and development today: series autonomy and parallel autonomy. The latter aims to develop assisted driving technologies that keep the driver behind the wheel, but with a system that monitors what the driver is doing and intervenes if necessary.

Today, standard autonomous solutions work in closed environments (which define the roads the vehicle can travel). The recipe for autonomy begins with equipping vehicles with electronic throttle control and sensors such as cameras and laser scanners. The sensors are used to create maps, detect moving obstacles such as pedestrians and other vehicles, and localize the vehicle. Autonomous driving solutions are map-based and benefit from a decade of advances in Simultaneous Localization and Mapping (SLAM). The maps are made by driving the autonomous vehicle over all possible road sections and collecting features with its sensors.
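
The localization side of SLAM can be illustrated with the classic one-dimensional Bayes filter from robotics textbooks: the vehicle maintains a belief over map cells and sharpens it with each sensor reading and motion. This is a teaching toy, not any production SLAM system:

```python
# Toy 1-D Bayes-filter localization against a pre-built map of "door"/"wall" cells.
world = ["door", "wall", "door", "wall", "wall"]   # the map, built beforehand
belief = [1.0 / len(world)] * len(world)           # start completely uncertain

def sense(belief, measurement, p_hit=0.8, p_miss=0.2):
    """Weight each cell by how well the measurement matches the map there; normalize."""
    weighted = [b * (p_hit if world[i] == measurement else p_miss)
                for i, b in enumerate(belief)]
    total = sum(weighted)
    return [w / total for w in weighted]

def move(belief, steps):
    """Shift the belief to reflect a (perfectly executed) motion of `steps` cells."""
    n = len(belief)
    return [belief[(i - steps) % n] for i in range(n)]

# Seeing door, moving, wall, moving, wall is only consistent with starting at cell 2:
belief = sense(belief, "door")
belief = move(belief, 1)
belief = sense(belief, "wall")
belief = move(belief, 1)
belief = sense(belief, "wall")
best = max(range(len(belief)), key=lambda i: belief[i])
print(best)   # 4 (started at cell 2, moved twice)
```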

Most manufacturers of driverless vehicles test their fleets only in large cities for which detailed three-dimensional maps exist, meticulously annotated with the exact position of things like roads, sidewalks, and stop signs. These maps capture features of the environment as detected by the vehicle's sensors. They are created with three-dimensional lidar systems that scan the environment with light, gathering millions of reference points and determining which features define each space.

If we want driverless cars to be a viable technology around the world, it is problematic that they depend on the prior existence of detailed maps. Today's autonomous vehicles cannot navigate rural environments for which we have no maps: the millions of kilometers of roads that are unpaved, unlit, or unreliably signposted. At CSAIL we began developing MapLite as a first step toward autonomous vehicles that, using only GPS and sensors, can orient themselves on roads they have never traveled before. Our system combines GPS data, like that found in Google Maps, with data from lidar sensors. Together, these two elements allow an autonomous vehicle to travel multiple unpaved rural roads and to reliably assess the road surface more than thirty meters ahead. With varying degrees of success, other researchers have been working on mapless driving systems. Methods that use perceptual sensors such as lidar tend to rely primarily on road markings or make broad generalizations about the geometry of ditches, while vision-based approaches may work well under ideal conditions but struggle in bad weather or poor lighting. To reach "autonomy level 5", that is, anytime, anywhere autonomy, we still have a few years to go, for both technical and regulatory reasons.
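
A map-light navigator of the kind described can be caricatured in a few lines: steer toward the next coarse GPS waypoint, but only along headings the lidar reports as drivable. This is a hypothetical sketch, not the actual MapLite algorithm; the 30 m threshold merely echoes the look-ahead distance mentioned above:

```python
import math

def pick_heading(vehicle_xy, waypoint_xy, lidar):
    """Choose the drivable heading closest to the coarse GPS goal direction.

    lidar: toy representation, a dict of heading (degrees) -> free range (metres).
    """
    goal = math.degrees(math.atan2(waypoint_xy[1] - vehicle_xy[1],
                                   waypoint_xy[0] - vehicle_xy[0]))
    drivable = [h for h, rng in lidar.items() if rng > 30.0]  # clear 30 m ahead
    return min(drivable, key=lambda h: abs(h - goal))

# Straight ahead (0 deg) and right (30 deg) are clear; left (-30 deg) is blocked.
lidar = {-30: 12.0, 0: 45.0, 30: 50.0}
heading = pick_heading((0, 0), (100, 40), lidar)   # goal direction ~21.8 deg
print(heading)   # 30
```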

If we want driverless cars to be a viable technology around the world, it is problematic that they depend on the prior existence of detailed maps. Current autonomous vehicles cannot travel through rural environments for which we do not have maps

Although technical advances have been considerable, it is understandable that getting policy to the same level is a difficult and gradual process. Politicians continue to debate how autonomous vehicles should be regulated. What kinds of vehicles should be allowed on the roads, and who should be allowed to drive them? What safety tests should they undergo, and who should carry them out? How might different allocations of liability determine the timely and safe adoption of autonomous vehicles, and what trade-offs will be required? What are the consequences of a mosaic of laws and regulations that differ from state to state in the United States, and what must be given up to harmonize them? To what extent should policymakers encourage the use of such vehicles, for example through smart road infrastructure, special motorway lanes, or incentives for manufacturers or drivers? These are all complex problems related to the use of autonomous vehicles on public roads. At the same time, there is already a viable form of autonomy, "autonomy level 4", which allows driverless operation in certain environments and at certain times. We already have the technology to operate autonomous vehicles in good weather, on private roads, and at low speeds.

Environments such as retirement communities, college campuses, hotel complexes, and amusement parks can benefit from technologies that enable Level 4 autonomy. There are different types of autonomous vehicles, including golf carts, wheelchairs, scooters, rolling suitcases, grocery carts, trash cans, and even boats. These technologies open the door to a wide range of new products and applications, ranging from on-demand mobility to autonomous shopping and freight transport to more efficient mobility in hospitals. Everyone would benefit if transport became an easily accessible service, but the greatest benefits would go to those who currently cannot drive themselves.

The technology that enables driverless vehicles can have huge social repercussions. Imagine residents of a retirement community being transported safely by automated golf carts. In the future, we will be able to automate anything with wheels: not just today's vacuum cleaners, but also lawnmowers and even garbage cans.

The same technology that enables this kind of automation could even help people with disabilities, for example the blind, live in ways that were previously impossible. Around the world there are some 285 million visually impaired people who could benefit greatly from increased mobility and robotic assistance. It is a demographic that technology has often neglected or pretended does not exist, but here technology could make a huge difference. Portable devices, including the sensors and autonomous-driving software used in driverless vehicles, could allow visually impaired people to move safely and with greater mobility than a cane allows.

In the immediate future, robotics will change the way people and things are transported, but soon after, it will not only contribute to the punctuality of product deliveries, but it will also allow us to manufacture them quickly and close to home.

CHALLENGES FOR ROBOTICS

Despite the great strides this field has made recently and its future prospects, today's robots still have a fairly limited ability to solve problems, their communication skills are often poor, and new robots take too long to build. For their use to become widespread, robots will need to be integrated naturally into the human world, rather than people being integrated into the world of machines.

Reasoning

Robots can only perform limited reasoning, as they are governed by fully specified computer calculations. To today's robots, everything must be explained in simple instructions, and their range of possibilities is limited to their programming. Tasks that humans do not stop to think about, like wondering whether you have been to a place before, are very difficult for robots. Robots record the characteristics of the places they have visited, capturing them with sensors such as cameras or laser scanners. But it is difficult for a machine to distinguish between a scene it has already seen and a new one that merely contains some of the same objects. In general, sensors and actuators collect too much data, and data at a much lower level of abstraction: for robots to make good use of it, it must be related to coherent abstractions. Currently, research in machine learning from big data focuses on how to compress large data sets into fewer, semantically coherent reference points. Robots can also use summaries: for example, they could summarize their visual history, significantly reducing the number of images they need to determine whether they have been to a place before.
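
The idea of summarizing a visual history can be sketched as a keyframe filter: keep an image only when its feature signature differs enough from the last image kept, then answer "have I been here before?" against that much smaller summary. The scalar "signatures" below are a toy stand-in for real image features:

```python
# Toy keyframe summarization of a robot's visual history.

def summarize(signatures, threshold=0.5):
    """Keep a frame only if it differs enough from the last kept frame."""
    keyframes = [signatures[0]]
    for s in signatures[1:]:
        if abs(s - keyframes[-1]) >= threshold:
            keyframes.append(s)
    return keyframes

def seen_before(signature, summary, tol=0.3):
    """Place recognition against the compressed summary, not the full history."""
    return any(abs(signature - k) <= tol for k in summary)

history = [0.0, 0.1, 0.15, 0.9, 0.95, 2.0, 2.05]   # 7 frames observed
summary = summarize(history)                        # only 3 keyframes kept
print(summary)                       # [0.0, 0.9, 2.0]
print(seen_before(0.85, summary))    # True: close to a known place
print(seen_before(1.5, summary))     # False: looks new
```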

On the other hand, robots cannot solve unexpected situations. If a robot encounters a situation for which it was not programmed or that is beyond its range of capabilities, its mechanism will fail and stop. As a general rule, the robot cannot report the cause of the error. For example, robot vacuum cleaners are designed and programmed to move on the floor, but not to climb stairs.

Robots have to learn to adjust their programs, adapting to their environments and to their interactions with people and with other machines. Today, anyone with access to the internet, including machines, can reach the world's information at the touch of a keyboard. Robots could take advantage of this to make better decisions. They could also record and use their entire history (such as the output of their sensors and actuators) and the experiences of other machines. For example, a dog-walking robot could check the weather online and then, based on previous walks, decide on the best route: perhaps a short walk if it is hot or raining, or a long walk to the nearby park where other dog-walking robots are at that moment.

Communication

To reach a world in which many robots can collaborate with each other, reliable communication is needed to facilitate their coordination. Despite advances in wireless communication, there are still impediments to communication between robots. The problem is that modeling and forecasting communication is tremendously difficult, and robot-control methods based on current communication models are plagued by interference. Robots need more reliable forms of communication that guarantee the bandwidth they need, when they need it. Flexible communication between robots would require a new paradigm, based on locally measuring the quality of communication rather than predicting it with models. Starting from measured communication quality, we can begin to imagine flying robots that function as mobile base stations, coordinating with each other to offer communications on a planetary scale. Swarms of flying robots could provide internet access anywhere in the world.
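
The measurement-based paradigm argued for above can be sketched as a link-quality estimator that trusts observed packet deliveries instead of a propagation model. The exponentially weighted average below is a common illustrative choice, not a specific robot protocol:

```python
# Measurement-based link quality: estimate from observed deliveries, not a model.

class LinkQuality:
    """Exponentially weighted packet-delivery ratio from observed receptions."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # weight given to the newest observation
        self.estimate = 1.0     # optimistic prior: assume a good link

    def observe(self, delivered: bool):
        sample = 1.0 if delivered else 0.0
        self.estimate = (1 - self.alpha) * self.estimate + self.alpha * sample

    def usable(self, floor=0.5) -> bool:
        """A robot would only route traffic over links measured above the floor."""
        return self.estimate >= floor

link = LinkQuality()
for ok in [True, True, False, False, False, False]:   # the link degrades
    link.observe(ok)
print(round(link.estimate, 4), link.usable())   # 0.2401 False
```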

At present, communication between robots and people is also limited. Although speech technologies have been used to command robots with human language (for example, “Come to the door”), these interactions are superficial in scope and vocabulary. Robots could use the help of humans when they get stuck. It turns out that even a minimal human intervention in the work of a robot completely changes the problem and allows the machine to advance.

Today, when robots run into something unexpected (a situation they are not programmed for), they get stuck. Suppose that, instead of just getting stuck, the robot could think about why it got stuck like this and get human help. For example, recent work on using robots to assemble furniture from IKEA shows that they can tell when a table leg is out of reach and ask humans to bring it closer. Upon receipt of that item, the robot resumes the assembly task. These are some of the first steps that are being taken to create symbiotic teams of humans and robots, in which one and the other can ask each other for help.
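
The ask-for-help pattern in the furniture example can be sketched as a loop that detects a missing precondition and requests human intervention instead of simply failing. All interfaces here are hypothetical, inspired by (not taken from) the IKEA-assembly work:

```python
# Toy sketch of "ask a human when stuck" during an assembly task.

def assemble(parts_in_reach, needed, ask_human):
    """Attach parts in order; if one is out of reach, ask for help, then resume."""
    attached = []
    for part in needed:
        if part not in parts_in_reach:
            parts_in_reach = ask_human(part, parts_in_reach)  # human brings it closer
        attached.append(part)                                 # robot resumes the task
    return attached

def human_helper(part, in_reach):
    print(f"Robot: please bring the {part} closer")
    return in_reach | {part}

result = assemble({"tabletop"}, ["tabletop", "leg"], human_helper)
print(result)   # ['tabletop', 'leg']
```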

Design and manufacturing

Another great challenge for today's robots is the amount of time it takes to design and manufacture them. Their creation process must be accelerated. We currently have many kinds of robots, but each has taken many years to produce. Their abilities to compute, move, and manipulate objects are inextricably linked to their bodies: they are machines. Because the bodies of today's robots are rigid and difficult to extend, their capabilities are limited to what their bodies allow. It is currently not feasible to manufacture new robots, supplemental robotic modules, accessories, or specialized tools to extend their capabilities, because the design, manufacturing, assembly, and programming process is long and cumbersome. We need tools that accelerate the design and manufacture of robots. Imagine a robot compiler that could take a functional specification such as "I want a robot to play chess with me" and, thanks to its computing power, produce a design that meets the specification, a manufacturing plan, and a programming environment adapted to using that machine. With such a robot compiler, a multitude of large and small tasks could be automated, if many types of robots could be designed and manufactured quickly.

Towards the omnipresence of robotics 

Several major obstacles separate robots' current state from their comprehensive integration into everyday life. Some have to do with the creation of the machines themselves: how can new robots be designed and manufactured quickly and efficiently? Others are computational in nature and concern robots' ability to reason, change, and adapt to increasingly complex tasks in increasingly difficult environments. Still others concern interactions between robots, and between robots and people. Current robotics research is pushing the limits in all these directions: seeking better ways to build robots, to control their movement and their manipulation of objects, to increase their capacity to reason, to endow them with semantic perception through vision, and to let them coordinate and cooperate more flexibly with each other and with human beings. If we meet these challenges, robots will come closer to the vision of ubiquitous robotics: a connected world in which many humans and robots perform a multitude of different tasks.

The omnipresence of custom robots is a great challenge, but its scope is no different from that of ubiquitous computing, formulated some 25 years ago. Today we can say that computing is omnipresent: a service available at any time and place. So what would it take for robots to become ubiquitous in everyday life? Mark Weiser, former chief scientist of Xerox PARC and considered by many the father of ubiquitous computing, said: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it."

For example, electricity was once a novel technology that is now part of life. Like the personal computer and electricity, robotics technologies could become ubiquitous aspects of daily life. In the near future, robotics will change our way of thinking about many everyday aspects.

Driverless fleets can turn transportation into a public service, offering customized rides anytime, anywhere. Public transport could have two tiers: a network of large vehicles (such as trains or buses) forming the backbone of long-distance collective transport, and fleets of small modules serving individual, personalized transport needs over short distances at fine granularity. This transport network would be connected both to information infrastructure and to people, providing mobility on demand. The trunk structure could incorporate routes that change dynamically to adapt to users' needs. On-demand mobility can be enabled by cutting-edge autonomous-vehicle technologies. Hailing a self-driving car could be as easy as using a smartphone. The robot modules would know when people arrive at a station, where those needing a ride are, and where the other modules are. After taking someone to their destination, a module would head to the next customer, guided by algorithms that respond to existing demand and coordinate the fleet to optimize operations and reduce customer waiting time. Public transport would be practical and personalized.
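
The dispatch behavior sketched in this paragraph, idle modules heading to the nearest waiting customer, can be illustrated with a greedy nearest-vehicle assignment. Real fleet optimizers solve a global problem; this is a minimal heuristic with made-up coordinates:

```python
# Greedy on-demand dispatch: each request gets the nearest still-idle module.

def dispatch(requests, vehicles):
    """requests, vehicles: dicts of id -> (x, y). Returns request_id -> vehicle_id."""
    idle = dict(vehicles)
    assignment = {}
    for rid, (rx, ry) in requests.items():
        if not idle:
            break   # more requests than modules: the rest must wait
        vid = min(idle, key=lambda v: (idle[v][0] - rx) ** 2 + (idle[v][1] - ry) ** 2)
        assignment[rid] = vid
        del idle[vid]   # that module is now busy
    return assignment

plan = dispatch({"r1": (0, 0), "r2": (5, 5)},
                {"v1": (1, 0), "v2": (6, 5), "v3": (9, 9)})
print(plan)   # {'r1': 'v1', 'r2': 'v2'}
```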
