Table of Contents

  • What Is Robotics?
  • Types of Robots
  • Advantages and Disadvantages of Robots
  • The Future of Robotics: What’s the Use of AI in Robotics?
  • A Word About Robot Software
  • The Future of Robotics and Robots
  • The Future of Robotics: How Robots Will Change the World
  • Choose the Right Program
  • How to Get Started in Robotics
  • The Future of Robotics: How Robots Will Transform Our Lives

The Future of Robotics: How Robots Will Transform Our Lives

What comes to mind when you hear the word “robot”? Do you picture a metallic humanoid in a spaceship in the distant future? Perhaps you imagine a dystopian future where humanity is enslaved by its robot overlords. Or maybe you think of an automobile assembly line with robot-like machines putting cars together.

Whatever you think, one thing is sure: robots are here to stay. Fortunately, it seems likely that robots will be more about doing repetitive or dangerous tasks than seizing supreme executive power. Let’s look at robotics: defining and classifying the term, figuring out the role of Artificial Intelligence in the field, and exploring the future of robotics and how it will change our lives.

What Is Robotics?

Robotics is the engineering branch that deals with the conception, design, construction, operation, application, and usage of robots. Digging a little deeper, we see that a robot is defined as an automatically operated machine that carries out a series of actions independently and does work usually accomplished by a human.

Incidentally, robots don’t have to resemble humans, although some do. Look at images of automobile assembly lines for proof. Robots that appear human are typically referred to as “androids.” Designers often make their creations appear human so that people feel more at ease around them, but it doesn’t always work: some people find robots, especially ones that resemble people, creepy.

Types of Robots

Robots are versatile machines, evidenced by their wide variety of forms and functions. Here's a list of a few kinds of robots we see today:

  • Healthcare: Robots in the healthcare industry do everything from assisting in surgery, to providing physical therapy that helps people walk again, to moving through hospitals delivering essential supplies such as meds or linens. Healthcare robots have even contributed to the ongoing fight against the pandemic, filling and sealing testing swabs and producing respirators.
  • Homelife: You need look no further than a Roomba to find a robot in someone's house. But home robots now do more than vacuum floors; they can mow lawns or augment tools like Alexa.
  • Manufacturing: The field of manufacturing was the first to adopt robots, such as the automobile assembly line machines we previously mentioned. Industrial robots handle various tasks like arc welding, material handling, steel cutting, and food packaging.
  • Logistics: Everybody wants their online orders delivered on time, if not sooner. So companies employ robots to stack warehouse shelves, retrieve goods, and even conduct short-range deliveries.
  • Space Exploration: Mars explorers such as Sojourner and Perseverance are robots. The Hubble telescope is classified as a robot, as are deep space probes like Voyager and Cassini.
  • Military: Robots handle dangerous tasks, and it doesn't get much more dangerous than modern warfare. Consequently, the military enjoys a diverse selection of robots equipped to address many of the riskier jobs associated with war. For example, there's the Centaur, an explosive detection/disposal robot that looks for mines and IEDs; the MUTT, which follows soldiers around and totes their gear; and SAFFiR, which fights fires that break out on naval vessels.
  • Entertainment: We already have toy robots, robot statues, and robot restaurants. As robots become more sophisticated, expect their entertainment value to rise accordingly.
  • Travel: We only need to say three words: self-driving vehicles.


Advantages and Disadvantages of Robots

Like any innovation today, robots have their pluses and minuses. Here’s a breakdown of the good and the bad about robots and the future of robotics.

Advantages

  • They work in hazardous environments: Why risk human lives when you can send a robot in to do the job? Consider how preferable it is to have a robot fighting a fire or working on a nuclear reactor core.
  • They’re cost-effective: Robots don’t take sick days or coffee breaks, nor do they need perks like life insurance, paid time off, or healthcare offerings like dental and vision.
  • They increase productivity: Robots are wired to perform repetitive tasks ad infinitum; the human brain is not. Industries use robots to accomplish the tedious, redundant work, freeing employees to tackle more challenging tasks and even learn new skills.
  • They offer better quality assurance: Vigilance decrement is a lapse in concentration that hits workers who repeatedly perform the same functions. As the human’s concentration level drops, the likelihood of errors, poor results, or even accidents increases. Robots perform repetitive tasks flawlessly without having their performance slip due to boredom.

Disadvantages

  • They incur steep startup costs: Robot implementation is an investment risk, and it costs a lot. Although most manufacturers eventually recoup their investment over the long run, it's expensive in the short term. However, this is a common obstacle in any new technological implementation, like setting up a wireless network or performing a cloud migration.
  • They might take away jobs: Yes, some people have been replaced by robots in certain situations, like assembly lines, for instance. Whenever the business sector incorporates game-changing technology, some jobs become casualties. However, this disadvantage might be overstated because robot implementation typically creates a greater demand for people to support the technology, which brings up the final disadvantage.
  • They require companies to hire skilled support staff: This drawback is good news for potential employees, but bad news for thrifty-minded companies. Robots require programmers, operators, and repair personnel. While job seekers may rejoice, the prospect of having to recruit professionals (and pay professional-level salaries!) may serve as an impediment to implementing robots.

The Future of Robotics: What’s the Use of AI in Robotics?

Artificial Intelligence (AI) improves human-robot interaction, expands collaboration opportunities, and raises quality. The industrial sector already has cobots, robots that work alongside humans to perform testing and assembly.

Advances in AI help robots mimic human behavior more closely, which is why they were created in the first place. Robots that act and think more like people can integrate better into the workforce and bring a level of efficiency unmatched by human employees.

Robot designers use Artificial Intelligence to give their creations enhanced capabilities like:

  • Computer Vision: Robots can identify and recognize objects they encounter, discern details, and learn how to navigate around or avoid specific items.
  • Manipulation: AI helps robots gain the fine motor skills needed to grasp objects without destroying them.
  • Motion Control and Navigation: Robots no longer need humans to guide them along paths and process flows. AI enables robots to analyze their environment and self-navigate (see the path-planning sketch after this list). This capability even applies to the virtual world of software, where AI helps robotic software processes avoid flow bottlenecks or process exceptions.
  • Natural Language Processing (NLP) and Real-World Perception: Artificial Intelligence and Machine Learning (ML) help robots better understand their surroundings, recognize and identify patterns, and comprehend data. These improvements increase the robot’s autonomy and decrease reliance on human agents.
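To make the motion control and navigation point concrete, here is a minimal occupancy-grid path planner in Python. It is only a sketch of the idea (a real robot fuses sensor data and plans in continuous space), and the grid, coordinates, and plan_path helper are invented for illustration:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over an occupancy grid (0 = free, 1 = obstacle).
    Returns the shortest list of (row, col) cells from start to goal, or None."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            inside = 0 <= nr < len(grid) and 0 <= nc < len(grid[0])
            if inside and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None  # walled off: no route exists

# A small room with a wall the robot must route around.
room = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 0, 0],
]
print(plan_path(room, (0, 0), (2, 0)))
```

Running it prints a route that detours around the wall, which is the essence of self-navigation: the software, not a human, chooses the path.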

A Word About Robot Software

Software robots are computer programs that perform tasks without human intervention, such as web crawlers or chatbots. These robots are entirely virtual and not considered actual robots since they have no physical characteristics.

This technology shouldn't be confused with robotic software, which is loaded into a robot and determines its programming. Still, it's normal to see overlap between the two since, in both cases, the software helps the entity (robot or computer program) perform its functions independently of human interaction.
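As a minimal illustration of such a software robot, the sketch below performs one step of a web crawler using only Python's standard library; the URL is a placeholder, and a real crawler would loop over the discovered links while honoring robots.txt and rate limits:

```python
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag it sees."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(value for name, value in attrs
                              if name == "href" and value)

def crawl_once(url):
    """One crawler step: fetch a page and return the links found on it."""
    with urlopen(url) as response:
        html = response.read().decode("utf-8", errors="replace")
    collector = LinkCollector()
    collector.feed(html)
    return collector.links

print(crawl_once("https://example.com"))  # placeholder target
```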

The Future of Robotics and Robots

Thanks to improved sensor technology and remarkable advances in Machine Learning and Artificial Intelligence, robots will keep evolving from mere rote machines into collaborators with cognitive functions. These and other associated fields are on an upward trajectory, and robotics will benefit significantly from those strides.

We can expect greater numbers of increasingly sophisticated robots to be incorporated into more areas of life, working with humans. Contrary to dystopian-minded prophets of doom, these improved robots will not simply replace workers. Industries rise and fall, and some become obsolete in the face of new technologies, but those technologies also bring new opportunities for employment and education.

That’s the case with robots. Perhaps there will be fewer human workers welding automobile frames, but there will be a greater need for skilled technicians to program, maintain, and repair the machines. In many cases, this means that employees could receive valuable in-house training and upskilling, giving them a set of skills that could apply to robot programming and maintenance and other fields and industries.

Robots will increase economic growth and productivity and create new career opportunities for many people worldwide. However, there are still warnings about massive job losses, with forecasts of 20 million manufacturing jobs lost by 2030, or of 30% of all jobs being automated by 2030.

But thanks to the consistent levels of precision that robots offer, we can look forward to robots handling more of the burdensome, redundant manual labor tasks, making transportation work more efficiently, improving healthcare, and freeing people to improve themselves. But, of course, time will tell how this all works out.

Supercharge your career in AI and ML with Simplilearn's comprehensive courses. Gain the skills and knowledge to transform industries and unleash your true potential. Enroll now and unlock limitless possibilities!

Choose the Right Program

  • AI Engineer (Simplilearn): All Geos; 11 months; basic coding experience required; 10+ skills including data structures, data manipulation, NumPy, Scikit-Learn, Tableau and more; additional benefits include exclusive hackathons, masterclasses and Ask-Me-Anything sessions by IBM; cost: $$
  • Post Graduate Program in Artificial Intelligence (Purdue University): All Geos; 11 months; basic coding experience required; 16+ skills including chatbots, NLP, Python, Keras and more; additional benefits include applied learning via 3 capstone and 12 industry-relevant projects, Purdue Alumni Association membership, free six-month IIMJobs Pro membership, and resume-building assistance; cost: $$$$
  • Artificial Intelligence & Machine Learning Bootcamp (Caltech): US; 6 months; coding experience required; 12+ skills including Ensemble Learning, Python, Computer Vision, Statistics and more; additional benefits include 22 CEU credits and Caltech CTME Circle membership; cost: $$$

How to Get Started in Robotics

If you want to become part of the robot revolution (revolutionizing how we live and work, not an actual overthrow of humanity), Simplilearn has what you need to get started. The AI and Machine Learning Bootcamp, delivered in partnership with IBM and Caltech, covers vital robot-related concepts such as statistics, data science with Python, Machine Learning, deep learning, NLP, and reinforcement learning.

The bootcamp covers the latest tools and technologies from the AI ecosystem, featuring masterclasses by Caltech instructors and IBM experts, including hackathons and Ask Me Anything sessions conducted by IBM.

According to ZipRecruiter, AI Engineers in the US can earn a yearly average of $164,769, and Glassdoor reports that similar positions in India pay an annual average of ₹949,364.

Visit Simplilearn today and start an exciting new career with a fantastic future!


The Future of Robots and Robotics

The future of robotics evokes both exciting and cautious undertones as employees learn how to navigate a human-robot workforce.

Mike Thomas

Pop culture is perhaps the main culprit behind the public’s warped perception of the future of robotics. Although figures like C-3PO in Star Wars, Data in Star Trek and the cyborg in The Terminator have given robotics some flashy mainstream appeal, they have also established narrow expectations for what robots could be and accomplish in the future.

“I’m never going to rule stuff out,” said Blake Hannaford, robotics professor at the University of Washington in Seattle. “But if you look back on science fiction from the ’50s and ’60s and compare it to today, it really missed the mark.”

For better or for worse, robots have defied human expectations. It’s unlikely that we’ll have to beware of Schwarzenegger-esque killer robots anytime soon, but even so, the future of robotics is sure to have surprises in store.


What Is Robotics?

First, let’s cover some of the basics. Robotics is the practice of designing and manufacturing robots, which perform physical tasks for humans and may possess some degree of autonomy. The field is interdisciplinary by nature, connecting to areas like engineering, computer science and artificial intelligence.

While robots on the big screen often demonstrate human traits, the robotics field encompasses everything from humanoid machines to robotic arms that operate in an assembly line. Robots are already assisting humans in completing major surgeries, rescue operations and climate explorations. While robots fulfill wide-ranging roles, there are certain characteristics that link them under the same umbrella.

What Are Robots?

A robot is a machine that performs tasks typically completed by humans. Different robots come with varying degrees of automation, but each should be able to complete a certain set of tasks on its own. Here are a few basic traits common to all robots:

  • Robots display a physical form made of mechanical parts.
  • All robots require an electric current — whether from batteries or built-in circuitry — to power their movements and decisions.
  • Each robot is guided by programming software and rules that allow it to complete actions and sometimes make decisions on its own (see the control-loop sketch after this list).
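To make that last trait concrete, here is a minimal sense-plan-act loop, the pattern most robot control software follows. The sensor and motor functions are simulated stand-ins, not any vendor's API:

```python
import random

def sense():
    """Stand-in for a real sensor: does the bump sensor detect an obstacle?"""
    return random.random() < 0.3

def plan(obstacle_ahead):
    """The rules layer: map the sensor reading to an action."""
    return "turn_left" if obstacle_ahead else "drive_forward"

def act(action):
    """Stand-in for motor output; a real robot would drive actuators here."""
    print("executing:", action)

# The sense-plan-act loop at the heart of most robot control software.
for _ in range(5):
    act(plan(sense()))
```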

Contrary to people’s tendency to paint robots in a human light, it’s a degree of inhumanness that defines robots. The ways robots fall short of or surpass human abilities will shape the future of human-robot relationships, and that future promises to be complex, with both compelling and cautionary undertones, as robot types proliferate.

Types of Robots

The robotics ecosystem undergoes constant change, but there are still types of robots that appear most often. Below are the main categories that robots fall under, covering everything from chatbots to humanoids.

Pre-Programmed Robots

Pre-programmed robots are given commands beforehand and cannot change their behavior while performing an action. These types of robots are ideal for completing a single, repetitive task.

Humanoid Robots

Humanoid robots exhibit human-like physical features and even facial expressions. Their human resemblance makes them a good fit for service jobs that require face-to-face human interaction.

Autonomous Robots

Autonomous robots can perform actions and make decisions without human intervention. These robots depend on complex computers to perceive and analyze their surroundings. 

Teleoperated Robots

Teleoperated robots are remotely controlled by a human operator through a wireless system like Wi-Fi. They are ideal for performing high-risk actions in extreme environments. 

Augmenting Robots

Augmenting robots combine with the human body to supplement a current ability or replace a lost ability. Examples like prosthetic legs have improved people’s quality of life. 

Software Bots

Software bots are computer applications that rely on pieces of code to complete actions on their own. Because these bots only exist in online or computer forms, they aren’t considered robots.

Advantages and Disadvantages of Robots

The dialogue surrounding robots is complicated, evoking both hope and fear from different parties. While there is no doubt that robotics is forever changing society, the impact on humans remains uncertain in light of the benefits and consequences that robots present.

Advantages of Robots

There are many reasons to take an optimistic view of robots, including advancements in other fields and opportunities for humans to perform more interesting, highly skilled jobs.

Increased Innovation

Robotics often leads to breakthroughs in other fields because of its interdisciplinary nature. Computing power is necessary to fuel robots, and its growth has led to improvements in various technologies. For example, a smartphone can last longer and perform more tasks with the same battery life as its predecessors. More efficient computing power has also helped computer vision and natural language processing make great strides with the goal of enabling robots to better compile and learn from visual data.

As companies push for more intelligent robots, developers will need to create more advanced software to meet these demands as well. The interconnectedness of robotics encourages many fields to move beyond the limits of current knowledge.

Complementary Support

While some workers view robots as replacements, many workers are finding robots to be excellent complements in their work environments. Collaborative robots, also known as cobots, have stepped up to handle repetitive, mundane job duties that require little intellectual exercise from an average human being. In the finance industry, cobots conduct audits and detect fraud, allowing employees to reallocate more time and energy toward complex projects.

By supporting their employees and customers with robots, companies can enjoy higher productivity levels and profits.

New Job Opportunities

It’s true that the introduction of robots will alter the job landscape, but the disappearance of some roles also makes room for higher-level jobs. For every worker replaced by a robot, companies still need to hire software developers and other tech professionals who know how to maintain robotics technology. In this sense, one could argue that robots have overtaken boring jobs and paved the way for better ones.

For companies suffering from a shortage of workers, robotics also provides a golden opportunity to upgrade their operations. Businesses can team up with robots to automate tasks, introduce employees to new technologies and give them more time to rest and apply their energies accordingly.

Disadvantages of Robots

Robotics has and will continue to change how people live and work, and not all of these changes are beneficial — which is why realistic concerns have been raised.

Privacy and Security Issues

Deciding where to draw boundaries has been a point of contention with robots. AI and robotics come with a range of potential security threats, such as performing surveillance, carrying out social engineering schemes and even committing physical attacks.

Another nightmare scenario for political and business leaders would be an accident involving a robot, such as a drone colliding with an airplane. While these are examples of worst-case events, the industry may need more regulations to ensure robots are applied in a safe and ethical manner.

Unfamiliar Technologies

Robots may lead to a higher demand for tech-based roles to maintain this technology, but not all employees have the skills needed for these jobs. Besides in-depth training, a four-year computer science degree comes with a certain level of prestige that no amount of upskilling may be able to match. As a result, current employees who can’t afford college may get left behind in the wake of a robot revolution.

Job Competition

The automation capabilities of robots mean many workers are likely to be replaced by AI and robotics technologies. In fact, it’s expected that machines will disrupt 85 million jobs by 2025 as workforces resemble more of a human-machine hybrid. And within these hybrid settings, humans may struggle to keep pace with their robot counterparts.

A combination of expanding technologies and a lack of tech talent could hint at a brutal job market for many workers. As AI and robots encroach into areas where humans perform manual labor, workers will need to broaden their skill sets and keep themselves marketable in a job ecosystem shaped by more high-tech roles.

Advancements in Artificial Intelligence and the Future of Robotics

AI is reshaping robotics and creating even more possibilities for how humans and robots interact with each other. Here’s how.     

Digital Twins 

Engineers already use digital twins to simulate the behavior of robots, refine robotic designs to maximize performance and even control robots from a distance. But AI takes these capabilities to another level, providing alerts for predictive equipment maintenance and simulating entire processes to find more efficient workflows. 

Because AI-powered digital twins can compile and analyze large amounts of data, they’re also ideal for revealing customer trends, pinpointing anomalies and providing other big-picture insights.

Robotic Automation

The development of artificial intelligence has led to increasing robotic automation. This has benefited service robots that perform simple tasks and hold basic conversations, as well as drones that are able to fly on their own to gather aerial data for construction sites, monitor crops for farmers and deliver packages for food companies.

Smart Cities

Visions for sustainable smart cities often include AI and robots working together. For example, Seoul has used robotics equipped with AI to provide care to the elderly and assist in classroom education for youth. The two also have a role to play in leading waste management efforts in urban environments: AI-powered robots can quickly transport and organize waste, enabling cities to maintain cleaner spaces while reusing as many resources as possible.    

Generative AI

Many robots are able to complete requests based on pre-programmed guidelines, basic controls and speech recognition technology. But ChatGPT and the innovation it sparked in the generative AI space could spill over into robotics, bringing upgrades to robots’ language models. 

Microsoft is working to implement ChatGPT into robots, allowing users to initiate interactions with robots through verbal statements. While this technology is still in development, robots infused with generative AI hint at a world where robots can understand and respond to human language to deliver faster results.

The Future of Robotics

At companies and universities around the world, engineers and computer scientists are devising ways to make robots more perceptive and dextrous.

The robotics industry worldwide keeps innovating, combining artificial intelligence with computer vision and other sensory technologies, according to Analytics Insight magazine. The magazine noted that newer iterations of robots are easier to set up and program than their predecessors. Some notable developments in recent years include high-tech ocean robots that explore the world underneath the waves; a robot named Saul that shoots UV rays at the Ebola virus to destroy it; and an AI-controlled therapeutic robot that helps caregivers and patients communicate more efficiently, which reduces stress.

Robots are becoming more human-like in cognitive ability and, in some cases, appearance. In warehouses and factories, at fast food joints and clothing retailers, they’re already working alongside humans. One warehouse robot in Germany can pick like a champ. They’re even starting to perform functions that have typically been the domain of humans, such as making coffee, caring for the elderly and, crucially, ferrying toilet paper. Robots have even made their way into the agriculture and biomedical sectors, harvesting crops, treating diseases and performing other essential tasks. But no matter which sector they serve, robots are far less advanced than many thought they’d be by now.

Will Robots Steal Your Job?

Going forward, Hannaford said, robots will “free up people’s brains” to perform other, more complex tasks. But just as the industrial revolution displaced countless humans who performed manual labor, the robotics revolution won’t happen — and isn’t happening — “without pain and fear and disruption.”

“There’s going to be a lot of people who fall by the wayside,” he said of the countless jobs that will be automated or disappear entirely.

Almost 50 percent of workers who retain their roles through 2025 in the wake of automation will need some form of retraining. Those who do acquire the proper skills will be primed to fill one of the 97 million new roles technologies like robotics and AI are expected to create.

In a warehouse setting, for example, those who transition to other tasks that require “higher skills” such as thinking and complex movement are far less at risk of getting robo-bumped. And they will get bumped. Vince Martinelli, former head of product and marketing at RightHand Robotics, is confident that simple but prevalent jobs like warehouse order picking will largely be done by robots in 10 to 20 years. Right now, though, the technology just isn’t there.

But some experts say the more robots outperform humans, the more humans will be expected to keep up.

“As we start to compare the speed and efficiency of humans to robots, there is a whole new set of health and safety issues that emerge,” Beth Gutelius, associate director of the Center for Urban Economic Development at the University of Illinois–Chicago, told the New York Times.

That’s another argument for retraining. As authors Marcus Casey and Sarah Nzau noted in a Brookings Institution blog post:

“The development of technologies that facilitate new tasks, for which humans are better suited, could potentially lead to a much better future for workers. While the widespread introduction of computers into offices certainly displaced millions of secretaries and typists, the new tasks in associated industries meant new occupations, including computer technicians, software developers and IT consultants.”

Soft Robotics Gains Steam

Researchers in a newish niche called “soft robotics” are working on mimicking human motion. Developing high-performing robotic brains is incredibly difficult. Getting robots to physically react like people do is even harder, as mechanical engineer Christoph Keplinger explained during a 2018 TED Talk.

“The human body makes extensive use of soft and deformable materials such as muscle and skin,” he said. “We need a new generation of robot bodies that is inspired by the elegance, efficiency and by the soft materials of the designs found in nature.”

In describing his efforts to build artificial muscles called “soft activators,” Keplinger calls biological muscle “a true masterpiece of evolution” that can heal after being damaged and is “tightly integrated with sensory neurons for feedback on motion and the environment.”

To that end, he and his team in Boulder, Colorado, invented something they dubbed HASEL — hydraulically amplified self-healing electrostatic actuators, which are mechanisms that control movement. Besides expanding and contracting like real muscle, the young technology can be actuated more quickly. In addition, HASEL can be adjusted to deliver larger forces for moving heavy objects, dialed down for more precise movement, and programmed to “deliver very fluidic muscle-like movement and bursts of power to shoot up a ball into the air.”

Besides being compatible with large-scale manufacturing applications, he noted, HASEL technology also could be used to “improve the quality of life” for those who need prosthetic limbs, as well as older people who would benefit from enhanced agility and dexterity.

“Maybe we can call it robotics for anti-aging,” Keplinger said, “or even a next stage of human evolution.”

Researchers have since turned to creatures like jellyfish for further inspiration on how to design soft robots. This out-of-the-box thinking has led to promising results, spurring the development of soft robots that can grip objects with the proper amount of force.

The niche is still young, but many sectors believe it holds a wealth of potential. Supporting NASA-led Mars expeditions and assisting physicians during surgeries are a few of the tasks soft robots may be expected to complete in the near future.

The Rise of Humanoid Robots

Outside of a factory or warehouse setting, some say it’s advantageous for robots to look more like humans. That’s where humanoids come in.

Over at RightHand Robotics, Martinelli said the current focus is on wider customer adoption of robots that can solve specific problems in commercial settings. Even some very impressive and sensor-packed models that can run, jump and flip — including several from Boston Dynamics — aren’t in that category. Not yet, anyway.

Boston Dynamics CEO Marc Raibert has said his long-term goal is to “build robots that have the functional levels of performance that are equal to or greater than people and animals. I don’t mean that they have to work the way that people and animals work, or that they have to look like them, just at the level of performance in terms of the ability to move around in the world, the ability to use our hands.”

The success of the company’s robot dog Spot as an industrial worker has breathed new life into the humanoid space and elevated efforts to fashion humanoids into service helpers. But behind-the-scenes labor isn’t the only area where humanoid robots could make an impact.

As Will Jackson, director at United Kingdom-based Engineered Arts, told BBC television, “Humanoid robots are great for entertainment and they’re great for communication. If you want something that interacts with people, the best way to do that is make something person-shaped.”

Like this invention from Agility Robotics. Dubbed “Digit” and reportedly priced in the low-to-mid six figures, it’s intended for vehicle-to-door delivery of packages weighing 40 pounds or less. Could we see armies of these things in the years ahead? Maybe. Digit hasn’t yet been tested in uncontrolled settings. And if viral YouTube videos are any indication, even a controlled environment is no guarantee of success.

“One of the biggest problems we have is there is nothing as good as human muscle,” Jackson explained. “We don’t come anywhere near to what a human can do. The way you will see humanoid robots is in a commercial context. So you might go into a shop and you might see a robot in there that’s trying to sell you something. Don’t worry about all the clever AI. That’s really going to stay on your computer. It’s not going to chase you up the stairs anytime soon.”

Impact of AI and Robotics on Different Industries

Artificial intelligence and robotics have wide-reaching consequences for society, but the following industries have been especially impacted by these technologies. 

Manufacturing

Robots along the assembly line produce goods with a quality and consistency unmatched by human workers. With the addition of AI, organizations can now rely on these machines to operate independently and even oversee their own predictive maintenance reporting. Human workers can then leave repetitive tasks to robots and focus on more complex business needs.

Healthcare

Besides social and care robots, the healthcare industry depends on medical robots equipped with AI to aid in surgeries, power exoskeletons and guide patients through physical therapy and recovery. AI-based robots can also help doctors make more accurate diagnoses, reducing the time it takes to deliver personalized treatment to patients.

Logistics

Warehouses and logistics organizations have employed AI and robotics for heavy-lifting work. Robots can move products around warehouses, stack shelves and perform other manual labor to relieve human workforces of physical wear and tear. Companies are even entrusting AI robots like drones to make short deliveries, bringing down wait times and delays.

Customer Service

Chatbots and virtual assistants have become commonplace for online customers, but AI and robotics are beginning to handle in-person customer interactions as well. Humanoid and non-humanoid robots conduct face-to-face conversations with customers, retrieving products, answering questions and performing other small tasks to make shoppers feel welcome.   

Hospitality

Restaurants have come to rely on robots to help with cooking and cleaning needs in the kitchen, and robots can also deliver food to waiting customers. Within the retail space, AI-powered robots can compile insights on customer behaviors in stores to determine the best ways to arrange products and ensure a smoother shopping experience.

Hotels, resorts and other travel hubs are supporting travelers with AI-driven robots that act as concierges, front-desk help, butlers, guides and other essential personnel. Airports are also using security robots to enforce airline rules, such as detecting passengers with weapons or any illegal items not allowed on flights. 

Space Exploration

Robotics and artificial intelligence have come to the aid of astronauts, paving the way for space exploration in places like Mars. Martian robots already have the capacity to venture into environments not suitable for humans. The addition of AI allows these robots to operate autonomously, making it easier for groups like NASA to sustain their space exploration efforts.

Energy

Similar to space exploration, the search for plentiful resources has led many companies in the energy sector to embrace AI and robotics. Robots can assist in mapping out the ocean floor and locating high concentrations of natural gas. On land, robots are also tasked with overseeing grid maintenance and fixing wind turbines and other structures.

Education

While robots can’t replace human teachers, they can supplement them in various ways. Leveraging AI, robots can lead one-on-one and small group sessions to help students gain a better grasp of the material. Robots with human features can encourage younger students to exercise and strengthen their social skills as well.

Smart Homes and Cities

Smart homes bring AI and robotics into the lives of consumers, simplifying chores with inventions like the robotic kitchen and the Roomba vacuum. On a larger scale, smart cities are giving robots responsibility for areas like waste management and pipe maintenance to keep public spaces healthy.

What Does All This Mean for Humans?

The rise of AI and robotics is bound to forever alter society, generating both excitement and uncertainty. 

Robotics and artificial intelligence can streamline everyday chores in the home, improve operations in workplaces and contribute to efforts to make cities and public spaces more sustainable. While robots may take on a greater role in society, they may also merely supplement the work that human professionals do. In this way, robots can serve as partners in building a more efficient and safer environment alongside humans.

At the same time, AI and robots present other problems that need to be resolved. There’s no doubt that some jobs will be lost to automation, and issues around data privacy and rapidly evolving technologies leave many people vulnerable. 

Arguments for and against these technologies are valid, but they don’t change the fact that AI and robotics are here to stay. While it still remains to be seen whether these technologies will have a positive or negative impact on humanity, the one certainty is that humans must adjust to a world where robots and AI are a regular part of everyday life.

Frequently Asked Questions

How is AI changing robotics?

AI is enhancing many capabilities of robots, spurring advancements in automated machines, drone technology and the use of generative AI in robotics.

How will the future of robotics impact humans?

Robots contribute to a future where processes in homes, workplaces and public spaces become safer and more efficient. At the same time, job losses due to automation and security risks are major concerns tied to AI and robotics. The overall positive or negative effect of robotics on humans remains to be seen.

Lisa Bertagnoli contributed reporting to this story.

Robotics Future in 2025

  • Swarm robotics: Large numbers of robots communicate with one another and perform different tasks in a very short time with high precision.
  • Micro robots: These can be deployed in places that are inaccessible to humans, too dangerous, or simply too small; micro robots can enter such places and work efficiently.
  • Modular robots: These are built from blocks or cubes that can arrange themselves in specific ways to perform tasks. The blocks usually contain magnets so they can attach firmly to one another.
  • Intellectual robots: Engineers and researchers are working on humanoid robots that can think and work like humans, incorporating artificial intelligence and machine learning so the robots behave and react like people. Pepper and Zora, recently introduced by different companies, are notable for understanding human expressions and reacting to them.
  • Alternate-powered robots: These run on solar or wave energy where no electricity is available.
  • Exoskeletons: External skeletons that support people with physical disabilities; exoskeletons also find military applications for injured soldiers.


Shaping the future of advanced robotics

The Google DeepMind Robotics Team


Introducing AutoRT, SARA-RT and RT-Trajectory to improve real-world robot data collection, speed, and generalization

Picture a future in which a simple request to your personal helper robot - “tidy the house” or “cook us a delicious, healthy meal” - is all it takes to get those jobs done. These tasks, straightforward for humans, require a high-level understanding of the world for robots.

Today we’re announcing a suite of advances in robotics research that bring us a step closer to this future. AutoRT, SARA-RT, and RT-Trajectory build on our historic Robotics Transformers work to help robots make decisions faster, and better understand and navigate their environments.

AutoRT: Harnessing large models to better train robots

We introduce AutoRT, a system that harnesses the potential of large foundation models, which is critical to creating robots that can understand practical human goals. By collecting more experiential training data – and more diverse data – AutoRT can help scale robotic learning to better train robots for the real world.

AutoRT combines large foundation models, such as a Large Language Model (LLM) or Visual Language Model (VLM), with a robot control model (RT-1 or RT-2) to create a system that can deploy robots to gather training data in novel environments. AutoRT can simultaneously direct multiple robots, each equipped with a video camera and an end effector, to carry out diverse tasks in a range of settings. For each robot, the system uses a VLM to understand its environment and the objects within sight. Next, an LLM suggests a list of creative tasks that the robot could carry out, such as “Place the snack onto the countertop,” and plays the role of decision-maker, selecting an appropriate task for the robot to carry out.

In extensive real-world evaluations over seven months, the system safely orchestrated as many as 20 robots simultaneously, and up to 52 unique robots in total, in a variety of office buildings, gathering a diverse dataset comprising 77,000 robotic trials across 6,650 unique tasks.


(1) An autonomous wheeled robot finds a location with multiple objects. (2) A VLM describes the scene and objects to an LLM. (3) An LLM suggests diverse manipulation tasks for the robot and decides which tasks the robot could do unassisted, which would require remote control by a human, and which are impossible, before making a choice. (4) The chosen task is attempted, the experiential data collected, and the data scored for its diversity/novelty. Repeat.
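The published description above is enough to sketch the control flow in code, though AutoRT's internals are not public; every name below is an illustrative stand-in, with the VLM and LLM replaced by canned outputs:

```python
# Stand-in "models": in the real system these are a VLM, an LLM, and an
# RT-1/RT-2 control policy. Every name here is illustrative, not AutoRT's API.

def vlm_describe(image):
    return "a countertop with a snack, a sponge and a knife"

def llm_suggest_tasks(scene):
    return ["place the snack onto the countertop",
            "wipe the countertop with the sponge",
            "hand the knife to a person"]

# Crude proxy for the Robot Constitution's rules against tasks involving
# humans, animals, sharp objects or electrical appliances.
BANNED_WORDS = ("human", "person", "animal", "knife", "appliance")

def passes_constitution(task):
    return not any(word in task for word in BANNED_WORDS)

def autort_step(camera_image):
    """One AutoRT-style data-collection cycle, following steps (1)-(4) above."""
    scene = vlm_describe(camera_image)                        # (2) describe scene
    candidates = llm_suggest_tasks(scene)                     # (3) propose tasks
    safe = [t for t in candidates if passes_constitution(t)]  #     and filter
    task = safe[0]                                            # decision-maker picks
    print("attempting:", task)                                # (4) RT model executes
    return task

autort_step(camera_image=None)  # picks the snack-placing task, not the knife one
```

Note how the safety filter sits between task proposal and execution; the Robot Constitution described below operates at that same point in the loop.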

Layered safety protocols are critical

Before robots can be integrated into our everyday lives, they need to be developed responsibly with robust research demonstrating their real-world safety.

While AutoRT is a data-gathering system, it is also an early demonstration of autonomous robots for real-world use. It features safety guardrails, one of which is providing its LLM-based decision-maker with a Robot Constitution - a set of safety-focused prompts to abide by when selecting tasks for the robots. These rules are in part inspired by Isaac Asimov’s Three Laws of Robotics – first and foremost that a robot “may not injure a human being”. Further safety rules require that no robot attempts tasks involving humans, animals, sharp objects or electrical appliances.

But even if large models are prompted correctly with self-critiquing, this alone cannot guarantee safety. So the AutoRT system comprises layers of practical safety measures from classical robotics. For example, the collaborative robots are programmed to stop automatically if the force on their joints exceeds a given threshold, and all active robots were kept in the line of sight of a human supervisor with a physical deactivation switch.
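A toy version of that classical guardrail layer, with an invented threshold rather than the system's actual spec, might look like this:

```python
JOINT_FORCE_LIMIT = 40.0  # newtons; an illustrative threshold, not a real spec

def safety_monitor(joint_forces):
    """Classical-robotics guardrail: halt if any joint force exceeds the limit.
    It sits below the learned models, so it holds even if they misbehave."""
    if any(force > JOINT_FORCE_LIMIT for force in joint_forces):
        return "STOP"
    return "CONTINUE"

print(safety_monitor([12.1, 8.4, 45.9]))  # -> STOP (third joint over the limit)
```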

SARA-RT: Making Robotics Transformers leaner and faster

Our new system, Self-Adaptive Robust Attention for Robotics Transformers (SARA-RT), converts Robotics Transformer (RT) models into more efficient versions.

The RT neural network architecture developed by our team is used in the latest robotic control systems, including our state-of-the-art RT-2 model. While transformers are powerful, they can be limited by computational demands that slow their decision-making: transformers rely critically on attention modules of quadratic complexity, so if an RT model’s input doubles (by giving a robot additional or higher-resolution sensors, for example), the computational resources required to process that input rise by a factor of four. The best SARA-RT-2 models were 10.6% more accurate and 14% faster than RT-2 models after being provided with a short history of images. We believe this is the first scalable attention mechanism to provide computational improvements with no quality loss.

SARA-RT makes models more efficient using a novel method of model fine-tuning that we call “up-training”. Up-training converts the quadratic complexity to mere linear complexity, sharply reducing the computational requirements. This conversion not only increases the original model’s speed, but also preserves its quality.
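SARA-RT's exact attention mechanism isn't spelled out here, but the quadratic-versus-linear contrast can be demonstrated with a generic kernel-feature-map linear attention, in the spirit of the open-sourced linear variants mentioned below. A rough NumPy sketch, with illustrative shapes and feature map:

```python
import numpy as np

def softmax_attention(Q, K, V):
    """Standard attention: the n x n score matrix makes cost quadratic in n."""
    scores = Q @ K.T / np.sqrt(Q.shape[1])               # shape (n, n)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

def linear_attention(Q, K, V, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Kernel-feature-map variant: associativity lets us form phi(K).T @ V
    first, a d x d matrix, so cost grows linearly with sequence length n."""
    kv = phi(K).T @ V                                    # shape (d, d), no n x n
    normalizer = phi(Q) @ phi(K).sum(axis=0)             # shape (n,)
    return (phi(Q) @ kv) / normalizer[:, None]

n, d = 512, 64                                           # tokens, head dimension
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((n, d)) for _ in range(3))
print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)
```

The payoff is in the shapes: softmax attention materializes an n x n score matrix, while the linear variant never builds anything larger than d x d, so doubling n doubles rather than quadruples the work.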

We designed our system for usability and hope many researchers and practitioners will apply it, in robotics and beyond. Because SARA provides a universal recipe for speeding up Transformers, without need for computationally expensive pre-training, this approach has the potential to massively scale up use of Transformers technology. SARA-RT does not require any additional code as various open-sourced linear variants can be used.

When we applied SARA-RT to a state-of-the-art RT-2 model with billions of parameters, it resulted in faster decision-making and better performance on a wide range of robotic tasks.

SARA-RT-2 model for manipulation tasks. Robot’s actions are conditioned on images and text commands.

And with its robust theoretical grounding, SARA-RT can be applied to a wide variety of Transformer models. For example, applying SARA-RT to Point Cloud Transformers - used to process spatial data from robot depth cameras - more than doubled their speed.

RT-Trajectory: Helping robots generalize

It may be intuitive for humans to understand how to wipe a table, but there are many possible ways a robot could translate an instruction into actual physical motions.

We developed a model called RT-Trajectory, which automatically adds visual outlines that describe robot motions in training videos. RT-Trajectory takes each video in a training dataset and overlays it with a 2D trajectory sketch of the robot arm’s gripper as it performs the task. These trajectories, in the form of RGB images, provide low-level, practical visual hints to the model as it learns its robot-control policies.
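As a concrete sketch of that overlay step, the snippet below draws a gripper path onto a frame with OpenCV. The coordinates and helper name are invented for illustration, and the real pipeline's extraction of trajectories from training videos is not shown:

```python
import numpy as np
import cv2  # opencv-python

def overlay_trajectory(frame, gripper_xy, color=(0, 255, 0)):
    """Draw a 2D gripper path onto a frame; the annotated RGB image is the
    kind of visual hint RT-Trajectory trains on."""
    points = np.asarray(gripper_xy, dtype=np.int32).reshape(-1, 1, 2)
    return cv2.polylines(frame.copy(), [points], isClosed=False,
                         color=color, thickness=3)

frame = np.zeros((256, 256, 3), dtype=np.uint8)   # stand-in for a camera frame
wipe_path = [(40, 200), (120, 190), (200, 200), (120, 210), (40, 200)]
cv2.imwrite("trajectory_hint.png", overlay_trajectory(frame, wipe_path))
```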

When tested on 41 tasks unseen in the training data, an arm controlled by RT-Trajectory more than doubled the performance of existing state-of-the-art RT models: it achieved a task success rate of 63%, compared with 29% for RT-2.

Traditionally, training a robotic arm relies on mapping abstract natural language (“wipe the table”) to specific movements (close gripper, move left, move right), making it hard for models to generalize to novel tasks. In contrast, an RT-Trajectory model enables RT models to understand "how to do" tasks by interpreting specific robot motions like those contained in videos or sketches.

The system is versatile: RT-Trajectory can also create trajectories by watching human demonstrations of desired tasks, and even accept hand-drawn sketches. And it can be readily adapted to different robot platforms.

Left: A robot, controlled by an RT model trained with a natural-language-only dataset, is stymied when given the novel task: “clean the table”. A robot controlled by RT-Trajectory, trained on the same dataset augmented by 2D trajectories, successfully plans and executes a wiping trajectory.

Right: A trained RT-Trajectory model given a novel task (“clean the table”) can create 2D trajectories in a variety of ways, assisted by humans or on its own using a vision-language model.

RT-Trajectory makes use of the rich robotic-motion information that is present in all robot datasets but currently under-utilized. RT-Trajectory not only represents another step along the road to building robots able to move with efficient accuracy in novel situations; it also unlocks knowledge from existing datasets.

Building the foundations for next-generation robots

By building on the foundation of our state-of-the-art RT-1 and RT-2 models, each of these pieces helps create ever more capable and helpful robots. We envision a future in which these models and systems can be integrated to create robots – with the motion generalization of RT-Trajectory, the efficiency of SARA-RT, and the large-scale data collection of systems like AutoRT. We will continue to tackle challenges in robotics today and to adapt to the new capabilities and technologies of more advanced robotics.


The robots are coming. And that’s a good thing.

MIT's Daniela Rus isn’t worried that robots will take over the world. Instead, she envisions robots and humans teaming up to achieve things that neither could do alone. 

By Daniela Rus and Gregory Mone


In this excerpt from the new book The Heart and the Chip: Our Bright Future with Robots, CSAIL Director Daniela Rus explores how robots can extend the reach of human capabilities.

Years ago, I befriended the biologist Roger Payne at a meeting of MacArthur Foundation fellows. Roger, who died in 2023, was best known for discovering that humpback whales sing and that the sounds of certain whales can be heard across the oceans. I’ve always been fascinated by whales and the undersea world in general; I’m an avid scuba diver and snorkeler. So it was no surprise that I thoroughly enjoyed Roger’s lecture. As it turned out, he found my talk on robots equally fascinating.

“How can I help you?” I asked him. “Can I build you a robot?”

A robot would be great, Roger replied, but what he really wanted was a capsule that could attach to a whale so he could dive with these wonderful creatures and truly experience what it was like to be one of them. I suggested something simpler, and Roger and I began exploring how a robot might aid him in his work.

When we first met, Roger had been studying whales for decades. One project was a long-term study on the behavior of a large group of southern right whales. These majestic mammals are 15 meters in length, with long, curving mouths and heads covered with growths called callosities. Roger had built a lab on the shores of Argentina’s Peninsula Valdés, an area that is cold, windy, and inhospitable to humans. The southern right whales love it, though. Every August they gather near the coast to have babies and mate. In 2009, Roger invited me to join him down at his lab. It was one of those invitations you just don’t decline.

Roger had been going to Peninsula Valdés for more than 40 years. Each season, he’d sit atop a cliff with binoculars and paper and pencil, and note which of his aquatic friends were passing by. Roger could identify each of the returning mammals by the unique callosities on their heads. He monitored their behavior, but his primary goal was to conduct the first long-term census of the population. He hoped to quantify the life span of these magnificent creatures, which are believed to live for a century or more.

As we started planning the trip, I suggested using a drone to observe the whales. Two of my former students had recently finished their degrees and were eager for an adventure. Plus, they had a robot that, with some minor adjustments, would be perfect for the task. After much discussion, reengineering, and planning, we brought along Falcon, the first eight-rotor drone that could hold a camera between its thrusters. Today such drones can be bought off the shelf, but in 2009, it was a breakthrough.


The clifftop vantage point from which Roger and his researchers had been observing the whales was better than being in the water with the great creatures, as the sight of divers would alter the whales’ behavior. Helicopters and planes, meanwhile, flew too high and their images were low resolution. The only problem with the cliff was that it was finite. The whales would eventually swim away and out of view.

Falcon removed these limitations and provided close-up images. The drone could fly for 20 to 30 minutes before its batteries ran down, and was capable of autonomous flight, though we kept a human at the controls. Immediately, Roger was besotted with his new research assistant, which offered him and his team a clear view of the whales for several miles without prompting any behavioral changes. In effect, they were throwing their eyes out over the ocean.

It’s far from the only way to use drones to extend the range of human eyes. After the whale project, we lent a drone to Céline Cousteau, the documentary film producer and granddaughter of the celebrated marine scientist Jacques Cousteau. She was studying uncontacted tribes in the Amazon and wanted to observe them without the risk of bringing germs like the cold virus to people who had not developed immunity. 

In my lab, we also built a drone that launched from a self-driving car, flew ahead of the vehicle and around corners to scan the crowded confines of our subterranean parking garage, and relayed its video back to the car’s navigation system—similar to the tech that appears in the 2017 movie Spider-Man: Homecoming, when the superhero, clinging to the side of the Washington Monument, dispatches a miniature flying robot to scan the building. NASA pushed this application even further with Ingenuity, the drone that launched from the Perseverance rover to complete the first autonomous flight on Mars. Ingenuity extended the visual reach of the rover, rising into the thin sky and searching for ideal routes and interesting places to explore.

Other human capabilities could be extended robotically as well. Powered exoskeletons with extendable arms could help factory workers reach items on high shelves—a robotic version of the stretchy physicist Reed Richards from the Fantastic Four comics. At home, a simple, extendable robotic arm could be stashed in the closet and put to use to retrieve things that are hard to reach. This would be especially helpful for older individuals, letting them pick up items off the floor without having to strain their backs or test their balance.

The robotic arm is a somewhat obvious idea; other reach-extending devices could have unexpected shapes and forms. For instance, the relatively simple FLX Bot from FLX Solutions has a modular, snake-like body that’s only an inch thick, allowing it to access tight spaces, such as gaps behind walls; a vision system and intelligence enable it to choose its own path. The end of the robot can be equipped with a camera for inspecting impossible-to-reach places or a drill to make a hole for electrical wiring. The snakebot puts an intelligent spin on hammers and drills and functions as an extension of the human. 

We can already pilot our eyes around corners and send them soaring off cliffs. But what if we could extend all of our senses to previously unreachable places? What if we could throw our sight, hearing, touch, and even sense of smell to distant locales and experience these places in a more visceral way? We could visit distant cities or far-off planets, and perhaps even infiltrate animal communities to learn more about their social organization and behavior.

For instance, I love to travel and experience the sights, sounds, and smells of a foreign city or landscape. I’d visit Paris once a week if I could, to walk the Champs-Elysées or the Jardins des Tuileries or enjoy the smells wafting out of a Parisian bakery. Nothing is ever as good as being there, of course, but we could use robots to approximate the experience of strolling through the famed city like a flâneur. Instead of merely donning a virtual-reality headset to immerse yourself in a digital world, you could use one of these devices, or something similar, to inhabit a distant robot in the actual world and experience that faraway place in an entirely new way.

Imagine mobile robots stationed throughout a city, like shareable motorized scooters or Citi Bikes. On a dreary day in Boston, I could switch on my headset, rent one of these robots, and remotely guide it through the Parisian neighborhood of my choice. The robot would have cameras to provide visual feedback and high-definition bidirectional microphones to capture sound. A much bigger challenge would be giving the robot the ability to smell its surroundings, perhaps taste the local food, and pass these sensations back to me. The human olfactory system uses 400 different types of smell receptors. A given scent might contain hundreds of chemical compounds and, when it passes through the nose, activate roughly 10% of these receptors. Our brains map this information onto a stored database of smells, and we can identify, say, a freshly baked croissant. Various research groups are using machine learning and advanced materials like graphene to replicate this approach in artificial systems. But maybe we should skip smell; the sights and sounds of Paris may suffice.
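
The matching step described above, receptor activations compared against a stored library of scents, is essentially a nearest-neighbor search, and it can be sketched in a few lines of code. The scent names and activation vectors below are invented for illustration; a real artificial nose would learn its patterns from chemical sensors rather than random data.

```python
import numpy as np

N_RECEPTORS = 400  # roughly the number of human smell-receptor types

rng = np.random.default_rng(0)

# Hypothetical "stored database" of scents: each maps to an activation
# pattern in which roughly 10% of receptor types fire (made-up data).
known_scents = {
    name: (rng.random(N_RECEPTORS) < 0.10).astype(float)
    for name in ("croissant", "espresso", "rain on pavement")
}

def identify(activation: np.ndarray) -> str:
    """Return the stored scent whose receptor pattern best matches the input."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(known_scents, key=lambda name: cosine(activation, known_scents[name]))

# A noisy re-encounter with the croissant pattern should still match it.
noisy = known_scents["croissant"].copy()
flipped = rng.integers(0, N_RECEPTORS, size=8)
noisy[flipped] = 1.0 - noisy[flipped]
print(identify(noisy))  # -> croissant
```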

Extending our perceptual reach through intelligent robots also has more practical applications. One idea we explored in my lab is a robotic Mechanical Turk for physical work. Developed by an innovative Hungarian in the late 18th century, the original Mechanical Turk was a contraption that appeared to play chess. In reality, a human chess player disguised inside the so-called machine manipulated the pieces. In 2005, Amazon launched its own variation on the concept through a service that lets businesses hire remote individuals to carry out tasks that computers can’t yet do. We envisioned a combination of the two ideas, in which a human remotely (but not secretly) operates a robot, guiding the machine through tasks that it could not complete on its own—and jobs that are too dangerous or unhealthy for humans to do themselves.

The inspiration for this project stemmed in part from my visit to a cold storage facility outside Philadelphia. I donned all the clothing that warehouse workers wear, which made the temperature manageable in the main room. But in the deep freezer room, where temperatures can be -30 °C or even colder, I barely lasted 10 minutes. I was still chilled to the bone many hours later, after several car rides and a flight, and had to take a hot bath to return my core temperature to normal. People should not have to operate in such extreme environments. Yet robots cannot handle all the needed tasks on their own without making mistakes—there are too many different sizes and shapes in the environment, and too many items packed closely together.

So we wondered what would happen if we were to tap into the worldwide community of gamers and use their skills in new ways. With a robot working inside the deep freezer room, or in a standard manufacturing or warehouse facility, remote operators could remain on call, waiting for it to ask for assistance if it made an error, got stuck, or otherwise found itself incapable of completing a task. A remote operator would enter a virtual control room that re-created the robot’s surroundings and predicament. This person would see the world through the robot’s eyes, effectively slipping into its body in that distant cold storage facility without being personally exposed to the frigid temperatures. Then the operator would intuitively guide the robot and help it complete the assigned task.

To validate our concept, we developed a system that allows people to remotely see the world through the eyes of a robot and perform a relatively simple task; then we tested it on people who weren’t exactly skilled gamers. In the lab, we set up a robot with manipulators, a stapler, wire, and a frame. The goal was to get the robot to staple wire to the frame. We used a humanoid, ambidextrous robot called Baxter, plus the Oculus VR system. Then we created an intermediate virtual room to put the human and the robot in the same system of coordinates—a shared simulated space. This let the human see the world from the point of view of the robot and control it naturally, using body motions. We demoed this system during a meeting in Washington, DC, where many participants—including some who’d never played a video game—were able to don the headset, see the virtual space, and control our Boston-based robot intuitively from 500 miles away to complete the task.
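
The “shared system of coordinates” at the heart of that demo can be pictured as a chain of coordinate transforms: the hand pose tracked by the headset is re-expressed first in the virtual room’s frame, then in the robot’s frame. Here is a minimal sketch using homogeneous transforms; the calibration offsets are invented, and the real Baxter-plus-Oculus pipeline involved far more, from orientation mapping to smoothing and safety limits.

```python
import numpy as np

def pose(R, t):
    """Pack a rotation matrix R and translation t into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration: where the shared virtual room sits relative to the
# operator's VR tracking frame, and where the robot's base sits in that room.
T_room_from_vr = pose(np.eye(3), [0.0, -1.2, 0.0])
T_robot_from_room = pose(np.eye(3), [0.5, 0.0, 0.3])

def hand_pose_in_robot_frame(T_vr_from_hand):
    """Re-express a tracked hand pose in the robot's coordinates, so the robot
    can mirror the operator's motion with its own end effector."""
    return T_robot_from_room @ T_room_from_vr @ T_vr_from_hand

# Example: the operator's hand is 40 cm in front of the headset origin.
hand = pose(np.eye(3), [0.0, 0.0, 0.4])
print(hand_pose_in_robot_frame(hand)[:3, 3])  # commanded end-effector position
```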

The best-known and perhaps most compelling examples of remote teleoperation and extended reach are the robots NASA has sent to Mars in the last few decades. My PhD student Marsette “Marty” Vona helped develop much of the software that made it easy for people on Earth to interact with these robots tens of millions of miles away. These intelligent machines are a perfect example of how robots and humans can work together to achieve the extraordinary. Machines are better at operating in inhospitable environments like Mars. Humans are better at higher-level decision-making. So we send increasingly advanced robots to Mars, and people like Marty build increasingly advanced software to help other scientists see and even feel the faraway planet through the eyes, tools, and sensors of the robots. Then human scientists ingest and analyze the gathered data and make critical creative decisions about what the rovers should explore next. The robots all but situate the scientists on Martian soil. They are not taking the place of actual human explorers; they’re doing reconnaissance work to clear a path for a human mission to Mars. Once our astronauts venture to the Red Planet, they will have a level of familiarity and expertise that would not be possible without the rover missions.

Robots can allow us to extend our perceptual reach into alien environments here on Earth, too. In 2007, European researchers led by J.L. Deneubourg described a novel experiment in which they developed autonomous robots that infiltrated and influenced a community of cockroaches. The relatively simple robots were able to sense the difference between light and dark environments and move to one or the other as the researchers wanted. The miniature machines didn’t look like cockroaches, but they did smell like them, because the scientists covered them with pheromones that were attractive to other cockroaches from the same clan.

The goal of the experiment was to better understand the insects’ social behavior. Generally, cockroaches prefer to cluster in dark environments with others of their kind. The preference for darkness makes sense—they’re less vulnerable to predators or disgusted humans when they’re hiding in the shadows. When the researchers instructed their pheromone-soaked machines to group together in the light, however, the other cockroaches followed. They chose the comfort of a group despite the danger of the light. 

These robotic roaches bring me back to my first conversation with Roger Payne all those years ago, and his dreams of swimming alongside his majestic friends. What if we could build a robot that accomplished something similar to his imagined capsule? What if we could create a robotic fish that moved alongside marine creatures and mammals like a regular member of the aquatic neighborhood? That would give us a phenomenal window into undersea life.

Sneaking into and following aquatic communities to observe behaviors, swimming patterns, and creatures’ interactions with their habitats is difficult. Stationary observatories cannot follow fish. Humans can only stay underwater for so long.

Remotely operated and autonomous underwater vehicles typically rely on propellers or jet-based propulsion systems, and it’s hard to go unnoticed when your robot is kicking up so much turbulence. We wanted to create something different—a robot that actually swam like a fish. This project took us many years, as we had to develop new artificial muscles, soft skin, novel ways of controlling the robot, and an entirely new method of propulsion. I’ve been diving for decades, and I have yet to see a fish with a propeller. Our robot, SoFi (pronounced like Sophie), moves by swinging its tail back and forth like a shark. A dorsal fin and twin fins on either side of its body allow it to dive, ascend, and move through the water smoothly, and we’ve already shown that SoFi can navigate around other aquatic life forms without disrupting their behavior.
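
A tail-driven gait like SoFi’s can be pictured, at its simplest, as a sinusoidal oscillator plus a steering bias, with separate fins handling depth. The sketch below is only a cartoon of that idea, with invented parameters; SoFi’s actual tail is a soft, hydraulically actuated mechanism with a far more involved controller.

```python
import math

def tail_angle(t, amplitude=0.5, freq_hz=1.2, turn_bias=0.0):
    """Tail deflection (radians) at time t: a sinusoidal sweep produces thrust,
    and a constant side bias steers the fish. All values are illustrative."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t) + turn_bias

def dive_fin_pitch(depth_error_m, gain=0.8, limit=0.4):
    """Pitch the dive fins in proportion to the depth error, clamped to limits."""
    return max(-limit, min(limit, gain * depth_error_m))

# One second of a gentle right turn at a 50 Hz control rate.
for step in range(50):
    t = step / 50.0
    cmd = tail_angle(t, turn_bias=0.1)
    # in a real controller, cmd would be sent to the tail actuator here
```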

SoFi is about the size of an average snapper and has taken some lovely tours in and around coral reef communities in the Pacific Ocean at depths of up to 18 meters. Human divers can venture deeper, of course, but the presence of a scuba-diving human changes the behavior of the marine creatures. A few scientists remotely monitoring and occasionally steering SoFi cause no such disruption. By deploying one or several realistic robotic fish, scientists will be able to follow, record, monitor, and potentially interact with fish and marine mammals as if they were just members of the community.

Eventually we’d like to be able to extend the reach of our ears, too, into the seas. Along with my friends Rob Wood, David Gruber, and several other biologists and AI researchers, we are attempting to use machine learning and robotic instruments to record and then decode the language of sperm whales. We hope to be able to discover common fragments of whale vocalizations and, eventually, to identify sequences that may correspond to syllables or even concepts. Humans map sounds to words, which in turn correspond to concepts or things. Do whales communicate in a similar fashion? We aim to find out. If we extend our ears into the sea and leverage machine learning, perhaps someday we will even be able to communicate meaningfully with these fascinating creatures.
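
One simple way to start hunting for “common fragments” in whale vocalizations is to describe each coda by its rhythm, its inter-click intervals, and cluster the results; rhythm types that recur across recordings are candidate units of the code. The sketch below uses invented interval data and off-the-shelf k-means purely for illustration; the actual project works from raw audio with much richer models.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: each sperm-whale coda reduced to its first four
# inter-click intervals in seconds (a real pipeline would first have to
# detect the clicks in the audio recordings).
codas = np.array([
    [0.20, 0.21, 0.19, 0.40],
    [0.21, 0.20, 0.20, 0.41],
    [0.10, 0.10, 0.10, 0.10],
    [0.11, 0.09, 0.10, 0.11],
])

# Group codas with similar rhythms; clusters that recur are candidate
# "syllables" of the kind the project hopes to find.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(codas)
print(labels)  # e.g. [0 0 1 1]: two recurring rhythm types
```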

The knowledge yielded would be reward enough, but the impact could be much larger. One unexpected result of Roger’s discovery that whales sing and communicate was the “save the whales” movement. His scientific verification of their intelligence spurred a global effort to protect them. He hoped that learning more about the other species on our planet could have a similar effect. As Roger often pointed out, our survival as a species depends on the survival of our small and large neighbors on this planet. Biodiversity is part of what makes Earth a wonderful place for humans to live, and the more we can do to protect these other life forms, the better the chances that our planet continues to be a habitable environment for people in the centuries to come.

These examples of how we can pair the heart with the chip to extend our perceptual reach range from the whimsical to the profound. And the potential for other applications is vast. Environmental and government organizations tasked with protecting our landscapes could dispatch eyes to autonomously monitor land for illegal deforestation without putting people at risk. Remote workers could use robots to extend their hands into dangerous environments, manipulating or moving objects at hazardous nuclear sites. Scientists could peek or listen into the secret lives of the many amazing species on this planet. Or we could harness our efforts to find a way to remotely experience Paris or Tokyo or Tangier. The possibilities are endless and endlessly exciting. We just need effort, ingenuity, strategy, and the most precious resource of all.

No, not funding, although that is helpful.

We need time. 

Excerpted from The Heart and the Chip: Our Bright Future with Robots. Copyright © 2024 by Daniela Rus and Gregory Mone. Used with permission of the publisher, W.W. Norton & Company. All rights reserved.

The AI revolution is coming to robots: how will it change them?

By Elizabeth Gibney

Humanoid robots developed by the US company Figure use OpenAI programming for language and vision. Credit: AP Photo/Jae C. Hong/Alamy

For a generation of scientists raised watching Star Wars, there’s a disappointing lack of C-3PO-like droids wandering around our cities and homes. Where are the humanoid robots fuelled with common sense that can help around the house and workplace?

Nature 630, 22–24 (2024). doi: https://doi.org/10.1038/d41586-024-01442-5

The WIRED Guide to Robots

Modern robots are not unlike toddlers: It’s hilarious to watch them fall over, but deep down we know that if we laugh too hard, they might develop a complex and grow up to start World War III. None of humanity’s creations inspires such a confusing mix of awe, admiration, and fear: We want robots to make our lives easier and safer, yet we can’t quite bring ourselves to trust them. We’re crafting them in our own image, yet we are terrified they’ll supplant us.

But that trepidation is no obstacle to the booming field of robotics. Robots have finally grown smart enough and physically capable enough to make their way out of factories and labs to walk and roll and even leap among us. The machines have arrived.

You may be worried a robot is going to steal your job, and we get that. This is capitalism, after all, and automation is inevitable. But you may be more likely to work alongside a robot in the near future than have one replace you. And even better news: You’re more likely to make friends with a robot than have one murder you. Hooray for the future!

The Complete History And Future of Robots

The definition of “robot” has been confusing from the very beginning. The word first appeared in 1921, in Karel Capek’s play R.U.R., or Rossum's Universal Robots. “Robot” comes from the Czech for “forced labor.” These robots were robots more in spirit than form, though. They looked like humans, and instead of being made of metal, they were made of chemical batter. The robots were far more efficient than their human counterparts, and also way more murder-y—they ended up going on a killing spree.

R.U.R. would establish the trope of the Not-to-Be-Trusted Machine (e.g., Terminator, The Stepford Wives, Blade Runner, etc.) that continues to this day—which is not to say pop culture hasn’t embraced friendlier robots. Think Rosie from The Jetsons. (Ornery, sure, but certainly not homicidal.) And it doesn’t get much family-friendlier than Robin Williams as Bicentennial Man.

The real-world definition of “robot” is just as slippery as those fictional depictions. Ask 10 roboticists and you’ll get 10 answers—how autonomous does it need to be, for instance. But they do agree on some general guidelines: A robot is an intelligent, physically embodied machine. A robot can perform tasks autonomously to some degree. And a robot can sense and manipulate its environment.

Think of a simple drone that you pilot around. That’s no robot. But give a drone the power to take off and land on its own and sense objects and suddenly it’s a lot more robot-ish. It’s the intelligence and sensing and autonomy that’s key.

But it wasn’t until the 1960s that a company built something that started meeting those guidelines. That’s when SRI International in Silicon Valley developed Shakey, the first truly mobile and perceptive robot. This tower on wheels was well-named—awkward, slow, twitchy. Equipped with a camera and bump sensors, Shakey could navigate a complex environment. It wasn’t a particularly confident-looking machine, but it was the beginning of the robotic revolution.

Around the time Shakey was trembling about, robot arms were beginning to transform manufacturing. The first among them was Unimate, which welded auto bodies. Today, its descendants rule car factories, performing tedious, dangerous tasks with far more precision and speed than any human could muster. Even though they’re stuck in place, they still very much fit our definition of a robot—they’re intelligent machines that sense and manipulate their environment.

Robots, though, remained largely confined to factories and labs, where they either rolled about or were stuck in place lifting objects. Then, in the mid-1980s, Honda started up a humanoid robotics program. It developed P3, which could walk pretty darn good and also wave and shake hands, much to the delight of a roomful of suits. The work would culminate in Asimo, the famed biped, which once tried to take out President Obama with a well-kicked soccer ball. (OK, perhaps it was more innocent than that.)

Today, advanced robots are popping up everywhere. For that you can thank three technologies in particular: sensors, actuators, and AI.

So, sensors. Machines that roll on sidewalks to deliver falafel can only navigate our world thanks in large part to the 2004 Darpa Grand Challenge, in which teams of roboticists cobbled together self-driving cars to race through the desert. Their secret? Lidar, which shoots out lasers to build a 3-D map of the world. The ensuing private-sector race to develop self-driving cars has dramatically driven down the price of lidar, to the point that engineers can create perceptive robots on the (relative) cheap.
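
The geometry behind that lidar map is simple: each laser beam returns a range at a known angle, and converting those polar measurements to Cartesian coordinates yields a cloud of obstacle points around the sensor. A minimal planar sketch, with a toy scan standing in for real sensor data:

```python
import numpy as np

def scan_to_points(ranges_m, angles_rad):
    """Convert one planar lidar sweep (beam range + beam angle) into x, y
    obstacle points, the raw material for a map of the robot's surroundings."""
    return np.column_stack((ranges_m * np.cos(angles_rad),
                            ranges_m * np.sin(angles_rad)))

# A toy sweep at 1-degree resolution; real sensors return far denser scans.
angles = np.deg2rad(np.arange(360))
ranges = np.full(360, 5.0)  # every beam hits something 5 m away: a round room
points = scan_to_points(ranges, angles)
print(points.shape)  # (360, 2)
```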

Lidar is often combined with something called machine vision—2-D or 3-D cameras that allow the robot to build an even better picture of its world. You know how Facebook automatically recognizes your mug and tags you in pictures? Same principle with robots. Fancy algorithms allow them to pick out certain landmarks or objects.

Sensors are what keep robots from smashing into things. They’re why a robot mule of sorts can keep an eye on you, following you and schlepping your stuff around; machine vision also allows robots to scan cherry trees to determine where best to shake them, helping fill massive labor gaps in agriculture.

New technologies promise to let robots sense the world in ways that are far beyond humans’ capabilities. We’re talking about seeing around corners: At MIT, researchers have developed a system that watches the floor at the corner of, say, a hallway, and picks out subtle movements being reflected from the other side that the piddling human eye can’t see. Such technology could one day ensure that robots don’t crash into humans in labyrinthine buildings, and even allow self-driving cars to see occluded scenes.

Within each of these robots is the next secret ingredient: the actuator, which is a fancy word for the combo electric motor and gearbox that you’ll find in a robot’s joint. It’s this actuator that determines how strong a robot is and how smoothly or not smoothly it moves. Without actuators, robots would crumple like rag dolls. Even relatively simple robots like Roombas owe their existence to actuators. Self-driving cars, too, are loaded with the things.
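
The control loop wrapped around an actuator is often nothing more exotic than proportional-derivative (PD) feedback: command a torque that pulls the joint toward its target angle while damping its velocity. A toy sketch, with illustrative gains rather than values tuned for any real motor:

```python
def pd_torque(target_rad, angle_rad, velocity_rad_s, kp=40.0, kd=2.0):
    """Proportional-derivative control, the bread-and-butter loop around an
    actuator: pull the joint toward the target, damp its velocity.
    The gains here are illustrative, not tuned for any real motor."""
    return kp * (target_rad - angle_rad) - kd * velocity_rad_s

# One control tick: the joint sits at 0.2 rad, moving at 0.5 rad/s,
# and we want it at 1.0 rad.
torque = pd_torque(1.0, 0.2, 0.5)
print(torque)  # 31.0, the torque commanded to the motor this tick
```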

Actuators are great for powering massive robot arms on a car assembly line, but a newish field, known as soft robotics, is devoted to creating actuators that operate on a whole new level. Unlike mule robots, soft robots are generally squishy, and use air or oil to get themselves moving. So for instance, one particular kind of robot muscle uses electrodes to squeeze a pouch of oil, expanding and contracting to tug on weights. Unlike with bulky traditional actuators, you could stack a bunch of these to magnify the strength: A robot named Kengoro, for instance, moves with 116 actuators that tug on cables, allowing the machine to do unsettlingly human maneuvers like pushups. It’s a far more natural-looking form of movement than what you’d get with traditional electric motors housed in the joints.

And then there’s Boston Dynamics, which created the Atlas humanoid robot for the Darpa Robotics Challenge in 2013. At first, university robotics research teams struggled to get the machine to tackle the basic tasks of the original 2013 challenge and the finals round in 2015, like turning valves and opening doors. But Boston Dynamics has since turned Atlas into a marvel that can do backflips, far outpacing other bipeds that still have a hard time walking. (Unlike the Terminator, though, it does not pack heat.) Boston Dynamics has also begun leasing a quadruped robot called Spot, which can recover in unsettling fashion when humans kick or tug on it. That kind of stability will be key if we want to build a world where we don’t spend all our time helping robots out of jams. And it’s all thanks to the humble actuator.

At the same time that robots like Atlas and Spot are getting more physically robust, they’re getting smarter, thanks to AI. Robotics seems to be reaching an inflection point, where processing power and artificial intelligence are combining to truly ensmarten the machines. And for the machines, just as in humans, the senses and intelligence are inseparable—if you pick up a fake apple and don’t realize it’s plastic before shoving it in your mouth, you’re not very smart.

This is a fascinating frontier in robotics (replicating the sense of touch, not eating fake apples). A company called SynTouch, for instance, has developed robotic fingertips that can detect a range of sensations, from temperature to coarseness. Another robot fingertip from Columbia University replicates touch with light, so in a sense it sees touch: It’s embedded with 32 photodiodes and 30 LEDs, overlaid with a skin of silicone. When that skin is deformed, the photodiodes detect how light from the LEDs changes to pinpoint where exactly you touched the fingertip, and how hard.
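
One plausible way to turn those 32 photodiode readings into a contact location is to compare each new reading against patterns recorded while pressing the fingertip at known spots during calibration. The sketch below uses random stand-in data and a simple nearest-neighbor lookup; the actual sensor’s mapping from light to touch is more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration run: 200 presses at known (x, y) spots on the
# fingertip, each recorded as a 32-value photodiode intensity pattern.
calib_locations = rng.uniform(-1.0, 1.0, size=(200, 2))
calib_patterns = rng.random((200, 32))  # stand-in for measured light patterns

def localize_touch(reading):
    """Match a new 32-photodiode reading against the calibration patterns and
    return the known contact location of the closest one."""
    nearest = int(np.argmin(np.linalg.norm(calib_patterns - reading, axis=1)))
    return calib_locations[nearest]

print(localize_touch(calib_patterns[17]))  # recovers calib_locations[17]
```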

Far from the hulking dullards that lift car doors on automotive assembly lines, the robots of tomorrow will be very sensitive indeed.

Increasingly sophisticated machines may populate our world, but for robots to be really useful, they’ll have to become more self-sufficient. After all, it would be impossible to program a home robot with the instructions for gripping each and every object it ever might encounter. You want it to learn on its own, and that is where advances in artificial intelligence come in.

Take Brett. In a UC Berkeley lab, the humanoid robot has taught itself to conquer one of those children’s puzzles where you cram pegs into different shaped holes. It did so by trial and error through a process called reinforcement learning. No one told it how to get a square peg into a square hole, just that it needed to. So by making random movements and getting a digital reward (basically, yes, do that kind of thing again) each time it got closer to success, Brett learned something new on its own. The process is super slow, sure, but with time roboticists will hone the machines’ ability to teach themselves novel skills in novel environments, which is pivotal if we don’t want to get stuck babysitting them.
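
Stripped to its core, that trial-and-error loop is: make a random movement, check the reward, and keep whatever got you closer. The toy below does exactly that in a 2-D world where the reward is just negative distance to the hole; it is a caricature of reinforcement learning, not Brett’s actual algorithm, but the feedback structure is the same.

```python
import numpy as np

rng = np.random.default_rng(0)
hole = np.array([0.6, -0.3])  # where the square hole sits in a toy 2-D world

def reward(peg_position):
    """The 'digital reward': larger (less negative) as the peg nears the hole."""
    return -np.linalg.norm(peg_position - hole)

# Trial and error: make a random movement, keep it if the reward improved.
# Real reinforcement learning is far richer, but the loop has this shape.
peg = np.zeros(2)
for _ in range(2000):
    candidate = peg + rng.normal(scale=0.05, size=2)  # random movement
    if reward(candidate) > reward(peg):               # closer to success?
        peg = candidate                               # ...then do that again
print(peg)  # ends up near (0.6, -0.3)
```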

Another tack here is to have a digital version of a robot train first in simulation, then port what it has learned to the physical robot in a lab. Over at Google, researchers used motion-capture videos of dogs to program a simulated dog, then used reinforcement learning to get a simulated four-legged robot to teach itself to make the same movements. That is, even though both have four legs, the robot’s body is mechanically distinct from a dog’s, so they move in distinct ways. But after many random movements, the simulated robot got enough rewards to match the simulated dog. Then the researchers transferred that knowledge to the real robot in the lab, and sure enough, the thing could walk—in fact, it walked even faster than the robot manufacturer’s default gait, though in fairness it was less stable.
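
The glue in that sim-to-real pipeline is the reward that tells the simulated robot how well it is imitating the reference motion. A common form in motion-imitation work scores each timestep by how closely the robot’s joints track the motion-captured pose; the function below is a hedged sketch of that idea, not the exact reward Google’s researchers used.

```python
import numpy as np

def imitation_reward(robot_joints, reference_joints):
    """Score one simulation step: reward is highest when the simulated robot's
    joint angles match the motion-captured dog's pose at the same instant."""
    error = np.sum((np.asarray(robot_joints) - np.asarray(reference_joints)) ** 2)
    return float(np.exp(-2.0 * error))  # 1.0 for a perfect match, ~0 when far off

print(imitation_reward([0.1, 0.2, 0.3], [0.1, 0.2, 0.3]))  # 1.0
print(imitation_reward([0.9, 0.2, 0.3], [0.1, 0.2, 0.3]))  # much smaller
```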

They may be getting smarter day by day, but for the near future we are going to have to babysit the robots. As advanced as they’ve become, they still struggle to navigate our world. They plunge into fountains, for instance. So the solution, at least for the short term, is to set up call centers where robots can phone humans to help them out in a pinch. For example, Tug the hospital robot can call for help if it’s roaming the halls at night and there’s no human around to move a cart blocking its path. The operator would then teleoperate the robot around the obstruction.

Speaking of hospital robots. When the coronavirus crisis took hold in early 2020, a group of roboticists saw an opportunity: Robots are the perfect coworkers in a pandemic. Engineers must use the crisis, they argued in an editorial, to supercharge the development of medical robots, which never get sick and can do the dull, dirty, and dangerous work that puts human medical workers in harm’s way. Robot helpers could take patients’ temperatures and deliver drugs, for instance. This would free up human doctors and nurses to do what they do best: problem-solving and being empathetic with patients, skills that robots may never be able to replicate.

The rapidly developing relationship between humans and robots is so complex that it has spawned its own field, known as human-robot interaction. The overarching challenge is this: It’s easy enough to adapt robots to get along with humans—make them soft and give them a sense of touch—but it’s another issue entirely to train humans to get along with the machines. With Tug the hospital robot, for example, doctors and nurses learn to treat it like a grandparent—get the hell out of its way and help it get unstuck if you have to. We also have to manage our expectations: Robots like Atlas may seem advanced, but they’re far from the autonomous wonders you might think.

What humanity has done is essentially invented a new species, and now we’re maybe having a little buyer’s remorse. Namely, what if the robots steal all our jobs? Not even white-collar workers are safe from hyper-intelligent AI, after all.

A lot of smart people are thinking about the singularity, when the machines grow advanced enough to make humanity obsolete. That will result in a massive societal realignment and species-wide existential crisis. What will we do if we no longer have to work? How does income inequality look anything other than exponentially more dire as industries replace people with machines?

These seem like far-out problems, but now is the time to start pondering them. Which you might consider an upside to the killer-robot narrative that Hollywood has fed us all these years: The machines may be limited at the moment, but we as a society need to think seriously about how much power we want to cede. Take San Francisco, for instance, which is exploring the idea of a robot tax, which would force companies to pay up when they displace human workers.

I can’t sit here and promise you that the robots won’t one day turn us all into batteries, but the more realistic scenario is that, unlike in the world of R.U.R., humans and robots are poised to live in harmony—because it’s already happening. This is the idea of multiplicity, that you’re more likely to work alongside a robot than be replaced by one. If your car has adaptive cruise control, you’re already doing this, letting the robot handle the boring highway work while you take over for the complexity of city driving. The fact that the US economy ground to a standstill during the coronavirus pandemic made it abundantly clear that robots are nowhere near ready to replace humans en masse.

The machines promise to change virtually every aspect of human life, from health care to transportation to work. Should they help us drive? Absolutely. (They will, though, have to make the decision to sometimes kill, but the benefits of precision driving far outweigh the risks.) Should they replace nurses and cops? Maybe not—certain jobs may always require a human touch.

One thing is abundantly clear: The machines have arrived. Now we have to figure out how to handle the responsibility of having invented a whole new species.

If You Want a Robot to Learn Better, Be a Jerk to It
A good way to make a robot learn is to do the work in simulation, so the machine doesn’t accidentally hurt itself. Even better, you can give it tough love by trying to knock objects out of its hand.

Spot the Robot Dog Trots Into the Big, Bad World
Boston Dynamics' creation is starting to sniff out its role in the workforce: as a helpful canine that still sometimes needs you to hold its paw.

Finally, a Robot That Moves Kind of Like a Tongue
Octopus arms and elephant trunks and human tongues move in a fascinating way, which has now inspired a fascinating new kind of robot.

Robots Are Fueling the Quiet Ascendance of the Electric Motor
For something born over a century ago, the electric motor really hasn’t fully extended its wings. The problem? Fossil fuels are just too easy, and for the time being, cheap. But now, it’s actually robots, with their actuators, that are fueling the secret ascendance of the electric motor.

This Robot Fish Powers Itself With Fake Blood
A robot lionfish uses a rudimentary vasculature and “blood” to both energize itself and hydraulically power its fins.

Inside the Amazon Warehouse Where Humans and Machines Become One
In an Amazon sorting center, a swarm of robots works alongside humans. Here’s what that says about Amazon—and the future of work.

This guide was last updated on April 13, 2020.

What will robots be like in the future?

Yanfeng Lu, Weijie Zhao, What will robots be like in the future?, National Science Review, Volume 6, Issue 5, September 2019, Pages 1059–1061, https://doi.org/10.1093/nsr/nwz069

Robots are changing our lives: sweeping robots patrol our living rooms; interactive robots accompany our children; industrial robots assemble vehicles; rescue robots search for and save lives in catastrophes; medical robots perform surgeries in hospitals. To better understand the challenges and impact of robotics, National Science Review (NSR) interviewed Professor Toshio Fukuda, one of the world’s leading robotics experts, who has developed a number of bionic robots and micro/nano-robots.

Fukuda has been a full-time professor at Beijing Institute of Technology (BIT) since 2013. Before that, he served as a professor at Nagoya University in Japan for more than 20 years. Fukuda is now a foreign member of the Chinese Academy of Sciences and has trained many robotics researchers for China. He has been elected as the 2020 president of the Institute of Electrical and Electronics Engineers (IEEE), which means that he will play a central leadership role in the world's largest technical professional organization in the coming years.

NSR: What do you think is the definition of a robot? Why do we consider unmanned aerial vehicles (UAVs) as robots but do not consider common airplanes as robots?

Fukuda: If a flying vehicle is autonomous to some degree, we can consider it a flying robot. By my definition, a robot is a machine that has sensors, actuators, and onboard or external central processing units (CPUs).

The extent of automation varies. Industrial robots, which the International Organization for Standardization (ISO) defines as programmable robots with three or more degrees of freedom, can only do what they are programmed to do and cannot make decisions by themselves. Many other robots, such as a number of medical robots, are also strictly programmed. We do not allow medical robots to decide by themselves what to do in our bodies; they must follow the instructions of doctors.

Intelligent robots, however, are different. They can sense the environment and use their own CPUs to make decisions according to environmental changes. There are two major types of intelligent robots: teleoperated robots and autonomous robots. Teleoperated robots interact closely with humans and make decisions with human help; the famous cartoon robot ‘Gundam’ is a teleoperated robot. Autonomous robots, on the other hand, do not need humans to make decisions. There are also half-teleoperated, half-autonomous robots. One example is the Mars rover. When it lands on Mars, it takes photos and decides which way to go under the commands of scientists on Earth. On its way, however, it has to sense obstacles such as rocks and decide by itself how to navigate around them. Some medical micro-robots are also intelligent: we can inject them into patients’ bodies, and they then navigate autonomously to the diseased region or other target organs and perform microsurgeries in collaboration with doctors.
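
Fukuda’s definition, sensors plus actuators plus a CPU, maps directly onto the classic sense-decide-act loop that almost every robot runs. A minimal sketch, with stand-in functions where real sensor drivers and motor commands would be:

```python
import time

def read_sensors():
    """Stand-in for real sensor drivers (cameras, lidar, joint encoders...)."""
    return {"obstacle_distance_m": 2.5}

def decide(state):
    """The CPU's job in Fukuda's definition: turn sensed state into an action."""
    return "stop" if state["obstacle_distance_m"] < 0.5 else "drive_forward"

def actuate(action):
    """Stand-in for commands to motors or other actuators."""
    print(action)

# The sense-decide-act cycle that separates a robot from a mere machine,
# run here for ten ticks at 10 Hz.
for _ in range(10):
    actuate(decide(read_sensors()))
    time.sleep(0.1)
```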

Professor Fukuda at the 2018 World Robot Conference, Beijing (Courtesy of Toshio Fukuda).

NSR: What are the current grand challenges of robotics?

Fukuda: The grand challenges of robotics, in my mind, go with the megatrends of human society. I am not talking about the challenges of the next 5 or 10 years, but the challenges of the next generations, 50 or 100 years ahead. Robotics should help humans cope with the vital problems we are facing.

The first megatrend is the aging society. China, Japan and many other countries are facing aging societies with inverted demographic pyramids. There will be more and more senior people in our society, and how can we cope with that with robotics? Medical robots can help to analyse, diagnose and treat diseases. Better industrial robots can make factory work easier, so that senior people can work to a higher age. Escort robots can help seniors improve their quality of life. There are many things robotics can do and should do.

The second megatrend is global warming. It is very likely that the Earth's climate will change significantly in the coming decades. Much land may become desert and untillable, and some areas will have difficulty obtaining water and food. Robotics can make agriculture more autonomous and effective, and water supplementation and recycling systems will help to reduce agricultural water consumption. This kind of new agricultural system has already been tested in several countries, including China.

The third vital problem we are facing is energy. What should we do if fossil fuels are exhausted? Robotics can help us harvest energy from everywhere. We can place a small turbine in the toilet to harvest energy from the flushing water, design a device to harvest energy from a door opening, or use wearable devices to harvest energy from our own body movements. All kinds of motion can be utilized.

Another megatrend is artificial intelligence (AI). It is said that there will be a singularity in 2045, when intelligent robots will become smarter than humans.

I talked about four megatrends here, but there are actually more. We should prepare for these problems with the help of robotics in order to avoid possible catastrophes.

NSR: When robots become smarter than humans, will there be ethical problems?

Fukuda: That's right. My IEEE friends and I are working on the ethical design of robots. We should ensure through technology that robots, such as self-driving vehicles, will not harm humans. Many technology companies, such as Baidu, which is cooperating with my group at BIT, are working on these issues.

NSR: Would you please give some examples of medical micro/nano-robots?

Fukuda: BIT professor Shuxiang Guo developed an assistant system for minimally invasive vascular surgery when he was my student in Japan. With this system, doctors can make an incision somewhere on the patient's body and insert a one-millimeter-diameter catheter and a guide wire into the blood vessel. With the help of multiple sensors, the catheter and guide wire can be navigated along the blood vessel towards the distant diseased organ, such as the brain. Then doctors can perform microsurgeries such as clearing blocked blood vessels or placing vascular stents. This system has already been used in many hospitals inside and outside China, and researchers are still working on further advancements.

Professor Fukuda with students in his BIT lab (Courtesy of Toshio Fukuda).

Another example is our artificial micro-vasculature. My group has successfully assembled artificial micro-vessels as long as 200 micrometers in the laboratory. I hope that in the future, when human tissues go wrong, we can use this kind of cell-level micro-technology to repair the broken tissues in situ. But of course, we should go step by step and test the technologies in animals before using them in hospitals.

NSR: Why is it difficult to make micro-robots?

Fukuda: In the macro world, gravity is the leading force. But as sizes become smaller and smaller, gravity becomes unimportant and the impact of surface forces becomes significant. So micro-engineering is different from macro-engineering. It is not easy to make micro-robots, and it is especially difficult to make durable micro-robots.

NSR: How is micro/nano-robotics developing in China?

Fukuda: I brought this research field into China. Now, there are four or five Chinese groups doing very well, most of which are led by my Chinese students. I am very glad to see my students spreading across China, in Shenyang, Wuhan, Suzhou, Shanghai, Shenzhen and Hong Kong.

NSR: Why did you join a Chinese university in your 60s?

Fukuda: I have many Chinese students and they are very energetic and enthusiastic. In Japan, society is more mature, and the best students always join big companies and live rich and stable lives. But Chinese students are different. Many of them are very energetic and ambitious. They work very hard and would like to create their own companies with the most advanced technologies. That is why I like China.

NSR: What suggestions would you give to the young scientists?

Fukuda: My job as a professor is to give dreams to the young generation. So my suggestion is to have a good dream and keep going.

Everybody has his or her own dream. I know someone who dreams of developing an air-exchange system, and he is now trying hard to find materials that can absorb moisture from the air. I once dreamed about improving people's sleep. So at Nagoya University, we analysed human sleep rhythms and developed a biocompatible and biodegradable micro-capsule, made of liposomes containing proteins that can control sleep conditions. We hope that it will become a usable medicine one day.

So it is important to have a dream and work hard to realize it with science and technology.

NSR: Japan's robotic industry is one of the world’s best. How could China catch up?

Fukuda: I discussed this issue with one of my best Chinese friends about 15 years ago. At that time, I said that there should be four steps. Step 1: observe and study the state-of-the-art foreign robots carefully. Step 2: digest the foreign technology and make a robot as good as the existing best ones. Step 3: improve it. Step 4: create a completely new and better robot.

Now, China's robotic industry has developed a lot and is ready to break into step 4. A number of Chinese robot companies, such as SIASUN Robot and Automation Corporation, are making very nice robots. SIASUN’s automatic guided vehicles (AGVs) are the best in China.

However, something may still be missing. One limitation is that China does not have a strong component industry, so it needs to import many basic components from Japan and other countries. The Chinese government announced the China Manufacturing 2025 plan to solve this problem, but it may still take years to absorb and catch up with foreign technologies.

NSR: You helped to organize the first Beijing World Robot Conference (WRC) in 2015. How would you evaluate this conference?

Fukuda: There are several similar conferences in other countries, and China also wanted to organize its own. I was invited to be the Chair of the Advisory Committee of this new conference and helped to contact robotics specialists, many of whom are my friends, such as MIT professor Rodney Brooks, one of the founders of iRobot, the company famous for its robotic vacuum cleaner Roomba. In the end, we had more than 50 senior scientists at the first WRC.

WRC has been very successful in the past years. It makes it possible for Chinese people to learn, within one week, what is going on in the world. There are forums, exhibitions and contests at the conference, and both scientists and the public can enjoy them.

NSR: You have been selected as the 2020 president of IEEE. What are your major missions in this position?

Fukuda: IEEE is a non-profit organization with 430,000 members and 46 technical societies and councils, covering diverse fields including computer science, robotics, electronics, medical engineering and more. Our aim is to advance technology for humanity. As president, I wish to make all IEEE members feel connected, like a family.

In particular, one of my promises during the election was to build an ‘IEEE University’. It will not be a real university but a virtual one, consisting of a massive open online course (MOOC) system. Many of our societies already have their own online courses, and I want to assemble them into a more efficient and user-friendly system. All of our courses will be open and free to anybody anywhere in the world. You do not need to pay anything unless you want a course certificate for job or university applications. We will start to prepare for it in 2019 and build the system in 2020. Once the framework is built, this online university will naturally grow day by day.

NSR: What are your personal plans in the coming five years?

Fukuda: I have two major aims in the coming years. First is to keep my research group at BIT as the best micro/nano-robotics group in the world. And the second is to contribute more to IEEE.

Both of these aims require a lot of communication with people. I need to communicate with my group members in Beijing, and I also need to communicate with the IEEE staff in the US. I should listen carefully to their voices and make decisions. Fortunately, communication technologies are highly developed now, so I will be able to handle these jobs.

The authors thank Hong Qiao (Professor at Institute of Automation, Chinese Academy of Sciences) and Bingzi Zhang (managing editor of NSR) for their kind help.

Yanfeng Lu is an associate professor at the State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences; Weijie Zhao is an NSR news editor.

By Roger Highfield on 8 June 2017

Robots in 2050 and beyond

Extract of a speech, ‘The World in 2050 and Beyond’, by Lord Rees, Astronomer Royal and member of the Science Museum Group Foundation, at the inauguration of the Hans Rausing Lecture Theatre, in which, among other topics, he looks at the rise of robots and AI.

The smartphone, the web and their ancillaries would have seemed magic even 20 years ago. So, looking several decades ahead, we must keep our minds open, or at least ajar, to innovations that might now seem science fiction.

There have been exciting advances in what’s called generalised machine learning. The London-based company DeepMind last year achieved a remarkable feat: its computer beat the world champion in the game of Go. And Carnegie Mellon University in Pittsburgh has developed a machine that can bluff and calculate as well as the best human players of poker.

Of course, it’s 20 years since IBM’s ‘Deep Blue’ beat Garry Kasparov, the world chess champion. But Deep Blue was programmed in detail by expert players. In contrast, the machines that play Go and Poker gained expertise by absorbing huge numbers of games and playing against themselves. Their designers don’t themselves know how the machines make seemingly insightful decisions.

The speed of computers allows them to succeed by ‘brute force’ methods. They learn to identify dogs, cats and human faces by ‘crunching’ through millions of images, not the way babies learn. They learn to translate by reading millions of pages of, for example, multilingual European Union documents (they never get bored!).

But advances are patchy. Robots are still clumsier than a child in moving pieces on a real chessboard. They can’t tie your shoelaces. But sensor technology, speech recognition, information searches and so forth are advancing apace.

They won’t just take over manual work (indeed, plumbing and gardening will be among the hardest jobs to automate) but also routine legal work (conveyancing and suchlike), medical diagnostics and even surgery.

Can robots cope with emergencies? For instance, if an obstruction suddenly appears on a crowded highway, can Google’s driverless car discriminate whether it’s a paper bag, a dog or a child? The likely answer is that its judgement will never be perfect, but it will be better than the average driver’s. Machine errors will occur, though less often than human errors; when accidents do occur, they will create a legal minefield. Who should be held responsible – the ‘driver’, the owner, or the designer?

The big social and economic question is this: Will this ‘second machine age’ be like earlier disruptive technologies, the car for instance, and create as many jobs as it destroys? Or is it really different this time?

The money ‘earned’ by robots could generate huge wealth for an elite. But preserving a healthy society will require massive redistribution to ensure that everyone has at least a ‘living wage’, and to create and upgrade public-service jobs where the human element is crucial but is now undervalued and where demand is huge: carers for young and old especially, but also custodians, gardeners in public parks and so on.

But let’s look further ahead.

Lord Rees, Astronomer Royal giving the inaugural lecture at the Hans Rausing Lecture Theatre

If robots could observe and interpret their environment as adeptly as we do, they would truly be perceived as intelligent beings, to which (or to whom) we can relate. Such machines pervade popular culture, in movies like Her, Transcendence and Ex Machina.

Do we have obligations towards them? We worry if our fellow-humans, and even animals, can’t fulfil their natural potential. Should we feel guilty if our robots are under-employed or bored?

What if a machine developed a mind of its own? Would it stay docile, or ‘go rogue’? If it could infiltrate the internet, and the internet of things, it could manipulate the rest of the world. It might have goals utterly orthogonal to human wishes, or even treat humans as an encumbrance.

Some AI pundits take this seriously, and think the field already needs guidelines, just as biotech does. But others regard these concerns as premature and worry less about artificial intelligence than about real stupidity.

Be that as it may, it’s likely that society will be transformed by autonomous robots, even though the jury’s out on whether they’ll be ‘idiot savants’ or display superhuman capabilities.

There’s disagreement about the route towards human-level intelligence. Some think we should emulate nature, and reverse-engineer the human brain. Others say that’s as misguided as designing a flying machine by copying how birds flap their wings. And philosophers debate whether ‘consciousness’ is special to the wet, organic brains of humans, apes and dogs, so that robots, even if their intellects seem superhuman, will still lack self-awareness or an inner life.

And now a digression into my special interest, and Hans Rausing’s: the cosmos and space. This is where robots will surely be transformative.

Lord Rees explains the role of robots in the future of our planet, and those beyond

During this century, the whole solar system will be explored by flotillas of miniaturized probes, far more advanced than the robot that ESA’s Rosetta landed on a comet, or NASA’s New Horizons probe that transmitted amazing pictures from Pluto, 10,000 times further away than the moon.

These two instruments took ten years on their journeys, and the amazing Cassini probe of Saturn is even more of an antique – it was launched 20 years ago. Think how much better we could do today.

And better, too, than the ‘Curiosity’ rover on Mars.

Later this century giant robotic fabricators may assemble vast lightweight structures in space, gossamer-thin radio reflectors or solar energy collectors, for instance, using raw materials mined from the Moon or asteroids. But what about human spaceflight? Robotic and AI advances are eroding the practical case.

Nonetheless, I hope people will follow the robots, though it will be as risk-seeking adventurers rather than for practical goals. The most promising developments are spearheaded by private companies. Elon Musk’s SpaceX has launched unmanned payloads and docked with the Space Station, and has successfully recovered and reused the launch rocket’s first stage, presaging real cost savings. He hopes soon to offer orbital flights to paying customers.

Wealthy adventurers are already signing up for a week-long trip round the far side of the Moon, voyaging further from Earth than anyone has been before. I’m told they’ve sold a ticket for the second flight but not for the first flight.

We should surely acclaim these private-enterprise efforts in space: they can tolerate higher risks than a western government could impose on publicly funded civilian astronauts, and thereby cut costs compared to NASA or ESA. But they should be promoted as adventures or extreme sports; the phrase ‘space tourism’ should be avoided, since it lulls people into unrealistic confidence.

By 2100 courageous pioneers in the mould of, say, Felix Baumgartner, who broke the sound barrier in free fall from a high-altitude balloon, or Sir Ranulph Fiennes, may have established ‘bases’ independent of the Earth, on Mars, or maybe on asteroids. Musk himself (aged 45) says he wants to die on Mars – but not on impact.

But don’t ever expect mass emigration from Earth. Nowhere in our Solar System offers an environment even as clement as the Antarctic or the top of Everest. It’s a dangerous delusion to think that space offers an escape from Earth’s problems. There’s no ‘Planet B’.

Indeed, space is an inherently hostile environment for humans. For that reason, even though we may wish to regulate genetic and cyborg technology on Earth, we should surely wish the space pioneers good luck in using all such techniques to adapt to alien conditions. They’ll be free from terrestrial regulation and have maximal incentive to do so. Indeed, these spacefarers may spearhead the post-human era, evolving within a few centuries into a new species.

The stupendous timespans of the evolutionary past are now part of common culture, outside ‘fundamentalist’ circles, at any rate, but most people still tend to regard humans as the culmination of the evolutionary tree. No astronomer can believe that. Our Sun formed 4.5 billion years ago, but it’s got 6 billion more before the fuel runs out, and the expanding universe will continue, perhaps forever. To quote Woody Allen, “eternity is very long, especially towards the end.”

So, we may not even be at the half-way stage of evolution.

It may take just decades to develop human-level AI, or it may take centuries. Be that as it may, it’s but an instant compared to the cosmic future stretching ahead.

Moreover, the Earth’s environment may suit us ‘organics’, but interplanetary and interstellar space may be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological ‘brains’ may develop powers that humans can’t even imagine.


Robotics Essay | Essay on Robotics for Students and Children in English

February 14, 2024 by sastry

Robotics Essay: What do you think of when you think about ‘robots’? If you think they are only the stuff of space movies and science fiction novels, then think again. Robotics is one of the fastest-growing areas of technology in the world. Robots perform many functions, ranging from space exploration to entertainment. Robotics technology is advancing at a fast rate, providing us with new technology that can assist with home chores, automobile assembly and many other tasks. Robotic technology has changed the world around us and is continuing to shape the way we do things. The transformation of robotic technology from past to present surrounds almost everyone in today’s society, and it affects both our work and leisure activities.

Long and Short Essays on Robotics for Kids and Students in English

Given below are two essays in English for students and children about the topic of ‘Robotics’, in both long and short form. The first is a long essay on Robotics of 400-500 words, suitable for students of classes 7, 8, 9 and 10, and also for competitive exam aspirants. The second is a short essay on Robotics of 150-200 words, suitable for students and children of class 6 and below.

Long Essay on Robotics 500 Words in English

Below we have given a long essay on Robotics of 500 words, suitable for students of classes 7 to 10 and for competitive exam aspirants.

Robotics is the branch of mechanical engineering, electrical engineering and computer science that deals with the design, construction, operation and application of robots, as well as the computer systems for their control and processing. These technologies deal with automated machines that can take the place of a human in various kinds of work, activities, environments and processes.

The definition of the word robot means different things to different people. According to the Robot Institute of America (1979), a robot is a re-programmable, multi-functional manipulator designed to move material, parts, tools or specialised devices through various programmed motions for the performance of a variety of tasks. The use of robots continues to change numerous aspects of our everyday life, such as health care, education and job satisfaction. Robots are going to be a major part of the world economy; they make our daily lives easier and help produce more goods.

Robotic technology is becoming one of the leading technologies in the world. Robots can perform many functions and are used in many different ways in today’s society. The use of robotic technology has made an immediate impact on the world in several ways. As technological advances continue, researchers design and build new robots that serve various practical purposes, whether domestic, commercial or military. Many robots even do jobs that are hazardous to people, such as defusing bombs, mining and exploring shipwrecks.

There are numerous uses of robots which not only give better results but also save money and time. Robots can provide high-quality components and finished products, and do so reliably and repeatedly, even in hazardous or unpleasant environments. Various industry segments are making use of robotics to improve their production capabilities.

Much of the research in robotics focuses not on specific industrial tasks, but on investigations into new types of robots, alternative ways to think about or design robots, and new ways to manufacture them.

Recently, the Apollo Hospitals group installed the world’s most advanced CyberKnife robotic radiosurgery system at its cancer speciality centre in Chennai, India. Although it came at a substantial price for the hospital, Apollo decided to go ahead with the project due to the new-found enthusiasm for robotics in India.

From the Chandrayaan I project for sending robots to the Moon, to biomedical engineering and the auto industry, India has been using robotics on a wide scale. In an increasingly technology-driven country, robotics has fast assumed significance not only for industrial applications, but also in various day-to-day human activities.

Presently, robotics is at the pinnacle of technical development. Though robotics in India is at a nascent stage, industrial automation has opened up huge potential for it. Innovation coupled with consolidated research and development has catapulted India’s scientific position in robotic technology.

The country is soon to become a major hub for the production of robots. The global market for robots is projected to rise by an average of about 4%, while in India, the industry is expected to grow at a rate 2.5 times that of the global average.

In the medical field, the importance of robotics has been growing. Robotics is increasingly being used in a variety of clinical and surgical settings to increase surgical accuracy, decrease operating time and often create better healthcare outcomes than current standard approaches. These medical robots are used to train surgeons, assist in difficult and precise surgical procedures, and help patients in recovery. The automobile industry is equally dominated by robots.

Large numbers of industrial robots function on fully automated production lines, especially those for high-end luxury and sports cars. The use of industrial robots has helped to increase productivity, efficiency and quality of distribution. Another major area where the use of robots is extensive is packaging. Packaging done by robots is of very high quality, as there is almost no chance of human error. Robots are also used in the electronics field, mainly in mass production requiring full accuracy and reliability. With these varied uses of robots in mind, Bill Gates has said:

“Robots will be the Next World-Changing Technology”

Robotics has spread to such an extent that many movies and serials are based on its theme. Some popular movies include Star Wars, Robocop, Ra.One and Transformers. With such acclaimed popularity, India too has come up with the Robotics Society of India (RSI). It is an academic society, founded on 10th July 2011, which aims at promoting Indian robotics and automation activities. The society hopes to serve as a bridge between researchers in institutes, government research centres and industry.

Short Essay on Robotics 200 Words in English

Below we have given a short essay on Robotics for classes 1, 2, 3, 4, 5 and 6. This short essay on the topic is suitable for students of class 6 and below.

India has also come up with specialised programmes in the field of robotics at the IITs and other universities. Robotics has moved beyond the traditional areas and entered newer domains such as education, rehabilitation and entertainment. It has helped handicapped people by replacing their damaged limbs with artificial parts that can duplicate natural movements.

Just as a coin has two sides, robotics too has a flip side. The biggest barrier to the development of robots has been the high cost of hardware such as sensors and motors. Customisation and updating are added problems.

With new advancements taking place each passing day, new product introduction is a problem for existing users. Robots cut down on labour, thereby reducing employment opportunities for many. In many developed countries, scientists are building robotic military forces that could prove dangerous to others. As the power and capacity of computers continue to expand, a revolution is being created in the field of robotics, coupling imagination with technology. It would not be wrong to say that in the near future there will be a time when robots become smarter than the human race.

Robotics Essay Word Meanings for Simple Understanding

  • Shipwreck – the destruction or loss of a ship; the remains of a ruined ship
  • Defuse – to deactivate, terminate or make ineffective
  • Substantial – of ample or considerable amount; significant
  • Pinnacle – the highest or culminating point, as of success, power, etc.
  • Nascent – developing, beginning, budding
  • Consolidated – united, combined
  • Catapulted – moved quickly, suddenly or forcibly
  • Reliability – dependability
  • Domain – field, area, sphere
  • Flip side – opposite side, reverse side
  • Customisation – modification, alteration

Autonomous Controller Robotics: The Future of Robots Essay (Article)

How Autonomous Control Works

Merits of autonomy, works cited, audience analysis.

Advancements in robotic technology have made a positive contribution to many industries, including the mining industry. Intelligent robotic systems are fast becoming of special interest due to their successful application in many areas where safety is critical.

Even so, robots still continue to rely on humans to control them and, in some instances, to help fulfill their missions. At present, the most prevalent means of controlling intelligent robots is regular Remote Control (RC). This method makes use of transmitter and receiver devices incorporated into the robots.

Remote-controlled robots invariably rely on human control, since there has to be an operator giving the robot instructions in real time. However, autonomous control is in some instances the most desirable form of control, since it removes the need for human control. Recent developments have made autonomous control realizable for intelligent robotic systems.

Autonomous by definition means having the power of self-governance, and as such, autonomous controllers possess the ability to govern themselves in the performance of various tasks.

In practice, autonomous control systems make use of Global Positioning System (GPS) devices that are built into the control system of the intelligent robots. An autonomous controller has the capacity to plan the necessary sequence of control actions that should be taken in order to achieve set goals.

The architecture of an autonomous control system is typically made up of three levels. The lowest level is the Execution level, which is the interface between the robot and its environment through the use of sensors and actuators (Antsaklis, Passino and Wang 23). The Execution level contains a number of control algorithms which are used in the operation of the robot.

The middle level is the Coordination level, which interfaces the actions of the top and lower levels in the architecture. The higher levels issue commands to the lower levels, and responsive data flows from bottom to top.
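
This hierarchy lends itself to a compact illustration. The Python sketch below is not from the cited paper; it is a minimal, hypothetical rendering of the idea, with invented class and method names, in which commands flow down the levels and sensor data flows back up.

    # Hypothetical sketch of a three-level autonomous controller.
    # All names and values are invented for illustration.

    class ExecutionLevel:
        """Lowest level: the direct interface to sensors and actuators."""

        def read_sensors(self):
            # A real robot would poll hardware here; we fake a reading.
            return {"position": (0.0, 0.0), "obstacle_distance_m": 4.2}

        def actuate(self, command):
            print(f"executing low-level command: {command}")

    class CoordinationLevel:
        """Middle level: turns plan steps into low-level commands and
        passes responsive sensor data back up."""

        def __init__(self, execution):
            self.execution = execution

        def run_step(self, plan_step):
            state = self.execution.read_sensors()
            if state["obstacle_distance_m"] < 1.0:
                self.execution.actuate("stop")  # reactive safeguard
            else:
                self.execution.actuate(plan_step)
            return state  # responsive data flows bottom-up

    class OrganizationLevel:
        """Top level: plans the sequence of actions for a goal."""

        def __init__(self, coordination):
            self.coordination = coordination

        def achieve(self, goal):
            plan = ["move_forward", "turn_left", "move_forward"]  # stub planner
            for step in plan:
                feedback = self.coordination.run_step(step)
                print(f"goal={goal} step={step} feedback={feedback}")

    controller = OrganizationLevel(CoordinationLevel(ExecutionLevel()))
    controller.achieve("reach_waypoint_A")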

An intelligent autonomous controller is in essence a complex series of systems with differing performance requirements. Even so, all autonomous controllers make use of deterministic feedback control in their operation.

This feedback requires that a particular task be completed within a certain minimum time or using a certain minimum of energy. The control algorithms developed for autonomous controllers are built with uncertainty in mind, because robots in underground mining operate in areas which are highly dynamic and where intelligent decision making is invaluable.

Ridley and Corke state that the speed of a vehicle traveling a complex path under autonomous control should be continuously regulated according to the physical conditions of the path (30). Roberts et al. propose that robots use a scanning laser rangefinder to avoid obstacles that may be in the path (194).
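
As a rough illustration of the kind of continuous speed regulation Ridley and Corke describe, here is a hypothetical Python sketch. The formula, thresholds and parameter names are invented for demonstration and are not taken from their paper: target speed falls as path curvature and grade increase, and a rangefinder reading below a safety threshold forces a stop.

    def regulate_speed(curvature, grade, obstacle_distance_m,
                       v_max=5.0, safe_stop_m=2.0):
        """Return a target speed (m/s) for the current path segment."""
        if obstacle_distance_m < safe_stop_m:
            return 0.0  # the scanning laser rangefinder reports an obstacle
        # Tighter curves and steeper grades both lower the target speed.
        return v_max / (1.0 + 10.0 * abs(curvature) + 5.0 * abs(grade))

    print(regulate_speed(curvature=0.05, grade=0.10, obstacle_distance_m=8.0))  # 2.5
    print(regulate_speed(curvature=0.05, grade=0.10, obstacle_distance_m=1.5))  # 0.0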

For all the talk of autonomy, it must be remembered that even with autonomous controllers, human beings should possess the ultimate authority over the activities being carried out. Antsaklis, Passino and Wang propose that humans should have ultimate authority to override the control of autonomy functions at will (24).

The reason for this is that human beings have better foresight than robots, and can prioritize tasks based on the desired goals. In general, human beings have primacy over robots, since robots are ultimately required to fulfill the goals and objectives set by human beings.
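
The override principle is simple to state in code. The following minimal sketch (names invented, not from the cited paper) wraps an autonomous policy so that a human command, whenever one is present, takes precedence on every control cycle.

    class SupervisedController:
        """Wraps an autonomous policy so a human can override it at will."""

        def __init__(self, autonomous_policy):
            self.autonomous_policy = autonomous_policy
            self.human_command = None  # None means no override is active

        def override(self, command):
            self.human_command = command

        def release(self):
            self.human_command = None

        def next_action(self, state):
            # Human authority is checked first on every control cycle.
            if self.human_command is not None:
                return self.human_command
            return self.autonomous_policy(state)

    ctrl = SupervisedController(lambda state: "continue_route")
    print(ctrl.next_action({}))      # -> continue_route
    ctrl.override("emergency_stop")
    print(ctrl.next_action({}))      # -> emergency_stop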

There are a number of reasons why underground intelligent robots are favored over manned machines. To begin with, conditions in the mines may be very risky, and an accident such as a mine collapse may result in loss of life. An autonomous robot does not require any human involvement, and as such, the safety of human beings is guaranteed.

Another obvious advantage of an autonomous robot is that it cannot suffer from the errors that a human operator can introduce into operations. When using conventional remote control, the human operator may make errors of judgment which result in the loss of the robot, a loss with huge financial repercussions for the mining firm. An autonomous robot will always choose the safest and most efficient means of achieving its goals and will not suffer from errors of judgment.

The efficiency of an intelligent robot is greatly increased once autonomy is granted. Under a human controller, the full potential of the robot may not be realized, since the operator may work under some perceived limitations.

Autonomous robots make use of complex computational algorithms to process the information gathered, and from this they arrive at the most effective and efficient means to carry out a task (Roberts et al. 195).

The speed of response is also significantly greater with an autonomous controller than with human operators. Moreover, an autonomous robot’s performance improves with use, because the robot is able to enhance its performance by learning during operation. By learning from previous encounters, efficiency is increased even further.

An autonomous robot possesses the capacity to deal with new and unexpected situations that arise within its limits. In robots with a high degree of autonomy, the autonomous controller even has the ability to perform some hardware repairs should one of its components fail (Ridley and Corke 33).

This is a very desirable feature, since robots which operate in underground mining venture into areas which are inaccessible or highly unsafe for humans.

As a matter of fact, some of the tasks performed by intelligent robots are mundane and time consuming. Such tasks include taking measurements of toxicity levels deep underground and moving from one point to another along a predetermined path, to name but a few.

An autonomous controller is able to relieve the human operator of mundane and time-consuming tasks, therefore increasing efficiency (Antsaklis, Passino and Wang 22). In addition, robots have the advantage of enhanced reliability, since they are not prone to overlooking procedures as human operators are.

Intelligent robots have managed to meet some of the challenges faced in underground mining, because they are able to perform tasks in environments that are too dangerous for human beings. Robots with autonomous control promise to further increase the efficiency of robots in mining, thereby increasing their worth to the industry.

While at the moment most autonomously controlled robots are very expensive and used mainly by the military and in space missions, research suggests that better, less expensive, more efficient and more adaptive robots will be available in the future. This will increase the prevalence of autonomous robots in the mining industry, with greater productivity being achieved.

Antsaklis, J. P., Passino, K. M., and Wang, S. J. “An Introduction to Autonomous Control Systems.” Proc. of the 5th IEEE International Symposium on Intelligent Control, Philadelphia, 2002, pp. 21-26.

Roberts, Jonathan, et al. “Autonomous Control of Underground Mining Vehicles Using Reactive Navigation.” The International Journal of Robotics Research, vol. 42, 2008, pp. 193-199.

Ridley, Peter, and Corke, Peter. “Autonomous Control of an Underground Mining Vehicle.” Australian Conference on Robotics and Automation, Sydney, 2001.

The Mining Journal is an online publication produced on a weekly basis. The Journal styles itself as “the mining industry’s weekly newspaper”, demonstrating its focus on people involved in the mining industry.

The journal covers relevant news on all the continents, and its readership can be assumed to span them. Access to the journal is by subscription, and the number of subscribed readers is approximately in the tens of thousands.

The journal has a diverse audience, including investors, plant managers, miners and technical personnel. The investors read the magazine to identify areas where they can make investments in the industry.

The plant managers and technical personnel read the magazine to keep abreast of mining technology news. The education of these people ranges from very advanced, for the technical staff and managers, to high school level, for the miners. The education levels greatly influence the manner in which the message is conveyed in the journal.

It is imperative to design and develop a message that all audiences will understand. As such, the publication ensures that the terms used are easy to understand and not too technical in nature. Even so, it makes use of terminology that is commonplace in mining and includes some technical details to appeal to the technical readers.

March 23, 2009

Rise of the Robots: The Future of Artificial Intelligence

By 2050 robot "brains" based on computers that execute 100 trillion instructions per second will start rivaling human intelligence

By Hans Moravec

Editor’s Note: This article was originally printed in the 2008 Scientific American Special Report on Robots. It is being published on the Web as part of ScientificAmerican.com’s In-Depth Report on Robots.

In recent years the mushrooming power, functionality and ubiquity of computers and the Internet have outstripped early forecasts about technology’s rate of advancement and usefulness in everyday life. Alert pundits now foresee a world saturated with powerful computer chips, which will increasingly insinuate themselves into our gadgets, dwellings, apparel and even our bodies.

Yet a closely related goal has remained stubbornly elusive. In stark contrast to the largely unanticipated explosion of computers into the mainstream, the entire endeavor of robotics has failed rather completely to live up to the predictions of the 1950s. In those days experts who were dazzled by the seemingly miraculous calculational ability of computers thought that if only the right software were written, computers could become the artificial brains of sophisticated autonomous robots. Within a decade or two, they believed, such robots would be cleaning our floors, mowing our lawns and, in general, eliminating drudgery from our lives.

Obviously, it hasn’t turned out that way. It is true that industrial robots have transformed the manufacture of automobiles, among other products. But that kind of automation is a far cry from the versatile, mobile, autonomous creations that so many scientists and engineers have hoped for. In pursuit of such robots, waves of researchers have grown disheartened and scores of start-up companies have gone out of business.

It is not the mechanical “body” that is unattainable; articulated arms and other moving mechanisms adequate for manual work already exist, as the industrial robots attest. Rather it is the computer-based artificial brain that is still well below the level of sophistication needed to build a humanlike robot.

Nevertheless, I am convinced that the decades-old dream of a useful, general-purpose autonomous robot will be realized in the not too distant future. By 2010 we will see mobile robots as big as people but with cognitive abilities similar in many respects to those of a lizard. The machines will be capable of carrying out simple chores, such as vacuuming, dusting, delivering packages and taking out the garbage. By 2040, I believe, we will finally achieve the original goal of robotics and a thematic mainstay of science fiction: a freely moving machine with the intellectual capabilities of a human being.

Reasons for Optimism

In light of what I have just described as a history of largely unfulfilled goals in robotics, why do I believe that rapid progress and stunning accomplishments are in the offing? My confidence is based on recent developments in electronics and software, as well as on my own observations of robots, computers and even insects, reptiles and other living things over the past 30 years.

The single best reason for optimism is the soaring performance in recent years of mass-produced computers. Through the 1970s and 1980s, the computers readily available to robotics researchers were capable of executing about one million instructions per second (MIPS). Each of these instructions represented a very basic task, like adding two 10-digit numbers or storing the result in a specified location in memory.

In the 1990s computer power suitable for controlling a research robot shot through 10 MIPS, 100 MIPS and has lately reached 50,000 MIPS in a few high-end desktop computers with multiple processors. Apple’s MacBook laptop computer, with a retail price at the time of this writing of $1,099, achieves about 10,000 MIPS. Thus, functions far beyond the capabilities of robots in the 1970s and 1980s are now coming close to commercial viability.

For example, in October 1995 an experimental vehicle called Navlab V crossed the U.S. from Washington, D.C., to San Diego, driving itself more than 95 percent of the time. The vehicle’s self-driving and navigational system was built around a 25-MIPS laptop based on a microprocessor by Sun Microsystems. The Navlab V was built by the Robotics Institute at Carnegie Mellon University, of which I am a member. Similar robotic vehicles, built by researchers elsewhere in the U.S. and in Germany, have logged thousands of highway kilometers under all kinds of weather and driving conditions. Dramatic progress in this field became evident in the DARPA Grand Challenge contests held in California. In October 2005 several fully autonomous cars successfully traversed a hazard-studded 132-mile desert course, and in 2007 several successfully drove for half a day in urban traffic conditions.

In other experiments within the past few years, mobile robots mapped and navigated unfamiliar office suites, and computer vision systems located textured objects and tracked and analyzed faces in real time. Meanwhile personal computers became much more adept at recognizing text and speech.

Still, computers are no match today for humans in such functions as recognition and navigation. This puzzled experts for many years, because computers are far superior to us in calculation. The explanation of this apparent paradox follows from the fact that the human brain, in its entirety, is not a true programmable, general-purpose computer (what computer scientists refer to as a universal machine; almost all computers nowadays are examples of such machines).

To understand why this is so requires an evolutionary perspective. To survive, our early ancestors had to do several things repeatedly and very well: locate food, escape predators, mate and protect offspring. Those tasks depended strongly on the brain’s ability to recognize and navigate. Honed by hundreds of millions of years of evolution, the brain became a kind of ultrasophisticated—but special-purpose—computer.

The ability to do mathematical calculations, of course, was irrelevant for survival. Nevertheless, as language transformed human culture, at least a small part of our brains evolved into a universal machine of sorts. One of the hallmarks of such a machine is its ability to follow an arbitrary set of instructions, and with language, such instructions could be transmitted and carried out. But because we visualize numbers as complex shapes, write them down and perform other such functions, we process digits in a monumentally awkward and inefficient way. We use hundreds of billions of neurons to do in minutes what hundreds of them, specially “rewired” and arranged for calculation, could do in milliseconds.

A tiny minority of people are born with the ability to do seemingly amazing mental calculations. In absolute terms, it’s not so amazing: they calculate at a rate perhaps 100 times that of the average person. Computers, by comparison, are millions or billions of times faster.

Can Hardware Simulate Wetware?

The challenge facing roboticists is to take general-purpose computers and program them to match the largely special-purpose human brain, with its ultraoptimized perceptual inheritance and other peculiar evolutionary traits. Today’s robot-controlling computers are much too feeble to be applied successfully in that role, but it is only a matter of time before they are up to the task.

Implicit in my assertion that computers will eventually be capable of the same kind of perception, cognition and thought as humans is the idea that a sufficiently advanced and sophisticated artificial system—for example, an electronic one—can be made and programmed to do the same thing as the human nervous system, including the brain. This issue is controversial in some circles right now, and there is room for brilliant people to disagree.

At the crux of the matter is the question of whether biological structure and behavior arise entirely from physical law and whether, moreover, physical law is computable—that is to say, amenable to computer simulation. My view is that there is no good scientific evidence to negate either of these propositions. On the contrary, there are compelling indications that both are true.

Molecular biology and neuroscience are steadily uncovering the physical mechanisms underlying life and mind but so far have addressed mainly the simpler mechanisms. Evidence that simple functions can be composed to produce the higher capabilities of nervous systems comes from programs that read, recognize speech, guide robot arms to assemble tight components by feel, classify chemicals by artificial smell and taste, reason about abstract matters, and so on. Of course, computers and robots today fall far short of broad human or even animal competence. But that situation is understandable in light of an analysis, summarized in the next section, that concludes that today’s computers are only powerful enough to function like insect nervous systems. And, in my experience, robots do indeed perform like insects on simple tasks.

Ants, for instance, can follow scent trails but become disoriented when the trail is interrupted. Moths follow pheromone trails and also use the moon for guidance. Similarly, many commercial robots can follow guide wires installed below the surface they move over, and some orient themselves using lasers that read bar codes on walls.

If my assumption that greater computer power will eventually lead to human-level mental capabilities is true, we can expect robots to match and surpass the capacity of various animals and then finally humans as computer-processing rates rise sufficiently high. If on the other hand the assumption is wrong, we will someday find specific animal or human skills that elude implementation in robots even after they have enough computer power to match the whole brain. That would set the stage for a fascinating scientific challenge—to somehow isolate and identify the fundamental ability that brains have and that computers lack. But there is no evidence yet for such a missing principle.

The second proposition, that physical law is amenable to computer simulation, is increasingly beyond dispute. Scientists and engineers have already produced countless useful simulations, at various levels of abstraction and approximation, of everything from automobile crashes to the “color” forces that hold quarks and gluons together to make up protons and neutrons.

Nervous Tissue and Computation

If we accept that computers will eventually become powerful enough to simulate the mind, the question that naturally arises is: What processing rate will be necessary to yield performance on a par with the human brain? To explore this issue, I have considered the capabilities of the vertebrate retina, which is understood well enough to serve as a Rosetta stone roughly relating nervous tissue to computation. By comparing how fast the neural circuits in the retina perform image-processing operations with how many instructions per second it takes a computer to accomplish similar work, I believe it is possible to at least coarsely estimate the information-processing power of nervous tissue—and by extrapolation, that of the entire human nervous system.

The human retina is a patch of nervous tissue in the back of the eyeball half a millimeter thick and approximately two centimeters across. It consists mostly of light-sensing cells, but one tenth of a millimeter of its thickness is populated by image-processing circuitry that is capable of detecting edges (boundaries between light and dark) and motion for about a million tiny image regions. Each of these regions is associated with its own fiber in the optic nerve, and each performs about 10 detections of an edge or a motion each second. The results flow deeper into the brain along the associated fiber.

From long experience working on robot vision systems, I know that similar edge or motion detection, if performed by efficient software, requires the execution of at least 100 computer instructions. Therefore, to accomplish the retina’s 10 million detections per second would necessitate at least 1,000 MIPS.

The entire human brain is about 75,000 times heavier than the 0.02 gram of processing circuitry in the retina, which implies that it would take, in round numbers, 100 million MIPS (100 trillion instructions per second) to emulate the 1,500-gram human brain. Personal computers in 2008 are just about a match for the 0.1-gram brain of a guppy, but a typical PC would have to be at least 10,000 times more powerful to perform like a human brain.
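
Moravec's estimate is plain arithmetic and can be checked in a few lines. The Python fragment below simply reproduces the numbers given in the text (a million image regions, ten detections per region per second, at least 100 instructions per detection, and a 75,000-fold brain-to-retina mass ratio); rounding the result up to 100 million MIPS is his.

    # Reproducing the retina-to-brain extrapolation from the text.
    regions = 1_000_000               # image regions in the retina
    detections_per_second = 10        # edge/motion detections per region
    instructions_per_detection = 100  # efficient software, per detection

    retina_mips = regions * detections_per_second * instructions_per_detection / 1e6
    print(retina_mips)  # 1000.0 -> about 1,000 MIPS for the retina

    brain_to_retina_mass = 75_000     # whole brain vs. 0.02 g of retinal circuitry
    print(retina_mips * brain_to_retina_mass)  # 75,000,000 MIPS, given in the
                                               # text in round numbers as 100
                                               # million MIPS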

Brainpower and Utility

Though dispiriting to artificial-intelligence experts, the huge deficit does not mean that the goal of a humanlike artificial brain is unreachable. Computer power for a given price doubled each year in the 1990s, after doubling every 18 months in the 1980s and every two years before that. Prior to 1990 this progress made possible a great decrease in the cost and size of robot-controlling computers. Cost went from many millions of dollars to a few thousand, and size went from room-filling to handheld. Power, meanwhile, held steady at about 1 MIPS. Since 1990 cost and size reductions have abated, but power has risen to about 10,000 MIPS for a home computer. At the present pace, only about 20 or 30 years will be needed to close the gap. Better yet, useful robots don’t need full human-scale brainpower.

Commercial and research experiences convince me that the mental power of a guppy—about 10,000 MIPS—will suffice to guide mobile utility robots reliably through unfamiliar surroundings, suiting them for jobs in hundreds of thousands of industrial locations and eventually hundreds of millions of homes. A few machines with 10,000 MIPS are here already, but most industrial robots still use processors with less than 1,000 MIPS.

Commercial mobile robots have found few jobs. A paltry 10,000 work worldwide, and the companies that made them are struggling or defunct. (Makers of robot manipulators are not doing much better.) The largest class of commercial mobile robots, known as automatic guided vehicles (AGVs), transport materials in factories and warehouses. Most follow buried signal-emitting wires and detect end points and collisions with switches, a technique developed in the 1960s.

It costs hundreds of thousands of dollars to install guide wires under concrete floors, and the routes are then fixed, making the robots economical only for large, exceptionally stable factories. Some robots made possible by the advent of microprocessors in the 1980s track softer cues, like magnets or optical patterns in tiled floors, and use ultrasonics and infrared proximity sensors to detect and negotiate their way around obstacles.

The most advanced industrial mobile robots, developed since the late 1980s, are guided by occasional navigational markers—for instance, laser-sensed bar codes—and by preexisting features such as walls, corners and doorways. The costly labor of laying guide wires is replaced by custom software that is carefully tuned for each route segment. The small companies that developed the robots discovered many industrial customers eager to automate transport, floor cleaning, security patrol and other routine jobs. Alas, most buyers lost interest as they realized that installation and route changing required time-consuming and expensive work by experienced route programmers of inconsistent availability. Technically successful, the robots fizzled commercially.

In failure, however, they revealed the essentials for success. First, the physical vehicles for various jobs must be reasonably priced. Fortunately, existing AGVs, forklift trucks, floor scrubbers and other industrial machines designed for accommodating human riders or for following guide wires can be adapted for autonomy. Second, the customer should not have to call in specialists to put a robot to work or to change its routine; floor cleaning and other mundane tasks cannot bear the cost, time and uncertainty of expert installation. Third, the robots must work reliably for at least six months before encountering a problem or a situation requiring downtime for reprogramming or other alterations. Customers routinely rejected robots that after a month of flawless operation wedged themselves in corners, wandered away lost, rolled over employees’ feet or fell down stairs. Six months, though, earned the machines a sick day.

Robots exist that have worked faultlessly for years, perfected by an iterative process that fixes the most frequent failures, revealing successively rarer problems that are corrected in turn. Unfortunately, that kind of reliability has been achieved only for prearranged routes. An insectlike 10 MIPS is just enough to track a few handpicked landmarks on each segment of a robot’s path. Such robots are easily confused by minor surprises such as shifted bar codes or blocked corridors (not unlike ants thrown off a scent trail or a moth that has mistaken a streetlight for the moon).

A Sense of Space

Robots that chart their own routes emerged from laboratories worldwide in the mid-1990s, as microprocessors reached 100 MIPS. Most build two-dimensional maps from sonar or laser rangefinder scans to locate and route themselves, and the best seem able to navigate office hallways for days before becoming disoriented. Of course, they still fall far short of the six-month commercial criterion. Too often different locations in the coarse maps resemble one another. Conversely, the same location, scanned at different heights, looks different, or small obstacles or awkward protrusions are overlooked. But sensors, computers and techniques are improving, and success is in sight.

My efforts are in the race. In the 1980s at Carnegie Mellon we devised a way to distill large amounts of noisy sensor data into reliable maps by accumulating statistical evidence of emptiness or occupancy in each cell of a grid representing the surroundings. The approach worked well in two dimensions and still guides many of the robots described above.
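
The evidence-grid idea can be sketched in a few lines. The toy Python version below is not Carnegie Mellon's actual code and its parameters are invented; it just shows the mechanism: each cell accumulates log-odds evidence, with beams that pass through a cell counting toward emptiness and readings that terminate in it counting toward occupancy.

    import math

    GRID = [[0.0] * 10 for _ in range(10)]  # log-odds per cell; 0.0 = unknown

    def update_cell(x, y, hit, lo_hit=0.85, lo_miss=-0.4):
        """Accumulate evidence: positive when a range reading ends in the
        cell, negative when the beam passes through it."""
        GRID[y][x] += lo_hit if hit else lo_miss

    def occupancy_probability(x, y):
        return 1.0 / (1.0 + math.exp(-GRID[y][x]))

    # Three beams pass through cell (2, 3); one reading terminates there.
    for _ in range(3):
        update_cell(2, 3, hit=False)
    update_cell(2, 3, hit=True)
    print(round(occupancy_probability(2, 3), 3))  # about 0.41: probably empty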

Three-dimensional maps, 1,000 times richer, promised to be much better but for years seemed computationally out of reach. In 1992 we used economies of scale and other tricks to reduce the computational costs of three-dimensional maps 100-fold. Continued research led us to found a company, Seegrid, that sold its first dozen robots by late 2007. These are load-pulling warehouse and factory “tugger” robots that, on command, autonomously follow routes learned in a single human-guided walk-through. They navigate by three-dimensionally grid-mapping their route, as seen through four wide-angle stereoscopic cameras mounted on a “head,” and require no guide wires or other navigational markers.

Robot, Version 1.0

In 2008 desktop PCs offer more than 10,000 MIPS. Seegrid tuggers, using slightly older processors doing about 5,000 MIPS, distill about one visual “glimpse” per second. A few thousand visually distinctive patches in the surroundings are selected in each glimpse, and their 3-D positions are statistically estimated. When the machine is learning a new route, these 3-D patches are merged into a chain of 3-D grid maps describing a 30-meter “tunnel” around the route. When the tugger is automatically retracing a taught path, the patches are compared with the stored grid maps. With many thousands of 3-D fuzzy patches weighed statistically by a so-called sensor model, which is trained offline using calibrated example routes, the system is remarkably tolerant of poor sight, changes in lighting, movement of objects, mechanical inaccuracies and other perturbations.

Seegrid’s computers, perception programs and end products are being rapidly improved and will gain new functionalities such as the ability to find, pick up and drop loads. The potential market for materials-handling automation is large, but most of it has been inaccessible to older approaches involving buried guide wires or other path markers, which require extensive planning and installation costs and create inflexible routes. Vision-guided robots, on the other hand, can be easily installed and rerouted.

Fast Replay

Plans are afoot to improve, extend and miniaturize our techniques so that they can be used in other applications. On the short list are consumer robot vacuum cleaners. Externally these may resemble the widely available Roomba machines from iRobot. The Roomba, however, is a simple beast that moves randomly, senses only its immediate obstacles and can get trapped in clutter. A Seegrid robot would see, explore and map its premises and would run unattended, with a cleaning schedule minimizing owner disturbances. It would remember its recharging locations, allowing for frequent recharges to run a powerful vacuum motor, and also would be able to frequently empty its dust load into a larger container.

Commercial success will provoke competition and accelerate investment in manufacturing, engineering and research. Vacuuming robots ought to beget smarter cleaning robots with dusting, scrubbing and picking-up arms, followed by larger multifunction utility robots with stronger, more dexterous arms and better sensors. Programs will be written to make such machines pick up clutter, store, retrieve and deliver things, take inventory, guard homes, open doors, mow lawns, play games, and so on. New applications will expand the market and spur further advances when robots fall short in acuity, precision, strength, reach, dexterity, skill or processing power. Capability, numbers sold, engineering and manufacturing quality, and cost-effectiveness will increase in a mutually reinforcing spiral. Perhaps by 2010 the process will have produced the first broadly competent “universal robots,” as big as people but with lizardlike 20,000-MIPS minds that can be programmed for almost any simple chore.

Like competent but instinct-ruled reptiles, first-generation universal robots will handle only contingencies explicitly covered in their application programs. Unable to adapt to changing circumstances, they will often perform inefficiently or not at all. Still, so much physical work awaits them in businesses, streets, fields and homes that robotics could begin to overtake pure information technology commercially.

A second generation of universal robot with a mouselike 100,000 MIPS will adapt as the first generation does not and will even be trainable. Besides application programs, such robots would host a suite of software “conditioning modules” that would generate positive and negative reinforcement signals in predefined circumstances. For example, doing jobs fast and keeping its batteries charged will be positive; hitting or breaking something will be negative. There will be other ways to accomplish each stage of an application program, from the minutely specific (grasp the handle underhand or overhand) to the broadly general (work indoors or outdoors). As jobs are repeated, alternatives that result in positive reinforcement will be favored, those with negative outcomes shunned. Slowly but surely, second-generation robots will work increasingly well.
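
As a thought experiment, such a conditioning module might look like the hypothetical Python sketch below. All names and numbers are invented: predefined circumstances map to positive or negative signals, each alternative way of doing a job keeps a running average score, and better-scoring alternatives come to be favored.

    import random

    REINFORCEMENT = {            # predefined circumstances -> signals
        "job_finished_fast": +1.0,
        "battery_charged": +0.5,
        "hit_something": -2.0,
        "broke_something": -3.0,
    }

    scores = {"grasp_underhand": 0.0, "grasp_overhand": 0.0}
    counts = {"grasp_underhand": 0, "grasp_overhand": 0}

    def reinforce(alternative, events):
        """Update the running average score of one way of doing the job."""
        reward = sum(REINFORCEMENT[e] for e in events)
        counts[alternative] += 1
        scores[alternative] += (reward - scores[alternative]) / counts[alternative]

    def choose(epsilon=0.1):
        """Usually pick the best-scoring alternative; explore occasionally."""
        if random.random() < epsilon:
            return random.choice(list(scores))
        return max(scores, key=scores.get)

    reinforce("grasp_underhand", ["job_finished_fast"])
    reinforce("grasp_overhand", ["hit_something"])
    print(choose())  # usually "grasp_underhand"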

A monkeylike five million MIPS will permit a third generation of robots to learn very quickly from mental rehearsals in simulations that model physical, cultural and psychological factors. Physical properties include shape, weight, strength, texture and appearance of things, and ways to handle them. Cultural aspects include a thing’s name, value, proper location and purpose. Psychological factors, applied to humans and robots alike, include goals, beliefs, feelings and preferences. Developing the simulators will be a huge undertaking involving thousands of programmers and experience-gathering robots. The simulation would track external events and tune its models to keep them faithful to reality. It would let a robot learn a skill by imitation and afford a kind of consciousness. Asked why there are candles on the table, a third-generation robot might consult its simulation of house, owner and self to reply that it put them there because its owner likes candlelit dinners and it likes to please its owner. Further queries would elicit more details about a simple inner mental life concerned only with concrete situations and people in its work area.

Fourth-generation universal robots with a humanlike 100 million MIPS will be able to abstract and generalize. They will result from melding powerful reasoning programs to third-generation machines. These reasoning programs will be the far more sophisticated descendants of today’s theorem provers and expert systems, which mimic human reasoning to make medical diagnoses, schedule routes, make financial decisions, configure computer systems, analyze seismic data to locate oil deposits, and so on.

Properly educated, the resulting robots will become quite formidable. In fact, I am sure they will outperform us in any conceivable area of endeavor, intellectual or physical. Inevitably, such a development will lead to a fundamental restructuring of our society. Entire corporations will exist without any human employees or investors at all. Humans will play a pivotal role in formulating the intricate complex of laws that will govern corporate behavior. Ultimately, though, it is likely that our descendants will cease to work in the sense that we do now. They will probably occupy their days with a variety of social, recreational and artistic pursuits, not unlike today’s comfortable retirees or the wealthy leisure classes.

The path I’ve outlined roughly recapitulates the evolution of human intelligence—but 10 million times more rapidly. It suggests that robot intelligence will surpass our own well before 2050. In that case, mass-produced, fully educated robot scientists working diligently, cheaply, rapidly and increasingly effectively will ensure that most of what science knows in 2050 will have been discovered by our artificial progeny!


IEEE Account

  • Change Username/Password
  • Update Address

Purchase Details

  • Payment Options
  • Order History
  • View Purchased Documents

Profile Information

  • Communications Preferences
  • Profession and Education
  • Technical Interests
  • US & Canada: +1 800 678 4333
  • Worldwide: +1 732 981 0060
  • Contact & Support
  • About IEEE Xplore
  • Accessibility
  • Terms of Use
  • Nondiscrimination Policy
  • Privacy & Opting Out of Cookies

A not-for-profit organization, IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. © Copyright 2024 IEEE - All rights reserved. Use of this web site signifies your agreement to the terms and conditions.

Logo

Essay on Robotics

Students are often asked to write an essay on robotics in school and college. If you’re looking for the same, here are 100-word, 250-word, and 500-word essays on the topic.

Let’s take a look…

100 Words Essay on Robotics

What is Robotics?

Robotics is the science of creating robots. Robots are machines that can do tasks without human help. They can be as small as a toy or as big as a car. Some robots look like humans, but most just have parts to do jobs. They can be used in many places, like factories, hospitals, and homes.

History of Robotics

Robotics started in the 20th century. The first robots were simple machines. They could only do easy tasks. Over time, robots became more complex. They can now do many things humans can do. They can even learn new tasks by themselves.

Types of Robots

There are many types of robots. Some robots are used in factories to build things. These are called industrial robots. There are also robots that help doctors in hospitals. They can do surgeries. Then there are robots that can explore space. They can go to places where humans can’t.

Benefits of Robotics

Robots can do tasks faster and more accurately than humans. They can also do dangerous jobs, keeping people safe. Robots can work 24/7 without getting tired. They can help in many fields, like medicine, manufacturing, and space exploration.

Future of Robotics

250 Words Essay on Robotics

Robotics is a field of technology that deals with making, operating, and using robots. Robots are machines that can follow instructions to do tasks. Some robots can do tasks on their own, while others need human help.

There are many types of robots. Some robots look like humans; these are called humanoid robots. Then there are industrial robots, which are used in factories to make things like cars. There are also robots used in medicine, space exploration, and even in our homes to help with cleaning.

How Robots Work

Robots are run by computers. They follow a set of instructions called a program. This program tells the robot what to do and how to do it. Robots have sensors that allow them to gather information about their surroundings. This information is used to make decisions and carry out tasks.
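To make the sense-decide-act cycle described above concrete, here is a minimal sketch in Python. The sensor and motor functions are hypothetical stand-ins, not a real robot API.

```python
# Minimal sense-decide-act loop. The sensor and motor functions are
# hypothetical placeholders, not a real robot API.

def read_distance_sensor():
    """Pretend sensor: distance to the nearest obstacle, in centimeters."""
    return 42.0  # a stand-in reading

def set_motor_speed(left, right):
    """Pretend actuator: set wheel speeds (here we just print them)."""
    print(f"motors: left={left}, right={right}")

def control_step():
    distance = read_distance_sensor()   # sense: gather information
    if distance < 20.0:                 # decide: apply the program's rule
        set_motor_speed(-0.5, 0.5)      # act: turn away from the obstacle
    else:
        set_motor_speed(1.0, 1.0)       # act: drive straight ahead

# A real robot repeats this loop continuously; three iterations suffice here.
for _ in range(3):
    control_step()
```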

Benefits of Robots

Robots can do many things that humans cannot do or find hard to do. They can work in dangerous places like space, deep sea, or inside a volcano. They can also do tasks quickly and without getting tired. This is why they are very useful in many areas like science, industry, and medicine.

The future of robotics is very exciting. Scientists are working on making robots that can learn and think like humans. These robots will be able to solve problems and make decisions on their own. They will be even more helpful and can change the way we live and work.

500 Words Essay on Robotics

Robotics is a branch of technology that deals with robots. Robots are machines that can perform tasks automatically or with guidance. They can do things that are hard, dangerous, or boring for humans. This field combines different branches of science and engineering like computer science, electrical engineering, and mechanical engineering.

The idea of robots has been around for a long time. Ancient Greek myths talk about mechanical servants. The term “robot” itself comes from a Czech word “robota,” meaning forced labor. It was first used in a play in 1920. The first real industrial robot, Unimate, started work in 1961 at a General Motors plant. Since then, robotics has grown a lot.

Robots have several parts. They have a body or frame, motors to make them move, sensors to help them understand their surroundings, and a computer to control everything. The computer uses a program, which is a set of instructions, to tell the robot what to do. The sensors collect information about the world. The computer uses this information to decide what actions the robot should take.

Importance of Robotics

Robots are very important in today’s world. They can do jobs that are dangerous for humans, like defusing bombs or working in nuclear power plants. They can also do jobs that need to be very exact, like in surgery or making computer chips. Robots can also do jobs that are boring or repetitive, like assembling cars in a factory. This helps humans to focus on more interesting and creative tasks.

In conclusion, robotics is a fascinating field that combines many different areas of science and engineering. It has a rich history and an exciting future. Robots are already doing many tasks that help humans, and they are likely to do even more in the future. As we continue to develop and use robots, we must also think about how to do this in a way that benefits everyone.


The future of robots in the workplace: The impact on workers

For this 2015 working paper for the National Bureau of Economic Research, researchers tested economic models to predict how much smart machines may eventually replace human labor.


by Rachael Stephens, The Journalist’s Resource, August 11, 2015


Not so long ago the idea of robots patrolling neighborhoods or caring for children was the domain of science fiction. While robots have yet to replace police and daycare workers, technology has become so advanced that automated systems are taking on greater roles in society and the workplace.

Academic research has explored the diverse impacts of technology on employment — what happens when jobs shift elsewhere or when they’re atomized through Internet-enabled technologies. A 2015 study from Uppsala University and the London School of Economics looked at the economic impact of industrial robot use in 17 countries from 1993 to 2007 and found that robots contributed to the economy, partly by helping humans do their work better.

A 2015 study for the National Bureau of Economic Research, “Robots Are Us: Some Economics of Human Replacement,” uses an economic model to explore the potential impact of increased workplace automation. The authors — Seth Benzell, Laurence Kotlikoff and Guillermo LaGarda of Boston University and Jeffrey Sachs of Columbia University — developed a model that calculates the initial and final states of an economy with two inputs to production (capital and code) and two types of workers (low-tech and high-tech). They then tested the impact that variations in workplace conditions and industrial policies would have on the economy.
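To see how “code” can act as a productive input that outlives the labor that wrote it, consider the toy simulation below. It is emphatically not the authors’ model — the production function, exponents, and depreciation rate are invented for illustration — but it shows the structure described above: output depends on capital, a code stock, and two kinds of labor, and the code stock accumulates from past high-tech work.

```python
# Toy illustration only -- NOT the NBER paper's model. It mimics the
# structure described above: output from capital, code, and two kinds of
# labor, with the code stock accumulating from past high-tech work.

def output(capital, code, low_tech_labor, high_tech_labor):
    # Hypothetical Cobb-Douglas-style production function; the exponents
    # are illustrative, not estimated.
    return (capital ** 0.3) * ((code + high_tech_labor) ** 0.3) * (low_tech_labor ** 0.4)

code_stock = 1.0
depreciation = 0.1   # assumed rate at which old code loses value
for year in range(5):
    high_tech = 1.0  # high-tech labor writes this year's code
    y = output(capital=10.0, code=code_stock,
               low_tech_labor=5.0, high_tech_labor=high_tech)
    print(f"year {year}: code stock {code_stock:.2f}, output {y:.2f}")
    # Current output increasingly depends on past software investment:
    code_stock = (1 - depreciation) * code_stock + high_tech
```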

Key findings of the study include:

  • Increased workplace automation could produce both “economic misery” and prosperity. Specifically, three consequences were found to be highly probable: “A long-run decline in labor share of income (which appears underway in OECD members), tech-booms followed by tech-busts, and a growing dependency of current output on past software investment.”
  • As technology improves and its use in the workplace expands, the demand for high-tech workers falls. At the end of the simulation, nearly 68% of high-tech workers end up in the service sector, earning approximately 14% less than they did previously.
  • As high-tech workers return to the service sector, the wages of low-tech workers rise 41%, then fall to 17% above the initial steady state wage — higher than the initial state, but lower than during the “boom.” In effect, the drop in high-tech worker compensation generates a boom-bust in low-tech worker compensation.
  • National income increases in the short term but then falls by 17% in the long run.
  • Adding a “positive tech shock” — a technical innovation that reduces costs or increases productivity — to the model causes a 13% short-term increase in national income, but national income then falls again by 28%, ending up lower than in the initial steady state.
  • During a positive tech shock, labor’s share of national income also rises in the short term but then falls, from 75% to 57%.
  • The positive tech shock also causes consumption of goods to decrease by 28%, and the price of services to decrease by 43% as compared to before the technological breakthrough.
  • Some public policy options were found to reduce the negative long-term impacts on workers and the economy. For example, a high national saving rate mitigates the impacts of a positive tech shock, resulting in workers earning very near their initial steady state wages (rather than far less), but able to consume 20% more with those wages than they were before the shock.
  • Some policies to mitigate the negative effects were found to be likely to backfire, including requiring that all code be open source or restricting the labor supply — these solutions were found to further hurt wages, savings and capital stock.

The authors conclude: “Our simple model illustrates the range of things that smart machines can do for us and to us. Its central message is disturbing. Absent appropriate fiscal policy that redistributes from winners to losers, smart machines can mean long-term misery for all.”

Keywords: technology, artificial intelligence, AI, robotics


Autonomy for Space Robots: Past, Present, and Future

by Issa A. D. Nesnas, Lorraine M. Fesq, and Richard A. Volpe | Open access | Published: 19 June 2021 | Volume 2, pages 251–263 (2021)

Purpose of Review

The purpose of this review is to highlight space autonomy advances across mission phases, capture the anticipated need for autonomy and associated rationale, assess state of the practice, and share thoughts for future advancements that could lead to a new frontier in space exploration.

Recent Findings

Over the past two decades, several autonomous functions and system-level capabilities have been demonstrated and used in spacecraft operations. In spite of that, spacecraft today remain largely reliant on ground in the loop to assess situations and plan next actions, using pre-scripted command sequences. Advances have been made across mission phases including spacecraft navigation; proximity operations; entry, descent, and landing; surface mobility and manipulation; and data handling. But past successful practices may not be sustainable for future exploration. The ability of ground operators to predict the outcome of their plans seriously diminishes when platforms physically interact with planetary bodies, as has been experienced in two decades of Mars surface operations. This results from uncertainties that arise due to limited knowledge, complex physical interaction with the environment, and limitations of associated models.

Robotics and autonomy are synergistic: robotics provides flexibility, and autonomy exercises that flexibility to explore unknown worlds more effectively and robustly. Such capabilities can be substantially advanced by leveraging the rapid growth in SmallSats, the relative accessibility of near-Earth objects, and the recent increase in launch opportunities.


Introduction

The critical role that robotics and autonomous systems can play in enabling the exploration of planetary surfaces has been projected for many decades and was foreseen by a NASA study group on “Robotics and Machine Intelligence” in 1980 led by Carl Sagan [ 1 ]. As of this writing, we are only 2 years away from achieving a continuous robotic presence on Mars for one-quarter century. Orbiters, landers, and rovers have been exploring the Martian surface and subsurface, both at global and local scales, to understand its evolution, topography, climate, geology, and habitability. Robotics has enabled missions to traverse tens of kilometers across the red planet, sample its surface, and place different instruments on numerous targets. However, the planetary exploration of Mars has remained heavily reliant on ground in the loop for its daily operations. The situation is similar for other planetary missions, which are largely operated by a ground crew. A number of technical and programmatic factors play into the degree to which missions can and are able to operate autonomously.

Despite that, autonomy has been used across mission phases including in-space operations, small-body proximity operations, landing, surface contact and interaction, and mobility. Past successful practices may not be sustainable or scalable for future exploration, which would drive the need for increased autonomy, as we will analyze in this article.

Autonomy for Robotic Spacecraft

Definition and Scope

NASA defines autonomy as “the ability of a system to achieve goals while operating independently of external control” [ 2 ]. In the NASA parlance, a system is the combination of elements that function together to produce the capability that is required to meet a need. These elements include personnel, hardware, software, equipment, facilities, processes, and procedures needed for this purpose [ 3 ]. So, by this definition, an autonomous system may involve a combination of astronauts and machines operating independently of an external entity such as ground control or an orbiting crew. However, in this article, we will only consider autonomy in the context of a robotic spacecraft, where the external actor is ground control. Autonomous robots operated by astronauts in proximity or remotely are outside the scope of this article.

Figure 1 shows the basic abstraction of an autonomous system. With inputs that define the desired objectives or goals, the system perceives its environment and itself (for health monitoring), reasons about them, decides what actions to take, and then executes those actions. The actions affect the system and/or the environment, which impact what would be perceived next. Today’s spacecraft operate largely within the act domain. Perception (except for sensory measurements and rudimentary signal processing) and decision-making are largely performed by personnel on Earth, who also generate commands to be uplinked to the spacecraft to initiate the next set of actions. Autonomous perceptions, decisions, and actions are delegated to the spacecraft in limited cases, when no alternative exists. Onboard autonomy eliminates communication delays, which cause stale state information that ground operators must contend with to close the loop.

Figure 1. The basic abstraction of an autonomous system
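Rendered as code, the perceive-reason-act loop of Figure 1 might look like the sketch below. The class, state representation, and actions are illustrative stubs, not flight software.

```python
# Schematic rendering of the Figure 1 abstraction; everything is a stub.

class AutonomousSystem:
    def __init__(self, goals):
        self.goals = goals   # desired objectives or goals, provided as input
        self.state = {}      # estimated state of the environment and of self

    def perceive(self, observations):
        # Update estimates of the environment and of system health.
        self.state.update(observations)

    def decide(self):
        # Reason about state and goals, then choose the next action.
        if self.state.get("fault"):
            return "enter_safe_mode"
        return "continue_plan"

    def act(self, action):
        # Acting affects the system and/or environment, which in turn
        # changes what is perceived on the next cycle.
        print(f"executing: {action}")

system = AutonomousSystem(goals=["reach waypoint"])
system.perceive({"fault": False, "position": (3.0, 4.0)})
system.act(system.decide())
```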

Figure 2 shows the basic abstract functions of an autonomous system for situation and self-awareness as well as for reasoning and acting. Situation and self-awareness require sensing and estimation that encompass perception, system-state estimation, model building, hazard assessment, event and trend identification, anomaly detection, and prognosis. Reasoning and acting encompass planning trajectories/motion (mission), planning and managing the usage of resources, and executing activities. It is also responsible for reconciling conflicting information before execution. Some functions, such as learning and coordination, can be employed across a system and among systems. For example, learning can occur in sensing, estimation, reasoning, and/or acting.

Figure 2. Basic functions of an autonomous system

The autonomous functions of a spacecraft are often categorized into two groups: functional level and system level.

Function-Level Autonomy

Function-level autonomy is typically focused on specific subsystems and implemented with local state machines and control loops, providing a specific subsystem behavior. These domain-specific functions include perception-rich behaviors for in-space operations such as cruise trajectory corrections and proximity operations, getting to a surface via entry, descent, and landing (EDL), and mobility on/over a surface. They also include physical interaction between two assets, such as in-space spacecraft-to-spacecraft docking, grappling, or assembly, as well as reasoning within a science instrument to analyze data and make decisions based on goals.
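As a hedged illustration of the “local state machine” idea, the toy below steps an EDL-flavored behavior through its states. The states, events, and transitions are invented for the example.

```python
# Toy state machine for a function-level behavior, loosely EDL-flavored.
# States, events, and transitions are invented for illustration.

TRANSITIONS = {
    ("entry",             "chute_altitude"): "parachute_descent",
    ("parachute_descent", "radar_lock"):     "powered_descent",
    ("powered_descent",   "touchdown"):      "landed",
}

def step(state, event):
    # A local state machine reacts only to its own subsystem's events;
    # unknown events leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "entry"
for event in ["chute_altitude", "radar_lock", "touchdown"]:
    state = step(state, event)
    print(f"event {event!r} -> state {state!r}")
```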

System-Level Autonomy

System-level autonomy reasons across domains: power, thermal, communication, guidance, navigation, control, mobility, and manipulation, covering both software and hardware elements. It manages external interactions with ground operators to incorporate goals into current and future activities. It also plans resources and activities (scheduling, prioritizing), executes and monitors activities, and manages the system’s health to handle both nominal and off-nominal situations (fault/failure detection, isolation, diagnosis, prognosis, and repair/response).

An autonomy architecture is a necessary underpinning to properly define, integrate, and orchestrate these functions within a system and to support implementations of functions in software, firmware, or hardware. Domain-specific functions have to be architected in a way that allows system-level autonomy to flexibly and consistently manage the various functions within a system, under both nominal and off-nominal situations. Designers have to identify the correct level of abstraction for a given application to define the scope that the system-level autonomy has to reason about. In other words, to maintain the flexibility that an autonomous system needs, an integrated autonomous system should not have artificial boundaries that are not grounded in the fundamentals of the problem. Central to such an architecture is ensuring explicitness of intent; consistency of knowledge in light of faults and failures; completeness (or cognizance of limitations) of the system and its behaviors for handling situations; flexibility in the connectivity of functions to handle failures or degradations; traceability of decisions; robustness and resilience of actions; and cognizance of the implications of actions, in the near term as well as in the long term. Explorers may need to operate for decades, such as the Voyager spacecraft that have been operating since their launch in 1977 [ 4 ].

The Autonomy Continuum

Autonomy applied to space systems spans a continuum from less autonomy (highly prescriptive tasks and routines) to increasingly autonomous (richer control capabilities onboard), as shown in Fig. 3. Automation sits on the “less autonomy” side of the spectrum: it often follows prescribed actions without establishing full situational awareness onboard and without reasoning onboard about the actions the spacecraft undertakes. Such situational awareness and reasoning are handled by operators on the ground.

Figure 3. Autonomy is a continuum of capabilities

It is important to note that moving more control onboard does not take scientists and other humans out of the loop. Rather, it changes the role of operators. More autonomy allows a spacecraft to be aware of and, in many cases, make decisions about its environment and its health in order to meet its objectives. That does not preclude ground operators from interacting with the spacecraft asynchronously to communicate intent, provide guidance, or interject to the extent necessary and possible.

So, When Do We Really Need Autonomy?

As shown in Fig. 4, there are two sets of constraints that drive the need for autonomy: (1) operational constraints derived from mission objectives and (2) system/environment constraints based on the spacecraft design and the remoteness and harshness of the environment. For the operational constraints, the use of autonomy is traded against non-autonomous approaches. Based on risk and cost, mission objectives may get adjusted, often leaning toward state-of-the-practice non-autonomous approaches wherever possible. These may include scaling back on the minimum required science, which relaxes requirements on productivity or access to more difficult sites. For the system/environment constraints, autonomy is required (not just desired) if the three conditions below are met. These conditions occur when:

Changes in the environment or spacecraft occur: Examples of changes in the environment include an erupting plume on Enceladus or winds on Titan. Examples of changes in the spacecraft include degradations, faults, or failures.

Changes are not predictable: In a counterexample, during the approach phase to a small body, the relative trajectory, the body’s motion, and the body’s shape are iterated on, carefully managing uncertainties. As such, ground in the loop is often used, obviating the need for autonomy. When changes can be adequately predicted and modeled, ground operators use this information to prescribe the set of actions and hence do not require autonomy.

Required response time is shorter than the next communication cycle: This condition occurs when the spacecraft has to react to a situation before its next communication cycle with the ground. This was the case during the final stage of OSIRIS-REx’s touch-and-go sampling, which employed autonomy [ 5 ]. Similarly, if a rover is on a steep slope and is slipping, there is no time to communicate with ground operators before reacting to the situation.
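Compressed into code, the three conditions read roughly as follows. This is a paraphrase of the text above, not an official NASA criterion:

```python
# Paraphrase of the three conditions above; not an official NASA criterion.

def autonomy_required(environment_or_spacecraft_changes: bool,
                      changes_are_predictable: bool,
                      required_response_s: float,
                      next_comm_cycle_s: float) -> bool:
    return (environment_or_spacecraft_changes
            and not changes_are_predictable
            and required_response_s < next_comm_cycle_s)

# A rover slipping on a steep slope: an unpredictable change that demands
# a response within seconds, far sooner than the next ground contact.
print(autonomy_required(True, False,
                        required_response_s=5.0,
                        next_comm_cycle_s=8 * 3600))  # True
```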

Figure 4. The need for autonomy (left) and what autonomy enables (right)

Skeptics have argued that, given the risks of deep-space exploration and the rarity of these historic opportunities, it is unlikely that future missions would entrust such critical decisions to an onboard autonomous system. Past missions, such as Cassini, have achieved great success with ground in the loop. Some would argue that the situations that arise are too numerous, complex, and unknown for a machine to reason about, and that they are too risky not to rely on a broad range of ground expertise and resources (e.g., compute power). With time delays of only single-digit hours across our solar system, engaging ground operators for these remote missions would be both viable and sensible. This is a reasonable argument and one that has underscored the state of the practice, but it hinges on two assumptions that may no longer hold true in the future: (1) our ability to predict outcomes to a reasonable degree and (2) the availability of adequate resources (e.g., power, time, communication bandwidth, line-of-sight) to keep ground in lock-step with the decision-making loop. Consider a Europa surface mission, whose duration would be constrained by thermal and power considerations [ 6 ]. A mission that needs to collect and transfer samples from an unknown surface in a limited time requires a degree of autonomy to successfully handle its potentially unpredictable surface interaction. A different example is the Intrepid lunar mission concept [ 7 ]. With its proximity to Earth, the Moon is a destination that typically would not justify the need for autonomy. However, when you consider the proposed mission objectives, which require traversing 1800 km with hundreds of instrument placements across six distinct geological regions in 4 years, such a mission would have to rely on a large degree of autonomy to be successful. A detailed study of this mission concept has shown that communication availability (through the DSN as well as projected bandwidths through a commercial communication infrastructure within this decade) would drive the need for largely autonomous operations, with intermittent ground interventions to direct intent and handle anomalies that cannot be resolved onboard.
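A back-of-envelope check makes the Intrepid-style requirement vivid. The per-day figure for ground-planned driving below is an assumed, Mars-rover-like order of magnitude, used only for contrast:

```python
# Back-of-envelope arithmetic for the 1800 km / 4 years requirement above.
# The ground-planned drive rate is an assumed order of magnitude.

total_km = 1800
mission_days = 4 * 365
required_m_per_day = total_km * 1000 / mission_days
print(f"required average: {required_m_per_day:.0f} m/day")   # ~1233 m/day

ground_planned_m_per_day = 100   # assumed typical ground-in-the-loop pace
print(f"shortfall factor: {required_m_per_day / ground_planned_m_per_day:.1f}x")
```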

Figure 5 compares trends of past, present, and anticipated future needs. Spanning this time frame, we note a growth in the diversity of forms of robotic explorers: starting with spacecraft on flyby and orbiting missions, to today’s missions that include landers [ 8 ], subsurface probes [ 9 ], rovers [ 10 ], and most recently rotorcraft [ 11 , 12 ]. Future missions could include even more diverse forms of explorers such as balloons or dirigibles [ 13 ], hoppers [ 14 , 15 ], walkers [ 16 , 17 ], rappelers [ 18 ], deep melt probes [ 19 ], mother-daughter platforms, and multi-craft missions [ 20 ]. Whereas earlier planetary exploration missions operated in vacuum away from planetary bodies, today’s missions are operating on or into a wide range of planetary surfaces with different properties [ 9 , 21 ]. The surface and subsurface of such bodies are not well characterized. Scooping and probing in these bodies have proved more challenging than had been anticipated [ 22 , 23 , 24 ].

Figure 5. Evolution of deep-space exploration drives the need for greater flexibility

Also, today’s missions have spacecraft with much richer sensing and perception than prior missions. Landing systems are equipped with high-resolution cameras and LIDARs for terrain-relative navigation and hazard assessment [ 25 , 26 ]. The Mars rovers carry tens of visual cameras, often in stereoscopic configurations, to establish situational awareness for ground operations. Interactions with the environment, whether for manipulation, probing, sampling, or mobility, are governed by empirical models that are sensitive to terrain heterogeneity [ 27 ].

All this is to say that current and future robots are operating and interacting in largely unknown environments, with perception-rich systems but limited physical models that govern their interaction with the environment [ 28 ]. While past missions have been successful by relying on their ability to predict the execution of activities days or even weeks in advance based on orbital dynamics, current and future in situ missions will continue to be challenged in their ability to predict outcomes of actions, given the complex and incomplete models that govern those dynamics.

In summary, future robots will likely be operating in largely unknown and diverse environments, with perception-rich systems that experience large uncertainties, causing the ability of ground operators to predict behavior to be severely limited. As such, to be successful, we argue that future robotic systems should be more flexible and more autonomous to handle situations that arise. Given this rising complexity, made worse by communication and power constraints, the desire to push the boundaries of exploration will drive the need for more autonomy [ 29 ••].

State of the Practice, Challenges, and Benefits

State of the Practice

Despite the successful demonstrations and uses of autonomous capabilities on a range of spacecraft, from in-space to surface missions, autonomy only gets adopted when it is absolutely necessary to achieve the mission objectives, such as during EDL on Mars. It is often the case that enhanced productivity does not fare well against a perceived increase in risk. The current posture vis-à-vis the adoption of autonomy is understandable, given the large costs and the rare opportunities afforded to explore such bodies.

Figure 6 captures an abstract comparison of the state of the practice in spacecraft autonomy (see “Autonomous Robotic Capabilities in Past and Current Space Missions” for a summary of advances) relative to a possible future autonomous spacecraft. To date, autonomous capabilities have been deployed either within limited operational windows (green bars) or in scenarios that were carefully modeled a priori and pre-scripted for a deterministic or relatively well-anticipated outcome. Prior to handing control to the autonomous spacecraft (engagement), ground operators use telemetry, physics-based models, and expert judgement to establish situational awareness, even though the spacecraft telemetry may be stale given communication delays and ground-planning durations. As such, these missions have the benefit of ground-in-the-loop engagement to assess situations pre- and post-deployment. This awareness is also used to constrain the autonomous behaviors before engagement. For example, rover navigation is sometimes aided by ground operators’ assessments of safe keep-in zones and unsafe keep-out zones from orbital data to constrain the rover’s actions to remain within a relatively safe corridor [ 30 •]. This is akin to a parent guarding their child during their first steps. Spacecraft fault protection is often disabled or restricted during critical events, lest it result in an unexpected course of action that could put the mission at risk [ 31 ]. Such state-of-the-practice actions are adopted from successful past missions since they have proven reliable over the years.
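One simple way such ground-designated keep-in/keep-out zones might constrain onboard choices is as a mask over candidate waypoints; the grid encoding below is purely illustrative:

```python
import numpy as np

# Illustrative keep-in / keep-out masking of candidate waypoints.
# Ground operators mark unsafe cells from orbital data; the onboard
# planner only considers waypoints inside the safe corridor.

safe = np.ones((5, 5), dtype=bool)
safe[0, :] = False          # keep-out zone marked from orbital imagery
safe[:, 4] = False          # another keep-out strip

candidates = [(0, 1), (2, 2), (3, 4), (4, 0)]
allowed = [c for c in candidates if safe[c]]
print(allowed)              # [(2, 2), (4, 0)]
```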

Figure 6. A perspective on the state of the practice and a vision of the future

Therefore, today’s sequence-driven operations heavily rely on human cognizance to establish situational awareness and command the spacecraft. For most space missions, in particular flyby and orbital, operators are able to plan activities days or weeks in advance because the physics of the problem are well understood and reasonably modeled. Spacecraft that rely on orbital dynamics, such as Galileo and Cassini, were able to execute pre-programmed sequences and manage resources for up to 14 days autonomously (between two scheduled Deep Space Network antenna passes) with unattended operations [ 32 ]. This also includes recognition of onboard failures and execution of appropriate recovery operations. In contrast, for surface operations of the Mars rovers or operations in the vicinity of a small body, the ability to predict outcomes of activities diminishes to short time horizons. In situ robotic spacecraft operating in poorly constrained or dynamic environments (e.g., Venus, Mars, or Titan) have to rely on local assessments of the situation at hand in order to take action.

Future Challenges

The adoption of increasing levels of autonomy faces both technical and non-technical challenges.

Practices that resulted in past successes do not necessarily imply their suitability for future missions, where the spacecraft has to operate in regimes of much higher uncertainties and poorly modeled physics: namely, the physical interaction with unknown and never-before-visited surfaces. For example, Mars missions have had the benefit of a priori knowledge of the Martian atmosphere and surface from prior missions, which was heavily used in developing models for testing the autonomous capabilities of entry, descent, and landing [ 33 ] as well as surface navigation [ 34 , 35 ]. Future missions that would explore unknown worlds need to handle the large uncertainties and the limited a priori knowledge of the environment in which they must operate. The topographies of such surfaces may be unknown at the scale of the platform (e.g., as of this writing, the best available resolution of Europa’s surface is 6 m/pixel, well below what is necessary for the scale of a future landing platform [ 36 ]), and the material properties are often poorly constrained [ 27 ] (e.g., the Curiosity rover encountered uncharacteristic terrain with high sinkage in Hidden Valley [ 37 ]) or exhibit unexpected behavior (e.g., the Spirit rover unexpectedly broke through a duricrust layer and became embedded and immobilized in unconsolidated fines [ 38 ]; the Phoenix lander’s scooped sample appeared to congeal, possibly as a result of solar radiation impinging on the sample, causing it to stick to the scoop during delivery to the instruments [ 8 ]). Future missions would have to operate for extended periods of time depending on available communications and in light of faults and failures that they will inevitably experience during their treacherous exploration of planetary bodies. Being able to adapt and learn how to operate in these harsh environments is becoming an important aspect of deep-space exploration.

The major technical challenges for autonomy are:

  • Developing the autonomous functions and associated system functions to the necessary level of maturity,
  • Having adequate sensing and computing,
  • Having adequate models and frameworks,
  • Designing platforms with sufficient flexibility to handle large uncertainties in their interaction with the environment yet meet power, thermal, computation, and communication constraints, and
  • Having metrics and tools to verify and validate autonomous capabilities [ 39 ].

Among the non-technical barriers are strategic and programmatic challenges associated with the current acceptable risk posture related to mission cost, the necessary reliance on heritage for competed missions to fit within cost caps and to minimize risk, and the limited demand for autonomy from currently formulated missions, which are often conceived based on the state of the practice rather than what could become viable. The current paradigm of ground-in-the-loop operations does not scale well to operating multiple coordinated spacecraft in large formations [ 20 ]. While the cost of operations would initially be higher for autonomous systems, as the capability matures and becomes standard practice, the expected cost would eventually converge to a steady state, resulting in potentially substantial reductions in operational cost. Other non-technical challenges are related to changes in people’s roles and a sense of job displacement and loss of control.

Potential Benefits

Autonomy would enable (a) exploring new destinations with unknown or dynamic environments, (b) increasing productivity of in situ operations, (c) increasing robustness and enabling graceful degradation of spacecraft operation, and (d) reducing operations cost and enhancing operability.

Exploration: Autonomy enables greater access to regions on planetary bodies that would otherwise be inaccessible, such as the liquid oceans of icy moons. It also enables new observations in the presence of unpredictable events that require real-time decision-making and execution, such as close sampling of Enceladus’ plumes, landing on Europa, and accessing the surfaces of small bodies.

Productivity: Demand for greater productivity in surface operations, greater diversity in science observations, and higher-quality observations is expected in the coming years. Autonomy increases productivity by allowing the spacecraft to do more in situ science within the constraints of the mission by monitoring resources and managing activities. It can also assist scientists in identifying and accessing more abundant and more interesting science targets. For example, a study of the Intrepid lunar rover concept concluded that the mission requires a sophisticated degree of autonomy to achieve its long-distance mobility and instrument placements [ 7 ]. A separate study showed that a rover operated by campaign intent rather than sequenced activities had an 80% reduction in sols required to complete a campaign and a 267% increase in locations surveyed per week [ 40 ].

Robustness: Spacecraft that are self-aware and are able to self-diagnose and solve problems before they escalate into a larger failure event would increase robustness and decrease mission risk. For example, missions that baseline solar-electric propulsion are at higher risk of missing their targets if a safing event occurs during cruise [ 41 •]. On the Dawn spacecraft, a 4-day period of missed thrust resulted in a 26-day delay to the Ceres orbit [ 42 ].

Cost-effectiveness and operability: Autonomy would simplify operations and therefore reduce operational costs. This frees up spacecraft resources for more science observations, reduces the tedious parts of ground operations, and potentially scales to operating multiple spacecraft.

Autonomous Robotic Capabilities in Past and Current Space Missions

To understand where we are going, it is useful to review in detail where we have been. This section provides an overview of notable prior missions that have contributed to autonomy progress for space applications.

Over the past two decades, the world has witnessed the impact of robotics for the surface exploration of Mars. This includes the first 100 m of Sojourner’s tracks on the red planet, Spirit and Opportunity’s exploration of dramatically different regions in different hemispheres, and Curiosity’s climbing of Mount Sharp. Complementing Spirit and Opportunity’s discovery of evidence that water once flowed on the Martian surface, the Phoenix mission used its robotic arm to sample water ice deposits in the shallow subsurface of the northern polar region. The Curiosity rover has investigated Martian geology in more detail than its predecessors, using its mobility and manipulator to drill and transfer drill cuttings to its instrument suite. It found complex organic molecules in the Martian regolith and detected seasonal fluctuations of low methane concentrations in its atmosphere. Most recently, the InSight mission used its robotic arm to place two European instruments on the Martian surface: a high-precision seismometer that detected the first-ever Marsquake and a heat-flow-sensing mole intended to penetrate meters below the surface.

In-Space Robotic Operations

In 1999, the Remote Agent Experiment aboard the Deep Space 1 mission demonstrated goal-directed operations through onboard planning and execution and model-based fault diagnosis and recovery, operating two separate experiments for 2 days and later for 5 consecutive days [ 43 , 44 ]. The spacecraft demonstrated its ability to respond to high-level goals by generating and executing plans onboard, under the watchful eye of model-based fault diagnosis and recovery software. On the same mission, autonomous spacecraft navigation was demonstrated during 3 months of cruise for the 36-month-long mission. The spacecraft also executed a 30-min autonomous flyby, demonstrating onboard asteroid detection, orbit update, and low-thrust trajectory-correction maneuvers.

In the decade to follow, the Stardust mission demonstrated a similar flyby feat of one asteroid and two comets [ 45 ]. Between 2005 and 2010, the Deep Impact mission conducted an autonomous 2-h terminal guidance of a comet impactor and separately a flyby that tracked two comets [ 46 ]. It demonstrated detecting the target body, updating the relative orbits, and commanding the spacecraft using low-thrust maneuvers. Autonomy has also been used to aid science operations of Earth-orbiting missions such as the Earth-Observing-1 spacecraft, which used onboard feature and cloud detection to retarget subsequent observations for identifying regions of change or of interest [ 47 ]. The IPEX mission used autonomous image acquisition and data processing for downlink [ 48 ]. Most recently, the ASTERIA spacecraft transitioned its commanding from time-based sequences to task networks and demonstrated onboard orbit determination using passive imaging in Low Earth Orbit (LEO) without GPS [ 49 ].

Small-Body Proximity Operations

Operating in proximity of and on small bodies has proven particularly time consuming and challenging. To date, only five missions have attempted to operate for extended periods of time in close proximity to such small bodies: NEAR Shoemaker, Rosetta, Hayabusa, Hayabusa2, and OSIRIS-REx [ 45 , 50 , 49 , 52 ]. Many factors make operating around small bodies particularly challenging: the microgravity of such bodies, debris that can be lofted off their surfaces, their irregular topography with correspondingly sharp shadows and occlusions, and their unconstrained surface properties. The difficulties of reaching the surface, collecting samples, and returning these samples stem from uncertainties of the unknown environment and the dynamic interaction with a low-gravity body. The deployment and access to the surface by Hayabusa’s MINERVAs [ 53 ] and Rosetta’s Philae [ 54 ] highlight some of these challenges and, together with OSIRIS-REx [ 55 ], underscore our limited knowledge of the surface properties. Because of the uncertainty associated with such knowledge, missions to small bodies typically rely on some degree of autonomy.

Landed Missions

During entry, descent, and landing (EDL) on Mars, command and control can only occur autonomously due to the communication delay and constraints. Landing on Mars is particularly challenging because of its thin atmosphere and the need to decelerate to a near-zero terminal descent velocity with limited fuel, requiring guided entry for deceleration to velocities where parachutes may be used effectively. Uncertainties arise with parachutes due to wind that contributes to a lateral velocity of the descending spacecraft. As a result, in 2004, the Mars Exploration Rover (MER) missions used onboard autonomy [ 56 ] to estimate lateral velocity from descent images and correct it if necessary.

The Chang’e 4 lunar mission carrying the Yutu-2 rover demonstrated high-precision autonomous landing in complex terrain on the lunar far side. The spacecraft used terrain relative navigation as well as hazard assessment and avoidance to land in the absence of radiometric data [ 57 ].

In addition to the Martian and lunar landings, several missions have touched the surfaces of small bodies autonomously. In 2005, the Hayabusa mission demonstrated autonomous terminal descent of the last 50 m toward a near-surface goal for sample collection using laser ranging (at < 100 m) to adjust altitude and attitude [ 58 ]. This capability was also employed on the 2019 Hayabusa2 mission, where the mission used a hybrid ground/onboard terminal-descent with ground controlling the boresight approach while the onboard system controlled the lateral motion for the final 50 m. In 2020, the OSIRIS-REx mission used terrain-relative navigation for its touch-and-go maneuver for sample acquisition. Using a ground-generated shape-model, the spacecraft matched natural features to the image renderings from the generated model to approach the body for its touch-and-go sampling. This segment was executed autonomously but with ground oversight.

Surface Missions

Surface contact and interaction is typically needed for instrument placement and sampling operations in scientific exploration. The Mars Exploration Rovers demonstrated autonomous approach and instrument placement on a target selected from several meters away [ 59 , 60 ]. The OSIRIS-REx mission captured samples from the surface of asteroid Bennu using its 3.4-m extended robotic arm in a touch-and-go maneuver that penetrated to a depth of ~50 cm, well beyond the expected depth for the sample capture.

Surface mobility greatly expands the value of a landed mission by enabling contact and interaction with a broader part of the surface. To achieve surface mobility safely, every Mars rover mission has hosted some form of autonomous surface navigation. In 1997, the Sojourner rover of the Mars Pathfinder mission demonstrated autonomous obstacle avoidance using laser striping together with a camera to detect untraversable rocks (positive geometric hazards). It then used bang-bang control of brushed motors to drive and steer to avoid hazards along its path and reach its designated goal. The Mars Exploration Rovers, Spirit and Opportunity, and the Mars Science Laboratory Curiosity rover used a more sophisticated autonomous navigation algorithm, relying on dense stereo mapping from their body- and mast-mounted cameras to assess terrain hazards. Algorithms processed three-dimensional point clouds into a grid map, estimating the slope, height differences, and roughness of the rover’s footprint across each terrain patch [ 34 ].
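The flavor of such grid-map traversability analysis can be sketched in a few lines of numpy. The heights, cell size, and hazard thresholds below are invented for illustration, not the rovers’ actual parameters; the input is assumed to be a small height grid already built from stereo point clouds.

```python
import numpy as np

# Hedged sketch of grid-map traversability in the spirit described above:
# reduce a terrain patch (a height grid built from stereo point clouds)
# to slope, height range, and roughness statistics. All numbers invented.

def patch_stats(heights, cell_size_m=0.2):
    dz_dy, dz_dx = np.gradient(heights, cell_size_m)   # finite differences
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    return {
        "max_slope_deg": float(slope_deg.max()),
        "height_range_m": float(heights.max() - heights.min()),
        "roughness_m": float(np.std(heights)),          # spread about the mean
    }

heights = np.array([[0.00, 0.02, 0.05],
                    [0.01, 0.10, 0.06],   # a small bump in the middle
                    [0.02, 0.03, 0.04]])
stats = patch_stats(heights)
hazardous = stats["max_slope_deg"] > 20 or stats["height_range_m"] > 0.3
print(stats, "hazard" if hazardous else "traversable")
```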

The Mars 2020 Perseverance rover uses an even more sophisticated algorithm to evaluate a safe traverse path. It improves the stereo sensing and significantly speeds up its processing using dedicated FPGAs, evaluating the tracks of the wheels across the terrain to assess traversability. For path planning, given the computationally intensive calculation of assessing body-terrain collision when placing a passively articulated rover suspension along the terrain path, a conservative approximation is used to simplify the computation while preserving a safe, collision-free path. In addition to the local cost evaluation of the rover’s traverse across the nearby terrain, a global cost is calculated from orbital and previously observed rover data to determine the proper action to take.
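The local/global split can be sketched with a minimal grid search that blends the two cost layers. The costs, weights, and Dijkstra-style search below are illustrative, not the rover’s actual planner:

```python
import heapq

# Minimal Dijkstra over a grid whose cell cost blends a local
# traversability estimate with a global, orbit-derived cost, echoing the
# local/global split described above. Costs and weights are illustrative.

local_cost  = [[1, 1, 5], [1, 9, 1], [1, 1, 1]]   # e.g., from onboard stereo
global_cost = [[1, 2, 2], [1, 9, 1], [2, 1, 1]]   # e.g., from orbital data

def plan(start, goal, w_local=0.7, w_global=0.3):
    rows, cols = len(local_cost), len(local_cost[0])
    best = {start: 0.0}
    frontier = [(0.0, start)]
    while frontier:
        cost, (r, c) = heapq.heappop(frontier)
        if (r, c) == goal:
            return cost
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                step = w_local * local_cost[nr][nc] + w_global * global_cost[nr][nc]
                new = cost + step
                if new < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = new
                    heapq.heappush(frontier, (new, (nr, nc)))
    return float("inf")

print(plan((0, 0), (2, 2)))  # cost of the cheapest blended-cost path
```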

The Mars rovers have traversed distances of hundreds of meters autonomously, well beyond what has been visible in imagery used by ground operators (i.e., over the horizon driving). Over one weekend, the Opportunity rover drove 200 m in a multi-sol autonomous driving sequence.

Future Mission Possibilities

Various Possible Directions for Autonomy

This is only a prelude to what is anticipated. Ongoing mission concept studies and research programs are investigating a range of robotic systems that would explore the surfaces of other planetary bodies. These include robotic arms that capture and analyze samples from Europa’s surface. A rotorcraft has completed multiple powered flights through the thin Martian atmosphere, and another, larger one is being built to explore Titan’s surface, leveraging its thick atmosphere [ 11 , 12 ]. Probes are being studied to reach the oceans of icy worlds, either getting through kilometers of cryogenic ice or by weaving their way through vents and crevasses of Enceladus’ tiger stripes, the site of plumes in the moon’s southern region [ 61 ]. Lunar rovers are being studied to cover thousands of kilometers to explore more disparate regions near the lunar equator [ 7 ] and in the polar regions.

These increasingly rich forms of explorers will require a greater degree of autonomy, in particular, for ocean worlds and remote destinations, where the surfaces of target bodies have never been visited before and where communications and power resources are more constrained than those on the Moon and Mars. While such explorers are likely to be heterogeneous in their form, the foundational elements of autonomy might be shared among such platforms. It is precisely these foundational elements that would need to be advanced to enable robotic systems to effectively conduct their complex missions, in spite of the limited knowledge and large uncertainties of the harsh environments to be explored.

Given the aforementioned challenges, how can we take the next major step in advancing autonomy? To do so, we consider the key gap, which is to reliably operate in situ in a partially known and harsh environment and under inevitable system degradations. This would drive the maturation of the needed function- and system-level autonomy capabilities in an integrated architecture to principally handle a range of conditions. A number of such autonomy challenges have been captured from a NASA Autonomy Workshop by the Science Mission Directorate in 2018 [ 62 ].

Proposed Next Direction: Autonomous Small Body Explorer

One example that could provide an adequately challenging near-term opportunity for advancing robotics and autonomous systems is using an affordable SmallSat (a spacecraft under 180 kg, with standardized form factors at smaller sizes such as 6U or 12U) to travel to, approach, land, and operate on the surface of a near-Earth object (NEO) autonomously. The SmallSat would be designed to operate using high-level goals from the ground, which would also provide operational oversight. Frequently asked questions and answers for this concept are discussed below.

Why are NEOs compelling for exploration? The exploration of NEOs is important for four thrusts: science, human exploration, in situ resource utilization, and planetary defense. For example, previous missions, Hayabusa and Hayabusa2, were primarily science focused. They largely operated with ground in the loop and their surface operational capabilities were, therefore, limited at the time. We envision autonomous robotic access to the surface of NEOs that would expand on these successes and would have substantial feed-forward potential to enable access to more remote bodies such as comets, asteroids, centaurs, trans-Neptunian bodies, and Kuiper-belt objects. Small bodies are abundant and diverse in their composition and origin and are found across the solar system and out to the Oort Cloud [ 50 ].

Why are NEOs well-suited targets to advance autonomy? NEOs embody many of the challenges that would be representative of even more remote and extreme destinations, while remaining accessible by SmallSats. Given their diversity, their environments are relatively unknown a priori and the interaction of a spacecraft near or onto their surface would be dynamic, given their microgravity. Further, such a mission cannot be easily emulated in a terrestrial analog environment and the utility of simulation is limited by the unknown characteristics of the environment to be encountered.

Why is autonomy enabling for small bodies? Autonomy would enable greater access by reducing operations cost and would scale to allow reaching far more diverse bodies than the current ground-in-the-loop exploration paradigm. With onboard situational awareness, autonomy enables closer flybys, more sophisticated maneuvers during proximity operations, and safe landing and relocating on the surface. Operating near, on, or inside these bodies requires autonomy because of their largely unknown, highly rugged topographies and because of the dynamic nature of the interaction between the spacecraft and the body.

Approaching, landing, and reaching designated targets on a NEO requires technical advances in computer vision, machine learning, small spacecraft, and surface mobility. An autonomous mission with limited a priori knowledge of the body would establish, during approach, estimates of the body’s rotation rate, rotation axis, shape, gravity model, local surface topography, and safe landing sites using onboard sensing, computing, and algorithms. An onboard system has the advantage of higher image acquisition rates that would be advantageous for the computer vision algorithms and would result in a much-reduced operations team, when compared to ground operations that would be subject to limited communication bandwidth. Machine learning would be able to encode complex models and handle large uncertainties, such as identifying and tracking body-surface landmarks across large scale changes and varying lighting conditions during tens of thousands of kilometers of approach. Furthermore, machine learning would handle complex dynamic interactions with the surface, whose geo-physical properties are not known a priori, to enable effective mobility and manipulation. Such an autonomous capability, once established, would be more broadly applicable to planetary bodies with unknown motions/rotations, topographies, and atmospheric conditions, should the latter exist.
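Landmark identification and tracking of the kind described above is often built on feature detection and matching. The hedged OpenCV sketch below demonstrates only the mechanics on synthetic frames; a real approach-phase system must cope with enormous scale and lighting changes.

```python
import cv2
import numpy as np

# Hedged sketch of landmark matching between two frames of an approaching
# body, using ORB features. Synthetic images stand in for real imagery.

frame1 = np.random.default_rng(0).integers(0, 255, (240, 240), np.uint8)
frame2 = np.roll(frame1, 3, axis=1)   # crude stand-in for apparent motion

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(frame1, None)
kp2, des2 = orb.detectAndCompute(frame2, None)

if des1 is None or des2 is None:
    raise RuntimeError("no features detected in one of the frames")

# Hamming-distance brute-force matching with cross-checking for robustness.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate landmark matches")
```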

Such a scenario has clear success metrics for each stage of increasing difficulty. During cruise, trajectory correction maneuvers would guide the craft to the approach point, when the target becomes detectable (subpixel in size, appearing as a point-spread function) in the camera’s narrow field of view. The approach is a particularly challenging phase whose success is reaching a hover point at a safe distance, having established the body parameters (trajectory, rotation, and shape). The subsequent phase would involve successful landing site selection, guidance, and safe landing. For a NEO, such a maneuver would have the flexibility of an abort and retry given the microgravity of the body. Mobility on the surface to target locations and the ability to manipulate the shallow regolith surface to acquire measurements would constitute the last phase and success metric. While all operations would be autonomously executed, they would respond to goals set by scientists and ground operators, and the performance of the craft would be continually monitored by ground operators as the capability is proven. The last success metric is the downlink of key information to trace and analyze the onboard decisions that the spacecraft has been making all along.

Results from an earlier analysis of both the accessibility and feasibility of such a scenario showed promise [63, 64]. To simplify access to the surface, we would design the spacecraft to self-right and operate from any stable state, where it can hop and tumble as demonstrated in parabolic flight, possibly using cold gas thrusters in lieu of reaction wheels [15]. Once on the surface, we assume a limited lifetime to reduce constraints associated with large deployable solar panels. In addition to guiding the spacecraft during landing, micro-thrusters could also relocate the platform to different sites on the body. Miniaturized manipulators developed for CubeSats could enable such a platform to manipulate the surface for sampling and other measurements [65]. Such a scenario could be extended to multi-spacecraft missions. A back-of-the-envelope calculation of the microgravity regime that makes such hopping viable appears below.
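
As a rough, worked illustration of why hopping mobility is viable in microgravity, and why actuation must be gentle, the sketch below computes the surface escape velocity of a homogeneous spherical body; the 250-m radius and 2 g/cm³ density are assumed values, not parameters from the cited work.

```python
# Back-of-the-envelope sketch: escape velocity of a small homogeneous body.
# Body size and density are illustrative values, not mission parameters.
import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(radius_m, density_kg_m3):
    """Escape velocity at the surface of a homogeneous spherical body."""
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m**3
    return math.sqrt(2.0 * G * mass / radius_m)

v_esc = escape_velocity(radius_m=250.0, density_kg_m3=2000.0)
print(f"escape velocity: {v_esc * 100:.1f} cm/s")    # ~26 cm/s

# A hop must stay well below escape speed or the craft drifts off the body;
# a conservative planner might cap launch speed at, say, half of v_esc.
print(f"max hop speed:   {0.5 * v_esc * 100:.1f} cm/s")
```

At roughly 26 cm/s for a body of this size, even a modest push risks escaping the body entirely, which is why hop speeds of centimeters per second and fine thruster control matter.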

In addition to maturing these functions, this scenario would drive the development of an architecture that integrates function- and system-level elements, allowing cross-domain models to interact at the proper fidelity levels to execute a full and adequately challenging mission, with provisions for ground oversight and retries. The relatively low cost of such a technology demonstration would allow a more aggressive risk posture, substantially advancing autonomous robotic capabilities. Making the architecture and algorithms widely available would lower the barrier to entry for universities, creating greater opportunities to send SmallSat missions to diverse NEOs.

Concluding Thoughts

In this paper, we have provided a broad overview of autonomy advances for robotic spacecraft, summarized the state of the practice, identified challenges, and shared potential benefits of greater adoption. We presented an argument for why sequence-driven, ground-controlled operations, the paradigm behind numerous successful missions, would not be well suited for future exploration, where missions must operate in situ, physically interacting with bodies or their atmospheres, in poorly constrained environments, with limited a priori knowledge, and under harsh environmental conditions. We examined experiences from missions over the past two decades and highlighted how unanticipated situations arise that current systems cannot handle without the expertise of ground operators and their tools. Future autonomous systems would have to handle a wide range of conditions on their own if they were to operate in more remote destinations. Despite several demonstrations of autonomous capabilities, the state of the practice remains largely reliant on ground-in-the-loop operations.

The need for autonomy is driven by two main factors: mission objectives and environmental constraints. Mission objectives are typically set to reduce the need for new technologies, but environmental constraints will eventually drive the need for autonomy. Broader adoption of autonomy is constrained not only by technical barriers, such as the advancement of algorithms, the integration of cross-domain capabilities, and the verification and validation of such capabilities; it is also driven by non-technical factors related to acceptable mission risk, cost, and changes in the roles of the humans interacting with the spacecraft. We articulated why future missions would require more autonomy: our ability to predict the execution of onboard activities diminishes sharply for in situ missions, as evidenced by two decades of Mars surface exploration. We highlighted key autonomy advances across different mission phases: in-space, proximity operations, landed, and surface missions. We concluded by sharing a scenario for sending an autonomous spacecraft to a NEO to approach, land on, move across, and sample its surface, an adequately challenging scenario to substantially advance both function-level and system-level autonomy. Such a scenario could be matured and demonstrated using SmallSats and has clear success metrics for each mission phase.

As our current missions discover a multitude of planets around other stars, we are compelled to ask what it would take, someday, to explore those exoplanets. A mission to perform in situ exploration of even the nearest exoplanetary system is a daunting yet exciting challenge. Such a mission will undoubtedly require a sophisticated level of autonomy together with major advances in power, propulsion, and other spacecraft disciplines. This is a small step toward a goal we only dare to dream about.

References

Papers of particular interest, published recently, have been highlighted as: • Of importance; •• Of major importance.

Sagan C, Reddy R. Machine intelligence and robotics: report of the NASA Study Group, Final Report 715-32. Carnegie Mellon University; 1980. Retrieved 7 September 2011. http://www.rr.cs.cmu.edu/NASA%20Sagan%20Report.pdf

NASA Autonomous Systems – Systems Capability Leadership Team. Autonomous systems taxonomy. NASA Technical Reports Server. Document ID 20180003082. 14 May 2018. https://ntrs.nasa.gov/citations/20180003082

NASA Systems Engineering Handbook, U.S. Government Printing Office, NASA/SP-2016-6105 Rev2, 2018

The Voyager Mission: https://voyager.jpl.nasa.gov/

Berry K, Sutter B, May A, Williams K, Barbee BW, Beckman M, Williams B. OSIRIS-REx touch-and-go (TAG) mission design and analysis, 36th Annual AAS Guidance and Control Conference, 2013.

Dooley J. Mission concept for a Europa Lander. Proceedings from the 2018 IEEE Aerospace Conference. Big Sky, MT. 2018:3–10. https://doi.org/10.1109/AERO.2018.8396518 .

Elliott J, Robinson M, et al. Intrepid planetary mission concept overview. International Planetary Probe Workshop (IPPW 2020), Webinar Session 6, 9 July 2020, Monterey, CA. https://www.ippw2020.org/webinar-series

Bonitz R, Shiraishi L, Robinson M, Carsten J, Volpe R, Trebi-Ollennu A, et al. The Phoenix Mars Lander robotic arm. Proceedings of the 2009 IEEE Aerospace Conference, Big Sky, MT; 2009.


Kenda B, et al. Subsurface structure at the InSight landing site from compliance measurements by seismic and meteorological experiments. JGR Planets. American Geophysical Union. 18 May 2020. https://doi.org/10.1029/2020JE006387

The Mars 2020 mission, https://mars.nasa.gov/mars2020/

Balaram B, Canham T, Duncan C, Grip HF, Johnson W, Maki J, Quon A, Stern R, Zhu D. Mars helicopter technology demonstrator. AIAA Flight Mechanics Conference, 2018.

What is Dragonfly? Johns Hopkins APL. https://dragonfly.jhuapl.edu/What-Is-Dragonfly/

Hall JL, Cameron J, Pauken M, Izraelevitz J, Dominguez MW, Wehage KT. Altitude-controlled light gas balloons for Venus and Titan exploration, AIAA Aviation Forum, 2019

Wilcox B, Jones R. The MUSES-CN nanorover mission and related technology. Proceedings of the IEEE Aerospace Conference 2000, Big Sky, MT; 2000. Vol. 7, pp. 287–95. https://doi.org/10.1109/AERO.2000.879296

Hockman B, Reid RG, Nesnas IA, Pavone M. Experimental methods for mobility and surface operations of microgravity robots. International Symposium on Experimental Robotics, Tokyo, Japan, 2016.

Hebert P, Bajracharya M, Ma J, Hudson N, Aydemir A, Reid J, et al. Mobile manipulation and mobility as manipulation: design and algorithms of RoboSimian. J Field Robotics. 2015;32:255–74. https://doi.org/10.1002/rob.21566


Wilcox BH, Litwin T, Biesiadecki J, Matthews J, Heverly M, Morrison J, et al. ATHLETE: a cargo handling and manipulation robot for the Moon. Journal of Field Robotics. 2007;24(5):421–34. Wiley InterScience Publishing. https://doi.org/10.1002/rob.20193 .

Nesnas IA, Matthews J, Abad-Manterola P, Burdick JW, Edlund J, Morrison J, Peters R, Tanner M, Miyake R, Solish B. Axel and DuAxel rovers for the sustainable exploration of extreme terrains. Journal of Field Robotics. February 2012.

Zimmerman W, Bonitz R, Feldman J. Cryobot: an ice penetrating robotic vehicle for Mars and Europa, IEEE Aerospace Conference Proceedings (Cat. No.01TH8542), Big Sky, MT, USA, 2001, pp. 1/311-1/323 vol.1, doi: https://doi.org/10.1109/AERO.2001.931722 .

Morgan D, Chung SJ, Hadaegh FY. Model predictive control of swarms of spacecraft using sequential convex programming, Journal of Guidance, Control and Dynamics, Vol. 37, No. 6, 2014

Boehnhardt H, et al. The Philae lander mission and science overview. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 20 May 2017;375(2097). https://doi.org/10.1098/rsta.2016.0248

Hand E. Phoenix lander tastes its first ice. Nature. 1 August 2008. https://doi.org/10.1038/news.2008.1002

Foust J. InSight mole making slow progress into Martian surface. Space News. 5 May 2020. https://spacenews.com/insight-mole-making-slow-progress-into-martian-surface/

Wall M. NASA’s OSIRIS-REx is overflowing with asteroid samples. Scientific American. 26 October 2020. https://www.scientificamerican.com/article/nasas-osiris-rex-is-overflowing-with-asteroid-samples/

Epp CD, Robertson EA, Brady T. Autonomous Landing and Hazard Avoidance Technology (ALHAT). Proceedings from the IEEE Aerospace Conference. Big Sky, MT. 2008 https://doi.org/10.1109/AERO.2008.4526297 .

NASA SpaceTech. NASA technology enables precision landing without a pilot. 17 September 2020. https://www.nasa.gov/directorates/spacetech/NASA_Technology_Enables_Precision_Landing_Without_a_Pilot

Chhaniyara S, Brunskill C, Yeomans B, Matthews MC, Saaj C, Ransom S, Richter L. Terrain trafficability analysis and soil mechanical property identification for planetary rovers: a survey. Journal of Terramechanics. 2012;49(2):115–28.

Matthies LH, Hall JL, Kennedy BA, Moreland SJ, Nayar HD, Nesnas IA, Sauder J, Zacny KA. Robotics technology for in situ mobility and sampling. Planetary Science Decadal 2020 White Paper, 2020. https://www.lpi.usra.edu/decadal_whitepaper_proposals/

Starek JA, Açıkmeşe B, Nesnas IA, Pavone M. Spacecraft autonomy challenges for next-generation space missions. Advances in Control System Technology for Aerospace Applications. 2016:1–48. Captures the needs, state of the art, and future directions in robotics and autonomous systems, including entry, descent, and landing; above-surface mobility; extreme-terrain mobility; and microgravity mobility for planetary exploration.

Rankin A, Maimone M, Biesiadecki J, Patel N, Levine D, Toupet O. Driving Curiosity: Mars rover mobility trends during the first seven years. IEEE Aerospace Conference, March 2020. Findings from this study capture the challenges and experiences of driving the Curiosity rover on Mars over its first seven years, highlighting how our ability to anticipate and predict diminishes when interacting with a not-well-characterized environment, and characterizing the mission’s productivity.

Morgan PS. Fault protection techniques in JPL spacecraft, Jet Propulsion Laboratory, National Aeronautics and Space Administration, http://hdl.handle.net/2014/39531 , 2005

Sollazzo C, Rakiewicz J, Wills RD. Cassini-Huygens: mission operations. Control Engineering Practice Elsevier. 1995;3(11):1631–40. https://doi.org/10.1016/0967-0661(95)00174-S .

Braun RD, Manning RM. Mars exploration entry, descent and landing challenges, 2006 IEEE Aerospace Conference. Big Sky, MT. 2006:18. https://doi.org/10.1109/AERO.2006.1655790 .

Maimone MW, Biesiadecki JJ, Tunstel E, Cheng Y, Leger C. Surface navigation and mobility intelligence on the Mars exploration rovers, Intelligence for space robotics, 2006

Biesiadecki JJ, Leger C, Maimone MW. Tradeoffs between directed and autonomous driving on the Mars Exploration Rovers. International Symposium on Robotics Research (ISRR), 2005.

Greeley R, Figueredo PH, Williams DA, Chuang FC, Klemaszewski JE, Kadel SD, Prockter LM, Pappalardo RT, Head JW III, Collins GC, Spaun NA, Sullivan RJ, Moore JM, Senske DA, Tufts BR, Johnson TV, Belton MJ, Tanaka KL. Geologic mapping of Europa. Journal of Geophysical Research. 25 September 2000;105(E9):22559–78.

Arvidson RE, Iagnemma KD, Maimone MW, Fraeman AA, Zhou F, Heverly MC, et al. Mars Science Laboratory Curiosity rover megaripple crossings up to sol 710 in Gale crater. Journal of Field Robotics. May 2017;34(3):495–518.

Arvidson RE, Bell JF, Bellutta P, Cabrol NA, Catalano JG, Cohen J, et al. Spirit Mars rover mission: overview and selected results from the northern home plate winter haven to the side of Scamander crater. Journal of Geophysical Research: Planets. 2010;115(9):1–19. https://doi.org/10.1029/2010je003633 .

Koopman P, Wagner M. Autonomous vehicle safety: an interdisciplinary challenge. IEEE Intell Transp Syst Mag. 2017 March;9(1):90–6. https://doi.org/10.1109/MITS.2016.2583491 .

Gaines D, Doran G, Paton M, Rothrock B, Russino J, Mackey R, et al. Self-reliant rovers for increased mission productivity. Journal of Field Robotics. 2020;37(7):1171–96. https://doi.org/10.1002/rob.21979 .

Amini R, Azari A, Bhaskaran S, Beauchamp P, Castillo-Rogez J, Castano R, Chung S, et al. Advancing the scientific frontier with increasingly autonomous systems, Planetary Science Decadal 2020 White Paper, 2020. This paper captures both technical and programmatic challenges to overcome in order to fully realize the potential of autonomous systems.

Fieseler PD, Taylor J, Klemm RW. Dawn spacecraft performance: resource utilization and environmental effects during an 11-year mission. Journal of Spacecraft and Rockets. 30 December 2019;57(1). https://doi.org/10.2514/1.A34521

Nayak P, et al. Validating the DS1 Remote Agent experiment. Proceedings of the Fifth International Symposium on Artificial Intelligence, Robotics and Automation in Space (iSAIRAS ’99), 1-3 June 1999, ESTEC, Noordwijk, the Netherlands. Edited by M. Perry. ESA SP-440. Paris: European Space Agency; 1999. p. 349. https://www.researchgate.net/publication/2828930

Bernard D, Dorais G, Gamble E, Kanefsky B, Kurien J, Man G, Millar W, Muscettola N, Nayak P, Rajan K, et al. Spacecraft autonomy flight experience: the DS1 Remote Agent experiment. AIAA Space Technology Conference and Exposition (AIAA-99-4512), 28-30 September 1999, Albuquerque, NM. https://doi.org/10.2514/6.1999-4512

Bhaskaran S. Autonomous navigation for deep space missions. AIAA SpaceOps 2012 Conference, 11-15 June 2012, Stockholm, Sweden. https://doi.org/10.2514/6.2012-1267135

Kubitschek DG, Mastrodemos N, Werner RA, Kennedy BM, Synnott SP, Null GW, Bhaskaran S, Riedel JE, Vaughan AT. Deep Impact autonomous navigation: the trials of targeting the unknown. Advances in the Astronautical Sciences, American Astronautical Society (AAS 06-081), 4-8 February 2006, Breckenridge, CO. http://hdl.handle.net/2014/38755

Chien S, et al. The EO-1 Autonomous Science Craft. Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS ’04). Vol. 1:420-427. 2004. https://doi.org/10.1109/AAMAS.2004.262 .

Chien S, Doubleday J, Thompson D, Wagstaff K, Bellardo J, Francis C, Baumgarten E, Williams A, Yee E, Fluitt D, et al. Onboard autonomy on the Intelligent Payload EXperiment (IPEX) Cubesat mission as a pathfinder for the proposed HyspIRI mission intelligent payload module. Proceedings of the 12th International Symposium in Artificial Intelligence, Robotics and Automation in Space (ISAIRAS 2014). Montreal, Canada, 2014 June. http://robotics.estec.esa.int//i-SAIRAS/isairas2014/Data/Session%207c/ISAIRAS_FinalPaper_0013.pdf

Fesq L, et al. Extended mission technology demonstrations using the ASTERIA spacecraft. IEEE Aerospace Conference, 2019.

Nesnas IA, Swindle T, Castillo J, Bhaskaran S, Gump D, Maleki L, McMahon J, Mercer C, Partridge H, Pavone M, Rivkin A, Touchton B, Kimchi G, Tan F, Jones-Bateman J, Hockman B, Gervits F. Small bodies design reference mission reports. Workshop on Autonomy for Future NASA Science Missions, 2018. https://science.nasa.gov/technology/2018-autonomy-workshop

Bhaskaran S, Nandi S, Broschart S, Wallace M, Cangahuala LA, Olson C. Small body landings using autonomous onboard optical navigation. J Astronaut Sci. 2011;58(3):409–27. https://doi.org/10.1007/BF03321177 .

Lorenz DA, Olds R, May A, Mario C, Perry ME, Palmer EE, Daly M. Lessons learned from OSIRIS-REx autonomous navigation using natural feature tracking. Proceedings of the 2017 IEEE Aerospace Conference, 4-11 March 2017, Big Sky, MT. https://doi.org/10.1109/AERO.2017.7943684

Yoshimitsu T, Kubota T, Nakatani I. Operation of MINERVA rover in Hayabusa asteroid mission. AIAA 57th International Astronautical Congress (IAC 2006), 2-6 October 2006, Valencia, Spain. https://doi.org/10.2514/6.IAC-06-A3.5.01

Biele J, Ulamec S. Capabilities of Philae, the Rosetta lander. In: Balsiger H, Altwegg K, Huebner W, Owen T, Schulz R, editors. Origin and Early Evolution of Comet Nuclei. Space Sci Rev. 2008;138:275–89. Springer, New York, NY. https://doi.org/10.1007/978-0-387-85455-7_18

Lauretta DS, Balram-Knutson SS, Beshore E, Boynton WV, Drouet d’Aubigny C, DellaGiustina DN, et al. OSIRIS-REx: sample return from asteroid (101955) Bennu. Space Sci Rev. 2017;212:925–84. https://doi.org/10.1007/s11214-017-0405-1 .

Cheng Y, Johnson A, Matthies L. MER-DIMES: a planetary landing application of computer vision. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2005), 20-26 June 2005, San Diego, CA. Vol. 1:806–13. https://doi.org/10.1109/CVPR.2005.222

Liu J, Ren X, Yan W, Li C, Zhang H, Jia Y, et al. Descent trajectory reconstruction and landing site positioning of Chang’E-4 on the lunar farside. Nat Commun. 2019;10:4229. https://doi.org/10.1038/s41467-019-12278-3 .

Kubota T, et al. Descent and touchdown dynamics for sampling in Hayabusa mission. Proceedings of the AIAA 57th International Astronautical Congress (IAC 2006), 2-6 October 2006, Valencia, Spain. Vol. 6:4204–13.

Kim WS, Nesnas IA, et al. Targeted driving using visual tracking on Mars: from research to flight. Journal of Field Robotics. 2009;26(3).

Baumgartner ET, Bonitz RG, Melko JP, Shiraishi LR, Leger PC. The Mars Exploration Rover Instrument Positioning System. Proceedings of the 2005 IEEE Aerospace Conference. Big Sky, MT. 2005 doi: https://doi.org/10.1109/AERO.2005.1559295 .

Ono M, et al. Exobiology Extant Life Surveyor (EELS). Proceedings of the American Geophysical Union, Fall Meeting 2019, December 2019. Abstract #P21D-3410. 2019AGUFM.P21D3410O.

2018 Workshop on Autonomy for Future NASA Science Missions: output and results. Science Mission Directorate. https://science.nasa.gov/technology/2018-autonomy-workshop/output-results. Retrieved 13 August 2020.

Papais S, Hockman B, Bandyopadhyay S, Karimi RR. Architecture trades for accessing small bodies with an autonomous small spacecraft. Proceedings from the 2020 IEEE Aerospace Conference. Big Sky, Montana, USA. January 2020. DOI: https://doi.org/10.1109/AERO47225.2020.9172471 .

Villa J, et al. Optical navigation for autonomous approach of small unknown bodies. 43rd Annual AAS Guidance, Navigation & Control Conference, 30 January - 5 February 2020. https://www.semanticscholar.org/paper/OPTICAL-NAVIGATION-FOR-AUTONOMOUS-APPROACH-OF-SMALL-Villa-Bandyopadhyay/d64926710b37c57da193977cc3a97febe9051136

McCormick R, et al. Development of miniature robotic manipulators to enable SmallSat clusters. 2017 IEEE Aerospace Conference, Big Sky, MT, 2017, pp. 1-15. https://doi.org/10.1109/AERO.2017.7943713


Acknowledgements

This work was performed at the Jet Propulsion Laboratory, California Institute of Technology under contract to the National Aeronautics and Space Administration. Government sponsorship is also acknowledged.

Author information

Authors and Affiliations

Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., M/S 198-219, Pasadena, CA, 91109, USA

Issa A.D. Nesnas

Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., M/S 301-480, Pasadena, CA, 91109, USA

Lorraine M. Fesq

Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Dr., M/S 198-219A, Pasadena, CA, 91109, USA

Richard A. Volpe


Corresponding author

Correspondence to Issa A.D. Nesnas .

Ethics declarations

Human and Animal Rights and Informed Consent

This article does not contain any studies with human or animal subjects performed by any of the authors.

Conflict of Interest

Issa A.D. Nesnas has a patent US Patent 13/926,973 issued, a patent US Patent 13/096,391 issued, and a patent US 62/892,728 pending. Lorraine M. Fesq has a patent US6128555 issued, and a patent US05951609 issued. Richard A. Volpe declares no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Space Robotics

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Nesnas, I.A., Fesq, L.M. & Volpe, R.A. Autonomy for Space Robots: Past, Present, and Future. Curr Robot Rep 2, 251–263 (2021). https://doi.org/10.1007/s43154-021-00057-2


Accepted: 06 May 2021

Published: 19 June 2021

Issue Date: September 2021

DOI: https://doi.org/10.1007/s43154-021-00057-2


Keywords

  • Space robotics and autonomy
  • Autonomous systems
  • Need for space autonomy
  • Autonomous operations
  • Autonomous SmallSats


Guest Essay

The U.S. Military Is Not Ready for the New Era of Warfare

[Illustration: a soldier in combat gear, a machine gun strapped to his back, standing in a large grassy area and looking up at a swarm of drones flying above.]

By Raj M. Shah and Christopher M. Kirchhoff

Mr. Shah is the managing partner of Shield Capital. Dr. Kirchhoff helped build the Pentagon’s Defense Innovation Unit.

The First Matabele War, fought between 1893 and 1894, foretold the future.

In its opening battle, roughly 700 soldiers, paramilitaries and African auxiliaries aligned with the British South Africa Company used five Maxim guns — the world’s first fully automatic weapon — to help repel over 5,000 Ndebele warriors, some 1,500 of whom were killed at a cost of only a handful of British soldiers. The brutal era of trench warfare that the Maxim gun ushered in didn’t become fully apparent until World War I. Yet initial accounts of its singular effectiveness correctly foretold the end of the cavalry, a critical piece of combat arms since the Iron Age.

We stand at the precipice of an even more consequential revolution in military affairs today. A new wave of war is bearing down on us. Artificial-intelligence-powered autonomous weapons systems are going global. And the U.S. military is not ready for them.

Weeks ago, the world experienced another Maxim gun moment: The Ukrainian military evacuated U.S.-provided M1A1 Abrams battle tanks from the front lines after many of them were reportedly destroyed by Russian kamikaze drones. The withdrawal of one of the world’s most advanced battle tanks in an A.I.-powered drone war foretells the end of a century of manned mechanized warfare as we know it. Like other unmanned vehicles that aim for a high level of autonomy, these Russian drones don’t rely on large language models or similar A.I. familiar to civilian consumers, but rather on technology like machine learning to help identify, seek and destroy targets. Even devices that are not entirely A.I.-driven increasingly use A.I. and adjacent technologies for targeting, sensing and guidance.

Techno-skeptics who argue against the use of A.I. in warfare are oblivious to the reality that autonomous systems are already everywhere, and the technology is increasingly being deployed to these systems’ benefit. Hezbollah’s alleged use of explosive-laden drones has displaced at least 60,000 Israelis south of the Lebanon border. Houthi rebels are using remotely controlled sea drones to threaten the 12 percent of global shipping value that passes through the Red Sea, including the supertanker Sounion, now abandoned, adrift and aflame, carrying four times as much oil as the Exxon Valdez. And in the attacks of Oct. 7, Hamas used quadcopter drones, which probably used some A.I. capabilities, to disable Israeli surveillance towers along the Gaza border wall, allowing at least 1,500 fighters to pour over a modern-day Maginot line and murder over 1,000 Israelis, precipitating the worst eruption of violence in Israel and the Palestinian territories since the 1973 Arab-Israeli war.

Yet as this is happening, the Pentagon still overwhelmingly spends its dollars on legacy weapons systems. It continues to rely on an outmoded and costly technical production system to buy tanks, ships and aircraft carriers that new generations of weapons — autonomous and hypersonic — can demonstrably kill.

Take, for example, the F-35, the apex predator of the sky. The fifth-generation stealth fighter is known as a “flying computer” for its ability to fuse sensor data with advanced weapons.
