Welcome to CS223A

CS223A / ME320: Introduction to Robotics - Winter 2024

This course provides an introduction to physics-based design, modeling, and control of robotic systems, in particular robotic arms. You will learn basic methodologies and tools, and build a solid foundation that will enable you to move forward in both robotics research (CS327A, CS326) and applications (CS225A). Concepts covered in the course include spatial transformations; forward and inverse kinematics of robots; Jacobians; robot dynamics; joint, Cartesian, operational-space, and force control; and vision-based control.

Expected Learning Outcomes

After taking the class, students will be able to

  • Design a robot with an optimal workspace
  • Model a robot to sufficient precision
  • Implement and tune a robot motion controller that exhibits the desired behaviour
  • Implement and tune a compliant robot motion/force controller that exhibits the desired behaviour
  • Implement and tune a vision-based robot motion controller that is robust to noise
  • Assess limitations of traditional, model-based approaches, visualise these failure cases, and propose an approach on how they can be addressed (as assessed by bonus exercises in homework assignments)

All learning outcomes are assessed by the homework assignments, the midterm, and the final exam.

Lectures: Mon & Wed, 3:00-4:20 PM, conducted in person in Gates B3. Recordings are available through Panopto under Course Videos on Canvas.

Course Reader

Available at the Stanford Bookstore.

All course materials will be shared through the Canvas page, including important class announcements from the Teaching Staff.

Grading: Homework 40%; Midterm (in class) 25%; Final (in class) 35%.

There are 8 assignments, worth a total of 40% of your final grade. They are due at 5:00 PM on Fridays on Gradescope (class code NPNR8W).

Oussama Khatib (Instructor)

[email protected] - Office hours: Mon & Wed, 4:30-5:30 PM, Gates 203 (pending availability)

Course Assistants

  • [email protected] - Office hours: Tue 5:00-7:00 PM, Gates 200 (join by Zoom)
  • William Chong, [email protected] - Office hours: Thu 1:00-3:00 PM and Fri 1:00-3:00 PM, Gates 200
  • Adrian Piedra, [email protected] - Office hours: Mon 1:00-3:00 PM and Tue 3:00-5:00 PM, Gates 200
  • Chinmay Devmalya, [email protected] - Office hours: Wed 1:00-3:00 PM and Thu 10:00 AM-12:00 PM, Gates 200
  • Sreenidhi Tupuri, [email protected] - Office hours: Mon 10:00 AM-12:00 PM and Wed 10:00 AM-12:00 PM, Gates 200
  • [email protected] - Office hours: Mon 1:00-3:00 PM and Tue 1:00-3:00 PM, Gates 200

Website & Other Information Channels

All assignments should be submitted via Gradescope.

If you have a question, the fastest way to get a response from the teaching staff is to post it to the Ed Discussion forum. This is a great place to ask the staff questions, as well as to share information with your peers. For private matters, please make a private note visible only to the course instructors. For longer discussions with CAs, we strongly encourage you to come to office hours.

Assignments

There will be 8 homework problem sets consisting of pen-and-paper exercises. Their purpose is to give you practice applying the concepts covered in class to different robotics-related example problems. Each assignment is released on Friday at 5:00 PM and due the following Friday at 5:00 PM. You should submit directly to Gradescope.

Collaboration Policy

Although group discussion and work is encouraged, each student should submit their own assignment and perform any necessary calculations on their own.

Exams

There will be a midterm and a final for this course. Both will include problems similar to those you have encountered in the homework, as well as problems and questions covering the content from the lectures. TA review sessions (schedule TBD) will help you prepare for the exams.

Late Policy

Each student has a total of three free late days to use on homework over the whole quarter. You may spend these late days on any assignments as you see fit, and you can use partial late days (e.g., if you submit your first assignment 5 hours late, you will have 72 - 5 = 67 late hours remaining). Once these late days are exhausted, any assignment turned in late will be penalized 20% per late day. However, no assignment will be accepted more than three days after its due date. If you need additional extensions beyond the free late days for whatever reason, contact the CAs directly and we will work something out with you.

Regrades will also be handled through Gradescope. We will begin to accept regrades for an assignment the day after grades are released for a window of three days. We will not accept regrades for an assignment outside of that window. Regrades are intended to remedy grading errors, so regrade requests must discuss why you believe your answer is correct in light of the deduction you received. We do not accept regrade requests of the form "I deserve more points for this" or "that deduction is too harsh."


Supplementary Material (Optional)

  • Textbook: Robotics - Modelling, Planning and Control by Siciliano, B., Sciavicco, L., Villani, L., and Oriolo, G. Available on Springer from within the Stanford network.
  • Essence of Linear Algebra by 3blue1brown
  • Python tutorial

Students with Documented Disabilities

Students who may need an academic accommodation based on the impact of a disability must initiate the request with the Office of Accessible Education (OAE). Professional staff will evaluate the request with required documentation, recommend reasonable accommodations, and prepare an Accommodation Letter for faculty dated in the current quarter in which the request is made. Students should contact the OAE as soon as possible since timely notice is needed to coordinate accommodations. The OAE is located at 563 Salvatierra Walk (phone: 723-1066, URL: http://studentaffairs.stanford.edu/oae). Please send your OAE letter directly to Wesley at [email protected].

SCPD Accommodations

SCPD students who cannot physically attend lecture can still participate and ask questions through the Canvas Course Videos tab. This tab will show a 40-second delayed livestream of the lecture, and the associated text chat will be monitored by a TA. The Course Videos tab also contains recordings of past lectures and out-of-class review sessions.

One of the office hours sessions will be designated as SCPD priority office hours and be made available remotely through Zoom. The TA administering these office hours will be available through the Zoom video conference platform for live discussion of course material and homework. SCPD students will receive priority during this time, but non-SCPD students are also welcome to attend.

For exams, if you are local to the area, you are welcome to come to campus to take your midterm and final in person. If you are not, you will need to designate an Exam Monitor by the second week of class so you can take your exams remotely. Please visit this SCPD page for more information.

The Stanford University Fundamental Standard is a part of this course

It is Stanford’s statement on student behavioral expectations articulated by Stanford’s first President David Starr Jordan in 1896. It is agreed to by every student who enrolls at Stanford. The Fundamental Standard states: Students at Stanford are expected to show both within and without the university such respect for order, morality, personal honor and the rights of others as is demanded of good citizens. Failure to do this will be sufficient cause for removal from the university.

The Stanford University Honor Code is a part of this course

It is Stanford’s statement on academic integrity first written by Stanford students in 1921. It articulates university expectations of students and faculty in establishing and maintaining the highest standards in academic work. It is agreed to by every student who enrolls and by every instructor who accepts appointment at Stanford. The Honor Code states:

  • The Honor Code is an undertaking of the students, individually and collectively
  • that they will not give or receive aid in examinations; that they will not give or receive unpermitted aid in class work, in the preparation of reports, or in any other work that is to be used by the instructor as the basis of grading;
  • that they will do their share and take an active part in seeing to it that others as well as themselves uphold the spirit and letter of the Honor Code.
  • The faculty on its part manifests its confidence in the honor of its students by refraining from proctoring examinations and from taking unusual and unreasonable precautions to prevent the forms of dishonesty mentioned above. The faculty will also avoid, as far as practicable, academic procedures that create temptations to violate the Honor Code.
  • While the faculty alone has the right and obligation to set academic requirements, the students and faculty will work together to establish optimal conditions for honorable academic work.

Stanford Engineering Everywhere

CS223A - Introduction to Robotics

Course Description

The purpose of this course is to introduce you to the basics of modeling, design, planning, and control of robot systems. In essence, the material treated in this course is a brief survey of relevant results from geometry, kinematics, statics, dynamics, and control. The course is presented in a standard format of lectures, readings, and problem sets. There will be an in-class midterm and final examination. These examinations will be open book. Lectures will be based mainly, but not exclusively, on material in the Lecture Notes book. Lectures will follow roughly the same sequence as the material presented in the book, so it can be read in anticipation of the lectures. Topics: robotics foundations in kinematics, dynamics, control, motion planning, trajectory generation, programming, and design. Prerequisites: matrix algebra.

  • DOWNLOAD All Course Materials


Khatib, Oussama

Prof. Khatib was the Program Chair of ICRA 2000 (San Francisco) and Editor of "The Robotics Review" (MIT Press). He has served as the Director of the Stanford Computer Forum, an industry affiliate program. He is currently the President of the International Foundation of Robotics Research (IFRR) and Editor of STAR, Springer Tracts in Advanced Robotics. Prof. Khatib is an IEEE Fellow, a Distinguished Lecturer of IEEE, and a recipient of the JARA Award.



MIT OpenCourseWare: Introduction to Robotics

Instructors

  • Prof. Harry Asada
  • Prof. John Leonard

Departments

  • Mechanical Engineering

Topics

  • Robotics and Control Systems
  • Dynamics and Control
  • Mechanical Design
  • Classical Mechanics

Assignments

Problem Set 1 (PDF)

Problem Set 2 (PDF); simple_sim program for Problem Set 2 (ZIP). (The ZIP file contains C++ source code; see \doc\simple_sim.pdf for documentation.)

Problem Set 3 (PDF)

Problem Set 4 (PDF)

Problem Set 5 (PDF)

Problem Set 6 (PDF)

Problem Set 7 (PDF)

Problem Set 8 (PDF)


Michigan Robotics

Work together, create smart machines, serve society.


ROB 502: Programming for Robotics | Fall 2020

Instructor: Acshi Haggenmiller (acshikh), PhD Candidate. Mo/We 1:30-4:30 PM, Online/Remote.

This whole site is a living document and subject to change.

Introduction

This class is designed for engineering students who have a basic understanding of programming but haven’t majored in computer science or taken a dedicated sequence of programming courses. The goal of this class is for students to learn how to 1) write programs from scratch that meet robotic system requirements; 2) organize programs into logical sections; 3) critique program design and implementation choices; 4) use appropriate debugging tools and methodology to efficiently understand and correct program behavior; and 5) use the command line to work with git and other relevant utilities and scripts.

As it is titled Programming for Robotics, we have tried to design the in-class problems and homework assignments to be relevant to common robotics situations and algorithms, with the greater goal of demystifying programming and avoiding black-box magic. To be relevant and exciting, we designed the homework assignments around building a robotics simulation environment. While there are many excellent libraries and tools available for this (ROS among them), we will figure it out for ourselves! The best way to learn programming is by programming, so there will not be any quizzes or exams, and algorithms and necessary math will be provided so you can focus on implementation rather than derivation.

The class uses the C programming language. C is a relatively simple language that will help us understand the fundamentals of how computer programs work, without the language letting us take complicated features for granted. Although most robotics programming is done in languages like Python and C++, the fundamentals you learn in C will help you to better understand what is happening in those more complicated languages.

"In-class" assignments are intended to require about 2-5 hours to complete, with the first 2-3 hours occurring during the scheduled class block. Homework assignments are intended to require about 4 hours per class session. In general, they will be due 1 week after the end of the topic section they were assigned in. For example, the first homework will be due before class session 5.

Instructional format in light of COVID-19

ROB 502 will take a hybrid approach between 1) asynchronous videos and work on your own time and 2) synchronous class discussions and clicker questions. Office hours will be provided in both an online remote format and also in-person on campus in a large classroom with the aid of a plexiglass divider. You will need to have a relatively modern (last 10 years) laptop for use with the course. Linux is ideal (though not expected), and I am also supporting Windows 10 and Mac OS. If you don’t have a compatible laptop, the university has a Loaner Laptop Program you can use to borrow one.

Each week, we have scheduled two 3-hour blocks of class time. Before these blocks, you will be expected to have watched any relevant introduction videos that will be posted on that day’s class page (linked below in the course schedule). We will use either Zoom or BlueJeans for our classes. The first hour or so of these class blocks will consist of group activities, clicker questions, and class discussions. For the remainder of the time, students are encouraged to begin work on that day’s "in-class" assignments. These assignments are relatively low-stakes introductions to new concepts that will be further examined on the homework assignments. I (and potentially a GSI) will be available for the remainder of this time to answer questions and provide help in real time.

After the semester starts, we will take a poll to determine additional office hours when I (and potentially a GSI) will hold other synchronous office hours. At any time, students are encouraged to ask questions on the class Piazza where all students will be able to benefit from the answers and where students can also answer the questions of their classmates.

Some class sessions will not have any formally scheduled instruction or problems. Instead, topics will be addressed on an as-needed basis, with the remaining time open for working on the homework assignments with instructor help.

Class Schedule

Classes 0-3: data representation.

  • Goals: 1) Inspect abstract data (e.g. pictures, text, plans) at the byte and bit level, and understand how changing low-level numbers affects high-level meaning. 2) Use the command line with git and the class submission system to get feedback.
  • Class 0 : Setting up the ROB 502 command line system
  • Class 1 : Using Linux and bash
  • Class 2 : Using git to commit and submit code; expressing logic
  • Class 3 : Arrays, ASCII, bytes, and GDB
  • Homework 1 : Polygonal collision detection, cryptogram
  • There are a variety of C concepts that will not be explicitly covered in class! We are providing a tutorial document to help explain the necessary syntax and basic ideas so we can delve right into the good stuff!
  • For an even gentler introduction to C, I highly recommend Harvard’s CS50 lectures. Although the full lectures can be long, each one has a good table of contents on YouTube, and they work well at 2X playback speed. This clip focuses on compiling C, on using make, and on common compiler errors. This one is on the compilation process. If you want to follow along with their examples, you will need to use their sandbox.

Classes 4-7: Memory concepts and debugging

  • Goals: 1) Determine when dynamic memory is appropriate and how to prevent and detect memory leaks. 2) Determine when pointers are necessary and reason about when they are valid. 3) Use feedback from GDB, Valgrind, and AddressSanitizer to fix memory and other bugs.
  • Class 4 : Addresses, pointers
  • Class 5 : Malloc/free, debugging errors, and dynamic arrays
  • Class 6 : Linked lists
  • Class 7 : As needed
  • Homework 2 : Rasterizing bitmaps, Braitenberg vehicles
  • This clip talks about how data is stored in memory. This one talks about pointers. This one talks about malloc and free. This one talks about memory addresses and hexadecimal. This one is on stack overflows.

Classes 8-10: Recursion and Search

  • Goals: 1) Reason about and write recursive algorithms. 2) Use search algorithms with forward simulation to choose robot actions.
  • Class 8 : Bisection search, midpoint method, recursion vs iteration
  • Class 9 : Tree search
  • Class 10 : As needed
  • This clip gives an overview of recursion and how the computer’s stack is used to hold multiple versions of the same function in memory.
  • Homework 3 : Equation parsing, robot chase

Classes 11-13: Object abstractions

  • Goals: 1) Analyze algorithmic complexity and determine when it matters. 2) Choose data structures based on algorithm needs. 3) Separate and hide implementation from specification.
  • Class 11 : Complexity/Big-O Notation
  • Class 12 : Hash tables
  • Class 13 : As needed
  • Homework 4 : Bigrams

Classes 14-17: Threading

  • Goals: 1) Understand when threading is necessary and how to avoid using it unnecessarily. 2) Determine when variables may be subject to race conditions and how to prevent them. 3) Use threading for terminal input control.
  • Class 14 : Basic threading
  • Class 15 : Race conditions, deadlock, mutexes
  • Class 16 : Terminal settings, I/O threading, manual robot control
  • Class 17 : As needed
  • Homework 5 : Live-tuning potential fields

Classes 18-20: Message passing and networking

  • Goals: 1) Divide robotic systems into independent parts. 2) Coordinate program communication across network nodes. 3) Use logging and playback features to debug specific modules.
  • Class 18 : LCM/ROS basics, hybrid architectures
  • Class 19 : Networking
  • Class 20 : As needed
  • Homework 6 : Split project into communicating processes

Classes 21-23: Special topics

  • Class 21 : Coding interviews
  • Class 22 : Code reviews
  • Class 23 : Introduction to Python

Grades will be 3% course feedback, 7% class participation, 5% office hours participation, 30% in-class assignments, and 55% homework assignments (evenly split between all the homework assignments). In-class assignments will be 50% correctness and 50% participation (awarded for at least 50% correctness). Assignments will report their percentage completion through the auto-grader, with points given for completing objectives and points taken away for things like memory errors or inconsistent style. Final grades will be curved if necessary.

Please notice that homework assignments are worth far more than in-class assignments, and if you get behind, prioritize your time accordingly!

Course feedback

Several times over the semester, we will ask students to submit their feedback on the course. As a relatively new course, we want to gauge the effectiveness of the course setup, assignments, and teaching style.

Class participation

During most class sessions we will have some "clicker"-type questions. We want everyone to participate in class so that you can get to know and support your classmates. Although assignments are individual and you shouldn’t write code for anyone else, the class will be better for everyone if we can give each other advice and support. Also, if you get ahead of the in-class assignments, please start working on the homework!

Office hours participation

I want to get to know my students! I also want students to be comfortable with getting help on the many challenging assignments in this course. While you may certainly be able to work longer to finish assignments, I want everyone to work smarter by getting help at the right time. Part of your grade will be signing up for and showing up at office hours on at least 4 separate days.

Late Policy

For in-class work, the two lowest scores for individual in-class assignment problems will be dropped. If you anticipate missing a class day, you are encouraged to complete that day’s assignments beforehand.

For homework, over all the homework assignment problems, 48 total cumulative hours of tardiness are "free". After this, each hour an assignment is late (rounded up by ceiling) will reduce its maximum score by one percentage point (so 80% completion of an assignment 10 hours late would be 80% * 90% = 72%). The auto-grader will report these percentage calculations and keep your highest final score from any submission. The 48 free hours of allowed homework tardiness will be applied at the end of the semester to maximize your final grade.

At any point, run p4r-check in a problem folder to see the highest score the auto-grader has recorded for you. Keep in mind that it doesn’t take into account your free late hours for homework.

Academic Honesty

The programs you submit, for both in-class and homework assignments, must be your own work, and significant similarity to other submissions will be considered highly suspect. Ultimately, though, the basic guideline is to be reasonable.

While working on problems, you are encouraged to search the internet to learn how to perform specific functions or techniques. In general, if you find a trivial one-liner on StackOverflow, you do not need to cite this. If you are copying a full algorithm, say for quicksort, you would need to cite this (or just use the standard library function qsort!). If that algorithm is a core objective of the assignment, however, then this would not be appropriate regardless of citation. Especially when you implement trickier algorithms or mathematical calculations that you found somewhere online, it can be wise to include a link to the original description of that method in a comment. This makes it easier to check or resume your work later.

You are especially encouraged to get help from your peers! This means that after trying to figure out a problem or fix your code, please talk to other students. If you want them to look at your code, only show the part you are trying to debug. Ask them for pointers about where the error is or what concepts or techniques to review, especially debugging techniques. Keep the conversation high-level and don’t give or receive guided instructions on exactly what code to write. The most useful thing would be to point out flawed logic and allow the other student to come up with the fix themselves. For earlier brainstorming of problem solutions, discuss problems using a whiteboard or a sheet of paper so that everyone can still write their code for themselves. You should not show your own working code to another student who is struggling to complete theirs.

If on the homework you get significant help from your peers, please consider adding a comment in your code at the top of the file saying who you collaborated with and what information was shared. This may help avoid potential confusion in similar solutions. However, since sharing of code is not permitted, we still expect the small details to be significantly different.

If it has been determined that students have flagrantly violated this policy, we reserve the right to respond severely.

CodeAvail

101+ Simple Robotics Research Topics For Students


Imagine a world where machines come to life, performing tasks on their own or assisting humans with precision and efficiency. This captivating realm is the heart of robotics—a fusion of engineering, computer science, and technology. If you’re a student eager to dive into this mesmerizing field, you’re in for an electrifying journey. 

In this blog, we’ll unravel the secrets of robotics research, highlight its significance, and unveil an array of interesting robotics research topics. These topics are perfect for middle and high school students, making the exciting world of robotics accessible to all. Let’s embark on this adventure into the future of technology and innovation!

In your quest to explore robotics, don’t forget the valuable support of services like Engineering Assignment Help. Dive into these fascinating research topics and let us assist you on your educational journey.

What Is a Robotics Research Topic?


A robotics research topic is a specific area of study within the field of robotics that students can investigate to gain a deeper understanding of how robots work and how they can be applied to various real-world problems. These topics can range from designing and building robots to exploring the algorithms and software that control them.

Research topics in robotics can be categorized into various subfields, including:

  • Mechanical Design: Studying how to design and build the physical structure of robots, including their components and materials.
  • Sensors and Perception: Investigating how robots can sense and understand their environment through sensors like cameras, infrared sensors, and ultrasonic sensors.
  • Control Systems: Exploring the algorithms and software that enable robots to move, make decisions, and interact with their surroundings.
  • Human-Robot Interaction: Researching how robots can collaborate with humans, including topics like natural language processing and gesture recognition.
  • Artificial Intelligence (AI): Studying how AI techniques can be applied to robotics, such as machine learning for object recognition and path planning.
  • Applications: Focusing on specific applications of robotics, such as medical robotics, autonomous vehicles, and industrial automation.

Why is Robotics Research Important?

Before knowing robotics research topics, you need to know the reasons for the importance of robotics research. Robotics research is crucial for several reasons:

Advancing Technology

Research in robotics leads to the development of cutting-edge technologies that can improve our daily lives, enhance productivity, and solve complex problems.

Solving Real-World Problems

Robotics can be applied to address various challenges, such as environmental monitoring, disaster response, and healthcare assistance.

Inspiring Innovation

Engaging in robotics research encourages creativity and innovation among students, fostering a passion for STEM (Science, Technology, Engineering, and Mathematics) fields.

Educational Benefits

Researching robotics topics equips students with valuable skills in problem-solving, critical thinking, and teamwork.

Career Opportunities

A strong foundation in robotics can open doors to exciting career opportunities in fields like robotics engineering, AI, and automation.

Also Read: Quantitative Research Topics for STEM Students

Easy Robotics Research Topics For Middle School Students

Let’s explore some simple robotics research topics for middle school students:

Robot Design and Building

1. How to build a simple robot using household materials.

2. Designing a robot that can pick up and sort objects.

3. Building a robot that can follow a line autonomously.

4. Creating a robot that can draw pictures.

5. Designing a robot that can mimic animal movements.

6. Building a robot that can clean and organize a messy room.

7. Designing a robot that can water plants and monitor their health.

8. Creating a robot that can navigate through a maze of obstacles.

9. Building a robot that can imitate human gestures and movements.

10. Designing a robot that can assemble a simple puzzle.

11. Developing a robot that can assist in food preparation and cooking.

Robotics in Everyday Life

1. Exploring the use of robots in home automation.

2. Designing a robot that can assist people with disabilities.

3. How can robots help with chores and housekeeping?

4. Creating a robot pet for companionship.

5. Investigating the use of robots in education.

6. Exploring the use of robots for food delivery in restaurants.

7. Designing a robot that can help with grocery shopping.

8. Creating a robot for home security and surveillance.

9. Investigating the use of robots for waste recycling.

10. Designing a robot that can assist in organizing a bookshelf.

Robot Programming

1. Learning the basics of programming a robot.

2. How to program a robot to navigate a maze.

3. Teaching a robot to respond to voice commands.

4. Creating a robot that can dance to music.

5. Programming a robot to play simple games.

6. Teaching a robot to recognize and sort recyclable materials.

7. Programming a robot to create art and paintings.

8. Developing a robot that can give weather forecasts.

9. Creating a robot that can simulate weather conditions.

10. Designing a robot that can write and print messages or drawings.

Robotics and Nature

1. Studying how robots can mimic animal behavior.

2. Designing a robot that can pollinate flowers.

3. Investigating the use of robots in wildlife conservation.

4. Creating a robot that can mimic bird flight.

5. Exploring underwater robots for marine research.

6. Investigating the use of robots in studying insect behavior.

7. Designing a robot that can monitor and report air quality.

8. Creating a robot that can mimic the sound of various birds.

9. Studying how robots can help in reforestation efforts.

10. Investigating the use of robots in studying coral reefs and marine life.

Robotics and Space

1. How do robots assist astronauts in space exploration?

2. Designing a robot for exploring other planets.

3. Investigating the use of robots in space mining.

4. Creating a robot to assist in space station maintenance.

5. Studying the challenges of robot communication in space.

6. Designing a robot for collecting samples on other planets.

7. Creating a robot that can assist in assembling space telescopes.

8. Investigating the use of robots in space agriculture.

9. Designing a robot for space debris cleanup.

10. Studying the role of robots in exploring and mapping asteroids.

These robotics research topics offer even more exciting opportunities for middle school students to explore the world of robotics and develop their research skills.

Latest Robotics Research Topics For High School Students

Let’s get started with some robotics research topics for high school students:

Advanced Robot Design

1. Developing a robot with human-like facial expressions.

2. Designing a robot with advanced mobility for rough terrains.

3. Creating a robot with a soft, flexible body.

4. Investigating the use of drones in agriculture.

5. Developing a bio-inspired robot with insect-like capabilities.

6. Designing a robot with the ability to self-repair and adapt to damage.

7. Developing a robot with advanced tactile sensing for delicate tasks.

8. Creating a robot that can navigate both underwater and on land seamlessly.

9. Investigating the use of drones in disaster response and relief efforts.

10. Designing a robot inspired by cheetahs for high-speed locomotion.

11. Developing a robot that can assist in search and rescue missions in extreme weather conditions, such as hurricanes or wildfires.

Artificial Intelligence and Robotics

1. How can artificial intelligence enhance robot decision-making?

2. Creating a robot that can recognize and respond to emotions.

3. Investigating ethical concerns in AI-driven robotics.

4. Developing a robot that can learn from its mistakes.

5. Exploring the use of machine learning in robotic vision.

6. Exploring the role of AI-driven robots in space exploration and colonization.

7. Creating a robot that can understand and respond to human emotions in healthcare.

8. Investigating the ethical implications of autonomous vehicles in urban transportation.

9. Developing a robot that can analyze and predict weather patterns using AI.

10. Exploring the use of machine learning to enhance robotic prosthetics.

Human-Robot Interaction

1. Studying the impact of robots on human mental health.

2. Designing a robot that can assist in therapy sessions.

3. Investigating the use of robots in elderly care facilities.

4. Creating a robot that can act as a language tutor.

5. Developing a robot that can provide emotional support.

6. Studying the psychological impact of humanoid robots in educational settings.

7. Designing a robot that can assist individuals with neurodegenerative diseases.

8. Investigating the use of robots for mental health therapy and counseling.

9. Creating a robot that can help children with autism improve social skills.

10. Developing a robot companion for the elderly to combat loneliness.

Robotics and Industry

1. How are robots transforming the manufacturing industry?

2. Investigating the use of robots in 3D printing.

3. Designing robots for warehouse automation.

4. Developing robots for precision agriculture.

5. Studying the role of robotics in supply chain management.

6. Exploring the integration of robots in the construction and architecture industry.

7. Investigating the use of robots for recycling and waste management in cities.

8. Designing robots for autonomous maintenance and repair of industrial equipment.

9. Developing robotic solutions for monitoring and managing urban traffic.

10. Studying the role of robotics in the development of smart factories and Industry 4.0.

Cutting-Edge Robotics Applications

1. Exploring the use of swarm robotics for search and rescue missions.

2. Investigating the potential of exoskeletons for enhancing human capabilities.

3. Designing robots for autonomous underwater exploration.

4. Developing robots for minimally invasive surgery.

5. Studying the ethical implications of autonomous military robots.

6. Exploring the use of robotics in sustainable energy production.

7. Investigating the use of swarming robots for ecological conservation and monitoring.

8. Designing exoskeletons for individuals with mobility impairments for daily life.

9. Developing robots for autonomous planetary exploration beyond our solar system.

10. Studying the ethical and legal aspects of AI-powered military robots in warfare.

These robotics research topics offer high school students the opportunity to delve deeper into advanced robotics concepts and address some of the most challenging and impactful issues in the field.

Robotics research is a captivating field with a wide range of robotics research topics suitable for students of all ages. Whether you’re in middle school or high school, you can explore robot design, programming, AI integration, and cutting-edge applications. Robotics research not only fosters innovation but also prepares you for a future where robots will play an increasingly important role in various aspects of our lives. So, pick a topic that excites you, and embark on your journey into the fascinating world of robotics!

I hope you enjoyed this blog about robotics research topics for middle and high school students.



Introduction to AI Applications in Robotics


The intersection of robotics and artificial intelligence (AI) is quickly becoming a driving force in the creation of new industries, cutting-edge technologies and increased productivity and efficiency in existing sectors. As the field of AI in robotics continues to evolve, its applications in the real world are becoming increasingly apparent. 

From self-driving cars, customer service and healthcare, to industrial and service robots, AI is playing a critical role in transforming industries and improving daily life. Although there have been concerns about the potential of AI and robotics to make some aspects of human work obsolete, the World Economic Forum (WEF) predicts that this technology will create 12 million more jobs than it terminates by 2025. This growth presents an opportunity for the retraining and reskilling of the workforce and investment in knowledge that aligns with the latest technologies. 

The combination of AI and robotics has the potential to revolutionize work responsibilities across various industries, from automating routine tasks within factories to introducing flexibility and learning capabilities into tedious applications. The potential uses of AI in robotics are vast and varied, making it an exciting field to explore and understand. Read on to learn more about robotics and AI, plus how you can play a role in the future of this important industry.

What Is Robotics?

Robotics is a branch of engineering and computer science that includes the design, construction and operation of machines that are capable of performing programmed tasks without additional human involvement. At its core, robotics is about using technology to automate tasks while making them more efficient and safe.

Historically, robots have been used for tasks that are too difficult or dangerous for humans to perform — such as lifting heavy equipment — or for activities that are very repetitive, such as assembling automobiles. By automating these tasks, robotics solutions can enhance productivity and improve safety, freeing up human workers to focus on other more complex and creative endeavors.

It’s also worth noting that robots are not subject to the same limitations as humans. For example, a human doing the same task over and over may become tired, bored or disengaged, but the robot will continue to perform the same task with an unwavering level of efficiency and precision. Robotics solutions are already making a major impact across numerous industries, from meticulously harvesting crops to making deliveries and assembling cars. 

Are AI and Robotics the Same Thing?

Although AI and robotics are sometimes used interchangeably, in reality, they are distinct — yet related — fields. While both AI and robotics can potentially impact various industries and aspects of life in significant ways, each serves a different purpose and operates in a unique way.

Simply put, AI neural network models are similar to biological neural networks, while robotics is comparable to the human body. AI refers to the development of systems that can perform tasks that typically require human intelligence, such as learning, problem-solving and decision-making. These systems can work autonomously, without the need for constant instructions, since they’re programmed to learn and adapt on their own. 

Robotics, on the other hand, refers to the development of robots that can perform specific physical tasks. These robots can be programmed to carry out simple, repetitive actions, such as sorting items or assembling miniscule parts. While AI can be integrated into robotics to enhance the robot’s capabilities and improve decision-making, it’s not always necessary. Some robotics applications simply require robots to carry out predictable actions without the need for additional cognitive capabilities. 

While AI and robotics are not the same things, they do complement each other and can work together to bring about a wide range of benefits and advancements in various applications.

How AI Is Used in Robotics

AI has made substantial progress in recent years, and its integration with robotics has proven to be a natural progression. While AI in robotics is not yet widespread, it’s rapidly gaining momentum as AI systems become more advanced. The combination of AI and robotics holds tremendous potential, leading to increased productivity and efficiency, improved safety and greater flexibility for workers in a variety of professions.

One of the key ways in which AI is used in robotics is through machine learning. This technique enables robots to learn and perform specific tasks through observing and mimicking human actions. AI gives robots computer vision that enables them to navigate, detect and determine their reactions accordingly. This helps them go beyond simply performing repetitive tasks to become true “cognitive collaborators.”

Another way that AI is used in robotics is through edge computing. AI applications in robotics require the interpretation of massive amounts of data gathered by robot-based sensors in real time, which is why this data is analyzed close to the machine, rather than being sent off to the cloud for processing. This approach provides machines with real-time awareness, enabling robots to act on decisions at a rate much quicker than human capabilities allow.

AI also helps robots learn to perform specific tasks through the use of various sensors, which may include:

  • Time-of-flight optical sensors
  • Temperature and humidity sensors
  • Ultrasonic sensors
  • Vibration sensors
  • Millimeter-wave sensors

These sensors help robots to learn and adapt, making them more intelligent and better equipped to act and react in different scenarios.

These are just a few of the ways that artificial intelligence is used in conjunction with robotics. 

Applications of AI in Robotics

In the world of robotics, AI has proven to be a valuable asset in a variety of applications. From customer service to manufacturing, AI has made its mark and continues to revolutionize the way we think about and interact with robots. Let’s take a closer look at some of the key areas where AI is being used alongside robotics today.

Customer Service: AI-powered chatbots are becoming increasingly common in customer service applications. These automated service agents can handle simple, repetitive requests without the need for human involvement. The more these systems interact with humans, the more they learn. And as AI systems become more sophisticated, we can expect to see more and more robots being used in customer service in both online and brick-and-mortar environments.

Assembly: AI has proven to be an invaluable tool in robotic assembly applications, especially in complex manufacturing industries such as aerospace. With the help of advanced vision systems, AI can enable real-time course correction and can be used to help a robot automatically learn the best paths for certain processes while in operation.

Packaging: AI is used in the packaging industry to improve efficiency, accuracy and cost-effectiveness. By continuously refining and saving certain motions made by robotic systems, AI helps make installing and moving robotic equipment easier for everyone.

Imaging: Across many industries — including assembly and logistics — accurate imaging is crucial. With the assistance of AI, robots can achieve enhanced visual acuity and image recognition competencies, enabling greater accuracy in even the smallest of details.

Machine Learning: Machine learning is a powerful tool for robots. By exploring their surroundings, robots can learn more about their environment, find ways around obstacles and solve problems to complete tasks more efficiently. From home robots like vacuum cleaners to manufacturing robots in factories, machine learning is helping robots become more intelligent and adaptable in their work.

These are just a few of the many applications of AI in robotics today. As these technologies continue to expand and grow in sophistication, it is likely that we will see even more innovative applications in the near future.

What Is a Robotics Engineer?

As robotics continues to shape various industries, robotics engineers play a critical role in robotic design, maintenance and functionality. A robotics engineer is a specialist responsible for building, installing and maintaining the machines that perform tasks in sectors such as manufacturing, security, aerospace and healthcare.

The day-to-day responsibilities of a robotics engineer include:

  • Installing, repairing and testing equipment and components
  • Performing predictive maintenance
  • Incorporating relevant technical literature into their understanding of system operations
  • Identifying new data sources
  • Building working relationships
  • Ensuring that software solutions meet customer needs
  • Developing and deploying AI governance structure to manage ongoing implementation of AI strategies
  • Continuously evaluating and reimagining processes to incorporate conversational AI 
  • Maintaining knowledge of safety standards and regulations for the safe operation of a system 

To become a robotics engineer, a bachelor’s or master’s degree in computer engineering, computer science, electrical engineering or a related field is required. Fluency in multiple programming languages and proficiency in algorithm design and debugging are also important qualifications. A successful robotics engineer is also a continuous learner, a natural problem solver and is driven toward ongoing improvement.

The average salary for a robotics engineer is $100,205* per year, making it a lucrative and in-demand career path for those with the right qualifications and skills. 

*Salary average according to Glassdoor as of February 2023.

Future of AI in Robotics

The future of AI in robotics is vast and exciting. The next stage of AI, known as AGI or Artificial General Intelligence, holds the potential to reach levels of true human understanding. The key to this is integrating the computational system of AI with a robot. The robot must possess mobility, senses (such as touch, vision and hearing) and the ability to interact with physical objects, which will enable the system to experience immediate sensory feedback from every action it takes. This feedback loop enables the system to learn and comprehend, bringing it closer to achieving true AGI.

The current focus on AI in robotics is shifting from the question of what tasks robots can perform for people, to what type of input a robot can provide the AI’s “mind.” By allowing AI to explore and experiment with real objects, it will be possible for it to approach a deeper understanding, much like a human child. With this integration of AI and robotics, we can expect to see significant advancements in a wide range of industries, from manufacturing and healthcare to security and space exploration.

The future of AI in robotics is bright and holds the potential for tremendous progress in how we understand and interact with the world. By combining the computational power of AI with the physical capabilities of robots, we are opening up new doors for exploration and innovation, and the potential for true AGI is within reach.

Are you interested in pursuing a career as a robotics engineer? Our Master of Science in Applied Artificial Intelligence may be just the beginning of a worthwhile journey. Check out our informative eBook, 8 Questions to Ask Before Selecting an Applied Artificial Intelligence Master’s Degree, to learn more.


12 Interesting Robotics Projects Ideas & Topics for Beginners & Experienced

Are you passionate about robotics and eager to excel in this exciting field? Look no further! This article is your ultimate guide to top robotics projects and ideas, tailored for both beginners and intermediates. These projects offer not only valuable insights into the world of robotics but also provide hands-on experience, helping you strengthen your skills and knowledge. 

As you dive into these projects, you’ll unlock your potential to innovate and significantly impact the rapidly evolving robotics landscape. 

So, let’s get started!

Top Robotics Project Topics & Ideas for Beginners

As a beginner in the field of robotics, it’s essential to start with projects that introduce foundational concepts and skills. These projects aim to familiarise beginners with the basics of robot design, programming, and sensor integration while providing hands-on experience to build confidence. 


The following list of beginner-friendly projects serves as an excellent starting point for those looking to embark on their robotics journey and develop a strong foundation for more advanced projects in the future.

1. Line Follower Robot

The Line Follower Robot is a simple yet intriguing project for beginners that involves designing and programming a robot to follow a specific path marked by a line. This project will introduce students to the fundamentals of robot design, sensor integration, and basic programming.


Before embarking on this project, students should have a basic understanding of electronics, including working with microcontrollers, sensors, and actuators. Familiarity with a programming language such as C or Python is also beneficial.

Learning outcomes:

  • Basics of robot design and assembly
  • Understanding sensors and actuators
  • Programming and control logic
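The control logic above can be sketched in a few lines. Here is a minimal Python sketch of proportional line-following, assuming two normalized reflectance readings in [0, 1] (higher meaning darker); the sensor-reading and motor-driver code is hardware-specific and omitted, and the function names and gain are illustrative:

```python
def steering_correction(left_ir, right_ir, gain=0.5):
    """Proportional steering from two reflectance readings in [0, 1],
    where higher means more line under that sensor. A positive output
    means the line has drifted left, so the robot should turn left."""
    return gain * (left_ir - right_ir)

def motor_speeds(base_speed, correction):
    """Turn toward the line by slowing one wheel and speeding the other."""
    return base_speed - correction, base_speed + correction

# Line fully under the left sensor: slow the left wheel, speed the right.
left, right = motor_speeds(1.0, steering_correction(1.0, 0.0))
```

On real hardware the main loop would read the sensors, compute the correction, and write the speeds to a motor driver at a fixed rate, tuning the gain experimentally.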

2. Obstacle Avoidance Robot

The Obstacle Avoidance Robot project challenges students to create a robot capable of detecting and manoeuvring around obstacles in its path. This project entails integrating various sensors to help the robot perceive its surroundings and developing algorithms to process the sensor data for decision-making.

  • Sensor integration for detecting obstacles
  • Algorithm development for obstacle avoidance
  • Actuator control for robot manoeuvrability
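The decision-making step can start as simply as comparing range readings to a clearance threshold. A minimal sketch, assuming three ultrasonic distance readings in metres (the function name, action labels, and threshold are illustrative, not from any specific kit):

```python
def avoid(front, left, right, threshold=0.3):
    """Choose an action from three range readings (metres).
    Drives forward when the path is clear, otherwise turns toward
    the side with more room, and reverses when boxed in."""
    if front > threshold:
        return "forward"
    if left > right and left > threshold:
        return "turn_left"
    if right > threshold:
        return "turn_right"
    return "reverse"
```

More capable versions replace this fixed rule with potential fields or a local planner, but a threshold rule like this is enough to get a first robot moving safely.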

3. Robotic Arm

The Robotic Arm project involves designing and building a robotic limb to perform various tasks, such as lifting and moving objects. This project covers the mechanical design of the robotic arm, motor control, and the principles of kinematics.


To work on this project, students should have a background in mechanics, electronics, and programming. Familiarity with microcontrollers and actuators, such as servo motors, is essential for building the robotic arm.

  • Mechanical design of robotic limbs
  • Motor control and kinematics
  • Programming for precise and coordinated movements
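The kinematics side of the project starts with forward kinematics: computing where the gripper ends up for given joint angles. For a planar two-link arm this is a short trigonometric calculation (the link lengths and angles below are illustrative):

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector (x, y) position of a planar 2-link arm.
    theta1: shoulder angle, theta2: elbow angle (radians);
    l1, l2: link lengths."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both joints at zero: the arm lies fully stretched along the x-axis.
x, y = forward_kinematics(0.0, 0.0)
```

Inverse kinematics (solving for the angles that reach a target point) is the natural next step once this mapping is understood.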

4. Mobile-Controlled Robot

The Mobile-Controlled Robot project involves designing and programming a robot that can be controlled using a smartphone or tablet. This project teaches students how to integrate Bluetooth or Wi-Fi technology for wireless communication between the robot and the mobile device.

Before starting this project, students should have a basic understanding of electronics, microcontrollers, and programming. Familiarity with Bluetooth or Wi-Fi communication protocols is also beneficial.

  • Wireless communication using Bluetooth or Wi-Fi
  • Robot control through a mobile device interface
  • Integrating sensors and actuators for responsive movement
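Once the Bluetooth or Wi-Fi link delivers text from the app, the robot only needs to parse it into actions. A minimal sketch of such a command parser (the command vocabulary here is made up for illustration, not a standard protocol):

```python
def parse_command(message):
    """Map a text command from the phone app, e.g. 'MOVE 0.5',
    to an (action, value) pair. Anything unrecognised maps to a
    safe stop."""
    actions = {"MOVE": "move", "TURN": "turn", "STOP": "stop"}
    parts = message.strip().upper().split()
    if not parts or parts[0] not in actions:
        return ("stop", 0.0)
    value = float(parts[1]) if len(parts) > 1 else 0.0
    return (actions[parts[0]], value)
```

Defaulting unknown input to a stop is a deliberate safety choice: a garbled radio packet should never keep the robot driving.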

5. Solar-Powered Robot

The Solar-Powered Robot project focuses on designing and building a robot that can harness solar energy for its operation. This project introduces students to the concepts of renewable energy and energy-efficient design in robotics.

Students should have a foundation in electronics and programming to work on this project. Knowledge of solar panels and energy storage systems, such as batteries, is advantageous.

  • Understanding solar energy harvesting and storage
  • Designing energy-efficient robotic systems
  • Incorporating renewable energy sources in robotics
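A first design exercise for this project is the energy budget: how long the robot can run on a given battery and panel. A back-of-the-envelope calculation (the numbers below are illustrative, not from a specific build):

```python
def runtime_hours(battery_wh, load_w, panel_w=0.0):
    """Estimated runtime: battery capacity (Wh) divided by the net
    draw, i.e. average load (W) minus solar input (W). If the panel
    covers the load, the robot can in principle run indefinitely."""
    net = load_w - panel_w
    if net <= 0:
        return float("inf")
    return battery_wh / net

# 10 Wh battery, 5 W load, 3 W of sun: 10 / (5 - 3) = 5 hours.
hours = runtime_hours(10.0, 5.0, panel_w=3.0)
```

Real designs then derate the panel for angle and cloud cover and size the battery for the worst expected gap in sunlight.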

6. Maze Solver Robot

The Maze Solver Robot project challenges students to create a robot capable of navigating through a maze autonomously. This project involves developing algorithms for pathfinding and decision-making based on the robot’s surroundings.

To undertake this project, students should have a background in programming, algorithms, and microcontrollers. Experience with sensors, such as infrared or ultrasonic sensors, is beneficial.

  • Developing pathfinding algorithms
  • Sensor integration for environment perception
  • Implementing decision-making strategies for maze navigation
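A standard starting point for the pathfinding algorithm is breadth-first search over a grid map of the maze, which finds a shortest path when every move costs the same. A minimal sketch (the grid encoding is illustrative: 0 = open cell, 1 = wall):

```python
from collections import deque

def solve_maze(grid, start, goal):
    """Breadth-first search over a grid maze (0 = open, 1 = wall).
    Returns a shortest path as a list of (row, col) cells, or None
    if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}       # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:            # walk back through predecessors
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None
```

On a physical robot the same search runs over a map the robot builds as it explores, and the returned cell sequence is translated into drive-and-turn commands.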

Top Robotics Project Topics & Ideas for Intermediates

After gaining a solid foundation in robotics, it’s time to move on to more challenging projects that delve deeper into advanced concepts and technologies. Intermediate-level projects help students develop a better understanding of complex algorithms, control systems, and artificial intelligence applications in robotics. 

These projects encourage problem-solving, critical thinking, and creativity, enabling students to broaden their skill sets and prepare for more specialised roles in the field. The following list of intermediate projects offers diverse topics to expand your knowledge and expertise in robotics.

7. Voice-Controlled Robot

The Voice Controlled Robot project involves creating a robot that responds to voice commands. This project requires integrating speech recognition technology, natural language processing, and robot control systems.

Students working on this project should have prior experience with microcontrollers, programming and a basic understanding of artificial intelligence concepts. Familiarity with speech recognition libraries or APIs is a plus.

  • Integration of speech recognition technology
  • Natural language processing for command interpretation
  • Robot control based on voice commands
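After a speech-recognition library returns a transcript, a simple keyword-spotting pass can stand in for full natural language processing while prototyping. A minimal sketch (the keyword-to-action table is made up for illustration):

```python
def interpret(transcript):
    """Map a transcribed utterance to a robot action by keyword
    spotting; the first matching keyword in the table wins.
    Returns 'idle' when nothing matches."""
    keywords = [
        ("forward", "drive_forward"),
        ("back", "drive_backward"),
        ("left", "turn_left"),
        ("right", "turn_right"),
        ("stop", "halt"),
    ]
    text = transcript.lower()
    for word, action in keywords:
        if word in text:
            return action
    return "idle"
```

Proper natural language processing replaces this table with intent classification, but keyword spotting is robust enough to demonstrate the full voice-to-motion pipeline.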

8. Swarm Robotics

Swarm Robotics is an advanced project that explores the coordination and cooperation of multiple robots working together to achieve a common goal. The project emphasises the development of algorithms for decentralised control and communication between the robots.


Students should have a strong foundation in programming, algorithms, and multi-agent systems to undertake this project. Experience with communication protocols and networking is beneficial.

  • Design and implementation of decentralised control algorithms
  • Communication and coordination between multiple robots
  • Understanding swarm intelligence principles
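The flavour of a decentralised control algorithm can be shown with heading consensus: each robot repeatedly nudges its heading toward the average of the neighbours it can communicate with, with no central coordinator. A minimal sketch (the neighbour graph and update rate are illustrative):

```python
def consensus_step(headings, neighbors, rate=0.5):
    """One round of decentralised heading consensus. Each robot i
    moves its heading a fraction `rate` toward the mean heading of
    its neighbours; robots with no neighbours keep their heading.
    headings: list of floats; neighbors: list of index lists."""
    new = []
    for i, h in enumerate(headings):
        if not neighbors[i]:
            new.append(h)
            continue
        avg = sum(headings[j] for j in neighbors[i]) / len(neighbors[i])
        new.append(h + rate * (avg - h))
    return new
```

Iterating this step drives all robots in a connected communication graph toward a common heading, which is the simplest example of swarm agreement without central control.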

9. Autonomous Drone

The Autonomous Drone project involves designing, building, and programming a drone capable of autonomous flight and navigation. This project covers topics such as flight control systems, GPS integration, and obstacle detection.

Students interested in this project should have experience with electronics, programming, and control systems. Knowledge of aerodynamics and sensor integration is advantageous.

  • Design and assembly of drone components
  • Integration of flight control systems and GPS
  • Programming for autonomous flight and navigation
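At the navigation layer, a first autonomy controller is proportional guidance toward the next waypoint, clamped to a safe speed. A minimal 2-D sketch (the flight-controller and GPS interfaces are hardware-specific and omitted; the gain and speed limit are illustrative):

```python
import math

def velocity_command(pos, waypoint, gain=1.0, max_speed=2.0):
    """Proportional velocity command (vx, vy) toward a 2-D waypoint,
    with the commanded speed clamped to max_speed (m/s)."""
    dx, dy = waypoint[0] - pos[0], waypoint[1] - pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)
    speed = min(gain * dist, max_speed)
    return (speed * dx / dist, speed * dy / dist)
```

A mission script would feed the drone's GPS estimate into this function each control cycle and advance to the next waypoint once the distance drops below a tolerance.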


10. SLAM-Based Robot

The SLAM-Based Robot project involves creating a robot capable of Simultaneous Localisation and Mapping (SLAM) for autonomous navigation in unknown environments. This project entails integrating various sensors, such as LiDAR or cameras, and developing algorithms for mapping and localisation.

  • Understanding and implementing SLAM algorithms
  • Autonomous navigation in unknown environments
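The localisation half of SLAM builds on an odometry motion model: between sensor updates, the robot predicts its own pose from wheel speeds. A minimal sketch of that prediction step (a full SLAM system would then correct this drifting estimate against LiDAR or camera observations of the map):

```python
import math

def update_pose(x, y, theta, v, omega, dt):
    """Dead-reckoning pose update from wheel odometry: linear speed
    v (m/s) and angular speed omega (rad/s) applied over timestep dt.
    This is the prediction step that SLAM corrects with sensing."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta = (theta + omega * dt) % (2 * math.pi)
    return x, y, theta
```

Because wheel slip makes this estimate drift without bound, the "mapping" half of SLAM exists precisely to anchor it to landmarks observed in the environment.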

11. Robotic Exoskeleton

The Robotic Exoskeleton project focuses on designing and building a wearable robotic system to assist or augment human movement. To work on this project, students should have a background in mechanics, electronics, and programming. Familiarity with force sensors and actuators, such as servo motors or linear actuators, is essential.

  • Understanding human biomechanics and movement
  • Designing wearable robotic systems
  • Integration of force sensing and actuator control


12. Humanoid Robot

The Humanoid Robot project involves designing, building, and programming a robot with human-like characteristics and capabilities, such as walking, talking, or facial expression recognition. This advanced project covers topics such as computer vision, natural language processing, and complex motor control.

Students interested in this project should have experience in electronics, programming, and control systems. Knowledge of artificial intelligence and computer vision is advantageous.


  • Design and assembly of humanoid robot components
  • Integrating computer vision and natural language processing
  • Programming for complex motor control and human-like behaviours

Importance of Robotics Skills in Today’s and Future’s Job Market

The increasing integration of robotics and automation in various industries has created an unprecedented demand for skilled professionals. From manufacturing and healthcare to agriculture and logistics, robotics has transformed businesses’ operations, leading to increased efficiency and productivity. As a result, acquiring robotics skills has become essential for those looking to excel in the modern job market.


According to a report by the World Economic Forum, machines and algorithms will create 12 million more jobs than they displace by 2025, with robotics and artificial intelligence playing a significant role in this growth. Furthermore, the International Federation of Robotics (IFR) estimates that the global market for robotics systems will reach $248 billion by 2025, highlighting the immense potential for job opportunities in the field.

As the adoption of robotics continues to grow, companies are increasingly seeking individuals with the necessary technical expertise to drive innovation and maintain a competitive edge in the market.

The world of robotics offers a vast array of project ideas for beginners and intermediates. Students can acquire valuable skills and knowledge by working on these projects, enabling them to excel in their robotics journey. With the right guidance and resources, such as those provided by upGrad, anyone can pursue their passion for robotics and embark on a rewarding career path.


How upGrad Can Help You

upGrad offers comprehensive courses and mentorship programs to help aspiring robotics enthusiasts learn the necessary skills and work on real-world projects. With a combination of online classes, hands-on workshops, and practical assignments, upGrad ensures students have a solid foundation in robotics and the confidence to tackle advanced projects.

By enrolling in upGrad courses like the Post Graduate Certificate Program in Cloud Computing, offered in collaboration with IIIT Bangalore, students can acquire valuable skills and knowledge, enabling them to excel in their technology-focused career paths. The program equips students with in-demand cloud computing skills and gives them exposure to topics like cloud architecture, virtualisation, storage, networking, and cloud security.

Pavan Vadapalli


Frequently Asked Questions (FAQs)

How do I choose the right robotics project for my skill level?

To choose the right project, assess your current skills and knowledge in electronics, programming, and mechanics. If you are a beginner, choose a project that introduces foundational concepts, such as a Line Follower Robot or an Obstacle Avoidance Robot. As you gain more experience, move on to intermediate-level projects that involve more complex algorithms, control systems, or artificial intelligence applications, like SLAM-Based Robots.

What resources are available for learning robotics?

Various resources are available for learning about robotics, including online courses, books, tutorials, and community forums. Websites like upGrad offer comprehensive courses to help you learn the necessary skills and work on real-world projects. Books and online tutorials can also provide valuable insights and practical guidance for specific projects.

Can robotics projects improve my career prospects?

Yes, working on robotics projects can significantly enhance your skills, making you more attractive to potential employers. By showcasing your robotics projects in your portfolio or resume, you demonstrate not only your technical expertise but also your problem-solving abilities, critical thinking, and creativity.


Integrated task sequence planning and assignment for human–robot collaborative assembly station

  • Published: 10 December 2022
  • Volume 35, pages 979–1006 (2023)


  • Yichen Wang (1)
  • Junfeng Wang (1), ORCID: orcid.org/0000-0003-2756-8803
  • Jindan Feng (2)
  • Jinshan Liu (2)
  • Xiaojun Liu (3)


Human–robot collaborative assembly (HRCA) exploits the complementary strengths of humans and robots and can significantly improve assembly efficiency. Rational assembly sequences and task assignment schemes facilitate an efficient and smooth assembly process. This paper proposes a method for integrated assembly sequence planning and task assignment in HRCA based on a genetic algorithm (GA). First, a part assembly process is decomposed into a positioning task and a connection task, each comprising a series of activities drawn from practical applications. A dual-task precedence graph model for product assembly is then constructed. Subsequently, the GA integrates assembly sequence planning and task assignment in HRCA, with time, complexity, and coherence as optimization objectives. A chromosome encoding and decoding method based on human–robot collaborative assembly state diagrams is proposed to express assembly sequences and task assignment schemes, which are visualized with Gantt charts. Finally, an assembly process simulation is conducted to fine-tune the assignment result, accounting for potential collisions in the shared time and space during HRCA. Case studies illustrate the feasibility and effectiveness of the proposed approach.
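As a toy illustration of the GA idea (not the paper's formulation, which jointly encodes precedence-constrained sequences and assignments and scores complexity and coherence alongside time), the sketch below evolves only a human-versus-robot task assignment over invented durations, minimizing the makespan when both agents work in parallel:

```python
import random

# Hypothetical (human_time, robot_time) durations per task; illustrative
# numbers only, not the paper's case-study data.
TASKS = [(4, 9), (7, 3), (5, 5), (8, 2), (3, 6), (6, 4), (2, 8), (9, 5)]

def makespan(assignment):
    """Completion time if human (0) and robot (1) work their tasks in parallel."""
    human = sum(h for (h, r), a in zip(TASKS, assignment) if a == 0)
    robot = sum(r for (h, r), a in zip(TASKS, assignment) if a == 1)
    return max(human, robot)

def evolve(pop_size=30, generations=100, p_mut=0.1, seed=0):
    """Minimal elitist GA over binary assignment chromosomes."""
    rng = random.Random(seed)
    n = len(TASKS)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)                 # lower makespan = fitter
        survivors = pop[: pop_size // 2]       # keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)          # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(n):                 # bit-flip mutation
                if rng.random() < p_mut:
                    child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=makespan)

best = evolve()
print(best, makespan(best))   # assigning everything to the human would take 44
```

The chromosome/fitness/selection loop is the only part shown; the paper's encoding additionally carries the assembly sequence and is decoded against the dual-task precedence graph.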



Funding

Research supported by the Preliminary Research Program of Equipment Development of China (Grant No. 61409230103).

Author information

Authors and Affiliations

School of Mechanical Science and Engineering, Huazhong University of Science and Technology, Wuhan, 430074, China

Yichen Wang & Junfeng Wang

Beijing Spacecrafts Limited Company, Beijing, 100094, China

Jindan Feng & Jinshan Liu

School of Mechanical Engineering, Southeast University, Nanjing, 211198, China

Xiaojun Liu


Corresponding author

Correspondence to Junfeng Wang .

Ethics declarations

Conflict of Interest

The authors declare that the paper has not been published previously and is not under consideration for publication elsewhere.

Competing interests

No potential conflict of interest was reported by the authors.


About this article

Wang, Y., Wang, J., Feng, J. et al. Integrated task sequence planning and assignment for human–robot collaborative assembly station. Flex Serv Manuf J 35 , 979–1006 (2023). https://doi.org/10.1007/s10696-022-09479-2


Accepted: 27 November 2022

Published: 10 December 2022

Issue Date: December 2023

DOI: https://doi.org/10.1007/s10696-022-09479-2


Keywords:

  • Human–robot collaborative assembly
  • Task sequence planning
  • Task assignment
  • Genetic algorithm
  • Open access
  • Published: 16 November 2022

Automated patient-robot assignment for a robotic rehabilitation gym: a simplified simulation model

  • Benjamin A. Miller (1, 2)
  • Bikranta Adhikari (1)
  • Chao Jiang (1)
  • Vesna D. Novak (1, 2), ORCID: orcid.org/0000-0001-9143-2682

Journal of NeuroEngineering and Rehabilitation, volume 19, Article number: 126 (2022)


Background

A robotic rehabilitation gym can be defined as multiple patients training with multiple robots or passive sensorized devices in a group setting. Recent work with such gyms has shown positive rehabilitation outcomes; furthermore, such gyms allow a single therapist to supervise more than one patient, increasing cost-effectiveness. To allow more effective multipatient supervision in future robotic rehabilitation gyms, we propose an automated system that could dynamically assign patients to different robots within a session in order to optimize rehabilitation outcome.

Methods

As a first step toward implementing a practical patient-robot assignment system, we present a simplified mathematical model of a robotic rehabilitation gym. Mixed-integer nonlinear programming algorithms are used to find effective assignment and training solutions for multiple evaluation scenarios involving different numbers of patients and robots (5 patients and 5 robots, 6 patients and 5 robots, 5 patients and 7 robots), different training durations (7 or 12 time steps) and different complexity levels (whether different patients have different skill acquisition curves, whether robots have exit times associated with them). In all cases, the goal is to maximize total skill gain across all patients and skills within a session.

Results

Analyses of variance across different scenarios show that disjunctive and time-indexed optimization models significantly outperform two baseline schedules: staying on one robot throughout a session and switching robots halfway through a session. The disjunctive model results in higher skill gain than the time-indexed model in the given scenarios, and the optimization duration increases as the number of patients, robots and time steps increases. Additionally, we discuss how different model simplifications (e.g., perfectly known and predictable patient skill level) could be addressed in the future and how such software may eventually be used in practice.
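The advantage the optimized schedules exploit can be shown with a much smaller toy than the paper's disjunctive and time-indexed models. With invented per-robot gain rates and a per-robot saturation cap, exhaustive search over two-block schedules shows why switching robots mid-session can beat staying put:

```python
from itertools import permutations

# Toy model (invented numbers, not the paper's): RATE[p][r] is patient p's
# skill gain per time step on robot r; gains from any single robot saturate
# at CAP, so lingering on one robot eventually stops helping.
RATE = [[3.0, 1.0, 0.5],
        [0.5, 2.5, 1.0],
        [1.0, 0.5, 2.0]]
CAP, T = 10.0, 12  # saturation cap and session length in time steps

def schedule_gain(blocks):
    """Total skill gain for a list of (assignment, steps) blocks.

    Each assignment maps patient index -> robot index; steps spent on the
    same robot accumulate toward that robot's saturation cap.
    """
    steps = {}  # (patient, robot) -> total steps
    for assign, dur in blocks:
        for p, r in enumerate(assign):
            steps[p, r] = steps.get((p, r), 0) + dur
    return sum(min(RATE[p][r] * s, CAP) for (p, r), s in steps.items())

perms = list(permutations(range(3)))
# Baseline: every patient stays on one robot for the whole session.
stay = max(schedule_gain([(a, T)]) for a in perms)
# Optimized: pick the best robot permutation for each half-session.
switch = max(schedule_gain([(a, T // 2), (b, T // 2)])
             for a in perms for b in perms)
print(stay, switch)  # → 30.0 48.0
```

With these numbers each patient saturates their best robot in the first half-session, so a second block on a fresh robot adds gain that the stay-put baseline leaves on the table; the paper's models make the same trade-off at scale, with exit times and more patients.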

Conclusions

Though it involves unrealistically simple scenarios, our study shows that intelligently moving patients between different rehabilitation robots can improve overall skill acquisition in a multi-patient multi-robot environment. While robotic rehabilitation gyms are not yet commonplace in clinical practice, prototypes of them already exist, and our study presents a way to use intelligent decision support to potentially enable more efficient delivery of technologically aided rehabilitation.

Rehabilitation robotics and the robotic gym

Over the last decade, rehabilitation robots have demonstrated the ability to deliver motor rehabilitation with results comparable to a human therapist [ 1 , 2 , 3 ]. By physically guiding the patient’s limb and applying either assistive or challenging forces [ 4 , 5 ], such robots can effectively reduce the physical workload of the human therapist. Traditionally, rehabilitation robots were operated in a setup consisting of one robot, one patient, and one supervising therapist [ 1 , 2 ]; this was likely simply due to the high cost of individual robots, which made it difficult for most rehabilitation centers to own more than one robot. However, the last few years have seen a push toward more affordable rehabilitation robotics [ 6 ], which enables rehabilitation centers to own more than one robot and introduces opportunities for multi-robot setups.

The first steps beyond the classic “one patient, one robot, one therapist” model involved connecting two robots (or passive sensorized rehabilitation devices), allowing two patients to exercise together either independently or in a competitive/collaborative manner while supervised by a single therapist [ 7 , 8 , 9 , 10 , 11 , 12 ]. To reduce the burden on the therapist, such two-robot setups commonly include automated difficulty adaptation algorithms that aim to keep the exercise difficulty appropriate for both patients [ 13 , 14 , 15 ], removing the need for the therapist to constantly modify the exercise settings. Short-term studies have shown benefits to such paired rehabilitation such as improved motivation, exercise intensity and motor learning [ 7 , 8 , 9 , 10 , 11 , 12 ], and a recent clinical trial found greater improvements in functional outcome after paired therapy than after individual therapy [ 16 ].

As the intuitive next step beyond connecting two robots, rehabilitation centers may take the form of “robotic gyms” where multiple patients train with multiple robots or passive sensorized devices in a group setting. Such a robotic gym was first demonstrated with six rehabilitation devices as early as 2016, and a pilot clinical trial suggested that it may be more efficient than the traditional “one patient, one robot, one therapist” model since it may allow therapists to supervise multiple patients simultaneously [ 17 ]. In the last few years, several rehabilitation centers have set up robotic gyms with multiple robots: for example, Fondazione Don Carlo Gnocchi, Italy [ 18 ]; Shirley Ryan AbilityLab, USA; and SRH Gesundheitszentrum Bad Wimpfen, Germany. Additionally, a 2020 multicenter clinical trial was conducted with such a robotic gym: patients alternated between four lower-cost rehabilitation devices, with results comparable to those previously seen with larger robots [ 3 ]. In this four-device study, each therapist was assigned to supervise 3 patients, and the authors again emphasized that such group settings may allow new organizational models that are more cost-efficient than the traditional “one patient, one robot, one therapist” model [ 3 ]. An observational study by the same team suggested that a therapist may be able to effectively supervise up to four patients exercising with four robots [ 19 ].

Improving therapist support in the robotic gym

While having a single therapist supervise multiple patients in a robotic gym may be cost-effective, there is a risk of divided attention: a therapist may not be able to effectively monitor all the patients, leading to suboptimal therapy. For example, if a device trains elbow flexion/extension, a patient’s elbow may become fatigued and the patient may benefit from moving to a different device that trains finger function instead; however, if the therapist does not notice this, the patient may continue exercising with the elbow device and become progressively more tired and frustrated. As a more extreme example, if a robot is not optimally physically aligned to the patient and the therapist does not notice this, there is a risk of patient injury [ 20 ]. However, we believe that such issues could be avoided and cost-effectiveness in a robotic gym could be improved further if the therapist is provided with effective software support: a central system that collects information from all robots in the gym and presents it to the therapist.

Rehabilitation robots are already able to assess patient task performance and motor function using built-in sensors [ 21 , 22 , 23 , 24 ]. In a robotic gym, each robot could independently monitor the current patient, and the information from all patients could be aggregated and presented to a supervising therapist via a central portal, thus reducing therapist workload. Initial steps have already been taken in this direction: for example, Hocoma AG (Switzerland), a major manufacturer of rehabilitation robots, has introduced the Hoconet software portal, which allows a therapist to create a patient database and gather data from multiple robots in a centralized fashion.

To further reduce therapist workload and potentially improve group rehabilitation outcome, we could endow the robotic gym with a software agent that would monitor the patients as a group. The agent could then estimate when a patient might benefit from moving to a different robot (due to, e.g., fatigue, boredom, or simply lack of improvement in the current exercise). It could then suggest such a change to the patient(s) or supervising therapist, who could either accept or reject it. In the long term, this may become a bidirectional exchange of information between the therapist and the robotic supervision system: the robot intelligence could study when the therapist chooses to accept or reject its suggestions, and could learn to adapt its suggestions accordingly. More ambitiously, such centralized planning by an artificial intelligence could even be used in situations where a therapist is not available: for example, during a weekend group session supervised by a technician or nurse rather than a therapist. Again, basic steps have been taken in this direction: for example, Hocoma AG has introduced the Extra Time software, which allows an aide to run previously described exercises when a therapist is not available. However, the Extra Time software does not perform any monitoring or planning.

Such a centralized patient monitoring and patient-robot assignment planning system has not been implemented in existing robotic gyms, where patients either do not switch between devices within the session [ 3 ] or switch between them arbitrarily after a predefined time period [ 6 ]. However, it should be possible: theoretical models suggest that, given knowledge about ability levels, it is possible to design a multi-task training regimen that maximizes long-term retention across different trained functions [ 25 , 26 ], though this has not been applied to robotics.

Contribution of current paper

In this paper, we introduce the concept of an intelligent patient-robot assignment system for a robotic gym that monitors all patients’ exercise performance and dynamically assigns them to available robots in order to optimize training outcome for the entire patient group. Realistically, this is a very complex problem involving numerous uncertainties (e.g., the difficulty of estimating actual patient motor ability from performance in a single specific task [ 21 , 22 , 23 , 24 ]), and the success of a specific solution could only be determined via long-term human subjects testing with actual people with motor impairments. However, as initial steps toward a realistic system, we:

1. Developed a simplified mathematical model for patient-robot assignment and training planning in a robotic gym;

2. Adopted mixed-integer nonlinear programming algorithms to find optimal assignment and training solutions;

3. Verified the proposed assignment solutions in different simulated situations and demonstrated effective training results; and

4. Discussed limitations of the model and next steps.

This section is divided as follows. We first present the simplified robotic rehabilitation gym scenario in plain English (Scenario description) and using mathematical formulations (Mathematical scenario formulation). We then present the mixed-integer programming algorithms used for dynamic patient-robot assignment (Optimization algorithms) and multiple evaluation scenarios in which the performance of these algorithms was evaluated (Evaluation methodology).

Scenario description

In our simplified scenario, the robotic rehabilitation gym consists of M patients exercising with N robots for a period of G time steps. Each patient has K motor skills that they need to improve; these can be considered to be, for example, different upper limb and lower limb abilities as separated by clinical scales such as the Motor Assessment Scale [ 27 ] or the Fugl-Meyer Assessment [ 28 ], but are kept general for purposes of the simplified scenario. Similarly, a robot could also be a passive sensorized rehabilitation device but is kept as a general ‘robot’ for purposes of our scenario.

For the current paper, multiple simplifications and constraints of the scenario were implemented. The scope of these simplifications and ways in which they could be expanded are examined further in the Discussion. First, there were several constraints on robot use that we consider reasonable:

At any given time, only a single patient can use a given robot. While competitive and cooperative two-robot setups could be modeled as a single robot that can be used by two patients [ 7 , 8 , 9 , 10 , 11 , 12 ], such setups are currently a minority.

At any given time, a given patient can only use a single robot. While there are some setups where a patient can train with two robots simultaneously (e.g., an arm and leg robot [ 29 ]), they are relatively rare.

Each robot has an associated nonnegative time required to ‘exit’ the robot after training that is constant across patients and does not contribute to skill improvement. Realistically, there is both an ‘enter’ and an ‘exit’ time for each robot, which represents the time needed to, e.g., adjust the lengths of the robot’s segments to the patient, strap the patient to the robot before exercise, and unstrap them after exercise. The time may in practice be different for different patients and may vary depending on the temporal sequence of patients (e.g., if a patient is followed by a patient of roughly the same size, reducing readjustment time). However, we consider simplifying this as an overall ‘exit’ time to be reasonable for the current study.

As a result of the above exit time, a patient cannot enter a new robot until they have exited their current robot, and a patient cannot enter a robot until the previous patient has exited that robot.

Second, there were two constraints on training schedules that we also consider relatively reasonable:

All patients begin and finish training simultaneously; none can train before the starting time or after the end time. In practice, patients may arrive and leave one by one.

A patient may only train with a given robot once for a single uninterrupted period. For example, they cannot leave the robot, train with another robot, and come back to the first robot; as a second example, they cannot train with the robot, take a break where they do nothing, and continue training with the same robot. This is likely reasonable if the time period to be modeled is a single session, and can be expanded later.

Third, there were five simplifications related to skill acquisition, of which the last two are quite major and are discussed extensively in the Discussion:

Each robot only trains a single motor skill. Larger robots can train multiple skills simultaneously (e.g., both distal and proximal upper extremity function in the ARMin [ 2 ]), and there is evidence that training one skill generalizes to improvements in other skills (e.g., distal to proximal [ 30 ]), but we consider this a reasonable initial simplification that can easily be expanded later.

Each skill is only trained by a single robot. In practice, a robotic gym may have multiple robots that all train the same skill (e.g., multiple identical robots), but we again consider this a reasonable initial simplification that can easily be expanded later.

Once a patient is assigned to a robot, no further choices need to be made for that robot. In practice, rehabilitation robots have adjustable difficulty settings and control strategies [ 5 ] that are set either manually by the therapist or automatically by an intelligent algorithm.

Each skill improves as a deterministic function of time spent training on a robot that trains that skill, and depends on no other factors. This is a strong simplification; while such learning functions have been described in the literature [ 31 , 32 ], they are not deterministic, and improvement is influenced by numerous other factors (e.g., forgetting [ 25 ]).

Each patient’s current skill levels are available to the patient-robot assignment algorithms at all times, and the functions that relate improvement to training time are also known. This is a very strong simplification: while rehabilitation robots can assess patient motor function using built-in sensors, this estimate is not completely accurate [ 21 , 22 , 23 , 24 ]; furthermore, standardized clinical tests of motor function such as the Fugl-Meyer Assessment [ 28 ] are not perfectly accurate and cannot be conducted during a rehabilitation session since they require significant time. Similarly, the learning function is not known to rehabilitation robots and can only be imperfectly estimated.

Mathematical scenario formulation

For the scenario described in the previous subsection, we define a set of rehabilitation robots, \(R:=\{{r}_{1},\dots ,{r}_{N}\}\) , a group of patients, \(P:=\{{p}_{1},\dots ,{p}_{M}\}\) , and a set of motor skills, \(S:=\left\{{s}_{1},\dots ,{s}_{K}\right\}\) , to be trained for each patient. Let \(N\) be the number of robots, \(M\) be the number of patients and \(K\) be the number of skills. To follow the simplifications above, \(N = K\) and each skill is trained by exactly one robot. The training session consists of \(G\) discrete time steps, i.e., \(G\) is the final time training can be done. We also define \(H\) as the final time a patient may start training on a new robot. Both \(G\) and \(H\) are non-negative integers, and \(G\ge H\) . In our specific case, we set \(H = G\) , allowing a patient to start training on a robot at the last time step and train for a single time step. The time steps can be used to represent any amount of time as long as the duration of each time step is the same. For each robot \({r}_{i}\in R\) , the robot’s exit time is defined as a non-negative integer \({e}_{{r}_{i}}\) . When no time is needed to exit a robot, \({e}_{{r}_{i}}\) = 0. For each patient \({p}_{j}\in P\) , we are given skill curves that determine skill improvements as a function of time spent training on different robots. This encapsulates the last two simplifications in the previous subsection. The objective of dynamic patient-robot assignment and training planning is to find a schedule of the patients’ skill training on the robots that optimizes the total skill gain across the patients.
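The sets and scalars defined above can be collected in a small container for experimentation. This is an illustrative sketch only; the class and field names are assumptions, not the authors' code.

```python
from dataclasses import dataclass

# Hypothetical container mirroring the symbols defined above: N robots
# (one skill each, so N = K), M patients, G time steps, per-robot exit
# times e_{r_i}, and H as the last step at which training may start
# (H = G in the paper's setting).
@dataclass
class Scenario:
    num_robots: int        # N (= K, since each robot trains exactly one skill)
    num_patients: int      # M
    num_steps: int         # G, final time step of the session
    exit_times: tuple      # e_{r_i} for each robot, non-negative integers

    @property
    def last_start(self):
        return self.num_steps  # H = G: training may start at the final step

scenario = Scenario(num_robots=5, num_patients=5, num_steps=12,
                    exit_times=(1, 1, 1, 1, 1))
assert scenario.last_start == scenario.num_steps  # H = G
```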

The scheduling problem can be framed as a mixed-integer nonlinear programming (MINLP) problem. There are three basic MINLP formulations: the time-indexed formulation, the disjunctive formulation, and the rank-based formulation [ 33 ]. The choice among these formulations depends on their flexibility in modeling a problem and their computational efficiency in solving it. We applied both time-indexed (Time-indexed model) and disjunctive (Disjunctive model) models to our scheduling problem. After presenting the general models, we applied several simplifications for the current study, described in “Simplifications for our study”.

Time-indexed model

In time-indexed models, a schedule is created by determining, for each time step \(t\le H\) , whether a patient starts training on a given robot and how long the patient trains on the robot. The decision variables used for the time-indexed model are defined as follows:

\({x}_{{r}_{i},{p}_{j},t}\) is a Boolean variable that is equal to 1 if patient \({p}_{j}\) starts training on robot \({r}_{i}\) at time step \(t\) .

\({d}_{{r}_{i},{p}_{j}}\) is a nonnegative integer that represents the amount of time patient \({p}_{j}\) trains on robot \({r}_{i}\) .

Unlike the original time-indexed models for process engineering [ 34 ] where the duration of a task is known a priori, our model introduces the variable \({d}_{{r}_{i},{p}_{j}}\) to determine when a patient should stop training on a given robot as part of the decision-making. The time-indexed MINLP model is then formulated as maximizing objective function ( 1 ), subject to constraints ( 2 )–( 8 ):

The objective (1) maximizes the total skill gain across all patients during the training session, where

is the skill curve function with \({c}_{1,{r}_{i},{p}_{j}},{c}_{2,{r}_{i},{p}_{j}},{c}_{3,{r}_{i},{p}_{j}}\) determining the shape of the skill curve and \({c}_{4,{r}_{i},{p}_{j}}\) determining how far along the skill curve a patient advances per time step. Each \(c\) value changes depending on the patient \({p}_{j}\) and robot \({r}_{i}\) pairing to create a custom skill curve. \({u}_{{r}_{i},{p}_{j}}\) is the number of training time steps previously performed by patient \({p}_{j}\) with robot \({r}_{i}\) in previous sessions, plus one (with the plus one used to calibrate the function). The basic function is common to all patients and skills while the parameters may differ between patients and skills. It models a modified hyperbolic skill curve where patients tend to have rapid gains when first training a skill, then diminishing returns as they train more [ 31 , 32 ]. The \(\frac{{c}_{1,{r}_{i},{p}_{j}}*({c}_{2,{r}_{i},{p}_{j}} + {c}_{4,{r}_{i},{p}_{j}}*{u}_{{r}_{i},{p}_{j}})}{{c}_{1,{r}_{i},{p}_{j}}+{c}_{4,{r}_{i},{p}_{j}}*{u}_{{r}_{i},{p}_{j}}+{c}_{3,{r}_{i},{p}_{j}}}\) portion of the equation represents the patient’s initial skill value before the current training session.
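The curve can be sketched in code. This is a reconstruction rather than the published implementation: it assumes the skill value after \(u\) weighted training steps is the quoted fraction \(c_1(c_2+c_4u)/(c_1+c_4u+c_3)\), so that the gain from \(d\) further steps is the difference of two such values.

```python
def skill_value(u, c1, c2, c3, c4):
    """Modified hyperbolic skill curve (a reconstruction consistent with
    the fraction quoted in the text): the skill value after u weighted
    training steps approaches the asymptote c1 with diminishing returns."""
    return c1 * (c2 + c4 * u) / (c1 + c4 * u + c3)

def skill_gain(d, u, c1, c2, c3, c4):
    """Gain from d additional steps, starting from u previous steps."""
    return (skill_value(u + d, c1, c2, c3, c4)
            - skill_value(u, c1, c2, c3, c4))

# With the study's simplified parameters (c1 = 100, c4 = 1, u = 1) and the
# equal-curve values c2 = 1, c3 = 10, an early step yields more than a
# late one, i.e., the curve has diminishing returns:
first = skill_gain(1, 1, 100, 1, 10, 1)
later = skill_gain(1, 11, 100, 1, 10, 1)
assert first > later
```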

Constraints (2)–(5) are imposed on robot use. Constraint (2) ensures that only a single patient can start using a given robot at any given time. Constraint (3) ensures that a given patient can only start using a single robot at any given time. Constraint (4) ensures that a patient \({p}_{j}\) cannot begin training on a new robot \({r}_{b}\) until after they have exited their current robot \({r}_{a}\) (i.e., not until after time step \({s}_{{r}_{a},{p}_{j}}+{d}_{{r}_{a},{p}_{j}}+{e}_{{r}_{a}}-1\), where \({s}_{{r}_{a},{p}_{j}}\) is a nonnegative integer that represents the time when patient \({p}_{j}\) starts training on robot \({r}_{a}\)). This constraint is ignored if patient \({p}_{j}\) does not train on robot \({r}_{a}\) during the session. Constraint (5) ensures that a patient \({p}_{b}\) cannot start training on a robot \({r}_{i}\) until after the previous patient \({p}_{a}\) on robot \({r}_{i}\) has exited that robot (i.e., not until after time step \({s}_{{r}_{i},{p}_{a}}+{d}_{{r}_{i},{p}_{a}}+{e}_{{r}_{i}}-1\)). Again, this constraint is ignored if patient \({p}_{a}\) does not train on robot \({r}_{i}\) during the session. Constraint (6) ensures that no one can start training or continue to train after the final time step—i.e., no one can train at \(G+1\). Constraint (7) ensures that a person can only train on a single robot at a time. Constraint (8) ensures that a patient may only train with a given robot once for a single uninterrupted period.
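The robot-use and schedule constraints described above can be exercised as a feasibility check on a candidate schedule. This is an illustrative sketch, not the MINLP formulation itself; the schedule representation and function name are assumptions.

```python
def is_feasible(schedule, num_steps, exit_times):
    """Check a candidate schedule against the prose of constraints (2)-(8).

    `schedule` is a hypothetical representation: a dict mapping
    (robot, patient) -> (start, duration) with 1-based time steps.
    Training occupies steps start .. start + duration - 1, after which
    both the robot and the patient remain blocked for exit_times[robot]
    further steps. Constraint (8), a single uninterrupted period per
    patient-robot pair, holds by construction of this representation.
    """
    robot_busy, patient_busy = {}, {}
    for (robot, patient), (start, dur) in schedule.items():
        if start + dur - 1 > num_steps:  # constraint (6): no training after G
            return False
        blocked = set(range(start, start + dur + exit_times[robot]))
        if blocked & robot_busy.get(robot, set()):      # (2), (5)
            return False
        if blocked & patient_busy.get(patient, set()):  # (3), (4), (7)
            return False
        robot_busy.setdefault(robot, set()).update(blocked)
        patient_busy.setdefault(patient, set()).update(blocked)
    return True
```

With a 1-step exit time, a patient who finishes training at step 3 blocks the robot through step 4, so the next patient may start at step 5 but not at step 4.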

Disjunctive model

In disjunctive models, a schedule is created by determining what time a patient starts and finishes training on a given robot. The start and end times together implicitly capture the amount of time spent training on a robot. The decision variables used in the disjunctive model are defined as follows:

\({x}_{{r}_{i},{p}_{j}}\) is the integer start time of patient \({p}_{j}\) on robot \({r}_{i}\) .

\({y}_{{r}_{i},{p}_{j}}\) is the integer end time of patient \({p}_{j}\) on robot \({r}_{i}\) .

\({a}_{{r}_{i},{p}_{a},{p}_{b}}\) is a Boolean precedence indicator that is equal to 1 if patient \({p}_{a}\) is on robot \({r}_{i}\) before patient \({p}_{b}\) and is equal to 0 otherwise.

\({b}_{{r}_{a},{r}_{b},{p}_{j}}\) is a Boolean precedence indicator that is equal to 1 if patient \({p}_{j}\) is on robot \({r}_{a}\) before robot \({r}_{b}\) and is equal to 0 otherwise.

\({z}_{{r}_{i},{p}_{j}}\) is a Boolean activation variable that is equal to 1 if patient \({p}_{j}\) uses robot \({r}_{i}\) during the training session and is equal to 0 otherwise.

The disjunctive MINLP model is formulated as maximizing objective ( 10 ), subject to constraints ( 11 )–( 15 ):

The objective (10) maximizes the total skill acquisition across all patients during the training session, where

is the skill improvement function. While function (16) appears slightly different from function (9), this is only because of the difference in the variables used in the disjunctive and time-indexed models. Both Eqs. ( 9 ) and ( 16 ) represent a modified hyperbolic function that models the learning curves [ 31 , 32 ]. Passing a schedule represented in time-indexed form through function (1) yields the same value as passing that same schedule in disjunctive form through function (10).

Constraint (11) ensures that the start and end times of a patient training on a given robot are between \(1\) and \(G\) , and the start time is not later than the end time. Constraints (12) and (13) are disjunctive constraints on robot use and ensure that two patients’ training activities requiring the same robot cannot overlap in time. Specifically, for any two patients \({p}_{a},{p}_{b}\in P\) , the start time of patient \({p}_{b}\) on a given robot must be at least \({e}_{{r}_{i}}\) greater than the end time of patient \({p}_{a}\) on the same robot if patient \({p}_{a}\) precedes \({p}_{b}\) . This requirement accounts for the time, i.e., \({e}_{{r}_{i}}\) , taken for patient \({p}_{a}\) to exit the robot. This is analogous to constraint (5) in the time-indexed model. The Boolean variable \({a}_{{r}_{i},{p}_{a},{p}_{b}}\) is introduced to indicate the precedence of two patients \({p}_{a}\) and \({p}_{b}\) on robot \({r}_{i}\) . \(V\) is a sufficiently large multiplier [ 35 ] that ensures that either (12) or (13), but not both, will hold depending on the precedence indicated by \({a}_{{r}_{i},{p}_{a},{p}_{b}}\) . Note that the two constraints are taken into account only if both patients \({p}_{a}\) and \({p}_{b}\) use robot \({r}_{i}\) during the training session, i.e., \({z}_{{r}_{i},{p}_{a}}= {z}_{{r}_{i},{p}_{b}}=1\) . For any two patients \({p}_{a}\) and \({p}_{b}\) who do not both use robot \({r}_{i}\) at any time during the training session, i.e., \({z}_{{r}_{i},{p}_{a}}=0\) or \({z}_{{r}_{i},{p}_{b}}=0\) , constraints (12) and (13) are always satisfied. Similarly, constraints (14) and (15) are disjunctive constraints on the schedule of a given patient that ensure that the start time of patient \({p}_{j}\) on robot \({r}_{b}\) must be at least \({e}_{{r}_{i}}\) greater than the end time of patient \({p}_{j}\) on robot \({r}_{a}\) if patient \({p}_{j}\) uses robot \({r}_{a}\) before robot \({r}_{b}\) .
The two constraints hold only if both robot \({r}_{a}\) and \({r}_{b}\) are used by patient \({p}_{j}\) during the training session, i.e., \({z}_{{r}_{a},{p}_{j}}={z}_{{r}_{b},{p}_{j}}=1\) .
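The big-M construction behind constraints (12) and (13) can be illustrated for a single robot and one pair of patients. The inequalities below are a reconstruction from the prose; the "+ 1" offset is an assumption that end times are inclusive, so the next start must be strictly later than the end-plus-exit period.

```python
def disjunctive_pair_holds(x_a, y_a, x_b, y_b, e_i, a_ab, V):
    """Big-M sketch of constraints (12)-(13) for one robot r_i:

        x_b >= y_a + e_i + 1 - V * (1 - a_ab)   # (12): a precedes b
        x_a >= y_b + e_i + 1 - V * a_ab         # (13): b precedes a

    x/y are start/end times, e_i the robot's exit time, and a_ab the
    Boolean precedence indicator. With V sufficiently large, the
    non-binding inequality of the pair is trivially satisfied, so only
    the one selected by a_ab actually constrains the schedule.
    """
    c12 = x_b >= y_a + e_i + 1 - V * (1 - a_ab)
    c13 = x_a >= y_b + e_i + 1 - V * a_ab
    return c12 and c13

V = 1000  # sufficiently large for a 12-step session
# Patient a trains steps 1-4, exit time 1, so patient b may start at step 6:
assert disjunctive_pair_holds(1, 4, 6, 9, 1, 1, V)
# Overlap: b starts at step 4 while a's exit period is not over:
assert not disjunctive_pair_holds(1, 4, 4, 9, 1, 1, V)
```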

Simplifications for our study

While Eqs. ( 9 ) and ( 16 ) allow general representations of possible skill curves, our specific study held several variables constant for simplicity. \({c}_{1,{r}_{i},{p}_{j}}\) represents the maximum skill value that can be achieved with an infinite amount of training (e.g., maximum score on a clinical assessment scale); this was set to 100 for every skill. \({c}_{4,{r}_{i},{p}_{j}}\) determines how many “units” of training a patient gains in a skill when training that skill for one time step. This could be used to represent, e.g., more or less efficient robots for the same skill, but was not considered critical for the current study, and \({c}_{4,{r}_{i},{p}_{j}}\) was thus set to 1. Finally, we focused only on scenarios where patients have not previously trained with the robots, and \({u}_{{r}_{i},{p}_{j}}\) was thus set to 1. The skill function for the time-indexed model, Eq. ( 9 ), thus simplified to:

and Eq. ( 16 ) thus simplified to:

Optimization algorithms

The Branch-And-Reduce Optimization Navigator (BARON) optimizer (The Optimization Firm LLC, USA) [ 36 ] was applied to both time-indexed and disjunctive models. BARON was chosen due to its ability to find global solutions to nonlinear and mixed-integer nonlinear problems. As the name implies, BARON uses branch-and-reduce optimization (a branch-and-bound variant) that always finds the global optimum under specific conditions (e.g., having a finite lower and upper bound on the nonlinear constraints, having enough optimization iterations to complete the search). Additionally, the OPTI toolbox for MATLAB 2021a (MathWorks, USA) was used to interface with the BARON optimizer, and IBM’s ILOG CPLEX optimization studio [ 37 ] was used to increase the effectiveness of the BARON optimizer. The OPTI toolbox can accept constraints in both linear and nonlinear format. Initially, we wrote all constraints in nonlinear format; for the final evaluation, linear constraints were written in linear format since this (on average) reduced optimization duration.

While the BARON optimizer is guaranteed to find a solution close to the global maximum, this is only if it has enough time to do so. As the number of optimization iterations is always limited, the optimizer may be unable to find the optimal schedule within that limit. Therefore, the optimization was combined with an algorithm that scans the schedule in each optimization iteration for obvious weaknesses and removes them. Specifically, the algorithm looks for situations where a patient is idle and scheduled to be assigned to a robot in later time steps, but that robot is already available earlier; in such situations, the algorithm changes the schedule so the patient begins training on that robot as soon as it is available. A similar process is applied to the end time: if a patient is scheduled to exit a robot but the robot and patient would then both be idle, the training time on the robot is extended. Despite this algorithm, repeated optimization of the same scenario may still lead to slightly different results due to the finite number of optimization iterations.
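The first repair rule (pull a waiting patient onto a robot as soon as it becomes free) can be sketched for a single robot's timeline. The representation and function name are hypothetical; this single-robot sketch assumes the patient is also idle during the gap, as the full algorithm checks, and the analogous end-time extension rule is omitted for brevity.

```python
def compress_robot_timeline(entries, exit_time):
    """One pass of the repair idea for a single robot.

    `entries` is a hypothetical list of (patient, start, duration)
    tuples for one robot. Whenever the robot sits idle before an entry,
    that entry is pulled back so training begins as soon as the robot is
    free again (training block plus exit time of the previous patient).
    """
    repaired, next_free = [], 1
    for patient, start, dur in sorted(entries, key=lambda e: e[1]):
        new_start = next_free  # earliest step the robot is available
        repaired.append((patient, new_start, dur))
        next_free = new_start + dur + exit_time
    return repaired

# Patient 1 was scheduled at step 7 although, with a 1-step exit time,
# the robot frees up at step 5; the repair pulls the start back:
assert compress_robot_timeline([(0, 1, 3), (1, 7, 2)], 1) == \
    [(0, 1, 3), (1, 5, 2)]
```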

Evaluation methodology

Our optimization strategy aims to maximize the total amount of skill gained during a session: the difference between the skill value at the end and the start of the session, summed across all patients and skills. This total skill gain depends on the patients’ initial skill level and their skill curves (modeled with Eqs.  17 and 18 for each patient-robot pairing). The skill curves have diminishing returns, similar to real-world situations where it becomes harder to improve a skill the more a patient has trained it [ 31 , 32 ]. To evaluate the effectiveness of the optimization strategy, we first evaluated multiple scenarios where all patients have the same skill curve for all skills (Equal skill curves) and then multiple scenarios where the patients have different skill curves (Different skill curves). Finally, to evaluate the computational cost of the optimization, we measured how optimization duration depends on the number of patients, robots, and time steps (Effect of patients, robots and time steps on optimization duration).

Equal skill curves

In this situation, all skill curves of all patients are described using Eqs. ( 17 ) and ( 18 ) and the parameter values are \({c}_{3,{r}_{i},{p}_{j}}\) = 10, \({c}_{2,{r}_{i},{p}_{j}}\) = 1 and \({c}_{4,{r}_{i},{p}_{j}}\) = 1 for each patient and skill. Three possible combinations of patients and robots were tested:

5 patients and 5 robots,

6 patients and 5 robots,

5 patients and 7 robots.

Each of these was tested in three time variants:

7 time steps total, robots have no exit time,

12 time steps total, robots have no exit time,

12 time steps total, robots have an exit time of 1 time step.

There was thus a total of 9 evaluation scenarios with equal skill curves.

In the last variant (1-step exit time), a patient must wait (remain idle) for one time step after training on a robot before they can be assigned to a new robot ( \({e}_{{r}_{i}}\) = 1 \(\forall {r}_{i}\in R\) ). Furthermore, no other patient can train with that robot until the previous patient has completed the idle period. Such exit times were not evaluated with a 7-time-step duration since the short duration would strongly favor not switching robots. The difference in computational complexity between 7 and 12 time steps was expected in advance to be greater for the time-indexed model, since that model’s number of decision variables grows with the number of time steps while the disjunctive model’s does not.
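The scaling difference can be made concrete by counting the decision variables each model defines. These counts are an approximation based on the variable definitions given earlier (one precedence Boolean per unordered pair is an assumption); the exact counts in the authors' implementation may differ.

```python
def time_indexed_vars(n_robots, n_patients, n_steps):
    # x_{r,p,t} Booleans (one per robot-patient-step triple)
    # plus d_{r,p} duration integers: grows with the number of steps.
    return n_robots * n_patients * n_steps + n_robots * n_patients

def disjunctive_vars(n_robots, n_patients):
    # x, y, z per (robot, patient) pair, plus precedence Booleans:
    # a per unordered patient pair per robot, b per unordered robot
    # pair per patient. Note: no dependence on the number of steps.
    pairs_p = n_patients * (n_patients - 1) // 2
    pairs_r = n_robots * (n_robots - 1) // 2
    return (3 * n_robots * n_patients
            + n_robots * pairs_p + n_patients * pairs_r)

# 5 patients, 5 robots: growing the session from 7 to 12 steps adds
# 125 variables to the time-indexed model, and none to the disjunctive one.
assert time_indexed_vars(5, 5, 12) - time_indexed_vars(5, 5, 7) == 125
```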

For both models, the optimization was allowed to run for 1000 iterations. Additionally, two ‘baseline’ approaches were evaluated for each scenario:

Best robot only: Each patient was assigned to a single robot for the entire session. This was selected as the robot that would result in the highest individual skill gain over the session for that patient. In case of conflicts (two patients would obtain the highest gain from the same robot), the robot was assigned to the patient who would receive the greater gain, and the other patient was assigned to their “second-best” robot.

Switch halfway: Each patient was assigned to one robot for the first half of the session and a second robot for the second half of the session. These were again selected as the two robots that would result in the two highest individual skill gains over the session for that patient, and conflicts between patients were resolved similarly to the previous case. As the skill curves have diminishing returns, this was expected to lead to higher overall skill gain than not switching robots and is similar to a recent robotic gym paper where patients switched midway through the session [ 6 ].
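The "best robot only" baseline can be sketched as a greedy assignment. The gains table, function name, and the globally sorted greedy ordering (which generalizes the pairwise conflict rule described above to more than two contenders) are assumptions for illustration.

```python
def best_robot_only(gains):
    """Greedy 'best robot only' baseline.

    `gains[p][r]` is a hypothetical precomputed table: the skill gain
    patient p would obtain by spending the whole session on robot r.
    Conflicts are resolved as in the text: a contested robot goes to
    the patient with the greater gain, and the other patient falls back
    to the best remaining robot. With more patients than robots, one
    patient ends up unassigned.
    """
    assignment, taken = {}, set()
    # Consider (gain, patient, robot) candidates from best to worst.
    candidates = sorted(
        ((gains[p][r], p, r)
         for p in range(len(gains)) for r in range(len(gains[p]))),
        reverse=True)
    for gain, p, r in candidates:
        if p not in assignment and r not in taken:
            assignment[p] = r
            taken.add(r)
    return assignment

# Both patients' best robot is 0; patient 0 gains more from it, so
# patient 1 falls back to robot 1:
assert best_robot_only([[5.0, 2.0], [4.0, 3.0]]) == {0: 0, 1: 1}
```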

As a basic statistical test, a one-way repeated-measures analysis of variance (ANOVA) with Holm-Sidak post-hoc tests was calculated with four conditions (disjunctive, time-indexed, best robot only, switch halfway) and nine samples per condition (the nine evaluation scenarios). A priori, both disjunctive and time-indexed models were expected to outperform both baseline schedules: optimization should yield a superior result unless the naïve schedules are already optimal. No significant difference was expected between disjunctive and time-indexed schedules; given infinite optimization iterations, both models should converge to the same schedule. It should be noted that the ANOVA independence assumption is violated since the “best robot only” result is the same in the “12 time steps” and “12 time steps and exit time” scenarios; we considered this acceptable since the ANOVA is simply a quick validation of the results rather than the primary outcome measure.

Different skill curves

Realistically, patients have different initial skill levels before training (e.g., different impairment levels after neurological injury) and learn skills at different rates. To represent these differences, every patient-robot pairing was assigned a different skill curve by varying variables \({c}_{2,{r}_{i},{p}_{j}}\) and \({c}_{3,{r}_{i},{p}_{j}}\) in Eqs. ( 17 ) and ( 18 ). These variables affect both the rate of growth and the initial skill value.

Forty-two skill curves with different values of \({c}_{2,{r}_{i},{p}_{j}}\) and \({c}_{3,{r}_{i},{p}_{j}}\) were created and randomly distributed to patients and robots in a group. \({c}_{2,{r}_{i},{p}_{j}}\) ranged from 0.01 to 100 while \({c}_{3,{r}_{i},{p}_{j}}\) ranged from 5 to 1000. This was done three times to create three ‘groups’ of patients with different skill curves, allowing us to determine whether the system could consistently create a good schedule that did not depend on one very specific set of skill distributions. For each of the three groups, the same 9 scenarios as in the previous subsection were simulated, and the two baseline approaches were evaluated as well. Again, optimizations ran for 1000 iterations.
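Such a group can be generated as follows. The sampling distribution is an assumption (the paper only gives the parameter ranges), and the seed, function names, and uniform draws are hypothetical.

```python
import random

def make_skill_curves(n_curves=42, seed=0):
    """Generate (c2, c3) parameter pairs for candidate skill curves.

    Assumption: uniform sampling over the stated ranges
    c2 in [0.01, 100] and c3 in [5, 1000].
    """
    rng = random.Random(seed)
    return [(rng.uniform(0.01, 100.0), rng.uniform(5.0, 1000.0))
            for _ in range(n_curves)]

def assign_curves(curves, n_patients, n_robots, seed=0):
    """Randomly distribute one curve to each patient-robot pairing."""
    rng = random.Random(seed)
    return {(p, r): rng.choice(curves)
            for p in range(n_patients) for r in range(n_robots)}

curves = make_skill_curves()
group = assign_curves(curves, n_patients=5, n_robots=7)
assert len(curves) == 42 and len(group) == 35
```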

As a basic statistical test, a two-way mixed ANOVA was conducted with one within-subject factor (schedule: disjunctive, time-indexed, best robot only, switch halfway), one between-subjects factor (group: 1–3) and nine samples per bin (nine evaluation scenarios). Holm-Sidak post-hoc tests were used to compare schedules, and effect size was reported as partial eta-squared. Significant differences between schedules were expected a priori since optimization should yield a superior result unless the naïve schedules are already optimal. Thus, the ANOVA serves as a quick check of the optimization rather than as the primary outcome.

Effect of patients, robots and time steps on optimization duration

The optimization duration is expected to increase as the number of patients, robots, and time steps increases. To determine the effect of each of these parameters, we applied both disjunctive and time-indexed models to the “equal skill curves” scenario, varied the three parameters, and measured the optimization duration. The following variations were tested:

At 5 patients and 5 robots, test 1, 2, 3, 4, 5, and 12 time steps,

At 5 robots and 5 time steps, test 1, 2, 3, 4 and 5 patients,

At 5 patients and 5 time steps, test 1, 2, 3, 4, and 5 robots.

All optimizations were run on a personal computer with an 8-core 3600-MHz Ryzen 7 3700X central processing unit (AMD, Santa Clara, CA).

The returned score function value represents the total skill gain: the difference between the skill value at the end and the start of the session, summed across all patients and skills. Table 1 shows total skill gain obtained with the different schedule types for the nine scenarios with equal skill curves.

The one-way repeated-measures ANOVA was significant (p < 0.001), and post-hoc tests found that both disjunctive and time-indexed schedules resulted in higher total skill gain than the “best robot” and “switch halfway” schedules (p < 0.01 in all cases). Switching halfway resulted in higher total skill gain than the “best robot” schedule (p < 0.001), but there was no significant difference between disjunctive and time-indexed schedules.

Tables 2 , 3 , and 4 show the total skill gains obtained with different schedule types for the nine scenarios with different skill curves. Each table corresponds to one of the three patient groups (skill curve assignments).

The ANOVA found a significant main effect of schedule (p < 0.001, partial eta-squared = 0.92) and a significant interaction effect of schedule × group (p < 0.001, partial eta-squared = 0.56). In post-hoc tests, both disjunctive and time-indexed schedules resulted in higher total skill gain than both baseline schedules (p < 0.001 for all comparisons), the disjunctive schedule resulted in higher total skill gain than the time-indexed schedule (p < 0.001), and the “switch halfway” schedule resulted in higher total skill gain than the “best robot” schedule (p < 0.001).

Figure  1 shows a visual representation of the total skill gain over time (rather than only at the end of the session) using the four schedule types (disjunctive, time-indexed, best robot, switch halfway) for two representative examples: (a) 6 patients, 5 robots and 7 time steps, and (b) 5 patients, 7 robots and 12 time steps. Figures  2 and 3 show the total skill gain over time in the same two examples and with the same schedule types, but separately for each individual patient rather than as a group.

Figure 1

Two examples of total skill gain over time using four schedule types: disjunctive, time-indexed system, best robot, and switch halfway. Example a is for group 2 with 6 patients, 5 robots, and 7 time steps. Example b is for group 1 with 5 patients, 7 robots, and 12 time steps

Figure 2

Total skill gain for each patient over time in group 2 with 6 patients, 5 robots, and 7 time steps. The subplots represent different scheduling approaches

Figure 3

Total skill gain for each patient over time in group 1 with 5 patients, 7 robots, and 12 time steps. The subplots represent different scheduling approaches

As mentioned, this evaluation was done with the “equal skill curves” scenario. The following results were obtained for the disjunctive model:

When varying the number of time steps with 5 patients and 5 robots, the optimization duration is 0.3 s for 1 time step, 11.9 s for 2 steps, 17.7 s for 3 steps, 61.3 s for 4 steps, 1201 s for 5 steps, and 2056 s for 12 steps.

When varying the number of patients with 5 robots and 5 time steps, the optimization duration is 0.3 s for 1 patient, 3.6 s for 2 patients, 240 s for 3 patients, 273 s for 4 patients, and 1201 s for 5 patients.

When varying the number of robots with 5 patients and 5 time steps, the optimization duration is 0.3 s for 1 robot, 33.9 s for 2 robots, 51.1 s for 3 robots, 69.8 s for 4 robots, and 1201 s for 5 robots.

The following results were obtained for the time-indexed model:

When varying the number of time steps with 5 patients and 5 robots, the optimization duration is 0.1 s for 1 time step, 4.1 s for 2 steps, 23.8 s for 3 steps, 380 s for 4 steps, 1479 s for 5 steps, and 16,400 s for 12 steps.

When varying the number of patients with 5 robots and 5 time steps, the optimization duration is 0.3 s for 1 patient, 24 s for 2 patients, 717 s for 3 patients, 800 s for 4 patients, and 1479 s for 5 patients.

When varying the number of robots with 5 patients and 5 time steps, the optimization duration is 0.3 s for 1 robot, 51 s for 2 robots, 287 s for 3 robots, 299 s for 4 robots, and 1479 s for 5 robots.

These durations vary somewhat whenever the optimization is re-run, but we consider an approximate time to be sufficient for illustration.

The time-indexed and disjunctive systems significantly outperformed the baseline schedules, as indicated by the repeated-measures ANOVA and Table 1 . In all scenarios, both models were always better than the “best robot” schedule. While there was no statistically significant difference in total skill gain between time-indexed and disjunctive systems, there were some nonsignificant differences. Most prominently, there were a few scenarios with 1-step exit times where the disjunctive system outperformed the “switch halfway” schedule but the time-indexed system did not (Table 1 ).

Both time-indexed and disjunctive systems should converge to the same schedule given infinite optimization iterations. The outcome differences are due to a finite number of optimization iterations, which gives an advantage to the simpler disjunctive model. In that model, start and stop times can be changed simply by changing integer values assigned to a patient-robot pairing. Conversely, in the time-indexed system, the starting time of a patient on a robot is determined by a binary variable. Thus, to change the time when a patient starts training on a robot, the system must toggle off one binary variable, toggle another one on, and possibly change the duration integer, resulting in an overall more difficult optimization process that requires more iterations. To verify that time-indexed and disjunctive systems would converge to the same schedule given infinite optimization iterations, we later applied both models to several simpler scenarios (3 patients, 3 robots, 12 time steps; 5 patients, 5 robots, 1–3 time steps) and ran optimizations until no changes occurred for 2000 iterations. In all cases, disjunctive and time-indexed models converged on the same schedule.

While the variables used in the time-indexed model make the optimization take longer, they do remove an inherent limitation of the disjunctive system. The disjunctive model only allows a patient to train on a robot once within a session. While this works for the constraints in this paper, it would not work in a less constrained situation where there is no limit on how often a patient can train on a robot. Conversely, the time-indexed model could be easily modified to remove this constraint.

When the skill curves were not equal, both disjunctive and time-indexed schedules significantly outperformed both baseline schedules. This was expected since optimization should be more effective than a naïve schedule. However, the difference in total skill gain between disjunctive and time-indexed schedules was now statistically significant, with the disjunctive schedule overall resulting in higher total skill gain. Additionally, the time-indexed system was worse than the “switch halfway” schedule in two scenarios while the disjunctive system always resulted in higher total skill gain than the “switch halfway” schedule. While disjunctive and time-indexed systems should converge to the same schedule given infinite iterations, the disjunctive system thus appears to be preferable given finite optimization durations that are likely to be seen in realistic robot gyms.

As Fig.  1 shows, the baseline schedules outperform disjunctive and time-indexed schedules in initial time steps, but the opposite becomes true in later time steps. This is due to diminishing returns in individual skill curves: as patients’ skill gain on a robot decreases with time spent on that robot, assigning a patient to a single robot (as in the baseline schedules) has high initial skill gains. Conversely, the optimized schedules can plan over the long term, sacrificing high initial gains for a higher total gain.
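This trade-off can be checked numerically with the equal-skill-curve parameters from the Methods (c1 = 100, c2 = 1, c3 = 10, c4 = 1, initial u = 1). The curve form below is a reconstruction from the fraction quoted there, not the authors' code.

```python
def skill(u):
    # Simplified equal-curve skill value, assuming the reconstructed form
    # c1 * (c2 + c4*u) / (c1 + c4*u + c3) with c1=100, c2=1, c3=10, c4=1.
    return 100 * (1 + u) / (110 + u)

def gain(steps, u0=1):
    # Gain from `steps` training steps starting at u0 previous steps.
    return skill(u0 + steps) - skill(u0)

# Over a 12-step session, splitting time across two robots beats
# spending the whole session on one robot, because per-step gains shrink
# the longer a single skill is trained:
single = gain(12)       # whole session on one robot
split = 2 * gain(6)     # half the session on each of two robots
assert split > single
```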

Figures  2 and 3 also show that, while disjunctive and time-indexed systems result in higher total skill gain than baseline schedules, not all patients benefit equally. For example, in Fig.  2 , patient 5 has no skill gain at all in the “best robot” schedule; since there are more patients than robots, one patient must be neglected in the “best robot” schedule, and patient 5 benefits more from other schedules. Conversely, patient 3 exhibits the highest gain in the “switch halfway” schedule rather than in an optimized schedule. This is less of an issue in Fig.  3 , where there are more robots than patients. Nonetheless, optimizing for highest total skill gain does not mean that all patients benefit equally, and we discuss the implications later.

Finally, the significant schedule × group interaction effect indicates that the three groups had different skill gains, with group 2 on average having the highest gains. This does not, however, mean that the system performed better for group 2. As skill curve distributions differ between groups, some groups simply have more potential for skill gain. In the future, it would be beneficial to identify the factors that determine the potential benefit of optimization, allowing researchers to decide when optimization should be performed. For example, instead of only having three groups, a Monte Carlo simulation could be used to generate many different groups, and the impact of each parameter in Eqs. ( 9 ) and ( 16 ) could be evaluated. However, we believe that more complex scenarios should be created (Expanding the scenario simplifications) and optimization duration should be reduced (Optimization duration) before such detailed evaluations are conducted.

Optimization duration

While disjunctive and time-indexed models can converge toward an optimal schedule regardless of problem complexity as long as specific conditions are met, “Effect of patients, robots and time steps on optimization duration” shows that more time and optimization iterations are generally needed to reach the optimum as the number of robots, patients or time steps increases. Due to its relative simplicity, the disjunctive system overall requires less time for optimization than the time-indexed system: for example, the most complex scenario required 2056 s with the disjunctive system but 16,400 s with the time-indexed system. The disjunctive system is thus again considered preferable for realistic situations where limited time is available for optimization.

In our evaluations, the number of iterations was set to 1000, but the schedule is not necessarily improved in every iteration. Generally, 3–6 meaningful ‘improvements’ to the schedule occurred over the course of these iterations; however, it is impossible to know beforehand when these improvements will occur. Thus, increasing the iteration cap above 1000 may result in slightly better outcomes at the cost of more optimization time. Conversely, reducing the iteration cap would reduce optimization time but may have either a minimal or critical effect on the final schedule.

The increase in optimization time as the number of patients, robots, or time steps increases is a byproduct of the problem being an MINLP. As MINLPs are NP-hard problems, an optimal solution cannot be found in polynomial time unless P = NP [38, 39]. The optimization duration thus increases exponentially with problem size and can become impractical if the problem is too complex, as seen in “Effect of patients, robots and time steps on optimization duration”. While there is no way to avoid exponential increases in optimization duration within the current framework, some steps can nonetheless be taken to speed up optimization. For example, in the current study, the BARON optimizer created all schedules with no initial guess. It would be possible to instead begin with an initial schedule (e.g., one created by a therapist based on a quick evaluation of the patients) and have the system try to improve upon it.
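One way to produce such an initial guess is a myopic greedy heuristic, sketched below. The `gain[p][r]` table of expected one-step gains is a hypothetical input of my own; the paper's actual gains come from the skill curves in Eqs. (9) and (16):

```python
def greedy_initial_schedule(gain, n_steps):
    """Warm-start heuristic: at every time step, give each patient the
    highest-gain robot that is still free (at most one patient per robot).
    `gain[p][r]` is a hypothetical expected one-step gain table."""
    n_patients = len(gain)
    n_robots = len(gain[0])
    schedule = []
    for _ in range(n_steps):
        taken = set()   # robots already assigned in this time step
        step = {}
        for p in range(n_patients):
            free = [(gain[p][r], r) for r in range(n_robots) if r not in taken]
            if free:
                _, best = max(free)
                step[p] = best
                taken.add(best)
        schedule.append(step)
    return schedule
```

A schedule like this could then be supplied to the MINLP solver as an initial point instead of starting with no guess at all.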

As another alternative within the current framework, the number of time steps can be varied by the person supervising the optimization. Increasing the number of time steps increases the session duration or improves schedule granularity (e.g., patients being able to switch robots every 5 min vs. every 10 min), but also exponentially increases optimization duration. Furthermore, the optimal schedule for a 7-time-step scenario is not necessarily the same as the optimal schedule for the first 7 steps of a 12-step scenario—the system may make different decisions in the first 7 steps if it ‘knows’ that more steps are available later. Thus, in a real-world robotic gym, the optimization supervisor could choose a preferable tradeoff between schedule duration/granularity, schedule optimality (related to number of optimization iterations), and computational cost.

Finally, to overcome the limitations of MINLP, we have begun work on a different patient-robot assignment framework that does not use an optimizer. Instead, it uses a neural network trained to predict skill growth. Preliminary results have shown that the neural network approach drastically reduces the time needed to create a schedule, and we are currently combining it with a more complex scenario (Expanding the scenario simplifications).

Different optimization goals

In the current study, the optimization goal was always the same: maximizing total skill gain across all patients and skills. While this can be desirable in many cases, it may also have downsides in real-world situations. For example, it may compromise the rehabilitation of patients who are considered “less promising”, reducing their quality of life post-rehabilitation. Even if no patient is neglected, the same absolute gain does not always have the same practical meaning. For example, the Fugl-Meyer Assessment upper limb score can range from 0 to 66 [28], but improving it from 0 to 6 may have different implications for the patient than improving it from 60 to 66.

To address this issue, future studies could optimize different objective functions. As a preliminary follow-up, we have implemented two alternative objective functions for the time-indexed model that aim to distribute skill gains more evenly among patients. First, we modified objective function (17) to include a penalty element corresponding to the variance in skill gain among patients, thus ensuring that all patients improve to a similar degree. This resulted in the following objective function:

where m is a penalty coefficient.
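The equation itself is missing from the extracted text. As a hedged reconstruction (an assumption, not the published formula): if \(G_{p_j}\) denotes patient \(p_j\)'s total skill gain under schedule \(x\), subtracting the variance penalty from objective (17) gives a function of the form:

```latex
f_{(19)} = f\left(x_{r_i,p_j,t},\, d_{r_i,p_j}\right) - m \cdot \operatorname{Var}_{j}\left(G_{p_j}\right)
```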

Second, we implemented an objective function that takes each patient’s maximum possible skill gain into account. First, the optimization is run with each patient individually (number of patients = 1) using objective function (17) to obtain that patient’s maximum possible skill gain if they are the only patient in the gym. The optimization is then run for all patients together with a new objective function that aims to optimize total skill gain relative to each patient’s individual maximum possible gain:

where \(f(x_{r_i,p_j,t}, d_{r_i,p_j})\) is Eq. (17) and \(m_{p_j}\) is a penalty coefficient for patient \(p_j\). This coefficient allows the priority of each patient to be modified as desired.
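This equation is also missing from the extracted text. A hedged reconstruction consistent with the surrounding description (again an assumption: \(G_{p_j}\) is the patient's total gain and \(G^{\max}_{p_j}\) the maximum gain found by the individual optimization):

```latex
f_{(20)} = \sum_{j} m_{p_j} \cdot \frac{G_{p_j}}{G^{\max}_{p_j}}
```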

We then applied the time-indexed model to a simple scenario (3 patients, 5 robots, 5 time steps, different skill curves) using the original objective function (17) and new functions (19) and (20). This resulted in the example skill gains shown in Fig. 4. The example shows that the two new functions lead to more similar skill gains among patients, resulting in a benefit for patient 3 but losses for the other two patients. These functions therefore also have drawbacks; for example, they may unfairly slow down patients with more “potential”. The most appropriate optimization goal may be dependent on the situation: for example, groups of patients with similar impairment levels or similar potential for improvement may benefit from a different optimization goal than more heterogeneous groups.

Figure 4: A preliminary comparison of total skill gain for each patient over time using (a) the original time-indexed system, (b) a modified system whose objective function includes a penalty corresponding to variance in skill gain among patients, and (c) a modified system that aims to optimize patients’ skill gains relative to their individual maximum possible gains. All examples are for 3 patients, 5 robots, 5 time steps, different skill curves.

Expanding the scenario simplifications

As mentioned in the Methods, two scenario simplifications are quite severe. First, we assumed that each patient’s current skill levels are known perfectly and that skill improvement curves are deterministic. Realistically, significant uncertainty is involved in skill improvement and assessment, and would need to be considered (Stochastic modeling). Second, we assumed that skill improvement depends only on time spent training the skill. Realistically, improvement is influenced by multiple factors related to the patient and rehabilitation environment, which would also need to be considered (Robot and patient characteristics). After discussing these possible expansions, we present our long-term view of how dynamic patient-robot assignment algorithms would realistically be used (Long-term vision).

Stochastic modeling

Multiple sources of uncertainty could be included in a simulated robot gym. For example, patient skill level realistically would not be observable directly, and would need to be estimated from measurements such as task performance (i.e., success rate in the exercise) as well as the amount, quality (e.g., smoothness) and intensity of movement [21, 22, 23, 24]. This could be simulated by representing patient skill level as a hidden state from which observable outputs are generated via a model that describes the probability distribution of output values in a given hidden state. Second, patient skill improvement over time would be a stochastic process that could be simulated using a probabilistic model such as a Markov process or a Gaussian process. Patient skill forgetting and spontaneous recovery could be modeled as random changes between consecutive sessions, and standardized clinical tests could be modeled as an accurate skill estimate that can only be obtained between sessions. Finally, patients arriving and leaving one by one could be modeled as different start/stop times for each patient within a session that may or may not be known to the optimization algorithm in advance.
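A minimal sketch of such a stochastic model, with hidden skill as a bounded state, noisy observations, and random between-step variation (all distributions, rates, and bounds here are illustrative assumptions, not the paper's):

```python
import random

def observe(skill, rng, noise=0.1):
    """Noisy observable output (e.g., task success rate) generated
    from the hidden skill level, clipped to [0, 1]."""
    return min(1.0, max(0.0, skill + rng.gauss(0.0, noise)))

def step_skill(skill, rng, rate=0.2, sigma=0.02):
    """Stochastic skill update: a deterministic diminishing-returns gain
    plus random variation (forgetting / spontaneous recovery)."""
    gain = rate * (1.0 - skill)           # diminishing returns toward 1.0
    return min(1.0, max(0.0, skill + gain + rng.gauss(0.0, sigma)))

rng = random.Random(42)
skill = 0.3                               # hidden initial skill level
trace = []
for _ in range(5):
    skill = step_skill(skill, rng)        # hidden state evolves stochastically
    trace.append(observe(skill, rng))     # only noisy outputs are visible
```

An estimator (e.g., a hidden Markov model or Kalman-style filter) would then work backwards from `trace` to the hidden `skill`.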

Once these uncertainties have been incorporated, the optimization algorithm would need to be executed dynamically after each time step. In the current “static” formulation, the algorithm determines the entire session schedule as an open-loop solution at the beginning of the session since patient skill improvement is perfectly predictable given the deterministic model. However, a schedule determined ahead of time cannot be effective in the presence of uncertainties. In a dynamic schedule, the algorithm would incorporate new information (e.g., actual measurements, unexpected patient arrival/departure) after each time step and make patient-robot assignment decisions for the next time step.
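The closed-loop structure just described can be sketched as a receding-horizon loop. The `estimate_skills` and `optimize_next_step` callables below stand in for the measurement model and the MINLP solver; both are hypothetical placeholders:

```python
def run_dynamic_session(estimate_skills, optimize_next_step, n_steps):
    """Dynamic scheduling sketch: after every time step, fold in the
    newest observations and re-plan only the next patient-robot assignment."""
    history = []                                  # assignments executed so far
    for t in range(n_steps):
        estimates = estimate_skills(history)      # incorporate new measurements
        assignment = optimize_next_step(estimates, t)
        history.append(assignment)                # execute one step, then re-plan
    return history
```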

Robot and patient characteristics

Multiple characteristics of robots and patients could be considered in a robotic gym. For example, rehabilitation robots commonly feature selectable difficulty levels [13, 14] and control strategies (e.g., assistive vs. challenge-based [5]). These could be incorporated as a two-stage patient-robot assignment algorithm: first assign a patient to a robot, then choose the robot’s settings. In an expanded model where patient ‘hidden’ skill is observable via performance and other outputs (see previous subsection), these settings may influence both skill improvement and observable outputs. For example, if difficulty is too low, patients may exhibit high performance but low skill improvement. Challenge-based strategies may lead to lower skill improvement in unskilled patients than assistive strategies, but higher improvement in more skilled patients. Finally, training on some robots may generalize across skills: a large gain in the primary skill (as in the current study) accompanied by smaller gains in other skills.

Additionally, patient motivation and engagement have a significant effect on immediate performance [7, 8, 9, 11] and long-term functional gains [40]. This could be modeled by having unmotivated patients exhibit worse task performance and lower skill improvement. Motivation could also be modified by events within a session itself. For example, patients may dislike specific robots (and lose motivation if assigned to them) or dislike switching robots too frequently. An excessive difficulty setting may decrease motivation, and not assigning a patient to any robot may decrease it as well (due to patient perception of being neglected). Such a patient model could also include fatigue as a related factor. For example, fatigue may increase as the patient exercises (especially at high difficulty settings), leading to decreased motivation. Fatigue could be decreased by not assigning a patient to any robot for a time step. However, if motivation decreases below a threshold, the patient may even leave the session unexpectedly.

Finally, the type and degree of patient impairment could be modeled as having an impact not only on initial patient skill levels, but also skill curves. For example, while we currently modeled all patients as having skill curves with diminishing returns, we could instead model a mix of curves: some with diminishing returns and some with, e.g., increasing returns [31, 32].

Long-term vision

As mentioned in “Stochastic modeling”, optimization algorithms would likely need to be executed dynamically after each time step due to the presence of uncertainties. Additionally, computational cost increases with the number of time steps to be optimized (Effect of patients, robots and time steps on optimization duration). Thus, we believe that, in the long term, robot gyms will not have a fully fixed session schedule. Instead, before the session, the optimization algorithm will create a tentative schedule for the first few time steps based on patients’ medical files and data from previous sessions. The therapist will then move around the gym assisting individual patients while the robot gym software monitors all patients as a group. The software will dynamically re-optimize the schedule for the next few time steps as new data become available and will provide suggestions to the therapist when it believes that a patient should be moved to a different robot. As the computational cost for optimizing 1–3 time steps is relatively low, this can be done during the session itself, allowing a therapist to focus on helping individual patients without having to think about gym scheduling.

Learning from human experts

Finally, our study focused on purely computer-driven optimization without any human knowledge. In the future, an alternative approach could be to learn by demonstration from a human expert. For example, a therapist could manually assign patients to robots during a session, and their decisions could be recorded together with task performance metrics and robot sensor data. Supervised machine learning could then be used to train a patient-robot assignment policy to mimic the therapist’s decisions. For example, a related work [41] presented a framework for learning a set of heuristics from human demonstration for resource allocation and scheduling in a patient care scenario. This would require an entirely different class of algorithms but may represent a more practical and realistic implementation approach.

The main challenge with such an approach is that suitable datasets are currently unavailable and would need to be obtained in a well-equipped robot gym that is currently only accessible to a few rehabilitation facilities. As an initial step, a dataset could be generated by human interaction with a simulation. For example, a modified version of the simulation could be designed to present a human expert with simulated patients’ skill levels (or measurable variables as described in “Stochastic modeling”) after each time step, and the expert could then make manual patient-robot assignment choices after each time step. While this would still suffer from similar simplifications as the current scenario, it would allow machine learning algorithms to be evaluated on a simulated dataset that nonetheless involves a real human expert.

Our study presented a simplified model of a robotic rehabilitation gym, where multiple patients train with multiple robots to learn different skills. Time-indexed and disjunctive models were used to optimize total skill gain across all patients and skills within a training session. Both optimization models significantly outperformed two baseline schedule types: having each patient stay on a single robot throughout the session and having patients switch robots halfway through the session. The disjunctive model resulted in higher total skill gain and required less optimization time than the time-indexed model in the given scenarios. Though our simulation study involved unrealistically simple scenarios, it nonetheless demonstrated that intelligently moving patients between rehabilitation robots can improve skill acquisition in a multi-patient multi-robot environment. Finally, we discussed how these simplifications could be expanded on in the future.

While robotic rehabilitation gyms have not yet become commonplace in clinical practice, prototypes of them already exist and are likely to become increasingly popular as the price of rehabilitation robots decreases. Our study thus presents a way to use automated decision-making and decision support to assist chronically overworked physical and occupational therapists, allowing them to effectively supervise a larger number of patients undergoing rehabilitation. While numerous challenges would need to be solved before the envisioned system could be used in practice, it could in the long term allow more efficient delivery of technologically aided rehabilitation, and may be broadly applicable to other scenarios where groups of humans work with groups of robots or virtual agents to learn skills.

Availability of data and materials

All code developed in the current study is publicly available on Zenodo at https://zenodo.org/record/7308921 ( https://doi.org/10.5281/zenodo.7308921 ).

Abbreviations

  • BARON: Branch-And-Reduce Optimization Navigator

  • MINLP: Mixed-integer nonlinear programming

  • ANOVA: Analysis of variance

Lo AC, Guarino PD, Richards LG, Haselkorn JK, Wittenberg GF, Federman DG, et al. Robot-assisted therapy for long-term upper-limb impairment after stroke. N Engl J Med. 2010;362:1772–83.


Klamroth-Marganska V, Blanco J, Campen K, Curt A, Dietz V, Ettlin T, et al. Three-dimensional, task-specific robot therapy of the arm after stroke: a multicentre, parallel-group randomised trial. Lancet Neurol. 2014;13:159–66.


Aprile I, Germanotta M, Cruciani A, Loreti S, Pecchioli C, Cecchi F, et al. Upper limb robotic rehabilitation after stroke: a multicenter, randomized clinical trial. J Neurol Phys Ther. 2020;44:3–14.

Fisher Bittmann M, Patton JL. Forces that supplement visuomotor learning: a “sensory crossover” experiment. IEEE Trans Neural Syst Rehabil Eng. 2017;25:1109–16.


Marchal-Crespo L, Reinkensmeyer DJ. Review of control strategies for robotic movement training after neurologic injury. J Neuroeng Rehabil. 2009;6.

Demofonti A, Carpino G, Zollo L, Johnson MJ. Affordable robotics for upper limb stroke rehabilitation in developing countries: a systematic review. IEEE Trans Med Robot Bionic. 2021;3:11–20.

Novak D, Nagle A, Keller U, Riener R. Increasing motivation in robot-aided arm rehabilitation with competitive and cooperative gameplay. J Neuroeng Rehabil. 2014;11:64.


Baur K, Schättin A, de Bruin ED, Riener R, Duarte JE, Wolf P. Trends in robot-assisted and virtual reality-assisted neuromuscular therapy: a systematic review of health-related multiplayer games. J Neuroeng Rehabil. 2018;15.

Pereira F, Bermúdez i Badia S, Jorge C, Cameirão MS. The use of game modes to promote engagement and social involvement in multi-user serious games: a within-person randomized trial with stroke survivors. J Neuroeng Rehabil. 2021;18:62.

Johnson MJ, Loureiro RCV, Harwin WS. Collaborative tele-rehabilitation and robot-mediated therapy for stroke rehabilitation at home or clinic. Intell Serv Robot. 2008;1:109–21.

Ballester BR, Bermúdez i Badia S, Verschure PFMJ. Including social interaction in stroke VR-based motor rehabilitation enhances performance: a pilot study. Presence Teleoperators Virtual Environ. 2012;21:490–501.

Batson JP, Kato Y, Shuster K, Patton JL, Reed KB, Tsuji T, et al. Haptic coupling in dyads improves motor learning in a simple force field. In: Proceedings of the 42nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2020.

Goršič M, Darzi A, Novak D. Comparison of two difficulty adaptation strategies for competitive arm rehabilitation exercises. In: Proceedings of the 2017 IEEE International Conference on Rehabilitation Robotics. London, UK; 2017. p. 640–5.

Baur K, Wolf P, Riener R, Duarte J. Making neurorehabilitation fun: Multiplayer training via damping forces balancing differences in skill levels. In: Proceedings of the 2017 IEEE International Conference on Rehabilitation Robotics. 2017.

Goršič M, Cikajlo I, Goljar N, Novak D. A multisession evaluation of a collaborative virtual environment for arm rehabilitation. Presence Virtual Augment Real. 2020;27:274–86.

Wuennemann MJ, Mackenzie SW, Lane HP, Peltz AR, Ma X, Gerber LM, et al. Dose and staffing comparison study of upper limb device-assisted therapy. NeuroRehabilitation. 2020;46:287–97.

Bustamante Valles K, Montes S, de Jesus Madrigal M, Burciaga A, Martínez ME, Johnson MJ. Technology-assisted stroke rehabilitation in Mexico: a pilot randomized trial comparing traditional therapy to circuit training in a robot/technology-assisted therapy gym. J Neuroeng Rehabil. 2016;13:83.

Jakob I, Kollreider A, Germanotta M, Benetti F, Cruciani A, Padua L, et al. Robotic and sensor technology for upper limb rehabilitation. Phys Med Rehabil. 2018;10:S189–97.


Aprile I, Pecchioli C, Loreti S, Cruciani A, Padua L, Germanotta M. Improving the efficiency of robot-mediated rehabilitation by using a new organizational model: an observational feasibility study in an Italian rehabilitation center. Appl Sci. 2019;9:5357.

Bessler J, Prange-Lasonder GB, Schaake L, Saenz JF, Bidard C, Fassi I, et al. Safety assessment of rehabilitation robots: a review identifying safety skills and current knowledge gaps. Front Robot AI. 2021;8: 602878.

Balasubramanian S, Colombo R, Sterpi I, Sanguineti V, Burdet E. Robotic assessment of upper limb motor function after stroke. Am J Phys Med Rehabil. 2012;91(11 Suppl 3):S255–69.

de los Reyes-Guzmán A, Dimbwadyo-Terrer I, Trincado-Alonso F, Monasterio-Huelin F, Torricelli D, Gil-Agudo A. Quantitative assessment based on kinematic measures of functional impairments during upper extremity movements: a review. Clin Biomech. 2014;29:719–27.

Shirota C, Balasubramanian S, Melendez-Calderon A. Technology-aided assessments of sensorimotor function: current use, barriers and future directions in the view of different stakeholders. J Neuroeng Rehabil. 2019;16:53.

Tran V-D, Dario P, Mazzoleni S. Kinematic measures for upper limb robot-assisted therapy following stroke and correlations with clinical outcome measures: a review. Med Eng Phys. 2018;53:13–31.

Verhoeven FM, Newell KM. Unifying practice schedules in the timescales of motor learning and performance. Hum Mov Sci. 2018;59:153–69.

Lee JY, Oh Y, Kim SS, Scheidt RA, Schweighofer N. Optimal schedules in multitask motor learning. Neural Comput. 2016;28:667–85.

Carr JH, Shepherd RB, Nordholm L, Lynne D. Investigation of a new motor assessment scale for stroke patients. Phys Ther. 1985;65:175–80.


Fugl-Meyer AR, Jääskö L, Leyman I, Olsson S, Steglind S. The post-stroke hemiplegic patient. 1. A method for evaluation of physical performance. Scand J Rehabil Med. 1975;7:13–31.


Riener R, Dislaki E, Keller U, Koenig A, Van Hedel H, Nagle A. Virtual reality aided training of combined arm and leg movements of children with CP. Stud Health Technol Inform. 2013;184:349–55.


Mazzoleni S, Tran V-D, Dario P, Posteraro F. Wrist robot-assisted rehabilitation treatment in subacute and chronic stroke patients: from distal-to-proximal motor recovery. IEEE Trans Neural Syst Rehabil Eng. 2018;26:1889–96.

Newell KM, Liu YT, Mayer-Kress G. Time scales in motor learning and development. Psychol Rev. 2001;108:57–82.

Mazur JE, Hastie R. Learning as accumulation: a reexamination of the learning curve. Psychol Bull. 1978;85:1256–74.

Ku W-Y, Beck JC. Mixed Integer Programming models for job shop scheduling: a computational analysis. Comput Oper Res. 2016;73:165–73.

Kondili E, Pantelides CC, Sargent RWH. A general algorithm for scheduling batch operations. In: 3rd International Symposium on Process System Engineering. 1988. p. 62–75.

Kanet JJ, Ahire SL, Gorman MF. Constraint programming for scheduling. In: Handbook of Scheduling, vol. 47. Chapman and Hall/CRC Press; 2004. p. 1–21.

Tawarmalani M, Sahinidis NV. A polyhedral branch-and-cut approach to global optimization. Math Program. 2005;103:225–49.

IBM ILOG CPLEX. V12.8: User’s Manual for CPLEX. International Business Machines Corporation; 2017.

Ali M, Qaisar S, Naeem M, Mumtaz S, Rodrigues JJPC. Combinatorial resource allocation in D2D assisted heterogeneous relay networks. Futur Gener Comput Syst. 2020;107:956–64.

Köppe M. On the complexity of nonlinear mixed-integer optimization. In: Lee J, Leyffer S, editors. Mixed integer nonlinear programming. New York: Springer; 2012. p. 533–57.


Rapolienė J, Endzelytė E, Jasevičienė I, Savickas R. Stroke patients motivation influence on the effectiveness of occupational therapy. Rehabil Res Pract. 2018;2018:9367942.


Gombolay M, Yang XJ, Hayes B, Seo N, Liu Z, Wadhwania S, et al. Robotic assistance in the coordination of patient care. Int J Rob Res. 2018;37:1300–16.


Acknowledgements

The authors thank Matjaž Mihelj of the University of Ljubljana for fruitful discussion on the topic of multi-robot environments.

This work was funded by the National Science Foundation under Grant no. 2024813.

Author information

Authors and affiliations

Department of Electrical and Computer Engineering, University of Wyoming, 1000 E University Ave., Laramie, WY, 82071, USA

Benjamin A. Miller, Bikranta Adhikari, Chao Jiang & Vesna D. Novak

Department of Electrical Engineering and Computer Science, University of Cincinnati, 2600 Clifton Ave., Cincinnati, OH, 45221, USA

Benjamin A. Miller & Vesna D. Novak


Contributions

BAM implemented the majority of the mathematical model and optimization procedure, conducted the evaluations, prepared the tables and figures, and wrote part of the manuscript. BA contributed to the mathematical model and optimization procedure. CJ co-developed the study design, implemented part of the mathematical model and optimization procedure, and wrote part of the manuscript. VDN co-developed the study design, contributed to the mathematical model, and wrote part of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Vesna D. Novak.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Miller, B.A., Adhikari, B., Jiang, C. et al. Automated patient-robot assignment for a robotic rehabilitation gym: a simplified simulation model. J NeuroEngineering Rehabil 19, 126 (2022). https://doi.org/10.1186/s12984-022-01105-4


Received: 10 November 2021

Accepted: 27 October 2022

Published: 16 November 2022

DOI: https://doi.org/10.1186/s12984-022-01105-4


Keywords

  • Rehabilitation robotics
  • Rehabilitation gym
  • Group rehabilitation
  • Optimization
  • Mathematical modeling
  • Task scheduling

Journal of NeuroEngineering and Rehabilitation

ISSN: 1743-0003


Automatic Addison


Build the Future

How to Assign Denavit-Hartenberg Frames to Robotic Arms


In this tutorial, we’ll learn the fundamentals of assigning Denavit-Hartenberg coordinate frames (i.e. x, y, and z axes) to different types of robotic arms.

Denavit-Hartenberg (D-H) frames help us to derive the equations that enable us to control a robotic arm. 

The D-H frames of a particular robotic arm can be classified as follows:

  • Global coordinate frame: This coordinate frame can have many names: world frame, base frame, etc. In this two degree of freedom robotic arm, the global coordinate frame is where the robot makes contact with the dry-erase board.
  • Joint frames: We need a coordinate frame for each joint.
  • End-effector frame: We need a coordinate frame for the end effector of the robot (i.e. the gripper, hand, or other piece of the robot that has a direct effect on the world).

To draw the frames (i.e. x, y, and z axes), we follow four rules that will enable us to take a shortcut when deriving the mathematics for the robot. These rules collectively are known as the Denavit-Hartenberg Convention.


Four Rules of the Denavit-Hartenberg Convention

Here are the four rules that guide the drawing of the D-H coordinate frames:

  • The z-axis is the axis of rotation for a revolute joint. 
  • The x-axis must be perpendicular to both the current z-axis and the previous z-axis.
  • The y-axis is determined from the x-axis and z-axis by using the right-hand coordinate system.
  • The x-axis must intersect the previous z-axis (rule does not apply to frame 0).
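These perpendicularity constraints are easy to check numerically before (or after) drawing. A small pure-Python sketch using a dot product to test rule 2, with axes represented as 3-vectors (my own helper, not part of the tutorial):

```python
def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(ai * bi for ai, bi in zip(a, b))

def x_axis_valid(x, z_curr, z_prev):
    """Rule 2: the x-axis must be perpendicular to both the current
    z-axis and the previous z-axis (zero dot product with each)."""
    return dot(x, z_curr) == 0 and dot(x, z_prev) == 0
```

For example, x = (1, 0, 0) is a valid choice when the current z-axis is (0, 1, 0) and the previous z-axis is (0, 0, 1), since it is perpendicular to both.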

1. The z-axis is the axis of rotation for a revolute joint (like Joints 1 and 2 in the diagram below). 

For example, in the diagram below, the z-axis (pink line) for Joint 1 will point straight upwards out of the servo motor. This coordinate frame is the global reference frame. It is the frame that is connected to the first joint.


Let’s draw the second z-axis.


For our last z-axis, I have a choice. I’ll put it in the same direction as the second z-axis.


2. The x-axis must be perpendicular to both the current z-axis and the previous z-axis.

Let’s draw the x-axis for the global reference frame. We need to make sure it is perpendicular to the z0 axis. We have a choice here. I’ll go with the x-axis pointing to the right since it is easier to see on the diagram.


Let’s draw the x-axis for the Joint 2 reference frame. We need to make sure it is perpendicular to both z0 and z1.


Now, let’s draw x2.


3. The y-axis is determined from the x-axis and z-axis by using the right-hand coordinate system.

For the right-hand rule, you:

  • Take your right hand and point your four fingers in the direction of the x-axis.
  • Point your thumb in the direction of the z-axis.
  • Your palm points in the direction of the y-axis.

So, in this diagram below, how do we draw y0?

  • x0 points to the right (point the four fingers of your right hand in that direction).
  • z0 points toward the sky (point your thumb in that direction).
  • Therefore, y0 points into the page.
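The right-hand rule is equivalent to a cross product: y = z × x. With coordinates chosen so that (1, 0, 0) points right on the page, (0, 1, 0) points up the page, and (0, 0, 1) points out of the page, a quick pure-Python check (my own sketch) reproduces the result above:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

x0 = (1, 0, 0)          # x0 points to the right
z0 = (0, 1, 0)          # z0 points toward the sky
y0 = cross(z0, x0)      # right-handed frame: x cross y = z, so y = z cross x
# y0 == (0, 0, -1): into the page, matching the right-hand rule result.
```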

[Figure: y0 drawn]

Below I have drawn the y-axis for coordinate frames 1 and 2 using the right-hand rule.

[Figure: y1 and y2 drawn using the right-hand rule]

4. The x-axis must intersect the previous z-axis (rule does not apply to frame 0).

You can see from the red lines below that this rule holds for the diagram we drew.

[Figure: Rule #4 holds for the diagram]

More Practice With Denavit-Hartenberg Frames

Example 1 – Two-Degree-of-Freedom Robotic Arm

Let’s get some more practice drawing D-H frames on a kinematic diagram. We’ll use the diagram below of a two-degree-of-freedom robotic arm.

[Figure: two-degree-of-freedom robotic arm]

Remember the four rules.

Rule #1: The z-axis is the axis of rotation for a revolute joint (like Joints 1 and 2 in the diagram below). 

[Figure: z-axes drawn]

Rule #2: The x-axis must be perpendicular to both the current z-axis and the previous z-axis.

[Figure: x-axes added]

Rule #3: The y-axis is determined from the x-axis and z-axis by using the right-hand coordinate system.

[Figure: y-axes added]

Rule #4: The x-axis must intersect the previous z-axis (rule does not apply to frame 0).

We draw a dashed line extending the x and z-axes and confirm that the axes intersect.

[Figure: axes extended to confirm they intersect]

Example 2 – Cartesian Robot

Let’s do some more examples so that you get comfortable drawing kinematic diagrams. 

We’ll start with the Cartesian robot. You’ll often see this robot in 3D printing, laser cutting, and computer numerical control (CNC) applications.

Here is an example of a Cartesian robot.

[Figure: example of a Cartesian robot]

A Cartesian robot is made up of three prismatic joints that correspond to the x, y, and z axes. These joints are perpendicular to each other.

Whereas a revolute joint produces rotational motion, a prismatic joint produces linear (i.e. sliding) motion along a single axis. In a real application, a prismatic joint is a linear actuator. This type of actuator can be purchased at any online store that sells electronics equipment (e.g. Amazon, eBay, etc.).
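The two joint types can also be summarized as homogeneous transforms: a revolute joint contributes a rotation about its z-axis, a prismatic joint a translation along its z-axis. A minimal numpy sketch (the function names here are my own, not from any particular library):

```python
import numpy as np

def revolute_z(theta):
    """Transform contributed by a revolute joint: rotate by theta about z."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1.0]])

def prismatic_z(d):
    """Transform contributed by a prismatic joint: slide by d along z."""
    T = np.eye(4)
    T[2, 3] = d
    return T
```

Multiplying these per-joint transforms (with the fixed link transforms in between) is how the frames we are drawing here eventually turn into forward kinematics.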

Let’s draw the kinematic diagram for a Cartesian robot.

Here is our first joint:

[Figure: first joint]

Let’s add our second and third joint.

[Figure: second and third joints]

Now, let’s label the links. When drawing the kinematic diagram for prismatic joints, we assume that each joint is not extended.

[Figure: links labeled, with each joint unextended]

Let’s draw in arrows to show the direction of motion (we’ll use the letter d to represent the displacement from the zero position of the linear actuator, i.e. the prismatic joint).

[Figure: directions of motion]
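Because the three joints of a Cartesian robot only translate, its forward kinematics reduce to stacking three translations. A quick sketch, under the assumption that the joints slide along the base z, y, and x directions (the displacement values are made up for illustration):

```python
import numpy as np

def translation(axis, d):
    """Homogeneous transform that slides a distance d along a unit axis."""
    T = np.eye(4)
    T[:3, 3] = d * np.asarray(axis, dtype=float)
    return T

# Hypothetical joint displacements
d1, d2, d3 = 0.2, 0.3, 0.1

T = (translation([0, 0, 1], d1)
     @ translation([0, 1, 0], d2)
     @ translation([1, 0, 0], d3))

# Pure translations commute, so the end-effector position is
# simply the three displacements stacked: (d3, d2, d1).
position = T[:3, 3]
```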

Let’s draw the axes. 

Rule #1: For a prismatic joint, the z-axis has to be the direction of motion.

[Figure: z-axes added]

Rule #3: The y-axis is determined by the right-hand rule. You stick your fingers in the direction of x, your thumb goes in the direction of z, and your palm faces the direction of y.

[Figure: y-axes added]

Rule #4: The x-axis must intersect the previous z-axis. Check each frame to see whether this rule holds: extend the z-axes and see whether they intersect the next x-axis. You’ll find that Rule #4 holds for all frames.
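This "extend the axes and see if they intersect" check can also be done numerically: two lines intersect exactly when the shortest distance between them (the common-normal distance) is zero. A sketch, with made-up frame origins and axis directions:

```python
import numpy as np

def lines_intersect(p1, u1, p2, u2, tol=1e-9):
    """True if the infinite line through p1 along u1 meets the
    infinite line through p2 along u2 (shortest distance ~ zero)."""
    p1, u1, p2, u2 = map(np.asarray, (p1, u1, p2, u2))
    n = np.cross(u1, u2)
    if np.linalg.norm(n) < tol:
        # Parallel lines: distance from p2 to the line through p1
        d = np.linalg.norm(np.cross(p2 - p1, u1)) / np.linalg.norm(u1)
    else:
        # Skew or intersecting lines: project onto the common normal
        d = abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)
    return d < tol

# Hypothetical Rule #4 check: does the x-axis of frame 1 intersect z0?
o1, x1 = [1.0, 0.0, 0.0], [-1.0, 0.0, 0.0]   # frame 1 origin and x-axis
o0, z0 = [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]    # base origin and z-axis

print(lines_intersect(o1, x1, o0, z0))  # True
```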

Example 3 – Articulated Robot

Articulated robots are your standard robot arms. They are the most common type of robot you will find in factories. This type of robot is the one most similar to the human arm.

Here is an example of an articulated robot.

[Figure: example of an articulated robot]

Articulated robots come in all shapes and sizes. Some of these types of robots can be pretty strong. If you go inside a car factory, you’ll see giant articulated robots lifting cars and trucks with ease.

Let’s draw the kinematic diagram for a three-degree-of-freedom articulated robot. This robot is similar to an old robot named the Stanford Arm. The difference is that, in this robot diagram, we will make the third joint a revolute joint instead of a prismatic joint.

[Figure: three-degree-of-freedom arm]

Let’s label the links (we’ll use the letter a to represent link lengths) and draw the direction of positive rotation.

[Figure: links labeled with link lengths]

Now, let’s go through the four rules.

Rule #1: The z-axis is the axis of rotation for a revolute joint (like Joints 0 and 1 in the diagram below).

[Figure: z-axes added]

For the end effector in the image above, I made the z-axis the same direction as the frame before it since it will make the math easier.

Note that, for the base frame, we can set the x-axis to be anything we want as long as Rule #2 holds. I’ve made it go to the right in the diagram below.

For drawing the other x-axes, you have a choice of which direction you want to make them. I like to look ahead to Rule #4 (the x-axis must intersect the previous z-axis) to help with this decision.

[Figure: x-axes added]

Example 4 – SCARA Robot

Let’s see how to draw the kinematic diagram for the SCARA robot.

Here is an example of the SCARA robot:

[Figure: SCARA robot]

The SCARA robot is commonly used for pick and place (i.e. moving a part from one point to another) and small assembly applications.

[Figure: base kinematic diagram of the SCARA robot]

Rule #1: The z-axis is the axis of rotation for a revolute joint. For a prismatic joint, the z-axis has to be the direction of motion.

[Figure: z-axes added]

Example 5 – Six Degree of Freedom Robotic Arm

Now, let’s draw the kinematic diagram and D-H frames for a six-degree-of-freedom robotic arm like the one below.

Note that the 6th servo is located on the gripper. I won’t include that joint in the analysis since it is not part of the main arm of the robot.

[Figure: six-degree-of-freedom DIY robotic arm]

We start by drawing the kinematic diagram. Remember that, in a kinematic diagram, we assume all servos are at 0 degrees (i.e. all joint variables are 0). Therefore, for some servos, we’ll assume an angle range of -90 to 90 degrees instead of the usual 0 to 180 degrees so that the kinematic diagram is valid.

[Figure: six-degree-of-freedom arm]

Let’s label the links and draw the direction of positive rotation.

[Figure: links and joint angles labeled]

Rule #1: The z-axis is the axis of rotation for a revolute joint.

[Figure: z-axes added]

Up until now, we have always placed the origin of a coordinate frame at the center of the joint. However, doing this is not required. We can place the frame origin wherever we want.

Notice in the diagram below that we have to move the origin of frame 4 backward by a distance of a4 in order to satisfy Rule #2.

[Figure: x-axes added]

Rule #4: The x-axis must intersect the previous z-axis.

If you go frame by frame in the diagram above, you can see that the rule holds.
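Once the frames satisfy all four rules, each adjacent pair of frames is related by the classic Denavit-Hartenberg transform, built from the four D-H parameters (theta, d, a, alpha). A sketch of the standard matrix, composed as Rz(theta) · Tz(d) · Tx(a) · Rx(alpha):

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Classic D-H transform from frame i-1 to frame i:
    rotate theta about z, slide d along z, slide a along x,
    rotate alpha about x."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])
```

Chaining one such transform per joint, T = A1 @ A2 @ ... @ An, gives the pose of the end effector in the base frame.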

Example 6 – Six Degree of Freedom Collaborative Robot

Let’s draw the kinematic diagram for a six-degree-of-freedom robot like the Universal Robots UR5. At the end of this robotic arm, you would typically have some sort of end effector like a hand, gripper, or suction cup.

[Figure: six-degree-of-freedom collaborative robot]

The kinematic diagram will be drawn with the robotic arm in a flat orientation, parallel to a table, for example.

[Figure: kinematic diagram of the UR5 collaborative robot]

Remember the four rules. We can see in the diagram that all of them hold.

Example 7 – Six Degree of Freedom Industrial Robot

Let’s draw the kinematic diagram for a six-degree-of-freedom industrial robot like the FANUC LRMate 200iD.

[Figure: FANUC LRMate 200iD]

Go through each of the four rules. Here is what I drew:

[Figure: D-H frames for the FANUC LRMate 200iD]

Keep building!

Credit to Professor Angela Sodemann for teaching me this stuff. Dr. Sodemann is an excellent teacher (she runs a course on RoboGrok.com). On her YouTube channel, she provides some of the clearest explanations of robotics fundamentals you’ll ever hear.

Multi-Objective Teaching-Learning-Based Optimizer for a Multi-Weeding Robot Task Assignment Problem

Ieee account.

  • Change Username/Password
  • Update Address

Purchase Details

  • Payment Options
  • Order History
  • View Purchased Documents

Profile Information

  • Communications Preferences
  • Profession and Education
  • Technical Interests
  • US & Canada: +1 800 678 4333
  • Worldwide: +1 732 981 0060
  • Contact & Support
  • About IEEE Xplore
  • Accessibility
  • Terms of Use
  • Nondiscrimination Policy
  • Privacy & Opting Out of Cookies

A not-for-profit organization, IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. © Copyright 2024 IEEE - All rights reserved. Use of this web site signifies your agreement to the terms and conditions.

Help | Advanced Search

Computer Science > Robotics

Title: multi-auv kinematic task assignment based on self-organizing map neural network and dubins path generator.

Abstract: To deal with the task assignment problem of multi-AUV systems under kinematic constraints, which means steering capability constraints for underactuated AUVs or other vehicles likely, an improved task assignment algorithm is proposed combining the Dubins Path algorithm with improved SOM neural network algorithm. At first, the aimed tasks are assigned to the AUVs by improved SOM neural network method based on workload balance and neighborhood function. When there exists kinematic constraints or obstacles which may cause failure of trajectory planning, task re-assignment will be implemented by change the weights of SOM neurals, until the AUVs can have paths to reach all the targets. Then, the Dubins paths are generated in several limited cases. AUV's yaw angle is limited, which result in new assignments to the targets. Computation flow is designed so that the algorithm in MATLAB and Python can realizes the path planning to multiple targets. Finally, simulation results prove that the proposed algorithm can effectively accomplish the task assignment task for multi-AUV system.

Submission history

Access paper:.

  • Other Formats

References & Citations

  • Google Scholar
  • Semantic Scholar

BibTeX formatted citation

BibSonomy logo

Bibliographic and Citation Tools

Code, data and media associated with this article, recommenders and search tools.

  • Institution

arXivLabs: experimental projects with community collaborators

arXivLabs is a framework that allows collaborators to develop and share new arXiv features directly on our website.

Both individuals and organizations that work with arXivLabs have embraced and accepted our values of openness, community, excellence, and user data privacy. arXiv is committed to these values and only works with partners that adhere to them.

Have an idea for a project that will add value for arXiv's community? Learn more about arXivLabs .

Purdue Online Writing Lab Purdue OWL® College of Liberal Arts

Welcome to the Purdue Online Writing Lab

OWL logo

Welcome to the Purdue OWL

This page is brought to you by the OWL at Purdue University. When printing this page, you must include the entire legal notice.

Copyright ©1995-2018 by The Writing Lab & The OWL at Purdue and Purdue University. All rights reserved. This material may not be published, reproduced, broadcast, rewritten, or redistributed without permission. Use of this site constitutes acceptance of our terms and conditions of fair use.

The Online Writing Lab at Purdue University houses writing resources and instructional material, and we provide these as a free service of the Writing Lab at Purdue. Students, members of the community, and users worldwide will find information to assist with many writing projects. Teachers and trainers may use this material for in-class and out-of-class instruction.

The Purdue On-Campus Writing Lab and Purdue Online Writing Lab assist clients in their development as writers—no matter what their skill level—with on-campus consultations, online participation, and community engagement. The Purdue Writing Lab serves the Purdue, West Lafayette, campus and coordinates with local literacy initiatives. The Purdue OWL offers global support through online reference materials and services.

A Message From the Assistant Director of Content Development 

The Purdue OWL® is committed to supporting  students, instructors, and writers by offering a wide range of resources that are developed and revised with them in mind. To do this, the OWL team is always exploring possibilties for a better design, allowing accessibility and user experience to guide our process. As the OWL undergoes some changes, we welcome your feedback and suggestions by email at any time.

Please don't hesitate to contact us via our contact page  if you have any questions or comments.

All the best,

Social Media

Facebook twitter.

IMAGES

  1. Robotics with VEX IQ

    assignment for robotics

  2. Robotics 10 Assignment by Jaden Reekie

    assignment for robotics

  3. Assignment of ict robotics

    assignment for robotics

  4. Robotics Assignment.pdf

    assignment for robotics

  5. Robotics Assignment Help by 24/7 Online Ph.D. Engineers

    assignment for robotics

  6. Robotics Assignment 1 (B318004)

    assignment for robotics

VIDEO

  1. Robotics Team10 Assignment_4

  2. Robotics lab

  3. assignment 5 Robotics

  4. Robotics Assignment 1

  5. Robotics lab

  6. ROB599 Soft Robotics Assignment 2 deliverable 1

COMMENTS

  1. 25+ Robotics Projects, Lessons, and Activities

    3. Clever Vibrobots. In the Vibrobots— Tiny Robots from Scratch lesson, students build simple robots from craft and recycled materials. With coin cell batteries and small motors (see the Bristlebot Kit), students learn about open and closed circuits and create robots that move around because of the vibration of the motor.In addition to being an entry point for students interested in robotics ...

  2. Introduction to Robotics

    This course provides an overview of robot mechanisms, dynamics, and intelligent controls. Topics include planar and spatial kinematics, and motion planning; mechanism design for manipulators and mobile robots, multi-rigid-body dynamics, 3D graphic simulation; control design, actuators, and sensors; wireless networking, task modeling, human-machine interface, and embedded software. Weekly ...

  3. CS223A

    You can use partial late days (i.e. if you submit your first assignment 5 hours late, you will have 72-5 = 67 total late hours remaining), Once these late days are exhausted, any assignments turned in late will be penalized 20% per late day. However, no assignment will be accepted more than three days after its due date. If you need additional ...

  4. Modern Robotics, Course 1: Foundations of Robot Motion

    It is not a sampler. In Course 1 of the specialization, Foundations of Robot Motion, you will learn fundamental material regarding robot configurations, for both serial robot mechanisms and robots with closed chains. You will learn about configuration space (C-space), degrees of freedom, C-space topology, implicit and explicit representations ...

  5. Stanford Engineering Everywhere

    Course Description. The purpose of this course is to introduce you to basics of modeling, design, planning, and control of robot systems. In essence, the material treated in this course is a brief survey of relevant results from geometry, kinematics, statics, dynamics, and control. The course is presented in a standard format of lectures ...

  6. First Grade, Robotics Projects, Lessons, Activities

    Follow the Flow: 2017 Engineering Challenge. Harvest Water from Fog Science Project. Enter the realm of automation and innovation with robotics science experiments. Design, build, and program your own robots. Pick the ultimate first-grade science exploration from our hands-on collection of fun experiments.

  7. Robotics Projects for Hands-On Learning [2024]

    Dive into our Robotics Projects for practical assignments in robot design, automation, control systems, and AI integration. These projects are meticulously designed to hone your skills and equip you for an exciting career in the ever-evolving field of robotics. Filter by. Subject.

  8. 15 Ways to Teach Robotics Virtually

    Here are some of the top picks from teachers. Wonder Workshop Dash: Teachers raved about the virtual platform designed by the creators of the popular Dash robot! Called Dash's Neighborhood, students use the same drag-and-drop programming language, Blockly, to navigate a digital Dash robot.

  9. Assignments

    Introduction to Robotics. Menu. More Info Syllabus Lecture Notes Assignments Exams Projects Assignments. Problem Set 1 . Problem Set 2 simple_sim program for ... assignment Programming Assignments. Download Course. Over 2,500 courses & materials Freely sharing knowledge with learners and educators around the world.

  10. PDF ECE 470 Introduction to Robotics Lab Manual

    This is a set of laboratory assignments designed to complement the introduc-tory robotics lecture taught in the College of Engineering at the University of Illinois at Urbana-Champaign. Together, the lecture and labs introduce students to robot manipulators and computer vision and serve as the founda-

  11. Teaching Robotics: Engaging, Hands-On Lesson Plan Ideas

    The lesson is designed so students can program a robot to move. It's also intended to make students more articulate about robots and how they work. The lesson plan is based on the short story "My Friend" about a robot. In this team-building exercise, students will work in pairs to discuss ideas and act out dialogue from the book before ...

  12. Frame Assignment For Robotic Manipulators

    Frame Assignment For Robotic Manipulators - Direct Kinematics IThis video shows how to assign frames (coordinate systems) to the joints in any robotic system...

  13. ROB 502: Programming for Robotics

    As it is titled Programming for Robotics, we have tried to design the in-class problems and homework assignments to be relevant to common robotics situations and algorithms, with the greater goal of demystifying programming and avoiding black-box magic. To be relevant and exciting, we designed the homework assignments around building a robotics ...

  14. 101+ Simple Robotics Research Topics For Students

    Robot Design and Building. 1. How to build a simple robot using household materials. 2. Designing a robot that can pick up and sort objects. 3. Building a robot that can follow a line autonomously. 4. Creating a robot that can draw pictures.

  15. Assignment Algorithms for Variable Robot Formations

    We consider the case when each robot is to be assigned a goal position, the individual robots are interchangeable, and the goal formation can be scaled or translated.We compute the costs for all candidate pairs of initial, goal robot assignments as functions of the parameters of the goal formation, and partition the parameter space into ...

  16. Applications of AI in Robots: An Introduction

    To become a robotics engineer, a bachelor's or master's degree in computer engineering, computer science, electrical engineering or a related field is required.Fluency in multiple programming languages and proficiency in algorithm design and debugging are also important qualifications. A successful robotics engineer is also a continuous learner, a natural problem solver and is driven ...

  17. Becoming a Robotics Engineer in 2024: A Step-by-Step Guide

    Imagine having the power to design and build intelligent machines that can explore other planets, perform life-saving surgeries, or revolutionize manufacturing - those are just a few of the many exciting innovations a career in robotics engineering could hold for you. And with the global robotics market projected to surpass $151 billion by 2031 in the United States alone, the industry is one ...

  18. 12 Interesting Robotics Projects Ideas & Topics for Beginners ...

    1. Line Follower Robot. The Line Follower Robot is a simple yet intriguing project for beginners that involves designing and programming a robot to follow a specific path marked by a line. This project will introduce students to the fundamentals of robot design, sensor integration, and basic programming. Source.

  19. Integrated task sequence planning and assignment for human-robot

    Human-robot collaborative assembly (HRCA) can give full play to their respective advantages and significantly improve assembly efficiency. Rational assembly sequences and task assignment schemes facilitate an efficient and smooth assembly process. This paper proposes a method of integrated assembly sequence planning and task assignment for HRCA based on the genetic algorithm (GA). Firstly, a ...

  20. Robotics online assignment

    Robotics online assignment. This document provides an introduction to robotics, including its history, components, and applications. It discusses the three main aspects of robots: mechanical, electrical, and programming. It describes key robot components like power sources, actuation, sensing, manipulation, and locomotion.

  21. Automated patient-robot assignment for a robotic rehabilitation gym: a

    Rehabilitation robotics and the robotic gym. Over the last decade, rehabilitation robots have demonstrated the ability to deliver motor rehabilitation with results comparable to a human therapist [1,2,3].By physically guiding the patient's limb and applying either assistive or challenging forces [4, 5], such robots can effectively reduce the physical workload of the human therapist.

  22. How to Assign Denavit-Hartenberg Frames to Robotic Arms

    Rule #1: The z-axis is the axis of rotation for a revolute joint. Rule #2: The x-axis must be perpendicular to both the current z-axis and the previous z-axis. Up until now, we have always placed the origin of a coordinate frame at the center of the joint. However, doing this is not required.

  23. Multi-Objective Teaching-Learning-Based Optimizer for a Multi-Weeding

    With the emergence of the artificial intelligence era, all kinds of robots are traditionally used in agricultural production. However, studies concerning the robot task assignment problem in the agriculture field, which is closely related to the cost and efficiency of a smart farm, are limited. Therefore, a Multi-Weeding Robot Task Assignment (MWRTA) problem is addressed in this paper to ...

  24. Matching-based Coalition Formation for Multi-robot Task Assignment

    Due to this, cost calculations for robot-to-task assignments become uncertain. However, a small amount of resources might be available to accurately localize a subset of these robots. To this end, we propose a bipartite graph matching-based task allocation strategy (centralized and distributed versions) that gracefully handles the uncertainty ...

  25. [2405.07536] Multi-AUV Kinematic Task Assignment based on Self

    To deal with the task assignment problem of multi-AUV systems under kinematic constraints, which means steering capability constraints for underactuated AUVs or other vehicles likely, an improved task assignment algorithm is proposed combining the Dubins Path algorithm with improved SOM neural network algorithm. At first, the aimed tasks are assigned to the AUVs by improved SOM neural network ...

  26. Welcome to the Purdue Online Writing Lab

    Mission. The Purdue On-Campus Writing Lab and Purdue Online Writing Lab assist clients in their development as writers—no matter what their skill level—with on-campus consultations, online participation, and community engagement. The Purdue Writing Lab serves the Purdue, West Lafayette, campus and coordinates with local literacy initiatives.