
Haptic Learning and Technology: Analyses of Digital Use Cases of Haptics Using the Haptic Learning Model

  • Conference paper
  • First Online: 16 June 2022


  • Farzaneh Norouzinia,
  • Bianka Dörr,
  • Mareike Funk &
  • Dirk Werth

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1582)

Included in the following conference series:

  • International Conference on Human-Computer Interaction


Learning that involves haptic technologies can open up advanced opportunities in digital learning. The value of digital learning solutions became especially obvious over the course of the pandemic. Various technologies have been tried that can enhance the quality of learning processes and refine the learning results; however, the sense of touch is not included in all of them, even though it might be helpful, e.g., in the medical field. To show how haptic technology may improve digital learning solutions, this paper briefly defines haptic learning and analyzes several haptic learning use cases using the Haptic Learning Model of Dörr et al. [2].

We describe haptic learning as the sum of all learning processes that use haptic interactions to enhance the effectiveness and/or efficiency of the learning process. In this paper, haptic technology use cases that are not directly related to learning, or that do not give any haptic feedback to the learners, are excluded.



Daiber, F., Kosmalla, F., Hassan, M., Wiehr, F., Krüger, A.: Towards amplified motor learning in sports using EMS. In: Proceedings of the CHI 2017 Workshop on Amplification and Augmentation of Human Perception, Denver, CO (2017). https://www.dfki.de/fileadmin/user_upload/import/9027_amplified-motor-learning-CR-bibcopy.pdf

Dörr, B., Funk, M., Norouzinia, F., Werth, D.: Haptic learning and how it can enhance digital learning experiences: an innovative approach. In: INTED 2022 Proceedings, pp. 3909–3917 (2022)


Kaluschke, M., Su Yin, M., Haddaway, P., Srimaneekarn, N., Saikaew, P., Zachmann, G.: A shared haptic virtual environment for dental surgical skill training. In: 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Lisbon, pp. 347–352. IEEE (2021). https://doi.org/10.1109/VRW52623.2021.00069

Seim, C., Pontes, R., Kadiveti, S., Adamjee, Z., Cochran, A., Aveni, T., et al.: Towards haptic learning on a smartwatch. In: Proceedings of the 2018 ACM International Symposium on Wearable Computers, pp. 228–229. Association for Computing Machinery, New York (2018)

Shokur, S., et al.: Assimilation of virtual legs and perception of floor texture by complete paraplegic patients receiving artificial tactile feedback. Sci. Rep. 6 (1), 1–14 (2016)



Author information

Authors and Affiliations

AWS Institut für digitale Produkte und Prozesse gGmbH, Saarbrücken, Germany

Farzaneh Norouzinia, Bianka Dörr, Mareike Funk & Dirk Werth


Corresponding author

Correspondence to Farzaneh Norouzinia .

Editor information

Editors and Affiliations

University of Crete and Foundation for Research and Technology – Hellas (FORTH), Heraklion, Crete, Greece

Constantine Stephanidis

Foundation for Research and Technology – Hellas (FORTH), Heraklion, Crete, Greece

Margherita Antona

Stavroula Ntoa


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Norouzinia, F., Dörr, B., Funk, M., Werth, D. (2022). Haptic Learning and Technology: Analyses of Digital Use Cases of Haptics Using the Haptic Learning Model. In: Stephanidis, C., Antona, M., Ntoa, S. (eds) HCI International 2022 Posters. HCII 2022. Communications in Computer and Information Science, vol 1582. Springer, Cham. https://doi.org/10.1007/978-3-031-06391-6_10


DOI: https://doi.org/10.1007/978-3-031-06391-6_10

Published: 16 June 2022

Publisher Name: Springer, Cham

Print ISBN: 978-3-031-06390-9

Online ISBN: 978-3-031-06391-6

eBook Packages: Computer Science, Computer Science (R0)


Frontiers in Robotics and AI

Applications of Haptic Technology, Virtual Reality, and Artificial Intelligence in Medical Training During the COVID-19 Pandemic

Mohammad Motaharifar

1 Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran

2 Department of Electrical Engineering, University of Isfahan, Isfahan, Iran

Alireza Norouzzadeh

Parisa Abdi

3 Translational Ophthalmology Research Center, Farabi Eye Hospital, Tehran University of Medical Sciences, Tehran, Iran

Arash Iranfar

4 School of Electrical and Computer Engineering, University College of Engineering, University of Tehran, Tehran, Iran

Faraz Lotfi

Behzad Moshiri

5 Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada

Alireza Lashay

Seyed Farzad Mohammadi, Hamid D. Taghirad

Pete Culmer, University of Leeds, United Kingdom

Soroosh Shahtalebi, Montreal Institute for Learning Algorithms (MILA), Canada

Associated Data

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding authors.

This paper examines how haptic technology, virtual reality, and artificial intelligence help reduce physical contact in medical training during the COVID-19 pandemic. Notably, any mistake made by trainees during the education process might lead to undesired complications for the patient. Teaching medical skills has therefore always been a challenging task for expert surgeons, and it is even more challenging during a pandemic. The current method of surgery training requires novice surgeons to attend courses, observe procedures, and conduct their initial operations under the direct supervision of an expert surgeon. Owing to the physical contact this method of medical training requires, the people involved, including the novice and expert surgeons, face a potential risk of infection. This survey paper reviews recent technological breakthroughs, along with new areas in which assistive technologies might provide a viable solution to reduce physical contact in medical institutes during the COVID-19 pandemic and similar crises.

1 Introduction

After the outbreak of the COVID-19 virus in Wuhan, China at the end of 2019, the virus and its variants rapidly spread throughout the world. Given that no proven treatment had so far been introduced for COVID-19 patients, prevention policies such as staying home, social distancing, avoiding physical contact, remote working, and travel restrictions have been strongly recommended by governments. As a consequence of this global problem, universities have initiated policies on how to keep up teaching and learning without exposing their faculty members and students to the virus. Thus, the majority of traditional in-class courses have been replaced with online courses. Although the emergency shift of classes reduced the quality of education during the COVID-19 pandemic Hodges et al. (2020), some investigators have proposed ways for university faculty and students to adapt rapidly to the situation and improve the quality of education Zhang et al. (2020).

Nevertheless, the case of remote learning is different in medical universities, as the learning process there does not rely solely on in-class courses. As an illustration, traditional medical training is accomplished by a medical student through attending training courses, watching how a procedure is performed by a trainer, performing the procedure under the supervision of a trainer, and, at the final stage, independently performing the procedure. In fact, the traditional method of surgery training relies on the extensive presence of students in hospital environments and skill labs to practice tasks in real settings such as physical phantoms, cadavers, and patients; that is why medical students are called “residents”. The traditional surgery training methodology thus requires a substantial amount of physical contact between medical students, expert surgeons, nurses, and patients, and as a result, the risk of infection among those people is high. On the other hand, assistive technologies based on virtual reality and haptic feedback have introduced alternative surgical training tools that increase the safety and efficiency of surgical training procedures. Nowadays, the necessity of reducing physical contact in hospital environments provides another motivation for these assistive technologies. It is therefore worthwhile to review these technologies from the perspective of COVID-19.

In this paper, the existing assistive technologies for medical training are reviewed in the context of COVID-19. While there are several motivations for these technologies, such as increasing the safety, speed, and efficiency of training, the specific focus of this paper is on the new motivations created during the COVID-19 pandemic. Within the existing literature on COVID-19, our main focus is on surgery training technologies that help reduce physical contact during this and similar pandemics. Notably, a number of studies have analyzed systemic and structural challenges applicable to medical training programs with little emphasis on the technological aspects of the subject Sharma and Bhaskar (2020), Khanna et al. (2020). On the other hand, methods of remote diagnostics and remote treatment have received a great deal of attention since the COVID-19 pandemic, and a massive body of literature has covered those topics Tavakoli et al. (2020), Feizi et al. (2021), Akbari et al. (2021). In contrast, fewer studies have given special attention to remote training and remote skill assessment, which is the subject of this paper. For this reason, this paper addresses scientific methods, technologies, and solutions to reduce the amount of physical contact in medical environments that is due to training.

Relevant literature was chosen from articles published by IEEE, Frontiers, Elsevier, SAGE, and Wiley, with special attention to well-known interdisciplinary journals. The search was performed using the keywords “remote medical training,” “skill assessment in surgery,” “virtual and augmented reality for medical training,” “medical training haptic systems,” and “artificial intelligence and machine learning for medical training” until June 30, 2021. The literature was examined to systematically address key novel concepts in remote training, with sufficient attention to the future direction of the subject. Finally, we have tried to review the problem in the COVID-19 context in a way that keeps the discussed material distinct from similar literature in a conventional non-COVID context.

The rest of this paper is organized as follows: the clinical motivations of the training tools are discussed in Section 2. Virtual and augmented reality and the related areas of utilization for medical training are described in Section 3. Section 4 explains how haptic technology may be used for medical training, while Section 5 describes some data-based approaches that may be used for skill assessment. Then, machine vision and its relevant methods used for medical training are presented in Section 6. Finally, concluding remarks are stated in Section 7.

2 The Clinical Motivation

The process of skill development among medical students has always been a challenging issue for medical universities, as a lack of expertise may lead to undesired complications for patients Kotsis and Chung (2013). Moreover, owing to the rapid progress of minimally invasive surgery during the past decades, closed procedures have become the method of choice over traditional open surgeries. In minimally invasive surgery, the instruments enter the body through one or more small incisions, and this type of surgery is applicable to a variety of procedures. The foremost advantage of the technique is minimal damage to healthy organs, which leads to less pain, fewer post-operative complications, faster recovery times, and better long-term results.

However, the closed surgery technique is more challenging from the surgeon’s point of view, since the surgeon does not have complete, direct access to the surgical site and the tiny incisions limit the surgeon’s accessibility. Owing to the limited access, some degrees of freedom are missing and the surgeon’s manipulation capability is considerably reduced. Furthermore, there is a fulcrum effect at the entry point of the instrument: the motion of the tip of the instrument, which is placed inside the organ, and that of the external part of the instrument, which is handled by the surgeon, are reversed. This results in more difficult and even awkward instrument handling and requires specific and extensive surgical training. As a result, minimally invasive surgeries demand an advanced level of expertise, the lack of which might cause disastrous complications for the patient. These conditions are equally important in many medical interventions, especially minimally invasive surgeries. Here, a number of specific areas of surgical operation are described in order to address complications that might occur during training.
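Before turning to those specific areas, the fulcrum effect described above can be made concrete with a simple lever model: the incision acts as a pivot, so the instrument tip moves opposite to the surgeon's hand, scaled by the ratio of the lever arms inside and outside the body. The following sketch is illustrative only; the lengths and values are assumptions, not taken from the surveyed literature.

```python
def tip_displacement(handle_dx, inside_len, outside_len):
    """Map lateral hand motion at the handle to instrument-tip motion.

    The incision acts as a fulcrum, so the tip moves in the OPPOSITE
    direction to the hand, scaled by the ratio of the lever arm inside
    the body to the lever arm outside it.
    """
    return -handle_dx * (inside_len / outside_len)

# Instrument with 30 cm outside the body and 10 cm inside:
# moving the handle 3 cm to the right drives the tip ~1 cm to the LEFT.
print(tip_displacement(3.0, inside_len=10.0, outside_len=30.0))  # ≈ -1.0
```

The sign inversion and the motion scaling together are what make instrument handling unintuitive for novices and are a key reason simulators emphasize this skill.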

  • Eye surgery

An important category of medical interventions requiring a very high skill level is intraocular eye surgery. Notably, the human eye is a delicate and highly complex organ, and the accuracy required for the majority of intraocular surgeries is on the scale of 50–100 microns. The closed type of surgery is applicable to a number of eye surgeries, such as cataract surgery in the anterior segment as well as vitreoretinal procedures in the posterior segment. Complications such as posterior capsule rupture (PCR) in cataract surgery and retina puncture in vitreoretinal procedures are among the relatively frequent complications that might happen due to the surgeon’s lack of surgical skill and dexterity. A study of ophthalmic residents showed that the rate of complications such as retinal injuries is higher for residents with less skill Jonas et al. (2003).

  • Laparoscopic cholecystectomy

Another example is laparoscopic cholecystectomy (LC), which is now the accepted standard procedure across the world and is one of the most common general and specialist surgical procedures. However, it is prone to an important complication: bile duct injury (BDI). Although BDI is uncommon, it is one of the most serious iatrogenic surgical complications. In extreme BDI cases, a liver resection or even liver transplantation becomes necessary. BDI is expensive to treat, and its mortality rate is as high as 21% Iwashita et al. (2017).

  • Neurosurgery

Neurosurgery is another field that deals with complex cases and requires high accuracy and ability on the surgeon’s part. In a prospective study of 1,108 neurosurgical cases, 78.5% of errors during neurosurgery were considered preventable Stone and Bernstein (2007). The most frequent errors reported were technical in nature. The increased use of endoscopy in neurosurgery introduces challenges and increases the potential for errors because of issues such as indirect view, elaborate surgical tools, and a confined workspace.

  • Orthopedic surgery

In the field of orthopedics, knee and shoulder arthroscopic surgeries are among the most commonly performed procedures worldwide. There is a steep learning curve associated with arthroscopic surgery for orthopedic trainees, and extensive hands-on training is typically required to develop surgical competency. The current minimum number of cases may not be sufficient to develop competency in arthroscopic surgery: it is estimated that it takes about 170 procedures before a surgeon develops consultant-level motor skills in knee arthroscopy Yari et al. (2018). With work-hour restrictions, patient safety concerns, and fellows often taking priority over residents in performing cases, it is challenging for residents to obtain high-level arthroscopic skills by the end of their residency training.

The above motivations show the importance of skill development among medical students. The standard process of procedural skill development in medicine and surgery is shown as a diagram in Figure 1. In the observation stage, the medical students attend a clinical environment and watch how the procedure is performed by a trainer. Then, the medical students get involved in the operation as apprentices, while the actual procedure is performed by the trainer. Later, the medical students practice the operation under the direct supervision of the trainer, who assesses their skill level. The supervised practice and skill assessment steps are repeated until the trainee has enough experience and skill to conduct the procedure without supervision. Finally, after obtaining a sufficient skill level, the trainee is able to perform the operation independently.

[Figure 1: Process of procedural skill development in medical training and surgery.]
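The staged progression in Figure 1 can be sketched as a small state machine; the stage names and pass criterion below are a hypothetical rendering of the diagram, not code from the paper.

```python
def next_stage(stage, assessment_passed=False):
    """Advance one step through the skill-development pipeline.

    Supervised practice and assessment repeat until the trainer's
    assessment is passed; only then does the trainee operate
    independently.
    """
    if stage == "observation":
        return "apprenticeship"
    if stage == "apprenticeship":
        return "supervised_practice"
    if stage == "supervised_practice":
        return "independent" if assessment_passed else "supervised_practice"
    return "independent"  # terminal stage

# A trainee who fails the assessment three times before passing:
stage = "observation"
for passed in (False, False, False, False, True):
    stage = next_stage(stage, assessment_passed=passed)
print(stage)  # independent
```

The loop on supervised practice is the step that assistive technologies target: each repetition traditionally requires the trainer's physical presence.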

Remarkably, a learning curve is associated with each procedure: performance tends to improve with experience. This concept applies to all medical procedures and specialties, but complex procedures, surgery in particular, are more likely to have gradual learning curves, meaning that improvement and expertise are achieved only after longer training times. Important factors in the learning curve include the manual dexterity of the surgeon, knowledge of surgical anatomy, structured training and mentoring, and the nature of the procedure. The learning curve is longer for minimally invasive procedures than for open surgical procedures, and it is also influenced by the experience of the supporting surgical team. Besides, learning curves depend on the frequency of procedures performed in a given period; many studies suggest that complication rates are inversely proportional to the volume of surgical workload.
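The "performance improves with experience" relationship is often modeled as a power law of practice, under which each doubling of case volume cuts procedure time by a fixed fraction. The paper does not commit to a specific model; the sketch below assumes a power-law form with illustrative parameters.

```python
import math

def procedure_time(n, t_first, learning_rate):
    """Power-law-of-practice estimate of the time for the n-th case.

    `learning_rate` is the multiplier applied at each doubling of case
    volume (e.g. 0.8 means 20% faster per doubling). Illustrative
    model only; the surveyed studies do not fit this specific form.
    """
    b = -math.log2(learning_rate)  # learning-curve exponent (b > 0)
    return t_first * n ** (-b)

# With a 90-minute first case and an 80% learning rate, the 64th case
# takes 90 * 0.8**6 ≈ 23.6 minutes, since 64 = 2**6 doublings.
print(round(procedure_time(64, t_first=90.0, learning_rate=0.8), 1))  # 23.6
```

Under such a model, a "gradual" learning curve simply corresponds to a learning rate close to 1, which is one way to read the observation that minimally invasive procedures take longer to master.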

Notably, the above-mentioned process of skill development requires a considerable amount of physical contact between trainees, expert surgeons, nurses, and patients, contact which must be reduced during the COVID-19 pandemic. In addition to the high risk of infection in medical universities under conventional medical training approaches, the majority of health-care capacity is focused on fighting the COVID-19 virus, and consequently the education requirements of medical universities cannot be entirely fulfilled. As a result, the training efficiency of medical universities will be reduced if they rely solely on conventional training approaches. This will have possible side effects on the future performance of the health-care system, mainly due to an insufficient number of recently graduated students with an adequate level of expertise.

On the other hand, traditional education takes place in hospitals and on real patients, which poses several problems during the COVID-19 pandemic: the hospital environment is contaminated with the virus; hospital staff and physicians are very busy and tired and have less training capacity; and prolonged hospital stays of patients used to train students put those patients at greater risk of exposure to the virus, especially if a complication is caused by a resident who has not yet gained sufficient skill. Therefore, training with assistive devices outside the hospital may play an effective role in this situation. The highlighted factors can be significantly improved by assisted learning, especially in minimally invasive procedures. In more complex surgeries, the complications become more serious, the learning curve becomes longer, and the role of assisted learning becomes more prominent.

To solve the above-mentioned problems, assistive training tools provide a variety of solutions through which medical universities are able to continue their education procedures while the risks imposed by the COVID-19 outbreak are reduced. In the following sections, the main assistive training tools, including haptic systems, virtual reality, machine vision, and data mining, are reviewed, and the areas in which those technologies facilitate the training process during the COVID-19 pandemic are detailed. The aim of these technologies is to make training efficiency higher than, or at least equal to, that of conventional training methods without risking infection of the involved parties.

3 Virtual and Augmented Reality

Virtual reality (VR) is employed to create an immersive experience for various applications such as visualization, learning, and education. In virtual reality, a computer-generated graphical presence is visualized using a head-mounted display, and the user can interact with 3D objects located in the virtual world. In addition to VR, augmented reality (AR) adds 3D objects to the real world, creating a different experience by overlaying digital information on the real objects in the surrounding environment. Although experiencing 3D objects in VR scenes is far from interacting with real objects, the VR experience is getting closer to real-world environments with the help of more realistic computer graphics and full-body haptic suits.

VR and AR are attracting growing interest as training techniques in medical fields, unlocking significant benefits such as safety, repeatability, and efficiency Desselle et al. (2020). Furthermore, during the COVID-19 pandemic, remote training and consulting are considered vital advantages of VR/AR-based training methods (Singh et al., 2020).

Some advantages of using VR/AR in medical training are depicted in Figure 2. Safety is the first and most important benefit of VR/AR in medical education: complex medical operations may be performed in a VR-based simulated environment in complete safety and without putting the patient’s life in danger. Repeatability is the second advantage, as any simulation scenario in medical training can be repeated over and over until the trainee is completely satisfied. During the COVID-19 pandemic it is vital to practice social distancing, which VR/AR-based education naturally provides. Medical training and surgery simulation by computer graphics in VR/AR virtual environments also reduces training costs, as no material other than a computer, a VR headset, and a haptic device is required. Since VR/AR-based medical training runs on a computer, the surgery simulation is always at hand as soon as the computer and VR headset are ready to be used. Therefore, the efficiency of medical training is increased, as no time is required to prepare an operating room or get a patient ready.

[Figure 2: VR/AR advantages in medical training.]

VR/AR techniques are employed in various surgical training applications, as can be seen in Figure 3. The first application is surgical procedure diagnosis and planning: using AR/VR, the real surgical operation can be simulated ahead of time without putting the patient’s life in danger. The second application is surgical education and training, where simulation-based environments built around virtual 3D models of human anatomy are developed for training medical students. Another application is robotic and tele-surgery, through which surgical consulting becomes possible even from a distance. The last application of AR/VR in surgical training is sensor data and image visualization during the surgical operation, which makes effective use of the patient’s medical data possible.

[Figure 3: VR/AR applications in surgical training.]

It has been shown that the learning curve of hip arthroscopy trainees is significantly improved using a virtual reality simulator (Bartlett et al., 2020). In this study, a group of twenty-five inexperienced students performed seven arthroscopies of a healthy virtual hip joint weekly. The experimental results indicated that average total time decreased by nearly 75%, while the number of collisions between the arthroscope and soft tissue decreased by almost 90%.

VR is also employed in orthopedic surgical training, where 37 residents participated in a study to obtain an understanding of the LISS plating surgical process (Cecil et al., 2018). The virtual surgical environment developed there is equipped with a haptic device to perform various activities, such as assembling the LISS plate, placing the assembled plate correctly inside the patient’s leg, and attaching it to the fractured bone. The test was divided into a pre-test, in which the students became familiar with the surgery process, and a post-test devoted to the actual evaluation phase. The participants had 1 h to finish both the pre- and post-tests, and the results showed improved learning of the LISS plating surgical process.

The applicability and effectiveness of VR-based training in orthopedic education was evaluated in (Lohre et al., 2020), in which nineteen orthopedic surgical residents participated. The residents performed a glenoid exposure module on a VR-based simulator using a haptic device as the input controller, and the results of training residents with the VR simulator were compared to conventional surgery training methods. Considering learning time, repeating the VR-based surgery experiment 3 to 5 times resulted in a reported 570% training time reduction. Additionally, VR-based surgical training helped the residents finish glenoid exposure significantly faster than residents trained by conventional education methods.

Orthognathic surgery (OSG) is another surgical field considered for VR-based training, as it is one of the more complex surgical procedures (Medellin-Castillo et al., 2020). While conventional OSG learning techniques depend on cadavers or models, and surgeons become experienced only after several years of practice in operating rooms, employing VR in surgical training can reduce the learning time and the education cost at the same time. In this study, three cases were considered for the evaluation of VR in OSG: cephalometry training, osteotomy training, and surgery planning. The experimental results indicated that the combination of haptics and VR is effective in improving trainees’ skills and reducing surgery time. Furthermore, surgical errors and mistakes are reduced by using haptic feedback to recreate the sense of touch, as trainees can detect landmarks more precisely than with conventional techniques.

In conjunction with VR, AR technology has also been used for training in various medical fields, such as neurosurgical training (Si et al., 2019). Anatomical and other sensory information can be visualized for surgeons more effectively, and therefore more accurate decisions can be made during surgery. Although this study is only applicable to simulated environments because of the registration problem, the experiments indicated the effectiveness of the simulator in improving surgeons’ skills.

While key features of VR/AR have led to improved training, especially in surgical training, there are some limitations that should be considered (kumar Renganayagalu et al., 2021). The first limitation of VR simulators is the cost of VR content production; therefore, most simulators are made for a very specific type of simulation in a limited context. The second limitation is the immaturity of interaction devices for VR simulations, which has a great effect on the user experience. Another limitation of VR in medical training is the inability to use VR devices for long periods of time, as they are designed for entertainment rather than long training sessions.

It can be concluded that, in spite of some limitations, VR/AR-based simulators equipped with a haptic device can be used in medical surgery training to achieve skill improvement and training time reduction. Furthermore, under the isolation requirements of the COVID-19 pandemic, VR/AR-based techniques can be employed effectively for medical training.

4 Teleoperated Haptic Systems

Haptic systems provide the sense of touch with remote objects without the need for actual contact. They also enable collaboration between several operators without any physical contact. As depicted in Figure 4 , haptic systems may be classified by the number of operators into single-user, dual-user, and multi-user systems. Single-user haptic systems enable a single human operator to interact with a remote or virtual environment, whereas dual-user and multi-user haptic systems provide a mechanism for the collaboration of two or more human operators. The medical training applications of these systems are presented here.

Figure 4. Single user vs. dual user haptic systems. (A) Single user haptic system. (B) Dual user haptic system.

4.1 Single User Haptic Systems

Single user haptic systems extend the ability of human operators to interact with remote, virtual, and out-of-reach environments. In the field of surgery training, a number of investigations have proposed haptic training simulators for minimally invasive surgery (MIS) Basdogan et al. (2004) , dental procedures Wang et al. (2014) , sonography Tahmasebi et al. (2008) , and ocular therapies Spera et al. (2020) . As shown in Figure 4A , a typical single-user haptic simulator consists of a human operator, a haptic interface, a graphical interface, and a reference model for the virtual object. Notably, both the graphical interface and the haptic interface utilize the reference model to provide the necessary feedback for the operator. While the graphical interface provides visual feedback of the environment, the haptic interface provides kinesthetic feedback of the interaction between the tool and the surgical field. The role of haptic feedback is to recreate the sense of contact with the virtual environment for the operator. As a result, the circumstances of an actual operation are provided for medical students without the need for physical presence in a clinical environment, which effectively reduces the risk of infection during the COVID-19 pandemic.
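As a minimal illustration of how such a haptic interface renders contact, the penalty-based (virtual spring) model below is a common textbook approach; the stiffness value and wall geometry are illustrative assumptions, not taken from any of the cited simulators.

```python
# Penalty-based haptic rendering of a rigid virtual wall at x = 0.
# When the tool tip penetrates the wall (x < 0), a restoring force
# proportional to the penetration depth is sent back to the device.

def wall_force(x_tool: float, stiffness: float = 1000.0) -> float:
    """Return the feedback force (N) for a virtual wall occupying x < 0."""
    penetration = -x_tool if x_tool < 0.0 else 0.0
    return stiffness * penetration  # pushes the tool back out (+x direction)

# A simplified haptic loop: read tool positions, render forces.
trajectory = [0.02, 0.005, -0.001, -0.004, 0.001]  # metres
forces = [wall_force(x) for x in trajectory]
print(forces)  # zero force in free space, spring force inside the wall
```

Real simulators run this loop at around 1 kHz and use far richer contact models, but the principle of mapping penetration depth to a restoring force is the same.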

4.2 Dual User Haptic Systems

The cooperative and joint performance of an operation, whether for collaboration or training, is a fundamental clinical task that cannot be provided by single user haptic systems. To make the cooperation of two surgeons possible, the system must be upgraded to a dual user haptic system by adding another haptic console. A dual user haptic system is a more recent advancement in haptic technology; it consists of two haptic consoles, one for the trainer and one for the trainee Shahbazi et al. (2018a) . Remarkably, traditional collaboration methods require direct physical contact between the people conducting the operation, whereas haptic-based collaboration eliminates that contact. With the need for physical contact removed, the people involved are no longer at risk of coronavirus transmission. A commercial dual user haptic system developed by Intuitive Surgical Inc. ® is the da Vinci Si Surgical System, which supports training and collaboration during minimally invasive surgery. The da Vinci Si builds on existing da Vinci technology with a number of enabling features such as leading-edge 3D visualization, advanced motion technology, and sufficient dexterity and workspace. However, the da Vinci Si does not provide active supervision and intervention by the trainer over the trainee's actions: when the trainee controls the procedure, the trainer has no way to guide the trainee during the procedure.

The issue of trainer supervision and intervention during the operation in dual user haptic systems has been a topic of active investigation in recent years. A number of studies have utilized the concept of a dominance factor to determine the task dominance of each operator Nudehi et al. (2005) , Khademian and Hashtrudi-Zaad (2012) , Shahbazi et al. (2014b) , Motaharifar et al. (2016) . In those approaches, the trainee is given partial or full task authority by the trainer based on his/her level of expertise. Notably, the task authority provided by these control architectures remains fixed during the operation. Thus, changing the authority of the surgeons, and especially blocking the trainee's commands, is not possible in the middle of the operation. This may lead to undesired operative complications, especially when the trainee makes a sudden, unpredictable mistake.
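The dominance-factor idea described above can be sketched in a few lines: a fixed factor alpha blends the trainer's and trainee's commands into the single command sent to the (virtual) patient side. The numbers are illustrative; the cited architectures add impedance control and stability safeguards that are omitted here.

```python
def blended_command(x_trainer: float, x_trainee: float, alpha: float) -> float:
    """Fixed dominance factor: alpha = 1 gives the trainer full authority,
    alpha = 0 gives the trainee full authority."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0, 1]")
    return alpha * x_trainer + (1.0 - alpha) * x_trainee

# Trainee holds most of the authority (alpha = 0.3 -> 70 % trainee dominance)
print(blended_command(1.0, 0.0, 0.3))
```

Because alpha is chosen before the procedure and held constant, the trainer cannot suddenly block a trainee's erroneous motion, which is exactly the limitation discussed above.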

Fortunately, a number of investigations have developed control architectures to address the above shortcoming of the previously proposed haptic architectures Motaharifar et al. (2019b) , Shahbazi et al. (2014a) , Motaharifar and Taghirad (2020) . As a case in point, an S-shaped function is proposed in Motaharifar et al. (2019b) for adjusting the corrective feedback in order to shape the trainee's muscle memory. The training approach behind this architecture is to allow the trainee to freely experience the task and be corrected as needed. Nevertheless, under this scheme the trainee merely receives the trainer's motion profile; that is, the trainee is deprived of any realistic contribution to the surgical procedure. In contrast, several investigations have proposed mechanisms for adjusting the task dominance, through which the trainee is granted partial or full contribution to the task Shahbazi et al. (2014a) , Motaharifar and Taghirad (2020) , Liu et al. (2015) , Lu et al. (2017) , Liu et al. (2020) . Remarkably, these approaches require both the trainer and the trainee to perform the complete operation on their haptic devices, and the actual task authority is determined from the position error between the trainer and the trainee. This constitutes an important limitation, since the trainer is forced to be involved in every detail of each operation, even the trivial ones. Such an obligation to precisely perform every part of the surgical procedure has little compatibility with the trainer's role of supervisory assistance and interference.
In fact, borrowing the idea from the conventional training programs of medical universities, the haptic architecture should be developed so that the trainer intervenes only to prevent harm to the patient from a trainee's mistake. The issue of the trainer's supervisory assistance and interference is addressed in Motaharifar et al. (2019a) by adjusting the task authority based on the trainer's hand force. That is, the trainer can grant task authority to the trainee by holding the haptic device loosely, or overrule the trainee's action by grasping the haptic device tightly. Therefore, active supervision and interference by the trainer is possible without any physical contact between the trainer and the trainee.
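A hedged sketch of the force-based authority adjustment just described: a logistic (S-shaped) map turns the trainer's measured grip force into a dominance factor, so a loose grip leaves authority with the trainee and a tight grasp overrules them. The threshold and slope values below are invented for illustration and are not taken from Motaharifar et al. (2019a).

```python
import math

def authority_from_grip(force_n: float, threshold: float = 5.0,
                        slope: float = 2.0) -> float:
    """Logistic map from trainer grip force (N) to task authority in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-slope * (force_n - threshold)))

for f in (0.0, 5.0, 10.0):
    print(f, round(authority_from_grip(f), 4))
# loose grip -> near 0 (trainee leads); tight grip -> near 1 (trainer overrules)
```

The smooth transition matters: a hard switch in authority would inject force discontinuities into the haptic loop, whereas the sigmoid hands over control gradually.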

Although the above investigations address the essential theoretical aspects of dual user haptic systems, the commercialization of collaborative haptic systems needs more attention. In recent years, some research groups have developed pilot setups of dual user haptic systems with preliminary clinical evaluation that have the potential for commercialization. For instance, the ARASH-ASiST system provides training and collaboration for two surgeons and is primarily designed for vitreoretinal eye surgical procedures ARASH-ASiST (2019) . The commercialization and widespread utilization of such assistive surgery training tools would be considerably beneficial to health-care systems, both to decrease physical contact during the COVID-19 pandemic and to increase the safety and efficiency of training programs during and after this crisis.

Notwithstanding the key benefits that teleoperated haptic systems provide for remote training during the COVID-19 pandemic, they face a number of challenges that point to directions for future investigation. First, the haptic modality alone is not sufficient to recreate the full sense of actually being in the operating room next to an expert surgeon. To overcome this challenge and increase the operator's telepresence, haptic, visual, and auditory components are combined into a multi-modal telepresence and teleaction architecture in Buss et al. (2010) . The choice of control structure and the clinical investigation of such multi-modal architectures are still areas of active research Shahbazi et al. (2018b) , Caccianiga et al. (2021) . On the other hand, the on-line communication system creates another challenge for haptic training systems. Owing to the high-bandwidth requirement of an appropriate on-line haptic system, the majority of existing haptic architectures in applications such as collaborative teleoperation, handwriting, and rehabilitation rely on off-line communication Babushkin et al. (2021) . However, due to the complexity, uncertainty, and diversity of surgical procedures, on-line feedback from the expert surgeon is necessary for safe and efficient training. The advent of 5G technology, with its faster and more robust communication network, may provide enough bandwidth for effective real-time remote surgery training.
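To put the bandwidth concern in perspective, a back-of-the-envelope estimate, under assumed (not sourced) packet sizes, shows that a standard 1-kHz haptic loop already demands sub-megabit throughput per direction before any video is added; latency and jitter, not raw bandwidth, are usually the harder constraint.

```python
# Rough bandwidth estimate for streaming one haptic channel at 1 kHz.
# Payload assumption (illustrative only): 6-DOF pose (6 doubles) plus a
# 3-axis force vector (3 doubles) = 72 bytes, plus ~28 bytes UDP/IPv4 headers.

UPDATE_RATE_HZ = 1000
PAYLOAD_BYTES = 9 * 8          # 9 double-precision values
HEADER_BYTES = 28              # UDP + IPv4 overhead

bits_per_second = (PAYLOAD_BYTES + HEADER_BYTES) * 8 * UPDATE_RATE_HZ
print(bits_per_second / 1e6, "Mbit/s")  # 0.8 Mbit/s
```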

5 Data Driven Scoring

A vital element of a training program is evaluating the effectiveness of exercises through a grading system based on participants' performance. Conventional qualitative skill assessment methods require physical contact between the trainer and the trainee, since they are based on the trainer's direct supervision. Systematic approaches to skill assessment, in contrast, are based on collecting the required data with appropriate instruments and analyzing the obtained data, thereby eliminating the requirement of physical contact between trainer and trainee. Reviewing systematic data-based methods is therefore of utmost importance, as they can be used to reduce physical contact during the COVID-19 pandemic. In this section, some state-of-the-art methods in surgical skill evaluation are reviewed. Following the trend of similar research in the context of surgical skill evaluation, we categorize the reviewed methods by two criteria. The first is the type of data the method uses for grading the participant. The second is the feature extraction technique used during the evaluation stage.

Generally speaking, two types of data may be available in robot-assisted surgery: kinematic and video data. Kinematic data are available when a robot or haptic device is involved; the most common way of capturing kinematic information is with IMUs, encoders, force sensors, magnetic positioning sensors, and similar instruments. Video is generally recorded in all minimally invasive surgeries through the endoscope.

Kinematic data are easier to analyze because their dimensionality is lower than that of video data. Moreover, kinematic information is superior to video in measuring actual 3D trajectories and 3D velocities Zappella et al. (2013) . On the other hand, video data are more convenient to capture, since no additional equipment or sophisticated sensors need to be attached to the surgical tool. Additionally, video data reflect contextual semantic information, such as the presence or absence of another surgical instrument, which cannot be derived from kinematic data Zappella et al. (2013) . To use video data effectively, one must overcome common obstacles like occlusion and clutter; using multiple cameras, where possible, can greatly assist Abdelaal et al. (2020) . In conclusion, each type of data has its own merits and limitations, and using kinematic data together with video may result in a richer dataset.

Beyond kinematic and video data, another source of information is often disregarded in the literature: the expert surgeon who conducts the training program can evaluate the trainee's performance and provide useful feedback. This type of information, which sits at a different semantic level from the sensory data, is called soft data. Hard and soft information fusion methods can merge the expert's opinion with the kinematic and video data (hard data) to build a better grading system.

After acquiring the data, most surgical skill evaluation methods utilize a feature extraction technique to classify the participant's skill level, e.g., as expert, intermediate, or novice. The classification problem can be solved with hand-engineered features or with features automatically extracted from the data. Hand-engineered features are interpretable and easy to obtain; however, they are hard to define. Specifically, defining a feature that represents skill level regardless of the task is not trivial. Therefore, state-of-the-art methods are commonly based on automatic feature extraction, in which an end-to-end deep neural network unfolds the input data's spatial and temporal features and classifies the participant into one of the mentioned skill levels. While Table 1 summarizes the different data types and feature extraction techniques, we cover some of the reviewed methods in the next sections.

Table 1. Summary of different sources of data and different feature extraction techniques.

Data type
  Kinematic: Pros: lower dimensionality; actual 3D trajectories. Cons: needs tools/sensors; no information about the surroundings.
  Video: Pros: convenient to capture; information about the surroundings. Cons: higher dimensionality; estimated 3D trajectories; occlusion, clutter, etc.
  Experts' opinion: Pros: higher semantic level. Cons: qualitative/subjective.

Feature extraction technique
  Hand-engineered: Pros: interpretable; easy to calculate. Cons: hard to define; case dependent.
  Automatic: Pros: end-to-end solution; case independent. Cons: requires a big dataset; computational cost.

The most convenient hand-engineered features are those introduced by descriptive statistics Anh et al. (2020) . In a skill rating system proposed by Brown et al. (2016) , eight values (mean, standard deviation, minimum, maximum, range, root-mean-square (RMS), total sum-of-squares (TSS), and time integral) of the force and acceleration signals are calculated. Together with time features like task completion time, these values are used as inputs to a random forest classifier to rate the peg-transfer score of 38 participants. In Javaux et al. (2018) , metrics like mean/maximum velocity and acceleration, tool path length, depth perception, maximum and integral of planar/vertical force, and task completion time are considered as a baseline for skill assessment Lefor et al. (2020) . Another common approach in the literature is to use statistical tests such as the Mann-Whitney test Moody et al. (2008) , the Kruskal–Wallis test Javaux et al. (2018) , and Pearson or Spearman correlation Zendejas et al. (2017) . These tests are utilized either to classify the participants directly Moody et al. (2008) or to automatically calculate well-known skill assessment scores like GOALS and FLS Zendejas et al. (2017) .
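The eight descriptive-statistics features used by Brown et al. (2016) can be computed in a few lines. The sketch below mirrors that feature list on a toy force signal; the random forest classifier itself is left out, and the sample values are invented.

```python
import math

def stat_features(signal, dt):
    """Eight descriptive-statistics features of a uniformly sampled signal."""
    n = len(signal)
    mean = sum(signal) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in signal) / n)
    lo, hi = min(signal), max(signal)
    rms = math.sqrt(sum(x * x for x in signal) / n)
    tss = sum((x - mean) ** 2 for x in signal)  # total sum of squares
    # Trapezoidal approximation of the time integral of the signal
    integral = dt * sum((signal[i] + signal[i + 1]) / 2 for i in range(n - 1))
    return {"mean": mean, "std": std, "min": lo, "max": hi,
            "range": hi - lo, "rms": rms, "tss": tss, "integral": integral}

features = stat_features([1.0, 2.0, 3.0, 4.0], dt=0.1)
print(features["mean"], features["range"])  # 2.5 3.0
```

In a real pipeline these eight numbers, computed per signal channel, would be concatenated with timing features and fed to the classifier.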

Since many surgical tasks are periodic by nature, frequency-domain analysis of the data proves effective Zia et al. (2015) . For periodic tasks like knot tying and suturing, Zia et al. (2015) suggest treating the data as time series and extracting features with the Discrete Fourier Transform (DFT) and Discrete Cosine Transform (DCT) to assist the skill-level classification task. The results show that such an approach outperforms many machine-learning-based methods like Bag of Words (BoW) and Sequential Motion Texture (SMT). In another work by the same authors, symbolic, texture, and frequency features are employed for classification, and a Sequential Forward Selection (SFS) algorithm is then utilized to reduce the number of elements in the feature vector and remove irrelevant data Zia et al. (2016) . Hojati et al. (2019) suggest that since the Discrete Wavelet Transform (DWT) is superior to the DFT and DCT in the sense that it offers simultaneous localization in the time and frequency domains, the DWT is a better choice for feature extraction in surgical skill assessment tasks.
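As a small concrete example of the frequency-domain idea, the naive DFT below (written out directly rather than via an FFT library) recovers the dominant frequency bin of a periodic toy signal; for real kinematic data one would use an FFT and feed the magnitude spectrum to a classifier.

```python
import math

def dft_magnitudes(x):
    """Magnitude spectrum of a real signal via the textbook DFT (O(N^2))."""
    n = len(x)
    mags = []
    for k in range(n):
        re = sum(x[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(x[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im))
    return mags

# A purely periodic "motion" signal: 2 cycles over 16 samples.
signal = [math.cos(2 * math.pi * 2 * i / 16) for i in range(16)]
mags = dft_magnitudes(signal)
dominant = max(range(1, 8), key=lambda k: mags[k])
print(dominant)  # 2 — the energy sits in the bin matching the period
```

A repetitive task performed smoothly concentrates its energy in a few such bins; jerky, novice-like motion spreads energy across the spectrum, which is what makes these features discriminative.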

As mentioned before, hand-engineered features are task-specific. For example, the frequency-domain analysis discussed above is only viable when the task is periodic; otherwise, the frequency-domain features must be concatenated with other features. Moreover, identifying the correct features that reflect participants' skill levels in different surgical tasks requires intensive knowledge of the field. As a result, developing a method in which the essential features are identified automatically is advantageous.

With the recent success of Convolutional Neural Networks (CNNs) in classification problems like image classification, action recognition, and segmentation, it is safe to assume that CNNs can be used for skill assessment. However, unlike image classification, the improvement brought by end-to-end deep CNNs remains limited compared to hand-engineered features for action recognition Wang et al. (2018) . Similarly, conventional CNNs do not contribute much in surgical skill evaluation problems. For example, Fawaz et al. (2018) proposed a CNN-based approach for dry-lab skill evaluation tasks such as needle passing, suturing, and knot tying. However, a hand-engineered method with a set of features introduced as holistic features (SMT, DFT, DCT, and Approximate Entropy (ApEn)), suggested by Zia and Essa (2018) , reaches the same accuracy as the CNN-based method on the needle passing and suturing tasks and outperforms it on the knot-tying task.

Wang et al. (2018) suggest that conventional CNNs fall short compared to traditional hand-crafted feature extraction techniques because they only consider appearance (spatial features) and ignore the data's temporal dynamics. In Wang and Fey (2018) , a parallel deep learning architecture is proposed to recognize the surgical training activity and assess trainee expertise: a gated recurrent unit (GRU) extracts temporal features, and a CNN extracts the spatial features. The overall accuracy calculated for the needle passing, suturing, and knot tying tasks is 96% using video data. The problem of extracting spatiotemporal features is addressed with 3D ConvNets in Funke et al. (2019) , where inflated convolutional layers process the video snippets and unfold the classifier's input data.

To the best of our knowledge, all of the methods proposed in the literature have used single-classifier techniques. However, methods like classifier fusion have proved useful for medical-related data. In Kazemian et al. (2005) , an OWA-based fusion technique is used to combine multiple classifiers and improve accuracy. For a more advanced classifier fusion technique, one can refer to Kazemian et al. (2010) , where methods such as Dempster's rule of combination and the Choquet integral are compared with more basic techniques. Activity recognition and movement classification are another efficient way to automatically calculate metrics representing surgical skill Khan et al. (2020) . Moreover, detecting instruments in a video and drawing centroids based on the orientation and movement of the instruments can reflect a surgeon's focus and ability to plan moves; utilizing these centroids and calculating the radius, distance, and relative orientation can aid classification by skill level Lavanchy et al. (2021) .

In conclusion, the general framework illustrated in Figure 5 summarizes the reviewed techniques. The input data, either kinematic or video, are fed to a feature extraction block. A fusion block Naeini et al. (2014) can enrich the semantics of the data using expert surgeon feedback. Finally, a regression technique or a classifier can be employed to calculate a participant's score or assign a label reflecting his/her performance.

Figure 5. A general framework for surgical skill assessment.

6 Machine Vision

The introduction of new hardware capable of running deep learning methods with acceptable performance has led artificial intelligence to play a more significant role in intelligent systems Han (2017) . There is undeniably huge potential in employing deep learning methods in a wide range of applications Weng et al. (2019) , Antoniades et al. (2016) , Lotfi et al. (2018) , Lotfi et al. (2020) . In particular, using a camera together with a deep learning algorithm, machines may precisely identify and classify objects, enabling them either to react appropriately or to monitor a process automatically. For instance, for a patient in a coma, every tiny reaction is crucial to detect, and since it is not possible to assign a person to each patient, a camera can solve the problem satisfactorily. Regarding the COVID-19 pandemic, artificial intelligence may be used to reduce both physical interactions and the risk of infection, especially in medical training. Considering eye surgery as an example, not only should the novice surgeon closely track how the expert performs, but the expert should also be notified of any mistake made by the novice during surgery. Computer vision approaches used as an interface can minimize the level of close interaction: during training, the algorithm may act both as the novice surgeon looking over the expert's hand and as the expert monitoring and evaluating how the novice performs. This kind of application extends easily to other medical training cases, meeting the demand to avoid close contact.

Requiring no special preprocessing, deep convolutional neural networks (CNNs) are commonly used for classifying images into distinct categories; in medical images, these may include probable lesions Farooq et al. (2017) , Chitra and Seenivasagam (2013) . Moreover, they can detect objects of interest in images, which can be used not only to find and localize specific features but also to recognize them if needed. Since most medical training tasks require on-line, long-term monitoring, a camera combined with these powerful approaches lets an expert always keep an eye on the task assigned to a trainee. Besides, CNN-based methods can be implemented on graphics processing units (GPUs) to process images with adequate speed and accuracy Chetlur et al. (2014) , Bahrampour et al. (2015) . This reduces latency and makes it possible for the trainer to be notified in time and correct the trainee remotely.

Numerous studies have been carried out in the field of CNN-based image processing. These methods are mainly divided into single-stage and two-stage detectors: the former are known to be fast, while the latter yield higher accuracy. Figure 6 illustrates the difference between a two-stage and a single-stage detector. Among single-stage detectors, starting with LeCun et al. (1998) as one of the earliest networks, plenty of approaches have been presented in the literature, of which the single-shot multi-box detector (SSD) Liu et al. (2016) , RetinaNet Lin et al. (2017) , and you only look once (YOLO) Redmon and Farhadi (2018) are notable examples. Some of these approaches come in several structures, from simpler to more complex, to be chosen depending on whether speed or accuracy matters more. Utilizing these methods involves two phases, training and testing: while it is crucial to define a proper optimization problem in the first phase, it is indispensable to implement the trained CNN optimally in the second. Methods like Krizhevsky et al. (2012) , Simonyan and Zisserman (2015) , Szegedy et al. (2015) , and Szegedy et al. (2016) suggest specific CNN models to obtain better outcomes. On the other hand, to further improve accuracy, two-stage detectors like Girshick et al. (2014) first determine regions of interest (ROIs) and then identify probable objects in those areas. As a representative, selective search Uijlings et al. (2013) is designed to produce around 2,000 region proposals, while a classifier is employed in the later stage. Addressing some challenging problems in these detectors, He et al. (2015) , Girshick (2015) , and Ren et al. (2015) enhance the results in terms of both accuracy and speed.
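Both detector families end with the same post-processing step: class scores are thresholded and overlapping boxes are pruned by non-maximum suppression (NMS) using intersection-over-union (IoU). The minimal sketch below illustrates that shared step; the box coordinates and the 0.5 threshold are illustrative defaults, not values from any cited detector.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(detections, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop heavy overlaps."""
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if all(iou(box, k) < iou_thresh for k, _ in kept):
            kept.append((box, score))
    return kept

dets = [((0, 0, 10, 10), 0.9),    # instrument tip, best hypothesis
        ((1, 1, 11, 11), 0.8),    # duplicate detection of the same object
        ((20, 20, 30, 30), 0.7)]  # a second, distinct object
print(len(nms(dets)))  # 2
```

In a surgical-training monitor this is the step that turns raw network output into a clean per-frame list of instrument locations for downstream tracking.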

Figure 6. Example of two-stage and single-stage detectors Kathuria (2021) . (A) Two-stage detector (RCNN). (B) Single-stage detector (YOLO).

In a nutshell, when dealing with critical situations such as the current COVID-19 pandemic, it is highly recommended to employ artificial intelligence techniques in image processing, namely deep CNNs, for medical training tasks. This way, neither is close physical interaction between expert and novice necessary, nor is the quality of training adversely reduced by the restrictions. In fact, the computer vision approach acts as an interface that makes it possible both to learn from the expert and to evaluate the novice remotely.

7 Conclusion and Future Prospects

The faculty members and students of medical universities fall into the high-risk category due to potential exposure to coronavirus through direct contact and aerosol-generating procedures. As a result, many medical schools have suspended their clinical programs or implemented social distancing in their laboratory practices. Furthermore, the fight against COVID-19 has consumed nearly all the capacity of health-care systems, and some less urgent medical services, including education, have been limited or even paused. Therefore, unless assistive training tools are utilized to support educational procedures, the training efficiency of medical universities will be reduced, with future consequences for the world health-care system.

Practicing medical tasks under current lock-down policies can be enabled by state-of-the-art techniques in haptics, virtual reality, machine vision, and machine learning. Notably, the utilization of these technologies in medical education has been actively researched in recent years to increase the safety and efficiency of surgical training procedures. The COVID-19 pandemic has now created another motivation for these assistive technologies. In this paper, the existing assistive technologies for medical training are reviewed in the COVID-19 context, and a summary of them is presented in Table 2 .

Table 2. The main tools and approaches that help to reduce physical contact in medical training.

  Virtual Reality: VR-based surgical training systems; AR-based surgical training systems
  Haptic Technology: single-user haptic simulators; dual-user haptics with fixed authority; dual-user haptics with variable authority
  Data-Driven Scoring: DDS using hand-engineered features; DDS using automated feature extraction; fusion techniques
  Machine Vision: single-stage detectors; two-stage detectors; classifiers

We have reviewed how a surgical simulator system comprising a VR/AR-based graphical interface and a haptic interface can provide the circumstances of an actual surgical operation for medical students without requiring attendance in hospital environments. Furthermore, by augmenting the system with another haptic console to form a dual user haptic system, trainees are given the opportunity to collaborate with, and receive guidance cues from, an expert surgeon in a systematic manner. In contrast to traditional collaboration methodologies, haptic-based collaboration does not require physical contact between the people involved, so the risk of infection is reduced. Assessment of the expertise level of medical students is another element of every training program. The necessity of reducing physical contact during the COVID-19 pandemic has also affected skill assessment methodologies, as the traditional ways of skill assessment are based on direct observation by a trainer. In contrast, data-based analysis may be utilized as a systematic approach for skill assessment without any need for physical contact. In this paper, some current methods in surgical skill evaluation have been reviewed.

Biomedical engineering technology has progressed by leaps and bounds over the past several decades, and advancements in remote diagnostics and remote treatment are considered a leading edge of the field. For instance, the robot-assisted da Vinci tele-surgery system has received a great deal of attention in the healthcare marketplace, with more than 5 million surgeries in the last 2 decades DaVinci (2021) . However, the rate of advancement in medical training, which usually follows traditional methods, has been considerably lower than in other aspects of the medical field, and modern training technologies have received less attention over the past several decades. While remote training and remote skill assessment pose relatively lower risk to the patient than remote diagnostics and remote treatment, the reason for the lesser attention to the former is the lack of sufficient motivation. It is hoped that the motivation created for these advanced medical training methods during the COVID-19 crisis is strong enough to continuously increase their utilization in medical universities. Although wide utilization of these technologies needs considerable time, effort, and investment, immediate decisions and actions are required to deploy these promising techniques widely. Notably, all of the presented approaches and techniques are also intended for normal, non-pandemic situations, in order to provide safer and more efficient medical training. Therefore, even after the world recovers from this crisis, these techniques, tools, and approaches deserve more attention, recognition, investigation, and utilization.
There needs to be global awareness among medical universities that haptic technology and virtual reality, integrated with machine learning and machine vision, provide an excellent systematic medical training apparatus that meets the requirements of health-care systems to enhance the safety, efficiency, and robustness of medical training.

1 Less invasive stabilization system

2 Orthognathic surgery

Data Availability Statement

Author Contributions

Conceptualization, HT, SFM, and MM; original draft preparation, MM, AN, PA, AI, and FL; review and editing, HT, SFM, BM, and AL.

Funding

This work was supported in part by the National Institute for Medical Research Development (NIMAD) under Grant No. 942314, in part by Tehran University of Medical Sciences, Tehran, Iran, under Grant No. 35949-43-01-97, and in part by a K. N. Toosi University of Technology, Tehran, Iran, research grant.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

  • Abdelaal A. E., Avinash A., Kalia M., Hager G. D., Salcudean S. E. (2020). A Multi-Camera, Multi-View System for Training and Skill Assessment for Robot-Assisted Surgery . Int. J. CARS 15 , 1369–1377. 10.1007/s11548-020-02176-1 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Akbari M., Carriere J., Meyer T., Sloboda R., Husain S., Usmani N., et al. (2021). Robotic Ultrasound Scanning with Real-Time Image-Based Force Adjustment: Quick Response for Enabling Physical Distancing during the Covid-19 Pandemic . Front. Robotics AI 8 , 62. 10.3389/frobt.2021.645424 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Anh N. X., Nataraja R. M., Chauhan S. (2020). Towards Near Real-Time Assessment of Surgical Skills: A Comparison of Feature Extraction Techniques . Comput. Methods Programs Biomed. 187 , 105234. 10.1016/j.cmpb.2019.105234 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Antoniades A., Spyrou L., Took C. C., Sanei S. (2016). “ Deep Learning for Epileptic Intracranial Eeg Data ,” in 2016 IEEE 26th International Workshop on Machine Learning for Signal Processing (MLSP), Vietri sul Mare, Salerno, Italy, September 13–16, 2016 (IEEE; ), 1–6. 10.1109/mlsp.2016.7738824 [ CrossRef ] [ Google Scholar ]
  • ARASH-ASiST (2019). Dataset. Aras Haptics: A System for EYE Surgery Training . Available at: https://aras.kntu.ac.ir/arash-asist// . (Accessed August 5, 2020).
  • Babushkin V., Jamil M. H., Park W., Eid M. (2021). Sensorimotor Skill Communication: A Literature Review . IEEE Access 9 , 75132–75149. 10.1109/access.2021.3081449 [ CrossRef ] [ Google Scholar ]
  • Bahrampour S., Ramakrishnan N., Schott L., Shah M. (2015). Comparative Study of Caffe, Neon, Theano, and Torch for Deep Learning . CoRR . arXiv:1511.06435. Available at: http://arxiv.org/abs/1511.06435 . [ Google Scholar ]
  • Bartlett J. D., Lawrence J. E., Yan M., Guevel B., Stewart M. E., Audenaert E., et al. (2020). The Learning Curves of a Validated Virtual Reality Hip Arthroscopy Simulator . Arch. Orthopaedic Trauma Surg. 140 ( 6 ), 761–767. 10.1007/s00402-020-03352-3 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Basdogan C., De S., Kim J., Muniyandi M., Kim H., Srinivasan M. A. (2004). Haptics in Minimally Invasive Surgical Simulation and Training . IEEE Comput. Graphics Appl. 24 , 56–64. 10.1109/mcg.2004.1274062 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Brown J. D., O’Brien C. E., Leung S. C., Dumon K. R., Lee D. I., Kuchenbecker K. J. (2016). Using Contact Forces and Robot Arm Accelerations to Automatically Rate Surgeon Skill at Peg Transfer . IEEE Trans. Biomed. Eng. 64 , 2263–2275. 10.1109/TBME.2016.2634861 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Buss M., Peer A., Schauß T., Stefanov N., Unterhinninghofen U., Behrendt S., et al. (2010). Development of a Multi-Modal Multi-User Telepresence and Teleaction System . Int. J. Robot. Res. 29 , 1298–1316. 10.1177/0278364909351756 [ CrossRef ] [ Google Scholar ]
  • Caccianiga G., Mariani A., de Paratesi C. G., Menciassi A., De Momi E. (2021). Multi-Sensory Guidance and Feedback for Simulation-Based Training in Robot Assisted Surgery: A Preliminary Comparison of Visual, Haptic, and Visuo-Haptic . IEEE Robot. Autom. Lett. 6 , 3801–3808. 10.1109/lra.2021.3063967 [ CrossRef ] [ Google Scholar ]
  • Cecil J., Gupta A., Pirela-Cruz M. (2018). An Advanced Simulator for Orthopedic Surgical Training . Int. J. Comput. Assist. Radiol. Surg. 13 , 305–319. 10.1007/s11548-017-1688-0 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Chetlur S., Woolley C., Vandermersch P., Cohen J., Tran J., Catanzaro B., et al. (2014). cuDNN: Efficient Primitives for Deep Learning . CoRR . arXiv: 1410.0759. Available at: http://arxiv.org/abs/1410.0759 . [ Google Scholar ]
  • Chitra R., Seenivasagam V. (2013). Heart Disease Prediction System Using Supervised Learning Classifier . Bonfring Int. J. Softw. Eng. Soft Comput. 3 , 01–07. 10.9756/bijsesc.4336 [ CrossRef ] [ Google Scholar ]
  • DaVinci (2021). Dataset. Enabling Surgical Care to Get Patients Back to what Matters . Available at: https://www.intuitive.com/en-us/products-and-services/da-vinci . (Accessed July 4, 2021).
  • Desselle M. R., Brown R. A., James A. R., Midwinter M. J., Powell S. K., Woodruff M. A. (2020). Augmented and Virtual Reality in Surgery . Comput. Sci. Eng. 22 , 18–26. 10.1109/mcse.2020.2972822 [ CrossRef ] [ Google Scholar ]
  • Farooq A., Anwar S., Awais M., Rehman S. (2017). “ A Deep Cnn Based Multi-Class Classification of Alzheimer’s Disease Using Mri ,” in 2017 IEEE International Conference on Imaging systems and techniques (IST) (IEEE), Beijing, China, October 18–20, 2017, 1–6. 10.1109/ist.2017.8261460 [ CrossRef ] [ Google Scholar ]
  • Fawaz H. I., Forestier G., Weber J., Idoumghar L., Muller P.-A. (2018). “ Evaluating Surgical Skills from Kinematic Data Using Convolutional Neural Networks ,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, Granada, Spain, September 16–20, 2018 (Springer; ), 214–221. 10.1007/978-3-030-00937-3_25 [ CrossRef ] [ Google Scholar ]
  • Feizi N., Tavakoli M., Patel R. V., Atashzar S. F. (2021). Robotics and Ai for Teleoperation, Tele-Assessment, and Tele-Training for Surgery in the Era of Covid-19: Existing Challenges, and Future Vision . Front. Robot. AI 8 , 610677. 10.3389/frobt.2021.610677 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Funke I., Mees S. T., Weitz J., Speidel S. (2019). Video-based Surgical Skill Assessment Using 3d Convolutional Neural Networks . Int. J. Comput. Assist. Radiol. Surg. 14 , 1217–1225. 10.1007/s11548-019-01995-1 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Girshick R., Donahue J., Darrell T., Malik J. (2014). “ Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation ,” in Proceedings of the IEEE conference on computer vision and pattern recognition, Columbus, OH, June 23–28, 2014, 580–587. 10.1109/cvpr.2014.81 [ CrossRef ] [ Google Scholar ]
  • Girshick R. (2015). “ Fast R-Cnn ,” in Proceedings of the IEEE international conference on computer vision, Boston, MA, June 7–12, 2015, 1440–1448. 10.1109/iccv.2015.169 [ CrossRef ] [ Google Scholar ]
  • Han S. (2017). Efficient Methods and Hardware for Deep Learning . Stanford University. [ Google Scholar ]
  • He K., Zhang X., Ren S., Sun J. (2015). Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition . IEEE Trans. pattern Anal. machine intell. 37 , 1904–1916. 10.1109/tpami.2015.2389824 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Hodges C., Moore S., Lockee B., Trust T., Bond A. (2020). The Difference between Emergency Remote Teaching and Online Learning. Boulder, CO . Educause Rev. 27 ( 1 ), 1–9. [ Google Scholar ]
  • Hojati N., Motaharifar M., Taghirad H., Malekzadeh A. (2019). “ Skill Assessment Using Kinematic Signatures: Geomagic Touch Haptic Device ,” in 2019 7th International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, November 20–22, 2019 (IEEE; ), 186–191. 10.1109/icrom48714.2019.9071892 [ CrossRef ] [ Google Scholar ]
  • Iwashita Y., Hibi T., Ohyama T., Umezawa A., Takada T., Strasberg S. M., et al. (2017). Delphi Consensus on Bile Duct Injuries during Laparoscopic Cholecystectomy: an Evolutionary Cul-De-Sac or the Birth Pangs of a New Technical Framework? J. Hepato-Biliary-Pancreatic Sci. 24 , 591–602. 10.1002/jhbp.503 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Javaux A., Joyeux L., Deprest J., Denis K., Vander Poorten E. (2018). Motion-based Skill Analysis in a Fetoscopic Spina-Bifida Repair Training Model . In CRAS , Date: 2018/09/10-2018/09/11, London, United Kingdom. [ Google Scholar ]
  • Jonas J. B., Rabethge S., Bender H.-J. (2003). Computer-assisted Training System for Pars Plana Vitrectomy . Acta Ophthalmol. Scand. 81 , 600–604. 10.1046/j.1395-3907.2003.0078.x [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kathuria A. (2021). Dataset. Tutorial on Implementing yolo V3 from Scratch in Pytorch . Available at: https://blog.paperspace.com/how-to-implement-a-yolo-object-detector-in-pytorch/ . (Accessed on 01 07, 2021). [ Google Scholar ]
  • Kazemian M., Moshiri B., Nikbakht H., Lucas C. (2005). “ Protein Secondary Structure Classifiers Fusion Using Owa ,” in International Symposium on Biological and Medical Data Analysis, Aveiro, Portugal, November 10–11, 2005 (Springer; ), 338–345. 10.1007/11573067_34 [ CrossRef ] [ Google Scholar ]
  • Kazemian M., Moshiri B., Palade V., Nikbakht H., Lucas C. (2010). Using Classifier Fusion Techniques for Protein Secondary Structure Prediction . Int. J. Comput. Intell. Bioinf. Syst. Biol. 1 , 418–434. 10.1504/ijcibsb.2010.038225 [ CrossRef ] [ Google Scholar ]
  • Khademian B., Hashtrudi-Zaad K. (2012). Dual-user Teleoperation Systems: New Multilateral Shared Control Architecture and Kinesthetic Performance Measures . Ieee/asme Trans. Mechatron. 17 , 895–906. 10.1109/tmech.2011.2141673 [ CrossRef ] [ Google Scholar ]
  • Khan A., Mellor S., King R., Janko B., Harwin W., Sherratt R. S., et al. (2020). Generalized and Efficient Skill Assessment from Imu Data with Applications in Gymnastics and Medical Training . New York, NY, ACM Trans. Comput. Healthc. 2 ( 1 ), 1–21. [ Google Scholar ]
  • Khanna R. C., Honavar S. G., Metla A. L., Bhattacharya A., Maulik P. K. (2020). Psychological Impact of Covid-19 on Ophthalmologists-In-Training and Practising Ophthalmologists in india . Indian J. Ophthalmol. 68 , 994. 10.4103/ijo.ijo_1458_20 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Kotsis S. V., Chung K. C. (2013). Application of See One, Do One, Teach One Concept in Surgical Training . Plast. Reconstr. Surg. 131 , 1194. 10.1097/prs.0b013e318287a0b3 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Krizhevsky A., Sutskever I., Hinton G. E. (2012). “ Imagenet Classification with Deep Convolutional Neural Networks ,” in Advances in neural information processing systems, Lake Tahoe, NV, December 3–6, 2012, 1097–1105. [ Google Scholar ]
  • kumar Renganayagalu S., Mallam S. C., Nazir S. (2021). Effectiveness of Vr Head Mounted Displays in Professional Training: A Systematic Review . Technol. Knowl. Learn . (Springer), 1–43. 10.1007/s10758-020-09489-9 [ CrossRef ] [ Google Scholar ]
  • Lavanchy J. L., Zindel J., Kirtac K., Twick I., Hosgor E., Candinas D., et al. (2021). Automation of Surgical Skill Assessment Using a Three-Stage Machine Learning Algorithm . Scientific Rep. 11 , 1–9. 10.1038/s41598-021-88175-x [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • LeCun Y., Bottou L., Bengio Y., Haffner P. (1998). Gradient-based Learning Applied to Document Recognition . Proc. IEEE 86 , 2278–2324. 10.1109/5.726791 [ CrossRef ] [ Google Scholar ]
  • Lefor A. K., Harada K., Dosis A., Mitsuishi M. (2020). Motion Analysis of the Jhu-Isi Gesture and Skill Assessment Working Set Using Robotics Video and Motion Assessment Software . Int. J. Comput. Assist. Radiol. Surg. 15 , 2017–2025. 10.1007/s11548-020-02259-z [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lin T.-Y., Goyal P., Girshick R., He K., Dollár P. (2017). “ Focal Loss for Dense Object Detection ,” in Proceedings of the IEEE international conference on computer vision, 2980–2988. 10.1109/iccv.2017.324 [ CrossRef ] [ Google Scholar ]
  • Liu F., Lelevé A., Eberard D., Redarce T. (2015). “ A Dual-User Teleoperation System with Online Authority Adjustment for Haptic Training ,” in 2015 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Milan, Italy, August 25–29, 2015, 1168–1171. 10.1109/embc.2015.7318574 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Liu W., Anguelov D., Erhan D., Szegedy C., Reed S., Fu C.-Y., et al. (2016). “ Ssd: Single Shot Multibox Detector ,” in European conference on computer vision (Springer; ), 21–37. 10.1007/978-3-319-46448-0_2 [ CrossRef ] [ Google Scholar ]
  • Liu F., Licona A. R., Lelevé A., Eberard D., Pham M. T., Redarce T. (2020). An Energy-Based Approach for N-Dof Passive Dual-User Haptic Training Systems . Robotica 38 , 1155–1175. 10.1017/s0263574719001309 [ CrossRef ] [ Google Scholar ]
  • Lohre R., Bois A. J., Athwal G. S., Goel D. P. (2020). Improved Complex Skill Acquisition by Immersive Virtual Reality Training: a Randomized Controlled Trial . JBJS 102 , e26. 10.2106/jbjs.19.00982 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Lotfi F., Ajallooeian V., Taghirad H. D. (2018). “ Robust Object Tracking Based on Recurrent Neural Networks ,” in 2018 6th RSI International Conference on Robotics and Mechatronics (IcRoM), 507–511. 10.1109/icrom.2018.8657608 [ CrossRef ] [ Google Scholar ]
  • Lotfi F., Hasani P., Faraji F., Motaharifar M., Taghirad H., Mohammadi S. (2020). “ Surgical Instrument Tracking for Vitreo-Retinal Eye Surgical Procedures Using Aras-Eye Dataset ,” in 2020 28th Iranian Conference on Electrical Engineering (ICEE) (IEEE; ), 1–6. 10.1109/icee50131.2020.9260679 [ CrossRef ] [ Google Scholar ]
  • Lu Z., Huang P., Dai P., Liu Z., Meng Z. (2017). Enhanced Transparency Dual-User Shared Control Teleoperation Architecture with Multiple Adaptive Dominance Factors . Int. J. Control. Autom. Syst. 15 , 2301–2312. 10.1007/s12555-016-0467-y [ CrossRef ] [ Google Scholar ]
  • Medellin-Castillo H. I., Zaragoza-Siqueiros J., Govea-Valladares E. H., de la Garza-Camargo H., Lim T., Ritchie J. M. (2020). Haptic-enabled Virtual Training in Orthognathic Surgery . Virtual Reality 25 , 53–67. 10.1007/s10055-020-00438-6 [ CrossRef ] [ Google Scholar ]
  • Moody L., Waterworth A., McCarthy A. D., Harley P. J., Smallwood R. H. (2008). The Feasibility of a Mixed Reality Surgical Training Environment . Virtual Reality 12 , 77–86. 10.1007/s10055-007-0080-8 [ CrossRef ] [ Google Scholar ]
  • Motaharifar M., Taghirad H. D. (2020). A Force Reflection Robust Control Scheme with Online Authority Adjustment for Dual User Haptic System . Mech. Syst. Signal Process. 135 , 106368. 10.1016/j.ymssp.2019.106368 [ CrossRef ] [ Google Scholar ]
  • Motaharifar M., Bataleblu A., Taghirad H. (2016). “ Adaptive Control of Dual User Teleoperation with Time Delay and Dynamic Uncertainty ,” in 2016 24th Iranian conference on electrical engineering (ICEE), Shiraz, Iran, May 10–12, 2016 (IEEE; ), 1318–1323. 10.1109/iraniancee.2016.7585725 [ CrossRef ] [ Google Scholar ]
  • Motaharifar M., Taghirad H. D., Hashtrudi-Zaad K., Mohammadi S. F. (2019a). Control of Dual-User Haptic Training System with Online Authority Adjustment: An Observer-Based Adaptive Robust Scheme . IEEE Trans. Control. Syst. Technol. 28 ( 6 ), 2404–2415. 10.1109/tcst.2019.2946943 [ CrossRef ] [ Google Scholar ]
  • Motaharifar M., Taghirad H. D., Hashtrudi-Zaad K., Mohammadi S.-F. (2019b). Control Synthesis and ISS Stability Analysis of Dual-User Haptic Training System Based on S-Shaped Function . IEEE/ASME Trans. Mechatron. 24 ( 4 ), 1553–1564. 10.1109/tmech.2019.2917448 [ CrossRef ] [ Google Scholar ]
  • Naeini M. P., Moshiri B., Araabi B. N., Sadeghi M. (2014). Learning by Abstraction: Hierarchical Classification Model Using Evidential Theoretic Approach and Bayesian Ensemble Model . Neurocomputing 130 , 73–82. 10.1016/j.neucom.2012.03.041 [ CrossRef ] [ Google Scholar ]
  • Nudehi S. S., Mukherjee R., Ghodoussi M. (2005). A Shared-Control Approach to Haptic Interface Design for Minimally Invasive Telesurgical Training . IEEE Trans. Control. Syst. Technol. 13 , 588–592. 10.1109/tcst.2004.843131 [ CrossRef ] [ Google Scholar ]
  • Redmon J., Farhadi A. (2018). Yolov3: An Incremental Improvement . CoRR abs/1804.02767 . Available at: http://arxiv.org/abs/1804.02767 . [ Google Scholar ]
  • Ren S., He K., Girshick R., Sun J. (2015). “ Faster R-Cnn: Towards Real-Time Object Detection with Region Proposal Networks ,” in Advances in neural information processing systems, Montreal, Quebec, Canada, December 7–12, 2015, 91–99. [ Google Scholar ]
  • Shahbazi M., Atashzar S. F., Talebi H. A., Patel R. V. (2014a). An Expertise-Oriented Training Framework for Robotics-Assisted Surgery . Proc. IEEE Int. Conf. Rob. Autom. , 5902–5907. 10.1109/icra.2014.6907728 [ CrossRef ] [ Google Scholar ]
  • Shahbazi M., Atashzar S. F., Talebi H. A., Patel R. V. (2014b). Novel Cooperative Teleoperation Framework: Multi-Master/single-Slave System . IEEE/ASME Trans. Mechatron. 20 , 1668–1679. 10.1109/tmech.2014.2347034 [ CrossRef ] [ Google Scholar ]
  • Shahbazi M., Atashzar S. F., Patel R. V. (2018a). A Systematic Review of Multilateral Teleoperation Systems . IEEE Trans. Haptics 11 , 338–356. 10.1109/toh.2018.2818134 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Shahbazi M., Atashzar S. F., Ward C., Talebi H. A., Patel R. V. (2018b). Multimodal Sensorimotor Integration for Expert-In-The-Loop Telerobotic Surgical Training . IEEE Trans. Robot. 34 , 1549–1564. 10.1109/tro.2018.2861916 [ CrossRef ] [ Google Scholar ]
  • Sharma D., Bhaskar S. (2020). Addressing the Covid-19 burden on Medical Education and Training: the Role of Telemedicine and Tele-Education during and beyond the Pandemic . Front. Public Health 8 , 838. 10.3389/fpubh.2020.589669 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Si W.-X., Liao X.-Y., Qian Y.-L., Sun H.-T., Chen X.-D., Wang Q., et al. (2019). Assessing Performance of Augmented Reality-Based Neurosurgical Training . Vis. Comput. Industry, Biomed. Art 2 , 6. 10.1186/s42492-019-0015-8 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Simonyan K., Zisserman A. (2015). “ Very Deep Convolutional Networks for Large-Scale Image Recognition ,” in International Conference on Learning Representations, San Diego, CA, May 7–9, 2015. [ Google Scholar ]
  • Singh R. P., Javaid M., Kataria R., Tyagi M., Haleem A., Suman R. (2020). Significant Applications of Virtual Reality for Covid-19 Pandemic . Diabetes Metab. Syndr. Clin. Res. Rev. 14 ( 4 ), 661–664. 10.1016/j.dsx.2020.05.011 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Spera C., Somerville A., Caniff S., Keenan J., Fischer M. D. (2020). Virtual Reality Haptic Surgical Simulation for Sub-retinal Administration of an Ocular Gene Therapy . Invest. Ophthalmol. Vis. Sci. 61 , 4503. 10.1039/d0ay90130j [ CrossRef ] [ Google Scholar ]
  • Stone S., Bernstein M. (2007). Prospective Error Recording in Surgery: an Analysis of 1108 Elective Neurosurgical Cases . Neurosurgery 60 , 1075–1082. 10.1227/01.neu.0000255466.22387.15 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Szegedy C., Liu W., Jia Y., Sermanet P., Reed S., Anguelov D., et al. (2015). “ Going Deeper with Convolutions ,” in Proceedings of the IEEE conference on computer vision and pattern recognition, Boston, MA, June 7–12, 2015, 1–9. 10.1109/cvpr.2015.7298594 [ CrossRef ] [ Google Scholar ]
  • Szegedy C., Vanhoucke V., Ioffe S., Shlens J., Wojna Z. (2016). “ Rethinking the Inception Architecture for Computer Vision ,” in Proceedings of the IEEE conference on computer vision and pattern recognition, Las Vegas, NV, June 27–30, 2016, 2818–2826. 10.1109/cvpr.2016.308 [ CrossRef ] [ Google Scholar ]
  • Tahmasebi A. M., Hashtrudi-Zaad K., Thompson D., Abolmaesumi P. (2008). A Framework for the Design of a Novel Haptic-Based Medical Training Simulator . IEEE Trans. Inf. Technol. Biomed. 12 , 658–666. 10.1109/titb.2008.926496 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Tavakoli M., Carriere J., Torabi A. (2020). Robotics, Smart Wearable Technologies, and Autonomous Intelligent Systems for Healthcare during the Covid-19 Pandemic: An Analysis of the State of the Art and Future Vision . Adv. Intell. Syst. 2 , 2000071. 10.1002/aisy.202000071 [ CrossRef ] [ Google Scholar ]
  • Uijlings J. R., Van De Sande K. E., Gevers T., Smeulders A. W. (2013). Selective Search for Object Recognition . Int. J. Comput. Vis. 104 , 154–171. 10.1007/s11263-013-0620-5 [ CrossRef ] [ Google Scholar ]
  • Wang Z., Fey A. M. (2018). “ Satr-dl: Improving Surgical Skill Assessment and Task Recognition in Robot-Assisted Surgery with Deep Neural Networks ,” in In 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, July 17–21, 2018 (IEEE; ), 1793–1796. 10.1109/EMBC.2018.8512575 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wang D., Shi Y., Liu S., Zhang Y., Xiao J. (2014). Haptic Simulation of Organ Deformation and Hybrid Contacts in Dental Operations . IEEE Trans. Haptics 7 , 48–60. 10.1109/toh.2014.2304734 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Wang L., Xiong Y., Wang Z., Qiao Y., Lin D., Tang X., et al. (2018). Temporal Segment Networks for Action Recognition in Videos . IEEE Trans. pattern Anal. machine intell. 41 , 2740–2755. 10.1109/TPAMI.2018.2868668 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Weng J., Weng J., Zhang J., Li M., Zhang Y., Luo W. (2019). “ Deepchain: Auditable and Privacy-Preserving Deep Learning with Blockchain-Based Incentive ,” in IEEE Transactions on Dependable and Secure Computing. 10.1109/tdsc.2019.2952332 [ CrossRef ] [ Google Scholar ]
  • Yari S. S., Jandhyala C. K., Sharareh B., Athiviraham A., Shybut T. B. (2018). Efficacy of a Virtual Arthroscopic Simulator for Orthopaedic Surgery Residents by Year in Training . Orthopaedic J. Sports Med. 6 , 2325967118810176. 10.1177/2325967118810176 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zappella L., Béjar B., Hager G., Vidal R. (2013). Surgical Gesture Classification from Video and Kinematic Data . Med. image Anal. 17 , 732–745. 10.1016/j.media.2013.04.007 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zendejas B., Jakub J. W., Terando A. M., Sarnaik A., Ariyan C. E., Faries M. B., et al. (2017). Laparoscopic Skill Assessment of Practicing Surgeons Prior to Enrollment in a Surgical Trial of a New Laparoscopic Procedure . Surg. Endosc. 31 , 3313–3319. 10.1007/s00464-016-5364-1 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zhang W., Wang Y., Yang L., Wang C. (2020). Suspending Classes Without Stopping Learning: China’s Education Emergency Management Policy in the Covid-19 Outbreak . Multidisciplinary digital publishing institute, J. Risk Finan. Manag. 13 ( 3 ), 1–6. [ Google Scholar ]
  • Zia A., Essa I. (2018). Automated Surgical Skill Assessment in Rmis Training . Int. J. Comput. Assist. Radiol. Surg. 13 , 731–739. 10.1007/s11548-018-1735-5 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Zia A., Sharma Y., Bettadapura V., Sarin E. L., Clements M. A., Essa I. (2015). “ Automated Assessment of Surgical Skills Using Frequency Analysis ,” in International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, October 5–9, 2015 (Springer; ), 430–438. 10.1007/978-3-319-24553-9_53 [ CrossRef ] [ Google Scholar ]
  • Zia A., Sharma Y., Bettadapura V., Sarin E. L., Ploetz T., Clements M. A., et al. (2016). Automated Video-Based Assessment of Surgical Skills for Training and Evaluation in Medical Schools . Int. J. Comput. Assist. Radiol. Surg. 11 , 1623–1636. 10.1007/s11548-016-1468-2 [ PubMed ] [ CrossRef ] [ Google Scholar ]




Designing Pedagogically Effective Haptic Systems for Learning: A Review


1. Introduction

2. Theories of Cognition for Haptics Applications

3. Haptics Applications for Learning

3.1.1. Physics

3.1.2. Biology

3.1.3. Mathematics

3.2. Medicine

3.2.1. Palpatory Diagnosis and Treatment

3.2.2. Surgical Training

3.2.3. Veterinary Medicine

3.2.4. Venipuncture

3.3. Visual Disability Instruction

3.4. Language and Handwriting Acquisition

3.5. Additional Applications in Training

4. Discussion

  • Early implementation of haptic feedback during the learning process creates positive learning outcomes.
  • The more sensitive the haptic device, the better the overall learning outcome and skill gained from the simulation.
  • The designed actions in a simulation must be congruent with the desired learning outcome, and realism is secondary to a well-interpreted action/simulation (higher sensitivity and immersion yield a more embodied experience).
  • Incorporating simple sounds, emphasizing haptic feedback, and incorporating audio guidance help improve the effectiveness of the simulation.
  • Medical simulations benefit from tactile fidelity, higher degrees of realism, and force feedback with relatively larger forces. Incorporating “Haptic Mirrors” [ 24 ], in which the simulation aims to mimic the real situation as closely as possible, is important.
  • Visually disabled users benefit from having another avenue of access to subject material through the phenomenon of embodied cognition.
  • Language and handwriting students benefit from the embodied nature of design and motor-skills training. Partial haptic guidance followed by full haptic guidance is most effective in teaching handwriting skills: partial guidance was more effective for the overall shape of the writing, whereas full guidance was more effective for the fine details [ 54 ].
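As an illustration of the haptic-guidance strategies summarized above, the sketch below renders a trajectory-following guidance force as a virtual spring-damper that pulls the stylus toward a reference point, with a guidance gain in [0, 1] interpolating between partial and full guidance. The function name and gain values are hypothetical and chosen for illustration only; they are not taken from the cited studies.

```python
def guidance_force(pos, vel, target, k=120.0, b=8.0, guidance=1.0):
    """Spring-damper guidance force toward `target`, per axis.

    pos, vel, target: (x, y, z) tuples for stylus position, stylus
    velocity, and the current reference point on the taught trajectory.
    k: stiffness (N/m), b: damping (N·s/m), guidance: 0.0 (no guidance)
    through partial values up to 1.0 (full guidance).
    """
    return tuple(guidance * (k * (t - p) - b * v)
                 for p, v, t in zip(pos, vel, target))
```

At each haptic update tick (typically ~1 kHz on commercial devices), the returned force would be sent to the device; lowering `guidance` lets the learner deviate from the reference path while still feeling a corrective pull.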

5. Conclusions

Author Contributions, Institutional Review Board Statement, Informed Consent Statement, Data Availability Statement, Acknowledgments, Conflicts of Interest

  • Kapoor, S.; Arora, P.; Kapoor, V.; Jayachandran, M.; Tiwaria, M. Haptics—Touch feedback Technology Widening the Horizon of Medicine. J. Clin. Diagn. Res. JCDR 2014 , 8 , 294. [ Google Scholar ] [ PubMed ]
  • Minogue, J.; Jones, M.G. Haptics in Education: Exploring an Untapped Sensory Modality. Rev. Educ. Res. 2006 , 76 , 317–348. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Stone, Z. Controllers Bring Real Pain to VR Games. Available online: https://www.wired.com/story/haptic-controllers-for-vr-bring-real-pain-to-games/ (accessed on 9 September 2019).
  • Han, I.; Black, J.B. Incorporating haptic feedback in simulation for learning physics. ScienceDirect 2011 , 57 , 2281–2290. [ Google Scholar ] [ CrossRef ]
  • Skulmowski, A.; Pradel, S.; Kuhnert, T.; Brunnett, G.; Rey, G.D. Embodied Learning using a tangible user interface: The effects of haptic perception and selective pointing on a spatial learning task. Comput. Educ. 2015 , 92–93 , 64–75. [ Google Scholar ] [ CrossRef ]
  • Hightower, B.; Lovato, S.; Davison, J.; Wartella, E.; Piper, A. Haptic explorers: Supporting science journaling through mobile haptic feedback displays. Int. J. Hum. Comput. Stud. 2019 , 122 , 103–112. [ Google Scholar ] [ CrossRef ]
  • Williams, R.; Meng-Yun, C.; Seaton, J. Haptics-Augmented Simple-Machine Educational Tools. J. Sci. Educ. Technol. 2003 , 12 , 1–12. [ Google Scholar ] [ CrossRef ]
  • Manches, A.; O’malley, C. Tangibles for learning: A representational analysis of physical manipulation. Pers. Ubiquitous Comput. 2012 , 16 , 405–419. [ Google Scholar ] [ CrossRef ]
  • Hayes, J.C.; Kraemer, D.J. Grounded understanding of abstract concepts: The case of STEM learning. Cogn. Res. Princ. Implic. 2017 , 2 . [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Han, I.S. Feel, Imagine and Learn!—Haptic Augmented Simulation and Embodied Instruction in Physics Learning ; Teachers College, Columbia University: ProQuest LLC.: Ann Arbor, MI, USA, 2010. [ Google Scholar ]
  • Paas, F.; Sweller, J. An Evolutionary Upgrade of Cognitive Load Theory: Using the Human Motor System and Collaboration to Support the Learning of Complex Cognitive Tasks. Educ. Psychol. Rev. 2012 , 24 , 27–45. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Wilson, M. Six Views of Embodied Cognition. Psychon. Bull. Rev. 2002 , 9 , 625–636. [ Google Scholar ] [ CrossRef ]
  • Skulmowski, A.; Rey, G.D. Measuring Cognitive Load in Embodied Learning Settings. Front. Psychol. 2017 , 8 , 1191. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Mitra, R.; McNeal, K.S.; Bondell, H.D. Pupillary response to complex interdependent tasks: A cognitive-load theory perspective. Behav. Res. Methods 2017 , 49 , 1905–1919. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • de Jong, T. Cognitive load theory, educational research, and instructional design: Some food for thought. Instr. Sci. 2010 , 38 , 105–134. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Glenberg, A.M.; Gallese, V. Action-based language: A theory of language acquisition, comprehension, and production. Cortex 2012 , 48 , 905–922. [ Google Scholar ] [ CrossRef ]
  • Weisberg, S.M.; Newcombe, N.S. Embodied Cognition and STEM learning: Overview of a topical collection in CR: PI. Cogn. Res. Princ. Implic. 2017 . [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Kontra, C.; Lyons, D.; Fischer, S.; Beilock, S. Physical experience enhances science learning. Psychol. Sci. 2015 , 26 , 737–749. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Jones, G.; Minogue, J.; Tretter, T.; Negishi, A.; Taylor, R. Haptic Augmentation of Science Instruction: Does Touch Matter? Sci. Educ. 2006 , 90 , 111–123. [ Google Scholar ] [ CrossRef ]
  • Jones, G.; Andre, T.; Superfine, R.; Taylor, R. Learning at the Nanoscale: The Impact of Students’ Use of Remote Microscopy of Concepts of Viruses, Scale, and Microscopy. J. Res. Sci. Teach. Off. J. Natl. Assoc. Res. Sci. Teach. 2003 , 40 , 303–322. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Schonborn, K.J.; Bivall, P.; Tibell, L.A. Exploring relationships between students’ interaction and learning with a haptic virtual biomolecular model. Comput. Educ. 2011 , 57 , 2095–2105. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Nathan, M.J.; Walkington, C. Grounded and embodied mathematical cognition: Promoting mathematical insight and proof using action and language. Cogn. Res. Princ. Implic. 2017 , 2 , 9. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Moyer-Packenham, P.; Lammatsch, C.; Litster, K.; Ashby, J.; Bullock, E.; Roxburgh, A.; Shumway, J.; Speed, E.; Covington, E.; Hartmann, C.; et al. How design features in digital math games support learning and mathematics connections. Comput. Hum. Behav. 2019 , 91 , 316–332. [ Google Scholar ] [ CrossRef ]
  • Davis, R.; Martinez, M.; Schneider, O.; MacLean, K.; Okamrua, A.; Blikstein, P. The Haptic Bridge: Towards a Theory for Haptic-Supported Learning ; Association for Computing Machinery: New York, NY, USA, 2017. [ Google Scholar ]
  • Yiannoutsou, N.; Johnson, R.; Price, S. Exploring How Children Interact with 3D Shapes Using Haptic Technologies ; Association for Computing Machinery: New York, NY, USA, 2018. [ Google Scholar ]
  • McWilliams and Malecha. Comparing Intravenous Insertion Instructional Methods with Haptic Simulators. Nurs. Res. Pract. 2017 , 2017 , 4685157. [ Google Scholar ]
  • Zhou, M.; Jones, D.B.; Schwaitzberg, S.D.; Cao, C.G.L. Role of Haptic Feedback and Cognitive Load in Surgical Skill Acquisition. Hum. Factors Ergon. Soc. 2007 , 51 , 631–635. [ Google Scholar ] [ CrossRef ]
  • Dunkin, B.; Adrales, G.L.; Apelgren, K.; Mellinger, J.D. Surgical simulation: A current review. Surg. Endosc. 2006 , 21 , 357–366. [ Google Scholar ] [ CrossRef ]
  • Sidarta, A.; van Vugt, F.T.; Ostry, D.J. Somatosensory working memory in human reinforcement-based motor learning. J. Neurophysiol. 2018 , 120 , 3275–3286. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Karadogan, E.; Williams, R.L. Haptic modules for palpatory diagnosis training of medical students. Virtual Real. 2013 , 17 , 45–58. [ Google Scholar ] [ CrossRef ]
  • Escobar-Castillejos, D.; Noguez, J.; Neri, L.; Magana, A.; Benes, B. A review of simulators with haptic devices for medical training. J. Med. Syst. 2016 , 40 , 104. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Mahoney, L.; Csima, A. Efficiency of palpation in clinical detection of breast cancer. Can. Med. Assoc. J. 1982 , 127 , 729–730. [ Google Scholar ] [ PubMed ]
  • Li, M.; Konstantinova, J.; Secco, E.; Jiang, A.; Liu, H.; Nanayakkara, T.; Senviratne, L.; Dasgupta, P.; Althoefer, K.; Wurdemann, H. Using visual cues to enhance haptic feedback for palpation on virtual model of soft tissue. Med. Biol. Eng. Comput. 2015 , 53 , 1177–1186. [ Google Scholar ] [ CrossRef ] [ PubMed ] [ Green Version ]
  • Overtoom, E.; Horeman, T.; Jansen, F.; Dankelman, J.; Schreuder, H. Haptic Feedback, Force Feedback, and Force-Sensing in Simulation Training for Laparoscopy: A Systematic Overview. J. Surg. Educ. 2018 , 76 , 242–261. [ Google Scholar ] [ CrossRef ]
  • Van der Putten Westebring, E.P. A Sense of Touch in Laparoscopy: Using Augmented Haptic Feedback to Improve Grasp Control. 2011. Available online: http://resolver.tudelft.nl/uuid:beebbcde-0129-4e10-94b3-241609069445 (accessed on 24 January 2011).
  • Vapenstad, C.; Hofstad, E.; Bo, L.; Kuhry, E.; Johnsen, G.; Marvik, R.; Lango, T.; Hernes, T. Lack of transfer of skills after virtual reality simulator training with haptic feedback. Minim. Invasive Ther. Allied Technol. 2017 , 26 , 346–354. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Strom, P.; Hedman, L.; Sarna, L.; Kjellin, A.; Wredmark, T.; Fellander-Tsai, L. Early exposure to haptic feedback enhances performance in surgical simulator training: A prospective randomized crossover study in surgical residents. Surg. Endosc. Other Interv. Tech. 2006 , 20 , 1383–1388. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Kinnison, T.; Forrest, N.; Frean, S.; Baillie, S. Teaching Bovine Abdominal Anatomy: Use of a Haptic Simulator. Anat. Sci. Educ. 2009 , 2 , 280–285. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Parkes, R.; Forrest, N.; Baillie, S. A Mixed Reality Simulator for Feline Abdominal Palpation Training in Veterinary Medicine. Stud. Health Technol. Inform. 2009 , 142 , 244–246. [ Google Scholar ] [ PubMed ]
  • Bateman, A.; Zhao, O.; Bajcsy, A.; Jennings, M.; Toth, B.; Cohen, A.; Horton, E.; Khattat, A.; Kuo, R.; Lee, F.; et al. A user-centered design and analysis of an electrostatic haptic touchscreen system for students with visual impairments. Int. J. Hum. Comput. Stud. 2018 , 109 , 102–111. [ Google Scholar ] [ CrossRef ]
  • Darrah, M. Computer haptics: A new way of increasing access and understanding of math and science for students who are blind and visually impaired. J. Blind. Innov. Res. 2013 , 3 , 3–47. [ Google Scholar ] [ CrossRef ]
  • Bussell, L. Touch Tiles: Elementary Geometry Software with a Haptic and Auditory Interface for Visually Impaired Children. 2003. Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.3.4699&rep=rep1&type=pdf (accessed on 7 February 2020).
  • Avila-Soto, M.; Bahamondez, E.; Schmidt, A. TanMath: A Tangible Math Application to Support Children with Visual Impairment to Learn Basic Arithmetic ; Association for Computing Machinery: New York, NY, USA, 2017. [ Google Scholar ]
  • Murphy, K.; Darrah, M. Haptics-based apps for middle school students with visual impairments. IEEE Trans. Haptics 2015 , 8 , 318–326. [ Google Scholar ] [ CrossRef ]
  • Fernandes, L. The Abacus: A Brief History. Available online: https://www.ee.ryerson.ca/~elf/abacus/history.html (accessed on 31 July 2014).
  • Sanchez, J. Development of navigation skills through audio haptic videogaming in learners who are blind. Procedia Comput. Sci. 2012 , 14 , 102–110. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Sjostrom, C. Using Haptics in Computer Interfaces for Blind People ; Association for Computing Machinery: New York, NY, USA, 2001. [ Google Scholar ]
  • Hansen, E.; Liu, L.; Rogat, A.; Hakkinen, M.; Darrah, M. Designing Innovative Science Assessments That Are Accessible for Students Who Are Blind. J. Blind. Innov. Res. 2016 , 6 , 1. [ Google Scholar ] [ CrossRef ]
  • Roessingh and Bence. Embodied Cognition: Laying the Foundation for Early Language and Literacy Learning. Lang. Lit. 2018 , 20 , 23–39. [ Google Scholar ]
  • Bara, F.; Gentaz, E.; Cole, P. Haptics in learning to read with children from low socio-economic status families. Br. J. Dev. Psychol. 2007 , 25 , 643–663. [ Google Scholar ] [ CrossRef ]
  • Jiao, Y.; Severgnini, F.; Martinez, J.; Jung, J.; Tan, H.; Reed, C.; Wilson, E.; Lau, F.; Israr, A.; Turcott, R.; et al. A Comparative Study of Phoneme- and Word-Based Learning of English Words Presented to the Skin ; Springer: Berlin/Heidelberg, Germany, 2018. [ Google Scholar ]
  • Dunkelberger, N.; Bradley, J.; Sullivan, J.; Israr, A.; Lau, F.; Klumb, K.; Abnousi, F.; O’Malley, M. Improving Perception Accuracy with Multi-Sensory Haptic Cue Delivery ; Springer: Berlin/Heidelberg, Germany, 2018. [ Google Scholar ]
  • Palluel-Germain, R.; Bara, F.; Boisferon, A.; Hennion, B.; Gouagout, P.; Gentaz, E. A visuo-haptic device-telemaque-increases kindergarten children’s handwriting acquisition. In Proceedings of the Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (WHC’07), Tsukuba, Japan, 22–24 March 2007. [ Google Scholar ]
  • Teranishi, A.; Korres, G.; Park, W.; Eid, M. Combining Full and Partial Haptic Guidance Improves Handwriting Skills Development. IEEE Trans. Haptics 2018 , 11 , 509–517. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Xiong, M.; Milleville-Pennel, I.; Dumas, C.; Palluel-Germain, R. Comparing Haptic and Visual Training Method of Learning Chinese Handwriting with a Haptic Guidance. JCP 2013 , 8 , 1815–1820. [ Google Scholar ] [ CrossRef ]
  • Kim, Y.S.; Collins, M.; Bulmer, W.; Sharma, S.; Mayrose, J. Haptics Assisted Training (HAT) System for Children’s Handwriting. In Proceedings of the IEEE World Haptics Conference (WHC’13), Daejeon, Korea, 14–17 April 2013. [ Google Scholar ]
  • Ogawa, D.; Ikeno, S.; Okazaki, R.; Hachisu, T.; Kajimoto, H. Tactile Cue Presentation for Vocabulary Learning with Keyboard ; Association for Computing Machinery: New York, NY, USA, 2014. [ Google Scholar ]
  • Kasahara, S.; Takada, K.; Nishida, J.; Shibata, K.; Shimojo, S.; Lopes, P. Preserving Agency during Electrical Muscle Stimulation Training Speeds up Reaction Time Directly after Removing EMS ; Association for Computing Machinery: New York, NY, USA, 2021. [ Google Scholar ]
  • Erp, J.; Saturday, I.; Jansens, C. Application of Tactile Displays in Sports: Where to, How and When to Move. Available online: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.505.5924&rep=rep1&type=pdf (accessed on 6 July 2021).
  • Mayer, R.E.; Moreno, R. Nine Ways to Reduce Cognitive Load in Multimedia Learning. Educ. Psychol. 2003 , 38 , 43–52. [ Google Scholar ] [ CrossRef ] [ Green Version ]
  • Auksztulewicz, R.; Spitzer, B.; Goltz, D.; Blankenburg, F. Impairing somatosensory working memory using rTMS. Eur. J. Neurosci. 2011 , 34 , 839–844. [ Google Scholar ] [ CrossRef ]
  • Ayelet, S. Do Gestural Interfaces Promote Thinking? Embodied Interaction: Congruent Gestures and Direct-Touch Promote Performance in Math. Ph.D. Thesis, Columbia University, New York, NY, USA, 2011. [ Google Scholar ]
  • Olympiou, G.; Zacharia, Z.C. Blending Physical and Virtual Manipulatives: An Effort to Improve Students’ Conceptual Understanding Through Science Laboratory Experimentation. Sci. Educ. 2011 , 96 , 21–47. [ Google Scholar ] [ CrossRef ]


Navigation:
  • Include easily found, well-defined reference points in the virtual environment.
  • If possible, avoid changing the reference system.

Finding Objects:
  • To reduce instances of the user missing objects, use salient notifiers, such as “enlarged interaction points, magnetic objects, or different surface characteristics”.
  • Provide paths to objects, such as grooves, magnetic lines, or ridges.
  • One may also consider including a “virtual search tool”.

Understanding Objects:
  • When realism is not paramount, allowing the user to feel just the outline of an object is beneficial.
  • Rounded shapes without sharp corners or edges are typically easier to understand.

Haptic Widgets:
  • The finger tends to accelerate when passing a wall or edge.
  • Avoid placing adjacent walls/edges too close in proximity, as they may be missed.

Physical Interaction:
  • The haptic tool/device (stylus, thimble, joystick, etc.) affects the perceived sensation significantly and should be carefully selected.
Learning Domain | Discipline | Source | Outcome of Haptics Augmentation | Best Design Practices
STEMPhysics[ ]Haptic augmented simulations promoted better learning outcomes.The force feedback experienced during haptic simulation was beneficial in physics learning scenarios.
[ ]Students tactilely interacting with physics concepts reported better learning outcomes.Increased embodiment (and related neural activity) is correlated with better learning outcomes.
[ ]The blending of physical and virtual manipulatives provides beneficial learning affordances.Virtual manipulatives can prevent natural error (thus reducing distraction when learning a novel task), alter time, simplify real-world models, provide immediate feedback, focus attention on key concepts, and cut down on experimental time and cost.
Physical manipulatives can help users deal with measurement and ambiguity, invoke the senses for observation, bolster the acquisition of psychomotor skills, allow user to experience certain characteristics of a concrete object, and allow for real-world data analysis.
Biology[ ]Haptic augmentation of a mobile science journal promoted more “on-task” conversation than the control condition.Specific texture fidelity was secondary if the goal of haptic augmentation was to guide attention toward certain parts of an artifact or prompt particular interactions.
The haptic system performed well when children had the option to add each modality in sequence, rather than all at once.
[ ]Students in haptic group had better learning experiences and outcomes.Higher fidelity/higher sensitivity haptic devices promoted the use of more haptic terms to describe the learning material.
[ ]Haptic augmentation is useful for teaching microscale morphology.Full haptic feedback (versus partial) promoted better understanding of the learning material.
[ ]Haptic feedback helps students locate feasible biomolecular docking positions more efficiently.The realistic awareness that haptic augmentation imparts can induce learning benefit when learning about microscale concepts.
Mathematics[ ]Symbolic haptic feedback can be useful in facilitating conceptual understanding.Symbolic haptics are only as effective as the user’s interpretation and understanding of them.
[ ]Mathematical proficiency can be improved through digital games.Linking physical actions with mathematical concepts (e.g., tracing shapes with fingers) promotes positive learning outcomes.
Helpful digital game design features include: providing user with unlimited/multiple attempts, information tutorials and hints, focused constraint, progressive levels, game efficiency, linked representation, and linked physical actions.
[ ]Direct touch interfaces better support visual-motor coordination and promote student use of advanced strategies for problem solving.Actions were found to positively affect cognition when they were congruent with participant thinking (e.g., tapping for discrete counting or sliding for continuous number line estimation).
[ ]Learning potential for haptic augmentation was identified.Haptic feedback provided simultaneously with visual feedback tended to be overlooked (Colavita visual dominance effect).
MedicineOsteopathic[ ]Subjective reports were largely positive across metrics for usability, effectiveness, and clarity. Haptics were viewed favorably due to lower time constraints and fewer consequences.According to subjective reports, the least favorable model was the “Pitting Edema” haptic module. This module was the only one to provide simultaneous haptic and visual feedback. Visual dominance was observed.
Veterinary[ ]Haptic simulation was proven to help students in understanding of bovine abdominal anatomy and 3D visualization.Subjective feedback determined that there was a student desire to be able to feel the haptic simulation with the whole hand as opposed to one finger.
[ ]Positive feedback from veterinarians was encouraging, suggesting that the feline abdominal haptic simulator has a potential role in veterinary student training.The veterinarians suggested that there would be benefits in using the simulator before, and as a complement to, examining live animals, particularly as the instructor can follow the movements “inside” the cat on the computer monitor while directing the trainee and identifying structures palpated.
[ ]Haptic feedback enhances performance during a laparoscopic surgery simulation and is thought to reduce cognitive load.Exaggerated haptics led to faster and more accurate completion of the surgery simulation.
General[ ]Virtual environments provide a viable alternative to traditional methods of medical training.Current haptic technologies are thought to be hindered by their lack of realism.
Different levels of technology (DoF, DoFF, immersion, visual realism) may suit various simulations better; likewise, required level of visual realism varies significantly between different medical simulations.
Haptic simulations may benefit from providing progressive levels.
[ ]Combining haptic and visual feedback led to the best simulation results.Addition of visual modality to the haptic palpation simulation increased user sensitivity by 5% and the positive predictive value by 4%, and decreased tumor detection time by 48.7%
[ ]Haptic simulators were at least equal to the traditional method of teaching IV insertion.The most effective method was concluded to be haptic simulation followed by a physical practice.
[ ]Haptic feedback can lead to improved learning gains for laparoscopic procedures.Haptic feedback is more important for learning complex tasks.
Skill improvement was minor and less pronounced for novice surgeons.
[ ]Haptic augmentation improved skill acquisition.Haptic augmentation experienced in the early training phase of skill acquisition improved learner outcomes optimally.
[ ]Skills acquired through VR haptic simulator did not transfer to a clinical setting.Haptic realism and fidelity were determined to be key factors for positive learning outcomes.
[ ]Haptic augmentation can improve laparoscopic surgical outcomes.Haptic sensors that measure exerted forces should be located in the tip of the grasper tool.
Immediate real-time feedback is required for best results.
[ ]Haptics have proven to be a beneficial tool for improving clinical proficiency, while decreasing medical costs and errors.There is room for improvement on haptic realism.
[ ]Haptic augmentation can be a useful tool for teaching math and science to visually impaired students.Students appreciated the implementation of “self-checks” that gave them feedback on their understanding.
Students expressed desire for changes in screen readers’ voices and pronunciation, button use and placement on the device, and additional instructions.
Visually disabled [ ]Preliminary usability testing and subjective reports were positive.Size of magnetic numbers used as physical icons in the study were thought to be too small to be suitable for tangible perception, thereby requiring prior knowledge of the physical icons.
The user required better guidance during the learning task to properly align the magnetic numbers on the work surface.
[ ]The haptic touchscreen meets the desired accessibility needs of the visually impaired.The best methods of haptic touchscreen exploration were systemic and rapid exploration of the screen. Haptic elements placed near the corners of the screen were more easily located.
[ ]Haptics provide the visually impaired with an additional way to interact with subject material.Best design: sharp corners can be disorienting; some textures are more confusing than others (e.g., grid texture was mistaken for moving square); strength of force feedback was influential for shape identification, with stronger feedback being preferable.
[ ]Subjective student reports indicated student preference for haptic augmentation over other methods of instruction.The 3D experience of Novint Falcon presented difficulties during a particle counting task. Users of the vibrotactile screen also struggled to locate particles during task. Background vibration was thought to impair task success.
[ ]The Audio Haptic Maze (AHM) virtual game was found to improve orientation and mobility skills in visually impaired users.Pure sounds and tones were most effective, as opposed to complex sounds. Simpler shapes were easier to identify via haptic exploration.
[ ]Haptics offer visually impaired learners an additional way of interacting with learning material.Best design: provide well-defined and consistent reference points; connect haptic objects and provide haptic path to objects; provide a virtual search tool; sharp edges and corners are more difficult to feel and understand than rounded shapes; if realism is not necessary, the outline of an object may be more easily understood; replacing boundaries with a magnetic line that pulls user to the center can be helpful; manipulandum can affect haptic sensation.
[ ]Haptic augmentation can be a useful tool for visually impaired students.Inclusion of audio information was a helpful addition to the haptic simulation.
[ ]Performance in letter recognition and initial phoneme identification improved more after haptic intervention than after visual intervention alone.Not available.
Language and handwriting acquisition [ ]Learning and decoding language via haptic symbols on skin is feasible through a wearable device.Not available.
[ ]Fluency of handwriting was higher after visuo-haptic training than for control training.Not available.
[ ]Learning and decoding language via haptic symbols on skin is feasible.A phoneme-based haptic approach provides the most consistent approach for learning.
[ ]Children’s tracing performance on the HAT system was improved for all groups receiving haptic feedback.Children using the high-end device (Phantom Omni) in the standard class outperformed all other groups across all measures.
[ ]Embodied role in language and literacy learning is emphasized. Roessignh and Bence offer a “Play-Based Pedagogy” framework that emphasizes sensorimotor engagement.Not available.
[ ]Haptic handwriting guidance is found to be effective and pleasant for participants.A combination of full and partial haptic guidance resulted in statistically significant improvements in the quality of handwriting. Implementing partial haptic guidance in the early stages of learning and then using full haptic guidance during later stages was found to be the most effective training method. Partial haptic guidance was thought to be effective in teaching the gross shape of handwriting skills, whereas full haptic guidance was thought to be more effective in teaching fine details.
[ ]Implementation of haptic information showed significant improvement for the transfer of shape learning.The combination of visual and haptic information provided the best learning results, as opposed to only visual. It was concluded that visual information may benefit gross shape learning, while haptic information may benefit transfer of shape learning by creating an internal model of the shape of each stroke separately.
[ ]It was found that the words learned with haptic cues lead to more effective knowledge retention measured one week after training.Providing vibrotactile feedback to fingers according to the QWERTY keyboard layout led to better retention of new vocabulary learned.

Crandall, R.; Karadoğan, E. Designing Pedagogically Effective Haptic Systems for Learning: A Review. Appl. Sci. 2021 , 11 , 6245. https://doi.org/10.3390/app11146245



Demonstration of Wheeler: A Three-Wheeled Input Device for Usable, Efficient, and Versatile Non-Visual Interaction

Navigating multi-level menus with complex hierarchies remains a significant challenge for blind and low-vision users, who predominantly use screen readers to interact with computers. To address this, we demonstrate Wheeler, a three-wheeled input device with two side buttons that speeds up navigation of complex multi-level hierarchies in common applications. When in operation, each of Wheeler’s three wheels is mapped to a different level in the application hierarchy. Each level can be traversed independently using its designated wheel, allowing users to navigate through multiple levels efficiently. Wheeler’s three wheels can also be repurposed for other tasks, such as 2D cursor manipulation. In this demonstration, we describe Wheeler’s operation modes and usage.

1. Introduction

Navigating the complex hierarchies in modern desktop applications remains one of the key accessibility challenges for blind and low-vision users, who use a combination of keyboard and screen readers (e.g., NVDA (NVA, 2020), JAWS (jaw, 2018), VoiceOver (Apple Inc., 2020)) to interact with computers. Recent studies have shown that apps requiring a higher average number of keystrokes for navigation are perceived as less accessible (Islam et al., 2023). As such, developing a faster mechanism to travel between menu items in an app is necessary.

While prior research has proposed alternate input modalities with faster task completion times in specific scenarios (Billah et al., 2017; Lee et al., 2020a, b), the challenge of navigating to UI items that belong to a different sub-tree remains. To address this, we design and implement Wheeler (Islam et al., 2024), a three-wheeled, mouse-shaped, stationary input device whose three wheels can be mapped to three different levels in an app’s internal hierarchy, enabling faster and more convenient navigation. The three wheels also offer versatility, such as the ability to manipulate the cursor in 2D space.

2. Wheeler: an Overview

Wheeler is a mouse-shaped input device with three wheels and two side push buttons, as shown in Figure 1a. Unlike a mouse, Wheeler is stationary, i.e., users do not move it on the surface when using it. A user can grip the device with their right hand so that their index finger rests on the first wheel, the middle finger on the second wheel, the ring finger on the third wheel, and the thumb over the two buttons, as shown in Figure 1b. Of the two side buttons, the bigger one plays the role of a mouse left/primary click, and the smaller one plays the role of a mouse right/secondary click.

In our design, Wheeler connects to a computer via USB, but a Bluetooth wireless connection is feasible. Wheeler provides audio-haptic feedback to convey cursor context. It has a buzzer and haptic motor; the buzzer beeps during significant events, and the haptic motor vibrates with each rotation. These do not interfere with screen reader audio.

3. Interaction Using Wheeler

Wheeler primarily operates in two modes: H-nav and 2d-nav .

H-nav Mode. In H-nav mode, Wheeler navigates an app’s abstract UI tree (Figure  2 ). By default, Wheeler’s three wheels point to the top three levels of an app’s DOM, each with its own cursor and state. A wheel remembers the last UI object focused on and resumes from there, eliminating the need to re-explore the hierarchy.

The rotate action selects elements bi-directionally. Wheel-1 selects elements in the 1st level, Wheel-2 selects children of Wheel-1’s selection, and Wheel-3 selects children of Wheel-2’s selection. When Wheel-1’s cursor moves to a node, Wheel-2’s cursor moves to the first child of Wheel-1’s node, and Wheel-3’s cursor moves to the first child of Wheel-2’s node. Figure 2c shows the hierarchical organization and mapping in Wheeler.
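The cascading selection above can be sketched in a few lines. This is a minimal illustration under our own names (`Node`, `HNav` are not from the Wheeler firmware): each wheel scrolls a sibling list at its level, and moving a shallower cursor resets all deeper cursors to the first child of the newly focused node.

```python
class Node:
    """A UI element in the app's abstract UI tree."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

class HNav:
    """Sketch of H-nav: one cursor per wheel, one wheel per tree level."""
    def __init__(self, root):
        self.root = root
        self.cursors = [0, 0, 0]  # cursors[k]: index selected by wheel k

    def _siblings(self, level):
        """Sibling list that wheel `level` scrolls through."""
        nodes = self.root.children
        for k in range(level):
            nodes = nodes[self.cursors[k]].children
        return nodes

    def rotate(self, level, direction):
        """direction is +1 or -1; rotation wraps around the sibling list."""
        sibs = self._siblings(level)
        if not sibs:
            return None
        self.cursors[level] = (self.cursors[level] + direction) % len(sibs)
        # deeper wheels snap back to the first child of the new selection
        for k in range(level + 1, len(self.cursors)):
            self.cursors[k] = 0
        return self.focused(level)

    def focused(self, level):
        sibs = self._siblings(level)
        return sibs[self.cursors[level]].name if sibs else None
```

For example, with a menu tree whose first level is File/Edit, rotating Wheel-1 from "File" to "Edit" would move Wheel-2's cursor to Edit's first child.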

For left-/right-clicks, the user presses the primary/secondary side buttons. Users can define rotation resolution (degrees) to adjust sensitivity. Wheeler provides audio-haptic feedback for valid operations and spatial information.

In H-nav mode, Wheeler’s firmware integrates with NVDA, an open-source screen reader, as an NVDA plugin (Momotaz et al., 2021). The plugin accesses any app’s UI hierarchy through NVDA’s APIs, which in turn use Windows’ native UI Automation API (Microsoft Inc., 2020) to extract the UI tree, and relays Wheeler’s rotational input.

Traversing Apps with More than 3 Levels. For applications with more than 3 levels, users can move all three cursors down one level in the hierarchy by holding the CTRL key and pressing Wheeler’s primary button. Similarly, to move all three cursors up one level, users can hold the CTRL key and press Wheeler’s secondary button.
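The level-shifting behavior can be modeled as clamping the tree depth that Wheel-1 maps to, so that all three wheels always point at valid levels. A minimal sketch with assumed names (`shift_levels` is ours, not Wheeler's):

```python
def shift_levels(base, delta, max_depth):
    """Shift the hierarchy window mapped to the three wheels.

    base: tree depth that Wheel-1 currently maps to.
    delta: +1 for CTRL+primary (move all cursors down one level),
           -1 for CTRL+secondary (move all cursors up one level).
    max_depth: total number of levels in the app's hierarchy.
    The base level is clamped so the three-wheel window stays in range.
    """
    return max(0, min(base + delta, max_depth - 3))
```

In a 5-level app, for instance, the window can slide down twice before the third wheel reaches the deepest level, after which further CTRL+primary presses have no effect.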

2d-nav Mode. In 2d-nav mode, the wheels serve different roles: Wheel-1 moves the cursor along the X-axis , Wheel-2 moves it along the Y-axis , and Wheel-3 controls the cursor speed. Figure  3 demonstrates a blind user moving the cursor from the lower-left to the upper-right corner of a 2D screen. The user can rotate Wheel-3 to adjust cursor speed.

Navigating 2D space can cause context loss for visually impaired users  (Islam and Billah, 2023 ) . To address this, pressing the CTRL key in 2d-nav mode prompts Wheeler to read out the cursor location as a percentage of the screen’s width and height. For example, if the cursor is above the ‘‘Google Chrome’’ icon in Figure  3 , Wheeler would announce something like “30% from the left and 10% from the top” . Additionally, Wheeler’s built-in TTS engine automatically reads out the name of a UI element on cursor hover.
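The percentage readout can be sketched as a small helper, assuming a top-left screen origin (the function name and signature are ours, not Wheeler's):

```python
def location_announcement(x, y, screen_w, screen_h):
    """Describe cursor position as percentages of screen width/height."""
    pct_left = round(100 * x / screen_w)
    pct_top = round(100 * y / screen_h)
    return f"{pct_left}% from the left and {pct_top}% from the top"
```

On a 1920x1080 screen, a cursor at (576, 108) would be announced as "30% from the left and 10% from the top", matching the example in the text.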

2d-T-nav Mode. 2d-T-nav is a variant of 2d-nav mode in which Wheeler teleports the mouse cursor to the nearest neighboring UI element in the direction of cursor movement. This method is faster than 2d-nav for moving between elements.
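One plausible way to implement the teleport step, sketched under our own simplifying assumptions (UI elements as named points, movement as a direction vector; none of these names come from Wheeler): among elements whose displacement from the cursor has a positive component along the movement direction, jump to the nearest one.

```python
import math

def teleport(cursor, direction, elements):
    """Return the nearest element in the direction of cursor movement.

    cursor: (x, y) cursor position.
    direction: (dx, dy) movement direction vector.
    elements: {name: (x, y)} centers of on-screen UI elements.
    """
    cx, cy = cursor
    dx, dy = direction
    best, best_dist = None, math.inf
    for name, (ex, ey) in elements.items():
        vx, vy = ex - cx, ey - cy
        if vx * dx + vy * dy <= 0:   # behind or orthogonal: not a candidate
            continue
        dist = math.hypot(vx, vy)
        if dist < best_dist:
            best, best_dist = name, dist
    return best
```

A real implementation would draw the candidate set from the UI Automation tree rather than a dictionary, but the nearest-in-direction selection is the core idea.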

Toggling Modes. To toggle between H-nav and 2d-nav modes, users can hold the CTRL button and simultaneously press both the primary and secondary buttons of Wheeler. When in 2d-nav mode, users can enable or disable 2d-T-nav mode by pressing and holding the secondary (small) button for a short duration (e.g., 300 ms).
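The toggling rules above can be captured in a small state holder. This is a hedged sketch with our own event names (`on_chord`, `on_secondary_release`); the 300 ms threshold is the example value from the text, not a confirmed firmware constant.

```python
LONG_PRESS_MS = 300  # example threshold from the text

class ModeState:
    """Tracks Wheeler's active mode and the 2d-T-nav toggle."""
    def __init__(self):
        self.mode = "H-nav"
        self.teleport = False

    def on_chord(self, ctrl, primary, secondary):
        """CTRL + both side buttons toggles H-nav <-> 2d-nav."""
        if ctrl and primary and secondary:
            self.mode = "2d-nav" if self.mode == "H-nav" else "H-nav"

    def on_secondary_release(self, held_ms):
        """A long press of the secondary button toggles 2d-T-nav,
        but only while in 2d-nav mode."""
        if self.mode == "2d-nav" and held_ms >= LONG_PRESS_MS:
            self.teleport = not self.teleport
```

A short tap of the secondary button (below the threshold) still delivers its normal right-click role, so the long-press toggle does not conflict with clicking.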

  • JAWS (2018). What’s New in JAWS 2018 Screen Reading Software. Retrieved September 19, 2018 from https://www.freedomscientific.com/downloads/JAWS/JAWSWhatsNew
  • NV Access (2020). NV Access. https://www.nvaccess.org/. Accessed 09/20/2018.
  • Apple Inc. (2020). VoiceOver. https://www.apple.com/accessibility/osx/voiceover/
  • Billah et al. (2017). Syed Masum Billah, Vikas Ashok, Donald E. Porter, and I. V. Ramakrishnan. 2017. Speed-Dial: A Surrogate Mouse for Non-Visual Web Browsing. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility. ACM, 110–119. https://doi.org/10.1145/3132525.3132531
  • Islam and Billah (2023). Md Touhidul Islam and Syed Masum Billah. 2023. SpaceX Mag: An Automatic, Scalable, and Rapid Space Compactor for Optimizing Smartphone App Interfaces for Low-Vision Users. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7, 2 (2023), 1–36.
  • Islam et al. (2023). Md Touhidul Islam, Donald E. Porter, and Syed Masum Billah. 2023. A Probabilistic Model and Metrics for Estimating Perceived Accessibility of Desktop Applications in Keystroke-Based Non-Visual Interactions. In The 2023 CHI Conference on Human Factors in Computing Systems (CHI ’23). ACM. https://doi.org/10.1145/3544548.3581400
  • Islam et al. (2024). Md Touhidul Islam, Noushad Sojib, Imran Kabir, Ashiqur Rahman Amit, Mohammad Ruhul Amin, and Syed Masum Billah. 2024. Wheeler: A Three-Wheeled Input Device for Usable, Efficient, and Versatile Non-Visual Interaction. In The 37th Annual ACM Symposium on User Interface Software and Technology. ACM, Pittsburgh, PA, USA. https://doi.org/10.1145/3654777.3676396
  • Lee et al. (2020a). Hae-Na Lee, Vikas Ashok, and I. V. Ramakrishnan. 2020. Repurposing Visual Input Modalities for Blind Users: A Case Study of Word Processors. In 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE, 2714–2721. https://doi.org/10.1109/SMC42975.2020.9283015
  • Lee et al. (2020b). Hae-Na Lee, Vikas Ashok, and I. V. Ramakrishnan. 2020. Rotate-and-Press: A Non-visual Alternative to Point-and-Click?. In HCI International 2020 – Late Breaking Papers: Universal Access and Inclusive Design, Constantine Stephanidis, Margherita Antona, Qin Gao, and Jia Zhou (Eds.). Springer International Publishing, Cham, 291–305.
  • Microsoft Inc. (2020). UI Automation Overview. http://msdn.microsoft.com/en-us/library/ms747327.aspx
  • Momotaz et al. (2021). Farhani Momotaz, Md Touhidul Islam, Md Ehtesham-Ul-Haque, and Syed Masum Billah. 2021. Understanding Screen Readers’ Plugins. In The 23rd International ACM SIGACCESS Conference on Computers and Accessibility. ACM, 1–10. https://doi.org/10.1145/3441852.3471205


Beyond Gaming: Setting the Stage for Haptic Tech to Change Our Lives


Photo caption: Concertgoers get fitted for haptic suits created for the deaf by Music: Not Impossible during an outdoor concert at Lincoln Center on July 22, 2023, in New York City. The violins reverberate in the ribcage, while cello and bass are felt a little further down, with horns in the shoulders and, more often than not, soloists in the wrists. That is one way audio expert Patrick Hanlon programs wireless haptic suits designed to enable the deaf or hard of hearing to experience orchestral music, as initiatives to improve inclusivity at live music performances break new ground. At the Lincoln Center concert, audience members tried vests featuring 24 points of vibration that translate the music onstage. (Photo by Angela Weiss/AFP via Getty Images)

Haptic technology creates a realistic sensation of touch in devices. It is part of the foundation of virtual, augmented, and extended reality (VR, AR, and XR) experiences, bringing the realism of touch, and in some cases even taste and smell, to an XR experience.

A new report from Fortune Business Insights projects that the haptic technology market will reach $7.31 billion by 2030, pointing to the increasing need for complete immersion in consumer electronics.

In 2011, at the IEEE World Haptics Conference, researchers predicted that advanced tactile capabilities would be generally available by 2020 and that users would be able to touch and manipulate what they saw on screens and feel shape, texture, and softness.

Ten years ago, Fujitsu Labs debuted the next generation of touch: a prototype haptic sensory tablet shown at Mobile World Congress in 2014. Fujitsu's haptic sensor technology could simulate 3D geometric features such as bumps, ridges, edges, and protrusions on touchscreen surfaces.

It's been a long time coming.

A scientific paper published in Nature this past May noted that haptic feedback technology is still in its infancy, and that bridging the gap between haptic technology and the real world to enable ambient haptic feedback on a physical surface remains a challenge in human-computer interaction. The paper described research on an active electronic (AE) skin intended as an interface for ambient haptic feedback on physical surfaces.


Shifting haptic landscape

“As a society, we have been enraptured by the wonders of immersive entertainment, and incorporating haptics and tactile feedback is one way to engage senses beyond sight and hearing, offering deeper immersion in connected experiences," said Philippe Guillotel, distinguished scientist at InterDigital. "Over the coming years, haptic devices are set to explode due to growing demand from the addressable market, including the likes of TVs, game consoles, smartphones and headphones, now that there is a push towards a more homogenous market."

Guillotel says haptics have advanced since 2021. "When MPEG established haptics as a 'first-order media type,' it effectively promoted haptics to the same level as audio and video," he said.

Earlier this year, in April 2024, SenseGlove announced its wireless VR gloves, which feature feedback in the palm. The gloves combine three haptic feedback technologies in a wireless compact design: active contact feedback to feel palm impacts and grasping sensations, force feedback to feel the size and stiffness of virtual objects, and vibrotactile feedback to feel cues and basic textures.

The company says the glove doesn't restrict finger movements around virtual objects but encompasses the whole palm, enabling users to feel a range of interactions, from breaking an egg to shaking hands.

Live sports

Guillotel says haptics add another layer to immersive visuals and sensory experiences, helping everyone engage with content or even live experiences.

An example of the power of haptics is the sensory shirts used by Newcastle Football Club. Newcastle FC embedded haptics into shirts for deaf fans so they could feel the sounds of the crowd at the match. The shirts use conductive textiles with haptic modules integrated into the fabric. Broadcast microphones capture the sound around the pitch, and software converts it from analog to digital, transforming the crowd noise into touch data that is wirelessly transmitted to the shirt in real time via an antenna.
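The sound-to-touch pipeline described above (microphone, analog-to-digital conversion, touch data) can be caricatured as mapping the loudness of short audio windows to vibration intensities. The sketch below is illustrative only, with assumed window size and intensity scale; it is not the actual Newcastle FC implementation.

```python
# Illustrative sketch: convert a block of audio samples into per-window
# haptic intensities by measuring each window's RMS loudness.

import math

def audio_to_haptics(samples, window=4, max_level=255):
    """Map the RMS loudness of each audio window to a vibration level 0-255."""
    levels = []
    for i in range(0, len(samples), window):
        chunk = samples[i:i + window]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        levels.append(min(max_level, round(rms * max_level)))
    return levels

# Quiet crowd murmur followed by a goal roar (samples normalized to [-1, 1]):
print(audio_to_haptics([0.1, -0.1, 0.1, -0.1, 0.9, -0.9, 0.9, -0.9]))
```

A real system would stream such per-window levels to the vibration motors continuously, so the fan feels the roar swell and fade with the crowd.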

On August 20, 2024, D-BOX Technologies, known for its motion-enabled cinema seats, announced the renewal of a three-year licensing rights agreement with the Fédération Internationale de l'Automobile (FIA). The company offers a range of haptic technology solutions, including sim racing and immersive training.

Human connection, STEM learning and military applications

"Social communication will benefit from haptics," said Guillotel. "This is not only for people who might be hard of hearing, as demonstrated by the Newcastle FC experience, but for more traditional non-verbal communication, where haptic-enhanced messages, pictures, or sounds could be the next wave."

In 2023, researchers at NTT Docomo, Keio University, and the Nagoya Institute of Technology developed sensation-sharing technology that lets users digitally send movement or tactile sensations, such as the texture of a fabric or the feel of shaping clay, to another person miles away. The researchers said this adds another layer to human communication.

STEM learning

The tablet has transformed how teachers can use technology to teach science, technology, engineering, and math (STEM) in early childhood classrooms. However, one challenge for mobile devices in STEM learning is the lack of sensory information.

A study from the University of Illinois in 2024 analyzed 12 papers covering haptic or tactile learning applications for children between the ages of three and 18. These included handwriting, reading, STEM education, and collaborative learning.

The SpARklingPaper device combines visual feedback from a tablet with the real-world tactile experience of a pen and paper. Another educational haptic device, Phantom Omni, uses force feedback to help visually impaired children interact with 3D shapes.

The study highlights handwriting as an area where haptic technology can impact learning strategies.

Amal Hatira, lead author of the study published in Advanced Intelligent Systems, said feedback aids in fine motor control and handwriting proficiency and gives guidance for learners to improve their writing abilities. Hatira also said geometry/spatial recognition and collaborative education environments are other areas where haptic technology can benefit students.

Military applications

In 2021, the US military announced it was investing in haptic technology for training. Earlier this year, Meta licensed new haptic technology and signed a deal with Immersion Corporation to leverage its patents to enhance Meta's XR hardware, software, and products. Immersion Corporation's patented technology increases realism and immersion during an AR/VR/MR experience, such as training, where haptics can simulate real-world assets.

Growth through standardization

However, InterDigital's Guillotel says a lack of standards leads to fragmentation and low adoption of new technologies, because standards define interoperability between platforms and vendors.

"Currently, haptics standards are fragmented, which drives up developer costs and lowers adoption rates. Like audio and visual, haptics need standards to ensure all aspects involved in the end-to-end delivery of the technology are compatible," said Guillotel.

"MPEG has been key in the development of immersive media and has adopted haptics as a recognized first-order media type across media format files, which includes the widely used .3gp and .mp4 video formats. Streaming protocols have also been extended to support the transport of haptics," said Guillotel.

Personal electronics

"The haptics market is expected to reach more than 4.1 billion haptic-enabled devices by the end of 2024 and to grow further to 6.7 billion devices by 2028," said Guillotel. "Ultimately, it's personal electronics that will drive this boom. Smartphones currently account for 79% of haptic-enhanced audio-visual entertainment devices, but they will represent only 59% of haptic devices by 2028 as the mix of other haptic-equipped products increases."

"Today, haptics are viewed as an enhancement to content rather than fully immersive, but as standards are increasingly ratified, this will change," said Guillotel. "In XR, haptics provide an opportunity to create realistic scenarios where this tactile feedback allows the user to utilize all their senses."

Extended reality

"Extended Reality is a key market for haptics, and by 2028, we expect to see some major leaps in the headset market resulting in a renewed excitement for VR experiences, which has a knock-on effect on haptic devices," he added.

Guillotel believes the cost is holding back growth.

"XR gloves with haptics embedded can cost more than $2,000. And, because the diverse device ecosystem prevents interoperability without added configuration, developers do not invest in high-quality haptic performance due to higher costs and time to implement," he added.

Guillotel says there are a few things to be excited about once we jump over the cost hurdle.

"It should be feasible to broadcast or stream fully immersive experiences using haptics, and haptics could enhance consumer devices such as headphones or smartphones," said Guillotel. "Think about the haptic emoji."

Jennifer Kite-Powell



HAPTIC TECHNOLOGY

J. Hari Hara Sudhan

Abstract: Engineering finds a wide range of applications in every field, and the medical field is no exception. One technology that aids surgeons in performing even the most complicated surgeries successfully is virtual reality (VR). Yet even when VR is employed to carry out operations, the surgeon's attention remains one of the most important parameters: any mistake may lead to a dangerous outcome. One may therefore look for a technology that reduces the surgeon's burden by providing a more efficient interaction than VR alone. That idea has become reality through "haptic technology". Haptics is the "science of applying tactile sensation to human interaction with computers".

Keywords: Virtual Reality, Technology, Burdens, Interaction, Human Interaction

Title: HAPTIC TECHNOLOGY

Author: J.HARI HARA SUDHAN

International Journal of Engineering Research and Reviews

ISSN 2348-697X (Online)

Research Publish Journals

Vol. 2, Issue 2, April - June 2014


