
  • Review Article
  • Open access
  • Published: 12 February 2024

Education reform and change driven by digital technology: a bibliometric study from a global perspective

  • Chengliang Wang 1,
  • Xiaojiao Chen 1,
  • Teng Yu (ORCID: orcid.org/0000-0001-5198-7261) 2,3,
  • Yidan Liu 1,4 &
  • Yuhui Jing 1

Humanities and Social Sciences Communications, volume 11, Article number: 256 (2024)


  • Development studies
  • Science, technology and society

Amidst the global digital transformation of educational institutions, digital technology has emerged as a significant area of interest among scholars. Such technologies have played an instrumental role in enhancing learner performance and improving the effectiveness of teaching and learning. These digital technologies also ensured the sustainability and stability of education during the COVID-19 pandemic. Despite this, a dearth of systematic reviews exists regarding the current state of digital technology application in education. To address this gap, this study utilized the Web of Science Core Collection as a data source (specifically selecting the high-quality SSCI and SCIE indexes) and implemented a topic search by setting keywords, yielding 1849 initial publications. Furthermore, following the PRISMA guidelines, we refined the selection to 588 high-quality articles. Using software tools such as CiteSpace, VOSviewer, and Charticulator, we reviewed these 588 publications to identify core authors (such as Selwyn, Henderson, Edwards), highly productive countries/regions (England, Australia, USA), key institutions (Monash University, Australian Catholic University), and crucial journals in the field (Education and Information Technologies, Computers & Education, British Journal of Educational Technology). Evolutionary analysis reveals four developmental periods in the research field of digital technology education application: the germination period, the initial development period, the critical exploration period, and the acceleration period of change. The study highlights the dual influence of technological factors and historical context on the research topic. Technology is a key factor enabling education to transform and upgrade, and the context of the times is an important driving force promoting the adoption of new technologies in the education system and the transformation and upgrading of education.
Additionally, the study identifies three frontier hotspots in the field: physical education, digital transformation, and professional development driven by digital technology. This study presents a clear framework for digital technology application in education, which can serve as a valuable reference for researchers and educational practitioners concerned with digital technology education application in theory and practice.


Introduction

Digital technology has become an essential component of modern education, facilitating the extension of temporal and spatial boundaries and enriching the pedagogical contexts (Selwyn and Facer, 2014 ). The advent of mobile communication technology has enabled learning through social media platforms (Szeto et al. 2015 ; Pires et al. 2022 ), while the advancement of augmented reality technology has disrupted traditional conceptions of learning environments and spaces (Perez-Sanagustin et al., 2014 ; Kyza and Georgiou, 2018 ). A wide range of digital technologies has enabled learning to become a norm in various settings, including the workplace (Sjöberg and Holmgren, 2021 ), home (Nazare et al. 2022 ), and online communities (Tang and Lam, 2014 ). Education is no longer limited to fixed locations and schedules, but has permeated all aspects of life, allowing learning to continue at any time and any place (Camilleri and Camilleri, 2016 ; Selwyn and Facer, 2014 ).

The advent of digital technology has led to the creation of several informal learning environments (Greenhow and Lewin, 2015 ) that exhibit divergent form, function, features, and patterns in comparison to conventional learning environments (Nygren et al. 2019 ). Consequently, the associated teaching and learning processes, as well as the strategies for the creation, dissemination, and acquisition of learning resources, have undergone a complete overhaul. The ensuing transformations have posed a myriad of novel issues, such as the optimal structuring of teaching methods by instructors and the adoption of appropriate learning strategies by students in the new digital technology environment. Consequently, an examination of the principles that underpin effective teaching and learning in this environment is a topic of significant interest to numerous scholars engaged in digital technology education research.

Over the course of the last two decades, digital technology has made significant strides in the field of education, notably in extending education time and space and creating novel educational contexts with sustainability. Despite research attempts to consolidate the application of digital technology in education, previous studies have only focused on specific aspects of digital technology, such as Pinto and Leite’s ( 2020 ) investigation into digital technology in higher education and Mustapha et al.’s ( 2021 ) examination of the role and value of digital technology in education during the pandemic. While these studies have provided valuable insights into the practical applications of digital technology in particular educational domains, they have not comprehensively explored the macro-mechanisms and internal logic of digital technology implementation in education. Additionally, these studies were conducted over a relatively brief period, making it challenging to gain a comprehensive understanding of the macro-dynamics and evolutionary process of digital technology in education. Some studies have provided an overview of digital education from an educational perspective but lack a precise understanding of technological advancement and change (Yang et al. 2022 ). Therefore, this study seeks to employ a systematic scientific approach to collate relevant research from 2000 to 2022, comprehend the internal logic and development trends of digital technology in education, and grasp the outstanding contribution of digital technology in promoting the sustainability of education in time and space. In summary, this study aims to address the following questions:

RQ1: Since the turn of the century, what is the productivity distribution of the field of digital technology education application research in terms of authorship, country/region, institutional and journal level?

RQ2: What is the development trend of research on the application of digital technology in education in the past two decades?

RQ3: What are the current frontiers of research on the application of digital technology in education?

Literature review

Although the term “digital technology” has become ubiquitous, scholars have yet to agree on a unified definition, because the meaning of the term is closely tied to its specific context. Within the educational research domain, Selwyn’s (2016) definition is widely favored by scholars (Pinto and Leite, 2020). Selwyn (2016) provides a comprehensive view of various concrete digital technologies and their applications in education through ten specific cases, such as immediate feedback in classes, orchestrating teaching, and community learning. Through these specific application scenarios, Selwyn (2016) argues that digital technology encompasses technologies associated with digital devices, including but not limited to tablets, smartphones, computers, and social media platforms (such as Facebook and YouTube). Furthermore, the behavior of accessing the internet at any location through portable devices can be taken as an extension of applying digital technology.

The evolving nature of digital technology has significant implications in the field of education. In the 1990s, the focus of digital technology in education was on comprehending the nuances of digital space, digital culture, and educational methodologies, with its connotations aligned more towards the idea of e-learning. The advent and subsequent widespread usage of mobile devices since the dawn of the new millennium have been instrumental in the rapid expansion of the concept of digital technology. Notably, mobile learning devices such as smartphones and tablets, along with social media platforms, have become integral components of digital technology (Conole and Alevizou, 2010; Batista et al. 2016). In recent times, the burgeoning application of AI technology in the education sector has played a vital role in enriching the digital technology lexicon (Banerjee et al. 2021). ChatGPT, for instance, is identified as a novel educational technology that has immense potential to revolutionize future education (Rospigliosi, 2023; Arif, Munaf and Ul-Haque, 2023).

Pinto and Leite ( 2020 ) conducted a comprehensive macroscopic survey of the use of digital technologies in the education sector and identified three distinct categories, namely technologies for assessment and feedback, mobile technologies, and Information Communication Technologies (ICT). This classification criterion is both macroscopic and highly condensed. In light of the established concept definitions of digital technology in the educational research literature, this study has adopted the characterizations of digital technology proposed by Selwyn ( 2016 ) and Pinto and Leite ( 2020 ) as crucial criteria for analysis and research inclusion. Specifically, this criterion encompasses several distinct types of digital technologies, including Information and Communication Technologies (ICT), Mobile tools, eXtended Reality (XR) Technologies, Assessment and Feedback systems, Learning Management Systems (LMS), Publish and Share tools, Collaborative systems, Social media, Interpersonal Communication tools, and Content Aggregation tools.

Methodology and materials

Research method: bibliometrics

Quantitative measurement of literature has long been present in various aspects of human production and life, yet it long lacked systematic scientific theoretical guidance and remained disorganized. In 1969, British scholar Pritchard (1969) proposed “bibliometrics,” which subsequently emerged as an independent discipline in scientific quantification research. Initially, Pritchard defined bibliometrics as “the application of mathematical and statistical methods to books and other media of communication”; however, the definition was not entirely rigorous. To remedy this, Hawkins (2001) expanded Pritchard’s definition to “the quantitative analysis of the bibliographic features of a body of literature.” De Bellis further clarified the objectives of bibliometrics, stating that it aims to analyze and identify patterns in literature, such as the most productive authors, institutions, countries, and journals in scientific disciplines, trends in literary production over time, and collaboration networks (De Bellis, 2009). According to Garfield (2006), bibliometric research enables the examination of the history and structure of a field, the flow of information within the field, the impact of journals, and the citation status of publications over a longer time scale. All of these definitions illustrate the unique role of bibliometrics as a research method for evaluating specific research fields.

This study uses CiteSpace, VOSviewer, and Charticulator to analyze data and create visualizations. Each of these three tools has its own strengths and can complement each other. CiteSpace and VOSviewer use set theory and probability theory to provide various visualization views in fields such as keywords, co-occurrence, and co-authors. They are easy to use and produce visually appealing graphics (Chen, 2006 ; van Eck and Waltman, 2009 ) and are currently the two most widely used bibliometric tools in the field of visualization (Pan et al. 2018 ). In this study, VOSviewer provided the data necessary for the Performance Analysis; Charticulator was then used to redraw using the tabular data exported from VOSviewer (for creating the chord diagram of country collaboration); this was to complement the mapping process, while CiteSpace was primarily utilized to generate keyword maps and conduct burst word analysis.

Data retrieval

This study selected documents from the Science Citation Index Expanded (SCIE) and Social Science Citation Index (SSCI) in the Web of Science Core Collection as the data source, for the following reasons:

(1) The Web of Science Core Collection, as a high-quality digital literature resource database, has been widely accepted by many researchers and is currently considered the most suitable database for bibliometric analysis (Jing et al. 2023a ). Compared to other databases, Web of Science provides more comprehensive data information (Chen et al. 2022a ), and also provides data formats suitable for analysis using VOSviewer and CiteSpace (Gaviria-Marin et al. 2019 ).

(2) The application of digital technology in the field of education is an interdisciplinary research topic, involving technical knowledge literature belonging to the natural sciences and education-related literature belonging to the social sciences. Therefore, it is necessary to select Science Citation Index Expanded (SCIE) and Social Science Citation Index (SSCI) as the sources of research data, ensuring the comprehensiveness of data while ensuring the reliability and persuasiveness of bibliometric research (Hwang and Tsai, 2011 ; Wang et al. 2022 ).

After establishing the source of research data, it is necessary to determine a retrieval strategy (Jing et al. 2023b ). The choice of a retrieval strategy should consider a balance between the breadth and precision of the search formula. That is to say, it should encompass all the literature pertaining to the research topic while excluding irrelevant documents as much as possible. In light of this, this study has set a retrieval strategy informed by multiple related papers (Mustapha et al. 2021 ; Luo et al. 2021 ). The research by Mustapha et al. ( 2021 ) guided us in selecting keywords (“digital” AND “technolog*”) to target digital technology, while Luo et al. ( 2021 ) informed the selection of terms (such as “instruct*,” “teach*,” and “education”) to establish links with the field of education. Then, based on the current application of digital technology in the educational domain and the scope of selection criteria, we constructed the final retrieval strategy. Following the general patterns of past research (Jing et al. 2023a , 2023b ), we conducted a specific screening using the topic search (Topics, TS) function in Web of Science. For the specific criteria used in the screening for this study, please refer to Table 1 .
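As a minimal sketch, the retrieval strategy described above can be expressed as a Web of Science topic-search (TS) string. The keyword lists below are illustrative stand-ins assembled from the terms mentioned in the text; the study's actual screening criteria are given in Table 1.

```python
# Sketch of a Web of Science topic-search (TS) expression that joins a
# technology term group with an education term group, as described in the
# retrieval strategy above. The exact term lists are illustrative.
tech_terms = ['"digital technolog*"']
edu_terms = ['instruct*', 'teach*', 'education*', 'learn*']

def build_ts_query(tech, edu):
    """OR-join each term group, then AND the two groups together."""
    tech_clause = " OR ".join(tech)
    edu_clause = " OR ".join(edu)
    return f"TS=(({tech_clause}) AND ({edu_clause}))"

query = build_ts_query(tech_terms, edu_terms)
print(query)
```

The wildcard `*` follows Web of Science truncation syntax, so `technolog*` matches "technology" and "technologies" alike.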

Literature screening

Literature acquired through keyword searches may contain ostensibly related yet actually unrelated works. Therefore, to ensure the close relevance of literature included in the analysis to the research topic, it is often necessary to perform a manual screening process to identify the final literature to be analyzed, subsequent to completing the initial literature search.

The manual screening process consisted of two steps. Initially, irrelevant literature was weeded out based on titles and abstracts, with two members of the research team involved in this phase. This stage lasted about one week, resulting in 1106 articles being retained. Subsequently, a comprehensive review of the full text was conducted to accurately identify the literature required for the study. To carry out the second phase of manual screening effectively and scientifically, and to minimize potential researcher bias, the research team established the inclusion criteria presented in Table 2. Three members were engaged in this phase, which took approximately two weeks, culminating in the retention of 588 articles after meticulous screening. The entire screening process is depicted in Fig. 1, adhering to the PRISMA guidelines (Page et al. 2021).

figure 1

The process of obtaining and filtering the necessary literature data for research.
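The two screening steps can be summarized as simple retention arithmetic over the counts reported in the text (1849 initial records, 1106 after title/abstract screening, 588 after full-text review):

```python
# Retention rates for the two-step manual screening described above,
# computed from the record counts reported in the text.
initial, after_title_abstract, after_full_text = 1849, 1106, 588

step1_retention = after_title_abstract / initial          # title/abstract step
step2_retention = after_full_text / after_title_abstract  # full-text step
overall_retention = after_full_text / initial             # end to end

print(f"Title/abstract screening kept {step1_retention:.1%} of records")
print(f"Full-text review kept {step2_retention:.1%} of the remainder")
print(f"Overall, {overall_retention:.1%} of initial records were retained")
```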

Data standardization

Nguyen and Hallinger ( 2020 ) pointed out that raw data extracted from scientific databases often contains multiple expressions of the same term, and not addressing these synonymous expressions could affect research results in bibliometric analysis. For instance, in the original data, the author list may include “Tsai, C. C.” and “Tsai, C.-C.”, while the keyword list may include “professional-development” and “professional development,” which often require merging. Therefore, before analyzing the selected literature, a data disambiguation process is necessary to standardize the data (Strotmann and Zhao, 2012 ; Van Eck and Waltman, 2019 ). This study adopted the data standardization process proposed by Taskin and Al ( 2019 ), mainly including the following standardization operations:

Firstly, the author and source fields in the data are corrected and standardized to differentiate authors with similar names.

Secondly, the study checks whether the journals to which the literature belongs have been renamed over the past 20 years, so as to avoid the influence of journal name changes on the analysis results.

Finally, the keyword field is standardized by unifying parts of speech and singular/plural forms of keywords, which can help eliminate redundant entries in the knowledge graph.
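A minimal sketch of the keyword-standardization step, assuming simple lowercasing, hyphen normalization, and naive plural stripping; the study's actual procedure follows Taskin and Al (2019) and may differ in detail:

```python
# Naive keyword standardization: lowercase, replace hyphens with spaces,
# and strip a trailing "s" from longer words so that variants such as
# "professional-development" and "Professional development" merge into
# one entry. Real pipelines use curated thesaurus files instead.
def standardize_keyword(kw: str) -> str:
    kw = kw.lower().replace("-", " ").strip()
    # keep short words (e.g. acronyms) untouched to avoid over-stripping
    words = [w[:-1] if w.endswith("s") and len(w) > 4 else w
             for w in kw.split()]
    return " ".join(words)

variants = ["professional-development", "Professional development"]
print({standardize_keyword(v) for v in variants})  # a single merged key
```

In practice VOSviewer accepts a thesaurus file for exactly this kind of merging, which avoids the pitfalls of rule-based stemming.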

Performance analysis (RQ1)

This section offers a thorough and detailed analysis of the state of research in the field of digital technology education. By utilizing descriptive statistics and visual maps, it provides a comprehensive overview of the development trends, authors, countries, institutions, and journal distribution within the field. The insights presented in this section are of great significance in advancing our understanding of the current state of research in this field and identifying areas for further investigation. The use of visual aids to display inter-country cooperation and the evolution of the field adds to the clarity and coherence of the analysis.

Time trend of the publications

To understand a research field, it is first necessary to understand the most basic quantitative information, among which the change in the number of publications per year best reflects the development trend of a research field. Figure 2 shows the distribution of publication dates.

figure 2

Time trend of the publications on application of digital technology in education.

From Fig. 2, it can be seen that the development of this field over the past 20 years can be roughly divided into three stages. The first stage was from 2000 to 2007, during which the number of publications was relatively low. Due to various factors such as technological maturity, the academic community did not pay widespread attention to the role of digital technology in expanding the scope of teaching and learning. The second stage was from 2008 to 2019, during which the overall number of publications showed an upward trend, the development of the field entered an accelerated period, and it attracted more and more scholars’ attention. The third stage was from 2020 to 2022, during which the number of publications stabilized at around 100 per year. During this period, the impact of the pandemic led a large number of scholars to focus on the role of digital technology in education during the pandemic, and research on the application of digital technology in education became a core topic in social science research.

Analysis of authors

An analysis of authors’ publication volume provides information about the representative scholars and core research strengths of a research area. Table 3 presents information on the core authors in digital technology education application research, including name, publication number, and average number of citations per article (based on the analysis and statistics from VOSviewer).

Variations in research foci among scholars abound. Within the field of digital technology education application research over the past two decades, Neil Selwyn stands as the most productive author, having published 15 papers garnering a total of 1027 citations, an average of 68.47 citations per paper. As a Professor at the Faculty of Education at Monash University, Selwyn concentrates on exploring the application of digital technology in higher education contexts (Selwyn et al. 2021), as well as related products in higher education such as the Coursera, edX, and Udacity MOOC platforms (Bulfin et al. 2014). From an educational sociology perspective, Selwyn has conducted extensive research on the impact of digital technology on education, highlighting the spatiotemporal extension of educational processes and practices through technological means as the greatest value of educational technology (Selwyn, 2012; Selwyn and Facer, 2014). In addition, he provides a blueprint for the development of future schools in 2030 based on the present impact of digital technology on education (Selwyn et al. 2019). The second most productive author in this field, Henderson, also offers significant contributions to understanding the important value of digital technology in education, specifically in the higher education setting, with a focus on the impact of the pandemic (Henderson et al. 2015; Cohen et al. 2022). In contrast, Edwards’ research interests focus on early childhood education, particularly the application of digital technology in this context (Edwards, 2013; Bird and Edwards, 2015). On the technical level, Edwards mainly favors digital game technology, because it is a form of digital technology that children find relatively easy to accept (Edwards, 2015).
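The per-author summary statistic used in Table 3 is straightforward to reproduce: total citations divided by paper count. Only Selwyn's figures, which the text reports, are used here:

```python
# Average citations per paper, as reported for core authors in Table 3.
# Only Selwyn's counts appear in the text; other entries would follow
# the same pattern.
authors = {
    "Selwyn": {"papers": 15, "citations": 1027},
}

for name, rec in authors.items():
    avg = rec["citations"] / rec["papers"]
    print(f"{name}: {rec['papers']} papers, {avg:.2f} citations/paper")
```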

Analysis of countries/regions and organization

The present study aimed to ascertain the leading countries in digital technology education application research by analyzing 75 countries related to the 588 works of literature. Table 4 depicts the top ten countries that have contributed significantly to this field in terms of publication count (based on the analysis and statistics from VOSviewer). Our analysis of Table 4 shows that England emerged as the most influential country/region, with 92 published papers and 2401 citations. Australia and the United States secured the second and third ranks, respectively, with 90 papers (2187 citations) and 70 papers (1331 citations). Geographically, most of the countries featured in the top ten publication volumes are situated in Oceania, North America, and Europe, with China being the only exception. Notably, all these countries except China belong to the group of developed nations, suggesting that economic strength is a prerequisite for fostering research in the digital technology education application field.

This study presents a visual representation of the publication output and cooperation relationships among different countries in the field of digital technology education application research. Specifically, a chord diagram is employed to display the top 30 countries in terms of publication output, as depicted in Fig. 3 . The chord diagram is composed of nodes and chords, where the nodes are positioned as scattered points along the circumference, and the length of each node corresponds to the publication output, with longer lengths indicating higher publication output. The chords, on the other hand, represent the cooperation relationships between any two countries, and are weighted based on the degree of closeness of the cooperation, with wider chords indicating closer cooperation. Through the analysis of the cooperation relationships, the findings suggest that the main publishing countries in this field are engaged in cooperative relationships with each other, indicating a relatively high level of international academic exchange and research internationalization.

figure 3

In the diagram, nodes are scattered along the circumference of a circle, with the length of each node representing the volume of publications. The weighted arcs connecting any two points on the circle are known as chords, representing the collaborative relationship between the two, with the width of the arc indicating the closeness of the collaboration.

Further analyzing Fig. 3 , we can extract more valuable information, enabling a deeper understanding of the connections between countries in the research field of digital technology in educational applications. It is evident that certain countries, such as the United States, China, and England, display thicker connections, indicating robust collaborative relationships in terms of productivity. These thicker lines signify substantial mutual contributions and shared objectives in certain sectors or fields, highlighting the interconnectedness and global integration in these areas. By delving deeper, we can also explore potential future collaboration opportunities through the chord diagram, identifying possible partners to propel research and development in this field. In essence, the chord diagram successfully encapsulates and conveys the multi-dimensionality of global productivity and cooperation, allowing for a comprehensive understanding of the intricate inter-country relationships and networks in a global context, providing valuable guidance and insights for future research and collaborations.
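The chord diagram in Fig. 3 is drawn from a symmetric country co-authorship matrix. A sketch of how such a matrix can be tallied from per-paper country lists follows; the sample records are invented for illustration, not taken from the dataset:

```python
from collections import Counter
from itertools import combinations

# Tally chord weights for a country co-authorship diagram: each unordered
# pair of countries appearing on the same paper adds one unit to the chord
# between them. The sample records below are illustrative only.
papers = [
    ["England", "Australia"],
    ["USA", "China"],
    ["England", "USA", "Australia"],
]

chords = Counter()
for countries in papers:
    # sort to make pairs order-independent; set() drops duplicate entries
    for a, b in combinations(sorted(set(countries)), 2):
        chords[(a, b)] += 1

print(chords)
```

In the actual workflow, such a table would be exported from VOSviewer and loaded into Charticulator to render the chord diagram.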

An in-depth examination of the publishing institutions is provided in Table 5 , showcasing the foremost 10 institutions ranked by their publication volume. Notably, Monash University and Australian Catholic University, situated in Australia, have recorded the most prolific publications within the digital technology education application realm, with 22 and 10 publications respectively. Moreover, the University of Oslo from Norway is featured among the top 10 publishing institutions, with an impressive average citation count of 64 per publication. It is worth highlighting that six institutions based in the United Kingdom were also ranked within the top 10 publishing institutions, signifying their leading position in this area of research.

Analysis of journals

Journals are the main carriers for publishing high-quality papers. Some scholars point out that two key factors measure a journal’s influence in a specified field: the number of articles published and the number of citations. The more papers a journal publishes and the more citations those papers receive, the greater its influence (Dzikowski, 2018). Therefore, this study utilized VOSviewer to statistically analyze the top 10 journals with the most publications in the field of digital technology in education and calculated the average citations per article (see Table 6).

Based on Table 6, it is apparent that the highest numbers of articles in the domain of digital technology in education research were published in Education and Information Technologies (47 articles), Computers & Education (34 articles), and British Journal of Educational Technology (32 articles), a higher output than the other journals. This underscores that these three journals concentrate more on the application of digital technology in education. Several other journals, such as Technology Pedagogy and Education and Sustainability, have published more than 15 articles in this domain. Sustainability represents the open access movement, whose growth in recent years has notably facilitated research progress in this field. Although there is still considerable disagreement among scholars on the optimal approach to achieving open access, the notion that research outcomes should be accessible to all is widely recognized (Huang et al. 2020). Further analysis of the research fields to which these journals belong shows that, except for Sustainability, they all pertain to educational technology, thus providing a qualitative definition of the research area of digital technology education from the perspective of journals.

Temporal keyword analysis: thematic evolution (RQ2)

The evolution of research themes is a dynamic process, and previous studies have attempted to present the developmental trajectory of fields by drawing keyword networks in phases (Kumar et al. 2021 ; Chen et al. 2022b ). To understand the shifts in research topics across different periods, this study follows past research and, based on the significant changes in the research field and corresponding technological advancements during the outlined periods, divides the timeline into four stages (the first stage from January 2000 to December 2005, the second stage from January 2006 to December 2011, the third stage from January 2012 to December 2017; and the fourth stage from January 2018 to December 2022). The division into these four stages was determined through a combination of bibliometric analysis and literature review, which presented a clear trajectory of the field’s development. The research analyzes the keyword networks for each time period (as there are only three articles in the first stage, it was not possible to generate an appropriate keyword co-occurrence map, hence only the keyword co-occurrence maps from the second to the fourth stages are provided), to understand the evolutionary track of the digital technology education application research field over time.
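The four stages can be encoded as date ranges for assigning each record to a period. The stage names below follow the paper's periodization (the fourth label is shortened from "acceleration period of change"):

```python
from datetime import date

# The four developmental stages described above, as closed date ranges.
STAGES = [
    ("germination",          date(2000, 1, 1), date(2005, 12, 31)),
    ("initial development",  date(2006, 1, 1), date(2011, 12, 31)),
    ("critical exploration", date(2012, 1, 1), date(2017, 12, 31)),
    ("acceleration",         date(2018, 1, 1), date(2022, 12, 31)),
]

def stage_of(pub_date: date) -> str:
    """Return the stage whose date range contains pub_date."""
    for name, start, end in STAGES:
        if start <= pub_date <= end:
            return name
    raise ValueError(f"{pub_date} is outside the studied window")

print(stage_of(date(2014, 6, 1)))  # critical exploration
```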

2000.1–2005.12: germination period

From January 2000 to December 2005, digital technology education application research was in its infancy. Only three studies focused on digital technology, all of which were related to computers. Due to the popularity of computers, the home became a new learning environment, highlighting the important role of digital technology in expanding the scope of learning spaces (Sutherland et al. 2000 ). In specific disciplines and contexts, digital technology was first favored in medical clinical practice, becoming an important tool for supporting the learning of clinical knowledge and practice (Tegtmeyer et al. 2001 ; Durfee et al. 2003 ).

2006.1–2011.12: initial development period

The period from January 2006 to December 2011 was the initial development period of digital technology education research. Significant growth was observed in research related to digital technology, and discussions and theoretical analyses about “digital natives” emerged. During this phase, scholars focused on the debate about “how to use digital technology reasonably” and “whether current educational models and school curriculum design need to be adjusted on a large scale” (Bennett and Maton, 2010; Selwyn, 2009; Margaryan et al. 2011). These theoretical and speculative arguments provided a unique perspective for understanding the cognitive impact of digital technology on education and teaching. As can be seen from vocabulary such as “rethinking”, “disruptive pedagogy”, and “attitude” in Fig. 4, many scholars joined in calm reflection and analysis amid the digital technology trend (Laurillard, 2008; Vratulis et al. 2011). During this phase, technology was still undergoing dramatic changes. The development of mobile technology had already caught the attention of many scholars (Wong et al. 2011), but digital technology represented by computers was still very active (Selwyn et al. 2011). The change in technological form would inevitably lead to educational transformation. Collins and Halverson (2010) summarized the prospects and challenges of using digital technology for learning and educational practices, believing that digital technology would bring a disruptive revolution to the education field and bring about a new educational system. In addition, the term “teacher education” in Fig. 4 reflects the impact of digital technology development on teachers. The rapid development of technology has widened the generation gap between teachers and students. To ensure smooth communication between teachers and students, teachers must keep up with the trend of technological development and establish a lifelong learning mindset (Donnison, 2009).

Figure 4: In the diagram, each node represents a keyword, with node size indicating the keyword’s frequency of occurrence. Edges represent co-occurrence relationships between keywords; more frequent co-occurrence yields tighter connections.
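A co-occurrence map of this kind is straightforward to derive from bibliographic records: a node's frequency is the number of papers listing the keyword, and an edge's weight is the number of papers listing both endpoints. The sketch below illustrates the counting step only, using hypothetical per-paper keyword lists rather than the study's actual Web of Science corpus.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(keyword_lists):
    """Derive node frequencies and edge weights for a keyword
    co-occurrence map: a node's size is how many papers use the
    keyword; an edge's weight is how many papers use both."""
    node_freq = Counter()
    edge_weight = Counter()
    for kws in keyword_lists:
        kws = sorted({k.lower() for k in kws})   # one count per paper
        node_freq.update(kws)
        edge_weight.update(combinations(kws, 2))
    return node_freq, edge_weight

# Hypothetical per-paper author keywords (not the study's corpus)
papers = [
    ["digital natives", "higher education", "ICT"],
    ["digital natives", "teacher education"],
    ["ICT", "teacher education", "higher education"],
]
nodes, edges = cooccurrence_network(papers)
```

Tools such as VOSviewer or CiteSpace essentially lay out these node and edge weights with clustering and force-directed placement; the counts themselves are the raw input.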

2012.1–2017.12: critical exploration period

The period from January 2012 to December 2017 was a critical exploration phase for the application of digital technology in education research. As Fig. 5 shows, unlike the previous stage, research elements tied to specific digital technologies increased markedly, including richer technological contexts, a greater variety of research methods, and more diverse learning modes. Moreover, the temporal and spatial dimensions of the learning environment were further de-emphasized, as noted in previous literature (Za et al. 2014 ). Given the rapidly accelerating pace of technological development, the education system in the digital era urgently needed collaborative evolution and reconstruction, as argued by Davis, Eickelmann, and Zaka ( 2013 ).

Figure 5

In the domain of digital technology, social media has garnered substantial scholarly attention as a promising avenue for learning, as noted by Pasquini and Evangelopoulos ( 2016 ). Implementing social media in education offers several benefits, including freeing education from the restrictions of physical distance and time and erasing conventional educational boundaries. With the widespread adoption of mobile devices, the user-generated content (UGC) model in social media has emerged as a crucial source of knowledge creation and distribution, and social networks have become an integral component of ubiquitous learning environments (Hwang et al. 2013 ). Social media allows individuals to function as both knowledge producers and recipients, blurring the conventional roles of learners and teachers: on mobile platforms, these roles are not fixed but interchangeable.

In terms of research methodology, the prevalence of empirical studies with survey designs in educational technology during this period is evident from vocabulary in Fig. 5 such as “achievement,” “acceptance,” “attitude,” and “ICT.” These studies aim to understand learners’ willingness to adopt and attitudes towards new technologies, and some investigate the impact of digital technologies on learning outcomes through quasi-experimental designs (Domínguez et al. 2013 ). Among these empirical studies, mobile learning emerged as a hot topic, which is not surprising. First, the advantages of mobile learning environments over traditional ones have been empirically demonstrated (Hwang et al. 2013 ). Second, learners born around the turn of the century have been heavily influenced by digital technologies and have developed learning styles more open to mobile devices as a means of learning. Consequently, analyzing mobile learning as a relatively novel mode of learning became an important issue for scholars in educational technology.

The intervention of technology has led to the emergence of several novel learning modes, the most representative of which in this phase is blended learning. Blended learning, a concept introduced in the information age, emphasizes integrating the benefits of traditional learning methods and online learning. This mode highlights the prominent role of teachers in guiding, inspiring, and monitoring the learning process, while also underlining the importance of learners’ initiative, enthusiasm, and creativity. Although conceptualized early, blended learning has seen its meaning expanded by the widespread use of mobile technology and social media in education. The implementation of new technologies, particularly mobile devices, has transformed curriculum design and increased flexibility and autonomy in students’ learning processes (Trujillo Maza et al. 2016 ), rekindling scholarly attention to this learning mode. However, some scholars have raised concerns about potential drawbacks of the blended learning model, such as its disruptive impact on the traditional teaching system and the lack of systematic coping strategies and relevant policies in many schools and regions (Moskal et al. 2013 ).

2018.1–2022.12: accelerated transformation period

The period from January 2018 to December 2022 witnessed a rapid transformation in the application of digital technology in education research. Publication in the field peaked, driven in large part by the COVID-19 pandemic (Yu et al. 2023 ). Research during this period built upon themes from the previous phase, such as achievement, attitudes, and social media, and added elements characteristic of the field, such as digital literacy, digital competence, and professional development, as depicted in Fig. 6 . Alongside this, scholars’ expectations for the value of digital technology expanded; improving learning efficiency and performance is no longer the sole focus. Some research now aims to cultivate learners’ motivation and enhance their self-efficacy through the reasonable application of digital technology, as demonstrated by recent studies (Beardsley et al. 2021 ; Creely et al. 2021 ).

Figure 6

The COVID-19 pandemic has emerged as a crucial backdrop for digital technology’s role in sustaining global education, as highlighted by recent scholarly research (Zhou et al. 2022 ; Pan and Zhang, 2020 ; Mo et al. 2022 ). The online learning environment, supported by digital technology, became the primary battleground for global education (Yu, 2022 ). This social context has prompted various studies, with some scholars positing that the pandemic disrupted the traditional teaching order while expanding learning possibilities in terms of patterns and forms (Alabdulaziz, 2021 ). Furthermore, the pandemic has acted as a catalyst for teaching and technological innovation, a viewpoint that has been empirically substantiated (Moorhouse and Wong, 2021 ). Additionally, some scholars regard the pandemic’s push as a crucial driving force for the digital transformation of the education system and an essential mechanism for overcoming the system’s inertia (Romero et al. 2021 ).

The rapid outbreak of the pandemic posed a challenge to the large-scale implementation of digital technologies, which was influenced by a complex interplay of subjective and objective factors. Objective constraints included the lack of infrastructure in some regions to support digital technologies, while subjective obstacles included psychological resistance among certain students and teachers (Moorhouse, 2021 ). These factors greatly impacted the progress of online learning during the pandemic. Additionally, Timotheou et al. ( 2023 ) conducted a comprehensive systematic review of existing research on digital technology use during the pandemic, highlighting the critical role played by various factors such as learners’ and teachers’ digital skills, teachers’ personal attributes and professional development, school leadership and management, and administration in facilitating the digitalization and transformation of schools.

The current stage of research is characterized by the pivotal term “digital literacy,” denoting growing interest in learners’ attitudes towards and adoption of emerging technologies. Initially, the term “literacy” was restricted to fundamental abilities and knowledge associated with books and print materials (McMillan, 1996 ). However, with the swift advancement of computers and digital technology, various attempts have been made to broaden the scope of literacy beyond its traditional meaning, including game literacy (Buckingham and Burn, 2007 ), information literacy (Eisenberg, 2008 ), and media literacy (Turin and Friesem, 2020 ). Similarly, digital literacy has emerged as a crucial concept; Gilster ( 1997 ) was the first to introduce it, referring to the proficiency in utilizing technology and processing digital information in academic, professional, and daily life settings. In practical educational settings, learners with higher digital literacy often quickly master digital devices and apply them intelligently to education and teaching (Yu, 2022 ).

The utilization of digital technology in education has undergone significant changes over the past two decades, and has been a crucial driver of educational reform with each new technological revolution. The impact of these changes on the underlying logic of digital technology education applications has been noticeable. From computer technology to more recent developments such as virtual reality (VR), augmented reality (AR), and artificial intelligence (AI), the acceleration in digital technology development has been ongoing. Educational reforms spurred by digital technology development continue to be dynamic, as each new digital innovation presents new possibilities and models for teaching practice. This is especially relevant in the post-pandemic era, where the importance of technological progress in supporting teaching cannot be overstated (Mughal et al. 2022 ). Existing digital technologies have already greatly expanded the dimensions of education in both time and space, while future digital technologies aim to expand learners’ perceptions. Researchers have highlighted the potential of integrated technology and immersive technology in the development of the educational metaverse, which is highly anticipated to create a new dimension for the teaching and learning environment, foster a new value system for the discipline of educational technology, and more effectively and efficiently achieve the grand educational blueprint of the United Nations’ Sustainable Development Goals (Zhang et al. 2022 ; Li and Yu, 2023 ).

Hotspot evolution analysis (RQ3)

The examination of keyword evolution reveals a consistent trend in the advancement of digital technology education application research: the emergence and transformation of keywords indicate shifting research interests in the field. CiteSpace’s burst detection function was therefore used to identify the top 10 burst words with the highest burst strength, as shown in Table 7 .
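CiteSpace implements Kleinberg's two-state burst model for this step. As a rough illustration of the underlying intuition only (a keyword "bursts" when its share of occurrences in a window of years far exceeds its overall share), the heuristic below sketches the idea on hypothetical data; it is not CiteSpace's actual algorithm.

```python
from collections import Counter

def burst_scores(year_keywords, window=3):
    """Toy burst scoring: for each keyword, compare its share of all
    keyword occurrences inside a sliding window of years against its
    overall share, and keep the best ratio. Illustrative only;
    CiteSpace uses Kleinberg's burst model, not this heuristic."""
    years = sorted(year_keywords)
    counts = {y: Counter(year_keywords[y]) for y in years}
    totals = {y: len(year_keywords[y]) for y in years}
    grand_total = sum(totals.values())
    overall = Counter()
    for y in years:
        overall.update(counts[y])
    scores = {}
    for kw, n in overall.items():
        base = n / grand_total            # keyword's overall share
        best = 0.0
        for i in range(len(years) - window + 1):
            win = years[i:i + window]
            k = sum(counts[y][kw] for y in win)
            t = sum(totals[y] for y in win)
            best = max(best, (k / t) / base if t else 0.0)
        scores[kw] = best
    return scores

# Hypothetical yearly keyword occurrences (not the study's data)
yearly = {
    2016: ["computer"] * 5 + ["mooc"] * 5,
    2017: ["computer"] * 5 + ["mooc"] * 5,
    2018: ["digital literacy"] * 8 + ["computer"] * 2,
    2019: ["digital literacy"] * 8 + ["computer"] * 2,
    2020: ["digital literacy"] * 8 + ["computer"] * 2,
}
scores = burst_scores(yearly, window=3)
```

In this toy corpus, “digital literacy” scores higher than “computer” because its occurrences cluster in 2018–2020, mirroring the concentration of burst keywords in the later years reported in Table 7.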

According to the results presented in Table 7 , burst keywords in digital technology education research are concentrated mainly between 2018 and 2022. Before this time frame, the emerging keywords were limited to “information technology” and “computer”. Notably, “computer” maintained a high burst strength from 2008 to 2018, reflecting the computer’s important position in digital technology as the main carrier of many technologies such as Learning Management Systems (LMS) and assessment and feedback systems (Barlovits et al. 2022 ).

Since 2018, an increasing number of studies have focused on evaluating learners’ capabilities to accept, apply, and comprehend digital technologies. As the terms “digital literacy” and “digital skill” indicate, assessing learners’ digital literacy has become a critical task: scholarly efforts have been directed towards developing literacy assessment tools and conducting empirical assessments, and enhancing the digital literacy of both learners and educators has garnered significant attention (Nagle, 2018 ; Yu, 2022 ). Simultaneously, given the widespread use of digital technologies in formal and informal learning settings, promoting learners’ digital skills has become a crucial objective for contemporary schools (Nygren et al. 2019 ; Forde and OBrien, 2022 ).

Since 2020, three new hotspots have emerged in applied research on digital technology education, all affected to some extent by the pandemic. Firstly, digital technology has been widely applied in physical education, one of the subjects most severely affected by the pandemic (Parris et al. 2022 ; Jiang and Ning, 2022 ). Secondly, digital transformation has become an important measure for most schools, especially higher education institutions, to cope with the global impact of the pandemic (García-Morales et al. 2021 ). Although the concept of digital transformation was proposed earlier, the COVID-19 pandemic has greatly accelerated the process. Educational institutions must carefully redesign their educational products to face this new situation, providing timely digital learning methods, environments, tools, and support systems with far-reaching impacts on modern society (Krishnamurthy, 2020 ; Salas-Pilco et al. 2022 ). Thirdly, the professional development of teachers has become a key mission of educational institutions in the post-pandemic era. Teachers need a certain level of digital literacy and familiarity with the tools and resources used in online teaching, which has become a research hotspot. Organizing digital skills training for teachers to cope with emerging technologies in education is an important issue for teacher professional development and lifelong learning (Garzón-Artacho et al. 2021 ). As the main organizers and practitioners of emergency remote teaching (ERT) during the pandemic, teachers must invest cognitive effort in their professional development to ensure effective implementation of ERT (Romero-Hall and Jaramillo Cherrez, 2022 ).

The burst word “digital transformation” reveals that we are in the midst of an ongoing digital technology revolution. With the emergence of innovative digital technologies such as ChatGPT and Microsoft 365 Copilot, technology trends will continue to evolve, albeit unpredictably. While the impact of these advancements on school education remains uncertain, it is anticipated that the widespread integration of technology will significantly affect the current education system. Rejecting emerging technologies without careful consideration is unwise. Like any revolution, the technological revolution in the education field has both positive and negative aspects. Detractors argue that digital technology disrupts learning and memory (Baron, 2021 ) or causes learners to become addicted and distracted from learning (Selwyn and Aagaard, 2020 ). On the other hand, the prudent use of digital technology in education offers a glimpse of a golden age of open learning. Educational leaders and practitioners have the opportunity to leverage cutting-edge digital technologies to address current educational challenges and develop a rational path for the sustainable and healthy growth of education.

Discussion on performance analysis (RQ1)

The field of digital technology education application research has experienced substantial growth since the turn of the century, a phenomenon that is quantifiably apparent through an analysis of authorship, country/region contributions, and institutional engagement. This expansion reflects the increased integration of digital technologies in educational settings and the heightened scholarly interest in understanding and optimizing their use.

Discussion on authorship productivity in digital technology education research

The authorship distribution within digital technology education research is indicative of the field’s intellectual structure and depth. A primary figure in this domain is Neil Selwyn, whose substantial citation rate underscores the profound impact of his work. His focus on the implications of digital technology in higher education and educational sociology has proven to be seminal. Selwyn’s research trajectory, especially the exploration of spatiotemporal extensions of education through technology, provides valuable insights into the multifaceted role of digital tools in learning processes (Selwyn et al. 2019 ).

Other notable contributors, such as Henderson and Edwards, pursue diversified research interests: the impact of digital technologies during the pandemic and their application in early childhood education, respectively. Their varied focuses highlight the breadth of digital technology education research, encompassing pedagogical innovation, technological adaptation, and policy development.

Discussion on country/region-level productivity and collaboration

At the country/region level, the United Kingdom, specifically England, emerges as a leading contributor with 92 published papers and a significant citation count. This is closely followed by Australia and the United States, indicating a strong English-speaking research axis. Such geographical concentration of scholarly output often correlates with investment in research and development, technological infrastructure, and the prevalence of higher education institutions engaging in cutting-edge research.

China’s notable inclusion as the only non-Western country among the top contributors to the field suggests a growing research capacity and interest in digital technology in education. However, the lower average citation per paper for China could reflect emerging engagement or different research focuses that may not yet have achieved the same international recognition as Western counterparts.

The chord diagram analysis furthers this understanding, revealing dense interconnections between countries like the United States, China, and England, which indicates robust collaborations. Such collaborations are fundamental in addressing global educational challenges and shaping international research agendas.
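The data behind such a chord diagram is a symmetric country-by-country co-authorship matrix. A minimal sketch of how that matrix can be computed, assuming hypothetical per-paper affiliation lists rather than the study's records:

```python
from itertools import combinations

def collaboration_matrix(paper_countries):
    """Build the symmetric country co-authorship matrix that a
    chord diagram visualizes: cell (i, j) counts papers with
    authors from both country i and country j."""
    countries = sorted({c for p in paper_countries for c in p})
    idx = {c: i for i, c in enumerate(countries)}
    m = [[0] * len(countries) for _ in countries]
    for p in paper_countries:
        for a, b in combinations(sorted(set(p)), 2):
            m[idx[a]][idx[b]] += 1
            m[idx[b]][idx[a]] += 1
    return countries, m

# Hypothetical per-paper author affiliations
papers = [
    ["China", "USA"],
    ["England", "USA", "China"],
    ["England", "Australia"],
]
countries, m = collaboration_matrix(papers)
```

Chord-diagram tools such as Charticulator then render each nonzero cell as a ribbon whose width is proportional to the count.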

Discussion on institutional-level contributions to digital technology education

Institutional productivity in digital technology education research reveals a constellation of universities driving the field forward. Monash University and the Australian Catholic University have the highest publication output, signaling Australia’s significant role in advancing digital education research. The University of Oslo’s remarkable average citation count per publication indicates influential research contributions, potentially reflecting high-quality studies that resonate with the broader academic community.

The strong showing of UK institutions, including the University of London, The Open University, and the University of Cambridge, reinforces the UK’s prominence in this research field. Such institutions are often at the forefront of pedagogical innovation, benefiting from established research cultures and funding mechanisms that support sustained inquiry into digital education.

Discussion on journal publication analysis

An examination of journal outputs offers a lens into the communicative channels of the field’s knowledge base. Journals such as Education and Information Technologies , Computers & Education , and the British Journal of Educational Technology not only serve as the primary disseminators of research findings but also as indicators of research quality and relevance. The impact factor (IF) serves as a proxy for the quality and influence of these journals within the academic community.

The high citation counts for articles published in Computers & Education suggest that research disseminated through this medium has a wide-reaching impact and is of particular interest to the field. This is further evidenced by its significant IF of 11.182, indicating that the journal is a pivotal platform for seminal work in the application of digital technology in education.

The authorship, regional, and institutional productivity in the field of digital technology education application research collectively narrate the evolution of this domain since the turn of the century. The prominence of certain authors and countries underscores the importance of socioeconomic factors and existing academic infrastructure in fostering research productivity. Meanwhile, the centrality of specific journals as outlets for high-impact research emphasizes the role of academic publishing in shaping the research landscape.

As the field continues to grow, future research may benefit from leveraging the collaborative networks that have been elucidated through this analysis, perhaps focusing on underrepresented regions to broaden the scope and diversity of research. Furthermore, the stabilization of publication numbers in recent years invites a deeper exploration into potential plateaus in research trends or saturation in certain sub-fields, signaling an opportunity for novel inquiries and methodological innovations.

Discussion on the evolutionary trends (RQ2)

The evolution of the research field concerning the application of digital technology in education over the past two decades is a story of convergence, diversification, and transformation, shaped by rapid technological advancements and shifting educational paradigms.

At the turn of the century, the inception of digital technology in education was largely exploratory, with a focus on how emerging computer technologies could be harnessed to enhance traditional learning environments. Research from this early period was primarily descriptive, reflecting on the potential and challenges of incorporating digital tools into the educational setting. This phase was critical in establishing the fundamental discourse that would guide subsequent research, as it set the stage for understanding the scope and impact of digital technology in learning spaces (Wang et al. 2023 ).

As the first decade progressed, the narrative expanded to encompass the pedagogical implications of digital technologies. This was a period of conceptual debates, where terms like “digital natives” and “disruptive pedagogy” entered the academic lexicon, underscoring the growing acknowledgment of digital technology as a transformative force within education (Bennett and Maton, 2010 ). During this time, the research began to reflect a more nuanced understanding of the integration of technology, considering not only its potential to change where and how learning occurred but also its implications for educational equity and access.

In the second decade, with the maturation of internet connectivity and mobile technology, the focus of research shifted from theoretical speculations to empirical investigations. The proliferation of digital devices and the ubiquity of social media influenced how learners interacted with information and each other, prompting a surge in studies that sought to measure the impact of these tools on learning outcomes. The digital divide and issues related to digital literacy became central concerns, as scholars explored the varying capacities of students and educators to engage with technology effectively.

Throughout this period, there was an increasing emphasis on the individualization of learning experiences, facilitated by adaptive technologies that could cater to the unique needs and pacing of learners (Jing et al. 2023a ). This individualization was coupled with a growing recognition of the importance of collaborative learning, both online and offline, and the role of digital tools in supporting these processes. Blended learning models, which combined face-to-face instruction with online resources, emerged as a significant trend, advocating for a balance between traditional pedagogies and innovative digital strategies.

The later years, particularly marked by the COVID-19 pandemic, accelerated the necessity for digital technology in education, transforming it from a supplementary tool to an essential platform for delivering education globally (Mo et al. 2022 ; Mustapha et al. 2021 ). This era brought about an unprecedented focus on online learning environments, distance education, and virtual classrooms. Research became more granular, examining not just the pedagogical effectiveness of digital tools, but also their role in maintaining continuity of education during crises, their impact on teacher and student well-being, and their implications for the future of educational policy and infrastructure.

Across these two decades, the research field has seen a shift from examining digital technology as an external addition to the educational process, to viewing it as an integral component of curriculum design, instructional strategies, and even assessment methods. The emergent themes have broadened from a narrow focus on specific tools or platforms to include wider considerations such as data privacy, ethical use of technology, and the environmental impact of digital tools.

Moreover, the field has moved from considering the application of digital technology in education as a primarily cognitive endeavor to recognizing its role in facilitating socio-emotional learning, digital citizenship, and global competencies. Researchers have increasingly turned their attention to the ways in which technology can support collaborative skills, cultural understanding, and ethical reasoning within diverse student populations.

In summary, the past two decades of research on digital technology applications in education have been characterized by a progression from foundational inquiries to complex analyses of digital integration. This evolution has mirrored the trajectory of technology itself, from a facilitative tool to a pervasive ecosystem defining contemporary educational experiences. Looking to the future, the field is poised to delve into the implications of emerging technologies like AI, AR, and VR, and their potential to redefine the educational landscape even further. This ongoing metamorphosis suggests that the application of digital technology in education will continue to be a rich area of inquiry, demanding continual adaptation and forward-thinking from educators and researchers alike.

Discussion on the study of research hotspots (RQ3)

The analysis of keyword evolution in digital technology education application research elucidates the current frontiers in the field, reflecting a trajectory that is in tandem with the rapidly advancing digital age. This landscape is sculpted by emergent technological innovations and shaped by the demands of an increasingly digital society.

Interdisciplinary integration and pedagogical transformation

One of the frontiers identified from recent keyword bursts includes the integration of digital technology into diverse educational contexts, particularly noted with the keyword “physical education.” The digitalization of disciplines traditionally characterized by physical presence illustrates the pervasive reach of technology and signifies a push towards interdisciplinary integration where technology is not only a facilitator but also a transformative agent. This integration challenges educators to reconceptualize curriculum delivery to accommodate digital tools that can enhance or simulate the physical aspects of learning.

Digital literacy and skills acquisition

Another pivotal frontier is the focus on “digital literacy” and “digital skill”, which has intensified in recent years. This suggests a shift from mere access to technology towards a comprehensive understanding and utilization of digital tools. In this realm, the emphasis is not only on the ability to use technology but also on critical thinking, problem-solving, and the ethical use of digital resources (Yu, 2022 ). The acquisition of digital literacy is no longer an additive skill but a fundamental aspect of modern education, essential for navigating and contributing to the digital world.

Educational digital transformation

The keyword “digital transformation” marks a significant research frontier, emphasizing the systemic changes that education institutions must undergo to align with the digital era (Romero et al. 2021 ). This transformation includes the redesigning of learning environments, pedagogical strategies, and assessment methods to harness digital technology’s full potential. Research in this area explores the complexity of institutional change, addressing the infrastructural, cultural, and policy adjustments needed for a seamless digital transition.

Engagement and participation

Further exploration into “engagement” and “participation” underscores the importance of student-centered learning environments that are mediated by technology. The current frontiers examine how digital platforms can foster collaboration, inclusivity, and active learning, potentially leading to more meaningful and personalized educational experiences. Here, the use of technology seeks to support the emotional and cognitive aspects of learning, moving beyond the transactional view of education to one that is relational and interactive.

Professional development and teacher readiness

As the field evolves, “professional development” emerges as a crucial area, particularly in light of the pandemic which necessitated emergency remote teaching. The need for teacher readiness in a digital age is a pressing frontier, with research focusing on the competencies required for educators to effectively integrate technology into their teaching practices. This includes familiarity with digital tools, pedagogical innovation, and an ongoing commitment to personal and professional growth in the digital domain.

Pandemic as a catalyst

The recent pandemic has acted as a catalyst for accelerated research and application in this field, particularly in the domains of “digital transformation,” “professional development,” and “physical education.” This period has been a litmus test for the resilience and adaptability of educational systems to continue their operations in an emergency. Research has thus been directed at understanding how digital technologies can support not only continuity but also enhance the quality and reach of education in such contexts.

Ethical and societal considerations

The frontier of digital technology in education is also expanding to consider broader ethical and societal implications. This includes issues of digital equity, data privacy, and the sociocultural impact of technology on learning communities. The research explores how educational technology can be leveraged to address inequities and create more equitable learning opportunities for all students, regardless of their socioeconomic background.

Innovation and emerging technologies

Looking forward, the frontiers are set to be influenced by ongoing and future technological innovations, such as artificial intelligence (AI) (Wu and Yu, 2023 ; Chen et al. 2022a ). The exploration into how these technologies can be integrated into educational practices to create immersive and adaptive learning experiences represents a bold new chapter for the field.

In conclusion, the current frontiers of research on the application of digital technology in education are multifaceted and dynamic. They reflect an overarching movement towards deeper integration of technology in educational systems and pedagogical practices, where the goals are not only to facilitate learning but to redefine it. As these frontiers continue to expand and evolve, they will shape the educational landscape, requiring a concerted effort from researchers, educators, policymakers, and technologists to navigate the challenges and harness the opportunities presented by the digital revolution in education.

Conclusions and future research

Conclusions

The utilization of digital technology in education is a research area that cuts across multiple technical and educational domains and continues to grow dynamically with the continuous progress of technology. In this study, a systematic review of the field was conducted through bibliometric techniques to examine its development trajectory. The review focused on the leading contributors, productive countries and institutions, significant journals, and evolving development patterns. The quantitative analysis yielded several key conclusions that shed light on the field’s current state and future prospects.

(1) The research field of digital technology education applications has entered a stage of rapid development, with the pandemic in particular driving a recent peak in publications. Several key authors (Selwyn, Henderson, Edwards, etc.) and countries/regions (England, Australia, the USA, etc.) have emerged in this field and made significant contributions. International exchange has become frequent, and academic research in the field is highly internationalized. At the institutional level, higher education institutions in the UK and Australia are the core productive forces.

(2) Education and Information Technologies , Computers & Education , and the British Journal of Educational Technology are notable journals publishing research on digital technology education applications. All three belong to the field of educational technology and provide effective platforms for communicating research on the application of digital technology in education.

(3) Over the past two decades, research on digital technology education applications has progressed from an embryonic period through preliminary development and key exploration to a period of accelerated change, and it is now approaching maturity. Technological progress and the changing times have been the key driving forces of educational transformation and innovation, and both have played important roles in the continuous development of education.

(4) Under the influence of the pandemic, three frontiers have emerged in current research on digital technology education applications: physical education, digital transformation, and professional development promoted by digital technology. These frontier hotspots reflect the core issues that the education system faces when it encounters new technologies. The evolution of these hotspots shows that each time technology breaks through education's original boundaries of time and space, new challenges arise, and education renews itself by solving one hotspot problem after another.

The present study offers significant practical implications for scholars and practitioners in the field of digital technology education applications. Firstly, it presents a well-defined framework of the existing research in this area, serving as a comprehensive guide for new entrants to the field and shedding light on the developmental trajectory of this research domain. Secondly, the study identifies several contemporary research hotspots, thus offering a valuable decision-making resource for scholars aiming to explore potential research directions. Thirdly, the study undertakes an exhaustive analysis of published literature to identify core journals in the field of digital technology education applications, with Sustainability being identified as a promising open access journal that publishes extensively on this topic. This finding can potentially facilitate scholars in selecting appropriate journals for their research outputs.

Limitations and future research

This study is subject to several limitations arising from objective constraints. First, bibliometric analysis software imposes strict requirements on data quality. To ensure the quality and integrity of the collected data, we selected only journal articles indexed in SSCI and SCIE, the core collection of the Web of Science database, and excluded other databases, conference papers, editorials, and other publication types; some research and original viewpoints in the field of digital technology education applications may therefore have been overlooked. In addition, although professional software was used to conduct the bibliometric analysis and obtain relatively objective quantitative data, the analysis and interpretation of these data inevitably carry a degree of subjectivity, whose influence cannot be completely avoided. Future research will therefore broaden the scope of literature screening and proactively engage scholars in the field to obtain objective, state-of-the-art insights while minimizing the impact of personal subjectivity on the analysis.
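The screening step described above — keeping only SSCI/SCIE-indexed journal articles and excluding conference papers, editorials, and other document types — amounts to a simple filter over the exported records. The field names and toy records below are illustrative assumptions, not the actual Web of Science export format:

```python
# A minimal sketch of PRISMA-style record screening: start from an
# initial record set and keep only journal articles indexed in SSCI/SCIE.
initial = [
    {"title": "A", "doc_type": "Article", "index": "SSCI"},
    {"title": "B", "doc_type": "Proceedings Paper", "index": "CPCI"},
    {"title": "C", "doc_type": "Article", "index": "SCIE"},
    {"title": "D", "doc_type": "Editorial", "index": "SSCI"},
]

included = [r for r in initial
            if r["doc_type"] == "Article" and r["index"] in {"SSCI", "SCIE"}]
excluded = len(initial) - len(included)

# PRISMA reporting tracks how many records each criterion removed.
print(f"screened {len(initial)}, kept {len(included)}, excluded {excluded}")
```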

Data availability

The datasets analyzed during the current study are available in the Dataverse repository: https://doi.org/10.7910/DVN/F9QMHY

Alabdulaziz MS (2021) COVID-19 and the use of digital technology in mathematics education. Educ Inf Technol 26(6):7609–7633. https://doi.org/10.1007/s10639-021-10602-3

Arif TB, Munaf U, Ul-Haque I (2023) The future of medical education and research: is ChatGPT a blessing or blight in disguise? Med Educ Online 28. https://doi.org/10.1080/10872981.2023.2181052

Banerjee M, Chiew D, Patel KT, Johns I, Chappell D, Linton N, Cole GD, Francis DP, Szram J, Ross J, Zaman S (2021) The impact of artificial intelligence on clinical education: perceptions of postgraduate trainee doctors in London (UK) and recommendations for trainers. BMC Med Educ 21. https://doi.org/10.1186/s12909-021-02870-x

Barlovits S, Caldeira A, Fesakis G, Jablonski S, Koutsomanoli Filippaki D, Lázaro C, Ludwig M, Mammana MF, Moura A, Oehler DXK, Recio T, Taranto E, Volika S (2022) Adaptive, synchronous, and mobile online education: developing the ASYMPTOTE learning environment. Mathematics 10:1628. https://doi.org/10.3390/math10101628

Baron NS (2021) Know what? How digital technologies undermine learning and remembering. J Pragmat 175:27–37. https://doi.org/10.1016/j.pragma.2021.01.011

Batista J, Morais NS, Ramos F (2016) Researching the use of communication technologies in higher education institutions in Portugal. https://doi.org/10.4018/978-1-5225-0571-6.ch057

Beardsley M, Albó L, Aragón P, Hernández-Leo D (2021) Emergency education effects on teacher abilities and motivation to use digital technologies. Br J Educ Technol 52. https://doi.org/10.1111/bjet.13101

Bennett S, Maton K (2010) Beyond the “digital natives” debate: towards a more nuanced understanding of students’ technology experiences. J Comput Assist Learn 26:321–331. https://doi.org/10.1111/j.1365-2729.2010.00360.x

Buckingham D, Burn A (2007) Game literacy in theory and practice 16:323–349

Bulfin S, Pangrazio L, Selwyn N (2014) Making “MOOCs”: the construction of a new digital higher education within news media discourse. Int Rev Res Open Distrib Learn 15. https://doi.org/10.19173/irrodl.v15i5.1856

Camilleri MA, Camilleri AC (2016) Digital learning resources and ubiquitous technologies in education. Technol Knowl Learn 22:65–82. https://doi.org/10.1007/s10758-016-9287-7

Chen C (2006) CiteSpace II: detecting and visualizing emerging trends and transient patterns in scientific literature. J Am Soc Inf Sci Technol 57:359–377. https://doi.org/10.1002/asi.20317

Chen J, Dai J, Zhu K, Xu L (2022a) Effects of extended reality on language learning: a meta-analysis. Front Psychol 13:1016519. https://doi.org/10.3389/fpsyg.2022.1016519

Chen J, Wang CL, Tang Y (2022b) Knowledge mapping of volunteer motivation: a bibliometric analysis and cross-cultural comparative study. Front Psychol 13. https://doi.org/10.3389/fpsyg.2022.883150

Cohen A, Soffer T, Henderson M (2022) Students’ use of technology and their perceptions of its usefulness in higher education: international comparison. J Comput Assist Learn 38(5):1321–1331. https://doi.org/10.1111/jcal.12678

Collins A, Halverson R (2010) The second educational revolution: rethinking education in the age of technology. J Comput Assist Learn 26:18–27. https://doi.org/10.1111/j.1365-2729.2009.00339.x

Conole G, Alevizou P (2010) A literature review of the use of Web 2.0 tools in higher education. The Open University, Walton Hall, Milton Keynes, UK, retrieved 17 February

Creely E, Henriksen D, Crawford R, Henderson M (2021) Exploring creative risk-taking and productive failure in classroom practice: a case study of the perceived self-efficacy and agency of teachers at one school. Think Ski Creat 42:100951. https://doi.org/10.1016/j.tsc.2021.100951

Davis N, Eickelmann B, Zaka P (2013) Restructuring of educational systems in the digital age from a co-evolutionary perspective. J Comput Assist Learn 29:438–450. https://doi.org/10.1111/jcal.12032

De Belli N (2009) Bibliometrics and citation analysis: from the Science Citation Index to cybermetrics. Scarecrow Press

Domínguez A, Saenz-de-Navarrete J, de-Marcos L, Fernández-Sanz L, Pagés C, Martínez-Herráiz JJ (2013) Gamifying learning experiences: practical implications and outcomes. Comput Educ 63:380–392. https://doi.org/10.1016/j.compedu.2012.12.020

Donnison S (2009) Discourses in conflict: the relationship between Gen Y pre-service teachers, digital technologies and lifelong learning. Australasian J Educ Technol 25. https://doi.org/10.14742/ajet.1138

Durfee SM, Jain S, Shaffer K (2003) Incorporating electronic media into medical student education. Acad Radiol 10:205–210. https://doi.org/10.1016/s1076-6332(03)80046-6

Dzikowski P (2018) A bibliometric analysis of born global firms. J Bus Res 85:281–294. https://doi.org/10.1016/j.jbusres.2017.12.054

van Eck NJ, Waltman L (2009) Software survey: VOSviewer, a computer program for bibliometric mapping. Scientometrics 84:523–538. https://doi.org/10.1007/s11192-009-0146-3

Edwards S (2013) Digital play in the early years: a contextual response to the problem of integrating technologies and play-based pedagogies in the early childhood curriculum. Eur Early Child Educ Res J 21:199–212. https://doi.org/10.1080/1350293x.2013.789190

Edwards S (2015) New concepts of play and the problem of technology, digital media and popular-culture integration with play-based learning in early childhood education. Technol Pedagogy Educ 25:513–532. https://doi.org/10.1080/1475939x.2015.1108929

Eisenberg MB (2008) Information literacy: essential skills for the information age. DESIDOC J Libr Inf Technol 28:39–47. https://doi.org/10.14429/djlit.28.2.166

Forde C, OBrien A (2022) A literature review of barriers and opportunities presented by digitally enhanced practical skill teaching and learning in health science education. Med Educ Online 27. https://doi.org/10.1080/10872981.2022.2068210

García-Morales VJ, Garrido-Moreno A, Martín-Rojas R (2021) The transformation of higher education after the COVID disruption: emerging challenges in an online learning scenario. Front Psychol 12. https://doi.org/10.3389/fpsyg.2021.616059

Garfield E (2006) The history and meaning of the journal impact factor. JAMA 295:90. https://doi.org/10.1001/jama.295.1.90

Garzón-Artacho E, Sola-Martínez T, Romero-Rodríguez JM, Gómez-García G (2021) Teachers’ perceptions of digital competence at the lifelong learning stage. Heliyon 7:e07513. https://doi.org/10.1016/j.heliyon.2021.e07513

Gaviria-Marin M, Merigó JM, Baier-Fuentes H (2019) Knowledge management: a global examination based on bibliometric analysis. Technol Forecast Soc Change 140:194–220. https://doi.org/10.1016/j.techfore.2018.07.006

Gilster P, Glister P (1997) Digital literacy. Wiley Computer Pub, New York

Greenhow C, Lewin C (2015) Social media and education: reconceptualizing the boundaries of formal and informal learning. Learn Media Technol 41:6–30. https://doi.org/10.1080/17439884.2015.1064954

Hawkins DT (2001) Bibliometrics of electronic journals in information science. Inf Res 7(1). http://informationr.net/ir/7-1/paper120.html

Henderson M, Selwyn N, Finger G, Aston R (2015) Students’ everyday engagement with digital technology in university: exploring patterns of use and “usefulness”. J High Educ Policy Manag 37:308–319. https://doi.org/10.1080/1360080x.2015.1034424

Huang CK, Neylon C, Hosking R, Montgomery L, Wilson KS, Ozaygen A, Brookes-Kenworthy C (2020) Evaluating the impact of open access policies on research institutions. eLife 9. https://doi.org/10.7554/elife.57067

Hwang GJ, Tsai CC (2011) Research trends in mobile and ubiquitous learning: a review of publications in selected journals from 2001 to 2010. Br J Educ Technol 42:E65–E70. https://doi.org/10.1111/j.1467-8535.2011.01183.x

Hwang GJ, Wu PH, Zhuang YY, Huang YM (2013) Effects of the inquiry-based mobile learning model on the cognitive load and learning achievement of students. Interact Learn Environ 21:338–354. https://doi.org/10.1080/10494820.2011.575789

Jiang S, Ning CF (2022) Interactive communication in the process of physical education: are social media contributing to the improvement of physical training performance? Univers Access Inf Soc 1–10. https://doi.org/10.1007/s10209-022-00911-w

Jing Y, Zhao L, Zhu KK, Wang H, Wang CL, Xia Q (2023a) Research landscape of adaptive learning in education: a bibliometric study on research publications from 2000 to 2022. Sustainability 15:3115. https://doi.org/10.3390/su15043115

Jing Y, Wang CL, Chen Y, Wang H, Yu T, Shadiev R (2023b) Bibliometric mapping techniques in educational technology research: a systematic literature review. Educ Inf Technol 1–29. https://doi.org/10.1007/s10639-023-12178-6

Krishnamurthy S (2020) The future of business education: a commentary in the shadow of the Covid-19 pandemic. J Bus Res. https://doi.org/10.1016/j.jbusres.2020.05.034

Kumar S, Lim WM, Pandey N, Christopher Westland J (2021) 20 years of electronic commerce research. Electron Commer Res 21:1–40

Kyza EA, Georgiou Y (2018) Scaffolding augmented reality inquiry learning: the design and investigation of the TraceReaders location-based, augmented reality platform. Interact Learn Environ 27:211–225. https://doi.org/10.1080/10494820.2018.1458039

Laurillard D (2008) Technology enhanced learning as a tool for pedagogical innovation. J Philos Educ 42:521–533. https://doi.org/10.1111/j.1467-9752.2008.00658.x

Li M, Yu Z (2023) A systematic review on the metaverse-based blended English learning. Front Psychol 13. https://doi.org/10.3389/fpsyg.2022.1087508

Luo H, Li G, Feng Q, Yang Y, Zuo M (2021) Virtual reality in K-12 and higher education: a systematic review of the literature from 2000 to 2019. J Comput Assist Learn. https://doi.org/10.1111/jcal.12538

Margaryan A, Littlejohn A, Vojt G (2011) Are digital natives a myth or reality? University students’ use of digital technologies. Comput Educ 56:429–440. https://doi.org/10.1016/j.compedu.2010.09.004

McMillan S (1996) Literacy and computer literacy: definitions and comparisons. Comput Educ 27:161–170. https://doi.org/10.1016/s0360-1315(96)00026-7

Mo CY, Wang CL, Dai J, Jin P (2022) Video playback speed influence on learning effect from the perspective of personalized adaptive learning: a study based on cognitive load theory. Front Psychology 13. https://doi.org/10.3389/fpsyg.2022.839982

Moorhouse BL (2021) Beginning teaching during COVID-19: newly qualified Hong Kong teachers’ preparedness for online teaching. Educ Stud 1–17. https://doi.org/10.1080/03055698.2021.1964939

Moorhouse BL, Wong KM (2021) The COVID-19 Pandemic as a catalyst for teacher pedagogical and technological innovation and development: teachers’ perspectives. Asia Pac J Educ 1–16. https://doi.org/10.1080/02188791.2021.1988511

Moskal P, Dziuban C, Hartman J (2013) Blended learning: a dangerous idea? Internet High Educ 18:15–23

Mughal MY, Andleeb N, Khurram AFA, Ali MY, Aslam MS, Saleem MN (2022) Perceptions of teaching-learning force about Metaverse for education: a qualitative study. J. Positive School Psychol 6:1738–1745

Mustapha I, Thuy Van N, Shahverdi M, Qureshi MI, Khan N (2021) Effectiveness of digital technology in education during COVID-19 pandemic: a bibliometric analysis. Int J Interact Mob Technol 15:136

Nagle J (2018) Twitter, cyber-violence, and the need for a critical social media literacy in teacher education: a review of the literature. Teach Teach Education 76:86–94

Nazare J, Woolf A, Sysoev I, Ballinger S, Saveski M, Walker M, Roy D (2022) Technology-assisted coaching can increase engagement with learning technology at home and caregivers’ awareness of it. Comput Educ 188:104565

Nguyen UP, Hallinger P (2020) Assessing the distinctive contributions of simulation & gaming to the literature, 1970–2019: a bibliometric review. Simul Gaming. https://doi.org/10.1177/1046878120941569

Nygren H, Nissinen K, Hämäläinen R, Wever B (2019) Lifelong learning: formal, non-formal and informal learning in the context of the use of problem-solving skills in technology-rich environments. Br J Educ Technol 50:1759–1770. https://doi.org/10.1111/bjet.12807

Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Moher D (2021) The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int J Surg 88:105906

Pan SL, Zhang S (2020) From fighting COVID-19 pandemic to tackling sustainable development goals: an opportunity for responsible information systems research. Int J Inf Manage 55:102196. https://doi.org/10.1016/j.ijinfomgt.2020.102196

Pan X, Yan E, Cui M, Hua W (2018) Examining the usage, citation, and diffusion patterns of bibliometric mapping software: a comparative study of three tools. J Informetr 12:481–493. https://doi.org/10.1016/j.joi.2018.03.005

Parris Z, Cale L, Harris J, Casey A (2022) Physical activity for health, Covid-19 and social media: what, where and why? Movimento 28. https://doi.org/10.22456/1982-8918.122533

Pasquini LA, Evangelopoulos N (2016) Sociotechnical stewardship in higher education: a field study of social media policy documents. J Comput High Educ 29:218–239

Pérez-Sanagustín M, Hernández-Leo D, Santos P, Delgado Kloos C, Blat J (2014) Augmenting reality and formality of informal and non-formal settings to enhance blended learning. IEEE Trans Learn Technol 7:118–131. https://doi.org/10.1109/TLT.2014.2312719

Pinto M, Leite C (2020) Digital technologies in support of students’ learning in higher education: literature review. Digital Education Review 343–360. https://doi.org/10.1344/der.2020.37.343-360

Pires F, Masanet MJ, Tomasena JM, Scolari CA (2022) Learning with YouTube: beyond formal and informal through new actors, strategies and affordances. Convergence 28(3):838–853. https://doi.org/10.1177/1354856521102054

Pritchard A (1969) Statistical bibliography or bibliometrics? J Doc 25:348

Romero M, Romeu T, Guitert M, Baztán P (2021) Digital transformation in higher education: the UOC case. In ICERI2021 Proceedings (pp. 6695–6703). IATED https://doi.org/10.21125/iceri.2021.1512

Romero-Hall E, Jaramillo Cherrez N (2022) Teaching in times of disruption: faculty digital literacy in higher education during the COVID-19 pandemic. Innovations in Education and Teaching International 1–11. https://doi.org/10.1080/14703297.2022.2030782

Rospigliosi PA (2023) Artificial intelligence in teaching and learning: what questions should we ask of ChatGPT? Interact Learn Environ 31:1–3. https://doi.org/10.1080/10494820.2023.2180191

Salas-Pilco SZ, Yang Y, Zhang Z (2022) Student engagement in online learning in Latin American higher education during the COVID-19 pandemic: a systematic review. Br J Educ Technol 53(3):593–619. https://doi.org/10.1111/bjet.13190

Selwyn N (2009) The digital native: myth and reality. Aslib Proc 61(4):364–379. https://doi.org/10.1108/00012530910973776

Selwyn N (2012) Making sense of young people, education and digital technology: the role of sociological theory. Oxf Rev Educ 38:81–96. https://doi.org/10.1080/03054985.2011.577949

Selwyn N, Facer K (2014) The sociology of education and digital technology: past, present and future. Oxf Rev Educ 40:482–496. https://doi.org/10.1080/03054985.2014.933005

Selwyn N, Banaji S, Hadjithoma-Garstka C, Clark W (2011) Providing a platform for parents? Exploring the nature of parental engagement with school learning platforms. J Comput Assist Learn 27:314–323. https://doi.org/10.1111/j.1365-2729.2011.00428.x

Selwyn N, Aagaard J (2020) Banning mobile phones from classrooms-an opportunity to advance understandings of technology addiction, distraction and cyberbullying. Br J Educ Technol 52. https://doi.org/10.1111/bjet.12943

Selwyn N, O’Neill C, Smith G, Andrejevic M, Gu X (2021) A necessary evil? The rise of online exam proctoring in Australian universities. Media Int Austr 1329878X2110058. https://doi.org/10.1177/1329878x211005862

Selwyn N, Pangrazio L, Nemorin S, Perrotta C (2019) What might the school of 2030 be like? An exercise in social science fiction. Learn, Media Technol 1–17. https://doi.org/10.1080/17439884.2020.1694944

Selwyn N (2016) What works and why? Understanding successful technology enabled learning within institutional contexts: final report appendices (Part B). Monash University & Griffith University

Sjöberg D, Holmgren R (2021) Informal workplace learning in Swedish police education: a teacher perspective. Vocations and Learning. https://doi.org/10.1007/s12186-021-09267-3

Strotmann A, Zhao D (2012) Author name disambiguation: what difference does it make in author-based citation analysis? J Am Soc Inf Sci Technol 63:1820–1833

Sutherland R, Facer K, Furlong R, Furlong J (2000) A new environment for education? The computer in the home. Comput Educ 34:195–212. https://doi.org/10.1016/s0360-1315(99)00045-7

Szeto E, Cheng AY-N, Hong J-C (2015) Learning with social media: how do preservice teachers integrate YouTube and social media in teaching? Asia-Pac Educ Res 25:35–44. https://doi.org/10.1007/s40299-015-0230-9

Tang E, Lam C (2014) Building an effective online learning community (OLC) in blog-based teaching portfolios. Internet High Educ 20:79–85. https://doi.org/10.1016/j.iheduc.2012.12.002

Taskin Z, Al U (2019) Natural language processing applications in library and information science. Online Inf Rev 43:676–690. https://doi.org/10.1108/oir-07-2018-0217

Tegtmeyer K, Ibsen L, Goldstein B (2001) Computer-assisted learning in critical care: from ENIAC to HAL. Crit Care Med 29:N177–N182. https://doi.org/10.1097/00003246-200108001-00006

Timotheou S, Miliou O, Dimitriadis Y, Sobrino SV, Giannoutsou N, Cachia R, Moné AM, Ioannou A (2023) Impacts of digital technologies on education and factors influencing schools’ digital capacity and transformation: a literature review. Educ Inf Technol 28(6):6695–6726. https://doi.org/10.1007/s10639-022-11431-8

Trujillo Maza EM, Gómez Lozano MT, Cardozo Alarcón AC, Moreno Zuluaga L, Gamba Fadul M (2016) Blended learning supported by digital technology and competency-based medical education: a case study of the social medicine course at the Universidad de los Andes, Colombia. Int J Educ Technol High Educ 13. https://doi.org/10.1186/s41239-016-0027-9

Turin O, Friesem Y (2020) Is that media literacy? Israeli and US media scholars’ perceptions of the field. J Media Lit Educ 12:132–144

Van Eck NJ, Waltman L (2019) VOSviewer manual. Universiteit Leiden

Vratulis V, Clarke T, Hoban G, Erickson G (2011) Additive and disruptive pedagogies: the use of slowmation as an example of digital technology implementation. Teach Teach Educ 27:1179–1188. https://doi.org/10.1016/j.tate.2011.06.004

Wang CL, Dai J, Xu LJ (2022) Big data and data mining in education: a bibliometrics study from 2010 to 2022. In 2022 7th International Conference on Cloud Computing and Big Data Analytics ( ICCCBDA ) (pp. 507-512). IEEE. https://doi.org/10.1109/icccbda55098.2022.9778874

Wang CL, Dai J, Zhu KK, Yu T, Gu XQ (2023) Understanding the continuance intention of college students toward new E-learning spaces based on an integrated model of the TAM and TTF. Int J Hum-Comput Int 1–14. https://doi.org/10.1080/10447318.2023.2291609

Wong L-H, Boticki I, Sun J, Looi C-K (2011) Improving the scaffolds of a mobile-assisted Chinese character forming game via a design-based research cycle. Comput Hum Behav 27:1783–1793. https://doi.org/10.1016/j.chb.2011.03.005

Wu R, Yu Z (2023) Do AI chatbots improve students learning outcomes? Evidence from a meta-analysis. Br J Educ Technol. https://doi.org/10.1111/bjet.13334

Yang D, Zhou J, Shi D, Pan Q, Wang D, Chen X, Liu J (2022) Research status, hotspots, and evolutionary trends of global digital education via knowledge graph analysis. Sustainability 14:15157–15157. https://doi.org/10.3390/su142215157

Yu T, Dai J, Wang CL (2023) Adoption of blended learning: Chinese university students’ perspectives. Humanit Soc Sci Commun 10:390

Yu Z (2022) Sustaining student roles, digital literacy, learning achievements, and motivation in online learning environments during the COVID-19 pandemic. Sustainability 14:4388. https://doi.org/10.3390/su14084388

Za S, Spagnoletti P, North-Samardzic A (2014) Organisational learning as an emerging process: the generative role of digital tools in informal learning practices. Br J Educ Technol 45:1023–1035. https://doi.org/10.1111/bjet.12211

Zhang X, Chen Y, Hu L, Wang Y (2022) The metaverse in education: definition, framework, features, potential applications, challenges, and future research topics. Front Psychol 13:1016300. https://doi.org/10.3389/fpsyg.2022.1016300

Zhou M, Dzingirai C, Hove K, Chitata T, Mugandani R (2022) Adoption, use and enhancement of virtual learning during COVID-19. Education and Information Technologies. https://doi.org/10.1007/s10639-022-10985-x

Acknowledgements

This research was supported by the Zhejiang Provincial Social Science Planning Project, “Mechanisms and Pathways for Empowering Classroom Teaching through Learning Spaces under the Strategy of High-Quality Education Development”, the 2022 National Social Science Foundation Education Youth Project “Research on the Strategy of Creating Learning Space Value and Empowering Classroom Teaching under the background of ‘Double Reduction’” (Grant No. CCA220319) and the National College Student Innovation and Entrepreneurship Training Program of China (Grant No. 202310337023).

Author information

Authors and affiliations

College of Educational Science and Technology, Zhejiang University of Technology, Zhejiang, China

Chengliang Wang, Xiaojiao Chen, Yidan Liu & Yuhui Jing

Graduate School of Business, Universiti Sains Malaysia, Minden, Malaysia

Teng Yu

Department of Management, The Chinese University of Hong Kong, Hong Kong, China

Teng Yu

College of Humanities and Social Sciences, Beihang University, Beijing, China

Yidan Liu

Contributions

Conceptualization: Y.J., C.W.; methodology, C.W.; software, C.W., Y.L.; writing-original draft preparation, C.W., Y.L.; writing-review and editing, T.Y., Y.L., C.W.; supervision, X.C., T.Y.; project administration, Y.J.; funding acquisition, X.C., Y.L. All authors read and approved the final manuscript. All authors have read and approved the re-submission of the manuscript.

Corresponding author

Correspondence to Yuhui Jing .

Ethics declarations

Ethical approval

Ethical approval was not required as the study did not involve human participants.

Informed consent

Informed consent was not required as the study did not involve human participants.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article

Wang, C., Chen, X., Yu, T. et al. Education reform and change driven by digital technology: a bibliometric study from a global perspective. Humanit Soc Sci Commun 11 , 256 (2024). https://doi.org/10.1057/s41599-024-02717-y

Received : 11 July 2023

Accepted : 17 January 2024

Published : 12 February 2024

DOI : https://doi.org/10.1057/s41599-024-02717-y




What 126 studies say about education technology

J-PAL North America's recently released publication summarizes 126 rigorous evaluations of different uses of education technology and their impact on student learning.

In recent years, there has been widespread excitement around the transformative potential of technology in education. In the United States alone, spending on education technology has now exceeded $13 billion . Programs and policies to promote the use of education technology may expand access to quality education, support students’ learning in innovative ways, and help families navigate complex school systems.

However, the rapid development of education technology in the United States is occurring in a context of deep and persistent inequality . Depending on how programs are designed, how they are used, and who can access them, education technologies could alleviate or aggravate existing disparities. To harness education technology’s full potential, education decision-makers, product developers, and funders need to understand the ways in which technology can help — or in some cases hurt — student learning.

To address this need, J-PAL North America recently released a new publication summarizing 126 rigorous evaluations of different uses of education technology. Drawing primarily from research in developed countries, the publication looks at randomized evaluations and regression discontinuity designs across four broad categories: (1) access to technology, (2) computer-assisted learning or educational software, (3) technology-enabled nudges in education, and (4) online learning.

This growing body of evidence suggests some areas of promise and points to four key lessons on education technology.

First, supplying computers and internet access alone generally does not improve students’ academic outcomes from kindergarten to 12th grade, but it does increase computer usage and improve computer proficiency. Disparities in access to information and communication technologies can exacerbate existing educational inequalities. Students without access at school or at home may struggle to complete web-based assignments and may have a hard time developing digital literacy skills.

Broadly, programs to expand access to technology have been effective at increasing use of computers and improving computer skills. However, computer distribution and internet subsidy programs generally did not improve grades and test scores, and in some cases led to adverse impacts on academic achievement. The limited rigorous evidence suggests that distributing computers may have a more direct impact on learning outcomes at the postsecondary level.

Second, educational software (often called “computer-assisted learning”) programs designed to help students develop particular skills have shown enormous promise in improving learning outcomes, particularly in math. Targeting instruction to meet students’ learning levels has been found to be effective in improving student learning, but large class sizes with a wide range of learning levels can make it hard for teachers to personalize instruction. Software has the potential to overcome traditional classroom constraints by customizing activities for each student. Educational software programs range from light-touch homework support tools to more intensive interventions that re-orient the classroom around the use of software.

Most of the educational software programs that have been rigorously evaluated help students practice particular skills through personalized tutoring approaches. Computer-assisted learning programs have shown particular promise in improving math achievement. Of the 30 studies of computer-assisted learning programs, 20 reported statistically significant positive effects, 15 of which focused on improving math outcomes.

Third, technology-based nudges — such as text message reminders — can have meaningful, if modest, impacts on a variety of education-related outcomes, often at extremely low costs. Low-cost interventions like text message reminders can successfully support students and families at each stage of schooling. Text messages with reminders, tips, goal-setting tools, and encouragement can increase parental engagement in learning activities, such as reading with their elementary-aged children.

Middle and high schools, meanwhile, can help parents support their children by providing families with information about how well their children are doing in school. Colleges can increase application and enrollment rates by leveraging technology to suggest specific action items, streamline financial aid procedures, and/or provide personalized support to high school students.

Fourth, online courses have a growing presence in education, but the limited experimental evidence suggests that online-only courses lower student academic achievement compared to in-person courses. In four of the six studies that directly compared the impact of taking a course online versus in person, student performance was lower in the online courses. However, students in courses that blended in-person and online components performed similarly to students in traditional face-to-face classes.

The new publication is meant to be a resource for decision-makers interested in learning which uses of education technology go beyond the hype to truly help students learn. At the same time, the publication outlines key open questions about the impacts of education technology, including questions relating to the long-term impacts of education technology and the impacts of education technology on different types of learners.

To help answer these questions, J-PAL North America’s Education, Technology, and Opportunity Initiative is working to build the evidence base on promising uses of education technology by partnering directly with education leaders.

Education leaders are invited to submit letters of interest to partner with J-PAL North America through its Innovation Competition. Anyone interested in learning more about how to apply is encouraged to contact initiative manager Vincent Quan.

  • Review article
  • Open access
  • Published: 02 October 2017

Computer-based technology and student engagement: a critical review of the literature

  • Laura A. Schindler   ORCID: orcid.org/0000-0001-8730-5189 1 ,
  • Gary J. Burkholder 2 , 3 ,
  • Osama A. Morad 1 &
  • Craig Marsh 4  

International Journal of Educational Technology in Higher Education volume  14 , Article number:  25 ( 2017 ) Cite this article

392k Accesses

145 Citations

39 Altmetric


Computer-based technology has infiltrated many aspects of life and industry, yet there is little understanding of how it can be used to promote student engagement, a concept receiving strong attention in higher education due to its association with a number of positive academic outcomes. The purpose of this article is to present a critical review of the literature from the past 5 years related to how web-conferencing software, blogs, wikis, social networking sites ( Facebook and Twitter ), and digital games influence student engagement. We prefaced the findings with a substantive overview of student engagement definitions and indicators, which revealed three types of engagement (behavioral, emotional, and cognitive) that informed how we classified articles. Our findings suggest that digital games provide the most far-reaching influence across different types of student engagement, followed by web-conferencing and Facebook . Findings regarding wikis, blogs, and Twitter are less conclusive and limited by the small number of studies conducted within the past 5 years. Overall, the findings provide preliminary support that computer-based technology influences student engagement; however, additional research is needed to confirm and build on these findings. We conclude the article by providing a list of recommendations for practice, with the intent of increasing understanding of how computer-based technology may be purposefully implemented to achieve the greatest gains in student engagement.

Introduction

The digital revolution has profoundly affected daily living, evident in the ubiquity of mobile devices and the seamless integration of technology into common tasks such as shopping, reading, and finding directions (Anderson, 2016 ; Smith & Anderson, 2016 ; Zickuhr & Raine, 2014 ). The use of computers, mobile devices, and the Internet is at its highest level to date and is expected to continue to increase as technology becomes more accessible, particularly for users in developing countries (Poushter, 2016 ). In addition, a growing number of people are smartphone dependent, relying solely on smartphones for Internet access (Anderson & Horrigan, 2016 ) rather than more expensive devices such as laptops and tablets. Greater access to and demand for technology has presented unique opportunities and challenges for many industries, some of which have thrived by effectively digitizing their operations and services (e.g., finance, media) and others that have struggled to keep up with the pace of technological innovation (e.g., education, healthcare) (Gandhi, Khanna, & Ramaswamy, 2016 ).

Integrating technology into teaching and learning is not a new challenge for universities. Since the 1900s, administrators and faculty have grappled with how to effectively use technical innovations such as video and audio recordings, email, and teleconferencing to augment or replace traditional instructional delivery methods (Kaware & Sain, 2015 ; Westera, 2015 ). Within the past two decades, however, this challenge has been much more difficult due to the sheer volume of new technologies on the market. For example, in the span of 7 years (from 2008 to 2015), the number of active apps in Apple’s App Store increased from 5000 to 1.75 million. Over the next 4 years, the number of apps is projected to rise by 73%, totaling over 5 million (Nelson, 2016 ). Further compounding this challenge is the limited shelf life of new devices and software combined with significant internal organizational barriers that hinder universities from efficiently and effectively integrating new technologies (Amirault, 2012 ; Kinchin, 2012 ; Linder-VanBerschot & Summers 2015 ; Westera, 2015 ).

Many organizational barriers to technology integration arise from competing tensions between institutional policy and practice and faculty beliefs and abilities. For example, university administrators may view technology as a tool to attract and retain students, whereas faculty may struggle to determine how technology coincides with existing pedagogy (Lawrence & Lentle-Keenan, 2013 ; Lin, Singer, & Ha, 2010 ). In addition, some faculty may be hesitant to use technology due to lack of technical knowledge and/or skepticism about the efficacy of technology to improve student learning outcomes (Ashrafzadeh & Sayadian, 2015 ; Buchanan, Sainter, & Saunders, 2013 ; Hauptman, 2015 ; Johnson, 2013 ; Kidd, Davis, & Larke, 2016 ; Kopcha, Rieber, & Walker, 2016 ; Lawrence & Lentle-Keenan, 2013 ; Lewis, Fretwell, Ryan, & Parham, 2013 ; Reid, 2014 ). Organizational barriers to technology adoption are particularly problematic given students’ growing demand for, and perceived benefits of, using technology to learn (Amirault, 2012 ; Cassidy et al., 2014 ; Gikas & Grant, 2013 ; Paul & Cochran, 2013 ). Surveys suggest that two-thirds of students use mobile devices for learning and believe that technology can help them achieve learning outcomes and better prepare them for a workforce that is increasingly dependent on technology (Chen, Seilhamer, Bennett, & Bauer, 2015 ; Dahlstrom, 2012 ). Universities that fail to effectively integrate technology into the learning experience miss opportunities to improve student outcomes and meet the expectations of a student body that has grown accustomed to the integration of technology into every facet of life (Amirault, 2012 ; Cook & Sonnenberg, 2014 ; Revere & Kovach, 2011 ; Sun & Chen, 2016 ; Westera, 2015 ).

The purpose of this paper is to provide a literature review on how computer-based technology influences student engagement within higher education settings. We focused on computer-based technology given the specific types of technologies (i.e., web-conferencing software, blogs, wikis, social networking sites, and digital games) that emerged from a broad search of the literature, which is described in more detail below. Computer-based technology (hereafter referred to as technology) requires the use of specific hardware, software, and microprocessing features available on a computer or mobile device. We also focused on student engagement as the dependent variable of interest because it encompasses many different aspects of the teaching and learning process (Bryson & Hand, 2007 ; Fredricks, Blumenfeld, & Paris, 2004 ; Wimpenny & Savin-Baden, 2013 ), compared to narrower variables in the literature such as final grades or exam scores. Furthermore, student engagement has received significant attention over the past several decades due to shifts towards student-centered, constructivist instructional methods (Haggis, 2009 ; Wright, 2011 ), mounting pressures to improve teaching and learning outcomes (Axelson & Flick, 2011 ; Kuh, 2009 ), and promising studies suggesting relationships between student engagement and positive academic outcomes (Carini, Kuh, & Klein, 2006 ; Center for Postsecondary Research, 2016 ; Hu & McCormick, 2012 ). Despite the interest in student engagement and the demand for more technology in higher education, there are no articles offering a comprehensive review of how these two variables intersect. Similarly, while many existing student engagement conceptual models have expanded to include factors that influence student engagement, none highlight the overt role of technology in the engagement process (Kahu, 2013 ; Lam, Wong, Yang, & Yi, 2012 ; Nora, Barlow, & Crisp, 2005 ; Wimpenny & Savin-Baden, 2013 ; Zepke & Leach, 2010 ).

Our review aims to address existing gaps in the student engagement literature and seeks to determine whether student engagement models should be expanded to include technology. The review also addresses some of the organizational barriers to technology integration (e.g., faculty uncertainty and skepticism about technology) by providing a comprehensive account of the research evidence regarding how technology influences student engagement. One limitation of the literature, however, is the lack of detail regarding how teaching and learning practices were used to select and integrate technology into learning. For example, the methodology section of many studies does not include a pedagogical justification for why a particular technology was used or details about the design of the learning activity itself. Therefore, it often is unclear how teaching and learning practices may have affected student engagement levels. We revisit this issue in more detail at the end of this paper in our discussions of areas for future research and recommendations for practice. We initiated our literature review by conducting a broad search for articles published within the past 5 years, using the key words technology and higher education , in Google Scholar and the following research databases: Academic Search Complete, Communication & Mass Media Complete, Computers & Applied Sciences Complete, Education Research Complete, ERIC, PsycARTICLES, and PsycINFO . Our initial search revealed themes regarding which technologies were most prevalent in the literature (e.g., social networking, digital games), which then led to several more targeted searches of the same databases using specific keywords such as Facebook and student engagement. After both broad and targeted searches, we identified five technologies (web-conferencing software, blogs, wikis, social networking sites, and digital games) to include in our review.

We chose to focus on technologies for which there were multiple studies published, allowing us to identify areas of convergence and divergence in the literature and draw conclusions about positive and negative effects on student engagement. In total, we identified 69 articles relevant to our review, with 36 pertaining to social networking sites (21 for Facebook and 15 for Twitter ), 14 pertaining to digital games, seven pertaining to wikis, and six each pertaining to blogs and web-conferencing software. Articles were categorized according to their influence on specific types of student engagement, which will be described in more detail below. In some instances, one article pertained to multiple types of engagement. In the sections that follow, we will provide an overview of student engagement, including an explanation of common definitions and indicators of engagement, followed by a synthesis of how each type of technology influences student engagement. Finally, we will discuss areas for future research and make recommendations for practice.

  • Student engagement

Interest in student engagement began over 70 years ago with Ralph Tyler’s research on the relationship between time spent on coursework and learning (Axelson & Flick, 2011 ; Kuh, 2009 ). Since then, the study of student engagement has evolved and expanded considerably, through the seminal works of Pace ( 1980 ; 1984 ) and Astin ( 1984 ) about how quantity and quality of student effort affect learning and many more recent studies on the environmental conditions and individual dispositions that contribute to student engagement (Bakker, Vergel, & Kuntze, 2015 ; Gilboy, Heinerichs, & Pazzaglia, 2015 ; Martin, Goldwasser, & Galentino, 2017 ; Pellas, 2014 ). Perhaps the most well-known resource on student engagement is the National Survey of Student Engagement (NSSE), an instrument designed to assess student participation in various educational activities (Kuh, 2009 ). The NSSE and other engagement instruments like it have been used in many studies that link student engagement to positive student outcomes such as higher grades, retention, persistence, and completion (Leach, 2016 ; McClenney, Marti, & Adkins, 2012 ; Trowler & Trowler, 2010 ), further convincing universities that student engagement is an important factor in the teaching and learning process. However, despite the increased interest in student engagement, its meaning is generally not well understood or agreed upon.

Student engagement is a broad and complex phenomenon for which there are many definitions grounded in psychological, social, and/or cultural perspectives (Fredricks et al., 2004 ; Wimpenny & Savin-Baden, 2013 ; Zepke & Leach, 2010 ). A review of definitions revealed that student engagement is defined in two ways. One set of definitions refers to student engagement as a desired outcome reflective of a student’s thoughts, feelings, and behaviors about learning. For example, Kahu ( 2013 ) defines student engagement as an “individual psychological state” that includes a student’s affect, cognition, and behavior (p. 764). Other definitions focus primarily on student behavior, suggesting that engagement is the “extent to which students are engaging in activities that higher education research has shown to be linked with high-quality learning outcomes” (Krause & Coates, 2008 , p. 493) or the “quality of effort and involvement in productive learning activities” (Kuh, 2009 , p. 6). Another set of definitions refers to student engagement as a process involving both the student and the university. For example, Trowler ( 2010 ) defined student engagement as “the interaction between the time, effort and other relevant resources invested by both students and their institutions intended to optimize the student experience and enhance the learning outcomes and development of students and the performance, and reputation of the institution” (p. 2). Similarly, the NSSE website indicates that student engagement is “the amount of time and effort students put into their studies and other educationally purposeful activities” as well as “how the institution deploys its resources and organizes the curriculum and other learning opportunities to get students to participate in activities that decades of research studies show are linked to student learning” (Center for Postsecondary Research, 2017 , para. 1).

Many existing models of student engagement reflect the latter set of definitions, depicting engagement as a complex, psychosocial process involving both student and university characteristics. Such models organize the engagement process into three areas: factors that influence student engagement (e.g., institutional culture, curriculum, and teaching practices), indicators of student engagement (e.g., interest in learning, interaction with instructors and peers, and meaningful processing of information), and outcomes of student engagement (e.g., academic achievement, retention, and personal growth) (Kahu, 2013 ; Lam et al., 2012 ; Nora et al., 2005 ). In this review, we examine the literature to determine whether technology influences student engagement. In addition, we will use Fredricks et al.’s ( 2004 ) typology of student engagement, which suggests that there are three types of engagement (behavioral, emotional, and cognitive), to organize and present research findings. The typology is useful because it is broad in scope, encompassing different types of engagement that capture a range of student experiences, rather than narrower typologies that offer specific or prescriptive conceptualizations of student engagement. In addition, this typology is student-centered, focusing exclusively on student-focused indicators rather than combining student indicators with confounding variables, such as faculty behavior, curriculum design, and campus environment (Coates, 2008 ; Kuh, 2009 ). While such variables are important in the discussion of student engagement, perhaps as factors that may influence engagement, they are not true indicators of student engagement. Using the typology as a guide, we examined recent student engagement research, models, and measures to gain a better understanding of how behavioral, emotional, and cognitive student engagement are conceptualized and to identify specific indicators that correspond with each type of engagement, as shown in Fig. 1 .

Fig. 1 Conceptual framework of types and indicators of student engagement

Behavioral engagement is the degree to which students are actively involved in learning activities (Fredricks et al., 2004 ; Kahu, 2013 ; Zepke, 2014 ). Indicators of behavioral engagement include time and effort spent participating in learning activities (Coates, 2008 ; Fredricks et al., 2004 ; Kahu, 2013 ; Kuh, 2009 ; Lam et al., 2012 ; Lester, 2013 ; Trowler, 2010 ) and interaction with peers, faculty, and staff (Coates, 2008 ; Kahu, 2013 ; Kuh, 2009 ; Bryson & Hand, 2007 ; Wimpenny & Savin-Baden, 2013 ; Zepke & Leach, 2010 ). Indicators of behavioral engagement reflect observable student actions and most closely align with Pace’s ( 1980 ) and Astin’s ( 1984 ) original conceptualizations of student engagement as quantity and quality of effort towards learning. Emotional engagement is students’ affective reactions to learning (Fredricks et al., 2004 ; Lester, 2013 ; Trowler, 2010 ). Indicators of emotional engagement include attitudes, interests, and values towards learning (Fredricks et al., 2004 ; Kahu, 2013 ; Lester, 2013 ; Trowler, 2010 ; Wimpenny & Savin-Baden, 2013 ; Witkowski & Cornell, 2015 ) and a perceived sense of belonging within a learning community (Fredricks et al., 2004 ; Kahu, 2013 ; Lester, 2013 ; Trowler, 2010 ; Wimpenny & Savin-Baden, 2013 ). Emotional engagement often is assessed using self-report measures (Fredricks et al., 2004 ) and provides insight into how students feel about a particular topic, delivery method, or instructor. Finally, cognitive engagement is the degree to which students invest in learning and expend mental effort to comprehend and master content (Fredricks et al., 2004 ; Lester, 2013 ).
Indicators of cognitive engagement include: motivation to learn (Lester, 2013 ; Richardson & Newby, 2006 ; Zepke & Leach, 2010 ); persistence to overcome academic challenges and meet/exceed requirements (Fredricks et al., 2004 ; Kuh, 2009 ; Trowler, 2010 ); and deep processing of information (Fredricks et al., 2004 ; Kahu, 2013 ; Lam et al., 2012 ; Richardson & Newby, 2006 ) through critical thinking (Coates, 2008 ; Witkowski & Cornell, 2015 ), self-regulation (e.g., set goals, plan, organize study effort, and monitor learning; Fredricks et al., 2004 ; Lester, 2013 ), and the active construction of knowledge (Coates, 2008 ; Kuh, 2009 ). While cognitive engagement includes motivational aspects, much of the literature focuses on how students use active learning and higher-order thinking, in some form, to achieve content mastery. For example, there is significant emphasis on the importance of deep learning, which involves analyzing new learning in relation to previous knowledge, compared to surface learning, which is limited to memorization, recall, and rehearsal (Fredricks et al., 2004 ; Kahu, 2013 ; Lam et al., 2012 ).

While each type of engagement has distinct features, there is some overlap across cognitive, behavioral, and emotional domains. In instances where an indicator could correspond with more than one type of engagement, we chose to match the indicator to the type of engagement that most closely aligned, based on our review of the engagement literature and our interpretation of the indicators. Similarly, there is also some overlap among indicators. As a result, we combined and subsumed similar indicators found in the literature, where appropriate, to avoid redundancy. Achieving an in-depth understanding of student engagement and associated indicators was an important precursor to our review of the technology literature. Very few articles used the term student engagement as a dependent variable given that the concept is so broad and multidimensional. We found that specific indicators (e.g., interaction, sense of belonging, and knowledge construction) of student engagement were more common in the literature as dependent variables. Next, we will provide a synthesis of the findings regarding how different types of technology influence behavioral, emotional, and cognitive student engagement and associated indicators.

Influence of technology on student engagement

Our literature search identified five technologies (i.e., web-conferencing, blogs, wikis, social networking sites, and digital games) to include in our review, based on the frequency with which they appeared in the literature over the past 5 years. One commonality among these technologies is their potential value in supporting a constructivist approach to learning, characterized by the active discovery of knowledge through reflection on experiences with one’s environment, the connection of new knowledge to prior knowledge, and interaction with others (Boghossian, 2006 ; Clements, 2015 ). Another commonality is that most of the technologies, except perhaps for digital games, are designed primarily to promote interaction and collaboration with others. Our search yielded very few studies on how informational technologies, such as video lectures and podcasts, influence student engagement. Therefore, these technologies are notably absent from our review. Unlike the technologies we identified earlier, informational technologies reflect a behaviorist approach to learning in which students are passive recipients of knowledge that is transmitted from an expert (Boghossian, 2006 ). The lack of recent research on how informational technologies affect student engagement may be due to the increasing shift from instructor-centered, behaviorist approaches to student-centered, constructivist approaches within higher education (Haggis, 2009 ; Wright, 2011 ) along with the ubiquity of Web 2.0 technologies.

  • Web-conferencing

Web-conferencing software provides a virtual meeting space where users log in simultaneously and communicate about a given topic. While each software application is unique, many share similar features such as audio, video, or instant messaging options for real-time communication; screen sharing, whiteboards, and digital pens for presentations and demonstrations; polls and quizzes for gauging comprehension or eliciting feedback; and breakout rooms for small group work (Bower, 2011 ; Hudson, Knight, & Collins, 2012 ; Martin, Parker, & Deale, 2012 ; McBrien, Jones, & Cheng, 2009 ). Of the technologies included in this literature review, web-conferencing software most closely mimics the face-to-face classroom environment, providing a space where instructors and students can hear and see each other in real-time as typical classroom activities (i.e., delivering lectures, discussing course content, asking/answering questions) are carried out (Francescucci & Foster, 2013 ; Hudson et al., 2012 ). Studies on web-conferencing software deployed Adobe Connect, Cisco WebEx, Horizon Wimba, or Blackboard Collaborate and made use of multiple features, such as screen sharing, instant messaging, polling, and breakout rooms. In addition, most of the studies integrated web-conferencing software into courses on a voluntary basis to supplement traditional instructional methods (Andrew, Maslin-Prothero, & Ewens, 2015 ; Armstrong & Thornton, 2012 ; Francescucci & Foster, 2013 ; Hudson et al., 2012 ; Martin et al., 2012 ; Wdowik, 2014 ). Existing studies on web-conferencing pertain to all three types of student engagement.

Studies on web-conferencing and behavioral engagement reveal mixed findings. For example, voluntary attendance in web-conferencing sessions ranged from 54 to 57% (Andrew et al., 2015 ; Armstrong & Thornton, 2012 ) and, in a comparison between a blended course with regular web-conferencing sessions and a traditional, face-to-face course, researchers found no significant difference in student attendance between the two. However, students in the blended course reported higher levels of class participation compared to students in the face-to-face course (Francescucci & Foster, 2013 ). These findings suggest that while web-conferencing may not boost attendance, especially if voluntary, it may offer more opportunities for class participation, perhaps through the use of communication channels typically not available in a traditional, face-to-face course (e.g., instant messaging, anonymous polling). Studies on web-conferencing and interaction, another behavioral indicator, support this assertion. For example, researchers found that students use various features of web-conferencing software (e.g., polling, instant messaging, breakout rooms) to interact with peers and the instructor by asking questions, expressing opinions and ideas, sharing resources, and discussing academic content (Andrew et al., 2015 ; Armstrong & Thornton, 2012 ; Hudson et al., 2012 ; Martin et al., 2012 ; Wdowik, 2014 ).

Studies on web-conferencing and cognitive engagement are more conclusive than those for behavioral engagement, although they are fewer in number. Findings suggest that students who participated in web-conferencing demonstrated critical reflection and enhanced learning through interactions with others (Armstrong & Thornton, 2012 ), higher-order thinking (e.g., problem-solving, synthesis, evaluation) in response to challenging assignments (Wdowik, 2014 ), and motivation to learn, particularly when using polling features (Hudson et al., 2012 ). Only one study examined how web-conferencing affects emotional engagement; its findings were positive, suggesting that students who participated in web-conferences had higher levels of interest in course content than those who did not (Francescucci & Foster, 2013 ). One possible reason for the positive cognitive and emotional engagement findings may be that web-conferencing software provides many features that promote active learning. For example, whiteboards and breakout rooms provide opportunities for real-time, collaborative problem-solving activities and discussions. However, additional studies are needed to isolate and compare specific web-conferencing features to determine which have the greatest effect on student engagement.

A blog, which is short for Weblog, is a collection of personal journal entries, published online and presented chronologically, to which readers (or subscribers) may respond by providing additional commentary or feedback. In order to create a blog, one must compose content for an entry, which may include text, hyperlinks, graphics, audio, or video, publish the content online using a blogging application, and alert subscribers that new content is posted. Blogs may be informal and personal in nature or may serve as formal commentary in a specific genre, such as in politics or education (Coghlan et al., 2007 ). Fortunately, many blog applications are free, and many learning management systems (LMSs) offer a blogging feature that is seamlessly integrated into the online classroom. The ease of blogging has attracted attention from educators, who currently use blogs as an instructional tool for the expression of ideas, opinions, and experiences and for promoting dialogue on a wide range of academic topics (Garrity, Jones, VanderZwan, de la Rocha, & Epstein, 2014 ; Wang, 2008 ).

Studies on blogs show consistently positive findings for many of the behavioral and emotional engagement indicators. For example, students reported that blogs promoted interaction with others, through greater communication and information sharing with peers (Chu, Chan, & Tiwari, 2012; Ivala & Gachago, 2012; Mansouri & Piki, 2016), and analyses of blog posts show evidence of students elaborating on one another’s ideas and sharing experiences and conceptions of course content (Sharma & Tietjen, 2016). Blogs also contribute to emotional engagement by providing students with opportunities to express their feelings about learning and by encouraging positive attitudes about learning (Chu et al., 2012; Dos & Demir, 2013; Yang & Chang, 2012). For example, Dos and Demir (2013) found that students expressed prejudices and fears about specific course topics in their blog posts. In addition, Yang and Chang (2012) found that interactive blogging, where comment features were enabled, led to more positive attitudes about course content and peers compared to solitary blogging, where comment features were disabled.

The literature on blogs and cognitive engagement is less consistent. Some studies suggest that blogs may help students engage in active learning, problem-solving, and reflection (Chawinga, 2017 ; Chu et al., 2012 ; Ivala & Gachago, 2012 ; Mansouri & Piki, 2016 ), while other studies suggest that students’ blog posts show very little evidence of higher-order thinking (Dos & Demir, 2013 ; Sharma & Tietjen, 2016 ). The inconsistency in findings may be due to the wording of blog instructions. Students may not necessarily demonstrate or engage in deep processing of information unless explicitly instructed to do so. Unfortunately, it is difficult to determine whether the wording of blog assignments contributed to the mixed results because many of the studies did not provide assignment details. However, studies pertaining to other technologies suggest that assignment wording that lacks specificity or requires low-level thinking can have detrimental effects on student engagement outcomes (Hou, Wang, Lin, & Chang, 2015 ; Prestridge, 2014 ). Therefore, blog assignments that are vague or require only low-level thinking may have adverse effects on cognitive engagement.

A wiki is a web page that can be edited by multiple users at once (Nakamaru, 2012). Wikis have gained popularity in educational settings as a viable tool for group projects where group members can work collaboratively to develop content (i.e., writings, hyperlinks, images, graphics, media) and keep track of revisions through an extensive versioning system (Roussinos & Jimoyiannis, 2013). Most studies on wikis pertain to behavioral engagement, with far fewer studies on cognitive engagement and none on emotional engagement. Studies pertaining to behavioral engagement reveal mixed results, with some showing very little enduring participation in wikis beyond the first few weeks of the course (Nakamaru, 2012; Salaber, 2014) and another showing active participation, as seen in high numbers of posts and edits (Roussinos & Jimoyiannis, 2013). The most notable difference between these studies is the presence of grading, which may account for the inconsistencies in findings. For example, in studies where participation was low, wikis were ungraded, suggesting that students may need extra motivation and encouragement to use wikis (Nakamaru, 2012; Salaber, 2014).

Findings regarding the use of wikis for promoting interaction are also inconsistent. In some studies, students reported that wikis were useful for interaction, teamwork, collaboration, and group networking (Camacho, Carrión, Chayah, & Campos, 2016; Martínez, Medina, Albalat, & Rubió, 2013; Morely, 2012; Calabretto & Rao, 2011) and researchers found evidence of substantial collaboration among students (e.g., sharing ideas, opinions, and points of view) in wiki activity (Hewege & Perera, 2013); however, Miller, Norris, and Bookstaver (2012) found that only 58% of students reported that wikis promoted collegiality among peers. The findings in the latter study were unexpected and may be due to design flaws in the wiki assignments. For example, the authors noted that wiki assignments were not explicitly referred to in face-to-face classes; therefore, this disconnect may have prevented students from building on interactive momentum achieved during out-of-class wiki assignments (Miller et al., 2012).

Studies regarding cognitive engagement are limited in number but more consistent than those concerning behavioral engagement, suggesting that wikis promote high levels of knowledge construction (i.e., evaluation of arguments, the integration of multiple viewpoints, new understanding of course topics; Hewege & Perera, 2013 ), and are useful for reflection, reinforcing course content, and applying academic skills (Miller et al., 2012 ). Overall, there is mixed support for the use of wikis to promote behavioral engagement, although making wiki assignments mandatory and explicitly referring to wikis in class may help bolster participation and interaction. In addition, there is some support for using wikis to promote cognitive engagement, but additional studies are needed to confirm and expand on findings as well as explore the effect of wikis on emotional engagement.

Social networking sites

Social networking is “the practice of expanding knowledge by making connections with individuals of similar interests” (Gunawardena et al., 2009 , p. 4). Social networking sites, such as Facebook, Twitter, Instagram, and LinkedIn, allow users to create and share digital content publicly or with others to whom they are connected and communicate privately through messaging features. Two of the most popular social networking sites in the educational literature are Facebook and Twitter (Camus, Hurt, Larson, & Prevost, 2016 ; Manca & Ranieri, 2013 ), which is consistent with recent statistics suggesting that both sites also are exceedingly popular among the general population (Greenwood, Perrin, & Duggan, 2016 ). In the sections that follow, we examine how both Facebook and Twitter influence different types of student engagement.

Facebook is a web-based service that allows users to create a public or private profile and invite others to connect. Users may build social, academic, and professional connections by posting messages in various media formats (i.e., text, pictures, videos) and commenting on, liking, and reacting to others’ messages (Bowman & Akcaoglu, 2014 ; Maben, Edwards, & Malone, 2014 ; Hou et al., 2015 ). Within an educational context, Facebook has often been used as a supplementary instructional tool to lectures or LMSs to support class discussions or develop, deliver, and share academic content and resources. Many instructors have opted to create private Facebook groups, offering an added layer of security and privacy because groups are not accessible to strangers (Bahati, 2015 ; Bowman & Akcaoglu, 2014 ; Clements, 2015 ; Dougherty & Andercheck, 2014 ; Esteves, 2012 ; Shraim, 2014 ; Maben et al., 2014 ; Manca & Ranieri, 2013 ; Naghdipour & Eldridge, 2016 ; Rambe, 2012 ). The majority of studies on Facebook address behavioral indicators of student engagement, with far fewer focusing on emotional or cognitive engagement.

Studies that examine the influence of Facebook on behavioral engagement focus both on participation in learning activities and interaction with peers and instructors. In most studies, Facebook activities were voluntary and participation rates ranged from 16 to 95%, with an average rate of 47% (Bahati, 2015; Bowman & Akcaoglu, 2014; Dougherty & Andercheck, 2014; Fagioli, Rios-Aguilar, & Deil-Amen, 2015; Rambe, 2012; Staines & Lauchs, 2013). Participation was assessed by tracking how many students joined course- or university-specific Facebook groups (Bahati, 2015; Bowman & Akcaoglu, 2014; Fagioli et al., 2015), visited or followed course-specific Facebook pages (DiVall & Kirwin, 2012; Staines & Lauchs, 2013), or posted at least once in a course-specific Facebook page (Rambe, 2012). The lowest level of participation (16%) arose from a study where community college students were invited to use the Schools App, a free application that connects students to their university’s private Facebook community. While the authors acknowledged that building an online community of college students is difficult (Fagioli et al., 2015), downloading the Schools App may have been a deterrent to widespread participation. In addition, use of the app was not tied to any specific courses or assignments; therefore, students may have lacked adequate incentive to use it. The highest level of participation (95%) in the literature arose from a study in which the instructor created a Facebook page where students could find or post study tips or ask questions. Followership to the page was highest around exams, when students likely had stronger motivations to access study tips and ask the instructor questions (DiVall & Kirwin, 2012). The wide range of participation in Facebook activities suggests that some students may be intrinsically motivated to participate, while other students may need some external encouragement.
For example, Bahati (2015) found that when students assumed that a course-specific Facebook group was voluntary, only 23% participated, but when the instructor confirmed that the Facebook group was, in fact, mandatory, the level of participation rose to 94%.

While voluntary participation in Facebook activities may be lower than desired or expected (Dyson, Vickers, Turtle, Cowan, & Tassone, 2015; Fagioli et al., 2015; Naghdipour & Eldridge, 2016; Rambe, 2012), students seem to have a clear preference for Facebook compared to other instructional tools (Clements, 2015; DiVall & Kirwin, 2012; Hurt et al., 2012; Hou et al., 2015; Kent, 2013). For example, in one study where an instructor shared course-related information in a Facebook group, in the LMS, and through email, the level of participation in the Facebook group was ten times higher than in email or the LMS (Clements, 2015). In other studies, class discussions held in Facebook resulted in greater levels of participation and dialogue than class discussions held in LMS discussion forums (Camus et al., 2016; Hurt et al., 2012; Kent, 2013). Researchers attribute this preference for Facebook over the university’s LMS to perceptions that the LMS is outdated and disorganized and to reports that Facebook is more familiar, convenient, and accessible, given that many students already visit the social networking site multiple times per day (Clements, 2015; Dougherty & Andercheck, 2014; Hurt et al., 2012; Kent, 2013). In addition, students report that Facebook helps them stay engaged in learning through collaboration and interaction with both peers and instructors (Bahati, 2015; Shraim, 2014), which is evident in Facebook posts where students collaborated to study for exams, consulted on technical and theoretical problem solving, discussed course content, exchanged learning resources, and expressed opinions as well as academic successes and challenges (Bowman & Akcaoglu, 2014; Dougherty & Andercheck, 2014; Esteves, 2012; Ivala & Gachago, 2012; Maben et al., 2014; Rambe, 2012; van Beynen & Swenson, 2016).

There is far less evidence in the literature about the use of Facebook for emotional and cognitive engagement. In terms of emotional engagement, studies suggest that students feel positively about being part of a course-specific Facebook group and that Facebook is useful for expressing feelings about learning and concerns for peers, through features such as the “like” button and emoticons (Bowman & Akcaoglu, 2014; Dougherty & Andercheck, 2014; Naghdipour & Eldridge, 2016). In addition, being involved in a course-specific Facebook group was positively related to students’ sense of belonging in the course (Dougherty & Andercheck, 2014). The research on cognitive engagement is less conclusive, with some studies suggesting that Facebook participation is related to academic persistence (Fagioli et al., 2015) and self-regulation (Dougherty & Andercheck, 2014) while other studies show low levels of knowledge construction in Facebook posts (Hou et al., 2015), particularly when compared to discussions held in the LMS. One possible reason may be that the LMS is associated with formal, academic interactions while Facebook is associated with informal, social interactions (Camus et al., 2016). While additional research is needed to confirm the efficacy of Facebook for promoting cognitive engagement, studies suggest that Facebook may be a viable tool for increasing specific behavioral and emotional engagement indicators, such as interactions with others and a sense of belonging within a learning community.

Twitter is a web-based service where subscribers can post short, real-time messages, called tweets, that are no longer than 140 characters in length. Tweets may contain hyperlinks to other websites, images, graphics, and/or videos and may be tagged by topic using the hashtag symbol before the designated label (e.g., #elearning). Twitter subscribers may “follow” other users and gain access to their tweets and also may “retweet” messages that have already been posted (Hennessy, Kirkpatrick, Smith, & Border, 2016; Osgerby & Rush, 2015; Prestridge, 2014; West, Moore, & Barry, 2015; Tiernan, 2014). Instructors may use Twitter to post updates about the course, clarify expectations, direct students to additional learning materials, and encourage students to discuss course content (Bista, 2015; Williams & Whiting, 2016). Several of the studies on the use of Twitter included broad, all-encompassing measures of student engagement and produced mixed findings. For example, some studies suggest that Twitter increases student engagement (Evans, 2014; Gagnon, 2015; Junco, Heibergert, & Loken, 2011) while other studies suggest that Twitter has little to no influence on student engagement (Junco, Elavsky, & Heiberger, 2013; McKay, Sanko, Shekhter, & Birnbach, 2014). In both studies suggesting little to no influence, Twitter use was voluntary, and in one of the studies faculty involvement in Twitter was low, which may account for the negative findings (Junco et al., 2013; McKay et al., 2014). Conversely, in the studies that show positive findings, Twitter use was mandatory and often directly integrated with required assignments (Evans, 2014; Gagnon, 2015; Junco et al., 2011). Therefore, making Twitter use mandatory, increasing faculty involvement in Twitter, and integrating Twitter into assignments may help to increase student engagement.

Studies pertaining to specific behavioral student engagement indicators also reveal mixed findings. For example, in studies where course-related Twitter use was voluntary, 45-91% of students reported using Twitter during the term (Hennessy et al., 2016; Junco et al., 2013; Ross, Banow, & Yu, 2015; Tiernan, 2014; Williams & Whiting, 2016), but only 30-36% reported making contributions to the course-specific Twitter page (Hennessy et al., 2016; Tiernan, 2014; Ross et al., 2015; Williams & Whiting, 2016). The study that reported a 91% participation rate was unique because the course-specific Twitter page was accessible via a public link. Therefore, students who chose only to view the content (58%), rather than contribute to the page, did not have to create a Twitter account (Hennessy et al., 2016). The convenience of not having to create an account may be one reason for the much higher participation rate. In terms of low participation rates, a lack of literacy, familiarity, and interest in Twitter, as well as a preference for Facebook, have been cited as contributing factors (Bista, 2015; McKay et al., 2014; Mysko & Delgaty, 2015; Osgerby & Rush, 2015; Tiernan, 2014). However, when the use of Twitter was required and integrated into class discussions, the participation rate was 100% (Gagnon, 2015). Similarly, 46% of students in one study indicated that they would have been more motivated to participate in Twitter activities if they were graded (Osgerby & Rush, 2015), again confirming the power of extrinsic motivating factors.

Studies also show mixed results for the use of Twitter to promote interactions with peers and instructors. Researchers found that when instructors used Twitter to post updates about the course, ask and answer questions, and encourage students to tweet about course content, there was evidence of student-student and student-instructor interactions in tweets (Hennessy et al., 2016; Tiernan, 2014). Some students echoed these findings, suggesting that Twitter is useful for sharing ideas and resources, discussing course content, asking the instructor questions, and networking (Chawinga, 2017; Evans, 2014; Gagnon, 2015; Hennessy et al., 2016; Mysko & Delgaty, 2015; West et al., 2015) and is preferable over speaking aloud in class because it is more comfortable, less threatening, and more concise due to the 140-character limit (Gagnon, 2015; Mysko & Delgaty, 2015; Tiernan, 2014). Conversely, other students reported that Twitter was not useful for improving interaction because they viewed it predominantly as a tool for social, rather than academic, interactions and found the 140-character limit frustrating and restrictive. A theme among the latter studies was that a large proportion of the sample had never used Twitter before (Bista, 2015; McKay et al., 2014; Osgerby & Rush, 2015), which may have contributed to negative perceptions.

The literature on the use of Twitter for cognitive and emotional engagement is minimal but nonetheless promising in terms of promoting knowledge gains, the practical application of content, and a sense of belonging among users. For example, using Twitter to respond to questions that arose in lectures and tweet about course content throughout the term is associated with increased understanding of course content and application of knowledge (Kim et al., 2015 ; Tiernan, 2014 ; West et al., 2015 ). While the underlying mechanisms pertaining to why Twitter promotes an understanding of content and application of knowledge are not entirely clear, Tiernan ( 2014 ) suggests that one possible reason may be that Twitter helps to break down communication barriers, encouraging shy or timid students to participate in discussions that ultimately are richer in dialogue and debate. In terms of emotional engagement, students who participated in a large, class-specific Twitter page were more likely to feel a sense of community and belonging compared to those who did not participate because they could more easily find support from and share resources with other Twitter users (Ross et al., 2015 ). Despite the positive findings about the use of Twitter for cognitive and emotional engagement, more studies are needed to confirm existing results regarding behavioral engagement and target additional engagement indicators such as motivation, persistence, and attitudes, interests, and values about learning. In addition, given the strong negative perceptions of Twitter that still exist, additional studies are needed to confirm Twitter ’s efficacy for promoting different types of behavioral engagement among both novice and experienced Twitter users, particularly when compared to more familiar tools such as Facebook or LMS discussion forums.

Digital games

Digital games are “applications using the characteristics of video and computer games to create engaging and immersive learning experiences for delivery of specified learning goals, outcomes and experiences” (de Freitas, 2006, p. 9). Digital games often serve the dual purpose of promoting the achievement of learning outcomes while making learning fun by providing simulations of real-world scenarios as well as role play, problem-solving, and drill and repeat activities (Boyle et al., 2016; Connolly, Boyle, MacArthur, Hainey, & Boyle, 2012; Scarlet & Ampolos, 2013; Whitton, 2011). In addition, gamified elements, such as digital badges and leaderboards, may be integrated into instruction to provide additional motivation for completing assigned readings and other learning activities (Armier, Shepherd, & Skrabut, 2016; Hew, Huang, Chu, & Chiu, 2016). The pedagogical benefits of digital games are somewhat distinct from those of the other technologies addressed in this review, which are designed primarily for social interaction. While digital games may be played in teams or allow one player to compete against another, their design often focuses on providing opportunities for students to interact with academic content in a virtual environment through decision-making, problem-solving, and reward mechanisms. For example, a digital game may require students to adopt the role of CEO in a computer-simulated business environment, make decisions about a series of organizational issues, and respond to the consequences of those decisions. In this example and others, digital games use adaptive learning principles, where the learning environment is re-configured or modified in response to the actions and needs of students (Bower, 2016). Most of the studies on digital games focused on cognitive and emotional indicators of student engagement, in contrast to the previous technologies addressed in this review, which primarily focused on behavioral indicators of engagement.

Existing studies provide support for the influence of digital games on cognitive engagement, through achieving a greater understanding of course content and demonstrating higher-order thinking skills (Beckem & Watkins, 2012; Farley, 2013; Ke, Xie, & Xie, 2016; Marriott, Tan, & Marriott, 2015), particularly when compared to traditional instructional methods, such as giving lectures or assigning textbook readings (Lu, Hallinger, & Showanasai, 2014; Siddique, Ling, Roberson, Xu, & Geng, 2013; Zimmermann, 2013). For example, in a study comparing courses that offered computer simulations of business challenges (e.g., implementing a new information technology system, managing a startup company, and managing a brand of medicine in a simulated market environment) with courses that did not, students in simulation-based courses reported higher levels of action-directed learning (i.e., connecting theory to practice in a business context) than students in traditional, non-simulation-based courses (Lu et al., 2014). Similarly, engineering students who participated in a car simulator game, which was designed to help students apply and reinforce the knowledge gained from lectures, demonstrated higher levels of critical thinking (i.e., analysis, evaluation) on a quiz than students who only attended lectures (Siddique et al., 2013).

Motivation is another cognitive engagement indicator that is linked to digital games (Armier et al., 2016; Chang & Wei, 2016; Dichev & Dicheva, 2017; Grimley, Green, Nilsen, & Thompson, 2012; Hew et al., 2016; Ibáñez, Di-Serio, & Delgado-Kloos, 2014; Ke et al., 2016; Liu, Cheng, & Huang, 2011; Nadolny & Halabi, 2016). Researchers found that incorporating gamified elements into courses positively affects student motivation to complete learning tasks; examples include giving students digital rewards (e.g., redeemable points, trophies, and badges) for participating in learning activities and creating competition through leaderboards where students can see how they rank against other students (Armier et al., 2016; Chang & Wei, 2016; Hew et al., 2016; Nadolny & Halabi, 2016). In addition, students who participated in gamified elements, such as trying to earn digital badges, were more motivated to complete particularly difficult learning activities (Hew et al., 2016) and showed persistence in exceeding learning requirements (Ibáñez et al., 2014). Research on emotional engagement may help to explain these findings. Studies suggest that digital games positively affect student attitudes about learning, evident in student reports that games are fun, interesting, and enjoyable (Beckem & Watkins, 2012; Farley, 2013; Grimley et al., 2012; Hew et al., 2016; Liu et al., 2011; Zimmermann, 2013), which may account for higher levels of student motivation in courses that offered digital games.

Research on digital games and behavioral engagement is more limited, with only one study suggesting that games lead to greater participation in educational activities (Hew et al., 2016 ). Therefore, more research is needed to explore how digital games may influence behavioral engagement. In addition, research is needed to determine whether the underlying technology associated with digital games (e.g., computer-based simulations and virtual realities) produce positive engagement outcomes or whether common mechanisms associated with both digital and non-digital games (e.g., role play, rewards, and competition) account for those outcomes. For example, studies in which non-digital, face-to-face games were used also showed positive effects on student engagement (Antunes, Pacheco, & Giovanela, 2012 ; Auman, 2011 ; Coffey, Miller, & Feuerstein, 2011 ; Crocco, Offenholley, & Hernandez, 2016 ; Poole, Kemp, Williams, & Patterson, 2014 ; Scarlet & Ampolos, 2013 ); therefore, it is unclear if and how digitizing games contributes to student engagement.

Discussion and implications

Student engagement is linked to a number of academic outcomes, such as retention, grade point average, and graduation rates (Carini et al., 2006 ; Center for Postsecondary Research, 2016 ; Hu & McCormick, 2012 ). As a result, universities have shown a strong interest in how to increase student engagement, particularly given rising external pressures to improve learning outcomes and prepare students for academic success (Axelson & Flick, 2011 ; Kuh, 2009 ). There are various models of student engagement that identify factors that influence student engagement (Kahu, 2013 ; Lam et al., 2012 ; Nora et al., 2005 ; Wimpenny & Savin-Baden, 2013 ; Zepke & Leach, 2010 ); however, none include the overt role of technology despite the growing trend and student demands to integrate technology into the learning experience (Amirault, 2012 ; Cook & Sonnenberg, 2014 ; Revere & Kovach, 2011 ; Sun & Chen, 2016 ; Westera, 2015 ). Therefore, the primary purpose of our literature review was to explore whether technology influences student engagement. The secondary purpose was to address skepticism and uncertainty about pedagogical benefits of technology (Ashrafzadeh & Sayadian, 2015 ; Kopcha et al., 2016 ; Reid, 2014 ) by reviewing the literature regarding the efficacy of specific technologies (i.e., web-conferencing software, blogs, wikis, social networking sites, and digital games) for promoting student engagement and offering recommendations for effective implementation, which are included at the end of this paper. In the sections that follow, we provide an overview of the findings, an explanation of existing methodological limitations and areas for future research, and a list of best practices for integrating the technologies we reviewed into the teaching and learning process.

Summary of findings

Findings from our literature review provide preliminary support for including technology as a factor that influences student engagement in existing models (Table 1). One overarching theme is that most of the technologies we reviewed had a positive influence on multiple indicators of student engagement, which may lead to a larger return on investment in terms of learning outcomes. For example, digital games influence all three types of student engagement and six of the seven indicators we identified, surpassing the other technologies in this review. There were several key differences in the design and pedagogical use between digital games and other technologies that may explain these findings. First, digital games were designed to provide authentic learning contexts in which students could practice skills and apply learning (Beckem & Watkins, 2012; Farley, 2013; Grimley et al., 2012; Ke et al., 2016; Liu et al., 2011; Lu et al., 2014; Marriott et al., 2015; Siddique et al., 2013), which is consistent with experiential learning and adult learning theories. Experiential learning theory suggests that learning occurs through interaction with one’s environment (Kolb, 2014), while adult learning theory suggests that adult learners want to be actively involved in the learning process and be able to apply learning to real-life situations and problems (Cercone, 2008). Second, students reported that digital games (and gamified elements) are fun, enjoyable, and interesting (Beckem & Watkins, 2012; Farley, 2013; Grimley et al., 2012; Hew et al., 2016; Liu et al., 2011; Zimmermann, 2013), feelings that are associated with a flow-like state where one is completely immersed in and engaged with the activity (Csikszentmihalyi, 1988; Weibel, Wissmath, Habegger, Steiner, & Groner, 2008).
Third, digital games were closely integrated into the curriculum as required activities (Farley, 2013; Grimley et al., 2012; Ke et al., 2016; Liu et al., 2011; Marriott et al., 2015; Siddique et al., 2013), as opposed to wikis, Facebook, and Twitter, which were often voluntary and used to supplement lectures (Dougherty & Andercheck, 2014; Nakamaru, 2012; Prestridge, 2014; Rambe, 2012).

Web-conferencing software and Facebook also yielded the most positive findings, influencing four of the seven indicators of student engagement, compared to other collaborative technologies, such as blogs, wikis, and Twitter. Web-conferencing software was unique due to the sheer number of collaborative features it offers, providing multiple ways for students to actively engage with course content (screen sharing, whiteboards, digital pens) and interact with peers and the instructor (audio, video, text chats, breakout rooms) (Bower, 2011; Hudson et al., 2012; Martin et al., 2012; McBrien et al., 2009); this may account for the effects on multiple indicators of student engagement. Positive findings regarding Facebook’s influence on student engagement could be explained by a strong familiarity with and preference for the social networking site (Clements, 2015; DiVall & Kirwin, 2012; Hurt et al., 2012; Hou et al., 2015; Kent, 2013; Manca & Ranieri, 2013), compared to Twitter, which was less familiar or interesting to students (Bista, 2015; McKay et al., 2014; Mysko & Delgaty, 2015; Osgerby & Rush, 2015; Tiernan, 2014). Wikis had the lowest influence on student engagement, with mixed findings regarding behavioral engagement, limited but conclusive findings regarding one indicator of cognitive engagement (deep processing of information), and no studies pertaining to other indicators of cognitive engagement (motivation, persistence) or emotional engagement.

Another theme that arose was the prevalence of mixed findings across multiple technologies regarding behavioral engagement. Overall, the vast majority of studies addressed behavioral engagement, and we expected that technologies designed specifically for social interaction, such as web-conferencing, wikis, and social networking sites, would yield more conclusive findings. However, one possible reason for the mixed findings may be that the technologies were voluntary in many studies, resulting in lower-than-desired participation rates and missed opportunities for interaction (Armstrong & Thornton, 2012; Fagioli et al., 2015; Nakamaru, 2012; Rambe, 2012; Ross et al., 2015; Williams & Whiting, 2016), and mandatory in a few studies, yielding higher levels of participation and interaction (Bahati, 2015; Gagnon, 2015; Roussinos & Jimoyiannis, 2013). Another possible reason for the mixed findings is that measures of variables differed across studies. For example, in some studies participation meant that a student signed up for a Twitter account (Tiernan, 2014), used the Twitter account for class (Williams & Whiting, 2016), or viewed the course-specific Twitter page (Hennessy et al., 2016). The pedagogical uses of the technologies also varied considerably across studies, making it difficult to make comparisons. For example, Facebook was used in studies to share learning materials (Clements, 2015; Dyson et al., 2015), answer student questions about academic content or administrative issues (Rambe, 2012), prepare for upcoming exams and share study tips (Bowman & Akcaoglu, 2014; DiVall & Kirwin, 2012), complete group work (Hou et al., 2015; Staines & Lauchs, 2013), and discuss course content (Camus et al., 2016; Kent, 2013; Hurt et al., 2012). Finally, cognitive indicators (motivation and persistence) drew the fewest studies, which suggests that research is needed to determine whether technologies affect these indicators.

Methodological limitations

While there appears to be preliminary support for the use of many of the technologies to promote student engagement, there are significant methodological limitations in the literature and, as a result, findings should be interpreted with caution. First, many studies used small sample sizes and were limited to one course, one degree level, and one university. Therefore, generalizability is limited. Second, very few studies used experimental or quasi-experimental designs; therefore, very little evidence exists to substantiate a cause-and-effect relationship between technologies and student engagement indicators. In addition, in many studies that did use experimental or quasi-experimental designs, participants were not randomized; rather, participants who volunteered to use a specific technology were compared to those who chose not to use the technology. As a result, there is a possibility that fundamental differences between users and non-users could have affected the engagement results. Furthermore, many of the studies did not isolate specific technological features (e.g., using only the breakout rooms for group work in web-conferencing software, rather than using the chat feature, screen sharing, and breakout rooms for group work); using multiple features at once could have conflated student engagement results. Third, many studies relied on one source to measure technological and engagement variables (single-source bias), such as self-report data (i.e., reported usage of technology and perceptions of student engagement), which may have affected the validity of the results. Fourth, many studies were conducted during a very brief timeframe, such as one academic term. As a result, positive student engagement findings may be attributed to a “novelty effect” (Dichev & Dicheva, 2017) associated with using a new technology.
Finally, many studies lack adequate details about learning activities, raising questions about whether poor instructional design may have adversely affected results. For example, an instructor may intend to elicit higher-order thinking from students, but if learning activity instructions are written using low-level verbs, such as identify, describe, and summarize, students will be less likely to engage in higher-order thinking.

Areas for future research

The findings of our literature review suggest that the influence of technology on student engagement is still a developing area of knowledge that requires additional research to build on promising, but limited, evidence, clarify mixed findings, and address several gaps in the literature. As such, our recommendations for future areas of research are as follows:

Examine the effect of collaborative technologies (i.e., web-conferencing, blogs, wikis, and social networking sites) on emotional and cognitive student engagement. There are significant gaps in the literature regarding whether these technologies affect attitudes, interests, and values about learning; a sense of belonging within a learning community; motivation to learn; and persistence to overcome academic challenges and meet or exceed requirements.

Clarify mixed findings, particularly regarding how web-conferencing software, wikis, and Facebook and Twitter affect participation in learning activities. Researchers should make considerable efforts to gain consensus or increase consistency on how participation is measured (e.g., visited Facebook group or contributed one post a week) in order to make meaningful comparisons and draw conclusions about the efficacy of various technologies for promoting behavioral engagement. In addition, further research is needed to clarify findings regarding how wikis and Twitter influence interaction and how blogs and Facebook influence deep processing of information. Future research studies should include justifications for the pedagogical use of specific technologies and detailed instructions for learning activities to minimize adverse findings from poor instructional design and to encourage replication.

Conduct longitudinal studies over several academic terms and across multiple academic disciplines, degree levels, and institutions to determine long-term effects of specific technologies on student engagement and to increase the generalizability of findings. Also, future studies should take individual factors, such as gender, age, and prior experience with the technology, into account. Studies suggest that a lack of prior experience or familiarity with Twitter was a barrier to Twitter use in educational settings (Bista, 2015; Mysko & Delgaty, 2015; Tiernan, 2014); prior experience in particular should therefore be accounted for in future research designs.

Compare student engagement outcomes between and among different technologies and non-technologies. For example, studies suggest that students prefer Facebook over Twitter (Bista, 2015; Osgerby & Rush, 2015), but no studies compared these technologies for promoting student engagement. Also, studies are needed to isolate and compare different features within the same technology to determine which might be most effective for increasing engagement. Finally, studies on digital games (Beckem & Watkins, 2012; Grimley et al., 2012; Ke et al., 2016; Lu et al., 2014; Marriott et al., 2015; Siddique et al., 2013) and face-to-face games (Antunes et al., 2012; Auman, 2011; Coffey et al., 2011; Crocco et al., 2016; Poole et al., 2014; Scarlet & Ampolos, 2013) show similar, positive effects on student engagement; therefore, additional research is needed to determine the degree to which the delivery method (i.e., digital versus face-to-face) accounts for positive gains in student engagement.

Determine whether other technologies not included in this review influence student engagement. Facebook and Twitter regularly appear in the literature regarding social networking, but it is unclear how other popular social networking sites, such as LinkedIn, Instagram, and Flickr, influence student engagement. Future research should focus on the efficacy of these and other popular social networking sites for promoting student engagement. In addition, there were very few studies about whether informational technologies, which involve the one-way transmission of information to students, affect different types of student engagement. Future research should examine whether informational technologies, such as video lectures, podcasts, and pre-recorded narrated PowerPoint presentations or screencasts, affect student engagement. Finally, studies should examine the influence of mobile software and technologies, such as educational apps or smartphones, on student engagement.

Achieve greater consensus on the meaning of student engagement and its distinction from similar concepts in the literature, such as social and cognitive presence (Garrison & Arbaugh, 2007).

Recommendations for practice

Despite the existing gaps and mixed findings in the literature, we were able to compile a list of recommendations for when and how to use technology to increase the likelihood of promoting student engagement. What follows is not an exhaustive list; rather, it is a synthesis of both research findings and lessons learned from the studies we reviewed. There may be other recommendations to add to this list; however, our intent is to provide some useful information to help address barriers to technology integration among faculty who feel uncertain or unprepared to use technology (Ashrafzadeh & Sayadian, 2015 ; Hauptman, 2015 ; Kidd et al., 2016 ; Reid, 2014 ) and to add to the body of practical knowledge in instructional design and delivery. Our recommendations for practice are as follows:

Consider context before selecting technologies. Contextual factors such as existing technological infrastructure and requirements, program and course characteristics, and the intended audience will help determine which technologies, if any, are most appropriate (Bullen & Morgan, 2011; Bullen, Morgan, & Qayyum, 2011). For example, requiring students to use a blog that is not well integrated with the existing LMS may prove too frustrating for both the instructor and students. Similarly, integrating Facebook- and Twitter-based learning activities throughout a marketing program may be more appropriate, given the subject matter, compared to doing so in an engineering or accounting program where social media is less integral to the profession. Finally, do not assume that students appreciate or are familiar with all technologies. For example, students who did not already have Facebook or Twitter accounts were less likely to use either for learning purposes and perceived setting up an account to be an increase in workload (Bista, 2015; Clements, 2015; DiVall & Kirwin, 2012; Hennessy et al., 2016; Mysko & Delgaty, 2015; Tiernan, 2014). Therefore, prior to using any technology, instructors may want to determine how many students already have accounts and/or are familiar with the technology.

Carefully select technologies based on their strengths and limitations and the intended learning outcome. For example, Twitter is limited to 140 characters, making it a viable tool for learning activities that require brevity. In one study, an instructor used Twitter for short pop quizzes during lectures, where the first few students to tweet the correct answer received additional points (Kim et al., 2015 ), which helped students practice applying knowledge. In addition, studies show that students perceive Twitter and Facebook to be primarily for social interactions (Camus et al., 2016 ; Ross et al., 2015 ), which may make these technologies viable tools for sharing resources, giving brief opinions about news stories pertaining to course content, or having casual conversations with classmates rather than full-fledged scholarly discourse.

Incentivize students to use technology, either by assigning regular grades or giving extra credit. The average participation rate in voluntary web-conferencing, Facebook , and Twitter learning activities in the studies we reviewed was 52% (Andrew et al., 2015; Armstrong & Thornton, 2012; Bahati, 2015; Bowman & Akcaoglu, 2014; DiVall & Kirwin, 2012; Dougherty & Andercheck, 2014; Fagioli et al., 2015; Hennessy et al., 2016; Junco et al., 2013; Rambe, 2012; Ross et al., 2015; Staines & Lauchs, 2013; Tiernan, 2014; Williams & Whiting, 2016). While there were far fewer studies on the use of technology for graded or mandatory learning activities, the average participation rate reported in those studies was 97% (Bahati, 2015; Gagnon, 2015), suggesting that grading may be a key factor in ensuring students participate.

Communicate clear guidelines for technology use. Prior to the implementation of technology in a course, students may benefit from an overview of the technology, including its navigational features, privacy settings, and security (Andrew et al., 2015; Hurt et al., 2012; Martin et al., 2012), and a set of guidelines for how to use the technology effectively and professionally within an educational setting (Miller et al., 2012; Prestridge, 2014; Staines & Lauchs, 2013; West et al., 2015). In addition, giving students examples of exemplary and poor entries and posts may also help to clarify how they are expected to use the technology (Shraim, 2014; Roussinos & Jimoyiannis, 2013). Also, if instructors expect students to use technology to demonstrate higher-order thinking or to interact with peers, there should be explicit instructions to do so. For example, Prestridge (2014) found that students used Twitter to ask the instructor questions but very few interacted with peers because they were not explicitly asked to do so. Similarly, Hou et al. (2015) reported low levels of knowledge construction in Facebook , admitting that the wording of the learning activity (e.g., explore and present applications of computer networking) and the lack of probing questions in the instructions may have been to blame.

Use technology to provide authentic and integrated learning experiences. In many studies, instructors used digital games to simulate authentic environments in which students could apply new knowledge and skills, which ultimately led to a greater understanding of content and evidence of higher-order thinking (Beckem & Watkins, 2012; Liu et al., 2011; Lu et al., 2014; Marriott et al., 2015; Siddique et al., 2013). For example, in one study, students were required to play the role of a stock trader in a simulated trading environment, and they reported that the simulation helped them engage in critical reflection, enabling them to identify mistakes and weaknesses in their trading approaches and strategies (Marriott et al., 2015). In addition, integrating technology into regularly scheduled classroom activities, such as lectures, may help to promote student engagement. For example, in one study, the instructor posed a question in class, asked students to respond aloud or tweet their response, and projected the Twitter page so that everyone could see the tweets in class, which led to favorable comments about the usefulness of Twitter to promote engagement (Tiernan, 2014).

Actively participate in using the technologies assigned to students during the first few weeks of the course to generate interest (Dougherty & Andercheck, 2014 ; West et al., 2015 ) and, preferably, throughout the course to answer questions, encourage dialogue, correct misconceptions, and address inappropriate behavior (Bowman & Akcaoglu, 2014 ; Hennessy et al., 2016 ; Junco et al., 2013 ; Roussinos & Jimoyiannis, 2013 ). Miller et al. ( 2012 ) found that faculty encouragement and prompting was associated with increases in students’ expression of ideas and the degree to which they edited and elaborated on their peers’ work in a course-specific wiki.

Be mindful of privacy, security, and accessibility issues. In many studies, instructors took necessary steps to help ensure privacy and security by creating closed Facebook groups and private Twitter pages, accessible only to students in the course (Bahati, 2015 ; Bista, 2015 ; Bowman & Akcaoglu, 2014 ; Esteves, 2012 ; Rambe, 2012 ; Tiernan, 2014 ; Williams & Whiting, 2016 ) and by offering training to students on how to use privacy and security settings (Hurt et al., 2012 ). Instructors also made efforts to increase accessibility of web-conferencing software by including a phone number for students unable to access audio or video through their computer and by recording and archiving sessions for students unable to attend due to pre-existing conflicts (Andrew et al., 2015 ; Martin et al., 2012 ). In the future, instructors should also keep in mind that some technologies, like Facebook and Twitter , are not accessible to students living in China; therefore, alternative arrangements may need to be made.

In 1985, Steve Jobs predicted that computers and software would revolutionize the way we learn. Over 30 years later, his prediction has yet to be fully confirmed in the student engagement literature; however, our findings offer preliminary evidence that the potential is there. Of the technologies we reviewed, digital games, web-conferencing software, and Facebook had the most far-reaching effects across multiple types and indicators of student engagement, suggesting that technology should be considered a factor that influences student engagement in existing models. Findings regarding blogs, wikis, and Twitter, however, are less convincing, given a lack of studies in relation to engagement indicators or mixed findings. Significant methodological limitations may account for the wide range of findings in the literature. For example, small sample sizes, inconsistent measurement of variables, lack of comparison groups, and missing details about specific pedagogical uses of technologies threaten the validity and reliability of findings. Therefore, more rigorous and robust research is needed to confirm and build upon limited but positive findings, clarify mixed findings, and address gaps, particularly regarding how different technologies influence emotional and cognitive indicators of engagement.

Abbreviations

LMS: Learning management system

Amirault, R. J. (2012). Distance learning in the 21 st century university. Quarterly Review of Distance Education, 13 (4), 253–265.

Anderson, M. (2016). More Americans using smartphones for getting directions, streaming TV . Washington, D.C.: Pew Research Center Retrieved from http://www.pewresearch.org/fact-tank/2016/01/29/us-smartphone-use/ .

Anderson, M., & Horrigan, J. B. (2016). Smartphones help those without broadband get online, but don’t necessarily bridge the digital divide . Washington, D.C.: Pew Research Center. Retrieved from http://www.pewresearch.org/fact-tank/2016/10/03/smartphones-help-those-without-broadband-get-online-but-dont-necessarily-bridge-the-digital-divide/ .

Andrew, L., Maslin-Prothero, S., & Ewens, B. (2015). Enhancing the online learning experience using virtual interactive classrooms. Australian Journal of Advanced Nursing, 32 (4), 22–31.

Antunes, M., Pacheco, M. R., & Giovanela, M. (2012). Design and implementation of an educational game for teaching chemistry in higher education. Journal of Chemical Education, 89 (4), 517–521. doi: 10.1021/ed2003077 .

Armier, D. J., Shepherd, C. E., & Skrabut, S. (2016). Using game elements to increase student engagement in course assignments. College Teaching, 64 (2), 64–72 https://doi.org/10.1080/87567555.2015.1094439 .

Armstrong, A., & Thornton, N. (2012). Incorporating Brookfield’s discussion techniques synchronously into asynchronous online courses. Quarterly Review of Distance Education, 13 (1), 1–9.

Ashrafzadeh, A., & Sayadian, S. (2015). University instructors’ concerns and perceptions of technology integration. Computers in Human Behavior, 49 , 62–73. doi: 10.1016/j.chb.2015.01.071 .

Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25 (4), 297–308.

Auman, C. (2011). Using simulation games to increase student and instructor engagement. College Teaching, 59 (4), 154–161. doi: 10.1080/87567555 .

Axelson, R. D., & Flick, A. (2011). Defining student engagement. Change: The magazine of higher learning, 43 (1), 38–43.

Bahati, B. (2015). Extending student discussions beyond lecture room walls via Facebook. Journal of Education and Practice, 6 (15), 160–171.

Bakker, A. B., Vergel, A. I. S., & Kuntze, J. (2015). Student engagement and performance: A weekly diary study on the role of openness. Motivation and Emotion, 39 (1), 49–62. doi: 10.1007/s11031-014-9422-5 .

Beckem, J. I., & Watkins, M. (2012). Bringing life to learning: Immersive experiential learning simulations for online and blended courses. Journal of Asynchronous Learning Networks, 16 (5), 61–70 https://doi.org/10.24059/olj.v16i5.287 .

Bista, K. (2015). Is Twitter an effective pedagogical tool in higher education? Perspectives of education graduate students. Journal of the Scholarship Of Teaching And Learning, 15 (2), 83–102 https://doi.org/10.14434/josotl.v15i2.12825 .

Boghossian, P. (2006). Behaviorism, constructivism, and Socratic pedagogy. Educational Philosophy and Theory, 38 (6), 713–722 https://doi.org/10.1111/j.1469-5812.2006.00226.x .

Bower, M. (2011). Redesigning a web-conferencing environment to scaffold computing students’ creative design processes. Journal of Educational Technology & Society, 14 (1), 27–42.

Bower, M. (2016). A framework for adaptive learning design in a Web-conferencing environment. Journal of Interactive Media in Education, 2016 (1), 11 http://doi.org/10.5334/jime.406 .

Bowman, N. D., & Akcaoglu, M. (2014). “I see smart people!”: Using Facebook to supplement cognitive and affective learning in the university mass lecture. The Internet and Higher Education, 23 , 1–8. doi: 10.1016/j.iheduc.2014.05.003 .

Boyle, E. A., Hainey, T., Connolly, T. M., Gray, G., Earp, J., Ott, M., et al. (2016). An update to the systematic literature review of empirical evidence of the impacts and outcomes of computer games and serious games. Computers & Education, 94 , 178–192. doi: 10.1016/j.compedu.2015.11.003 .

Bryson, C., & Hand, L. (2007). The role of engagement in inspiring teaching and learning. Innovations in Education and Teaching International, 44 (4), 349–362. doi: 10.1080/14703290701602748 .

Buchanan, T., Sainter, P., & Saunders, G. (2013). Factors affecting faculty use of learning technologies: Implications for models of technology adoption. Journal of Computer in Higher Education, 25 (1), 1–11.

Bullen, M., & Morgan, T. (2011). Digital learners not digital natives. La Cuestión Universitaria, 7 , 60–68.

Bullen, M., Morgan, T., & Qayyum, A. (2011). Digital learners in higher education: Generation is not the issue. Canadian Journal of Learning and Technology, 37 (1), 1–24.

Calabretto, J., & Rao, D. (2011). Wikis to support collaboration of pharmacy students in medication management workshops -- a pilot project. International Journal of Pharmacy Education & Practice, 8 (2), 1–12.

Camacho, M. E., Carrión, M. D., Chayah, M., & Campos, J. M. (2016). The use of wiki to promote students’ learning in higher education (Degree in Pharmacy). International Journal of Educational Technology in Higher Education, 13 (1), 1–8 https://doi.org/10.1186/s41239-016-0025-y .

Camus, M., Hurt, N. E., Larson, L. R., & Prevost, L. (2016). Facebook as an online teaching tool: Effects on student participation, learning, and overall course performance. College Teaching, 64 (2), 84–94 https://doi.org/10.1080/87567555.2015.1099093 .

Carini, R. M., Kuh, G. D., & Klein, S. P. (2006). Student engagement and student learning: Testing the linkages. Research in Higher Education, 47 (1), 1–32. doi: 10.1007/s11162-005-8150-9 .

Cassidy, E. D., Colmenares, A., Jones, G., Manolovitz, T., Shen, L., & Vieira, S. (2014). Higher Education and Emerging Technologies: Shifting Trends in Student Usage. The Journal of Academic Librarianship, 40 , 124–133. doi: 10.1016/j.acalib.2014.02.003 .

Center for Postsecondary Research (2016). Engagement insights: Survey findings on the quality of undergraduate education . Retrieved from http://nsse.indiana.edu/NSSE_2016_Results/pdf/NSSE_2016_Annual_Results.pdf .

Center for Postsecondary Research (2017). About NSSE. Retrieved on February 15, 2017 from http://nsse.indiana.edu/html/about.cfm

Cercone, K. (2008). Characteristics of adult learners with implications for online learning design. AACE Journal, 16 (2), 137–159.

Chang, J. W., & Wei, H. Y. (2016). Exploring Engaging Gamification Mechanics in Massive Online Open Courses. Educational Technology & Society, 19 (2), 177–203.

Chawinga, W. D. (2017). Taking social media to a university classroom: teaching and learning using Twitter and blogs. International Journal of Educational Technology in Higher Education, 14 (1), 3 https://doi.org/10.1186/s41239-017-0041-6 .

Chen, B., Seilhamer, R., Bennett, L., & Bauer, S. (2015). Students’ mobile learning practices in higher education: A multi-year study. In EDUCAUSE Review Retrieved from http://er.educause.edu/articles/2015/6/students-mobile-learning-practices-in-higher-education-a-multiyear-study .

Chu, S. K., Chan, C. K., & Tiwari, A. F. (2012). Using blogs to support learning during internship. Computers & Education, 58 (3), 989–1000. doi: 10.1016/j.compedu.2011.08.027 .

Clements, J. C. (2015). Using Facebook to enhance independent student engagement: A case study of first-year undergraduates. Higher Education Studies, 5 (4), 131–146 https://doi.org/10.5539/hes.v5n4p131 .

Coates, H. (2008). Attracting, engaging and retaining: New conversations about learning . Camberwell: Australian Council for Educational Research Retrieved from http://research.acer.edu.au/cgi/viewcontent.cgi?article=1015&context=ausse .

Coffey, D. J., Miller, W. J., & Feuerstein, D. (2011). Classroom as reality: Demonstrating campaign effects through live simulation. Journal of Political Science Education, 7 (1), 14–33.

Coghlan, E., Crawford, J., Little, J., Lomas, C., Lombardi, M., Oblinger, D., & Windham, C. (2007). ELI Discovery Tool: Guide to Blogging . Retrieved from https://net.educause.edu/ir/library/pdf/ELI8006.pdf .

Connolly, T. M., Boyle, E. A., MacArthur, E., Hainey, T., & Boyle, J. M. (2012). A systematic literature review of empirical evidence on computer games and serious games. Computers & Education, 59 , 661–686. doi: 10.1016/j.compedu.2012.03.004 .

Cook, C. W., & Sonnenberg, C. (2014). Technology and online education: Models for change. ASBBS E-Journal, 10 (1), 43–59.

Crocco, F., Offenholley, K., & Hernandez, C. (2016). A proof-of-concept study of game-based learning in higher education. Simulation & Gaming, 47 (4), 403–422. doi: 10.1177/1046878116632484 .

Csikszentmihalyi, M. (1988). The flow experience and its significance for human psychology. In M. Csikszentmihalyi & I. Csikszentmihalyi (Eds.), Optimal experience: Psychological studies of flow in consciousness (pp. 15–13). Cambridge, UK: Cambridge University Press.

Dahlstrom, E. (2012). ECAR study of undergraduate students and information technology, 2012 (Research Report). Retrieved from http://net.educause.edu/ir/library/pdf/ERS1208/ERS1208.pdf

de Freitas, S. (2006). Learning in immersive worlds: A review of game-based learning . Retrieved from https://curve.coventry.ac.uk/open/file/aeedcd86-bc4c-40fe-bfdf-df22ee53a495/1/learning%20in%20immersive%20worlds.pdf .

Dichev, C., & Dicheva, D. (2017). Gamifying education: What is known, what is believed and what remains uncertain: A critical review. International Journal of Educational Technology in Higher Education, 14 (9), 1–36. doi: 10.1186/s41239-017-0042-5 .

DiVall, M. V., & Kirwin, J. L. (2012). Using Facebook to facilitate course-related discussion between students and faculty members. American Journal of Pharmaceutical Education, 76 (2), 1–5 https://doi.org/10.5688/ajpe76232 .

Dos, B., & Demir, S. (2013). The analysis of the blogs created in a blended course through the reflective thinking perspective. Educational Sciences: Theory & Practice, 13 (2), 1335–1344.

Dougherty, K., & Andercheck, B. (2014). Using Facebook to engage learners in a large introductory course. Teaching Sociology, 42 (2), 95–104 https://doi.org/10.1177/0092055x14521022 .

Dyson, B., Vickers, K., Turtle, J., Cowan, S., & Tassone, A. (2015). Evaluating the use of Facebook to increase student engagement and understanding in lecture-based classes. Higher Education: The International Journal of Higher Education and Educational Planning, 69 (2), 303–313 https://doi.org/10.1007/s10734-014-9776-3.

Esteves, K. K. (2012). Exploring Facebook to enhance learning and student engagement: A case from the University of Philippines (UP) Open University. Malaysian Journal of Distance Education, 14 (1), 1–15.

Evans, C. (2014). Twitter for teaching: Can social media be used to enhance the process of learning? British Journal of Educational Technology, 45 (5), 902–915 https://doi.org/10.1111/bjet.12099 .

Fagioli, L., Rios-Aguilar, C., & Deil-Amen, R. (2015). Changing the context of student engagement: Using Facebook to increase community college student persistence and success. Teachers College Record, 17 , 1–42.

Farley, P. C. (2013). Using the computer game “FoldIt” to entice students to explore external representations of protein structure in a biochemistry course for nonmajors. Biochemistry and Molecular Biology Education, 41 (1), 56–57 https://doi.org/10.1002/bmb.20655 .

Francescucci, A., & Foster, M. (2013). The VIRI classroom: The impact of blended synchronous online courses on student performance, engagement, and satisfaction. Canadian Journal of Higher Education, 43 (3), 78–91.

Fredricks, J., Blumenfeld, P., & Paris, A. (2004). School engagement: Potential of the concept, state of the evidence. Review of Educational Research, 74 (1), 59–109. doi: 10.3102/00346543074001059 .

Acknowledgements

Not applicable.

This research was supported in part by a Laureate Education, Inc. David A. Wilson research grant study awarded to the second author, “A Comparative Analysis of Student Engagement and Critical Thinking in Two Approaches to the Online Classroom”.

Availability of data and materials

Authors’ contributions

The first and second authors contributed significantly to the writing, review, and conceptual thinking of the manuscript. The third author provided a first detailed outline of what the paper could address, and the fourth author provided input and feedback through critical review. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Consent for publication

Ethics approval and consent to participate

The parent study was approved by the University of Liverpool Online International Online Ethics Review Committee, approval number 04-24-2015-01.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Author information

Authors and affiliations

University of Liverpool Online, Liverpool, UK

Laura A. Schindler & Osama A. Morad

Laureate Education, Inc., Baltimore, USA

Gary J. Burkholder

Walden University, Minneapolis, USA

University of Lincoln, Lincoln, UK

Craig Marsh


Corresponding author

Correspondence to Laura A. Schindler.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Schindler, L. A., Burkholder, G. J., Morad, O. A., et al. Computer-based technology and student engagement: a critical review of the literature. Int J Educ Technol High Educ 14, 25 (2017). https://doi.org/10.1186/s41239-017-0063-0


Received: 31 March 2017

Accepted: 06 June 2017

Published: 02 October 2017

DOI: https://doi.org/10.1186/s41239-017-0063-0




Exploring the state of computer science education amid rapid policy expansion

By Michael Hansen, Senior Fellow - Brown Center on Education Policy, The Herman and George R. Brown Chair - Governance Studies (@DrMikeHansen), and Nicolas Zerbino, Senior Research Analyst

April 11, 2022


The role of computers in daily life and the economy grows yearly, and that trend is only expected to continue for the foreseeable future. Those who learn and master computer science (CS) skills are widely expected to enjoy increased employment opportunities and more flexibility in their futures, though the U.S. currently produces too few specialists to meet future employment demands. Thus, providing exposure to CS during compulsory schooling years is believed to be key to maintaining economic growth, increasing employment outcomes for individuals, and reducing historical gaps in participation in technology fields by gender and race. Consequently, providing young people with access to quality CS education is increasingly seen as an urgent priority for public school systems in the U.S. and around the globe.

Primary objectives of CS education, as described in the “K-12 Computer Science Framework”—a guiding document assembled by several CS and STEM education groups in collaboration with school leaders across the country—are to help students “develop as learners, users, and creators of computer science knowledge and artifacts” (p. 10) and to understand the general role of computing in society. CS skills enable individuals to understand how technology works and how best to harness its potential in their personal and professional lives. CS education is distinct from digital literacy as it is primarily concerned with computer design and operations, rather than the simple use of computer software. Common occupations that heavily utilize CS skills include software engineers, data scientists, and computer network managers; however, as described below, CS skills are becoming more integral to many occupations in the economy beyond technology fields.

The past decade has been an active period of policy expansion in CS education across states and growing student engagement in CS courses. Yet, little is known about how policies may have influenced student outcomes. This report offers a first look at the relationship between recent policy changes and participation, as well as pass rates on the Advanced Placement Computer Science (AP CS) exams.

Based on our analysis looking over the last decade, we present five key findings:

  • We observe sharp, coinciding increases in both state adoption of CS education policies and overall participation in AP CS exams.
  • AP CS participation rates for all student subgroups have also increased, with representation gaps between student groups narrowing.
  • Narrowing participation gaps for females and especially Black and Latino students have been primarily driven by the introduction of a new AP CS exam (CS Principles), with gaps changing little since then.
  • Passing rates on AP CS exams have modestly increased for underrepresented student groups during this period, resulting in slightly narrower passing gaps.
  • AP CS student participation overall is associated with increased CS policy adoption, though participation gaps between over- and underrepresented groups appear to be uncorrelated with recent policy adoptions.
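The participation and passing gaps in these findings reduce to simple shares and differences. A minimal Python sketch illustrates the arithmetic; the group labels and exam counts below are invented for illustration, not figures from the College Board or the report:

```python
# Hypothetical AP CS exam counts by student group (invented numbers).
exams = {
    "group_a": {"taken": 400, "passed": 280},
    "group_b": {"taken": 600, "passed": 430},
}

def participation_share(group):
    """Fraction of all AP CS exams taken by the given group."""
    total = sum(g["taken"] for g in exams.values())
    return exams[group]["taken"] / total

def pass_rate(group):
    """Fraction of the group's exams that earned a passing score."""
    g = exams[group]
    return g["passed"] / g["taken"]

# A "gap" is the difference between the overrepresented and
# underrepresented group on each measure.
participation_gap = participation_share("group_b") - participation_share("group_a")
passing_gap = pass_rate("group_b") - pass_rate("group_a")
```

With these invented counts, group_a holds a 40% participation share against group_b's 60%, so the participation gap is 0.2; a narrowing gap over time would show this difference shrinking toward zero.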

Providing universal access to CS education

CS education is undergoing an important transformation in schools. Classes in computing and CS have long been offered in K-12 public schools, though they have been neither uniformly required nor universally available. Thus, access to CS has been uneven across student populations. Yet, the growing importance of technological and computing skills in modern society has compelled many school systems to adopt policies to provide universal access to CS education. Several reasons often motivate this expanded access.

First, expanding CS education is expected to directly benefit students. Individuals who develop expertise in computer and technology fields enjoy higher wages and employment. Even those who do not pursue technical occupations still reap these benefits, as computing and data analysis skills have been broadly integrated into many industries and occupations. Finally, CS education also benefits students who do not use computers in their future careers. Prior studies have documented cognitive and interpersonal skills that CS education uniquely provides to students, which transfer outside of computing domains. Moreover, understanding CS fundamentals contributes valuable life skills that prepare and protect students for a future in which many aspects of daily life are carried out in digital contexts.

“The growing importance of technological and computing skills in modern society has compelled many school systems to adopt policies to provide universal access to computer science education.”

Next, economies overall fare better when individuals are more technologically competent. Studies show a positive relationship between economic growth, technology, and human-capital investments in related skills. Many states and countries view computing and technology jobs as engines of economic growth; thus, providing public school students with quality CS education enables sustainable growth. Federal and local politicians often appeal to this economic rationale to justify investments in CS education to public stakeholders—early CS policy-adopter Arkansas is a prime example.

And third, universal access to high-quality CS education is necessary to close historical gaps in technology fields. Black, Latino, and Indigenous populations and women have long been underrepresented in STEM occupations that heavily rely on CS and computing skills. Given the higher wages and job prospects associated with these fields, this underrepresentation of diverse populations in STEM implicitly contributes to race- and gender-based gaps along economic lines. Developing technical skills provides a path to upward social mobility, as has been shown through the assimilation experience of some immigrant groups: those with computing and other STEM skills reach earnings parity with native workers far faster than those without these skills.

Prior research indicates that low access to CS educational opportunities and resources is a critical driver of STEM participation gaps, which tend to mirror larger socioeconomic inequalities based on race, income, or locale. For example, when the only CS offering in a school is an extracurricular robotics club, only those with intrinsic motivation and the resources to participate will gain access to this learning opportunity. Lower access to CS could manifest in various ways, from infrequent exposure to computer-based learning applications in the classroom to fewer courses being offered in high schools. Unequal access fails to explain gender-based participation gaps, however; these are likely driven by socialized gender norms that deter girls from computing and other STEM fields. Universal access, however, is expected to both provide CS skills to all students and stimulate greater engagement among underrepresented groups, increasing diversity in STEM occupations.

“Student access to computer science education is highly variable across the U.S.”

Student access to CS education is highly variable across the U.S. Though many schools have provided computer labs and classes in computer literacy (e.g., typing, internet use, word processing), CS courses go beyond basics to provide instruction on computational thinking and other digital operations, and they require teachers with these skills. In many places across the U.S., CS is only offered to students as elective courses or extracurricular activities, if at all. Leaving the provision of CS education to these voluntary contexts leaves the quality of the CS experience highly variable, and dependent on the availability of local resources. Universal access to CS education, however, is expected to set consistent learning standards, supplement constrained local resources, and ensure equal access to quality instruction.

Enacting CS education policy laws

Calls for universal CS education have been around for years—ranging from corporate efforts and nonprofit advocacy to federal awareness-raising events—though progress has been slow until very recently. Only since 2015 have these efforts yielded the critical mass to push many states to adopt sweeping change in support of CS education.

To illustrate this transformation, consider the policy changes documented through the annual “State of Computer Science Education” (State of CS) reports, co-authored by Code.org Advocacy Coalition, Computer Science Teachers Association, and Expanding Computing Education Pathways. Since 2017, the State of CS reports have promoted and tracked nine different policies intended to promote CS education in schools. 1 The nine policies are:

  • whether the state has adopted a formal plan for CS education (abbreviated as State Plan for reporting);
  • whether the state has implemented K-12 CS education standards (Standards);
  • whether state-level funding is dedicated to CS programs (Funding);
  • whether a CS teacher’s certification exists (Certification);
  • whether a state-approved pre-service teacher-preparation program for future CS educators is provided at any higher education institutions (Pre-service);
  • whether a state-level CS officer exists (State CS);
  • whether all high schools are required to offer computer science (Require HS);
  • whether a CS course can satisfy a core high school graduation requirement (Count); and,
  • whether CS can satisfy a core admissions requirement at state colleges and universities (Higher ed).

In just five years, states showed a remarkable policy transformation; Figure 1 combines and animates this evolution. 2 In the 2017 report, Arkansas was the only state that had adopted at least seven of the nine tracked policies. Meanwhile, 36 states had adopted three or fewer policies, including nine states that had adopted no state-level CS policies at all. But in the 2021 report, 24 states had at least seven policies on the books—a remarkable shift observed across all geographical regions. Only 10 states remain in the lowest adoption category, and all states have adopted at least one policy.
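The adoption bands described above amount to a simple count-and-classify step over each state's set of adopted policies. A hypothetical sketch, using the report's policy abbreviations but invented state-to-policy data (only Arkansas's high count is grounded in the text, and its specific seven policies here are assumed):

```python
# Each state maps to the set of tracked policies it has adopted.
# The policy names follow the report's abbreviations; the assignments
# below are illustrative, not actual State of CS report data.
adopted = {
    "Arkansas": {"State Plan", "Standards", "Funding", "Certification",
                 "Pre-service", "State CS", "Count"},
    "Example State": {"Count"},
}

def adoption_band(state):
    """Classify a state into the adoption bands used to describe Figure 1."""
    n = len(adopted.get(state, set()))
    if n >= 7:
        return "7-9 policies"
    if n <= 3:
        return "0-3 policies"
    return "4-6 policies"
```

Tallying states per band across the 2017 and 2021 report years is then a matter of rerunning the same classification on each year's policy sets, which is how the shift from one high-band state to 24 can be summarized.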

[Figure 1: Policy map]

Figure 1 also identifies which policies are adopted. The most commonly adopted policy is having a CS course satisfy a core high school graduation requirement, with all 50 states plus Washington, D.C., adopting it by 2021. Other popular policies include having a state CS plan, funding CS initiatives, creating a state-level CS officer, adopting K-12 CS standards, and recognizing a CS certification for teachers; each of these policy categories had more than 30 states taking action in the area by 2021.

Providing universal access to CS education in many locales has typically followed the provision of (near) universal access to personal computing devices and broadband. Though some elements of CS fundamentals can be taught without the aid of computers and an internet connection, these are required inputs for a full CS curriculum. Historically, schools and households located in low-income or rural communities have had lower access to digital infrastructure—a phenomenon widely known as the digital divide. Aside from a host of other negative consequences, the implication of this divide for CS education is that students in these contexts have fewer opportunities to regularly interact with computing devices in learning contexts and have less access to high-quality CS instruction.

More recently, however, the COVID-19 pandemic has acted as a catalyst in making real progress on closing the digital divide. Providing widespread access to needed computing resources has been an urgent priority for many school systems as they have worked to stay connected with students while schools were closed for extended periods. With new devices and ready access to the internet, previously disconnected students are beginning to regularly interact with computers to facilitate their learning. Thus, where some communities may have been less able to offer CS for these reasons in the past, we anticipate that hardware and infrastructure barriers should be less formidable moving forward.

More students are taking AP CS exams

In this active era of CS policy adoption, we explore whether these actions correspond to changes in student outcomes in CS. Are students more likely to participate and succeed in CS learning? Do race- and sex-based gaps narrow with more universal access?

To investigate these questions, we use state-level outcomes on the College Board’s AP exams in CS. AP exams are useful outcome measures for this investigation because they are standardized, administered nationally, and represent meaningful competencies in the field that are broadly recognized. This section provides background detail about the AP CS exams.

Situated at the transition point between high school and college, AP courses in multiple subjects are offered in most high schools to advanced students, typically in their final year(s) of high school. Students may opt to take the AP exam at the end of the school year to demonstrate their mastery of the course material. When students matriculate to college, many institutions will award those who passed an AP test with college credits corresponding to an introductory course in the field. Thus, participating in and passing an AP CS exam should be considered a capstone student outcome; that is, one realized after multiple years of CS learning opportunities.

Students’ participation in AP courses and exams is widely perceived as an important signal of college readiness, and many high schools have expanded their AP course offerings to signal rigor to parents and motivate students. Some scholars question the extent to which participation in AP classes genuinely increases students’ likelihood of college success (since it is primarily advanced students who enroll in these courses), and controlling for many student background characteristics sharply diminishes the apparent advantage of AP participation. Other evidence from incentive-driven expansions of AP courses in disadvantaged settings points to AP participation having a causal, positive impact on SAT/ACT scores and college enrollment. Looking across many studies of the AP program, though, the academic benefits accrue almost exclusively to those who pass the AP exam (participating in the course without passing the exam provides little, if any, academic benefit).

“Socioeconomically disadvantaged groups lack equal access to AP programming in their schools.”

Even if only those who successfully pass the AP exam benefit, socioeconomically disadvantaged groups lack equal access to AP programming in their schools. In 2014, the Department of Education’s Office for Civil Rights conducted a special data collection on student access to advanced coursework. Reporting shows Black and Latino students account for 27% of those enrolled in at least one AP course and 18% of those passing at least one AP exam, despite these groups accounting for 37% of all students. Further, these gaps are not limited to AP courses but are also evident in advanced STEM courses (like algebra II and physics).

During the years of our investigation, the College Board administered two AP exams covering CS content: Computer Science A (AP CS A) and Computer Science Principles (AP CS P). AP CS A is intended to cover material expected of a first-year CS course in college (with a heavy emphasis on coding), while AP CS P is expected to cover a first-year computing course (including more foundational content such as technology’s impacts on society and understanding how algorithms and networks function). Students in both courses will learn to design a computer program, but only students taking AP CS A will develop the algorithms and code needed for implementation. This does not necessarily mean that AP CS A is more effective—though it is more rigorous and would come after AP CS P in a course sequence. A recent College Board report concludes that students who take AP CS P (relative to those not given the chance) are more likely to take AP CS A in later high school years or declare a CS college major. Though not causal, these findings underscore the importance of AP CS P in developing student interest in the field, particularly among underrepresented student groups.

Of the two exams, AP CS A has a longer history, tracing its origins back to 1984. For much of its history, a modest 20,000 or fewer students would take the exam annually, though these numbers have begun to expand in the last decade. The AP CS P exam, however, was introduced in the 2016-17 school year and has quickly surged in popularity. By spring 2018, its second year of administration, student demand for the AP CS P exam (62,868 public school students) had already surpassed demand for AP CS A (51,645 students).

Figure 2 presents the number of exams taken between 2012 and 2020 (the most recent year with data available). For the first half of the series, AP CS A was the only AP CS exam offered, and student demand grew modestly year to year. The AP CS P exam quickly dominated once introduced. In 2020, over 150,000 students took one of these AP CS exams, with nearly two-thirds of that demand coming from AP CS P. For reference, participation in AP exams overall grew from over 950,000 students in 2012 to 1.21 million in 2020 (27% growth). The surging interest in AP CS exams has significantly outpaced general increases in the other AP subjects.

A recent comparative study of the two AP CS exams finds important differences between the two groups of test-takers in their demographics, skill mastery, and intended occupational fields. Students who take the AP CS A exam frequently take several other AP exams and intend to pursue majors in either CS or other STEM fields once in college. Conversely, students taking only the AP CS P exam reported less interest in pursuing CS or STEM majors and careers, and they expressed lower computing confidence (as expected, given the more foundational material).

Further, students who took only the AP CS P were more diverse than those who took AP CS A, though underrepresentation for Black, Latino, and female students is still apparent in both exams. 3 Figure 3 illustrates the differences in diversity between the two AP CS exams. Like the preceding figure, it shows the recent time series of AP test-takers, though instead of numerical counts we are looking at the share of Black and Latino (light blue lines) or female (dark blue lines) test-takers on the y-axis. Black and Latino students constitute between 13-18% of AP CS A test-takers for the entire series but represent 28-30% of AP CS P test-takers. Similarly, female students grew from 18% of AP CS A test-takers in 2012 to 25% in 2020; they constituted an even greater share of AP CS P test-takers during the years it was administered (growing from 30% in 2017 to 34% in 2020).

Throughout the remainder of the report, we combine student results on both AP CS exams and report pooled statistics. We do this primarily for simplicity in reporting, as most outcomes show roughly redundant patterns when analyzed separately by exam; exceptions to this will be noted in the text.

Exploring AP CS outcomes by student race and sex

The AP CS exam results provide two discrete outcomes that we use in the remaining analysis: test-taking and passing. The College Board reports state-level statistics by year and student race and sex for both outcomes, and these will be linked to state policy changes that we described earlier. This section first investigates how the expansion of testing in AP CS evolved through the lens of race and sex representation.

Before proceeding, we should note an important limitation regarding the AP CS exam passing data: When small numbers of students are present in a reported cell, the College Board censors the cell to protect students’ privacy. Cell censoring is common in states with small populations when reporting is broken out by state, year, exam, and race or gender combinations. Consequently, we are constrained in our ability to investigate state policies and their association with passing outcomes by race and sex. We will report some passing rates as pertinent below, though much of the analysis that follows uses test-taking as the primary AP CS outcome.

As discussed previously, increasing racial and gender diversity in CS and related STEM fields is an important motivating factor in adopting universal CS education policies. Have narrowing gaps in AP CS test-taking and passing coincided with the expansion of state-level CS education policies?


Figure 4 illustrates how differences in representation in AP test-taking have evolved in recent years. The figure is composed of two animated scatterplots that trace the differences in representation between overrepresented groups on the x-axis (males on the left, white and Asian students on the right) and underrepresented groups on the y-axis (females on the left, Black and Latino students on the right). On both axes are the state’s proportion of each student group represented among test-takers (referenced against the state’s population of 12th-grade students). 4 Both panels have a 45-degree reference line, marking parity in AP CS test-taking between overrepresented and underrepresented groups. Points falling below this reference line represent test-taking gaps where whites, Asians, and males continue to be overrepresented. A line is also fitted across state observations—points lying on this line share the same relative proportions in the test-taking population between under- and overrepresented groups.

[Figure 4: Scatterplots of AP CS test-taking representation by race and sex]

In 2012, the earliest year of the animation, all states are clustered into the bottom left-hand corner of the scatterplots. The position of these points shows low participation overall, and participation is especially low among Black, Latino, and female students. When play is pressed on the animation, the points shift away from the origins, though almost exclusively within the same halves of the plot areas southeast of the reference lines. The fitted line between state observations shows that representation gaps in test-taking have narrowed slightly with time (as the fitted line takes on a steeper slope, moving it closer to parity), though large gaps persist in most states.

Table 1 below provides two key metrics that help to describe how these test-taking patterns by student subgroups have evolved over time. The first metric is the ratio of participation gaps (underrepresented groups/overrepresented groups), which is essentially what the fitted lines in Figure 4 illustrate. A value of 1 represents parity between groups (just as the 45-degree line above has a slope of 1). Participation rates were more than four times higher among male 12th graders compared to females in 2012, resulting in a participation ratio of 0.24. Increasing female participation in recent years has brought them closer to parity with a 2020 value of 0.46. Table 1 also reports the difference in the share of test-takers from overrepresented groups less underrepresented groups, where a value of 0 represents a 50-50 split in test-takers’ demographics. In 2012, AP CS test-takers were just under 20% female, and just over 80% male, resulting in a test-taking share gap exceeding 62 percentage points. This gap has narrowed to less than 40 percentage points as of 2020. Similar patterns of progress are shown on race-based metrics.

[Table 1: Evolution of AP computer science participation gaps]

Table 1 shows both the participation ratios and test-taking share gaps calculated by sex and race for three selected years: the first year of data (2012), the year AP CS P was introduced (2017), and the final year (2020). Examining how these metrics have changed over the series is instructive: Much of the overall improvements in the metrics were realized in 2017 with the introduction of the AP CS P exam. Progress made in the years since has been more modest in comparison, and the gains have been larger on sex gaps rather than racial gaps.
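For concreteness, the two metrics can be computed as follows. The counts in the example are hypothetical, chosen only to land near the 2012 sex-based figures discussed above:

```python
def participation_metrics(n_under_takers, n_over_takers,
                          n_under_pop, n_over_pop):
    """Compute the two gap metrics described in the text.

    - participation ratio: (underrepresented take-up rate) divided by
      (overrepresented take-up rate); 1.0 means parity.
    - share gap: overrepresented share of test-takers minus the
      underrepresented share, in percentage points; 0 means a 50-50 split.
    """
    ratio = (n_under_takers / n_under_pop) / (n_over_takers / n_over_pop)
    total = n_under_takers + n_over_takers
    share_gap = 100.0 * (n_over_takers - n_under_takers) / total
    return ratio, share_gap

# Hypothetical counts: 5,000 female and 20,000 male test-takers out of
# equal-sized 12th-grade populations.
ratio, share_gap = participation_metrics(5_000, 20_000,
                                         1_500_000, 1_500_000)
print(ratio, share_gap)  # ratio near 0.25; share gap of 60 percentage points
```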

We find other encouraging patterns of narrowing gaps when focusing on AP CS passing rates. When the test-taking pool expands rapidly, one might be concerned that students who are induced to take the AP CS exams will not be as prepared as those students who had already prepared for AP CS before the expansion. This concern is especially salient for the AP CS P exam, which has expanded dramatically to more than 100,000 exams taken annually in just a few years. To the contrary, though, our analysis of the data suggests that passing rates among underrepresented groups have increased during this period of AP CS expansion and increased faster than those among overrepresented groups.

Figure 5 presents the passing rates on AP CS exams by sex (on the left) and race (on the right) over recent years. The x-axes represent years and the y-axes represent the passing rates for each student group; passing rates are pooled across both AP CS exams. In both panels, the overrepresented groups are passing the exams at higher rates, and an especially large margin is apparent between racial groups. Yet, during these years of participation growth, passing rates among underrepresented groups simultaneously increased. Meanwhile, the passing rates for overrepresented groups (males on the left, whites and Asians on the right) inched upward during this period of expansion. On net, the gaps between these groups narrowed, and female passing rates overtook male passing rates in 2020.

To confirm that the narrowing gaps depicted in Figure 5 are not simply driven by the surging popularity of the AP CS P exam, we separately investigated passing rates on each of the AP CS exams. The narrowing gaps observed in Figure 5 are also observed in each test. For example, female passing rates on the AP CS A exam increased from 56% (2012) to 68% (2020), and they increased on the AP CS P exam from 70% (2017) to 75% (2020). Increases of 5 or more percentage points were similarly observed among Black and Latino test-takers on both tests during this period. Meanwhile, the passing rates among overrepresented groups increased slightly on the AP CS A exam over the period, while dropping slightly on the AP CS P exam. Again, the net results showed narrowing gaps for underrepresented groups both by race and sex on both exams.

Associating CS education policy changes with AP test-taking

Finally, we explore whether states that are making more progress on their CS education policies show more favorable outcomes on AP CS exams. For example, it’s possible that those states taking more policy actions to improve universal access to CS education have seen greater uptake in AP CS participation or sharper reductions in gaps for underrepresented groups when compared with those states doing little.

Before discussing our results, though, we must acknowledge that policy adoption metrics are imperfect proxies for practice. The State of CS reports are careful to note that state policies vary widely, even within the same policy categories. Further, a state may decide to adopt a given CS education policy, but implementation may be thwarted by barriers that curtail its practical impact. Other states may put CS-enhancing practices into place even in the absence of a formalized state policy. This difficulty can be seen in Figure 6, which represents the differences in observed practices under three different policy-status categories. Figure 6 focuses on the percentage of high schools in a state offering foundational CS courses (y-axis), a practice that provides more universal access to CS for all students. The State of CS policy corresponding to this action is whether states have a policy requiring all high schools to offer CS (Require HS). The x-axis separates those states that have no policy, those that have adopted a policy with a target implementation goal in the future (in progress), and those with the policy already in force (yes).

The box-whisker plots represent the means and distributions of states observed within each of the three policy-status categories. Those states with a state policy in force have the highest mean percentage of high schools offering CS, and those with the policy in progress have higher percentages than states with no policy. Yet, the observed differences in practice across states are far smaller than the policy-status variables alone would indicate. The key point here is that we are constrained to look at the data available to us on policy status, not actual practices; consequently, we may be failing to capture important differences in practice in our analyses.

To conduct the analysis, we merged the State of CS policy adoption data with the AP CS exam data by state and year. 5 We ran a series of two-way fixed-effects models, which are intended to net out other correlated changes in test-taking behavior observed within the state over time and across other states contemporaneously. We ran a separate model on each of the nine tracked CS policies and looped this operation across different test-taking metrics as dependent variables. The results of this exercise are presented in Table 2 below.

[Table 2: Policies regressed individually with overall participation and test-taking share gaps]

The columns of Table 2 correspond to different analytical models in which the outcomes of interest are the overall test-taking rate (column 1) as well as the percentage of test-takers that are female (column 2) and Black or Latino (column 3). The nine CS policies are represented down the row headings. The cell corresponding to a row-column combination represents the point estimate and standard error of a two-way fixed-effects model with the policy in the row heading being used as the explanatory variable and the student group in the column heading as the output of interest. Cells are color coded for ease of interpretation to highlight where the estimates are largest.
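The estimator behind each cell is a standard two-way fixed-effects regression, which can be computed by double-demeaning the outcome and the policy indicator within states and within years. The following minimal Python sketch (a synthetic, noise-free panel; purely illustrative, not our actual estimation code) shows the estimator recovering a known effect:

```python
def twfe_slope(panel):
    """Two-way fixed-effects slope via double-demeaning on a balanced
    state-by-year panel; panel maps (state, year) -> (x, y)."""
    states = sorted({s for s, _ in panel})
    years = sorted({t for _, t in panel})
    S, T = len(states), len(years)

    def demean(series):
        # Subtract state means and year means, add back the grand mean.
        grand = sum(series.values()) / (S * T)
        by_s = {s: sum(series[(s, t)] for t in years) / T for s in states}
        by_t = {t: sum(series[(s, t)] for s in states) / S for t in years}
        return {k: series[k] - by_s[k[0]] - by_t[k[1]] + grand
                for k in series}

    x = demean({k: v[0] for k, v in panel.items()})
    y = demean({k: v[1] for k, v in panel.items()})
    return sum(x[k] * y[k] for k in panel) / sum(x[k] ** 2 for k in panel)

# Synthetic panel: y = beta*x + state effect + year effect, no noise.
beta = 2.0                                    # known "true" policy effect
alpha = {"A": 1.0, "B": 2.0, "C": 3.0}        # state fixed effects
gamma = {2018: 0.0, 2019: 0.5, 2020: 1.0}     # year fixed effects
adopted = {"A": 2019, "B": 2020, "C": None}   # staggered policy adoption
panel = {}
for s in alpha:
    for t in gamma:
        x = 1.0 if adopted[s] is not None and t >= adopted[s] else 0.0
        panel[(s, t)] = (x, beta * x + alpha[s] + gamma[t])

print(round(twfe_slope(panel), 6))  # recovers beta on this noise-free panel
```

The double-demeaning absorbs anything constant within a state across years and anything common to all states in a year, which is what "netting out" correlated changes means in practice.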

The high-level summary of the Table 2 results is that several of these CS education policies are positively associated with AP CS test-taking behavior among students overall. The first column shows the largest and most statistically significant estimates correspond to policies that 1) allocate state funding for CS education initiatives, 2) require state colleges to recognize CS courses as STEM courses in admissions decisions, and 3) require all high schools in the state to offer CS courses. We are generally unsurprised at this result, as all three of these policies feasibly have a direct impact on late-high-school students, who are the target population for AP CS exams. Other policies, like offering a teacher certification program in CS education or having a state-level officer responsible for CS education, would likely influence these late-high-school outcomes through more indirect means.

Another finding from Table 2 is that none of the policies seem to be associated with a relative increase in the proportion of test-takers from underrepresented groups. Only one point estimate is significant in column 2 (whether a CS course can satisfy a core graduation requirement), and it is in the direction of widening the sex-based gap. This result must be taken with a grain of salt because this policy (Count) was primarily adopted in the earlier years of the past decade, when gaps were at their largest. A crucial factor driving these estimates is the (almost) constant proportion of underrepresented test-takers between 2018 and 2020, the years for which we have an overlap of policy implementation and AP test-taking data.

We should also note that with the high levels of state policy activity coinciding with a rapid expansion of AP CS test-taking, we cannot claim that any of the point estimates reported in Table 2 represent a causal relationship. Rather, this is our best attempt to isolate associations that are unique to certain policy-outcome combinations to explore the relationship; results are not intended to be definitive evaluations of any given policy.

Even if the expansion of these CS policies had little apparent relationship with test-taking gaps overall, this does not mean it was the experience of students in every state. We wish to explore whether surges in the performance of underrepresented groups accompanied CS policy expansions in any state, and we do this in the map presented in Figure 7.

Figure 7 presents a bivariate map of the U.S., where states are color coded based on observed changes in two directions: growth in state-level CS education policy adoption and growth in Black and Latino AP CS test-taking rates. States above the median on both dimensions are shaded in dark blue, and states below the median on both are shaded in light gray. The light blue and dark gray shades represent states high on one dimension or the other, but not both.
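The bivariate classification underlying the map is a median split on each growth measure. A minimal sketch, using hypothetical state values:

```python
from statistics import median

def bivariate_classes(policy_growth, participation_growth):
    """Assign each state to one of four bivariate-map classes by
    median splits on the two growth measures (as in Figure 7)."""
    p_med = median(policy_growth.values())
    g_med = median(participation_growth.values())
    labels = {(True, True): "high policy / high participation",
              (True, False): "high policy / low participation",
              (False, True): "low policy / high participation",
              (False, False): "low policy / low participation"}
    return {state: labels[(policy_growth[state] > p_med,
                           participation_growth[state] > g_med)]
            for state in policy_growth}

# Hypothetical growth values for four states:
policy_growth = {"NY": 5, "TX": 1, "FL": 4, "MT": 2}
participation_growth = {"NY": 0.10, "TX": 0.01, "FL": 0.02, "MT": 0.08}
print(bivariate_classes(policy_growth, participation_growth))
```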

This analysis reveals some surprising geographical differences. Using the Mississippi River as the dividing line, nearly all states with the highest increases in test-taking among Black and Latino student groups are east of the river (Nevada and Montana are the only exceptions west of the Mississippi). Among the eastern states with the highest test-taking increases, states are split about evenly between high and low policy-adoption categories. Contrast this pattern against states west of the Mississippi, where nearly all states are in the low-growth category for Black and Latino AP CS test-taking, with over two-thirds of those also in the low-growth policy category.

Reflecting on the map leaves us with two important lessons. First, the map vividly illustrates that policy adoption itself is not an accurate predictor of stronger outcomes for underrepresented groups. We observe many states with high policy growth that see comparably little improvement in test-taking outcomes for Black and Latino students; meanwhile, we also see many examples with high growth among Black and Latino students that did not display the same aggressive levels of policy adoption.

“Policy adoption itself is not an accurate predictor of stronger outcomes for underrepresented groups.”

And second, the map suggests that geographical commonalities may be an important lever supporting CS student outcomes. It is unclear from this analysis how these geographical relationships matter, but this offers some useful direction for future work. A suggestive clue comes from the 2021 State of CS report (p. 14), which shows a policy map of the percentage of schools offering foundational CS, with a similar East-West divide evident. We confirm that the percentage of high schools offering CS at the state level is also positively correlated with both our measure of policy growth and increasing Black and Latino participation. Though merely suggestive, more universal high school CS offerings present a clear mechanism through which greater shares of underrepresented groups will be exposed to CS instruction, and therefore participate in meaningful coursework leading to AP CS exams.

Concluding discussion and recommendations

We investigated CS education policy adoption and AP CS exam outcomes in recent years—both of which saw rapid expansion during this time. We found gaps modestly narrowing for historically underrepresented student groups in CS and STEM fields, though much of the narrowing was associated with the introduction of the AP CS P exam. Our further investigations showed that overall participation rates on AP CS exams appear to be associated with CS policy adoption, though none of these policies shows a clear relationship with increasing the share of historically underrepresented groups among test-takers.

We recognize that some of these findings cut against a dominant narrative in CS education circles, which states that increased access to CS education will lead to narrowing participation gaps. While we do find gaps narrowing in recent years, these do not appear to be related to policy adoption. We clarify, however, that these results are based on a narrow dataset immediately in the wake of policy changes. These findings are not observed over long periods of implementation nor on a broad set of outcomes, which could counter these early patterns. For example, recall from our earlier discussion that white and Asian students are more likely to enroll in a richer set of STEM and AP-level courses generally , and they are more likely to engage in CS courses specifically . It seems probable that, as states kickstart CS education initiatives, the overrepresented student groups that enjoy preferred access may be better positioned to take advantage of newly available opportunities. Similarly, more fundamental outcomes like student exposure to coding or discussions of new technology in class (which contrast with the capstone AP CS outcomes in our data) may be more likely to have a disproportionate impact on underrepresented groups, narrowing formative exposure gaps. In either case, it seems plausible that narrowing CS and STEM participation gaps over a period of several years of policy implementation may still result even if AP CS gaps appear to be uncorrelated with short-term policy changes.

“Even as AP computer science test-taking has increased among underrepresented groups, the passing rate has also increased, resulting in narrower gaps with overrepresented students.”

Our results also provide some unambiguously encouraging news. First, even as AP CS test-taking has increased among underrepresented groups, the passing rate has also increased, resulting in narrower gaps with overrepresented students. Also, even states that have not been as active in promoting CS education policies have still shown large surges in AP CS participation; thus, even in the absence of policy action, we see reason to be optimistic about the trajectory of CS education overall.

We hope these findings invite reflection and re-evaluation of how states are approaching the expansion of CS education. As we close, we offer the following recommendations to state education agencies and policymakers working to expand CS education:

  • Track multiple dimensions of CS education. CS is unique among academic disciplines in that it has previously been offered as an elective, but it is becoming more integrated into the academic core curriculum. Consequently, we do not have systematic measures in place tracking student competencies, access to coursework, teacher quality, or other similar outcomes as we do for core academic disciplines. More consistent measurement of inputs and outputs will help to steer states’ actions in CS.
  • Prioritize diversity and inclusion in implementing CS policies. The oft-invoked link between expanding universal access to CS education and narrower participation and interest gaps in CS and STEM does have some empirical support, but certainly not enough to conclude that one necessarily leads to the other. Leaders and educators must ensure CS policies are implemented in inclusive ways to increase the chances of narrowing these persistent gaps. We encourage attention both to the classroom experiences of underrepresented student groups and to CS educator diversity, as race- and gender-based role modeling is an important predictor of future interest in CS and STEM.
  • Take the long view on CS implementation. This report documents a flurry of activity around CS education in recent years, though we also urge patience and strategy here. Many states are still building the capacity to offer high-quality CS education—perhaps not so much in terms of physical capital (devices and broadband infrastructure), but more so in human capital (building capacity in the teacher workforce and scaling up high-quality instruction). By nature, these investments will take time to mature before students fully realize the benefits. We should not be discouraged by lackluster immediate results.

Computing and technology will be integral parts of the economic and social future awaiting the children of today. Providing access to high-quality CS education will be key in ensuring that all students can meet that future head on.

The authors thank Logan Booker and Marguerite Franco for excellent research assistance, and Nicol Turner Lee, Pat Yongpradit, and Jon Valant for helpful feedback.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Support for this publication was generously provided by Howmet Aerospace Foundation. The findings, interpretations, and conclusions in this report are not influenced by any donation. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment.

  • The nine policies that the State of CS annual report tracks were first described in a Code.org policy document, “ The Nine Policy Ideas to Make Computer Science Fundamental to K-12 Education ,” (n.d.), though we do not know of the policies being systematically tracked until the first State of CS report in 2017. The 2017 report included a 10th policy on promoting diversity in CS education, though this policy was dropped in subsequent years.
  • The State of CS report counts states that have CS policies in progress (that is, the policy decision has been passed or issued, though the policies have a target implementation date in the future) as earning a half point on their policy tracker. Policies that have been passed and are implemented earn a full point. For ease of interpretation, we counted both implemented policies and policies in progress as earning a full point.
  • We focus on Black, Latino, white, and Asian students because other racial/ethnic groups are inconsistently recorded over the time series; they represent roughly 90-95% of observations across years. Our results are qualitatively similar if we include other underrepresented racial/ethnic groups in the calculations.
  • Student demographic information on 12th graders comes from the Department of Education’s Common Core of Data. Not all students taking an AP exam will be 12th graders, but we use their demographics as a baseline due to the tendency of younger cohorts of students to become progressively more racially diverse with time.
  • This merging process results in three years in which we have observations of both CS education policies in place and AP CS outcomes (2018, 2019, and 2020). Because some of the policies documented in the 2017 State of CS report may not have been passed and implemented before the AP CS administration in the spring of that year, we lag all of the State of CS reports back one year before merging with AP CS exam results.
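The scoring and merging steps described in these notes can be sketched in pandas. This is a hypothetical illustration only: the state codes, policy counts, exam counts, and column names (`report_year`, `policy_count`, and so on) are all invented for the example and do not come from the underlying Code.org or College Board datasets.

```python
import pandas as pd

# Hypothetical State of CS policy tracker: one row per state per report year.
policies = pd.DataFrame({
    "state": ["AL", "AL", "AR", "AR"],
    "report_year": [2018, 2019, 2018, 2019],
    "implemented": [4, 5, 6, 7],   # policies fully in place (full point)
    "in_progress": [1, 0, 2, 1],   # passed but not yet implemented (also full point here)
})

# Count both implemented and in-progress policies as a full point, as in the report.
policies["policy_count"] = policies["implemented"] + policies["in_progress"]

# Lag each report back one year so tracked policies plausibly precede the
# spring AP CS administration they are merged with.
policies["exam_year"] = policies["report_year"] + 1

# Hypothetical AP CS outcomes by state and exam year.
ap_outcomes = pd.DataFrame({
    "state": ["AL", "AL", "AR", "AR"],
    "exam_year": [2019, 2020, 2019, 2020],
    "ap_cs_exams": [900, 1100, 1500, 1700],
})

# Join policies-in-place to the following year's exam results.
merged = policies.merge(ap_outcomes, on=["state", "exam_year"], how="inner")
print(merged[["state", "exam_year", "policy_count", "ap_cs_exams"]])
```

The lag is applied before the merge, so a report published in year t is paired with exam results from year t + 1.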


How technology is shaping learning in higher education

About the authors.

This article is a collaborative effort by Claudio Brasca, Charag Krishnan, Varun Marya, Katie Owen, Joshua Sirois, and Shyla Ziade, representing views from McKinsey’s Education Practice.

The COVID-19 pandemic forced a shift to remote learning overnight for most higher-education students, starting in the spring of 2020. To complement video lectures and engage students in the virtual classroom, educators adopted technologies that enabled more interactivity and hybrid models of online and in-person activities. These tools changed learning, teaching, and assessment in ways that may persist after the pandemic. Investors have taken note. Edtech start-ups raised record amounts of venture capital in 2020 and 2021, and market valuations for bigger players soared.

A study conducted by McKinsey in 2021 found that to engage most effectively with students, higher-education institutions can focus on eight dimensions  of the learning experience. In this article, we describe the findings of a study of the learning technologies that can enable aspects of several of those eight dimensions (see sidebar “Eight dimensions of the online learning experience”).

Eight dimensions of the online learning experience

Leading online higher-education institutions focus on eight key dimensions of the learning experience across three overarching principles.

Seamless journey

Clear education road map: “My online program provides a road map to achieve my life goals and helps me structure my day to day to achieve steady progress.”

Seamless connections: “I have one-click access to classes and learning resources in the virtual learning platform through my laptop or my phone.”

Engaging teaching approach

Range of learning formats: “My program offers a menu of engaging courses with both self-guided and real-time classes, and lots of interaction with instructors and peers.”

Captivating experiences: “I learn from the best professors and experts. My classes are high quality, with up-to-date content.”

Adaptive learning: “I access a personalized platform that helps me practice exercises and exams and gives immediate feedback without having to wait for the course teacher.”

Real-world skills application: “My online program helps me get hands-on practice using exciting virtual tools to solve real-world problems.”

Caring network

Timely support: “I am not alone in my learning journey and have adequate 24/7 support for academic and nonacademic issues.”

Strong community: “I feel part of an academic community and I’m able to make friends online.”

In November 2021, McKinsey surveyed 600 faculty members and 800 students from public and private nonprofit colleges and universities in the United States, including minority-serving institutions, about the use and impact of eight different classroom learning technologies (Exhibit 1). (For more on the learning technologies analyzed in this research, see sidebar “Descriptions of the eight learning technologies.”) To supplement the survey, we interviewed industry experts and higher-education professionals who make decisions about classroom technology use. We discovered which learning tools and approaches have seen the highest uptake, how students and educators view them, the barriers to higher adoption, how institutions have successfully adopted innovative technologies, and the notable impacts on learning (for details about our methodology, see sidebar “About the research”).

Double-digit growth in adoption and positive perceptions

Descriptions of the eight learning technologies.

  • Classroom interactions: These are software platforms that allow students to ask questions, make comments, respond to polls, and attend breakout discussions in real time, among other features. They are downloadable and accessible from phones, computers, and tablets, relevant to all subject areas, and useful for remote and in-person learning.
  • Classroom exercises: These platforms gamify learning with fun, low-stakes competitions, pose problems to solve during online classes, allow students to challenge peers to quizzes, and promote engagement with badges and awards. They are relevant to all subject areas.
  • Connectivity and community building: A broad range of informal, opt-in tools, these allow students to engage with one another and instructors and participate in the learning community. They also include apps that give students 24/7 asynchronous access to lectures, expanded course materials, and notes with enhanced search and retrieval functionality.
  • Group work: These tools let students collaborate in and out of class via breakout/study rooms, group preparation for exams and quizzes, and streamlined file sharing.
  • Augmented reality/virtual reality (AR/VR): Interactive simulations immerse learners in course content, such as advanced lab simulations for hard sciences, medical simulations for nursing, and virtual exhibit tours for the liberal arts. AR can be offered with proprietary software on most mobile or laptop devices. VR requires special headsets, proprietary software, and adequate classroom space for simultaneous use.
  • AI adaptive course delivery: Cloud-based, AI-powered software adapts course content to a student’s knowledge level and abilities. These are fully customizable by instructors and available in many subject areas, including business, humanities, and sciences.
  • Machine learning–powered teaching assistants: Also known as chatbot programs, machine learning–powered teaching assistants answer student questions and explain course content outside of class. These can auto-create, deliver, and grade assignments and exams, saving instructors’ time; they are downloadable from mobile app stores and can be accessed on personal devices.
  • Student progress monitoring: These tools let instructors monitor academic progress, content mastery, and engagement. Custom alerts and reports identify at-risk learners and help instructors tailor the content or their teaching style for greater effectiveness. This capability is often included with subscriptions to adaptive learning platforms.

Survey respondents reported a 19 percent average increase in overall use of these learning technologies since the start of the COVID-19 pandemic. Technologies that enable connectivity and community building, such as social media–inspired discussion platforms and virtual study groups, saw the biggest uptick in use—49 percent—followed by group work tools, which grew by 29 percent (Exhibit 2). These technologies likely fill the void left by the lack of in-person experiences more effectively than individual-focused learning tools such as augmented reality and virtual reality (AR/VR). Classroom interaction technologies such as real-time chatting, polling, and breakout room discussions were the most widely used tools before the pandemic and remain so; 67 percent of survey respondents said they currently use these tools in the classroom.

About the research

In November 2021, McKinsey surveyed 634 faculty members and 818 students from public, private, and minority-serving colleges and universities over a ten-day period. The survey included only students and faculty who had some remote- or online-learning experience with any of the eight featured technologies. Respondents were 63 percent female, 35 percent male, and 2 percent other gender identities; 69 percent White, 18 percent Black or African American, 8 percent Asian, and 4 percent other ethnicities; and represented every US region. The survey asked respondents about their:

  • experiences with technology in the classroom pre-COVID-19;
  • experiences with technology in the classroom since the start of the COVID-19 pandemic; and
  • desire for future learning experiences in relation to technology.

The shift to more interactive and diverse learning models will likely continue. One industry expert told us, “The pandemic pushed the need for a new learning experience online. It recentered institutions to think about how they’ll teach moving forward and has brought synchronous and hybrid learning into focus.” Consequently, many US colleges and universities are actively investing to scale up their online and hybrid program offerings .

Differences in adoption by type of institution observed in the research

  • Historically Black colleges and universities (HBCUs) and tribal colleges and universities made the most use of classroom interactions and group work tools (55 percent) and the least use of tools for monitoring student progress (15 percent).
  • Private institutions used classroom interaction technologies (84 percent) more than public institutions (63 percent).
  • Public institutions, often associated with larger student populations and course sizes, employed group work and connectivity and community-building tools more often than private institutions.
  • The use of AI teaching-assistant technologies increased significantly more at public institutions (30 percent) than at private institutions (9 percent), though overall usage remained comparatively higher at private institutions.
  • The use of tools for monitoring student progress increased by 14 percent at private institutions, versus no growth at public institutions.

Some technologies lag behind in adoption. Tools enabling student progress monitoring, AR/VR, machine learning–powered teaching assistants (TAs), AI adaptive course delivery, and classroom exercises are currently used by less than half of survey respondents. Anecdotal evidence suggests that technologies such as AR/VR require a substantial investment in equipment and may be difficult to use at scale in classes with high enrollment. Our survey also revealed utilization disparities based on size. Small public institutions use machine learning–powered TAs, AR/VR, and technologies for monitoring student progress at double or more the rates of medium and large public institutions, perhaps because smaller, specialized schools can make more targeted and cost-effective investments. We also found that medium and large public institutions made greater use of connectivity and community-building tools than small public institutions (57 to 59 percent versus 45 percent). Although the uptake of AI-powered tools was slower, higher-education experts we interviewed predict their use will increase; they allow faculty to tailor courses to each student’s progress, reduce their workload, and improve student engagement at scale (see sidebar “Differences in adoption by type of institution observed in the research”).

While many colleges and universities are interested in using more technologies to support student learning, the top three barriers indicated are lack of awareness, inadequate deployment capabilities, and cost (Exhibit 3).

Students want entertaining and efficient tools

More than 60 percent of students said that all the classroom learning technologies they’ve used since COVID-19 began had improved their learning and grades (Exhibit 4). However, two technologies earned higher marks than the rest for boosting academic performance: 80 percent of students cited classroom exercises, and 71 percent cited machine learning–powered teaching assistants.

Although AR/VR is not yet widely used, 37 percent of students said they are “most excited” about its potential in the classroom. While 88 percent of students believe AR/VR will make learning more entertaining, just 5 percent said they think it will improve their ability to learn or master content (Exhibit 5). Industry experts confirmed that while there is significant enthusiasm for AR/VR, its ability to improve learning outcomes is uncertain. Some data look promising: for example, in a recent pilot study (“Immersive biology in the Alien Zoo: A Dreamscape Learn software product,” Dreamscape Learn, accessed October 2021), students who used a VR tool to complete coursework for an introductory biology class improved their subject mastery by an average of two letter grades.

Faculty embrace new tools but would benefit from more technical support and training

Faculty gave learning tools even higher marks than students did, for ease of use, engagement, access to course resources, and instructor connectivity. They also expressed greater excitement than students did for the future use of technologies. For example, while more than 30 percent of students expressed excitement for AR/VR and classroom interactions, more than 60 percent of faculty were excited about those, as well as machine learning–powered teaching assistants and AI adaptive technology.

Eighty-one percent or more of faculty said they feel the eight learning technology tools are a good investment of time and effort relative to the value they provide (Exhibit 6). Expert interviews suggest that employing learning technologies can be a strain on faculty members, but those we surveyed said this strain is worthwhile.

While faculty surveyed were enthusiastic about new technologies, experts we interviewed stressed some underlying challenges. For example, digital-literacy gaps have been more pronounced since the pandemic because it forced the near-universal adoption of some technology solutions, deepening a divide that was unnoticed when adoption was sporadic. More tech-savvy instructors are comfortable with interaction-engagement-focused solutions, while staff who are less familiar with these tools prefer content display and delivery-focused technologies.

According to experts we interviewed, learning new tools and features can bring on general fatigue. An associate vice president of e-learning at one university told us that faculty there found designing and executing a pilot study of VR for a computer science class difficult. “It’s a completely new way of instruction. . . . I imagine that the faculty using it now will not use it again in the spring.” Technical support and training help. A chief academic officer of e-learning who oversaw the introduction of virtual simulations for nursing and radiography students said that faculty holdouts were permitted to opt out but not to delay the program. “We structured it in a ‘we’re doing this together’ way. People who didn’t want to do it left, but we got a lot of support from vendors and training, which made it easy to implement simulations.”

Reimagining higher education in the United States

Reimagining higher education in the United States

Takeaways from our research.

Despite the growing pains of digitizing the classroom learning experience, faculty and students believe there is a lot more they can gain. Faculty members are optimistic about the benefits, and students expect learning to stay entertaining and efficient. While adoption levels saw double-digit growth during the pandemic, many classrooms have yet to experience all the technologies. For institutions considering the investment, or those that have already started, there are several takeaways to keep in mind.

  • It’s important for administration leaders, IT, and faculty to agree on what they want to accomplish by using a particular learning technology. Case studies and expert interviews suggest institutions that seek alignment from all their stakeholders before implementing new technologies are more successful. Is the primary objective student engagement and motivation? Better academic performance? Faculty satisfaction and retention? Once objectives are set, IT staff and faculty can collaborate more effectively in choosing the best technology and initiating programs.
  • Factor in student access to technology before deployment. As education technology use grows, the digital divide for students puts access to education at risk. While all the institution types we surveyed use learning technologies in the classroom, they do so to varying degrees. For example, 55 percent of respondents from historically Black colleges and universities and tribal colleges and universities use classroom interaction tools. This is lower than public institutions’ overall utilization rate of 64 percent and private institutions’ utilization rate of 84 percent. Similarly, 15 percent of respondents from historically Black colleges and universities and tribal colleges and universities use tools for monitoring student progress, while the overall utilization rate for both public and private institutions is 25 percent.
  • High-quality support eases adoption for students and faculty. Institutions that have successfully deployed new learning technologies provided technical support and training for students and guidance for faculty on how to adapt their course content and delivery. For example, institutions could include self-service resources, standardize tools for adoption, or provide stipend opportunities for faculty who attend technical training courses. One chief academic officer told us, “The adoption of platforms at the individual faculty level can be very difficult. Ease of use is still very dependent upon your IT support representative and how they will go to bat to support you.”
  • Agree on impact metrics and start measuring in advance of deployment. Higher-education institutions often don’t have the means to measure the impact of their investment in learning technologies, yet it’s essential for maximizing returns. Attributing student outcomes to a specific technology can be complex due to the number of variables involved in academic performance. However, prior to investing in learning technologies, the institution and its faculty members can align on a core set of metrics to quantify and measure their impact. One approach is to measure a broad set of success indicators, such as tool usage, user satisfaction, letter grades, and DFW rates (the percentage of students who receive a D, F, or Withdraw) each term. The success indicators can then be correlated by modality—online versus hybrid versus in-class—to determine the impact of specific tools. Some universities have offered faculty grants of up to $20,000 for running pilot programs that assess whether tools are achieving high-priority objectives. “If implemented properly, at the right place, and with the right buy-in, education technology solutions are absolutely valuable and have a clear ROI,” a senior vice president of academic affairs and chief technology officer told us.
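The measurement approach in the last takeaway, computing DFW rates per term and comparing them across delivery modalities, can be sketched in a few lines of Python. The records, modality labels, and helper function below are our own invention for illustration; an institution would substitute its actual registrar data.

```python
# Hypothetical per-student records: (modality, final letter grade).
records = [
    ("online", "A"), ("online", "D"), ("online", "W"), ("online", "B"),
    ("hybrid", "B"), ("hybrid", "C"), ("hybrid", "F"),
    ("in_class", "A"), ("in_class", "B"), ("in_class", "C"), ("in_class", "W"),
]

def dfw_rate(grades):
    """Share of students receiving a D, F, or Withdraw."""
    dfw = sum(1 for g in grades if g in {"D", "F", "W"})
    return dfw / len(grades)

# Group grades by modality, then compare DFW rates across delivery modes.
by_modality = {}
for modality, grade in records:
    by_modality.setdefault(modality, []).append(grade)

for modality, grades in sorted(by_modality.items()):
    print(f"{modality}: DFW rate = {dfw_rate(grades):.0%}")
```

In practice the same grouping would be repeated each term and alongside other success indicators (tool usage, user satisfaction, letter-grade distributions) so that changes can be tracked from a pre-deployment baseline.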

In an earlier article , we looked at the broader changes in higher education that have been prompted by the pandemic. But perhaps none has advanced as quickly as the adoption of digital learning tools. Faculty and students see substantial benefits, and adoption rates are a long way from saturation, so we can expect uptake to continue. Institutions that want to know how they stand in learning tech adoption can measure their rates and benchmark them against the averages in this article and use those comparisons to help them decide where they want to catch up or get ahead.

Claudio Brasca is a partner in McKinsey’s Bay Area office, where Varun Marya is a senior partner; Charag Krishnan is a partner in the New Jersey office; Katie Owen is an associate partner in the St. Louis office, where Joshua Sirois is a consultant; and Shyla Ziade is a consultant in the Denver office.

The authors wish to thank Paul Kim, chief technology officer and associate dean at Stanford School of Education, and Ryan Golden for their contributions to this article.


Harvard Education Press


Teaching About Technology in Schools Through Technoskeptical Inquiry

June 3, 2024 | victorialynn | Harvard Educational Review Contributors, Voices in Education

By Jacob Pleasants, Daniel G. Krutka, and T. Philip Nichols

New technologies are rapidly transforming our societies, our relationships, and our schools. Look no further than the intense — and often panicked — discourse around generative AI , the metaverse , and the creep of digital media into all facets of civic and social life . How are schools preparing students to think about and respond to these changes?

In various ways, students are taught how to use technologies in school. Most schools teach basic computing skills and many offer elective vocational-technical classes. But outside of occasional conversations around digital citizenship, students rarely wrestle with deeper questions about the effects of technologies on individuals and society.

Decades ago, Neil Postman (1995) argued for a different form of technology education focused on teaching students to critically examine technologies and their psychological and social effects. While Postman’s ideas have arguably never been more relevant, his suggestion to add technology education as a separate subject to a crowded curriculum gained little traction. Alternatively, we argue that technology education could be an interdisciplinary endeavor that occurs across core subject areas. Technology is already a part of English Language Arts (ELA), Science, and Social Studies instruction. What is missing is a coherent vision and common set of practices and principles that educators can use to align their efforts.

To provide a coherent vision, in our recent HER article , we propose “technoskepticism” as an organizing goal for teaching about technology. We define technoskepticism as a critical disposition and practice of investigating the complex relationships between technologies and societies. A technoskeptical person is not necessarily anti-technology, but rather one who deeply examines technological issues from multiple dimensions and perspectives akin to an art critic.

We created the Technoskepticism Iceberg as a framework to support teachers and students in conducting technological inquiries. The metaphor of an iceberg conveys how many important influences of technology lie beneath our conscious awareness. People often perceive technologies as tools (the “visible” layer of the iceberg), but technoskepticism requires that they be seen as parts of systems (with interactions that produce many unintended effects) and embedded with values about what is good and desirable (and for whom). The framework also identifies three dimensions of technology that students can examine. The technical dimension concerns the design and functions of a technology, including how it may work differently for different people. The psychosocial dimension addresses how technologies change our individual cognition and our larger societies. The political dimension considers who makes decisions concerning the terms, rules, or laws that govern technologies.

To illustrate these ideas, how might we use the Technoskeptical Iceberg to interrogate generative AI such as ChatGPT in the core subject areas?

A science/STEM classroom might focus on the technical dimension by investigating how generative AI works and demystifying its ostensibly “intelligent” capabilities. Students could then examine the infrastructures involved in AI systems , such as immense computing power and specialized hardware that in turn have profound environmental consequences. A teacher could ask students to use their values to weigh the costs and potential benefits of ChatGPT.

A social studies class could investigate the psychosocial dimension through the longer histories of informational technologies (e.g., the printing press, telegraph, internet, and now AI) to consider how they shifted people’s lives. They could also explore political questions about what rules or regulations governments should impose on informational systems that include people’s data and intellectual property.

In an ELA classroom, students might begin by investigating the psychosocial dimensions of reading and writing, and the values associated with different literacy practices. Students could consider how the concept of “authorship” shifts when one writes by hand, with word processing software, or using ChatGPT. Or how we are to engage with AI-generated essays, stories, and poetry differently than their human-produced counterparts. Such conversations would highlight how literary values are mediated by technological systems . 

Students who use technoskepticism to explore generative AI technologies should be better equipped to act as citizens seeking to advance just futures in and out of schools. Our questions are, what might it take to establish technoskepticism as an educational goal in schools? What support will educators need? And what might students teach us through technoskeptical inquiries?

Postman, N. (1995). The End of Education: Redefining the Value of School. Vintage Books.

About the Authors

Jacob Pleasants is an assistant professor of science education at the University of Oklahoma. Through his teaching and research, he works to humanize STEM education by helping students engage with issues at the intersection of STEM and society.

Daniel G. Krutka is a dachshund enthusiast, former high school social studies teacher, and associate professor of social studies education at the University of North Texas. His research concerns technology, democracy, and education, and he is the cofounder of the Civics of Technology project ( www.civicsoftechnology.org ).

T. Philip Nichols is an associate professor in the Department of Curriculum and Instruction at Baylor University. He studies the digitalization of public education and the ways science and technology condition the ways we practice, teach, and talk about literacy.

They are the authors of “What Relationships Do We Want with Technology? Toward Technoskepticism in Schools” in the Winter 2023 issue of Harvard Educational Review.

Artificial Intelligence in Education: Implications for Policymakers, Researchers, and Practitioners

  • Original research
  • Open access
  • Published: 04 June 2024

  • Dirk Ifenthaler   ORCID: orcid.org/0000-0002-2446-6548 1 , 2 ,
  • Rwitajit Majumdar 3 ,
  • Pierre Gorissen 4 ,
  • Miriam Judge 5 ,
  • Shitanshu Mishra 6 ,
  • Juliana Raffaghelli 7 &
  • Atsushi Shimada 8  

One trending theme within research on learning and teaching is an emphasis on artificial intelligence (AI). While AI offers opportunities in the educational arena, blindly replacing human involvement is not the answer. Instead, current research suggests that the key lies in harnessing the strengths of both humans and AI to create a more effective and beneficial learning and teaching experience. Thus, the importance of ‘humans in the loop’ is becoming a central tenet of educational AI. As AI technology advances at breakneck speed, every area of society, including education, needs to engage with and explore the implications of this phenomenon. Therefore, this paper aims to assist in this process by examining the impact of AI on education from researchers’ and practitioners' perspectives. The authors conducted a Delphi study involving a survey administered to N  = 33 international professionals followed by in-depth face-to-face discussions with a panel of international researchers to identify key trends and challenges for deploying AI in education. The results indicate that the three most important and impactful trends were (1) privacy and ethical use of AI; (2) the importance of trustworthy algorithms; and (3) equity and fairness. Unsurprisingly, these were also identified as the three key challenges. Based on these findings, the paper outlines policy recommendations for AI in education and suggests a research agenda for closing identified research gaps.

1 Introduction

Artificial intelligence (AI) is finding its way into people's everyday lives at breathtaking speed and with almost unlimited possibilities. Typical points of contact with AI include pattern, image and speech recognition, auto-completion or correction suggestions for digital search queries. Since the 1950s, AI has been recognised in computer science and interdisciplinary fields such as philosophy, cognitive science, neuroscience, and economics (Tegmark, 2018 ). AI refers to the attempt to develop machines that can do things that were previously only possible using human cognition (Zeide, 2019 ). In contrast to humans, however, AI systems can process much more data in real-time (De Laat et al., 2020 ).

AI in education represents a generic term to describe a wide collection of different technologies, algorithms, and related multimodal data applied in education's formal, non-formal, and informal contexts. It involves techniques such as data mining, machine learning, natural language processing, large language models (LLMs), generative models, and neural networks. The still-emerging field of AI in education has introduced new frameworks, methodological approaches, and empirical investigations into educational research; for example, novel methods in academic research include machine learning, network analyses, and empirical approaches based on computational modelling experiments (Bozkurt et al., 2021 ).

With the emerging opportunities of AI, learning and teaching may be supported in situ and in real-time for more efficient and valid solutions (Ifenthaler & Schumacher, 2023 ). Hence, AI has the potential to further revolutionise the integration of human and artificial intelligence and impact human and machine collaboration in learning and teaching (De Laat et al., 2020 ). The discourse around the utilization of AI in education shifted from being narrowly focused on automation-based tasks to the augmentation of human capabilities linked to learning and teaching (Chatti et al., 2020 ). Notably, the concept of ‘humans in the loop’ (U.S. Department of Education, 2023 ) has gained more traction in recent education discourse as concerns about ethics, risks, and equity emerge.

Due to the remaining challenges of implementing meaningful AI in educational contexts, especially for more sophisticated tasks, the reciprocal collaboration of humans and AI might be a suitable approach for enhancing the capacities of both (Baker, 2016 ). However, the importance of understanding how AI, as a stakeholder among humans, selects and acquires data in the process of learning and knowledge creation, learns to process and forget information, and shares knowledge with collaborators is yet to be empirically investigated (Al-Mahmood, 2020 ; Zawacki-Richter et al., 2019 ).

This paper is based on (a) a literature review focusing on the impact of AI in the context of education, (b) a Delphi study (Scheibe et al., 1975) involving N = 33 international professionals and a focused discussion of current opportunities and challenges of AI, (c) policy recommendations, and (d) a research agenda for closing identified research gaps.

2 Background

2.1 Artificial Intelligence

From a conceptual point of view, AI refers to the sequence and application of algorithms that enable specific commands to transform a data input into a data output. Following Graf Ballestrem et al. ( 2020 ), among several definitions related to AI (Sheikh et al., 2023 ), AI refers to a system that exhibits intelligent behaviour by analysing the environment and taking targeted measures to achieve specific goals using certain degrees of freedom. In this context, intelligent behaviour is associated with human cognition. The focus here is on human cognitive functions such as decision-making, problem-solving and learning (Bellman, 1978 ). AI is, therefore, a machine developed by humans that can achieve complex goals (partially) autonomously. By applying machine learning techniques, these machines can increasingly analyse the application environment and its context and adapt to changing conditions (De Laat et al., 2020 ).

Daugherty and Wilson (2018) analyse the interaction between humans and AI. They identified three fields of activity: (a) human activities, such as leading teams, clarifying points of view, creating things, or assessing situations; in these activities, humans retain the advantage over AI. (b) Activities performed by machines, such as carrying out processes and repeating them as required, forecasting target states, or adapting processes; in these activities, machines hold the advantage over humans. In between lie the (c) human–machine alliances. In such an alliance, people must develop, train, and manage AI systems in order to empower them, while machines extend the capabilities of humans to analyse large amounts of data from countless sources in (near) real time. In these alliances, humans and machines are not competitors; instead, they become symbiotic partners that drive each other to higher performance levels. The paradigm shift from computers as tools to computers as partners is becoming increasingly differentiated in various fields of application (Wesche & Sonderegger, 2019), including in the context of education.

2.2 Artificial Intelligence in Education

Since the early 2010s, data and algorithms have been increasingly used in the context of higher education to support learning and teaching, for assessments, to develop curricula further, and to optimize university services (Pinkwart & Liu, 2020 ). A systematic review by Zawacki-Richter et al. ( 2019 ) identifies various fields of application for AI in the context of education: (a) modelling student data to make predictions about academic success, (b) intelligent tutoring systems that present learning artifacts or provide assistance and feedback, (c) adaptive systems that support learning processes and, if necessary, offer suggestions for learning support, and (d) automated examination systems for classifying learning achievements. In addition, (e) support functions are implemented in the area of pedagogical decisions by teachers (Arthars et al., 2019 ), and the (f) further development of course content and curricula (Ifenthaler, Gibson, et al., 2018 ).

However, there are only a few reliable empirical studies on the potential of AI in the context of education concerning its impact (Zawacki-Richter et al., 2019 ). System-wide implementations of the various AI application fields in the education context are also still pending (Gibson & Ifenthaler, 2020 ). According to analyses by Bates et al. ( 2020 ), AI remains a sleeping giant in the context of education. Despite the great attention paid to the topic of AI in educational organizations, the practical application of AI lags far behind the anticipated potential (Buckingham Shum & McKay, 2018 ). Deficits in organizational structures and a lack of personnel and technological equipment at educational organizations have been documented as reasons for this (Ifenthaler, 2017 ).

Despite its hesitant implementation, AI has far more potential to transform the education arena than any technology before it. Potentials for educational organizations made possible by AI include expanding access to education, increasing student success, improving student retention, lowering costs and reducing the duration of studies. The application of AI systems in the context of education can be categorized on various levels (Bates et al., 2020 ).

The first level is aimed at institutional processes. These include scalable applications for managing application and admission procedures (Adekitan & Noma-Osaghae, 2019 ) and AI-based support for student counselling and services (Jones, 2019 ). Another field of application is aimed at identifying at-risk students and preventing students from dropping out (Azcona et al., 2019 ; Hinkelmann & Jordine, 2019 ; Russell et al., 2020 ). For example, Hinkelmann and Jordine ( 2019 ) report an implementation of a machine learning algorithm to identify students-at-risk, based on their study behaviour. This information triggered a student counselling process, offering support for students toward meeting their study goals or understanding personal needs for continuing the study programme.
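Early-warning pipelines of this kind typically reduce to a risk score computed from behavioural features, with a threshold that triggers a counselling referral. The following minimal Python sketch illustrates the general idea only; the feature names, weights, and threshold are invented for illustration and are not taken from Hinkelmann and Jordine (2019).

```python
# Illustrative sketch of a logistic-style at-risk score over study-behaviour
# features. All feature names, weights, and the threshold are hypothetical.
import math

# Hypothetical coefficients: negative values lower the risk score.
WEIGHTS = {"logins_per_week": -0.25, "assignments_missed": 0.9, "avg_grade": -0.05}
BIAS = 1.0
RISK_THRESHOLD = 0.5  # students above this pseudo-probability get a referral


def dropout_risk(features: dict) -> float:
    """Return a pseudo-probability of dropout via a logistic function."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))


def flag_for_counselling(students: dict) -> list:
    """Return IDs of students whose risk score exceeds the threshold."""
    return [sid for sid, feats in students.items()
            if dropout_risk(feats) > RISK_THRESHOLD]


students = {
    "s01": {"logins_per_week": 5, "assignments_missed": 0, "avg_grade": 78},
    "s02": {"logins_per_week": 1, "assignments_missed": 4, "avg_grade": 52},
}
print(flag_for_counselling(students))  # s02's sparse activity triggers a referral
```

In a deployed system, the weights would of course be learned from historical data rather than fixed by hand, and the flag would initiate the human counselling process described above rather than any automated decision.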

The second level aims to support learning and teaching processes. This includes the recommendation of relevant next learning steps and learning materials (Schumacher & Ifenthaler, 2021 ; Shimada et al., 2018 ), the automation of assessments and feedback (Ifenthaler, Grieff, et al., 2018 ), the promotion of reflection and awareness of the learning process (Schumacher & Ifenthaler, 2018 ), supporting social learning (Gašević et al., 2019 ), detecting undesirable learning behaviour and difficulties (Nespereira et al., 2015 ), identifying the current emotional state of learners (Taub et al., 2020 ), and predicting learning success (Glick et al., 2019 ). For instance, Schumacher and Ifenthaler ( 2021 ) successfully utilised different types of prompts related to their current learning process to support student self-regulation.

Furthermore, a third level, which encompasses learning about AI and related technologies, has also been identified (U.S. Department of Education, 2023). AI systems are also used for the quality assurance of curricula and the associated didactic arrangements (Ifenthaler, Gibson, et al., 2018) and to support teachers (Arthars et al., 2019). For example, Ifenthaler, Gibson, et al. (2018) applied graph-network analysis to identify study patterns that supported re-designing learning tasks, materials, and assessments.

2.3 Ethics Related to Artificial Intelligence in Education

The tension between AI's potential and ethical principles in education was recognized early on (Slade & Prinsloo, 2013 ). Ifenthaler and Tracey ( 2016 ) continued the discourse on ethical issues, data protection, and privacy of data in the context of AI applications. The present conceptual and empirical contributions on ethics and AI in the context of education show that data protection and privacy rights are a central problem area in the implementation of AI (Li et al., 2023 ).

AI systems in the context of education are characterised by their autonomy, interactivity and adaptability. These properties enable effective management of the dynamic and often incompletely understood learning and teaching processes. However, AI systems with these characteristics are difficult to assess, and their predictions or recommendations can lead to unexpected behaviour or unwanted activities (i.e., black box). Richards and Dignum ( 2019 ) propose a value-centred design approach that considers ethical principles at every stage of developing and using AI systems for education. Following this approach, AI systems in the context of education must (a) identify relevant stakeholders; (b) identify stakeholders' values and requirements; (c) provide opportunities to aggregate the values and value interpretation of all stakeholders; (d) ensure linkage of values and system functionalities to support implementation decisions and sustainable use; (e) provide support in the selection of system components (from within or outside the organisation) against the background of ethical principles. Dignum ( 2017 ) integrates a multitude of ethical criteria into the so-called ART principles (Accountability, Responsibility, Transparency).

Education organisations must embrace the ART principles while implementing AI systems to ensure responsible, transparent and explainable use of AI systems. Initial study results indicate (Howell et al., 2018 ; Viberg et al., 2022 ; West, Heath, et al., 2016 ; West, Huijser, et al., 2016a , 2016b ) that students are not willing to disclose all data for AI applications despite anticipated benefits. Although a willingness to share learning-related data is signalled, personal information or social user paths are not. This remains a critical aspect, especially when implementing the many adaptive AI systems that rely on a large amount of data.

Future AI systems may take over decision-making responsibilities if they are integrated into education organisations' decision-making processes. For instance, this could happen if AI systems are used in automated examination or admissions processes (Prinsloo & Slade, 2014 ; Willis & Strunk, 2015 ; Willis et al., 2016 ). Education organisations and their stakeholders will, therefore, decide against the background of ethical principles whether this responsibility can be delegated to AI. At the same time, those involved in the respective education organisations must assess the extent to which AI systems can take responsibility (if any) for the decisions made.

2.4 Context and Research Questions

EDUsummIT is a UNESCO (United Nations Educational, Scientific and Cultural Organization; https://www.unesco.org ) endorsed global community of researchers, policy-makers, and practitioners committed to supporting the effective integration of Information Technology (IT) in education by promoting active dissemination and use of research. Approximately 90 leading researchers, policymakers, and practitioners from all continents and over 30 countries gathered in Kyoto, Japan, from 29 May to 1 June 2023, to discuss emerging themes and to define corresponding action items. Prior to the meeting, thematic working groups (TWGs) conducted research related to current challenges in educational technologies with a global impact. This paper is based on the work of the TWG, which focuses on ‘Artificial Intelligence for Learning and Teaching’. The authors of this article constituted the TWG.

The research questions addressed by the researchers of TWG ‘Artificial Intelligence for Learning and Teaching’ are as follows:

What recent research and innovations in artificial intelligence in education are linked to supporting learning, teaching, and educational decision-making?

What recommendations for artificial intelligence in education can be proposed for policy, practice, and research?

3 Delphi Study

This study aimed to uncover global trends and educational practices pertaining to AI in education. A panel of multinational specialists from industry and research institutions reached a consensus on a set of current trends using the Delphi method.

3.1 Methodology

The Delphi method is a robust approach for determining forecasts or policy positions considered to be the most essential (Scheibe et al., 1975 ). A Delphi study can be conducted using paper-and-pencil instruments, computer- or web-based approaches, as well as face-to-face communication processes. For this study, the researchers applied a mixed Delphi design, including (a) computer-based and (b) face-to-face discussion methods. In order to assure the reliability and validity of the current study, we closely followed the guidelines proposed by Beiderbeck et al. ( 2021 ), including the general phases of preparing, conducting, and analysing the Delphi study.

In the first phase, using the computer-based method, a panel of international researchers in artificial intelligence in education was invited to submit trends and institutional practices related to AI in the educational arena. The initial list consisted of N = 70 trends. This initial list was then aggregated through agreement, eliminating duplicates and trends with similar meanings. Agreement on aggregated constructs was reached through in-depth research debriefing and discussion among the involved researchers. The final consolidated list included N = 20 topics of AI in education. In an additional step of the computer-based method, the list was disseminated to global specialists in AI in education. Each participant was asked to rate the 20 topics on the list concerning (1) importance, (2) impact, and (3) key challenges on a scale of 1–10 (with 10 being the highest). The instructions for the ratings were as follows:

Please rate the IMPORTANCE of each of the trends (on a scale of 10, where 10 is the highest IMPORTANCE) for learning and teaching related to AI in organizations within the next 3 years.

Please rate the IMPACT of each of the trends (on a scale of 10, where 10 is the highest IMPACT) on learning and teaching related to AI and how organizations will utilize them.

Please rate the KEY CHALLENGES of each of the trends in AI in education (on a scale of 10, where 10 is the highest CHALLENGE) that organizations will face within the next 3 years.

In preparation for the second phase, face-to-face discussion , the panel of international researchers were asked to provide three relevant scientific literature resources related to the identified key areas in the first phase and explain their contribution to the respective development area. Next, the panel of international researchers met face-to-face for a 3-day workshop. During the face-to-face meeting, the panel of international researchers and policymakers followed a discussion protocol made available before the meeting (Beiderbeck et al., 2021 ). Discussion questions included but were not limited to: (1) What new educational realities have you identified in AI in education so far? (2) What are recommendations for future educational realities in AI in education for practice, policy, and research? The panel of international researchers discussed and agreed on several trends, challenges, and recommendations concerning research gaps and important implications for educational stakeholders, including policymakers and practitioners.

3.2 Participants

The research team sent open invitations to recruit participants through relevant professional networks, conferences, and personal invitations. As a result, a convenience sample of N = 33 participants (14 female; 17 male; 2 undecided) with an average age of M = 46.64 years (SD = 9.83) took part in the study. The global specialists were from research institutions (n ri = 26), industry (n in = 5), and government organizations (n go = 2). They had an average of M = 17.8 years (SD = 9.4) of experience in research and development in educational technology and are currently focused on artificial intelligence. Participants were based in Argentina (n = 1), Australia (n = 3), Canada (n = 2), China (n = 1), Croatia (n = 1), Finland (n = 1), France (n = 1), Germany (n = 1), India (n = 1), Ireland (n = 3), Japan (n = 2), Philippines (n = 1), Spain (n = 2), Sweden (n = 1), The Netherlands (n = 6), UK (n = 4), and USA (n = 2).

3.3 Data Analysis

All data were saved and analysed using an anonymized process as per conventional research data protection procedures. Data were cleaned and combined for descriptive and inferential statistics using R ( https://www.r-project.org ). All effects were tested at the 0.05 significance level, and effect size measures were computed where relevant. Further, discussion protocols of the face-to-face discussion were transcribed and analysed using QCAmap, a software for qualitative content analysis (Mayring & Fenzl, 2022). Both inductive and deductive coding techniques were used (Mayring, 2015). Regular researcher debriefing was conducted during data analysis to enhance the reliability and validity of the quantitative and qualitative analysis. The deductive coding followed pre-established categories derived from theory and existing research findings as well as the initial list of trends (e.g., ethics and AI, diversity and inclusion). The inductive process included critical reflections on new realities that emerged since the project's initial phase (e.g., generative AI, LLMs).

4 Results

4.1 Phase 1: Global Trends in Artificial Intelligence in Education

The first phase (i.e., computer-based method) resulted in a preliminary list of trends in AI in education. These trends were rated concerning importance (see Table  1 ), impact (see Table  2 ), and challenges (see Table  3 ).

As shown in Table 1, the most important trends included (1) Privacy and ethical use of AI and big data in education (M = 8.7; SD = 1.286), (2) Trustworthy algorithms for supporting education (M = 8.3; SD = 1.608), and (3) Fairness & equity of AI in education (M = 8.2; SD = 1.674). Less important trends included (18) Generalization of AI models in education (M = 6.2; SD = 2.018), (19) Intelligent and social robotics for education (M = 5.8; SD = 2.335), and (20) Blockchain technology in education (M = 4.9; SD = 2.482) (see Table 1).

Table 2 shows the most impactful trends, including (1) Privacy and ethical use of AI and big data in education ( M  = 8.2; SD  = 1.608), (2) Trustworthy algorithms for supporting education ( M  = 7.7; SD  = 2.268), and (3) Fairness & equity of AI in education ( M  = 7.7; SD  = 1.736). Less impactful trends included (18) Generalization of AI models in education ( M  = 6.4; SD  = 2.115), (19) Intelligent and social robotics for education ( M  = 5.5; SD  = 2.298), and (20) Blockchain technology in education ( M  = 5.0; SD  = 2.650) (see Table  2 ).

Challenges related to the trends in AI in education are presented in Table  3 . Key challenges included (1) Privacy and ethical use of AI and big data in education ( M  = 8.8; SD  = 1.455), (2) Trustworthy algorithms for supporting education ( M  = 8.3; SD  = 1.804), and (3) Fairness & equity of AI in education ( M  = 8.3; SD  = 1.855). Even the weakest challenges received ratings above the mean (18) Intelligent and social robotics for education ( M  = 7.0; SD  = 1.941), (19) Multimodal learning analytics in education ( M  = 6.9; SD  = 2.187), and (20) Blockchain technology in education ( M  = 6.6; SD  = 2.599) (see Table  3 ).

Overall, the challenges (M = 7.68, SD = 0.315) of AI in education were rated significantly higher than impact (M = 7.05, SD = 0.593) and importance (M = 7.28, SD = 0.829), F(2, 57) = 3.512, p < 0.05, η² = 0.110 (medium effect).
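The comparison above is a one-way ANOVA over the topic-level ratings of the three dimensions. As a minimal sketch of that computation in Python, the following reproduces the F-statistic from scratch; the rating vectors below are invented stand-ins, not the study's data, so only the procedure (not the numbers) mirrors the analysis.

```python
# Illustrative one-way ANOVA computed from scratch. The three rating vectors
# are hypothetical placeholders for the topic-level means of each dimension.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of samples."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: spread of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of observations around their group mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    df_between, df_within = k - 1, n_total - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within


importance = [7.1, 7.4, 6.9, 8.0, 7.3]  # placeholder topic-level ratings
impact     = [6.8, 7.0, 6.5, 7.6, 7.1]
challenge  = [7.5, 7.9, 7.4, 8.1, 7.6]

f, dfb, dfw = one_way_anova([importance, impact, challenge])
print(f"F({dfb}, {dfw}) = {f:.2f}")  # → F(2, 12) = 4.35
```

With the study's 20 topics per dimension, the degrees of freedom become F(2, 57), matching the result reported above.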

4.2 Phase 2: Consensus Related to Identified Areas of Artificial Intelligence in Education

For the second phase, the top three trends for importance, impact, and challenges of AI in education were critically reflected and linked with an in-depth and research-informed group discussion. However, all other trends have been recognized during the consensus phase and for developing recommendations toward strategies and actions. As shown in Table  4 , the panel of international researchers and policymakers agreed that (a) privacy and ethical use of AI and big data in education, (b) trustworthy algorithms for supporting education, and (c) fairness and equity of AI in education remain the key drivers of AI in education. Further, the panel of international researchers and policymakers identified emerging educational realities with AI, including (d) new roles of stakeholders in education, (e) human-AI-alliance in education, and (f) precautionary pre-emptive policies preceding practice for AI in education.

5 Discussion

This Delphi study included global specialists from research institutions, industry, and policymaking. The primary goal of the Delphi method is to structure a group discussion systematically. However, reaching a consensus in the discussion may also lead to a biased perspective on the research topic (Beiderbeck et al., 2021). Another limitation of the current study is the limited sample size: our convenience sample could have included more participants and further differentiated the various experience levels in AI in education. Hence, future studies may broaden the empirical basis as well as the range of participants' experience with AI in education. A further limitation may be seen in possible overlaps between the constructs identified during the Delphi study. However, through the in-depth face-to-face discussion of the panel of international researchers, the constructs were constantly monitored concerning their content validity and refined accordingly.

In summary, the highest-rated trends in AI in education regarding importance, impact, and challenges included privacy and ethical use of AI and big data in education, trustworthy algorithms for supporting education, and fairness and equity of AI in education. In addition, new roles of stakeholders in education, human-AI-alliance in education, and precautionary pre-emptive policies preceding practice for AI in education have been identified as emerging realities of AI in education.

5.1 Trends Identified for AI in Education

Privacy and ethical use of AI and big data in education emphasise the importance of data privacy (data ownership, data access, and data protection) concerning the development, implementation, and use of AI systems in education. Inevitably, the handling of these data privacy issues has significant ethical implications for the stakeholders involved. For instance, Adejo and Connolly ( 2017 ) discuss ethical issues related to using learning analytics tools and technologies, focusing on privacy, accuracy, property, and accessibility concerns. Further, a survey study by Ifenthaler and Schumacher ( 2016 ) examined student perceptions of privacy principles in learning analytics systems. The findings show that students remained conservative in sharing personal data, and it was recommended that all stakeholders be involved in implementing learning analytics systems. Thus, the sustainable involvement of stakeholders increases trust and establishes transparency regarding the need for and use of data.

More recently, Celik ( 2023 ) focused on teachers' professional knowledge and ethical integration of AI-based tools in education and suggested that teachers with higher knowledge of interacting with AI tools have a better understanding of their pedagogical contributions. Accordingly, AI literacy among all stakeholders appears to be inevitable, including understanding AI capabilities, utilizing AI, and applying AI (Papamitsiou et al., 2021 ; Wang & Lester, 2023 ).

Trustworthy algorithms for supporting education focus on trustworthiness, which is defined as the security, reliability, validity, transparency, and accuracy of AI algorithms and the interpretability of the AI outputs used in education. It particularly focuses on the impact of algorithmic bias (systematic and repeated errors resulting in unfair outcomes) on different stakeholders and stages of algorithm development. Research has demonstrated that algorithmic bias is a problem for algorithms used in education (OECD, 2023 ). Bias, which can occur at all stages of the machine learning life cycle, is a multilayered phenomenon encompassing historical bias, representation bias, measurement bias, aggregation bias, evaluation bias and deployment bias (Suresh & Guttag, 2021 ). For instance, Baker and Hawn ( 2021 ) review algorithmic bias in education, discussing its causes and empirical evidence of its manifestation, focusing on the impacts of algorithmic bias on different groups and stages of algorithm development and deployment in education. Alexandron et al. ( 2019 ) raise concerns about reliability issues, identify the presence of fake learners who manipulate data, and demonstrate how their activity can bias analytics results. Li et al. ( 2023 ) also mention the inhibition of predictive fairness due to data bias in their systematic review of existing research on prediction bias in education. Minn et al. ( 2022 ) argue that it is challenging to extract psychologically meaningful explanations that are relevant to cognitive theory from large-scale models such as Deep Knowledge Tracing (DKT) and Dynamic Key-Value Memory Network (DKVMN), which have useful performance in knowledge tracking, and mention the necessity for simpler models to improve interpretability. On the contrary, such simplifications may result in limited validity and accuracy of the underlying models.

Fairness and equity of AI in education emphasises the need for explainability and accountability in the design of AI in education. It requires lawful, ethical, and robust AI systems to address technical and social perspectives. Current research related to the three trends overlaps and emphasises the importance of considering stakeholder involvement, professional knowledge, ethical guidelines, as well as the impact on learners, teachers, and organizations. For instance, Webb et al. ( 2021 ) conducted a comprehensive review of machine learning in education, highlighting the need for explainability and accountability in machine learning system design. They emphasised the importance of integrating ethical considerations into school curricula and providing recommendations for various stakeholders. Further, Bogina et al. ( 2021 ) focused on educating stakeholders about algorithmic fairness, accountability, transparency, and ethics in AI systems. They highlight the need for educational resources to address fairness concerns and provide recommendations for educational initiatives.

New roles of stakeholders in education relates to the phenomenon that AI will be omnipresent in education, which inevitably involves stakeholders interacting with AI systems in an educational context. New roles and profiles are emerging beyond traditional ones. For instance, Buckingham Shum (2023) emphasises the need for enterprise-wide deployment of AI in education, accompanied by extensive staff training and support. Further, new forms of imagining AI and of deciding its integration into socio-cultural systems will have to be discussed by all stakeholders, particularly minority or excluded collectives. Hence, AI deployment reflects the different levels of influence, partnership, and adaptation that are required to introduce and sustain novel technologies in the complex system that constitutes an educational organisation. Further, Andrews et al. (2022) recommend appointing a Digital Ethics Officer (DEO) in educational organisations who would be responsible for overseeing ethical guidelines, controlling AI activities, ethics training, as well as creating an ethical awareness culture and advising management.

Human-AI-alliance in education emphasises that AI in education shifted from being narrowly focused on automation-based tasks to augmenting human capabilities linked to learning and teaching. Seeber et al. ( 2020 ) propose a research agenda to develop interrelated programs to explore the philosophical and pragmatic implications of integrating humans and AI in augmenting human collaboration. Similarly, De Laat et al. ( 2020 ) and Joksimovic et al. ( 2023 ) highlight the challenge of bringing human and artificial intelligence together so that learning in situ and in real-time will be supported. Multiple opportunities and challenges arise from the human-AI-alliances in education for educators, learners, and researchers. For instance, Kasneci et al. ( 2023 ) suggest educational content creation, improving student engagement and interaction, as well as personalized learning and teaching experiences.

Precautionary pre-emptive policies precede practice for AI in education, underlining that, overwhelmed by the rapid change in the technology landscape, decision-makers tend to introduce restrictive policies in reaction to initial societal concerns with emerging AI developments. Jimerson and Childs ( 2017 ) highlight the issue of educational data use and how state and local policies fail to align with the broader evidence base of educational organisations. As a reaction toward uninformed actions in educational organisations, Tsai et al. ( 2018 ) introduced a policy and strategy framework that may support large-scale implementation involving multi-stakeholder engagement and approaches toward needs analysis. This framework suggests various dimensions, including mapping the political context, identifying the key stakeholders, identifying the desired behaviour changes, developing an engagement strategy, analysing the capacity to effect change, and establishing monitoring and learning opportunities.

5.2 Strategies and Actions

Based on the findings of the Delphi study as well as current work by other researchers, we recommend the following actions for policymakers (PM), researchers (RE), and practitioners (PR), each strategy linked to the corresponding challenges identified above. A detailed implementation plan for the strategies and related stakeholders can be found in a related paper published during EDUsummIT ( https://www.let.media.kyoto-u.ac.jp/edusummit2022/ ):

In order to support the new roles of stakeholders in education

Identify the elements involved in the new roles (RE)

Identify and implement pedagogical practices for AI in education (PR, RE)

Develop policies to support AI and data literacies through curriculum development (PM)

In order to support the Human-AI-Alliance in education

Encourage and support collaborative interaction between stakeholders and AI systems in education (RE)

Take control of available AI systems and optimize teaching and learning strategies (PR)

Promote institutional strategies and actions in order to support teachers’ agency and avoid teachers’ de-professionalization (PM, PR)

In order to support evidence-informed practices of AI in education

Use both the results of fundamental research into AI and the results of live case studies to build a robust body of knowledge and evidence about AI in education (RE)

Support open science and research on AI in education (PM)

Implement evidence-informed development of AI applications (RE, PR)

Implement evidence-informed pedagogical practices (PR, RE)

In order to support ethical considerations of AI in education

Forefront privacy and ethical considerations utilizing a multi-perspective and interdisciplinary approach as the core of AI in education (PM, RE, PR)

Consider the context, situatedness, and complexity of AI in education’s impacts at the time of exploring ethical implications (PR)

Continuously study the effects of AI systems in the context of education (RE)

6 Conclusion

The evolution of Artificial Intelligence (AI) in education has witnessed a profound transformation over recent years, holding tremendous promise for the future of learning (Bozkurt et al., 2021 ). As we stand at the convergence of technology and education, the potential impact of AI is poised to reshape traditional educational paradigms in multifaceted ways. Through supporting personalised learning experiences, AI has showcased its ability to cater to individual student needs, offering tailored curricula and adaptive assessments (Brusilovsky, 1996 ; Hemmler & Ifenthaler, 2022 ; Jones & Winne, 1992 ; Martin et al., 2020 ). This customisability of education fosters a more inclusive and effective learning environment, accommodating diverse learning needs and regulations. Moreover, AI tools augment the role of educators by automating administrative tasks, enabling them to allocate more time to mentoring, fostering creativity, and critical thinking (Ames et al., 2021 ). However, the proliferation of AI in education also raises pertinent ethical concerns, including data privacy, algorithmic biases, and the digital divide (Baker & Hawn, 2021 ; Ifenthaler, 2023 ). Addressing these concerns requires a conscientious approach, emphasising transparency, equity, and responsible AI development and deployment. In addition, in recent years, the emergence of generative AI, such as ChatGPT, is expected to facilitate interactive learning and assist instructors, while concerns such as the generation of incorrect information and privacy issues are also being addressed (Baidoo-Anu & Owusu Ansah, 2023 ; Lo, 2023 ).

Looking forward, AI in education holds tremendous potential for the transformation of learning and teaching. Yet realising that potential necessitates concerted efforts from stakeholders—educators, policymakers, technologists, and researchers—to collaborate, innovate, and navigate the evolving ethical and pedagogical considerations. Embracing AI's potential while safeguarding against its pitfalls will be crucial in harnessing its power to create a more equitable, accessible, and effective educational arena.

Data availability

The data that support the findings of this study are available from the authors upon reasonable request.

Adejo, O., & Connolly, T. (2017). Learning analytics in a shared-network educational environment: ethical issues and countermeasures. International Journal of Advanced Computer Science and Applications , 8 (4). https://doi.org/10.14569/IJACSA.2017.080404

Adekitan, A. I., & Noma-Osaghae, E. (2019). Data mining approach to predicting the performance of first year student in a university using the admission requirements. Education and Information Technologies, 24 , 1527–1543. https://doi.org/10.1007/s10639-018-9839-7

Alexandron, G., Yoo, L., Ruipérez-Valiente, J. A., Lee, S., & Pritchard, D. (2019). Are MOOC learning analytics results trustworthy? With fake learners, they might not be! International Journal of Artificial Intelligence in Education, 29 , 484–506. https://doi.org/10.1007/s40593-019-00183-1

Al-Mahmood, R. (2020). The politics of learning analytics. In D. Ifenthaler & D. C. Gibson (Eds.), Adoption of data analytics in higher education learning and teaching (pp. 20–38). Springer.

Ames, K., Harris, L. R., Dargusch, J., & Bloomfield, C. (2021). ‘So you can make it fast or make it up’: K–12 teachers’ perspectives on technology’s affordances and constraints when supporting distance education learning. The Australian Educational Researcher, 48 , 359–376. https://doi.org/10.1007/s13384-020-00395-8

Andrews, D., Leitner, P., Schön, S., & Ebner, M. (2022). Developing a professional profile of a digital ethics officer in an educational technology unit in higher education. In P. Zaphiris & A. Ioannou (Eds.), Learning and collaboration technologies. Designing the learner and teacher experience. HCII 2022. Lecture notes in computer science (Vol. 13328, pp. 157–175). Springer. https://doi.org/10.1007/978-3-031-05657-4_12

Arthars, N., Dollinger, M., Vigentini, L., Liu, D. Y., Kondo, E., & King, D. M. (2019). Empowering teachers to personalize learning support. In D. Ifenthaler, D.-K. Mah, & J. Y.-K. Yau (Eds.), Utilizing learning analytics to support study success (pp. 223–248). Springer. https://doi.org/10.1007/978-3-319-64792-0_13

Azcona, D., Hsiao, I., & Smeaton, A. F. (2019). Detecting students-at-risk in computer programming classes with learning analytics from students’ digital footprints. User Modeling and User-Adapted Interaction, 29 , 759–788. https://doi.org/10.1007/s11257-019-09234-7

Baidoo-Anu, D., & Owusu Ansah, L. (2023). Education in the era of generative artificial intelligence (AI): understanding the potential benefits of chatgpt in promoting teaching and learning. Journal of AI , 7 (1), 52–62. https://doi.org/10.61969/jai.1337500

Baker, R. S. (2016). Stupid tutoring systems, intelligent humans. International Journal of Artificial Intelligence in Education, 26 , 600–614. https://doi.org/10.1007/s40593-016-0105-0

Baker, R. S., & Hawn, A. (2021). Algorithmic bias in education. International Journal of Artificial Intelligence in Education, 32 (4), 1052–1092. https://doi.org/10.1007/s40593-021-00285-9

Bates, T., Cobo, C., Mariño, O., & Wheeler, S. (2020). Can artificial intelligence transform higher education? International Journal of Educational Technology in Higher Education, 17 (42), 1–12. https://doi.org/10.1186/s41239-020-00218-x

Beiderbeck, D., Frevel, N., von der Gracht, H. A., Schmidt, S. L., & Schweitzer, V. M. (2021). Preparing, conducting, and analyzing Delphi surveys: Cross-disciplinary practices, new directions, and advancements. MethodsX, 8 , 101401. https://doi.org/10.1016/j.mex.2021.101401

Bellman, R. (1978). An introduction to artificial intelligence: can computers think? . Boyd & Fraser.

Bogina, V., Hartman, A., Kuflik, T., & Shulner-Tal, A. (2021). Educating software and AI stakeholders about algorithmic fairness, accountability, transparency and ethics. International Journal of Artificial Intelligence in Education . https://doi.org/10.1007/s40593-021-00248-0

Bozkurt, A., Karadeniz, A., Bañeres, D., Guerrero-Roldán, A., & Rodríguez, M. E. (2021). Artificial intelligence and reflections from educational landscape: A review of ai studies in half a century. Sustainability, 13 , 800. https://doi.org/10.3390/su13020800

Brusilovsky, P. (1996). Methods and techniques of adaptive hypermedia. User Modeling and User-Adapted Interaction, 6 (2–3), 87–129. https://doi.org/10.1007/BF00143964

Buckingham Shum, S., & McKay, T. A. (2018). Architecting for learning analytics. Innovating for sustainable impact. EDUCAUSE Review , 53 (2), 25–37. https://er.educause.edu/articles/2018/3/architecting-for-learning-analytics-innovating-for-sustainable-impact

Buckingham Shum, S. (2023). Embedding learning analytics in a university: Boardroom, staff room, server room, classroom. In O. Viberg & Å. Grönlund (Eds.), Practicable learning analytics (pp. 17–33). Springer. https://doi.org/10.1007/978-3-031-27646-0_2

Celik, I. (2023). Towards Intelligent-TPACK: An empirical study on teachers’ professional knowledge to ethically integrate artificial intelligence (AI)-based tools into education. Computers in Human Behavior, 138 , 107468. https://doi.org/10.1016/j.chb.2022.107468

Chatti, M. A., Muslim, A., Guesmi, M., Richtscheid, F., Nasimi, D., Shahin, A., & Damera, R. (2020). How to design effective learning analytics indicators? a human-centered design approach. In C. Alario-Hoyos, M. J. Rodríguez-Triana, M. Scheffel, I. Arnedillo-Sánchez, & S. M. Dennerlein (Eds.), Addressing global challenges and quality education. EC-TEL 2020 (Vol. 12315, pp. 303–317). Springer. https://doi.org/10.1007/978-3-030-57717-9_22

Daugherty, P. R., & Wilson, H. J. (2018). Human + machine: Reimagining work in the age of AI . Harvard Business Review Press.

De Laat, M., Joksimovic, S., & Ifenthaler, D. (2020). Artificial intelligence, real-time feedback and workplace learning analytics to support in situ complex problem-solving: A commentary. International Journal of Information and Learning Technology, 37 (5), 267–277. https://doi.org/10.1108/IJILT-03-2020-0026

Dignum, V. (2017). Responsible autonomy. In Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence , Melbourne, VIC, AUS.

Gašević, D., Joksimović, S., Eagan, B. R., & Shaffer, D. W. (2019). SENS: Network analytics to combine social and cognitive perspectives of collaborative learning. Computers in Human Behavior, 92 , 562–577. https://doi.org/10.1016/j.chb.2018.07.003

Gibson, D. C., & Ifenthaler, D. (2020). Adoption of learning analytics. In D. Ifenthaler & D. C. Gibson (Eds.), Adoption of data analytics in higher education learning and teaching (pp. 3–20). Springer. https://doi.org/10.1007/978-3-030-47392-1_1

Glick, D., Cohen, A., Festinger, E., Xu, D., Li, Q., & Warschauer, M. (2019). Predicting success, preventing failure. In D. Ifenthaler, D.-K. Mah, & J. Y.-K. Yau (Eds.), Utilizing learning analytics to support study success (pp. 249–273). Springer. https://doi.org/10.1007/978-3-319-64792-0_14

Graf Ballestrem, J., Bär, U., Gausling, T., Hack, S., & von Oelffen, S. (2020). Künstliche Intelligenz. Rechtsgrundlagen und Strategien in der Praxis . Springer Gabler.

Hemmler, Y., & Ifenthaler, D. (2022). Four perspectives on personalized and adaptive learning environments for workplace learning. In D. Ifenthaler & S. Seufert (Eds.), Artificial intelligence education in the context of work (pp. 27–39). Springer. https://doi.org/10.1007/978-3-031-14489-9_2

Hinkelmann, M., & Jordine, T. (2019). The LAPS project: Using machine learning techniques for early student support. In D. Ifenthaler, J.Y.-K. Yau, & D.-K. Mah (Eds.), Utilizing learning analytics to support study success (pp. 105–117). Springer.

Howell, J. A., Roberts, L. D., Seaman, K., & Gibson, D. C. (2018). Are we on our way to becoming a “helicopter university”? Academics’ views on learning analytics. Technology, Knowledge and Learning, 23 (1), 1–20. https://doi.org/10.1007/s10758-017-9329-9

Ifenthaler, D. (2017). Are higher education institutions prepared for learning analytics? TechTrends, 61 (4), 366–371. https://doi.org/10.1007/s11528-016-0154-0

Ifenthaler, D., & Schumacher, C. (2016). Student perceptions of privacy principles for learning analytics. Educational Technology Research and Development, 64 (5), 923–938. https://doi.org/10.1007/s11423-016-9477-y

Ifenthaler, D., & Schumacher, C. (2023). Reciprocal issues of artificial and human intelligence in education. Journal of Research on Technology in Education, 55 (1), 1–6. https://doi.org/10.1080/15391523.2022.2154511

Ifenthaler, D., & Tracey, M. W. (2016). Exploring the relationship of ethics and privacy in learning analytics and design: Implications for the field of educational technology. Educational Technology Research and Development, 64 (5), 877–880. https://doi.org/10.1007/s11423-016-9480-3

Ifenthaler, D., Gibson, D. C., & Dobozy, E. (2018). Informing learning design through analytics: Applying network graph analysis. Australasian Journal of Educational Technology , 34 (2), 117–132. https://doi.org/10.14742/ajet.3767

Ifenthaler, D., Greiff, S., & Gibson, D. C. (2018). Making use of data for assessments: harnessing analytics and data science. In J. Voogt, G. Knezek, R. Christensen, & K.-W. Lai (Eds.), International handbook of IT in primary and secondary education (2 ed., pp. 649–663). Springer. https://doi.org/10.1007/978-3-319-71054-9_41

Ifenthaler, D. (2023). Ethische Perspektiven auf künstliche Intelligenz im Kontext der Hochschule. In T. Schmohl, A. Watanabe, & K. Schelling (Eds.), Künstliche Intelligenz in der Hochschulbildung. Chancen und Grenzen des KI-gestützten Lernens und Lehrens (pp. 71–86). Transcript-Verlag. https://doi.org/10.14361/9783839457696

Jimerson, J. B., & Childs, J. (2017). Signal and symbol: How state and local policies address data-informed practice. Educational Policy, 31 (5), 584–614. https://doi.org/10.1177/089590481561344

Joksimovic, S., Ifenthaler, D., De Laat, M., Siemens, G., & Marronne, R. (2023). Opportunities of artificial intelligence for supporting complex problem-solving: Findings from a scoping review. Computers & Education: Artificial Intelligence, 4 , 100138. https://doi.org/10.1016/j.caeai.2023.100138

Jones, M., & Winne, P. H. (Eds.). (1992). Adaptive learning environments . Springer.

Jones, K. M. L. (2019). Advising the whole student: EAdvising analytics and the contextual suppression of advisor values. Education and Information Technologies, 24 , 437–458. https://doi.org/10.1007/s10639-018-9781-8

Kasneci, E., Sessler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E., Krusche, S., Kutyniok, G., Michaeli, T., Nerdel, C., Pfeffer, J., Poquet, O., Sailer, M., Schmidt, A., Seidel, T., Stadler, M., Weller, J., Kuhn, J., & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and Individual Differences , 103 , 102274. https://doi.org/10.1016/j.lindif.2023.102274

Li, F., Ruijs, R., & Lu, Y. (2023). Ethics & AI: a systematic review on ethical concerns and related strategies for designing with ai in healthcare. AI , 4 (1), 28–53. https://doi.org/10.3390/ai4010003

Lo, C. K. (2023). What is the impact of ChatGPT on education? A rapid review of the literature. Education Sciences, 13 (4), 410. https://doi.org/10.3390/educsci13040410

Martin, F., Chen, Y., Moore, R. L., & Westine, C. D. (2020). Systematic review of adaptive learning research designs, context, strategies, and technologies from 2009 to 2018. Educational Technology Research and Development, 68 , 1903–1929. https://doi.org/10.1007/s11423-020-09793-2

Mayring, P. (2015). Qualitative content analysis: Theoretical background and procedures. In A. Bikner-Ahsbahs, C. Knipping, & N. Presmeg (Eds.), Approaches to qualitative research in mathematics education (pp. 365–380). Springer.

Mayring, P., & Fenzl, T. (2022). QCAmap: A software for qualitative content analysis [Computer software]. https://www.qcamap.org/ui/en/home

Minn, S., Vie, J.-J., Takeuchi, K., Kashima, H., & Zhu, F. (2022). Interpretable knowledge tracing: Simple and efficient student modeling with causal relations. Proceedings of the AAAI Conference on Artificial Intelligence, 36 (11), 12810–12818. https://doi.org/10.1609/aaai.v36i11.21560

Nespereira, C., Vilas, A., & Redondo, R. (2015). Am I failing this course? Risk prediction using e-learning data. In Proceedings of the Conference on Technological Ecosystems for Enhancing Multiculturality (TEEM). ACM.

OECD. (2023). OECD digital education outlook 2023: Towards an effective digital education ecosystem. OECD Publishing . https://doi.org/10.1787/c74f03de-en

Papamitsiou, Z., Filippakis, M., Poulou, M., Sampson, D. G., Ifenthaler, D., & Giannakos, M. (2021). Towards an educational data literacy framework: Enhancing the profiles of instructional designers and e-tutors of online and blended courses with new competences. Smart Learning Environments, 8 , 18. https://doi.org/10.1186/s40561-021-00163-w

Pinkwart, N., & Liu, S. (Eds.). (2020). Artificial intelligence supported educational technologies . Springer.

Prinsloo, P., & Slade, S. (2014). Student data privacy and institutional accountability in an age of surveillance. In M. E. Menon, D. G. Terkla, & P. Gibbs (Eds.), Using data to improve higher education. Research, policy and practice (pp. 197–214). Sense Publishers.

Richards, D., & Dignum, V. (2019). Supporting and challenging learners through pedagogical agents: Addressing ethical issues through designing for values. British Journal of Educational Technology, 50 (6), 2885–2901. https://doi.org/10.1111/bjet.12863

Russell, J.-E., Smith, A., & Larsen, R. (2020). Elements of Success: Supporting at-risk student resilience through learning analytics. Computers & Education , 152 . https://doi.org/10.1016/j.compedu.2020.103890

Scheibe, M., Skutsch, M., & Schofer, J. (1975). Experiments in Delphi methodology. In H. A. Linestone & M. Turoff (Eds.), The Delphi method - techniques and applications (pp. 262–287). Addison-Wesley.

Schumacher, C., & Ifenthaler, D. (2018). The importance of students’ motivational dispositions for designing learning analytics. Journal of Computing in Higher Education, 30 (3), 599–619. https://doi.org/10.1007/s12528-018-9188-y

Schumacher, C., & Ifenthaler, D. (2021). Investigating prompts for supporting students’ self-regulation—A remaining challenge for learning analytics approaches? The Internet and Higher Education, 49 , 100791. https://doi.org/10.1016/j.iheduc.2020.100791

Seeber, I., Bittner, E., Briggs, R. O., Vreede, T., de Vreede, G.-J., de Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß, S., Randrup, N., Schwabe, G., & Söllner, M. (2020). Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57 (2), 103174. https://doi.org/10.1016/j.im.2019.103174

Sheikh, H., Prins, C., & Schrijvers, E. (2023). Artificial intelligence: Definition and background. In H. Sheikh, C. Prins, & E. Schrijvers (Eds.), Mission AI. Research for policy (pp. 15–41). Springer. https://doi.org/10.1007/978-3-031-21448-6_2

Shimada, A., Okubo, F., Yin, C., & Ogata, H. (2018). Automatic summarization of lecture slides for enhanced student preview-technical report and user study. IEEE Transaction of Learning Technologies, 11 (2), 165–178. https://doi.org/10.1109/TLT.2017.2682086

Slade, S., & Prinsloo, P. (2013). Learning analytics: Ethical issues and dilemmas. American Behavioral Scientist, 57 (10), 1510–1529. https://doi.org/10.1177/0002764213479366

Suresh, H., & Guttag, J. (2021). A framework for understanding sources of harm throughout the machine learning life cycle. In EAAMO '21: Proceedings of the 1st ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization, Article 17 (pp. 1–9). ACM. https://doi.org/10.1145/3465416.3483305

Taub, M., Azevedo, R., Rajendran, R., Cloude, E. B., Biswas, G., & Price, M. J. (2020). How are students’ emotions related to the accuracy of cognitive and metacognitive processes during learning with an intelligent tutoring system? Learning and Instruction . https://doi.org/10.1016/j.learninstruc.2019.04.001

Tegmark, M. (2018). Life 3.0: Being human in the age of artificial intelligence . Penguin Books.

Tsai, Y.-S., Moreno-Marcos, P. M., Jivet, I., Scheffel, M., Tammets, K., Kollom, K., & Gašević, D. (2018). The SHEILA framework: informing institutional strategies and policy processes of learning analytics. Journal of Learning Analytics , 5 (3), 5–20. https://doi.org/10.18608/jla.2018.53.2

U.S. Department of Education. (2023). Artificial intelligence and future of teaching and learning: insights and recommendations . https://tech.ed.gov

Viberg, O., Engström, L., Saqr, M., & Hrastinski, S. (2022). Exploring students’ expectations of learning analytics: A person-centered approach. Education and Information Technologies, 27 , 8561–8581. https://doi.org/10.1007/s10639-022-10980-2

Wang, N., & Lester, J. (2023). K-12 education in the age of AI: A call to action for K-12 AI literacy. International Journal of Artificial Intelligence in Education, 33 , 228–232. https://doi.org/10.1007/s40593-023-00358-x

Webb, M., Fluck, A., Magenheim, J., Malyn-Smith, J., Waters, J., Deschênes, M., & Zagami, J. (2021). Machine learning for human learners: Opportunities, issues, tensions and threats. Educational Technology Research & Development, 69 (4), 2109–2130. https://doi.org/10.1007/s11423-020-09858-2

Wesche, J. S., & Sonderegger, A. (2019). When computers take the lead: The automation of leadership. Computers in Human Behavior, 101 , 197–209. https://doi.org/10.1016/j.chb.2019.07.027

West, D., Huijser, H., & Heath, D. (2016b). Putting an ethical lens on learning analytics. Educational Technology Research and Development, 64 (5), 903–922. https://doi.org/10.1007/s11423-016-9464-3

West, D., Heath, D., & Huijser, H. (2016). Let’s talk learning analytics: A framework for implementation in relation to student retention. Online Learning , 20 (2), 1–21. https://doi.org/10.24059/olj.v20i2.792

Willis, I. J. E., & Strunk, V. A. (2015). Ethical responsibilities of preserving academicians in an age of mechanized learning: Balancing the demands of educating at capacity and preserving human interactivity. In J. White & R. Searle (Eds.), Rethinking machine ethics in the age of ubiquitous technology (pp. 166–195). IGI Global.

Willis, I. J. E., Slade, S., & Prinsloo, P. (2016). Ethical oversight of student data in learning analytics: A typology derived from a cross-continental, cross-institutional perspective. Educational Technology Research and Development, 64 (5), 881–901. https://doi.org/10.1007/s11423-016-9463-4

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education—where are the educators? International Journal of Educational Technology in Higher Education, 16 (39), 1–27. https://doi.org/10.1186/s41239-019-0171-0

Zeide, E. (2019). Artificial intelligence in higher education: Applications, promise and perils, and ethical questions. EDUCAUSE Review, 54 (3), 21–39.

Open Access funding enabled and organized by Projekt DEAL. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Author information

Authors and Affiliations

University of Mannheim L4, 1, 68131, Mannheim, Germany

Dirk Ifenthaler

Curtin University, Perth, Australia

Kumamoto University, Kumamoto, Japan

Rwitajit Majumdar

HAN University of Applied Sciences, Arnhem, The Netherlands

Pierre Gorissen

Dublin City University, Dublin, Ireland

Miriam Judge

UNESCO MGEIP, New Delhi, India

Shitanshu Mishra

University of Padua, Padua, Italy

Juliana Raffaghelli

Kyushu University, Fukuoka, Japan

Atsushi Shimada

Contributions

All authors contributed to the study conception, design, data collection, and analysis, as well as draft writing and commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Dirk Ifenthaler .

Ethics declarations

Conflict of Interest

The authors have no relevant financial or non-financial interests to disclose.

Ethical Approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.

Informed Consent

Informed consent was obtained from all individual participants included in the study. Additional informed consent was obtained from all individual participants for whom identifying information is included in this article.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Ifenthaler, D., Majumdar, R., Gorissen, P. et al. Artificial Intelligence in Education: Implications for Policymakers, Researchers, and Practitioners. Tech Know Learn (2024). https://doi.org/10.1007/s10758-024-09747-0

Accepted : 21 May 2024

Published : 04 June 2024

DOI : https://doi.org/10.1007/s10758-024-09747-0

Keywords

  • Artificial intelligence
  • Adaptive learning
  • Data protection
  • Policy recommendation
  • Algorithmic bias
  • Stakeholders
  • Human-AI-Alliance
  • Delphi study

Outstanding Senior Spotlight: Adityaa Ravi

  • by College of Engineering Communications
  • June 07, 2024

Adityaa Ravi is gearing up for a new role as a software engineer after graduating from the University of California, Davis, with a Bachelor of Science degree in computer science. He talks with us about how inspiring faculty and riveting research projects helped take his passion for robotics and STEM to the next level.

What first sparked your interest in engineering?   

Adityaa Ravi stands in front of a building with blue trim

I was first inspired to pursue engineering at age 7, when I saw a Tamil-language movie about robots. Noticing my interest in robotics and STEM, my parents enrolled me in a robotics class offered by a small industrial robotics company in Chennai, India, which encouraged me to tinker with lots of electronics and helped me learn and strengthen my passion. I homed in on software engineering after participating in a hackathon about eight years ago.

How has UC Davis nurtured that initial interest?   

UC Davis helped me take my passion to the next level by providing opportunities beyond my wildest dreams, from the amazing coursework to the three research projects I worked on with engineering faculty. I also got to share my passion and inspire others to pursue computer science by leading three student organizations — Google Developer Student Club, SacHacks and the CS Tutoring Club — during my time at UC Davis.

What is a specific experience you found particularly rewarding or impactful?   

My senior design project set out to use patients’ kinematic and behavioral data from their smartphones to predict health outcomes, starting with end-stage renal disease patients receiving dialysis. Kidney disease is a major problem in the U.S., with more than 14% of adults experiencing chronic kidney disease and more than half a million people requiring regular dialysis treatments. It has been one of the most interesting, challenging and rewarding projects I have worked on.

My team of three other students and I knew going into the project that this was going to be large-scale research that could take years to complete, so our goal was to create an initial proof of concept and set the stage for future researchers to continue our work. We worked with amazing faculty from both the College of Engineering and UC Davis Health: Dipak Ghosal, chair of the computer science department, and Dr. Sophoclis Pantelis Alexopoulos, the medical director of the UC Davis Transplant Center. They guided us and provided amazing feedback throughout the course of the project.  

As we had to build the proof of concept from scratch with nothing but a few research papers to validate our conviction, we had to be pretty resourceful and often improvise to overcome challenges. Furthermore, as we were dealing with sensitive patient data, we had to take every precaution for data security and also work with Dr. Alexopoulos’ research team to submit for Institutional Review Board approval [which is required for research involving human subjects] for an official clinical trial to allow us to get access to patient data. While there were hurdles due to the complexity of our project, thinking about how many people’s lives our research could positively impact kept us motivated to push through and create an initial proof of concept in the limited time we had.   
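To make the kind of proof of concept described above concrete, here is a deliberately simplified, hypothetical sketch: classifying invented smartphone-derived "activity" features with a nearest-centroid rule. The feature names, synthetic numbers, and labels are all assumptions for illustration; the actual project's data and methods are not described here.

```python
# Hypothetical illustration of predicting a risk label from two synthetic
# smartphone-derived features (daily step count, gait speed in m/s) using a
# nearest-centroid rule. All data and labels below are invented.
import math

def centroid(rows):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def predict(x, centroids):
    """Assign x the label of the nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Synthetic [steps/day, gait speed] examples, invented for this sketch.
higher_risk = [[1200, 0.6], [900, 0.5], [1500, 0.7]]
lower_risk = [[6000, 1.1], [7500, 1.2], [5400, 1.0]]
centroids = {"higher": centroid(higher_risk), "lower": centroid(lower_risk)}

print(predict([1000, 0.55], centroids))  # prints: higher
```

A real clinical model would need far more careful feature engineering, validation, and — as noted above — IRB approval and strict data-security safeguards before touching patient data.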

It sounds like Professor Ghosal and Dr. Alexopoulos really made an impression on you.   

Professor Ghosal’s guidance and feedback throughout my time on the project were instrumental in helping us make headway despite it being a massive undertaking. It was inspiring to see him take time out of his extremely busy schedule to work closely with us and with other student projects through his startup club, showing how much he cares about helping students make a lasting impact.

What advice would you give to incoming students about making the most of their time in the College of Engineering?  

Get involved in research as soon as possible. For both graduate school and job applications, I found that research experience is among the most valued. Reach out to professors even if you think you are not ready yet. Through conversations I got to have with professors during my time at UC Davis, I learned that professors value passion and dedication and are quite supportive.   

I would also highly encourage incoming students to join student clubs and to try to take up leadership positions. My time as the president of the Google Developer Student Club, SacHacks and the CS Tutoring Club helped me in more ways than I could describe. Not only did I learn necessary soft skills like resiliency, conflict resolution and public speaking but I also learned how organizations work, how to motivate people even when the outlook seems bleak and much more.   

Grad Spotlight: What Maya Garcia is Taking from her McCormick Education

Garcia is graduating with a degree in computer engineering.

When she first started at Northwestern Engineering, Maya Garcia had doubts. She leaned on those doubts like a crutch, staying quiet in classes and holding back from asking questions.

By the end of her time at the McCormick School of Engineering, the crutch was gone.

A computer engineering major, Garcia is graduating this month from the McCormick School of Engineering. While she picked up plenty of academic knowledge, she also learned a great deal about herself.

“I gained a skill in being confident to exist in spaces not traditionally made for me. I learned to validate my experiences and knowledge to the same level as my peers and professors,” Garcia said. “Doubt was a dangerous crutch during my first couple of years and the more I pushed myself to talk in class, ask professors ‘dumb’ questions, and feel secure in my presence, the more I constructively engaged with my learning. This confidence allowed me to engage with all aspects of the learning process and transformed my experience as an indigenous female engineer.” 

Maya Garcia

In a Q&A, Garcia reflected on her time at Northwestern Engineering. 

Why did you decide to pursue engineering at Northwestern? As a student not fully divorced from humanities, it was crucial for me to nurture my creativity when problem-solving. Northwestern’s whole-brain engineering philosophy nurtured this relationship, offering a space where all my skills were technically trained, celebrated, and encouraged. With this encouragement, I did an English minor and took courses ranging from psychology to environmental policy.

How did the McCormick curriculum help build a balanced, whole-brain ecosystem around your studies in your major?  More important than creating solutions to real-world problems in our Design Thinking and Communication courses, we were pushed to think of ourselves as a collective whole and work as a team. With the class’s writing component, we learned to clearly communicate with our clients and each other through speech and writing. Similarly, classes like COMP_SCI 330: Human-Computer Interaction highlighted that our work is not separate from the people for whom we create. My engineering solutions became well-rounded and unique because of this strengthened relationship between my solutions and the people they serve.

What are some examples of collaborative or interdisciplinary experiences at Northwestern that were impactful to your education and/or research? I participated in a variety of collaborative spaces, including an internship at the San Francisco International Airport and research groups across different schools. These spaces presented opportunities to interact with and listen to unique perspectives on engineering outside of the classroom. My understanding of what it means to be an engineer was broadened with this range of perspectives with people of different ages, disciplines, and identities. Now, I’ve become much more confident about the career I want to have and the kind of people I want to work with.

What's next? After taking CIV_ENG 308: Environmental Justice and SOC 212: Environment and Society, I felt inspired to shift to creating sustainable solutions with engineering. So, this coming fall, I will pursue a master's degree in environmental engineering at the University of California, Berkeley.

What advice do you have for current and future Northwestern Engineering students? I maintained a healthy balance while completing my engineering degree and prevented burnout by taking at least one humanities course every quarter. The only time I took a full STEM load, not only did my grades suffer, but I did too. My humanities courses gave me a break from the stressful STEM but still helped push me toward completing my degree.

