Best GED Classes

GED Essay: Topics, Samples, and Tips

Last Updated on May 16, 2024.

This Language Arts lesson is part of this website’s free online GED classes and practice tests, generously provided by the accredited, comprehensive GED prep course created by Onsego.

Online GED Classes

A simple and easy way of getting your GED diploma. Learn fast, stay motivated, and pass your GED test quickly.

Our free support is a great way to start with your GED prep, and if you like these free practice tests and video lessons, you may easily switch to Onsego GED Prep’s full-scope, accredited course to earn your GED fast!

One part of the GED Reasoning through Language Arts (RLA) test is writing a GED Essay, also known as the Extended Response. You have 45 minutes to create your essay. The GED essay is an argumentative essay.

A common method for writing this type of essay is the five-paragraph approach.

Writing your GED Essay is not about writing an opinion on the topic at hand. Your opinion is irrelevant. You are asked to determine and explain which of the arguments is better.

This lesson is provided by Onsego GED Prep.


Table of Contents

  • Video Transcription
  • GED Essay Structure
  • GED Essay Topics
  • GED Essay Samples
  • Tips for Writing your GED Essay
  • How your GED Essay is Scored

Video Transcription

After reading the stimulus with two different arguments about a subject, your task is to explain why one of these arguments is better.

Remember, when writing your GED Essay, you are NOT writing your opinion on the topic. That’s irrelevant. You must write about why one argument is better than the other.

You are writing an analysis of the two positions presented and explaining which argument is stronger. Both arguments appear in the stimulus, so you don’t need to come up with any examples of your own.

So again, you only need to decide which argument is stronger, state that as your claim, and prove it. It is NOT about your opinion.

Since in your essay, you need to determine which argument is best supported, your claim should clearly state which of the two positions is stronger.

You will be provided with the stimulus material and a prompt.

The stimulus is a text that provides 2 opposing opinions about a certain subject. The prompt provides instructions and tells you what you need to do.

I’ll say it again because so many students make mistakes here: it’s NOT your opinion on the topic that matters; it’s which argument is better supported!

You need to analyze the arguments and determine which opinion is best supported throughout the text.

You are NOT asked which argument you agree with more, and you should NEVER respond with a personal opinion.

So, don’t use the word “I,” as in “I think that…,” “I agree because…,” or “In my opinion….”

The GED essay is graded by a machine that uses algorithms to determine your score.

So, no teacher will have any say in your score.

It’s very important that you remember this!

Let’s take a look at the structure, topics, and format of the GED Essay.

GED Essay Structure

Remember: you need to analyze which of the presented arguments is better and explain why it’s better.

Likewise, make sure your reasons come from the text – you aren’t making up your examples; you’re talking about the ones in the passages.

How should you prove that one argument is stronger? Look at the evidence in the text.

Did the author use a relevant statistic from a reliable source, or rely on a hypothetical anecdote?

Once you know which is better supported, you’re on your way.

Keep in mind: Don’t Summarize!

It’s easy to substitute a simpler task (summarize each side) for the more complex task of evaluating arguments. But if all you do is summarize, your response will be considered off-topic and likely will not receive any points.

The GED Essay should contain:

  • 4-7 paragraphs of 3 to 7 sentences each and 300-500 words in total.
  • An essay (or response) that is significantly shorter could put you in danger of scoring a 0 just for not showing enough of your writing skills.
  • As you read the stimulus material (text), think carefully about the argumentation presented in the passage(s). “Argumentation” refers to the assumptions, claims, support, reasoning, and credibility on which a position is based.
  • Pay close attention to how the authors use these strategies to convey their positions.

Every well-written GED essay has an introduction, a body, and a conclusion.

Your response will be an argumentative essay. Remember that you are NOT writing your opinion on the topic.

You are writing an analysis of the two positions presented and explaining which argument is stronger.

Things to keep in mind: the Extended Response (GED Essay) is scored by smart machines that are programmed to recognize correct answers. So, don’t try to be creative; just be correct. Also:

  • Use proper grammar and sentence structure.
  • Practice writing a 300 to 500-word essay.

Let’s look at the GED Essay structure: an introduction, a body, and a conclusion.

  • The Introduction introduces the topic you are writing about and states your claim or thesis statement. Take a clear position.
  • The Body of the essay presents reasoning and evidence to support your claim. This is the longest part of the response and should be at least two paragraphs.
  • The concluding paragraph sums up your main points and restates your claim.

Here are a few examples of GED Essay Topics. Click on the title to read a full stimulus and a prompt.

An Analysis of Daylight-Saving Time

The article presents arguments from both supporters and critics of Daylight-Saving Time who disagree about the practice’s impact on energy consumption and safety. Check here to read the full article.

Should the Penny Stay in Circulation?

Analyze the presented arguments and decide which one is better supported. Check here to read the full article.

Is Golf a Sport?

Proponents say that golf meets the definition of “sport.” Opponents say that golf better meets the definition of “game” than “sport.” Analyze both opinions and determine which one is better supported. Check here to read the full article.

GED Essay Samples

Click here to access a sample of a GED essay with an explanation of the structure. Getting familiar with GED essay samples will help you plan your essay and understand what elements are important.

When reading the essay prompt, take the time to pull your thoughts together. By arranging your ideas rationally, you will be able to express them far better on paper. When you start writing, concentrate on the guidelines you learned in English class.

Pay attention to English language usage (grammar); you must use the right punctuation and capitalization and choose suitable words.

Check here to read a GED Essay Sample with our comments.

Tips for Writing your GED Essay

1. Make sure you read the stimulus and prompt carefully

Read each question carefully and take a little time to figure out the topic and what kind of answer is expected.

It is important to read the questions meticulously.

Often, students simply skim the stimulus and prompt and begin to write immediately, believing that they will save time this way.

This is actually the most undesirable thing to do. Take a short while to understand the questions completely in order to respond to them appropriately. If you wish, highlight the essential words and phrases in the stimulus so you can glance back at them from time to time to be certain you stick to the topic.

2. Sketch an outline for the essay

In general, you will only need a few minutes to plan your essay, and it is imperative to take that time. As soon as you grasp the questions entirely, and once you have scribbled down some initial ideas, make an outline of the essay and follow that.

Plan an introduction, body, and conclusion. Following this process will save you a lot of time and helps establish a rational development of thoughts.

3. Stick to the subject

Each paragraph in the body of your response should explain why a piece of evidence supports your claim or disputes the opposing claim.

You can describe or restate it. This shows that you understand precisely what it means and how it relates to your claim.

Cite the mentioned details or facts of a specific point and relate them to your claim.

Your response should include evidence from both passages and explain how strong evidence supports one argument and how faulty evidence weakens the other.

4. Proofreading and Revision

Once you have completed your essay, go back to the beginning and read it carefully again; you could easily have dropped a comma or misspelled a word while writing. See also this post: Is the GED Language Arts Test Hard?

While rereading your essay, pay close attention to whether your essay provides well-targeted points, is organized clearly, presents specific information and facts, comes with proper sentence construction, and has no grammar or spelling mistakes.

How your GED Essay is Scored

Your GED essay is scored by smart machines that are programmed to recognize correct answers. So don’t try to be creative; just be correct.

They will be using five criteria to assess your essay.

  • Organization: were you clear about the essential idea, and did you present a well-thought-out strategy for composing your essay?
  • Clear and swift response: did you deal with the subject adequately, without shifting from one focal point to another?
  • Progress and details: did you use relevant examples and specific details to elaborate on your original concepts or arguments, as opposed to using lists or repeating identical information?
  • Grammar rules of English: did you use sound writing techniques (sentence structure, spelling, punctuation, syntax, and grammar), and did you shape and edit your essay after you penned the first draft?
  • Word choice: did you choose and employ suitable words to convey your points of view?

Your 45 minutes will go quickly, so focus on these important points to get the best score.

What’s important is to make a clear statement about which position is better supported. Write clear sentences and arrange paragraphs in a logical order.

GED testing includes four modules (independent subtests) in Mathematical Reasoning (Math), Reasoning through Language Arts, Science, and Social Studies that can be taken separately. You should study very well, be effective on test day, and pass the subtest(s) you registered for.

Writing the GED essay may be a bit tricky, but you can keep all of this information on a study list and switch to proper essay-writing techniques before test day arrives. Just practice a lot, and you’ll see that it keeps getting better. So now you know all about writing the GED Essay.

GED Essay: Everything You Need To Know In 2024

Learn all you need to know about the GED essay, its structure sample, topics, tips, and how it is scored in this post.

January 1, 2022

The GED essay is intimidating to many people. Writing an entire essay from scratch in 45 minutes or less may seem difficult, but it does not have to be. This GED essay writing overview will help you prepare for and learn about the written section of the exam. In this post, Get-TestPrep will show you everything you need to know about GED essays, including their structure, sample topics, tips, and how they are scored.

What Is The GED Essay?

The GED exam consists of four subjects: Mathematical Reasoning, Social Studies, Science, and Reasoning Through Language Arts (RLA). The GED extended response, sometimes known as the GED essay, is one of the two portions of the RLA subject test. You’ll have 45 minutes to finish the essay to the best of your ability. Don’t worry if you don’t finish on time! Because the essay accounts for just 20% of your ultimate RLA score, you can still pass the test even if you don’t receive a high essay score.

The GED extended response can cover a wide range of topics, but it will always be formatted in the same way. You will be assigned two articles on the same topic, which will typically be argumentative essays with a firm position. You’ll be asked to assess the two arguments and create your own argumentative essay based on which article delivered the more compelling argument. The essay should be three to five paragraphs long, with each paragraph including three to seven sentences.

GED Essay Structure

An introduction, a body, and a conclusion are included in every well-written GED essay. You have to write an argument or an argumentative essay. Keep in mind that you are not expressing your own view on the subject. You’re analyzing the two authors’ points of view and determining which one is more compelling. Keep in mind that the Extended Response (GED Essay) is graded by machine intelligence that has been designed to detect the right responses. So, instead of trying to be creative, simply be accurate. Also:

  • Make sure you’re using proper grammar and sentence structure.
  • Practice writing a 300-500 word essay.

Let’s take a look at the format of a GED Essay: an introduction, a body, and a conclusion.

  • The introduction outlines your claim or thesis statement and explains the topic you’re writing about. Maintain your position.
  • The body of the essay includes facts and arguments to back up your claim. This section of the response should be at least two paragraphs long.
  • The concluding paragraph restates your claim and summarizes your important points.

GED Essay Topic Examples

Here are a few GED Essay Topics to get you started:

Topic 1: An Analysis of Daylight-Saving Time

The article presents arguments from proponents and opponents of Daylight Saving Time, who disagree on the practice’s impact on energy consumption and safety.

Topic 2: Should the Penny Stay in Circulation?

Analyze the arguments offered and pick which one has the most support.

Topic 3: Is Golf a Sport?

Golf, according to proponents, satisfies the criteria of “sport.” Opponents argue that golf more closely resembles a “game” than a “sport.” Analyze both points of view to see which one has the most support.

Visit our website for more topics, as well as full articles on each topic, and take our latest free GED practice test to get ready for your exam!

GED Essay Examples

Getting to know GED essay samples can assist you in planning your essay and determining which elements are most vital.

When reading the essay topic, you should truly take your time to collect your views. You will be able to articulate your views better on paper if you organize your thoughts properly. Concentrate on the standards that you learned in English class before you begin writing.

Pay attention to how you use the English language (grammar); you must use proper punctuation and capitalization, and you must choose appropriate words.

Tips For Writing Your GED Essay

Make sure you carefully read the stimulus and prompt.

Putting this into practice is an excellent idea. Examine each question carefully and set aside some time to determine the topic and the type of response that will be requested. It is critical to read the questions thoroughly. Students frequently skip past the stimulus and prompt and get right into writing, assuming that they will save time this way.

This is, by far, the most undesirable thing to do. Take a few moments to attempt to fully comprehend the questions so that you can reply accurately. If you like, underline the important words and phrases in the stimulus so you can go over them again later to make sure you’re on track.

Make a rough outline for the GED language arts essay

In general, planning your essay will only take a few minutes, but it is critical that you spend that time. Make an outline of the essay and follow it as soon as you have a complete understanding of the questions and have scribbled down some early ideas.

Make an outline for your introduction, body, and conclusion. Following this procedure will save you a lot of time and aid in the development of a logical thought process.

Keep your focus on the topic

To explain your evidence, each paragraph in the body of your response should show why a piece of evidence supports your claim or disputes the opposing claim. You have the option of describing or restating it. This demonstrates that you know exactly what it means and how it applies to your claim. Refer to the specifics or facts of a certain issue that you’ve discussed and tie them to your claim.

Include evidence from both passages in your response, and explain why strong evidence supports one thesis and why flawed evidence undermines the other.

Revision and proofreading

By the time you’ve finished writing your essay, go back to the beginning and reread it attentively, since you may easily have missed a comma or misspelled a term while writing.

Pay great attention when rereading your essay to see if it has well-targeted arguments, is arranged properly, contains particular information and facts, has good sentence construction, and has no grammatical or spelling mistakes.

Learn more about how to practice GED essays, as well as the whole Language Arts section, in our GED Language Arts Study Guide.

How To Write a GED Essay

When writing the GED essay, you should allocate the time as follows:

  • 3 minutes to read the directions and the topic
  • 5 minutes of prewriting (freewriting, brainstorming, grouping, mapping, etc.)
  • 3 minutes to organize (create a thesis statement or controlling idea, and summarize important points)
  • 20 minutes to draft (write the essay)
  • 8 minutes to revise (go over the essay and make adjustments to concepts)
  • 6 minutes to edit (check for grammatical and spelling errors)

How Is Your GED Essay Scored?

Smart machines that are designed to detect the right answers score your GED essay. So don’t try to be creative; just be accurate.

They will evaluate your essay based on five factors.

  • Organization: did you give a well-thought-out approach to writing your essay, and were you clear on the main idea?
  • Clear and swift response: did you deal with the matter appropriately, without straying from one emphasis point to another?
  • Progress and specifics: instead of utilizing lists or repeating the same material, did you use relevant instances and particular details to expound on your initial notions or arguments?
  • Grammar rules of English: did you apply proper writing strategies such as sentence structure, spelling, punctuation, syntax, and grammar, and did you shape and revise your essay after you finished the initial draft?
  • Word choice: how well did you pick and use appropriate phrases to express your points of view?

Your 45 minutes will fly by, so focus on these key elements to get the best score possible. Most important is to state unequivocally which position is better supported. Check that your sentences are clear and that your paragraphs are organized logically.

GED testing includes four modules (independent subtests) in Mathematical Reasoning (Math), Reasoning Through Language Arts, Science, and Social Studies, each of which can be taken independently. To pass the subtest(s) for which you registered, you must study thoroughly and be efficient on test day. Consider taking our GED Language Arts Practice Test for the Language Arts section.

GED essay writing can be difficult, but you can keep a list of everything you need to know and switch to proper essay-writing approaches before the exam. Simply practice a lot and you’ll notice that it gets better over time. Now you’ve learned everything there is to know about writing the GED Essay.

How do you write an essay for the GED?

  • Read through all of the instructions.
  • Create an outline.
  • Make a list of all the evidence.
  • Write your introduction last.
  • Write first, then edit.
  • Make use of formal language.
  • Don’t look at the time.

Is there an essay portion on the GED test?

Yes. The Reasoning Through Language Arts (RLA) section includes a 45-minute Extended Response, commonly known as the GED essay.

How is the GED essay graded?

The essay is graded on a four-point scale by two certified GED essay readers. The scores of the two GED readers are averaged. If the essay achieves a score of 2 or above, it is merged with the language arts multiple-choice score to generate a composite result.

Final Words

In conclusion, this guide on the GED essay provides valuable insights and strategies to help you excel in the GED essay section. By understanding the structure of the GED essay , practicing effective writing techniques, and familiarizing yourself with the scoring rubric, you can approach the GED essay with confidence and achieve a successful outcome. Remember to plan your essay, organize your thoughts, and support your ideas with relevant examples and evidence. Additionally, refining your grammar and punctuation skills will enhance the overall quality of your writing. With consistent practice and a thorough understanding of the expectations for the GED essay, you can showcase your writing abilities and earn a strong score on the GED essay.

How to Pass the GED

Basics

The second section of Reasoning Through Language Arts evaluates your ability to integrate reading and writing by way of a 45-minute Extended Response. GED guidelines specify that you will be asked to write an essay about the best-supported position—the most persuasive side of an argument—presented in two passages with opposing points of view. Accordingly, you will need to produce evidence supporting the most convincing position from either Passage I or Passage II. Attention to specific details within the passages will help you find the necessary pieces of evidence.

GED.com has excellent resources to help prepare for the Extended Response, as follows:

  • poster
  • videos
  • guidelines – English
  • guidelines – Spanish
  • quick tips – English
  • quick tips – Spanish
  • practice passages – English
  • practice passages – Spanish

Here, at HowtoPasstheGED.com, a five-paragraph essay will be used as a framework for writing an Extended Response.

Five-Paragraph Essay – Outline

  • Paragraph 1: Introduction of your position with three supporting points.
  • Paragraph 2: Discussion of first point.
  • Paragraph 3: Discussion of second point.
  • Paragraph 4: Discussion of third point.
  • Paragraph 5: Summary and conclusion of your position and its three supporting points.

Five-Paragraph Essay – Choose (Before You Write)

  • Read Passage I and Passage II.
  • Choose the best-supported position.
  • Select three points supporting this position.

Five-Paragraph Essay – Beginner Level (You’re Up and Running!)

  • Write the first sentence of each of the five paragraphs.
  • In paragraph 1, introduce your position and its three supporting points.
  • In paragraph 2, put down the first point.
  • In paragraph 3, put down the second point.
  • In paragraph 4, put down the third point.
  • In paragraph 5, restate your position and its three supporting points.

Five-Paragraph Essay – Intermediate Level (You’re Adding On!)

  • In paragraph 1, introduce your position and its three supporting points.
  • In paragraph 2, write at least three sentences about the first point, including mentioning something from the other side.
  • In paragraph 3, write at least three sentences about the second point, including mentioning something from the other side.
  • In paragraph 4, write at least three sentences about the third point, including mentioning something from the other side.
  • In paragraph 5, restate your position and its three supporting points, including coming to a conclusion about them.

Five-Paragraph Essay – Advanced Level (Polish Your Essay If You Have Time)

  • In paragraph 1, introduce your position and its three supporting points.
  • In paragraph 2, write at least three sentences about the first point, including mentioning something from the other side.
  • In paragraph 3, write at least three sentences about the second point, including mentioning something from the other side.
  • In paragraph 4, write at least three sentences about the third point, including mentioning something from the other side.
  • In paragraph 5, restate your position and its three supporting points, including coming to a conclusion about them.

The example below goes over the process of writing a five-paragraph essay as an Extended Response to Passage I versus Passage II.

Passage I: Working from Home is Beneficial

Some experts say there’s no going back now that both employers and workers have learned that telework can be effective.

“The pandemic has radically changed how we view telework or remote work,” said Timothy Golden, a professor of management at Rensselaer Polytechnic Institute. “Many individuals and companies have realized that we can work remotely effectively. And so, I think remote work is here to stay.”

“We are going to err on the side of letting more people work remotely for longer periods of time,” said Ravi Gajendran, chair of the Department of Global Leadership and Management in the College of Business at Florida International University.

“When that’s not working as well,” added Gajendran, “the pendulum will sort of swing slightly back towards the office. It’s not going to come back to what it was previously, but what we’re going to find is, as new employees join, as new teams form, and as people who have not worked together before are now working remotely, things are not going to be as smooth.”

But, said Golden, “We know that many employees have been highly productive during the pandemic and have been able to carry on their work in a fashion that was consistent with their productivity before the pandemic.”

According to Cathleen Swody, an organizational psychologist at Thrive Leadership, remote work has led to more authentic moments between co-workers who’ve ended up meeting a colleague’s pets or families online, as the pandemic provided a virtual window, and therefore greater insight, into a co-worker’s personal side than working at the office ever did.

“You’ve seen many large companies, and in different industries, make announcements about the future of their workforce in how it is likely to be hybrid. And some workers will be working remotely on a permanent basis, and others will be in a hybrid form,” pointed out Golden. “Companies that do this right and do this in the right way, will have a competitive advantage over those who do not.”

Increased telework could free employees from having to live close to where they work. That could also benefit employers who won’t have to be limited to the local talent pool. More jobs could go to places with lower costs of living and ultimately, overseas.

“It could go to Asia or Africa or South America,” said Gajendran.

With more employees working remotely from home, employers could reduce their costs further by cutting back on office space. – adapted from VOA (04/09/2021, 04/12/2021, 04/29/21)

Passage II: Working from Home is Harmful

The benefits of working from home—including skipping a long commute and having a better work-life balance—have been well documented, but employees are literally paying for the privilege, according to a study from the National Bureau of Economic Research.

“People need to dedicate space to work from home,” said Christopher Stanton, an Associate Professor at Harvard Business School who co-authored the study. “For many folks who lived in small apartments or houses before the pandemic, working from home wasn’t a realistic long-term solution unless they could upgrade to larger apartments or houses.”

The researchers analyzed data from the U.S. Census Bureau to reach their conclusions. They found that between 2013 and 2017, households with at least one teleworker spent on average more of their income on rent or a mortgage to pay for the extra room needed to work from home.

“A household that was spending about $1,000 a month on rent would be spending around $1,070 on rent. So, it’s about a 7% increase, on average, across the income distribution,” Stanton said.

The researchers estimate that about 10% of people who worked in an office before the pandemic could permanently transition to working from home full time. A recent Upwork survey suggests that 36 million Americans will be working remotely by 2025—an 87% increase over pre-pandemic levels, and these workers could potentially take on the additional costs.

The added expense is easier for high-income households to bear but puts an increased burden on workers who earn less money.

“You might have gotten an increase of 20-ish percent in housing expenses for lower-income households with remote workers compared to lower-income households without remote workers,” Stanton said. “That’s a pretty big chunk of expenditure for those households in the bottom half of the income distribution.”

Kristen Carpenter, chief psychologist in the Department of Psychiatry and Behavioral Health at Ohio State University, added that at-home, remote work causes more work to be performed outside normal business hours, so it’s hard “to draw a boundary that prevents work from being ever-present,” including nights and weekends.

Cathleen Swody, an organizational psychologist at Thrive Leadership, also pointed out that when people work from home, “they kind of get stuck in this little place,” whereas going back to the office leads to more interpersonal interaction and innovation. – adapted from VOA (04/09/2021, 04/12/2021, 04/29/21)

Prompt

Passage I finds working from home to be beneficial; Passage II finds working from home to be harmful. In your response, analyze the positions presented in Passage I and Passage II to determine which passage is best supported. Use relevant and specific evidence to back your choice. You have 45 minutes to plan, type, and edit your response.

Five-Paragraph Essay – Choose (Before You Write)

  • Read Passage I and Passage II.
  • Choose the best-supported position. In this example, Passage I is chosen as the best-supported position.
  • Select three points supporting this position: (1) Working from home is productive. (2) Working from home improves employee interaction. (3) Working from home saves money.

Five-Paragraph Essay – Beginner Level

Passage I is the best-supported position because working from home is productive, improves employee interaction, and saves money.

Working from home is productive.

Working from home improves employee interaction.

Working from home saves money.

In summary, Passage I is the best-supported position because working from home is productive, improves employee interaction, and saves money.

Five-Paragraph Essay – Intermediate Level

Working from home is productive. Passage I uses the pandemic to make the relevant observation that individuals and companies realized they could work remotely effectively. Many employees have been highly productive this way and can stay this way. Passage II admits in its very first sentence that the benefits of working from home have been well documented.

Working from home improves employee interaction.  Passage I is persuasive when it notes that remote work has led to “more authentic moments” between co-workers.  However, workers still have the option of working at the office, as well as at home, in a hybrid form.  Thus, Passage II is incorrect when it claims remote workers get stuck in one place.

Working from home saves money.  Passage I makes a convincing argument for freedom.  It asserts that remote work frees employees from having to live close to office buildings.  It also frees employers from having to pay for as much office space.  Passage II says employees need to spend some money to outfit a home office, but this is less costly than commuting.

In summary, Passage I is the best-supported position because working from home is productive, improves employee interaction, and saves money.  In conclusion, there is no place like home.

Five-Paragraph Essay – Advanced Level

Working from home is productive. Passage I uses an authority—Timothy Golden, a professor of management at Rensselaer Polytechnic Institute—to make the following relevant observation: “The pandemic has radically changed how we view telework or remote work. Many individuals and companies have realized that we can work remotely effectively. We know that many employees have been highly productive during the pandemic and have been able to carry on their work in a fashion that was consistent with their productivity before the pandemic. And so, I think remote work is here to stay.” Passage II admits that at least some of what Golden said is true by stating in its very first sentence “the benefits of working from home—including skipping a long commute and having a better work-life balance—have been well documented.”

Working from home improves employee interaction.  Passage I effectively uses another expert—Cathleen Swody, an organizational psychologist at Thrive Leadership—to state that remote work has led to “more authentic moments between co-workers who’ve ended up meeting a colleague’s pets or families online, as the pandemic provided a virtual window, and therefore greater insight, into a co-worker’s personal side than working at the office ever did.”  Although Passage II says people who work from home “kind of get stuck in this little place,” Golden affirms that workers aren’t really stuck, because some will be working in a hybrid form, meaning partly at home and partly in an office.

Working from home saves money.  Passage I makes a convincing argument for freedom.  Remote work saves money by freeing employees from having to live close to office buildings and freeing employers from having to pay for as much office space.  According to Christopher Stanton (Associate Professor at Harvard Business School) in Passage II, employees need to spend some money to outfit their apartments or houses with a home office, but this is a small price to pay compared to avoiding a costly daily commute.

In summary, Passage I is the best-supported position because working from home is productive, improves employee interaction, and saves money.  In particular, Passage I leads to the conclusion that working from home is beneficial in that it leaves nobody out: Both employers and employees stand to gain.

Remember, the RLA Extended Response is based on what YOU determine to be the best-supported position presented in either Passage I or Passage II. In order to demonstrate that YOU have room to maneuver, the example below goes over the process of writing a five-paragraph essay as an Extended Response to Passage I versus Passage II with a different choice.

Prior to the pandemic, about 5 million Americans worked remotely. But COVID-19 forced U.S. employers to allow telework on a massive scale, resulting in an estimated 75 million people working from home over the past year.

Five-Paragraph Essay – Choose (Before You Write)

  • Read Passage I and Passage II.
  • Choose the best-supported position. In this example, Passage II is chosen as the best-supported position.
  • Select three points supporting this position: (1) Working from home is unproductive. (2) Working from home hampers employee interaction. (3) Working from home costs money.

Five-Paragraph Essay – Beginner Level

Passage II is the best-supported position because working from home is unproductive, hampers employee interaction, and costs money.

Working from home is unproductive.

Working from home hampers employee interaction.

Working from home costs money.

In summary, Passage II is the best-supported position because working from home is unproductive, hampers employee interaction, and costs money.

Five-Paragraph Essay – Intermediate Level

Working from home is unproductive. Backed by facts, Passage II is able to make a strong statement when it says working in small setups at home ultimately ends up in fatigue and less productive employees. In fact, fifty-four percent of home workers feel overworked and 39% are exhausted. Passage I has no numbers to back up its claim that people can work remotely effectively.

Working from home hampers employee interaction.  Passage II cleverly notes that when people work from home, they get stuck.  Going back to the office leads to more interpersonal interaction and innovation.  Passage I even admits that working from home doesn’t always work well, meaning that people end up back in the office.

Working from home costs money.  Passage II convincingly has money in mind when it states that households with at least one teleworker have to spend some of their income to pay for the extra room needed to work from home.  Lower-income households need to spend even more of their income to set things up at home.  Passage I offers no solutions for employees paying out of pocket to work from home.

In summary, Passage II is the best-supported position because working from home is unproductive, hampers employee interaction, and costs money.  In conclusion, there are places other than home.

Five-Paragraph Essay – Advanced Level

Working from home is unproductive. Passage II comes out swinging with Christopher Stanton, an Associate Professor at Harvard Business School, who asserts having nonergonomic setups in small places [at home] ultimately ends up “leading to fatigue and wear and tear and less productive employees in the long run.” In fact, “fifty-four percent of people who’ve worked from home this past year feel overworked, and 39% say they’re downright exhausted.” Although Timothy Golden (professor of management at Rensselaer Polytechnic Institute) claims in Passage I that “many individuals and companies have realized that we can work remotely effectively,” he has no real numbers to back him up.

Working from home hampers employee interaction.  Passage II cites another authority—Cathleen Swody, an organizational psychologist at Thrive Leadership—to point out that people who work from home “kind of get stuck in this little place.”  She goes on to convincingly argue that “going back to the office leads to more interpersonal interaction and innovation.”  In Passage I, Ravi Gajendran, chair of the Department of Global Leadership and Management in the College of Business at Florida International University, even admits that working from home doesn’t always work well, such that “the pendulum will sort of swing” back towards the office.

Working from home costs money.  Passage II hits home with data from the U.S. Census Bureau, which found that “between 2013 and 2017, households with at least one teleworker spent on average more of their income on rent or a mortgage to pay for the extra room needed to work from home.”  Stanton adds that “you might have gotten an increase of 20-ish percent in housing expenses for lower-income households with remote workers compared to lower-income households without remote workers, a pretty big chunk of expenditure for those households in the bottom half of the income distribution.”  Passage I offers no solutions for employees “literally paying for the privilege” of working from home.

In summary, Passage II is the best-supported position because working from home is unproductive, hampers employee interaction, and costs money.  In particular, Passage II leads to the conclusion that working from home can be so harmful that it never stops, becoming an “ever-present” task performed outside normal business hours without a boundary.


GED Practice Test

GED Essay Sample Response

Below is a sample response to our GED Essay Practice Question. Review this response to develop familiarity with the structure of a high-scoring essay. You may notice that this essay follows the template featured in our GED Essay Writing Guide. At the end of this response, there is a short commentary that explains why this is an effective essay and why it would receive a perfect score.

The writer of the pro-recycling passage, unlike the writer of the anti-recycling passage, employs excellent logical reasoning to convince the audience, explaining that recycling is more than simply placing paper and plastic in their proper bins; it is an “involved process of harvesting, transporting, building and shipping.” The author proves that recycling is logical by detailing how much waste is produced when goods are created from scratch, driving home her logical argument with the simple question: “Why cut down a forest instead of recycling paper?”

To lend even more credibility to her already logical argument, the writer includes statistics relevant to recycling. In a clear, bullet-pointed list of data showing the importance of recycling, she provides relevant and useful information: “It takes 95% less energy to recycle aluminum than it does to make it from raw materials.” Recycling aluminum is worth the effort because making new aluminum is less efficient, and the writer has data to prove it. The writer goes on to list four more pieces of data to support her argument while the writer of the other passage only provides one.

Finally, the writer’s purposeful ethical plea in the pro-recycling passage more effectively calls the audience to action. By writing, “It is the morally sound thing to do to protect our beautiful planet for future generations,” the writer conjures images of clear blue skies and clean shining seas, helping the reader emotionally connect to the argument. If we do not recycle, the writer implies, we will be committing a sin against future generations. The writer finishes her argument with a passionate and motivating plea to the audience: “Please make sure you recycle!”

Commentary

This sample essay would receive a perfect score on the GED. The writer clearly reviewed the prompt and outlined the argument before writing. Generally, the response exhibits the following organization:

  • Paragraph 1 — Introduction
  • Paragraph 2 — Logical reasoning
  • Paragraph 3 — Statistics
  • Paragraph 4 — Ethics
  • Paragraph 5 — Conclusion

The introduction clearly previews the passage’s topic, explains both sides, and demonstrates that the student understands each writer’s argument. The student uses strong, clear language and concludes with a bold thesis statement that lists three reasons why the argument he or she chose is “better-supported.”

In the body paragraphs, the student demonstrates a strong command of each of the scoring criteria:

  • Analysis of Arguments and Use of Evidence: The student quotes multiple sections of the passage to support each point, demonstrating a clear understanding of the material presented.
  • Development of Ideas and Structure: The student develops coherent organization by focusing on a supporting reason in each body paragraph and providing transitions like “In addition to” and “Finally” to help the paragraphs flow together.
  • Clarity and Command of Standard English: The sentence structure is varied and effective, and the author maintains proper spelling and grammar throughout.

Finally, the passage concludes with a brief concession to the opposing side, showing an ability to recognize the complexity of the issue, before wrapping up the discussion with a summation of why the pro-recycling passage is better-supported than the anti-recycling passage.



HiSET vs. GED: What’s the Difference? [Updated for 2024]

By: Jen Denton, Student Success Coach on July 8, 2024 at 6:45 AM


Did you know that the GED test is not the only exam you can take to earn a high school equivalency diploma? Depending on where you live, taking the HiSET exam may be an alternative option. When weighing the pros and cons of taking the HiSET vs GED, there are some key differences to consider. Read on to see if the HiSET test may be right for you.

What is the HiSET?

HiSET stands for High School Equivalency Test. Just like the GED, passing the HiSET exam will allow you to obtain a high school equivalency diploma. Completing the HiSET has the same benefits and privileges as passing the GED. When searching for a job or applying to colleges and technical programs, the HiSET diploma is recognized and accepted by employers and universities throughout the United States. Both the GED and the HiSET measure general knowledge, and passing either exam will provide you with a high school equivalency credential.


What Are the Differences Between the HiSET and GED?

The biggest differences between the GED and HiSET exams are availability, test format, and scoring.

Availability

To determine whether you can take the HiSET, you must first find out if the exam is available in your state. Some states offer only the GED, some offer only the HiSET, and the rest offer both testing options.

GED-only states: Alabama, Alaska, Arizona, Arkansas, Connecticut, Delaware, Florida, Idaho, Kansas, Kentucky, Maryland, Nebraska, New Jersey, New York, North Dakota, Oregon, Rhode Island, South Carolina, South Dakota, Texas, Utah, Vermont, Virginia, Washington, Washington, D.C., Wisconsin

HiSET-only states: Iowa, Maine

States offering HiSET and GED tests: California, Colorado, Georgia, Hawaii, Illinois, Indiana, Louisiana, Massachusetts, Michigan, Minnesota, Mississippi, Missouri, Montana, Nevada, New Hampshire, New Mexico, North Carolina, Ohio, Oklahoma, Pennsylvania, Tennessee, West Virginia, Wyoming

US territories offering HiSET and GED:

GED is only offered in the Virgin Islands. HiSET is only offered in the Marshall Islands and Palau. American Samoa, Guam, and the Northern Mariana Islands offer both tests.

Like the GED, the HiSET has a unique set of state rules and regulations. The most important thing to remember is that either exam allows you to earn your high school equivalency diploma. Explore this article to learn more about HiSET and GED requirements by state.

Test Format

If you have a choice of exam, the next thing to consider when deciding between the HiSET and the GED is how each test is formatted. While very similar in degree of difficulty, each exam is set up a bit differently. It’s important to know which subjects are covered, what types of questions are asked, and how you’ll be asked to answer those questions.

HiSET vs. GED Subjects

The HiSET and the GED exams test individuals on their math, science, social studies, and language arts knowledge. However, the HiSET and GED test makers divide those subjects in different ways. The HiSET is sectioned into five subtests: math, science, social studies, reading, and writing. The GED is divided into four subtests: math, science, social studies, and language arts. The same amount of material and subject information is assessed in both exams; the GED simply combines reading and writing into one robust language arts subtest.

HiSET vs. GED Question Type

Perhaps the greatest difference between the HiSET and GED exams is how the questions are asked. The HiSET exam is entirely multiple-choice except for the essay, while the GED has a wide variety of question types. Of course, the GED includes a significant number of multiple-choice questions as well. But the GED exam also asks fill-in-the-blank, drag-and-drop, radio button, and multiple-select questions. The GED is a perfect fit for testers who enjoy variety in the way questions are asked. For students who prefer a more cut-and-dried, predictable format, the HiSET test may be a better option.

HiSET vs. GED Testing Options

One big reason for the difference in how the questions are asked on the GED and HiSET is the difference in how students are asked to take and answer test questions. All GED testing options are computerized. Whether testing online or in-person, the GED exam is taken on a computer. This allows the test-makers to ask questions in a number of ways.

The HiSET, however, can be taken on a computer or in a pencil-to-paper option. The exam must adhere to a multiple-choice format to remain the same regardless of administration. The option to take the HiSET on a computer or manually with a test booklet varies by testing center. It’s important to check which formats are available in your local area.

Scoring

The GED and HiSET exams are scored quite differently. The maximum score for each subtest on the GED is 200. The highest score for each subject exam on the HiSET is 20. No, that doesn’t mean each test only has 20 questions! It means that each question varies in “weight,” and points are assigned purposefully throughout the test. Because of this, just like with the GED, it’s impossible to know exactly how many correct answers are needed to pass. You can, however, make a pretty good guess.

To pass the HiSET exam, testers must receive a minimum score of 8 out of 20, which works out to getting approximately 40% of the questions correct. If there are 50 questions on an exam, you’d need to get roughly 20 of them right. You can miss more than half of the exam questions and still pass! Just like the GED, the HiSET has a benchmark for honors-level testing. Scoring 15 or higher on the HiSET indicates that you are ready to take on more challenging content and gives you an extra advantage in university and vocational admissions.

Which test is harder, GED or HiSET?

Each exam covers the same subject matter at a similar level of difficulty. The HiSET requires only 40% of the questions on each subtest to be answered correctly, while the GED requires 45% correct responses. This leads some to believe the HiSET exam is easier. Perhaps, but consider all the exam differences before deciding which option is right for you. Personal testing preferences influence how difficult a test feels, so base your decision on more than scoring.

What’s on the HiSET exam?

Just like the GED, the HiSET measures critical thinking. Each question is designed to assess reading and reasoning skills. To better understand what to expect on the HiSET exam, let’s explore each subject section of the test in detail.

Math

The HiSET math exam allows each tester 90 minutes to answer 55 multiple-choice questions. The HiSET math test topics include:

  • Numbers and Operations (19%)
  • Measurement/Geometry (18%)
  • Data Analysis/Probability/Statistics (18%)
  • Algebraic Concepts (45%)

Like the GED, the HiSET math section allows a calculator to be used for some questions. Your test proctor will let you know when you can use it. Math is a sticky area for many students, so make sure your prep takes your personal needs into account and fills your unique learning gaps.

Reading

The reading portion of the HiSET exam allows students 65 minutes (80 minutes if taken in Spanish) to answer 50 multiple-choice questions. Fiction-based passages account for 40% of the HiSET reading portion, compared to 25% on the GED; the remaining 60% of HiSET reading passages are non-fiction, informational texts, while 75% of GED reading is based on non-fiction passages. For testers who prefer one type of reading over another, this may be a factor in deciding which exam to take.

The HiSET reading skills tested are nearly identical to those on the GED. These include:

  • Reading Comprehension
  • Inference and Interpretation
  • Synthesis and Generalization

In both exams, the test makers want to be sure you understand what the passage says, can use clues from the text to draw conclusions, are able to examine how and why details are used, and can combine ideas to understand a larger meaning.

Writing

To complete the writing portion of the exam, HiSET testers must answer 61 questions in 120 minutes: 60 multiple-choice questions and one essay question. The skills tested in the multiple-choice section include:

  • Organization of Ideas (22%)
  • Language (43%)
  • Writing Conventions (35%)

The skills evaluated in the essay section include:

  • Development of Central Position or Claim
  • Organization of Ideas
  • Writing Conventions

The essay questions on the GED and HiSET differ slightly. Both tests will ask you to write an argumentative essay. Two reading passages from two different authors will address the same topic from two different perspectives. The GED essay question prompts testers to read, evaluate, and decide which author has the strongest opinion and why. Those taking the HiSET will be asked to express their personal opinions on the topic. Both tests have the same comparative essay format but ask the primary question in different ways.

Science

The HiSET science exam asks 60 multiple-choice questions in 80 minutes. Testers are permitted to use a calculator for some questions. The HiSET science test topics include:

  • Reading for Meaning in Science
  • Interpreting Science Experiments
  • Using Numbers and Graphics in Science

The questions will be based on reading passages in the following content categories:

  • Life Science (49%)
  • Physical Science (28%)
  • Earth Science (23%)

Both the GED and HiSET science exams encourage testers to use observational skills and, at times, math skills to work through science content. The focus should be on reading about science in the content categories and reading about science experiments.

Social Studies

The HiSET social studies portion of the exam allows 70 minutes to answer 60 multiple-choice questions. The test topics include:

  • Reading for Meaning in Social Studies
  • Analyzing Historical Events and Arguments in Social Studies
  • Using Numbers and Graphs in Social Studies

The questions are based on the following content categories:

  • History (35%)
  • Civics/Government (35%)
  • Economics (20%)
  • Geography (10%)

Many test takers erroneously think that they must memorize a ton of facts and figures to pass the social studies portion of the HiSET. This is simply not true. You will not be asked to name and number the presidents of the United States, recite key dates, or discuss war chronology. Of course, it can help to build your social studies vocabulary and have a general understanding of those things, but you do not have to commit those things to memory to be successful.

The test-makers want to see you read and reason. That means reading historical documents, answering questions, expressing opinions on social studies-based topics, and finding the author's purpose, much the same way you would in the reading section.

What Are the Costs of the HiSET vs GED?

HiSET costs vary by state and are comparable to the fees required to take the GED. For instance, there is no fee to take the HiSET exam if you are a resident of Maine. Test fees also depend on the test format: paper, computer, or test-at-home. Paper-based testing costs around $23 per subtest, computer-based testing roughly $37 per subtest, and test-at-home options approximately $55 per subtest; at those rates, the five HiSET subtests together run roughly $115 on paper or $185 on computer. You can also look into taking a mix of paper-based and computer-based tests.

The cost of taking the GED also varies by state. The average is $36 per subject, or about $144 for all four subjects, though testing may be free, as in New York, or as much as $50 per subject, as in South Dakota.

Most states offer two free retakes for the HiSET and the GED, but check your specific area and test center to see what’s available. Rules and regulations vary and are subject to change.

Frequently Asked Questions About HiSET vs GED

  • Should I take a GED or HiSET?

Both tests are challenging, but with the right preparation, you can gain the confidence to pass either exam. Remember, both tests come with the same rights and benefits. Passing either exam will provide your high school equivalency diploma.

  • Does the military accept the HiSET?

Yes! You can apply to join the military with a HiSET credential. Each branch recognizes the HiSET as equivalent to a traditional high school diploma.

  • What is a good score on the HiSET?

You must score at least 8 out of 20 on each section of the HiSET in order to pass. Honors-level scoring begins at 15 or higher.

  • Can you take one test at a time on the HiSET?

Of course! Both the GED and the HiSET can be taken one subject at a time.

  • What are the retake policies for the HiSET?

Most states offer two free retakes for the HiSET and the GED. Policies vary by state and are subject to change. Be sure to check your local test center’s retake policy.

  • Can you go to college with a HiSET diploma?

Yes! From West Coast residents in California to East Coast residents in Georgia, passing the HiSET opens the door to admission at 98% of colleges and universities throughout the U.S.

  • What happens when you pass the HiSET?

After you pass each test section, the HiSET testing service will issue your high school equivalency diploma. The national testing center for the HiSET, hiset.org, is your primary contact for all official testing and credentialing information. In addition to getting a copy of your certificate, you can order a transcript when applying to colleges and trade schools.

  • Can I receive testing accommodations for the HiSET exam?

Yes. If you have a diagnosed learning difference, be sure to check into the accommodations available to you. These can include extra testing time, special test formatting, a solo testing environment, etc. You must apply for these accommodations in advance and provide supporting documents. You can do this at hiset.org.




GED Essay Topics

Please note that the GED essay went through major changes with the 2014 revision. The topics listed below are no longer valid. For updated essay information, you can visit these pages:

  • GED Essay – Reasoning Through Language Arts
  • GED Essay – Social Studies
  • GED Short Answer Questions – Science

The essay portion of the GED will require you to compose a short essay on a pre-selected topic. The essay will be either a narrative, descriptive, or persuasive essay. Narrative essays require you to tell a story from your own life. Descriptive essays require you to paint a picture for your audience by focusing on individual characteristics. Persuasive essays require you to express your personal opinion on a topic. Each essay type will require a strong thesis and several well-developed paragraphs. You may ONLY write on the assigned topic, so it’s helpful to practice writing several essays from multiple practice topics. Set a timer for 45 minutes, and try your hand at one of the GED essay topics below!

1. What is the true meaning of honesty? In your essay, determine whether or not honesty is always the best policy.

2. What is one event from your life that taught you a powerful life lesson? Use your personal observations and experience to describe why that lesson was valuable.

3. Who is the most important member of your family to you? Describe your relationship to this person and your reasons for selecting him or her.

4. Consider how our society has changed over time. Are young people today better off than they were in the past? Write an essay explaining why or why not.

5. Is the current high school system sufficient to educate our country’s youth? Describe what is valuable about our country’s system or what might be changed in order to produce better results.

6. Do hobbies have any real value to the individuals who participate in them? If so, how do extracurricular activities benefit participants? Write an essay describing your own activities outside of school and work.

7. If you won the lottery today, what aspects of your life would you change? What would you keep the same? Write an essay discussing your ideas. Support them with reasons and examples.

8. What can be done to prevent drivers from texting while driving? Give suggestions and examples to support your opinion.

9. Is a college degree important in today’s workplace? Describe your opinions on the value of higher education, and use details from your own life.

10. The Internet is an invention that has done irreparable harm to our collective ability to engage in long-term research. How do you think that the benefits of instantaneous information provided by the Internet compare with the potential drawbacks of shortened attention spans?

11. Do you most admire people your own age or people older than you? Write an essay explaining what you think, and give specific examples of an individual you admire, and the reasons you admire him or her.

12. In your opinion, should schools require students to complete a minimum number of community service hours? Discuss whether you believe mandatory community service would benefit most young people.

13. If you could live in another time period, when would it be and why? Be sure to include relevant historical details.

14. Describe a situation in which you made a difficult decision involving an ethical issue. Show how the experience was important and developed your character.

15. Describe one of your most prized possessions. Make sure to isolate three or four different characteristics of the item, and explain why it’s important to you.

Original article | Open access | Published: 08 July 2024

Can you spot the bot? Identifying AI-generated writing in college essays

Tal Waltzer (ORCID: orcid.org/0000-0003-4464-0336), Celeste Pilegard & Gail D. Heyman

International Journal for Educational Integrity, volume 20, article number 11 (2024)


Abstract

The release of ChatGPT in 2022 has generated extensive speculation about how Artificial Intelligence (AI) will impact the capacity of institutions of higher learning to achieve their central missions of promoting learning and certifying knowledge. Our main questions were whether people could identify AI-generated text and whether factors such as expertise or confidence would predict this ability. The present research provides empirical data to inform these speculations through an assessment given to a convenience sample of 140 college instructors and 145 college students (Study 1), as well as to ChatGPT itself (Study 2). The assessment was administered in an online survey and included an AI Identification Test, which presented pairs of essays: in each case, one was written by a college student during an in-class exam and the other was generated by ChatGPT. Analyses with binomial tests and linear modeling suggested that the AI Identification Test was challenging: on average, instructors were able to guess which one was written by ChatGPT only 70% of the time (compared to 60% for students and 63% for ChatGPT). Neither experience with ChatGPT nor content expertise improved performance. Even people who were confident in their abilities struggled with the test. ChatGPT’s responses reflected much more confidence than those of human participants despite performing just as poorly. ChatGPT’s responses on an AI Attitude Assessment measure were similar to those reported by instructors and students, except that ChatGPT rated several AI uses more favorably and indicated substantially more optimism about the positive educational benefits of AI. The findings highlight challenges for scholars and practitioners to consider as they navigate the integration of AI in education.

Introduction

Artificial intelligence (AI) is becoming ubiquitous in daily life. It has the potential to help solve many of society’s most complex and important problems, such as improving the detection, diagnosis, and treatment of chronic disease (Jiang et al. 2017 ), and informing public policy regarding climate change (Biswas 2023 ). However, AI also comes with potential pitfalls, such as threatening widely-held values like fairness and the right to privacy (Borenstein and Howard 2021 ; Weidinger et al. 2021 ; Zhuo et al. 2023 ). Although the specific ways in which the promises and pitfalls of AI will play out remain to be seen, it is clear that AI will change human societies in significant ways.

In late November of 2022, the generative large-language model ChatGPT (GPT-3, Brown et al. 2020 ) was released to the public. It soon became clear that talk about the consequences of AI was much more than futuristic speculation, and that we are now watching its consequences unfold before our eyes in real time. This is not only because the technology is now easily accessible to the general public, but also because of its advanced capacities, including a sophisticated ability to use context to generate appropriate responses to a wide range of prompts (Devlin et al. 2018 ; Gilson et al. 2022 ; Susnjak 2022 ; Vaswani et al. 2017 ).

How AI-generated content poses challenges for educational assessment

Since AI technologies like ChatGPT can flexibly produce human-like content, this raises the possibility that students may use the technology to complete their academic work for them, and that instructors may not be able to tell when their students turn in such AI-assisted work. This possibility has led some people to argue that we may be seeing the end of essay assignments in education (Mitchell 2022 ; Stokel-Walker 2022 ). Even some advocates of AI in the classroom have expressed concerns about its potential for undermining academic integrity (Cotton et al. 2023 ; Eke 2023 ). For example, as Kasneci et al. ( 2023 ) noted, the technology might “amplify laziness and counteract the learners’ interest to conduct their own investigations and come to their own conclusions or solutions” (p. 5). In response to these concerns, some educational institutions have already tried to ban ChatGPT (Johnson, 2023; Rosenzweig-Ziff 2023 ; Schulten, 2023).

These discussions are founded on extensive scholarship on academic integrity, which is fundamental to ethics in higher education (Bertram Gallant 2011 ; Bretag 2016 ; Rettinger and Bertram Gallant 2022 ). Challenges to academic integrity are not new: Students have long found and used tools to circumvent the work their teachers assign to them, and research on these behaviors spans nearly a century (Cizek 1999 ; Hartshorne and May 1928 ; McCabe et al. 2012 ). One recent example is contract cheating, where students pay other people to do their schoolwork for them, such as writing an essay (Bretag et al. 2019 ; Curtis and Clare 2017 ). While very few students (less than 5% by most estimates) tend to use contract cheating, AI has the potential to make cheating more accessible and affordable and it raises many new questions about the relationship between technology, academic integrity, and ethics in education (Cotton et al. 2023 ; Eke 2023 ; Susnjak 2022 ).

To date, there is very little empirical evidence to inform debates about the likely impact of ChatGPT on education or to inform what best practices might look like regarding use of the technology (Dwivedi et al. 2023 ; Lo 2023 ). The primary goal of the present research is to provide such evidence with reference to college-essay writing. One critical question is whether college students can pass off work generated by ChatGPT as their own. If so, large numbers of students may simply paste in ChatGPT responses to essays they are asked to write without the kind of active engagement with the material that leads to deep learning (Chi and Wylie 2014 ). This problem is likely to be exacerbated when students brag about doing this and earning high scores, which can encourage other students to follow suit. Indeed, this kind of bragging motivated the present work (when the last author learned about a college student bragging about using ChatGPT to write all of her final papers in her college classes and getting A’s on all of them).

In support of the possibility that instructors may have trouble identifying ChatGPT-generated text, some previous research suggests that ChatGPT is capable of successfully generating college- or graduate-school-level writing. Yeadon et al. (2023) used AI to generate responses to essays based on a set of prompts used in a physics module that was in current use and asked graders to evaluate the responses. An example prompt they used was: “How did natural philosophers’ understanding of electricity change during the 18th and 19th centuries?” The researchers found that the AI-generated responses earned scores comparable to most students taking the module and concluded that current AI large-language models pose “a significant threat to the fidelity of short-form essays as an assessment method in Physics courses.” Terwiesch (2023) found that ChatGPT scored at a B or B- level on the final exam of Operations Management in an MBA program, and Katz et al. (2023) found that ChatGPT has the necessary legal knowledge, reading comprehension, and writing ability to pass the Bar exam in nearly all jurisdictions in the United States. This evidence makes it very clear that ChatGPT can generate well-written content in response to a wide range of prompts.

Distinguishing AI-generated from human-generated work

What is still not clear is how good instructors are at distinguishing between ChatGPT-generated writing and writing generated by students at the college level given that it is at least possible that ChatGPT-generated writing could be both high quality and be distinctly different than anything people generally write (e.g., because ChatGPT-generated writing has particular features). To our knowledge, this question has not yet been addressed, but a few prior studies have examined related questions. In the first such study, Gunser et al. ( 2021 ) used writing generated by a ChatGPT predecessor, GPT-2 (see Radford et al. 2019 ). They tested nine participants with a professional background in literature. These participants both generated content (i.e., wrote continuations after receiving the first few lines of unfamiliar poems or stories), and determined how other writing was generated. Gunser et al. ( 2021 ) found that misclassifications were relatively common. For example, in 18% of cases participants judged AI-assisted writing to be human-generated. This suggests that even AI technology that is substantially less advanced than ChatGPT is capable of generating writing that is hard to distinguish from human writing.

Köbis and Mossink ( 2021 ) also examined participants’ ability to distinguish between poetry written by GPT-2 and humans. Their participants were given pairs of poems. They were told that one poem in each pair was written by a human and the other was written by GPT-2, and they were asked to determine which was which. In one of their studies, the human-written poems were written by professional poets. The researchers generated multiple poems in response to prompts, and they found that when the comparison GPT-2 poems were ones they selected as the best among the set generated by the AI, participants could not distinguish between the GPT-2 and human writing. However, when researchers randomly selected poems generated by GPT-2, participants were better than chance at detecting which ones were generated by the AI.

In a third relevant study, Waltzer et al. (2023a) tested high school teachers and students. All participants were presented with pairs of English essays, such as one on why literature matters. In each case one essay was written by a high school student and the other was generated by ChatGPT, and participants were asked which essay in each pair had been generated by ChatGPT. Waltzer et al. (2023a) found that teachers only got it right 70% of the time, and that students’ performance was even worse (62%). They also found that well-written essays were harder to distinguish from those generated by ChatGPT than poorly written ones. However, it is unclear to what extent these findings are specific to the high school context. It should also be noted that there were no clear right or wrong answers in the types of essays used in Waltzer et al. (2023a), so the results may not generalize to essays that ask for factual information based on specific class content.

AI detection skills, attitudes, and perceptions

If college instructors find it challenging to distinguish between writing generated by ChatGPT and by college students, it raises the question of what factors might be correlated with the ability to perform this discrimination. One possible correlate is experience with ChatGPT, which may allow people to recognize patterns in the writing style it generates, such as a tendency to formally summarize previous content. Content-relevant knowledge is another possible predictor. Individuals with such knowledge will presumably be better at spotting errors in answers, and it is plausible that instructors know that AI tools are likely to get the content of introductory-level college courses correct and therefore assume that essays containing errors were written by students.

Another possible predictor is confidence in one’s ability to discriminate on the task or on particular items of the task (Erickson and Heit 2015; Fischer and Budescu 2005; Wixted and Wells 2017). In other words, are AI discriminations made with a high degree of confidence more likely to be accurate than low-confidence discriminations? In some cases, confidence judgments are a good predictor of accuracy, such as on many perceptual decision tasks (e.g., detecting contrast between light and dark bars, Fleming et al. 2010). However, in other cases correlations between confidence and accuracy are small or non-existent, such as on some deductive reasoning tasks (e.g., Shynkaruk and Thompson 2006). Links to confidence can also depend on how confidence is measured: Gigerenzer et al. (1991) found overconfidence on individual items, but good calibration when participants were asked how many items they got right after seeing many items.

In addition to the importance of gathering empirical data on the extent to which instructors can distinguish ChatGPT from college student writing, it is important to examine how college instructors and students perceive AI in education given that such attitudes may affect behavior (Al Darayseh 2023 ; Chocarro et al. 2023 ; Joo et al. 2018 ; Tlili et al. 2023 ). For example, instructors may only try to develop precautions to prevent AI cheating if they view this as a significant concern. Similarly, students’ confusion about what counts as cheating can play an important role in their cheating decisions (Waltzer and Dahl 2023 ; Waltzer et al. 2023b ).

The present research

In the present research we developed an assessment that we gave to college instructors and students (Study 1) and to ChatGPT itself (Study 2). The central feature of the assessment was an AI Identification Test, which included six pairs of essays. As indicated in the instructions, one essay in each pair was generated by ChatGPT and the other was written by a college student, and the task was to determine which essay was written by the chatbot. The essay pairs were drawn from larger pools of essays of each type.

The student essays were written by students as part of a graded exam in a psychology class, and the ChatGPT essays were generated in response to the same essay prompts. Of interest were overall performance and potential correlates of performance. The performance of college instructors was of particular interest because they are the ones typically responsible for grading, but the performance of students and of ChatGPT was also of interest for comparison. ChatGPT was of additional interest given anecdotal evidence that college instructors are asking ChatGPT to tell them whether pieces of work were AI-generated. For example, the academic integrity office at one major university sent out an announcement asking instructors not to report students for cheating if their evidence was solely based on using ChatGPT to detect AI-generated writing (UCSD Academic Integrity Office, 2023).

We also administered an AI Attitude Assessment (Waltzer et al. 2023a ), which included questions about overall levels of optimism and pessimism about the use of AI in education, and the appropriateness of specific uses of AI in academic settings, such as a student submitting an edited version of a ChatGPT-generated essay for a writing assignment.

Study 1: College instructors and students

Participants were given an online assessment that included an AI Identification Test , an AI Attitude Assessment , and some demographic questions. The AI Identification Test was developed for the present research, as described below (see Materials and Procedure). The test involved presenting six pairs of essays, with the instructions to try to identify which one was written by ChatGPT in each case. Participants also rated their confidence before the task and after responding to each item, and reported how many they thought they got right at the end. The AI Attitude Assessment was drawn from Waltzer et al. ( 2023a ) to assess participants’ views of the use of AI in education.

Participants

For the testing phase of the project, we recruited 140 instructors who had taught or worked as a teaching assistant for classes at the college level (69 of them taught psychology and 63 taught other subjects such as philosophy, computer science, and history). We recruited instructors through personal connections and snowball sampling. Most of the instructors were women (59%), white (60%), and native English speakers (67%), and most of them taught at colleges in the United States (91%). We also recruited 145 undergraduate students (mean age = 20.90 years, 80% women, 52% Asian, 63% native English speakers) from a subject recruitment system in the psychology department at a large research university in the United States. All data collection took place between 3/15/2023 and 4/15/2023 and followed our pre-registration plan (https://aspredicted.org/mk3a2.pdf).

Materials and procedure

Developing the AI Identification Test.

To create the stimuli for the AI Identification Test, we first generated two prompts for the essays (Table  1 ). We chose these prompts in collaboration with an instructor to reflect real student assignments for a college psychology class.

Fifty undergraduate students hand-wrote both essays as part of a proctored exam in their psychology class on 1/30/2023. Research assistants transcribed the essays and removed essays from the pool that were not written in third-person or did not include the correct number of sentences. Three additional essays were excluded for being illegible, and another one was excluded for mentioning a specific location on campus. This led to 15 exclusions for the Phonemic Awareness prompt and 25 exclusions for the Studying Advice prompt. After applying these exclusions, we randomly selected 25 essays for each prompt to generate the 6 pairs given to each participant. To prepare the texts for use as stimuli, research assistants then used a word processor to correct obvious errors that could be corrected without major rewriting (e.g., punctuation, spelling, and capitalization).

All student essays were graded according to the class rubric on a scale from 0 to 10 by two individuals on the teaching team of the class: the course’s primary instructor and a graduate student teaching assistant. Grades were averaged together to create one combined grade for each essay (mean: 7.93, SD: 2.29, range: 2–10). Two of the authors also scored the student essays for writing quality on a scale from 0 to 100, including clarity, conciseness, and coherence (combined score mean: 82.83, SD: 7.53, range: 65–98). Materials for the study, including detailed scoring rubrics, are available at https://osf.io/2c54a/.

The ChatGPT stimuli were prepared by entering the same prompts into ChatGPT ( https://chat.openai.com/ ) between 1/23/2023 and 1/25/2023, and re-generating the responses until there were 25 different essays for each prompt.

Testing Phase

In the participant testing phase, college instructors and students took the assessment, which lasted approximately 10 min. All participants began by indicating the name of their school and whether they were an instructor or a student, how familiar they were with ChatGPT (“Please rate how much experience you have with using ChatGPT”), and how confident they were that they would be able to distinguish between writing generated by ChatGPT and by college students. Then they were told they would get to see how well they score at the end, and they began the AI Identification Test.

The AI Identification Test consisted of six pairs of essays: three Phonemic Awareness pairs, and three Studying Advice pairs, in counterbalanced order. Each pair included one text generated by ChatGPT and one text generated by a college student, both drawn randomly from their respective pools of 25 possible essays. No essays were repeated for the same participant. Figure  1 illustrates what a text pair looked like in the survey.

Figure 1: Example pair of essays for the Phonemic Awareness prompt. Top: student essay. Bottom: ChatGPT essay.
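
To make the trial structure concrete, here is a minimal R sketch of the pair-sampling scheme described above. It is an illustration under stated assumptions, not the authors' actual code (their materials are at https://osf.io/2c54a/), and the essay identifiers are hypothetical placeholders.

  # Illustrative simulation of the pair-sampling scheme: for each prompt,
  # draw three student essays and three ChatGPT essays without replacement
  # from pools of 25, yielding six pairs with no essay repeated.
  set.seed(1)
  make_trials <- function(prompt) {
    data.frame(
      prompt  = prompt,
      student = sample(paste0(prompt, "_student_", 1:25), 3),
      chatgpt = sample(paste0(prompt, "_chatgpt_", 1:25), 3)
    )
  }
  trials <- rbind(make_trials("phonemic_awareness"), make_trials("studying_advice"))
  trials  # six essay pairs for one simulated participant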

For each pair, participants selected the essay they thought was generated by ChatGPT and indicated how confident they were about their choice (slider from 0 = “not at all confident” to 100 = “extremely confident”). After all six pairs, participants estimated how well they did (“How many of the text pairs do you think you answered correctly?”).

After completing the AI Identification task, participants completed the AI Attitude Assessment concerning their views of ChatGPT in educational contexts (see Waltzer et al. 2023a). On this assessment, participants first estimated what percent of college students in the United States would ask ChatGPT to write an essay for them and submit it. Next, they rated their concerns (“How concerned are you about ChatGPT having negative effects on education?”) and optimism (“How optimistic are you about ChatGPT having positive benefits for education?”) about the technology on a scale from 0 (“not at all”) to 100 (“extremely”). On the final part of the AI Attitude Assessment, they evaluated five different possible uses of ChatGPT in education (such as submitting an essay after asking ChatGPT to improve the vocabulary) on a scale from −10 (“really bad”) to +10 (“really good”).

Participants also rated the extent to which they already knew the subject matter (i.e., cognitive psychology and the science of learning), and were given optional open-ended text boxes to share any experiences from their classes or suggestions for instructors related to the use of ChatGPT, or to comment on any of the questions in the Attitude Assessment. Instructors were also asked whether they had ever taught a psychology class and to describe their teaching experience. At the end, all participants reported demographic information (e.g., age, gender). All prompts are available in the online supplementary materials ( https://osf.io/2c54a/ ).

Data Analysis

We descriptively summarized variables of interest (e.g., overall accuracy on the Identification Test). We used inferential tests to predict Identification Test accuracy from group (instructor or student), confidence, subject expertise, and familiarity with ChatGPT. We also predicted responses to the AI Attitude Assessment as a function of group (instructor or student). All data analysis was done using R Statistical Software (v4.3.2; R Core Team 2021 ).

Key hypotheses were tested using Welch’s two-sample t-tests for group comparisons, linear regression models with F-tests for other predictors of accuracy, and Generalized Linear Mixed Models (GLMMs, Hox 2010 ) with likelihood ratio tests for within-subjects trial-by-trial analyses. GLMMs used random intercepts for participants and predicted trial performance (correct or incorrect) using trial confidence and essay quality as fixed effects.
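
As a rough illustration of this analysis pipeline, the participant-level analyses might look like the following in R. This is a hedged sketch: the data frame `d` and its columns (`accuracy`, `group`, `experience`) are hypothetical stand-ins, not the authors' actual variable names.

  # Sketch of the Study 1 participant-level analyses, assuming a data frame `d`
  # with one row per participant: `accuracy` = proportion of pairs correct,
  # `group` = "instructor" or "student", `experience` = 0-100 ChatGPT rating.

  # Welch's two-sample t-test (unequal variances is R's default)
  t.test(accuracy ~ group, data = d)

  # Linear regression with an F-test for a continuous predictor of accuracy
  anova(lm(accuracy ~ experience, data = d))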

Overall performance on AI identification test

Instructors correctly identified which essay was written by the chatbot 70% of the time, which was above chance (chance: 50%, binomial test: p < .001, 95% CI: [66%, 73%]). Students also performed above chance, with an average score of 60% (binomial test: p < .001, 95% CI: [57%, 64%]). Instructors performed significantly better than students (Welch’s two-sample t-test: t(283) = 3.30, p = .001).
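
For concreteness, the instructors’ binomial test can be reconstructed in R under the assumption that each of the 140 instructors contributed six trials (840 trials, of which 70% ≈ 588 were correct); these trial counts are inferred from the reported design rather than taken from the paper.

  # Exact binomial test against chance (50%), assuming 588 of 840 trials correct
  binom.test(x = 588, n = 840, p = 0.5)
  # p < .001 with a 95% CI of roughly [0.67, 0.73],
  # consistent with the reported [66%, 73%]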

Familiarity with subject matter

Participants rated how much previous knowledge they had in the essay subject matter (i.e., cognitive psychology and the science of learning). Linear regression models with F-tests indicated that familiarity with the subject did not predict instructors’ or students’ accuracy, Fs(1) < 0.49, ps > .486. Psychology instructors did not perform any better than non-psychology instructors, t(130) = 0.18, p = .860.

Familiarity with ChatGPT

Nearly all participants (94%) said they had heard of ChatGPT before taking the survey, and most instructors (62%) and about half of students (50%) said they had used ChatGPT before. For both groups, participants who had used ChatGPT did not perform any better than those who had never used it, Fs(1) < 0.77, ps > .383. Instructors’ and students’ experience with ChatGPT (from 0 = not at all experienced to 100 = extremely experienced) also did not predict their performance, Fs(1) < 0.77, ps > .383.

Confidence and estimated score

Before they began the Identification Test, both instructors and students expressed low confidence in their abilities to identify the chatbot (M = 34.60 on a scale from 0 = not at all confident to 100 = extremely confident). Their confidence was significantly below the midpoint of the scale (midpoint: 50), one-sample t-test: t(282) = 11.46, p < .001, 95% CI: [31.95, 37.24]. Confidence ratings given before the AI Identification Test did not predict performance for either group, Pearson’s rs < .12, ps > .171.

Right after they completed the Identification Test, participants guessed how many text pairs they got right. Both instructors and students significantly underestimated their performance, by about 15%, 95% CI: [11%, 18%], t(279) = -8.42, p < .001. Instructors’ estimated scores were positively correlated with their actual scores, Pearson’s r = .20, t(135) = 2.42, p = .017. Students’ estimated scores were not related to their actual scores, r = .03, p = .731.

Trial-by-trial performance on AI identification test

Participants’ confidence ratings on individual trials were counted as high if they fell above the midpoint (> 50 on a scale from 0 = not at all confident to 100 = extremely confident). For these within-subjects trial-by-trial analyses, we used Generalized Linear Mixed Models (GLMMs, Hox 2010) with random intercepts for participants and likelihood ratio tests (difference score reported as D). Both instructors and students performed better on trials in which they expressed high confidence (instructors: 73%, students: 63%) compared to low confidence (instructors: 65%, students: 56%), Ds(1) > 4.59, ps < .032.
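
A minimal sketch of this trial-level model using the lme4 package is shown below; the long-format data frame `trials` and its column names are illustrative assumptions, not the authors’ actual variables.

  # Trial-by-trial GLMM: random intercepts for participants, with a
  # likelihood ratio test for the fixed effect of high trial confidence.
  library(lme4)
  m0 <- glmer(correct ~ 1 + (1 | id), data = trials, family = binomial)
  m1 <- glmer(correct ~ high_conf + (1 | id), data = trials, family = binomial)
  anova(m0, m1)  # likelihood ratio test, analogous to the reported D statistic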

Student essay quality

We used two measures to capture the quality of each student-written essay: its assigned grade from 0 to 10 based on the class rubric, and its writing quality score from 0 to 100. Assigned grade was weakly related to instructors’ accuracy, but not to students’ accuracy. The text pairs that instructors got right tended to include student essays that earned slightly lower grades (M = 7.89, SD = 2.22) compared to those they got wrong (M = 8.17, SD = 2.16), D(1) = 3.86, p = .050. There was no difference for students, D(1) = 2.84, p = .092. Writing quality score did not differ significantly between correct and incorrect trials for either group, D(1) = 2.12, p = .146.

AI Attitude Assessment

Concerns and hopes about ChatGPT.

Both instructors and students expressed intermediate levels of concern and optimism. Specifically, on a scale from 0 (“not at all”) to 100 (“extremely”), participants expressed intermediate concern about ChatGPT having negative effects on education (instructors: M = 59.82; students: M = 55.97) and intermediate optimism about it having positive benefits (instructors: M = 49.86; students: M = 54.08). Attitudes did not differ between instructors and students, ts < 1.43, ps > .154. Participants estimated that just over half of college students (instructors: 57%, students: 54%) would use ChatGPT to write an essay for them and submit it. These estimates also did not differ by group, t(278) = 0.90, p = .370.

Evaluations of ChatGPT uses

Participants evaluated five different uses of ChatGPT in educational settings on a scale from −10 (“really bad”) to +10 (“really good”). Both instructors and students rated it very bad for someone to ask ChatGPT to write an essay for them and submit the direct output, but instructors rated it significantly more negatively (instructors: −8.95, students: −7.74), t(280) = 3.59, p < .001. Attitudes did not differ between groups for any of the other scenarios (Table 2), ts < 1.31, ps > .130.

Exploratory analysis of demographic factors

We also conducted exploratory analyses looking at ChatGPT use and attitudes among different demographic groups (gender, race, and native English speakers). We combined instructors and students because their responses to the Attitude Assessment did not differ. In these exploratory analyses, we found that participants who were not native English speakers were more likely to report using ChatGPT and to view it more positively. Specifically, 69% of non-native English speakers had used ChatGPT before, versus 48% of native English speakers, D(1) = 12.00, p < .001. Regardless of native language, the more experience someone had with ChatGPT, the more optimism they reported, F(1) = 18.71, p < .001, r = .37. Non-native speakers rated the scenario where a student writes an essay and asks ChatGPT to improve its vocabulary slightly positively (1.19), whereas native English speakers rated it slightly negatively (−1.43), F(1) = 11.00, p = .001. Asian participants expressed higher optimism (M = 59.14) than non-Asian participants (M = 47.29), F(1) = 10.05, p = .002. We found no other demographic differences.

Study 2: ChatGPT

Study 1 provided data on college instructors’ and students’ ability to recognize ChatGPT-generated writing and about their views of the technology. In Study 2, of primary interest was whether ChatGPT itself might perform better at identifying ChatGPT-generated writing. Indeed, the authors have heard discussions of this as a possible solution to recognize AI-generated writing. We addressed this question by repeatedly asking ChatGPT to act as a participant in the AI Identification Task. While doing so, we administered the rest of the assessment given to participants in Study 1. This included our AI Attitude Assessment, which allowed us to examine the extent to which ChatGPT produced attitude responses that were similar to those of the participants in Study 1.

Participants, materials, and procedures

There were no human participants for Study 2. We collected 40 survey responses from ChatGPT, each run in a separate session on the platform ( https://chat.openai.com/ ) between 5/4/2023 and 5/15/2023.

Two research assistants were trained on how to run the survey in the ChatGPT online interface. All prompts from the Study 1 survey were used, with minor modifications to suit the chat format. For example, slider questions were explained in the prompt, so instead of “How confident are you about this answer?” the prompt was “How confident are you about this answer from 0 (not at all confident) to 100 (extremely confident)?”. In pilot testing, we found that ChatGPT sometimes failed to answer the question (e.g., by not providing a number), so we prepared a second prompt for every question that the researcher used whenever the first prompt was not answered (e.g., “Please answer the above question with one number between 0 to 100.”). If ChatGPT still failed on the second prompt, the researcher marked it as a non-response and moved on to the next question in the survey.

Data analysis

As in Study 1, all analyses were done in R Statistical Software (R Core Team 2021). Key analyses first used linear regression models and F-tests to compare all three groups (instructors, students, ChatGPT). When these omnibus tests were significant, we followed up with post-hoc pairwise comparisons using Tukey’s method.

AI identification test

Overall accuracy.

ChatGPT generated correct responses on 63% of trials in the AI Identification Test, which was significantly above chance, binomial test p < .001, 95% CI: [57%, 69%]. Pairwise comparisons found that this performance by ChatGPT was not any different from that of instructors or students, ts(322) < 1.50, ps > .292.

Confidence and estimated performance

Unlike the human participants, ChatGPT produced responses with very high confidence, both before the task generally (M = 71.38, median = 70) and during individual trials specifically (M = 89.82, median = 95). General confidence ratings before the test were significantly higher from ChatGPT than from the humans (instructors: 34.35, students: 34.83), ts(320) > 9.47, ps < .001. But, as with the human participants, this confidence did not predict performance on the subsequent Identification task, F(1) = 0.94, p = .339. And like the human participants, ChatGPT’s reported confidence on individual trials did predict performance: ChatGPT produced higher confidence ratings on correct trials (M = 91.38) than incorrect trials (M = 87.33), D(1) = 8.74, p = .003.

ChatGPT also produced responses indicating high confidence after the task, typically estimating that it got all six text pairs right (M = 91%, median = 100%). It overestimated its performance by about 28%, and a paired t-test confirmed that ChatGPT’s estimated performance was significantly higher than its actual performance, t(36) = 9.66, p < .001. As inflated as it was, estimated performance still had a small positive correlation with actual performance, Pearson’s r = .35, t(35) = 2.21, p = .034.

Essay quality

The quality of the student essays, as indexed by their grade and writing quality score, did not significantly predict performance, Ds < 1.97, ps > .161.

AI Attitude Assessment

Concerns and hopes.

ChatGPT usually failed to answer the question, “How concerned are you about ChatGPT having negative effects on education?” from 0 (not at all concerned) to 100 (extremely concerned). Across the 40% of cases where ChatGPT successfully produced an answer, the average concern rating was 64.38, which did not differ significantly from instructors’ or students’ responses, F(2, 294) = 1.20, p = .304. ChatGPT produced answers much more often for the question, “How optimistic are you about ChatGPT having positive benefits for education?”, answering 88% of the time. The average optimism rating produced by ChatGPT was 73.24, which was significantly higher than that of instructors (49.86) and students (54.08), ts > 4.33, ps < .001. ChatGPT answered only 55% of the time for the question about how many students would use ChatGPT to write an essay for them and submit it; when it did not give an estimate, it typically generated explanations about its inability to predict human behavior and the fact that it does not condone cheating. When it did provide an estimate (M = 10%), it was vastly lower than that of instructors (57%) and students (54%), ts > 7.84, ps < .001.

Evaluation of ChatGPT uses

ChatGPT produced ratings of the ChatGPT use scenarios that on average were rank-ordered the same as the human ratings, with direct copying rated the most negatively and generating practice problems rated the most positively (see Fig. 2).

Figure 2: Average ratings of ChatGPT uses, from −10 = really bad to +10 = really good. Human responses included for comparison (instructors in dark gray and students in light gray bars).

Compared to humans’ ratings, ratings produced by ChatGPT were significantly more positive in most scenarios, ts > 3.09, ps < .006, with two exceptions. There was no significant difference between groups in the “format” scenario (using ChatGPT to format an essay in another style such as APA), F(2, 318) = 2.46, p = .087. And for the “direct” scenario, ChatGPT tended to rate direct copying more negatively than students (t(319) = 4.08, p < .001) but not instructors (t(319) = 1.57, p = .261), perhaps because ratings from ChatGPT and instructors were already so close to the most negative possible rating.

Discussion

In 1950, Alan Turing said he hoped that one day machines would be able to compete with people in all intellectual fields (Turing 1950; see Köbis and Mossink 2021). Today, by many measures, the large-language model ChatGPT appears to be getting close to achieving this end. In doing so, it is raising questions about the impact this AI and its successors will have on individuals and the institutions that shape the societies in which we live. One important set of questions revolves around its use in higher education, which is the focus of the present research.

Empirical contributions

Detecting AI-generated text.

Our central research question focused on whether instructors can identify ChatGPT-generated writing, since an inability to do so could threaten the ability of institutions of higher learning to promote learning and assess competence. To address this question, we developed an AI Identification Test in which the goal was to try to distinguish between psychology essays written by college students on exams versus essays generated by ChatGPT in response to the same prompts. We found that although college instructors performed substantially better than chance, they still found the assessment to be challenging, scoring an average of only 70%. This relatively poor performance suggests that college instructors have substantial difficulty detecting ChatGPT-generated writing. Interestingly, this performance by the college instructors was the same average performance as Waltzer et al. ( 2023a ) observed among high school instructors (70%) on a similar test involving English literature essays, suggesting the results are generalizable across the student populations and essay types. We also gave the assessment to college students (Study 1) and to ChatGPT (Study 2) for comparison. On average, students (60%) and ChatGPT (63%) performed even worse than instructors, although the difference only reached statistical significance when comparing students and instructors.

We found that instructors and students who went into the study believing they would be very good at distinguishing between essays written by college students and essays generated by ChatGPT were in fact no better at doing so than participants who lacked such confidence. However, item-level confidence did predict performance: when participants rated their confidence after each specific pair (i.e., “How confident are you about this answer?”), they performed significantly better on items for which they reported higher confidence. These same patterns were observed when analyzing the confidence ratings from ChatGPT, though ChatGPT produced much higher confidence ratings than instructors or students, reflecting overconfidence where instructors and students reported underconfidence.

Attitudes toward AI in education

Instructors and students both thought it was very bad for students to turn in an assignment generated by ChatGPT as their own, and these ratings were especially negative for instructors. Overall, instructors and students looked similar to one another in their evaluations of other uses of ChatGPT in education. For example, both rated submitting an edited version of a ChatGPT-generated essay in a class as bad, but less bad than submitting an unedited version. Interestingly, the rank orderings in evaluations of ChatGPT uses were the same when the responses were generated by ChatGPT as when they were generated by instructors or students. However, ChatGPT produced more favorable ratings of several uses compared to instructors and students (e.g., using the AI tool to enhance the vocabulary in an essay). Overall, both instructors and students reported being about as optimistic as they were concerned about AI in education. Interestingly, ChatGPT produced responses indicative of much more optimism than both human groups of participants.

Many instructors commented on the challenges ChatGPT poses for educators. One noted that “… ChatGPT makes it harder for us to rely on homework assignments to help students to learn. It will also likely be much harder to rely on grading to signal how likely it is for a student to be good at a skill or how creative they are.” Some suggested possible solutions such as coupling writing with oral exams. Others suggested that they would appreciate guidance. For example, one said, “I have told students not to use it, but I feel like I should not be like that. I think some of my reluctance to allow usage comes from not having good guidelines.”

And like the instructors, some students also suggested that they want guidance, such as knowing whether using ChatGPT to convert a document to MLA format would count as a violation of academic integrity. They also highlighted many of the same problems as instructors and noted beneficial ways students are finding to use it. One student noted that, “I think ChatGPT definitely has the potential to be abused in an educational setting, but I think at its core it can be a very useful tool for students. For example, I’ve heard of one student giving ChatGPT a rubric for an assignment and asking it to grade their own essay based on the rubric in order to improve their writing on their own.”

Theoretical contributions and practical implications

Our findings underscore the fact that AI chatbots have the potential to produce confident-sounding responses that are misleading (Chen et al. 2023; Goodwins 2022; Salvi et al. 2024). Interestingly, the underconfidence reported by instructors and students stands in contrast to findings that people often express overconfidence in their abilities to detect AI (e.g., deepfake videos, Köbis et al. 2021). Although general confidence before the task did not predict performance, specific confidence on each item of the task did. Taken together, our findings are consistent with other work suggesting that confidence effects are context-dependent and can differ depending on whether they are assessed at the item level or more generally (Gigerenzer et al. 1991).

The fact that college instructors have substantial difficulty differentiating between ChatGPT-generated writing and the writing of college students provides evidence that ChatGPT poses a significant threat to academic integrity. Ignoring this threat is also likely to undermine central aspects of the mission of higher education in ways that undermine the value of assessments and disincentivize the kinds of cognitive engagement that promote deep learning (Chi and Wylie 2014 ). We are skeptical of answers that point to the use of AI detection tools to address this issue given that they will always be imperfect and false accusations have potential to cause serious harm (Dalalah and Dalalah 2023 ; Fowler 2023 ; Svrluga, 2023 ). Rather, we think that the solution will have to involve developing and disseminating best practices regarding creating assessments and incentivizing cognitive engagement in ways that help students learn to use AI as problem-solving tools.

Limitations and future directions

Why instructors perform better than students at detecting AI-generated text is unclear. Although we did not find any effect of content-relevant expertise, it still may be the case that experience with evaluating student writing matters, and instructors presumably have more such experience. For example, one non-psychology instructor who got 100% of the pairs correct said, “Experience with grading lower division undergraduate papers indicates that students do not always fully answer the prompt, if the example text did not appear to meet all of the requirements of the prompt or did not provide sufficient information, I tended to assume an actual student wrote it.” To address this possibility, it will be important to compare adults who do have teaching experience with those who do not.

It is somewhat surprising that experience with ChatGPT did not affect the performance of instructors or students on the AI Identification Test. One contributing factor may be that people pick up on some false heuristics from reading the text it generates (see Jakesch et al. 2023 ). It is possible that giving people practice at distinguishing the different forms of writing with feedback could lead to better performance.

Why confidence was predictive of accuracy at the item level is still not clear. One possibility is that there are specific, valid cues that many people were using. One likely cue is grammar. We corrected grammatical errors in the student essays that were flagged by a standard spell checker and had obvious corrections, but we left ungrammatical writing that did not have obvious corrections (e.g., “That is being said, to be able to understand the concepts and materials being learned, and be able to produce comprehension.“). Many instructors noted that they used grammatical errors as cues that writing was generated by students. As one instructor remarked, “Undergraduates often have slight errors in grammar and tense or plurality agreement, and I have heard the chat bot works very well as an editor.” Similarly, another noted, “I looked for more complete, grammatical sentences. In my experience, Chat-GPT doesn’t use fragment sentences and is grammatically correct. Students are more likely to use incomplete sentences or have grammatical errors.” This raises methodological questions about the best way to compare AI and human writing. For example, it is unclear which grammatical mistakes should be corrected in student writing. It will also be of interest to examine the detectability of writing that is generated by AI and later edited by students, since many students will undoubtedly use AI in this way to complete their course assignments.

We also found that student-written essays that earned higher grades (based on the scoring rubric for their class exam) were harder for instructors to differentiate from ChatGPT writing. This does not appear to be a simple effect of writing quality given that a separate measure of writing quality that did not account for content accuracy was not predictive. According to the class instructor, the higher-scoring essays tended to include more specific details, and this might have been what made them less distinguishable. Relatedly, it may be that the higher-scoring essays were harder to distinguish because they appeared to be generated by more competent-sounding writers, and it was clear from instructor comments that they generally viewed ChatGPT as highly competent.

The results of the present research validate concerns that have been raised about college instructors having difficulty distinguishing writing generated by ChatGPT from the writing of their students, and document that this is also true when students try to detect writing generated by ChatGPT. The results indicate that this issue is particularly pronounced when instructors evaluate high-scoring student essays. The results also indicate that ChatGPT itself performs no better than instructors at detecting ChatGPT-generated writing even though ChatGPT-reported confidence is much higher. These findings highlight the importance of examining current teaching and assessment practices and the potential challenges AI chatbots pose for academic integrity and ethics in education (Cotton et al. 2023 ; Eke 2023 ; Susnjak 2022 ). Further, the results show that both instructors and students have a mixture of apprehension and optimism about the use of AI in education, and that many are looking for guidance about how to ethically use it in ways that promote learning. Taken together, our findings underscore some of the challenges that need to be carefully navigated in order to minimize the risks and maximize the benefits of AI in education.

Data availability

Supplementary materials, including data, analysis, and survey items, are available on the Open Science Framework: https://osf.io/2c54a/ .

Abbreviations

AI: Artificial Intelligence
CI: Confidence Interval
GLMM: Generalized Linear Mixed Model
GPT: Generative Pre-trained Transformer
SD: Standard Deviation

Al Darayseh A (2023) Acceptance of artificial intelligence in teaching science: Science teachers’ perspective. Computers Education: Artif Intell 4:100132. https://doi.org/10.1016/j.caeai.2023.100132


Bertram Gallant T (2011) Creating the ethical academy. Routledge, New York


Biswas SS (2023) Potential use of Chat GPT in global warming. Ann Biomed Eng 51:1126–1127. https://doi.org/10.1007/s10439-023-03171-8

Borenstein J, Howard A (2021) Emerging challenges in AI and the need for AI ethics education. AI Ethics 1:61–65. https://doi.org/10.1007/s43681-020-00002-7

Bretag T (ed) (2016) Handbook of academic integrity. Springer

Bretag T, Harper R, Burton M, Ellis C, Newton P, Rozenberg P, van Haeringen K (2019) Contract cheating: a survey of Australian university students. Stud High Educ 44(11):1837–1856. https://doi.org/10.1080/03075079.2018.1462788

Brown TB, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A, Agarwal S, Herbert-Voss A, Krueger G, Henighan T, Child R, Ramesh A, Ziegler D, Wu J, Winter C, Amodei D (2020) Language models are few-shot learners. Adv Neural Inf Process Syst 33. https://doi.org/10.48550/arxiv.2005.14165

Chen Y, Andiappan M, Jenkin T, Ovchinnikov A (2023) A manager and an AI walk into a bar: does ChatGPT make biased decisions like we do? SSRN 4380365. https://doi.org/10.2139/ssrn.4380365

Chi MTH, Wylie R (2014) The ICAP framework: linking cognitive engagement to active learning outcomes. Educational Psychol 49(4):219–243. https://doi.org/10.1080/00461520.2014.965823

Chocarro R, Cortiñas M, Marcos-Matás G (2023) Teachers’ attitudes towards chatbots in education: a technology acceptance model approach considering the effect of social language, bot proactiveness, and users’ characteristics. Educational Stud 49(2):295–313. https://doi.org/10.1080/03055698.2020.1850426

Cizek GJ (1999) Cheating on tests: how to do it, detect it, and prevent it. Routledge

R Core Team (2021) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/

Cotton DRE, Cotton PA, Shipway JR (2023) Chatting and cheating: ensuring academic integrity in the era of ChatGPT. Innovations Educ Teach Int. https://doi.org/10.1080/14703297.2023.2190148

Curtis GJ, Clare J (2017) How prevalent is contract cheating and to what extent are students repeat offenders? J Acad Ethics 15:115–124. https://doi.org/10.1007/s10805-017-9278-x

Dalalah D, Dalalah OMA (2023) The false positives and false negatives of generative AI detection tools in education and academic research: the case of ChatGPT. Int J Manage Educ 21(2):100822. https://doi.org/10.1016/j.ijme.2023.100822

Devlin J, Chang M-W, Lee K, Toutanova K (2018) BERT: pre-training of deep bidirectional transformers for language understanding. ArXiv. https://doi.org/10.48550/arxiv.1810.04805

Dwivedi YK, Kshetri N, Hughes L, Slade EL, Jeyaraj A, Kar AK, Baabdullah AM, Koohang A, Raghavan V, Ahuja M, Albanna H, Albashrawi MA, Al-Busaidi AS, Balakrishnan J, Barlette Y, Basu S, Bose I, Brooks L, Buhalis D, Wright R (2023) So what if ChatGPT wrote it? Multidisciplinary perspectives on opportunities, challenges, and implications of generative conversational AI for research, practice, and policy. Int J Inf Manag 71:102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642

Eke DO (2023) ChatGPT and the rise of generative AI: threat to academic integrity? J Responsible Technol 13:100060. https://doi.org/10.1016/j.jrt.2023.100060

Erickson S, Heit E (2015) Metacognition and confidence: comparing math to other academic subjects. Front Psychol 6:742. https://doi.org/10.3389/fpsyg.2015.00742

Fischer I, Budescu DV (2005) When do those who know more also know more about how much they know? The development of confidence and performance in categorical decision tasks. Organ Behav Hum Decis Process 98:39–53. https://doi.org/10.1016/j.obhdp.2005.04.003

Fleming SM, Weil RS, Nagy Z, Dolan RJ, Rees G (2010) Relating introspective accuracy to individual differences in brain structure. Science 329:1541–1543. https://doi.org/10.1126/science.1191883

Fowler GA (2023), April 14 We tested a new ChatGPT-detector for teachers. It flagged an innocent student. The Washington Post . https://www.washingtonpost.com/technology/2023/04/01/chatgpt-cheating-detection-turnitin/

Gigerenzer G (1991) From tools to theories: a heuristic of discovery in cognitive psychology. Psychol Rev 98:254. https://doi.org/10.1037/0033-295X.98.2.254

Gigerenzer G, Hoffrage U, Kleinbölting H (1991) Probabilistic mental models: a brunswikian theory of confidence. Psychol Rev 98(4):506–528. https://doi.org/10.1037/0033-295X.98.4.506

Gilson A, Safranek C, Huang T, Socrates V, Chi L, Taylor RA, Chartash D (2022) How well does ChatGPT do when taking the medical licensing exams? The implications of large language models for medical education and knowledge assessment. MedRxiv. https://doi.org/10.1101/2022.12.23.22283901

Goodwins T (2022), December 12 ChatGPT has mastered the confidence trick, and that’s a terrible look for AI. The Register . https://www.theregister.com/2022/12/12/chatgpt_has_mastered_the_confidence/

Gunser VE, Gottschling S, Brucker B, Richter S, Gerjets P (2021) Can users distinguish narrative texts written by an artificial intelligence writing tool from purely human text? In C. Stephanidis, M. Antona, & S. Ntoa (Eds.), HCI International 2021 , Communications in Computer and Information Science , (Vol. 1419, pp. 520–527). Springer. https://doi.org/10.1007/978-3-030-78635-9_67

Hartshorne H, May MA (1928) Studies in the nature of character: vol. I. studies in deceit. Macmillan, New York

Google Scholar  

Hox J (2010) Multilevel analysis: techniques and applications, 2nd edn. Routledge, New York, NY

Jakesch M, Hancock JT, Naaman M (2023) Human heuristics for AI-generated language are flawed. Proceedings of the National Academy of Sciences, 120 (11), e2208839120. https://doi.org/10.1073/pnas.2208839120

Jiang F, Jiang Y, Zhi H, Dong Y, Li H, Ma S, Wang Y, Dong Q, Shen H, Wang Y (2017) Artificial intelligence in healthcare: past, present and future. Stroke Vascular Neurol 2(4):230–243. https://doi.org/10.1136/svn-2017-000101

Joo YJ, Park S, Lim E (2018) Factors influencing preservice teachers’ intention to use technology: TPACK, teacher self-efficacy, and technology acceptance model. J Educational Technol Soc 21(3):48–59. https://www.jstor.org/stable/26458506

Kasneci E, Seßler K, Küchemann S, Bannert M, Dementieva D, Fischer F, Kasneci G (2023) ChatGPT for good? On opportunities and challenges of large language models for education. Learn Individual Differences 103:102274. https://doi.org/10.1016/j.lindif.2023.102274

Katz DM, Bommarito MJ, Gao S, Arredondo P (2023) GPT-4 passes the bar exam. SSRN Electron J. https://doi.org/10.2139/ssrn.4389233

Köbis N, Mossink LD (2021) Artificial intelligence versus Maya Angelou: experimental evidence that people cannot differentiate AI-generated from human-written poetry. Comput Hum Behav 114:106553. https://doi.org/10.1016/j.chb.2020.106553

Köbis NC, Doležalová B, Soraperra I (2021) Fooled twice: people cannot detect deepfakes but think they can. iScience 24(11):103364. https://doi.org/10.1016/j.isci.2021.103364

Lo CK (2023) What is the impact of ChatGPT on education? A rapid review of the literature. Educ Sci 13(4):410. https://doi.org/10.3390/educsci13040410

McCabe DL, Butterfield KD, Treviño LK (2012) Cheating in college: why students do it and what educators can do about it. Johns Hopkins, Baltimore, MD

Mitchell A (2022) December 26). Professor catches student cheating with ChatGPT: ‘I feel abject terror’. New York Post. https://nypost.com/2022/12/26/students-using-chatgpt-to-cheat-professor-warns

Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I (2019) Language models are unsupervised multitask learners. OpenAI https://openai.com/research/better-language-models

Rettinger DA, Bertram Gallant T (eds) (2022) Cheating academic integrity: lessons from 30 years of research. Jossey Bass

Rosenzweig-Ziff D (2023) New York City blocks use of the ChatGPT bot in its schools. Wash Post https://www.washingtonpost.com/education/2023/01/05/nyc-schools-ban-chatgpt/

Salvi F, Ribeiro MH, Gallotti R, West R (2024) On the conversational persuasiveness of large language models: a randomized controlled trial. ArXiv. https://doi.org/10.48550/arXiv.2403.14380

Shynkaruk JM, Thompson VA (2006) Confidence and accuracy in deductive reasoning. Mem Cognit 34(3):619–632. https://doi.org/10.3758/BF03193584

Stokel-Walker C (2022) AI bot ChatGPT writes smart essays — should professors worry? Nature. https://doi.org/10.1038/d41586-022-04397-7

Susnjak T (2022) ChatGPT: The end of online exam integrity? ArXiv . https://arxiv.org/abs/2212.09292

Svrluga S (2023) Princeton student builds app to detect essays written by a popular AI bot. Wash Post https://www.washingtonpost.com/education/2023/01/12/gptzero-chatgpt-detector-ai/

Terwiesch C (2023) Would Chat GPT3 get a Wharton MBA? A prediction based on its performance in the Operations Management course. Mack Institute for Innovation Management at the Wharton School , University of Pennsylvania. https://mackinstitute.wharton.upenn.edu/wp-content/uploads/2023/01/Christian-Terwiesch-Chat-GTP-1.24.pdf

Tlili A, Shehata B, Adarkwah MA, Bozkurt A, Hickey DT, Huang R, Agyemang B (2023) What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn Environ 10:15. https://doi.org/10.1186/s40561-023-00237-x

Turing AM (1950) Computing machinery and intelligence. Mind - Q Rev Psychol Philos 236:433–460

UCSD Academic Integrity Office (2023) GenAI, cheating and reporting to the AI office [Announcement]. https://adminrecords.ucsd.edu/Notices/2023/2023-5-17-1.html

Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Polosukhin I (2017) Attention is all you need. Adv Neural Inf Process Syst 30. https://doi.org/10.48550/arxiv.1706.03762

Waltzer T, Dahl A (2023) Why do students cheat? Perceptions, evaluations, and motivations. Ethics Behav 33(2):130–150. https://doi.org/10.1080/10508422.2022.2026775

Waltzer T, Cox RL, Heyman GD (2023a) Testing the ability of teachers and students to differentiate between essays generated by ChatGPT and high school students. Hum Behav Emerg Technol 2023:1923981. https://doi.org/10.1155/2023/1923981

Waltzer T, DeBernardi FC, Dahl A (2023b) Student and teacher views on cheating in high school: perceptions, evaluations, and decisions. J Res Adolescence 33(1):108–126. https://doi.org/10.1111/jora.12784

Weidinger L, Mellor J, Rauh M, Griffin C, Uesato J, Huang PS, Gabriel I (2021) Ethical and social risks of harm from language models. ArXiv. https://doi.org/10.48550/arxiv.2112.04359

Wixted JT, Wells GL (2017) The relationship between eyewitness confidence and identification accuracy: a new synthesis. Psychol Sci Public Interest 18(1):10–65. https://doi.org/10.1177/1529100616686966

Yeadon W, Inyang OO, Mizouri A, Peach A, Testrow C (2023) The death of the short-form physics essay in the coming AI revolution. Phys Educ 58:035027. https://doi.org/10.1088/1361-6552/acc5cf

Zhuo TY, Huang Y, Chen C, Xing Z (2023) Red teaming ChatGPT via jailbreaking: bias, robustness, reliability and toxicity. ArXiv. https://doi.org/10.48550/arxiv.2301.12867

Download references



Language Arts, Writing

The GED® Language Arts, Writing test is divided into two parts, but the scores are combined so you'll receive a single score.

About the Test

Part I is 75 minutes long and contains 50 multiple-choice questions from the following content areas:

  • Organization (15%)
  • Sentence structure (30%)
  • Usage (30%)
  • Mechanics (25%)
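
Out of 50 questions, those percentages translate to roughly 7 or 8 questions on organization (15% of 50), about 15 each on sentence structure and usage (30% of 50), and 12 or 13 on mechanics (25% of 50).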

Part II consists of writing an essay about a familiar subject. You will have 45 minutes to plan, write, and revise your essay. The essay topic will require you to present your opinion or explain your views about the assigned topic. Two trained readers will score your essay on the basis of the following features:

  • Focused main points
  • Clear organization
  • Specific development of ideas
  • Control of sentence structure, punctuation, grammar, word choice, and spelling

Try a sample question

June 24, 2002

Jonathan Quinn
Employment Director
Capital City Gardening Services
4120 Wisconsin Ave., NW
Washington, DC 20016

Dear Mr. Quinn:

(A) (1) I would like to apply for the landscape supervisor position advertised in the Sunday, June 23rd edition of the Washington Post. (2) My work experience and education combined with your need for an experienced landscape supervisor have resulted in a relationship that would profit both parties. (3) In May, I graduated from Prince William Community College. (4) Graduating with an associate of arts degree in horticulture. (5) My concentration within the program was designing gardens and choosing the appropriate plants for particular soils and regions. (6) I have also had considerable supervising experience. (7) For several years, I have worked with a local company, Burke Nursery and Garden Center, and have been responsible for supervising the four members of the planting staff.

(B) (8) Our community knows that Capital City Gardening Services is a company that does excellent work and strives hard to meet the demands of its clients. (9) As my references will attest, I am a diligent worker and have the respect of both my coworkers and my customers. (10) I will be, as a landscape supervisor at your firm, able to put to use the skills and knowledge that I have obtained from my professional career and education. (11) I have included a copy of my resume, which details my principal interests education, and past work experience. (12) I have also included photographs of the landscape projects I have supervised as well as drawings of proposed projects.

(C) (13) I am excited about the opportunities and many challenges that this position would provide. (14) Thank you for your consideration, and I look forward to hearing from you.

Patrick Jones
1219 Cedar Lane
Manassas, VA 20109

Sentence 2: "My work experience and education combined with your need for an experienced landscape supervisor have resulted in a relationship that would profit both parties."

Which correction should be made to sentence 2?

IMAGES

  1. How To Write The GED Essay 2024 (Extended Response)

  2. 5 Paragraph GED Essay Sample

  3. Ged Essay Samples

  4. How to Pass the GED Writing Test: Video 3

  5. THE ULTIMATE GED ESSAY-WRITING GUIDE WITH SUPPORT AND WORD BANK by

  6. Ged essay topics samples in 2021

VIDEO

  1. Madhyamik ABTA Test Paper 2024 Solved. || Madhyamik ABTA Test Paper English Page-233 Solved

  2. Legal Current Affairs: Same-Sex Marriage Judgment Criticized

  3. HiSET / GED Math: Algebra

  4. Essay Tips for the GED and HiSET

  5. GED® I Language Arts Essay

  6. HiSET & GED Math Word Problems Part 1

COMMENTS

  1. GED Essay Writing Guide

    Follow this strategy when writing your GED Essay. Step 1: Read and Analyze the Stimulus Passages (5 minutes). Start by reading both of the passages. Make sure you understand the issue and the position that each passage is taking. Try to ignore your own personal feelings on the topic as you read.

  2. How to Write the GED Essay-Topics, Sample, and Tips

    Here are a few examples of GED Essay Topics. Click on the title to read a full stimulus and a prompt. Topic 1. An Analysis of Daylight-Saving Time. The article presents arguments from both supporters and critics of Daylight-Saving Time who disagree about the practice's impact on energy consumption and safety.

  3. GED Sample Essay

    The following is an example of a high-scoring essay response to our free practice GED Essay Prompt. Below our GED sample essay is a brief analysis justifying its perfect score. Police militarization is a hot-button topic these days. Some believe that criticizing the actions of the police hurts their ability to do their job, while others argue ...

  4. How to Write & Pass a GED Essay

    For GED essay practice, try writing your own essay based on the example above. Set a timer for 45 minutes and do your best to write an essay with your own analysis and ideas. You can practice more writing skills with this free test or enroll today in the GED Academy to get access to more GED essay prompts and personalized feedback from GED ...

  5. Extended Response

    Use these free videos, guidelines and examples to prepare and practice for the essay section of the Language Arts test. Videos: How to write a great GED extended response. Overview of the GED Extended Response Format (1:28)

  6. GED Essay Question

    This is simply an essay question. You will have 45 minutes to type your answer. This is a tricky part of the GED test, so it's very important to familiarize yourself with this task ahead of time. First read our essay guide and then review our sample question. Try typing out your own essay before you look at our sample response.

  7. GED Essay

    There is now an extended response (essay) question on the GED Reasoning Through Language Arts Test (RLA). You are given 45 minutes to type your GED Essay on the RLA test. Read through our tips and strategies, use our sample prompt to write out a practice essay, and then examine our essay examples to gauge your strengths and weaknesses.

  8. GED Essay: Everything You Need To Know In 2024

    The GED essay is intimidating to many people. Writing an entire essay from scratch in 45 minutes or less may seem difficult, but it does not have to be. This GED essay writing overview will help you prepare for and learn about the written section of the exam. In this post, Get-TestPrep will show everything you need to know about GED essays, including their structure, sample topics, tips, and ...

  9. PDF See a Perfect Scoring GED Test Extended Response

    Use this guide to prepare for the extended responses that you'll be writing on the Reasoning Through Language Arts test. Step 1: Read the instructions for the Extended Response task. Step 2: Read the two passages. Step 3: Review the sample extended response that received full score points (6 points out of 6 possible).

  10. Contemporary's GED Language Arts, Writing

    Sample GED Essays. Below are an essay topic and four sample essays with the holistic scores they received from the GED Testing Service. Readers may use these samples as they familiarize themselves with the Essay Scoring Guide. Notice that there is no required minimum number of words. The essays with higher scores have a clear organization ...

  11. GED Writing Practice Test

    This is our free GED Writing Practice Test 1. To prepare for your GED Writing Test, be sure to work through as many practice questions as possible. After you answer each question, the correct answer will be provided along with a detailed explanation. Click on the right arrow to move on to the next question.

  12. PDF Preparing for the GED Essay

    Preparing for the GED Essay. This section of the book presents a simple strategy for writing a passing GED essay. The GED Language Arts, Writing Test has two parts. Part I, Editing, is a multiple-choice section covering organization, sentence structure, usage, and mechanics. The first part of this book will help you pass Part I of the test.

  13. GED Essay Practice Question

    As a part of the GED Reasoning Through Language Arts test, there is a 45-minute extended response question. For this question, two articles are presented that discuss a topic and take opposing positions. You are required to write an essay arguing that one of the positions is better-supported than the other. Be sure to read our GED Essay Writing ...

  14. How To Write The GED Essay 2024 (Extended Response)

    The best strategy for writing the GED essay is: read the passages (5 minutes), analyze the data and create an outline (5 minutes), write your extended response (30 minutes), and reread and edit your writing (5 minutes). Together, those four steps fill the entire 45 minutes (5 + 5 + 30 + 5). If you want a clear example of what your GED essay should look like, later in this blog you'll find a sample.

  15. PDF The 2014 GED Reasoning Through Language Arts Test Extended Response

    2014 GED® test went through in 2012. The responses that you will see in this guide are actual writing samples written by adult test-takers in response to the stimulus material and prompt on Daylight Saving Time. These writing samples were generated under standardized computer-based testing administration conditions that

  16. Extended Response: Example 1

    Here, at HowtoPasstheGED.com, a five-paragraph essay will be used as a framework for writing an Extended Response. Five-Paragraph Essay - Outline. Paragraph 1: Introduction of your position with three supporting points. Paragraph 2: Discussion of first point. Paragraph 3: Discussion of second point.

  17. GED Essay Tips & Strategies

    Writing Guidelines. Rely upon these timing guidelines as you write your GED essay: PLAN — Spend 10 minutes reading the source material and organizing your essay response. PRODUCE — Spend 30 minutes writing your (ideally) 5-paragraph essay. PROOFREAD — Save 5 minutes for re-reading what you wrote and making necessary changes and improvements.

  18. PDF Video Training for Authorized Test Centers

    Help your students get ready for the extended responses on the GED® test - Reasoning Through Language Arts test by practicing with these sample prompts and source materials in the classroom. Fully answering an ER prompt often requires 4 to 7 paragraphs of 3 to 7 sentences each - that can quickly add up to 300 to 500 words of writing!

  19. GED Essay Sample Response

    Commentary. This sample essay would receive a perfect score on the GED. The writer clearly reviewed the prompt and outlined the argument before writing. Generally, the response exhibits the following organization: Paragraph 1 — Introduction. Paragraph 2 — Logical reasoning. Paragraph 3 — Statistics. Paragraph 4 — Ethics.

  20. I took my GED Ready test, anyone want an example of the RLA essay

    If you had left the essay blank, your score would have been much lower, though likely passing still, based on the evidence of your strong reading and grammar skills. I have been a GED teacher for 18 years & have taught essay writing for the GED, and I also tutor college students on their writing daily at the college I work for.

  21. HiSET vs. GED: What's the Difference? [Updated for 2024]

    The essay questions on the GED and HiSET differ slightly. Both tests will ask you to write an argumentative essay. Two reading passages from two different authors will address the same topic from two different perspectives. The GED essay question prompts testers to read, evaluate, and decide which author has the strongest opinion and why.

  22. Exhibit 8-PROCEDURES FOR THE PAPER BASED ESSAY

    A. Assign an alternate topic in accordance with the following procedures: If the essay topic is printed at random in the test booklet, issue another Language Arts, Writing Test booklet of the same test form bearing the next sequential serial number. For example, the GED Chief Examiner™ or GED Examiner™ would exchange Test Form IA, serial ...

  23. GED Essay Topics

    GED Essay - Reasoning Through Language Arts; GED Essay - Social Studies; GED Short Answer Questions - Science; The essay portion of the GED will require you to compose a short essay on a pre-selected topic. The essay will be either a narrative, descriptive, or persuasive essay. Narrative essays require you to tell a story from your own life.

  24. Can you spot the bot? Identifying AI-generated writing in college essays

    The assessment was administered in an online survey and included an AI Identification Test which presented pairs of essays: In each case, one was written by a college student during an in-class exam and the other was generated by ChatGPT. ... one combined grade for each essay (mean: 7.93, SD: 2.29, range: 2-10). Two of the authors also scored ...

  25. Language Arts, Writing
