[Journalism Internship] ChatGPT’s Positive and Negative Influences on Education
Published: 14 Aug. 2023, 13:41
Updated: 14 Aug. 2023, 13:42
ChatGPT, an artificial intelligence (AI) chatbot developed by OpenAI, may become a great tool for students to learn more efficiently, but only when the problems regarding academic integrity and inaccuracy are addressed.
While ChatGPT arguably helps students learn better and research more efficiently, concerns about plagiarism and inaccuracy are rising.
This new technology helps students understand a concept by giving them new insights into a certain topic. For instance, Code Interpreter, a new feature of ChatGPT, helps users identify key ideas in a given set of data by catching trends or anomalies.
Likewise, ChatGPT can provide students with a starting point for researching projects, writing research papers, and preparing for exams.
“[ChatGPT] helped me with brainstorming ideas…it can be a useful tool in certain tasks,” said Charlie Kook, a 14-year-old student at Korea International School, who used ChatGPT for his recent engineering project.
ChatGPT draws on vast amounts of online data to compose what it determines to be the most appropriate response to a given question, which may offer students helpful guidance.
Some schools are already exploring ways to integrate ChatGPT into their programs.
Seattle is “discussing possibly expanding the use of ChatGPT into classrooms … to let students use the application as a ‘personal tutor,’" stated Tim Robinson, a spokesman for Seattle Public Schools, in an interview with CBC News.
As such, many educators suggest that the tool has the potential to assist students in their learning process.
“[ChatGPT] can define a word, identify its part of speech, provide sample sentences, and offer additional meanings,” wrote Lucas Kohnke, a senior lecturer at the Education University of Hong Kong, and his team in their recent study.
However, experts warn that ChatGPT poses risks regarding academic integrity such as plagiarism.
“GPT-3 [an AI model ChatGPT uses] in higher education has the potential to offer a range of benefits,” said Debby Cotton, a professor of Higher Education at Plymouth Marjon University, in a recent interview with The Guardian. “However, these tools also raise a number of challenges and concerns, particularly in relation to academic honesty and plagiarism.”
Students can easily take advantage of AI to complete quizzes, formative tests, and even final exams, since answers can be obtained simply by typing the questions into ChatGPT.
About 48 percent of college students in the United States said they have used ChatGPT while taking an at-home test or quiz, while roughly 50 percent said they have had it write an essay for them, according to a survey by Study.com. The online education platform interviewed 1,000 students, and multiple answers were allowed.
Students’ use of AI models like ChatGPT clouds the primary purpose of assessments, hindering the ability of schools and universities alike to properly measure a student’s progress.
Although teachers are constantly looking for countermeasures to identify AI-generated work, distinguishing illegitimate work remains a struggle.
“There are detectors that are being created, but there are false positives,” said Grace Lee, the head of the English Department at Saint Paul Preparatory Seoul. “[ChatGPT plagiarism] is causing English teachers a huge headache.”
While an AI-generated academic essay may often affect only the individual student, entire fields of study can be misled when AI is used to write a scholarly article.
In a study conducted by Catherine Gao and her team from Northwestern University and the University of Chicago, ChatGPT was asked to generate fake scientific abstracts based on existing papers. ChatGPT produced convincing abstracts with fabricated data; volunteers were able to identify only 68 percent of the generated abstracts among legitimate ones.
Students looking for learning opportunities using ChatGPT may also be misled by the tool because it is difficult for those with no background knowledge of a certain subject to assess the validity of the responses.
“Because these systems [such as ChatGPT] respond so confidently, … it’s very difficult to tell the difference between facts and falsehoods,” said Kate Crawford, a professor at the University of Southern California, during an interview with the Washington Post.
Experts say regulating the usage of AI is urgently needed to further develop the technology in a positive direction.
“We need to put in place appropriate guardrails for the university and transparency on how the university is using any of these tools in its services,” stated Brandie Nonnecke, the founding director of the CITRIS Policy Lab, a regulation-focused research lab at the University of California, Berkeley, to Berkeley News.
Responding to the call for transparency, OpenAI is developing a way to watermark its output.
Scott Aaronson, an OpenAI guest researcher, described in a lecture at the University of Texas a new feature for ChatGPT that addresses cheating and plagiarism by “statistically watermarking outputs” through tweaks to specific word choices. The watermarks, invisible to the naked eye, would become obvious to anyone checking for the statistical signs of generated text.
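To illustrate the idea behind statistical watermarking, the toy sketch below uses a simplified "green-list" scheme: each word's predecessor seeds a pseudorandom list of favored words, and a detector counts how often the favored words appear. This is an illustrative assumption for explanation only, not OpenAI's actual method, and all function names here are hypothetical.

```python
import hashlib
import random

def green_list(prev_word, vocab, fraction=0.5):
    # Seed a PRNG with the previous word so a detector can
    # deterministically reproduce the same "favored" word list.
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    k = int(len(vocab) * fraction)
    return set(rng.sample(sorted(vocab), k))

def detect(words, vocab, fraction=0.5):
    # Count how many words fall in the green list seeded by their predecessor.
    hits = sum(1 for prev, w in zip(words, words[1:])
               if w in green_list(prev, vocab, fraction))
    n = len(words) - 1
    # Unwatermarked text should hit the green list about `fraction` of the
    # time; a watermarking generator would hit it far more often, so a
    # large excess of hits statistically signals AI-generated text.
    expected = fraction * n
    return hits, expected
```

A watermarking generator would nudge its word choices toward each step's green list; because the lists are reproducible from the text itself, the detector needs no record of the original generation.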
Currently, such regulations have not yet been implemented, and it remains to be seen whether they will be sufficient.
BY ALICE NAH, MICHAEL NOH, JUNHWAN OH, HANNAH CHOO, AIDEN CHOI, BENNY YU [[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]]
with the Korea JoongAng Daily