
Australian universities to return to ‘pen, paper exams’ over use of ChatGPT, other artificial intelligence software by students
Australian universities have been forced to change the way they run exams and other assessments.
This development comes amid fears that students are using emerging artificial intelligence (AI) software to write essays.
Major institutions have added new rules stating that the use of AI constitutes cheating.
However, one AI expert recently noted that universities are in an “arms race” they can never win.
ChatGPT, which generates text on any subject in response to a prompt or query, was launched by OpenAI in November 2022. It has already been banned across all devices in New York’s public schools over concerns about its “negative impact on student learning” and potential for plagiarism.
In London, one academic tested it against a 2022 exam question and said that the AI’s answer was “coherent, comprehensive and sticks to the points, something students often fail to do.”
He added that he would have to “set a different kind of exam” or deprive students of internet access for future exams.
In Australia, academics have cited concerns over the ability of ChatGPT and similar technologies to evade anti-plagiarism software while producing quick and credible academic writing, The Guardian reports.
The Group of Eight, the country’s leading research-intensive universities, said on Tuesday, January 10, that they had revised how they would run assessments this year in response to the emerging technology.
The group’s deputy chief executive, Dr. Matthew Brown, noted that its institutions were “proactively tackling” AI through student education, staff training, redesigning assessments and targeted technological and other detection strategies.
Brown said: “Our universities have revised how they will run assessments in 2023, including supervised exams.
“Our forthcoming assessments will include greater use of pen and paper exams and tests, and tests only for units with low integrity risks.
“Assessment redesign is critical, and this work is ongoing for our universities as we seek to get ahead of AI developments.”
The University of Sydney’s latest academic integrity policy now specifically describes “generating content using artificial intelligence” as a form of cheating.
A spokesperson for the university noted that while few instances of cheating had been observed, the university was preparing for change by redesigning assessments and improving detection strategies.
“We also know AI can help students learn, and will form part of the tools we use at work in the future, so we need to teach our students how to use it legitimately,” he said.
Meanwhile, the Australian National University has changed assessment designs to rely on laboratory activities and fieldwork.
Toby Walsh, Scientia professor of artificial intelligence at the University of New South Wales, said that teachers were in “crisis meetings” about how exams would be marked in the new year and whether protocols were in place to deal with plagiarism.
“People are already using it to submit essays,” Walsh said.
“We should’ve been aware this was coming,” he added.
Walsh noted that with more advanced programs arriving, including GPT-4 from OpenAI, simply banning the platform was unrealistic.
Walsh said that AI technology has great potential for innovation and streamlining in the education sector.
“Teachers hate marking essays, and with suitable prompts, it can be used for marking essays,” he said.
“We don’t want to destroy literacy, but did calculators destroy numeracy?”
Flinders University was one of the first in Australia to implement a specific policy against computer-generated cheating.
Its deputy vice-chancellor, Prof Romy Lawson, said that maintaining academic integrity in an era of fast-changing technology was an “ongoing challenge”.
“We are concerned about the emergence of increasingly sophisticated text generators, most recently ChatGPT, which appear capable of producing very convincing content and increasing the difficulty of detection,” she said.
“Instead of banning students from using such programs, we aim to assist academic staff and students to use digital tools to support learning.”
A spokesperson for UNSW Sydney also said that the university was aware of the use of artificial intelligence programs to assist and write papers for students who then submitted the work as their own.
“Using AI in this way undermines academic integrity, and it is a significant issue facing all education and training institutions, nationally and internationally,” he said.