Over the past few months, artificial intelligence (AI) tools have taken the world by storm. Finding information has become easier than ever, and the resulting opportunities continue to grow.
Today, AI can write emails, debug computer code, and even put together academic papers. It’s also readily accessible, with programs like ChatGPT and Bing AI remaining free to use.
For universities and other educational institutions, this raises quite a few dilemmas.
On one hand, AI could soon prove essential for students to prepare themselves for a competitive business environment. In order to use it effectively, students need to learn how to craft the right prompts and vet the resulting responses. This takes practice, and university could be the perfect place to hone such skills.
The emergence of AI tools like ChatGPT comes naturally with technological advancement, and our society today is one that is highly mobile and connected, with information only a click away.
– Spokesperson from the Singapore Institute of Management (SIM)
On the other hand, AI also provides the opportunity to cut corners and skirt academic policies. For example, written reports are often meant to test students’ ability to understand information and think critically. Delegating such tasks to AI would defeat the purpose of these assignments.
If the use of AI were allowed in classes, universities would need to define a disclosure process. It’d be important for students to identify how they made use of such tools in their work. In turn, professors might also have to establish separate grading standards for work completed with and without the use of AI.
We spoke to Singapore’s professors and university representatives to find out more about this technology’s future in the world of education.
AI is inevitable
With AI offering uses in such a wide variety of fields, it’d appear that there is no choice but to embrace it.
Singapore Management University (SMU)’s professor Siew Ning Kan says, “Humankind has to embrace all new technologies that hold the promise of becoming ubiquitous. Go forward or be left behind.”
He goes on to cite examples like calculators replacing slide rules and logarithmic tables, and the Google search engine surpassing Yahoo due to its advanced features.
Professor Kan teaches the course ‘Doing Business with Artificial Intelligence’, among others, and stresses that his views are not representative of SMU’s policies.
LASALLE Dean, Dr. Wolfgang Muench, offers a similar view on the emergence of AI.
As natural language processing tools – driven by AI technology such as ChatGPT – continue to rapidly develop, they will have a significant societal impact in the future. They will also increasingly play a significant role in learning and teaching environments.
– Dr Wolfgang Muench, Dean, Learning, Teaching & Research, LASALLE College of the Arts
Today, the International Baccalaureate (IB) educational system has already set a precedent in favour of AI software.
Dr. Matthew Glanville, IB’s Head of Assessment Principles and Practice, revealed in a commentary that students would be allowed to use tools like ChatGPT and claimed that banning such tools is the “wrong way to deal with innovation”.
Professor Kan believes that this decision will pose a challenge for universities as they come up with their own policies for AI-produced work as well.
Academic policy and AI
Plagiarism has been one of the immediate concerns surrounding the use of AI chatbots. They’re trained on information that already exists online and are unable to provide accurate citations for their responses.
Most universities have plagiarism policies, as does Nanyang Technological University (NTU). The broad principle underlying these policies is that students’ work must be their own. When students are borrowing words or ideas from sources, they must cite those sources using conventions that are appropriate for their discipline.
– Dr. Mark Cenite, Associate Dean (Undergraduate) at NTU’s College of Humanities, Arts and Social Sciences
Dr. Cenite believes that the same idea can be applied when it comes to the use of AI tools. It’s inappropriate to copy a chatbot’s response and present it as one’s own, in the same way it’d be inappropriate to copy-and-paste the contents of someone else’s academic paper.
That being said, AI tools can be incredibly useful in helping conceptualise an essay or report.
“ChatGPT can be a great tool for brainstorming on a research topic,” he adds. “A student can get into a dialogue with the bot that helps refine their ideas. Bots can summarise existing research quite well too.”
A spokesperson from SIM offers a similar view: “While [tools like ChatGPT] can be useful for understanding theories, they should not be used to gain an advantage for written academic work or submissions.”
Diving deep
As the use of AI becomes more prevalent, there are finer concerns which will need to be addressed as well. Plagiarism might be easy to detect, but other issues might not have as clear-cut an answer.
Imagine that a political theory course allows the option of using ChatGPT to brainstorm a paper, but not to help write it.
One student uses ChatGPT and comes up with quite a novel thesis, which they further research and write up in their own words. Another student doesn’t use ChatGPT and comes up with quite a conventional thesis, which they research further and write up on their own. How do you compare these papers?
– Dr. Mark Cenite, Associate Dean (Undergraduate) at NTU’s College of Humanities, Arts and Social Sciences
With this being a subjective issue, Dr. Cenite believes that instructors will have to think through and settle on their own course policies.
This could bring about changes to assessment formats as well. The aid of AI technology might force instructors to move away from written pieces as a test of critical thinking, since it’d be difficult to distinguish a student’s own intellectual contributions from those of the AI tool.
Dr. Muench raises another concern surrounding the use of AI-generated information. Beyond checking it for accuracy, students would also need to be mindful of the biases it may carry.
“AI systems are not neutral, but rather reflect the values of those who define them including cultural, ethnic, social, and political biases. Society in general, and education in particular, should promote an acute awareness of this fact and avoid portraying ‘machine intelligence’ as an objective concept,” he says.
Featured Image Credit: NTU / LASALLE / SIM