On February 7, The Weekend Australian Magazine published a feature by Ros Thomas on university students' reliance on artificial intelligence (AI) to complete assessment tasks, especially essays and take-home exams.
This reliance is concerning because it could produce graduates with next to no knowledge of their field, having depended entirely on AI to complete their assessment tasks. Such students are not showing their lecturer or marker what they really know about a topic, because they let the computer do "the talking" for them.
They also lose the opportunity to develop critical thinking and information skills, such as how to locate and select information and how to communicate their knowledge, thoughts, and ideas on a topic. Finding information that is relevant, reliable, and trustworthy is part of that process too, particularly in a world where information can easily deceive and mislead us. We need to be able to filter truth from untruth.
While some students may use AI to mould what looks like the perfect assignment, AI is far from perfect itself. Thomas recounts one student's discovery of incorrect information in a group assignment completed with AI. With just three hours left before submission, the student and another group member had to rewrite the assignment to salvage a good mark.
From my own experience, I have had to hold AI to account when it omits information, misquotes a source, or gives inaccurate answers. More frustrating is asking for a direct quote from a document, only for the AI to reword it or ignore the request altogether. Even requests for academic papers on a topic (including Teacher Librarianship) have yielded inaccurate results, or results that do not answer the question. AI models have improved in reliability, but they can still be wrong, and there are times my prompts are simply ignored.
Even in our schools, we face the same situation: students relying on AI to complete homework or research tasks.
Whether at the school or tertiary level, we need to set standards for how we use AI. I am not against AI; in my role, it can simplify tasks. It can help me organise library research lessons or recommend books for a reluctant reader based on a profile I give it (I can even upload a reading list or a catalogue listing of books). I then have to check that what I receive is accurate.
If I need to write a report, it can provide a scaffold, template, or draft that I can then tweak to suit what I want to write or present.
It leaves me thinking: what can I teach my students to do? I think I'd emphasise the following to make AI useful while still fostering critical thinking and the information skills process.
- Checking the usefulness and reliability of information provided by a chatbot: is it relevant? Is it correct?
- Organising summaries of notes
- Proofreading written content. Tools such as Grammarly have AI embedded in them; I have found that regular use has improved my writing skills.
- Summarising PDF files
- Fact-checking: making sure AI-generated summaries are correct. When searching for content, conducting a usefulness and reliability check: does it answer the research question?
- Scaffolding: helping the student to organise their information
- Avoiding copying and pasting AI-generated content into an assessment task
- Expecting students to edit and re-edit written work to ensure the content complies with assessment requirements and marking criteria, and that it is expressed in their own words. I'd encourage them to put themselves in the marker's place and consider what the marker might think of their work. If they want to use a word whose meaning or usage they don't know, I'd tell them not to use it; teachers know how their students typically write.