ChatGPT for learning: Incredible or Uncredible?

Published by Elaine Gallagher


Talk of AI and its potential benefits and disadvantages has reached unprecedented levels recently, with access to AI-driven platforms such as Midjourney and ChatGPT now widespread. Like many others, I was sceptical, yet curious, as to whether AI could replace me as a behavioural scientist in the field of learning and behaviour change.

Creating efficiency or cutting corners?

I have been dabbling with ChatGPT to see how it might help optimise my efficiency, and I can see the benefits of using it as an extra pair of hands. Typically, I use it to clarify my understanding of a complex topic, or I ask it to summarise a new theory or behavioural model and tell me its main limitations. This is hugely time-saving and gives me a good starting point from which to further my own research.

However, there is a fine line between using resources to improve efficiency and cutting corners in a way that causes reputational damage. Recently, I was looking for a non-academic article, on behalf of a client, that explained a certain concept. Whilst doing so, I came across what appeared to be a well-written piece that covered the topic appropriately. Importantly, it was also published by a reputable source and written by an academic with publications in academic journals. Having read the article, I was confident in the content, the source, and the author. I then moved on to the list of references provided with the article, both to check their credentials and to read them for my own further learning. These references looked and sounded legitimate, yet when I entered them into Google Scholar one by one, it became evident that none of them existed.
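If you find yourself repeating this kind of check, it can be partly automated. The snippet below is a minimal sketch in Python, assuming the requests library and the public Crossref REST API (api.crossref.org); the citation strings, the number of results requested, and the crude title-matching rule are all illustrative assumptions rather than a definitive verification tool, and anything Crossref does not index (some book chapters and non-academic pieces) will still need a manual Google Scholar search.

```python
import requests

# Hypothetical citation strings; replace with the references you want to check.
references = [
    "Example title of a journal article about chatbots in education",
    "Another example title of a book chapter on tutoring",
]

for ref in references:
    # query.bibliographic performs fuzzy matching over titles, authors and container names.
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": ref, "rows": 3},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]

    # Crude check: does any returned title closely overlap the queried string?
    # Real use would want proper fuzzy matching plus author/year comparison.
    titles = [title for item in items for title in item.get("title", [])]
    found = any(ref.lower() in t.lower() or t.lower() in ref.lower() for t in titles)
    status = "found" if found else "NOT FOUND"
    print(f"{status}: {ref}")
```

A script like this only tells you whether a reference resolves to a real, indexed work; judging whether the work actually supports the claim it is attached to still requires reading it.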

It was clear that the article had, in fact, been written by ChatGPT and that the erroneous references had been generated along with it (I reached out to the author privately, who confirmed this and updated the article to include legitimate references). At that point I could no longer trust the content of the article and had to return to the ‘old-fashioned way’, sifting through reputable academic journals for the information I needed.

ChatGPT, can you tell me the strengths and limitations of using ChatGPT as a learning tool?

I looked into this and found that ChatGPT does indeed produce false references to support some of its output. This is not always the case, and it is not clear why, or under what circumstances, it happens, but it does appear to happen frequently. That is not to say that content produced by ChatGPT alongside false references is necessarily incorrect or inaccurate; rather, it simply can’t be attributed with confidence to a reputable source.

When writing this blog, I decided to test this out. I asked ChatGPT to tell me ‘the strengths and limitations of using ChatGPT as a learning tool’ and to support the answer with academic references. I was furnished with a concise list of plausible and relevant strengths and limitations. Interestingly, one of the limitations it presented was that ChatGPT can produce errors and provide inaccurate information. As for the supporting academic references, five were provided, four of which were non-existent and generated by the program. At first, and even second, glance the references look genuine, but unfortunately they are not. Here are two of the references that were provided:

Tegos, S., Demetriadis, S., & Papadopoulos, P. M. (2021). Chatbots and intelligent conversational agents in education: A systematic literature review. International Journal of Artificial Intelligence in Education, 31(1), 70-124.

Shute, V. J., & Wang, L. (2020). One-on-one tutoring by humans and computers. International Handbook of the Learning Sciences (pp239-249). Routledge.

When I questioned ChatGPT on the legitimacy of the references it had provided, it apologised and stated that its response was based on more general information rather than on the academic references I had asked for. It then went on to recommend how to find credible sources and critically appraise their validity. While it was good to see this guidance, it only appeared once I had questioned the legitimacy of the references, not in response to my initial query.

Always check your sources and ask the right questions

My experience has really reinforced my long-held ethos that we should always think critically, check our sources, ask the right questions, and not take things at face value, particularly when our work could have an impact on others. I wonder how many articles out there are written by AI but claim to be compiled by a real person, and moreover, how many have been published without the content and sources being checked.

To summarise, from a learning perspective, ChatGPT is an amazing tool that can really help us cut through to the important bits of what we want to know. It provides a solid foundation upon which to build and can help us learn complex topics faster. However, it should not be our only, or final, source; instead it should be a starting block that helps us refine our approach and offers direction on what we should be looking at.

Thankfully, I don’t think it will be replacing me any time soon.
