1. ChatGPT and artificial intelligence models
Since ChatGPT was made freely accessible in November 2022, there has been widespread interest in, and concern about, artificial intelligence models that produce text in response to prompts from users.
ChatGPT, and other tools such as Anthropic’s Claude and Google DeepMind’s forthcoming Sparrow, are large language models (LLMs): algorithms trained on billions of data points, or pieces of writing, to summarise, predict and generate text. They are designed to be naturalistic and convincingly close to a human conversation, essay, article or perhaps even a Library briefing.
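Under the hood, an LLM generates text by repeatedly predicting a likely next token given everything written so far. The sketch below illustrates this with the small open-source GPT-2 model via the Hugging Face transformers library; the model and prompt are illustrative assumptions, and ChatGPT itself is only accessible through OpenAI’s own interface.

# A minimal sketch of next-token text generation, assuming the Hugging Face
# transformers library and the small open-source GPT-2 model (an assumption;
# ChatGPT is far larger and is not available for local use like this).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt one predicted token at a time.
result = generator("Large language models are", max_new_tokens=25)
print(result[0]["generated_text"])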
Dr Kate Darling, who researches human-robot interaction at the Massachusetts Institute of Technology (MIT), explains how these models differ from existing technology such as virtual assistants (for example, Apple’s Siri) and the automated customer service chat pop-ups found online.
Darling notes that customer service chat bots analyse the words in your message or ‘prompt’ and choose an answer “from a pre-defined set of options”. ChatGPT is based on a “generative pre-trained transformer model, which can generate its own conversational output”. Rather than analysing the presence of individual words in the string, it “attempts to understand context by analysing entire sentences and the relationships between them”.
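The distinction Darling draws can be sketched in code: a rule-based bot can only match keywords against a fixed table of canned replies, whereas a generative model (as in the earlier sketch) produces novel text conditioned on the whole prompt. The keywords and replies below are hypothetical.

# A hypothetical rule-based chatbot of the kind Darling describes: it scans
# the prompt for known keywords and chooses an answer from a pre-defined set
# of options. The table entries here are illustrative assumptions.
CANNED_REPLIES = {
    "refund": "To request a refund, please visit your order history page.",
    "delivery": "Standard delivery takes 3 to 5 working days.",
    "password": "You can reset your password from the login screen.",
}

def rule_based_reply(prompt: str) -> str:
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in prompt.lower():
            return reply
    return "Sorry, I didn't understand. Please contact support."

print(rule_based_reply("How long will delivery take?"))
# Unlike a generative model, anything outside the table hits the fallback:
print(rule_based_reply("Can you write me a poem?"))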
There is also some human intervention. ChatGPT has certain pre-defined responses, and users can rate answers as good or bad and suggest better ones. Concerns have been raised about the accuracy of the information LLMs provide, as well as whether they have adopted biases present in their training datasets. On logging in, ChatGPT displays warnings that it “may occasionally generate incorrect information” and “may occasionally produce harmful instructions or biased content”.
Writing for Nature, Chris Stokel-Walker and Richard Van Noorden spoke to researchers about the uses of LLMs for scientists. They note that some researchers regularly use them to edit text, check for errors in passages of code and write up grant proposals. However, researchers also point to their inaccuracies on technical topics and their inability to cite sources correctly. Meta’s LLM Galactica was trained specifically on a dataset of scientific abstracts to improve its ability to produce academic content. However, the demo is no longer publicly available after users found it could produce inaccurate and racist content. OpenAI has added filters to ChatGPT with the aim of preventing it from producing offensive content; this involved human moderators reading and flagging ‘toxic’ content.
Educators have raised concerns about students handing in AI-generated content as their own work. In his paper ‘Would ChatGPT get a Wharton MBA?’, Wharton’s Mack Institute Co-Director Christian Terwiesch claims that it would have scored a B or B- on an MBA course final exam, performing well on case studies but making errors in basic maths and failing to understand advanced process-based questions. Terwiesch and other teachers emphasise the importance of teaching students to work with AI.
The Independent has written about teachers using AI to help them plan lessons. One teacher explained how it can enhance lessons: “take any lesson you’re doing and say, ‘give me a real-world example’, and you’ll get examples from today—not 20 years ago when the textbooks we’re using were written”.
Responding to the challenge of students attempting to pass off AI-written work as their own, tools have also emerged which aim to assess whether text was written by a student or an LLM, including DetectGPT and GPTZero. The latter looks for two characteristics of human writing that ChatGPT has yet to replicate: ‘burstiness’ (a mix of long, short and run-on sentences, where machine text tends to be more uniform) and ‘perplexity’ (how unpredictable the text is to a language model, with human writing tending to score higher).
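As a rough illustration of those two signals (GPTZero’s actual implementation is proprietary), the sketch below scores a passage using the open-source GPT-2 model; the choice of model and the naive full-stop sentence splitting are assumptions.

# A minimal sketch of the two detector signals, assuming the Hugging Face
# transformers library and GPT-2 as a stand-in scoring model.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprising' the text is to the model; lower suggests machine text."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return its average
        # cross-entropy loss; perplexity is the exponential of that loss.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Spread of sentence lengths (naive full-stop split); humans vary more."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    mean = sum(lengths) / len(lengths)
    return (sum((n - mean) ** 2 for n in lengths) / len(lengths)) ** 0.5

sample = "The cat sat. It had waited all morning for the sun to reach the sill."
print(perplexity(sample), burstiness(sample))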
2. Could ChatGPT support parliamentary work?
I logged in to ChatGPT and asked “what do members of the House of Lords think about AI?”. It replied:
As members of the House of Lords represent diverse views and opinions, their attitudes towards AI may vary. However, in general, members of the House of Lords have expressed both optimism and concern about the impact of AI on society.
It then generated some ‘quotes’ from members, complete with Hansard dates and column numbers, which were entirely fabricated. When pressed, it said it could not access Hansard.
ChatGPT’s training dataset only runs up to 2021 and has not been updated since. Even if a future iteration included Hansard in its dataset, that dataset would need constant updating. For the question put to it above, staff in the Library would consider recent proceedings: for example, the oral question on the use of AI software in schools from 19 January 2023, or the explicit mention of ChatGPT during the second reading of the Online Safety Bill on 1 February 2023.
I also asked ChatGPT whether it could conduct some of the research we would normally do for a Digest piece, providing perspectives from scientists and researchers. It generated a convincing list of real experts with associated citations, but on inspection the articles cited did not exist and the quotations were invented.
I asked ChatGPT what it understood about UK law as it relates to AI. It mentioned the General Data Protection Regulation (GDPR), the Equality Act 2010 and the Computer Misuse Act 1990. I prompted it to consider another area: intellectual property (IP) and patent law. When asked about its own position on IP, it said:
I am programmed to follow ethical and legal guidelines, which means that I cannot engage in any activities that would infringe on copyright or intellectual property rights. This includes writing in the style or using the characters of a copyrighted franchise without obtaining the necessary permissions or licenses.
I then prompted: “write a sentence in the style of a Lord Dobbs ‘House of Cards’ book”, and it replied in a way that contradicted its stated policy:
He had clawed his way up from the backbenches with ruthless determination, but now that he was just inches from the prime minister’s chair, he would need to be even more cunning and cut-throat than before.
As Wired co-founder Kevin Kelly explains, the key to getting the desired result from AI is finding the correct prompt. Indeed, for art-generating AIs, people are now selling prompts: combinations of words tested to produce desired outputs. ChatGPT functions essentially as a co-author, reliant on a person inputting, checking and steering. As Guillaume Thierry, professor of cognitive neuroscience at Bangor University, explains in The Conversation, “the source is language; the product is language. In the middle stands a mysterious black box a thousand times smaller than the human brain in capacity and nothing like it in the way it works”.
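As a minimal sketch of that steering role, assuming the OpenAI Python client and the gpt-3.5-turbo model (both assumptions; any chat-style LLM API would serve), the same question can be pushed in very different directions by the framing prompt alone:

# A minimal sketch of prompt steering, assuming the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY environment variable. The model
# name and prompts are illustrative assumptions, not a recommended setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(framing: str, question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": framing},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Summarise the debate about AI in education."
# The same question, steered two ways by the human 'co-author':
print(ask("Answer in one cautious, heavily hedged sentence.", question))
print(ask("Answer as an enthusiastic technology optimist.", question))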
During the 2022 Christmas adjournment proceedings in the Commons, Dr Luke Evans (Conservative MP for Bosworth) delivered a speech partly authored by ChatGPT, based on the prompt “write a Churchillian speech on the state of the United Kingdom over the past 12 months”. The ChatGPT-authored part of the speech concluded:
Of course, no country is perfect and there are always areas where improvements can be made. However, I believe that the UK has shown itself to be capable and resilient, and has demonstrated its commitment to its values and principles. As Winston Churchill once said, ‘success is not final, failure is not fatal: it is the courage to continue that counts’. I believe that the UK has shown that courage over the past 12 months, and I have every confidence that it will continue to do so in the months and years ahead.
Dr Evans suggested that “there should be a regulator for algorithms and artificial intelligence, to run concurrently and in synergy with the technology we are developing”.
Cover image by Markus Spiske on Unsplash.