ChatGPT is a pleaser through and through
Everyone will have to deal with ChatGPT and similar large language models sooner rather than later, scientists say. Professor of Communication and Cognition Emiel Krahmer and researcher Chris Emmery have been involved in research on AI and computers for quite some time and are closely following the launch of the chatbot. It will completely change the way we write, but little thought has been given to its impact, they argue. The bot should, however, never be allowed to answer medical questions.
This is how ChatGPT, developed by OpenAI, was introduced: "A conversational chatbot, currently the most powerful artificial intelligence in the world, capable of discussing any topic. It has achieved fair grades in a number of university and professional tests. GPT stands for Generative Pre-trained Transformer, a type of neural network architecture."
Krahmer: "ChatGPT is a big thing in our professional field. Computer models that use neural networks have been around for a long time; they were originally developed to better understand how people learn. In the past five years, large language models have been built on top of neural networks, but ChatGPT is different because it is a computer program that produces correct language. It makes no linguistic mistakes and it is very consistent, and the interface is friendly, with good interaction. It is a wonderful model. This chatbot has been trained to be friendly and helpful: it is a real pleaser. That does not mean, however, that it speaks the truth. That it has become such a hype is due to the quality, creativity, and speed of its answers, and because it is now within everyone's reach."
The opportunities are huge; the bot can do much more than hold amusing conversations
Professor of Communication and Cognition Emiel Krahmer
Emmery adds: "Within a week of its launch, it was already being used by more than a million people, enabling the system to keep learning immediately. And new companies were set up to develop numerous new applications."
OpenAI is a relatively small company in Silicon Valley, co-founded by Elon Musk. One of its objectives was to use AI in an ethically responsible way and to communicate about it transparently, but, in Emmery's view, that is no longer the case. Google has been overtaken and is now engaged in a development race involving AI and search engines.
Krahmer: "The opportunities are huge; the bot can do much more than hold amusing conversations. It can make summaries, analyze texts, take notes, and even write a computer program. Or suggest research questions, for instance. It generates language in any style and on any subject. You can have a text edited and ask it to explain why it made certain changes. Or ask it to create a poem. There is no form of writing that ChatGPT cannot get to grips with.
"Students are already using it. The bot is already revolutionizing education. Universities are now looking for different ways to test students than, say, writing a thousand-word essay, which is a piece of cake for the bot. Assessment will have to change, but we have no short-term alternative.
"Its use, by the way, does not mean that creative and critical thinking have become obsolete: students will still need to learn those skills. But writing is going to change. Just as the spell checker was built into Microsoft Office, you will soon have ChatGPT on your laptop."
The system uses data it has come across but it doesn’t know if this information is accurate. It does not have a clue. This poses a risk for students when they start gathering information.
Researcher Chris Emmery
Emmery: "The system uses data it has come across but it doesn’t know if this information is accurate. It does not have a clue. This poses a risk for students when they start gathering information. The system can only learn because we ourselves have posted those texts on the internet. And the bot can only work now because there is much more information on the internet than a few decades ago."
"If you had searched the internet a few years ago for information on the color of peanut butter, a bot wouldn’t have been able to answer that question because that info just wasn’t available then. Now there is a Wikipedia page about peanut butter that also describes the color," Krahmer explains.
Emmery: "Actually, the system is a product made by us all. But the development of bots and search engines is in the hands of a few big tech companies such as Microsoft, Google, and OpenAI. It comes with a big price tag, not only in terms of influence and money, but also in huge emissions from data storage and computing power. There is a big business model behind it. But it is not at all suitable for some applications, for instance medical and legal questions. The people who created the bot have not addressed these ethical issues."
"For example, research compared answers to medical questions given by human doctors with those given by ChatGPT. ChatGPT's answers were found to be much friendlier and more relatable. The bot has been trained to answer with empathy, which is not what all doctors are famous for. But there is no guarantee that what ChatGPT says is actually true, and there is no way of knowing why it gives a certain answer. The conclusion is that, for the time being, ChatGPT should not be used for medical purposes," Krahmer argues.
This is about political and ethical issues. How do we see the future of humanity?
Researcher Chris Emmery
So there are risks involved in using it, both scientists agree. The filters don't always work, and the chatbot has no concept of truth. People ascribe human characteristics to the bot, as they did with robots when those were introduced. Krahmer expects the hype to die down after some time. "I am reminded of the introduction of the internet. Its impact was overestimated, its drawbacks underestimated. But if we use it well, we stand to benefit a lot. People need to be aware of both the positive and the negative aspects. Enforcing transparency in the use and development of these models would have great consequences, and I am in favor of that. But it doesn't solve the whole problem, because you can't enforce such legislation worldwide."
"We need to carefully consider the effects," Emmery argues. "This is about political and ethical issues. How do we see the future of humanity?"
And what does the bot itself think of all the criticism?
It acknowledges four major problems: misinformation, privacy and security concerns, addiction and overuse, and the ethical implications for society. It already has the solution, though: "In general, the potential dangers of ChatGPT and other AI language models can be mitigated through responsible use and ongoing research and development to improve their accuracy, transparency, and ethical implications."
Chris Emmery is interested in the effect of intelligent systems on our lives and in developing open-source tools to better understand, and defend against, such techniques invading people’s privacy.
Emiel Krahmer investigates how humans exchange information during communication, both verbally and non-verbally, to improve the way computers present information and communicate with humans.
Date of publication: 1 June 2023