WOTY (1): AI

Picture by Collins English

Every year around December, websites, publishers and writers come up with their version of the Word of the Year (WOTY). This year is no exception. The Collins English Dictionary team, choosing from a shortlist of notable words, selected AI as its WOTY. AI, according to the Collins team, stands for artificial intelligence, or “the modelling of human mental functions by computer programs”. This notion explicitly captures the profound nature of the challenge facing us, namely, a robot takeover. Similarly, a columnist from the Economist picked ChatGPT as WOTY, claiming that the breakthrough in large language models, ChatGPT in particular, has been so stunning that it has ignited a debate about whether AI is actually thinking (or whether university students will ever finish their assignments without ChatGPT again). For business users, a chatbot built on ChatGPT can produce human-like, immediate responses when interacting with customers. For content creators, ChatGPT can compose music and write fiction such as short stories. Programmers can even use it to write and debug code with the right prompts. But all that is just the tip of the iceberg: ChatGPT has huge potential, such as automating repetitive tasks in industry and beyond. In short, it is a tool so versatile and informative that a bright future awaits.
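To make the customer-service example a little more concrete, here is a minimal sketch of what such a chatbot might look like in Python. It is an illustration only, not how any particular business does it: it assumes the OpenAI Python SDK ("pip install openai") and an API key in the OPENAI_API_KEY environment variable, and the model name and the answer_customer helper are my own illustrative choices.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_customer(question: str) -> str:
    """Return one model-generated reply to a customer question."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # model name is an assumption; any chat model would do
        messages=[
            {"role": "system", "content": "You are a polite customer-support assistant."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_customer("Where is my order?"))

A few lines of glue code like this are all it takes to wrap a ChatGPT-style model into an automated reply service, which is partly why such chatbots have spread so quickly.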

Does ChatGPT really help?

Artificial intelligence, however, is not without concerns. The Cambridge Dictionary team chose hallucinate as its WOTY. In its new sense, when an artificial intelligence hallucinates, it produces false information. This raises the question of how we can interact safely and effectively with generative AI, a tool so powerful that its misuse could wreak havoc on human society. The rise of AI-generated hallucinations, which present false or misleading information as fact, has also coincided with a substantial increase in lookups of the word authentic. And it is this particular word that Merriam-Webster, the American dictionary, chose as its WOTY. The word itself has several meanings, including “not false or imitation,” a synonym of real and actual, and also “true to one’s own personality, spirit, or character.” In the context of generative AI, misleading information and deepfake videos, the surge in lookups may indicate that the line between “real” and “fake” has become increasingly blurred.


I think, by now, you should have an overall picture of the current trend in society, at least in the English-speaking world. Generative AI, ChatGPT and the like, has gained prominence in just a year’s time, and we stand on the cusp of another Industrial Revolution; the impact generative AI has had, and will have, on people and society is immeasurable and unprecedented. However, we should also keep a critical eye. The first thing to consider is the disruption caused by AI: the loss of jobs in many industries owing to AI’s versatility. A counterargument is that AI will bring new job opportunities too. Yet even if this offset holds true, many would still find their old jobs more valuable and meaningful than whatever new jobs they end up with (if there were any).

“Turkeys voting for Christmas”

What's “turkeys voting for Christmas”?

“Turkeys voting for Christmas” is an English idiom, a metaphor for a situation in which people choose something that is clearly against their own self-interest. In the United Kingdom, turkeys are commonly eaten as part of the Christmas dinner.

Another point worth mentioning is that AI is not creative by nature, even though it can pretend to be. We’ve talked about ChatGPT creating music, stories and all that, but what it’s really doing is scouring its data sources and the work that’s been done by people. In other words, it’s a mass plagiarism device. Interestingly, that is what a lot of research is about: citing others’ work and copying their patterns. The problem with AI is that once the original data is biased or inaccurate, whatever comes out of the AI will reinforce those biases and inaccuracies. When a large number of users believe what AI produces, an echo chamber effect becomes inevitable. This is by no means good for the aggregate knowledge of humanity.

What's the echo chamber effect?

An echo chamber is an environment where a person only encounters information or opinions that reflect and reinforce their own. Echo chambers can create misinformation and distort a person’s perspective so they have difficulty considering opposing viewpoints and discussing complicated topics.

“Any body of knowledge consists of an end (or purpose) and a means.” — Immanuel Kant

Finally, we should discuss how to maximise the usefulness of AI to humanity in light of all its potential and downsides. AI, like any other technology, is a faster and more efficient means. But to what end? Nuclear power can be used for clean energy production and for weapons of mass destruction at the same time. The key is in the hands of its developers and users. AI without an improved end is just a tool that gets us faster to exactly where we do not want to go. The more powerful the technology, the more enlightened humanity has to be, and the clearer humanity has to be about what constitutes a good end. Technological breakthroughs, AI included, are neutral; it is human choice that ensures we get the most out of them. Technology and humanity need to develop in tandem.

Fun fact: The Chinese character “zhen”, meaning roughly “to rouse” or “to inspire oneself”, and the phrase “high-quality development” have been chosen, respectively, as the Character and Word of the Year 2023 in China.