Calling People a Bot because ChatGPT sounds like them (AI-Ese)
Developments in AI tools like ChatGPT are beginning to influence not just our work, interactions and productivity, but – in the context of this article – our language and writing styles. Words and phrases such as “delve,” “landscape,” “revolutionize” and “realm,” which appear frequently in AI-generated content, are now being treated as indicators of AI involvement, particularly in regions like the US and Europe. Take a moment to study the trend in the chart below.
The chart offers a useful clue as to how use of the word “delve” has increased over time.
Do you think this is a reliable way to detect AI-written text, or is it just random chance?
Do we need new tools to detect AI-generated content?
Are we now bots because AI sounds like us?
In Nigeria, for example, the word “delve” is commonly used and not regarded as a ‘cliché’. I have heard students use the word during school debates, and many scholars use it to introduce their topics. This raises concerns about the reliability of using such words as markers of AI-generated content; little wonder Nigerians tackled an American author for claiming that ‘delve’ is only used by ChatGPT.
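To see why word-based detection is shaky, here is a minimal sketch of a naive “marker word” check. It is purely illustrative and assumes a hand-picked list of supposed AI tell-tale words; it is not any published detector, and it shows how easily such a check flags ordinary human writing.

```python
# Minimal sketch of naive "marker word" AI detection (illustrative only).
# MARKER_WORDS is an assumed, hand-picked list, not a validated signal.
MARKER_WORDS = {"delve", "landscape", "revolutionize", "realm"}

def naive_ai_score(text: str) -> float:
    """Return the fraction of words in `text` that appear in MARKER_WORDS."""
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    if not words:
        return 0.0
    hits = sum(1 for w in words if w in MARKER_WORDS)
    return hits / len(words)

# A human-written sentence from a Nigerian school debate would be "flagged"
# just as readily as ChatGPT output - a false positive.
human_sentence = "Let us delve into the realm of renewable energy in Nigeria."
print(f"Naive score: {naive_ai_score(human_sentence):.2f}")
```

Because the score depends only on vocabulary, any writer whose regional or academic register naturally includes these words is indistinguishable from a chatbot under this kind of test.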
This discrepancy highlights the need for a more nuanced approach to identifying AI-generated content, one that considers regional linguistic norms. The findings from Paul Graham’s research call for deeper engagement from academics, AI enthusiasts, and librarians to:
- Start engaging in conversations about academic integrity; such conversations surface different opinions on how to identify AI-generated content and the need for policies that guide the sustainable and ethical development and use of AI. These conversations can be raised in Faculty Board Meetings, Management Meetings, Webinars/Conferences, etc.
- Put in place AI literacy programmes that teach the public how (and how not) to use generative AI tools, along with other valuable information.
- Start developing and training AI models to solve our peculiar challenges. The biases associated with generative AI tools are not likely to disappear soon, and local input can help mitigate these issues.
To expand your knowledge on this topic, delve into this ‘table-shaking article’ on The Guardian website. There, you’ll discover other common words that, if found in your writing, may suggest to readers that you have used ChatGPT. The article also provides insights into how these AI models are trained, the role Africans play in testing them, and the content they generate. It’s a must-read for anyone interested in better understanding the impact of AI on our language and in working towards responsible and ethical use of these powerful tools.
Check out the article here: https://www.theguardian.com/technology/2024/apr/16/techscape-ai-gadgest-humane-ai-pin-chatgpt
Download my Co-Authored E-Book Here: Prescribed AI/Digital Tools for Work, Research and Productivity?