Unlike when OpenAI first released ChatGPT, it is now encouraging to see AI developers taking steps toward improved transparency, such as the recent news of OpenAI's development of a text watermarking method to detect AI-generated content. The report claims the new watermarking tool is 99.9% effective at identifying ChatGPT's output, but OpenAI has yet to release it. You can read more in the report here: OpenAI has built a text watermarking method to detect ChatGPT-written content — company has mulled its release over the past year.
My concern, though, is the part that mentions a professor who once failed an entire class after an AI detection tool incorrectly flagged all of the student papers as AI-generated. Let us not forget past cases where AI detection tools like Turnitin and GPTZero have rendered unfair judgements on students' work by misclassifying it as AI-generated.
So, as this technology that many students and scholars have newly come to love continues to evolve, we must all take care, especially librarians. Library and information professionals should be at the forefront of AI literacy. Let us not stop teaching our students and library users the implications of failing to use AI responsibly.
Over here, we are still promoting the #ResponsibleUseOfAICampaign. Reach out, and we will bring the Responsible Use of AI webinar to your institution for your staff and students.