
Students Cheating With ChatGPT? Detection Tool Exists, But OpenAI Hasn't Released It Yet

The technology for detecting AI-generated text with pinpoint accuracy has been under internal discussion for the past two years.

Cover image via Shantanu Kumar/Pexels & Andrea Piacquadio/Pexels


OpenAI has developed a tool capable of detecting whether students use ChatGPT to write their assignments, but they are still debating whether to release it

Speaking to TechCrunch, OpenAI confirmed that they are researching a text watermarking method, as first reported in an exclusive by The Wall Street Journal. However, they are taking a deliberate approach due to the complexities involved and the potential impact on the broader ecosystem beyond OpenAI.

With text watermarking, OpenAI would focus specifically on detecting text generated by ChatGPT, not text from other companies' models. The approach involves subtly altering how ChatGPT selects words so that an invisible watermark is embedded in the text, which a separate tool can later detect.
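OpenAI has not published the technical details of its watermark, but publicly known research gives a rough sense of how word-selection watermarking can work. The sketch below is a simplified, hypothetical illustration (the toy vocabulary, bias strength, and scoring are all assumptions, not OpenAI's actual method): generation quietly nudges the model towards a pseudorandom "green" subset of words derived from the previous word, and a detector later checks whether a suspiciously large share of the text lands in those subsets.

```python
# A toy sketch of one publicly known watermarking idea (a "green list" bias over
# next-word choices). This is NOT OpenAI's unpublished method; the vocabulary,
# bias strength, and detection score below are illustrative assumptions.
import hashlib
import random

VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "tree",
         "quickly", "slowly", "big", "small", "happy", "tired"]

def green_list(prev_token: str, fraction: float = 0.5) -> set[str]:
    """Derive a pseudorandom 'green' half of the vocabulary from the previous word."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB.copy()
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate(n_tokens: int, watermark: bool, bias: float = 4.0) -> list[str]:
    """Sample words from uniform weights, optionally boosting green-list words."""
    rng = random.Random(0)
    out = ["the"]
    for _ in range(n_tokens):
        greens = green_list(out[-1])
        weights = [1.0 + (bias if (watermark and tok in greens) else 0.0) for tok in VOCAB]
        out.append(rng.choices(VOCAB, weights=weights, k=1)[0])
    return out

def green_fraction(tokens: list[str]) -> float:
    """Detector: recompute each green list and count how often the text lands in it."""
    hits = sum(tok in green_list(prev) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

if __name__ == "__main__":
    marked = generate(200, watermark=True)
    plain = generate(200, watermark=False)
    # Watermarked text should sit well above the ~0.5 level expected by chance.
    print("watermarked green fraction:", round(green_fraction(marked), 2))
    print("unmarked   green fraction:", round(green_fraction(plain), 2))
```

Because ordinary human writing has no reason to favour the green subsets, the detector's score stays near chance on unmarked text, which is how a scheme like this can be accurate while remaining invisible to readers.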

Providing a method to detect AI-generated text could be a game-changer for teachers aiming to prevent students from submitting AI-written assignments

Image used for illustration purposes only. Image via Shantanu Kumar/Pexels

OpenAI confirmed in a blog post that their watermarking method for detecting AI-generated text is highly accurate and holds up against localised tampering, such as paraphrasing.

However, they acknowledge that more global changes, such as rewording the text with another AI model or running it through a translation system, could easily strip out the watermark. The company also expressed concerns about the potential stigmatisation of AI writing tools, particularly regarding their utility for non-native speakers.

OpenAI is also concerned that implementing watermarking might deter users, with nearly 30% of surveyed ChatGPT users indicating they would use the software less if it were introduced

Despite these concerns, some employees believe watermarking is effective. To address user feedback, the company is exploring alternative methods, such as embedding metadata, which could be less controversial but are still unproven.

According to their blog post, OpenAI is in the early stages of this exploration, and while the metadata would be cryptographically signed to avoid false positives, it's too soon to determine its effectiveness.
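OpenAI has not described how the signed metadata would work, so the sketch below is only a hypothetical illustration of the general idea using Python's standard library (the payload fields, key handling, and encoding are all assumptions): the provider signs a small provenance record tied to the generated text, and a checker accepts it only if the signature verifies, so text without a valid record, including all human-written text, can never be flagged by mistake.

```python
# A hypothetical sketch of cryptographically signed provenance metadata.
# OpenAI has not published a design; the fields, key handling, and encoding
# here are assumptions for illustration only.
import base64
import hashlib
import hmac
import json

SECRET_KEY = b"provider-held signing key"  # hypothetical; kept by the provider, never shared

def sign_metadata(text: str, model: str) -> str:
    """Produce a provenance token whose signature covers the text hash and metadata."""
    payload = {"model": model, "sha256": hashlib.sha256(text.encode()).hexdigest()}
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    return base64.b64encode(body).decode() + "." + tag

def verify_metadata(text: str, token: str) -> bool:
    """Accept only if the signature checks out AND the text still matches the record."""
    try:
        body_b64, tag = token.rsplit(".", 1)
        body = base64.b64decode(body_b64)
    except Exception:
        return False
    expected = hmac.new(SECRET_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False
    payload = json.loads(body)
    return payload["sha256"] == hashlib.sha256(text.encode()).hexdigest()

if __name__ == "__main__":
    essay = "This essay was generated by a chatbot."
    token = sign_metadata(essay, model="chatgpt")
    print(verify_metadata(essay, token))               # True: intact AI output with its record
    print(verify_metadata(essay + " Edited.", token))  # False: text no longer matches the record
```

In a setup like this, a false positive would require forging a valid signature, which is the sense in which signed metadata avoids flagging human writing; how well such a record survives real-world copying and editing is the kind of open question OpenAI's blog post alludes to.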
