How Blind Trust in Generative AI Undermines Creative Work


This summer, we experienced firsthand the damage caused by blind faith in the "superhuman" capabilities of generative AI, even among otherwise intelligent individuals. Here's what happened.

We submitted a research paper to an academic conference, and after it was accepted, I paid the registration fee, booked flights, and arranged accommodations. A week later, we received an abrupt email from the organizers stating that our paper had been rejected. The reason? Their AI-based plagiarism and ChatGPT-detection tool flagged it as containing both plagiarized content and text generated by ChatGPT.

This accusation was entirely unfounded. Our paper presented original work by our team: novel AI algorithms for predicting animal emotions, a direction that, to our knowledge, had not been explored before. We also described the AI model we developed to validate this approach, which was unique to our research. Given its very nature, our work could not have been plagiarized or generated by ChatGPT.

I attempted to explain this to the program committee, emphasizing the groundbreaking and technical nature of our research. I also clarified that while we had used ChatGPT to enhance the language and clarity of our paper—a common and widely accepted practice—every idea, methodology, and analysis was entirely our own. Additionally, I pointed out that due to the technical nature of the subject, certain formulations might overlap with other works, as there's often limited flexibility in describing algorithms.

To further prove our case, I ran the paper through a different plagiarism detection tool, which showed it was well below the threshold for concern. Unfortunately, none of this swayed the committee, who remained rigidly committed to the judgment of their AI tool.

While the organizers refunded the conference fee and I was able to cancel the hotel without penalty, the non-refundable flight ticket remained a painful, and costly, reminder of the ordeal.

This experience highlights a critical issue: uncritical reliance on generative AI tools can lead to unjust outcomes, especially when those tools are treated as infallible. In this case, it didn’t just undermine the integrity of the conference process—it dismissed original, innovative work without fair consideration. As we integrate AI into more facets of professional and academic life, it is imperative that human judgment remains a central part of the equation.
