Our analysts read thousands of submissions when reporting on a public consultation. More and more often, submitters are utilising our handy friend ChatGPT to organise ideas and help write their submissions.
Like any tool, it can be used in positive and negative ways.
Our staff – to varying degrees – consider themselves experts at spotting which submissions have been run through ChatGPT. Certain consistencies across these submissions (a love of the em dash; excessive quotation marks; particular formatting habits) are known to us as trademarks of ChatGPT… for now. No doubt it will become increasingly difficult to pick out these submissions as the technology develops.
One of the key positives is that it can help people feel more confident in their writing. Used this way, AI enables people to express their ideas clearly and concisely, which makes public consultations more accessible to more people. However, we have noticed that some submitters use AI in ways that can negatively affect their response.
We have come across submissions where the submitters (helpfully, from our perspective) have included their entire conversation with ChatGPT as their submission. This has shown us clearly how a one-line prompt can be expanded into multiple paragraphs. ChatGPT can take liberties in doing so, making up details the submitter never provided. When a consultation is interested in hearing personal stories, this can blur the lines around a submission’s authenticity.
Taking care with copy and paste is also important when using AI. Leaving a (fascinating but irrelevant) conversation you had with ChatGPT about the best way to cook a cauliflower sitting alongside your submission is inadvisable.
Our analysts have also found it a headache to read through submissions where ChatGPT has added unnecessary embellishments. Personal touches and stories can get lost amidst a sea of ideas that are “leveraged”, “fostered”, or “delved” into. There is a balance between clarity and merely sounding intelligent, and ChatGPT does not always get it right.
Every day, the capability of AI is improving. This will continue to shape the variety and content of the responses we receive, especially as new generations grow up turning to this technology instinctively.
Here at Global Research, we will continue to watch with interest how this develops, taking it one day at a time as we shape our practices around AI-supported submissions.