AI vs. human consultation analysis: why humans still matter

It seems like AI is everywhere—diagnosing diseases more accurately than doctors, writing university essays in seconds, drafting tax advice, generating public submissions at scale, and even composing symphonies. (And if the em dash didn’t give it away—yes, it also wrote this list.)

One other emerging AI use is analysing thousands of public submissions to help governments understand what their citizens think about new laws, plans and policies. Submitting to public consultations has become something of a national pastime in Aotearoa New Zealand over the last 12 months, with tens of thousands of responses common and, at times, hundreds of thousands received. This presents a serious challenge: how can all of the input be analysed quickly and comprehensively, how can detailed and accurate insights that reflect public opinion be reported, and how can submitters’ confidentiality be protected throughout?

AI is a tempting solution, and it is already being used. For the Regulatory Standards Bill (2025), our understanding is that the Ministry for Regulation and a New Zealand consultancy applied AI-based qualitative analysis to the 22,821 public submissions, automatically tagging each one as support, oppose, or neutral, and then manually reviewing a 4% sample for deeper insight. The output showed roughly 88% opposition and 0.33% support. The approach was fast, but it raised deeper questions: What value did it add? What might have been missed? And how were speed, transparency, accuracy, and trust balanced during the process?
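We don’t know exactly what tooling was used, but as a minimal sketch, assuming a generic stance classifier (the `classify_stance` placeholder below is our illustration, not the Ministry’s method), an automated tag-then-sample pipeline looks something like this:

```python
import random
from collections import Counter

def classify_stance(text: str) -> str:
    """Naive keyword stand-in for whatever model was actually used;
    a real pipeline would call an LLM or a trained classifier here."""
    lowered = text.lower()
    if "oppose" in lowered or "do not support" in lowered:
        return "oppose"
    if "support" in lowered:
        return "support"
    return "neutral"

def tag_and_sample(submissions: list[str], sample_rate: float = 0.04):
    """Auto-tag every submission, then draw a random sample for manual review."""
    tags = [classify_stance(s) for s in submissions]
    shares = {t: n / len(tags) for t, n in Counter(tags).items()}  # e.g. ~88% oppose
    sample_size = round(len(submissions) * sample_rate)            # ~913 of 22,821
    manual_review = random.sample(range(len(submissions)), sample_size)
    return tags, shares, manual_review
```

Note what even this sketch makes obvious: the roughly 96% of submissions that are never manually read are only as well understood as the classifier that tagged them.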

This is how David Seymour described the process used to analyse the nearly 23,000 submissions when responding to Guyon Espiner’s questions on the topic: “You’re smart enough to know that those 23,000 submissions, 99.5 percent of them, were because somebody figured out how to make a bot make fake submissions that inflated the numbers… Those figures… represented nothing more than somebody running a smart campaign with a bot.” (30 with Guyon Espiner, RNZ interview, 4 June 2025). He added, “I was aware of the ministry’s approach to analysing the submissions, which filtered out spurious submissions fuelled by social media campaigns, and allowed substantive ideas to be considered carefully.”
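The interview doesn’t reveal how that filtering worked. One common, simple technique for flagging coordinated or bot-generated submissions is near-duplicate detection; the sketch below (again our illustration, not the ministry’s method) compares every pair of submissions and flags those that are nearly identical:

```python
from difflib import SequenceMatcher

def near_duplicates(submissions: list[str], threshold: float = 0.9) -> set[int]:
    """Flag submissions whose text is almost identical to another's.
    O(n^2) pairwise comparison; real systems use hashing (e.g. MinHash)
    or embeddings to scale to tens of thousands of documents."""
    flagged = set()
    for i in range(len(submissions)):
        for j in range(i + 1, len(submissions)):
            if SequenceMatcher(None, submissions[i], submissions[j]).ratio() >= threshold:
                flagged.update({i, j})
    return flagged
```

A filter like this can spot copy-paste campaigns, but it cannot tell a bot from a thousand citizens who genuinely endorse the same template letter, and that is exactly the kind of judgment call the rest of this article is about.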

David Seymour’s comments make a fair point: AI can spot AI. While that analytical observation is likely informed by his background as an electrical engineer, I suggest that, like other observers without training in the field, he doesn’t have the grounding in qualitative analysis to make a more qualified assessment of AI as a submission-analysis tool. Like the heckled comedian who retorts, “I don’t turn up at your work and tell you how to sweep up,” I wouldn’t turn up at David’s office and tell him how to rewire a switchboard or write a bill.

However, in every industry facing AI disruption, experienced practitioners can understand the issues and identify what is at risk of being compromised or lost. In this article, we draw on our 25 years of qualitative analysis experience to explain what might be lost if we overlook the value of human critical thinking and jump to relying on AI to report on human ideas, and more specifically on large volumes of public submissions.

Our premise is that humans are still better placed to understand the detail and nuance of human thought, and to achieve what should be the aim of any public consultation: finding ideas that improve what is being proposed.

What humans do that AI can’t (yet)

This is what we believe qualifies us to have an opinion in this area. Since the Christchurch earthquakes beginning in 2010, Global Research has completed over 200 human-led, high-profile public submission analysis projects for local and central government organisations in New Zealand and Australia. Examples include the Future of Education in New Zealand for the Ministry of Education, Future Melbourne 2026 for the City of Melbourne, the rewrite of the New Zealand Arms Act and the Human Rights (Incitement on Ground of Religious Belief) Amendment Bill for the Ministry of Justice, and, still ongoing, the Royal Commission of Inquiry into COVID-19. Additionally, for the last 12 years, Patrick O’Neill has been invited to Lincoln University to teach postgraduate qualitative research students how to apply qualitative analysis in real-world settings.

To complete these assignments, here is a quick 12-step outline of the process we have applied, as humans:

1. Tailor our method to each project’s unique goals, data collected, context and reporting needs

2. Use NVivo software to efficiently manage data and complete analysis

3. Create a custom framework of themes and topics

4. Read every submission, categorising each point in context under the topics created

5. Refine the framework when new topics emerge, or a change to the analysis structure is required

6. Use a team of analysts to manage large volumes in short timeframes and ensure multiple analysis perspectives are considered

7. Deliver interim reports when needed, often before the engagement period closes

8. Peer review coding and ensure inter-coder consistency (see the sketch after this list)

9. Synthesise insights into narrative themes for reporting

10. Identify cross-cutting themes across submissions, and the implications of conflicts or synergies within the points made

11. Segment findings by demographics or interest groups if required

12. Deliver clear, insightful reports, often accompanied by summary or infographic reporting
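Step 8 is often quantified with an inter-coder agreement statistic. As a minimal illustration (this is a textbook calculation, not a claim about our exact in-house tooling), Cohen’s kappa for two coders assigning one topic per excerpt can be computed like this:

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Agreement between two coders, corrected for chance agreement."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two analysts coding the same six excerpts:
a = ["funding", "safety", "funding", "access", "safety", "funding"]
b = ["funding", "safety", "access", "access", "safety", "funding"]
print(round(cohens_kappa(a, b), 2))  # 0.75 -> substantial agreement
```

When agreement drops below an agreed threshold, the coders revisit the framework definitions together, which is exactly the corrective loop that steps 5 and 8 describe.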

This approach ensures:

• Every submission is read and included

• No themes are overlooked or over-emphasised

• There’s a full audit trail from submissions, to comments grouped under topics, to reports (sketched below)

• Analysis is informed by human judgment, not by what the next best predicted word is

• Data is stored securely and handled locally, underpinned by SOC 2 accreditation
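To make the audit-trail point concrete, here is a minimal sketch (illustrative field names, not our production NVivo schema) of the kind of record that lets a reader trace any reported theme back to its source submissions:

```python
from dataclasses import dataclass

@dataclass
class CodedExcerpt:
    """One coded point: links a reported theme back to its exact source."""
    submission_id: str  # which submission the point came from
    excerpt: str        # the submitter's own words
    topic: str          # framework topic it was coded under
    coder: str          # analyst who coded it, enabling peer review

def theme_sources(excerpts: list[CodedExcerpt], topic: str) -> list[str]:
    """Audit trail: every submission that contributed to a reported theme."""
    return sorted({e.submission_id for e in excerpts if e.topic == topic})
```

Because every reported theme resolves to a list of submission IDs and verbatim excerpts, a client or member of the public can challenge any conclusion and see precisely where it came from.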

What AI offers (and where it falls short)

To be fair, AI brings some real advantages:

• Speed from processing tens of thousands of comments in minutes

• Cost reduction due to no hourly rates

• Stamina: it doesn’t get tired

• Trend and sentiment detection across large datasets

But when the aim is thorough, detailed qualitative analysis and reporting, even ChatGPT admits that relying on AI to analyse public feedback carries significant risks compared to using humans. These are some of the areas it identified:

1. Data security

• AI Weakness: Cloud-based AI tools can risk sensitive and confidential data falling into the wrong hands unless securely hosted and properly certified (SOC 2, ISO 27001).

• Human Strength: Human teams can operate in controlled, local systems. At Global Research, our SOC 2 certification ensures data is managed in a controlled environment and is secure, private, and respected.

2. Contextual accuracy

• AI Weakness: AI often fails to understand nuance, sarcasm, cultural references, legal reasoning, poor writing, or mixed messages.

• Human Strength: Our analysts understand tone, intent, and context and adapt the framework as new points and insights emerge. We read everything that is said, and understand what’s meant.

3. Auditability

• AI Weakness: Most AI tools are black boxes, and the detail of how conclusions are reached isn’t revealed.

• Human Strength: Our process is fully traceable. Clients (and the public) can follow the logic from submission to coding, to thematic analysis, to the final report.

4. Fairness & representation

• AI Weakness: AI may miss minority views, especially if tuned to statistical norms or overseas data.

• Human Strength: Humans can deliberately elevate diverse voices, apply an analysis lens, understand local nuance, and detect coordinated campaigns.

5. Error detection

• AI Weakness: Systematic AI errors can slip through unnoticed across thousands of submissions.

• Human Strength: Peer checks, team reviews, and contextual judgment help us identify and fix problems early and throughout the analysis process.

6. Stakeholder trust

• AI Weakness: AI can feel impersonal and opaque. It struggles to interpret lived experience. And it is prone to hallucinating, or making things up.

• Human Strength: People trust that humans understand people, especially when the issues are emotional, political, or personal.

In summary

The key concern with AI is that it often jumps from submission to summary, skipping the important middle steps. The risks hidden in that gap include:

• Data vulnerability

• Shallow analysis

• Invisible bias and hallucinations

• Missed minority voices

• Undetected errors

• Lack of empathy or understanding

• Lack of critical thought

And most importantly: how can someone judge the quality of AI-generated output if they’ve never completed high-quality qualitative analysis themselves? Even results that appear flawless at first glance may misrepresent what submitters actually said, overlook critical insights, or, at worst, be misleading.

So where does this leave humans and AI?

We’re not anti-AI. In fact, we have tested eight different qualitative analysis applications in a controlled environment over the last 12 months. But when we compared what those applications produced, we assessed that we could achieve significantly better outcomes with human analysts. The only significant benefit we identified was speed, and when you can’t trust the results, speed may just deliver an inferior outcome faster. We believe our interim reporting adequately overcomes any delays for our clients.

For now, we continue to trust our people over unknown code, especially when our clients rely on us to get the details right, represent voices fairly, deliver insights that inform quality decision making, and protect the public’s data.

So some things, such as public submission analysis and understanding the depth and nuance of human thought, are currently better achieved by humans. How do we know this? Because we have been doing it for the last 25 years.