“AI Warriors” vs. “AI Impostors” – Testing Job Applicants in the Age of ChatGPT

My company develops corporate training programs, and we’ve always encouraged clients to use validated skills/aptitude tests as part of their hiring process. However, these days, it’s increasingly hard to tell whether the person who just passed your test is a talented human… or a lazy human handing it off to their AI assistant.
This is frustrating because we’ve always promoted validated skills tests as a way to hire based on merit or natural ability – not academic pedigree or socio-economic privilege. In our own hiring process, we’ve found that degrees from prestigious universities or resumes featuring well-known companies aren’t the best predictors of success. And until recently, having everyone take the same writing skills test under the same conditions – whether they were a former Google employee with a Stanford degree or a heavy metal musician looking for a part-time gig – was a better way to identify those who could actually do the job.
Skills Testing Before AI
Our company needs writers and instructional designers who can quickly comprehend complex subject matter (anything from how to develop public health policy to how to sell energy-efficient IT hardware) and turn it into easy-to-follow scripts for training modules. Reflecting that, our writing test presented applicants with dense, jargon-laden excerpts of actual client source material (from projects we completed years ago) and asked them to write a short introduction tailored to a specific audience (e.g. farmers or financial advisors).
A typical input might look like:
Several international agreements in recent years (e.g. The Grand Bargain, the Paris Declaration on Aid Effectiveness) have highlighted the need for improved practice, coordination, integration and measurement of community engagement and accountability. These approaches have been translated into a set of Minimum Standards and Indicators for Community Engagement by a U.N.-headed interagency working group in consultation with a large number of experts from around the world.
CEA is related to, yet distinct from concepts such as Protection, Gender, and Inclusion (PGI) and duty of care. It can be implemented in both emergency and longer-term contexts not only to improve disaster response but also in reducing vulnerability, building resilience, and addressing unhealthy and unsafe practices.
…and so on.
And a passing output might start with:
As disaster response workers, experience shows that when communities play an active role in designing our assistance programmes, the outcomes are more effective and sustainable. A participatory approach ensures greater community engagement and accountability…
The tests had a time limit, and were intentionally difficult – we didn’t expect people to complete them to perfection, and were more interested in what trade-offs they would make. For years, maybe one or two out of every 50 applicants passed, and plenty of industry veterans failed.
However, starting in early 2025, we saw a sudden spike in the number of people passing the test. At first we thought that it just meant the economy was slowing down and there were more qualified applicants seeking jobs, but then… we hired a few of these new applicants, and came to a very different conclusion.
AI Impostor Syndrome

To be fair, most of the applicants were candid about their use of AI during interviews, saying things like “Yes, I use AI for everything – it’s great!” or “I use it for research and editing – it’s so much better than Google or spell check.” For our part, we cautioned applicants that – while ChatGPT could summarize small excerpts of source material like the ones in our tests – our actual projects were far more complicated. However, we weren’t about to turn people away just because they were using AI assistance – after all, that would be hypocritical, given our company develops and markets AI-based workforce training tools.
The applicants insisted they only used AI as an assistant, not a crutch. Yet, in case after case, their job performance told a different story. They would emerge from meetings with client subject matter experts – scientists, lawyers, and engineers – unable to answer basic questions about the topics that were discussed, saying “I’ll need to review the transcript” (translation: I need to run it through ChatGPT) before joining a debriefing. And while we set clear policies about highlighting AI-generated content in a certain color in draft documents, a few people tried to pass off AI-generated output as their own (sometimes with giveaway phrases like “Would you like me to break this summary down in greater detail?” still present in the copy-pasted content).
This created a bit of a crisis for our company: we didn’t want to be “uncool” and forbid people to use AI – especially when A) we sell AI coaches and assistants to clients, and B) our experienced writers have been using AI agents quite effectively for specific tasks – like generating hypothetical scenarios for role-play exercises or quickly producing multiple-choice questions based on a course outline. However, we couldn’t afford to hire people who depended on AI chatbots for a basic understanding of the subject matter.
Grading on the AI Curve
Eventually, we developed a new layer for our skills tests (and had our experienced team members try it, to validate the connection to real-world performance). Applicants now take a shorter writing test, after which we ask them to complete an on-camera critical thinking test via Zoom.
“Please set aside your keyboard, keep your hands visible on camera and tell me how you would solve the following problem…”
Whether or not this new assessment works in the long run, only time will tell. But it has already helped us screen out “AI impostors” who use AI chatbots to feign comprehension of complex subjects.
The point here is not that AI is bad because it spoiled our skills test; rather, we need to re-evaluate our whole approach to skills testing so we can identify people who use AI effectively but don’t depend on it for basic job competencies.
The “AI Warrior” Mentality

Perhaps the best metaphor for AI in knowledge work is a soldier and their rifle. Even though wars have been fought with guns for centuries, and anyone can pull a trigger, professional militaries still require soldiers to exercise daily and practice hand-to-hand combat: not because they expect them to get in fistfights on the battlefield, but rather to keep them fit and disciplined – giving them an edge over less trained opponents when equipped with an automatic rifle.
Similarly, organizations should allow – even expect – workers to use AI tools. But they should train people on how to use those tools and also train people to do their jobs without AI assistance. Because, all other things being equal, a good writer or data analyst with an AI assistant is always going to be more effective than a poor writer or data analyst with the same assistant.
To paraphrase the old US Marine Corps “Rifleman’s Creed” (made famous by the movie FULL METAL JACKET)…
- This is my AI agent. There are many like it, but this one is mine.
- My AI agent is my best friend… I must master it.
- My AI agent, without me, is useless. Without my AI agent, I am [a lot less useful].
- My AI agent and I know that what counts in this business is not the hours you put in or how many words per minute you can type… We know that it is the final product that counts.
- I will get to know my AI agent as I do my human teammates… I will learn its weaknesses, its strengths, its architecture, its functionality… I will maintain and upgrade it and troubleshoot any errors or issues it might exhibit. I will keep my AI agent functional and ready. We will become part of each other.
As much as some people might fear artificial intelligence, now that it exists, failing to use it is foolish at best and in some cases even unethical (for example, if AI-assisted human doctors have better accuracy rates for diagnosis – how can we not use it?). The challenge for organizations is learning to weed out the “AI impostors” while developing skilled workers into disciplined “AI warriors.”
Emil Heidkamp is the founder and president of Parrotbox, where he leads the development of custom AI solutions for workforce augmentation. He can be reached at emil.heidkamp@parrotbox.ai.
Weston P. Racterson is a business strategy AI agent at Parrotbox, specializing in marketing, business development, and thought leadership content. Working alongside the human team, he helps identify opportunities and refine strategic communications.