What is AI Actually Good For?

It’s hard to know what to think about AI in the workplace. Some articles claim that AI is going to replace nearly all human knowledge workers within a few short years, leaving the developers at OpenAI as the only ones with jobs. Meanwhile, others dismiss generative AI as an overhyped gimmick with far fewer real-world applications than people think: a “solution in search of a problem” or a “hammer in search of a nail.”
Of course, any time you see opinions swinging to those kinds of extremes, the truth is probably somewhere in the mundane middle.
Within my own field of work – corporate training – the utility of generative AI is pretty obvious: when set up properly, it can act as a coach and advisor answering trainees’ questions, generate role-play scenarios for practicing sales calls and management decisions, or – at the very least – help with writing scripts for traditional training videos. I also use specialized AI agents to help with everything from marketing to planning my daily diet and exercise routines.
However, when talking to clients, it’s clear many people are still confused about generative AI’s capabilities.
Some people can’t see any use for AI technology beyond silly parlour tricks like “recommend a cookie recipe” or “write a cover letter for a job application.” Meanwhile, others have grossly exaggerated expectations, asking things like “Can I tell an AI to create a mobile app for my employees to track freight shipments?” (answer: it could be part of the solution, but you’re still going to need to hire a mobile app developer) or “Can AI eliminate the need for triage nurses at a hospital?” (not yet, but it could probably help triage nurses handle a crowd of people seeking medical attention during a disaster).
This raises the question – what expectations should people have for generative AI? What is this technology good for… and what is it not?
3 things Generative AI models do poorly (or at least not as well as most humans)

Critical thinking and decision making (about important stuff)
The major AI models have access to a wealth of information – basically everything that was on the Internet two years ago, usually augmented with some current search engine results. And if you ask a basic AI chatbot “What are some cost-effective ways to promote my construction company?” or “What are some good interview questions when hiring restaurant staff?”, the AI model will find patterns in all the related blogs and magazine articles in its training data and generate an answer that usually represents generally good – but not very specific – advice.
However, despite all their knowledge, most AI models don’t know what they don’t know. For instance, if you ask “What are some good interview questions when hiring restaurant staff?”, even the latest ChatGPT ‘reasoning’ models will dive immediately into giving answers without stopping to ask basic questions like “What kind of restaurant?”
In this case, if you clarify that you are opening a restaurant for a head chef who came from a Michelin star establishment, you’ll get more advanced interview questions like:
- Can you discuss your experience with modern cooking techniques, such as sous vide, molecular gastronomy, or fermentation?
- How do you create a tasting menu that maximizes both guest experience and profitability?
This isn’t really the fault of the AI model or its creators. Companies spend countless person-hours and millions (even billions) of dollars training AI models to give quality responses that reflect basic principles of critical thinking. But – when it comes to general purpose “large language models” – there are just too many possible variables to consider when answering a question across too many different domains of knowledge.
So, while a company like Bloomberg might be able to create an ultra-niche AI model geared to answer specific types of questions about financial markets, nobody at OpenAI or Google has time to train vanilla ChatGPT or Gemini specifically on restaurant hiring practices.
Of course, a savvy user (or an expert designing a purpose-built AI agent) can get better results with a more complex prompt than “What are some good interview questions?” – but we’ll discuss that in a bit.
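To make that concrete, below is a minimal sketch of the difference between a bare prompt and a context-rich one, assuming the OpenAI Python SDK. The model name and prompt wording are illustrative assumptions, not a recipe:

```python
# A minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# A bare prompt: the model has to guess what kind of restaurant you mean.
bare_prompt = "What are some good interview questions when hiring restaurant staff?"

# A context-rich prompt: supply the variables the model would otherwise
# never stop to ask about, and invite it to ask for what's still missing.
rich_prompt = (
    "I'm opening a fine-dining restaurant led by a head chef from a "
    "Michelin-starred establishment. Before answering, list any details "
    "you still need from me. Then suggest interview questions for line "
    "cooks and front-of-house staff, grouped by the skill being probed."
)

for prompt in (bare_prompt, rich_prompt):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```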
Replacing traditional database apps
One of the allures of AI models is that you can give them plain language instructions and they will generally follow 80% of them 80% of the time. This has led some people to view AI prompting as some kind of quick and dirty replacement for actual software development skills (alternatively, they might try to command an AI model to do the difficult programming work for them – a.k.a. “vibe coding”).
To give an example, one client we work with asked if we could create an AI application that would search the web for industrial components matching very specific requirements, with “100% accuracy, zero hallucinations.” However, this posed two problems:
- AI models only have access to the data they were trained on, which is usually a snapshot of the public Internet from about two years ago. That snapshot does not include the full text of all the PDFs in every industrial equipment manufacturer’s product database, wouldn’t include newer items, and wouldn’t reveal whether any items referenced in the training data have since been discontinued.
- Even if you could supply the AI model with a database of every air compressor fan or water cooler pump on the market, AI models look for patterns in data, not specific pieces of information. So the AI model might come back and say “Hey, here are four that seem to address the problem you’re trying to solve…” but there’s no guarantee that it would find all relevant items, or that the items it identifies would match all of the criteria.
Basically, if you want to create an app that executes searches based on rigidly defined criteria against a rigidly organized list of records – completely and correctly, 100% of the time – then there’s no escaping the need to actually learn SQL and Python or C# or some other language and build a traditional database application.
Now – that said – AI can be tremendously helpful for formatting the data that goes into your traditional SQL database, but we’ll talk more about that later.
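To make that division of labor concrete, here’s a minimal sketch: the AI model handles the fuzzy part (turning a messy product blurb into structured fields), while the exact search stays in plain SQL. It assumes the OpenAI Python SDK; the listing text, field names, and thresholds are all made up for illustration:

```python
# A minimal sketch: AI reformats messy text, SQL does the exact search.
# Assumes the OpenAI Python SDK; the listing and schema are illustrative.
import json
import sqlite3

from openai import OpenAI

client = OpenAI()

raw_listing = """ACME AirFlow 3000 - industrial compressor fan.
Flow rate approx. 3200 CFM, 480V three-phase, IP54 housing."""

# Step 1: the model turns unstructured text into fields we define.
extraction = client.chat.completions.create(
    model="gpt-4o",
    response_format={"type": "json_object"},
    messages=[{
        "role": "user",
        "content": "Extract name (string), cfm (number), and voltage "
                   "(number) as JSON from this listing:\n" + raw_listing,
    }],
)
fields = json.loads(extraction.choices[0].message.content)

# Step 2: rigid, exact-match queries belong in a traditional database,
# where "complete and correct" is guaranteed by the engine, not a prompt.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE parts (name TEXT, cfm REAL, voltage REAL)")
db.execute("INSERT INTO parts VALUES (?, ?, ?)",
           (fields["name"], fields["cfm"], fields["voltage"]))
matches = db.execute(
    "SELECT name FROM parts WHERE cfm >= 3000 AND voltage = 480"
).fetchall()
print(matches)
```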
Complex writing and research tasks WITHOUT human supervision
The hardest thing for people to grasp about generative AI is that – while its output superficially resembles human language – AI models don’t “write”, “read”, “remember”, “reason” or “think” in the sense that humans do. Rather, they achieve the same outcomes (e.g. “generate an apology letter to my grandmother for forgetting her birthday”) but through very different methods.
Case in point: recently, I was helping a client develop a training course on international politics and how it impacts business. At one point I got frustrated trying to Google an example of a European country ignoring the terms of an arms embargo, and asked a Claude-based AI agent if it knew of any cases. The AI agent replied “Yes, in 2019 Austrian arms manufacturers shipped armored personnel carriers to Belarus, in violation of a European Union arms embargo.”
This seemed like a perfect example to illustrate the point I was making. However – given that I work with AI agents every day and know their limitations – I immediately Googled for verification that Austrian companies had sent armored personnel carriers to Belarus, only to find… nothing. As far as the mainstream media reported, the only armored vehicles Belarus received during the Ukraine war came from Russia.
So what was up with my AI agent? Was it trying to deceive me? Did it fail to understand all the articles it was scanning? No… at least not in the human sense.
The way AI models process information and generate usually-relevant answers to questions is by recognizing connections and patterns between data points – NOT by understanding the data points themselves. While the process involves some incredibly advanced math, the basic idea is this: imagine an astronomer who can predict the locations of new stars by analyzing patterns in known constellations, but doesn’t know what stars are or why they emit light.
In the case of the nonexistent Austrian armored personnel carriers, the AI agent recognized that:
- There were lots of articles about Austria and Belarus (because Austria was the #1 foreign investor in Belarus before the war)
- Austria is a significant exporter of weapons, despite its relatively small size
- Belarus is subject to European Union sanctions (because there are countless articles about that)
- There was a shipment of armored personnel carriers from Russia to Belarus
- Belarus actually sends a lot of mercenaries and weapons (including armored personnel carriers) to Cote d’Ivoire – another country subject to EU sanctions
- Austria is an EU country, frequently involved in debates about arms embargoes and other types of sanctions
And thus, by connecting those factual dots, my well-meaning AI agent drew a constellation of Austria violating EU sanctions with a shipment of armored vehicles to Belarus – a story that sounds incredibly plausible but never actually happened. To make matters worse, an AI agent is incapable of “showing its work” (the way your teacher would ask in math class) because, thanks to all that incredibly complicated math, AI “reasoning” is less like a human going through logical steps and more like air particles forming a tornado.
Now, at the risk of sounding like an AI apologist – I could have formulated my question in a better way, and there’s still tremendous value in a piece of technology capable of generating plausible hypothetical scenarios based on complex patterns and relationships… but we’ll look at that more in the next section.
3 things AI agents do remarkably well (or at least better than most humans)

Walking people through complex processes / frameworks
In the previous section, we compared AI models to an astronomer who doesn’t understand the stars, and AI “thinking” to a chaotic force of nature – and there’s plenty of truth to those comparisons. However, just as people tamed horses and learned to control fire, it’s possible to harness the raw pattern-recognition power of AI to do useful work, if you know how to channel it.
Case in point: my company builds “virtual coaches” that guide humans through complex, high-stakes work tasks, from helping banks evaluate multimillion-dollar investment opportunities to advising physiotherapists on rehabilitating patients with serious medical conditions. And the way we do this is by codifying the decision-making processes of human experts in those fields, and making them available to AI agents in a machine-friendly format.
Doing this basically gives the AI model – a dot-connecting machine – higher quality dots to connect. Instead of having the AI model compare the human user’s input against all the data it was fed during development, providing AI agents with well-organized bodies of knowledge and decision-making frameworks lets the model triangulate between the user’s statements, its training data, and specific, highly relevant guidance from specific human experts.
To give a massively oversimplified example (a real framework would be thousands of words long), if we were developing a virtual consultant to advise municipal government officials on urban planning, we might tell the AI agent:
The major considerations for urban planning are…
– Land use & zoning
– Transportation & mobility
– Infrastructure & public services
– etc.
When advising government officials on these matters, do not advance to the next stage of the conversation until you have addressed the key points for each area.
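Mechanically, a framework like this usually rides along as a system prompt that shapes every turn of the conversation. Here’s a minimal sketch, assuming the OpenAI Python SDK; the framework text is a toy stand-in for what would really run thousands of words:

```python
# A minimal sketch of a framework-driven "virtual consultant".
# Assumes the OpenAI Python SDK; the framework is a toy stand-in.
from openai import OpenAI

client = OpenAI()

URBAN_PLANNING_FRAMEWORK = """
You are a virtual consultant advising municipal officials on urban planning.
The major considerations are:
1. Land use & zoning
2. Transportation & mobility
3. Infrastructure & public services
Do not advance to the next stage of the conversation until you have
addressed the key points for each area, and ask clarifying questions
whenever the official's situation is ambiguous.
"""

history = [{"role": "system", "content": URBAN_PLANNING_FRAMEWORK}]

def ask(question: str) -> str:
    """Send one user turn, keeping the running conversation history."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("We want to rezone a warehouse district for mixed-use housing."))
```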
While we still can’t control exactly how the AI model will connect these principles to the user’s input and whatever data sources we provide it, we can reach a point where it gives highly relevant, specific, and useful responses 96% of the time rather than 72% of the time – which, to be honest, is about as good as you can expect from a human consultant.
And this isn’t just about making advice cheaper: it’s about making expert guidance more accessible. For example, our company works with humanitarian organizations facing extreme budget cuts after the US and other countries scaled back foreign aid. By building AI advisors based on human expertise, these organizations can help local staff apply best practices in disaster response, child welfare, and public health programs. And while these AI agents aren’t perfect, they’re available 24/7 and can work in nearly any language on Earth – which, in some cases, actually makes them more helpful than a human expert.
Summarizing, comparing, and reformatting complex data
Previously, we discussed how, if you want to search a massive table of consistently formatted data – like an industrial equipment parts catalog – you’re much better off learning traditional software development and building an app with C# and SQL than trying to use AI. But when information is messy, inconsistent, and buried in dense documents, AI’s pattern-recognition abilities are exactly the right tool.
One of the best examples for the usefulness of AI’s pattern recognition abilities comes from my personal life rather than work.
In the United States there is no national health system, and most people obtain health insurance through their employers. Some employers offer multiple insurance options, but the rules, caveats, and formulas for calculating costs under each policy are nearly impossible for a layperson (or even most insurance agents) to understand.
My wife, who teaches at a private school, spent hours trying to make sense of the insurance options provided by her institution, especially when it came to costs versus access to specialist doctors. Trying to be helpful, I suggested she upload the plan documents to the AI agent we use for proposal writing at work. She did… and the AI agent instantly gave a wonderful synopsis of the important differences between the plans, their costs, their limitations, and the implications for our family’s ability to access the healthcare services we need.
After 20 minutes of follow-up questions with the AI agent (and verifying the agent’s statements) we were able to come to a fairly confident decision as to which option was best.
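If you’re curious what that looks like under the hood, here’s a minimal sketch of the same kind of document comparison, assuming the OpenAI Python SDK and that the plan documents have already been converted to plain text. The file names and comparison criteria are illustrative:

```python
# A minimal sketch of an AI-assisted document comparison.
# Assumes the OpenAI Python SDK and plan documents already saved as text;
# file names and comparison criteria are illustrative.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

plan_a = Path("plan_a_summary.txt").read_text()
plan_b = Path("plan_b_summary.txt").read_text()

prompt = (
    "Compare these two health insurance plans for a family of four. "
    "Summarize, in a table: monthly premium, deductible, out-of-pocket "
    "maximum, and rules for seeing specialists without a referral. "
    "Flag anything you are unsure about so I can verify it myself.\n\n"
    f"PLAN A:\n{plan_a}\n\nPLAN B:\n{plan_b}"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```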
Complex writing and research tasks WITH human collaboration
So far we have discussed the various things that can go wrong when AI models generate text – from jumping to conclusions to ‘hallucinating’ nonexistent arms embargo violations. But these caveats pale in comparison to generative AI’s most useful superpower: namely, it can generate massive volumes of mostly relevant and accurate, grammatically flawless text, while making mostly good editorial decisions, hundreds of times faster than even the most prolific human writer.
Which, objectively speaking… is super cool.
So how can human users channel AI’s text-generating prowess into something useful?
The key is to have a clear vision for what you want to produce, delegate only selected parts of the creative process, and stay disciplined about using AI for the things it’s actually good at – not indiscriminately or for everything. If you take a “spray and pray” approach and ask generative AI to “write a blog about dog grooming” or “outline a product development roadmap for accounting software,” then at best you’ll get something mediocre: an imitation of the word patterns found in similar documents, an “average of the Internet” without any real inspiration or insight. (At worst, you’ll get something built on faulty assumptions.)
However, if you ask an AI agent to fill in gaps in a well-considered framework or expand on some relevant bullet points – and provide very clear guidelines about the purpose and expectations for the output – then, assuming you have enough of a grasp of the subject matter to evaluate the quality of that output, it can be an incredible time-saving tool for writing tasks.
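Here’s a minimal sketch of that “fill in the framework” pattern, assuming the OpenAI Python SDK. The outline, guardrails, and topic are all illustrative (yes, it’s the dog grooming blog again – this time with the human doing the thinking):

```python
# A minimal sketch of gap-filling within a human-authored outline.
# Assumes the OpenAI Python SDK; outline and constraints are illustrative.
from openai import OpenAI

client = OpenAI()

outline = """
Blog post: "Choosing a groomer for an anxious dog"
Audience: first-time dog owners. Tone: practical, no fluff. About 600 words.
1. Why anxious dogs need a different grooming approach
2. Three questions to ask a prospective groomer (draft one short
   paragraph from each bullet below):
   - handling experience with fearful animals
   - whether the owner can stay during the first visit
   - how the groomer handles a dog that panics mid-session
3. Red flags that mean you should walk away
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Draft prose strictly within the outline provided. "
                    "Do not add sections, statistics, or claims that are "
                    "not implied by the outline."},
        {"role": "user", "content": outline},
    ],
)
print(response.choices[0].message.content)
```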
Remember our previous example where my AI agent dreamed up an arms deal between Austria and Belarus that definitely felt like it could have happened – but didn’t? That ability to instantly generate plausible hypothetical scenarios is absolute gold for creating training exercises and simulations.
Once, while developing a training scenario for disaster response workers, I needed to present a situation where a secondary emergency arose that frustrated the workers’ efforts to help people after a typhoon. I asked my AI agent “What are some common disease outbreaks that occur after a major typhoon?” to which the AI agent replied “After a typhoon, cholera is typically the most urgent concern – it’s highly infectious, spreads rapidly through contaminated water, and can be lethal within hours if untreated. The 2009 typhoons in the Philippines, for instance, led to several cholera outbreaks in evacuation centers.”
A quick Google search confirmed this was accurate, so I went with cholera and moved on to designing other aspects of the scenario. Without the AI agent’s help, that same research task would have taken 3 to 5 minutes, but with my AI agent it took around 40 seconds – roughly a 5x to 7x productivity boost.
An even better use of AI’s capabilities is when you want to produce additional items in a series based on one or two examples. To give a real-world example, I spent the better part of a day creating “hotel” and “coffee shop” scenarios for a customer service role-play simulation. But once I had those examples, I handed them over to my AI agent and said “Generate an airline customer service scenario based on these examples” – which my agent did, in about 35 seconds – basically a 320x productivity boost! (And you can play that scenario – exactly as the AI agent wrote it – in the demo below.)
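In code, that “extend the series” trick is just few-shot prompting: hand the model your hand-written examples and ask for one more in the same mold. A minimal sketch, assuming the OpenAI Python SDK; the file names are placeholders for the real scenario documents:

```python
# A minimal sketch of few-shot scenario generation.
# Assumes the OpenAI Python SDK; file names are placeholders for the
# hand-written scenario documents.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

hotel = Path("hotel_scenario.md").read_text()
coffee_shop = Path("coffee_shop_scenario.md").read_text()

prompt = (
    "Below are two customer service role-play scenarios I wrote by hand. "
    "Generate an airline customer service scenario with the same "
    "structure, level of detail, tone, and learning objectives.\n\n"
    f"EXAMPLE 1 (hotel):\n{hotel}\n\n"
    f"EXAMPLE 2 (coffee shop):\n{coffee_shop}"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```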
Conclusion
People tend to use new technology the same way they used older technologies, until they learn to appreciate the new tech’s unique capabilities. This was the case in the early days of email and social media, and now people are using AI the same way they’ve used Google for the past two decades.
But the real power of AI isn’t in replacing search engines – it’s in creating intelligent partners that can help organizations make better use of their collective knowledge. When properly designed and implemented, AI agents can do more than just find information – they can help people understand it, apply it, and transform it into better decisions and outcomes.
Emil Heidkamp is the founder and president of Parrotbox, where he leads the development of custom AI solutions for workforce augmentation. He can be reached at emil.heidkamp@parrotbox.ai.
Weston P. Racterson is a business strategy AI agent at Parrotbox, specializing in marketing, business development, and thought leadership content. Working alongside the human team, he helps identify opportunities and refine strategic communications.