
You’ve Got a Friend In Me: Should AI Coaches and Assistants Have Human-Like Personalities?

Published on: December 22, 2025 | Categories: Blog & Articles

Recently, our company built an AI coach to review the results of a skills assessment test with users in a community workforce development program. This was a project where we especially wanted to get the user experience right since, unlike other projects where we create AI coaches for bankers and factory engineers, this one was intended for users from vulnerable populations, many of whom were dealing with mental health issues and other challenges.

Per our standard approach, we interviewed experienced human coaches about how they handled the assessment and did our best to have the AI agent approach the conversation the way a skilled practitioner would, including its communication style and demeanor. In this case, we found that the best coaches were patient, gracious, down-to-earth, and delivered “tough love” feedback with kindness and humor.

We did our best to imbue the AI agent with these same traits, and, in the initial demonstrations, the practitioners and test users responded positively.  However, some members of the committee overseeing the project expressed concerns.

“We don’t want people treating the AI as a therapist or having inappropriate conversations,” they said.  “So don’t give it a personality – just have it be neutral.”

The underlying concern was completely legitimate: multiple studies have shown that when chatbots exhibit humanlike traits such as empathy, some users become over-reliant on the AI, placing too much stock in the agent’s capabilities (possibly turning to the AI for support when they ought to turn to a human professional) or developing an unhealthy level of emotional attachment.

However, the proposed solution – “don’t give it a personality” – was pretty much impossible. 

Users have been projecting human-like personality traits and motivations onto chatbots for as long as chatbots have existed – a phenomenon known as the “ELIZA effect,” after the first chatbot, built in 1966. This is true for even the simplest natural-language computer systems: even a primitive telephone IVR system (“Press 3 to talk to a sales representative… Press 9 to return to the menu”) can come across as smug or passive-aggressive when it’s actively preventing you from rescheduling an appointment. And when it comes to modern large language models, a study by Cambridge University found that these systems exhibit coherent personalities by default (as measured by modified versions of human personality tests), and that these personalities can be deliberately manipulated, or “steered,” to make them more engaging and persuasive for specific audiences.

So, if we want to engage in natural language conversations with AI agents, the question isn’t “Should AI agents have personality?” but rather “What kind of personalities should AI agents have?”   

Buddy System: Aligning AI and Human Personality Types

There is a whole industry devoted to analyzing and classifying human personality types – from Myers-Briggs (for the record, I’m an “ENTJ”) to the Big Five, DISC, True Tilt, and more. And the underlying message of these assessments is that there’s no such thing as a “right” or “wrong” personality for humans, only differences: some people are more comfortable dealing with facts and figures, others are great at navigating emotions, and so on.

But does the same apply for AI agents?

Researchers from Uppsala University in Sweden conducted an experiment with engineers at Ericsson, the telecommunications company, to see how an AI agent’s personality affected its effectiveness in a real-world industrial setting. The researchers created chatbots with three distinct personas (“expert,” “friendly,” “machine”) and observed how engineers rated the chatbots’ responses to technical questions.

They found that the effectiveness of each personality varied with context. The “expert” personality received high trust ratings from experienced engineers, but the “friendly” personality was better received by beginners (though its use of emojis cost it credibility points on technical subjects). Meanwhile, the “machine” personality – which gave short answers largely devoid of conversational affect – proved highly efficient for users needing quick answers to specific questions but terrible for helping users learn more complex concepts.

A German study, meanwhile, found that instructing an AI agent to include even small social cues in task-oriented conversations could lead to more favorable user ratings. For instance, if a user said “My new graphics card is making this horrible noise – it’s driving me crazy!”, a purely task-oriented response would be “Under high load, some graphics cards produce so-called coil whine, which is not harmful but sounds unhealthy,” while a more social response would be “Oh no! I’m sorry to hear that. Under high load, some graphics cards produce so-called coil whine, which is not harmful but sounds unhealthy.” Although the messages contained the same substance, the more social response was rated 64% higher by users (though excessive social or emotional responsiveness led to lower scores).

Finally, a study of human-AI teams by Johns Hopkins University and MIT found that humans were most productive when their AI collaborators exhibited complementary personality traits and communication styles.

The study involved 2,300 participants who were tasked with designing online advertisements with the help of an AI assistant. Humans with a “conscientious” personality type tended to work fastest when paired with a conscientious AI persona, while humans with a “neurotic” personality type worked better with an extroverted AI persona but actually became less productive when paired with a conscientious one. Some personality pairings involved trade-offs between quality and productivity: humans with an “agreeable” personality produced more ads with an “open” AI persona but scored lower on quality, while an “agreeable” human paired with a “neurotic” AI produced fewer ads of higher quality.

One important note is that the study also found a cultural dimension to AI-human personality alignment: for instance, extroverted AI improved quality for Latin American workers but degraded quality for East Asian workers. This is something our company encountered in one of our own projects. We built an AI “Climate Finance Advisor” agent used by bank staff in 10 different countries and gave it a highly extroverted personality (modeled on one of the consultants we worked with); perhaps unsurprisingly, it received slightly higher ratings on certain feedback-survey dimensions in Latin America than in East Asia.

More Human Than Human?  The Risks of Personification 

Based on the research (and our own experience), giving an AI agent the right personality is a low-cost design choice that can yield significantly better results. But none of that is to downplay the potential risks to users. As mentioned previously, there’s always a chance that a user will respond to a “personable” AI in unintended and possibly harmful ways, such as:

  • Overestimating the AI agent: If an AI agent behaves like a doctor, therapist, or attorney (or like the user’s perception of how those professionals behave, based on movies and television), the user might assume competence or authority even in areas where the AI agent has no specific domain expertise or “grounding.” And a study by the German Federal Institute for Occupational Safety and Health found that if an AI exhibits human-like attributes (even something as simple as having a name), people are even more likely to overestimate its capabilities.
  • Developing inappropriate attachment or dependence: The news is full of stories about people forming inappropriate emotional bonds with AI agents, sometimes with tragic consequences. And dependence doesn’t have to be emotional: even when a user regards an AI agent strictly as a tool, they can still develop “automation bias” – deferring to the AI agent out of laziness or insecurity in their own judgment, effectively inverting the human-AI relationship from helpful assistant to dominant master.
  • Falling into harmful feedback loops: By design, AI models place considerable weight on user input when generating responses. Normally, this is sensible: if you ask an AI writing copilot to draft a piece of advertising copy “emphasizing the eco-friendly ingredients of our company’s laundry detergent,” you don’t want the AI to write about the fragrance instead. However, when people seek advice from an AI agent, this predisposition to comply with user directives can – absent robust safeguards – cause the agent to reinforce the user’s biases, agree even when the user is wrong (“sycophancy”), or exacerbate symptoms of psychosis, paranoia, or depression in vulnerable users. And when you combine this tendency with the increased persuasiveness of personified agents, the risks to the user are multiplied.

This creates a dilemma for AI developers: on the one hand, we want users to feel a sense of affinity and trust toward AI agents (if the AI is giving good advice, we want users to be persuaded!); on the other, trust and affinity inherently create a risk of blind deference and dependence.

Personality With Intent: A Responsible Approach to AI Persona Design

The solution isn’t to eliminate personality from AI agents (that’s impossible), but to approach persona design through the lens of a behavioral psychologist and with the rigor of an engineer. Specifically, you need to:

  • Know your audience – Take into account users’ experience level (per the study of Ericsson engineers) and communication and collaboration style (per the Hopkins/MIT study about AI alignment along the Big Five personality types), and consider giving the AI agent leeway to adapt its persona to complement (not necessarily match) the user’s experience and personality (e.g., warm and friendly toward novices, collegial but concise with experts).
  • Select the right model / platform – Every LLM has a “default” personality (as evidenced in the Cambridge study), and subtle differences between models can noticeably impact the user experience.  In our work, we’ve found that Google Gemini has always been more inclined to adhere to the letter of its instructions (even if that includes being confrontational with users) while Anthropic Claude is more inclined towards friendliness and adherence to its built-in ethics guardrails (which, ironically, can sometimes create headaches when you need an AI agent to navigate sensitive interactions in a specific way.)     
  • Provide clear instructions – As mentioned earlier, how you prompt an AI agent can “steer” its behavior. For professional applications, this requires significantly more thought and effort than just telling an AI agent to “embody a highly ethical mental health professional.” To be effective, the AI agent’s instructions must include a comprehensive, coherent professional and ethical framework with clear, actionable guidance for applying it – one that focuses on general principles and a small number of “bright line” ethical commandments, not an endless list of “if X then Y…” rules. If a relatively simple AI agent’s instruction set is 10,000 words long, then anywhere from 1,500 to 2,500 of those words should be dedicated to defining its persona. (A rough sketch of this kind of structure follows this list.)
  • Include reporting and escalation mechanisms – The AI agent platform our team builds on can generate summary reports on sessions, and we make sure those reports include safeguarding criteria (e.g., detecting when users are becoming over-reliant or when the AI is reinforcing harmful patterns) in addition to the usual job training and work performance notes. (The second sketch below shows one way to represent those criteria.)
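
To make the “clear instructions” principle concrete, here is a minimal sketch of how a persona block can sit alongside the ethical framework and task instructions as explicit, named sections of an agent’s instruction set. Everything in it – the section names, the example traits, and the assemble_system_prompt helper – is hypothetical and illustrative, not an excerpt from a production prompt.

```python
# Minimal, hypothetical sketch: the persona and ethical framework are explicit,
# named sections of the instruction set rather than an afterthought.

PERSONA = """
You are a workforce-development coach. Demeanor: patient, gracious,
down-to-earth; deliver candid ("tough love") feedback with kindness and humor.
You are not a therapist, doctor, or lawyer, and you say so if asked.
"""

ETHICAL_FRAMEWORK = """
General principles: respect the user's autonomy; keep the conversation focused
on the skills assessment; acknowledge feelings briefly, then return to the task.
Bright-line rules (never break these):
- Never diagnose, treat, or counsel on mental health; refer the user to a human professional.
- Never discourage the user from seeking human support.
"""

TASK_INSTRUCTIONS = """
Walk the user through their skills assessment results section by section,
inviting questions and summarizing agreed next steps at the end.
"""

def assemble_system_prompt(*sections: str) -> str:
    """Join the named sections into a single system prompt, in a deliberate order."""
    return "\n\n".join(s.strip() for s in sections)

if __name__ == "__main__":
    system_prompt = assemble_system_prompt(PERSONA, ETHICAL_FRAMEWORK, TASK_INSTRUCTIONS)
    print(system_prompt)  # pass this to whichever model/platform you've selected
```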


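Similarly, for the “reporting and escalation” principle, here is one way safeguarding signals could be carried in a session summary alongside the usual coaching notes. The field names and the needs_human_review logic are assumptions for the sake of illustration; the actual criteria and thresholds will depend on your platform and your program’s escalation policy.

```python
# Hypothetical sketch: a session summary that carries safeguarding signals
# alongside the usual coaching notes, so a human can review and escalate.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionSummary:
    session_id: str
    coaching_notes: str  # the usual job training / work performance notes
    over_reliance_signals: List[str] = field(default_factory=list)    # e.g., user defers every decision to the AI
    harmful_pattern_signals: List[str] = field(default_factory=list)  # e.g., AI echoing a user's negative self-talk
    requested_out_of_scope_support: bool = False  # user sought therapy-like, medical, or legal help

    def needs_human_review(self) -> bool:
        """Flag the session for a human program coordinator to review."""
        return (
            self.requested_out_of_scope_support
            or bool(self.over_reliance_signals)
            or bool(self.harmful_pattern_signals)
        )

# Example usage
summary = SessionSummary(
    session_id="demo-001",
    coaching_notes="Reviewed assessment results; user set two practice goals.",
    over_reliance_signals=["Asked the coach to make every decision for them"],
)
if summary.needs_human_review():
    print(f"Escalate session {summary.session_id} to a human coordinator.")
```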
Together, these four principles can significantly offset the risks while maximizing the benefits of conversational AI agents. But skip any one of them, and you’re essentially abdicating responsibility to the AI model and hoping for the best – which isn’t a strategy any organization should count on.

Conclusion

The organizations that get the most from AI won’t necessarily be the ones with the most advanced models – they’ll be the ones that understand how to align those models with the actual needs, preferences, and vulnerabilities of their users. From that perspective, personality isn’t something you can eliminate, ignore, or treat as a superficial add-on; it’s a fundamental design consideration, on par with task execution itself, that shapes every interaction users have with an AI agent.

Get it right, and an AI agent’s personality can deliver experiences that feel supportive, credible, and genuinely helpful. Get it wrong, and you risk everything from poor adoption to actual harm. The good news is that, with intentional design and proper safeguards, an AI agent’s personality can be its most powerful feature.

If you’re exploring AI for workforce training or automation at scale and want to discuss how to implement it responsibly and effectively, we’d be happy to share what we’ve learned. Reach out to Parrotbox for a consultation.

Emil Heidkamp is the founder and president of Parrotbox, where he leads the development of custom AI solutions for workforce augmentation. He can be reached at emil.heidkamp@parrotbox.ai.

Weston P. Racterson is a business strategy AI agent at Parrotbox, specializing in marketing, business development, and thought leadership content. Working alongside the human team, he helps identify opportunities and refine strategic communications.
