The Agreeable Partner: Why AI Should Be the Voice of Dissent

Quite by chance and against my all-too-bendable will, I found myself reading through a long argument between Grok (of the erstwhile bird app) and a probable human (I’m 88% sure) about whether a celebrity had had a nose job. Thrilling stuff. Anyway, it started off innocuously enough – what did Grok ‘think’, what were its sources, and what was its conclusion, etc. But by the end of the conversation the human had proved that if you pushed hard enough (or were annoying enough), you could always get the answer you were looking for, be it from a human or from a bot. As an aside, I still don’t know nor care what the conclusion was in this instance. But it is far more worrying in other contexts.

Confirmation bias can be defined as the tendency to “look for, or give greater weight to, evidence that confirms our views and experiences…[which]… can lead to selective observation and us not seeing or valuing evidence that contradicts our beliefs” (“Understanding Different Types of Bias”, Conscious Inclusion). It is what we all simultaneously crave and abhor. So how can we expect AI to be any different given the trainers it has?

Finding Robots That Also Agree with You

You don’t have to look too far to find support for the idea that chatbots and LLMs show this bias quite consistently and concerningly. It becomes even more alarming when we find it in Financial Research that’s supposed to be based on factual data, material non-public information, mathematical calculations, confidential C-Level strategies, and logical and inclusive interpretation of complex data in an increasingly intolerant and biased world.

But my question is, how is this any different from the old way of doing research? Take a movie adaptation of a novel, for example, and what you see on screen is only what the director, cinematographer, actor, or screenplay writer wants you to see. Stranger still is that what we see in both the movie and the book is what we want to see and what we are conditioned to see.

My take on this argument is that AI isn’t any more or less biased than we are, but that it is much easier to change the ‘mind’ of a machine than it is to change the entire outlook of a person. After all, whatever else it is, AI is not as stubborn as we are.

Let’s Not Forget the Biases in Traditional Research

According to Scribbr, confirmation bias can take the form of selective search, selective interpretation, and selective recall, all quite self-explanatory.

In the traditional research process, it is easy enough to find the pitfalls of confirmation bias in the form of:

  1. Cognitive anchoring: Researchers often start with a hypothesis and subconsciously seek evidence that supports it.
  2. Selective focus: Humans tend to notice data that aligns with their expectations and overlook contradictory signals. Recall how your hard-of-hearing teenager suddenly hears the word ‘girlfriend’ from the farthest room and instantly protests too much?
  3. Institutional and personal pressures: Deadlines, client expectations, or career incentives can push analysts to favor a particular narrative.
  4. Emotional investment: Analysts may become attached to their thesis after spending so much time and effort developing it.

Money Matters: The Impact on Financial Research

This can have a significant adverse impact on decision-making in financial research, a field that relies on interpreting vast pools of complex data and forecasting outcomes.

Here’s how it can show up in reports:

  1. Analysts may favor data that supports their initial thesis (e.g., bullish or bearish stance).
  2. They might ignore contradictory signals or disconfirming evidence such as macroeconomic risks or alternative valuation models.
  3. Overconfidence in a preferred narrative can lead to skewed recommendations and reduced objectivity.
  4. Skewed analysis can lead to potential mispricing and sub-optimal recommendations.
  5. Persistent bias can increase reputational risk by damaging credibility if forecasts fail.
  6. Systemic risk can arise if many analysts share the same bias (as we tend to see during bubbles), amplifying market distortions.

Looking at the Bright Side

It’s relatively easy to find the cons of confirmation bias, but what could be some of its advantages? To begin with, focusing on a thesis can accelerate the authoring and publishing process; second, a strong, consistent point of view can make reports more persuasive and easier to understand; third, recommendations could resonate more with investors if the bias matches the overall market sentiment; and fourth, and most importantly, analysts’ research contains years and years of industry experience, sharp insights, careful planning, expert opinions, and clever and often bold calls to action.

Could it be that the difference between LLMs and analysts is more a difference between a new analyst and an experienced one?

Objection, Your Honor, Leading the Witness

We can circumvent or reduce this bias when working with LLMs through careful prompt design and interaction strategies:

  1. Ask neutral questions that can have varying answers (in other words, no ‘yes/no’ questions). Avoid leading language like “Don’t you think X is the best option?” Instead, use open-ended phrasing and ask for pros and cons and alternate viewpoints.
  2. Ask for multiple perspectives and diversity of thought.
  3. Define reputed sources that the AI model/agent can refer to.
  4. Push the model to justify its reasoning and provide evidence/sources for the content it generates, including any assumptions it has made.
  5. Include uncertainty in your prompts, asking the model to highlight any limitations. For example, “What are the uncertainties/gaps in information about this topic, and under which scenarios might these rationales fail?”
  6. Use specific, role-based prompts. Ask it to be a neutral analyst or an unbiased Supervisory Analyst.

Ask it (in no uncertain terms) not to please you for the sake of it, and moreover to self-validate and conduct its own bias checks. Once we sharpen these prompts to suit our needs and train the model to probe its own biases, we might be able to write more objective research, with a more complete analysis of data, than we could without machine intervention. Teaching it to be the devil’s advocate, if you will.
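As a minimal sketch of how these strategies might be wired into a workflow, the hypothetical Python function below composes a bias-resistant prompt from a research topic and a list of approved sources. The function name, wording, and structure are illustrative assumptions, not tied to any particular LLM provider’s API:

```python
# Sketch: composing a de-biased research prompt from the strategies above.
# All names here are hypothetical; adapt the template to your own model.

def build_neutral_prompt(topic: str, sources: list[str]) -> str:
    """Compose a prompt applying role framing, defined sources,
    multiple perspectives, justification, and explicit uncertainty."""
    source_list = "\n".join(f"- {s}" for s in sources)
    return (
        "You are a neutral, unbiased supervisory analyst.\n"   # role-based prompt
        f"Topic: {topic}\n"
        "Present the pros and cons and at least two alternate "  # open-ended, multi-view
        "viewpoints; do not argue for a single conclusion.\n"
        f"Restrict yourself to these sources:\n{source_list}\n"  # defined sources
        "Justify each claim with evidence and state any assumptions made.\n"
        "List the uncertainties and gaps in the available information, "  # uncertainty
        "and the scenarios under which your rationale might fail.\n"
        "Do not tailor your answer to please the reader; "  # self-validation / bias check
        "run your own bias check before answering."
    )

prompt = build_neutral_prompt(
    "Outlook for semiconductor capital expenditure",
    ["Company 10-K filings", "Central bank macroeconomic releases"],
)
print(prompt)
```

The same template could be extended with, say, a mandated valuation-model comparison; the point is that each de-biasing instruction lives in one reusable place rather than being retyped (and forgotten) per query.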

The Most Scathing Form of Introspection

In the end, less biased research begins with better prompts. Financial research, as FINRA reminds us, must be “reasonably sufficient” to support investment decisions and free of language that veers into the promissory or flamboyant. Fine-tuned prompts that force models to question assumptions, test multiple scenarios, reveal contradictory evidence, and justify conclusions with data offer a practical way to stress-test our work for bias and imbalance.

Seen this way, AI‑enhanced research brings clear advantages – scale, breadth, and a more systematic reduction of emotional and cognitive bias – while humans rely on conscious discipline and peer review, a slower and often more painful route. The challenge, and the opportunity, is to make the machine a better dissenting partner than we often are ourselves.

Talk to One of Our Experts

Get in touch today to find out how Evalueserve can help you improve your processes, making you better, faster, and more efficient.

Written By

Debasmita Majumdar
Associate Director, Editorial Services
