
Attention Bias in AI-Driven Investing



The benefits of using artificial intelligence (AI) in investment management are obvious: faster processing, broader information coverage, and lower research costs. But there is a growing blind spot that investment professionals should not ignore.

Large language models (LLMs) increasingly influence how portfolio managers, analysts, researchers, quants, and even chief investment officers summarize information, generate ideas, and frame trade decisions. However, these tools learn from a financial information ecosystem that is itself highly skewed. Stocks that attract more media coverage, analyst attention, trading volume, and online discussion dominate the data on which AI is trained.

As a result, LLMs may systematically favor large, popular, highly liquid firms, not because fundamentals justify it, but because attention does. This introduces a new and largely unrecognized source of behavioral bias into modern investing: bias embedded in the technology itself.

AI Forecasts: A Mirror of Our Own Bias

LLMs gather information and learn from text: news articles, analyst commentary, online discussions, and financial reports. But the financial world does not generate text evenly across stocks. Some firms are discussed constantly, from multiple angles and by many voices, while others appear only occasionally. Large companies dominate analyst reports and media coverage while technology firms capture headlines. Highly traded stocks generate ongoing commentary, and meme stocks attract intense social media attention. When AI models learn from this environment, they absorb these asymmetries in coverage and discussion, which can then be reflected in forecasts and investment recommendations.

Recent research suggests exactly that. When prompted to forecast stock prices or issue buy/hold/sell recommendations, LLMs exhibit systematic preferences in their outputs, including latent biases related to firm size and sector exposure (Choi et al., 2025). For investors using AI as an input into trading decisions, this creates a subtle but real risk: portfolios may unintentionally tilt toward what is already crowded.

Indeed, Aghbabali, Chung, and Huh (2025) find evidence that this crowding is already underway: following ChatGPT’s release, investors increasingly trade in the same direction, suggesting that AI-assisted interpretation is driving convergence in beliefs rather than diversity of views.


Four Biases That May Be Hiding in Your AI Tool

Other recent work documents systematic biases in LLM-based financial analysis, including foreign bias in cross-border predictions (Cao, Wang, and Xiang, 2025) and sector and size biases in investment recommendations (Choi, Lopez-Lira, and Lee, 2025). Building on this emerging literature, four potential channels are especially relevant for investment practitioners:

1. Size bias: Large firms receive more analyst coverage and media attention, so LLMs have more textual information about them, which can translate into more confident, and often more optimistic, forecasts. Smaller firms, by contrast, may be treated conservatively simply because less information about them exists in the training data.

2. Sector bias: Technology and financial stocks dominate business news and online discussions. If AI models internalize this optimism, they may systematically assign higher expected returns or more favorable recommendations to these sectors, regardless of valuation or cycle risk.

3. Volume bias: Highly liquid stocks generate more trading commentary, news flow, and price discussion. AI models may implicitly prefer these names because they appear more frequently in training data.

4. Attention bias: Stocks with strong social media presence or high search activity tend to attract disproportionate investor attention. AI models trained on internet content may inherit this hype effect, reinforcing popularity rather than fundamentals.

These biases matter because they can distort both idea generation and risk allocation. If AI tools overweight familiar names, investors may unknowingly reduce diversification and overlook under-researched opportunities.
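To make this concrete, the sketch below shows one simple way to check whether an AI-generated idea list tilts toward large caps or crowded sectors relative to a reference universe. It is a minimal illustration, assuming you already have the AI's suggested tickers and a universe with market caps and sector labels; the tickers, numbers, and column names are hypothetical placeholders, not output from any particular model.

```python
import pandas as pd

# Hypothetical reference universe; tickers, caps, and sectors are placeholders.
universe = pd.DataFrame({
    "ticker":     ["AAA", "BBB", "CCC", "DDD", "EEE", "FFF"],
    "market_cap": [900e9, 450e9, 30e9, 8e9, 2e9, 1e9],          # USD
    "sector":     ["Tech", "Tech", "Financials", "Industrials",
                   "Industrials", "Materials"],
})
ai_ideas = ["AAA", "BBB", "CCC"]   # names surfaced by an AI tool (hypothetical)

picks = universe[universe["ticker"].isin(ai_ideas)]

# Size tilt: median market cap of the AI picks relative to the full universe.
size_tilt = picks["market_cap"].median() / universe["market_cap"].median()

# Sector tilt: AI-pick sector weights minus universe sector weights.
sector_tilt = (
    picks["sector"].value_counts(normalize=True)
    .subtract(universe["sector"].value_counts(normalize=True), fill_value=0)
    .sort_values(ascending=False)
)

print(f"Median market-cap ratio (picks vs. universe): {size_tilt:.1f}x")
print("Sector over/underweights vs. universe:")
print(sector_tilt.round(2))
```

A persistently high market-cap ratio or a large positive overweight in a single sector across many idea lists would be one warning sign that attention, rather than fundamentals, is shaping the output.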

How This Shows Up in Real Investment Workflows

Many professionals already integrate AI into daily workflows. Models summarize filings, extract key metrics, compare peers, and suggest preliminary recommendations. These efficiencies are valuable. But if AI consistently highlights large, liquid, or popular stocks, portfolios may gradually tilt toward crowded segments without anyone consciously making that choice.

Consider a small-cap industrial firm with improving margins and low analyst coverage. An AI tool trained on sparse online discussion may generate cautious language or weaker recommendations despite improving fundamentals. Meanwhile, a high-profile technology stock with heavy media presence may receive persistently optimistic framing even when valuation risk is rising. Over time, idea pipelines shaped by such outputs may narrow rather than broaden opportunity sets.

Related evidence suggests that AI-generated investment advice can increase portfolio concentration and risk by overweighting dominant sectors and popular assets (Winder et al., 2024). What appears efficient on the surface may quietly amplify herding behavior beneath it.

Accuracy Is Only Half the Story

Debates about AI in finance often focus on whether models can predict prices accurately. But bias introduces a different concern. Even if average forecast accuracy appears reasonable, errors may not be evenly distributed across the cross-section of stocks.

If AI systematically underestimates smaller or lower-attention firms, it may consistently miss potential alpha. If it overestimates highly visible firms, it may reinforce crowded trades or momentum traps.
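As a rough illustration of what unevenly distributed errors look like in practice, the sketch below averages signed forecast errors within market-cap buckets, so that optimism on large names and pessimism on small ones cannot hide behind a reasonable-looking overall average. The evaluation table, its column names, and its numbers are made up for illustration.

```python
import pandas as pd

# Hypothetical evaluation table: AI forecast vs. realized return per stock.
evals = pd.DataFrame({
    "ticker":      ["AAA", "BBB", "CCC", "DDD", "EEE", "FFF"],
    "market_cap":  [900e9, 450e9, 30e9, 8e9, 2e9, 1e9],
    "ai_forecast": [0.12, 0.10, 0.02, 0.01, -0.01, 0.00],   # forecast return
    "realized":    [0.06, 0.05, 0.05, 0.06, 0.07, 0.08],    # realized return
})

evals["error"] = evals["ai_forecast"] - evals["realized"]   # > 0 = too optimistic

# Average signed error by market-cap bucket: a flat overall mean can mask
# systematic optimism on large names and pessimism on small ones.
evals["size_bucket"] = pd.qcut(evals["market_cap"], 3,
                               labels=["small", "mid", "large"])
print(evals.groupby("size_bucket", observed=True)["error"].mean().round(3))
```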

The risk is not simply that AI gets some forecasts wrong. The risk is that it gets them wrong in predictable and concentrated ways — exactly the type of exposure professional investors seek to manage.

As AI tools move closer to front-line decision making, this distributional risk becomes increasingly relevant. Screening models that quietly encode attention bias can shape portfolio construction long before human judgment intervenes.

What Practitioners Can Do About It

Used thoughtfully, AI tools can significantly improve productivity and analytical breadth. The key is to treat them as inputs, not authorities. AI works best as a starting point — surfacing ideas, organizing information, and accelerating routine tasks — while final judgment, valuation discipline, and risk management remain firmly human-driven.

In practice, this means paying attention not just to what AI produces, but to patterns in its outputs. If AI-generated ideas repeatedly cluster around large-cap names, dominant sectors, or highly visible stocks, that clustering itself may be a signal of embedded bias rather than opportunity.

Periodically stress-testing AI outputs by expanding screens toward under-covered firms, less-followed sectors, or lower-attention segments can help ensure that efficiency gains do not come at the expense of diversification or differentiated insight.
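One lightweight version of such a stress test is sketched below: it computes a simple Herfindahl-Hirschman concentration measure over the sectors of an AI-generated idea list and flags under-covered names for a deliberate second pass. The idea list, the analyst_count column, and the coverage cutoff are illustrative assumptions, not a prescribed workflow.

```python
import pandas as pd

# Hypothetical idea list with sector labels and analyst coverage counts;
# in practice this would come from your own data pipeline.
ideas = pd.DataFrame({
    "ticker":        ["AAA", "BBB", "CCC", "DDD", "EEE"],
    "sector":        ["Tech", "Tech", "Tech", "Financials", "Industrials"],
    "analyst_count": [42, 38, 35, 12, 3],
})

def sector_hhi(df: pd.DataFrame) -> float:
    """Herfindahl-Hirschman index of sector weights (1.0 = fully concentrated)."""
    weights = df["sector"].value_counts(normalize=True)
    return float((weights ** 2).sum())

print(f"Sector HHI of AI-generated ideas: {sector_hhi(ideas):.2f}")

# Flag how much of the list sits in heavily covered names, then deliberately
# re-run screens on the under-covered remainder.
COVERAGE_CUTOFF = 10                         # illustrative threshold
under_covered = ideas[ideas["analyst_count"] < COVERAGE_CUTOFF]
share_crowded = 1 - len(under_covered) / len(ideas)
print(f"Share of ideas above the coverage cutoff: {share_crowded:.0%}")
print("Under-covered candidates worth a second look:")
print(under_covered[["ticker", "sector", "analyst_count"]])
```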

The real advantage will belong not to investment practitioners who use AI most aggressively, but to those who understand how its beliefs are formed, and where they reflect attention rather than economic reality.

