Illustration of a chess-playing robot. Photo: Grok
The "horizon effect" is an inherent limitation in artificial intelligence, particularly in games with a vast number of possibilities such as chess, xiangqi, or Go. Computers can only calculate a finite number of moves ahead, commonly referred to as "search depth". If a critical move, like a piece loss or checkmate, falls beyond this depth, the engine "fails to see" it, leading to a misjudgment of the position.
Search depth is typically measured in ply, representing the number of half-moves. For example, a move like knight f3 constitutes one ply. If White plays knight f3 and Black responds with knight c6, that counts as one full move, or two ply. Therefore, a search depth of 40 ply means the engine calculates 20 subsequent moves.
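To make the counting concrete, here is a minimal toy sketch of a depth-limited search over a hand-built game tree. The moves, tree shape, and evaluation numbers are all invented for illustration, not taken from any real engine; the point is only that each half-move consumes exactly one unit of depth.

```python
# A toy sketch of depth-limited search (illustrative only, not a real
# engine). Internal nodes map a move to the resulting position; leaves
# hold a made-up evaluation, in pawns, from the perspective of the side
# to move at that leaf.
TREE = {
    "Nf3": {"Nc6": 0.3, "e5": 0.1},
    "e4":  {"e5": 0.2, "c5": 0.0},
}

def negamax(node, depth):
    """Best score for the side to move, searching at most `depth` ply."""
    if not isinstance(node, dict):   # a leaf: return its static evaluation
        return node
    if depth == 0:                   # the horizon: stop and guess
        return 0.0                   # (a stand-in static evaluation)
    # Each recursive call consumes exactly one ply and flips the side
    # to move, hence the minus sign.
    return max(-negamax(child, depth - 1) for child in node.values())

# Searching 2 ply covers 1 full move: one half-move each by White and
# Black. By the same arithmetic, 40 ply covers 20 full moves.
print(negamax(TREE, 2))   # 0.1: White's best line here is Nf3, e5
```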
This is akin to a person gazing at the sea, seeing only the surface within their immediate view, unaware of a storm brewing beyond the horizon. Chess engines operate similarly. An engine might perceive its current position as safe, while a move further down the line leads to disaster.
Computer scientist Hans Berliner named this phenomenon in 1973, categorizing it into two types: negative horizon effect and positive horizon effect.
With the negative type, the engine attempts to delay an unavoidable loss through futile moves, such as sacrificing minor pieces to postpone the capture of a major one. With the positive type, conversely, the engine rushes into an action prematurely rather than waiting for a more opportune moment.
For instance, suppose an engine can see only six moves ahead and calculates that it will lose its queen on the sixth move, while a rook sacrifice would delay that queen loss until the eighth move. Unable to see as far as the eighth move, the engine may falsely believe the sacrifice averts the danger. In reality, it has only postponed defeat and weakened its position, eventually losing both the rook and the queen.
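The same misjudgment can be shown in a few lines of toy code. The piece values and ply numbers below are invented for illustration; with a six-ply horizon, the delaying rook sacrifice scores better than facing the queen loss directly.

```python
# A toy illustration with made-up numbers (not a real engine). Scores
# are in pawns, negative for the engine; each forced line is given as
# {ply at which material is lost: evaluation after that loss}.

HORIZON = 6   # the engine sees at most 6 ply ahead

# Line A: do nothing; the queen (about 9 pawns) falls on ply 6.
LINE_A = {6: -9}
# Line B: sacrifice a rook (about 5 pawns) on ply 2, delaying the queen
# loss until ply 8 -- one ply beyond the engine's horizon.
LINE_B = {2: -5, 8: -14}

def visible_score(line, horizon):
    """Evaluation at the deepest event the engine can still see."""
    seen = [score for ply, score in sorted(line.items()) if ply <= horizon]
    return seen[-1] if seen else 0

print(visible_score(LINE_A, HORIZON))   # -9: the queen loss is in view
print(visible_score(LINE_B, HORIZON))   # -5: queen loss hidden past ply 8
# The engine prefers line B (-5 beats -9), yet a search of 8+ ply would
# show line B is worse: both the rook and the queen are lost (-14).
```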
Humans often recognize such nuances through intuition and experience. For machines, however, misjudging a position is a natural outcome of their limited vision.
A position Stockfish evaluated as balanced, but one that White wins with a sequence beginning with knight to c2.
Even modern engines like Stockfish, with an Elo rating above 3,700, sometimes make similar errors, especially in "fortress" positions: setups in which one side erects a barrier so solid that the opponent cannot make progress despite holding a material advantage.
Computers often misjudge such positions. They typically favor the side with the more powerful pieces, based on their cumulative point value. In reality, however, the materially stronger side may maneuver for dozens of moves without finding a way to break through. The maximum search depth for the Stockfish build integrated into the Lichess platform is 99 ply, or just under 50 moves, and chess rules allow a draw to be claimed after 50 moves pass without a pawn move or a capture.
This explains why Stockfish can fail to solve fortress positions, even when one side has a significant material advantage, such as an extra rook.
To mitigate this effect, developers employ a technique called quiescence search. Rather than halting its analysis in a "turbulent" position, one involving captures, checks, or direct threats, the engine keeps calculating until it reaches a "quiet" state. Only then is the position statically evaluated, giving a more accurate score.
For example, if White has just captured Black's knight with their queen, the engine will not immediately score White as being up a knight. It will continue to examine whether Black can recapture the queen. Only after the entire sequence of captures and recaptures is exhausted will the engine evaluate the position.
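In code terms, the idea looks roughly like the self-contained sketch below. The Position class, the evaluation numbers, and the captures-only move list are simplified stand-ins of my own, not Stockfish's actual implementation, which is far more elaborate.

```python
# A minimal sketch of quiescence search on a toy position tree. Static
# evaluations (in pawns) are from the side to move's perspective; only
# capture replies are stored, since quiescence search extends the
# analysis through captures alone.

class Position:
    def __init__(self, static_eval, captures=()):
        self.static_eval = static_eval   # score if the search stopped here
        self.captures = list(captures)   # positions reachable by a capture

def quiescence(pos, alpha=float("-inf"), beta=float("inf")):
    # "Stand pat": the side to move may always decline further captures,
    # so its score is at least the static evaluation.
    stand_pat = pos.static_eval
    if stand_pat >= beta:
        return beta
    alpha = max(alpha, stand_pat)
    # Otherwise, keep searching captures until the position is quiet.
    for child in pos.captures:
        score = -quiescence(child, -beta, -alpha)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# The example from the text: White's queen has just taken a knight.
# Counting material alone, White looks up a knight, but Black can
# recapture the queen (knight = 3 pawns, queen = 9 pawns).
after_recapture = Position(static_eval=-6)        # White to move, down Q for N
after_qxn = Position(static_eval=-3,              # Black to move, down a N
                     captures=[after_recapture])

# The capture's real worth from White's point of view:
print(-quiescence(after_qxn))   # -6, not +3: the "won knight" is an illusion
```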
Despite this technique, the horizon effect cannot be entirely eliminated. Certain complex situations, such as perpetual check or fortress positions, can still cause the engine to experience temporary "blind spots".
The horizon effect also plagued Go programs. Before the advent of AlphaGo, programs that used search methods similar to those in chess often mistakenly concluded that dead groups of stones could be saved, simply because their capture lay beyond the search horizon.
Today, thanks to machine learning and deep learning, programs like AlphaZero and Leela Chess Zero have partially overcome this limitation. They "learn" strategic concepts in a way more akin to human players, rather than merely counting moves. Even these advanced programs, however, cannot solve every position, given the astronomical number of possibilities the game allows.
By Xuan Binh