To a large degree, I don't think this is possible.
As humans, we want an intuitive explanation. For instance, in chess: a move is good because it weakens certain squares, or because it allows a particular combination culminating in a pawn break.
Unfortunately, these notions arise from our human attempt to understand a complicated game by reasoning through abstractions over the game. Things like pawn structure, control over light and dark squares, and pressure on pinned pieces aren't fundamental components of chess — they're just patterns that help us understand and reason about complicated board positions.
An AI doesn't need or use these abstractions. At the end of the day, all of them can, will, and should be ignored for the sake of simply achieving a superior position on the board. And it seems very likely to me that moves at this deep a level can't be explained in terms of any abstraction more useful than, "because it evaluates better than all the other moves".
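To make that last point concrete, here is a minimal negamax sketch over an abstract game. The helper functions `legal_moves`, `apply`, and `evaluate` are hypothetical placeholders supplied by the caller, not any real engine's API. The point is that the search's entire justification for its chosen move is a single backed-up number; concepts like pawn structure or square control appear nowhere.

```python
def negamax(position, depth, legal_moves, apply, evaluate):
    """Return (best_score, best_move) from the side-to-move's viewpoint.

    The only "explanation" the search produces is best_score: the move
    that leads to the highest backed-up evaluation wins, full stop.
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position), None
    best_score, best_move = float("-inf"), None
    for move in moves:
        # Recurse into the resulting position; negate because the
        # opponent's best outcome is our worst.
        score, _ = negamax(apply(position, move), depth - 1,
                           legal_moves, apply, evaluate)
        score = -score
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move
```

Plugged into even a toy game (say, a take-1-or-2 Nim pile where taking the last object wins), the search correctly finds winning moves, yet the only answer it can give to "why this move?" is the score it returned.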