Oh, and since it came up, and I like talking about these things, it's actually a huge misconception that modern engines got their evaluations from human masters and the distilled wisdom of human practice.
The strongest engines look at some of the same board features as humans do, but the main reason modern engines are so strong is the massive testing and tuning at super fast time controls.
For a long time humans (incorrectly) assumed that the way to make engines play chess well was to figure out what GMs did, and then translate that for the computer.
Eventually some brave souls thought "Why make engines think like humans? Why not just make small changes, and test that change by playing tens of thousands of ultra fast games against itself?", and then things were off to the races.
Even Komodo, which is about as close as you get to a top engine whose evaluations come straight from a human player, modifies and tunes the values based on massive testing.
That's just the way it works. Humans aren't very good at predicting what will work for a computer, so instead we just try an idea, see if it gains or loses a couple of Elo points, and repeat.
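To make that "couple of Elo points" part concrete, here's a rough sketch (in Python, not anyone's actual test harness) of the arithmetic behind it: you play a big match between the old and new version at fast time controls, then convert the score into an estimated Elo difference using the standard logistic rating model.

```python
import math

def elo_diff(wins: int, losses: int, draws: int) -> float:
    """Estimate the Elo gain of the new version from match results.

    Uses the standard logistic model: score = 1 / (1 + 10^(-diff/400)).
    (A real framework would also report error bars or run an SPRT,
    since a few Elo can easily hide inside statistical noise.)
    """
    games = wins + losses + draws
    score = (wins + 0.5 * draws) / games           # fraction of points scored
    return -400.0 * math.log10(1.0 / score - 1.0)  # invert the logistic formula

# Hypothetical example: 10,200 wins, 9,800 losses, 20,000 draws
# out of 40,000 ultra-fast games between new and old versions.
print(round(elo_diff(10_200, 9_800, 20_000), 2))   # ~ +3.5 Elo
```

The point is just that the measurement is brute-force and statistical: nobody asks whether the change "makes sense" to a human, they ask whether the score over tens of thousands of games moves the Elo estimate up or down.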
Anyway, it's not surprising that the same things that work for computers don't work for humans, and vice versa. Machines still don't do all that well at walking, but boy can they roll :)
Stockfish's regular chess evaluation weights, for example, have basically no ties to human play. The engine is just a highly tuned and tested bit of software, thanks especially to its distributed testing framework (Fishtest).
Off my soap box now :)