Stockfish 16.1 (stockfishchess.org)
33 points by jonbaer on Feb 24, 2024 | 12 comments


Complete removal of the handcrafted evaluation (HCE) is a pretty big milestone.

The original strength of Stockfish was closer to a Type B strategy (per Shannon), as opposed to the Type A (brute force) approach of Deep Blue.

That is, Stockfish evaluated relatively few positions per second compared to brute forcers (like Crafty, Fritz, etc.).

This was offset by having the best evaluation heuristics (basically crowdsourced human GM/IM/FM knowledge).
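
To make that concrete: a handcrafted eval is roughly a sum of weighted, human-chosen features. A toy sketch in Python (using the python-chess library; the material values and pawn bonus here are illustrative placeholders, not Stockfish's actual terms):

    import chess

    # Illustrative material values in centipawns; not Stockfish's tuned terms.
    PIECE_VALUES = {
        chess.PAWN: 100, chess.KNIGHT: 320, chess.BISHOP: 330,
        chess.ROOK: 500, chess.QUEEN: 900, chess.KING: 0,
    }

    def handcrafted_eval(board: chess.Board) -> int:
        """Toy HCE: material plus a small bonus for advanced pawns.
        Positive scores favor White."""
        score = 0
        for square, piece in board.piece_map().items():
            value = PIECE_VALUES[piece.piece_type]
            if piece.piece_type == chess.PAWN:
                # Crude positional heuristic: reward pawns nearer promotion.
                rank = chess.square_rank(square)
                value += 5 * (rank if piece.color == chess.WHITE else 7 - rank)
            score += value if piece.color == chess.WHITE else -value
        return score

    print(handcrafted_eval(chess.Board()))  # 0: the starting position is symmetric

A real HCE has hundreds of such terms (king safety, mobility, pawn structure, ...), each hand-tuned — which is exactly where the exploitable holes came from.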

As an FM I could "exploit" Fritzes and Crafties from 1995-2005 by using holes in their eval.

Tim Krabbé provides some examples from that era: https://timkr.home.xs4all.nl/chess2/honor.htm

With Stockfish, its eval was always top notch (comparable to a GM's) and constantly improving.

Obviously, Stockfish was always a few orders of magnitude faster than a human.


I’m curious about the comparison between NNUE and the transformer-based model that DeepMind announced a couple of weeks ago (https://arxiv.org/pdf/2402.04494.pdf). Using NNUE only (i.e. depth 1 search) would be directly comparable. If DeepMind’s model is better, it raises interesting questions about scaling laws for this kind of thing.
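
Concretely, "NNUE only (depth 1)" means something like the sketch below: score the position after every legal move and pick the best, with no deeper lookahead. value_of is a hypothetical stand-in for whichever network is being tested (python-chess assumed; a toy material count serves as the placeholder):

    import chess

    PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                    chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

    def value_of(board: chess.Board) -> float:
        """Placeholder value function (material count from the side to
        move's perspective); a real test would call the network here."""
        score = 0
        for piece in board.piece_map().values():
            v = PIECE_VALUES[piece.piece_type]
            score += v if piece.color == board.turn else -v
        return float(score)

    def depth1_move(board: chess.Board) -> chess.Move:
        """Depth-1 policy: score the position after each legal move and
        pick the best, with no deeper lookahead."""
        def score(move: chess.Move) -> float:
            board.push(move)
            s = -value_of(board)  # child is scored with the opponent to move
            board.pop()
            return s
        return max(list(board.legal_moves), key=score)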


It's crazy to me how it keeps getting better, when Stockfish was already destroying the best players in the world at version 1.


'This release marks the removal of the traditional handcrafted evaluation and the transition to a fully neural network-based approach.'


Like AlphaGo Zero, but for chess.


Stockfish still runs a full alpha-beta tree search, so it’s not really the same.


Can Stockfish (with small modifications) be used for other games now? There are a few decent open source AlphaZero implementations and I wonder how it would compare.


Depends on what games and what you define as “small” modifications. There are a lot of game-specific tweaks in the search code.

You can still apply alpha-beta tree search to lots of games, though — see the sketch below.
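
The core algorithm is game-agnostic. A rough negamax alpha-beta sketch, where legal_moves, apply_move, and evaluate are hypothetical game-specific hooks you'd supply:

    from math import inf

    def alphabeta(state, depth, alpha=-inf, beta=inf, *,
                  legal_moves, apply_move, evaluate):
        """Negamax alpha-beta over any two-player zero-sum game. The three
        hooks are game-specific; `evaluate` must score a state from the
        side to move's perspective."""
        moves = legal_moves(state)
        if depth == 0 or not moves:
            return evaluate(state)
        best = -inf
        for move in moves:
            score = -alphabeta(apply_move(state, move), depth - 1,
                               -beta, -alpha, legal_moves=legal_moves,
                               apply_move=apply_move, evaluate=evaluate)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:
                break  # beta cutoff: the opponent will avoid this line
        return best

The hard part isn't this loop — it's everything Stockfish layers on top of it (move ordering, pruning heuristics, transposition tables), much of which is chess-specific.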


NNUE is the interesting part to me. Alpha-beta tree search is useless without a good value function. Not sure what would be the best way to generate the training data if you're starting from scratch.


One way is through self-play: pit two value functions against each other to determine which is better, and iterate, gradually learning a stronger value function.
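
A rough sketch of turning self-play into labeled training data (python-chess assumed; the greedy depth-1 policy and the exploration rate are illustrative choices, not a prescribed method):

    import chess
    import random

    def score_after(board: chess.Board, move: chess.Move, value_fn) -> float:
        """Score `move` for the mover; `value_fn` scores a position for
        its side to move, hence the negation of the child's score."""
        board.push(move)
        score = -value_fn(board)
        board.pop()
        return score

    def self_play_game(value_fn, explore=0.1):
        """Play one game greedily with the current value function, with a
        little random exploration, and return (FEN, outcome) pairs."""
        board = chess.Board()
        fens = []
        while not board.is_game_over(claim_draw=True):
            fens.append(board.fen())
            moves = list(board.legal_moves)
            if random.random() < explore:
                move = random.choice(moves)  # exploration keeps data diverse
            else:
                move = max(moves, key=lambda m: score_after(board, m, value_fn))
            board.push(move)
        # Label every visited position with the final result, White's view.
        outcome = {"1-0": 1.0, "0-1": -1.0}.get(board.result(claim_draw=True), 0.0)
        return [(fen, outcome) for fen in fens]

Fit the network to those labels, regenerate games with the improved network, and repeat; pitting old against new each iteration is how you check it actually got stronger.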


Now that it uses an NN only and does away with search, does it use more or less computing resources? Also, does it suffer from the "Swiss cheese" problem the way Go engines do? As I understand it, people could look for weaknesses in Go engines by finding lines the engine hadn't explored during self-play, where its accuracy would plummet badly enough that humans could beat it.


It doesn't do away with search; only the handcrafted evaluation was removed.



