AlphaZero is easily the strongest chess program ever created, and what makes it even more remarkable is that it plays and learns chess more like a human. It also examines only about 80,000 positions per second, compared to Stockfish's 300,000,000+. Leelenstein has not yet faced Leela Chess Zero in a TCEC superfinal for a definitive result, but in general it will probably be slightly stronger on average. Leelenstein is a derivative of Leela Chess Zero: it uses mostly the same source code and starts from the pure Zero-trained network of Lc0, but its training adds some non-Zero knowledge to increase its strength. It appears that neural-network chess engines continue to improve. The strongest 20b nets are the Leelenstein ones listed above, but these aren't trained purely on Lc0 data. The best 20b net trained only on Lc0 data is 256x20-t40-1541.pb.gz from Sergio Vieri's repository.
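If you want to try such a net yourself, Lc0 loads a network file through its --weights option. A minimal sketch, assuming the file has already been downloaded into the working directory and that the lc0 binary is on your PATH (the file name is the net mentioned above; everything else is a placeholder):

    lc0 benchmark --weights=256x20-t40-1541.pb.gz
    lc0 --weights=256x20-t40-1541.pb.gz

The first command runs Lc0's built-in benchmark to check speed and evaluation with that net; the second simply starts the engine in UCI mode so a GUI or match tool can use it.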
Leelenstein 11.1 Chess Engine
Leelenstein 11.1 Chess Engine Download
Leelenstein Chess Engines
private website for chessengine-tests

Latest Website-News (2020/12/05): NN-testrun of Lc0 0.26.3 J96-28 finished. See the result and download the games in the 'NN vs SF testing' section. I did a huge experiment (3x 7000 games!) with the Eman 6.60 learning feature. See the results in the 'Experiments' section. Release of my new Unbalanced Human Openings. Learn more in the 'Unbalanced Human Openings' section and download them right here.

At the moment I am working on a new V2.00 of the UHO openings, which means that (nearly) all 400,000 end positions of the raw database are re-evaluated by KomodoDragon 1.0 (15 seconds per position on a quad-core PC). V1.00 was evaluated by Komodo 14. Because KomodoDragon is around +200 Elo stronger and its NNUE net boosts the positional understanding even more, the new evaluation promises much better and more valid results. It is not yet clear whether this will also lead to better results for the UHO openings in testing, but I hope so. If the testing results of UHO V2.00 are better than those of UHO V1.00 and no unexpected problems appear, UHO V2.00 will be released in Q1/2021 - I need around 2 months for the evaluation of all end positions, plus a lot of pre-tests to find the eval interval that gives the best results, and then the final test runs have to be done. A lot of work! Stay tuned.

Stockfish testing - playing conditions:
Hardware: Since 20/07/21, an AMD Ryzen 3900 12-core (24 threads) notebook with 32GB RAM. Now 20 games are played simultaneously (!), so each test run has 6000 or 7000 games (instead of 5000 before) and takes only 2 days instead of 6-7 days as before. All engine binaries are now popcount/avx2 builds, because bmi2 compiles are extremely slow on AMD. To keep the rating-list engine names consistent, the 'bmi2' or 'pext' extension in the engine name is still used for older engines - otherwise ORDO would not count all games played by such an engine as belonging to one engine.
Speed: (single thread, TurboBoost switched off, chess starting position) Stockfish: 1.3 Mn/s, Komodo: 1.1 Mn/s
Hash: 256MB per engine
GUI: cutechess-cli (the GUI ends the game when a 5-piece endgame is on the board)
Tablebases: none for the engines, 5-piece Syzygy for cutechess-cli
Openings: HERT_500 test set (by Thomas Zipproth) (download the file in the 'Download & Links' section or here)
Ponder, Large Memory Pages & learning: off
Thinking time: 180s + 1000ms increment (i.e. 3 minutes + 1 second) per game/engine (average game duration: around 7.5 minutes). One 7000-game test run takes about 2 days.

The version numbers of the Stockfish engines are the date of the latest patch included in the Stockfish source code, written backwards (year, month, day), not the release date of the engine file (example: 200807 = August 7, 2020). The SF compile used is the AVX2 compile, which is the fastest on my AMD Ryzen CPU. SF binaries are taken from abrok.eu (except the official SF release versions, which are taken from the official Stockfish website).

Download BrainFish (and the Cerebellum libraries): here

To avoid distortions in the Ordo Elo calculation, only 3x Stockfish (the latest official release + the latest 2 dev versions) and 1x BrainFish are stored in the gamebase from now on (all games of older engine versions are deleted each time a new version has been tested). Older Elo results of Stockfish and BrainFish can still be seen in the Elo diagrams below. BrainFish always plays with the latest Cerebellum libraries, of course, because otherwise BrainFish = Stockfish.
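For readers who want to reproduce a roughly comparable setup, a cutechess-cli call along the following lines matches the conditions described above. This is only a sketch: the engine commands, the Syzygy path and the opening-file location are placeholders, and 2 games per round x 3500 rounds gives a 7000-game test run like the ones mentioned above.

    cutechess-cli \
      -engine cmd=./stockfish name="Stockfish 201126 avx2" \
      -engine cmd=./komodo name="Komodo" \
      -each proto=uci tc=180+1 option.Hash=256 \
      -openings file=HERT_500.pgn format=pgn order=sequential -repeat \
      -tb /path/to/syzygy -tbpieces 5 \
      -concurrency 20 -games 2 -rounds 3500 \
      -pgnout testrun.pgn

The -tb/-tbpieces options let cutechess-cli adjudicate games with 5-piece Syzygy tablebases (the engines themselves get none), and -repeat plays each opening with colors reversed.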
Latest update: 2020/11/29: Stockfish 201126 avx2 (-1 Elo to Stockfish 201115) (Ordo calculation fixed to Stockfish 12 = 3684 Elo). See the individual statistics of engine results here. Download the current gamebase here.

  Program                    Elo    +   -   Games   Score    Av.Op.   Draws
1 CFish 12 3xCerebellum      3726   8   8   7000    86.1 %   3389     27.3 %

The version numbers of the engines (180622, for example) are the date of the latest patch included in the Stockfish source code, not the release date of the engine file. Especially the asmFish engines are often released much later!

Below you find a diagram of the progress of Stockfish in my tests since August 2020, and below that diagram, the older diagrams. You can save the diagrams (as a JPG picture in original size) on your PC with a right-click and 'save image'. The Elo ratings of older Stockfish dev versions in the Ordo calculation can differ slightly from the Elo 'dots' in the diagram: the games of new Stockfish dev versions, once they become part of the Ordo calculation, can change the Elo ratings of the opponent engines, and that in turn can change the Elo ratings of older Stockfish dev versions (in the Ordo calculation / rating list, but not in the diagram, where each Elo 'dot' is the rating of one Stockfish dev version at the moment its test run was finished).
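The fixed anchor mentioned in the update above (Stockfish 12 = 3684 Elo) corresponds to an Ordo invocation along these lines - a sketch only, where the PGN and output file names are placeholders and the anchor name has to match the engine name in the gamebase exactly:

    ordo -p gamebase.pgn -A "Stockfish 12" -a 3684 -o ratinglist.txt

Here -p reads the games, -A names the anchor engine, -a fixes its rating at 3684, and -o writes the resulting rating list.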