
Remove the rule, update the book

I have removed the rule which stated that the first piece in a move cannot return to its initial position, regardless of whether it was the only piece in a move, the first in a cascade, or in divergence. Since this is a major change I'm also updating the book; other changes include:

  • clarified double checkmate, added notation and status grammar,
  • reworded Starchild intro,
  • clarified castling notation, fixed grammar,
  • clarified optional notation, fixed outro,
  • reorganized sections, 
  • fixed formatting in tables, 
  • other tiny fixes.

If the rule stayed, it would actually mean that all pieces would have to be uniquely tracked, which would mean that each piece would have to be assigned a unique number at the start of a match, and then its type and tag would have to be fetched from a look-up table everywhere else. This would lead to a major shift of the underlying API in the library; but more concerning, also to a paradigm shift, where the chessboard wouldn't host pieces anymore, but their unique identifiers. This change would be even more of a tectonic shift than the previous one; doable, but it got me thinking: is it really worth it, why did I impose such a rule in the first place, and so on.
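For illustration, the rejected design would look roughly like this in C; the names here (PieceId, piece_types, type_at, ...) are made up for the sketch, not the library's actual API:

    #include <stdint.h>

    /* Hypothetical sketch: every piece gets a unique identifier at the start
       of a match, and its type and tag are fetched from look-up tables
       instead of being stored on the chessboard itself. */

    typedef uint8_t PieceId;                /* 190 pieces in the largest variant; id 0 = empty field */

    enum PieceType { PT_NONE, PT_PAWN, PT_ROOK /* , ... */ };
    enum TagType   { TT_NONE, TT_CAN_RUSH, TT_CAN_CASTLE /* , ... */ };

    static enum PieceType piece_types[191]; /* filled once, at the start of a match */
    static enum TagType   piece_tags[191];  /* updated as tags are gained or lost   */

    /* The chessboard would then host identifiers, not pieces... */
    static PieceId board[26][26];

    /* ...and every query would have to go through the look-up tables. */
    static enum PieceType type_at(int rank, int file) {
        return piece_types[board[rank][file]];
    }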

For one, it wasn't a movement rule per se (belonging to any particular piece), but a restriction imposed on the movement rules of all pieces. As such, it doesn't belong in the book, but among other FIDE or tournament rules. More specifically, any piece starting a move cannot do anything more than initiate a permutation of Waves, if that starting piece is about to return to its initial position in the same move. What I overlooked is that a permutation necessarily leads to repetition of positions, and that is already handled by FIDE; for instance, see point 9.2 in the FIDE Handbook.

Another reason for the removal of the rule is physicality; all pieces of the same color and type look the same; this is especially true if displayed on-screen, on a web site, or in a computer game. And yet, the rule called for recognizing (and accounting for) that one specific e.g. dark Rook which started a cascade vs. the other(s) which did not. Of course, computers can be programmed to distinguish those easily; however, a player would have a hard time tracking which dark Rook is which, especially in a long-winded cascade.

In short, the above rule would be difficult to implement, it goes against the physical reality of the game it tries to regulate, and a better, easier alternative is already available; so, no wonder it has been removed.

Anyway, the book was compiled on June 20, 2025, the version is 20250620.014531, and it can be found behind that juicy red button above, or in the other usual places.

Guessing game

Merging pieces and tags, mentioned in a previous post, has gone much smoother than I expected, and it's finished now. I also wrote in that post my estimates of how long a game in the largest variant could take. In the meantime I have revised my estimates, and so I present you with bigger, better numbers. Again, these are very rough guesstimates; it's difficult to even assess how much the numbers presented here could deviate from real-world matches; in short, they shouldn't be taken too seriously.

Estimates here are based on the fact that most large, complex systems settle their metrics somewhere in the middle, since most extremes tend to cancel each other out. For this analysis we'll be comparing easy-to-calculate metrics such as:

  • size of the chessboards,
  • count of all pieces in a variant,
  • number of different types of pieces,
  • number of possible interactions,
and will be comparing the One variant against Classical Chess. For each metric we'll calculate its contribution to overall complexity, then multiply them all together, since they are all (mostly) independent; although, the total number of pieces is always larger than the number of different piece types.

All complexity factors have the same form, so let's define a generic complexity factor cf as a simple ratio between the new (n) and old (m) metric, like so:

cf = n / m

This definition is all well and good, but it applies only if the corresponding metric contributes to the system's complexity directly, in a linear fashion. Most of the time, this is not the case. For instance, adding two ranks and files to a classical chessboard is a much larger change (as a percentage) than adding the same to the second largest variant, Discovery. So, each increase in a metric yields a diminishing increase in complexity; for such a non-linear increase there is a function which sets limits to growth, and that's the natural logarithm (ln):

cf = ln(n / m)

There is still a small issue to solve here, before calculations can take place. Observe what happens if we compare e.g. Classical Chess with itself:

cf = ln(m / m) = ln(1) = 0

Our complexity factor cf drops to 0. This is actually fine; all that calculation is showing us is that there is no additional complexity when comparing a variant to itself. Still, we'd like to multiply our complexity factors together, as independent variables should be. So, we'll add 1 to the formula, like so:

cf = 1 + ln(n / m)

Now that we have the generic formula for complexity factors sorted out, we can actually calculate something; let's start by comparing the sizes of the chessboards:

cf_size = 1 + ln(26² / 8²) = 1 + ln(676 / 64) = 3.357310

Next, we can compare the total number of pieces on the chessboards:

cf_pieces = 1 + ln(190 / 32) = 2.781288

Another comparison is between the number of different types of pieces:

cf_types = 1 + ln(18 / 6) = 2.098612

Finally, we can compare the number of different interactions:

cf_interactions = 1 + ln(19 / 3) = 2.845827

Our complexity c is then defined as the product of all the factors calculated above:

c_regular = cf_size × cf_pieces × cf_types × cf_interactions = 55.767104

This is the actual length scaling factor for regular games from Classical Chess to the One variant, i.e. all games should be 55.767 times longer. For instance, the longest recorded tournament game was 538 moves (269 FIDE moves, a.k.a. cycles), which turns into 30002 moves (15001 cycles) in the One variant. An average on-line match lasts about 80 moves (40 cycles); in the One variant that would become 4461 moves (2230.5 cycles). An average tournament match lasts about 88 moves (44 cycles), which becomes 4907 moves (2453.5 cycles).
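As a sanity check, the whole calculation above fits into a few lines of C; this is just a back-of-the-envelope reproduction of the figures, not code from the library (values are truncated to whole moves, as above):

    #include <math.h>
    #include <stdio.h>

    /* Dampened complexity factor: cf = 1 + ln(n / m). */
    static double cf(double n, double m) { return 1.0 + log(n / m); }

    int main(void) {
        double c_regular = cf(26 * 26, 8 * 8)  /* chessboard sizes    */
                         * cf(190, 32)         /* total piece counts  */
                         * cf(18, 6)           /* piece types         */
                         * cf(19, 3);          /* interactions        */

        printf("c_regular = %f\n", c_regular);                            /* ~55.767 */
        printf("longest recorded: %.0f moves\n", floor(538 * c_regular)); /* 30002   */
        printf("average on-line:  %.0f moves\n", floor( 80 * c_regular)); /* 4461    */
        printf("average tourney:  %.0f moves\n", floor( 88 * c_regular)); /* 4907    */
        return 0;
    }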

The same, however, does not apply when calculating the maximal possible game length, because players will try to maximize each and every available metric to prolong the game. This can be seen in Classical Chess alone; the longest possible game with the 50-cycle rule is 11797 moves (5898.5 cycles), while with the 75-cycle rule it's 17697 moves (8848.5 cycles). If we calculate the ratio between the two rules:

75 / 50 = 1.5

and between the game lengths, we can see that the contribution of the rule extension to game length was almost perfectly linear (a 50% increase in the movement rule resulted in a 50% longer game):

17697 / 11797 = 1.500127151

So, for maximum game length we have to calculate linear scaling factors; for chessboard sizes the factor becomes:

cf_size = 26² / 8² = 676 / 64 = 10.562500

Next, for the total number of pieces on the chessboards we have:

cf_pieces = 190 / 32 = 5.937500

For the number of different types of pieces we get:

cf_types = 18 / 6 = 3.000000

Finally, we can calculate the factor for the number of different interactions:

cf_interactions = 19 / 3 = 6.333333

Taken together, our complexity c becomes:

c_longest = cf_size × cf_pieces × cf_types × cf_interactions = 1191.582031

This is the scaling factor for the longest possible games, i.e. the longest games should be 1191.58 times longer in the One variant compared to Classical Chess. For instance, the previously mentioned 11797-move (5898.5-cycle) game, the longest possible with the 50-cycle rule, becomes a 14,057,093-move (7,028,546.5-cycle) game in the One variant. The even longer 17697-move (8848.5-cycle) game with the 75-cycle rule turns into a 21,087,427-move (10,543,713.5-cycle) game.
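The same kind of back-of-the-envelope check works for the linear factors (again just a sketch, not library code):

    #include <stdio.h>

    int main(void) {
        /* Un-dampened, linear ratios between the One variant and Classical Chess. */
        double c_longest = (676.0 / 64.0)   /* chessboard sizes    */
                         * (190.0 / 32.0)   /* total piece counts  */
                         * ( 18.0 /  6.0)   /* piece types         */
                         * ( 19.0 /  3.0);  /* interactions        */

        printf("c_longest = %f\n", c_longest);                     /* 1191.582031 */
        printf("50-cycle rule: %.0f moves\n", 11797 * c_longest);  /* ~14,057,093 */
        printf("75-cycle rule: %.0f moves\n", 17697 * c_longest);  /* ~21,087,427 */
        return 0;
    }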

These are all very large numbers; even a "just" 55.767 times increase in complexity results in some ludicrous (as in, almost completely impractical) estimates. For instance, the increase in complexity should also translate into a longer time allowance per turn, simply because there is so much more stuff a player has to handle. So, 15 seconds per player's turn in a bullet game (10 minutes, spread over an average of 40 turns per match, see https://en.wikipedia.org/wiki/Time_control#Classification) now becomes approx. 13.941776 minutes per turn; given that the average One game length would be 4461 turns, it would last for approx. 1036.5710456 hours of gameplay time; or, 129.5713807 days if we assume 8 hours of gameplay per day. If we don't increase the time allowance per turn, a 15-second-per-turn bullet game would take on average 18.5875 hours of gameplay time, or 2.3234375 days with the same 8 hours of gameplay per day.
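For completeness, here is the arithmetic behind those time estimates, again as a rough sketch:

    #include <stdio.h>

    int main(void) {
        double c_regular  = 55.767104; /* regular-game scaling factor from above */
        double bullet_sec = 15.0;      /* seconds per turn in a bullet game      */
        double avg_turns  = 4461.0;    /* average One game length, in turns      */

        double turn_min = bullet_sec * c_regular / 60.0;     /* ~13.94 minutes per turn */
        double hours    = turn_min * avg_turns / 60.0;       /* ~1036.57 hours          */
        printf("scaled:   %.6f min/turn, %.2f h, %.2f days\n",
               turn_min, hours, hours / 8.0);                /* ~129.57 8-hour days     */

        double flat_hours = bullet_sec * avg_turns / 3600.0; /* ~18.59 hours            */
        printf("unscaled: %.4f h, %.4f days\n", flat_hours, flat_hours / 8.0);
        return 0;
    }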

And that's just bullet; let's not talk about classical time controls here, those numbers would be depressingly huge. One thing that is sorely missing from this complexity estimate is mobility and, associated with it, piece powers. These are not easily estimated; also, an increase in mobility alone does not add to complexity, rather it's the ratio between mobility and available space (i.e. chessboard size) that does, and I'm not sure that ratio has increased by much. Anyway, this post is already too long, so I'll leave it for some future post.

Tectonic shift

It's unusual to have the very base data model change this late in a project; and yet, here we are. Up to this point, I was implementing generators of all legal paths every piece can make in a ply. Since this change will disrupt development for quite some time, let me try to explain it.

The issue I'm talking about is that piece and tag enumerations are currently separate; which would be fine in almost all designs, since most of the time one data entity is indeed independent of any other, e.g. one person's year of birth is quite separate from their phone number. This is not the case here: a Bishop can never get the "can rush" tag, nor can a Knight ever castle.
Edit: Tags and pieces have very few, and very intimate, relationships; usually a tag can be attached to only a few pieces (e.g. the rush and en passant tags for privates). Technically, a tag is a link between a piece and the field on which that piece is located. Once that link is broken (e.g. a Pawn has been activated and moved away), there is no more tag present (e.g. the activated Pawn has lost its promotion tag), even if both the piece and its originating field are still present in the game. Most of the time the best data model is the one designed after real life; this is why the original design featured tags separate from pieces.

Another issue with separate piece and tag enumerations is that, well, they are separate. So, every function working with pieces (and in a chess game that means all of them) has to have two separate parameters. Yes, you can combine them in a neat struct, together with some other bits (e.g. a position) to bring the number of parameters down; but they are both now present in every struct you have to pass around.

Plus, they still take too much space, in fact twice as much as needed; and that's after a change made quite some time ago, so that only 1 byte (char) is stored per enumeration instead of the default 4 (enums are just fancy ints). You might think that we have moved past storage issues since like forever, but that is true only for local apps. For a library, one has to consider the possibility that it might serve many users at once, and that it might need to do so on restricted hardware (e.g. micro-controllers).
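To make the intended change a bit more concrete, here is a rough sketch of the two layouts; the type and enum names are made up for illustration, not the library's actual identifiers:

    #include <stdint.h>

    /* Current layout (simplified): piece and tag are separate enumerations,
       each already squeezed into a single byte, so two bytes per field. */
    typedef int8_t  PieceEnum;  /* e.g. light/dark Pawn, Rook, ...    */
    typedef uint8_t TagEnum;    /* e.g. "can rush", "can castle", ... */

    typedef struct {
        PieceEnum piece;
        TagEnum   tag;
    } FieldOld;                 /* 2 bytes per field */

    /* Proposed layout: a piece and its few, intimately related tags are
       merged into one enumeration, so a field needs just a single byte
       (e.g. "dark Pawn which can rush" becomes one enumerated value). */
    typedef int8_t PieceTagEnum;

    typedef struct {
        PieceTagEnum piece_tag;
    } FieldNew;                 /* 1 byte per field */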

There is another issue with enumeration storage, and that's the space it takes in a chessboard struct; again, currently it's double what it actually needs. Parsing user notation and applying it to the current chessboard is not an easy task; if undo is supported, then one has to apply all moves performed up to that point. To speed things up (and avoid doing the same job twice), the position after a performed move can be stored, and later retrieved. The only problem is, the largest variant has 676 fields, 190 pieces and 14 different interactions (losing tags are not counted, as they're a result of interactions), which means a game can last much longer than a classical chess game. The longest recorded classical tournament game was 269 moves, but a legal game can stretch up to 5898.5 moves with the 50-move draw rule, or up to 8848.5 moves if the 75-move draw rule is used instead.

For the record, a move to me always means all actions performed by one player in one turn; this is different from FIDE's definition used above, which defines a move as a white player's action followed by a black player's action; for FIDE's definition I use the term cycle. The reason I refer to a move as such is that each player moves their pieces independently of the other (the current chessboard position notwithstanding); there is a meaningful choice to be made; in fact, making a choice is the whole point of a game.

So, how long can a chess game go in the latest and greatest variant of them all, the One? Honest answer: I don't know. I tried to guesstimate the thing, but it's probably wildly inaccurate. My guess is that tournament games won't last much longer than 18,000 moves, and technically legal games could probably go for 10,000,000+ moves; both with the 50-cycle draw rule. This post is already too long; if you're interested to hear my reasoning, let me know in the comments below.

As you can see, if undo is supported by storing chessboards, just having the variant enumeration as an int in the chessboard struct can easily eat up 40+ MB of space, just for that one enum. Undo chessboards could currently use 13+ GB of RAM in total; with the changes proposed here implemented, that could drop down to 6.5+ GB. This is why I'll merge the piece and tag enums, and also change variant enum storage to a byte. Such a change will affect everything in the project; for the time being, there will be development without (much) progress.
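The back-of-the-envelope behind those figures, assuming roughly 10 million stored positions (from the 10,000,000+ move estimate above) and 676 fields per chessboard; the actual struct layout in the library will differ:

    #include <stdio.h>

    int main(void) {
        double positions = 10e6;  /* ~10,000,000+ stored undo positions (worst case) */
        double fields    = 676.0; /* 26 x 26 chessboard in the largest variant       */

        /* Variant enum alone, per stored chessboard: 4-byte int vs. 1 byte. */
        printf("variant enum as int:  %.0f MB\n", positions * 4.0 / 1e6); /* ~40 MB */
        printf("variant enum as byte: %.0f MB\n", positions * 1.0 / 1e6); /* ~10 MB */

        /* Whole chessboard: 2 bytes per field (separate piece + tag enums)
           vs. 1 byte per field (merged enum). */
        printf("separate enums: %.2f GB\n", positions * fields * 2.0 / 1e9); /* ~13.5 GB */
        printf("merged enums:   %.2f GB\n", positions * fields * 1.0 / 1e9); /* ~6.8 GB  */
        return 0;
    }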

New book update

I'm updating the book even though less than a month has passed since the previous update; the newest changes include:

  • rewrote checking opponent's King by Serpent,
  • tiny clarification, cascading en passant,
  • filled-in, clarified piece activations table,
  • removed activations on miracle-fields by Starchild, Wave,
  • removed Wave divergence on miracle-fields,
  • a bunch of tiny fixes, clarifications.

In a previous update I planned on making an exception to the Wave's transparency, so that a King behind it would not be in check anymore. That has been revised, and so Wave transparency stays the same, and a King behind it is still in check. Due to the Shaman being able to capture multiple pieces in a single ply, I had to put forward a rule that a King is in check if it could be captured, were it the opponent's turn. Then everything else followed from there: if a Wave can be passed over by a piece due to its transparency, then the attacker already has direct access to the King; and hence, the King is already in check.

I also removed all activations on miracle-fields, except that a Starchild can still activate a Star. This is due to miracle- and uplifting-fields being the same, so it was hard to determine on the spot (when parsing user notation, building a legal path, ...) whether an interaction is just an ordinary activation (on a miracle-field), or is uplifting a piece; uplifting is a two-stage process, and so activation context would have to be passed between plies before anything conclusive could be known.

The book was compiled on April 2, 2025, the version is 20250402.230028, and it can be found in the usual places.