From a statistical perspective, however, if we assume that all the previous data has been used to produce a distribution that properly reflects the expected distribution of your play, then any additional data point should update that distribution accordingly; the previous data should already be accounted for.
Hence a proper rating system should never take a new piece of data and update in the opposite direction. I understood the previous system: because it was re-rating previous games, it made sense that the information gained there could easily overcome the update from the most recent game.
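To make that concrete, here is a minimal sketch of a hypothetical re-rating scheme (a sliding-window performance average, which I'm inventing for illustration; it is not the actual system) where a good new result can still push the rating down, because it changes how the older games are counted:

```python
# Hypothetical sliding-window rating: the mean of your last 5 game
# performance scores. Purely illustrative -- not the actual system.
def window_rating(scores, window=5):
    return sum(scores[-window:]) / min(len(scores), window)

history = [900, 500, 500, 500, 500]   # one strong old result, then average ones
print(window_rating(history))         # 580.0
history.append(600)                   # an above-average new result...
print(window_rating(history))         # 520.0 -- rating drops despite a good game
```

An update like that is explainable when old games are genuinely being re-scored, which is why the old system made sense to me.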
But suppose I flip a coin that we do not know is fair, and we have an estimate of how often it comes up heads or tails. If it flips heads, sure, that's not always a good indicator that heads is more likely than your previous estimate suggested. But you should not be able to take the model and go, "well, the last few flips that we've already accounted for were tails, so let's update it to be more likely to flip tails," because that information should already be accounted for in the model.
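In Bayesian terms, a coherent model simply can't do that. Here's a quick sketch with a Beta-Bernoulli model, the textbook setup for a coin of unknown bias (the prior and the flip counts are made up for illustration): observing one more heads can only move the posterior estimate of P(heads) upward:

```python
from fractions import Fraction

def p_heads(heads, tails, alpha=1, beta=1):
    """Posterior mean of P(heads) under a Beta(alpha, beta) prior."""
    return Fraction(alpha + heads, alpha + beta + heads + tails)

before = p_heads(7, 3)      # estimate after 7 heads, 3 tails (made-up counts)
after = p_heads(8, 3)       # the same data plus one new heads
assert after > before       # the new flip can only move the estimate toward heads
print(before, "->", after)  # 2/3 -> 9/13
```

Algebraically, the posterior mean goes from (α+h)/(α+β+h+t) to (α+h+1)/(α+β+h+t+1), which is strictly larger whenever β+t > 0, so no choice of prior makes a heads observation shift the estimate toward tails.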
Not to mention, the way this system currently works, it's not just smoothing things out. If you have an upset that would only happen 1 in 100 times, which is already going to make a notable change, then under the current system that upset has suddenly "happened" 15 times in recent history, creating a much bigger set of outliers in a system where recent results matter more than old ones by design.
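For a rough sense of the magnification, here's a sketch assuming simple exponential recency decay (the 0.9 factor and the scheme itself are my assumptions, not the system's actual formula). Over a 100-game history, the newest game carries roughly ten times the weight it would under a plain average, so one rare upset lands with the force of many ordinary results:

```python
# Sketch of exponential recency weighting (the decay factor 0.9 and the
# scheme itself are assumptions, not the system's actual formula).
def recency_weights(n_games, decay=0.9):
    """Normalized weights; game n_games - 1 is the most recent."""
    raw = [decay ** (n_games - 1 - i) for i in range(n_games)]
    total = sum(raw)
    return [w / total for w in raw]

weights = recency_weights(100)
print(f"newest game's share: {weights[-1]:.3f}")  # ~0.100, vs 0.010 uniform
```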