Consider two players, A and B, playing a ranked handicap game, where B receives an advantage at the start. Under the current ranking mechanism, if B wins, B reasonably gains little rating. But A loses almost as much as if the game were even, which doesn't make sense.
I only examined a few games between Echinops and some DDK players, so I'm not sure whether this odd behaviour is general.

Edit: It seems that my former conclusion was inaccurate due to the small sample. But the claim that a handicap game reduces the sum of both players' ratings still stands. Thus, as jlt said, any handicap game leads to rating deflation.

How many points a player gains or loses depends on their rating and their opponent's rating; the one taking a handicap has their rating adjusted accordingly.

You can experiment with how many points players will get at

Also, technically we don't use Elo; we use Glicko-2 for our ratings.

I made a test with two players:
A is 1.0d (rating 1920),
B is 9.0k (rating 1305).
Both have volatility 0.06 and deviation 65.

They play an even game. If A wins then A’s rating increases by 0.7 and B’s rating decreases by 0.7. If B wins then A’s rating decreases by 23 and B’s rating increases by 23.
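The even-game numbers above can be reproduced with a minimal sketch of the Glicko-2 single-game update. This omits the volatility update step (a simplification of the full algorithm), and is only an illustration, not OGS's actual implementation:

```python
import math

SCALE = 173.7178  # Glicko-2 conversion factor between rating points and mu

def glicko2_change(r, rd, r_opp, rd_opp, score):
    """Rating-point change for a single game, volatility step omitted."""
    mu, phi = (r - 1500) / SCALE, rd / SCALE
    mu_opp, phi_opp = (r_opp - 1500) / SCALE, rd_opp / SCALE
    g = 1 / math.sqrt(1 + 3 * phi_opp**2 / math.pi**2)
    e = 1 / (1 + math.exp(-g * (mu - mu_opp)))   # expected score
    v = 1 / (g**2 * e * (1 - e))                 # estimation variance
    phi_new_sq = 1 / (1 / phi**2 + 1 / v)        # updated deviation (squared)
    return SCALE * phi_new_sq * g * (score - e)

# Even game: A is 1920 (1.0d), B is 1305 (9.0k), both with deviation 65.
print(round(glicko2_change(1920, 65, 1305, 65, 1), 1))  # A wins: ≈ +0.7
print(round(glicko2_change(1920, 65, 1305, 65, 0), 1))  # A loses: ≈ -23.0
```

The handicap case is not shown because it depends on exactly how the server adjusts the effective rating for the handicap stones.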

They play a 9-stone handicap game. If A wins then A’s rating increases by 11.7 and B’s rating decreases by 20. If A loses then A’s rating decreases by 11.3 and B’s rating increases by 3.7.

This looks wrong to me. Assuming they both have 50% chances of winning, then each time they play a 9-stone handicap game, the 1d expects to gain 0.17 point (which is negligible) but the 9k expects to lose 8.2 points. In the long run, playing too many handicap games results in rank deflation.
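The expected changes follow directly from the per-game deltas quoted above (small differences from the figures in the text come from rounding those deltas):

```python
# Expected per-game rating change in the 9-stone handicap game,
# assuming each player wins 50% of the time.
p = 0.5
dan_change = p * 11.7 + (1 - p) * (-11.3)   # ≈ +0.2 for the 1d
kyu_change = p * 3.7 + (1 - p) * (-20.0)    # ≈ -8.15 for the 9k
print(round(dan_change, 2), round(kyu_change, 2))
print(round(dan_change + kyu_change, 2))    # combined loss per game: deflation
```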

In the 9-stone handicap game, if Black wins then Black gains 3.78 points, otherwise Black loses 20.21 points. This is fair if

(Black’s probability of winning)/(Black’s probability of losing) = 20.21/3.78 = 5.34, i.e. Black’s winrate is 84%. This is unrealistic since the link you pointed to says that Black’s winrate is less than 50%.
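The implied "fair" winrate comes from requiring the expected change to be zero, i.e. p · gain = (1 − p) · loss:

```python
gain, loss = 3.78, 20.21  # Black's rating change on a win / a loss
# Zero expected change requires p * gain = (1 - p) * loss, so:
p_fair = loss / (gain + loss)
print(round(p_fair, 2))  # ≈ 0.84
```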

What I mean is: the rating change for Black is K(S − P(B)), where S is the outcome (1 for a win, 0 for a loss) and P(B) = E_B is Black's expected score.

The alternate result, White winning, has probability 1 − P(B) and outcome 1 − S, leading to a rating change that is just minus the value of the win.

This formula is wrong (if I understood correctly what you wrote).
If Black wins, the rating change is K(1-P(B)) and if Black loses, the rating change is K(0-P(B)). So the expected rating change is
P(B) * K(1-P(B)) + (1-P(B)) * K(0-P(B)) = 0.
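This identity holds for any K and any win probability, which a quick numeric check confirms:

```python
# Expected rating change under the update K*(S - P): zero for every
# probability P, since the algebraic terms cancel exactly.
K = 32  # arbitrary illustrative scale factor
for p in (0.1, 0.5, 0.84):
    expected = p * K * (1 - p) + (1 - p) * K * (0 - p)
    assert abs(expected) < 1e-12
print("expected change is 0 for all tested probabilities")
```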

The problem exists no matter what winning probabilities you assume for the two players. When Black wins, White loses more rating than Black gains, which implies that White was more likely to win. However, when White wins, Black loses more rating than White gains, which implies that Black was more likely to win. These conclusions contradict each other.