It's interesting that it makes that mistake, but then catches it a few lines later.
A common complaint about LLMs is that once they make a mistake, they keep making it, writing the rest of their completion under the assumption that everything before was correct. Even when they've been RLHF'd to take human feedback into account and the human points out the mistake, their answer is "Certainly! Here's the corrected version" followed by something that makes the same mistake.
So it's interesting that this model does something that appears to be self-correction.
> 9 corresponds to 'i' (9='i')
> But 'i' is 9, so that seems off by 1.
Still seems bad at counting, as ever.
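For what it's worth, a quick check (assuming the usual 1-based mapping a=1, b=2, ...) shows the original mapping was fine and the "off by 1" doubt was the actual mistake:

```python
import string

# 1-based position of 'i' in the alphabet (a=1, b=2, ...)
pos = string.ascii_lowercase.index('i') + 1
print(pos)  # 9 — so 9='i' was correct all along
```

So the self-correction here corrected something that wasn't wrong.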