If it were 53, I'd wonder "are they storing the time in the integer part of a double-precision float?" That wouldn't go negative; it'd just start absorbing increments without changing the value.
Though that might cause a divide by zero?
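That absorption is easy to demonstrate, since Python floats are IEEE-754 doubles with a 53-bit significand:

```python
t = float(2**53)  # largest power of two at which doubles still step by 1

assert t + 1 == t                 # an increment of 1 is silently absorbed
assert t + 2 == float(2**53 + 2)  # the next representable value is 2 away
```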
What could cause unexpected behavior at 57 bits?
Perhaps they're storing fractions of an hour — say, incrementing a counter every 1/16th of an hour — and calculating a relative rate of change, causing a divide by zero?
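A sketch of how that guess would play out — every name below is hypothetical, not actual firmware. Once the double's ULP exceeds the 1/16 increment, the timestamp stops moving and any elapsed-time denominator goes to zero:

```python
# Hypothetical: uptime kept in a double, bumped every 1/16th of an hour.
total_writes = 1_000_000

uptime_hours = float(2**50)    # far enough out that the ULP (0.25) exceeds 1/16
prev = uptime_hours
uptime_hours += 1 / 16         # absorbed: rounds back to the same double

elapsed = uptime_hours - prev  # 0.0 once increments stop sticking
try:
    wear_rate = total_writes / elapsed
except ZeroDivisionError:
    wear_rate = None           # in firmware, plausibly a crash or reset instead
```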
My overactive imagination thinks it went something like this:
Engineer A: Gee, I need to store a few flags with each block, but there's nowhere to put them. Ah! We're storing timestamps as 64-bit microseconds. I can borrow a few of those bits and there'll still be enough to go for thousands of years without overflowing.
Engineer B: Gee, our SSDs are getting so fast, soon we'll be able to hit 1M writes/sec. But we're storing timestamps as microseconds. How can we generate unique timestamps for each write? Ah! I'll switch to nanoseconds. It's a good thing we have plenty of space in this 64-bit int.
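Back-of-the-envelope for that pair of changes (the seven borrowed flag bits are my own assumption, picked only because they leave 57):

```python
US_PER_YEAR = 1_000_000 * 60 * 60 * 24 * 365

full = 2**64 // US_PER_YEAR                   # ~585,000 years of microseconds
after_flags = 2**57 // US_PER_YEAR            # ~4,570 years in the remaining 57 bits
after_nanos = 2**57 // (US_PER_YEAR * 1000)   # ~4.5 years once it's nanoseconds
```

Each change is safe on its own; together they turn a geological overflow horizon into something a long-lived drive can actually hit.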
Packing a type flag into the upper bits of a 64-bit value is a reasonably common optimisation in dynamic language implementations (because it lets you use unboxed number arithmetic).
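A minimal sketch of the upper-bit trick — the tag values, field widths, and names here are all made up for illustration:

```python
# Pack a 3-bit type tag into the top of a 64-bit word.
TAG_SHIFT = 61
TAG_MASK = 0b111 << TAG_SHIFT
PAYLOAD_MASK = (1 << TAG_SHIFT) - 1
TAG_INT, TAG_PTR = 0b001, 0b010  # made-up tag assignments

def pack(tag, payload):
    assert 0 <= payload <= PAYLOAD_MASK
    return (tag << TAG_SHIFT) | payload

def unpack(word):
    return (word & TAG_MASK) >> TAG_SHIFT, word & PAYLOAD_MASK

assert unpack(pack(TAG_INT, 123_456)) == (TAG_INT, 123_456)
```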
Or sometimes the lower bits, as was (at least historically) the case for integers in V8. (Also OCaml, though that's not dynamically typed. Tagging integers in the low bit simplifies the garbage collector: it can sometimes skip a per-type pointer map and get by with just a flag in the object header saying whether the object contains any pointers — the cost being that everything that isn't an int or a pointer has to be boxed.)
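The low-bit variant is roughly this — pointers are word-aligned, so a set low bit can only mean an unboxed (here 63-bit) integer:

```python
# OCaml-style low-bit integer tagging, sketched for illustration.
def tag_int(n):
    return (n << 1) | 1   # stored value is 2n + 1

def is_int(word):
    return word & 1 == 1  # pointers are aligned, so their low bit is 0

def untag_int(word):
    return word >> 1

w = tag_int(21)
assert is_int(w) and untag_int(w) == 21
assert not is_int(0x7f8c_3000)  # an aligned "pointer" has a clear low bit
```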
Do embedded CPUs like the one in an SSD have floating point units? It seems more likely to me that the upper bits in a 64-bit integer counter were used for something else.
I think it's more likely they shifted a power of two over by a base-10 place value instead of a binary one — in other words, multiplied by 10. Unsure why, but it seems simpler.
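For what it's worth, a stray ×10 does connect the two numbers in this thread: multiplying by 10 adds log2(10) ≈ 3.32 bits, so a value at the 53-bit mark needs 57 bits afterwards:

```python
import math

assert round(math.log2(10), 2) == 3.32
assert (2**53 - 1).bit_length() == 53        # a maximal 53-bit value...
assert (10 * (2**53 - 1)).bit_length() == 57  # ...needs 57 bits after one x10
```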
52 is notable: 2^4 + 2^3 = 24, 24 + 24 = 48, and 48 + 2^2 = 52. But 57?