The advantage of using integers is that no matter where you are on the number line, you always have the same level of precision. Whether I am at 0 or at INT_MAX units away, the positions on either side of me are always exactly 1 unit away (at INT_MAX, those neighbours are INT_MAX-1 and, via overflow, 0). With (here, 64-bit) floats, adjacent representable values near 1.0 are about 2^-52 units apart, and finer still approaching 0; but by the time I reach 2^53 units I have eaten so far into my precision that the next value to my right is a full 2 units away, while going left I can still move in 1-unit steps.
Integers guarantee precision that doesn't vary with where you sit: the position is stored in the number, and the precision is a constant 1. Floating point allows much more accuracy within reasonable bounds, but the further you are from zero, the less precision you have. To increase precision you must decrease position, and vice versa.
On the topic of minimum coordinate precision, I'd say that a centimetre is too big as the base unit. Although the (D/R/whatever is chosen)CPU and gravity calculations need no more accuracy than about a decimetre, I would recommend going smaller for the base entity coordinates, since these also affect rendering, not just clunky 16-bit programs or rough physical approximations. 1 cm intervals are noticeable to the player: anything moving in 1 cm steps produces a visibly rough effect. Even if the player speed is 150 cm/s, the render engine has to interpolate position between ticks, and interpolating on a 1 cm grid is horribly noticeable and jumpy.

This shows up worst when a player walks in a nearly-axial direction, since the motion is (cos theta, sin theta) or (sin theta, cos theta) depending on the coordinate system. That gives one axis moving at nearly full speed and one at very low speed, and the slow axis shows the snap-to effect very clearly, creating nasty visual artifacts.

At this granularity, the velocity and momentum calculations also become discrete rather than continuous in nature, since nothing can move by less than the large minimum unit. The engine would have to pick the closest legal point to the true value, creating an almost voxel-like restriction on entity positioning. To find speed, for instance, you sum the squares of your velocity components and take the square root; each such calculation compounds the rounding error in position and decreases precision further. So although the base unit for the hidden-to-the-player bookkeeping could more easily be made 1 cm, the calculations built on top of it can make the errors grow very quickly, which is why starting from a much lower initial error is preferable to accepting up to 5 mm of rounding error from the outset.
In place of 64-bit floating point (double) for positions within a 10^14 space, I would recommend either a 64-bit fixed-point number or a 128-bit float (quad / long double). Fixed point offers guaranteed precision over the entire range (just as integers do), while still being able to go to the limits of the system at full precision and back. That precision would also be enough to avoid rendering artifacts: a Q51.12 fixed-point layout (sign bit, 51 integer bits, 12 fraction bits) gives a range of roughly +/-2.3x10^15 with a constant resolution of 2^-12, about 0.00024, which is far better than 0.01. Fixed point has a few drawbacks, such as requiring a bit of extra engineering to work, but it eliminates the issues that arise from limiting entity positions to a 1x1 cm grid (a key issue in a first-person game). A 128-bit float would be easier to engineer, but it depends on compiler support: VC++ treats long double as a plain 64-bit double, while gcc and clang need specific switches and types (such as __float128) for true quad precision.