I'm explaining this assuming int is 4 bytes, but the C standard only guarantees a minimum of 16 bits (2 bytes). Still, 4 bytes is the default on virtually all modern C compilers.
With 4 bytes we have 32 bits for int. If all integers were unsigned, this would correspond to the range $0$ to $2^{32} - 1$.
| Number | Signed | Unsigned |
|---|---|---|
| 0x7fff ffff | $2^{31} - 1$ | $2^{31} - 1$ |
| 0xffff ffff | -1 (in two's complement; depends on the negative representation used) | $2^{32} - 1$ |
As shown in the table above, as long as the most significant bit is 0, the number is positive and the signed and unsigned representations are identical. With "%d", the argument is interpreted as signed; with "%u", it is interpreted as unsigned. Since both representations agree whenever the most significant bit is 0, either format specifier is safe for this range of values. Depending on the type actually passed to printf, though, the compiler may still warn about a specifier mismatch.
NB: Format specifiers never perform any type conversion. printf simply interprets the argument as whatever type the specifier names. So passing a float with "%d" (or vice versa) produces an unexpected result, not the int value of the float; it is in fact undefined behavior.