Why does converting 0xffffffff to decimal give -1 in C?


I thought the result would be (2**32 - 1)

#include <stdio.h>
int main(){
  unsigned int a = 0xffffffff;
  printf("the value of a is %d\n", a);
  return 0;
}

but it gives me -1, any idea?

asked on Stack Overflow Jul 17, 2012 by mko

4 Answers


You're using the wrong format string. %d is a signed decimal int. You should use %u.

printf has no knowledge of the types of variables you pass it. It's up to you to choose the right format strings.

answered on Stack Overflow Jul 17, 2012 by Dancrumb

You're asking printf() to interpret that value as a signed integer, whose range is -(2**31) to (2**31)-1. Basically, the high bit is a sign bit. Read about two's complement.

answered on Stack Overflow Jul 17, 2012 by Wyzard

Because %d represents a signed integer. With a signed integer, the high bit (the 32nd bit in a 32-bit int) is set when the value is negative. In 0xFFFFFFFF the high bit is set, so when the value is interpreted as a signed integer, the result is negative. To treat the high bit as part of the magnitude rather than a sign, use an unsigned conversion:

%u for unsigned int (or %lu for unsigned long)
answered on Stack Overflow Jul 17, 2012 by Jason Larke

The %d format specifier is for signed integers, so printf interprets the bit pattern 0xffffffff as the signed value -1.

The issue is that the %d specifier is for signed integers. You need to use %u.

printf("%d",0xffffffff); // Will print -1

printf("%u",0xffffffff); // Will print 4294967295

By the way, with warnings enabled I would expect the compiler to complain here -- something along the lines of "format '%d' expects argument of type 'int', but argument has type 'unsigned int'" -- because most compilers detect the mismatch between the format string and the argument type.

answered on Stack Overflow May 13, 2020 by chandu_reddim • edited May 13, 2020 by ldoogy

User contributions licensed under CC BY-SA 3.0