Is GDB interpreting the memory address correctly?


I am examining the contents of a memory address using GDB, but I don't know whether it is being displayed correctly.

(gdb) p (char *)0x8182f40
 $4 = 0x8182f40 "XYZ"

(gdb) x/40x 0x8182f40-16
0x8182f30:      0x00000000      0x00000000      0x000000a8      0x00000010
0x8182f40:      0x005a5958      0x00000000      0x00000000      0x00000029
0x8182f50:      0x00000000      0x00000000      0x00010000      0x082439d8
0x8182f60:      0x08199100      0x00000000      0x08000000      0x00002f08
0x8182f70:      0x00000002      0x000000b1      0x00000000      0x00000000
0x8182f80:      0x00000000      0x00000000      0x00000000      0x00000000
0x8182f90:      0x00000000      0x00000000      0x000000d4      0x00000002
0x8182fa0:      0x000003f1      0x00007162      0x00000002      0x08178d00
0x8182fb0:      0x00000000      0x080ef4b8      0x00000000      0x00000000
0x8182fc0:      0x00000000      0x00000000      0x0000021d      0x00000000

The content at 0x8182f40 above is shown as 0x005a5958, but it looks byte-reversed. Is that correct?

Now printing per byte, I get this:

(gdb) x/40bx 0x8182f40-16
0x8182f30:      0x00    0x00    0x00    0x00    0x00    0x00    0x00    0x00
0x8182f38:      0xa8    0x00    0x00    0x00    0x10    0x00    0x00    0x00
0x8182f40:      0x58    0x59    0x5a    0x00    0x00    0x00    0x00    0x00
0x8182f48:      0x00    0x00    0x00    0x00    0x29    0x00    0x00    0x00
0x8182f50:      0x00    0x00    0x00    0x00    0x00    0x00    0x00    0x00

This one makes more sense: at 0x8182f40 the bytes are 0x58 0x59 0x5a, i.e. 'X' 'Y' 'Z'.

How do I correctly interpret these addresses and contents?

asked on Stack Overflow May 6, 2014 by adizone • edited May 6, 2014 by Massimiliano

3 Answers


That's little endian.

When storing a multi-byte value in memory, there are two¹ possible byte orders:

  • Lower bytes on lower addresses. This is called Little Endian or Least Significant Byte First (LSB).

  • Higher bytes on lower addresses. This is called Big Endian or Most Significant Byte First (MSB).

Historically, some CPUs were little endian and some were big endian, with big endian perhaps more common, but little endian prevailed, in part because the most common architecture, ix86, is little endian. The second most common architecture, ARM, can be configured either way; while traditionally many operating systems ran it big endian (including early Linux), these days almost everyone runs it little endian. The main reason is probably to avoid having to check that code ported from ix86 is endian-neutral.

The reason it looks "wrong" is just a conflict between two conventions:

  1. Numbers are written left-to-right with the most significant digit first.
  2. Memory contents are written left-to-right in order of increasing address.

But this is merely a convention. On a computer, little endian is arguably slightly more logical in that, given an int value x, the equality (char)x == *(char *)&x holds, which is not true on big endian. Of course, the C specification is careful to leave this implementation-defined (and accessing the object through char does not violate strict aliasing rules).

¹ The PDP-11 featured a third way, a special abomination called middle endian, where 16-bit values were little endian but 32-bit values were composed of two 16-bit units in big-endian order.

answered on Stack Overflow May 6, 2014 by Jan Hudec • edited May 6, 2014 by Jan Hudec

You may need to set endianness:
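The command snippet appears to have been lost from this answer; in standard GDB the byte order used for display is controlled with the set endian command (this completion is based on GDB's documented commands, not on the original answer):

(gdb) show endian
(gdb) set endian little
(gdb) set endian big
(gdb) set endian auto

With auto, GDB follows the byte order recorded in the target executable, which is usually what you want.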

answered on Stack Overflow May 6, 2014 by James McDonnell

Looks like your GDB is set to little-endian. Refer to the Wikipedia article on endianness for more details.

answered on Stack Overflow May 6, 2014 by stanleyli

User contributions licensed under CC BY-SA 3.0