GCC takes a lot of memory to compile a C++ file with a very large object on the stack


I have a C++ file that uses a std::bitset<size>. When compiling with GCC on Windows Subsystem for Linux, size = 1000000000 (1 Gbit) causes about 1.6 GB of compile-time memory use, and size = 10000000000 (10 Gbit) causes ~6 GB of memory and ~15 GB of virtual memory (my PC has 8 GB of memory in total). The memory is allocated gradually, and compilation finishes as soon as usage peaks. If size is large, the program crashes with Segmentation fault (core dumped) as soon as it is launched. The turning point is between 10M and 100M.

On MSVC, the program compiles and runs fine if size is small. For larger sizes, a "Stack overflow" exception is thrown at runtime. If size is really large, compilation fails with the error "total size of array must not exceed 0x7fffffff bytes" in the file bitset.

The problem is independent of the optimization level: -O0, -O1, -O2 and -O3 all behave the same.

This is the output of gcc -v:

Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/7/lto-wrapper
OFFLOAD_TARGET_NAMES=nvptx-none
OFFLOAD_TARGET_DEFAULT=1
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 7.3.0-27ubuntu1~18.04' --with-bugurl=file:///usr/share/doc/gcc-7/README.Bugs --enable-languages=c,ada,c++,go,brig,d,fortran,objc,obj-c++ --prefix=/usr --with-gcc-major-version-only --program-suffix=-7 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --enable-default-pie --with-system-zlib --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --disable-werror --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-offload-targets=nvptx-none --without-cuda-driver --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 7.3.0 (Ubuntu 7.3.0-27ubuntu1~18.04)

This is my test code, which is very simple:

#include <bitset>

int main(){
    const size_t size = ...;  // e.g. 1000000000 (1 Gbit) or 10000000000 (10 Gbit)
    std::bitset<size> bs;
    return 0;
}

If gcc-8 is used instead of gcc-7, there is no such issue: compilation finishes quickly, and the program still runs into a segmentation fault if size is large, as it should. If I use std::vector<bool>, or create the bitset with new, it runs fine; a sketch of those two alternatives follows below.
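For reference, here is a minimal sketch of those two alternatives, using the 1 Gbit figure from above (make_unique is just one way of allocating with new):

#include <bitset>
#include <cstddef>
#include <memory>
#include <vector>

int main(){
    const std::size_t size = 1000000000;  // 1 Gbit

    // Both alternatives keep the bits on the heap, so neither the
    // compile-time blow-up nor the runtime stack overflow occurs.
    std::vector<bool> vb(size);                        // dynamically sized bit container
    auto bs = std::make_unique<std::bitset<size>>();   // fixed-size bitset, heap-allocated
    return 0;
}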

So there is no problem to solve, but my question is:

Why does gcc-7 take so much memory (and time) to compile this file?

c++
gcc
memory
stack-overflow
asked on Stack Overflow Apr 14, 2019 by user • edited Apr 14, 2019 by user

1 Answer


This was a known bug in GCC, having to do with constexpr initialisation of very large objects.

There are several bug reports, one of which is bug 56671, reported in March 2013 against v4.7.2. The bug was marked resolved in June 2018, which means it was still present in v7.3 at least, but has since been corrected.
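To see why constexpr initialisation is involved at all: libstdc++'s std::bitset stores its bits in a plain array of words and zero-initialises it in a constexpr default constructor. A hand-rolled type of the same shape (my own illustration, not libstdc++'s actual code; the names are invented) gives an idea of what affected compilers had to evaluate:

#include <cstddef>

// Roughly the shape of std::bitset's internals: an array of words,
// zero-initialised by a constexpr default constructor.
template <std::size_t NBits>
struct BigBits {
    unsigned long words[(NBits + 63) / 64];
    constexpr BigBits() : words() {}  // constexpr zero-initialisation
};

int main(){
    BigBits<1000000000> b;  // a ~125 MB object; affected GCC versions struggled here
    (void)b;
    return 0;
}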

If you need a workaround, it appears that using a different constructor does not trigger the bug. In particular, changing std::bitset<size> bs; to std::bitset<size> bs(0); compiles fine on various versions of GCC, including v7.3.0 (according to the online compiler I tried).
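Applied to the test code from the question, the workaround looks like this (using the 1 Gbit size from the question as an example):

#include <bitset>

int main(){
    const size_t size = 1000000000;  // 1 Gbit, one of the sizes from the question
    std::bitset<size> bs(0);         // value constructor instead of the default constructor
    return 0;
}

Note that this only fixes the compile-time memory use; the bitset still lives on the stack, so the runtime segmentation fault for very large sizes remains.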

answered on Stack Overflow Apr 14, 2019 by rici • edited Apr 14, 2019 by rici

User contributions licensed under CC BY-SA 3.0