There is insufficient memory for the Java Runtime Environment to continue hbase


I have gone through all the answers to similar questions, but I was unable to determine whether the issue is in my Java code or in my HBase configuration, so I am posting this question again. I am getting the error below in HBase. I have 3 VMs in my Hadoop cluster.

Master node - 3 GB RAM

Datanode 1 - 7 GB RAM

Datanode 2 - 7 GB RAM

My Java program runs on the HBase master node; it inserts data into an HBase table, and after inserting roughly 100k records I get the error below, and both the Java program and the HMaster stop working.

Java program error:

OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007fe05185c000, 12288, 0) failed; error='Cannot allocate memory' (errno=12)

There is insufficient memory for the Java Runtime Environment to continue. Native memory allocation (malloc) failed to allocate 12288 bytes for committing reserved memory.

An error report file with more information is saved as:

/var/data/HadoopOperations/javaOperations/hs_err_pid41813.log
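The crash log named above usually pins down the JVM's view of memory at the moment of failure. A quick way to pull out the most relevant lines (the path is the one from the error message; the field names are the standard ones an OpenJDK hs_err file ends with):

```shell
# Show the memory summary, JVM build info, and uptime from the crash log
grep -E '^(Memory:|vm_info:|time:|elapsed time:)' \
    /var/data/HadoopOperations/javaOperations/hs_err_pid41813.log
```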

Excerpt from hs_err_pid41813.log:

processor : 1

vendor_id : AuthenticAMD

cpu family : 16

model : 8

model name : AMD Opteron(tm) Processor 4171 HE

stepping : 1

microcode : 0xffffffff

cpu MHz : 2094.643

cache size : 512 KB

physical id : 0

siblings : 2

core id : 1

cpu cores : 2

apicid : 1

initial apicid : 1

fpu : yes

fpu_exception : yes

cpuid level : 5

wp : yes

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt lm 3dnowext 3dnow rep_good nopl extd_apicid pni cx16 popcnt hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw vmmcall

bugs : tlb_mmatch apic_c1e fxsave_leak

bogomips : 4205.20

TLB size : 1024 4K pages

clflush size : 64

cache_alignment : 64

address sizes : 42 bits physical, 48 bits virtual

power management:

Memory: 4k page, physical 3523172k(135048k free), swap 0k(0k free)

vm_info: OpenJDK 64-Bit Server VM (24.79-b02) for linux-amd64 JRE (1.7.0_79-b14), built on Jul 24 2015 08:15:54 by "buildd" with gcc 4.8.2

time: Fri Sep 4 06:43:48 2015

elapsed time: 63099 seconds

hbase-site.xml configuration

<configuration>
    <property>
            <name>hbase.rootdir</name>
            <value>hdfs://master:9000/hbase</value>
    </property> 

    <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
    </property>

    <property>
            <name>hbase.zookeeper.property.clientPort</name>
            <value>2181</value> 
    </property>

    <property>
            <name>hbase.zookeeper.quorum</name>
            <value>master,datanodeone,datanodetwo</value>
    </property>

    <property>
            <name>hbase.client.scanner.caching</name>
            <value>10000</value>
    </property>

    <property>
            <name>hfile.block.cache.size</name>
            <value>0.6</value> 
    </property> 

    <property>
            <name>hbase.regionserver.global.memstore.size</name>
            <value>0.2</value> 
    </property>         
 </configuration>
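For context, the two fractions above (hfile.block.cache.size at 0.6 and hbase.regionserver.global.memstore.size at 0.2) are fractions of the RegionServer's JVM heap, which is configured separately in hbase-env.sh; together they already reach 0.8, around the maximum combined share HBase will accept. A hedged sketch of the relevant hbase-env.sh line for HBase of this era (the 1000 MB value is illustrative, not taken from the question):

```shell
# hbase-env.sh fragment: heap size for HBase daemons, in MB
export HBASE_HEAPSIZE=1000
```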
Tags: java, hadoop, jvm, hbase
asked on Stack Overflow Sep 4, 2015 by 3ppps • edited Sep 4, 2015 by Wazy

2 Answers


You have almost no free memory and no swap.

physical 3523172k(135048k free), swap 0k(0k free)
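That line says only about 135 MB of the roughly 3.4 GB of physical memory was free when the JVM aborted; a quick sketch of the arithmetic using the values from the log:

```shell
# Values (in kB) taken from the "Memory:" line of the crash log
total=3523172
free=135048
# Integer percentage of physical memory still free at crash time
echo $(( free * 100 / total ))   # prints 3
```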

The simplest solution is to add some swap space; I suggest at least 4 GB, and up to 16 GB.

answered on Stack Overflow Sep 4, 2015 by Peter Lawrey

To create a 2 GB swap file:

sudo dd if=/dev/zero of=/swapfile count=2048 bs=1MiB
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile

sudo nano /etc/fstab

Add this line so the swap file is enabled on boot: /swapfile none swap sw 0 0

Run swapon --show and free -h to verify that the swap is active.

answered on Stack Overflow Apr 3, 2020 by ainaomotayo

User contributions licensed under CC BY-SA 3.0