Java Memory Issues in Cloud Native

Java has been one of the most popular programming languages of the past two decades, thanks to its active open source community and mature ecosystem. In the cloud-native era, cloud-native technology unlocks the benefits of cloud computing, drives businesses toward cloud-native architectures, and accelerates the digital transformation of enterprises.

However, Java’s path to cloud-native transformation faces serious challenges, because there are many contradictions between Java’s execution model and cloud-native characteristics. Enterprises are using cloud-native technology for deep cost optimization, and resource cost management has become more important than ever. Resources on the public cloud are billed by usage, so users are very sensitive to how much they consume. In terms of memory, the Java virtual machine gives every Java program a fixed baseline memory overhead. Compared with natively compiled languages such as C++ or Golang, Java applications occupy far more memory and are often called “memory gobblers”, which makes them more expensive to run on the cloud. Moreover, moving an application to the cloud increases the complexity of the system. Many users do not have a clear picture of how their Java applications use memory on the cloud and do not know how to configure memory for them reasonably, which makes OOM problems difficult to troubleshoot.

Why does OOM occur when heap memory does not exceed Xmx, and how should we understand the memory relationship between the OS and the JVM? Why does a program take up much more memory than Xmx, and where is all that memory used? Why do programs running in online containers have larger memory requirements? In this article, we analyze these problems, which EDAS users have run into while evolving cloud-native Java applications, and give recommendations on configuring memory for cloud-native Java applications.

Background Knowledge

Resource Allocation for K8s Applications

The cloud-native architecture uses Kubernetes (K8s) as its cornerstone: applications are deployed on K8s and run as groups of containers. K8s defines two resource specifications, the resource request and the resource limit. K8s guarantees that a container gets the amount of resources specified in its request, but does not allow it to use more than its limit. Take the memory configuration below as an example: the container is guaranteed at least 1024Mi of memory, but is not allowed to exceed 4096Mi. Once memory usage exceeds the limit, the container is OOM-killed and then restarted by K8s.

spec:
  containers:
  - name: edas
    image: alibaba/edas
    resources:
      requests:
        memory: "1024Mi"
      limits:
        memory: "4096Mi"
    command: ["java", "-jar", "edas.jar"]

Container OOM

To understand the OOM mechanism of containers, first recall what a container is. A container is often described as a sandbox: it is relatively self-contained, and it has a boundary and a size. The isolated runtime environment inside the container is provided by the Linux Namespace mechanism, which acts as a kind of blindfold over the PID, Mount, UTS, IPC, Network and other namespaces, so that the host’s namespaces and those of other containers are not visible from inside the container. The container’s boundary and size are provided by Cgroups, a mechanism in the Linux kernel for limiting the resources used by a single process or a group of processes, and the core technology behind container resource constraints. From the operating system’s point of view, a container is a special group of processes whose resource usage is constrained by its Cgroup. When the amount of memory used by the process group exceeds the Cgroup limit, it is killed mercilessly by the system OOM Killer.

Cgroups are not an obscure technology: Linux exposes them as a file system, very much in keeping with the Unix philosophy that everything is a file. For cgroup v1, we can inspect the current container’s Cgroup configuration directly under the /sys/fs/cgroup/ directory inside the container.

For container memory, memory.limit_in_bytes and memory.usage_in_bytes are the two most important parameters in the memory control group: the former is the maximum amount of memory available to the container’s process group, and the latter is the total amount of memory the process group is actually using. In general, the closer the usage value is to the limit, the higher the risk of OOM.

# Current container memory limit
$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes
4294967296
# Memory currently used by the container
$ cat /sys/fs/cgroup/memory/memory.usage_in_bytes
39215104

JVM OOM

OutOfMemoryError is thrown when the JVM does not have enough memory to allocate space for an object and the garbage collector can no longer free up space. The most common JVM OOM cases are:

  • java.lang.OutOfMemoryError: Java heap space. Heap memory overflow, thrown when there is not enough heap space to hold a newly created object. It is usually caused by a memory leak or an improperly sized heap. For memory leaks, use memory analysis tools to find the leaking code; the heap size can be adjusted with parameters such as -Xms and -Xmx. A minimal snippet that reproduces this error is sketched after this list.
  • java.lang.OutOfMemoryError: PermGen space / Metaspace. Permanent generation / metaspace overflow. The permanent generation stores class information and constants; JDK 1.8 replaced it with Metaspace. This error is usually thrown because too many classes are loaded or the loaded classes are too large. You can increase the size of the permanent generation / metaspace with the -XX:MaxPermSize or -XX:MaxMetaspaceSize startup parameter.
  • java.lang.OutOfMemoryError: Unable to create new native thread. A new thread could not be created. Each Java thread requires a certain amount of memory, and this error is reported when the JVM asks the underlying operating system to create a new native thread and there are not enough resources to do so. Possible causes include insufficient native memory, thread leaks pushing the number of threads past the operating system’s ulimit, or the thread count exceeding kernel.pid_max. Depending on the cause, you may need to add resources, cap the thread pool size, or reduce the thread stack size.
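
To make the first case concrete, here is a minimal, hypothetical snippet that keeps retaining 1MB byte arrays and never releases them; run with a small heap (for example -Xmx64m), it ends with java.lang.OutOfMemoryError: Java heap space after a short while.

import java.util.ArrayList;
import java.util.List;

public class HeapOom {
    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<>();
        while (true) {
            // Each iteration retains another 1MB, so the heap eventually fills up
            retained.add(new byte[1024 * 1024]);
        }
    }
}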

Why does OOM occur when heap memory does not exceed Xmx?

I’m sure many of you have encountered this scenario: a Java application deployed in K8s restarts frequently, and the container’s exit status is exit code 137 with reason OOMKilled. All the information points to an obvious OOM, yet JVM monitoring shows that heap usage never exceeded the maximum heap size Xmx, and although the parameter for taking a heap dump on OOM (-XX:+HeapDumpOnOutOfMemoryError) is set, no dump file is generated when the OOM occurs.

According to the background above, a Java application inside a container can encounter two kinds of OOM: JVM OOM and container OOM. A JVM OOM is an error caused by insufficient space in a JVM memory area; the JVM actively throws the error and the process exits. A container OOM is a system behavior: the memory used by the whole container process group exceeds the Cgroup limit and the group is killed by the system OOM Killer, which leaves records in the system log and in K8s events.

In general, a Java program’s memory usage is constrained by both the JVM and the Cgroup: the Java heap is limited by the Xmx parameter, and exceeding it causes a JVM OOM; the whole process is limited by the container’s memory limit, and exceeding it causes a container OOM. The two kinds of OOM need to be distinguished and troubleshot separately, and the corresponding limits configured and adjusted as needed.
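
To see the two limits side by side, the sketch below prints the JVM heap ceiling next to the container’s Cgroup limit. It assumes a cgroup v1 container (on cgroup v2 the limit file is /sys/fs/cgroup/memory.max instead) and is only an illustration, not a diagnostic tool.

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class MemoryLimits {
    public static void main(String[] args) throws Exception {
        long mb = 1024 * 1024;
        // JVM-side limit: a heap OOM is thrown when the heap cannot grow past this
        long xmx = Runtime.getRuntime().maxMemory();
        // Container-side limit (cgroup v1 path): the process group is OOM-killed past this
        Path limitFile = Paths.get("/sys/fs/cgroup/memory/memory.limit_in_bytes");
        long cgroupLimit = Long.parseLong(new String(Files.readAllBytes(limitFile)).trim());

        System.out.println("JVM heap limit (Xmx):   " + xmx / mb + " MB");
        System.out.println("Container memory limit: " + cgroupLimit / mb + " MB");
    }
}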

How to understand the memory relationship between OS and JVM?

As mentioned above, a Java container OOM is essentially the Java process using more memory than the Cgroup limit and being killed by the OS OOM Killer. What does the memory of a Java process look like from the OS perspective? The OS and the JVM each have their own memory model; how do the two map onto each other? Understanding the memory relationship between the JVM and the OS is essential for investigating OOM problems in Java processes.

The virtual address space of a Linux process is divided into kernel space and user space, and user space is further subdivided into many segments. Here are a few segments that are most relevant to this article, together with how JVM memory maps onto them.


  • Code segment. Generally refers to the mapping of the program code in memory, specifically noted here as the JVM’s own code, not Java code.
  • Data segment. Data that has been initialized for variables at the beginning of the program run, in this case the JVM’s own data.
  • Heap space. The runtime heap is one of the memory segments that most distinguishes a Java process from an ordinary process. The heap in the Linux process memory model provides space for objects the process allocates dynamically at runtime, and almost everything in the JVM memory model is data that the JVM, as a process, allocates at runtime. The Java heap in the JVM memory model is nothing more than a logical region that the JVM carves out of the memory it allocates as a process.
  • Stack space. This is not the thread stack in the JVM memory model, but rather some of the operational data that the operating system needs to keep to run the JVM itself.

As mentioned above, the heap is the concept most easily confused, because it exists in both the Linux process memory layout and the JVM memory layout. Besides the Java heap, the JVM also allocates thread stacks, the code cache, GC and compiler data, and so on out of the memory of its own process.
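
To look at the same layout from the operating system’s side, one can read the process’s own memory map. The sketch below is Linux-specific (it assumes /proc is available) and simply prints the [heap] and [stack] segments and the libjvm mapping of the running JVM; the Java heap itself appears elsewhere in the map as large anonymous mappings.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class SelfMaps {
    public static void main(String[] args) throws IOException {
        // Linux-specific: each line of /proc/self/maps describes one mapping of this JVM process
        try (Stream<String> maps = Files.lines(Paths.get("/proc/self/maps"))) {
            maps.filter(line -> line.endsWith("[heap]")
                             || line.endsWith("[stack]")
                             || line.contains("libjvm"))
                .forEach(System.out::println);
        }
    }
}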

Why does the program take up a lot more memory than Xmx, and where is all the memory used?

In Java developers’ view, the objects created by running Java code live in the Java heap, so many people equate Java heap memory with Java process memory, use the heap limit parameter Xmx as the process memory limit, set the container memory limit to the same size as Xmx, and then sadly find the container OOM-killed.

In fact, in addition to the familiar heap memory, the JVM also has so-called non-heap memory that is managed by the JVM itself, as well as native memory that is allocated directly, bypassing the JVM.

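A common source of such native memory is NIO direct buffers. The hypothetical snippet below allocates 256MB off-heap: the allocation does not count against -Xmx, but it does count toward the container’s memory limit (and toward -XX:MaxDirectMemorySize, which by default is roughly the maximum heap size).

import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        long mb = 1024 * 1024;
        // 256MB of native (off-heap) memory: invisible to the Java heap, visible to the Cgroup
        ByteBuffer buffer = ByteBuffer.allocateDirect(256 * 1024 * 1024);
        System.out.println("direct buffer capacity: " + buffer.capacity() / mb + " MB");
        System.out.println("heap max (Xmx):         " + Runtime.getRuntime().maxMemory() / mb + " MB");
    }
}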

JDK 8 introduced the Native Memory Tracking (NMT) feature, which tracks the JVM’s internal memory usage. NMT is off by default and is turned on with the JVM argument -XX:NativeMemoryTracking=[off | summary | detail]

java -Xms300m -Xmx300m -XX:+UseG1GC -XX:NativeMemoryTracking=summary -jar app.jar

This command fixes the heap at 300MB (-Xms and -Xmx are both 300m), uses G1 as the GC algorithm, and enables NMT to track the memory usage of the process.

Note: enabling NMT results in a 5%-10% performance overhead.

When NMT is enabled, you can use the jcmd command to print the JVM’s memory usage. The command below prints only the memory summary, scaled to MB.

jcmd <pid> VM.native_memory summary scale=MB

JVM total memory

Native Memory Tracking:
Total: reserved=1764MB, committed=534MB

NMT reports that the process currently has 1764MB of reserved memory and 534MB of committed memory, far more than the 300MB maximum heap. Reserved means a contiguous range of virtual addresses has been set aside for the process, and can be understood as the memory the process may use; committed means virtual addresses have been mapped to physical memory, and can be understood as the memory the process currently occupies.

It should be noted that the memory counted by NMT differs from the memory counted by the operating system. Linux allocates memory lazily: a page is only mapped to physical memory when the process actually touches it, so the physical memory usage shown by the top command differs from what the NMT report shows. NMT is used here only to describe memory usage from the JVM’s perspective.
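
The reserved-versus-committed distinction can also be observed for the heap alone from inside the process: Runtime.maxMemory() roughly corresponds to the reserved heap and Runtime.totalMemory() to the committed part. The snippet below is only illustrative; the exact figures depend on the GC and the heap flags used.

public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("max   (~ reserved heap, -Xmx): " + rt.maxMemory() / mb + " MB");
        System.out.println("total (committed heap):        " + rt.totalMemory() / mb + " MB");
        System.out.println("free  (committed but unused):  " + rt.freeMemory() / mb + " MB");
    }
}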

Java Heap

Java Heap (reserved=300MB, committed=300MB)
    (mmap: reserved=300MB, committed=300MB)

The Java heap is exactly the size we configured, and the full 300MB is reserved and committed up front (since -Xms equals -Xmx).

Metaspace

Class (reserved=1078MB, committed=61MB)
      (classes #11183)
      (malloc=2MB #19375) 
      (mmap: reserved=1076MB, committed=60MB)

Loaded classes are stored in Metaspace. Here the metaspace has loaded 11183 classes, reserving almost 1GB of address space and committing 61MB.

The more classes are loaded, the more metaspace is used. The metaspace size is limited by -XX:MaxMetaspaceSize (unlimited by default) and -XX:CompressedClassSpaceSize (1GB by default).
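
Besides NMT, metaspace usage can also be watched from inside the application through the standard memory pool MXBeans. The sketch below prints the used and committed size of the Metaspace pool; the pool name assumes a HotSpot JVM.

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceUsage {
    public static void main(String[] args) {
        long mb = 1024 * 1024;
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if ("Metaspace".equals(pool.getName())) {
                System.out.println("Metaspace used:      " + pool.getUsage().getUsed() / mb + " MB");
                System.out.println("Metaspace committed: " + pool.getUsage().getCommitted() / mb + " MB");
                // getMax() returns -1 when no -XX:MaxMetaspaceSize limit is set
                System.out.println("Metaspace max:       " + pool.getUsage().getMax());
            }
        }
    }
}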