We are running into issues with one of our applications requesting too much memory on the cluster. We need to set appropriate limits and ensure that the application knows how much memory it has available.
To begin, we need some code to test how many cores and how much memory the application thinks it has. The application that is causing us issues is written in Java, but we're going to do this in python3 so that we can easily debug what is going on.
There are two python3 modules that you can use to test what is available: psutil (python system and process utilities) and resource (basic mechanisms for measuring and controlling system resources utilized by a program). The former gives us access to core system information, while the latter gives us access to the resource limits placed on the process. Here is some code to print what is, or may be, available. Before we start, a little helper function to convert bytes to a human-readable format (from this SO post):
```python
def sizeof_fmt(num, suffix='B'):
    for unit in ['', 'Ki', 'Mi', 'Gi', 'Ti', 'Pi', 'Ei', 'Zi']:
        if abs(num) < 1024.0:
            return "%3.1f%s%s" % (num, unit, suffix)
        num /= 1024.0
    return "%.1f%s%s" % (num, 'Yi', suffix)
```
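For example, sizeof_fmt(1048576) gives 1.0MiB, and sizeof_fmt(2.5 * 1024**3) gives 2.5GiB.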
Here we figure out our hostname and the important information:
```python
import socket
import psutil
import resource

hostname = socket.gethostname()

m = psutil.virtual_memory()
c = psutil.cpu_count()
t = sizeof_fmt(m.total)
a = sizeof_fmt(m.available)

(ms, mh) = resource.getrlimit(resource.RLIMIT_AS)
if ms > 0:
    ms = sizeof_fmt(ms)
if mh > 0:
    mh = sizeof_fmt(mh)

print(f"Running on: {hostname}")
print(f"Number of cpus: {c}")
print(f"Total memory: {t}\nAvailable memory: {a}")
print(f"Memory limit (ulimit): soft: {ms} hard: {mh}")
```
You may be wondering why we used resource.RLIMIT_AS to get the virtual memory as opposed to resource.RLIMIT_VMEM. Linux systems don't report RLIMIT_VMEM, and instead use RLIMIT_AS for address space.
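If you want the test script to be portable across systems, you can check which constant the platform actually exposes rather than assuming; here is a small sketch:

```python
import resource

# RLIMIT_VMEM is only defined on some platforms; Linux exposes the
# same concept as RLIMIT_AS (address space), so fall back to that.
if hasattr(resource, "RLIMIT_VMEM"):
    limits = resource.getrlimit(resource.RLIMIT_VMEM)
else:
    limits = resource.getrlimit(resource.RLIMIT_AS)
print(limits)
```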
Note that we are using both psutil and resource to get information, and they tell us different things. If I run these on my laptop, I see something like this:
```
Running on: Laptop
Number of cpus: 8
Total memory: 15.6GiB
Available memory: 7.0GiB
Memory limit (ulimit): soft: -1 hard: -1
```
Note that the memory limit is -1 for both hard and soft limits (from the resource man page: the soft limit is the current limit, and may be lowered or raised by a process over time. The soft limit can never exceed the hard limit. The hard limit can be lowered to any value greater than the soft limit, but not raised.) This value is actually the value of resource.RLIM_INFINITY and so may not be -1 in your case (but probably is)!
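If you prefer friendlier output, you can compare against resource.RLIM_INFINITY explicitly rather than testing for -1; a small sketch, reusing the sizeof_fmt helper from above:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)

def fmt_limit(value):
    # RLIM_INFINITY is usually -1 on Linux, but compare against the
    # constant rather than hard-coding the number.
    if value == resource.RLIM_INFINITY:
        return "unlimited"
    return sizeof_fmt(value)

print(f"Memory limit (ulimit): soft: {fmt_limit(soft)} hard: {fmt_limit(hard)}")
```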
The equivalent information is pulled from /proc/cpuinfo or /proc/meminfo on a Linux system, and the memory limit comes from ulimit (see the man page).
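If you are curious where those numbers come from on Linux, you can read /proc/meminfo directly. Here is a rough sketch; it reuses sizeof_fmt and assumes the MemTotal and MemAvailable fields, which are present on reasonably recent kernels:

```python
# Parse /proc/meminfo into a dict of {field: value in kB} (Linux only).
def read_meminfo(path="/proc/meminfo"):
    info = {}
    with open(path) as fh:
        for line in fh:
            key, _, rest = line.partition(":")
            # Lines look like "MemTotal:       16384256 kB"
            info[key] = int(rest.split()[0])
    return info

meminfo = read_meminfo()
print("MemTotal:    ", sizeof_fmt(meminfo["MemTotal"] * 1024))
print("MemAvailable:", sizeof_fmt(meminfo["MemAvailable"] * 1024))
```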
So … how does this help us on the cluster? Let's try a few simple tests. I create a file called mem.sh that basically just runs the python3 code above. When I submit it with default parameters, this is what I get:
```
$ qsub -cwd -o mem.out -e mem.err ./mem.sh

Running on: node15
At the start:
Number of cpus: 16
Total memory: 125.9GiB
Available memory: 124.1GiB
Memory limit (ulimit): soft: -1 hard: -1
```
On my cluster, node15 has 16 CPUs and 126 GiB RAM, but some of it is currently being used.
With SGE, you can pass a couple of parameters to adjust the memory settings. If we restrict memory usage using the h_vmem setting, we see this:
```
$ qsub -cwd -o mem.out -e mem.err -l h_vmem=1G ./mem.sh

Running on: node48
At the start:
Number of cpus: 16
Total memory: 125.9GiB
Available memory: 123.8GiB
Memory limit (ulimit): soft: 1.0GiB hard: 1.0GiB
```
In this case, adding the -l h_vmem option has limited the amount of resources available via ulimit, and has set both hard and soft limits.
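One way to convince yourself that the limit is really enforced is to try to allocate past it from inside the job. This is a quick sketch (not part of mem.sh): under a 1 GiB address-space limit, the allocation should fail with a MemoryError.

```python
# Try to allocate ~2 GiB; with h_vmem=1G the address-space limit
# should make this raise MemoryError rather than succeed.
try:
    blob = bytearray(2 * 1024**3)
    print("allocation succeeded")
except MemoryError:
    print("allocation failed: hit the address-space limit")
```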
In contrast, setting s_vmem sets the ulimit soft limit, but leaves the hard limit unchanged:
```
$ qsub -cwd -o mem.out -e mem.err -l s_vmem=2G ./mem.sh

Running on: node47
At the start:
Number of cpus: 16
Total memory: 125.9GiB
Available memory: 123.8GiB
Memory limit (ulimit): soft: 2.0GiB hard: -1
```
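The distinction matters because a process may raise its own soft limit, but only up to the hard limit. So a job submitted with s_vmem could undo the restriction itself, while one submitted with h_vmem could not. A small sketch:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("before:", soft, hard)

# Raising the soft limit only works while the new value stays at or
# below the hard limit: with an unlimited hard limit (s_vmem) this
# succeeds, with a matching hard limit (h_vmem) it raises ValueError.
try:
    resource.setrlimit(resource.RLIMIT_AS, (4 * 1024**3, hard))
    print("after: ", resource.getrlimit(resource.RLIMIT_AS))
except ValueError:
    print("could not raise the soft limit above the hard limit")
```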
Using Java?
Unfortunately, setting the limit on SGE using -l h_vmem causes Java to crash with a known bug. You will see an error like this:
```
Error occurred during initialization of VM
Could not allocate metaspace: 1073741824 bytes
```
There is a workaround, and on my cluster I have to set both of these:
First, export MALLOC_ARENA_MAX and ensure that your qsub inherits this variable (e.g. qsub -V):
```
export MALLOC_ARENA_MAX=4
```
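If you want to confirm that the variable actually made it into the job's environment, you can print it from the python3 test script, something like:

```python
import os

# With qsub -V this should print "4"; None means the variable was not inherited.
print("MALLOC_ARENA_MAX:", os.environ.get("MALLOC_ARENA_MAX"))
```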
Then append this Java option:
```
-XX:CompressedClassSpaceSize=64m
```
That got Java to run, but it would still crash if I tried to do anything remotely complex.