We run several daemons on a Linux system and came across strange behaviour regarding the virtual memory size of our processes. When the amount of physical memory available in the system suddenly drops to a very small amount (a few megabytes), our daemons start to consume an enormous amount of virtual memory, as we can observe with the top command.
I thought a little about virtual memory management and concluded that there can be two types of memory fragmentation. The first happens on the physical memory side, where a page cannot be freed because some of its bytes are still in use.
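A rough way to observe this first kind of fragmentation is a sketch like the following (the block size and count are arbitrary, chosen only for illustration): allocate many page-sized blocks, free every other one, and watch RSS stay far above the amount of live data.

```c
#include <stdio.h>
#include <stdlib.h>

#define N     10000
#define BLOCK 4096   /* roughly one page per allocation */

int main(void) {
    static char *blocks[N];

    for (int i = 0; i < N; i++)
        blocks[i] = malloc(BLOCK);

    /* Free every other block. The freed chunks are scattered across
       the heap, so the allocator generally cannot return those
       partially used pages to the kernel: RSS stays near the full
       ~40 MiB even though only ~20 MiB of data is still live. */
    for (int i = 0; i < N; i += 2) {
        free(blocks[i]);
        blocks[i] = NULL;
    }

    puts("check RSS in top: most of the freed memory is still resident");
    getchar();
    return 0;
}
```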
We use "shm_open" to create a shared memory object and then "mmap" to map it into a memory region. However, later on, when the code actually accesses that memory, in some corner cases it hits a bus error (SIGBUS) because the underlying physical memory has run out.
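For context, here is a minimal sketch of that pattern (the object name "/demo_shm" and the 64 MiB size are made up for illustration). Note that mmap itself succeeds; the SIGBUS is raised on the first touch of a page that the tmpfs backing the object can no longer supply.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const size_t size = 64 * 1024 * 1024;   /* 64 MiB, illustrative */

    int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
    if (fd == -1) { perror("shm_open"); return 1; }

    /* Size the object. This only sets its length; no physical pages
       are allocated yet. */
    if (ftruncate(fd, size) == -1) { perror("ftruncate"); return 1; }

    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd);

    /* Pages are faulted in lazily on first touch. If tmpfs cannot
       supply a page here, the process receives SIGBUS. */
    memset(p, 0, size);

    munmap(p, size);
    shm_unlink("/demo_shm");
    return 0;
}
```

One common mitigation is to reserve the backing pages up front, for example with posix_fallocate on the descriptor (or by touching every page immediately after mapping), so that a shortage surfaces as an error return at setup time rather than a signal in the middle of normal operation.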
We know that on a 32-bit machine every process has 4 GB of virtual address space.
Since virtual memory is not physical memory, why doesn't the operating system allocate all of that virtual memory to the process, instead of setting a "program break" to limit its heap space?
Even if the operating system allocated all 4 GB of virtual memory to a process, physical memory would only be consumed when the process accessed an address that was not yet mapped to a physical page, triggering a page fault.
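That demand-paging behaviour is easy to observe with a small experiment (the 1 GiB size is arbitrary): reserve a large anonymous mapping, then compare VSZ and RSS in top before and after touching the pages.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    const size_t size = 1UL << 30;   /* 1 GiB of address space */

    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* VSZ has grown by 1 GiB, but RSS has barely moved: no page has
       been faulted in yet. */
    puts("mapped; check VSZ vs RSS in top, then press Enter");
    getchar();

    /* Writing to every page forces the kernel to back it with a
       physical frame, so RSS now climbs toward 1 GiB. */
    memset(p, 1, size);
    puts("touched; check RSS again, then press Enter");
    getchar();

    munmap(p, size);
    return 0;
}
```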
I have an application that reserves a contiguous block of address space on Windows using VirtualAllocEx with the MEM_RESERVE flag. This reserves the virtual memory range but does not back it with physical pages or page-file space.
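A minimal sketch of that reserve-then-commit pattern, using VirtualAlloc on the current process for brevity (VirtualAllocEx adds a target process handle but is otherwise the same); the sizes are illustrative:

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    const SIZE_T reserveSize = 256 * 1024 * 1024;  /* 256 MiB of address space */
    const SIZE_T commitSize  = 1024 * 1024;        /* commit the first 1 MiB */

    /* Reserve a contiguous range: only address space is taken, no
       physical pages or page-file space are charged yet. */
    void *base = VirtualAlloc(NULL, reserveSize, MEM_RESERVE, PAGE_NOACCESS);
    if (!base) { fprintf(stderr, "reserve failed: %lu\n", GetLastError()); return 1; }

    /* Committing a sub-range is what charges the commit limit and
       makes those pages accessible. */
    void *chunk = VirtualAlloc(base, commitSize, MEM_COMMIT, PAGE_READWRITE);
    if (!chunk) { fprintf(stderr, "commit failed: %lu\n", GetLastError()); return 1; }

    ((char *)base)[0] = 42;   /* safe now: the page is committed */

    VirtualFree(base, 0, MEM_RELEASE);
    return 0;
}
```

Touching a reserved-but-uncommitted page raises an access violation, which is why the commit step has to happen before the memory is used.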