We use "shm_open" to create a shared memory object, and then "mmap" to map it into our address space. Later, however, when the code actually accesses the memory, in some corner cases it hits a "bus error" (SIGBUS) because the underlying physical backing store has run out.
We know that every process has 4 GB of virtual address space on a 32-bit machine.
Since virtual memory is not physical memory, why doesn't the operating system give each process its entire virtual address space up front, instead of setting a "program break" to limit its heap space?
Even if the operating system allocated all 4 GB of virtual address space to a process, physical memory would be consumed only when the process accesses an address that is not yet mapped to a physical page: the access triggers a page fault, and only then does the kernel assign a physical frame.
I have an application that reserves a contiguous memory block using VirtualAllocEx on Windows with the MEM_RESERVE flag. This reserves a range of virtual address space, but does not back it with physical pages or page-file space.
Every process can address 2^32 (or 2^64) bytes of virtual memory. The moment you request a read or a write at one of these memory addresses, the MMU translates it to a physical memory location, using the page tables of the current process, before the access is sent to your L1 data cache.
How much can I overcommit the physical memory of my host when assigning memory to my guest machines?
Host physical memory: 10 GB
Is it acceptable, for example, to assign 2 GB of virtual memory to each of 7 machines, for a total of 14 GB? How far can I overcommit memory while ensuring that ballooning and the other host memory-freeing techniques still work fine?
I have a process that is reporting in 'top' that it has 6GB of resident memory and 70GB of virtual memory allocated. The strange thing is that this particular server only has 8GB physical and 35GB of swap space available.
From the 'top' manual:
o: VIRT -- Virtual Image (kb)
The total amount of virtual memory used by the task.