| m_mem_free | Yes | Memory consumed per CPU DURING execution | |
It is usually a good idea to add both resources. The ''mem_free'' complex is sensor-driven, and is more reliable for choosing a node for your job. The ''m_mem_free'' complex is consumable, which means you are reserving the memory for future use: other jobs using ''m_mem_free'' may be barred from starting on the node. If you specify memory resources for a parallel environment job, the requested memory is multiplied by the slot count. If not specified, ''m_mem_free'' defaults to 1GB of memory per core (slot).
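As a minimal sketch, a serial job could request both complexes on the ''qsub'' line. The 4G value and the job-script name ''job.qs'' are illustrative, not taken from this page:

```shell
# Request the same amount under both complexes:
#   mem_free   -> sensor-driven, guides node selection
#   m_mem_free -> consumable, reserves the memory for this job
MEM=4G
echo "qsub -l mem_free=${MEM},m_mem_free=${MEM} job.qs"
```

Requesting ''mem_free'' helps the scheduler pick a node that currently has the memory free, while ''m_mem_free'' keeps later jobs from oversubscribing it once yours starts.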
<note tip>When using a shared memory parallel computing environment ''-pe threads'', divide the total memory needed by the number of slots. For example, to request 48G of shared memory for an 8 thread job, request 6G (6G per slot).</note>
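The per-slot arithmetic in the tip above can be sketched in shell; the slot count, total memory, and the job-script name ''job.qs'' are illustrative values:

```shell
# Shared-memory (threads) job: divide total memory by slot count.
TOTAL_GB=48   # total shared memory the job needs
SLOTS=8       # number of threads (slots)

PER_SLOT_GB=$((TOTAL_GB / SLOTS))
echo "per-slot request: ${PER_SLOT_GB}G"   # per-slot request: 6G

# Hypothetical submission line built from the values above:
echo "qsub -pe threads ${SLOTS} -l m_mem_free=${PER_SLOT_GB}G job.qs"
```

Because the scheduler multiplies a parallel job's memory request by the slot count, requesting the full 48G per slot would reserve 384G, which is why the division matters.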