Memory management

Article snapshot taken from Wikipedia, licensed under Creative Commons Attribution-ShareAlike.
Memory management is the act of managing computer memory at the system level. The essential requirement of memory management is to provide ways to dynamically allocate portions of memory to programs at their request, and free them for reuse when no longer needed. This is critical to any advanced computer system where more than a single process might be underway at any time.





Revision as of 14:51, 13 July 2016

"Memory allocation" redirects here. For memory allocation in the brain, see Neuronal memory allocation. This article is about memory management at the application level. For memory management at the operating system level, see Memory management (operating systems).


Dynamic memory allocation

[Image: An example of external fragmentation]

Details

The task of fulfilling an allocation request consists of locating a block of unused memory of sufficient size. Memory requests are satisfied by allocating portions from a large pool of memory called the heap or free store. At any given time, some parts of the heap are in use, while some are "free" (unused) and thus available for future allocations.
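The allocate-use-free cycle described above is exactly what C's standard allocator exposes. A minimal sketch (the function name `heap_demo` is illustrative, not from any library):

```c
#include <stdlib.h>
#include <string.h>

/* Round-trip one request through the heap: the allocator locates a
   free block of at least the requested size, the program uses it,
   and free() returns it to the pool. Returns 0 on success. */
int heap_demo(void) {
    char *block = malloc(16);          /* request 16 bytes from the heap */
    if (block == NULL) return -1;      /* no sufficiently large free block */
    strcpy(block, "hello");            /* the block is ours to use */
    int ok = (strcmp(block, "hello") == 0);
    free(block);                       /* block becomes "free" again */
    return ok ? 0 : -1;
}
```

Note that `malloc` can fail when no block of sufficient size is available, so the return value must be checked before use.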

Several issues complicate the implementation, such as external fragmentation, which arises when many small gaps are left between allocated memory blocks, making the free space unusable for larger allocation requests. The allocator's metadata can also inflate the size of (individually) small allocations; this overhead is often reduced by chunking. The memory management system must track outstanding allocations to ensure that they do not overlap and that no memory is ever "lost" as a memory leak.
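A memory leak of the kind mentioned above occurs whenever the last pointer to a live allocation is lost before the block is freed; a deliberately buggy sketch (`leak_demo` is an illustrative name):

```c
#include <stdlib.h>

/* Sketch of a leak: the only pointer to the first block is overwritten
   before free(), so that block stays allocated but is unreachable and
   can never be reclaimed by the program. Returns 0 on success. */
int leak_demo(void) {
    int *p = malloc(sizeof *p);        /* block A */
    if (p == NULL) return -1;
    p = malloc(sizeof *p);             /* block A is now leaked */
    if (p == NULL) return -1;
    free(p);                           /* only block B is released */
    return 0;
}
```

Tools such as leak checkers exist precisely because the allocator itself has no way to know that block A will never be used again.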

Efficiency

The specific dynamic memory allocation algorithm implemented can impact performance significantly. A study conducted in 1994 by Digital Equipment Corporation illustrates the overheads involved for a variety of allocators. The lowest average instruction path length required to allocate a single memory slot was 52 (as measured with an instruction level profiler on a variety of software).

Implementations

Since the precise location of the allocation is not known in advance, the memory is accessed indirectly, usually through a pointer reference. The specific algorithm used to organize the memory area and allocate and deallocate chunks is interlinked with the kernel, and may use any of the following methods:

Fixed-size blocks allocation

Main article: Memory pool

Fixed-size blocks allocation, also called memory pool allocation, uses a free list of fixed-size blocks of memory (often all of the same size). This works well for simple embedded systems where no large objects need to be allocated, but suffers from fragmentation, especially with long memory addresses. However, due to the significantly reduced overhead, this method can substantially improve performance for objects that need frequent allocation and deallocation, and is often used in video games.
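The free-list mechanism can be sketched in a few lines: while a block is free, its first bytes store the pointer to the next free block, so the list costs no extra memory. The names (`pool_init`, `pool_alloc`, `pool_free`) and sizes are illustrative, not from any real library:

```c
#include <stddef.h>

#define BLOCK_SIZE 32
#define NUM_BLOCKS 8

/* While a block is free it holds the free-list link; while allocated,
   the same storage holds user data. */
typedef union block {
    union block *next;                 /* valid only while free */
    unsigned char data[BLOCK_SIZE];
} block_t;

static block_t pool[NUM_BLOCKS];
static block_t *free_list;

/* Thread every block onto the free list. */
void pool_init(void) {
    for (int i = 0; i < NUM_BLOCKS - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[NUM_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
}

/* O(1): pop the head of the free list. */
void *pool_alloc(void) {
    if (free_list == NULL) return NULL;  /* pool exhausted */
    block_t *b = free_list;
    free_list = b->next;
    return b;
}

/* O(1): push the block back onto the free list. */
void pool_free(void *p) {
    block_t *b = p;
    b->next = free_list;
    free_list = b;
}
```

Both operations are constant time with no searching and no per-block header, which is the source of the performance advantage noted above.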

Buddy blocks

Further information: Buddy memory allocation

In this system, memory is allocated into several pools of memory instead of just one, where each pool represents blocks of memory of a certain power of two in size, or blocks of some other convenient size progression. All blocks of a particular size are kept in a sorted linked list or tree and all new blocks that are formed during allocation are added to their respective memory pools for later use. If a smaller size is requested than is available, the smallest available size is selected and split. One of the resulting parts is selected, and the process repeats until the request is complete. When a block is allocated, the allocator will start with the smallest sufficiently large block to avoid needlessly breaking blocks. When a block is freed, it is compared to its buddy. If they are both free, they are combined and placed in the correspondingly larger-sized buddy-block list.
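In the common power-of-two formulation, finding a block's buddy is cheap: for a block of size `size` at offset `off` from the start of the arena (both powers of two, with blocks naturally aligned), the buddy lies at the offset obtained by flipping the single bit `size` — an XOR. A sketch of that address computation (`buddy_of` is an illustrative name):

```c
#include <stddef.h>

/* Offset of the buddy of the block at `off` with power-of-two `size`,
   both relative to the start of the arena. Splitting a block of size
   2s at offset o yields buddies at o and o + s, which differ only in
   bit s, hence the XOR. */
size_t buddy_of(size_t off, size_t size) {
    return off ^ size;
}
```

For example, the two 64-byte halves of the 128-byte block at offset 0 sit at offsets 0 and 64, and each is the other's buddy; freeing one lets the allocator check the other in constant time before coalescing.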

Systems with virtual memory

Main articles: Memory protection and Shared memory (interprocess communication)

Virtual memory is a method of decoupling the memory organization from the physical hardware. Applications address memory through virtual addresses; each time the program accesses stored data, the virtual address is translated to the corresponding physical address (typically by the memory management unit). In this way, virtual memory enables granular control over the memory system and its methods of access.

In virtual memory systems the operating system limits how a process can access the memory. This feature, called memory protection, can be used to disallow a process to read or write to memory that is not allocated to it, preventing malicious or malfunctioning code in one program from interfering with the operation of another.

Even though the memory allocated for specific processes is normally isolated, processes sometimes need to be able to share information. Shared memory is one of the fastest techniques for inter-process communication.

Memory is usually classified by access rate into primary storage and secondary storage. Memory management systems, among other operations, also handle the moving of information between these two levels of memory.

Notes

  1. Not to be confused with the unrelated heap data structure.

References

  1. Detlefs, D.; Dosser, A.; Zorn, B. (June 1994). "Memory allocation costs in large C and C++ programs" (PDF). Software: Practice and Experience. 24 (6): 527–542. doi:10.1002/spe.4380240602. CiteSeerX 10.1.1.30.3073.
  2. Gibson, Steve (August 15, 1988). "Tech Talk: Placing the IBM/Microsoft XMS Spec Into Perspective". InfoWorld.