public import
std.experimental.allocator.building_blocks.affix_allocator,
std.experimental.allocator.building_blocks.aligned_block_list,
std.experimental.allocator.building_blocks.allocator_list,
std.experimental.allocator.building_blocks.ascending_page_allocator,
std.experimental.allocator.building_blocks.bucketizer,
std.experimental.allocator.building_blocks.fallback_allocator,
std.experimental.allocator.building_blocks.free_list,
std.experimental.allocator.building_blocks.free_tree,
std.experimental.allocator.gc_allocator,
std.experimental.allocator.building_blocks.bitmapped_block,
std.experimental.allocator.building_blocks.kernighan_ritchie,
std.experimental.allocator.mallocator,
std.experimental.allocator.mmap_allocator,
std.experimental.allocator.building_blocks.null_allocator,
std.experimental.allocator.building_blocks.quantizer,
std.experimental.allocator.building_blocks.region,
std.experimental.allocator.building_blocks.segregator,
std.experimental.allocator.building_blocks.stats_collector;
Assembling Your Own Allocator
This package also implements untyped composable memory allocators. They are untyped because they deal exclusively in void[] and have no notion of the type the allocated memory is destined for. They are composable because the included allocators are building blocks that can be assembled into complex, nontrivial allocators.
Unlike the allocators for the C and C++ programming languages, which manage the allocated size internally, these allocators require that the client maintains (or knows a priori) the allocation size for each piece of memory allocated. Put simply, the client must pass the allocated size upon deallocation. Storing the size in the _allocator has significant negative performance implications, and is virtually always redundant because client code needs knowledge of the allocated size in order to avoid buffer overruns. (See more discussion in a proposal for sized deallocation in C++.) For this reason, allocators herein traffic in void[] as opposed to void*.
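A minimal sketch of this convention, using Mallocator from this package: the slice returned by allocate records the length that must be handed back to deallocate, so the allocator stores no size metadata of its own.
----
import std.experimental.allocator.mallocator : Mallocator;

void main()
{
    // The returned void[] slice records how much was allocated.
    void[] buf = Mallocator.instance.allocate(64);
    assert(buf.length == 64);
    // The client passes the same slice back, conveying the size.
    Mallocator.instance.deallocate(buf);
}
----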
In order to be usable as an _allocator, a type should implement the following methods with their respective semantics. Only alignment and allocate are required. If any of the other methods is missing, the _allocator is assumed not to have that capability (for example, some allocators do not offer manual deallocation of memory). Allocators should NOT define unsupported methods as stubs that always fail. For example, an allocator that lacks the capability to implement alignedAllocate should not define it at all (as opposed to defining it to always return null or throw an exception). The missing implementation statically informs other components about the allocator's capabilities and allows them to make design decisions accordingly.
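As an illustration (the type below is hypothetical, not part of this package), a skeletal allocator may provide only the two required members; composing code can then detect the absent capabilities at compile time:
----
import std.experimental.allocator.mallocator : Mallocator;

// Hypothetical example type: only the two required members are present.
struct LeakyAllocator
{
    enum uint alignment = Mallocator.alignment;

    void[] allocate(size_t n)
    {
        return Mallocator.instance.allocate(n);
    }
    // No deallocate, alignedAllocate, expand, etc.: other components can
    // see at compile time that those capabilities are absent.
}

static assert( __traits(hasMember, LeakyAllocator, "allocate"));
static assert(!__traits(hasMember, LeakyAllocator, "deallocate"));

void main()
{
    LeakyAllocator a;
    auto b = a.allocate(32);
    assert(b.length == 32);
    // The memory is never freed here; the type advertises no deallocation.
}
----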
Sample Assembly
The example below features an _allocator modeled after jemalloc, which uses a battery of free-list allocators spaced so as to keep internal fragmentation to a minimum. The FList definitions specify no bounds for the freelist because the Segregator does all size selection in advance.
Sizes through 3584 bytes are handled via freelists of staggered sizes. Sizes from 3585 bytes through 4072 KB are handled by an AllocatorList of GCAllocator-backed Regions of at least 4 MB each. Sizes above that are passed directly to the GCAllocator.
$(RUNNABLE_EXAMPLE
----
import std.experimental.allocator;
import std.algorithm.comparison : max;

alias FList = FreeList!(GCAllocator, 0, unbounded);
alias A = Segregator!(
    8, FreeList!(GCAllocator, 0, 8),
    128, Bucketizer!(FList, 1, 128, 16),
    256, Bucketizer!(FList, 129, 256, 32),
    512, Bucketizer!(FList, 257, 512, 64),
    1024, Bucketizer!(FList, 513, 1024, 128),
    2048, Bucketizer!(FList, 1025, 2048, 256),
    3584, Bucketizer!(FList, 2049, 3584, 512),
    4072 * 1024, AllocatorList!(n => Region!GCAllocator(max(n, 1024 * 4096))),
    GCAllocator
);

A tuMalloc;
auto b = tuMalloc.allocate(500);
assert(b.length == 500);
auto c = tuMalloc.allocate(113);
assert(c.length == 113);
assert(tuMalloc.expand(c, 14));
tuMalloc.deallocate(b);
tuMalloc.deallocate(c);
----
)
Allocating memory for sharing across threads
One allocation pattern used in multithreaded applications is to share memory across threads, and to deallocate blocks in a different thread than the one that allocated them.
All allocators in this module accept and return void[] (as opposed to shared void[]). This is because at the time of allocation, deallocation, or reallocation, the memory is effectively not shared (if it were, it would reveal a bug at the application level).
The issue remains of calling a.deallocate(b) from a different thread than the one that allocated b. It follows that both threads must have access to the same instance a of the respective allocator type. By definition of D, this is possible only if a has the shared qualifier. It follows that the allocator type must implement allocate and deallocate as shared methods. That way, the allocator commits to allowing usable shared instances.
Conversely, allocating memory with one non-shared allocator, passing it across threads (by casting the obtained buffer to shared), and later deallocating it in a different thread (either with a different allocator object or with the same allocator object after casting it to shared) is illegal.
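A sketch of the legal pattern, using Mallocator (whose methods are shared): one thread allocates and another deallocates through the same shared instance.
----
import core.thread : Thread;
import std.experimental.allocator.mallocator : Mallocator;

void main()
{
    // Allocate in this thread; cast the buffer to shared to hand it across threads.
    shared(void)[] buf = cast(shared) Mallocator.instance.allocate(100);

    // Deallocate in another thread, through the same shared allocator instance.
    auto t = new Thread({
        Mallocator.instance.deallocate(cast(void[]) buf);
    });
    t.start();
    t.join();
}
----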
Building Blocks
The table below gives a synopsis of the predefined allocator building blocks, with their respective modules. Either import the needed modules individually, or import std.experimental.allocator.building_blocks, which imports them all publicly. The building blocks can be assembled in unbounded ways and also combined with your own. For a collection of typical and useful preassembled allocators, and for inspiration in defining more such assemblies, refer to std.experimental.allocator.showcase.
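As a small, illustrative sketch of such a combination (not a predefined assembly), a fixed-size in-place region can be backed up by the GC for requests it cannot satisfy:
----
import std.experimental.allocator.building_blocks; // publicly imports the blocks used below
import std.experimental.allocator.gc_allocator : GCAllocator;

void main()
{
    // A 1 KB in-situ region, with the GC taking over for larger requests.
    FallbackAllocator!(InSituRegion!1024, GCAllocator) a;

    auto small = a.allocate(100);    // fits in the in-situ region
    auto large = a.allocate(10_000); // exceeds the region, served by the GC
    assert(small.length == 100 && large.length == 10_000);
}
----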