libfoedus-core
FOEDUS Core Library
foedus::memory Namespace Reference

Memory Manager, which controls memory allocations, deallocations, and sharing. More...

Detailed Description

Memory Manager, which controls memory allocations, deallocations, and sharing.

This package contains classes that control memory allocations, deallocations, and sharing.

Classes

class  AlignedMemory
 Represents one memory block aligned to actual OS/hardware pages. More...
 
struct  AlignedMemorySlice
 A slice of foedus::memory::AlignedMemory. More...
 
struct  AutoVolatilePageReleaseScope
 Automatically releases a page offset acquired for a volatile page. More...
 
class  DivvyupPageGrabBatch
 A helper class to grab a bunch of pages from multiple nodes in an arbitrary fashion. More...
 
class  EngineMemory
 Repository of all memories dynamically acquired and shared within one database engine. More...
 
struct  GlobalVolatilePageResolver
 Resolves an offset in a volatile page pool to an actual pointer and vice versa. More...
 
struct  LocalPageResolver
 Resolves an offset in local (same NUMA node) page pool to a pointer and vice versa. More...
 
struct  MemoryOptions
 Set of options for memory manager. More...
 
class  NumaCoreMemory
 Repository of memories dynamically acquired within one CPU core (thread). More...
 
class  NumaNodeMemory
 Repository of memories dynamically acquired and shared within one NUMA node (socket). More...
 
class  NumaNodeMemoryRef
 A view of NumaNodeMemory for other SOCs and the master engine. More...
 
class  PagePool
 Page pool for the volatile read/write store (VolatilePage) and the read-only buffer pool (SnapshotPage). More...
 
struct  PagePoolControlBlock
 Shared data in PagePoolPimpl. More...
 
class  PagePoolOffsetAndEpochChunk
 Used to store an epoch value with each entry in PagePoolOffsetChunk. More...
 
class  PagePoolOffsetChunk
 To reduce the overhead of grabbing/releasing pages from pool, we pack this many pointers for each grab/release. More...
 
class  PagePoolOffsetDynamicChunk
 Used to point to an already existing array. More...
 
class  PagePoolPimpl
 Pimpl object of PagePool. More...
 
class  PageReleaseBatch
 A helper class to return a bunch of pages to individual nodes. More...
 
class  RoundRobinPageGrabBatch
 A helper class to grab a bunch of pages from multiple nodes in round-robin fashion per chunk. More...
 
struct  ScopedNumaPreferred
 Automatically sets and resets numa_set_preferred(). More...
 
class  SharedMemory
 Represents memory shared between processes. More...
 

Typedefs

typedef uint32_t PagePoolOffset
 Offset in PagePool that compactly represents a page address (unlike an 8-byte pointer). More...
 

Functions

bool is_1gb_hugepage_enabled ()
 Returns whether 1GB hugepages are enabled. More...
 
void _dummy_static_size_check__COUNTER__ ()
 
char * alloc_mmap (uint64_t size, uint64_t alignment)
 
void * alloc_mmap_1gb_pages (uint64_t size)
 
std::ostream & operator<< (std::ostream &o, const AlignedMemory &v)
 
std::ostream & operator<< (std::ostream &o, const AlignedMemorySlice &v)
 
int64_t get_numa_node_size (int node)
 
std::ostream & operator<< (std::ostream &o, const PagePool &v)
 
std::ostream & operator<< (std::ostream &o, const PagePoolPimpl &v)
 
std::ostream & operator<< (std::ostream &o, const SharedMemory &v)
 

Variables

const uint64_t kHugepageSize = 1 << 21
 So far, 2MB is the only page size available via Transparent Huge Pages (THP). More...
 

Function Documentation

void foedus::memory::_dummy_static_size_check__COUNTER__ ( )
inline

Definition at line 417 of file page_pool.hpp.

char* foedus::memory::alloc_mmap ( uint64_t  size,
uint64_t  alignment 
)

Definition at line 64 of file aligned_memory.cpp.

References is_1gb_hugepage_enabled(), MAP_HUGE_1GB, MAP_HUGE_2MB, and foedus::assorted::os_error().

Referenced by foedus::memory::AlignedMemory::alloc(), and alloc_mmap_1gb_pages().

 64  {
 65    // std::lock_guard<std::mutex> guard(mmap_allocate_mutex);
 66    // we don't use MAP_POPULATE because it will block here and also serialize hugepage allocation!
 67    // even if we run mmap in parallel, linux serializes the looooong population in all numa nodes.
 68    // lame. we will memset right after this.
 69    int pagesize;
 70    if (alignment >= (1ULL << 30)) {
 71      if (is_1gb_hugepage_enabled()) {
 72        pagesize = MAP_HUGE_1GB | MAP_HUGETLB;
 73      } else {
 74        pagesize = MAP_HUGE_2MB | MAP_HUGETLB;
 75      }
 76    } else if (alignment >= (1ULL << 21)) {
 77      pagesize = MAP_HUGE_2MB | MAP_HUGETLB;
 78    } else {
 79      pagesize = 0;
 80    }
 81    bool running_on_valgrind = RUNNING_ON_VALGRIND;
 82    if (running_on_valgrind) {
 83      // if this is running under valgrind, we have to avoid using hugepages due to a bug in valgrind.
 84      // When we are running on valgrind, we don't care performance anyway. So shouldn't matter.
 85      pagesize = 0;
 86    }
 87    char* ret = reinterpret_cast<char*>(::mmap(
 88      nullptr,
 89      size,
 90      PROT_READ | PROT_WRITE,
 91      MAP_ANONYMOUS | MAP_PRIVATE | pagesize,  // | MAP_NORESERVE
 92      -1,
 93      0));
 94    // Note: We previously used MAP_NORESERVE to explicitly say we don't want swapping,
 95    // but mmap with this flag causes SIGSEGV when there aren't enough hugepages.
 96    // In that case mmap doesn't return -1 because it just checks if VA space is mappable.
 97    // We still don't need swapping, but it won't hurt. sorta. Debuggability matters more.
 98
 99    // when mmap() fails, it returns -1 (MAP_FAILED)
100    if (ret == nullptr || ret == MAP_FAILED) {
101      LOG(FATAL) << "mmap() failed. size=" << size << ", error=" << assorted::os_error()
102        << ". This error usually means you don't have enough hugepages allocated."
103        << " eg) sudo sh -c 'echo 196608 > /proc/sys/vm/nr_hugepages'";
104    }
105    return ret;
106  }


void* foedus::memory::alloc_mmap_1gb_pages ( uint64_t  size)

Definition at line 108 of file aligned_memory.cpp.

References alloc_mmap(), and ASSERT_ND.

Referenced by foedus::memory::AlignedMemory::alloc().

108  {
109    ASSERT_ND(size % (1ULL << 30) == 0);
110    return alloc_mmap(size, 1ULL << 30);
111  }


int64_t foedus::memory::get_numa_node_size ( int  node)

Definition at line 49 of file numa_node_memory.cpp.

References numa_available(), and numa_node_size().

Referenced by foedus::memory::NumaNodeMemory::initialize_once(), and foedus::memory::NumaNodeMemory::uninitialize_once().

 49  {
 50    if (::numa_available() < 0) {
 51      return 0;
 52    } else {
 53      return ::numa_node_size(node, nullptr);
 54    }
 55  }


bool foedus::memory::is_1gb_hugepage_enabled ( )

Returns whether 1GB hugepages are enabled.

Definition at line 293 of file aligned_memory.cpp.

Referenced by alloc_mmap().

293  {
294    // /proc/meminfo should have "Hugepagesize: 1048576 kB"
295    // Unfortunately, sysinfo() doesn't provide this information. So, just read the whole file.
296    // Alternatively, we can use gethugepagesizes(3) in libhugetlbs, but I don't want to add
297    // a dependency just for that...
298    std::ifstream file("/proc/meminfo");
299    if (!file.is_open()) {
300      return false;
301    }
302
303    std::string line;
304    while (std::getline(file, line)) {
305      if (line.find("Hugepagesize:") != std::string::npos) {
306        break;
307      }
308    }
309    file.close();
310    if (line.find("1048576 kB") != std::string::npos) {
311      return true;
312    }
313    return false;
314  }


std::ostream& foedus::memory::operator<< ( std::ostream &  o,
const PagePool v 
)

Definition at line 148 of file page_pool.cpp.

148  {
149    o << v.pimpl_;
150    return o;
151  }
std::ostream& foedus::memory::operator<< ( std::ostream &  o,
const SharedMemory v 
)

Definition at line 245 of file shared_memory.cpp.

References foedus::memory::SharedMemory::get_block(), foedus::memory::SharedMemory::get_meta_path(), foedus::memory::SharedMemory::get_numa_node(), foedus::memory::SharedMemory::get_owner_pid(), foedus::memory::SharedMemory::get_shmid(), foedus::memory::SharedMemory::get_shmkey(), foedus::memory::SharedMemory::get_size(), and foedus::memory::SharedMemory::is_owned().

245  {
246    o << "<SharedMemory>";
247    o << "<meta_path>" << v.get_meta_path() << "</meta_path>";
248    o << "<size>" << v.get_size() << "</size>";
249    o << "<owned>" << v.is_owned() << "</owned>";
250    o << "<owner_pid>" << v.get_owner_pid() << "</owner_pid>";
251    o << "<numa_node>" << v.get_numa_node() << "</numa_node>";
252    o << "<shmid>" << v.get_shmid() << "</shmid>";
253    o << "<shmkey>" << v.get_shmkey() << "</shmkey>";
254    o << "<address>" << reinterpret_cast<uintptr_t>(v.get_block()) << "</address>";
255    o << "</SharedMemory>";
256    return o;
257  }


std::ostream& foedus::memory::operator<< ( std::ostream &  o,
const AlignedMemory v 
)

Definition at line 253 of file aligned_memory.cpp.

References foedus::memory::AlignedMemory::get_alignment(), foedus::memory::AlignedMemory::get_alloc_type(), foedus::memory::AlignedMemory::get_block(), foedus::memory::AlignedMemory::get_numa_node(), foedus::memory::AlignedMemory::get_size(), foedus::memory::AlignedMemory::is_null(), foedus::memory::AlignedMemory::kNumaAllocInterleaved, foedus::memory::AlignedMemory::kNumaAllocOnnode, foedus::memory::AlignedMemory::kNumaMmapOneGbPages, and foedus::memory::AlignedMemory::kPosixMemalign.

253  {
254    o << "<AlignedMemory>";
255    o << "<is_null>" << v.is_null() << "</is_null>";
256    o << "<size>" << v.get_size() << "</size>";
257    o << "<alignment>" << v.get_alignment() << "</alignment>";
258    o << "<alloc_type>" << v.get_alloc_type() << " (";
259    switch (v.get_alloc_type()) {
260      case AlignedMemory::kPosixMemalign:
261        o << "kPosixMemalign";
262        break;
263      case AlignedMemory::kNumaAllocInterleaved:
264        o << "kNumaAllocInterleaved";
265        break;
266      case AlignedMemory::kNumaAllocOnnode:
267        o << "kNumaAllocOnnode";
268        break;
269      case AlignedMemory::kNumaMmapOneGbPages:
270        o << "kNumaMmapOneGbPages";
271        break;
272      default:
273        o << "Unknown";
274    }
275    o << ")</alloc_type>";
276    o << "<numa_node>" << static_cast<int>(v.get_numa_node()) << "</numa_node>";
277    o << "<address>" << v.get_block() << "</address>";
278    o << "</AlignedMemory>";
279    return o;
280  }


std::ostream& foedus::memory::operator<< ( std::ostream &  o,
const AlignedMemorySlice v 
)

Definition at line 282 of file aligned_memory.cpp.

References foedus::memory::AlignedMemorySlice::count_, foedus::memory::AlignedMemorySlice::memory_, and foedus::memory::AlignedMemorySlice::offset_.

282  {
283    o << "<AlignedMemorySlice>";
284    o << "<offset>" << v.offset_ << "</offset>";
285    o << "<count>" << v.count_ << "</count>";
286    if (v.memory_) {
287      o << *v.memory_;
288    }
289    o << "</AlignedMemorySlice>";
290    return o;
291  }
std::ostream& foedus::memory::operator<< ( std::ostream &  o,
const PagePoolPimpl v 
)

Definition at line 343 of file page_pool_pimpl.cpp.

References foedus::memory::PagePoolPimpl::free_pool_capacity_, foedus::memory::PagePoolPimpl::free_pool_head(), foedus::memory::PagePoolPimpl::get_debug_pool_name(), foedus::memory::PagePoolPimpl::get_free_pool_count(), foedus::memory::PagePoolPimpl::memory_, foedus::memory::PagePoolPimpl::memory_size_, foedus::memory::PagePoolPimpl::owns_, foedus::memory::PagePoolPimpl::pages_for_free_pool_, and foedus::memory::PagePoolPimpl::rigorous_page_boundary_check_.

343  {
344    o << "<PagePool>"
345      << "<name_>" << v.get_debug_pool_name() << "</name_>"
346      << "<memory_>" << v.memory_ << "</memory_>"
347      << "<memory_size>" << v.memory_size_ << "</memory_size>"
348      << "<owns_>" << v.owns_ << "</owns_>"
349      << "<rigorous_page_boundary_check_>"
350      << v.rigorous_page_boundary_check_ << "</rigorous_page_boundary_check_>"
351      << "<pages_for_free_pool_>" << v.pages_for_free_pool_ << "</pages_for_free_pool_>"
352      << "<free_pool_capacity_>" << v.free_pool_capacity_ << "</free_pool_capacity_>"
353      << "<free_pool_head_>" << v.free_pool_head() << "</free_pool_head_>"
354      << "<free_pool_count_>" << v.get_free_pool_count() << "</free_pool_count_>"
355      << "</PagePool>";
356    return o;
357  }
