libfoedus-core
FOEDUS Core Library
foedus::memory::NumaCoreMemory Class Reference (final)

Repository of memories dynamically acquired within one CPU core (thread). More...

Detailed Description

Repository of memories dynamically acquired within one CPU core (thread).

One NumaCoreMemory corresponds to one foedus::thread::Thread. Each Thread exclusively accesses its own NumaCoreMemory, so it needs no synchronization and avoids cache misses and cache-line ping-pongs. All memories here are allocated/freed via numa_alloc_interleaved(), numa_alloc_onnode(), and numa_free() (unless the user specifies not to use them).

Definition at line 46 of file numa_core_memory.hpp.

#include <numa_core_memory.hpp>

Inheritance diagram for foedus::memory::NumaCoreMemory:
Collaboration diagram for foedus::memory::NumaCoreMemory:

Classes

struct  SmallThreadLocalMemoryPieces
 Packs pointers to pieces of small_thread_local_memory_. More...
 

Public Member Functions

 NumaCoreMemory ()=delete
 
 NumaCoreMemory (Engine *engine, NumaNodeMemory *node_memory, thread::ThreadId core_id)
 
ErrorStack initialize_once () override
 
ErrorStack uninitialize_once () override
 
AlignedMemorySlice get_log_buffer_memory () const
 
NumaNodeMemory * get_node_memory () const
 Returns the parent memory repository. More...
 
PagePoolOffset grab_free_volatile_page ()
 Acquires one free volatile page from local page pool. More...
 
storage::VolatilePagePointer grab_free_volatile_page_pointer ()
 Wrapper for grab_free_volatile_page(). More...
 
PagePoolOffset grab_free_snapshot_page ()
 Same, except it's for snapshot page. More...
 
void release_free_volatile_page (PagePoolOffset offset)
 Returns one free volatile page to local page pool. More...
 
void release_free_snapshot_page (PagePoolOffset offset)
 Same, except it's for snapshot page. More...
 
memory::PagePool * get_volatile_pool ()
 
memory::PagePool * get_snapshot_pool ()
 
PagePoolOffsetAndEpochChunk * get_retired_volatile_pool_chunk (uint16_t node)
 
xct::LockEntry * get_current_lock_list_memory () const
 
uint64_t get_current_lock_list_capacity () const
 
xct::LockEntry * get_retrospective_lock_list_memory () const
 
uint64_t get_retrospective_lock_list_capacity () const
 
const SmallThreadLocalMemoryPieces & get_small_thread_local_memory_pieces () const
 
void * get_local_work_memory () const
 
uint64_t get_local_work_memory_size () const
 
- Public Member Functions inherited from foedus::DefaultInitializable
 DefaultInitializable ()
 
virtual ~DefaultInitializable ()
 
 DefaultInitializable (const DefaultInitializable &)=delete
 
DefaultInitializable & operator= (const DefaultInitializable &)=delete
 
ErrorStack initialize () override final
 Typical implementation of Initializable::initialize() that provides initialize-once semantics. More...
 
ErrorStack uninitialize () override final
 Typical implementation of Initializable::uninitialize() that provides uninitialize-once semantics. More...
 
bool is_initialized () const override final
 Returns whether the object has been already initialized or not. More...
 
- Public Member Functions inherited from foedus::Initializable
virtual ~Initializable ()
 

Static Public Member Functions

static uint64_t calculate_local_small_memory_size (const EngineOptions &options)
 

Constructor & Destructor Documentation

foedus::memory::NumaCoreMemory::NumaCoreMemory ( )
delete
foedus::memory::NumaCoreMemory::NumaCoreMemory ( Engine *  engine,
NumaNodeMemory *  node_memory,
thread::ThreadId  core_id 
)

Definition at line 39 of file numa_core_memory.cpp.

References ASSERT_ND, foedus::thread::compose_thread_id(), and foedus::memory::NumaNodeMemory::get_numa_node().

43  : engine_(engine),
44  node_memory_(node_memory),
45  core_id_(core_id),
46  numa_node_(thread::decompose_numa_node(core_id)),
47  core_local_ordinal_(thread::decompose_numa_local_ordinal(core_id)),
48  free_volatile_pool_chunk_(nullptr),
49  free_snapshot_pool_chunk_(nullptr),
50  retired_volatile_pool_chunks_(nullptr),
51  current_lock_list_memory_(nullptr),
52  current_lock_list_capacity_(0),
53  retrospective_lock_list_memory_(nullptr),
54  retrospective_lock_list_capacity_(0),
55  volatile_pool_(nullptr),
56  snapshot_pool_(nullptr) {
57  ASSERT_ND(numa_node_ == node_memory->get_numa_node());
58  ASSERT_ND(core_id_ == thread::compose_thread_id(node_memory->get_numa_node(),
59  core_local_ordinal_));
60 }

Here is the call graph for this function:

Member Function Documentation

uint64_t foedus::memory::NumaCoreMemory::calculate_local_small_memory_size ( const EngineOptions &  options)
static
Returns
the byte size of small_thread_local_memory each thread consumes

Definition at line 62 of file numa_core_memory.cpp.

References foedus::thread::ThreadOptions::group_count_, foedus::xct::Xct::kMaxPageVersionSets, foedus::xct::Xct::kMaxPointerSets, foedus::xct::XctOptions::max_lock_free_read_set_size_, foedus::xct::XctOptions::max_lock_free_write_set_size_, foedus::xct::XctOptions::max_read_set_size_, foedus::xct::XctOptions::max_write_set_size_, foedus::EngineOptions::thread_, foedus::thread::ThreadOptions::thread_count_per_group_, and foedus::EngineOptions::xct_.

Referenced by foedus::EngineOptions::calculate_required_memory(), and initialize_once().

62  {
63  uint64_t memory_size = 0;
64  // for the "shift" part, we calculate conservatively then skip it at the end.
65  // it's a wasted memory, but negligible.
66  memory_size += static_cast<uint64_t>(options.thread_.thread_count_per_group_) << 12;
67  memory_size += sizeof(xct::SysxctWorkspace);
68  memory_size += sizeof(xct::PageVersionAccess) * xct::Xct::kMaxPageVersionSets;
69  memory_size += sizeof(xct::PointerAccess) * xct::Xct::kMaxPointerSets;
70  const xct::XctOptions& xct_opt = options.xct_;
71  const uint16_t nodes = options.thread_.group_count_;
72  memory_size += sizeof(xct::ReadXctAccess) * xct_opt.max_read_set_size_;
73  memory_size += sizeof(xct::WriteXctAccess) * xct_opt.max_write_set_size_;
74  memory_size += sizeof(xct::LockFreeReadXctAccess)
75  * xct_opt.max_lock_free_read_set_size_;
76  memory_size += sizeof(xct::LockFreeWriteXctAccess)
77  * xct_opt.max_lock_free_write_set_size_;
78  memory_size += sizeof(memory::PagePoolOffsetAndEpochChunk) * nodes;
79 
80  // In reality almost no chance we take as many locks as all read/write-sets,
81  // but let's simplify that. Not much memory anyways.
82  const uint64_t total_access_sets = xct_opt.max_read_set_size_ + xct_opt.max_write_set_size_;
83  memory_size += sizeof(xct::LockEntry) * total_access_sets;
84  memory_size += sizeof(xct::LockEntry) * total_access_sets;
85  return memory_size;
86 }

Here is the caller graph for this function:

uint64_t foedus::memory::NumaCoreMemory::get_current_lock_list_capacity ( ) const
inline

Definition at line 95 of file numa_core_memory.hpp.

Referenced by foedus::xct::Xct::initialize().

95  {
96  return current_lock_list_capacity_;
97  }

Here is the caller graph for this function:

xct::LockEntry* foedus::memory::NumaCoreMemory::get_current_lock_list_memory ( ) const
inline

Definition at line 92 of file numa_core_memory.hpp.

Referenced by foedus::xct::Xct::initialize().

92  {
93  return current_lock_list_memory_;
94  }

Here is the caller graph for this function:

void* foedus::memory::NumaCoreMemory::get_local_work_memory ( ) const
inline

Definition at line 109 of file numa_core_memory.hpp.

References foedus::memory::AlignedMemory::get_block().

Referenced by foedus::xct::Xct::initialize().

109 { return local_work_memory_.get_block(); }

Here is the call graph for this function:

Here is the caller graph for this function:

uint64_t foedus::memory::NumaCoreMemory::get_local_work_memory_size ( ) const
inline

Definition at line 110 of file numa_core_memory.hpp.

References foedus::memory::AlignedMemory::get_size().

Referenced by foedus::xct::Xct::initialize().

110 { return local_work_memory_.get_size(); }

Here is the call graph for this function:

Here is the caller graph for this function:

AlignedMemorySlice foedus::memory::NumaCoreMemory::get_log_buffer_memory ( ) const
inline

Definition at line 64 of file numa_core_memory.hpp.

Referenced by foedus::log::ThreadLogBuffer::initialize_once().

64 { return log_buffer_memory_; }

Here is the caller graph for this function:

NumaNodeMemory* foedus::memory::NumaCoreMemory::get_node_memory ( ) const
inline

Returns the parent memory repository.

Definition at line 67 of file numa_core_memory.hpp.

Referenced by foedus::thread::Thread::get_node_memory().

67 { return node_memory_; }

Here is the caller graph for this function:

PagePoolOffsetAndEpochChunk * foedus::memory::NumaCoreMemory::get_retired_volatile_pool_chunk ( uint16_t  node)

Definition at line 253 of file numa_core_memory.cpp.

Referenced by foedus::thread::ThreadPimpl::collect_retired_volatile_page(), and foedus::thread::ThreadPimpl::uninitialize_once().

253  {
254  return retired_volatile_pool_chunks_ + node;
255 }

Here is the caller graph for this function:

uint64_t foedus::memory::NumaCoreMemory::get_retrospective_lock_list_capacity ( ) const
inline

Definition at line 101 of file numa_core_memory.hpp.

Referenced by foedus::xct::Xct::initialize().

101  {
102  return retrospective_lock_list_capacity_;
103  }

Here is the caller graph for this function:

xct::LockEntry* foedus::memory::NumaCoreMemory::get_retrospective_lock_list_memory ( ) const
inline

Definition at line 98 of file numa_core_memory.hpp.

Referenced by foedus::xct::Xct::initialize().

98  {
99  return retrospective_lock_list_memory_;
100  }

Here is the caller graph for this function:

const SmallThreadLocalMemoryPieces& foedus::memory::NumaCoreMemory::get_small_thread_local_memory_pieces ( ) const
inline

Definition at line 105 of file numa_core_memory.hpp.

Referenced by foedus::xct::Xct::initialize().

105  {
106  return small_thread_local_memory_pieces_;
107  }

Here is the caller graph for this function:

memory::PagePool* foedus::memory::NumaCoreMemory::get_snapshot_pool ( )
inline

Definition at line 89 of file numa_core_memory.hpp.

89 { return snapshot_pool_; }
memory::PagePool* foedus::memory::NumaCoreMemory::get_volatile_pool ( )
inline

Definition at line 88 of file numa_core_memory.hpp.

88 { return volatile_pool_; }
PagePoolOffset foedus::memory::NumaCoreMemory::grab_free_snapshot_page ( )

Same, except it's for snapshot page.

Definition at line 221 of file numa_core_memory.cpp.

References ASSERT_ND, foedus::memory::PagePoolOffsetChunk::empty(), foedus::kErrorCodeOk, foedus::memory::PagePoolOffsetChunk::pop_back(), and UNLIKELY.

Referenced by foedus::thread::ThreadPimpl::on_snapshot_cache_miss().

221  {
222  if (UNLIKELY(free_snapshot_pool_chunk_->empty())) {
223  if (grab_free_pages_from_node(free_snapshot_pool_chunk_, snapshot_pool_) != kErrorCodeOk) {
224  return 0;
225  }
226  }
227  ASSERT_ND(!free_snapshot_pool_chunk_->empty());
228  return free_snapshot_pool_chunk_->pop_back();
229 }

Here is the call graph for this function:

Here is the caller graph for this function:

PagePoolOffset foedus::memory::NumaCoreMemory::grab_free_volatile_page ( )

Acquires one free volatile page from local page pool.

Returns
acquired page, or 0 if no free page is available (OUTOFMEMORY).

This method does not return an error code, to keep it simple and fast. Instead, the caller MUST check whether the returned page is zero.

Definition at line 199 of file numa_core_memory.cpp.

References ASSERT_ND, foedus::memory::PagePoolOffsetChunk::empty(), foedus::kErrorCodeOk, foedus::memory::PagePoolOffsetChunk::pop_back(), and UNLIKELY.

Referenced by foedus::storage::masstree::allocate_new_border_page(), foedus::storage::sequential::SequentialStoragePimpl::append_record(), foedus::thread::ThreadPimpl::follow_page_pointer(), foedus::thread::ThreadPimpl::follow_page_pointers_for_read_batch(), foedus::thread::ThreadPimpl::follow_page_pointers_for_write_batch(), foedus::thread::GrabFreeVolatilePagesScope::grab(), grab_free_volatile_page_pointer(), and foedus::storage::masstree::ReserveRecords::run().

199  {
200  if (UNLIKELY(free_volatile_pool_chunk_->empty())) {
201  if (grab_free_pages_from_node(free_volatile_pool_chunk_, volatile_pool_) != kErrorCodeOk) {
202  return 0;
203  }
204  }
205  ASSERT_ND(!free_volatile_pool_chunk_->empty());
206  return free_volatile_pool_chunk_->pop_back();
207 }

Here is the call graph for this function:

Here is the caller graph for this function:

storage::VolatilePagePointer foedus::memory::NumaCoreMemory::grab_free_volatile_page_pointer ( )

Wrapper for grab_free_volatile_page().

Definition at line 208 of file numa_core_memory.cpp.

References grab_free_volatile_page(), and foedus::storage::VolatilePagePointer::set().

Referenced by foedus::storage::hash::ReserveRecords::create_new_tail_page(), foedus::storage::hash::HashStoragePimpl::follow_page_bin_head(), foedus::storage::masstree::grow_case_b_common(), and foedus::thread::ThreadPimpl::install_a_volatile_page().

208  {
209  storage::VolatilePagePointer ret;
210  ret.set(numa_node_, grab_free_volatile_page());
211  return ret;
212 }

Here is the call graph for this function:

Here is the caller graph for this function:

ErrorStack foedus::memory::NumaCoreMemory::initialize_once ( )
override virtual

Implements foedus::DefaultInitializable.

Definition at line 88 of file numa_core_memory.cpp.

References foedus::memory::NumaNodeMemory::allocate_numa_memory(), ASSERT_ND, calculate_local_small_memory_size(), CHECK_ERROR, foedus::memory::PagePoolOffsetAndEpochChunk::clear(), foedus::memory::AlignedMemory::get_block(), foedus::memory::PagePool::get_free_pool_capacity(), foedus::memory::NumaNodeMemory::get_log_buffer_memory_piece(), foedus::Engine::get_options(), foedus::memory::PagePool::get_recommended_pages_per_grab(), foedus::memory::NumaNodeMemory::get_snapshot_offset_chunk_memory_piece(), foedus::memory::NumaNodeMemory::get_snapshot_pool(), foedus::memory::NumaNodeMemory::get_volatile_offset_chunk_memory_piece(), foedus::memory::NumaNodeMemory::get_volatile_pool(), foedus::memory::PagePool::grab(), foedus::thread::ThreadOptions::group_count_, foedus::xct::Xct::kMaxPageVersionSets, foedus::xct::Xct::kMaxPointerSets, foedus::kRetOk, foedus::EngineOptions::memory_, foedus::memory::MemoryOptions::private_page_pool_initial_grab_, foedus::memory::NumaCoreMemory::SmallThreadLocalMemoryPieces::sysxct_workspace_memory_, foedus::EngineOptions::thread_, foedus::thread::ThreadOptions::thread_count_per_group_, WRAP_ERROR_CODE, foedus::EngineOptions::xct_, foedus::memory::NumaCoreMemory::SmallThreadLocalMemoryPieces::xct_lock_free_read_access_memory_, foedus::memory::NumaCoreMemory::SmallThreadLocalMemoryPieces::xct_lock_free_write_access_memory_, foedus::memory::NumaCoreMemory::SmallThreadLocalMemoryPieces::xct_page_version_memory_, foedus::memory::NumaCoreMemory::SmallThreadLocalMemoryPieces::xct_pointer_access_memory_, foedus::memory::NumaCoreMemory::SmallThreadLocalMemoryPieces::xct_read_access_memory_, and foedus::memory::NumaCoreMemory::SmallThreadLocalMemoryPieces::xct_write_access_memory_.

88  {
89  LOG(INFO) << "Initializing NumaCoreMemory for core " << core_id_;
90  free_volatile_pool_chunk_ = node_memory_->get_volatile_offset_chunk_memory_piece(
91  core_local_ordinal_);
92  free_snapshot_pool_chunk_ = node_memory_->get_snapshot_offset_chunk_memory_piece(
93  core_local_ordinal_);
94  volatile_pool_ = node_memory_->get_volatile_pool();
95  snapshot_pool_ = node_memory_->get_snapshot_pool();
96  log_buffer_memory_ = node_memory_->get_log_buffer_memory_piece(core_local_ordinal_);
97 
98  // allocate small_thread_local_memory_. it's a collection of small memories
99  uint64_t memory_size = calculate_local_small_memory_size(engine_->get_options());
100  if (memory_size > (1U << 21)) {
101  VLOG(1) << "mm, small_local_memory_size is more than 2MB(" << memory_size << ")."
102  " not a big issue, but consumes one more TLB entry...";
103  }
104  CHECK_ERROR(node_memory_->allocate_numa_memory(memory_size, &small_thread_local_memory_));
105 
106  const xct::XctOptions& xct_opt = engine_->get_options().xct_;
107  const uint16_t nodes = engine_->get_options().thread_.group_count_;
108  const uint16_t thread_per_group = engine_->get_options().thread_.thread_count_per_group_;
109  char* memory = reinterpret_cast<char*>(small_thread_local_memory_.get_block());
110  // "shift" 4kb for each thread on this node so that memory banks are evenly used.
111  // in many architecture, 13th- or 14th- bits are memory banks (see [JEONG11])
112  memory += static_cast<uint64_t>(core_local_ordinal_) << 12;
113  small_thread_local_memory_pieces_.sysxct_workspace_memory_ = memory;
114  memory += sizeof(xct::SysxctWorkspace);
115  small_thread_local_memory_pieces_.xct_page_version_memory_ = memory;
116  memory += sizeof(xct::PageVersionAccess) * xct::Xct::kMaxPageVersionSets;
117  small_thread_local_memory_pieces_.xct_pointer_access_memory_ = memory;
118  memory += sizeof(xct::PointerAccess) * xct::Xct::kMaxPointerSets;
119  small_thread_local_memory_pieces_.xct_read_access_memory_ = memory;
120  memory += sizeof(xct::ReadXctAccess) * xct_opt.max_read_set_size_;
121  small_thread_local_memory_pieces_.xct_write_access_memory_ = memory;
122  memory += sizeof(xct::WriteXctAccess) * xct_opt.max_write_set_size_;
123  small_thread_local_memory_pieces_.xct_lock_free_read_access_memory_ = memory;
124  memory += sizeof(xct::LockFreeReadXctAccess) * xct_opt.max_lock_free_read_set_size_;
125  small_thread_local_memory_pieces_.xct_lock_free_write_access_memory_ = memory;
126  memory += sizeof(xct::LockFreeWriteXctAccess) * xct_opt.max_lock_free_write_set_size_;
127  retired_volatile_pool_chunks_ = reinterpret_cast<PagePoolOffsetAndEpochChunk*>(memory);
128  memory += sizeof(memory::PagePoolOffsetAndEpochChunk) * nodes;
129 
130  const uint64_t total_access_sets = xct_opt.max_read_set_size_ + xct_opt.max_write_set_size_;
131  current_lock_list_memory_ = reinterpret_cast<xct::LockEntry*>(memory);
132  current_lock_list_capacity_ = total_access_sets;
133  memory += sizeof(xct::LockEntry) * total_access_sets;
134  retrospective_lock_list_memory_ = reinterpret_cast<xct::LockEntry*>(memory);
135  retrospective_lock_list_capacity_ = total_access_sets;
136  memory += sizeof(xct::LockEntry) * total_access_sets;
137 
138  memory += static_cast<uint64_t>(thread_per_group - core_local_ordinal_) << 12;
139  ASSERT_ND(reinterpret_cast<char*>(small_thread_local_memory_.get_block())
140  + memory_size == memory);
141 
142  for (uint16_t node = 0; node < nodes; ++node) {
143  retired_volatile_pool_chunks_[node].clear();
144  }
145 
146  CHECK_ERROR(node_memory_->allocate_numa_memory(
147  xct_opt.local_work_memory_size_mb_ * (1ULL << 20),
148  &local_work_memory_));
149 
150  // Each core starts from 50%-full free pool chunk (configurable)
151  uint32_t initial_pages = engine_->get_options().memory_.private_page_pool_initial_grab_;
152  {
153  uint32_t grab_count = std::min<uint32_t>(
154  volatile_pool_->get_recommended_pages_per_grab(),
155  std::min<uint32_t>(
156  initial_pages,
157  volatile_pool_->get_free_pool_capacity() / (2U * thread_per_group)));
158  WRAP_ERROR_CODE(volatile_pool_->grab(grab_count, free_volatile_pool_chunk_));
159  }
160  {
161  uint32_t grab_count = std::min<uint32_t>(
162  snapshot_pool_->get_recommended_pages_per_grab(),
163  std::min<uint32_t>(
164  initial_pages,
165  snapshot_pool_->get_free_pool_capacity() / (2U * thread_per_group)));
166  WRAP_ERROR_CODE(snapshot_pool_->grab(grab_count, free_snapshot_pool_chunk_));
167  }
168  return kRetOk;
169 }

Here is the call graph for this function:

void foedus::memory::NumaCoreMemory::release_free_snapshot_page ( PagePoolOffset  offset)

Same, except it's for snapshot page.

Definition at line 230 of file numa_core_memory.cpp.

References ASSERT_ND, foedus::memory::PagePoolOffsetChunk::full(), foedus::memory::PagePoolOffsetChunk::push_back(), and UNLIKELY.

Referenced by foedus::thread::ThreadPimpl::on_snapshot_cache_miss().

230  {
231  if (UNLIKELY(free_snapshot_pool_chunk_->full())) {
232  release_free_pages_to_node(free_snapshot_pool_chunk_, snapshot_pool_);
233  }
234  ASSERT_ND(!free_snapshot_pool_chunk_->full());
235  free_snapshot_pool_chunk_->push_back(offset);
236 }

Here is the call graph for this function:

Here is the caller graph for this function:

void foedus::memory::NumaCoreMemory::release_free_volatile_page ( PagePoolOffset  offset)

Returns one free volatile page to local page pool.

Definition at line 213 of file numa_core_memory.cpp.

References ASSERT_ND, foedus::memory::PagePoolOffsetChunk::full(), foedus::memory::PagePoolOffsetChunk::push_back(), and UNLIKELY.

Referenced by foedus::storage::hash::HashStoragePimpl::follow_page_bin_head(), foedus::thread::GrabFreeVolatilePagesScope::grab(), foedus::thread::ThreadPimpl::place_a_new_volatile_page(), foedus::thread::GrabFreeVolatilePagesScope::release(), and foedus::memory::AutoVolatilePageReleaseScope::~AutoVolatilePageReleaseScope().

213  {
214  if (UNLIKELY(free_volatile_pool_chunk_->full())) {
215  release_free_pages_to_node(free_volatile_pool_chunk_, volatile_pool_);
216  }
217  ASSERT_ND(!free_volatile_pool_chunk_->full());
218  free_volatile_pool_chunk_->push_back(offset);
219 }

Here is the call graph for this function:

Here is the caller graph for this function:

ErrorStack foedus::memory::NumaCoreMemory::uninitialize_once ( )
override virtual

Implements foedus::DefaultInitializable.

Definition at line 170 of file numa_core_memory.cpp.

References ASSERT_ND, foedus::memory::AlignedMemorySlice::clear(), foedus::memory::PagePoolOffsetAndEpochChunk::empty(), foedus::Engine::get_soc_count(), foedus::memory::PagePool::release(), foedus::memory::AlignedMemory::release_block(), foedus::memory::PagePoolOffsetChunk::size(), and SUMMARIZE_ERROR_BATCH.

170  {
171  LOG(INFO) << "Releasing NumaCoreMemory for core " << core_id_;
172  ErrorStackBatch batch;
173  // return all free pages
174  if (retired_volatile_pool_chunks_) {
175  // this should be already released in ThreadPimpl's uninitialize.
176  // we can't do it here because uninitialization of node/core memories are parallelized
177  for (uint16_t node = 0; node < engine_->get_soc_count(); ++node) {
178  PagePoolOffsetAndEpochChunk* chunk = retired_volatile_pool_chunks_ + node;
179  ASSERT_ND(chunk->empty()); // just sanity check
180  }
181  retired_volatile_pool_chunks_ = nullptr;
182  }
183  if (free_volatile_pool_chunk_) {
184  volatile_pool_->release(free_volatile_pool_chunk_->size(), free_volatile_pool_chunk_);
185  free_volatile_pool_chunk_ = nullptr;
186  volatile_pool_ = nullptr;
187  }
188  if (free_snapshot_pool_chunk_) {
189  snapshot_pool_->release(free_snapshot_pool_chunk_->size(), free_snapshot_pool_chunk_);
190  free_snapshot_pool_chunk_ = nullptr;
191  snapshot_pool_ = nullptr;
192  }
193  log_buffer_memory_.clear();
194  local_work_memory_.release_block();
195  small_thread_local_memory_.release_block();
196  return SUMMARIZE_ERROR_BATCH(batch);
197 }

Here is the call graph for this function:


The documentation for this class was generated from the following files: