libfoedus-core
FOEDUS Core Library
foedus::memory::PagePoolOffsetAndEpochChunk Class Reference

Used to store an epoch value with each entry in PagePoolOffsetChunk.

Detailed Description

Used to store an epoch value with each entry in PagePoolOffsetChunk.

This is used where a page offset may be reused only after some epoch, for example "retired" pages that must be kept intact until the next-next epoch.

Definition at line 108 of file page_pool.hpp.

#include <page_pool.hpp>
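As a rough orientation before the member details, here is a minimal usage sketch. The helper function, its parameter names, and the epoch values are hypothetical illustrations, not FOEDUS code; real callers such as foedus::thread::ThreadPimpl::collect_retired_volatile_page() and flush_retired_volatile_page() (referenced below) drive this from the engine.

#include <cstdint>
#include <memory>
#include <page_pool.hpp>  // include path as shown above; adjust to your build tree

// Hypothetical sketch: stash two retired pages, then release those that are
// safe as of 'current'. Not part of FOEDUS; it only strings the members together.
void recycle_example(foedus::memory::PagePoolOffset retired_a,
                     foedus::memory::PagePoolOffset retired_b,
                     foedus::memory::PagePoolOffset* release_buffer,
                     foedus::Epoch current) {
  // kMaxSize entries of 8 bytes each make the chunk roughly 512 KB, so heap-allocate;
  // in FOEDUS it lives in NumaCoreMemory rather than on a stack.
  auto chunk = std::make_unique<foedus::memory::PagePoolOffsetAndEpochChunk>();
  chunk->push_back(retired_a, foedus::Epoch(10U));  // reusable only after epoch 10
  chunk->push_back(retired_b, foedus::Epoch(11U));  // epochs must be non-decreasing
  uint32_t safe = chunk->get_safe_offset_count(current);  // entries with safe_epoch_ < current
  chunk->move_to(release_buffer, safe);  // copy them out; the rest shift to the head
}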

Classes

struct  OffsetAndEpoch
 

Public Types

enum  Constants { kMaxSize = (1 << 16) - 1 }
 

Public Member Functions

 PagePoolOffsetAndEpochChunk ()
 
uint32_t capacity () const
 
uint32_t size () const
 
bool empty () const
 
bool full () const
 
void clear ()
 
bool is_sorted () const
 
void push_back (PagePoolOffset offset, const Epoch &safe_epoch)
 
void move_to (PagePoolOffset *destination, uint32_t count)
 Note that the destination is PagePoolOffset* because that's the only use case.
 
uint32_t get_safe_offset_count (const Epoch &threshold) const
 Returns the number of offsets (always from index 0) whose safe_epoch_ is strictly before the given epoch.
 
uint32_t unused_dummy_func_dummy () const
 

Member Enumeration Documentation

Enumerator
kMaxSize 

Max number of pointers to pack.

We use this object to pool retired pages, and we observed many waits caused by a full pool, which forces the thread to wait for a new epoch. To avoid that, we now use a much larger kMaxSize than PagePoolOffsetChunk's. Yes, this means much larger memory consumption in NumaCoreMemory, but it shouldn't be a big issue: 8 bytes * 2^16 entries * nodes * threads. On a 16-node machine with 12 threads per node (DH), that is 96 MB per node; on a 4-node/12-thread machine (DL580), 24 MB per node. I'd say negligible.

Definition at line 110 of file page_pool.hpp.

enum Constants {
  kMaxSize = (1 << 16) - 1,
};
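A quick back-of-the-envelope check of the figures above (a standalone sketch; the node and thread counts are those of the machines named in the comment):

#include <cstdint>
#include <iostream>

int main() {
  const uint64_t entry_bytes = 8;              // OffsetAndEpoch: 4-byte offset + 4-byte epoch
  const uint64_t entries = uint64_t(1) << 16;  // ~kMaxSize (actually 2^16 - 1)
  const uint64_t nodes = 16;                   // DH: 16 NUMA nodes
  const uint64_t threads = 12;                 // threads per node
  const uint64_t per_node_bytes = entry_bytes * entries * nodes * threads;
  std::cout << (per_node_bytes >> 20) << " MB per node\n";  // prints "96 MB per node"
  return 0;
}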

Constructor & Destructor Documentation

foedus::memory::PagePoolOffsetAndEpochChunk::PagePoolOffsetAndEpochChunk ( ) [inline]

Definition at line 127 of file page_pool.hpp.

: size_(0) {}

Member Function Documentation

uint32_t foedus::memory::PagePoolOffsetAndEpochChunk::capacity ( ) const [inline]

Definition at line 129 of file page_pool.hpp.

References foedus::memory::PagePoolOffsetAndEpochChunk::kMaxSize.

{ return kMaxSize; }

void foedus::memory::PagePoolOffsetAndEpochChunk::clear ( ) [inline]

Definition at line 133 of file page_pool.hpp.

Referenced by foedus::memory::NumaCoreMemory::initialize_once().

{ size_ = 0; }


bool foedus::memory::PagePoolOffsetAndEpochChunk::empty ( ) const [inline]

Definition at line 131 of file page_pool.hpp.

Referenced by foedus::memory::NumaCoreMemory::uninitialize_once(), and foedus::thread::ThreadPimpl::uninitialize_once().

{ return size_ == 0; }


bool foedus::memory::PagePoolOffsetAndEpochChunk::full ( ) const [inline]

Definition at line 132 of file page_pool.hpp.

References foedus::memory::PagePoolOffsetAndEpochChunk::kMaxSize.

Referenced by foedus::thread::ThreadPimpl::collect_retired_volatile_page(), and foedus::thread::ThreadPimpl::flush_retired_volatile_page().

{ return size_ == kMaxSize; }


uint32_t foedus::memory::PagePoolOffsetAndEpochChunk::get_safe_offset_count ( const Epoch & threshold ) const

Returns the number of offsets (always from index 0) whose safe_epoch_ is strictly before the given epoch.

This method does binary search assuming that chunk_ is sorted by safe_epoch_.

Parameters
[in] threshold: epoch that is deemed unsafe to return.

Returns
the number of offsets whose safe_epoch_ < threshold

Definition at line 57 of file page_pool.cpp.

References ASSERT_ND, is_sorted(), foedus::memory::PagePoolOffsetAndEpochChunk::OffsetAndEpoch::safe_epoch_, and foedus::Epoch::value().

Referenced by foedus::thread::ThreadPimpl::flush_retired_volatile_page().

{
  ASSERT_ND(is_sorted());  // chunk_ must be sorted by safe_epoch_ for the binary search below
  OffsetAndEpoch dummy;
  dummy.safe_epoch_ = threshold.value();
  struct CompareEpoch {
    bool operator()(const OffsetAndEpoch& left, const OffsetAndEpoch& right) {
      return Epoch(left.safe_epoch_) < Epoch(right.safe_epoch_);
    }
  };
  // Binary search for the first entry whose safe_epoch_ is not before the threshold.
  const OffsetAndEpoch* result = std::lower_bound(chunk_, chunk_ + size_, dummy, CompareEpoch());
  ASSERT_ND(result);
  ASSERT_ND(result - chunk_ <= size_);
  return result - chunk_;
}
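To make the strictly-before boundary concrete, a hedged sketch (offsets and epoch values are arbitrary illustrations; allocation as in the earlier sketch):

auto chunk = std::make_unique<foedus::memory::PagePoolOffsetAndEpochChunk>();
chunk->push_back(1U, foedus::Epoch(3U));
chunk->push_back(2U, foedus::Epoch(3U));
chunk->push_back(3U, foedus::Epoch(5U));
chunk->push_back(4U, foedus::Epoch(7U));
// safe_epoch_ values are now {3, 3, 5, 7}; lower_bound(5) lands at index 2.
uint32_t n = chunk->get_safe_offset_count(foedus::Epoch(5U));
// n == 2: the entries at epoch 3 are strictly before 5; the epoch-5 entry is still unsafe.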


bool foedus::memory::PagePoolOffsetAndEpochChunk::is_sorted ( ) const

Definition at line 87 of file page_pool.cpp.

References ASSERT_ND.

Referenced by get_safe_offset_count(), and move_to().

{
  for (uint32_t i = 1; i < size_; ++i) {
    ASSERT_ND(chunk_[i].offset_);
    if (Epoch(chunk_[i - 1U].safe_epoch_) > Epoch(chunk_[i].safe_epoch_)) {
      return false;
    }
  }
  return true;
}


void foedus::memory::PagePoolOffsetAndEpochChunk::move_to ( PagePoolOffset * destination, uint32_t count )

Note that the destination is PagePoolOffset* because that's the only use case.

Definition at line 72 of file page_pool.cpp.

References ASSERT_ND, is_sorted(), and foedus::memory::PagePoolOffsetAndEpochChunk::OffsetAndEpoch::offset_.

{
  ASSERT_ND(size_ >= count);
  // We can't memcpy because the element types differ (OffsetAndEpoch vs. plain
  // PagePoolOffset); copy one by one.
  for (uint32_t i = 0; i < count; ++i) {
    destination[i] = chunk_[i].offset_;
  }
  // Unlike PagePoolOffsetChunk, we copied from the head (we must, because epoch
  // order matters), so we also have to move the remaining entries to the beginning.
  if (size_ > count) {
    std::memmove(chunk_, chunk_ + count, (size_ - count) * sizeof(OffsetAndEpoch));
  }
  size_ -= count;
  ASSERT_ND(is_sorted());  // the compacted chunk stays sorted by safe_epoch_
}
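Continuing the hedged sketch from get_safe_offset_count() above, the compaction behavior looks like this:

foedus::memory::PagePoolOffset buffer[2];
chunk->move_to(buffer, 2);  // buffer receives offsets 1 and 2, the entries with the oldest epochs
// The remaining entries (epochs 5 and 7) are memmove'd to the head:
// chunk->size() == 2 and chunk->is_sorted() still holds.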


void foedus::memory::PagePoolOffsetAndEpochChunk::push_back ( PagePoolOffset offset, const Epoch & safe_epoch ) [inline]
Precondition
empty() || Epoch(chunk_[size_ - 1U].safe_epoch_) <= safe_epoch; that is, you cannot specify a safe_epoch lower than one you have already specified.

Definition at line 140 of file page_pool.hpp.

References ASSERT_ND, foedus::memory::PagePoolOffsetAndEpochChunk::empty(), foedus::memory::PagePoolOffsetAndEpochChunk::full(), and foedus::Epoch::value().

Referenced by foedus::thread::ThreadPimpl::collect_retired_volatile_page().

{
  ASSERT_ND(!full());
  ASSERT_ND(empty() || Epoch(chunk_[size_ - 1U].safe_epoch_) <= safe_epoch);
  chunk_[size_].offset_ = offset;
  chunk_[size_].safe_epoch_ = safe_epoch.value();
  ++size_;
}
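A short hedged sketch of this precondition (hypothetical offsets; the last call is deliberately invalid):

auto chunk = std::make_unique<foedus::memory::PagePoolOffsetAndEpochChunk>();
chunk->push_back(1U, foedus::Epoch(5U));
chunk->push_back(2U, foedus::Epoch(5U));  // OK: an equal epoch satisfies <=
chunk->push_back(3U, foedus::Epoch(4U));  // precondition violated; ASSERT_ND fires in debug builds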


uint32_t foedus::memory::PagePoolOffsetAndEpochChunk::size ( ) const [inline]

Definition at line 130 of file page_pool.hpp.

Referenced by foedus::thread::ThreadPimpl::flush_retired_volatile_page(), and foedus::thread::ThreadPimpl::uninitialize_once().

{ return size_; }


uint32_t foedus::memory::PagePoolOffsetAndEpochChunk::unused_dummy_func_dummy ( ) const [inline]

Definition at line 157 of file page_pool.hpp.

{ return dummy_; }

The documentation for this class was generated from the following files:
page_pool.hpp
page_pool.cpp