libfoedus-core
FOEDUS Core Library
foedus::storage::masstree::MasstreeStoragePimpl Class Reference (final)

Pimpl object of MasstreeStorage. More...

Detailed Description

Pimpl object of MasstreeStorage.

A private pimpl object for MasstreeStorage. Do not include this header from a client program unless you know what you are doing.

Definition at line 98 of file masstree_storage_pimpl.hpp.
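
For orientation, here is a minimal sketch of the pimpl idiom as used here: the public MasstreeStorage facade holds no logic of its own and forwards each call to a MasstreeStoragePimpl attached to the same engine and shared control block. This is an illustrative sketch, not the verbatim FOEDUS source.

// Illustrative sketch (assumes the usual foedus headers and
// namespace foedus::storage::masstree).
// The pimpl is cheap to construct: the explicit constructor documented
// below merely copies the engine and control-block pointers from the facade.
ErrorStack MasstreeStorage::create(const MasstreeMetadata& metadata) {
  return MasstreeStoragePimpl(this).create(metadata);
}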

#include <masstree_storage_pimpl.hpp>

Inheritance diagram for foedus::storage::masstree::MasstreeStoragePimpl:
Collaboration diagram for foedus::storage::masstree::MasstreeStoragePimpl:

Public Member Functions

 MasstreeStoragePimpl ()
 
 MasstreeStoragePimpl (MasstreeStorage *storage)
 
ErrorStack create (const MasstreeMetadata &metadata)
 
ErrorStack load (const StorageControlBlock &snapshot_block)
 
ErrorStack load_empty ()
 
ErrorStack drop ()
 Storage-wide operations, such as drop, create, etc. More...
 
bool exists () const
 
StorageId get_id () const
 
const StorageName & get_name () const
 
const MasstreeMetadata & get_meta () const
 
const DualPagePointer & get_first_root_pointer () const
 
DualPagePointer * get_first_root_pointer_address ()
 
ErrorCode get_first_root (thread::Thread *context, bool for_write, MasstreeIntermediatePage **root)
 Root-node related, such as a method to retrieve 1st-root, to grow, etc. More...
 
ErrorCode find_border_physical (thread::Thread *context, MasstreePage *layer_root, uint8_t current_layer, bool for_writes, KeySlice slice, MasstreeBorderPage **border) __attribute__((always_inline))
 Find a border node in the layer that corresponds to the given key slice. More...
 
ErrorCode locate_record (thread::Thread *context, const void *key, KeyLength key_length, bool for_writes, RecordLocation *result)
 Identifies page and record for the key. More...
 
ErrorCode locate_record_normalized (thread::Thread *context, KeySlice key, bool for_writes, RecordLocation *result)
 Identifies page and record for the normalized key. More...
 
ErrorCode reserve_record (thread::Thread *context, const void *key, KeyLength key_length, PayloadLength payload_count, PayloadLength physical_payload_hint, RecordLocation *result)
 Like locate_record(), this is also a logical operation. More...
 
ErrorCode reserve_record_normalized (thread::Thread *context, KeySlice key, PayloadLength payload_count, PayloadLength physical_payload_hint, RecordLocation *result)
 
ErrorCode retrieve_general (thread::Thread *context, const RecordLocation &location, void *payload, PayloadLength *payload_capacity)
 Implementation of the get_record family. More...
 
ErrorCode retrieve_part_general (thread::Thread *context, const RecordLocation &location, void *payload, PayloadLength payload_offset, PayloadLength payload_count)
 
ErrorCode register_record_write_log (thread::Thread *context, const RecordLocation &location, log::RecordLogType *log_entry)
 Used in the following methods. More...
 
ErrorCode insert_general (thread::Thread *context, const RecordLocation &location, const void *be_key, KeyLength key_length, const void *payload, PayloadLength payload_count)
 Implementation of the insert_record family. More...
 
ErrorCode delete_general (thread::Thread *context, const RecordLocation &location, const void *be_key, KeyLength key_length)
 Implementation of the delete_record family. More...
 
ErrorCode upsert_general (thread::Thread *context, const RecordLocation &location, const void *be_key, KeyLength key_length, const void *payload, PayloadLength payload_count)
 Implementation of the upsert_record family. More...
 
ErrorCode overwrite_general (thread::Thread *context, const RecordLocation &location, const void *be_key, KeyLength key_length, const void *payload, PayloadLength payload_offset, PayloadLength payload_count)
 Implementation of the overwrite_record family. More...
 
template<typename PAYLOAD >
ErrorCode increment_general (thread::Thread *context, const RecordLocation &location, const void *be_key, KeyLength key_length, PAYLOAD *value, PayloadLength payload_offset)
 Implementation of the increment_record family. More...
 
ErrorStack verify_single_thread (thread::Thread *context)
 These are defined in masstree_storage_verify.cpp. More...
 
ErrorStack verify_single_thread_layer (thread::Thread *context, uint8_t layer, MasstreePage *layer_root)
 
ErrorStack verify_single_thread_intermediate (thread::Thread *context, KeySlice low_fence, HighFence high_fence, MasstreeIntermediatePage *page)
 
ErrorStack verify_single_thread_border (thread::Thread *context, KeySlice low_fence, HighFence high_fence, MasstreeBorderPage *page)
 
ErrorStack debugout_single_thread (Engine *engine, bool volatile_only, uint32_t max_pages)
 These are defined in masstree_storage_debug.cpp. More...
 
ErrorStack debugout_single_thread_recurse (Engine *engine, cache::SnapshotFileSet *fileset, MasstreePage *parent, bool follow_volatile, uint32_t *remaining_pages)
 
ErrorStack debugout_single_thread_follow (Engine *engine, cache::SnapshotFileSet *fileset, const DualPagePointer &pointer, bool follow_volatile, uint32_t *remaining_pages)
 
ErrorStack hcc_reset_all_temperature_stat ()
 For stupid reasons (I'm lazy!) these are defined in _debug.cpp. More...
 
ErrorStack hcc_reset_all_temperature_stat_recurse (MasstreePage *parent)
 
ErrorStack hcc_reset_all_temperature_stat_follow (VolatilePagePointer page_id)
 
ErrorCode follow_page (thread::Thread *context, bool for_writes, storage::DualPagePointer *pointer, MasstreePage **page)
 Thread::follow_page_pointer() for masstree. More...
 
ErrorCode follow_layer (thread::Thread *context, bool for_writes, MasstreeBorderPage *parent, SlotIndex record_index, MasstreePage **page) __attribute__((always_inline))
 Follows to next layer's root page. More...
 
ErrorCode prefetch_pages_normalized (thread::Thread *context, bool install_volatile, bool cache_snapshot, KeySlice from, KeySlice to)
 Defined in masstree_storage_prefetch.cpp. More...
 
ErrorCode prefetch_pages_normalized_recurse (thread::Thread *context, bool install_volatile, bool cache_snapshot, KeySlice from, KeySlice to, MasstreePage *page)
 
ErrorCode prefetch_pages_follow (thread::Thread *context, DualPagePointer *pointer, bool vol_on, bool snp_on, KeySlice from, KeySlice to)
 
xct::TrackMovedRecordResult track_moved_record (xct::RwLockableXctId *old_address, xct::WriteXctAccess *write_set) __attribute__((always_inline))
 
ErrorCode peek_volatile_page_boundaries (Engine *engine, const MasstreeStorage::PeekBoundariesArguments &args)
 Defined in masstree_storage_peek.cpp. More...
 
ErrorCode peek_volatile_page_boundaries_next_layer (const MasstreePage *layer_root, const memory::GlobalVolatilePageResolver &resolver, const MasstreeStorage::PeekBoundariesArguments &args)
 
ErrorCode peek_volatile_page_boundaries_this_layer (const MasstreePage *layer_root, const memory::GlobalVolatilePageResolver &resolver, const MasstreeStorage::PeekBoundariesArguments &args)
 
ErrorCode peek_volatile_page_boundaries_this_layer_recurse (const MasstreeIntermediatePage *cur, const memory::GlobalVolatilePageResolver &resolver, const MasstreeStorage::PeekBoundariesArguments &args)
 
ErrorStack fatify_first_root (thread::Thread *context, uint32_t desired_count, bool disable_no_record_split)
 Defined in masstree_storage_fatify.cpp. More...
 
ErrorStack fatify_first_root_double (thread::Thread *context, bool disable_no_record_split)
 
ErrorCode approximate_count_root_children (thread::Thread *context, uint32_t *out)
 Returns the count of direct children in the first-root node. More...
 
- Public Member Functions inherited from foedus::Attachable< MasstreeStorageControlBlock >
 Attachable ()
 
 Attachable (Engine *engine)
 
 Attachable (Engine *engine, MasstreeStorageControlBlock *control_block)
 
 Attachable (MasstreeStorageControlBlock *control_block)
 
 Attachable (const Attachable &other)
 
virtual ~Attachable ()
 
Attachable & operator= (const Attachable &other)
 
virtual void attach (MasstreeStorageControlBlock *control_block)
 Attaches to the given shared memory. More...
 
bool is_attached () const
 Returns whether the object has been already attached to some shared memory. More...
 
MasstreeStorageControlBlock * get_control_block () const
 
Engine * get_engine () const
 
void set_engine (Engine *engine)
 

Static Public Member Functions

static ErrorCode check_next_layer_bit (xct::XctId observed) __attribute__((always_inline))
 

Additional Inherited Members

- Protected Attributes inherited from foedus::Attachable< MasstreeStorageControlBlock >
Engine * engine_
 Most attachable object stores an engine pointer (local engine), so we define it here. More...
 
MasstreeStorageControlBlock * control_block_
 The shared data on shared memory that has been initialized in some SOC or master engine. More...
 

Constructor & Destructor Documentation

foedus::storage::masstree::MasstreeStoragePimpl::MasstreeStoragePimpl ( )
inline

Definition at line 100 of file masstree_storage_pimpl.hpp.

100  MasstreeStoragePimpl() : Attachable<MasstreeStorageControlBlock>() {}
foedus::storage::masstree::MasstreeStoragePimpl::MasstreeStoragePimpl ( MasstreeStorage *  storage)
inline explicit

Definition at line 101 of file masstree_storage_pimpl.hpp.

101  explicit MasstreeStoragePimpl(MasstreeStorage* storage)
102  : Attachable<MasstreeStorageControlBlock>(
103  storage->get_engine(),
104  storage->get_control_block()) {}

Member Function Documentation

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::approximate_count_root_children ( thread::Thread *  context,
uint32_t *  out 
)

Returns the count of direct children in the first-root node.

This is done without taking a lock, so the returned count may not be accurate.

Definition at line 42 of file masstree_storage_fatify.cpp.

References CHECK_ERROR_CODE, foedus::storage::masstree::count_children_approximate(), get_first_root(), and foedus::kErrorCodeOk.

Referenced by fatify_first_root(), and fatify_first_root_double().

44  {
45  *out = 0;
46  MasstreeIntermediatePage* root;
47  CHECK_ERROR_CODE(get_first_root(context, true, &root));
48  *out = count_children_approximate(root);
49  return kErrorCodeOk;
50 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::check_next_layer_bit ( xct::XctId  observed)
inline static

Definition at line 589 of file masstree_storage_pimpl.cpp.

References foedus::xct::XctId::is_next_layer(), foedus::kErrorCodeOk, foedus::kErrorCodeXctRaceAbort, and UNLIKELY.

Referenced by delete_general(), increment_general(), insert_general(), overwrite_general(), retrieve_general(), retrieve_part_general(), and upsert_general().

589  {
590  if (UNLIKELY(observed.is_next_layer())) {
591  // this should have been checked before this method and resolved as abort or retry,
592  // but if it reaches here for some reason, we treat it as usual contention abort.
593  DLOG(INFO) << "Probably this should be caught beforehand. next_layer bit is on";
594  return kErrorCodeXctRaceAbort;
595  }
596  return kErrorCodeOk;
597 }

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::create ( const MasstreeMetadata &  metadata)

Definition at line 164 of file masstree_storage_pimpl.cpp.

References CHECK_ERROR, foedus::Attachable< MasstreeStorageControlBlock >::control_block_, ERROR_STACK, exists(), get_name(), foedus::kErrorCodeStrAlreadyExists, foedus::storage::kExists, foedus::kRetOk, and load_empty().

Referenced by foedus::storage::masstree::MasstreeStorage::create().

164  {
165  if (exists()) {
166  LOG(ERROR) << "This masstree-storage already exists: " << get_name();
167  return ERROR_STACK(kErrorCodeStrAlreadyExists);
168  }
169 
170  control_block_->meta_ = metadata;
171  CHECK_ERROR(load_empty());
172  control_block_->status_ = kExists;
173  LOG(INFO) << "Newly created an masstree-storage " << get_name();
174  return kRetOk;
175 }

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::debugout_single_thread ( Engine *  engine,
bool  volatile_only,
uint32_t  max_pages 
)

These are defined in masstree_storage_debug.cpp.

Definition at line 36 of file masstree_storage_debug.cpp.

References CHECK_ERROR, foedus::Attachable< MasstreeStorageControlBlock >::control_block_, debugout_single_thread_follow(), foedus::Attachable< MasstreeStorageControlBlock >::engine_, foedus::DefaultInitializable::initialize(), foedus::storage::VolatilePagePointer::is_null(), foedus::kRetOk, foedus::UninitializeGuard::kWarnIfUninitializeError, foedus::storage::DualPagePointer::snapshot_pointer_, foedus::DefaultInitializable::uninitialize(), and foedus::storage::DualPagePointer::volatile_pointer_.

Referenced by foedus::storage::masstree::MasstreeStorage::debugout_single_thread().

39  {
40  LOG(INFO) << "**"
41  << std::endl << "***************************************************************"
42  << std::endl << "*** Dumping " << MasstreeStorage(engine_, control_block_) << " in details. "
43  << std::endl << "*** volatile_only=" << volatile_only << ", max_pages=" << max_pages
44  << std::endl << "***************************************************************";
45 
46  cache::SnapshotFileSet files(engine);
47  CHECK_ERROR(files.initialize());
48  UninitializeGuard files_guard(&files, UninitializeGuard::kWarnIfUninitializeError);
49 
50  LOG(INFO) << "First, dumping volatile pages...";
51  DualPagePointer pointer = control_block_->root_page_pointer_;
52  if (pointer.volatile_pointer_.is_null()) {
53  LOG(INFO) << "No volatile pages.";
54  } else {
55  uint32_t remaining = max_pages;
56  CHECK_ERROR(debugout_single_thread_follow(engine, &files, pointer, true, &remaining));
57  }
58  LOG(INFO) << "Dumped volatile pages.";
59  if (!volatile_only) {
60  LOG(INFO) << "Now dumping snapshot pages...";
61  if (pointer.snapshot_pointer_ == 0) {
62  LOG(INFO) << "No snapshot pages.";
63  } else {
64  uint32_t remaining = max_pages;
65  CHECK_ERROR(debugout_single_thread_follow(engine, &files, pointer, false, &remaining));
66  }
67  LOG(INFO) << "Dumped snapshot pages.";
68  }
69 
70  CHECK_ERROR(files.uninitialize());
71  return kRetOk;
72 }

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::debugout_single_thread_follow ( Engine *  engine,
cache::SnapshotFileSet *  fileset,
const DualPagePointer &  pointer,
bool  follow_volatile,
uint32_t *  remaining_pages 
)

Definition at line 122 of file masstree_storage_debug.cpp.

References foedus::memory::AlignedMemory::alloc(), CHECK_ERROR, debugout_single_thread_recurse(), foedus::memory::AlignedMemory::get_block(), foedus::memory::EngineMemory::get_global_volatile_page_resolver(), foedus::Engine::get_memory_manager(), foedus::storage::VolatilePagePointer::is_null(), foedus::memory::AlignedMemory::kNumaAllocOnnode, foedus::kRetOk, foedus::cache::SnapshotFileSet::read_page(), foedus::memory::GlobalVolatilePageResolver::resolve_offset(), foedus::storage::DualPagePointer::snapshot_pointer_, foedus::storage::DualPagePointer::volatile_pointer_, and WRAP_ERROR_CODE.

Referenced by debugout_single_thread(), and debugout_single_thread_recurse().

127  {
128  if ((*remaining_pages) == 0) {
129  return kRetOk;
130  }
131  if (follow_volatile) {
132  if (!pointer.volatile_pointer_.is_null()) {
133  MasstreePage* page = reinterpret_cast<MasstreePage*>(
134  engine->get_memory_manager()->get_global_volatile_page_resolver().resolve_offset(
135  pointer.volatile_pointer_));
136  CHECK_ERROR(debugout_single_thread_recurse(engine, files, page, true, remaining_pages));
137  }
138  } else {
139  if (pointer.snapshot_pointer_ != 0) {
140  memory::AlignedMemory buf;
141  buf.alloc(1 << 12U, 1 << 12U, memory::AlignedMemory::kNumaAllocOnnode, 0);
142  MasstreePage* page = reinterpret_cast<MasstreePage*>(buf.get_block());
143  WRAP_ERROR_CODE(files->read_page(pointer.snapshot_pointer_, page));
144  CHECK_ERROR(debugout_single_thread_recurse(engine, files, page, false, remaining_pages));
145  }
146  }
147  return kRetOk;
148 }

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::debugout_single_thread_recurse ( Engine *  engine,
cache::SnapshotFileSet *  fileset,
MasstreePage *  parent,
bool  follow_volatile,
uint32_t *  remaining_pages 
)

Definition at line 75 of file masstree_storage_debug.cpp.

References CHECK_ERROR, debugout_single_thread_follow(), foedus::storage::masstree::MasstreeBorderPage::does_point_to_layer(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreeBorderPage::get_next_layer(), foedus::storage::masstree::MasstreePage::is_border(), foedus::storage::masstree::MasstreeIntermediatePointerIterator::is_valid(), and foedus::kRetOk.

Referenced by debugout_single_thread_follow().

80  {
81  if (((*remaining_pages) == 0) || --(*remaining_pages) == 0) {
82  LOG(INFO) << "Reached write-out max. skip the following";
83  return kRetOk;
84  }
85  // Implementation note:
86  // So far we don't follow foster-twins in debugout_xxx, so this is irrelevant.
87  // However, if we change it later to follow foster-twins,
88  // do not forget about empty-range intermediate pages that only appear as foster twins.
89  // Empty-range border pages are fine. We just see zero records.
90  // Foster-twins are sooner or later adopted, and Master-Tree invariant for intermediate page
91  // guarantees that we can just read the old page. no need to follow foster-twins.
92 
93  LOG(INFO) << *parent;
94  uint16_t key_count = parent->get_key_count();
95  if (parent->is_border()) {
96  MasstreeBorderPage* casted = reinterpret_cast<MasstreeBorderPage*>(parent);
97  for (uint16_t i = 0; i < key_count; ++i) {
98  if (casted->does_point_to_layer(i)) {
99  CHECK_ERROR(debugout_single_thread_follow(
100  engine,
101  files,
102  *casted->get_next_layer(i),
103  follow_volatile,
104  remaining_pages));
105  }
106  }
107  } else {
108  MasstreeIntermediatePage* casted = reinterpret_cast<MasstreeIntermediatePage*>(parent);
109  for (MasstreeIntermediatePointerIterator it(casted); it.is_valid(); it.next()) {
110  CHECK_ERROR(debugout_single_thread_follow(
111  engine,
112  files,
113  it.get_pointer(),
114  follow_volatile,
115  remaining_pages));
116  }
117  }
118 
119  return kRetOk;
120 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::delete_general ( thread::Thread *  context,
const RecordLocation &  location,
const void *  be_key,
KeyLength  key_length 
)

Implementation of the delete_record family.

Use with locate_record(); a caller-side sketch follows.
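
A hypothetical caller-side sketch of that two-step flow (illustration only; the real callers are MasstreeStorage::delete_record() and delete_record_normalized()):

// Hypothetical helper (assumes the usual foedus headers and
// namespace foedus::storage::masstree).
ErrorCode delete_by_key(
  thread::Thread* context,
  MasstreeStoragePimpl* masstree,
  const void* be_key,   // big-endian key, as elsewhere in this API
  KeyLength key_length) {
  RecordLocation location;
  // Logical step 1: find the page/slot and record the observed XctId.
  // for_writes=true because we are about to log a modification.
  CHECK_ERROR_CODE(masstree->locate_record(
    context, be_key, key_length, true, &location));
  // Logical step 2: validate the observation (deleted? next-layer?) and
  // append the delete log record against exactly that location.
  return masstree->delete_general(context, location, be_key, key_length);
}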

Definition at line 691 of file masstree_storage_pimpl.cpp.

References foedus::storage::masstree::MasstreeDeleteLogType::calculate_log_length(), CHECK_ERROR_CODE, check_next_layer_bit(), get_id(), foedus::thread::Thread::get_numa_node(), foedus::thread::Thread::get_thread_log_buffer(), foedus::storage::masstree::MasstreePage::header(), foedus::xct::XctId::is_deleted(), foedus::kErrorCodeStrKeyNotFound, foedus::storage::masstree::RecordLocation::observed_, foedus::storage::masstree::RecordLocation::page_, foedus::storage::masstree::MasstreeDeleteLogType::populate(), register_record_write_log(), foedus::log::ThreadLogBuffer::reserve_new_log(), and foedus::storage::PageHeader::stat_last_updater_node_.

Referenced by foedus::storage::masstree::MasstreeCursor::delete_record(), foedus::storage::masstree::MasstreeStorage::delete_record(), and foedus::storage::masstree::MasstreeStorage::delete_record_normalized().

695  {
696  if (location.observed_.is_deleted()) {
697  // in this case, we don't need a page-version set. the physical record is surely there.
698  return kErrorCodeStrKeyNotFound;
699  }
700  CHECK_ERROR_CODE(check_next_layer_bit(location.observed_));
701 
702  MasstreeBorderPage* border = location.page_;
703  uint16_t log_length = MasstreeDeleteLogType::calculate_log_length(key_length);
704  MasstreeDeleteLogType* log_entry = reinterpret_cast<MasstreeDeleteLogType*>(
705  context->get_thread_log_buffer().reserve_new_log(log_length));
706  log_entry->populate(get_id(), be_key, key_length);
707  border->header().stat_last_updater_node_ = context->get_numa_node();
708  return register_record_write_log(context, location, log_entry);
709 }

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::drop ( )

Storage-wide operations, such as drop, create, etc.

Definition at line 105 of file masstree_storage_pimpl.cpp.

References foedus::Attachable< MasstreeStorageControlBlock >::control_block_, foedus::Attachable< MasstreeStorageControlBlock >::engine_, foedus::memory::EngineMemory::get_global_volatile_page_resolver(), foedus::Engine::get_memory_manager(), get_name(), foedus::kRetOk, foedus::storage::masstree::MasstreeIntermediatePage::release_pages_recursive_parallel(), and foedus::memory::GlobalVolatilePageResolver::resolve_offset().

Referenced by foedus::storage::masstree::MasstreeStorage::drop().

105  {
106  LOG(INFO) << "Uninitializing a masstree-storage " << get_name();
107 
108  if (!control_block_->root_page_pointer_.volatile_pointer_.is_null()) {
109  // release volatile pages
110  const memory::GlobalVolatilePageResolver& page_resolver
111  = engine_->get_memory_manager()->get_global_volatile_page_resolver();
112  // first root is guaranteed to be an intermediate page
113  MasstreeIntermediatePage* first_root = reinterpret_cast<MasstreeIntermediatePage*>(
114  page_resolver.resolve_offset(control_block_->root_page_pointer_.volatile_pointer_));
115  first_root->release_pages_recursive_parallel(engine_);
116  control_block_->root_page_pointer_.volatile_pointer_.word = 0;
117  }
118 
119  return kRetOk;
120 }

bool foedus::storage::masstree::MasstreeStoragePimpl::exists ( ) const
inline

Definition at line 111 of file masstree_storage_pimpl.hpp.

References foedus::Attachable< MasstreeStorageControlBlock >::control_block_.

Referenced by create().

111 { return control_block_->exists(); }

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::fatify_first_root ( thread::Thread *  context,
uint32_t  desired_count,
bool  disable_no_record_split 
)

Defined in masstree_storage_fatify.cpp.

Definition at line 54 of file masstree_storage_fatify.cpp.

References approximate_count_root_children(), CHECK_ERROR, foedus::debugging::StopWatch::elapsed_us(), fatify_first_root_double(), get_name(), foedus::storage::masstree::kIntermediateAlmostFull, foedus::kRetOk, foedus::debugging::StopWatch::start(), foedus::debugging::StopWatch::stop(), and WRAP_ERROR_CODE.

Referenced by foedus::storage::masstree::MasstreeStorage::fatify_first_root().

57  {
58  LOG(INFO) << "Masstree-" << get_name() << " being fatified for " << desired_count
59  << ", disable_no_record_split=" << disable_no_record_split;
60 
61  if (desired_count > kIntermediateAlmostFull) {
62  LOG(INFO) << "desired_count too large. adjusted to the max";
63  desired_count = kIntermediateAlmostFull;
64  }
65  uint32_t initial_children;
66  WRAP_ERROR_CODE(approximate_count_root_children(context, &initial_children));
67  LOG(INFO) << "initial_children=" << initial_children;
68 
69  // We keep doubling the direct root-children
70  debugging::StopWatch watch;
71  watch.start();
72  uint16_t iterations = 0;
73  for (uint32_t count = initial_children; count < desired_count && iterations < 10U; ++iterations) {
74  CHECK_ERROR(fatify_first_root_double(context, disable_no_record_split));
75  uint32_t new_count;
76  WRAP_ERROR_CODE(approximate_count_root_children(context, &new_count));
77  if (count == new_count) {
78  LOG(WARNING) << "Not enough descendants for further fatification. Stopped here";
79  break;
80  }
81  count = new_count;
82  }
83 
84  uint32_t after_children;
85  WRAP_ERROR_CODE(approximate_count_root_children(context, &after_children));
86  watch.stop();
87  LOG(INFO) << "fatify done: Iterations=" << iterations
88  << ", took " << watch.elapsed_us() << "us in total."
89  << " child count: " << initial_children << "->" << after_children;
90 
91  return kRetOk;
92 }

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::fatify_first_root_double ( thread::Thread *  context,
bool  disable_no_record_split 
)

Definition at line 94 of file masstree_storage_fatify.cpp.

References approximate_count_root_children(), ASSERT_ND, foedus::storage::masstree::count_children_approximate(), foedus::debugging::StopWatch::elapsed_us(), foedus::storage::masstree::MasstreeIntermediatePage::find_minipage(), foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::find_pointer(), follow_page(), get_first_root(), foedus::storage::masstree::MasstreePage::get_high_fence(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreePage::get_low_fence(), foedus::storage::masstree::MasstreeIntermediatePage::get_minipage(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::MasstreePage::is_border(), foedus::storage::masstree::MasstreePage::is_moved(), foedus::storage::masstree::kInfimumSlice, foedus::kRetOk, foedus::storage::masstree::kSupremumSlice, foedus::thread::Thread::run_nested_sysxct(), foedus::storage::PageHeader::snapshot_, foedus::debugging::StopWatch::start(), foedus::debugging::StopWatch::stop(), and WRAP_ERROR_CODE.

Referenced by fatify_first_root().

96  {
97  // We invoke split sysxct and adopt sysxct many times.
98  debugging::StopWatch watch;
99  watch.start();
100  uint16_t root_retries = 0;
101  KeySlice cur_slice = kInfimumSlice;
102  uint16_t skipped_children = 0;
103  uint16_t adopted_children = 0;
104  uint32_t initial_children;
105  WRAP_ERROR_CODE(approximate_count_root_children(context, &initial_children));
106  while (cur_slice != kSupremumSlice) {
107  if (initial_children + adopted_children >= kIntermediateAlmostFull) {
108  LOG(INFO) << "Root page nearing full. Stopped fatification";
109  break;
110  }
111 
112  // Get a non-moved root. This might trigger grow-root.
113  // grow-root is a non-mandatory operation, so we might keep seeing a moved root.
114  MasstreeIntermediatePage* root;
115  WRAP_ERROR_CODE(get_first_root(context, true, &root));
116  ASSERT_ND(root->get_low_fence() == kInfimumSlice);
117  ASSERT_ND(root->get_high_fence() == kSupremumSlice);
118  if (root->is_moved()) {
119  ++root_retries;
120  if (root_retries > 50U) {
121  LOG(WARNING) << "Hm? there might be some contention to prevent grow-root. Gave up";
122  break;
123  }
124  continue;
125  }
126 
127  root_retries = 0;
128  const auto minipage_index = root->find_minipage(cur_slice);
129  auto& minipage = root->get_minipage(minipage_index);
130  auto pointer_index = minipage.find_pointer(cur_slice);
131 
132  MasstreePage* child;
133  WRAP_ERROR_CODE(follow_page(
134  context,
135  true,
136  minipage.pointers_ + pointer_index,
137  &child));
138  ASSERT_ND(!child->header().snapshot_);
139  cur_slice = child->get_high_fence(); // go on to next
140 
141  if (!child->is_moved()) {
142  // Split the child so that we can adopt it to the root
143  if (child->is_border()) {
144  if (child->get_key_count() >= 2U) {
145  MasstreeBorderPage* casted = reinterpret_cast<MasstreeBorderPage*>(child);
146  auto low_slice = child->get_low_fence();
147  auto high_slice = child->get_high_fence();
148  auto mid_slice = low_slice + (high_slice - low_slice) / 2U;
149  SplitBorder split(context, casted, mid_slice, disable_no_record_split);
150  WRAP_ERROR_CODE(context->run_nested_sysxct(&split, 2U));
151  } else {
152  ++skipped_children;
153  continue; // not worth splitting
154  }
155  } else {
156  MasstreeIntermediatePage* casted = reinterpret_cast<MasstreeIntermediatePage*>(child);
157  uint32_t grandchild_count = count_children_approximate(casted);
158  if (grandchild_count >= 2U) {
159  SplitIntermediate split(context, casted);
160  WRAP_ERROR_CODE(context->run_nested_sysxct(&split, 2U));
161  } else {
162  ++skipped_children;
163  continue; // not worth splitting
164  }
165  }
166  }
167 
168  Adopt adopt(context, root, child);
169  WRAP_ERROR_CODE(context->run_nested_sysxct(&adopt, 2U));
170  ++adopted_children;
171  }
172 
173  uint32_t after_children;
174  WRAP_ERROR_CODE(approximate_count_root_children(context, &after_children));
175  watch.stop();
176  LOG(INFO) << "fatify_double: adopted " << adopted_children << " root-children and skipped "
177  << skipped_children << " that are already too sparse in " << watch.elapsed_us() << "us."
178  << " child count: " << initial_children << "->" << after_children;
179 
180  return kRetOk;
181 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::find_border_physical ( thread::Thread *  context,
MasstreePage *  layer_root,
uint8_t  current_layer,
bool  for_writes,
KeySlice  slice,
MasstreeBorderPage **  border 
)
inline

Find a border node in the layer that corresponds to the given key slice.

Record-wise or page-wise operations.

Note
this method is physical-only. It doesn't take any long-term lock or readset.
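
To show where this method sits in a lookup, below is a simplified sketch of the per-layer descent that locate_record() performs on top of it. The slot search and next-layer check are elided to comments, and slice_layer() stands in for the key-slicing helpers in masstree_id.hpp; this is not the actual locate_record() source.

// Simplified descent sketch (assumes the usual foedus headers and
// namespace foedus::storage::masstree).
ErrorCode descend(
  thread::Thread* context,
  MasstreeStoragePimpl* masstree,
  const void* be_key,
  KeyLength key_length,
  bool for_writes,
  MasstreeBorderPage** out_border) {
  MasstreeIntermediatePage* first_root;
  CHECK_ERROR_CODE(masstree->get_first_root(context, for_writes, &first_root));
  MasstreePage* layer_root = first_root;
  for (uint8_t layer = 0;; ++layer) {
    // The 8 bytes of the key handled by this layer.
    KeySlice slice = slice_layer(be_key, key_length, layer);
    MasstreeBorderPage* border;
    CHECK_ERROR_CODE(masstree->find_border_physical(
      context, layer_root, layer, for_writes, slice, &border));
    SlotIndex index = 0;      // elided: search slice + key remainder in border
    bool next_layer = false;  // elided: check the record's next-layer bit
    if (next_layer) {
      // The key continues into a deeper layer; follow its root and repeat.
      CHECK_ERROR_CODE(masstree->follow_layer(
        context, for_writes, border, index, &layer_root));
    } else {
      *out_border = border;   // the record (or its absence) lives here
      return kErrorCodeOk;
    }
  }
}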

Definition at line 213 of file masstree_storage_pimpl.cpp.

References foedus::storage::assert_aligned_page(), ASSERT_ND, CHECK_ERROR_CODE, foedus::storage::masstree::MasstreeIntermediatePage::find_minipage(), foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::find_pointer(), follow_page(), foedus::storage::masstree::MasstreePage::get_low_fence(), foedus::storage::masstree::MasstreeIntermediatePage::get_minipage(), foedus::storage::masstree::MasstreePage::has_foster_child(), foedus::storage::masstree::MasstreePage::is_high_fence_supremum(), foedus::storage::masstree::MasstreePage::is_locked(), foedus::kErrorCodeOk, foedus::storage::masstree::kInfimumSlice, LIKELY, foedus::assorted::memory_fence_acquire(), foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::pointers_, foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::prefetch(), foedus::storage::masstree::MasstreePage::prefetch_general(), foedus::thread::Thread::resolve(), foedus::thread::Thread::run_nested_sysxct(), UNLIKELY, and foedus::storage::masstree::MasstreePage::within_fences().

Referenced by locate_record(), locate_record_normalized(), reserve_record(), and reserve_record_normalized().

219  {
220  assert_aligned_page(layer_root);
221  ASSERT_ND(layer_root->is_high_fence_supremum());
222  ASSERT_ND(layer_root->get_low_fence() == kInfimumSlice);
223  MasstreePage* cur = layer_root;
224  cur->prefetch_general();
225  while (true) {
226  assert_aligned_page(cur);
227  ASSERT_ND(cur->get_layer() == current_layer);
228  ASSERT_ND(cur->within_fences(slice));
229  if (cur->is_border()) {
230  // We follow foster-twins only in border pages.
231  // In intermediate pages, Master-Tree invariant tells us that we don't have to.
232  // Furthermore, if we do, we need to handle the case of empty-range intermediate pages.
233  // Rather we just do this only in border pages.
234  if (UNLIKELY(cur->has_foster_child())) {
235  // follow one of foster-twin.
236  if (cur->within_foster_minor(slice)) {
237  cur = reinterpret_cast<MasstreePage*>(context->resolve(cur->get_foster_minor()));
238  } else {
239  cur = reinterpret_cast<MasstreePage*>(context->resolve(cur->get_foster_major()));
240  }
241  ASSERT_ND(cur->within_fences(slice));
242  continue;
243  }
244  *border = reinterpret_cast<MasstreeBorderPage*>(cur);
245  return kErrorCodeOk;
246  } else {
247  MasstreeIntermediatePage* page = reinterpret_cast<MasstreeIntermediatePage*>(cur);
248  uint8_t minipage_index = page->find_minipage(slice);
249  MasstreeIntermediatePage::MiniPage& minipage = page->get_minipage(minipage_index);
250 
251  minipage.prefetch();
252  uint8_t pointer_index = minipage.find_pointer(slice);
253  DualPagePointer& pointer = minipage.pointers_[pointer_index];
254  MasstreePage* next;
255  CHECK_ERROR_CODE(follow_page(context, for_writes, &pointer, &next));
256  next->prefetch_general();
257  if (LIKELY(next->within_fences(slice))) {
258  if (next->has_foster_child() && !cur->is_moved()) {
259  // oh, the page has foster child, so we should adopt it.
260  // Whether Adopt actually adopted it or not,
261  // we follow the "old" next page. Master-Tree invariant guarantees that it's safe.
262  // This is beneficial when we lazily give up adoption in the method, eg other threads
263  // holding locks in the intermediate page.
264  if (!next->is_locked() && !cur->is_locked()) {
265  // Let's try adopting. No need to try many times. Adopt can be delayed
266  Adopt functor(context, page, next);
267  CHECK_ERROR_CODE(context->run_nested_sysxct(&functor, 2));
268  } else {
269  // We don't have to adopt it right away. Do it when it's not contended
270  DVLOG(1) << "Someone else seems doing something there.. already adopting? skip it";
271  }
272  }
273  cur = next;
274  } else {
275  // even in this case, local retry suffices thanks to foster-twin
276  DVLOG(0) << "Interesting. concurrent thread affected the search. local retry";
277  assorted::memory_fence_acquire();
278  }
279  }
280  }
281 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::follow_layer ( thread::Thread *  context,
bool  for_writes,
MasstreeBorderPage *  parent,
SlotIndex  record_index,
MasstreePage **  page 
)
inline

Follows to next layer's root page.

Definition at line 384 of file masstree_storage_pimpl.cpp.

References ASSERT_ND, CHECK_ERROR_CODE, foedus::storage::masstree::MasstreeBorderPage::does_point_to_layer(), follow_page(), foedus::storage::masstree::MasstreeBorderPage::get_next_layer(), foedus::storage::masstree::MasstreeBorderPage::get_owner_id(), foedus::xct::XctId::is_moved(), foedus::xct::XctId::is_next_layer(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::kErrorCodeOk, foedus::thread::Thread::run_nested_sysxct(), UNLIKELY, and foedus::xct::RwLockableXctId::xct_id_.

Referenced by locate_record(), and reserve_record().

389  {
390  ASSERT_ND(record_index < kBorderPageMaxSlots);
391  ASSERT_ND(parent->does_point_to_layer(record_index));
392  DualPagePointer* pointer = parent->get_next_layer(record_index);
393  xct::RwLockableXctId* owner = parent->get_owner_id(record_index);
394  ASSERT_ND(owner->xct_id_.is_next_layer() || owner->xct_id_.is_moved());
395  ASSERT_ND(!pointer->is_both_null());
396  MasstreePage* next_root;
397  CHECK_ERROR_CODE(follow_page(context, for_writes, pointer, &next_root));
398 
399  // root page has a foster child... time for tree growth!
400  if (UNLIKELY(next_root->has_foster_child())) {
401  ASSERT_ND(next_root->get_layer() > 0);
402  // Either case, we follow the old page. Master-Tree invariant guarantees it's safe
403  GrowNonFirstLayerRoot functor(context, parent, record_index);
404  CHECK_ERROR_CODE(context->run_nested_sysxct(&functor, 2U));
405  }
406 
407  ASSERT_ND(next_root);
408  *page = next_root;
409  return kErrorCodeOk;
410 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::follow_page ( thread::Thread *  context,
bool  for_writes,
storage::DualPagePointer *  pointer,
MasstreePage **  page 
)

Thread::follow_page_pointer() for masstree.

Because of how masstree works, this method never creates a completely new page; in other words, always !pointer->is_both_null(). It either reads an existing volatile/snapshot page or creates a volatile page from an existing snapshot page.

Definition at line 367 of file masstree_storage_pimpl.cpp.

References ASSERT_ND, foedus::thread::Thread::follow_page_pointer(), and foedus::storage::DualPagePointer::is_both_null().

Referenced by fatify_first_root_double(), find_border_physical(), follow_layer(), verify_single_thread_border(), and verify_single_thread_intermediate().

371  {
372  ASSERT_ND(!pointer->is_both_null());
373  return context->follow_page_pointer(
374  nullptr, // masstree doesn't create a new page except splits.
375  false, // so, there is no null page possible
376  for_writes, // always get volatile pages for writes
377  true,
378  pointer,
379  reinterpret_cast<Page**>(page),
380  nullptr, // only used for new page creation, so nothing to pass
381  -1); // same as above
382 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::get_first_root ( thread::Thread *  context,
bool  for_write,
MasstreeIntermediatePage **  root 
)

Root-node related, such as a method to retrieve 1st-root, to grow, etc.

Definition at line 58 of file masstree_storage_pimpl.cpp.

References foedus::storage::assert_aligned_page(), ASSERT_ND, CHECK_ERROR_CODE, foedus::Attachable< MasstreeStorageControlBlock >::control_block_, foedus::thread::Thread::follow_page_pointer(), get_first_root_pointer_address(), foedus::storage::masstree::MasstreePage::get_high_fence(), get_id(), foedus::storage::masstree::MasstreePage::get_layer(), foedus::storage::masstree::MasstreePage::get_low_fence(), foedus::storage::masstree::MasstreePage::has_foster_child(), foedus::storage::masstree::MasstreePage::header(), foedus::kErrorCodeOk, foedus::storage::masstree::kInfimumSlice, foedus::storage::masstree::kSupremumSlice, foedus::thread::Thread::run_nested_sysxct(), foedus::storage::PageHeader::snapshot_, and UNLIKELY.

Referenced by approximate_count_root_children(), fatify_first_root_double(), locate_record(), locate_record_normalized(), foedus::storage::masstree::MasstreeCursor::open(), prefetch_pages_normalized(), reserve_record(), reserve_record_normalized(), and verify_single_thread().

61  {
62  DualPagePointer* root_pointer = get_first_root_pointer_address();
63  MasstreeIntermediatePage* page = nullptr;
64  CHECK_ERROR_CODE(context->follow_page_pointer(
65  nullptr,
66  false,
67  for_write,
68  true,
69  root_pointer,
70  reinterpret_cast<Page**>(&page),
71  nullptr,
72  0));
73 
74  assert_aligned_page(page);
75  ASSERT_ND(page->get_layer() == 0);
76  ASSERT_ND(page->get_low_fence() == kInfimumSlice);
77  ASSERT_ND(page->get_high_fence() == kSupremumSlice);
78  ASSERT_ND(!for_write || !page->header().snapshot_);
79 
80  if (UNLIKELY(page->has_foster_child())) {
81  ASSERT_ND(!page->header().snapshot_);
82  // root page has a foster child... time for tree growth!
83  // either case, we just follow the moved page. Master-Tree invariant guarantees it's safe.
84  if (control_block_->first_root_locked_) {
85  // To avoid contention, we begin it only when the lock seems uncontended.
86  DVLOG(0) << "Other thread seems growing the first-layer. let him do that";
87  } else {
88  GrowFirstLayerRoot functor(context, get_id());
89  CHECK_ERROR_CODE(context->run_nested_sysxct(&functor, 2));
90  }
91  }
92 
93  ASSERT_ND(page->get_layer() == 0);
94  ASSERT_ND(page->get_low_fence() == kInfimumSlice);
95  ASSERT_ND(page->get_high_fence() == kSupremumSlice);
96  *root = page;
97  return kErrorCodeOk;
98 }

const DualPagePointer& foedus::storage::masstree::MasstreeStoragePimpl::get_first_root_pointer ( ) const
inline

Definition at line 115 of file masstree_storage_pimpl.hpp.

References foedus::Attachable< MasstreeStorageControlBlock >::control_block_.

Referenced by peek_volatile_page_boundaries().

115  {
116  return control_block_->root_page_pointer_;
117  }

DualPagePointer* foedus::storage::masstree::MasstreeStoragePimpl::get_first_root_pointer_address ( )
inline

Definition at line 118 of file masstree_storage_pimpl.hpp.

References foedus::Attachable< MasstreeStorageControlBlock >::control_block_.

Referenced by get_first_root().

118 { return &control_block_->root_page_pointer_; }

StorageId foedus::storage::masstree::MasstreeStoragePimpl::get_id ( ) const
inline

Definition at line 112 of file masstree_storage_pimpl.hpp.

References foedus::Attachable< MasstreeStorageControlBlock >::control_block_.

Referenced by delete_general(), get_first_root(), increment_general(), insert_general(), load_empty(), overwrite_general(), register_record_write_log(), and upsert_general().

112 { return control_block_->meta_.id_; }

const MasstreeMetadata& foedus::storage::masstree::MasstreeStoragePimpl::get_meta ( ) const
inline

Definition at line 114 of file masstree_storage_pimpl.hpp.

References foedus::Attachable< MasstreeStorageControlBlock >::control_block_.

Referenced by load(), and reserve_record().

114 { return control_block_->meta_; }

const StorageName& foedus::storage::masstree::MasstreeStoragePimpl::get_name ( ) const
inline

Definition at line 113 of file masstree_storage_pimpl.hpp.

References foedus::Attachable< MasstreeStorageControlBlock >::control_block_.

Referenced by create(), drop(), fatify_first_root(), and prefetch_pages_normalized().

113 { return control_block_->meta_.name_; }

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::hcc_reset_all_temperature_stat ( )

For stupid reasons (I'm lazy!) these are defined in _debug.cpp.

Definition at line 151 of file masstree_storage_debug.cpp.

References CHECK_ERROR, foedus::Attachable< MasstreeStorageControlBlock >::control_block_, foedus::Attachable< MasstreeStorageControlBlock >::engine_, hcc_reset_all_temperature_stat_follow(), foedus::storage::VolatilePagePointer::is_null(), foedus::kRetOk, and foedus::storage::DualPagePointer::volatile_pointer_.

Referenced by foedus::storage::masstree::MasstreeStorage::hcc_reset_all_temperature_stat().

151  {
152  LOG(INFO) << "**"
153  << std::endl << "***************************************************************"
154  << std::endl << "*** Reseting " << MasstreeStorage(engine_, control_block_) << "'s "
155  << std::endl << "*** temperature stat for HCC"
156  << std::endl << "***************************************************************";
157 
158  DualPagePointer pointer = control_block_->root_page_pointer_;
159  if (pointer.volatile_pointer_.is_null()) {
160  LOG(INFO) << "No volatile pages.";
161  } else {
162  CHECK_ERROR(hcc_reset_all_temperature_stat_follow(pointer.volatile_pointer_));
163  }
164 
165  LOG(INFO) << "Done resettting";
166  return kRetOk;
167 }

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::hcc_reset_all_temperature_stat_follow ( VolatilePagePointer  page_id)

Definition at line 192 of file masstree_storage_debug.cpp.

References CHECK_ERROR, foedus::Attachable< MasstreeStorageControlBlock >::engine_, foedus::memory::EngineMemory::get_global_volatile_page_resolver(), foedus::Engine::get_memory_manager(), hcc_reset_all_temperature_stat_recurse(), foedus::storage::VolatilePagePointer::is_null(), foedus::kRetOk, and foedus::memory::GlobalVolatilePageResolver::resolve_offset().

Referenced by hcc_reset_all_temperature_stat(), and hcc_reset_all_temperature_stat_recurse().

193  {
194  if (!page_id.is_null()) {
195  MasstreePage* page = reinterpret_cast<MasstreePage*>(
196  engine_->get_memory_manager()->get_global_volatile_page_resolver().resolve_offset(page_id));
197  CHECK_ERROR(hcc_reset_all_temperature_stat_recurse(page));
198  }
199  return kRetOk;
200 }

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::hcc_reset_all_temperature_stat_recurse ( MasstreePage *  parent)

Definition at line 170 of file masstree_storage_debug.cpp.

References CHECK_ERROR, foedus::storage::masstree::MasstreeBorderPage::does_point_to_layer(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreeBorderPage::get_next_layer(), hcc_reset_all_temperature_stat_follow(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::PageHeader::hotness_, foedus::storage::masstree::MasstreePage::is_border(), foedus::storage::masstree::MasstreeIntermediatePointerIterator::is_valid(), foedus::kRetOk, foedus::assorted::ProbCounter::reset(), and foedus::storage::DualPagePointer::volatile_pointer_.

Referenced by hcc_reset_all_temperature_stat_follow().

170  {
171  uint16_t key_count = parent->get_key_count();
172  if (parent->is_border()) {
173  MasstreeBorderPage* casted = reinterpret_cast<MasstreeBorderPage*>(parent);
174  casted->header().hotness_.reset();
175  for (uint16_t i = 0; i < key_count; ++i) {
176  if (casted->does_point_to_layer(i)) {
177  CHECK_ERROR(hcc_reset_all_temperature_stat_follow(
178  casted->get_next_layer(i)->volatile_pointer_));
179  }
180  }
181  } else {
182  MasstreeIntermediatePage* casted = reinterpret_cast<MasstreeIntermediatePage*>(parent);
183  for (MasstreeIntermediatePointerIterator it(casted); it.is_valid(); it.next()) {
184  CHECK_ERROR(hcc_reset_all_temperature_stat_follow(
185  it.get_pointer().volatile_pointer_));
186  }
187  }
188 
189  return kRetOk;
190 }

template<typename PAYLOAD >
ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::increment_general ( thread::Thread *  context,
const RecordLocation &  location,
const void *  be_key,
KeyLength  key_length,
PAYLOAD *  value,
PayloadLength  payload_offset 
)

Implementation of the increment_record family.

Use with locate_record(); a caller-side sketch follows.
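
As with delete_general(), callers pair this with locate_record(). A hypothetical sketch (illustration only; compare MasstreeStorage::increment_record()) of the read-modify-write contract: on return, *value holds the old payload plus the given delta.

// Hypothetical helper (assumes the usual foedus headers and
// namespace foedus::storage::masstree).
template <typename PAYLOAD>
ErrorCode increment_by_key(
  thread::Thread* context,
  MasstreeStoragePimpl* masstree,
  const void* be_key,
  KeyLength key_length,
  PAYLOAD* value,               // in: delta, out: old payload + delta
  PayloadLength payload_offset) {
  RecordLocation location;
  CHECK_ERROR_CODE(masstree->locate_record(
    context, be_key, key_length, true, &location));  // for_writes=true
  return masstree->increment_general<PAYLOAD>(
    context, location, be_key, key_length, value, payload_offset);
}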

Definition at line 806 of file masstree_storage_pimpl.cpp.

References foedus::storage::masstree::MasstreeCommonLogType::calculate_log_length(), CHECK_ERROR_CODE, check_next_layer_bit(), get_id(), foedus::thread::Thread::get_numa_node(), foedus::storage::masstree::MasstreeBorderPage::get_payload_length(), foedus::storage::masstree::MasstreeBorderPage::get_record_payload(), foedus::thread::Thread::get_thread_log_buffer(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::RecordLocation::index_, foedus::xct::XctId::is_deleted(), foedus::kErrorCodeStrKeyNotFound, foedus::kErrorCodeStrTooShortPayload, foedus::storage::masstree::RecordLocation::observed_, foedus::storage::masstree::RecordLocation::page_, foedus::storage::masstree::MasstreeOverwriteLogType::populate(), register_record_write_log(), foedus::log::ThreadLogBuffer::reserve_new_log(), and foedus::storage::PageHeader::stat_last_updater_node_.

Referenced by foedus::storage::masstree::MasstreeCursor::increment_record(), foedus::storage::masstree::MasstreeStorage::increment_record(), and foedus::storage::masstree::MasstreeStorage::increment_record_normalized().

812  {
813  if (location.observed_.is_deleted()) {
814  // in this case, we don't need a page-version set. the physical record is surely there.
815  return kErrorCodeStrKeyNotFound;
816  }
817  CHECK_ERROR_CODE(check_next_layer_bit(location.observed_));
818  MasstreeBorderPage* border = location.page_;
819  if (border->get_payload_length(location.index_) < payload_offset + sizeof(PAYLOAD)) {
820  LOG(WARNING) << "short record "; // probably this is a rare error. so warn.
821  return kErrorCodeStrTooShortPayload;
822  }
823 
824  char* ptr = border->get_record_payload(location.index_) + payload_offset;
825  *value += *reinterpret_cast<const PAYLOAD*>(ptr);
826 
827  uint16_t log_length = MasstreeOverwriteLogType::calculate_log_length(key_length, sizeof(PAYLOAD));
828  MasstreeOverwriteLogType* log_entry = reinterpret_cast<MasstreeOverwriteLogType*>(
829  context->get_thread_log_buffer().reserve_new_log(log_length));
830  log_entry->populate(
831  get_id(),
832  be_key,
833  key_length,
834  value,
835  payload_offset,
836  sizeof(PAYLOAD));
837  border->header().stat_last_updater_node_ = context->get_numa_node();
838  return register_record_write_log(context, location, log_entry);
839 }
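
The following is a hypothetical usage sketch, not library code: it shows how a caller pairs locate_record() with this method, which is essentially what MasstreeStorage::increment_record() does (namespaces abbreviated as in the listing above).

  template <typename PAYLOAD>
  ErrorCode increment_example(
    thread::Thread* context,
    MasstreeStoragePimpl* pimpl,
    const void* be_key,
    KeyLength key_length,
    PAYLOAD* value) {  // in/out: pass the delta, receive the new total
    RecordLocation location;
    // Logical search; takes a read-set entry so precommit catches races.
    CHECK_ERROR_CODE(pimpl->locate_record(
      context, be_key, key_length, true /*for_writes*/, &location));
    // Reads the current value, adds *value, and logs the result
    // at payload offset 0.
    return pimpl->increment_general<PAYLOAD>(
      context, location, be_key, key_length, value, 0);
  }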

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::insert_general ( thread::Thread *  context,
const RecordLocation &  location,
const void *  be_key,
KeyLength  key_length,
const void *  payload,
PayloadLength  payload_count 
)

implementation of insert_record family.

use with reserve_record()

Definition at line 661 of file masstree_storage_pimpl.cpp.

References ASSERT_ND, foedus::storage::masstree::MasstreeCommonLogType::calculate_log_length(), CHECK_ERROR_CODE, check_next_layer_bit(), get_id(), foedus::storage::masstree::MasstreeBorderPage::get_max_payload_length(), foedus::thread::Thread::get_numa_node(), foedus::thread::Thread::get_thread_log_buffer(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::RecordLocation::index_, foedus::xct::XctId::is_deleted(), foedus::kErrorCodeStrKeyAlreadyExists, foedus::storage::masstree::RecordLocation::observed_, foedus::storage::masstree::RecordLocation::page_, foedus::storage::masstree::MasstreeInsertLogType::populate(), register_record_write_log(), foedus::log::ThreadLogBuffer::reserve_new_log(), and foedus::storage::PageHeader::stat_last_updater_node_.

Referenced by foedus::storage::masstree::MasstreeStorage::insert_record(), and foedus::storage::masstree::MasstreeStorage::insert_record_normalized().

667  {
668  if (!location.observed_.is_deleted()) {
669  return kErrorCodeStrKeyAlreadyExists;
670  }
671  CHECK_ERROR_CODE(check_next_layer_bit(location.observed_));
672 
673  // as of reserve_record() it was spacious enough, and this length is
674  // either immutable or only increases, so this must hold.
675  MasstreeBorderPage* border = location.page_;
676  ASSERT_ND(border->get_max_payload_length(location.index_) >= payload_count);
677 
678  uint16_t log_length = MasstreeInsertLogType::calculate_log_length(key_length, payload_count);
679  MasstreeInsertLogType* log_entry = reinterpret_cast<MasstreeInsertLogType*>(
680  context->get_thread_log_buffer().reserve_new_log(log_length));
681  log_entry->populate(
682  get_id(),
683  be_key,
684  key_length,
685  payload,
686  payload_count);
687  border->header().stat_last_updater_node_ = context->get_numa_node();
688  return register_record_write_log(context, location, log_entry);
689 }
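
A hypothetical sketch (not library code) of the reserve-then-insert pairing that MasstreeStorage::insert_record() performs:

  ErrorCode insert_example(
    thread::Thread* context,
    MasstreeStoragePimpl* pimpl,
    const void* be_key,
    KeyLength key_length,
    const void* payload,
    PayloadLength payload_count) {
    RecordLocation location;
    // Physically reserve (or find) a slot with enough payload space.
    // Here the physical-size hint simply equals the requested size.
    CHECK_ERROR_CODE(pimpl->reserve_record(
      context, be_key, key_length, payload_count, payload_count, &location));
    // Logical insert; returns kErrorCodeStrKeyAlreadyExists when the
    // observed record is not in the deleted state.
    return pimpl->insert_general(
      context, location, be_key, key_length, payload, payload_count);
  }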

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::load ( const StorageControlBlock &  snapshot_block)

Definition at line 176 of file masstree_storage_pimpl.cpp.

References CHECK_ERROR, foedus::Attachable< MasstreeStorageControlBlock >::control_block_, foedus::Attachable< MasstreeStorageControlBlock >::engine_, foedus::Engine::get_memory_manager(), get_meta(), foedus::DefaultInitializable::initialize(), foedus::storage::kExists, foedus::kRetOk, foedus::UninitializeGuard::kWarnIfUninitializeError, load_empty(), foedus::memory::EngineMemory::load_one_volatile_page(), foedus::storage::StorageControlBlock::meta_, foedus::storage::Metadata::root_snapshot_page_id_, and foedus::DefaultInitializable::uninitialize().

Referenced by foedus::storage::masstree::MasstreeStorage::load().

176  {
177  control_block_->meta_ = static_cast<const MasstreeMetadata&>(snapshot_block.meta_);
178  const MasstreeMetadata& meta = control_block_->meta_;
179  control_block_->root_page_pointer_.snapshot_pointer_ = meta.root_snapshot_page_id_;
180  control_block_->root_page_pointer_.volatile_pointer_.word = 0;
181  control_block_->first_root_locked_ = false;
182 
183  // So far we assume the root page always has a volatile version.
184  // Create it now.
185  if (meta.root_snapshot_page_id_ != 0) {
186  cache::SnapshotFileSet fileset(engine_);
187  CHECK_ERROR(fileset.initialize());
188  UninitializeGuard fileset_guard(&fileset, UninitializeGuard::kWarnIfUninitializeError);
189  VolatilePagePointer volatile_pointer;
190  MasstreeIntermediatePage* volatile_root;
191  CHECK_ERROR(engine_->get_memory_manager()->load_one_volatile_page(
192  &fileset,
193  meta.root_snapshot_page_id_,
194  &volatile_pointer,
195  reinterpret_cast<Page**>(&volatile_root)));
196  CHECK_ERROR(fileset.uninitialize());
197  control_block_->root_page_pointer_.volatile_pointer_ = volatile_pointer;
198  } else {
199  LOG(INFO) << "This is an empty masstree: " << get_meta();
200  CHECK_ERROR(load_empty());
201  }
202 
203  control_block_->status_ = kExists;
204  LOG(INFO) << "Loaded a masstree-storage " << get_meta();
205  return kRetOk;
206 }

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::load_empty ( )

Definition at line 122 of file masstree_storage_pimpl.cpp.

References ASSERT_ND, foedus::Attachable< MasstreeStorageControlBlock >::control_block_, foedus::Attachable< MasstreeStorageControlBlock >::engine_, get_id(), foedus::Engine::get_memory_manager(), foedus::storage::masstree::MasstreeIntermediatePage::get_minipage(), foedus::memory::EngineMemory::get_node_memory(), foedus::memory::PagePool::get_resolver(), foedus::memory::NumaNodeMemoryRef::get_volatile_pool(), foedus::memory::PagePool::grab_one(), foedus::storage::masstree::MasstreeIntermediatePage::initialize_volatile_page(), foedus::storage::masstree::MasstreeBorderPage::initialize_volatile_page(), foedus::storage::masstree::kInfimumSlice, foedus::kRetOk, foedus::storage::masstree::kSupremumSlice, foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::pointers_, foedus::memory::LocalPageResolver::resolve_offset_newpage(), foedus::storage::VolatilePagePointer::set(), foedus::storage::DualPagePointer::snapshot_pointer_, foedus::storage::DualPagePointer::volatile_pointer_, and WRAP_ERROR_CODE.

Referenced by create(), and load().

122  {
123  control_block_->root_page_pointer_.snapshot_pointer_ = 0;
124  control_block_->root_page_pointer_.volatile_pointer_.word = 0;
125  control_block_->meta_.root_snapshot_page_id_ = 0;
126 
127  const uint16_t kDummyNode = 0; // whatever. just pick from the first node
128  memory::PagePool* pool
129  = engine_->get_memory_manager()->get_node_memory(kDummyNode)->get_volatile_pool();
130  const memory::LocalPageResolver &local_resolver = pool->get_resolver();
131 
132  // The root of first layer is always an intermediate page.
133  // This is a special rule only for first layer to simplify partitioning and composer.
134  memory::PagePoolOffset root_offset;
135  WRAP_ERROR_CODE(pool->grab_one(&root_offset));
136  ASSERT_ND(root_offset);
137  MasstreeIntermediatePage* root_page = reinterpret_cast<MasstreeIntermediatePage*>(
138  local_resolver.resolve_offset_newpage(root_offset));
139  control_block_->first_root_locked_ = false;
140  control_block_->root_page_pointer_.snapshot_pointer_ = 0;
141  control_block_->root_page_pointer_.volatile_pointer_.set(kDummyNode, root_offset);
142  root_page->initialize_volatile_page(
143  get_id(),
144  control_block_->root_page_pointer_.volatile_pointer_,
145  0,
146  1,
147  kInfimumSlice,
148  kSupremumSlice);
149 
150  // Also allocate the only child.
151  memory::PagePoolOffset child_offset;
152  WRAP_ERROR_CODE(pool->grab_one(&child_offset));
153  ASSERT_ND(child_offset);
154  MasstreeBorderPage* child_page = reinterpret_cast<MasstreeBorderPage*>(
155  local_resolver.resolve_offset_newpage(child_offset));
156  VolatilePagePointer child_pointer;
157  child_pointer.set(kDummyNode, child_offset);
158  child_page->initialize_volatile_page(get_id(), child_pointer, 0, kInfimumSlice, kSupremumSlice);
159  root_page->get_minipage(0).pointers_[0].snapshot_pointer_ = 0;
160  root_page->get_minipage(0).pointers_[0].volatile_pointer_ = child_pointer;
161  return kRetOk;
162 }
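
The initial tree shape built above can be pictured as follows (an informal sketch, not doxygen output):

  first-layer root: MasstreeIntermediatePage, fences [kInfimumSlice, kSupremumSlice)
    minipage[0].pointers_[0]
      --> MasstreeBorderPage, 0 records, fences [kInfimumSlice, kSupremumSlice)

The intermediate-page root is a special rule for the first layer only; a deeper layer's root may be a border page, which is why peek_volatile_page_boundaries_this_layer() tolerates that case.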

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::locate_record ( thread::Thread *  context,
const void *  key,
KeyLength  key_length,
bool  for_writes,
RecordLocation *  result 
)

Identifies page and record for the key.

Definition at line 283 of file masstree_storage_pimpl.cpp.

References foedus::xct::Xct::add_to_page_version_set(), ASSERT_ND, CHECK_ERROR_CODE, foedus::storage::masstree::RecordLocation::clear(), foedus::storage::masstree::MasstreeBorderPage::does_point_to_layer(), find_border_physical(), foedus::storage::masstree::MasstreeBorderPage::find_key(), follow_layer(), foedus::thread::Thread::get_current_xct(), get_first_root(), foedus::storage::masstree::MasstreePage::get_version(), foedus::storage::masstree::MasstreePage::get_version_address(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::kErrorCodeOk, foedus::kErrorCodeStrKeyNotFound, foedus::storage::masstree::kMaxKeyLength, foedus::assorted::memory_fence_consume(), foedus::storage::masstree::RecordLocation::populate_logical(), foedus::storage::masstree::slice_layer(), foedus::storage::PageHeader::snapshot_, and foedus::storage::PageVersion::status_.

Referenced by foedus::storage::masstree::MasstreeStorage::delete_record(), foedus::storage::masstree::MasstreeStorage::get_record(), foedus::storage::masstree::MasstreeStorage::get_record_part(), foedus::storage::masstree::MasstreeStorage::get_record_primitive(), foedus::storage::masstree::MasstreeStorage::increment_record(), foedus::storage::masstree::MasstreeStorage::overwrite_record(), and foedus::storage::masstree::MasstreeStorage::overwrite_record_primitive().

288  {
289  ASSERT_ND(result);
290  ASSERT_ND(key_length <= kMaxKeyLength);
291  result->clear();
292  xct::Xct* cur_xct = &context->get_current_xct();
293  MasstreePage* layer_root;
294  CHECK_ERROR_CODE(get_first_root(
295  context,
296  for_writes,
297  reinterpret_cast<MasstreeIntermediatePage**>(&layer_root)));
298  for (uint16_t current_layer = 0;; ++current_layer) {
299  KeyLength remainder_length = key_length - current_layer * 8;
300  KeySlice slice = slice_layer(key, key_length, current_layer);
301  const void* suffix = reinterpret_cast<const char*>(key) + (current_layer + 1) * 8;
302  MasstreeBorderPage* border;
303  CHECK_ERROR_CODE(find_border_physical(
304  context,
305  layer_root,
306  current_layer,
307  for_writes,
308  slice,
309  &border));
310  PageVersionStatus border_version = border->get_version().status_;
311  assorted::memory_fence_consume();
312  SlotIndex index = border->find_key(slice, suffix, remainder_length);
313 
314  if (index == kBorderPageMaxSlots) {
315  // this means not found. add it to page version set to protect the lack of record
316  if (!border->header().snapshot_) {
317  CHECK_ERROR_CODE(cur_xct->add_to_page_version_set(
318  border->get_version_address(),
319  border_version));
320  }
321  // So far we just use page-version set to protect this non-existence.
322  // TASK(Hideaki) range lock rather than page-set to improve concurrency?
323  result->clear();
324  return kErrorCodeStrKeyNotFound;
325  }
326  if (border->does_point_to_layer(index)) {
327  CHECK_ERROR_CODE(follow_layer(context, for_writes, border, index, &layer_root));
328  continue;
329  } else {
330  CHECK_ERROR_CODE(result->populate_logical(cur_xct, border, index, for_writes));
331  return kErrorCodeOk;
332  }
333  }
334 }
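
For illustration, a hypothetical helper (not library code) showing the read path; MasstreeStorage::get_record() is the real entry point per the references above:

  ErrorCode get_example(
    thread::Thread* context,
    MasstreeStoragePimpl* pimpl,
    const void* be_key,
    KeyLength key_length,
    void* payload,
    PayloadLength* payload_capacity) {
    RecordLocation location;
    // kErrorCodeStrKeyNotFound surfaces here, after the page-version set
    // has protected the observed non-existence.
    CHECK_ERROR_CODE(pimpl->locate_record(
      context, be_key, key_length, false /*for_writes*/, &location));
    // Copies the payload out; fails if *payload_capacity is too small.
    return pimpl->retrieve_general(context, location, payload, payload_capacity);
  }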

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::locate_record_normalized ( thread::Thread *  context,
KeySlice  key,
bool  for_writes,
RecordLocation *  result 
)

Identifies page and record for the normalized key.

Definition at line 336 of file masstree_storage_pimpl.cpp.

References foedus::xct::Xct::add_to_page_version_set(), ASSERT_ND, CHECK_ERROR_CODE, foedus::storage::masstree::RecordLocation::clear(), foedus::storage::masstree::MasstreeBorderPage::does_point_to_layer(), find_border_physical(), foedus::storage::masstree::MasstreeBorderPage::find_key_normalized(), foedus::thread::Thread::get_current_xct(), get_first_root(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreePage::get_version(), foedus::storage::masstree::MasstreePage::get_version_address(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::kErrorCodeOk, foedus::kErrorCodeStrKeyNotFound, foedus::storage::masstree::RecordLocation::populate_logical(), foedus::storage::PageHeader::snapshot_, and foedus::storage::PageVersion::status_.

Referenced by foedus::storage::masstree::MasstreeStorage::delete_record_normalized(), foedus::storage::masstree::MasstreeStorage::get_record_normalized(), foedus::storage::masstree::MasstreeStorage::get_record_part_normalized(), foedus::storage::masstree::MasstreeStorage::get_record_primitive_normalized(), foedus::storage::masstree::MasstreeStorage::increment_record_normalized(), foedus::storage::masstree::MasstreeStorage::overwrite_record_normalized(), and foedus::storage::masstree::MasstreeStorage::overwrite_record_primitive_normalized().

340  {
341  ASSERT_ND(result);
342  result->clear();
343  xct::Xct* cur_xct = &context->get_current_xct();
344  MasstreeBorderPage* border;
345  MasstreeIntermediatePage* layer_root;
346  CHECK_ERROR_CODE(get_first_root(context, for_writes, &layer_root));
347  CHECK_ERROR_CODE(find_border_physical(context, layer_root, 0, for_writes, key, &border));
348  SlotIndex index = border->find_key_normalized(0, border->get_key_count(), key);
349  PageVersionStatus border_version = border->get_version().status_;
350  if (index == kBorderPageMaxSlots) {
351  // this means not found
352  if (!border->header().snapshot_) {
353  CHECK_ERROR_CODE(cur_xct->add_to_page_version_set(
354  border->get_version_address(),
355  border_version));
356  }
357  // TASK(Hideaki) range lock
358  result->clear();
359  return kErrorCodeStrKeyNotFound;
360  }
361  // because this is just one slice, we never go to second layer
362  ASSERT_ND(!border->does_point_to_layer(index));
363  CHECK_ERROR_CODE(result->populate_logical(cur_xct, border, index, for_writes));
364  return kErrorCodeOk;
365 }
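
The same read path with a normalized key stays in the first layer because the whole key is a single 8-byte slice (again a hypothetical helper for illustration):

  ErrorCode get_normalized_example(
    thread::Thread* context,
    MasstreeStoragePimpl* pimpl,
    KeySlice key,
    void* payload,
    PayloadLength* payload_capacity) {
    RecordLocation location;
    CHECK_ERROR_CODE(pimpl->locate_record_normalized(
      context, key, false /*for_writes*/, &location));
    return pimpl->retrieve_general(context, location, payload, payload_capacity);
  }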

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::overwrite_general ( thread::Thread *  context,
const RecordLocation &  location,
const void *  be_key,
KeyLength  key_length,
const void *  payload,
PayloadLength  payload_offset,
PayloadLength  payload_count 
)

implementation of overwrite_record family.

use with locate_record()

Definition at line 772 of file masstree_storage_pimpl.cpp.

References foedus::storage::masstree::MasstreeCommonLogType::calculate_log_length(), CHECK_ERROR_CODE, check_next_layer_bit(), get_id(), foedus::thread::Thread::get_numa_node(), foedus::storage::masstree::MasstreeBorderPage::get_payload_length(), foedus::thread::Thread::get_thread_log_buffer(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::RecordLocation::index_, foedus::xct::XctId::is_deleted(), foedus::kErrorCodeStrKeyNotFound, foedus::kErrorCodeStrTooShortPayload, foedus::storage::masstree::RecordLocation::observed_, foedus::storage::masstree::RecordLocation::page_, foedus::storage::masstree::MasstreeOverwriteLogType::populate(), register_record_write_log(), foedus::log::ThreadLogBuffer::reserve_new_log(), and foedus::storage::PageHeader::stat_last_updater_node_.

Referenced by foedus::storage::masstree::MasstreeCursor::overwrite_record(), foedus::storage::masstree::MasstreeStorage::overwrite_record(), foedus::storage::masstree::MasstreeStorage::overwrite_record_normalized(), foedus::storage::masstree::MasstreeCursor::overwrite_record_primitive(), foedus::storage::masstree::MasstreeStorage::overwrite_record_primitive(), and foedus::storage::masstree::MasstreeStorage::overwrite_record_primitive_normalized().

779  {
780  if (location.observed_.is_deleted()) {
781  // in this case, we don't need a page-version set. the physical record is surely there.
782  return kErrorCodeStrKeyNotFound;
783  }
784  CHECK_ERROR_CODE(check_next_layer_bit(location.observed_));
785  MasstreeBorderPage* border = location.page_;
786  if (border->get_payload_length(location.index_) < payload_offset + payload_count) {
787  LOG(WARNING) << "short record "; // probably this is a rare error. so warn.
788  return kErrorCodeStrTooShortPayload;
789  }
790 
791  uint16_t log_length = MasstreeOverwriteLogType::calculate_log_length(key_length, payload_count);
792  MasstreeOverwriteLogType* log_entry = reinterpret_cast<MasstreeOverwriteLogType*>(
793  context->get_thread_log_buffer().reserve_new_log(log_length));
794  log_entry->populate(
795  get_id(),
796  be_key,
797  key_length,
798  payload,
799  payload_offset,
800  payload_count);
801  border->header().stat_last_updater_node_ = context->get_numa_node();
802  return register_record_write_log(context, location, log_entry);
803 }
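
Because payload_offset and payload_count select a sub-range, a caller can patch part of a record without rewriting the rest. A hypothetical sketch (not library code) that overwrites an 8-byte field at offset 16:

  ErrorCode patch_example(
    thread::Thread* context,
    MasstreeStoragePimpl* pimpl,
    const void* be_key,
    KeyLength key_length,
    uint64_t new_field) {
    RecordLocation location;
    CHECK_ERROR_CODE(pimpl->locate_record(
      context, be_key, key_length, true /*for_writes*/, &location));
    // Fails with kErrorCodeStrTooShortPayload if the record's payload is
    // shorter than payload_offset + payload_count (see the check above).
    return pimpl->overwrite_general(
      context, location, be_key, key_length, &new_field, 16, sizeof(new_field));
  }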

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::peek_volatile_page_boundaries ( Engine *  engine,
const MasstreeStorage::PeekBoundariesArguments &  args 
)

Defined in masstree_storage_peek.cpp.

Definition at line 37 of file masstree_storage_peek.cpp.

References foedus::storage::masstree::MasstreeStorage::PeekBoundariesArguments::found_boundary_count_, get_first_root_pointer(), foedus::memory::EngineMemory::get_global_volatile_page_resolver(), foedus::Engine::get_memory_manager(), foedus::storage::VolatilePagePointer::is_null(), foedus::kErrorCodeOk, peek_volatile_page_boundaries_next_layer(), peek_volatile_page_boundaries_this_layer(), foedus::storage::masstree::MasstreeStorage::PeekBoundariesArguments::prefix_slice_count_, foedus::memory::GlobalVolatilePageResolver::resolve_offset(), and foedus::storage::DualPagePointer::volatile_pointer_.

Referenced by foedus::storage::masstree::MasstreeStorage::peek_volatile_page_boundaries().

39  {
40  *args.found_boundary_count_ = 0;
41 
42  VolatilePagePointer root_pointer = get_first_root_pointer().volatile_pointer_;
43  if (root_pointer.is_null()) {
44  VLOG(0) << "This masstree has no volatile pages. Maybe the composer is executed as part of"
45  << " restart?";
46  return kErrorCodeOk;
47  }
48 
49  const memory::GlobalVolatilePageResolver& resolver
50  = engine->get_memory_manager()->get_global_volatile_page_resolver();
51  const MasstreePage* root
52  = reinterpret_cast<const MasstreePage*>(resolver.resolve_offset(root_pointer));
53 
54  if (args.prefix_slice_count_ == 0) {
55  return peek_volatile_page_boundaries_this_layer(root, resolver, args);
56  } else {
57  return peek_volatile_page_boundaries_next_layer(root, resolver, args);
58  }
59 }
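
A hypothetical setup of PeekBoundariesArguments (field names are taken from the references above; the exact construction in client code may differ, and pimpl and engine are assumed to be valid pointers), asking for up to 8 boundaries across the whole first layer:

  KeySlice boundaries[8];
  uint32_t boundary_count = 0;
  MasstreeStorage::PeekBoundariesArguments args;
  args.prefix_slices_ = nullptr;           // first-layer peek: no prefix slices
  args.prefix_slice_count_ = 0;
  args.from_ = kInfimumSlice;              // whole slice range
  args.to_ = kSupremumSlice;
  args.found_boundaries_ = boundaries;     // output buffer
  args.found_boundary_capacity_ = 8;
  args.found_boundary_count_ = &boundary_count;
  ErrorCode code = pimpl->peek_volatile_page_boundaries(engine, args);
  // The results are opportunistic hints; boundary_count == 0 is a normal
  // outcome, not an error.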

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::peek_volatile_page_boundaries_next_layer ( const MasstreePage *  layer_root,
const memory::GlobalVolatilePageResolver &  resolver,
const MasstreeStorage::PeekBoundariesArguments &  args 
)

Definition at line 61 of file masstree_storage_peek.cpp.

References ASSERT_ND, foedus::memory::GlobalVolatilePageResolver::begin_, foedus::storage::masstree::MasstreeBorderPage::does_point_to_layer(), foedus::memory::GlobalVolatilePageResolver::end_, foedus::storage::masstree::MasstreeIntermediatePage::find_minipage(), foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::find_pointer(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreePage::get_layer(), foedus::storage::masstree::MasstreePage::get_low_fence(), foedus::storage::masstree::MasstreeBorderPage::get_next_layer(), foedus::storage::VolatilePagePointer::get_numa_node(), foedus::storage::VolatilePagePointer::get_offset(), foedus::storage::masstree::MasstreeBorderPage::get_slice(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::MasstreePage::is_border(), foedus::storage::masstree::MasstreePage::is_high_fence_supremum(), foedus::storage::VolatilePagePointer::is_null(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::kErrorCodeOk, foedus::storage::masstree::kInfimumSlice, LIKELY, foedus::assorted::memory_fence_acquire(), foedus::memory::GlobalVolatilePageResolver::numa_node_count_, peek_volatile_page_boundaries_this_layer(), foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::pointers_, foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::prefetch(), foedus::storage::masstree::MasstreeBorderPage::prefetch_additional_if_needed(), foedus::storage::masstree::MasstreePage::prefetch_general(), foedus::storage::masstree::MasstreeStorage::PeekBoundariesArguments::prefix_slice_count_, foedus::storage::masstree::MasstreeStorage::PeekBoundariesArguments::prefix_slices_, foedus::memory::GlobalVolatilePageResolver::resolve_offset(), foedus::storage::PageHeader::snapshot_, UNLIKELY, foedus::storage::DualPagePointer::volatile_pointer_, and foedus::storage::masstree::MasstreePage::within_fences().

Referenced by peek_volatile_page_boundaries().

64  {
65  ASSERT_ND(!layer_root->header().snapshot_);
66  uint8_t this_layer = layer_root->get_layer();
67  ASSERT_ND(this_layer < args.prefix_slice_count_);
68  ASSERT_ND(layer_root->get_low_fence() == kInfimumSlice && layer_root->is_high_fence_supremum());
69  KeySlice slice = args.prefix_slices_[layer_root->get_layer()];
70 
71  // look for this slice in this layer. If we can't find it (which is unlikely), no results.
72  // compared to tree-traversal code in the transactional methods, we can give up whenever
73  // something rare happens.
74  const MasstreePage* cur = layer_root;
75  cur->prefetch_general();
76  while (!cur->is_border()) {
77  ASSERT_ND(cur->within_fences(slice));
78  // We do NOT follow foster-twins here.
79  // Foster-twins are sooner or later adopted, and Master-Tree invariant for intermediate page
80  // guarantees that we can just read the old page. no need to follow foster-twins.
81  const MasstreeIntermediatePage* page = reinterpret_cast<const MasstreeIntermediatePage*>(cur);
82  uint8_t minipage_index = page->find_minipage(slice);
83  const MasstreeIntermediatePage::MiniPage& minipage = page->get_minipage(minipage_index);
84 
85  minipage.prefetch();
86  uint8_t pointer_index = minipage.find_pointer(slice);
87  VolatilePagePointer pointer = minipage.pointers_[pointer_index].volatile_pointer_;
88  const MasstreePage* next
89  = reinterpret_cast<const MasstreePage*>(resolver.resolve_offset(pointer));
90  next->prefetch_general();
91  if (LIKELY(next->within_fences(slice))) {
92  cur = next;
93  } else {
94  // even in this case, local retry suffices thanks to foster-twin
95  VLOG(0) << "Interesting. concurrent thread affected the search. local retry";
96  assorted::memory_fence_acquire();
97  }
98  }
99  const MasstreeBorderPage* border = reinterpret_cast<const MasstreeBorderPage*>(cur);
100 
101  // following code are not transactional at all, but it's fine because these are just hints.
102  // however, make sure we don't hit something like segfault. check nulls etc.
103  ASSERT_ND(border->within_fences(slice));
104  uint8_t key_count = border->get_key_count();
105  SlotIndex rec = kBorderPageMaxSlots;
106  border->prefetch_additional_if_needed(key_count);
107  for (SlotIndex i = 0; i < key_count; ++i) {
108  KeySlice cur_slice = border->get_slice(i);
109  if (LIKELY(cur_slice < slice)) {
110  continue;
111  }
112  if (cur_slice > slice) {
113  break;
114  }
115  // one slice might be used for up to 10 keys, length 0 to 8 and pointer to next layer.
116  if (border->does_point_to_layer(i)) {
117  rec = i;
118  break;
119  }
120  }
121 
122  if (UNLIKELY(rec >= key_count || !border->does_point_to_layer(rec))) {
123  LOG(INFO) << "Slice not found during peeking. Gives up finding a page boundary hint";
124  return kErrorCodeOk;
125  }
126 
127  VolatilePagePointer pointer = border->get_next_layer(rec)->volatile_pointer_;
128  if (UNLIKELY(pointer.is_null()
129  || pointer.get_numa_node() >= resolver.numa_node_count_
130  || pointer.get_offset() < resolver.begin_
131  || pointer.get_offset() >= resolver.end_)) {
132  LOG(INFO) << "Encountered invalid page during peeking. Gives up finding a page boundary hint";
133  return kErrorCodeOk;
134  }
135  const MasstreePage* next_root
136  = reinterpret_cast<const MasstreePage*>(resolver.resolve_offset(pointer));
137  if (this_layer + 1U == args.prefix_slice_count_) {
138  return peek_volatile_page_boundaries_this_layer(next_root, resolver, args);
139  } else {
140  return peek_volatile_page_boundaries_next_layer(next_root, resolver, args);
141  }
142 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::peek_volatile_page_boundaries_this_layer ( const MasstreePage *  layer_root,
const memory::GlobalVolatilePageResolver &  resolver,
const MasstreeStorage::PeekBoundariesArguments &  args 
)

Definition at line 144 of file masstree_storage_peek.cpp.

References ASSERT_ND, foedus::storage::masstree::MasstreePage::get_layer(), foedus::storage::masstree::MasstreePage::get_low_fence(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::MasstreePage::is_border(), foedus::storage::masstree::MasstreePage::is_high_fence_supremum(), foedus::kErrorCodeOk, foedus::storage::masstree::kInfimumSlice, peek_volatile_page_boundaries_this_layer_recurse(), foedus::storage::masstree::MasstreeStorage::PeekBoundariesArguments::prefix_slice_count_, and foedus::storage::PageHeader::snapshot_.

Referenced by peek_volatile_page_boundaries(), and peek_volatile_page_boundaries_next_layer().

147  {
148  ASSERT_ND(!layer_root->header().snapshot_);
149  ASSERT_ND(layer_root->get_layer() == args.prefix_slice_count_);
150  ASSERT_ND(layer_root->get_low_fence() == kInfimumSlice && layer_root->is_high_fence_supremum());
151  if (layer_root->is_border()) {
152  ASSERT_ND(layer_root->get_layer() > 0); // first layer's root page is always intermediate
153  // the root page of the layer of interest is a border page, hence no boundaries.
154  return kErrorCodeOk;
155  }
156  const MasstreeIntermediatePage* cur
157  = reinterpret_cast<const MasstreeIntermediatePage*>(layer_root);
158  return peek_volatile_page_boundaries_this_layer_recurse(cur, resolver, args);
159 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::peek_volatile_page_boundaries_this_layer_recurse ( const MasstreeIntermediatePage *  cur,
const memory::GlobalVolatilePageResolver &  resolver,
const MasstreeStorage::PeekBoundariesArguments &  args 
)

Definition at line 161 of file masstree_storage_peek.cpp.

References ASSERT_ND, foedus::memory::GlobalVolatilePageResolver::begin_, CHECK_ERROR_CODE, foedus::memory::GlobalVolatilePageResolver::end_, foedus::storage::masstree::MasstreeIntermediatePage::find_minipage(), foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::find_pointer(), foedus::storage::masstree::MasstreeStorage::PeekBoundariesArguments::found_boundaries_, foedus::storage::masstree::MasstreeStorage::PeekBoundariesArguments::found_boundary_capacity_, foedus::storage::masstree::MasstreeStorage::PeekBoundariesArguments::found_boundary_count_, foedus::storage::masstree::MasstreeStorage::PeekBoundariesArguments::from_, foedus::storage::masstree::MasstreePage::get_btree_level(), foedus::storage::masstree::MasstreePage::get_high_fence(), foedus::storage::masstree::MasstreePage::get_layer(), foedus::storage::masstree::MasstreePage::get_low_fence(), foedus::storage::masstree::MasstreeIntermediatePointerIterator::get_low_key(), foedus::storage::masstree::MasstreeIntermediatePage::get_minipage(), foedus::storage::VolatilePagePointer::get_numa_node(), foedus::storage::VolatilePagePointer::get_offset(), foedus::storage::PageHeader::get_page_type(), foedus::storage::masstree::MasstreeIntermediatePointerIterator::get_pointer(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::MasstreeIntermediatePointerIterator::index_, foedus::storage::masstree::MasstreeIntermediatePointerIterator::index_mini_, foedus::storage::masstree::MasstreePage::is_border(), foedus::storage::masstree::MasstreePage::is_high_fence_supremum(), foedus::storage::VolatilePagePointer::is_null(), foedus::storage::masstree::MasstreeIntermediatePointerIterator::is_valid(), foedus::kErrorCodeOk, foedus::storage::masstree::kInfimumSlice, foedus::storage::kMasstreeIntermediatePageType, LIKELY, foedus::storage::masstree::MasstreeIntermediatePointerIterator::next(), foedus::memory::GlobalVolatilePageResolver::numa_node_count_, foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::prefetch(), foedus::storage::masstree::MasstreeStorage::PeekBoundariesArguments::prefix_slice_count_, foedus::memory::GlobalVolatilePageResolver::resolve_offset(), foedus::storage::PageHeader::snapshot_, foedus::storage::masstree::MasstreeStorage::PeekBoundariesArguments::to_, UNLIKELY, foedus::storage::DualPagePointer::volatile_pointer_, and foedus::storage::masstree::MasstreePage::within_fences().

Referenced by peek_volatile_page_boundaries_this_layer().

164  {
165  ASSERT_ND(!cur->header().snapshot_);
166  ASSERT_ND(!cur->is_border());
167  ASSERT_ND(cur->get_layer() == args.prefix_slice_count_);
168  // because of concurrent modifications, this is possible. we just give up finding hints.
169  if (UNLIKELY(cur->get_low_fence() >= args.to_ || cur->get_high_fence() <= args.from_)) {
170  return kErrorCodeOk;
171  }
172 
173  uint8_t begin_index = 0;
174  uint8_t begin_mini_index = 0;
175  if (cur->within_fences(args.from_)) {
176  begin_index = cur->find_minipage(args.from_);
177  const MasstreeIntermediatePage::MiniPage& minipage = cur->get_minipage(begin_index);
178  minipage.prefetch();
179  begin_mini_index = minipage.find_pointer(args.from_);
180  }
181 
182  // if children of this page are intermediate pages (this level>=2), we further recurse.
183  // if children of this page are border pages, just append the page boundaries stored in this page.
184  const bool needs_recurse = (cur->get_btree_level() >= 2U);
185  // we don't care whether the child has foster twins. it's rare, and most keys are already
186  // pushed up to this page. again, this method is opportunistic.
187  // this guarantees that the cost of peeking is cheap. we just read intermediate pages.
188  // again, this code is not protected from concurrent transactions at all.
189  // we just make sure it doesn't hit segfault etc.
190  MasstreeIntermediatePointerIterator it(cur);
191  it.index_ = begin_index;
192  it.index_mini_ = begin_mini_index;
193  for (; it.is_valid(); it.next()) {
194  VolatilePagePointer pointer = it.get_pointer().volatile_pointer_;
195  KeySlice boundary = it.get_low_key();
196  if (boundary >= args.to_) { // all pages under the pointer are >= boundary.
197  break;
198  }
199  if (boundary > args.from_) {
200  // at least guarantee that the resulting found_boundaries_ is ordered.
201  if ((*args.found_boundary_count_) == 0
202  || args.found_boundaries_[(*args.found_boundary_count_) - 1U] < boundary) {
203  ASSERT_ND((*args.found_boundary_count_) < args.found_boundary_capacity_);
204  args.found_boundaries_[*args.found_boundary_count_] = boundary;
205  ++(*args.found_boundary_count_);
206  if ((*args.found_boundary_count_) >= args.found_boundary_capacity_) {
207  VLOG(0) << "Found too many boundaries while peeking. stops here.";
208  return kErrorCodeOk;
209  }
210  }
211  }
212  // recurse to find more. note that it might contain pages of interest
213  // even if boundary < args.from_ because it's a *low*-fence.
214  if (needs_recurse && !pointer.is_null()) {
215  if (LIKELY(pointer.get_numa_node() < resolver.numa_node_count_
216  && pointer.get_offset() >= resolver.begin_
217  && pointer.get_offset() < resolver.end_)) {
218  const MasstreeIntermediatePage* next
219  = reinterpret_cast<const MasstreeIntermediatePage*>(resolver.resolve_offset(pointer));
220  if (next->is_border()) {
221  // this is possible because our masstree might be unbalanced.
222  ASSERT_ND(cur->get_layer() == 0); // but it can happen only at first layer's root
223  ASSERT_ND(cur->get_low_fence() == kInfimumSlice && cur->is_high_fence_supremum());
224  continue;
225  }
226  ASSERT_ND(next->header().get_page_type() == kMasstreeIntermediatePageType);
227  CHECK_ERROR_CODE(peek_volatile_page_boundaries_this_layer_recurse(next, resolver, args));
228  ASSERT_ND((*args.found_boundary_count_) <= args.found_boundary_capacity_);
229  if ((*args.found_boundary_count_) >= args.found_boundary_capacity_) {
230  return kErrorCodeOk;
231  }
232  }
233  }
234  }
235 
236  return kErrorCodeOk;
237 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::prefetch_pages_follow ( thread::Thread *  context,
DualPagePointer *  pointer,
bool  vol_on,
bool  snp_on,
KeySlice  from,
KeySlice  to 
)

Definition at line 112 of file masstree_storage_prefetch.cpp.

References ASSERT_ND, CHECK_ERROR_CODE, foedus::thread::Thread::find_or_read_a_snapshot_page(), foedus::thread::Thread::install_a_volatile_page(), foedus::storage::VolatilePagePointer::is_null(), foedus::kErrorCodeOk, foedus::storage::prefetch_page_l2(), prefetch_pages_normalized_recurse(), foedus::thread::Thread::resolve_cast(), foedus::storage::DualPagePointer::snapshot_pointer_, foedus::storage::to_page(), and foedus::storage::DualPagePointer::volatile_pointer_.

Referenced by prefetch_pages_normalized_recurse().

118  {
119  // first, do we have to cache snapshot page?
120  if (pointer->snapshot_pointer_ != 0) {
121  if (snp_on) {
122  MasstreePage* child;
123  CHECK_ERROR_CODE(context->find_or_read_a_snapshot_page(
124  pointer->snapshot_pointer_,
125  reinterpret_cast<Page**>(&child)));
126  prefetch_page_l2(child);
128  context,
129  false,
130  snp_on,
131  from,
132  to,
133  child));
134  }
135  // do we have to install volatile page based on it?
136  if (pointer->volatile_pointer_.is_null() && vol_on) {
137  ASSERT_ND(!to_page(pointer)->get_header().snapshot_);
138  Page* child;
139  CHECK_ERROR_CODE(context->install_a_volatile_page(pointer, &child));
140  }
141  }
142 
143  // then go down
144  if (!pointer->volatile_pointer_.is_null() && vol_on) {
145  ASSERT_ND(!to_page(pointer)->get_header().snapshot_);
146  MasstreePage* child = context->resolve_cast<MasstreePage>(pointer->volatile_pointer_);
147  prefetch_page_l2(child);
148  CHECK_ERROR_CODE(prefetch_pages_normalized_recurse(
149  context,
150  vol_on,
151  snp_on,
152  from,
153  to,
154  child));
155  }
156  return kErrorCodeOk;
157 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::prefetch_pages_normalized ( thread::Thread *  context,
bool  install_volatile,
bool  cache_snapshot,
KeySlice  from,
KeySlice  to 
)

defined in masstree_storage_prefetch.cpp

Definition at line 30 of file masstree_storage_prefetch.cpp.

References CHECK_ERROR_CODE, foedus::debugging::StopWatch::elapsed_us(), get_first_root(), get_name(), foedus::thread::Thread::get_thread_id(), foedus::kErrorCodeOk, foedus::storage::prefetch_page_l2(), prefetch_pages_normalized_recurse(), and foedus::debugging::StopWatch::stop().

Referenced by foedus::storage::masstree::MasstreeStorage::prefetch_pages_normalized().

35  {
36  debugging::StopWatch watch;
37  VLOG(0) << "Thread-" << context->get_thread_id()
38  << " prefetching " << get_name() << " from=" << from << ", to=" << to;
39 
40  MasstreeIntermediatePage* root_page;
41  CHECK_ERROR_CODE(get_first_root(context, false, &root_page));
42  prefetch_page_l2(root_page);
43  CHECK_ERROR_CODE(prefetch_pages_normalized_recurse(context, vol_on, snp_on, from, to, root_page));
44 
45  watch.stop();
46  VLOG(0) << "Thread-" << context->get_thread_id()
47  << " prefetched " << get_name() << " in " << watch.elapsed_us() << "us";
48  return kErrorCodeOk;
49 }
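
For example, a worker thread could warm the volatile pages of a slice range before a scan-heavy transaction (hypothetical helper; MasstreeStorage::prefetch_pages_normalized() is the public wrapper):

  ErrorCode warmup_example(
    thread::Thread* context,
    MasstreeStoragePimpl* pimpl,
    KeySlice from,
    KeySlice to) {
    // install_volatile=true, cache_snapshot=false: follow and prefetch
    // volatile pages only, without caching snapshot pages.
    return pimpl->prefetch_pages_normalized(context, true, false, from, to);
  }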

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::prefetch_pages_normalized_recurse ( thread::Thread *  context,
bool  install_volatile,
bool  cache_snapshot,
KeySlice  from,
KeySlice  to,
MasstreePage *  page 
)

Definition at line 51 of file masstree_storage_prefetch.cpp.

References CHECK_ERROR_CODE, foedus::storage::masstree::MasstreeBorderPage::does_point_to_layer(), foedus::storage::masstree::MasstreePage::get_foster_major(), foedus::storage::masstree::MasstreePage::get_foster_minor(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreeIntermediatePage::get_minipage(), foedus::storage::masstree::MasstreeBorderPage::get_next_layer(), foedus::storage::masstree::MasstreeIntermediatePage::get_separator(), foedus::storage::masstree::MasstreeBorderPage::get_slice(), foedus::storage::masstree::MasstreePage::has_foster_child(), foedus::storage::masstree::MasstreePage::is_border(), foedus::storage::masstree::MasstreePage::is_empty_range(), foedus::kErrorCodeOk, foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::key_count_, foedus::storage::masstree::kInfimumSlice, foedus::storage::masstree::kSupremumSlice, foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::pointers_, prefetch_pages_follow(), foedus::thread::Thread::resolve_cast(), and foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::separators_.

Referenced by prefetch_pages_follow(), and prefetch_pages_normalized().

57  {
58  if (p->is_empty_range()) {
59  return kErrorCodeOk;
60  }
61 
62  if (p->has_foster_child()) {
63  MasstreePage* minor = context->resolve_cast<MasstreePage>(p->get_foster_minor());
64  MasstreePage* major = context->resolve_cast<MasstreePage>(p->get_foster_major());
65  CHECK_ERROR_CODE(prefetch_pages_normalized_recurse(context, vol_on, snp_on, from, to, minor));
66  CHECK_ERROR_CODE(prefetch_pages_normalized_recurse(context, vol_on, snp_on, from, to, major));
67  return kErrorCodeOk;
68  }
69 
70  uint8_t count = p->get_key_count();
71  if (p->is_border()) {
72  MasstreeBorderPage* page = reinterpret_cast<MasstreeBorderPage*>(p);
73  for (uint8_t i = 0; i < count; ++i) {
74  if (page->does_point_to_layer(i) && page->get_slice(i) >= from && page->get_slice(i) <= to) {
75  DualPagePointer* pointer = page->get_next_layer(i);
76  // next layer. exhaustively read
77  CHECK_ERROR_CODE(prefetch_pages_follow(
78  context,
79  pointer,
80  vol_on,
81  snp_on,
82  kInfimumSlice,
83  kSupremumSlice));
84  }
85  }
86  } else {
87  MasstreeIntermediatePage* page = reinterpret_cast<MasstreeIntermediatePage*>(p);
88  for (uint8_t i = 0; i <= count; ++i) {
89  if (i < count && page->get_separator(i) >= from) {
90  continue;
91  }
92  if (i > 0 && page->get_separator(i - 1) > to) {
93  break;
94  }
95  MasstreeIntermediatePage::MiniPage& minipage = page->get_minipage(i);
96  for (uint8_t j = 0; j <= minipage.key_count_; ++j) {
97  if (j < minipage.key_count_ && minipage.separators_[j] >= from) {
98  continue;
99  }
100  if (j > 0 && minipage.separators_[j - 1] > to) {
101  break;
102  }
103  DualPagePointer* pointer = &minipage.pointers_[j];
104  // next layer. exhaustively read
105  CHECK_ERROR_CODE(prefetch_pages_follow(context, pointer, vol_on, snp_on, from, to));
106  }
107  }
108  }
109  return kErrorCodeOk;
110 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::register_record_write_log ( thread::Thread *  context,
const RecordLocation &  location,
log::RecordLogType *  log_entry 
)

Used in the following methods.

Definition at line 645 of file masstree_storage_pimpl.cpp.

References foedus::xct::Xct::add_related_write_set(), foedus::xct::Xct::add_to_write_set(), foedus::thread::Thread::get_current_xct(), get_id(), foedus::storage::masstree::MasstreeBorderPage::get_owner_id(), foedus::storage::masstree::MasstreeBorderPage::get_record(), foedus::storage::masstree::RecordLocation::index_, foedus::storage::masstree::RecordLocation::page_, and foedus::storage::masstree::RecordLocation::readset_.

Referenced by delete_general(), increment_general(), insert_general(), overwrite_general(), and upsert_general().

648  {
649  // If we have taken readset in locate_record, add as a related write set
650  MasstreeBorderPage* border = location.page_;
651  auto* tid = border->get_owner_id(location.index_);
652  char* record = border->get_record(location.index_);
653  xct::Xct* cur_xct = &context->get_current_xct();
654  if (location.readset_) {
655  return cur_xct->add_related_write_set(location.readset_, tid, record, log_entry);
656  } else {
657  return cur_xct->add_to_write_set(get_id(), tid, record, log_entry);
658  }
659 }

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::reserve_record ( thread::Thread *  context,
const void *  key,
KeyLength  key_length,
PayloadLength  payload_count,
PayloadLength  physical_payload_hint,
RecordLocation *  result 
)

Like locate_record(), this is also a logical operation.

Definition at line 412 of file masstree_storage_pimpl.cpp.

References ASSERT_ND, CHECK_ERROR_CODE, foedus::storage::masstree::RecordLocation::clear(), find_border_physical(), foedus::storage::masstree::MasstreeBorderPage::find_key_for_reserve(), follow_layer(), foedus::thread::Thread::get_current_xct(), get_first_root(), foedus::storage::masstree::MasstreePage::get_foster_major(), foedus::storage::masstree::MasstreePage::get_foster_minor(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreeBorderPage::get_max_payload_length(), get_meta(), foedus::storage::masstree::MasstreePage::has_foster_child(), foedus::storage::masstree::MasstreeBorderPage::FindKeyForReserveResult::index_, foedus::xct::XctId::is_moved(), foedus::xct::XctId::is_next_layer(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::kErrorCodeOk, foedus::storage::masstree::MasstreeBorderPage::kExactMatchLayerPointer, foedus::storage::masstree::MasstreeBorderPage::kExactMatchLocalRecord, foedus::storage::masstree::kMaxKeyLength, foedus::storage::masstree::MasstreeBorderPage::kNotFound, foedus::storage::masstree::MasstreeBorderPage::FindKeyForReserveResult::match_type_, foedus::assorted::memory_fence_acquire(), foedus::storage::masstree::RecordLocation::observed_, foedus::storage::masstree::ReserveRecords::out_split_needed_, foedus::storage::masstree::RecordLocation::populate_logical(), foedus::thread::Thread::resolve_cast(), foedus::thread::Thread::run_nested_sysxct(), foedus::storage::masstree::slice_layer(), foedus::storage::masstree::MasstreePage::within_fences(), and foedus::storage::masstree::MasstreePage::within_foster_minor().

Referenced by foedus::storage::masstree::MasstreeStorage::insert_record(), and foedus::storage::masstree::MasstreeStorage::upsert_record().

418  {
419  ASSERT_ND(key_length <= kMaxKeyLength);
420  ASSERT_ND(physical_payload_hint >= payload_count);
421  ASSERT_ND(result);
422  result->clear();
423  xct::Xct* cur_xct = &context->get_current_xct();
424 
425  MasstreePage* layer_root;
426  CHECK_ERROR_CODE(get_first_root(
427  context,
428  true,
429  reinterpret_cast<MasstreeIntermediatePage**>(&layer_root)));
430  for (uint16_t layer = 0;; ++layer) {
431  const KeyLength remainder = key_length - layer * sizeof(KeySlice);
432  const KeySlice slice = slice_layer(key, key_length, layer);
433  const void* const suffix = reinterpret_cast<const char*>(key) + (layer + 1) * sizeof(KeySlice);
434  MasstreeBorderPage* border;
435  CHECK_ERROR_CODE(find_border_physical(context, layer_root, layer, true, slice, &border));
436  while (true) { // retry loop for following foster child and temporary failure
437  // if we found out that the page was split and we should follow foster child, do it.
438  while (border->has_foster_child()) {
439  if (border->within_foster_minor(slice)) {
440  border = context->resolve_cast<MasstreeBorderPage>(border->get_foster_minor());
441  } else {
442  border = context->resolve_cast<MasstreeBorderPage>(border->get_foster_major());
443  }
444  }
445  ASSERT_ND(border->within_fences(slice));
446 
447  const SlotIndex count = border->get_key_count();
448  // ReserveRecords, we need a fence on BOTH sides.
449  // observe key count first, then verify the keys.
450  assorted::memory_fence_acquire();
451  MasstreeBorderPage::FindKeyForReserveResult match = border->find_key_for_reserve(
452  0,
453  count,
454  slice,
455  suffix,
456  remainder);
457 
458  if (match.match_type_ == MasstreeBorderPage::kExactMatchLayerPointer) {
459  ASSERT_ND(match.index_ < kBorderPageMaxSlots);
460  CHECK_ERROR_CODE(follow_layer(context, true, border, match.index_, &layer_root));
461  break; // next layer
462  } else if (match.match_type_ == MasstreeBorderPage::kExactMatchLocalRecord) {
463  // Even in this case, if the record space is too small, we must migrate it first.
464  if (border->get_max_payload_length(match.index_) >= payload_count) {
465  // Ok, this seems to already satisfy our need. but...
466  // Up until now, all the searches/expands were physical-only.
467  // Now we finalize the XID in the search result so that precommit will
468  // catch any change since now. This also means we need to re-check
469  // the status again, and retry if pointing to next layer or moved.
470  CHECK_ERROR_CODE(result->populate_logical(cur_xct, border, match.index_, true));
471  if (result->observed_.is_next_layer() || result->observed_.is_moved()) {
472  // because the search is optimistic, we might now see a XctId with next-layer bit on.
473  // in this case, we retry.
474  VLOG(0) << "Interesting. Next-layer-retry due to concurrent transaction";
476  continue;
477  }
478  return kErrorCodeOk;
479  }
480  }
481 
482  // In all other cases, we (probably) need to do something complex.
483  // ReserveRecords sysxct does it.
484  ReserveRecords reserve(
485  context,
486  border,
487  slice,
488  remainder,
489  suffix,
490  physical_payload_hint, // let's allocate conservatively
491  get_meta().should_aggresively_create_next_layer(layer, remainder),
492  match.match_type_ == MasstreeBorderPage::kNotFound ? count : match.index_);
493  CHECK_ERROR_CODE(context->run_nested_sysxct(&reserve, 2U));
494 
495  // We might need to split the page
496  if (reserve.out_split_needed_) {
497  SplitBorder split(
498  context,
499  border,
500  slice,
501  false,
502  true,
503  remainder,
504  physical_payload_hint,
505  suffix);
506  CHECK_ERROR_CODE(context->run_nested_sysxct(&split, 2U));
507  }
508 
509  // In either case, we should resume the search.
510  continue;
511  }
512  }
513 }
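
Since reserve_record() is a physical-only operation, callers pair it with a logical operation such as insert_general(). Below is a minimal sketch of that pairing, roughly what MasstreeStorage::insert_record() (listed under "Referenced by" above) is expected to do; the wrapper function and its locals are illustrative, not part of the library:

// A hedged sketch, not library code. Assumes the foedus headers declaring
// MasstreeStoragePimpl, RecordLocation, and CHECK_ERROR_CODE are included.
foedus::ErrorCode sketch_insert(
    foedus::thread::Thread* context,
    foedus::storage::masstree::MasstreeStoragePimpl* pimpl,
    const void* be_key,
    foedus::storage::masstree::KeyLength key_length,
    const void* payload,
    foedus::storage::masstree::PayloadLength payload_count) {
  foedus::storage::masstree::RecordLocation location;
  // Physical step: find or create a record slot with enough payload space.
  // physical_payload_hint must be >= payload_count; here we pass exactly that.
  CHECK_ERROR_CODE(pimpl->reserve_record(
      context, be_key, key_length, payload_count, payload_count, &location));
  // Logical step: registers the read/write set and the insert log entry.
  return pimpl->insert_general(
      context, location, be_key, key_length, payload, payload_count);
}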

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::reserve_record_normalized (thread::Thread *context, KeySlice key, PayloadLength payload_count, PayloadLength physical_payload_hint, RecordLocation *result)

Definition at line 515 of file masstree_storage_pimpl.cpp.

References ASSERT_ND, CHECK_ERROR_CODE, foedus::storage::masstree::RecordLocation::clear(), find_border_physical(), foedus::storage::masstree::MasstreeBorderPage::find_key_normalized(), foedus::thread::Thread::get_current_xct(), get_first_root(), foedus::storage::masstree::MasstreePage::get_foster_major(), foedus::storage::masstree::MasstreePage::get_foster_minor(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreeBorderPage::get_max_payload_length(), foedus::storage::masstree::MasstreePage::has_foster_child(), foedus::xct::XctId::is_moved(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::kErrorCodeOk, foedus::assorted::memory_fence_acquire(), foedus::storage::masstree::RecordLocation::observed_, foedus::storage::masstree::ReserveRecords::out_split_needed_, foedus::storage::masstree::RecordLocation::populate_logical(), foedus::thread::Thread::resolve_cast(), foedus::thread::Thread::run_nested_sysxct(), foedus::storage::masstree::MasstreePage::within_fences(), and foedus::storage::masstree::MasstreePage::within_foster_minor().

Referenced by foedus::storage::masstree::MasstreeStorage::insert_record_normalized(), and foedus::storage::masstree::MasstreeStorage::upsert_record_normalized().

{
  ASSERT_ND(result);
  result->clear();
  xct::Xct* cur_xct = &context->get_current_xct();
  MasstreeBorderPage* border;

  MasstreeIntermediatePage* layer_root;
  CHECK_ERROR_CODE(get_first_root(context, true, &layer_root));
  CHECK_ERROR_CODE(find_border_physical(context, layer_root, 0, true, key, &border));
  while (true) {  // retry loop for following foster child
    // if we found out that the page was split and we should follow the foster child, do it.
    while (border->has_foster_child()) {
      if (border->within_foster_minor(key)) {
        border = context->resolve_cast<MasstreeBorderPage>(border->get_foster_minor());
      } else {
        border = context->resolve_cast<MasstreeBorderPage>(border->get_foster_major());
      }
    }

    ASSERT_ND(border->within_fences(key));

    // because we never go on to a second layer in this case, it's either a full match or not-found
    const SlotIndex count = border->get_key_count();
    assorted::memory_fence_acquire();
    const SlotIndex index = border->find_key_normalized(0, count, key);

    if (index != kBorderPageMaxSlots) {
      // If the record space is too small, we can't use the existing record as-is.
      if (border->get_max_payload_length(index) >= payload_count) {
        CHECK_ERROR_CODE(result->populate_logical(cur_xct, border, index, true));
        if (result->observed_.is_moved()) {
          // Because the search is optimistic, we might now see an XctId with the
          // moved bit on. In this case, we retry.
          VLOG(0) << "Interesting. Moved by concurrent transaction";
          assorted::memory_fence_acquire();
          continue;
        }
        return kErrorCodeOk;
      }
    }

    // Same as the general case.
    ReserveRecords reserve(
      context,
      border,
      key,
      sizeof(KeySlice),
      nullptr,
      physical_payload_hint,
      false,
      index == kBorderPageMaxSlots ? count : index);
    CHECK_ERROR_CODE(context->run_nested_sysxct(&reserve, 2U));

    if (reserve.out_split_needed_) {
      SplitBorder split(
        context,
        border,
        key,
        false,
        true,
        sizeof(KeySlice),
        physical_payload_hint,
        nullptr);
      CHECK_ERROR_CODE(context->run_nested_sysxct(&split, 2U));
    }
    continue;
  }
}
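
Because a normalized key is exactly one 8-byte slice, this variant never descends into a second layer. A hedged sketch of direct use (locals illustrative; public callers would instead go through MasstreeStorage::insert_record_normalized() or upsert_record_normalized()):

// Fragment inside a function returning foedus::ErrorCode, with `pimpl` and
// `context` in scope as in the earlier sketch.
foedus::storage::masstree::RecordLocation location;
const foedus::storage::masstree::KeySlice key = 42ULL;  // an 8-byte integer key
const foedus::storage::masstree::PayloadLength payload_count = 16;
// Reserve exactly payload_count bytes; a larger hint would pre-allocate slack.
CHECK_ERROR_CODE(pimpl->reserve_record_normalized(
    context, key, payload_count, payload_count, &location));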

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::retrieve_general (thread::Thread *context, const RecordLocation &location, void *payload, PayloadLength *payload_capacity)

implementation of get_record family.

use with locate_record()

Definition at line 599 of file masstree_storage_pimpl.cpp.

References CHECK_ERROR_CODE, check_next_layer_bit(), foedus::storage::masstree::MasstreeBorderPage::get_payload_length(), foedus::storage::masstree::MasstreeBorderPage::get_record_payload(), foedus::storage::masstree::RecordLocation::index_, foedus::xct::XctId::is_deleted(), foedus::kErrorCodeOk, foedus::kErrorCodeStrKeyNotFound, foedus::kErrorCodeStrTooSmallPayloadBuffer, foedus::storage::masstree::RecordLocation::observed_, and foedus::storage::masstree::RecordLocation::page_.

Referenced by foedus::storage::masstree::MasstreeStorage::get_record(), and foedus::storage::masstree::MasstreeStorage::get_record_normalized().

{
  if (location.observed_.is_deleted()) {
    // This result is protected by the read set
    return kErrorCodeStrKeyNotFound;
  }
  CHECK_ERROR_CODE(check_next_layer_bit(location.observed_));

  // here, we do NOT have to do another optimistic-read protocol because we already took
  // the owner_id into the read set. If this read is corrupted, we will be aware of it at commit time.
  MasstreeBorderPage* border = location.page_;
  PayloadLength payload_length = border->get_payload_length(location.index_);
  if (payload_length > *payload_capacity) {
    // buffer too small
    DVLOG(0) << "buffer too small??" << payload_length << ":" << *payload_capacity;
    *payload_capacity = payload_length;
    return kErrorCodeStrTooSmallPayloadBuffer;
  }
  *payload_capacity = payload_length;
  std::memcpy(payload, border->get_record_payload(location.index_), payload_length);
  return kErrorCodeOk;
}
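
Note the in/out protocol of payload_capacity: it is the buffer size on input and the actual payload length on output, including on the too-small-buffer error. A hedged usage sketch (surrounding locals are illustrative):

// Fragment inside a function with `pimpl`, `context`, and a populated
// `location` in scope.
char buffer[64];
foedus::storage::masstree::PayloadLength capacity = sizeof(buffer);
foedus::ErrorCode code = pimpl->retrieve_general(context, location, buffer, &capacity);
if (code == foedus::kErrorCodeStrTooSmallPayloadBuffer) {
  // capacity now holds the record's actual payload length;
  // the caller can allocate a larger buffer and retry.
}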

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::retrieve_part_general (thread::Thread *context, const RecordLocation &location, void *payload, PayloadLength payload_offset, PayloadLength payload_count)

Definition at line 625 of file masstree_storage_pimpl.cpp.

References CHECK_ERROR_CODE, check_next_layer_bit(), foedus::storage::masstree::MasstreeBorderPage::get_payload_length(), foedus::storage::masstree::MasstreeBorderPage::get_record_payload(), foedus::storage::masstree::RecordLocation::index_, foedus::xct::XctId::is_deleted(), foedus::kErrorCodeOk, foedus::kErrorCodeStrKeyNotFound, foedus::kErrorCodeStrTooShortPayload, foedus::storage::masstree::RecordLocation::observed_, and foedus::storage::masstree::RecordLocation::page_.

Referenced by foedus::storage::masstree::MasstreeStorage::get_record_part(), foedus::storage::masstree::MasstreeStorage::get_record_part_normalized(), foedus::storage::masstree::MasstreeStorage::get_record_primitive(), and foedus::storage::masstree::MasstreeStorage::get_record_primitive_normalized().

{
  if (location.observed_.is_deleted()) {
    // This result is protected by the read set
    return kErrorCodeStrKeyNotFound;
  }
  CHECK_ERROR_CODE(check_next_layer_bit(location.observed_));
  MasstreeBorderPage* border = location.page_;
  if (border->get_payload_length(location.index_) < payload_offset + payload_count) {
    LOG(WARNING) << "short record";  // probably this is a rare error, so warn.
    return kErrorCodeStrTooShortPayload;
  }
  std::memcpy(payload, border->get_record_payload(location.index_) + payload_offset, payload_count);
  return kErrorCodeOk;
}
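
This is what the get_record_primitive() family presumably reduces to: a fixed-size copy at a known payload offset. A hedged sketch reading one 4-byte field (the offset and field type are illustrative):

// Fragment inside a function returning foedus::ErrorCode, with `pimpl`,
// `context`, and a populated `location` in scope.
uint32_t field;  // a fixed-size column assumed to live at byte offset 8
CHECK_ERROR_CODE(pimpl->retrieve_part_general(
    context, location, &field, 8, sizeof(field)));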

xct::TrackMovedRecordResult foedus::storage::masstree::MasstreeStoragePimpl::track_moved_record (xct::RwLockableXctId *old_address, xct::WriteXctAccess *write_set) [inline]

Definition at line 848 of file masstree_storage_pimpl.cpp.

References ASSERT_ND, foedus::Attachable< MasstreeStorageControlBlock >::engine_, foedus::storage::masstree::MasstreePage::is_border(), foedus::storage::masstree::MasstreePage::is_moved(), foedus::xct::RwLockableXctId::is_next_layer(), foedus::storage::to_page(), and foedus::storage::masstree::MasstreeBorderPage::track_moved_record().

Referenced by foedus::storage::masstree::MasstreeStorage::track_moved_record().

{
  ASSERT_ND(old_address);
  // We use moved bit only for volatile border pages
  MasstreeBorderPage* page = reinterpret_cast<MasstreeBorderPage*>(to_page(old_address));
  ASSERT_ND(page->is_border());
  ASSERT_ND(page->is_moved() || old_address->is_next_layer());
  return page->track_moved_record(engine_, old_address, write_set);
}
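
A hedged sketch of the calling context: when commit-time processing finds that a write set's record has been physically moved (its page split) or turned into a next-layer pointer, it presumably re-resolves the record through this hook via MasstreeStorage::track_moved_record(). The surrounding variables are illustrative:

// old_address: the xct::RwLockableXctId the write set currently points at.
// write_set:   the xct::WriteXctAccess to be re-pointed at the new location.
foedus::xct::TrackMovedRecordResult tracked =
    pimpl->track_moved_record(old_address, write_set);
// The result presumably carries the record's new physical address, or
// indicates that tracking failed and the transaction must give up.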

ErrorCode foedus::storage::masstree::MasstreeStoragePimpl::upsert_general (thread::Thread *context, const RecordLocation &location, const void *be_key, KeyLength key_length, const void *payload, PayloadLength payload_count)

implementation of upsert_record family.

use with reserve_record()

Definition at line 711 of file masstree_storage_pimpl.cpp.

References ASSERT_ND, foedus::storage::masstree::MasstreeCommonLogType::calculate_log_length(), CHECK_ERROR_CODE, check_next_layer_bit(), get_id(), foedus::storage::masstree::MasstreeBorderPage::get_max_payload_length(), foedus::thread::Thread::get_numa_node(), foedus::storage::masstree::MasstreeBorderPage::get_payload_length(), foedus::thread::Thread::get_thread_log_buffer(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::RecordLocation::index_, foedus::xct::XctId::is_deleted(), foedus::storage::masstree::RecordLocation::observed_, foedus::storage::masstree::RecordLocation::page_, foedus::storage::masstree::MasstreeInsertLogType::populate(), foedus::storage::masstree::MasstreeUpdateLogType::populate(), foedus::storage::masstree::MasstreeOverwriteLogType::populate(), register_record_write_log(), foedus::log::ThreadLogBuffer::reserve_new_log(), and foedus::storage::PageHeader::stat_last_updater_node_.

Referenced by foedus::storage::masstree::MasstreeStorage::upsert_record(), and foedus::storage::masstree::MasstreeStorage::upsert_record_normalized().

{
  // Upsert is a combination of what insert does and what delete does.
  // If there isn't an existing physical record, it's exactly the same as insert.
  // If there is, it's _basically_ a delete followed by an insert.
  // There are a few complications, depending on the status of the record.
  CHECK_ERROR_CODE(check_next_layer_bit(location.observed_));

  // as of reserve_record() it was spacious enough, and this length is
  // either immutable or only increases, so this must hold.
  MasstreeBorderPage* border = location.page_;
  ASSERT_ND(border->get_max_payload_length(location.index_) >= payload_count);

  MasstreeCommonLogType* common_log;
  if (location.observed_.is_deleted()) {
    // If it's a deleted record, this turns into a plain insert.
    uint16_t log_length = MasstreeInsertLogType::calculate_log_length(key_length, payload_count);
    MasstreeInsertLogType* log_entry = reinterpret_cast<MasstreeInsertLogType*>(
      context->get_thread_log_buffer().reserve_new_log(log_length));
    log_entry->populate(
      get_id(),
      be_key,
      key_length,
      payload,
      payload_count);
    common_log = log_entry;
  } else if (payload_count == border->get_payload_length(location.index_)) {
    // If it's not changing the payload size of the existing record, we can convert it
    // to an overwrite, which is more efficient
    uint16_t log_length = MasstreeOverwriteLogType::calculate_log_length(key_length, payload_count);
    MasstreeOverwriteLogType* log_entry = reinterpret_cast<MasstreeOverwriteLogType*>(
      context->get_thread_log_buffer().reserve_new_log(log_length));
    log_entry->populate(
      get_id(),
      be_key,
      key_length,
      payload,
      0,
      payload_count);
    common_log = log_entry;
  } else {
    // If not, this is an update operation.
    uint16_t log_length = MasstreeUpdateLogType::calculate_log_length(key_length, payload_count);
    MasstreeUpdateLogType* log_entry = reinterpret_cast<MasstreeUpdateLogType*>(
      context->get_thread_log_buffer().reserve_new_log(log_length));
    log_entry->populate(
      get_id(),
      be_key,
      key_length,
      payload,
      payload_count);
    common_log = log_entry;
  }
  border->header().stat_last_updater_node_ = context->get_numa_node();
  return register_record_write_log(context, location, common_log);
}
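
As with insert, the documented pattern is reserve-then-upsert ("use with reserve_record()"). A hedged sketch of the pairing, mirroring what MasstreeStorage::upsert_record() presumably does (locals illustrative):

// Fragment inside a function returning foedus::ErrorCode, with `pimpl`,
// `context`, `be_key`, `key_length`, `payload`, and `payload_count` in scope.
foedus::storage::masstree::RecordLocation location;
// Physical step: make sure a slot with enough payload space exists.
CHECK_ERROR_CODE(pimpl->reserve_record(
    context, be_key, key_length, payload_count, payload_count, &location));
// Logical step: emits an insert, overwrite, or update log depending on the
// observed record status, as shown above.
CHECK_ERROR_CODE(pimpl->upsert_general(
    context, location, be_key, key_length, payload, payload_count));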

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::verify_single_thread (thread::Thread *context)

These are defined in masstree_storage_verify.cpp.

Definition at line 39 of file masstree_storage_verify.cpp.

References CHECK_AND_ASSERT, CHECK_ERROR, get_first_root(), foedus::storage::masstree::MasstreePage::is_border(), foedus::kRetOk, verify_single_thread_layer(), and WRAP_ERROR_CODE.

Referenced by foedus::storage::masstree::MasstreeStorage::verify_single_thread().

{
  MasstreeIntermediatePage* layer_root;
  WRAP_ERROR_CODE(get_first_root(context, false, &layer_root));
  CHECK_AND_ASSERT(!layer_root->is_border());  // root of first layer is always intermediate page
  CHECK_ERROR(verify_single_thread_layer(context, 0, layer_root));
  return kRetOk;
}
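
A hedged sketch of invoking the verifier: the name implies it must run without concurrent writers, and the ErrorStack return means callers wrap it with CHECK_ERROR() rather than CHECK_ERROR_CODE():

// Fragment inside a function returning foedus::ErrorStack, with `pimpl` in
// scope and `context` a valid thread::Thread on a quiescent storage.
CHECK_ERROR(pimpl->verify_single_thread(context));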

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::verify_single_thread_border (thread::Thread *context, KeySlice low_fence, HighFence high_fence, MasstreeBorderPage *page)

Definition at line 201 of file masstree_storage_verify.cpp.

References CHECK_AND_ASSERT, CHECK_ERROR, foedus::storage::masstree::MasstreeBorderPage::does_point_to_layer(), follow_page(), foedus::xct::XctId::get_epoch(), foedus::storage::masstree::MasstreePage::get_foster_fence(), foedus::storage::masstree::MasstreePage::get_foster_major(), foedus::storage::masstree::MasstreePage::get_foster_minor(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreePage::get_layer(), foedus::storage::masstree::MasstreeBorderPage::get_next_layer(), foedus::storage::masstree::MasstreeBorderPage::get_owner_id(), foedus::storage::masstree::MasstreeBorderPage::get_remainder_length(), foedus::storage::masstree::MasstreeBorderPage::get_slice(), foedus::xct::XctId::is_being_written(), foedus::storage::DualPagePointer::is_both_null(), foedus::storage::masstree::MasstreeBorderPage::is_consecutive_inserts(), foedus::xct::McsRwLock::is_locked(), foedus::storage::masstree::MasstreePage::is_moved(), foedus::xct::XctId::is_next_layer(), foedus::Epoch::is_valid(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::storage::kMasstreeBorderPageType, foedus::kRetOk, foedus::storage::masstree::kSupremumSlice, foedus::xct::RwLockableXctId::lock_, foedus::thread::Thread::resolve_cast(), foedus::storage::masstree::HighFence::supremum_, foedus::storage::masstree::verify_page_basic(), verify_single_thread_layer(), foedus::storage::masstree::MasstreeBorderPage::verify_slot_lengthes(), WRAP_ERROR_CODE, and foedus::xct::RwLockableXctId::xct_id_.

Referenced by verify_single_thread_intermediate(), and verify_single_thread_layer().

{
  CHECK_ERROR(verify_page_basic(context, page, kMasstreeBorderPageType, low_fence, high_fence));
  // check consecutive_inserts_. this should be consistent whether it's moved or not.
  bool sorted = true;
  for (SlotIndex i = 1; i < page->get_key_count(); ++i) {
    KeySlice prev = page->get_slice(i - 1);
    KeySlice slice = page->get_slice(i);
    KeyLength prev_len = page->get_remainder_length(i - 1);
    KeyLength len = page->get_remainder_length(i);
    if (prev > slice || (prev == slice && prev_len > len)) {
      sorted = false;
      break;
    }
  }
  CHECK_AND_ASSERT(page->is_consecutive_inserts() == sorted);

  if (page->is_moved()) {
    auto* minor = context->resolve_cast<MasstreeBorderPage>(page->get_foster_minor());
    auto* major = context->resolve_cast<MasstreeBorderPage>(page->get_foster_major());
    // If major is empty-range, the minor might be supremum
    bool is_minor_supremum = high_fence.supremum_ && page->get_foster_fence() == kSupremumSlice;
    CHECK_ERROR(verify_single_thread_border(
      context,
      low_fence,
      HighFence(page->get_foster_fence(), is_minor_supremum),
      minor));
    CHECK_ERROR(verify_single_thread_border(
      context,
      page->get_foster_fence(),
      high_fence,
      major));
    return kRetOk;
  }

  CHECK_AND_ASSERT(!page->is_moved());
  CHECK_AND_ASSERT(page->get_key_count() <= kBorderPageMaxSlots);
  for (SlotIndex i = 0; i < page->get_key_count(); ++i) {
    CHECK_AND_ASSERT(!page->get_owner_id(i)->lock_.is_locked());
    CHECK_AND_ASSERT(!page->get_owner_id(i)->xct_id_.is_being_written());
    CHECK_AND_ASSERT(page->get_owner_id(i)->xct_id_.get_epoch().is_valid());
    CHECK_AND_ASSERT(page->verify_slot_lengthes(i));
    KeySlice slice = page->get_slice(i);
    CHECK_AND_ASSERT(slice >= low_fence);
    CHECK_AND_ASSERT(slice < high_fence.slice_ || page->is_high_fence_supremum());
    if (page->does_point_to_layer(i)) {
      CHECK_AND_ASSERT(page->get_owner_id(i)->xct_id_.is_next_layer());
      CHECK_AND_ASSERT(!page->get_next_layer(i)->is_both_null());
      MasstreePage* next;
      // TASK(Hideaki) probably two versions: always follow volatile vs snapshot
      // so far check volatile only
      WRAP_ERROR_CODE(follow_page(context, true, page->get_next_layer(i), &next));
      CHECK_ERROR(verify_single_thread_layer(context, page->get_layer() + 1, next));
    } else {
      CHECK_AND_ASSERT(!page->get_owner_id(i)->xct_id_.is_next_layer());
    }
  }

  return kRetOk;
}
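
The sortedness check above encodes the border-page record order: records are sorted by slice first, with ties ordered by remainder length. Factored out as a standalone predicate for clarity (a sketch; this helper is not part of the library):

inline bool slot_in_order(
    foedus::storage::masstree::KeySlice prev_slice,
    foedus::storage::masstree::KeyLength prev_len,
    foedus::storage::masstree::KeySlice slice,
    foedus::storage::masstree::KeyLength len) {
  // Mirrors the loop in verify_single_thread_border(): a pair is out of order
  // iff prev_slice > slice, or the slices tie with a decreasing remainder length.
  return prev_slice < slice || (prev_slice == slice && prev_len <= len);
}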

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::verify_single_thread_intermediate (thread::Thread *context, KeySlice low_fence, HighFence high_fence, MasstreeIntermediatePage *page)

Definition at line 103 of file masstree_storage_verify.cpp.

References CHECK_AND_ASSERT, CHECK_ERROR, follow_page(), foedus::storage::masstree::MasstreePage::get_btree_level(), foedus::storage::masstree::MasstreePage::get_foster_fence(), foedus::storage::masstree::MasstreePage::get_foster_major(), foedus::storage::masstree::MasstreePage::get_foster_minor(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreePage::get_layer(), foedus::storage::masstree::MasstreeIntermediatePage::get_minipage(), foedus::storage::masstree::MasstreeIntermediatePage::get_separator(), foedus::storage::DualPagePointer::is_both_null(), foedus::storage::masstree::MasstreePage::is_empty_range(), foedus::storage::masstree::MasstreePage::is_moved(), foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::key_count_, foedus::storage::kMasstreeIntermediatePageType, foedus::storage::masstree::kMaxIntermediateMiniSeparators, foedus::storage::masstree::kMaxIntermediateSeparators, foedus::kRetOk, foedus::storage::masstree::kSupremumSlice, foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::pointers_, foedus::thread::Thread::resolve_cast(), foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::separators_, foedus::storage::masstree::HighFence::slice_, foedus::storage::masstree::HighFence::supremum_, foedus::storage::masstree::verify_page_basic(), verify_single_thread_border(), and WRAP_ERROR_CODE.

Referenced by verify_single_thread_layer().

{
  CHECK_ERROR(
    verify_page_basic(context, page, kMasstreeIntermediatePageType, low_fence, high_fence));

  if (page->is_empty_range()) {
    CHECK_AND_ASSERT(page->get_key_count() == 0);
    CHECK_AND_ASSERT(!page->is_moved());
    return kRetOk;
  }

  if (page->is_moved()) {
    auto* minor = context->resolve_cast<MasstreeIntermediatePage>(page->get_foster_minor());
    auto* major = context->resolve_cast<MasstreeIntermediatePage>(page->get_foster_major());
    // If major is empty-range, the minor might be supremum
    bool is_minor_supremum = high_fence.supremum_ && page->get_foster_fence() == kSupremumSlice;
    CHECK_ERROR(verify_single_thread_intermediate(
      context,
      low_fence,
      HighFence(page->get_foster_fence(), is_minor_supremum),
      minor));
    CHECK_ERROR(verify_single_thread_intermediate(
      context,
      page->get_foster_fence(),
      high_fence,
      major));
    return kRetOk;
  }

  uint8_t key_count = page->get_key_count();
  CHECK_AND_ASSERT(key_count <= kMaxIntermediateSeparators);
  KeySlice previous_low = low_fence;
  for (uint8_t i = 0; i <= key_count; ++i) {
    HighFence mini_high(0, false);
    if (i < key_count) {
      mini_high.slice_ = page->get_separator(i);
      mini_high.supremum_ = false;
      CHECK_AND_ASSERT(high_fence.supremum_ || mini_high.slice_ < high_fence.slice_);
      if (i == 0) {
        CHECK_AND_ASSERT(mini_high.slice_ > low_fence);
      } else {
        CHECK_AND_ASSERT(mini_high.slice_ > page->get_separator(i - 1));
      }
    } else {
      mini_high = high_fence;
    }

    MasstreeIntermediatePage::MiniPage& minipage = page->get_minipage(i);
    uint8_t mini_count = minipage.key_count_;
    CHECK_AND_ASSERT(mini_count <= kMaxIntermediateMiniSeparators);
    KeySlice page_low = previous_low;
    for (uint8_t j = 0; j <= mini_count; ++j) {
      HighFence page_high(0, false);
      if (j < mini_count) {
        page_high.slice_ = minipage.separators_[j];
        page_high.supremum_ = false;
        CHECK_AND_ASSERT(page_high.slice_ < mini_high.slice_ || mini_high.supremum_);
        if (j == 0) {
          CHECK_AND_ASSERT(page_high.slice_ > previous_low);
        } else {
          CHECK_AND_ASSERT(page_high.slice_ > minipage.separators_[j - 1]);
        }
      } else {
        page_high = mini_high;
      }
      CHECK_AND_ASSERT(!minipage.pointers_[j].is_both_null());
      MasstreePage* next;
      // TASK(Hideaki) probably two versions: always follow volatile vs snapshot
      // so far check volatile only
      WRAP_ERROR_CODE(follow_page(context, true, &minipage.pointers_[j], &next));
      CHECK_AND_ASSERT(next->get_layer() == page->get_layer());
      CHECK_AND_ASSERT(next->get_btree_level() + 1U == page->get_btree_level());
      if (next->is_border()) {
        CHECK_ERROR(verify_single_thread_border(
          context,
          page_low,
          page_high,
          reinterpret_cast<MasstreeBorderPage*>(next)));
      } else {
        CHECK_ERROR(verify_single_thread_intermediate(
          context,
          page_low,
          page_high,
          reinterpret_cast<MasstreeIntermediatePage*>(next)));
      }

      page_low = page_high.slice_;
    }

    previous_low = mini_high.slice_;
  }

  return kRetOk;
}

ErrorStack foedus::storage::masstree::MasstreeStoragePimpl::verify_single_thread_layer (thread::Thread *context, uint8_t layer, MasstreePage *layer_root)

Definition at line 47 of file masstree_storage_verify.cpp.

References CHECK_AND_ASSERT, CHECK_ERROR, foedus::storage::masstree::MasstreePage::get_layer(), foedus::storage::masstree::MasstreePage::is_border(), foedus::storage::masstree::kInfimumSlice, foedus::kRetOk, foedus::storage::masstree::kSupremumSlice, verify_single_thread_border(), and verify_single_thread_intermediate().

Referenced by verify_single_thread(), and verify_single_thread_border().

{
  CHECK_AND_ASSERT(layer_root->get_layer() == layer);
  HighFence high_fence(kSupremumSlice, true);
  if (layer_root->is_border()) {
    CHECK_ERROR(verify_single_thread_border(
      context,
      kInfimumSlice,
      high_fence,
      reinterpret_cast<MasstreeBorderPage*>(layer_root)));
  } else {
    CHECK_ERROR(verify_single_thread_intermediate(
      context,
      kInfimumSlice,
      high_fence,
      reinterpret_cast<MasstreeIntermediatePage*>(layer_root)));
  }
  return kRetOk;
}


The documentation for this class was generated from the following files: