libfoedus-core
FOEDUS Core Library
foedus::storage::masstree::MasstreeBorderPage Class Reference (final)

Represents one border page in Masstree Storage. More...

Detailed Description

Represents one border page in Masstree Storage.

Slots
One border page has at most kBorderPageMaxSlots slots. Each slot is reserved for one physical record, which is never moved except during snapshotting and split/compact. A thread first installs a new record by atomically modifying page_version, then sets up the record with its deletion flag on. The transaction's apply() flips the delete flag, which might fail; if it does, the record is left as deleted until snapshotting or split/compact reclaims it.
Page Layout
Headers, including common part and border-specific part
Record Data part, which grows forward
Unused part
Slot part (32 bytes per record), which grows backward
Attention
Do NOT instantiate this object or derive from this class. A page is always reinterpret-ed from a pooled memory region. No meaningful RTTI.
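For illustration, here is a minimal sketch of how code obtains such a page; the pool-lookup helper and variable names are hypothetical, not part of this API:

  // A border page is never constructed. It is reinterpret-ed from a pooled,
  // page-sized memory region (resolve_offset() below is a hypothetical helper).
  void* pooled_region = resolve_offset(pool, offset);
  auto* page =
    reinterpret_cast<foedus::storage::masstree::MasstreeBorderPage*>(pooled_region);
  // Layout within the page, per the list above:
  // [ headers | record data (grows forward) | unused | slots (grow backward, 32 bytes each) ]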

Definition at line 444 of file masstree_page_impl.hpp.

#include <masstree_page_impl.hpp>


Classes

struct  FindKeyForReserveResult
 return value for find_key_for_reserve(). More...
 
struct  Slot
 Fixed-size slot for each record, placed at the end of the data region. More...
 
struct  SlotLengthPart
 A piece of the Slot object that must be read/written in one shot: a reader sees either all-old or all-new values, never half-written ones. More...
 
union  SlotLengthUnion
 

Public Types

enum  MatchType { kNotFound = 0, kExactMatchLocalRecord = 1, kExactMatchLayerPointer = 2, kConflictingLocalRecord = 3 }
 Used in FindKeyForReserveResult. More...
 

Public Member Functions

 MasstreeBorderPage ()=delete
 
 MasstreeBorderPage (const MasstreeBorderPage &other)=delete
 
MasstreeBorderPage & operator= (const MasstreeBorderPage &other)=delete
 
void initialize_volatile_page (StorageId storage_id, VolatilePagePointer page_id, uint8_t layer, KeySlice low_fence, KeySlice high_fence)
 
void initialize_snapshot_page (StorageId storage_id, SnapshotPagePointer page_id, uint8_t layer, KeySlice low_fence, KeySlice high_fence)
 
bool is_consecutive_inserts () const
 Whether this page is receiving only sequential inserts. More...
 
DataOffset get_next_offset () const
 
void increase_next_offset (DataOffset length)
 
const Slot * get_slot (SlotIndex index) const __attribute__((always_inline))
 
Slot * get_slot (SlotIndex index) __attribute__((always_inline))
 
Slot * get_new_slot (SlotIndex index) __attribute__((always_inline))
 
SlotIndex to_slot_index (const Slot *slot) const __attribute__((always_inline))
 
DataOffset available_space () const
 Returns usable data space in bytes. More...
 
SlotIndex find_key (KeySlice slice, const void *suffix, KeyLength remainder) const __attribute__((always_inline))
 Navigates a searching key-slice to one of the records in this page. More...
 
SlotIndex find_key_normalized (SlotIndex from_index, SlotIndex to_index, KeySlice slice) const __attribute__((always_inline))
 Specialized version for 8 byte native integer search. More...
 
FindKeyForReserveResult find_key_for_reserve (SlotIndex from_index, SlotIndex to_index, KeySlice slice, const void *suffix, KeyLength remainder) const __attribute__((always_inline))
 This is for the case where we are looking for either the matching slot or the slot we will modify. More...
 
FindKeyForReserveResult find_key_for_snapshot (KeySlice slice, const void *suffix, KeyLength remainder) const __attribute__((always_inline))
 This one is used for snapshot pages. More...
 
char * get_record (SlotIndex index) __attribute__((always_inline))
 
const char * get_record (SlotIndex index) const __attribute__((always_inline))
 
char * get_record_payload (SlotIndex index) __attribute__((always_inline))
 
const char * get_record_payload (SlotIndex index) const __attribute__((always_inline))
 
DualPagePointer * get_next_layer (SlotIndex index) __attribute__((always_inline))
 
const DualPagePointer * get_next_layer (SlotIndex index) const __attribute__((always_inline))
 
char * get_record_from_offset (DataOffset record_offset) __attribute__((always_inline))
 Offset versions. More...
 
const char * get_record_from_offset (DataOffset record_offset) const __attribute__((always_inline))
 
const char * get_record_payload_from_offsets (DataOffset record_offset, KeyLength remainder_length) const __attribute__((always_inline))
 
char * get_record_payload_from_offsets (DataOffset record_offset, KeyLength remainder_length) __attribute__((always_inline))
 
DualPagePointer * get_next_layer_from_offsets (DataOffset record_offset, KeyLength remainder_length) __attribute__((always_inline))
 
const DualPagePointer * get_next_layer_from_offsets (DataOffset record_offset, KeyLength remainder_length) const __attribute__((always_inline))
 
bool does_point_to_layer (SlotIndex index) const __attribute__((always_inline))
 
KeySlice get_slice (SlotIndex index) const __attribute__((always_inline))
 
void set_slice (SlotIndex index, KeySlice slice) __attribute__((always_inline))
 
DataOffset get_offset_in_bytes (SlotIndex index) const __attribute__((always_inline))
 
xct::RwLockableXctId * get_owner_id (SlotIndex index) __attribute__((always_inline))
 
const xct::RwLockableXctId * get_owner_id (SlotIndex index) const __attribute__((always_inline))
 
KeyLength get_remainder_length (SlotIndex index) const __attribute__((always_inline))
 
KeyLength get_suffix_length (SlotIndex index) const __attribute__((always_inline))
 
KeyLength get_suffix_length_aligned (SlotIndex index) const __attribute__((always_inline))
 
PayloadLength get_payload_length (SlotIndex index) const __attribute__((always_inline))
 
PayloadLength get_max_payload_length (SlotIndex index) const __attribute__((always_inline))
 
bool can_accomodate (SlotIndex new_index, KeyLength remainder_length, PayloadLength payload_count) const __attribute__((always_inline))
 
bool can_accomodate_snapshot (KeyLength remainder_length, PayloadLength payload_count) const __attribute__((always_inline))
 Slightly different from can_accomodate() as follows: More...
 
bool compare_key (SlotIndex index, const void *be_key, KeyLength key_length) const __attribute__((always_inline))
 actually this method should be renamed to equal_key... More...
 
bool equal_key (SlotIndex index, const void *be_key, KeyLength key_length) const __attribute__((always_inline))
 let's gradually migrate from compare_key() to this. More...
 
int ltgt_key (SlotIndex index, const char *be_key, KeyLength key_length) const __attribute__((always_inline))
 compare the key. More...
 
int ltgt_key (SlotIndex index, KeySlice slice, const char *suffix, KeyLength remainder) const __attribute__((always_inline))
 Overload to receive slice+suffix. More...
 
bool will_conflict (SlotIndex index, const char *be_key, KeyLength key_length) const __attribute__((always_inline))
 Returns whether inserting the key will cause creation of a new next layer. More...
 
bool will_conflict (SlotIndex index, KeySlice slice, KeyLength remainder) const __attribute__((always_inline))
 Overload to receive slice. More...
 
bool will_contain_next_layer (SlotIndex index, const char *be_key, KeyLength key_length) const __attribute__((always_inline))
 Returns whether the record is a next-layer pointer that would contain the key. More...
 
bool will_contain_next_layer (SlotIndex index, KeySlice slice, KeyLength remainder) const __attribute__((always_inline))
 
void reserve_record_space (SlotIndex index, xct::XctId initial_owner_id, KeySlice slice, const void *suffix, KeyLength remainder_length, PayloadLength payload_count)
 Installs a new physical record that doesn't exist logically (delete bit on). More...
 
void reserve_initially_next_layer (SlotIndex index, xct::XctId initial_owner_id, KeySlice slice, const DualPagePointer &pointer)
 For creating a record that is initially a next-layer. More...
 
void append_next_layer_snapshot (xct::XctId initial_owner_id, KeySlice slice, SnapshotPagePointer pointer)
 Installs a next layer pointer. More...
 
void replace_next_layer_snapshot (SnapshotPagePointer pointer)
 Same as above, except this is used to transform an existing record at the end into a next-layer pointer. More...
 
void initialize_layer_root (const MasstreeBorderPage *copy_from, SlotIndex copy_index)
 Copy the initial record that will be the only record for a new root page. More...
 
void release_pages_recursive (const memory::GlobalVolatilePageResolver &page_resolver, memory::PageReleaseBatch *batch)
 
void prefetch () const __attribute__((always_inline))
 prefetch up to the 256th byte. More...
 
void prefetch_additional_if_needed (SlotIndex key_count) const __attribute__((always_inline))
 
bool try_expand_record_in_page_physical (PayloadLength payload_count, SlotIndex record_index)
 A physical-only method to expand a record within this page without any logical change. More...
 
void initialize_as_layer_root_physical (VolatilePagePointer page_id, MasstreeBorderPage *parent, SlotIndex parent_index)
 A physical-only method to initialize this page as a volatile page of a layer-root pointed from the given parent record. More...
 
xct::TrackMovedRecordResult track_moved_record (Engine *engine, xct::RwLockableXctId *old_address, xct::WriteXctAccess *write_set)
 
xct::TrackMovedRecordResult track_moved_record_next_layer (Engine *engine, xct::RwLockableXctId *old_address)
 This one further tracks it to the next layer. More...
 
bool verify_slot_lengthes (SlotIndex index) const
 
void assert_entries () __attribute__((always_inline))
 
void assert_entries_impl () const
 defined in masstree_page_debug.cpp. More...
 
- Public Member Functions inherited from foedus::storage::masstree::MasstreePage
 MasstreePage ()=delete
 
 MasstreePage (const MasstreePage &other)=delete
 
MasstreePage & operator= (const MasstreePage &other)=delete
 
PageHeader & header ()
 
const PageHeader & header () const
 
VolatilePagePointer get_volatile_page_id () const
 
SnapshotPagePointer get_snapshot_page_id () const
 
bool is_border () const __attribute__((always_inline))
 
bool is_empty_range () const __attribute__((always_inline))
 An empty-range page, either intermediate or border, never has any entries. More...
 
KeySlice get_low_fence () const __attribute__((always_inline))
 
KeySlice get_high_fence () const __attribute__((always_inline))
 
bool is_high_fence_supremum () const __attribute__((always_inline))
 
bool is_low_fence_infimum () const __attribute__((always_inline))
 
bool is_layer_root () const __attribute__((always_inline))
 
KeySlice get_foster_fence () const __attribute__((always_inline))
 
bool is_foster_minor_null () const __attribute__((always_inline))
 
bool is_foster_major_null () const __attribute__((always_inline))
 
VolatilePagePointer get_foster_minor () const __attribute__((always_inline))
 
VolatilePagePointer get_foster_major () const __attribute__((always_inline))
 
void set_foster_twin (VolatilePagePointer minor, VolatilePagePointer major)
 
void install_foster_twin (VolatilePagePointer minor, VolatilePagePointer major, KeySlice foster_fence)
 
bool within_fences (KeySlice slice) const __attribute__((always_inline))
 
bool within_foster_minor (KeySlice slice) const __attribute__((always_inline))
 
bool within_foster_major (KeySlice slice) const __attribute__((always_inline))
 
bool has_foster_child () const __attribute__((always_inline))
 
uint8_t get_layer () const __attribute__((always_inline))
 Layer-0 stores the first 8 byte slice, Layer-1 next 8 byte... More...
 
uint8_t get_btree_level () const __attribute__((always_inline))
 used only in masstree. More...
 
SlotIndex get_key_count () const __attribute__((always_inline))
 physical key count (those keys might be deleted) in this page. More...
 
void set_key_count (SlotIndex count) __attribute__((always_inline))
 
void increment_key_count () __attribute__((always_inline))
 
void prefetch_general () const __attribute__((always_inline))
 prefetch up to keys/separators, whether this page is border or interior. More...
 
const PageVersion & get_version () const __attribute__((always_inline))
 
PageVersion & get_version () __attribute__((always_inline))
 
const PageVersion * get_version_address () const __attribute__((always_inline))
 
PageVersion * get_version_address () __attribute__((always_inline))
 
xct::McsWwLock * get_lock_address () __attribute__((always_inline))
 
bool is_locked () const __attribute__((always_inline))
 
bool is_moved () const __attribute__((always_inline))
 
bool is_retired () const __attribute__((always_inline))
 
void set_moved () __attribute__((always_inline))
 
void set_retired () __attribute__((always_inline))
 
void release_pages_recursive_common (const memory::GlobalVolatilePageResolver &page_resolver, memory::PageReleaseBatch *batch)
 
void set_foster_major_offset_unsafe (memory::PagePoolOffset offset) __attribute__((always_inline))
 As the name suggests, this should be used only by composer. More...
 
void set_high_fence_unsafe (KeySlice high_fence) __attribute__((always_inline))
 As the name suggests, this should be used only by composer. More...
 

Static Public Member Functions

static DataOffset to_record_length (KeyLength remainder_length, PayloadLength payload_length)
 returns minimal physical_record_length_ for the given remainder/payload length. More...
 
static DataOffset required_data_space (KeyLength remainder_length, PayloadLength payload_length)
 returns the byte size of required contiguous space in data_ to insert a new record of the given remainder/payload length. More...
 

Friends

struct SplitBorder
 
std::ostream & operator<< (std::ostream &o, const MasstreeBorderPage &v)
 defined in masstree_page_debug.cpp. More...
 

Additional Inherited Members

- Protected Member Functions inherited from foedus::storage::masstree::MasstreePage
void initialize_volatile_common (StorageId storage_id, VolatilePagePointer page_id, PageType page_type, uint8_t layer, uint8_t level, KeySlice low_fence, KeySlice high_fence)
 
void initialize_snapshot_common (StorageId storage_id, SnapshotPagePointer page_id, PageType page_type, uint8_t layer, uint8_t level, KeySlice low_fence, KeySlice high_fence)
 
- Protected Attributes inherited from foedus::storage::masstree::MasstreePage
PageHeader header_
 
KeySlice low_fence_
 Inclusive low fence of this page. More...
 
KeySlice high_fence_
 Exclusive high fence of this page. More...
 
KeySlice foster_fence_
 Inclusive low_fence of foster child. More...
 
VolatilePagePointer foster_twin_ [2]
 Points to foster children, or tentative child pages. More...
 

Constructor & Destructor Documentation

foedus::storage::masstree::MasstreeBorderPage::MasstreeBorderPage ( )
delete
foedus::storage::masstree::MasstreeBorderPage::MasstreeBorderPage ( const MasstreeBorderPage & other)
delete

Member Function Documentation

void foedus::storage::masstree::MasstreeBorderPage::append_next_layer_snapshot ( xct::XctId  initial_owner_id,
KeySlice  slice,
SnapshotPagePointer  pointer 
)
inline

Installs a next layer pointer.

This is used only from snapshot composer, so no race.

Definition at line 1417 of file masstree_page_impl.hpp.

References ASSERT_ND, can_accomodate(), foedus::storage::VolatilePagePointer::clear(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthUnion::components, foedus::storage::masstree::MasstreePage::get_key_count(), get_new_slot(), get_next_layer_from_offsets(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::kInitiallyNextLayer, foedus::storage::masstree::MasstreeBorderPage::Slot::lengthes_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::offset_, foedus::storage::masstree::MasstreeBorderPage::Slot::original_offset_, foedus::storage::masstree::MasstreeBorderPage::Slot::original_physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::payload_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::Slot::remainder_length_, foedus::storage::masstree::MasstreePage::set_key_count(), foedus::xct::XctId::set_next_layer(), set_slice(), foedus::storage::DualPagePointer::snapshot_pointer_, foedus::storage::masstree::MasstreeBorderPage::Slot::tid_, to_record_length(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::unused_, foedus::storage::DualPagePointer::volatile_pointer_, and foedus::xct::RwLockableXctId::xct_id_.

{
  ASSERT_ND(header().snapshot_);  // this is used only from snapshot composer
  const SlotIndex index = get_key_count();
  const KeyLength kRemainder = kInitiallyNextLayer;
  ASSERT_ND(can_accomodate(index, kRemainder, sizeof(DualPagePointer)));
  const DataOffset record_size = to_record_length(kRemainder, sizeof(DualPagePointer));
  const DataOffset offset = next_offset_;
  set_slice(index, slice);
  // This is in snapshot page, so no worry on race.
  Slot* slot = get_new_slot(index);
  slot->lengthes_.components.offset_ = offset;
  slot->lengthes_.components.unused_ = 0;
  slot->lengthes_.components.physical_record_length_ = record_size;
  slot->lengthes_.components.payload_length_ = sizeof(DualPagePointer);
  slot->original_physical_record_length_ = record_size;
  slot->remainder_length_ = kRemainder;
  slot->original_offset_ = offset;
  next_offset_ += record_size;

  slot->tid_.xct_id_ = initial_owner_id;
  slot->tid_.xct_id_.set_next_layer();
  DualPagePointer* dual_pointer = get_next_layer_from_offsets(offset, kRemainder);
  dual_pointer->volatile_pointer_.clear();
  dual_pointer->snapshot_pointer_ = pointer;
  set_key_count(index + 1U);
}

void foedus::storage::masstree::MasstreeBorderPage::assert_entries ( )
inline

Definition at line 988 of file masstree_page_impl.hpp.

References assert_entries_impl().

Referenced by foedus::storage::masstree::ReserveRecords::run().

{
#ifndef NDEBUG
  assert_entries_impl();
#endif  // NDEBUG
}

void foedus::storage::masstree::MasstreeBorderPage::assert_entries_impl ( ) const

defined in masstree_page_debug.cpp.

Definition at line 114 of file masstree_page_debug.cpp.

References foedus::assorted::align8(), ASSERT_ND, does_point_to_layer(), foedus::storage::masstree::MasstreePage::get_key_count(), get_payload_length(), get_record(), get_remainder_length(), get_slice(), get_suffix_length(), get_suffix_length_aligned(), foedus::storage::masstree::MasstreePage::header_, foedus::storage::masstree::MasstreePage::is_locked(), foedus::storage::masstree::kBorderPageMaxSlots, and foedus::storage::PageHeader::snapshot_.

Referenced by assert_entries().

{
  // the following logic holds only when this page is locked
  ASSERT_ND(is_locked());
  struct Sorter {
    explicit Sorter(const MasstreeBorderPage* target) : target_(target) {}
    bool operator() (SlotIndex left, SlotIndex right) {
      KeySlice left_slice = target_->get_slice(left);
      KeySlice right_slice = target_->get_slice(right);
      if (left_slice < right_slice) {
        return true;
      } else if (left_slice == right_slice) {
        return target_->get_remainder_length(left) < target_->get_remainder_length(right);
      } else {
        return false;
      }
    }
    const MasstreeBorderPage* target_;
  };
  SlotIndex key_count = get_key_count();
  SlotIndex order[kBorderPageMaxSlots];
  for (SlotIndex i = 0; i < key_count; ++i) {
    order[i] = i;
  }
  std::sort(order, order + key_count, Sorter(this));

  if (header_.snapshot_) {
    // in snapshot page, all entries should be fully sorted
    for (SlotIndex i = 0; i < key_count; ++i) {
      ASSERT_ND(order[i] == i);
    }
  }

  for (SlotIndex i = 1; i < key_count; ++i) {
    SlotIndex pre = order[i - 1];
    SlotIndex cur = order[i];
    const KeySlice pre_slice = get_slice(pre);
    const KeySlice cur_slice = get_slice(cur);
    ASSERT_ND(pre_slice <= cur_slice);
    if (pre_slice == cur_slice) {
      const KeyLength pre_klen = get_remainder_length(pre);
      const KeyLength cur_klen = get_remainder_length(cur);
      ASSERT_ND(pre_klen < cur_klen);
      ASSERT_ND(pre_klen <= sizeof(KeySlice));
    }
  }

  // also check the padding between key suffix and payload
  if (header_.snapshot_) {  // this can't be checked in volatile pages that are being changed
    for (SlotIndex i = 0; i < key_count; ++i) {
      if (does_point_to_layer(i)) {
        continue;
      }
      KeyLength suffix_length = get_suffix_length(i);
      KeyLength suffix_length_aligned = get_suffix_length_aligned(i);
      if (suffix_length > 0 && suffix_length != suffix_length_aligned) {
        ASSERT_ND(suffix_length_aligned > suffix_length);
        for (KeyLength pos = suffix_length; pos < suffix_length_aligned; ++pos) {
          // must be zero-padded
          ASSERT_ND(get_record(i)[pos] == 0);
        }
      }
      PayloadLength payload_length = get_payload_length(i);
      PayloadLength payload_length_aligned = assorted::align8(payload_length);
      if (payload_length > 0 && payload_length != payload_length_aligned) {
        ASSERT_ND(payload_length_aligned > payload_length);
        for (PayloadLength pos = payload_length; pos < payload_length_aligned; ++pos) {
          // must be zero-padded
          ASSERT_ND(get_record(i)[suffix_length_aligned + pos] == 0);
        }
      }
    }
  }
}

DataOffset foedus::storage::masstree::MasstreeBorderPage::available_space ( ) const
inline

Returns usable data space in bytes.

Definition at line 660 of file masstree_page_impl.hpp.

References ASSERT_ND, and foedus::storage::masstree::MasstreePage::get_key_count().

Referenced by can_accomodate(), can_accomodate_snapshot(), and try_expand_record_in_page_physical().

{
  uint16_t consumed = next_offset_ + get_key_count() * sizeof(Slot);
  ASSERT_ND(consumed <= sizeof(data_));
  if (consumed > sizeof(data_)) {  // just to be conservative on release build
    return 0;
  }
  return sizeof(data_) - consumed;
}

bool foedus::storage::masstree::MasstreeBorderPage::can_accomodate ( SlotIndex  new_index,
KeyLength  remainder_length,
PayloadLength  payload_count 
) const
inline

Definition at line 1570 of file masstree_page_impl.hpp.

References ASSERT_ND, available_space(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::storage::masstree::kInitiallyNextLayer, foedus::storage::masstree::kMaxKeyLength, foedus::storage::masstree::kMaxPayloadLength, and required_data_space().

Referenced by append_next_layer_snapshot(), reserve_initially_next_layer(), reserve_record_space(), and foedus::storage::masstree::ReserveRecords::run().

{
  if (new_index == 0) {
    ASSERT_ND(remainder_length <= kMaxKeyLength || remainder_length == kInitiallyNextLayer);
    ASSERT_ND(payload_count <= kMaxPayloadLength);
    return true;
  } else if (new_index >= kBorderPageMaxSlots) {
    return false;
  }
  const DataOffset required = required_data_space(remainder_length, payload_count);
  const DataOffset available = available_space();
  return required <= available;
}
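A usage sketch of the volatile-page reservation flow (not the full protocol in ReserveRecords::run(); page locking and key decomposition are elided, and the variables below are assumed to be prepared by the caller):

  SlotIndex new_index = page->get_key_count();
  if (page->can_accomodate(new_index, remainder, payload_count)) {
    // Installs a physical record with its delete flag on; a later
    // transactional apply() logically creates the record.
    page->reserve_record_space(
      new_index, initial_owner_id, slice, suffix, remainder, payload_count);
    page->increment_key_count();
  } else {
    // Not enough contiguous space: the page needs a split/compact first.
  }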

bool foedus::storage::masstree::MasstreeBorderPage::can_accomodate_snapshot ( KeyLength  remainder_length,
PayloadLength  payload_count 
) const
inline

Slightly different from can_accomodate() as follows:

  • No race, so no need to receive new_index. It just uses get_key_count().
  • Always guarantees that the payload can be later expanded to sizeof(DualPagePointer).

See also: replace_next_layer_snapshot(), MasstreeComposeContext::append_border()

Definition at line 1585 of file masstree_page_impl.hpp.

References ASSERT_ND, available_space(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreePage::header_, foedus::storage::masstree::kBorderPageMaxSlots, required_data_space(), and foedus::storage::PageHeader::snapshot_.

{
  ASSERT_ND(header_.snapshot_);
  SlotIndex new_index = get_key_count();
  if (new_index == 0) {
    return true;
  } else if (new_index >= kBorderPageMaxSlots) {
    return false;
  }
  PayloadLength adjusted_payload = std::max<PayloadLength>(payload_count, sizeof(DualPagePointer));
  const DataOffset required = required_data_space(remainder_length, adjusted_payload);
  const DataOffset available = available_space();
  return required <= available;
}
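A sketch of how a snapshot composer might combine this check with append_next_layer_snapshot() (simplified; see MasstreeComposeContext::append_border() for the real logic):

  // Snapshot pages are single-writer, so no new_index argument is needed.
  if (page->can_accomodate_snapshot(kInitiallyNextLayer, sizeof(DualPagePointer))) {
    page->append_next_layer_snapshot(initial_owner_id, slice, pointer);
  } else {
    // Close this page and start a new snapshot border page (the composer's job).
  }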

bool foedus::storage::masstree::MasstreeBorderPage::compare_key ( SlotIndex  index,
const void *  be_key,
KeyLength  key_length 
) const
inline

actually this method should be renamed to equal_key...

Definition at line 1600 of file masstree_page_impl.hpp.

References ASSERT_ND, foedus::storage::masstree::MasstreePage::get_layer(), get_record(), get_remainder_length(), get_slice(), foedus::storage::masstree::kBorderPageMaxSlots, and foedus::storage::masstree::slice_layer().

Referenced by equal_key().

{
  ASSERT_ND(index < kBorderPageMaxSlots);
  KeyLength remainder = key_length - get_layer() * sizeof(KeySlice);
  const KeyLength klen = get_remainder_length(index);
  if (remainder != klen) {
    return false;
  }
  KeySlice slice = slice_layer(be_key, key_length, get_layer());
  const KeySlice rec_slice = get_slice(index);
  if (slice != rec_slice) {
    return false;
  }
  if (remainder > sizeof(KeySlice)) {
    return std::memcmp(
      reinterpret_cast<const char*>(be_key) + (get_layer() + 1) * sizeof(KeySlice),
      get_record(index),
      remainder - sizeof(KeySlice)) == 0;
  } else {
    return true;
  }
}
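A worked example of the layer arithmetic above, for a hypothetical 20-byte key probed at layer 1:

  // key_length = 20, get_layer() = 1:
  //   remainder = 20 - 1 * sizeof(KeySlice) = 12 bytes
  //   slice     = slice_layer(be_key, 20, 1)   -> key bytes [8, 16)
  //   suffix    = be_key + (1 + 1) * 8         -> key bytes [16, 20), 4 bytes
  // compare_key() returns true only when the stored remainder is also 12,
  // the slices are equal, and the 4 suffix bytes memcmp-equal.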

bool foedus::storage::masstree::MasstreeBorderPage::does_point_to_layer ( SlotIndex  index) const
inline

Definition at line 796 of file masstree_page_impl.hpp.

References ASSERT_ND, get_slot(), foedus::xct::XctId::is_next_layer(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::storage::masstree::MasstreeBorderPage::Slot::tid_, and foedus::xct::RwLockableXctId::xct_id_.

Referenced by assert_entries_impl(), foedus::storage::masstree::MasstreeStoragePimpl::debugout_single_thread_recurse(), find_key(), find_key_for_reserve(), find_key_for_snapshot(), foedus::storage::masstree::MasstreeStoragePimpl::follow_layer(), foedus::storage::masstree::MasstreeStoragePimpl::hcc_reset_all_temperature_stat_recurse(), foedus::storage::masstree::MasstreeStoragePimpl::locate_record(), foedus::storage::masstree::MasstreeStoragePimpl::locate_record_normalized(), ltgt_key(), foedus::storage::masstree::operator<<(), foedus::storage::masstree::MasstreeStoragePimpl::peek_volatile_page_boundaries_next_layer(), foedus::storage::masstree::MasstreeStoragePimpl::prefetch_pages_normalized_recurse(), release_pages_recursive(), replace_next_layer_snapshot(), foedus::storage::masstree::GrowNonFirstLayerRoot::run(), foedus::storage::masstree::ReserveRecords::run(), track_moved_record(), track_moved_record_next_layer(), foedus::storage::masstree::MasstreeStoragePimpl::verify_single_thread_border(), and will_contain_next_layer().

{
  ASSERT_ND(index < kBorderPageMaxSlots);
  return get_slot(index)->tid_.xct_id_.is_next_layer();
}

bool foedus::storage::masstree::MasstreeBorderPage::equal_key ( SlotIndex  index,
const void *  be_key,
KeyLength  key_length 
) const
inline

let's gradually migrate from compare_key() to this.

Definition at line 1624 of file masstree_page_impl.hpp.

References compare_key().

{
  return compare_key(index, be_key, key_length);
}

SlotIndex foedus::storage::masstree::MasstreeBorderPage::find_key ( KeySlice  slice,
const void *  suffix,
KeyLength  remainder 
) const
inline

Navigates a searching key-slice to one of the records in this page.

Returns
index of key found in this page, or kBorderPageMaxSlots if not found.

Definition at line 1120 of file masstree_page_impl.hpp.

References ASSERT_ND, does_point_to_layer(), foedus::storage::masstree::MasstreePage::get_key_count(), get_record(), get_remainder_length(), get_slice(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::storage::masstree::kMaxKeyLength, LIKELY, and prefetch_additional_if_needed().

Referenced by foedus::storage::masstree::MasstreeStoragePimpl::locate_record(), track_moved_record(), and track_moved_record_next_layer().

{
  SlotIndex key_count = get_key_count();
  ASSERT_ND(remainder <= kMaxKeyLength);
  ASSERT_ND(key_count <= kBorderPageMaxSlots);
  prefetch_additional_if_needed(key_count);

  // one slice might be used for up to 10 keys, length 0 to 8 and pointer to next layer.
  if (remainder <= sizeof(KeySlice)) {
    // then we are looking for length 0-8 only.
    for (SlotIndex i = 0; i < key_count; ++i) {
      const KeySlice rec_slice = get_slice(i);
      if (LIKELY(slice != rec_slice)) {
        continue;
      }
      // no suffix nor next layer, so just compare length. if not match, continue
      const KeyLength klen = get_remainder_length(i);
      if (klen == remainder) {
        return i;
      }
    }
  } else {
    // then we are only looking for length>8.
    for (SlotIndex i = 0; i < key_count; ++i) {
      const KeySlice rec_slice = get_slice(i);
      if (LIKELY(slice != rec_slice)) {
        continue;
      }

      if (does_point_to_layer(i)) {
        // as it points to next layer, no need to check suffix. We are sure this is it.
        // so far we don't delete layers, so in this case the record is always valid.
        return i;
      }

      // now, our key is > 8 bytes and we found some local record.
      const KeyLength klen = get_remainder_length(i);
      if (klen <= sizeof(KeySlice)) {
        continue;  // length 0-8, surely not a match. skip.
      }

      if (klen == remainder) {
        // compare suffix.
        const char* record_suffix = get_record(i);
        if (std::memcmp(record_suffix, suffix, remainder - sizeof(KeySlice)) == 0) {
          return i;
        }
      }

      // The record has > 8 bytes key of the same slice. it must be the only such record in this
      // page because otherwise we must have created a next layer!
      break;  // no more check needed
    }
  }
  return kBorderPageMaxSlots;
}
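A usage sketch (assuming slice, suffix, and remainder were derived from the search key for this page's layer):

  SlotIndex index = page->find_key(slice, suffix, remainder);
  if (index == kBorderPageMaxSlots) {
    // Not in this page.
  } else if (page->does_point_to_layer(index)) {
    DualPagePointer* next = page->get_next_layer(index);  // descend one layer
  } else {
    char* payload = page->get_record_payload(index);      // local record
  }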

MasstreeBorderPage::FindKeyForReserveResult foedus::storage::masstree::MasstreeBorderPage::find_key_for_reserve ( SlotIndex  from_index,
SlotIndex  to_index,
KeySlice  slice,
const void *  suffix,
KeyLength  remainder 
) const
inline

This is for the case where we are looking for either the matching slot or the slot we will modify.

Definition at line 1198 of file masstree_page_impl.hpp.

References ASSERT_ND, does_point_to_layer(), get_record(), get_remainder_length(), get_slice(), foedus::storage::masstree::kBorderPageMaxSlots, kConflictingLocalRecord, kExactMatchLayerPointer, kExactMatchLocalRecord, foedus::storage::masstree::kMaxKeyLength, kNotFound, LIKELY, and prefetch_additional_if_needed().

Referenced by foedus::storage::masstree::MasstreeStoragePimpl::reserve_record(), and foedus::storage::masstree::ReserveRecords::run().

{
  ASSERT_ND(to_index <= kBorderPageMaxSlots);
  ASSERT_ND(from_index <= to_index);
  ASSERT_ND(remainder <= kMaxKeyLength);
  if (from_index == 0) {
    prefetch_additional_if_needed(to_index);
  }
  if (remainder <= sizeof(KeySlice)) {
    for (SlotIndex i = from_index; i < to_index; ++i) {
      const KeySlice rec_slice = get_slice(i);
      if (LIKELY(slice != rec_slice)) {
        continue;
      }
      const KeyLength klen = get_remainder_length(i);
      if (klen == remainder) {
        ASSERT_ND(!does_point_to_layer(i));
        return FindKeyForReserveResult(i, kExactMatchLocalRecord);
      }
    }
  } else {
    for (SlotIndex i = from_index; i < to_index; ++i) {
      const KeySlice rec_slice = get_slice(i);
      if (LIKELY(slice != rec_slice)) {
        continue;
      }

      const bool next_layer = does_point_to_layer(i);
      const KeyLength klen = get_remainder_length(i);

      if (next_layer) {
        ASSERT_ND(klen > sizeof(KeySlice));
        return FindKeyForReserveResult(i, kExactMatchLayerPointer);
      } else if (klen <= sizeof(KeySlice)) {
        continue;
      }

      // now, both the searching key and this key are more than 8 bytes.
      // whether the key really matches or not, this IS the slot we are looking for.
      // Either 1) the keys really match, or 2) we will make this record point to next layer.
      const char* record_suffix = get_record(i);
      if (klen == remainder &&
        std::memcmp(record_suffix, suffix, remainder - sizeof(KeySlice)) == 0) {
        // case 1)
        return FindKeyForReserveResult(i, kExactMatchLocalRecord);
      } else {
        // case 2)
        return FindKeyForReserveResult(i, kConflictingLocalRecord);
      }
    }
  }
  return FindKeyForReserveResult(kBorderPageMaxSlots, kNotFound);
}

MasstreeBorderPage::FindKeyForReserveResult foedus::storage::masstree::MasstreeBorderPage::find_key_for_snapshot ( KeySlice  slice,
const void *  suffix,
KeyLength  remainder 
) const
inline

This one is used for snapshot pages.

Because keys are fully sorted in snapshot pages, this returns the index of the first record whose key is strictly larger than the given key (key_count if no such record exists).

Precondition
header_.snapshot

Definition at line 1256 of file masstree_page_impl.hpp.

References ASSERT_ND, does_point_to_layer(), foedus::storage::masstree::MasstreePage::get_key_count(), get_record(), get_remainder_length(), get_slice(), foedus::storage::masstree::MasstreePage::header_, kConflictingLocalRecord, kExactMatchLayerPointer, kExactMatchLocalRecord, foedus::storage::masstree::kMaxKeyLength, kNotFound, and foedus::storage::PageHeader::snapshot_.

{
  ASSERT_ND(header_.snapshot_);
  ASSERT_ND(remainder <= kMaxKeyLength);
  // Remember, unlike other cases above, there is no worry on concurrency.
  const SlotIndex key_count = get_key_count();
  if (remainder <= sizeof(KeySlice)) {
    for (SlotIndex i = 0; i < key_count; ++i) {
      const KeySlice rec_slice = get_slice(i);
      if (rec_slice < slice) {
        continue;
      } else if (rec_slice > slice) {
        // as keys are sorted in snapshot page, "i" is the place the slice would be inserted.
        return FindKeyForReserveResult(i, kNotFound);
      }
      ASSERT_ND(rec_slice == slice);
      const KeyLength klen = get_remainder_length(i);
      if (klen == remainder) {
        ASSERT_ND(!does_point_to_layer(i));
        return FindKeyForReserveResult(i, kExactMatchLocalRecord);
      } else if (klen < remainder) {
        continue;  // same as "rec_slice < slice" case
      } else {
        return FindKeyForReserveResult(i, kNotFound);  // same as "rec_slice > slice" case
      }
    }
  } else {
    for (SlotIndex i = 0; i < key_count; ++i) {
      const KeySlice rec_slice = get_slice(i);
      if (rec_slice < slice) {
        continue;
      } else if (rec_slice > slice) {
        // as keys are sorted in snapshot page, "i" is the place the slice would be inserted.
        return FindKeyForReserveResult(i, kNotFound);
      }
      ASSERT_ND(rec_slice == slice);
      const KeyLength klen = get_remainder_length(i);

      if (does_point_to_layer(i)) {
        return FindKeyForReserveResult(i, kExactMatchLayerPointer);
      }

      if (klen <= sizeof(KeySlice)) {
        continue;  // same as "rec_slice < slice" case
      }

      // see the comment in MasstreeComposerContext::PathLevel.
      // Even if the record is larger than the current key, we consider it "matched" and cause
      // next layer creation, but keeping the larger record in a dummy original page in next layer.
      const char* record_suffix = get_record(i);
      if (klen == remainder &&
        std::memcmp(record_suffix, suffix, remainder - sizeof(KeySlice)) == 0) {
        return FindKeyForReserveResult(i, kExactMatchLocalRecord);
      } else {
        return FindKeyForReserveResult(i, kConflictingLocalRecord);
      }
    }
  }
  return FindKeyForReserveResult(key_count, kNotFound);
}

SlotIndex foedus::storage::masstree::MasstreeBorderPage::find_key_normalized ( SlotIndex  from_index,
SlotIndex  to_index,
KeySlice  slice 
) const
inline

Specialized version for 8 byte native integer search.

Because such a key never goes to a second layer, this is much simpler.

Definition at line 1179 of file masstree_page_impl.hpp.

References ASSERT_ND, get_remainder_length(), get_slice(), foedus::storage::masstree::kBorderPageMaxSlots, prefetch_additional_if_needed(), and UNLIKELY.

Referenced by foedus::storage::masstree::MasstreeStoragePimpl::locate_record_normalized(), and foedus::storage::masstree::MasstreeStoragePimpl::reserve_record_normalized().

{
  ASSERT_ND(to_index <= kBorderPageMaxSlots);
  ASSERT_ND(from_index <= to_index);
  if (from_index == 0) {  // we don't need prefetching in second time
    prefetch_additional_if_needed(to_index);
  }
  for (SlotIndex i = from_index; i < to_index; ++i) {
    const KeySlice rec_slice = get_slice(i);
    const KeyLength klen = get_remainder_length(i);
    if (UNLIKELY(slice == rec_slice && klen == sizeof(KeySlice))) {
      return i;
    }
  }
  return kBorderPageMaxSlots;
}
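A usage sketch for the normalized case, roughly as in locate_record_normalized() (error handling elided):

  // An 8-byte key never goes to a next layer, so a slice comparison suffices.
  SlotIndex index = page->find_key_normalized(0, page->get_key_count(), slice);
  if (index < kBorderPageMaxSlots) {
    xct::RwLockableXctId* owner = page->get_owner_id(index);  // record found
  }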

PayloadLength foedus::storage::masstree::MasstreeBorderPage::get_max_payload_length ( SlotIndex  index) const
inline
Returns
the maximum payload length the physical record allows.

Definition at line 838 of file masstree_page_impl.hpp.

References foedus::storage::masstree::MasstreeBorderPage::Slot::get_max_payload_peek(), and get_slot().

Referenced by initialize_as_layer_root_physical(), foedus::storage::masstree::MasstreeStoragePimpl::insert_general(), foedus::storage::masstree::MasstreeStoragePimpl::reserve_record(), foedus::storage::masstree::MasstreeStoragePimpl::reserve_record_normalized(), foedus::storage::masstree::ReserveRecords::run(), try_expand_record_in_page_physical(), and foedus::storage::masstree::MasstreeStoragePimpl::upsert_general().

{
  return get_slot(index)->get_max_payload_peek();
}

Slot* foedus::storage::masstree::MasstreeBorderPage::get_new_slot ( SlotIndex  index)
inline

Definition at line 643 of file masstree_page_impl.hpp.

References ASSERT_ND, foedus::storage::masstree::MasstreePage::get_key_count(), and foedus::storage::masstree::kBorderPageMaxSlots.

Referenced by append_next_layer_snapshot(), initialize_layer_root(), foedus::storage::masstree::SplitBorder::migrate_records(), reserve_initially_next_layer(), and reserve_record_space().

{
  ASSERT_ND(index == get_key_count());
  ASSERT_ND(index < kBorderPageMaxSlots);
  // Slots grow backward from the end of the page, so slot i sits
  // (i + 1) Slot-sizes before the page's end.
  return reinterpret_cast<Slot*>(this + 1) - index - 1;
}

const DualPagePointer* foedus::storage::masstree::MasstreeBorderPage::get_next_layer ( SlotIndex  index) const
inline

Definition at line 753 of file masstree_page_impl.hpp.

References get_record_payload().

{
  return reinterpret_cast<const DualPagePointer*>(get_record_payload(index));
}

DualPagePointer* foedus::storage::masstree::MasstreeBorderPage::get_next_layer_from_offsets ( DataOffset  record_offset,
KeyLength  remainder_length 
)
inline

Definition at line 782 of file masstree_page_impl.hpp.

References get_record_payload_from_offsets().

Referenced by append_next_layer_snapshot(), and reserve_initially_next_layer().

{
  char* payload = get_record_payload_from_offsets(record_offset, remainder_length);
  return reinterpret_cast<DualPagePointer*>(payload);
}

const DualPagePointer* foedus::storage::masstree::MasstreeBorderPage::get_next_layer_from_offsets ( DataOffset  record_offset,
KeyLength  remainder_length 
) const
inline

Definition at line 788 of file masstree_page_impl.hpp.

References get_record_payload_from_offsets().

{
  const char* payload = get_record_payload_from_offsets(record_offset, remainder_length);
  return reinterpret_cast<const DualPagePointer*>(payload);
}

DataOffset foedus::storage::masstree::MasstreeBorderPage::get_next_offset ( ) const
inline

Definition at line 625 of file masstree_page_impl.hpp.

Referenced by foedus::storage::masstree::SplitBorder::run(), and try_expand_record_in_page_physical().

{ return next_offset_; }


DataOffset foedus::storage::masstree::MasstreeBorderPage::get_offset_in_bytes ( SlotIndex  index) const
inline

Definition at line 809 of file masstree_page_impl.hpp.

References foedus::storage::masstree::MasstreeBorderPage::SlotLengthUnion::components, get_slot(), foedus::storage::masstree::MasstreeBorderPage::Slot::lengthes_, and foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::offset_.

Referenced by get_record(), and foedus::storage::masstree::operator<<().

{
  return get_slot(index)->lengthes_.components.offset_;
}

const xct::RwLockableXctId* foedus::storage::masstree::MasstreeBorderPage::get_owner_id ( SlotIndex  index) const
inline

Definition at line 816 of file masstree_page_impl.hpp.

References get_slot(), and foedus::storage::masstree::MasstreeBorderPage::Slot::tid_.

{
  return &get_slot(index)->tid_;
}

PayloadLength foedus::storage::masstree::MasstreeBorderPage::get_payload_length ( SlotIndex  index) const
inline
Returns
the current logical payload length, which might change later.

Definition at line 832 of file masstree_page_impl.hpp.

References foedus::storage::masstree::MasstreeBorderPage::SlotLengthUnion::components, get_slot(), foedus::storage::masstree::MasstreeBorderPage::Slot::lengthes_, and foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::payload_length_.

Referenced by assert_entries_impl(), foedus::storage::masstree::MasstreeStoragePimpl::increment_general(), foedus::storage::masstree::operator<<(), foedus::storage::masstree::MasstreeStoragePimpl::overwrite_general(), foedus::storage::masstree::MasstreeStoragePimpl::retrieve_general(), foedus::storage::masstree::MasstreeStoragePimpl::retrieve_part_general(), and foedus::storage::masstree::MasstreeStoragePimpl::upsert_general().

{
  return get_slot(index)->lengthes_.components.payload_length_;
}

char* foedus::storage::masstree::MasstreeBorderPage::get_record ( SlotIndex  index)
inline

Definition at line 732 of file masstree_page_impl.hpp.

References ASSERT_ND, get_offset_in_bytes(), get_record_from_offset(), and foedus::storage::masstree::kBorderPageMaxSlots.

Referenced by assert_entries_impl(), compare_key(), find_key(), find_key_for_reserve(), find_key_for_snapshot(), get_record_payload(), ltgt_key(), foedus::storage::masstree::operator<<(), foedus::storage::masstree::MasstreeStoragePimpl::register_record_write_log(), track_moved_record(), and track_moved_record_next_layer().

{
  ASSERT_ND(index < kBorderPageMaxSlots);
  return get_record_from_offset(get_offset_in_bytes(index));
}

const char* foedus::storage::masstree::MasstreeBorderPage::get_record ( SlotIndex  index) const
inline

Definition at line 736 of file masstree_page_impl.hpp.

References ASSERT_ND, get_offset_in_bytes(), get_record_from_offset(), and foedus::storage::masstree::kBorderPageMaxSlots.

{
  ASSERT_ND(index < kBorderPageMaxSlots);
  return get_record_from_offset(get_offset_in_bytes(index));
}

char* foedus::storage::masstree::MasstreeBorderPage::get_record_from_offset ( DataOffset  record_offset)
inline

Offset versions.

Definition at line 758 of file masstree_page_impl.hpp.

References ASSERT_ND.

Referenced by get_record(), get_record_payload_from_offsets(), initialize_layer_root(), foedus::storage::masstree::SplitBorder::migrate_records(), reserve_record_space(), and try_expand_record_in_page_physical().

758  {
759  ASSERT_ND(record_offset % 8 == 0);
760  ASSERT_ND(record_offset < sizeof(data_));
761  return data_ + record_offset;
762  }

const char* foedus::storage::masstree::MasstreeBorderPage::get_record_from_offset ( DataOffset  record_offset) const
inline

Definition at line 763 of file masstree_page_impl.hpp.

References ASSERT_ND.

763  {
764  ASSERT_ND(record_offset % 8 == 0);
765  ASSERT_ND(record_offset < sizeof(data_));
766  return data_ + record_offset;
767  }
char* foedus::storage::masstree::MasstreeBorderPage::get_record_payload ( SlotIndex  index)
inline

Definition at line 740 of file masstree_page_impl.hpp.

References get_record(), and get_suffix_length_aligned().

Referenced by get_next_layer(), foedus::storage::masstree::MasstreeStoragePimpl::increment_general(), initialize_as_layer_root_physical(), initialize_layer_root(), foedus::storage::masstree::operator<<(), foedus::storage::masstree::MasstreeStoragePimpl::retrieve_general(), and foedus::storage::masstree::MasstreeStoragePimpl::retrieve_part_general().

740  {
741  char* record = get_record(index);
742  KeyLength skipped = get_suffix_length_aligned(index);
743  return record + skipped;
744  }

const char* foedus::storage::masstree::MasstreeBorderPage::get_record_payload ( SlotIndex  index) const
inline

Definition at line 745 of file masstree_page_impl.hpp.

References get_record(), and get_suffix_length_aligned().

745  {
746  const char* record = get_record(index);
747  KeyLength skipped = get_suffix_length_aligned(index);
748  return record + skipped;
749  }
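
The payload accessors above, together with get_record(), imply the following record layout inside data_. A short summary sketch (page and index are assumed to be a valid border page and slot; this restates the code shown, it is not new API):

  // [ suffix: align8(remainder - 8) bytes, zero-padded (zero bytes when the
  //   remainder fits in the slice) ][ payload: align8(payload_length) bytes ]
  const char* suffix  = page->get_record(index);          // start of the record
  const char* payload = page->get_record_payload(index);  // == suffix + get_suffix_length_aligned(index)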

const char* foedus::storage::masstree::MasstreeBorderPage::get_record_payload_from_offsets ( DataOffset  record_offset,
KeyLength  remainder_length 
) const
inline

Definition at line 768 of file masstree_page_impl.hpp.

References foedus::storage::masstree::calculate_suffix_length_aligned(), and get_record_from_offset().

Referenced by get_next_layer_from_offsets(), and initialize_layer_root().

770  {
771  const char* record = get_record_from_offset(record_offset);
772  KeyLength skipped = calculate_suffix_length_aligned(remainder_length);
773  return record + skipped;
774  }

char* foedus::storage::masstree::MasstreeBorderPage::get_record_payload_from_offsets ( DataOffset  record_offset,
KeyLength  remainder_length 
)
inline

Definition at line 775 of file masstree_page_impl.hpp.

References foedus::storage::masstree::calculate_suffix_length_aligned(), and get_record_from_offset().

777  {
778  char* record = get_record_from_offset(record_offset);
779  KeyLength skipped = calculate_suffix_length_aligned(remainder_length);
780  return record + skipped;
781  }

KeyLength foedus::storage::masstree::MasstreeBorderPage::get_remainder_length ( SlotIndex  index) const
inline

Returns the remainder length stored in the slot: the key length minus the 8-byte slices consumed by upper layers.

const Slot* foedus::storage::masstree::MasstreeBorderPage::get_slot ( SlotIndex  index) const
inline

Definition at line 631 of file masstree_page_impl.hpp.

References ASSERT_ND, foedus::storage::masstree::MasstreePage::get_key_count(), and foedus::storage::masstree::kBorderPageMaxSlots.

Referenced by does_point_to_layer(), get_max_payload_length(), get_offset_in_bytes(), get_owner_id(), get_payload_length(), get_remainder_length(), initialize_as_layer_root_physical(), initialize_layer_root(), foedus::storage::masstree::operator<<(), replace_next_layer_snapshot(), track_moved_record(), try_expand_record_in_page_physical(), and verify_slot_lengthes().

631  {
632  ASSERT_ND(index < get_key_count());
633  ASSERT_ND(index < kBorderPageMaxSlots);
634  return reinterpret_cast<const Slot*>(this + 1) - index - 1;
635  }

Slot* foedus::storage::masstree::MasstreeBorderPage::get_slot ( SlotIndex  index)
inline

Definition at line 637 of file masstree_page_impl.hpp.

References ASSERT_ND, foedus::storage::masstree::MasstreePage::get_key_count(), and foedus::storage::masstree::kBorderPageMaxSlots.

637  {
638  ASSERT_ND(index < get_key_count());
639  ASSERT_ND(index < kBorderPageMaxSlots);
640  return reinterpret_cast<Slot*>(this + 1) - index - 1;
641  }
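
The slot array grows backward from the end of the page: this + 1 is one-past-the-end of the page object, so slot 0 occupies the last sizeof(Slot) bytes, slot 1 the 32 bytes before it, and so on. A minimal self-contained sketch of the same pointer arithmetic with toy types (illustrative, not FOEDUS API); to_slot_index() documented further below is simply the inverse mapping:

  #include <cassert>
  #include <cstdint>

  struct ToySlot { uint64_t words[4]; };  // stand-in for the 32-byte Slot

  struct ToyPage {
    char data_[4096];  // record data grows forward; slots are carved from the end
    const ToySlot* get_slot(uint16_t index) const {
      // this + 1 == one-past-the-end of the page; slot 0 is the last 32 bytes
      return reinterpret_cast<const ToySlot*>(this + 1) - index - 1;
    }
    uint16_t to_slot_index(const ToySlot* slot) const {
      return static_cast<uint16_t>(reinterpret_cast<const ToySlot*>(this + 1) - slot - 1);
    }
  };

  int main() {
    ToyPage page{};
    assert(page.to_slot_index(page.get_slot(3)) == 3);  // the two are inverses
    return 0;
  }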

KeyLength foedus::storage::masstree::MasstreeBorderPage::get_suffix_length ( SlotIndex  index) const
inline

Definition at line 823 of file masstree_page_impl.hpp.

References foedus::storage::masstree::calculate_suffix_length(), and get_remainder_length().

Referenced by assert_entries_impl(), and foedus::storage::masstree::operator<<().

823  {
824  const KeyLength remainder_length = get_remainder_length(index);
825  return calculate_suffix_length(remainder_length);
826  }

KeyLength foedus::storage::masstree::MasstreeBorderPage::get_suffix_length_aligned ( SlotIndex  index) const
inline

Definition at line 827 of file masstree_page_impl.hpp.

References foedus::storage::masstree::calculate_suffix_length_aligned(), and get_remainder_length().

Referenced by assert_entries_impl(), and get_record_payload().

827  {
828  const KeyLength remainder_length = get_remainder_length(index);
829  return calculate_suffix_length_aligned(remainder_length);
830  }

void foedus::storage::masstree::MasstreeBorderPage::increase_next_offset ( DataOffset  length)
inline

Definition at line 626 of file masstree_page_impl.hpp.

References ASSERT_ND.

Referenced by try_expand_record_in_page_physical().

626  {
627  next_offset_ += length;
628  ASSERT_ND(next_offset_ <= sizeof(data_));
629  }

void foedus::storage::masstree::MasstreeBorderPage::initialize_as_layer_root_physical ( VolatilePagePointer  page_id,
MasstreeBorderPage *  parent,
SlotIndex  parent_index 
)

A physical-only method to initialize this page as a volatile page of a layer-root pointed from the given parent record.

It merely migrates the parent record without any logical change.

Precondition
!parent->header_.snapshot_: only for volatile page
parent->is_locked() && !parent->is_moved()
parent->get_slot(parent_index)->is_locked()
!parent->get_slot(parent_index)->is_moved()
!parent->get_slot(parent_index)->does_point_to_layer()

Definition at line 431 of file masstree_page_impl.cpp.

References ASSERT_ND, foedus::storage::masstree::MasstreePage::get_layer(), get_max_payload_length(), get_next_layer(), get_record_payload(), get_slot(), foedus::storage::masstree::MasstreePage::header_, initialize_layer_root(), initialize_volatile_page(), foedus::storage::masstree::MasstreePage::is_locked(), foedus::storage::masstree::MasstreePage::is_moved(), foedus::storage::masstree::MasstreePage::is_retired(), foedus::storage::masstree::kInfimumSlice, foedus::storage::masstree::kSupremumSlice, foedus::assorted::memory_fence_release(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::payload_length_, foedus::storage::DualPagePointer::snapshot_pointer_, foedus::storage::PageHeader::storage_id_, and foedus::storage::DualPagePointer::volatile_pointer_.

Referenced by foedus::storage::masstree::ReserveRecords::run().

434  {
435  // This method assumes that the record's payload space is spacious enough.
436  // The caller must make it sure as pre-condition.
437  ASSERT_ND(parent->is_locked());
438  ASSERT_ND(!parent->is_moved());
439  Slot* parent_slot = parent->get_slot(parent_index);
440  ASSERT_ND(parent_slot->tid_.is_keylocked());
441  ASSERT_ND(!parent_slot->tid_.is_moved());
442  ASSERT_ND(!parent_slot->does_point_to_layer());
443  ASSERT_ND(parent->get_max_payload_length(parent_index) >= sizeof(DualPagePointer));
444  DualPagePointer pointer;
445  pointer.snapshot_pointer_ = 0;
446  pointer.volatile_pointer_ = page_id;
447 
448  // initialize the root page by copying the record
449  initialize_volatile_page(
450  parent->header_.storage_id_,
451  page_id,
452  parent->get_layer() + 1,
453  kInfimumSlice, // infimum slice
454  kSupremumSlice); // high-fence is supremum
455  ASSERT_ND(!is_locked());
456  initialize_layer_root(parent, parent_index);
457  ASSERT_ND(!is_moved());
458  ASSERT_ND(!is_retired());
459 
460  SlotLengthPart new_lengthes = parent_slot->read_lengthes_oneshot();
461  new_lengthes.payload_length_ = sizeof(DualPagePointer);
462  char* parent_payload = parent->get_record_payload(parent_index);
463 
464  // point to the new page. Be careful on ordering.
465  std::memcpy(parent_payload, &pointer, sizeof(pointer));
466  assorted::memory_fence_release();
467  parent_slot->write_lengthes_oneshot(new_lengthes);
468  assorted::memory_fence_release();
469  parent_slot->tid_.xct_id_.set_next_layer();  // which also turns off delete-bit
470 
471  ASSERT_ND(parent->get_next_layer(parent_index)->volatile_pointer_ == page_id);
473 }
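
The sequence at source lines 465-469 is the heart of this method: the DualPagePointer payload must become visible to readers before the one-shot length update, which in turn must be visible before the next-layer bit they test. A hedged sketch of the same publish pattern in portable C++11 atomics (illustrative stand-ins, not the FOEDUS implementation):

  #include <atomic>
  #include <cstddef>
  #include <cstring>

  void publish_next_layer(char* payload_dst, const void* pointer, std::size_t len,
                          std::atomic<uint64_t>* lengthes_oneshot, uint64_t new_lengthes,
                          std::atomic<bool>* next_layer_bit) {
    std::memcpy(payload_dst, pointer, len);                            // 1) write the pointer payload
    std::atomic_thread_fence(std::memory_order_release);               // 2) payload before lengths
    lengthes_oneshot->store(new_lengthes, std::memory_order_relaxed);  // 3) one-shot length write
    std::atomic_thread_fence(std::memory_order_release);               // 4) lengths before the flag
    next_layer_bit->store(true, std::memory_order_relaxed);            // 5) readers may now follow
  }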

void foedus::storage::masstree::MasstreeBorderPage::initialize_layer_root ( const MasstreeBorderPage *  copy_from,
SlotIndex  copy_index 
)

Copy the initial record that will be the only record for a new root page.

This is called when a new layer is created, and done in a thread-private memory. So, no synchronization needed.

Definition at line 298 of file masstree_page_impl.cpp.

References foedus::assorted::align8(), ASSERT_ND, foedus::storage::masstree::calculate_suffix_length_aligned(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthUnion::components, foedus::storage::masstree::MasstreePage::get_key_count(), get_new_slot(), get_owner_id(), get_record_from_offset(), get_record_payload(), get_record_payload_from_offsets(), get_slot(), foedus::storage::masstree::MasstreePage::header_, foedus::xct::RwLockableXctId::is_keylocked(), foedus::storage::masstree::MasstreePage::is_locked(), foedus::xct::XctId::is_next_layer(), foedus::storage::PageHeader::key_count_, foedus::storage::masstree::kInitiallyNextLayer, foedus::storage::masstree::MasstreeBorderPage::Slot::lengthes_, foedus::xct::RwLockableXctId::lock_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::offset_, foedus::storage::masstree::MasstreeBorderPage::Slot::original_offset_, foedus::storage::masstree::MasstreeBorderPage::Slot::original_physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::payload_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::Slot::remainder_length_, foedus::xct::McsRwLock::reset(), set_slice(), foedus::storage::masstree::slice_key(), foedus::storage::masstree::MasstreeBorderPage::Slot::tid_, to_record_length(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::unused_, and foedus::xct::RwLockableXctId::xct_id_.

Referenced by initialize_as_layer_root_physical().

300  {
301  ASSERT_ND(get_key_count() == 0);
302  ASSERT_ND(!is_locked()); // we don't lock a new page
303  ASSERT_ND(copy_from->get_owner_id(copy_index)->is_keylocked());
304 
305  // This is a new page, so no worry on race.
306  const SlotIndex new_index = 0;
307  Slot* slot = get_new_slot(new_index);
308  const Slot* parent_slot = copy_from->get_slot(copy_index);
309  const SlotLengthPart parent_lengthes = parent_slot->lengthes_.components;
310  const char* parent_record = copy_from->get_record_from_offset(parent_lengthes.offset_);
311 
312  KeyLength parent_remainder = parent_slot->remainder_length_;
313  ASSERT_ND(parent_remainder != kInitiallyNextLayer);
314  ASSERT_ND(parent_remainder > sizeof(KeySlice));
315  KeyLength remainder = parent_remainder - sizeof(KeySlice);
316 
317  // retrieve the first 8 byte (or less) as the new slice.
318  const KeySlice new_slice = slice_key(parent_record, parent_remainder);
319  const PayloadLength payload_length = parent_lengthes.payload_length_;
320  const KeyLength suffix_length_aligned = calculate_suffix_length_aligned(remainder);
321  const DataOffset record_size = to_record_length(remainder, payload_length);
322  const DataOffset new_offset = next_offset_;
323 
324  set_slice(new_index, new_slice);
325 
326  ASSERT_ND(next_offset_ == 0);
327  slot->lengthes_.components.offset_ = new_offset;
328  slot->lengthes_.components.unused_ = 0;
329  slot->lengthes_.components.physical_record_length_ = record_size;
330  slot->lengthes_.components.payload_length_ = payload_length;
331  slot->original_physical_record_length_ = record_size;
332  slot->remainder_length_ = remainder;
333  slot->original_offset_ = new_offset;
334  next_offset_ += record_size;
335  consecutive_inserts_ = true;
336 
337  // use the same xct ID. This means we also inherit deleted flag.
338  slot->tid_.xct_id_ = parent_slot->tid_.xct_id_;
339  ASSERT_ND(!slot->tid_.xct_id_.is_next_layer());
340  // but we don't want to inherit locks
341  slot->tid_.lock_.reset();
342  if (suffix_length_aligned > 0) {
343  // because suffix parts are 8-byte aligned with zero padding, we can memcpy in 8-bytes unit
344  std::memcpy(
345  get_record_from_offset(new_offset),
346  parent_record + sizeof(KeySlice),
347  suffix_length_aligned);
348  }
349  if (payload_length > 0) {
350  std::memcpy(
351  get_record_payload_from_offsets(new_offset, remainder),
352  copy_from->get_record_payload(copy_index),
353  assorted::align8(payload_length)); // payload is zero-padded to 8 bytes, too.
354  }
355 
356  // as we don't lock this page, we directly increment it to avoid is_locked assertion
357  ++header_.key_count_;
358 }
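
A worked example of the layer shift performed above (illustrative values): a parent record at layer n with parent_remainder = 11 becomes the only record of the new layer-(n+1) root:

  //   parent_remainder      = 11: an 8-byte slice in slices_[] plus a 3-byte suffix in the record
  //   remainder             = 11 - sizeof(KeySlice) = 3   (left to distinguish at the new layer)
  //   new_slice             = the parent's 3 suffix bytes, zero-padded to 8 (via slice_key())
  //   suffix_length_aligned = calculate_suffix_length_aligned(3) = 0  (fits entirely in the slice)
  //   record_size           = to_record_length(3, payload_length) = align8(payload_length)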

void foedus::storage::masstree::MasstreeBorderPage::initialize_snapshot_page ( StorageId  storage_id,
SnapshotPagePointer  page_id,
uint8_t  layer,
KeySlice  low_fence,
KeySlice  high_fence 
)

Definition at line 146 of file masstree_page_impl.cpp.

References foedus::storage::masstree::MasstreePage::initialize_snapshot_common(), and foedus::storage::kMasstreeBorderPageType.

Referenced by foedus::storage::masstree::MasstreeComposer::construct_root().

151  {
152  initialize_snapshot_common(
153  storage_id,
154  page_id,
155  kMasstreeBorderPageType,
156  layer,
157  0,
158  low_fence,
159  high_fence);
160  consecutive_inserts_ = true; // snapshot pages are always completely sorted
161  next_offset_ = 0; // well, already implicitly zero-ed, but to be clear
162 }

void foedus::storage::masstree::MasstreeBorderPage::initialize_volatile_page ( StorageId  storage_id,
VolatilePagePointer  page_id,
uint8_t  layer,
KeySlice  low_fence,
KeySlice  high_fence 
)

Definition at line 128 of file masstree_page_impl.cpp.

References foedus::storage::masstree::MasstreePage::initialize_volatile_common(), and foedus::storage::kMasstreeBorderPageType.

Referenced by initialize_as_layer_root_physical(), foedus::storage::masstree::MasstreeStoragePimpl::load_empty(), and foedus::storage::masstree::ReserveRecords::run().

133  {
134  initialize_volatile_common(
135  storage_id,
136  page_id,
137  kMasstreeBorderPageType,
138  layer,
139  0,
140  low_fence,
141  high_fence);
142  consecutive_inserts_ = true; // initially key_count = 0, so of course sorted
143  next_offset_ = 0; // well, already implicitly zero-ed, but to be clear
144 }
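
Recall from the class description that pages are never constructed; they are reinterpret-cast from pooled memory and then initialized. A hedged sketch (acquiring the pooled region is left abstract; only initialize_volatile_page() is the documented call):

  #include <masstree_page_impl.hpp>

  using namespace foedus::storage::masstree;

  MasstreeBorderPage* make_empty_root(
      void* pooled_page_memory,  // a page-sized region from the page pool (assumed)
      foedus::storage::StorageId storage_id,
      foedus::storage::VolatilePagePointer page_id) {
    // No constructor runs; the page type is overlaid on pooled memory.
    auto* page = reinterpret_cast<MasstreeBorderPage*>(pooled_page_memory);
    // A layer-0 root covers the whole key space: [kInfimumSlice, kSupremumSlice).
    page->initialize_volatile_page(storage_id, page_id, /*layer=*/0, kInfimumSlice, kSupremumSlice);
    return page;
  }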

bool foedus::storage::masstree::MasstreeBorderPage::is_consecutive_inserts ( ) const
inline

Whether this page is receiving only sequential inserts.

If this is true, the cursor can skip its sorting phase. If this is a snapshot page, this is always true.

Definition at line 623 of file masstree_page_impl.hpp.

Referenced by foedus::storage::masstree::SplitBorder::decide_strategy(), foedus::storage::masstree::SplitBorder::run(), and foedus::storage::masstree::MasstreeStoragePimpl::verify_single_thread_border().

623 { return consecutive_inserts_; }


int foedus::storage::masstree::MasstreeBorderPage::ltgt_key ( SlotIndex  index,
const char *  be_key,
KeyLength  key_length 
) const
inline

Compares the record's key at index with the given key.

Returns a negative value, zero, or a positive value when the given key is smaller than, equal to, or larger than the record's key, respectively.

Definition at line 1631 of file masstree_page_impl.hpp.

References ASSERT_ND, foedus::storage::masstree::MasstreePage::get_layer(), foedus::storage::masstree::kSliceLen, and foedus::storage::masstree::slice_layer().

1634  {
1635  ASSERT_ND(key_length > get_layer() * kSliceLen);
1636  KeyLength remainder = key_length - get_layer() * kSliceLen;
1637  KeySlice slice = slice_layer(be_key, key_length, get_layer());
1638  const char* suffix = be_key + (get_layer() + 1) * kSliceLen;
1639  return ltgt_key(index, slice, suffix, remainder);
1640 }

int foedus::storage::masstree::MasstreeBorderPage::ltgt_key ( SlotIndex  index,
KeySlice  slice,
const char *  suffix,
KeyLength  remainder 
) const
inline

Overload to receive slice+suffix.

Definition at line 1641 of file masstree_page_impl.hpp.

References ASSERT_ND, does_point_to_layer(), get_record(), get_remainder_length(), get_slice(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::storage::masstree::kMaxKeyLength, and foedus::storage::masstree::kSliceLen.

1645  {
1646  ASSERT_ND(index < kBorderPageMaxSlots);
1647  ASSERT_ND(remainder > 0);
1648  ASSERT_ND(remainder <= kMaxKeyLength);
1649  KeyLength rec_remainder = get_remainder_length(index);
1650  KeySlice rec_slice = get_slice(index);
1651  if (slice < rec_slice) {
1652  return -1;
1653  } else if (slice > rec_slice) {
1654  return 1;
1655  }
1656 
1657  if (remainder <= kSliceLen) {
1658  if (remainder == rec_remainder) {
1659  return 0;
1660  } else if (remainder < rec_remainder) {
1661  return -1;
1662  } else {
1663  return 1;
1664  }
1665  }
1666 
1667  ASSERT_ND(remainder > kSliceLen);
1668  if (rec_remainder <= kSliceLen) {
1669  return 1;
1670  }
1671  if (does_point_to_layer(index)) {
1672  return 0;
1673  }
1674  ASSERT_ND(rec_remainder <= kMaxKeyLength);
1675  KeyLength min_remainder = std::min(remainder, rec_remainder);
1676  const char* rec_suffix = get_record(index);
1677  int cmp = std::memcmp(suffix, rec_suffix, min_remainder);
1678  if (cmp != 0) {
1679  return cmp;
1680  }
1681 
1682  if (remainder == rec_remainder) {
1683  return 0;
1684  } else if (remainder < rec_remainder) {
1685  return -1;
1686  } else {
1687  return 1;
1688  }
1689 }
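
A hedged usage sketch of the three-way result (probe() and its arguments are assumed inputs, not FOEDUS API; note that the zero case deliberately covers both an exact match and a next-layer pointer that subsumes the slice, per the code above):

  #include <masstree_page_impl.hpp>

  using foedus::storage::masstree::MasstreeBorderPage;
  using foedus::storage::masstree::SlotIndex;
  using foedus::storage::masstree::KeyLength;

  void probe(const MasstreeBorderPage* page, SlotIndex i,
             const char* be_key, KeyLength key_length) {
    int cmp = page->ltgt_key(i, be_key, key_length);
    if (cmp < 0) {
      // the given key sorts before record i
    } else if (cmp > 0) {
      // the given key sorts after record i
    } else {
      // "equal": exact match, or record i points to a next layer;
      // descend with the remaining suffix to tell the two apart
    }
  }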

MasstreeBorderPage& foedus::storage::masstree::MasstreeBorderPage::operator= ( const MasstreeBorderPage &  other)
delete
void foedus::storage::masstree::MasstreeBorderPage::prefetch ( ) const
inline

Prefetch up to the 256th byte of the page.

Definition at line 934 of file masstree_page_impl.hpp.

References foedus::assorted::prefetch_cachelines().

934  {
935  assorted::prefetch_cachelines(this, 4);
936  }

void foedus::storage::masstree::MasstreeBorderPage::prefetch_additional_if_needed ( SlotIndex  key_count) const
inline

Definition at line 937 of file masstree_page_impl.hpp.

References foedus::storage::masstree::kBorderPageAdditionalHeaderSize, foedus::assorted::kCachelineSize, foedus::storage::masstree::kCommonPageHeaderSize, and foedus::assorted::prefetch_cachelines().

Referenced by find_key(), find_key_for_reserve(), find_key_normalized(), and foedus::storage::masstree::MasstreeStoragePimpl::peek_volatile_page_boundaries_next_layer().

937  {
938  const uint32_t kInitialPrefetchBytes = 4U * assorted::kCachelineSize; // 256 bytes
939  const SlotIndex kInitiallyFetched
940  = (kInitialPrefetchBytes - kCommonPageHeaderSize - kBorderPageAdditionalHeaderSize)
941  / sizeof(KeySlice);
942  if (key_count > kInitiallyFetched) {
943  // we initially prefetched 64*4 = 256 bytes: header and 22 key slices.
944  // if we have more, prefetch now while we are still searching.
945  uint16_t cachelines = ((key_count - kInitiallyFetched) / sizeof(KeySlice)) + 1;
946  assorted::prefetch_cachelines(
947  reinterpret_cast<const char*>(this) + kInitialPrefetchBytes,
948  cachelines);
949  }
950  }
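
A worked example of the arithmetic (the 80-byte total of kCommonPageHeaderSize + kBorderPageAdditionalHeaderSize is inferred from the source comment "header and 22 key slices", so treat it as an assumption):

  //   kInitialPrefetchBytes = 4 * kCachelineSize            = 256 bytes
  //   kInitiallyFetched     = (256 - 80) / sizeof(KeySlice) = 22 slices
  // With key_count == 50:
  //   cachelines = ((50 - 22) / sizeof(KeySlice)) + 1 = (28 / 8) + 1 = 4
  // i.e. four more cachelines are prefetched starting at byte 256 of the page;
  // each 64-byte cacheline holds eight 8-byte key slices.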

void foedus::storage::masstree::MasstreeBorderPage::release_pages_recursive ( const memory::GlobalVolatilePageResolver &  page_resolver,
memory::PageReleaseBatch *  batch 
)

Definition at line 266 of file masstree_page_impl.cpp.

References ASSERT_ND, does_point_to_layer(), foedus::storage::masstree::MasstreePage::foster_twin_, foedus::storage::masstree::MasstreePage::get_key_count(), get_next_layer(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::MasstreePage::header_, foedus::storage::PageVersion::is_moved(), foedus::storage::VolatilePagePointer::is_null(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::storage::PageHeader::page_id_, foedus::storage::PageHeader::page_version_, foedus::memory::PageReleaseBatch::release(), release_pages_recursive(), foedus::storage::masstree::MasstreePage::release_pages_recursive_common(), foedus::memory::GlobalVolatilePageResolver::resolve_offset(), foedus::storage::DualPagePointer::volatile_pointer_, and foedus::storage::VolatilePagePointer::word.

Referenced by release_pages_recursive(), and foedus::storage::masstree::MasstreePage::release_pages_recursive_common().

268  {
269  if (header_.page_version_.is_moved()) {
270  for (int i = 0; i < 2; ++i) {
271  ASSERT_ND(!foster_twin_[i].is_null());
272  MasstreeBorderPage* p
273  = reinterpret_cast<MasstreeBorderPage*>(page_resolver.resolve_offset(foster_twin_[i]));
274  p->release_pages_recursive(page_resolver, batch);
275  foster_twin_[i].word = 0;
276  }
277  } else {
278  SlotIndex key_count = get_key_count();
279  ASSERT_ND(key_count <= kBorderPageMaxSlots);
280  for (SlotIndex i = 0; i < key_count; ++i) {
281  if (does_point_to_layer(i)) {
282  DualPagePointer* pointer = get_next_layer(i);
283  if (!pointer->volatile_pointer_.is_null()) {
284  MasstreePage* child = reinterpret_cast<MasstreePage*>(
285  page_resolver.resolve_offset(pointer->volatile_pointer_));
286  child->release_pages_recursive_common(page_resolver, batch);
287  }
288  }
289  }
290  }
291 
292  VolatilePagePointer volatile_id;
293  volatile_id.word = header().page_id_;
294  batch->release(volatile_id);
295 }

void foedus::storage::masstree::MasstreeBorderPage::replace_next_layer_snapshot ( SnapshotPagePointer  pointer)
inline

Same as above, except this is used to transform an existing record at the end into a next-layer pointer.

Unlike set_next_layer, this shrinks the payload part.

Definition at line 1447 of file masstree_page_impl.hpp.

References ASSERT_ND, foedus::storage::VolatilePagePointer::clear(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthUnion::components, does_point_to_layer(), foedus::storage::masstree::MasstreePage::get_key_count(), get_next_layer(), get_owner_id(), get_slot(), foedus::storage::masstree::MasstreePage::header(), foedus::xct::RwLockableXctId::is_next_layer(), foedus::storage::masstree::kInitiallyNextLayer, foedus::storage::masstree::MasstreeBorderPage::Slot::lengthes_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::offset_, foedus::storage::masstree::MasstreeBorderPage::Slot::original_physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::payload_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::Slot::remainder_length_, foedus::xct::XctId::set_next_layer(), foedus::storage::DualPagePointer::snapshot_pointer_, foedus::storage::masstree::MasstreeBorderPage::Slot::tid_, to_record_length(), foedus::storage::DualPagePointer::volatile_pointer_, and foedus::xct::RwLockableXctId::xct_id_.

1447  {
1448  ASSERT_ND(header().snapshot_); // this is used only from snapshot composer
1449  ASSERT_ND(get_key_count() > 0);
1450  const SlotIndex index = get_key_count() - 1U;
1451 
1452  ASSERT_ND(!does_point_to_layer(index));
1453  ASSERT_ND(!get_owner_id(index)->xct_id_.is_next_layer());
1454 
1455  // Overwriting an existing slot with kInitiallyNextLayer is not generally safe,
1456  // but this is in snapshot page, so no worry on race.
1457  const KeyLength kRemainder = kInitiallyNextLayer;
1458  const DataOffset new_record_size = sizeof(DualPagePointer);
1459  ASSERT_ND(new_record_size == to_record_length(kRemainder, sizeof(DualPagePointer)));
1460 
1461  // This is in snapshot page, so no worry on race.
1462  Slot* slot = get_slot(index);
1463 
1464  // This is the last record in this page, right?
1465  ASSERT_ND(next_offset_
1466  == slot->lengthes_.components.offset_ + slot->lengthes_.components.physical_record_length_);
1467 
1468  // Here, we assume the snapshot border page always leaves sizeof(DualPagePointer) whenever
1469  // it creates a record. Otherwise we might not have enough space!
1470  // For this reason, we use can_accomodate_snapshot() rather than can_accomodate().
1471  ASSERT_ND(slot->lengthes_.components.offset_ + new_record_size + sizeof(Slot) * get_key_count()
1472  <= sizeof(data_));
1473  slot->lengthes_.components.physical_record_length_ = new_record_size;
1474  slot->lengthes_.components.payload_length_ = sizeof(DualPagePointer);
1475  slot->original_physical_record_length_ = new_record_size;
1476  slot->remainder_length_ = kRemainder;
1477  next_offset_ = slot->lengthes_.components.offset_ + new_record_size;
1478 
1479  slot->tid_.xct_id_.set_next_layer();
1480  DualPagePointer* dual_pointer = get_next_layer(index);
1481  dual_pointer->volatile_pointer_.clear();
1482  dual_pointer->snapshot_pointer_ = pointer;
1483 
1484  ASSERT_ND(does_point_to_layer(index));
1485 }

static DataOffset foedus::storage::masstree::MasstreeBorderPage::required_data_space ( KeyLength  remainder_length,
PayloadLength  payload_length 
)
inlinestatic

returns the byte size of required contiguous space in data_ to insert a new record of the given remainder/payload length.

This includes Slot size. Slices are not included as they are anyway statically placed.

See also
to_record_length()

Definition at line 687 of file masstree_page_impl.hpp.

References to_record_length().

Referenced by can_accomodate(), and can_accomodate_snapshot().

689  {
690  return to_record_length(remainder_length, payload_length) + sizeof(Slot);
691  }

void foedus::storage::masstree::MasstreeBorderPage::reserve_initially_next_layer ( SlotIndex  index,
xct::XctId  initial_owner_id,
KeySlice  slice,
const DualPagePointer &  pointer 
)
inline

For creating a record that is initially a next-layer.

Definition at line 1375 of file masstree_page_impl.hpp.

References ASSERT_ND, can_accomodate(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthUnion::components, foedus::storage::masstree::MasstreePage::get_key_count(), get_new_slot(), get_next_layer_from_offsets(), get_slice(), foedus::storage::masstree::MasstreePage::header(), foedus::xct::XctId::is_deleted(), foedus::storage::masstree::MasstreePage::is_locked(), foedus::xct::XctId::is_next_layer(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::storage::masstree::kInitiallyNextLayer, foedus::storage::masstree::MasstreeBorderPage::Slot::lengthes_, foedus::xct::RwLockableXctId::lock_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::offset_, foedus::storage::masstree::MasstreeBorderPage::Slot::original_offset_, foedus::storage::masstree::MasstreeBorderPage::Slot::original_physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::payload_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::Slot::remainder_length_, foedus::xct::McsRwLock::reset(), set_slice(), foedus::storage::masstree::MasstreeBorderPage::Slot::tid_, to_record_length(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::unused_, and foedus::xct::RwLockableXctId::xct_id_.

Referenced by foedus::storage::masstree::ReserveRecords::run().

1379  {
1380  ASSERT_ND(index < kBorderPageMaxSlots);
1381  ASSERT_ND(header().snapshot_ || is_locked());
1382  ASSERT_ND(get_key_count() == index);
1383  ASSERT_ND(can_accomodate(index, sizeof(KeySlice), sizeof(DualPagePointer)));
1384  ASSERT_ND(next_offset_ % 8 == 0);
1385  ASSERT_ND(initial_owner_id.is_next_layer());
1386  ASSERT_ND(!initial_owner_id.is_deleted());
1387  const KeyLength remainder = kInitiallyNextLayer;
1388  const DataOffset record_size = to_record_length(remainder, sizeof(DualPagePointer));
1389  ASSERT_ND(record_size % 8 == 0);
1390  const DataOffset new_offset = next_offset_;
1391  set_slice(index, slice);
1392  // This is a new slot, so no worry on race.
1393  Slot* slot = get_new_slot(index);
1394  slot->lengthes_.components.offset_ = new_offset;
1395  slot->lengthes_.components.unused_ = 0;
1396  slot->lengthes_.components.physical_record_length_ = record_size;
1397  slot->lengthes_.components.payload_length_ = sizeof(DualPagePointer);
1398  slot->original_physical_record_length_ = record_size;
1399  slot->remainder_length_ = remainder;
1400  slot->original_offset_ = new_offset;
1401  next_offset_ += record_size;
1402  if (index == 0) {
1403  consecutive_inserts_ = true;
1404  } else if (consecutive_inserts_) {
1405  // This record must be the only full-length slice, so the check is simpler
1406  const KeySlice prev_slice = get_slice(index - 1);
1407  if (prev_slice > slice) {
1408  consecutive_inserts_ = false;
1409  }
1410  }
1411  slot->tid_.lock_.reset();
1412  slot->tid_.xct_id_ = initial_owner_id;
1413  *get_next_layer_from_offsets(new_offset, remainder) = pointer;
1414 }

void foedus::storage::masstree::MasstreeBorderPage::reserve_record_space ( SlotIndex  index,
xct::XctId  initial_owner_id,
KeySlice  slice,
const void *  suffix,
KeyLength  remainder_length,
PayloadLength  payload_count 
)
inline

Installs a new physical record that doesn't exist logically (delete bit on).

This sets 1) slot, 2) suffix key, and 3) XctId. Payload is not set yet. This is executed as a system transaction.

Definition at line 1322 of file masstree_page_impl.hpp.

References foedus::assorted::align8(), ASSERT_ND, foedus::storage::masstree::calculate_suffix_length(), can_accomodate(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthUnion::components, foedus::storage::masstree::MasstreePage::get_key_count(), get_new_slot(), get_record_from_offset(), get_remainder_length(), get_slice(), foedus::storage::masstree::MasstreePage::header(), foedus::xct::XctId::is_deleted(), foedus::storage::masstree::MasstreePage::is_locked(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::storage::masstree::kInitiallyNextLayer, foedus::storage::masstree::kMaxKeyLength, foedus::storage::masstree::MasstreeBorderPage::Slot::lengthes_, foedus::xct::RwLockableXctId::lock_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::offset_, foedus::storage::masstree::MasstreeBorderPage::Slot::original_offset_, foedus::storage::masstree::MasstreeBorderPage::Slot::original_physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::payload_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::Slot::remainder_length_, foedus::xct::McsRwLock::reset(), set_slice(), foedus::storage::masstree::MasstreeBorderPage::Slot::tid_, to_record_length(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::unused_, and foedus::xct::RwLockableXctId::xct_id_.

Referenced by foedus::storage::masstree::ReserveRecords::run().

1328  {
1329  ASSERT_ND(header().snapshot_ || initial_owner_id.is_deleted());
1330  ASSERT_ND(index < kBorderPageMaxSlots);
1331  ASSERT_ND(remainder_length <= kMaxKeyLength || remainder_length == kInitiallyNextLayer);
1332  ASSERT_ND(remainder_length != kInitiallyNextLayer || payload_count == sizeof(DualPagePointer));
1333  ASSERT_ND(header().snapshot_ || is_locked());
1334  ASSERT_ND(get_key_count() == index);
1335  ASSERT_ND(can_accomodate(index, remainder_length, payload_count));
1336  ASSERT_ND(next_offset_ % 8 == 0);
1337  const KeyLength suffix_length = calculate_suffix_length(remainder_length);
1338  const DataOffset record_size = to_record_length(remainder_length, payload_count);
1339  ASSERT_ND(record_size % 8 == 0);
1340  const DataOffset new_offset = next_offset_;
1341  set_slice(index, slice);
1342  // This is a new slot, so no worry on race.
1343  Slot* slot = get_new_slot(index);
1344  slot->lengthes_.components.offset_ = new_offset;
1345  slot->lengthes_.components.unused_ = 0;
1346  slot->lengthes_.components.physical_record_length_ = record_size;
1347  slot->lengthes_.components.payload_length_ = payload_count;
1348  slot->original_physical_record_length_ = record_size;
1349  slot->remainder_length_ = remainder_length;
1350  slot->original_offset_ = new_offset;
1351  next_offset_ += record_size;
1352  if (index == 0) {
1353  consecutive_inserts_ = true;
1354  } else if (consecutive_inserts_) {
1355  // do we have to turn off consecutive_inserts_ now?
1356  const KeySlice prev_slice = get_slice(index - 1);
1357  const KeyLength prev_klen = get_remainder_length(index - 1);
1358  if (prev_slice > slice || (prev_slice == slice && prev_klen > remainder_length)) {
1359  consecutive_inserts_ = false;
1360  }
1361  }
1362  slot->tid_.lock_.reset();
1363  slot->tid_.xct_id_ = initial_owner_id;
1364  if (suffix_length > 0) {
1365  char* record = get_record_from_offset(new_offset);
1366  std::memcpy(record, suffix, suffix_length);
1367  KeyLength suffix_length_aligned = assorted::align8(suffix_length);
1368  // zero-padding
1369  if (suffix_length_aligned > suffix_length) {
1370  std::memset(record + suffix_length, 0, suffix_length_aligned - suffix_length);
1371  }
1372  }
1373 }
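
A hedged sketch of the install flow this method enables (in FOEDUS a system transaction such as ReserveRecords drives it; XctId::set_deleted() is assumed here, and bumping the page's key count afterwards is elided):

  #include <masstree_page_impl.hpp>

  namespace fsm = foedus::storage::masstree;

  // Physically installs a record that is logically absent (delete-bit on).
  // Pre-conditions per the asserts above: page locked, index == get_key_count(),
  // and can_accomodate(index, remainder, payload_count).
  void install_sketch(
      fsm::MasstreeBorderPage* page,
      fsm::SlotIndex index,
      foedus::xct::XctId initial_id,  // epoch/ordinal prepared by the caller
      fsm::KeySlice slice,
      const void* suffix,
      fsm::KeyLength remainder,
      fsm::PayloadLength payload_count) {
    initial_id.set_deleted();  // assumed helper: the record starts logically deleted
    page->reserve_record_space(index, initial_id, slice, suffix, remainder, payload_count);
    // The caller then increments the key count and logs the logical insert;
    // apply() later flips the delete bit. If apply() fails, the record stays
    // deleted until snapshotting or split/compact reclaims it.
  }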

void foedus::storage::masstree::MasstreeBorderPage::set_slice ( SlotIndex  index,
KeySlice  slice 
)
inline

Definition at line 805 of file masstree_page_impl.hpp.

References ASSERT_ND, and foedus::storage::masstree::kBorderPageMaxSlots.

Referenced by append_next_layer_snapshot(), initialize_layer_root(), foedus::storage::masstree::SplitBorder::migrate_records(), reserve_initially_next_layer(), and reserve_record_space().

805  {
806  ASSERT_ND(index < kBorderPageMaxSlots);
807  slices_[index] = slice;
808  }

static DataOffset foedus::storage::masstree::MasstreeBorderPage::to_record_length ( KeyLength  remainder_length,
PayloadLength  payload_length 
)
inlinestatic

returns minimal physical_record_length_ for the given remainder/payload length.

Attention
This does not count the space for the Slot or the slices.
See also
required_data_space()

Definition at line 674 of file masstree_page_impl.hpp.

References foedus::assorted::align8(), and foedus::storage::masstree::calculate_suffix_length_aligned().

Referenced by append_next_layer_snapshot(), initialize_layer_root(), foedus::storage::masstree::SplitBorder::migrate_records(), replace_next_layer_snapshot(), required_data_space(), reserve_initially_next_layer(), reserve_record_space(), and try_expand_record_in_page_physical().

676  {
677  KeyLength suffix_length_aligned = calculate_suffix_length_aligned(remainder_length);
678  return suffix_length_aligned + assorted::align8(payload_length);
679  }
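
A worked example combining to_record_length() with required_data_space() above, for an 11-byte remainder and a 20-byte payload (sizeof(Slot) == 32 per the page-layout note):

  //   suffix_length         = 11 - 8            = 3 bytes beyond the 8-byte slice
  //   suffix_length_aligned = align8(3)         = 8
  //   to_record_length      = 8 + align8(20)    = 8 + 24 = 32 bytes in data_
  //   required_data_space   = 32 + sizeof(Slot) = 32 + 32 = 64 bytes total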

SlotIndex foedus::storage::masstree::MasstreeBorderPage::to_slot_index ( const Slot *  slot) const
inline

Definition at line 649 of file masstree_page_impl.hpp.

References ASSERT_ND.

Referenced by track_moved_record(), and track_moved_record_next_layer().

649  {
650  ASSERT_ND(slot);
651  int64_t index = reinterpret_cast<const Slot*>(this + 1) - slot - 1;
652  ASSERT_ND(index >= 0);
653  ASSERT_ND(index < static_cast<int64_t>(sizeof(data_) / sizeof(Slot)));
654  return static_cast<SlotIndex>(index);
655  }

xct::TrackMovedRecordResult foedus::storage::masstree::MasstreeBorderPage::track_moved_record ( Engine *  engine,
xct::RwLockableXctId *  old_address,
xct::WriteXctAccess *  write_set 
)
See also
StorageManager::track_moved_record()

Definition at line 534 of file masstree_page_impl.cpp.

References ASSERT_ND, does_point_to_layer(), find_key(), foedus::memory::EngineMemory::get_global_volatile_page_resolver(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::Engine::get_memory_manager(), get_record(), get_slice(), get_slot(), foedus::storage::masstree::MasstreePage::has_foster_child(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::MasstreePage::is_foster_major_null(), foedus::storage::masstree::MasstreePage::is_foster_minor_null(), foedus::storage::masstree::MasstreePage::is_moved(), foedus::xct::RwLockableXctId::is_moved(), foedus::xct::RwLockableXctId::is_next_layer(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::storage::masstree::kInitiallyNextLayer, foedus::storage::kMasstreeBorderPageType, foedus::storage::masstree::MasstreeBorderPage::Slot::remainder_length_, foedus::storage::masstree::MasstreeBorderPage::Slot::tid_, to_slot_index(), and track_moved_record_next_layer().

Referenced by foedus::storage::masstree::MasstreeStoragePimpl::track_moved_record().

537  {
538  ASSERT_ND(owner_address->is_moved() || owner_address->is_next_layer());
539  ASSERT_ND(!header().snapshot_);
540  ASSERT_ND(header().get_page_type() == kMasstreeBorderPageType);
541 
542  // Slot begins with TID, so it's also the slot address
543  Slot* slot = reinterpret_cast<Slot*>(owner_address);
544  ASSERT_ND(&slot->tid_ == owner_address);
545  const SlotIndex original_index = to_slot_index(slot);
546  ASSERT_ND(original_index < kBorderPageMaxSlots);
547  ASSERT_ND(original_index < get_key_count());
548 
549  const KeyLength remainder = slot->remainder_length_;
550  // If it's originally a next-layer record, why we took it as a record? This shouldn't happen.
551  ASSERT_ND(remainder != kInitiallyNextLayer);
552 
553  if (does_point_to_layer(original_index)) {
554  return track_moved_record_next_layer(engine, owner_address);
555  }
556 
557  // otherwise, we can track without key information within this layer.
558  // the slice and key length is enough to identify the record.
559  ASSERT_ND(is_moved());
560  ASSERT_ND(has_foster_child());
561  ASSERT_ND(!is_foster_minor_null());
562  ASSERT_ND(!is_foster_major_null());
563  const char* suffix = get_record(original_index);
564  KeySlice slice = get_slice(original_index);
565 
566  // if remainder <= 8 : this layer can have only one record that has this slice and this length.
567  // if remainder > 8 : this layer has either the exact record, or it's now a next layer pointer.
568 
569  // recursively track. although probably it's only one level
570  MasstreeBorderPage* cur_page = this;
571  const memory::GlobalVolatilePageResolver& resolver
572  = engine->get_memory_manager()->get_global_volatile_page_resolver();
573  while (true) {
574  cur_page = cur_page->track_foster_child(slice, resolver);
575 
576  // now cur_page must be the page that contains the record.
577  // the only exception is
578  // 1) again the record is being moved concurrently
579  // 2) the record was moved to another layer
580  SlotIndex index = cur_page->find_key(slice, suffix, remainder);
581  if (index == kBorderPageMaxSlots) {
582  // this can happen rarely because we are not doing the stable version trick here.
583  // this is rare, so we just abort. no safety violation.
584  VLOG(0) << "Very interesting. moved record not found due to concurrent updates";
585  return xct::TrackMovedRecordResult();
586  }
587 
588  Slot* cur_slot = cur_page->get_slot(index);
589  xct::RwLockableXctId* new_owner_address = &cur_slot->tid_;
590  char* new_record_address = cur_page->get_record(index);
591  if (cur_page->does_point_to_layer(index)) {
592  // another rare case. the record has been now moved to another layer.
593  if (remainder <= sizeof(KeySlice)) {
594  // the record we are looking for can't be stored in next layer..
595  VLOG(0) << "Wtf 2. moved record in next layer not found. Probably due to concurrent thread";
596  return xct::TrackMovedRecordResult();
597  }
598 
599  VLOG(0) << "Interesting. moved record are now in another layer. further track.";
600  return cur_page->track_moved_record_next_layer(engine, new_owner_address);
601  }
602 
603  // Otherwise, this is it!
604  // be careful, we give get_record() as "payload". the word "payload" is a bit overused here.
605  return xct::TrackMovedRecordResult(new_owner_address, new_record_address);
606  }
607 }

xct::TrackMovedRecordResult foedus::storage::masstree::MasstreeBorderPage::track_moved_record_next_layer ( Engine *  engine,
xct::RwLockableXctId *  old_address 
)

This one further tracks the moved record into the next layer.

Instead it requires a non-null write_set.

Definition at line 609 of file masstree_page_impl.cpp.

References ASSERT_ND, does_point_to_layer(), find_key(), foedus::storage::masstree::MasstreeIntermediatePage::find_minipage(), foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::find_pointer(), foedus::memory::EngineMemory::get_global_volatile_page_resolver(), foedus::storage::masstree::MasstreePage::get_key_count(), foedus::storage::masstree::MasstreePage::get_layer(), foedus::storage::masstree::MasstreePage::get_low_fence(), foedus::Engine::get_memory_manager(), foedus::storage::masstree::MasstreeIntermediatePage::get_minipage(), get_next_layer(), get_owner_id(), get_record(), foedus::storage::masstree::MasstreePage::header(), foedus::storage::masstree::MasstreePage::is_border(), foedus::storage::masstree::MasstreePage::is_high_fence_supremum(), foedus::xct::XctId::is_next_layer(), foedus::storage::VolatilePagePointer::is_null(), foedus::storage::masstree::kBorderPageMaxSlots, foedus::storage::masstree::kInfimumSlice, foedus::storage::masstree::kInitiallyNextLayer, foedus::storage::kMasstreeBorderPageType, foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::pointers_, foedus::memory::GlobalVolatilePageResolver::resolve_offset(), foedus::storage::masstree::slice_key(), to_slot_index(), track_moved_record_next_layer(), UNLIKELY, foedus::storage::DualPagePointer::volatile_pointer_, foedus::storage::masstree::MasstreePage::within_fences(), and foedus::xct::RwLockableXctId::xct_id_.

Referenced by track_moved_record(), and track_moved_record_next_layer().
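
Before the listing, a small sketch of the slice/remainder arithmetic the routine performs when descending one layer. The 20-byte key and the helper are hypothetical; slice_key_sketch merely mimics what slice_key() is documented to do (extract up to 8 big-endian bytes as a KeySlice):

#include <cstdint>

using KeySlice = uint64_t;
using KeyLength = uint16_t;

// Mimics the documented behavior of slice_key(): read up to 8 big-endian
// bytes as a KeySlice, zero-padding when fewer than 8 bytes remain.
KeySlice slice_key_sketch(const char* be_bytes, KeyLength len) {
  KeySlice slice = 0;
  for (unsigned i = 0; i < sizeof(KeySlice); ++i) {
    slice <<= 8;
    if (i < len) { slice |= static_cast<uint8_t>(be_bytes[i]); }
  }
  return slice;
}

int main() {
  // Hypothetical 20-byte key; its first 8 bytes form the layer-0 slice.
  const char key[20] = "abcdefghijklmnopqrs";   // 19 chars + terminator
  const KeyLength remainder = 20;               // in-layer key length at layer 0
  const char* suffix = key + sizeof(KeySlice);  // the 12 bytes stored in the record

  // The same arithmetic as in track_moved_record_next_layer():
  const KeySlice next_slice =
      slice_key_sketch(suffix, static_cast<KeyLength>(remainder - sizeof(KeySlice)));
  const KeyLength next_remainder =
      static_cast<KeyLength>(remainder - sizeof(KeySlice));  // 12
  const char* next_suffix = suffix + sizeof(KeySlice);       // the last 4 bytes
  (void)next_slice; (void)next_remainder; (void)next_suffix;
  return 0;
}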

609 xct::TrackMovedRecordResult MasstreeBorderPage::track_moved_record_next_layer(
610  Engine* engine,
611  xct::RwLockableXctId* owner_address) {
612  ASSERT_ND(!header().snapshot_);
613  ASSERT_ND(header().get_page_type() == kMasstreeBorderPageType);
614  ASSERT_ND(owner_address->xct_id_.is_next_layer());
615 
616  Slot* slot = reinterpret_cast<Slot*>(owner_address);
617  ASSERT_ND(&slot->tid_ == owner_address);
618  const SlotIndex original_index = to_slot_index(slot);
619  ASSERT_ND(original_index < kBorderPageMaxSlots);
620  ASSERT_ND(original_index < get_key_count());
621  ASSERT_ND(does_point_to_layer(original_index));
622 
623  // The new masstree page layout leaves the original suffix untouched.
624  // Thus, we can always retrieve the original key from it, regardless of whether
625  // the record was moved to a different offset for expansion. In case the record was expanded in-page,
626  // we use original_xxx below.
627  const KeyLength remainder = slot->remainder_length_;
628  ASSERT_ND(remainder != kInitiallyNextLayer);
629  if (UNLIKELY(remainder <= sizeof(KeySlice))) {
630  LOG(ERROR) << "WTF. The original record was too short to get moved to next-layer, but it did?";
631  return xct::TrackMovedRecordResult();
632  }
633 
634  const char* original_suffix = get_record(original_index);
635  const uint8_t cur_layer = get_layer();
636  const uint8_t next_layer = cur_layer + 1U;
637  const KeySlice next_slice = slice_key(original_suffix, remainder - sizeof(KeySlice));
638  const KeyLength next_remainder = remainder - sizeof(KeySlice);
639  const char* next_suffix = original_suffix + sizeof(KeySlice);
640 
641  VolatilePagePointer root_pointer = get_next_layer(original_index)->volatile_pointer_;
642  if (root_pointer.is_null()) {
643  VLOG(0) << "Wtf. moved record in next layer not found. Probably due to concurrent thread";
644  return xct::TrackMovedRecordResult();
645  }
646 
647  const memory::GlobalVolatilePageResolver& resolver
648  = engine->get_memory_manager()->get_global_volatile_page_resolver();
649  MasstreePage* cur_page
650  = reinterpret_cast<MasstreePage*>(resolver.resolve_offset(root_pointer));
651  ASSERT_ND(cur_page->get_low_fence() == kInfimumSlice && cur_page->is_high_fence_supremum());
652  ASSERT_ND(cur_page->get_layer() == next_layer);
653 
654  while (true) {
655  ASSERT_ND(cur_page->get_layer() == next_layer);
656  ASSERT_ND(cur_page->within_fences(next_slice));
657 
658  if (!cur_page->is_border()) {
659  MasstreeIntermediatePage* casted = reinterpret_cast<MasstreeIntermediatePage*>(cur_page);
660  uint8_t index = casted->find_minipage(next_slice);
661  MasstreeIntermediatePage::MiniPage& minipage = casted->get_minipage(index);
662  uint8_t index_mini = minipage.find_pointer(next_slice);
663  VolatilePagePointer pointer = minipage.pointers_[index_mini].volatile_pointer_;
664  if (pointer.is_null()) {
665  VLOG(0) << "Wtf 1. moved record in next layer not found. Probably due to concurrent thread";
666  return xct::TrackMovedRecordResult();
667  }
668  cur_page = reinterpret_cast<MasstreePage*>(resolver.resolve_offset(pointer));
669  ASSERT_ND(cur_page->get_layer() == next_layer);
670  ASSERT_ND(cur_page->within_fences(next_slice));
671  continue;
672  }
673 
674  // now cur_page must be the page that contains the record.
676  // the only exceptions are
676  // 1) again the record is being moved concurrently
677  // 2) the record was moved to another layer
678  MasstreeBorderPage* casted = reinterpret_cast<MasstreeBorderPage*>(cur_page);
679  // we track foster child in border pages only
680  casted = casted->track_foster_child(next_slice, resolver);
681  ASSERT_ND(casted != this);
682  SlotIndex index = casted->find_key(next_slice, next_suffix, next_remainder);
683  if (index == kBorderPageMaxSlots) {
684  VLOG(0) << "Very interesting. moved record not found due to concurrent updates";
685  return xct::TrackMovedRecordResult();
686  }
687 
688  xct::RwLockableXctId* new_owner_address = casted->get_owner_id(index);
689  ASSERT_ND(new_owner_address != owner_address);
690  char* new_record_address = casted->get_record(index);
691  if (casted->does_point_to_layer(index)) {
692  // the record has been now moved to yet another layer.
693  if (next_remainder <= sizeof(KeySlice)) {
694  // the record we are looking for can't be stored in next layer..
695  VLOG(0) << "Wtf 2. moved record in next layer not found. Probably due to concurrent thread";
696  return xct::TrackMovedRecordResult();
697  }
698 
699  VLOG(0) << "Interesting. moved record is now in another layer. further track.";
700  return casted->track_moved_record_next_layer(engine, new_owner_address);
701  }
702 
703  // be careful, we give get_record() as "payload". the word "payload" is a bit overused here.
704  return xct::TrackMovedRecordResult(new_owner_address, new_record_address);
705  }
706 }

bool foedus::storage::masstree::MasstreeBorderPage::try_expand_record_in_page_physical ( PayloadLength  payload_count,
SlotIndex  record_index 
)

A physical-only method to expand a record within this page without any logical change.

Precondition
!header_.snapshot_: only for volatile page
is_locked() && !is_moved()
get_slot(record_index)->is_locked() && !get_slot(record_index)->is_moved()
Returns
Whether the expansion succeeded. The method fails and returns false when the page lacks enough free space; the caller must then split the page.

Definition at line 360 of file masstree_page_impl.cpp.

References ASSERT_ND, available_space(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthUnion::components, foedus::storage::masstree::MasstreeBorderPage::Slot::does_point_to_layer(), foedus::storage::masstree::MasstreePage::get_key_count(), get_max_payload_length(), get_next_offset(), get_record_from_offset(), get_slot(), foedus::storage::masstree::MasstreePage::header_, increase_next_offset(), foedus::xct::RwLockableXctId::is_keylocked(), foedus::storage::masstree::MasstreePage::is_locked(), foedus::storage::masstree::MasstreePage::is_moved(), foedus::xct::RwLockableXctId::is_moved(), foedus::storage::masstree::MasstreeBorderPage::Slot::lengthes_, foedus::assorted::memory_fence_release(), foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::offset_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::Slot::read_lengthes_oneshot(), foedus::storage::masstree::MasstreeBorderPage::Slot::remainder_length_, foedus::storage::PageHeader::snapshot_, foedus::storage::masstree::MasstreeBorderPage::Slot::tid_, to_record_length(), verify_slot_lengthes(), and foedus::storage::masstree::MasstreeBorderPage::Slot::write_lengthes_oneshot().

Referenced by foedus::storage::masstree::ReserveRecords::run().
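
A hypothetical call site, loosely modeled on the caller above; the function name, variable names, and include path are assumptions, not library code:

#include "foedus/storage/masstree/masstree_page_impl.hpp"  // path is an assumption

namespace fsm = foedus::storage::masstree;

// Sketch: grow a record's payload capacity in place when possible. The
// documented preconditions must already hold: the page and the record's TID
// are locked, and neither is moved.
bool expand_or_give_up(fsm::MasstreeBorderPage* page,
                       fsm::SlotIndex record_index,
                       fsm::PayloadLength new_payload_count) {
  if (page->try_expand_record_in_page_physical(new_payload_count, record_index)) {
    return true;  // expanded physically; no logical change, TID untouched
  }
  // Not enough free space: a page split (beyond this method) is required.
  return false;
}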

360 bool MasstreeBorderPage::try_expand_record_in_page_physical(
361  PayloadLength payload_count,
362  SlotIndex record_index) {
363  ASSERT_ND(!header_.snapshot_);
364  ASSERT_ND(is_locked());
365  ASSERT_ND(!is_moved());
366  ASSERT_ND(record_index < get_key_count());
367  DVLOG(2) << "Expanding record.. current max=" << get_max_payload_length(record_index)
368  << ", which must become " << payload_count;
369 
370  ASSERT_ND(verify_slot_lengthes(record_index));
371  Slot* old_slot = get_slot(record_index);
372  ASSERT_ND(!old_slot->tid_.is_moved());
373  ASSERT_ND(old_slot->tid_.is_keylocked());
374  ASSERT_ND(!old_slot->does_point_to_layer());
375  const SlotLengthPart lengthes = old_slot->read_lengthes_oneshot();
376  const KeyLength remainder_length = old_slot->remainder_length_;
377  const DataOffset record_length = to_record_length(remainder_length, payload_count);
378  const DataOffset available = available_space();
379 
380  // 1. Trivial expansion if the record is placed at last. Fastest.
381  if (get_next_offset() == lengthes.offset_ + lengthes.physical_record_length_) {
382  const DataOffset diff = record_length - lengthes.physical_record_length_;
383  DVLOG(1) << "Lucky, expanding a record at last record region. diff=" << diff;
384  if (available >= diff) {
385  DVLOG(2) << "woo. yes, we can just increase the length";
386  old_slot->lengthes_.components.physical_record_length_ = record_length;
387  assorted::memory_fence_release();
388  increase_next_offset(diff);
389  assorted::memory_fence_release();
390  return true;
391  }
392  }
393 
394  // 2. In-page expansion. Fast.
395  if (available >= record_length) {
396  DVLOG(2) << "Okay, in-page record expansion.";
397  // We have to make sure all threads see a valid state, either new or old.
398  SlotLengthPart new_lengthes = lengthes;
399  new_lengthes.offset_ = get_next_offset();
400  new_lengthes.physical_record_length_ = record_length;
401  const char* old_record = get_record_from_offset(lengthes.offset_);
402  char* new_record = get_record_from_offset(new_lengthes.offset_);
403 
404  // 2-a. Create the new record region.
405  if (lengthes.physical_record_length_ > 0) {
406  std::memcpy(new_record, old_record, lengthes.physical_record_length_);
407  assorted::memory_fence_release();
408  }
409 
410  // 2-b. announce the new location in one-shot.
411  old_slot->write_lengthes_oneshot(new_lengthes);
412  assorted::memory_fence_release();
413  // We don't have to change TID here because we did nothing logically.
414  // Reading transactions are safe to read either old or new record regions.
415  // See comments in MasstreeCommonLogType::apply_record_prepare() for how we make it safe
416  // for writing transactions.
417 
418  increase_next_offset(record_length);
419  assorted::memory_fence_release();
420  return true;
421  }
422 
423  // 3. ouch. we need to split the page to complete it. beyond this method.
424  DVLOG(1) << "Umm, we need to split this page for record expansion. available="
425  << available << ", record_length=" << record_length
426  << ", record_index=" << record_index
427  << ", key_count=" << get_key_count();
428  return false;
429 }

bool foedus::storage::masstree::MasstreeBorderPage::verify_slot_lengthes ( SlotIndex  index) const
Returns
Whether the slot's length information is consistent. Used only in assertions.

Definition at line 708 of file masstree_page_impl.cpp.

References ASSERT_ND, foedus::storage::masstree::MasstreeBorderPage::SlotLengthUnion::components, foedus::storage::masstree::MasstreeBorderPage::Slot::does_point_to_layer(), foedus::storage::masstree::MasstreeBorderPage::Slot::get_max_payload_peek(), get_slot(), foedus::storage::masstree::kInitiallyNextLayer, foedus::storage::masstree::MasstreeBorderPage::Slot::lengthes_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::offset_, foedus::storage::masstree::MasstreeBorderPage::Slot::original_offset_, foedus::storage::masstree::MasstreeBorderPage::Slot::original_physical_record_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::payload_length_, foedus::storage::masstree::MasstreeBorderPage::SlotLengthPart::physical_record_length_, and foedus::storage::masstree::MasstreeBorderPage::Slot::remainder_length_.

Referenced by try_expand_record_in_page_physical(), and foedus::storage::masstree::MasstreeStoragePimpl::verify_single_thread_border().
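
Its intended use is inside assertions only, as in the caller above. A one-line sketch (page and record_index are hypothetical surrounding variables):

// Debug-only sanity check; ASSERT_ND compiles away in release builds.
ASSERT_ND(page->verify_slot_lengthes(record_index));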

708 bool MasstreeBorderPage::verify_slot_lengthes(SlotIndex index) const {
709  const Slot* slot = get_slot(index);
710  SlotLengthPart lengthes = slot->lengthes_.components;
711  if (lengthes.offset_ % 8 != 0) {
712  ASSERT_ND(false);
713  return false;
714  }
715  if (lengthes.physical_record_length_ % 8 != 0) {
716  ASSERT_ND(false);
717  return false;
718  }
719  if (lengthes.offset_ + lengthes.physical_record_length_ > sizeof(data_)) {
720  ASSERT_ND(false);
721  return false;
722  }
723 
724  if (slot->remainder_length_ == kInitiallyNextLayer) {
725  if (!slot->does_point_to_layer()) {
726  ASSERT_ND(false);
727  return false;
728  }
729  }
730  // the only case we move the record in-page is to get a larger record size.
731  if (lengthes.offset_ != slot->original_offset_) {
732  if (lengthes.offset_ < slot->original_offset_) {
733  ASSERT_ND(false);
734  return false;
735  }
736  if (lengthes.physical_record_length_ <= slot->original_physical_record_length_) {
737  ASSERT_ND(false);
738  return false;
739  }
740  }
741 
742  if (slot->does_point_to_layer()) {
743  if (slot->remainder_length_ <= sizeof(KeySlice)) {
744  ASSERT_ND(false);
745  return false;
746  }
747  if (lengthes.payload_length_ != sizeof(DualPagePointer)) {
748  ASSERT_ND(false);
749  return false;
750  }
751  } else {
752  if (lengthes.payload_length_ > slot->get_max_payload_peek()) {
753  ASSERT_ND(false);
754  return false;
755  }
756  }
757  return true;
758 }

bool foedus::storage::masstree::MasstreeBorderPage::will_conflict ( SlotIndex  index,
const char *  be_key,
KeyLength  key_length 
) const
inline

Returns whether inserting the key would conflict with the record at the given index: either it is exactly the same key, or both keys extend past this slice and would force creation of a new next layer.

This is mainly used for assertions.

Definition at line 1690 of file masstree_page_impl.hpp.

References ASSERT_ND, foedus::storage::masstree::MasstreePage::get_layer(), foedus::storage::masstree::kSliceLen, and foedus::storage::masstree::slice_layer().

1690 inline bool MasstreeBorderPage::will_conflict(
1691  SlotIndex index,
1692  const char* be_key,
1693  KeyLength key_length) const {
1694  ASSERT_ND(key_length > get_layer() * kSliceLen);
1695  KeyLength remainder = key_length - get_layer() * kSliceLen;
1696  KeySlice slice = slice_layer(be_key, key_length, get_layer());
1697  return will_conflict(index, slice, remainder);
1698 }

bool foedus::storage::masstree::MasstreeBorderPage::will_conflict ( SlotIndex  index,
KeySlice  slice,
KeyLength  remainder 
) const
inline

Overload to receive slice.

Definition at line 1699 of file masstree_page_impl.hpp.

References ASSERT_ND, get_remainder_length(), get_slice(), foedus::storage::masstree::kBorderPageMaxSlots, and foedus::storage::masstree::kSliceLen.
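
A restatement of the decision rule implemented in the listing below, as a standalone sketch (not the library code), to make the three outcomes easy to see:

#include <cstdint>

using KeySlice = uint64_t;
using KeyLength = uint16_t;
constexpr uint64_t kSliceLen = sizeof(KeySlice);

// Given the existing record's (slice, remainder) and the inserted key's
// (slice, remainder), does the insert collide with this record?
bool will_conflict_sketch(KeySlice rec_slice, KeyLength rec_remainder,
                          KeySlice ins_slice, KeyLength ins_remainder) {
  if (ins_slice != rec_slice) {
    return false;  // different slices never conflict
  }
  if (ins_remainder > kSliceLen && rec_remainder > kSliceLen) {
    return true;   // both keys continue past this slice: a next layer is needed
  }
  return ins_remainder == rec_remainder;  // equal means exactly the same key
}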

1699 inline bool MasstreeBorderPage::will_conflict(
1700  SlotIndex index,
1701  KeySlice slice,
1702  KeyLength remainder) const {
1703  ASSERT_ND(index < kBorderPageMaxSlots);
1704  ASSERT_ND(remainder > 0);
1705  if (slice != get_slice(index)) {
1706  return false;
1707  }
1708  KeyLength rec_remainder = get_remainder_length(index);
1709  if (remainder > kSliceLen && rec_remainder > kSliceLen) {
1710  return true; // need to create next layer
1711  }
1712  return (remainder == rec_remainder); // if same, it's exactly the same key
1713 }

bool foedus::storage::masstree::MasstreeBorderPage::will_contain_next_layer ( SlotIndex  index,
const char *  be_key,
KeyLength  key_length 
) const
inline

Returns whether the record is a next-layer pointer that would contain the key.

Definition at line 1714 of file masstree_page_impl.hpp.

References ASSERT_ND, foedus::storage::masstree::MasstreePage::get_layer(), foedus::storage::masstree::kSliceLen, and foedus::storage::masstree::slice_layer().

1714 inline bool MasstreeBorderPage::will_contain_next_layer(
1715  SlotIndex index,
1716  const char* be_key,
1717  KeyLength key_length) const {
1718  ASSERT_ND(key_length > get_layer() * kSliceLen);
1719  KeyLength remainder = key_length - get_layer() * kSliceLen;
1720  KeySlice slice = slice_layer(be_key, key_length, get_layer());
1721  return will_contain_next_layer(index, slice, remainder);
1722 }

bool foedus::storage::masstree::MasstreeBorderPage::will_contain_next_layer ( SlotIndex  index,
KeySlice  slice,
KeyLength  remainder 
) const
inline

Definition at line 1723 of file masstree_page_impl.hpp.

References ASSERT_ND, does_point_to_layer(), get_slice(), foedus::storage::masstree::kBorderPageMaxSlots, and foedus::storage::masstree::kSliceLen.
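
The condition, restated over plain values as a hypothetical helper (not the member function itself): the record must already be a next-layer pointer for the same slice, and the key must extend past this slice so the next layer can actually contain it.

#include <cstdint>

using KeySlice = uint64_t;
using KeyLength = uint16_t;
constexpr uint64_t kSliceLen = sizeof(KeySlice);

bool will_contain_next_layer_sketch(KeySlice rec_slice, bool rec_is_next_layer,
                                    KeySlice key_slice, KeyLength key_remainder) {
  return key_slice == rec_slice && rec_is_next_layer && key_remainder > kSliceLen;
}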

1723 inline bool MasstreeBorderPage::will_contain_next_layer(
1724  SlotIndex index,
1725  KeySlice slice,
1726  KeyLength remainder) const {
1727  ASSERT_ND(index < kBorderPageMaxSlots);
1728  ASSERT_ND(remainder > 0);
1729  return slice == get_slice(index) && does_point_to_layer(index) && remainder > kSliceLen;
1730 }

Friends And Related Function Documentation

std::ostream& operator<< ( std::ostream &  o,
const MasstreeBorderPage &  v 
)
friend

defined in masstree_page_debug.cpp.

Definition at line 80 of file masstree_page_debug.cpp.

80 std::ostream& operator<<(std::ostream& o, const MasstreeBorderPage& v) {
81  o << "<MasstreeBorderPage>";
82  describe_masstree_page_common(&o, v);
83  o << "<consecutive_inserts_>" << v.consecutive_inserts_ << "</consecutive_inserts_>";
84  o << std::endl << "<records>";
85  for (uint16_t i = 0; i < v.get_key_count(); ++i) {
86  o << std::endl << " <record index=\"" << i
87  << "\" slice=\"" << assorted::Hex(v.get_slice(i), 16)
88  << "\" remainder_len=\"" << static_cast<int>(v.get_remainder_length(i))
89  << "\" offset=\"" << v.get_offset_in_bytes(i)
90  << "\" physical_record_len=\"" << v.get_slot(i)->lengthes_.components.physical_record_length_
91  << "\" payload_len=\"" << v.get_payload_length(i)
92  << "\">";
93  if (v.does_point_to_layer(i)) {
94  o << "<next_layer>" << *v.get_next_layer(i) << "</next_layer>";
95  } else {
96  if (v.get_remainder_length(i) > sizeof(KeySlice)) {
97  std::string suffix(v.get_record(i), v.get_suffix_length(i));
98  o << "<key_suffix>" << assorted::HexString(suffix) << "</key_suffix>";
99  }
100  if (v.get_payload_length(i) > 0) {
101  std::string payload(v.get_record_payload(i), v.get_payload_length(i));
102  o << "<payload>" << assorted::HexString(payload) << "</payload>";
103  }
104  }
105  o << * v.get_owner_id(i);
106  o << "</record>";
107  }
108  o << std::endl << "</records>";
109  o << "</MasstreeBorderPage>";
110  return o;
111 }
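
A hypothetical debug dump using this operator; recall the class-level caveat that a page is always reinterpreted from pooled memory, never constructed:

#include <iostream>
#include "foedus/storage/masstree/masstree_page_impl.hpp"  // path is an assumption

// Writes one border page's records in the XML-like format shown above.
void dump_border_page(const foedus::storage::masstree::MasstreeBorderPage& page) {
  std::cout << page << std::endl;
}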
friend struct SplitBorder
friend

Definition at line 446 of file masstree_page_impl.hpp.


The documentation for this class was generated from the following files: