libfoedus-core
FOEDUS Core Library
foedus::storage::hash::ReserveRecords Struct Reference [final]

A system transaction to reserve one or more physical records in a hash data page. More...

Detailed Description

A system transaction to reserve one or more physical records in a hash data page.

See also
Physical-only, short-lived system transactions.

A record insertion happens in two steps:

  • During Execution: Physically inserts a deleted record for the key in the data page.
  • During Commit: Logically flips the delete-bit and installs the payload.

This system transaction does the former.

This does nothing and returns kErrorCodeOk in the following case:

  • The page turns out to already contain a satisfying physical record for the key.

In all other cases, this sysxct creates or finds the record. In other words, this sysxct guarantees success as long as it returns kErrorCodeOk.
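As a rough caller-side sketch (illustration only: the helper function, the extra includes, and the thread::Thread::run_nested_sysxct() entry point with a max_retries argument are assumptions about the surrounding FOEDUS API, not something this page documents):

#include <hash_reserve_impl.hpp>
// (a real build would also need the full thread::Thread definition; header path assumed)

namespace foedus { namespace storage { namespace hash {

// Hypothetical helper, NOT part of FOEDUS: reserve a (deleted) physical record
// for 'key' starting at 'bin_head', so that the logical insert can flip the
// delete-bit at commit. run_nested_sysxct() is the assumed per-thread entry
// point for running a sysxct; max_retries = 2 is an arbitrary choice.
ErrorCode reserve_for_insert(
    thread::Thread* context,
    HashDataPage* bin_head,
    const void* key,
    KeyLength key_length,
    const HashCombo& combo,        // held by reference: must outlive the sysxct
    PayloadLength payload_count,
    DataPageSlotIndex hint_check_from) {
  ReserveRecords reserve(
    context,
    bin_head,
    key,
    key_length,
    combo,
    payload_count,
    payload_count,                 // aggressive hint; precondition: >= payload_count
    hint_check_from);
  ErrorCode code = context->run_nested_sysxct(&reserve, 2);
  if (code != kErrorCodeOk) {
    return code;                   // e.g. kErrorCodeXctRaceAbort: the caller simply retries
  }
  // reserve.out_page_ / reserve.out_slot_ now identify a physical record for
  // the key; it stays logically deleted until the transaction commits.
  return kErrorCodeOk;
}

}}}  // namespace foedus::storage::hash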

Locks taken in this sysxct (in order of taking):

  • Page-lock of the target page, or of the tail page when target_ is not the tail (taken in the order of the chain, so no deadlock).
  • Record-lock of an existing, matching record, taken only when we have to expand the record. This happens after the page-lock of the enclosing page and before the page-locks of the following pages, so every lock is taken in order.

Note, however, that this might not be address order (address order might be tail -> head), so we might get a false deadlock-abort. However, as long as the first lock on the target_ page is unconditional, all locks after that should be non-racy, so every try-lock should succeed. We might want to specify a slightly larger max_retries (5?) for this reason.
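The retry logic itself lives in the generic sysxct runner rather than in this class. Purely to illustrate the pattern described above (unconditional first lock, try-locks afterward, whole-operation retry when a try-lock fails), here is a generic sketch with std::mutex; none of it is FOEDUS code:

#include <cstddef>
#include <mutex>
#include <vector>

// Locks are given in chain order, which may differ from address order.
// One failed try-lock is treated like a (possibly false) deadlock: release
// everything and retry the whole operation, up to max_retries times.
bool run_with_ordered_trylocks(const std::vector<std::mutex*>& chain_order_locks,
                               int max_retries) {
  if (chain_order_locks.empty()) {
    return true;
  }
  for (int attempt = 0; attempt <= max_retries; ++attempt) {
    std::unique_lock<std::mutex> first(*chain_order_locks[0]);  // unconditional
    std::vector<std::unique_lock<std::mutex>> rest;
    bool all_acquired = true;
    for (std::size_t i = 1; i < chain_order_locks.size(); ++i) {
      std::unique_lock<std::mutex> next(*chain_order_locks[i], std::try_to_lock);
      if (!next.owns_lock()) {
        all_acquired = false;  // possibly a false conflict: back off and retry
        break;
      }
      rest.push_back(std::move(next));
    }
    if (all_acquired) {
      // ... do the work while holding every lock ...
      return true;  // all locks released by the destructors
    }
    // every lock is released here; retry from scratch
  }
  return false;  // the caller treats this like a race abort
}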

Differences from Masstree's reserve
This sysxct is simpler than masstree::ReserveRecords for a few reasons.
  • No need to split the page
  • No need to adopt a split page
We thus contain all the required logic (reserve/expand/migrate) in this sysxct. In all circumstances, this sysxct finds or reserves the required record, which is simpler for the caller.

So far this sysxct installs only one physical record at a time. TASK(Hideaki): It would probably help to batch several records.

Definition at line 73 of file hash_reserve_impl.hpp.

#include <hash_reserve_impl.hpp>


Public Member Functions

 ReserveRecords (thread::Thread *context, HashDataPage *target, const void *key, KeyLength key_length, const HashCombo &combo, PayloadLength payload_count, PayloadLength aggressive_payload_count_hint, DataPageSlotIndex hint_check_from)
 
virtual ErrorCode run (xct::SysxctWorkspace *sysxct_workspace) override
 Execute the system transaction. More...
 
ErrorCode find_or_create_or_expand (xct::SysxctWorkspace *sysxct_workspace, HashDataPage *page, DataPageSlotIndex examined_records)
 The main loop (well, recursion actually). More...
 
ErrorCode expand_record (xct::SysxctWorkspace *sysxct_workspace, HashDataPage *page, DataPageSlotIndex index)
 
ErrorCode find_and_lock_spacious_tail (xct::SysxctWorkspace *sysxct_workspace, HashDataPage *from_page, HashDataPage **tail)
 
ErrorCode create_new_record_in_tail_page (HashDataPage *tail)
 Installs the key as a brand-new physical record, assuming the given page is the tail and is already locked. More...
 
ErrorCode create_new_tail_page (HashDataPage *cur_tail, HashDataPage **new_tail)
 
DataPageSlotIndex search_within_page (const HashDataPage *page, DataPageSlotIndex key_count, DataPageSlotIndex examined_records) const
 
DataPageSlotIndex append_record_to_page (HashDataPage *page, xct::XctId initial_xid) const
 Appends a new physical record to the page. More...
 

Public Attributes

thread::Thread *const context_
 Thread context. More...
 
HashDataPage *const target_
 The data page to install a new physical record. More...
 
const void *const key_
 The key of the new record. More...
 
const HashCombo & combo_
 Hash info of the key. More...
 
const KeyLength key_length_
 Byte length of the key. More...
 
const PayloadLength payload_count_
 Minimal required length of the payload. More...
 
const PayloadLength aggressive_payload_count_hint_
 When we expand the record or allocate a new record, we might allocate a larger-than-necessary space guided by this hint. More...
 
const DataPageSlotIndex hint_check_from_
 The in-page location from which this sysxct will look for matching records. More...
 
DataPageSlotIndex out_slot_
 [Out] The slot of the record that is found or created. More...
 
HashDataPage * out_page_
 [Out] The page that contains the found/created record. More...
 

Constructor & Destructor Documentation

foedus::storage::hash::ReserveRecords::ReserveRecords ( thread::Thread *  context,
HashDataPage *  target,
const void *  key,
KeyLength  key_length,
const HashCombo &  combo,
PayloadLength  payload_count,
PayloadLength  aggressive_payload_count_hint,
DataPageSlotIndex  hint_check_from 
)
inline

Definition at line 142 of file hash_reserve_impl.hpp.

  : xct::SysxctFunctor(),
    context_(context),
    target_(target),
    key_(key),
    combo_(combo),
    key_length_(key_length),
    payload_count_(payload_count),
    aggressive_payload_count_hint_(aggressive_payload_count_hint),
    hint_check_from_(hint_check_from),
    out_slot_(kSlotNotFound),
    out_page_(nullptr) {
  }

Member Function Documentation

DataPageSlotIndex foedus::storage::hash::ReserveRecords::append_record_to_page ( HashDataPage *  page,
xct::XctId  initial_xid 
) const

Appends a new physical record to the page.

The caller must make sure there is no race in the page. In other words, the page must be either locked or a not-yet-published page.

Definition at line 261 of file hash_reserve_impl.cpp.

References foedus::storage::hash::DataPageBloomFilter::add(), aggressive_payload_count_hint_, foedus::assorted::align8(), ASSERT_ND, foedus::storage::hash::HashDataPage::available_space(), combo_, foedus::storage::hash::HashCombo::fingerprint_, foedus::storage::hash::HashDataPage::get_new_slot(), foedus::storage::hash::HashDataPage::get_record_count(), foedus::storage::hash::HashCombo::hash_, foedus::storage::PageHeader::increment_key_count(), key_, key_length_, foedus::assorted::memory_fence_release(), foedus::storage::hash::HashDataPage::next_offset(), foedus::storage::hash::HashDataPage::Slot::offset_, payload_count_, foedus::storage::hash::HashDataPage::record_from_offset(), and foedus::storage::hash::HashDataPage::required_space().

Referenced by create_new_record_in_tail_page(), and expand_record().

{
  ASSERT_ND(page->available_space()
    >= HashDataPage::required_space(key_length_, aggressive_payload_count_hint_));
  DataPageSlotIndex index = page->get_record_count();
  auto& slot = page->get_new_slot(index);
  slot.offset_ = page->next_offset();
  slot.hash_ = combo_.hash_;
  slot.key_length_ = key_length_;
  slot.physical_record_length_ = assorted::align8(key_length_) + assorted::align8(payload_count_);
  slot.payload_length_ = 0;
  char* record = page->record_from_offset(slot.offset_);
  std::memcpy(record, key_, key_length_);
  if (key_length_ % 8 != 0) {
    std::memset(record + key_length_, 0, 8 - (key_length_ % 8));
  }
  slot.tid_.reset();
  slot.tid_.xct_id_ = initial_xid;

  // we install the fingerprint to bloom filter BEFORE we increment key count.
  // it's okay for concurrent reads to see false positives, but false negatives are wrong!
  page->bloom_filter_.add(combo_.fingerprint_);

  // we increment key count AFTER installing the key because otherwise the optimistic read
  // might see the record but find that the key doesn't match. we need a fence to prevent it.
  assorted::memory_fence_release();
  page->header_.increment_key_count();
  return index;
}
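To see why the write order and the release fence above matter, here is a toy, non-FOEDUS sketch of the same publication protocol together with the optimistic reader it protects (in FOEDUS the reader side is the physical key search in HashDataPage; the toy types and sizes below are made up):

#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>

// Toy page: a record is fully written first, then the count is published
// after a release fence, mirroring the order used in append_record_to_page().
struct ToyPage {
  std::atomic<uint16_t> key_count{0};
  char records[16][64];
};

// Single writer only (the real page is either locked or not yet published).
void toy_append(ToyPage* page, const char* key, std::size_t len) {
  uint16_t index = page->key_count.load(std::memory_order_relaxed);
  assert(index < 16 && len <= 64);
  std::memcpy(page->records[index], key, len);                   // 1. write the record
  std::atomic_thread_fence(std::memory_order_release);           // 2. release fence
  page->key_count.store(static_cast<uint16_t>(index + 1),        // 3. publish the count
                        std::memory_order_relaxed);
}

// Optimistic reader: load the count, acquire-fence, then read only below it,
// so every record it examines is fully written.
bool toy_contains(const ToyPage* page, const char* key, std::size_t len) {
  uint16_t count = page->key_count.load(std::memory_order_relaxed);
  std::atomic_thread_fence(std::memory_order_acquire);           // pairs with the writer's fence
  for (uint16_t i = 0; i < count; ++i) {
    if (std::memcmp(page->records[i], key, len) == 0) {
      return true;
    }
  }
  return false;
}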


ErrorCode foedus::storage::hash::ReserveRecords::create_new_record_in_tail_page ( HashDataPage *  tail )

Installs the key as a brand-new physical record, assuming the given page is the tail and is already locked.

Definition at line 208 of file hash_reserve_impl.cpp.

References aggressive_payload_count_hint_, append_record_to_page(), ASSERT_ND, foedus::storage::hash::HashDataPage::available_space(), CHECK_ERROR_CODE, create_new_tail_page(), foedus::storage::hash::HashDataPage::get_volatile_page_id(), foedus::storage::hash::HashDataPage::header(), foedus::storage::hash::HashDataPage::is_locked(), foedus::storage::VolatilePagePointer::is_null(), foedus::Epoch::kEpochInitialCurrent, foedus::kErrorCodeOk, key_length_, foedus::assorted::memory_fence_release(), foedus::storage::hash::HashDataPage::next_page(), out_page_, out_slot_, foedus::storage::PageHeader::page_version_, foedus::storage::hash::HashDataPage::required_space(), foedus::xct::XctId::set(), foedus::xct::XctId::set_deleted(), foedus::storage::PageVersion::set_has_next_page(), and foedus::storage::DualPagePointer::volatile_pointer_.

Referenced by find_or_create_or_expand().

{
  ASSERT_ND(tail->is_locked());
  ASSERT_ND(tail->next_page().volatile_pointer_.is_null());

  // do we have enough room in this page?
  const uint16_t available_space = tail->available_space();
  const uint16_t required_space
    = HashDataPage::required_space(key_length_, aggressive_payload_count_hint_);
  xct::XctId initial_xid;
  initial_xid.set(
    Epoch::kEpochInitialCurrent,  // TODO(Hideaki) this should be something else
    0);
  initial_xid.set_deleted();
  if (available_space < required_space) {
    // This page can't hold it. Let's make a new page.
    HashDataPage* new_tail;
    CHECK_ERROR_CODE(create_new_tail_page(tail, &new_tail));

    // Then append a record to it.
    out_slot_ = append_record_to_page(new_tail, initial_xid);
    ASSERT_ND(out_slot_ == 0);  // it's the first record in the new page
    out_page_ = new_tail;
    // Then, after all, we "publish" this new page
    assorted::memory_fence_release();  // so that others don't see uninitialized page
    tail->next_page().volatile_pointer_ = new_tail->get_volatile_page_id();
    assorted::memory_fence_release();  // so that others don't have "where's the next page" issue
    tail->header().page_version_.set_has_next_page();
  } else {
    // the page is enough spacious, and has no next page. we rule!
    out_slot_ = append_record_to_page(tail, initial_xid);
    out_page_ = tail;
  }

  return kErrorCodeOk;
}


ErrorCode foedus::storage::hash::ReserveRecords::create_new_tail_page ( HashDataPage *  cur_tail,
HashDataPage **  new_tail 
)

Definition at line 187 of file hash_reserve_impl.cpp.

References ASSERT_ND, context_, foedus::storage::hash::HashDataPage::get_bin(), foedus::storage::hash::HashDataPage::get_bin_shifts(), foedus::storage::VolatilePagePointer::get_offset(), foedus::thread::Thread::get_thread_memory(), foedus::memory::NumaCoreMemory::grab_free_volatile_page_pointer(), foedus::storage::hash::HashDataPage::header(), foedus::storage::hash::HashDataPage::initialize_volatile_page(), foedus::storage::hash::HashDataPage::is_locked(), foedus::storage::VolatilePagePointer::is_null(), foedus::kErrorCodeMemoryNoFreePages, foedus::kErrorCodeOk, foedus::thread::Thread::resolve_newpage_cast(), foedus::storage::PageHeader::storage_id_, and UNLIKELY.

Referenced by create_new_record_in_tail_page(), and find_and_lock_spacious_tail().

{
  ASSERT_ND(cur_tail->is_locked());
  DVLOG(2) << "Volatile HashDataPage is full. Adding a next page..";
  const VolatilePagePointer new_pointer
    = context_->get_thread_memory()->grab_free_volatile_page_pointer();
  if (UNLIKELY(new_pointer.is_null())) {
    return kErrorCodeMemoryNoFreePages;
  }

  *new_tail = context_->resolve_newpage_cast<HashDataPage>(new_pointer.get_offset());
  (*new_tail)->initialize_volatile_page(
    cur_tail->header().storage_id_,
    new_pointer,
    reinterpret_cast<Page*>(cur_tail),
    cur_tail->get_bin(),
    cur_tail->get_bin_shifts());
  return kErrorCodeOk;
}


ErrorCode foedus::storage::hash::ReserveRecords::expand_record ( xct::SysxctWorkspace *  sysxct_workspace,
HashDataPage *  page,
DataPageSlotIndex  index 
)

Definition at line 101 of file hash_reserve_impl.cpp.

References aggressive_payload_count_hint_, append_record_to_page(), ASSERT_ND, foedus::storage::hash::HashDataPage::available_space(), CHECK_ERROR_CODE, context_, find_and_lock_spacious_tail(), foedus::storage::hash::HashDataPage::get_slot(), foedus::storage::hash::HashDataPage::get_volatile_page_id(), foedus::storage::hash::HashDataPage::is_locked(), foedus::kErrorCodeOk, foedus::kErrorCodeXctRaceAbort, key_length_, foedus::assorted::memory_fence_release(), out_page_, out_slot_, foedus::storage::hash::HashDataPage::required_space(), foedus::thread::Thread::sysxct_record_lock(), and foedus::storage::hash::HashDataPage::Slot::tid_.

Referenced by find_or_create_or_expand().

{
  auto* record = &page->get_slot(index).tid_;
  CHECK_ERROR_CODE(context_->sysxct_record_lock(
    sysxct_workspace,
    page->get_volatile_page_id(),
    record));
  // After locking, now the record status is finalized.
  if (record->is_moved()) {
    // If we now find it moved, we have two choices.
    // 1) resume the search. the new location should be somewhere after here.
    // 2) retry the whole sysxct.
    // We do 2) to save coding (yikes!). Excuse: This case should be rare.
    LOG(INFO) << "Rare. The record turns out to be moved after locking. Retry the sysxct";
    return kErrorCodeXctRaceAbort;
  }

  // Now, because we locked a record of the key, and because it wasn't moved yet,
  // we are sure that this is the only record in this hash bucket that might contain this key.
  // We'll be done as soon as we figure out where to move this record.
  HashDataPage* tail;
  CHECK_ERROR_CODE(find_and_lock_spacious_tail(sysxct_workspace, page, &tail));
  ASSERT_ND(tail->is_locked());
  ASSERT_ND(tail->available_space()
    >= HashDataPage::required_space(key_length_, aggressive_payload_count_hint_));

  // Copy the XID of the existing record
  out_slot_ = append_record_to_page(tail, record->xct_id_);
  out_page_ = tail;

  // Now we mark the old record as moved
  assorted::memory_fence_release();
  record->xct_id_.set_moved();
  return kErrorCodeOk;
}


ErrorCode foedus::storage::hash::ReserveRecords::find_and_lock_spacious_tail ( xct::SysxctWorkspace *  sysxct_workspace,
HashDataPage *  from_page,
HashDataPage **  tail 
)

Definition at line 139 of file hash_reserve_impl.cpp.

References aggressive_payload_count_hint_, foedus::storage::hash::HashDataPage::available_space(), CHECK_ERROR_CODE, context_, create_new_tail_page(), foedus::storage::hash::HashDataPage::get_volatile_page_id(), foedus::storage::hash::HashDataPage::header(), foedus::storage::VolatilePagePointer::is_null(), foedus::kErrorCodeOk, key_length_, foedus::assorted::memory_fence_release(), foedus::storage::hash::HashDataPage::next_page(), foedus::storage::PageHeader::page_version_, foedus::storage::hash::HashDataPage::required_space(), foedus::thread::Thread::resolve_cast(), foedus::storage::PageVersion::set_has_next_page(), foedus::thread::Thread::sysxct_page_lock(), and foedus::storage::DualPagePointer::volatile_pointer_.

Referenced by expand_record().

{
  *tail = nullptr;
  HashDataPage* cur = from_page;
  while (true) {
    VolatilePagePointer next_pointer = cur->next_page().volatile_pointer_;
    if (!next_pointer.is_null()) {
      cur = context_->resolve_cast<HashDataPage>(next_pointer);
      continue;
    }
    // cur looks like a tail page, but we need to confirm that after locking.
    CHECK_ERROR_CODE(context_->sysxct_page_lock(sysxct_workspace, reinterpret_cast<Page*>(cur)));
    if (!cur->next_page().volatile_pointer_.is_null()) {
      LOG(INFO) << "Rare. Someone has just made a next page";
      continue;
    }

    const uint16_t available_space = cur->available_space();
    const uint16_t required_space
      = HashDataPage::required_space(key_length_, aggressive_payload_count_hint_);
    if (available_space >= required_space) {
      *tail = cur;
      return kErrorCodeOk;
    } else {
      // need to make a new page..
      HashDataPage* new_tail;
      CHECK_ERROR_CODE(create_new_tail_page(cur, &new_tail));

      // Then, after all, we "publish" this new page
      assorted::memory_fence_release();  // so that others don't see uninitialized page
      cur->next_page().volatile_pointer_ = new_tail->get_volatile_page_id();
      assorted::memory_fence_release();  // so that others don't have "where's the next page" issue
      cur->header().page_version_.set_has_next_page();

      // We could immediately make a record in this page before the publish above.
      // If we do that we could avoid lock here. But the code becomes a bit uglier.
      // record-expand is anyway a costly operation, so ok for now.
      CHECK_ERROR_CODE(context_->sysxct_page_lock(
        sysxct_workspace,
        reinterpret_cast<Page*>(new_tail)));
      *tail = new_tail;
      return kErrorCodeOk;
    }
  }
}


ErrorCode foedus::storage::hash::ReserveRecords::find_or_create_or_expand ( xct::SysxctWorkspace *  sysxct_workspace,
HashDataPage *  page,
DataPageSlotIndex  examined_records 
)

The main loop (well, recursion actually).

Definition at line 36 of file hash_reserve_impl.cpp.

References ASSERT_ND, CHECK_ERROR_CODE, context_, create_new_record_in_tail_page(), expand_record(), foedus::storage::hash::HashDataPage::Slot::get_max_payload(), foedus::storage::hash::HashDataPage::get_record_count(), foedus::storage::hash::HashDataPage::get_slot(), foedus::storage::hash::HashDataPage::header(), foedus::storage::hash::HashDataPage::is_locked(), foedus::kErrorCodeOk, foedus::storage::hash::kSlotNotFound, foedus::assorted::memory_fence_acquire(), foedus::storage::hash::HashDataPage::next_page(), out_page_, out_slot_, payload_count_, foedus::thread::Thread::resolve_cast(), search_within_page(), foedus::storage::PageHeader::snapshot_, foedus::thread::Thread::sysxct_page_lock(), target_, and foedus::storage::DualPagePointer::volatile_pointer_.

Referenced by run().

{
  ASSERT_ND(!page->header().snapshot_);
  // lock the page first so that there is no race on new keys.
  VolatilePagePointer next_pointer = page->next_page().volatile_pointer_;
  if (next_pointer.is_null()) {
    CHECK_ERROR_CODE(context_->sysxct_page_lock(sysxct_workspace, reinterpret_cast<Page*>(page)));
    next_pointer = page->next_page().volatile_pointer_;
  } else {
    // If the page already has a next-page pointer, the page is already finalized. Skip locking.
    assorted::memory_fence_acquire();  // must check next-page BEFORE everything else
  }
  // In either case, from now on this page is finalized
  ASSERT_ND(page->is_locked() || !next_pointer.is_null());
  const DataPageSlotIndex count = page->get_record_count();
  ASSERT_ND(examined_records <= count);

  // Locate the key in this page.
  DataPageSlotIndex index;
  if (examined_records == count) {
#ifndef NDEBUG
    // is examined_records correct? let's confirm
    index = search_within_page(page, examined_records, 0);
    ASSERT_ND(index == kSlotNotFound);
#endif  // NDEBUG
    index = kSlotNotFound;
  } else {
    // We can skip the first examined_records records.
    // We took the page lock, so physical-only search is enough.
    index = search_within_page(page, count, examined_records);
  }

  if (index == kSlotNotFound) {
    // still no match, This is simpler.
    if (!next_pointer.is_null()) {
      // We have a next page, let's go on.
      // This is so far a recursion. Easier to debug than a loop.
      // Stackoverflow? If you have a long chain in hash bucket, you are already screwed
      HashDataPage* next_page = context_->resolve_cast<HashDataPage>(next_pointer);
      return find_or_create_or_expand(sysxct_workspace, next_page, 0);
    } else {
      // This is the tail page. and we locked it.
      // We simply create a new physical record.
      ASSERT_ND(page->is_locked());
      return create_new_record_in_tail_page(page);
    }
  } else {
    // We found a match! This is either super easy or complex
    ASSERT_ND(index >= examined_records);
    ASSERT_ND(index < count);
    if (page->get_slot(index).get_max_payload() >= payload_count_) {
      // This record is enough spacious. we are done! a super easy case.
      out_slot_ = index;
      out_page_ = page;
      return kErrorCodeOk;
    } else {
      // We need to expand this record. To do that, we need to lock this record and move it.
      // This can be complex.
      return expand_record(sysxct_workspace, page, index);
    }
  }
}


ErrorCode foedus::storage::hash::ReserveRecords::run ( xct::SysxctWorkspace *  sysxct_workspace )
override virtual

Execute the system transaction.

You should override this method.

Implements foedus::xct::SysxctFunctor.

Definition at line 31 of file hash_reserve_impl.cpp.

References aggressive_payload_count_hint_, ASSERT_ND, find_or_create_or_expand(), hint_check_from_, payload_count_, and target_.

{
  ASSERT_ND(aggressive_payload_count_hint_ >= payload_count_);
  return find_or_create_or_expand(sysxct_workspace, target_, hint_check_from_);
}


DataPageSlotIndex foedus::storage::hash::ReserveRecords::search_within_page ( const HashDataPage *  page,
DataPageSlotIndex  key_count,
DataPageSlotIndex  examined_records 
) const

Definition at line 244 of file hash_reserve_impl.cpp.

References ASSERT_ND, combo_, foedus::storage::hash::HashCombo::fingerprint_, foedus::storage::hash::HashDataPage::get_record_count(), foedus::storage::hash::HashCombo::hash_, foedus::storage::hash::HashDataPage::header(), foedus::storage::PageVersion::is_locked(), foedus::storage::VolatilePagePointer::is_null(), key_, key_length_, foedus::storage::hash::HashDataPage::next_page(), foedus::storage::PageHeader::page_version_, foedus::storage::hash::HashDataPage::search_key_physical(), foedus::storage::PageHeader::snapshot_, and foedus::storage::DualPagePointer::volatile_pointer_.

Referenced by find_or_create_or_expand().

{
  ASSERT_ND(!page->header().snapshot_);
  ASSERT_ND(page->header().page_version_.is_locked()
    || !page->next_page().volatile_pointer_.is_null());
  ASSERT_ND(key_count == page->get_record_count());
  return page->search_key_physical(
    combo_.hash_,
    combo_.fingerprint_,
    key_,
    key_length_,
    key_count,
    examined_records);
}


Member Data Documentation

const PayloadLength foedus::storage::hash::ReserveRecords::aggressive_payload_count_hint_

When we expand the record or allocate a new record, we might allocate a larger-than-necessary space guided by this hint.

It's useful to avoid future record expansion.

Precondition
aggressive_payload_count_hint_ >= payload_count_

Definition at line 113 of file hash_reserve_impl.hpp.

Referenced by append_record_to_page(), create_new_record_in_tail_page(), expand_record(), find_and_lock_spacious_tail(), and run().
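As an illustration only (this policy is invented here, not part of FOEDUS), a caller might derive the hint by rounding the required payload up to a coarser granularity, which trivially satisfies the precondition above:

#include <algorithm>
#include <cstdint>

// Hypothetical hint policy: round the required payload up to the next multiple
// of 16 bytes so that small in-place growth does not trigger record expansion.
// Always returns a value >= payload_count.
uint16_t choose_payload_hint(uint16_t payload_count) {
  uint16_t rounded = static_cast<uint16_t>((payload_count + 15u) & ~15u);
  return std::max(payload_count, rounded);
}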

const HashCombo& foedus::storage::hash::ReserveRecords::combo_

Hash info of the key.

It's a reference, so the lifetime of the caller's HashCombo object must outlast this sysxct.

Definition at line 102 of file hash_reserve_impl.hpp.

Referenced by append_record_to_page(), and search_within_page().

thread::Thread* const foedus::storage::hash::ReserveRecords::context_

Thread context.

const DataPageSlotIndex foedus::storage::hash::ReserveRecords::hint_check_from_

The in-page location from which this sysxct will look for matching records.

The caller is sure that no record before this position can match the slice. Thanks to append-only writes in data pages, the caller can guarantee that the records it observed are final.

In most cases, this is same as key_count after locking, thus completely avoiding the re-check. In some cases, the caller already found a matching key in this index, but even in that case this sysxct must re-search after locking because the record might be now moved.

Definition at line 124 of file hash_reserve_impl.hpp.

Referenced by run().

const void* const foedus::storage::hash::ReserveRecords::key_

The key of the new record.

Definition at line 97 of file hash_reserve_impl.hpp.

Referenced by append_record_to_page(), and search_within_page().

const KeyLength foedus::storage::hash::ReserveRecords::key_length_

Byte length of the key.

HashDataPage* foedus::storage::hash::ReserveRecords::out_page_

[Out] The page that contains the found/created record.

As long as this sysxct returns kErrorCodeOk, out_page_ is either target_ itself or some page after target_.

Postcondition
out_page_ != nullptr

Definition at line 140 of file hash_reserve_impl.hpp.

Referenced by create_new_record_in_tail_page(), expand_record(), find_or_create_or_expand(), and foedus::storage::hash::HashStoragePimpl::locate_record_reserve_physical().

DataPageSlotIndex foedus::storage::hash::ReserveRecords::out_slot_

[Out] The slot of the record that is found or created.

As long as this sysxct returns kErrorCodeOk, it guarantees the following.

Postcondition
out_slot_ != kSlotNotFound
The slot out_slot_ in out_page_ points to a record that was, at least at some point, a valid non-moved record of the given key with a satisfying max-payload length.

Definition at line 133 of file hash_reserve_impl.hpp.

Referenced by create_new_record_in_tail_page(), expand_record(), find_or_create_or_expand(), and foedus::storage::hash::HashStoragePimpl::locate_record_reserve_physical().

const PayloadLength foedus::storage::hash::ReserveRecords::payload_count_

Minimal required length of the payload.

Definition at line 106 of file hash_reserve_impl.hpp.

Referenced by append_record_to_page(), find_or_create_or_expand(), and run().

HashDataPage* const foedus::storage::hash::ReserveRecords::target_

The data page to install a new physical record.

This might NOT be the head page of the hash bin. The contract here is that the caller must be sure that no page before this one in the chain contains a record of the given key that is not marked as moved. In other words, the caller must have checked that there is no such record before hint_check_from_ of target_.

  • Example 1: the caller found an existing record in target_ that is not moved. hint_check_from_ < target_->get_key_count(), and target_ might not be the tail of the bin.
  • Example 2: the caller found no existing non-moved record. hint_check_from_ == target_->get_key_count(), and target_ is the tail of the bin.

Of course, by the time this sysxct takes a lock, other threads might insert more records, some of which might have the key. This sysxct thus needs to resume the search from target_ and hint_check_from_ (but not anywhere before!), as sketched below.
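Purely to illustrate this contract (the concrete caller inside FOEDUS is HashStoragePimpl::locate_record_reserve_physical(); the search-result struct and helper below are made up for this sketch):

#include <hash_reserve_impl.hpp>

namespace foedus { namespace storage { namespace hash {

// Hypothetical caller-side helper, NOT part of FOEDUS. Only HashDataPage,
// DataPageSlotIndex, and kSlotNotFound come from the library; the optimistic,
// lock-free search that fills OptimisticSearchResult is assumed to exist.
struct OptimisticSearchResult {
  HashDataPage* page;                // page where the lock-free search stopped
  DataPageSlotIndex slot;            // matching slot, or kSlotNotFound
  DataPageSlotIndex observed_count;  // record count observed in 'page'
};

void choose_reserve_target(
    const OptimisticSearchResult& found,
    HashDataPage** target,
    DataPageSlotIndex* hint_check_from) {
  *target = found.page;
  if (found.slot != kSlotNotFound) {
    // Example 1 above: a non-moved record was seen at found.slot, but it might
    // move before the sysxct locks it, so the re-check resumes from that slot.
    *hint_check_from = found.slot;
  } else {
    // Example 2 above: no match anywhere; the records already examined in the
    // last page are final (append-only), so the sysxct may skip them.
    *hint_check_from = found.observed_count;
  }
}

}}}  // namespace foedus::storage::hash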

Definition at line 95 of file hash_reserve_impl.hpp.

Referenced by find_or_create_or_expand(), and run().


The documentation for this struct was generated from the following files:
  • hash_reserve_impl.hpp
  • hash_reserve_impl.cpp