libfoedus-core
FOEDUS Core Library
foedus::xct::CurrentLockList Class Reference

Sorted list of all locks, either read-lock or write-lock, taken in the current run. More...

Detailed Description

Sorted list of all locks, either read-lock or write-lock, taken in the current run.

This is NOT a POD because we need dynamic memory for the list. This holds all locks in address order to help our commit protocol.

So far we maintain this object in addition to the read-set/write-set. We could merge these objects into one, which would allow us to add some functionality and optimizations. For example, "read my own writes" semantics could be practically implemented with such an integrated list. We could also reduce the amount of sorting and tracking. But let's do that later.

Note
This object itself is thread-private. No concurrency control needed.
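
To make the idea concrete, here is a minimal standalone sketch of such a list: a thread-private array of entries kept sorted by a universal lock ID, with 1-based positions and a dummy entry at position 0. The types below (SimpleLockList, Entry) are simplified stand-ins for illustration only, not the FOEDUS classes.

#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

using UniversalLockId = uintptr_t;   // canonical ordering key (derived from the lock's address)
using LockListPosition = uint32_t;   // 1-based index into the list; 0 means "invalid"
enum LockMode { kNoLock, kReadLock, kWriteLock };

struct Entry {
  UniversalLockId id_;
  LockMode preferred_;
  LockMode taken_;
};

struct SimpleLockList {
  std::vector<Entry> array_{ {0, kNoLock, kNoLock} };  // [0] is the dummy entry

  LockListPosition last_active() const {
    return static_cast<LockListPosition>(array_.size() - 1);
  }
  bool sorted() const {
    return std::is_sorted(array_.begin() + 1, array_.end(),
        [](const Entry& a, const Entry& b) { return a.id_ < b.id_; });
  }
};

int main() {
  SimpleLockList cll;  // thread-private, so no concurrency control on the list itself
  cll.array_.push_back({100, kReadLock, kNoLock});
  cll.array_.push_back({200, kWriteLock, kNoLock});
  assert(cll.sorted() && cll.last_active() == 2);
  return 0;
}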

Definition at line 309 of file retrospective_lock_list.hpp.

#include <retrospective_lock_list.hpp>

Public Member Functions

 CurrentLockList ()
 
 ~CurrentLockList ()
 
void init (LockEntry *array, uint32_t capacity, const memory::GlobalVolatilePageResolver &resolver)
 
void uninit ()
 
void clear_entries ()
 
LockListPosition binary_search (UniversalLockId lock) const
 Analogous to std::binary_search() for the given lock. More...
 
LockListPosition get_or_add_entry (UniversalLockId lock_id, RwLockableXctId *lock, LockMode preferred_mode)
 Adds an entry to this list, re-sorting part of the list if necessary to keep it sorted. More...
 
LockListPosition lower_bound (UniversalLockId lock) const
 Analogous to std::lower_bound() for the given lock. More...
 
const LockEntry * get_array () const
 
LockEntry * get_array ()
 
LockEntry * get_entry (LockListPosition pos)
 
const LockEntry * get_entry (LockListPosition pos) const
 
uint32_t get_capacity () const
 
LockListPosition get_last_active_entry () const
 
bool is_valid_entry (LockListPosition pos) const
 
bool is_empty () const
 
void assert_sorted () const __attribute__((always_inline))
 
void assert_sorted_impl () const
 
LockEntry * begin ()
 
LockEntry * end ()
 
const LockEntry * cbegin () const
 
const LockEntry * cend () const
 
const memory::GlobalVolatilePageResolver & get_volatile_page_resolver () const
 
LockListPosition get_last_locked_entry () const
 
UniversalLockId get_max_locked_id () const
 
LockListPosition calculate_last_locked_entry () const
 Calculate last_locked_entry_ by really checking the whole list. More...
 
LockListPosition calculate_last_locked_entry_from (LockListPosition from) const
 Only searches among entries at or before "from". More...
 
void assert_last_locked_entry () const
 
void batch_insert_write_placeholders (const WriteXctAccess *write_set, uint32_t write_set_size)
 Create entries for all write-sets in one-shot. More...
 
void prepopulate_for_retrospective_lock_list (const RetrospectiveLockList &rll)
 Another batch-insert method used at the beginning of a transaction. More...
 
template<typename MCS_RW_IMPL >
ErrorCode try_or_acquire_single_lock (LockListPosition pos, MCS_RW_IMPL *mcs_rw_impl)
 Methods below take or release locks, so they receive MCS_RW_IMPL, a template param. More...
 
template<typename MCS_RW_IMPL >
ErrorCode try_or_acquire_multiple_locks (LockListPosition upto_pos, MCS_RW_IMPL *mcs_rw_impl)
 Acquire multiple locks up to the given position in canonical order. More...
 
template<typename MCS_RW_IMPL >
void try_async_single_lock (LockListPosition pos, MCS_RW_IMPL *mcs_rw_impl)
 
template<typename MCS_RW_IMPL >
bool retry_async_single_lock (LockListPosition pos, MCS_RW_IMPL *mcs_rw_impl)
 
template<typename MCS_RW_IMPL >
void cancel_async_single_lock (LockListPosition pos, MCS_RW_IMPL *mcs_rw_impl)
 
template<typename MCS_RW_IMPL >
void try_async_multiple_locks (LockListPosition upto_pos, MCS_RW_IMPL *mcs_rw_impl)
 
template<typename MCS_RW_IMPL >
void release_all_locks (MCS_RW_IMPL *mcs_rw_impl)
 
template<typename MCS_RW_IMPL >
void release_all_after (UniversalLockId address, MCS_RW_IMPL *mcs_rw_impl)
 Release all locks in CLL whose addresses are canonically ordered after the parameter. More...
 
template<typename MCS_RW_IMPL >
void release_all_at_and_after (UniversalLockId address, MCS_RW_IMPL *mcs_rw_impl)
 same as release_all_after(address - 1) More...
 
template<typename MCS_RW_IMPL >
void giveup_all_after (UniversalLockId address, MCS_RW_IMPL *mcs_rw_impl)
 This gives up locks in the CLL that are not yet taken. More...
 
template<typename MCS_RW_IMPL >
void giveup_all_at_and_after (UniversalLockId address, MCS_RW_IMPL *mcs_rw_impl)
 

Friends

std::ostream & operator<< (std::ostream &o, const CurrentLockList &v)
 

Constructor & Destructor Documentation

foedus::xct::CurrentLockList::CurrentLockList ( )

Definition at line 76 of file retrospective_lock_list.cpp.

References clear_entries().

76  {
77  array_ = nullptr;
78  capacity_ = 0;
79  clear_entries();
80 }


foedus::xct::CurrentLockList::~CurrentLockList ( )

Definition at line 82 of file retrospective_lock_list.cpp.

References uninit().

82  {
83  uninit();
84 }


Member Function Documentation

void foedus::xct::CurrentLockList::assert_last_locked_entry ( ) const
inline

Definition at line 405 of file retrospective_lock_list.hpp.

References ASSERT_ND, and calculate_last_locked_entry().

Referenced by assert_sorted_impl(), binary_search(), get_or_add_entry(), lower_bound(), release_all_after(), retry_async_single_lock(), try_async_single_lock(), and try_or_acquire_single_lock().

405  {
406 #ifndef NDEBUG
407  LockListPosition correct = calculate_last_locked_entry();
408  ASSERT_ND(correct == last_locked_entry_);
409 #endif // NDEBUG
410  }

void foedus::xct::CurrentLockList::assert_sorted ( ) const
inline

Definition at line 555 of file retrospective_lock_list.hpp.

References assert_sorted_impl().

Referenced by batch_insert_write_placeholders(), foedus::xct::CurrentLockListIteratorForWriteSet::CurrentLockListIteratorForWriteSet(), get_or_add_entry(), giveup_all_after(), prepopulate_for_retrospective_lock_list(), and release_all_after().

555  {
556 #ifndef NDEBUG
557  assert_sorted_impl();
558 #endif // NDEBUG
559 }


void foedus::xct::CurrentLockList::assert_sorted_impl ( ) const

Definition at line 178 of file retrospective_lock_list.cpp.

References assert_last_locked_entry(), and foedus::xct::lock_assert_sorted().

Referenced by assert_sorted().

178  {
179  assert_last_locked_entry();
180  lock_assert_sorted(*this);
181 }

void foedus::xct::CurrentLockList::batch_insert_write_placeholders ( const WriteXctAccess *  write_set,
uint32_t  write_set_size 
)

Create entries for all write-sets in one-shot.

Parameters
[in]  write_set  write-sets to create placeholders for. Must be canonically sorted.
[in]  write_set_size  count of entries in write_set

During precommit, we must create an entry for every write-set. Rather than doing it one by one, this method creates placeholder entries for all of them. The placeholders are not locked yet (taken_mode_ == kNoLock).
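
To illustrate the merge strategy (append placeholders for write-set IDs that are not in the list yet, then sort once, as the implementation note in the listing below describes), here is a simplified standalone sketch. The plain uint64_t IDs stand in for the real WriteXctAccess/LockEntry types and are illustration-only assumptions.

#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

int main() {
  std::vector<uint64_t> cll = {10, 40, 70};                 // already-sorted lock IDs in the CLL
  std::vector<uint64_t> write_set = {10, 20, 20, 70, 90};   // canonically sorted; may contain duplicates

  const std::size_t original_size = cll.size();
  for (std::size_t i = 0; i < write_set.size(); ++i) {
    if (i > 0 && write_set[i] == write_set[i - 1]) {
      continue;  // multiple writes to one record: one placeholder is enough
    }
    if (!std::binary_search(cll.begin(), cll.begin() + original_size, write_set[i])) {
      cll.push_back(write_set[i]);  // append a placeholder (not locked yet)
    }
  }
  std::sort(cll.begin(), cll.end());  // one sort instead of repeated shifting

  assert((cll == std::vector<uint64_t>{10, 20, 40, 70, 90}));
  return 0;
}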

Definition at line 329 of file retrospective_lock_list.cpp.

References ASSERT_ND, assert_sorted(), binary_search(), calculate_last_locked_entry(), foedus::xct::kLockListPositionInvalid, foedus::xct::kNoLock, foedus::xct::kWriteLock, foedus::xct::RecordXctAccess::ordinal_, foedus::xct::RecordXctAccess::owner_id_address_, foedus::xct::RecordXctAccess::owner_lock_id_, foedus::xct::LockEntry::preferred_mode_, foedus::xct::LockEntry::set(), and foedus::xct::LockEntry::universal_lock_id_.

Referenced by foedus::xct::XctManagerPimpl::precommit_xct_lock().

331  {
332  // We want to avoid full-sorting and minimize the number of copies/shifts.
333  // Luckily, write_set is already sorted, so is our array_. Just merge them in order.
334  if (write_set_size == 0) {
335  return;
336  }
337 #ifndef NDEBUG
338  for (uint32_t i = 1; i < write_set_size; ++i) {
339  ASSERT_ND(write_set[i - 1].ordinal_ != write_set[i].ordinal_);
340  if (write_set[i].owner_lock_id_ == write_set[i - 1].owner_lock_id_) {
341  ASSERT_ND(write_set[i - 1].ordinal_ < write_set[i].ordinal_);
342  } else {
343  ASSERT_ND(write_set[i - 1].owner_lock_id_ < write_set[i].owner_lock_id_);
344  }
345  }
346  assert_sorted();
347 #endif // NDEBUG
348 
349  // Implementation note: I considered a few approaches to efficiently do the merge.
350  // 1) insertion-sort: that sounds expensive... we might be inserting several
351  // 2) a bit complex. first path to identify the number of new entries, then second path to
352  // merge from the end, not the beginning, to copy/shift only what we need to.
353  // 3) insert all write-sets at the end then invoke std::sort once.
354  // For now I picked 3) for simplicity. Revisit later if CPU profile tells something.
355  if (last_active_entry_ == kLockListPositionInvalid) {
356  // If CLL is now empty, it's even easier. Just add all write-sets
357  uint32_t added = 0;
358  for (uint32_t write_pos = 0; write_pos < write_set_size; ++write_pos) {
359  const WriteXctAccess* write = write_set + write_pos;
360  if (write_pos > 0) {
361  const WriteXctAccess* prev = write_set + write_pos - 1;
362  ASSERT_ND(write->ordinal_ != prev->ordinal_);
363  ASSERT_ND(write->owner_lock_id_ >= prev->owner_lock_id_);
364  if (write->owner_lock_id_ == prev->owner_lock_id_) {
365  ASSERT_ND(write->ordinal_ > prev->ordinal_);
366  continue;
367  }
368  }
369  ++added;
370  LockEntry* new_entry = array_ + added;
371  new_entry->set(write->owner_lock_id_, write->owner_id_address_, kWriteLock, kNoLock);
372  }
373  last_active_entry_ = added;
374  } else {
375  uint32_t write_pos = 0;
376  uint32_t added = 0;
377  for (LockListPosition pos = 1U; pos <= last_active_entry_ && write_pos < write_set_size;) {
378  LockEntry* existing = array_ + pos;
379  const WriteXctAccess* write = write_set + write_pos;
380  UniversalLockId write_lock_id = write->owner_lock_id_;
381  if (existing->universal_lock_id_ < write_lock_id) {
382  ++pos;
383  } else if (existing->universal_lock_id_ == write_lock_id) {
384  if (existing->preferred_mode_ != kWriteLock) {
385  existing->preferred_mode_ = kWriteLock;
386  }
387  ++write_pos;
388  } else {
389  // yuppy, new entry.
390  ASSERT_ND(existing->universal_lock_id_ > write_lock_id);
391  ++added;
392  LockEntry* new_entry = array_ + last_active_entry_ + added;
393  new_entry->set(write_lock_id, write->owner_id_address_, kWriteLock, kNoLock);
394  // be careful on duplicate in write-set.
395  // It might contain multiple writes to one record.
396  for (++write_pos; write_pos < write_set_size; ++write_pos) {
397  const WriteXctAccess* next_write = write_set + write_pos;
398  UniversalLockId next_write_id = next_write->owner_lock_id_;
399  ASSERT_ND(next_write_id >= write_lock_id);
400  if (next_write_id > write_lock_id) {
401  break;
402  }
403  }
404  }
405  }
406 
407  while (write_pos < write_set_size) {
408  // After iterating over all existing entries, still some write-set entry remains.
409  // Hence they are all after the existing entries.
410  const WriteXctAccess* write = write_set + write_pos;
411  UniversalLockId write_lock_id = write->owner_lock_id_;
412  ASSERT_ND(last_active_entry_ == kLockListPositionInvalid
413  || array_[last_active_entry_].universal_lock_id_ < write_lock_id);
414 
415  // Again, be careful on duplicate in write set.
416  ++added;
417  LockEntry* new_entry = array_ + last_active_entry_ + added;
418  new_entry->set(write_lock_id, write->owner_id_address_, kWriteLock, kNoLock);
419  for (++write_pos; write_pos < write_set_size; ++write_pos) {
420  const WriteXctAccess* next_write = write_set + write_pos;
421  UniversalLockId next_write_id = next_write->owner_lock_id_;
422  ASSERT_ND(next_write_id >= write_lock_id);
423  if (next_write_id > write_lock_id) {
424  break;
425  }
426  }
427  }
428 
429  if (added > 0) {
430  last_active_entry_ += added;
431  std::sort(array_ + 1U, array_ + 1U + last_active_entry_);
432  }
433  }
434 
435  // Adjusting last_locked_entry_ is not impossible.. but let's just recalculate. Not too often.
436  last_locked_entry_ = calculate_last_locked_entry();
437  assert_sorted();
438 #ifndef NDEBUG
439  for (uint32_t i = 0; i < write_set_size; ++i) {
440  ASSERT_ND(binary_search(write_set[i].owner_lock_id_) != kLockListPositionInvalid);
441  }
442 #endif // NDEBUG
443 }

LockEntry* foedus::xct::CurrentLockList::begin ( )
inline

Definition at line 371 of file retrospective_lock_list.hpp.

Referenced by giveup_all_after(), and release_all_after().

371 { return array_ + 1U; }


LockListPosition foedus::xct::CurrentLockList::binary_search ( UniversalLockId  lock) const

Analogous to std::binary_search() for the given lock.

Data manipulation (search/add/etc)

Returns
Index of an entry whose lock_ == lock. kLockListPositionInvalid if not found.

Definition at line 189 of file retrospective_lock_list.cpp.

References assert_last_locked_entry().

Referenced by batch_insert_write_placeholders().

189  {
191  return lock_binary_search<CurrentLockList, LockEntry>(*this, lock);
192 }


LockListPosition foedus::xct::CurrentLockList::calculate_last_locked_entry ( ) const
inline

Calculate last_locked_entry_ by really checking the whole list.

Usually for sanity checks

Definition at line 393 of file retrospective_lock_list.hpp.

References calculate_last_locked_entry_from().

Referenced by assert_last_locked_entry(), and batch_insert_write_placeholders().

393  {
394  return calculate_last_locked_entry_from(last_active_entry_);
395  }

LockListPosition foedus::xct::CurrentLockList::calculate_last_locked_entry_from ( LockListPosition  from) const
inline

Only searches among entries at or before "from".

Definition at line 397 of file retrospective_lock_list.hpp.

References foedus::xct::kLockListPositionInvalid.

Referenced by calculate_last_locked_entry(), try_async_single_lock(), and try_or_acquire_single_lock().

397  {
398  for (LockListPosition pos = from; pos > kLockListPositionInvalid; --pos) {
399  if (array_[pos].is_locked()) {
400  return pos;
401  }
402  }
403  return kLockListPositionInvalid;
404  }

template<typename MCS_RW_IMPL >
void foedus::xct::CurrentLockList::cancel_async_single_lock ( LockListPosition  pos,
MCS_RW_IMPL *  mcs_rw_impl 
)
inline

Definition at line 724 of file retrospective_lock_list.hpp.

References ASSERT_ND, get_entry(), foedus::xct::RwLockableXctId::get_key_lock(), foedus::xct::LockEntry::is_enough(), foedus::xct::kNoLock, foedus::xct::kReadLock, foedus::xct::LockEntry::lock_, foedus::xct::LockEntry::mcs_block_, foedus::xct::LockEntry::preferred_mode_, and foedus::xct::LockEntry::taken_mode_.

726  {
727  LockEntry* lock_entry = get_entry(pos);
728  ASSERT_ND(!lock_entry->is_enough());
729  ASSERT_ND(lock_entry->taken_mode_ == kNoLock);
730  ASSERT_ND(lock_entry->mcs_block_);
731  McsRwLock* lock_addr = lock_entry->lock_->get_key_lock();
732  if (lock_entry->preferred_mode_ == kReadLock) {
733  mcs_rw_impl->cancel_async_rw_reader(lock_addr, lock_entry->mcs_block_);
734  } else {
735  ASSERT_ND(lock_entry->preferred_mode_ == kWriteLock);
736  mcs_rw_impl->cancel_async_rw_writer(lock_addr, lock_entry->mcs_block_);
737  }
738  lock_entry->mcs_block_ = 0;
739 }

const LockEntry* foedus::xct::CurrentLockList::cbegin ( ) const
inline

Definition at line 373 of file retrospective_lock_list.hpp.

373 { return array_ + 1U; }
const LockEntry* foedus::xct::CurrentLockList::cend ( ) const
inline

Definition at line 374 of file retrospective_lock_list.hpp.

374 { return array_ + 1U + last_active_entry_; }
void foedus::xct::CurrentLockList::clear_entries ( )

Definition at line 96 of file retrospective_lock_list.cpp.

References foedus::xct::kLockListPositionInvalid, foedus::xct::kNoLock, foedus::xct::LockEntry::lock_, foedus::xct::LockEntry::preferred_mode_, foedus::xct::LockEntry::taken_mode_, and foedus::xct::LockEntry::universal_lock_id_.

Referenced by foedus::xct::Xct::activate(), CurrentLockList(), init(), foedus::xct::XctManagerPimpl::release_and_clear_all_current_locks(), and uninit().

96  {
97  last_active_entry_ = kLockListPositionInvalid;
98  last_locked_entry_ = kLockListPositionInvalid;
99  if (array_) {
100  // Dummy entry
101  array_[kLockListPositionInvalid].universal_lock_id_ = 0;
102  array_[kLockListPositionInvalid].lock_ = nullptr;
103  array_[kLockListPositionInvalid].preferred_mode_ = kNoLock;
104  array_[kLockListPositionInvalid].taken_mode_ = kNoLock;
105  }
106 }

LockEntry* foedus::xct::CurrentLockList::end ( )
inline

Definition at line 372 of file retrospective_lock_list.hpp.

Referenced by giveup_all_after(), and release_all_after().

372 { return array_ + 1U + last_active_entry_; }


const LockEntry* foedus::xct::CurrentLockList::get_array ( ) const
inline

Definition at line 350 of file retrospective_lock_list.hpp.

Referenced by foedus::xct::XctManagerPimpl::precommit_xct_lock().

350 { return array_; }


LockEntry* foedus::xct::CurrentLockList::get_array ( )
inline

Definition at line 351 of file retrospective_lock_list.hpp.

351 { return array_; }
uint32_t foedus::xct::CurrentLockList::get_capacity ( ) const
inline

Definition at line 360 of file retrospective_lock_list.hpp.

360 { return capacity_; }
LockEntry* foedus::xct::CurrentLockList::get_entry ( LockListPosition  pos)
inline

Definition at line 352 of file retrospective_lock_list.hpp.

References ASSERT_ND, and is_valid_entry().

Referenced by cancel_async_single_lock(), get_max_locked_id(), foedus::xct::CurrentLockListIteratorForWriteSet::next_writes(), foedus::xct::Xct::on_record_read_take_locks_if_needed(), retry_async_single_lock(), try_async_single_lock(), and try_or_acquire_single_lock().

352  {
353  ASSERT_ND(is_valid_entry(pos));
354  return array_ + pos;
355  }

const LockEntry* foedus::xct::CurrentLockList::get_entry ( LockListPosition  pos) const
inline

Definition at line 356 of file retrospective_lock_list.hpp.

References ASSERT_ND, and is_valid_entry().

356  {
357  ASSERT_ND(is_valid_entry(pos));
358  return array_ + pos;
359  }

LockListPosition foedus::xct::CurrentLockList::get_last_active_entry ( ) const
inline

Definition at line 361 of file retrospective_lock_list.hpp.

361 { return last_active_entry_; }
LockListPosition foedus::xct::CurrentLockList::get_last_locked_entry ( ) const
inline
Returns
largest index of entries that are already locked. kLockListPositionInvalid if no entry is locked.

Definition at line 383 of file retrospective_lock_list.hpp.

Referenced by foedus::xct::XctManagerPimpl::precommit_xct_lock().

383 { return last_locked_entry_; }


UniversalLockId foedus::xct::CurrentLockList::get_max_locked_id ( ) const
inline
Returns
the biggest Id of locks actually locked. If none, kNullUniversalLockId.

Definition at line 385 of file retrospective_lock_list.hpp.

References get_entry(), foedus::xct::kLockListPositionInvalid, foedus::xct::kNullUniversalLockId, and foedus::xct::LockEntry::universal_lock_id_.

Referenced by foedus::thread::ThreadPimpl::cll_get_max_locked_id().

385  {
386  if (last_locked_entry_ == kLockListPositionInvalid) {
387  return kNullUniversalLockId;
388  } else {
389  return get_entry(last_locked_entry_)->universal_lock_id_;
390  }
391  }

LockListPosition foedus::xct::CurrentLockList::get_or_add_entry ( UniversalLockId  lock_id,
RwLockableXctId *  lock,
LockMode  preferred_mode 
)

Adds an entry to this list, re-sorting part of the list if necessary to keep the sortedness.

If there is an existing entry for the lock, it just returns its position. If not, this method creates a new entry with taken_mode=kNoLock.

Returns
the position of the entry (newly added or existing). kLockListPositionInvalid means the list was full and the entry couldn't be added (which is very unlikely, though).
Note
Do not frequently use this method. You should batch your insert when you can. This method is used when we couldn't batch/expect the new entry.
See also
batch_insert_write_placeholders()
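
The core of this method is a lower_bound followed by a tail shift when the entry is new. Below is a simplified standalone sketch of that get-or-add pattern (plain IDs in a std::vector instead of LockEntry, 0-based instead of 1-based positions; illustration only, not the FOEDUS code).

#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Returns the index of the entry for `id`, inserting it if it is not present yet.
std::size_t get_or_add(std::vector<uint64_t>* list, uint64_t id) {
  auto it = std::lower_bound(list->begin(), list->end(), id);
  if (it != list->end() && *it == id) {
    return static_cast<std::size_t>(it - list->begin());  // existing entry: just return it
  }
  // New entry: shift the tail right by one slot (a memmove in the real code) and insert.
  it = list->insert(it, id);
  return static_cast<std::size_t>(it - list->begin());
}

int main() {
  std::vector<uint64_t> cll = {10, 30, 50};
  assert(get_or_add(&cll, 30) == 1);   // existing entry, list unchanged
  assert(get_or_add(&cll, 40) == 2);   // inserted between 30 and 50
  assert((cll == std::vector<uint64_t>{10, 30, 40, 50}));
  return 0;
}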

Definition at line 204 of file retrospective_lock_list.cpp.

References assert_last_locked_entry(), ASSERT_ND, assert_sorted(), foedus::xct::kLockListPositionInvalid, foedus::xct::kNoLock, lower_bound(), foedus::xct::LockEntry::preferred_mode_, foedus::xct::LockEntry::set(), and foedus::xct::xct_id_to_universal_lock_id().

Referenced by foedus::xct::Xct::on_record_read_take_locks_if_needed().

207  {
208  assert_last_locked_entry();
209  ASSERT_ND(id == xct_id_to_universal_lock_id(volatile_page_resolver_, lock));
210  // Easy case? (lock >= the last entry)
211  LockListPosition insert_pos = lower_bound(id);
212  ASSERT_ND(insert_pos != kLockListPositionInvalid);
213 
214  // Larger than all existing entries? Append to the last!
215  if (insert_pos > last_active_entry_) {
216  ASSERT_ND(insert_pos == last_active_entry_ + 1U);
217  LockListPosition new_pos = issue_new_position();
218  array_[new_pos].set(id, lock, preferred_mode, kNoLock);
219  ASSERT_ND(new_pos == insert_pos);
220  return new_pos;
221  }
222 
223  // lower_bound returns the first entry that is NOT less than. is it equal?
224  ASSERT_ND(array_[insert_pos].universal_lock_id_ >= id);
225  if (array_[insert_pos].universal_lock_id_ == id) {
226  // Found existing!
227  if (array_[insert_pos].preferred_mode_ < preferred_mode) {
228  array_[insert_pos].preferred_mode_ = preferred_mode;
229  }
230  return insert_pos;
231  }
232 
233  DVLOG(1) << "not an easy case. We need to adjust the order. This is costly!";
234  ASSERT_ND(insert_pos <= last_active_entry_); // otherwise we went into the 1st branch
235  ASSERT_ND(array_[insert_pos].universal_lock_id_ > id); // if ==, we went into the 2nd branch
236  ASSERT_ND(insert_pos == 1U || array_[insert_pos - 1U].universal_lock_id_ < id);
237 
238  LockListPosition new_last_pos = issue_new_position();
239  ASSERT_ND(new_last_pos > insert_pos);
240  uint64_t moved_bytes = sizeof(LockEntry) * (new_last_pos - insert_pos);
241  std::memmove(array_ + insert_pos + 1U, array_ + insert_pos, moved_bytes);
242  DVLOG(1) << "Re-sorted. hope this won't happen often";
243  array_[insert_pos].set(id, lock, preferred_mode, kNoLock);
244  if (last_locked_entry_ >= insert_pos) {
245  ++last_locked_entry_;
246  }
247 
248  assert_sorted();
249  return insert_pos;
250 }

const memory::GlobalVolatilePageResolver& foedus::xct::CurrentLockList::get_volatile_page_resolver ( ) const
inline

Definition at line 375 of file retrospective_lock_list.hpp.

375  {
376  return volatile_page_resolver_;
377  }
template<typename MCS_RW_IMPL >
void foedus::xct::CurrentLockList::giveup_all_after ( UniversalLockId  address,
MCS_RW_IMPL *  mcs_rw_impl 
)
inline

This gives up locks in the CLL that are ordered after the given address and not yet taken.

Their preferred mode is set to either kNoLock or the same as taken_mode, and all incomplete async locks are cancelled. Unlike clear_entries(), this leaves the entries in place.
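
A simplified standalone sketch of the give-up rule described above (async-lock cancellation and the is_enough() check are reduced to a plain mode comparison; the Entry type is an illustration-only stand-in, not the FOEDUS LockEntry):

#include <cassert>
#include <cstdint>
#include <vector>

enum LockMode { kNoLock, kReadLock, kWriteLock };
struct Entry { uint64_t id_; LockMode preferred_; LockMode taken_; };

void giveup_all_after(std::vector<Entry>* cll, uint64_t address) {
  for (Entry& e : *cll) {
    if (e.id_ <= address || e.preferred_ == e.taken_) {
      continue;  // before the boundary, or the request is already satisfied: keep as-is
    }
    e.preferred_ = e.taken_;  // give up the not-yet-taken part of the request
  }
}

int main() {
  std::vector<Entry> cll = {
    {10, kWriteLock, kWriteLock},  // already taken: untouched
    {20, kWriteLock, kReadLock},   // pending upgrade: given up, stays a read-lock
    {30, kReadLock,  kNoLock},     // not taken at all: request given up entirely
  };
  giveup_all_after(&cll, 15);
  assert(cll[0].preferred_ == kWriteLock);
  assert(cll[1].preferred_ == kReadLock);
  assert(cll[2].preferred_ == kNoLock);
  return 0;
}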

Definition at line 826 of file retrospective_lock_list.hpp.

References ASSERT_ND, assert_sorted(), begin(), end(), foedus::xct::kNoLock, foedus::xct::kReadLock, and foedus::xct::kWriteLock.

Referenced by foedus::thread::ThreadPimpl::cll_giveup_all_locks_after().

826  {
827  assert_sorted();
828  uint32_t givenup_read_locks = 0;
829  uint32_t givenup_write_locks = 0;
830  uint32_t givenup_upgrades = 0;
831  uint32_t already_enough_locks = 0;
832  uint32_t canceled_async_read_locks = 0;
833  uint32_t canceled_async_write_locks = 0;
834 
835  for (LockEntry* entry = begin(); entry != end(); ++entry) {
836  if (entry->universal_lock_id_ <= address) {
837  continue;
838  }
839  if (entry->preferred_mode_ == kNoLock) {
840  continue;
841  }
842  if (entry->is_enough()) {
843  ++already_enough_locks;
844  continue;
845  }
846 
847  if (entry->is_locked()) {
848  ASSERT_ND(entry->taken_mode_ == kReadLock);
849  ASSERT_ND(entry->preferred_mode_ == kWriteLock);
850  ++givenup_upgrades;
851  entry->preferred_mode_ = entry->taken_mode_;
852  } else if (entry->mcs_block_) {
853  if (entry->preferred_mode_ == kReadLock) {
854  mcs_rw_impl->cancel_async_rw_reader(entry->lock_->get_key_lock(), entry->mcs_block_);
855  ++canceled_async_read_locks;
856  } else {
857  ASSERT_ND(entry->preferred_mode_ == kWriteLock);
858  mcs_rw_impl->cancel_async_rw_writer(entry->lock_->get_key_lock(), entry->mcs_block_);
859  ++canceled_async_write_locks;
860  }
861  entry->mcs_block_ = 0;
862  entry->preferred_mode_ = kNoLock;
863  } else {
864  ASSERT_ND(entry->taken_mode_ == kNoLock);
865  if (entry->preferred_mode_ == kReadLock) {
866  ++givenup_read_locks;
867  } else {
868  ASSERT_ND(entry->preferred_mode_ == kWriteLock);
869  ++givenup_write_locks;
870  }
871  entry->preferred_mode_ = kNoLock;
872  }
873  }
874 
875 #ifndef NDEBUG
876  giveup_all_after_debuglog(
877  givenup_read_locks,
878  givenup_write_locks,
879  givenup_upgrades,
880  already_enough_locks,
881  canceled_async_read_locks,
882  canceled_async_write_locks);
883 #endif // NDEBUG
884 }

template<typename MCS_RW_IMPL >
void foedus::xct::CurrentLockList::giveup_all_at_and_after ( UniversalLockId  address,
MCS_RW_IMPL *  mcs_rw_impl 
)
inline

Definition at line 886 of file retrospective_lock_list.hpp.

References foedus::xct::kNullUniversalLockId.

888  {
889  if (address == kNullUniversalLockId) {
890  giveup_all_after<MCS_RW_IMPL>(kNullUniversalLockId, mcs_rw_impl);
891  } else {
892  giveup_all_after<MCS_RW_IMPL>(address - 1U, mcs_rw_impl);
893  }
894 }
void foedus::xct::CurrentLockList::init ( LockEntry *  array,
uint32_t  capacity,
const memory::GlobalVolatilePageResolver &  resolver 
)

Definition at line 86 of file retrospective_lock_list.cpp.

References clear_entries().

Referenced by foedus::xct::Xct::initialize().

89  {
90  array_ = array;
91  capacity_ = capacity;
92  volatile_page_resolver_ = resolver;
93  clear_entries();
94 }


bool foedus::xct::CurrentLockList::is_empty ( ) const
inline

Definition at line 365 of file retrospective_lock_list.hpp.

References foedus::xct::kLockListPositionInvalid.

Referenced by foedus::xct::Xct::deactivate(), foedus::xct::XctManagerPimpl::precommit_xct(), and prepopulate_for_retrospective_lock_list().

365 { return last_active_entry_ == kLockListPositionInvalid; }

bool foedus::xct::CurrentLockList::is_valid_entry ( LockListPosition  pos) const
inline

Definition at line 362 of file retrospective_lock_list.hpp.

References foedus::xct::kLockListPositionInvalid.

Referenced by get_entry().

362  {
363  return pos != kLockListPositionInvalid && pos <= last_active_entry_;
364  }

LockListPosition foedus::xct::CurrentLockList::lower_bound ( UniversalLockId  lock) const

Analogous to std::lower_bound() for the given lock.

Returns
Index of the first entry whose lock_ is not less than lock. If there is no such entry, last_active_entry_ + 1U, whether last_active_entry_ is 0 or not. Thus the return value is always > 0. This is to imitate std::lower_bound's behavior.
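
A small standalone sketch of this 1-based convention (a dummy element occupies index 0, so the returned position is always at least 1 and at most last_active_entry_ + 1; plain uint64_t IDs are used for illustration, not the FOEDUS types):

#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

uint32_t lower_bound_pos(const std::vector<uint64_t>& ids /* ids[0] is a dummy */, uint64_t lock) {
  auto it = std::lower_bound(ids.begin() + 1, ids.end(), lock);
  return static_cast<uint32_t>(it - ids.begin());  // in [1, last_active + 1]
}

int main() {
  std::vector<uint64_t> ids = {0 /* dummy */, 10, 30, 50};  // last_active = 3
  assert(lower_bound_pos(ids, 30) == 2);  // exact hit
  assert(lower_bound_pos(ids, 40) == 3);  // first entry not less than 40
  assert(lower_bound_pos(ids, 99) == 4);  // nothing qualifies: last_active + 1
  return 0;
}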

Definition at line 196 of file retrospective_lock_list.cpp.

References assert_last_locked_entry().

Referenced by get_or_add_entry().

196  {
197  assert_last_locked_entry();
198  return lock_lower_bound<CurrentLockList, LockEntry>(*this, lock);
199 }


void foedus::xct::CurrentLockList::prepopulate_for_retrospective_lock_list ( const RetrospectiveLockList rll)

Another batch-insert method used at the beginning of a transaction.

When an Xct has an RLL, it will most likely lock all of its entries. So, we pre-populate CLL entries for all of them at the beginning. This is both for simplicity and performance.

Precondition
is_empty() : call this at the beginning of xct.
!rll.is_empty() : then why the heck are you calling.

Definition at line 445 of file retrospective_lock_list.cpp.

References ASSERT_ND, foedus::xct::RetrospectiveLockList::assert_sorted(), assert_sorted(), foedus::xct::RetrospectiveLockList::get_array(), foedus::xct::RetrospectiveLockList::get_last_active_entry(), foedus::xct::RetrospectiveLockList::is_empty(), is_empty(), and foedus::xct::kLockListPositionInvalid.

Referenced by foedus::xct::Xct::activate().

445  {
446  ASSERT_ND(is_empty());
447  ASSERT_ND(!rll.is_empty());
448  rll.assert_sorted();
449  // Because now we use LockEntry for both RLL and CLL, we can do just one memcpy
450  std::memcpy(array_ + 1U, rll.get_array() + 1U, sizeof(LockEntry) * rll.get_last_active_entry());
451  last_active_entry_ = rll.get_last_active_entry();
452  last_locked_entry_ = kLockListPositionInvalid;
453  assert_sorted();
454 }

template<typename MCS_RW_IMPL >
void foedus::xct::CurrentLockList::release_all_after ( UniversalLockId  address,
MCS_RW_IMPL *  mcs_rw_impl 
)
inline

Release all locks in CLL whose addresses are canonically ordered after the parameter.

This is used where we need to rule out the risk of deadlock. Unlike clear_entries(), this leaves the entries.
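
A simplified standalone model (illustration-only types, not the FOEDUS classes or the MCS implementation) of how a caller uses this to restore canonical mode: if the lock it now wants is ordered before the largest lock already taken, it first releases everything at and after that position, and only then acquires the earlier lock without deadlock risk.

#include <cassert>
#include <cstdint>
#include <vector>

struct ModelEntry { uint64_t id_; bool locked_; };

struct ModelCll {
  std::vector<ModelEntry> entries_;  // kept sorted by id_

  uint64_t max_locked_id() const {
    uint64_t result = 0;  // 0 plays the role of kNullUniversalLockId here
    for (const ModelEntry& e : entries_) {
      if (e.locked_) { result = e.id_; }
    }
    return result;
  }
  void release_all_at_and_after(uint64_t address) {
    for (ModelEntry& e : entries_) {
      if (e.id_ >= address) { e.locked_ = false; }
    }
  }
};

int main() {
  ModelCll cll;
  cll.entries_ = { {10, true}, {30, true}, {50, true} };
  const uint64_t wanted = 20;               // ordered before the already-locked 30 and 50
  if (cll.max_locked_id() >= wanted) {
    cll.release_all_at_and_after(wanted);   // back to canonical mode
  }
  assert(cll.max_locked_id() == 10);        // 30 and 50 released; 20 can now be taken
  return 0;                                 // unconditionally without deadlock risk
}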

Definition at line 753 of file retrospective_lock_list.hpp.

References assert_last_locked_entry(), ASSERT_ND, assert_sorted(), begin(), end(), foedus::xct::kLockListPositionInvalid, foedus::xct::kNoLock, foedus::xct::kReadLock, and foedus::xct::kWriteLock.

Referenced by foedus::thread::ThreadPimpl::cll_release_all_locks_after(), and release_all_locks().

753  {
754  // Only this and below logics are implemented here because this needs to know about CLL.
755  // This is not quite about the lock algorithm itself. It's something on higher level.
756  assert_sorted();
757  uint32_t released_read_locks = 0;
758  uint32_t released_write_locks = 0;
759  uint32_t already_released_locks = 0;
760  uint32_t canceled_async_read_locks = 0;
761  uint32_t canceled_async_write_locks = 0;
762 
763  LockListPosition new_last_locked_entry = kLockListPositionInvalid;
764  for (LockEntry* entry = begin(); entry != end(); ++entry) {
765  if (entry->universal_lock_id_ <= address) {
766  if (entry->is_locked()) {
767  new_last_locked_entry = entry - array_;
768  }
769  continue;
770  }
771  if (entry->is_locked()) {
772  if (entry->taken_mode_ == kReadLock) {
773  mcs_rw_impl->release_rw_reader(entry->lock_->get_key_lock(), entry->mcs_block_);
774  ++released_read_locks;
775  } else {
776  ASSERT_ND(entry->taken_mode_ == kWriteLock);
777  mcs_rw_impl->release_rw_writer(entry->lock_->get_key_lock(), entry->mcs_block_);
778  ++released_write_locks;
779  }
780  entry->mcs_block_ = 0;
781  entry->taken_mode_ = kNoLock;
782  } else if (entry->mcs_block_) {
783  // Not locked yet, but we have mcs_block_ set, this means we tried it in
784  // async mode, then still waiting or at least haven't confirmed that we acquired it.
785  // Cancel these "retrieable" locks to which we already pushed our qnode.
786  ASSERT_ND(entry->taken_mode_ == kNoLock);
787  if (entry->preferred_mode_ == kReadLock) {
788  mcs_rw_impl->cancel_async_rw_reader(entry->lock_->get_key_lock(), entry->mcs_block_);
789  ++canceled_async_read_locks;
790  } else {
791  ASSERT_ND(entry->preferred_mode_ == kWriteLock);
792  mcs_rw_impl->cancel_async_rw_writer(entry->lock_->get_key_lock(), entry->mcs_block_);
793  ++canceled_async_write_locks;
794  }
795  entry->mcs_block_ = 0;
796  } else {
797  ASSERT_ND(entry->taken_mode_ == kNoLock);
798  ++already_released_locks;
799  }
800  }
801 
802  last_locked_entry_ = new_last_locked_entry;
803  assert_last_locked_entry();
804 
805 #ifndef NDEBUG
806  release_all_after_debuglog(
807  released_read_locks,
808  released_write_locks,
809  already_released_locks,
810  canceled_async_read_locks,
811  canceled_async_write_locks);
812 #endif // NDEBUG
813 }

template<typename MCS_RW_IMPL >
void foedus::xct::CurrentLockList::release_all_at_and_after ( UniversalLockId  address,
MCS_RW_IMPL *  mcs_rw_impl 
)
inline

same as release_all_after(address - 1)

Definition at line 815 of file retrospective_lock_list.hpp.

References foedus::xct::kNullUniversalLockId.

817  {
818  if (address == kNullUniversalLockId) {
819  release_all_after<MCS_RW_IMPL>(kNullUniversalLockId, mcs_rw_impl);
820  } else {
821  release_all_after<MCS_RW_IMPL>(address - 1U, mcs_rw_impl);
822  }
823 }
template<typename MCS_RW_IMPL >
void foedus::xct::CurrentLockList::release_all_locks ( MCS_RW_IMPL *  mcs_rw_impl)
inline
Release all locks in CLL.

Definition at line 473 of file retrospective_lock_list.hpp.

References foedus::xct::kNullUniversalLockId, and release_all_after().

Referenced by foedus::thread::ThreadPimpl::cll_release_all_locks().

473  {
474  release_all_after<MCS_RW_IMPL>(kNullUniversalLockId, mcs_rw_impl);
475  }

template<typename MCS_RW_IMPL >
bool foedus::xct::CurrentLockList::retry_async_single_lock ( LockListPosition  pos,
MCS_RW_IMPL *  mcs_rw_impl 
)
inline

Definition at line 696 of file retrospective_lock_list.hpp.

References assert_last_locked_entry(), ASSERT_ND, get_entry(), foedus::xct::RwLockableXctId::get_key_lock(), foedus::xct::LockEntry::is_enough(), foedus::xct::LockEntry::is_locked(), foedus::xct::kNoLock, foedus::xct::kReadLock, foedus::xct::kWriteLock, foedus::xct::LockEntry::lock_, foedus::xct::LockEntry::mcs_block_, foedus::xct::LockEntry::preferred_mode_, and foedus::xct::LockEntry::taken_mode_.

698  {
699  LockEntry* lock_entry = get_entry(pos);
700  // Must be not taken yet, and must have pushed a qnode to the lock queue
701  ASSERT_ND(!lock_entry->is_enough());
702  ASSERT_ND(lock_entry->taken_mode_ == kNoLock);
703  ASSERT_ND(lock_entry->mcs_block_);
704  ASSERT_ND(!lock_entry->is_locked());
705 
706  McsRwLock* lock_addr = lock_entry->lock_->get_key_lock();
707  bool acquired = false;
708  if (lock_entry->preferred_mode_ == kWriteLock) {
709  acquired = mcs_rw_impl->retry_async_rw_writer(lock_addr, lock_entry->mcs_block_);
710  } else {
711  ASSERT_ND(lock_entry->preferred_mode_ == kReadLock);
712  acquired = mcs_rw_impl->retry_async_rw_reader(lock_addr, lock_entry->mcs_block_);
713  }
714  if (acquired) {
715  lock_entry->taken_mode_ = lock_entry->preferred_mode_;
716  ASSERT_ND(lock_entry->is_locked());
717  last_locked_entry_ = std::max(last_locked_entry_, pos);
718  assert_last_locked_entry();
719  }
720  return acquired;
721 }

template<typename MCS_RW_IMPL >
void foedus::xct::CurrentLockList::try_async_multiple_locks ( LockListPosition  upto_pos,
MCS_RW_IMPL *  mcs_rw_impl 
)
inline

Definition at line 742 of file retrospective_lock_list.hpp.

References ASSERT_ND, foedus::xct::kLockListPositionInvalid, and try_async_single_lock().

744  {
745  ASSERT_ND(upto_pos != kLockListPositionInvalid);
746  ASSERT_ND(upto_pos <= last_active_entry_);
747  for (LockListPosition pos = 1U; pos <= upto_pos; ++pos) {
748  try_async_single_lock(pos, mcs_rw_impl);
749  }
750 }

template<typename MCS_RW_IMPL >
void foedus::xct::CurrentLockList::try_async_single_lock ( LockListPosition  pos,
MCS_RW_IMPL *  mcs_rw_impl 
)
inline

Definition at line 649 of file retrospective_lock_list.hpp.

References foedus::xct::AcquireAsyncRet::acquired_, assert_last_locked_entry(), ASSERT_ND, foedus::xct::AcquireAsyncRet::block_index_, calculate_last_locked_entry_from(), get_entry(), foedus::xct::RwLockableXctId::get_key_lock(), foedus::xct::LockEntry::is_enough(), foedus::xct::kNoLock, foedus::xct::kReadLock, foedus::xct::kWriteLock, foedus::xct::LockEntry::lock_, foedus::xct::LockEntry::mcs_block_, foedus::xct::LockEntry::preferred_mode_, and foedus::xct::LockEntry::taken_mode_.

Referenced by try_async_multiple_locks().

651  {
652  LockEntry* lock_entry = get_entry(pos);
653  if (lock_entry->is_enough()) {
654  return;
655  }
656  ASSERT_ND(lock_entry->taken_mode_ != kWriteLock);
657 
658  McsRwLock* lock_addr = lock_entry->lock_->get_key_lock();
659  if (lock_entry->taken_mode_ != kNoLock) {
660  ASSERT_ND(lock_entry->preferred_mode_ == kWriteLock);
661  ASSERT_ND(lock_entry->taken_mode_ == kReadLock);
662  ASSERT_ND(lock_entry->mcs_block_);
663  // This is reader->writer upgrade.
664  // We simply release read-lock first then take write-lock in this case.
665  // In traditional 2PL, such an unlock-then-lock violates serializability,
666  // but we guarantee serializability by read-verification anyways.
667  // We can release any lock anytime.. great flexibility!
668  mcs_rw_impl->release_rw_reader(lock_addr, lock_entry->mcs_block_);
669  lock_entry->taken_mode_ = kNoLock;
670  last_locked_entry_ = calculate_last_locked_entry_from(pos - 1U);
671  assert_last_locked_entry();
672  } else {
673  // This function is for pushing the queue node in the extended rwlock.
674  // Doomed if we already have a queue node.
675  ASSERT_ND(lock_entry->mcs_block_ == 0);
676  }
677 
678  // Don't really care canonical order here, just send out the request.
679  AcquireAsyncRet async_ret;
680  if (lock_entry->preferred_mode_ == kWriteLock) {
681  async_ret = mcs_rw_impl->acquire_async_rw_writer(lock_addr);
682  } else {
683  ASSERT_ND(lock_entry->preferred_mode_ == kReadLock);
684  async_ret = mcs_rw_impl->acquire_async_rw_reader(lock_addr);
685  }
686  ASSERT_ND(async_ret.block_index_);
687  lock_entry->mcs_block_ = async_ret.block_index_;
688  if (async_ret.acquired_) {
689  lock_entry->taken_mode_ = lock_entry->preferred_mode_;
690  ASSERT_ND(lock_entry->is_enough());
691  }
692  ASSERT_ND(lock_entry->mcs_block_);
693 }

template<typename MCS_RW_IMPL >
ErrorCode foedus::xct::CurrentLockList::try_or_acquire_multiple_locks ( LockListPosition  upto_pos,
MCS_RW_IMPL *  mcs_rw_impl 
)
inline

Acquire multiple locks up to the given position in canonical order.

This is invoked by the thread to keep itself in canonical mode. This method is unconditional, meaning it waits forever until the locks are acquired. Hence, it must be invoked while the thread is still in canonical mode; otherwise, it risks deadlock.
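
To see why acquiring in canonical order avoids deadlock, here is a small standalone illustration using plain std::mutex as a stand-in for the MCS rw-lock (an illustration-only assumption): both threads take the same two locks, always in ascending address order, so neither can hold a lock while waiting for one that the other holds in reverse order. Build with -pthread.

#include <algorithm>
#include <cassert>
#include <functional>
#include <mutex>
#include <thread>

int main() {
  std::mutex a, b;
  std::mutex* ordered[2] = {&a, &b};
  std::sort(ordered, ordered + 2, std::less<std::mutex*>());  // canonical order = ascending address

  int counter = 0;
  auto work = [&]() {
    for (int i = 0; i < 1000; ++i) {
      ordered[0]->lock();
      ordered[1]->lock();   // always acquired second: no circular wait is possible
      ++counter;            // protected by both locks
      ordered[1]->unlock();
      ordered[0]->unlock();
    }
  };
  std::thread t1(work), t2(work);
  t1.join();
  t2.join();
  assert(counter == 2000);
  return 0;
}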

Definition at line 636 of file retrospective_lock_list.hpp.

References ASSERT_ND, CHECK_ERROR_CODE, foedus::kErrorCodeOk, foedus::xct::kLockListPositionInvalid, and try_or_acquire_single_lock().

Referenced by foedus::thread::ThreadPimpl::cll_try_or_acquire_multiple_locks().

638  {
639  ASSERT_ND(upto_pos != kLockListPositionInvalid);
640  ASSERT_ND(upto_pos <= last_active_entry_);
641  // Especially in this case, we probably should release locks after upto_pos first.
642  for (LockListPosition pos = 1U; pos <= upto_pos; ++pos) {
643  CHECK_ERROR_CODE(try_or_acquire_single_lock(pos, mcs_rw_impl));
644  }
645  return kErrorCodeOk;
646 }

template<typename MCS_RW_IMPL >
ErrorCode foedus::xct::CurrentLockList::try_or_acquire_single_lock ( LockListPosition  pos,
MCS_RW_IMPL *  mcs_rw_impl 
)
inline

Methods below take or release locks, so they receive MCS_RW_IMPL, a template param.

Inline definitions of CurrentLockList methods below.

To avoid vtables and allow inlining, we define them at the bottom of this file. This method acquires one lock in this CLL.

It automatically checks whether we are in canonical mode: it acquires the lock unconditionally when in canonical mode (never returns until acquired), and tries the lock instantaneously when not in canonical mode (returns RaceAbort immediately).

These are inlined primarily because they receive a template param, not because we want to inline them for performance. We could do explicit instantiations, but they are not that lengthy either; just inlining them is easier in this case.
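
A simplified standalone sketch of that decision (MockLock and the error codes below are illustration-only stand-ins, not the MCS rw-lock or the FOEDUS error codes): when the requested position is beyond everything already locked, block until the lock is granted; otherwise only try, and report an abort-style error so the caller can retry via the RLL.

#include <cassert>
#include <cstdint>

enum ErrorCode { kErrorCodeOk, kErrorCodeXctLockAbort };

struct MockLock {
  bool held_by_other_;
  void acquire_unconditional() { held_by_other_ = false; }  // pretend we waited until granted
  bool try_acquire() { return !held_by_other_; }            // succeed only if currently free
};

ErrorCode try_or_acquire(uint32_t pos, uint32_t* last_locked_pos, MockLock* lock) {
  if (*last_locked_pos == 0 || *last_locked_pos < pos) {
    lock->acquire_unconditional();   // canonical mode: safe to wait forever
  } else if (!lock->try_acquire()) { // non-canonical: waiting could deadlock
    return kErrorCodeXctLockAbort;   // give up immediately; the RLL handles the next run
  }
  if (pos > *last_locked_pos) { *last_locked_pos = pos; }
  return kErrorCodeOk;
}

int main() {
  uint32_t last_locked = 3;
  MockLock contended{true};
  // Position 2 is before an already-locked position, so only a try is allowed.
  assert(try_or_acquire(2, &last_locked, &contended) == kErrorCodeXctLockAbort);
  MockLock free_lock{false};
  assert(try_or_acquire(5, &last_locked, &free_lock) == kErrorCodeOk);
  assert(last_locked == 5);
  return 0;
}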

Definition at line 569 of file retrospective_lock_list.hpp.

References assert_last_locked_entry(), ASSERT_ND, calculate_last_locked_entry_from(), get_entry(), foedus::xct::RwLockableXctId::get_key_lock(), foedus::xct::LockEntry::is_enough(), foedus::xct::LockEntry::is_locked(), foedus::kErrorCodeOk, foedus::kErrorCodeXctLockAbort, foedus::xct::kLockListPositionInvalid, foedus::xct::kNoLock, foedus::xct::kReadLock, foedus::xct::kWriteLock, foedus::xct::LockEntry::lock_, foedus::xct::LockEntry::mcs_block_, foedus::xct::LockEntry::preferred_mode_, and foedus::xct::LockEntry::taken_mode_.

Referenced by foedus::thread::ThreadPimpl::cll_try_or_acquire_single_lock(), and try_or_acquire_multiple_locks().

571  {
572  LockEntry* lock_entry = get_entry(pos);
573  if (lock_entry->is_enough()) {
574  return kErrorCodeOk;
575  }
576  ASSERT_ND(lock_entry->taken_mode_ != kWriteLock);
577 
578  McsRwLock* lock_addr = lock_entry->lock_->get_key_lock();
579  if (lock_entry->taken_mode_ != kNoLock) {
580  ASSERT_ND(lock_entry->preferred_mode_ == kWriteLock);
581  ASSERT_ND(lock_entry->taken_mode_ == kReadLock);
582  ASSERT_ND(lock_entry->mcs_block_);
583  // This is reader->writer upgrade.
584  // We simply release read-lock first then take write-lock in this case.
585  // In traditional 2PL, such an unlock-then-lock violates serializability,
586  // but we guarantee serializability by read-verification anyways.
587  // We can release any lock anytime.. great flexibility!
588  mcs_rw_impl->release_rw_reader(lock_addr, lock_entry->mcs_block_);
589  lock_entry->taken_mode_ = kNoLock;
590  last_locked_entry_ = calculate_last_locked_entry_from(pos - 1U);
591  assert_last_locked_entry();
592  } else {
593  // This method is for unconditional acquire and try, not async/retry.
594  // If we have a queue node already, something was misused.
595  ASSERT_ND(lock_entry->mcs_block_ == 0);
596  }
597 
598  // Now we need to take the lock. Are we in canonical mode?
599  if (last_locked_entry_ == kLockListPositionInvalid || last_locked_entry_ < pos) {
600  // yay, we are in canonical mode. we can unconditionally get the lock
601  ASSERT_ND(lock_entry->taken_mode_ == kNoLock); // not a lock upgrade, either
602  if (lock_entry->preferred_mode_ == kWriteLock) {
603  lock_entry->mcs_block_ = mcs_rw_impl->acquire_unconditional_rw_writer(lock_addr);
604  } else {
605  ASSERT_ND(lock_entry->preferred_mode_ == kReadLock);
606  lock_entry->mcs_block_ = mcs_rw_impl->acquire_unconditional_rw_reader(lock_addr);
607  }
608  lock_entry->taken_mode_ = lock_entry->preferred_mode_;
609  last_locked_entry_ = pos;
610  } else {
611  // hmm, we violated canonical mode. has a risk of deadlock.
612  // Let's just try acquire the lock and immediately give up if it fails.
613  // The RLL will take care of the next run.
614  // TODO(Hideaki) release some of the lock we have taken to restore canonical mode.
615  // We haven't implemented this optimization yet.
616  ASSERT_ND(lock_entry->mcs_block_ == 0);
617  ASSERT_ND(lock_entry->taken_mode_ == kNoLock);
618  if (lock_entry->preferred_mode_ == kWriteLock) {
619  lock_entry->mcs_block_ = mcs_rw_impl->acquire_try_rw_writer(lock_addr);
620  } else {
621  ASSERT_ND(lock_entry->preferred_mode_ == kReadLock);
622  lock_entry->mcs_block_ = mcs_rw_impl->acquire_try_rw_reader(lock_addr);
623  }
624  if (lock_entry->mcs_block_ == 0) {
625  return kErrorCodeXctLockAbort;
626  }
627  lock_entry->taken_mode_ = lock_entry->preferred_mode_;
628  }
629  ASSERT_ND(lock_entry->mcs_block_);
630  ASSERT_ND(lock_entry->is_locked());
631  assert_last_locked_entry();
632  return kErrorCodeOk;
633 }

void foedus::xct::CurrentLockList::uninit ( )

Definition at line 109 of file retrospective_lock_list.cpp.

References clear_entries().

Referenced by ~CurrentLockList().

109  {
110  array_ = nullptr;
111  capacity_ = 0;
112  clear_entries();
113 }


Friends And Related Function Documentation

std::ostream& operator<< ( std::ostream &  o,
const CurrentLockList &  v 
)
friend

Definition at line 132 of file retrospective_lock_list.cpp.

132  {
133  o << "<CurrentLockList>"
134  << "<Capacity>" << v.capacity_ << "</Capacity>"
135  << "<LastActiveEntry>" << v.last_active_entry_ << "</LastActiveEntry>"
136  << "<LastLockedEntry>" << v.last_locked_entry_ << "</LastLockedEntry>";
137  const uint32_t kMaxShown = 32U;
138  for (auto i = 1U; i <= std::min(v.last_active_entry_, kMaxShown); ++i) {
139  o << v.array_[i];
140  }
141  o << "</CurrentLockList>";
142  return o;
143 }

The documentation for this class was generated from the following files:

retrospective_lock_list.hpp
retrospective_lock_list.cpp