libfoedus-core
FOEDUS Core Library
foedus::xct::RetrospectiveLockList Class Reference

Retrospective lock list. More...

Detailed Description

Retrospective lock list.

This is NOT a POD because we need dynamic memory for the list.

Note
This object itself is thread-private. No concurrency control needed.

Definition at line 189 of file retrospective_lock_list.hpp.

#include <retrospective_lock_list.hpp>

Public Member Functions

 RetrospectiveLockList ()
 Init/Uninit. More...
 
 ~RetrospectiveLockList ()
 
void init (LockEntry *array, uint32_t capacity, const memory::GlobalVolatilePageResolver &resolver)
 
void uninit ()
 
void clear_entries ()
 
LockListPosition binary_search (UniversalLockId lock) const
 Analogous to std::binary_search() for the given lock. More...
 
LockListPosition lower_bound (UniversalLockId lock) const
 Analogous to std::lower_bound() for the given lock. More...
 
void construct (thread::Thread *context, uint32_t read_lock_threshold)
 Fill out this retrospective lock list for the next run of the given transaction. More...
 
const LockEntry * get_array () const
 
LockEntry * get_entry (LockListPosition pos)
 
const LockEntry * get_entry (LockListPosition pos) const
 
uint32_t get_capacity () const
 
LockListPosition get_last_active_entry () const
 
bool is_valid_entry (LockListPosition pos) const
 
bool is_empty () const
 
void assert_sorted () const __attribute__((always_inline))
 
void assert_sorted_impl () const
 
LockEntry * begin ()
 
LockEntry * end ()
 
const LockEntry * cbegin () const
 
const LockEntry * cend () const
 
const memory::GlobalVolatilePageResolver & get_volatile_page_resolver () const
 

Friends

std::ostream & operator<< (std::ostream &o, const RetrospectiveLockList &v)
 

Constructor & Destructor Documentation

foedus::xct::RetrospectiveLockList::RetrospectiveLockList ( )

Init/Uninit.

Definition at line 39 of file retrospective_lock_list.cpp.

References clear_entries().

39  {
40  array_ = nullptr;
41  capacity_ = 0;
42  clear_entries();
43 }

foedus::xct::RetrospectiveLockList::~RetrospectiveLockList ( )

Definition at line 45 of file retrospective_lock_list.cpp.

References uninit().

Member Function Documentation

void foedus::xct::RetrospectiveLockList::assert_sorted ( ) const
inline

Definition at line 548 of file retrospective_lock_list.hpp.

References assert_sorted_impl().

Referenced by construct(), and foedus::xct::CurrentLockList::prepopulate_for_retrospective_lock_list().

548  {
549  // In release mode, this code must be completely erased by compiler
550 #ifndef NDEBUG
551  assert_sorted_impl();
552 #endif // NDEBUG
553 }

void foedus::xct::RetrospectiveLockList::assert_sorted_impl ( ) const

Definition at line 182 of file retrospective_lock_list.cpp.

References foedus::xct::lock_assert_sorted().

Referenced by assert_sorted().

182  {
183  lock_assert_sorted(*this);
184 }
LockEntry* foedus::xct::RetrospectiveLockList::begin ( )
inline

Definition at line 254 of file retrospective_lock_list.hpp.

254 { return array_ + 1U; }
LockListPosition foedus::xct::RetrospectiveLockList::binary_search ( UniversalLockId  lock) const

Analogous to std::binary_search() for the given lock.

Returns
Index of an entry whose lock_ == lock. kLockListPositionInvalid if not found.

Definition at line 193 of file retrospective_lock_list.cpp.

Referenced by foedus::xct::Xct::on_record_read_take_locks_if_needed().

193  {
194  return lock_binary_search<RetrospectiveLockList, LockEntry>(*this, lock);
195 }

const LockEntry* foedus::xct::RetrospectiveLockList::cbegin ( ) const
inline

Definition at line 256 of file retrospective_lock_list.hpp.

256 { return array_ + 1U; }
const LockEntry* foedus::xct::RetrospectiveLockList::cend ( ) const
inline

Definition at line 257 of file retrospective_lock_list.hpp.

257 { return array_ + 1U + last_active_entry_; }
void foedus::xct::RetrospectiveLockList::clear_entries ( )

Definition at line 59 of file retrospective_lock_list.cpp.

References foedus::xct::kLockListPositionInvalid, foedus::xct::kNoLock, foedus::xct::LockEntry::lock_, foedus::xct::LockEntry::preferred_mode_, foedus::xct::LockEntry::taken_mode_, and foedus::xct::LockEntry::universal_lock_id_.

Referenced by foedus::xct::XctManagerPimpl::abort_xct(), init(), foedus::xct::Xct::on_record_read_take_locks_if_needed(), foedus::xct::XctManagerPimpl::precommit_xct(), RetrospectiveLockList(), and uninit().

59  {
60  last_active_entry_ = kLockListPositionInvalid;
61  if (array_) {
62  // Dummy entry
63  array_[kLockListPositionInvalid].universal_lock_id_ = 0;
64  array_[kLockListPositionInvalid].lock_ = nullptr;
65  array_[kLockListPositionInvalid].preferred_mode_ = kNoLock;
66  array_[kLockListPositionInvalid].taken_mode_ = kNoLock;
67  }
68 }
void foedus::xct::RetrospectiveLockList::construct ( thread::Thread *  context,
uint32_t  read_lock_threshold 
)

Fill out this retrospective lock list for the next run of the given transaction.

Parameters
[in]	context	The thread conveying the transaction. It must currently be running a xct.
[in]	read_lock_threshold	We "recommend" a read lock in RLL for records whose page has this value or more in the temperature-stat. This value should be a bit lower than the threshold at which we trigger read-locks without RLL. Otherwise, the next run might often take a read-lock that the RLL discarded due to a concurrent abort, which might violate canonical order.

This method is invoked when a transaction aborts at precommit due to some conflict. Based on the current transaction's read/write-sets, this builds an RLL that contains locks we probably should take next time in a canonical order.

Note
we should keep an eye on the CPU cost of this method. Hopefully negligible. We can speed up this method by merging RLL/CLL to read/write-set in xct so that we don't need another sorting.

Definition at line 252 of file retrospective_lock_list.cpp.

References ASSERT_ND, assert_sorted(), foedus::thread::Thread::get_current_xct(), foedus::storage::Page::get_header(), foedus::xct::Xct::get_read_set(), foedus::xct::Xct::get_read_set_size(), foedus::xct::Xct::get_write_set(), foedus::xct::Xct::get_write_set_size(), foedus::storage::PageHeader::hotness_, foedus::xct::Xct::is_active(), foedus::xct::kLockListPositionInvalid, foedus::xct::kNoLock, foedus::xct::kReadLock, foedus::xct::kWriteLock, foedus::xct::LockEntry::preferred_mode_, foedus::xct::LockEntry::set(), foedus::storage::to_page(), foedus::assorted::ProbCounter::value_, foedus::xct::RwLockableXctId::xct_id_, and foedus::xct::xct_id_to_universal_lock_id().

Referenced by foedus::xct::XctManagerPimpl::abort_xct().

252  {
253  Xct* xct = &context->get_current_xct();
254  ASSERT_ND(xct->is_active());
255  // We currently hold read/write-set separately. So, we need to sort and merge them.
256  ReadXctAccess* read_set = xct->get_read_set();
257  const uint32_t read_set_size = xct->get_read_set_size();
258  WriteXctAccess* write_set = xct->get_write_set();
259  const uint32_t write_set_size = xct->get_write_set_size();
260  ASSERT_ND(capacity_ >= read_set_size + write_set_size);
261 
262  last_active_entry_ = kLockListPositionInvalid;
263  if (read_set_size == 0 && write_set_size == 0) {
264  return;
265  }
266 
267  for (uint32_t i = 0; i < read_set_size; ++i) {
268  RwLockableXctId* lock = read_set[i].owner_id_address_;
269  storage::Page* page = storage::to_page(lock);
270  if (page->get_header().hotness_.value_ < read_lock_threshold
271  && lock->xct_id_ == read_set[i].observed_owner_id_) {
272  // We also add it to RLL whenever we observed a verification error.
273  continue;
274  }
275 
276  auto pos = issue_new_position();
277  ASSERT_ND(
278  read_set[i].owner_lock_id_ ==
279  xct_id_to_universal_lock_id(volatile_page_resolver_, lock));
280  array_[pos].set(read_set[i].owner_lock_id_, lock, kReadLock, kNoLock);
281  }
282  DVLOG(1) << "Added " << last_active_entry_ << " to RLL for read-locks";
283 
284  // Writes are always added to RLL.
285  for (uint32_t i = 0; i < write_set_size; ++i) {
286  auto pos = issue_new_position();
287  ASSERT_ND(
288  write_set[i].owner_lock_id_ ==
289  xct_id_to_universal_lock_id(volatile_page_resolver_, write_set[i].owner_id_address_));
290  array_[pos].set(
291  write_set[i].owner_lock_id_,
292  write_set[i].owner_id_address_,
293  kWriteLock,
294  kNoLock);
295  }
296 
297  // Now, the entries are not sorted and we might have duplicates.
298  // Sort them, and merge entries for the same record.
299  // std::set? no joke. we can't afford heap allocation here.
300  ASSERT_ND(last_active_entry_ != kLockListPositionInvalid);
301  std::sort(array_ + 1U, array_ + last_active_entry_ + 1U);
302  LockListPosition prev_pos = 1U;
303  uint32_t merged_count = 0U;
304  for (LockListPosition pos = 2U; pos <= last_active_entry_; ++pos) {
305  ASSERT_ND(prev_pos < pos);
306  ASSERT_ND(array_[prev_pos].universal_lock_id_ <= array_[pos].universal_lock_id_);
307  if (array_[prev_pos].universal_lock_id_ == array_[pos].universal_lock_id_) {
308  // Merge!
309  if (array_[pos].preferred_mode_ == kWriteLock) {
310  array_[prev_pos].preferred_mode_ = kWriteLock;
311  }
312  ++merged_count;
313  } else {
314  // No merge.
315  if (prev_pos + 1U < pos) {
316  std::memcpy(array_ + prev_pos + 1U, array_ + pos, sizeof(LockEntry));
317  }
318  ++prev_pos;
319  }
320  }
321 
322  // For example, last_active_entry_ was 3 (0=Dummy, 1=A, 2=A, 3=B),
323  // prev_pos becomes 2 while merged count is 1.
324  ASSERT_ND(prev_pos + merged_count == last_active_entry_);
325  last_active_entry_ = prev_pos;
326  assert_sorted();
327 }
LockEntry* foedus::xct::RetrospectiveLockList::end ( )
inline

Definition at line 255 of file retrospective_lock_list.hpp.

255 { return array_ + 1U + last_active_entry_; }
const LockEntry* foedus::xct::RetrospectiveLockList::get_array ( ) const
inline

Definition at line 234 of file retrospective_lock_list.hpp.

Referenced by foedus::xct::Xct::on_record_read_take_locks_if_needed(), and foedus::xct::CurrentLockList::prepopulate_for_retrospective_lock_list().

234 { return array_; }

uint32_t foedus::xct::RetrospectiveLockList::get_capacity ( ) const
inline

Definition at line 243 of file retrospective_lock_list.hpp.

243 { return capacity_; }
LockEntry* foedus::xct::RetrospectiveLockList::get_entry ( LockListPosition  pos)
inline

Definition at line 235 of file retrospective_lock_list.hpp.

References ASSERT_ND, and is_valid_entry().

235  {
237  return array_ + pos;
238  }
const LockEntry* foedus::xct::RetrospectiveLockList::get_entry ( LockListPosition  pos) const
inline

Definition at line 239 of file retrospective_lock_list.hpp.

References ASSERT_ND, and is_valid_entry().

239  {
241  return array_ + pos;
242  }
LockListPosition foedus::xct::RetrospectiveLockList::get_last_active_entry ( ) const
inline

Definition at line 244 of file retrospective_lock_list.hpp.

Referenced by foedus::xct::XctManagerPimpl::begin_xct(), and foedus::xct::CurrentLockList::prepopulate_for_retrospective_lock_list().

244 { return last_active_entry_; }

const memory::GlobalVolatilePageResolver& foedus::xct::RetrospectiveLockList::get_volatile_page_resolver ( ) const
inline

Definition at line 258 of file retrospective_lock_list.hpp.

Referenced by foedus::xct::Xct::add_to_read_set(), and foedus::xct::Xct::add_to_write_set().

258  {
259  return volatile_page_resolver_;
260  }

void foedus::xct::RetrospectiveLockList::init ( LockEntry *  array,
uint32_t  capacity,
const memory::GlobalVolatilePageResolver &  resolver 
)

Definition at line 49 of file retrospective_lock_list.cpp.

References clear_entries().

Referenced by foedus::xct::Xct::initialize().

52  {
53  array_ = array;
54  capacity_ = capacity;
55  volatile_page_resolver_ = resolver;
56  clear_entries();
57 }

bool foedus::xct::RetrospectiveLockList::is_empty ( ) const
inline

Definition at line 248 of file retrospective_lock_list.hpp.

References foedus::xct::kLockListPositionInvalid.

Referenced by foedus::xct::Xct::activate(), foedus::xct::Xct::on_record_read_take_locks_if_needed(), and foedus::xct::CurrentLockList::prepopulate_for_retrospective_lock_list().

248 { return last_active_entry_ == kLockListPositionInvalid; }
bool foedus::xct::RetrospectiveLockList::is_valid_entry ( LockListPosition  pos) const
inline

Definition at line 245 of file retrospective_lock_list.hpp.

References foedus::xct::kLockListPositionInvalid.

Referenced by get_entry().

245  {
246  return pos != kLockListPositionInvalid && pos <= last_active_entry_;
247  }
LockListPosition foedus::xct::RetrospectiveLockList::lower_bound ( UniversalLockId  lock) const

Analogous to std::lower_bound() for the given lock.

Returns
Index of the first entry whose lock_ is not less than lock. If there is no such entry, last_active_entry_ + 1U, whether last_active_entry_ is 0 or not. Thus the return value is always > 0. This is to imitate std::lower_bound's behavior.

Definition at line 200 of file retrospective_lock_list.cpp.

200  {
201  return lock_lower_bound<RetrospectiveLockList, LockEntry>(*this, lock);
202 }
void foedus::xct::RetrospectiveLockList::uninit ( )

Definition at line 70 of file retrospective_lock_list.cpp.

References clear_entries().

Referenced by ~RetrospectiveLockList().

70  {
71  array_ = nullptr;
72  capacity_ = 0;
73  clear_entries();
74 }

Friends And Related Function Documentation

std::ostream& operator<< ( std::ostream &  o,
const RetrospectiveLockList &  v 
)
friend

Definition at line 145 of file retrospective_lock_list.cpp.

145  {
146  o << "<RetrospectiveLockList>"
147  << "<Capacity>" << v.capacity_ << "</Capacity>"
148  << "<LastActiveEntry>" << v.last_active_entry_ << "</LastActiveEntry>";
149  const uint32_t kMaxShown = 32U;
150  for (auto i = 1U; i <= std::min(v.last_active_entry_, kMaxShown); ++i) {
151  o << v.array_[i];
152  }
153  o << "</RetrospectiveLockList>";
154  return o;
155 }

The documentation for this class was generated from the following files:

retrospective_lock_list.hpp
retrospective_lock_list.cpp