libfoedus-core
FOEDUS Core Library
foedus::xct::SysxctLockList Class Reference

RLL/CLL of a system transaction. More...

Detailed Description

RLL/CLL of a system transaction.

Unlike a normal transaction, the lock-list is simpler in sysxct. A sysxct needs neither a read-set nor asynchronous locking: it immediately takes the locks required by its logic, and aborts if it cannot. Unlike a normal xct, a sysxct also acquires locks only a few times (counting "lock all records in this page" as one acquisition). We thus integrate RLL and CLL into this single list in system transactions. We might follow this approach in normal xct later.

This object is a POD. We use a fixed-size array.
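To make the layout concrete, here is a minimal toy sketch of the same idea: a POD with a fixed-size array kept sorted by universal lock ID, with position 0 reserved as the invalid sentinel. `ToyEntry`, `ToyLockList`, and the insertion helper are illustrative stand-ins, not the actual FOEDUS definitions.

```cpp
#include <cstdint>
#include <cstring>

// Toy sketch of the SysxctLockList layout (illustrative names, NOT the actual
// FOEDUS definitions): a POD holding a fixed-size array that is kept sorted by
// universal lock ID. Position 0 is a sentinel, so valid positions start at 1,
// mirroring kLockListPositionInvalid == 0 in the real code.
using ToyUniversalLockId = uint64_t;
using ToyLockListPosition = uint32_t;

struct ToyEntry {
  ToyUniversalLockId universal_lock_id_;
  bool locked_;
};

struct ToyLockList {
  enum Constants { kMaxLocks = 1 << 10 };
  ToyEntry array_[kMaxLocks];
  ToyLockListPosition last_active_entry_;

  void init() {
    std::memset(array_, 0, sizeof(array_));
    last_active_entry_ = 0;  // empty; position 0 is the invalid sentinel
  }

  // Find or insert an entry, shifting the tail right to keep the array sorted.
  // (A linear scan stands in for the real lower_bound-based search.)
  ToyLockListPosition get_or_add(ToyUniversalLockId id) {
    ToyLockListPosition pos = 1;
    while (pos <= last_active_entry_ && array_[pos].universal_lock_id_ < id) {
      ++pos;
    }
    if (pos <= last_active_entry_ && array_[pos].universal_lock_id_ == id) {
      return pos;  // existing entry reused
    }
    std::memmove(array_ + pos + 1, array_ + pos,
                 sizeof(ToyEntry) * (last_active_entry_ + 1 - pos));
    array_[pos] = ToyEntry{id, false};
    ++last_active_entry_;
    return pos;
  }
};
```

Real entries additionally carry the raw lock address, a page/record flag, and MCS block state; the sketch keeps only what is needed to show the sorted, 1-indexed layout.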

Definition at line 166 of file sysxct_impl.hpp.

#include <sysxct_impl.hpp>

Public Types

enum  Constants { kMaxSysxctLocks = 1 << 10 }
 

Public Member Functions

 SysxctLockList ()
 
 ~SysxctLockList ()
 
 SysxctLockList (const SysxctLockList &other)=delete
 
SysxctLockList & operator= (const SysxctLockList &other)=delete
 
void init ()
 
void clear_entries (UniversalLockId enclosing_max_lock_id)
 Remove all entries. More...
 
void compress_entries (UniversalLockId enclosing_max_lock_id)
 Unlike clear_entries(), this is used when a sysxct is aborted and will be retried. More...
 
LockListPosition lower_bound (UniversalLockId lock) const
 Analogous to std::lower_bound() for the given lock. More...
 
template<typename MCS_ADAPTOR >
ErrorCode request_record_lock (MCS_ADAPTOR mcs_adaptor, storage::VolatilePagePointer page_id, RwLockableXctId *lock)
 If not yet acquired, acquires a record lock and adds an entry to this list, re-sorting part of the list if necessary to keep the sortedness. More...
 
template<typename MCS_ADAPTOR >
ErrorCode batch_request_record_locks (MCS_ADAPTOR mcs_adaptor, storage::VolatilePagePointer page_id, uint32_t lock_count, RwLockableXctId **locks)
 Used to acquire many locks in a page at once. More...
 
template<typename MCS_ADAPTOR >
ErrorCode request_page_lock (MCS_ADAPTOR mcs_adaptor, storage::Page *page)
 Acquires a page lock. More...
 
template<typename MCS_ADAPTOR >
ErrorCode batch_request_page_locks (MCS_ADAPTOR mcs_adaptor, uint32_t lock_count, storage::Page **pages)
 The interface is the same as the record version, but this one doesn't do much optimization. More...
 
template<typename MCS_ADAPTOR >
void release_all_locks (MCS_ADAPTOR mcs_adaptor)
 Releases all locks that were acquired. More...
 
LockListPosition get_or_add_entry (storage::VolatilePagePointer page_id, uintptr_t lock_addr, bool page_lock)
 
LockListPosition batch_get_or_add_entries (storage::VolatilePagePointer page_id, uint32_t lock_count, uintptr_t *lock_addr, bool page_lock)
 Batched version of get_or_add_entry(). More...
 
const SysxctLockEntry * get_array () const
 
SysxctLockEntry * get_array ()
 
SysxctLockEntry * get_entry (LockListPosition pos)
 
const SysxctLockEntry * get_entry (LockListPosition pos) const
 
uint32_t get_capacity () const
 
bool is_full () const
 When this returns true (the list is full), it's catastrophic. More...
 
LockListPosition get_last_active_entry () const
 
bool is_valid_entry (LockListPosition pos) const
 
bool is_empty () const
 
void assert_sorted () const __attribute__((always_inline))
 Inline definitions of SysxctLockList methods. More...
 
void assert_sorted_impl () const
 
SysxctLockEntry * begin ()
 
SysxctLockEntry * end ()
 
const SysxctLockEntry * cbegin () const
 
const SysxctLockEntry * cend () const
 
LockListPosition to_pos (const SysxctLockEntry *entry) const
 
LockListPosition get_last_locked_entry () const
 
LockListPosition calculate_last_locked_entry () const
 Calculate last_locked_entry_ by really checking the whole list. More...
 
LockListPosition calculate_last_locked_entry_from (LockListPosition from) const
 Only searches among entries at or before "from". More...
 
void assert_last_locked_entry () const
 
UniversalLockId get_enclosing_max_lock_id () const
 
bool is_try_mode_required (LockListPosition pos) const
 

Friends

struct TestIsTryRequired
 
std::ostream & operator<< (std::ostream &o, const SysxctLockList &v)
 

Member Enumeration Documentation

Enumerator
kMaxSysxctLocks 

Maximum number of locks one system transaction might take.

In most cases just a few. The worst case is page-split where a sysxct takes a page-full of records.

Definition at line 170 of file sysxct_impl.hpp.

170  {
176  kMaxSysxctLocks = 1 << 10,
177  };

Constructor & Destructor Documentation

foedus::xct::SysxctLockList::SysxctLockList ( )
inline

Definition at line 179 of file sysxct_impl.hpp.

References init().

179 { init(); }


foedus::xct::SysxctLockList::~SysxctLockList ( )
inline

Definition at line 180 of file sysxct_impl.hpp.

180 {}
foedus::xct::SysxctLockList::SysxctLockList ( const SysxctLockList &  other )
delete

Member Function Documentation

void foedus::xct::SysxctLockList::assert_last_locked_entry ( ) const
inline

Definition at line 345 of file sysxct_impl.hpp.

References ASSERT_ND, and calculate_last_locked_entry().

Referenced by assert_sorted_impl(), and lower_bound().

345  {
346 #ifndef NDEBUG
347  const LockListPosition correct = calculate_last_locked_entry();
348  ASSERT_ND(correct == last_locked_entry_);
349 #endif // NDEBUG
350  }

void foedus::xct::SysxctLockList::assert_sorted ( ) const
inline

Inline definitions of SysxctLockList methods.

Definition at line 549 of file sysxct_impl.hpp.

References assert_sorted_impl().

Referenced by batch_get_or_add_entries(), clear_entries(), compress_entries(), and get_or_add_entry().

549  {
550  // In release mode, this code must be completely erased by compiler
551 #ifndef NDEBUG
552  assert_sorted_impl();
553 #endif // NDEBUG
554 }

void foedus::xct::SysxctLockList::assert_sorted_impl ( ) const

Definition at line 78 of file sysxct_impl.cpp.

References assert_last_locked_entry(), ASSERT_ND, get_array(), get_last_active_entry(), foedus::storage::Page::get_volatile_page_id(), foedus::xct::kLockListPositionInvalid, foedus::xct::kNullUniversalLockId, foedus::storage::to_page(), and foedus::xct::to_universal_lock_id().

Referenced by assert_sorted().

78  {
80  const SysxctLockEntry* array = get_array();
81  ASSERT_ND(array[kLockListPositionInvalid].universal_lock_id_ == 0);
83  ASSERT_ND(array[kLockListPositionInvalid].mcs_block_ == 0);
84  const LockListPosition last_active_entry = get_last_active_entry();
85  for (LockListPosition pos = 2U; pos <= last_active_entry; ++pos) {
86  ASSERT_ND(array[pos - 1U].universal_lock_id_ < array[pos].universal_lock_id_);
87  ASSERT_ND(array[pos].universal_lock_id_ != 0);
88  ASSERT_ND(array[pos].lock_ != kNullUniversalLockId);
89  const storage::Page* page = storage::to_page(reinterpret_cast<void*>(array[pos].lock_));
90  auto page_id = page->get_volatile_page_id();
91  ASSERT_ND(array[pos].universal_lock_id_ == to_universal_lock_id(page_id, array[pos].lock_));
92  }
93 }

LockListPosition foedus::xct::SysxctLockList::batch_get_or_add_entries ( storage::VolatilePagePointer  page_id,
uint32_t  lock_count,
uintptr_t *  lock_addr,
bool  page_lock 
)

Batched version of get_or_add_entry().

Returns
position of the first entry that corresponds to any of the locks, either added or found. Returns kLockListPositionInvalid iff lock_count == 0.
Note
lock_addr does NOT have to be sorted. This function sorts it first (that's why it's not const param). However, you SHOULD give this method a sorted array if it's easy to do. This method first checks std::is_sorted() to skip sorting.

Definition at line 152 of file sysxct_impl.cpp.

References ASSERT_ND, assert_sorted(), get_last_active_entry(), get_or_add_entry(), foedus::xct::kLockListPositionInvalid, kMaxSysxctLocks, lower_bound(), foedus::xct::SysxctLockEntry::set(), foedus::xct::to_universal_lock_id(), foedus::xct::SysxctLockEntry::universal_lock_id_, and UNLIKELY.

156  {
157  ASSERT_ND(lock_count);
159  if (UNLIKELY(get_last_active_entry() + lock_count > kMaxSysxctLocks)) {
160  LOG(FATAL) << "SysxctLockList full. This must not happen";
161  }
162  if (lock_count == 0) {
163  return kLockListPositionInvalid;
164  }
165 
166  // Address order within the same page guarantees the Universal lock ID order.
167  if (std::is_sorted(lock_addr, lock_addr + lock_count)) {
168  DVLOG(3) << "Okay, sorted.";
169  } else {
170  // Most likely we hit this case for page locks (just 2 or 3 locks)
171  // Otherwise something wrong...
172  if (UNLIKELY(lock_count > 5U)) {
173  LOG(INFO) << "Given array is not sorted. We have to sort it here. Is it intended??";
174  }
175  std::sort(lock_addr, lock_addr + lock_count);
176  // If this is the majority case, the is_sorted() is a wasted cost (std::sort often
177  // do something like this internally), but this should almost never happen. In that
178  // case, std::is_sorted() is faster.
179  }
180 
181  const UniversalLockId min_id = to_universal_lock_id(page_id, lock_addr[0]);
182  const UniversalLockId max_id = to_universal_lock_id(page_id, lock_addr[lock_count - 1]);
183  ASSERT_ND(min_id <= max_id);
184 
185  // We optimize for the case where this list has nothing in [min_lock_addr,max_lock_addr].
186  // When this is not the case, we just call get_or_add_entry with for loop.
187  // A large lock_count should mean page split sysxct, so this should never happen.
188 
189  const LockListPosition insert_pos = lower_bound(min_id);
190  ASSERT_ND(insert_pos != kLockListPositionInvalid);
191  if (insert_pos > last_active_entry_) {
192  ASSERT_ND(insert_pos == last_active_entry_ + 1U);
193  // Larger than all existing entries. This is easy. We just append all to the end.
194  } else {
195  const UniversalLockId next_id = array_[insert_pos].universal_lock_id_;
196  ASSERT_ND(next_id >= min_id); // lower_bound()'s contract
197  if (next_id > max_id) {
198  DVLOG(3) << "Okay, no overlap. Much easier.";
199  } else {
200  // This is unfortunate, but it happens.. especially with pessimistic locks
201  if (UNLIKELY(lock_count > 5U)) {
202  DVLOG(0) << "Overlapped entry for a large batch. Well, unfortunate. but it happens.";
203  }
204 
205  // Fall back to the slow path
206  for (uint32_t i = 0; i < lock_count; ++i) {
207  LockListPosition pos = get_or_add_entry(page_id, lock_addr[i], page_lock);
208  ASSERT_ND(pos >= insert_pos);
209  }
210  return insert_pos;
211  }
212  }
213 
214  ASSERT_ND(insert_pos > last_active_entry_ || array_[insert_pos].universal_lock_id_ >= max_id);
215 
216  // Move existing entries at once, then set() in a tight for loop
217  ASSERT_ND(last_active_entry_ + 1U >= insert_pos);
218  const uint32_t move_count = last_active_entry_ + 1U - insert_pos;
219  if (move_count > 0) {
220  const uint64_t moved_bytes = sizeof(SysxctLockEntry) * move_count;
221  std::memmove(array_ + insert_pos + lock_count, array_ + insert_pos, moved_bytes);
222  }
223  if (last_locked_entry_ >= insert_pos) {
224  last_locked_entry_ += lock_count;
225  }
226 
227  for (uint32_t i = 0; i < lock_count; ++i) {
228  const UniversalLockId id = to_universal_lock_id(page_id, lock_addr[i]);
229  array_[insert_pos + i].set(id, lock_addr[i], page_lock);
230  }
231 
232  last_active_entry_ += lock_count;
233  assert_sorted();
234  return insert_pos;
235 }
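The fast path above (shift the tail once with std::memmove, then fill the hole in a tight loop) can be sketched in isolation. This toy `batch_insert_gap` is illustrative, not FOEDUS code; it assumes the whole sorted batch falls into one gap of the sorted list, as the surrounding code checks, and uses a plain vector of IDs rather than the FOEDUS entry type.

```cpp
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

// Toy sketch of the batch fast path (illustrative helper, not FOEDUS code):
// when the whole sorted batch fits into one gap of the sorted list, shift the
// tail right once with memmove and fill the hole in a tight loop, instead of
// doing one insertion (and one shift) per lock.
void batch_insert_gap(std::vector<uint64_t>* list, size_t insert_pos,
                      const std::vector<uint64_t>& batch) {
  const size_t old_size = list->size();
  list->resize(old_size + batch.size());
  // One move for the whole tail [insert_pos, old_size).
  std::memmove(list->data() + insert_pos + batch.size(),
               list->data() + insert_pos,
               sizeof(uint64_t) * (old_size - insert_pos));
  for (size_t i = 0; i < batch.size(); ++i) {
    (*list)[insert_pos + i] = batch[i];
  }
}
```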

template<typename MCS_ADAPTOR >
ErrorCode foedus::xct::SysxctLockList::batch_request_page_locks ( MCS_ADAPTOR  mcs_adaptor,
uint32_t  lock_count,
storage::Page **  pages 
)
inline

The interface is same as the record version, but this one doesn't do much optimization.

We shouldn't frequently lock many pages in sysxct. However, at least this one locks the pages in UniversalLockId order to avoid deadlocks.
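Why locking in one global order avoids deadlocks can be sketched with a toy example (illustrative `lock_order` helper, with element addresses standing in for UniversalLockId): if every transaction sorts its targets by the same global order before acquiring, two transactions always acquire in the same relative order, so no cycle of "each holds what the other wants" can form.

```cpp
#include <algorithm>
#include <vector>

int g_pages[3];  // stand-ins for pages; their addresses define the global order

// Toy sketch of the deadlock-avoidance argument (illustrative helper, not the
// FOEDUS PageComparator): sort the lock targets by one global order before
// acquiring anything. Any two transactions that both do this acquire their
// common targets in the same order, so a wait-for cycle cannot form.
std::vector<const int*> lock_order(std::vector<const int*> pages) {
  std::sort(pages.begin(), pages.end());  // address order == global order
  return pages;
}
```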

Definition at line 252 of file sysxct_impl.hpp.

References CHECK_ERROR_CODE, foedus::kErrorCodeOk, and request_page_lock().

255  {
256  std::sort(pages, pages + lock_count, PageComparator());
257  for (uint32_t i = 0; i < lock_count; ++i) {
258  CHECK_ERROR_CODE(request_page_lock(mcs_adaptor, pages[i]));
259  }
260  return kErrorCodeOk;
261  }

template<typename MCS_ADAPTOR >
ErrorCode foedus::xct::SysxctLockList::batch_request_record_locks ( MCS_ADAPTOR  mcs_adaptor,
storage::VolatilePagePointer  page_id,
uint32_t  lock_count,
RwLockableXctId **  locks 
)
inline

Used to acquire many locks in a page at once.

This reduces sorting cost.

Definition at line 230 of file sysxct_impl.hpp.

234  {
235  uintptr_t* lock_addr = reinterpret_cast<uintptr_t*>(locks);
236  return batch_request_locks_general(mcs_adaptor, page_id, lock_count, lock_addr, false);
237  }
SysxctLockEntry* foedus::xct::SysxctLockList::begin ( )
inline

Definition at line 316 of file sysxct_impl.hpp.

Referenced by release_all_locks().

316 { return array_ + 1U; }


LockListPosition foedus::xct::SysxctLockList::calculate_last_locked_entry ( ) const
inline

Calculate last_locked_entry_ by really checking the whole list.

Usually for sanity checks

Definition at line 333 of file sysxct_impl.hpp.

References calculate_last_locked_entry_from().

Referenced by assert_last_locked_entry().

333  {
334  return calculate_last_locked_entry_from(last_active_entry_);
335  }

LockListPosition foedus::xct::SysxctLockList::calculate_last_locked_entry_from ( LockListPosition  from) const
inline

Only searches among entries at or before "from".

Definition at line 337 of file sysxct_impl.hpp.

References foedus::xct::kLockListPositionInvalid.

Referenced by calculate_last_locked_entry().

337  {
338  for (LockListPosition pos = from; pos > kLockListPositionInvalid; --pos) {
339  if (array_[pos].is_locked()) {
340  return pos;
341  }
342  }
343  return kLockListPositionInvalid;
344  }

const SysxctLockEntry* foedus::xct::SysxctLockList::cbegin ( ) const
inline

Definition at line 318 of file sysxct_impl.hpp.

318 { return array_ + 1U; }
const SysxctLockEntry* foedus::xct::SysxctLockList::cend ( ) const
inline

Definition at line 319 of file sysxct_impl.hpp.

319 { return array_ + 1U + last_active_entry_; }
void foedus::xct::SysxctLockList::clear_entries ( UniversalLockId  enclosing_max_lock_id)
inline

Remove all entries.

This is used when a sysxct starts a fresh new transaction, not retry.

Definition at line 193 of file sysxct_impl.hpp.

References ASSERT_ND, assert_sorted(), and foedus::xct::kLockListPositionInvalid.

193  {
194  ASSERT_ND(last_locked_entry_ == kLockListPositionInvalid);
195  assert_sorted();
196  clear_no_assert(enclosing_max_lock_id);
197  }

void foedus::xct::SysxctLockList::compress_entries ( UniversalLockId  enclosing_max_lock_id)

Unlike clear_entries(), this is used when a sysxct is aborted and will be retried.

We 'inherit' some entries (although all of them must be released) to help guide next runs.

Definition at line 237 of file sysxct_impl.cpp.

References ASSERT_ND, assert_sorted(), foedus::xct::kLockListPositionInvalid, and foedus::xct::SysxctLockEntry::used_in_this_run_.

237  {
238  assert_sorted();
239  ASSERT_ND(last_locked_entry_ == kLockListPositionInvalid);
240  LockListPosition next_compressed_pos = 1U;
241  for (LockListPosition pos = 1U; pos <= last_active_entry_; ++pos) {
242  ASSERT_ND(!array_[pos].is_locked());
243  ASSERT_ND(pos >= next_compressed_pos);
244  if (array_[pos].used_in_this_run_) {
245  // Okay, let's inherit this entry
246  if (pos != next_compressed_pos) {
247  array_[next_compressed_pos] = array_[pos];
248  }
249  array_[next_compressed_pos].used_in_this_run_ = false;
250  ++next_compressed_pos;
251  }
252  }
253  last_active_entry_ = next_compressed_pos - 1U;
254  enclosing_max_lock_id_ = enclosing_max_lock_id;
255  assert_sorted();
256 }

SysxctLockEntry* foedus::xct::SysxctLockList::end ( )
inline

Definition at line 317 of file sysxct_impl.hpp.

Referenced by release_all_locks().

317 { return array_ + 1U + last_active_entry_; }


const SysxctLockEntry* foedus::xct::SysxctLockList::get_array ( ) const
inline

Definition at line 293 of file sysxct_impl.hpp.

Referenced by assert_sorted_impl().

293 { return array_; }


SysxctLockEntry* foedus::xct::SysxctLockList::get_array ( )
inline

Definition at line 294 of file sysxct_impl.hpp.

294 { return array_; }
uint32_t foedus::xct::SysxctLockList::get_capacity ( ) const
inline

Definition at line 303 of file sysxct_impl.hpp.

References kMaxSysxctLocks.

Referenced by foedus::xct::operator<<().

303 { return kMaxSysxctLocks; }

UniversalLockId foedus::xct::SysxctLockList::get_enclosing_max_lock_id ( ) const
inline

Definition at line 352 of file sysxct_impl.hpp.

Referenced by foedus::xct::operator<<().

352 { return enclosing_max_lock_id_; }


SysxctLockEntry* foedus::xct::SysxctLockList::get_entry ( LockListPosition  pos)
inline

Definition at line 295 of file sysxct_impl.hpp.

References ASSERT_ND, and is_valid_entry().

Referenced by is_try_mode_required().

295  {
296  ASSERT_ND(is_valid_entry(pos));
297  return array_ + pos;
298  }

const SysxctLockEntry* foedus::xct::SysxctLockList::get_entry ( LockListPosition  pos) const
inline

Definition at line 299 of file sysxct_impl.hpp.

References ASSERT_ND, and is_valid_entry().

299  {
300  ASSERT_ND(is_valid_entry(pos));
301  return array_ + pos;
302  }

LockListPosition foedus::xct::SysxctLockList::get_last_active_entry ( ) const
inline

Definition at line 306 of file sysxct_impl.hpp.

Referenced by assert_sorted_impl(), batch_get_or_add_entries(), foedus::xct::operator<<(), and to_pos().

306 { return last_active_entry_; }


LockListPosition foedus::xct::SysxctLockList::get_last_locked_entry ( ) const
inline
Returns
largest index of entries that are already locked. kLockListPositionInvalid if no entry is locked.

Definition at line 331 of file sysxct_impl.hpp.

Referenced by foedus::xct::operator<<().

331 { return last_locked_entry_; }


LockListPosition foedus::xct::SysxctLockList::get_or_add_entry ( storage::VolatilePagePointer  page_id,
uintptr_t  lock_addr,
bool  page_lock 
)
Returns
position of the entry that corresponds to the lock. either added or found. Never returns kLockListPositionInvalid.

Definition at line 103 of file sysxct_impl.cpp.

References ASSERT_ND, assert_sorted(), is_full(), foedus::xct::kLockListPositionInvalid, lower_bound(), foedus::xct::SysxctLockEntry::set(), foedus::xct::to_universal_lock_id(), and UNLIKELY.

Referenced by batch_get_or_add_entries().

106  {
107  ASSERT_ND(!is_full());
108  if (UNLIKELY(is_full())) {
109  LOG(FATAL) << "SysxctLockList full. This must not happen";
110  }
111  const UniversalLockId id = to_universal_lock_id(page_id, lock_addr);
112 
113  // Easy case? (lock >= the last entry)
114  LockListPosition insert_pos = lower_bound(id);
115  ASSERT_ND(insert_pos != kLockListPositionInvalid);
116 
117  // Larger than all existing entries? Append to the last!
118  if (insert_pos > last_active_entry_) {
119  ASSERT_ND(insert_pos == last_active_entry_ + 1U);
120  LockListPosition new_pos = issue_new_position();
121  array_[new_pos].set(id, lock_addr, page_lock);
122  ASSERT_ND(new_pos == insert_pos);
123  return new_pos;
124  }
125 
126  // lower_bound returns the first entry that is NOT less than. is it equal?
127  ASSERT_ND(array_[insert_pos].universal_lock_id_ >= id);
128  if (array_[insert_pos].universal_lock_id_ == id) {
129  // Found existing!
130  return insert_pos;
131  }
132 
133  DVLOG(1) << "not an easy case. We need to adjust the order. This is costly!";
134  ASSERT_ND(insert_pos <= last_active_entry_); // otherwise we went into the 1st branch
135  ASSERT_ND(array_[insert_pos].universal_lock_id_ > id); // if ==, we went into the 2nd branch
136  ASSERT_ND(insert_pos == 1U || array_[insert_pos - 1U].universal_lock_id_ < id);
137 
138  LockListPosition new_last_pos = issue_new_position();
139  ASSERT_ND(new_last_pos > insert_pos);
140  uint64_t moved_bytes = sizeof(SysxctLockEntry) * (new_last_pos - insert_pos);
141  std::memmove(array_ + insert_pos + 1U, array_ + insert_pos, moved_bytes);
142  DVLOG(1) << "Re-sorted. hope this won't happen often";
143  array_[insert_pos].set(id, lock_addr, page_lock);
144  if (last_locked_entry_ >= insert_pos) {
145  ++last_locked_entry_;
146  }
147 
148  assert_sorted();
149  return insert_pos;
150 }

void foedus::xct::SysxctLockList::init ( )
inline

Definition at line 186 of file sysxct_impl.hpp.

References foedus::xct::kNullUniversalLockId.

Referenced by foedus::xct::SysxctWorkspace::init(), and SysxctLockList().

186  {
187  clear_no_assert(kNullUniversalLockId);
188  }

bool foedus::xct::SysxctLockList::is_empty ( ) const
inline

Definition at line 310 of file sysxct_impl.hpp.

References foedus::xct::kLockListPositionInvalid.

310 { return last_active_entry_ == kLockListPositionInvalid; }
bool foedus::xct::SysxctLockList::is_full ( ) const
inline

When this returns true (the list is full), it's catastrophic.

Definition at line 305 of file sysxct_impl.hpp.

References kMaxSysxctLocks.

Referenced by get_or_add_entry().

305 { return last_active_entry_ == kMaxSysxctLocks; }

bool foedus::xct::SysxctLockList::is_try_mode_required ( LockListPosition  pos) const
inline
Returns
whether the lock must be taken in try mode

Definition at line 354 of file sysxct_impl.hpp.

References get_entry(), and foedus::xct::SysxctLockEntry::universal_lock_id_.

354  {
355  const SysxctLockEntry* entry = get_entry(pos);
356  // If the enclosing thread took a lock after this, we must be careful.
357  if (entry->universal_lock_id_ <= enclosing_max_lock_id_) {
358  return true;
359  }
360 
361  // If this list contains an already-taken lock after this, we must be careful.
362  if (pos <= last_locked_entry_) {
363  return true;
364  }
365 
366  return false;
367  }
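The two conditions can be restated as a standalone toy function (scalar parameters stand in for the list's members; illustrative, not the FOEDUS signature): try mode is required whenever unconditionally blocking on the lock could violate canonical lock ordering, because some lock ordered at or after this one is already held.

```cpp
#include <cstdint>

// Toy restatement of the two try-mode conditions (scalar parameters stand in
// for the list's members; illustrative, not the FOEDUS signature). Try mode is
// needed whenever a lock ordered at or after this one is already held, either
// by the enclosing thread or within this list, since blocking there could
// deadlock.
bool toy_is_try_mode_required(
    uint64_t entry_id, uint64_t enclosing_max_lock_id,
    uint32_t pos, uint32_t last_locked_entry) {
  if (entry_id <= enclosing_max_lock_id) {
    return true;  // the enclosing thread took a lock ordered at/after this one
  }
  if (pos <= last_locked_entry) {
    return true;  // this list already holds a lock at/after this position
  }
  return false;
}
```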

bool foedus::xct::SysxctLockList::is_valid_entry ( LockListPosition  pos) const
inline

Definition at line 307 of file sysxct_impl.hpp.

References foedus::xct::kLockListPositionInvalid.

Referenced by get_entry().

307  {
308  return pos != kLockListPositionInvalid && pos <= last_active_entry_;
309  }

LockListPosition foedus::xct::SysxctLockList::lower_bound ( UniversalLockId  lock) const

Analogous to std::lower_bound() for the given lock.

Data manipulation (search/add/etc)

Returns
Index of the first entry whose lock_ is not less than lock. If no such entry exists, last_active_entry_ + 1U, whether last_active_entry_ is 0 or not. Thus the return value is always > 0. This is to imitate std::lower_bound's behavior.
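A toy model of this contract (illustrative `toy_lower_bound`; a plain vector whose slot 0 is the unused sentinel stands in for the 1-indexed array) shows why the result is always positive and at most last_active_entry_ + 1.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Toy model of the lower_bound() contract (illustrative helper, not FOEDUS
// code). The search runs over [begin()+1, end()), so the returned position is
// always >= 1, and a probe larger than every entry yields last_active + 1,
// just like std::lower_bound returning end().
uint32_t toy_lower_bound(const std::vector<uint64_t>& ids, uint64_t probe) {
  auto it = std::lower_bound(ids.begin() + 1, ids.end(), probe);
  return static_cast<uint32_t>(it - ids.begin());
}
```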

Definition at line 98 of file sysxct_impl.cpp.

References assert_last_locked_entry().

Referenced by batch_get_or_add_entries(), and get_or_add_entry().

98  {
99  assert_last_locked_entry();
100  return lock_lower_bound<SysxctLockList, SysxctLockEntry>(*this, lock);
101 }

SysxctLockList& foedus::xct::SysxctLockList::operator= ( const SysxctLockList &  other )
delete
template<typename MCS_ADAPTOR >
void foedus::xct::SysxctLockList::release_all_locks ( MCS_ADAPTOR  mcs_adaptor)
inline

Releases all locks that were acquired.

Entries are left as they were. You should then call clear_entries() or compress_entries().

Definition at line 647 of file sysxct_impl.hpp.

References begin(), end(), foedus::storage::Page::get_header(), foedus::xct::kLockListPositionInvalid, foedus::storage::PageVersion::lock_, foedus::storage::PageHeader::page_version_, foedus::xct::McsWwImpl< ADAPTOR >::release(), and foedus::xct::McsImpl< ADAPTOR, RW_BLOCK >::release_rw_writer().

647  {
648  for (auto* entry = begin(); entry != end(); ++entry) {
649  if (!entry->is_locked()) {
650  continue;
651  }
652  if (entry->page_lock_) {
653  McsWwImpl< MCS_ADAPTOR > impl(mcs_adaptor);
654  storage::Page* page = entry->get_as_page_lock();
655  McsWwLock* lock_addr = &page->get_header().page_version_.lock_;
656  impl.release(lock_addr, entry->mcs_block_);
657  entry->mcs_block_ = 0;
658  } else {
659  McsImpl< MCS_ADAPTOR, typename MCS_ADAPTOR::ThisRwBlock > impl(mcs_adaptor);
660  McsRwLock* lock_addr = &entry->get_as_record_lock()->lock_;
661  impl.release_rw_writer(lock_addr, entry->mcs_block_);
662  entry->mcs_block_ = 0;
663  }
664  }
665  last_locked_entry_ = kLockListPositionInvalid;
666 }

template<typename MCS_ADAPTOR >
ErrorCode foedus::xct::SysxctLockList::request_page_lock ( MCS_ADAPTOR  mcs_adaptor,
storage::Page *  page 
)
inline

Acquires a page lock.

Definition at line 241 of file sysxct_impl.hpp.

References foedus::storage::Page::get_volatile_page_id().

Referenced by batch_request_page_locks().

241  {
242  uintptr_t lock_addr = reinterpret_cast<uintptr_t>(page);
243  return request_lock_general(mcs_adaptor, page->get_volatile_page_id(), lock_addr, true);
244  }


template<typename MCS_ADAPTOR >
ErrorCode foedus::xct::SysxctLockList::request_record_lock ( MCS_ADAPTOR  mcs_adaptor,
storage::VolatilePagePointer  page_id,
RwLockableXctId *  lock 
)
inline

If not yet acquired, acquires a record lock and adds an entry to this list, re-sorting part of the list if necessary to keep the sortedness.

If there is an existing entry for the lock, it reuses the entry.

Postcondition
Either the lock is taken or returns an error

Definition at line 220 of file sysxct_impl.hpp.

223  {
224  uintptr_t lock_addr = reinterpret_cast<uintptr_t>(lock);
225  return request_lock_general(mcs_adaptor, page_id, lock_addr, false);
226  }
LockListPosition foedus::xct::SysxctLockList::to_pos ( const SysxctLockEntry entry) const
inline

Definition at line 320 of file sysxct_impl.hpp.

References ASSERT_ND, and get_last_active_entry().

320  {
321  ASSERT_ND(entry > array_);
322  LockListPosition pos = entry - array_;
323  ASSERT_ND(pos <= get_last_active_entry());
324  return pos;
325  }

Friends And Related Function Documentation

std::ostream& operator<< ( std::ostream &  o,
const SysxctLockList &  v 
)
friend

Definition at line 53 of file sysxct_impl.cpp.

53  {
54  o << "<SysxctLockList>"
55  << "<Capacity>" << v.get_capacity() << "</Capacity>"
56  << "<LastActiveEntry>" << v.get_last_active_entry() << "</LastActiveEntry>"
57  << "<LastLockedEntry>" << v.get_last_locked_entry() << "</LastLockedEntry>"
58  << "<EnclosingMaxLockId>" << v.get_enclosing_max_lock_id() << "</EnclosingMaxLockId>";
59  const uint32_t kMaxShown = 32U;
60  for (auto i = 1U; i <= std::min(v.last_active_entry_, kMaxShown); ++i) {
61  o << std::endl << v.array_[i];
62  }
63  if (v.last_active_entry_ > kMaxShown) {
64  o << std::endl << "<too_many />";
65  }
66  o << "</SysxctLockList>";
67  return o;
68 }
friend struct TestIsTryRequired
friend

Definition at line 168 of file sysxct_impl.hpp.


The documentation for this class was generated from the following files:
sysxct_impl.hpp
sysxct_impl.cpp