libfoedus-core
FOEDUS Core Library
foedus::xct Namespace Reference

Transaction Manager, which provides APIs to begin/commit/abort transactions. More...

Detailed Description

Transaction Manager, which provides APIs to begin/commit/abort transactions.

This package is the implementation of the commit protocol, the guts of concurrency control.

Get Started

First things first. Here's a minimal example that runs one transaction in the engine.

// Example to start and commit one transaction
foedus::ErrorStack run_my_task(foedus::thread::Thread* context, ...) {
  foedus::Engine *engine = context->get_engine();
  foedus::xct::XctManager* xct_manager = engine->get_xct_manager();
  WRAP_ERROR_CODE(xct_manager->begin_xct(context, foedus::xct::kSerializable));
  ... // read/modify data. See storage module's document for examples.
  foedus::Epoch commit_epoch;
  WRAP_ERROR_CODE(xct_manager->precommit_xct(context, &commit_epoch));
  WRAP_ERROR_CODE(xct_manager->wait_for_commit(commit_epoch));
  return foedus::kRetOk;
}

Notice the wait_for_commit(commit_epoch) call. Until that method returns, you must not consider your transaction committed. That's why the preceding method is named "precommit_xct".

Here's a minimal example that runs several transactions and commits them together, a group commit, which is the primary use case our engine is optimized for.

// Example to start and commit several transactions
foedus::Epoch highest_commit_epoch;
for (int i = 0; i < 1000; ++i) {
  WRAP_ERROR_CODE(xct_manager->begin_xct(context, foedus::xct::kSerializable));
  ... // read/modify data. See storage module's document for examples.
  foedus::Epoch commit_epoch;
  WRAP_ERROR_CODE(xct_manager->precommit_xct(context, &commit_epoch));
  highest_commit_epoch.store_max(commit_epoch);
}
WRAP_ERROR_CODE(xct_manager->wait_for_commit(highest_commit_epoch));

In this case, we invoke wait_for_commit() just once at the end, for the largest commit epoch. This dramatically improves throughput at the cost of the latency of individual transactions.

Optimistic Concurrency Control

Our commit protocol is based on [TU13], except that we also handle non-volatile pages. [TU13]'s commit protocol completely avoids writes to shared memory by read operations. [LARSON11] is also an optimistic concurrency control scheme, but it still has "read-lock" bits that have to be written by read operations. In a many-core NUMA environment, this can cause scalability issues, so we employ [TU13]'s approach.
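The verification step of such a protocol can be sketched in a few lines: each read records the version (TID) it observed, and pre-commit succeeds only if every recorded version is unchanged, so reads never write to shared memory. The following is a minimal single-threaded sketch of that idea, not FOEDUS's actual implementation; Record, ReadAccess, and verify_read_set are hypothetical names.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical simplified record: a version counter plus a payload.
struct Record {
  uint64_t tid;      // version; incremented on every committed write
  int      payload;
};

// One entry of the read set: which record we read, and the TID we observed.
struct ReadAccess {
  const Record* record;
  uint64_t      observed_tid;
};

// Pre-commit verification: the transaction can commit only if every record
// it read still carries the TID it observed. Read-only; no shared writes.
bool verify_read_set(const std::vector<ReadAccess>& read_set) {
  for (const ReadAccess& r : read_set) {
    if (r.record->tid != r.observed_tid) {
      return false;  // a concurrent writer changed the record; abort
    }
  }
  return true;
}
```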

Epoch-based commit protocol

The engine maintains two global foedus::Epoch values: the current epoch and the durable epoch. foedus::xct::XctManager keeps advancing the current epoch periodically, while the log module advances the durable epoch when it confirms that all log entries up to that epoch have become durable and that it has durably written a savepoint ( Savepoint Manager ) file.
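The resulting durability rule can be summarized as: a transaction that pre-committed with commit epoch e is durable once the durable epoch reaches e, and the durable epoch never runs ahead of the current epoch. Below is a minimal sketch of that invariant, not FOEDUS's actual epoch code; GlobalEpochs and is_committed_durably are hypothetical names.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical simplified view of the two global epochs.
struct GlobalEpochs {
  uint64_t current;  // advanced periodically by the epoch chime
  uint64_t durable;  // advanced by the log module; always <= current
};

// A transaction that pre-committed with commit_epoch is durable once the
// durable epoch catches up to it; wait_for_commit() waits for exactly this.
bool is_committed_durably(const GlobalEpochs& g, uint64_t commit_epoch) {
  assert(g.durable <= g.current);  // invariant maintained by the engine
  return commit_epoch <= g.durable;
}
```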

Epoch Chime

The Epoch Chime advances the current global epoch when a configured interval elapses or when the user explicitly requests it. The chime checks whether it can safely advance the epoch so that the following invariants always hold.

  • A newly started transaction always commits with the current global epoch or a larger one.
  • All running transactions always commit with at least current global epoch - 1, called the grace-period epoch, or a larger one.

In many cases, the invariants are trivially achieved. However, there are a few tricky cases.

  • There is a long-running transaction that has already acquired a commit epoch but not yet exited the pre-commit stage.
  • There is a worker thread that has been idle for a while.

Whenever the chime advances the epoch, it has to safely detect whether any transaction might violate the invariants, without causing expensive synchronization. This is done via the in-commit epoch guard. For more details, see the following class.

See also
foedus::xct::InCommitEpochGuard
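The idea of the guard can be sketched as follows: each worker publishes the epoch it is committing with to a per-worker slot before doing the commit work and clears the slot afterwards, so the chime can scan all slots and refrain from advancing past any in-flight commit. The snippet below is a hedged simplification modeled after that idea; the WorkerSlot/InCommitGuard names and the exact fence placement are illustrative, not FOEDUS's actual code.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical per-worker slot: 0 means "not in pre-commit".
struct WorkerSlot {
  std::atomic<uint64_t> in_commit_epoch{0};
};

// RAII guard in the spirit of foedus::xct::InCommitEpochGuard: publish the
// commit epoch before the commit work, clear it when the commit exits.
class InCommitGuard {
 public:
  InCommitGuard(WorkerSlot* slot, uint64_t commit_epoch) : slot_(slot) {
    // Release ordering so a chime reading the slot with acquire sees the
    // publication before it trusts an epoch advance over this worker.
    slot_->in_commit_epoch.store(commit_epoch, std::memory_order_release);
  }
  ~InCommitGuard() {
    slot_->in_commit_epoch.store(0, std::memory_order_release);
  }
 private:
  WorkerSlot* slot_;
};
```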

Isolation Levels

See foedus::xct::IsolationLevel

Read-Only Transactions

(To be written.)

References

  • [LARSON11] Perake Larson, Spyros Blanas, Cristian Diaconu, Craig Freedman, Jignesh M. Patel, and Mike Zwilling. "High-Performance Concurrency Control Mechanisms for Main-Memory Databases." VLDB, 2011.
  • [TU13] Stephen Tu, Wenting Zheng, Eddie Kohler, Barbara Liskov, and Samuel Madden. "Speedy transactions in multicore in-memory databases.", SOSP, 2013.

Classes

struct  AcquireAsyncRet
 Return value of acquire_async_rw. More...
 
class  CurrentLockList
 Sorted list of all locks, either read-lock or write-lock, taken in the current run. More...
 
struct  CurrentLockListIteratorForWriteSet
 An iterator over CurrentLockList to find entries along with sorted write-set. More...
 
struct  InCommitEpochGuard
 Automatically sets in-commit-epoch with appropriate fence during pre-commit protocol. More...
 
struct  LockableXctId
 Transaction ID, a 128-bit data to manage record versions and provide locking mechanism. More...
 
struct  LockEntry
 An entry in CLL and RLL, representing a lock that is taken or will be taken. More...
 
struct  LockFreeReadXctAccess
 Represents a record of special read-access during a transaction without any need for locking. More...
 
struct  LockFreeWriteXctAccess
 Represents a record of special write-access during a transaction without any need for locking. More...
 
class  McsAdaptorConcept
 Defines an adapter template interface for our MCS lock classes. More...
 
class  McsImpl
 Implements an MCS-locking Algorithm. More...
 
class  McsImpl< ADAPTOR, McsRwExtendedBlock >
 The Extended MCS-RW lock. More...
 
class  McsImpl< ADAPTOR, McsRwSimpleBlock >
 The Simple MCS-RW lock. More...
 
class  McsMockAdaptor
 Implements McsAdaptorConcept. More...
 
struct  McsMockContext
 Analogous to the entire engine. More...
 
struct  McsMockDataPage
 A dummy page layout to store RwLockableXctId. More...
 
struct  McsMockNode
 Analogous to one thread-group/socket/node. More...
 
struct  McsMockThread
 A dummy implementation that provides McsAdaptorConcept for testing. More...
 
class  McsOwnerlessLockScope
 
struct  McsRwAsyncMapping
 
struct  McsRwExtendedBlock
 Pre-allocated MCS block for extended version of RW-locks. More...
 
struct  McsRwLock
 An MCS reader-writer lock data structure. More...
 
struct  McsRwSimpleBlock
 Reader-writer (RW) MCS lock classes. More...
 
struct  McsWwBlock
 Pre-allocated MCS block for WW-locks. More...
 
struct  McsWwBlockData
 Exclusive-only (WW) MCS lock classes. More...
 
class  McsWwImpl
 A specialized/simplified implementation of an MCS-locking Algorithm for exclusive-only (WW) locks. More...
 
struct  McsWwLock
 An exclusive-only (WW) MCS lock data structure. More...
 
class  McsWwOwnerlessImpl
 A ownerless (contextless) interface for McsWwImpl. More...
 
struct  PageComparator
 
struct  PageVersionAccess
 Represents a record of reading a page during a transaction. More...
 
struct  PointerAccess
 Represents a record of following a page pointer during a transaction. More...
 
struct  ReadXctAccess
 Represents a record of read-access during a transaction. More...
 
struct  RecordXctAccess
 Base of ReadXctAccess and WriteXctAccess. More...
 
class  RetrospectiveLockList
 Retrospective lock list. More...
 
struct  RwLockableXctId
 The MCS reader-writer lock variant of LockableXctId. More...
 
struct  SysxctFunctor
 A functor representing the logic in a system transaction via virtual-function. More...
 
struct  SysxctLockEntry
 An entry in CLL/RLL for system transactions. More...
 
class  SysxctLockList
 RLL/CLL of a system transaction. More...
 
struct  SysxctWorkspace
 Per-thread reused work memory for system transactions. More...
 
struct  TrackMovedRecordResult
 Result of track_moved_record(). More...
 
struct  WriteXctAccess
 Represents a record of write-access during a transaction. More...
 
class  Xct
 Represents a user transaction. More...
 
struct  XctId
 Persistent status part of Transaction ID. More...
 
class  XctManager
 Xct Manager class that provides API to begin/abort/commit transaction. More...
 
struct  XctManagerControlBlock
 Shared data in XctManagerPimpl. More...
 
class  XctManagerPimpl
 Pimpl object of XctManager. More...
 
struct  XctOptions
 Set of options for xct manager. More...
 

Typedefs

typedef uintptr_t UniversalLockId
 Universally ordered identifier of each lock. More...
 
typedef uint32_t LockListPosition
 Index in a lock-list, either RLL or CLL. More...
 
typedef uint32_t McsBlockIndex
 Index in thread-local MCS block. More...
 

Enumerations

enum  IsolationLevel { kDirtyRead, kSnapshot, kSerializable }
 Specifies the level of isolation during transaction processing. More...
 
enum  LockMode { kNoLock = 0, kReadLock, kWriteLock }
 Represents a mode of lock. More...
 

Functions

template<typename LOCK_LIST , typename LOCK_ENTRY >
LockListPosition lock_lower_bound (const LOCK_LIST &list, UniversalLockId lock)
 General lower_bound/binary_search logic for any kind of LockList/LockEntry. More...
 
template<typename LOCK_LIST , typename LOCK_ENTRY >
LockListPosition lock_binary_search (const LOCK_LIST &list, UniversalLockId lock)
 
UniversalLockId to_universal_lock_id (storage::VolatilePagePointer page_id, uintptr_t addr)
 
template<typename MCS_ADAPTOR , typename ENCLOSURE_RELEASE_ALL_LOCKS_FUNCTOR >
ErrorCode run_nested_sysxct_impl (SysxctFunctor *functor, MCS_ADAPTOR mcs_adaptor, uint32_t max_retries, SysxctWorkspace *workspace, UniversalLockId enclosing_max_lock_id, ENCLOSURE_RELEASE_ALL_LOCKS_FUNCTOR enclosure_release_all_locks_functor)
 Runs a system transaction nested in a user transaction. More...
 
UniversalLockId to_universal_lock_id (const memory::GlobalVolatilePageResolver &resolver, uintptr_t lock_ptr)
 Always use this method rather than doing the conversion yourself. More...
 
UniversalLockId to_universal_lock_id (uint64_t numa_node, uint64_t local_page_index, uintptr_t lock_ptr)
 If you already have the numa_node, local_page_index, prefer this one. More...
 
UniversalLockId xct_id_to_universal_lock_id (const memory::GlobalVolatilePageResolver &resolver, RwLockableXctId *lock)
 
UniversalLockId rw_lock_to_universal_lock_id (const memory::GlobalVolatilePageResolver &resolver, McsRwLock *lock)
 
RwLockableXctId * from_universal_lock_id (const memory::GlobalVolatilePageResolver &resolver, const UniversalLockId universal_lock_id)
 Always use this method rather than doing the conversion yourself. More...
 
void _dummy_static_size_check__COUNTER__ ()
 
std::ostream & operator<< (std::ostream &o, const LockEntry &v)
 Debugging. More...
 
std::ostream & operator<< (std::ostream &o, const CurrentLockList &v)
 
std::ostream & operator<< (std::ostream &o, const RetrospectiveLockList &v)
 
template<typename LOCK_LIST >
void lock_assert_sorted (const LOCK_LIST &list)
 
std::ostream & operator<< (std::ostream &o, const SysxctLockEntry &v)
 Debugging. More...
 
std::ostream & operator<< (std::ostream &o, const SysxctLockList &v)
 
std::ostream & operator<< (std::ostream &o, const SysxctWorkspace &v)
 
std::ostream & operator<< (std::ostream &o, const Xct &v)
 
std::ostream & operator<< (std::ostream &o, const PointerAccess &v)
 
std::ostream & operator<< (std::ostream &o, const PageVersionAccess &v)
 
std::ostream & operator<< (std::ostream &o, const ReadXctAccess &v)
 
std::ostream & operator<< (std::ostream &o, const WriteXctAccess &v)
 
std::ostream & operator<< (std::ostream &o, const LockFreeReadXctAccess &v)
 
std::ostream & operator<< (std::ostream &o, const LockFreeWriteXctAccess &v)
 
std::ostream & operator<< (std::ostream &o, const McsWwLock &v)
 Debug out operators. More...
 
std::ostream & operator<< (std::ostream &o, const XctId &v)
 
std::ostream & operator<< (std::ostream &o, const LockableXctId &v)
 
std::ostream & operator<< (std::ostream &o, const McsRwLock &v)
 
std::ostream & operator<< (std::ostream &o, const RwLockableXctId &v)
 
void assert_mcs_aligned (const void *address)
 
template<typename COND >
void spin_until (COND spin_until_cond)
 

Variables

const UniversalLockId kNullUniversalLockId = 0
 This never points to a valid lock, and also evaluates less than any valid lock. More...
 
const LockListPosition kLockListPositionInvalid = 0
 
const uint64_t kMcsGuestId = -1
 A special value meaning the lock is held by a non-regular guest that doesn't have a context. More...
 
const uint64_t kXctIdDeletedBit = 1ULL << 63
 
const uint64_t kXctIdMovedBit = 1ULL << 62
 
const uint64_t kXctIdBeingWrittenBit = 1ULL << 61
 
const uint64_t kXctIdNextLayerBit = 1ULL << 60
 
const uint64_t kXctIdMaskSerializer = 0x0FFFFFFFFFFFFFFFULL
 
const uint64_t kXctIdMaskEpoch = 0x0FFFFFFF00000000ULL
 
const uint64_t kXctIdMaskOrdinal = 0x00000000FFFFFFFFULL
 
const uint64_t kMaxXctOrdinal = (1ULL << 24) - 1U
 Maximum value of in-epoch ordinal. More...
 
const uint64_t kLockPageSize = 1 << 12
 Must be same as storage::kPageSize. More...
 
constexpr uint32_t kMcsMockDataPageHeaderSize = 128U
 
constexpr uint32_t kMcsMockDataPageHeaderPad = kMcsMockDataPageHeaderSize - sizeof(storage::PageHeader)
 
constexpr uint32_t kMcsMockDataPageLocksPerPage
 
constexpr uint32_t kMcsMockDataPageFiller
 
const uint16_t kReadsetPrefetchBatch = 16
 

Class Documentation

struct foedus::xct::AcquireAsyncRet

Return value of acquire_async_rw.

Definition at line 161 of file xct_id.hpp.

Class Members
bool acquired_ whether we immediately acquired the lock or not
McsBlockIndex block_index_ the queue node we pushed.

It is always set whether acquired_ or not, and whether simple or extended. However, in the simple case when !acquired_, the block is not used and nothing sticks to the queue; we just skip the index next time.

Typedef Documentation

typedef uint32_t foedus::xct::McsBlockIndex

Index in thread-local MCS block.

0 means not locked.

Definition at line 153 of file xct_id.hpp.

Function Documentation

void foedus::xct::_dummy_static_size_check__COUNTER__ ( )
inline

Definition at line 1246 of file xct_id.hpp.

void foedus::xct::assert_mcs_aligned ( const void *  address)
inline

Definition at line 34 of file xct_mcs_impl.cpp.

References ASSERT_ND.

Referenced by foedus::xct::McsWwImpl< ADAPTOR >::acquire_try(), foedus::xct::McsWwImpl< ADAPTOR >::acquire_unconditional(), foedus::xct::McsWwImpl< ADAPTOR >::initial(), foedus::xct::McsWwOwnerlessImpl::ownerless_acquire_try(), foedus::xct::McsWwOwnerlessImpl::ownerless_acquire_unconditional(), foedus::xct::McsWwOwnerlessImpl::ownerless_initial(), foedus::xct::McsWwOwnerlessImpl::ownerless_release(), and foedus::xct::McsWwImpl< ADAPTOR >::release().

34  {
35  ASSERT_ND(address);
36  ASSERT_ND(reinterpret_cast<uintptr_t>(address) % 8 == 0);
37 }
#define ASSERT_ND(x)
A warning-free wrapper macro of assert() that has no performance effect in release mode even when 'x'...
Definition: assert_nd.hpp:72


RwLockableXctId * foedus::xct::from_universal_lock_id ( const memory::GlobalVolatilePageResolver resolver,
const UniversalLockId  universal_lock_id 
)

Always use this method rather than doing the conversion yourself.

See also
UniversalLockId

Definition at line 61 of file xct_id.cpp.

References foedus::memory::GlobalVolatilePageResolver::bases_.

63  {
64  uint16_t node = universal_lock_id >> 48;
65  uint64_t offset = universal_lock_id & ((1ULL << 48) - 1ULL);
66  uintptr_t base = reinterpret_cast<uintptr_t>(resolver.bases_[node]);
67  return reinterpret_cast<RwLockableXctId*>(base + offset);
68 }
template<typename LOCK_LIST >
void foedus::xct::lock_assert_sorted ( const LOCK_LIST &  list)

Definition at line 159 of file retrospective_lock_list.cpp.

References ASSERT_ND, foedus::storage::Page::get_volatile_page_id(), kLockListPositionInvalid, kNoLock, foedus::xct::LockEntry::lock_, foedus::storage::to_page(), and to_universal_lock_id().

Referenced by foedus::xct::RetrospectiveLockList::assert_sorted_impl(), and foedus::xct::CurrentLockList::assert_sorted_impl().

159  {
160  const LockEntry* array = list.get_array();
161  ASSERT_ND(array[kLockListPositionInvalid].universal_lock_id_ == 0);
162  ASSERT_ND(array[kLockListPositionInvalid].lock_ == nullptr);
163  ASSERT_ND(array[kLockListPositionInvalid].taken_mode_ == kNoLock);
164  ASSERT_ND(array[kLockListPositionInvalid].preferred_mode_ == kNoLock);
165  const LockListPosition last_active_entry = list.get_last_active_entry();
166  for (LockListPosition pos = 2U; pos <= last_active_entry; ++pos) {
167  ASSERT_ND(array[pos - 1U].universal_lock_id_ < array[pos].universal_lock_id_);
168  ASSERT_ND(array[pos].universal_lock_id_ != 0);
169  ASSERT_ND(array[pos].lock_ != nullptr);
170  const storage::Page* page = storage::to_page(array[pos].lock_);
171  uintptr_t lock_addr = reinterpret_cast<uintptr_t>(array[pos].lock_);
172  auto page_id = page->get_volatile_page_id();
173  ASSERT_ND(array[pos].universal_lock_id_
174  == to_universal_lock_id(page_id.get_numa_node(), page_id.get_offset(), lock_addr));
175  }
176 }
Page * to_page(const void *address)
super-dirty way to obtain Page the address belongs to.
Definition: page.hpp:395
const LockListPosition kLockListPositionInvalid
Definition: xct_id.hpp:149
taken_mode_: Not taken the lock yet.
Definition: xct_id.hpp:100
UniversalLockId to_universal_lock_id(storage::VolatilePagePointer page_id, uintptr_t addr)
Definition: sysxct_impl.hpp:63
uint32_t LockListPosition
Index in a lock-list, either RLL or CLL.
Definition: xct_id.hpp:148
#define ASSERT_ND(x)
A warning-free wrapper macro of assert() that has no performance effect in release mode even when 'x'...
Definition: assert_nd.hpp:72


template<typename LOCK_LIST , typename LOCK_ENTRY >
LockListPosition foedus::xct::lock_binary_search ( const LOCK_LIST &  list,
UniversalLockId  lock 
)
inline

Definition at line 936 of file retrospective_lock_list.hpp.

References kLockListPositionInvalid.

938  {
939  LockListPosition last_active_entry = list.get_last_active_entry();
940  LockListPosition pos = lock_lower_bound<LOCK_LIST, LOCK_ENTRY>(list, lock);
941  if (pos != kLockListPositionInvalid && pos <= last_active_entry) {
942  const LOCK_ENTRY* array = list.get_array();
943  if (array[pos].universal_lock_id_ == lock) {
944  return pos;
945  }
 946  }
 947  return kLockListPositionInvalid;
 948 }
const LockListPosition kLockListPositionInvalid
Definition: xct_id.hpp:149
uint32_t LockListPosition
Index in a lock-list, either RLL or CLL.
Definition: xct_id.hpp:148
template<typename LOCK_LIST , typename LOCK_ENTRY >
LockListPosition foedus::xct::lock_lower_bound ( const LOCK_LIST &  list,
UniversalLockId  lock 
)
inline

General lower_bound/binary_search logic for any kind of LockList/LockEntry.

Used from retrospective_lock_list.cpp and sysxct_impl.cpp. These implementations are skewed towards sorted cases, meaning it runs faster when accesses are nicely sorted.

Definition at line 903 of file retrospective_lock_list.hpp.

References ASSERT_ND, and kLockListPositionInvalid.

905  {
906  LockListPosition last_active_entry = list.get_last_active_entry();
907  if (last_active_entry == kLockListPositionInvalid) {
908  return kLockListPositionInvalid + 1U;
909  }
 910  // Check the easy cases first. This will be a wasted cost if it's not, but still cheap.
911  const LOCK_ENTRY* array = list.get_array();
912  // For example, [dummy, 3, 5, 7] (last_active_entry=3).
913  // id=7: 3, larger: 4, smaller: need to check more
914  if (array[last_active_entry].universal_lock_id_ == lock) {
915  return last_active_entry;
916  } else if (array[last_active_entry].universal_lock_id_ < lock) {
917  return last_active_entry + 1U;
918  }
919 
920  LockListPosition pos
921  = std::lower_bound(
922  array + 1U,
923  array + last_active_entry + 1U,
924  lock,
925  typename LOCK_ENTRY::LessThan())
926  - array;
927  // in the above example, id=6: 3, id=4,5: 2, smaller: 1
929  ASSERT_ND(pos <= last_active_entry); // otherwise we went into the branch above
930  ASSERT_ND(array[pos].universal_lock_id_ >= lock);
931  ASSERT_ND(pos == 1U || array[pos - 1U].universal_lock_id_ < lock);
932  return pos;
933 }
const LockListPosition kLockListPositionInvalid
Definition: xct_id.hpp:149
uint32_t LockListPosition
Index in a lock-list, either RLL or CLL.
Definition: xct_id.hpp:148
#define ASSERT_ND(x)
A warning-free wrapper macro of assert() that has no performance effect in release mode even when 'x'...
Definition: assert_nd.hpp:72
std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const PointerAccess v 
)

Definition at line 32 of file xct_access.cpp.

References foedus::xct::PointerAccess::address_, foedus::xct::PointerAccess::observed_, and foedus::storage::VolatilePagePointer::word.

32  {
33  o << "<PointerAccess><address>" << v.address_ << "</address>"
34  << "<observed>" << assorted::Hex(v.observed_.word) << "</observed></PointerAccess>";
35  return o;
36 }
std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const SysxctLockEntry v 
)

Debugging.

Definition at line 35 of file sysxct_impl.cpp.

References foedus::xct::SysxctLockEntry::get_as_page_lock(), foedus::xct::SysxctLockEntry::get_as_record_lock(), foedus::storage::Page::get_header(), foedus::xct::SysxctLockEntry::mcs_block_, foedus::xct::SysxctLockEntry::page_lock_, foedus::xct::SysxctLockEntry::universal_lock_id_, and foedus::xct::SysxctLockEntry::used_in_this_run_.

35  {
36  o << "<SysxctLockEntry>"
37  << "<LockId>" << v.universal_lock_id_ << "</LockId>"
38  << "<used>" << v.used_in_this_run_ << "</used>";
39  if (v.mcs_block_) {
40  o << "<mcs_block_>" << v.mcs_block_ << "</mcs_block_>";
41  if (v.page_lock_) {
42  o << v.get_as_page_lock()->get_header();
43  } else {
44  o << *(v.get_as_record_lock());
45  }
46  } else {
47  o << "<NotLocked />";
48  }
49  o << "</SysxctLockEntry>";
50  return o;
51 }


std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const PageVersionAccess v 
)

Definition at line 38 of file xct_access.cpp.

References foedus::xct::PageVersionAccess::address_, and foedus::xct::PageVersionAccess::observed_.

38  {
39  o << "<PageVersionAccess><address>" << v.address_ << "</address>"
40  << "<observed>" << v.observed_ << "</observed></PageVersionAccess>";
41  return o;
42 }
std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const ReadXctAccess v 
)

Definition at line 44 of file xct_access.cpp.

References foedus::xct::ReadXctAccess::observed_owner_id_, foedus::xct::RecordXctAccess::ordinal_, foedus::xct::RecordXctAccess::owner_id_address_, foedus::xct::RecordXctAccess::owner_lock_id_, foedus::xct::ReadXctAccess::related_write_, and foedus::xct::RecordXctAccess::storage_id_.

44  {
45  o << "<ReadXctAccess><storage>" << v.storage_id_ << "</storage>"
46 // << "<current_lock_position_>" << v.current_lock_position_ << "</current_lock_position_>"
47  << "<ordinal_>" << v.ordinal_ << "</ordinal_>"
48  << "<observed_owner_id>" << v.observed_owner_id_ << "</observed_owner_id>"
49  << "<record_address>" << v.owner_id_address_ << "</record_address>"
50  << "<current_owner_id>" << *v.owner_id_address_ << "</current_owner_id>"
51  << "<owner_lock_id>" << v.owner_lock_id_ << "</owner_lock_id><log>";
52  if (v.related_write_) {
53  o << "<HasRelatedWrite />"; // does not output its content to avoid circle
54  }
55  o << "</ReadXctAccess>";
56  return o;
57 }
std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const SysxctLockList v 
)

Definition at line 53 of file sysxct_impl.cpp.

References foedus::xct::SysxctLockList::get_capacity(), foedus::xct::SysxctLockList::get_enclosing_max_lock_id(), foedus::xct::SysxctLockList::get_last_active_entry(), and foedus::xct::SysxctLockList::get_last_locked_entry().

53  {
54  o << "<SysxctLockList>"
55  << "<Capacity>" << v.get_capacity() << "</Capacity>"
56  << "<LastActiveEntry>" << v.get_last_active_entry() << "</LastActiveEntry>"
57  << "<LastLockedEntry>" << v.get_last_locked_entry() << "</LastLockedEntry>"
58  << "<EnclosingMaxLockId>" << v.get_enclosing_max_lock_id() << "</EnclosingMaxLockId>";
59  const uint32_t kMaxShown = 32U;
60  for (auto i = 1U; i <= std::min(v.last_active_entry_, kMaxShown); ++i) {
61  o << std::endl << v.array_[i];
62  }
63  if (v.last_active_entry_ > kMaxShown) {
64  o << std::endl << "<too_many />";
65  }
66  o << "</SysxctLockList>";
67  return o;
68 }


std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const WriteXctAccess v 
)

Definition at line 59 of file xct_access.cpp.

References foedus::log::invoke_ostream(), foedus::xct::WriteXctAccess::log_entry_, foedus::xct::RecordXctAccess::ordinal_, foedus::xct::RecordXctAccess::owner_id_address_, foedus::xct::RecordXctAccess::owner_lock_id_, foedus::xct::WriteXctAccess::related_read_, and foedus::xct::RecordXctAccess::storage_id_.

59  {
60  o << "<WriteAccess><storage>" << v.storage_id_ << "</storage>"
61  << "<record_address>" << v.owner_id_address_ << "</record_address>"
62 // << "<current_lock_position_>" << v.current_lock_position_ << "</current_lock_position_>"
63  << "<ordinal_>" << v.ordinal_ << "</ordinal_>"
64  << "<current_owner_id>" << *(v.owner_id_address_) << "</current_owner_id><log>"
65  << "<owner_lock_id>" << v.owner_lock_id_ << "</owner_lock_id><log>";
66  log::invoke_ostream(v.log_entry_, &o);
67  o << "</log>";
68  if (v.related_read_) {
69  o << "<HasRelatedRead />"; // does not output its content to avoid circle
70  }
71  o << "</WriteAccess>";
72  return o;
73 }
void invoke_ostream(const void *buffer, std::ostream *ptr)
Invokes the ostream operator for the given log type defined in log_type.xmacro.


std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const SysxctWorkspace v 
)

Definition at line 70 of file sysxct_impl.cpp.

References foedus::xct::SysxctWorkspace::lock_list_, and foedus::xct::SysxctWorkspace::running_sysxct_.

70  {
71  o << "<SysxctWorkspace>"
72  << "<running_sysxct_>" << v.running_sysxct_ << "</running_sysxct_>"
73  << v.lock_list_
74  << "</SysxctWorkspace>";
75  return o;
76 }
std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const LockFreeReadXctAccess v 
)

Definition at line 75 of file xct_access.cpp.

References foedus::xct::LockFreeReadXctAccess::observed_owner_id_, foedus::xct::LockFreeReadXctAccess::owner_id_address_, and foedus::xct::LockFreeReadXctAccess::storage_id_.

75  {
76  o << "<LockFreeReadXctAccess>"
77  << "<storage>" << v.storage_id_ << "</storage>"
78  << "<observed_owner_id>" << v.observed_owner_id_ << "</observed_owner_id>"
79  << "<record_address>" << v.owner_id_address_ << "</record_address>"
80  << "<current_owner_id>" << *v.owner_id_address_ << "</current_owner_id>";
81  o << "</LockFreeReadXctAccess>";
82  return o;
83 }
std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const LockFreeWriteXctAccess v 
)

Definition at line 85 of file xct_access.cpp.

References foedus::log::invoke_ostream(), foedus::xct::LockFreeWriteXctAccess::log_entry_, and foedus::xct::LockFreeWriteXctAccess::storage_id_.

85  {
86  o << "<LockFreeWriteXctAccess>"
87  << "<storage>" << v.storage_id_ << "</storage>";
88  log::invoke_ostream(v.log_entry_, &o);
89  o << "</LockFreeWriteXctAccess>";
90  return o;
91 }
void invoke_ostream(const void *buffer, std::ostream *ptr)
Invokes the ostream operator for the given log type defined in log_type.xmacro.


std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const LockEntry v 
)

Debugging.

Definition at line 118 of file retrospective_lock_list.cpp.

References foedus::xct::LockEntry::lock_, foedus::xct::LockEntry::preferred_mode_, foedus::xct::LockEntry::taken_mode_, and foedus::xct::LockEntry::universal_lock_id_.

118  {
119  o << "<LockEntry>"
120  << "<LockId>" << v.universal_lock_id_ << "</LockId>"
121  << "<PreferredMode>" << v.preferred_mode_ << "</PreferredMode>"
122  << "<TakenMode>" << v.taken_mode_ << "</TakenMode>";
123  if (v.lock_) {
124  o << *(v.lock_);
125  } else {
126  o << "<Lock>nullptr</Lock>";
127  }
128  o << "</LockEntry>";
129  return o;
130 }
std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const CurrentLockList v 
)

Definition at line 132 of file retrospective_lock_list.cpp.

132  {
133  o << "<CurrentLockList>"
134  << "<Capacity>" << v.capacity_ << "</Capacity>"
135  << "<LastActiveEntry>" << v.last_active_entry_ << "</LastActiveEntry>"
136  << "<LastLockedEntry>" << v.last_locked_entry_ << "</LastLockedEntry>";
137  const uint32_t kMaxShown = 32U;
138  for (auto i = 1U; i <= std::min(v.last_active_entry_, kMaxShown); ++i) {
139  o << v.array_[i];
140  }
141  o << "</CurrentLockList>";
142  return o;
143 }
std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const RetrospectiveLockList v 
)

Definition at line 145 of file retrospective_lock_list.cpp.

145  {
146  o << "<RetrospectiveLockList>"
147  << "<Capacity>" << v.capacity_ << "</Capacity>"
148  << "<LastActiveEntry>" << v.last_active_entry_ << "</LastActiveEntry>";
149  const uint32_t kMaxShown = 32U;
150  for (auto i = 1U; i <= std::min(v.last_active_entry_, kMaxShown); ++i) {
151  o << v.array_[i];
152  }
153  o << "</RetrospectiveLockList>";
154  return o;
155 }
std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const Xct v 
)

Definition at line 168 of file xct.cpp.

References foedus::xct::Xct::get_default_hot_threshold_for_this_xct(), foedus::xct::Xct::get_default_rll_threshold_for_this_xct(), foedus::xct::Xct::get_hot_threshold_for_this_xct(), foedus::xct::Xct::get_id(), foedus::xct::Xct::get_lock_free_read_set_size(), foedus::xct::Xct::get_lock_free_write_set_size(), foedus::xct::Xct::get_page_version_set_size(), foedus::xct::Xct::get_pointer_set_size(), foedus::xct::Xct::get_read_set_size(), foedus::xct::Xct::get_rll_threshold_for_this_xct(), foedus::xct::Xct::get_sysxct_workspace(), foedus::xct::Xct::get_write_set_size(), foedus::xct::Xct::is_active(), foedus::xct::Xct::is_default_rll_for_this_xct(), and foedus::xct::Xct::is_enable_rll_for_this_xct().

168  {
169  o << "<Xct>"
170  << "<active_>" << v.is_active() << "</active_>";
171  o << "<enable_rll_for_this_xct_>" << v.is_enable_rll_for_this_xct()
172  << "</enable_rll_for_this_xct_>";
173  o << "<default_rll_for_this_xct_>" << v.is_default_rll_for_this_xct()
174  << "</default_rll_for_this_xct_>";
175  o << "<hot_threshold>" << v.get_hot_threshold_for_this_xct() << "</hot_threshold>";
176  o << "<default_hot_threshold>" << v.get_default_hot_threshold_for_this_xct()
177  << "</default_hot_threshold>";
178  o << "<rll_threshold>" << v.get_rll_threshold_for_this_xct() << "</rll_threshold>";
179  o << "<default_rll_threshold>" << v.get_default_rll_threshold_for_this_xct()
180  << "</default_rll_threshold>";
181  if (v.is_active()) {
182  o << "<id_>" << v.get_id() << "</id_>"
183  << "<read_set_size>" << v.get_read_set_size() << "</read_set_size>"
184  << "<write_set_size>" << v.get_write_set_size() << "</write_set_size>"
185  << "<pointer_set_size>" << v.get_pointer_set_size() << "</pointer_set_size>"
186  << "<page_version_set_size>" << v.get_page_version_set_size() << "</page_version_set_size>"
187  << "<lock_free_read_set_size>" << v.get_lock_free_read_set_size()
188  << "</lock_free_read_set_size>"
189  << "<lock_free_write_set_size>" << v.get_lock_free_write_set_size()
190  << "</lock_free_write_set_size>";
191  const SysxctWorkspace* sysxct_workspace = v.get_sysxct_workspace();
192  o << *sysxct_workspace;
193  }
194  o << "</Xct>";
195  return o;
196 }


std::ostream& foedus::xct::operator<< ( std::ostream &  o,
const McsWwLock v 
)

Debug out operators.

Definition at line 171 of file xct_id.cpp.

References foedus::xct::McsWwLock::get_tail_waiter(), foedus::xct::McsWwLock::get_tail_waiter_block(), and foedus::xct::McsWwLock::is_locked().

171  {
172  o << "<McsWwLock><locked>" << v.is_locked() << "</locked><tail_waiter>"
173  << v.get_tail_waiter() << "</tail_waiter><tail_block>" << v.get_tail_waiter_block()
174  << "</tail_block></McsWwLock>";
175  return o;
176 }


std::ostream& foedus::xct::operator<< (std::ostream& o, const XctId& v)

Definition at line 178 of file xct_id.cpp.

References foedus::xct::XctId::get_epoch(), foedus::xct::XctId::get_ordinal(), foedus::xct::XctId::is_being_written(), foedus::xct::XctId::is_deleted(), foedus::xct::XctId::is_moved(), and foedus::xct::XctId::is_next_layer().

178  {
179  o << "<XctId epoch=\"" << v.get_epoch()
180  << "\" ordinal=\"" << v.get_ordinal()
181  << "\" status=\""
182  << (v.is_deleted() ? "D" : " ")
183  << (v.is_moved() ? "M" : " ")
184  << (v.is_being_written() ? "W" : " ")
185  << (v.is_next_layer() ? "N" : " ")
186  << "\" />";
187  return o;
188 }


std::ostream& foedus::xct::operator<< (std::ostream& o, const LockableXctId& v)

Definition at line 190 of file xct_id.cpp.

References foedus::xct::LockableXctId::lock_, and foedus::xct::LockableXctId::xct_id_.

190  {
191  o << "<LockableXctId>" << v.xct_id_ << v.lock_ << "</LockableXctId>";
192  return o;
193 }

std::ostream& foedus::xct::operator<< (std::ostream& o, const McsRwLock& v)

Definition at line 195 of file xct_id.cpp.

References foedus::xct::McsRwLock::get_tail_waiter(), foedus::xct::McsRwLock::get_tail_waiter_block(), and foedus::xct::McsRwLock::is_locked().

195  {
196  o << "<McsRwLock><locked>" << v.is_locked() << "</locked><tail_waiter>"
197  << v.get_tail_waiter() << "</tail_waiter><tail_block>" << v.get_tail_waiter_block()
198  << "</tail_block></McsRwLock>";
199  return o;
200 }


std::ostream& foedus::xct::operator<< (std::ostream& o, const RwLockableXctId& v)

Definition at line 202 of file xct_id.cpp.

References foedus::xct::RwLockableXctId::lock_, and foedus::xct::RwLockableXctId::xct_id_.

202  {
203  o << "<RwLockableXctId>" << v.xct_id_ << v.lock_ << "</RwLockableXctId>";
204  return o;
205 }

UniversalLockId foedus::xct::rw_lock_to_universal_lock_id (const memory::GlobalVolatilePageResolver& resolver, McsRwLock* lock)
inline

Definition at line 1231 of file xct_id.hpp.

References to_universal_lock_id().

Referenced by foedus::xct::McsMockAdaptor< RW_BLOCK >::add_rw_async_mapping(), foedus::thread::ThreadPimplMcsAdaptor< RW_BLOCK >::add_rw_async_mapping(), foedus::xct::McsMockThread< RW_BLOCK >::get_mcs_rw_async_block_index(), foedus::thread::ThreadPimplMcsAdaptor< RW_BLOCK >::get_rw_other_async_block(), foedus::xct::McsMockAdaptor< RW_BLOCK >::remove_rw_async_mapping(), and foedus::thread::ThreadPimplMcsAdaptor< RW_BLOCK >::remove_rw_async_mapping().

1233  {
1234  return to_universal_lock_id(resolver, reinterpret_cast<uintptr_t>(lock));
1235 }


UniversalLockId foedus::xct::to_universal_lock_id (storage::VolatilePagePointer page_id, uintptr_t addr)
inline

Definition at line 63 of file sysxct_impl.hpp.

References foedus::storage::VolatilePagePointer::get_numa_node(), and foedus::storage::VolatilePagePointer::get_offset().

Referenced by foedus::xct::SysxctLockList::assert_sorted_impl(), foedus::xct::SysxctLockList::batch_get_or_add_entries(), foedus::xct::SysxctLockList::get_or_add_entry(), lock_assert_sorted(), foedus::xct::Xct::on_record_read(), foedus::xct::PageComparator::operator()(), rw_lock_to_universal_lock_id(), to_universal_lock_id(), and xct_id_to_universal_lock_id().

63  {
64  return to_universal_lock_id(page_id.get_numa_node(), page_id.get_offset(), addr);
65 }


UniversalLockId foedus::xct::to_universal_lock_id (const memory::GlobalVolatilePageResolver& resolver, uintptr_t lock_ptr)

Always use this method rather than doing the conversion yourself.

See also
UniversalLockId

Definition at line 36 of file xct_id.cpp.

References ASSERT_ND, foedus::storage::assert_within_valid_volatile_page(), foedus::memory::GlobalVolatilePageResolver::begin_, foedus::storage::construct_volatile_page_pointer(), foedus::memory::GlobalVolatilePageResolver::end_, foedus::storage::Page::get_header(), foedus::storage::VolatilePagePointer::get_numa_node(), foedus::storage::VolatilePagePointer::get_offset(), foedus::memory::GlobalVolatilePageResolver::numa_node_count_, foedus::storage::to_page(), and to_universal_lock_id().

38  {
39  storage::assert_within_valid_volatile_page(resolver, reinterpret_cast<void*>(lock_ptr));
40  const storage::Page* page = storage::to_page(reinterpret_cast<void*>(lock_ptr));
41  const auto& page_header = page->get_header();
42  ASSERT_ND(!page_header.snapshot_);
43  storage::VolatilePagePointer vpp(storage::construct_volatile_page_pointer(page_header.page_id_));
44  const uint64_t node = vpp.get_numa_node();
45  const uint64_t page_index = vpp.get_offset();
46 
47  // See assert_within_valid_volatile_page() why we can't do these assertions.
48  // ASSERT_ND(lock_ptr >= base + vpp.components.offset * storage::kPageSize);
49  // ASSERT_ND(lock_ptr < base + (vpp.components.offset + 1U) * storage::kPageSize);
50 
51  // Although we have the addresses in resolver, we can NOT use it to calculate the offset
52  // because the base might be a different VA (though pointing to the same physical address).
53  // We thus calculate UniversalLockId purely from PageId in the page header and in_page_offset.
54  // Thus, actually this function uses resolver only for assertions (so far)!
55  ASSERT_ND(node < resolver.numa_node_count_);
56  ASSERT_ND(vpp.get_offset() >= resolver.begin_);
57  ASSERT_ND(vpp.get_offset() < resolver.end_);
58  return to_universal_lock_id(node, page_index, lock_ptr);
59 }


UniversalLockId foedus::xct::to_universal_lock_id (uint64_t numa_node, uint64_t local_page_index, uintptr_t lock_ptr)
inline

If you already have the numa_node and local_page_index, prefer this overload.

Definition at line 1217 of file xct_id.hpp.

References kLockPageSize.

1220  {
1221  const uint64_t in_page_offset = lock_ptr % kLockPageSize;
1222  return (numa_node << 48) | (local_page_index * kLockPageSize + in_page_offset);
1223 }

UniversalLockId foedus::xct::xct_id_to_universal_lock_id (const memory::GlobalVolatilePageResolver& resolver, RwLockableXctId* lock)
inline

Definition at line 1226 of file xct_id.hpp.

References to_universal_lock_id().

Referenced by foedus::xct::Xct::add_to_read_set(), foedus::xct::RetrospectiveLockList::construct(), foedus::xct::CurrentLockList::get_or_add_entry(), foedus::xct::Xct::on_record_read_take_locks_if_needed(), and foedus::xct::RecordXctAccess::set_owner_id_resolve_lock_id().

1228  {
1229  return to_universal_lock_id(resolver, reinterpret_cast<uintptr_t>(lock));
1230 }


Variable Documentation

const LockListPosition foedus::xct::kLockListPositionInvalid = 0

Definition at line 149 of file xct_id.hpp.

Referenced by foedus::xct::SysxctLockList::assert_sorted_impl(), foedus::xct::SysxctLockList::batch_get_or_add_entries(), foedus::xct::CurrentLockList::batch_insert_write_placeholders(), foedus::xct::SysxctLockList::calculate_last_locked_entry_from(), foedus::xct::CurrentLockList::calculate_last_locked_entry_from(), foedus::xct::SysxctLockList::clear_entries(), foedus::xct::RetrospectiveLockList::clear_entries(), foedus::xct::CurrentLockList::clear_entries(), foedus::xct::SysxctLockList::compress_entries(), foedus::xct::RetrospectiveLockList::construct(), foedus::xct::CurrentLockListIteratorForWriteSet::CurrentLockListIteratorForWriteSet(), foedus::xct::CurrentLockList::get_max_locked_id(), foedus::xct::SysxctLockList::get_or_add_entry(), foedus::xct::CurrentLockList::get_or_add_entry(), foedus::xct::RetrospectiveLockList::is_empty(), foedus::xct::SysxctLockList::is_empty(), foedus::xct::CurrentLockList::is_empty(), foedus::xct::RetrospectiveLockList::is_valid_entry(), foedus::xct::SysxctLockList::is_valid_entry(), foedus::xct::CurrentLockList::is_valid_entry(), lock_assert_sorted(), lock_binary_search(), lock_lower_bound(), foedus::xct::Xct::on_record_read_take_locks_if_needed(), foedus::xct::XctManagerPimpl::precommit_xct_lock(), foedus::xct::CurrentLockList::prepopulate_for_retrospective_lock_list(), foedus::xct::CurrentLockList::release_all_after(), foedus::xct::SysxctLockList::release_all_locks(), foedus::xct::CurrentLockList::try_async_multiple_locks(), foedus::xct::CurrentLockList::try_or_acquire_multiple_locks(), and foedus::xct::CurrentLockList::try_or_acquire_single_lock().

const uint64_t foedus::xct::kLockPageSize = 1 << 12

Must be same as storage::kPageSize.

To avoid header dependencies, we declare a dedicated constant here and statically asserts the equivalence in cpp.

Definition at line 1215 of file xct_id.hpp.

Referenced by to_universal_lock_id().

constexpr uint32_t foedus::xct::kMcsMockDataPageFiller
Initial value:
(storage::kPageSize - kMcsMockDataPageHeaderSize) - (sizeof(RwLockableXctId) + sizeof(McsWwLock)) * kMcsMockDataPageLocksPerPage

Definition at line 175 of file xct_mcs_adapter_impl.hpp.

constexpr uint32_t foedus::xct::kMcsMockDataPageHeaderPad = kMcsMockDataPageHeaderSize - sizeof(storage::PageHeader)

Definition at line 170 of file xct_mcs_adapter_impl.hpp.

constexpr uint32_t foedus::xct::kMcsMockDataPageHeaderSize = 128U

Definition at line 168 of file xct_mcs_adapter_impl.hpp.

constexpr uint32_t foedus::xct::kMcsMockDataPageLocksPerPage
Initial value:
(storage::kPageSize - kMcsMockDataPageHeaderSize) / (sizeof(RwLockableXctId) + sizeof(McsWwLock))

Definition at line 172 of file xct_mcs_adapter_impl.hpp.

Referenced by foedus::xct::McsMockContext< RW_BLOCK >::get_rw_lock_address(), foedus::xct::McsMockContext< RW_BLOCK >::get_ww_lock_address(), foedus::xct::McsMockDataPage::init(), and foedus::xct::McsMockContext< RW_BLOCK >::init().

const uint16_t foedus::xct::kReadsetPrefetchBatch = 16

Definition at line 646 of file xct_manager_pimpl.cpp.

const uint64_t foedus::xct::kXctIdBeingWrittenBit = 1ULL << 61

Definition at line 885 of file xct_id.hpp.

Referenced by foedus::xct::XctId::set_being_written().

const uint64_t foedus::xct::kXctIdDeletedBit = 1ULL << 63

Definition at line 883 of file xct_id.hpp.

Referenced by foedus::xct::XctId::set_deleted().

const uint64_t foedus::xct::kXctIdMaskEpoch = 0x0FFFFFFF00000000ULL

Definition at line 888 of file xct_id.hpp.

const uint64_t foedus::xct::kXctIdMaskOrdinal = 0x00000000FFFFFFFFULL

Definition at line 889 of file xct_id.hpp.

const uint64_t foedus::xct::kXctIdMaskSerializer = 0x0FFFFFFFFFFFFFFFULL

Definition at line 887 of file xct_id.hpp.

Referenced by foedus::xct::XctId::clear_status_bits().

const uint64_t foedus::xct::kXctIdMovedBit = 1ULL << 62

Definition at line 884 of file xct_id.hpp.

Referenced by foedus::xct::XctId::set_moved().

const uint64_t foedus::xct::kXctIdNextLayerBit = 1ULL << 60

Definition at line 886 of file xct_id.hpp.

Referenced by foedus::xct::XctId::set_next_layer().