libfoedus-core
FOEDUS Core Library
foedus::thread::Thread Class Reference (final)

Represents one thread running on one NUMA core. More...

Detailed Description

Represents one thread running on one NUMA core.

MCS-Locking

SILO uses a simple spin lock with atomic CAS. It works fine up to 4 sockets, but we observed a huge bottleneck with it on big machines (8 or 16 sockets): it causes a cache-invalidation storm even with exponential backoff. The best solution is MCS locking with local spins. We implemented it with advice from the HLINUX team.

Definition at line 48 of file thread.hpp.

#include <thread.hpp>


Public Types

enum  Constants { kMaxFindPagesBatch = 32 }
 

Public Member Functions

 Thread ()=delete
 
 Thread (Engine *engine, ThreadId id, ThreadGlobalOrdinal global_ordinal)
 
 ~Thread ()
 
ErrorStack initialize () override
 Acquires resources in this object, usually called right after constructor. More...
 
bool is_initialized () const override
 Returns whether the object has been already initialized or not. More...
 
ErrorStack uninitialize () override
 An idempotent method to release all resources of this object, if any. More...
 
Engine * get_engine () const
 
ThreadId get_thread_id () const
 
ThreadGroupId get_numa_node () const
 
ThreadGlobalOrdinal get_thread_global_ordinal () const
 
xct::Xct & get_current_xct ()
 Returns the transaction that is currently running on this thread. More...
 
bool is_running_xct () const
 Returns if this thread is running an active transaction. More...
 
memory::NumaCoreMemory * get_thread_memory () const
 Returns the private memory repository of this thread. More...
 
memory::NumaNodeMemory * get_node_memory () const
 Returns the node-shared memory repository of the NUMA node this thread belongs to. More...
 
log::ThreadLogBuffer & get_thread_log_buffer ()
 Returns the private log buffer for this thread. More...
 
const memory::GlobalVolatilePageResolver & get_global_volatile_page_resolver () const
 Returns the page resolver to convert page ID to page pointer. More...
 
const memory::LocalPageResolver & get_local_volatile_page_resolver () const
 Returns page resolver to convert only local page ID to page pointer. More...
 
uint64_t get_snapshot_cache_hits () const
 [statistics] count of cache hits in snapshot caches More...
 
uint64_t get_snapshot_cache_misses () const
 [statistics] count of cache misses in snapshot caches More...
 
void reset_snapshot_cache_counts () const
 [statistics] resets the above two More...
 
storage::Page * resolve (storage::VolatilePagePointer ptr) const
 Shorthand for get_global_volatile_page_resolver().resolve_offset() More...
 
storage::Page * resolve_newpage (storage::VolatilePagePointer ptr) const
 Shorthand for get_global_volatile_page_resolver().resolve_offset_newpage() More...
 
storage::Page * resolve (memory::PagePoolOffset offset) const
 Shorthand for get_local_volatile_page_resolver().resolve_offset() More...
 
storage::Page * resolve_newpage (memory::PagePoolOffset offset) const
 Shorthand for get_local_volatile_page_resolver().resolve_offset_newpage() More...
 
template<typename P >
P * resolve_cast (storage::VolatilePagePointer ptr) const
 resolve() plus reinterpret_cast More...
 
template<typename P >
P * resolve_newpage_cast (storage::VolatilePagePointer ptr) const
 
template<typename P >
P * resolve_cast (memory::PagePoolOffset offset) const
 
template<typename P >
P * resolve_newpage_cast (memory::PagePoolOffset offset) const
 
ErrorCode find_or_read_a_snapshot_page (storage::SnapshotPagePointer page_id, storage::Page **out)
 Find the given page in snapshot cache, reading it if not found. More...
 
ErrorCode find_or_read_snapshot_pages_batch (uint16_t batch_size, const storage::SnapshotPagePointer *page_ids, storage::Page **out)
 Batched version of find_or_read_a_snapshot_page(). More...
 
ErrorCode read_a_snapshot_page (storage::SnapshotPagePointer page_id, storage::Page *buffer)
 Read a snapshot page using the thread-local file descriptor set. More...
 
ErrorCode read_snapshot_pages (storage::SnapshotPagePointer page_id_begin, uint32_t page_count, storage::Page *buffer)
 Read contiguous pages in one shot. More...
 
ErrorCode install_a_volatile_page (storage::DualPagePointer *pointer, storage::Page **installed_page)
 Installs a volatile page to the given dual pointer as a copy of the snapshot page. More...
 
ErrorCode follow_page_pointer (storage::VolatilePageInit page_initializer, bool tolerate_null_pointer, bool will_modify, bool take_ptr_set_snapshot, storage::DualPagePointer *pointer, storage::Page **page, const storage::Page *parent, uint16_t index_in_parent)
 A general method to follow (read) a page pointer. More...
 
ErrorCode follow_page_pointers_for_read_batch (uint16_t batch_size, storage::VolatilePageInit page_initializer, bool tolerate_null_pointer, bool take_ptr_set_snapshot, storage::DualPagePointer **pointers, storage::Page **parents, const uint16_t *index_in_parents, bool *followed_snapshots, storage::Page **out)
 Batched version of follow_page_pointer with will_modify==false. More...
 
ErrorCode follow_page_pointers_for_write_batch (uint16_t batch_size, storage::VolatilePageInit page_initializer, storage::DualPagePointer **pointers, storage::Page **parents, const uint16_t *index_in_parents, storage::Page **out)
 Batched version of follow_page_pointer with will_modify==true and tolerate_null_pointer==true. More...
 
void collect_retired_volatile_page (storage::VolatilePagePointer ptr)
 Keeps the specified volatile page as retired as of the current epoch. More...
 
xct::McsRwSimpleBlock * get_mcs_rw_simple_blocks ()
 Unconditionally takes MCS lock on the given mcs_lock. More...
 
xct::McsRwExtendedBlock * get_mcs_rw_extended_blocks ()
 
ErrorCode cll_try_or_acquire_single_lock (xct::LockListPosition pos)
 Methods related to the Current Lock List (CLL). These are the only interface in Thread for locking records. More...
 
ErrorCode cll_try_or_acquire_multiple_locks (xct::LockListPosition upto_pos)
 Acquire multiple locks up to the given position in canonical order. More...
 
void cll_giveup_all_locks_after (xct::UniversalLockId address)
 Gives up locks in the CLL that are not yet taken. More...
 
void cll_giveup_all_locks_at_and_after (xct::UniversalLockId address)
 
void cll_release_all_locks_after (xct::UniversalLockId address)
 Release all locks in CLL of this thread whose addresses are canonically ordered before the parameter. More...
 
void cll_release_all_locks_at_and_after (xct::UniversalLockId address)
 Same as cll_release_all_locks_after(address - 1). More...
 
void cll_release_all_locks ()
 
ErrorCode run_nested_sysxct (xct::SysxctFunctor *functor, uint32_t max_retries=0)
 Methods related to System transactions (sysxct) nested under this thread. More...
 
ErrorCode sysxct_record_lock (xct::SysxctWorkspace *sysxct_workspace, storage::VolatilePagePointer page_id, xct::RwLockableXctId *lock)
 Takes a lock for a sysxct running under this thread. More...
 
ErrorCode sysxct_batch_record_locks (xct::SysxctWorkspace *sysxct_workspace, storage::VolatilePagePointer page_id, uint32_t lock_count, xct::RwLockableXctId **locks)
 Takes a bunch of locks in the same page for a sysxct running under this thread. More...
 
ErrorCode sysxct_page_lock (xct::SysxctWorkspace *sysxct_workspace, storage::Page *page)
 Takes a page lock in the same page for a sysxct running under this thread. More...
 
ErrorCode sysxct_batch_page_locks (xct::SysxctWorkspace *sysxct_workspace, uint32_t lock_count, storage::Page **pages)
 Takes a bunch of page locks for a sysxct running under this thread. More...
 
Epoch * get_in_commit_epoch_address ()
 Currently we don't have sysxct_release_locks() etc. More...
 
ThreadPimpl * get_pimpl () const
 Returns the pimpl of this object. More...
 
assorted::UniformRandom & get_lock_rnd ()
 
bool is_hot_page (const storage::Page *page) const
 
- Public Member Functions inherited from foedus::Initializable
virtual ~Initializable ()
 

Friends

std::ostream & operator<< (std::ostream &o, const Thread &v)
 

Member Enumeration Documentation

Enumerator
kMaxFindPagesBatch 

Max size for find_or_read_snapshot_pages_batch() etc.

This must be the same as or less than CacheHashtable::kMaxFindBatchSize.

Definition at line 50 of file thread.hpp.

enum Constants {
  kMaxFindPagesBatch = 32,
};

Constructor & Destructor Documentation

foedus::thread::Thread::Thread ( ) = delete

foedus::thread::Thread::Thread ( Engine *  engine,
ThreadId  id,
ThreadGlobalOrdinal  global_ordinal 
)

Definition at line 32 of file thread.cpp.

References foedus::assorted::UniformRandom::set_current_seed().

  : pimpl_(nullptr) {
  lock_rnd_.set_current_seed(global_ordinal);
  pimpl_ = new ThreadPimpl(engine, this, id, global_ordinal);
}


foedus::thread::Thread::~Thread ( )

Definition at line 40 of file thread.cpp.

{
  delete pimpl_;
  pimpl_ = nullptr;
}

Member Function Documentation

void foedus::thread::Thread::cll_giveup_all_locks_after ( xct::UniversalLockId  address)

Gives up locks in the CLL that are not yet taken.

The preferred mode will be set either to NoLock or to the same as taken_mode, and all incomplete asynchronous locks will be cancelled.

Definition at line 847 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::cll_giveup_all_locks_after().

Referenced by cll_giveup_all_locks_at_and_after(), and foedus::xct::Xct::on_record_read_take_locks_if_needed().

{
  pimpl_->cll_giveup_all_locks_after(address);
}


void foedus::thread::Thread::cll_giveup_all_locks_at_and_after ( xct::UniversalLockId  address)
inline

Definition at line 297 of file thread.hpp.

References cll_giveup_all_locks_after(), and foedus::xct::kNullUniversalLockId.

{
  if (address == xct::kNullUniversalLockId) {
    cll_giveup_all_locks_after(xct::kNullUniversalLockId);
  } else {
    cll_giveup_all_locks_after(address - 1U);
  }
}


void foedus::thread::Thread::cll_release_all_locks ( )

Definition at line 841 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::cll_release_all_locks().

Referenced by foedus::xct::XctManagerPimpl::release_and_clear_all_current_locks().

{
  pimpl_->cll_release_all_locks();
}


void foedus::thread::Thread::cll_release_all_locks_after ( xct::UniversalLockId  address)

Release all locks in CLL of this thread whose addresses are canonically ordered before the parameter.

This is used where we need to rule out the risk of deadlock.

Definition at line 844 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::cll_release_all_locks_after().

Referenced by cll_release_all_locks_at_and_after(), and foedus::xct::XctManagerPimpl::precommit_xct_lock().

{
  pimpl_->cll_release_all_locks_after(address);
}


void foedus::thread::Thread::cll_release_all_locks_at_and_after ( xct::UniversalLockId  address)
inline

Same as cll_release_all_locks_after(address - 1).

Definition at line 310 of file thread.hpp.

References cll_release_all_locks_after(), and foedus::xct::kNullUniversalLockId.

{
  if (address == xct::kNullUniversalLockId) {
    cll_release_all_locks_after(xct::kNullUniversalLockId);
  } else {
    cll_release_all_locks_after(address - 1U);
  }
}


ErrorCode foedus::thread::Thread::cll_try_or_acquire_multiple_locks ( xct::LockListPosition  upto_pos)

Acquire multiple locks up to the given position in canonical order.

This is invoked by the thread to keep itself in canonical mode. This method is unconditional, meaning it waits forever until it acquires the locks. Hence, it must be invoked while the thread is still in canonical mode; otherwise it risks deadlock.

Definition at line 853 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::cll_try_or_acquire_multiple_locks().

Referenced by foedus::xct::Xct::on_record_read_take_locks_if_needed().

{
  return pimpl_->cll_try_or_acquire_multiple_locks(upto_pos);
}


ErrorCode foedus::thread::Thread::cll_try_or_acquire_single_lock ( xct::LockListPosition  pos)

Methods related to the Current Lock List (CLL). These are the only interface in Thread for locking records.

We previously had methods to take locks directly without the CLL, but we now prohibit bypassing it: the CLL guarantees deadlock-free lock handling. The CLL handles only record locks. In FOEDUS, normal transactions never take page locks; only system transactions are allowed to do so. The methods below take or release locks, so they receive MCS_RW_IMPL, a template parameter. To avoid vtables and allow inlining, their definitions are at the bottom of this file.

This method acquires one lock in the CLL. It automatically checks whether we are in canonical mode: when in canonical mode it acquires the lock unconditionally (never returns until acquired), and when not in canonical mode it tries the lock instantaneously (returns RaceAbort immediately).

These are inlined primarily because they receive a template parameter, not because we want to inline them for performance. We could do explicit instantiations, and they are not that lengthy either, but just inlining them is easier in this case.

Definition at line 850 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::cll_try_or_acquire_single_lock().

Referenced by foedus::xct::Xct::on_record_read_take_locks_if_needed(), and foedus::xct::XctManagerPimpl::precommit_xct_lock().

{
  return pimpl_->cll_try_or_acquire_single_lock(pos);
}


void foedus::thread::Thread::collect_retired_volatile_page ( storage::VolatilePagePointer  ptr)

Keeps the specified volatile page as retired as of the current epoch.

Parameters
[in] ptr the volatile page that has been retired
Precondition
the page ptr points to satisfies is_retired() == true.

This thread buffers such pages and returns them to the volatile page pool when it is safe to do so.

Definition at line 113 of file thread.cpp.

References foedus::thread::ThreadPimpl::collect_retired_volatile_page().

Referenced by foedus::storage::masstree::Adopt::adopt_case_a(), foedus::storage::masstree::Adopt::adopt_case_b(), foedus::storage::masstree::grow_case_a_common(), foedus::storage::masstree::grow_case_b_common(), and foedus::storage::masstree::SplitIntermediate::split_impl_no_error().

{
  pimpl_->collect_retired_volatile_page(ptr);
}


ErrorCode foedus::thread::Thread::find_or_read_a_snapshot_page ( storage::SnapshotPagePointer  page_id,
storage::Page **  out 
)

Find the given page in snapshot cache, reading it if not found.

Definition at line 95 of file thread.cpp.

References foedus::thread::ThreadPimpl::find_or_read_a_snapshot_page().

Referenced by foedus::storage::hash::HashStoragePimpl::follow_page(), foedus::storage::hash::HashStoragePimpl::follow_page_bin_head(), foedus::storage::hash::HashStoragePimpl::locate_record_in_snapshot(), foedus::storage::masstree::MasstreeStoragePimpl::prefetch_pages_follow(), and foedus::storage::array::ArrayStoragePimpl::prefetch_pages_recurse().

{
  return pimpl_->find_or_read_a_snapshot_page(page_id, out);
}


ErrorCode foedus::thread::Thread::find_or_read_snapshot_pages_batch ( uint16_t  batch_size,
const storage::SnapshotPagePointer page_ids,
storage::Page **  out 
)

Batched version of find_or_read_a_snapshot_page().

Parameters
[in] batch_size Batch size. Must be kMaxFindPagesBatch or less.
[in] page_ids Array of page IDs to look for; size = batch_size.
[out] out Output.

This might perform much faster because of parallel prefetching, SIMD-ized hash calculation (planned, not implemented yet), etc.

Definition at line 100 of file thread.cpp.

References foedus::thread::ThreadPimpl::find_or_read_snapshot_pages_batch().

{
  return pimpl_->find_or_read_snapshot_pages_batch(batch_size, page_ids, out);
}


ErrorCode foedus::thread::Thread::follow_page_pointer ( storage::VolatilePageInit  page_initializer,
bool  tolerate_null_pointer,
bool  will_modify,
bool  take_ptr_set_snapshot,
storage::DualPagePointer *  pointer,
storage::Page **  page,
const storage::Page *  parent,
uint16_t  index_in_parent 
)

A general method to follow (read) a page pointer.

Parameters
[in] page_initializer callback function in case we need to initialize a new volatile page. Null if that never happens (e.g., when tolerate_null_pointer is false).
[in] tolerate_null_pointer when true and both the volatile and snapshot pointers seem null, we return a null page rather than creating a new volatile page.
[in] will_modify if true, we always return a non-null volatile page. This is true when we are going to modify the page, such as for insert/delete.
[in] take_ptr_set_snapshot if true, we add the address of the volatile page pointer to the pointer set when we do not follow a volatile pointer (null or snapshot). This is usually true to make sure we become aware of new page installations by concurrent threads. If the isolation level is not serializable, we don't take a pointer set anyway.
[in,out] pointer the page pointer.
[out] page the read page.
[in] parent the parent page that contains a pointer to the page.
[in] index_in_parent some index (meaning depends on page type) of the pointer in the parent page.
Precondition
!tolerate_null_pointer || !will_modify (if we are modifying the page, tolerating a null pointer doesn't make sense; we should always initialize a new volatile page)

This is the primary way to retrieve a page pointed to by a pointer in various places. Depending on the current transaction's isolation level and the storage type (represented by the various arguments), this does a whole lot of things to comply with our commit protocol.

Remember that DualPagePointer maintains both a volatile and a snapshot pointer. We sometimes have to install a new volatile page or add the pointer to the pointer set for serializability. That logic is a bit too lengthy to duplicate in each page type, so we generalize it here.

Definition at line 353 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::follow_page_pointer().

Referenced by foedus::storage::masstree::MasstreeStoragePimpl::follow_page(), foedus::storage::hash::HashStoragePimpl::follow_page(), foedus::storage::hash::HashStoragePimpl::follow_page_bin_head(), foedus::storage::array::ArrayStoragePimpl::follow_pointer(), foedus::storage::masstree::MasstreeStoragePimpl::get_first_root(), foedus::storage::array::ArrayStoragePimpl::get_root_page(), and foedus::storage::hash::HashStoragePimpl::get_root_page().

{
  return pimpl_->follow_page_pointer(
    page_initializer,
    tolerate_null_pointer,
    will_modify,
    take_ptr_set_snapshot,
    pointer,
    page,
    parent,
    index_in_parent);
}


ErrorCode foedus::thread::Thread::follow_page_pointers_for_read_batch ( uint16_t  batch_size,
storage::VolatilePageInit  page_initializer,
bool  tolerate_null_pointer,
bool  take_ptr_set_snapshot,
storage::DualPagePointer **  pointers,
storage::Page **  parents,
const uint16_t *  index_in_parents,
bool *  followed_snapshots,
storage::Page **  out 
)

Batched version of follow_page_pointer with will_modify==false.

Parameters
[in] batch_size Batch size. Must be kMaxFindPagesBatch or less.
[in] page_initializer callback function in case we need to initialize a new volatile page. Null if that never happens (e.g., when tolerate_null_pointer is false).
[in] tolerate_null_pointer when true and both the volatile and snapshot pointers seem null, we return a null page rather than creating a new volatile page.
[in] take_ptr_set_snapshot if true, we add the address of the volatile page pointer to the pointer set when we do not follow a volatile pointer. This is usually true to make sure we become aware of new page installations by concurrent threads. If the isolation level is not serializable, we don't take a pointer set anyway.
[in,out] pointers the page pointers.
[in] parents the parent pages that contain pointers to the pages.
[in] index_in_parents some index (meaning depends on page type) of each pointer in its parent page.
[in,out] followed_snapshots as input, must indicate whether each parents[i] is a snapshot page; as output, same as out[i]->header().snapshot_. We receive/emit this to avoid accessing page headers.
[out] out the read pages.
Note
this method is guaranteed to work even if parents == out

Definition at line 373 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::follow_page_pointers_for_read_batch().

Referenced by foedus::storage::array::ArrayStoragePimpl::follow_pointers_for_read_batch().

{
  return pimpl_->follow_page_pointers_for_read_batch(
    batch_size,
    page_initializer,
    tolerate_null_pointer,
    take_ptr_set_snapshot,
    pointers,
    parents,
    index_in_parents,
    followed_snapshots,
    out);
}


ErrorCode foedus::thread::Thread::follow_page_pointers_for_write_batch ( uint16_t  batch_size,
storage::VolatilePageInit  page_initializer,
storage::DualPagePointer **  pointers,
storage::Page **  parents,
const uint16_t *  index_in_parents,
storage::Page **  out 
)

Batched version of follow_page_pointer with will_modify==true and tolerate_null_pointer==true.

Parameters
[in] batch_size Batch size. Must be kMaxFindPagesBatch or less.
[in] page_initializer callback function in case we need to initialize a new volatile page.
[in,out] pointers the page pointers.
[in] parents the parent pages that contain pointers to the pages.
[in] index_in_parents some index (meaning depends on page type) of each pointer in its parent page.
[out] out the read pages.
Note
this method is guaranteed to work even if parents == out

Definition at line 395 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::follow_page_pointers_for_write_batch().

Referenced by foedus::storage::array::ArrayStoragePimpl::follow_pointers_for_write_batch().

{
  return pimpl_->follow_page_pointers_for_write_batch(
    batch_size,
    page_initializer,
    pointers,
    parents,
    index_in_parents,
    out);
}


xct::Xct & foedus::thread::Thread::get_current_xct ( )

Returns the transaction that is currently running on this thread.

Definition at line 75 of file thread.cpp.

References foedus::thread::ThreadPimpl::current_xct_.

Referenced by foedus::xct::XctManagerPimpl::abort_xct(), foedus::storage::sequential::SequentialStorage::append_record(), foedus::xct::XctManagerPimpl::begin_xct(), foedus::xct::RetrospectiveLockList::construct(), foedus::storage::PageHeader::contains_hot_records(), foedus::storage::hash::HashStoragePimpl::follow_page_bin_head(), foedus::storage::array::ArrayStoragePimpl::get_record(), foedus::storage::array::ArrayStoragePimpl::get_record_for_write(), foedus::storage::array::ArrayStoragePimpl::get_record_for_write_batch(), foedus::storage::array::ArrayStoragePimpl::get_record_payload(), foedus::storage::array::ArrayStoragePimpl::get_record_payload_batch(), foedus::storage::array::ArrayStoragePimpl::get_record_primitive(), foedus::storage::array::ArrayStoragePimpl::get_record_primitive_batch(), foedus::storage::array::ArrayStoragePimpl::increment_record(), foedus::storage::array::ArrayStoragePimpl::increment_record_oneshot(), foedus::storage::hash::HashStoragePimpl::locate_bin(), foedus::storage::masstree::MasstreeStoragePimpl::locate_record(), foedus::storage::hash::HashStoragePimpl::locate_record(), foedus::storage::masstree::MasstreeStoragePimpl::locate_record_normalized(), foedus::storage::sequential::SequentialStorageControlBlock::optimistic_read_truncate_epoch(), foedus::storage::array::ArrayStoragePimpl::overwrite_record(), foedus::storage::array::ArrayStoragePimpl::overwrite_record_primitive(), foedus::xct::XctManagerPimpl::precommit_xct(), foedus::xct::XctManagerPimpl::precommit_xct_apply(), foedus::xct::XctManagerPimpl::precommit_xct_lock(), foedus::xct::XctManagerPimpl::precommit_xct_lock_batch_track_moved(), foedus::xct::XctManagerPimpl::precommit_xct_readwrite(), foedus::xct::XctManagerPimpl::precommit_xct_sort_access(), foedus::xct::XctManagerPimpl::precommit_xct_verify_page_version_set(), foedus::xct::XctManagerPimpl::precommit_xct_verify_pointer_set(), foedus::xct::XctManagerPimpl::precommit_xct_verify_readonly(), 
foedus::xct::XctManagerPimpl::precommit_xct_verify_readwrite(), foedus::storage::masstree::MasstreeStoragePimpl::register_record_write_log(), foedus::storage::hash::HashStoragePimpl::register_record_write_log(), foedus::xct::XctManagerPimpl::release_and_clear_all_current_locks(), foedus::storage::masstree::MasstreeStoragePimpl::reserve_record(), and foedus::storage::masstree::MasstreeStoragePimpl::reserve_record_normalized().

{ return pimpl_->current_xct_; }


Epoch * foedus::thread::Thread::get_in_commit_epoch_address ( )

Currently we don't have sysxct_release_locks() etc.

All locks will be automatically released when the sysxct ends. This is probably enough, as sysxcts should be short-lived.

See also
foedus::xct::InCommitEpochGuard

Definition at line 55 of file thread.cpp.

References foedus::thread::ThreadPimpl::control_block_, and foedus::thread::ThreadControlBlock::in_commit_epoch_.

Referenced by foedus::xct::XctManagerPimpl::precommit_xct_readwrite().

{ return &pimpl_->control_block_->in_commit_epoch_; }


const memory::LocalPageResolver & foedus::thread::Thread::get_local_volatile_page_resolver ( ) const
assorted::UniformRandom& foedus::thread::Thread::get_lock_rnd ( )
inline

Definition at line 368 of file thread.hpp.

Referenced by foedus::xct::RwLockableXctId::hotter().

{ return lock_rnd_; }


xct::McsRwExtendedBlock * foedus::thread::Thread::get_mcs_rw_extended_blocks ( )

Definition at line 836 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::mcs_rw_extended_blocks_.

{
  return pimpl_->mcs_rw_extended_blocks_;
}
xct::McsRwSimpleBlock * foedus::thread::Thread::get_mcs_rw_simple_blocks ( )

Unconditionally takes MCS lock on the given mcs_lock.

MCS locking methods: we previously had the locking algorithm implemented here, but we separated it out to xct_mcs_impl.cpp/hpp. Only delegation (Thread -> ThreadPimpl forwardings) remains here.

Definition at line 833 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::mcs_rw_simple_blocks_.

{
  return pimpl_->mcs_rw_simple_blocks_;
}
memory::NumaNodeMemory * foedus::thread::Thread::get_node_memory ( ) const

Returns the node-shared memory repository of the NUMA node this thread belongs to.

Definition at line 58 of file thread.cpp.

References foedus::thread::ThreadPimpl::core_memory_, and foedus::memory::NumaCoreMemory::get_node_memory().

{
  return pimpl_->core_memory_->get_node_memory();
}


ThreadPimpl* foedus::thread::Thread::get_pimpl ( ) const
inline

Returns the pimpl of this object.

Use it only when you know what you are doing.

Definition at line 366 of file thread.hpp.

{ return pimpl_; }
uint64_t foedus::thread::Thread::get_snapshot_cache_hits ( ) const

[statistics] count of cache hits in snapshot caches

Definition at line 62 of file thread.cpp.

References foedus::thread::ThreadPimpl::control_block_, and foedus::thread::ThreadControlBlock::stat_snapshot_cache_hits_.

{
  return pimpl_->control_block_->stat_snapshot_cache_hits_;
}
uint64_t foedus::thread::Thread::get_snapshot_cache_misses ( ) const

[statistics] count of cache misses in snapshot caches

Definition at line 66 of file thread.cpp.

References foedus::thread::ThreadPimpl::control_block_, and foedus::thread::ThreadControlBlock::stat_snapshot_cache_misses_.

{
  return pimpl_->control_block_->stat_snapshot_cache_misses_;
}
ThreadGlobalOrdinal foedus::thread::Thread::get_thread_global_ordinal ( ) const

Definition at line 54 of file thread.cpp.

References foedus::thread::ThreadPimpl::global_ordinal_.

Referenced by foedus::thread::operator<<().

{ return pimpl_->global_ordinal_; }


log::ThreadLogBuffer & foedus::thread::Thread::get_thread_log_buffer ( )

Returns the private log buffer for this thread.

Definition at line 78 of file thread.cpp.

References foedus::thread::ThreadPimpl::log_buffer_.

Referenced by foedus::xct::XctManagerPimpl::abort_xct(), foedus::storage::sequential::SequentialStorage::append_record(), foedus::xct::XctManagerPimpl::begin_xct(), foedus::storage::masstree::MasstreeStoragePimpl::delete_general(), foedus::storage::hash::HashStoragePimpl::delete_record(), foedus::storage::masstree::MasstreeStoragePimpl::increment_general(), foedus::storage::array::ArrayStoragePimpl::increment_record(), foedus::storage::hash::HashStoragePimpl::increment_record(), foedus::storage::array::ArrayStoragePimpl::increment_record_oneshot(), foedus::storage::masstree::MasstreeStoragePimpl::insert_general(), foedus::storage::hash::HashStoragePimpl::insert_record(), foedus::storage::masstree::MasstreeStoragePimpl::overwrite_general(), foedus::storage::array::ArrayStoragePimpl::overwrite_record(), foedus::storage::hash::HashStoragePimpl::overwrite_record(), foedus::storage::array::ArrayStoragePimpl::overwrite_record_primitive(), foedus::xct::XctManagerPimpl::precommit_xct_readonly(), foedus::xct::XctManagerPimpl::precommit_xct_readwrite(), foedus::storage::masstree::MasstreeStoragePimpl::upsert_general(), and foedus::storage::hash::HashStoragePimpl::upsert_record().

78 { return pimpl_->log_buffer_; }

ErrorStack foedus::thread::Thread::initialize ( )
overridevirtual

Acquires resources in this object, usually called right after constructor.

Precondition
is_initialized() == FALSE

If and only if the return value is not an error, is_initialized() will return TRUE. This method is usually not idempotent, but some implementations can choose to be; in that case, the implementation class should document that it is idempotent. This method is responsible for releasing all acquired resources when initialization fails. This method itself is NOT thread-safe; do not call it in a racy situation.

Implements foedus::Initializable.

Definition at line 45 of file thread.cpp.

References CHECK_ERROR, foedus::DefaultInitializable::initialize(), and foedus::kRetOk.

45  {
46  CHECK_ERROR(pimpl_->initialize());
47  return kRetOk;
48 }

ErrorCode foedus::thread::Thread::install_a_volatile_page ( storage::DualPagePointer pointer,
storage::Page **  installed_page 
)

Installs a volatile page to the given dual pointer as a copy of the snapshot page.

Parameters
[in,out] pointer	dual pointer. The volatile pointer will be modified.
[out] installed_page	physical pointer to the installed volatile page. This might point to a page installed by a concurrent thread.
Precondition
pointer->snapshot_pointer_ != 0 (this method is for a page that already has a snapshot)
pointer->volatile_pointer.components.offset == 0 (not mandatory, because concurrent threads might have installed it just now)

This method is called when a dual pointer has only a snapshot pointer (in other words, when it is "clean") to create a volatile version for modification.

Definition at line 107 of file thread.cpp.

References foedus::thread::ThreadPimpl::install_a_volatile_page().

Referenced by foedus::storage::masstree::MasstreeStoragePimpl::prefetch_pages_follow(), and foedus::storage::array::ArrayStoragePimpl::prefetch_pages_recurse().

109  {
110  return pimpl_->install_a_volatile_page(pointer, installed_page);
111 }

bool foedus::thread::Thread::is_initialized ( ) const
overridevirtual

Returns whether the object has been already initialized or not.

Implements foedus::Initializable.

Definition at line 49 of file thread.cpp.

References foedus::DefaultInitializable::is_initialized().

49 { return pimpl_->is_initialized(); }

bool foedus::thread::Thread::is_running_xct ( ) const

Returns if this thread is running an active transaction.

Definition at line 76 of file thread.cpp.

References foedus::thread::ThreadPimpl::current_xct_, and foedus::xct::Xct::is_active().

76 { return pimpl_->current_xct_.is_active(); }

ErrorCode foedus::thread::Thread::read_a_snapshot_page ( storage::SnapshotPagePointer  page_id,
storage::Page buffer 
)

Read a snapshot page using the thread-local file descriptor set.

Attention
this method always reads from the file, so no caching is done. In fact, this method is used from the caching module when a cache miss happens. To utilize the cache, use find_or_read_a_snapshot_page().

Definition at line 84 of file thread.cpp.

References foedus::thread::ThreadPimpl::read_a_snapshot_page().

86  {
87  return pimpl_->read_a_snapshot_page(page_id, buffer);
88 }

ErrorCode foedus::thread::Thread::read_snapshot_pages ( storage::SnapshotPagePointer  page_id_begin,
uint32_t  page_count,
storage::Page buffer 
)

Read contiguous pages in one shot.

Other than that same as read_a_snapshot_page().

Definition at line 89 of file thread.cpp.

References foedus::thread::ThreadPimpl::read_snapshot_pages().

92  {
93  return pimpl_->read_snapshot_pages(page_id_begin, page_count, buffer);
94 }

void foedus::thread::Thread::reset_snapshot_cache_counts ( ) const

storage::Page * foedus::thread::Thread::resolve ( storage::VolatilePagePointer  ptr) const

Shorthand for get_global_volatile_page_resolver.resolve_offset()

Definition at line 129 of file thread.cpp.

References get_global_volatile_page_resolver(), and foedus::memory::GlobalVolatilePageResolver::resolve_offset().

Referenced by foedus::storage::masstree::MasstreeStoragePimpl::find_border_physical(), foedus::storage::hash::HashStoragePimpl::follow_page_bin_head(), resolve_cast(), and foedus::storage::masstree::verify_page_basic().

129  {
130    return get_global_volatile_page_resolver().resolve_offset(ptr.components.numa_node, ptr.components.offset);
131 }

storage::Page * foedus::thread::Thread::resolve ( memory::PagePoolOffset  offset) const

Shorthand for get_local_volatile_page_resolver.resolve_offset()

Definition at line 135 of file thread.cpp.

References get_local_volatile_page_resolver(), and foedus::memory::LocalPageResolver::resolve_offset().

135  {
136    return get_local_volatile_page_resolver().resolve_offset(offset);
137 }

template<typename P >
P* foedus::thread::Thread::resolve_cast ( memory::PagePoolOffset  offset) const
inline

Definition at line 116 of file thread.hpp.

References resolve().

116  {
117  return reinterpret_cast<P*>(resolve(offset));
118  }

storage::Page * foedus::thread::Thread::resolve_newpage ( storage::VolatilePagePointer  ptr) const

Shorthand for get_global_volatile_page_resolver.resolve_offset_newpage()

Definition at line 132 of file thread.cpp.

References get_global_volatile_page_resolver(), and foedus::memory::GlobalVolatilePageResolver::resolve_offset_newpage().

Referenced by resolve_newpage_cast().

132  {
133    return get_global_volatile_page_resolver().resolve_offset_newpage(ptr.components.numa_node, ptr.components.offset);
134 }

storage::Page * foedus::thread::Thread::resolve_newpage ( memory::PagePoolOffset  offset) const

Shorthand for get_local_volatile_page_resolver.resolve_offset_newpage()

Definition at line 138 of file thread.cpp.

References get_local_volatile_page_resolver(), and foedus::memory::LocalPageResolver::resolve_offset_newpage().

138  {
139    return get_local_volatile_page_resolver().resolve_offset_newpage(offset);
140 }

template<typename P >
P* foedus::thread::Thread::resolve_newpage_cast ( storage::VolatilePagePointer  ptr) const
inline

Definition at line 113 of file thread.hpp.

References resolve_newpage().

Referenced by foedus::storage::hash::ReserveRecords::create_new_tail_page().

113  {
114  return reinterpret_cast<P*>(resolve_newpage(ptr));
115  }

template<typename P >
P* foedus::thread::Thread::resolve_newpage_cast ( memory::PagePoolOffset  offset) const
inline

Definition at line 119 of file thread.hpp.

References resolve_newpage().

119  {
120  return reinterpret_cast<P*>(resolve_newpage(offset));
121  }

ErrorCode foedus::thread::Thread::sysxct_batch_page_locks ( xct::SysxctWorkspace sysxct_workspace,
uint32_t  lock_count,
storage::Page **  pages 
)

Takes a bunch of page locks for a sysxct running under this thread.

Precondition
sysxct_workspace->running_sysxct_

Definition at line 971 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::sysxct_batch_page_locks().

Referenced by foedus::storage::masstree::Adopt::run(), and foedus::storage::masstree::SplitIntermediate::run().

974  {
975  return pimpl_->sysxct_batch_page_locks(sysxct_workspace, lock_count, pages);
976 }

ErrorCode foedus::thread::Thread::sysxct_batch_record_locks ( xct::SysxctWorkspace sysxct_workspace,
storage::VolatilePagePointer  page_id,
uint32_t  lock_count,
xct::RwLockableXctId **  locks 
)

Takes a bunch of locks in the same page for a sysxct running under this thread.

Precondition
sysxct_workspace->running_sysxct_
the record locks must be within the page of the given ID

Definition at line 961 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::sysxct_batch_record_locks().

Referenced by foedus::storage::masstree::SplitBorder::lock_existing_records().

965  {
966  return pimpl_->sysxct_batch_record_locks(sysxct_workspace, page_id, lock_count, locks);
967 }

ErrorCode foedus::thread::Thread::sysxct_page_lock ( xct::SysxctWorkspace sysxct_workspace,
storage::Page page 
)

Takes a page lock in the same page for a sysxct running under this thread.

Precondition
sysxct_workspace->running_sysxct_

Definition at line 968 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::sysxct_page_lock().

Referenced by foedus::storage::hash::ReserveRecords::find_and_lock_spacious_tail(), foedus::storage::hash::ReserveRecords::find_or_create_or_expand(), foedus::storage::masstree::GrowFirstLayerRoot::run(), foedus::storage::masstree::GrowNonFirstLayerRoot::run(), foedus::storage::masstree::SplitBorder::run(), foedus::storage::masstree::ReserveRecords::run(), and foedus::storage::masstree::SplitIntermediate::run().

968  {
969  return pimpl_->sysxct_page_lock(sysxct_workspace, page);
970 }

ErrorCode foedus::thread::Thread::sysxct_record_lock ( xct::SysxctWorkspace sysxct_workspace,
storage::VolatilePagePointer  page_id,
xct::RwLockableXctId lock 
)

Takes a lock for a sysxct running under this thread.

Precondition
sysxct_workspace->running_sysxct_
the record lock must be within the page of the given ID

Definition at line 955 of file thread_pimpl.cpp.

References foedus::thread::ThreadPimpl::sysxct_record_lock().

Referenced by foedus::storage::hash::ReserveRecords::expand_record(), foedus::storage::masstree::GrowNonFirstLayerRoot::run(), and foedus::storage::masstree::ReserveRecords::run().

958  {
959  return pimpl_->sysxct_record_lock(sysxct_workspace, page_id, lock);
960 }

ErrorStack foedus::thread::Thread::uninitialize ( )
overridevirtual

An idempotent method to release all resources of this object, if any.

After this method, is_initialized() will return FALSE. Whether this method encounters an error or not, the implementation should make its best effort to release as many resources as possible; in other words, do not leak all the other resources because of one issue. This method itself is NOT thread-safe; do not call it in a racy situation.

Attention
This method is NOT automatically called from the destructor. This is due to the fundamental limitation in C++. Explicitly call this method as soon as you are done, checking the returned value. You can also use UninitializeGuard to ameliorate the issue, but it's not perfect.
Returns
The error this method encounters, if any. If there are multiple errors during uninitialization, the implementation should use ErrorStackBatch to produce a batched ErrorStack object.

Implements foedus::Initializable.

Definition at line 50 of file thread.cpp.

References foedus::DefaultInitializable::uninitialize().

50 { return pimpl_->uninitialize(); }

Friends And Related Function Documentation

std::ostream& operator<< ( std::ostream &  o,
const Thread v 
)
friend

Definition at line 118 of file thread.cpp.

118  {
119  o << "Thread-" << v.get_thread_global_ordinal() << "(id=" << v.get_thread_id() << ") [";
120  o << "status=" << (v.pimpl_->control_block_->status_);
121  o << "]";
122  return o;
123 }

The documentation for this class was generated from the following files: