libfoedus-core
FOEDUS Core Library
Assorted Methods/Classes

Assorted Methods/Classes that are too subtle to have their own packages. More...

Detailed Description

Assorted Methods/Classes that are too subtle to have their own packages.

Do NOT use this package to hold hundreds of classes/methods. That's a class-design failure. This package should contain really only a few methods and classes, each of which should be extremely simple and unrelated to the others. Otherwise, make a package for them. Learn from the stupid history of java.util.

Files

file  atomic_fences.hpp
 Atomic fence methods and load/store with fences that work for both C++11/non-C++11 code.
 
file  cacheline.hpp
 Constants and methods related to CPU cacheline and its prefetching.
 
file  endianness.hpp
 A few macros and helper methods related to byte endian-ness.
 
file  raw_atomics.hpp
 Raw atomic operations that work for both C++11 and non-C++11 code.
 

Classes

struct  foedus::assorted::Hex
 Convenient way of writing hex integers to stream. More...
 
struct  foedus::assorted::HexString
 Equivalent to std::hex in case the stream doesn't support it. More...
 
struct  foedus::assorted::Top
 Write only the first few bytes to stream. More...
 
struct  foedus::assorted::ConstDiv
 The pre-calculated p-m pair for optimized integer division by constant. More...
 
class  foedus::assorted::DumbSpinlock
 A simple spinlock using a boolean field. More...
 
class  foedus::assorted::FixedString< MAXLEN, CHAR >
 An embedded string object of fixed max-length, which uses no external memory. More...
 
struct  foedus::assorted::ProbCounter
 Implements a probabilistic counter [Morris 1978]. More...
 
struct  foedus::assorted::ProtectedBoundary
 A 4kb dummy data placed between separate memory regions so that we can check if/where a bogus memory access happens. More...
 
class  foedus::assorted::UniformRandom
 A very simple and deterministic random generator that is more aligned with standard benchmarks such as TPC-C. More...
 
class  foedus::assorted::ZipfianRandom
 A simple zipfian generator based off of YCSB's Java implementation. More...
 

Macros

#define SPINLOCK_WHILE(x)   for (foedus::assorted::SpinlockStat __spins; (x); __spins.yield_backoff())
 A macro to busy-wait (spinlock) with occasional pause. More...
 
#define INSTANTIATE_ALL_TYPES(M)
 A macro to explicitly instantiate the given template for all types we care. More...
 
#define INSTANTIATE_ALL_NUMERIC_TYPES(M)
 INSTANTIATE_ALL_TYPES minus std::string. More...
 
#define INSTANTIATE_ALL_INTEGER_PLUS_BOOL_TYPES(M)
 INSTANTIATE_ALL_TYPES minus std::string/float/double. More...
 
#define INSTANTIATE_ALL_INTEGER_TYPES(M)
 INSTANTIATE_ALL_NUMERIC_TYPES minus bool/double/float. More...
 

Functions

template<typename T , uint64_t ALIGNMENT>
T foedus::assorted::align (T value)
 Returns the smallest multiple of ALIGNMENT that is equal to or larger than the given number. More...
 
template<typename T >
T foedus::assorted::align8 (T value)
 8-alignment. More...
 
template<typename T >
T foedus::assorted::align16 (T value)
 16-alignment. More...
 
template<typename T >
T foedus::assorted::align64 (T value)
 64-alignment. More...
 
int64_t foedus::assorted::int_div_ceil (int64_t dividee, int64_t dividor)
 Efficient ceil(dividee/dividor) for integers. More...
 
std::string foedus::assorted::replace_all (const std::string &target, const std::string &search, const std::string &replacement)
 target.replaceAll(search, replacement). More...
 
std::string foedus::assorted::replace_all (const std::string &target, const std::string &search, int replacement)
 target.replaceAll(search, String.valueOf(replacement)). More...
 
std::string foedus::assorted::os_error ()
 Thread-safe strerror(errno). More...
 
std::string foedus::assorted::os_error (int error_number)
 This version receives errno. More...
 
std::string foedus::assorted::get_current_executable_path ()
 Returns the full path of current executable. More...
 
void foedus::assorted::spinlock_yield ()
 Invoke _mm_pause(), the x86 PAUSE instruction, or an equivalent in the current environment. More...
 
template<uint64_t SIZE1, uint64_t SIZE2>
int foedus::assorted::static_size_check ()
 Alternative for static_assert(sizeof(foo) == sizeof(bar), "oh crap") to display sizeof(foo). More...
 
std::string foedus::assorted::demangle_type_name (const char *mangled_name)
 Demangle the given C++ type name if possible (otherwise the original string). More...
 
template<typename T >
std::string foedus::assorted::get_pretty_type_name ()
 Returns the name of the C++ type as readable as possible. More...
 
uint64_t foedus::assorted::generate_almost_prime_below (uint64_t threshold)
 Generate a prime, or a number that is almost prime, below the given threshold. More...
 
void foedus::assorted::memory_fence_acquire ()
 Equivalent to std::atomic_thread_fence(std::memory_order_acquire). More...
 
void foedus::assorted::memory_fence_release ()
 Equivalent to std::atomic_thread_fence(std::memory_order_release). More...
 
void foedus::assorted::memory_fence_acq_rel ()
 Equivalent to std::atomic_thread_fence(std::memory_order_acq_rel). More...
 
void foedus::assorted::memory_fence_consume ()
 Equivalent to std::atomic_thread_fence(std::memory_order_consume). More...
 
void foedus::assorted::memory_fence_seq_cst ()
 Equivalent to std::atomic_thread_fence(std::memory_order_seq_cst). More...
 
template<typename T >
T foedus::assorted::atomic_load_seq_cst (const T *target)
 Atomic load with a seq_cst barrier for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T foedus::assorted::atomic_load_acquire (const T *target)
 Atomic load with an acquire barrier for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T foedus::assorted::atomic_load_consume (const T *target)
 Atomic load with a consume barrier for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
void foedus::assorted::atomic_store_seq_cst (T *target, T value)
 Atomic store with a seq_cst barrier for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
void foedus::assorted::atomic_store_release (T *target, T value)
 Atomic store with a release barrier for raw primitive types rather than std::atomic<T>. More...
 
void foedus::assorted::prefetch_cacheline (const void *address)
 Prefetch one cacheline to L1 cache. More...
 
void foedus::assorted::prefetch_cachelines (const void *address, int cacheline_count)
 Prefetch multiple contiguous cachelines to L1 cache. More...
 
void foedus::assorted::prefetch_l2 (const void *address, int cacheline_count)
 Prefetch multiple contiguous cachelines to L2/L3 cache. More...
 
template<typename T >
T foedus::assorted::read_bigendian (const void *be_bytes)
 Convert a big-endian byte array to a native integer. More...
 
template<typename T >
void foedus::assorted::write_bigendian (T host_value, void *be_bytes)
 Convert a native integer to big-endian bytes and write them to the given address. More...
 
int foedus::assorted::mod_numa_node (int numa_node)
 In order to run even on a non-NUMA machine or a machine with fewer sockets, we allow specifying an arbitrary numa_node. More...
 
template<typename T >
bool foedus::assorted::raw_atomic_compare_exchange_strong (T *target, T *expected, T desired)
 Atomic CAS. More...
 
template<typename T >
bool foedus::assorted::raw_atomic_compare_exchange_weak (T *target, T *expected, T desired)
 Weak version of raw_atomic_compare_exchange_strong(). More...
 
bool foedus::assorted::raw_atomic_compare_exchange_strong_uint128 (uint64_t *ptr, const uint64_t *old_value, const uint64_t *new_value)
 Atomic 128-bit CAS, which is not in the standard yet. More...
 
bool foedus::assorted::raw_atomic_compare_exchange_weak_uint128 (uint64_t *ptr, const uint64_t *old_value, const uint64_t *new_value)
 Weak version of raw_atomic_compare_exchange_strong_uint128(). More...
 
template<typename T >
T foedus::assorted::raw_atomic_exchange (T *target, T desired)
 Atomic Swap for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T foedus::assorted::raw_atomic_fetch_add (T *target, T addendum)
 Atomic fetch-add for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T foedus::assorted::raw_atomic_fetch_and_bitwise_and (T *target, T operand)
 Atomic fetch-bitwise-and for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T foedus::assorted::raw_atomic_fetch_and_bitwise_or (T *target, T operand)
 Atomic fetch-bitwise-or for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T foedus::assorted::raw_atomic_fetch_and_bitwise_xor (T *target, T operand)
 Atomic fetch-bitwise-xor for raw primitive types rather than std::atomic<T>. More...
 

Variables

const uint16_t foedus::assorted::kCachelineSize = 64
 Byte count of one cache line. More...
 
const bool foedus::assorted::kIsLittleEndian = false
 A handy const boolean to tell if it's little endian. More...
 

Macro Definition Documentation

#define INSTANTIATE_ALL_INTEGER_PLUS_BOOL_TYPES (   M)
Value:
INSTANTIATE_ALL_INTEGER_TYPES(M); \
M(bool);

INSTANTIATE_ALL_TYPES minus std::string/float/double.

Definition at line 317 of file assorted_func.hpp.

#define INSTANTIATE_ALL_INTEGER_TYPES (   M)
Value:
M(int64_t); \
M(int32_t); M(int16_t); M(int8_t); M(uint64_t); \
M(uint32_t); M(uint16_t); M(uint8_t);

INSTANTIATE_ALL_NUMERIC_TYPES minus bool/double/float.

Definition at line 313 of file assorted_func.hpp.

#define INSTANTIATE_ALL_NUMERIC_TYPES (   M)
Value:
INSTANTIATE_ALL_INTEGER_TYPES(M); \
M(bool); M(float); M(double);

INSTANTIATE_ALL_TYPES minus std::string.

Definition at line 320 of file assorted_func.hpp.

#define INSTANTIATE_ALL_TYPES (   M)
Value:
INSTANTIATE_ALL_NUMERIC_TYPES(M); \
M(std::string);

A macro to explicitly instantiate the given template for all types we care.

M is the macro to explicitly instantiate a template for the given type. This macro explicitly instantiates the template for bool, float, double, all integers (signed/unsigned), and std::string. This is useful when the definition of the template class/method involves too many details and you would rather give only its declaration in the header.

Use this as follows. In the header file:

template <typename T> void cool_func(T arg);

Then, in the cpp file:

template <typename T> void cool_func(T arg) {
... (implementation code)
}
#define EXPLICIT_INSTANTIATION_COOL_FUNC(x) template void cool_func< x > (x arg);
INSTANTIATE_ALL_TYPES(EXPLICIT_INSTANTIATION_COOL_FUNC);

Remember, you should invoke this macro in the cpp file, not the header; otherwise you will get multiple-definition errors.

Note
Doxygen doesn't understand template explicit instantiation and gives warnings. Not a big issue, but you can silence the warnings by surrounding the instantiation with cond/endcond. See externalizable.cpp for an example.

Definition at line 323 of file assorted_func.hpp.

Function Documentation

template<typename T , uint64_t ALIGNMENT>
T foedus::assorted::align (T value)
inline

Returns the smallest multiple of ALIGNMENT that is equal to or larger than the given number.

Template Parameters
T: integer type
ALIGNMENT: alignment size. Must be a power of two.

In other words, rounds up. For example, with 8-alignment, 7 becomes 8, 8 stays 8, and 9 becomes 16.

See also
https://en.wikipedia.org/wiki/Data_structure_alignment
Hacker's Delight 2nd Ed. Chap 3-1.

Definition at line 44 of file assorted_func.hpp.

References ASSERT_ND.

44  {
45  uint64_t left = (value + ALIGNMENT - 1);
46  uint64_t right = -ALIGNMENT;
47  uint64_t result = left & right;
48  ASSERT_ND(result >= static_cast<uint64_t>(value));
49  ASSERT_ND(result % ALIGNMENT == 0);
50  ASSERT_ND(result < static_cast<uint64_t>(value) + ALIGNMENT);
51  return static_cast<T>(result);
52 }
#define ASSERT_ND(x)
A warning-free wrapper macro of assert() that has no performance effect in release mode even when 'x'...
Definition: assert_nd.hpp:72
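
For illustration, a minimal usage sketch (assuming the usual foedus/assorted include layout; the values are arbitrary):

#include <cstdint>
#include <iostream>

#include "foedus/assorted/assorted_func.hpp"  // assumed include path

int main() {
  // Round a 13-byte payload up to an 8-byte boundary: 13 -> 16.
  uint16_t aligned = foedus::assorted::align8<uint16_t>(13);
  // Round an arbitrary size up to a 4096-byte page boundary: 10000 -> 12288.
  uint64_t page_bytes = foedus::assorted::align<uint64_t, 4096>(10000ULL);
  std::cout << aligned << " " << page_bytes << std::endl;
  return 0;
}
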
template<typename T >
T foedus::assorted::align16 (T value)
inline

16-alignment.

Definition at line 64 of file assorted_func.hpp.

64 { return align<T, 16>(value); }
template<typename T >
T foedus::assorted::align64 (T value)
inline

64-alignment.

Definition at line 70 of file assorted_func.hpp.

70 { return align<T, 64>(value); }
template<typename T >
T foedus::assorted::align8 (T value)
inline

8-alignment.

Definition at line 58 of file assorted_func.hpp.

Referenced by foedus::storage::masstree::adjust_payload_hint(), foedus::storage::hash::adjust_payload_hint(), foedus::storage::sequential::SequentialPage::append_record_nosync(), foedus::storage::hash::ReserveRecords::append_record_to_page(), foedus::storage::hash::HashInsertLogType::apply_record(), foedus::storage::masstree::MasstreeInsertLogType::apply_record(), foedus::storage::hash::HashUpdateLogType::apply_record(), foedus::storage::masstree::MasstreeUpdateLogType::apply_record(), foedus::storage::masstree::MasstreeCommonLogType::apply_record_prepare(), foedus::storage::masstree::MasstreeBorderPage::assert_entries_impl(), foedus::storage::hash::HashCommonLogType::assert_record_and_log_keys(), foedus::storage::array::calculate_levels(), foedus::storage::hash::HashCommonLogType::calculate_log_length(), foedus::storage::masstree::MasstreeCommonLogType::calculate_log_length(), foedus::storage::sequential::SequentialAppendLogType::calculate_log_length(), foedus::storage::array::ArrayOverwriteLogType::calculate_log_length(), foedus::storage::array::ArrayStoragePimpl::calculate_offset_intervals(), foedus::storage::array::ArrayStoragePimpl::calculate_required_pages(), foedus::storage::masstree::calculate_suffix_length_aligned(), foedus::storage::sequential::SequentialPage::can_insert_record(), foedus::storage::hash::HashDataPage::create_record_in_snapshot(), foedus::storage::masstree::MasstreeCommonLogType::equal_record_and_log_suffixes(), foedus::storage::masstree::MasstreeStorage::estimate_records_per_page(), foedus::storage::masstree::fill_payload_padded(), foedus::storage::hash::RecordLocation::get_aligned_key_length(), foedus::storage::hash::HashDataPage::Slot::get_aligned_key_length(), foedus::storage::sequential::SequentialPage::get_all_records_nosync(), foedus::storage::hash::HashCommonLogType::get_key_length_aligned(), foedus::storage::masstree::MasstreeCommonLogType::get_key_length_aligned(), foedus::storage::array::ArrayPage::get_leaf_record(), foedus::storage::array::ArrayPage::get_leaf_record_count(), foedus::storage::sequential::SequentialPage::get_record_offset(), foedus::storage::masstree::MasstreeBorderPage::initialize_layer_root(), foedus::storage::masstree::SplitBorder::migrate_records(), foedus::storage::sequential::SequentialRecordIterator::next(), foedus::storage::hash::HashCommonLogType::populate_base(), foedus::storage::masstree::MasstreeCommonLogType::populate_base(), foedus::storage::hash::HashDataPage::required_space(), foedus::storage::hash::HashDataPage::reserve_record(), foedus::storage::masstree::MasstreeBorderPage::reserve_record_space(), foedus::storage::hash::HashTmpBin::Record::set_all(), foedus::storage::hash::HashTmpBin::Record::set_payload(), foedus::storage::masstree::MasstreeBorderPage::to_record_length(), and foedus::storage::array::to_records_in_leaf().

58 { return align<T, 8>(value); }

template<typename T >
T foedus::assorted::atomic_load_acquire ( const T *  target)
inline

Atomic load with an acquire barrier for raw primitive types rather than std::atomic<T>.

Template Parameters
T: integer type
Returns
result of load

Definition at line 114 of file atomic_fences.hpp.

114  {
115  return ::__atomic_load_n(target, __ATOMIC_ACQUIRE);
116 }
template<typename T >
T foedus::assorted::atomic_load_consume ( const T *  target)
inline

Atomic load with a consume barrier for raw primitive types rather than std::atomic<T>.

Template Parameters
T: integer type
Returns
result of load

Definition at line 125 of file atomic_fences.hpp.

125  {
126  return ::__atomic_load_n(target, __ATOMIC_CONSUME);
127 }
template<typename T >
T foedus::assorted::atomic_load_seq_cst ( const T *  target)
inline

Atomic load with a seq_cst barrier for raw primitive types rather than std::atomic<T>.

Template Parameters
T: integer type
Returns
result of load

Definition at line 103 of file atomic_fences.hpp.

103  {
104  return ::__atomic_load_n(target, __ATOMIC_SEQ_CST);
105 }
template<typename T >
void foedus::assorted::atomic_store_release ( T *  target,
T value 
)
inline

Atomic store with a release barrier for raw primitive types rather than std::atomic<T>.

Template Parameters
T: integer type

Definition at line 145 of file atomic_fences.hpp.

145  {
146  ::__atomic_store_n(target, value, __ATOMIC_RELEASE);
147 }
template<typename T >
void foedus::assorted::atomic_store_seq_cst ( T *  target,
T value 
)
inline

Atomic store with a seq_cst barrier for raw primitive types rather than std::atomic<T>.

Template Parameters
T: integer type

Definition at line 135 of file atomic_fences.hpp.

135  {
136  ::__atomic_store_n(target, value, __ATOMIC_SEQ_CST);
137 }
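
Together, atomic_store_release() and atomic_load_acquire() form the usual message-passing idiom on raw fields. A minimal sketch (the two global fields are hypothetical):

#include <cstdint>

#include "foedus/assorted/atomic_fences.hpp"  // assumed include path

uint64_t g_payload = 0;  // plain fields, not std::atomic<T>
uint32_t g_ready = 0;

void producer() {
  g_payload = 42;  // ordinary store
  // Release store: the payload write becomes visible before any
  // thread can observe g_ready == 1.
  foedus::assorted::atomic_store_release<uint32_t>(&g_ready, 1);
}

void consumer() {
  // Acquire load: once we observe 1, the payload write is visible too.
  while (foedus::assorted::atomic_load_acquire<uint32_t>(&g_ready) == 0) {
    continue;  // a real caller would back off; see spinlock_yield()
  }
  // g_payload is guaranteed to be 42 here.
}
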
std::string foedus::assorted::demangle_type_name ( const char *  mangled_name)

Demangle the given C++ type name if possible (otherwise the original string).

Definition at line 151 of file assorted_func.cpp.

References foedus::fs::status().

Referenced by foedus::assorted::get_pretty_type_name(), foedus::assorted::BacktraceContext::get_results(), and foedus::assorted::BacktraceContext::GlibcBacktraceInfo::parse_symbol().

151  {
152 #ifdef __GNUC__
153  int status;
154  char* demangled = abi::__cxa_demangle(mangled_name, nullptr, nullptr, &status);
155  if (demangled) {
156  std::string ret(demangled);
157  ::free(demangled);
158  return ret;
159  }
160 #endif // __GNUC__
161  return mangled_name;
162 }
FileStatus status(const Path &p)
Returns the status of the file.
Definition: filesystem.cpp:45

uint64_t foedus::assorted::generate_almost_prime_below ( uint64_t  threshold)

Generate a prime, or a number that is almost prime, below the given threshold.

Parameters
[in] threshold: the returned number is less than this threshold

In a few places, we need a number that is a prime, or at least not divisible by many numbers; hashing is one example. It doesn't have to be a real prime. Instead, we want to calculate such a number cheaply. This method uses a prime-generating polynomial to produce a number that looks like a prime.

See also
http://mathworld.wolfram.com/Prime-GeneratingPolynomial.html

Definition at line 164 of file assorted_func.cpp.

Referenced by foedus::cache::determine_logical_buckets().

164  {
165  if (threshold <= 2) {
166  return 1; // almost an invalid input...
167  } else if (threshold < 5000) {
168  // for a small number, we just use a (very) sparse prime list
169  uint16_t small_primes[] = {3677, 2347, 1361, 773, 449, 263, 151, 89, 41, 17, 2};
170  for (int i = 0;; ++i) {
171  if (threshold > small_primes[i]) {
172  return small_primes[i];
173  }
174  }
175  } else {
176  // the following formula is monotonically increasing for i>=22 (which gives 3923).
177  uint64_t prev = 3677;
178  for (uint64_t i = 22;; ++i) {
179  uint64_t cur = (i * i * i * i * i - 133 * i * i * i * i + 6729 * i * i * i
180  - 158379 * i * i + 1720294 * i - 6823316) >> 2;
181  if (cur >= threshold) {
182  return prev;
183  } else if (cur <= prev) {
184  // sanity checking.
185  return prev;
186  } else {
187  prev = cur;
188  }
189  }
190  }
191 }
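
For illustration, a typical use is choosing a hash-table bucket count that shares few factors with common access strides (a minimal sketch; the budget value is arbitrary):

#include <cstdint>
#include <iostream>

#include "foedus/assorted/assorted_func.hpp"  // assumed include path

int main() {
  uint64_t budget = 1ULL << 20;  // we can afford up to about 1M buckets
  uint64_t buckets = foedus::assorted::generate_almost_prime_below(budget);
  std::cout << "bucket count: " << buckets << std::endl;  // some almost-prime below 2^20
  return 0;
}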

std::string foedus::assorted::get_current_executable_path ( )

Returns the full path of current executable.

This relies on Linux's /proc/self/exe. Not sure how to port it to Windows...

Definition at line 81 of file assorted_func.cpp.

References foedus::assorted::os_error().

Referenced by foedus::soc::SocOptions::convert_spawn_executable_pattern().

81  {
82  char buf[1024];
83  ssize_t len = ::readlink("/proc/self/exe", buf, sizeof(buf));
84  if (len == -1) {
85  std::cerr << "Failed to get the path of current executable. error=" << os_error() << std::endl;
86  return "";
87  }
88  return std::string(buf, len);
89 }
std::string os_error()
Thread-safe strerror(errno).

template<typename T >
std::string foedus::assorted::get_pretty_type_name ( )

Returns the name of the C++ type as readable as possible.

Template Parameters
T: the type

Definition at line 215 of file assorted_func.hpp.

References foedus::assorted::demangle_type_name().

Referenced by foedus::externalize::Externalizable::add_element().

215  {
216  return demangle_type_name(typeid(T).name());
217 }
std::string demangle_type_name(const char *mangled_name)
Demangle the given C++ type name if possible (otherwise the original string).
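
A minimal usage sketch (assuming the usual foedus/assorted include layout):

#include <iostream>
#include <map>
#include <string>

#include "foedus/assorted/assorted_func.hpp"  // assumed include path

int main() {
  // On GCC this prints a demangled name (default template arguments and all)
  // instead of the raw mangled typeid string.
  std::cout << foedus::assorted::get_pretty_type_name< std::map<int, std::string> >()
            << std::endl;
  return 0;
}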

int64_t foedus::assorted::int_div_ceil ( int64_t  dividee,
int64_t  dividor 
)

Efficient ceil(dividee/dividor) for integers.

Definition at line 40 of file assorted_func.cpp.

Referenced by foedus::memory::PagePoolPimpl::attach(), foedus::storage::array::calculate_levels(), foedus::storage::array::ArrayStoragePimpl::calculate_required_pages(), foedus::storage::array::ArrayComposer::construct_root(), foedus::storage::hash::HashStorageControlBlock::get_root_children(), and foedus::EngineOptions::prescreen().

40  {
41  std::ldiv_t result = std::div(dividee, dividor);
42  return result.rem != 0 ? (result.quot + 1) : result.quot;
43 }
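
For example (values arbitrary):

#include <cstdint>
#include <iostream>

#include "foedus/assorted/assorted_func.hpp"  // assumed include path

int main() {
  // How many 4096-byte pages does a 10000-byte region need? ceil(10000/4096) = 3.
  std::cout << foedus::assorted::int_div_ceil(10000, 4096) << std::endl;  // 3
  std::cout << foedus::assorted::int_div_ceil(8192, 4096) << std::endl;   // 2
  return 0;
}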

void foedus::assorted::memory_fence_acq_rel ( )
inline
void foedus::assorted::memory_fence_acquire ( )
inline

Equivalent to std::atomic_thread_fence(std::memory_order_acquire).

A load operation with this memory order performs the acquire operation on the affected memory location: prior writes made to other memory locations by the thread that did the release become visible in this thread.

Definition at line 46 of file atomic_fences.hpp.

Referenced by foedus::log::MetaLogBuffer::commit(), foedus::storage::masstree::MasstreeStoragePimpl::find_border_physical(), foedus::proc::ProcManagerPimpl::find_by_name(), foedus::storage::hash::ReserveRecords::find_or_create_or_expand(), foedus::thread::ThreadRef::get_in_commit_epoch(), foedus::log::ThreadLogBuffer::get_logs_to_write(), foedus::thread::ThreadGroupRef::get_min_in_commit_epoch(), foedus::thread::ThreadPimplMcsAdaptor< RW_BLOCK >::get_rw_other_async_block(), foedus::cache::CacheManagerPimpl::handle_cleaner(), foedus::xct::XctManagerPimpl::handle_epoch_chime(), foedus::snapshot::SnapshotManagerPimpl::handle_snapshot(), foedus::soc::SharedRendezvous::is_signaled(), foedus::thread::ThreadPimpl::is_stop_requested(), foedus::storage::hash::HashStoragePimpl::locate_record(), foedus::xct::Xct::on_record_read(), foedus::storage::sequential::SequentialStorageControlBlock::optimistic_read_truncate_epoch(), foedus::storage::masstree::MasstreeStoragePimpl::peek_volatile_page_boundaries_next_layer(), foedus::xct::XctManagerPimpl::precommit_xct_readonly(), foedus::log::LogManagerPimpl::refresh_global_durable_epoch(), foedus::storage::masstree::MasstreeStoragePimpl::reserve_record(), foedus::storage::masstree::MasstreeStoragePimpl::reserve_record_normalized(), foedus::storage::hash::HashStoragePimpl::track_moved_record_search(), foedus::thread::ThreadRef::try_impersonate(), foedus::soc::SharedCond::uninitialize(), foedus::log::ThreadLogBuffer::wait_for_space(), foedus::log::LogManagerPimpl::wait_until_durable(), and foedus::log::LoggerRef::wakeup_for_durable_epoch().

46  {
47  ::__atomic_thread_fence(__ATOMIC_ACQUIRE);
48 }

void foedus::assorted::memory_fence_consume ( )
inline

Equivalent to std::atomic_thread_fence(std::memory_order_consume).

A load operation with this memory order performs a consume operation on the affected memory location: prior writes to data-dependent memory locations made by the thread that did a release operation become visible to this thread.

Definition at line 81 of file atomic_fences.hpp.

Referenced by foedus::storage::array::ArrayStoragePimpl::get_record_for_write_batch(), foedus::storage::array::ArrayStoragePimpl::get_record_payload_batch(), foedus::storage::array::ArrayStoragePimpl::get_record_primitive_batch(), foedus::storage::masstree::MasstreeStoragePimpl::locate_record(), foedus::storage::hash::RecordLocation::populate_logical(), and foedus::storage::hash::HashStoragePimpl::track_moved_record_search().

81  {
82  ::__atomic_thread_fence(__ATOMIC_CONSUME);
83 }

void foedus::assorted::memory_fence_release ( )
inline

Equivalent to std::atomic_thread_fence(std::memory_order_release).

A store operation with this memory order performs the release operation: prior writes to other memory locations become visible to the threads that do a consume or an acquire on the same location.

Definition at line 58 of file atomic_fences.hpp.

Referenced by foedus::storage::masstree::Adopt::adopt_case_b(), foedus::log::LogManagerPimpl::announce_new_durable_global_epoch(), foedus::storage::sequential::SequentialPage::append_record_nosync(), foedus::storage::hash::ReserveRecords::append_record_to_page(), foedus::storage::hash::ReserveRecords::create_new_record_in_tail_page(), foedus::storage::hash::ReserveRecords::expand_record(), foedus::storage::hash::ReserveRecords::find_and_lock_spacious_tail(), foedus::storage::masstree::grow_case_b_common(), foedus::xct::XctManagerPimpl::handle_epoch_chime(), foedus::snapshot::SnapshotManagerPimpl::handle_snapshot_triggered(), foedus::xct::McsRwSimpleBlock::init_common(), foedus::storage::masstree::MasstreeBorderPage::initialize_as_layer_root_physical(), foedus::savepoint::SavepointManagerPimpl::initialize_once(), foedus::cache::CacheHashtable::install(), foedus::storage::masstree::SplitIntermediate::migrate_pointers(), foedus::xct::XctManagerPimpl::precommit_xct_apply(), foedus::xct::XctManagerPimpl::precommit_xct_readwrite(), foedus::log::LogManagerPimpl::refresh_global_durable_epoch(), foedus::storage::hash::HashDataPage::reserve_record(), foedus::xct::McsRwLock::reset(), foedus::storage::masstree::SplitBorder::run(), foedus::storage::masstree::ReserveRecords::run(), foedus::savepoint::SavepointManagerPimpl::savepoint_main(), foedus::storage::masstree::SplitIntermediate::split_impl_no_error(), foedus::storage::sequential::SequentialStoragePimpl::truncate(), foedus::storage::masstree::MasstreeBorderPage::try_expand_record_in_page_physical(), foedus::log::ThreadLogBuffer::wait_for_space(), foedus::snapshot::SnapshotManagerControlBlock::wakeup_snapshot_children(), and foedus::xct::InCommitEpochGuard::~InCommitEpochGuard().

58  {
59  ::__atomic_thread_fence(__ATOMIC_RELEASE);
60 }
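
A sketch of the canonical publish pattern these fences enable (field names hypothetical; a strictly conforming C++11 program would make the flag accesses atomic, e.g. via atomic_store_seq_cst()/atomic_load_acquire()):

#include <cstdint>

#include "foedus/assorted/atomic_fences.hpp"  // assumed include path

uint64_t g_data = 0;
uint32_t g_published = 0;

void publish() {
  g_data = 0xBEEF;
  foedus::assorted::memory_fence_release();  // g_data visible before the flag
  g_published = 1;
}

bool try_consume(uint64_t* out) {
  if (g_published == 0) {
    return false;
  }
  foedus::assorted::memory_fence_acquire();  // pairs with the release above
  *out = g_data;  // observes 0xBEEF
  return true;
}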

void foedus::assorted::memory_fence_seq_cst ( )
inline

Equivalent to std::atomic_thread_fence(std::memory_order_seq_cst).

Same as memory_order_acq_rel, plus a single total order exists in which all threads observe all modifications in the same order.

Definition at line 92 of file atomic_fences.hpp.

92  {
93  ::__atomic_thread_fence(__ATOMIC_SEQ_CST);
94 }
int foedus::assorted::mod_numa_node ( int  numa_node)
inline

In order to run even on a non-NUMA machine or a machine with fewer sockets, we allow specifying an arbitrary numa_node.

We just take the modulo.

Definition at line 31 of file mod_numa_node.hpp.

References numa_available(), and numa_num_configured_nodes().

Referenced by foedus::memory::AlignedMemory::alloc(), foedus::thread::NumaThreadScope::NumaThreadScope(), and foedus::memory::ScopedNumaPreferred::ScopedNumaPreferred().

31  {
32  // if the machine is not a NUMA machine (1-socket), then avoid calling libnuma functions.
33  if (::numa_available() < 0) {
34  return 0;
35  }
36  int hardware_nodes = ::numa_num_configured_nodes();
37  if (numa_node >= 0 && numa_node >= hardware_nodes && hardware_nodes > 0) {
38  numa_node = numa_node % hardware_nodes;
39  }
40  return numa_node;
41 }
int numa_num_configured_nodes()
int numa_available(void)

std::string foedus::assorted::os_error ( )
std::string foedus::assorted::os_error ( int  error_number)

This version receives errno.

Definition at line 71 of file assorted_func.cpp.

71  {
72  if (error_number == 0) {
73  return "[No Error]";
74  }
75  std::stringstream str;
 76  // NOTE(Hideaki) is std::strerror thread-safe? There is no std::strerror_r. Windows, mmm.
77  str << "[Errno " << error_number << "] " << std::strerror(error_number);
78  return str.str();
79 }
void foedus::assorted::prefetch_cacheline ( const void *  address)
inline

Prefetch one cacheline to L1 cache.

Parameters
[in] address: memory address to prefetch.

Definition at line 49 of file cacheline.hpp.

Referenced by foedus::snapshot::MergeSort::fetch_logs(), foedus::storage::array::ArrayStoragePimpl::get_record_primitive_batch(), foedus::storage::array::ArrayStoragePimpl::lookup_for_read_batch(), foedus::storage::array::ArrayStoragePimpl::lookup_for_write_batch(), foedus::xct::XctManagerPimpl::precommit_xct_verify_page_version_set(), foedus::xct::XctManagerPimpl::precommit_xct_verify_pointer_set(), foedus::xct::XctManagerPimpl::precommit_xct_verify_readonly(), foedus::xct::XctManagerPimpl::precommit_xct_verify_readwrite(), foedus::assorted::prefetch_cachelines(), and foedus::assorted::prefetch_l2().

49  {
50 #if defined(__GNUC__)
51 #if defined(__aarch64__)
52  ::__builtin_prefetch(address, 1, 3);
53 #else // defined(__aarch64__)
54  ::__builtin_prefetch(address, 1, 3);
55  // ::_mm_prefetch(address, ::_MM_HINT_T0);
56 #endif // defined(__aarch64__)
57 #endif // defined(__GNUC__)
58 }

void foedus::assorted::prefetch_cachelines ( const void *  address,
int  cacheline_count 
)
inline

Prefetch multiple contiguous cachelines to L1 cache.

Parameters
[in] address: memory address to prefetch.
[in] cacheline_count: count of cachelines to prefetch.

Definition at line 66 of file cacheline.hpp.

References foedus::assorted::prefetch_cacheline().

Referenced by foedus::cache::CacheHashtable::evict_main_loop(), foedus::cache::CacheHashtable::find(), foedus::cache::CacheHashtable::find_batch(), foedus::storage::masstree::MasstreeIntermediatePage::MiniPage::prefetch(), foedus::storage::masstree::MasstreeIntermediatePage::prefetch(), foedus::storage::masstree::MasstreeBorderPage::prefetch(), foedus::storage::masstree::MasstreeBorderPage::prefetch_additional_if_needed(), and foedus::storage::masstree::MasstreePage::prefetch_general().

66  {
67  for (int i = 0; i < cacheline_count; ++i) {
 68  const void* shifted = reinterpret_cast<const char*>(address) + kCachelineSize * i;
69  prefetch_cacheline(shifted);
70  }
71 }
void prefetch_cacheline(const void *address)
Prefetch one cacheline to L1 cache.
Definition: cacheline.hpp:49
const uint16_t kCachelineSize
Byte count of one cache line.
Definition: cacheline.hpp:42
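
A usage sketch (the scan function is hypothetical): overlap the next record's memory latency with work on the current one.

#include <cstdint>

#include "foedus/assorted/cacheline.hpp"  // assumed include path

uint64_t sum_records(const char* records, int count, int record_size) {
  int lines = (record_size + foedus::assorted::kCachelineSize - 1)
      / foedus::assorted::kCachelineSize;
  uint64_t sum = 0;
  for (int i = 0; i < count; ++i) {
    const char* rec = records + static_cast<uint64_t>(i) * record_size;
    if (i + 1 < count) {
      foedus::assorted::prefetch_cachelines(rec + record_size, lines);  // warm the next record
    }
    for (int b = 0; b < record_size; ++b) {
      sum += static_cast<unsigned char>(rec[b]);
    }
  }
  return sum;
}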

void foedus::assorted::prefetch_l2 ( const void *  address,
int  cacheline_count 
)
inline

Prefetch multiple contiguous cachelines to L2/L3 cache.

Parameters
[in] address: memory address to prefetch.
[in] cacheline_count: count of cachelines to prefetch.

Definition at line 79 of file cacheline.hpp.

References foedus::assorted::prefetch_cacheline().

Referenced by foedus::storage::prefetch_page_l2().

79  {
80  for (int i = 0; i < cacheline_count; ++i) {
 81  const void* shifted = reinterpret_cast<const char*>(address) + kCachelineSize * i;
82  prefetch_cacheline(shifted); // this also works for L2/L3
83  }
84 }
void prefetch_cacheline(const void *address)
Prefetch one cacheline to L1 cache.
Definition: cacheline.hpp:49
const uint16_t kCachelineSize
Byte count of one cache line.
Definition: cacheline.hpp:42

template<typename T >
bool foedus::assorted::raw_atomic_compare_exchange_strong ( T *  target,
T *  expected,
T desired 
)
inline

Atomic CAS.

Template Parameters
T: integer type

Definition at line 49 of file raw_atomics.hpp.

49  {
50  // Use newer builtin instead of __sync_val_compare_and_swap
51  return ::__atomic_compare_exchange_n(
52  target,
53  expected,
54  desired,
55  false,
56  __ATOMIC_SEQ_CST,
57  __ATOMIC_SEQ_CST);
58  // T expected_val = *expected;
59  // T old_val = ::__sync_val_compare_and_swap(target, expected_val, desired);
60  // if (old_val == expected_val) {
61  // return true;
62  // } else {
63  // *expected = old_val;
64  // return false;
65  // }
66 }
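
A minimal sketch of the usual CAS retry loop (the function itself is hypothetical): increment a counter, but never past a cap.

#include <cstdint>

#include "foedus/assorted/raw_atomics.hpp"  // assumed include path

uint32_t bounded_increment(uint32_t* counter, uint32_t max_value) {
  uint32_t expected = *counter;  // a racy initial read is fine; the CAS below re-checks it
  while (expected < max_value) {
    // On failure, 'expected' is overwritten with the current value, so we just retry.
    if (foedus::assorted::raw_atomic_compare_exchange_strong<uint32_t>(
        counter, &expected, expected + 1)) {
      return expected + 1;  // we won the race
    }
  }
  return expected;  // already at (or beyond) the cap
}
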
bool foedus::assorted::raw_atomic_compare_exchange_strong_uint128 ( uint64_t *  ptr,
const uint64_t *  old_value,
const uint64_t *  new_value 
)

Atomic 128-bit CAS, which is not in the standard yet.

Parameters
[in,out] ptr: Points to 128-bit data. MUST BE 128-bit ALIGNED.
[in] old_value: Points to 128-bit data. If ptr holds this value, we swap. Unlike std::atomic_compare_exchange_strong, this arg is const.
[in] new_value: Points to 128-bit data. We change the ptr to hold this value.
Returns
Whether the swap happened

We shouldn't rely on it too much, as double-word CAS is not provided on older CPUs. Once the C++ standard adopts it, this method should go away. I will be a graybeard by then, though.

Attention
You need to give "-mcx16" to GCC to use its builtin 128bit CAS. Otherwise, __GCC_HAVE_SYNC_COMPARE_AND_SWAP_16 is not set and we have to resort to x86 assembly. Check out "gcc -dM -E - < /dev/null".

Definition at line 25 of file raw_atomics.cpp.

Referenced by foedus::storage::DualPagePointer::atomic_compare_exchange_strong(), and foedus::assorted::raw_atomic_compare_exchange_weak_uint128().

28  {
29  bool ret;
30 #if defined(__GNUC__) && defined(__GCC_HAVE_SYNC_COMPARE_AND_SWAP_16)
31  // gcc-x86 (-mcx16), then simply use __sync_bool_compare_and_swap.
32  __uint128_t* ptr_casted = reinterpret_cast<__uint128_t*>(ptr);
33  __uint128_t old_casted = *reinterpret_cast<const __uint128_t*>(old_value);
34  __uint128_t new_casted = *reinterpret_cast<const __uint128_t*>(new_value);
35  ret = ::__sync_bool_compare_and_swap(ptr_casted, old_casted, new_casted);
36 #elif defined(__GNUC__) && defined(__aarch64__)
37  // gcc-AArch64 doesn't allow -mcx16. But, it supports __atomic_compare_exchange_16 with
38  // libatomic.so. We need to link to it in that case.
39  __uint128_t* ptr_casted = reinterpret_cast<__uint128_t*>(ptr);
40  __uint128_t old_casted = *reinterpret_cast<const __uint128_t*>(old_value);
41  __uint128_t new_casted = *reinterpret_cast<const __uint128_t*>(new_value);
42  ret = ::__atomic_compare_exchange_16(
43  ptr_casted,
44  &old_casted,
45  new_casted,
46  false, // strong CAS
47  __ATOMIC_ACQ_REL, // to make it atomic, of course acq_rel
48  __ATOMIC_ACQUIRE); // matters only when it fails. acquire is enough.
49 #else // everything else
50  // oh well, then resort to assembly, assuming x86. clang on ARMv8? oh please...
51  // see: linux/arch/x86/include/asm/cmpxchg_64.h
52  uint64_t junk;
53  asm volatile("lock; cmpxchg16b %2;setz %1"
54  : "=d"(junk), "=a"(ret), "+m" (*ptr)
55  : "b"(new_value[0]), "c"(new_value[1]), "a"(old_value[0]), "d"(old_value[1]));
56  // Note on roll-our-own non-gcc ARMv8 cas16. It's doable, but...
57  // ARMv8 does have 128bit atomic instructions, called "pair" operations, such as ldaxp and stxp.
58  // There is actually a library that uses it:
59  // https://github.com/ivmai/libatomic_ops/blob/master/src/atomic_ops/sysdeps/gcc/aarch64.h
60  // (but this is GPL. Don't open the URL unless you are ready for it.)
61  // As of now (May 2014), GCC can't handle them, nor provide __uint128_t in ARMv8.
62  // I think it's coming, however. I'm waiting for it... if it's not coming, let's do ourselves.
63 #endif
64  return ret;
65 }
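
A usage sketch (the struct is hypothetical; DualPagePointer is the real in-tree user): atomically bump a version while swapping a pointer.

#include <cstdint>

#include "foedus/assorted/raw_atomics.hpp"  // assumed include path

struct alignas(16) VersionedPtr {  // 16-byte alignment is mandatory
  uint64_t version;
  uint64_t pointer;
};

bool install(VersionedPtr* slot, VersionedPtr observed, uint64_t new_pointer) {
  VersionedPtr desired = {observed.version + 1, new_pointer};
  return foedus::assorted::raw_atomic_compare_exchange_strong_uint128(
    reinterpret_cast<uint64_t*>(slot),
    reinterpret_cast<const uint64_t*>(&observed),
    reinterpret_cast<const uint64_t*>(&desired));
}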

template<typename T >
bool foedus::assorted::raw_atomic_compare_exchange_weak ( T *  target,
T *  expected,
T desired 
)
inline

Weak version of raw_atomic_compare_exchange_strong().

Template Parameters
T: integer type

Definition at line 74 of file raw_atomics.hpp.

74  {
75  return ::__atomic_compare_exchange_n(
76  target,
77  expected,
78  desired,
79  true, // weak
80  __ATOMIC_SEQ_CST,
81  __ATOMIC_SEQ_CST);
82 }
bool foedus::assorted::raw_atomic_compare_exchange_weak_uint128 ( uint64_t *  ptr,
const uint64_t *  old_value,
const uint64_t *  new_value 
)
inline

Weak version of raw_atomic_compare_exchange_strong_uint128().

Definition at line 108 of file raw_atomics.hpp.

References foedus::assorted::raw_atomic_compare_exchange_strong_uint128().

Referenced by foedus::storage::DualPagePointer::atomic_compare_exchange_weak().

111  {
112  if (ptr[0] != old_value[0] || ptr[1] != old_value[1]) {
113  return false; // this comparison is fast but not atomic, thus 'weak'
114  } else {
115  return raw_atomic_compare_exchange_strong_uint128(ptr, old_value, new_value);
116  }
117 }
bool raw_atomic_compare_exchange_strong_uint128(uint64_t *ptr, const uint64_t *old_value, const uint64_t *new_value)
Atomic 128-bit CAS, which is not in the standard yet.
Definition: raw_atomics.cpp:25

template<typename T >
T foedus::assorted::raw_atomic_exchange ( T *  target,
T desired 
)
inline

Atomic Swap for raw primitive types rather than std::atomic<T>.

Template Parameters
T: integer type
Parameters
[in,out] target: Points to the data to be swapped.
[in] desired: This value will be installed.
Returns
returns the old value.

This is a non-conditional swap, which always succeeds.

Definition at line 130 of file raw_atomics.hpp.

130  {
131  return ::__atomic_exchange_n(target, desired, __ATOMIC_SEQ_CST);
132  // Note: We must NOT use __sync_lock_test_and_set, which is only acquire-barrier for some
133  // reason. We instead use GCC/Clang's __atomic_exchange() builtin.
134  // return ::__sync_lock_test_and_set(target, desired);
135  // see https://gcc.gnu.org/onlinedocs/gcc-4.4.3/gcc/Atomic-Builtins.html
136  // and https://bugzilla.mozilla.org/show_bug.cgi?id=873799
137 
138  // BTW, __atomic_exchange_n/__ATOMIC_SEQ_CST demands a C++11-capable version of gcc/clang,
139  // but FOEDUS anyway relies on C++11. It just allows the linked program to be
140  // compiled without std=c++11. So, nothing lost.
141 }
template<typename T >
T foedus::assorted::raw_atomic_fetch_add ( T *  target,
T addendum 
)
inline

Atomic fetch-add for raw primitive types rather than std::atomic<T>.

Template Parameters
T: integer type
Returns
the previous value.

Definition at line 150 of file raw_atomics.hpp.

150  {
151  return ::__atomic_fetch_add(target, addendum, __ATOMIC_SEQ_CST);
152  // Just to align with above, use __atomic_fetch_add rather than __sync_fetch_and_add.
153  // It's equivalent.
154  // return ::__sync_fetch_and_add(target, addendum);
155 }
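
For example (a hypothetical offset allocator): fetch-add hands out disjoint ranges without locks.

#include <cstdint>

#include "foedus/assorted/raw_atomics.hpp"  // assumed include path

// Returns the offset before the addition; [old, old + count) belongs to the caller.
uint64_t claim_offsets(uint64_t* next_offset, uint64_t count) {
  return foedus::assorted::raw_atomic_fetch_add<uint64_t>(next_offset, count);
}
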
template<typename T >
T foedus::assorted::raw_atomic_fetch_and_bitwise_and ( T *  target,
T operand 
)
inline

Atomic fetch-bitwise-and for raw primitive types rather than std::atomic<T>.

Template Parameters
Tinteger type
Returns
the previous value.

Definition at line 164 of file raw_atomics.hpp.

164  {
165  return ::__atomic_fetch_and(target, operand, __ATOMIC_SEQ_CST);
166 }
template<typename T >
T foedus::assorted::raw_atomic_fetch_and_bitwise_or ( T *  target,
T operand 
)
inline

Atomic fetch-bitwise-or for raw primitive types rather than std::atomic<T>.

Template Parameters
T: integer type
Returns
the previous value.

Definition at line 174 of file raw_atomics.hpp.

174  {
175  return ::__atomic_fetch_or(target, operand, __ATOMIC_SEQ_CST);
176 }
template<typename T >
T foedus::assorted::raw_atomic_fetch_and_bitwise_xor ( T *  target,
T operand 
)
inline

Atomic fetch-bitwise-xor for raw primitive types rather than std::atomic<T>.

Template Parameters
T: integer type
Returns
the previous value.

Definition at line 184 of file raw_atomics.hpp.

184  {
185  return ::__atomic_fetch_xor(target, operand, __ATOMIC_SEQ_CST);
186 }
template<typename T >
T foedus::assorted::read_bigendian ( const void *  be_bytes)
inline

Convert a big-endian byte array to a native integer.

Parameters
[in] be_bytes: a big-endian byte array. MUST BE ALIGNED.
Returns
converted native integer
Template Parameters
T: type of native integer

Almost the same as beXXtoh in endian.h, except that this is a single template function that also supports signed integers.

Definition at line 116 of file endianness.hpp.

References ASSUME_ALIGNED, and foedus::assorted::betoh().

116  {
 117  // all if clauses below will be eliminated by the compiler.
118  const T* be_address = reinterpret_cast<const T*>(ASSUME_ALIGNED(be_bytes, sizeof(T)));
119  T be_value = *be_address;
120  return betoh(be_value);
121 }
#define ASSUME_ALIGNED(x, y)
Wraps GCC's __builtin_assume_aligned.
Definition: compiler.hpp:111
T betoh(T be_value)

std::string foedus::assorted::replace_all ( const std::string &  target,
const std::string &  search,
const std::string &  replacement 
)

target.replaceAll(search, replacement).

It's sad that standard C++ doesn't provide such basic functionality; std::regex is overkill for this purpose.

Definition at line 45 of file assorted_func.cpp.

Referenced by foedus::log::LogOptions::convert_folder_path_pattern(), foedus::snapshot::SnapshotOptions::convert_folder_path_pattern(), foedus::proc::ProcOptions::convert_shared_library_dir_pattern(), foedus::proc::ProcOptions::convert_shared_library_path_pattern(), foedus::soc::SocOptions::convert_spawn_executable_pattern(), foedus::soc::SocOptions::convert_spawn_ld_library_path_pattern(), and foedus::assorted::replace_all().

46  {
47  std::string subject = target;
48  while (true) {
49  std::size_t pos = subject.find(search);
50  if (pos != std::string::npos) {
51  subject.replace(pos, search.size(), replacement);
52  } else {
53  break;
54  }
55  }
56  return subject;
57 }

std::string foedus::assorted::replace_all ( const std::string &  target,
const std::string &  search,
int  replacement 
)

target.replaceAll(search, String.valueOf(replacement)).

Definition at line 59 of file assorted_func.cpp.

References foedus::assorted::replace_all().

60  {
61  std::stringstream str;
62  str << replacement;
63  std::string rep = str.str();
64  return replace_all(target, search, rep);
65 }
std::string replace_all(const std::string &target, const std::string &search, const std::string &replacement)
target.replaceAll(search, replacement).

void foedus::assorted::spinlock_yield ( )

Invoke _mm_pause(), the x86 PAUSE instruction, or an equivalent in the current environment.

Invoke this where you spin on a lock. It especially helps valgrind. You should probably invoke it after a few spins.

See also
http://stackoverflow.com/questions/7371869/minimum-time-a-thread-can-pause-in-linux "NOP instruction can be between 0.4-0.5 clocks and PAUSE instruction can consume 38-40 clocks."
SPINLOCK_WHILE(x)

Definition at line 193 of file assorted_func.cpp.

Referenced by foedus::soc::SharedCond::broadcast(), foedus::thread::ThreadPimpl::handle_tasks(), foedus::thread::ConditionVariable::notify_all(), foedus::assorted::spin_until(), foedus::soc::SharedCond::uninitialize(), foedus::soc::SocManagerPimpl::wait_for_child_attach(), foedus::soc::SocManagerPimpl::wait_for_child_terminate(), foedus::soc::SocManagerPimpl::wait_for_children_module(), foedus::soc::SocManagerPimpl::wait_for_master_module(), foedus::soc::SocManagerPimpl::wait_for_master_status(), foedus::assorted::SpinlockStat::yield_backoff(), foedus::assorted::yield_if_valgrind(), and foedus::thread::ConditionVariable::~ConditionVariable().

193  {
194  // we initially used gcc's mm_pause and manual assembly, but now we use this to handle AArch64.
195  // It might be no-op (not guaranteed to yield, according to the C++ specifictation)
196  // depending on GCC's implementation, but portability is more important.
197  std::this_thread::yield();
198  // #if defined(__GNUC__)
199  // ::_mm_pause();
200  // #else // defined(__GNUC__)
201  // // Non-gcc compiler.
202  // asm volatile("pause" ::: "memory");
203  // #endif // defined(__GNUC__)
204 }
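
A busy-wait sketch combining this with the SPINLOCK_WHILE macro above (the flag field is hypothetical; SPINLOCK_WHILE's yield_backoff() calls spinlock_yield() internally):

#include <cstdint>

#include "foedus/assorted/assorted_func.hpp"   // assumed include paths
#include "foedus/assorted/atomic_fences.hpp"

void wait_for_flag(const uint32_t* flag) {
  SPINLOCK_WHILE(foedus::assorted::atomic_load_acquire<uint32_t>(flag) == 0) {
    // Nothing to do in the body; the macro's yield_backoff() runs between spins.
  }
}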

template<uint64_t SIZE1, uint64_t SIZE2>
int foedus::assorted::static_size_check ( )
inline

Alternative for static_assert(sizeof(foo) == sizeof(bar), "oh crap") to display sizeof(foo).

Use it like this:

STATIC_SIZE_CHECK(sizeof(foo), sizeof(bar))

Definition at line 195 of file assorted_func.hpp.

References CXX11_STATIC_ASSERT.

195  {
196  CXX11_STATIC_ASSERT(SIZE1 == SIZE2,
197  "Static Size Check failed. Look for a message like this to see the value of Size1 and "
198  "Size2: 'In instantiation of int foedus::assorted::static_size_check() [with long unsigned"
199  " int SIZE1 = <size1>ul; long unsigned int SIZE2 = <size2>ul]'");
200  return 0;
201 }
#define CXX11_STATIC_ASSERT(expr, message)
Used in public headers in place of "static_assert" of C++11.
Definition: cxx11.hpp:135
template<typename T >
void foedus::assorted::write_bigendian (T host_value,
void *  be_bytes 
)
inline

Convert a native integer to big-endian bytes and write them to the given address.

Parameters
[in] host_value: a native integer.
[out] be_bytes: address to write out big-endian bytes. MUST BE ALIGNED.
Template Parameters
T: type of native integer

Definition at line 131 of file endianness.hpp.

References ASSUME_ALIGNED.

131  {
132  T* be_address = reinterpret_cast<T*>(ASSUME_ALIGNED(be_bytes, sizeof(T)));
133  *be_address = htobe<T>(host_value);
134 }
#define ASSUME_ALIGNED(x, y)
Wraps GCC's __builtin_assume_aligned.
Definition: compiler.hpp:111
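
A round-trip sketch (assuming the usual foedus/assorted include layout): keys are written big-endian so that memcmp() order equals integer order.

#include <cstdint>
#include <iostream>

#include "foedus/assorted/endianness.hpp"  // assumed include path

int main() {
  alignas(8) char buf[8];  // the address must be aligned to sizeof(T)
  foedus::assorted::write_bigendian<uint64_t>(0x0102030405060708ULL, buf);
  // buf now holds bytes 01 02 ... 08 regardless of host endianness.
  uint64_t value = foedus::assorted::read_bigendian<uint64_t>(buf);
  std::cout << std::hex << value << std::endl;  // 102030405060708
  return 0;
}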

Variable Documentation

const uint16_t foedus::assorted::kCachelineSize = 64

Byte count of one cache line.

Several places use this to avoid false sharing of cache lines, for example by separating two variables that are frequently accessed with atomic operations.

Todo:
This should be automatically detected by cmakedefine.

Definition at line 42 of file cacheline.hpp.

Referenced by foedus::cache::CacheHashtable::evict_main_loop(), foedus::storage::masstree::MasstreeBorderPage::prefetch_additional_if_needed(), and foedus::storage::prefetch_page_l2().
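
A sketch of the false-sharing avoidance this enables (the counter struct is hypothetical):

#include <cstdint>

#include "foedus/assorted/cacheline.hpp"  // assumed include path

// Pad each per-thread slot to a full cache line so one thread's writes
// don't invalidate its neighbors' lines.
struct alignas(foedus::assorted::kCachelineSize) PaddedCounter {
  uint64_t value;
  char padding[foedus::assorted::kCachelineSize - sizeof(uint64_t)];
};

PaddedCounter g_counters[64];  // one slot per thread; no false sharing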

const bool foedus::assorted::kIsLittleEndian = false

A handy const boolean to tell if it's little endian.

Most compilers would resolve "if (kIsLittleEndian) ..." at compile time, so there is no overhead. Compared to writing an ifdef each time, this is handier. However, there are a few cases where we still have to write ifdefs (class definitions, etc.).

Definition at line 54 of file endianness.hpp.