libfoedus-core
FOEDUS Core Library
foedus::assorted Namespace Reference

Assorted Methods/Classes that are too subtle to have their own packages. More...

Detailed Description

Assorted Methods/Classes that are too subtle to have their own packages.

Do NOT use this package to hold hundreds of classes/methods. That's a class-design failure. This package should contain only a few methods and classes, each of which should be extremely simple and unrelated to the others. Otherwise, make a dedicated package for them. Learn from the stupid history of java.util.

Classes

struct  BacktraceContext
 
struct  ConstDiv
 The pre-calculated p-m pair for optimized integer division by constant. More...
 
class  DumbSpinlock
 A simple spinlock using a boolean field. More...
 
class  FixedString
 An embedded string object of fixed max-length, which uses no external memory. More...
 
struct  Hex
 Convenient way of writing hex integers to stream. More...
 
struct  HexString
 Equivalent to std::hex in case the stream doesn't support it. More...
 
struct  ProbCounter
 Implements a probabilistic counter [Morris 1978]. More...
 
struct  ProtectedBoundary
 A 4kb dummy data placed between separate memory regions so that we can check if/where a bogus memory access happens. More...
 
struct  SpinlockStat
 Helper for SPINLOCK_WHILE. More...
 
struct  Top
 Writes only the first few bytes to a stream. More...
 
class  UniformRandom
 A very simple and deterministic random generator, more aligned with standard benchmarks such as TPC-C. More...
 
class  ZipfianRandom
 A simple Zipfian generator based on YCSB's Java implementation. More...
 

Functions

template<typename T , uint64_t ALIGNMENT>
T align (T value)
 Returns the smallest multiple of ALIGNMENT that is equal to or larger than the given number. More...
 
template<typename T >
T align8 (T value)
 8-alignment. More...
 
template<typename T >
T align16 (T value)
 16-alignment. More...
 
template<typename T >
T align64 (T value)
 64-alignment. More...
 
int64_t int_div_ceil (int64_t dividee, int64_t dividor)
 Efficient ceil(dividee/dividor) for integers. More...
 
std::string replace_all (const std::string &target, const std::string &search, const std::string &replacement)
 target.replaceAll(search, replacement). More...
 
std::string replace_all (const std::string &target, const std::string &search, int replacement)
 target.replaceAll(search, String.valueOf(replacement)). More...
 
std::string os_error ()
 Thread-safe strerror(errno). More...
 
std::string os_error (int error_number)
 This version receives errno. More...
 
std::string get_current_executable_path ()
 Returns the full path of the current executable. More...
 
void spinlock_yield ()
 Invoke _mm_pause(), x86 PAUSE instruction, or something equivalent in the env. More...
 
template<uint64_t SIZE1, uint64_t SIZE2>
int static_size_check ()
 Alternative for static_assert(sizeof(foo) == sizeof(bar), "oh crap") to display sizeof(foo). More...
 
std::string demangle_type_name (const char *mangled_name)
 Demangle the given C++ type name if possible (otherwise the original string). More...
 
template<typename T >
std::string get_pretty_type_name ()
 Returns the name of the C++ type as readable as possible. More...
 
uint64_t generate_almost_prime_below (uint64_t threshold)
 Generates a prime, or a number that is almost prime, below the given threshold. More...
 
void memory_fence_acquire ()
 Equivalent to std::atomic_thread_fence(std::memory_order_acquire). More...
 
void memory_fence_release ()
 Equivalent to std::atomic_thread_fence(std::memory_order_release). More...
 
void memory_fence_acq_rel ()
 Equivalent to std::atomic_thread_fence(std::memory_order_acq_rel). More...
 
void memory_fence_consume ()
 Equivalent to std::atomic_thread_fence(std::memory_order_consume). More...
 
void memory_fence_seq_cst ()
 Equivalent to std::atomic_thread_fence(std::memory_order_seq_cst). More...
 
template<typename T >
T atomic_load_seq_cst (const T *target)
 Atomic load with a seq_cst barrier for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T atomic_load_acquire (const T *target)
 Atomic load with an acquire barrier for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T atomic_load_consume (const T *target)
 Atomic load with a consume barrier for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
void atomic_store_seq_cst (T *target, T value)
 Atomic store with a seq_cst barrier for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
void atomic_store_release (T *target, T value)
 Atomic store with a release barrier for raw primitive types rather than std::atomic<T>. More...
 
void prefetch_cacheline (const void *address)
 Prefetch one cacheline to L1 cache. More...
 
void prefetch_cachelines (const void *address, int cacheline_count)
 Prefetch multiple contiguous cachelines to L1 cache. More...
 
void prefetch_l2 (const void *address, int cacheline_count)
 Prefetch multiple contiguous cachelines to L2/L3 cache. More...
 
template<typename T >
T betoh (T be_value)
 
template<>
uint64_t betoh< uint64_t > (uint64_t be_value)
 
template<>
uint32_t betoh< uint32_t > (uint32_t be_value)
 
template<>
uint16_t betoh< uint16_t > (uint16_t be_value)
 
template<>
uint8_t betoh< uint8_t > (uint8_t be_value)
 
template<>
int64_t betoh< int64_t > (int64_t be_value)
 
template<>
int32_t betoh< int32_t > (int32_t be_value)
 
template<>
int16_t betoh< int16_t > (int16_t be_value)
 
template<>
int8_t betoh< int8_t > (int8_t be_value)
 
template<typename T >
T htobe (T host_value)
 
template<>
uint64_t htobe< uint64_t > (uint64_t host_value)
 
template<>
uint32_t htobe< uint32_t > (uint32_t host_value)
 
template<>
uint16_t htobe< uint16_t > (uint16_t host_value)
 
template<>
uint8_t htobe< uint8_t > (uint8_t host_value)
 
template<>
int64_t htobe< int64_t > (int64_t host_value)
 
template<>
int32_t htobe< int32_t > (int32_t host_value)
 
template<>
int16_t htobe< int16_t > (int16_t host_value)
 
template<>
int8_t htobe< int8_t > (int8_t host_value)
 
template<typename T >
T read_bigendian (const void *be_bytes)
 Convert a big-endian byte array to a native integer. More...
 
template<typename T >
void write_bigendian (T host_value, void *be_bytes)
 Convert a native integer to big-endian bytes and write them to the given address. More...
 
int mod_numa_node (int numa_node)
 In order to run even on a non-NUMA machine or a machine with fewer sockets, we allow specifying an arbitrary numa_node. More...
 
template<typename T >
bool raw_atomic_compare_exchange_strong (T *target, T *expected, T desired)
 Atomic CAS. More...
 
template<typename T >
bool raw_atomic_compare_exchange_weak (T *target, T *expected, T desired)
 Weak version of raw_atomic_compare_exchange_strong(). More...
 
bool raw_atomic_compare_exchange_strong_uint128 (uint64_t *ptr, const uint64_t *old_value, const uint64_t *new_value)
 Atomic 128-bit CAS, which is not in the standard yet. More...
 
bool raw_atomic_compare_exchange_weak_uint128 (uint64_t *ptr, const uint64_t *old_value, const uint64_t *new_value)
 Weak version of raw_atomic_compare_exchange_strong_uint128(). More...
 
template<typename T >
T raw_atomic_exchange (T *target, T desired)
 Atomic Swap for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T raw_atomic_fetch_add (T *target, T addendum)
 Atomic fetch-add for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T raw_atomic_fetch_and_bitwise_and (T *target, T operand)
 Atomic fetch-bitwise-and for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T raw_atomic_fetch_and_bitwise_or (T *target, T operand)
 Atomic fetch-bitwise-or for raw primitive types rather than std::atomic<T>. More...
 
template<typename T >
T raw_atomic_fetch_and_bitwise_xor (T *target, T operand)
 Atomic fetch-bitwise-xor for raw primitive types rather than std::atomic<T>. More...
 
std::vector< std::string > get_backtrace (bool rich=true)
 Returns the backtrace information of the current stack. More...
 
bool is_running_on_valgrind ()
 
template<typename COND >
uint64_t spin_until (COND spin_until_cond)
 Spin locally until the given condition returns true. More...
 
void yield_if_valgrind ()
 Use this in your while as a stop-gap before switching to spin_until(). More...
 
std::ostream & operator<< (std::ostream &o, const Hex &v)
 
std::ostream & operator<< (std::ostream &o, const HexString &v)
 
std::ostream & operator<< (std::ostream &o, const Top &v)
 
void libbt_create_state_error (void *data, const char *msg, int errnum)
 
void libbt_full_error (void *data, const char *msg, int errnum)
 
int libbt_full (void *data, uintptr_t pc, const char *filename, int lineno, const char *function)
 
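To illustrate align() and int_div_ceil() from the list above, here is a minimal standalone sketch (not the FOEDUS implementation), assuming ALIGNMENT is a power of two and the ceil-division operands are non-negative:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of align(): for a power-of-two ALIGNMENT, rounding up to the next
// multiple is an add-and-mask operation.
template <typename T, uint64_t ALIGNMENT>
T align(T value) {
  static_assert((ALIGNMENT & (ALIGNMENT - 1)) == 0, "ALIGNMENT must be a power of two");
  uint64_t left = static_cast<uint64_t>(value) + ALIGNMENT - 1;
  return static_cast<T>(left & ~(ALIGNMENT - 1));
}

// Sketch of int_div_ceil(): the usual (a + b - 1) / b trick, assuming
// a non-negative dividee and a positive dividor.
int64_t int_div_ceil(int64_t dividee, int64_t dividor) {
  return (dividee + dividor - 1) / dividor;
}
```

For instance, align<uint64_t, 8>(13) yields 16, and int_div_ceil(10, 3) yields 4.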

Variables

const uint16_t kCachelineSize = 64
 Byte count of one cache line. More...
 
const uint32_t kPower2To31 = 1U << 31
 
const uint64_t kPower2To63 = 1ULL << 63
 
const uint32_t kFull32Bits = 0xFFFFFFFF
 
const uint32_t kFull31Bits = 0x7FFFFFFF
 
const uint64_t kFull64Bits = 0xFFFFFFFFFFFFFFFFULL
 
const uint64_t kFull63Bits = 0x7FFFFFFFFFFFFFFFULL
 
const bool kIsLittleEndian = false
 A handy const boolean to tell if it's little endian. More...
 
const uint64_t kProtectedBoundaryMagicWord = 0x42a6292680d7ce36ULL
 
const char * kUpperHexChars = "0123456789ABCDEF"
 

Function Documentation

template<typename T >
T foedus::assorted::betoh ( T  be_value)

Referenced by read_bigendian().


template<>
int16_t foedus::assorted::betoh< int16_t > ( int16_t  be_value)
inline

Definition at line 79 of file endianness.hpp.

79  {
80  return be16toh(static_cast<uint16_t>(be_value)) - (1U << 15);
81 }
template<>
int32_t foedus::assorted::betoh< int32_t > ( int32_t  be_value)
inline

Definition at line 76 of file endianness.hpp.

76  {
77  return be32toh(static_cast<uint32_t>(be_value)) - (1U << 31);
78 }
template<>
int64_t foedus::assorted::betoh< int64_t > ( int64_t  be_value)
inline

Definition at line 73 of file endianness.hpp.

73  {
74  return be64toh(static_cast<uint64_t>(be_value)) - (1ULL << 63);
75 }
template<>
int8_t foedus::assorted::betoh< int8_t > ( int8_t  be_value)
inline

Definition at line 82 of file endianness.hpp.

82  {
83  return be_value - (1U << 7);
84 }
template<>
uint16_t foedus::assorted::betoh< uint16_t > ( uint16_t  be_value)
inline

Definition at line 67 of file endianness.hpp.

67 { return be16toh(be_value); }
template<>
uint32_t foedus::assorted::betoh< uint32_t > ( uint32_t  be_value)
inline

Definition at line 66 of file endianness.hpp.

66 { return be32toh(be_value); }
template<>
uint64_t foedus::assorted::betoh< uint64_t > ( uint64_t  be_value)
inline

Definition at line 65 of file endianness.hpp.

65 { return be64toh(be_value); }
template<>
uint8_t foedus::assorted::betoh< uint8_t > ( uint8_t  be_value)
inline

Definition at line 68 of file endianness.hpp.

68 { return be_value; }
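A standalone sketch of what the signed specializations above accomplish (hypothetical names, Linux-only <endian.h>): subtracting 1U << 31 flips the sign bit (arithmetic is mod 2^32), biasing the signed range so the resulting big-endian bytes compare like unsigned keys via memcmp. Applying the same subtraction in both directions round-trips, because subtracting 2^31 twice subtracts 2^32, a no-op mod 2^32.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <endian.h>  // glibc be32toh/htobe32; Linux-only, like the header these come from

// Bias the signed value, then byte-swap to big-endian.
inline int32_t htobe_i32(int32_t host_value) {
  return static_cast<int32_t>(htobe32(static_cast<uint32_t>(host_value) - (1U << 31)));
}
// Byte-swap back to host order, then undo the bias (same subtraction).
inline int32_t betoh_i32(int32_t be_value) {
  return static_cast<int32_t>(be32toh(static_cast<uint32_t>(be_value)) - (1U << 31));
}
```

The payoff is that memcmp on the encoded bytes of -1 and 1 is negative, matching the signed order.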
std::vector< std::string > foedus::assorted::get_backtrace ( bool  rich = true)

Returns the backtrace information of the current stack.

If the rich flag is given, the backtrace information is converted to a human-readable format as much as possible via addr2line (which is Linux-only). Also, this method does not care about out-of-memory situations. If you are really concerned about that, use backtrace, backtrace_symbols_fd, etc. directly.

Definition at line 263 of file rich_backtrace.cpp.

References foedus::assorted::BacktraceContext::call_glibc_backtrace(), foedus::assorted::BacktraceContext::get_results(), libbt_create_state_error(), libbt_full(), and libbt_full_error().

Referenced by foedus::print_backtrace().

263  {
264  BacktraceContext context;
265  context.call_glibc_backtrace();
266  if (rich) {
267  // try to use the shiny new libbacktrace
268  backtrace_state* state = ::backtrace_create_state(
269  nullptr,  // let libbacktrace figure out this executable
270  0,  // only this thread accesses it
271  libbt_create_state_error,
272  &context);
273  if (state) {
274  int full_result = ::backtrace_full(
275  state,
276  0,
277  libbt_full,
278  libbt_full_error,
279  &context);
280  if (full_result != 0) {
281  // return ret;
282  std::cerr << "[FOEDUS] libbacktrace backtrace_full failed: " << full_result << std::endl;
283  }
284  }
285  }
286 
287  return context.get_results(1); // skip the first (this method itself)
288 }
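For contrast with the rich path above, here is a hedged sketch of the non-rich fallback only: plain glibc backtrace() plus backtrace_symbols(), without the libbacktrace/addr2line post-processing. get_backtrace_plain is a hypothetical name, not a FOEDUS function.

```cpp
#include <cstdlib>     // free()
#include <execinfo.h>  // glibc backtrace()/backtrace_symbols(); Linux-only
#include <string>
#include <vector>

std::vector<std::string> get_backtrace_plain(int max_frames = 32) {
  std::vector<void*> addresses(max_frames);
  int count = ::backtrace(addresses.data(), max_frames);
  std::vector<std::string> result;
  char** symbols = ::backtrace_symbols(addresses.data(), count);
  if (symbols != nullptr) {
    for (int i = 1; i < count; ++i) {  // skip the first frame (this method itself), as above
      result.emplace_back(symbols[i]);
    }
    ::free(symbols);  // backtrace_symbols() returns one malloc()ed block
  }
  return result;
}
```

Note that without -rdynamic at link time, the returned strings typically contain only raw addresses rather than symbol names.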


template<typename T >
T foedus::assorted::htobe ( T  host_value)
template<>
int16_t foedus::assorted::htobe< int16_t > ( int16_t  host_value)
inline

Definition at line 98 of file endianness.hpp.

98  {
99  return htobe16(static_cast<uint16_t>(host_value) - (1U << 15));
100 }
template<>
int32_t foedus::assorted::htobe< int32_t > ( int32_t  host_value)
inline

Definition at line 95 of file endianness.hpp.

95  {
96  return htobe32(static_cast<uint32_t>(host_value) - (1U << 31));
97 }
template<>
int64_t foedus::assorted::htobe< int64_t > ( int64_t  host_value)
inline

Definition at line 92 of file endianness.hpp.

92  {
93  return htobe64(static_cast<uint64_t>(host_value) - (1ULL << 63));
94 }
template<>
int8_t foedus::assorted::htobe< int8_t > ( int8_t  host_value)
inline

Definition at line 101 of file endianness.hpp.

101  {
102  return host_value - (1U << 7);
103 }
template<>
uint16_t foedus::assorted::htobe< uint16_t > ( uint16_t  host_value)
inline

Definition at line 89 of file endianness.hpp.

89 { return htobe16(host_value); }
template<>
uint32_t foedus::assorted::htobe< uint32_t > ( uint32_t  host_value)
inline

Definition at line 88 of file endianness.hpp.

88 { return htobe32(host_value); }
template<>
uint8_t foedus::assorted::htobe< uint8_t > ( uint8_t  host_value)
inline

Definition at line 90 of file endianness.hpp.

90 { return host_value; }
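read_bigendian()/write_bigendian() from the function list essentially combine memcpy with the conversions above. A standalone uint32_t sketch (hypothetical names, not the templated FOEDUS signatures); memcpy keeps the accesses safe even for unaligned addresses:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <endian.h>  // glibc be32toh/htobe32; Linux-only

// Read four big-endian bytes from an arbitrary address as a native uint32_t.
inline uint32_t read_bigendian_u32(const void* be_bytes) {
  uint32_t be;
  std::memcpy(&be, be_bytes, sizeof(be));  // handles unaligned addresses
  return be32toh(be);
}
// Write a native uint32_t to an arbitrary address as big-endian bytes.
inline void write_bigendian_u32(uint32_t host_value, void* be_bytes) {
  uint32_t be = htobe32(host_value);
  std::memcpy(be_bytes, &be, sizeof(be));
}
```

For example, the byte sequence 00 00 01 02 reads back as 258 regardless of host endianness.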
bool foedus::assorted::is_running_on_valgrind ( )
Returns
whether this process is running on valgrind. Equivalent to the RUNNING_ON_VALGRIND macro (but you don't have to include valgrind.h just for it).

Definition at line 24 of file spin_until_impl.cpp.

Referenced by spin_until(), and yield_if_valgrind().

24  {
25  return RUNNING_ON_VALGRIND;
26 }


void foedus::assorted::libbt_create_state_error ( void *  data,
const char *  msg,
int  errnum 
)

Definition at line 117 of file rich_backtrace.cpp.

Referenced by get_backtrace().

117  {
118  if (data == nullptr) {
119  std::cerr << "[FOEDUS] wtf. libbt_create_state_error received null" << std::endl;
120  } else {
121  reinterpret_cast<BacktraceContext*>(data)->on_libbt_create_state_error(msg, errnum);
122  }
123 }


int foedus::assorted::libbt_full ( void *  data,
uintptr_t  pc,
const char *  filename,
int  lineno,
const char *  function 
)

Definition at line 131 of file rich_backtrace.cpp.

Referenced by get_backtrace().

131  {
132  if (data == nullptr) {
133  std::cerr << "[FOEDUS] wtf. libbt_full received null" << std::endl;
134  return 1;
135  } else {
136  reinterpret_cast<BacktraceContext*>(data)->on_libbt_full(pc, filename, lineno, function);
137  return 0;
138  }
139 }


void foedus::assorted::libbt_full_error ( void *  data,
const char *  msg,
int  errnum 
)

Definition at line 124 of file rich_backtrace.cpp.

Referenced by get_backtrace().

124  {
125  if (data == nullptr) {
126  std::cerr << "[FOEDUS] wtf. libbt_full_error received null" << std::endl;
127  } else {
128  reinterpret_cast<BacktraceContext*>(data)->on_libbt_full_error(msg, errnum);
129  }
130 }


std::ostream& foedus::assorted::operator<< ( std::ostream &  o,
const Hex v 
)

Definition at line 92 of file assorted_func.cpp.

References foedus::assorted::Hex::fix_digits_, and foedus::assorted::Hex::val_.

92  {
93  // Duh, even if I recover the flags, not good to contaminate the given ostream object.
94  // std::ios::fmtflags old_flags = o.flags();
95  // o << "0x";
96  // if (v.fix_digits_ >= 0) {
97  // o.width(v.fix_digits_);
98  // o.fill('0');
99  // }
100  // o << std::hex << std::uppercase << v.val_;
101  // o.flags(old_flags);
102 
103  // Let's do it ourselves
104  char buffer[17];
105  buffer[16] = 0;
106  for (uint16_t i = 0; i < 16U; ++i) {
107  buffer[i] = kUpperHexChars[(v.val_ >> ((15 - i) * 4)) & 0xFU];
108  }
109  uint16_t start_pos;
110  for (start_pos = 0; start_pos < 15U; ++start_pos) {
111  if (buffer[start_pos] != '0') {
112  break;
113  }
114  if (v.fix_digits_ >= 0 && start_pos > 16 - v.fix_digits_) {
115  break;
116  }
117  }
118 
119  o << "0x" << (buffer + start_pos);
120  return o;
121 }
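A standalone sketch of the formatting loop above (the fix_digits_ handling is omitted for brevity): emit all 16 uppercase nibbles into a buffer, then strip leading zeros while keeping at least one digit. to_upper_hex is a hypothetical helper, not a FOEDUS function.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

std::string to_upper_hex(uint64_t val) {
  const char* kUpperHexChars = "0123456789ABCDEF";
  char buffer[17];
  buffer[16] = 0;
  // Write all 16 hex digits, most significant nibble first.
  for (uint16_t i = 0; i < 16U; ++i) {
    buffer[i] = kUpperHexChars[(val >> ((15 - i) * 4)) & 0xFU];
  }
  // Skip leading zeros, but keep the last digit so 0 prints as "0x0".
  uint16_t start_pos = 0;
  while (start_pos < 15U && buffer[start_pos] == '0') {
    ++start_pos;
  }
  return std::string("0x") + (buffer + start_pos);
}
```

For example, to_upper_hex(42) yields "0x2A" and to_upper_hex(0) yields "0x0", mirroring what the operator writes to the stream.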
std::ostream& foedus::assorted::operator<< ( std::ostream &  o,
const HexString v 
)

Definition at line 123 of file assorted_func.cpp.

References foedus::assorted::HexString::max_bytes_, and foedus::assorted::HexString::str_.

123  {
124  // Same above
125  o << "0x";
126  for (uint32_t i = 0; i < v.str_.size() && i < v.max_bytes_; ++i) {
127  if (i > 0 && i % 8U == 0) {
128  o << " "; // put space for every 8 bytes for readability
129  }
130  o << kUpperHexChars[(v.str_[i] >> 4) & 0xFU] << kUpperHexChars[v.str_[i] & 0xFU];
131  }
132  if (v.max_bytes_ != -1U && v.str_.size() > v.max_bytes_) {
133  o << " ...(" << (v.str_.size() - v.max_bytes_) << " more bytes)";
134  }
135  return o;
136 }
std::ostream& foedus::assorted::operator<< ( std::ostream &  o,
const Top v 
)

Definition at line 138 of file assorted_func.cpp.

References foedus::assorted::Top::data_, foedus::assorted::Top::data_len_, and foedus::assorted::Top::max_bytes_.

138  {
139  for (uint32_t i = 0; i < std::min<uint32_t>(v.data_len_, v.max_bytes_); ++i) {
140  o << i << ":" << static_cast<int>(v.data_[i]);
141  if (i != 0) {
142  o << ", ";
143  }
144  }
145  if (v.data_len_ > v.max_bytes_) {
146  o << "...";
147  }
148  return o;
149 }
template<typename COND >
uint64_t foedus::assorted::spin_until ( COND  spin_until_cond)
inline

Spin locally until the given condition returns true.

Even if you think your while-loop is trivial, make sure you use this. The yield is necessary for valgrind runs. This template has a negligible overhead in non-valgrind release compilation.

This is a frequently appearing pattern without which valgrind runs would go into an infinite loop. Unfortunately it needs a lambda, so this is an _impl file.

Example
spin_until([block_address]{
return (*block_address) != 0; // Spin until the block becomes non-zero
});

In general:

while(XXXX) {}
is equivalent to
spin_until([]{ return !XXXX; });

Notice the !: spin_"until" is thus the opposite of "while".

Returns
the number of cycles (using RDTSC) spent in this function.

Definition at line 61 of file spin_until_impl.hpp.

References foedus::debugging::RdtscWatch::elapsed(), is_running_on_valgrind(), spinlock_yield(), and foedus::debugging::RdtscWatch::stop().

Referenced by foedus::xct::spin_until(), foedus::xct::McsRwSimpleBlock::timeout_granted(), and foedus::xct::McsRwExtendedBlock::timeout_granted().

61  {
62  debugging::RdtscWatch watch;
63 
64  const bool on_valgrind = is_running_on_valgrind();
65  while (!spin_until_cond()) {
66  // Valgrind never switches context without this.
67  // This if should have a negligible overhead.
68  if (on_valgrind) {
69  spinlock_yield();
70  }
71  }
72 
73  watch.stop();
74  return watch.elapsed();
75 }
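A portable sketch of the same pattern (hypothetical name, not the FOEDUS template): it counts loop iterations instead of RDTSC cycles, and uses std::this_thread::yield() in place of the valgrind-aware spinlock_yield().

```cpp
#include <atomic>
#include <cstdint>
#include <thread>

// Spin locally until the given condition returns true, yielding on each
// iteration so other threads (and valgrind's scheduler) can make progress.
template <typename COND>
uint64_t spin_until_sketch(COND spin_until_cond) {
  uint64_t spins = 0;
  while (!spin_until_cond()) {
    ++spins;
    std::this_thread::yield();
  }
  return spins;
}
```

Typical use is waiting for another thread to publish a flag, e.g. spin_until_sketch([&]{ return flag.load(std::memory_order_acquire) != 0; }).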


void foedus::assorted::yield_if_valgrind ( )
inline

Use this in your while as a stop-gap before switching to spin_until().

See also
spin_until()

Definition at line 81 of file spin_until_impl.hpp.

References is_running_on_valgrind(), and spinlock_yield().

Referenced by foedus::xct::McsRwSimpleBlock::timeout_granted(), and foedus::xct::McsRwExtendedBlock::timeout_granted().

81  {
82  const bool on_valgrind = is_running_on_valgrind();
83  if (on_valgrind) {
84  spinlock_yield();
85  }
86 }


Variable Documentation

const uint32_t foedus::assorted::kFull31Bits = 0x7FFFFFFF

Definition at line 30 of file const_div.hpp.

const uint32_t foedus::assorted::kFull32Bits = 0xFFFFFFFF

Definition at line 29 of file const_div.hpp.

const uint64_t foedus::assorted::kFull63Bits = 0x7FFFFFFFFFFFFFFFULL

Definition at line 32 of file const_div.hpp.

const uint64_t foedus::assorted::kFull64Bits = 0xFFFFFFFFFFFFFFFFULL

Definition at line 31 of file const_div.hpp.

const uint32_t foedus::assorted::kPower2To31 = 1U << 31

Definition at line 27 of file const_div.hpp.

const uint64_t foedus::assorted::kPower2To63 = 1ULL << 63

Definition at line 28 of file const_div.hpp.

const uint64_t foedus::assorted::kProtectedBoundaryMagicWord = 0x42a6292680d7ce36ULL

Definition at line 33 of file protected_boundary.hpp.

Referenced by foedus::assorted::ProtectedBoundary::reset().

const char* foedus::assorted::kUpperHexChars = "0123456789ABCDEF"

Definition at line 91 of file assorted_func.cpp.