Commit 017cd274 authored by Ning Xu, committed by Facebook Github Bot

Surround code snippets with tildes (#1032)

Summary:
Angle brackets inside angle brackets are not displayed.
This makes the document misleading.
Pull Request resolved: https://github.com/facebook/folly/pull/1032

Reviewed By: shixiao

Differential Revision: D14207772

Pulled By: yfeldblum

fbshipit-source-id: 4f2e58145a5473a7b887ef1da4efbeb16d5330c6
parent 75d20258
@@ -14,7 +14,7 @@ production at Facebook. Switching to it can improve memory efficiency
and performance at the same time. The hash table implementations
widely deployed in C++ at Facebook exist along a spectrum of space/time
tradeoffs. The fastest is the least memory efficient, and the most
memory efficient (`google::sparse_hash_map`) is much slower than the rest.
F14 moves the curve, simultaneously improving memory efficiency and
performance when compared to most of the existing algorithms.
@@ -23,21 +23,21 @@ performance when compared to most of the existing algorithms.
The core hash table implementation has a pluggable storage strategy,
with three policies provided:
`F14NodeMap` stores values indirectly, calling malloc on each insert like
`std::unordered_map`. This implementation is the most memory efficient
for medium and large keys. It provides the same iterator and reference
stability guarantees as the standard map while being faster and more
memory efficient, so you can substitute `F14NodeMap` for `std::unordered_map`
safely in production code. F14's filtering substantially reduces
indirection (and cache misses) when compared to `std::unordered_map`.
`F14ValueMap` stores values inline, like `google::dense_hash_map`.
Inline storage is the most memory efficient for small values, but for
medium and large values it wastes space. Because it can tolerate a much
higher load factor, `F14ValueMap` is almost twice as memory efficient as
`dense_hash_map` while also faster for most workloads.
`F14VectorMap` keeps values packed in a contiguous array. The main hash
array stores 32-bit indexes into the value vector. Compared to the
existing internal implementations that use a similar strategy, F14 is
slower for simple keys and small or medium-sized tables (because of the
@@ -46,11 +46,11 @@ about 16 bytes per entry on average.
We also provide:
`F14FastMap` inherits from either F14ValueMap or F14VectorMap depending
on entry size. When the key and mapped_type are less than 24 bytes, it
inherits from `F14ValueMap`. For medium and large entries, it inherits
from `F14VectorMap`. This strategy provides the best performance, while
also providing better memory efficiency than `dense_hash_map` or the other
hash tables in use at Facebook that don't individually allocate nodes.
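To make the variants concrete, here is a minimal usage sketch (the key and value types below are arbitrary choices, not recommendations, and the header is the usual `folly/container/F14Map.h`):

```cpp
#include <cstdint>
#include <string>

#include <folly/container/F14Map.h>

int main() {
  // Node storage: per-entry allocation, reference stability, a drop-in
  // replacement for std::unordered_map.
  folly::F14NodeMap<std::string, std::string> nodeMap;
  nodeMap["key"] = "value";

  // Inline storage: most memory efficient for small keys and values.
  folly::F14ValueMap<uint64_t, uint64_t> valueMap;
  valueMap[42] = 7;

  // Values packed in a contiguous array, indexed by 32-bit slots.
  folly::F14VectorMap<std::string, std::string> vectorMap;
  vectorMap.emplace("k", "v");

  // Chooses value or vector storage based on entry size.
  folly::F14FastMap<int, int> fastMap;
  fastMap.insert({1, 2});
  return 0;
}
```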
## WHICH F14 VARIANT IS RIGHT FOR ME?
@@ -64,24 +64,24 @@ should use it.
## HETEROGENEOUS KEY TYPE WITH TRANSPARENT HASH AND EQUALITY
In some cases it makes sense to define hash and key equality across
types. For example, `StringPiece`'s hash and equality are capable of
accepting `std::string` (because `std::string` is implicitly convertible
to `StringPiece`). If you mark the hash functor and key equality functor
as _transparent_, then F14 will allow you to search the table directly
using any of the accepted key types without converting the key.
For example, using `H =
folly::transparent<folly::hasher<folly::StringPiece>>` and
`E = folly::transparent<std::equal_to<folly::StringPiece>>`, an
`F14FastSet<std::string, H, E>` will allow you to use a `StringPiece` key
without the need to construct a `std::string`.
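A sketch of that setup (header paths and the `names` identifier are my own guesses; adjust as needed):

```cpp
#include <functional>
#include <string>

#include <folly/Range.h>
#include <folly/container/F14Set.h>
#include <folly/hash/Hash.h>

using H = folly::transparent<folly::hasher<folly::StringPiece>>;
using E = folly::transparent<std::equal_to<folly::StringPiece>>;

int main() {
  folly::F14FastSet<std::string, H, E> names;
  names.insert("alice");

  // Heterogeneous lookup: the StringPiece key is used directly,
  // without materializing a temporary std::string.
  folly::StringPiece key{"alice"};
  bool found = names.count(key) != 0;
  return found ? 0 : 1;
}
```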
Heterogeneous lookup and erase work for any key types that can be passed
to operator() on the hasher and key_equal functors. For operations
such as operator[] that might insert, there is an additional constraint,
which is that the passed-in key must be explicitly convertible to the
table's key_type. F14 maps understand all possible forms that can be
used to construct the underlying `std::pair<key_type const, mapped_type>`,
so heterogeneous keys can be used even with insert and emplace.
## RANDOMIZED BEHAVIOR IN DEBUG BUILDS
@@ -91,13 +91,13 @@ the address sanitizer (ASAN) is in use. This randomness is designed to
expose bugs during testing that might otherwise only occur in production.
Bugs are exposed probabilistically; they may appear only some of the time.
In debug builds `F14ValueMap` and `F14NodeMap` randomize the relationship
between insertion and iteration order. This means that adding the same
k1 and k2 to two empty maps (or the same map twice after clearing it)
can produce the iteration order k1,k2 or k2,k1. Unit tests will
fail if they assume that the iteration order is the same between
identically constructed maps, even in the same process. This also
affects `folly::dynamic`'s object mode.
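As a sketch of the kind of test assumption this breaks (the container contents are arbitrary):

```cpp
#include <cassert>
#include <vector>

#include <folly/container/F14Map.h>

int main() {
  folly::F14ValueMap<int, int> a;
  folly::F14ValueMap<int, int> b;
  a[1] = 10;
  a[2] = 20;
  b[1] = 10;
  b[2] = 20;

  // Fragile: in debug builds the two maps may iterate in different
  // orders, so comparing iteration sequences can fail intermittently.
  std::vector<int> orderA;
  std::vector<int> orderB;
  for (auto const& kv : a) { orderA.push_back(kv.first); }
  for (auto const& kv : b) { orderB.push_back(kv.first); }
  // assert(orderA == orderB);  // not guaranteed, even within one process

  // Fine: container equality and per-key lookups do not depend on order.
  assert(a == b);
  return 0;
}
```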
When the address sanitizer is enabled all of the F14 variants perform some
randomized extra rehashes on insert, which exposes iterator and reference
@@ -149,7 +149,7 @@ hashing, or Cuckoo hashing), it is also an option to find a displaced key,
relocate it, and then recursively repair the new hole.
Tombstones must be eventually reclaimed to deal with workloads that
continuously insert and erase. `google::dense_hash_map` eventually triggers
a rehash in this case, for example. Unfortunately, to avoid quadratic
behavior this rehash may have to halve the max load factor of the table,
resulting in a huge decrease in memory efficiency.
@@ -257,7 +257,7 @@ our threshold of 12/14.
The standard requires that a hash table be iterable in O(size()) time
regardless of its load factor (rather than O(bucket_count())). That means
if you insert 1 million keys then erase all but 10, iteration should
be O(10). For `std::unordered_map` the cost of supporting this scenario
is an extra level of indirection in every read and every write, which is
part of why we can improve substantially on its performance. Low load
factor iteration occurs in practice when erasing keys during iteration
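One common source of low load factor iteration is the standard erase-while-iterating idiom, sketched here with hypothetical names:

```cpp
#include <string>

#include <folly/container/F14Map.h>

// Erase entries matching a predicate during a single pass; erase(iterator)
// returns the next valid position, keeping the loop well defined.
void eraseNegative(folly::F14FastMap<std::string, int>& m) {
  for (auto it = m.begin(); it != m.end();) {
    if (it->second < 0) {
      it = m.erase(it);
    } else {
      ++it;
    }
  }
}
```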
......@@ -277,11 +277,11 @@ The standard requires that clear() be O(size()), which has the practical
effect of prohibiting a change to bucket_count. F14 deallocates
all memory during clear() if it has space for more than 100 keys, to
avoid leaving a large table that will be expensive to iterate (see the
previous paragraph). `google::dense_hash_map` works around this tradeoff
by providing both clear() and clear_no_resize(); we could do something
similar.
As stated above, `F14NodeMap` and `F14NodeSet` are the only F14 variants
that provide reference stability. When running under ASAN the other
storage policies will probabilistically perform extra rehashes, which
makes it likely that reference stability problems will be found by the
@@ -289,13 +289,13 @@ address sanitizer.
An additional subtlety for hash tables that don't provide reference
stability is whether they rehash before evaluating the arguments passed
to `insert()`. F14 tables may rehash before evaluating the arguments
to a method that causes an insertion, so it's not safe to write
something like `map.insert(k2, map[k1])` with `F14FastMap`, `F14ValueMap`,
or `F14VectorMap`. This behavior matches `google::dense_hash_map` and the
excellent `absl::flat_hash_map`.
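A sketch of the hazard and one way around it (the function and variable names are illustrative; the unsafe call is left commented out):

```cpp
#include <string>
#include <utility>

#include <folly/container/F14Map.h>

void copyEntry(folly::F14FastMap<int, std::string>& m, int k1, int k2) {
  // Risky with the non-node variants: the insertion for k2 may trigger a
  // rehash before the reference produced by m[k1] is consumed, leaving it
  // dangling.
  // m.emplace(k2, m[k1]);

  // Safe: copy the mapped value into a local first, then insert.
  auto v = m.at(k1);
  m[k2] = std::move(v);
}
```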
`F14NodeMap` does not currently support the C++17 node API, but it could
be trivially added.
* Nathan Bronson -- <ngbronson@fb.com>