Commit 14010eb6 authored by Dave Watson, committed by Facebook Github Bot

Add more docs

Summary: Try to answer some recurring questions

Differential Revision: D15081359

fbshipit-source-id: 1bb0fde2194b9b8304d556f2a4b690ac4aa1e414
parent 8a4aeb80
@@ -24,11 +24,19 @@
namespace folly {
/**
 * Implementations of high-performance Concurrent Hashmaps that
 * support erase and update.
 *
 * Readers are always wait-free.
 * Writers are sharded, but take a lock.
 *
 * Multithreaded performance beats anything except the lock-free
 * atomic maps (AtomicUnorderedMap, AtomicHashMap), BUT only
 * if you can perfectly size the atomic maps, and you don't
 * need erase(). If you don't know the size in advance or
 * your workload frequently calls erase(), this is the
 * better choice.
 *
 * The interface is as close to std::unordered_map as possible, but there
 * are a handful of changes:
 *
@@ -53,22 +61,63 @@ namespace folly {
 * std::unordered_map which iterates over a linked list of elements.
 * If the table is sparse, this may be more expensive.
 *
 * * Rehash policy is a power of two, using the supplied factor.
 *
 * * Allocator must be stateless.
 *
 * Two implementations are provided:
 *
 * 1: ConcurrentHashMap, based on Java's ConcurrentHashMap.
 *    Very similar to std::unordered_map in performance.
 *
 * 2: ConcurrentHashMapSIMD, based on F14ValueMap. If the map is
 *    larger than the cache size, it has superior performance due to
 *    vectorized key lookup.
 *
 * USAGE FAQs
 *
 * Q: Is simultaneous iteration and erase() threadsafe?
 * Example:
*
* ConcurrentHashMap<int, int> map;
*
* Thread 1: auto it = map.begin();
* while (it != map.end()) {
* // Do something with it
* it++;
* }
*
* Thread 2: map.insert(2, 2); map.erase(2);
*
 * A: Yes, this is safe. However, the iterating thread is not
 * guaranteed to see (or not see) concurrent insertions and erasures.
* Inserts may cause a rehash, but the old table is still valid as
* long as any iterator pointing to it exists.
*
* Q: How do I update an existing object atomically?
*
* A: assign_if_equal is the recommended way - readers will see the
* old value until the new value is completely constructed and
* inserted.
*
* Q: Why does map.erase() not actually destroy elements?
*
* A: Hazard Pointers are used to improve the performance of
* concurrent access. They can be thought of as a simple Garbage
* Collector. To reduce the GC overhead, a GC pass is only run after
 * reaching a certain memory bound. erase() will remove the element
* from being accessed via the map, but actual destruction may happen
* later, after iterators that may point to it have been deleted.
*
* The only guarantee is that a GC pass will be run on map destruction
* - no elements will remain after map destruction.
*
* Q: Are pointers to values safe to access *without* holding an
* iterator?
*
* A: The SIMD version guarantees that references to elements are
* stable across rehashes, the non-SIMD version does *not*. Note that
* unless you hold an iterator, you need to ensure there are no
* concurrent deletes/updates to that key if you are accessing it via
* reference.
 */
template <
...