Commit 7c05f8af authored by Giuseppe Ottaviano, committed by Facebook GitHub Bot

Fix a couple comments

Reviewed By: nbronson

Differential Revision: D3905865

fbshipit-source-id: 2743af4ae1b34adb0f8e611e672f9b6068430ec9
parent c1ad77a5
@@ -16,8 +16,8 @@
 /**
  * AtomicHashArray is the building block for AtomicHashMap. It provides the
- * core lock-free functionality, but is limitted by the fact that it cannot
- * grow past it's initialization size and is a little more awkward (no public
+ * core lock-free functionality, but is limited by the fact that it cannot
+ * grow past its initialization size and is a little more awkward (no public
  * constructor, for example). If you're confident that you won't run out of
  * space, don't mind the awkardness, and really need bare-metal performance,
  * feel free to use AHA directly.
@@ -17,7 +17,7 @@
 /*
  * AtomicHashMap --
  *
- * A high performance concurrent hash map with int32 or int64 keys. Supports
+ * A high-performance concurrent hash map with int32 or int64 keys. Supports
  * insert, find(key), findAt(index), erase(key), size, and more. Memory cannot
  * be freed or reclaimed by erase. Can grow to a maximum of about 18 times the
  * initial capacity, but performance degrades linearly with growth. Can also be
@@ -25,7 +25,7 @@
  * internal storage (retrieved with iterator::getIndex()).
  *
  * Advantages:
- *  - High performance (~2-4x tbb::concurrent_hash_map in heavily
+ *  - High-performance (~2-4x tbb::concurrent_hash_map in heavily
  *    multi-threaded environments).
  *  - Efficient memory usage if initial capacity is not over estimated
  *    (especially for small keys and values).
@@ -56,7 +56,7 @@
  * faster because of reduced data indirection.
  *
  * AHMap is a wrapper around AHArray sub-maps that allows growth and provides
- * an interface closer to the stl UnorderedAssociativeContainer concept. These
+ * an interface closer to the STL UnorderedAssociativeContainer concept. These
  * sub-maps are allocated on the fly and are processed in series, so the more
  * there are (from growing past initial capacity), the worse the performance.
  *
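The growth scheme described in this comment — fixed-capacity sub-maps allocated on the fly and probed in series — can be illustrated with a minimal self-contained toy model. This is not folly's implementation (folly's sub-maps are lock-free arrays, and the names here are invented for illustration); it only shows why lookups get slower as more sub-maps accumulate past the initial capacity.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

// Toy model of a chain of fixed-capacity sub-maps probed in series.
// Each sub-map that is allocated past the initial one adds an extra
// probe to every lookup that misses the earlier sub-maps.
class ChainedMap {
 public:
  explicit ChainedMap(std::size_t subMapCapacity)
      : capacity_(subMapCapacity), subMaps_(1) {}

  void insert(std::int64_t key, std::int64_t value) {
    // New keys go into the newest sub-map; allocate another when it fills.
    // (A real implementation would also have to handle duplicate keys
    // landing in older sub-maps; this toy ignores that.)
    if (subMaps_.back().size() >= capacity_) {
      subMaps_.emplace_back();
    }
    subMaps_.back().emplace(key, value);
  }

  std::optional<std::int64_t> find(std::int64_t key) const {
    // Sub-maps are searched in series, oldest first, so the more there
    // are, the worse the lookup performance -- as the comment above says.
    for (const auto& m : subMaps_) {
      auto it = m.find(key);
      if (it != m.end()) {
        return it->second;
      }
    }
    return std::nullopt;
  }

  std::size_t numSubMaps() const { return subMaps_.size(); }

 private:
  std::size_t capacity_;
  std::vector<std::unordered_map<std::int64_t, std::int64_t>> subMaps_;
};
```

With a sub-map capacity of 2, inserting five keys forces two extra sub-maps into existence, so a miss now costs three probes instead of one — the linear degradation the comment warns about.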
@@ -204,11 +204,11 @@
 //
 // If you have observed by profiling that your SharedMutex-s are getting
 // cache misses on deferredReaders[] due to another SharedMutex user, then
-// you can use the tag type plus the RWDEFERREDLOCK_DECLARE_STATIC_STORAGE
-// macro to create your own instantiation of the type. The contention
-// threshold (see kNumSharedToStartDeferring) should make this unnecessary
-// in all but the most extreme cases. Make sure to check that the
-// increased icache and dcache footprint of the tagged result is worth it.
+// you can use the tag type to create your own instantiation of the type.
+// The contention threshold (see kNumSharedToStartDeferring) should make
+// this unnecessary in all but the most extreme cases. Make sure to check
+// that the increased icache and dcache footprint of the tagged result is
+// worth it.
 //
 // SharedMutex's use of thread local storage is as an optimization, so
 // for the case where thread local storage is not supported, define it
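The tag-type mechanism this comment refers to can be sketched in isolation. The following is a hedged, minimal model — not SharedMutex's actual internals, and the names `DeferredSlots`, `SubsystemATag`, and `SubsystemBTag` are invented for illustration. It shows the underlying C++ property the tag relies on: a class template's static data members are distinct per instantiation, so giving your code a private tag type gives it private static storage that other users cannot contend on.

```cpp
#include <cstdint>

// Minimal model of tag-parameterized static storage (not folly's actual
// SharedMutex implementation): each distinct Tag type gets its own copy
// of the static array, analogous to a private deferredReaders[] slab.
template <typename Tag>
struct DeferredSlots {
  static std::uintptr_t slots[64];
};

template <typename Tag>
std::uintptr_t DeferredSlots<Tag>::slots[64] = {};

// Two unrelated subsystems pick private, empty tag structs. Because the
// instantiations are distinct types, their slot arrays live at different
// addresses, so cache misses on one cannot be caused by the other.
struct SubsystemATag {};
struct SubsystemBTag {};
```

As the comment notes, each extra instantiation duplicates the code and data, so the added icache/dcache footprint should be weighed against the contention it avoids.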