1. 28 Jul, 2021 6 commits
    • Reduce memory usage of CoreCachedSharedPtr · 65180b25
      Giuseppe Ottaviano authored
      Summary:
      We only need as many slots as the number of L1 caches.
      
      Also avoid allocating control blocks when the passed pointer has no managed object.
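      
      A minimal illustration of the second point, assuming `CoreCachedSharedPtr`'s `shared_ptr` constructor (the surrounding function exists only for the example):
      ```
      #include <folly/concurrency/CoreCachedSharedPtr.h>
      
      #include <memory>
      
      void example() {
        std::shared_ptr<int> empty;  // no managed object, hence no control block
        // Per this diff, wrapping an empty pointer should not allocate
        // per-slot control blocks either.
        folly::CoreCachedSharedPtr<int> cached(empty);
        (void)cached;
      }
      ```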
      
      Reviewed By: philippv, luciang
      
      Differential Revision: D29872059
      
      fbshipit-source-id: 8c221b0523494c44a5c6828bafd26eeb00e573c4
    • Workaround for opt-gcc compiler bug · 5fbc8492
      Andrew Smith authored
      Summary: This diff appears to work around a compiler bug in gcc10 with coroutines. GCC is not keeping alive a temporary passed to a coroutine function.
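      
      A rough sketch of the pattern involved and the workaround, with a hypothetical `consume()` coroutine (not the code touched by this diff):
      ```
      #include <string>
      #include <utility>
      
      #include <folly/experimental/coro/Task.h>
      
      folly::coro::Task<void> consume(std::string s) {
        // ... use s ...
        co_return;
      }
      
      folly::coro::Task<void> caller() {
        // Pattern that reportedly trips the gcc10 bug: the temporary argument is
        // not kept alive while the coroutine runs.
        //   co_await consume(std::string("temporary"));
        //
        // Workaround: materialize the argument as a named local first, so its
        // lifetime clearly spans the co_await.
        std::string arg("temporary");
        co_await consume(std::move(arg));
      }
      ```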
      
      Reviewed By: nmagerko
      
      Differential Revision: D29847926
      
      fbshipit-source-id: 69e0fc4cece8f5d0e26647581800ad81b1e44c74
    • Change Context Pool Stripes from 4 to 128 · cc0f64d2
      Felix Handte authored
      Summary:
      This diff increases the number of domains that locally cache a context from
      4 to 128. In effect, this stops sharing contexts between cores, and instead
      causes each core to have its own context. (If a machine has fewer than 128
      CPUs, the unused slots will never be initialized.)
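      
      A rough sketch of the striping idea (the names here, including `ContextPool`, are illustrative and not folly's internals):
      ```
      #include <array>
      #include <cstddef>
      
      #include <folly/concurrency/CacheLocality.h>
      
      struct ContextPool { /* mutex-protected cache of contexts */ };
      
      constexpr size_t kNumStripes = 128;
      std::array<ContextPool, kNumStripes> pools;
      
      ContextPool& localPool() {
        // Pick the stripe for the current core; with 128 stripes, most machines
        // end up with one pool per core, so cores stop contending on a shared mutex.
        size_t stripe = folly::AccessSpreader<>::current(kNumStripes);
        return pools[stripe];
      }
      ```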
      
      This diff is an experimental tuning. We don't expect it to have large positive
      or negative effects, but we do currently see a non-zero amount of CPU spent
      contending on the underlying pool's mutex. So this is an attempt to address
      that. We'll look at CPU after it has rolled out and assess the results.
      
      Reviewed By: terrelln
      
      Differential Revision: D29944813
      
      fbshipit-source-id: 7b10cb9dd2c49c02fad1f680a1bea02747990796
    • Add CancellableAsyncScope overloads for makeUnorderedAsyncGenerator · 854cb5ce
      Shai Szulanski authored
      Reviewed By: capickett
      
      Differential Revision: D29942172
      
      fbshipit-source-id: 98649f6bf911d2a7a2f0ac687fa51fe76e0b777c
    • Fix typo in Hazptr (#1611) · dc491606
      Kian Ostad authored
      Summary: Pull Request resolved: https://github.com/facebook/folly/pull/1611
      
      Reviewed By: magedm
      
      Differential Revision: D29938492
      
      Pulled By: yfeldblum
      
      fbshipit-source-id: d0fab63bdb59e714baaee2be15139f04958672d5
    • SharedMutex: Remove single-use intermediate constants · d33f356c
      Maged Michael authored
      Summary: Remove single-use intermediate constants from SharedMutexImpl
      
      Reviewed By: yfeldblum, ot
      
      Differential Revision: D29934333
      
      fbshipit-source-id: 5e4157fa1e6576f0fa9f2a2a10a32f0e1e24d7f5
  2. 27 Jul, 2021 3 commits
    • spell small-vector uses of the trait as is_trivially_copyable_v · c3c6b788
      Yedidya Feldblum authored
      Summary: The `folly::` prefix is not necessary from within folly's namespace, and the `_v` suffix is available on the folly trait even in C++14 builds, where C++17's `std::is_trivially_copyable_v` is unavailable.
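      
      For illustration, inside `namespace folly` the unqualified `_v` spelling resolves to the folly trait (the function here is hypothetical):
      ```
      #include <folly/Traits.h>
      
      namespace folly {
      
      template <typename T>
      constexpr bool canCopyInlineTrivially() {
        // Unqualified: picks up folly::is_trivially_copyable_v, which is available
        // even in C++14 builds where std::is_trivially_copyable_v is not.
        return is_trivially_copyable_v<T>;
      }
      
      } // namespace folly
      ```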
      
      Reviewed By: ot
      
      Differential Revision: D29925029
      
      fbshipit-source-id: 570d17c57ca68bea1c7c8b80ce59d8560b1aba2b
    • Allow using CancellableAsyncScope with external cancellation token · 2f5a71dd
      Shai Szulanski authored
      Summary: Right now the internal cancellation signal will be silently ignored, which hurts usability. Add a mechanism for merging with an external source, plus a comment describing the pitfall.
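      
      A minimal sketch of the merging idea using `folly::CancellationToken::merge` (the surrounding function is illustrative, not the exact API added here):
      ```
      #include <folly/CancellationToken.h>
      
      folly::CancellationToken mergedToken(
          const folly::CancellationToken& external,
          const folly::CancellationToken& scopeInternal) {
        // Cancellation is requested when either source fires, so the scope's own
        // signal is no longer silently ignored.
        return folly::CancellationToken::merge(external, scopeInternal);
      }
      ```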
      
      Reviewed By: capickett
      
      Differential Revision: D29935016
      
      fbshipit-source-id: 9311930ff9fbfc0470fdcfb5d425f36e7f0aff06
    • SharedMutex: Change SharedMutexPolicyDefault and change default spin and yield counts · c17ed205
      Maged Michael authored
      Summary:
      Change SharedMutexPolicyDefault to include max_spin_count and max_soft_yield_count instead of bool block_immediately.
      
      Change the default spin and yield counts from 1000 and 1000 to 2 and 1, respectively.
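      
      A sketch of the policy shape described above (illustrative, not folly's exact definition):
      ```
      #include <cstdint>
      
      // Hypothetical shape: spin/yield budgets replace the old `bool block_immediately`.
      struct SharedMutexPolicyDefault {
        static constexpr uint32_t max_spin_count = 2;        // previously 1000
        static constexpr uint32_t max_soft_yield_count = 1;  // previously 1000
      };
      ```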
      
      Reviewed By: ot
      
      Differential Revision: D29559594
      
      fbshipit-source-id: 93a3bdf43c20f456031265daf7b76ab40e3dcbdf
  3. 26 Jul, 2021 3 commits
  4. 23 Jul, 2021 6 commits
    • Do not leak GFlags.h in widely included headers · ff841baa
      Giuseppe Ottaviano authored
      Summary: It defines several generically named macros, so it's better to avoid including it everywhere.
      
      Reviewed By: aary
      
      Differential Revision: D29870524
      
      fbshipit-source-id: b8703a737a6dc53e00c13daebc445855bfbadd1f
    • Fix SSL exception slicing · 5e2ab64f
      Alan Frindell authored
      Summary:
      SSLException derives from AsyncSocketException, so we need to construct the exception_wrapper differently to prevent slicing it.
      
      I wish there were a more future-proof way to do this.
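      
      A minimal sketch of the slicing hazard with a generic derived/base exception pair (not the actual folly types or the fix itself):
      ```
      #include <stdexcept>
      
      #include <folly/ExceptionWrapper.h>
      
      struct BaseError : std::runtime_error {
        using std::runtime_error::runtime_error;
      };
      struct DerivedError : BaseError {
        using BaseError::BaseError;
      };
      
      void example() {
        DerivedError ex("boom");
        // exception_wrapper copies the static type it is given, so passing the
        // object through a base reference slices off the derived part.
        auto sliced = folly::exception_wrapper(static_cast<BaseError&>(ex));
        // Constructing from the most-derived type preserves it.
        auto intact = folly::exception_wrapper(ex);
        (void)sliced;
        (void)intact;
      }
      ```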
      
      Reviewed By: yangchi
      
      Differential Revision: D29836520
      
      fbshipit-source-id: df4222d94952c66b4c86f12861b3792babdce3c6
    • Make co_awaitTry(AsyncGenerator) return Try<NextResult<T>> · 832f135a
      Shai Szulanski authored
      Summary:
      There are two problems with the current approach of returning Try<T>:
      - It is impossible to write generic algorithms like coro::timeout that convert any awaitable into a Task of its await result without throwing exceptions, because there's no way to reconstruct the expected return type. More generally, we want the property that the await_try_result_t::element_type matches the await_result_t, so we can make drop-in replacements by wrapping in functions like timeout.
      - There's no way to both avoid moving yielded values and avoid throwing exceptions, because Try doesn't support references (and an earlier diff adding this support was rejected), which means the two performance optimizations available to users of AsyncGenerator are mutually exclusive.
      
      We fix this and restore the aforementioned invariant by wrapping the existing result type. This is a marginal inefficiency, so if we notice regressions as a result we can specialize these Try instantiations to consolidate the storage. For now we do not expect this to matter.
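      
      A hedged usage sketch of the new shape (the member names on the result are assumed from typical AsyncGenerator usage, not taken from this diff):
      ```
      #include <folly/experimental/coro/AsyncGenerator.h>
      #include <folly/experimental/coro/Task.h>
      
      folly::coro::AsyncGenerator<int> numbers() {
        co_yield 1;
        co_yield 2;
      }
      
      folly::coro::Task<void> consumeOne() {
        auto gen = numbers();
        // Per this diff, the Try wraps the generator's "next" result rather than
        // the yielded value, so the end-of-stream state survives the wrapping.
        auto result = co_await folly::coro::co_awaitTry(gen.next());
        if (result.hasException()) {
          // handle the error without unwinding through exceptions
        } else if (*result) {
          int value = **result;  // a value was produced
          (void)value;
        } else {
          // end of stream
        }
      }
      ```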
      
      Reviewed By: andriigrynenko
      
      Differential Revision: D29680441
      
      fbshipit-source-id: 4ef74f4645d990b623bb95a297718fb576a9b977
    • RequestContext::StaticContextAccessor · 7fc541e8
      Pranjal Raihan authored
      Summary: `RequestContext::StaticContextAccessor` acts as a guard, preventing all threads with a `StaticContext` from being destroyed (or created).
      
      Reviewed By: dtolnay
      
      Differential Revision: D29684337
      
      fbshipit-source-id: 2b785b9293dd0b9c190512363afddaff50ec1f01
    • Don't use typeid without RTTI in UniqueInstance · 44683993
      Pranjal Raihan authored
      Summary:
      The class depends on RTTI: it's a sanity check that crashes if two instances of a singleton are created, so doing nothing in `-fno-rtti` builds is fine.
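      
      A minimal sketch of guarding a `typeid`-based sanity check with the standard `__cpp_rtti` feature-test macro (the function here is hypothetical, not UniqueInstance's code):
      ```
      #include <typeinfo>
      
      template <typename A, typename B>
      bool sameDynamicType(const A& a, const B& b) {
      #if defined(__cpp_rtti)
        return typeid(a) == typeid(b);  // real check when RTTI is available
      #else
        (void)a;
        (void)b;
        return true;  // -fno-rtti: skip the best-effort sanity check
      #endif
      }
      ```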
      
      Redo of D29630207 (https://github.com/facebook/folly/commit/160eb4d284eb67cc2641b6718c964dab8fc6486b)
      
      Reviewed By: dtolnay
      
      Differential Revision: D29684338
      
      fbshipit-source-id: 38355df5297681329f227fd10570a816f4672b9b
    • makeUnorderedAsyncGeneratorFromAwaitableRange -> makeUnorderedAsyncGenerator · 3275d892
      Shai Szulanski authored
      Summary: We use the `From...` suffix to distinguish collect-range (type fixed, count varies) algorithms from collect-tuple (types vary, count fixed) algorithms. But in this case there is no sensible translation from collect-tuple to an async generator, so the from-range suffix is not necessary.
      
      Reviewed By: vitaut
      
      Differential Revision: D29877021
      
      fbshipit-source-id: 69dfa764fca880bd3770a4d57ff0d60fe500a206
  5. 22 Jul, 2021 2 commits
    • Use CoreCachedSharedPtr in Singleton · 937fc980
      Giuseppe Ottaviano authored
      Summary: `CoreCachedSharedPtr` is almost as fast as `ReadMostlySharedPtr`, so we can use it to have a better default that does not have pathological behavior under heavy contention. `try_get_fast()` is still useful if we need to squeeze out the last cycle.
      
      Reviewed By: philippv, luciang
      
      Differential Revision: D29812053
      
      fbshipit-source-id: 49e9e53444f8099dbfe13e36c3c07c1b57bb89fb
    • thrift: varint: BMI2 (pdep) based varint encoding: branchless 2-5x faster than loop unrolled · 4baba282
      Lucian Grijincu authored
      Summary:
      BMI2 (`pdep`) varint encoding that's mostly branchless. It's 2-5x faster than the current loop-unrolled version.
      
      Being mostly branchless, there's less variability in micro-benchmark runtime compared to the loop-unrolled version:
      - the loop-unrolled versions are slowest when encoding random numbers across the entire 64-bit range (some likely large), where branch prediction fails most often.
      
      Kept the fast path for values <= 127 (encoded in 1 byte), which are likely to be frequent. I couldn't find a fully branchless version that performed better anyway.
      
      TLDR:
      - `u8`: unroll the two possible values (1B and 2B encoding). Faster in micro-benchmarks than branchless versions I tried (needed more instructions to produce the same value without branches).
      - `u16` & `u32`:
      -- u16 encodes in up to 3B, u32 in up to 5B.
      -- Use `pdep` to encode into a u64 (8 bytes). Write 8 bytes to `QueueAppender`, but keep track of only the bytes that had to be written. This is faster than appending a buffer of bytes using &u64 and size (see the sketch after this list).
      -- u16 could be written by encoding using `_pdep_u32` (3 bytes max fit in a u32) and using smaller 16B lookup tables. In micro-benchmarks that's not faster than reusing the same code path that encodes u32 via `_pdep_u64`. In prod it will perform better due to sharing the same lookup tables with the u32 and u64 versions (less d-cache pressure).
      - `u64`: needs up to 10B. Use `pdep` to encode the first 8B and unconditionally write the last 2B too (but keep track of the `QueueAppender` size properly).
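      
      A rough, self-contained sketch of the `pdep` spreading trick for the u32 case. It mirrors the description above rather than the actual Thrift code, and assumes a little-endian target with BMI2 and at least 8 bytes of slack in the output buffer:
      ```
      #include <cstddef>
      #include <cstdint>
      #include <cstring>
      
      #include <immintrin.h>  // _pdep_u64 (BMI2)
      
      inline size_t encodeVarint32Pdep(uint32_t value, uint8_t* out) {
        // Deposit the 7-bit groups of `value` into the low 7 bits of 5 bytes.
        uint64_t spread = _pdep_u64(value, 0x0000007f7f7f7f7fULL);
        // Number of encoded bytes: ceil(significant_bits / 7), at least 1.
        size_t bits = 64 - static_cast<size_t>(__builtin_clzll(value | 1));
        size_t n = (bits + 6) / 7;
        // Set the continuation (0x80) bit on every byte except the last one.
        uint64_t cont = n > 1 ? (0x8080808080808080ULL >> (64 - 8 * (n - 1))) : 0;
        uint64_t encoded = spread | cont;
        // Unconditionally write 8 bytes, but report only the n that matter.
        std::memcpy(out, &encoded, sizeof(encoded));
        return n;
      }
      ```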
      
      Reviewed By: vitaut
      
      Differential Revision: D29250074
      
      fbshipit-source-id: 1f6a266f45248fcbea30a62ed347564589cb3348
  6. 21 Jul, 2021 6 commits
    • Factor ticket key manager into handler interface · dd7d175a
      Samuel Miller authored
      Summary:
      I've created an abstract class called `OpenSSLTicketHandler` to which the
      server's SSL context dispatches ticket crypto preparation. This means that the
      configuration of how tickets are encrypted can be changed by using a different
      implementation of the handler.
      
      This abstraction is sort of broken by all the set and get APIs on the context
      that modify ticket secrets, rather than properly abstracting this detail into
      a concrete impl of the handler (e.g. a `TLSTicketKeyManager` can manage secrets
      from a file and not require users to pass all these secrets down from the
      acceptors). For now though we rely on (checked!) dynamic casts to get a
      `TLSTicketKeyManager` from which we can freely modify the secrets.
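      
      A hypothetical sketch of the dispatch shape, mirroring OpenSSL's ticket-key callback signature; the method name and parameters are illustrative, not the actual folly/wangle interface:
      ```
      #include <openssl/evp.h>
      #include <openssl/hmac.h>
      #include <openssl/ssl.h>
      
      class OpenSSLTicketHandler {
       public:
        virtual ~OpenSSLTicketHandler() = default;
      
        // Called by the server's SSL context to prepare ticket crypto material,
        // for both encryption (enc == 1) and decryption (enc == 0).
        virtual int ticketCallback(
            SSL* ssl,
            unsigned char* keyName,
            unsigned char* iv,
            EVP_CIPHER_CTX* cipherCtx,
            HMAC_CTX* hmacCtx,
            int enc) = 0;
      };
      ```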
      
      Reviewed By: mingtaoy
      
      Differential Revision: D24686664
      
      fbshipit-source-id: fb30941982fb3114e2aba531372a9d35ccc0ee48
    • Allow JemallocHugePageAllocator to Grow · d4241c98
      Felix Handte authored
      Summary:
      This diff switches the JemallocHugePageAllocator to permit initially reserving
      a (very) large contiguous region of virtual address space, which can then be
      progressively backed by huge pages on demand.
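      
      A rough sketch of the reserve-then-back-on-demand idea on Linux (not the allocator's actual code):
      ```
      #include <cstddef>
      
      #include <sys/mman.h>
      
      // Reserve a large span of address space without committing physical memory.
      void* reserveRegion(size_t bytes) {
        void* p = ::mmap(nullptr, bytes, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        return p == MAP_FAILED ? nullptr : p;
      }
      
      // Later, commit a chunk of the reservation and ask for huge pages.
      bool backWithHugePages(void* addr, size_t bytes) {
        if (::mprotect(addr, bytes, PROT_READ | PROT_WRITE) != 0) {
          return false;
        }
        return ::madvise(addr, bytes, MADV_HUGEPAGE) == 0;
      }
      ```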
      
      This is intended to increase the flexibility of the allocator. In particular,
      this should enable automatic initialization of the JHA without having to
      carefully and correctly size the arena. (Assumption: `mmap()`'ing a big chunk
      of address space is cheap, right? Although even if it is, this is probably something
      we should only do for long-lived processes. Identifying "long-lived processes"
      and triggering initialization in them is itself an open topic to be addressed
      in a subsequent diff.)
      
      The concern here is that this potentially moves `madvise(..., MADV_HUGEPAGE)`
      calls later into the process lifetime, when memory pressure/fragmentation may
      be greater, which might induce stalls in the process. This can be mitigated
      using the existing pattern of explicitly calling `::init()` in your process's
      `main()`, which will commit the requested pages.
      
      Context: I intend to use this allocator to back allocations in the Zstd
      Compression Context Singletons.
      (`folly/compression/CompressionContextPoolSingletons.cpp`)
      
      Feedback on the approach taken here is greatly appreciated!
      
      Reviewed By: davidtgoldblatt
      
      Differential Revision: D29502147
      
      fbshipit-source-id: 814e1ba3544cf5b5cfb67a08abd18f940255362f
    • Add CoreCachedWeakPtr::lock() method, improve benchmarks · 9acfd80d
      Giuseppe Ottaviano authored
      Summary:
      Currently `CoreCachedWeakPtr` only exposes `get()`, which returns by value, so to lock the pointer we need to do two refcount operations, one on the weak count and one on the shared count. This is expensive.
      
      We could return by `const&`, but I don't want to expose the internal state, as we may be able to optimize the footprint by relying on implementation details of `std::shared/weak_ptr` in the future.
      
      Instead, expose a natural `lock()` method.
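      
      A usage sketch of the difference (the `Widget` type is hypothetical, and the constructors are assumed from the class names above):
      ```
      #include <memory>
      
      #include <folly/concurrency/CoreCachedSharedPtr.h>
      
      struct Widget {};
      
      void example() {
        folly::CoreCachedSharedPtr<Widget> cached(std::make_shared<Widget>());
        folly::CoreCachedWeakPtr<Widget> weak(cached);
      
        // Before: copy a weak_ptr out, then lock it -- two refcount operations.
        std::shared_ptr<Widget> a = weak.get().lock();
        // After: lock() touches only the shared count.
        std::shared_ptr<Widget> b = weak.lock();
        (void)a;
        (void)b;
      }
      ```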
      
      Also, improve the benchmarks:
      - Add comparison with `ReadMostlySharedPtr`
      - Ensure that all threads are busy for the same time, so that wall time * `numThreads` is a good approximation of overall CPU time.
      
      Reviewed By: philippv
      
      Differential Revision: D29762995
      
      fbshipit-source-id: 851a82111e2726425e16d65729ec3fdd21981738
    • Remove unnecessary string copy in JSON serialization · 6696e55c
      Michael Stella authored
      Summary:
      `asString()` returns a string by value, which means a copy of the string must be made.
      
      We don't actually need to do this at all:
      - We know that the dynamic contains a string, thanks to the switch statement
      - The function being called with the result, `escapeString`, only wants a StringPiece anyway.
      
      So this copy is a waste, and it uses significant CPU in my application.
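      
      An illustrative before/after (not necessarily the exact call used in the diff):
      ```
      #include <string>
      
      #include <folly/Range.h>
      #include <folly/dynamic.h>
      
      void example() {
        folly::dynamic d = "a reasonably long string value";
        // Copies the string out of the dynamic:
        std::string copy = d.asString();
        // No copy: borrow a reference to the dynamic's own storage...
        const std::string& ref = d.getString();
        // ...which converts to the StringPiece that escapeString() wants.
        folly::StringPiece view = ref;
        (void)copy;
        (void)view;
      }
      ```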
      
      Reviewed By: ispeters
      
      Differential Revision: D29812197
      
      fbshipit-source-id: 60df668f7501f78f4282717d6896cd891950b6f5
    • CancellableAsyncScope pass through the correct returnAddress · cdb7a478
      Dan Oprescu authored
      Reviewed By: capickett
      
      Differential Revision: D29764496
      
      fbshipit-source-id: 855bcfc749358d3754b604e417cf5512a91ab6df
    • Use small_vector::copyInlineTrivial only if the storage is small · f623e994
      Giuseppe Ottaviano authored
      Reviewed By: Liang-Dong
      
      Differential Revision: D29784019
      
      fbshipit-source-id: f7516603bc73b33ad7c57da1103451f75f8566b4
  7. 20 Jul, 2021 6 commits
  8. 16 Jul, 2021 4 commits
    • Fix stub of sockets for EMSCRIPTEN and XROS · b8fdbc94
      Mihnea Olteanu authored
      Summary:
      The current implementation of function stubs in `SocketFileDescriptorMap.cpp` generates the following build errors:
      ```
      stderr: xplat/folly/net/detail/SocketFileDescriptorMap.cpp:171:3: error: 'socketToFd' has a non-throwing exception specification but can still throw [-Werror,-Wexceptions]
        throw std::logic_error("Not implemented!");
        ^
      xplat/folly/net/detail/SocketFileDescriptorMap.cpp:170:30: note: function declared non-throwing here
      int SocketFileDescriptorMap::socketToFd(void* sock) noexcept {
      ```
      because the methods are stubbed out to throw an exception even though they are marked as `noexcept`.
      
      To fix the warning, the stubbed implementation is changed to call `std::terminate()` instead of throwing an exception. According to the language specification (https://en.cppreference.com/w/cpp/language/noexcept_spec), this should not result in any change in run-time behavior, since throwing an exception from a method marked `noexcept` is effectively a call to `std::terminate()`.
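      
      A sketch of the fixed stub pattern (shown as a free function for brevity):
      ```
      #include <exception>
      
      int socketToFdStub(void* /* sock */) noexcept {
        // Throwing here would escape a noexcept function (tripping -Wexceptions);
        // terminating directly has the same runtime effect without the warning.
        std::terminate();
      }
      ```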
      
      Differential Revision: D29687674
      
      fbshipit-source-id: 77405d8a31e45c8836e8746c9b25e12ef06335c4
    • Add API to set cmsg for write · 653703a3
      Xintong Hu authored
      Summary: Allow users to set/append a list of cmsgs to be sent with each write.
      
      Reviewed By: bschlinker
      
      Differential Revision: D29313594
      
      fbshipit-source-id: 8f78c59ecfe56ddb2c8c016d6105a676cd501c18
    • Add AsyncSSLSocket::setSupportedProtocols · ca7ce442
      Alex Zhu authored
      Summary: This diff adds AsyncSSLSocket::setSupportedProtocols, analogous to SSL_set_alpn_protos, which allows connection-specific ALPNs to be set. Prior to this diff, there was no easy way to change the set of ALPNs to use other than creating a separate SSLContext or manually using the low-level OpenSSL interface.
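      
      A hedged usage sketch; the parameter type is assumed here (a list of ALPN protocol names), not taken from the diff:
      ```
      #include <string>
      #include <vector>
      
      #include <folly/io/async/AsyncSSLSocket.h>
      
      void configureAlpn(folly::AsyncSSLSocket& sock) {
        // Per-connection ALPN list, without touching the shared SSLContext.
        std::vector<std::string> protocols = {"h2", "http/1.1"};
        sock.setSupportedProtocols(protocols);
      }
      ```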
      
      Reviewed By: mingtaoy
      
      Differential Revision: D29247716
      
      fbshipit-source-id: 6f378b4fc75f404e06fe0131ab520b7e4b8f33b6
    • Add support for Subprocess to call sched_setaffinity · 2f7fdc20
      Dan Melnic authored
      Summary: Add support for Subprocess to call sched_setaffinity
      
      Reviewed By: yfeldblum
      
      Differential Revision: D29722725
      
      fbshipit-source-id: b0d4577bf3caaeb65137c8168bd27e1f402969da
  9. 15 Jul, 2021 3 commits
    • Reorder definitions in AsyncGenerator.h · d26d241b
      Shai Szulanski authored
      Summary: Prepares for next diff
      
      Reviewed By: Mizuchi
      
      Differential Revision: D29665910
      
      fbshipit-source-id: 1026b0836ca803d566086ab9ed8e13e36d607c5f
    • fix semantics of QMS::Iterator::skipTo · bdf37414
      Philip Pronin authored
      Summary: Passing a large `key` doesn't correctly advance the position to the end.
      
      Reviewed By: ot
      
      Differential Revision: D29712973
      
      fbshipit-source-id: 7da7c49250753c12f3703ccf49107e56bf841131
    • Have collect() handle the case of a not-ready future · ea91c9bc
      Aaryaman Sagar authored
      Summary:
      If one of the input futures is running via a folly::Executor::weakRef()
      executor, then there is a chance that it may never complete with a value or an
      exception. In this case, collect() would crash because it assumes that the
      folly::Try instances for all input futures have either a value or an exception.
      
      Fix that case by injecting a BrokenPromise exception when a future ends up with
      neither a value nor an exception.
      
      Reviewed By: yfeldblum
      
      Differential Revision: D26989091
      
      fbshipit-source-id: b810fe4d5d071233da1f453b3759991e057d78c6
  10. 14 Jul, 2021 1 commit
    • Remove unused UniqueInstance::PtrRange · 74f3c043
      Pranjal Raihan authored
      Summary: `UniqueInstance.cpp` has its own `PtrRange` which it exclusively uses.
      
      Reviewed By: yfeldblum, Mizuchi
      
      Differential Revision: D29685352
      
      fbshipit-source-id: 32658b3ee6fc1830c2c2f27693baefa16026f13e