27 Jun, 2021 (40 commits)
    • Make cycle detection FATAL instead of throw and disable it in opt mode · 29bc878d
      Andrii Grynenko authored
      Summary: Cycle detection can be very expensive, so it's better to disable it in opt mode. Because of that we need to make sure we catch such cycles in dbg builds, so we have to replace exceptions with LOG(FATAL).
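      A minimal sketch of the pattern described, assuming hypothetical names (`detectCycle`, `hasCycle`); `folly::kIsDebug` is one way to compile the expensive check out of opt builds:
      
      ```cpp
      #include <folly/Portability.h>
      #include <glog/logging.h>
      
      void detectCycle(bool hasCycle) {
        if (folly::kIsDebug) { // expensive detection runs only in dbg builds
          if (hasCycle) {
            // crash with a stack trace instead of throwing
            LOG(FATAL) << "dependency cycle detected";
          }
        }
      }
      ```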
      
      Reviewed By: joshkirstein
      
      Differential Revision: D29367695
      
      fbshipit-source-id: 9c2038eb5b42f98f4ab997f963b6a131b8d26cf9
    • update the Core fake layout for testing · 45666e3d
      Yedidya Feldblum authored
      Summary: There is a golden image of the `Core` layout so that tests will catch accidental increases of the `Core` size. Updated it to the latest `Core` layout.
      
      Differential Revision: D29251632
      
      fbshipit-source-id: d16086390548848f4302678e0b86d9841be1140b
    • Use is_pod, add <system_error> include for TcpInfo · 40d61f3f
      Brandon Schlinker authored
      Summary:
      - `std::is_pod_v` is only available in C++17; shifting to `std::is_pod`
      - `std::errc` needs the `<system_error>` header; the missing header wasn't causing an error on most platforms because it is included indirectly
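      A small illustration of both fixes (the struct name is hypothetical):
      
      ```cpp
      #include <system_error> // for std::errc; don't rely on indirect inclusion
      #include <type_traits>
      
      struct SampleTcpInfoStruct {
        int rtt;
      };
      
      // C++14-compatible trait: std::is_pod<T>::value instead of std::is_pod_v<T>
      static_assert(std::is_pod<SampleTcpInfoStruct>::value, "must be POD");
      static_assert(std::errc::timed_out != std::errc{}, "std::errc is usable");
      ```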
      
      Differential Revision: D29355681
      
      fbshipit-source-id: e035c3f4ffac9d2c6f0d8ec511f7e0ea8544ba80
    • Remove dependency on FixedString.h from TcpInfo · c634f9c3
      Brandon Schlinker authored
      Summary: Remove dependency on `folly/FixedString.h`
      
      Differential Revision: D29317987
      
      fbshipit-source-id: dbce91f117776a1dcd966230d9eed616b2a95613
    • Add collectAnyNoDiscard() · 5dff2932
      Rob Lyerly authored
      Summary:
      D28945040 added `collectAny()` which early-returns when any of the SemiAwaitables produces a value (or exception).  There's the potential for discarded results, however - multiple SemiAwaitables can produce results depending on whether they're at a cancellation point and when cancellation is signaled.
      
      This diff adds a variant `collectAnyNoDiscard()` that signals cancellation when any SemiAwaitable finishes and returns *all* results that completed.  It produces a tuple of optional values from the SemiAwaitables (or `std::nullopt` if it was canceled).
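      A hedged usage sketch; the exact return type is an assumption based on the description above (a tuple of optionals, `std::nullopt` for awaitables that were canceled):
      
      ```cpp
      #include <folly/experimental/coro/Collect.h>
      #include <folly/experimental/coro/Task.h>
      
      folly::coro::Task<void> race(
          folly::coro::Task<int> a, folly::coro::Task<int> b) {
        auto [ra, rb] = co_await folly::coro::collectAnyNoDiscard(
            std::move(a), std::move(b));
        if (ra) { /* a finished with *ra */ }
        if (rb) { /* b may have finished too, despite being canceled */ }
      }
      ```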
      
      Reviewed By: iahs
      
      Differential Revision: D29240725
      
      fbshipit-source-id: 3e664339e8692cbb9114138a96345cf9f9d5cb0b
    • move watchman includes into their own directory · 28858c2e
      Chad Austin authored
      Summary:
      Bring Watchman closer to the EdenFS file name convention by moving
      source files and includes into watchman/.
      
      Reviewed By: fanzeyi
      
      Differential Revision: D29242789
      
      fbshipit-source-id: 6e29a4a50e7202dbf6b603ccc7e4c8184afeb115
    • Move test utilities into TcpInfoTestUtil · 384b72ff
      Brandon Schlinker authored
      Summary: Move some of the test utilities into `TcpInfoTestUtil.h` to enable use by other components.
      
      Differential Revision: D29307490
      
      fbshipit-source-id: 978947ff57ed02e438addf6190a4ea9955596333
    • Fix concurrency issues in ConcurrentSkipList. · 6f4811ef
      Robin Cheng authored
      Summary:
      ## An Overview of ConcurrentSkipList Synchronization
      folly::ConcurrentSkipList, at a high level, is a normal skip list, except:
      * Accesses to the nodes' "next" pointers are atomic. (The implementation used release/consume, which this diff changes to release/acquire. It's not clear that the original use of consume was correct, and consume is very complicated without any practical benefit, so we should just avoid it.)
      * All accesses (read/write) must go through an Accessor, which basically is nothing except an RAII object that calls addRef() on construction and releaseRef() on destruction.
      * Deleting a node will defer the deletion to a "recycler", which is just a vector of nodes to be recycled. When releaseRef() drops the refcount to zero, the nodes in the recycler are deleted.
      
      Intuitively speaking, when refcount drops to zero, it is safe to delete the nodes in the recycler because nobody holds any Accessors. It's a very simple way to manage the lifetime of the nodes, without having to worry about *which* nodes are accessed or to be deleted.
      
      However, this refcount/recycling behavior is very hard to get right when using atomics as the main synchronization mechanism. In the buggy implementation before this diff, I'll highlight three relevant parts:
      * To find an element in the skip list (either to fetch, insert, or delete), we start from the head node and skip forward by following successor pointers at the appropriate levels until we arrive at the node in question. Rough pseudocode:
        ```
        def find(val):
          node = head
          while read(node) < val:
            node = skip(node)  # read the node to find the successor at some level
          return node
        ```
      
      * To delete an element from the skip list, after finding the element, we modify the predecessor at each level by changing their successors to point to the deleted element's successors, and place the deleted element into the recycler.
        ```
        def delete(node):
          for level in range(...):
            node->pred->setSkip(node->succ)
          recycle(node)
      
        def recycle(node):
          lock(RECYCLER_LOCK)
          recycler.add(node)
          unlock(RECYCLER_LOCK)
        ```
      
      * releaseRef() and addRef():
        ```
        def releaseRef():
          if refcount > 1:
            refcount--
            return
      
          lock(RECYCLER_LOCK)
          if refcount > 1:
            refcount--
            unlock(RECYCLER_LOCK)
            return
          to_recycle, recycler = recycler, [] # swap out nodes to recycle
          unlock(RECYCLER_LOCK)
      
          for node in to_recycle:
            free(node)
          refcount--
      
        def addRef():
          refcount++
        ```
      
      ## The Safety Argument
      The Accessor/deletion mechanism is safe if we can ensure the following:
      * If for a particular node X, a thread performs read(X) and a different thread performs free(X), the read(X) happens-before free(X).
      
      ### Problem 1: Relaxed decrement
      The buggy implementation used relaxed memory order when doing `refcount--`. Let's see why this is a problem.
      
      Let thread 1 be the one performing read(X) and let thread 2 be the one performing free(X). The intuitive argument is that free(X) can only happen after the refcount drops to zero, which cannot be true while read(X) is happening. The somewhat more formal argument is that read(X) happens before thread 1's refcount--, which happens before thread 2's final refcount--, which happens before free(X). But because we use relaxed memory order, the two refcount-- operations do not synchronize, so the chain breaks.
      
      ### Problem 2: Relaxed increment
      The implementation also used relaxed memory order for addRef(). Normally, for a refcount, it is OK to use relaxed increments, but this refcount is different: the count can go back up once it reaches zero. When reusing refcounts this way, we can no longer use relaxed increment.
      
      To see why, suppose thread 2 performs the following in this order:
      ```
      setSkip(P, not X)  # step before deleting X; P is a predecessor at some level
      recycle(X)
      refcount-- # acq_rel, gets 0
      free(X)
      ```
      and thread 1 performs the following in this order:
      ```
      refcount++ # relaxed
      skip(P) # gets X
      read(X)
      ```
      See Appendix A below; it's possible for the refcount to reach 0, and for thread 1 to get X when reading the successor from P. This means that free(X) might theoretically still race with read(X), as we failed to show that once we delete something from the skip list, another accessor can't possibly reach X again.
      
      This might feel like an unnecessary argument, but without this reasoning, we would not have found a problem even if we just modified releaseRef() to instead delete some random nodes from the list. That will certainly cause trouble when someone else later tries to read something.
      
      ### Problem 3: No release operation on refcount before freeing nodes
      This is much more subtle. Suppose thread 1 performs the following:
      ```
      refcount--  # end of some previous accessor
      refcount++  # begin of new accessor
      skip(P)
      ```
      and thread 2 does this:
      ```
      setSkip(P, not X)  # step before deleting X; P is a predecessor
      recycle(X)
      read(refcount) # gets 1, proceeds to lock
      lock(RECYCLER_LOCK)
      read(refcount) # gets 1  ***
      unlock(RECYCLER_LOCK)
      free(X)
      refcount--
      ```
      
      The interleaving to make this possible is:
      ```
      thread 1 refcount--
      thread 2 everything until free(X)
      thread 1 refcount++
      thread 2 free(X)
      thread 2 refcount--
      ```
      
      We wish to show that `setSkip(P, not X)` happens before `skip(P)`, because this will allow us to prove that `skip(P) != X` (otherwise, we would not be able to show that a subsequent read(X) in thread 1 happens before free(X) - it might legitimately not).
      
      The intuitive argument is that `setSkip(P, not X)` happened before we decrement the refcount, which happens before the increment of the refcount in thread 1, which happens before `skip(P)`. However, because thread 2 actually decremented refcount quite late, it might be the case that thread 1's `refcount++` happened before thread 2's `refcount--` (and the increment synchronized with its own decrement earlier). There's nothing else in the middle that provided a synchronizes-with relationship (in particular, the read(refcount) operations do not provide synchronization because those are *loads* - wrong direction!).
      
      ### Correct implementation
      In addition to using acq_rel memory order on all operations on refcount, this diff modifies releaseRef() like this:
      ```
      def releaseRef():
        if refcount > 1: # optimization
          refcount--
          return
      
        lock(RECYCLER_LOCK)
        if --refcount == 0:  # "GUARD"
          to_recycle, recycler = recycler, [] # swap out nodes to recycle
        unlock(RECYCLER_LOCK)
      
        for node in to_recycle:
          free(node)
      ```
      
      I'll use "GUARD" to denote the event that the --refcount within the lock *returns 0*. I'll use this for the correctness proof.
      
      ### Correct implementation proof
      The proof will still be to show that if thread 1 performs read(X) and thread 2 performs free(X), read(X) happens-before free(X).
      
      Proof: thread 1 must have grabbed an accessor while reading X, so its sequence of actions look like this:
      ```
      refcount++
      skip(P)  # gets X
      read(X)
      refcount--
      ```
      thread 2 performs:
      ```
      GUARD
      free(X)
      ```
      
      Now, all writes on refcount are RMW operations and they all use acq_rel ordering, so all the RMW operations on refcount form a total order where successive operations have a synchronizes-with relationship. We'll look at where GUARD might stand in this total order.
      
      * Case 1: GUARD is after refcount-- from thread 1 in the total order.
        * In this case, read(X) happens before refcount-- in thread 1, which happens before GUARD, which happens before free(X).
      * Case 2: GUARD is between refcount++ and refcount-- from thread 1 in the total order.
        * In this case, observe (by looking at the total ordering on refcount RMW) that we have at least two threads (1 and 2) that contribute 1 to the refcount, right before GUARD. In other words, GUARD could not possibly have returned 0, which is a contradiction.
      * Case 3: GUARD is before refcount++ from thread 1 in the total order.
        * Let `setSkip(P, not X)` be a predecessor write operation before X is added to the recycler (this can happen on any thread). We will prove in the below lemma that `setSkip(P, not X)` happens before GUARD. Once we have that, then `setSkip(P, not X)` happens before GUARD which happens before thread 1's refcount++ which happens before `skip(P)`, and that renders it impossible for `skip(P)` to return X, making it a contradiction.
      
      It remains to prove that `setSkip(P, not X)` happens before GUARD.
      
      In the thread that performs a `setSkip(P, not X)` operation, it subsequently performs `recycle(X)`, which adds X to the recycler within RECYCLER_LOCK. In thread 2, GUARD happens within the RECYCLER_LOCK, and the subsequent swapping of the recycler vector contained X (which is shown by the fact that we free(X) after GUARD), so the lock must have been grabbed *after* the critical section that added X to the recycler. In other words, we have the relationship that `setSkip(P, not X)` happens before `recycler.add(X)`, which happens before `unlock(RECYCLER_LOCK)`, which happens before `lock(RECYCLER_LOCK)`, which happens before GUARD.
      
      Note that just like the original implementation, the optimization on top of releaseRef() is not a perfect optimization; it may delay the deletion of otherwise safe-to-delete nodes. However, that does not affect our correctness argument because it's always at least as safe to *delay* deletions (this hand-wavy argument is not part of the proof).
      
      ## Appendix A
      Consider the following two threads:
      ```
      std::atomic<int> x{0}, y{1};
      // Thread 1:
      x.store(1, std::memory_order_release);  // A
      int y1 = y.fetch_add(-1, std::memory_order_acq_rel);  // B
      
      // Thread 2:
      y.fetch_add(1, std::memory_order_relaxed);  // C
      int x2 = x.load(std::memory_order_acquire);  // D
      ```
      Intuitively, if y1 = 1, then thread 1's fetch_add was executed first, so thread 2 should get x2 = 1. Otherwise, if thread 2's fetch_add happened first, then y1 = 2, and x2 could be either 0 or 1.
      
      But, could it happen that y1 = 1 and x2 = 0? Let's look at the happens-before relationships between these operations. For intra-thread (sequenced-before), we have A < B and C < D (I'm using < to denote happens-before). Now, for inter-thread (synchronizes-with), the only pair that could establish a synchronizes-with relationship is A and D. (B and C are not eligible because C uses relaxed ordering.) For y1 = 1 and x2 = 0 to happen, we must not have D read the result of A, so it must be the case that A does not synchronize-with D. But that's as far as we can go; there's nothing that really enforces much of an ordering between the two threads.
      
      We can also think about this in terms of reordering of memory operations. Thread 1 is pretty safe from reordering because of the acq_rel, but in thread 2, an "acquire" ordering means no memory operations after D may be reordered before D; it doesn't prevent C from being reordered after D. C itself does not prevent reordering, being a relaxed operation. So if thread 2 executed D and then C, it would be trivially possible to have y1 = 1 and x2 = 0.
      
      The point of this is to highlight that just by having a release/acquire pair does not magically *order* them. The pair merely provides a synchronizes-with relationship *if* the read happens to obtain the value written by the write, but not any guarantees of *which* value would be read.
      
      ## Appendix B
      Problem 1 is detected by TSAN, but problems 2 and 3 are not. Why?
      
      TSAN detects data races by deriving synchronization relationships from the *actual* interleaving of atomics at runtime. If an interleaving always happens in practice but is not *guaranteed* by the standard, a real data race may go undetected.
      
      For example, it is well known that the following code will be detected by TSAN as a data race on int "x":
      ```
      int x = 1;
      std::atomic<bool> y{false};
      
      // Thread 1
      x = 2;
      y.store(true, memory_order_relaxed);
      
      // Thread 2
      while (!y.load(memory_order_acquire)) {  // acquire is useless because writer used relaxed
      }
      std::cout << x << std::endl;
      ```
      TSAN reports a data race on x because `y` failed to provide proper synchronizes-with relationship between the two threads due to incorrect memory ordering. However, when compiling on x86, most likely we will end up with a binary that always guarantees the intuitively desired behavior anyway.
      
      So now consider the following code:
      ```
      std::atomic<int> x{1};
      std::atomic<bool> y{false};
      int z = 8;
      
      // Thread 1
      z = 9;
      x.store(2, memory_order_release);
      y.store(true, memory_order_relaxed);
      
      // Thread 2
      while (!y.load(memory_order_acquire)) {  // acquire is useless because writer used relaxed
      }
      x.load(memory_order_acquire);
      std::cout << z << std::endl;
      ```
      There is a data race on the access to z, because the happens-before chain of `write z -> x.store -> y.store -> y.load -> x.load -> read z` is broken on the `y.store -> y.load` link. However, TSAN will not report a data race, because it sees the chain as `write z -> x.store -> x.load -> read z`. It sees x.store as synchronizing with x.load because it *observed* that x.load obtained the value written by x.store *at runtime*, so it inferred that the pair was valid synchronization. This isn't guaranteed, though, because it's possible in some execution (in theory) that x.load does not get the value written by x.store (similar to Appendix A).
      
      Reviewed By: yfeldblum
      
      Differential Revision: D29248955
      
      fbshipit-source-id: 2a3c9379c7c3a6469183df64582ca9cf763c0890
    • Support returning move-only types in folly::Expected::then · 13bccbbf
      Dylan Yudaken authored
      Summary: The code path for `.then` copies the return value in one place, which prevents using move-only types (and might incur extra costs).
      
      Reviewed By: yfeldblum
      
      Differential Revision: D29159637
      
      fbshipit-source-id: 892b73266cfe45c9e09b9b648d7b7703871c4323
    • Use optlen instead of return code to determine bytes read · b194210a
      Brandon Schlinker authored
      Summary:
      The code was using the return value of `getsockopt` as the number of bytes read, which is incorrect: the return value only indicates success or failure, while the length (`optlen`) argument reports the bytes written. Changed to using the length field. Also changed how the `Options` fields are initialized to prevent issues on certain platforms.
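      A sketch of the corrected pattern (`readTcpInfo` is an illustrative helper, not the diff's code):
      
      ```cpp
      #include <netinet/in.h>
      #include <netinet/tcp.h>
      #include <sys/socket.h>
      
      // getsockopt() returns 0 on success, not a byte count; the kernel reports
      // how many bytes it copied via the in/out optlen parameter.
      socklen_t readTcpInfo(int fd, tcp_info& info) {
        info = {}; // zero-initialize all fields up front
        socklen_t optlen = sizeof(info);
        if (getsockopt(fd, IPPROTO_TCP, TCP_INFO, &info, &optlen) != 0) {
          return 0; // lookup failed
        }
        return optlen; // bytes actually populated by the kernel
      }
      ```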
      
      Will follow up with an integration test.
      
      Differential Revision: D29257090
      
      fbshipit-source-id: 518794c76bf74ab092ed7955c48ec8a3b3472c24
    • Disable all options by default · dfb73b05
      Brandon Schlinker authored
      Summary: Extra lookup options should be disabled by default to ensure that things that need them explicitly enable them.
      
      Differential Revision: D29255973
      
      fbshipit-source-id: 5c92ad8685cb2f490aebd55a837a1463a624be97
    • Add function that enables all observer options for AsyncTransport · f1d5088b
      Brandon Schlinker authored
      Summary: Enables an observer to automatically subscribe to all available signals.
      
      Differential Revision: D29255979
      
      fbshipit-source-id: 3675ef9bf2442c3b6e26c331a6089f42c1fd8ee9
    • ConcurrentHashMap: Fix a bug in replacing the value of an existing key · c7400627
      Maged Michael authored
      Summary: Add missing protection of the new node when replacing an existing node.
      
      Differential Revision: D29271517
      
      fbshipit-source-id: 77812f27c37d4950a6e485db674813fab0cf8772
    • Enable observers to request socket timestamps · a05360ec
      Brandon Schlinker authored
      Summary:
      D24094832 (https://github.com/facebook/folly/commit/842ecea531e8d6a90559f213be3793f7cd36781b) added `ByteEvent` support to `AsyncSocket`, making it easier to use socket timestamps for SCHED/TX/ACK events. With D24094832 (https://github.com/facebook/folly/commit/842ecea531e8d6a90559f213be3793f7cd36781b):
      - An application can request socket timestamps by installing an observer with `ByteEvents` enabled, and then writing to the socket with a relevant timestamping flag (e.g., `TIMESTAMP_TX`, `TIMESTAMP_ACK`).
      - Timestamps are delivered to the observer via the `byteEvent` callback.
      
      This diff enables *observers* to request socket timestamping by interposing between the application and the socket by way of the `prewrite` event:
      - Each time bytes from the application are about to be written to the underlying raw socket / FD, `AsyncSocket` will give observers an opportunity to request timestamping via a `prewrite` event.
      - If an observer wishes to request timestamping, it can return a `PrewriteRequest` with information about the `WriteFlags` to add.
      - If an observer wishes to timestamp a specific byte (first byte, every 1000th byte, etc.), it can request this with the `maybeOffsetToSplitWrite` field — socket timestamp requests apply to the *last byte* in the buffer being written, and thus if an observer wants to timestamp a specific byte, the buffer must be split so that the byte to timestamp is the final byte. The `AsyncSocket` implementation handles this split on behalf of the observer and adds `WriteFlags::CORK` (triggering `MSG_MORE`) where appropriate.
      - If multiple observers are attached, `PrewriteRequests` are combined so that all observer needs are satisfied. In addition, `WriteFlags` set by the application and `WriteFlags` set by observers are combined during processing of `PrewriteRequests`.
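      An illustrative sketch of a prewrite handler. The names `prewrite`, `PrewriteRequest`, and `maybeOffsetToSplitWrite` come from the description above; the callback signature and the flag-carrying field are assumptions:
      
      ```cpp
      // Pseudocode-level sketch: request a TX timestamp for every 1000th byte.
      PrewriteRequest prewrite(size_t startOffset, size_t endOffset) {
        PrewriteRequest req;
        // Timestamp requests apply to the last byte written, so ask AsyncSocket
        // to split the write at the byte we want timestamped.
        req.maybeOffsetToSplitWrite = ((startOffset / 1000) + 1) * 1000 - 1;
        req.writeFlagsToAdd = folly::WriteFlags::TIMESTAMP_TX; // field name assumed
        return req;
      }
      ```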
      
      Reviewed By: yfeldblum
      
      Differential Revision: D24976575
      
      fbshipit-source-id: 885720173d4a9ceefebc929a86d5e0f10f8889c4
    • Speed up findLocation in the absence of .debug_aranges (#1607) · 5c4c45a4
      Tudor Bosman authored
      Summary:
      If we don't find `.debug_aranges`, we used to jump directly to running the line number VM for every single compilation unit (CU). This is obviously not great.
      
      Instead, every CU lists one (or more) address ranges that make up its `.text`, so consult those ranges first. I think (but haven't benchmarked) that this shouldn't be significantly slower than `.debug_aranges` (probably why clang doesn't emit `.debug_aranges` by default) -- both `.debug_aranges` and this approach suffer from the same drawback: they're grouped by CU instead of being sorted by address, so we still need to iterate over all CUs.
      
      Pull Request resolved: https://github.com/facebook/folly/pull/1607
      
      Reviewed By: yfeldblum
      
      Differential Revision: D29175717
      
      Pulled By: luciang
      
      fbshipit-source-id: d626babdbb7f9a2f7dd51aefd914f6659124eb4e
    • hazard pointers: Support class and function names consistent with WG21 P1121 · 852cd96d
      Maged Michael authored
      Summary:
      Support class and function names consistent with [WG21 P1121](http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p1121r3.pdf)
      
      hazard_pointer = hazptr_holder
      hazard_pointer_obj_base = hazptr_obj_base
      hazard_pointer_domain = hazptr_domain
      hazard_pointer_default_domain = default_hazptr_domain
      hazard_pointer_clean_up = hazptr_cleanup
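      A small usage sketch with the P1121-style names (assuming the factory follows the same naming, i.e. `make_hazard_pointer`; `Node` and `head` are hypothetical):
      
      ```cpp
      #include <atomic>
      
      #include <folly/synchronization/Hazptr.h>
      
      struct Node : folly::hazard_pointer_obj_base<Node> {
        int value = 0;
      };
      
      std::atomic<Node*> head{nullptr};
      
      int readHead() {
        folly::hazard_pointer<> h = folly::make_hazard_pointer();
        Node* p = h.protect(head); // p is safe from reclamation while h lives
        return p ? p->value : 0;
      }
      ```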
      
      Reviewed By: yfeldblum
      
      Differential Revision: D29252308
      
      fbshipit-source-id: f5bbf1af87bad4c0d6a54f052b9379c042a724e8
    • fix destruction race for terminateLoopSoon · fa9ccf03
      Misha Shneerson authored
      Summary:
      Calling `EventBase::terminateLoopSoon` from a different thread should be a thread safe
      operation when there is a concurrently executing `loopForever`, immediately
      followed by `EventBase` destruction.
      
      Today, we first set the stop_ flag to stop the event loop, then post a message to
      tell eventlib to stop its event loop. ... but IIUC the stop_ flag is what the
      `while()` loop checks to keep going. Thus setting it before the message is posted
      may result in `loopForever` terminating, and the underlying EventBase being
      destroyed, before we are able to post the message to eventlib.
      
      The fix is to set `stop_ = true` in the loop.
      
      Reviewed By: yfeldblum, andriigrynenko
      
      Differential Revision: D29143212
      
      fbshipit-source-id: f102fbad31653dd7525eff0f70600aa71ae02534
    • let the semaphore test use Latch · 96f58937
      Yedidya Feldblum authored
      Summary: Rather than reinventing it in the test.
      
      Reviewed By: markisaa
      
      Differential Revision: D29223788
      
      fbshipit-source-id: 3d35bd0046b876d22cd2397549dcb6f4cc77d688
    • Add setTimestamping in AsyncUDPServerSocket · ca2e0d75
      Dongyi Ye authored
      Summary: Enable the owner of AsyncUDPServerSocket to call setTimestamping on the underlying AsyncUDPSocket, so that the `onDataAvailable` callback can have ts set in `OnDataAvailableParams`.
      
      Reviewed By: yfeldblum
      
      Differential Revision: D28661584
      
      fbshipit-source-id: d060cfdd8a4e105ada8a2c3b0fd13ddafb6f0d7c
    • invoker suites · 3b4f9dfc
      Yedidya Feldblum authored
      Summary:
      A new pattern which creates an invoker type and a variable, both named for the member, with the type suffixed with `_fn` and the instance unsuffixed. Applied to free-invokers, member-invokers, and static-member invokers.
      
      Automatic name mangling like this is not great, but this is intended to be used selectively.
      
      Changes the existing unit-tests to use the invoker variables generated by the invoker-suite macros, since the invoker variables depend on the invoker types and the generation of both depends on the invoker macros. So everything gets tested transitively.
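      A hypothetical sketch of the generated names (assuming the member-invoker suite macro follows the existing `FOLLY_CREATE_MEMBER_INVOKER` naming): the suite yields a type `push_back_fn` and a variable `push_back`:
      
      ```cpp
      #include <type_traits>
      #include <vector>
      
      #include <folly/functional/Invoke.h>
      
      FOLLY_CREATE_MEMBER_INVOKER_SUITE(push_back);
      
      void example() {
        std::vector<int> v;
        push_back(v, 42); // calls v.push_back(42) through the generated invoker
        static_assert(
            std::is_invocable_v<push_back_fn, std::vector<int>&, int>, "");
      }
      ```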
      
      Reviewed By: luciang
      
      Differential Revision: D29190157
      
      fbshipit-source-id: 72d8fb622c4c99bae48efc3e5e9f0bd411d6a813
    • hazptr: Improve readability, specialize friends, use specialized aliases · 16837f09
      Maged Michael authored
      Summary: Use aliases. Reduce the use of the Atom template parameter.
      
      Reviewed By: yfeldblum
      
      Differential Revision: D29206490
      
      fbshipit-source-id: d0637593c48ef150560b4feb47a454afe25ecba6
    • Support CO_ASSERT_THAT · bb47922f
      Francesco Zoffoli authored
      Summary:
      `ASSERT_THAT` is defined in gmock; this adds the equivalent for coroutine code to GmockHelper.
      
      The implementation depends on the inclusion of GtestHelper, but to avoid forcing anyone who includes GmockHelper to also include GtestHelper, I didn't add the include to the file.
      
      Would it be preferable to include the needed header?
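      A hedged usage sketch (`CO_TEST` is assumed from GtestHelper; the awaited task is hypothetical):
      
      ```cpp
      #include <folly/experimental/coro/GmockHelpers.h>
      #include <folly/experimental/coro/GtestHelpers.h>
      
      CO_TEST(ExampleTest, ProducesExpectedElements) {
        std::vector<int> v = co_await makeVectorTask(); // hypothetical task
        CO_ASSERT_THAT(v, testing::ElementsAre(1, 2, 3));
      }
      ```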
      
      Reviewed By: yfeldblum
      
      Differential Revision: D29067561
      
      fbshipit-source-id: 26aa6021efe55aa03dd7cf064563a732e47e39a1
    • TcpInfo, an abstraction layer to capture and access TCP state · 68a78d99
      Brandon Schlinker authored
      Summary:
      A cross-platform abstraction layer for capturing current TCP and congestion control state.
      
      Fetches information from four different resources:
      - `TCP_INFO` (state of TCP)
      - `TCP_CONGESTION` (name of congestion control algorithm)
      - `TCP_CC_INFO` (details for a given congestion control algorithm)
      - `SIOCOUTQ`/`SIOCINQ` (socket buffers)
      
      `TcpInfo` is designed to solve two problems:
      
      **(1) `TcpInfo` unblocks use of the latest `tcp_info` struct and related structs.**
      
      As of 2020, the `tcp_info` struct shipped with glibc (sysdeps/gnu/netinet/tcp.h) has not been updated since 2007 due to compatibility concerns; see commit titled "Update netinet/tcp.h from Linux 4.18" in glibc repository. This creates scenarios where fields that have long been available in the kernel ABI cannot be accessed.
      
      Even if glibc does eventually update the `tcp_info` shipped, we don't want to be limited to their update cycle. `TcpInfo` solves this in two ways:
         - First, `TcpInfoTypes.h` contains a copy of the latest `tcp_info` struct for Linux, and `TcpInfo` always uses this struct for lookups; this decouples `TcpInfo` from glibc's / the platform's `tcp_info`.
         - Second, `TcpInfo` determines which fields in the struct are populated (and thus valid) based on the number of bytes the kernel ABI copies into the struct during the corresponding getsockopt operation. When a field is accessed through `getFieldAsOptUInt64` or through an accessor, `TcpInfo` returns an empty optional if the field is unavailable at run-time.
      
      In this manner, `TcpInfo` enables the latest struct to always be used while ensuring that programs can determine at runtime which fields are available for use --- there's no risk of a program assuming that a field is valid when it in fact was never initialized/set by the ABI.
      
      **(2) `TcpInfo` abstracts platform differences while still keeping details available.**
      
      The `tcp_info` structure varies significantly between Apple and Linux. `TcpInfo` exposes a subset of `tcp_info` and other fields through accessors that hide these differences, and reduce potential errors (e.g., Apple stores srtt in milliseconds, Linux stores in microseconds, `TcpInfo::srtt` does the conversions needed to always return in microseconds). When a field is unavailable on a platform, the accessor returns an empty optional.
      
      In parallel, the underlying structures remain accessible and can be safely accessed through the appropriate `getFieldAsOptUInt64(...)`. This enables platform-specific code to have full access to the underlying structure while also benefiting from `TcpInfo`'s knowledge of whether a given field was populated by the ABI at run-time.
      
      Support for FreeBSD will be added in a subsequent diff.
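      A hedged sketch of the accessor model described above (include path and exact return types are assumptions):
      
      ```cpp
      #include <glog/logging.h>
      
      void logSrtt(const folly::TcpInfo& info) {
        // Accessors return an empty optional when the platform lacks the field
        // or the kernel did not populate it at run time; per the description,
        // srtt() is normalized to microseconds on every platform.
        if (const auto srtt = info.srtt()) {
          LOG(INFO) << "srtt (us): " << srtt->count();
        }
      }
      ```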
      
      Differential Revision: D22134355
      
      fbshipit-source-id: accae8762aa88c187cc473b8121df901c6ffb456
    • Remove semicolons at the end of macros after `do {} while (0)` (#1605) · 16ac56e4
      JTJL authored
      Summary:
      The semicolons at the end of macros after `do {} while (0)` are useless and may cause compile errors in the future.
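      Why the trailing semicolon is harmful: the macro then expands to two statements, which breaks `if`/`else` chains at the call site.
      
      ```cpp
      #define LOG_ONCE_BAD() do { /* ... */ } while (0);  // stray ';'
      #define LOG_ONCE_GOOD() do { /* ... */ } while (0)
      
      void f(bool cond) {
        if (cond)
          LOG_ONCE_GOOD(); // fine: expands to one statement plus this ';'
        else
          LOG_ONCE_GOOD(); // if the `if` branch used LOG_ONCE_BAD() instead, its
                           // trailing ';' would add an empty statement that ends
                           // the `if`, leaving this `else` without a matching `if`.
      }
      ```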
      
      Pull Request resolved: https://github.com/facebook/folly/pull/1605
      
      Reviewed By: Mizuchi
      
      Differential Revision: D29109549
      
      Pulled By: yfeldblum
      
      fbshipit-source-id: 0c585b2db059bc5f53a31671b044a2b86a707359
    • Set TOS for AsyncServer listener socket · a1056c1d
      Prabhakaran Ganesan authored
      Summary: Added set/get APIs to configure TOS for listener sockets. setListenerTos() sets the TOS for the server socket, and all accepted connections are expected to inherit it. These APIs would be used by higher layers (like the Thrift server) to set the TOS on the server socket.
      
      Reviewed By: jmswen
      
      Differential Revision: D28651968
      
      fbshipit-source-id: 30f251970269155adbf5e88e1079096dbeceb216
    • Support move-only objects in `collectAny` · d92bb4bb
      Francesco Zoffoli authored
      Summary: `collectAny` does not compile when used with `Task`s that return move-only objects.
      
      Reviewed By: yfeldblum
      
      Differential Revision: D29137632
      
      fbshipit-source-id: d8fd4f46d4c014c7492dcd2fb7fe84921db8aad0
    • fix race between EventBase and EventBaseLocal dtors · 033fa8af
      Misha Shneerson authored
      Summary:
      EventBase keeps a registry of EventBaseLocal instances; EventBaseLocal keeps a registry of EventBase instances.
      At destruction time, both try to remove themselves from the other's registry, and it is possible for the dtors to race each other.
      
      There are two changes to address the race:
      1. Remove the virtual method in EventBaseLocal, because calling through the vptr makes TSAN unhappy: the underlying vtable is mutated during destruction.
      2. Since deregistration involves acquiring two locks, lock inversion must be avoided. This is achieved by retrying if the inner lock acquisition fails, as sketched below.
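      A generic sketch of the retry approach in (2), with illustrative names:
      
      ```cpp
      #include <mutex>
      
      // Take our own lock, then try-lock the other registry's lock; on failure,
      // release both and retry instead of blocking, which avoids deadlocking
      // against a thread that acquires the locks in the opposite order.
      void deregister(std::mutex& ours, std::mutex& theirs) {
        while (true) {
          std::unique_lock<std::mutex> a(ours);
          std::unique_lock<std::mutex> b(theirs, std::try_to_lock);
          if (b.owns_lock()) {
            // ... remove ourselves from the other registry ...
            return;
          }
          // Inner lock was contended: both locks release at scope exit; retry.
        }
      }
      ```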
      
      Reviewed By: andriigrynenko
      
      Differential Revision: D29120201
      
      fbshipit-source-id: 93c93c8d7cc7126e3432ac06562d55a838008e4a
    • Add opt outs to shipit · 76c832bd
      Ter Chrng Ng authored
      Summary: As title
      
      Reviewed By: mzlee
      
      Differential Revision: D29140913
      
      fbshipit-source-id: 6a90756f1c340faaf9e857d743ccbeb1dc991b2f
    • Stub out sockets for EMSCRIPTEN · 02d4e327
      Mihnea Olteanu authored
      Summary: Stub out sockets when building under EMSCRIPTEN (aka the WASM compiler), as was done in D26579892 (https://github.com/facebook/folly/commit/c76b89b60652af52ee163795d526f2f10a114b20) for XROS.
      
      Reviewed By: yfeldblum
      
      Differential Revision: D28107594
      
      fbshipit-source-id: 8a0d3033793a857cce587c5349934bc6f2a4bec5
    • Add CO_TEST_P · c30526f7
      Bennett Magy authored
      Summary:
      Copied TEST_P def from https://github.com/google/googletest/blob/master/googletest/include/gtest/gtest-param-test.h
      
      Implemented `TestBody()` as `blockingWait(co_TestBody())`. The user is responsible for providing the implementation of `co_TestBody()`.
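      A hedged usage sketch (the fixture and the awaited task are hypothetical):
      
      ```cpp
      #include <folly/experimental/coro/GtestHelpers.h>
      
      class ParamTest : public testing::TestWithParam<int> {};
      
      CO_TEST_P(ParamTest, DoublesValue) {
        // The generated TestBody() runs this body via blockingWait(co_TestBody()).
        int result = co_await doubleAsync(GetParam()); // hypothetical task
        EXPECT_EQ(GetParam() * 2, result);
      }
      
      INSTANTIATE_TEST_SUITE_P(AllValues, ParamTest, testing::Values(1, 2, 3));
      ```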
      
      Reviewed By: yfeldblum
      
      Differential Revision: D29124282
      
      fbshipit-source-id: ca8e9b874903b84ab529e7eefa6a2b7f72793b9b
    • add option not to prefer /usr/bin python on mac · a7b4818a
      Genevieve Helsel authored
      Reviewed By: chadaustin
      
      Differential Revision: D29084022
      
      fbshipit-source-id: 0605c1bfdd86ab94f4aa6893737b296ab4cdd913
    • Implement coro::collectAny · b8f35551
      Francesco Zoffoli authored
      Summary:
      `collectAll` allows `co_await`ing multiple tasks using structured concurrency.
      
      Unfortunately `future::collectAny` does not follow the structured concurrency pattern, and detaches the uncompleted operations.
      This can result in memory errors (the coroutines access data that has already been freed).
      
      This diff introduces `coro::collectAny`, which, given a number of awaitables, returns the result of the first awaitable to finish along with its index, cancels the remaining operations, **and waits for them to complete**.
      
      The implementation uses `collectAll` as a building block.
      The return signature mirrors the one from `future::collectAny`.
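      A hedged usage sketch (per the description, the result carries the index of the first awaitable to finish plus its result, mirroring `future::collectAny`):
      
      ```cpp
      #include <folly/experimental/coro/Collect.h>
      #include <folly/experimental/coro/Task.h>
      
      folly::coro::Task<int> firstOf(
          folly::coro::Task<int> fast, folly::coro::Task<int> slow) {
        // Losers are canceled and awaited before collectAny returns.
        auto [index, result] =
            co_await folly::coro::collectAny(std::move(fast), std::move(slow));
        co_return std::move(result).value(); // rethrows if the winner failed
      }
      ```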
      
      Reviewed By: yfeldblum, rptynan
      
      Differential Revision: D28945040
      
      fbshipit-source-id: 402be03e004d373cbc74821ae8282b1aaf621b2d
    • The Latch synchronization class · dc7ba0b5
      Emanuele Altieri authored
      Summary:
      Similar to std::latch (C++20) but with timed waits:
      https://en.cppreference.com/w/cpp/thread/latch
      
      The latch class is a downward counter which can be used to synchronize
      threads. The value of the counter is initialized on creation. Threads may
      block on the latch until the counter is decremented to zero. There is no
      possibility to increase or reset the counter, which makes the latch a
      single-use barrier.
      
      Example:
      
        const int N = 32;
        folly::Latch latch(N);
        std::vector<std::thread> threads;
        for (int i = 0; i < N; i++) {
          threads.emplace_back([&] {
            do_some_work();
            latch.count_down();
          });
        }
        latch.wait();
      
      A latch can be used to easily wait for mocked async methods in tests:
      
        ACTION_P(DecrementLatchImpl, latch) {
          latch.count_down();
        }
        constexpr auto DecrementLatch = DecrementLatchImpl<folly::Latch&>;
      
        class MockableObject {
         public:
          MOCK_METHOD(void, someAsyncEvent, ());
        };
      
        TEST(TestSuite, TestFeature) {
          MockableObject mockObjA;
          MockableObject mockObjB;
      
          folly::Latch latch(5);
      
          EXPECT_CALL(mockObjA, someAsyncEvent())
              .Times(2)
              .WillRepeatedly(DecrementLatch(latch)); // called 2 times
      
          EXPECT_CALL(mockObjB, someAsyncEvent())
              .Times(3)
              .WillRepeatedly(DecrementLatch(latch)); // called 3 times
      
          // trigger async events
          // ...
      
          EXPECT_TRUE(latch.try_wait_for(std::chrono::seconds(60)));
        }
      
      Reviewed By: yfeldblum
      
      Differential Revision: D28951720
      
      fbshipit-source-id: 6a9e20ad925a38d1cdb0134eedad826771bef3e0
    • complete the transition away from LockTraits · ddcb93e0
      Yedidya Feldblum authored
      Summary: `Synchronized` no longer needs a full lock-traits facility. Absorb the few things it needs and cut the rest.
      
      Reviewed By: simpkins
      
      Differential Revision: D28774648
      
      fbshipit-source-id: 0679a3192a8eb17444628d12704cdc34fe5911b3
    • cut legacy LockedPtr::getUniqueLock · b65ef9f8
      Yedidya Feldblum authored
      Summary: Now that `LockedPtr::as_lock` is always available regardless of mutex type and regardless of lock category, `getUniqueLock` is no longer needed.
      
      Differential Revision: D28987941
      
      fbshipit-source-id: a6894cffb30d280ec8325c14784592b2d4381f4c
    • migrate from LockedPtr::getUniqueLock · 07ab2e2b
      Yedidya Feldblum authored
      Summary: The new name is `LockedPtr::as_lock`.
      
      Reviewed By: aary
      
      Differential Revision: D28987868
      
      fbshipit-source-id: 8abd6a69a1b9c884adf137f06c24fe0df9ddd089
    • Correcting and adding coarse_* clocks (#1580) · 78e483e0
      Roman Koshelev authored
      Summary: Pull Request resolved: https://github.com/facebook/folly/pull/1580
      
      Reviewed By: luciang
      
      Differential Revision: D28627136
      
      Pulled By: yfeldblum
      
      fbshipit-source-id: 1362506502ad3282f53512999d1c79822f2ce6e8
    • suppress lint-time diagnostics in OpenSSLThreadding.cpp · 7a06e2f4
      Yedidya Feldblum authored
      Differential Revision: D29089239
      
      fbshipit-source-id: 83cbe9d74d8f7f648e18b8ce1e3e13ca8cb33006
    • revise Synchronized LockedPtr to use lock types · b805d853
      Yedidya Feldblum authored
      Summary:
      Use `std::unique_lock`, `std::shared_lock`, and `folly::upgrade_lock`. There are two reasons:
      
      * Makes generic the use of `std::unique_lock` with `std::mutex`, which is currently special-cased.
      * Permits specializations of `std::unique_lock` and the other lock types to be found automatically.
      
      In particular, this permits the use of `Synchronized<T, DistributedMutex>`, which is only proxy-lockable and not lockable.
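      A hedged sketch of what this enables (the exact template spelling is an assumption):
      
      ```cpp
      #include <vector>
      
      #include <folly/Synchronized.h>
      #include <folly/synchronization/DistributedMutex.h>
      
      // DistributedMutex is proxy-lockable rather than lockable; with LockedPtr
      // built on the standard lock types, Synchronized can now wrap it.
      folly::Synchronized<std::vector<int>, folly::DistributedMutex> data;
      
      void append(int v) {
        data.withLock([&](std::vector<int>& vec) { vec.push_back(v); });
      }
      ```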
      
      Reviewed By: simpkins
      
      Differential Revision: D28705607
      
      fbshipit-source-id: 48daa2910ce16ee4fde6f5ea629a41d9768f3c87
    • cut legacy friends of SharedMutex · 424e569f
      Yedidya Feldblum authored
      Summary: They were used as extension points at one time, but no longer.
      
      Reviewed By: Alfus
      
      Differential Revision: D28987212
      
      fbshipit-source-id: e9d59e5cf9641323657314b088eef516ce068112