- 09 Jan, 2019 2 commits
-
-
Orvid King authored
Reviewed By: yfeldblum Differential Revision: D13603396 fbshipit-source-id: 90d2ce9e4d3cfc681e27df84e8e989c7cee182cc
-
Tianjiao Yin authored
Summary: The code is not calling posix_spawn(...) correctly. Reviewed By: yfeldblum Differential Revision: D13596190 fbshipit-source-id: 266522ec2387b6e2294fc5899aa85afc19d8d919
-
- 08 Jan, 2019 5 commits
-
-
Victor Zverovich authored
Summary: Minor improvements to `Rcu.h`: * Spell out the first use of RCU for the sake of people who are not familiar with the abbreviation. * Correct the default template argument type in the comment: it's `RcuTag`, not `void`. * Don't expose `rcu_token` constructor to the users because it's only supposed to be obtained via `lock_shared`. * Make other single-argument constructors explicit to prevent undesirable conversions. * Parameterize `rcu_token` on the tag type to prevent cross-domain use. Reviewed By: yfeldblum Differential Revision: D13510133 fbshipit-source-id: d5d214cfa3b30d0857c14ac293da6e4310db1100
-
Victor Zverovich authored
Summary: Since we require C++14 in Folly and already use `std::enable_if_t` in some places, there is no need for the `_t` workaround. Replace `_t<enable_if<...>>` with `enable_if_t<...>`. Reviewed By: yfeldblum Differential Revision: D13511564 fbshipit-source-id: 314b4a63281ce6b8275174ae89fab5fba1101bfb
-
Yedidya Feldblum authored
Summary: [Folly] `SingletonRelaxedCounter`, a singleton-per-tag relaxed counter. Differential Revision: D13149336 fbshipit-source-id: 7cf0144758e9595e188465137a336d712c5d9a76
-
Yedidya Feldblum authored
Summary: [Folly] Cut unused `hazptr` dep on `Singleton`. Reviewed By: magedm Differential Revision: D13586564 fbshipit-source-id: e0e87807f51f0d050e045961b5e40e600026f182
-
Yedidya Feldblum authored
Summary: [Folly] Alias `std::apply` for libc++ if c++17, and for msvc. Fixes #987. Reviewed By: gkmhub Differential Revision: D13562821 fbshipit-source-id: b1fef92eed24ce201e50dda72d1ee8b6db9ed6dd
-
- 07 Jan, 2019 4 commits
-
-
Tristan Rice authored
folly/synchronization/LifoSem: fixed race condition between tryRemoveNode and shutdown by checking the lock in shutdown (#989) Summary: Pull Request resolved: https://github.com/facebook/folly/pull/989 Original post: https://fb.workplace.com/groups/560979627394613/permalink/1370192126473355/ There was a bug where we weren't checking the isLocked bit when setting the isShutdown bit. Thus, tryRemoveNode would acquire the lock, shutdown would set the shutdown bit, and then tryRemoveNode would release the lock via a direct store which accidentally cleared the shutdown bit. The new code wait loops in shutdown until the lock is cleared. Reviewed By: yfeldblum, djwatson Differential Revision: D13586264 fbshipit-source-id: 52139df8d7880a60039b6dab810898e0546479dc
-
Victor Zverovich authored
Summary: Stumbled upon some unused includes. Remove them. Reviewed By: yfeldblum, stevegury Differential Revision: D13549301 fbshipit-source-id: 9e18e3764baff02d8a077a81124633ea21698dbb
-
Nikita Shirokov authored
Summary: In D13274704 the default was changed to OFF; fix the comment accordingly. Reviewed By: yfeldblum Differential Revision: D13590584 fbshipit-source-id: af1bac2b866aa55f9645ce33da6e8850cd136d31
-
Doron Roberts-Kedes authored
Summary: Erase if and only if key k is equal to expected Reviewed By: magedm Differential Revision: D13542801 fbshipit-source-id: dd9e3b91a7e3104b18315043d5f81b6194a407eb
-
- 04 Jan, 2019 7 commits
-
-
Dan Melnic authored
Summary: Add takeOwnershipBenchmark IOBuf benchmark Reviewed By: yfeldblum Differential Revision: D13580855 fbshipit-source-id: 6c8e81e580daf8e097be03235f3720e79eadc21f
-
Dan Melnic authored
Summary: Expose the IOBuf SharedInfo::userData (Note: this ignores all push blocking failures!) Reviewed By: yfeldblum Differential Revision: D13577451 fbshipit-source-id: b52ebbf77d00594a04c26e629d5c208e92801d93
-
Lewis Baker authored
Summary: The folly::coro::Task coroutine type now captures the current RequestContext when the coroutine suspends and restores it when it later resumes. This means that folly::coro::Task can now be used safely with RequestContext and RequestContextScopeGuard. Reviewed By: andriigrynenko Differential Revision: D9973428 fbshipit-source-id: 41ea54baf334f0af3dd46ceb32465580f06fb37e
-
Nathan Bronson authored
Reviewed By: mengz0 Differential Revision: D13580163 fbshipit-source-id: 195e3007c6cbf4bf7281435c48a9b6f6c6eada5b
-
Andrii Grynenko authored
Summary: Before this change one could write: SemiFuture<void> f() { co_await f1(); co_await f2(); } where the SemiFuture coroutine would have the semantics of an InlineTask (f1 called inline, f2 called on the executor that completes f1). This doesn't match the semantics of a SemiFuture with deferred work (both f1() and f2() called on the executor that was passed to the SemiFuture's via). Drop support for SemiFuture coroutines, since they aren't used anywhere except for the toSemiFuture function. Reviewed By: lewissbaker Differential Revision: D13501140 fbshipit-source-id: d77f491821e6a77cef0c92d83839bff538552b32
-
Lewis Baker authored
Summary: Adds a simple SharedMutex type to the folly::coro namespace. The `co_[scoped_]lock[_shared]()` methods return semi-awaitables that require the caller to provide an executor to resume on in the case that the lock could not be acquired synchronously. This avoids some potential issues that could occur if the `.unlock()` operation were to resume awaiting coroutines inline. If you are awaiting within a `folly::coro::Task` then the current executor is implicitly provided. Otherwise, the caller can explicitly provide an executor by calling `.viaIfAsync()`. The implementation has not been optimised and currently just relies on a `SpinLock` to synchronise access to internal state. The main aim for this change is to make available a SharedMutex abstraction with the desired API that applications can start writing against which we can later optimise as required. Reviewed By: andriigrynenko Differential Revision: D9995286 fbshipit-source-id: aa141ad241d29daff2df5f7296161517c99ab8ef
-
Orvid King authored
Summary: As interim steps to codemod to. Reviewed By: yfeldblum Differential Revision: D13568657 fbshipit-source-id: b143b5bab0a64c196892358a30fce17037b19b21
-
- 03 Jan, 2019 17 commits
-
-
Nathan Bronson authored
Summary: Skip F14Map.continuousCapacity* tests if intrinsics are not available, and don't use c++17 API in the test. Reviewed By: yfeldblum Differential Revision: D13575479 fbshipit-source-id: 1cbbd10990ba5f0cc64ad1b29d4701b700dd16be
-
Nathan Bronson authored
Summary: This diff imports farmhash.h and farmhash.cc from https://github.com/google/farmhash and updates the public_tld license file accordingly. Build integration and namespace changes will occur in later diffs. Reviewed By: yfeldblum Differential Revision: D13436553 fbshipit-source-id: 7a081032cb35a1a3e1cd14e2edf2685906956396
-
Nick Terrell authored
Summary: Adds support for zstd-1.3.8 so OSS builds work. We can support zstd < 1.3.8 for some time with this small compatibility layer. I plan on always supporting at minimum the latest 2 zstd versions. Reviewed By: yfeldblum Differential Revision: D13569550 fbshipit-source-id: 67d53c9ad0051a889b810c9ad46a2f349122cf7e
-
Tomas authored
Summary: Because it wasn't clear to non-Python developers, and this document is referenced in the Spark AR documentation. Pull Request resolved: https://github.com/facebook/folly/pull/988 Reviewed By: yfeldblum Differential Revision: D13570180 Pulled By: Orvid fbshipit-source-id: 2e1f787c5e1bb50a90c85a12e6801a79a3b46999
-
Dan Melnic authored
Summary: Add AsyncUDPSocket support for sendmmsg Reviewed By: djwatson Differential Revision: D13521601 fbshipit-source-id: 89382e18943e01012ff1e56a40f655d634a6e146
-
Yedidya Feldblum authored
Summary: [Folly] Add a missing blank line (style nit). Reviewed By: lewissbaker Differential Revision: D13571222 fbshipit-source-id: 1dd23f4fc895e5698f94be6b2cbf90a9f30aae41
-
Stepan Palamarchuk authored
Summary: There's no need to call into steady_clock repeatedly, because the time is unlikely to change between nearby calls. And if it did change, we'd do incorrect math (like firing a timeout 10ms early). Reviewed By: djwatson, stevegury Differential Revision: D13541503 fbshipit-source-id: cc46c8a6bd6a72544803a22d16235be54fa94cc4
-
Stepan Palamarchuk authored
Summary: There's no need to distinguish between them (it only adds complexity by having two modes: schedules vs running). Reviewed By: djwatson Differential Revision: D13541505 fbshipit-source-id: 7cb2c3fb9ba4e3b191adee37ad1d28f471378f85
-
Stepan Palamarchuk authored
Summary: The current logic uses the last processed tick as the next tick while we are inside timeout handling. However, this is wrong in two ways: * the cascading logic computes the number of ticks until expiration from the real current time, but uses `lastTick_` as the base, which may lead to a premature timeout. * if we spent a non-trivial amount of time invoking callbacks, we may use a stale tick number as the base of some timeout. Reviewed By: djwatson Differential Revision: D13541504 fbshipit-source-id: b7e675bb5a707161f5c7f636d4c2a374a118da83
-
Stepan Palamarchuk authored
Summary: Currently, when cascading, we are unable to use bucket 0. This means that timeouts that are already expired would need to wait another tick before being fired. A simple example is when we're at tick 0 and scheduling for tick 256, such timeout would go into cascading logic and should fall into bucket 0 of next wheel epoch. However, the existing logic would cascade it to bucket 1. This diff fixes that, by reordering draining of current bucket and cascading timeouts. Reviewed By: yfeldblum Differential Revision: D13541506 fbshipit-source-id: 1284fca18612ae91f96538192bfad75e27cd816c
-
Stepan Palamarchuk authored
Summary: The current implementation may fire timeouts prematurely due to them being put in a bucket that we're about to run. In particular, timeouts that span `WHEEL_SIZE-1` (2.55-2.56s with default interval of 10ms) have very high likelihood of being fired immediately if we have another callback already scheduled. The issue is that we may use a bucket for the next epoch of wheel before the previous epoch drained it. This diff fixes it by using next unprocessed bucket as a base. Reviewed By: yfeldblum, djwatson Differential Revision: D13541502 fbshipit-source-id: 963139e77615750820a63274a1e21929e11184f1
-
Stepan Palamarchuk authored
Summary: Currently, every timeout scheduling computes the next tick twice: * when deciding which bucket to use * when actually scheduling the next timeout in `scheduleNextTimeout` This means we may end up using a bigger nextTick in `scheduleNextTimeout` (if we were at the boundary of a tick and/or there was a context switch). If the timeout value is low and it is scheduled for `nextTick`, it can then end up waiting 2.5s (one full round over all slots). With this change we make sure the same tick is used. The issue could be consistently reproduced by running 256 threads that just schedule immediate timeouts all the time. Reviewed By: yfeldblum Differential Revision: D13531907 fbshipit-source-id: a152cbed7b89e8426b2c52281b5b6e171e4520ea
-
Stepan Palamarchuk authored
Summary: Currently if `cancelAll` is called from inside the `timeoutExpired` of one of the callbacks, it will not cancel timeouts that we're about to run (they were extracted from the buckets already). This diff fixes that behavior by also canceling timeouts in `timeoutsToRunNow_` list (note, we were already doing that in the destructor). Reviewed By: yfeldblum Differential Revision: D13531908 fbshipit-source-id: f05ba31f2ac845851c1560d2ebdf41aa995b2deb
-
Yedidya Feldblum authored
Summary: [Folly] Support `-fno-exceptions` in `folly/small_vector.h`. Reviewed By: ot Differential Revision: D13499417 fbshipit-source-id: d1b50ff7f028203849888f42a44c9370986a7ac1
-
Yedidya Feldblum authored
Summary: [Folly] Fix misfiring `unused-local-typedef` under clang. The problem is fixed in most recent versions of clang, but appears with some previous versions. ``` folly/experimental/coro/Task.h:318:11: error: unused type alias 'handle_t' [-Werror,-Wunused-local-typedef] using handle_t = ^ ``` Reviewed By: andriigrynenko Differential Revision: D13569416 fbshipit-source-id: 94ae07455c2d08a1516c10baf1e3a16f2a29225f
-
Yedidya Feldblum authored
Summary: [Folly] Generic detection of empty-callable in `Function` ctor. Constructing a `folly::Function` from an empty `std::function` should result in an empty object. However, it results in a full object which, when invoked, throws `std::bad_function_call`. This may be a problem in cases which need to use the emptiness/fullness property to tell whether `std::bad_function_call` would be thrown if the object were to be invoked. This solution proposes a new protocol: check arguments of all types, not just pointers, for constructibility-from and equality-comparability-with `nullptr`, and then, if those two checks pass, check equality-comparison-with `nullptr`. If the argument type is constructible from `nullptr`, is equality-comparable with `nullptr`, and compares equal to `nullptr`, then treat the argument as empty, i.e., as if it were `nullptr`. This way, an empty `std::function` gets treated as if it were `nullptr` - as well as any other custom function object type out there - without having to enumerate every one of them. The new protocol is somewhat strict. An alternative would be to check whether the object is castable to `bool` and, if it is, cast it to `bool`, but such a protocol is broader than the one proposed in this diff. Fixes #886. Reviewed By: nbronson Differential Revision: D9287898 fbshipit-source-id: bcb574387122aac92d154e81732e82ddbcdd4915
-
Yedidya Feldblum authored
Summary: [Folly] Make `co_current_executor` look like `nullptr`, `std::nullopt`, `std::in_place`. * Use a `co_` prefix to indicate that it offers a useful result when awaited. * Offer a well-named value with a well-named type or type alias. * There is the `nullptr` value and the `std::nullptr_t` type or type alias. * There is the `std::nullopt` value and the `std::nullopt_t` type or type alias. * There is the `std::in_place` value and the `std::in_place_t` type or type alias. Reviewed By: andriigrynenko, lewissbaker Differential Revision: D13561713 fbshipit-source-id: 835da086e7165d37a952a1f169318cb566401d12
-
- 02 Jan, 2019 5 commits
-
-
Orvid King authored
Summary: Everything has been migrated over to the NetworkSocket overload. Reviewed By: yfeldblum Differential Revision: D13566609 fbshipit-source-id: 920505a9e91f1acc5810949049880ed07294621b
-
Nathan Bronson authored
Summary: In the case when an explicit capacity is specified (via reserve() or an initial capacity) we can save memory by using a bucket_count() off of the normal geometric sequence. This is beneficial for sizes <= Chunk::kCapacity for all policies, and for F14Vector tables with any size. In the multi-chunk F14Vector case this will save about 40%*size()*sizeof(value_type) when reserve() is used (such as during Thrift deserialization). The single-chunk savings are potentially larger. The changes do not affect the lookup path and should be a tiny perf win in the non-growing insert case. Exact sizing is only attempted on reserve() or rehash() when the requested capacity is >= 9/8 or <= 7/8 of the current bucket_count(), so it won't trigger O(n^2) behavior even if misused. Reviewed By: yfeldblum Differential Revision: D12848355 fbshipit-source-id: 4f70b4dabf626142cfe370e5b1db581af1a1103f
-
Maged Michael authored
Summary: Add test for recursive destruction. Reviewed By: djwatson Differential Revision: D13476864 fbshipit-source-id: 513f39f44ad2f0d338d10066b2e337902db32e00
-
Maged Michael authored
Summary: Enable destruction order guarantee, i.e., destructors for all key and value instances will complete before the completion of the destructor of the associated ConcurrentHashMap instance. Reviewed By: davidtgoldblatt Differential Revision: D13440153 fbshipit-source-id: 21dce09fa5ece00eaa9caf7a37b5a64be3319d5e
-
Maged Michael authored
Summary: Prevent deadlock on tagged retired lists within calls to `do_reclamation()` that call `cleanup_batch_tag()`. Changes: - Make locking the tagged list reentrant. - Eliminate sharding of tagged lists to prevent deadlock between concurrent calls to `do_reclamation()` on different shards that call `cleanup_batch_tag` on the other shard. - Expose the list of unprotected objects being reclaimed in `do_reclamation()` of the tagged list so that calls to `cleanup_batch_tag()` don't miss objects with a matching tag. - Refactor of commonalities between calls to `retire()` in `hazptr_obj_base` and `hazptr_obj_base_linked` into `hazptr_obj::push_obj()`. - Fixed release of tagged list lock to use CAS in a loop instead of store, since concurrent lock-free pushes are possible. Reviewed By: davidtgoldblatt Differential Revision: D13439983 fbshipit-source-id: 5cea585c577a64ea8b43a1827522335a18b9a933
-