- 03 Jan, 2019 11 commits
-
-
Stepan Palamarchuk authored
Summary: There's no need to repeatedly call into steady_clock, because the time is unlikely to change between calls. And if it does change, it means that we'd do incorrect math (like firing a timeout 10ms early).
Reviewed By: djwatson, stevegury
Differential Revision: D13541503
fbshipit-source-id: cc46c8a6bd6a72544803a22d16235be54fa94cc4
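The idea can be sketched outside of folly's internals. In this illustrative example (names are mine, not folly's), the clock is sampled once per processing pass and the snapshot is reused for every deadline check:

```cpp
#include <cassert>
#include <chrono>
#include <vector>

// Illustrative sketch, not folly's code: sample steady_clock::now() once
// per pass and reuse the snapshot, instead of re-reading the clock for
// each timeout.
struct Timeout {
  std::chrono::steady_clock::time_point deadline;
  bool fired = false;
};

void processTimeouts(std::vector<Timeout>& timeouts,
                     std::chrono::steady_clock::time_point now) {
  // Every comparison in this pass uses the same 'now' snapshot, so a
  // clock advance mid-pass cannot fire a later timeout early.
  for (auto& t : timeouts) {
    if (t.deadline <= now) {
      t.fired = true;
    }
  }
}
```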
-
Stepan Palamarchuk authored
Summary: There's no need to distinguish between them (it only adds complexity by having two modes: scheduled vs. running).
Reviewed By: djwatson
Differential Revision: D13541505
fbshipit-source-id: 7cb2c3fb9ba4e3b191adee37ad1d28f471378f85
-
Stepan Palamarchuk authored
Summary: The current logic uses the last processed tick as the next tick if we are inside timeout handling. However, this is wrong in two ways:
* the cascading logic computes the number of ticks until expiration from the real current time, but uses `lastTick_` as the base, which may lead to premature timeouts.
* if we spent a non-trivial amount of time invoking callbacks, we may use a stale tick number as the base of some timeout.
Reviewed By: djwatson
Differential Revision: D13541504
fbshipit-source-id: b7e675bb5a707161f5c7f636d4c2a374a118da83
-
Stepan Palamarchuk authored
Summary: Currently, when cascading, we are unable to use bucket 0. This means that timeouts that are already expired need to wait another tick before being fired. A simple example: when we're at tick 0 and scheduling for tick 256, the timeout goes through the cascading logic and should fall into bucket 0 of the next wheel epoch; the existing logic, however, cascades it to bucket 1. This diff fixes that by reordering the draining of the current bucket and the cascading of timeouts.
Reviewed By: yfeldblum
Differential Revision: D13541506
fbshipit-source-id: 1284fca18612ae91f96538192bfad75e27cd816c
-
Stepan Palamarchuk authored
Summary: The current implementation may fire timeouts prematurely due to them being put in a bucket that we're about to run. In particular, timeouts that span `WHEEL_SIZE - 1` ticks (2.55-2.56s with the default interval of 10ms) have a very high likelihood of being fired immediately if another callback is already scheduled. The issue is that we may use a bucket for the next epoch of the wheel before the previous epoch has drained it. This diff fixes it by using the next unprocessed bucket as the base.
Reviewed By: yfeldblum, djwatson
Differential Revision: D13541502
fbshipit-source-id: 963139e77615750820a63274a1e21929e11184f1
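The bucket arithmetic at issue can be sketched with illustrative names (a 256-slot wheel like HHWheelTimer's default; this is not folly's actual code):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative sketch: with WHEEL_SIZE buckets, a timeout due
// 'ticksFromNow' ticks from 'baseTick' lands in bucket
// (baseTick + ticksFromNow) % WHEEL_SIZE. Taking baseTick to be the
// next *unprocessed* tick keeps new timeouts out of buckets that the
// current expiry pass has claimed but not yet drained.
constexpr std::uint64_t WHEEL_SIZE = 256;

std::uint32_t bucketFor(std::uint64_t baseTick, std::uint64_t ticksFromNow) {
  return static_cast<std::uint32_t>((baseTick + ticksFromNow) % WHEEL_SIZE);
}
```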
-
Stepan Palamarchuk authored
Summary: Currently, every timeout scheduling computes the next tick twice:
* when deciding which bucket to use
* when actually scheduling the next timeout in `scheduleNextTimeout`
This means that we may end up using a bigger nextTick in `scheduleNextTimeout` (if we were at the boundary of a tick and/or there was a context switch). So if the timeout value is low and we put it in the bucket for `nextTick`, it ends up waiting 2.5s (while we make one full round over all slots). With this change we make sure the same tick is used. I could consistently reproduce the issue by running 256 threads that just schedule immediate timeouts all the time.
Reviewed By: yfeldblum
Differential Revision: D13531907
fbshipit-source-id: a152cbed7b89e8426b2c52281b5b6e171e4520ea
-
Stepan Palamarchuk authored
Summary: Currently, if `cancelAll` is called from inside the `timeoutExpired` of one of the callbacks, it does not cancel the timeouts that we're about to run (they were already extracted from the buckets). This diff fixes that behavior by also canceling the timeouts in the `timeoutsToRunNow_` list (note that we were already doing that in the destructor).
Reviewed By: yfeldblum
Differential Revision: D13531908
fbshipit-source-id: f05ba31f2ac845851c1560d2ebdf41aa995b2deb
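The shape of the fix can be sketched with simplified, illustrative types (this is not folly's code): cancellation must visit both the scheduled buckets and the batch already extracted for the current tick.

```cpp
#include <cassert>
#include <vector>

// Illustrative sketch: cancelAll marks timeouts in the scheduled
// buckets *and* in the list already extracted for this tick, so a
// callback that calls cancelAll re-entrantly also stops the remaining
// not-yet-run callbacks of the current batch.
struct Timeout { bool cancelled = false; };

void cancelAll(std::vector<Timeout*>& buckets,
               std::vector<Timeout*>& timeoutsToRunNow) {
  for (auto* t : buckets) { t->cancelled = true; }
  for (auto* t : timeoutsToRunNow) { t->cancelled = true; }  // the fix
}
```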
-
Yedidya Feldblum authored
Summary: [Folly] Support `-fno-exceptions` in `folly/small_vector.h`.
Reviewed By: ot
Differential Revision: D13499417
fbshipit-source-id: d1b50ff7f028203849888f42a44c9370986a7ac1
-
Yedidya Feldblum authored
Summary: [Folly] Fix misfiring `unused-local-typedef` under clang. The problem is fixed in most recent versions of clang, but appears with some previous versions.

```
folly/experimental/coro/Task.h:318:11: error: unused type alias 'handle_t' [-Werror,-Wunused-local-typedef]
  using handle_t =
        ^
```

Reviewed By: andriigrynenko
Differential Revision: D13569416
fbshipit-source-id: 94ae07455c2d08a1516c10baf1e3a16f2a29225f
-
Yedidya Feldblum authored
Summary: [Folly] Generic detection of empty callables in the `Function` ctor.

Constructing a `folly::Function` from an empty `std::function` should result in an empty object. However, it results in a full object which, when invoked, throws `std::bad_function_call`. This may be a problem in cases which need to use the emptiness/fullness property to tell whether `std::bad_function_call` would be thrown if the object were invoked.

This solution proposes a new protocol: check arguments of all types, not just pointers, for constructibility from and equality-comparability with `nullptr`, and if those two checks pass, check for equality with `nullptr`. If the argument type is constructible from `nullptr`, is equality-comparable with `nullptr`, and compares equal to `nullptr`, then treat the argument as empty, i.e., as if it were `nullptr`. This way an empty `std::function` gets treated as if it were `nullptr` - as is any other custom function object type out there - without having to enumerate every one of them.

The new protocol is somewhat strict. An alternative is to check whether the object is castable to `bool` and, if it is, cast it to `bool`, but such a protocol is broader than the one proposed in this diff.

Fixes #886.
Reviewed By: nbronson
Differential Revision: D9287898
fbshipit-source-id: bcb574387122aac92d154e81732e82ddbcdd4915
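The detection protocol described above can be sketched in standard C++17. All names here are illustrative, not folly's actual implementation:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <type_traits>
#include <utility>

// Sketch of the protocol: treat an argument as empty when its type is
// constructible from nullptr, equality-comparable with nullptr, and the
// value itself compares equal to nullptr.
template <typename T, typename = void>
struct IsNullptrComparable : std::false_type {};

template <typename T>
struct IsNullptrComparable<
    T,
    std::void_t<decltype(std::declval<const T&>() == nullptr)>>
    : std::is_constructible<T, std::nullptr_t> {};

template <typename T>
bool isEmptyCallable(const T& f) {
  if constexpr (IsNullptrComparable<T>::value) {
    return f == nullptr;  // e.g. an empty std::function
  } else {
    return false;  // lambdas etc. can never be empty
  }
}
```

A lambda fails the constructible-from-`nullptr` and comparable-with-`nullptr` checks, so it is always treated as full; an empty `std::function` passes both and compares equal to `nullptr`, so it is treated as empty.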
-
Yedidya Feldblum authored
Summary: [Folly] Make `co_current_executor` look like `nullptr`, `std::nullopt`, `std::in_place`.
* Use a `co_` prefix to indicate that it offers a useful result when awaited.
* Offer a well-named value with a well-named type or type alias:
  * There is the `nullptr` value and the `std::nullptr_t` type or type alias.
  * There is the `std::nullopt` value and the `std::nullopt_t` type or type alias.
  * There is the `std::in_place` value and the `std::in_place_t` type or type alias.
Reviewed By: andriigrynenko, lewissbaker
Differential Revision: D13561713
fbshipit-source-id: 835da086e7165d37a952a1f169318cb566401d12
-
- 02 Jan, 2019 5 commits
-
-
Orvid King authored
Summary: Everything has been migrated over to the NetworkSocket overload.
Reviewed By: yfeldblum
Differential Revision: D13566609
fbshipit-source-id: 920505a9e91f1acc5810949049880ed07294621b
-
Nathan Bronson authored
Summary: In the case when an explicit capacity is specified (via reserve() or an initial capacity), we can save memory by using a bucket_count() off the normal geometric sequence. This is beneficial for sizes <= Chunk::kCapacity for all policies, and for F14Vector tables of any size. In the multi-chunk F14Vector case this saves about 40% * size() * sizeof(value_type) when reserve() is used (such as during Thrift deserialization). The single-chunk savings are potentially larger.

The changes do not affect the lookup path and should be a tiny perf win in the non-growing insert case. Exact sizing is only attempted on reserve() or rehash() when the requested capacity is >= 9/8 or <= 7/8 of the current bucket_count(), so it won't trigger O(n^2) behavior even if misused.
Reviewed By: yfeldblum
Differential Revision: D12848355
fbshipit-source-id: 4f70b4dabf626142cfe370e5b1db581af1a1103f
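The 9/8 - 7/8 guard mentioned above is easy to sketch. This is an illustrative helper, not folly's code:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative sketch of the guard: only attempt an exact bucket count
// (off the geometric sequence) when the requested capacity is >= 9/8 or
// <= 7/8 of the current one, so a stream of nearby reserve() calls
// cannot trigger O(n^2) rehashing.
bool shouldAttemptExactSize(std::size_t requested, std::size_t current) {
  return requested * 8 >= current * 9 || requested * 8 <= current * 7;
}
```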
-
Maged Michael authored
Summary: Add test for recursive destruction.
Reviewed By: djwatson
Differential Revision: D13476864
fbshipit-source-id: 513f39f44ad2f0d338d10066b2e337902db32e00
-
Maged Michael authored
Summary: Enable the destruction order guarantee, i.e., destructors for all key and value instances complete before the completion of the destructor of the associated ConcurrentHashMap instance.
Reviewed By: davidtgoldblatt
Differential Revision: D13440153
fbshipit-source-id: 21dce09fa5ece00eaa9caf7a37b5a64be3319d5e
-
Maged Michael authored
Summary: Prevent deadlock on tagged retired lists within calls to `do_reclamation()` that call `cleanup_batch_tag()`.

Changes:
- Make locking the tagged list reentrant.
- Eliminate sharding of tagged lists, to prevent deadlock between concurrent calls to `do_reclamation()` on different shards that call `cleanup_batch_tag()` on the other shard.
- Expose the list of unprotected objects being reclaimed in `do_reclamation()` of the tagged list, so that calls to `cleanup_batch_tag()` don't miss objects with a matching tag.
- Refactor the commonalities between the calls to `retire()` in `hazptr_obj_base` and `hazptr_obj_base_linked` into `hazptr_obj::push_obj()`.
- Fix the release of the tagged list lock to use CAS in a loop instead of a store, since concurrent lock-free pushes are possible.
Reviewed By: davidtgoldblatt
Differential Revision: D13439983
fbshipit-source-id: 5cea585c577a64ea8b43a1827522335a18b9a933
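The last point (CAS release instead of a plain store) can be sketched with a lock bit packed into an atomic word. This is illustrative, not folly's code:

```cpp
#include <atomic>
#include <cassert>
#include <cstdint>

// Illustrative sketch: the low bit of 'head' is the lock bit, and
// lock-free pushers may change the upper bits while the lock is held,
// so releasing the lock must CAS in a loop rather than blindly store a
// cached value (which would clobber concurrent pushes).
constexpr std::uintptr_t kLockBit = 1;

void unlock(std::atomic<std::uintptr_t>& head) {
  auto expected = head.load(std::memory_order_relaxed);
  while (!head.compare_exchange_weak(expected,
                                     expected & ~kLockBit,
                                     std::memory_order_release,
                                     std::memory_order_relaxed)) {
    // 'expected' was refreshed by the failed CAS; retry with it.
  }
}
```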
-
- 30 Dec, 2018 1 commit
-
-
Yedidya Feldblum authored
Summary: [Folly] `test_once`, a way to check whether any call to `call_once` with a given `once_flag` has succeeded.

One example of use might be for exception-safety: guarding object destruction when object construction is guarded by the `once_flag`, when the user is interested in conserving per-object memory and wishes to avoid the extra 8-byte overhead of `std::optional`.

```lang=c++
template <typename T>
struct Lazy {
  folly::aligned_storage_for_t<T> storage;
  folly::once_flag once;

  ~Lazy() {
    if (folly::test_once(once)) {
      reinterpret_cast<T&>(storage).~T();
    }
  }

  template <typename... A>
  T& construct_or_fetch(A&&... a) {
    folly::call_once(once, [&] { new (&storage) T(std::forward<A>(a)...); });
    return reinterpret_cast<T&>(storage);
  }
};
```

Reviewed By: ovoietsa
Differential Revision: D13561365
fbshipit-source-id: 8376c154002f1546f099903c4dc6be94dd2def8e
-
- 28 Dec, 2018 2 commits
-
-
Nathan Bronson authored
Summary: Previously the debug-build randomization of F14 iteration order was applied only to F14ValueMap/Set, F14NodeMap/Set, and the F14FastMap/Set instances that use the value storage strategy. This extends the behavior to the F14FastMap/Set instances that use the vector storage strategy, which are those where sizeof(value_type) >= 24.

F14FastMap/Set using the vector storage strategy must move items to randomize, so this reordering will also expose code that assumes reference or iterator stability across multiple inserts without a call to .reserve().
Reviewed By: yfeldblum
Differential Revision: D13305818
fbshipit-source-id: 178a1f7b707998728a0451af34269e735bf063f3
-
Nitin Garg authored
Summary: Needed to get a rough sense of its expense, to know when it would be worth collecting in contention events.
Reviewed By: prateek1404
Differential Revision: D13544220
fbshipit-source-id: 6a4c00d84d997c6fbe5dfb0e0cdb9bfbbe97a8a0
-
- 22 Dec, 2018 1 commit
-
-
Victor Zverovich authored
Summary: Add the `singleton_thread_local_test` companion shared library to the CMake config, and enable it only when folly is compiled with `-fPIC`. This should fix the Travis build.
Reviewed By: yfeldblum
Differential Revision: D13542693
fbshipit-source-id: 2372da298cc69c2e7e491fbde681fe90d8879d47
-
- 21 Dec, 2018 1 commit
-
-
Yedidya Feldblum authored
Summary: [Folly] `folly::coro::co_invoke`, both generalizing and constraining `folly::coro::lambda`, and modeled on `std::invoke`.

Constrained to work only on callables which return `Task<_>`, in order to be sure that `invoke_result_t<F, A...>` is the same as the type of `co_await <expr>` where `expr` has type `invoke_result_t<F, A...>`. We know that this constraint holds for `Task<_>`. The alternative is to make it work for all types and use `decltype(auto)` as the return type.
Reviewed By: andriigrynenko
Differential Revision: D13523334
fbshipit-source-id: 9af220dd45d6b9f6676c5ef49ba2e01395babd72
-
- 20 Dec, 2018 3 commits
-
-
Doron Roberts-Kedes authored
Summary: Found while testing with BufferedDeterministicAtomic.
Reviewed By: djwatson
Differential Revision: D13509000
fbshipit-source-id: ec1cef0afc888136db3ccda8dff99bc0a45f6bff
-
Doron Roberts-Kedes authored
Summary: Make the futex[Wait/Wake]Impl code for DeterministicAtomic templated, rename it to deterministicFutex[Wait/Wake]Impl, and move it to DeterministicSchedule.h so that it can be shared by BufferedDeterministicAtomic. Point futex[Wait/Wake]Impl for DeterministicAtomic at deterministicFutex[Wait/Wake]Impl<DeterministicAtomic>. Create a new futex[Wait/Wake]Impl for BufferedDeterministicAtomic using deterministicFutex[Wait/Wake]Impl<BufferedDeterministicAtomic>.
Reviewed By: djwatson
Differential Revision: D13519817
fbshipit-source-id: c792ea9dcd6287236bc772e9aa9662277cc9e642
-
Dan Melnic authored
Summary: Replace new/delete[] with std::unique_ptr.
Reviewed By: yfeldblum
Differential Revision: D13525009
fbshipit-source-id: 8497329e2881f1cfd6fe7ca5c4ae432c3071faec
-
- 19 Dec, 2018 4 commits
-
-
Orvid King authored
Summary: The file descriptor overload will be going away.
Reviewed By: yfeldblum
Differential Revision: D13508239
fbshipit-source-id: 3fce90d98e9252881cb9ed0030fba558d89470af
-
Andrii Grynenko authored
Reviewed By: yfeldblum
Differential Revision: D13515209
fbshipit-source-id: 6d4688242a586b6e5558c62c1c6f3bb7c6595dfb
-
Andrii Grynenko authored
Summary: We should never end up doing compare_exchange_weak with oldVal == 0, because that may result in an increment/decrement from 0.
Reviewed By: yfeldblum
Differential Revision: D13514210
fbshipit-source-id: 0ccffe5d9525389ba02208f3bf37ce14acb9f28e
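The invariant being protected is the classic increment-if-nonzero pattern on a reference count. A minimal sketch (illustrative names, not folly's code):

```cpp
#include <atomic>
#include <cassert>

// Illustrative sketch: increment-if-nonzero. The CAS never runs with an
// expected value of 0, so a racing reader can never resurrect a count
// that has already dropped to zero.
bool tryIncRef(std::atomic<int>& count) {
  int old = count.load(std::memory_order_relaxed);
  while (old != 0) {
    if (count.compare_exchange_weak(old, old + 1,
                                    std::memory_order_acq_rel,
                                    std::memory_order_relaxed)) {
      return true;
    }
    // 'old' was reloaded by the failed CAS; the loop re-checks for zero.
  }
  return false;  // already zero: the object is being destroyed
}
```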
-
Andrii Grynenko authored
Summary: coro::Task can always be converted to a lazy SemiFuture. This can be very useful when converting existing futures code to coroutines.
Reviewed By: yfeldblum
Differential Revision: D13502414
fbshipit-source-id: 2e3971217086c762f3f831ef19d0d88b621b7c80
-
- 18 Dec, 2018 2 commits
-
-
Andrii Grynenko authored
Summary: This allows converting asynchronous APIs that require an Executor to a lazy SemiFuture.
Reviewed By: yfeldblum
Differential Revision: D13502409
fbshipit-source-id: 8c824f314d83f3ab208e7b384c9e535cf40210f1
-
Orvid King authored
Summary: This will probably break things.
Reviewed By: yfeldblum
Differential Revision: D13461653
fbshipit-source-id: bd678ad9ac810bfec7be9411d290071358c66781
-
- 17 Dec, 2018 4 commits
-
-
Yedidya Feldblum authored
Summary: [Folly] Change the Synchronized::copy copy-assignment overload: rename it to `copy_into`, and take a reference instead of a pointer. (Note: this ignores all push blocking failures!)
Reviewed By: aary
Differential Revision: D13475878
fbshipit-source-id: 4923a0cc73359853357dba60c6e4be654e92ce82
-
Yedidya Feldblum authored
Summary: [Folly] "Upgrade" wording in `Synchronized`. (Note: this ignores all push blocking failures!)
Reviewed By: aary
Differential Revision: D13484147
fbshipit-source-id: 5ccaac2de5e03986885000869a862d2a37ab5751
-
Aaryaman Sagar authored
Summary: The behavior of not allowing write locks through const methods is inconsistent, as there is no way of determining the semantics of the protected object. In particular, two kinds of classes come to mind here:
- Pointer- and reference-like classes. These can be const while the underlying data is not, so it should be perfectly reasonable to allow users to acquire a write lock on a const Synchronized<> object.
- Types with mutable members. These can be write-locked even when const; this behavior is probably okay.
On the other hand, the previous motivation for this diff - inconsistency with upgrade locks - is being removed: they will no longer expose non-const access.
Reviewed By: yfeldblum
Differential Revision: D13478631
fbshipit-source-id: 652a08c61abf35c3eadc45cedc5d300fbef83a6b
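The pointer-like case can be shown in a few lines of standard C++ (illustrative type, unrelated to folly's sources): constness of the wrapper says nothing about constness of the referenced data.

```cpp
#include <cassert>

// Illustrative sketch: a const wrapper does not make the referenced
// data const, so refusing write access through a const wrapper is not a
// real guarantee.
struct IntRef {
  int* p;
  int& get() const { return *p; }  // const method, mutable target
};
```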
-
Andrii Grynenko authored
Summary: Introduce a coro::lambda helper which makes it safe to create coroutine lambdas with captures.
Reviewed By: lewissbaker
Differential Revision: D13473068
fbshipit-source-id: a1177b4d57715b10fc4398fa6626ee105a8a43ce
-
- 16 Dec, 2018 2 commits
-
-
Orvid King authored
Summary: The fd overload is going to be removed.
Reviewed By: yfeldblum
Differential Revision: D13478000
fbshipit-source-id: d45c5d7c0dbcbde31976ff5dc48d1abc97d5a743
-
Orvid King authored
Summary: It's dead.
Reviewed By: yfeldblum
Differential Revision: D13477847
fbshipit-source-id: 9ba568fe7070b337646e89790339dc6c35d0b86f
-
- 14 Dec, 2018 4 commits
-
-
David Goldblatt authored
Summary: This is a ReadMostlyMainPtr variant that allows racy accesses. By making the write side slower, the read side can avoid any contended shared accesses or RMWs.
Reviewed By: djwatson
Differential Revision: D13413105
fbshipit-source-id: f03c7ad58be72b63549b145ed6f41c51563831d1
-
Lee Howes authored
Summary: onError is to be phased out because:
* It is weakly typed.
* (Most seriously) It loses the executor and so terminates a via chain, silently causing subsequent work to run in the wrong place.
onError is replaced with thenError, which fixes both problems. This diff merely deprecates onError to allow for a gentle phase-out.
Reviewed By: yfeldblum
Differential Revision: D13418290
fbshipit-source-id: a0c5e65a97ed41de18d85ceab258417355a43139
-
Yedidya Feldblum authored
Summary: [Folly] Extract `FunctionTraitsSharedProxy`, deduplicating four nearly identical implementations.
Differential Revision: D13461506
fbshipit-source-id: 2927dbe1629024cf778301c509b82711940a8099
-
Orvid King authored
Summary: No longer needed.
Reviewed By: djwatson
Differential Revision: D13456759
fbshipit-source-id: ea08992d3bd4babbdcf326b0c01f64cd1184784f
-