Commit 9e26fe89 authored by Giuseppe Ottaviano, committed by Facebook GitHub Bot

Deprecate IOBufQueue::clear()

Summary:
`IOBufQueue::clear()` is a footgun: while the comment implies that the buffers in the chain can be reused, in reality only the last one is reused, as all methods in `IOBufQueue` only deal with the tail. Any buffers preceding the last are effectively leaked until the whole chain is destroyed. Furthermore, the tail buffer may not be reused either, if it's shared.
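
For illustration, a minimal sketch of the pattern (the function name is made up; the `IOBufQueue`/`IOBuf` calls are the existing folly API):

```cpp
#include <folly/io/IOBuf.h>
#include <folly/io/IOBufQueue.h>

void clearFootgunSketch() {
  folly::IOBufQueue queue{folly::IOBufQueue::cacheChainLength()};
  queue.append("first chunk");                             // buffer 1
  queue.append(folly::IOBuf::copyBuffer("second chunk"));  // buffer 2, the new tail
  // clear() zeroes the lengths, but subsequent appends only write into the
  // tail's tailroom: buffer 1's capacity is never handed out again, and the
  // buffer stays in the chain until the whole chain is destroyed.
  queue.clear();
  queue.append("more data");  // goes into buffer 2 (or a fresh allocation)
}
```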

Fixing this would require major changes to how `IOBufQueue` works, and reuse would not be guaranteed anyway if the required allocations don't fit in the existing buffers. It is safer to just remove the method.

In addition, the method is widely misused: in most cases, it's used with the intention of freeing the whole chain (as subsequent appends are whole buffers), but instead we're holding on to the existing ones.

Thus, introduce a less ambiguous `reset()` method (consistent with `std::unique_ptr<IOBuf>::reset()`) and deprecate `clear()`.
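
For callers that really just want to start over, the migration is a one-line change, sketched below with a hypothetical `queue`:

```cpp
// Before: reads like "free everything", but the chain is kept alive.
queue.clear();

// After: explicitly frees the whole chain; the queue's options are preserved.
queue.reset();
```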

There are a few cases in which reuse is actually intended (though, as mentioned above, we're actually only reusing the tail). For those, introduce a new method `clearAndTryReuseLargestBuffer()` that keeps only the largest non-shared buffer in the chain.
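
A sketch of the intended call pattern for those cases; `Message`, `serializeInto`, and `writeToSocket` are hypothetical stand-ins for caller code:

```cpp
void sendAll(const std::vector<Message>& messages) {
  folly::IOBufQueue scratch{folly::IOBufQueue::cacheChainLength()};
  for (const auto& msg : messages) {
    serializeInto(scratch, msg);      // hypothetical: appends msg's bytes to the queue
    writeToSocket(*scratch.front());  // hypothetical: consumes the chain non-destructively
    // Drop the data, but keep the largest non-shared buffer so the next
    // iteration can append into it without a fresh allocation.
    scratch.clearAndTryReuseLargestBuffer();
  }
}
```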

Reviewed By: Gownta

Differential Revision: D32649882

fbshipit-source-id: c12ffcd0809dae2a4421156d25bcf9ac31a337b1
parent 345652aa
@@ -408,6 +408,23 @@ void IOBufQueue::clear() {
  chainLength_ = 0;
}

void IOBufQueue::clearAndTryReuseLargestBuffer() {
  auto guard = updateGuard();
  // Unlink every buffer in the chain, remembering the largest one that is not
  // shared (a shared buffer cannot be reused for writing); the rest are freed
  // as they go out of scope.
  std::unique_ptr<folly::IOBuf> best;
  while (head_) {
    auto buf = std::exchange(head_, head_->pop());
    if (!buf->isSharedOne() &&
        (best == nullptr || buf->capacity() > best->capacity())) {
      best = std::move(buf);
    }
  }
  if (best != nullptr) {
    best->clear();
    head_ = std::move(best);
  }
  chainLength_ = 0;
}

void IOBufQueue::appendToString(std::string& out) const {
  if (!head_) {
    return;
......
@@ -521,9 +521,24 @@ class IOBufQueue {
   * Clear the queue. Note that this does not release the buffers, it
   * just sets their length to zero; useful if you want to reuse the
   * same queue without reallocating.
   *
   * DEPRECATED: If the queue is chained, only the last buffer will actually be
   * reused. In most cases, reset() should be used instead, or if reuse is
   * intended, clearAndTryReuseLargestBuffer().
   */
  void clear();

  /**
   * Clear the queue, freeing all the buffers. Options are preserved.
   */
  void reset() { move(); }

  /**
   * Clear the queue, but try to clear and keep the largest buffer for reuse
   * when possible. Options are preserved.
   */
  void clearAndTryReuseLargestBuffer();

  /**
   * Append the queue to a std::string. Non-destructive.
   */
......
@@ -126,6 +126,43 @@ TEST(IOBufQueue, AppendStringPiece) {
  EXPECT_EQ(0, memcmp(chain->data(), chain2->data(), s.length()));
}

TEST(IOBufQueue, Reset) {
  IOBufQueue queue(clOptions);
  queue.preallocate(8, 8);
  queue.append(SCL("Hello "));
  queue.preallocate(16, 16);
  queue.append(SCL("World"));
  EXPECT_EQ(2, queue.front()->countChainElements());
  queue.reset();
  EXPECT_EQ(queue.front(), nullptr);
  checkConsistency(queue);
}

TEST(IOBufQueue, ClearAndTryReuseLargestBuffer) {
  IOBufQueue queue(clOptions);
  queue.preallocate(8, 8);
  queue.append(SCL("Hello "));
  queue.preallocate(16, 16);
  queue.append(SCL("World "));
  // The current tail will be kept.
  const IOBuf* kept = queue.front()->prev();
  // The new tail is larger but cannot be reused because it's shared.
  queue.preallocate(32, 32);
  queue.append(SCL("abc"));
  const auto shared = queue.front()->prev()->cloneOne();
  EXPECT_EQ(3, queue.front()->countChainElements());
  queue.clearAndTryReuseLargestBuffer();
  ASSERT_TRUE(queue.front());
  EXPECT_TRUE(queue.empty());
  // Only the largest non-shared buffer is kept.
  EXPECT_EQ(1, queue.front()->countChainElements());
  ASSERT_EQ(kept, queue.front());
  checkConsistency(queue);
}

TEST(IOBufQueue, AppendIOBufRef) {
  IOBufQueue queue(clOptions);
  queue.append(*stringToIOBuf("abc", 3), true);
......