Add configurable queue size offset to adaptive routing hybrid score #18288

Open

timothy-e wants to merge 1 commit into apache:master from
Conversation
…pache#601) cc stripe-private-oss-forks/pinot-reviewers r? dang saiswapnilar

Adaptive routing uses the following formula to score servers:

```java
@JsonProperty("hybridScore")
public double computeHybridScore() {
  double estimatedQSize = _numInFlightRequests + _inFlighRequestsEMA.getAverage();
  return Math.pow(estimatedQSize, _hybridScoreExponent) * _latencyMsEMA.getAverage();
}
```

`EMA` = exponentially weighted moving average. This term is pretty unstable when the number of in-flight requests is low (e.g. canary often has 0 in-flight requests, even at 80 RPS). This formula might be better as `(1 + A + B)^3 * C`, so the server latency is always accounted for in the score.

Looking at the [paper](https://www.usenix.org/system/files/conference/nsdi15/nsdi15-paper-suresh.pdf) that was the original source for the formula, its "queue-size estimation" included a `+1` term; omitting it appears to have been a bug in the original adaptive routing implementation.

To avoid causing behavioural regressions for existing users of adaptive routing, instead of simply adding a `+1` to the formula, we add a configurable offset parameter defaulting to 0. On high-concurrency clusters this has marginal impact (if `num_inflight_requests` ~= 100, the queue estimate becomes 101). On low-concurrency clusters the scores become more meaningful: multiple idle servers would still score roughly proportional to their latency, instead of all scoring 0.

https://git.corp.stripe.com/stripe-internal/mint/pull/2092658

Stripe-Original-Repo: stripe-private-oss-forks/pinot
Stripe-Monotonic-Timestamp: v2/2026-04-17T20:39:51Z/0
Stripe-Original-PR: https://git.corp.stripe.com/stripe-private-oss-forks/pinot/pull/601
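The proposed change could be sketched as below. This is a minimal standalone sketch, not the actual patch: the class name `HybridScoreSketch` and the constructor-injected `queueSizeOffset` parameter are hypothetical stand-ins for the new config knob, and the EMA state is passed in as plain doubles.

```java
// Sketch of the hybrid score with a configurable queue-size offset.
// queueSizeOffset = 0 reproduces today's behaviour; queueSizeOffset = 1
// matches the paper's "+1" queue-size estimation.
public class HybridScoreSketch {
  private final double _hybridScoreExponent;
  private final double _queueSizeOffset; // hypothetical new knob, default 0

  public HybridScoreSketch(double hybridScoreExponent, double queueSizeOffset) {
    _hybridScoreExponent = hybridScoreExponent;
    _queueSizeOffset = queueSizeOffset;
  }

  /** Score = (offset + inFlight + inFlightEMA)^exponent * latencyEMA. */
  public double computeHybridScore(double numInFlightRequests,
      double inFlightRequestsEma, double latencyMsEma) {
    double estimatedQSize = _queueSizeOffset + numInFlightRequests + inFlightRequestsEma;
    return Math.pow(estimatedQSize, _hybridScoreExponent) * latencyMsEma;
  }

  public static void main(String[] args) {
    HybridScoreSketch legacy = new HybridScoreSketch(3.0, 0.0);
    HybridScoreSketch withOffset = new HybridScoreSketch(3.0, 1.0);
    // An idle server (0 in flight, EMA 0) with a 50 ms latency EMA:
    // legacy scoring collapses to 0, the offset variant keeps the latency signal.
    System.out.println(legacy.computeHybridScore(0, 0.0, 50.0));     // 0.0
    System.out.println(withOffset.computeHybridScore(0, 0.0, 50.0)); // 50.0
  }
}
```

With the offset, two idle servers with different latency EMAs get distinguishable scores instead of tying at 0.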
Codecov Report

✅ All modified and coverable lines are covered by tests.

```
@@             Coverage Diff              @@
##             master   #18288      +/-   ##
============================================
+ Coverage     63.59%   63.64%    +0.05%
  Complexity     1659     1659
============================================
  Files          3244     3244
  Lines        197390   197392        +2
  Branches      30555    30555
============================================
+ Hits         125528   125631      +103
+ Misses        61826    61718      -108
- Partials      10036    10043        +7
```
timothy-e (Contributor, Author):

> @Jackie-Jiang @yashmayya can you take a look at this?
The `+1` term was also mentioned in the original feature design doc.

We tested on an internal cluster and saw the adaptive routing scores stabilize on our low-concurrency test cluster.
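To make the high- vs low-concurrency claim concrete, here is a quick arithmetic sketch. It assumes the exponent is 3 (as in the `(1 + A + B)^3 * C` form above); `OffsetImpactDemo` and its `score` helper are hypothetical, not code from this PR.

```java
// Illustrates why a +1 queue-size offset is marginal for busy servers
// but meaningful for idle ones.
public class OffsetImpactDemo {
  /** (offset + estimated queue size)^3 * latency EMA. */
  public static double score(double offset, double estimatedQSize, double latencyMs) {
    return Math.pow(offset + estimatedQSize, 3) * latencyMs;
  }

  public static void main(String[] args) {
    // High concurrency: ~100 in flight, the +1 moves the score by only ~3%.
    System.out.printf("busy:      %.0f -> %.0f%n", score(0, 100, 10), score(1, 100, 10));
    // Low concurrency: today both idle servers score 0; with the offset,
    // the 50 ms server scores 10x worse than the 5 ms server, as it should.
    System.out.printf("idle fast: %.0f -> %.0f%n", score(0, 0, 5), score(1, 0, 5));
    System.out.printf("idle slow: %.0f -> %.0f%n", score(0, 0, 50), score(1, 0, 50));
  }
}
```

The busy-server scores go from 100^3 * 10 = 10,000,000 to 101^3 * 10 = 10,303,010 (about +3%), while the idle servers go from an uninformative 0/0 tie to 5 vs 50.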