Feedback Regarding Service Degradation and Usage Constraints #192852
Replies: 1 comment
Not going to defend the April 15 to 17 wobble, because yes, a lot of people felt the same thing. But a few practical notes that might get you through these windows without burning quota.

What's almost certainly happening behind the scenes: GitHub routes Copilot requests across multiple model providers and occasionally shifts traffic when one backend is overloaded. When that happens you can silently end up on a smaller or slower variant of the model you selected, which explains the "requiring more iterations to solve the same thing" feeling without an explicit downgrade notice. The premium request counter ticks the same either way, which is the real frustration.

Things that actually help during one of these windows:
• Pin the model explicitly in the chat picker instead of leaving it on Auto. Auto is the one that reroutes most aggressively when things get congested. Claude Sonnet 4.5 or GPT-5 at a fixed pick tends to behave more consistently than Auto, even at the cost of a higher multiplier.
• Watch your actual premium usage at github.com/settings/copilot/usage. The counter there updates in near real time and will tell you whether you're paying a heavier multiplier than you think. That's often where the "massive token consumption" feeling comes from: a model you didn't realize was being billed at 5x.
• Keep prompts short and scoped during degradation periods. Long multi-file agent runs are where the inefficiency compounds. When the backend is unstable, shorter single-file turns waste less quota when they go sideways.

If you want a resolution or credit for the specific dates, the usage dashboard has a feedback button that routes to billing. General feedback on rate limits and transparency goes further there than in this category.

What were you mostly working on during those days, agent mode runs or chat? That changes which of the above matters most.
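To make the multiplier point concrete, here is a minimal sketch of how a heavier-billed model drains a premium-request allowance faster than the raw turn count suggests. The allowance figure and the multiplier values are illustrative assumptions for the example, not published rates; check the usage dashboard for the real numbers on your plan.

```python
# Sketch: estimating premium-request burn under per-model multipliers.
# ASSUMPTIONS: the allowance and multiplier values below are hypothetical
# placeholders, not GitHub's actual billing rates.

MONTHLY_PREMIUM_ALLOWANCE = 1500.0  # hypothetical Pro+ allowance

# Hypothetical per-request billing multipliers by model pick.
MULTIPLIERS = {
    "auto": 1.0,
    "fixed-frontier-model": 5.0,  # e.g. a model billed at 5x
}

def requests_remaining(allowance: float, history: list[tuple[str, int]]) -> float:
    """Subtract each batch of chat turns, weighted by its model's multiplier."""
    spent = sum(MULTIPLIERS[model] * count for model, count in history)
    return allowance - spent

# 200 turns on Auto plus 40 turns on a 5x model bill as 400 premium requests,
# even though only 240 turns were made.
remaining = requests_remaining(
    MONTHLY_PREMIUM_ALLOWANCE,
    [("auto", 200), ("fixed-frontier-model", 40)],
)
print(remaining)  # 1100.0
```

The asymmetry is the point: 40 turns at 5x cost as much as 200 turns at 1x, which is exactly the "massive token consumption" surprise when a pick is billed heavier than expected.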
🏷️ Discussion Type
Question
💬 Feature/Topic Area
VS Code
Body
Subject: Serious performance regression and usage restrictions for a Pro+ subscriber
To the Support Team,
I am writing to express my strong dissatisfaction with the service quality between the 15th and the 17th. A significant "downgrade" in model intelligence appears to have occurred during this period, which has directly impacted my productivity and the value of my subscription.
Key Issues Experienced:
• Model Intelligence Drop: I found myself consuming a massive amount of tokens to solve single problems that previously required minimal input. The models became noticeably less capable, requiring constant prompt iterations and manual corrections.
• Workflow Disruption: To compensate for the poor performance and slow response times, I was forced to set up "chat queues" to let the system process tasks independently. This is a highly inefficient way to use a tool that is supposed to be an "assistant."
• The "Traffic Jam" Effect: To use a metaphor: a wide, speed-limit-free highway allows for fast transit, but a narrow, congested road only causes traffic jams. Your internal decisions have created a "bottleneck" in the AI’s reasoning capabilities, yet I am the one being penalized with usage limits and slow speeds.
• Subscription Value: As a Pro+ subscriber, I expect a premium, high-speed experience. Instead, I am unable to use Copilot effectively due to these self-inflicted service issues.
It is unfair to restrict the usage of paying customers because of performance issues caused by your own backend changes. I request a clarification on these stability issues and a resolution that restores the intelligence and speed I am paying for.