I was on the Claude Code Max $100 subscription; I cancelled yesterday.
This wasn't a one-off drop in quality; I've been seeing consistently bad output for the past two weeks, while my prompting practice hasn't changed. I used to be amazed by Claude Code's output, and while it has always been true that it could go off the rails if used wrong, lately the answers have been bad on the most basic tasks. I pulled the trigger when it started failing at simple Python pandas filtering.
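To give a sense of the level of task I mean, here's a minimal sketch of that kind of filtering (the DataFrame and column names below are made up purely for illustration):

    import pandas as pd

    # Made-up example data, only to show the difficulty level of the task.
    df = pd.DataFrame({"status": ["open", "closed", "open"],
                       "count": [3, 7, 5]})

    # Basic boolean-mask filtering, the sort of one-liner it started getting wrong:
    result = df[(df["status"] == "open") & (df["count"] > 4)]
    print(result)

Nothing exotic: just selecting rows by combining two conditions.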
I personally am still having good experiences with it, but I've noticed a huge number of people complaining, so there must be something going on, which is why I'm writing this.
I'm using it to write Roblox Lua scripts. Claude was doing great until recently: when it edits, it makes the fix correctly, but then deletes the corrected code, claims it fixed the problem, and rewrites the wrong code back in. Usually having it write new code from scratch fixes things, but that eats my allowance. After 3 months I cancelled my Pro plan.
In my experience, I suspect that's something that could be fixed with better prompting, though of course I could be wrong without having tried it myself.
Interesting. I use Claude Code on the Max plan, every day. I never hit the limits but I don't run multiple sub-agents like some folks do. I have enough trouble keeping an eye on one thread!
What I've noticed very recently is that it will develop some feature, notice a problem with it, report the problem, then describe it as a "known issue" or something, and say that, otherwise, the code is fully functional. I have to prompt it to fix the last thing, which is sometimes small, sometimes enormous.
To be fair, I've seen senior developers try to pull the same crap. They usually have an excuse like, "If I have to debug this, it will go beyond the end of the sprint. Let's just ship it and put in a bug fix ticket for later." At least Claude doesn't have a tantrum when I say, "No. FIX IT!"
I'm still having a fantastic time with Claude Code, though I'm still working my way up to being a hardcore power user, so like you I haven't felt some of the pain others have. And yes, I totally agree that so many of the criticisms I see of AI coding agents apply just as well to humans. For example, going down a dead end on a feature and then needing to backtrack all the code, or focusing on minor stylistic issues in code review while missing some huge bug. Those are things that humans AND AI both screw up.
I was feeling the same: Claude has started producing lower-quality output.