That programming as a career means you're going to spend 80% of your time writing nice, clean code.
In reality it's more like debugging code or tooling problems 50% of the time, talking to other people (whether necessary or not) about 35% of the time, and only the rest actually doing the thing you enjoy.
In my experience, you're still exaggerating. I'm not even ten years into my career, and if I get to actually code for two hours a day, that's already a success. Most of my time nowadays goes to documentation, meetings, Jira, research and calls with clients.
I think it heavily depends on the size and (management) culture of your employer. My most recent gig had me sitting in way too many meetings that were way too long (1hr daily, anyone?), dealing with a lot of tooling issues, and touching legacy code as little as possible while still adding new features to our main product on a daily basis. Obviously "we don't need a clean solution. We're going to replace that codebase anyway, next year™".
The job before that had me actually coding about 80% of the time, but "writing tests is annoying, it slows you down, and we don't have time for that". Odd how there was always time for fixing the regressions later.
I think it's also a question of how you position yourself. Without noticing it, I've developed a kind of "will to power", in the sense that I want to shape the product we're working on. So instead of just sitting in my corner working through ticket after ticket, I actively seek out conversations with stakeholders to find out whether it even makes sense to implement something as described in the ticket, to propose new ideas, etc.
Also, my mother taught me (by virtue of being completely non-technical) how to explain complex problems and systems in a way that non-technical people understand. So whenever "a developer" was needed, management often enough volunteered me.
I could mostly pull myself out of this stuff, but I'd get even more frustrated not being able to at least try to make things a bit better. So I'm putting on the headset once more.
Also, in my experience microservices worsen this sort of bitrot. The duplication they usually involve means that even if you manage to avoid poorly documented spaghetti magic that gets updated once in an eon in one or two services, it might still be lurking elsewhere. And this:

- discourages refactoring, because of the duplication
- harms consistency
- encourages sloppiness, because your stuff might mostly work on a surface level with the rest of your system, since you only expose APIs and don't need to worry that much about how your methods will be called. That might seem convenient to use and implement in an ideal scenario, but it easily becomes troublesome to debug when anything goes wrong.
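To make that concrete, here's a minimal, hypothetical sketch (invented service and function names) of the failure mode: two services each carry their own copy of the same normalization rule, one copy gets patched, the other silently drifts.

```python
# Hypothetical: both services started with identical copies of this rule.

def normalize_email_signup_service(email: str) -> str:
    # Patched copy: also strips "+tag" suffixes before the @.
    local, _, domain = email.strip().lower().partition("@")
    local = local.split("+", 1)[0]
    return f"{local}@{domain}"

def normalize_email_billing_service(email: str) -> str:
    # Stale copy in the other repo: never got the "+tag" fix.
    return email.strip().lower()

addr = "Jane+promo@Example.com"
print(normalize_email_signup_service(addr))   # jane@example.com
print(normalize_email_billing_service(addr))  # jane+promo@example.com
```

Both services' APIs keep responding happily, each internally consistent, so nothing fails loudly; you only notice when billing can't find the account that signup created.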