The good ones aren't "blur", they're "subpixel rearrange".
It takes roughly a 4x4 block of square LCD pixels to emulate the subpixels of a single round CRT pixel... just like it takes roughly a 4x4 block of round pixels to emulate the subpixels of a square one.
All pixels are a "blur" of R, G, and B subpixels. Their arrangement is what makes a picture look either as designed, or messed up.
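To make the "rearrange, not blur" point concrete, here's a minimal toy sketch in plain Python. The mask weights and the scanline-gap factor are illustrative only, not taken from any real shader; the point is just that each output pixel in the 4x4 block gets a different weighting instead of one averaged value:

```python
# Toy sketch of "subpixel rearrange": one source pixel is spread over a 4x4
# block of LCD pixels, with R, G and B energy placed in separate columns
# (like a CRT aperture grille) instead of being smeared uniformly.
# The mask weights are illustrative, not taken from any real shader.

MASK = [               # per-column (R, G, B) weights
    (1.0, 0.1, 0.1),   # column 0: mostly "red phosphor"
    (0.1, 1.0, 0.1),   # column 1: mostly "green phosphor"
    (0.1, 0.1, 1.0),   # column 2: mostly "blue phosphor"
    (0.3, 0.3, 0.3),   # column 3: dim gap between triads
]

def expand_pixel(r, g, b):
    """Return a 4x4 block (rows of (R, G, B) tuples) emulating one round pixel."""
    block = []
    for row in range(4):
        row_gain = 1.0 if row < 3 else 0.25   # last row approximates the scanline gap
        block.append([(r * mr * row_gain, g * mg * row_gain, b * mb * row_gain)
                      for (mr, mg, mb) in MASK])
    return block

for row in expand_pixel(0.8, 0.6, 0.2):
    print(["%.2f %.2f %.2f" % px for px in row])
```

A plain blur would give every pixel in the block the same value; the mask is what puts the R, G and B back where the eye expects them.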
For rendering text, modern OSs still let you pick whichever subpixel arrangement the screen uses, so glyphs look crisper. You can't do the same with old games that use baked-in sprites for everything.
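As a rough sketch of why live-rendered text can adapt but baked sprites can't: a subpixel rasterizer samples glyph coverage at 3x horizontal resolution and assigns each sample to one physical subpixel, so it has to know the panel's R-G-B order. This is a toy version with no filtering or gamma (real rasterizers like ClearType do much more):

```python
# Toy subpixel text rasterization for an RGB-stripe panel: glyph coverage is
# sampled at 3x horizontal resolution and each sample drives one subpixel,
# which is why the panel's physical R-G-B order has to be known at render time.

def subpixel_row(coverage, order="RGB"):
    """coverage: 0..1 values, three per output pixel; returns (R, G, B) pixels."""
    pixels = []
    for i in range(0, len(coverage) - len(coverage) % 3, 3):
        sample = dict(zip(order, coverage[i:i + 3]))
        pixels.append((sample["R"], sample["G"], sample["B"]))
    return pixels

# A thin vertical stem whose edge falls between pixels:
cov = [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]
print(subpixel_row(cov, "RGB"))  # coverage mapped for an RGB panel
print(subpixel_row(cov, "BGR"))  # same coverage mapped for a BGR panel: different values
# Live text can pick the right mapping per screen; a sprite baked at the
# game's original resolution carries no coverage data left to remap.
```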
It gets even worse when a game uses high-brightness pixels surrounded by low-brightness ones, because it expects the bright ones to spill over in some very specific way.
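Here's a toy 1-D version of that spill; the glow kernel is made up (a real CRT's spread depends on beam focus and phosphor), it just shows the kind of halo the artist may be counting on:

```python
# Toy 1-D illustration of a bright pixel spilling into its dark neighbours.
# The glow kernel is made up; a real CRT's spread depends on beam focus and phosphor.

GLOW = [0.05, 0.15, 0.6, 0.15, 0.05]   # how one pixel's light spreads sideways

def crt_spill(row):
    """Convolve a row of brightness values (0..1) with the glow kernel."""
    half = len(GLOW) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for k, w in enumerate(GLOW):
            j = i + k - half
            if 0 <= j < len(row):
                acc += row[j] * w
        out.append(round(acc, 3))
    return out

sprite_row = [0.0, 0.0, 1.0, 0.0, 0.0]   # one bright pixel, dark neighbours
print(crt_spill(sprite_row))             # [0.05, 0.15, 0.6, 0.15, 0.05]
# On a raw LCD the row stays [0, 0, 1, 0, 0]: no spill, so the halo the artist
# was counting on simply isn't there.
```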
That's still some Vsauce-level reaching, like "we don't actually even see anything". The tech doesn't matter when playing, and if it looks blurry, then it is blurry.
I said that it doesn't matter. Only the end result does. There is no game I would play on a CRT, simply because it looks worse. It's not an objective fact but my preference. I don't care how you are trying to achieve the "CRT look", since it looks like shit and I don't want to see it.
Have you checked the examples...? I feel like we're going in circles. There are cases where the CRT looks objectively better, supporting examples have been provided, technical explanation has been provided... it's up to you to look at them or not.
If you wish to discuss some of the examples, or the tech, I'm open to that. Otherwise I'll leave it here. ✌️
The objective part is in whether it matches what the creator intended.
Sometimes they intended crisp contours, like in ClearType; sometimes they intended to add extra colors (see the sketch after this comment); sometimes they designed pixel-perfect art and it looked blurry on a CRT; very rarely they used vector graphics or 3D that can be rendered at better quality by just throwing some extra resolution at it.
Many artists of the time pushed this tech to its limits; "objectively better" means emulating that.
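For the "extra colors" case, one well-known trick was alternating two palette colors in a fine dither and letting the display's blur fuse them. This is a naive linear-mix sketch (real blending also involves gamma and the specific display), just to show where the in-between color comes from:

```python
# Toy sketch of the "extra colors" trick: alternate two palette colors in a
# fine checkerboard and let the display's blur fuse them into one new color.
# The 50/50 linear mix is a simplification; real blending also involves gamma.

def dither_blend(c1, c2):
    """Perceived color of a fine 50/50 dither of c1 and c2 (naive linear mix)."""
    return tuple((a + b) / 2 for a, b in zip(c1, c2))

blue = (0, 0, 170)
cyan = (0, 170, 170)
print(dither_blend(blue, cyan))  # (0.0, 85.0, 170.0): a color the palette doesn't have
# On a sharp modern display the checkerboard stays visible as a checkerboard
# instead of reading as a single in-between color.
```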