After doing some Google-fu, I'm even more puzzled as to how the Finnish man has done it.
What I mean is, Linux is widely known and praised for being more efficient and lighter on resources than the greasy, obese NT slog that is Windows 10/11.
To the big brained ones out there, is this because the Linux kernel is more "stripped down" than a Windows-based kernel? Removing bits of bloated code that could affect speed and operations?
I'm no OS expert or comp sci graduate, but I'm guessing it has a better handle on processes, the CPU tasks it gets given, and "more refined programming" under the hood?
If I remember rightly, Linux was a server/enterprise OS first, before it shipped with any desktop ambitions, hence it's used in a lot of institutions and the education sector thanks to how efficient it is as a server OS.
Hell, despite GNOME and Ubuntu getting flak for being chubby RAM hog bois, they're still snappier than Windows 11.
macOS? I mean, it's snappy because it's a descendant of UNIX, which sorta bled into Linux.
Maybe that's why? All of the snappiness and concepts were taken out of the UNIX playbook in designing a kernel and OS that isn't a fat RAM hog that gobbles your system resources the minute you wake it up.
I apologise in advance for any techno gibberish, but I would really like to understand the "Linux is faster than a speeding bullet" phenomenon.
This post is so full of inaccuracies that I don't know where to begin. I'll just mention the first thing I noticed: just because drivers are compiled with the kernel doesn't mean they're all loaded at runtime. modprobe exists for a reason.
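For anyone curious, this is roughly what a loadable module looks like (a minimal sketch with a made-up module name, assuming the usual out-of-tree kbuild Makefile) - nothing here touches memory until you actually modprobe it:

```c
/* hello.c - minimal sketch of a loadable kernel module (hypothetical name).
 * Build it out of tree with the usual kbuild Makefile; it only occupies
 * kernel memory after `modprobe hello` (or insmod) and is freed again
 * on `rmmod hello`. */
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Example module that is only resident when loaded");

static int __init hello_init(void)
{
	/* Runs at load time, i.e. when modprobe pulls the module in. */
	pr_info("hello: loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	/* Runs at unload time (rmmod). */
	pr_info("hello: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
```

lsmod (or /proc/modules) shows what's actually resident at any given moment, which is usually a small fraction of the drivers that were built.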
You're mostly correct. People here don't take Windows praise lightly.
NT is probably the best part about Windows. If you're gonna complain about Windows, the kernel is the last thing to complain about.
As you've said, there are things that are still better about NT to this day:
OOM conditions are handled way better - the system keeps running mostly fine after emergency-swapping memory pages into the page file. No crashes, just a freeze until the OS has swapped stuff out. Usually no data is lost; apps keep running and you get a chance to save your work and reboot the machine. (There's a small commit-charge sketch after this list.)
The driver architecture, as you've alluded to, is much more flexible. No need to rebuild a DKMS module every time the kernel updates. The drivers are self-contained and, best of all, backwards compatible. You can still use XP 64-bit drivers on modern Windows (if you ever need to).
Process scheduling is very good for anything at or below 64 CPU threads (see the second sketch after this list). Windows at its core can multitask pretty well on one thread, and that scales up to a certain point.
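To make the OOM point a bit more concrete, here's a rough Windows-only sketch (plain <windows.h>, chunk size picked arbitrarily). NT charges every committed page against the commit limit (roughly RAM plus page file), so when that limit is hit the allocation simply fails and the app gets an error it can handle, instead of something being killed after the fact:

```c
/* commit_limit.c - rough sketch of NT's commit accounting (Windows only).
 * Each successful VirtualAlloc(MEM_COMMIT) is charged against the system
 * commit limit even before the pages are touched; once the limit is hit,
 * the call fails cleanly with NULL. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    const SIZE_T chunk = 256u * 1024 * 1024;   /* 256 MiB per allocation */
    unsigned long long total_mib = 0;
    void *p;

    /* Keep committing until the commit limit is reached.
     * Careful: exhausting the commit charge makes allocations in *other*
     * programs start failing too, so try this in a VM. */
    while ((p = VirtualAlloc(NULL, chunk, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE)) != NULL) {
        total_mib += 256;
    }

    printf("commit limit hit after %llu MiB, last error %lu\n",
           total_mib, (unsigned long)GetLastError());
    return 0;
}
```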
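And on the 64-thread point: the classic affinity/scheduling interfaces use a 64-bit mask, so NT carves the CPUs into processor groups of at most 64 logical processors each, and historically a process stayed inside a single group unless it used the group-aware APIs. A quick sketch (Windows 7+ APIs) that just prints how the machine got split up:

```c
/* groups.c - print how NT split the CPUs into processor groups (<= 64 each). */
#define _WIN32_WINNT 0x0601   /* GetActiveProcessorGroupCount needs Win7+ */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    WORD groups = GetActiveProcessorGroupCount();

    for (WORD g = 0; g < groups; g++) {
        /* Each group holds at most 64 logical processors. */
        printf("group %u: %lu logical processors\n",
               g, (unsigned long)GetActiveProcessorCount(g));
    }

    printf("total: %lu logical processors in %u group(s)\n",
           (unsigned long)GetActiveProcessorCount(ALL_PROCESSOR_GROUPS), groups);
    return 0;
}
```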
Most of the NT stigma comes from NTFS (which has its own share of problems) and the bugcheck screens people kept seeing (which mostly weren't even MS's fault to begin with - that was on the driver vendors).
Mark Russinovich has some of his old talks up on his YT channel, and one of them compares Linux (2.6 at the time) to NT in great detail. Most of the points made there still apply to this day.