After doing some Google-fu, I've been left even more puzzled as to how the Finnish man has done it.
What I mean is, Linux is widely known and praised for being more efficient and lighter on resources than the greasy, obese NT slog that is Windows 10/11.
To the big-brained ones out there, is this because the Linux kernel is more "stripped down" than a Windows-based kernel? Removing bits of bloated code that could affect speed and operations?
I'm no OS expert or comp sci graduate, but I'm guessing it has a better handle on processes and the CPU tasks it gets given, plus "more refined programming" under the hood?
If I remember rightly, Linux was a server/enterprise OS first, before shipping with desktop approaches, hence it's used in a lot of institutions and educational sectors due to being efficient as a server OS.
Hell, despite GNOME and Ubuntu getting flak for being chubby RAM hog bois, they're still snappier than Windows 11.
macOS? I mean, it's snappy because it's a descendant of UNIX, which sorta bled into Linux too.
Maybe that's why? All of the snappiness and concepts were taken out of the UNIX playbook in designing a kernel and OS that isn't a fat RAM hog that gobbles your system resources the minute you wake it up.
I apologise in advance for any possible techno-gibberish, but I would really like to understand the "Linux is faster than a speeding bullet" phenomenon.
You're going to want to read the foundational published papers on operating system design. Especially kernel design and considerations. You can just Google any random graduate operating system class, and look at their reading list to get started.
The big thing you want to look at is the different types of kernels there are: microkernels and monolithic kernels; how they divide memory, how they do IPC, how they incorporate drivers. All of these choices have different trade-offs.
BSD/macOS, Linux, NT/Windows, Xen/hypervisors... they all currently take different approaches, and they're all actually quite performant.
A while ago, process multiplexing and scheduling had a huge impact on the perceived performance of a system, but now that multicore machines are extremely common, while this is still important, it is not as impactful.
Approaches to memory management, virtual memory, and swapping to disk, and how aggressive they are, also have an impact on perceived system performance.
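To make that trade-off concrete, here's a toy LRU "page cache" sketch in Python (purely illustrative; real kernels, like Linux's active/inactive page lists or NT's working sets, are far more sophisticated): a bigger cache means fewer slow "disk" accesses, at the cost of holding RAM other things could use, and the eviction policy decides who pays.

```python
from collections import OrderedDict

class ToyPageCache:
    """Toy LRU page cache: evicts the least-recently-used page when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.pages = OrderedDict()  # insertion order doubles as recency order

    def access(self, page: int) -> str:
        if page in self.pages:
            self.pages.move_to_end(page)  # mark as most recently used
            return "hit"
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # evict the LRU page ("swap out")
        self.pages[page] = True
        return "miss"

cache = ToyPageCache(capacity=3)
results = [cache.access(p) for p in [1, 2, 3, 1, 4, 2]]
# 1, 2, 3 miss while the cache fills; 1 hits; 4 evicts page 2 (the LRU one),
# so the later access to 2 misses again.
print(results)
```

A more aggressive policy (smaller capacity, earlier eviction) frees RAM sooner but turns more of those hits into misses, which is exactly the "perceived performance" knob the comment above describes.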
......
As you alluded to in your post, a lot of the perceived performance comes not from the operating system and kernel itself, but from the user interface and the extra services offered. Windows 11 is going to feel like a clunker for any retail user just due to all of the network-driven advertisements incorporated, which slow down the core interaction loop. If you click on the Start menu and everything lags for a second while it pulls new advertisements, you're going to feel that.
Start adding in background virus scanning and indexing for AI features, and you're adding a lot of load to the system that's not necessary.
Because everything's a trade-off, people optimize different systems for different things. If you have a real-time operating system that runs a power plant, it doesn't matter if the interface is clunky as long as it hits the time targets for its tasks.
If you're running a data center server, you're probably worried more about total throughput over time, rather than immediate responsiveness to a terminal.
For a computer that does lots of machine learning and vector math, you might spend a massive amount of time making certain programs run a few percentage points faster by changing how memory is managed, how cache is allocated across CPUs, or how the network is accessed. You find your critical path and performance bottleneck, and you optimize that.
When we're talking about a general-use desktop computer, we tend to focus on anything that a human would interact with and minimize that loop. But because people can do anything, this becomes difficult to do perfectly in all scenarios. Just ask anybody who's run Chrome for a while without restarting and has a thousand tabs open: because all the RAM is being consumed, the computer starts to feel slow, as virtual memory management becomes more demanding.
TL;DR: all of the operating systems are capable of being very performant, and all of the kernels are really good; it's all the extra stuff that people run at the same time that makes them feel different.
Linux encourages users to send patches while Microsoft is the sole company that can modify Windows.
It's very common to see patches from Google/Meta/Cloudflare/Amazon squeezing more performance for their particular use cases. That benefits everyone in the end.
Microsoft, on the other hand, is more concerned about its enterprise sales and overall profits, so they don't care that much. Windows 7 was horribly bloated, and they didn't address it until Windows 8, and only because they had to: they realized it was too bloated to run on their new tablet PCs, so they had to do something about it.
Apple cares a lot, because their thing is energy-efficient fanless netbooks, phones, and tablets. macOS and iOS are very close in how they work, so Apple has every incentive to keep it efficient, because their software also affects the hardware side of the business. Microsoft doesn't; it's the hardware partners who get stuck dealing with it.
The NT kernel is fairly good; it just doesn't get the attention it deserves. Microsoft mostly adds features on top of older features; they never go in, say "this sucks", and rewrite a feature, because that's very risky to do and may break millions of applications and affect their bottom line. Linux doesn't have to care about that.
I'd say that if Windows were open-source, we'd have some pretty solid Windows distributions, because the community would care enough to go in and fix a ton of bottlenecks that aren't worth it for Microsoft as a company to even bother reviewing patches for, let alone develop and test. It's much more lucrative for them to release AI crap like Copilot than to make Windows 10% snappier, because most Windows customers are corporate people who make decisions based on marketing and business items rather than on an enjoyable experience. Less frustrated users? Nah. More productive employees with crappy AI features that barely work? Hell yeah 🤑
TL;DR: Windows sucks because Microsoft's business interests don't require Windows to be that good, merely good enough.
This is the right answer. To complement it, I'd just say I've read someone say before that at Microsoft there's no incentive to squeeze out performance, so why bother if it won't help you get promoted or get a bonus? All these things add up over time, so performance only gets attention in Windows when there is actually a huge bottleneck.
It’s also worth noting (for non programmers out there) that speed has no correlation with the amount of code. Often it’s actually the opposite: things start simple and begin to grow in complexity and amount of code exactly to squeeze more optimizations for specific use-cases.
As I'm sure you've gathered, this is a complex and nuanced discussion, but to me the biggest factors making Linux fast are:
- Absence of telemetry/data collection monitoring your use of the device
- An open-source development model encouraging the entire world to contribute to making the Linux kernel better
There's something to be said for how a consistent 5% performance improvement in a filesystem or process scheduler would be taken as a huge win. How would you even manage to find or contribute such a change to something closed-source like Windows 11? Academics write papers about the kernel's performance and how it can be improved, whereas I tend to think Microsoft takes more of a "good enough" approach to such details.
I think both the Windows NT Kernel and the Linux Kernel are solid speedy parts of the OS. The main bloat is what's on top.
Windows seems to have progressively more bloated phases. Newer stock Windows programs are built from much heavier components.
There's the Win32 phase, which is super fast and lightweight. Few programs from this phase are still alive; WordPad (RIP) was one of them.
Then there's the broad Win64 phase, comprising mostly Vista/7/8/10-era parts. Word, Excel, and the old Outlook are examples of these programs. Slow at their inception, they have become considerably faster thanks to better hardware, but still aren't very snappy.
And finally there's the new phase, Windows 11. Horribly bloated and laughably slow when pushed beyond the absolute basics. Examples include File Explorer, Notepad, Teams, and the new Outlook. Notepad is mostly fine, but even File Explorer takes multiple seconds to load basic things on midrange hardware. We all know how bad Teams is, and the new Outlook takes 30 seconds to launch and open an email even on high end hardware.
Much of the modern bloat comes from this latest phase, but somehow other parts of the system have seriously bloated as well, like all of the disk activity on startup and even the windowing system, which used to be near-instant on crappy hardware back in the Win2000 era but now takes up to a second on modern midrange hardware on 11.
Linux has fared better against the onset of bloat than Windows, which is the main reason why it feels much snappier and uses less memory. Despite this, you can still see Linux getting significantly heavier over the years, from the super lightweight Trinity Desktop to what we have now. But, web browsers powering many greedy tabs can easily out-bloat GNOME, to the point where Linux only feels slightly faster than Windows because everything is in a browser.
I thought UWP/Metro was Win8/10. Win11 is "Fluent". Perhaps there were 4 phases, not just 3, but my post was already getting too long and the WinForms phase has been pretty much fully conquered by today's fast hardware.
I agree wholeheartedly and want to add that with Linux systems it's much easier to clean out the bloat and be left with a usable system. It's possible on Windows but might as well not be since the bloat is so deeply integrated into the OS and SDK.
This may be a minor point, but I often think that in discussions like these, people are talking about the entire OS rather than just the kernel. And while you can take a fully featured desktop environment for a spin, and it's pretty good, a lightweight window manager is lightning quick.
If you stick to minimalistic apps for things like photo viewing, you can open folders with 1000s of images in thumbnail mode at incredible speeds, or enormous PDFs. Those are the types of tasks that seemingly slow W10 to a crawl.
In general I also have pretty good luck with stability on my machine. I don't find myself needing to kill apps that start misbehaving for unexplained reasons, except Firefox... But usually an update sorts it out.
Windows has an entirely different set of objectives. The coders have to layer on so many services insisted upon by marketing that no matter how optimised they make the kernel, it's always going to be a little boat carrying far too much cargo.
There's also a lot of fairly reliable rumour that the Windows codebase is very messy: evolved, complicated, supporting many obsolete things, and having suffered from different managers changing styles and objectives over the years. We don't know for sure because it's proprietary.
But that said, I use both and find each good for different things. Windows is much more stable than it used to be, and speed is adequate for most things, largely because we've become used to buying better hardware every few years.
> Windows has an entirely different set of objectives.
I never thought of it this way. My first reaction was "What do you mean 'different objectives', they're both operating systems!" But Windows is an operating system with the objective of making profit for Microsoft. Linux is an operating system with the goal of... being an operating system.
It really puts it in perspective. Windows (and Mac) can and will only be useful to the consumer up to a point.
You mean you've become used to buying better hardware every few years. Some of us are poor and still kicking along on a 2012 ThinkPad we got off eBay in high school.
That is of course where Linux shines: you can have an up-to-date operating system on 12-year-old hardware that is secure, usable, and responsive. In fact, it's the only option.
I was actually talking about the royal "we": generally, we have become trained to buy new shiny things. Computers, phones, TVs, every few years. Marketing works, folks. Apologies if it rubbed a bit raw for you.
You've accidentally triggered a core thing with me. I've done the poor thing; I have actually had zero money and no way to pay rent. I've had pretty much nothing at one point in my life. Although I've got some spending money now after 35 years of working too hard for it, that never really leaves you: if you've been truly poor, you'll always be looking for money-off deals when buying food, and you'll always be several steps behind the latest hardware just because it doesn't feel right to spend that much. The laptop I'm typing this on is one I found in a skip. It's an HP Pavilion about 8 years old. Runs just fine on Debian.
Sounds like a you problem. I'm poor AF working at a grocery store and I still have a decent PC just by buying new parts here and there and saving money where necessary to do so.
Windows also still runs software unchanged from 20 or more years ago, while software on Linux has to be constantly updated to use new libraries and APIs, or else it's considered "dead" and very soon will no longer run or even compile in its current form.
It has a lot of baggage that Linux doesn't need to worry about.
@independantiste @TimeSquirrel I could be wrong, but Windows NTFS is also incredibly terrible at reading/writing large numbers of small files. Windows Explorer can now be opened in different processes; at least that's some improvement.
Edit: there's a reason game developers pack their games' files into an archive rather than reading them from the filesystem directly.
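A rough sketch of the idea in Python (a toy, not how a real game engine does it; actual timings vary wildly by OS and filesystem): reading loose files costs one open/close round trip per file, while an archive needs a single open.

```python
import tempfile
import zipfile
from pathlib import Path

# Create 200 tiny "asset" files, plus one zip archive holding the same data.
tmp = Path(tempfile.mkdtemp())
assets = {f"asset_{i:03}.dat": bytes([i % 256]) * 64 for i in range(200)}
for name, data in assets.items():
    (tmp / name).write_bytes(data)
archive_path = tmp / "assets.zip"
with zipfile.ZipFile(archive_path, "w") as zf:
    for name, data in assets.items():
        zf.writestr(name, data)

# Strategy 1: one open/close per loose file (200 round trips to the FS).
loose = {p.name: p.read_bytes() for p in tmp.glob("asset_*.dat")}

# Strategy 2: open the archive once, then read every member from it.
with zipfile.ZipFile(archive_path) as zf:
    packed = {name: zf.read(name) for name in zf.namelist()}

print(loose == packed)  # same data either way; the archive needed 1 open, not 200
```

On a filesystem with heavy per-file overhead (the NTFS small-file case mentioned above), the difference between 200 opens and 1 open is exactly where the time goes.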
Not sure about DOS, but Windows 10 will happily run 16-bit Windows software. You have to use the 32-bit version of Windows though - the 64-bit version dropped support.
You could still run 16-bit apps on the 32-bit version of Windows 10! You just had to manually install NTVDM from the optional features dialog. It was completely unsupported by Microsoft, though.
They never ported NTVDM to 64-bit Windows, so it died once Windows became 64-bit only.
This is just a theory; I don't have knowledge of the inner workings of either Linux or Windows (beyond the basics). While Microsoft has been packing tons of telemetry into their OS since Windows 10, I think they fucked up the I/O stack somewhere along the way. Windows used to run well enough on HDDs, but can barely boot from one now.
This is most easily highlighted by using an optical disc drive. I was trying to read a DVD a while ago and noticed my whole system was locked up, on a very modern machine. Just having the drive plugged in would prevent Windows from opening anything if it was already running, or from getting past the boot spinner.
The same wasn't observed on Linux. It took a bit to mount the DVD, but at no point did it lock up my system until the disc was removed. I used to use CDs and DVDs all the time on XP and 7 without this happening, so I can only suspect that they messed up something with I/O, and it has gone unnoticed because of their willingness to ignore the issues in the belief that they're being caused by telemetry.
I'm sure some people would disagree with the "doesn't have a ton of bloat" part in some cases... I've seen people complain about the number of apps preinstalled on the Fedora KDE spin, for example.
Well, true up to the point where the OS itself uses up most of the available RAM just for basic processes (like tracking and reporting the user's data to the manufacturer's data centre).
This post is so full of inaccuracies that I don't know where to begin. I'll just mention the first thing I noticed: just because drivers are compiled with the kernel doesn't mean they're all loaded at runtime. modprobe exists for a reason.
You're mostly correct. People here don't take Windows praise lightly.
NT is probably the best part about Windows. If you're gonna complain about Windows, the kernel is the last thing to complain about.
As you've said, there are things that are still better about NT to this day:
OOM conditions are handled way better: the system continues to run mostly fine after emergency-swapping memory pages into the page file. No crashes, just a freeze until the OS swaps stuff out. No data is usually lost due to this; apps continue to run, and you have the chance to save your work and reboot your machine.
The driver architecture, as you've alluded to, is much more flexible. No need to rebuild a DKMS module every time the kernel updates. The drivers are self-contained and, best of all, backwards compatible. You can still use XP 64-bit drivers on modern Windows (if you ever need to).
Process scheduling is very good for anything with 64 CPU threads or fewer. Windows at its core can multitask pretty well on one thread, and that scales up to a certain point.
Most of the NT stigma comes from NTFS (which has its own share of problems) and the bugcheck screens that people kept seeing (which mostly weren't even MS's fault to begin with; that was on the driver vendors).
Mark Russinovich has some of his old talks up on his YT channel, and one of them compares Linux (2.6 at the time) to NT and goes into great detail. Most of the points made there still apply to this day.
The Windows kernel is a much more modern design: a hybrid kernel. Linux is stuck with a monolithic design; it tries to be kind of hybrid, but the design is still monolithic.
I don't know how well this kernel could fit as the global de facto kernel for all computers.
IMHO, the way to take the lead beyond Apple's and Microsoft's kernels is the microkernel path: either helping with GNU Hurd or starting a new project.
A competent modern kernel would be a serious blow to the modern proprietary OSes. We are in a good position to strike, with Windows and macOS getting shittier and shittier.