X11 is "complete" in the sense that we have followed it to the end of the road.
X11 has a series of well-documented fundamental problems that make it unsuitable for a modern OS. I will not belabor them here (except to note that security in particular is exceptionally weak in X11 by modern standards).
These issues are unfixable because they are built into core assumptions and behaviours of all legacy apps.
At some point there has to be a switch. There simply is not the manpower to maintain 2 separate windowing systems. I am sure we would all want there to be an army of devs working on these things and maintaining both stacks. But that is not the timeline we live in. The number of devs working on these things is very low.
Was it too early? I don't know. There will never be 1:1 feature parity with 30 years of legacy apps. I honestly believe that fixing things like a11y is going to be much more tractable with only a single windowing system.
For someone who has not used Gnome in 14+ years, you sure seem to know a lot about it...
X11 has effectively already been deprecated for years, seeing little to no development on it. No one should be surprised.
And still, there are SEVERAL Long Term Support distros out there that will support X11 for the coming years. Please stop pretending that stuff will start breaking. It will not.
I find that my projects hosted on codeberg are heavily deranked or entirely missing on the top mainstream search engines. My github projects are almost always top 3.
So if it is a library someone might find useful, it has to go on gh. My personal toys can stay on cb.
Targeting vulnerable people based on metadata with any form of commercial intent is morally and ethically highly questionable! A vulnerable person is by definition extremely susceptible to exploitation. Assuming that companies are going to act out of philanthropy and the goodness of their hearts seems a bit naive.
Can't divulge too many details, but one example was when we had 2 options for solving a problem: 1. The "easy" way: upload a bunch of small blobs to s3 as a job was running on an embedded device, or 2. The slightly tricky way: implement streaming of said data on the device (not as easy as it sounds).
We went with option 1, the easy one, because it was deemed better bang for the buck. I did some basic math showing that the bandwidth required to upload that number of blobs to s3 within our time budget simply was not available on our uplink.
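To make the shape of that napkin math concrete, here is a sketch with entirely made-up numbers (the real figures are not something I can share):

```go
package main

import "fmt"

func main() {
	// All numbers below are hypothetical, purely to illustrate the check.
	const (
		blobCount   = 200_000          // small blobs produced per job
		blobSize    = 64 * 1024        // ~64 KiB each
		uplinkBytes = 10_000_000 / 8.0 // 10 Mbit/s uplink in bytes per second
		budgetHours = 2.0              // time budget for the job
	)

	totalBytes := float64(blobCount * blobSize)
	hoursNeeded := totalBytes / uplinkBytes / 3600

	fmt.Printf("need %.1f h to upload, budget is %.1f h\n", hoursNeeded, budgetHours)
	// With these made-up numbers: ~2.9 h needed vs a 2.0 h budget,
	// so option 1 is dead on arrival. Five minutes of math, a month saved.
}
```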
After we spent a month failing on 1., it was clear that we had hit the predicted problem. Eventually we implemented option 2.
Being comfortable with basic back-of-the-envelope math can be a huge benefit. (Full disclosure: I am a math major who is now a programmer.)
Over my career I have several examples of projects that saved weeks of dev time because someone could predict the result with some basic calculations. I also have several examples where I have shown people some basic math proving that their idea is never going to work; they don't listen, do it anyway, and a month later the project has failed in exactly the way I predicted.
A popular (and wise) saying is that "Weeks of work can save you hours of meetings".
I think the same is true for basic math. "Weeks of coding can save you minutes of calculation".
You can definitely have a successful programming career without great math skills. Math is a tool that can help you be more effective.
Interesting observation! The simplest explanation would be that it is memory claimed by the Go runtime during parsing of the incoming BSON from Mongo. You can try calling runtime.GC() 3 times after ingest and see if it changes your memory. Go does not free memory to the OS immediately, but this should do it.
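Something like this, as a minimal sketch (printHeap is just a throwaway helper for illustration, and debug.FreeOSMemory is the more direct hammer if plain GC cycles don't return the pages):

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// printHeap is a hypothetical helper for this sketch only.
func printHeap(label string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("%s: HeapAlloc=%d MiB, HeapReleased=%d MiB\n",
		label, m.HeapAlloc>>20, m.HeapReleased>>20)
}

func main() {
	// ... ingest the documents from Mongo here ...

	printHeap("after ingest")

	// Force a few collections so transient parser garbage is reclaimed.
	for i := 0; i < 3; i++ {
		runtime.GC()
	}
	// Explicitly ask the runtime to return freed pages to the OS.
	debug.FreeOSMemory()

	printHeap("after GC")
}
```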
2 other options, a bit more speculative:
Go maps have been known to have a bit of overhead, in particular for small maps, even when calling make() with the correct capacity. That doesn't fit well with the memory profile you posted, though, as I didn't see any map container memory in there...
More probable might be that map keys are duplicated. So if you have 100 maps with the key "hello", you have 100 copies of the string "hello" in memory. Ideally all 100 maps would share the same string instance. This often happens when parsing data from an incoming stream. You can either try to manually dedup the strings, check if the mongo driver has an option for it, or use the new 'unique' package in Go 1.23.
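A rough sketch of the dedup route via the unique package (the forced string copies are contrived, just to simulate what a stream parser produces):

```go
package main

import (
	"fmt"
	"unique"
)

func main() {
	// Simulate keys parsed from a stream: equal contents,
	// but two distinct string backing arrays in memory.
	a := string([]byte("hello"))
	b := string([]byte("hello"))

	// unique.Make canonicalizes equal values to one shared handle, so
	// every map can store the same interned instance instead of a copy.
	ha := unique.Make(a)
	hb := unique.Make(b)

	fmt.Println(ha == hb)              // true: one canonical "hello"
	fmt.Println(ha.Value() == "hello") // the original value is still recoverable
}
```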
There is a dangerously large population of devs and managers who look at themselves, unironically, as the gigachads pumping out UI "upgrades".
Many of these fail to realize how disruptive it is. UI change is like API breakage for the brain.
I have lost track of how many times I've tried to help an elderly family member with an app after some pointless, trivial UI change, only for them to give up on the app entirely after the "upgrade" because the cognitive overhead of the change is beyond what can fairly be expected of them 💔
The context package is such a big mistake. But at this point we just have to live with it and accept our fate because it's used everywhere
It adds boilerplate everywhere, is easily misused, can cause resource leaks, and has highly ambiguous connotations for methods that take a ctx: Does the function do IO? Is it cancellable? What are the transactional semantics if you cancel the context during method execution?
Almost all devs just blindly throw it around without thinking about these things
And don't get me started on all the ctx.Value() calls that traverse a linked list.
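For anyone who hasn't peeked inside: every context.WithValue call wraps the previous context in a new node, so a lookup is a linear walk back down the chain. A small sketch:

```go
package main

import (
	"context"
	"fmt"
)

type ctxKey string

func main() {
	// Each WithValue call wraps the previous context in a new node,
	// so this builds a 4-element linked list.
	ctx := context.Background()
	ctx = context.WithValue(ctx, ctxKey("user"), "alice")
	ctx = context.WithValue(ctx, ctxKey("trace"), "abc123")
	ctx = context.WithValue(ctx, ctxKey("tenant"), "acme")
	ctx = context.WithValue(ctx, ctxKey("locale"), "da-DK")

	// Looking up the oldest key compares keys node by node all the
	// way back to the first wrapper: O(n) in the number of values.
	fmt.Println(ctx.Value(ctxKey("user"))) // alice
}
```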
Depending on your needs you can also break it into a columnar format with some standard compression on top. This allows you to search individual fields without looking at the rest.
It also compresses exceptionally well, and "rare" fields will be null in most records, so run-length encoding will compress them to near zero.
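A toy sketch of the row-to-column flip (toColumns is made up for illustration; a real system would write something like Parquet instead):

```go
package main

import "fmt"

// toColumns is a hypothetical helper: it flips row-oriented records into one
// slice per field, padding with nil where a record lacks that field. The long
// nil runs in rare columns are exactly what run-length encoding crushes.
func toColumns(rows []map[string]any) map[string][]any {
	cols := map[string][]any{}
	// First pass: collect every field name that appears anywhere.
	for _, row := range rows {
		for k := range row {
			if cols[k] == nil {
				cols[k] = make([]any, 0, len(rows))
			}
		}
	}
	// Second pass: one entry per record per column, nil when absent.
	for _, row := range rows {
		for k := range cols {
			cols[k] = append(cols[k], row[k]) // missing key yields nil
		}
	}
	return cols
}

func main() {
	rows := []map[string]any{
		{"msg": "a", "level": "info"},
		{"msg": "b", "level": "info", "trace": "t1"}, // "trace" is a rare field
		{"msg": "c", "level": "warn"},
	}
	for name, col := range toColumns(rows) {
		fmt.Println(name, col)
	}
}
```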
Postgres and MySQL/MariaDB are all primarily written in C.
Contrary to what other posters here claim, most programming languages are not written in C but are self-hosted, i.e. written in themselves (Go's compiler, for example, has been written in Go since Go 1.5). This usually involves a small bootstrapping component written in C or something similar, but that is a minor part of the whole.
You should probably change the page content entirely, server-side, based on the user agent or request IP.
Using CSS to change the layout based on the request has long since been "fixed" by smart crawlers. Even hacks that use JS to show/hide content are mostly handled by crawlers.
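Roughly this shape, as a sketch (the isCrawler check is deliberately naive and hypothetical; real detection would combine UA patterns with the IP ranges the big crawlers publish):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// isCrawler is a deliberately naive check, for illustration only.
func isCrawler(r *http.Request) bool {
	ua := strings.ToLower(r.UserAgent())
	return strings.Contains(ua, "googlebot") || strings.Contains(ua, "bingbot")
}

func handler(w http.ResponseWriter, r *http.Request) {
	// Serve entirely different markup server-side, instead of hiding
	// things with CSS/JS that crawlers can see through anyway.
	if isCrawler(r) {
		fmt.Fprint(w, "<html><body>crawler view</body></html>")
		return
	}
	fmt.Fprint(w, "<html><body>human view</body></html>")
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}
```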
X11 is "complete" in the sense that we have followed it to the end of the road. X11 has a series of well documented fundamental problems that does not make it suitable for a modern OS. I will not belabor them here (except to note that security in particulat in X11, is exceptionally weak for modern standars). These issues are unfixable because they are built into core assumptions and behaviours of all legacy apps.
At some point there has to be a switch. There simply is not manpower to maintain 2 separate windowing systems. I am sure we would all want there to be an army of devs working on these things on maintain the 2 stacks. But that is not the timeline we live in. The number of devs working on these things is very low.
Was it too early? I don't know. There will never be 1-1 feature parity with 30 years of legacy apps. I honestly believe that fixing things like a11y are gonna be much more tenable with only a single windowing system.