There are two ways to perform every task. There's the way we say we do it, which maintains the illusion, and there's the practical way we actually get the work done. If we don't maintain the illusion, they'll cut our budget. If they cut our budget, we can't even afford the practical way, let alone what they think we're doing.
Your success in this position will be determined by how quickly you learn both processes and how well you choose which is appropriate for the situation.
TBF, all the jobs are a decade old, written by our researchers in .NET Framework as WinForms apps that I hacked up into console apps, so it's gotta be Windows. I'm converting them one by one to .NET Core and moving them into my Linux containers, but it's a slow process and I've got a v1 release to prepare for next month.
Everyone is just stoked that a half dozen researchers no longer have to log in to their pet servers twice a day, open their WinForms app, run it, and copy-paste the results to a shared drive. Now my Docker harness does it all on a schedule, triggered automatically from a Rundeck server I manage. WE'RE LIVING IN THE FUTURE BABY
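Under the hood it's nothing fancy. Roughly speaking (this is a sketch, not the real code; the job names, paths, and mounted share are all made up), Rundeck fires the container on a schedule and the entry point inside it does something like:

```csharp
// Sketch only: hypothetical job paths and output share, not the actual harness.
using System.Diagnostics;

var jobs = new[] { "/jobs/daily-report", "/jobs/model-refresh" };  // converted console apps baked into the image
var outputDir = "/mnt/shared/results";                             // the researchers' shared drive, mounted into the container

foreach (var job in jobs)
{
    var psi = new ProcessStartInfo(job) { RedirectStandardOutput = true };
    using var proc = Process.Start(psi)!;
    var stdout = await proc.StandardOutput.ReadToEndAsync();
    await proc.WaitForExitAsync();

    // The copy-paste step the researchers used to do by hand is now just a file write.
    var file = Path.Combine(outputDir, $"{Path.GetFileName(job)}-{DateTime.UtcNow:yyyyMMdd-HHmm}.txt");
    await File.WriteAllTextAsync(file, stdout);
}
```

Rundeck just kicks off the container twice a day; the container exits when the loop finishes.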
The Docker images I have run .NET in a container, but the Docker host itself is Ubuntu. Though I really should flatten it and run it on Proxmox.
However, it's not like that would save real dollars on licensing: we still have Windows servers for AD et al., and therefore have to license every CPU core in the hypervisor cluster anyway, so having fewer Windows servers is irrelevant for license costs in our environment.
Oh yeah, all my code is .NET Core running on Ubuntu servers in Docker.
It's just that all this legacy code is written in .NET Framework, which doesn't run on Linux, and it takes moderate effort to switch (it relies on Framework-only libraries, and those in turn rely on other Framework libraries, etc.).
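To give a feel for what the conversion involves (illustrative only, the class names are invented): the real logic in these apps is buried behind a WinForms form, so the port is mostly pulling that logic out into a plain console entry point, retargeting the project from net48 to something modern, and then chasing every Framework-only dependency up the chain.

```csharp
// Illustrative only: invented names standing in for a researcher's job after conversion.
// The WinForms bits (Application.Run, the Form subclass, [STAThread]) are gone entirely,
// so the project can target modern .NET and run in the Linux containers.
using System;

static class ResearchCalculation
{
    // Stand-in for the decade-old logic that used to live behind a WinForms form.
    public static string Run() => "result: 42";
}

static class Program
{
    static void Main()
    {
        // Write to stdout so the scheduled harness can capture and ship the results.
        Console.WriteLine(ResearchCalculation.Run());
    }
}
```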
It's completely possible, but for now, I've got these 2022 servers running "good enough" to go to production, and I'll convert them as soon as the first issue arises.