Honestly, it's still ridiculous to me how slow Python, Java, JS, Ruby etc. continue to feel, even after decades of hardware improvements. You'd think their slowness would stop being relevant at some point, because processors and whatnot have become orders of magnitude faster, but you can still feel it quite clearly when something was implemented in one of those.
Many of these have C bindings for their libraries, which means the slowness is often caused by bad code (such as writing a for loop that makes a C call on each iteration instead of one call for the whole loop).
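Something like the difference between these two (illustrative Python sketch, assuming numpy is installed; exact numbers vary, but the second version is usually orders of magnitude faster because the loop runs inside numpy's C code):

```python
import math
import numpy as np

data = np.random.rand(1_000_000)

# one C call per element: the Python interpreter drives the loop
slow = np.array([math.sin(x) for x in data])

# one C call for the whole array: the loop itself runs in C
fast = np.sin(data)

assert np.allclose(slow, fast)
```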
I am no coder, but it is my experience that bad code can be slow regardless of language used.
Bad code can certainly be part of it. The average skill level of those coding C/C++/Rust tends to be higher. And modern programs typically use hundreds of libraries, so even if your own code is immaculate, not all of your dependencies will be.
But there's other reasons, too:
Python, Java etc. execute their interpreter/JIT compiler while the program is running.
CLIs are orders of magnitude slower to start, because these languages need to launch a runtime before the actual CLI logic can execute.
GUIs and simulations stutter around, because these languages use garbage collection for memory management.
And then just death by a thousand paper cuts. For example, when iterating over text, you can't tell it to just give you a view/pointer into the existing memory of the text. Instead, it copies each snippet of text you want to process into new memory.
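In Python terms, the closest thing to a view is a memoryview over bytes; normal slicing copies. Rough illustration (not an exact equivalent of what Rust gives you for strings):

```python
data = b"some,comma,separated,text" * 1_000

# slicing bytes (or str) copies the snippet into new memory
chunk_copy = data[5:10]

# a memoryview is just a window into the existing buffer, no copy
chunk_view = memoryview(data)[5:10]

assert bytes(chunk_view) == chunk_copy  # same content, different ownership
```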
And when working with multiple threads in Java, it's considered best practice to defensively copy basically anything you touch. Like, that's good code, and its performance will be mediocre. Also, you'd better not even think about using multiple threads in Python or JS; for those two, parallelism was an afterthought.
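To illustrate the Python side, a toy benchmark (on a standard CPython build; threads barely help with CPU-bound work because of the GIL, processes do):

```python
import time
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def spin(n):
    # pure CPU-bound busy work
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(executor_cls):
    start = time.perf_counter()
    with executor_cls(max_workers=4) as pool:
        list(pool.map(spin, [5_000_000] * 4))
    return time.perf_counter() - start

if __name__ == "__main__":
    print("threads:  ", timed(ThreadPoolExecutor))   # ~serial speed, the GIL serializes it
    print("processes:", timed(ProcessPoolExecutor))  # actually uses multiple cores
```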
Well, and then all of the above feeds back into all the libraries not being performant. There's no chance to use the languages for performance-critical stuff, so no one bothers optimizing the libraries.
Idk, numpy go brrrrrrrrrr. I think it's more just the right tool for the right job. Most languages have areas they excel at and areas where they're weaker; siloing yourself into one and thinking it's faster for every implementation seems short-sighted.
At its heart, numpy is C tho. That's exactly what I'm talking about. Python is amazing glue code. It makes this fast code more useful by wrapping it in simple(r) scripts and classes.
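That glue role is visible even without numpy, e.g. calling straight into the C math library via ctypes (sketch, assumes a Unix-like system where find_library can locate libm):

```python
import ctypes
import ctypes.util

libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0, computed by C, glued together by a few lines of Python
```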
But yeah, I'd like it if the features given by Lombok were standard in the language, though it's not a big deal these days since adding Lombok support is trivial.
You shouldn’t use Lombok, as it uses non-public internal Java APIs, which is why it breaks every release. At one point we had a bug with Lombok that only resolved if you restarted the application. Switching off of Lombok resolved the issue.
Just switch to Kotlin. You can even use Kotlin just as a library if you really want (just for POJOs), but at this point Kotlin is better than Java in almost every way.
Energy use? That's a pointless metric. If that is the goal, then the whole idea of a desktop should be scrapped. Waste of memory and hard drive space. Just imagine the amount of energy wasted on booting a GUI.
If you want to talk about climate change, then electronics is the wrong place to point the finger. For a start, look at cement manufacturing. It requires huge amounts of energy to produce, even though we have eco-friendly variants ready to go. And cement production accounts for 8% of all greenhouse gases released annually.
Hell, just ban private jets and you've offset all the bad things datacenters ever did. Elon took a 10-minute flight to avoid traffic, which consumed around 300 l of fuel. The royal family makes so many flights a year that you could go into the wild and eat bark for the rest of your life and you still wouldn't offset their footprint in a thousand lifetimes.
Bill Gates himself talks a lot about reducing the carbon footprint we make, and yet he refuses to sell his collection of airplanes. He has A COLLECTION of them.
Using a higher-level language that requires more operations than assembly is not a thing to worry about when talking about climate change. Especially without taking into account how much pollution those languages have managed to reduce by smartly controlling irrigation and other processes.
For example, when iterating over text, you can't tell it to just give you a view/pointer into the existing memory of the text. Instead, it copies each snippet of text you want to process into new memory.
As someone used to embedded programming, I find this horrific.
Yep. I used to code a lot in JVM languages, then started learning Rust. My initial reaction was "Why the hell does Rust have two string types?".
Then I learned that it's for representing actual memory vs. view and what that meant. Since then I'm thinking "Why the hell do JVM languages not have two string types?".
I'm not a Java programmer, but I think the equivalent to str would be char[]. However, the ergonomics Rust has for str aren't there for char[], so Java devs probably use String everywhere.
Nope, the crucial difference between Java's char[] and Rust's &str is that the latter is always a pointer into an existing section of memory. When you create a char[], it allocates a new section of memory (and then you get a pointer to that).
One thing they might be able to do is optimize it in the JVM, akin to Rust's Cow.
Basically, you could share the same section of memory between multiple String instances, and only when someone writes to their instance of that String do you copy it into new memory and make the modification there.
Java doesn't have the mutability semantics that Rust uses for this, but I guess with object encapsulation they could manually implement it whenever a potentially modifying method is called...?
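The mechanism itself is simple; here's a toy sketch of copy-on-write in Python, just to show the idea (not how the JVM or Rust's Cow actually implement it):

```python
class CowString:
    """Toy copy-on-write buffer: instances share storage until one of them writes."""

    def __init__(self, text):
        self._buf = list(text)   # shared backing storage
        self._shared = False     # set once another instance points at _buf

    def share(self):
        """Hand out a second instance that reuses the same backing buffer."""
        other = CowString.__new__(CowString)
        other._buf = self._buf
        self._shared = other._shared = True
        return other

    def set_char(self, i, ch):
        """Mutating operation: copy the buffer first if it is still shared."""
        if self._shared:
            self._buf = self._buf[:]   # the actual copy-on-write step
            self._shared = False
        self._buf[i] = ch

    def __str__(self):
        return "".join(self._buf)

a = CowString("hello")
b = a.share()        # no copy yet, both point at the same list
b.set_char(0, "J")   # b copies before writing
print(a, b)          # hello Jello
```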
They do have optimizations; however, they are interpreted at runtime, so they can only be so fast.
Frankly, you won't notice much unless the program is doing something computation-heavy, which shouldn't be done in languages such as JavaScript and Python.
That's because it's not relevant. Speed can be compensated for, either by caching or by outsourcing your load, if there's such a huge need to process large amounts of data quickly. In day-to-day work I can't say I have ever run into issues because code was executing slowly. In normal operation, Python is more than capable of keeping up.
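Caching in particular is often a one-liner; illustrative Python, where the made-up expensive_lookup stands in for whatever is slow:

```python
import time
from functools import lru_cache

@lru_cache(maxsize=1024)
def expensive_lookup(key):
    time.sleep(0.5)          # stand-in for a slow query or computation
    return key.upper()

expensive_lookup("report")   # slow the first time
expensive_lookup("report")   # served instantly from the cache
```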
On the other side of the coin you have memory management, buffer and stack overflows, and general issues almost exclusive to C, which is something you don't have to worry about as much with higher-level languages. Development with Python is simply faster and safer. We as developers have different tools, and we should use them for their appropriate purpose. You can drive nails with a rock as well, but you generally don't see carpenters doing this all day.
You can sometimes deal with performance issues by caching, if you want to trade one hard problem for another (cache invalidation). There's plenty of cases where that's not a solution though. I recently had a 1ns time budget on a change. That kind of optimization is fun/impossible to do in Python and straightforward to accomplish in Rust or C/C++ once you've set up your measurements.
Which is exactly what I said. Most of the time you can work around it. Sure, cache invalidation can be hard, but it doesn't have to be. If you need performance, use a more performant language. Right tool for the job.
You can find plenty of people complaining online about the startup time of the Windows and GNOME (snap) calculators. The problem in those cases isn't solved by compiled languages, but it illustrates that it's important to consider performance even for things like calculator apps.
Especially since languages such as Python and JavaScript are really good at event programming, where you have an event that runs a function. Most of the CPU time is idling anyway.
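Roughly what that looks like in Python (toy asyncio sketch; while one handler waits, the event loop runs the others and the CPU sits idle):

```python
import asyncio

async def handle_event(name, delay):
    # stand-in for waiting on a network reply, timer, user input, ...
    await asyncio.sleep(delay)
    print(f"{name} done after {delay}s")

async def main():
    await asyncio.gather(
        handle_event("click", 1.0),
        handle_event("request", 0.5),
        handle_event("timer", 0.2),
    )

asyncio.run(main())
```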
True, plus the bloated websites I see are using hundreds of thousands of lines of JavaScript. Why would you possibly need that much code? My full-fledged web games use under 10,000.
It is always a question of choosing the right tool for the right task. My core code is in C (but probably better structured than most C++ programs), and it needs to be this way. But I also do a lot of stuff in Perl. When I have to generate source code or smart-edit a file, it is faster and easier to do this in Perl, especially if the execution time is so short that one would not notice a difference anyway.
Or the code that generates files for production: yes, a single run may take a minute (in the background), but it produces the files necessary for the production of goods worth over 100k. And the run is still faster than the surrounding processes, like getting the request from production, calculating the necessary parameters, then wrapping all the necessary files with the results of the run into a reply to the production department.
They aren't as fast as a native language but they aren't all that slow if you aren't trying to use them for performance sensitive applications. Modern machines run all those very quickly as CPUs are crazy fast.
Also, it seems weird to put Java/OpenJDK in the list, as in my experience it is in its own category.
Java is certainly the fastest of the bunch, but I still find it rather noticeable how long the startup of applications takes and how it always feels a bit laggy when used for graphical stuff.
Certainly possible to ignore that on a rational level, but that's why I'm talking about how it feels.
I'm guessing this has to do with the basic UX principle of giving the user feedback. If I click a button, I want feedback that my click was accepted and that the triggered action completed. The sooner those happen, the more confident I feel about my input and the better everything feels.
Yep, I also don't fully agree on that one. I'm typing this on a degoogled Android phone with quite a bit stronger hardware than the iPhone SE that my workplace provides, e.g. octa-core rather than hexa-core, 8 GB vs. 3 GB of RAM.
And yet, you guessed it, my Android phone feels quite a bit laggier. Scrolling on the screen has a noticeable delay. Typing on the touchscreen doesn't feel great on the iPhone either, because the screen is tiny, but at least it doesn't feel like I'm typing via SSH.
Why? I certainly expect that to be a factor, but I've gone through several generations of Android devices and I have never seen one without the GC-typical micro-stutters.
I have experienced the delayed scrolling, mostly on cheaper phones.
But that's mostly because I'm used to phones having 120+ Hz screens now; going back to a 60 Hz screen does feel a bit sluggish, which is especially noticeable on a phone where you're physically touching the thing. I think it might also have something to do with the cheaper touch matrices, which may have a lower polling rate as well.