They go on to deduce it's an off-by-one error in the time domain.
So instead of processing samples 0-127 it's processing samples 0-126 (a classic i < 127 instead of i <= 127 in a for loop).
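Roughly what that would look like in C (a made-up sketch; the names and the per-sample work are my guesses, not the actual driver code):

    /* 128-sample buffer, indices 0-127 */
    void process_block(float samples[128], float gain) {
        for (int i = 0; i < 127; i++) {  /* bug: sample 127 is never processed */
            samples[i] *= gain;          /* stand-in for the real DSP work */
        }
        /* fix: i <= 127, or the more conventional i < 128 */
    }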
I get that this is a bug, but it kinda sucks that people feel it's all right to act this way. Software is hard, and unless you're using a language with zero-overhead iteration you're probably writing your drivers in C and iterating with a for loop like our ancestors did. Off-by-one errors are stupidly common, and everyone is human.
I mean, fuck megacorporations, but this pile-on is still cringeworthy.
That being said, it's going to be fun to see the quality differences between these operating systems in a few years because, as far as I know, Apple would rather force Swift into the systems-level language space than adopt an existing memory-safe systems language like Rust today.
Meanwhile, Microsoft, Google, Amazon, etc. are all investing heavily in Rust by integrating it into their platforms.