About a year ago I ended up with a floating point value that was something like 1.0000000000078 when it should have been 1. Tore my hair out for hours trying to get the piece of crap embedded, vendor-locked device to just make it 1.
It's almost like some useless person stored a discrete quantity that will probably never get above the hundreds as a floating point, when it obviously should have been an int.
Nah, it makes sense to use a floating point number here: unless the test is marked out of a factor of 100, the final percentage will likely have a fractional part. The mistake was not rounding the value that actually gets displayed.
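Something like this rough sketch (Python purely for illustration, no idea what the app is actually written in; `correct` and `TOTAL_QUESTIONS` are made-up names):

```python
# Hypothetical scoring, mirroring how the app apparently does it:
# each correct answer contributes 1/15.
TOTAL_QUESTIONS = 15
correct = 15  # pretend the user got everything right

score = sum(1 / TOTAL_QUESTIONS for _ in range(correct))
percentage = score * 100  # may land a hair off 100 thanks to float error

# Keep the float internally, round only when displaying:
print(f"{percentage:.0f}%")   # 100%
print(f"{percentage:.2f}%")   # 100.00%
```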
The issue here lies in how it values each correct answer, which is set at 1/15. That fraction can't be represented exactly in binary floating point, so every per-question value carries a tiny approximation error, and when you sum all fifteen of them the total doesn't quite reach 1.
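You can watch it happen; a quick sketch (Python, assuming the usual IEEE 754 doubles, exact digits may vary by platform):

```python
per_question = 1 / 15
print(per_question)           # 0.06666..., already a rounded approximation of 1/15

total = sum(per_question for _ in range(15))
print(total)                  # very close to 1.0, but possibly not exactly 1.0
print(total == 1.0)           # can come out False even though it "should" be 1
```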
the pedantic answer is that, strictly speaking, 99.9999999999999% isn't the same as 100%, because the 9s don't repeat forever. but the more practical answer is that they're effectively the same number: given how computers (usually) round, anything showing up past the precision the format can actually hold (roughly 7 significant digits for a 32-bit float, 15-16 for a double) is junk that can be ignored.
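and if code ever needs to treat the two as equal (rather than just displaying a rounded value), the usual move is a tolerance check instead of ==. a tiny sketch using Python's math.isclose, with a hand-picked value standing in for what the app ends up with:

```python
import math

displayed = 99.99999999999999   # hand-picked stand-in for the buggy result
expected = 100.0

print(displayed == expected)              # False: the underlying bits differ
print(math.isclose(displayed, expected))  # True: equal within floating point noise
```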