You can't have a babysitter following every human around to make sure they don't do something dangerous. Except in high-risk areas, liability is the most practical option.
So you want to read a 50-page regulation on how to boil water in your home, because boiling water can hurt people?
And how do you regulate AI when you have no idea how it works or what could go wrong? It's not as if politicians are AI experts. Driving itself is already heavily regulated; the AI has to follow traffic rules just like anyone else, if that is what you are thinking.
Why do you believe that judges (or even juries made of lay people) can make sense of the very things that you’re so confident legislators or regulators cannot?
I'm not saying regulation is perfect, so certainly there is a role for judicial review. But come on, man… there are a lot of non sequiturs and straw men in your argument.
Quite often, juries don't have to rule on technical matters. They will have access to the company's internal communications, testimony from the engineers who worked on the project, and so on. If safety concerns were being ignored, you can usually find enough witnesses and documents to prove it.
On the other hand, how do you even begin to regulate something that is still in the process of being invented? What would the regulation look like?