Personal take: If they hadn't said how the videos on the page were created, I genuinely think several of the AI-generated videos could be passed off as being shot with a camera or made with CGI (though there are probably still inconsistencies if you look hard enough).
This is a serious question, and I'd love to hear some other views on it: should there be laws that assess new tech before it is allowed into the public sphere? How would such laws be enforced?
I don’t think this would work, since most governments don’t understand technology well (just look at the Flipper Zero ban in Canada as an example). Technology has also been disruptive to existing industries (Uber, Airbnb, Netflix, etc.), so I suspect traditional industries would just end up lobbying governments whenever new technology companies challenged them, and we’d see less technology overall. That being said, I can see the need for more tech regulation in a lot of areas (looking at you, Apple); I just can’t see a blanket solution being the right approach.
It's quite frightening to see how fast these AI models have improved over the last few years. You can still spot errors in the videos, but how long until you can't anymore?
It sounds terrifying to no longer know what's real and what isn't.
These videos will also put a lot of people out of work, especially in the creative industry. Who needs someone to follow a car with a drone anymore when you can just generate that footage on the fly?