Anonymized data as a concept has always been a joke. With enough data points, the original person can usually be traced.
The design goal of anonymized data is that it is processed so that tracing back to a person is explicitly impossible. That means not only removing personally identifiable information but also stripping session data that could link records back together.
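To make that concrete, here's a minimal sketch of what "remove PII and session data" can look like in practice. The field names (user_id, session_id, ip, etc.) are made-up assumptions for illustration, not any particular system's schema:

```python
# Minimal sketch: drop identifying and session-linking fields from event records.
# All field names here are illustrative assumptions.
PII_FIELDS = {"user_id", "name", "email", "ip"}
LINKAGE_FIELDS = {"session_id", "device_id"}

def anonymize(record: dict) -> dict:
    """Return a copy of the record with identifying and linking fields removed."""
    return {k: v for k, v in record.items()
            if k not in PII_FIELDS and k not in LINKAGE_FIELDS}

events = [
    {"user_id": 42, "session_id": "abc", "ip": "203.0.113.7",
     "vendor": "hot-dog-stand-3", "timestamp": "2024-06-01T14:05"},
]
print([anonymize(e) for e in events])
# [{'vendor': 'hot-dog-stand-3', 'timestamp': '2024-06-01T14:05'}]
```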
Depends. For example: if the goal is to show which vendors at a sports game got the most traffic, you can easily share how many people visited each vendor, and at which times, without any possibility of identifying Joe buying his fifth hot dog.
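A rough sketch of that kind of aggregation (the event fields are assumptions for illustration): collapse individual purchases into per-vendor, per-hour counts, so no output row corresponds to a single person anymore.

```python
from collections import Counter

# Illustrative raw purchase events; field names are assumptions for this sketch.
purchases = [
    {"buyer_id": 1, "vendor": "hot-dogs", "time": "14:05"},
    {"buyer_id": 1, "vendor": "hot-dogs", "time": "14:40"},
    {"buyer_id": 2, "vendor": "nachos",   "time": "14:10"},
    {"buyer_id": 3, "vendor": "hot-dogs", "time": "15:20"},
]

# Aggregate to (vendor, hour) counts; buyer_id never appears in the output.
def vendor_traffic(events):
    counts = Counter()
    for e in events:
        hour = e["time"].split(":")[0]   # bucket timestamps by hour
        counts[(e["vendor"], hour)] += 1
    return counts

print(vendor_traffic(purchases))
# Counter({('hot-dogs', '14'): 2, ('nachos', '14'): 1, ('hot-dogs', '15'): 1})
```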
If enough information about the original person is destroyed, it cannot be recreated. Just as you can't "enhance" a low-resolution picture the way they do in those serialized crime shows.
I'm not sure I'm expressing this clearly. The same company is collecting the data and anonymizing it. They have people dedicated to reviewing which data a service is designed to store, classifying that data according to its privacy implications, and anonymizing everything they hold in order to comply with all sorts of legislation.
If the data they are collecting isn't anonymous, or could be de-anonymized, they are liable to pay huge fines and suffer other painful legal consequences.
This is not about hypothetical scenarios where you can argue that tracking random browser fingerprints can pinpoint who you are. This is about a single company having to legally demonstrate that it does not directly or indirectly abuse personally identifiable information, otherwise it has to pay a fortune in fines for no good reason.