Oof... At work we deal with clients whose projects are covered by NDAs and confidentiality agreements, among other things. This is bad enough even if the scanned information is siloed per organization: it could create a situation where somebody not under NDA accesses confidential client info leaked by an LLM that ingested every PDF in Adobe's cloud service without regard to distribution. Even worse if they're feeding everything back into a single global LLM -- corporate espionage becomes as simple as a bit of prompt engineering!
I highly doubt they would be able to use private user data for training. Using data available on the internet is a legal grey area, but using data that is not publicly available would surely be illegal. And when a document is merely "read" by the LLM at inference time, no training is happening, so it won't store the data or be able to regurgitate it later.*
* that is, if they have designed this in an ethical and legal way 🙃
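To illustrate the distinction that comment is drawing (a toy sketch of my own, not anything about Adobe's actual pipeline): a model's parameters are the only place training data can persist. Inference only reads them; a training step writes to them.

```python
# Toy "model": a handful of weights -- the only place data can persist.
weights = [0.5, -1.2, 0.3]

def infer(doc_features):
    """Inference: a forward pass that reads the weights and mutates nothing."""
    return sum(w * x for w, x in zip(weights, doc_features))

def train_step(doc_features, target, lr=0.01):
    """Training, by contrast, writes information about the document into the weights."""
    error = infer(doc_features) - target
    for i, x in enumerate(doc_features):
        weights[i] -= lr * error * x

confidential_doc = [1.0, 2.0, 3.0]  # stand-in for an ingested PDF

before = list(weights)
infer(confidential_doc)                    # "reading" the doc: weights untouched
assert weights == before

train_step(confidential_doc, target=1.0)   # training on it: weights change
assert weights != before
```

Of course, this only holds if the vendor genuinely keeps user documents out of the training path -- which is exactly the footnote's caveat.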
They will use every scrap of data you haven’t explicitly told them not to use, and they will make it so that the method to disable these ‘features’ is little known, difficult to understand/access, and automatically re-enabled every release cycle.
When they are sued, they will point to announcements like this and the one or two paragraphs in their huge EULA to discourage, dismiss, and slow down lawsuits.
Does the AI include a feature that converts the bloated, non-functional hulk of an application that is Adobe Acrobat into a usable, fit-for-purpose PDF viewer/writer/editor with a consistent interface? Ooh, I really hope it does; that would be genuinely helpful.
I hope governments around the world punish this kind of espionage by publicly banning these "AI-assisted" products from government organizations as a (national) security threat. Their PR needs to be in the dirt.
Ya know, AI has really pushed the Cyber Crime field years into the future! Adobe made an excellent decision adding it to their suite of technology used by businesses around the world!
For the past 24 hours, I've been arguing with somebody who insists AI should be given our political, religious, and brand biases in order to tailor a search engine that will only show us results we are comfortable with.
In a privacy community.
Things are going to get dumber before they get better.
The only acceptable options are AI software that runs on-premises/offline, or that is completely turned off by default. Anything else is espionage.
Ideally, use open-source software to avoid potential bait-and-switches. But understandably, this is not always a practical option.