The U.K. Information Commissioner’s Office (ICO) has confirmed that LinkedIn has temporarily halted the use of U.K. users’ data for training its artificial intelligence (AI) models. This move follows concerns raised by the ICO regarding how the professional networking platform was handling user information.
“We are pleased that LinkedIn has reflected on the concerns we raised about its approach to training generative AI models with information relating to its U.K. users,” said Stephen Almond, the ICO’s executive director of regulatory risk. “We welcome LinkedIn’s confirmation that it has suspended such model training pending further engagement with the ICO.”
The ICO emphasised its intention to closely monitor LinkedIn and other companies with generative AI capabilities, such as Microsoft, to ensure that user data is adequately protected.
LinkedIn's move comes after revelations that the Microsoft-owned company had been using user data to train its AI models without explicit consent. The practice was quietly disclosed in LinkedIn's updated privacy policy, which came into effect on September 18, 2024, according to a report by 404 Media.
In response, LinkedIn clarified its position, stating, “At this time, we are not enabling training for generative AI on member data from the European Economic Area, Switzerland, and the United Kingdom, and will not provide the setting to members in those regions until further notice.”
The company further explained that it takes steps to limit personal data in the datasets used to train its AI models. This includes using privacy-enhancing technologies to remove or redact personal information.
For users outside of Europe, LinkedIn offers the option to opt out of AI training by adjusting their privacy settings under the "Data for Generative AI Improvement" option. However, LinkedIn noted that while opting out prevents future use of personal data for AI training, it does not affect data that has already been used.
This issue mirrors broader concerns across the tech industry, as other companies have also been scrutinized for their data-handling practices. Recently, Meta admitted to using non-private user data for AI training without explicit consent, a practice dating back to 2007. Similarly, Zoom backtracked on its plans to use customer content for AI model training following public outcry over privacy concerns.
These incidents highlight the growing focus on how companies are using personal data to train large AI models. As technology evolves, questions about consent, transparency, and user protection have become more prominent.
Meanwhile, a recent report from the U.S. Federal Trade Commission (FTC) also criticized social media and video streaming platforms for their data collection practices. The report noted that many companies engage in widespread surveillance, using personal data to build comprehensive profiles that are often monetized.
The FTC raised alarms over inadequate privacy safeguards, particularly for children and teens, and cited examples of poor data deletion practices, even after users requested their information be removed.
As regulators increase scrutiny, platforms like LinkedIn are under pressure to strengthen their data protection measures and provide greater transparency about how they use personal information.