Generative Artificial Intelligence has burst onto the scene, bringing with it significant ESG challenges, particularly around data privacy, labor practices, and corporate governance.
A major issue is the source of AI training data, with lawsuits against OpenAI, Anthropic, and Google DeepMind over alleged copyright violations. Global regulatory bodies are also investigating AI companies for anti-competitive behavior and privacy breaches.
Concerns over transparency in AI and the spread of misinformation continue to grow, with AI models accused of generating false or biased content. Additionally, worries about cybersecurity vulnerabilities, such as data leaks and hacking risks, have further fueled scrutiny. Labor and working conditions also remain a concern in the industry, with reports of low wages and weak protections, while whistleblowers call for better safeguards and highlight governance instability.
What does the GenAI landscape look like for ESG issues? Read on to find out.
OpenAI: Navigating Copyright Infringement and Regulatory Scrutiny
As a leader in GenAI, OpenAI has faced increasing scrutiny over ESG issues, particularly copyright infringement. It has been sued by major news outlets, publishers, and music labels for allegedly using copyrighted content without permission to train its AI models. Beyond copyright concerns, OpenAI has been fined for privacy violations and is under regulatory scrutiny, including antitrust investigations in the U.S. and Europe. Data security risks, working conditions for AI data workers, and internal governance challenges—such as whistleblower concerns and executive upheaval—have also drawn criticism.
Key Controversies:
- Copyright infringement lawsuits from The New York Times, ANI, major Canadian media outlets, and others
- Concerns over carbon footprint
- FTC launches inquiry into Microsoft's OpenAI partnership
- Cyberattacks on ChatGPT
- OpenAI insiders warn of serious risks and call for whistleblower protections
- US Space Force halts ChatGPT use over data security concerns
Anthropic: Balancing Ethical AI Practices with Data Privacy Challenges
Despite lower volumes of ESG controversies, Anthropic still faces scrutiny over data privacy, ethical AI, and corporate governance. The company has been sued for allegedly using copyrighted material in AI training and accused of bypassing anti-scraping rules. Security concerns grew after vulnerabilities in its Claude AI model and a confirmed data leak. Its ethical AI stance has also been questioned over reported military ties. Meanwhile, former employees have called for stronger whistleblower protections, highlighting transparency and accountability concerns.
Key Controversies:
- Anthropic's ethical AI stance under scrutiny over military ties
- Anthropic accused of bypassing website anti-scraping rules
- UK launches probe into Google’s investment in Anthropic
- A call for stronger whistleblower protections
- Anthropic confirms data leak incident
- FTC investigation for anti-competitive practices
Microsoft AI: Facing Antitrust and Intellectual Property Controversies
Microsoft's AI controversies have grown into serious legal challenges from 2023 to 2025. The company faces lawsuits over its Copilot chatbot, raising intellectual property concerns. Ongoing antitrust inquiries are examining Microsoft’s AI partnerships, while publishers have filed copyright claims against the company. With investigations by U.S. regulators, the EU, and UK watchdogs, scrutiny has intensified globally. Microsoft’s hiring practices have also come under fire, particularly its recruitment of key talent from AI startups, raising concerns over potential anti-competitive behavior.
Key Controversies:
- US regulators investigate Microsoft, OpenAI, and Nvidia over antitrust concerns
- US FTC probes Microsoft's cloud and AI practices
- Elon Musk adds Microsoft to his lawsuit against ChatGPT maker OpenAI
- UK investigates Microsoft over AI startup hires
- Chicago Tribune, NYT sue OpenAI and Microsoft over copyright infringement
- Microsoft and Amazon face UK scrutiny over AI deals
DeepSeek: Data Privacy and Ethical Use in AI-Powered Discovery
Although relatively new, DeepSeek has been the center of attention in recent months. It has been drawn into anti-competitive practices scandals over its disruption of OpenAI, as well as controversies linked to data privacy and cybersecurity. Countries including Australia, South Korea, France, and India have criticized and, in some instances, banned the AI platform. Additionally, DeepSeek has faced questions about its supply chain and forced labor practices.
Key Controversies:
- DeepSeek disrupts OpenAI
- Australia bans DeepSeek on government devices
- South Korea says DeepSeek sent user data to TikTok owner ByteDance
- India's finance ministry asks employees to avoid AI tools like ChatGPT, DeepSeek
- French privacy watchdog to quiz DeepSeek on AI
- China's new AI app DeepSeek is trying to erase our genocide from history, Uyghurs warn
Mistral AI: Open-Source Development and Accountability in AI Systems
Mistral AI, a French AI startup, has faced controversies over data privacy and cybersecurity, anti-competitive practices, and senior management issues.
Conclusion
The rise of GenAI has come with significant ESG challenges. Its major players, like OpenAI, Anthropic, and Microsoft, face issues such as copyright infringement, privacy violations, questionable labor practices, and environmental impacts. As regulators step up their investigations, these firms will have to prioritize transparency, ethical practices, and sustainability or risk additional controversies.
Reach out to SESAMm
TextReveal’s web data analysis of over five million public and private companies is essential for keeping tabs on ESG investment risks. To learn more about how you can analyze web data or to request a demo, reach out to one of our representatives.