Urgent Need for Tighter AI Controls to Prevent Data Exposure, Experts Warn

San Francisco, CA – As artificial intelligence (AI) systems become increasingly integrated into workplaces worldwide, experts are sounding the alarm on the growing risk of data exposure due to unregulated AI use. A recent TELUS Digital survey revealed that 68% of enterprise employees using generative AI at work access publicly available tools like ChatGPT, Microsoft Copilot, or Google Gemini through personal accounts, with 57% admitting to entering sensitive information into these platforms. This practice, known as “shadow AI,” bypasses company IT oversight and poses significant risks of data leaks, prompting calls for stricter controls.

This unsanctioned use of AI tools in the workplace is a mounting concern. Menlo Security, a browser security firm, warns that unchecked usage can lead to inadvertent data leakage, in which sensitive information is exposed through generative AI systems. “While data loss is a concern, data leakage in GenAI is a bigger issue,” the firm’s report states, urging organizations to implement guardrails that control which AI tools are allowed on company networks.
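What such a guardrail could look like in code is sketched below: a minimal, hypothetical outbound filter in Python that blocks prompts destined for GenAI domains outside a corporate allowlist and redacts common sensitive patterns before anything leaves the network. The domain name and regular expressions are illustrative assumptions, not details from Menlo Security’s report.

```python
import re

# Hypothetical allowlist of GenAI endpoints sanctioned by IT (illustrative only).
APPROVED_AI_DOMAINS = {"copilot.internal.example.com"}

# Simple patterns for data that should never leave the network inside a prompt.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # US Social Security numbers
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),     # payment card numbers
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),     # email addresses
]

def check_request(domain: str, prompt: str) -> tuple[bool, str]:
    """Return (allowed, sanitized_prompt) for an outbound GenAI request."""
    if domain not in APPROVED_AI_DOMAINS:
        return False, ""  # block unapproved (shadow AI) tools outright
    sanitized = prompt
    for pattern in SENSITIVE_PATTERNS:
        sanitized = pattern.sub("[REDACTED]", sanitized)
    return True, sanitized

if __name__ == "__main__":
    ok, _ = check_request("chat.openai.com", "Summarize this contract")
    print(ok)  # False: not on the corporate allowlist
    ok, text = check_request("copilot.internal.example.com",
                             "Email jane.doe@example.com about SSN 123-45-6789")
    print(ok, text)  # True, with the email address and SSN redacted
```

Real deployments typically enforce this at a secure web gateway or browser-isolation layer rather than in application code, but the allowlist-plus-redaction pattern is the same.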

The broader implications of AI-driven data exposure extend beyond workplaces. Jennifer King, a privacy and data policy fellow at Stanford University’s Institute for Human-Centered Artificial Intelligence, notes that AI systems exacerbate privacy risks due to their data-hungry nature and lack of transparency. “AI systems collect vast amounts of data, often without user consent, and repurpose it for training models, sometimes with civil rights implications,” King explains. For instance, biased AI hiring tools and facial recognition systems have led to discriminatory outcomes, underscoring the need for robust data governance.

Regulatory frameworks like the European Union’s General Data Protection Regulation (GDPR) and AI Act set high standards for data privacy, requiring transparency and strict data handling protocols. In contrast, the U.S. lacks comprehensive federal AI privacy laws, relying on voluntary guidelines like the NIST AI Risk Management Framework. The White House’s 2023 Executive Order on AI emphasizes privacy impact assessments, but experts argue for binding legislation to address risks like unauthorized data scraping by tech companies.

Innovative solutions are emerging to balance AI utility with privacy. Techniques such as data masking, federated learning, and differential privacy let AI systems learn from sensitive datasets while limiting how much personal data is exposed. For example, Ocean Protocol’s decentralized AI platform uses blockchain to enable secure data sharing for privacy-sensitive applications such as healthcare. AI-driven cybersecurity tools can also strengthen data protection by automating threat detection and enforcing encryption, though they must be paired with strict access controls to reduce the risk of adversarial attacks.
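As one concrete illustration of how differential privacy limits exposure, the sketch below releases a simple count with calibrated Laplace noise. It is a minimal example under assumed parameters (synthetic records, an epsilon of 0.5); a production system would rely on a vetted privacy library rather than this hand-rolled sampler.

```python
import math
import random

def dp_count(flags: list[bool], epsilon: float) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one record changes
    the result by at most 1), so noise is drawn from Laplace(0, 1/epsilon).
    """
    true_count = sum(flags)
    scale = 1.0 / epsilon
    # Inverse-transform sample from a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

if __name__ == "__main__":
    # Synthetic stand-in for sensitive records (e.g., patients with a condition).
    records = [random.random() < 0.3 for _ in range(1000)]
    print("true count:   ", sum(records))
    print("private count:", round(dp_count(records, epsilon=0.5), 1))
```

Smaller epsilon values add more noise and give stronger privacy guarantees: the analyst still sees a useful aggregate, but no single record can be confidently inferred from the output.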

The rise of AI-driven cyber threats, such as model poisoning and deepfake generation, further complicates the landscape. Malicious actors can exploit AI vulnerabilities, necessitating proactive measures like regular audits and employee training on secure AI use. Posts on X reflect growing public concern, with users like @mungship warning that uploading sensitive documents to AI without data controls can lead to unauthorized data use by providers like OpenAI.

As AI adoption accelerates, organizations must prioritize privacy-by-design principles, transparent data practices, and compliance with evolving regulations to mitigate risks. Without tighter controls, the unchecked proliferation of AI could lead to widespread data exposure, eroding trust and amplifying security threats.

Sources: TELUS Digital survey, Menlo Security reports, Stanford HAI white paper, the EU AI Act, and posts on X.
