As artificial intelligence reshapes industries, a darker side emerges: cybercriminals wielding AI to launch sophisticated attacks that outpace traditional defenses. Experts warn that without proactive measures, businesses could face unprecedented breaches, making cybersecurity a top priority in 2025.
In a timely discussion, Covington & Burling’s Ashden Fein and Micaela McMurrough break down these AI cybersecurity threats, drawing on recent regulatory guidance to offer practical strategies. Topics such as AI-enabled social engineering, deepfake phishing, AI-enhanced cyberattacks, the NYDFS AI guidance, and cybersecurity risk assessments underscore how urgently U.S. firms must adapt amid rising digital vulnerabilities.
Understanding the Rise of AI-Powered Cyber Threats
Artificial intelligence isn’t just a tool for innovation—it’s becoming a weapon in the hands of threat actors. According to insights from Fein and McMurrough, AI amplifies the scale, speed, and sophistication of cyberattacks, transforming routine threats into highly targeted assaults.
The duo points to the New York Department of Financial Services (NYDFS) industry letter of October 16, 2024, which outlines specific risks; it imposes no new rules but clarifies how existing regulations apply to AI. The guidance, aimed at financial institutions yet relevant across sectors, identifies AI as a “new and emerging threat” that demands updated risk management.
Background reveals a surge in AI-fueled incidents. For instance, deepfakes—AI-generated audio, video, or text—have spiked in phishing schemes, tricking employees into divulging sensitive data or authorizing fraudulent transfers. Global reports indicate a 300% increase in deepfake-related fraud attempts since 2023, per cybersecurity firms like CrowdStrike.
Key Threats Highlighted by Fein and McMurrough
Fein, vice chair of Covington’s global Cybersecurity practice, and McMurrough, co-chair of the firm’s Technology Group and AI/IoT initiative, categorize threats into two main buckets: those from adversaries using AI against organizations and risks from organizations’ own AI adoption.
AI-Enabled Social Engineering
This tops the list as a critical danger. Threat actors leverage AI to create convincing deepfakes for phishing, vishing (voice phishing), and even videoconference impersonations. “AI makes these attacks more realistic and interactive,” McMurrough explains, noting how they bypass traditional security like biometric checks.
AI-Enhanced Cyberattacks
AI allows hackers to rapidly scan for vulnerabilities, automating exploits at unprecedented speeds. This includes scaling malware distribution or optimizing ransomware demands.
Data and Supply Chain Vulnerabilities
Organizations hoarding non-public information for AI training become prime targets. Additionally, relying on third-party AI vendors introduces supply chain risks, where a single weak link can compromise entire networks.
Fein emphasizes that these aren’t hypothetical: “We’ve seen real-world cases where AI tools exposed sensitive data, amplifying incentives for attacks.”
Expert Recommendations for Bracing Against Risks
Drawing from NYDFS guidance, Fein and McMurrough advocate integrating AI considerations into existing cybersecurity frameworks.
Strengthen Risk Assessments and Governance
Conduct annual risk assessments that explicitly factor in AI threats. Boards should receive regular updates on AI-related cybersecurity, ensuring policies evolve with the technology.
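The annual assessment described above could seed a simple risk register. A minimal sketch in Python using classic likelihood-times-impact scoring; the entries, ratings, and 1–5 scale are illustrative assumptions, not drawn from the NYDFS guidance:

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) to 5 (frequent)
    impact: int      # 1 (minor) to 5 (severe)

    @property
    def score(self) -> int:
        # Classic likelihood-times-impact scoring
        return self.likelihood * self.impact

# Hypothetical AI-threat entries for illustration only
register = [
    Risk("Deepfake vishing of finance staff", likelihood=4, impact=5),
    Risk("AI-automated vulnerability scanning", likelihood=5, impact=3),
    Risk("Training-data exposure via third-party AI vendor", likelihood=3, impact=4),
]

# Surface the highest-scoring risks first for board reporting
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}")
```

A register like this makes the board-reporting step concrete: the same sorted list can feed a quarterly update without rebuilding the assessment from scratch.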
Bolster Access Controls and Authentication
Shift to AI-resistant methods like physical security keys or digital certificates. Implement “zero trust” architectures and liveness detection for biometrics to counter deepfakes.
Enhance Training and Monitoring
Mandate annual training on spotting AI-driven scams, such as unusually urgent requests. Monitor AI systems for anomalous queries that could indicate data exfiltration attempts.
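One simple form the query monitoring above can take is sliding-window rate detection: flag any caller whose query volume to an internal AI system spikes far beyond normal. A minimal sketch, with an assumed window and threshold chosen purely for illustration:

```python
import time
from collections import defaultdict, deque
from typing import Optional

class QueryRateMonitor:
    """Flag callers whose query volume to an internal AI system
    exceeds a threshold within a sliding time window, a possible
    sign of automated data exfiltration."""

    def __init__(self, window_seconds: float = 60, threshold: int = 100):
        self.window = window_seconds
        self.threshold = threshold        # max queries allowed per window
        self.events = defaultdict(deque)  # caller -> recent timestamps

    def record(self, caller: str, now: Optional[float] = None) -> bool:
        """Record one query; return True if the caller is now anomalous."""
        now = time.time() if now is None else now
        q = self.events[caller]
        q.append(now)
        # Drop timestamps that have aged out of the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold

# Simulate a service account issuing 150 queries over 15 seconds
monitor = QueryRateMonitor(window_seconds=60, threshold=100)
alerts = [monitor.record("svc-account-7", now=t * 0.1) for t in range(150)]
```

Real deployments would baseline each caller individually rather than use one fixed threshold, but the structure is the same: record, prune, compare.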
Vendor Management and Data Minimization
Vet third parties rigorously, demanding warranties on AI security. Adhere to data minimization principles, inventorying and securing AI-related data to reduce exposure.
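The data-minimization principle above can be enforced mechanically: strip every field an AI workflow does not strictly need before data leaves your control. A minimal sketch, where the allow-listed schema and sample record are hypothetical:

```python
# Fields the AI workflow actually needs (assumed schema for illustration)
ALLOWED_FIELDS = {"ticket_id", "product", "issue_summary"}

def minimize(record: dict) -> dict:
    """Return a copy of the record restricted to allow-listed fields,
    so non-public data never reaches a third-party AI vendor."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

raw = {
    "ticket_id": "T-1042",
    "product": "router-x",
    "issue_summary": "intermittent connection drops",
    "customer_ssn": "***-**-1234",   # non-public data the vendor never needs
    "email": "user@example.com",
}
clean = minimize(raw)
```

An allow list (rather than a block list) is the safer default here: new sensitive fields added upstream are excluded automatically instead of leaking by omission.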
The experts also highlight AI’s dual nature: “While it poses risks, AI can enhance defenses through advanced threat detection,” Fein notes, urging balanced adoption.
Public Reactions and Broader Implications
Reactions from the cybersecurity community praise the guidance for its practicality. On platforms like X, professionals applaud Fein and McMurrough’s clear breakdown, with one post stating, “This is essential reading for CISOs navigating AI’s double-edged sword.”
Industry groups echo calls for federal standards, amid concerns that fragmented state rules could hinder innovation.
Impact on U.S. Readers: Economy, Lifestyle, and Beyond
For Americans, these risks hit close to home. The U.S. economy, reliant on digital infrastructure, faces potential losses exceeding $100 billion annually from cyber incidents, per FBI estimates, a figure AI stands to amplify. Small businesses and consumers risk identity theft via deepfake scams, disrupting finances and trust in online services.
Politically, it fuels debates on AI regulation, with bills like the AI Accountability Act gaining traction. In technology, firms must invest in resilient systems, potentially raising costs but fostering jobs in cybersecurity. Lifestyle impacts include safer online banking and e-commerce, but vigilance against AI-manipulated media could alter social interactions.
For sports and entertainment, AI deepfakes threaten ticket scams or athlete impersonations, urging fans to verify sources.
Conclusion: Preparing for an AI-Driven Future
In essence, Covington’s Ashden Fein and Micaela McMurrough illuminate AI as a transformative yet perilous force in cybersecurity, urging immediate action through updated protocols and awareness. By heeding NYDFS-like guidance, organizations can mitigate these emerging threats.
Looking forward, as AI evolves, expect tighter regulations and innovative defenses. U.S. readers should prioritize deepfake-phishing awareness, defenses against AI-enhanced cyberattacks, implementation of the NYDFS AI guidance, and regular cybersecurity risk assessments to safeguard against this “new and emerging threat” in an increasingly digital world.
