tech
December 3, 2025
Critics scoff after Microsoft warns AI feature can infect machines and pilfer data
Integration of Copilot Actions into Windows is off by default, but for how long?

TL;DR
- Microsoft's Copilot Actions, experimental AI features in Windows, pose risks of data theft and malware.
- Flaws like hallucinations and prompt injection in AI models are difficult to prevent and can lead to data exfiltration and malicious code execution.
- Security experts compare the warnings to those given for macros, which have historically been exploited.
- Microsoft states that IT admins can control Copilot Actions at account and device levels.
- Users may struggle to detect or prevent exploitation attacks targeting AI agents.
- Copilot Actions are experimental and off by default, but past features have become default over time.
- Microsoft's stated goals for securing agentic features include non-repudiation, confidentiality, and user approval.
- Critics suggest Microsoft's warnings are a 'cover your ass' measure, shifting liability to users.
- Concerns about AI security risks extend to offerings from other major tech companies.
Continue reading
the original article