A single click mounted a covert, multistage attack against Copilot

BIAS: Lean Left
RELIABILITY: High

Political Bias Rating

This rating indicates the source’s editorial stance on the political spectrum, based on analysis from Media Bias/Fact Check, AllSides, and Ad Fontes Media.

Far Left / Left: Progressive editorial perspective
Lean Left: Slightly progressive tendency
Center: Balanced, minimal editorial slant
Lean Right: Slightly conservative tendency
Right / Far Right: Conservative editorial perspective

Current source: Lean Left. Stories with cross-spectrum coverage receive elevated prominence.

Reliability Rating

This rating measures the source’s factual accuracy, sourcing quality, and journalistic standards based on third-party fact-checking assessments.

Very High: Exceptional accuracy, rigorous sourcing
High: Strong factual reporting, minor issues rare
Mixed: Generally accurate but occasional concerns
Low: Frequent errors or misleading content
Very Low: Unreliable, significant factual issues

Current source: High. Higher reliability sources receive elevated weighting in story prioritization.

Ars Technica Security
22:03Z

Microsoft has fixed a vulnerability in its Copilot AI assistant that allowed hackers to pluck a host of sensitive user data with a single click on a legitimate URL. The hackers in this case were white-hat researchers from security firm Varonis . The net effect of their multistage attack was that they exfiltrated data, including the target’s name, location, and details of specific events from the user’s Copilot chat history.

The attack continued to run even when the user closed the Copilot chat, with no further interaction needed once the user clicked the link, a legitimate Copilot one, in the email. The attack and resulting data theft bypassed enterprise endpoint security controls and detection by endpoint protection apps. It just works “Once we deliver this link with this malicious prompt

Continue reading at the original source

Read Full Article at Ars Technica Security →