ChatGPT falls to new data-pilfering attack as a vicious cycle in AI continues

BIAS: Lean Left
RELIABILITY: High

Political Bias Rating

This rating indicates the source’s editorial stance on the political spectrum, based on analysis from Media Bias/Fact Check, AllSides, and Ad Fontes Media.

Far Left / Left: Progressive editorial perspective
Lean Left: Slightly progressive tendency
Center: Balanced, minimal editorial slant
Lean Right: Slightly conservative tendency
Right / Far Right: Conservative editorial perspective

Current source: Lean Left. Stories with cross-spectrum coverage receive elevated prominence.

Reliability Rating

This rating measures the source’s factual accuracy, sourcing quality, and journalistic standards based on third-party fact-checking assessments.

Very High: Exceptional accuracy, rigorous sourcing
High: Strong factual reporting, minor issues rare
Mixed: Generally accurate but occasional concerns
Low: Frequent errors or misleading content
Very Low: Unreliable, significant factual issues

Current source: High. Higher reliability sources receive elevated weighting in story prioritization.

Ars Technica Security
14:00Z

There’s a well-worn pattern in the development of AI chatbots. Researchers discover a vulnerability and exploit it to do something bad. The platform introduces a guardrail that stops the attack from working.

Then, researchers devise a simple tweak that once again imperils chatbot users. The reason more often than not is that AI is so inherently designed to comply with user requests that the guardrails are reactive and ad hoc, meaning they are built to foreclose a specific attack technique rather than the broader class of vulnerabilities that make it possible. It’s tantamount to putting a new highway guardrail in place in response to a recent crash of a compact car but failing to safeguard larger types of vehicles.

Enter ZombieAgent, son of ShadowLeak One of the latest examples is a vulnera

Continue reading at the original source

Read Full Article at Ars Technica Security →