HackerOne rolls out industry framework to support ‘good faith’ AI research

BIAS: Center
RELIABILITY: High


CyberScoop
20:59Z

Four years ago, the Department of Justice announced it would no longer seek criminal charges against independent and third-party security researchers for "good faith" security research under the Computer Fraud and Abuse Act. Now, a prominent bug bounty platform is attempting to build an industry framework that offers similar protections to researchers who study flaws in AI systems, covering fields such as AI safety and other work on unintended behaviors and outputs that can affect security outcomes. Ilona Cohen, chief legal and policy officer at HackerOne, told CyberScoop the Good Faith AI Research Safe Harbor is meant to build on previous efforts — like the DOJ policy change and the company's own Gold Standard Safe Harbor framework — that provide wider legal freedom for third-party security researchers.

Read Full Article at CyberScoop →