OSINT

What a Year of Living With AI Taught Me

BIAS: Right
RELIABILITY: High

Political Bias Rating

This rating indicates the source’s editorial stance on the political spectrum, based on analysis from Media Bias/Fact Check, AllSides, and Ad Fontes Media.

Far Left / Left: Progressive editorial perspective
Lean Left: Slightly progressive tendency
Center: Balanced, minimal editorial slant
Lean Right: Slightly conservative tendency
Right / Far Right: Conservative editorial perspective

Current source: Right. Stories with cross-spectrum coverage receive elevated prominence.

Reliability Rating

This rating measures the source’s factual accuracy, sourcing quality, and journalistic standards based on third-party fact-checking assessments.

Very High: Exceptional accuracy, rigorous sourcing
High: Strong factual reporting, minor issues rare
Mixed: Generally accurate but occasional concerns
Low: Frequent errors or misleading content
Very Low: Unreliable, significant factual issues

Current source: High. Higher reliability sources receive elevated weighting in story prioritization.

AEI
10:34Z

A year ago, I shared some reflections on how I was using AI and suggested that it’s helpful to think of these tools as competent interns working remotely: earnest and sophisticated, but still in need of direction and supervision. In 2025, those interns grew up. What surprised me wasn’t the pace of technical progress but how quickly AI stopped feeling novel and became ordinary.

I didn’t realize how deep the integration went until OpenAI’s year-in-review revealed I had logged over 3,500 chats, placing me in the top 0.1 percent of users, apparently edging out Sam Altman. My “tech stack” includes ChatGPT, Gemini, Claude, Manus, Perplexity, Google’s AI Studio, NotebookLM, ElevenLabs, MidJourney, Elicit, and Claude Code. With this context, I felt it might be useful to share

Continue reading at the original source

Read Full Article at AEI →