Personal blog. Opinions are my own. Always refer to official documentation.

AI Pulse: Daily Digest — April 10, 2026

Summaries are AI-generated. Click through to read the original reporting.

Ars Technica
Anthropic's Mythos Model Gets Psychiatric Evaluation — and a Restricted Release

Anthropic subjected its newest model, Mythos, to 20 hours of psychiatric evaluation, calling it "the most psychologically settled model we have trained to date." Despite that milestone, the company is limiting its public release, citing the model's alarming ability to discover exploitable security vulnerabilities in widely used software. Critics are questioning whether the cybersecurity rationale is a convenient cover for deeper commercial or reputational concerns at the frontier lab.

Read more →
Ars Technica
Trump-Appointed Judges Refuse to Block Blacklisting of Anthropic AI Technology

A federal appeals court denied Anthropic's emergency motion to stay a government blacklisting of its AI technology, dealing a significant legal blow to the company. The ruling, issued by Trump-appointed judges, leaves the restriction in place while the broader case proceeds. The decision adds to a mounting set of regulatory and legal pressures facing Anthropic at a critical moment in its growth.

Read more →
TechCrunch AI
Florida AG Launches Investigation into OpenAI Following Campus Shooting

Florida Attorney General James Uthmeier has opened a formal investigation into OpenAI, citing public safety and national security concerns after ChatGPT was allegedly used to plan a shooting at Florida State University that killed two people and injured five. The AG also raised fears that OpenAI's data and technology could be accessed by adversaries including the Chinese Communist Party. A victim's family has separately announced plans to sue OpenAI over the incident.

Read more →
The Verge AI
The AI Industry's Race for Profits Has Become Existential

A deep-dive analysis from The Verge examines whether the biggest AI companies — including OpenAI and Anthropic — can build sustainable, profitable businesses before running out of runway. Despite OpenAI's recent $122 billion funding round at an $852 billion valuation, internal cultural tensions and a looming monetization cliff raise serious questions about long-term viability. The piece frames the current moment as a make-or-break inflection point for the entire industry.

Read more →
Ars Technica
Meta's Superintelligence Lab Launches First Public Model, Muse Spark

Meta's newly formed Superintelligence Lab has released Muse Spark, its first publicly available model, which now powers the Meta AI app and website in the US with a broader rollout to WhatsApp, Instagram, Facebook, and Messenger planned in coming weeks. The launch sent the Meta AI app rocketing from No. 57 to No. 5 on the App Store almost overnight. Meta touts strong benchmark performance but openly acknowledges "performance gaps" in agentic and coding tasks.

Read more →
TechCrunch AI
ChatGPT Introduces $100/Month Pro Tier Focused on Codex Power Users

OpenAI has filled a long-requested gap in its subscription lineup by launching a $100-per-month Pro plan, sitting between the $20 Plus and $200 Pro tiers. The new plan offers five times more usage of its Codex coding tool compared to Plus and is designed for users who need extended, high-effort coding sessions. The move signals OpenAI's intent to more aggressively monetize its developer and power-user base.

Read more →
The Verge AI
Google Gemini Can Now Generate Interactive 3D Models and Simulations

Google has upgraded Gemini with the ability to produce interactive 3D models and real-time simulations directly within the chat interface, allowing users to rotate objects, adjust sliders, and input custom values. The feature represents a significant leap beyond text and image generation, moving toward dynamic, exploratory AI responses. Google is also rolling out a "notebooks" feature in Gemini that lets users organize files, past conversations, and custom instructions around specific projects.

Read more →
TechCrunch AI
Google and Intel Deepen AI Infrastructure Partnership to Co-Develop Custom Chips

Google and Intel have announced an expanded partnership to co-develop custom AI chips, a strategic move amid a growing global CPU shortage as demand surges across the industry. The collaboration positions both companies to reduce dependence on third-party silicon suppliers as AI infrastructure investment accelerates. The deal comes as Amazon's Andy Jassy also publicly challenged Intel in his annual shareholder letter while defending $200 billion in capital expenditure.

Read more →
TechCrunch AI
Amazon CEO Andy Jassy Defends $200B AI Capex Spend, Takes Aim at Rivals

In his annual shareholder letter, Amazon CEO Andy Jassy mounted an aggressive defense of the company's $200 billion capital expenditure plan, calling out competitors including Nvidia, Intel, and Starlink by name. Jassy framed the massive spending as essential to winning the AI infrastructure race and positioned AWS's custom silicon and cloud services as superior alternatives. The letter doubles as a strategic manifesto for Amazon's ambitions across AI, chips, and connectivity.

Read more →
TechCrunch AI
AWS Chief Defends Investing Billions in Both Anthropic and OpenAI Simultaneously

The head of AWS addressed the apparent conflict of interest in backing both Anthropic and OpenAI with multibillion-dollar investments, arguing that Amazon's long history of competing with its own partners gives it the cultural infrastructure to manage such tensions. The explanation comes as both AI companies are locked in fierce competition for enterprise customers and cloud compute contracts. Critics remain skeptical that neutrality is truly achievable when the stakes are this high.

Read more →
The Verge AI
YouTube Shorts Rolls Out AI Avatar Feature, Making Self-Deepfakes Easy for Creators

YouTube Shorts is launching a tool that lets creators generate realistic AI-powered clones of themselves for use in short-form videos, significantly lowering the barrier to synthetic media production. The rollout highlights the platform's increasingly contradictory stance on AI content — adding generative features while simultaneously struggling to contain AI slop, deepfake scams, and impersonation abuse. The feature is expected to intensify ongoing debates about consent, authenticity, and platform responsibility.

Read more →
Ars Technica
State Police Corporal Charged After Creating 3,000+ AI Deepfake Porn Images from License Photos

A state police corporal has been charged after allegedly using AI tools to generate more than 3,000 non-consensual pornographic deepfake images sourced from driver's license photographs he had access to through his official duties. The case is a stark illustration of how AI image generation tools are being weaponized by those in positions of institutional trust. It adds urgency to calls for stronger legal frameworks governing both AI-generated imagery and law enforcement data access.

Read more →
Ars Technica
First Conviction Under Take It Down Act: Ohio Man Kept Making AI Nudes After Arrest

An Ohio man has become the first person convicted under the Take It Down Act after using more than 100 AI tools to create non-consensual fake nude images of women and minors — and continuing to do so even after his initial arrest. The case marks a significant legal milestone in the US government's effort to criminalize AI-generated sexual exploitation material. Prosecutors say the defendant's brazen continuation of the behavior underscores the need for swift and severe enforcement.

Read more →
TechCrunch AI
Mercor Faces Lawsuits and Customer Exodus After Data Breach at $10B Valuation

AI hiring startup Mercor, valued at $10 billion, is reeling from the fallout of a significant data breach that has triggered multiple lawsuits and the departure of high-profile customers. The incident raises serious questions about data security practices at fast-scaling AI startups that handle sensitive personal and professional information. The episode is being closely watched as a cautionary tale about the risks of rapid growth outpacing security infrastructure.

Read more →
MIT Technology Review
Microsoft's Mustafa Suleyman: AI Progress Is Exponential and Won't Hit a Wall Soon

Microsoft AI CEO Mustafa Suleyman argues that human intuitions about linear progress make it nearly impossible to grasp how fast AI is actually advancing, and that the exponential trends driving the field show no signs of plateauing. Writing for MIT Technology Review, he pushes back against AI skeptics who predict an imminent slowdown, framing the current moment as the early stages of a transformation that will dwarf previous technological revolutions. The essay is a direct rebuttal to growing "AI winter" narratives circulating in the industry.

Read more →
TechCrunch AI
Sierra's Bret Taylor Says Natural Language Will Replace Click-Based Apps

Former Salesforce co-CEO turned AI startup founder Bret Taylor is making a bold claim: the era of traditional button-and-menu software interfaces is over, replaced by natural language interactions with autonomous agents. Sierra's new Ghostwriter tool lets users describe a task in plain language, after which the system automatically builds and deploys a specialized AI agent to execute it. Taylor frames this as a fundamental platform shift comparable to the move from desktop to mobile.

Read more →
TechCrunch AI
OpenAI Releases Child Safety Blueprint Amid Surge in AI-Enabled Exploitation

OpenAI has published a new Child Safety Blueprint outlining its policies and technical measures to combat the alarming rise in AI-generated child sexual abuse material and exploitation. The release comes amid a wave of high-profile cases and legislative pressure, including the Take It Down Act conviction, and positions OpenAI as proactively engaging with one of AI's most serious harms. Advocates welcomed the blueprint but noted that voluntary commitments must be backed by enforceable standards.

Read more →
Ars Technica
LinkedIn Sued Twice After Users Discover It Was Scanning Their Browser Extensions

LinkedIn is facing two separate lawsuits after it emerged that the platform was scanning users' installed browser extensions without clear disclosure, sparking a significant privacy backlash. The company has disputed the framing, claiming the controversy was manufactured by a browser extension developer that had been suspended for scraping LinkedIn data. Regardless of the competing narratives, the incident has reignited broader concerns about the extent of data collection practices by major social platforms.

Read more →
