AI Pulse: Daily Digest — May 8, 2026
Summaries are AI-generated. Click through to read the original reporting.
Testimony and trial exhibits in the Musk v. Altman lawsuit are offering the most detailed public account yet of the chaotic week in November 2023 when OpenAI's board fired Sam Altman, citing concerns that he was "not consistently candid" with them. Former OpenAI CTO Mira Murati's deposition sheds new light on the internal power dynamics that led to — and ultimately reversed — the dramatic ouster.
Read more →

Court testimony in the Musk v. Altman trial has revealed that Elon Musk attempted to recruit OpenAI's founding team to create an AI division within Tesla, and was reportedly "prepared to do the for-profit, provided he would get control." The disclosure adds important context to Musk's lawsuit, which accuses OpenAI of abandoning its nonprofit mission in favor of profit — a mission Musk himself sought to shape on his own terms.
Read more →

The high-stakes Musk v. Altman trial may ultimately hinge on whether OpenAI's for-profit conversion enhances or undermines its founding mission of ensuring AI benefits humanity. Legal scrutiny of OpenAI's safety practices and governance decisions is intensifying as the trial progresses, with implications that could reshape how frontier AI labs are structured and held accountable.
Read more →

SpaceX is planning to invest at least $55 billion in a chip manufacturing facility dubbed "Terafab" in Austin, Texas, according to details from a public hearing notice filed in Grimes County. The massive bet would put Elon Musk's rocket company in direct competition with established semiconductor giants, signaling an aggressive push to control AI hardware infrastructure from the ground up.
Read more →

Chinese AI startup Moonshot AI has closed a $2 billion funding round at a $20 billion valuation, driven by surging demand for open-source AI models. The company's annualized recurring revenue surpassed $200 million in April, fueled by rapid growth in both paid subscriptions and API usage, underscoring the intensifying global competition in frontier AI development.
Read more →

Mozilla says Anthropic's AI-powered security tool Mythos has uncovered 271 vulnerabilities in Firefox, with researchers reporting the findings have "almost no false positives" — a remarkable signal-to-noise ratio for automated bug discovery. The organization says it is "completely bought in" on AI-assisted security testing, marking a significant shift in how major software organizations approach vulnerability management.
Read more →

The high-profile success of Anthropic's Mythos in finding critical software vulnerabilities appears to have nudged the Trump administration toward embracing AI safety testing — a stance it had previously resisted. Experts are cautiously welcoming the shift but warn of significant pitfalls in how such testing frameworks could be designed, implemented, or captured by industry interests.
Read more →

OpenAI is rolling out an optional safety feature for ChatGPT that lets adult users designate an emergency contact — a friend, family member, or caregiver — who will be notified if the system detects conversations involving self-harm or suicide. The "Trusted Contact" feature represents one of the more concrete steps any major AI company has taken to address the real-world mental health risks of emotionally engaged chatbot interactions.
Read more →

Anthropic is doubling usage limits for Claude Code on its Pro and Max tiers, citing expanded compute capacity enabled by a new enterprise deal with SpaceX — following similar agreements with Microsoft and Amazon. The move signals Anthropic's growing confidence in its infrastructure partnerships as demand for its coding-focused AI tools accelerates.
Read more →

Anthropic has introduced a new capability for its Claude Managed Agents that allows them to perform background processing during idle periods — loosely analogized as "dreaming" — enabling more sophisticated reasoning and task preparation between active sessions. The update accompanies the doubled five-hour usage limits for Claude Code Pro and Max subscribers, making the platform significantly more capable for extended agentic workflows.
Read more →

OpenAI has expanded its developer API with new voice intelligence capabilities, targeting use cases in customer service, education, and creator platforms. The additions give developers more tools to build sophisticated voice-driven applications on top of OpenAI's models, intensifying competition with other voice AI providers as the market for conversational AI infrastructure heats up.
Read more →

Perplexity has opened its Personal Computer feature — which brings AI agents directly to the Mac desktop — to all users after an earlier limited rollout. The product represents Perplexity's push beyond search into ambient, OS-level AI assistance, putting it in more direct competition with Apple Intelligence and other platform-native AI offerings.
Read more →

Snap has confirmed that its $400 million partnership with Perplexity — which would have embedded Perplexity's AI search engine directly into Snapchat — has come to an end. The collapse of the high-profile deal raises questions about the durability of big-ticket AI integrations between consumer platforms and standalone AI search providers.
Read more →

Google has announced the $99 Fitbit Air, a screenless fitness tracker with a metallic fabric clasp that invites clear comparisons to the Whoop wearable, alongside a new Google Health app intended to eventually replace the existing Fitbit app. The launch signals Google's intent to reposition its health wearable strategy around AI-driven coaching rather than display-centric devices.
Read more →

Spotify is positioning itself as the destination for AI-generated personal podcasts, enabling users to create audio content via tools like Codex or Claude Code and import it directly to the platform. A new command-line tool called "Save to Spotify" is designed specifically for AI agents, letting users push AI-summarized research and custom audio into their Spotify libraries alongside traditional content.
Read more →

Google's Gemma 4 open models now leverage speculative decoding — a technique that predicts future tokens to accelerate inference — delivering up to three times faster performance with no reported quality degradation. The improvement could make Gemma 4 significantly more competitive for on-device and cost-sensitive deployments where inference speed is a critical constraint.
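For readers curious how speculative decoding preserves quality while speeding things up, here is a toy sketch. Everything in it is hypothetical — the "models" are trivial next-integer rules standing in for real networks, and this is not Gemma's actual implementation — but it shows the core propose-and-verify loop: a cheap draft model guesses several tokens, and the expensive target model checks them all at once, keeping the agreeing prefix.

```python
# Toy illustration of greedy speculative decoding. The "models" below
# are trivial integer rules (hypothetical stand-ins, not real models);
# only the propose/verify control flow is the point.

def draft_next(prefix):
    """Cheap draft model: always predicts (last token + 1) mod 10."""
    return (prefix[-1] + 1) % 10 if prefix else 0

def target_next(prefix):
    """Expensive target model: same rule, except after a 4 it emits 7."""
    last = prefix[-1] if prefix else -1
    return 7 if last == 4 else (last + 1) % 10

def speculative_decode(prompt, n_tokens, k=4):
    """Greedily generate n_tokens, verifying k draft tokens per round."""
    seq = list(prompt)
    while len(seq) - len(prompt) < n_tokens:
        # 1) Draft model proposes k tokens autoregressively (cheap).
        ctx = list(seq)
        proposal = []
        for _ in range(k):
            tok = draft_next(ctx)
            proposal.append(tok)
            ctx.append(tok)
        # 2) Target model scores all k positions. In a real system this
        #    is a single batched forward pass instead of k sequential
        #    ones, which is where the speedup comes from.
        ctx = list(seq)
        for tok in proposal:
            correct = target_next(ctx)
            if tok == correct:       # accept the draft token
                ctx.append(tok)
            else:                    # reject: substitute the target's
                ctx.append(correct)  # token and end this round
                break
        seq = ctx
    return seq[len(prompt):len(prompt) + n_tokens]

print(speculative_decode([1], 6))
```

In this greedy setting the output is provably identical to decoding with the target model alone, which is why quality is preserved; production systems extend the idea with a probabilistic accept/reject rule that preserves the target model's sampling distribution.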
Read more →Summaries are AI-generated. Click through to read the original reporting.