Meta rolls out Muse Spark, a new multimodal AI assistant across its apps and AI glasses

Meta has begun deploying Muse Spark, a new multimodal artificial intelligence model. It already powers the standalone Meta AI app and the meta.ai website, and it is scheduled to appear inside WhatsApp, Instagram, Facebook, Messenger and the company’s AI-enabled glasses in the coming weeks.

Overview

Meta described Muse Spark as its “most powerful model yet” and the first in a new series of large language models developed by a team it calls Meta Superintelligence Labs. The company says the lab rebuilt its AI stack over the last nine months to accelerate development.

Muse Spark is presented as a product-first model: relatively small and fast but capable of “complex reasoning and multimodal tasks,” including image understanding, parallel sub-agent workflows and specialized modes for tasks such as shopping and visual coding.

What Muse Spark can do

  • Multimodal understanding: Users can upload or capture images for the assistant to interpret, enabling visual Q&A and image-aware responses.
  • Parallel sub-agents: The model can spin up multiple sub-agents to tackle different parts of a problem simultaneously, which Meta says improves handling of complex requests.
  • Task modes: New modes tailor the assistant’s behavior for different uses—examples Meta cited include a shopping mode that surfaces creator and brand content and tools to generate simple websites or mini-games.
  • Developer access: Meta plans a private API preview for select partners, signaling an eventual push to position Muse Spark competitively against developer-facing models from rivals.
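Meta has not published Muse Spark’s actual sub-agent interface, so the following is only a minimal sketch of what fanning a decomposed request out to parallel sub-agents could look like in principle. The function names and the example task decomposition are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subagent(task: str) -> str:
    """Hypothetical sub-agent: stands in for a call to a
    specialized model or tool handling one facet of a request."""
    return f"result for {task!r}"

def answer(request_parts: list[str]) -> list[str]:
    """Fan the decomposed request out to sub-agents in parallel
    and collect their results in the original order."""
    with ThreadPoolExecutor(max_workers=len(request_parts)) as pool:
        return list(pool.map(run_subagent, request_parts))

parts = ["identify product in photo", "find price range", "draft reply"]
print(answer(parts))
```

The point of the pattern, as Meta describes it, is latency: independent parts of a complex request need not wait on one another before a final answer is assembled.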

Integration and business implications

The most immediate impact for users will be how often they encounter Meta’s assistant. Muse Spark-backed experiences will be woven into feeds, chats and camera views across Meta’s Family of Apps and into its Ray-Ban and other AI-enabled glasses.

That integration could change how people search for information, discover content and shop inside Meta’s ecosystem. In particular, a shopping mode that draws on creators’ and brands’ content could shift more e-commerce activity into Meta’s properties and create new touchpoints for advertisers and creators. For Meta’s advertising-driven business model, a more capable assistant that keeps users engaged longer or surfaces products more effectively could improve personalization and ad performance—metrics closely watched by investors.

How Muse Spark fits with Meta’s previous work

Meta has previously invested heavily in the Llama series of open models, which were released under permissive licenses to spur external development. Muse Spark, by contrast, is pitched as a proprietary, product-integrated line rather than an open general-purpose model. Meta said Muse Spark is an early data point in a series and that larger models remain in development.

Privacy, safety and regulatory questions

The rollout revives familiar questions about privacy, safety and oversight—particularly given Meta’s regulatory history in Europe. The company has not spelled out exactly what data Muse Spark will use across WhatsApp, Instagram, Facebook and Messenger, or how content will be treated when crossing services.

WhatsApp messages are end-to-end encrypted by default, meaning only the sender and recipient can read their contents. Observers will scrutinize whether AI features operate on-device or rely on server-side processing, and how that affects the confidentiality of encrypted messages.

European regulators have already pressed Meta over data transfers and handling; a high-profile 2023 decision from Ireland’s Data Protection Commission and the European Data Protection Board led to a €1.2 billion fine and restrictions on transatlantic data flows. Those precedents make the privacy posture of new cross-app AI capabilities a likely target for scrutiny.

Health and misinformation risks

Meta said it consulted a team of physicians to help Muse Spark provide useful information on common health questions, but it did not identify those clinicians or specify the model’s limits for medical guidance. Health-related outputs from large language models can be incomplete or outdated; absent clear guardrails and disclaimers, users may misinterpret AI-generated information as personalized medical advice.

More powerful multimodal models embedded into social and messaging apps also raise the risk of more convincing synthetic media and misinformation. Meta’s announcement emphasized rapid engineering progress and ambitions for “personal superintelligence,” but offered limited public data on error rates, red-teaming results or third-party audits.

What remains unclear

Many practical aspects of Muse Spark’s behavior and scope remain unspecified. Meta has not detailed:

  • Exactly how the model will operate inside encrypted messaging or what data it will retain or access across services.
  • Quantitative measures of model performance, hallucination rates or results from safety testing and independent audits.
  • The extent to which recommendations and shopping prompts will be algorithmically promoted inside feeds and chats.

What’s next

Meta says Muse Spark will roll out to more users in the coming weeks and be offered via a private API preview to select partners. As the model appears in more feeds, chats and camera frames, its effects on convenience, commerce and regulatory attention will become clearer.

For now, users are being asked to accept a new proprietary model increasingly mediating their experience across Meta’s apps and devices. How that trade-off between utility, commercial impact and privacy plays out will be a key question for users, advertisers and regulators alike in the months ahead.

Tags: #meta, #ai, #musespark, #privacy