The Meta Mango AI model is in development as a next-generation system for AI-generated images and video, with reports pointing to a first-half 2026 target. The project lands as Meta expands consumer tools like the Vibes AI video feed and deepens its generative media strategy.
What happened
Meta is developing a new artificial intelligence model code-named “Mango” that is designed to generate images and video, according to multiple reports published December 19, 2025. The same reporting describes a separate next-generation text model, “Avocado,” aimed at stronger performance in areas such as coding.
Meta has not publicly confirmed the “Mango” or “Avocado” codenames in a formal product announcement. Still, the timing fits a broader push by Meta to make AI media creation easier inside its own ecosystem—especially as rivals move quickly to ship higher-quality video generation tools.
Why it matters
AI-generated video is moving from a novelty to a format people actively scroll, remix, and share. Platforms that make creation simple—and keep trust and safety protections strong—could win attention, creators, and ad dollars.
For Meta, the stakes are high:
- Consumer engagement: AI video and image tools can keep users creating inside Meta apps instead of using third-party generators.
- Creator workflows: Better AI video can power new creator formats and editing features.
- Advertising: Generative media can reduce production cost and speed up creative testing for campaigns.
- Competition: The AI video race is intensifying across major tech companies and creator software providers.
What is the “Mango” model expected to do?
Based on reporting, the Meta Mango AI model is being developed specifically for image and video generation. While technical specs are not public, the most common “next-gen video model” goals across the industry include the following (see the illustrative sketch after this list):
- Higher fidelity visuals: sharper frames, better lighting, and more realistic textures
- Temporal consistency: fewer glitches where objects change shape, flicker, or “melt” between frames
- Better prompt control: more reliable responses to instructions like camera motion, style, mood, and scene constraints
- Editing and transformation: turning an existing clip into a new style, or changing elements (background, outfit, objects) without rebuilding from scratch
- Longer, more stable clips: moving beyond short snippets toward longer sequences that still look coherent
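Meta has not published an interface for Mango, so the sketch below is purely illustrative; every name in it, including `VideoRequest` and its fields, is hypothetical. Assuming a structured request format, it shows what “better prompt control” means in practice: separating the scene description from explicit camera, style, mood, and duration parameters so outputs respond more predictably.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical illustration only: Meta has not published an API for Mango.
# The point is the separation "prompt control" implies: scene description
# vs. explicit camera/style/mood/length constraints.

@dataclass
class VideoRequest:
    prompt: str                     # what the scene contains
    style: str = "photorealistic"   # visual style constraint
    camera_motion: str = "static"   # e.g. "slow pan left", "dolly in"
    mood: str = "neutral"           # lighting / tone hint
    duration_s: float = 4.0         # longer clips stress temporal consistency
    seed: Optional[int] = None      # fixed seed -> reproducible drafts to iterate on

request = VideoRequest(
    prompt="a sailboat crossing a harbor at dusk",
    camera_motion="slow dolly in",
    mood="warm",
    duration_s=6.0,
    seed=42,
)
print(request)
```

Holding the seed fixed while varying one parameter at a time is how this kind of structured control turns into a usable iteration workflow.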
If Meta can deliver meaningful improvements in these areas, Mango could become a foundation for consumer tools (like feeds and remix features), creator utilities (like editing and templates), and business products (like ad creative generation).
How “Vibes” hints at Meta’s product strategy
Meta has already rolled out Vibes, a feature designed for discovering and remixing AI-generated videos inside Meta’s AI experiences. The product concept is simple: make AI video feel “native” to social platforms, not separate from them.
Vibes’ approach—watch a clip, view the prompt, remix it, add changes, and share—creates a loop that can spread AI-made media quickly. If Mango improves quality and control, it could upgrade that loop by making AI clips look better, respond more predictably to prompts, and support more complex edits.
What users typically do in an AI video feed
- Browse short AI clips like any other social feed
- Reuse prompts (or modify them) to create variations
- Apply styles, music, or transformations
- Share outward into formats that already have distribution (short-video feeds and stories)
This “feed-first” design matters because it turns model capability into daily usage, especially if AI creation is integrated into widely used social apps.
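The remix loop described above can be pictured as a simple lineage structure. This is a minimal sketch, not Meta’s actual schema; `Clip` and `remix` are hypothetical names. Each clip keeps its prompt and a pointer to the clip it was remixed from, so variations form a traceable chain.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a remix loop's core data model (not Meta's schema):
# every remix records its source clip, so prompts stay visible and reusable.

@dataclass
class Clip:
    clip_id: str
    prompt: str
    parent_id: Optional[str] = None  # None for an original creation

def remix(parent: Clip, new_id: str, prompt_edit: str) -> Clip:
    """Create a variation: reuse the parent's prompt with a modification."""
    return Clip(
        clip_id=new_id,
        prompt=f"{parent.prompt}, {prompt_edit}",
        parent_id=parent.clip_id,
    )

original = Clip("c1", "a city street in the rain")
variant = remix(original, "c2", "neon cyberpunk style")
print(variant.prompt)     # -> "a city street in the rain, neon cyberpunk style"
print(variant.parent_id)  # -> "c1"
```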
How Meta’s earlier research foreshadows Mango
Meta has previously published research on generative video systems (including video generation and editing). That research direction signals what Meta likely wants long-term: models that can generate video, edit video with instructions, and support richer creative workflows rather than one-off clips.
Research disclosures in this space often emphasize:
- Instruction-based editing: change the scene while preserving key identity elements
- Higher-resolution output: fewer artifacts and more stable details
- Multimodal inputs: combining text prompts with images or other references for more control
- Audio alignment (in research stacks): generating or syncing audio to video
Mango may represent a practical step toward bringing those research ambitions into more scaled consumer and business experiences.
Where Mango fits in Meta’s 2026 AI roadmap
Here is a snapshot of what has been reported and what has already launched.
| Item | What it is | Main focus | Reported / known timing |
| --- | --- | --- | --- |
| Mango | AI model under development | Image + video generation | Targeted for first half of 2026 (reported) |
| Avocado | AI model under development | Next-gen text (including coding improvements) | Targeted for first half of 2026 (reported) |
| Vibes | Product feature | Discover/create/remix AI videos | Launched September 25, 2025 |
| AI content labeling tools | Policy + product approach | Transparency for AI-generated or AI-edited media | Expanded during 2024 and ongoing |
The competition: why Meta is moving fast
Meta is building in a market where top AI labs and major software companies are heavily investing in video generation and editing. The overall trend is toward:
- Consumer creation apps for making and sharing AI clips
- Developer platforms for integrating generation into products
- Creator software workflows where AI sits inside editing tools
A simplified comparison shows why Mango’s positioning matters.
| Company / ecosystem | Public direction | Typical strengths |
| --- | --- | --- |
| Major AI labs | Pushing video model quality and capability | Rapid research iteration, strong model performance |
| Big tech platforms | Integrating AI creation into social distribution | Audience reach, sharing loops, product surfaces |
| Creator software providers | Embedding AI into production workflows | Editing features, templates, commercial pipelines |
Meta’s advantage is distribution. If Mango delivers high-quality output and Meta integrates it seamlessly into its apps, users may create and share AI media without leaving the platform.
Trust, safety, and labeling: the hard constraint on AI media
As AI image and video tools improve, so do risks:
- Deepfakes and impersonation
- Misleading political or crisis content
- Non-consensual or harmful edits
- Scams using synthetic media
- Confusion about what is real
Meta has publicly discussed expanding labeling for AI-generated and AI-edited content and aligning with industry efforts around content provenance and metadata.
In practice, major platforms tend to combine (see the toy sketch after this list):
- user disclosures,
- automated signals (where available),
- policy enforcement, and
- friction for repeated violators.
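To make the combination concrete, here is a toy decision rule over those signals. It is an assumption-laden sketch, not Meta’s enforcement logic: the function name, inputs, and thresholds are all hypothetical, and real policies weigh far more context.

```python
# Illustrative only: a toy rule combining the signals listed above.
# Names and thresholds are assumptions, not Meta's actual policy.

def label_decision(user_disclosed: bool,
                   provenance_metadata: bool,
                   detector_score: float,
                   prior_violations: int) -> str:
    """Return a coarse action for a piece of uploaded media."""
    if user_disclosed or provenance_metadata:
        # Explicit disclosure or embedded provenance is treated as authoritative
        return "label as AI-generated"
    if detector_score > 0.9:
        # Automated signal alone: label, with added friction for repeat violators
        return "label + review" if prior_violations > 0 else "label as AI-generated"
    return "no label"

print(label_decision(user_disclosed=False, provenance_metadata=True,
                     detector_score=0.2, prior_violations=0))
```

The design point is the order of precedence: disclosure and provenance metadata come first, while automated detection alone only triggers a label plus extra scrutiny for repeat offenders.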
For Mango and AI video feeds, this will likely be a decisive test. Higher realism increases creative power—but also raises the bar for transparency and enforcement so AI content does not erode trust.
What to watch next
If Meta remains on track, the biggest signals to monitor between now and the first half of 2026 include:
1) Product expansion beyond early AI surfaces
Will Meta push AI video creation deeper into mainstream short-video experiences, not only inside AI-specific areas?
2) Real improvements users can feel
The average user would notice:
- fewer weird frame glitches,
- better faces and hands,
- more consistent objects,
- better camera motion control,
- editing tools that “just work.”
3) Business-grade creative tools
If Mango becomes a foundation for advertising and brand use, Meta will likely emphasize:
- faster creative iteration,
- style consistency controls,
- safer defaults and brand protections,
- and clearer licensing and labeling rules.
4) Transparency systems that scale
The more AI media floods feeds, the more important it becomes to:
- label content clearly,
- reduce incentives for deception,
- and respond quickly to harmful synthetic content.
Final Thoughts: Meta is building for an AI-first media era
The reported development of the Meta Mango AI model suggests Meta is planning a major upgrade to its image and video generation capabilities, with an apparent goal of rolling it out in 2026. Combined with the launch of Vibes and continued work on labeling and safety, the direction is consistent: Meta wants AI creation to be a default part of how people make and share media inside its apps.
The outcome will depend on three things: quality, usability, and trust. If Mango can meaningfully improve video realism and control—while Meta keeps transparency and safety strong—it could accelerate a shift where AI-made images and videos become a normal, everyday format across social platforms.