Adobe is expanding its Firefly generative AI platform with new prompt-based video editing tools and deeper third-party model integrations that let users control video and image outputs through simple text instructions. The latest Firefly update introduces a dedicated video editor powered by text-driven prompts and a major expansion of external AI models connected to Adobe’s ecosystem, signaling a shift toward more open, multi-model workflows for creative professionals.
What’s new in Firefly: prompt-based video editing
Adobe is upgrading Firefly with text-driven video edits that let users modify clips using natural-language prompts instead of relying solely on manual timeline tools; the company positions this as a way to turn written directions into frame-by-frame adjustments across a sequence. In practice, a creator can describe the look they want, such as a “cinematic, high-contrast night scene” or “soft pastel color grading,” and Firefly interprets the prompt to adjust color, lighting, and style across the selected footage. The update frames this as a core evolution of Firefly’s generative capabilities rather than a side feature.
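To make the idea concrete, here is a minimal illustrative sketch, not Adobe’s actual implementation: it assumes a hypothetical pipeline in which a prompt-described “look” resolves to grading parameters that are applied uniformly to every frame of a clip. All names and values below are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical mapping from a prompt-described "look" to grading parameters.
# Names and values are invented for illustration, not Adobe's internals.
@dataclass
class Grade:
    contrast: float  # multiplier applied around each frame's mean luminance

LOOKS = {
    "cinematic, high-contrast night scene": Grade(contrast=1.4),
    "soft pastel color grading": Grade(contrast=0.85),
}

def apply_look(frames: list[list[float]], prompt: str) -> list[list[float]]:
    """Apply the same grade to every frame so the look stays consistent
    across the clip; frames are flat lists of luminance values in [0, 1]."""
    grade = LOOKS[prompt]
    graded = []
    for frame in frames:
        mean = sum(frame) / len(frame)
        graded.append([
            min(1.0, max(0.0, mean + (v - mean) * grade.contrast))
            for v in frame
        ])
    return graded

clip = [[0.2, 0.5, 0.8], [0.3, 0.5, 0.7]]  # two tiny stand-in frames
print(apply_look(clip, "cinematic, high-contrast night scene"))
```

The point of the sketch is only the shape of the operation: one written direction becomes one set of parameters, repeated consistently across every frame rather than keyed by hand.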
Alongside these controls, Adobe is rolling out a new video editor inside Firefly designed specifically for prompt-based video manipulation, enabling changes to styles, colors, or environments from text rather than keyframes or masks. By making prompt-based video editing a core capability that, as reported in “Adobe Firefly now supports prompt-based video editing, adds more third-party models,” sits directly alongside traditional tools, Adobe is placing Firefly in direct competition with other AI video-editing systems that promise fast, automated transformations of existing footage.
How prompt-driven video editing works for creators
The new workflow centers on a text field in Firefly’s interface where users enter instructions that drive automated edits to existing footage, with the system parsing each phrase to determine which visual attributes to adjust. According to “Adobe upgrades Firefly with text-driven video edits and new AI models,” a user can specify changes like “make the sky more dramatic,” “turn this office into a neon-lit nightclub,” or “replace the background with a mountain landscape,” and Firefly applies those edits across the clip while preserving motion and composition. That could significantly reduce the time non-experts spend learning complex color grading or compositing workflows.
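As a rough illustration of what such prompt parsing might involve, the following is a hedged sketch: a toy parser matches known phrases in an instruction to edit operations on named visual attributes, leaving the clip’s motion data untouched. Every identifier here is hypothetical; Adobe has not published how Firefly actually maps language to edits.

```python
# Toy prompt parser: matches known phrases to edit operations.
# Entirely hypothetical -- not Adobe's actual mapping or data model.

PHRASE_RULES = {
    "sky more dramatic": ("sky", "increase_contrast"),
    "neon-lit nightclub": ("environment", "restyle:neon_nightclub"),
    "mountain landscape": ("background", "replace:mountain_landscape"),
}

def parse_prompt(prompt: str) -> list[tuple[str, str]]:
    """Return (attribute, operation) pairs for every rule the prompt matches."""
    text = prompt.lower()
    return [op for phrase, op in PHRASE_RULES.items() if phrase in text]

def apply_edits(clip: dict, prompt: str) -> dict:
    """Record edits against clip attributes while leaving motion data alone,
    mirroring the described behavior of preserving motion and composition."""
    edited = dict(clip)
    edited["edits"] = clip.get("edits", []) + parse_prompt(prompt)
    return edited

clip = {"id": "shot_042", "motion": "tracked", "edits": []}
result = apply_edits(
    clip,
    "Make the sky more dramatic and replace the background with a mountain landscape",
)
print(result["edits"])
# [('sky', 'increase_contrast'), ('background', 'replace:mountain_landscape')]
```

A production system would rely on learned language and vision models rather than a phrase table, but the contract is the same: free-form text in, a structured list of attribute-level edits out.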
Examples highlighted for the new Firefly video editor show creators using prompts to apply style transfers, tweak scenes, or change objects without manually rotoscoping or masking each element, with the system handling tasks such as turning a daytime street into a rainy cyberpunk alley or swapping a product’s color across an entire ad spot. Coverage in “Revolutionary: Adobe Firefly Unleashes Prompt-Based Video Editing and Major Third-Party AI Model Integration” describes the feature as “revolutionary” for simplifying complex edits for non-experts, and that framing underscores the stakes for editors, marketers, and social media teams who need high-quality video variations without the budget or time for full post-production passes.
Expansion of third-party AI models inside Firefly
Beyond video, Adobe is expanding Firefly by adding more third-party models to the platform, allowing users to tap non-Adobe generative systems from within the same interface they already use for images and text effects. The update described in “Adobe Firefly now supports prompt-based video editing, adds more third-party models” emphasizes that Firefly is no longer limited to a single, proprietary engine but can route prompts to external providers, broadening the range of styles, resolutions, and content types available to creative teams that want both Adobe’s tuned models and specialized third-party options.
In addition to Adobe’s own generative systems, the company is integrating new AI models with Firefly so that users can choose from a wider catalog of engines for tasks like image generation, style transfer, or video enhancement, all orchestrated through the same prompt-based interface. Reporting in “Adobe updates Firefly AI with new video editor and third-party models,” along with the characterization of a “major third-party AI model integration,” highlights that Firefly is being connected to external providers rather than kept fully closed, a move that positions Adobe as a hub where agencies, studios, and freelancers can mix and match AI capabilities without constantly switching tools or file formats.
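The hub idea can be sketched abstractly. The hypothetical router below, with invented provider names and interfaces, shows the general pattern implied by the reporting: one prompt interface, multiple interchangeable engines registered behind it. It is not Adobe’s API.

```python
from typing import Protocol

class GenerativeModel(Protocol):
    """Common interface every registered engine must satisfy (hypothetical)."""
    def generate(self, prompt: str) -> str: ...

class FireflyImageModel:
    """Stand-in for Adobe's own tuned engine (illustrative only)."""
    def generate(self, prompt: str) -> str:
        return f"[firefly-image] {prompt}"

class PartnerVideoModel:
    """Stand-in for a specialized third-party engine (illustrative only)."""
    def generate(self, prompt: str) -> str:
        return f"[partner-video] {prompt}"

class ModelHub:
    """Routes one prompt interface to whichever registered engine the user picks."""
    def __init__(self) -> None:
        self._models: dict[str, GenerativeModel] = {}

    def register(self, name: str, model: GenerativeModel) -> None:
        self._models[name] = model

    def run(self, name: str, prompt: str) -> str:
        return self._models[name].generate(prompt)

hub = ModelHub()
hub.register("firefly-image", FireflyImageModel())
hub.register("partner-video", PartnerVideoModel())
print(hub.run("partner-video", "turn this office into a neon-lit nightclub"))
```

The design choice the pattern captures is that users keep one prompt surface and one asset pipeline while the engine behind a given task remains swappable.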
What’s changed from earlier versions of Firefly
Earlier versions of Firefly focused primarily on image and text effects, with generative fill, style transfer, and text-based image creation forming the core of the experience, while video support was limited to more traditional workflows in tools like Premiere Pro and After Effects. The arrival of text-driven video edits, as detailed in “Adobe upgrades Firefly with text-driven video edits and new AI models,” marks a clear break from that image-first focus: video clips can now be manipulated with the same kind of natural-language prompts that previously applied only to stills, effectively bringing motion content into the same generative pipeline.
At the same time, the addition of a dedicated video editor and third-party models shifts Firefly from a single-model, image-centric system to a broader multimedia platform that treats video, images, and potentially other formats as peers. Taken together, “Adobe updates Firefly AI with new video editor and third-party models” and the broader move to support prompt-based video editing and more third-party models show Adobe explicitly expanding Firefly’s scope beyond its initial generative-art positioning, with implications for how the company competes with standalone AI tools that started with video or multimodal capabilities from day one.
Impact on Adobe’s ecosystem and creative workflows
Within Adobe’s ecosystem, bringing text-driven video edits into Firefly is designed to make advanced editing faster for Creative Cloud users who rely on Premiere Pro, After Effects, and related tools, since assets generated or transformed in Firefly can flow into those applications for final polish. “Adobe upgrades Firefly with text-driven video edits and new AI models” underscores that the company is targeting scenarios where editors need to quickly test multiple looks for a campaign, social clip, or explainer video; the ability to iterate through prompts rather than manual color and compositing passes could compress review cycles and reduce the number of handoffs between departments.
The new Firefly video editor and third-party models also have the potential to change collaboration between designers, video editors, and non-technical stakeholders, since a marketing manager or client can describe desired changes in plain language and see Firefly generate a preview that professionals can then refine. Coverage of the “major third-party AI model integration” frames Adobe’s strategy as turning Firefly into a hub that orchestrates multiple AI engines instead of competing with them in isolation, and that hub model could make Adobe’s tools more central to how agencies and in-house teams coordinate creative work, maintain brand consistency across formats, and decide which AI models suit different projects.