Emulsio 5 is here with AI slow motion, video upscaling, and improved stabilization.

We're shipping Emulsio 5 this week. You can find out more on its webpage and on the App Store. The main new features are AI-powered slo-mo, upscaling, and an improved multi-frame stabilization technique.

Emulsio, née Movie Stiller, has been around for quite a long time. We first shipped it in 2011, fourteen years ago, when the iPhone's video capabilities were simple, sensors and compute were limited, and video stabilization was very much needed. Nowadays, video has become an essential part of communication, whether between people, for media, or for marketing. This 5th major iteration of Emulsio brings new innovations, which you can review below.

AI-powered slow motion

As its name suggests, this feature lets you insert short slow-motion sequences, generated by on-device AI, into existing videos. But why is that needed in the first place? Nowadays, content is often shot on small devices: an iPhone, iPad, vlogging camera, action cam, or drone. They share a common limitation: a small sensor. This makes them imperfect in poorly lit situations, and high-speed recording makes the problem even worse, since a shorter exposure per frame means less light, and thus more noise. Moreover, slow-motion shots are a great creative tool by themselves, but they require knowing in advance that a high-speed action will take place, which you can't always predict. Finally, shooting everything at high speed takes enormous space on your device and can decrease image quality, as discussed above. For all these reasons, it absolutely makes sense to have a post-processing tool for adding slo-mos.

Generated slo-mos have progressed a lot lately with the arrival of AI. This is a form of frame interpolation: a learned synthesis technique that uses both motion and visual features of the video being processed. Generation artefacts are much reduced compared to previous non-AI techniques (though they can still occur). Although cloud-based operation could theoretically bring additional processing power, we wanted to run these AI models on the device, as we think this improves privacy and removes the dependency on online resources. The challenge was to fit them within the limited RAM and compute of an iPhone while still bringing useful creative power to users.
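To see what AI interpolation improves on, here is the naive baseline it replaces: a plain cross-fade between neighboring frames. This sketch is purely illustrative (frames are modeled as lists of pixel intensities, and the function names are ours, not Emulsio's); a learned model would instead predict motion and synthesize detail rather than blend.

```python
def blend_frames(frame_a, frame_b, t):
    """Linearly blend two frames (lists of pixel intensities) at time t in [0, 1]."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

def interpolate_slowmo(frames, factor):
    """Insert (factor - 1) blended frames between each pair of source frames."""
    out = []
    for a, b in zip(frames, frames[1:]):
        for i in range(factor):
            out.append(blend_frames(a, b, i / factor))
    out.append(frames[-1])
    return out

# 2x slow motion: one synthetic frame between each original pair
frames = [[0.0, 0.0], [1.0, 1.0]]
print(interpolate_slowmo(frames, 2))  # [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
```

A cross-fade like this ghosts badly under fast motion, which is precisely where motion-aware AI interpolation makes the difference.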

A video processing platform

In its early days, the challenge for Emulsio was to enable processing and bring value to creators on fairly limited devices, in terms of both compute and sensor quality. It was mostly a technical achievement in an emerging field. Nowadays, iPhone, iPad, and Mac are extremely capable video recording and editing devices, and they have recently been extended to handle HDR encoding and playback on EDR-capable displays. Moreover, with Apple Silicon and Core ML they have gained specific AI capabilities, enabling new use cases for running neural networks locally across a variety of domains.

Noticing all these enhancements, we opted to grow Emulsio into a platform for processing videos. Not standard editing, which has been there for ages and for which many great apps exist, but the video processing and enhancement side of it. To achieve this, we have transformed Emulsio's toolchain into a GPU-accelerated, frame-by-frame processing pipeline. Each frame is processed with access to full-quality previous and future frames as context, which enables not only standard CPU and GPU image processing, but also AI image processing with custom high-performance models. Slow motion, FPS increase, multi-frame stabilization, and upscaling are concrete examples of this, and it sets the ground for more capabilities in the future. Emulsio's new UI reflects that as well, stacking the currently chosen capabilities with room to grow.
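The idea of frame-by-frame processing with previous/future context can be sketched as a sliding window over the clip. This is a simplified mental model, not Emulsio's actual implementation; the stage here (a temporal average, i.e. a crude denoiser) and all names are illustrative.

```python
def windowed_frames(frames, before=1, after=1):
    """Yield (context, current) pairs where context holds up to `before` past
    and `after` future frames around the current one, clamped at clip edges."""
    frames = list(frames)
    for i in range(len(frames)):
        lo = max(0, i - before)
        hi = min(len(frames), i + after + 1)
        yield frames[lo:hi], frames[i]

def run_pipeline(frames, stages):
    """Apply each stage (a function of context and current frame) in sequence."""
    for stage in stages:
        frames = [stage(ctx, f) for ctx, f in windowed_frames(frames)]
    return frames

# Example stage: average over the temporal window (a crude denoiser)
denoise = lambda ctx, f: sum(ctx) / len(ctx)
print(run_pipeline([0.0, 3.0, 6.0], [denoise]))  # [1.5, 3.0, 4.5]
```

Stabilization, interpolation, and upscaling all fit this shape: each is a stage that looks at a small temporal neighborhood of full-quality frames, which is why they can be stacked in one pipeline.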

This new processing pipeline works identically on iPhone/iPad and Mac, even though we noticed not all devices are equal. Indeed, AI video processing requires a lot of RAM, and we can deliver the best performance and resolution on devices with at least 6GB of it. This may be related to why Apple Intelligence is also limited to the most recent devices: it is less about compute power than about having sufficient memory for storing intermediate inference data.
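A rough back-of-envelope calculation shows why memory, not compute, becomes the limit. The numbers below are our own illustrative assumptions (RGBA half-precision frames, a handful of activation-sized buffers), not measured figures from Emulsio:

```python
def frame_bytes(width, height, channels=4, bytes_per_sample=2):
    """Size of one uncompressed frame, e.g. RGBA in 16-bit floats."""
    return width * height * channels * bytes_per_sample

# A single 4K frame in half-precision RGBA is already ~66 MB:
one_frame = frame_bytes(3840, 2160)
# Interpolating between two frames with a few intermediate feature maps
# (say 2 inputs + 6 activation-sized buffers) exceeds half a gigabyte:
working_set = one_frame * 8
print(one_frame / 1e6, working_set / 1e9)
```

Add the OS, the app itself, and encoder/decoder buffers on top of a working set like that, and the 6GB threshold stops looking arbitrary.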

The challenges of iOS 26 / macOS 26

The latest Apple OSes bring important changes in terms of user interface. Emulsio has used a floating interface for a long time, as it has always been our intention to put content first. As such, Apple's Liquid Glass design is a great candidate for adoption in the short term (we're on it), and it makes total sense for the way the app operates. Emulsio feels at home with Liquid Glass. More on this very soon.

But there's more to these new OSes: they also bring new video editing capabilities for developers. We are currently working to integrate Apple's new video processing APIs directly into Emulsio, and so far we've seen that their slo-mo generation offers faster frame generation than the one we currently provide (custom Core ML models). Image quality under fast motion is yet to be assessed for each technique, though. Our plan is to offer Emulsio users every available option for synthesizing slo-mos, documenting the pros and cons of each. More on this soon.

Development choices

As said earlier, Emulsio has been around for quite a long time and carries a lot of legacy code. We nonetheless wanted to modernize its user interface with SwiftUI, both to offer a refreshed, future-proof experience and to target all devices (iPhone, iPad, Mac) with a common code base. On iOS it is a standard app, and on the Mac it is a Mac Catalyst app. Older code that is still needed has been bridged so it is easily usable from Swift/SwiftUI, and Mac-specific UI was added to account for platform features like windows.

Beyond UI elements, video processing is pleasantly unified between Mac Catalyst and iOS, up to and including the display of HDR content on EDR screens, which Emulsio uses. This let us focus on other aspects of the app, as compatibility was mostly a non-issue. We are not sure that would have applied to a standard (non-Catalyst) Mac app, as the Mac has a long history of video processing that predates iOS, with its own conventions.

Business model

Pricing and the limitations of the free version are similar to what we generally offer. Emulsio can be fully tried for free, with a watermark inserted in the generated outputs. This lets you fully experience the capabilities of the app and helps you decide whether it provides value to your creative toolbox.

The app can be purchased through either a one-time payment or a subscription, and the difference between these options is how future upgrades are handled. With the subscription, all future upgrades are included, while the one-time purchase will require a paid upgrade at some point. This gives you freedom over whether and when you want to upgrade, and even lets you review new features before you commit. Depending on the features we offer next and the development cycle they require, it is quite likely that a number of them will initially be included in both options, with availability diverging at a later stage.