I challenged myself to a weekend project to try out and learn Unreal Engine’s Sequencer. I wanted to take a moment to share the result and some thoughts.

In a couple of hours, I had worked through some basic UE4 tutorials and gotten familiar with the editor interface and Sequencer. I felt ready to put into practice what I had learned and give Unreal’s Sequencer a real test run. The goal was to put a cinematic together in one weekend using off-the-shelf assets so I could focus on shot work in Sequencer. In this post-mortem, I’ll be analyzing UE4 purely from a linear/cinematics workflow POV, as I’m mostly interested in its application for animation and VFX. Have a look at the end result if you haven’t yet, and scroll down for the post-mortem.

Setup

Getting started was really easy. I just pressed Cinematics → “Add Master Sequence” on the toolbar and was ready to start working. I remember getting tripped up when first learning to do the same in Unity: creating a GameObject, a PlayableDirector, and a Timeline, then wiring them all together.

User Interface

Overall, the editor interface felt pretty familiar to other packages and I hit the ground running after the first tutorial. Similarly, the main Sequencer UI was simple to grasp as it looked and worked like any other non-linear editing interface that you may have used.

That simplicity only runs surface-deep, though. Double-click a shot to jump into it, animate or tweak any elements in the shot, pop back out, and trim the shot in the master sequence without losing any of the animation in case you change your mind.

When it came to keyframing, it felt a lot like Maya. You set keys, use autokey, and move keys directly in Sequencer, or open up the curve editor if you want to tweak the curves.

The viewport adapts for Sequencer playback. It frames the viewport to match your cinematic camera format, shows data like your frame range and current frame, and includes features like overlays that you can toggle to show a 3x3 grid or title-safe and action-safe reticles.

Other Features I Like

There were a few other features I came across that caught my attention. Some of them are very simple, but they made it clear to me that the tools were designed or refined by folks who understood the workflow.

  1. Default names for new shots are numbered by 10s and include a take number suffix. Numbering by 10s let me insert new shots without breaking their sequential ordering, which could confuse anyone I share the project with (shot8 comes before shot3?).
  2. It’s easy to create a new take for a shot. I can try out an alternate camera and switch between the two takes whenever I want. Alternatively, I could have multiple cameras in a shot and define which one is the shot’s main camera.
  3. Objects can be either possessed or spawned in a shot. It’s a concept I didn’t need to think about at first, and everything intuitively worked. Possessing, the common case, gives the shot control over an object in the scene. If you move an object bound to a shot, keyed or not, the shot tracks the change, and while you’re in the shot or playing it back, the shot’s override wins. Spawning adds an object to a shot so it only exists in that shot. It works great for keeping the outliner uncluttered and avoids the dance of adding an object to the scene, disabling it, then enabling it only for the shot it’s meant for (e.g. particle systems or fill/rim lights).
  4. There’s a concept of subscenes that, from what I’ve gathered, is essentially layered overrides of a scene. This would allow multiple artists or departments to work on the same shot. I think Oats Studios worked similarly in Unity for their ADAM project, but it wasn’t obvious to me how to set that up or work that way in Unity.
  5. The outliner adds a column listing what shots an object is overridden in. I didn’t need it for my work, but it looks promising that data like that is being tracked as it would be valuable if you used Sequencer for previs and then wanted to pass data downstream about what may have been animated or cheated in a shot.

Problems

My original dream was to do everything in UE4. Given the quality I was expecting from a cinematic made on a busy weekend, that seemed reasonable, and the audio editing and even time-warping features I found in Sequencer made it look possible. Imagine not even having to render out a movie from my editing software. Unfortunately, it didn’t work. I couldn’t get the movie to render with reliable audio, even after using the -deterministicaudio flag that UE4 warned would be required. The sound would drop out at different points. To make things worse, I couldn’t export the audio that I had cut together at all. I had to redo it in my editing software.
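For reference, the legacy “Render Movie” export can also be driven from the command line, which is where the -deterministicaudio flag gets passed. A rough sketch of what that invocation looks like on Windows (the engine path, project, map, and sequence names here are placeholders, not my actual setup):

```shell
rem Batch render a Level Sequence with UE 4.25's legacy movie capture.
rem All paths and asset names below are hypothetical -- swap in your own.
"C:\Program Files\Epic Games\UE_4.25\Engine\Binaries\Win64\UE4Editor.exe" ^
  "C:\Projects\MyCinematic\MyCinematic.uproject" ^
  MyMap -game ^
  -MovieSceneCaptureType="/Script/MovieSceneCapture.AutomatedLevelSequenceCapture" ^
  -LevelSequence="/Game/Cinematics/MasterSequence" ^
  -ResX=1920 -ResY=1080 -MovieFrameRate=24 ^
  -NoLoadingScreen -deterministicaudio
```

Even with the flag in place, my renders still dropped audio at random points, so treat it as necessary but not sufficient.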

Time-warping also let me down. The time-slowing effect at the end rendered out correctly when I rendered a movie format, but not for image sequences. I had to redo that in the editor as well.

My rendering confusion and frustration was compounded by the different rendering approaches that UE offered. I used UE 4.25, which I later learned shipped the first experimental release of the “Movie Render Queue,” a.k.a. “High-Quality Media Export” (hopefully they get the branding sorted out before release). I flip-flopped between that and the older “Render Movie” UI in Sequencer. I came to the conclusion that the “Render Movie” approach would not produce acceptable anti-aliasing and motion blur, and finally committed to “Movie Render Queue,” which specifically addresses those issues but came with fewer features and a few bugs.

After sorting out the rendering pipeline, there were still a couple of artifacts that I couldn’t figure out how to fix in time. I welcome pointers from more experienced UE users:

  1. Greystone jitters about midway through the first shot. Might have something to do with animation blending.
  2. There’s some light bleeding on the dragon’s wings in the second shot. It shows up from far away, but not when you get up close.

Conclusion

Overall, I’m proud of what I was able to accomplish on this project in such a short time frame. It was cool to be able to ideate quickly in 3D with a good idea of what the final result would be. Real-time has its own set of technical challenges to deal with, but I’m interested in exploring it more.