For the Bafta-winning visual effects behind Gravity, Framestore built an astounding 2,200 models, from large structures such as the International Space Station to the tiny props floating within it.

Gravity: CG sets and suits

It was a remarkable number of asset builds – far more than in a traditional VFX-heavy film. “The number of assets to manage was more akin to that of a CG animated film, but this is on a different scale of complexity entirely as it’s all photo-real. I definitely wouldn’t classify this as a CG film, but it is certainly comparable in terms of the number of assets” says Gravity lead modeller, Ben Lambert.

“We were creating all the things that physical set-builders would normally make in a film studio, and then doing the extensions on top” adds Ben. “The interior sets, which are all CG inside the ISS, were phenomenally detailed too, and every bit of that had to be modelled by someone” says Senior Producer Charles Howell. “It took over a year to build everything. We never really stopped – we were constantly adding detail.”

Building the International Space Station

Viewed from every angle, inside and out, and then stunningly destroyed, the International Space Station (ISS) was the biggest of Gravity’s thousands of asset builds – ending up as a 100 million polygon model. “It’s made up of around 50 pieces, each sent up at a different time over the last 25 years, each from a different manufacturer and serving a different purpose. It’s the galaxy’s biggest jigsaw” says Ben. “So we couldn’t just throw a great big generic sci-fi kit all over it, make it look cool and put shiny chrome aerials on there. We had to source photographs really carefully.

“What we found with the switch to Arnold (see also: Rendering the universe) was that it was very fast at raytracing polygons. Traditionally, if we wanted to do some destruction, we would bake in some wobble on a piece of metal to make it look as if it had been dented, using a displacement map. However, displacement maps are very memory intensive as they put all the detail in at render time, and what we found was that we could just give Arnold the really heavy model we baked from. We would reduce it as much as possible but keep all the detail in there, and Arnold was very happy to render it. So the ISS model is 100 million polygons with no displacement maps! It’s just raw polygons for the most part with UVs.”

A big design challenge was familiarity: the ISS, the Hubble Telescope and the Atlantis and Explorer space shuttles are so iconic that many people will recognise them, so creating them in a photo-real way was a big undertaking. “We had to source reference photographs really carefully and think ‘why are we putting an aerial on this unit? This one’s an airlock so it should have this on the outside’, things like that. It’s a much more complex way of building things than simply adding details just so the render looks prettier – this had to be functional and believable.”

“Although we worked with an art department, we had the same reference material that is available to the public, not some secret Nasa blueprints. So rather than just making a best guess we wanted to take the images and actually calibrate them. If it exists in reality we had to create a calibration scene. At the time we used a piece of software called Image Modeller for this. We’d take a couple of images, select matching points on different photographs of the same thing, and keep repeating the process until it ends up giving you 3D cameras. Modellers can actually model through these cameras and line up the photographs.”
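
Framestore used a commercial tool for this, but the underlying idea – recovering camera poses from matched points across photographs – can be sketched with open-source libraries. The snippet below is a rough illustration using OpenCV rather than Image Modeller; the image filenames, camera intrinsics and automatic feature matching are all assumptions standing in for the hand-picked match points described above.

```python
# Illustrative sketch only, not Framestore's pipeline: recover the relative pose
# of two cameras from matched points across two photographs of the same object.
import cv2
import numpy as np

img1 = cv2.imread("iss_module_view_a.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical paths
img2 = cv2.imread("iss_module_view_b.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match feature points (a stand-in for manually selected match points)
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Assumed pinhole intrinsics; in practice these would come from camera metadata
K = np.array([[2000.0, 0.0, 960.0],
              [0.0, 2000.0, 540.0],
              [0.0, 0.0, 1.0]])

# Essential matrix from the correspondences, then the relative camera pose
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# R and t define a pair of 3D cameras a modeller could line geometry up through
print("Relative rotation:\n", R)
print("Relative translation (up to scale):\n", t.ravel())
```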

We had more creativity on the interior sequences. Although they are based on real ISS dimensions, there are lots of floating props, which obviously would differ between each mission. “Our set dressing is after a state of panic, when the crew have left the ISS. Alfonso could move and position a prop wherever he liked, change the colours, choose a different poster, things like that, just to make that shot more believable” says Ben. “You could probably look at one of our interior shots and a photo of the ISS and work out which module the scene is in – it’s that accurate. It has the same colour coding, the same panelling. The actual underlying walls are exactly as they are in reality, but the dressing upon them might not be the same, because this is a different mission, so there are different personal items on board.”

Blowing it up

The destruction of all that hard work was one of the most difficult sequences for the FX team, led by Alexis Wajsbrot. “It’s two minutes long and we had to choreograph everything based on where Sandra Bullock is. We wanted it to look as real as possible and everything was incredibly high-res.”

“There’s also a fire explosion in that scene, which is in a vacuum, so we needed to design the look of that.” Add to that a swooping camera that goes both close up and far away and you have one very complex shot. “They were the most detailed models I have seen so far on a project. Because we were using Arnold the modellers weren’t afraid to up the polygon count, and then we had to deal with it and try to find a workflow that would be able to deform and destroy them.”

“I remember when we started shattering the ISS and [VFX Supervisor] Tim Webber saying ‘It’s a metallic structure, not rock – it’s supposed to bend and deform before breaking’. We had to develop a way we could have some super high-res geometry bending and deforming, then breaking and colliding with a very large amount of rigid body debris.”

“After testing different solutions such as nCloth, soft bodies and Finite Element Analysis, which all had the drawback of not being fast or flexible enough, we decided to create the ‘structure system’, with the goal of simulating everything in the same solver, Bullet. The idea was to deform using an RBD solver, where a single mesh could have multiple rigid bodies constrained together to drive it, which allowed deformation. We managed to have a very fast and detailed simulation deforming millions of polygons. The tools were so successful that modellers used them on Gravity to help them model some damaged structures. We are actually still using the same tool on some current productions.”
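
The structure system and fBounce are Framestore’s proprietary tools, but the core idea Alexis describes – rigid bodies constrained together so a structure can bend before it breaks – can be sketched with the open-source Bullet bindings. Everything in the snippet below (body count, masses, forces) is an illustrative assumption, not the production setup; in production the resulting rigid-body transforms would drive the deformation of the detailed mesh.

```python
# Minimal sketch of the constrained-rigid-body idea, using pybullet.
import pybullet as p

p.connect(p.DIRECT)
p.setGravity(0, 0, 0)  # zero-G, as in orbit

box = p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.5, 0.5, 0.5])
bodies = []
for i in range(5):
    # A row of unit-mass boxes standing in for segments of a truss or panel
    body = p.createMultiBody(baseMass=1.0,
                             baseCollisionShapeIndex=box,
                             basePosition=[i * 1.1, 0, 0])
    bodies.append(body)

# Point-to-point constraints tie neighbouring segments together so the chain
# can bend and deform; tearing a constraint would let pieces break away.
for a, b in zip(bodies, bodies[1:]):
    p.createConstraint(a, -1, b, -1, p.JOINT_POINT2POINT,
                       [0, 0, 0], [0.55, 0, 0], [-0.55, 0, 0])

# Nudge one end and step the solver; the per-body transforms could then be
# sampled to drive skinning of a much heavier render mesh.
p.applyExternalForce(bodies[0], -1, [0, 50, 0], [0, 0, 0], p.WORLD_FRAME)
for _ in range(240):
    p.stepSimulation()

print([p.getBasePositionAndOrientation(b)[0] for b in bodies])
```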

The detail of the models and the length of the shots combined to make it impossible for one TD to handle it from start to finish, so a way had to be found to split the shot both in time and between artists. Of course it couldn’t be split up too much, because the ISS had to behave like one asset throughout the shot – with each impact moving the whole station in a realistic way.

“The good thing was we had a year to prepare” says Alexis. “R&D polished the development of fBounce, our own implementation of Bullet inside Maya, and in FX we wrote the structure system – over 15,000 lines of code! We created simulation rigs for each of the 48 modules of the ISS – four per module, each with a different degree of destruction. So there were 192 sim rigs in all, which allowed us to have the control we needed to destroy the ISS at the highest resolution possible.”

“We first did a pass of the whole shot using the low-res sim rig of each module, which allowed us to get the overall motion of the station. The animators made sure we were telling the right story and that the camera was always facing an interesting part of the ISS.”

With the overall motion of the ISS approved, the team identified four moments when the camera wasn’t pointing at the ISS. “That was very helpful to be able to split the shot between different TDs” says Alexis. “For the modules that had to be hit multiple times during the shot, we first had to approve part A of the simulation in order to start simulating part B. We also modelled a damaged variant of each module to be used on the later part of the shot.”

The shot was also split between TDs by foreground and background as well as by module, so that they could focus on the incredibly high detail destruction. The FX team worked very closely with Modelling, as they needed the assets modelled in a specific way so that they could be easily destroyed, as well as including some additional structures, cables and interiors so that the destruction simulation would look more realistic.

“We presented things to Alfonso on a daily basis,” explains Alexis, “which meant he could choreograph the destruction. He gave us a bit of freedom and we came up with a few ideas to improve the shot. We were all super enthusiastic about working on such a cool shot and we really pushed to make it look as impressive as possible.”

Additional FX TDs worked on secondary debris, sparks and particle simulations, and a senior FX TD was in charge of the fire explosion; he spent four months working out what an explosion in space should look like: how spherical it should be, how fast it should dissipate, whether it should create a shockwave. There was a lot of back and forth with Tim and Alfonso to find the right balance between realism and making it look good.

In total there were four FX TDs looking after the main destruction, with three adding additional debris and particles and another creating the explosion.

Space on fire

This was one of the big challenges for the FX team – what does a zero-G fire look like? Fortunately for astronauts, but unfortunately for us, fire in space is a very difficult thing to find. “There are very few papers about it and almost no references online” says Alexis. “We found a match being lit in space on YouTube, but it was very small. We had to imagine how it would look on a bigger scale.”

“We used it as our main reference for the blobs of fire floating in the ISS – little bits of plastic, cables and paper that are flying off and burning. We slightly tweaked it from the perfect sphere look, adding a bit more detail and turbulence to the simulation to make it look more believable and interesting.”

“For the main fire simulation inside the ISS, as we did not find good enough material online, we went on a three-day shoot with special effects artists to get some fire references. We were mainly emitting fire directly onto a metallic ceiling, so that buoyancy had a minimal impact on the fire, and it created some really cool, blobby fire crawling along the ceiling.”

“In order to reproduce it, we used Naiad and worked closely with Marcus Nordenstam (the founder of Naiad) and Framestore’s R&D department to be able to generate a very large scale and efficient fire simulation within Naiad. I believe we were the first show here to use Naiad to create a fire simulation.”

“We turned off buoyancy, gravity and all kinds of drag, which made our simulation very low in terms of detail and almost ‘boring’, so we spent time finding the right balance between what the solver believes a zero-G fire should do, and what we believed was interesting to show to an audience. Some of the simulations were so high-res that they took more than two days to simulate!”
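
As a toy illustration of why that happens – emphatically not Naiad, and with every constant made up – consider a one-dimensional column of gas where buoyancy normally accelerates the hot core upward; switch the term off and the heat simply diffuses symmetrically in place, which is exactly the “boring” look the team then had to dress up.

```python
# Toy sketch: with buoyancy on, the hot core accelerates upward; in zero-G
# (buoyancy, gravity and drag all off) it stays put and just diffuses outward.
import numpy as np

def simulate(zero_g, steps=200, n=200, dt=0.1, diffusion=0.2, buoyancy=0.5):
    temp = np.zeros(n)
    temp[n // 2] = 1.0        # a single hot cell in the middle of the column
    velocity = 0.0            # vertical velocity of the hot core
    position = float(n // 2)
    for _ in range(steps):
        if not zero_g:
            # buoyancy term: hot gas accelerates upward, driving flame-like motion
            velocity += buoyancy * temp.max() * dt
        position += velocity * dt
        # explicit diffusion spreads the heat out symmetrically
        temp += diffusion * (np.roll(temp, 1) + np.roll(temp, -1) - 2 * temp)
    return position - n // 2, temp.max()

drift_1g, peak_1g = simulate(zero_g=False)
drift_0g, peak_0g = simulate(zero_g=True)
print(f"buoyancy on: core drifted {drift_1g:.1f} cells upward, peak temp {peak_1g:.3f}")
print(f"zero-G     : core drifted {drift_0g:.1f} cells, peak temp {peak_0g:.3f}")
```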

Digitally sewing the space suits

As well as the sets, we also had to create the costumes, specifically Sandra and George’s space suits. “The goal, which sounded simple enough, was to make the astronauts look real” says Head of Rigging Nico Scapel. “The complexity there is that they had to look right, from far away to extreme close-up, and they had to move right – every fold on their suits – across very long shots and changing lighting conditions.” They were unthinkably detailed, and each suit could have a few million polygons, between ten and a hundred times the resolution the team might normally have worked at.

We borrowed many techniques from physical costume design to make sure everything would behave believably. “One of the novel things about the Russian spacesuits is that in building the digital suits we started from the original cloth panel sewing patterns that were used to build real suits. CFX artists used a program that was designed for the fashion industry to digitally sew the suits together. For the Nasa suit, our starting point was a selection of photos and video references. We had to get the right amount of fabric to ensure that it fits the character properly, turning us into digital tailors of a sort” says CFX lead Juan-Luis Sanchez.

“We had a large number of iterations on the suit design and models to achieve the level of realism required” says Nico. “Our rigging system enabled us to rebuild all rigs from scratch, including the suit, body and facial deformations – which meant we were able to push these iterations to animation and rendering, so they could be evaluated in a shot context. Over the course of the project, we were maintaining over 800 assets totalling over 7,000 rig versions. Ryan alone actually contained several dozen models and rigs, for different usages throughout the pipeline, from pre-vis and tech-vis to mocap, ragdoll, tracking, animation and rendering.”

Some liberties had to be taken, however. Real suits are rigid, limiting movement considerably, and the wearer has difficulty raising their arm above shoulder height. “That was really restrictive, so we had to find ways to adjust it, making them occasionally deform in ways that wouldn’t be obvious to give them more freedom” says Nico. The team also had to create something that was robust enough to be both animated with all the degrees of freedom of space and lined up perfectly with shot elements, so that the animators could work with freedom while still respecting the live action plate.

Sandra Bullock and George Clooney were sometimes shot independently and tracked with different cameras, so those elements had to be brought together. Their suits would need to be animated relative to a new camera while respecting the orientation of those plates to camera, so nothing was lost. One of the riggers, Pierre-Loic Hamon, developed some powerful tools to help solve the problem, which used the chest as a cushioning area so that the body could be animated as if it were in micro-gravity, but the rim of the helmet and the live action would still match up pixel-perfectly.

“Our goal was to spatially stage CG plate cards in animators’ work scenes, by making direct use of live action tracking data, in relation to the independently animated CG body and camera” says Pierre-Loic. “When you do such a thing it is very easy to lock things down, but for Gravity there was no question about it: it had to be more flexible. Additional tools worked with that system to interactively display and help reduce the amount of distortion to the live action by correcting animation directly, yet maintaining framing and motion. It allowed us to have a 1:1 match between the card and the body at their point of contact – most likely the helmet rims, but not always.”

“Finally there were tools to help dampen the motion of that point of contact relative to the gross motion of the body, which could come out jerky, and therefore accommodate live action shooting constraints versus digital zero gravity flight. This used a mix of handy rig control logic and polynomial smoothing algorithms, and contributed to preserving this weightless feel from the original pre-vis through to the final renders.”
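
The rig tools themselves are proprietary, but polynomial smoothing of a jerky per-frame channel is a standard operation. The sketch below shows the general idea using SciPy’s Savitzky-Golay filter on a made-up translation curve; the frame count, window size and noise level are assumptions, not Framestore values.

```python
# Polynomial smoothing of a noisy per-frame translation channel (illustrative only).
import numpy as np
from scipy.signal import savgol_filter

frames = np.arange(240)                       # a 10-second shot at 24 fps
drift = 0.02 * frames                         # slow weightless drift of the contact point
jitter = 0.3 * np.random.randn(frames.size)   # jerky motion inherited from the on-set rig
raw_ty = drift + jitter                       # raw tracked Y translation, per frame

# Fit a cubic polynomial over a sliding 25-frame window: high-frequency jerks
# are suppressed while the gross drift (the weightless feel) is preserved.
smoothed_ty = savgol_filter(raw_ty, window_length=25, polyorder=3)

print("mean frame-to-frame change, raw:     ", np.abs(np.diff(raw_ty)).mean())
print("mean frame-to-frame change, smoothed:", np.abs(np.diff(smoothed_ty)).mean())
```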

“Imagine a shot where Sandra is flying through space” says Animation Supervisor Max Solomon. “On set she was strapped into a rig and the camera was moving around her, so for her performance often it was only her head that was moving. Those tools allowed us to animate the characters but then spread the difference between the gross motion and the helmet across the body, so you didn’t see a big disconnect between the live action face and the animated body. You could soften the head and the shoulders. They were incredibly powerful and fast tools and gave a great deal of freedom to animate.”

For a few shots we had to do full facial replacements, because it would have been impossible or too dangerous to film the movement of the actors. “Even though they were action shots and relatively fast moving, we looked for the most accurate capture technique available at the time” says Nico. “The goal was to shoot the performance and have the ability to relight it in a shot without any hand animation, and without trying to solve the performance as individual shapes. We captured neutral scans of the actors in the lightstage, which gave us a solid basis for our model and skin shader. The performance capture was done with a Mova Contour system, enhanced with additional cameras to allow for more head movement, and four Arri Alexa cameras to capture higher resolution texture. The post processing involved head, lip and eye tracking, and a transfer rig to drive the digi-double head from the captured ‘mask’.”

Parachutes and tethers in zero-G

For some elements there were simply no references, such as the Soyuz parachute that is accidentally deployed in space. “How is this supposed to work?” wondered Simulation Supervisor Sylvain Degrotte. “For the sequence where the parachute is caught on the space station, we had to mock up a little model of the ISS and a little model of the parachute, and we were in our mocap studio figuring out the sequence. We had to come up with something that was believable for Alfonso – for example, cloth fabric tends to go back to its manufactured shape in space, which is why it starts with the open canopy at the beginning of the shot.”

As the simulation couldn’t be split, it had to be done on one machine, and as a result it was taking up to 24 hours. “When you pressed the ‘sim’ button on the farm it was like launching something into space! We had to come up with some ideas to get around it.” As with the destruction of the ISS, a low resolution simulation was required to mock up the movement more quickly. The parachute dynamic setup was split into six dynamic parts, each with its own material properties: rigid metal, thick ropes, thin ribbons and silk canopy, for example. “It was definitely one of the biggest challenges” says Sylvain.

Another was the tethers that hold Clooney and Bullock to the Hubble and then to each other, which were simulated by Sylvain’s team. Unlike the very long parachute simulation, the tether sims had to be very quick. “Because they were both simulated and used by animators we needed something that could be more real-time, so we had to look at this aspect and try to get something more interactive, in order to give the animators quick feedback” says Sylvain.

There are different kinds of tethers used during the movie, each of them requiring realistic and art-directed motion. “We established that the generic anatomy of a tether would be two rigid pieces connected by thin strands of high-strength fibre. Since each tether variant would require the same level of quality and a number of common controls, we decided to build one master simulation set-up that we could then adapt for each of the different kinds of tether in the movie. It saved us tonnes of time and made it easier for artists to switch between the various tasks in the film” says CFX Lead Russell Lloyd.

Rendering the universe

How do you render a show like Gravity, with its incredibly long, almost fully CG shots? Combine those with Alfonso Cuaron’s attention to detail and you have nowhere to hide – we needed to make everything as physically accurate as possible, and there was only one way to do that: switch renderers to Solid Angle’s Arnold.

“I really wanted to use a physical-based rendering solution because it enables you to concentrate on making beautiful images and the other things you want to be worrying about” explains VFX Supervisor Tim Webber. “You don’t need to agonize over certain aspects of reality, or worry about cheating and faking things, as they are closer to what they should be in the first place.”

“We investigated a number of options, including mixing renderers, before settling on using Arnold for the show”, says Head of R&D Martin Preston. “Once we’d made the decision we needed to add support for Arnold to all our in-house technology, and our shader team (led by Dan Evans) needed to re-engineer much of how they approached their work. Doing all that for such a big show was quite nerve-wracking, but I think we were surprised how smoothly it went. In fact while we were doing it our other shows were starting to follow Gravity’s lead in using Arnold.”

Rendering and its scheduling was a huge challenge – render times could become phenomenal, owing not just to the incredibly long shots but also to the amount of geometry in each one. In fact, had it been done on a machine with a single processor core, rendering would have needed to start at the dawn of Egyptian civilisation.

“Optimisation therefore became the buzzword of the show”, says Preston. “We spent an awful lot of time learning how to make best use of Arnold, which tended to result in either us developing new technology, particularly on the interior shots, or refining our lighting setups to focus computation where it mattered.”

But with a considerable amount of careful planning, Arnold was up to the task. “I don’t know what we’d have done without it. It was a great tool in our arsenal” says CG Supervisor Chris Lawrence. The realism it provided was essential to the compositors too. “I don’t think the interior shots would have looked as good if we hadn’t rendered with a raytracer like Arnold. It would have taken a lot more lighting adjustment to make it look real” says Anthony Smith. For the Earth it was “key in getting such realistic and beautiful clouds, and the water surfaces” says Earth Supervisor Kyle McCulloch.

Pre-lighting

The lighting was another major benefit of the switch. “To me, it’s all about the way that the light bounces. Arnold does it beautifully, and realistically, in a subtle way that you have to work very hard to achieve with other renderers” says Webber. The improvement was felt right from the beginning, when cinematographer Chivo was pre-lighting the film, because the light was behaving in a way that he was used to.

Our computer lighting rigs were also designed to be very similar to the traditional rigs that would be used on set, with bounces and blockers, so Chivo was working with a set-up as close to what he was familiar with as possible. “It made the whole pre-lighting process realistic. I think if you were pre-lighting with another rendering solution so much would change by the end lighting that there wouldn’t be any tie-in. There were a massive number of advantages to it” says Webber.

As Arnold was so fast at raytracing polygons, the modelling team found they could make much heavier models. It was more efficient to up the polygon count, knowing the renderer could handle it, than to use lots of displacement maps that would put the detail in at render time.

With more than 2000 assets (from the ISS to the floating props inside it) displayed from almost every angle for minutes at a time, it was a godsend. “What we found was that we could just give Arnold the actual heavy model we baked from, reduce it as much as possible but keep all the detail in there, and it was very happy to render it. The ISS is 100 million polygons with no displacement maps! It’s just raw polygons for the most part with UVs” says head of modelling, Ben Lambert.

Arnold’s ability to accommodate a brute-force approach to modelling meant that Framestore needed to boost the performance of the infrastructure serving the renders. “Some of that was improvements to our file servers”, continues Preston, “and some of it was gained by us carefully optimising the way our tools handled that enormous quantity of data. Previously we’d relied on streaming data into the renderer as and when required; on Gravity we just needed to get everything we had into the renderer as fast as possible and let it handle it!”

“Overall it was a big step to go away from Framestore’s normal rendering solutions, but one that certainly paid off. Probably the best decision I made on the whole movie” says Tim Webber.

Adriano Sanna - 2023 All rights reserved.