Design Lab Project 4

Bryce Li
12 min read · Mar 30, 2021


3/29 – 3/30/2021

Choosing what story to tell

One of the central concepts of the story was to illustrate the gradual progression of the pollution that our dumping of plastic in the ocean has created. I weighed different ways of conveying the scope of that pollution– one was its mere presence, another was the act of eating plastic bags, and the last was the physical harm to turtles that ate the bags. I decided that illustrating the overwhelming presence of plastic was the most important for the turtle, as I could demonstrate its behavior within that pollution.

Throughout this storyboard, there were two behaviors I wanted to illustrate the turtle doing. The first was swimming, which is a notable feature of the leatherback turtle– its efficient locomotion is expressed in its streamlined dorsal ridges. The other was its consumption of jellyfish, which comprise most of its diet. Additionally, I could incorporate how turtles confuse plastic bags for jellyfish in a similar manner.

I had also taken cues from animal documentaries on how to convey the attention and gaze of an animal– looking at [this] video, I liked how close-ups of the lizard’s face implicitly revealed its observational behavior. I strove to incorporate this into my storyboard as well, since I felt it could help the viewer empathize with the turtle.

When I talked with Q, he noted that there were a lot of abrupt changes in the progression of the story. While the close-ups of the turtle were useful in showing what it was supposed to be reacting to, they also broke up scenes and made things much less continuous. As a result, I merged several scenes together in my first iteration of storyboarding to create a more continuous graphic.

Since this animation wasn’t limited to physically possible events the way live-action video is, I realized that transitions and changes could be more figurative. Looking at themes of continuity, I saw how this could be used for transitions as well– text could suddenly disappear or reappear when an object in the foreground passed over it, or the camera could pan to a new scene that would be physically impossible to reach smoothly in real life. This allowed me to dive a bit deeper into transitions.

4/1

Lessons from initial animation

I started animating my first scene in After Effects, one that was very similar to my poster. As a result, I focused more on exploring movement with my turtle and the objects in the foreground. Because everything in the scene was already supposed to be drifting around, I found it effective to keyframe constant velocities. However, it was important to ease movement in and out when changing the direction of motion of an object with mass– in this case, the flipper of the turtle. On top of that, the puppet tool allowed me to make gradual deformations to suggest the flexibility of an object. By making the outer fin drag behind the inner fin, I created a sense of inertia and flexibility that gave mass and implied physical properties to the turtle.

Computation: representation vs simulation

Adding the reflection was an interesting part of this iteration. I enjoyed the refraction that occurs when looking up at the surface of a body of water, but I could not easily translate it into motion by hand. This became an opportunity to experiment with how I could depict shifting water and its interactions with light.

My starting point was to write ShaderLab/CG shader code in Unity to deform and refract whatever was behind the water. This is a visibly three-dimensional effect– as the angle of the viewer relative to the surface of the water decreases, the shape of the refraction should reflect that angle as well. Since I couldn’t calculate individual light rays with satisfactory performance, I decided to grab textures from behind objects, place them on the surface of the water, and deform them with fragment shaders. As a result, I had to avoid direct calculations of light rays and take a designed approach that represents the effect instead of simply recreating it.
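The full shader isn’t included in this post, but in Unity this grab-what’s-behind-the-object approach is typically set up with a GrabPass in ShaderLab. A minimal skeleton along these lines would work– the shader name and structure here are illustrative placeholders, and the frag function shown further below would slot into the CGPROGRAM block:

Shader "Custom/WaterRefraction" {
    SubShader {
        Tags { "Queue" = "Transparent" }

        // Capture everything already rendered behind the water into _GrabTexture
        GrabPass { }

        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _GrabTexture;

            struct v2f {
                float4 pos : SV_POSITION;
                float4 screenUV : TEXCOORD0;
            };

            v2f vert(appdata_base v) {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                // Screen-space coordinate used to sample the grabbed background
                o.screenUV = ComputeGrabScreenPos(o.pos);
                return o;
            }

            // fixed4 frag(v2f i) : SV_TARGET { ... }  (shown below)
            ENDCG
        }
    }
}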

The most important aspect of displaying the deformation was deciding how things would be deformed. Because the waves needed to move with time, I displaced the noise lookup by time along one axis so the pattern drifts across the surface. By applying the time offset inside the noise function rather than to its output, I was able to mimic the movement of water while keeping it relative to the position of the camera.

fixed4 frag(v2f i) : SV_TARGET {
    // Scroll the noise lookup over time so the distortion drifts like moving water
    float noiseUV = noise(float3(i.screenUV.x * 10.0 + _Time.x * 20, i.screenUV.y * 10.0, i.screenUV.z * 10.0));
    // Offset the grab-pass sample by the noise value to refract whatever is behind the water
    fixed4 grab = tex2Dproj(_GrabTexture, i.screenUV + float4(noiseUV, noiseUV, noiseUV, noiseUV));
    return grab;
}

Although this noise function produced some sharp edges, I could move the camera around a scene of pre-selected colors and get different refractions at the surface. The motion was by no means accurate to real physical properties– I wrote this code to best represent the look and motion that I had observed. In the process, I enjoyed exploring a form of visual creation that works by writing rules that reflect the visual properties of the subject.
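For reference, the noise() helper called in the fragment shader isn’t shown above. A simple hash-based value noise along these lines produces a similar effect– this is a sketch of the general technique rather than my exact implementation:

// Cheap hash-based value noise. Linear interpolation between lattice values
// leaves visible creases, one possible source of the sharp edges mentioned above;
// smoothing f (e.g. with smoothstep) would soften them.
float hash(float3 p) {
    p = frac(p * 0.3183099 + 0.1);
    p *= 17.0;
    return frac(p.x * p.y * p.z * (p.x + p.y + p.z));
}

float noise(float3 x) {
    float3 i = floor(x);
    float3 f = frac(x);
    return lerp(lerp(lerp(hash(i + float3(0, 0, 0)), hash(i + float3(1, 0, 0)), f.x),
                     lerp(hash(i + float3(0, 1, 0)), hash(i + float3(1, 1, 0)), f.x), f.y),
                lerp(lerp(hash(i + float3(0, 0, 1)), hash(i + float3(1, 0, 1)), f.x),
                     lerp(hash(i + float3(0, 1, 1)), hash(i + float3(1, 1, 1)), f.x), f.y), f.z);
}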

4/6

Working on transitions and changing things for understandability

When making the first scene, I experimented with many aspects of animation– masking and frame-by-frame movement. I first traced the jellyfish moving across the screen to reveal my title, which I had planned out in my storyboard and discussed on March 30th. I initially tried different source videos of jellyfish swimming to see which ones were the most recognizable, and I settled on the movement of a sea nettle. Throughout the footage I watched, there was an incredible diversity of jellyfish shapes, and turtles seemed to eat a wide range of them– jellies with two heads facing in opposite directions, or non-stinging jellies with puffy tentacles. In the end, I settled on animating the sea nettle’s shape, with its single bulbous head and trail of tentacles, as it was the most iconic look.

I also had to modify the proportions of the sea nettles, which are very long in real life (often 4–8 times longer than they are wide). In addition to being more difficult to trace, the full length would mean the jellyfish would either take more time to cross the screen or move unrealistically quickly to fully unmask the text in roughly three seconds. As a result, I moved the focus of the tracing to the pulsing of the head, which dominates the behavior of the jellyfish, and hand-drew the swirling tentacles in a much shorter and simpler form than in the source footage. As with much of the animation so far, I found that deliberate visual interpretation was needed when working closely with source footage.

While tracing the sea turtle, I made sure to get an orthogonal view of it. As a result, there was little discrepancy between the perspective of the traced object and the perspective of the scene around it. This also allowed me to move the object on a path following the camera without it looking unnatural.

Experimenting with three dimensions

On top of that, the background was an important feature of the composition of the scene. By experimenting with three-dimensionality in After Effects, I was able to create a subtle visual understanding of the environment around the turtle. Because the background jellyfish varied so much in their distance from the camera, a parallax effect made the viewer feel as if they were actually moving through space. The jellyfish themselves were kept as simple, still, two-dimensional silhouettes in a three-dimensional space, since at that distance, position in the visual hierarchy, and level of detail, there would be no noticeable foreshortening or movement.

4/8

Initial issues

During class, one problem that was brought up was that the turtle in my first scene was very simple– there was a lot of discrepancy between the pattern on the turtle in the beginning and in the end scene. Another pressing issue was that the jellyfish was much larger than the turtle, creating confusion about the scale of both. Although the original intention was to show depth in this transition, it became confusing to the viewer. To address this, I adjusted the scene– this time, the turtle would sweep across the screen to mask away the text revealed by the jellyfish. As the movement occurred, I planned to have the turtle transition into a more detailed color palette within the seven-color limit. The end product made this transition feel smooth. My plan is to have the turtle swim for a few more seconds and eat a plastic bag from the same angle.

Combining stop motion and procedural animation

During this iteration, I also focused on depicting a scene in my storyboard where a turtle eats a jellyfish. I had two versions of the animation in this iteration– one animated frame by frame, and one using a still reference bent with the puppet tool. I decided to combine frame-by-frame animation and procedural animation, using the best of both worlds. I kept the stop-motion traced jellyfish from my original animation, as I found that it best captured the fluidity and softness of the jellyfish. For the turtle, I chose a more rigid method of animation for a couple of reasons– firstly, it gave me more control over the movement of the fins, and secondly, it saved a lot of time. At this point, I wasn’t able to animate the movement of the fins yet, but I was able to puppet the mouth so that the turtle looked like it was eating the jelly.

4/13

Changing narrative focus

Talking with Daphne and my TAs, I heard from both that my original concept of adding more and more plastic bags to the background was not a particularly effective way to illustrate the story. I agreed– most of the time, the background’s primary function was to visually orient the viewer within the environment, so people didn’t focus much on it and would likely not notice the change. On top of that, viewers thought that the eating of the jellyfish was the main focus of the scene, which made for a pretty boring story, given that it would just be a turtle eating a jellyfish three times and then eating a plastic bag at the end. Because of these factors, the background plastic bags could not carry the main narrative. Looking back at my original storyboards, I found that the most important narrative was already there– the depiction of how turtles behave in response to their polluted environment. I shifted the emphasis of the narrative onto how turtles can confuse plastic bags for jellyfish, rather than onto the proliferation of plastic bags throughout the environment. Daphne proposed that I could create a scene from the turtle’s point of view demonstrating the visual similarity between plastic bags and jellyfish.

When experimenting with different plastic bags and shapes, I wanted to make them feel similar not only through their visuals, but also through the composition and framing of the objects. By visuals, I mean the direct image of the jellyfish and the plastic bag– they share similar colors and lines– but they are also linked by the framing and movement of the camera. This was done through a transition from the previous scene, as the camera panned up from the shot of the turtle eating the jellyfish. The only two subjects that enter the scene and persist for the next few seconds are the jellyfish and the plastic bag, and my intention was to use this movement to group the two.

Additionally, I included a secondary background depicting Snell’s window at a distinct angle to illustrate the shape of the environment itself in relation to the imagery around it. The perspective implied by this window subtly establishes the angle and position of the water’s surface, and the movement of the background jellyfish reinforced that perspective and smoothed the transitions. These improvements allowed me to create a more cohesive environment that could be better understood by the viewer, and it showed– Connor remarked that it was compelling how immersive the environment was.

Colors of jellyfish and turtle

In class, one of the discussions was about what color to make the jellyfish. In a lot of the source photography, the jellyfish appeared very colorful. However, this was due to optimal lighting conditions that lit the jellyfish from the front and reduced its transparency, making it look very orange and purple. In other conditions, the jellyfish is lit from behind by the natural diffusion of light through water, making it a light blue. To resolve this, I made all the jellyfish the same light blue as the ones shown in the last scene.

Additionally, Daphne brought up that it was a bit awkward in the first scene to have the turtle move away from the camera and become more detailed. To reduce distractions, I adjusted the colors and the timing so that I could depict the details on the turtle from the moment it entered the scene.

4/15 + Final

Final changes to plastic bag scene

In this iteration, I continued to build on the scene with the plastic bag and the jellyfish. Originally, I had planned to switch to a different scene to depict the turtle eating the plastic bag, since this scene was initially meant to show the turtle’s point of view. However, I changed this to a continuous scene in which the turtle swoops in from the side to eat the plastic bag. This reduces the number of unnecessary cuts and preserves the mental image of the environment that had already been built up for the viewer over the past few seconds.

Additionally, I wanted to change the transition from the previous scene to this one. I had originally planned to have the jellyfish enter the view from the right and create a match cut implying that the jellyfish from the previous scene was the same as the one in this scene. I changed this to introduce the jellyfish only in the second scene, as the original idea took too long to fit in my time budget. This also creates less confusion around the cut and focuses more attention on the motion of the swimming.

To do this, I animated two things– the turtle swimming, and the plastic bag. I wanted to depict how the plastic bag could crinkle and crunch as it was yanked out of the water, which required creating compelling folds in the bag. To do this, I traced the physical structure of the bag from footage of a turtle eating a plastic bag, while adding custom detail and contour from my own imagination. I did this because I wanted the bag to be lit partially from behind, which would create highlights and emphasize certain parts of its structure while retaining a visual similarity to the jellyfish. When creating the turtle, I traced a 3D animation to get a camera angle that would match the rest of the scene.

Creating the final scene

In this scene, I wanted to illustrate the scope of the problem of plastic pollution. To create something compelling, I decided to depict something like the Great Pacific Garbage Patch. This time, I chose to depict the scene from below– this creates a more empathetic experience on behalf of the turtle, whereas typical depictions of the garbage patch are from above the water, from a human perspective.

To create identifiable elements of plastic, I drew visible, recognizable bottles. I mixed the bottles with the plastic bags to create a texture that emphasizes both the prevalence of the plastic and its individual qualities.

Creating sounds

In the process of creating audio, I used a lot of unexpected methods to create sounds. My original plan was to record the sound of water being moved by a flipper with my phone underwater, but listening back, I could not distinguish those sounds clearly. As a result, I decided to listen to real-world recordings made with specialized hydrophones. I recognized a few separate elements: a constant low-pitched drone, the sound of moving water, and a very subtle bubbling.

I recorded these three sounds separately, but I noticed that they were too clear and high-pitched to truly feel underwater. I used Adobe Audition to pitch the sounds down, remove the higher frequencies, and muffle what remained. As a result, the mix felt much closer to the underwater audio one might capture with a hydrophone.
