Perseid Meteor Shower Timelapse

I went out to try to catch a timelapse of the meteor shower last night. Didn't catch too many meteors, but I ended up with a pretty cool timelapse of the sky. An interesting issue I ran into: even at 800 ISO, the GH2 was showing horrible, thick bands of color noise across the frame with long exposures. They would move around between frames, which made shooting a timelapse with it impossible.

After some poking around I realized the cause: the GH2's long exposure noise reduction setting was ON. After switching it off, the bands completely disappeared. Of course, now you get some really hot, stuck white pixels in your shot. Overall I'm very disappointed in the GH2's low light performance. I used to shoot 20 second exposures with my 5 year old Pentax DSLR and never once saw a stuck pixel or big bands of noise, especially at 800 ISO. And yes, I did a pixel refresh in camera before shooting. And yes, the banding with noise reduction set to ON is present in RAW, and it's ugly.
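If you do end up with stuck pixels baked into your frames, they're fairly easy to knock out in post. Here's a rough sketch of the usual trick (Python/numpy, the function name is mine): replace any pixel that wildly exceeds the median of its neighbors.

```python
import numpy as np

def remove_hot_pixels(frame, threshold=50):
    # Build a stack of the 8 neighbors of each pixel (edges padded
    # by replication), then compare each pixel against the median
    # of its neighborhood. Stuck pixels stand way above that median.
    padded = np.pad(frame, 1, mode="edge")
    neighbors = np.stack([
        padded[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
        for dy in range(3) for dx in range(3)
        if not (dy == 1 and dx == 1)
    ])
    med = np.median(neighbors, axis=0)
    hot = frame.astype(np.int32) - med > threshold
    cleaned = frame.copy()
    cleaned[hot] = med[hot]
    return cleaned
```

Run it per frame before assembling the timelapse; the threshold is per-footage guesswork, so eyeball a dark frame and adjust.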

Result below (I highly recommend switching to HD and going fullscreen to see the whole starfield).

Mandelbrot Based 3D Geometry

Lately I've been heavily experimenting with Mandelbrot sets. Not sets rendered in 2D space that give you your typical hippie fractal images, but sets computed in 3D space that give you complex, infinite 3D geometry. I've been experimenting with Mandelbulb3d - it's completely free software with support for DOF, fog, stereoscopic 3D rendering, an animation engine, etc. It could quite easily be integrated into a post pipeline utilizing Nuke/Houdini/Fusion/etc., if not generating the set math directly in Nuke with its built-in Python API, for example. The only downside I've found generating assets directly in Mandelbulb3d is the lack of alpha support! This makes exporting renders and comping them into other work a bit difficult. Thankfully it supports exporting depth maps. What I've been doing is exporting depth maps and using them as a luma matte - this gets rid of the background for you; just tweak the depth map with curves or levels to get the key you want.
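For the curious, here's roughly what that luma matte trick boils down to (a Python/numpy sketch, assuming a grayscale depth export where the fractal reads brighter than the background - invert the map first if your export is the other way around; the function name and default points are mine, not anything from Mandelbulb3d):

```python
import numpy as np

def depth_to_matte(depth, black_point=0.1, white_point=0.6):
    # Normalize the depth map to 0..1, then apply a levels-style
    # adjustment: everything below black_point falls to zero alpha
    # (background), everything above white_point goes to full alpha.
    d = depth.astype(np.float32)
    d = (d - d.min()) / (d.max() - d.min() + 1e-8)
    matte = np.clip((d - black_point) / (white_point - black_point), 0.0, 1.0)
    return matte

# Using it as an alpha channel in a comp would look something like:
# comp = fg_rgb * matte[..., None] + bg_rgb * (1.0 - matte[..., None])
```

The black/white points are exactly the handles you'd be dragging in a curves or levels node; tighten them until the background drops out cleanly without eating edge detail.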

Here's a quick render out of Mandelbulb3d. You can produce all kinds of geometry, and remember they sit in 3D space and everything can be animated.

For a great example of what these objects and spaces can look like when worked on and animated, check the video below created by Ricardo Montalban -

The past two days I've also been playing around with Mir, a new After Effects plugin from Red Giant. It uses fractal sets to create and modify geometry natively inside AE, and it's completely OpenGL powered, so it's nearly realtime. It's 100% 3D inside AE, so you can set up your scenes using AE cameras and lights. I made this quick render using Mir a couple nights ago.

So that's what I've been up to the last two weeks. I started playing with 3D fractals out of pure curiosity, but I'm coming to believe they have some serious potential in a VFX/3D environment for the creation of spaces and visuals. I've begun to toy with Mandelbulber, which is similar to Mandelbulb3d but completely open source, with ray traced 3D support, alpha support (woo!), and x64 packages available. (Render times with these things are horrible - however, they have an alpha build with GPU support!) In the next week I'm going to start getting these renders into a Nuke/Maya workflow and see what kind of spaces I can create compositing these sets into 3D tracked live footage and a mix of live/rendered spaces. There's only rough OBJ export support and no camera data export, so I have a fun week ahead of me! Have a good weekend!

Underwater Shoot

I've always wanted to shoot underwater, if for nothing but fun. I absolutely love being in water in any form, and I love me some film, so I've always wanted to combine the two. I picked up a really cheap ($40) DiCAPac generic DSLR underwater enclosure (more of an underwater "bag") and shoved the GH2 in it. I have to say, as far as the bag goes, I was very happy with the results. The footage could have looked a lot better, but that wasn't the bag's fault.

I was planning on 1) externally lighting the pool, and 2) using my f/1.7 pancake on the GH2 wide open, to stay as light sensitive as possible. Well, the bag's one downfall is that its lens port is just long enough that, with the pancake lens on the camera, the port extended far past the end of the lens and showed up in the shot. I can't really fault the bag though; the GH2 with the inch long f/1.7 pancake is probably the smallest setup anyone would ever use in it, by far. It was also a wide-ass lens, which didn't help. So I was forced to put my f/3.5 on the cam and shoot with that, as that lens was more than twice as long and cleared the port. Also, by the time we showed up, I didn't have time to externally light the pool, leaving just the one underwater light as our single source of illumination.

This all added up to me having to shoot at 1280 ISO, and the GH2 is not known for being clean. It's hard to spot any horrible noise in the resulting clip (at least on YouTube), but the main hassle was that I no longer had the image headroom to pull back as much skintone and neutral balance as I would have liked. The image right off the cam was essentially completely blue. I didn't have anything on me to do a proper underwater white balance, so I balanced as best I could. I shot a couple RAW frames underwater and found that the underwater light was sitting around 5500K, with more than +93 magenta shift needed in ACR to get rid of the blue cast from the water. So I pulled back as much tone as I could, but of course that really brought the noise out.
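For anyone wondering what pulling a color cast back actually amounts to, here's a crude sketch (Python/numpy - nothing like what ACR really does internally, and the function name is mine): a gray-world balance that scales each channel so its mean matches the overall mean, which is about the simplest automatic way to neutralize a blue-heavy frame.

```python
import numpy as np

def gray_world_balance(img):
    # Gray-world assumption: the average of the scene should be neutral.
    # Scale each RGB channel so its mean matches the overall mean --
    # a crude stand-in for a proper underwater white balance.
    img = img.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```

The catch is exactly what I ran into: multiplying the starved red channel up by a big gain amplifies its noise right along with it, which is why the grade brought the grain out.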

Overall a great time was had, I got shooting underwater out of my system (not really, must do again), and for $40, I have nothing bad to say about the DiCAPac. I followed the instructions extremely carefully, tested it in the bathtub before putting a camera in it, and while shooting we occasionally checked for any leaks. During a few hours of shooting, not a single drop ever entered the bag. Great purchase! Watch the result below -

Cobra 427

We took the Cobra out last evening (an original 427, not a kit car!) and I had one of my cameras on me. Mind you, I didn't have my proper gear, so I rigged the GH2 with only two suction cups instead of the usual three. You'll note the resulting wobble!

I used the lavalier mic I had with me in the lower part of the cockpit, out of the wind's way, run directly into the GH2. With the GH2's gain turned all the way down, it still clipped the whole time! Yes, it's a loud car, but the lav is a hot mic and the GH2's audio gain isn't the most adjustable thing in the world. Next time I'll have an in-line pad on me.

Musings on 3D, Douglas Trumbull Interview

The interview itself below is pretty goofball, but Trumbull is a genius as always, and he echoes what I've been quietly thinking for a while now. The 24fps, 2D nature of cinema has always created a sense of "non-reality" on the screen, an intrinsic sense of detachment born of its lack of temporal resolution and depth - a sort of "gauze". This is where you get the feeling of immersion into another plane: a separate, ephemeral world of experience, not a separate physical world.

Over the history of film, the language of cinema and visual storytelling has not just been adapted to cater to this, but it has been completely designed around it. Reverse angles, coverage shots, over the shoulder, these were crafted and are used with the notion that you will experience film in this detached ephemeral manner, and it works.

So here's the problem - when you suddenly remove this gauze, this sense of detachment, and jump to 48 frames, 60, or even 120, let alone jump to stereoscopic 3D, but continue to tell stories using the same cinematic language designed specifically for that lack of physical realness and that ephemeral detachment, of course it's going to make no sense, and it's going to look and feel bad. My thought for a while now has been that, with these new technologies allowing a complete removal of this gauze and a complete sense of physical realness on the screen, we need to adapt, perhaps even create, a new visual language for narrative storytelling, catered to this innate realness.

This has already been done in the past by a few very talented filmmakers, e.g. Kubrick with 2001. 2001 was shot on 70mm and was meant to be seen projected in arguably the highest resolution format to date. Kubrick understood the sense of realness and physical attachment the viewer would experience in a 70mm theater, and crafted the film's visual language around it. The lack of constant dialog, the single shots lasting 5 minutes that everyone complains about - these were meant to be experienced in a theater with a total sense of realness, no plane detachment. Without that detachment, there is no need for constant dialog, constant cutting, and coverage shots to establish a physical understanding of the film's spatial parameters - you're already really there, experiencing another physical world, not another ephemeral world.

My argument is that we need to begin experimenting with visual language catering to an intrinsic sense of presence and realness in the medium, because this is what 4k/120p/stereoscopic is going to bring us. We can stick to our 100+ year old methods of visual language and complain about how they don't work with a new medium, or we can adapt our trade and take advantage of an amazing new level of immersion. People rarely take a step back and acknowledge how much of a baby cinema still is compared to almost all other artforms, and it's growing and changing so fast right before our eyes. This very same scenario played out in the jump from monochrome to color. Cinematographers had adapted their visual language to working only in greyscale, and were appalled at the idea of introducing color, something that would instantly make the audience feel like they were looking through a window, not at a piece of art. But look at how well we've adapted our visual language and taken advantage of it! I can't wait to see the same thing happen with high framerate, high resolution, digital stereoscopic imaging.

I could write just as much about how long it's going to take our brains to let go of their preconceived notions of how narrative immersion should work and accept a new visual language, and probably even more on how important proper spatial design and consistency are when working with 3D (the importance of close collaboration with your stereographer!). But on the whole, I can't describe how excited I am that this trade is going through so many amazing changes, and the best part is we're all around for it.

Fohdeesha Show / Live Visuals

My good friends David Peck, Tyler Ebbinghouse and Sia Hanna organized an electronic music show under the Fohdeesha moniker last weekend. The turnout was great (for Indianapolis) and a good time was had. The sets were fantastic.

They asked me if I was up to VJing the show. I hadn't messed around with live video in roughly 4 years, and I had never actually done visuals live. But I took the challenge! I spent the two nights before the show frantically assembling and compositing clips to use. Among them were some never-before-seen footage from the Fohdeesha archives, some animated CT scans of my abdomen, and some 3D composites I had been working on. It turned out great! The crowd was not expecting what was about to be flashed in their faces all night, and the artists playing absolutely loved it. Now I really want to do it again.

For the technical people out there, I was running Resolume with two MIDI controllers. I set up behind the stage so I could have a nice little space to myself. I didn't own a laptop (who owns a laptop?), so I got to lug in my 60 pound rackmount workstation (no joke!). The upside was the ability to run an entire HD workflow: 1080p source clips, a 1080p Resolume workspace, outputting 1080p to a 1080p projector. I was running layers on top of layers and the thing was rock stable all night. I ran the Resolume UI on the main monitor output and the live mix out of the second DVI output. I used a DVI splitter to send the second output to the projector as well as to a second monitor on my desk, so I had a fullscreen reference of what was actually being sent to the projector.

I meant to bring one of my cameras and record the thing, but I ended up not having time. This is the only video I can find that shows anything, but you can see a few clips of what was going on. Some pictures below that as well.

Caught In The Middle

Finally found a break to get this edited.

Shot last year in the Indianapolis airport for Jeff McIlwain (Lusine ICL). Was a great exercise.

Lusine Shoot

The shoot for Lusine went much better than expected, and the footage came out absolutely beautiful. Here are a couple frames from the video. Film soon!

