Bullet Time Camera Rig

We wanted to build something to excite audiences in the new Developer Experience Hub at Oracle Code One. We came up with the idea of building a bullet time camera rig like the one used in the film The Matrix. Rather than dozens of expensive DSLR cameras, we would build it developer style with 60 Raspberry Pi mini computers, each with an 8-megapixel Pi Camera. The number 60 came from wanting 2.5 seconds of footage at 24 fps.

Videos Produced By the Rig

Designing the Hardware

The biggest hardware challenge was how to provide power to all 60 Raspberry Pis. Budgeting 2 A at 5 V for each Pi and camera meant 120 A for the complete rig. A single 5 V bus would have needed a copper conductor more than an inch in diameter, so to reduce the current we needed to raise the voltage. Running the rig at 48 V cut the current to about 12.5 A, which I could then run through an off-the-shelf industrial ceiling lighting track. The rig was therefore designed around a range of track lighting parts, and each camera unit had its own mini step-down power supply converting 48 V to 5 V.
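The sizing works out as a quick back-of-the-envelope calculation (the 2 A per unit figure is the budget from the text, not a measurement):

```python
# Power budget for the rig: 60 units, each budgeted at 2 A on a 5 V rail.
UNITS = 60
AMPS_PER_UNIT = 2.0
VOLTS_LOW = 5.0
VOLTS_HIGH = 48.0

current_at_5v = UNITS * AMPS_PER_UNIT       # 120 A on a single 5 V bus
total_watts = current_at_5v * VOLTS_LOW     # 600 W for the whole rig
current_at_48v = total_watts / VOLTS_HIGH   # 12.5 A on a 48 V bus

print(f"{current_at_5v:.0f} A at 5 V -> {current_at_48v:.1f} A at 48 V")
```

At 12.5 A, the rig comfortably fits the current rating of ordinary lighting-track hardware.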

Building the Rig

In principle, building the rig is simple; what we learnt the hard way is that any small job multiplied by 60 is a LOT of work. My wife was a star: she stepped in, saved the day, and did most of the repetitive work of building and wiring all the cameras. Initially we designed the cameras with infrared transmitters and sensors on each unit so we could auto-configure the ordering of the cameras. That sounded cool on paper, but in practice the alignment and background interference made it unreliable despite days of wire stripping and soldering. Oops.

Shipping the Rig

The camera units were much heavier than expected, and the track was a lot more flexible. As a result, the rig was difficult and fragile to move. We ended up getting a custom crate made, which turned out to be huge: so big it would not fit in the elevator at Moscone, so a crew had to carry the whole crate up the stairs.

The Software

On the software side, we needed to run software both on the Raspberry Pi units and on a central coordinating server. We also had a web UI for running the demo. Users entered their Twitter username so that the final video that we uploaded to Twitter could be linked back to their own personal Twitter account. The overall system worked like this:

  • The user would input their Twitter handle on the Oracle JavaScript Extension Toolkit (Oracle JET) web UI we built for this demo, which was running on a Microsoft Surface tablet.
  • The user would then click a button on the Oracle JET web UI to start a 10-second countdown.
  • The web UI would invoke a REST API on the Java server to start the countdown.
  • After a 10-second delay, the Java server would send a multicast message to all the Raspberry Pi units at the same moment instructing them to take a picture.
  • Each camera would take a picture and send the picture data back up to the server.
  • The server would make any adjustments necessary to the picture (see below) and then, using FFmpeg, turn those 60 images into an MP4 movie.
  • The server would respond to the Oracle JET web UI’s REST request with a link to the completed movie.
  • The Oracle JET web UI would display the movie and allow the user to either upload it to Twitter or discard it.
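The countdown-and-trigger step above can be sketched as follows. The multicast group, port, and message payload here are assumptions for illustration; the post does not specify them:

```python
import socket
import struct
import time

GROUP, PORT = "239.1.1.1", 5000   # hypothetical multicast group and port

def make_trigger_socket():
    """UDP socket configured for LAN-local multicast (TTL 1)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL,
                    struct.pack("b", 1))
    return sock

def start_countdown_and_capture(delay_seconds=10):
    """Wait out the countdown, then fire one datagram that all 60 Pis
    receive at (nearly) the same instant."""
    time.sleep(delay_seconds)
    sock = make_trigger_socket()
    try:
        sock.sendto(b"CAPTURE", (GROUP, PORT))
    finally:
        sock.close()
```

Because a single multicast datagram fans out to every listener, the 60 shutters fire without the server opening 60 separate connections.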

Camera Alignment

In general, this system worked really well. The primary challenge we encountered was getting all 60 cameras aimed at exactly the same point in space. If the cameras were not precisely aligned on the same point, then the “virtual” camera (the resulting movie) would seem to jump all over the place. One camera might be pointed a little higher, the next a little lower, the next a little left, and the next rotated a little. This would create a disturbing “bouncy” effect in the movie.

We took two approaches to solve this. First, each Raspberry Pi camera was mounted with a series of adjustable parts, such that we could manually visit each Raspberry Pi and adjust the yaw, pitch, and roll of each camera. We would place a tripod with a pyramid target mounted to it in the center of the camera helix as a focal point, and using a hand-held HDMI monitor we visited each camera to manually adjust the cameras as best we could to line them all up on the pyramid target. Even so, this was only a rough adjustment and the resulting videos were still very bouncy.

The next approach was software-based: adjusting the translation (pitch and yaw) and rotation (roll) of the camera images. We created a JavaFX app to help configure each camera with settings for how much translation and rotation was necessary to line it up perfectly on the same exact target point. Within the app, we would take a picture from the camera. We would then click the target location, and the software would know how much it had to shift the image along the x and y axes for that point to end up in the dead center of the image. Likewise, we would rotate the image to line it up relative to a “horizon” line superimposed on the image. We had to visit each of the 60 cameras to perform both the physical and virtual configuration.
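The numbers the calibration tool derived from those two clicks amount to the following sketch (3280×2464 is the native full frame of the 8 MP Pi camera module; the sign conventions in the real JavaFX tool are an assumption):

```python
import math

WIDTH, HEIGHT = 3280, 2464   # full-resolution frame of the 8 MP Pi camera

def alignment_offsets(target_x, target_y):
    """Pixel shift needed so the clicked target lands at dead center."""
    return WIDTH / 2 - target_x, HEIGHT / 2 - target_y

def roll_degrees(x1, y1, x2, y2):
    """Rotation (in degrees) needed to level the clicked horizon line."""
    return -math.degrees(math.atan2(y2 - y1, x2 - x1))
```

A click exactly at the center yields a zero offset, and a horizon sloping up to the right yields a negative (clockwise-correcting) roll.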

Then at runtime, the server would query the cameras to get their adjustments. When images were received from the cameras (see step 6 above), we used the Java 2D API to transform those images according to the translation and rotation values previously configured. We also had to crop the images, so we set each Raspberry Pi camera to take the highest-resolution image possible and then cropped it to 1920×1080 for the resulting hi-def movie.
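The real server did this with Java 2D transforms; the underlying coordinate math is sketched below in Python (translate by the configured offset, rotate about the image center, then take a centered 1920×1080 crop — the order of operations is an assumption):

```python
import math

def correct_point(x, y, dx, dy, roll_deg, cx, cy):
    """Map one pixel through the per-camera correction: translate by
    (dx, dy), then rotate by roll_deg about the image center (cx, cy)."""
    x, y = x + dx, y + dy
    r = math.radians(roll_deg)
    rx = cx + (x - cx) * math.cos(r) - (y - cy) * math.sin(r)
    ry = cy + (x - cx) * math.sin(r) + (y - cy) * math.cos(r)
    return rx, ry

def crop_box(cx, cy, out_w=1920, out_h=1080):
    """Centered crop window (left, top, right, bottom) for the 1080p frame."""
    left, top = cx - out_w / 2, cy - out_h / 2
    return left, top, left + out_w, top + out_h
```

Capturing at full resolution and cropping afterwards leaves headroom so the translation never pushes the 1080p window off the edge of the sensor image.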

On each Raspberry Pi, we used a simple Python app. All communication between the Pi units and the server was done over a multicast connection. On the server, when images were received they were held in memory and streamed to FFmpeg, such that only the resulting movie was actually written to disk. All communication between the Oracle JET web UI and the server was done using REST. The server itself was a simple Java 9 application (we just used the built-in Java web server for our REST API).
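The in-memory encoding step can be sketched like this (the real code was Java; the same FFmpeg invocation is shown from Python, and the exact flags the rig used are an assumption):

```python
import subprocess

def ffmpeg_args(out_path, fps=24):
    """FFmpeg invocation that reads JPEG frames from stdin and writes
    only the finished MP4 to disk."""
    return ["ffmpeg", "-y",
            "-f", "image2pipe", "-framerate", str(fps),
            "-i", "-",                          # frames arrive on stdin
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            out_path]

def encode_movie(jpeg_frames, out_path="bullet_time.mp4"):
    """jpeg_frames: the 60 corrected images as byte strings, in ring order."""
    proc = subprocess.Popen(ffmpeg_args(out_path), stdin=subprocess.PIPE)
    for frame in jpeg_frames:
        proc.stdin.write(frame)
    proc.stdin.close()
    return proc.wait()   # 0 on success
```

Piping the frames through stdin means the 60 intermediate images never touch the filesystem, which keeps the turnaround between capture and playback short.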