- How does 360 degree video even work? How on Earth can a camera see in all directions at once? Well, for starters, we use multiple cameras and stitch their images together into a sphere. There are single-camera solutions like the VSN Mobil or the 360fly, but they have significant blind spots. Even one-piece units like the Ricoh Theta or the much more expensive Nokia OZO actually use multiple cameras to capture every direction at once and cover the entire sphere. Now, this spherical video is made manageable in editing, in digital transmission, and in online streaming by flattening it out into an equirectangular projection.
Just like a globe is flattened out into a flat map of the Earth. Now, an equirectangular frame is always two by one, two units wide by one unit high, because it covers 360 degrees around but only 180 degrees from top to bottom. This ratio will become more important later on when we get into editing. More on that to come. But once our multiple camera shots have been stitched together into a sphere and flattened out into an equirectangular rectangle, we can deal with them just like normal video. Then we can add a bit of metadata to indicate that the video is spherical, so your player knows which part of the sphere to show you as you look around.
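To make that two-by-one idea concrete, here is a minimal sketch of how a viewing direction lands on an equirectangular frame. The `equirect_pixel` helper is purely illustrative, and the 1920 by 960 frame size is just an example (it happens to match the Theta's output discussed later):

```python
def equirect_pixel(yaw_deg, pitch_deg, width=1920, height=960):
    """Map a viewing direction to a pixel in an equirectangular frame.

    yaw: -180..180 degrees (left/right), pitch: -90..90 degrees (down/up).
    The frame is twice as wide as it is tall because it spans 360 degrees
    horizontally but only 180 degrees vertically -- hence two by one.
    """
    x = (yaw_deg + 180.0) / 360.0 * width   # longitude maps linearly to x
    y = (90.0 - pitch_deg) / 180.0 * height  # latitude maps linearly to y
    return int(x) % width, min(int(y), height - 1)

# Looking straight ahead (yaw 0, pitch 0) lands dead center in the frame.
print(equirect_pixel(0, 0))  # (960, 480)
```

This linear longitude-to-x, latitude-to-y mapping is all "equirectangular" means, which is why the flattened video can be handled like any ordinary rectangular frame.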
The motion sensors in your tablet, your smartphone, or your head-mounted display will tell the video which way you're looking and show you just the part of the sphere you should be seeing wherever you look. When you put the phone in a head-mounted display like a Google Cardboard, the effect can be extremely immersive. Now, although this is a type of virtual reality, it's still not 3D, because both eyes are seeing the same spherical image. 3D only happens when your two eyes see two slightly different images, shot from two points the same distance apart as your eyes.
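For MP4 files, the "bit of metadata" that marks a video as spherical is typically the Spherical Video XML that tools such as Google's Spatial Media Metadata Injector write into the file. A minimal example of what that metadata declares (the stitching-software name here is a placeholder) looks roughly like this:

```xml
<rdf:SphericalVideo
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:GSpherical="http://ns.google.com/videos/1.0/spherical/">
  <GSpherical:Spherical>true</GSpherical:Spherical>
  <GSpherical:Stitched>true</GSpherical:Stitched>
  <GSpherical:StitchingSoftware>Example Stitcher</GSpherical:StitchingSoftware>
  <GSpherical:ProjectionType>equirectangular</GSpherical:ProjectionType>
</rdf:SphericalVideo>
```

Players that understand this metadata (YouTube, for instance) switch into the pannable spherical view; players that don't will simply show the flat, stretched equirectangular frame.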
This is a rapidly changing field and things are shifting all the time. But let me give you an overview on some of the different camera options that are out there. At the very low end is the Ricoh Theta. This was one of the first viable 360 cameras out there. It's inexpensive. It's small and very easy to use and it shoots 360 stills and video. But the resolution maxes out at 1080p HD. Actually 1920 by 960 so it's a little shorter than HD. Remember it's equirectangular.
Now that resolution is plenty high enough for sharp images when you're looking at the whole frame all at once, like on a big screen TV or a computer monitor. But with spherical video, remember, that 1920 by 960 video has to be stretched out over an entire virtual sphere, and when you look at it in goggles, all you're seeing is a small slice of it. Maybe 200 by 300 pixels, way lower resolution than what you're used to seeing in normal video. Now the much improved Theta S has a lot going for it. It's easier to use.
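That "small slice" claim is easy to check with back-of-envelope arithmetic. This sketch assumes a roughly 95 degree field of view, which is in the ballpark of a Cardboard-style viewer (the exact figure varies by headset, and the per-eye count drops further once the image is split and warped for the lenses):

```python
def viewport_pixels(frame_width, frame_height, fov_h_deg, fov_v_deg):
    """Roughly how many source pixels cover a headset's field of view.

    An equirectangular frame spreads 360 degrees across its width and
    180 degrees across its height, so (near the equator) pixels per
    degree is constant and the visible slice is a simple proportion.
    """
    px_per_deg_h = frame_width / 360.0
    px_per_deg_v = frame_height / 180.0
    return round(fov_h_deg * px_per_deg_h), round(fov_v_deg * px_per_deg_v)

# The Theta's 1920x960 frame viewed through an assumed 95x95 degree viewport:
print(viewport_pixels(1920, 960, 95, 95))  # (507, 507)
```

So even before lens warping and per-eye splitting, a 1080p-class spherical source delivers only a few hundred pixels across your view, which is why 4K capture matters so much more for 360 video than for flat video.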
It can shoot longer. It has slightly better image quality, but the resolution still hasn't been improved to a useful level for 360 video. Now, people have been making 360 videos for a few years with various configurations of GoPro cameras. These have the advantage of shooting very high resolution video, but multiple GoPro cameras can be fussy to work with: you've got all the different batteries, all the different lenses, all the different cards. Also, stitching four or six or more GoPro camera images together requires some very complex stitching software and can leave you with many zigzagging stitch lines all through your sphere that are hard to avoid.
Samsung has just released a very neat little 360 camera that shoots in 4K video, so it's much better looking than the Theta, but it requires either Windows computer software or a Samsung phone to handle the files, making it difficult to use for larger projects. At the very high end, there are cameras like the Nokia OZO and the GoPro Odyssey from Google's Jump program. These shoot 360 and 3D at the same time and stitch the files together for you automatically, but they have their own issues and are pretty far out of reach for most smaller production companies.
The camera that I like to use is the Kodak PIXPRO SP360 4K. It's a matched pair of cameras that comes in a kit including free stitching software plus a variety of mounting equipment. These shoot in 4K, so you have enough pixels to give you acceptable resolution over your whole sphere, and they can be mounted and configured in a variety of ways. In this course, we'll go into depth on how to use this camera, but most of our techniques and workflows will apply no matter what camera system or editing suite you prefer.
This course explores some of the current camera options available to the 360° filmmaker, their relative strengths and weaknesses, possible applications of the 360° format, in-depth workflows to get you from shooting to finished project, and the best methods for sharing and viewing your 360° videos. Mark W. Gray helps you assemble and edit a final, polished 360° video using Final Cut Pro, but once you have mastered the basics, you can apply the lessons to your own preferred tools and workflows.
- How 360° video works
- Setting up your camera
- Dealing with sound
- Deciding what to shoot
- Importing 360° footage
- Stitching 360° video
- Assembling 360° video on the timeline
- Exporting and finishing 360° video
- Sharing 360° video