Markerless Mocap From Single Feed

  • For the latter part of this week, I experimented with some markerless mocap options from a single camera feed. First, I tested Dollar Mocap, which worked for simple mocap but broke down when occlusion occurred. It is also capable of streaming into Unreal Engine. I also set up an Azure Kinect to potentially use its different cameras for mocap. In testing, I piped the various Azure Kinect feeds into Dollar Mocap with mixed results. Unfortunately, the Kinect's mocap SDK and its depth feed weren't helpful, as Dollar Mocap could not recognize the solid-colored silhouette as a person. Infrared and normal footage produced similar mocap quality, but infrared was better if the actor was wearing a material that appears white under infrared. So, Dollar Mocap works best when the actor is easily differentiated from the background but still retains their features, e.g. in front of a green screen.
  • Additionally, I tested markerless mocap with IPI. IPI was relatively simple to set up and was also capable of streaming to Unreal, although it required exporting the 3D character you wish to stream to and importing it into IPI. So, for Metahumans, I had to export the basic female body from Unreal and import it into IPI. IPI handled occlusion better than Dollar Mocap thanks to its use of the Kinect depth sensor. IPI also supports multiple depth sensors, so a future test could use both the Azure Kinect and the Xbox 360 Kinect together to reduce occlusion problems.
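Dollar Mocap's segmentation is internal to the tool, but the green-screen observation above can be illustrated with a minimal chroma-key sketch in NumPy: a pixel whose green channel clearly dominates red and blue is treated as background, and everything else is kept as the actor. The threshold value here is an arbitrary assumption, not anything from Dollar Mocap itself.

```python
import numpy as np

def chroma_key_mask(frame_rgb, green_thresh=1.3):
    """Return a boolean foreground mask for a green-screen frame.

    A pixel counts as background when its green channel exceeds both
    red and blue by the factor green_thresh; everything else is kept
    as the actor. frame_rgb: uint8 array of shape (H, W, 3).
    """
    f = frame_rgb.astype(np.float32) + 1.0  # +1 avoids divide-by-zero logic on pure black
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    background = (g > green_thresh * r) & (g > green_thresh * b)
    return ~background  # True where the actor is

# Tiny synthetic frame: left half pure green (screen), right half grey (actor).
frame = np.zeros((2, 4, 3), dtype=np.uint8)
frame[:, :2] = (0, 255, 0)      # green screen
frame[:, 2:] = (128, 128, 128)  # actor
mask = chroma_key_mask(frame)
print(mask)  # left columns False (background), right columns True (actor)
```

This is also why the solid-colored depth render failed: keying strips the background cleanly but leaves the actor's internal features (face, limb edges) intact, which the pose estimator still needs.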
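IPI's occlusion handling comes from the depth sensor, and the underlying idea can be sketched simply: with per-pixel distance, the actor can be isolated by keeping only pixels inside a working volume, regardless of color or lighting. This is a hypothetical NumPy illustration with assumed near/far bounds, not IPI's actual algorithm.

```python
import numpy as np

def depth_foreground_mask(depth_mm, near=500, far=2500):
    """Keep pixels whose depth (millimetres) falls inside the actor's
    working volume. Zero-depth pixels (no sensor reading) are dropped
    automatically since 0 < near.

    depth_mm: uint16 array of shape (H, W), as a Kinect-style depth
    camera would produce.
    """
    return (depth_mm >= near) & (depth_mm <= far)

# Synthetic depth frame: back wall at 4 m, actor at 1.5 m in the
# middle, and one invalid (0) reading from sensor dropout.
depth = np.full((3, 3), 4000, dtype=np.uint16)  # back wall
depth[1, 1] = 1500                              # actor
depth[0, 0] = 0                                 # dropout
mask = depth_foreground_mask(depth)
print(mask)  # only the centre pixel is True
```

A second depth sensor at another angle fills in pixels the first one can't see, which is why the two-Kinect setup is worth testing for occlusion.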
