Tech Trends for 2016: Facial motion capture technology

It was used in the making of this year’s Star Wars film and now Apple has taken an interest – 2016 sees virtual reality go one step further...

What do Apple, Star Wars and Zurich all have in common? Faceshift – a pioneering start-up which uses innovative technology to create animated avatars by capturing a person’s facial expressions in real-time.

The Swiss company has invented software which revolutionises facial animation by analysing the facial movements of an actor, such as head orientation, eye gaze and basic facial expressions, which it then uses to animate virtual characters.

Disney subsidiary Lucasfilm recently used the start-up's technology to bring alien characters to life for the upcoming Star Wars: The Force Awakens film. Yet it is Apple's recent involvement which really makes facial motion capture technology one to watch in 2016.

Last month, it was revealed (via TechCrunch) that the tech giant had acquired Faceshift, opening up a raft of commercial opportunities alongside the obvious uses in gaming and film animation. Faceshift has pointed to a number of ways the technology could serve businesses in future: incorporating facial recognition into enterprise apps for ID and security purposes, using live avatars in call centres or sales departments, powering "magic mirrors" in retail stores, and even creating personalised avatars at events and other venues.

While Apple has yet to disclose how it will make use of the acquisition, it has made a series of notable investments in the space with the recent purchases of augmented reality company Metaio and 3D sensor company PrimeSense. Earlier this year, the company hired HoloLens engineer Nick Thompson from Microsoft to develop the technology and has also published vacancies for those skilled in “virtual and augmented reality”.

We predict that the New Year could mark Apple’s big reveal with facial motion capture technology having a big part to play…

How it works

Facial motion capture is the process of using visual or mechanical means to drive computer-generated characters with input from human faces.

In the case of Faceshift, its software uses an infrared dot grid to detect objects and motion in three dimensions (3D), combined with movement data from the camera such as head position, gaze direction and the location of marker points on the face.

These data are then applied to a 3D model, either in real time or in post-processing. Similar technology has existed before, but it was traditionally applied only after the performance had been filmed and processed, and never with the fidelity Faceshift achieves.
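To give a feel for how tracked face data can drive a character, here is a minimal, hypothetical sketch of blendshape-based facial animation, the general technique behind tools like Faceshift: observed marker positions are expressed as a weighted mix of pre-built expression shapes ("blendshapes"), and the solved weights animate the virtual face. All shapes and marker data below are made-up placeholders, not Faceshift's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

n_markers = 12   # tracked points on the face
n_shapes = 4     # expression basis, e.g. smile, jaw-open, brow-raise, blink

# Resting face and the per-shape offsets from it (illustrative random data)
neutral = rng.normal(size=(n_markers, 3))
deltas = rng.normal(size=(n_shapes, n_markers, 3))

def solve_weights(observed, neutral, deltas):
    """Least-squares fit of blendshape weights to one frame of markers."""
    A = deltas.reshape(len(deltas), -1).T   # (3 * n_markers, n_shapes)
    b = (observed - neutral).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, 1.0)            # keep weights in [0, 1]

# Simulate one captured frame: 30% smile, 70% jaw open
true_w = np.array([0.3, 0.7, 0.0, 0.0])
frame = neutral + np.tensordot(true_w, deltas, axes=1)

# Recover the weights from the markers and re-pose the character
weights = solve_weights(frame, neutral, deltas)
animated = neutral + np.tensordot(weights, deltas, axes=1)
```

Run per frame, this is what "real time" means in practice: each camera frame yields fresh marker positions, a new weight vector is solved, and the character's mesh is re-posed immediately rather than in post-processing.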

In gaming, for instance, the technology lets players adopt avatars whose faces alter in real time based on the player's actual expressions; in film production, it can make animated characters mimic an actor's facial movements more closely.
