Live mode transmits captured animation data over a local Wi-Fi network using the OSC protocol.
The idea is to provide users with an 'open' way to integrate facial motion capture data into their projects.
Because project needs vary widely, it is up to users to develop a receiver that suits their needs. However, as Unity is a very common platform, an example project is provided. You will also find some external links to projects that enable Face Cap data for other software packages and platforms.
Open Sound Control (OSC) is a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology.
OSC is easy to implement, as libraries are widely available in many different programming languages, for example for Unity, VVVV, Processing, openFrameworks, and Arduino.
OSC allows the data Face Cap sends to be used to drive almost anything: use the data for eye tracking, drive a synthesizer with your face, and so on.
- Address: /HT + 3 Floats (x,y,z) = Head position.
- Address: /HR + 3 Floats (x,y,z) = Head rotation in degrees.
- Address: /HRQ + 4 Floats (x,y,z,w) = Head rotation as quaternion.
- Address: /ELR + 2 Floats (x,y) = Eye left rotation.
- Address: /ERR + 2 Floats (x,y) = Eye right rotation.
- Address: /W + 1 Int + 1 Float (blendshape index, value) = Blendshape parameters.
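On the wire, each of these messages follows the OSC 1.0 format: a null-terminated address padded to a 4-byte boundary, a type tag string (for example `,if` for a `/W` message), then big-endian 32-bit arguments. A minimal stdlib sketch of how such a packet could be built (the helper names here are illustrative, not part of Face Cap):

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per OSC 1.0."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def encode_osc(address: str, *args) -> bytes:
    """Encode an OSC message with int32 ('i') and float32 ('f') arguments."""
    tags = "," + "".join("i" if isinstance(a, int) else "f" for a in args)
    msg = osc_pad(address.encode()) + osc_pad(tags.encode())
    for a in args:
        msg += struct.pack(">i" if isinstance(a, int) else ">f", a)
    return msg

# A /W blendshape message: index 24 (jawOpen) at weight 0.5.
packet = encode_osc("/W", 24, 0.5)
```

In practice an OSC library in your language of choice handles this encoding for you; the sketch only shows what the receiver has to parse.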
You might have to compensate for your application's coordinate system and units. For example, Unity uses a left-handed, +Y-up coordinate system, while the capture data is right-handed. This can result in mirrored translations and rotations if not converted/compensated.
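As a sketch of one common conversion, going from a right-handed frame to a left-handed frame that flips the Z axis amounts to negating the position's z component and the quaternion's x and y components (the function names are illustrative; which axis to flip depends on your application):

```python
def position_to_left_handed(x: float, y: float, z: float):
    """Flip the Z axis: (x, y, z) in a right-handed frame -> left-handed."""
    return (x, y, -z)

def quaternion_to_left_handed(x: float, y: float, z: float, w: float):
    """Mirror the rotation across the XY plane: negate x and y components."""
    return (-x, -y, z, w)

# Example: convert a /HT position and /HRQ quaternion before applying them.
head_pos = position_to_left_handed(0.1, 1.6, 0.3)
head_rot = quaternion_to_left_handed(0.0, 0.0, 0.0, 1.0)
```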
Naming convention: the _L and _R suffixes indicate symmetrical shapes, while Left and Right indicate a direction in non-symmetrical shapes.
- Blendshape index : Blendshape name
- 00 : browInnerUp
- 01 : browDown_L
- 02 : browDown_R
- 03 : browOuterUp_L
- 04 : browOuterUp_R
- 05 : eyeLookUp_L
- 06 : eyeLookUp_R
- 07 : eyeLookDown_L
- 08 : eyeLookDown_R
- 09 : eyeLookIn_L
- 10 : eyeLookIn_R
- 11 : eyeLookOut_L
- 12 : eyeLookOut_R
- 13 : eyeBlink_L
- 14 : eyeBlink_R
- 15 : eyeSquint_L
- 16 : eyeSquint_R
- 17 : eyeWide_L
- 18 : eyeWide_R
- 19 : cheekPuff
- 20 : cheekSquint_L
- 21 : cheekSquint_R
- 22 : noseSneer_L
- 23 : noseSneer_R
- 24 : jawOpen
- 25 : jawForward
- 26 : jawLeft
- 27 : jawRight
- 28 : mouthFunnel
- 29 : mouthPucker
- 30 : mouthLeft
- 31 : mouthRight
- 32 : mouthRollUpper
- 33 : mouthRollLower
- 34 : mouthShrugUpper
- 35 : mouthShrugLower
- 36 : mouthClose
- 37 : mouthSmile_L
- 38 : mouthSmile_R
- 39 : mouthFrown_L
- 40 : mouthFrown_R
- 41 : mouthDimple_L
- 42 : mouthDimple_R
- 43 : mouthUpperUp_L
- 44 : mouthUpperUp_R
- 45 : mouthLowerDown_L
- 46 : mouthLowerDown_R
- 47 : mouthPress_L
- 48 : mouthPress_R
- 49 : mouthStretch_L
- 50 : mouthStretch_R
- 51 : tongueOut
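The index-to-name table above can be stored as a plain list, so that incoming /W messages can be resolved to blendshape names. A sketch (names copied from the table above; `handle_w_message` is an illustrative helper, not part of Face Cap):

```python
# Face Cap blendshape names, indexed 0-51 to match /W messages.
BLENDSHAPE_NAMES = [
    "browInnerUp", "browDown_L", "browDown_R", "browOuterUp_L", "browOuterUp_R",
    "eyeLookUp_L", "eyeLookUp_R", "eyeLookDown_L", "eyeLookDown_R",
    "eyeLookIn_L", "eyeLookIn_R", "eyeLookOut_L", "eyeLookOut_R",
    "eyeBlink_L", "eyeBlink_R", "eyeSquint_L", "eyeSquint_R",
    "eyeWide_L", "eyeWide_R", "cheekPuff", "cheekSquint_L", "cheekSquint_R",
    "noseSneer_L", "noseSneer_R", "jawOpen", "jawForward", "jawLeft", "jawRight",
    "mouthFunnel", "mouthPucker", "mouthLeft", "mouthRight",
    "mouthRollUpper", "mouthRollLower", "mouthShrugUpper", "mouthShrugLower",
    "mouthClose", "mouthSmile_L", "mouthSmile_R", "mouthFrown_L", "mouthFrown_R",
    "mouthDimple_L", "mouthDimple_R", "mouthUpperUp_L", "mouthUpperUp_R",
    "mouthLowerDown_L", "mouthLowerDown_R", "mouthPress_L", "mouthPress_R",
    "mouthStretch_L", "mouthStretch_R", "tongueOut",
]

def handle_w_message(index: int, value: float) -> str:
    """Resolve a /W (index, value) pair to a named blendshape weight."""
    return f"{BLENDSHAPE_NAMES[index]} = {value:.2f}"
```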
The Unity live mode OSC receiver project is stored and maintained on GitHub: Download it here.
- Install Unity v2019.3.0f3 or newer.
- Sign up for a Unity account.
- Make sure Unity is allowed through the firewall.
- Disable your VPN if you have one.
- From the project window, load the scene: FaceCapOSCReceiverGenericExample.
- Play the scene and select the scripts node in the hierarchy window.
- Note the IP address and port, and enter these when connecting live mode in the app.
- You should see the app and Unity move in sync.
- In 99% of cases Unity is being blocked by the operating system firewall. Dig into the firewall settings and make sure Unity is allowed through.
- If this does not resolve your issue please use the contact form and we'll debug the network together.
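When debugging the connection, it can help to verify packets actually reach the machine independently of Unity. A minimal stdlib sketch that prints the OSC address of each incoming datagram (the port number here is an assumption; use whatever port your receiver reports):

```python
import socket

def osc_address(datagram: bytes) -> str:
    """Extract the leading null-terminated OSC address from a datagram."""
    return datagram.split(b"\x00", 1)[0].decode(errors="replace")

def listen(port: int = 8080) -> None:
    """Print the sender and OSC address of every UDP packet on this port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))  # listen on all interfaces
    print(f"Listening for OSC on UDP port {port}...")
    while True:
        data, sender = sock.recvfrom(4096)
        print(f"{sender[0]} -> {osc_address(data)}")

# To run: listen()  (blocks until interrupted)
```

If this script sees /HT, /HR, /W, etc. but Unity does not, the problem is inside Unity or its firewall permissions rather than the network.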
The Unity project contains two example scenes: one for avatars that contain exactly the same blendshapes and blendshape names as the Face Cap generic avatar, and one for custom avatars that do not have all the blendshapes or use different blendshape names. Examine the FaceCapOSCReceiverCustomExample to get an idea of how it is set up. Steps to configure a custom avatar:
- Create a new FaceCapRemappingObject by right-clicking in the project window.
- Click on the newly created FaceCapRemappingObject to configure it. Assign the custom avatar mesh that contains the blendshapes and match the custom avatar blendshapes to those in the Face Cap data. Save the project to save your configuration.
- Next, replicate the setup from the FaceCapOSCReceiverCustomExample scene. Duplicate the scene in the project window and open it. Remove the avatar and replace it with one of your own. In the scripts node, assign the required properties, including the newly created FaceCapRemappingObject.
You should now be all set to drive the custom avatar using Face Cap.
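Conceptually, the remapping the FaceCapRemappingObject performs is a name-to-name table from Face Cap blendshapes to your avatar's own shapes. A hypothetical sketch (the right-hand names are invented for illustration, not from Face Cap):

```python
# Map Face Cap blendshape names to a custom avatar's own shape names.
# The target names below are hypothetical examples.
REMAP = {
    "jawOpen": "MouthOpen",
    "eyeBlink_L": "Blink_Left",
    "eyeBlink_R": "Blink_Right",
    # Shapes the custom avatar lacks are simply left out.
}

def remap_weight(face_cap_name: str, value: float):
    """Return (custom_shape, value), or None if the avatar lacks this shape."""
    target = REMAP.get(face_cap_name)
    return (target, value) if target is not None else None
```

Shapes with no entry are dropped, which is why avatars missing some blendshapes still work with the custom receiver.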