Lip-sync

[Last updated: 01/18/2019]

 

Identification of lip-sync parameters

The lip-sync effect can be used to apply lip-syncing behavior to a model.
Applying the lip-sync effect involves the following processing:

• Mapping the lip-sync effect values to the parameters to which they are applied, as described in the .model3.json file

• Passing values to the lip-sync effect via voice input, motion, or other means

Of these, the mapping between the lip-sync effect and its target parameters described in the .model3.json file
can be obtained through the CubismModelSettingJson class, which inherits from the ICubismModelSetting class.
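For example, the parameter IDs registered for lip-sync can be enumerated as in the following minimal Native (C++) sketch, which assumes the raw contents of the .model3.json file have already been read into a buffer (the function name CollectLipSyncIds is illustrative):

```cpp
#include "CubismModelSettingJson.hpp"
#include "ICubismModelSetting.hpp"
#include "Type/csmVector.hpp"

using namespace Live2D::Cubism::Framework;

// Collect the parameter IDs registered for the lip-sync effect.
// "buffer"/"size" hold the raw contents of the .model3.json file.
csmVector<CubismIdHandle> CollectLipSyncIds(const csmByte* buffer, csmSizeInt size)
{
    // CubismModelSettingJson parses the .model3.json contents and exposes
    // them through the ICubismModelSetting interface.
    ICubismModelSetting* setting = new CubismModelSettingJson(buffer, size);

    csmVector<CubismIdHandle> lipSyncIds;
    for (csmInt32 i = 0; i < setting->GetLipSyncParameterCount(); ++i)
    {
        lipSyncIds.PushBack(setting->GetLipSyncParameterId(i));
    }

    delete setting;
    return lipSyncIds;
}
```

The returned IDs remain valid after the setting object is deleted, because CubismId instances are owned by the framework's CubismIdManager.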

 

See “Eye Blinking Settings” for how these definitions are written in the .model3.json file.
After eye-blink and lip-sync settings are made in the Editor and the model is exported, the .model3.json file contains a description like the following.
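The exact contents depend on the model. The snippet below is a representative example in which the eye-blink group uses ParamEyeLOpen and ParamEyeROpen and the lip-sync group uses ParamMouthOpenY; entries such as FileReferences are omitted:

```json
{
  "Version": 3,
  "Groups": [
    {
      "Target": "Parameter",
      "Name": "EyeBlink",
      "Ids": ["ParamEyeLOpen", "ParamEyeROpen"]
    },
    {
      "Target": "Parameter",
      "Name": "LipSync",
      "Ids": ["ParamMouthOpenY"]
    }
  ]
}
```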

 

 

Three ways to lip-sync

There are three main categories of lip-sync.

1. Method of acquiring the volume in real time and directly specifying the degree of mouth opening/closing

Real-time lip-sync is achieved by obtaining the audio level in some way, scaling it, and applying it to the target parameters.

Before calling the CubismModel::Update function in Native (C++) or the CubismModel.update function in Web (TypeScript),
the degree of mouth opening can be controlled by setting a value between 0 and 1 directly as the second argument of the Native (C++) CubismModel::SetParameterValue or CubismModel::AddParameterValue function,
or the Web (TypeScript) CubismModel.setParameterValue or CubismModel.addParameterValue function.
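A minimal Native (C++) sketch of this flow, assuming the volume has already been obtained from the audio playback library (the function name UpdateLipSync is illustrative; “ParamMouthOpenY” is the standard mouth open/close parameter):

```cpp
#include "CubismFramework.hpp"
#include "Id/CubismIdManager.hpp"
#include "Model/CubismModel.hpp"

using namespace Live2D::Cubism::Framework;

// Apply the playback volume to the mouth open/close parameter.
void UpdateLipSync(CubismModel* model, csmFloat32 volume)
{
    // Process the value into the range 0..1. Values outside this range do
    // not cause an error, but lip-sync may not operate correctly.
    if (volume < 0.0f) volume = 0.0f;
    if (volume > 1.0f) volume = 1.0f;

    const CubismIdHandle mouthOpenY =
        CubismFramework::GetIdManager()->GetId("ParamMouthOpenY");

    // Set the degree of mouth opening before CubismModel::Update().
    model->SetParameterValue(mouthOpenY, volume);

    model->Update();
}
```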

On iPhone, and on Android 2.3 or later (*), the volume can be acquired in real time during playback.
The volume acquired during playback can be processed into the range 0..1 and set with the functions above to lip-sync to the sound.
(As per the standard parameter settings, mouth opening/closing is created with a parameter range of 0 to 1.)

Setting a value less than 0 or greater than 1 does not cause an error, but lip-sync may not operate correctly in that case.
(*): On Android 2.2 and earlier, the volume cannot be obtained during playback at runtime.
Whether the volume can be obtained in real time on other platforms depends on the audio playback library.

How to obtain it on iPhone: the AVAudioPlayer class
How to obtain it on Android: the Visualizer class

 

2. Method using motion that contains lip-sync information

This method incorporates mouth movement matching the audio into the motion itself through work in the Editor.
See “Creating Scenes with Background Music and Sound” for how to include lip-sync information in a motion.
If the lip-sync and eye-blink parameter IDs are set before playback with the Native (C++) CubismMotion::SetEffectIds function or the Web (TypeScript) CubismMotion.setEffectIds function,
the lip-sync values in the motion are applied to the target parameters during the parameter update of the CubismMotion instance, and the motion is played back accordingly.
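A minimal Native (C++) sketch, assuming “motion” is a loaded CubismMotion and “setting” is the ICubismModelSetting obtained from the .model3.json file (the function name BindEffectIds is illustrative):

```cpp
#include "ICubismModelSetting.hpp"
#include "Motion/CubismMotion.hpp"
#include "Type/csmVector.hpp"

using namespace Live2D::Cubism::Framework;

// Bind the eye-blink and lip-sync parameter IDs from the model settings
// to a motion before it is played back.
void BindEffectIds(CubismMotion* motion, ICubismModelSetting* setting)
{
    csmVector<CubismIdHandle> eyeBlinkIds;
    csmVector<CubismIdHandle> lipSyncIds;

    for (csmInt32 i = 0; i < setting->GetEyeBlinkParameterCount(); ++i)
    {
        eyeBlinkIds.PushBack(setting->GetEyeBlinkParameterId(i));
    }
    for (csmInt32 i = 0; i < setting->GetLipSyncParameterCount(); ++i)
    {
        lipSyncIds.PushBack(setting->GetLipSyncParameterId(i));
    }

    // The lip-sync (and eye-blink) values in the motion will now be applied
    // to these target parameters when the CubismMotion instance updates the
    // model's parameters.
    motion->SetEffectIds(eyeBlinkIds, lipSyncIds);
}
```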

 

 

3. Method using motion that contains only lip-sync information (Native / Web)

This method prepares a motion manager that exclusively handles the kind of motion described in 2, and uses it to control only the mouth.
This is useful when you want to separate body or head motion from lip-sync.
The approach is the same in Native (C++) and Web (TypeScript); both frameworks provide a CubismMotionManager class.
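A minimal Native (C++) sketch of a dedicated motion manager for mouth-only motion (the class name LipSyncController and the priority value are illustrative):

```cpp
#include "Model/CubismModel.hpp"
#include "Motion/CubismMotion.hpp"
#include "Motion/CubismMotionManager.hpp"

using namespace Live2D::Cubism::Framework;

// Plays motions that contain only lip-sync information on their own
// manager, so body or head motion on the main manager is unaffected.
class LipSyncController
{
public:
    // Start a mouth-only motion. "autoDelete" is false here, so the caller
    // keeps ownership of the motion instance.
    void Start(CubismMotion* lipSyncMotion)
    {
        _lipSyncMotionManager.StartMotionPriority(lipSyncMotion, false, 2);
    }

    // Call every frame, before CubismModel::Update().
    void Update(CubismModel* model, csmFloat32 deltaTimeSeconds)
    {
        _lipSyncMotionManager.UpdateMotion(model, deltaTimeSeconds);
    }

private:
    CubismMotionManager _lipSyncMotionManager; // handles only the mouth
};
```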

© 2010 - 2022 Live2D Inc.