Lip-sync

Updated: 10/06/2022

Identification of lip-sync parameters

Lip-sync effects can be used to apply lip-sync behavior to a model.
The following processing is performed to apply the lip-sync effect.

  • Mapping of the lip-sync effect values described in the .model3.json file to the parameters to be applied
  • Passing values to lip-sync effects via audio input, motion, or other means

Of these, the mapping between the lip-sync effect and its target parameters, as described in the .model3.json file,
can be obtained by using the CubismModelSettingJson class, which implements the ICubismModelSetting interface.

// C++
for (csmInt32 i = 0; i < _modelSetting->GetLipSyncParameterCount(); ++i)
{
    CubismIdHandle lipSyncParameter = _modelSetting->GetLipSyncParameterId(i);
}
// TypeScript
for(let i: number = 0; i < _modelSetting.getLipSyncParameterCount(); ++i)
{
    let lipSyncParameter = _modelSetting.getLipSyncParameterId(i);
}
// Java
for (int i = 0; i < _modelSetting.getLipSyncParameterCount(); i++) {
    CubismId lipSyncParameter = _modelSetting.getLipSyncParameterId(i);
}

Refer to “Eye Blinking Settings” for definitions in the .model3.json file.
After eye blinking and lip-sync settings are made in the Editor and then output, the .model3.json file will contain the following description.

{
    ... Omitted ...
       "Groups": [
                {
                        "Target": "Parameter",
                        "Name": "LipSync",
                        "Ids": [
                                "ParamMouthOpenY"
                        ]
                },
                {
                        "Target": "Parameter",
                        "Name": "EyeBlink",
                        "Ids": [
                                "ParamEyeLOpen",
                                "ParamEyeROpen"
                        ]
                }
        ]
}

Three Ways to Lip-sync

There are three main categories of lip-sync.

1. Method of acquiring volume in real time and directly specifying the degree of opening/closing

By obtaining the audio level in some way and scaling it to the target parameters,
real-time lip-sync is achieved.

// C++
csmFloat32 value = GetAudioLevelExampleFunction(); // Get the latest volume level.

for (csmInt32 i = 0; i < _modelSetting->GetLipSyncParameterCount(); ++i)
{
    _model->AddParameterValue(_modelSetting->GetLipSyncParameterId(i), value, 0.8f);
}
// TypeScript
let value: number = getAudioLevelExampleFunction(); // Get the latest volume level.

for(let i: number = 0; i < _modelSetting.getLipSyncParameterCount(); ++i)
{
    _model.addParameterValue(_modelSetting.getLipSyncParameterId(i), value, 0.8);
}
// Java
float value = getAudioLevelExampleFunction(); // Get the latest volume level.

for (int i = 0; i < _modelSetting.getLipSyncParameterCount(); i++) {
    _model.addParameterValue(_modelSetting.getLipSyncParameterId(i), value, 0.8f);
}

Before calling the CubismModel::Update function in Native (C++) or the CubismModel.update function in Web (TypeScript) and Java, the mouth opening can be controlled by passing a value between 0 and 1 as the second argument of the CubismModel::SetParameterValue function in Native (C++) or the CubismModel.setParameterValue function in Web (TypeScript) and Java, or of the CubismModel::AddParameterValue function in Native (C++) or the CubismModel.addParameterValue function in Web (TypeScript) and Java.

On iPhone, and on Android 2.3 or later (*), the volume can be acquired in real time during playback.
Normalize the acquired playback volume to the range 0 to 1, and set that value as described above to lip-sync to the sound.
(As per the standard parameter settings, mouth open/close is created with a parameter ranging from 0 to 1.)

Setting a value less than 0 or greater than 1 does not cause an error, but in that case lip-sync may not operate properly.
(*): For Android 2.2 and earlier, it is not possible to obtain the volume during playback at runtime.
Whether or not volume can be obtained in real time on other platforms depends on the audio playback library.

  • How to get it on iPhone: AVAudioPlayer class
  • How to get it on Android: Visualizer class
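As a sketch of the normalization step above, the following is a minimal, self-contained example (ComputeLipSyncLevel is a hypothetical helper, not a Cubism SDK API) that converts a buffer of float PCM samples in the range -1 to 1 into a 0-to-1 level by computing the RMS and clamping. The result is the kind of value you would pass to AddParameterValue / addParameterValue as shown earlier.

```cpp
#include <cmath>
#include <cstddef>
#include <algorithm>

// Hypothetical helper (not part of the Cubism SDK): compute a 0..1 lip-sync
// level from float PCM samples in [-1, 1] by taking the RMS and clamping.
float ComputeLipSyncLevel(const float* samples, std::size_t count, float gain = 1.0f)
{
    if (samples == nullptr || count == 0)
    {
        return 0.0f;
    }

    float sumSquares = 0.0f;
    for (std::size_t i = 0; i < count; ++i)
    {
        sumSquares += samples[i] * samples[i];
    }
    const float rms = std::sqrt(sumSquares / static_cast<float>(count));

    // Clamp to [0, 1]; as noted above, out-of-range values may keep
    // lip-sync from operating properly.
    return std::min(1.0f, std::max(0.0f, rms * gain));
}
```

The gain factor is an assumption for illustration: raw RMS levels of quiet speech are often well below 1, so scaling before clamping lets the mouth open fully.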

2. Method using motion with information for lip-sync

This method, performed in the Editor, bakes the mouth movement for the audio into the motion itself.
See “Creating Scenes with Background Music and Audio” for instructions on how to include lip-sync information in your motion.
If, before playback, you use the CubismMotion::SetEffectIds function in Native (C++) or the CubismMotion.setEffectIds function in Web (TypeScript) and Java to set the lip-sync and eye blinking parameter IDs,
the lip-sync values in the motion will be applied to the target parameters during the parameter update process of the CubismMotion instance when the motion is played back.

// C++
    // Import the eye blinking parameter described in .model3.json.
    csmVector<CubismIdHandle> eyeBlinkIds;
    csmInt32 eyeBlinkCount = _modelSetting->GetEyeBlinkParameterCount();
    for (csmInt32 i = 0; i < eyeBlinkCount; i++)
    {
        eyeBlinkIds.PushBack(_modelSetting->GetEyeBlinkParameterId(i));
    }

    // Import the lip-sync parameter described in .model3.json.
    csmVector<CubismIdHandle> lipSyncIds;
    csmInt32 lipSyncCount = _modelSetting->GetLipSyncParameterCount();
    for (csmInt32 i = 0; i < lipSyncCount; i++)
    {
        lipSyncIds.PushBack(_modelSetting->GetLipSyncParameterId(i));
    }

    // Import motion file.
    csmByte* buffer;
    csmSizeInt size;
    csmString path = "example.motion3.json";
    buffer = CreateBuffer(path.GetRawString(), &size);
    CubismMotion* tmpMotion = static_cast<CubismMotion*>(LoadMotion(buffer, size, path.GetRawString()));
    DeleteBuffer(buffer, path.GetRawString());

    // Register parameters for eye blinking and lip-sync to the loaded motion file.
    tmpMotion->SetEffectIds(eyeBlinkIds, lipSyncIds);
// TypeScript
    // Import the eye blinking parameter described in .model3.json.
    let eyeBlinkIds: csmVector<CubismIdHandle> = new csmVector<CubismIdHandle>();
    let eyeBlinkCount: number = _modelSetting.getEyeBlinkParameterCount();
    for(let i: number = 0; i < eyeBlinkCount; i++)
    {
        eyeBlinkIds.pushBack(_modelSetting.getEyeBlinkParameterId(i));
    }

    // Import the lip-sync parameter described in .model3.json.
    let lipSyncIds: csmVector<CubismIdHandle> = new csmVector<CubismIdHandle>();
    let lipSyncCount: number = _modelSetting.getLipSyncParameterCount();
    for(let i: number = 0; i < lipSyncCount; i++)
    {
        lipSyncIds.pushBack(_modelSetting.getLipSyncParameterId(i));
    }

    // Import motion file.
    let path: string = fileName;
    path = this._modelHomeDir + path;

    fetch(path).then(
        (response) =>
        {
            return response.arrayBuffer();
        }
    ).then(
        (arrayBuffer) =>
        {
            let buffer: ArrayBuffer = arrayBuffer;
            let size = buffer.byteLength;
            let tmpMotion: CubismMotion = <CubismMotion>this.loadMotion(buffer, size, fileName);
            deleteBuffer(buffer, path);

            // Register the eye blinking and lip-sync parameters to the loaded motion.
            tmpMotion.setEffectIds(eyeBlinkIds, lipSyncIds);
        }
    );
// Java
    // Import the eye blinking parameter described in .model3.json.
    List<CubismId> eyeBlinkIds = new ArrayList<CubismId>();
    int eyeBlinkCount = _modelSetting.getEyeBlinkParameterCount();
    for (int i = 0; i < eyeBlinkCount; i++) {
        eyeBlinkIds.add(_modelSetting.getEyeBlinkParameterId(i));
    }
    // Import the lip-sync parameter described in .model3.json.
    List<CubismId> lipSyncIds = new ArrayList<CubismId>();
    int lipSyncCount = _modelSetting.getLipSyncParameterCount();
    for (int i = 0; i < lipSyncCount; i++) {
        lipSyncIds.add(_modelSetting.getLipSyncParameterId(i));
    }
    // Import motion file.
    byte[] buffer;
    String path = "example.motion3.json";
    buffer = createBuffer(path);
    CubismMotion tmpMotion = loadMotion(buffer);
    // Register parameters for eye blinking and lip-sync to the loaded motion file.
    tmpMotion.setEffectIds(eyeBlinkIds, lipSyncIds);

3. Method using information-only motion for lip-sync

Native:

This method prepares a separate motion manager dedicated to the kind of motion handled in method 2, and uses it to control only the mouth.
This is useful when you want to separate body or head motion from lip-sync.

// C++
    /**
     * @brief Implementation class of the model actually used by the user<br>
     * Model generation, functional component generation, update processing, and rendering call.
     *
     */
    class LAppModel : public Csm::CubismUserModel
    {
    /* Omitted */  

    private:    
        CubismMotionManager*    _mouthMotionManager; // <<< Add
    };
// C++
    void LAppModel::Update()
    {
 /* Omitted */  

        //-----------------------------------------------------------------
        _model->LoadParameters(); // Load previously saved state.
        if (_motionManager->IsFinished())
        {
            // If there is no motion playback, playback is performed at random from among the standby motions.
            StartRandomMotion(MotionGroupIdle, PriorityIdle);
        }
        else
        {
            const csmFloat32 playSpeed = pow(2, (csmFloat32)_motionSpeed / 10.0);
            motionUpdated = _motionManager->UpdateMotion(_model, deltaTimeSeconds * playSpeed); // Update motion.
        }
        _mouthMotionManager->UpdateMotion(_model, deltaTimeSeconds); // <<< Add
        _model->SaveParameters(); // Save the state.
        //-----------------------------------------------------------------

/* Omitted */  

    }
// C++
	_mouthMotionManager->StartMotionPriority(lipSyncMotion, autoDelete, priority);

Web:

This method prepares a separate motion manager dedicated to the kind of motion handled in method 2, and uses it to control only the mouth.
This is useful when you want to separate body or head motion from lip-sync.

// TypeScript
    export class LAppModel extends CubismUserModel {
/* Omitted */

        _mouthMotionManager: CubismMotionManager; // <<< Add
    }
// TypeScript
    public update(): void
    {
/* Omitted */

        //--------------------------------------------------------------------------
        this._model.loadParameters();   // Load previously saved state.
        if(this._motionManager.isFinished())
        {
            // If there is no motion playback, playback is performed at random from among the standby motions.
            this.startRandomMotion(LAppDefine.MotionGroupIdle, LAppDefine.PriorityIdle);
            
        }
        else
        {
            motionUpdated = this._motionManager.updateMotion(this._model, deltaTimeSeconds);    // Update motion.
        }
        this._mouthMotionManager.updateMotion(this._model, deltaTimeSeconds); // <<< Add
        this._model.saveParameters(); // Save the state.
        //--------------------------------------------------------------------------

/* Omitted */
    }
// TypeScript
	this._mouthMotionManager.startMotionPriority(lipSyncMotion, autoDelete, priority);

Java :

This method prepares a separate motion manager dedicated to the kind of motion handled in method 2, and uses it to control only the mouth.
This is useful when you want to separate body or head motion from lip-sync.

// Java
/**
 * Implementation class of the model actually used by the user
 * Model generation, functional component generation, update processing, and rendering call.
 */
public class LAppModel extends CubismUserModel {
    // Omitted

    private CubismMotionManager _mouthMotionManager; // <<< Add
}
// Java
public void update() {
/* Omitted */
    // -----------------------------
    _model.loadParameters(); // Load previously saved state.
    if (_motionManager.isFinished()) {
        // If there is no motion playback, playback is performed at random from among the standby motions.
        startRandomMotion(LAppDefine.MotionGroup.IDLE.getId(), LAppDefine.Priority.IDLE.getPriority());
    } else {
        final float playSpeed = (float) Math.pow(2.0, _motionSpeed / 10.0);
        isMotionUpdated = _motionManager.updateMotion(_model, deltaTimeSeconds * playSpeed); // Update motion.
    }
    _mouthMotionManager.updateMotion(_model, deltaTimeSeconds); // <<< Add
    _model.saveParameters(); // Save the state.
    // -----------------------------
/* Omitted */
}
// Java
_mouthMotionManager.startMotionPriority(lipSyncMotion, autoDelete, priority);