Creating scenes with background music and audio

(Updated 04/25/2019)

This section describes the functions for creating scenes and for working with audio files such as background music and voice tracks.

Features to be introduced

  • Loading audio files
  • Lip-sync
  • Generate scenes automatically from the audio files

Readable audio files

Audio files can only be loaded in WAV format.
Some files, even in [wav format], are not supported and may fail to load, displaying a warning message.
In such cases, re-encoding the file to the supported [wav format (16bit, 44100Hz)] may allow it to be loaded.
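The target format can also be checked programmatically before loading. Below is a minimal sketch using Python's standard `wave` module (this is not part of Cubism; the file name `demo.wav` and the helper `is_supported_wav` are illustrative assumptions). It writes one second of 16-bit / 44100 Hz silence and then verifies that the file matches the supported format:

```python
import wave

TARGET_RATE = 44100   # Hz
TARGET_WIDTH = 2      # bytes per sample = 16 bit

def is_supported_wav(path):
    """Return True if the file looks like 16-bit, 44100 Hz PCM."""
    with wave.open(path, "rb") as wf:
        return (wf.getsampwidth() == TARGET_WIDTH
                and wf.getframerate() == TARGET_RATE)

# Write one second of 16-bit / 44100 Hz mono silence as a demo file.
with wave.open("demo.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(TARGET_WIDTH)
    wf.setframerate(TARGET_RATE)
    wf.writeframes(b"\x00\x00" * TARGET_RATE)

print(is_supported_wav("demo.wav"))  # → True
```

A file that fails this check can be re-encoded with an external tool before being dragged onto the [Timeline] palette.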

 

Loading audio files

Drag and drop an audio file (.wav) onto the [Timeline] palette.
Once successfully loaded, a track of the audio file is added to the timeline.

The loaded audio file will appear in the [Project] palette.

 

Editing Audio Files

Right-click on an audio file's track to edit it.

 

Lip-sync

The mouth movements of a model can be automatically generated from an audio file.

 

Lip-sync Setting

To use the lip-sync function, the model must be configured for lip-sync.

Open the model data in the [Model Workspace] and click on [Settings for Eye Blinking and Lip-sync] in the Palette menu of the [Parameter] palette.
Check the [Lip-sync] checkbox under “Mouth Open/Close” in the dialog box, click [OK], and save the model data.

After saving is complete, return to the [Animation Workspace].

Expand Animation Data in the [Project] palette and right-click on the model file name.
Click [Reload Data] to apply the lip-sync settings.

Note that a model with lip-sync will have [Lip-sync] added to the Properties group.

 

If you perform a lip-sync-related operation without setting up lip-sync, a warning may appear as shown in the figure below.
Perform the lip-sync setup described above, reload the model, and perform the operation again.

 

 

Generate lip-sync from audio files

Make sure the model and audio files are placed on the timeline, then click on the [Animation] menu -> [Apply Lip-sync from the Audio File].
When lip-sync is applied, a volume keyframe is added to the [Lip-sync] section of the Model Track.

 

Generate scenes automatically from the audio files

This function automatically creates a scene for each audio file and applies lip-sync.
It can be used with a single audio file, but it is especially useful when there are multiple audio files.

  1. Load all the audio files you want to use into the [Project] palette.
  2. Click on the [Animation] menu -> [Scene] -> [Generate scenes automatically from the audio files].
  3. Check that the model name displayed in [Select Model] is the name of the model to be applied. If different, select the correct model from the pull-down menu.
  4. Click [OK] to generate a scene with the same name as the audio file.

The length of the entire scene and of the model's track will match the length of the audio file.

 

When multiple audio files are used in a single scene

This section describes the procedure when multiple audio files are placed in a single scene and lip-sync is applied.

  1. Arrange the audio tracks on the timeline.
  2. For each audio track, click the [Animation] menu -> [Track] -> [Apply Lip-sync from the Audio File].

If audio files overlap, the lip-sync of the audio file placed on the upper track takes precedence.
If the position of an audio file is adjusted after lip-sync has been generated, the audio and the generated lip-sync keyframes will no longer be synchronized, so lip-sync must be generated again.
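The precedence rule for overlapping audio can be illustrated with a small sketch (the data layout and the function name `lipsync_volume` are assumptions for illustration, not Cubism's actual implementation): tracks are checked from the top of the timeline downward, and the first track covering the current time supplies the lip-sync volume.

```python
def lipsync_volume(tracks, t):
    """Return the lip-sync volume at time t (seconds).

    tracks -- list of (start, end, volume_fn) tuples, ordered from the
              top timeline track downward; the topmost overlapping
              track takes precedence.
    """
    for start, end, volume_fn in tracks:
        if start <= t < end:
            return volume_fn(t)
    return 0.0  # no audio at this time: the mouth stays closed

# Two overlapping tracks: the upper one covers 0-2 s, the lower 1-3 s.
tracks = [
    (0.0, 2.0, lambda t: 0.5),  # upper track
    (1.0, 3.0, lambda t: 0.9),  # lower track
]
print(lipsync_volume(tracks, 1.5))  # → 0.5 (upper track wins in the overlap)
print(lipsync_volume(tracks, 2.5))  # → 0.9 (only the lower track remains)
```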

 

Lip-sync Property Group

(1) Volume:

The volume extracted from the audio file is keyed at each point in time.
By adjusting the volume keyframes, the degree of mouth opening can be varied.

(2) Scale:

Specifies the magnification factor applied when the volume is mapped to the opening and closing of the mouth.
For example, if the overall volume of an audio file is low and the mouth only opens to a maximum of 0.4, set the magnification to 2.5 so that the mouth opens fully at a [Volume] of 0.4. ( 1.0 / 0.4 = 2.5 )
* This value is a constant value (a value that cannot be changed by a key).

(3) Reference value:

The mouth will not open while the volume is below the specified value.
For example, if [Reference Value] is set to 0.2, any [Volume] below 0.2 is ignored and does not affect the opening and closing of the mouth.
This is useful for audio that contains noise.
* This value is a constant value (a value that cannot be changed by a key).

(4) Effect:

Sets the percentage at which the lip-sync effect is applied.
The effect can be set in the range of 0 to 100; at 0 it is disabled.
It is normally set to either 0 or 100, and is used when you want to temporarily disable lip-sync.
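The four properties above amount to a simple mapping from [Volume] to the mouth-open parameter. The sketch below is a hedged illustration of that mapping (the function name, signature, and exact clamping are assumptions, not Cubism's actual implementation):

```python
def mouth_open(volume, scale=1.0, reference=0.0, effect=100):
    """Illustrative mapping from lip-sync volume to mouth opening (0.0-1.0).

    volume    -- volume keyframe value extracted from the audio (0.0-1.0)
    scale     -- magnification applied to the volume (constant, not keyable)
    reference -- volumes below this threshold are ignored (constant)
    effect    -- percentage (0-100) at which the result is applied
    """
    if volume < reference:               # below the reference value:
        return 0.0                       # the mouth stays closed
    opening = min(volume * scale, 1.0)   # scale, clamped to fully open
    return opening * effect / 100.0      # apply the effect percentage

# Quiet audio peaking at 0.4 opens fully with scale 2.5 (1.0 / 0.4 = 2.5).
print(mouth_open(0.4, scale=2.5))        # → 1.0
# Noise below the reference value is ignored.
print(mouth_open(0.15, reference=0.2))   # → 0.0
# Effect 0 disables lip-sync entirely.
print(mouth_open(0.8, effect=0))         # → 0.0
```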

 

Mute audio

To mute the audio, click on the [speaker icon] to the right of the track name.

The [speaker icon] will change to an “X” and the audio will be muted.

© 2010 - 2022 Live2D Inc.