Demo grid: a reference portrait (Ref Image) animated with Chinese, English, and French audio under different control-gate settings, with each region (head / mouth / eyes) set to low or high.
This study delves into the intricacies of synchronizing facial dynamics with multilingual audio inputs, focusing on the creation of visually compelling, time-synchronized animations through diffusion-based techniques. Diverging from traditional parametric models for facial animation, our approach adopts a holistic diffusion-based framework that integrates audio-driven visual synthesis to enhance the synergy between auditory stimuli and visual responses. We process audio features separately and derive the corresponding control gates, which implicitly govern the enhancement of movements of the lips, eyes, and head, irrespective of the portrait's origin. The advanced audio-driven visual synthesis mechanism provides nuanced control over a diverse range of expressions and postures, allowing for a more tailored and effective portrayal of distinct personas across different languages. The significant improvements in the fidelity of animated portraits, the accuracy of lip-syncing, and the richness of motion variations achieved by our method render it a versatile tool for animating any portrait in any language.
Because LinguaLinker implicitly splits the control signal into head, mouth, and eye components, the corresponding parameters can be adjusted to influence the generated portraits, separately controlling the motion range and motion frequency of each region of the character, as sketched below.
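The following is a minimal, hypothetical sketch of how such per-region control gates could modulate audio-derived motion features before they condition a diffusion denoiser. It is not the authors' released API: the `RegionGate` class, the `apply_control_gates` function, and the specific way "range" and "frequency" are applied are illustrative assumptions made for this example only.

```python
# Hypothetical sketch (not LinguaLinker's actual implementation): per-region
# control gates that scale audio-derived motion features before they are used
# as conditioning. Names and parameters are illustrative assumptions.
from dataclasses import dataclass
import numpy as np


@dataclass
class RegionGate:
    """Illustrative gate controlling one facial region."""
    motion_range: float      # scales the amplitude of the region's motion
    motion_frequency: float  # scales how often the region's driving signal updates


def apply_control_gates(audio_features: np.ndarray,
                        gates: dict[str, RegionGate]) -> dict[str, np.ndarray]:
    """Split audio-derived features into per-region conditioning signals.

    audio_features: (T, D) array of per-frame audio embeddings.
    Returns one gated (T, D) conditioning array per region.
    """
    num_frames = audio_features.shape[0]
    conditioned = {}
    for region, gate in gates.items():
        # Hold each feature for several frames when the frequency gate is low,
        # a simple way to slow down that region's motion.
        step = max(1, int(round(1.0 / max(gate.motion_frequency, 1e-3))))
        held = audio_features[(np.arange(num_frames) // step) * step]
        # Scale the amplitude of the motion according to the range gate.
        conditioned[region] = gate.motion_range * held
    return conditioned


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(100, 64))  # 4 s of audio features at 25 fps
    gates = {
        "head":  RegionGate(motion_range=0.5, motion_frequency=0.5),  # subtle, slow
        "mouth": RegionGate(motion_range=1.0, motion_frequency=1.0),  # full lip-sync
        "eyes":  RegionGate(motion_range=1.2, motion_frequency=0.8),  # lively blinks
    }
    for region, sig in apply_control_gates(feats, gates).items():
        print(region, sig.shape, f"mean |x| = {np.abs(sig).mean():.3f}")
```

Under this reading, the low/high settings in the demo grid above correspond to smaller or larger values of the per-region range and frequency gates.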