DEEP GENERATIVE MODELS FOR REAL-TIME SYNTHESIS OF FACIAL MICRO-EXPRESSIONS

Authors

  • Тогаева Замира Файзуллаевна, Head of the Department of Human Resources Management and Development, Agency for Specialized Educational Institutions
  • Сафарова Зилола Олимжоновна, Chief Specialist in Records Management and Personnel Administration, ONE-NET LLC

Keywords

Facial micro-expressions, real-time micro-expression synthesis, deep generative models, Facial Action Coding System (FACS), diffusion models, FLAME, 3D Gaussian Splatting, GAN, Video Transformer, Mamba, deepfake, emotional expressiveness, photorealism, human-computer interaction.

Abstract

The article provides a comprehensive review of state-of-the-art deep generative models capable of synthesizing photorealistic facial micro-expressions in real time (≥60 fps) on consumer-grade and mobile hardware. Four major research directions from 2021–2025 are examined: (1) two-stream and hierarchical GANs augmented with perceptual losses from micro-expression detectors; (2) diffusion models with fine-grained Action Unit (AU) and temporal control; (3) hybrid parametric 3D face models (FLAME/DECA) combined with neural rendering techniques (3D Gaussian Splatting, NeuS2); and (4) long-sequence Video Transformers and Mamba-based architectures. Reported quality metrics (FID, LPIPS, MERA-F1), inference speed, anatomical plausibility, and temporal consistency are analysed in detail. Particular attention is devoted to the remaining challenges: cross-identity transfer and personalization, the scarcity of large-scale 4D datasets, and the ethical risks posed by next-generation deepfakes. The technologies are shown to be ready for widespread commercial deployment, with a forecast that the gap between macro- and micro-expression synthesis quality will be fully closed between 2026 and 2028.
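As a pointer for readers unfamiliar with the quality metrics named in the abstract, the sketch below shows how FID and LPIPS scores are typically computed for a batch of synthesized frames using the torchmetrics package. This is a minimal illustration, not the evaluation protocol of any paper in the review: the frame tensors, their shapes, and the names real_frames and fake_frames are placeholders, and real evaluations use far more samples than shown here.

```python
# Minimal sketch: scoring synthesized micro-expression frames with FID and LPIPS.
# Assumes torchmetrics with the image extras installed
# (pip install "torchmetrics[image]"); the frame batches below are illustrative.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity

# Placeholder frame batches, shape (N, 3, H, W), float values in [0, 1].
# In a real evaluation these would come from the generator and the test set.
real_frames = torch.rand(16, 3, 224, 224)
fake_frames = torch.rand(16, 3, 224, 224)

# FID compares Inception feature statistics of real vs. generated images;
# normalize=True tells the metric the inputs are floats in [0, 1].
fid = FrechetInceptionDistance(feature=2048, normalize=True)
fid.update(real_frames, real=True)
fid.update(fake_frames, real=False)
print(f"FID:   {fid.compute():.2f}")    # lower is better

# LPIPS measures perceptual distance between paired real/generated frames.
lpips = LearnedPerceptualImagePatchSimilarity(net_type="alex", normalize=True)
print(f"LPIPS: {lpips(fake_frames, real_frames):.4f}")  # lower is better
```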

Published

2025-12-19

Section

Articles

How to Cite

DEEP GENERATIVE MODELS FOR REAL-TIME SYNTHESIS OF FACIAL MICRO-EXPRESSIONS. (2025). American Journal of Interdisciplinary Research and Development, 47, 66-69. https://ajird.journalspark.org/index.php/ajird/article/view/1652