Generating realistic and coherent human motions from text descriptions is essential for applications in computer vision, animation, and digital environments. However, existing text-to-motion generation models often overlook kinematic uncertainties and linguistic ambiguities, leading to unnatural and misaligned motion sequences. To address these issues, we propose a novel framework that integrates fuzzy kinematic modeling with large language model (LLM) guidance to jointly model kinematic uncertainties and resolve linguistic ambiguities. Our approach first extracts rich kinematic attributes from raw motion data and converts them into fuzzy kinematic facts (FKFs), which serve as an uncertainty-aware motion representation across different kinematic hierarchies. In parallel, we refine ambiguous text descriptions by extracting contextual terms through LLM-guided few-shot in-context learning, enriching the text with additional semantic clarity. These FKFs and contextual terms are then used to train a diffusion-based motion generation model, ensuring semantically accurate and physically plausible motion synthesis. To further improve motion quality, we introduce a Graph-Augmented Self-Attention (GASA) module, which injects spatio-temporal relational constraints into the diffusion process, enhancing motion coherence and structural consistency. Evaluations on the HumanML3D and KIT-ML datasets demonstrate that our method outperforms state-of-the-art models, achieving the lowest FID scores (0.052 and 0.091) and reducing the uncertainty footprint by 21.1% and 17.7%, respectively.
FQK-T2M generates semantically accurate and contextually aligned human motions.
Overview of the proposed framework: (a) Stage 1: kinematic uncertainty modeling and linguistic ambiguity resolution; (b) Stage 2: contextual motion diffusion (C-MDM).
Visualization of kinematic facts: LLA, JLA, JJA, JJD, JLD, LLD, JD, LD, LAD, JS, and LS across two consecutive frames t and t-1.
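To make the fact types concrete, the following is a minimal sketch of how two of them could be computed from raw joint positions, assuming JJD denotes joint-joint distance within a frame and JS denotes per-joint speed between frames t-1 and t; these acronym readings, the function names, and the frame rate are assumptions for illustration, not definitions taken from the figure.

```python
import numpy as np

def joint_joint_distance(frame):
    # frame: (J, 3) array of joint positions at frame t.
    # Returns the (J, J) matrix of pairwise Euclidean distances (assumed JJD).
    diff = frame[:, None, :] - frame[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def joint_speed(frame_prev, frame_curr, fps=20.0):
    # Per-joint speed between frames t-1 and t (assumed JS), given an assumed sampling rate.
    return np.linalg.norm(frame_curr - frame_prev, axis=-1) * fps
```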
The Dual-branch Fuzzy Kinematic Fact (D-FKF) module automatically learns these membership functions using a dual-branch adaptive neuro-fuzzy inference system (D-ANFIS).
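As a rough illustration of what learning membership functions can look like, the sketch below parameterizes Gaussian memberships with trainable centers and widths; the Gaussian form, the number of fuzzy sets, and the class name are assumptions rather than the actual D-ANFIS design.

```python
import torch
import torch.nn as nn

class LearnableMembership(nn.Module):
    # Gaussian fuzzy memberships (e.g. "low", "medium", "high") for one scalar
    # kinematic attribute, with centers and widths learned by backpropagation.
    def __init__(self, num_sets=3):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(0.0, 1.0, num_sets))
        self.log_widths = nn.Parameter(torch.zeros(num_sets))

    def forward(self, x):
        # x: (..., 1) normalized attribute value; returns membership degrees in (0, 1].
        widths = self.log_widths.exp()
        return torch.exp(-((x - self.centers) ** 2) / (2.0 * widths ** 2))
```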
We introduce the Graph-Augmented Self-Attention (GASA) module, a modified self-attention block that integrates missing spatio-temporal relational guidance via two graph structures.
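Below is a minimal sketch of one way such a block could be wired, assuming the two graphs enter as additive biases on the attention logits via fixed adjacency matrices and learnable scales; the bias formulation and all names here are illustrative, not the paper's exact GASA design. In this reading, the spatial graph would encode skeleton connectivity and the temporal graph frame adjacency, so attention is nudged toward kinematically related tokens.

```python
import torch
import torch.nn as nn

class GraphAugmentedSelfAttention(nn.Module):
    def __init__(self, dim, num_heads, spatial_adj, temporal_adj):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Learnable scales control how strongly each graph biases the attention logits.
        self.spatial_scale = nn.Parameter(torch.tensor(1.0))
        self.temporal_scale = nn.Parameter(torch.tensor(1.0))
        # (tokens, tokens) adjacency matrices over the sequence, kept as fixed buffers.
        self.register_buffer("spatial_adj", spatial_adj.float())
        self.register_buffer("temporal_adj", temporal_adj.float())

    def forward(self, x):
        # x: (batch, tokens, dim). A float attn_mask is added to the attention scores,
        # so the combined graph bias steers attention toward related joints/frames.
        bias = self.spatial_scale * self.spatial_adj + self.temporal_scale * self.temporal_adj
        out, _ = self.attn(x, x, x, attn_mask=bias)
        return out
```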
Qualitative comparison of motions generated by TEMOS, T2M-GPT, MotionGPT, and our method for text prompts such as "A person is walking" and "A person is sprinting".