LLM-Guided Fuzzy Kinematic Modeling for Resolving Kinematic Uncertainties and Linguistic Ambiguities in Text-to-Motion Generation

1Beijing Institute of Technology, China, 2Mehran University of Engineering and Technology, Pakistan, 3The University of Dodoma, Tanzania.

Teaser prompt: “A person walking in a circle.”

Abstract

Generating realistic and coherent human motions from text descriptions is essential for applications in computer vision, animation, and digital environments. However, existing text-to-motion generation models often overlook kinematic uncertainties and linguistic ambiguities, leading to unnatural and misaligned motion sequences. To address these issues, we propose a novel framework that integrates fuzzy kinematic modeling with large language model (LLM) guidance to jointly model kinematic uncertainties and resolve linguistic ambiguities. Our approach first extracts rich kinematic attributes from raw motion data and converts them into fuzzy kinematic facts (FKFs), which serve as an uncertainty-aware motion representation across different kinematic hierarchies. Simultaneously, we refine ambiguous text descriptions by extracting contextual terms through LLM-guided few-shot in-context learning, enriching the text with additional semantic clarity. These FKFs and contextual terms are then used to train a diffusion-based motion generation model, ensuring semantically accurate and physically plausible motion synthesis. To further improve motion quality, we introduce a Graph-Augmented Self-Attention (GASA) module, which injects spatio-temporal relational constraints into the diffusion process, enhancing motion coherence and structural consistency. Evaluations on the HumanML3D and KIT-ML datasets demonstrate that our method outperforms state-of-the-art models, achieving the lowest FID scores (0.052 and 0.091) and reducing the uncertainty footprint by 21.1% and 17.7%, respectively.

FQK-T2M generates semantically accurate and contextually aligned human motions. Example prompts (video gallery): “Performing karate back kick.” · “Punching the opponent repetitively.” · “Performing jump rope.” · “Standing on one leg and hopping.” · “Balance on a rope and left foot is in front.” · “A person waving with left hand.” · “Professional player kicking a ball.” · “Novice kicking a ball.”



Proposed Method



Overview of the proposed framework: (a) Stage 1: kinematic uncertainty modeling and linguistic ambiguity resolution; (b) Stage 2: contextual motion diffusion (C-MDM).



Kinematic Facts (KFs)



Visualization of kinematic facts: LLA, JLA, JJA, JJD, JLD, LLD, JD, LD, LAD, JS, and LS across two consecutive frames t and t-1.
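The page does not spell out the formulas behind these facts, but two of them are easy to illustrate. Below is a minimal Python sketch (not the authors' implementation) of plausible readings of JJD as joint-joint distance and JS as joint speed across consecutive frames; the 22-joint skeleton and 20 fps rate follow HumanML3D's convention, and the abbreviation readings are our assumptions.

```python
# Sketch of two candidate kinematic facts, assuming JJD = joint-joint
# distance and JS = joint speed between consecutive frames t-1 and t.
import numpy as np

def joint_joint_distance(frame: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distances between joints within one frame.

    frame: (J, 3) array of 3D joint positions.
    Returns: (J, J) distance matrix.
    """
    diff = frame[:, None, :] - frame[None, :, :]   # (J, J, 3)
    return np.linalg.norm(diff, axis=-1)

def joint_speed(frame_prev: np.ndarray, frame_curr: np.ndarray, dt: float) -> np.ndarray:
    """Per-joint speed across two consecutive frames.

    frame_prev, frame_curr: (J, 3) joint positions at t-1 and t.
    dt: frame interval in seconds.
    Returns: (J,) per-joint speeds.
    """
    return np.linalg.norm(frame_curr - frame_prev, axis=-1) / dt

# Example on random poses (22 joints, as in the HumanML3D skeleton):
rng = np.random.default_rng(0)
p_prev, p_curr = rng.normal(size=(22, 3)), rng.normal(size=(22, 3))
jjd = joint_joint_distance(p_curr)          # JJD at frame t
js = joint_speed(p_prev, p_curr, dt=1/20)   # JS between t-1 and t
```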





D-FKF Module

The Dual-branch Fuzzy Kinematic Fact (D-FKF) module automatically learns the fuzzy membership functions using a dual-branch adaptive neuro-fuzzy inference system (D-ANFIS).
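As a rough illustration of the learnable-membership idea (the dual-branch D-ANFIS itself is more involved), the single-branch sketch below fuzzifies crisp kinematic facts with Gaussian membership functions whose centers and widths are trained end-to-end; the three linguistic levels and the Gaussian shape are our assumptions, not details taken from the paper.

```python
# Minimal ANFIS-style fuzzification layer with learnable Gaussian
# membership functions (a sketch, not the authors' D-ANFIS).
import torch
import torch.nn as nn

class GaussianFuzzifier(nn.Module):
    def __init__(self, num_facts: int, num_levels: int = 3):
        super().__init__()
        # One learnable (center, width) pair per fact and linguistic level,
        # e.g. levels "low" / "medium" / "high" (assumed here).
        self.centers = nn.Parameter(torch.linspace(-1, 1, num_levels).repeat(num_facts, 1))
        self.log_widths = nn.Parameter(torch.zeros(num_facts, num_levels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_facts) crisp kinematic facts.
        # Returns (batch, num_facts, num_levels) membership degrees in (0, 1].
        widths = self.log_widths.exp()
        return torch.exp(-0.5 * ((x[..., None] - self.centers) / widths) ** 2)

fuzzifier = GaussianFuzzifier(num_facts=11)        # e.g. the 11 facts above
memberships = fuzzifier(torch.randn(4, 11))        # shape (4, 11, 3)
```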

GASA Module

We introduce the Graph-Augmented Self-Attention (GASA) module, a modified self-attention block that injects the missing spatio-temporal relational guidance via two graph structures.
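One plausible way to realize this, sketched below under our own assumptions, is to add an adjacency-derived bias to the self-attention logits. For brevity the two graphs are folded into a single combined adjacency here, whereas the actual GASA module keeps the spatial and temporal graphs separate.

```python
# Sketch of graph-augmented self-attention: graph structure enters as an
# additive, per-head-scaled bias on the attention logits (our assumption).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAugmentedSelfAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # Learnable per-head scale for how strongly the graph biases attention.
        self.graph_scale = nn.Parameter(torch.ones(num_heads))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim) motion tokens; adj: (N, N) combined adjacency.
        B, N, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))
        logits = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5  # (B, H, N, N)
        bias = self.graph_scale.view(1, -1, 1, 1) * adj            # graph guidance
        attn = F.softmax(logits + bias, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        return self.proj(out)
```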








Evaluation and Comparison with SOTA


Qualitative Evaluation on HumanML3D

(Frames with BLUE meshes indicate valid motions; frames with RED meshes indicate anomalies.)

Qualitative comparison (video grid): TEMOS, T2M-GPT, MotionGPT, and Ours, each generating motions for four prompts: “A person is walking straight and turns left.”, “A person is walking forward slowly while holding both arms slightly up.”, “A person is sprinting forward and bends down.”, and “A person is walking slightly slowly in a circle.”




Learned Fuzzy Membership Functions and Uncertainty Modeling





Quantitative Evaluation on HumanML3D





Quantitative Evaluation on KIT-ML





Ablation Studies