HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning

Chuhao Zhou, Jianfei Yang
MARS Lab, Nanyang Technological University
{chuhao002@e., jianfei.yang@}ntu.edu.sg


Abstract

Embodied agents operating in smart homes must understand human behavior through diverse sensory inputs and communicate via natural language. While Vision-Language Models (VLMs) have enabled impressive language-grounded perception, their reliance on visual data limits robustness in real-world scenarios with occlusions, poor lighting, or privacy constraints. In this paper, we introduce HoloLLM, a Multimodal Large Language Model (MLLM) that integrates uncommon but powerful sensing modalities, such as LiDAR, infrared, mmWave radar, and WiFi, to enable seamless human perception and reasoning across heterogeneous environments. We address two key challenges: (1) the scarcity of aligned modality-text data for rare sensors, and (2) the heterogeneity of their physical signal representations. To overcome these, we design a Universal Modality-Injection Projector (UMIP) that enhances pre-aligned modality embeddings with fine-grained, text-aligned features from tailored encoders via coarse-to-fine cross-attention without introducing significant alignment overhead. We further introduce a human-VLM collaborative data curation pipeline to generate paired textual annotations for sensing datasets. Extensive experiments on two newly constructed benchmarks show that HoloLLM significantly outperforms existing MLLMs, improving language-grounded human sensing accuracy by up to 30%. This work establishes a new foundation for real-world, language-informed multisensory embodied intelligence.

Motivation

Embodied agents in smart homes, e.g., household robots and intelligent appliances, have garnered increasing attention in recent years. To interact effectively with humans and execute real-world tasks, agents must understand human behavior and be capable of engaging in natural language communication. This necessitates the development of models that seamlessly integrate rich human perception with advanced language understanding and generation capabilities.
Vision-Language Models (VLMs) have emerged as promising tools for enabling language-conditioned perception and reasoning. However, the visual modality alone struggles in many real-world conditions, e.g., low-light environments, occlusions, and privacy-sensitive scenarios.
In contrast, humans naturally rely on multiple sensory modalities, such as vision, audition, and olfaction, to perceive and adapt to diverse environments. Similarly, each sensing modality brings distinct advantages: LiDAR enables high-precision 3D reconstruction, infrared cameras support perception in darkness, and mmWave radar and WiFi are resilient to visual occlusions and lighting variations. Hence, we propose HoloLLM, which integrates diverse sensor inputs to provide excellent adaptability and reliability in complex, real-world environments.

Method

We propose a novel multisensory foundation model, HoloLLM, for seamless human perception and reasoning across heterogeneous environments.

HoloLLM adopts the Universal Modality-Injection Projector (UMIP) to efficiently align sensing modalities with text using only minimal fine-tuning. In addition, modality-specific discriminative features are thoroughly extracted by tailored encoders and adaptively injected into the aligned multimodal tokens through UMIP.


Architecture of HoloLLM. Given multimodal inputs \( X^m \), HoloLLM utilizes modality-specific tokenizers and a universal encoder to extract pre-aligned initial embeddings \( Y^m_{CLIP} \). Meanwhile, pre-trained tailored encoders are applied to extract modality features \( Y^m_{T} \). The UMIP then transforms \( Y^m_{CLIP} \) and \( Y^m_{T} \) into coarse queries \( Q^m \) and fine-grained keys and values \( K^m \) / \( V^m \). By iteratively enhancing the queries via coarse-to-fine cross-attention and projecting them into the LLM text space, UMIP produces aligned multimodal tokens \( Z^m \) that are fully enriched with modality features.
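
For concreteness, below is a minimal PyTorch sketch of the coarse-to-fine injection described above. It is an illustrative reading of the figure, not the released implementation: the hidden sizes, the number of injection blocks, and the module names are assumptions.

    import torch
    import torch.nn as nn

    class CoarseToFineInjection(nn.Module):
        """One injection block (sketch): coarse queries from the universal
        encoder attend to fine-grained keys/values from a tailored encoder."""
        def __init__(self, dim=1024, num_heads=8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm_q = nn.LayerNorm(dim)
            self.norm_kv = nn.LayerNorm(dim)
            self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

        def forward(self, q, kv):
            # Q^m: pre-aligned embeddings Y^m_CLIP; K^m / V^m: tailored features Y^m_T
            attn_out, _ = self.attn(self.norm_q(q), self.norm_kv(kv), self.norm_kv(kv))
            q = q + attn_out          # inject fine-grained modality cues into the coarse tokens
            q = q + self.ffn(q)       # transformer-style feed-forward refinement
            return q

    class UMIP(nn.Module):
        """Universal Modality-Injection Projector (sketch): iteratively enhances the
        coarse tokens, then projects them into the LLM text-embedding space."""
        def __init__(self, clip_dim=1024, tailored_dim=768, llm_dim=4096, depth=3):
            super().__init__()
            self.kv_proj = nn.Linear(tailored_dim, clip_dim)  # map Y^m_T to the query width
            self.blocks = nn.ModuleList([CoarseToFineInjection(clip_dim) for _ in range(depth)])
            self.to_llm = nn.Linear(clip_dim, llm_dim)        # projection to the LLM text space

        def forward(self, y_clip, y_tailored):
            q, kv = y_clip, self.kv_proj(y_tailored)
            for blk in self.blocks:
                q = blk(q, kv)
            return self.to_llm(q)                             # aligned multimodal tokens Z^m

    # Toy shapes: 32 coarse tokens and 196 fine-grained tokens for one modality.
    z = UMIP()(torch.randn(2, 32, 1024), torch.randn(2, 196, 768))
    print(z.shape)  # torch.Size([2, 32, 4096])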

We compare UMIP with state-of-the-art multimodal projectors. Specifically, most existing works adopt modality-specific encoders and projectors, which commonly require substantial ‘modality-text’ data pairs for pre-training. OneLLM [1] attempts to handle various modalities via a unified framework consisting of a universal encoder and projector. However, without a dedicated design for capturing heterogeneous spatial features, the universal encoder struggles to produce sufficiently discriminative multimodal tokens. Unlike existing works, UMIP only utilizes the universal encoder to generate initial embeddings for each modality. These embeddings are then progressively enhanced by fine-grained, text-aligned features from tailored encoders.


Comparison between UMIP and other projectors: (a) Modality-Specific Projector, (b) Universal Projector, and (c) Universal Modality-Injection Projector (Ours).
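
To make the contrast concrete, here is a schematic PyTorch sketch of the two baseline designs, (a) and (b); the class names, dimensions, and routing are hypothetical and only meant to mirror the figure.

    import torch
    import torch.nn as nn

    # (a) Modality-specific: one projector (and encoder) per modality, so every new
    #     sensor needs its own large 'modality-text' corpus to pre-train the alignment.
    class ModalitySpecificProjector(nn.Module):
        def __init__(self, dims, llm_dim=4096):
            super().__init__()
            self.proj = nn.ModuleDict({m: nn.Linear(d, llm_dim) for m, d in dims.items()})

        def forward(self, modality, feats):
            return self.proj[modality](feats)

    # (b) Universal: a single shared projector over a universal encoder; cheap to extend,
    #     but the shared pathway can wash out modality-specific spatial structure.
    class UniversalProjector(nn.Module):
        def __init__(self, dim=1024, llm_dim=4096):
            super().__init__()
            self.proj = nn.Linear(dim, llm_dim)

        def forward(self, modality, feats):   # identical mapping regardless of modality
            return self.proj(feats)

    feats = torch.randn(2, 32, 1024)
    print(ModalitySpecificProjector({"lidar": 1024})("lidar", feats).shape)  # [2, 32, 4096]
    print(UniversalProjector()("wifi", feats).shape)                         # [2, 32, 4096]

UMIP in (c) keeps the single universal pathway of (b) but re-injects modality-specific detail through the coarse-to-fine cross-attention sketched earlier.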

Evaluations

Qualitative Results

Visualization of Multimodal and Text Tokens



t-SNE visualization results. (a) Aligned tokens from 5 action categories (denoted by different colors) generated by the Baseline, OneLLM, and HoloLLM for the ‘Video’ and ‘mmWave’ modalities. (b) Multimodal tokens from 2 action categories (denoted by different colors) for the ‘mmWave’ (circles), ‘WiFi’ (pentagrams), and ‘Text’ (triangles) modalities, generated by HoloLLM without and with UMIP.
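
For readers who want to reproduce this kind of figure, a minimal sketch using scikit-learn's t-SNE over cached token embeddings is shown below; the .npy file names and array shapes are placeholders, not artifacts shipped with HoloLLM.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.manifold import TSNE

    # Assumed cache: per-sample multimodal tokens of shape (N, T, D), mean-pooled to one
    # vector per sample, plus integer action labels. File names are placeholders.
    tokens = np.load("mmwave_tokens.npy").mean(axis=1)   # (N, D)
    labels = np.load("mmwave_labels.npy")                # (N,)

    xy = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(tokens)
    for c in np.unique(labels):
        mask = labels == c
        plt.scatter(xy[mask, 0], xy[mask, 1], s=8, label=f"action {c}")
    plt.legend()
    plt.title("t-SNE of aligned mmWave tokens")
    plt.savefig("tsne_mmwave.png", dpi=200)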


Qualitative Results across Common and Sensing Modalities



We present qualitative results of HoloLLM under the ‘CrossEnv’ setting. These results show that HoloLLM can perform Action QA and Action Caption tasks across common and sensing modalities in diverse environments.


Quantitative Results

We train and evaluate our proposed HoloLLM on two multimodal human-sensing datasets, MM-Fi [2] and XRF55 [3], paired with generated textual descriptions. Specifically, MM-Fi consists of 5 modalities: Video (V), Depth images (D), LiDAR (L), mmWave Radar (M), and WiFi-CSI (W). XRF55 likewise contains 5 modalities: Video (V), Depth images (D), Infrared images (I), RFID signals (R), and WiFi-CSI (W).

To comprehensively evaluate various MLLMs across diverse scenarios, we design three experimental settings: (1) Random Split (Random), (2) Cross-Subject Split (CrossSub), and (3) Cross-Environment Split (CrossEnv). Specifically, ‘Random’ performs a random 3:1 train/test split over all samples, while ‘CrossSub’ / ‘CrossEnv’ draw training and testing samples from non-overlapping human subjects / environments.
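
Below is a sketch of how the three splits could be materialized from sample metadata; the subject_id and env_id field names are assumptions about how the benchmarks index their samples, not the official split code.

    import random

    def split_samples(samples, setting, seed=0):
        """samples: list of dicts with 'subject_id' and 'env_id' keys (assumed schema)."""
        rng = random.Random(seed)
        if setting == "Random":                  # random 3:1 train/test split
            shuffled = samples[:]
            rng.shuffle(shuffled)
            cut = int(0.75 * len(shuffled))
            return shuffled[:cut], shuffled[cut:]
        key = "subject_id" if setting == "CrossSub" else "env_id"
        groups = sorted({s[key] for s in samples})
        held_out = set(rng.sample(groups, max(1, len(groups) // 4)))  # unseen subjects / environments
        train = [s for s in samples if s[key] not in held_out]
        test = [s for s in samples if s[key] in held_out]
        return train, test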

For quantitative evaluation, we report accuracy for Action Recognition and Action QA, and METEOR for Action Caption.
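
As a reference point, a minimal sketch of both metrics is given below, using NLTK's METEOR implementation; the exact-match criterion for QA accuracy and the whitespace tokenization are simplifying assumptions.

    import nltk
    from nltk.translate.meteor_score import meteor_score

    nltk.download("wordnet", quiet=True)   # METEOR relies on WordNet for synonym matching
    nltk.download("omw-1.4", quiet=True)

    def qa_accuracy(preds, golds):
        """Exact-match accuracy (%) between predicted and ground-truth answers."""
        hits = sum(p.strip().lower() == g.strip().lower() for p, g in zip(preds, golds))
        return 100.0 * hits / len(golds)

    def caption_meteor(preds, golds):
        """Corpus-average METEOR (%) between generated and reference captions."""
        scores = [meteor_score([g.split()], p.split()) for p, g in zip(preds, golds)]
        return 100.0 * sum(scores) / len(scores)

    print(qa_accuracy(["waving hands"], ["waving hands"]))   # 100.0
    print(caption_meteor(["a person waves the hand"], ["a person is waving a hand"]))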

Action QA and Caption Results on MM-Fi

Evaluation of Human Action QA and Caption tasks on MM-Fi across three settings. Accuracy (%) and METEOR (%) are reported for Action QA and Action Caption, respectively.
Action QA and Caption Results on XRF55

Evaluation of Human Action QA and Caption tasks on XRF55 across three settings. Accuracy (%) and METEOR (%) are reported for Action QA and Action Caption, respectively.

Related Works

  • [1] Jiaming Han, Kaixiong Gong, Yiyuan Zhang, Jiaqi Wang, Kaipeng Zhang, Dahua Lin, Yu Qiao, Peng Gao, and Xiangyu Yue. OneLLM: One framework to align all modalities with language. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
  • [2] Jianfei Yang, He Huang, Yunjiao Zhou, Xinyan Chen, Yuecong Xu, Shenghai Yuan, Han Zou, Chris Xiaoxuan Lu, and Lihua Xie. MM-Fi: Multi-modal non-intrusive 4D human dataset for versatile wireless sensing. Advances in Neural Information Processing Systems, 36, 2024.
  • [3] Fei Wang, Yizhe Lv, Mengdie Zhu, Han Ding, and Jinsong Han. XRF55: A radio frequency dataset for human indoor action analysis. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8(1):1–34, 2024.
BibTeX

    @misc{zhou2025holollm,
          title={HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning}, 
          author={Chuhao Zhou and Jianfei Yang},
          year={2025},
          eprint={2505.17645},
          archivePrefix={arXiv},
          primaryClass={cs.CV},
          url={https://www.arxiv.org/abs/2505.17645}, 
    }