Embodied agents operating in smart homes must understand human behavior through diverse sensory inputs and communicate via natural language. While Vision-Language Models (VLMs) have enabled impressive language-grounded perception, their reliance on visual data limits robustness in real-world scenarios with occlusions, poor lighting, or privacy constraints. In this paper, we introduce HoloLLM, a Multimodal Large Language Model (MLLM) that integrates uncommon but powerful sensing modalities, such as LiDAR, infrared, mmWave radar, and WiFi, to enable seamless human perception and reasoning across heterogeneous environments. We address two key challenges: (1) the scarcity of aligned modality-text data for rare sensors, and (2) the heterogeneity of their physical signal representations. To overcome these, we design a Universal Modality-Injection Projector (UMIP) that enhances pre-aligned modality embeddings with fine-grained, text-aligned features from tailored encoders via coarse-to-fine cross-attention without introducing significant alignment overhead. We further introduce a human-VLM collaborative data curation pipeline to generate paired textual annotations for sensing datasets. Extensive experiments on two newly constructed benchmarks show that HoloLLM significantly outperforms existing MLLMs, improving language-grounded human sensing accuracy by up to 30%. This work establishes a new foundation for real-world, language-informed multisensory embodied intelligence.
Embodied agents in smart homes, e.g., household robots and intelligent appliances, have garnered increasing attention in recent years. To interact effectively with humans and execute real-world tasks, agents must understand human behavior and be capable of engaging in natural language communication. This necessitates the development of models that seamlessly integrate rich human perception with advanced language understanding and generation capabilities.
Vision-Language Models (VLMs) have emerged as promising tools for enabling language-conditioned perception and reasoning. However, the visual modality alone struggles in many real-world conditions, e.g., low-light environments, occlusions, and privacy-sensitive scenarios.
In contrast, humans naturally rely on multiple sensory modalities, such as vision, audition, and olfaction, to perceive and adapt to diverse environments. Similarly, different sensing modalities bring distinct advantages: LiDAR enables high-precision 3D reconstruction, infrared cameras support perception in darkness, and mmWave radar and WiFi are resilient to visual occlusions and lighting variations. Hence, we propose HoloLLM, which integrates diverse sensor inputs to provide strong adaptability and reliability in complex, real-world environments.
We propose a novel multisensory foundation model, HoloLLM, for seamless human perception and reasoning across heterogeneous environments.
HoloLLM adopts the Universal Modality-Injection Projector (UMIP) to efficiently align sensing modalities with text through only minimal fine-tuning. In addition, modality-specific discriminative features are thoroughly extracted by tailored encoders and adaptively injected into the aligned multimodal tokens through UMIP.
We compare UMIP with state-of-the-art multimodal projectors. Most existing works adopt modality-specific encoders and projectors, which commonly require substantial ‘modality-text’ data pairs for pre-training. OneLLM [1] attempts to handle various modalities via a unified framework consisting of a universal encoder and a universal projector. However, without a dedicated design for capturing heterogeneous spatial features, the universal encoder struggles to produce sufficiently discriminative multimodal tokens. Unlike existing works, UMIP uses the universal encoder only to generate initial embeddings for each modality; these embeddings are then progressively enhanced with fine-grained, text-aligned features from tailored encoders.
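To make the injection mechanism concrete, below is a minimal PyTorch sketch of the coarse-to-fine cross-attention idea behind UMIP. It is an illustration under stated assumptions, not the exact implementation: the module names, the number of injection stages, and the tensor shapes are placeholders, and we assume the initial tokens come from the universal encoder while a list of progressively finer feature maps comes from a tailored, modality-specific encoder.

import torch
import torch.nn as nn

class ModalityInjectionBlock(nn.Module):
    """One injection stage: pre-aligned tokens query modality-specific features."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, tokens: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        q = self.norm_q(tokens)          # queries: pre-aligned multimodal tokens
        kv = self.norm_kv(feats)         # keys/values: tailored encoder features
        injected, _ = self.attn(q, kv, kv)
        return tokens + injected         # residual keeps the pre-aligned tokens intact

class UMIPSketch(nn.Module):
    """Progressively enhance pre-aligned tokens with coarse-to-fine tailored features."""
    def __init__(self, dim: int, num_stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList(ModalityInjectionBlock(dim) for _ in range(num_stages))

    def forward(self, initial_tokens: torch.Tensor, fine_feats: list[torch.Tensor]) -> torch.Tensor:
        tokens = initial_tokens                              # from the universal encoder
        for block, feats in zip(self.stages, fine_feats):    # coarse -> fine order
            tokens = block(tokens, feats)
        return tokens                                        # multimodal tokens fed to the LLM

Because each stage only adds a residual refinement on top of the pre-aligned tokens, the tailored features sharpen the representation without requiring the projector to be re-aligned to text from scratch.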
t-SNE visualization results. (a) Aligned tokens from 5 action categories (denoted by different colors) generated by the Baseline, OneLLM, and HoloLLM for the ‘Video’ and ‘mmWave’ modalities. (b) Multimodal tokens from 2 action categories (denoted by different colors) for the ‘mmWave’ (circles), ‘WiFi’ (pentagrams), and ‘Text’ (triangles) modalities generated by HoloLLM without and with UMIP.
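For reference, a figure of this kind can be reproduced with an off-the-shelf t-SNE projection. The sketch below assumes the aligned tokens have already been exported as a NumPy feature matrix with integer action labels; this export interface is hypothetical and not part of the released code.

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(tokens: np.ndarray, labels: np.ndarray, title: str) -> None:
    """Project (N, D) token features to 2D with t-SNE and color them by action category."""
    coords = TSNE(n_components=2, perplexity=30, init="pca", random_state=0).fit_transform(tokens)
    for cls in np.unique(labels):
        mask = labels == cls
        plt.scatter(coords[mask, 0], coords[mask, 1], s=8, label=f"action {cls}")
    plt.title(title)
    plt.legend()
    plt.show()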
We present qualitative results of HoloLLM under the ‘CrossEnv’ setting. These results show that HoloLLM can perform the Action QA and Action Caption tasks across both common and sensing modalities in diverse environments.
We train and evaluate HoloLLM on two multimodal human-sensing datasets, MM-Fi [2] and XRF55 [3], augmented with generated textual descriptions. MM-Fi consists of 5 modalities: Video (V), Depth images (D), LiDAR (L), mmWave Radar (M), and WiFi-CSI (W). XRF55 likewise contains 5 modalities: Video (V), Depth images (D), Infrared images (I), RFID signals (R), and WiFi-CSI (W).
To comprehensively evaluate various MLLMs across diverse scenarios, we design three experimental settings: (1) Random Split (Random), (2) Cross-Subject Split (CrossSub), and (3) Cross-Environment Split (CrossEnv). ‘Random’ splits all samples randomly at a 3:1 train/test ratio, while ‘CrossSub’ / ‘CrossEnv’ selects samples from non-overlapping human subjects / environments for training and testing.
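As a concrete illustration, the three splits could be constructed as in the sketch below. It assumes each sample record exposes subject and environment identifiers; the field names and the held-out IDs in the usage comments are hypothetical, not the exact protocol used in the paper.

import random

def random_split(samples, ratio=(3, 1), seed=0):
    """'Random': shuffle all samples and split train:test at the given ratio (3:1 here)."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = len(shuffled) * ratio[0] // sum(ratio)
    return shuffled[:cut], shuffled[cut:]

def cross_split(samples, key, test_ids):
    """'CrossSub' / 'CrossEnv': hold out non-overlapping subjects / environments for testing."""
    train = [s for s in samples if s[key] not in test_ids]
    test = [s for s in samples if s[key] in test_ids]
    return train, test

# Usage (illustrative IDs):
# train, test = cross_split(samples, key="subject", test_ids={"S09", "S10"})
# train, test = cross_split(samples, key="environment", test_ids={"E4"})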
For quantitative evaluation, we report accuracy for Action Recognition and Action QA, and the METEOR metric for Action Caption.
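A minimal sketch of the two metrics is given below, assuming string-valued predictions and references; NLTK's METEOR implementation is used here as one possible choice, since the exact toolkit is not specified in this summary.

# Requires: nltk.download('wordnet'); nltk.download('punkt')
from nltk.tokenize import word_tokenize
from nltk.translate.meteor_score import meteor_score

def action_accuracy(preds: list[str], labels: list[str]) -> float:
    """Exact-match accuracy for Action Recognition / Action QA answers."""
    correct = sum(p.strip().lower() == l.strip().lower() for p, l in zip(preds, labels))
    return correct / len(labels)

def caption_meteor(preds: list[str], refs: list[str]) -> float:
    """Mean METEOR score for Action Caption outputs."""
    scores = [meteor_score([word_tokenize(r)], word_tokenize(p)) for p, r in zip(preds, refs)]
    return sum(scores) / len(scores)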
@misc{zhou2025holollm,
title={HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning},
author={Chuhao Zhou and Jianfei Yang},
year={2025},
eprint={2505.17645},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://www.arxiv.org/abs/2505.17645},
}