China Automotive Multimodal Interaction Development Research Report, 2025
Research on Automotive Multimodal Interaction: The Interaction Evolution of L1~L4 Cockpits
ResearchInChina has released the "China Automotive Multimodal Interaction Development Research Report, 2025". This report comprehensively examines the installation of interaction modalities in automotive cockpits, multimodal interaction patents, mainstream cockpit interaction modes, the application of interaction modes in key vehicle models launched in 2025, cockpit interaction solutions from automakers and suppliers, and integration trends in multimodal interaction.
I. Closed-Loop Evolution of Multimodal Interaction: Progressive Evolution of L1~L4 Intelligent Cockpits
According to the "White Paper on Automotive Intelligent Cockpit Levels and Comprehensive Evaluation" jointly released by the China Society of Automotive Engineers (China-SAE), intelligent cockpits are classified into five levels, L0 through L4.
As a key driver of cockpit intelligence, multimodal interaction relies on the collaboration of AI large models and multiple hardware devices to fuse and process multi-source interaction data. On this basis, it accurately understands the intentions of drivers and passengers and provides scenario-based feedback, ultimately delivering natural, safe, and personalized human-machine interaction. Currently, the automotive intelligent cockpit industry is generally at the L2 stage, with some leading manufacturers exploring the move towards L3.
The core feature of L2 intelligent cockpits is "strong perception, weak cognition". At the L2 stage, a cockpit's multimodal interaction achieves signal-level fusion. Based on multimodal large model technology, it can "understand users' ambiguous intentions" and "process multiple commands simultaneously" to execute users' immediate, explicit commands. Most mass-produced intelligent cockpits can already achieve this.
Take the Li i6 as an example. It is equipped with MindGPT-4o, Li Auto's latest multimodal model, which offers understanding and response capabilities with ultra-long memory and ultra-low latency, as well as more natural language generation. It supports multimodal "see and speak" (voice + vision fusion search: a child who cannot yet read can pick the cartoon they want to watch simply by describing the content on the video cover) and multimodal referential interaction (voice + gesture): ① voice reference to objects: while issuing a command, the user extends an index finger; pointing left, for instance, controls the window, completing vehicle control; ② voice reference to people: a passenger in the same row can direct a command at a designated person by coordinating gesture and voice, e.g., pointing right and saying "Turn on the seat heating for him".
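The referential interaction described above can be sketched as a small fusion rule: a deictic word in the utterance ("him", "that") is resolved against the pointing direction detected by the cabin camera. This is a toy illustration under assumed names and mappings, not Li Auto's actual implementation.

```python
# Toy sketch of voice + gesture referential fusion: a pointing direction
# disambiguates which target a deictic spoken command refers to.
# The function name, pronoun list, and seat map are illustrative assumptions.

def resolve_target(utterance: str, pointing: str, seat_map: dict[str, str]) -> str:
    """Combine a deictic utterance with the detected gesture direction."""
    if any(pron in utterance.lower() for pron in ("him", "her", "that", "this")):
        # Deixis present: the gesture tells us which target is meant.
        return seat_map.get(pointing, "unknown")
    return "self"  # no deixis: the command applies to the speaker

seats = {"left": "window_left", "right": "passenger_right"}
print(resolve_target("Turn on the seat heating for him", "right", seats))
# prints "passenger_right"
```

A production system would replace the pronoun check with language-model intent parsing and the seat map with camera-based occupant tracking, but the fusion step itself (utterance + gesture → target) has this shape.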
The core feature of L3 intelligent cockpits is "strong perception, strong cognition". In the L3 stage, the multimodal interaction function of cockpits achieves cognitive-level fusion. Relying on large model capabilities, the cockpit system can comprehensively understand the complete current scenario and actively initiate reasonable services or suggestions without the user issuing explicit commands.
The core feature of L4 intelligent cockpits is "full-domain cognition and autonomous evolution", creating a "full-domain intelligent manager" for users. In the L4 stage, the application of intelligent cockpits will go far beyond the tool attribute and become a "digital twin partner" that can predict users' unspoken needs, have shared memories, and dispatch all resources for users. Its core experience is: before the user clearly perceives or expresses the need, the system has completed prediction and planning and entered the execution state.
II. Multimodal AI Agent: Understand What You Need and Predict What You Think
AI Agent can be regarded as the core execution unit and key technical architecture for the specific implementation of functions in the evolution of intelligent cockpits from L2 to L4. By integrating voice, vision, touch and situational information, AI Agent can not only "understand" commands, but also "see" the environment and "perceive" the state, thereby integrating the original discrete cockpit functions into a coherent, active and personalized service process.
Agent applications under L2 can be regarded as "enhanced command execution", which is the ultimate extension of L2 cockpit interaction capabilities. Based on large model technology, the cockpit system decomposes a user's complex command into multiple steps and then calls different Agent tools to execute them. For example, a passenger says: "I'm tired, help me buy a cup of coffee." The large model of the L2 cockpit system will understand this complex command and then call in sequence:
1. Voice Agent: Parse the user's needs in real time;
2. Food Ordering Agent: Recommend the best options based on user preferences, real-time location, and restaurant operating status;
3. Payment Agent: Complete payment automatically and seamlessly in the background;
4. Delivery Agent: Dynamically plan the delivery time using vehicle navigation data (e.g., "food arrives when the car arrives", so the order is delivered just as the user reaches the destination).
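The four-step sequence above amounts to a simple orchestration chain: the system decomposes one complex command and passes a shared context through specialised agents in order. The sketch below illustrates that pattern; every class, function, and field name is a hypothetical stand-in, not any vendor's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical sketch of the L2 agent pipeline: one complex command is
# decomposed and dispatched to specialised agents in sequence, each
# reading and enriching a shared context.

@dataclass
class Context:
    """Shared state passed along the agent chain."""
    command: str
    data: dict = field(default_factory=dict)

def voice_agent(ctx: Context) -> Context:
    # Parse the spoken request into a structured need (stubbed).
    ctx.data["need"] = "coffee" if "coffee" in ctx.command else "unknown"
    return ctx

def ordering_agent(ctx: Context) -> Context:
    # Pick a vendor from (stubbed) preferences and location.
    ctx.data["order"] = {"item": ctx.data["need"], "shop": "nearest_cafe"}
    return ctx

def payment_agent(ctx: Context) -> Context:
    # Complete payment automatically in the background.
    ctx.data["paid"] = True
    return ctx

def delivery_agent(ctx: Context) -> Context:
    # Sync delivery with the car's ETA ("food arrives when the car arrives").
    ctx.data["delivery_eta"] = ctx.data.get("car_eta", 15)
    return ctx

PIPELINE: list[Callable[[Context], Context]] = [
    voice_agent, ordering_agent, payment_agent, delivery_agent,
]

def run_pipeline(command: str, car_eta: int) -> Context:
    ctx = Context(command=command, data={"car_eta": car_eta})
    for agent in PIPELINE:
        ctx = agent(ctx)
    return ctx

result = run_pipeline("I'm tired, help me buy a cup of coffee.", car_eta=20)
print(result.data)
```

In a real cockpit the decomposition itself is done by the large model rather than hard-coded, and each agent wraps an external service call, but the sequential, context-passing structure is the same.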
At present, Agent applications are essentially responses to and executions of a user's explicit, complex commands. The cockpit system does nothing "actively"; it simply "completes the tasks assigned by the user" more intelligently.
Case (1): IM Motors released the "IM AIOS Ecological Cockpit" jointly developed with Banma Zhixing. This cockpit is the first to implement Alibaba's ecosystem services in the form of AI Agent, creating a "No Touch & No App" human-vehicle interaction mode. The "AI Food Ordering Agent" and "AI Ticketing Agent" functions launched by the IM AIOS Ecological Cockpit allow users to complete food selection/ticketing and payment only through voice interaction without needing manual operation.
Case (2): On August 4, 2025, Denza officially launched the "Car Life Agent" intelligent service system at its brand press conference, debuting on its two flagship models, the Denza Z9 and Z9GT. The "Car Life Agent" supports voice-based food ordering and face-based payment via facial recognition. After an order is completed, the system automatically plans the navigation route, forming a seamless closed loop from demand to service.
In the next level of intelligent cockpits, Agent applications will shift from "you say, I do" to "I watch, I guess, I suggest, let's do it together". Users need not issue any explicit command. If a user merely sighs and rubs their temples, the large model can jointly evaluate data from the camera (tired micro-expressions), biosensors (heart rate changes), navigation (two hours of continuous driving), and the clock (3 pm, the afternoon drowsiness period) to conclude that the user is fatigued from long-distance driving and needs rest and refreshment. On that basis, the system proactively initiates interaction: "You seem to need a rest. There is a service area * kilometers ahead with your favorite ** coffee. Shall I start navigation? I can also play some refreshing music for you." Once the user agrees, the system calls navigation, entertainment, and other Agent tools.
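The proactive behavior described above boils down to a trigger that fuses several weak signals and speaks up only when enough of them agree. The sketch below uses a toy rule-based score as a stand-in for the large-model reasoning; all signal names and thresholds are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a proactive fatigue trigger: fuse camera,
# biosensor, navigation, and clock signals, and initiate a suggestion
# without any explicit user command. Thresholds are illustrative.

@dataclass
class CabinSignals:
    micro_expression: str      # from the cabin camera, e.g. "tired"
    heart_rate_delta: float    # change vs. baseline, from biosensors
    driving_minutes: int       # continuous driving time, from navigation
    hour_of_day: int           # local time

def infer_fatigue(s: CabinSignals) -> bool:
    """Fuse signals into one fatigue judgment (toy stand-in for the
    large model's scenario understanding)."""
    score = 0
    score += s.micro_expression == "tired"
    score += s.heart_rate_delta < -5          # heart rate dropping
    score += s.driving_minutes >= 120         # ~2 hours at the wheel
    score += 14 <= s.hour_of_day <= 16        # afternoon drowsiness window
    return score >= 3                         # require multiple agreeing cues

def proactive_suggestion(s: CabinSignals) -> Optional[str]:
    if infer_fatigue(s):
        return ("You seem to need a rest. There is a service area ahead "
                "with coffee. Shall I start navigation and play refreshing music?")
    return None  # stay silent: no explicit command and no inferred need

signals = CabinSignals("tired", -8.0, 125, 15)
print(proactive_suggestion(signals))
```

Requiring several cues to agree before speaking is the key design choice: it keeps a proactive cockpit from pestering the user on a single noisy signal.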