Research Report on Automotive Memory Chip Industry and Its Impact on Foundation Models, 2025
Research on automotive memory chips: driven by foundation models, the performance requirements and costs of automotive memory chips are rising sharply.
From 2D+CNN small models to BEV+Transformer foundation models, the number of model parameters has soared, making memory a performance bottleneck.
The global automotive memory chip market is expected to be worth over USD17 billion in 2030, up from about USD4.3 billion in 2023, a CAGR of roughly 22% over the period. Automotive memory chips took an 8.2% share of automotive semiconductor value in 2023, a figure projected to rise to 17.4% in 2030, indicating a substantial increase in memory chip costs.
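As a quick sanity check, the implied compound annual growth rate can be computed from the report's own endpoints (USD4.3 billion in 2023, USD17 billion in 2030); the function below is a standard CAGR formula, not something from the report:

```python
# Sanity check on the report's market figures: USD4.3bn (2023) -> USD17bn (2030).
# cagr() is the standard compound-annual-growth-rate formula.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by two endpoint values."""
    return (end / start) ** (1 / years) - 1

growth = cagr(4.3, 17.0, 2030 - 2023)
print(f"Implied CAGR 2023-2030: {growth:.1%}")  # ~21.7%, consistent with the ~22% cited
```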
The main driver for the development of the automotive memory chip industry lies in the rapid rise of automotive LLMs. From the previous 2D+CNN small models to BEV+Transformer foundation models, the number of model parameters has significantly increased, leading to a surge in computing demands. CNN models typically have fewer than 10 million parameters, while foundation models (LLMs) generally range from 7 billion to 200 billion parameters. Even after distillation, automotive models can still have billions of parameters.
From a computing perspective, BEV+Transformer foundation models, typically built on LLaMA-style decoder architectures, rely heavily on the Softmax operator. Because Softmax parallelizes far less well than traditional convolution operators, memory becomes the bottleneck. Memory-intensive models like GPT in particular place high demands on memory bandwidth, and common autonomous driving SoCs on the market often run into the "memory wall" problem.
An end-to-end system essentially embeds a small LLM. As more data is fed in, the parameters of the foundation model will continue to grow: the initial model size is around 10 billion parameters, and through continuous iteration it will eventually exceed 100 billion.
On April 15, 2025, at its AI sharing event, XPeng disclosed for the first time that it is developing XPeng World Foundation Model, a 72-billion-parameter ultra-large autonomous driving model. XPeng's experimental results show that the scaling law effect is evident in models with 1 billion, 3 billion, 7 billion, and 72 billion parameters: the larger the parameter scale, the greater the model's capabilities. For models of the same size, the more training data, the greater the model's performance.
The main bottleneck in multimodal model training is not only GPUs but also the efficiency of data access. XPeng has independently developed underlying data infrastructure (Data Infra), increasing data upload capacity by 22 times, and data bandwidth by 15 times in training. By optimizing both GPU/CPU and network I/O, the model training speed has been improved by 5 times. Currently, XPeng uses up to 20 million video clips to train its foundation model, a figure that will increase to 200 million this year.
In the future, XPeng will deploy the "XPeng World Foundation Model" to vehicles by distilling small models over the cloud. The parameter scale of automotive foundation models will only continue to grow, posing significant challenges to computing chips and memory. To address this, XPeng has self-developed Turing AI chip, which boasts a utilization 20% higher than general automotive high-performance chips and can handle foundation models with up to 30B (30 billion) parameters. In contrast, Li Auto's current VLM (Vision-Language Model) has about 2.2 billion parameters.
More model parameters often come with higher inference latency. How to solve the latency problem is crucial. It is expected that the Turing AI chip may offer big improvements in memory bandwidth through multi-channel design or advanced packaging technology, so as to support the local operation of 30B-parameter foundation models.
Memory bandwidth determines the upper limit of inference computing speed. LPDDR5X is widely adopted but still falls short. GDDR7 and HBM may be put on the agenda.
Memory bandwidth determines the upper limit of inference speed. Assume a foundation model has 7 billion parameters; at the INT8 precision typical for automotive use, it occupies 7GB of memory. Tesla's first-generation FSD chip has a memory bandwidth of 63.5GB/s, so it can generate at most one token every 110 milliseconds, a rate below 10Hz, against the typical 30Hz image frame rate in autonomous driving. Nvidia Orin, with 204.5GB/s of memory bandwidth, generates one token every 34 milliseconds (7GB ÷ 204.5GB/s = 0.0343s), barely reaching 30Hz (1 ÷ 0.0343s ≈ 29Hz). Note that this accounts only for data transfer time and ignores the actual computation, so real speeds will be considerably lower.
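The back-of-envelope calculation above can be sketched as follows. It assumes every weight is streamed from memory once per token (a common lower-bound model for memory-bound decoding) and ignores compute time entirely; the function name is illustrative:

```python
# Sketch: bandwidth-bound token latency for an on-device LLM.
# Assumes all weights are read once per token (lower-bound model for
# memory-bound decoding); real latency also includes compute time.

def token_latency_ms(params_billion: float, bytes_per_param: float,
                     bandwidth_gbs: float) -> float:
    """Time to stream all weights once, in milliseconds."""
    model_gb = params_billion * bytes_per_param  # 1e9 params * bytes = GB
    return model_gb / bandwidth_gbs * 1000

# 7B parameters at INT8 (1 byte per parameter)
for name, bw in [("Tesla FSD gen1", 63.5), ("Nvidia Orin", 204.5)]:
    ms = token_latency_ms(7, 1, bw)
    print(f"{name}: {ms:.0f} ms/token, {1000 / ms:.0f} tokens/s")
```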

DRAM Selection Path (1): LPDDR5X will be widely adopted, and the LPDDR6 standard is still being formulated.
Apart from Tesla, current automotive chips support at most LPDDR5. The industry's next step is to promote LPDDR5X. For example, Micron has launched an LPDDR5X + DLEP DRAM automotive solution, which has passed ISO 26262 ASIL-D certification and meets critical automotive functional safety (FuSa) requirements.
Nvidia Thor-X already supports automotive LPDDR5X, with memory bandwidth increased to 273GB/s, and supports the PCIe 5.0 interface. Thor-X-Super has an astonishing memory bandwidth of 546GB/s, using 512-bit-wide LPDDR5X to ensure extremely high data throughput. In reality, the Super version, much like Apple's Ultra chips, simply integrates two X dies into one package, and it is not expected to enter mass production in the short term.
Thor has multiple versions, with five currently known: ① Thor-Super, with 2000T computing power; ② Thor-X, with 1000T computing power; ③ Thor-S, with 700T computing power; ④ Thor-U, with 500T computing power; ⑤ Thor-Z, with 300T computing power. Lenovo's first Thor central computing unit in the world plans to adopt dual Thor-X chips.
Micron's 9600MT/s LPDDR5X is already sampling, targeting mobile devices; no automotive-grade version is available yet. Samsung's new LPDDR5X product, K3KL9L90DM-MHCU, targets high performance across PCs, servers, vehicles, and emerging on-device AI applications. It delivers 1.25 times the speed and 25% better power efficiency of the previous generation, with a maximum operating temperature of 105°C; mass production started in early 2025. A single K3KL9L90DM-MHCU offers 8GB over an x32 bus, so eight chips total 64GB.
As LPDDR5X moves toward 9600Mbps and even 10Gbps, JEDEC has started developing the next-generation LPDDR6 standard, targeting 6G communications, L4 autonomous driving, and immersive AR/VR scenarios. LPDDR6 is expected to run at over 10.7Gbps, possibly up to 14.4Gbps, with bandwidth and energy efficiency improvements of up to 50% over current LPDDR5X. However, mass production of LPDDR6 memory may not occur until 2026. Qualcomm's next-generation flagship chip, Snapdragon 8 Elite Gen 2 (codenamed SM8850), will support LPDDR6; automotive-grade LPDDR6 may take even longer to arrive.
DRAM Selection Path (2): GDDR6 is already installed in vehicles but faces cost and power consumption issues. A GDDR7+LPDDR5X hybrid memory architecture may be viable.
Aside from LPDDR5X, another path is GDDR6 or GDDR7. Tesla’s second-gen FSD chip already supports first-gen GDDR6: HW4.0 uses 32GB of GDDR6 (model MT61M512M32KPA-14) running at 1750MHz (by comparison, the minimum LPDDR5 frequency is above 3200MHz). As first-gen GDDR6, its speed is relatively low; even with GDDR6, smoothly running 10-billion-parameter foundation models remains infeasible, though it is currently the best option available.
Tesla’s third-gen FSD chip is likely under development and may be completed in late 2025, with support for at least GDDR6X.
The next-generation GDDR7 standard was officially released in March 2024, but Samsung had already unveiled the world’s first GDDR7 in July 2023. Currently, both SK Hynix and Micron have introduced GDDR7 products. GDDR requires a dedicated physical layer (PHY) and controller, which must be built into any chip that uses it; companies like Rambus and Synopsys sell the relevant IP.

Future autonomous driving chips may adopt hybrid memory architecture, for example, use GDDR7 for processing high-load AI tasks and LPDDR5X for low-power general computing, balancing performance and cost.
DRAM Selection Path (3): HBM2E is already deployed in L4 Robotaxis but remains far from production passenger cars. Memory chip vendors are working on migration of HBM technology from data centers to edge devices.
High bandwidth memory (HBM) is primarily used in servers. Stacking DRAM dies with through-silicon vias (TSVs) raises not only the cost of the memory itself but also adds the cost of TSMC's CoWoS packaging, whose capacity is currently tight and expensive. HBM is far more expensive than the LPDDR5X, LPDDR5, and LPDDR4X commonly used in production passenger cars, and is therefore not economical.
SK Hynix’s HBM2E is being exclusively used in Waymo’s L4 Robotaxis, offering 8GB capacity, transmission rate of 3.2Gbps, and impressive bandwidth of 410GB/s, setting a new industry benchmark.
SK Hynix is currently the only vendor able to supply HBM that meets stringent AEC-Q automotive standards. It is actively collaborating with autonomous driving giants like NVIDIA and Tesla to expand HBM applications from AI data centers to intelligent vehicles.
Both SK Hynix and Samsung are working to migrate HBM from data centers to edge devices such as smartphones and cars. Mobile adoption of HBM will focus on edge AI performance and low-power design, driven by technological innovation and supply chain synergy. Cost and yield, which hinge on HBM process improvements, remain the main short-term challenges.
Key Differences: Traditional data center HBM is a "high bandwidth, high power consumption" solution designed for high-performance computing, while on-device HBM is a "moderate bandwidth, low power consumption" solution tailored for mobile devices.
Technology Path: Traditional data center HBM relies on TSV and interposers, whereas on-device HBM achieves performance breakthroughs through packaging innovations (e.g., vertical wire bonding) and low-power DRAM technology.
For example, Samsung’s LPW DRAM (Low-Power Wide I/O DRAM) uses similar technology, offering low latency and up to 128GB/s bandwidth while consuming only 1.2pJ/b. It is expected to enter mass production during 2025-2026.
LPW DRAM significantly increases the number of I/O interfaces by stacking LPDDR DRAM dies, improving performance while reducing power consumption. Its bandwidth can exceed 200GB/s, 166% higher than LPDDR5X, and its power consumption drops to 1.9pJ/bit, 54% lower than LPDDR5X.

UFS 3.1 has already been widely adopted in vehicles and will gradually iterate to UFS 4.0 and UFS 5.0, while PCIe SSD will become the preferred choice for L3/L4 high-level autonomous vehicles.
At present, high-level autonomous vehicles generally adopt UFS 3.1 storage. As vehicle sensors and computing power advance, higher-specification data transmission solutions are imperative, and UFS 4.0 products will become one of the mainstream options in the future. UFS 3.1 offers a maximum speed of 2.9GB/s, far below PCIe SSD speeds. The next-generation UFS 4.0 reaches 4.2GB/s, delivering higher speed while cutting power consumption by 30% compared with UFS 3.1. By 2027, UFS 5.0 is expected to arrive with speeds of around 10GB/s, still much lower than SSD, but with the advantages of controllable cost and a stable supply chain.
Given the strong demand for foundation models from both the cockpit and autonomous driving, and to ensure sufficient performance headroom, SSDs should be adopted instead of the current mainstream UFS (not fast enough) or eMMC (even slower). Automotive SSDs use the PCIe standard, which offers tremendous flexibility and headroom. JEDEC's JESD312 automotive SSD standard is based on PCIe 4.0, which spans multiple link widths: 4 lanes is the baseline configuration, while a 16-lane full-duplex link can reach 64GB/s. PCIe 5.0, released in 2019, doubles the signaling rate to 32GT/s, with x16 full-duplex bandwidth approaching 128GB/s.
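The PCIe bandwidth figures above follow from the per-lane signaling rate and the 128b/130b encoding used since PCIe 3.0; a minimal sketch (function names are illustrative, and PCIe 6.0's PAM4/FLIT signaling is out of scope):

```python
# Sketch: approximate PCIe bandwidth from signaling rate and lane count.
# PCIe 3.0-5.0 use 128b/130b encoding; rates in GT/s come from the specs.

ENCODING = 128 / 130  # payload bits per transferred bit (PCIe 3.0-5.0)

def pcie_bandwidth_gbs(gt_per_s: float, lanes: int, duplex: bool = False) -> float:
    """Approximate bandwidth in GB/s; duplex=True counts both directions."""
    one_way = gt_per_s * ENCODING / 8 * lanes  # GT/s -> GB/s per direction
    return one_way * (2 if duplex else 1)

# PCIe 4.0 x16 full duplex: ~63 GB/s (the "64GB/s" figure is the raw-rate round number)
print(f"PCIe 4.0 x16 duplex: {pcie_bandwidth_gbs(16, 16, duplex=True):.0f} GB/s")
# PCIe 5.0 x16 full duplex: ~126 GB/s ("approaching 128GB/s")
print(f"PCIe 5.0 x16 duplex: {pcie_bandwidth_gbs(32, 16, duplex=True):.0f} GB/s")
```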
Currently, both Micron and Samsung offer automotive-grade SSDs. The Samsung AM9C1 series ranges from 128GB to 1TB, while the Micron 4150AT series comes in 220GB, 440GB, 900GB, and 1800GB capacities. The 220GB version suits a standalone cockpit or intelligent driving domain, while cockpit-driving integration requires at least 440GB.
Multi-port BGA SSDs can serve as a centralized storage unit in vehicles, connecting via multiple ports to the SoCs for the cockpit, ADAS, gateway, and more, and efficiently processing and storing different types of data in designated areas. This isolation ensures that non-core SoCs cannot access critical data without authorization, preventing interference with, misidentification of, or corruption of core SoC data. It maximizes data transmission isolation and independence while reducing the per-SoC hardware cost of vehicle storage.
For future L3/L4 high-level autonomous vehicles, PCIe 5.0 x4 + NVMe 2.0 will be the preferred choice for high-performance storage:
Ultra-high-speed transmission: read speeds up to 14.5GB/s and write speeds up to 13.6GB/s, more than three times faster than UFS 4.0.
Low latency & high concurrency: supports higher queue depths (QD32+) for parallel processing of multiple data streams.
AI computing optimization: combined with vehicle SoCs, accelerates AI inference to meet the requirements of fully autonomous driving.
In autonomous driving applications, PCIe NVMe SSD can cache AI computing data, reducing memory access pressure and improving real-time processing capabilities. For example, Tesla’s FSD system uses a high-speed NVMe solution to store autonomous driving training data to enhance perception and decision-making efficiency.
Synopsys has already launched the world’s first automotive-grade PCIe 5.0 IP solution, which includes PCIe controller, security module, physical layer device (PHY), and verification IP, and complies with ISO 26262 and ISO/SAE 21434 standards. This means PCIe 5.0 will soon be available for automotive applications.