From: Internet Info Agency 2026-04-15 12:34:00
Researchers at the University of Michigan College of Engineering have developed a hardware-software co-design approach that maps state-space models directly onto in-memory computing architectures, significantly improving energy efficiency and reducing processing latency for artificial intelligence on edge devices. The method enables real-time processing of continuous data streams, such as video or sensor data, allowing high-performance AI to run locally on devices like smartphones, hearing aids, and autonomous vehicle cameras. The findings were published in *Nature Communications*. The study highlights that the computations inherent in state-space models can exploit the physical characteristics of in-memory computing systems for efficient execution, overcoming limitations these hardware platforms face when running convolutional neural networks and Transformer models.
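To illustrate why state-space models suit this hardware, the sketch below shows a discretized linear state-space recurrence, x_{k+1} = A x_k + B u_k with readout y_k = C x_k: each time step reduces to a few matrix-vector products, the one operation a crossbar-style in-memory array computes in place. This is a minimal illustrative example; the matrices A, B, and C are made-up placeholders, not values or architecture details from the paper.

```python
def matvec(M, v):
    """Dense matrix-vector product, the kernel a crossbar array performs in-memory."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

# Placeholder system matrices (illustrative only).
A = [[0.9, 0.0], [0.1, 0.8]]   # state transition (stable: eigenvalues < 1)
B = [[1.0], [0.5]]             # input projection
C = [[1.0, -1.0]]              # readout

def ssm_step(x, u):
    """One recurrence step: every operation here is a matrix-vector multiply."""
    y = matvec(C, x)                          # readout MVM
    Ax, Bu = matvec(A, x), matvec(B, u)       # two more MVMs
    x_next = [a + b for a, b in zip(Ax, Bu)]  # state update
    return x_next, y

x = [0.0, 0.0]
stream = [[1.0]] * 5            # stand-in for a continuous sensor stream
ys = []
for u in stream:
    x, y = ssm_step(x, u)
    ys.append(y[0])
print(ys)
```

Because the per-step work is pure matrix-vector multiplication against fixed weights, the weights can stay resident in the memory array and the input stream is processed sample by sample, which is the property the co-design reportedly exploits for low-latency streaming inference.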

Jaguar Land Rover FY2025/26 Results: Premium Models Drive Recovery, China Market Leads
Baidu Intelligent Cloud Powered Delivery of Over 20 Million L2 ADAS Vehicles Last Year
Xiaomi Unveils and Open-Sources XiaomiOneVL Autonomous Driving Framework
FAW Unveils Bestune 08 Sedan: Powered by Snapdragon 8295, Offers BEV and EREV Options
Trump's China Delegation Includes Nearly 20 U.S. Executives from Apple, Tesla and More
BYD Unveils Yunnian-P Ultra Tech: Enables Wheel Replacement, Three-Wheel Driving, and 9-Ton Lifting
Tesla Unveils Reusable Suspension Clip Patent, Balancing Cabin Quietness and Serviceability