WAIC 2025 | Silicon Evolution: AI Chips and Smart Hardware Co-Shape a New Era of Computing Power
In July 2025, the global tech spotlight returned to Shanghai as the 2025 World Artificial Intelligence Conference (WAIC) opened under the theme "Connecting the World, Boundless Metaverse." While the fervent discussion of large models and artificial general intelligence (AGI) continued, this year's conference turned the spotlight on the underlying bedrock that enables them to flourish: AI chips and smart hardware. Within the exhibition halls, a quiet revolution in computing density, energy-efficiency limits, and hardware form factors unfolded, revealing the core drivers of the future intelligent world.
I. AI Chips: Breaking Physical Boundaries, Racing for Computing Supremacy
With Moore's Law approaching its physical limits, AI chips are evolving along unprecedentedly diverse paths and achieving notable breakthroughs:
1. Pushing the Frontiers of Process Technology: Leading players such as NVIDIA, Huawei HiSilicon, and Cambricon unveiled next-generation AI training and inference chips built on 1nm and sub-1nm process nodes. These chips deliver substantial gains in transistor density, providing robust physical support for the efficient operation of models with hundreds of billions, even trillions, of parameters. The booths of these chip giants were swarmed by professional attendees eager to witness these marvels of silicon engineering destined to power future computing.
2. Mainstreaming Heterogeneous Integration & Chiplet Technology: As single-chip performance gains face bottlenecks, Chiplet technology emerged as the key solution. AMD, Intel, and several domestic innovators showcased solutions using advanced packaging (e.g., 3D stacking, silicon interposers) to heterogeneously integrate compute chiplets, high-bandwidth memory (HBM3e/HBM4), and high-speed interconnect chiplets. This model of "modular integration and collaborative processing" significantly boosts system-wide computing power and energy efficiency while reducing the design and manufacturing costs of complex chips, establishing itself as the industry's recognized development trajectory.
3. Accelerated Adoption of Compute-in-Memory (CIM) Architectures: Addressing the "memory wall" bottleneck of the traditional von Neumann architecture, Compute-in-Memory (CIM) chips moved from the lab into the spotlight. Institutions such as Tsinghua University's Brain-Inspired Computing Research Center, Samsung, and IBM demonstrated near-memory and in-memory computing prototype chips. By performing calculations within or adjacent to memory cells, these chips drastically reduce the energy consumption and latency caused by data movement (a back-of-the-envelope comparison follows this list). They show disruptive potential in edge AI inference scenarios demanding ultra-low power, such as real-time video analytics and sensor networks.
4. Rise of Photonic Computing and Quantum Chips: Looking further ahead, photonic AI chips attracted attention for their ultra-high speed, low power consumption, and resistance to electromagnetic interference. Startups and research institutions from home and abroad showcased prototype systems that use light to perform the matrix multiply-accumulate operations at the heart of AI workloads. Concurrently, specialized quantum acceleration chips, though still at an early exploratory stage, drew significant academic and investor interest for their potential advantages on specific optimization problems.
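To make the "memory wall" argument in item 3 concrete, here is a minimal back-of-the-envelope sketch comparing the energy of fetching weights from off-chip DRAM with the energy of the arithmetic itself. The per-operation energy figures are assumptions loosely based on commonly cited order-of-magnitude estimates, not numbers reported at the conference.

```python
# Rough energy model for a single fully connected layer (batch size 1).
# The per-operation energies below are assumed, order-of-magnitude values.
ENERGY_MAC_PJ = 1.0            # assumed energy of one on-chip multiply-accumulate (pJ)
ENERGY_DRAM_WORD_PJ = 640.0    # assumed energy of one 32-bit off-chip DRAM access (pJ)

def layer_energy_uj(num_macs: int, dram_words_moved: int) -> float:
    """Total energy in microjoules: arithmetic plus off-chip data movement."""
    total_pj = num_macs * ENERGY_MAC_PJ + dram_words_moved * ENERGY_DRAM_WORD_PJ
    return total_pj / 1e6

macs = 1024 * 1024                  # 1024x1024 weight matrix, one MAC per weight
weights_fetched = 1024 * 1024       # every weight read once from DRAM (no reuse)

von_neumann = layer_energy_uj(macs, weights_fetched)
in_memory = layer_energy_uj(macs, 0)  # idealized CIM: weights never leave the array

print(f"conventional: {von_neumann:.1f} uJ  vs  in/near-memory: {in_memory:.1f} uJ")
```

Even with these rough numbers, data movement exceeds the arithmetic by two to three orders of magnitude, which is precisely the gap CIM designs target.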
II. Smart Hardware: Form Factor Reimagined, AI Empowers the Intelligent Connection of Everything
The evolution of AI chips directly catalyzed disruptive changes in the form and capabilities of smart hardware:
1. Explosion of Edge AI Devices: Edge computing boxes, industrial smart cameras, AI sensors, and other devices equipped with high-efficiency AI accelerator chips were on display in abundance. Able to perform complex image recognition, speech processing, and predictive maintenance locally and in real time without relying on the cloud (see the local-inference sketch after this list), they meet the stringent low-latency and privacy requirements of scenarios such as industrial automation, smart cities, and unmanned retail. Booths from Huawei Ascend, Qualcomm, and NXP were crowded with industry clients seeking deployment solutions.
2. AI PCs and AI Phones Enter the "True Intelligence" Era: Terminal manufacturers such as Lenovo, Dell, Apple, Xiaomi, and Honor unanimously positioned on-device large-model capability as the core selling point of their new-generation PCs and phones. Leveraging powerful dedicated NPUs, these devices can efficiently run multi-billion-parameter models locally, enabling smarter document summarization, image generation, real-time translation, personalized assistants, and more, delivering a qualitative leap in user experience (a memory-footprint estimate follows this list). Long queues formed at the exhibition experience zones as attendees vied to interact with powerful AI assistants running entirely offline.
3. Embodied Intelligence Platforms Mature: Humanoid robots, intelligent bionic hands, and highly flexible robotic arms, combining advanced AI chips with sophisticated hardware, became focal points. Tesla's Optimus demonstrated smoother gaits and finer manipulation skills, while robots from domestic companies like UBTECH and CloudMinds exhibited enhanced environmental perception and task execution. They are not only feats of hardware engineering but also core carriers materializing AI algorithms in the physical world, heralding profound transformations in smart manufacturing, rehabilitation medicine, and specialized operations.
4. Neuromorphic Hardware Opens a New Arena: Neuromorphic chips and hardware inspired by biological brains were another major highlight. Successors to Intel's Loihi and IBM's TrueNorth, alongside results from domestic research teams, showcased event-driven, ultra-low-power operation and the unique advantages of spiking neural networks for processing spatiotemporal data (e.g., dynamic vision and olfactory sensing), paving a new path for next-generation brain-inspired intelligent hardware.
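For item 1 above, the following minimal sketch shows what fully local inference looks like with ONNX Runtime; the model file "edge_classifier.onnx" and the 1x3x224x224 input are hypothetical placeholders, and a real edge deployment would normally select a vendor-specific execution provider rather than the default CPU one.

```python
# Minimal local (no-cloud) inference sketch using ONNX Runtime.
# "edge_classifier.onnx" and the input shape are hypothetical placeholders.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "edge_classifier.onnx",
    providers=["CPUExecutionProvider"],  # swap in an NPU/GPU provider on real hardware
)

input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for a camera frame

logits = session.run(None, {input_name: frame})[0]
print("predicted class:", int(np.argmax(logits)))
```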
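And to put a rough number on the on-device large-model claim in item 2, the arithmetic below estimates the weight memory of an assumed 7-billion-parameter model at different precisions, ignoring activations and KV cache:

```python
# Back-of-the-envelope weight-memory estimate for on-device large models.
PARAMS = 7e9  # assumed 7B-parameter model

for label, bytes_per_param in [("FP16", 2.0), ("INT8", 1.0), ("INT4", 0.5)]:
    gib = PARAMS * bytes_per_param / 2**30
    print(f"{label}: ~{gib:.1f} GiB of weights")
# Prints roughly 13.0, 6.5, and 3.3 GiB: low-bit quantization is what lets
# multi-billion-parameter models fit alongside everything else on a phone or PC.
```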
III. Convergence and Challenges: Building an Open, Symbiotic Smart Hardware Ecosystem
During conference forums and roundtables, industry leaders and scholars focused on two critical issues:
Hardware-Software Co-Optimization is Paramount: Experts emphasized that hardware design divorced from algorithm optimization is a castle in the air. Chip architects need deep collaboration with AI framework developers (e.g., PyTorch, TensorFlow) and model creators to fully unleash hardware potential through compiler optimization, custom operator libraries, and support for sparsity/quantization. Open-source hardware-software interface standards (e.g., ONNX, MLIR) are foundational for building a thriving ecosystem.
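As a small illustration of the kind of hand-off such interface standards enable, the sketch below exports a toy PyTorch model to ONNX so that a downstream hardware compiler or runtime can apply its own quantization and operator-fusion passes; the model and file name are illustrative, not a specific vendor flow.

```python
# Sketch: hand a framework-level model (PyTorch) to hardware toolchains via ONNX.
import torch
import torch.nn as nn

# Toy network standing in for a real model.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
example_input = torch.randn(1, 128)

torch.onnx.export(
    model,
    example_input,
    "toy_model.onnx",  # hypothetical output path
    input_names=["features"],
    output_names=["logits"],
    dynamic_axes={"features": {0: "batch"}, "logits": {0: "batch"}},
)
# The resulting hardware-neutral graph is what vendor compilers consume to
# apply quantization, sparsity, and custom-operator optimizations.
```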
Energy Efficiency & Sustainability Become Core Metrics: With the explosive growth in demand for AI computing, its staggering energy consumption can no longer be ignored. Whether for cloud data centers or vast fleets of edge devices, computing performance per watt (TOPS/W) has become the gold standard for evaluating chip and hardware solutions. Advanced cooling technologies such as liquid and immersion cooling, alongside high-efficiency chip design, received unprecedented attention. Green computing is the essential path to the industry's sustainable development.
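A quick worked example of the TOPS/W metric, using hypothetical accelerator figures rather than any product shown at the conference:

```python
# Energy efficiency expressed as TOPS/W: tera-operations per second per watt.
def tops_per_watt(tera_ops_per_second: float, power_watts: float) -> float:
    return tera_ops_per_second / power_watts

# Hypothetical parts, for illustration only:
edge_npu = tops_per_watt(40.0, 8.0)             # 40 TOPS at 8 W
datacenter_card = tops_per_watt(2000.0, 700.0)  # 2000 TOPS at 700 W

print(f"edge NPU: {edge_npu:.1f} TOPS/W, datacenter card: {datacenter_card:.1f} TOPS/W")
```

By this metric a small edge part can look more efficient per watt even though the data-center card delivers far more absolute throughput, which is why the two classes of hardware are evaluated separately.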