Arm's Vision for the Future of Computing
Arm is constantly thinking about the future of computing. Whether it is the capabilities of the latest architectures or new technologies for chip solutions, everything Arm creates and designs is focused on future technology and the user experience.
With its unique position in the technology ecosystem, Arm has a comprehensive understanding of the highly specialized and interconnected global semiconductor supply chain, covering markets such as data centers, IoT, automotive, and smart devices. As a result, Arm has broad and deep insights into the direction of future technology development and the major trends that are likely to emerge in the coming years.
Based on this, Arm has made the following predictions for 2025 and beyond, spanning everything from the future of AI to chip design, along with key trends across different technology markets.
Rethinking Chip Design: Chiplets Become Crucial Components of Solutions
From a cost and physics perspective, traditional chip manufacturing is becoming increasingly challenging. The industry needs to rethink chip design and break away from traditional methods. For example, there is growing recognition that not all functions need to be integrated into a single monolithic chip. As foundries and packaging companies explore new pathways and push past the limits of Moore's Law along new dimensions, approaches such as chiplets are starting to emerge.
The implementation of chiplet technologies is receiving significant attention and is having a profound impact on core architectures and microarchitectures. Designers need to build an understanding of the trade-offs among different implementation technologies, including process nodes and packaging, so they can leverage their characteristics to improve performance and efficiency.
Chiplets can effectively address specific market needs and challenges and are expected to keep developing in the coming years. In the automotive market, chiplets help companies achieve automotive-grade certification during chip development. Chiplets also help scale chip solutions and achieve differentiation by combining various computing components: compute-focused chiplets can offer different core counts, while memory-focused chiplets can offer different memory sizes and types. System integrators can combine and package these chiplets to develop highly differentiated products, as sketched below.
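To make this mix-and-match model concrete, here is a minimal Python sketch of a hypothetical chiplet-based package assembled from compute and memory chiplets. The chiplet names, attributes, and totals are purely illustrative assumptions and are not tied to any real Arm product or to the CSA specification.

```python
from dataclasses import dataclass

@dataclass
class ComputeChiplet:
    name: str
    cores: int          # core count differentiates compute chiplets
    process_node: str   # e.g. a leading-edge node for compute

@dataclass
class MemoryChiplet:
    name: str
    capacity_gb: int    # size differentiates memory chiplets
    mem_type: str       # e.g. "LPDDR5X" or "HBM3"

@dataclass
class Package:
    """A hypothetical chiplet-based SoC package assembled by a system integrator."""
    compute: list[ComputeChiplet]
    memory: list[MemoryChiplet]

    def total_cores(self) -> int:
        return sum(c.cores for c in self.compute)

    def total_memory_gb(self) -> int:
        return sum(m.capacity_gb for m in self.memory)

# Two differentiated products built from the same catalogue of parts.
edge_part = Package(
    compute=[ComputeChiplet("cpu-8c", 8, "5nm")],
    memory=[MemoryChiplet("lpddr", 16, "LPDDR5X")],
)
server_part = Package(
    compute=[ComputeChiplet("cpu-64c", 64, "3nm"), ComputeChiplet("cpu-64c", 64, "3nm")],
    memory=[MemoryChiplet("hbm", 96, "HBM3")] * 4,
)

print(edge_part.total_cores(), edge_part.total_memory_gb())      # 8 16
print(server_part.total_cores(), server_part.total_memory_gb())  # 128 384
```

The point of the sketch is the composition: the same catalogue of parts yields very different products depending on how an integrator combines them.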
"Recalibrating" Moore's Law
For decades, Moore's Law held that the number of transistors on a chip would double roughly every two years, bringing higher performance and lower power consumption with each generation. However, the approach of continuously packing more transistors onto a single monolithic chip to gain performance and reduce power is no longer sustainable. The semiconductor industry needs to rethink and recalibrate Moore's Law and what it means for the industry.
One approach is to stop focusing solely on raw performance in chip design and instead focus on metrics such as performance per watt, performance per unit area, and total cost of ownership. In addition, new metrics should be introduced to address system implementation challenges, which are among the biggest issues facing development teams, ensuring that integrating IP into a system-on-chip (SoC) and the wider system does not degrade performance. As the tech industry races to run AI workloads more efficiently, these metrics will only become more critical.
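To illustrate what shifting to these metrics looks like in practice, the following sketch (with entirely made-up numbers) compares two hypothetical designs on performance per watt, performance per mm², and a very rough total-cost-of-ownership figure, rather than on raw performance alone.

```python
from dataclasses import dataclass

@dataclass
class Design:
    name: str
    perf: float       # abstract performance score (e.g. inferences/s)
    power_w: float    # average power draw in watts
    area_mm2: float   # silicon area in mm^2
    unit_cost: float  # purchase cost per chip, USD

    def perf_per_watt(self) -> float:
        return self.perf / self.power_w

    def perf_per_mm2(self) -> float:
        return self.perf / self.area_mm2

    def tco(self, years: float, kwh_price: float = 0.15) -> float:
        """Very rough TCO: purchase cost plus energy cost over the lifetime."""
        energy_kwh = self.power_w / 1000 * 24 * 365 * years
        return self.unit_cost + energy_kwh * kwh_price

# Illustrative only: B is slower in absolute terms but wins on every efficiency metric.
a = Design("A", perf=1000.0, power_w=250.0, area_mm2=600.0, unit_cost=900.0)
b = Design("B", perf=800.0,  power_w=120.0, area_mm2=350.0, unit_cost=700.0)

for d in (a, b):
    print(d.name, round(d.perf_per_watt(), 2), round(d.perf_per_mm2(), 2),
          round(d.tco(years=3), 2))
```

A purely performance-driven comparison would pick design A; the efficiency and cost metrics point the other way, which is exactly the recalibration being argued for here.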
Real Business Differentiation Through Chip Solutions
To achieve real business differentiation through chip solutions, companies are increasingly seeking specialized chips. This is reflected in the growing prevalence of computing subsystems, which allow companies of all sizes to differentiate and customize their solutions by configuring each subsystem to perform or support specific computational tasks or specialized functions.
The Growing Importance of Standardization
Standardized platforms and frameworks are crucial for ensuring that ecosystems can deliver products and services with differentiated advantages; they not only create real business value but also save time and cost. As chiplets with different computing components emerge, standardization is becoming more important than ever, because it allows hardware from different vendors to work together seamlessly. Arm has already partnered with over 50 technology partners to develop the Arm Chiplet System Architecture (CSA), and as more partners join, Arm and its partners will drive standardization across the chiplet market. In the automotive sector, this aligns with the founding mission of SOAFEE, which aims to decouple hardware from software in software-defined vehicles (SDVs), increasing flexibility and interoperability between computing components and speeding up development cycles.
Unprecedented Collaboration Between Chips and Software in Ecosystems
As chip and software complexity continues to rise, no single company can handle all aspects of chip and software design, development, and integration. Deep collaboration within the ecosystem is essential. This type of collaboration provides unique opportunities for companies of all sizes to offer differentiated computing components and solutions based on their core competencies. This is especially important in the automotive sector, where the entire supply chain, including chip suppliers, Tier 1 suppliers, automakers, and software providers, must come together to share expertise, technology, and products to define the future of AI-driven SDVs and enable end users to realize the full potential of AI.
The Rise of AI-Enhanced Hardware Design
The semiconductor industry will increasingly adopt AI-assisted chip design tools, using AI to optimize chip layouts, power distribution, and timing closure. This will not only improve performance but also shorten the chip solution development cycle, allowing smaller companies to enter the market with specialized chips. AI will not replace human engineers, but it will become a vital tool for tackling the growing complexity of modern chip designs, particularly for highly efficient AI accelerators and edge devices.
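As a toy illustration of the underlying idea (not any vendor's EDA flow), the sketch below runs an automated search loop that perturbs a placement and keeps changes that improve a combined cost; the cells, netlist, and greedy hill-climbing search are all invented stand-ins for the far more sophisticated ML-guided optimization of layout, power, and timing that real tools perform at scale.

```python
import random

# Hypothetical blocks to place on a tiny 10x10 grid; names and netlist are invented.
cells = ["cpu", "npu", "sram", "phy", "pmic"]
nets = [("cpu", "sram"), ("cpu", "npu"), ("npu", "sram"), ("phy", "pmic")]

def cost(placement: dict[str, tuple[int, int]]) -> float:
    """Toy objective: total Manhattan wirelength stands in for wirelength/power/timing."""
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in nets)

def optimize(iterations: int = 5000) -> dict[str, tuple[int, int]]:
    placement = {c: (random.randrange(10), random.randrange(10)) for c in cells}
    best = cost(placement)
    for _ in range(iterations):
        cell = random.choice(cells)
        old = placement[cell]
        placement[cell] = (random.randrange(10), random.randrange(10))  # random perturbation
        new = cost(placement)
        if new <= best:
            best = new             # keep improving moves
        else:
            placement[cell] = old  # revert worsening moves (greedy hill climbing)
    return placement

print(cost(optimize()))
```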
Continued Development of AI Inference
In the coming year, AI inference workloads will continue to increase, helping ensure the broad and lasting adoption of AI. This trend is driven by the growing number of devices and services with AI capabilities. Most everyday AI inference, such as text generation and summarization, will run on smartphones and laptops, giving users faster and more secure AI experiences. To support this growth, these devices will need technologies that enable faster processing, lower latency, and efficient power management. Key Armv9 architecture features, SVE2 (Scalable Vector Extension 2) and SME2 (Scalable Matrix Extension 2), will enable Arm CPUs to execute AI workloads quickly and efficiently.
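As a practical illustration of checking whether such features are present before dispatching an on-device inference path, the sketch below parses the Features line that recent Linux kernels expose in /proc/cpuinfo on AArch64 systems. It assumes a Linux environment, the lowercase flag names used by the kernel (sve, sve2, sme, sme2), and a purely illustrative dispatch decision at the end.

```python
def cpu_ai_features(cpuinfo_path: str = "/proc/cpuinfo") -> set[str]:
    """Return the relevant AArch64 feature flags reported by the Linux kernel.

    Assumes an AArch64 Linux system; on other platforms the Features line is absent.
    """
    wanted = {"sve", "sve2", "sme", "sme2"}
    found: set[str] = set()
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.lower().startswith("features"):
                    flags = set(line.split(":", 1)[1].split())
                    found |= wanted & flags
    except OSError:
        pass  # e.g. not running on Linux
    return found

features = cpu_ai_features()
if "sme2" in features or "sme" in features:
    print("Matrix extensions available: prefer an SME-optimized inference kernel")
elif "sve2" in features:
    print("SVE2 available: prefer a vectorized inference kernel")
else:
    print("Fall back to a baseline (e.g. NEON) code path")
```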
Edge AI Takes Off
In 2024, many AI workloads shifted from large data centers to the edge (that is, to devices), offering businesses both energy savings and privacy and security advantages. By 2025, we are likely to see advanced hybrid AI architectures that efficiently distribute AI tasks between edge devices and the cloud. In these systems, AI algorithms on edge devices will identify important events, while cloud-based models will step in to provide additional information when needed. Whether an AI workload runs locally or in the cloud will depend on factors such as available energy, latency requirements, privacy concerns, and computational complexity.
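The routing logic such hybrid systems apply can be sketched as a simple decision function. The field names, thresholds, and ordering of the checks below are illustrative assumptions, not a description of any shipping product.

```python
from dataclasses import dataclass

@dataclass
class Request:
    latency_budget_ms: float   # how quickly a response is needed
    privacy_sensitive: bool    # e.g. personal audio or health data
    est_compute_gflops: float  # rough cost of the model invocation
    battery_pct: float         # remaining energy on the device

# Illustrative capability limits for the local device.
EDGE_MAX_GFLOPS = 50.0
LOW_BATTERY_PCT = 15.0
CLOUD_ROUND_TRIP_MS = 120.0

def route(req: Request) -> str:
    """Decide where to run an AI workload in a hypothetical hybrid deployment."""
    if req.privacy_sensitive:
        return "edge"                      # keep sensitive data on-device
    if req.latency_budget_ms < CLOUD_ROUND_TRIP_MS:
        return "edge"                      # the cloud round trip alone would miss the budget
    if req.est_compute_gflops > EDGE_MAX_GFLOPS:
        return "cloud"                     # too heavy for local hardware
    if req.battery_pct < LOW_BATTERY_PCT:
        return "cloud"                     # conserve energy on the device
    return "edge"

print(route(Request(latency_budget_ms=50, privacy_sensitive=False,
                    est_compute_gflops=10, battery_pct=80)))   # edge
print(route(Request(latency_budget_ms=500, privacy_sensitive=False,
                    est_compute_gflops=200, battery_pct=80)))  # cloud
```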
Edge AI represents the decentralization of AI, allowing devices to perform smarter, faster, and more secure processing closer to the data source. This is especially crucial for industries requiring high performance and localized decision-making, such as industrial IoT and smart cities.