Graphcore aims to revolutionize the AI chip market by investing in a new IPU architecture and the world’s first software toolchain designed specifically for machine intelligence.
Conventional Artificial Intelligence (AI) chips feature a traditional GPU architecture, originally developed for graphics rendering. Chips with this architecture can simulate intelligence because of their proven efficiency at data operations; efficient data operations at many scales are the basis of machine learning, the foundational core of AI. However, a UK-based company called Graphcore aims to disrupt the semiconductor industry and revolutionize the AI chip market by developing the world’s most advanced AI chip, built on an entirely new architecture.
A brief history of Graphcore
Graphcore was founded in 2016 by semiconductor industry veterans Nigel Toon and Simon Knowles, who serve as the company’s CEO and CTO respectively. Prior to Graphcore, they were involved in companies such as Altera, Element14, and Icera. At present, Graphcore’s valuation is a mammoth $1.7 billion. It has partnered with Dell, the world’s largest server provider; Samsung, the world’s largest consumer electronics company; and Bosch, the world’s largest supplier of electronics for the automotive industry. Of late, the company has also secured $200 million in funding from BMW, Microsoft, and leading financial investors such as Atomico, Merian Chrysalis, Sofina, and Sequoia. With such financial wealth at its disposal, Graphcore aims to deliver the world’s most advanced AI chip at scale, featuring an architecture vastly different from that of traditional GPUs.
Market leaders in AI chips
When it comes to the production and distribution of AI chips, Nvidia is the dominant market player. It has evolved its GPU chips over time and has single-handedly maintained a near-monopoly in the industry, on account of a clear, coherent strategy and an effective product in the marketplace. Alongside Nvidia, Google has also invested significant money in AI chips, while Intel sold more than $1 billion worth of AI chips in 2017. These companies build specialized chips (ASICs) that excel at specific mathematical operations on data, optimized for a specific workload. However, according to CEO Nigel Toon, these chips are not designed to take on tomorrow’s workloads.
The limitations of energy efficiency call for a new architecture
Graphcore’s AI chips would feature an Intelligence Processing Unit (IPU), designed to handle tasks that are impractical for ASICs and GPUs. CTO Simon Knowles was quick to dismiss speculation that the chips would be neuromorphic in nature. A neuromorphic AI chip is modeled on the human brain, mirroring its neurons and synapses in the chip’s architecture, and communicates via electrical spikes similar to those found in the brain. Knowles explains, “A basic analysis of energy efficiency immediately concludes that an electrical spike (two edges) is half as efficient for information transmission as a single edge, so following the brain is not automatically a good idea.” He reiterates that while computer architects should aim to learn how the brain computes, literally copying it in silicon is not the answer.
Energy efficiency is not just a limiting factor for neuromorphic architectures, but also for Moore’s law – which states that the number of transistors in a dense integrated circuit doubles roughly every two years. According to Toon, some fundamental limits of Moore’s law have already been reached: not the maximum number of transistors, but the lowest voltage at which chips can operate. He says, “Your laptop still runs at 2 GHz, it’s just got more cores in it.” Adding more transistors therefore won’t make chips run faster. AI chips require thousands of cores running in parallel, hence they need a different architecture to operate faster at the given voltage.
Handling high-dimensional data structures efficiently
IPUs are specifically designed for machine intelligence workloads and are optimized for reinforcement learning and various other emerging approaches. According to Toon, “The IPU architecture enables us to outperform GPUs — it combines massive parallelism with over 1000 independent processor cores per IPU and on-chip memory so the entire model can be held on the chip.” So, how exactly does the IPU architecture achieve that? The answer lies in data structures. The data structures of machine learning are high-dimensional and complex, making them vastly different from one another, so even the most powerful GPUs lose efficiency when handling them. Graphcore claims its IPU architecture can process these data structures 10–100 times faster.
In addition, Graphcore has developed Poplar, the first software toolchain designed specifically for machine intelligence. It evolves the traditional microprocessor software stack, allowing developers to define everything in terms of graphs and tensors rather than vectors and scalars. According to Toon, “Traditional toolchains do not have the capabilities required to provide an easy and open platform for developers. The models and applications of Poplar are massively parallel and rely on millions of identical calculations to be performed at the same time.”
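To make the graphs-and-tensors idea concrete, here is a minimal sketch in plain Python with NumPy – not Poplar’s actual API, and all class and function names here are invented for illustration – of expressing a computation as a graph of tensor operations rather than as scalar code:

```python
# Illustrative only: a toy "computation as a graph of tensor operations"
# style, NOT Poplar's real API. Node, tensor, matmul, relu are invented.
import numpy as np

class Node:
    """One tensor operation in a computational graph."""
    def __init__(self, op, inputs=(), value=None):
        self.op, self.inputs, self.value = op, list(inputs), value

def tensor(value):  # a graph input holding a constant tensor
    return Node("input", value=np.asarray(value, dtype=float))

def matmul(a, b): return Node("matmul", [a, b])
def relu(x):      return Node("relu", [x])

def evaluate(node):
    """Walk the graph, computing each node from its evaluated inputs."""
    if node.op == "input":
        return node.value
    args = [evaluate(i) for i in node.inputs]
    if node.op == "matmul": return args[0] @ args[1]
    if node.op == "relu":   return np.maximum(args[0], 0)
    raise ValueError(f"unknown op {node.op}")

# A one-layer model written as whole-tensor graph nodes, not scalar loops.
x = tensor([[1.0, -2.0]])
w = tensor([[3.0], [1.0]])
y = relu(matmul(x, w))
print(evaluate(y))  # [[1.]]
```

The point of the style, as the article describes it, is that the whole model exists as a graph before anything runs, so a compiler is free to analyze and parallelize it.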
Learning from complex data
Since the advent of computers, humans have told them what to do in a systematic way, using algorithms and programs. Now, however, machines must learn to develop the algorithm from data – the basis of machine learning. This has altered the development and behavior of myriad applications. According to Toon, “With enough data and compute, we can build models that outperform humans in pattern recognition tasks.” The key construct that lets a toolchain like Poplar succeed is the graph: the graph represents both the knowledge model and the application built by the toolchain. Poplar is built around a computational graph abstraction; the intermediate representation (IR) of its graph compiler is a large directed graph.
Using graphs as a fundamental metaphor, Graphcore has shared images of its graph compiler’s internal representation: the entire knowledge model broken down to expose the huge parallel workloads that Graphcore schedules and executes across the IPU processor. The IPU processor and Poplar were designed together, and this philosophy of developing the silicon architecture and the software programming environment at the same time reflects the culture and environment of Graphcore.
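A hypothetical sketch of how a directed computational graph exposes parallel work (this is not Graphcore’s scheduler – the function and node names are invented): group the nodes into “waves” such that every node in a wave depends only on earlier waves, so all nodes in a wave could run simultaneously across many cores.

```python
# Invented example: partition a directed dependency graph into levels of
# mutually independent nodes, the kind of parallelism a graph compiler
# can expose. Not Graphcore's actual algorithm.
from collections import defaultdict

def parallel_levels(edges, nodes):
    """Return waves of nodes; each wave depends only on earlier waves."""
    indeg = {n: 0 for n in nodes}
    succ = defaultdict(list)
    for a, b in edges:          # edge (a, b) means b depends on a
        succ[a].append(b)
        indeg[b] += 1
    level = [n for n in nodes if indeg[n] == 0]
    levels = []
    while level:
        levels.append(sorted(level))
        nxt = []
        for n in level:         # releasing a node unblocks its successors
            for m in succ[n]:
                indeg[m] -= 1
                if indeg[m] == 0:
                    nxt.append(m)
        level = nxt
    return levels

# Two matrix multiplies feeding an add: the multiplies share no edge,
# so they land in the same wave and are free to run in parallel.
edges = [("x", "mm1"), ("w1", "mm1"), ("x", "mm2"),
         ("w2", "mm2"), ("mm1", "add"), ("mm2", "add")]
nodes = ["x", "w1", "w2", "mm1", "mm2", "add"]
print(parallel_levels(edges, nodes))
# [['w1', 'w2', 'x'], ['mm1', 'mm2'], ['add']]
```

In a real model the graph has millions of such nodes, which is where the “millions of identical calculations performed at the same time” that Toon describes comes from.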
Revolutionizing AI chips
With a new architecture and toolchain, Graphcore will provide IPU developers full open-source access to its optimized graph libraries so they can see how Graphcore builds applications. On top of that, Graphcore’s AI chips could give Nvidia a run for its money and may accelerate an already growing market. According to a recent report by Allied Market Research, the AI chip market is projected to reach $91,185 million by 2025, growing at a CAGR of 45.4% from 2018 to 2025. New applications could spring up thanks to technological advances like Graphcore’s, which also have the potential to disrupt the existing semiconductor industry.