When you upgrade to a new iPhone, as millions will next month at the unveiling of the "A13"-powered iPhone 11, you're voting with your dollars for a future driven by advanced new silicon of incredible sophistication. There is no way Apple or any other company could design and manufacture this future without you. The recent history of Google's Pixel Visual Core explains why.
Google’s Pixel Visual Core silicon was not ready to roll
Two years ago, Google stoked tremendous excitement from its fan base when it announced that the Pixel 2 incorporated custom silicon branded the Pixel Visual Core. This wasn't a complete "System on a Chip" with multiple integrated processing cores and other specialized controllers like Apple's A12, but rather a specialized Image Signal Processor with programmable features designed to augment the phone's off-the-shelf Snapdragon chip.
Yet, the Pixel 2 shipped without any support for actually using the Pixel Visual Core. Google stoked further excitement by announcing that it would be “activated” in the future, enabling third parties to take advantage of it to do all sorts of interesting things.
Months later, Google itself hadn’t figured out how to reliably accelerate its camera app to take full advantage of the Pixel Visual Core, and third parties were effectively limited to using it to capture their own HDR shots, far short of the explosion of magic that Pixel 2 buyers were anticipating.
The fact that the Pixel 2 barely found any audience at all erased any real interest among developers in writing novel software specifically for an incredibly tiny segment of the Android installed base. After Google released the Pixel 3, users worked to get its camera software running on the Pixel 2 to take advantage of its Pixel Visual Core, but the result was buggy and problematic.
After several months of even worse Pixel 3 sales, Google announced a cheaper Pixel 3a, which lacked the Pixel Visual Core hardware entirely, meaning a significant portion of the Pixel installed base doesn't even have it. This saga illustrates that no amount of hype about custom silicon means anything unless a significant market exists to induce real applications of it.
The unique business of silicon
Microprocessors were invented decades ago in Silicon Valley, a region that had previously been known for its bucolic fruit orchards. Apple's corporate name sprang directly from that association. When unveiling the design of Apple Park, Steve Jobs nodded to this nostalgic past, pointing out that the new tech campus would feature liberal expanses of green space that included rows of fruit trees.
Yet outside of this symbolic gesture, Cupertino, Calif., and much of Silicon Valley is paved over with car-centric development featuring freeways that cut between the glass and steel headquarters of various tech companies. A freeway even cut through the parcel of land designated for Apple Park, shaping its design far more significantly than the goal of replanting some fruit trees.
The former orchards and farmland of Silicon Valley were rapidly paved over in the 1980s due to the vastly higher commercial value of silicon chips over fruit. The semiconductor technologies that began to commercially emerge there in the 1970s enabled mass manufacturing of increasingly tiny electronics components on a tremendous scale.
Rather than slowly growing edible fruit in orchards covering acres of land, silicon semiconductor manufacturing laid out invisibly tiny electronic rows of functional, computational engines along with storage sheds for the electrons they sort. Once designed, a chip blueprint can be mass-produced using chemical photography to yield enormous numbers of usable chips at relatively little cost.
However, the technology that drives advancements in silicon, designing those chips to be smaller, faster and more powerful, is extraordinarily expensive. The only way to deliver consistent technological increases is to develop massive markets capable of paying for this work. Google's expectation that it could build some new silicon and that developers would flock to take full advantage of it was simply wrong.
Apple, silicon and scale
Apple was founded as a way to turn semiconductor chips into consumer products capable of delivering real value to regular users, making them more productive and unleashing their creativity while keeping them entertained, socially connected and better educated.
Apple didn't enter the computing business as a chipmaker. Instead, it focused on the user experience of finished computers, and delegated chip design and production to others. In 1984, the all-new Macintosh was powered by Motorola's 68000, a chip designed without any input from Apple. It wasn't until the end of the 1980s that Apple discovered that its ambitious plans for the future simply couldn't be powered by existing chip designs.
To deliver the future of mobile devices it imagined for the Newton MessagePad, Apple worked with Acorn and VLSI in 1990 to develop the new ARM6 architecture optimized for battery-powered mobile devices. In 1991 it also began work with IBM and Motorola to develop an all-new “PC” class processor for future Macs, which was delivered as PowerPC.
While both of these developments dramatically advanced the state of the art, neither was ultimately successful for its intended purpose. Apple's 1990s Newton tablets couldn't sell enough units to justify the ongoing development of ARM chips for advanced tablets. Apple's PowerPC Macs also couldn't rival the economies of scale that were driving vast numbers of PCs powered by Intel's x86 chips.
At the end of the decade in which Newton failed to take off, Apple ended up selling its shares in the ARM partnership for more than $1 billion, helping to finance the future development of new technology products. While Newton was ultimately a dud, ARM gradually became valuable largely because Nokia and other phone makers had adopted ARM chips as a cost-effective, efficient architecture for powering tens of millions of basic phones.
In 2001, three years after Steve Jobs canceled Newton as a product, Apple introduced the iPod as a new device for carrying around a large library of digital music. The incredible scale at which this new ARM-based product sold helped to drive the continued development of ARM chips customized specifically to make better, more desirable iPods in the future.
In 2005, Apple similarly adopted the industry standard of Intel’s x86 chips for its Macs. Yet when Apple decided to scale down its Mac platform into a handheld phone, Intel couldn’t imagine that Apple’s new product could make sense financially, or support the expense required to maintain a mobile version of its desktop chips. To launch the iPhone, Apple instead partnered with Samsung, the supplier building ARM chips for its iPods.
A few years later, Apple's success with iPhone made Intel very interested in providing chips to power Apple's next new product category: a new tablet based on Apple's same scaled-down Mac platform. Yet rather than using chips from Intel, Apple was preparing to enter the silicon business on its own.
That involved a series of acquisitions, including Apple's 2008 purchase of PA Semi, a chip design firm selling PWRficient chips based on the PowerPC architecture. Within just a couple of years of assembling a silicon design team, Apple co-produced with Samsung a new design it branded A4, which powered both its new iPad and its fourth-generation iPhone, the iPhone 4.
Apple had solved the problem of designing its own silicon by creating high-volume markets for its mobile products capable of sustaining future development, although this ultimately took a full decade to deliver.
As sales of iPads and iPhones rocketed upward, it became increasingly clear that owning its technology would not only give Apple tremendous cost savings, but would also free the company to tightly optimize its silicon to build exactly the kinds of products it was imagining for the future. The higher Apple could drive its volumes of sales, the more cost-effective it became to deliver new custom silicon.
Apple goes silicon silent
Owning its internal silicon chip design team also helped to keep Apple’s future direction a secret. If it were only buying chips off the shelf from Intel, Samsung, Dialog and others, it would be relatively obvious what the company could deliver. Apple’s competitors would also be aware of these suppliers’ road maps and essentially have the same access to buy the same parts, erasing much potential for Apple to outmaneuver them or surprise the market.
Across the 2010s, Apple relentlessly introduced regular new generations of its A-series chips powering new iPhones and iPads. This was a tremendously expensive investment, but it also meant that the profits Apple generated were invested back into its silicon future, rather than effectively subsidizing the entire industry.
Conversely, on the Mac side, Apple's use of Intel x86 chips has supported advancements that generally benefit all PC makers. Intel has even directly invested its profits into bringing PC cloners up to speed in competing against Apple, particularly in "Ultrabooks," Intel's specification for building notebooks capable of competing with Apple's MacBook Air. If Apple had relied upon Intel's mobile Atom x86 chips to deliver iPad, it would be in the same frenemy position with iOS.
More than a processor
Instead, Apple has been able to tightly customize the performance of its ARM processor designs to the needs of its devices. But unlike PowerPC or Intel x86 processors, Apple's A-series chips are more than just a microprocessor. Referred to as a "System on a Chip," the A12 Bionic in Apple's latest phones also packs in specialized coprocessor units, including the company's GPU, its custom-designed Image Signal Processor optimized to perform camera and imaging tasks, and a specialized Neural Engine designed specifically to accelerate Machine Learning tasks.
Additionally, Apple's A12 SoC includes the company's internally designed Integrated Memory Controller (IMC), which handles data moving in and out of the processor. Because the SoC has multiple processing units, the IMC orchestrates a unified memory architecture that shares memory between the ARM CPU, the Apple GPU, the ISP, and the Neural Engine.
Apple's IMC design erases the need to copy data between different memory stores when a large set of data needs to shift between general computing functions and specialized tasks that can be accelerated by the GPU or Neural Engine. On a mobile device, the path between discrete data stores would be far slower than the unified memory architecture shared between them. Additionally, other considerations, including energy efficiency, are optimized into the A12 in ways that wouldn't apply on a desktop computer.
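The difference between shuttling copies around and sharing one pool of memory can be sketched conceptually in plain Python. This is not Apple's API, just an illustration under assumed names: `copy_between_stores` models discrete CPU/GPU memory, while `share_unified` models a zero-copy view over the same bytes.

```python
# Conceptual sketch only (hypothetical names, not Apple's actual APIs):
# contrast copying a buffer between separate memory stores with
# sharing one buffer via a zero-copy view, as in a unified memory design.

def copy_between_stores(cpu_buffer: bytearray) -> bytearray:
    """Discrete-memory model: the 'GPU' gets its own duplicate of the data."""
    return bytearray(cpu_buffer)  # full copy; costs time and energy

def share_unified(cpu_buffer: bytearray) -> memoryview:
    """Unified-memory model: the 'GPU' sees the very same bytes, no copy."""
    return memoryview(cpu_buffer)  # zero-copy view of the same storage

image = bytearray(8)          # stand-in for a frame of camera data

shared = share_unified(image)
image[0] = 0xFF               # a CPU-side write...
assert shared[0] == 0xFF      # ...is immediately visible through the shared view

copied = copy_between_stores(image)
image[1] = 0xAA               # but later CPU writes...
assert copied[1] == 0x00      # ...never reach the stale duplicate
```

The sketch shows why a unified architecture wins on a mobile device: the shared view makes every update visible to all processing units at once, while the discrete model forces a fresh (and energy-hungry) copy every time data crosses between them.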
Apple's A12 SoC is also unique in that it includes a Secure Enclave, essentially a secure computing unit with its own storage that is dedicated to handling highly private data, including Face ID biometrics. The A12 also has other custom features, including Apple's Storage Controller for managing SSD writes and encryption, as well as media encoders that accelerate the capture and playback of high-resolution video using HEVC.
So beyond just making iOS devices fast overall, Apple's custom SoC gives them specialized powers in specific areas of computing, enabling functionality stretching from data writes to camera optimization to image capture and data privacy. Over the past year, Apple has brought many of these specialized A-chip features to Macs using the T2 chip, a version of the A10 sporting a Secure Enclave that handles Touch ID, along with support for encryption, imaging, the Touch Bar, and other features.
Integrated design of custom silicon at Apple
Rather than taking an off-the-shelf chip from a supplier and trying to figure out how to make the best use of it, Apple has noted that its design teams develop custom silicon in advance with constant feedback from groups with specific needs, such as teams working on security or imaging functionality, or efforts such as Augmented Reality, Pro Apps for creatives, and gaming, each of which has specific bottlenecks that can potentially hamper performance.
In an interview with Ars Technica last year, Apple’s head of marketing Phil Schiller noted that the company’s silicon designers regularly meet with other groups to explore how their needs can be accommodated in future hardware designs.
"We're planning, we really want more insight," Schiller said, posing the silicon team's questions rhetorically. "What exactly do you want to do, how do you want it to work? What are the bottlenecks, where can we start creating silicon that ultimately will be part of a well-crafted system?"
Schiller added, “Those meetings happen multiple times a week. It’s not like there’s some big get together, once a year, just to align schedules. They are having these discussions weekly about a growing number of topics. It’s not a finite set. It’s a growing number.”
These fleshed-out objectives shape the design of Apple's new silicon years before it is released. As a result, Apple can deploy fully implemented, hardware-accelerated features that are ready to work as soon as the new chips become available. Last year, Apple's new camera features, AR, and Memoji shipped ready to roll on new iPhones, with no wait to "activate" them for use.
While Google got more sympathetic media coverage, Apple has been building an ISP into all of its iPhones for years now. It effectively works as part of the camera system, both for users and third-party apps. Every new iPhone delivers further advancements in the ISP, allowing every new generation to capture better photos, higher frame rates, improved exposure and a variety of other advancements that enhance imaging inside and outside the camera app.
At its upcoming introduction of the new iPhone, Apple will detail its latest year of work in custom silicon supporting new camera features, AR enhancements, memory and storage, new processing capabilities, and other benefits driven by silicon advancements, all made possible by the hundreds of millions of buyers of iPhones.