Apple's Rosetta Move

At the virtual WWDC 2020 conference this year, Apple announced that it would be releasing Apple Silicon based Macs in the future, and that developers should start preparing themselves for the upcoming hardware transition. Apple Silicon is an ARM based processor, not unlike the chips used in the current high-end iPad Pros.

The transition to the new architecture should not impact users, Apple claims, because most software can be recompiled for ARM easily using Xcode, and software that can't be recompiled can be handled by a translate-on-install process called Rosetta 2. Rosetta 2 translates most x86_64 instructions into equivalent ARM instructions so that users can continue to enjoy the applications they currently use without having to buy new versions. The translation also applies at run time, working with JIT compilers to translate code on the fly as it is generated. However, not all instructions are supported: x86_64 code that uses AVX instructions will not work, although applications that link against the Accelerate framework will automatically use the native instruction set of the platform they run on.
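Applications that need to know whether they are running natively or under translation can ask the system: Apple documents a sysctl named sysctl.proc_translated for this purpose. The following is a minimal Swift sketch of that check, assuming a macOS target where the Darwin module is available; it is an illustration rather than anything the Rosetta documentation mandates.

```swift
import Darwin

// Returns true when the current process is running as x86_64 code translated
// by Rosetta 2, false when running natively, and nil if the answer cannot be
// determined.
func isRunningUnderRosetta() -> Bool? {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    if sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0) == -1 {
        // ENOENT means the sysctl does not exist, i.e. the process is native.
        return errno == ENOENT ? false : nil
    }
    return translated == 1
}

print(isRunningUnderRosetta() == true
    ? "Running under Rosetta 2 translation"
    : "Running natively")
```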

Apple is no stranger to either translation software called Rosetta or to chip transitions in general. Because most software on macOS is dynamically linked against Apple-provided frameworks, as long as those frameworks provide implementations for multiple architectures, users can run applications that take advantage of the host's native architecture, wherever they run. Apple's transition from PowerPC to Intel in the mid 2000s was facilitated by a run-time translation tool called Rosetta, and macOS applications and libraries/frameworks have long been able to take advantage of Mach-O's fat (universal) binaries to host multiple architectures in the same bundle. At various times, Apple binaries have shipped as powerpc+intel32, intel32+intel64, and now intel64+arm64. By providing a gradual path from the old to the new, Apple has managed to keep users and developers on board.
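A universal (fat) binary carries one slice per architecture and the loader picks the slice matching the host; running lipo -info on a built binary lists which slices it contains. For code that genuinely needs to behave differently per architecture, Swift's conditional compilation selects the right path at build time. A hedged sketch, with the function name purely illustrative:

```swift
// Each architecture slice of a universal binary is compiled with its own
// branch of this conditional; at run time the loader has already chosen
// the slice that matches the host CPU.
#if arch(arm64)
func currentSlice() -> String {
    return "arm64 slice (Apple Silicon)"
}
#elseif arch(x86_64)
func currentSlice() -> String {
    return "x86_64 slice (Intel, or Rosetta 2 translation)"
}
#endif

print("Running the", currentSlice())
```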

The transition planning for ARM started several years ago. Apple's approach to transitioning between architectures is to enable new functionality only for the new world; when Carbon (the C based compatibility layer) was phased out in favour of Cocoa (the NeXT derived Objective-C layer), many new frameworks were ultimately only available in the new world, and increasing levels of deprecation finally moved laggards like Adobe off their Carbonized codebases when the next major OS release stopped supporting Carbon. We saw similar noises when Apple declared that the days of 32-bit macOS software were numbered, explicitly highlighting to the user that such old apps might not be supported in a future release. We can now see that the reason for this mandatory transition to 64-bit is that the forthcoming ARM based Macs will be 64-bit only, and thus it was necessary to flush out non-64-bit apps and libraries first.

So what does Apple Silicon bring? At the moment there is only a developer transition kit, available for loan at $500/year (an expensive loan at that), which isn't indicative of the final performance we should expect. However, we can look at the iPad Pro, which has a similar system on a chip, to get an idea of the kind of results we might find: long battery life, high processing power, and a mixture of high-power and low-power cores.

Apple explicitly documented, in the video introducing Apple Silicon at WWDC, that the new Apple Silicon Macs will take a big.LITTLE approach to cores. Some of the cores will be low power, suitable for background tasks that aren't too demanding, while others will be high power, suitable for interacting with the user or doing heavy lifting. There will also be GPU capabilities that will Just Work for developers writing Metal applications. The goal is to run fast and sleep quickly, which saves power for those on battery while generating less waste heat for those plugged into the wall.
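Developers don't pin work to particular cores directly; instead, Grand Central Dispatch quality-of-service classes hint whether work should favour the efficiency or the performance cores, with the scheduler making the final placement decision. A minimal sketch (the queue labels are purely illustrative):

```swift
import Dispatch

let group = DispatchGroup()

// Background QoS work is a natural candidate for the low-power efficiency
// cores, while user-interactive QoS work is prioritised towards the
// high-power performance cores.
let maintenanceQueue = DispatchQueue(label: "com.example.maintenance", qos: .background)
let renderQueue = DispatchQueue(label: "com.example.render", qos: .userInteractive)

maintenanceQueue.async(group: group) {
    print("low-priority housekeeping, e.g. indexing or prefetching")
}
renderQueue.async(group: group) {
    print("latency-sensitive work, e.g. preparing the next frame")
}

group.wait() // keep this example process alive until both blocks have run
```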

The other interesting fact to come out of the talk is that Apple Silicon hardware will share memory between the GPU and CPU. As a result, heavy graphics operations that perform bulk uploads and processing of textures will not need to explicitly shuttle data back and forth between the CPU and GPU. Intel Macs currently have separate memory systems for the CPU and GPU, whereas with unified memory, operations that require bulk loading, as might be expected with heavy graphics or neural-network processing, can elide one copy of the input and output data.

Apple's goal of having complete control of the system means it can make such decisions, because it is no longer constrained by a standardised Intel or PCI architecture for co-ordinating with a graphics card. Instead, the CPU and GPU can share memory, potentially balancing out the load between them depending on memory requirements.
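From a Metal developer's point of view, unified memory shows up as buffers that both the CPU and GPU can touch without an explicit copy. The sketch below, a hedged illustration rather than code from Apple's session, checks the device's hasUnifiedMemory property and allocates a shared-storage buffer:

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// On Apple Silicon the CPU and GPU share one pool of memory, so a buffer
// created with .storageModeShared is visible to both sides without the
// blit/synchronisation step that a discrete-GPU Intel Mac would need.
print("Unified memory:", device.hasUnifiedMemory)

let vertices: [Float] = [0, 1, 0,  -1, -1, 0,  1, -1, 0]
if let buffer = device.makeBuffer(bytes: vertices,
                                  length: vertices.count * MemoryLayout<Float>.stride,
                                  options: .storageModeShared) {
    // The GPU can read this buffer directly; no managed-mode
    // didModifyRange bookkeeping is required for shared storage.
    print("Created a shared buffer of \(buffer.length) bytes")
}
```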

Another significant difference between the architectures is device partitioning and the steps taken to protect against exploits. On Intel based Macs, all devices share a single IOMMU, which potentially allows cross-device targeting and leaks. On Apple Silicon hardware, each device will have its own IOMMU, and although inter-device traffic will still be permitted via a command channel, devices won't be able to arbitrarily read and write each other's memory. This is impossible to do on an Intel machine because of the way Intel system architectures have evolved, but it's a strengthening that can be applied when you own the whole stack.

The Apple Silicon hardware will also enforce pointer authentication, W^X (write xor execute) protection, and mapping of the kernel into a completely read-only, signed region. The boot loader process will use a chain of trust allowing only recent signed Apple kernels, so that malware and other nefarious actors cannot interfere with the operation of the kernel. Apple claims that this read-only view of the kernel will prevent injected code from being executed, and that ARM pointer authentication will help reduce the scope of such exploits, although it may not prevent every kind of escape. Many of these protections have been available on iOS devices for some time, and it's certainly true that while iOS devices are more secure than macOS devices, they aren't invulnerable; but the transition to ARM gives Apple the opportunity to beef up macOS security in the same way as on its handheld devices. These changes to the boot protocols mean that Boot Camp, used to run Windows natively, will no longer work, although Apple demonstrated Parallels Desktop emulating an x86 Linux machine on ARM hardware, so it's possible that slow emulation will work for those wishing to use Windows based applications on the new hardware.

What does this mean for the rest of the industry? Microsoft has been shipping Windows on ARM for some time, though its main use is probably on Raspberry Pi desktops at the moment; more importantly, Amazon has launched its Graviton processors, which are 64-bit ARM machines. Developers tend to be wary of new machine types, but since Amazon can run these machines with less cooling and power, they translate into cost savings; and if you have a higher level runtime such as Java, Python or Node.js that has already been ported to ARM, you may be able to take advantage of cheaper hardware in the cloud simply by moving to the new processors.

What Apple brings to the table is a comfortable, high powered desktop machine (or laptop) on which native ARM code can be debugged. This will facilitate the porting effort from Intel (with its relatively strong memory ordering) to ARM (with its relatively weaker memory ordering). It's likely that porting will surface some hidden race conditions that have Just Worked on x86 but will show transient errors on ARM based systems. Being able to debug these on powerful desktops, rather than through a remote window onto an ARM server running in the cloud, will help uncover and fix such issues much faster. At the recent Open Source Summit, Dirk Hohndel asked Linus Torvalds about the availability of ARM hardware, and Torvalds replied:

For ten years or so I'd be complaining about the fact that it's really, really hard to find ARM hardware that is usable for development. They exist, but they have certainly not been real competition for x86 so far.
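To make the memory-ordering point concrete, the sketch below shows the classic message-passing pattern. It assumes the swift-atomics package as a dependency, which the article doesn't prescribe: with the release/acquire pair shown, the consumer is guaranteed to see the payload once it observes the flag; weaken both operations to relaxed and the code may still appear to work on x86, whose total-store-order model supplies the ordering implicitly, while failing intermittently on ARM.

```swift
import Atomics    // https://github.com/apple/swift-atomics (assumed dependency)
import Foundation

let payload = ManagedAtomic<Int>(0)
let ready = ManagedAtomic<Bool>(false)

// Producer: publish the payload, then raise the flag.
Thread.detachNewThread {
    payload.store(42, ordering: .relaxed)
    // The releasing store guarantees the payload write is visible before the
    // flag; on ARM's weaker model a relaxed store here could be reordered,
    // even though the same code appears to Just Work on x86.
    ready.store(true, ordering: .releasing)
}

// Consumer: wait for the flag, then read the payload.
while !ready.load(ordering: .acquiring) { /* spin */ }
print(payload.load(ordering: .relaxed))   // guaranteed to print 42
```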

The transition from Intel to ARM isn't going to happen overnight, but the direction of travel is certainly clear, and early adopters are already flocking to ARM processors in the cloud for cheaper runtimes. As the availability of ARM Macs improves over the coming years, more developers will be using them for ARM development work on libraries and servers, which will in turn help drive the adoption of ARM processors in the data centre. By the end of the decade, ARM processors may account for the majority of servers in data centres.
