Apple has announced the launch of a “groundbreaking cloud intelligence system” called Private Cloud Compute (PCC) that’s designed for processing artificial intelligence (AI) tasks in a privacy-preserving manner in the cloud.
The tech giant described PCC as the “most advanced security architecture ever deployed for cloud AI compute at scale.”
PCC coincides with the arrival of new generative AI (GenAI) features – collectively dubbed Apple Intelligence, or AI for short – that the iPhone maker unveiled in its next generation of software, including iOS 18, iPadOS 18, and macOS Sequoia.
All of the Apple Intelligence features, both the ones that run on-device and those that rely on PCC, leverage in-house generative models trained on “licensed data, including data selected to enhance specific features, as well as publicly available data collected by our web-crawler, AppleBot.”
Present alongside Apple Intelligence is an integration with OpenAI’s ChatGPT into Siri and systemwide Writing Tools to generate text and images based on user-provided prompts, with Apple pointing out the privacy protections baked into the process for those who opt to access the virtual assistant.
“Their IP addresses are obscured, and OpenAI won’t store requests,” Apple said. “ChatGPT’s data-use policies apply for users who choose to connect their account.”
With PCC, the idea is to offload complex requests that require more processing power to the cloud, while at the same time ensuring that data is never retained or exposed to any third party, including Apple, a mechanism the company refers to as stateless computation.
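Conceptually, stateless computation means a request is processed entirely in volatile memory and nothing about it survives the response. A minimal sketch of that idea (purely illustrative; the function names and structure are ours, not Apple's):

```python
# Illustrative sketch of "stateless computation": the handler keeps no
# request log and no per-user store, so nothing about a request persists
# once the response is produced. Names here are hypothetical, not Apple's.

def run_model(prompt: str) -> str:
    # Stand-in for the LLM inference workload.
    return f"echo: {prompt}"

def handle_inference_request(prompt: str) -> str:
    # Process entirely in memory -- no writes to disk, no request log.
    response = run_model(prompt)
    # After returning, no copy of the prompt or response is retained,
    # because there is simply no persistence layer to retain it in.
    return response

print(handle_inference_request("hello"))  # echo: hello
```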
The architecture that underpins PCC is a custom-built server node that brings together Apple silicon, Secure Enclave, and Secure Boot against the backdrop of a hardened operating system that's tailor-made for running Large Language Model (LLM) inference workloads.
This not only presents an "extremely narrow attack surface," according to Apple, but also allows it to leverage Code Signing and sandboxing to ensure that only authorized and cryptographically measured code is executable in its data centers and that user data doesn't break out of the confines of the trust perimeter.
“Technologies such as Pointer Authentication Codes and sandboxing act to resist such exploitation and limit an attacker’s horizontal movement within the PCC node,” it said. “The inference control and dispatch layers are written in Swift, ensuring memory safety, and use separate address spaces to isolate initial processing of requests.”
“This combination of memory safety and the principle of least privilege removes entire classes of attacks on the inference stack itself and limits the level of control and capability that a successful attack can obtain.”
Another notable security and privacy measure is the routing of PCC requests through an Oblivious HTTP (OHTTP) relay that’s operated by an independent party to conceal the origin (i.e., IP address) of the requests, effectively preventing an attacker from using the IP address to correlate the requests to a specific individual.
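The privacy property of Oblivious HTTP (standardized in RFC 9458) comes from splitting trust between two parties: the relay sees the client's IP address but only an encrypted request, while the gateway decrypts the request but never sees the client's address. A toy sketch of that separation (XOR stands in for the real HPKE encapsulation; all names are illustrative):

```python
# Toy model of the OHTTP split-trust idea: the relay learns the sender's
# IP but not the content; the gateway learns the content but not the
# sender's IP. Real OHTTP uses HPKE; XOR here is a stand-in only.

def encrypt(key: bytes, msg: bytes) -> bytes:
    # Toy XOR "encryption" -- placeholder for HPKE encapsulation.
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(msg))

decrypt = encrypt  # XOR is its own inverse

key = b"shared-secret"  # in real OHTTP, only client and gateway hold keys

def gateway(ciphertext: bytes) -> bytes:
    # The gateway decrypts the request but never saw the client's IP.
    request = decrypt(key, ciphertext)
    return encrypt(key, b"response to " + request)

def relay(client_ip: str, ciphertext: bytes) -> bytes:
    # The relay sees client_ip but cannot read the ciphertext; it
    # forwards the payload without the client's address attached.
    assert b"hello" not in ciphertext  # content is opaque to the relay
    return gateway(ciphertext)

ct = encrypt(key, b"hello")
reply = decrypt(key, relay("203.0.113.7", ct))
print(reply)  # b'response to hello'
```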
It’s worth pointing out that Google also uses OHTTP relays as part of its Privacy Sandbox initiative as well as for Safe Browsing in the Chrome web browser to secure users from visiting potentially malicious sites.
Apple further noted that independent security experts can inspect the code that runs on Apple silicon servers to verify the privacy aspects, adding that PCC cryptographically ensures its devices do not communicate with a server unless the software has been publicly logged for inspection.
“Every production Private Cloud Compute software image will be published for independent binary inspection — including the OS, applications, and all relevant executables, which researchers can verify against the measurements in the transparency log,” the company said.
“Software will be published within 90 days of inclusion in the log, or after relevant software updates are available, whichever is sooner.”
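In outline, the verification workflow Apple describes is: hash the published software image, then check that measurement against the transparency log before trusting a server running it. A minimal sketch of that check (hypothetical data structures; the real log relies on signed, Merkle-tree-style inclusion proofs rather than a plain set of digests):

```python
import hashlib

# Sketch of checking a software image's measurement against a transparency
# log. Apple's actual log provides cryptographic inclusion proofs; a plain
# set of hex digests stands in for it here, purely for illustration.

def measure(image_bytes: bytes) -> str:
    # The "measurement" is a cryptographic digest of the released binary.
    return hashlib.sha256(image_bytes).hexdigest()

def is_logged(image_bytes: bytes, transparency_log: set) -> bool:
    # A client should refuse to talk to a PCC node whose software
    # measurement is absent from the public log.
    return measure(image_bytes) in transparency_log

published_image = b"pcc-os-release-1"   # hypothetical release artifact
log = {measure(published_image)}

print(is_logged(published_image, log))  # True
print(is_logged(b"tampered-image", log))  # False
```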
Apple Intelligence, which is expected to launch later this fall, will be limited to the iPhone 15 Pro, iPhone 15 Pro Max, and iPad and Mac models with M1 and later chips, with Siri and device language set to U.S. English.
Some of the other new privacy features Apple has introduced include the option to lock and hide specific apps behind Face ID, Touch ID, or a passcode; the ability to choose which contacts to share with an app; a dedicated Passwords app; and a refreshed Privacy & Security section in Settings.
According to MacRumors, the Passwords app also features a setting to automatically upgrade existing accounts to passkeys. On top of that, Apple has replaced the Private Wi-Fi Address toggle for Wi-Fi networks with a new Rotate Wi-Fi Address setting to minimize tracking.