Apple is strangely silent on artificial intelligence

AI training is now also available to the average Apple user with the more modest Mac Studio
(photo: Apple)

At the beginning of the week, Apple introduced a raft of new products but never once said “artificial intelligence,” even though elements of AI are present in most of them. This has puzzled analysts.

While presenting landmark products such as the Apple Silicon Mac Pro and the Vision Pro at the WWDC 2023 developer conference, the company’s speakers never directly said “artificial intelligence,” as Microsoft and Google do, replacing it instead with other terms: “machine learning” and “transformer,” notes Ars Technica.

Speaking about the new autocorrect and voice-input algorithms in iOS 17, Apple’s senior vice president of Software Engineering, Craig Federighi, said that the feature is based on machine learning and a language model, a transformer, thanks to which the system works more accurately than ever. He added that the model runs on every keystroke, which is made possible by the powerful Apple processors at the heart of the iPhone.

Federighi thus avoids the term “artificial intelligence” but confirms that the product uses a language model with a transformer architecture optimized for natural language processing. The neural networks behind the DALL-E image generator and the ChatGPT chatbot are built on the same architecture. This means that autocorrect in iOS 17 works at the sentence level, suggesting completions of words or whole phrases.
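Conceptually, sentence-level suggestion boils down to asking a causal transformer for the most likely continuation of whatever has been typed so far. The Python sketch below illustrates the idea with an openly available model (GPT-2 via the Hugging Face transformers library) purely as a stand-in; Apple’s on-device model and APIs are not public.

```python
# Minimal sketch: suggest the next few words with a small causal transformer.
# GPT-2 is an illustrative stand-in, not Apple's autocorrect model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def suggest_completion(prefix: str, max_new_tokens: int = 5) -> str:
    """Return a short suggested continuation of the text typed so far."""
    inputs = tokenizer(prefix, return_tensors="pt")
    with torch.no_grad():
        output = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=False,  # greedy: pick the single most likely continuation
            pad_token_id=tokenizer.eos_token_id,
        )
    new_tokens = output[0][inputs["input_ids"].shape[1]:]  # only the generated part
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

print(suggest_completion("I'll meet you at the"))
```

The same mechanism covers both single-word autocorrect and multi-word phrase suggestions: only the number of generated tokens changes.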

In addition, the language model is further trained on the device, adapting to the way the phone’s owner writes and speaks. All of this is made possible by the Neural Engine, which debuted in the Apple A11 processor in 2017 and accelerates machine learning workloads.
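To illustrate what that kind of adaptation might look like, the sketch below continues training a small open model on a handful of hypothetical user messages. The data, the model, and the training recipe are assumptions for illustration only; Apple has not published how its on-device personalization works.

```python
# Illustrative sketch of adapting a language model to a user's writing style.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = AdamW(model.parameters(), lr=5e-5)

user_messages = [  # hypothetical on-device data
    "running late, be there in 10",
    "lol sounds good, see u then",
]

model.train()
for epoch in range(3):
    for text in user_messages:
        batch = tokenizer(text, return_tensors="pt")
        # Standard causal-LM objective: predict each token from the ones before it.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```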

The term “machine learning” was mentioned by Apple several times: when describing the new iPad lock screen capabilities, where the Live Photo feature synthesizes additional frames on its own; in the iPadOS PDF-scanning feature that automatically fills the scanned data into forms; in the AirPods Adaptive Audio feature, which adapts to the user’s listening preferences; and in the description of the new Smart Stack widget for the Apple Watch.

“Machine learning” also powers the new Journal app, which lets users keep a personal interactive journal on the iPhone. The app suggests what content is worth recording, relying only on data stored on the phone.

Finally, “machine learning” is used to create the 3D avatars of the user, whose eyes are shown on the front display of the Vision Pro headset, and a codec based on a neural network is used to compress these avatars.

An indirect nod to AI technologies came in the description of the new Apple M2 Ultra chip, which has a 24-core CPU, up to a 76-core GPU, and a 32-core Neural Engine. The Neural Engine delivers up to 31.6 trillion operations per second, 40% more than the M1 Ultra.
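Those figures are internally consistent: assuming Apple’s earlier spec of roughly 22 trillion operations per second for the M1 Ultra’s Neural Engine (a figure from older spec sheets, not from this presentation), the claimed 40% uplift lands close to the quoted number.

```python
# Sanity check on the Neural Engine figures. The ~22 TOPS value for the
# M1 Ultra is an assumption taken from Apple's earlier spec sheets.
m2_ultra_tops = 31.6      # trillion operations per second, per Apple
claimed_speedup = 1.40    # "40% faster than the M1 Ultra"

implied_m1_ultra_tops = m2_ultra_tops / claimed_speedup
print(f"Implied M1 Ultra throughput: {implied_m1_ultra_tops:.1f} trillion ops/s")
# ~22.6 trillion ops/s, in line with the ~22 TOPS Apple quoted for the M1 Ultra.
```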

Apple specifically noted that these resources can be used to train “large transformer models” thanks to up to 192GB of unified memory, more than today’s most powerful discrete GPUs offer, which simply run out of memory on such workloads.
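A rough back-of-the-envelope calculation shows why that much memory matters. The sketch below uses a simplified “weights + gradients + Adam states” accounting in full precision and ignores activations and mixed-precision tricks; the parameter counts are illustrative assumptions, not any specific model Apple mentioned.

```python
# Rough estimate of training memory for a transformer (simplified accounting).
def training_memory_gb(params_billions: float, bytes_per_param: int = 4) -> float:
    params = params_billions * 1e9
    weights = params * bytes_per_param          # model weights (fp32)
    gradients = params * bytes_per_param        # one gradient per weight
    adam_states = 2 * params * bytes_per_param  # Adam keeps two moments per weight
    return (weights + gradients + adam_states) / 1e9

for size in (1, 7):
    print(f"{size}B params: ~{training_memory_gb(size):.0f} GB before activations")
# 1B params: ~16 GB; 7B params: ~112 GB. The latter already exceeds the 80-96 GB
# of today's largest discrete GPUs, but fits within 192 GB of unified memory.
```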

All of this means that AI training becomes accessible to the average user: not just owners of the top-of-the-line Mac Pro (starting at $6,999), but also of the more modest Mac Studio (priced from $1,999). All that remains is to wait for comparative benchmarks against accelerators such as the NVIDIA H100.
