Apple is gradually building up its artificial intelligence capabilities through staff additions, acquisitions, and hardware upgrades aimed at integrating AI into the next generation of iPhones.
Industry data, research publications, and accounts from insiders in the technology sector suggest the Californian company has concentrated chiefly on the technical challenge of running artificial intelligence on mobile devices.
According to data from PitchBook, the iPhone maker has acquired 21 AI startups since the beginning of 2017, outpacing its Big Tech rivals. The most recent of those purchases came in early 2023, when it bought WaveOne, a California-based firm that provides AI-powered video compression.
“They are getting ready to do some significant M&A,” said Daniel Ives at Wedbush Securities. “I’d be shocked if they don’t do a sizable AI deal this year, because there’s an AI arms race going on, and Apple is not going to be on the outside looking in.”
Nearly half of Apple’s AI job advertisements now use the term “Deep Learning,” which refers to the methods underlying generative AI: models that can produce human-like text, speech, and code in a matter of seconds, according to a recent research note from Morgan Stanley. In 2018, the company hired John Giannandrea, Google’s top AI executive.
Even as Big Tech competitors like Microsoft, Google, and Amazon tout their multibillion-dollar investments in cutting-edge technology, Apple has remained reticent about its ambitions in this area. Industry insiders, however, say the company is developing its own large language models, which are the basis for generative AI products like OpenAI’s ChatGPT.
Tim Cook, the company’s chief executive, told analysts last summer that it “has been doing research across a wide range of AI technologies” and that it is investing and innovating “responsibly” with the new technology.
Apple appears to be working toward running generative AI on mobile devices themselves, which would let AI apps and chatbots run on the phone’s own hardware and software instead of relying on cloud services in data centers.
Meeting that technical challenge means shrinking the large language models that power generative AI, and building higher-performance processors to run them.
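One standard way models are shrunk for phones, offered here as an illustrative sketch rather than a description of Apple's actual approach, is quantization: storing each 32-bit floating-point weight as a single byte plus a shared scale factor.

```python
import numpy as np

# Toy 8-bit quantization sketch (not Apple's method): each float32
# weight is mapped onto one of 256 signed-integer levels, cutting
# the model's memory footprint to roughly a quarter.

def quantize_int8(weights: np.ndarray):
    """Map float32 weights onto int8 levels with one shared scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the quantized form."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal(1_000).astype(np.float32)
q, scale = quantize_int8(weights)

print(q.nbytes / weights.nbytes)  # 0.25: one quarter of the memory
```

The trade-off is a small rounding error per weight (at most half the scale factor), which in practice barely affects a large model's output quality while making it small enough to fit in a phone's memory.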
Other device manufacturers have moved faster than Apple: Google and Samsung have released new phones that they say run generative AI features on the device itself.
Apple is widely expected to unveil iOS 18, the next version of its mobile operating system, at its annual Worldwide Developers Conference in June. Morgan Stanley analysts expect the software to be designed around generative AI, and say it may include an LLM-powered version of the Siri voice assistant.
“They usually wait for a technological convergence where they can present one of the best examples of that technology,” said Igor Jablokov, chief executive of AI enterprise group Pryon and founder of Yap, a voice recognition startup that Amazon bought in 2011 and integrated into its Alexa and Echo devices.
Apple has also announced new chips better suited to running generative AI. The M3 Max chip for the MacBook, unveiled in October, “unlocks workflows previously not possible on a laptop,” the company said, such as AI developers working with models that have billions of parameters.
Unveiled in September, the S9 chip lets Siri access and log data on newer models of the Apple Watch without requiring an internet connection. The A17 Pro chip, unveiled at the same time as the iPhone 15, has a neural engine that is twice as fast as those of earlier models, according to the company.
According to Dylan Patel, an analyst at semiconductor consulting firm SemiAnalysis, “as far as the chips in their devices, they are definitely being more and more geared towards AI going forward from a design and architecture standpoint.”
In December, Apple researchers published a paper describing a breakthrough in running LLMs on-device by drawing model weights from flash memory, allowing queries to be processed faster even when the device is offline.
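The core idea, sketched below in simplified form (this is an illustration of the general technique, not Apple's implementation), is to keep the full weight matrix in larger flash storage, modeled here as a file on disk, and pull only the small subset of weights needed for each computation into memory on demand.

```python
import os
import tempfile
import numpy as np

# Toy sketch of flash-backed inference: the weight matrix lives on
# "flash" (a file on disk) and only the rows actually needed are
# read into RAM, so the model can be far larger than memory allows.
rows, cols = 10_000, 512
path = os.path.join(tempfile.mkdtemp(), "weights.npy")
np.save(path, np.random.randn(rows, cols).astype(np.float32))

# Memory-map the file: no data is read until a slice is touched.
weights = np.load(path, mmap_mode="r")

def forward_sparse(x: np.ndarray, active_rows: np.ndarray) -> np.ndarray:
    """Compute using only the rows predicted to matter for this input."""
    w = np.asarray(weights[active_rows])  # reads just these rows from disk
    return w @ x

x = np.random.randn(cols).astype(np.float32)
active = np.array([0, 42, 9_999])  # e.g. neurons expected to activate
y = forward_sparse(x, active)
print(y.shape)  # (3,)
```

The design choice this illustrates is trading a little per-query disk I/O for a much smaller resident memory footprint, which is what makes large models feasible on memory-constrained phones.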
In October, together with Columbia University, Apple released an open-source LLM called “Ferret.” Currently available only for research, the model can identify items within an image, in effect acting as a second pair of eyes for the user.
“One of the problems of an LLM is that the only way of experiencing the world is through text,” said Amanda Stent, director of the Davis Institute for AI at Colby College. “That’s what makes Ferret so exciting: you can start to literally connect the language to the real world.” At this stage, however, the cost of running a single “inference” query of this kind would be huge, Stent said.
For instance, a virtual assistant using this technology could identify the brand of shirt someone is wearing during a video call and then place an order for it via an app.
Microsoft, whose AI initiatives have thrilled investors, recently surpassed Apple to become the world’s most valuable listed company.
Nevertheless, analysts at Bank of America raised their rating on Apple shares last week, citing, among other things, projections that demand for new generative AI features launching this year and in 2025 will accelerate the iPhone upgrade cycle.
The company’s AI approach, according to Laura Martin, a senior analyst at the investment bank Needham, would be “for the benefit of their Apple ecosystem and to protect their installed base.”
She added: “Apple doesn’t want to be in the business of what Google and Amazon want to do, which is to be the backbone of all American businesses that build apps on large language models.”