iOS 18 AI summarizes notifications, articles and more

Apple’s next-generation operating systems will feature Project Greymatter, which will bring a number of AI-related improvements. We have new details on the AI features planned for Siri, Notes and Messages.

Artificial intelligence will enhance several basic applications with summarization and transcription functions

After widespread claims and reports of AI-related improvements in iOS 18, AppleInsider got more information about Apple’s AI plans.

People familiar with the matter revealed that the company is internally testing a number of new AI-related features ahead of the annual WWDC. Codenamed Project “Greymatter,” the company’s AI enhancements will focus on practical benefits for the end user.

In pre-release versions of Apple’s operating systems, the company is working on a notification summary feature known as “Greymatter Catch Up”. The feature is tied to Siri, meaning users will be able to request and receive an overview of their recent notifications through the virtual assistant.

Siri is expected to receive significantly updated response generation capabilities through the new Smart Response framework as well as Apple’s on-device LLM. When generating answers and summaries, Siri will be able to take into account entities such as people and companies, calendar events, places, dates and more.

In our earlier reports on Safari 18, Ajax LLM, and the updated Voice Memos app, AppleInsider revealed that Apple plans to introduce AI-powered text summarization and transcription into its built-in apps. We’ve since learned that the company intends to bring these features to Siri as well.

This ultimately means Siri will be able to answer questions on the device, create summaries of long articles, or transcribe audio, as in the updated Notes and Voice Memos apps. All of this would be done using the on-device Ajax LLM, with cloud processing reserved for more complex tasks.

We’re also told that Apple is testing improved and more “natural” voices, along with improvements to text-to-speech, which should ultimately result in a significantly better user experience.

Apple is also working on cross-device media and TV control for Siri. This feature would allow someone to use Siri on their Apple Watch to play music on another device, for example, although the feature isn’t expected until later in 2024.

The company has decided to incorporate artificial intelligence into several of its core system applications with different use cases and tasks in mind. One notable area of improvement has to do with photo editing.

Apple has developed generative AI software for enhanced image editing

iOS 18 and macOS 15 are expected to bring AI photo editing capabilities to apps like Photos. Internally, Apple has developed a new Clean Up feature that will allow users to remove objects from images using generative AI software.

The Clean Up tool will replace Apple’s current Retouch tool

The company also created an app for internal use known as “Generative Playground” in conjunction with Project Greymatter. People familiar with the app revealed exclusively to AppleInsider that it can use Apple’s generative AI software to create and edit images, and that it includes iMessage integration in the form of a dedicated app extension.

In Apple’s test environments, it is possible to generate an image using artificial intelligence and then send it via iMessage. There are indications that the company is planning a similar feature for end users of its operating systems.

This information is in line with another report which claims that users will be able to use AI to create unique emojis, though other image-generation features are also possible.

According to people familiar with the matter, pre-release versions of Apple’s Notes app also include references to a Generation Tool, though it’s unclear whether the tool will generate text, images, or both, as Generative Playground does.

Notes will receive AI transcription and summarization along with Math Notes

Apple has lined up major improvements to its built-in Notes app that will debut with iOS 18 and macOS 15. The updated Notes will gain support for in-app audio recording, audio transcription, and LLM-based summarization.

The Notes app for iOS 18 will support in-app audio recording, transcription, and summarization

Audio recordings, transcripts, and text summaries will be available within a single note, along with any other material users choose to add. This means that one note could, for example, contain a recording of an entire lecture or meeting, complete with images and text from the whiteboard.

These features would turn Notes into a real powerhouse, making it a popular app for students and business professionals alike. The addition of audio transcription and summarization will also allow Apple’s Notes app to better compete with rival offerings such as Microsoft OneNote or Otter.

While app-level audio recording, along with AI-powered transcription and summarization, will greatly improve the Notes app, they’re not the only things Apple is working on.

Math Notes – Create graphs and solve equations with AI

The Notes app will receive a brand-new addition in the form of Math Notes, which will add support for proper math notation and integrate with Apple’s new GreyParrot Calculator app. We now have more details on what Math Notes will entail.

The Notes app for iOS 18 will introduce support for AI-assisted audio transcription and math annotation

People familiar with the new feature revealed that Math Notes will allow the app to recognize text in the form of math equations and offer solutions to them. Support for graphical expressions is also in the works, meaning we could see something similar to the Grapher app on macOS, but within Notes.
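Nothing is known about how Apple implements this, but the underlying idea — parsing typed text as a math equation and returning its solutions — can be sketched with a generic computer-algebra tool. The example below is a conceptual analogue using the open-source SymPy library, not Apple’s code:

```python
# Conceptual sketch of "recognize an equation and solve it",
# in the spirit of the rumored Math Notes feature (not Apple's implementation).
from sympy import Eq, S, solveset, symbols
from sympy.parsing.sympy_parser import parse_expr

x = symbols("x")

def solve_typed_equation(text: str):
    """Parse an equation like 'x^2 - 4 = 0' and return its real solutions."""
    left, right = text.replace("^", "**").split("=")
    equation = Eq(parse_expr(left), parse_expr(right))
    return sorted(solveset(equation, x, domain=S.Reals))

print(solve_typed_equation("x^2 - 4 = 0"))  # [-2, 2]
```

A production feature would also need handwriting/OCR recognition and far more robust parsing; this sketch only covers the solving step.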

Apple is also working on improvements aimed at math-related input in the form of a feature known as “Keyboard Math Predictions.” AppleInsider has learned that this feature would allow math expressions to be completed whenever they are recognized within text input.

This means that within Notes, users would be given the ability to autocomplete their math equations in a similar way to how Apple currently offers predictive text or inline completion on iOS — which are also expected to come to visionOS later this year.
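To make the idea concrete, here is a deliberately tiny toy version of such a completion: detect a simple arithmetic expression followed by “=” at the end of the input and append the result. Everything here — the regex, the evaluator, the function name — is an illustrative assumption, not how Apple’s feature works:

```python
import re

def complete_math(text: str) -> str:
    """Toy stand-in for math autocomplete: if the text ends with a simple
    arithmetic expression followed by '=', append the computed result."""
    match = re.search(r"([\d\.\s\+\-\*/\(\)]+)=\s*$", text)
    if not match:
        return text
    try:
        # Toy evaluator restricted to the matched arithmetic; a real feature
        # would use a proper expression parser, never eval().
        result = eval(match.group(1), {"__builtins__": {}})
    except Exception:
        return text
    return text + str(result)

print(complete_math("Total: 12 + 7 ="))  # Total: 12 + 7 =19
```

A shipping implementation would presumably hook into the system keyboard’s prediction pipeline rather than post-processing strings like this.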

Apple’s visionOS will also see improved integration with the Apple Transformer LM, a predictive text model that offers suggestions as you type. The operating system is also expected to receive a redesigned Voice Commands user interface, which serves as an indicator of how much Apple values input-related improvements.

The company is also looking to improve user input with so-called “smart replies,” which will be available in Messages, Mail, and Siri. These would allow users to reply to messages or emails with basic text responses generated instantly on-device by Apple’s Ajax LLM.

Apple’s AI vs. Google’s Gemini and other third-party products

Artificial intelligence has found its way into virtually every application and device. The use of AI-focused products such as OpenAI’s ChatGPT and Google Gemini has also seen a significant increase in overall popularity.

Google Gemini is a popular AI tool

While Apple has developed its own AI software to better compete, the company’s AI is nowhere near as impressive as something like Google’s Gemini Advanced, AppleInsider has learned.

During its annual Google I/O developer conference on May 14, Google demonstrated an interesting AI use case — where users could ask a question in the form of a video and receive an AI-generated answer or suggestion.

As part of the event, Google’s AI was shown a video of a broken record player and asked why it wasn’t working. The software identified the turntable model and suggested that it might not be properly balanced, and that this was why it wasn’t working.

The company also announced Google Veo – software that can generate video using artificial intelligence. OpenAI also has its own video generation model known as Sora.

Apple’s Project Greymatter and Ajax LLM can’t generate or process video, meaning the company’s software can’t answer complex video questions about consumer products. This is probably why Apple has sought to partner with companies like Google and OpenAI to reach a licensing agreement and make more features available to its user base.

Apple will compete with products like the Rabbit R1 by offering vertically integrated AI software on established hardware

Compared to physical AI-themed products like the Humane AI Pin or the Rabbit R1, Apple’s AI projects have the significant advantage of running on devices that users already own. This means that users will not need to buy special AI devices to enjoy the benefits of artificial intelligence.

Humane’s AI Pin and the Rabbit R1 are also commonly seen as unfinished or partially functional products, with the Rabbit R1 even revealed to be little more than a custom Android app.

Apple’s AI-related projects are expected to make their debut at the company’s annual WWDC on June 10 as part of iOS 18 and macOS 15. Updates to the Calendar, Freeform, and System Settings apps are also in the works.
