As the popularity of solutions like ChatGPT continues to rise, companies are increasingly looking into implementing AI technologies to enhance existing offerings and create new products – including AI-powered mobile apps. According to the State of Mobile 2025 report, AI chatbot and AI art generator apps were downloaded almost 1.5 billion times in 2024, while in-app purchase (IAP) revenue from these solutions reached $1.27 billion USD.
AI technologies can be applied to mobile solutions in various ways, for example, by combining machine learning (ML) capabilities with existing features. This article will focus on how mobile app developers can utilize the TrueDepth camera system in iOS to create functionalities that enable users to interact with an app through gestures and facial expressions. Read on to find out more about TrueDepth and how to apply machine learning to iOS mobile apps.
What is the TrueDepth camera system?
The TrueDepth camera system is a key element of Face ID technology. It enables Face ID – Apple’s biometric facial recognition solution – to accurately map and recognize a user’s face. It’s used in iPhone X and newer models; among iPhone SE models, only the iPhone SE 4 includes it. Generally, if an iPhone has a notch (the black area at the top of the screen that houses the front sensors) or a Dynamic Island (the area at the top of an unlocked screen where you can check notifications and activities in progress), it uses TrueDepth.
TrueDepth consists of three main elements:
- Dot projector – projects infrared light in the form of thousands of dots to map a user’s face.
- Flood illuminator – enables the system to precisely process projected dots at night or in low light.
- Infrared camera – scans the projected dots and sends the resulting image to a processor which interprets the dots to identify the user’s face.
After setup, whenever Face ID is used (for example, whenever you unlock your phone this way), it saves the images generated by TrueDepth. Using a machine learning algorithm, Face ID learns from the differences between these images and, as a result, adapts to changes in a user’s appearance (e.g., facial hair).
When Face ID became available, some users voiced concerns about its security. However, the probability that a random person could unlock your phone via Face ID is less than 1 in 1,000,000, compared with 1 in 50,000 for Touch ID (electronic fingerprint recognition). The likelihood is higher for Face ID in the case of twins or young children, but overall, this technology seems to be the more secure option.
ML use cases in mobile systems
Besides Face ID, machine learning is already widely used in phones and other mobile devices – a common example of it is text prediction when you’re typing text messages. This technology is also applied in such areas as:
- Image analysis – cameras can use neural networks instead of TrueDepth to create depth effects, blur backgrounds and recognize faces in images (see the sketch after this list). However, AI-based image analysis is less secure than TrueDepth for face recognition because it doesn’t create a 3D map of the face and can therefore be fooled by a photo.
- Text analysis – an ML-driven app can analyze the context of a text message and suggest replies.
- Speech analysis – virtual assistants such as Siri use ML to understand and react to voice commands.
- Sound recognition – iPhones can identify sounds such as a siren, doorbell or a baby crying and send you notifications when these sounds occur.
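As a simple illustration of the image analysis point above, the sketch below uses Apple’s Vision framework to detect face bounding boxes in a UIImage on device. It’s a minimal example, not tied to TrueDepth; the detectFaces function name is just for illustration, and a production app would handle errors more carefully.
import UIKit
import Vision

// Minimal sketch: detects face bounding boxes in a UIImage with the Vision framework.
func detectFaces(in image: UIImage, completion: @escaping ([CGRect]) -> Void) {
    guard let cgImage = image.cgImage else {
        completion([])
        return
    }
    let request = VNDetectFaceRectanglesRequest { request, _ in
        // Each observation's boundingBox is in normalized coordinates (0...1).
        let boxes = (request.results as? [VNFaceObservation])?.map { $0.boundingBox } ?? []
        completion(boxes)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}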
Machine learning in iOS app development
Machine learning is a field of study in which systems learn from data rather than being explicitly programmed. By analyzing large data collections with various algorithms, ML solutions detect patterns and later apply what they’ve learned to new inputs.
More complex ML features might require advanced, specialized knowledge. However, mobile developers can implement basic functionalities based on ML or neural networks without extensive experience in this field, as Apple provides tools that help developers apply ML technology in iOS mobile app development.
When working on solutions that offer more common features (for example, animal or plant recognition), sometimes you’ll be able to utilize pre-trained data models. These models are often created in a format that can be easily deployed into an app, but depending on your solution’s requirements, you might need to adjust your selected model to suit your app. In iOS, you can also leverage the Neural Engine – a group of processors found in new iPhones and iPads that speeds up AI and ML calculations.
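As an example, the sketch below loads a pre-trained image classification model and asks Core ML to use all available compute units, including the Neural Engine. It assumes you’ve added Apple’s freely available MobileNetV2 model to your Xcode project (Xcode then generates a MobileNetV2 class); the model choice and function name are just for illustration.
import CoreML
import Vision

// Minimal sketch: loads a bundled, pre-trained Core ML model and prepares it for Vision.
func loadPretrainedClassifier() -> VNCoreMLModel? {
    let configuration = MLModelConfiguration()
    // .all lets Core ML schedule work on the CPU, GPU and Neural Engine.
    configuration.computeUnits = .all
    guard let mobileNet = try? MobileNetV2(configuration: configuration) else {
        return nil
    }
    // Wrapping the model lets you run it through Vision's request pipeline.
    return try? VNCoreMLModel(for: mobileNet.model)
}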
It’s not recommended to create and train your ML model on a mobile device. These models are usually prepared and trained on a server or a desktop computer before they’re deployed to a mobile app, as this streamlines the training process. Especially if you want to build large datasets and use them to train your model, training can be expensive, requires high computing power and is, overall, more efficient on a desktop machine than on a mobile device.
Using Core ML in iOS app development
To facilitate integrating machine learning into your iOS mobile app, Apple offers Core ML, a framework that enables you to use pre-trained models or create your own custom ML models. Core ML is integrated with Xcode, Apple’s development environment, which further streamlines ML implementation and gives you access to live previews and performance reports. Additionally, Core ML runs models directly on a user’s device while minimizing memory footprint and power consumption, which leads to better app responsiveness and improved data privacy.
Now, take a look at two examples of creating ML-based app features with Core ML and TrueDepth.
Example: Implementing a pre-trained model
import ARKit
import SceneKit
import UIKit

// Assumes a ViewController that owns an ARSCNView (sceneView) running an
// ARFaceTrackingConfiguration session and a UILabel (label) for the detected expression.
extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard let device = sceneView.device else { return nil }
        let node = SCNNode(geometry: ARSCNFaceGeometry(device: device))
        // Renders the face mesh as white lines projected on the user's face.
        node.geometry?.firstMaterial?.fillMode = .lines
        return node
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry
        else {
            return
        }
        // Updates the face geometry with the latest TrueDepth data.
        faceGeometry.update(from: faceAnchor.geometry)
        let blendshape = Blendshapes(faceAnchor: faceAnchor)
        DispatchQueue.main.async {
            self.label.text = blendshape?.rawValue ?? ""
        }
    }
}

enum Blendshapes: String {
    case eyeBlinkLeft = "Left blink"
    case eyeBlinkRight = "Right blink"

    private init?(rawValue: ARFaceAnchor.BlendShapeLocation) {
        switch rawValue {
        case .eyeBlinkLeft:
            self = .eyeBlinkLeft
        case .eyeBlinkRight:
            self = .eyeBlinkRight
        default:
            return nil
        }
    }

    init?(faceAnchor: ARFaceAnchor) {
        // Blend shape coefficients range from 0.0 (neutral) to 1.0 (fully expressed).
        guard let eyeBlinkLeftValue = faceAnchor.blendShapes[.eyeBlinkLeft] as? Float,
              let eyeBlinkRightValue = faceAnchor.blendShapes[.eyeBlinkRight] as? Float else {
            return nil
        }
        let dict: [Blendshapes: Float] = [.eyeBlinkLeft: eyeBlinkLeftValue, .eyeBlinkRight: eyeBlinkRightValue]
        // Picks the strongest expression and only accepts it above a 0.5 threshold.
        guard let max = dict.max(by: { $0.value < $1.value }) else { return nil }
        guard max.value > 0.5 else {
            return nil
        }
        self = max.key
    }
}
You can build a simple app that enables users to interact with it through facial expressions (for example, by blinking their right or left eye, or moving their lips or cheeks), which the solution recognizes with the help of TrueDepth.
To build this feature, you can use the ARKit framework, which helps you develop various augmented reality (AR) functionalities – for example, overlaying virtual elements that users see in the camera view. In this example of an app controlled by facial expressions, you can use ARFaceAnchor, which exposes a dictionary of blend shape coefficients describing various facial expressions. This way you don’t have to create and train your own ML model, yet you can still effectively utilize this technology.
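The extension above assumes that a face tracking session is already running. A minimal sketch of the surrounding view controller, assuming sceneView and label are connected in a storyboard, could look like this:
import ARKit
import SceneKit
import UIKit

class ViewController: UIViewController {
    @IBOutlet weak var sceneView: ARSCNView!
    @IBOutlet weak var label: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Face tracking requires a TrueDepth camera; bail out gracefully otherwise.
        guard ARFaceTrackingConfiguration.isSupported else {
            label.text = "Face tracking is not supported on this device"
            return
        }
        sceneView.delegate = self
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Starts (or restarts) the TrueDepth-based face tracking session.
        sceneView.session.run(ARFaceTrackingConfiguration(), options: [.resetTracking, .removeExistingAnchors])
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}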
Example: Using a custom Core ML model
import ARKit
import SceneKit
import UIKit
import Vision

// Assumes the same ViewController as before, extended with a `model` property
// (a VNCoreMLModel wrapping the custom classifier described below).
extension ViewController: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
        guard let device = sceneView.device else { return nil }
        let node = SCNNode(geometry: ARSCNFaceGeometry(device: device))
        // Renders a filled (solid) mask on the face – the same look the model was trained on.
        node.geometry?.firstMaterial?.fillMode = .fill
        return node
    }

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor,
              let faceGeometry = node.geometry as? ARSCNFaceGeometry,
              let snapshot = self.sceneView.snapshot().cgImage
        else {
            return
        }
        // Updates the face geometry with the latest TrueDepth data.
        faceGeometry.update(from: faceAnchor.geometry)
        // Creates a Vision image request handler using the current frame and performs a Core ML request.
        try? VNImageRequestHandler(cgImage: snapshot, orientation: .right, options: [:]).perform([VNCoreMLRequest(model: model) { [weak self] request, error in
            // Accepts a classification only when the model is at least 80% confident.
            guard let firstResult = (request.results as? [VNClassificationObservation])?.first,
                  firstResult.confidence > 0.8 else { return }
            DispatchQueue.main.async {
                self?.label.text = firstResult.identifier
            }
        }])
    }
}
This example focuses on an app prototype that utilizes a custom Core ML model to recognize whether a user has their mouth open or closed. You can train your own model using Create ML, a developer tool bundled with Xcode. It offers different methods of neural network training depending on the type of data you’re using, including image classification, object detection, activity classification and word tagging. After training, Create ML lets you upload test data to check whether training has been successful and your ML model performs as expected. Finally, the generated model file can be exported and used in your mobile app development project.
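Besides the Create ML app, you can also script this kind of training with the CreateML framework on macOS (for example, in a Swift playground). The rough sketch below uses illustrative folder paths and a hypothetical MouthClassifier name; it is not the exact setup used in this prototype.
import CreateML
import Foundation

// Runs on macOS (e.g., in a Swift playground), not on iOS.
// Assumes TrainingData/ and TestData/ each contain "mouthOpen" and "mouthClosed" subfolders.
let trainingURL = URL(fileURLWithPath: "/path/to/TrainingData")
let testURL = URL(fileURLWithPath: "/path/to/TestData")

// Trains an image classifier; each subfolder name becomes a class label.
let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingURL))

// Evaluates the trained model on held-out test images.
let evaluation = classifier.evaluation(on: .labeledDirectories(at: testURL))
print("Classification error: \(evaluation.classificationError)")

// Exports a model file that can be dragged into an Xcode project.
try classifier.write(to: URL(fileURLWithPath: "/path/to/MouthClassifier.mlmodel"),
                     metadata: MLModelMetadata(author: "Your name",
                                               shortDescription: "Detects open vs. closed mouth",
                                               version: "1.0"))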
In this example, image classification was used to train the custom model on 800 photos of one person (400 photos for each category – open and closed mouth). The photos showed the unified masks generated by TrueDepth with the fill attribute, which made it possible to train an effective model without involving a large number of different people. Additionally, to improve reliability, a confidence threshold was applied: the app accepts a classification only if the model is at least 80% confident that it’s correct.
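One possible way to collect such training photos is to save snapshots of the masked scene view while a test user keeps their mouth open or closed. A minimal sketch, assuming the same sceneView as above (the captureTrainingImage name and file-naming scheme are just for illustration):
import ARKit
import UIKit

// Saves a snapshot of the ARSCNView (showing the filled TrueDepth mask) into the
// app's Documents directory, so the images can later be exported as training data.
func captureTrainingImage(from sceneView: ARSCNView, label: String) {
    guard let data = sceneView.snapshot().jpegData(compressionQuality: 0.9) else { return }
    let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    let fileURL = documents
        .appendingPathComponent("\(label)-\(UUID().uuidString)")
        .appendingPathExtension("jpg")
    try? data.write(to: fileURL)
}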
In practice, the app takes a snapshot of the scene view each time the face anchor’s node updates the TrueDepth-generated mask, which happens several dozen times per second. Each snapshot is passed to a Vision request handler that runs the Core ML model, and the model sorts the image into one of the defined categories (mouth closed or open).
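The example code also assumes a model property on the view controller. A minimal sketch of how it could be created, assuming the Create ML-generated class is named MouthClassifier (a hypothetical name):
import CoreML
import Vision

// Builds the Vision wrapper around the custom Core ML classifier.
// MouthClassifier is the class Xcode generates once the trained .mlmodel file is added to the project.
func makeMouthClassifierModel() throws -> VNCoreMLModel {
    let configuration = MLModelConfiguration()
    let classifier = try MouthClassifier(configuration: configuration)
    return try VNCoreMLModel(for: classifier.model)
}
The view controller could then store the result, for example in a lazy property, so the renderer callbacks can reuse the same VNCoreMLModel instance for every frame.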
Developing AI-powered mobile apps
Machine learning and AI can help companies add innovative app features and expand their offerings with new mobile solutions that utilize emerging technologies and attract more users. For iOS, Apple offers tools that facilitate ML implementation and model training for mobile apps. This way many mobile developers can deploy basic ML-based functionalities without an extensive background in neural networks. However, more complex solutions involving ML and AI will likely require advanced, specialized knowledge in this field.
That’s why many organizations looking to implement machine learning and AI in their mobile solutions for iOS or other systems decide to partner with an experienced software development company like Software Mind. This cooperation gives them access to engineering experts with a track record of creating AI-driven mobile applications. If you want to learn more about how our AI and mobile specialists can support your software development, reach out to us via this contact form.
About the author
Kamil Stanuszek
Senior Software Engineer
A Senior Software Engineer with seven years’ experience in software development, Kamil has built high-performing mobile solutions for clients in the fintech, social media, travel and insurance industries. A skilled mobile development expert specializing in iOS and Flutter, he’s currently creating and providing architecture design support for an insurance application, as well as automating the development of white label mobile apps. In his engineering projects, Kamil champions simple, clean code, broad AI adoption and the automation of repetitive processes.