This repository will not be updated. It will be kept available in read-only mode.
Read this in other languages: 中文.
The easiest way to find and connect with people around the world is through social media apps like Facebook, Twitter, and LinkedIn. These, however, provide only text-based search. With the release of Apple's ARKit toolkit for iOS, search using facial recognition is now possible. By combining face detection with the iOS Vision API, classification with IBM Watson Visual Recognition, and person identification from the classified image and data, you can build an app that searches for faces and identifies them. One of the use cases is to build an Augmented Reality based résumé using visual recognition.
The main purpose of this code pattern is to demonstrate how to identify a person and their details using Augmented Reality and Visual Recognition. The iOS app recognizes a face and presents an AR view that displays the résumé of the person in the camera view. The app classifies a person's face with Watson Visual Recognition and Core ML; the images are classified offline using a deep neural network trained by Visual Recognition.
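The face-detection step relies on the iOS Vision framework. The sketch below is illustrative rather than the pattern's exact code: it assumes a `UIImage` captured from the camera feed and reports face bounding boxes via the standard `VNDetectFaceRectanglesRequest` API.

```swift
import UIKit
import Vision

// Minimal face-detection sketch: finds face bounding boxes in an image.
// The function name and callback shape are illustrative, not the pattern's
// exact code; VNDetectFaceRectanglesRequest is the standard Vision API.
func detectFaces(in image: UIImage, completion: @escaping ([CGRect]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }

    let request = VNDetectFaceRectanglesRequest { request, error in
        guard error == nil, let observations = request.results as? [VNFaceObservation] else {
            completion([])
            return
        }
        // Vision returns normalized coordinates; convert to pixel space.
        let rects = observations.map {
            VNImageRectForNormalizedRect($0.boundingBox, cgImage.width, cgImage.height)
        }
        completion(rects)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```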
After completing this code pattern, a user will know how to:

* Configure ARKit
* Use the iOS Vision module for face detection
* Create a Swift iOS application that uses the Watson Swift SDK
* Classify images with Watson Visual Recognition and Core ML
As an alternative to the steps below, you can create this project as a starter kit on IBM Cloud, which automatically provisions the required services and injects service credentials into a custom fork of this pattern.
```bash
git clone https://github.com/IBM/ar-resume-with-visual-recognition
```
When the app loads, it also loads three Core ML models that are bundled as part of the app. The models were trained using the IBM Watson Visual Recognition tool and downloaded as Core ML models.
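Offline classification runs a bundled model through Vision's Core ML integration. In this sketch, `FaceClassifier` is a hypothetical name for one of the models downloaded from Watson Visual Recognition and added to the Xcode project; substitute the class Xcode generates for your model.

```swift
import Vision
import CoreML

// Classify a face crop offline with a bundled Core ML model.
// `FaceClassifier` is a hypothetical generated model class; use the class
// Xcode generates for the model you downloaded from Visual Recognition.
func classifyFace(_ cgImage: CGImage, completion: @escaping (String?) -> Void) {
    guard let model = try? VNCoreMLModel(for: FaceClassifier().model) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: model) { request, _ in
        // The top classification identifier is the classifier ID used to
        // look up the person's details in UserInfo.json.
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)
    }
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```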
To create a new classifier, use the Watson Visual Recognition tool. Training a classifier enables the Visual Recognition service to recognize different images of the same person. Use at least ten images of your headshot, and also create a negative data set by using headshots that are not your own.
A JSON file, `UserInfo.json`, is used to store information about each user, indexed by classification identifier. You need to add an entry to this file for any new user that you have classified in the Visual Recognition service.
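A minimal sketch of the lookup, assuming `UserInfo.json` is an array of entries and using illustrative field names; match both to the file actually shipped in the project.

```swift
import Foundation

// Hypothetical shape of one UserInfo.json entry; the field names here are
// assumptions for illustration, not the project's exact schema.
struct UserInfo: Codable {
    let classificationId: String
    let name: String
    let resumeSummary: String
}

// Look up a person's details by the classifier ID returned from the
// Visual Recognition / Core ML classification step.
func userInfo(forClassification id: String) -> UserInfo? {
    guard let url = Bundle.main.url(forResource: "UserInfo", withExtension: "json"),
          let data = try? Data(contentsOf: url),
          let users = try? JSONDecoder().decode([UserInfo].self, from: data) else {
        return nil
    }
    return users.first { $0.classificationId == id }
}
```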
Go to the `ios_swift` directory and open the project using Xcode.
Find `ResumeARStarter/BMSCredentials.plist` in the project and replace the credentials. The plist file looks like below:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>visualrecognitionApi_key</key>
    <string>VR_API_KEY</string>
    <key>cloudantUrl</key>
    <string>CLOUDANT_URL</string>
</dict>
</plist>
```
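At runtime the app can read these values from the bundle. Below is a minimal sketch using only the two keys shown above; the values would then typically be handed to the Watson Swift SDK (check the SDK version installed by `pod install` for its exact initializers).

```swift
import Foundation

// Read the Visual Recognition API key and Cloudant URL from
// BMSCredentials.plist in the app bundle.
func loadCredentials() -> (vrApiKey: String, cloudantUrl: String)? {
    guard let url = Bundle.main.url(forResource: "BMSCredentials", withExtension: "plist"),
          let data = try? Data(contentsOf: url),
          let plist = try? PropertyListSerialization.propertyList(from: data, options: [], format: nil),
          let dict = plist as? [String: String],
          let apiKey = dict["visualrecognitionApi_key"],
          let cloudantUrl = dict["cloudantUrl"] else {
        return nil
    }
    return (apiKey, cloudantUrl)
}
```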
At a command line, run `pod install` to install the Watson SDK and other dependencies.
Once the previous steps are complete, go back to Xcode and run the application by clicking the `Run` menu option.
NOTE: Training in Watson Visual Recognition might take a couple of minutes. While the classifier status is `training`, the AR view will show `Training in progress`. You can check the status of your classifier with a `curl` request against the service API.
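A sketch of the status check, assuming the Visual Recognition v3 `classifiers` endpoint with the API-key query parameter that was current when this pattern was written (newer service instances authenticate with IAM instead):

```bash
curl "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers?api_key={API_KEY}&version=2016-05-20"
```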
Replace `API_KEY` with your Watson Visual Recognition API key.
This code pattern is licensed under the Apache Software License, Version 2. Separate third party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.