This app demonstrates how to use the Cloud Vision API to run label and face detection on an image.
- An API key for the Cloud Vision API (See the docs to learn more)
- An OS X machine; the app runs on the iOS Simulator or on a device
- Xcode 7
- Install CocoaPods if you don't have it already by running `sudo gem install cocoapods`. You'll need to have RubyGems installed in order to install CocoaPods.
- Clone this repo and `cd` into the `Swift` directory.
- In `ImagePickerViewController.swift`, replace `YOUR_API_KEY` with the API key obtained above.
- `cd` into the `Swift` directory and install the `SwiftyJSON` pod by running `pod install`.
- Open the project by running `open imagepicker.xcworkspace`.
- Build and run the app.
- As with all Google Cloud APIs, every call to the Vision API must be associated with a project within the Google Cloud Console that has the Vision API enabled. This is described in more detail in the getting started doc, but in brief:
  - Create a project (or use an existing one) in the Cloud Console.
  - Enable billing and the Vision API.
  - Create an API key, and save this for later.
- Clone this `cloud-vision` repository on GitHub. If you have `git` installed, you can do this by executing the following command:

  ```
  $ git clone https://github.com/GoogleCloudPlatform/cloud-vision.git
  ```

  This will download the repository of samples into the directory `cloud-vision`. Otherwise, GitHub offers an auto-generated zip file of the `master` branch, which you can download and extract. Either method will include the desired directory at `cloud-vision/ios/Swift`.
- `cd` into the `cloud-vision/ios/Swift` directory you just cloned. The app uses the `SwiftyJSON` pod to parse the JSON response. This pod is defined in the `Podfile`, and you can install it by running `pod install`.
- Run the command `open imagepicker.xcworkspace` to open this project in Xcode. Be sure to open the `xcworkspace` version of the project.
- In Xcode's Project Navigator, open the `ImagePickerViewController.swift` file within the `imagepicker` directory.
- Find the line where the `API_KEY` is set. Replace the string value with the API key obtained from the Cloud console above. This key is the credential used in the `createRequest` method to authenticate all requests to the Vision API, so calls to the API are associated with the project you created above for access and billing purposes (a sketch of this wiring appears after this list).
- You are now ready to build and run the project. In Xcode you can do this by clicking the 'Play' button in the top left. This will launch the app on the simulator or on the device you've selected.
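For orientation, here is a minimal sketch of how such a key is typically wired into a Vision API request. The endpoint is the real `images:annotate` URL, but the function below is an illustrative assumption (analogous to the sample's `createRequest`), written in current Swift rather than the Swift 2 APIs the Xcode 7-era project uses:

```swift
import Foundation

// The key created in the Cloud Console; in the sample you paste it into
// ImagePickerViewController.swift.
let API_KEY = "YOUR_API_KEY"

// Hedged sketch of request construction: the Vision API accepts the API
// key as a `key` query parameter on the annotate endpoint, and the POST
// body carries the JSON payload describing the image and the requested
// LABEL_DETECTION / FACE_DETECTION features.
func makeAnnotateRequest(jsonBody: Data) -> URLRequest {
    let url = URL(string: "https://vision.googleapis.com/v1/images:annotate?key=\(API_KEY)")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.addValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = jsonBody
    return request
}
```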
- Click the `Choose an image to analyze` button. This calls the `loadImageButtonTapped` action to load the device's photo library.
- Select an image from your device. If you're using the simulator, you can drag and drop an image from your computer into the simulator using Finder.
- This executes the `imagePickerController`, which saves the selected image and calls the `base64EncodeImage` function. This function base64 encodes the image, resizing it first if it's too large to send to the API (see the first sketch after this list).
- The `createRequest` method creates and executes a label and face detection request to the Cloud Vision API.
- When the API responds, the `analyzeResults` function is called. This method constructs a string of the labels returned from the API. If there are faces detected in the photo, it also analyzes the emotions detected. It then displays the label and face results in the UI by populating the `labelResults` and `faceResults` `UITextView`s with the data returned from the API (see the second sketch after this list).
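As a rough illustration of the resize-then-encode step described in the first bullet above, here is a sketch in current Swift. The 2 MB cutoff and the simple halving strategy are assumptions for illustration, not the sample's exact logic:

```swift
import UIKit

// Sketch: the Vision API limits request payload sizes, so oversized
// images are scaled down before being base64 encoded. The 2 MB
// threshold and the halving strategy here are assumptions.
func base64Encode(_ image: UIImage) -> String? {
    guard var imageData = image.jpegData(compressionQuality: 0.9) else { return nil }

    if imageData.count > 2 * 1024 * 1024 {
        // Halve the dimensions and re-encode to shrink the payload.
        let newSize = CGSize(width: image.size.width / 2, height: image.size.height / 2)
        UIGraphicsBeginImageContext(newSize)
        image.draw(in: CGRect(origin: .zero, size: newSize))
        let resized = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        guard let resizedData = resized?.jpegData(compressionQuality: 0.9) else { return nil }
        imageData = resizedData
    }
    return imageData.base64EncodedString()
}
```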
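And a sketch of how the response might be unpacked with `SwiftyJSON`, in the spirit of `analyzeResults`. The JSON paths (`labelAnnotations`, `faceAnnotations`, and the `*Likelihood` fields) match the Vision API's response shape; the formatting logic is an assumption:

```swift
import SwiftyJSON

// Sketch of response parsing: pull the label descriptions/scores and the
// per-face emotion likelihood buckets out of the annotate response.
func summarize(responseData: Data) throws -> (labels: String, faces: String) {
    let json = try JSON(data: responseData)
    let annotations = json["responses"][0]

    // Each label annotation carries a description and a confidence score.
    let labels = annotations["labelAnnotations"].arrayValue
        .map { "\($0["description"].stringValue): \($0["score"].doubleValue)" }
        .joined(separator: "\n")

    // Face annotations report likelihoods for several emotions.
    let faces = annotations["faceAnnotations"].arrayValue
        .map { face in
            "joy: \(face["joyLikelihood"].stringValue), " +
            "sorrow: \(face["sorrowLikelihood"].stringValue), " +
            "anger: \(face["angerLikelihood"].stringValue), " +
            "surprise: \(face["surpriseLikelihood"].stringValue)"
        }
        .joined(separator: "\n")

    return (labels, faces)
}
```

In the app, these strings are what end up in the `labelResults` and `faceResults` text views.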