Using the Microsoft Cognitive Services Emotion API with Android Studio
First published on MSDN on Oct 14, 2017
Guest blog by Suyash Kabra, Microsoft Student Partner at University College London.

About me

I am a 2nd-year Computer Science student at University College London. I am passionate about hackathons, playing video games, listening to music and watching movies. I am really interested in cognitive services as well as virtual/augmented reality and machine learning. You can find me on LinkedIn.

Introduction

As humans, it is very easy for us to see someone’s face and understand their emotions, which helps us act appropriately towards them. But imagine if your phone could do the same thing. Depending on your mood, it could play you the appropriate songs or show you the appropriate cat pictures (or dog ones if you love dogs instead). You might think I am joking, but believe me, I’m not making this up! Using Microsoft’s Emotion API, we can detect the emotions of the people in an image, or even in real time!

The Emotion API is one of the many cognitive services provided by Microsoft. Using the other cognitive services is similar to using the Emotion API, so feel free to check out the other services after this tutorial.

In this tutorial we are going to create an Android application which lets the user select an image from their gallery and find out the emotions of the people in the selected image. Isn’t that awesome?

Prerequisites

You should know how to create a basic Android application (XML and Java) and how REST APIs work. You can find this project on GitHub – https://github.com/skabra07/MSP-Emotion-API .

What we will use:

· Android Studio, to create your Android application. You can download it from https://developer.android.com/studio/index.html

· Any Android device. If you don’t have one, that’s fine: you can use the Android Virtual Device (AVD) emulator provided with Android Studio. Here is a link which shows you how to set up an AVD: https://developer.android.com/studio/run/managing-avds.html

By the end of this article you will be able to:

· Create a basic Android application which can select an image from your gallery.

· Use the Emotion API to get the emotions of the people in the image.

How will this work:

1. On the main activity page, present the user with an option to select an image from the gallery.

2. Once they have selected the image, they click on “GET EMOTION”.

3. The emotions we get back are displayed.

If you don’t understand what is happening, do not fear. Below is a step-by-step guide on how to create the whole application.

Getting your Emotion API Subscription key

We can get a free 30-day Emotion API subscription. Note that this subscription only allows you to make 20 calls per minute and 30,000 calls per month.

First go to https://azure.microsoft.com/en-us/try/cognitive-services/

Then select “Get API Key” for the Emotion API. After that, accept the conditions and select your country/region.

Finally log in with your preferred choice of account.

Once logged in, you will be sent to the confirmation page. Here you will have your subscription key, which you should write down as we will use it later.

Creating a new Android Studio Project

Open Android Studio and select “Start a new Android Studio Project”:

Fill in the details about the project. Ensure the project location is where you want to place the project on your PC. Then click Next.

Select “Phone and Tablet” with the minimum SDK as API 19 (KitKat) and then click Next.

Select “Empty Activity” and click next

Leave the new page as it is and click on “Finish” to create the new project

The App Layout

Now that we have the Android project ready, we will first create a layout for the “MainActivity” page of the application.

Open the activity_main.xml file (found under the res -> layout folder):

Remove the existing TextView code. In our view we will have an ImageView (to show the image picked from the gallery), two Buttons and a TextView for the result.

The code for the layout:

<?xml version="1.0" encoding="utf-8"?>

<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context="emotionapi.emotionapi.MainActivity"
tools:layout_editor_absoluteY="81dp"
tools:layout_editor_absoluteX="0dp">




<ImageView
android:id="@+id/imageView"
android:layout_width="0dp"
android:layout_height="0dp"
app:srcCompat="@mipmap/ic_launcher"
tools:layout_constraintTop_creator="1"
tools:layout_constraintRight_creator="1"
tools:layout_constraintBottom_creator="1"
app:layout_constraintBottom_toTopOf="@+id/getEmotion"
android:layout_marginStart="3dp"
android:layout_marginEnd="2dp"
app:layout_constraintRight_toRightOf="@+id/getEmotion"
android:layout_marginTop="16dp"
tools:layout_constraintLeft_creator="1"
android:layout_marginBottom="12dp"
app:layout_constraintLeft_toLeftOf="@+id/getImage"
app:layout_constraintTop_toTopOf="parent" />


<Button
android:id="@+id/getImage"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Get Image"
android:onClick="getImage"
android:layout_marginStart="60dp"
tools:layout_constraintTop_creator="1"
android:layout_marginTop="12dp"
app:layout_constraintTop_toBottomOf="@+id/imageView"
tools:layout_constraintLeft_creator="1"
app:layout_constraintLeft_toLeftOf="parent" />


<Button
android:id="@+id/getEmotion"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="Get Emotion"
android:onClick="getEmotion"
android:layout_marginEnd="36dp"
tools:layout_constraintRight_creator="1"
tools:layout_constraintBottom_creator="1"
app:layout_constraintBottom_toTopOf="@+id/resultText"
app:layout_constraintRight_toRightOf="parent"
android:layout_marginBottom="37dp" />


<TextView
android:id="@+id/resultText"
android:layout_width="0dp"
android:layout_height="136dp"
android:ems="10"
android:text=""
android:textAlignment="viewStart"
android:textAppearance="@style/TextAppearance.AppCompat.Widget.PopupMenu.Large"
android:textSize="18sp"
android:typeface="normal"
android:layout_marginStart="18dp"
android:layout_marginEnd="20dp"
tools:layout_constraintRight_creator="1"
tools:layout_constraintBottom_creator="1"
app:layout_constraintBottom_toBottomOf="parent"
app:layout_constraintRight_toRightOf="parent"
tools:layout_constraintLeft_creator="1"
android:layout_marginBottom="24dp"
app:layout_constraintLeft_toLeftOf="@+id/getImage" />


<TextView
android:id="@+id/textView"
android:layout_width="84dp"
android:layout_height="26dp"
android:ems="10"
android:text="Result:"
android:textAlignment="viewStart"
android:textAppearance="@style/TextAppearance.AppCompat.Widget.PopupMenu.Large"
android:textSize="18sp"
android:typeface="normal"
android:layout_marginStart="4dp"
app:layout_constraintBaseline_toBaselineOf="@+id/resultText"
tools:layout_constraintBaseline_creator="1"
tools:layout_constraintLeft_creator="1"
app:layout_constraintLeft_toLeftOf="parent" />



</android.support.constraint.ConstraintLayout>

From the code you can see we have two buttons with the ids “getEmotion” and “getImage”. Each button also names a method in its “android:onClick” attribute; that method is called when the user clicks the button. The rest of the code sets the text sizes and the layout positions of the elements.

Application Permissions and External Libraries

Since we are accessing the gallery and using the internet, we need to request permission from the user. Open the AndroidManifest.xml…

...and add the following lines of code:

<uses-permission android:name="android.permission.INTERNET" />

<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

The above two lines request permission to access the device’s storage and internet. Without them we would not be able to access the gallery or send the picture, and the application could crash. Your AndroidManifest.xml will look as follows:
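Here is a minimal sketch of the resulting manifest. The application block below comes from the default project template and uses the package name implied by the layout (emotionapi.emotionapi), so your generated file may differ slightly:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="emotionapi.emotionapi">

    <!-- permissions needed to reach the Emotion API and read the gallery -->
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

    <application
        android:allowBackup="true"
        android:icon="@mipmap/ic_launcher"
        android:label="@string/app_name"
        android:supportsRtl="true"
        android:theme="@style/AppTheme">
        <activity android:name=".MainActivity">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>

</manifest>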

Also, since we are making a POST call, we will need a library which can handle these calls and read the response; I will explain more about this later. To add the external library, open “build.gradle (Module: app)” (found under the “Gradle Scripts” section of the left navigation bar) and add the following line inside the “dependencies” block:

compile group: 'cz.msebera.android', name: 'httpclient', version: '4.4.1.1'

After adding the line, sync Gradle when prompted.
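For reference, here is a sketch of the full dependencies block. The other entries are whatever the new-project template generated, so the exact libraries and versions on your machine may differ:

dependencies {
    compile fileTree(dir: 'libs', include: ['*.jar'])
    compile 'com.android.support:appcompat-v7:26.1.0'
    compile 'com.android.support.constraint:constraint-layout:1.0.2'
    // the HTTP client library we just added
    compile group: 'cz.msebera.android', name: 'httpclient', version: '4.4.1.1'
    testCompile 'junit:junit:4.12'
}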

How the application works

Before we continue, I will explain exactly what is going to happen (the logic behind the application) and then show you how to do it with a step-by-step guide.

1. When the user opens the application they will see the layout we created.

2. They will then click on the “GET IMAGE” button.

3. Upon clicking the button, the application will check whether it has permission to access the gallery. If not, it will ask the user for permission. This ensures that the application does not crash when the user has not granted it permission.

4. Once we get the permission, the user will select an image from their gallery.

5. After they have selected their image, the user will be sent back to the main page and here they will see the image they have selected.

6. If they are happy with the image they will click on the “GET EMOTION” button.

7. Once we get the emotion we will display it to the user.

But how exactly do we get the emotion?

When the user clicks on the “GET EMOTION” button, we execute an asynchronous class in the background. This class first converts the selected image to a byte array so that we can send it to the Emotion API. After that we make a POST call to https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize , which is the endpoint of the Emotion API. An endpoint is the URL to which the request is made. Along with the image we add two headers to the POST call. Headers are key-value pairs that provide more detailed information about the request we are making; the headers we will send are “Content-Type” and “Ocp-Apim-Subscription-Key”. Finally we add a body to the POST call. The body is the data we send to the server, so our POST body will be the raw bytes of the image.
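Put together, the raw request we are building looks roughly like this (illustrative only; the key is a placeholder):

POST /emotion/v1.0/recognize HTTP/1.1
Host: westus.api.cognitive.microsoft.com
Content-Type: application/octet-stream
Ocp-Apim-Subscription-Key: <your subscription key>

<raw bytes of the JPEG image>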

Once we have all of this, we make the POST call and read the response from the server. The response will be in JSON format and will look something like:

[
  {
    "faceRectangle": {
      "left": 68,
      "top": 97,
      "width": 64,
      "height": 97
    },
    "scores": {
      "anger": 0.00300731952,
      "contempt": 5.14648448E-08,
      "disgust": 9.180124E-06,
      "fear": 0.0001912825,
      "happiness": 0.9875571,
      "neutral": 0.0009861537,
      "sadness": 1.889955E-05,
      "surprise": 0.008229999
    }
  }
]

As you can see, the API returns a probability for each emotion, for every face it finds in the image. We will parse the scores, pick the emotion with the highest probability for each face, and display it in the results section of the main page.

I know that was a lot of information, but here is a step-by-step guide on how to achieve all of the above.

Step-by-Step guide

1. Open MainActivity.java:

2. Declare class variables to hold the ImageView and TextView from our layout so that we can show the image selected by the user and display its emotions. (We declare class variables inside the MainActivity class but outside any of its methods.) Once we have declared the variables, we initialize them inside the “onCreate” method:

public class MainActivity extends AppCompatActivity {

    private ImageView imageView; // variable to hold the image view in our activity_main.xml
    private TextView resultText; // variable to hold the text view in our activity_main.xml

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // initialize our image view and text view
        imageView = (ImageView) findViewById(R.id.imageView);
        resultText = (TextView) findViewById(R.id.resultText);
    }
}

3. Next we will write the methods which check whether we have permission to access the gallery. If we don’t have permission (checkPermission() returns false), a second method asks for it:

public boolean checkPermission() {
    int result = ContextCompat.checkSelfPermission(getApplicationContext(), READ_EXTERNAL_STORAGE);
    return result == PackageManager.PERMISSION_GRANTED;
}

private void requestPermission() {
    ActivityCompat.requestPermissions(MainActivity.this, new String[]{READ_EXTERNAL_STORAGE}, REQUEST_PERMISSION_CODE);
}
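One small gap worth noting: requestPermissions() reports the user’s choice through an onRequestPermissionsResult() callback, so in the code above the user has to tap “GET IMAGE” a second time after granting permission. As an optional sketch (this override is my addition, not part of the original project), you could relaunch the gallery picker as soon as permission is granted, by calling the getImage() method we define in the next step:

// Hypothetical addition: react to the user's permission choice
@Override
public void onRequestPermissionsResult(int requestCode, String[] permissions, int[] grantResults) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults);
    if (requestCode == REQUEST_PERMISSION_CODE
            && grantResults.length > 0
            && grantResults[0] == PackageManager.PERMISSION_GRANTED) {
        // permission granted: open the gallery picker straight away
        getImage(null); // the View parameter is unused, so null is safe here
    }
}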

4. Then we will create a method called “getImage”. This method is executed when the user clicks on the “GET IMAGE” button. Along with it we will also write a method called “onActivityResult”, which displays the image the user selected in the ImageView:

private static final int RESULT_LOAD_IMAGE = 100;
private static final int REQUEST_PERMISSION_CODE = 200;

// when the "GET IMAGE" Button is clicked this function is executed
public void getImage(View view) {
    // check if the user has given us permission to access the gallery
    if (checkPermission()) {
        Intent choosePhotoIntent = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
        startActivityForResult(choosePhotoIntent, RESULT_LOAD_IMAGE);
    } else {
        requestPermission();
    }
}

// This function gets the selected picture from the gallery and shows it on the image view
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);

    // get the photo URI from the gallery and find the file path from the URI
    if (requestCode == RESULT_LOAD_IMAGE && resultCode == RESULT_OK && data != null) {
        Uri selectedImage = data.getData();
        String[] filePathColumn = { MediaStore.Images.Media.DATA };
        Cursor cursor = getContentResolver().query(selectedImage, filePathColumn, null, null, null);
        cursor.moveToFirst();
        int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
        // the path to the image in the gallery
        String picturePath = cursor.getString(columnIndex);
        cursor.close();
        Bitmap bitmap = BitmapFactory.decodeFile(picturePath);
        imageView.setImageBitmap(bitmap);
    }
}

In onActivityResult() we get the path of the image the user selected, create a bitmap from that file, and finally show the bitmap in our ImageView.

5. Now we will write a method to convert the image to a byte array. (Despite its name, toBase64() actually returns the raw JPEG bytes; that is what the API expects with the “application/octet-stream” content type.)

// serialize the image to a JPEG byte array so that we can send it to the Emotion API
public byte[] toBase64(ImageView imgPreview) {
    Bitmap bm = ((BitmapDrawable) imgPreview.getDrawable()).getBitmap();
    ByteArrayOutputStream baos = new ByteArrayOutputStream();
    bm.compress(Bitmap.CompressFormat.JPEG, 100, baos);
    return baos.toByteArray();
}

6. Then create a method “getEmotion()” which will be called when the user clicks on the “GET EMOTION” button. This method creates an object of the GetEmotionCall class (which we will create in the next step) and then calls the class’s execute() method (it is an asynchronous class):

// when the "GET EMOTION" Button is clicked this function is executed

public void getEmotion(View view) {
// run the GetEmotionCall class in the background
GetEmotionCall emotionCall = new GetEmotionCall(imageView);
emotionCall.execute();

}

7. Now we will create a new inner asynchronous class called “GetEmotionCall” which handles the API call. An asynchronous class runs in the background, so its work does not block the main thread. This ensures our application doesn’t freeze while we wait for a response from the API call, and also ensures the application doesn’t crash if we don’t get a response. To read more about how asynchronous classes work, check out https://developer.android.com/reference/android/os/AsyncTask.html

private class GetEmotionCall extends AsyncTask<Void, Void, String> {

    private final ImageView img;

    GetEmotionCall(ImageView img) {
        this.img = img;
    }

    // this function is called before the API call is made
    @Override
    protected void onPreExecute() {
        super.onPreExecute();
    }

    // this function is called when the API call is made
    @Override
    protected String doInBackground(Void... params) {
        return null; // placeholder; implemented in step 9
    }

    // this function is called when we get a result from the API call
    @Override
    protected void onPostExecute(String result) { }
}

8. From the above you can see that onPreExecute(), doInBackground() and onPostExecute() are just stubs. We will first write the body for onPreExecute(). It is a single line which displays “Getting results...” in the result TextView of the layout:

protected void onPreExecute() {
super.onPreExecute();
resultText.setText("Getting results...");

}

9. doInBackground() makes the POST call using the library we added earlier. (NB: enter your subscription key where you see the text “subscription key here”.) We read the response from the POST call as a string and return it so that onPostExecute() can use it. Note that the parameter of request.setEntity() calls the toBase64() method: setEntity() sets the body of the request, and our request body is the image as a byte array.

protected String doInBackground(Void... params) {
    HttpClient httpclient = HttpClients.createDefault();
    StrictMode.ThreadPolicy policy = new StrictMode.ThreadPolicy.Builder().permitAll().build();
    StrictMode.setThreadPolicy(policy);

    try {
        URIBuilder builder = new URIBuilder("https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize");

        URI uri = builder.build();
        HttpPost request = new HttpPost(uri);
        request.setHeader("Content-Type", "application/octet-stream");
        request.setHeader("Ocp-Apim-Subscription-Key", "subscription key here");

        // Request body: the image as a byte array
        request.setEntity(new ByteArrayEntity(toBase64(img)));

        // getting a response and assigning it to the string res
        HttpResponse response = httpclient.execute(request);
        HttpEntity entity = response.getEntity();
        String res = EntityUtils.toString(entity);

        return res;
    }
    catch (Exception e) {
        return "null";
    }
}

10. Now that we have the result from the Emotion API, we will parse it to get the emotion with the highest probability and then display it. We will use the org.json library to parse the JSON response:

protected void onPostExecute(String result) {
JSONArray jsonArray = null;
try {
// convert the string to JSONArray
jsonArray = new JSONArray(result);
String emotions = "";
// get the scores object from the results
for(int i = 0;i<jsonArray.length();i++) {
JSONObject jsonObject = new JSONObject(jsonArray.get(i).toString());
JSONObject scores = jsonObject.getJSONObject("scores");
double max = 0;
String emotion = "";
for (int j = 0; j < scores.names().length(); j++) {
if (scores.getDouble(scores.names().getString(j)) > max) {
max = scores.getDouble(scores.names().getString(j));
emotion = scores.names().getString(j);
}
}
emotions += emotion + "\n";
}
resultText.setText(emotions);


} catch (JSONException e) {
resultText.setText("No emotion detected. Try again later");
}

}

The “result” variable contains the JSON response from the API call in string format. Since it is a JSON array, we convert the string to a JSONArray using the JSONArray class. We then have a string called “emotions” which stores the top emotion for each person in the image. After that we have two for loops. The outer loop goes through the people in the image; there may be two people in the image, and we need the emotion of each person. The inner loop goes through the emotion scores for one person and finds the emotion with the highest probability, which we then append to the “emotions” variable. Once both loops are done, we display the “emotions” string in our result TextView. For the sample response shown earlier, happiness (0.9875571) has the highest score, so the app would display “happiness”.

And that’s it. The full code of MainActivity.java, including the imports the class needs (your package declaration goes above them):

import android.content.Intent;
import android.content.pm.PackageManager;
import android.database.Cursor;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.drawable.BitmapDrawable;
import android.net.Uri;
import android.os.AsyncTask;
import android.os.Bundle;
import android.os.StrictMode;
import android.provider.MediaStore;
import android.support.v4.app.ActivityCompat;
import android.support.v4.content.ContextCompat;
import android.support.v7.app.AppCompatActivity;
import android.view.View;
import android.widget.ImageView;
import android.widget.TextView;

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

import java.io.ByteArrayOutputStream;
import java.net.URI;

import cz.msebera.android.httpclient.HttpEntity;
import cz.msebera.android.httpclient.HttpResponse;
import cz.msebera.android.httpclient.client.HttpClient;
import cz.msebera.android.httpclient.client.methods.HttpPost;
import cz.msebera.android.httpclient.client.utils.URIBuilder;
import cz.msebera.android.httpclient.entity.ByteArrayEntity;
import cz.msebera.android.httpclient.impl.client.HttpClients;
import cz.msebera.android.httpclient.util.EntityUtils;

import static android.Manifest.permission.READ_EXTERNAL_STORAGE;

public class MainActivity extends AppCompatActivity {


private ImageView imageView; // variable to hold the image view in our activity_main.xml
private TextView resultText; // variable to hold the text view in our activity_main.xml
private static final int RESULT_LOAD_IMAGE = 100;
private static final int REQUEST_PERMISSION_CODE = 200;




@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);


// initialize our image view and text view
imageView = (ImageView) findViewById(R.id.imageView);
resultText = (TextView) findViewById(R.id.resultText);
}


// when the "GET EMOTION" Button is clicked this function is called
public void getEmotion(View view) {
// run the GetEmotionCall class in the background
GetEmotionCall emotionCall = new GetEmotionCall(imageView);
emotionCall.execute();
}


// when the "GET IMAGE" Button is clicked this function is called
public void getImage(View view) {
// check if user has given us permission to access the gallery
if(checkPermission()) {
Intent choosePhotoIntent = new Intent(Intent.ACTION_PICK, android.provider.MediaStore.Images.Media.EXTERNAL_CONTENT_URI);
startActivityForResult(choosePhotoIntent, RESULT_LOAD_IMAGE);
}
else {
requestPermission();
}
}


// This function gets the selected picture from the gallery and shows it on the image view
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);


// get the photo URI from the gallery and find the file path from the URI
if (requestCode == RESULT_LOAD_IMAGE && resultCode == RESULT_OK && null != data) {


Uri selectedImage = data.getData();
String[] filePathColumn = { MediaStore.Images.Media.DATA };
Cursor cursor = getContentResolver().query(selectedImage, filePathColumn, null, null, null);
cursor.moveToFirst();
int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
// a string variable which will store the path to the image in the gallery
String picturePath = cursor.getString(columnIndex);
cursor.close();
Bitmap bitmap = BitmapFactory.decodeFile(picturePath);
imageView.setImageBitmap(bitmap);
}
}


// serialize the image to a JPEG byte array so that we can send it to the Emotion API
public byte[] toBase64(ImageView imgPreview) {
Bitmap bm = ((BitmapDrawable) imgPreview.getDrawable()).getBitmap();
ByteArrayOutputStream baos = new ByteArrayOutputStream();
bm.compress(Bitmap.CompressFormat.JPEG, 100, baos); // bm is the bitmap object
return baos.toByteArray();
}




// if permission is not given we get permission
private void requestPermission() {
ActivityCompat.requestPermissions(MainActivity.this, new String[]{READ_EXTERNAL_STORAGE}, REQUEST_PERMISSION_CODE);
}




public boolean checkPermission() {
int result = ContextCompat.checkSelfPermission(getApplicationContext(), READ_EXTERNAL_STORAGE);
return result == PackageManager.PERMISSION_GRANTED;
}


// asynchronous class which makes the API call in the background
private class GetEmotionCall extends AsyncTask<Void, Void, String> {


private final ImageView img;


GetEmotionCall(ImageView img) {
this.img = img;
}


// this function is called before the API call is made
@Override
protected void onPreExecute() {
super.onPreExecute();
resultText.setText("Getting results...");
}


// this function is called when the API call is made
@Override
protected String doInBackground(Void... params) {
HttpClient httpclient = HttpClients.createDefault();
StrictMode.ThreadPolicy policy = new StrictMode.ThreadPolicy.Builder().permitAll().build();
StrictMode.setThreadPolicy(policy);


try {
URIBuilder builder = new URIBuilder("https://westus.api.cognitive.microsoft.com/emotion/v1.0/recognize");


URI uri = builder.build();
HttpPost request = new HttpPost(uri);
request.setHeader("Content-Type", "application/octet-stream");
request.setHeader("Ocp-Apim-Subscription-Key", "d2445b75d6d54c07970a7f834c92ff3c");


// Request body: the image as a byte array
request.setEntity(new ByteArrayEntity(toBase64(img)));


// getting a response and assigning it to the string res
HttpResponse response = httpclient.execute(request);
HttpEntity entity = response.getEntity();
String res = EntityUtils.toString(entity);


return res;


}
catch (Exception e){
return "null";
}


}


// this function is called when we get a result from the API call
@Override
protected void onPostExecute(String result) {
JSONArray jsonArray = null;
try {
// convert the string to JSONArray
jsonArray = new JSONArray(result);
String emotions = "";
// get the scores object from the results
for(int i = 0;i<jsonArray.length();i++) {
JSONObject jsonObject = new JSONObject(jsonArray.get(i).toString());
JSONObject scores = jsonObject.getJSONObject("scores");
double max = 0;
String emotion = "";
for (int j = 0; j < scores.names().length(); j++) {
if (scores.getDouble(scores.names().getString(j)) > max) {
max = scores.getDouble(scores.names().getString(j));
emotion = scores.names().getString(j);
}
}
emotions += emotion + "\n";
}
resultText.setText(emotions);


} catch (JSONException e) {
resultText.setText("No emotion detected. Try again later");
}
}
}

}

Running the application on your device

Now that our application is ready, it’s time to use it. To run the application on your device, click the green “Run” button and then select the device you want to deploy the application to.
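If you prefer the command line, the Android Gradle plugin’s standard installDebug task installs the debug build on a connected device (assuming USB debugging is enabled):

./gradlew installDebug

(On Windows, use gradlew.bat installDebug.)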

Congratulations! You can now use this application to see the emotions of the people in the images in your gallery.

Resources

You can find this project on GitHub – https://github.com/skabra07/MSP-Emotion-API

For more information about the Emotion API you can read its documentation here - https://docs.microsoft.com/en-us/azure/cognitive-services/emotion/home

For other cognitive services like the Face API check out the Azure Cognitive Services website here - https://azure.microsoft.com/en-gb/services/cognitive-services/
