How to incorporate Machine Learning in Android App Development?

Table of Contents
1. Introduction
2. Integrating Machine Learning and Android Apps
3. How to implement machine learning in an Android application?
   - Using TensorFlow Lite
   - Training a TensorFlow Model
4. MLKit for machine learning
5. Final Thoughts

Introduction

You must be wondering why Amazon Music and Spotify recommend songs you actually want to hear, or how Facebook automatically tags you when a friend posts a photo. The answer is that these giants use the capabilities of machine learning in their mobile applications to prompt “WOW” reactions from users.

Machine learning is the study of computer algorithms and an application of artificial intelligence that not only analyzes data but also enables software to explore, learn, and predict outcomes automatically, without human intervention. It is used in fields ranging from internet search engines to banking software that detects unusual account access, and it now serves mobile application development as well. Machine learning is changing the way software is created because it removes the need to hand-code every decision rule: software development metrics are used to prepare data, the data is fed into ML algorithms, and the system discerns the essential patterns on its own.

According to research conducted by BCC, the global machine learning market was valued at $1.4 billion in 2017 and is expected to reach $8.8 billion by the end of 2022.

You may be surprised to learn that machine learning vs. artificial intelligence is currently one of the trendiest topics among data analysts. The capabilities of machine learning can be used in an Android app in several ways; the most suitable one depends on the tasks you want the model to handle. It can analyze your target users’ behavior and search requests to make suggestions and recommendations, which is why it is widely used in e-commerce applications. Apart from this, machine learning is also used for fingerprint and face recognition to ease authentication.

If you’re planning to integrate machine learning into your Android app, then you’re in the right place. In this post, we’re going to discuss various machine learning approaches and share some insights on how to apply them to an industry-specific mobile app. So without any further ado, let’s get started!

Integrating Machine Learning and Android Apps

In a quickly evolving technology sector with ever-increasing use of mobile applications, machine learning and AI are the areas growing fastest. Numerous tech businesses are investing in machine learning tools that allow developers to add machine learning capabilities to an Android application.

According to a report by Grand View Research, the machine learning market is expected to reach $18.20 billion by the end of 2025, growing at a CAGR of 7.7%.

Machine vision allows devices to locate, track, organize, and recognize entities in images. Intricate machine learning algorithms let you acquire data from images and help you analyze it. These machine vision capabilities are used for reconstructing, recognizing, and identifying 3D images, and much more.

More than 60% of manufacturing companies now use machine vision to evaluate quality and safety. You can use machine vision cameras to spot defective items and resolve the defects. The major industries applying these machine learning algorithms are agriculture, warehousing, retail, and logistics, as it offers each of them significant benefits.

Retail is a large sector where machine learning algorithms are widely used to identify, from images, the products customers want to buy. Whenever a customer uploads a picture of a product they want, machine vision recognizes it and searches for a similar product to meet their requirement.

Besides this, the agricultural sector also uses machine learning to determine a plant’s species from a photo of the plant. LeafSnap is one of the most widely used mobile apps for identifying plants this way.

How to implement machine learning in an Android application?

There are many machine learning frameworks available in the market; here we choose TensorFlow as an example.

TensorFlow is an open-source machine learning library from Google that is used to create and train deep learning models. Deep learning is one branch of machine learning, inspired by how the human brain works. For mobile devices, TensorFlow Lite is the TensorFlow solution: it delivers low latency, which is why it is very fast, works well on mobile hardware, and even supports hardware acceleration.

Using TensorFlow Lite

The TensorFlow library works on a wide range of platforms, from large servers to small IoT devices. Here is how you can implement machine learning in your Android app by executing a model with TensorFlow Lite. First, convert your trained model into the TensorFlow Lite format (.tflite). Then bundle the .tflite model and its label file in the app, load the model with the TensorFlow Lite library, and use it to predict the output.
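To make this concrete, here is a minimal sketch of how loading a bundled .tflite model might look on Android, assuming the org.tensorflow:tensorflow-lite dependency has been added to your Gradle file; the asset name mobilenet.tflite is only a placeholder for whatever model you converted.

import android.content.Context;
import android.content.res.AssetFileDescriptor;

import org.tensorflow.lite.Interpreter;

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class TFLiteModelLoader {

    // Memory-map the .tflite file from the assets folder so the
    // Interpreter can read the model without copying it.
    private MappedByteBuffer loadModelFile(Context context, String assetName) throws IOException {
        AssetFileDescriptor fd = context.getAssets().openFd(assetName);
        FileInputStream inputStream = new FileInputStream(fd.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY,
                fd.getStartOffset(), fd.getDeclaredLength());
    }

    // Create the interpreter; "mobilenet.tflite" is a placeholder asset name.
    public Interpreter createInterpreter(Context context) throws IOException {
        return new Interpreter(loadModelFile(context, "mobilenet.tflite"));
    }
}

Memory-mapping the model is the usual choice here because it keeps the (often large) model file out of the Java heap.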

Training a TensorFlow Model

Training a TensorFlow model from scratch can take a long time, as it needs a huge amount of data. However, there is a way to train a model quickly without huge GPU processing power: you can start from an already trained model and build a new one through transfer learning. To do so, follow the steps below (a small inference sketch for the last two steps is shown after the list):

Step 1: Gather all the required data

Step 2: Convert the data into the required images

Step 3: Create an image folder and group the images

Step 4: Retrain the model with the new images

Step 5: Optimize the model for mobile devices

Step 6: Integrate the .tflite file into your Android application

Step 7: Run the application locally and check that it detects the images
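To give an idea of what Steps 6 and 7 look like in code, here is a rough sketch of classifying a Bitmap with the interpreter loaded above. It assumes a float model with a 224x224 RGB input, a labels.txt file in assets, and a single output vector of class probabilities; the input size, normalization, and file names are assumptions you would adjust to match your own model.

import android.content.Context;
import android.graphics.Bitmap;

import org.tensorflow.lite.Interpreter;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.ArrayList;
import java.util.List;

public class TFLiteClassifier {

    private static final int INPUT_SIZE = 224; // assumed model input resolution

    // Read the label file bundled in assets (one label per line).
    private List<String> loadLabels(Context context, String assetName) throws IOException {
        List<String> labels = new ArrayList<>();
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(context.getAssets().open(assetName)));
        String line;
        while ((line = reader.readLine()) != null) {
            labels.add(line);
        }
        reader.close();
        return labels;
    }

    // Scale the bitmap and convert it to the normalized float buffer the model expects.
    private ByteBuffer toInputBuffer(Bitmap bitmap) {
        Bitmap scaled = Bitmap.createScaledBitmap(bitmap, INPUT_SIZE, INPUT_SIZE, true);
        ByteBuffer buffer = ByteBuffer.allocateDirect(4 * INPUT_SIZE * INPUT_SIZE * 3);
        buffer.order(ByteOrder.nativeOrder());
        int[] pixels = new int[INPUT_SIZE * INPUT_SIZE];
        scaled.getPixels(pixels, 0, INPUT_SIZE, 0, 0, INPUT_SIZE, INPUT_SIZE);
        for (int pixel : pixels) {
            buffer.putFloat(((pixel >> 16) & 0xFF) / 255.0f); // red channel
            buffer.putFloat(((pixel >> 8) & 0xFF) / 255.0f);  // green channel
            buffer.putFloat((pixel & 0xFF) / 255.0f);         // blue channel
        }
        return buffer;
    }

    // Run inference and return the label with the highest probability.
    public String classify(Context context, Interpreter interpreter, Bitmap bitmap) throws IOException {
        List<String> labels = loadLabels(context, "labels.txt"); // placeholder asset name
        float[][] output = new float[1][labels.size()];
        interpreter.run(toInputBuffer(bitmap), output);
        int best = 0;
        for (int i = 1; i < labels.size(); i++) {
            if (output[0][i] > output[0][best]) {
                best = i;
            }
        }
        return labels.get(best);
    }
}

If the returned label looks reasonable for a few test photos, the model has been wired in correctly; otherwise the usual suspects are a mismatched input size or a quantized model being fed float input.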

MLKit for machine learning

Google introduced Firebase MLKit, which is one of the easiest ways to bring a machine learning framework into your app. Using the MLKit APIs, developers can add object detection, text recognition, image labeling, face detection, landmark recognition, and barcode scanning.

Whether you’re a newbie or an experienced developer, you can easily use MLKit in your Android apps.

Let me show you an example of how to use MLKit for image labeling.

1. Connect your app to Firebase by opening the Firebase console and creating a new project.

2. Add the following dependencies to your Gradle file:

implementation 'com.google.firebase:firebase-ml-vision:18.0.1'
implementation 'com.google.firebase:firebase-ml-vision-image-label-model:17.0.2'

3. If you’re using the on-device API, configure your app to automatically download the ML model from the Play Store when the app is installed. To do so, add the following declaration to your app's AndroidManifest.xml file:

<meta-data android:name="com.google.firebase.ml.vision.DEPENDENCIES" android:value="label" />

4. Create a screen layout.

<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <LinearLayout
        android:id="@+id/layout_preview"
        android:layout_width="0dp"
        android:layout_height="0dp"
        android:orientation="horizontal"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

    <Button
        android:id="@+id/btn_take_picture"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginStart="8dp"
        android:layout_marginBottom="16dp"
        android:text="Take Photo"
        android:textSize="15sp"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintStart_toStartOf="parent" />

    <TextView
        android:id="@+id/txt_result"
        android:layout_width="0dp"
        android:layout_height="wrap_content"
        android:layout_marginStart="16dp"
        android:layout_marginEnd="16dp"
        android:textAlignment="center"
        android:textColor="@android:color/white"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toEndOf="@+id/btn_take_picture" />

</android.support.constraint.ConstraintLayout>

5. To show the camera preview, you can use a SurfaceView. Also add the camera permission to AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />

6. Create a class that extends SurfaceView and implements the SurfaceHolder.Callback interface:

import java.io.IOException;

import android.content.Context;
import android.hardware.Camera;
import android.util.Log;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class CameraPreview extends SurfaceView implements SurfaceHolder.Callback {

    private SurfaceHolder mHolder;
    private Camera mCamera;

    public CameraPreview(Context context, Camera camera) {
        super(context);
        mCamera = camera;
        mHolder = getHolder();
        mHolder.addCallback(this);
        // deprecated setting, but required on Android versions prior to 3.0
        mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
    }

    public void surfaceCreated(SurfaceHolder holder) {
        try {
            // the surface has been created, so start the camera preview
            // only if a camera instance is attached
            if (mCamera != null) {
                mCamera.setPreviewDisplay(holder);
                mCamera.startPreview();
            }
        } catch (IOException e) {
            Log.d(VIEW_LOG_TAG, "Error setting camera preview: " + e.getMessage());
        }
    }

    public void refreshCamera(Camera camera) {
        if (mHolder.getSurface() == null) {
            // preview surface does not exist
            return;
        }
        // stop preview before making changes
        try {
            mCamera.stopPreview();
        } catch (Exception e) {
            // ignore: tried to stop a non-existent preview
        }
        // set preview size and make any resize, rotate or
        // reformatting changes here, then start preview with new settings
        setCamera(camera);
        try {
            mCamera.setPreviewDisplay(mHolder);
            mCamera.startPreview();
        } catch (Exception e) {
            Log.d(VIEW_LOG_TAG, "Error starting camera preview: " + e.getMessage());
        }
    }

    public void surfaceChanged(SurfaceHolder holder, int format, int w, int h) {
        // If your preview can change or rotate, take care of those events here.
        // Make sure to stop the preview before resizing or reformatting it.
        refreshCamera(mCamera);
    }

    public void setCamera(Camera camera) {
        // method to set a camera instance
        mCamera = camera;
    }

    @Override
    public void surfaceDestroyed(SurfaceHolder holder) {
    }
}

7. Lastly, add the logic for the app's main screen: ask the user for camera permission and show the camera preview only once it has been granted.

import android.Manifest;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.hardware.Camera;
import android.os.Bundle;
import android.support.annotation.NonNull;
import android.support.v7.app.AppCompatActivity;
import android.view.WindowManager;
import android.widget.LinearLayout;
import android.widget.TextView;
import android.widget.Toast;

import com.google.android.gms.tasks.OnFailureListener;
import com.google.android.gms.tasks.OnSuccessListener;
import com.google.firebase.ml.vision.FirebaseVision;
import com.google.firebase.ml.vision.common.FirebaseVisionImage;
import com.google.firebase.ml.vision.label.FirebaseVisionLabel;
import com.google.firebase.ml.vision.label.FirebaseVisionLabelDetector;
import com.google.firebase.ml.vision.label.FirebaseVisionLabelDetectorOptions;
import com.gun0912.tedpermission.PermissionListener;
import com.gun0912.tedpermission.TedPermission;

import java.util.ArrayList;
import java.util.List;

import butterknife.BindView;
import butterknife.ButterKnife;
import butterknife.OnClick;

public class MainActivity extends AppCompatActivity {

    private Camera mCamera;
    private CameraPreview mPreview;
    private Camera.PictureCallback mPicture;

    @BindView(R.id.layout_preview)
    LinearLayout cameraPreview;

    @BindView(R.id.txt_result)
    TextView txtResult;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        ButterKnife.bind(this);
        // keep the screen always on
        getWindow().addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON);
        checkPermission();
    }

    // The camera permission is a dangerous permission, so the user must grant it at runtime.
    // Here we show a permission pop-up and listen for the user's response.
    private void checkPermission() {
        // Set up the permission listener
        PermissionListener permissionlistener = new PermissionListener() {
            @Override
            public void onPermissionGranted() {
                setupPreview();
            }

            @Override
            public void onPermissionDenied(ArrayList<String> deniedPermissions) {
                Toast.makeText(MainActivity.this,
                        "Permission Denied\n" + deniedPermissions.toString(),
                        Toast.LENGTH_SHORT).show();
            }
        };

        // Check the camera permission
        TedPermission.with(this)
                .setPermissionListener(permissionlistener)
                .setDeniedMessage("If you reject permission, you can not use this service\n\nPlease turn on permissions at [Setting] > [Permission]")
                .setPermissions(Manifest.permission.CAMERA)
                .check();
    }

    // Here we set up the camera preview
    private void setupPreview() {
        mCamera = Camera.open();
        mPreview = new CameraPreview(getBaseContext(), mCamera);
        try {
            // Set camera autofocus
            Camera.Parameters params = mCamera.getParameters();
            params.setFocusMode(Camera.Parameters.FOCUS_MODE_CONTINUOUS_PICTURE);
            mCamera.setParameters(params);
        } catch (Exception e) {
            // some devices do not support continuous autofocus; keep the default mode
        }
        cameraPreview.addView(mPreview);
        mCamera.setDisplayOrientation(90);
        mCamera.startPreview();
        mPicture = getPictureCallback();
        mPreview.refreshCamera(mCamera);
    }

    // Take a photo when the user taps the button
    @OnClick(R.id.btn_take_picture)
    public void takePhoto() {
        mCamera.takePicture(null, null, mPicture);
    }

    @Override
    protected void onPause() {
        super.onPause();
        // when paused, release the camera so other applications can use it
        releaseCamera();
    }

    private void releaseCamera() {
        if (mCamera != null) {
            mCamera.stopPreview();
            mCamera.setPreviewCallback(null);
            mCamera.release();
            mCamera = null;
        }
    }

    // Here we get the photo from the camera and pass it to the MLKit processor
    private Camera.PictureCallback getPictureCallback() {
        return new Camera.PictureCallback() {
            @Override
            public void onPictureTaken(byte[] data, Camera camera) {
                mlinit(BitmapFactory.decodeByteArray(data, 0, data.length));
                mPreview.refreshCamera(mCamera);
            }
        };
    }

    // The main method that processes the image from the camera and shows the labeling result
    private void mlinit(Bitmap bitmap) {
        // Only keep labels the detector is reasonably confident about
        FirebaseVisionLabelDetectorOptions options =
                new FirebaseVisionLabelDetectorOptions.Builder()
                        .setConfidenceThreshold(0.5f)
                        .build();

        // To label objects in an image, create a FirebaseVisionImage object from the bitmap
        FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);

        // Get an instance of FirebaseVisionLabelDetector
        FirebaseVisionLabelDetector detector = FirebaseVision.getInstance()
                .getVisionLabelDetector(options);

        detector.detectInImage(image)
                .addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionLabel>>() {
                    @Override
                    public void onSuccess(List<FirebaseVisionLabel> labels) {
                        StringBuilder builder = new StringBuilder();
                        // Get information about the labeled objects
                        for (FirebaseVisionLabel label : labels) {
                            builder.append(label.getLabel())
                                    .append(" ")
                                    .append(label.getConfidence()).append("\n");
                        }
                        txtResult.setText(builder.toString());
                    }
                })
                .addOnFailureListener(new OnFailureListener() {
                    @Override
                    public void onFailure(@NonNull Exception e) {
                        txtResult.setText(e.getMessage());
                    }
                });
    }
}

Final Thoughts

As we all know, machine learning services provide intelligent platforms that guide you in hosting, creating, and training the predictive models you need. The current offerings for AI and ML are vast and feature-rich. Many companies are still hesitant to delve into the world of machine learning, but once you start integrating machine learning algorithms into your mobile application, you will want to work with them again. It is impressive to see how quickly machine learning can be adapted to fit various industries.
