This article uses JNI to deploy a deep learning model on the Android platform, with the MNN framework performing the model inference.

Model and C++ program preparation

mnist-mnn

Android environment setup

  1. Open Android Studio, create a Native C++ project, and configure OpenCV.
    Using OpenCV on Android

  2. Build the MNN-Android shared library on the PC.
    MNN installation and compilation

  3. Writing CMakeLists.txt
    There are two ways to build C/C++ code for JNI: one is ndk-build (which requires .mk configuration files), the other is CMake. This article uses CMake.

cmake_minimum_required(VERSION 3.4.1)

# Creates and names a library, sets it as either STATIC
# or SHARED, and provides the relative paths to its source code.
# You can define multiple libraries, and CMake builds them for you.
# Gradle automatically packages shared libraries with your APK.

# OpenCV
set( OpenCV_DIR /home/yinliang/software/OpenCV-android-sdk/sdk/native/jni )
find_package(OpenCV REQUIRED)

# MNN_DIR is the path where you installed MNN
set(MNN_DIR /home/yinliang/software/MNN)

# MNN header files
include_directories(${MNN_DIR}/include)
include_directories(${MNN_DIR}/include/MNN)
include_directories(${MNN_DIR}/tools)
include_directories(${MNN_DIR}/tools/cpp)
include_directories(${MNN_DIR}/source)
include_directories(${MNN_DIR}/source/backend)
include_directories(${MNN_DIR}/source/core)

# This is our own .h file
include_directories(get_result.h)

aux_source_directory(. SRCS)

add_library( # Sets the name of the library.
        native-lib
        # Sets the library as a shared library.
        SHARED
        # Provides a relative path to your source file(s).
        ${SRCS})

find_library( # Sets the name of the path variable.
        log-lib
        log)

# Link MNN's shared library. The one built here is 64-bit, matching Android's arm64-v8a ABI.
# libMNN.so must be placed inside the project, under app/libs; keeping it outside the project
# does not seem to work.
set(dis_DIR ../../../../libs)
add_library(
        MNN
        SHARED
        IMPORTED)
set_target_properties(
        MNN
        PROPERTIES IMPORTED_LOCATION
        ${dis_DIR}/arm64-v8a/libMNN.so)

# The code mainly depends on OpenCV and MNN, so link both here
target_link_libraries( # Specifies the target library.
        native-lib
        # Links the target library to the log library
        # included in the NDK.
        ${log-lib}
        MNN
        jnigraphics
        ${OpenCV_LIBS})
  4. Modify the build.gradle file under app
    Add the following, otherwise libMNN.so cannot be linked successfully:
sourceSets {
    main {
        jniLibs.srcDirs = ['libs']
    }
}

The complete build.gradle is:

apply plugin: 'com.android.application'

android {
    compileSdkVersion 30
    defaultConfig {
        applicationId "com.mnn.mnist"
        minSdkVersion 25
        targetSdkVersion 26
        versionCode 1
        versionName "1.0"
        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
        sourceSets {
            main {
                jniLibs.srcDirs = ['libs']
            }
        }
        externalNativeBuild {
            cmake {
                cppFlags "-std=c++14"
                arguments "-DANDROID_STL=c++_shared"
                abiFilters "arm64-v8a"
            }
        }
    }
    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
    externalNativeBuild {
        cmake {
            path "src/main/cpp/CMakeLists.txt"
            version "3.10.2"
        }
    }
}

dependencies {
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation 'androidx.appcompat:appcompat:1.0.2'
    implementation 'androidx.constraintlayout:constraintlayout:1.1.3'
    testImplementation 'junit:junit:4.12'
    androidTestImplementation 'androidx.test:runner:1.1.1'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.1.1'
}

Writing native-lib.cpp

  1. Create a get_result.cpp file under src/main/cpp that implements MNN's forward inference.
//
// Created by yinliang on 20-8-17.
//
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <cmath>
#include <ctime>
#include <iostream>
#include <memory>
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include "Backend.hpp"
#include "Interpreter.hpp"
#include "MNNDefine.h"
#include "Tensor.hpp"

using namespace MNN;
using namespace std;
using namespace cv;

int mnist(Mat image_src, const char* model_name)
{
    // const char* model_name = "/home/yinliang/works/C/MNN-APPLICATIONS/applications/mnist/onnx/jni/graphs/mnist.mnn";
    int forward = MNN_FORWARD_CPU;
    // int forward = MNN_FORWARD_OPENCL;

    int precision  = 2;
    int power      = 0;
    int memory     = 0;
    int threads    = 1;
    int INPUT_SIZE = 28;

    cv::Mat raw_image = image_src;
    cv::Mat image;
    cv::resize(raw_image, image, cv::Size(INPUT_SIZE, INPUT_SIZE));
    // cout << "model_path:" << model_name << endl;

    // 1. Create the Interpreter from a file on disk: static Interpreter* createFromFile(const char* file);
    std::shared_ptr<Interpreter> net(Interpreter::createFromFile(model_name));

    // 2. Schedule configuration.
    //    numThread controls the degree of parallelism, but the actual thread count and efficiency
    //    do not depend on numThread alone.
    //    During inference the primary backend is selected by type (CPU by default); when it does not
    //    support an operator in the model, the fallback backend specified by backupType is used.
    MNN::ScheduleConfig config;
    config.numThread = threads;
    config.type      = static_cast<MNNForwardType>(forward);

    // 3. Backend configuration: memory, power and precision are the memory, power and precision preferences.
    MNN::BackendConfig backendConfig;
    backendConfig.precision = (MNN::BackendConfig::PrecisionMode)precision;
    backendConfig.power     = (MNN::BackendConfig::PowerMode)power;
    backendConfig.memory    = (MNN::BackendConfig::MemoryMode)memory;
    config.backendConfig    = &backendConfig;

    // 4. Create the session.
    auto session = net->createSession(config);
    net->releaseModel();

    clock_t start = clock();
    // preprocessing
    image.convertTo(image, CV_32FC3);
    image = image / 255.0f;

    // 5. Feed the input data.
    // wrapping input tensor, convert nhwc to nchw
    std::vector<int> dims{1, INPUT_SIZE, INPUT_SIZE, 3};
    auto nhwc_Tensor = MNN::Tensor::create<float>(dims, NULL, MNN::Tensor::TENSORFLOW);
    auto nhwc_data   = nhwc_Tensor->host<float>();
    auto nhwc_size   = nhwc_Tensor->size();
    ::memcpy(nhwc_data, image.data, nhwc_size);

    std::string input_tensor = "data";
    // Get the input tensor and copy the data into it. With this copy-based approach the user only
    // needs to care about the layout of the tensor they created; copyFromHostTensor handles layout
    // conversion (if needed) and data copies between backends (if needed).
    auto inputTensor = net->getSessionInput(session, nullptr);
    inputTensor->copyFromHostTensor(nhwc_Tensor);

    // 6. Run the session.
    net->runSession(session);

    // 7. Fetch the output.
    std::string output_tensor_name0 = "dense1_fwd";
    // Get the output tensor.
    MNN::Tensor *tensor_scores = net->getSessionOutput(session, output_tensor_name0.c_str());
    MNN::Tensor tensor_scores_host(tensor_scores, tensor_scores->getDimensionType());
    // Copy the data back to the host.
    tensor_scores->copyToHostTensor(&tensor_scores_host);

    // post processing steps
    auto scores_dataPtr = tensor_scores_host.host<float>();

    // softmax
    float exp_sum = 0.0f;
    for (int i = 0; i < 10; ++i)
    {
        float val = scores_dataPtr[i];
        exp_sum += expf(val);
    }
    // get result idx
    int   idx      = 0;
    float max_prob = -10.0f;
    for (int i = 0; i < 10; ++i)
    {
        float val  = scores_dataPtr[i];
        float prob = expf(val) / exp_sum;
        if (prob > max_prob)
        {
            max_prob = prob;
            idx      = i;
        }
    }
    // printf("the result is %d\n", idx);
    return idx;
}

The function takes an image of type Mat and a model path of type const char* as input, and returns the recognition result.
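Before wiring this up through JNI, the function can be sanity-checked with a small desktop program. The following is only a minimal sketch, not part of the original project: it assumes OpenCV and libMNN are available on the PC, and test.jpg and mnist.mnn are placeholder paths for a digit image and the converted model.

// test_mnist.cpp - minimal host-side check of mnist() (sketch; paths are placeholders)
#include <cstdio>
#include <opencv2/opencv.hpp>
#include "get_result.h"

int main()
{
    // Load a digit image; replace "test.jpg" with a real file
    cv::Mat image = cv::imread("test.jpg");
    if (image.empty()) {
        std::printf("failed to load image\n");
        return 1;
    }
    // "mnist.mnn" stands in for the converted model file
    int digit = mnist(image, "mnist.mnn");
    std::printf("predicted digit: %d\n", digit);
    return 0;
}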

  2. Write the corresponding .h file
    Create the header file for get_result.cpp in the same directory.
#pragma once

#include <memory>
#include <string>
#include <vector>
#include <opencv2/opencv.hpp>
#include "Backend.hpp"
#include "Interpreter.hpp"
#include "MNNDefine.h"
#include "Tensor.hpp"

using namespace MNN;
using namespace std;
using namespace cv;

int mnist(Mat image_src, const char* model_name);
  3. Write native-lib.cpp
    Define the JNI interface function, i.e. the native method that will ultimately be called from the Android side. Its parameter types are all JNI-specific types; see the introduction to JNI for details.
#include <jni.h>
#include <string>
#include <android/bitmap.h>
#include <android/log.h>
#include <opencv2/opencv.hpp>
#include "get_result.h"
#include "stdio.h"
#include "stdlib.h"

extern "C" JNIEXPORT jstring JNICALL
Java_com_mnn_mnist_MainActivity_mnistJNI(JNIEnv *env, jobject obj, jobject bitmap, jstring jstr)
{
    AndroidBitmapInfo info;
    void *pixels;
    CV_Assert(AndroidBitmap_getInfo(env, bitmap, &info) >= 0);
    CV_Assert(info.format == ANDROID_BITMAP_FORMAT_RGBA_8888 ||
              info.format == ANDROID_BITMAP_FORMAT_RGB_565);
    CV_Assert(AndroidBitmap_lockPixels(env, bitmap, &pixels) >= 0);
    CV_Assert(pixels);

    if (info.format == ANDROID_BITMAP_FORMAT_RGBA_8888) {
        Mat temp(info.height, info.width, CV_8UC4, pixels);
        Mat temp2 = temp.clone();

        // Convert the jstring into a C++ const char*
        const char *path = env->GetStringUTFChars(jstr, 0);

        // Convert the image from RGBA to RGB first, otherwise the recognition result is wrong
        Mat RGB;
        cvtColor(temp2, RGB, COLOR_RGBA2RGB);

        // Call the mnist() function defined earlier to recognize the digit image
        int result = mnist(RGB, path);
        env->ReleaseStringUTFChars(jstr, path);

        // Convert the image back to RGBA so that the Android side can display it
        Mat show(info.height, info.width, CV_8UC4, pixels);
        cvtColor(RGB, temp, COLOR_RGB2RGBA);

        // Convert the int recognition result to a jstring and return it
        string re_reco = to_string(result);
        AndroidBitmap_unlockPixels(env, bitmap);
        return env->NewStringUTF(re_reco.c_str());
    } else {
        Mat temp(info.height, info.width, CV_8UC2, pixels);
    }
    AndroidBitmap_unlockPixels(env, bitmap);
    // RGB_565 bitmaps are not handled; return an empty result
    return env->NewStringUTF("");
}

Calling it from the Android side

Since I am not familiar with Android development, this part of the code is rough; it runs correctly, but it is not elegant.

package com.mnn.mnist;

import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;
import android.Manifest;
import android.content.res.AssetManager;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.os.Bundle;
import android.os.Environment;
import android.view.View;
import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
import java.io.File;
import static android.content.pm.PackageManager.PERMISSION_GRANTED;

public class MainActivity extends AppCompatActivity implements View.OnClickListener {
    // Two views: one to display the image, one to display the result text
    private ImageView imageView;
    private TextView textView;

    // Load the generated shared library
    // Used to load the 'native-lib' library on application startup.
    static {
        System.loadLibrary("native-lib");
    }

    // Declare the JNI function corresponding to the one defined in native-lib.cpp
    native String mnistJNI(Object bitmap, String str);

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        imageView = findViewById(R.id.imageView);
        findViewById(R.id.show).setOnClickListener((View.OnClickListener) this);
        findViewById(R.id.process).setOnClickListener((View.OnClickListener) this);
        findViewById(R.id.gray).setOnClickListener((View.OnClickListener) this);
        textView = findViewById(R.id.textView);
        findViewById(R.id.textView).setOnClickListener((View.OnClickListener) this);
        myRequetPermission();
    }

    // The .mnn model was pushed to the phone's SD card with adb push, so storage permission is needed to access it
    private void myRequetPermission() {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.READ_EXTERNAL_STORAGE) != PERMISSION_GRANTED) {
            ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.READ_EXTERNAL_STORAGE}, 1);
        } else {
            Toast.makeText(this, "Permission already granted!", Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults);
        if (requestCode == 1) {
            for (int i = 0; i < permissions.length; i++) {
                if (grantResults[i] == PERMISSION_GRANTED) { // "Always allow" was selected
                    Toast.makeText(this, "Permission " + permissions[i] + " granted", Toast.LENGTH_SHORT).show();
                }
            }
        }
    }

    @Override
    public void onClick(View v) {
        // 'show' is a button used only to display the image
        if (v.getId() == R.id.show) {
            // Put an image named test.jpg under res/drawable
            // Read the image; on Android the corresponding type is Bitmap
            Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.test);
            // Display the image
            imageView.setImageBitmap(bitmap);
        } else {
            Bitmap bitmap = BitmapFactory.decodeResource(getResources(), R.drawable.test);
            // Read the mnist.mnn model from the SD card
            String model_path = Environment.getExternalStorageDirectory().getPath() + "/mnist.mnn";
            System.out.println("model path: " + model_path);
            // Display the image
            imageView.setImageBitmap(bitmap);
            // Display the recognition result
            textView.setText(mnistJNI(bitmap, model_path));
        }
    }

    @Override
    public void onPointerCaptureChanged(boolean hasCapture) {
    }
}

The corresponding layout file:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <ImageView
        android:id="@+id/imageView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:orientation="horizontal">

        <Button
            android:id="@+id/show"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:text="show" />

        <Button
            android:id="@+id/process"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:text="mnist" />

        <Button
            android:id="@+id/gray"
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_weight="1"
            android:text="gray" />
    </LinearLayout>

    <TextView
        android:id="@+id/textView"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:gravity="center"
        android:textSize="24sp"
        android:textColor="#00ff00"
        android:text="result" />
</RelativeLayout>

TODO

  1. The .mnn file is currently placed on the phone in advance and read from there at runtime; I do not yet know how to bundle it inside the project and read it from there (a possible approach is sketched after this list).
  2. The input image is a fixed image stored in res; I do not yet know how to pick an image from the gallery or take a photo for recognition.
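For the first point, one possible approach (a sketch only, not verified in this project) is to bundle the .mnn file in the app's assets/ directory, pass the AssetManager from Java down through JNI, and build the Interpreter from an in-memory buffer with MNN's Interpreter::createFromBuffer instead of createFromFile. The helper name loadInterpreterFromAsset below is made up for illustration.

// Hypothetical helper: create an MNN Interpreter from a model stored in the app's assets.
// Assumes the Java side passes getAssets() (as a jobject) and the asset file name through JNI.
#include <jni.h>
#include <memory>
#include <vector>
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
#include "Interpreter.hpp"

std::shared_ptr<MNN::Interpreter> loadInterpreterFromAsset(JNIEnv *env,
                                                           jobject assetManager,
                                                           const char *assetName)
{
    AAssetManager *mgr = AAssetManager_fromJava(env, assetManager);
    AAsset *asset = AAssetManager_open(mgr, assetName, AASSET_MODE_BUFFER);
    if (asset == nullptr) {
        return nullptr;  // asset not found
    }
    // Read the whole .mnn file into memory, then build the Interpreter from that buffer
    size_t size = static_cast<size_t>(AAsset_getLength(asset));
    std::vector<char> buffer(size);
    AAsset_read(asset, buffer.data(), size);
    AAsset_close(asset);
    return std::shared_ptr<MNN::Interpreter>(
            MNN::Interpreter::createFromBuffer(buffer.data(), buffer.size()));
}

If this route were taken, the NDK's android library would also have to be added to target_link_libraries in CMakeLists.txt for the AAsset* calls.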
