android - Performance of keypoint matching and thresholding in OpenCV for Android using ORB and RANSAC

I recently started developing an app in Android Studio and have just finished writing the code. The accuracy I get is satisfactory, but the time the device takes is far too long. I followed some tutorials on how to profile performance in Android Studio, and I found that one small part of my code takes 6 seconds, which is half of the time my app needs to display the whole result. I have seen many posts on OpenCV/JavaCV, such as Java OpenCV - extracting good matches from knnMatch and OpenCV filtering ORB matches, but nobody has asked this particular question. The OpenCV tutorial at http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html is helpful, but the RANSAC/homography function in the OpenCV Java API takes the keypoints in a different form than the C++ version.
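For reference, the filtering discussed in the knnMatch posts linked above is the Lowe ratio test rather than the min-distance threshold used further down. A minimal sketch of it, using the same OpenCV Java classes as the code that follows (the 0.75 ratio is an assumed value, and Obj_descriptor / Scene_descriptor refer to the descriptor Mats computed below):

    // Sketch: Lowe ratio test on ORB descriptors (0.75 is an assumed threshold).
    DescriptorMatcher matcher = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    List<MatOfDMatch> knnMatches = new LinkedList<MatOfDMatch>();
    // Find the two nearest scene descriptors for every object descriptor.
    matcher.knnMatch(Obj_descriptor, Scene_descriptor, knnMatches, 2);

    LinkedList<DMatch> goodMatches = new LinkedList<DMatch>();
    for (MatOfDMatch pair : knnMatches) {
        DMatch[] m = pair.toArray();
        // Keep the best match only if it is clearly better than the runner-up.
        if (m.length >= 2 && m[0].distance < 0.75f * m[1].distance) {
            goodMatches.addLast(m[0]);
        }
    }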

Here is my code

     public Mat ORB_detection (Mat Scene_image, Mat Object_image){
    /*This function is used to find the reference card in the captured image with the help of
    * the reference card saved in the application
    * Inputs - Captured image (Scene_image), Reference Image (Object_image)*/
    FeatureDetector orb = FeatureDetector.create(FeatureDetector.DYNAMIC_ORB);
    /*1.a Keypoint Detection for Scene Image*/
    //convert input to grayscale
    channels = new ArrayList<Mat>(3);
    Core.split(Scene_image, channels);
    Scene_image = channels.get(0);
    //Sharpen the image
    Scene_image = unsharpMask(Scene_image);
    MatOfKeyPoint keypoint_scene = new MatOfKeyPoint();
    //Convert image to eight bit, unsigned char
    Scene_image.convertTo(Scene_image, CvType.CV_8UC1);
    orb.detect(Scene_image, keypoint_scene);
    channels.clear();

    /*1.b Keypoint Detection for Object image*/
    //convert input to grayscale
    Core.split(Object_image,channels);
    Object_image = channels.get(0);
    channels.clear();
    MatOfKeyPoint keypoint_object = new MatOfKeyPoint();
    Object_image.convertTo(Object_image, CvType.CV_8UC1);
    orb.detect(Object_image, keypoint_object);

    //2. Calculate the descriptors/feature vectors
    //Initialize orb descriptor extractor
    DescriptorExtractor orb_descriptor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    Mat Obj_descriptor = new Mat();
    Mat Scene_descriptor = new Mat();
    orb_descriptor.compute(Object_image, keypoint_object, Obj_descriptor);
    orb_descriptor.compute(Scene_image, keypoint_scene, Scene_descriptor);

    //3. Matching the descriptors using Brute-Force
    DescriptorMatcher brt_frc = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    MatOfDMatch matches = new MatOfDMatch();
    brt_frc.match(Obj_descriptor, Scene_descriptor, matches);

    //4. Calculating the max and min distance between Keypoints
    float max_dist = 0,min_dist = 100,dist =0;
    DMatch[] for_calculating;
    for_calculating = matches.toArray();
    for( int i = 0; i < Obj_descriptor.rows(); i++ )
    {   dist = for_calculating[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    System.out.print("\nInterval min_dist: " + min_dist + ", max_dist:" + max_dist);
    //-- Use only "good" matches (i.e. whose distance is less than 2.5*min_dist)
    LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
    double ratio_dist=2.5;
    ratio_dist = ratio_dist*min_dist;
    int i, iter = matches.toArray().length;
    matches.release();

    for(i = 0;i < iter; i++){
        if (for_calculating[i].distance <=ratio_dist)
            good_matches.addLast(for_calculating[i]);
    }
    System.out.print("\n done Good Matches");

    /*Necessary type conversion for drawing matches
    MatOfDMatch goodMatches = new MatOfDMatch();
    goodMatches.fromList(good_matches);
    Mat matches_scn_obj = new Mat();
    Features2d.drawKeypoints(Object_image, keypoint_object, new Mat(Object_image.rows(), keypoint_object.cols(), keypoint_object.type()), new Scalar(0.0D, 0.0D, 255.0D), 4);
    Features2d.drawKeypoints(Scene_image, keypoint_scene, new Mat(Scene_image.rows(), Scene_image.cols(), Scene_image.type()), new Scalar(0.0D, 0.0D, 255.0D), 4);
    Features2d.drawMatches(Object_image, keypoint_object, Scene_image, keypoint_scene, goodMatches, matches_scn_obj);
    SaveImage(matches_scn_obj,"drawing_good_matches.jpg");
    */

    if(good_matches.size() <= 6){
        ph_value = "7";
        System.out.println("Wrong Detection");
        return Scene_image;
    }
    else{
        //5. RANSAC thresholding for finding the optimum homography
        Mat outputImg = new Mat();
        LinkedList<Point> objList = new LinkedList<Point>();
        LinkedList<Point> sceneList = new LinkedList<Point>();

        List<org.opencv.core.KeyPoint> keypoints_objectList = keypoint_object.toList();
        List<org.opencv.core.KeyPoint> keypoints_sceneList = keypoint_scene.toList();

        //getting the object and scene points from good matches
        for(i = 0; i<good_matches.size(); i++){
            objList.addLast(keypoints_objectList.get(good_matches.get(i).queryIdx).pt);
            sceneList.addLast(keypoints_sceneList.get(good_matches.get(i).trainIdx).pt);
        }
        good_matches.clear();
        MatOfPoint2f obj = new MatOfPoint2f();
        obj.fromList(objList);
        objList.clear();

        MatOfPoint2f scene = new MatOfPoint2f();
        scene.fromList(sceneList);
        sceneList.clear();

        float RANSAC_dist=(float)2.0;
        Mat hg = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, RANSAC_dist);

        for(i = 0;i<hg.cols();i++) {
            String tmp = "";
            for ( int j = 0; j < hg.rows(); j++) {

                Point val = new Point(hg.get(j, i));
                tmp= tmp + val.x + " ";
            }
        }

        Mat scene_image_transformed_color = new Mat();
        Imgproc.warpPerspective(original_image, scene_image_transformed_color, hg, Object_image.size(), Imgproc.WARP_INVERSE_MAP);
        processing(scene_image_transformed_color, template_match);

        return outputImg;
    }
}

And this is the part that takes 6 seconds to execute at runtime -
    LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
    double ratio_dist=2.5;
    ratio_dist = ratio_dist*min_dist;
    int i, iter = matches.toArray().length;
    matches.release();

    for(i = 0;i < iter; i++){
        if (for_calculating[i].distance <=ratio_dist)
            good_matches.addLast(for_calculating[i]);
    }
    System.out.print("\n done Good Matches");

I was thinking that maybe I could write this part in C++ using the NDK, but I just wanted to make sure that the language is the problem and not the code itself (see the timing sketch below).
Please don't be too harsh, this is my first question! Any criticism is much appreciated!
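Before reaching for the NDK, a quick sanity check is to measure the suspect block with wall-clock time rather than relying only on the profiler output. A minimal sketch using System.currentTimeMillis (the label string is arbitrary):

    long t0 = System.currentTimeMillis();
    // ... run the good-matches filtering loop shown above ...
    long elapsedMs = System.currentTimeMillis() - t0;
    System.out.println("good-matches filtering took " + elapsedMs + " ms");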

Best Answer

So the problem was that logcat was giving me false timing results. The lag was actually caused by a huge Gaussian blur later in the code. Using System.currentTimeMillis instead of System.out.print is what revealed the bug to me.
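As a side note, if a large Gaussian blur turns out to be the bottleneck, its cost grows with both image resolution and kernel size, so one common mitigation is to blur a downscaled copy. A minimal sketch of that idea, under assumed values and not taken from the original answer:

    // Sketch: cheaper blur by working on a downscaled copy.
    // "src" stands for whatever Mat is being blurred; the 0.5 scale factor
    // and 15x15 kernel size are assumed values.
    Mat small = new Mat();
    Mat blurred = new Mat();
    Imgproc.resize(src, small, new Size(), 0.5, 0.5, Imgproc.INTER_AREA);
    Imgproc.GaussianBlur(small, small, new Size(15, 15), 0);
    Imgproc.resize(small, blurred, src.size(), 0, 0, Imgproc.INTER_LINEAR);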

The original question, android - Performance of keypoint matching and thresholding in OpenCV for Android using ORB and RANSAC, can be found on Stack Overflow: https://stackoverflow.com/questions/39039760/
