
opencv

薛定谔的慵懒喵

BLURRING


Normalized Box Filter

This filter is the simplest of all! Each output pixel is the mean of its kernel neighbors ( all of them contribute with equal weights)

kernel:

    K = (1 / (kernel_width * kernel_height)) * (all-ones matrix of size kernel_height x kernel_width)


Gaussian Filter

Gaussian filtering is done by convolving each point in the input array with a Gaussian kernel and then summing them all to produce the output array

Median Filter

The median filter runs through each element of the signal (in this case the image) and replaces each pixel with the median of its neighboring pixels (located in a square neighborhood around the evaluated pixel).

Bilateral Filter

  • So far, we have explained some filters whose main goal is to smooth an input image. However, sometimes the filters not only dissolve the noise but also smooth away the edges. To avoid this (at least to a certain extent), we can use a bilateral filter.

  • In an analogous way to the Gaussian filter, the bilateral filter also considers the neighboring pixels with weights assigned to each of them. These weights have two components, the first of which is the same weighting used by the Gaussian filter. The second component takes into account the difference in intensity between the neighboring pixels and the evaluated one.


opencv:

  • Normalized Block Filter:

    blur(src,dst,Size,Point)

  • Gaussian Filter:

    GaussianBlur(src,dst,Size,0,0)

  • Median Filter:

    medianBlur(src,dst,i)

  • Bilateral Filter:

    bilateralFilter(src,dst,i,i*2,i/2)

    For a more detailed explanation you can check this link
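
A minimal C++ sketch (my own, not from the post) showing the four calls above in one program; the input file name and the kernel size i = 5 are placeholders:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat src = imread("input.jpg");   // placeholder input image
    if (src.empty()) return -1;

    Mat dst;
    int i = 5;                       // kernel size / filter diameter (odd)

    blur(src, dst, Size(i, i), Point(-1, -1));     // normalized box filter
    GaussianBlur(src, dst, Size(i, i), 0, 0);      // Gaussian filter
    medianBlur(src, dst, i);                       // median filter (i must be odd and > 1)
    bilateralFilter(src, dst, i, i * 2, i / 2.0);  // bilateral filter (d, sigmaColor, sigmaSpace)

    imshow("smoothed", dst);
    waitKey(0);
    return 0;
}

Each call overwrites dst here; keep separate output Mats if you want to compare the four results side by side.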

txwtech笛科思

C# OpenCV image passing - "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." - a workaround



Unhandled AccessViolationException

"This is often an indication that other memory is corrupt" does not mean the physical RAM is damaged. My guess is that by the time this step runs, the memory has already been freed, so the address can no longer be found.

public static bool RecognizeCpositiveAndNegative(PictureBox pbox_disImage1, Mat tempimg)

When this function is called, the image data is passed in through tempimg.

Mat org_被测图片 = tempimg.Clone();

The error occurs while copying the contents of tempimg. Presumably the memory backing tempimg has already been released, which is why the exception is thrown.

How to fix the error:


else if (RevString == "check") // check for reversed placement...
            {
               // SendData("ready_ok");
                g_match_image = null;
                g_match_image = SimpleGrab.SimpleGrab.获取相机图像();

                // the grabbed image is empty
                if (g_match_image == null)
                {
                    SendData("error1");
                    this.Invoke(new EventHandler(delegate { richTextBox1.AppendText("没有获取到图像" + "\r\n"); }));
                    return;
                }
                else
                {
                    bool result = CRecognizeCpositiveAndNegative.RecognizeCpositiveAndNegative(this.pbox_disImage1, g_match_image);

The argument passed in as tempimg comes from g_match_image at the call site,

g_match_image = SimpleGrab.SimpleGrab.获取相机图像(); // i.e. take a photo; the image data is stored in g_match_image.

Since this is what triggers the error, simply stop passing the parameter: take the photo directly inside the called function, so tempimg.Clone() is never executed.
---------------------
Author: txwtech
Source: CSDN
Original: https://blog.csdn.net/txwtech/article/details/91432801
Copyright notice: this is the blogger's original article; please include a link to the original when reposting.

    public static bool RecognizeCpositiveAndNegative(PictureBox pbox_disImage1, Mat tempimg)
        {
            /* 1. Load the local template images: the "placed correctly" template and the "placed reversed" template
             * 2. Binarize and crop to the minimum bounding rectangle for all three images (both templates and the image under test)
             * 3. Run OpenCV template matching of the processed test image against each processed template
             * 4. Whichever template gives the higher similarity wins.
             */
            Mat org_放正模板 = new Mat(Application.StartupPath + "\\image\\放正模板.bmp", ImreadModes.Grayscale);
            Mat org_放反模板 = new Mat(Application.StartupPath + "\\image\\放反模板.bmp", ImreadModes.Grayscale);
           // Mat org_被测图片 = tempimg.Clone();
            Mat org_被测图片 = SimpleGrab.SimpleGrab.获取相机图像(); // grab the image here directly instead of cloning the parameter
            if (org_被测图片.NumberOfChannels != 1) // check the freshly grabbed image rather than the unused parameter
            {
                CvInvoke.CvtColor(org_被测图片, org_被测图片, ColorConversion.Bgr2Gray);
            }


Alirio.Lau

iOS -- Setting up the OpenCV environment

1. Install opencv with Homebrew

brew install opencv


2. Configure Xcode

Build Settings --> Header Search Paths: add /usr/local/include


3. Drag the .dylib files from /usr/local/Cellar/opencv/3.4.1_2/lib into the project (the .dylib files with the arrow badge in the lower-left corner do not need to be dragged in)


4. Include the headers and call the API.
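
For step 4, a minimal sketch of what such a call might look like (the function is my own illustration; a source file containing C++ OpenCV calls is usually given the .mm extension so Xcode compiles it as Objective-C++):

#include <opencv2/opencv.hpp>

// Smoke test: if this compiles and links, the header search path (step 2)
// and the .dylib files (step 3) are wired up correctly.
cv::Mat smokeTest(const cv::Mat &input)
{
    cv::Mat gray, blurred;
    cv::cvtColor(input, gray, cv::COLOR_BGR2GRAY);
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 0);
    return blurred;
}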




啃屁虫

Cross-compiling opencv-2.4.13


Host: Ubuntu 14.04
Toolchain: arm-linux 4.3.3
Source: opencv-2.4.13 (password: r9ni), or download it from the official site http://opencv.org/downloads.html
1. Unpack the source
unzip opencv-2.4.13.zip
2. Create a build directory
mkdir opencv-2.4.13-build
3. Enter the build directory
cd opencv-2.4.13-build/
4. Specify the toolchain
vi toolchain.cmake (fill in the following)
set( CMAKE_SYSTEM_NAME Linux )
set( CMAKE_SYSTEM_PROCESSOR arm )
set( CMAKE_C_COMPILER arm-linux-gcc )
set( CMAKE_CXX_COMPILER arm-linux-g++ )
set( CMAKE_FIND_ROOT_PATH /nfsroot/rootfs ) (optional)
5. Generate the files cmake needs
cmake -DCMAKE_TOOLCHAIN_FILE=toolchain.cmake ../opencv-2.4.13 (the source directory)
6. Configure with ccmake . (note the "." after ccmake; it stands for the current directory)


7. Edit the configuration options
The Enter key toggles between ON and OFF, or opens a value for editing

BUILD_JPEG OFF (default) change to BUILD_JPEG ON
BUILD_PNG OFF (default) change to BUILD_PNG ON
WITH_TIFF ON (default) change to WITH_TIFF OFF (as actually needed)

BUILD_SHARED_LIBS ON builds shared libraries (default)
BUILD_SHARED_LIBS OFF builds static libraries

CMAKE_INSTALL_PREFIX /home/wei/src/opencv-2.4.13-build/install (default install directory; change it as needed)
change to
CMAKE_INSTALL_PREFIX /usr/local/arm/opencv-2.4.13 (to taste; this is where I usually install things for the ARM board)

8. Press c to generate the configuration files
9. Press g to generate the Makefile
10. make
11. make install (note: this step needs root privileges, and the cross toolchain must still be usable)
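
To verify the cross-compiled library, a small test program can be built with the same toolchain and run on the board. A sketch assuming the CMAKE_INSTALL_PREFIX chosen in step 7; the library list and test image are placeholders, adjust to your setup:

// test_opencv.cpp -- smoke test for the cross-compiled OpenCV 2.4.13
// Build:
//   arm-linux-g++ test_opencv.cpp -o test_opencv \
//     -I/usr/local/arm/opencv-2.4.13/include \
//     -L/usr/local/arm/opencv-2.4.13/lib \
//     -lopencv_core -lopencv_imgproc -lopencv_highgui
#include <opencv2/opencv.hpp>
#include <cstdio>

int main()
{
    cv::Mat img = cv::imread("test.jpg");   // any test image present on the target board
    if (img.empty()) { std::printf("load failed\n"); return -1; }

    cv::Mat gray;
    cv::cvtColor(img, gray, CV_BGR2GRAY);
    cv::imwrite("gray.jpg", gray);
    std::printf("rows=%d cols=%d\n", gray.rows, gray.cols);
    return 0;
}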

TKS

空了星轨

Using curvature to count fingertips




// Used in curvature-based fingertip counting: returns the cosine of the angle
// at p between the vectors p->q and p->r
double kkk(Point p, Point q, Point r)
{
    double dot = ((q.x - p.x) * (r.x - p.x) + (r.y - p.y) * (q.y - p.y))
               / (sqrt((double)((p.x - q.x) * (p.x - q.x) + (p.y - q.y) * (p.y - q.y)))
                * sqrt((double)((p.x - r.x) * (p.x - r.x) + (p.y - r.y) * (p.y - r.y))));
    return dot;
}
 
   
 
   
    // Generate a binary image by edge detection
    Mat cannyImage;
    Canny(closed, cannyImage, 130, 350);
    namedWindow("Canny");
    imshow("Canny", cannyImage);

    // Find contours in the resulting binary image
    // g_vcontours stores the point vector of every contour
    vector<vector<Point> > g_vcontours;
    vector<Vec4i> g_vHierarchy;
    findContours(closed, g_vcontours, g_vHierarchy, RETR_TREE, CHAIN_APPROX_SIMPLE, Point(0, 0));
 

  
    // Find contours
    //findContours(skinArea, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);

    // Find the largest contour
    int index = 0;
    double area, maxArea(0);
    for (int i = 0; i < g_vcontours.size(); i++)
    {
        area = contourArea(Mat(g_vcontours[i]));
        if (area > maxArea)
        {
            maxArea = area;
            index = i;
        }
    }

    //drawContours(frame, g_vcontours, index, Scalar(0, 0, 255), 2, 8, g_vHierarchy);

    Moments moment = moments(cannyImage, true);
    Point rcente(moment.m10 / moment.m00, moment.m01 / moment.m00);
    circle(cannyImage, rcente, 8, Scalar(0, 0, 255), CV_FILLED);

    // Look for fingertips
    vector<Point> couPoint = g_vcontours[index];
    int max(0), count(0), notice(0);
    double dot, dot1, dot2;
    vector<Point> fingerTips;
    Point p, q, r, p1, q1, r1, p2, q2, r2;
    int ee = 4; int dd = 10;
    for (int i = 0; (i < couPoint.size()) && couPoint.size(); i++)
    {
        // q and r are the contour points dd steps before and after p (with wrap-around)
        if (i >= dd) { q = couPoint[i - dd]; }
        else { q = couPoint[i - dd + couPoint.size()]; }
        p = couPoint[i];
        if (i + dd < couPoint.size()) { r = couPoint[i + dd]; }
        else { r = couPoint[i + dd - couPoint.size()]; }
        dot = kkk(p, q, r);

        // dot1/dot2: the largest curvature value within ee steps on either side of i,
        // so that only local maxima are accepted as fingertips
        dot1 = 0; dot2 = 0;
        for (int e = 1; e <= ee; e++)
        {
            if (i >= dd + e) { q1 = couPoint[i - e - dd]; }
            else { q1 = couPoint[i - e - dd + couPoint.size()]; }
            if (i - e >= 0) p1 = couPoint[i - e];
            else p1 = couPoint[i - e + couPoint.size()];
            if (i - e + dd < couPoint.size()) { r1 = couPoint[i - e + dd]; }
            else { r1 = couPoint[i - e + dd - couPoint.size()]; }
            double doe1 = kkk(p1, q1, r1);
            if (doe1 > dot1) { dot1 = doe1; }

            if (i >= dd - e) { q2 = couPoint[i + e - dd]; }
            else { q2 = couPoint[i + e - dd + couPoint.size()]; }
            if (i + e < couPoint.size()) { p2 = couPoint[i + e]; }
            else { p2 = couPoint[i + e - couPoint.size()]; }
            if (i + e + dd < couPoint.size()) { r2 = couPoint[i + e + dd]; }
            else { r2 = couPoint[i + e + dd - couPoint.size()]; }
            double doe2 = kkk(p2, q2, r2);
            if (doe2 > dot2) { dot2 = doe2; }
        }

        // A fingertip is a sharp corner (cosine > 0.2) that is a local curvature
        // maximum and convex (positive cross product)
        if (dot > 0.2 && dot > dot1 && dot > dot2)
        {
            int cross = (q.x - p.x) * (r.y - p.y) - (r.x - p.x) * (q.y - p.y);
            if (cross > 0)
            {
                fingerTips.push_back(p);
                circle(cannyImage, p, 2, Scalar(255, 0, 0), CV_FILLED);
                //line(cannyImage, rcente, p, Scalar(255, 0, 0), 2);
            }
        }
    }
    std::cout << "fingerTips=" << fingerTips.size() << std::endl;
    imshow("show_img", cannyImage);
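
To make the kkk() measure concrete: it is the cosine of the angle at p between the vectors p->q and p->r, so a sharp convex corner such as a fingertip gives a value close to 1, while a nearly straight stretch of contour gives a value close to -1. A tiny standalone check (the sample points are my own, purely illustrative):

#include <cmath>
#include <cstdio>

struct Pt { double x, y; };

// The same cosine-of-angle measure as kkk() above, written out for clarity.
double cosAtP(Pt p, Pt q, Pt r)
{
    double ux = q.x - p.x, uy = q.y - p.y;   // vector p -> q
    double vx = r.x - p.x, vy = r.y - p.y;   // vector p -> r
    return (ux * vx + uy * vy) /
           (std::sqrt(ux * ux + uy * uy) * std::sqrt(vx * vx + vy * vy));
}

int main()
{
    Pt p = { 0, 0 };
    std::printf("sharp corner: %.2f\n", cosAtP(p, Pt{ 10, 1 }, Pt{ 10, -1 }));   // ~0.98, fingertip-like
    std::printf("flat contour: %.2f\n", cosAtP(p, Pt{ -10, 0 }, Pt{ 10, 0 }));   // -1.00, rejected
    return 0;
}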

  






空了星轨

Gesture recognition mismatches

1. Once again it was a bug in my own program, and nothing reported an error;
2. Even the training samples themselves were misrecognized; comparison showed the Hu features of the very same image were not consistent.
3. With the training samples fixed, testing still misrecognized gestures as other, similar ones; it turned out one image in the sample set had feature values clearly different from all the others!! The fix: extract the Hu moments of the gesture contour, not of the whole picture.

The earlier object-recognition mistake, where whichever object was bigger got matched, was also carelessness in my own code: the features did not correspond!!

Also, a morphological closing can be applied before contour extraction (see the sketch below)!

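A minimal C++ sketch of the two fixes noted above, computing Hu moments from the largest gesture contour after a morphological closing; the function name, kernel size and the binary input image are my own placeholders:

#include <opencv2/opencv.hpp>
#include <vector>
using namespace cv;

// binImg: a binarized gesture image (placeholder input).
// Writes the 7 Hu moments of the largest contour into hu[].
bool gestureHu(const Mat &binImg, double hu[7])
{
    // Morphological closing before contour extraction, as suggested above
    Mat closed;
    morphologyEx(binImg, closed, MORPH_CLOSE,
                 getStructuringElement(MORPH_RECT, Size(5, 5)));

    // Extract contours and keep the largest one (assumed to be the hand)
    std::vector<std::vector<Point> > contours;
    findContours(closed, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    if (contours.empty())
        return false;

    int index = 0;
    double maxArea = 0;
    for (size_t i = 0; i < contours.size(); i++)
    {
        double area = contourArea(contours[i]);
        if (area > maxArea) { maxArea = area; index = (int)i; }
    }

    // Hu moments of the contour itself, not of the whole picture
    HuMoments(moments(contours[index]), hu);
    return true;
}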

空了星轨

OpenCV pyramid segmentation: why does a sub-block of this image have 50+ times more rows than the original??

#include "cv.h"

#include "highgui.h"

#include <math.h>

#include <opencv2\legacy\legacy.hpp>

#include <iostream>

using namespace std;

using namespace cv;


IplImage* image[2] = { 0, 0 }, *image0 = 0, *image1 =...



#include "cv.h"

#include "highgui.h"

#include <math.h>

#include <opencv2\legacy\legacy.hpp>

#include <iostream>

using namespace std;

using namespace cv;


IplImage* image[2] = { 0, 0 }, *image0 = 0, *image1 = 0;

CvSize size;

int w0, h0, i;

int threshold1, threshold2;

int l, level = 3;

int sthreshold1, sthreshold2;

int l_comp;

int block_size = 1500;

float parameter;

double threshold;

double rezult, min_rezult;

//int filter = CV_GAUSSIAN_5x5;

CvConnectedComp *cur_comp, min_comp;

CvSeq *comp;

CvMemStorage *storage;

CvPoint pt1, pt2;

void ON_SEGMENT(int a)

{

image0 = cvCloneImage(image[0]);

Mat img0(image0, 0);

cvPyrSegmentation(image0, image1, storage, &comp,

level, threshold1 , threshold2);

Mat img1(image1, 0);


int n_comp = comp->total; vector<Mat> block0(n_comp); vector<Mat> block1(n_comp);


cout << "n_comp =" << n_comp << endl;

//map<int, int>mapping;//map color value to component id (classify) 

char path[30];

for (int i = 0; i < (comp ? comp->total:0);i++)

{

CvConnectedComp* cc = (CvConnectedComp*)cvGetSeqElem(comp,i);

//cc->rect

//cvRectangle(image0, cvPoint(cc->rect.x, cc->rect.y),

//cvPoint(cc->rect.x + cc->rect.width, cc->rect.y + cc->rect.height), cvScalar(0.0, 255));

//mapping.insert(pair<int, int>(cc->value.val[0], i));

if (cc->rect.height <= img0.rows)

{

img0(cc->rect).copyTo(block0[i]);

img1(cc->rect).copyTo(block1[i]);

  int nl = block1[i].rows; // number of lines  

  int nc = block1[i].cols; // number of columns  





///*



for (int j = 0; j<nl; j++) {

for (int k = 0; k<nc; k++) {

// cout << "1" << endl;

// process each pixel ---------------------

if (uchar(block1[i].at<cv::Vec3b>(j, k)[0]) != uchar(cc->value.val[0]) ||

uchar(block1[i].at<cv::Vec3b>(j, k)[1]) != uchar(cc->value.val[1])||

uchar(block1[i].at<cv::Vec3b>(j, k)[2]) != uchar(cc->value.val[2]))

{

// cout << "2" << endl;

block0[i].at<cv::Vec3b>(j, k)[0] = 0;

block0[i].at<cv::Vec3b>(j, k)[1] = 0;

block0[i].at<cv::Vec3b>(j, k)[2] = 0;

}

// cout << "3" << endl;

// end of pixel processing ----------------


 } // end of line

}


/*

int nl = block1[i].rows; // number of lines

int nc = block1[i].cols * block1[i].channels(); // total number of elements per line


if (block1[i].isContinuous())  {

// then no padded pixels

nc = nc*nl;

nl = 1;  // it is now a 1D array

}

// int nk = nc/src.channels();


for (int j = 0; j<nl; j++) {


uchar* blk1 = block1[i].ptr<uchar>(j);

uchar* blk0 = block0[i].ptr<uchar>(j);



for (int k = 0; k<nc; k += 3) {

double bl0 = *blk1++; double bl1 = *blk1++; double bl2 = *blk1++;

// process each pixel ---------------------

if (bl0 == cc->value.val[0] && bl1 == cc->value.val[2] &&

bl2 == cc->value.val[2])

{

*blk0++; *blk0++; *blk0++;


}

else { *blk0++ = 0; *blk0++ = 0; *blk0++ = 0; }

// end of pixel processing ----------------


} // end of line

}*/


sprintf_s(path, "fruit%d", i);

if (block0[i].total() > 500)

imshow(path, block0[i]);

}

else if(cc->rect.height > img0.rows)

{

cout << "img0.rows =" << img0.rows << ";  img0.cols =" << img0.cols << endl;

cout << " cc->rect.height =" << cc->rect.height << ";  cc->rect.width=" << cc->rect.width << endl;

cout <<"cc->area ="<< cc->area << endl;

}

}

 





cvShowImage("Source", image0);

cvShowImage("Segmentation", image1);

}

int main(int argc, char** argv)

{

char* filename = argc == 2 ? argv[1] : (char*)"picture0.jpg";

if ((image[0] = cvLoadImage(filename, 1)) == 0)

return -1;

cvNamedWindow("Source", 0);

//cvShowImage("Source", image[0]);

cvNamedWindow("Segmentation", 0);

storage = cvCreateMemStorage(block_size);

image[0]->width &= -(1 << level);

image[0]->height &= -(1 << level);

image0 = cvCloneImage(image[0]);

image1 = cvCloneImage(image[0]);

// 对彩色图像进行分割

//l = 1;

threshold1 =92;

threshold2 = 128;

ON_SEGMENT(1);

sthreshold1 = cvCreateTrackbar("Threshold1", "Segmentation", &threshold1, 255,

ON_SEGMENT);

sthreshold2 = cvCreateTrackbar("Threshold2", "Segmentation", &threshold2, 255,

ON_SEGMENT);



//cvShowImage("Segmentation", image1);

cvWaitKey(0);

cvDestroyWindow("Segmentation");

cvDestroyWindow("Source");

cvReleaseMemStorage(&storage);

cvReleaseImage(&image[0]);

cvReleaseImage(&image0);

cvReleaseImage(&image1);

return 0;

}


空了星轨

Pyramid segmentation


#include <opencv2\highgui\highgui.hpp>

#include <opencv2\legacy\legacy.hpp>
#include <iostream>
using namespace std;
using namespace cv;


void Pyr_fenge(Mat img0, vector<Mat>& block0, int& n_comp, const int level = 3, const double threshold1=155.0, const double threshold2=52.0)
{
     

    int block_size = 1500;
    

    // truncate width/height to a multiple of 2^level, as cvPyrSegmentation requires
    img0.cols &= -(1 << level);
    img0.rows &= -(1 << level);

    IplImage* image0 = &img0.operator _IplImage();
    IplImage* image1 = cvCloneImage(image0);
    Mat img1(image1, 0);
    
    CvMemStorage *storage;
    storage = cvCreateMemStorage(block_size);
    CvSeq *comp;
    cvPyrSegmentation(image0, image1, storage, &comp,
        level, threshold1, threshold2);

    n_comp = comp->total; 
    block0.reserve(n_comp);
     
    for (int i = 0, k = 0; i < (comp ? comp->total : 0); i++)
    {
        CvConnectedComp* cc = (CvConnectedComp*)cvGetSeqElem(comp, i);

        if (cc->rect.height * cc->rect.width >500)

        {
             
            //img0(cc->rect).copyTo(block);
            block0.push_back(img0(cc->rect));                                  
            k++;
        }

    }
    //cvClearSeq(comp); cout << "w" << endl;
    //cvReleaseMemStorage(&storage);
    //cvReleaseImage(&image0);
    //cvReleaseImage(&image1);

}






int main(int argc, char** argv)
{
    char* filename = argc == 2 ? argv[1] : (char*)"picture0.jpg";
    Mat img0; img0 = imread(filename, 1);
    if (img0.cols == 0)
        return -1;
    
    vector<Mat> block; int n_comp=0;  
    Pyr_fenge(img0, block, n_comp);
    
    cout << "n_comp =" << n_comp << endl;
    cout << "block.size() =" << block.size() << endl;
    char path[30];
    for (int k = 0; k < block.size(); k++)
    {
        sprintf_s(path, "fruit%d", k);
        imshow(path, block[k]);
    }
    
    waitKey();
    return 0;
}
