Can't stop questioning!

OpenCV C++ getting started

Tuyen D. Le March 25, 2021 [Open-CV] #opencv

Install

Download

Configure

https://acodary.wordpress.com/2018/07/24/opencv-cai-dat-opencv-visual-c-tren-windows/

Histogram equalization (HE)

Learning OpenCV 3 Computer Vision in C++ with the OpenCV Library (Adrian Kaehler and Gary Bradski)

https://github.com/oreillymedia/Learning-OpenCV-3_examples

Code

distanceTransform

Ref1

Type

A Mapping of Type to Numbers in OpenCV

Constant | C1 | C2 | C3 | C4 | Type     | Bits | C++ type | Range
CV_8U    |  0 |  8 | 16 | 24 | Unsigned |  8   | uchar    | 0 ~ 255
CV_8S    |  1 |  9 | 17 | 25 | Signed   |  8   | char     | -128 ~ 127
CV_16U   |  2 | 10 | 18 | 26 | Unsigned | 16   | ushort   | 0 ~ 65,535
CV_16S   |  3 | 11 | 19 | 27 | Signed   | 16   | short    | -32,768 ~ 32,767
CV_32S   |  4 | 12 | 20 | 28 | Signed   | 32   | int      | -2,147,483,648 ~ 2,147,483,647
CV_32F   |  5 | 13 | 21 | 29 | Float    | 32   | float    | 0 ~ 1.0
CV_64F   |  6 | 14 | 22 | 30 | Double   | 64   | double   |
#define CV_8U   0
#define CV_8S   1 
#define CV_16U  2
#define CV_16S  3
#define CV_32S  4
#define CV_32F  5
#define CV_64F  6

#define CV_8UC1 CV_MAKETYPE(CV_8U,1)
#define CV_8UC2 CV_MAKETYPE(CV_8U,2)
#define CV_8UC3 CV_MAKETYPE(CV_8U,3)
#define CV_8UC4 CV_MAKETYPE(CV_8U,4)
#define CV_8UC(n) CV_MAKETYPE(CV_8U,(n))

#define CV_CN_SHIFT   3
#define CV_DEPTH_MAX  (1 << CV_CN_SHIFT)

#define CV_MAT_DEPTH_MASK       (CV_DEPTH_MAX - 1)
#define CV_MAT_DEPTH(flags)     ((flags) & CV_MAT_DEPTH_MASK)

#define CV_MAKETYPE(depth,cn) (CV_MAT_DEPTH(depth) + (((cn)-1) << CV_CN_SHIFT))

// For example: 
#define CV_8UC4 CV_MAKETYPE(CV_8U,4)
// has type: 0+((4-1) << 3) == 24
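As a quick sanity check, here is a minimal sketch (the image size is arbitrary) that prints the values the table above predicts for a CV_8UC3 matrix:

#include <opencv2/core.hpp>
#include <iostream>

int main() {
    cv::Mat img(480, 640, CV_8UC3, cv::Scalar(0, 0, 0));
    std::cout << "type: "       << img.type()      // 16 == CV_8UC3 == 0 + ((3-1) << 3)
              << ", depth: "    << img.depth()     // 0  == CV_8U
              << ", channels: " << img.channels()  // 3
              << std::endl;
    return 0;
}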

Scalar

Template class for a 4-element vector derived from Vec. Being derived from Vec<_Tp, 4>, Scalar can be used just like a typical 4-element vector. The type Scalar is widely used in OpenCV to pass pixel values.

cv::Scalar myWhite(255, 255, 500);
cout << "Scalar0: " << myWhite[0] << "; Scalar1: "
     << myWhite[1] << "; Scalar2: " << myWhite[2] << endl;

// Scalar0: 255; Scalar1: 255; Scalar2: 500 (Scalar holds doubles, so 500 is stored as-is)

Differences of using “const cv::Mat &”, “cv::Mat &”, “cv::Mat” or “const cv::Mat” as function parameters?

https://stackoverflow.com/a/23486280

OpenCV handles all the memory automatically. First of all, std::vector, Mat, and other data structures used by the functions and methods have destructors that deallocate the underlying memory buffers when needed. This means that the destructors do not always deallocate the buffers as in case of Mat. They take into account possible data sharing. A destructor decrements the reference counter associated with the matrix data buffer. The buffer is deallocated if and only if the reference counter reaches zero, that is, when no other structures refer to the same buffer. Similarly, when a Mat instance is copied, no actual data is really copied. Instead, the reference counter is incremented to memorize that there is another owner of the same data. There is also the Mat::clone method that creates a full copy of the matrix data.
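A minimal sketch of what that sharing means in practice (the function names byValue/byConstRef are just illustrative):

#include <opencv2/core.hpp>
#include <iostream>

// Passing cv::Mat by value copies only the header; the pixel buffer is shared
// and its reference counter is incremented, so writes are visible to the caller.
void byValue(cv::Mat m) { m.at<uchar>(0, 0) = 255; }

// const cv::Mat& avoids even the header copy and signals that the function
// treats the argument as read-only.
void byConstRef(const cv::Mat& m) { std::cout << m.size() << std::endl; }

int main() {
    cv::Mat a = cv::Mat::zeros(4, 4, CV_8UC1);
    cv::Mat b = a;          // shares the same buffer, reference counter goes up
    cv::Mat c = a.clone();  // full copy of the matrix data

    std::cout << (a.data == b.data) << std::endl;  // 1: same buffer
    std::cout << (a.data == c.data) << std::endl;  // 0: separate buffer

    byValue(a);
    std::cout << (int)b.at<uchar>(0, 0) << std::endl;  // 255: change visible through b
    byConstRef(a);
    return 0;
}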

Linear vs non-linear filter

Ref

But what is the Fourier Transform? A visual introduction

https://youtu.be/spUNpyF58BY

CLAHE

Division of the image into 8x8 contextual regions usually gives good results; this implies 64 contextual regions of size 64x64 when AHE is performed on a 512x512 image

??

To avoid visibility of region boundaries, a bilinear interpolation scheme is used (see Fig.2)
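A minimal usage sketch with OpenCV's CLAHE implementation (the file names are placeholders; the 8x8 tile grid matches the recommendation above):

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    // Placeholder input; CLAHE operates on a single channel.
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE();
    clahe->setClipLimit(2.0);                 // histogram clip limit that tames noise amplification
    clahe->setTilesGridSize(cv::Size(8, 8));  // 8x8 contextual regions, as above

    cv::Mat equalized;
    clahe->apply(gray, equalized);
    cv::imwrite("clahe.png", equalized);
    return 0;
}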

Laplacian/Laplacian of Gaussian

edge preserving filter in image processing opencv
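A minimal Laplacian-of-Gaussian sketch (placeholder file names; the kernel sizes are just typical defaults):

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    cv::Mat gray = cv::imread("input.png", cv::IMREAD_GRAYSCALE);  // placeholder path

    // Smooth first, then take the Laplacian: the Gaussian suppresses the noise
    // that the second-derivative operator would otherwise amplify.
    cv::Mat blurred, lap16s, lapAbs;
    cv::GaussianBlur(gray, blurred, cv::Size(3, 3), 0);
    cv::Laplacian(blurred, lap16s, CV_16S, 3);  // CV_16S keeps the negative responses
    cv::convertScaleAbs(lap16s, lapAbs);        // back to CV_8U for saving/display

    cv::imwrite("log.png", lapAbs);
    return 0;
}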

Wavelet

http://www.nsl.hcmus.edu.vn/greenstone/collect/tiensifu/index/assoc/HASH01f6.dir/2.pdf

Denoising: wavelet thresholding

Application of the wavelet transform in image processing (PTIT)

Mean vs Median filter

The "mean" is the "average" you're used to, where you add up all the numbers and then divide by the number of numbers. The "median" is the "middle" value in the list of numbers.
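In OpenCV terms that is cv::blur (linear, mean of the neighborhood) versus cv::medianBlur (non-linear, median of the neighborhood); a small sketch with placeholder file names:

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    cv::Mat src = cv::imread("noisy.png");  // placeholder path

    cv::Mat meanFiltered, medianFiltered;
    cv::blur(src, meanFiltered, cv::Size(3, 3));  // linear: average of each 3x3 neighborhood
    cv::medianBlur(src, medianFiltered, 3);       // non-linear: median of each 3x3 neighborhood

    cv::imwrite("mean.png", meanFiltered);
    cv::imwrite("median.png", medianFiltered);
    return 0;
}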

Point2f, sub-pixel coordinate origin

Understanding and evaluating template matching methods

alkasm's answer

Normalized methods: TM_SQDIFF_NORMED, TM_CCORR_NORMED, TM_CCOEFF_NORMED. Unnormalized methods: TM_SQDIFF, TM_CCORR, TM_CCOEFF.

Method           | Range
TM_CCOEFF_NORMED | [-1, 1] (mean shifted)
TM_CCORR_NORMED  | [0, 1]
TM_SQDIFF_NORMED | [0, 1]
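A minimal matchTemplate sketch (placeholder file names). Note that for the SQDIFF variants the best match is the minimum of the result map, while for the CCORR/CCOEFF variants it is the maximum:

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main() {
    cv::Mat img   = cv::imread("scene.png",    cv::IMREAD_GRAYSCALE);  // placeholder paths
    cv::Mat templ = cv::imread("template.png", cv::IMREAD_GRAYSCALE);

    cv::Mat result;
    cv::matchTemplate(img, templ, result, cv::TM_CCOEFF_NORMED);

    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);

    // TM_CCOEFF_NORMED: higher is better, so take maxLoc (top-left corner of the match).
    std::cout << "best match at " << maxLoc << ", score " << maxVal << std::endl;
    return 0;
}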

Understanding Moments function in opencv

Michael Burdinov's answer

Definition of moments in image processing is borrowed from physics. Assume that each pixel in image has weight that is equal to its intensity. Then the point you defined is centroid (a.k.a. center of mass) of image.

Assume that I(x,y) is the intensity of pixel (x,y) in image. Then m(i,j) is the sum for all possible x and y of: I(x,y) * (x^i) * (y^j).

And here you can read a wiki article about all kinds of image moments (raw moments, central moments, scale/rotation invariant moments and so on). It is pretty good one and I recommend reading it.

Adapting this to a scalar (greyscale) image with pixel intensities I(x,y), the raw image moments $M_{ij}$ are calculated by:

$M_{ij} = \sum_{x}\sum_{y} x^{i}\, y^{j}\, I(x,y)$

Centroid: $\{\bar{x},\ \bar{y}\} = \left\{\frac{M_{10}}{M_{00}},\ \frac{M_{01}}{M_{00}}\right\}$
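This is what cv::moments computes; a minimal sketch that finds the centroid of a binary blob (placeholder input path):

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <iostream>

int main() {
    cv::Mat img = cv::imread("blob.png", cv::IMREAD_GRAYSCALE);  // placeholder path

    cv::Moments m = cv::moments(img, /*binaryImage=*/true);
    double cx = m.m10 / m.m00;  // x-bar = M10 / M00
    double cy = m.m01 / m.m00;  // y-bar = M01 / M00
    std::cout << "centroid: (" << cx << ", " << cy << ")" << std::endl;
    return 0;
}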

Clone all Fiji source code

# https://forum.image.sc/t/getting-the-source-code-for-fiji-without-using-maven/31964/6

sudo apt install maven
sudo apt install libxml2-utils

git clone https://github.com/fiji/fiji
cd fiji/bin
wget https://github.com/scijava/scijava-scripts/raw/master/melting-pot.sh
sh melting-pot.sh -s

findContours()

Reference:

Remember:

void cv::findContours
    (   InputOutputArray    image,
        OutputArrayOfArrays contours,
        OutputArray         hierarchy,
        int                 mode,
        int                 method,
        Point               offset = Point() 
    ) 

// Note: absolute value of an area is used because
// area may be positive or negative - in accordance with the
// contour orientation
double i = std::fabs(cv::contourArea(cv::Mat(contour1)));
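Putting the pieces together, a minimal usage sketch (placeholder input path; RETR_TREE and CHAIN_APPROX_SIMPLE are just typical choices):

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cmath>
#include <iostream>
#include <vector>

int main() {
    cv::Mat gray = cv::imread("shapes.png", cv::IMREAD_GRAYSCALE);  // placeholder path
    cv::Mat binary;
    cv::threshold(gray, binary, 128, 255, cv::THRESH_BINARY);

    std::vector<std::vector<cv::Point>> contours;
    std::vector<cv::Vec4i> hierarchy;
    cv::findContours(binary, contours, hierarchy,
                     cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE);

    for (const auto& contour : contours) {
        double area = std::fabs(cv::contourArea(contour));  // see the note above
        std::cout << "contour with area " << area << std::endl;
    }
    return 0;
}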

What is Contour Approximation Method?

CHAIN_APPROX_NONE vs CHAIN_APPROX_SIMPLE

findcontours-method

What is Contours Hierarchy?

find-contours-hierarchy

Watershed algorithm

Each pixel in the image is categorized as one of three types:

three types of pixel

The algorithm will find the points that form the watershed lines.

How it works

  1. Find the regional minima.
  2. From each regional minimum, let water rise to fill its catchment basin.
  3. Keep filling the catchment basins until neighboring basins overlap; the points where they meet are the watershed lines (see the sketch below).
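A simplified marker-based sketch of those steps with OpenCV (placeholder input path; the 0.5 distance-transform threshold is just a typical seed-picking choice):

#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    cv::Mat src = cv::imread("coins.png");  // placeholder path (BGR image)
    cv::Mat gray, binary;
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, binary, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Step 1: seeds ("regional minima") from the peaks of the distance transform.
    cv::Mat dist;
    cv::distanceTransform(binary, dist, cv::DIST_L2, 5);
    double maxVal;
    cv::minMaxLoc(dist, nullptr, &maxVal);
    cv::Mat sureFg;
    cv::threshold(dist, sureFg, 0.5 * maxVal, 255, cv::THRESH_BINARY);
    sureFg.convertTo(sureFg, CV_8U);

    // Steps 2-3: label each seed, then let cv::watershed flood the basins;
    // it writes -1 where neighboring basins meet (the watershed lines).
    cv::Mat markers;
    cv::connectedComponents(sureFg, markers);  // markers is CV_32S
    cv::watershed(src, markers);

    cv::Mat lines = (markers == -1);  // boundary pixels between touching objects
    cv::imwrite("watershed_lines.png", lines);
    return 0;
}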

Reference
