feature
skimage.feature.canny (image[, sigma, …]) 
Edge filter an image using the Canny algorithm. 
skimage.feature.daisy (image[, step, radius, …]) 
Extract DAISY feature descriptors densely for the given image. 
skimage.feature.hog (image[, orientations, …]) 
Extract Histogram of Oriented Gradients (HOG) for a given image. 
skimage.feature.greycomatrix (image, …[, …]) 
Calculate the grey-level co-occurrence matrix. 
skimage.feature.greycoprops (P[, prop]) 
Calculate texture properties of a GLCM. 
skimage.feature.local_binary_pattern (image, P, R) 
Gray scale and rotation invariant LBP (Local Binary Patterns). 
skimage.feature.multiblock_lbp (int_image, r, …) 
Multi-block local binary pattern (MBLBP). 
skimage.feature.draw_multiblock_lbp (image, …) 
Multi-block local binary pattern visualization. 
skimage.feature.peak_local_max (image[, …]) 
Find peaks in an image as coordinate list or boolean mask. 
skimage.feature.structure_tensor (image[, …]) 
Compute structure tensor using sum of squared differences. 
skimage.feature.structure_tensor_eigvals (…) 
Compute eigenvalues of structure tensor. 
skimage.feature.hessian_matrix (image[, …]) 
Compute Hessian matrix. 
skimage.feature.hessian_matrix_det (image[, …]) 
Compute the approximate Hessian Determinant over an image. 
skimage.feature.hessian_matrix_eigvals (H_elems) 
Compute eigenvalues of Hessian matrix. 
skimage.feature.shape_index (image[, sigma, …]) 
Compute the shape index. 
skimage.feature.corner_kitchen_rosenfeld (image) 
Compute Kitchen and Rosenfeld corner measure response image. 
skimage.feature.corner_harris (image[, …]) 
Compute Harris corner measure response image. 
skimage.feature.corner_shi_tomasi (image[, sigma]) 
Compute Shi-Tomasi (Kanade-Tomasi) corner measure response image. 
skimage.feature.corner_foerstner (image[, sigma]) 
Compute Foerstner corner measure response image. 
skimage.feature.corner_subpix (image, corners) 
Determine subpixel position of corners. 
skimage.feature.corner_peaks (image[, …]) 
Find corners in corner measure response image. 
skimage.feature.corner_moravec (image[, …]) 
Compute Moravec corner measure response image. 
skimage.feature.corner_fast (image[, n, …]) 
Extract FAST corners for a given image. 
skimage.feature.corner_orientations (image, …) 
Compute the orientation of corners. 
skimage.feature.match_template (image, template) 
Match a template to a 2D or 3D image using normalized correlation. 
skimage.feature.register_translation (…[, …]) 
Efficient subpixel image translation registration by cross-correlation. 
skimage.feature.masked_register_translation (…) 
Masked image translation registration by masked normalized cross-correlation. 
skimage.feature.match_descriptors (…[, …]) 
Brute-force matching of descriptors. 
skimage.feature.plot_matches (ax, image1, …) 
Plot matched features. 
skimage.feature.blob_dog (image[, min_sigma, …]) 
Finds blobs in the given grayscale image. 
skimage.feature.blob_doh (image[, min_sigma, …]) 
Finds blobs in the given grayscale image. 
skimage.feature.blob_log (image[, min_sigma, …]) 
Finds blobs in the given grayscale image. 
skimage.feature.haar_like_feature (int_image, …) 
Compute the Haar-like features for a region of interest (ROI) of an integral image. 
skimage.feature.haar_like_feature_coord (…) 
Compute the coordinates of Haar-like features. 
skimage.feature.draw_haar_like_feature (…) 
Visualization of Haar-like features. 
skimage.feature.Cascade 
Class for cascade of classifiers that is used for object detection. 
skimage.feature.BRIEF ([descriptor_size, …]) 
BRIEF binary descriptor extractor. 
skimage.feature.CENSURE ([min_scale, …]) 
CENSURE keypoint detector. 
skimage.feature.ORB ([downscale, n_scales, …]) 
Oriented FAST and rotated BRIEF feature detector and binary descriptor extractor. 
skimage.feature.canny(image, sigma=1.0, low_threshold=None, high_threshold=None, mask=None, use_quantiles=False)[source]
Edge filter an image using the Canny algorithm.
Parameters: 


Returns: 

See also
skimage.sobel
Notes
The steps of the algorithm are as follows:
- Smooth the image using a Gaussian with sigma width.
- Apply the horizontal and vertical Sobel operators to get the gradients within the image. The edge strength is the norm of the gradient.
- Thin potential edges to 1-pixel wide curves by non-maximum suppression: find the normal to the edge at each point and keep the point only if its value is greater than the values in the normal and reverse directions.
- Perform hysteresis thresholding: first label all points above the high threshold as edges, then recursively label any point above the low threshold that is 8-connected to a labeled point as an edge.
References
[1]  Canny, J., A Computational Approach To Edge Detection, IEEE Trans. Pattern Analysis and Machine Intelligence, 8:679-714, 1986 
[2]  William Green’s Canny tutorial http://dasl.unlv.edu/daslDrexel/alumni/bGreen/www.pages.drexel.edu/_weg22/can_tut.html 
Examples
>>> from skimage import feature
>>> # Generate noisy image of a square
>>> im = np.zeros((256, 256))
>>> im[64:-64, 64:-64] = 1
>>> im += 0.2 * np.random.rand(*im.shape)
>>> # First trial with the Canny filter, with the default smoothing
>>> edges1 = feature.canny(im)
>>> # Increase the smoothing for better results
>>> edges2 = feature.canny(im, sigma=3)
skimage.feature.daisy(image, step=4, radius=15, rings=3, histograms=8, orientations=8, normalization='l1', sigmas=None, ring_radii=None, visualize=False)[source]
Extract DAISY feature descriptors densely for the given image.
DAISY is a feature descriptor similar to SIFT formulated in a way that allows for fast dense extraction. Typically, this is practical for bag-of-features image representations.
The implementation follows Tola et al. [1] but deviates on the following points:
- Histogram bin contributions are smoothed with a circular Gaussian window over the tonal range (the angular range).
- The sigma values of the spatial Gaussian smoothing in this code do not match the sigma values in the original code by Tola et al. [2]. In their code, spatial smoothing is applied to both the input image and the center histogram. However, this smoothing is not documented in [1] and is therefore omitted.
Parameters: 


Returns: 

References
[1]  (1, 2, 3) Tola et al. “Daisy: An efficient dense descriptor applied to wide baseline stereo.” Pattern Analysis and Machine Intelligence, IEEE Transactions on 32.5 (2010): 815-830. 
[2]  (1, 2) http://cvlab.epfl.ch/software/daisy 
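No usage example survives in this extract, so here is a minimal sketch of dense extraction on a synthetic image. The image size and the step/radius/ring values below are illustrative assumptions, not defaults; the descriptor length follows the formula (rings * histograms + 1) * orientations.

```python
import numpy as np
from skimage.feature import daisy

# Synthetic grayscale image; any 2-D float array works.
rng = np.random.default_rng(0)
img = rng.random((200, 200))

# Dense grid of descriptors every 50 pixels, ignoring a 30-pixel border.
descs = daisy(img, step=50, radius=30, rings=2, histograms=6, orientations=8)

# Descriptor length is (rings * histograms + 1) * orientations
# = (2 * 6 + 1) * 8 = 104 here.
print(descs.shape)  # (grid_rows, grid_cols, 104)
```

Smaller step values give a denser grid at the cost of computation time.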
skimage.feature.hog(image, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(3, 3), block_norm='L2-Hys', visualize=False, visualise=None, transform_sqrt=False, feature_vector=True, multichannel=None)[source]
Extract Histogram of Oriented Gradients (HOG) for a given image.
Compute a Histogram of Oriented Gradients (HOG) by
- (optional) global image normalization
- computing the gradient image in row and col
- computing gradient histograms
- normalizing across blocks
- flattening into a feature vector
Parameters: 


Returns: 

Notes
The presented code implements the HOG extraction method from [2] with the following changes: (I) blocks of (3, 3) cells are used ((2, 2) in the paper); (II) no smoothing within cells (Gaussian spatial window with sigma=8pix in the paper); (III) L1 block normalization is used (L2-Hys in the paper).
Power law compression, also known as Gamma correction, is used to reduce the effects of shadowing and illumination variations. The compression makes the dark regions lighter. When the kwarg transform_sqrt is set to True, the function computes the square root of each color channel and then applies the hog algorithm to the image.
References
[1]  https://en.wikipedia.org/wiki/Histogram_of_oriented_gradients 
[2]  (1, 2) Dalal, N and Triggs, B, Histograms of Oriented Gradients for Human Detection, IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2005 San Diego, CA, USA, https://lear.inrialpes.fr/people/triggs/pubs/Dalal-cvpr05.pdf, DOI:10.1109/CVPR.2005.177 
[3]  (1, 2) Lowe, D.G., Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision (2004) 60: 91, http://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf, DOI:10.1023/B:VISI.0000029664.99615.94 
[4]  (1, 2) Dalal, N, Finding People in Images and Videos, Human-Computer Interaction [cs.HC], Institut National Polytechnique de Grenoble - INPG, 2006, https://tel.archives-ouvertes.fr/tel-00390303/file/NavneetDalalThesis.pdf 
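The cell and block layout fixes the length of the returned feature vector; a small sketch on a synthetic grayscale image (the parameter values here are illustrative assumptions, not the defaults):

```python
import numpy as np
from skimage.feature import hog

rng = np.random.default_rng(0)
img = rng.random((64, 64))  # synthetic grayscale image

# 16x16-pixel cells give a 4x4 cell grid; with one cell per block there are
# (4 - 1 + 1) * (4 - 1 + 1) = 16 blocks, each contributing `orientations` bins.
fd = hog(img, orientations=8, pixels_per_cell=(16, 16),
         cells_per_block=(1, 1), feature_vector=True)
print(fd.shape)  # (4 * 4 * 1 * 1 * 8,) == (128,)
```

In general the number of blocks per axis is n_cells - cells_per_block + 1, so overlapping blocks (cells_per_block > 1) re-count cells under different normalizations.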
skimage.feature.greycomatrix(image, distances, angles, levels=None, symmetric=False, normed=False)[source]
Calculate the grey-level co-occurrence matrix.
A grey-level co-occurrence matrix is a histogram of co-occurring greyscale values at a given offset over an image.
Parameters: 


Returns: 

References
[1]  The GLCM Tutorial Home Page, http://www.fp.ucalgary.ca/mhallbey/tutorial.htm 
[2]  Pattern Recognition Engineering, Morton Nadler & Eric P. Smith 
[3]  Wikipedia, https://en.wikipedia.org/wiki/Co-occurrence_matrix 
Examples
Compute 2 GLCMs: One for a 1-pixel offset to the right, and one for a 1-pixel offset upwards.
>>> image = np.array([[0, 0, 1, 1],
... [0, 0, 1, 1],
... [0, 2, 2, 2],
... [2, 2, 3, 3]], dtype=np.uint8)
>>> result = greycomatrix(image, [1], [0, np.pi/4, np.pi/2, 3*np.pi/4],
... levels=4)
>>> result[:, :, 0, 0]
array([[2, 2, 1, 0],
[0, 2, 0, 0],
[0, 0, 3, 1],
[0, 0, 0, 1]], dtype=uint32)
>>> result[:, :, 0, 1]
array([[1, 1, 3, 0],
[0, 1, 1, 0],
[0, 0, 0, 2],
[0, 0, 0, 0]], dtype=uint32)
>>> result[:, :, 0, 2]
array([[3, 0, 2, 0],
[0, 2, 2, 0],
[0, 0, 1, 2],
[0, 0, 0, 0]], dtype=uint32)
>>> result[:, :, 0, 3]
array([[2, 0, 0, 0],
[1, 1, 2, 0],
[0, 0, 2, 1],
[0, 0, 0, 0]], dtype=uint32)
skimage.feature.greycoprops(P, prop='contrast')[source]
Calculate texture properties of a GLCM.
Compute a feature of a grey level cooccurrence matrix to serve as a compact summary of the matrix. The properties are computed as follows:
‘contrast’: \(\sum_{i,j=0}^{levels-1} P_{i,j}(i-j)^2\)
‘dissimilarity’: \(\sum_{i,j=0}^{levels-1} P_{i,j}|i-j|\)
‘homogeneity’: \(\sum_{i,j=0}^{levels-1} \frac{P_{i,j}}{1+(i-j)^2}\)
‘ASM’: \(\sum_{i,j=0}^{levels-1} P_{i,j}^2\)
‘energy’: \(\sqrt{ASM}\)
Each GLCM is normalized to have a sum of 1 before the computation of texture properties.
Parameters: 


Returns: 

References
[1]  The GLCM Tutorial Home Page, http://www.fp.ucalgary.ca/mhallbey/tutorial.htm 
Examples
Compute the contrast for GLCMs with distances [1, 2] and angles [0 degrees, 90 degrees]
>>> image = np.array([[0, 0, 1, 1],
... [0, 0, 1, 1],
... [0, 2, 2, 2],
... [2, 2, 3, 3]], dtype=np.uint8)
>>> g = greycomatrix(image, [1, 2], [0, np.pi/2], levels=4,
... normed=True, symmetric=True)
>>> contrast = greycoprops(g, 'contrast')
>>> contrast
array([[ 0.58333333, 1. ],
[ 1.25 , 2.75 ]])
skimage.feature.local_binary_pattern(image, P, R, method='default')[source]
Gray scale and rotation invariant LBP (Local Binary Patterns).
LBP is an invariant descriptor that can be used for texture classification.
Parameters: 


Returns: 

References
[1]  Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. Timo Ojala, Matti Pietikainen, Topi Maenpaa. http://www.ee.oulu.fi/research/mvmp/mvg/files/pdf/pdf_94.pdf, 2002. 
[2]  (1, 2) Face recognition with local binary patterns. Timo Ahonen, Abdenour Hadid, Matti Pietikainen, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.214.6851, 2004. 
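As a sketch of the texture-classification use mentioned above (the image and the P, R choices are illustrative assumptions), the per-pixel LBP codes are usually summarized as a normalized histogram that serves as the texture feature vector:

```python
import numpy as np
from skimage.feature import local_binary_pattern

rng = np.random.default_rng(0)
img = (rng.random((32, 32)) * 255).astype(np.uint8)  # synthetic texture patch

P, R = 8, 1  # 8 circularly symmetric neighbors at radius 1
lbp = local_binary_pattern(img, P, R, method='uniform')

# 'uniform' maps each pixel to one of P + 2 rotation-invariant codes (0..P+1);
# the normalized code histogram is the usual texture descriptor.
hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
print(lbp.shape, hist.shape)
```

Two patches can then be compared by a histogram distance (e.g. chi-squared) between their `hist` vectors.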
skimage.feature.multiblock_lbp(int_image, r, c, width, height)[source]
Multi-block local binary pattern (MBLBP).
The features are calculated similarly to local binary patterns (LBPs) (see local_binary_pattern()), except that summed blocks are used instead of individual pixel values.
MBLBP is an extension of LBP that can be computed on multiple scales in constant time using the integral image. Nine equally-sized rectangles are used to compute a feature. For each rectangle, the sum of the pixel intensities is computed. Comparisons of these sums to that of the central rectangle determine the feature, similarly to LBP.
Parameters: 


Returns: 

References
[1]  Face Detection Based on Multi-Block LBP Representation. Lun Zhang, Rufeng Chu, Shiming Xiang, Shengcai Liao, Stan Z. Li http://www.cbsr.ia.ac.cn/users/scliao/papers/Zhang-ICB07-MBLBP.pdf 
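A minimal sketch following the description above; the 9x9 test image is an illustrative assumption (the central 3x3 block bright, the eight surrounding blocks dark):

```python
import numpy as np
from skimage.feature import multiblock_lbp
from skimage.transform import integral_image

# 9x9 image split into nine 3x3 blocks; only the central block is bright.
img = np.zeros((9, 9), dtype=np.uint8)
img[3:6, 3:6] = 255

int_img = integral_image(img)

# The feature covers the 3x3 grid of 3x3-pixel rectangles whose top-left
# corner is at (r, c) = (0, 0); the result is an 8-bit LBP-like code.
code = multiblock_lbp(int_img, r=0, c=0, width=3, height=3)
print(code)
```

Because the sums come from the integral image, the same call with larger width/height costs the same, which is what makes multi-scale evaluation constant-time.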
skimage.feature.draw_multiblock_lbp(image, r, c, width, height, lbp_code=0, color_greater_block=(1, 1, 1), color_less_block=(0, 0.69, 0.96), alpha=0.5)[source]
Multi-block local binary pattern visualization.
Blocks with higher sums are colored with alpha-blended white rectangles, whereas blocks with lower sums are colored alpha-blended cyan. Colors and the alpha parameter can be changed.
Parameters: 


Returns: 

References
[1]  Face Detection Based on MultiBlock LBP Representation. Lun Zhang, Rufeng Chu, Shiming Xiang, Shengcai Liao, Stan Z. Li http://www.cbsr.ia.ac.cn/users/scliao/papers/ZhangICB07MBLBP.pdf 
skimage.feature.peak_local_max(image, min_distance=1, threshold_abs=None, threshold_rel=None, exclude_border=True, indices=True, num_peaks=inf, footprint=None, labels=None, num_peaks_per_label=inf)[source]
Find peaks in an image as coordinate list or boolean mask.
Peaks are the local maxima in a region of 2 * min_distance + 1 (i.e. peaks are separated by at least min_distance).
If there are multiple local maxima with identical pixel intensities inside the region defined with min_distance, the coordinates of all such pixels are returned.
If both threshold_abs and threshold_rel are provided, the maximum of the two is chosen as the minimum intensity threshold of peaks.
Parameters: 


Returns: 

Notes
The peak local maximum function returns the coordinates of local peaks (maxima) in an image. A maximum filter is used for finding local maxima. This operation dilates the original image. After comparison of the dilated and original image, this function returns the coordinates or a mask of the peaks where the dilated image equals the original image.
Examples
>>> img1 = np.zeros((7, 7))
>>> img1[3, 4] = 1
>>> img1[3, 2] = 1.5
>>> img1
array([[ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 1.5, 0. , 1. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. , 0. , 0. , 0. ]])
>>> peak_local_max(img1, min_distance=1)
array([[3, 4],
[3, 2]])
>>> peak_local_max(img1, min_distance=2)
array([[3, 2]])
>>> img2 = np.zeros((20, 20, 20))
>>> img2[10, 10, 10] = 1
>>> peak_local_max(img2, exclude_border=0)
array([[10, 10, 10]])
skimage.feature.structure_tensor(image, sigma=1, mode='constant', cval=0)[source]
Compute structure tensor using sum of squared differences.
The structure tensor A is defined as:
A = [Axx Axy]
[Axy Ayy]
which is approximated by the weighted sum of squared differences in a local window around each pixel in the image.
Parameters: 


Returns: 

Examples
>>> from skimage.feature import structure_tensor
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 1
>>> Axx, Axy, Ayy = structure_tensor(square, sigma=0.1)
>>> Axx
array([[ 0., 0., 0., 0., 0.],
[ 0., 1., 0., 1., 0.],
[ 0., 4., 0., 4., 0.],
[ 0., 1., 0., 1., 0.],
[ 0., 0., 0., 0., 0.]])
skimage.feature.structure_tensor_eigvals(Axx, Axy, Ayy)[source]
Compute eigenvalues of structure tensor.
Parameters: 


Returns: 

Examples
>>> from skimage.feature import structure_tensor, structure_tensor_eigvals
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 1
>>> Axx, Axy, Ayy = structure_tensor(square, sigma=0.1)
>>> structure_tensor_eigvals(Axx, Axy, Ayy)[0]
array([[ 0., 0., 0., 0., 0.],
[ 0., 2., 4., 2., 0.],
[ 0., 4., 0., 4., 0.],
[ 0., 2., 4., 2., 0.],
[ 0., 0., 0., 0., 0.]])
skimage.feature.hessian_matrix(image, sigma=1, mode='constant', cval=0, order=None)[source]
Compute Hessian matrix.
The Hessian matrix is defined as:
H = [Hrr Hrc]
[Hrc Hcc]
which is computed by convolving the image with the second derivatives of the Gaussian kernel in the respective x- and y-directions.
Parameters: 


Returns: 

Examples
>>> from skimage.feature import hessian_matrix
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 4
>>> Hrr, Hrc, Hcc = hessian_matrix(square, sigma=0.1, order='rc')
>>> Hrc
array([[ 0., 0., 0., 0., 0.],
[ 0., 1., 0., -1., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., -1., 0., 1., 0.],
[ 0., 0., 0., 0., 0.]])
skimage.feature.hessian_matrix_det(image, sigma=1, approximate=True)[source]
Compute the approximate Hessian Determinant over an image.
The 2D approximate method uses box filters over integral images to compute the approximate Hessian Determinant, as described in [1].
Parameters: 


Returns: 

Notes
For 2D images when approximate=True, the running time of this method only depends on size of the image. It is independent of sigma as one would expect. The downside is that the result for sigma less than 3 is not accurate, i.e., not similar to the result obtained if someone computed the Hessian and took its determinant.
References
[1]  (1, 2) Herbert Bay, Andreas Ess, Tinne Tuytelaars, Luc Van Gool, “SURF: Speeded Up Robust Features” ftp://ftp.vision.ee.ethz.ch/publications/articles/eth_biwi_00517.pdf 
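A minimal sketch consistent with the Notes above (sigma is kept at 3, below which the box-filter approximation is inaccurate; the single-blob test image is an illustrative assumption):

```python
import numpy as np
from skimage.feature import hessian_matrix_det

# A single bright spot; the determinant-of-Hessian response is used to
# detect blobs (see blob_doh below).
img = np.zeros((21, 21))
img[10, 10] = 1.0

# sigma >= 3 keeps the box-filter approximation close to the true
# Hessian determinant.
det = hessian_matrix_det(img, sigma=3)
print(det.shape)  # same shape as the input image
```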
skimage.feature.hessian_matrix_eigvals(H_elems, Hxy=None, Hyy=None, Hxx=None)[source]
Compute eigenvalues of Hessian matrix.
Parameters: 


Returns: 

Examples
>>> from skimage.feature import hessian_matrix, hessian_matrix_eigvals
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 4
>>> H_elems = hessian_matrix(square, sigma=0.1, order='rc')
>>> hessian_matrix_eigvals(H_elems)[0]
array([[ 0., 0., 2., 0., 0.],
[ 0., 1., 0., 1., 0.],
[ 2., 0., -2., 0., 2.],
[ 0., 1., 0., 1., 0.],
[ 0., 0., 2., 0., 0.]])
skimage.feature.shape_index(image, sigma=1, mode='constant', cval=0)[source]
Compute the shape index.
The shape index, as defined by Koenderink & van Doorn [1], is a single-valued measure of local curvature, assuming the image as a 3D plane with intensities representing heights.
It is derived from the eigenvalues of the Hessian, and its value ranges from -1 to 1 (and is undefined (=NaN) in flat regions), with following ranges representing following shapes:
Interval (s in …)  Shape 

[-1, -7/8)  Spherical cup 
[-7/8, -5/8)  Trough 
[-5/8, -3/8)  Rut 
[-3/8, -1/8)  Saddle rut 
[-1/8, +1/8)  Saddle 
[+1/8, +3/8)  Saddle ridge 
[+3/8, +5/8)  Ridge 
[+5/8, +7/8)  Dome 
[+7/8, +1]  Spherical cap 
Parameters: 


Returns: 

References
[1]  (1, 2) Koenderink, J. J. & van Doorn, A. J., “Surface shape and curvature scales”, Image and Vision Computing, 1992, 10, 557-564. DOI:10.1016/0262-8856(92)90076-F 
Examples
>>> from skimage.feature import shape_index
>>> square = np.zeros((5, 5))
>>> square[2, 2] = 4
>>> s = shape_index(square, sigma=0.1)
>>> s
array([[ nan, nan, -0.5, nan, nan],
[ nan, -0. , nan, -0. , nan],
[-0.5, nan, -1. , nan, -0.5],
[ nan, -0. , nan, -0. , nan],
[ nan, nan, -0.5, nan, nan]])
skimage.feature.corner_kitchen_rosenfeld(image, mode='constant', cval=0)[source]
Compute Kitchen and Rosenfeld corner measure response image.
The corner measure is calculated as follows:
(imxx * imy**2 + imyy * imx**2 - 2 * imxy * imx * imy)
/ (imx**2 + imy**2)
Where imx and imy are the first and imxx, imxy, imyy the second derivatives.
Parameters: 


Returns: 

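No example survives for this function; a minimal sketch on the same kind of bright-square test image used by the other corner detectors in this module (the image is an illustrative assumption):

```python
import numpy as np
from skimage.feature import corner_kitchen_rosenfeld

# Bright square on a dark background; the Kitchen-Rosenfeld measure is
# large in magnitude where the gradient direction changes quickly,
# i.e. near the square's corners.
square = np.zeros((10, 10))
square[2:8, 2:8] = 1.0

response = corner_kitchen_rosenfeld(square)
print(response.shape)  # same shape as the input
```

As with the other corner measures here, the response image would typically be passed to corner_peaks to obtain corner coordinates.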
skimage.feature.corner_harris(image, method='k', k=0.05, eps=1e-06, sigma=1)[source]
Compute Harris corner measure response image.
This corner detector uses information from the autocorrelation matrix A:
A = [(imx**2) (imx*imy)] = [Axx Axy]
[(imx*imy) (imy**2)] [Axy Ayy]
Where imx and imy are first derivatives, averaged with a gaussian filter. The corner measure is then defined as:
det(A) - k * trace(A)**2
or:
2 * det(A) / (trace(A) + eps)
Parameters: 


Returns: 

References
[1]  https://en.wikipedia.org/wiki/Corner_detection 
Examples
>>> from skimage.feature import corner_harris, corner_peaks
>>> square = np.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> corner_peaks(corner_harris(square), min_distance=1)
array([[2, 2],
[2, 7],
[7, 2],
[7, 7]])
skimage.feature.corner_shi_tomasi(image, sigma=1)[source]
Compute Shi-Tomasi (Kanade-Tomasi) corner measure response image.
This corner detector uses information from the autocorrelation matrix A:
A = [(imx**2) (imx*imy)] = [Axx Axy]
[(imx*imy) (imy**2)] [Axy Ayy]
Where imx and imy are first derivatives, averaged with a gaussian filter. The corner measure is then defined as the smaller eigenvalue of A:
((Axx + Ayy) - sqrt((Axx - Ayy)**2 + 4 * Axy**2)) / 2
Parameters: 


Returns: 

References
[1]  https://en.wikipedia.org/wiki/Corner_detection 
Examples
>>> from skimage.feature import corner_shi_tomasi, corner_peaks
>>> square = np.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> corner_peaks(corner_shi_tomasi(square), min_distance=1)
array([[2, 2],
[2, 7],
[7, 2],
[7, 7]])
skimage.feature.corner_foerstner(image, sigma=1)[source]
Compute Foerstner corner measure response image.
This corner detector uses information from the autocorrelation matrix A:
A = [(imx**2) (imx*imy)] = [Axx Axy]
[(imx*imy) (imy**2)] [Axy Ayy]
Where imx and imy are first derivatives, averaged with a gaussian filter. The corner measure is then defined as:
w = det(A) / trace(A) (size of error ellipse)
q = 4 * det(A) / trace(A)**2 (roundness of error ellipse)
Parameters: 


Returns: 

References
[1]  http://www.ipb.uni-bonn.de/uploads/tx_ikgpublication/foerstner87.fast.pdf 
[2]  https://en.wikipedia.org/wiki/Corner_detection 
Examples
>>> from skimage.feature import corner_foerstner, corner_peaks
>>> square = np.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> w, q = corner_foerstner(square)
>>> accuracy_thresh = 0.5
>>> roundness_thresh = 0.3
>>> foerstner = (q > roundness_thresh) * (w > accuracy_thresh) * w
>>> corner_peaks(foerstner, min_distance=1)
array([[2, 2],
[2, 7],
[7, 2],
[7, 7]])
skimage.feature.corner_subpix(image, corners, window_size=11, alpha=0.99)[source]
Determine subpixel position of corners.
A statistical test decides whether the corner is defined as the intersection of two edges or a single peak. Depending on the classification result, the subpixel corner location is determined based on the local covariance of the grey values. If the significance level for either statistical test is not sufficient, the corner cannot be classified, and the output subpixel position is set to NaN.
Parameters: 


Returns: 

References
[1]  http://www.ipb.uni-bonn.de/uploads/tx_ikgpublication/foerstner87.fast.pdf 
[2]  https://en.wikipedia.org/wiki/Corner_detection 
Examples
>>> from skimage.feature import corner_harris, corner_peaks, corner_subpix
>>> img = np.zeros((10, 10))
>>> img[:5, :5] = 1
>>> img[5:, 5:] = 1
>>> img.astype(int)
array([[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1],
[0, 0, 0, 0, 0, 1, 1, 1, 1, 1]])
>>> coords = corner_peaks(corner_harris(img), min_distance=2)
>>> coords_subpix = corner_subpix(img, coords, window_size=7)
>>> coords_subpix
array([[ 4.5, 4.5]])
skimage.feature.corner_peaks(image, min_distance=1, threshold_abs=None, threshold_rel=0.1, exclude_border=True, indices=True, num_peaks=inf, footprint=None, labels=None)[source]
Find corners in corner measure response image.
This differs from skimage.feature.peak_local_max in that it suppresses multiple connected peaks with the same accumulator value.
Parameters: 


Examples
>>> from skimage.feature import corner_peaks, peak_local_max
>>> response = np.zeros((5, 5))
>>> response[2:4, 2:4] = 1
>>> response
array([[ 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0.],
[ 0., 0., 1., 1., 0.],
[ 0., 0., 1., 1., 0.],
[ 0., 0., 0., 0., 0.]])
>>> peak_local_max(response)
array([[3, 3],
[3, 2],
[2, 3],
[2, 2]])
>>> corner_peaks(response)
array([[2, 2]])
skimage.feature.corner_moravec(image, window_size=1)[source]
Compute Moravec corner measure response image.
This is one of the simplest corner detectors and is comparatively fast but has several limitations (e.g. not rotation invariant).
Parameters: 


Returns: 

References
[1]  https://en.wikipedia.org/wiki/Corner_detection 
Examples
>>> from skimage.feature import corner_moravec
>>> square = np.zeros([7, 7])
>>> square[3, 3] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]])
>>> corner_moravec(square).astype(int)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 2, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]])
skimage.feature.corner_fast(image, n=12, threshold=0.15)[source]
Extract FAST corners for a given image.
Parameters: 


Returns: 

References
[1]  Edward Rosten and Tom Drummond “Machine Learning for high-speed corner detection”, http://www.edwardrosten.com/work/rosten_2006_machine.pdf 
[2]  Wikipedia, “Features from accelerated segment test”, https://en.wikipedia.org/wiki/Features_from_accelerated_segment_test 
Examples
>>> from skimage.feature import corner_fast, corner_peaks
>>> square = np.zeros((12, 12))
>>> square[3:9, 3:9] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> corner_peaks(corner_fast(square, 9), min_distance=1)
array([[3, 3],
[3, 8],
[8, 3],
[8, 8]])
skimage.feature.corner_orientations(image, corners, mask)[source]
Compute the orientation of corners.
The orientation of a corner is computed from the first-order central moments, i.e. the center-of-mass approach: it is the angle of the vector from the corner coordinate to the intensity centroid of the local neighborhood around the corner.
Parameters: 


Returns: 

References
[1]  Ethan Rublee, Vincent Rabaud, Kurt Konolige and Gary Bradski “ORB : An efficient alternative to SIFT and SURF” http://www.vision.cs.chubu.ac.jp/CVR/pdf/Rublee_iccv2011.pdf 
[2]  Paul L. Rosin, “Measuring Corner Properties” http://users.cs.cf.ac.uk/Paul.Rosin/corner2.pdf 
Examples
>>> from skimage.morphology import octagon
>>> from skimage.feature import (corner_fast, corner_peaks,
... corner_orientations)
>>> square = np.zeros((12, 12))
>>> square[3:9, 3:9] = 1
>>> square.astype(int)
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
>>> corners = corner_peaks(corner_fast(square, 9), min_distance=1)
>>> corners
array([[3, 3],
[3, 8],
[8, 3],
[8, 8]])
>>> orientations = corner_orientations(square, corners, octagon(3, 2))
>>> np.rad2deg(orientations)
array([ 45., 135., -45., -135.])
skimage.feature.match_template(image, template, pad_input=False, mode='constant', constant_values=0)[source]
Match a template to a 2D or 3D image using normalized correlation.
The output is an array with values between -1.0 and 1.0. The value at a given position corresponds to the correlation coefficient between the image and the template.
For pad_input=True matches correspond to the center and otherwise to the topleft corner of the template. To find the best match you must search for peaks in the response (output) image.
Parameters: 


Returns: 

Notes
Details on the cross-correlation are presented in [1]. This implementation uses FFT convolutions of the image and the template. Reference [2] presents similar derivations but the approximation presented in this reference is not used in our implementation.
References
[1]  (1, 2) J. P. Lewis, “Fast Normalized CrossCorrelation”, Industrial Light and Magic. 
[2]  (1, 2) Briechle and Hanebeck, “Template Matching using Fast Normalized Cross Correlation”, Proceedings of the SPIE (2001). DOI:10.1117/12.421129 
Examples
>>> template = np.zeros((3, 3))
>>> template[1, 1] = 1
>>> template
array([[ 0., 0., 0.],
[ 0., 1., 0.],
[ 0., 0., 0.]])
>>> image = np.zeros((6, 6))
>>> image[1, 1] = 1
>>> image[4, 4] = 1
>>> image
array([[ 0., 0., 0., 0., 0., 0.],
[ 0., 1., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 1., 0.],
[ 0., 0., 0., 0., 0., 0.]])
>>> result = match_template(image, template)
>>> np.round(result, 3)
array([[ 1. , -0.125, 0. , 0. ],
[-0.125, -0.125, 0. , 0. ],
[ 0. , 0. , -0.125, -0.125],
[ 0. , 0. , -0.125, 1. ]])
>>> result = match_template(image, template, pad_input=True)
>>> np.round(result, 3)
array([[-0.125, -0.125, -0.125, 0. , 0. , 0. ],
[-0.125, 1. , -0.125, 0. , 0. , 0. ],
[-0.125, -0.125, -0.125, 0. , 0. , 0. ],
[ 0. , 0. , 0. , -0.125, -0.125, -0.125],
[ 0. , 0. , 0. , -0.125, 1. , -0.125],
[ 0. , 0. , 0. , -0.125, -0.125, -0.125]])
skimage.feature.register_translation(src_image, target_image, upsample_factor=1, space='real', return_error=True)[source]
Efficient subpixel image translation registration by cross-correlation.
This code gives the same precision as the FFT upsampled cross-correlation in a fraction of the computation time and with reduced memory requirements. It obtains an initial estimate of the cross-correlation peak by an FFT and then refines the shift estimation by upsampling the DFT only in a small neighborhood of that estimate by means of a matrix-multiply DFT.
Parameters: 


Returns: 

References
[1]  Manuel Guizar-Sicairos, Samuel T. Thurman, and James R. Fienup, “Efficient subpixel image registration algorithms,” Optics Letters 33, 156-158 (2008). DOI:10.1364/OL.33.000156 
[2]  James R. Fienup, “Invariant error metrics for image reconstruction,” Applied Optics 36, 8352-8357 (1997). DOI:10.1364/AO.36.008352 
skimage.feature.masked_register_translation(src_image, target_image, src_mask, target_mask=None, overlap_ratio=0.3)[source]
Masked image translation registration by masked normalized cross-correlation.
Parameters: 


Returns: 

References
[1]  Dirk Padfield. Masked Object Registration in the Fourier Domain. IEEE Transactions on Image Processing, vol. 21(5), pp. 2706-2718 (2012). DOI:10.1109/TIP.2011.2181402 
[2]  D. Padfield. “Masked FFT registration”. In Proc. Computer Vision and Pattern Recognition, pp. 2918-2925 (2010). DOI:10.1109/CVPR.2010.5540032 
skimage.feature.match_descriptors(descriptors1, descriptors2, metric=None, p=2, max_distance=inf, cross_check=True, max_ratio=1.0)[source]
Brute-force matching of descriptors.
For each descriptor in the first set this matcher finds the closest descriptor in the second set (and vice versa in the case of enabled cross-checking).
skimage.feature.plot_matches(ax, image1, image2, keypoints1, keypoints2, matches, keypoints_color='k', matches_color=None, only_matches=False, alignment='horizontal')[source]
Plot matched features.
skimage.feature.blob_dog(image, min_sigma=1, max_sigma=50, sigma_ratio=1.6, threshold=2.0, overlap=0.5, *, exclude_border=False)[source]
Finds blobs in the given grayscale image.
Blobs are found using the Difference of Gaussian (DoG) method [1]. For each blob found, the method returns its coordinates and the standard deviation of the Gaussian kernel that detected the blob.
Notes
The radius of each blob is approximately \(\sqrt{2}\sigma\) for a 2D image and \(\sqrt{3}\sigma\) for a 3D image.
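That conversion from the returned sigma column to an approximate radius can be applied directly to the output array; blob_radii is a hypothetical helper, not a skimage function:

```python
import numpy as np

def blob_radii(blobs, ndim=2):
    """Approximate blob radii from blob_dog/blob_log output, whose last
    column is sigma: r ~ sqrt(ndim) * sigma, per the note above."""
    return np.sqrt(ndim) * blobs[:, -1]
```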
References
[1] https://en.wikipedia.org/wiki/Blob_detection#The_difference_of_Gaussians_approach
Examples
>>> from skimage import data, feature
>>> feature.blob_dog(data.coins(), threshold=.5, max_sigma=40)
array([[ 267. , 359. , 16.777216],
[ 267. , 115. , 10.48576 ],
[ 263. , 302. , 16.777216],
[ 263. , 245. , 16.777216],
[ 261. , 173. , 16.777216],
[ 260. , 46. , 16.777216],
[ 198. , 155. , 10.48576 ],
[ 196. , 43. , 10.48576 ],
[ 195. , 102. , 16.777216],
[ 194. , 277. , 16.777216],
[ 193. , 213. , 16.777216],
[ 185. , 347. , 16.777216],
[ 128. , 154. , 10.48576 ],
[ 127. , 102. , 10.48576 ],
[ 125. , 208. , 10.48576 ],
[ 125. , 45. , 16.777216],
[ 124. , 337. , 10.48576 ],
[ 120. , 272. , 16.777216],
[ 58. , 100. , 10.48576 ],
[ 54. , 276. , 10.48576 ],
[ 54. , 42. , 16.777216],
[ 52. , 216. , 16.777216],
[ 52. , 155. , 16.777216],
[ 45. , 336. , 16.777216]])
skimage.feature.blob_doh(image, min_sigma=1, max_sigma=30, num_sigma=10, threshold=0.01, overlap=0.5, log_scale=False)[source]
Finds blobs in the given grayscale image.
Blobs are found using the Determinant of Hessian method [1]. For each blob found, the method returns its coordinates and the standard deviation of the Gaussian kernel used for the Hessian matrix whose determinant detected the blob. The Determinant of Hessian is approximated using [2].
Notes
The radius of each blob is approximately sigma.
Computation of the Determinant of Hessian is independent of the standard deviation, so detecting larger blobs won’t take more time. In methods like blob_dog() and blob_log() the computation of Gaussians for larger sigma takes more time. The downside is that this method can’t be used to detect blobs of radius less than 3 px, due to the box filters used in the approximation of the Hessian Determinant.
References
[1] https://en.wikipedia.org/wiki/Blob_detection#The_determinant_of_the_Hessian
[2] Herbert Bay, Andreas Ess, Tinne Tuytelaars, Luc Van Gool, “SURF: Speeded Up Robust Features” ftp://ftp.vision.ee.ethz.ch/publications/articles/eth_biwi_00517.pdf
Examples
>>> from skimage import data, feature
>>> img = data.coins()
>>> feature.blob_doh(img)
array([[ 270. , 363. , 30. ],
[ 265. , 113. , 23.55555556],
[ 262. , 243. , 23.55555556],
[ 260. , 173. , 30. ],
[ 197. , 153. , 20.33333333],
[ 197. , 44. , 20.33333333],
[ 195. , 100. , 23.55555556],
[ 193. , 275. , 23.55555556],
[ 192. , 212. , 23.55555556],
[ 185. , 348. , 30. ],
[ 156. , 302. , 30. ],
[ 126. , 153. , 20.33333333],
[ 126. , 101. , 20.33333333],
[ 124. , 336. , 20.33333333],
[ 123. , 205. , 20.33333333],
[ 123. , 44. , 23.55555556],
[ 121. , 271. , 30. ]])
skimage.feature.blob_log(image, min_sigma=1, max_sigma=50, num_sigma=10, threshold=0.2, overlap=0.5, log_scale=False, *, exclude_border=False)[source]
Finds blobs in the given grayscale image.
Blobs are found using the Laplacian of Gaussian (LoG) method [1]. For each blob found, the method returns its coordinates and the standard deviation of the Gaussian kernel that detected the blob.
Notes
The radius of each blob is approximately \(\sqrt{2}\sigma\) for a 2D image and \(\sqrt{3}\sigma\) for a 3D image.
References
[1] https://en.wikipedia.org/wiki/Blob_detection#The_Laplacian_of_Gaussian
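The scale-selection idea behind LoG detection can be illustrated in 1-D with NumPy: the scale-normalized LoG response magnitude is largest when sigma matches the blob's size. log_response is a hypothetical sketch of the principle, not how blob_log is implemented internally:

```python
import numpy as np

def log_response(signal, sigma):
    """Scale-normalized response of a 1-D signal to a discretized
    Laplacian-of-Gaussian kernel."""
    radius = int(4 * sigma)
    x = np.arange(-radius, radius + 1)
    gauss = np.exp(-x**2 / (2 * sigma**2))
    gauss /= gauss.sum()
    # Second derivative of the Gaussian (the 1-D Laplacian of Gaussian).
    log = (x**2 - sigma**2) / sigma**4 * gauss
    # Multiplying by sigma^2 is the scale normalization.
    return sigma**2 * np.convolve(signal, log, mode='same')
```

A bright box of half-width 5 produces a much stronger response at its centre for the matched sigma=5 than for a mismatched sigma=1, which only sees a locally flat signal there.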
Examples
>>> from skimage import data, feature, exposure
>>> img = data.coins()
>>> img = exposure.equalize_hist(img) # improves detection
>>> feature.blob_log(img, threshold = .3)
array([[ 266. , 115. , 11.88888889],
[ 263. , 302. , 17.33333333],
[ 263. , 244. , 17.33333333],
[ 260. , 174. , 17.33333333],
[ 198. , 155. , 11.88888889],
[ 198. , 103. , 11.88888889],
[ 197. , 44. , 11.88888889],
[ 194. , 276. , 17.33333333],
[ 194. , 213. , 17.33333333],
[ 185. , 344. , 17.33333333],
[ 128. , 154. , 11.88888889],
[ 127. , 102. , 11.88888889],
[ 126. , 208. , 11.88888889],
[ 126. , 46. , 11.88888889],
[ 124. , 336. , 11.88888889],
[ 121. , 272. , 17.33333333],
[ 113. , 323. , 1. ]])
skimage.feature.haar_like_feature(int_image, r, c, width, height, feature_type=None, feature_coord=None)[source]
Compute the Haar-like features for a region of interest (ROI) of an integral image.
Haar-like features have been successfully used for image classification and object detection [1]. They were used in the real-time face detection algorithm proposed in [2].
Notes
When extracting these features in parallel, be aware that the choice of backend (i.e. multiprocessing vs threading) will have an impact on performance. The rule of thumb is: use multiprocessing when extracting features for all possible ROIs in an image; use threading when extracting features at specific locations for a limited number of ROIs. Refer to the example Face classification using Haar-like feature descriptor for more insights.
References
[1] https://en.wikipedia.org/wiki/Haar-like_feature
[2] Oren, M., Papageorgiou, C., Sinha, P., Osuna, E., & Poggio, T. (1997, June). Pedestrian detection using wavelet templates. In Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on (pp. 193-199). IEEE. http://tinyurl.com/y6ulxfta DOI:10.1109/CVPR.1997.609319
[3] Viola, Paul, and Michael J. Jones. “Robust real-time face detection.” International Journal of Computer Vision 57.2 (2004): 137-154. http://www.merl.com/publications/docs/TR2004043.pdf DOI:10.1109/CVPR.2001.990517
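The underlying computation pairs an integral image with constant-time rectangle sums. Below is a minimal numpy-only sketch of a single two-rectangle feature (right half minus left half); the helper names integral, rect_sum, and haar_2x are hypothetical, and the sign convention may differ from skimage's 'type-2-x':

```python
import numpy as np

def integral(img):
    """Summed-area table, padded with a zero row/column so rectangle
    sums need no boundary checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of the h-by-w rectangle with top-left pixel (r, c),
    from four lookups in the integral image."""
    return ii[r + h, c + w] - ii[r, c + w] - ii[r + h, c] + ii[r, c]

def haar_2x(ii, r, c, h, w):
    """One two-rectangle Haar-like value: right half minus left half."""
    half = w // 2
    return rect_sum(ii, r, c + half, h, half) - rect_sum(ii, r, c, h, half)
```

Because every feature reduces to a handful of integral-image lookups, its cost is independent of the rectangle size, which is what makes exhaustive Haar-like feature extraction tractable.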
Examples
>>> import numpy as np
>>> from skimage.transform import integral_image
>>> from skimage.feature import haar_like_feature
>>> img = np.ones((5, 5), dtype=np.uint8)
>>> img_ii = integral_image(img)
>>> feature = haar_like_feature(img_ii, 0, 0, 5, 5, 'type3x')
>>> feature
array([1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1,
2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1,
2, 1, 2, 1, 2, 1, 1, 1])
You can compute the feature for some precomputed coordinates.
>>> from skimage.feature import haar_like_feature_coord
>>> feature_coord, feature_type = zip(
... *[haar_like_feature_coord(5, 5, feat_t)
... for feat_t in ('type2x', 'type3x')])
>>> # only select one feature over two
>>> feature_coord = np.concatenate([x[::2] for x in feature_coord])
>>> feature_type = np.concatenate([x[::2] for x in feature_type])
>>> feature = haar_like_feature(img_ii, 0, 0, 5, 5,
... feature_type=feature_type,
... feature_coord=feature_coord)
>>> feature
array([ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 1, 3, 1, 3, 1, 3, 1,
3, 1, 3, 1, 3, 2, 1, 3, 2, 2, 2, 1])
skimage.feature.haar_like_feature_coord(width, height, feature_type=None)[source]
Compute the coordinates of Haar-like features.
Examples
>>> import numpy as np
>>> from skimage.transform import integral_image
>>> from skimage.feature import haar_like_feature_coord
>>> feat_coord, feat_type = haar_like_feature_coord(2, 2, 'type4')
>>> feat_coord # doctest: +SKIP
array([ list([[(0, 0), (0, 0)], [(0, 1), (0, 1)],
[(1, 1), (1, 1)], [(1, 0), (1, 0)]])], dtype=object)
>>> feat_type
array(['type4'], dtype=object)
skimage.feature.draw_haar_like_feature(image, r, c, width, height, feature_coord, color_positive_block=(1.0, 0.0, 0.0), color_negative_block=(0.0, 1.0, 0.0), alpha=0.5, max_n_features=None, random_state=None)[source]
Visualization of Haar-like features.
Examples
>>> import numpy as np
>>> from skimage.feature import haar_like_feature_coord
>>> from skimage.feature import draw_haar_like_feature
>>> feature_coord, _ = haar_like_feature_coord(2, 2, 'type4')
>>> image = draw_haar_like_feature(np.zeros((2, 2)),
... 0, 0, 2, 2,
... feature_coord,
... max_n_features=1)
>>> image
array([[[ 0. , 0.5, 0. ],
[ 0.5, 0. , 0. ]],
<BLANKLINE>
[[ 0.5, 0. , 0. ],
[ 0. , 0.5, 0. ]]])
Cascade
skimage.feature.Cascade
Bases: object
Class for cascade of classifiers that is used for object detection.
The main idea behind a cascade of classifiers is to create several classifiers of medium accuracy and ensemble them into one strong classifier, instead of just creating a single strong one. The second advantage of a cascade classifier is that easy examples can be classified by evaluating only some of the classifiers in the cascade, making the process much faster than evaluating one strong classifier.
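That early-exit behaviour can be sketched in a few lines; cascade_classify and the (score_fn, threshold) stage representation here are hypothetical, not the Cascade class's actual internal format:

```python
def cascade_classify(stages, window):
    """Evaluate a cascade: each stage is a (score_fn, threshold) pair.
    Reject as soon as any stage scores below its threshold, so easy
    negatives never reach the more expensive later stages."""
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False  # rejected by an early stage
    return True           # accepted by every stage
```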
__init__()
Initialize cascade classifier.
detect_multi_scale()
Search for the object on multiple scales of input image.
The function takes the input image, the scale factor by which the search window is multiplied on each step, and the minimum and maximum window sizes that bound the sizes of the search windows applied to the input image to detect objects.
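The search-window enumeration described above might look like the following sketch; multi_scale_windows, its parameters, and the fixed step size are hypothetical, and the real method additionally runs the cascade on each window:

```python
def multi_scale_windows(img_shape, min_size, max_size, scale_factor=1.25, step=8):
    """Enumerate (row, col, height, width) search windows, growing the
    window by `scale_factor` per scale until it exceeds `max_size`."""
    h, w = float(min_size[0]), float(min_size[1])
    windows = []
    while h <= max_size[0] and w <= max_size[1]:
        for r in range(0, img_shape[0] - int(h) + 1, step):
            for c in range(0, img_shape[1] - int(w) + 1, step):
                windows.append((r, c, int(h), int(w)))
        h *= scale_factor
        w *= scale_factor
    return windows
```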
eps
features_number
stages_number
stumps_number
window_height
window_width
BRIEF
skimage.feature.BRIEF(descriptor_size=256, patch_size=49, mode='normal', sigma=1, sample_seed=1)[source]
Bases: skimage.feature.util.DescriptorExtractor
BRIEF binary descriptor extractor.
BRIEF (Binary Robust Independent Elementary Features) is an efficient feature point descriptor. It is highly discriminative even when using relatively few bits and is computed using simple intensity difference tests.
For each keypoint, intensity comparisons are carried out for a specifically distributed number N of pixel pairs, resulting in a binary descriptor of length N. For binary descriptors the Hamming distance can be used for feature matching, which leads to lower computational cost in comparison to the L2 norm.
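The two ideas in that paragraph, pairwise intensity tests and Hamming-distance matching, can be sketched with NumPy. brief_descriptor and hamming are hypothetical simplifications: the real extractor smooths the patch first and samples pairs from a specific distribution:

```python
import numpy as np

def brief_descriptor(patch, pairs):
    """One bit per sampled pixel pair (p, q): is patch[p] darker
    than patch[q]?"""
    return np.array([patch[p] < patch[q] for p, q in pairs], dtype=bool)

def hamming(d1, d2):
    """Hamming distance between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))
```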
Examples
>>> from skimage.feature import (corner_harris, corner_peaks, BRIEF,
... match_descriptors)
>>> import numpy as np
>>> square1 = np.zeros((8, 8), dtype=np.int32)
>>> square1[2:6, 2:6] = 1
>>> square1
array([[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)
>>> square2 = np.zeros((9, 9), dtype=np.int32)
>>> square2[2:7, 2:7] = 1
>>> square2
array([[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]], dtype=int32)
>>> keypoints1 = corner_peaks(corner_harris(square1), min_distance=1)
>>> keypoints2 = corner_peaks(corner_harris(square2), min_distance=1)
>>> extractor = BRIEF(patch_size=5)
>>> extractor.extract(square1, keypoints1)
>>> descriptors1 = extractor.descriptors
>>> extractor.extract(square2, keypoints2)
>>> descriptors2 = extractor.descriptors
>>> matches = match_descriptors(descriptors1, descriptors2)
>>> matches
array([[0, 0],
[1, 1],
[2, 2],
[3, 3]])
>>> keypoints1[matches[:, 0]]
array([[2, 2],
[2, 5],
[5, 2],
[5, 5]])
>>> keypoints2[matches[:, 1]]
array([[2, 2],
[2, 6],
[6, 2],
[6, 6]])
CENSURE
skimage.feature.CENSURE(min_scale=1, max_scale=7, mode='DoB', non_max_threshold=0.15, line_threshold=10)[source]
Bases: skimage.feature.util.FeatureDetector
CENSURE keypoint detector.
References
[1] Motilal Agrawal, Kurt Konolige and Morten Rufus Blas, “CENSURE: Center Surround Extremas for Realtime Feature Detection and Matching”, https://link.springer.com/chapter/10.1007/978-3-540-88693-8_8 DOI:10.1007/978-3-540-88693-8_8
[2]  Adam Schmidt, Marek Kraft, Michal Fularz and Zuzanna Domagala “Comparative Assessment of Point Feature Detectors and Descriptors in the Context of Robot Navigation” http://yadda.icm.edu.pl/yadda/element/bwmeta1.element.baztech268aaf280faf4872a4df7e2e61cb364c/c/Schmidt_comparative.pdf DOI:10.1.1.465.1117 
Examples
>>> from skimage.data import astronaut
>>> from skimage.color import rgb2gray
>>> from skimage.feature import CENSURE
>>> img = rgb2gray(astronaut()[100:300, 100:300])
>>> censure = CENSURE()
>>> censure.detect(img)
>>> censure.keypoints
array([[ 4, 148],
[ 12, 73],
[ 21, 176],
[ 91, 22],
[ 93, 56],
[ 94, 22],
[ 95, 54],
[100, 51],
[103, 51],
[106, 67],
[108, 15],
[117, 20],
[122, 60],
[125, 37],
[129, 37],
[133, 76],
[145, 44],
[146, 94],
[150, 114],
[153, 33],
[154, 156],
[155, 151],
[184, 63]])
>>> censure.scales
array([2, 6, 6, 2, 4, 3, 2, 3, 2, 6, 3, 2, 2, 3, 2, 2, 2, 3, 2, 2, 4, 2, 2])
ORB
skimage.feature.ORB(downscale=1.2, n_scales=8, n_keypoints=500, fast_n=9, fast_threshold=0.08, harris_k=0.04)[source]
Bases: skimage.feature.util.FeatureDetector, skimage.feature.util.DescriptorExtractor
Oriented FAST and rotated BRIEF feature detector and binary descriptor extractor.
References
[1]  Ethan Rublee, Vincent Rabaud, Kurt Konolige and Gary Bradski “ORB: An efficient alternative to SIFT and SURF” http://www.vision.cs.chubu.ac.jp/CVR/pdf/Rublee_iccv2011.pdf 
Examples
>>> import numpy as np
>>> from skimage.feature import ORB, match_descriptors
>>> img1 = np.zeros((100, 100))
>>> img2 = np.zeros_like(img1)
>>> np.random.seed(1)
>>> square = np.random.rand(20, 20)
>>> img1[40:60, 40:60] = square
>>> img2[53:73, 53:73] = square
>>> detector_extractor1 = ORB(n_keypoints=5)
>>> detector_extractor2 = ORB(n_keypoints=5)
>>> detector_extractor1.detect_and_extract(img1)
>>> detector_extractor2.detect_and_extract(img2)
>>> matches = match_descriptors(detector_extractor1.descriptors,
... detector_extractor2.descriptors)
>>> matches
array([[0, 0],
[1, 1],
[2, 2],
[3, 3],
[4, 4]])
>>> detector_extractor1.keypoints[matches[:, 0]]
array([[ 42., 40.],
[ 47., 58.],
[ 44., 40.],
[ 59., 42.],
[ 45., 44.]])
>>> detector_extractor2.keypoints[matches[:, 1]]
array([[ 55., 53.],
[ 60., 71.],
[ 57., 53.],
[ 72., 55.],
[ 58., 57.]])
__init__(downscale=1.2, n_scales=8, n_keypoints=500, fast_n=9, fast_threshold=0.08, harris_k=0.04)[source]
Initialize self. See help(type(self)) for accurate signature.
detect(image)[source]
Detect oriented FAST keypoints along with the corresponding scale.
detect_and_extract(image)[source]
Detect oriented FAST keypoints and extract rBRIEF descriptors.
Note that this is faster than first calling detect and then extract.
extract(image, keypoints, scales, orientations)[source]
Extract rBRIEF binary descriptors for given keypoints in image.
Note that the keypoints must be extracted using the same downscale and n_scales parameters. Additionally, if you want to extract both keypoints and descriptors you should use the faster detect_and_extract.