Module: feature

skimage.feature.corner_foerstner(image[, sigma]) Compute Foerstner corner measure response image.
skimage.feature.corner_harris(image[, ...]) Compute Harris corner measure response image.
skimage.feature.corner_kitchen_rosenfeld(image) Compute Kitchen and Rosenfeld corner measure response image.
skimage.feature.corner_moravec(image[, window_size]) Compute Moravec corner measure response image.
skimage.feature.corner_peaks(image[, ...]) Find corners in corner measure response image.
skimage.feature.corner_shi_tomasi(image[, sigma]) Compute Shi-Tomasi (Kanade-Tomasi) corner measure response image.
skimage.feature.corner_subpix(image, corners) Determine subpixel position of corners.
skimage.feature.daisy(img[, step, radius, ...]) Extract DAISY feature descriptors densely for the given image.
skimage.feature.greycomatrix(image, ...[, ...]) Calculate the grey-level co-occurrence matrix.
skimage.feature.greycoprops(P[, prop]) Calculate texture properties of a GLCM.
skimage.feature.hog(image[, orientations, ...]) Extract Histogram of Oriented Gradients (HOG) for a given image.
skimage.feature.local_binary_pattern(image, P, R) Gray scale and rotation invariant LBP (Local Binary Patterns).
skimage.feature.match_template(image, template) Match a template to an image using normalized correlation.
skimage.feature.peak_local_max(image[, ...]) Find peaks in an image, and return them as coordinates or a boolean array.

corner_foerstner

skimage.feature.corner_foerstner(image, sigma=1)

Compute Foerstner corner measure response image.

This corner detector uses information from the auto-correlation matrix A:

A = [(imx**2)   (imx*imy)] = [Axx Axy]
    [(imx*imy)   (imy**2)]   [Axy Ayy]

Where imx and imy are the first derivatives averaged with a gaussian filter. The corner measure is then defined as:

w = det(A) / trace(A)           (size of error ellipse)
q = 4 * det(A) / trace(A)**2    (roundness of error ellipse)
Parameters :

image : ndarray

Input image.

sigma : float, optional

Standard deviation used for the Gaussian kernel, which is used as weighting function for the auto-correlation matrix.

Returns :

w : ndarray

Error ellipse sizes.

q : ndarray

Roundness of error ellipse.

References

[R99]http://www.ipb.uni-bonn.de/uploads/tx_ikgpublication/foerstner87.fast.pdf
[R100]http://en.wikipedia.org/wiki/Corner_detection

Examples

>>> from skimage.feature import corner_foerstner, corner_peaks
>>> square = np.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square
array([[ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0]])
>>> w, q = corner_foerstner(square)
>>> accuracy_thresh = 0.5
>>> roundness_thresh = 0.3
>>> foerstner = (q > roundness_thresh) * (w > accuracy_thresh) * w
>>> corner_peaks(foerstner, min_distance=1)
array([[2, 2],
       [2, 7],
       [7, 2],
       [7, 7]])
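
The two measures can be sanity-checked on hand-picked tensor entries. This numpy sketch (the helper name and the tensor values are ours, for illustration only) shows that an isotropic tensor yields roundness q = 1, while a strongly anisotropic, edge-like tensor yields q near 0.

```python
import numpy as np

def foerstner_measures(Axx, Axy, Ayy):
    # w = det(A) / trace(A): size of the error ellipse
    # q = 4 * det(A) / trace(A)**2: roundness of the error ellipse
    det = Axx * Ayy - Axy ** 2
    trace = Axx + Ayy
    return det / trace, 4 * det / trace ** 2

# Isotropic tensor (corner-like): both eigenvalues equal, so q == 1.
w, q = foerstner_measures(2.0, 0.0, 2.0)
print(w, q)  # 1.0 1.0

# Anisotropic tensor (edge-like): one tiny eigenvalue, so q is near 0.
w_edge, q_edge = foerstner_measures(4.0, 0.0, 0.01)
print(q_edge < 0.05)  # True
```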

corner_harris

skimage.feature.corner_harris(image, method='k', k=0.05, eps=1e-06, sigma=1)

Compute Harris corner measure response image.

This corner detector uses information from the auto-correlation matrix A:

A = [(imx**2)   (imx*imy)] = [Axx Axy]
    [(imx*imy)   (imy**2)]   [Axy Ayy]

Where imx and imy are the first derivatives averaged with a gaussian filter. The corner measure is then defined as:

det(A) - k * trace(A)**2

or:

2 * det(A) / (trace(A) + eps)
Parameters :

image : ndarray

Input image.

method : {‘k’, ‘eps’}, optional

Method to compute the response image from the auto-correlation matrix.

k : float, optional

Sensitivity factor to separate corners from edges, typically in range [0, 0.2]. Small values of k result in detection of sharp corners.

eps : float, optional

Normalisation factor (Noble’s corner measure).

sigma : float, optional

Standard deviation used for the Gaussian kernel, which is used as weighting function for the auto-correlation matrix.

Returns :

response : ndarray

Harris response image.

References

[R101]http://kiwi.cs.dal.ca/~dparks/CornerDetection/harris.htm
[R102]http://en.wikipedia.org/wiki/Corner_detection

Examples

>>> from skimage.feature import corner_harris, corner_peaks
>>> square = np.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square
array([[ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0]])
>>> corner_peaks(corner_harris(square), min_distance=1)
array([[2, 2],
       [2, 7],
       [7, 2],
       [7, 7]])

corner_kitchen_rosenfeld

skimage.feature.corner_kitchen_rosenfeld(image)

Compute Kitchen and Rosenfeld corner measure response image.

The corner measure is calculated as follows:

(imxx * imy**2 + imyy * imx**2 - 2 * imxy * imx * imy) / (imx**2 + imy**2)

Where imx and imy are the first and imxx, imxy, imyy the second derivatives.

Parameters :

image : ndarray

Input image.

Returns :

response : ndarray

Kitchen and Rosenfeld response image.
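
A numpy-only sketch of a Kitchen-Rosenfeld-style measure (the assumed form is imxx*imy**2 + imyy*imx**2 - 2*imxy*imx*imy over the squared gradient magnitude, with np.gradient standing in for the derivative filters; the library implementation may differ):

```python
import numpy as np

def kitchen_rosenfeld_sketch(image):
    # First derivatives (np.gradient returns d/drow, d/dcol).
    imy, imx = np.gradient(image)
    # Second derivatives.
    imxy, imxx = np.gradient(imx)
    imyy, _ = np.gradient(imy)
    num = imxx * imy ** 2 + imyy * imx ** 2 - 2 * imxy * imx * imy
    den = imx ** 2 + imy ** 2
    # Guard the division where the gradient vanishes.
    return np.where(den > 0, num / np.where(den > 0, den, 1.0), 0.0)

square = np.zeros((10, 10))
square[2:8, 2:8] = 1.0
response = kitchen_rosenfeld_sketch(square)
print(response.shape)  # (10, 10)
```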

corner_moravec

skimage.feature.corner_moravec(image, window_size=1)

Compute Moravec corner measure response image.

This is one of the simplest corner detectors and is comparatively fast but has several limitations (e.g. not rotation invariant).

Parameters :

image : ndarray

Input image.

window_size : int, optional (default 1)

Window size.

Returns :

response : ndarray

Moravec response image.

References

[1] http://kiwi.cs.dal.ca/~dparks/CornerDetection/moravec.htm
[2] http://en.wikipedia.org/wiki/Corner_detection

Examples

>>> from skimage.feature import corner_moravec, peak_local_max
>>> square = np.zeros([7, 7])
>>> square[3, 3] = 1
>>> square
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  1.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.]])
>>> corner_moravec(square)
array([[ 0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  1.,  1.,  0.,  0.],
       [ 0.,  0.,  1.,  2.,  1.,  0.,  0.],
       [ 0.,  0.,  1.,  1.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.,  0.,  0.]])

corner_peaks

skimage.feature.corner_peaks(image, min_distance=10, threshold_abs=0, threshold_rel=0.1, exclude_border=True, indices=True, num_peaks=inf, footprint=None, labels=None)

Find corners in corner measure response image.

This differs from skimage.feature.peak_local_max in that it suppresses multiple connected peaks with the same accumulator value.

Parameters :

See `skimage.feature.peak_local_max`.

Returns :

See `skimage.feature.peak_local_max`.

Examples

>>> from skimage.feature import peak_local_max, corner_peaks
>>> response = np.zeros((5, 5))
>>> response[2:4, 2:4] = 1
>>> response
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  1.,  0.],
       [ 0.,  0.,  1.,  1.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])
>>> peak_local_max(response, exclude_border=False)
array([[2, 2],
       [2, 3],
       [3, 2],
       [3, 3]])
>>> corner_peaks(response, exclude_border=False)
array([[2, 2]])
>>> corner_peaks(response, exclude_border=False, min_distance=0)
array([[2, 2],
       [2, 3],
       [3, 2],
       [3, 3]])

corner_shi_tomasi

skimage.feature.corner_shi_tomasi(image, sigma=1)

Compute Shi-Tomasi (Kanade-Tomasi) corner measure response image.

This corner detector uses information from the auto-correlation matrix A:

A = [(imx**2)   (imx*imy)] = [Axx Axy]
    [(imx*imy)   (imy**2)]   [Axy Ayy]

Where imx and imy are the first derivatives averaged with a gaussian filter. The corner measure is then defined as the smaller eigenvalue of A:

((Axx + Ayy) - sqrt((Axx - Ayy)**2 + 4 * Axy**2)) / 2
Parameters :

image : ndarray

Input image.

sigma : float, optional

Standard deviation used for the Gaussian kernel, which is used as weighting function for the auto-correlation matrix.

Returns :

response : ndarray

Shi-Tomasi response image.

References

[R103]http://kiwi.cs.dal.ca/~dparks/CornerDetection/harris.htm
[R104]http://en.wikipedia.org/wiki/Corner_detection

Examples

>>> from skimage.feature import corner_shi_tomasi, corner_peaks
>>> square = np.zeros([10, 10])
>>> square[2:8, 2:8] = 1
>>> square
array([[ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  1,  1,  1,  1,  1,  1,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0,  0,  0,  0]])
>>> corner_peaks(corner_shi_tomasi(square), min_distance=1)
array([[2, 2],
       [2, 7],
       [7, 2],
       [7, 7]])

corner_subpix

skimage.feature.corner_subpix(image, corners, window_size=11, alpha=0.99)

Determine subpixel position of corners.

Parameters :

image : ndarray

Input image.

corners : (N, 2) ndarray

Corner coordinates (row, col).

window_size : int, optional

Search window size for subpixel estimation.

alpha : float, optional

Significance level for point classification.

Returns :

positions : (N, 2) ndarray

Subpixel corner positions. NaN for “not classified” corners.

References

[R105]http://www.ipb.uni-bonn.de/uploads/tx_ikgpublication/foerstner87.fast.pdf
[R106]http://en.wikipedia.org/wiki/Corner_detection
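
The idea of subpixel refinement can be illustrated without the full Foerstner model that corner_subpix fits: the sketch below simply takes a gradient-magnitude-weighted centroid inside a small window around an integer corner. This is a deliberately simplified substitute for illustration, not the algorithm referenced above.

```python
import numpy as np

def refine_corner(image, corner, window_size=5):
    # Gradient-magnitude-weighted centroid in a window -- illustrative
    # only; corner_subpix fits a statistical model instead.
    r, c = corner
    half = window_size // 2
    win = image[r - half:r + half + 1, c - half:c + half + 1]
    gy, gx = np.gradient(win)
    weight = gx ** 2 + gy ** 2
    if weight.sum() == 0:
        return float(r), float(c)
    rows, cols = np.mgrid[r - half:r + half + 1, c - half:c + half + 1]
    return ((rows * weight).sum() / weight.sum(),
            (cols * weight).sum() / weight.sum())

square = np.zeros((10, 10))
square[2:8, 2:8] = 1.0
rr, cc = refine_corner(square, (2, 2))
print(abs(rr - cc) < 1e-9)  # True: the corner is symmetric in row/col
```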

daisy

skimage.feature.daisy(img, step=4, radius=15, rings=3, histograms=8, orientations=8, normalization='l1', sigmas=None, ring_radii=None, visualize=False)

Extract DAISY feature descriptors densely for the given image.

DAISY is a feature descriptor similar to SIFT formulated in a way that allows for fast dense extraction. Typically, this is practical for bag-of-features image representations.

The implementation follows Tola et al. [R107] but deviates on the following points:

  • Histogram bin contributions are smoothed with a circular Gaussian window over the tonal range (the angular range).
  • The sigma values of the spatial Gaussian smoothing in this code do not match the sigma values in the original code by Tola et al. [R108]. In their code, spatial smoothing is applied to both the input image and the center histogram. However, this smoothing is not documented in [R107] and is therefore omitted here.
Parameters :

img : (M, N) array

Input image (greyscale).

step : int, optional

Distance between descriptor sampling points.

radius : int, optional

Radius (in pixels) of the outermost ring.

rings : int, optional

Number of rings.

histograms : int, optional

Number of histograms sampled per ring.

orientations : int, optional

Number of orientations (bins) per histogram.

normalization : [ ‘l1’ | ‘l2’ | ‘daisy’ | ‘off’ ], optional

How to normalize the descriptors

  • ‘l1’: L1-normalization of each descriptor.
  • ‘l2’: L2-normalization of each descriptor.
  • ‘daisy’: L2-normalization of individual histograms.
  • ‘off’: Disable normalization.

sigmas : 1D array of float, optional

Standard deviation of spatial Gaussian smoothing for the center histogram and for each ring of histograms. The array of sigmas should be sorted from the center and out. I.e. the first sigma value defines the spatial smoothing of the center histogram and the last sigma value defines the spatial smoothing of the outermost ring. Specifying sigmas overrides the following parameter.

rings = len(sigmas) - 1

ring_radii : 1D array of int, optional

Radius (in pixels) for each ring. Specifying ring_radii overrides the following two parameters.

rings = len(ring_radii)
radius = ring_radii[-1]

If both sigmas and ring_radii are given, they must satisfy the following predicate since no radius is needed for the center histogram.

len(ring_radii) == len(sigmas) + 1

visualize : bool, optional

Generate a visualization of the DAISY descriptors

Returns :

descs : array

Grid of DAISY descriptors for the given image as an array of dimensionality (P, Q, R) where

P = ceil((M - radius*2) / step)
Q = ceil((N - radius*2) / step)
R = (rings * histograms + 1) * orientations

descs_img : (M, N, 3) array (only if visualize==True)

Visualization of the DAISY descriptors.

References

[R107](1, 2, 3) Tola et al. “Daisy: An efficient dense descriptor applied to wide- baseline stereo.” Pattern Analysis and Machine Intelligence, IEEE Transactions on 32.5 (2010): 815-830.
[R108](1, 2) http://cvlab.epfl.ch/alumni/tola/daisy.html
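
The descriptor-grid shapes given in the Returns section can be checked with a few lines of arithmetic; the helper below (its name is ours) uses the default parameter values from the signature above.

```python
import math

def daisy_grid_shape(M, N, step=4, radius=15, rings=3, histograms=8,
                     orientations=8):
    # Shape formulas from the Returns section.
    P = math.ceil((M - radius * 2) / step)
    Q = math.ceil((N - radius * 2) / step)
    R = (rings * histograms + 1) * orientations
    return P, Q, R

# A 100 x 120 image with the defaults gives an 18 x 23 grid of
# 200-dimensional descriptors.
print(daisy_grid_shape(100, 120))  # (18, 23, 200)
```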

greycomatrix

skimage.feature.greycomatrix(image, distances, angles, levels=256, symmetric=False, normed=False)

Calculate the grey-level co-occurrence matrix.

A grey-level co-occurrence matrix is a histogram of co-occurring greyscale values at a given offset over an image.

Parameters :

image : array_like of uint8

Integer typed input image. The image will be cast to uint8, so the maximum value must be less than 256.

distances : array_like

List of pixel pair distance offsets.

angles : array_like

List of pixel pair angles in radians.

levels : int, optional

The input image should contain integers in [0, levels-1], where levels indicates the number of grey-levels counted (typically 256 for an 8-bit image). The maximum value is 256.

symmetric : bool, optional

If True, the output matrix P[:, :, d, theta] is symmetric. This is accomplished by ignoring the order of value pairs, so both (i, j) and (j, i) are accumulated when (i, j) is encountered for a given offset. The default is False.

normed : bool, optional

If True, normalize each matrix P[:, :, d, theta] by dividing by the total number of accumulated co-occurrences for the given offset. The elements of the resulting matrix sum to 1. The default is False.

Returns :

P : 4-D ndarray

The grey-level co-occurrence histogram. The value P[i,j,d,theta] is the number of times that grey-level j occurs at a distance d and at an angle theta from grey-level i. If normed is False, the output is of type uint32, otherwise it is float64.

References

[R109]The GLCM Tutorial Home Page, http://www.fp.ucalgary.ca/mhallbey/tutorial.htm
[R110]Pattern Recognition Engineering, Morton Nadler & Eric P. Smith
[R111]Wikipedia, http://en.wikipedia.org/wiki/Co-occurrence_matrix

Examples

Compute two GLCMs: one for a 1-pixel offset to the right, and one for a 1-pixel offset upwards.

>>> from skimage.feature import greycomatrix
>>> image = np.array([[0, 0, 1, 1],
...                   [0, 0, 1, 1],
...                   [0, 2, 2, 2],
...                   [2, 2, 3, 3]], dtype=np.uint8)
>>> result = greycomatrix(image, [1], [0, np.pi/2], levels=4)
>>> result[:, :, 0, 0]
array([[2, 2, 1, 0],
       [0, 2, 0, 0],
       [0, 0, 3, 1],
       [0, 0, 0, 1]], dtype=uint32)
>>> result[:, :, 0, 1]
array([[3, 0, 2, 0],
       [0, 2, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 0]], dtype=uint32)
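
The first matrix above can be verified by counting pairs by hand: the loop below accumulates, for the horizontal offset (distance 1, angle 0), how often grey-level j appears one pixel to the right of grey-level i.

```python
import numpy as np

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]], dtype=np.uint8)

levels = 4
P = np.zeros((levels, levels), dtype=np.uint32)
for row in range(image.shape[0]):
    for col in range(image.shape[1] - 1):      # horizontal neighbour pairs
        P[image[row, col], image[row, col + 1]] += 1
print(P)  # same as result[:, :, 0, 0] above
```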

greycoprops

skimage.feature.greycoprops(P, prop='contrast')

Calculate texture properties of a GLCM.

Compute a feature of a grey level co-occurrence matrix to serve as a compact summary of the matrix. The properties are computed as follows:

  • ‘contrast’: \sum_{i,j=0}^{levels-1} P_{i,j}(i-j)^2

  • ‘dissimilarity’: \sum_{i,j=0}^{levels-1}P_{i,j}|i-j|

  • ‘homogeneity’: \sum_{i,j=0}^{levels-1}\frac{P_{i,j}}{1+(i-j)^2}

  • ‘ASM’: \sum_{i,j=0}^{levels-1} P_{i,j}^2

  • ‘energy’: \sqrt{ASM}

  • ‘correlation’:

    \sum_{i,j=0}^{levels-1} P_{i,j}\left[\frac{(i-\mu_i)(j-\mu_j)}{\sqrt{(\sigma_i^2)(\sigma_j^2)}}\right]

Parameters :

P : ndarray

Input array. P is the grey-level co-occurrence histogram for which to compute the specified property. The value P[i,j,d,theta] is the number of times that grey-level j occurs at a distance d and at an angle theta from grey-level i.

prop : {‘contrast’, ‘dissimilarity’, ‘homogeneity’, ‘energy’, ‘correlation’, ‘ASM’}, optional

The property of the GLCM to compute. The default is ‘contrast’.

Returns :

results : 2-D ndarray

2-dimensional array. results[d, a] is the property ‘prop’ for the d’th distance and the a’th angle.

References

[R112]The GLCM Tutorial Home Page, http://www.fp.ucalgary.ca/mhallbey/tutorial.htm

Examples

Compute the contrast for GLCMs with distances [1, 2] and angles [0 degrees, 90 degrees]

>>> from skimage.feature import greycomatrix, greycoprops
>>> image = np.array([[0, 0, 1, 1],
...                   [0, 0, 1, 1],
...                   [0, 2, 2, 2],
...                   [2, 2, 3, 3]], dtype=np.uint8)
>>> g = greycomatrix(image, [1, 2], [0, np.pi/2], levels=4,
...                  normed=True, symmetric=True)
>>> contrast = greycoprops(g, 'contrast')
>>> contrast
array([[ 0.58333333,  1.        ],
       [ 1.25      ,  2.75      ]])
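
The first contrast value (0.58333333, for distance 1 and angle 0) can be reproduced by hand: build the symmetric, normalized horizontal GLCM and apply the ‘contrast’ formula above.

```python
import numpy as np

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]], dtype=np.uint8)

P = np.zeros((4, 4))
for row in range(4):
    for col in range(3):                 # horizontal neighbour pairs
        P[image[row, col], image[row, col + 1]] += 1
P = P + P.T                              # symmetric=True
P /= P.sum()                             # normed=True

i, j = np.mgrid[0:4, 0:4]
contrast = (P * (i - j) ** 2).sum()      # sum P_ij * (i - j)^2
print(round(contrast, 8))  # 0.58333333
```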

hog

skimage.feature.hog(image, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(3, 3), visualise=False, normalise=False)

Extract Histogram of Oriented Gradients (HOG) for a given image.

Compute a Histogram of Oriented Gradients (HOG) by

  1. (optional) global image normalisation
  2. computing the gradient image in x and y
  3. computing gradient histograms
  4. normalising across blocks
  5. flattening into a feature vector
Parameters :

image : (M, N) ndarray

Input image (greyscale).

orientations : int

Number of orientation bins.

pixels_per_cell : 2 tuple (int, int)

Size (in pixels) of a cell.

cells_per_block : 2 tuple (int,int)

Number of cells in each block.

visualise : bool, optional

Also return an image of the HOG.

normalise : bool, optional

Apply power law compression to normalise the image before processing.

Returns :

newarr : ndarray

HOG for the image as a 1D (flattened) array.

hog_image : ndarray (if visualise=True)

A visualisation of the HOG image.

References

Dalal, N. and Triggs, B., “Histograms of Oriented Gradients for Human Detection,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2005.

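Steps 2 and 3 above can be sketched for a single 8x8 cell. The gradient operator, the unsigned-orientation convention, and the 20-degree bins below are assumptions for illustration, not necessarily what hog uses internally.

```python
import numpy as np

rng = np.random.default_rng(0)
cell = rng.random((8, 8))                 # one 8x8 cell of a greyscale image

# Step 2: gradient image in x and y.
gy, gx = np.gradient(cell)
magnitude = np.hypot(gx, gy)
orientation = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned, [0, 180)

# Step 3: magnitude-weighted orientation histogram, 9 bins of 20 degrees.
hist = np.zeros(9)
for b, m in zip((orientation // 20).astype(int).ravel(), magnitude.ravel()):
    hist[b] += m
print(hist.shape)  # (9,)
```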
local_binary_pattern

skimage.feature.local_binary_pattern(image, P, R, method='default')

Gray scale and rotation invariant LBP (Local Binary Patterns).

LBP is an invariant descriptor that can be used for texture classification.

Parameters :

image : (N, M) array

Graylevel image.

P : int

Number of circularly symmetric neighbour set points (quantization of the angular space).

R : float

Radius of circle (spatial resolution of the operator).

method : {‘default’, ‘ror’, ‘uniform’, ‘nri_uniform’, ‘var’}, optional

Method to determine the pattern.

  • ‘default’: original local binary pattern which is gray scale but not rotation invariant.

  • ‘ror’: extension of default implementation which is gray scale and rotation invariant.

  • ‘uniform’: improved rotation invariance with uniform patterns and finer quantization of the angular space which is gray scale and rotation invariant.

  • ‘nri_uniform’: non rotation-invariant uniform patterns variant which is only gray scale invariant [R114].

  • ‘var’: rotation invariant variance measures of the contrast of local image texture which is rotation but not gray scale invariant.

Returns :

output : (N, M) array

LBP image.

References

[R113]Multiresolution Gray-Scale and Rotation Invariant Texture Classification with Local Binary Patterns. Timo Ojala, Matti Pietikainen, Topi Maenpaa. http://www.rafbis.it/biplab15/images/stories/docenti/Danielriccio/Articoliriferimento/LBP.pdf, 2002.
[R114]Face recognition with local binary patterns. Timo Ahonen, Abdenour Hadid, Matti Pietikainen, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.214.6851, 2004.
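
The ‘default’ pattern for P=8, R=1 can be sketched with the eight integer-offset neighbours (the library samples interpolated points on a circle, and the bit ordering here is an arbitrary choice of ours):

```python
import numpy as np

def lbp8(image):
    # One bit per neighbour: set when neighbour >= centre.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    padded = np.pad(image, 1, mode='edge')
    out = np.zeros(image.shape, dtype=np.uint8)
    for bit, (dr, dc) in enumerate(offsets):
        neighbour = padded[1 + dr:1 + dr + image.shape[0],
                           1 + dc:1 + dc + image.shape[1]]
        out |= (neighbour >= image).astype(np.uint8) << bit
    return out

img = np.zeros((5, 5))
img[2, 2] = 1.0
codes = lbp8(img)
print(codes[2, 2])  # 0: no neighbour reaches the bright centre
print(codes[0, 0])  # 255: flat region, every neighbour >= centre
```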

match_template

skimage.feature.match_template(image, template, pad_input=False)

Match a template to an image using normalized correlation.

The output is an array with values between -1.0 and 1.0; values close to 1.0 indicate a strong match between the template and the image at that position.

Parameters :

image : array_like

Image to process.

template : array_like

Template to locate.

pad_input : bool

If True, pad image with image mean so that output is the same size as the image, and output values correspond to the template center. Otherwise, the output is an array with shape (M - m + 1, N - n + 1) for an (M, N) image and an (m, n) template, and matches correspond to origin (top-left corner) of the template.

Returns :

output : ndarray

Correlation results between -1.0 and 1.0. For an (M, N) image and an (m, n) template, the output is (M - m + 1, N - n + 1) when pad_input = False and (M, N) when pad_input = True.

Examples

>>> template = np.zeros((3, 3))
>>> template[1, 1] = 1
>>> print(template)
[[ 0.  0.  0.]
 [ 0.  1.  0.]
 [ 0.  0.  0.]]
>>> image = np.zeros((6, 6))
>>> image[1, 1] = 1
>>> image[4, 4] = -1
>>> print(image)
[[ 0.  0.  0.  0.  0.  0.]
 [ 0.  1.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0. -1.  0.]
 [ 0.  0.  0.  0.  0.  0.]]
>>> result = match_template(image, template)
>>> print(np.round(result, 3))
[[ 1.    -0.125  0.     0.   ]
 [-0.125 -0.125  0.     0.   ]
 [ 0.     0.     0.125  0.125]
 [ 0.     0.     0.125 -1.   ]]
>>> result = match_template(image, template, pad_input=True)
>>> print(np.round(result, 3))
[[-0.125 -0.125 -0.125  0.     0.     0.   ]
 [-0.125  1.    -0.125  0.     0.     0.   ]
 [-0.125 -0.125 -0.125  0.     0.     0.   ]
 [ 0.     0.     0.     0.125  0.125  0.125]
 [ 0.     0.     0.     0.125 -1.     0.125]
 [ 0.     0.     0.     0.125  0.125  0.125]]
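
The 1.0 and -1.0 in the first result can be reproduced for single placements: the per-position quantity is zero-mean normalized cross-correlation between the template and an image patch (sketch below; the helper name is ours).

```python
import numpy as np

def ncc(patch, template):
    # Zero-mean normalized cross-correlation for one placement.
    p = patch - patch.mean()
    t = template - template.mean()
    return float((p * t).sum() / np.sqrt((p ** 2).sum() * (t ** 2).sum()))

template = np.zeros((3, 3))
template[1, 1] = 1
image = np.zeros((6, 6))
image[1, 1] = 1
image[4, 4] = -1

print(round(ncc(image[0:3, 0:3], template), 3))   # 1.0: perfect match
print(round(ncc(image[3:6, 3:6], template), 3))   # -1.0: inverted match
```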

peak_local_max

skimage.feature.peak_local_max(image, min_distance=10, threshold_abs=0, threshold_rel=0.1, exclude_border=True, indices=True, num_peaks=inf, footprint=None, labels=None)

Find peaks in an image, and return them as coordinates or a boolean array.

Peaks are the local maxima in a region of 2 * min_distance + 1 (i.e. peaks are separated by at least min_distance).

NOTE: If peaks are flat (i.e. multiple adjacent pixels have identical intensities), the coordinates of all such pixels are returned.

Parameters :

image : ndarray of floats

Input image.

min_distance : int

Minimum number of pixels separating peaks in a region of 2 * min_distance + 1 (i.e. peaks are separated by at least min_distance). If exclude_border is True, this value also excludes a border min_distance from the image boundary. To find the maximum number of peaks, use min_distance=1.

threshold_abs : float

Minimum intensity of peaks.

threshold_rel : float

Minimum intensity of peaks calculated as max(image) * threshold_rel.

exclude_border : bool

If True, min_distance excludes peaks from the border of the image as well as from each other.

indices : bool

If True, the output will be an array representing peak coordinates. If False, the output will be a boolean array shaped as image.shape with peaks present at True elements.

num_peaks : int

Maximum number of peaks. When the number of peaks exceeds num_peaks, return num_peaks peaks based on highest peak intensity.

footprint : ndarray of bools, optional

If provided, footprint == 1 represents the local region within which to search for peaks at every point in image. Overrides min_distance, except for border exclusion if exclude_border=True.

labels : ndarray of ints, optional

If provided, each unique region labels == value represents a unique region to search for peaks. Zero is reserved for background.

Returns :

output : (N, 2) array or ndarray of bools

  • If indices = True : (row, column) coordinates of peaks.
  • If indices = False : Boolean array shaped like image, with peaks represented by True values.

Notes

The peak local maximum function returns the coordinates of local peaks (maxima) in an image. A maximum filter is used for finding local maxima, an operation that dilates the original image. After comparing the dilated and original images, peak_local_max returns the coordinates of the peaks where the dilated image equals the original image.

Examples

>>> im = np.zeros((7, 7))
>>> im[3, 4] = 1
>>> im[3, 2] = 1.5
>>> im
array([[ 0. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  1.5,  0. ,  1. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ],
       [ 0. ,  0. ,  0. ,  0. ,  0. ,  0. ,  0. ]])
>>> peak_local_max(im, min_distance=1)
array([[3, 2],
       [3, 4]])
>>> peak_local_max(im, min_distance=2)
array([[3, 2]])
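
The Notes above in miniature: dilate with a (2 * min_distance + 1)-sized maximum filter, then keep pixels where the dilated image equals the original. The `image > 0` condition below is a crude stand-in for the intensity thresholds, and the helper name is ours.

```python
import numpy as np

def local_peaks(image, min_distance=1):
    size = 2 * min_distance + 1
    pad = np.pad(image, min_distance, mode='constant',
                 constant_values=image.min())
    dilated = np.zeros_like(image)
    for r in range(image.shape[0]):          # brute-force maximum filter
        for c in range(image.shape[1]):
            dilated[r, c] = pad[r:r + size, c:c + size].max()
    # A peak is a pixel that survives dilation (and is above background).
    return np.argwhere((image == dilated) & (image > 0))

im = np.zeros((7, 7))
im[3, 4] = 1
im[3, 2] = 1.5
print(local_peaks(im, min_distance=1).tolist())  # [[3, 2], [3, 4]]
print(local_peaks(im, min_distance=2).tolist())  # [[3, 2]]
```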