restoration
Image restoration module.

wiener : Wiener-Hunt deconvolution.
unsupervised_wiener : Unsupervised Wiener-Hunt deconvolution.
richardson_lucy : Richardson-Lucy deconvolution.
unwrap_phase : Recover the original from a wrapped phase image.
denoise_tv_bregman : Perform total-variation denoising using split-Bregman optimization.
denoise_tv_chambolle : Perform total-variation denoising on n-dimensional images.
denoise_bilateral : Denoise image using bilateral filter.
denoise_wavelet : Perform wavelet denoising on an image.
denoise_nl_means : Perform non-local means denoising on 2D or 3D grayscale images, and 2D RGB images.
estimate_sigma : Robust wavelet-based estimator of the (Gaussian) noise standard deviation.
inpaint_biharmonic : Inpaint masked points in image with biharmonic equations.
cycle_spin : Cycle spinning (repeatedly apply func to shifted versions of x).
skimage.restoration.wiener(image, psf, balance, reg=None, is_real=True, clip=True)
Wiener-Hunt deconvolution.
Return the deconvolution with a Wiener-Hunt approach (i.e. with Fourier diagonalisation).
image : Input degraded image.
psf : Point Spread Function. This is assumed to be the impulse response (input image space) if the data type is real, or the transfer function (Fourier space) if the data type is complex. There are no constraints on the shape of the impulse response. The transfer function must be of shape (M, N) if is_real is True, (M, N // 2 + 1) otherwise (see np.fft.rfftn).
balance : The regularisation parameter value that tunes the balance between the data adequacy that improves frequency restoration and the prior adequacy that reduces frequency restoration (to avoid noise artifacts).
reg : The regularisation operator. The Laplacian by default. It can be an impulse response or a transfer function, as for the psf. Shape constraint is the same as for the psf parameter.
is_real : True by default. Specify whether psf and reg are provided under the Hermitian hypothesis, that is, only half of the frequency plane is provided (due to the redundancy of the Fourier transform of a real signal). It applies only if psf and/or reg are provided as transfer functions. For the Hermitian property see the uft module or np.fft.rfftn.
clip : True by default. If True, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility.
The deconvolved image.
Notes
This function applies the Wiener filter to a noisy image degraded by an impulse response (or PSF). If the data model is \(y = Hx + n\), where \(n\) is noise, \(H\) the PSF and \(x\) the unknown original image, the Wiener filter is \(\hat{x} = F^\dagger \left(|\Lambda_H|^2 + \lambda |\Lambda_D|^2\right)^{-1} \Lambda_H^\dagger F y\), where \(F\) and \(F^\dagger\) are the Fourier and inverse Fourier transforms respectively, \(\Lambda_H\) the transfer function (or the Fourier transform of the PSF, see [Hunt] below) and \(\Lambda_D\) the filter to penalize the restored image frequencies (Laplacian by default, that is, penalization of high frequency). The parameter \(\lambda\) tunes the balance between the data (that tends to increase high frequency, even those coming from noise) and the regularization.
These methods are then specific to a prior model. Consequently, the application or the true image nature must correspond to the prior model. By default, the prior model (Laplacian) introduces image smoothness or pixel correlation. It can also be interpreted as high-frequency penalization to compensate for the instability of the solution with respect to the data (sometimes called noise amplification or the "explosive" solution).
Finally, the use of Fourier space implies a circulant property of \(H\), see [Hunt].
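As a hedged illustration of the formula above, the filter can be written in a few lines of NumPy. This is a sketch, not skimage's implementation: the discrete Laplacian stencil, the origin-centred PSF padding, and the helper name wiener_fourier are assumptions of the example.

```python
import numpy as np

def wiener_fourier(y, h, lam):
    # Wiener-Hunt deconvolution by Fourier diagonalisation (circulant H
    # assumed). y is the degraded image, h the PSF padded to y's shape and
    # centred at the origin, lam the regularisation weight.
    H = np.fft.fft2(h)
    # Transfer function of the discrete Laplacian regulariser (Lambda_D).
    d = np.zeros_like(y)
    d[0, 0] = 4.0
    d[0, 1] = d[1, 0] = d[0, -1] = d[-1, 0] = -1.0
    D = np.fft.fft2(d)
    Y = np.fft.fft2(y)
    # x_hat = F^dagger [conj(Lambda_H) / (|Lambda_H|^2 + lam |Lambda_D|^2)] F y
    X = np.conj(H) * Y / (np.abs(H) ** 2 + lam * np.abs(D) ** 2)
    return np.real(np.fft.ifft2(X))
```

Because everything happens in Fourier space, the whole filter is a single element-wise division, which is what makes the Wiener-Hunt approach fast.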
References
François Orieux, Jean-François Giovannelli, and Thomas Rodet, "Bayesian estimation of regularization and point spread function parameters for Wiener-Hunt deconvolution", J. Opt. Soc. Am. A 27, 1593-1607 (2010) http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-27-7-1593
B. R. Hunt, "A matrix theory proof of the discrete convolution theorem", IEEE Trans. on Audio and Electroacoustics, vol. AU-19, no. 4, pp. 285-288, Dec. 1971
Examples
>>> import numpy as np
>>> from skimage import color, data, restoration
>>> img = color.rgb2gray(data.astronaut())
>>> from scipy.signal import convolve2d
>>> psf = np.ones((5, 5)) / 25
>>> img = convolve2d(img, psf, 'same')
>>> img += 0.1 * img.std() * np.random.standard_normal(img.shape)
>>> deconvolved_img = restoration.wiener(img, psf, 1100)
skimage.restoration.unsupervised_wiener(image, psf, reg=None, user_params=None, is_real=True, clip=True)
Unsupervised Wiener-Hunt deconvolution.
Return the deconvolution with a Wiener-Hunt approach, where the hyperparameters are automatically estimated. The algorithm is a stochastic iterative process (Gibbs sampler) described in the reference below. See also the wiener function.
image : The input degraded image.
psf : The impulse response (input image's space) or the transfer function (Fourier space). Both are accepted. The transfer function is automatically recognized as being complex (np.iscomplexobj(psf)).
reg : The regularisation operator. The Laplacian by default. It can be an impulse response or a transfer function, as for the psf.
user_params : Dictionary of parameters for the Gibbs sampler. See below.
clip : True by default. If True, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility.
x_postmean : The deconvolved image (the posterior mean).
chains : The keys noise and prior contain the chain list of noise and prior precision respectively.
threshold : The stopping criterion: the norm of the difference between two successive approximated solutions (empirical mean of object samples, see Notes section). 1e-4 by default.
burnin : The number of samples to ignore before starting computation of the mean. 15 by default.
min_iter : The minimum number of iterations. 30 by default.
max_iter : The maximum number of iterations if threshold is not satisfied. 200 by default.
callback : A user-provided callable to which is passed, if the function exists, the current image sample, for whatever purpose. The user can store the sample, or compute other moments than the mean. It has no influence on the algorithm execution and is only for inspection.
Notes
The estimated image is designed as the posterior mean of a probability law (from a Bayesian analysis). The mean is defined as a sum over all the possible images weighted by their respective probability. Given the size of the problem, the exact sum is not tractable. This algorithm uses MCMC to draw images under the posterior law. The practical idea is to only draw highly probable images, since they have the biggest contribution to the mean. Conversely, the less probable images are drawn less often, since their contribution is low. Finally, the empirical mean of these samples gives an estimate of the mean, which becomes exact with an infinite sample set.
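The empirical-mean idea can be illustrated with a toy sketch. This is an assumed, simplified stand-in: scalar samples drawn directly from a known Gaussian posterior, not the actual per-image Gibbs sweep; the names and numbers below are illustrative only.

```python
import numpy as np

rng = np.random.RandomState(42)
true_posterior_mean = 3.0
# Draw "samples from the posterior"; in the real algorithm each sample is a
# full image produced by one Gibbs iteration.
samples = true_posterior_mean + 0.5 * rng.randn(5000)
burnin = 15                       # early samples ignored, as in the sampler
estimate = samples[burnin:].mean()
```

The more samples are averaged, the closer the empirical mean gets to the posterior mean, which is exactly the quantity the algorithm returns.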
References
François Orieux, Jean-François Giovannelli, and Thomas Rodet, "Bayesian estimation of regularization and point spread function parameters for Wiener-Hunt deconvolution", J. Opt. Soc. Am. A 27, 1593-1607 (2010) http://www.opticsinfobase.org/josaa/abstract.cfm?URI=josaa-27-7-1593
Examples
>>> import numpy as np
>>> from skimage import color, data, restoration
>>> img = color.rgb2gray(data.astronaut())
>>> from scipy.signal import convolve2d
>>> psf = np.ones((5, 5)) / 25
>>> img = convolve2d(img, psf, 'same')
>>> img += 0.1 * img.std() * np.random.standard_normal(img.shape)
>>> deconvolved_img, chains = restoration.unsupervised_wiener(img, psf)
skimage.restoration.richardson_lucy(image, psf, iterations=50, clip=True)
Richardson-Lucy deconvolution.
image : Input degraded image (can be N dimensional).
psf : The point spread function.
iterations : Number of iterations. This parameter plays the role of regularisation.
clip : True by default. If True, pixel values of the result above 1 or under -1 are thresholded for skimage pipeline compatibility.
The deconvolved image.
Examples
>>> import numpy as np
>>> from skimage import color, data, restoration
>>> camera = color.rgb2gray(data.camera())
>>> from scipy.signal import convolve2d
>>> psf = np.ones((5, 5)) / 25
>>> camera = convolve2d(camera, psf, 'same')
>>> camera += 0.1 * camera.std() * np.random.standard_normal(camera.shape)
>>> deconvolved = restoration.richardson_lucy(camera, psf, 5)
skimage.restoration.unwrap_phase(image, wrap_around=False, seed=None)
Recover the original from a wrapped phase image.
From an image wrapped to lie in the interval [-pi, pi), recover the original, unwrapped image.
image : The values should be in the range [-pi, pi). If a masked array is provided, the masked entries will not be changed, and their values will not be used to guide the unwrapping of neighboring, unmasked values. Masked 1D arrays are not allowed, and will raise a ValueError.
wrap_around : When an element of the sequence is True, the unwrapping process will regard the edges along the corresponding axis of the image to be connected and use this connectivity to guide the phase unwrapping process. If only a single boolean is given, it will apply to all axes. Wrap around is not supported for 1D arrays.
seed : Unwrapping 2D or 3D images uses random initialization. This sets the seed of the PRNG to achieve deterministic behavior.
Unwrapped image of the same shape as the input. If the input image was a masked array, the mask will be preserved.
ValueError : If called with a masked 1D array, or called with a 1D array and wrap_around=True.
References
Miguel Arevallilo Herraez, David R. Burton, Michael J. Lalor, and Munther A. Gdeisat, "Fast two-dimensional phase-unwrapping algorithm based on sorting by reliability following a noncontinuous path", Journal of Applied Optics, Vol. 41, No. 35 (2002) 7437.
Abdul-Rahman, H., Gdeisat, M., Burton, D., & Lalor, M., "Fast three-dimensional phase-unwrapping algorithm based on sorting by reliability following a non-continuous path", in W. Osten, C. Gorecki, & E. L. Novak (Eds.), Optical Metrology (2005) 32-40, International Society for Optics and Photonics.
Examples
>>> import numpy as np
>>> from skimage.restoration import unwrap_phase
>>> c0, c1 = np.ogrid[-1:1:128j, -1:1:128j]
>>> image = 12 * np.pi * np.exp(-(c0**2 + c1**2))
>>> image_wrapped = np.angle(np.exp(1j * image))
>>> image_unwrapped = unwrap_phase(image_wrapped)
>>> np.std(image_unwrapped - image) < 1e-6  # A constant offset is normal
True
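For intuition, the same recovery can be done in 1-D with NumPy's own np.unwrap (unwrap_phase handles the 2-D/3-D cases above; the phase ramp below is an assumed toy signal):

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 200)   # true phase, a smooth ramp past 2*pi
wrapped = np.angle(np.exp(1j * t))   # values folded back into [-pi, pi)
unwrapped = np.unwrap(wrapped)       # 2*pi jumps detected and removed
```

As long as neighboring samples differ by less than pi, the jumps are unambiguous and the original phase is recovered exactly (up to a constant offset of 2*pi multiples).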
skimage.restoration.denoise_tv_bregman(image, weight, max_iter=100, eps=0.001, isotropic=True)
Perform total-variation denoising using split-Bregman optimization.
Total-variation denoising (also known as total-variation regularization) tries to find an image with less total variation under the constraint of being similar to the input image, which is controlled by the regularization parameter ([1], [2], [3], [4]).
image : Input data to be denoised (converted using img_as_float).
weight : Denoising weight. The smaller the weight, the more denoising (at the expense of less similarity to the input). The regularization parameter lambda is chosen as 2 * weight.
eps : Relative difference of the value of the cost function that determines the stop criterion. The algorithm stops when: SUM((u(n) - u(n-1))**2) < eps
max_iter : Maximal number of iterations used for the optimization.
isotropic : Switch between isotropic and anisotropic TV denoising.
Denoised image.
References
http://en.wikipedia.org/wiki/Total_variation_denoising
Tom Goldstein and Stanley Osher, "The Split Bregman Method For L1 Regularized Problems", ftp://ftp.math.ucla.edu/pub/camreport/cam08-29.pdf
Pascal Getreuer, "Rudin-Osher-Fatemi Total Variation Denoising using Split Bregman" in Image Processing On Line on 2012-05-19, http://www.ipol.im/pub/art/2012/g-tvd/article_lr.pdf
http://www.math.ucsb.edu/~cgarcia/UGProjects/BregmanAlgorithms_JacquelineBush.pdf
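A minimal sketch of the two TV variants the isotropic flag switches between (forward differences; an assumed toy definition of the norm itself, not the split-Bregman solver):

```python
import numpy as np

def tv_norms(u):
    # Forward-difference gradients of a 2-D image.
    gx = np.diff(u, axis=1)
    gy = np.diff(u, axis=0)
    # Isotropic TV sums the gradient magnitude; anisotropic TV sums the
    # absolute value of each gradient component separately.
    iso = np.sqrt(gx[:-1, :] ** 2 + gy[:, :-1] ** 2).sum()
    aniso = np.abs(gx).sum() + np.abs(gy).sum()
    return iso, aniso
```

A piecewise-constant image has low total variation; adding noise raises both norms, and the denoiser searches for a nearby image that brings them back down.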
skimage.restoration.denoise_tv_chambolle(image, weight=0.1, eps=0.0002, n_iter_max=200, multichannel=False)
Perform total-variation denoising on n-dimensional images.
image : Input data to be denoised. image can be of any numeric type, but it is cast into an ndarray of floats for the computation of the denoised image.
weight : Denoising weight. The greater weight, the more denoising (at the expense of fidelity to input).
eps : Relative difference of the value of the cost function that determines the stop criterion. The algorithm stops when: (E_(n-1) - E_n) < eps * E_0
n_iter_max : Maximal number of iterations used for the optimization.
multichannel : Apply total-variation denoising separately for each channel. This option should be True for color images, otherwise the denoising is also applied in the channels dimension.
Denoised image.
Notes
Make sure to set the multichannel parameter appropriately for color images.
The principle of total variation denoising is explained in http://en.wikipedia.org/wiki/Total_variation_denoising
The principle of total variation denoising is to minimize the total variation of the image, which can be roughly described as the integral of the norm of the image gradient. Total variation denoising tends to produce "cartoon-like" images, that is, piecewise-constant images.
This code is an implementation of the algorithm of Rudin, Fatemi and Osher that was proposed by Chambolle in [1].
References
A. Chambolle, An algorithm for total variation minimization and applications, Journal of Mathematical Imaging and Vision, Springer, 2004, 20, 89-97.
Examples
2D example on astronaut image:
>>> import numpy as np
>>> from skimage import color, data
>>> from skimage.restoration import denoise_tv_chambolle
>>> img = color.rgb2gray(data.astronaut())[:50, :50]
>>> img += 0.5 * img.std() * np.random.randn(*img.shape)
>>> denoised_img = denoise_tv_chambolle(img, weight=60)
3D example on synthetic data:
>>> x, y, z = np.ogrid[0:20, 0:20, 0:20]
>>> mask = (x - 22)**2 + (y - 20)**2 + (z - 17)**2 < 8**2
>>> mask = mask.astype(float)
>>> mask += 0.2*np.random.randn(*mask.shape)
>>> res = denoise_tv_chambolle(mask, weight=100)
skimage.restoration.denoise_bilateral(image, win_size=None, sigma_color=None, sigma_spatial=1, bins=10000, mode='constant', cval=0, multichannel=None)
Denoise image using bilateral filter.
This is an edge-preserving, denoising filter. It averages pixels based on their spatial closeness and radiometric similarity [1].
Spatial closeness is measured by the Gaussian function of the Euclidean distance between two pixels and a certain standard deviation (sigma_spatial).
Radiometric similarity is measured by the Gaussian function of the Euclidean distance between two color values and a certain standard deviation (sigma_color).
image : Input image, 2D grayscale or RGB.
win_size : Window size for filtering. If win_size is not specified, it is calculated as max(5, 2 * ceil(3 * sigma_spatial) + 1).
sigma_color : Standard deviation for grayvalue/color distance (radiometric similarity). A larger value results in averaging of pixels with larger radiometric differences. Note that the image will be converted using the img_as_float function and thus the standard deviation is in respect to the range [0, 1]. If the value is None, the standard deviation of the image will be used.
sigma_spatial : Standard deviation for range distance. A larger value results in averaging of pixels with larger spatial differences.
bins : Number of discrete values for Gaussian weights of color filtering. A larger value results in improved accuracy.
mode : How to handle values outside the image borders. See numpy.pad for detail.
cval : Used in conjunction with mode 'constant', the value outside the image boundaries.
multichannel : Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension.
Denoised image.
References
C. Tomasi and R. Manduchi, "Bilateral Filtering for Gray and Color Images", Proc. IEEE International Conference on Computer Vision, 1998, pp. 839-846. DOI: 10.1109/ICCV.1998.710815
Examples
>>> import numpy as np
>>> from skimage import data, img_as_float
>>> from skimage.restoration import denoise_bilateral
>>> astro = img_as_float(data.astronaut())
>>> astro = astro[220:300, 220:320]
>>> noisy = astro + 0.6 * astro.std() * np.random.random(astro.shape)
>>> noisy = np.clip(noisy, 0, 1)
>>> denoised = denoise_bilateral(noisy, sigma_color=0.05, sigma_spatial=15)
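The two Gaussian factors described above can be sketched for a single pixel pair. This is a toy weight computation under assumed scalar intensities; the helper name is illustrative, not part of skimage's (vectorized, histogram-binned) implementation:

```python
import numpy as np

def bilateral_weight(p, q, value_p, value_q, sigma_spatial, sigma_color):
    # Spatial closeness: Gaussian of the Euclidean distance between the
    # two pixel coordinates.
    d2 = np.sum((np.asarray(p, float) - np.asarray(q, float)) ** 2)
    spatial = np.exp(-d2 / (2 * sigma_spatial ** 2))
    # Radiometric similarity: Gaussian of the intensity difference.
    radiometric = np.exp(-(value_p - value_q) ** 2 / (2 * sigma_color ** 2))
    return spatial * radiometric
```

Nearby pixels with similar values get weights near 1, while pixels across an edge (large intensity difference) are nearly ignored, which is what makes the filter edge-preserving.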
skimage.restoration.denoise_wavelet(image, sigma=None, wavelet='db1', mode='soft', wavelet_levels=None, multichannel=False, convert2ycbcr=False, method='BayesShrink')
Perform wavelet denoising on an image.
image : Input data to be denoised. image can be of any numeric type, but it is cast into an ndarray of floats for the computation of the denoised image.
sigma : The noise standard deviation used when computing the wavelet detail coefficient threshold(s). When None (default), the noise standard deviation is estimated via the method in [2].
wavelet : The type of wavelet to perform, which can be any of the options pywt.wavelist outputs. The default is 'db1'. For example, wavelet can be any of {'db2', 'haar', 'sym9'} and many more.
mode : An optional argument to choose the type of denoising performed. It is noted that choosing soft thresholding, given additive noise, finds the best approximation of the original image.
wavelet_levels : The number of wavelet decomposition levels to use. The default is three less than the maximum number of possible decomposition levels.
multichannel : Apply wavelet denoising separately for each channel (where channels correspond to the final axis of the array).
convert2ycbcr : If True and multichannel is True, do the wavelet denoising in the YCbCr colorspace instead of the RGB color space. This typically results in better performance for RGB images.
method : Thresholding method to be used. The currently supported methods are "BayesShrink" [1] and "VisuShrink" [2]. Defaults to "BayesShrink".
Denoised image.
Notes
The wavelet domain is a sparse representation of the image, and can be thought of similarly to the frequency domain of the Fourier transform. Sparse representations have most values zero or nearzero and truly random noise is (usually) represented by many small values in the wavelet domain. Setting all values below some threshold to 0 reduces the noise in the image, but larger thresholds also decrease the detail present in the image.
If the input is 3D, this function performs wavelet denoising on each color plane separately. The output image is clipped to either [-1, 1] or [0, 1] depending on the input image range.
When YCbCr conversion is done, every color channel is scaled between 0 and 1, and sigma values are applied to these scaled color channels.
Many wavelet coefficient thresholding approaches have been proposed. By default, denoise_wavelet applies BayesShrink, which is an adaptive thresholding method that computes separate thresholds for each wavelet subband as described in [1].
If method == "VisuShrink", a single "universal threshold" is applied to all wavelet detail coefficients as described in [2]. This threshold is designed to remove all Gaussian noise at a given sigma with high probability, but tends to produce images that appear overly smooth.
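For concreteness, the VisuShrink universal threshold and the soft-thresholding rule can be sketched as follows (toy helpers written for this illustration; n is the number of pixels/coefficients):

```python
import numpy as np

def universal_threshold(sigma, n):
    # VisuShrink's universal threshold from [2]: sigma * sqrt(2 * ln(n)).
    # Large enough to remove Gaussian noise of level sigma with high
    # probability, at the cost of over-smoothing.
    return sigma * np.sqrt(2 * np.log(n))

def soft_threshold(coeffs, t):
    # Soft thresholding: shrink detail coefficients toward zero; those with
    # magnitude below t become exactly 0.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

BayesShrink follows the same shrinkage idea but picks a separate, usually smaller, threshold per subband, which is why it preserves more detail.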
References
Chang, S. Grace, Bin Yu, and Martin Vetterli. "Adaptive wavelet thresholding for image denoising and compression." Image Processing, IEEE Transactions on 9.9 (2000): 1532-1546. DOI: 10.1109/83.862633
D. L. Donoho and I. M. Johnstone. "Ideal spatial adaptation by wavelet shrinkage." Biometrika 81.3 (1994): 425-455. DOI: 10.1093/biomet/81.3.425
Examples
>>> import numpy as np
>>> from skimage import color, data, img_as_float
>>> from skimage.restoration import denoise_wavelet
>>> img = img_as_float(data.astronaut())
>>> img = color.rgb2gray(img)
>>> img += 0.1 * np.random.randn(*img.shape)
>>> img = np.clip(img, 0, 1)
>>> denoised_img = denoise_wavelet(img, sigma=0.1)
skimage.restoration.denoise_nl_means(image, patch_size=7, patch_distance=11, h=0.1, multichannel=None, fast_mode=True, sigma=0.0)
Perform non-local means denoising on 2D or 3D grayscale images, and 2D RGB images.
image : Input image to be denoised, which can be 2D or 3D, and grayscale or RGB (for 2D images only, see multichannel parameter).
patch_size : Size of patches used for denoising.
patch_distance : Maximal distance in pixels where to search patches used for denoising.
h : Cut-off distance (in gray levels). The higher h, the more permissive one is in accepting patches. A higher h results in a smoother image, at the expense of blurring features. For a Gaussian noise of standard deviation sigma, a rule of thumb is to choose the value of h to be sigma or slightly less.
multichannel : Whether the last axis of the image is to be interpreted as multiple channels or another spatial dimension. Set to False for 3D images.
fast_mode : If True (default value), a fast version of the non-local means algorithm is used. If False, the original version of non-local means is used. See the Notes section for more details about the algorithms.
sigma : The standard deviation of the (Gaussian) noise. If provided, a more robust computation of patch weights is computed that takes the expected noise variance into account (see Notes below).
Denoised image, of same shape as image.
Notes
The nonlocal means algorithm is well suited for denoising images with specific textures. The principle of the algorithm is to average the value of a given pixel with values of other pixels in a limited neighbourhood, provided that the patches centered on the other pixels are similar enough to the patch centered on the pixel of interest.
In the original version of the algorithm [1], corresponding to fast=False, the computational complexity is:
image.size * patch_size ** image.ndim * patch_distance ** image.ndim
Hence, changing the size of patches or their maximal distance has a strong effect on computing times, especially for 3D images.
However, the default behavior corresponds to fast_mode=True, for which another version of non-local means [2] is used, corresponding to a complexity of:
image.size * patch_distance ** image.ndim
The computing time depends only weakly on the patch size, thanks to the computation of the integral of patch distances for a given shift, which reduces the number of operations [1]. Therefore, this algorithm executes faster than the classic algorithm (fast_mode=False), at the expense of using twice as much memory. This implementation has been proven to be more efficient compared to other alternatives, see e.g. [3].
Compared to the classic algorithm, all pixels of a patch contribute to the distance to another patch with the same weight, no matter their distance to the center of the patch. This coarser computation of the distance can result in a slightly poorer denoising performance. Moreover, for small images (images with a linear size that is only a few times the patch size), the classic algorithm can be faster due to boundary effects.
The image is padded using the reflect mode of skimage.util.pad before denoising.
If the noise standard deviation, sigma, is provided a more robust computation of patch weights is used. Subtracting the known noise variance from the computed patch distances improves the estimates of patch similarity, giving a moderate improvement to denoising performance [4]. It was also mentioned as an option for the fast variant of the algorithm in [3].
When sigma is provided, a smaller h should typically be used to avoid over-smoothing. The optimal value for h depends on the image content and noise level, but a reasonable starting point is h = 0.8 * sigma when fast_mode is True, or h = 0.6 * sigma when fast_mode is False.
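Both points can be sketched in a few lines, under the assumption that sigma is known; the helper name and the toy patches below are illustrative, not part of the API:

```python
import numpy as np

sigma = 0.08                  # assumed known noise standard deviation
h_fast = 0.8 * sigma          # suggested starting point when fast_mode=True
h_classic = 0.6 * sigma       # suggested starting point when fast_mode=False

def patch_distance(p1, p2, sigma):
    # Mean squared patch distance with the expected noise contribution
    # removed: two noisy copies of the same patch differ by about
    # 2 * sigma**2 per pixel on average, so subtracting it makes truly
    # similar patches score near zero.
    d2 = np.mean((p1 - p2) ** 2)
    return max(d2 - 2 * sigma ** 2, 0.0)
```

Subtracting the noise variance is what makes the sigma-aware weighting more robust: similar patches are no longer penalized for differences that are pure noise.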
References
A. Buades, B. Coll, & J-M. Morel. A non-local algorithm for image denoising. In CVPR 2005, Vol. 2, pp. 60-65, IEEE. DOI: 10.1109/CVPR.2005.38
J. Darbon, A. Cunha, T. F. Chan, S. Osher, and G. J. Jensen, Fast nonlocal filtering applied to electron cryomicroscopy, in 5th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2008, pp. 1331-1334. DOI: 10.1109/ISBI.2008.4541250
Jacques Froment. Parameter-Free Fast Pixelwise Non-Local Means Denoising. Image Processing On Line, 2014, vol. 4, pp. 300-326. DOI: 10.5201/ipol.2014.120
A. Buades, B. Coll, & J-M. Morel. Non-Local Means Denoising. Image Processing On Line, 2011, vol. 1, pp. 208-212. DOI: 10.5201/ipol.2011.bcm_nlm
Examples
>>> import numpy as np
>>> from skimage.restoration import denoise_nl_means
>>> a = np.zeros((40, 40))
>>> a[10:-10, 10:-10] = 1.
>>> a += 0.3 * np.random.randn(*a.shape)
>>> denoised_a = denoise_nl_means(a, 7, 5, 0.1)
skimage.restoration.estimate_sigma(image, average_sigmas=False, multichannel=False)
Robust wavelet-based estimator of the (Gaussian) noise standard deviation.
image : Image for which to estimate the noise standard deviation.
average_sigmas : If True, average the channel estimates of sigma. Otherwise return a list of sigmas corresponding to each channel.
multichannel : Estimate sigma separately for each channel.
Estimated noise standard deviation(s). If multichannel is True and average_sigmas is False, a separate noise estimate for each channel is returned. Otherwise, the average of the individual channel estimates is returned.
Notes
This function assumes the noise follows a Gaussian distribution. The estimation algorithm is based on the median absolute deviation of the wavelet detail coefficients as described in section 4.2 of [1].
References
D. L. Donoho and I. M. Johnstone. "Ideal spatial adaptation by wavelet shrinkage." Biometrika 81.3 (1994): 425-455. DOI: 10.1093/biomet/81.3.425
Examples
>>> import numpy as np
>>> import skimage.data
>>> from skimage import img_as_float
>>> from skimage.restoration import estimate_sigma
>>> img = img_as_float(skimage.data.camera())
>>> sigma = 0.1
>>> img = img + sigma * np.random.standard_normal(img.shape)
>>> sigma_hat = estimate_sigma(img, multichannel=False)
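The median-absolute-deviation rule the estimator is based on can be written out directly. In this sketch a hand-rolled one-level Haar diagonal detail band stands in for the wavelet transform (an assumption of the example; the function itself uses PyWavelets):

```python
import numpy as np

def mad_sigma(img):
    # One-level Haar diagonal detail computed on 2x2 blocks; for i.i.d.
    # Gaussian noise these coefficients keep the noise's standard deviation.
    d = (img[0::2, 0::2] - img[0::2, 1::2]
         - img[1::2, 0::2] + img[1::2, 1::2]) / 2.0
    # Donoho-Johnstone: sigma_hat = median(|detail|) / 0.6745. The median
    # makes the estimate robust to the sparse, large image-structure
    # coefficients.
    return np.median(np.abs(d)) / 0.6745
```

The constant 0.6745 is the median of |N(0, 1)|, so for pure Gaussian noise the estimate converges to the true sigma.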
skimage.restoration.inpaint_biharmonic(image, mask, multichannel=False)
Inpaint masked points in image with biharmonic equations.
image : Input image.
mask : Array of pixels to be inpainted. Has to be the same shape as one of the 'image' channels. Unknown pixels have to be represented with 1, known pixels with 0.
multichannel : If True, the last image dimension is considered as a color channel, otherwise as spatial.
Input image with masked pixels inpainted.
References
N. S. Hoang, S. B. Damelin, "On surface completion and image inpainting by biharmonic functions: numerical aspects", https://arxiv.org/abs/1707.06567
C. K. Chui and H. N. Mhaskar, MRA Contextual-Recovery Extension of Smooth Functions on Manifolds, Appl. and Comp. Harmonic Anal., 28 (2010), 104-113. DOI: 10.1016/j.acha.2009.04.004
Examples
>>> import numpy as np
>>> from skimage.restoration import inpaint_biharmonic
>>> img = np.tile(np.square(np.linspace(0, 1, 5)), (5, 1))
>>> mask = np.zeros_like(img)
>>> mask[2, 2:] = 1
>>> mask[1, 3:] = 1
>>> mask[0, 4:] = 1
>>> out = inpaint_biharmonic(img, mask)
skimage.restoration.cycle_spin(x, func, max_shifts, shift_steps=1, num_workers=None, multichannel=False, func_kw={})
Cycle spinning (repeatedly apply func to shifted versions of x).
x : Data for input to func.
func : A function to apply to circularly shifted versions of x. Should take x as its first argument. Any additional arguments can be supplied via func_kw.
max_shifts : If an integer, shifts in range(0, max_shifts+1) will be used along each axis of x. If a tuple, range(0, max_shifts[i]+1) will be used along axis i.
shift_steps : The step size for the shifts applied along axis i is range(0, max_shifts[i]+1, shift_steps[i]). If an integer is provided, the same step size is used for all axes.
num_workers : The number of parallel threads to use during cycle spinning. If set to None, the full set of available cores is used.
multichannel : Whether to treat the final axis as channels (no cycle shifts are performed over the channels axis).
func_kw : Additional keyword arguments to supply to func.
The output of func(x, **func_kw) averaged over all combinations of the specified axis shifts.
Notes
Cycle spinning was proposed as a way to approach shift-invariance by performing several circular shifts of a shift-variant transform [1].
For an n-level discrete wavelet transform, one may wish to perform all shifts up to max_shifts = 2**n - 1. In practice, much of the benefit can often be realized with only a small number of shifts per axis.
For transforms such as the blockwise discrete cosine transform, one may wish to evaluate shifts up to the block size used by the transform.
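The shift-average-unshift loop that cycle_spin performs can be written out by hand. This sketch assumes 2-D input and a single integer max_shift; the real function also handles tuples, shift steps, channels, and threading:

```python
import numpy as np

def cycle_spin_manual(x, func, max_shift):
    # Apply func to every circular shift of x, undo the shift on the
    # result, and average the outputs.
    acc = np.zeros_like(x, dtype=float)
    count = 0
    for s0 in range(max_shift + 1):
        for s1 in range(max_shift + 1):
            shifted = np.roll(x, (s0, s1), axis=(0, 1))
            out = func(shifted)
            acc += np.roll(out, (-s0, -s1), axis=(0, 1))
            count += 1
    return acc / count
```

With a shift-invariant func the average changes nothing; for a shift-variant transform such as a decimated wavelet denoiser, the averaging suppresses the artifacts that depend on where the sampling grid falls.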
References
R. R. Coifman and D. L. Donoho. "Translation-Invariant De-Noising". Wavelets and Statistics, Lecture Notes in Statistics, vol. 103. Springer, New York, 1995, pp. 125-150. DOI: 10.1007/978-1-4612-2544-7_9
Examples
>>> import numpy as np
>>> import skimage.data
>>> from skimage import img_as_float
>>> from skimage.restoration import denoise_wavelet, cycle_spin
>>> img = img_as_float(skimage.data.camera())
>>> sigma = 0.1
>>> img = img + sigma * np.random.standard_normal(img.shape)
>>> denoised = cycle_spin(img, func=denoise_wavelet, max_shifts=3)