Measure fluorescence intensity at the nuclear envelope

This example reproduces a well-established workflow in bioimage data analysis: measuring the fluorescence intensity localized to the nuclear envelope in a time sequence of cell images (each with two channels and two spatial dimensions) that shows protein re-localization from the cytoplasm to the nuclear envelope. This biological application was first presented by Andrea Boni and collaborators in [1]; it was used in a textbook by Kota Miura [2] as well as in other works ([3], [4]). In other words, we port this workflow from ImageJ Macro to Python with scikit-image.

import matplotlib.pyplot as plt
import numpy as np
import plotly.io
import plotly.express as px
from scipy import ndimage as ndi

from skimage import filters, measure, morphology, segmentation
from skimage.data import protein_transport

We start with a single cell/nucleus to construct the workflow.

shape: (15, 2, 180, 183)

The dataset is a 2D image stack with 15 frames (time points) and 2 channels.

vmin, vmax = 0, image_sequence.max()

fig = px.imshow(
    image_sequence,
    facet_col=1,
    animation_frame=0,
    zmin=vmin,
    zmax=vmax,
    binary_string=True,
    labels={'animation_frame': 'time point', 'facet_col': 'channel'},
)
plotly.io.show(fig)

To begin with, let us consider the first channel of the first image (step a) in the figure below).

Segment the nucleus rim

Let us apply a Gaussian low-pass filter to this image in order to smooth it (step b)). Next, we segment the nuclei, finding the threshold between the background and foreground with Otsu’s method: we get a binary image (step c)). We then fill the holes in the objects (step c-1)).

Following the original workflow, let us remove objects which touch the image border (step c-2)). Here, we can see that part of another nucleus was touching the bottom right-hand corner.

dtype('bool')

We compute both the morphological dilation of this binary image (step d)) and its morphological erosion (step e)).

Finally, we subtract the eroded from the dilated to get the nucleus rim (step f)). This is equivalent to selecting the pixels which are in dilate, but not in erode:

Let us visualize these processing steps in a sequence of subplots.

fig, ax = plt.subplots(2, 4, figsize=(12, 6), sharey=True)

ax[0, 0].imshow(image_t_0_channel_0, cmap=plt.cm.gray)
ax[0, 0].set_title('a) Raw')

ax[0, 1].imshow(smooth, cmap=plt.cm.gray)
ax[0, 1].set_title('b) Blur')

ax[0, 2].imshow(thresh, cmap=plt.cm.gray)
ax[0, 2].set_title('c) Threshold')

ax[0, 3].imshow(fill, cmap=plt.cm.gray)
ax[0, 3].set_title('c-1) Fill in')

ax[1, 0].imshow(clear, cmap=plt.cm.gray)
ax[1, 0].set_title('c-2) Keep one nucleus')

ax[1, 1].imshow(dilate, cmap=plt.cm.gray)
ax[1, 1].set_title('d) Dilate')

ax[1, 2].imshow(erode, cmap=plt.cm.gray)
ax[1, 2].set_title('e) Erode')

ax[1, 3].imshow(mask, cmap=plt.cm.gray)
ax[1, 3].set_title('f) Nucleus Rim')

for a in ax.ravel():
    a.set_axis_off()

fig.tight_layout()
Figure: a) Raw, b) Blur, c) Threshold, c-1) Fill in, c-2) Keep one nucleus, d) Dilate, e) Erode, f) Nucleus Rim.

Apply the segmented rim as a mask

Now that we have segmented the nuclear membrane in the first channel, we use it as a mask to measure the intensity in the second channel.

Figure: Second channel (raw), Selection.

Measure the total intensity

The mean intensity is readily available as a region property in a labeled image.

props = measure.regionprops_table(
    mask.astype(np.uint8),
    intensity_image=image_t_0_channel_1,
    properties=('label', 'area', 'intensity_mean'),
)

We may want to check that the value for the total intensity

selection.sum()
np.uint64(28350)

can be recovered from:

props['area'] * props['intensity_mean']
array([28350.])

Process the entire image sequence

Instead of iterating the workflow for each time point, we process the multidimensional dataset directly (except for the thresholding step). Indeed, most scikit-image functions support nD images.

n_z = image_sequence.shape[0]  # number of frames

smooth_seq = filters.gaussian(image_sequence[:, 0, :, :], sigma=(0, 1.5, 1.5))
thresh_values = [filters.threshold_otsu(s) for s in smooth_seq]
thresh_seq = [smooth_seq[k, ...] > val for k, val in enumerate(thresh_values)]

Alternatively, we could compute thresh_values without a list comprehension, by reshaping smooth_seq to make it 2D (the first dimension still corresponds to time points, while the second and last dimension now contains all pixel values of a frame) and applying the thresholding function along its second axis:

thresh_values = np.apply_along_axis(filters.threshold_otsu,
                                    axis=1,
                                    arr=smooth_seq.reshape(n_z, -1))

We use the following flat structuring element for morphological computations (np.newaxis is used to prepend an axis of size 1 for time):
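This cell is missing from the excerpt; a minimal sketch, assuming the name footprint and SciPy's generate_binary_structure for the 2D cross-shaped (connectivity-1) element:

```python
import numpy as np
from scipy import ndimage as ndi

# 2D cross-shaped structuring element, with a leading singleton axis
# (time) so that it never connects pixels across frames
footprint = ndi.generate_binary_structure(2, 1)[np.newaxis, ...]
print(repr(footprint))
```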

array([[[False,  True, False],
        [ True,  True,  True],
        [False,  True, False]]])

This way, each frame is processed independently (pixels from consecutive frames are never spatial neighbors).

When clearing objects which touch the image border, we want to make sure that the bottom (first) and top (last) frames are not considered as borders. In this case, the only relevant border is the edge at the greatest (x, y) values. This can be seen in 3D by running the following code: