Label-free classification of neurons and glia in neural stem cell cultures using hyperspectral imaging microscopy combined with machine learning

HSI microscopy

Figure 1 shows a diagram of the HSI microscopy setup used in this study. The tunable bandpass filter and camera were connected to the left-side port of the microscope, with two adapters in front of and behind the tunable bandpass filter. In the adapter in front of the tunable bandpass filter (Fig. 1, Adapter (a)), a shortpass filter was installed to cut long-wavelength light.

Figure 1

HSI microscopy. The tunable bandpass filter and camera were connected to the port on the left side of the microscope with adapters (a,b). A shortpass filter was installed in the adapter (a), to cut long wavelength light (the cut-off wavelength was 750 nm).

The correction collar of the objective lens was set to a minus position in order to achieve the finest focus (see Supplementary Figs S1 and S2 for details), and the iris aperture diaphragm was set to its minimum to obtain the largest contrast differences between the cells and the external regions (with this setting, the numerical aperture of the illumination was 0.063). The lateral resolution of this optical setup was 0.820 μm at the 450-nm wavelength, 1.002 μm at the 550-nm wavelength and 1.184 μm at the 650-nm wavelength with the minimum iris aperture diaphragm setting (calculated by Hopkins’ equation; refs 27, 28).
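These resolution values are consistent with a partially coherent Rayleigh-type criterion, d = 1.22λ/(NA_obj + NA_ill). The following minimal sketch reproduces them, assuming an objective NA of about 0.6065 (inferred here from the reported values, not stated in this excerpt) together with the illumination NA of 0.063 given above.

```python
# Partially coherent lateral resolution: d = 1.22 * wavelength / (NA_obj + NA_ill).
# NA_ILL = 0.063 is given in the text; NA_OBJ = 0.6065 is an assumed value,
# chosen because it reproduces the reported resolutions.
NA_ILL = 0.063   # illumination NA at the minimum iris setting (from the text)
NA_OBJ = 0.6065  # objective NA -- assumption, not stated in this excerpt

def lateral_resolution_um(wavelength_nm: float) -> float:
    """Return the lateral resolution in micrometres."""
    wavelength_um = wavelength_nm / 1000.0
    return 1.22 * wavelength_um / (NA_OBJ + NA_ILL)

for wl in (450, 550, 650):
    print(f"{wl} nm -> {lateral_resolution_um(wl):.3f} um")
# 450 nm -> 0.820 um, 550 nm -> 1.002 um, 650 nm -> 1.184 um
```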

Employing five illumination images acquired over a period of 2 hours, we evaluated the spatial and spectral stability of the optical setup at the representative 450-nm, 550-nm and 650-nm wavelengths (the evaluation covered the illumination, the microscope, the bandpass filter and the camera). More than 90% of the pixels showed intensity fluctuations within 5% (5% of the mean intensity at each wavelength) during the 2 hours at all three wavelengths. Overall, 75% of the pixels stayed within 4% fluctuation, and 50% of the pixels stayed within 3% fluctuation at all three wavelengths.
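The fluctuation statistics above can be sketched as follows. The exact metric is not specified in this excerpt, so the code assumes "fluctuation" means the maximum per-pixel deviation from the temporal mean, expressed as a percentage of the image's mean intensity at that wavelength, and it uses synthetic data in place of the real illumination images.

```python
import numpy as np

# Synthetic stand-in for 5 illumination images at one wavelength over 2 hours.
rng = np.random.default_rng(0)
stack = 1000 + rng.normal(0, 8, size=(5, 512, 512))

temporal_mean = stack.mean(axis=0)                    # per-pixel temporal mean
deviation = np.abs(stack - temporal_mean).max(axis=0) # worst-case deviation
fluct_pct = 100.0 * deviation / stack.mean()          # relative to image mean

for threshold in (5, 4, 3):
    frac = (fluct_pct <= threshold).mean()
    print(f"pixels within {threshold}% fluctuation: {100 * frac:.1f}%")
```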

The first of the five illumination images was evaluated for spatial uniformity, and intensity non-uniformities of 14.1%, 3.5% and 4.1% of the mean intensity at each wavelength were observed across the field of view (at the 450-nm, 550-nm and 650-nm wavelengths, respectively). The patterns of non-uniformity differed among the wavelengths, especially at the 450-nm wavelength (Supplementary Fig. S3). This non-uniformity was canceled by the flat-field correction employed for observation and cell classification in this study.
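A minimal flat-field correction of the kind referred to here can be sketched as follows; the (bands, height, width) array layout and the omission of a dark-frame term are assumptions for illustration.

```python
import numpy as np

def flat_field_correct(raw: np.ndarray, flat: np.ndarray) -> np.ndarray:
    """Divide each spectral band of `raw` by the illumination image `flat`
    and rescale by the flat's per-band mean, so wavelength-dependent
    illumination non-uniformity cancels out. Shapes: (bands, H, W)."""
    flat_mean = flat.mean(axis=(1, 2), keepdims=True)  # per-band mean intensity
    return raw * flat_mean / flat

# A uniform specimen imaged through non-uniform illumination becomes flat:
flat = np.stack([np.linspace(0.8, 1.2, 100)[None, :].repeat(100, 0)] * 25)
raw = 0.5 * flat                       # 50% transmission everywhere
corrected = flat_field_correct(raw, flat)
print(np.allclose(corrected, 0.5))     # True: gradient removed
```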

Observation of NSCs

Using the HSI microscopy with the settings described above, we observed differentiated NSCs, which consisted of neurons and astroglia (glia) cells (Fig. 2). From transmission images acquired in 25 bands, including the 450-nm (Fig. 2a), 550-nm (Fig. 2b) and 650-nm (Fig. 2c) wavelengths, a cluster image was constructed (Fig. 2d; see the ‘Flat-field correction and cluster image’ subsection in the Methods section for the clustering method). As shown in Fig. 2, details of the cellular structures (e.g. edges of the expanding cell body, Fig. 2d, arrowheads) could be observed in the HSI cluster images (see Supplementary Fig. S4 for a comparison with bright-field and phase-contrast images).

Figure 2

Hyperspectral images of differentiated NSCs. This figure shows spectral images obtained at 450-nm (a), 550-nm (b) and 650-nm wavelengths (c). The cluster image was constructed from 25 spectral images (d). Arrowheads: Edges of the expanding cell bodies. Scale bar: 100 μm, 50 μm (in insets).

Class assignment and signal analysis

In order to achieve the detailed analysis and cell classification evaluation described below, we assigned cellular components to four classes: the neuronal cell body, the glial cell body, the process of both cell types and the external region, referring to immunostaining against a neuronal marker (neuron-specific β-III tubulin, TuJ-1) and a glial marker (glial fibrillary acidic protein, GFAP) (Fig. 3). We used only TuJ-1 single-positive cells (Fig. 3b, arrows) as neurons and GFAP single-positive cells (Fig. 3b, arrowheads) as glia, and excluded double-positive cells (Fig. 3b, DP), double-negative cells (Fig. 3b, DN) and cells that had small nuclei and no cytoplasm or processes (Fig. 3a,b, asterisks). The cell bodies of each cell type were assigned independent classes (Fig. 3c, blue area: neuronal cell, red area: glial cell), but the processes of both cell types were combined into one class (Fig. 3c, green area). The external regions were included in order to obtain better segregation between the cells and the external area (Fig. 3c, gray area, also indicated as ex).

Figure 3

Cellular components and spectral analysis. This figure shows a cluster image of NSCs (a) and the corresponding immunofluorescent image (b: TuJ-1 in green, GFAP in red, nuclei in blue). (c) Pixel-wise class assignments: Neuronal cell bodies (blue), glial cell bodies (red), processes (green) and the external region (gray). (d) Superimposed image of (a,c). (e) Major average pixel-wise spectra of the four object classes. (f) Pixel-wise spectra map for neuronal and glial cell bodies corresponding to (e). Most of the objects belonging to each class contained all four major spectra, and each spectrum corresponded to a different intracellular location. In the neuronal cell bodies, the center areas had a high intensity spectrum Nhigh, whereas the edges of the cells showed a low intensity spectrum Nlow (f). In the glial cell bodies, a high intensity spectrum Ghigh was observed only in the center areas, whereas a low intensity spectrum Glow was observed broadly in the edges of the cells and the inner areas (f). Arrows: Neurons. Arrowheads: Glia cells. DP: Double positive cells. DN: Double negative cells. Asterisks: Excluded cells. Black diamond: Wavelength specific differences between spectra. Nhigh and Nlow: High and low intensity major spectra of the neuronal cell body. Ghigh and Glow: High and low intensity major spectra of the glial cell body. Scale bar: 100 μm.

Figure 3e shows four major average pixel-wise spectra for each class in an acquired image which contained 36 neuronal cell bodies (total of 45,460 pixels), 4 glial cell bodies (total of 26,650 pixels), 49 processes (total of 56,006 pixels) and 8 external regions (total of 131,718 pixels). As shown in Fig. 3f, most of the objects belonging to each class contained all four spectra, and each spectrum corresponded mainly to a different intracellular location. In the neuronal cell bodies, the center areas had the high intensity spectrum Nhigh, whereas the edges of the cells showed the low intensity spectrum Nlow (Fig. 3f, cyan and blue pixels). In the glial cell bodies, the high intensity spectrum Ghigh was observed only in the center areas, whereas the low intensity spectrum Glow was observed broadly at the edges of the cells and in the inner areas (Fig. 3f, yellow and red pixels). Notable intensity differences in the spectra were observed among the classes (Fig. 3e); for example, the neuronal cell bodies contained both brighter and darker signals than the other classes, whereas the external regions consisted mainly of signals near the mean intensity (see Supplementary Fig. S5 for standard deviations of the spectra). In addition, some wavelength-specific differences were also observed; for example, a spectrum of the glial cell body had higher intensities at longer wavelengths than the spectrum of the process (Fig. 3e, black diamond symbol).

Overview of classification procedure

Figure 4 shows an overview of the classification procedure. In general, raw hyperspectral data contain a large amount of information and redundancy, which interferes with the performance of machine learning systems. In order to overcome this drawback, we employed a clustering method for information reduction. First, the 25-band spectral data were converted to a cluster image using a spectrum-wise clustering method (k-means algorithm; see the ‘Flat-field correction and cluster image’ subsection in the Methods section for details). The pixel values of the cluster image indicate the spectral categories. Pixel-wise cell classification was performed on the cluster image using in-house machine learning software, and the software returned the class-labeled image as a result. Using the categorized spectral information in the cluster image together with morphological features, the system realized a comprehensive classification based on spectral and morphological characteristics.
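The spectrum-wise clustering step can be sketched with scikit-learn's k-means (an assumption; the paper's own implementation details may differ). Each pixel's 25-band spectrum is one sample, and the fitted labels are reshaped back into a 2-D cluster image whose pixel values are spectral category indices.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_cluster_image(cube: np.ndarray, n_clusters: int = 32) -> np.ndarray:
    """cube: (H, W, bands) flat-field-corrected spectral data.
    Returns an (H, W) image of cluster labels (spectral categories)."""
    h, w, bands = cube.shape
    spectra = cube.reshape(-1, bands)          # one row per pixel
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(spectra)           # cluster purely by spectrum
    return labels.reshape(h, w)

# Synthetic 25-band cube in place of real microscope data:
cube = np.random.default_rng(0).random((64, 64, 25))
cluster_img = build_cluster_image(cube, n_clusters=16)
print(cluster_img.shape)                       # (64, 64)
```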

Figure 4

Overview of the cell classification procedure. 25-band spectral data was converted to the cluster image using a spectrum-wise clustering method. The pixel values of the cluster image indicate spectral categories. Pixel-wise cell classification was performed on the cluster image using in-house machine learning software, and the software returned the class-labeled image as a result. The machine learning software implemented a Random Forest classifier and multi-scale morphological features were employed.
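The classification stage named in the caption can be sketched as follows. The specific multi-scale features used here (each pixel's cluster label plus local label averages over several window sizes) are illustrative stand-ins for the in-house software's actual feature set, and the data are synthetic.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def pixel_features(cluster_img: np.ndarray, scales=(3, 9, 27)) -> np.ndarray:
    """Per-pixel features: the cluster label itself plus its local mean at
    several window sizes, giving a crude multi-scale morphological context."""
    feats = [cluster_img.astype(float)]
    for s in scales:
        feats.append(uniform_filter(cluster_img.astype(float), size=s))
    return np.stack(feats, axis=-1).reshape(-1, len(scales) + 1)

rng = np.random.default_rng(0)
cluster_img = rng.integers(0, 16, size=(64, 64))  # stand-in cluster image
labels = rng.integers(0, 4, size=(64, 64))        # 4 object classes

X = pixel_features(cluster_img)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels.ravel())
pred = clf.predict(X).reshape(cluster_img.shape)  # class-labeled image
print(pred.shape)                                 # (64, 64)
```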

Cluster images

Figure 5 shows the details of a cluster image constructed by the spectrum-wise clustering method. All of the pixels in the flat-field-corrected images were separated into clusters based only on spectral similarity, for information reduction (in this figure, the number of clusters was set to 16 for better visibility). Figure 5b shows the average spectra of the clusters in a cluster image (Fig. 5a; see Supplementary Fig. S6 for standard deviations). As shown in the color-coded cluster image (Fig. 5c), different clusters were assigned almost exclusively to the cell region (yellow, red, dark purple) and the external region (green, cyan, blue, magenta). Among the 16 clusters, there were clusters seen only in neuronal cell bodies (Fig. 5d, cluster 5), clusters seen broadly in cells (Fig. 5d, cluster 6), clusters seen exclusively outside of cells (Fig. 5d, cluster 9) and clusters composed of the edges of cells and subcellular components (Fig. 5d, cluster 12). Even clusters whose intensities were close to each other were clearly assigned to different objects (Fig. 5b,d, clusters 6 and 9; the average difference in signal intensity across the wavelengths was 74.0 ± 17.9 arbitrary units, a.u.).

Figure 5

Cluster image. A cluster image of NSCs (a) and the average spectra of the clusters (b). Pixels were assigned to clusters based on 25-wavelength signals. (c) Color-coded cluster image of (a). (d) Single-colored cluster image. Colors in (c,d) correspond to the colors in panel (b).

Cell classification evaluation

In differentiated NSC cultures, several types of cells coexist, such as neurons and glia cells, and the differentiation process is dynamically affected by the surrounding environment. In order to evaluate the feasibility of monitoring live NSCs, pixel-wise classification into the four object classes was performed using multiple culture dishes. In the evaluation of the classification procedure, of a total of 19 images, 18 images were used for building the clustering model (32 clusters) and for machine learning training. The remaining image was clustered based on the above clustering model and classified using the trained machine learning system (see the Methods section for the details of the procedure). All 19 combinations were evaluated and the results were averaged. Figure 6 shows an example of a classification result. With a cluster image as the input for classification (Fig. 6a), the extracellular region was clearly separated from the cellular components (Fig. 6b, gray: external region, blue: neuronal cell body, red: glial cell body, green: process), and most of the pixels in the neuronal cell bodies and the glial cell bodies were correctly classified (Fig. 6c for neuronal cell bodies, Fig. 6d for glial cell bodies; green indicates pixels that were classified correctly). Some misclassifications between neuronal cell body and process (Fig. 6c, blue pixels), and between glial cell body and external region (Fig. 6d, red pixels), were observed at marginal regions.
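The 19-fold, leave-one-image-out scheme described above can be sketched as follows; `build_model`, `train` and `evaluate` are placeholder callables standing in for the steps detailed in the Methods section.

```python
# Leave-one-image-out evaluation sketch: for each image, the remaining images
# build the clustering model and train the classifier, and the held-out image
# is scored; the per-fold scores are then averaged over all combinations.
def leave_one_image_out(images, labels, build_model, train, evaluate):
    scores = []
    for i in range(len(images)):
        train_imgs = images[:i] + images[i + 1:]
        train_lbls = labels[:i] + labels[i + 1:]
        model = build_model(train_imgs)           # e.g. 32-cluster model
        clf = train(model, train_imgs, train_lbls)
        scores.append(evaluate(model, clf, images[i], labels[i]))
    return sum(scores) / len(scores)              # average over all folds
```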

Figure 6

Cell classification. (a) Input image for classification (cluster image). (b) Classification result for the cluster image (a): Neuronal cell bodies in blue, glial cell bodies in red, processes in green, and the external region in gray. Single-class result images for neuronal cell bodies (c) and glial cell bodies (d). True positives are shown in green, false positives in blue, and false negatives in red.

Table 1 summarizes the results of the evaluation of the classification procedure. This evaluation included 378 neuronal cell bodies (total of 548,626 pixels), 39 glial cell bodies (total of 199,876 pixels) and 471 processes (total of 402,514 pixels). In the pixel-wise results, the average F0.5 score over these 3 classes was 0.801 ± 0.095 (mean ± s.d.). Most of the pixels in the neuronal cell bodies were correctly classified (recall = 0.943 ± 0.022). Although 43% of the pixels in the glial cell bodies were not classified properly, the predictions for this class were the most precise among the 3 classes tested (precision = 0.891 ± 0.103). More than 99% of the extracellular pixels were correctly classified, with high precision (99.4%). In the object-wise results, an object was considered correctly classified when the correct class was predicted for the largest number of its pixels; the average F0.5 score over these 3 classes was 0.901 ± 0.172. All of the objects studied were classified precisely (the minimum average precision was 0.904 ± 0.195), and a minimum of 72% of the objects studied (88% on average) were recalled properly.
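The F0.5 score used here is the general F-beta measure with beta = 0.5, which weights precision more heavily than recall. A minimal sketch, applied to the glial-cell-body precision above together with a recall of 0.57 (inferred from the statement that 43% of those pixels were not classified properly):

```python
# F_beta = (1 + beta^2) * P * R / (beta^2 * P + R); beta = 0.5 favors precision.
def f_beta(precision: float, recall: float, beta: float = 0.5) -> float:
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(round(f_beta(0.891, 0.57), 3))  # -> 0.801 for the glial cell body class
```

Note that Table 1 reports per-image averages, so a value computed from the pooled figures in this way is only indicative.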
