Then I tried increasing the radius to 10, which resulted in nothing being inpainted. Next I dilated the mask, which led to NaNs being inpainted. Suspecting that a higher radius caused only NaNs to be inpainted, I tried again with all NaNs masked (m_nans). The result was identical in the regions I was actually interested in inpainting, m_nans_small. I then tried the smallest possible radius (1) with m_nans_small; while that inpaints more regions, it still fails to fully inpaint the region of interest, and a radius of 1 is also too small compared to the area of the regions I want to inpaint.
Interestingly, if I try to inpaint recursively (inpaint the image above, masked with the NaNs remaining in that image intersected with the original mask), there is no noticeable improvement, even at the smallest radius. This suggests that the inpainting does not fail from a lack of paint: even two-pixel regions completely surrounded by valid pixels fail to inpaint.
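For reference, here is roughly the setup involved; the array `img`, the 0..1 value range, and the radius are illustrative stand-ins, not my exact code:

```python
import cv2
import numpy as np

# A toy float image with a small hole of NaNs (names are illustrative).
img = np.random.rand(64, 64).astype(np.float32)
img[20:22, 30:32] = np.nan

m_nans = np.isnan(img).astype(np.uint8) * 255  # nonzero = inpaint here

# cv2.inpaint is documented for 8-bit images, and NaNs outside the mask
# would poison the result, so replace them and convert before calling.
img8 = np.clip(np.nan_to_num(img, nan=0.0) * 255, 0, 255).astype(np.uint8)

radius = 3  # the inpainting neighborhood radius discussed above
result = cv2.inpaint(img8, m_nans, radius, cv2.INPAINT_TELEA)
```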
Finally, I tried inpainting a dilated m_nans. With a radius of 1 it inpaints almost everything, but the result is shoddy due to the small radius, and it overwrites good data. With a radius of 50 it fills everything with NaNs.
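One workaround I have not fully validated, continuing the sketch above: inpaint with the dilated mask, then copy the originally valid pixels back so good data is never overwritten:

```python
# Dilating the mask pulls a ring of valid pixels into the inpainted set;
# copying the original valid pixels back afterwards keeps good data intact.
kernel = np.ones((3, 3), np.uint8)
m_dilated = cv2.dilate(m_nans, kernel, iterations=1)

repaired = cv2.inpaint(img8, m_dilated, 3, cv2.INPAINT_TELEA)
repaired[m_nans == 0] = img8[m_nans == 0]  # restore pixels that were valid
```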
Retouching refers specifically to similar work not carried out by a conservator, though it is often used interchangeably with inpainting. Overpainting refers to an intervention that partially or completely covers the original paint layer.
Inpainting is an incredibly time-consuming process that often incites heated philosophical debates. It is often the final step in a conservation treatment and can have a dramatic impact on the overall aesthetics of an object. The main goal is to reintegrate the lacunae, allowing the viewer to appreciate the overall aesthetics of the work of art without being distracted by either the damage or the inpainting itself. Choosing the proper colors for this process can be difficult, due to the inevitable variations in lighting conditions, visitor perception, and aging between old and new materials. (Burns et al. 2002)
In contrast to methods that completely disguise a conservator's inpainting, tratteggio and its derivatives (including selezione cromatica) allow the viewer to distinguish, on close inspection, between modern conservation treatments and the original paint.
The detailed anatomical information of the brain provided by 3D magnetic resonance imaging (MRI) enables a wide range of neuroscience research. However, due to the long scan time for 3D MR images, 2D images are mainly obtained in clinical environments. The purpose of this study is to generate 3D images from sparsely sampled 2D images using an inpainting deep neural network that has a U-net-like structure and DenseNet sub-blocks. To train the network, not only a fidelity loss but also a perceptual loss based on the VGG network was used. Various methods were used to assess the overall similarity between the inpainted and original 3D data. In addition, morphological analyses were performed to investigate whether the inpainted data reproduced local features similar to the original 3D data. The diagnostic utility of the inpainted data was also evaluated by investigating the pattern of morphological changes in disease groups. Brain anatomy details were efficiently recovered by the proposed neural network. In voxel-based analyses of gray matter volume and cortical thickness, differences between the inpainted data and the original 3D data were observed only in small clusters. The proposed method will be useful for utilizing advanced neuroimaging techniques with 2D MRI data.
In this study, we propose a deep learning-based inpainting technique that can generate 3D T1-weighted MR images from sparsely acquired 2D MR images (Fig. 1). The proposed network was trained to produce 3D inpainted MRI from an input that is linearly interpolated from the sparsely sampled MR images. This approach allows the generation of 3D images without careful modeling of hand-crafted priors or assumptions about brain anatomy. The similarity between the inpainted and reference images was quantitatively analyzed based on real 3D MR images. We also used voxel-based morphometry analysis and cortical thickness measurement to assess whether the complex structures of the cerebral cortex were correctly recovered and whether the inpainted data yielded morphological measurements equivalent to the original 3D data (reference image). Furthermore, we analyzed brain atrophy patterns in disease groups to see whether disease patterns could be characterized using images generated by deep learning.
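The exact loss implementation is not reproduced here; the following is a minimal sketch of a fidelity-plus-perceptual objective of the kind described, with the VGG layer cut-off, the L1 form of both terms, and the loss weight as illustrative assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

class FidelityPerceptualLoss(nn.Module):
    """L1 fidelity term plus a VGG-feature (perceptual) term."""
    def __init__(self, perceptual_weight: float = 0.1):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
        self.features = vgg.features[:16].eval()  # up to relu3_3 (assumed cut-off)
        for p in self.features.parameters():
            p.requires_grad = False               # VGG stays a fixed feature extractor
        self.w = perceptual_weight                # assumed weighting, not the paper's

    def forward(self, pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # pred/target: (N, 1, H, W) image slices; VGG expects 3 channels,
        # so the single MR channel is replicated (ImageNet normalization omitted).
        fidelity = nn.functional.l1_loss(pred, target)
        feats_p = self.features(pred.repeat(1, 3, 1, 1))
        feats_t = self.features(target.repeat(1, 3, 1, 1))
        perceptual = nn.functional.l1_loss(feats_p, feats_t)
        return fidelity + self.w * perceptual
```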
For the test dataset, three quantitative measures were used to evaluate the similarity between the reference image (the original 3D image) and the inpainted images (linearly interpolated data and neural network-generated data). The similarity assessment was applied to the brain regions defined by the brain mask produced using the brain extraction tool [34]. Peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and high-frequency error norm (HFEN) were calculated using the following equations [35,36].
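Assuming the usual definitions, with MAX the maximum intensity, MSE the mean squared error over the brain mask, μ, σ², and σ_xy local means, variances, and covariance, c₁ and c₂ stabilizing constants, and LoG a Laplacian-of-Gaussian filter, these measures take the standard forms:

```latex
\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right)

\mathrm{SSIM}(x,y) = \frac{(2\mu_x\mu_y + c_1)(2\sigma_{xy} + c_2)}
                          {(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}

\mathrm{HFEN}(x,y) = \frac{\lVert \mathrm{LoG}(x) - \mathrm{LoG}(y) \rVert_2}
                          {\lVert \mathrm{LoG}(y) \rVert_2}
```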
Different brain tissue types (gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF)) were segmented from the reference and inpainted images using Statistical Parametric Mapping 12 (SPM12) [37]. The segmented images were then compared using the Dice similarity coefficient [38] to quantify the similarity between the reference and inpainted data.
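A minimal sketch of the Dice similarity coefficient computation on binary segmentation masks (array names are illustrative):

```python
import numpy as np

def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```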
To assess the accuracy of the volumetric measures estimated from the inpainted data, a voxel-based morphometry method based on the SPM12 pipeline was applied to the GM images. Inter-subject registration of the GM images was performed with nonlinear deformation using a fast diffeomorphic image registration toolbox [39], and the images were modulated to preserve tissue volume. The images were then smoothed with a Gaussian kernel with a full width at half maximum (FWHM) of 10 mm. The GM volume images of the inpainted data were then compared to the reference data in a voxel-wise manner using a t-test with total intracranial volume as a covariate. The statistical threshold was set to a p-value of 0.005 (corrected for family-wise error) and a cluster extent of 100 voxels. The GM volumes in preselected regions (frontal, lateral parietal, posterior cingulate-precuneus, occipital, lateral temporal, hippocampus, caudate, and putamen) were also estimated using an automatic anatomical labelling algorithm [40,41] to evaluate the correlation between the inpainted and reference data.
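For illustration, the FWHM parameter used by SPM can be converted to the standard deviation expected by common smoothing routines; a minimal sketch, with the image and voxel size as assumed example values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

gm_volume = np.random.rand(121, 145, 121)  # placeholder GM volume image

# SPM parameterizes the Gaussian kernel by FWHM; scipy wants a standard
# deviation: sigma = FWHM / (2 * sqrt(2 * ln 2)) ~= FWHM / 2.355.
fwhm_mm = 10.0
voxel_mm = 1.5  # assumed isotropic voxel size, for illustration
sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm
smoothed = gaussian_filter(gm_volume, sigma=sigma_vox)
```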
Cortical thickness measurements were performed using the automated image preprocessing pipeline of Freesurfer (v6.0.0). The processing pipeline included skull stripping, nonlinear registration, subcortical segmentation, cortical surface reconstruction, and parcellation [42,43,44,45]. The output of the automated processes was visually inspected for obvious errors; rather than manually correcting the errors, 15 subjects were excluded from the statistical analysis. Cortical thickness values were smoothed via surface-based smoothing with a Gaussian kernel with a full width at half maximum of 5 mm. A two-sample t-test was then performed to detect brain regions with differences in cortical thickness between the inpainted and reference data. Results were considered significant if they survived a false-discovery-rate-corrected p-value of 0.05 at the cluster level. The correlation of cortical thickness between the inpainted and reference data was also analyzed.
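As an illustration of the multiple-comparison step (the study used Freesurfer's cluster-level correction; the sketch below shows the generic Benjamini-Hochberg procedure over a vector of p-values):

```python
import numpy as np

def fdr_reject(pvals: np.ndarray, q: float = 0.05) -> np.ndarray:
    """Benjamini-Hochberg: boolean mask of hypotheses rejected at FDR level q."""
    p = np.asarray(pvals, dtype=float)
    order = np.argsort(p)
    n = p.size
    below = p[order] <= q * np.arange(1, n + 1) / n
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()  # largest rank passing the BH line
        reject[order[:k + 1]] = True
    return reject
```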
Finally, we performed voxel-wise comparisons of GM volume and cortical thickness between the NL and MCI/AD groups to investigate whether the inpainted images yield patterns of morphological abnormality similar to those shown by the reference data in the MCI and AD groups.
Comparison of inpainted images to the original 3D MR images (reference). (a) Each column shows the 137th to 140th transaxial slices, the 95th coronal slice, and the 125th sagittal slice. Results of PSNR (b), SSIM (c) and HFEN (d) evaluation of the accuracy of the different inpainting methods (linear interpolation, U-Net and proposed network) relative to reference data are shown on the left.
Gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) segments of a representative image. (a) The areas indicated by arrows show poor recovery of the GM segments in linearly interpolated images (second row) and U-Net images (third row) compared to the proposed neural network-generated images (fourth row). 3D Dice coefficients between reference (original 3D MRI) and inpainted data in GM (b), WM (c), and CSF (d) are shown on the left.
Compared to the reference data, the proposed neural network-generated inpainted data showed no significant differences in the GM volume estimates (Fig. 5a and Supplementary Fig. 1). In contrast, the linearly interpolated data overestimated GM volume in the medial areas of the brain and underestimated it in the bilateral frontal and occipital lobes, insula, striatum, and thalamus. Similarly, as shown in Fig. 5b, when using the neural network-generated data, differences in cortical thickness were found only in small clusters in the medial occipital lobe, whereas with the linearly interpolated data, cortical thickness was overestimated in the cingulate, bilateral occipital lobe, prefrontal cortex, and temporal lobe, and underestimated in the bilateral frontal lobe and lateral temporal lobe.
Voxel-wise comparison of inpainted data to the reference data (original 3D MRI). (a) Gray matter volume difference. (b) Cortical thickness difference. Red indicates overestimation, and blue indicates underestimation relative to the reference data.
The individual regional GM volumes estimated using the neural network-generated data were highly correlated with the volumes estimated using the reference data (Fig. 6a). For all regions analyzed in the study, the volumetric measures obtained using the neural network-generated data showed a higher correlation with the reference than those obtained with the linearly interpolated data. Regional cortical thicknesses obtained with the inpainted and reference data were also compared (Fig. 6b); again, the correlation between the neural network-generated and reference data was higher in all regions.