Review

Advances in Mask-Modulated Lensless Imaging

by Yangyundou Wang 1,2,* and Zhengjie Duan 1,2

1 Hangzhou Institute of Technology, Xidian University, Hangzhou 311231, China
2 School of Optoelectronic Engineering, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Electronics 2024, 13(3), 617; https://doi.org/10.3390/electronics13030617
Submission received: 9 January 2024 / Revised: 23 January 2024 / Accepted: 23 January 2024 / Published: 1 February 2024

Abstract

Lensless imaging allows for designing imaging systems that are free from the constraints of traditional imaging architectures. As a broadly investigated technique, mask-modulated lensless imaging encodes light signals via a mask plate integrated with the image sensor, making the system more compact, scalable, and capable of compressive imaging. Here, we review the latest advancements in mask-modulated lensless imaging, lensless image reconstruction algorithms, related techniques, and future directions and applications.

1. Introduction

Unlike conventional focusing-based imaging, lensless imaging utilizes optical encoders for scene capture. Images captured by image sensors undergo modulation and are often not directly usable. Instead, they are decoded using compatible image reconstruction algorithms to reconstruct the scene [1]. Depending on the different modulation mechanisms in the imaging chain, lensless imaging can be divided into two main categories: illumination modulation and mask modulation. Lensless imaging systems based on illumination modulation utilize characteristics such as the position, coherence, and pulse timing of the illumination source to capture images with different encoded illumination. Depending on the specific illumination mode, lensless imaging can be further categorized into projection imaging, holographic imaging, and time-resolved imaging.
Mask-modulated lensless imaging achieves encoding modulation of the target scene by placing optical masks in front of the image sensor. Furthermore, with designed masks, functions such as 3D imaging, depth estimation, and spectral imaging can be realized. An amplitude mask encodes the incident light by partially blocking or attenuating it, as seen in methods like pinhole imaging and coded aperture imaging. Phase mask modulation, on the other hand, utilizes the principles of physical optics to modulate the phase of light. It blocks or attenuates almost no light, effectively addressing the lower light flux of amplitude mask modulation and improving the image signal-to-noise ratio. Common phase mask types include phase gratings, diffusers, and diffractive optical elements (DOEs).
In this review, an overview of the recent developments in lensless imaging using mask modulation techniques is provided in Section 2. In Section 3, the basic principles of image reconstruction, especially compressive imaging and deep learning algorithms, are explained. In Section 4, the related techniques and future directions are provided. Section 4.1 explores deep optics methods for optimizing both mask design and reconstruction algorithms at the same time, Section 4.2 describes the diffractive and optoelectronic neural networks for reducing energy consumption, and Section 4.3 gives a brief introduction on single-pixel imaging. Section 5 and Section 6 discuss the applications and prospects, respectively.

2. Advances in Mask-Modulated Lensless Imaging

In the past decade, mask-modulated lensless imaging systems have made great progress, as summarized in Figure 1; the most relevant advances are introduced below according to the mask modulation type.
An amplitude-type mask encodes the incident light by partially obstructing or attenuating it. Pinhole imaging is the earliest form of optical imaging system and also the simplest form of amplitude-type mask modulation. However, the limited amount of light that passes through the pinhole results in lower image quality, so pinhole cameras are less commonly used in modern imaging systems. With further research, Rego et al. [2] proposed a method to improve the quality of pinhole imaging. This method involves a joint denoising and deblurring image restoration process that combines low-pass filtering and denoising networks, which are jointly trained in a generative adversarial network (GAN), achieving the reconstruction of high-resolution static images and high-resolution videos.
To improve the signal-to-noise ratio of the imaging, researchers have developed coded aperture imaging, which uses plates containing multiple transparent apertures instead of a single pinhole. This approach not only increases the transmissive area to enhance the light throughput and signal-to-noise ratio but also significantly reduces the physical size of the imaging system. Commonly used masks include band-limited coded masks [3], random aperture arrays [4], uniformly redundant arrays (URAs) [5], and modified uniformly redundant arrays (MURAs) [6].
However, MURAs can be impacted by the ill-posed nature of the inverse problem and the effects of diffraction. To address these issues, DeWeert and Farm [7] proposed Doubly-Toeplitz masks, whose point spread functions (PSFs) exhibit separable characteristics, making them more robust for wide-spectrum imaging. Asif et al. [8] designed the FlatCam camera, which places a binary mask with a 50% transmittance in close proximity to the image sensor, approximately 0.5 mm away. The PSF of FlatCam is essentially the projection of the mask itself. Adams et al. [9] introduced FlatScope, which positions the mask in front of the image sensor at a distance of 0.2 mm, achieving an imaging field of view of 6.52 mm² compared to just 0.41 mm² for traditional lens-based microscopes with the same sensor size. FlatScope also enables three-dimensional fluorescence microscopic imaging with micrometer-level resolution and a high frame rate. Shimano et al. [10] and Tajima et al. [11] proposed amplitude masks based on Fresnel zone apertures (FZAs), which are placed a few millimeters away from the image sensor. The aliasing effects are suppressed using stripe scanning, and digital refocusing is achieved through the fast Fourier transform (FFT). To mitigate the twin image problem in FZA-encoded imaging, Wu et al. [12] constructed a total variation regularization-based Fresnel-encoded aperture imaging reconstruction model, in which the sparse differences in the gradient domain between the twin image and the original image are considered. This approach reduces twin image noise and improves the imaging signal-to-noise ratio. Zhang et al. [13] introduced a binary mask-modulated lensless camera that uses multi-angle illuminations to recover the amplitude and phase information of the sample. Similar to the aperture constraints in Fourier ptychographic microscopy (FPM), the phase retrieval process satisfies the support constraints, and a 4.9 μm half-pitch resolution over a ~30 mm² field of view can be achieved.
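To make the FZA encoding concrete, the following NumPy sketch generates a Gabor-zone-plate-style FZA amplitude mask from the transmittance t(r) = (1 + cos(πr²/r₁²))/2, where r₁ is the radius of the first zone; the grid size, pixel pitch, and r₁ below are illustrative values rather than parameters of any cited system.

```python
import numpy as np

def fza_mask(n_pixels=512, pixel_pitch=2e-6, r1=0.1e-3, binary=True):
    """Fresnel zone aperture (FZA) amplitude mask.

    Implements t(r) = (1 + cos(pi * r^2 / r1^2)) / 2, where r1 is the
    radius of the first zone; thresholding at 0.5 yields a binary mask.
    """
    half = n_pixels * pixel_pitch / 2
    coords = np.linspace(-half, half, n_pixels)
    xx, yy = np.meshgrid(coords, coords)
    t = 0.5 * (1.0 + np.cos(np.pi * (xx**2 + yy**2) / r1**2))
    return (t > 0.5).astype(float) if binary else t

mask = fza_mask()
print(mask.shape, mask.mean())  # roughly 50% open area for the binary FZA
```

Under incoherent illumination, the sensor records, to a first approximation, the scene convolved with a shadow of this pattern, which is why FFT-based refocusing applies.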
Nakamura et al. [14] proposed an ultra-thin lensless camera that can capture super field-of-view images from both the front and back sides simultaneously. The camera consists of multiple coded image sensors, which are complementary metal–oxide–semiconductor (CMOS) sensors with random holes at some pixels. Zhang et al. [15] integrated a planar-coded aperture array with a sensor to create a lensless compound eye microsystem with an overall size of just 32 mm × 36 mm × 28.3 mm. The system can achieve high resolution, a wide field of view, and real-time sensing with a high frame rate. Hua et al. [16] proposed SweepCam, which can achieve depth perception of the scene by using a set of mask patterns displayed on a programmable mask. The computational focusing operator can help determine the depths of the scene points, and a fast multi-depth scene reconstruction algorithm is designed that can reduce the reconstruction time by two orders of magnitude. Zheng et al. [17] proposed a lensless three-dimensional imaging system that uses different patterns displayed on a programmable mask to capture the depth and intensity information of the scene. Moreover, a refinement network is devised as a postprocessing step to identify and eliminate artifacts in the reconstruction.
A phase-type mask modulates the phase of light using the principles of wave optics, with minimal obstruction or attenuation of the light, effectively addressing the issues of low light flux and low image signal-to-noise ratios encountered with amplitude-type masks. Phase gratings have a periodic structure that can induce a phase modulation of either 0 or π in the incident light. Stork et al. [18] introduced asymmetric phase-rotating gratings, integrating phase gratings with CMOS photodetector arrays to create ultracompact lensless PicoCam cameras. Antipa et al. [19] proposed DiffuserCam, which images through a diffuser: each point in the field of view corresponds to a unique pseudo-random pattern of defocused projections on the sensor, enabling single-frame three-dimensional imaging. Subsequently, Monakhova et al. [20] developed a hyperspectral camera by combining a diffuser with a filter array. The filter array encodes spectral information onto the sensor, and the diffuser maps each point in the real scene to multiple spectral filters for multiplexing. Boominathan et al. [21] designed a phase mask-based thin, lensless camera that can achieve high-resolution two-dimensional imaging, computational refocusing, and three-dimensional imaging. A contour-based high-performance PSF is proposed that utilizes signal processing concepts to achieve the maximum information transfer for finite bit-depth sensors. Moreover, it demonstrates how to reconstruct high-quality images and three-dimensional information from a single two-dimensional encoded measurement using fast linear methods and iterative nonlinear methods. Cai et al. [22] proposed a lensless light-field imaging method that uses a diffuser as an encoder, mapping the four-dimensional light-field information to a two-dimensional image. It establishes a diffuser-encoded light-field transmission model, which describes the propagation of light through the diffuser, and designs a calibration strategy to flexibly determine the transmission matrix. Through the transmission matrix, the rays can be computationally decoupled from the detected image, and the spatial and angular resolutions can be adjusted, breaking the resolution limit of the sensor. Baek et al. [23] proposed a lensless polarization camera that can achieve full-Stokes polarization imaging in a single shot; full-Stokes polarization imaging provides additional contrast based on the birefringence and surface characteristics of objects. Moreover, a diffuser-encoded light-field transmission model is established, and an image reconstruction and calibration process is designed. Chen et al. [24] proposed an end-to-end optoelectronic artificial intelligence framework for phase-modulated lensless cameras. Both intensity reconstruction and quantitative phase imaging for speckles and lens imaging are simulated with high accuracy. For speckle reconstruction using phase information, the Pearson correlation coefficient (PCC) and peak signal-to-noise ratio (PSNR) reach 0.929 and 19.313 dB; for speckle reconstruction using intensity information, they reach 0.955 and 19.779 dB. For lens imaging with phase information, the PCC and PSNR reach above 0.989 and 29.930 dB; with intensity information, above 0.990 and 30.287 dB.
DOEs, as commonly used phase-type masks, can be designed with different PSFs, offering high degrees of design freedom and ease of fabrication. However, they often suffer from chromatic aberrations, which can be mitigated through device optimization and reconstruction algorithm improvements in computational imaging. For instance, in device optimization, Peng et al. [25] proposed an achromatic diffractive lens. By balancing the focusing contributions of different wavelengths at the focal plane, they obtained a spectrally invariant blur kernel, achieving high-fidelity color diffractive imaging across the entire spectrum. Zhao et al. [26] encoded the microstructure height of diffractive optical elements and optimized it using a particle swarm optimization (PSO) algorithm to achieve achromatic imaging in the visible light spectrum. Boominathan et al. [21] introduced a mask design with a contour line PSF that produces high-contrast PSFs at different depths, enabling two-dimensional imaging, refocusing, and three-dimensional imaging. In joint optimization approaches, Dun et al. [27] simplified two-dimensional DOEs into one-dimensional rotationally symmetric structures to reduce the computational complexity, employing an end-to-end joint optimization method. Baek et al. [28] developed differentiable simulators and neural network reconstruction methods, achieving single-frame hyperspectral depth imaging through automatic differentiation. Additionally, stacking multiple diffractive elements is a common approach: Heide et al. [29] proposed stacking multiple DOEs and changing the alignment between them to achieve wide-spectrum imaging, and Banerji et al. [30] devised achromatic planar multi-level diffractive lenses (MDLs) for infrared wavelengths.
Figure 1. The development of the mask-modulated lensless camera in the last 10 years. In 2013, Stork et al. proposed asymmetric phase-rotating gratings for a phase-type mask-modulated camera, the lensless PicoCam [18]. In 2015, DeWeert and Farm proposed separable Doubly-Toeplitz masks for amplitude-mask-modulated lensless imaging [7]. In 2016, Asif et al. devised the amplitude-type lensless camera FlatCam [8]. In 2017, Antipa et al. developed DiffuserCam using a diffuser as the phase-type mask [19]. In 2017 and 2018, Tajima and Shimano et al. designed Fresnel zone apertures (FZAs) for amplitude-type mask-modulated lensless imaging [10,11]. In 2018, Zhang et al. proposed a binary mask-modulated lensless imaging system inspired by Fourier ptychographic microscopy (FPM) [13]. In 2019, Nakamura et al. proposed an ultra-thin amplitude-type mask-modulated lensless camera [14]. In the following two years, the Spectral DiffuserCam [20], PhlatCam [21], and a diffractive optical element (DOE)-based diffractive camera [28] were designed, further enriching phase-modulated lensless imaging; moreover, SweepCam [16], an FZA system [12], and a lensless three-dimensional imaging system [17] were designed using amplitude-modulated masks. In 2022, the compound eye microsystem [15] and the lensless polarization camera [23] were devised as amplitude- and phase-modulated lensless imaging systems, respectively. In 2024, Chen et al. developed a learnable phase-modulated lensless camera based on the concepts of deep optics and diffractive optical neural networks [24].

3. Image Reconstructions

The relationship between the sensor’s measurement values, denoted as “y”, and the target intensity, denoted as “x”, within the imaging model can be expressed as follows:
$$y = Hx + n \tag{1}$$
In this context, the matrix $H$ depends on the specific design of the imaging system and various device parameters. These parameters include the pixel response function of the sensor, the choice of modulation device, the distance between the sensor and the modulation device, the distance between the modulation device and the target, etc. The variable $n$ represents the noise in the imaging system. Image reconstruction is essentially an inverse process, where the goal is to infer the target intensity from the measurement values.
For a given imaging system, assuming that the matrix “H” in the imaging model is accurately calculated and the sensor measurements “y” are obtained by capturing an unknown target, the objective of reconstruction algorithms is to estimate the unknown image “x” when provided with the sensor measurements “y” and the matrix “H”. However, in practical applications, model inaccuracies and system noise make solving the inverse problem challenging. Researchers have proposed various solution methods, with commonly used reconstruction algorithms falling into three categories: forward-solving algorithms, model-based optimization iterative algorithms, and deep learning algorithms. Here, we focus on introducing the deep learning algorithms.
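As a toy illustration of this forward model and its regularized inversion, the following NumPy sketch builds a small random $H$ (a stand-in for a calibrated system matrix), simulates measurements via Equation (1), and recovers $x$ with Tikhonov-regularized least squares; all sizes and the noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 128, 64                          # measurements, unknowns (toy sizes)
H = rng.standard_normal((m, n))         # stand-in for a calibrated system matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=5, replace=False)] = 1.0   # sparse toy scene

y = H @ x_true + 0.01 * rng.standard_normal(m)       # Equation (1): y = Hx + n

# Tikhonov-regularized inversion: argmin_x ||Hx - y||^2 + lam * ||x||^2
lam = 1e-2
x_hat = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative error
```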

3.1. Deep Learning Algorithms

Deep learning, as a data-driven method, requires datasets that fit real-world scenarios for learning. Currently, datasets can be collected through experiments or generated through simulations. Deep learning methods are generally divided into supervised learning, semi-supervised or partially supervised learning, and unsupervised learning, with reinforcement learning often discussed under the latter two. In imaging, deep learning is mainly applied to solve image mapping problems and achieve image reconstruction. Among the architectures, the U-Net convolutional neural network is widely used due to its simplicity and suitability for training with small datasets [31]. Transformer networks, first introduced in the 2017 paper “Attention Is All You Need” [32], are neural networks that process sequential data. The Transformer model mitigates the shortcomings of convolutional neural networks (CNNs), namely the limited receptive field and inadaptability to the input content, by using a mechanism called attention, which allows the network to focus on the most relevant parts of the input and output sequences. With comparable performance, Transformers have been widely adopted in computer vision [32,33,34,35,36,37,38].
Typical lensless imaging systems include coded mask imaging systems and Fourier ptychographic imaging systems [39]. The combination of deep learning techniques with optical imaging methods can effectively improve the imaging speed and quality, and it has been successfully applied in various fields, such as digital holographic reconstruction [40,41,42,43], three-dimensional particle field imaging [44], phase recovery [45,46,47], and imaging through scattering media [48,49]. In recent years, deep learning techniques have also gradually been applied to lensless imaging.
In lensless imaging, Wu et al. [50] proposed PhaseCam3D, which jointly learns a phase mask at the camera aperture and a reconstruction network for passive single-view depth estimation in Figure 2a. Khan et al. [51] used trainable inversion layers to map measurements to an intermediate image space and then employed a U-Net to improve the reconstruction, generating more realistic images. U-Net networks are also used for image depth estimation in Figure 2b, such as Wu et al. [52] using a U-Net to obtain depth images from encoded images in Figure 2f. Baek et al. [28] proposed a differentiable simulator and a neural network-based reconstruction method, which were jointly optimized by automatic differentiation, thus achieving end-to-end learning of the mask and the image. Moreover, the paper built an experimental prototype and provided the first HS-D dataset, which was used to evaluate the performance of different deep learning algorithms on the lensless imaging system in Figure 2c. Zheng et al. [17] introduced a lensless imaging strategy that can image multiple stacks in the axial direction, with a refinement network as a postprocessing step to identify and eliminate artifacts and obtain all-in-focus images in the reconstruction. The experimental results demonstrated that dense 3D scenes can be recovered from a small number of sensor measurements using a programmable lensless camera in Figure 2d. Rego et al. [2] introduced an image restoration pipeline for pinhole photography that can reconstruct high-quality images from extremely low-resolution pinhole images. This pipeline uses deep learning and optimization techniques and consists of three steps: pinhole image calibration, super-resolution reconstruction, and color enhancement in Figure 2e. Furthermore, additional network modules can be added to existing networks to achieve specific functions. Monakhova et al. [53] proposed a network architecture called Le-ADMM in Figure 2g, which unrolls the ADMM iterations into layers and learns the optimal ADMM parameters and the image reconstruction at the same time; a U-Net-based denoiser network can be added after this network to further improve the image quality.
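The unrolling idea behind Le-ADMM [53] can be sketched in PyTorch as follows; for brevity, this simplified example unrolls a proximal-gradient iteration with learnable per-layer step sizes and thresholds rather than the full ADMM updates of Section 3.2, and the forward/adjoint operators are assumed to be supplied by the user.

```python
import torch
import torch.nn as nn

class UnrolledRecon(nn.Module):
    """Simplified unrolled reconstruction in the spirit of Le-ADMM [53].

    Each iteration becomes a network layer with its own learnable step
    size and soft-threshold; Le-ADMM itself unrolls the full ADMM
    updates discussed in Section 3.2, optionally followed by a U-Net
    denoiser.
    """
    def __init__(self, forward_op, adjoint_op, n_iters=5):
        super().__init__()
        self.A, self.At = forward_op, adjoint_op
        self.step = nn.Parameter(torch.full((n_iters,), 0.1))     # learnable step sizes
        self.thresh = nn.Parameter(torch.full((n_iters,), 0.01))  # learnable thresholds

    def forward(self, y):
        x = self.At(y)                                    # adjoint initialization
        for a, t in zip(self.step, self.thresh):
            x = x - a * self.At(self.A(x) - y)            # gradient on 0.5||Ax - y||^2
            x = torch.sign(x) * torch.relu(x.abs() - t)   # soft-threshold (L1 prox)
        return x
```

Training such a network end-to-end on measurement/image pairs learns the per-layer parameters jointly with the reconstruction, which is the key difference from hand-tuned iterative solvers.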

3.2. Three-Dimensional Lensless Imaging Algorithms

Three-dimensional lensless imaging involves image compression algorithms, and compressive sensing (CS) is an effective technique for reconstructing a signal that has the property of transform sparsity by solving an underdetermined linear system using a small amount of sampled data [54].
To achieve the signal recovery from the observed measurements, the CS reconstruction problem can be formulated as a penalized model:
$$\hat{x} = \operatorname*{arg\,min}_{x}\ \frac{1}{2}\left\|Hx - y\right\|_2^2 + P(x) \tag{2}$$
where $P(x)$ is a penalty term enforcing the transform sparsity of the signal; the goal is to find the argument $\hat{x}$ that minimizes this loss function. The least squares data term is the most straightforward choice, but plain least squares generally yields a dense, nonsparse $x$, so classical modifications of the regression, such as ridge regression (Tikhonov regularization), are used to obtain better-behaved solutions. Moreover, the least absolute shrinkage and selection operator (LASSO) was proposed for sparse signals, in which the L1 norm promotes sparsity [55].
For the optimization of the penalized model, a large variety of iterative algorithms have been developed, such as the fast iterative shrinkage-thresholding algorithm (FISTA) [56], the Chambolle–Pock algorithm [57], and the alternating direction method of multipliers (ADMM) [58].
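As one concrete example, a compact NumPy implementation of FISTA [56] for the LASSO instance of Equation (2), $P(x) = \tau\|x\|_1$, is sketched below; $H$ and $y$ are assumed to be given as above, and the iteration count is illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(H, y, tau, n_iters=200):
    """FISTA [56] for Equation (2) with P(x) = tau * ||x||_1 (LASSO)."""
    L = np.linalg.norm(H, 2) ** 2        # Lipschitz constant of grad 0.5||Hx - y||^2
    x = np.zeros(H.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iters):
        x_new = soft_threshold(z - H.T @ (H @ z - y) / L, tau / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum extrapolation
        x, t = x_new, t_new
    return x
```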
In three-dimensional lensless imaging, the complexity of both the calibration and computation can be reduced using a convolution model:
$$b(x', y') = \sum_{x, y, z} v(x, y, z)\, h(x', y'; x, y, z) \tag{3}$$
where the object $v$ is a set of point sources located at $(x, y, z)$ on a non-Cartesian 3D grid, the caustic pattern at pixel $(x', y')$ on the sensor is the PSF $h(x', y'; x, y, z)$, and $b(x', y')$ is the resulting 2D sensor measurement, i.e., the sum of the contributions of all point sources.
Equation (3) can be further written as an L1-regularized least squares problem:

$$\hat{v} = \operatorname*{arg\,min}_{w \geq 0,\, u,\, v'}\ \frac{1}{2}\left\|b - Dv'\right\|_2^2 + \tau\left\|u\right\|_1 \quad \text{s.t.}\ \ v' = Mv,\ u = \Psi v,\ w = v \tag{4}$$

where $v'$, $u$, and $w$ are auxiliary variables, $D$ is a diagonal (crop) operator, $M$ and $\Psi$ represent 3D convolution operators, and $\tau$ is a tuning parameter that adjusts the degree of sparsity.
In general, the ADMM with variable splitting is employed for solving the 3D inverse problem of Equation (4), and the results at iteration $k$ are

$$\begin{aligned} u^{k+1} &\leftarrow \mathcal{T}_{\tau/\mu_2}\left(\Psi v^k + \eta^k/\mu_2\right)\\ v'^{\,k+1} &\leftarrow \left(D^T D + \mu_1 I\right)^{-1}\left(\xi^k + \mu_1 M v^k + D^T b\right)\\ w^{k+1} &\leftarrow \max\left(\rho^k/\mu_3 + v^k,\ 0\right)\\ v^{k+1} &\leftarrow \left(\mu_1 M^T M + \mu_2 \Psi^T \Psi + \mu_3 I\right)^{-1} r^k\\ \xi^{k+1} &\leftarrow \xi^k + \mu_1\left(M v^{k+1} - v'^{\,k+1}\right)\\ \eta^{k+1} &\leftarrow \eta^k + \mu_2\left(\Psi v^{k+1} - u^{k+1}\right)\\ \rho^{k+1} &\leftarrow \rho^k + \mu_3\left(v^{k+1} - w^{k+1}\right) \end{aligned} \tag{5}$$

where

$$r^k = \left(\mu_3 w^{k+1} - \rho^k\right) + \Psi^T\left(\mu_2 u^{k+1} - \eta^k\right) + M^T\left(\mu_1 v'^{\,k+1} - \xi^k\right) \tag{6}$$
Note that $\mathcal{T}_{\nu}$ is a vectorial soft-thresholding operator with a threshold value of $\nu$ [59], and $\xi$, $\eta$, and $\rho$ are the Lagrange multipliers associated with $v'$, $u$, and $w$, respectively. The scalars $\mu_1$, $\mu_2$, and $\mu_3$ are penalty parameters that can be computed automatically using the tuning strategy in [58].
Moreover, the minimization problem can also be formulated as a sum over the depths of interest with total variation (TV) and L1 regularization terms:

$$\hat{i} = \operatorname*{arg\,min}_{i \geq 0}\ \left\|b - \sum_{d=1}^{D} P_d\, i_d\right\|_F^2 + \gamma_1 \sum_{d=1}^{D} \left\|\Psi i_d\right\|_1 + \gamma_2 \left\|i\right\|_1 \tag{7}$$

where $D$ is the number of depths of interest, indexed by the distance $d$ from the imaging device, and $\Psi$ is the 2D spatial gradient. The TV and L1 regularization terms restrict the solution under the sparsity assumption, and $\gamma_1$ and $\gamma_2$ are tuning parameters that adjust the degree of sparsity. The 3D minimization problem can then be solved by ADMM [60].
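For readers who want to trace Equations (4)–(6) in code, the following small dense-matrix NumPy sketch applies the updates of Equation (5) literally; $M$, $\Psi$, and $D$ are supplied as explicit matrices standing in for the 3D convolution, gradient, and crop operators, which practical implementations apply with FFTs instead.

```python
import numpy as np

def admm_3d_recon(M, Psi, D, b, tau, mu=(1.0, 1.0, 1.0), n_iters=100):
    """Dense-matrix sketch of the ADMM updates in Equation (5).

    M   : (p, n) stand-in for the 3D convolution operator
    Psi : (q, n) stand-in for the sparsifying (gradient) operator
    D   : (m, p) stand-in for the crop operator; b : (m,) measurement
    """
    mu1, mu2, mu3 = mu
    p, n = M.shape
    v = np.zeros(n); vp = np.zeros(p)
    u = np.zeros(Psi.shape[0]); w = np.zeros(n)
    xi = np.zeros(p); eta = np.zeros(Psi.shape[0]); rho = np.zeros(n)
    soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
    # Factors for the two linear solves (diagonalized by FFTs in practice).
    A_vp = D.T @ D + mu1 * np.eye(p)
    A_v = mu1 * M.T @ M + mu2 * Psi.T @ Psi + mu3 * np.eye(n)
    for _ in range(n_iters):
        u = soft(Psi @ v + eta / mu2, tau / mu2)                   # u-update
        vp = np.linalg.solve(A_vp, xi + mu1 * (M @ v) + D.T @ b)   # v'-update
        w = np.maximum(rho / mu3 + v, 0.0)                         # nonnegativity
        r = (mu3 * w - rho) + Psi.T @ (mu2 * u - eta) + M.T @ (mu1 * vp - xi)
        v = np.linalg.solve(A_v, r)                                # v-update, Eq. (6)
        xi += mu1 * (M @ v - vp)                                   # dual ascent steps
        eta += mu2 * (Psi @ v - u)
        rho += mu3 * (v - w)
    return v
```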

4. Related Techniques and Future Directions

The development of simple optical imaging relies on rapid advancements in various fields, such as optical field modulation devices, optical design software, image sensors, reconstruction algorithms, and artificial intelligence. However, it also presents several issues and challenges that need to be addressed.
Lenses have been used for imaging for centuries and can create high-quality images; by comparison, lensless imaging often produces lower-quality images. Lensless imaging systems need computational reconstruction algorithms to recover the unknown image of a scene, so more processing power is required, which conflicts with the goals of lower energy consumption and longer battery life. Moreover, the light collection ability of a lensless camera is limited by the sensor size.

4.1. Deep Optics

The concept of computational imaging is to codesign the optics, sensing, and algorithms, and seminal works such as wavefront coding, high dynamic range (HDR) imaging, super-resolution, extended depth of field, light fields, and compressive imaging have been proposed, as summarized in Figure 3. Whereas these systems were designed in a compartmentalized fashion, deep optics is an end-to-end approach that automates the co-design: the optics and the image processing are jointly optimized end-to-end in simulation, and at inference time only minor adjustments to the image processing are made based on the fabricated optics. Furthermore, deep optics can be interpreted as an optical encoder and electronic decoder system in which the sensor is the bottleneck, or as a hybrid optical–electronic neural network in which signals can be processed as photons or electrons, and in which low-level image processing that includes physical parameters and high-level image processing can be optimized jointly. The applications of deep optics include low-light classification [61,62], monocular depth estimation [63,64], neural sensors [65,66], neural holography [67,68], and hybrid optical–electronic computing [69].
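To illustrate the end-to-end idea, the PyTorch sketch below jointly optimizes a learnable phase mask and a small CNN decoder through a toy Fourier-optics PSF model; the PSF model, network, and shapes are illustrative stand-ins rather than any published deep-optics design.

```python
import torch
import torch.nn as nn

class DeepOpticsToy(nn.Module):
    """Toy end-to-end deep-optics model: learnable pupil phase -> PSF ->
    sensor image (cyclic convolution) -> CNN decoder. Gradients of the
    image loss flow through the PSF model into the mask, so the optics
    and the reconstruction are optimized jointly."""
    def __init__(self, n=64):
        super().__init__()
        self.phase = nn.Parameter(0.01 * torch.randn(n, n))  # learnable mask phase
        self.decoder = nn.Sequential(                        # electronic decoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def psf(self):
        # Fraunhofer-style stand-in: far-field intensity of the pupil.
        pupil = torch.exp(1j * self.phase)
        psf = torch.fft.fftshift(torch.abs(torch.fft.fft2(pupil)) ** 2)
        return psf / psf.sum()

    def forward(self, scene):                                # scene: (B, 1, n, n)
        otf = torch.fft.rfft2(self.psf()[None, None], s=scene.shape[-2:])
        measurement = torch.fft.irfft2(torch.fft.rfft2(scene) * otf,
                                       s=scene.shape[-2:])
        return self.decoder(measurement)

model = DeepOpticsToy()
scene = torch.rand(4, 1, 64, 64)
loss = nn.functional.mse_loss(model(scene), scene)           # end-to-end loss
loss.backward()                                              # updates mask and decoder
```

A single optimizer step over both `model.phase` and the decoder weights is what distinguishes this joint design from optimizing the optics and the algorithm separately.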

4.2. Diffractive and Optoelectronic Neural Networks

As shown in Figure 4, diffractive deep neural networks (D2NNs) have emerged as a free-space optical platform that leverages supervised deep learning algorithms to design diffractive surfaces for visual processing and all-optical computational tasks [70,71]. These diffractive optical networks, after their fabrication, form physical all-optical processors of visual information capable of executing various computer vision tasks spanning image classification [70,72,73,74,75], quantitative phase imaging (QPI) [76,77], linear transformations [78,79,80,81], image encryption [82,83], and imaging through diffusive media [84,85], among many others [86,87,88,89,90,91,92,93].
Each point on the diffractive layer represents an artificial neuron in which the phase or amplitude can be independently modulated, and the field distribution is
$$t_i(x_i, y_i, z_i) = a_i(x_i, y_i, z_i)\, \exp\left(j\varphi_i(x_i, y_i, z_i)\right) \tag{8}$$
where $i$ denotes the neuron number, and the amplitude $a_i$ is assumed to be constant. The phase term $\varphi_i$ can be considered a multiplicative “bias” term, which is a learnable parameter, and a sigmoid function is used to limit the phase value of each neuron to $0 \sim 2\pi$.
Using the angular spectrum method, the transfer function can be expressed as
$$H(f_x, f_y, z) = \exp\left(j\, \frac{2\pi z}{\lambda} \sqrt{1 - (\lambda f_x)^2 - (\lambda f_y)^2}\right) \tag{9}$$
where $z$ represents the axial distance between the two planes, $\lambda$ is the illumination wavelength, $f_x$ and $f_y$ are the spatial frequencies in the x and y directions, and $j = \sqrt{-1}$.
The intensity distribution of the $i$th learnable neuron on the diffractive layer can be expressed as
$$I_i(x_i, y_i, z_i) = \sum_k \mathcal{F}^{-1}\left\{ \mathcal{F}\left\{ I_k^{d}(x, y, 0) \right\} \cdot H_k^{d}(f_x, f_y, z_i) \right\} \cdot t_i(x_i, y_i, z_i) \tag{10}$$
where $I_k^{d}(x, y, 0)$ is the intensity distribution of the $k$th pixel on the diffusing layer. The field right in front of the diffractive layer is obtained by multiplying the transfer function $H_k^{d}$ with the Fourier transform of $I_k^{d}$ in the frequency domain and then summing over $k$ in the spatial domain.
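A minimal NumPy implementation of the angular spectrum propagation in Equations (8)–(10) is sketched below; the grid, pixel pitch, and the hard evanescent-wave cutoff are implementation choices rather than part of any cited design.

```python
import numpy as np

def angular_spectrum_propagate(field, z, wavelength, pitch):
    """Propagate a complex field by distance z using the transfer
    function of Equation (9)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    H = np.where(
        arg >= 0,
        np.exp(1j * 2 * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0))),
        0.0)                              # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_layer(field, phase, z, wavelength, pitch):
    """One D2NN layer: free-space propagation followed by the learned
    per-neuron phase modulation t_i = exp(j * phase) of Equation (8)."""
    return angular_spectrum_propagate(field, z, wavelength, pitch) * np.exp(1j * phase)
```

Chaining several such layers, with the phase arrays as the trainable parameters, is the forward model that D2NN training differentiates through.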
For optoelectronic neural networks, Shi et al. [94] introduced a lensless optoelectronic hybrid neural network architecture that can be used for computer vision tasks, such as handwritten digit classification and privacy-preserving face recognition. It uses a passive mask to perform convolution operations in the imaging optical path, thus extracting features in the optical domain and solving the challenge of processing incoherent and broadband light signals in natural scenes. This architecture also optimizes the mask design and the backend network to reduce the computation and energy consumption of the whole process. Wang et al. [95] introduced a lensless image sensing method that uses multilayer nonlinear optical neural networks (ONNs) as image preprocessors, which compress high-dimensional image information into a low-dimensional latent space for subsequent computational analyses, such as machine vision benchmarking, flow cytometry image classification, and object recognition in real scenes. It also shows that this ONN preprocessor can achieve high-precision and high-speed image sensing with fewer pixels and fewer photons under the same hardware conditions, which provides new ideas for developing compact and efficient lensless image sensing platforms.

4.3. Single-Pixel Imaging

Single-pixel imaging is a technique that generates images by illuminating a scene with a sequence of spatially structured patterns and recording the corresponding intensity on a detector that has no spatial resolution. Single-pixel camera architectures can be used for polarimetry [96], imaging through scattering media [97], phase imaging [98], and more. By using a liquid crystal display (LCD) as a programmable aperture, lensless images can further be computed. Huang et al. [99] recently demonstrated a lensless imaging system that used an LCD to control the light transmission at multiple apertures independently. This kind of single-pixel imaging system can avoid problems such as aberrations and produce images with an improved depth of focus by using point spread function engineering.
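The measurement principle can be sketched in a few lines of NumPy with a complete Hadamard pattern set; real systems display ±1 patterns via differential 0/1 measurements and typically use far fewer patterns with compressive reconstruction, so this exact-inversion example is only illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

n = 32                                        # scene is n x n (n^2 a power of 2)
P = hadamard(n * n).astype(float)             # one +/-1 pattern per row
scene = np.zeros((n, n)); scene[8:24, 8:24] = 1.0

# Each single-pixel measurement is one inner product of scene and pattern.
y = P @ scene.ravel()

# With a complete orthogonal pattern set, reconstruction is a transpose.
recon = (P.T @ y / (n * n)).reshape(n, n)
print(np.allclose(recon, scene))              # True
```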

5. Applications

In recent years, with the rapid development of mask-modulated lensless imaging technology in the fields of three-dimensional and hyperspectral imaging as well as super-resolution and large-FOV imaging, its peripheral applications in microscopy, endoscopy, data storage, and security have been accelerated, as summarized in Figure 5. In addition to the amplitude and phase, the wavelength information of the light field can also be modulated to achieve hyperspectral lensless imaging. Lensless imaging systems with mask modulation enable high-throughput imaging. As a result, high-dimensional information such as depth and spectrum can be obtained with both a large field of view and super-resolution, which is particularly useful for microscopic and endoscopic imaging. Furthermore, mask-modulated lensless imaging overcomes the “one-to-one projection” limitation and is suitable for compressed imaging. With the advantages of small size, light weight, and low cost, it can expand the application possibilities in data storage and security. The following sections introduce the scenario-based applications of the mask-modulated lensless imaging system.

5.1. Three-Dimensional and Hyperspectral Imaging

Depth imaging enables the acquisition of three-dimensional spatial information about a scene and has wide applications, such as autonomous driving, robot obstacle avoidance, target recognition, and virtual reality. Simple optical depth imaging relies on monocular depth estimation methods, which recover depth information from the varying degrees of object blur at different depths. Carvalho et al. [100] addressed depth estimation and 3D reconstruction from real defocused indoor and outdoor images, presenting a complete system for single-image depth prediction based on depth-from-defocus and neural networks.
Phase masks, by introducing depth-related point spread functions (PSFs), can strengthen the correlation between the scene depth and image blur. Wu et al. [50] proposed a method for passive 3D imaging using a phase mask at the camera aperture. This technique relies on an end-to-end optimization framework for jointly learning the optimal phase mask and reconstruction algorithm. It can accurately estimate distances from captured data and demonstrates excellent performance in 3D imaging. Boominathan et al. [21] designed a phase mask with a contour line PSF based on wave optics and phase recovery methods, achieving sharp PSFs at different depths for close range 3D imaging. Baek et al. [28] combined high-spectral and depth imaging by using DOE elements with PSFs that vary in depth and spectrum, enabling high-spectral-depth imaging in the visible light spectrum. Addressing the impact of DOE parameterization on the image quality and depth estimation accuracy for all-focus images, Liu et al. [101] found that PSFs with concentrated energy perform well for all-focus imaging, while PSFs with depth-related shapes are more suitable for depth imaging. Antipa et al. [19] generated pseudo-random PSFs using a diffuser plate and achieved single-frame 3D imaging by pre-calibrating PSFs at different depths. Tian et al. [102] used a custom microlens array to construct a camera, optimizing the PSF for the depth estimation and reconstructing three-dimensional scenes.
Lensless spectral imaging can capture the spectral information of a scene without using any lenses or moving parts. It relies on computational methods to reconstruct the hyperspectral data cube from a single or multiple measurements under different mask patterns. The mask patterns can be designed to encode multiplexed spatio-spectral information, which can be decoded by solving an inverse problem. Spectral imaging in lensless imaging has advantages, such as compactness and snapshot capability. It can be applied in various fields, such as medical imaging, remote sensing, food quality analysis, and astronomy [19,103].

5.2. Super-Resolution and Large FOV Imaging

Due to the limitations imposed by the diffraction limit and the influence of complex imaging environments, the resolution of images captured by imaging systems is inherently limited. To enhance image resolution, two approaches have been pursued: image reconstruction methods and optical methods aimed at surpassing the diffraction limits inherent in conventional imaging systems. Super-resolution image reconstruction involves the recovery of high-resolution images from low-resolution counterparts and has garnered considerable attention [104]. Tsai and Huang [105] explored the use of multiple image frames to achieve super-resolution reconstruction. Over the years, super-resolution techniques have matured, enabling the effective reconstruction of both video and static images. Sitzmann et al. [106] employed stochastic gradient methods to perform end-to-end optimization of optical elements based on the output of neural networks; the resulting DOEs can achieve snapshot super-resolution imaging.
In the field of lensless microscopy, several techniques are employed for super-resolution imaging, such as subpixel displacement-based methods, multi-angle illumination, axial scanning, and wavelength scanning. The FlatScope lensless microscope achieves micrometer-level resolution for three-dimensional fluorescence imaging [9]. Additionally, bio-inspired microlens arrays are an effective means of achieving high-resolution imaging. Venkataraman et al. [107] introduced PiCam, an ultra-thin, high-performance, single-chip camera array capable of capturing light fields and obtaining high-resolution images and depth maps through integrated disparity detection and super-resolution. Wu et al. [108] presented an integrated scanning light-field imaging sensor, capturing ultra-fine four-dimensional light-field distributions through vibrating encoded microlens arrays, enabling high-resolution imaging and three-dimensional photography. Hu et al. [109] introduced a miniature optoelectronic-integrated compound eye camera capable of wide-field spatial imaging, recognizing the positions of moving objects, and tracking sensitive trajectories. Zhang et al. [15] proposed a lensless compound eye microsystem, stitching together multiple subfields formed by a telephoto sub-eye, and imaging panoramic targets within a single multiplexed image sensor. Moreover, the lensless compound eye microsystem employs a microelectromechanical systems (MEMS) aperture array to achieve wide-field and high-resolution imaging.

5.3. Microscopy

Lensless imaging offers a new way of undertaking microscopy with a small size and a flexible design. Compared to conventional microscopes [110], lensless imaging has different trade-offs among three factors: the FoV, resolution, and light collection efficiency. Conventional microscopes need large and heavy lenses with a high NA to collect more light and achieve a higher resolution, but they also have a smaller FoV [111]. Lensless imaging does not need any lenses, so it can have a larger FoV by using a bigger sensor or sensor array without losing resolution. Lensless microscopy has shown its ability to capture large-FoV and high-resolution images in various situations [9,112,113,114,115,116]. Lensless imaging also has other benefits for microscopy, such as 3D reconstruction and reduced size, weight, and cost.
Adams et al. [60] proposed a new type of lensless microscope that can image tissue in vivo using a phase mask that creates high-contrast contours in the diffraction patterns. The lensless microscope can achieve large fields of view, computational refocusing, low cost, and small form factors. The article also showed experimental results on imaging calcium dynamics in a mouse cortex, freely moving hydra, and oral microvasculature, suggesting that the lensless microscope may have clinical applications, especially for imaging hard-to-reach areas of the body in Figure 6a. Guo et al. [117] proposed a new on-chip imaging strategy, called depth-multiplexed ptychographic microscopy (DPM), that can image multiple specimen stacks in the axial direction at high speed and can achieve high-throughput imaging of biospecimens on a chip, such as cell cultures, brain sections, and blood smears in Figure 6b. Kuo et al. [118] introduced a lensless fluorescence microscope that only requires placing a diffuser on a conventional image sensor to achieve wide-field fluorescence imaging for planar and three-dimensional samples. The diffuser acts as a super-thin alternative to a microscope objective, making the system compact and easy to assemble, with a practical working distance of over 1.5 mm. Moreover, the diffuser also encodes volumetric information, enabling the refocusing and three-dimensional imaging of sparse samples during postprocessing. To improve the performance under low-light conditions, it also proposes a random microlens diffuser, which consists of many small lenses randomly placed on the mask surface, producing noise-robust PSFs in Figure 6c.

5.4. Endoscopy

Fiber-based endoscopes that remove the optics between the fiber and the sample have been around since 2011 and are consequently termed lensless endoscopes [119,120,121,122]. Cellular-level imaging in live mice with lensless endoscopes has been demonstrated using single-photon fluorescence imaging [123,124]. Sun et al. [125] proposed a new approach to 3D endoscopy using ultra-thin fiber bundles called multicore fibers (MCFs). Unlike conventional fiber facet imaging methods, where the imaging resolution is limited by the core-to-core spacing, their approach enables 3D quantitative phase imaging (QPI) reconstruction with nanoscale axial sensitivity of the optical path length and a lateral resolution of up to 1 μm in the ideal case via direct recovery of the incident complex light field. Kuschmierz et al. [126] proposed two lensless fiber endoscopy techniques based on DOEs and deep neural networks that can achieve ultra-thin 3D imaging. The first, the DOE-grating method, uses DOEs designed and fabricated for phase conjugation and focusing to realize lensless grating-scanning imaging. The second, the DOE-diffuser method, uses a diffuser to encode the far field of a coherent fiber bundle (CFB) into a two-dimensional speckle pattern and then decodes the speckle pattern with a neural network to recover the 3D object information.

5.5. Data Storage and Security

Lensless imaging can be used to store data in a holographic memory, which is a type of optical data storage that uses holograms to encode information. Holographic memory can store more data than conventional optical discs and can also enable faster data retrieval. Hao et al. [127] proposed a deep learning-based phase retrieval method for comparative holographic data storage (HDS) with a higher storage density, lower bit error rate (BER), and higher data transfer rate. Hao et al. [128] proposed a novel method for recovering both the amplitude and phase information of data pages in holographic data storage using a deep learning model and a lensless near-field diffraction setup.
Lensless imaging can be used for privacy protection by capturing heavily blurred images in which humans cannot recognize the subject but which contain enough information for machines to infer attributes. Sui et al. [129] proposed an optical multiple-image hiding method for a lensless optical security system. It treats various optical parameters as security keys, including the propagation distance, wavelength, focal length, and topological charge of the structured phase mask, which increases the size of the key space and improves the security.
It should be noted that mask-modulated lensless imaging depends largely on the backend image reconstruction algorithms, which affect the imaging quality, the power consumption, and the real-time display of the reconstructed image. Moreover, the light transmission in a mask-modulated lensless imaging system is restricted by the modulation transfer function of the mask as well as the size and angular transfer function of the sensor. Hence, the optimization of lensless imaging architectures is a promising direction for further extending their applications.

6. Future Outlook

The ongoing progress in lensless optical imaging technology has driven the development of various industries, including consumer electronics, autonomous driving, machine vision, security surveillance, biomedical applications, and the metaverse.
In the realm of consumer electronics, smartphone camera modules, often composed of multiple lenses, have been a subject of concern due to their protruding designs, impacting aesthetics. Lensless imaging systems offer new solutions by providing low SWaP (Size, Weight, and Power) imaging, enabling cost-effective, high-quality images and video capturing.
In the biomedical field, biomedical imaging and medical devices are the primary directions for the development and application of simplified optics. Lensless imaging could also be used to create high-resolution and large FOV images of biological samples, which can help in the diagnosis and treatment of diseases.
In the field of security, which demands clear and comprehensive images or videos for target monitoring, tracking, and attribute analysis, lensless imaging could be used to encrypt and decrypt data, which can enhance the security and robustness of data transmissions. Moreover, all-optical or optoelectronic lensless imaging can be implemented to meet the need for improved concealment and reduced power consumption, which are essential to achieve comprehensive, all-weather surveillance.
In the field of autonomous vehicles, lensless imaging could be used to create compact and robust sensors that can detect and recognize objects in real time. For example, lensless imaging can be implemented to monitor a car’s surroundings, enabling panoramic vision, emergency collision avoidance, automated parking, and adaptive cruise control.
To better develop mask-based lensless imaging technology and its related fields, the following aspects require additional investigation. First, computing capability is essential for extracting information, but it also significantly increases the energy consumption of the imaging system; how to combine optoelectronic intelligent algorithms to achieve efficient computation is therefore an important area for improvement. Second, continuously improving nanofabrication technologies are expected to reduce the cost of large-size, high-precision optical modulation components. Third, by processing data directly at the sensor end, improving the area, time, and energy efficiency, and performing intelligent control of the pixel units, redundant data movement between the sensor and the reconstruction unit can be reduced, thus lowering the power consumption and time delay.

Author Contributions

Conceptualization, Y.W.; writing—original draft preparation, Y.W. and Z.D.; writing—review and editing, Y.W.; supervision, Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Boominathan, V.; Robinson, J.T.; Waller, L.; Veeraraghavan, A. Recent Advances in Lensless Imaging. Optica 2022, 9, 1–16. [Google Scholar] [CrossRef]
  2. Rego, J.D.; Chen, H.; Li, S.; Gu, J.; Jayasuriya, S. Deep Camera Obscura: An Image Restoration Pipeline for Pinhole Photography. Opt. Express 2022, 30, 27214–27235. [Google Scholar] [CrossRef]
  3. Barrett, H.H.; Horrigan, F.A. Fresnel Zone Plate Imaging of Gamma Rays; Theory. Appl. Opt. 1973, 12, 2686–2702. [Google Scholar] [CrossRef]
  4. Anand, V.; Katkus, T.; Linklater, D.P.; Ivanova, E.P.; Juodkazis, S. Lensless Three-Dimensional Quantitative Phase Imaging Using Phase Retrieval Algorithm. J. Imaging 2020, 6, 99. [Google Scholar] [CrossRef]
  5. Fenimore, E.E.; Cannon, T.M. Coded Aperture Imaging with Uniformly Redundant Arrays. Appl. Opt. 1978, 17, 337–347. [Google Scholar] [CrossRef] [PubMed]
  6. Gottesman, S.R.; Fenimore, E.E. New Family of Binary Arrays for Coded Aperture Imaging. Appl. Opt. 1989, 28, 4344–4352. [Google Scholar] [CrossRef] [PubMed]
  7. DeWeert, M.J.; Farm, B.P. Lensless Coded-Aperture Imaging with Separable Doubly-Toeplitz Masks. Opt. Eng. 2015, 54, 023102. [Google Scholar] [CrossRef]
  8. Asif, M.S.; Ayremlou, A.; Sankaranarayanan, A.; Veeraraghavan, A.; Baraniuk, R.G. FlatCam: Thin, Bare-Sensor Cameras Using Coded Aperture and Computation. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 384–397. [Google Scholar] [CrossRef]
  9. Adams, J.K.; Boominathan, V.; Avants, B.W.; Vercosa, D.G.; Ye, F.; Baraniuk, R.G.; Robinson, J.T.; Veeraraghavan, A. Single-Frame 3D Fluorescence Microscopy with Ultraminiature Lensless FlatScope. Sci. Adv. 2017, 3, e1701548. [Google Scholar] [CrossRef] [PubMed]
  10. Shimano, T.; Nakamura, Y.; Tajima, K.; Sato, M.; Hoshizawa, T. Lensless Light-Field Imaging with Fresnel Zone Aperture: Quasi-Coherent Coding. Appl. Opt. 2018, 57, 2841–2850. [Google Scholar] [CrossRef] [PubMed]
  11. Tajima, K.; Shimano, T.; Nakamura, Y.; Sato, M.; Hoshizawa, T. Lensless Light-Field Imaging with Multi-Phased Fresnel Zone Aperture. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Stanford, CA, USA, 12–14 May 2017; pp. 1–7. [Google Scholar] [CrossRef]
  12. Wu, J.; Zhang, H.; Zhang, W.; Jin, G.; Cao, L.; Barbastathis, G. Single-Shot Lensless Imaging with Fresnel Zone Aperture and Incoherent Illumination. Light Sci. Appl. 2020, 9, 53. [Google Scholar] [CrossRef]
  13. Zhang, Z.; Zhou, Y.; Jiang, S.; Guo, K.; Hoshino, K.; Zhong, J.; Suo, J.; Dai, Q.; Zheng, G. Mask-Modulated Lensless Imaging with Multi-Angle Illuminations. APL Photonics 2018, 3, 060803. [Google Scholar] [CrossRef]
  14. Nakamura, T.; Kagawa, K.; Torashima, S.; Yamaguchi, M. Super Field-of-View Lensless Camera by Coded Image Sensors. Sensors 2019, 19, 1329. [Google Scholar] [CrossRef] [PubMed]
  15. Zhang, L.; Zhan, H.; Liu, X.; Xing, F.; You, Z. A Wide-Field and High-Resolution Lensless Compound Eye Microsystem for Real-Time Target Motion Perception. Microsyst. Nanoeng. 2022, 8, 83. [Google Scholar] [CrossRef] [PubMed]
  16. Hua, Y.; Nakamura, S.; Asif, M.S.; Sankaranarayanan, A.C. SweepCam—Depth-Aware Lensless Imaging Using Programmable Masks. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1606–1617. [Google Scholar] [CrossRef]
  17. Zheng, Y.; Hua, Y.; Sankaranarayanan, A.C.; Asif, M.S. A Simple Framework for 3D Lensless Imaging with Programmable Masks. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 2583–2592. [Google Scholar] [CrossRef]
  18. Stork, D.G.; Gill, P.R. Hardware Verification of an Ultra-miniature Computational Diffractive Imager. In Proceedings of the Computational Optical Sensing and Imaging 2014 (COSI), Kohala Coast, HI, USA, 22–26 June 2014. [Google Scholar] [CrossRef]
  19. Antipa, N.; Kuo, G.; Heckel, R.; Mildenhall, B.; Bostan, E.; Ng, R.; Waller, L. DiffuserCam: Lensless Single-Exposure 3D Imaging. In Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), University of California, Berkeley, CA, USA, 5 October 2017; pp. 1–9. [Google Scholar] [CrossRef]
  20. Monakhova, K.; Yanny, K.; Aggarwal, N.; Waller, L. Spectral DiffuserCam: Lensless Snapshot Hyperspectral Imaging with a Spectral Filter Array. Optica 2020, 7, 1298. [Google Scholar] [CrossRef]
  21. Boominathan, V.; Adams, J.K.; Robinson, J.T.; Veeraraghavan, A. PhlatCam: Designed Phase-Mask Based Thin Lensless Camera. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1618–1629. [Google Scholar] [CrossRef]
  22. Cai, Z.; Chen, J.; Pedrini, G.; Osten, W.; Liu, X.; Peng, X. Lensless Light-Field Imaging through Diffuser Encoding. Light Sci. Appl. 2020, 9, 143. [Google Scholar] [CrossRef]
  23. Baek, N.; Lee, Y.; Kim, T.; Jung, J.; Lee, S. Lensless Polarization Camera for Single-Shot Full-Stokes Imaging. APL Photonics 2022, 7, 116107. [Google Scholar] [CrossRef]
  24. Chen, S.; Xiang, P.; Wang, H.; Xiao, J.; Shao, X.; Wang, Y. Optical-electronic neural network OENN for multi-modality and high-accurate lensless optics design and image reconstruction. Opt. Eng. 2024, 63, 013102. [Google Scholar] [CrossRef]
  25. Peng, Y.; Fu, Q.; Heide, F.; Heidrich, W. The Diffractive Achromat Full Spectrum Computational Imaging with Diffractive Optics. ACM Trans. Graph. 2016, 35, a31. [Google Scholar] [CrossRef]
  26. Zhao, X.; Fan, B.; He, Y.; Zhang, H.; Zheng, S.; Zhong, S.; Lei, J.; Yang, W.; Yang, H. Research Advances in Simple and Compact Optical Imaging Techniques. Acta Opt. Sin. 2022, 42, 1305001. [Google Scholar] [CrossRef]
  27. Dun, X.; Ikoma, H.; Wetzstein, G.; Wang, Z.S.; Cheng, X.B.; Peng, Y.F. Learned Rotationally Symmetric Diffractive Achromat for Full-Spectrum Computational Imaging. Optica 2020, 7, 913. [Google Scholar] [CrossRef]
  28. Baek, S.-H.; Ikoma, H.; Jeon, D.S.; Li, Y.Q.; Heidrich, W.; Wetzstein, G.; Kim, M.H. Single-Shot Hyperspectral-Depth Imaging with Learned Diffractive Optics. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 2631–2640. [Google Scholar] [CrossRef]
  29. Heide, F.; Fu, Q.; Peng, Y.; Heidrich, W. Encoded Diffractive Optics for Full-Spectrum Computational Imaging. Sci. Rep. 2016, 6, 33543. [Google Scholar] [CrossRef] [PubMed]
  30. Banerji, S.; Meem, M.; Majumder, A.; Dvonch, C.; Sensale-Rodriguez, B.; Menon, R. Broadband Lightweight Flat Lenses for Long-Wave Infrared Imaging. Proc. Natl. Acad. Sci. USA 2019, 116, 21375–21378. [Google Scholar] [CrossRef]
  31. Navab, N.; Joachim, H.; Wells, W.M.; Frangi, A.F. Medical Image Computing and Computer-Assisted Intervention. In Proceedings of the MICCAI 2015—18th International Conference, Munich, Germany, 5–9 October 2015; Springer: Munich, Germany, 2015; p. 234. [Google Scholar]
  32. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, J.L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762. [Google Scholar]
  33. Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in Vision: A Survey. ACM Comput. Surv. 2022, 54, 1–41. [Google Scholar] [CrossRef]
  34. Wang, Y.; Lin, Z.; Wang, H.; Hu, C.; Yang, H.; Gu, M. High generalization deep sparse pattern reconstruction: Feature extraction of speckles using a self-attention armed convolutional neural network. Opt. Express. 2021, 29, 35702–35711. [Google Scholar] [CrossRef] [PubMed]
  35. Wang, Y.; Wang, H.; Li, Y.; Hu, C.; Yang, H.; Gu, M. High accurate and direct aberration compensation using self-attention armed deep convolutional neural network. J. Microsc. 2022, 286, 13–21. [Google Scholar] [CrossRef] [PubMed]
  36. Lin, Z.; Wang, Y.; Wang, H.; Hu, C.; Yang, H.; Gu, M. Expansion of Depth-of-Field of Scattering Imaging Based on DenseNet. Acta Opt. Sin. 2022, 42, 0436001-1–0436001-4. [Google Scholar] [CrossRef]
  37. Lan, B.; Wang, H.; Wang, Y. A One-to-all Light-weight Fourier Channel Attention Convolutional Neural Network (FCACNN) for Speckle Reconstructions. JOSAA 2022, 39, 2238–2245. [Google Scholar] [CrossRef]
  38. Wang, Y.; Wang, H.; Gu, M. High performance “non-local ” generic face reconstruction model using the lightweight Speckle-Transformer (SpT) UNet. Adv. Optoelectron. 2023, 6, 220049. [Google Scholar] [CrossRef]
  39. Zuo, C.; Sun, J.; Zhang, J.; Hu, Y.; Chen, Q. Lensless Phase Microscopy and Diffraction Tomography with Multi-Angle and Multi-Wavelength Illuminations Using a LED Matrix. Opt. Express 2015, 23, 14314. [Google Scholar] [CrossRef]
  40. Wang, H.; Lyu, M.; Situ, G. eHoloNet: A Learning-Based End-to-End Approach for in-line Digital Holographic Reconstruction. Opt. Express 2018, 26, 22603–22614. [Google Scholar] [CrossRef]
  41. Wang, K.; Dou, J.; Kemao, Q.; Di, J.; Zhao, J. Y-Net: A One-to-Two Deep Learning Framework for Digital Holographic Reconstruction. Opt. Lett. 2019, 44, 4765–4768. [Google Scholar] [CrossRef]
  42. Ren, Z.; Xu, Z.; Lam, E.Y. End-to-End Deep Learning Framework for Digital Holographic Reconstruction. Adv. Photonics 2019, 1, 016004. [Google Scholar] [CrossRef]
  43. Tahara, T.; Zhang, Y.; Rosen, J.; Anand, V.; Cao, L.; Wu, J.; Koujin, T.; Matsuda, A.; Ishii, A.; Kozawa, Y.; et al. Roadmap of Incoherent Digital Holography. Appl. Phys. 2022, 128, 193. [Google Scholar] [CrossRef]
  44. Wu, Y.; Wu, J.; Jin, S.; Cao, L.; Jin, G. Dense-U-net: Dense Encoder–Decoder Network for Holographic Imaging of 3D Particle Fields. Opt. Commun. 2021, 493, 126970. [Google Scholar] [CrossRef]
  45. Ba, C.; Zhou, M.; Min, J.; Dang, S.; Yu, X.; Zhang, P.; Peng, T.; Yao, B. Robust contrast-transfer-function phase retrieval via flexible deep learning networks: Publisher’s note. Opt. Lett. 2019, 44, 5561. [Google Scholar] [CrossRef] [PubMed]
  46. Wang, F.; Bian, Y.; Wang, H.; Lyu, M.; Pedrini, G.; Osten, W.; Barbastathis, G.; Situ, G. Phase Imaging with an Untrained Neural Network. Light Sci. Appl. 2020, 9, 77. [Google Scholar] [CrossRef] [PubMed]
47. Metzler, C.; Schniter, P.; Veeraraghavan, A.; Baraniuk, R. prDeep: Robust Phase Retrieval with a Flexible Deep Network. In Proceedings of the 35th International Conference on Machine Learning (ICML 2018), Stockholm, Sweden, 10–15 July 2018; pp. 3501–3510. [Google Scholar]
  48. Li, S.; Deng, M.; Lee, J.; Sinha, A.; Barbastathis, G. Imaging through Glass Diffusers Using Densely Connected Convolutional Networks. Optica 2018, 5, 803. [Google Scholar] [CrossRef]
  49. Li, Y.; Xue, Y.; Tian, L. Deep Speckle Correlation: A Deep Learning Approach Toward Scalable Imaging through Scattering Media. Optica 2018, 5, 1181–1190. [Google Scholar] [CrossRef]
  50. Wu, Y.; Boominathan, V.; Chen, H.; Sankaranarayanan, A.; Veeraraghavan, A. PhaseCam3D—Learning Phase Masks for Passive Single View Depth Estimation. In Proceedings of the 2019 IEEE International Conference on Computational Photography (ICCP), Tokyo, Japan, 15–17 May 2019; pp. 1–12. [Google Scholar] [CrossRef]
  51. Khan, S.S.; Adarsh, V.; Boominathan, V.; Veeraraghavan, A.; Mitra, K. FlatNet: Towards Photorealistic Scene Reconstruction from Lensless Measurements. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 1934–1948. [Google Scholar] [CrossRef] [PubMed]
  52. Wu, J.; Cao, L.; Barbastathis, G. DNN-FZA Camera: A Deep Learning Approach Toward Broadband FZA Lensless Imaging. Opt. Lett. 2021, 46, 130–133. [Google Scholar] [CrossRef] [PubMed]
  53. Monakhova, K.; Yurtsever, B.J.; Kuo, G.; Antipa, N.; Yanny, K.; Waller, L. Learned Reconstructions for Practical Mask-Based Lensless Imaging. Opt. Express 2019, 27, 28075–28090. [Google Scholar] [CrossRef] [PubMed]
  54. Candès, E.J.; Wakin, M.B. An Introduction to Compressive Sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
55. Tibshirani, R. Regression Shrinkage and Selection via the Lasso. J. R. Stat. Soc. Ser. B Methodol. 1996, 58, 267–288. [Google Scholar] [CrossRef]
56. Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm with Application to Wavelet-Based Image Deblurring. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 693–696. [Google Scholar] [CrossRef]
  57. Chambolle, A.; Thomas, P. A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging. J. Math. Imaging Vis. 2011, 40, 120–145. [Google Scholar] [CrossRef]
  58. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning Via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  59. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A New Alternating Minimization Algorithm for Total Variation Image Reconstruction. SIAM J. Imaging Sci. 2008, 1, 248–272. [Google Scholar] [CrossRef]
  60. Adams, J.K.; Yan, D.; Wu, J.; Boominathan, V.; Gao, S.; Rodriguez, A.V.; Kim, S.; Carns, J.; Richards-Kortum, R.; Kemere, C. In Vivo Lensless Microscopy Via a Phase Mask Generating Diffraction Patterns with High-Contrast Contours. Nat. Biomed. Eng. 2022, 6, 617–628. [Google Scholar] [CrossRef] [PubMed]
61. Diamond, S.; Sitzmann, V.; Boyd, S.; Wetzstein, G.; Heide, F. Dirty Pixels: Optimizing Image Classification Architectures for Raw Sensor Data. arXiv 2017, arXiv:1701.06487. [Google Scholar] [CrossRef]
  62. Diamond, S.; Sitzmann, V.; Heide, F.; Wetzstein, G. Unrolled Optimization with Deep Priors. arXiv 2017, arXiv:1705.08041. [Google Scholar] [CrossRef]
  63. Chang, J.; Wetzstein, G. Deep Optics for Monocular Depth Estimation and 3D Object Detection. In Proceedings of the Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 10192–10201. [Google Scholar] [CrossRef]
  64. Ikoma, H.; Nguyen, C.M.; Metzler, C.A.; Peng, Y.; Wetzstein, G. Depth from Defocus with Learned Optics for Imaging and Occlusion-Aware Depth Estimation. In Proceedings of the 2021 IEEE International Conference on Computational Photography (ICCP), Haifa, Israel, 23–25 May 2021; pp. 1–12. [Google Scholar] [CrossRef]
  65. Martel, J.N.P.; Muller, L.K.; Carey, S.J.; Dudek, P.; Wetzstein, G. Neural Sensors: Optimizing Pixel Exposures for HDR Imaging and Video Compressive Sensing with Programmable Sensor. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 1642–1653. [Google Scholar] [CrossRef] [PubMed]
  66. Li, Y.; Qi, M.; Gulve, R.; Wei, M.; Genov, R.; Kutulakos, K.N.; Heidrich, W. End-to-End Video Compressive Sensing Using Anderson-Accelerated Unrolled Networks. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), St. Louis, MO, USA, 24–26 April 2020; pp. 1–12. [Google Scholar] [CrossRef]
  67. Peng, Y.; Choi, S.; Kim, J.; Wetzstein, G. Speckle-Free Holography with Partially-Coherent Light Sources and Camera-in-the-Loop Training. Sci. Adv. 2021, 7, eabg5040. [Google Scholar] [CrossRef] [PubMed]
  68. Choi, S.; Gopakumar, M.; Peng, Y.; Kim, J.; Wetzstein, G. Neural 3D Holography: Learning Accurate Wave Propagation Models for 3D Holographic Virtual and Augmented Reality Displays. ACM Trans. Graph. 2021, 40, 1–12. [Google Scholar] [CrossRef]
69. Choi, S.; Gopakumar, M.; Peng, Y.; Kim, J.; O’Toole, M.; Wetzstein, G. Time-Multiplexed Neural Holography: A Flexible Framework for Holographic Near-eye Displays with Fast Heavily-quantized Spatial Light Modulators. In Proceedings of the ACM SIGGRAPH 2022 Conference, Vancouver, BC, Canada, 8–11 August 2022; pp. 1–9. [Google Scholar] [CrossRef]
  70. Chang, J.; Sitzmann, V.; Dun, X.; Heidrich, W.; Wetzstein, G. Hybrid Optical-Electronic Convolutional Neural Networks with Diffractive Optics for Image Classification. Sci. Rep. 2018, 8, 12324. [Google Scholar] [CrossRef]
  71. Lin, X.; Rivenson, Y.; Yardimci, N.T.; Veli, M.; Luo, Y.; Jarrahi, M.; Ozcan, A. All-Optical Machine Learning Using Diffractive Deep Neural Networks. Science 2018, 361, 1004–1008. [Google Scholar] [CrossRef]
  72. Li, J.; Mengu, D.; Luo, Y.; Rivenson, Y.; Ozcan, A. Class-Specific Differential Detection in Diffractive Optical Neural Networks Improves Inference Accuracy. Adv. Photonics 2019, 1, 046001. [Google Scholar] [CrossRef]
  73. Rahman, M.S.S.; Li, J.; Mengu, D.; Rivenson, Y.; Ozcan, A. Ensemble Learning of Diffractive Optical Networks. Light Sci. Appl. 2021, 10, 14. [Google Scholar] [CrossRef]
74. Li, J.; Mengu, D.; Yardimci, N.T.; Luo, Y.; Li, X.; Veli, M.; Rivenson, Y. Spectrally Encoded Single-Pixel Machine Vision Using Diffractive Networks. Sci. Adv. 2021, 7, eabd7690. [Google Scholar] [CrossRef]
  75. Bai, B.; Li, Y.; Luo, Y.; Li, X.; Çetintaş, E.; Jarrahi, M.; Ozcan, A. All-Optical Image Classification through Unknown Random Diffusers Using a Single-Pixel Diffractive Network. Light Sci. Appl. 2023, 12, 69. [Google Scholar] [CrossRef]
  76. Mengu, D.; Ozcan, A. All-Optical Phase Recovery: Diffractive Computing for Quantitative Phase Imaging. Adv. Opt. Mater. 2022, 10, 2200281. [Google Scholar] [CrossRef]
  77. Shen, C.-Y.; Li, J.; Mengu, D.; Ozcan, A. Multispectral Quantitative Phase Imaging Using a Diffractive Optical Network. Adv. Intell. Syst. 2023, 5, 2300300. [Google Scholar] [CrossRef]
  78. Rahman, M.S.S.; Yang, X.; Li, J.; Bai, B.; Ozcan, A. Universal Linear Intensity Transformations Using Spatially Incoherent Diffractive Processors. Light Sci. Appl. 2023, 12, 195. [Google Scholar] [CrossRef]
  79. Li, J.; Gan, T.; Bai, B.; Luo, Y.; Jarrahi, M.; Ozcan, A. Massively Parallel Universal Linear Transformations Using a Wavelength-Multiplexed Diffractive Optical Network. Adv. Photonics 2023, 5, 016003. [Google Scholar] [CrossRef]
80. Li, Y.; Li, J.; Zhao, Y.; Gan, T.; Hu, J.; Jarrahi, M.; Ozcan, A. Universal Polarization Transformations: Spatial Programming of Polarization Scattering Matrices Using a Deep Learning-Designed Diffractive Polarization Transformer. Adv. Mater. 2023, 35, e2303395. [Google Scholar] [CrossRef] [PubMed]
  81. Bai, B.; Wei, H.; Yang, X.; Gan, T.; Mengu, D.; Jarrahi, M.; Ozcan, A. Data-Class-Specific All-Optical Transformations and Encryption. Adv. Mater. 2023, 35, 2212091. [Google Scholar] [CrossRef]
  82. Bai, B.; Luo, Y.; Gan, T.; Hu, J.; Li, Y.; Zhao, Y.; Mengu, D.; Jarrahi, M.; Ozcan, A. To Image, or not to Image: Class-Specific Diffractive Cameras with All-Optical Erasure of Undesired Objects. eLight 2022, 2, 14. [Google Scholar] [CrossRef]
  83. Mengu, D.; Zhao, Y.; Tabassum, A.; Jarrahi, M.; Ozcan, A. Diffractive Interconnects: All-Optical Permutation Operation Using Diffractive Networks. Nanophotonics 2022, 12, 905–923. [Google Scholar] [CrossRef]
  84. Li, Y.; Luo, Y.; Mengu, D.; Bai, B.; Ozcan, A. Quantitative Phase Imaging (QPI) through Random Diffusers Using a Diffractive Optical Network. Light Adv. Manuf. 2023, 4, 206–221. [Google Scholar] [CrossRef]
  85. Luo, Y.; Zhao, Y.; Li, J.; Çetintaş, E.; Rivenson, Y.; Jarrahi, M.; Ozcan, A. Computational Imaging without a Computer: Seeing through Random Diffusers at The Speed of Light. eLight 2022, 2, 4. [Google Scholar] [CrossRef]
  86. Li, J.; Gan, T.; Zhao, Y.; Bai, B.; Shen, C.; Sun, S.; Jarrahi, M.; Ozcan, A. Unidirectional Imaging Using Deep Learning-Designed Materials. Sci. Adv. 2023, 9, eadg1505. [Google Scholar] [CrossRef] [PubMed]
  87. Mengu, D.; Tabassum, A.; Jarrahi, M.; Ozcan, A. Snapshot Multispectral Imaging Using a Diffractive Optical Network. Light Sci. Appl. 2023, 12, 86. [Google Scholar] [CrossRef] [PubMed]
88. Rahman, M.S.S.; Ozcan, A. Computer-Free, All-Optical Reconstruction of Holograms Using Diffractive Networks. ACS Photonics 2021, 8, 3375–3384. [Google Scholar] [CrossRef]
  89. Huang, Z.; Wang, P.; Liu, J.; Xiong, W.; He, Y.; Xiao, J.; Ye, H.; Li, Y.; Chen, S.; Fan, D. All-Optical Signal Processing of Vortex Beams with Diffractive Deep Neural Networks. Phys. Rev. Appl. 2021, 15, 014037. [Google Scholar] [CrossRef]
  90. Zhu, H.H.; Zou, J.; Zhang, H.; Shi, Y.; Luo, S.B.; Wang, N.; Cai, H.; Wan, L.; Wang, B.; Jiang, X.; et al. Space-Efficient Optical Computing with an Integrated Chip Diffractive Neural Network. Nat. Commun. 2022, 13, 1044. [Google Scholar] [CrossRef]
  91. Goi, E.; Schoenhardt, S.; Gu, M. Direct Retrieval of Zernike-Based Pupil Functions Using Integrated Diffractive Deep Neural Networks. Nat. Commun. 2022, 13, 7531. [Google Scholar] [CrossRef]
  92. Liu, C.; Ma, Q.; Luo, Z.; Hong, Q.; Xiao, Q.; Zhang, H.; Miao, L.; Yu, W.; Cheng, Q.; Li, L.; et al. A Programmable Diffractive Deep Neural Network Based on a Digital-Coding Metasurface Array. Nat. Electron. 2022, 5, 113–122. [Google Scholar] [CrossRef]
  93. Luo, X.; Hu, Y.; Ou, X.; Li, X.; Lai, J.; Liu, N.; Cheng, X.; Pan, A.; Duan, H. Metasurface-Enabled on-Chip Multiplexed Diffractive Neural Networks in the Visible. Light Sci. Appl. 2022, 11, 158. [Google Scholar] [CrossRef]
  94. Shi, W.; Huang, Z.; Huang, H.; Hu, C.; Chen, M.; Yang, S.; Chen, H. LOEN: Lensless Opto-Electronic Neural Network Empowered Machine Vision. Light Sci. Appl. 2022, 11, 121. [Google Scholar] [CrossRef]
  95. Wang, T.; Sohoni, M.M.; Wright, L.G.; Stein, M.M.; Ma, S.Y.; Onodera, T.; Anderson, M.G.; McMahon, P.L. Image sensing with multilayer, nonlinear optical neural networks. Nat. Photonics 2023, 17, 408–415. [Google Scholar] [CrossRef]
  96. Durán, V.; Clemente, P.; Fernández-Alonso, M.; Tajahuerce, E.; Lancis, J. Single-pixel polarimetric imaging. Opt. Lett. 2012, 37, 824–826. [Google Scholar] [CrossRef] [PubMed]
  97. Tajahuerce, E.; Durán, V.; Clemente, P.; Irles, E.; Soldevila, F.; Andrés, P.; Lancis, J. Image transmission through dynamic scattering media by single-pixel photodetection. Opt. Express 2014, 22, 16945–16955. [Google Scholar] [CrossRef]
  98. Soldevila, F.; Durán, V.; Clemente, P.; Lancis, J.; Tajahuerce, E. Phase imaging by spatial wavefront sampling. Optica 2018, 5, 164–174. [Google Scholar] [CrossRef]
  99. Jiang, H.; Huang, G.; Wilford, P. Multi-view in lensless compressive imaging. APSIPA Trans. Signal Inf. Process. 2014, 3, 15. [Google Scholar] [CrossRef]
100. Carvalho, M.; Le Saux, B.; Trouvé-Peloux, P.; Almansa, A.; Champagnat, F. Multi-Task Learning of Height and Semantics from Aerial Images. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1391–1395. [Google Scholar] [CrossRef]
101. Liu, X.; Li, L.; Liu, X.; Hao, X.; Peng, Y. Investigating Deep Optics Model Representation in Affecting Resolved All-in-Focus Image Quality and Depth Estimation Fidelity. Opt. Express 2022, 30, 36973. [Google Scholar] [CrossRef] [PubMed]
  102. Tian, F.; Yang, W. Learned lensless 3D camera. Opt. Express 2022, 30, 34479. [Google Scholar] [CrossRef]
  103. Green, L.A. Improving hyperspectral imaging using a lensless camera. Scilight 2023, 2023, 261102. [Google Scholar] [CrossRef]
104. Harris, J.L. Diffraction and Resolving Power. J. Opt. Soc. Am. 1964, 54, 931–936. [Google Scholar] [CrossRef]
  105. Tsai, R.; Huang, T.S. Multiframe Image Restoration and Registration. Adv. Comput. Vis. Image Process. 1984, 1, 317–339. [Google Scholar]
  106. Sitzmann, V.; Diamond, S.; Peng, Y.F.; Dun, X.; Boyd, S.; Heidrich, W.; Heide, F.; Wetzstein, G. End-to-End Optimization of Optics and Image Processing for Achromatic Extended Depth of Field and Super-Resolution Imaging. ACM Trans. Graph. 2018, 37, 114. [Google Scholar] [CrossRef]
  107. Venkataraman, K.; Lelescu, D.; Duparre, J.; McMahon, A.; Molina, G.; Chatterjee, P.; Mullis, R.; Nayar, S. PiCam: An Ultra-Thin High Performance Monolithic Camera Array. ACM Trans. Graph. 2013, 32, 1–13. [Google Scholar] [CrossRef]
  108. Wu, J.; Guo, Y.; Deng, C.; Zhang, A.; Qiao, H.; Lu, Z.; Xie, J.; Fang, L.; Dai, Q. An Integrated Imaging Sensor for Aberration-Corrected 3D Photography. Nature 2022, 612, 62–71. [Google Scholar] [CrossRef] [PubMed]
  109. Hu, Z.Y.; Zhang, Y.L.; Pan, C.; Dou, J.Y.; Li, Z.Z.; Tian, Z.N.; Mao, J.W.; Chen, Q.D.; Sun, H.B. Miniature optoelectronic compound eye camera. Nat. Commun. 2022, 13, 5634. [Google Scholar] [CrossRef] [PubMed]
110. Wang, Y.; Lin, J.; Zhang, Q.; Chen, X.; Luan, H.; Gu, M. Fluorescence Nanoscopy in Neuroscience. Engineering 2022, 16, 29–38. [Google Scholar] [CrossRef]
  111. Ozcan, A.; Demirci, U. Ultra Wide-Field Lens-Free Monitoring of Cells on-Chip. Lab Chip 2008, 8, 98–106. [Google Scholar] [CrossRef] [PubMed]
  112. Bishara, W.; Su, T.W.; Coskun, A.F.; Ozcan, A. Lensfree on-Chip Microscopy Over a Wide Field-of-View Using Pixel Super-Resolution. Opt. Express 2010, 18, 11181–11191. [Google Scholar] [CrossRef]
  113. Mudanyali, O.; Tseng, D.; Oh, C.; Isikman, S.O.; Sencan, I.; Bishara, W.; Oztoprak, C.; Seo, S.; Khademhosseini, B.; Ozcan, A. Compact, Light-Weight and Cost-Effective Microscope Based on Lensless Incoherent Holography for Telemedicine Applications. Lab Chip 2010, 10, 1417–1428. [Google Scholar] [CrossRef] [PubMed]
114. Jiang, S.; Zhu, J.; Song, P.; Guo, C.; Bian, Z.; Wang, R.; Huang, Y.; Wang, S.; Zhang, H.; Zheng, G. Wide-Field, High-Resolution Lensless on-Chip Microscopy via Near-Field Blind Ptychographic Modulation. Lab Chip 2020, 20, 1058–1065. [Google Scholar] [CrossRef]
  115. Sanz, M.; Picazo-Bueno, J.Á.; Granero, L.; Garciá, J.; Micó, V. Compact, Cost-Effective and Field-Portable Microscope Prototype Based on MISHELF Microscopy. Sci. Rep. 2017, 7, 43291. [Google Scholar] [CrossRef] [PubMed]
  116. Tobon-Maya, H.; Zapata-Valencia, S.; Zora-Guzmán, E.; Buitrago-Duque, C.; Garcia-Sucerquia, J. Open-Source, Cost-Effective, Portable, 3D-Printed Digital Lensless Holographic Microscope. Appl. Opt. 2021, 60, A205–A214. [Google Scholar] [CrossRef] [PubMed]
  117. Guo, C.; Jiang, S.; Yang, L.; Song, P.; Pirhanov, A.; Wang, R.; Wang, T.; Shao, X.; Wu, Q.; Cho, Y.K.; et al. Depth-Multiplexed Ptychographic Microscopy for High-Throughput Imaging of Stacked Bio-Specimens on a Chip. Biosens. Bioelectron. 2023, 224, 115049. [Google Scholar] [CrossRef] [PubMed]
  118. Kuo, G.; Liu, F.L.; Grossrubatscher, I.; Ng, R.; Waller, L. On-Chip Fluorescence Microscopy with a Random Microlens Diffuser. Opt. Express 2020, 28, 8384–8399. [Google Scholar] [CrossRef] [PubMed]
  119. Thompson, A.J.; Paterson, C.; Neil, M.A.A.; Dunsby, C.; French, P.M.W. Adaptive Phase Compensation for Ultracompact Laser Scanning Endomicroscopy. Opt. Lett. 2011, 36, 1707–1709. [Google Scholar] [CrossRef] [PubMed]
  120. Cizmar, T.; Dholakia, K. Shaping the Light Transmission through a Multimode Optical Fibre: Complex Transformation Analysis and Applications in Biophotonics. Opt. Express 2011, 19, 18871–18884. [Google Scholar] [CrossRef] [PubMed]
  121. Choi, Y.; Yoon, C.; Kim, M.; Yang, T.D.; Fang-Yen, C.; Dasari, R.R.; Lee, K.J.; Choi, W. Scanner-Free and Wide-Field Endoscopic Imaging by Using a Single Multimode Optical Fiber. Phys. Rev. Lett. 2012, 109, 203901. [Google Scholar] [CrossRef] [PubMed]
  122. Andresen, E.R.; Bouwmans, G.; Monneret, S.; Rigneault, H. Toward Endoscopes with No Distal Optics: Video-Rate Scanning Microscopy through a Fiber Bundle. Opt. Lett. 2013, 38, 609–611. [Google Scholar] [CrossRef]
  123. Ohayon, S.; Caravaca-Aguirre, A.M.; Piestun, R.; DiCarlo, J.J. Minimally Invasive Multimode Optical Fiber Microendoscope for Deep Brain Fluorescence Imaging. Biomed. Opt. Express 2018, 9, 1492–1509. [Google Scholar] [CrossRef]
  124. Vasquez-Lopez, S.; Turcotte, R.; Koren, V.; Plöschner, M.; Padamsey, Z.; Booth, M.J.; Cizmar, T.; Emptage, N. Subcellular Spatial Resolution Achieved for Deep-Brain Imaging In Vivo Using a Minimally Invasive Multimode Fiber. Light Sci. Appl. 2018, 7, 110. [Google Scholar] [CrossRef]
  125. Sun, J.; Wu, J.; Wu, S.; Goswami, R.; Girardo, S.; Cao, L.; Guck, J.; Koukourakis, N.; Czarske, J.W. Quantitative Phase Imaging through an Ultra-thin Lensless Fiber Endoscope. Light Sci. Appl. 2022, 11, 204. [Google Scholar] [CrossRef]
  126. Kuschmierz, R.; Scharf, E.; Ortegón-González, D.F.; Glosemeyer, T.; Czarske, J.W. Ultra-thin 3D Lensless Fiber Endoscopy Using Diffractive Optical Elements and Deep Neural Networks. Light Adv. Manuf. 2021, 2, 30. [Google Scholar] [CrossRef]
  127. Hao, J.; Lin, X.; Lin, Y.; Song, H.; Chen, R.; Chen, M.; Wang, K.; Tan, X. Lensless phase retrieval based on deep learning used in holographic data storage. Opt. Lett. 2021, 46, 4168–4171. [Google Scholar] [CrossRef] [PubMed]
128. Hao, J.Y.; Lin, X.; Lin, Y.K.; Chen, M.Y.; Chen, R.X.; Situ, G.; Horimai, H.; Tan, X. Lensless complex amplitude demodulation based on deep learning in holographic data storage. Opto-Electron. Adv. 2023, 6, 220157. [Google Scholar] [CrossRef]
  129. Sui, L.; Zhang, X.; Tian, A. Multiple-Image Hiding Based on Cascaded Free-Space Wave Propagation Using the Structured Phase Mask for Lensless Optical Security System. IEEE Photonics J. 2017, 9, 1–14. [Google Scholar] [CrossRef]
Figure 2. Data-driven approaches to reconstruct lensless images: (a) modified from [50], (b) modified from [51], (c) modified from [28], (d) modified from [17], (e) modified from [2], (f) modified from [52], and (g) modified from [53].
Figure 3. Illustration of a deep-optics-designed system.
Figure 4. Schematic of diffractive deep neural networks (D2NNs).
Figure 5. The main development and application areas of mask-modulated lensless imaging.
Figure 6. Mask-modulated lensless microscopic imaging. (a) Modified from [60], (b) modified from [117], and (c) modified from [118].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
