Open Access
28 November 2023

Fast and robust Fourier ptychographic microscopy with position misalignment correction
Zicong Luo, Ruofei Wu, Hanbao Chen, Junrui Zhen, Mingdi Liu, Haiqi Zhang, Jiaxiong Luo, Dingan Han, Lisong Yan, Yanxiong Wu
Abstract

Significance

Fourier ptychographic microscopy (FPM) is a developing computational imaging technology. It can realize wide-field and high-resolution (HR) quantitative phase imaging simultaneously by means of multi-angle illumination from a light emitting diode (LED) array combined with a phase recovery algorithm and the synthetic aperture principle. However, in the FPM reconstruction process, LED position misalignment degrades the quality of the reconstructed image, and the reconstruction efficiency of existing LED position correction algorithms needs to be improved.

Aim

This study aims to improve the FPM correction method based on simulated annealing (SA) and proposes a position misalignment correction method (AA-C algorithm) using an improved phase recovery strategy.

Approach

The spectrum function update strategy was optimized by adding an adaptive control factor, and the reconstruction efficiency of the algorithm was improved.

Results

The experimental results show that the proposed method is effective and robust for position misalignment correction of LED arrays in FPM, and the convergence speed can be improved by 21.2% and 54.9% compared with SC-FPM and PC-FPM, respectively.

Conclusions

These results can reduce the requirement of the FPM system for LED array accuracy and improve robustness.

1.

Introduction

Optical microscopic imaging is an important technical tool in many fields, such as life sciences and biomedicine. Traditional optical microscopes are limited by the space-bandwidth product (SBP)1 of the optical imaging system and cannot achieve a large field of view (FOV) and high-resolution (HR) imaging at the same time. Fourier ptychographic microscopy (FPM) is an emerging computational imaging technology that typically combines an optical microscope with a programmable light emitting diode (LED) illumination array and uses phase recovery2–4 and the synthetic aperture principle5–8 to restore the HR intensity and phase image of the sample. To a certain extent, it overcomes the difficulty traditional optical microscopes face in balancing a large FOV against HR imaging and thus constitutes an improvement over traditional optical microscopy. In contrast to traditional optical microscopes, FPM transfers high-frequency information of the object that initially surpasses the system cutoff frequency into the passband of the system through multi-angle illumination,9 collects a series of low-resolution (LR) images at the camera side, and then uses a phase recovery algorithm to stitch them in the frequency domain to achieve a large SBP and quantitative phase imaging.10 The sum of the numerical aperture of the objective (NAobj) and the numerical aperture of illumination (NAillu) determines the reconstruction resolution.11 Because simultaneously achieving a large FOV and HR imaging is significant for microscopic imaging, FPM-related theories and technologies have been widely researched and applied in the fields of digital microscopy and life sciences.12–15

In an FPM system, LEDs at different positions on the LED array generate plane waves with different incident angles to illuminate the sample. The spatial position of each LED directly determines the position of the corresponding LR image in the frequency domain;16 that is, each LED corresponds to a sub-aperture in the frequency domain. However, in the process of production and assembly, there will inevitably be a certain position error in the LED array, so the collected light-intensity information and the ideal LED positions used in the reconstruction process cannot fully correspond; this transmits incorrect information into the captured LR images and affects the subsequent reconstruction. When the LED array positions are offset, artifacts and wrinkles appear in the reconstructed image, degrading its quality. Therefore, the study of LED position misalignment correction is of great significance for improving the quality of reconstructed images. Several position-correction methods have been proposed to solve this problem. In 2016, Sun et al. proposed PC-FPM, based on simulated annealing (SA) and non-linear regression, which can correct LED position errors, and also introduced the LED array global position misalignment model;17 however, its speed still needs to be improved. Pan et al. proposed SC-FPM, based on the SA algorithm, LED intensity correction, and an adaptive step-size strategy, to correct the mixed system error and improve the robustness of FPM against it.18 In 2018, Chen et al. proposed rpcFPM, which uses feedback parameters and objective-function constraints to correct the random position error of each LED and obtain good initial guesses;19 however, its performance is limited by the ePIE algorithm20 and may only reach local optima. Zhou et al. also proposed the mcFPM method, based on a spatial-domain search using the SA algorithm, which isolates the update of the misalignment parameters from the FPM iteration and uses the global position misalignment model to correct global offsets;21 however, it cannot correct the height factor, and when there are many parameters, the accuracy and speed of global convergence are insufficient. Zhao et al. proposed a trainable neural network for position correction and image reconstruction, using the real and imaginary parts and the different position errors of each aperture as the weights of the convolution layer to correct the aperture position deviation;22 however, this method is still relatively time-consuming. An LED position misalignment correction algorithm with faster convergence and more robust performance is therefore needed, and this sets the direction of the present work.

In our previous study, we improved the conventional FPM reconstruction algorithm; the result is referred to as the AA algorithm.23 The AA algorithm improved the convergence rate and noise robustness of the reconstruction; however, it did not consider the influence of LED position misalignment on the reconstructed image. This study therefore improves the SA-based FPM position correction method and proposes a position misalignment correction method (AA-C) with an improved phase recovery strategy: the spectrum update is optimized by calculating a noise threshold and introducing an adaptive control factor while the position correction is performed simultaneously, so the method converges faster and obtains better reconstructions than traditional correction methods. Simulations and real experiments demonstrate the effectiveness and robustness of the improved method for LED position misalignment correction: it not only speeds up convergence but also reduces the adverse impact of LED position misalignment on image reconstruction, lowers the positioning-accuracy requirements for LED arrays in FPM systems, and improves the robustness of the FPM system.

The remainder of this paper is organized as follows. We introduce the basic model of the FPM and LED array position model in Sec. 2.1. Then, we introduce the flow and specific steps of the proposed AA-C algorithm in Sec. 2.2. We verify the effectiveness of the AA-C algorithm by simulations and practical experiments in Sec. 3. Finally, we summarize the conclusions in Sec. 4.

2.

Principles and Methods

2.1.

FPM Forward Imaging and LED Position Misalignment Model

Figure 1(a) shows the FPM forward imaging model. A typical FPM setup mainly includes an LED array, the sample, a low-NA objective lens, and a camera. The FPM imaging process consists of two parts: front-end image acquisition and back-end data reconstruction. During image acquisition, plane-wave illumination at different angles is provided by lighting the LEDs sequentially, and the camera at the imaging end captures the corresponding LR images, which are then used for image reconstruction. The dotted circles in Fig. 1(b) represent the sub-spectrum information of the sample under plane-wave illumination at different angles.

Fig. 1

FPM model and imaging process and schematic diagram of LED position misalignment. (a) FPM forward imaging model. (b) Schematic diagram of sub-spectrum information splicing in Fourier domain. (c) Schematic diagram of LED position misalignment. The black solid line coordinate axis is the ideal coordinate axis, the blue dashed line is the actual coordinate axis, and the green LED is the center LED of the LED array.


First, we establish a position model for the LED array. As shown in Fig. 1(a), the LED array is placed in a plane perpendicular to the optical axis, and the distance between neighboring LEDs in the array is assumed to be identical. Ideally, the parameters in FPM are accurate; in practice, however, they are biased. Four position factors are defined to determine the position of each LED element:17 the rotation factor θ, the position deviation factors Δx and Δy along the x and y axes, and the perpendicular distance h from the LED array to the sample, as shown schematically in Fig. 1(c). The position of each LED element can then be expressed as

Eq. (1)

$$\begin{bmatrix} x_{m,n} \\ y_{m,n} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} m\,d_{\mathrm{LED}} \\ n\,d_{\mathrm{LED}} \end{bmatrix} + \begin{bmatrix} \Delta x \\ \Delta y \end{bmatrix},$$
where $x_{m,n}$ and $y_{m,n}$ represent the position of the LED element in row $m$ and column $n$, and $d_{\mathrm{LED}}$ is the distance between adjacent LEDs. For a subregion of the sample, the illumination wave vector $(u_{m,n}, v_{m,n})$ from $\mathrm{LED}_{m,n}$ can be expressed as

Eq. (2)

$$u_{m,n} = k\,\frac{x_o - x_{m,n}}{\sqrt{(x_o - x_{m,n})^2 + (y_o - y_{m,n})^2 + h^2}}, \qquad v_{m,n} = k\,\frac{y_o - y_{m,n}}{\sqrt{(x_o - x_{m,n})^2 + (y_o - y_{m,n})^2 + h^2}},$$
where $k = 2\pi/\lambda$, $\lambda$ is the center wavelength of the LEDs, and $(x_o, y_o)$ is the position coordinate of the center of the reconstructed small region. When a plane wave with wave vector $(u_{m,n}, v_{m,n})$ from $\mathrm{LED}_{m,n}$ irradiates the sample, the spectrum of the complex amplitude of the light-wave field on the camera-receiving surface can be expressed as

Eq. (3)

$$\phi_{m,n}(u,v) = O(u - u_{m,n},\, v - v_{m,n}) \cdot P(u,v),$$
where $O(u,v)$ is the HR spectrum of the object function and $P(u,v)$ is the pupil function of the objective, which can be considered a low-pass filter: it is 1 inside the passband and 0 outside, and $(u,v)$ are the frequency-domain coordinates. The captured intensity image can be expressed as

Eq. (4)

$$I_{m,n}(x,y) = \left| \mathcal{F}^{-1}\left[ \phi_{m,n}(u,v) \right] \right|^2.$$

Here, $\mathcal{F}^{-1}$ denotes the inverse Fourier transform. In the image reconstruction process, the LR images captured above constitute the amplitude constraints of FPM in the spatial domain, and $P(u,v)$ serves as the spectrum support-domain constraint in the frequency domain. FPM stitches the LR images collected under different illumination angles together in the frequency domain and obtains the HR complex amplitude of the object through iterative convergence.
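The forward model of Eqs. (1)–(4) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the mapping of the wave vector to an integer spectrum-pixel shift, the function names, and all default parameter values are assumptions for demonstration.

```python
import numpy as np

def led_position(m, n, d_led=4e-3, theta=0.0, dx=0.0, dy=0.0):
    """Eq. (1): ideal grid position rotated by theta and shifted by (dx, dy)."""
    c, s = np.cos(theta), np.sin(theta)
    x = m * d_led * c - n * d_led * s + dx
    y = m * d_led * s + n * d_led * c + dy
    return x, y

def wave_vector(x_led, y_led, h, x_o=0.0, y_o=0.0, wavelength=531e-9):
    """Eq. (2): illumination wave vector (u, v) seen by a sub-region at (x_o, y_o)."""
    k = 2.0 * np.pi / wavelength
    r = np.sqrt((x_o - x_led) ** 2 + (y_o - y_led) ** 2 + h ** 2)
    return k * (x_o - x_led) / r, k * (y_o - y_led) / r

def lr_intensity(obj_spectrum, pupil, shift_px):
    """Eqs. (3)-(4): re-centre the object spectrum by the (discretized) wave
    vector, crop with the pupil, and take the inverse-FFT intensity."""
    du, dv = shift_px  # wave vector expressed in spectrum pixels (assumption)
    shifted = np.roll(obj_spectrum, (-dv, -du), axis=(0, 1))
    return np.abs(np.fft.ifft2(shifted * pupil)) ** 2
```

In a full simulation, `lr_intensity` would be evaluated once per LED, with `shift_px` derived from `wave_vector` and the spectrum sampling interval.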

2.2.

Flow of the Proposed Algorithm

A flowchart of the proposed method is shown in Fig. 2. The reconstruction algorithm is mainly divided into four modules: (1) the improved threshold denoising module; (2) the simulated annealing module, which, during reconstruction, searches for the optimal frequency-aperture position for the subsequent spectrum update; (3) the improved phase recovery strategy module, in which the spectrum update is optimized by introducing an adaptive control factor; and (4) the non-linear regression module, used to fit and estimate the global position factors and thereby avoid confusion and disorder in the corrected LED positions.

Fig. 2

Flowchart of the AA-C algorithm.


The following are the specific steps and process of algorithm implementation:

  • Step 1: Initialization: Before running the iterative algorithm, take the LR image captured under illumination by the central LED and apply the Fourier transform to obtain the initial HR spectrum guess $O(u,v)$ of the sample. The pupil function $P(u,v)$ remains constant during the iterative update. $O_i(u,v)$ denotes the sample spectrum at the $i$'th iteration.

  • Step 2: Use the pupil function to intercept information from the HR spectrum of the sample (equivalent to low-pass filtering), and then apply the inverse Fourier transform to return to the spatial domain and generate the target complex amplitude image $\sqrt{I^{t}_{m,n}}\,\exp(i\varphi^{t}_{m,n})$, where the superscript $t$ denotes the target image. Noise reduction is then performed by calculating the noise threshold, defined as the difference between the arithmetic means of the actually acquired image and the target image:24

    Eq. (5)

    $$\mathrm{Threshold}_{m,n} = \overline{I_{m,n}(x,y)} - \overline{I^{t}_{m,n}(x,y)},$$
    where $I_{m,n}(x,y)$ and $I^{t}_{m,n}(x,y)$ represent the actually acquired image and the target image, respectively. The noise threshold calculated using Eq. (5) is then subtracted from the actually acquired image for noise reduction:

    Eq. (6)

    $$I^{d}_{m,n}(x,y) = I_{m,n}(x,y) - \mathrm{Threshold}_{m,n},$$
    where $I^{d}_{m,n}(x,y)$ represents the denoised image, which is used to update the amplitude of the LR estimated light field in step 4.
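The threshold denoising of Eqs. (5)–(6) can be written compactly (a minimal NumPy sketch; the function name is illustrative):

```python
import numpy as np

def threshold_denoise(i_captured, i_target):
    """Eqs. (5)-(6): estimate the noise level as the difference of the
    arithmetic means of the captured and target images, then subtract it."""
    threshold = i_captured.mean() - i_target.mean()  # Eq. (5)
    i_denoised = i_captured - threshold              # Eq. (6)
    return i_denoised, threshold
```

In practice one may additionally clip negative pixels to zero after the subtraction; that detail is omitted here because it is not stated in Eq. (6).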

  • Step 3: In each iteration, the angle corresponding to each LED is processed (i.e., the LR image corresponding to each LED is updated) before one iteration of the algorithm is complete. In the initial iterations, however, we generally use only bright-field (BF) images because they contain relatively more low-frequency information, which is more important for image reconstruction; BF images are also less affected by noise, so more accurate parameters can be obtained at the beginning. We therefore first iterate over the images with low NAillu to correct the positions of the low-frequency apertures in the frequency domain. For example, in the simulation we used an 11×11 LED array and, in the first 10 iterations, used only the LR images corresponding to the central 5×5 LEDs. $O_i(u,v)$ must be re-initialized at the end of each initial iteration. After the 10 initial iterations, all 121 captured images are used in subsequent iterations for more accurate position correction. We define the LED update range $S_i$ as

    Eq. (7)

    $$S_i = \begin{cases} \{(m,n) \mid m = -2, \ldots, 2,\; n = -2, \ldots, 2\}, & i \le 10 \\ \{(m,n) \mid m = -5, \ldots, 5,\; n = -5, \ldots, 5\}, & \text{otherwise}. \end{cases}$$

Then, in the FPM reconstruction process, the SA algorithm25 is used to search the frequency domain for the optimal position of the frequency aperture and thereby correct the incident wave vector of the LED. From Eq. (2), the incident wave vector $(u_{m,n}, v_{m,n})$ is related to the position of the LED element $(x_{m,n}, y_{m,n})$ and the distance $h$ from the LED array to the sample; from Eq. (1), the position $(x_{m,n}, y_{m,n})$ depends on the rotation factor $\theta$ and the positional deviations $\Delta x$ and $\Delta y$ along the $x$ and $y$ axes. Therefore, searching for the optimal frequency aperture is, in fact, searching for the optimal solution of the LED positional deviation factors $(\theta, \Delta x, \Delta y, h)$. First, we calculate a group of further frequency-aperture estimates $\varphi^{e}_{r,i,m,n}(u,v)$, with $r \in \{1, 2, \ldots, 8\}$ representing eight different frequency-shift directions (i.e., a random search in eight directions), each with a random frequency offset $(\Delta u_{r,m,n}, \Delta v_{r,m,n})$. Here, we set the initial SA search step to $\Delta_{u,v} = 6$, a predefined value that gradually decreases as the iterations proceed. The $r$'th estimate of the frequency aperture is given as

Eq. (8)

$$\varphi^{e}_{r,i,m,n}(u,v) = O_i\big(u - (u_{m,n} + \Delta u_{r,m,n}),\, v - (v_{m,n} + \Delta v_{r,m,n})\big)\, P(u,v).$$

The simulated target complex amplitude image is then $\phi^{e}_{r,i,m,n}(x,y) = \mathcal{F}^{-1}\{\varphi^{e}_{r,i,m,n}(u,v)\}$, and the intensity image is $I^{e}_{m,n}(x,y) = |\phi^{e}_{r,i,m,n}(x,y)|^2$. Each target intensity image is then compared with the actual (denoised) image, the difference is calculated, and the error measure is defined as

Eq. (9)

$$E(r) = \sum_{x,y} \left( I^{e}_{m,n}(x,y) - I^{d}_{m,n}(x,y) \right)^2.$$

A smaller value of $E(r)$ indicates that the intensity distribution of the simulated target image is closer to the actually acquired image; that is, the error is smaller. We find the frequency-domain coordinate offset corresponding to the minimum intensity error of the LR images, mark the index of the minimum $E(r)$ as $l$, and update the position of the frequency aperture as follows.

Eq. (10)

$$l = \arg\min_r E(r), \qquad u^{c}_{m,n} = u_{m,n} + \Delta u_{l,m,n}, \qquad v^{c}_{m,n} = v_{m,n} + \Delta v_{l,m,n}.$$
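The eight-direction search of step 3 can be sketched as follows. This is an illustrative NumPy sketch: frequency coordinates are discretized to spectrum pixels, the current position is also evaluated so the error can never increase, and all names are assumptions rather than the authors' code.

```python
import numpy as np

def sa_correct_aperture(obj_spec, pupil, i_meas, u0, v0, step, rng):
    """Step 3: try eight random frequency offsets around the current aperture
    centre (u0, v0) and keep the one minimizing the intensity error of Eq. (9)."""
    candidates = [(0, 0)] + [tuple(rng.integers(-step, step + 1, size=2))
                             for _ in range(8)]  # current position + 8 directions
    best, best_err = (0, 0), np.inf
    for du, dv in candidates:
        # Eq. (8): re-centre the sub-aperture at the trial position
        sub = np.roll(obj_spec, (-(v0 + dv), -(u0 + du)), axis=(0, 1)) * pupil
        i_est = np.abs(np.fft.ifft2(sub)) ** 2        # simulated LR intensity
        err = np.sum((i_est - i_meas) ** 2)           # Eq. (9)
        if err < best_err:
            best_err, best = err, (du, dv)
    # Eq. (10): corrected aperture centre
    return u0 + best[0], v0 + best[1], best_err
```

As in the paper, the search step would be reduced as the iterations proceed; one such call corresponds to a single SA move for a single LED.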

  • Step 4: After updating the position of the frequency aperture $(u^{c}_{m,n}, v^{c}_{m,n})$, impose the intensity constraint from the actually captured images: replace the amplitude of the target complex amplitude with the denoised LR image data from step 2, keep the phase information unaltered, and update the LR target image as follows:

    Eq. (11)

    $$\phi_{m,n}(x,y) = \sqrt{I^{d}_{m,n}(x,y)}\, \frac{\phi^{e}_{l,i,m,n}(x,y)}{\left| \phi^{e}_{l,i,m,n}(x,y) \right|}.$$

A Fourier transform is then performed on the updated LR target image to obtain the updated HR spectrum after the amplitude replacement: $\varphi_{m,n}(u,v) = \mathcal{F}\{\phi_{m,n}(x,y)\}$. The difference between the estimated light field in the frequency domain and the updated estimated light field is then calculated:

Eq. (12)

$$\Delta\varphi_{m,n}(u,v) = \varphi_{m,n}(u,v) - \varphi^{e}_{l,i,m,n}(u,v).$$
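Step 4 can be written compactly. In this sketch the square root on the denoised intensity follows the standard FPM amplitude constraint and is our reading of Eq. (11); function names are illustrative.

```python
import numpy as np

def amplitude_replace(phi_est, i_denoised, eps=1e-12):
    """Eq. (11): impose the measured (denoised) amplitude, keep the estimated phase."""
    return np.sqrt(np.maximum(i_denoised, 0)) * phi_est / (np.abs(phi_est) + eps)

def spectrum_difference(phi_new, phi_est_spec):
    """Eq. (12): difference between the updated and estimated frequency-domain fields."""
    return np.fft.fft2(phi_new) - phi_est_spec
```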

  • Step 5: To reduce the effect of abrupt changes in the value of the pupil function on the spectrum update, a weighting ratio $W$ is added to the spectrum update process, defined as

    Eq. (13)

    $$W = \frac{\left| P_{m,n}(u,v) \right|}{\left| P_{m,n}(u,v) \right|_{\max}}.$$

Then, the adaptive control factor $\alpha$ is added to optimize the update of the target spectrum function, improving the reconstructed image quality and convergence speed. The value of $\alpha$ is defined as

Eq. (14)

$$\alpha = 2^{-\mathrm{Threshold}_{m,n}}.$$

The value of α is related to the noise reduction threshold. The target spectrum function used for the update after optimization is shown below:

Eq. (15)

$$O_{i+1}(u - u_{m,n},\, v - v_{m,n}) = O_i(u - u^{c}_{m,n},\, v - v^{c}_{m,n}) + \alpha W\, \frac{P_i^{*}(u,v)}{\left| P_i(u,v) \right|^2 + \delta_1}\, \Delta\varphi_{m,n}(u,v),$$
where “$*$” represents the complex conjugate operation and $\delta_1$ is a regularization constant that maintains numerical stability and prevents the denominator from becoming zero; we usually set $\delta_1 = 1$.
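Under the reading that $\alpha$ and $W$ enter Eq. (15) as a product, the optimized sub-aperture update of Eqs. (13)–(15) can be sketched as follows (illustrative NumPy; names are assumptions, and `alpha` is passed in rather than computed so either definition of the control factor can be used):

```python
import numpy as np

def update_spectrum_patch(o_patch, pupil, delta_phi, alpha, delta1=1.0):
    """Eqs. (13)-(15): ePIE-style sub-aperture update, weighted by the pupil
    ratio W and scaled by the adaptive control factor alpha."""
    w = np.abs(pupil) / (np.abs(pupil).max() + 1e-12)              # Eq. (13)
    step = alpha * w * np.conj(pupil) / (np.abs(pupil) ** 2 + delta1)
    return o_patch + step * delta_phi                              # Eq. (15)
```

Because $W$ is small where the pupil amplitude is small, the update is damped near the pupil edge, which is the stated purpose of the ratio.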

  • Step 6: Repeat steps 2 to 5 for the different LEDs of the array until the spectral information within all sub-apertures has been updated (i.e., the LR images corresponding to all LEDs in the range $S_i$ have been used), which completes one iteration.

We then use the non-linear regression algorithm16 to update the four global position factors of the LED array, $(\theta, \Delta x, \Delta y, h)$. The process can be represented as

Eq. (16)

$$Q(\theta, \Delta x, \Delta y, h) = \sum_{m,n} \left[ \left( u_{m,n}(\theta, \Delta x, \Delta y, h) - u^{c}_{m,n} \right)^2 + \left( v_{m,n}(\theta, \Delta x, \Delta y, h) - v^{c}_{m,n} \right)^2 \right], \qquad (\theta, \Delta x, \Delta y, h)^{c} = \arg\min Q(\theta, \Delta x, \Delta y, h).$$

Here, LED array position misalignment correction can be regarded as finding the optimal solution of the misalignment factors $(\theta, \Delta x, \Delta y, h)$ for the LED array, where $Q(\theta, \Delta x, \Delta y, h)$ represents the error between the theoretical and corrected values of the central coordinates of the sub-apertures in the frequency domain. We update the global position factors through the non-linear regression process and solve for the factors $(\theta, \Delta x, \Delta y, h)^{c}$ that minimize $Q(\theta, \Delta x, \Delta y, h)$.
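The regression of Eq. (16) can be realized with an off-the-shelf least-squares solver. The sketch below assumes SciPy, takes the sub-region centred on the optical axis ($x_o = y_o = 0$), and uses illustrative names and parameter values; it is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def model_uv(p, m, n, d_led, k):
    """Eqs. (1)-(2) for a sub-region centred on the optical axis."""
    theta, dx, dy, h = p
    x = m * d_led * np.cos(theta) - n * d_led * np.sin(theta) + dx
    y = m * d_led * np.sin(theta) + n * d_led * np.cos(theta) + dy
    r = np.sqrt(x ** 2 + y ** 2 + h ** 2)
    return -k * x / r, -k * y / r

def fit_global_factors(m, n, u_corr, v_corr, d_led, wavelength, p0):
    """Eq. (16): fit (theta, dx, dy, h) to the SA-corrected aperture centres."""
    k = 2.0 * np.pi / wavelength

    def residual(p):
        u, v = model_uv(p, m, n, d_led, k)
        return np.concatenate([u - u_corr, v - v_corr])

    return least_squares(residual, p0).x
```

Fitting all four global factors jointly to the whole set of corrected sub-aperture centres is what prevents the per-LED corrections from drifting into "confusion and disorder," as noted above.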

  • Step 7: Repeat steps 2 to 6, using the updated global position factors to update the frequency-domain coordinates of the LEDs and correct the position misalignment, until the algorithm converges or the set number of iterations is reached, which completes the reconstruction process.

3.

Experiments and Results

3.1.

Simulation Experiment

To verify the effectiveness of the proposed algorithm, simulation validation was first conducted. In the simulation, the central 11×11 LEDs were selected to provide angle-varying illumination at a distance of 56.1 mm beneath the sample; the distance between adjacent LEDs is 4 mm, the center wavelength of the LEDs is 531 nm, and the objective is 4×/0.1 NA. The pixel size of the camera is 2.4 μm × 2.4 μm. The “Cameraman” and “Westconcordorthophoto” images (384×384 pixels) are the initial input HR intensity image and phase image, respectively. In the simulation, we introduce the four position factors (θ, Δx, Δy, h) into the original image.

We set the range of the introduced position misalignment parameters as θ ∈ [−5°, 5°], Δx ∈ [−1000 μm, 1000 μm], Δy ∈ [−1000 μm, 1000 μm], and Δh ∈ [−1000 μm, 1000 μm], because once these four factors move outside this range, the position error becomes obvious and the LED array can instead be aligned through an initial calibration of its physical position. Here, we set the position misalignment parameters to θ = 5°, Δx = 1000 μm, Δy = 1000 μm, and Δh = 1000 μm. In total, 121 LR images were obtained from the simulation, and the overlap rate in the frequency domain was 55.62%. Subsequently, in the image reconstruction process, the ideal LED positions were used for iterative reconstruction. We used and compared four algorithms; Fig. 3 shows the simulation results. Figures 3(a1) and 3(a2) show the input HR intensity and phase distributions of the simulated complex sample, respectively, whereas Figs. 3(b1) and 3(b2) show the reconstruction results of the AA algorithm without correction for position misalignment. Owing to the lack of correction, the reconstructed intensity and phase images have artifacts and wrinkles in the background, seriously affecting their clarity. The results of the SC-FPM reconstruction are shown in Figs. 3(c1) and 3(c2); the quality of the reconstructed images is significantly improved, but some artifacts remain. In comparison, the reconstructed image quality of PC-FPM is further improved, but a few artifacts remain in the background, as shown in Figs. 3(d1) and 3(d2). Finally, the results of the method presented in this paper are shown in Figs. 3(e1) and 3(e2); the reconstruction has better background uniformity and fewer artifacts than the other algorithms, indicating better correction of the position misalignment and a greater reduction of its impact on reconstruction quality.

Fig. 3

Comparison of simulation reconstruction results of different algorithms. (a1) and (a2) Simulation of the input HR intensity and phase distribution for complex samples. (b)–(e) Reconstruction results of the AA algorithm, SC-FPM, PC-FPM, and AA-C algorithm. (f) and (g) RMSE curves of the HR intensity and phase images reconstructed by the SC-FPM, PC-FPM, and AA-C algorithm with the iterative conditions. (h) The position schematic of the frequency aperture.


The root-mean-square error (RMSE) curves of the HR intensity and phase images reconstructed by the SC-FPM, PC-FPM, and AA-C algorithms are shown in Figs. 3(f) and 3(g). The RMSE curves show that the AA-C algorithm converges faster, its value after stable convergence is smaller, and its reconstruction results are better. Figure 3(h) shows the position schematic of the frequency aperture, where the orange triangles represent the ideal LED positions, the blue diamonds represent the actual LED positions, and the green circles represent the LED positions after correction with the AA-C algorithm. Almost all the green circles coincide with the blue diamonds, indicating that the frequency aperture positions are well corrected and illustrating the effectiveness of the algorithm in correcting the LED array position.

Table 1 also shows the peak signal-to-noise ratio (PSNR) and convergence time of the reconstructed images using different algorithms. The reconstruction intensity and phase PSNR value of the AA-C algorithm are higher than those of the other algorithms in the table, and the convergence time is the shortest. AA-C converged in 17 iterations, and convergence required 25 and 29 iterations for SC-FPM and PC-FPM, respectively. The convergence speed is 21.2% faster than that of SC-FPM and 54.9% faster than that of PC-FPM. The results indicate that the AA-C algorithm reconstructs images with better quality and faster convergence.
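The RMSE and PSNR figures of merit used above are the standard definitions; for reference (the unit peak value assumes intensity-normalized images, an assumption of this sketch):

```python
import numpy as np

def rmse(recon, truth):
    """Root-mean-square error between reconstruction and ground truth."""
    return float(np.sqrt(np.mean((recon - truth) ** 2)))

def psnr(recon, truth, peak=1.0):
    """Peak signal-to-noise ratio in dB; computable only in simulation,
    where the ground-truth HR image is known."""
    mse = np.mean((recon - truth) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))
```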

Table 1

PSNR values and convergence time of the reconstructed intensity and phase for different algorithms in the case of iterative stabilization. Bold values indicate where the proposed method performs better.

Algorithm   Intensity PSNR (dB)   Phase PSNR (dB)   Time (s)   Iterations
AA          22.78                 12.23             —          —
SC-FPM      27.31                 22.47             15.585     25
PC-FPM      29.72                 24.60             26.927     29
AA-C        30.22                 26.59             11.097     17

3.2.

Real Experiment

The simulation results verified the effectiveness of the AA-C algorithm. To further verify the performance of the algorithm, we tested it using image data collected from actual experiments. The USAF resolution target was selected as a sample, and the experimental parameters were consistent with those used in Sec. 3.1. At the beginning of the experiment, precise mechanical correction of the LED array position was not performed. In the experiment, we captured 121 LR images to perform FPM reconstruction.

Figure 4 shows the reconstruction results for the different algorithms. Figures 4(a1) and 4(a2) show the LR image (400×400 pixels) captured in the central FOV and its enlarged view, respectively. For the AA algorithm, the reconstructed image shows severe artifacts and folds owing to the lack of position misalignment correction, as shown in Figs. 4(b1)–4(b3). The results of the SC-FPM reconstruction are shown in Figs. 4(c1)–4(c3); the background uniformity of the reconstructed image is greatly improved, and the artifacts and folds are much reduced. However, there is a loss of resolution: after zooming in, the ninth group of line-pair elements is slightly blurred and not clearly resolved. Figures 4(d1)–4(d3) show the reconstruction results of PC-FPM. Compared with Figs. 4(c1)–4(c3), the resolution is better, but the eighth group of line pairs appears tilted and not sufficiently straight. Figures 4(e1)–4(e3) show the reconstruction results of the AA-C algorithm. Each group of line pairs on the resolution target can be clearly distinguished, the background is clean, and the line pairs are neither distorted nor deformed. Overall, the reconstruction of the AA-C algorithm is the best. In addition, to further quantify and evaluate the reconstruction results, the pixel-normalized intensity curves of the marked areas in Figs. 4(b3)–4(e3) are compared in Fig. 4(f). The red curve has the highest contrast, indicating that the images reconstructed by the AA-C algorithm have higher contrast and clearer details.

Fig. 4

USAF 1951 resolution target experiment. (a1), (a2) LR images taken at the central FOV. (b1)–(b3) Reconstruction results of the AA algorithm. (c1)–(c3) Reconstruction results of the SC-FPM algorithm. (d1)–(d3) Reconstruction results of the PC-FPM algorithm. (e1)–(e3) Reconstruction results of the AA-C algorithm. (f) Pixel normalized intensity curve.


To further validate the effectiveness of the algorithm, we experimented with biological samples. Human tumor cells were selected as the sample, and an 11×11 LED array provided angle-varying illumination. The other experimental parameters were consistent with the USAF 1951 resolution target experiment in Sec. 3.2. In all, 121 LR images were captured, and the collected images of human tumor cells were reconstructed and restored. The refractive index of a biological sample is reflected in its phase information, and compared with the image amplitude, the phase information is generally more strongly affected by error.26

The results of the reconstruction of human tumor cells using different algorithms are shown in Fig. 5. Figure 5(a) shows the LR FOV image of the specimen, and Fig. 5(a1) shows the corresponding enlarged region of interest (770×770 pixels). The reconstruction results for the AA algorithm are shown in Figs. 5(b1)–5(b4). Both the amplitude and phase images reconstructed without correction for LED position misalignment show many streaks and folds, which significantly affect the reconstruction quality. The reconstructed results of SC-FPM are shown in Figs. 5(c1)–5(c4); they are significantly improved, with a marked reduction in streaks and folds, but some distortions and artifacts remain in the reconstructed phase image. For PC-FPM, the reconstructed phase image is significantly improved compared with Figs. 5(b3) and 5(c3), with a marked reduction in folds and artifacts; however, the amplitude image shows unexpected streaks, as shown in Figs. 5(d1)–5(d4). Finally, Figs. 5(e1)–5(e4) show the reconstruction results of the AA-C algorithm. Compared with the first three algorithms, the overall clarity and smoothness of the reconstructed amplitude image are improved, and there are no stripes or artifacts in the phase image. The contours and details of the cells are clearer, and the reconstructed images show the best recovery, further validating the robustness of the AA-C algorithm.

Fig. 5

Comparison of reconstruction results of human tumor cell biospecimens. (a) LR FOV image of the sample. (a1) Enlarged region of interest. (b1)–(e1) Amplitude images reconstructed by the AA algorithm, SC-FPM, PC-FPM, and AA-C algorithm, respectively. (b2)–(e2) The enlargements of the region of interest of the amplitude images reconstructed by the different algorithms, respectively. (b3)–(e3) Phase images reconstructed by the AA algorithm, SC-FPM, PC-FPM, and AA-C algorithm, respectively. (b4)–(e4) The enlargements of the region of interest of the phase images reconstructed by the different algorithms, respectively.


Next, we selected human blood smears as samples for the experiment. A 5×5 LED array was placed 50 mm beneath the sample. The spacing between neighboring LEDs was 5 mm, the wavelength of the LEDs was 531 nm, NAobj was 0.25, the magnification was 10×, and the pixel size of the sensor was 3.1 μm × 3.1 μm. In all, 25 LR images were captured for FPM reconstruction during the experiment. The reconstruction results of human blood smears using different algorithms are shown in Fig. 6. Figure 6(a) shows the LR FOV image of a human blood smear specimen captured by the camera, and Fig. 6(a1) shows the magnified area (450×450 pixels) marked in (a). Figures 6(b1) and 6(b2) show the reconstruction results of the AA algorithm, where significant artifacts and stripes can be observed in both the reconstructed amplitude and phase images. As shown in Figs. 6(c1) and 6(c2), the amplitude image reconstructed with SC-FPM is greatly improved, but the phase image still has some artifacts and low contrast. The reconstruction results of PC-FPM are shown in Figs. 6(d1) and 6(d2). When the position error of the LED array is large, the PC-FPM correction easily falls into a local optimum; this degrades its ability to correct the position misalignment of the LED array, and the quality of the reconstructed images is unsatisfactory, with obvious artifacts in the amplitude image and distortion and crosstalk in the phase image. Figures 6(e1) and 6(e2) show the reconstructed results of the AA-C algorithm, which effectively eliminates the artifacts: the amplitude image background is clean and uniform, the phase image contrast is high, and there is no image distortion or obvious crosstalk.

Fig. 6

Comparison of experimental results of human blood smear sample. (a) LR FOV image of the sample. (a1) Enlarged region of interest. (b1)–(b2) Reconstruction results of the AA algorithm. (c1)–(c2) Reconstruction results of the SC-FPM algorithm. (d1)–(d2) Reconstruction results of the PC-FPM algorithm. (e1)–(e2) Reconstruction results of the AA-C algorithm.

JBO_28_11_116503_f006.png

Table 2 also compares the runtime and final iteration error of the different correction algorithms in the biological specimen experiments. The results demonstrate that the proposed AA-C algorithm reconstructs the image with a smaller final iteration error in less runtime, making it faster than the SC-FPM and PC-FPM position correction algorithms.

Table 2

Comparison of running times and final iteration errors of the different correction algorithms in the biological sample experiments. Bold characters indicate the metrics for which the proposed method performs best.

Experiment          Algorithm   Time (s)   Final error (×10^−3)
Human tumor cell    SC-FPM      334.596    2.2857
                    PC-FPM      527.714    1.9484
                    AA-C        298.623    1.3862
Blood smear         SC-FPM      45.762     3.5382
                    PC-FPM      59.931     4.4341
                    AA-C        41.196     2.8685
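The runtime advantage in Table 2 can also be expressed as a relative reduction. A short sketch (the dictionary layout is illustrative; the percentages are wall-clock savings computed directly from the table values):

```python
# Runtime figures from Table 2 (seconds)
times = {
    "human tumor cell": {"SC-FPM": 334.596, "PC-FPM": 527.714, "AA-C": 298.623},
    "blood smear":      {"SC-FPM": 45.762,  "PC-FPM": 59.931,  "AA-C": 41.196},
}

# Relative runtime saving of AA-C over each baseline algorithm
for sample, t in times.items():
    for baseline in ("SC-FPM", "PC-FPM"):
        saving = (t[baseline] - t["AA-C"]) / t[baseline] * 100
        print(f"{sample}: AA-C is {saving:.1f}% faster than {baseline}")
```

For the human tumor cell experiment this gives roughly an 11% saving over SC-FPM and a 43% saving over PC-FPM in wall-clock time.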

4.

Conclusion

This study presents improvements to the FPM correction method based on the SA algorithm and proposes a position misalignment correction method (the AA-C algorithm) with an improved phase recovery strategy. By adding an adaptive control factor and optimizing the spectrum update strategy during reconstruction, the method effectively improves image reconstruction quality without adding computational complexity and exhibits robust convergence. Through simulations and experiments using the USAF resolution target, human tumor cells, and blood smears as samples, the effectiveness and robustness of the proposed AA-C algorithm for LED array position misalignment correction were verified. The proposed method accelerates convergence and reduces the effect of LED array position misalignment on FPM reconstruction quality. Compared with other advanced position correction methods, it achieves better reconstruction and improves the robustness of the FPM system. Because the imaging efficiency of FPM still requires improvement, in future work we plan to combine LED multiplexing to further improve imaging efficiency and achieve faster, higher-quality high-throughput quantitative phase imaging.

Disclosures

The authors declare no conflicts of interest.

Code and Data Availability

Data that support the findings of this article are not publicly available at this time, but the code can be obtained from the authors upon reasonable request.

Acknowledgments

We would like to acknowledge financial support from the Guangdong Provincial Key Field R&D Plan Project (No. 2020B1111120004), Natural Science Foundation of Hubei Province (2022CFB099), and Foshan University 2023 Annual Student Academic Fund (Grant No. xsjj202305kja03).

References

1. 

J. Park et al., “Review of bio-optical imaging systems with a high space-bandwidth product,” Adv. Photonics, 3 (4), 044001 https://doi.org/10.1117/1.AP.3.4.044001 AOPAC7 1943-8206 (2021). Google Scholar

2. 

Y. Xiao et al., “High-speed Fourier ptychographic microscopy for quantitative phase imaging,” Opt. Lett., 46 (19), 4785 –4788 https://doi.org/10.1364/OL.428731 OPLEDP 0146-9592 (2021). Google Scholar

3. 

E. J. Candes, X. Li and M. Soltanolkotabi, “Phase retrieval via Wirtinger flow: theory and algorithms,” IEEE Trans. Inf. Theory, 61 (4), 1985 –2007 https://doi.org/10.1109/TIT.2015.2399924 IETTAW 0018-9448 (2015). Google Scholar

4. 

K. Creath and G. Goldstein, “Dynamic quantitative phase imaging for biological objects using a pixelated phase mask,” Biomed. Opt. Express, 3 (11), 2866 –2880 https://doi.org/10.1364/BOE.3.002866 BOEICL 2156-7085 (2012). Google Scholar

5. 

P. Song et al., “Synthetic aperture ptychography: coded sensor translation for joint spatial-Fourier bandwidth expansion,” Photonics Res., 10 (7), 1624 –1632 https://doi.org/10.1364/PRJ.460549 (2022). Google Scholar

6. 

L. Granero et al., “Synthetic aperture superresolved microscopy in digital lensless Fourier holography by time and angular multiplexing of the object information,” Appl. Opt., 49 (5), 845 –857 https://doi.org/10.1364/AO.49.000845 APOPAI 0003-6935 (2010). Google Scholar

7. 

D. J. Lee and A. M. Weiner, “Optical phase imaging using a synthetic aperture phase retrieval technique,” Opt. Express, 22 (8), 9380 –9394 https://doi.org/10.1364/OE.22.009380 OPEXFF 1094-4087 (2014). Google Scholar

8. 

A. E. Tippie, A. Kumar and J. R. Fienup, “High-resolution synthetic-aperture digital holography with digital phase and pupil correction,” Opt. Express, 19 (13), 12027 –12038 https://doi.org/10.1364/OE.19.012027 OPEXFF 1094-4087 (2011). Google Scholar

9. 

G. Zheng, R. Horstmeyer and C. Yang, “Wide-field, high-resolution Fourier ptychographic microscopy,” Nat. Photonics, 7 (9), 739 –745 https://doi.org/10.1038/nphoton.2013.187 NPAHBY 1749-4885 (2013). Google Scholar

10. 

X. Ou et al., “Quantitative phase imaging via Fourier ptychographic microscopy,” Opt. Lett., 38 (22), 4845 –4848 https://doi.org/10.1364/OL.38.004845 OPLEDP 0146-9592 (2013). Google Scholar

11. 

S. Pacheco et al., “Transfer function analysis in epi-illumination Fourier ptychography,” Opt. Lett., 40 (22), 5343 –5346 https://doi.org/10.1364/OL.40.005343 OPLEDP 0146-9592 (2015). Google Scholar

12. 

G. Zheng et al., “Fourier ptychographic microscopy: a gigapixel superscope for biomedicine,” Opt. Photonics News, 25 (4), 26 –33 https://doi.org/10.1364/OPN.25.4.000026 OPPHEL 1047-6938 (2014). Google Scholar

13. 

L. Tian et al., “Computational illumination for high-speed in vitro Fourier ptychographic microscopy,” Optica, 2 (10), 904 –911 https://doi.org/10.1364/OPTICA.2.000904 (2015). Google Scholar

14. 

J. Sun et al., “High-speed Fourier ptychographic microscopy based on programmable annular illuminations,” Sci. Rep., 8 (1), 7669 https://doi.org/10.1038/s41598-018-25797-8 SRCEC3 2045-2322 (2018). Google Scholar

15. 

P. C. Konda et al., “Fourier ptychography: current applications and future promises,” Opt. Express, 28 (7), 9603 –9630 https://doi.org/10.1364/OE.386168 OPEXFF 1094-4087 (2020). Google Scholar

16. 

Y. Chen et al., “Precise and independent position correction strategy for Fourier ptychographic microscopy,” Optik, 265 169481 https://doi.org/10.1016/j.ijleo.2022.169481 OTIKAJ 0030-4026 (2022). Google Scholar

17. 

J. Sun et al., “Efficient positional misalignment correction method for Fourier ptychographic microscopy,” Biomed. Opt. Express, 7 (4), 1336 –1350 https://doi.org/10.1364/BOE.7.001336 BOEICL 2156-7085 (2016). Google Scholar

18. 

A. Pan et al., “System calibration method for Fourier ptychographic microscopy,” J. Biomed. Opt., 22 (9), 1 –11 https://doi.org/10.1117/1.JBO.22.9.096005 JBOPFO 1083-3668 (2017). Google Scholar

19. 

S. Chen et al., “Random positional deviations correction for each LED via ePIE in Fourier ptychographic microscopy,” IEEE Access, 6 33399 –33409 https://doi.org/10.1109/ACCESS.2018.2849010 (2018). Google Scholar

20. 

A. M. Maiden et al., “An annealing algorithm to correct positioning errors in ptychography,” Ultramicroscopy, 120 64 –72 https://doi.org/10.1016/j.ultramic.2012.06.001 ULTRD6 0304-3991 (2012). Google Scholar

21. 

A. Zhou et al., “Fast and robust misalignment correction of Fourier ptychographic microscopy for full field of view reconstruction,” Opt. Express, 26 (18), 23661 –23674 https://doi.org/10.1364/OE.26.023661 OPEXFF 1094-4087 (2018). Google Scholar

22. 

M. Zhao et al., “Neural network model with positional deviation correction for Fourier ptychography,” J. Soc. Inf. Disp., 29 (10), 749 –757 https://doi.org/10.1002/jsid.1030 JSIDE8 0734-1768 (2021). Google Scholar

23. 

J. Luo et al., “Fast and stable Fourier ptychographic microscopy based on improved phase recovery strategy,” Opt. Express, 30 (11), 18505 –18517 https://doi.org/10.1364/OE.454615 OPEXFF 1094-4087 (2022). Google Scholar

24. 

L. Hou et al., “Background-noise reduction for Fourier ptychographic microscopy based on an improved thresholding method,” Curr. Opt. Photonics, 2 (2), 165 –171 (2018). Google Scholar

25. 

L. H. Yeh et al., “Experimental robustness of Fourier ptychography phase retrieval algorithms,” Opt. Express, 23 (26), 33214 –33240 https://doi.org/10.1364/OE.23.033214 OPEXFF 1094-4087 (2015). Google Scholar

26. 

Y. Zhu et al., “Space-based correction method for LED array misalignment in Fourier ptychographic microscopy,” Opt. Commun., 514 128163 https://doi.org/10.1016/j.optcom.2022.128163 OPCOB8 0030-4018 (2022). Google Scholar

Biography

Zicong Luo is currently pursuing his ME degree at the School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China. His research interests include Fourier ptychographic microscopy and computational imaging.

Ruofei Wu is currently working toward his ME degree at the School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China. His current research interests include Fourier ptychographic microscopy, computational microscopy imaging, and computer vision.

Hanbao Chen is currently pursuing his bachelor degree at the School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China. His research interests include Fourier ptychographic microscopy, structural design, and software interface design.

Junrui Zhen is currently pursuing his bachelor degree at the School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China. His research interests include Fourier ptychographic microscopy and software interface design.

Mingdi Liu is currently pursuing his ME degree at the School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China. His research interests include Fourier ptychographic microscopy and deep learning.

Haiqi Zhang is currently pursuing her bachelor degree at the School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China. Her research interests include Fourier ptychographic microscopy.

Jiaxiong Luo is currently pursuing his ME degree at the School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China. His research interests include computer imaging and Fourier ptychographic microscopy.

Dingan Han received her PhD from South China Normal University in 2005. She is currently a professor at the School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China. Her research interests include optoelectronic detection and biomedical imaging.

Lisong Yan received his PhD from Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China, in 2015. He is currently a professor at the School of Optics and Electronic Information, Huazhong University of Science and Technology, Wuhan, China. His research interests include optical system design and optical interference detection technology.

Yanxiong Wu received his PhD from Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China, in 2015. He is currently a professor at the School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China. His research interests include computational optical imaging, optical instrument design, and optical system design.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Zicong Luo, Ruofei Wu, Hanbao Chen, Junrui Zhen, Mingdi Liu, Haiqi Zhang, Jiaxiong Luo, Dingan Han, Lisong Yan, and Yanxiong Wu "Fast and robust Fourier ptychographic microscopy with position misalignment correction," Journal of Biomedical Optics 28(11), 116503 (28 November 2023). https://doi.org/10.1117/1.JBO.28.11.116503
Received: 18 August 2023; Accepted: 13 November 2023; Published: 28 November 2023
KEYWORDS
Reconstruction algorithms
Image restoration
Light emitting diodes
Image quality
Image processing
Microscopy
