Optica Publishing Group

Global field-of-view imaging model and parameter optimization for high dynamic star tracker

Open Access

Abstract

With the expansion of their applications, star trackers are moving beyond their traditional domain and operating in high dynamic environments. These new applications demand high maneuverability and render the analyses and conclusions drawn for traditional star trackers unsuitable. To overcome the limitations of previous studies, this paper focuses on the global field-of-view (GFOV) imaging performance of a high dynamic star tracker (HDST). A GFOV imaging trajectory model is derived to correctly describe the different motions of stars imaged at different positions on the focal plane. A comprehensive positional accuracy expression is obtained by analyzing the centroiding errors of stars in the GFOV. On the basis of the proposed trajectory model and positional accuracy expression, a solution for the GFOV optimal parameters is presented that yields the best centroid-estimation performance. Finally, comparative evaluations, numerical simulations, and a night sky experiment support the conclusions.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

Taking measured stars as the observation unit vectors in the body frame and guide stars as the corresponding reference unit vectors in the inertial frame, a star tracker can determine its three-axis attitude with accuracy in the range of a few arc seconds. Among all the attitude determination devices, the star tracker is deemed the most accurate one [1–4]. Star trackers are widely used in spacecraft.

With the expansion of the applications of star trackers to platforms such as aircraft, agile satellites, and space weapons, the requirements on the dynamic performance of star trackers are rapidly increasing [5,6]. Owing to the high maneuverability of these new applications, their angular velocity changes faster and more roughly. For example, an airborne star tracker operates at a dramatically changing angular velocity of up to 25°/s.

Under high dynamic conditions, usually with a rapidly changing angular velocity larger than 5°/s, the relative movement between the stars and the camera is considerable. Stars continuously move across the focal plane of the image sensor during exposure and form smeared star spots [7,8]. These smeared star spots disperse the star energy over many pixels, depress the signal-to-noise ratio, and reduce the detection sensitivity of the star image [9]. To obtain high-precision attitude information, the imaging performance of high dynamic star trackers (HDSTs) must be investigated.

Several methods have been proposed to analyze the dynamic imaging performance of star trackers. However, these methods are based on traditional star trackers that operate at a low angular velocity (less than 5°/s), long exposure time (over 50 ms), and smooth motion [10,11]. In [12], [13], and [14], researchers analyzed the positional accuracy of smeared star spots. On the basis of the results in [12], Yu et al. [6] divided the exposure time into segments to improve the attitude update rate. Shen et al. [15] conducted a simulation analysis of the dynamic working performance of star trackers. A frequency-domain method for analyzing the positional errors caused by discrete sampling was presented in [16] and [17]. A study on the motion compensation and positioning accuracy of star trackers was conducted in [18]. Assisted by an IMU, Aretskin-Hariton et al. completed a performance estimate of a star tracker in [19]. On the other hand, researchers have also processed smeared star spots with image restoration approaches based on motion models [10,20–22].

However, as dramatically changing angular velocities break through the domain of traditional applications into high dynamic environments, the analyses and conclusions of the above-mentioned methods become inaccurate. In the global field of view (GFOV), i.e., over all imaging positions on the focal plane within the field of view (FOV), the difference in the velocity (size and direction) of stars imaged at different positions of the focal plane is considerable, which results in different positional errors on the same focal plane. Moreover, fast and rough angular velocities enlarge these differences and make the errors sensitive to parameter changes. Therefore, evaluating the positional accuracy performance of HDSTs is a complex and important issue that must be solved.

To overcome the limitations of previous studies, the imaging performance of HDSTs is studied in this paper. The GFOV imaging trajectory model, positional accuracy performance, and parameter optimization are investigated to achieve the best centroid-estimation performance of a star tracker under high dynamic conditions. The conclusions are verified by simulations and a night sky experiment.

2. GFOV imaging trajectory model for HDST

The imaging distribution model is the basis for analyzing the positional accuracy of a star tracker. In general, the imaging distribution of stars is expressed with a Gaussian integrating distribution model by integrating the star motion trajectory over the exposure time [15, 18, 23, 24]. The accuracy of the imaging distribution is directly determined by the accuracy of the imaging trajectory. In this section, we focus on the star imaging trajectory of a star tracker under high dynamic conditions.

During exposure, starlight passes through the optical lens and is imaged on the focal plane of the image sensor, as shown in Fig. 1(a). If the star detection sensitivity is insufficient, an image intensifier is usually installed between the optical lens and the image sensor to amplify the input starlight signal [5]. On the basis of the star image captured by the image sensor, the attitude matrix is calculated with star detection [25, 26], star recognition [27], star tracking, and QUEST attitude estimation algorithms [28].

Fig. 1 Imaging process of HDST, (a) structure of the star tracker, (b) motion of a star spot.

Suppose that the three-axis angular velocity of the star tracker at time $t$ is $\vec{\omega}=[\omega_x^t,\omega_y^t,\omega_z^t]=\omega[\cos\varphi\cos\beta,\ \cos\varphi\sin\beta,\ \sin\varphi]$, where $\omega$ represents the magnitude (°/s) of the angular velocity $\vec{\omega}$, $\varphi$ denotes the angle (°) between $\vec{\omega}$ and the focal plane $OXY$, $\beta$ refers to the angle (°) between the projection of $\vec{\omega}$ and the $x$-axis on the image plane $OXY$, and $f$ is the focal length of the optical lens, as shown in Fig. 1(b). Considering the relative movement between the stars and the camera, the $i$-th star spot on the focal plane moves from position $P(x_i^t,y_i^t)$ to $P(x_i^{t+\Delta t},y_i^{t+\Delta t})$ during exposure time $\Delta t$. The position $P(x_i^{t+\Delta t},y_i^{t+\Delta t})$ is described as [6,10]:

$$\begin{cases}x_i^{t+\Delta t}=x_i^t+\left(y_i^t\omega_z^t+f\omega_y^t\right)\Delta t\\[1mm] y_i^{t+\Delta t}=y_i^t-\left(x_i^t\omega_z^t+f\omega_x^t\right)\Delta t\end{cases}\tag{1}$$

In [6], the authors ignored the small term caused by the difference between the 3-D star tracker coordinate system and the 2-D focal plane system. This approximation is accurate for traditional star trackers.

However, under high dynamic conditions, this small term between the 3-D star tracker coordinate system and the 2-D focal plane system is magnified by the larger angular velocity. The velocity in the focal plane system induced by rotation around the x- and y-axes is expressed as [12]:

$$V_{xy}(\Delta t)=\frac{f\omega_{xy}}{\cos^2(\theta^t+\omega_{xy}\Delta t)}\approx\frac{f\omega_{xy}}{\cos^2\theta^t},\tag{2}$$
where θt is the FOV angle of position P(xit,yit), as shown in Fig. 1(b).

By substituting Eq. (2) into Eq. (1), the transient imaging trajectory model is obtained:

$$\begin{cases}x_i^{t+\Delta t}=x_i^t+\left(y_i^t\omega_z^t+\dfrac{f\omega_y^t}{\cos^2\theta^t}\right)\Delta t\\[2mm] y_i^{t+\Delta t}=y_i^t-\left(x_i^t\omega_z^t+\dfrac{f\omega_x^t}{\cos^2\theta^t}\right)\Delta t\end{cases}\tag{3}$$

Equation (3) is accurate for HDSTs when time Δt is small. Thus, the imaging trajectory of stars under high dynamic conditions can be iteratively solved by dividing the total exposure time T into several continuous time intervals Δt.
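This iterative solution can be sketched in a few lines. The following Python snippet is illustrative only (the function name, step size, and the convention that the focal length f is expressed in pixel units are our assumptions, not the paper's):

```python
import math

def iterate_trajectory(x0, y0, omega, phi, beta, f, T, dt=1e-5):
    """Iteratively solve Eq. (3): propagate a star-spot position (pixels)
    through exposure time T (s) in steps dt, re-evaluating the FOV angle
    theta of the current position at every step. Angles are in radians."""
    # three-axis angular velocity components (rad/s) from magnitude/direction
    wx = omega * math.cos(phi) * math.cos(beta)
    wy = omega * math.cos(phi) * math.sin(beta)
    wz = omega * math.sin(phi)
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(int(round(T / dt))):
        theta = math.atan(math.hypot(x, y) / f)  # FOV angle of current position
        c2 = math.cos(theta) ** 2
        # one Euler step of Eq. (3)
        x, y = (x + (y * wz + f * wy / c2) * dt,
                y - (x * wz + f * wx / c2) * dt)
        xs.append(x)
        ys.append(y)
    return xs, ys
```

As a sanity check, a pure roll about the boresight (φ = 90°) should move a spot along a circle about the image center, preserving its radius.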

However, the nonlinear iterative solution of Eq. (3) is difficult to use in modeling and analyzing the star imaging performance. For simplicity, a GFOV imaging trajectory model is derived in this study. The proposed model simplifies the complex nonlinear star imaging movement into a linear motion and is implemented in two separate steps, as shown in Fig. 2(a).

Fig. 2 GFOV imaging trajectory model of HDST, (a) modeling process, (b) velocity vectors.

First, the star is assumed to move uniformly with its transient velocity at time t0, obtained by differentiating Eq. (3) with respect to Δt, from the start position (x0, y0) to the end position (xT, yT) during exposure time T. Thus, the end position of step 1 is obtained:

$$\begin{cases}x_T=x_0+\dfrac{f\omega\cos\varphi}{\cos^2\theta_0}\sin\beta\,T\\[2mm] y_T=y_0-\dfrac{f\omega\cos\varphi}{\cos^2\theta_0}\cos\beta\,T\end{cases}\tag{4}$$

Then, by rotating the end position (xT, yT) around the z-axis with an angle (−ωzT), position (xT, yT) moves to (x′T, y′T). The end position of step 2 is given by:

$$\begin{cases}x'_T=x_T\cos(\omega_zT)+y_T\sin(\omega_zT)\\[1mm] y'_T=y_T\cos(\omega_zT)-x_T\sin(\omega_zT)\end{cases}\tag{5}$$

Finally, from the start position (x0, y0) to end position (x′T, y′T), a new uniform motion trajectory is obtained, and its x-axis uniform velocity component is given by

$$V_{L,x}=\frac{x'_T-x_0}{T}\approx f\omega\sin\varphi\tan\theta_0\sin\!\left(\frac{2\gamma-\omega\sin\varphi T}{2}\right)+\frac{f\omega\cos\varphi}{\cos^2\theta_0}\sin(\beta-\omega\sin\varphi T),\tag{6}$$
where γ is the angle (°) between the star spot and x-axis, as shown in Fig. 2(a).

Similarly, the y-axis uniform velocity component of the proposed trajectory model is expressed as:

$$V_{L,y}\approx-f\omega\sin\varphi\tan\theta_0\cos\!\left(\frac{2\gamma-\omega\sin\varphi T}{2}\right)-\frac{f\omega\cos\varphi}{\cos^2\theta_0}\cos(\beta-\omega\sin\varphi T).\tag{7}$$

Composing (VL,x, VL,y), the total uniform velocity of the star on the focal plane is given by:

$$V_L=\sqrt{V_{L,x}^2+V_{L,y}^2}=f\omega\sqrt{\sin^2\varphi\tan^2\theta_0+\frac{\cos^2\varphi}{\cos^4\theta_0}+\frac{\sin2\varphi\tan\theta_0\cos(\gamma-\beta+\omega\sin\varphi T/2)}{\cos^2\theta_0}}.\tag{8}$$

From Eq. (8), we find that the velocity of the GFOV trajectory model is related to six parameters, namely, the magnitude of the angular velocity ω; the FOV angle θ; the angular velocity direction angles φ and β; the imaging position angle γ; and the exposure time T. These parameters jointly influence the velocity of stars imaged on the focal plane.
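The velocity components of Eqs. (6)–(7) and the magnitude of Eq. (8) can be evaluated together, which also provides a consistency check (the magnitude must equal the norm of the two components, by the law of cosines). The sketch below is our own (function name and pixel-unit focal length are assumptions):

```python
import math

def gfov_velocity(omega, phi, beta, gamma, theta0, f, T):
    """Uniform velocity of the GFOV trajectory model: components per
    Eqs. (6)-(7) and closed-form magnitude per Eq. (8).
    All angles in radians, omega in rad/s, f in pixels."""
    a = f * omega * math.sin(phi) * math.tan(theta0)       # rotation (z-axis) term
    b = f * omega * math.cos(phi) / math.cos(theta0) ** 2  # cross-axis term
    A = (2 * gamma - omega * math.sin(phi) * T) / 2
    B = beta - omega * math.sin(phi) * T
    vx = a * math.sin(A) + b * math.sin(B)                 # Eq. (6)
    vy = -a * math.cos(A) - b * math.cos(B)                # Eq. (7)
    vmag = f * omega * math.sqrt(                          # Eq. (8)
        math.sin(phi) ** 2 * math.tan(theta0) ** 2
        + math.cos(phi) ** 2 / math.cos(theta0) ** 4
        + math.sin(2 * phi) * math.tan(theta0)
          * math.cos(gamma - beta + omega * math.sin(phi) * T / 2)
          / math.cos(theta0) ** 2)
    return vx, vy, vmag
```

Since Eq. (8) is just the law-of-cosines combination of the two terms in Eqs. (6)–(7), `math.hypot(vx, vy)` agrees with `vmag` to floating-point precision.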

For example, when a star tracker (Table 1) operates with the parameters ω = 25°/s, φ = 70°, and β = −30° during an exposure time of 8 ms, the velocities of stars imaged at different positions in the GFOV are shown in Fig. 2(b). These velocities differ in size and direction, with magnitudes ranging from 0.4702 to 1.2842 pixels/ms.


Table 1. Design parameters of the star tracker

Finally, by integrating the star motion trajectory and considering the pixelization of image sensor, the gray value of a pixel at the i-th column and j-th row is expressed as:

$$I_{i,j}=\frac{\Phi K\eta_{QE}G_{INS}}{2\pi\rho^2}\int_{x_i-0.5}^{x_i+0.5}\!\int_{y_j-0.5}^{y_j+0.5}\!\int_0^T\exp\!\left\{-\frac{(x-x_0-V_{L,x}t)^2+(y-y_0-V_{L,y}t)^2}{2\rho^2}\right\}dt\,dy\,dx,\tag{9}$$
where ρ indicates the Gaussian point spread function (PSF) radius of the optical lens, Φ represents the incident flux of the star on the focal plane, ηQE denotes the quantum efficiency, i.e., the capability of converting photons into electrons, K refers to the conversion gain, i.e., the capability of converting electrons into a digital gray value, and GINS is the gain of the image intensifier. If the star tracker has no image intensifier, GINS = 1; otherwise, the input starlight signal is multiplied by GINS.

Summing the gray values of the pixels in the i-th column yields:

$$I_i=\sum_j I_{i,j}\approx\frac{\Phi K\eta_{QE}G_{INS}}{\sqrt{2\pi}\rho}\int_0^T\!\int_{x_i-0.5}^{x_i+0.5}\exp\!\left\{-\frac{(x-x_0-V_{L,x}t)^2}{2\rho^2}\right\}dx\,dt.\tag{10}$$

The total gray value of smeared star spots is obtained as follows:

$$I_{tot}=\sum_i I_i=\Phi K\eta_{QE}G_{INS}T.\tag{11}$$
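Equations (10)–(11) can be checked numerically: integrating the 1-D Gaussian slice over each pixel column (exactly, via the error function) and over the exposure time (midpoint rule), the column values must sum to the total of Eq. (11) when the window holds all the energy. The snippet below is an illustrative sketch; `flux_rate` stands in for the combined factor ΦKηQEGINS, and all numeric values are made up for the demonstration:

```python
import math

def column_gray_values(x0, vx, T, rho, flux_rate, n_cols=40, nt=200):
    """Column gray values I_i of Eq. (10): for each pixel column, integrate
    the 1-D Gaussian over the pixel (via erf) and over the exposure time
    (midpoint rule). vx in pixels/s, T in s, rho in pixels."""
    cols = range(int(x0) - n_cols // 2, int(x0) + n_cols // 2)
    dt = T / nt
    I = []
    for xi in cols:
        s = 0.0
        for k in range(nt):
            mu = x0 + vx * (k + 0.5) * dt  # spot center at the sample time
            s += 0.5 * (math.erf((xi + 0.5 - mu) / (math.sqrt(2) * rho))
                        - math.erf((xi - 0.5 - mu) / (math.sqrt(2) * rho))) * dt
        I.append(flux_rate * s)
    return I

# Eq. (11): total gray value equals flux_rate * T when no energy is truncated
I = column_gray_values(x0=20.3, vx=1100.0, T=0.008, rho=0.5, flux_rate=2.0e6)
```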

3. GFOV positional accuracy performance of HDST

Three-axis attitude accuracy is the most important indicator of a star tracker. As the positional accuracy directly influences the attitude accuracy, the positional accuracy of a star tracker under high dynamic conditions is discussed in this section.

Herein, we focus on the GFOV positional accuracy as influenced by the imaging trajectory model, the pixelization of the image sensor, and random noises. As the centroiding algorithm is widely used to locate star spots, we analyze the centroiding errors caused by these three error sources individually.

3.1. Centroiding errors caused by the motion trajectory model

In practice, the real imaging trajectory of stars on the focal plane is a complex and nonlinear motion. Thus, errors exist between the true centroid and the centroid calculated with our trajectory model.

Equation (3) is accurate for HDSTs when Δt is small. Therefore, by dividing the exposure time into a large number of continuous time intervals, a nonlinear discretized motion, denoted (xi, yi), can be obtained. This nonlinear discretized motion is regarded as the error-free reference imaging trajectory of the stars. The centroid of the reference trajectory, denoted (LC,X, LC,Y), is calculated with the centroiding algorithm as follows, where the time interval Δt is set to 0.01 ms:

$$L_{C,X}=\frac{1}{N}\sum_{i=0}^{N-1}x_i,\qquad L_{C,Y}=\frac{1}{N}\sum_{i=0}^{N-1}y_i,\qquad N=\frac{T}{\Delta t}.\tag{12}$$

Since our trajectory model is a uniform motion, the midpoint of the trajectory is the unbiased estimation of the true centroid position. Thus, trajectory centroiding error is given by:

$$\varepsilon_{traj,x}=\frac{1}{N}\sum_{i=0}^{N-1}x_i-\frac{1}{2}V_{L,x}T-x_0,\qquad \varepsilon_{traj,y}=\frac{1}{N}\sum_{i=0}^{N-1}y_i-\frac{1}{2}V_{L,y}T-y_0.\tag{13}$$
εtraj,x and εtraj,y are uncorrelated, so the total trajectory centroiding error εtraj is obtained:
$$\varepsilon_{traj}=\sqrt{\varepsilon_{traj,x}^2+\varepsilon_{traj,y}^2}.\tag{14}$$
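Equations (12)–(14) can be approximated numerically by iterating Eq. (3) as the reference path and comparing its centroid with the midpoint of the equivalent uniform (chord) motion between the path's endpoints; this chord midpoint stands in for the model midpoint x0 + VL,xT/2 of Eq. (13), since the two-step model is constructed to reach the same end position. The sketch below is our own approximation, with a pixel-unit focal length assumed:

```python
import math

def trajectory_centroid_error(x0, y0, omega, phi, beta, f, T, dt=1e-5):
    """Approximate trajectory centroiding error of Eqs. (12)-(14):
    centroid of the iterated nonlinear path of Eq. (3) minus the midpoint
    of the uniform motion joining the path's endpoints (in pixels)."""
    wx = omega * math.cos(phi) * math.cos(beta)
    wy = omega * math.cos(phi) * math.sin(beta)
    wz = omega * math.sin(phi)
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(int(round(T / dt))):  # iterate Eq. (3)
        theta = math.atan(math.hypot(x, y) / f)
        c2 = math.cos(theta) ** 2
        x, y = (x + (y * wz + f * wy / c2) * dt,
                y - (x * wz + f * wx / c2) * dt)
        xs.append(x)
        ys.append(y)
    cx = sum(xs) / len(xs)       # Eq. (12): reference centroid
    cy = sum(ys) / len(ys)
    mx = (xs[0] + xs[-1]) / 2    # midpoint of the uniform (chord) motion
    my = (ys[0] + ys[-1]) / 2
    return math.hypot(cx - mx, cy - my)  # Eq. (14)
```

For the Table 1 parameters with ωT = 0.2° (ω = 25°/s, T = 8 ms), the result stays well below the 0.018-pixel bound discussed below.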

The trajectory centroiding error εtraj is influenced by six parameters, namely, the magnitude of the angular velocity ω; the FOV angle θ; the angular velocity direction angles φ and β; the imaging position angle γ; and the exposure time T.

If the parameters of the star tracker are designed as in Table 1, the trajectory centroiding error εtraj versus the parameters |φ|, ωT, and γ, for θ ∈ [0°, 10°], γ ∈ [0°, 360°], β ∈ [0°, 360°], |φ| ∈ [0°, 90°], and ωT ∈ [0°, 0.2°], is shown in Fig. 3.

Fig. 3 Trajectory centroiding error εtraj vs. parameters |φ|, ωT and γ in GFOV, (a) ωT = 0.2°, β = −30°, γ = 135°, (b) |φ| = 43°, β = −30°, γ = 135°, and (c) ωT = 0.2°, |φ| = 43°.

We find that, over the ranges of the six parameters, the GFOV trajectory centroiding error of our model lies within [0, 0.018] pixels. Thus, the maximum centroiding error caused by the proposed trajectory model in the GFOV is less than 0.018 pixels when ωT is less than 0.2°, and it can be neglected.

3.2. Centroiding error caused by pixelization

The pixelization of the image sensor, which samples the image with a pixel array, influences the positional accuracy of star trackers. By substituting Eq. (9) into the centroiding algorithm, the centroid of the smeared star spot, denoted (x̄, ȳ), is given by:

$$\bar{x}=\frac{\sum_i x_iI_i}{\sum_i I_i},\qquad \bar{y}=\frac{\sum_j y_jI_j}{\sum_j I_j}.\tag{15}$$

Since the midpoint of the uniform trajectory is the unbiased estimate of the true centroid position (xC, yC) of the smeared star, the discretization (pixelization) centroiding error is calculated as:

$$\delta_{disc,x}=\bar{x}-x_C=\frac{\sum_i x_iI_i}{\sum_i I_i}-\left(\frac{V_{L,x}T}{2}+x_0\right),\qquad \delta_{disc,y}=\bar{y}-y_C=\frac{\sum_j y_jI_j}{\sum_j I_j}-\left(\frac{V_{L,y}T}{2}+y_0\right).\tag{16}$$

Then, by substituting the gray values Ii (Eq. (10)) and Itot (Eq. (11)) into Eq. (16), the x-axis centroiding error is written as:

$$\delta_{disc,x}=\begin{cases}\dfrac{1}{2}\displaystyle\sum_i x_i\left[\operatorname{erf}\!\left(\frac{x_i-x_0+0.5}{\sqrt{2}\rho}\right)-\operatorname{erf}\!\left(\frac{x_i-x_0-0.5}{\sqrt{2}\rho}\right)\right]-x_0,& V_{L,x}T=0\\[3mm] \dfrac{1}{2V_{L,x}T}\displaystyle\sum_i x_i\int_{-0.5}^{+0.5}\left[\operatorname{erf}\!\left(\frac{x_i+\Delta x-x_0}{\sqrt{2}\rho}\right)-\operatorname{erf}\!\left(\frac{x_i+\Delta x-x_0-V_{L,x}T}{\sqrt{2}\rho}\right)\right]d\Delta x-x_C,&\text{others,}\end{cases}\tag{17}$$
where erf(·) is the ‘error function’ encountered in integrating the normal distribution.

Suppose that the integral window of the smeared star spot is [1, xN] × [1, yM], which contains all the energy of the smeared star, and let fPSF denote the normalized Gaussian PSF density with standard deviation ρ. We obtain:

$$\begin{aligned}&\sum_i x_i\int_{-0.5}^{+0.5}\left[\operatorname{erf}\!\left(\frac{x_i+\Delta x-x_0}{\sqrt{2}\rho}\right)-\operatorname{erf}\!\left(\frac{x_i+\Delta x-x_0-V_{L,x}T}{\sqrt{2}\rho}\right)\right]d\Delta x\\
&=\sum_{i=1}^{N}x_i\Big\{(x_i-x_0+0.5)\operatorname{erf}\!\left[\frac{x_i-x_0+0.5}{\sqrt{2}\rho}\right]+2\rho^2f_{PSF}(x_i-x_0+0.5)\\
&\qquad\quad-(x_i-x_0+0.5-V_{L,x}T)\operatorname{erf}\!\left[\frac{x_i-x_0+0.5-V_{L,x}T}{\sqrt{2}\rho}\right]-2\rho^2f_{PSF}(x_i-x_0+0.5-V_{L,x}T)\\
&\qquad\quad-(x_i-x_0-0.5)\operatorname{erf}\!\left[\frac{x_i-x_0-0.5}{\sqrt{2}\rho}\right]-2\rho^2f_{PSF}(x_i-x_0-0.5)\\
&\qquad\quad+(x_i-x_0-0.5-V_{L,x}T)\operatorname{erf}\!\left[\frac{x_i-x_0-0.5-V_{L,x}T}{\sqrt{2}\rho}\right]+2\rho^2f_{PSF}(x_i-x_0-0.5-V_{L,x}T)\Big\}\\
&=x_N\int_{x_N-x_0+0.5-V_{L,x}T}^{x_N-x_0+0.5}\operatorname{erf}\!\left[\frac{x}{\sqrt{2}\rho}\right]dx-\sum_{i=1}^{N}\int_{x_i-x_0-0.5-V_{L,x}T}^{x_i-x_0-0.5}\operatorname{erf}\!\left[\frac{x}{\sqrt{2}\rho}\right]dx.\end{aligned}\tag{18}$$

Substituting Eq. (18) into Eq. (17), the x-axis centroiding error is rewritten as:

$$\delta_{disc,x}=\begin{cases}\dfrac{1}{2}\displaystyle\sum_i x_i\left[\operatorname{erf}\!\left(\frac{x_i-x_0+0.5}{\sqrt{2}\rho}\right)-\operatorname{erf}\!\left(\frac{x_i-x_0-0.5}{\sqrt{2}\rho}\right)\right]-x_0,& V_{L,x}T=0\\[3mm] 0,& V_{L,x}T=1,2,\dots,N\\[3mm] \dfrac{1}{2V_{L,x}T}\left[x_NV_{L,x}T-\displaystyle\sum_{i=1}^{N}\int_{x_i-x_0-0.5-V_{L,x}T}^{x_i-x_0-0.5}\operatorname{erf}\!\left(\frac{x}{\sqrt{2}\rho}\right)dx\right]-x_C,&\text{others.}\end{cases}\tag{19}$$

Similarly, the y-axis component of the discretization centroiding error is as follows:

$$\delta_{disc,y}=\begin{cases}\dfrac{1}{2}\displaystyle\sum_j y_j\left[\operatorname{erf}\!\left(\frac{y_j-y_0+0.5}{\sqrt{2}\rho}\right)-\operatorname{erf}\!\left(\frac{y_j-y_0-0.5}{\sqrt{2}\rho}\right)\right]-y_0,& V_{L,y}T=0\\[3mm] 0,& V_{L,y}T=1,2,\dots,M\\[3mm] \dfrac{1}{2V_{L,y}T}\left[y_MV_{L,y}T-\displaystyle\sum_{j=1}^{M}\int_{y_j-y_0-0.5-V_{L,y}T}^{y_j-y_0-0.5}\operatorname{erf}\!\left(\frac{y}{\sqrt{2}\rho}\right)dy\right]-y_C,&\text{others.}\end{cases}\tag{20}$$

δdisc,x and δdisc,y are uncorrelated; thus, the total discretization centroiding error δdisc can be expressed as:

$$\delta_{disc}=\sqrt{\delta_{disc,x}^2+\delta_{disc,y}^2}.\tag{21}$$

From Eqs. (19) and (20), we can find that the discretization centroiding error δdisc is influenced by Gaussian radius ρ, imaging velocities VL,x and VL,y, exposure time T, and initial position (x0, y0).

Due to the pixelization of the image sensor, the offsets of the initial position (x0, y0) on the focal plane, denoted Δx0 and Δy0, are uniformly distributed over a pixel. As these offsets vary within [−0.5, 0.5), δdisc changes periodically. Since Δx0 and Δy0 are generally unknown, the maximum centroiding error over Δx0 and Δy0 is considered:

$$\delta_{disc}(\rho,V_{L,x}T,V_{L,y}T)=\max_{\Delta x_0,\Delta y_0\in[-0.5,0.5)}\delta_{disc}(\rho,V_{L,x}T,V_{L,y}T,x_0,y_0,\Delta x_0,\Delta y_0).\tag{22}$$

When the integral window of the detected star is not large enough, a window truncation error occurs due to energy loss. To reduce this error, the width and height of the integral window, denoted W and H, are rounded up as:

$$W\times H=\lceil 1+6\rho+V_{L,x}T\rceil\times\lceil 1+6\rho+V_{L,y}T\rceil,\tag{23}$$
where ⌈·⌉ denotes the integer round-up operation and the window spans [−0.5−3ρ, 0.5+VL,xT+3ρ] × [−0.5−3ρ, 0.5+VL,yT+3ρ] around the start position. This integral window contains over 99.7% of the energy of the smeared star spots for Δx0, Δy0 ∈ [−0.5, 0.5).
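The discretization centroiding error of Eqs. (16), (22), and (23) can be evaluated numerically without the closed forms: sample the uniform motion in time, integrate the Gaussian over each pixel via the error function, take the centroid, and maximize over the sub-pixel start offset. The sketch below is our own 1-D illustration (function name, window margins, and sampling counts are assumptions):

```python
import math

def disc_centroid_error_x(rho, Lx, dx0, n_sub=400):
    """x-component of the discretization centroiding error (Eq. (16)):
    centroid of the pixel-sampled, smeared 1-D Gaussian minus the true
    centroid x0 + Lx/2, for a sub-pixel start offset dx0 in [-0.5, 0.5)."""
    x0 = dx0                            # start position within pixel 0
    W = math.ceil(1 + 6 * rho + Lx)     # integral window width of Eq. (23)
    cols = range(-(W // 2) - 1, W + 2)  # generous column range around the spot
    sq2r = math.sqrt(2) * rho
    num = den = 0.0
    for xi in cols:
        s = 0.0
        for k in range(n_sub):          # average over the uniform motion
            mu = x0 + Lx * (k + 0.5) / n_sub
            s += 0.5 * (math.erf((xi + 0.5 - mu) / sq2r)
                        - math.erf((xi - 0.5 - mu) / sq2r))
        num += xi * s
        den += s
    return num / den - (x0 + Lx / 2)

# Eq. (22): worst case over the unknown sub-pixel offset
err = max(abs(disc_centroid_error_x(0.3, 1.5, d / 20 - 0.5)) for d in range(20))
```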

According to the GFOV trajectory model (Eqs. (6) and (7)), VL,x and VL,y are related to the six parameters ω, θ, φ, β, γ, and T, which interact with one another. Since different velocities (size and direction) result in different imaging lengths and directions during the exposure time, two independent parameters, the imaging length L and the imaging direction χ, are used instead to analyze the discretization centroiding error within the integral window:

$$\delta_{disc}(\rho,L,\chi)=\max_{\Delta x_0,\Delta y_0\in[-0.5,0.5)}\delta_{disc}(\rho,L\cos\chi,L\sin\chi,x_0,y_0,\Delta x_0,\Delta y_0).\tag{24}$$

Therefore, the discretization centroiding error is related to three parameters, namely, the Gaussian radius ρ, imaging length L, and imaging direction angle χ.

By setting ρ = 0.3 pixels, the discretization centroiding error δdisc versus imaging length L and imaging direction angle χ is shown in Fig. 4.

Fig. 4 Discretization centroiding error δdisc vs. L and χ when ρ = 0.3 pixels.

The discretization centroiding errors in the axial directions, i.e., 0°, 90°, 180°, and 270°, are greater than those in other imaging directions. As the imaging length L increases, the centroiding error in the axial directions gradually approaches δdisc,y(ρ, L, 0°), while that in other directions gradually approaches zero.

Considering the influence of the Gaussian radius ρ on the centroiding error, the discretization centroiding error δdisc versus imaging length L and Gaussian radius ρ for imaging directions 0° and 45° is shown in Figs. 5(a) and 5(b), respectively. We find that δdisc decreases as ρ increases for all imaging directions.

Fig. 5 Discretization centroiding error δdisc vs. L and ρ for imaging direction (a) 0°, (b) 45°.

In summary, when the imaging length L ⩾ 1, the discretization centroiding error δdisc at different imaging directions in GFOV can be approximately expressed as:

$$\delta_{disc}(\rho,L,\chi)\approx\begin{cases}\delta_{disc,y}(\rho,L,0°),&\chi=0°,90°,180°,270°\\[1mm] 0,&\chi=45°,135°,225°,315°,\ L\geqslant1\\[1mm] \in\left[0,\ \delta_{disc,y}(\rho,L,0°)\right],&\text{others.}\end{cases}\tag{25}$$

3.3. Centroiding errors caused by random noises

Noise, which causes random errors in the centroiding results of stars, is inevitable in the imaging process of an image sensor. The gray value of each pixel Ii,j includes a random noise error ni,j caused by photon shot, dark current, readout, and quantization noises, with standard deviations denoted nshot, ndark, nread, and nadc, respectively [12]. The digital variance of the random noise of a pixel is given by:

$$n_{i,j}^2=K^2\left[n_{shot}^2+\left(n_{dark}^2+n_{read}^2+n_{adc}^2\right)\right]=I_{i,j}K+K^2n_{add}^2,\tag{26}$$
where $I_{i,j}K$ is the shot-noise variance and $K^2n_{add}^2$, with $n_{add}^2=n_{dark}^2+n_{read}^2+n_{adc}^2$, represents the additional-noise variance.

On the basis of the error transfer function, the component of the random centroiding error in the x-axis can be expressed as:

$$\sigma_{rand,x}^2=\sum_i\left(\frac{\partial\bar{x}}{\partial I_i}\right)^2n_i^2+2\sum_i\sum_{j>i}\rho_{i,j}\frac{\partial\bar{x}}{\partial I_i}\frac{\partial\bar{x}}{\partial I_j}n_in_j,\tag{27}$$
where ni and nj represent the random noise in the i-th and j-th columns, respectively, and ρi,j is the correlation coefficient between ni and nj. As the pixels are independent of each other in most image sensors, i.e., ρi,j = 0, Eq. (27) reduces to the following within the integral window W × H:
$$\sigma_{rand,x}^2=\frac{K}{I_{tot}^2}\sum_i(x_i-\bar{x})^2I_i+\frac{K^2n_{add}^2H}{I_{tot}^2}\sum_i(x_i-\bar{x})^2.\tag{28}$$

Therefore, the random centroiding error is divided into two terms. The first term is the centroiding error caused by shot noise, and the second term is the centroiding error caused by additional noise.

By substituting the gray values Ii (Eq. (10)) and Itot (Eq. (11)) into Eq. (28), the shot-noise centroiding error σshot,x is derived as follows, where λ = ΦηQEGINS represents the average rate of incident photoelectrons on the image sensor:

$$\begin{aligned}\sigma_{shot,x}^2&=\frac{1}{\lambda^2T^2}\sum_i(x_i-\bar{x})^2\frac{\lambda}{\sqrt{2\pi}\rho}\int_0^T\!\int_{x_i-0.5}^{x_i+0.5}\exp\!\left\{-\frac{(x-x_0-V_{L,x}t)^2}{2\rho^2}\right\}dx\,dt\\
&\approx\frac{1}{\lambda T^2}\int_0^T\left[\rho^2+(x_0+V_{L,x}t)^2-2\bar{x}(x_0+V_{L,x}t)+\bar{x}^2\right]dt=\frac{1}{\lambda T}\left[\rho^2+\frac{1}{12}(V_{L,x}T)^2\right].\end{aligned}\tag{29}$$

The additional noise centroiding error in the x-axis component is derived as:

$$\sigma_{add,x}^2=\frac{Hn_{add}^2}{\lambda^2T^2}\int_{x_0-0.5-3\rho}^{x_0+0.5+V_{L,x}T+3\rho}\left(x-x_0-\frac{V_{L,x}T}{2}\right)^2dx\approx\frac{1}{12}\frac{Hn_{add}^2}{\lambda^2T^2}W^3.\tag{30}$$

Similarly, the shot and additional noise centroiding errors in the y-axis component, which are denoted as σshot,y and σadd,y, respectively, are obtained:

$$\sigma_{shot,y}^2=\frac{1}{\lambda T}\left[\rho^2+\frac{1}{12}(V_{L,y}T)^2\right],\qquad \sigma_{add,y}^2=\frac{1}{12}\frac{Wn_{add}^2}{\lambda^2T^2}H^3.\tag{31}$$

Thus, the total random centroiding error σrand is expressed as:

$$\sigma_{rand}^2=\left(\sigma_{shot,x}^2+\sigma_{shot,y}^2\right)+\left(\sigma_{add,x}^2+\sigma_{add,y}^2\right)=\frac{V_L}{\lambda L}\left(2\rho^2+\frac{1}{12}L^2\right)+\frac{1}{12}\frac{n_{add}^2V_L^2}{\lambda^2L^2}WH\left(W^2+H^2\right).\tag{32}$$

Equation (32) shows that the random centroiding error σrand is affected by the Gaussian radius ρ, imaging length L, imaging direction angle χ, additional noise nadd, average rate of incident photoelectron λ, and size of imaging velocity VL.

For intuition, we use the star tracker (Table 1) and the CMV4000 image sensor [29] as examples. The dark current is 125 e−/s at 25 °C, nread is 13 e−, K is 210/13500 DN/e−, and ηQE is 0.45. Thus, the additional noise is $n_{add}^2=125T+n_{read}^2+1/(12K^2)$ (at 25 °C), and the average rate of incident photoelectrons is λ = GINS × 185.05 × 10³ photons/s (for a magnitude-6 star). Given the insufficient star detection sensitivity of this star tracker and image sensor, it is hard to detect faint stars during short exposure times (several milliseconds). Therefore, the photoelectron magnification of the image intensifier is set to 30, i.e., GINS = 30. For simplicity, when no other parameters are specified, the remaining examples use these same parameters, which we will not repeat hereafter.
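Plugging these CMV4000-style numbers into Eq. (32) gives a quick estimate of the random centroiding error. The following sketch is illustrative (the function name and the exact window rounding are our assumptions; λ is taken directly as GINS × 185.05 × 10³ per the text):

```python
import math

def random_centroid_error(rho, L, chi, VL, lam, n_add):
    """Total random centroiding error of Eq. (32): shot-noise term plus
    additional-noise term over the W x H integral window of Eq. (23).
    rho, L in pixels; chi in radians; VL in pixels/s; lam in e-/s."""
    W = math.ceil(1 + 6 * rho + L * abs(math.cos(chi)))
    H = math.ceil(1 + 6 * rho + L * abs(math.sin(chi)))
    shot = (VL / (lam * L)) * (2 * rho ** 2 + L ** 2 / 12)
    add = (n_add ** 2 * VL ** 2 / (12 * lam ** 2 * L ** 2)) * W * H * (W ** 2 + H ** 2)
    return math.sqrt(shot + add)

# CMV4000-style numbers from the text (25 degC, magnitude-6 star, G_INS = 30)
K = 210 / 13500                                   # conversion gain, DN per e-
T = 0.008                                         # exposure time, s
n_add = math.sqrt(125 * T + 13 ** 2 + 1 / (12 * K ** 2))
lam = 30 * 185.05e3                               # photoelectron rate, e-/s
sigma = random_centroid_error(0.45, 16.8, 0.0, 2100.0, lam, n_add)
```

Here the imaging length 16.8 pixels corresponds to VL = 2.1 pixels/ms over the 8 ms exposure.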

By setting the magnitude of the imaging velocity VL to 2.1 pixels/ms, the random centroiding error σrand versus imaging direction angle χ and imaging length L is shown in Figs. 6(a) and 6(b), respectively.

Fig. 6 Random centroiding error vs. imaging direction angle χ and imaging length L.

We find that σrand changes periodically as the imaging direction χ ranges from 0° to 360°, is proportional to the Gaussian radius ρ, and is a convex (parabola-like) function of the imaging length L with a single minimum.

3.4. GFOV total centroiding error of HDST

In summary, ignoring the trajectory centroiding error, the total centroiding error at different positions in the GFOV of the star tracker is obtained by summing the discretization and random centroiding errors:

$$\sigma_{total}^2=\delta_{disc}^2(\rho,L,\chi)+\sigma_{rand}^2(\rho,L,\chi,\lambda,V_L,n_{add}).\tag{33}$$

It is a function of Gaussian radius ρ, imaging length L, imaging direction angle χ, additional noise nadd, average rate of incident photoelectron λ, and size of imaging velocity VL.

For intuition, Figs. 7(a) and 7(b) display the total centroiding error σtotal versus imaging direction angle χ under imaging velocities VL = 0.1 and VL = 1.1 pixels/ms, respectively, when the Gaussian radius ρ is set as 0.45 pixels.

Fig. 7 Total centroiding error σtotal vs. imaging direction for (a) VL = 0.1 pixels/ms, (b) VL = 1.1 pixels/ms.

The total centroiding error σtotal increases with the imaging velocity VL, and the maximum GFOV centroiding error occurs either at imaging directions χ = 0°, 90°, 180°, 270° or at χ = 45°, 135°, 225°, 315°.

4. GFOV parameter optimization

To achieve the best performance of HDSTs, the optimal parameters that minimize the total centroiding error need to be selected. As analyzed previously, the total centroiding error of an HDST is influenced by six parameters, among which the additional noise nadd is determined by the image sensor, the average rate of incident photoelectrons λ is set by the detection sensitivity of the star tracker, and the imaging velocity on the focal plane VL is determined by the real-time working conditions. These parameters are uncontrollable.

Therefore, we attempt to achieve the optimal performance of a star tracker under high dynamic conditions ω ∈ [0°/s, ωmax] by optimizing the Gaussian radius and exposure time, denoted (ρ̄, T̄), as follows:

$$(\bar{\rho},\bar{T})=\arg\min_{\rho,T}\left\{\sigma_{total}(\rho,T,\chi,v_\omega)\right\}\quad\text{s.t.}\ \omega\in[0°/\mathrm{s},\,\omega_{max}],\ \chi\in[0°,360°],\tag{34}$$
where vω and χ represent the velocities of the smeared star spot in terms of size and direction, respectively.

On the basis of the proposed trajectory model (Eq. (8)), when γ − β + ω sin φT/2 = 0°, the maximum imaging velocity for angular velocity ω is given by:

$$v_\omega=f\omega\left\{\cos\!\left[\arctan\!\left(\tfrac{1}{2}\sin(2\theta_{max})\right)\right]\Big/\cos^2\theta_{max}+\sin\!\left[\arctan\!\left(\tfrac{1}{2}\sin(2\theta_{max})\right)\right]\tan\theta_{max}\right\},\tag{35}$$
where θmax is the half-FOV angle of the star tracker. Therefore, the angular velocities ω ∈ [0°/s, ωmax] correspond to imaging velocities ranging from 0 pixels/s to vω. For the star tracker in Table 1, vω ≈ 1.0461fω.
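The bracketed factor of Eq. (35) is a pure function of the half-FOV angle and can be checked directly; for θmax = 10° it evaluates to about 1.0461, matching the value quoted for the Table 1 star tracker. A minimal sketch (the function name is ours):

```python
import math

def v_omega_factor(theta_max):
    """Bracketed factor of Eq. (35) mapping f*omega to the maximum imaging
    velocity v_omega, for half-FOV angle theta_max in radians."""
    a = math.atan(0.5 * math.sin(2 * theta_max))
    return math.cos(a) / math.cos(theta_max) ** 2 + math.sin(a) * math.tan(theta_max)
```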

Since the optimization model (Eq. (34)) is difficult to solve directly, we resolve it in two steps: (1) determining the GFOV optimal Gaussian radius ρ̄ and (2) calculating the GFOV optimal exposure time .

4.1. GFOV optimal Gaussian radius for HDST

On the basis of Eqs. (25) and (32), the discretization centroiding error decreases as the Gaussian radius ρ increases, whereas the random centroiding error grows with ρ. Thus, there is an optimal Gaussian radius that minimizes the total positional error.

Since the total centroiding error σtotal increases with the imaging velocity VL, and the maximum GFOV centroiding error occurs either at imaging directions χ = 0°, 90°, 180°, 270° or at χ = 45°, 135°, 225°, 315° (see Fig. 7), the upper limit of the GFOV centroiding error for ω ∈ [0°/s, ωmax] is obtained by setting the velocity to vω at ω = ωmax and considering the imaging directions χ = 0° and χ = 45°:

$$\Omega_{\rho,T}=\left[0,\sigma_{total}(\rho,L,\chi=0°,v_\omega)\right]\cup\left[0,\sigma_{total}(\rho,L,\chi=45°,v_\omega)\right],\quad\omega=\omega_{max}.\tag{36}$$

Thus, we seek a Gaussian radius that minimizes Ωρ,T. According to Eq. (32), the shot and additional noise errors for imaging directions 0° and 45° within the integral window are given by:

$$\begin{cases}\sigma_{shot}^2=\dfrac{V_L}{\lambda L}\left(2\rho^2+\dfrac{1}{12}L^2\right),&\chi=0°\ \text{or}\ 45°\\[2mm] \sigma_{add1}^2=\dfrac{1}{12}\dfrac{n_{add}^2V_L^2}{\lambda^2L^2}(1+6\rho)(1+6\rho+L)\left[(1+6\rho)^2+(1+6\rho+L)^2\right],&\chi=0°\\[2mm] \sigma_{add2}^2=\dfrac{1}{6}\dfrac{n_{add}^2V_L^2}{\lambda^2L^2}\left(\dfrac{1}{\sqrt{2}}L+1+6\rho\right)^4,&\chi=45°,\end{cases}\tag{37}$$
where the integral window is $W\times H=\lceil1+6\rho+L_x\rceil\times\lceil1+6\rho+L_y\rceil$.

Then, by differentiating σshot, σadd1, and σadd2 with respect to L and setting the derivatives to zero, we obtain:

$$L_{shot}=2\sqrt{6}\rho,\qquad L_{add1}=2.383(1+6\rho),\qquad L_{add2}=\sqrt{2}(1+6\rho).\tag{38}$$
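These stationary points can be verified by a brute-force scan of the error expressions from Eq. (37) (constant prefactors dropped, since they do not move the minimizer). A minimal check, with ρ = 0.36 pixels chosen only for illustration:

```python
import math

# Numerical check of Eq. (38): scan the imaging length L and confirm that
# sigma_shot^2 ~ (2 rho^2 + L^2/12) / L is minimized at L = 2*sqrt(6)*rho,
# and sigma_add2^2 ~ (L/sqrt(2) + 1 + 6 rho)^4 / L^2 at L = sqrt(2)*(1 + 6 rho).
rho = 0.36
shot = lambda L: (2 * rho ** 2 + L ** 2 / 12) / L
add2 = lambda L: (L / math.sqrt(2) + 1 + 6 * rho) ** 4 / L ** 2
grid = [0.01 * k for k in range(1, 2000)]
L_shot = min(grid, key=shot)
L_add2 = min(grid, key=add2)
```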

Lshot, Ladd1, and Ladd2 are the optimal imaging lengths that minimize σshot, σadd1, and σadd2, respectively. By setting the imaging lengths to Lshot, Ladd1, and Ladd2 simultaneously, the total centroiding errors in imaging directions χ = 0° and χ = 45° are calculated as:

$$\begin{cases}\sigma_{total}^2(\rho,\chi=0°)=\delta_{disc,y}^2(\rho,L,0°)+\sigma_{shot}^2(\rho,L_{shot})+\sigma_{add1}^2(\rho,L_{add1})\\[1mm] \sigma_{total}^2(\rho,\chi=45°)=\sigma_{shot}^2(\rho,L_{shot})+\sigma_{add2}^2(\rho,L_{add2}).\end{cases}\tag{39}$$

For intuition, Figs. 8(a) and 8(b) show the total centroiding error σtotal versus Gaussian radius ρ at χ = 0° and χ = 45° when the maximum rotational angular velocity ωmax of the star tracker is 1°/s, 2°/s, 20°/s, and 40°/s.

Fig. 8 Total centroiding error σtotal vs. Gaussian radius ρ when χ = 0° and χ = 45° for (a) 20°/s and 40°/s, (b) 1°/s and 2°/s, and (c) ρmin1 and ρmin2.

Considering the centroiding errors of imaging directions 0° and 45° at the same time, the optimal Gaussian radius ρ̄ equals ρmin1 when ωmax is 1°/s, and equals ρmin2 when ωmax is 2°/s, 20°/s, or 40°/s, where ρmin1 represents the Gaussian radius that minimizes σtotal(ρ, χ = 0°), and ρmin2 represents the intersection of σtotal(ρ, χ = 0°) and σtotal(ρ, χ = 45°). Therefore, the GFOV optimal Gaussian radius ρ̄ is obtained:

$$\begin{cases}\rho_{min1}=\arg\min_\rho\left\{\delta_{disc,y}^2(\rho,L,0°)+\sigma_{shot}^2(\rho,L_{shot})+\sigma_{add1}^2(\rho,L_{add1})\right\}\\[1mm] \rho_{min2}=\operatorname{solve}_\rho\left\{\delta_{disc,y}^2(\rho,L,0°)+\sigma_{add1}^2(\rho,L_{add1})-\sigma_{add2}^2(\rho,L_{add2})=0\right\}\\[1mm] \bar{\rho}=\min\left[\rho_{min1},\rho_{min2}\right].\end{cases}\tag{40}$$

When the maximum rotational angular velocity ωmax ranges from 0°/s to 40°/s, the optimal Gaussian radii ρmin1 and ρmin2 versus ωmax are shown in Fig. 8(c). The optimal Gaussian radius is ρ̄ = ρmin1 when ωmax ∈ [0°/s, 1.02°/s] and ρ̄ = ρmin2 when ωmax ⩾ 1.02°/s. The optimal Gaussian radius is influenced by the angular velocity, the incident photoelectron rate, and the additional noise.

4.2. GFOV optimal exposure time for HDST

The exposure time is determined by the imaging length and imaging velocity. Since the centroiding errors σshot, σadd1, and σadd2 are convex functions of the imaging length L, there is an optimal imaging length that minimizes the total centroiding error. If the optimal imaging length, denoted L0, is determined, the optimal exposure time is given by:

$$\bar{T}=\frac{L_0}{v_\omega},\qquad\omega\in[0°/\mathrm{s},\,\omega_{max}].\tag{41}$$

As described above, the optimal Gaussian radius ρ̄ is obtained by setting the imaging lengths to Lshot, Ladd1, and Ladd2 simultaneously. However, this is not realizable in practice. Under high dynamic conditions, the optimal Gaussian radius ρ̄ equals ρmin2 for ω ∈ [1.02°/s, ωmax], and the total centroiding error σtotal(ρ̄, χ = 0°) ⩾ σtotal(ρ̄, χ = 45°). Thus, the optimal imaging length L0 lies within the range bounded by Lshot, Ladd1, and Ladd2, which is expressed as follows:

L0[Lshot,Ladd1][Lshot,Ladd2].

Considering that Ladd1 > Ladd2, we obtain the optimal imaging length L0 :

$$L_0=\min\left\{\arg\min_L\left\{\sigma_{total}(\bar{\rho},L,\chi=0°,v_\omega)\right\},\,L_{add2}\right\},\quad\omega\in[0°/\mathrm{s},\,\omega_{max}],\ L\in[L_{shot},L_{add2}].\tag{43}$$
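The two-step recipe of Eqs. (41)–(43) can be sketched end-to-end. The snippet below is a simplified illustration, not the paper's procedure: δdisc,y is neglected, only the shot and χ = 45° additional-noise terms of Eq. (37) are scanned, θmax = 10° is assumed so that vω ≈ 1.0461fω, and the noise values are the CMV4000-style numbers from the text:

```python
import math

def optimal_exposure_time(omega_max_dps, f, rho=0.36):
    """Simplified sketch of Eqs. (41)-(43): scan the imaging length L between
    L_shot and L_add2, minimizing the shot + chi=45deg additional-noise error
    (delta_disc,y neglected), then convert to exposure time T = L0 / v_omega."""
    v_omega = 1.0461 * f * math.radians(omega_max_dps)  # Eq. (35), theta_max = 10 deg
    L_shot = 2 * math.sqrt(6) * rho                     # Eq. (38)
    L_add2 = math.sqrt(2) * (1 + 6 * rho)
    lam, n_add = 30 * 185.05e3, 22.7                    # text values (assumed here)

    def err2(L):
        shot = (v_omega / (lam * L)) * (2 * rho ** 2 + L ** 2 / 12)
        add2 = (n_add ** 2 * v_omega ** 2 / (6 * lam ** 2 * L ** 2)) \
               * (L / math.sqrt(2) + 1 + 6 * rho) ** 4
        return shot + add2

    grid = [L_shot + (L_add2 - L_shot) * k / 500 for k in range(501)]
    L0 = min(grid, key=err2)                            # Eq. (43), simplified
    return L0 / v_omega                                 # Eq. (41)
```

Consistent with Fig. 9(c), a larger maximum angular velocity yields a shorter optimal exposure time.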

By setting ω = ωmax = 40°/s, the optimal Gaussian radius ρ̄ equals 0.36 pixels (see Fig. 8(c)). The components of the centroiding errors versus imaging length L are shown in Fig. 9(a), and the total centroiding error σtotal versus imaging length L when ω equals 5°/s, 25°/s, and 40°/s is shown in Fig. 9(b). The optimal imaging lengths are marked with square boxes.

Fig. 9 Centroiding errors vs. imaging length and angular velocity, (a) components of the centroiding errors, (b) under different rotational angular velocities, (c) optimal exposure time T̄.

We find that the centroiding error σtotal(ρ̄, L0, χ = 0°) approximately equals σtotal(ρ̄, L0, χ = 45°) when ω = ωmax = 40°/s. Moreover, as ω increases, σtotal increases, and the optimal imaging length ranges from Lshot to Ladd2.

Then, according to Eq. (41), the GFOV optimal exposure time T̄ is determined by dividing the optimal imaging length L0 by vω. The optimal exposure time T̄ versus angular velocity ω ∈ [0°/s, 40°/s] is shown in Fig. 9(c), where the lower limit Tshot and upper limit Tadd2 of the exposure time are determined by Lshot/vω and Ladd2/vω, and ρ̄ is 0.36 pixels. As ω increases, the GFOV optimal exposure time T̄ varies between Tshot and Tadd2.

We also consider the centroiding error under different incident stellar magnitudes and temperatures. The optimal Gaussian radius ρ̄, total centroiding error, and optimal exposure time T̄ curves under different incident stellar magnitudes and temperatures are shown in Fig. 10.

Fig. 10 Optimal Gaussian radius ρ̄, centroiding error σtotal, and optimal exposure time T̄ curves influenced by different incident stellar magnitudes and temperatures.

As shown in Figs. 10(a)–10(c), the optimal Gaussian radius ρ̄ is remarkably influenced by the incident stellar magnitude, and the centroiding error of faint stars is greater than that of bright stars. Thus, the positional accuracy of an HDST is determined by the stellar magnitude at its detection sensitivity limit.

As shown in Figs. 10(d)–10(f), the influence of temperature on the total centroiding error of the HDST is relatively small.

5. Simulations and night sky experiment

In the previous sections, we analyzed the star imaging trajectory, positional accuracy, and parameter optimization of the HDST. In this section, a comparative evaluation, numerical simulations, and a night sky experiment are conducted to validate our conclusions.

5.1. Comparative evaluation for the trajectory model

The trajectory models of Yan [12] (Eq. (2)) and Yu [6] (Eq. (1)) are compared with the proposed trajectory model to evaluate its accuracy. Similar to Section 3.1, Fig. 11 shows the trajectory centroiding errors of different trajectory models in GFOV.

Fig. 11 Comparison of trajectory centroiding errors of different trajectory models in GFOV.

For the parameters θ = 10°, γ ∈ [0°, 360°], β ∈ [0°, 360°], |φ| ∈ [0°, 90°], and ωT ∈ [0°, 0.2°], Fig. 11(a) presents the trajectory centroiding error versus |φ| when ωT = 0.2°, β = −30°, and γ = 135°. The maximum value is obtained when |φ| equals 90°, 0°, and 43° for our, Yan’s, and Yu’s models, respectively.

Then, by setting |φ| to 90°, 0°, and 43° for the three models, the maximum trajectory centroiding error versus ωT when β = −30° and γ = 135° is shown in Fig. 11(b), and the maximum trajectory centroiding error versus γ when ωT = 0.2° and β = −30° is shown in Fig. 11(c).

Therefore, as the six parameters vary, the ranges of the trajectory centroiding error for our model, Yu’s model, and Yan’s model are [0, 0.018], [0, 0.305], and [0, 1.757] pixels, respectively. The maximum trajectory centroiding error of our model is much smaller than those of the other two models.
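The trajectory centroiding error compared here is, per Eqs. (12)–(14), the gap between the time-averaged spot position and the midpoint of the straight chord assumed by a linear-motion model. A minimal numerical sketch follows; the two trajectory functions are illustrative, not the paper’s actual models:

```python
import numpy as np

def traj_centroid_error(xy_of_t, T, n=1000):
    """Eqs. (12)-(14): centroid of the imaged trajectory (time average of
    the spot position) minus the midpoint of the straight chord."""
    t = np.linspace(0.0, T, n)
    pts = np.array([xy_of_t(tk) for tk in t])     # sampled spot positions
    centroid = pts.mean(axis=0)                   # Eq. (12)
    midpoint = 0.5 * (xy_of_t(0.0) + xy_of_t(T))  # chord midpoint
    return np.hypot(*(centroid - midpoint))       # Eq. (14)

# A straight, constant-velocity trajectory gives zero trajectory error;
# a curved (hypothetical quadratic) trajectory gives a nonzero error.
straight = lambda t: np.array([0.5 * t, 0.2 * t])
curved = lambda t: np.array([0.5 * t, 0.2 * t + 0.05 * t**2])
print(traj_centroid_error(straight, T=4.0))   # ≈ 0
print(traj_centroid_error(curved, T=4.0))     # > 0
```

Comparing models then amounts to evaluating this error with each model’s predicted trajectory against the true (curved) one.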

5.2. Numerical simulations

Two groups of high dynamic simulations are conducted to verify the parameter optimization. In each group, the variation of the centroiding error with imaging length is recorded. We use the star tracker (Table 1) and the CMV4000 image sensor as the simulation devices.

5.2.1. Simulation 1

This simulation aims to verify the optimal Gaussian radius. The angular velocity is set to 40°/s, and the incident stellar magnitude is 6. The theoretical optimal Gaussian radius ρ̄ is 0.36 pixels (see Fig. 8(c)). Thus, three tested Gaussian radii ρtest, i.e., 0.30, 0.36, and 0.42 pixels, are used to generate simulated stars with an imaging length from 0.1 to 7 pixels in steps of 0.05 pixels. The simulated stars are imaged in two directions, 0° and 45°. The initial position of each simulated star is uniformly distributed within a single pixel, and photon shot and additional noises are added. In principle, σtotal (ρtest, χ = 0°) ≈ σtotal (ρtest, χ = 45°) if ρtest ≈ ρ̄; σtotal (ρtest, χ = 0°) > σtotal (ρtest, χ = 45°) if ρtest < ρ̄; and σtotal (ρtest, χ = 0°) < σtotal (ρtest, χ = 45°) if ρtest > ρ̄.
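The star-generation step of Simulation 1 can be sketched as follows, assuming a Gaussian PSF integrated over each pixel and over the exposure (in the spirit of Eqs. (9) and (15)); the grid size and integration step counts are arbitrary choices of this sketch:

```python
import numpy as np
from math import erf, sqrt

def blurred_star(rho, L, chi_deg, x0, y0, size=16, steps=200):
    """Pixel intensities (unit total flux) of a star moving a length L
    along direction chi with a Gaussian PSF of radius rho."""
    vx, vy = np.cos(np.radians(chi_deg)), np.sin(np.radians(chi_deg))
    img = np.zeros((size, size))
    xs = np.arange(size)
    for s in np.linspace(0.0, L, steps):          # integrate over the exposure
        cx, cy = x0 + vx * s, y0 + vy * s
        # per-axis pixel integrals of the Gaussian via the error function
        px = [0.5 * (erf((x - cx + 0.5) / (sqrt(2) * rho))
                     - erf((x - cx - 0.5) / (sqrt(2) * rho))) for x in xs]
        py = [0.5 * (erf((y - cy + 0.5) / (sqrt(2) * rho))
                     - erf((y - cy - 0.5) / (sqrt(2) * rho))) for y in xs]
        img += np.outer(py, px) / steps
    return img

def centroid(img):
    """Intensity-weighted centroid, cf. Eq. (15)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    tot = img.sum()
    return (xs * img).sum() / tot, (ys * img).sum() / tot

img = blurred_star(rho=0.36, L=3.0, chi_deg=0.0, x0=6.3, y0=7.5)
cx, cy = centroid(img)
err = np.hypot(cx - (6.3 + 1.5), cy - 7.5)   # deviation from trajectory midpoint
```

Repeating this over random sub-pixel starting positions, with shot and additional noise added to `img`, reproduces the Monte Carlo sweep of the simulation.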

The simulation results of centroiding error versus imaging length L under different Gaussian radii in imaging directions 0° and 45° are shown in Figs. 12(a) and 12(b), respectively. The theoretical errors calculated by Eq. (33) are denoted with solid lines for 0° and dashed lines for 45°, and the simulation results are represented with scattered points.

Fig. 12 Centroiding error vs. imaging length under different Gaussian radii for imaging directions (a) 0°, (b) 45°.

The scattered points agree well with the theoretical results, and the theoretical error curves bound the maximum values of the simulated points. Considering the centroiding errors in imaging directions 0° and 45° together, the GFOV centroiding error ranges are Ω(ρ = 0.30) ∈ [0, 0.061], Ω(ρ = 0.36) ∈ [0, 0.040], and Ω(ρ = 0.42) ∈ [0, 0.045] pixels; hence the optimal Gaussian radius is 0.36 pixels, which minimizes the total centroiding error.

5.2.2. Simulation 2

This simulation aims to verify the optimal imaging length. Similar to Simulation 1, the Gaussian radius is set to 0.36 pixels, and the simulated stars are generated under different angular velocities (5°/s, 25°/s, and 40°/s) and different incident stellar magnitudes (6, 5, and 4). This simulation considers only the imaging direction of 0°.

The simulation results of centroiding error versus imaging length L under different angular velocities and incident stellar magnitudes are shown in Figs. 13(a) and 13(b), respectively.

Fig. 13 Centroiding error vs. imaging length for imaging direction of 0°, (a) under different angular velocities, and (b) under different incident stellar magnitudes.

As shown in Fig. 13, the trend of the simulation points is similar to the theoretical trend under different angular velocities and incident stellar magnitudes, and the optimal imaging length of these simulation points varies between Lshot and Ladd2. Therefore, the simulations agree well with the theoretical results.

5.3. Night sky experiment

A night sky experiment was conducted at the Xinglong Station (National Astronomical Observatories, China) to verify our theoretical conclusions. In Fig. 14, the star tracker is mounted on a two-axis turntable; its Gaussian radius is approximately 0.43 pixels. The star tracker is equipped with an image intensifier. By setting the control voltage UC to 3.8 V, the number of incident photoelectrons is amplified by approximately 30 times.

Fig. 14 Setup of the night sky experiment.

In our experiment, the angular velocities are set to 5°/s and 25°/s, and the imaging directions to 0° and 45°. At 5°/s, the imaging lengths vary from 0.997 to 7.482 pixels under exposure times of 2, 3, 5, 7, 10, and 15 ms; at 25°/s, they vary from 1.247 to 7.482 pixels under exposure times of 0.5, 1, 1.5, 2, 2.5, and 3 ms. Then, more than 200 frames of centroid data are acquired in each case by rotating the turntable from −40° to 40° and selecting the centroid frames in the range from −10° to 10° (to reduce the influence of atmospheric refraction [30]). A baffle is assembled on the tested star tracker to reduce the influence of stray light [31].

Then, similar to the test method in [12], the inter-star angle between two stars is used to evaluate the centroiding error. The angular distance between the two observed star vectors is compared with that from the star catalog; the deviation is divided by 2 and converted into pixels. The average centroiding error versus imaging length when ω = 5°/s and ω = 25°/s is shown in Figs. 15(a) and 15(b), respectively.
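The inter-star-angle metric can be sketched as below. The division by 2 follows the text; the small-angle conversion to pixels via a focal length expressed in pixel units is our assumption for this sketch:

```python
import numpy as np

def interstar_angle(u, v):
    """Angle (rad) between two observed unit star vectors."""
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def centroid_error_pixels(obs_angle, cat_angle, focal_px):
    """Deviation of the measured inter-star angle from the catalog value,
    split between the two stars (divide by 2) and converted to pixels via
    the focal length in pixels (small-angle approximation, our assumption)."""
    return abs(obs_angle - cat_angle) / 2.0 * focal_px

# Illustrative numbers only: two nearby star vectors and a pretend catalog
# angle differing by 10 microradians; focal_px = 5800 is a hypothetical value.
obs = interstar_angle(np.array([0.02, 0.03, 0.999]), np.array([0.20, -0.05, 0.978]))
cat = obs - 1e-5
err_px = centroid_error_pixels(obs, cat, focal_px=5800.0)
```

Averaging `err_px` over many star pairs and frames yields the per-star centroiding error plotted against imaging length.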

Fig. 15 Star location error vs. imaging length for (a) 5°/s, (b) 25°/s.

When ω = 5°/s, the star centroiding error follows a concave (first decreasing, then increasing) curve versus imaging length L, and the centroiding error in imaging direction 0° is approximately equal to that in 45° within the range between Lshot and Ladd2, i.e., [2.1066, 5.0629] pixels. This trend is consistent with the previous conclusion that σtotal (ρtest, χ = 0°) ≈ σtotal (ρtest, χ = 45°) if ρtest ≈ ρ̄ (0.45 pixels; see Fig. 8(c)).

When ω = 25°/s, the centroiding error again follows a concave curve versus imaging length L, and the centroiding error in imaging direction 0° is less than that in 45°. This result is consistent with the previous conclusion that σtotal (ρtest, χ = 0°) < σtotal (ρtest, χ = 45°) if ρtest > ρ̄ (0.38 pixels).

Thus, the night sky experiment validates the reliability of our conclusions for error analysis and parameter optimization.

6. Conclusion

This study focused on the imaging performance of star trackers under high dynamic conditions. We first derived a GFOV imaging trajectory model, which has high precision for star trackers under high dynamic conditions. Then, by analyzing the centroiding errors of stars with different imaging motions in GFOV, we obtained a comprehensive centroiding accuracy expression that changes periodically as the imaging direction ranges from 0° to 360°. Finally, the GFOV parameter optimization for the Gaussian radius ρ and exposure time T is resolved in two steps. The GFOV optimal parameters (ρ̄, T̄) allow the HDST to achieve its best accuracy and performance. For HDSTs, the optimal Gaussian radius is usually the intersection of σtotal (ρ, χ = 0°) and σtotal (ρ, χ = 45°), and the optimal imaging length is the value that minimizes σtotal (ρ̄, L, χ = 0°) within the range between Lshot and Ladd2. The optimal parameters are influenced by the angular velocity, the incident photoelectrons, and the additional noise. Based on our model, a star tracker can maintain its best performance under high dynamic conditions. The simulation results of the centroiding error agree well with the theoretical parameter optimization results, and the night sky experiment validates the reliability of our conclusions for error analysis and parameter optimization.

Funding

National Natural Science Foundation of China (61725501).

References

1. C. C. Liebe, “Accuracy performance of star trackers-a tutorial,” IEEE Trans. Aerosp. Electron. Syst. 38(2), 587–599 (2002). [CrossRef]  

2. L. Ma, D. Zhan, G. Jiang, S. Fu, H. Jia, X. Wang, Z. Huang, J. Zheng, F. Hu, and W. Wu, “Attitude-correlated frames approach for a star sensor to improve attitude accuracy under highly dynamic conditions,” Appl. Opt. 54(25), 7559–7566 (2015). [CrossRef]   [PubMed]  

3. T. Sun, F. Xing, X. Wang, Z. You, and D. Chu, “An accuracy measurement method for star trackers based on direct astronomic observation,” Sci. Rep. 6, 22593 (2016). [CrossRef]  

4. J. Li, G. Wang, and X. Wei, “Generation of guide star catalogue for star trackers,” IEEE Sensors J. 18(11), 4592–4601 (2018). [CrossRef]  

5. A. Katake and C. Bruccoleri, “Starcam sg100: a high-update rate, high-sensitivity stellar gyroscope for spacecraft,” Proc. SPIE 7536, 753608 (2010). [CrossRef]  

6. W. Yu, J. Jiang, and G. Zhang, “Multi exposure imaging and parameter optimization for intensified star trackers,” Appl. Opt. 55(36), 10187–10197 (2016). [CrossRef]  

7. M. A. Samaan, T. C. Pollock, and J. L. Junkins, “Predictive centroiding for star trackers with the effect of image smear,” J. Astronaut. Sci. 50(1), 113–123 (2002).

8. C. C. Liebe, K. Gromov, and D. M. Meller, “Toward a stellar gyroscope for spacecraft attitude determination,” J. Guid. Control. Dyn. 27(1), 91–99 (2004). [CrossRef]  

9. T. Sun, F. Xing, Z. You, and M. Wei, “Motion-blurred star acquisition method of the star tracker under high dynamic conditions,” Opt. Express 21(17), 20096–20110 (2013). [CrossRef]   [PubMed]  

10. T. Sun, F. Xing, Z. You, X. Wang, and B. Li, “Smearing model and restoration of star image under conditions of variable angular velocity and long exposure time,” Opt. Express 22(5), 6009–6024 (2014). [CrossRef]   [PubMed]  

11. Z. Weina, Q. Wei, and G. Lei, “Blurred star image processing for star sensors under dynamic conditions,” Sensors 12(5), 6712–6726 (2012). [CrossRef]  

12. G. Zhang, J. Jiang, and J. Yan, “Dynamic imaging model and parameter optimization for a star tracker,” Opt. Express 24(6), 5961–5983 (2016). [CrossRef]   [PubMed]  

13. B. J. Shen, J. C. Tan, J. K. Yang, and J. L. Liao, “Exposure time optimization of the star sensor,” Opto-Electronic Eng. 36(12), 22–26 (2009).

14. X. Wei, W. Tan, J. Li, and G. Zhang, “Exposure time optimization for highly dynamic star trackers,” Sensors 14(3), 4914–4931 (2014). [CrossRef]   [PubMed]  

15. J. Shen, G. Zhang, and X. Wei, “Simulation analysis of dynamic working performance for star trackers,” J. Opt. Soc. Am. A Opt. Image Sci. Vis. 27(12), 2638–2647 (2010). [CrossRef]   [PubMed]  

16. X. Wei, J. Xu, J. Li, J. Yan, and G. Zhang, “S-curve centroiding error correction for star sensor,” Acta Astronaut. 99(1), 231–241 (2014). [CrossRef]  

17. J. Yang, B. Liang, T. Zhang, and J. Song, “A novel systematic error compensation algorithm based on least squares support vector regression for star sensor image centroid estimation,” Sensors 11(8), 7341–7363 (2011). [CrossRef]   [PubMed]  

18. Y. Hao, L. Da, W. Li, and J. Zhang, “Studies on dynamic motion compensation and positioning accuracy on star tracker,” Appl. Opt. 54(28), 8417–8424 (2015). [CrossRef]  

19. E. D. Aretskinhariton and A. J. Swank, “Star tracker performance estimate with imu,” in AIAA Guidance, Navigation, and Control Conference, (American Institute of Aeronautics and Astronautics, 2015), p. 44135.

20. C. Zhang, J. Zhao, T. Yu, H. Yuan, and F. Li, “Fast restoration of star image under dynamic conditions via lp regularized intensity prior,” Aerosp. Sci. Technol. 61(1), 29–34 (2017). [CrossRef]  

21. X. Wu and X. Wang, “Multiple blur of star image and the restoration under dynamic conditions,” Acta Astronaut. 68(11), 1903–1913 (2011). [CrossRef]  

22. S. Wang, S. Zhang, M. Ning, and B. Zhou, “Motion blurred star image restoration based on mems gyroscope aid and blur kernel correction,” Sensors 18(8), 2662 (2018).

23. H. Wang, W. Zhou, X. Cheng, and Haoyu, “Image smearing modeling and verification for strapdown star sensor,” Chin. J. Aeronaut. 25(1), 115–123 (2012). [CrossRef]  

24. X. Li and H. Zhao, “Analysis of star image centroid accuracy of an aps star sensor in rotation,” Aerosp. Control. Appl. 35(4), 11–16 (2009).

25. Z. Wang, J. Jiang, and G. Zhang, “Distributed parallel super-block-based star detection and centroid calculation,” IEEE Sensors J. 18(19), 8096–8107 (2018). [CrossRef]  

26. M. S. Wei, F. Xing, and Z. You, “A real-time detection and positioning method for small and weak targets using a 1d morphology-based approach in 2d images,” Light. Sci. Appl. 7(5), 18006 (2018). [CrossRef]  

27. G. Wang, J. Li, and X. Wei, “Star identification based on hash map,” IEEE Sensors J. 18(4), 1591–1599 (2018). [CrossRef]  

28. M. D. Shuster and S. D. Oh, “Three-axis attitude determination from vector observations,” J. Guid. Control Dyn. 4(1), 70–77 (1981). [CrossRef]  

29. CMOSIS bvba, “CMOS image sensor CMV4000,” http://www.cmosis.com/products/productdetail/cmv4000.

30. R. C. Stone, “An accurate method for computing atmospheric refraction,” Publ. Astron. Soc. Pac. 108(729), 1051–1058 (1996). [CrossRef]  

31. G. Wang, F. Xing, M. Wei, and Z. You, “Rapid optimization method of the strong stray light elimination for extremely weak light signal detection,” Opt. Express 25(21), 26175–26185 (2017). [CrossRef]   [PubMed]  




Equations (43)

$$\left\{\begin{aligned} x_i^{t+\Delta t} &= x_i^t+\left(y_i^t\omega_z^t+f\omega_y^t\right)\Delta t\\ y_i^{t+\Delta t} &= y_i^t-\left(x_i^t\omega_z^t+f\omega_x^t\right)\Delta t\end{aligned}\right.\tag{1}$$
$$V_{xy}(\Delta t)=f\omega_{xy}\frac{1}{\cos^2\!\left(\theta^t+\omega_{xy}\Delta t\right)}\approx f\omega_{xy}\frac{1}{\cos^2\theta^t},\tag{2}$$
$$\left\{\begin{aligned} x_i^{t+\Delta t} &= x_i^t+\left(y_i^t\omega_z^t+\frac{f\omega_y^t}{\cos^2\theta^t}\right)\Delta t\\ y_i^{t+\Delta t} &= y_i^t-\left(x_i^t\omega_z^t+\frac{f\omega_x^t}{\cos^2\theta^t}\right)\Delta t\end{aligned}\right.\tag{3}$$
$$\left\{\begin{aligned} x'_T &= x_0+\frac{f\omega\cos\varphi}{\cos^2\theta_0}\sin\beta\,T\\ y'_T &= y_0-\frac{f\omega\cos\varphi}{\cos^2\theta_0}\cos\beta\,T\end{aligned}\right.\tag{4}$$
$$\left\{\begin{aligned} x_T &= x'_T\cos(\omega_zT)+y'_T\sin(\omega_zT)\\ y_T &= y'_T\cos(\omega_zT)-x'_T\sin(\omega_zT)\end{aligned}\right.\tag{5}$$
$$V_{L,x}=\frac{x_T-x_0}{T}\approx f\omega\sin\varphi\tan\theta_0\sin\!\left(\frac{2\gamma-\omega\sin\varphi\,T}{2}\right)+\frac{f\omega\cos\varphi}{\cos^2\theta_0}\sin\!\left(\beta-\omega\sin\varphi\,T\right),\tag{6}$$
$$V_{L,y}\approx-f\omega\sin\varphi\tan\theta_0\cos\!\left(\frac{2\gamma-\omega\sin\varphi\,T}{2}\right)-\frac{f\omega\cos\varphi}{\cos^2\theta_0}\cos\!\left(\beta-\omega\sin\varphi\,T\right).\tag{7}$$
$$V_L=\sqrt{V_{L,x}^2+V_{L,y}^2}=f\omega\sqrt{\sin^2\varphi\tan^2\theta_0+\frac{\cos^2\varphi}{\cos^4\theta_0}+\frac{\sin 2\varphi\tan\theta_0\cos\!\left(\gamma-\beta+\omega\sin\varphi\,T/2\right)}{\cos^2\theta_0}}.\tag{8}$$
$$I_{i,j}=\frac{\Phi K\eta_{QE}G_{INS}}{2\pi\rho^2}\int_{x_i-0.5}^{x_i+0.5}\!\int_{y_j-0.5}^{y_j+0.5}\!\int_0^T\exp\!\left\{-\frac{\left[x-x_0-V_{L,x}t\right]^2+\left[y-y_0-V_{L,y}t\right]^2}{2\rho^2}\right\}\mathrm{d}t\,\mathrm{d}y\,\mathrm{d}x,\tag{9}$$
$$I_i=\sum_j I_{i,j}\approx\frac{\Phi K\eta_{QE}G_{INS}}{\sqrt{2\pi}\,\rho}\int_0^T\!\int_{x_i-0.5}^{x_i+0.5}\exp\!\left\{-\frac{\left[x-x_0-V_{L,x}t\right]^2}{2\rho^2}\right\}\mathrm{d}x\,\mathrm{d}t.\tag{10}$$
$$I_{tot}=\sum_i I_i=\Phi K\eta_{QE}G_{INS}T.\tag{11}$$
$$L_{C,X}=\frac{1}{N}\sum_{i=0}^{N-1}x_i,\quad L_{C,Y}=\frac{1}{N}\sum_{i=0}^{N-1}y_i,\quad N=\frac{T}{\Delta t}.\tag{12}$$
$$\varepsilon_{traj,x}=\frac{1}{N}\sum_{i=0}^{N-1}x_i-\frac{1}{2}V_{L,x}T-x_0,\quad \varepsilon_{traj,y}=\frac{1}{N}\sum_{i=0}^{N-1}y_i-\frac{1}{2}V_{L,y}T-y_0.\tag{13}$$
$$\varepsilon_{traj}=\sqrt{\varepsilon_{traj,x}^2+\varepsilon_{traj,y}^2}.\tag{14}$$
$$\bar{x}=\frac{\sum_i x_iI_i}{\sum_i I_i},\quad \bar{y}=\frac{\sum_j y_jI_j}{\sum_j I_j}.\tag{15}$$
$$\delta_{disc,x}=\bar{x}-x_C=\frac{\sum_i x_iI_i}{\sum_i I_i}-\left(\frac{V_{L,x}T}{2}+x_0\right),\quad \delta_{disc,y}=\bar{y}-y_C=\frac{\sum_j y_jI_j}{\sum_j I_j}-\left(\frac{V_{L,y}T}{2}+y_0\right).\tag{16}$$
$$\delta_{disc,x}=\begin{cases}\dfrac{1}{2}\displaystyle\sum_i x_i\left[\operatorname{erf}\!\left(\frac{x_i-x_0+0.5}{\sqrt{2}\rho}\right)-\operatorname{erf}\!\left(\frac{x_i-x_0-0.5}{\sqrt{2}\rho}\right)\right]-x_0 & V_{L,x}T=0\\[2ex] \dfrac{1}{2V_{L,x}T}\displaystyle\sum_i x_i\int_{-0.5}^{+0.5}\left[\operatorname{erf}\!\left(\frac{x_i+\Delta x-x_0}{\sqrt{2}\rho}\right)-\operatorname{erf}\!\left(\frac{x_i+\Delta x-x_0-V_{L,x}T}{\sqrt{2}\rho}\right)\right]\mathrm{d}\Delta x-x_C & \text{others}\end{cases}\tag{17}$$
$$\begin{aligned}&\sum_i x_i\int_{-0.5}^{+0.5}\left[\operatorname{erf}\!\left(\frac{x_i+\Delta x-x_0}{\sqrt{2}\rho}\right)-\operatorname{erf}\!\left(\frac{x_i+\Delta x-x_0-V_{L,x}T}{\sqrt{2}\rho}\right)\right]\mathrm{d}\Delta x\\ &=\sum_{i=1}^{N}x_i\Big\{(x_i-x_0+0.5)\operatorname{erf}\!\left[\tfrac{x_i-x_0+0.5}{\sqrt{2}\rho}\right]+2\rho^2 f_{PSF}(x_i-x_0+0.5)\\ &\qquad-(x_i-x_0+0.5-V_{L,x}T)\operatorname{erf}\!\left[\tfrac{x_i-x_0+0.5-V_{L,x}T}{\sqrt{2}\rho}\right]-2\rho^2 f_{PSF}(x_i-x_0+0.5-V_{L,x}T)\\ &\qquad-(x_i-x_0-0.5)\operatorname{erf}\!\left[\tfrac{x_i-x_0-0.5}{\sqrt{2}\rho}\right]-2\rho^2 f_{PSF}(x_i-x_0-0.5)\\ &\qquad+(x_i-x_0-0.5-V_{L,x}T)\operatorname{erf}\!\left[\tfrac{x_i-x_0-0.5-V_{L,x}T}{\sqrt{2}\rho}\right]+2\rho^2 f_{PSF}(x_i-x_0-0.5-V_{L,x}T)\Big\}\\ &\approx x_N\int_{x_N-x_0+0.5-V_{L,x}T}^{x_N-x_0+0.5}\operatorname{erf}\!\left[\tfrac{x}{\sqrt{2}\rho}\right]\mathrm{d}x-\sum_{i=1}^{N}\int_{x_i-x_0-0.5-V_{L,x}T}^{x_i-x_0-0.5}\operatorname{erf}\!\left[\tfrac{x}{\sqrt{2}\rho}\right]\mathrm{d}x\end{aligned}\tag{18}$$
$$\delta_{disc,x}=\begin{cases}\dfrac{1}{2}\displaystyle\sum_i x_i\left[\operatorname{erf}\!\left(\frac{x_i-x_0+0.5}{\sqrt{2}\rho}\right)-\operatorname{erf}\!\left(\frac{x_i-x_0-0.5}{\sqrt{2}\rho}\right)\right]-x_0 & V_{L,x}T=0\\[1.5ex] 0 & V_{L,x}T=1,2,\dots,N\\[1.5ex] \dfrac{1}{2V_{L,x}T}\left[x_NV_{L,x}T-\displaystyle\sum_{i=1}^{N}\int_{x_i-x_0-0.5-V_{L,x}T}^{x_i-x_0-0.5}\operatorname{erf}\!\left[\tfrac{x}{\sqrt{2}\rho}\right]\mathrm{d}x\right]-x_C & \text{others}\end{cases}\tag{19}$$
$$\delta_{disc,y}=\begin{cases}\dfrac{1}{2}\displaystyle\sum_j y_j\left[\operatorname{erf}\!\left(\frac{y_j-y_0+0.5}{\sqrt{2}\rho}\right)-\operatorname{erf}\!\left(\frac{y_j-y_0-0.5}{\sqrt{2}\rho}\right)\right]-y_0 & V_{L,y}T=0\\[1.5ex] 0 & V_{L,y}T=1,2,\dots,M\\[1.5ex] \dfrac{1}{2V_{L,y}T}\left[y_MV_{L,y}T-\displaystyle\sum_{j=1}^{M}\int_{y_j-y_0-0.5-V_{L,y}T}^{y_j-y_0-0.5}\operatorname{erf}\!\left[\tfrac{y}{\sqrt{2}\rho}\right]\mathrm{d}y\right]-y_C & \text{others}\end{cases}\tag{20}$$
$$\delta_{disc}=\sqrt{\delta_{disc,x}^2+\delta_{disc,y}^2}.\tag{21}$$
$$\delta_{disc}(\rho,V_{L,x}T,V_{L,y}T)=\max_{\Delta x_0,\Delta y_0\in[-0.5,0.5)}\delta_{disc}\!\left(\rho,V_{L,x}T,V_{L,y}T,x_0,y_0,\Delta x_0,\Delta y_0\right).\tag{22}$$
$$W\times H=\left[-0.5-3\rho,\ 0.5+V_{L,x}T+3\rho\right]\times\left[-0.5-3\rho,\ 0.5+V_{L,y}T+3\rho\right],\tag{23}$$
$$\delta_{disc}(\rho,L,\chi)=\max_{\Delta x_0,\Delta y_0\in[-0.5,0.5)}\delta_{disc}\!\left(\rho,L\cos\chi,L\sin\chi,x_0,y_0,\Delta x_0,\Delta y_0\right).\tag{24}$$
$$\delta_{disc}(\rho,L,\chi)=\begin{cases}\delta_{disc,y}(\rho,L,0^\circ) & \chi=0^\circ,90^\circ,180^\circ,270^\circ\\ 0 & \chi=45^\circ,135^\circ,225^\circ,315^\circ,\ L\geq 1\\ \in\left[0,\ \delta_{disc,y}(\rho,L,0^\circ)\right] & \text{others}\end{cases}\tag{25}$$
$$n_{i,j}^2=K^2\left[n_{shot}^2+\left(n_{dark}^2+n_{read}^2+n_{adc}^2\right)\right]=I_{i,j}K+K^2n_{add}^2,\tag{26}$$
$$\sigma_{rand,x}^2=\sum_i\left(\frac{\partial\bar{x}}{\partial I_i}\right)^2n_i^2+2\sum_{i<j}\rho_{i,j}\frac{\partial\bar{x}}{\partial I_i}\frac{\partial\bar{x}}{\partial I_j}n_in_j,\tag{27}$$
$$\sigma_{rand,x}^2=\frac{K}{I_{tot}^2}\sum_i(x_i-\bar{x})^2I_i+\frac{K^2n_{add}^2H}{I_{tot}^2}\sum_i(x_i-\bar{x})^2.\tag{28}$$
$$\sigma_{shot,x}^2=\frac{1}{\lambda^2T^2}\sum_i(x_i-\bar{x})^2\,\frac{\lambda}{\sqrt{2\pi}\,\rho}\int_0^T\!\int_{x_i-0.5}^{x_i+0.5}\exp\!\left\{-\frac{\left[x-x_0-V_{L,x}t\right]^2}{2\rho^2}\right\}\mathrm{d}x\,\mathrm{d}t\approx\frac{1}{\lambda T^2}\int_0^T\left[\rho^2+(x_0+V_{L,x}t)^2-2\bar{x}(x_0+V_{L,x}t)+\bar{x}^2\right]\mathrm{d}t=\frac{1}{\lambda T}\left[\rho^2+\frac{1}{12}(V_{L,x}T)^2\right].\tag{29}$$
$$\sigma_{add,x}^2=\frac{Hn_{add}^2}{\lambda^2T^2}\int_{x_0-0.5-3\rho}^{x_0+0.5+V_{L,x}T+3\rho}\left(x-x_0-\frac{V_{L,x}T}{2}\right)^2\mathrm{d}x=\frac{1}{12}\frac{Hn_{add}^2}{\lambda^2T^2}W^3.\tag{30}$$
$$\sigma_{shot,y}^2=\frac{1}{\lambda T}\left[\rho^2+\frac{1}{12}(V_{L,y}T)^2\right],\quad \sigma_{add,y}^2=\frac{1}{12}\frac{Wn_{add}^2}{\lambda^2T^2}H^3.\tag{31}$$
$$\sigma_{rand}^2=\left(\sigma_{shot,x}^2+\sigma_{shot,y}^2\right)+\left(\sigma_{add,x}^2+\sigma_{add,y}^2\right)=\frac{V_L}{\lambda L}\left(2\rho^2+\frac{1}{12}L^2\right)+\frac{1}{12}\frac{n_{add}^2V_L^2}{\lambda^2L^2}WH\left(W^2+H^2\right).\tag{32}$$
$$\sigma_{total}^2=\delta_{disc}^2(\rho,L,\chi)+\sigma_{rand}^2(\rho,L,\chi,\lambda,V_L,n_{add}).\tag{33}$$
$$(\bar{\rho},\bar{T})=\arg\min_{\rho,T}\left\{\sigma_{total}(\rho,T,\chi,v_\omega)\right\}\quad \text{s.t.}\ \omega\in[0^\circ/\mathrm{s},\,\omega_{\max}],\ \chi\in[0^\circ,360^\circ],\tag{34}$$
$$v_\omega=f\omega\left\{\cos\!\left[\arctan\!\left(\tfrac{1}{2}\sin(2\theta_{\max})\right)\right]\big/\cos^2\theta_{\max}+\sin\!\left[\arctan\!\left(\tfrac{1}{2}\sin(2\theta_{\max})\right)\right]\tan\theta_{\max}\right\},\tag{35}$$
$$\Omega_{\rho,T}=\left[0,\sigma_{total}(\rho,L,\chi=0^\circ,v_\omega)\right]\cup\left[0,\sigma_{total}(\rho,L,\chi=45^\circ,v_\omega)\right],\quad\omega=\omega_{\max}.\tag{36}$$
$$\left\{\begin{aligned}\sigma_{shot}^2&=\frac{V_L}{\lambda L}\left(2\rho^2+\frac{1}{12}L^2\right) & &\chi=0^\circ\ \text{or}\ 45^\circ\\ \sigma_{add1}^2&=\frac{1}{12}\frac{n_{add}^2V_L^2}{\lambda^2L^2}(1+6\rho)(1+6\rho+L)\left[(1+6\rho)^2+(1+6\rho+L)^2\right] & &\chi=0^\circ\\ \sigma_{add2}^2&=\frac{1}{6}\frac{n_{add}^2V_L^2}{\lambda^2L^2}\left(\frac{1}{\sqrt{2}}L+1+6\rho\right)^4 & &\chi=45^\circ\end{aligned}\right.\tag{37}$$
$$L_{shot}=2\sqrt{6}\,\rho,\quad L_{add1}=2.383(1+6\rho),\quad L_{add2}=\sqrt{2}\,(1+6\rho).\tag{38}$$
$$\left\{\begin{aligned}\sigma_{total}^2(\rho,\chi=0^\circ)&=\delta_{disc,y}^2(\rho,L,0^\circ)+\sigma_{shot}^2(\rho,L_{shot})+\sigma_{add1}^2(\rho,L_{add1})\\ \sigma_{total}^2(\rho,\chi=45^\circ)&=\sigma_{shot}^2(\rho,L_{shot})+\sigma_{add2}^2(\rho,L_{add2})\end{aligned}\right.\tag{39}$$
$$\left\{\begin{aligned}\rho_{\min 1}&=\arg\min_\rho\left\{\delta_{disc,y}^2(\rho,L,0^\circ)+\sigma_{shot}^2(\rho,L_{shot})+\sigma_{add1}^2(\rho,L_{add1})\right\}\\ \rho_{\min 2}&=\operatorname{solve}_\rho\left\{\delta_{disc,y}^2(\rho,L,0^\circ)=\sigma_{add2}^2(\rho,L_{add2})-\sigma_{add1}^2(\rho,L_{add1})\right\}\\ \bar{\rho}&=\min\left[\rho_{\min 1},\rho_{\min 2}\right]\end{aligned}\right.\tag{40}$$
$$\bar{T}=\frac{L_0}{v_\omega},\quad \omega\in[0^\circ/\mathrm{s},\,\omega_{\max}].\tag{41}$$
$$L_0\in\left[L_{shot},L_{add1}\right]\cup\left[L_{shot},L_{add2}\right].\tag{42}$$
$$L_0=\min\left\{\arg\min_L\left\{\sigma_{total}(\bar{\rho},L,\chi=0^\circ,v_\omega)\right\},\ L_{add2}\right\},\quad \omega\in[0^\circ/\mathrm{s},\,\omega_{\max}],\ L\in[L_{shot},L_{add2}].\tag{43}$$