Article

On-Ground Processing of Yaogan-24 Remote Sensing Satellite Attitude Data and Verification Using Geometric Field Calibration

1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
2 Computer School, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
3 Collaborative Innovation Center of Geospatial Technology, Wuhan University, 129 Luoyu Road, Wuhan 430079, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(8), 1203; https://doi.org/10.3390/s16081203
Submission received: 2 June 2016 / Revised: 21 July 2016 / Accepted: 25 July 2016 / Published: 30 July 2016
(This article belongs to the Section Remote Sensors)

Abstract:
Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. Because the accuracy of the Yaogan-24 remote sensing satellite's on-board attitude data processing is insufficient for its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification using the digital orthophoto map (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach comprises three modules: on-ground processing based on bidirectional filtering, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is robust and feasible, ensuring the quality and reliability of the observation data as well as the convergence and stability of the parameter estimation model. In addition, both Euler angles and quaternions can be used to build a mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared with image geometric processing based on on-board attitude data, the accuracy of uncontrolled and relative geometric positioning of the imagery improves by about 50%.

Graphical Abstract

1. Introduction

Since 2010, China has launched a series of high-resolution optical satellites. On 9 January 2012, China launched its first high-accuracy civil stereo-mapping optical satellite, ZY-3, from the Taiyuan satellite launch center. Like Japan's Advanced Land-Observing Satellite (ALOS) [1], ZY-3 was equipped with three panchromatic (0.50–0.80 μm) time-delay CCD-integrated cameras (nadir, forward, and backward) capable of taking in-track triplet stereo images, with spatial resolutions of 2.1 m (nadir camera) and 3.5 m (forward and backward cameras) [2]. On 20 November 2014, the Yaogan-24 remote sensing satellite, with a panchromatic spatial resolution better than 1 m, was successfully launched. The optical satellite ZY-3-02, similar in design to ZY-3, was scheduled for launch in early 2016. In general, these optical satellites orbit at heights of 500–800 km, from which a 1 arcsec attitude error causes a 3–5 m ground positioning error if other error sources are ignored. The impact of satellite attitude accuracy on high-resolution image geometry processing is therefore significant, aptly summarized by the saying "one false step will make a great difference". Thanks to technological advancements in precise orbit determination, microsecond time synchronization, and high-accuracy calibration of internal camera parameters, satellite attitude precision has become an important factor restricting the geometric positioning precision of high-resolution optical images, and this problem remains unsolved [3,4,5,6].
Many attempts have been made to achieve high-precision attitude determination for image geometry processing of high-resolution optical satellites in remote sensing and geoscience applications, including deterministic methods based on vector observations, such as TRIAD, QUEST, FOAM, SVD, Euler-q, and ESOQ-2, and state filtering estimation methods, such as SOAR, q-EKF, Filter QUEST, and REQUEST [7,8,9,10,11,12]. Meanwhile, algorithms for precision geometry processing of optical images have matured, including (to name a few) the tight geometric-imaging, rational polynomial-fitting, and pixel-pointing angle models [2,3,4,13,14,15,16]. In recent years, platform jitter and sub-meter image processing have become hot topics; typical methods include jitter detection based on a calibration field and jitter compensation in image geometry based on a high-frequency angular displacement sensor [1,17,18,19]. However, to the best of our knowledge, current studies are scarce; most are based mainly on simulated data and lack validation with real data. Due to the intrinsic nature of attitude, the issue of how to evaluate and verify its accuracy effectively also remains unsolved. Our research has been dedicated to this area, and here we introduce our recent work to fill this perceived research gap.
Satellite attitude determination accuracy depends not only on the attitude sensor measurement accuracy, but also on the attitude data processing method used [20]. Generally, optical satellite image geometric processing uses the attitude data processed on board, which also serve the satellite attitude control system. Because the attitude control system depends on the robustness of the attitude data rather than on their accuracy, on-board processing usually adopts a real-time unidirectional Kalman filter. Such a filter uses only past observation data and relies heavily on gyro observations, so gyro bias and other error sources degrade the attitude accuracy; it cannot make full use of the original attitude sensor observations to achieve high-precision processing [21]. Therefore, the attitude accuracy delivered by the attitude control system cannot meet the requirements of optical image geometry processing.
To address these problems, we take the Yaogan-24 remote sensing satellite as an example for research and experimental analysis. We use a bidirectional Kalman filter and an overall weighted smoothing method to enhance the accuracy of attitude data processing, realizing high-precision on-ground processing of attitude data. Meanwhile, we use real panchromatic images together with the digital orthophoto map (DOM) and digital elevation model (DEM) of a geometric calibration field to achieve automatic measurement of high-precision control points and high-precision attitude inversion based on dense image matching. With this approach we can more reasonably evaluate the absolute and relative attitude accuracies.
The remainder of the paper is organized as follows: Section 2 introduces attitude data processing, model construction, and the verification methods. Section 3 presents the collection and analysis of experimental data. Section 4 summarizes our work and presents the study’s conclusions and research perspectives.

2. On-Ground Processing for Attitude Data and Verification in a Geometric Calibration Field

2.1. The On-Ground Processing Workflow

The Yaogan-24 remote sensing satellite is a high-resolution mapping satellite in a Sun-synchronous orbit at an altitude of 645 km. To extend the viewing range of the camera, the satellite can swing laterally up to a maximum of 32°. The photographing system consists of a panchromatic linear mapping camera and a multi-spectral camera with ground pixel resolutions of 1.0 m and 2.0 m, respectively.
The satellite's attitude sensor configuration includes two German Astro10 star sensors, a domestic APS star sensor, four gyro components, a digital Sun sensor, and an infrared Earth sensor. Table 1 shows the performance parameters of the Yaogan-24 remote sensing satellite's star sensors and gyros. The Astro10 star sensors are superior to the domestic APS star sensor, which serves as a backup.
Precise attitude data are a precondition for achieving high-precision geometric positioning of high-resolution optical image processing. The goal of attitude determination is to calculate the attitude parameters in the reference coordinate system on the basis of measurements from the attitude sensors. An attitude determination system consists of the attitude sensor and its attitude-determination algorithm [20]. Therefore, attitude accuracy depends on the accuracies of both the measurement and the calibration algorithm. On-satellite attitude sensors on high-resolution optical satellites include an infrared Earth sensor, a Sun sensor, star sensor, and gyro inertial sensors, etc. In general, the data of the star sensor and gyroscope are combined to determine the precise attitude parameters for high-resolution optical satellites [22,23,24].
Figure 1 shows the on-ground processing workflow for the attitude data of the Yaogan-24 remote sensing satellite and its verification. First, we preprocess the attitude observation data. Second, we construct the measurement model and state model, and use the bidirectional Kalman filter with overall weighted smoothing to obtain the optimal satellite attitude estimate. Finally, we use the panchromatic image and the reference data of the geometric calibration field to verify the relative and absolute accuracy of the estimated attitude data. The workflow includes four major steps:
(1)
Measurement equation construction
A star sensor optical axis vector can achieve high precision for positioning a space object, from which an equation of measurements is built. To ensure the quality and reliability of the observational attitude data, we will control the quality of the observational data based on optical axis angle stability.
(2)
State equation construction
Gyro data reflect the change in attitude and are used to construct the equation of state on the basis of attitude kinematic equations.
(3)
Filtering for information fusion
To estimate the optimal attitude parameters, we use a bidirectional Kalman filter to process attitude data, which are then smoothed overall according to the error covariance matrix.
(4)
Control point measurement and attitude precision inversion
To verify the effectiveness of our method, we applied real image and DOM/DEM data in the geometric calibration field for geometric processing and attitude precision inversion.
Before building the state and measurement equations, we must first determine the state variables. The selection of state variables directly affects the dimensions and non-linearity of the state and measurement equations. To reduce the matrix order and streamline the attitude determination algorithm, we chose the error quaternion vector and the gyro bias error as the state variables of the system, that is, $X = [\Delta q_{13}^T,\ \Delta b^T]^T$ with $\Delta q_{13} = [\Delta q_1\ \Delta q_2\ \Delta q_3]^T$.

2.2. Measurement Equation Construction

The measurement accuracy of a star sensor’s optical axis is the highest among the three axes, and we will construct the measurement equation based on the optical axis vector. Construction of the star sensor measurement equation includes the following three aspects:
(1)
Quality control of observational data from the star sensor
Naturally, certain unusual errors may occur during star sensor observation, so the quality of the observational data should be controlled first. Suppose that at time $t$ the quaternion observation of Sensor A is $q_t^A = [q_{0A}\ q_{1A}\ q_{2A}\ q_{3A}]^T$ and that of Sensor B is $q_t^B = [q_{0B}\ q_{1B}\ q_{2B}\ q_{3B}]^T$. From these we can form the body-to-inertial rotation matrices $R_A^I$ and $R_B^I$:
$$R_A^I = \begin{bmatrix} q_{1A}^2 - q_{2A}^2 - q_{3A}^2 + q_{0A}^2 & 2(q_{1A}q_{2A} - q_{3A}q_{0A}) & 2(q_{1A}q_{3A} + q_{2A}q_{0A}) \\ 2(q_{1A}q_{2A} + q_{3A}q_{0A}) & -q_{1A}^2 + q_{2A}^2 - q_{3A}^2 + q_{0A}^2 & 2(q_{2A}q_{3A} - q_{1A}q_{0A}) \\ 2(q_{1A}q_{3A} - q_{2A}q_{0A}) & 2(q_{2A}q_{3A} + q_{1A}q_{0A}) & -q_{1A}^2 - q_{2A}^2 + q_{3A}^2 + q_{0A}^2 \end{bmatrix}$$
$$R_B^I = \begin{bmatrix} q_{1B}^2 - q_{2B}^2 - q_{3B}^2 + q_{0B}^2 & 2(q_{1B}q_{2B} - q_{3B}q_{0B}) & 2(q_{1B}q_{3B} + q_{2B}q_{0B}) \\ 2(q_{1B}q_{2B} + q_{3B}q_{0B}) & -q_{1B}^2 + q_{2B}^2 - q_{3B}^2 + q_{0B}^2 & 2(q_{2B}q_{3B} - q_{1B}q_{0B}) \\ 2(q_{1B}q_{3B} - q_{2B}q_{0B}) & 2(q_{2B}q_{3B} + q_{1B}q_{0B}) & -q_{1B}^2 - q_{2B}^2 + q_{3B}^2 + q_{0B}^2 \end{bmatrix}$$
Next, we can obtain the optical axis vectors of the two sensors in the inertial reference frame as follows:
$$Z_A = \begin{bmatrix} 2(q_{1A}q_{3A} + q_{2A}q_{0A}) & 2(q_{2A}q_{3A} - q_{1A}q_{0A}) & -q_{1A}^2 - q_{2A}^2 + q_{3A}^2 + q_{0A}^2 \end{bmatrix}^T$$
$$Z_B = \begin{bmatrix} 2(q_{1B}q_{3B} + q_{2B}q_{0B}) & 2(q_{2B}q_{3B} - q_{1B}q_{0B}) & -q_{1B}^2 - q_{2B}^2 + q_{3B}^2 + q_{0B}^2 \end{bmatrix}^T$$
Next, we could calculate the angle between the two at time t:
$$\alpha_t = \arccos(Z_A \cdot Z_B)$$
Because a rigid bracket and a temperature control device are used for the star sensors, the inter-sensor axis angle changes very little, and we can control the quality of the observational data with the following model:
$$\delta_m = \sqrt{\frac{\sum_{i=1}^{N}(\alpha_i - \alpha_{cal})^2}{N}}, \qquad \begin{cases} |\alpha_i - \alpha_{cal}| \le \gamma\,\delta_m & \text{normal observation} \\ |\alpha_i - \alpha_{cal}| > \gamma\,\delta_m & \text{abnormal observation} \end{cases}$$
where $\alpha_{cal}$ is the angle between the optical axes as calibrated in the laboratory, $\gamma$ is a threshold value (generally in the range 1–3), and $\delta_m$ is the mean square error of the optical axis angle.
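As an illustration, the optical-axis screening above can be sketched in Python (a minimal sketch; the function names and the example threshold are our own choices, not part of the original processing chain):

```python
import numpy as np

def optical_axis(q):
    # Optical axis = third column of the body-to-inertial rotation matrix
    # built from the unit quaternion q = [q0, q1, q2, q3].
    q0, q1, q2, q3 = q
    return np.array([2 * (q1 * q3 + q2 * q0),
                     2 * (q2 * q3 - q1 * q0),
                     -q1**2 - q2**2 + q3**2 + q0**2])

def screen_observations(q_a_series, q_b_series, alpha_cal, gamma=3.0):
    # Angle between the two sensors' optical axes at each epoch.
    angles = np.array([np.arccos(np.clip(np.dot(optical_axis(qa), optical_axis(qb)), -1.0, 1.0))
                       for qa, qb in zip(q_a_series, q_b_series)])
    # Mean square error of the axis angle w.r.t. the laboratory-calibrated value,
    # then the gamma * delta_m normal/abnormal classification.
    delta_m = np.sqrt(np.mean((angles - alpha_cal) ** 2))
    normal = np.abs(angles - alpha_cal) <= gamma * delta_m
    return angles, delta_m, normal
```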
(2)
Data fusion for multiple star sensors
Data fusion for multiple star sensors uses the original attitude observations, from which the high-precision attitude of the body frame relative to the inertial frame can be determined; the attitude accuracy depends on the optical axis pointing accuracy of the star sensors [25,26]. This operation is a prerequisite for combining the star sensors with the gyro. Assuming that the optical axes of the multiple star sensors in the inertial frame are $V_1^{CIS}, V_2^{CIS}, \dots, V_n^{CIS}$ and in the body frame are $V_1^{Body}, V_2^{Body}, \dots, V_n^{Body}$, the star sensor observation equation is:
V i C I S + v 3 × 1 = R ^ B I V i B o d y     i = 1 , 2 , , n
in which $\hat{R}_B^I$ is the rotation matrix from the body coordinate system to the inertial coordinate system, and $v_{3\times1}$ is the star sensor observation error.
We could establish an indirect adjustment model using quaternions as independent variables, which is expressed as:
V i C I S + v 3 × 1 = R ^ B I V i B o d y     ( i = 1 , 2 , , n ) q ^ 0 2 + q ^ 1 2 + q ^ 2 2 + q ^ 3 2 = 1
We do a Taylor series expansion on the first item:
$$v_{3\times1} = R_B^I V_i^{Body} - V_i^{CIS} = R_B^I(Q_0)V_i^{Body} + \left(\frac{\partial R_B^I}{\partial q_0}V_i^{Body}\,dq_0 + \frac{\partial R_B^I}{\partial q_1}V_i^{Body}\,dq_1 + \frac{\partial R_B^I}{\partial q_2}V_i^{Body}\,dq_2 + \frac{\partial R_B^I}{\partial q_3}V_i^{Body}\,dq_3\right) - V_i^{CIS}$$
$$q_{00}^2 + q_{01}^2 + q_{02}^2 + q_{03}^2 + 2q_{00}\,dq_0 + 2q_{01}\,dq_1 + 2q_{02}\,dq_2 + 2q_{03}\,dq_3 - 1 = 0$$
where Q 0 = [ q 00 q 01 q 02 q 03 ] T is the initial value of the unknown quaternion. The above equation is linearized with restriction of error equations, and could be written in matrix form:
$$V = AX - L, \qquad CX + W = 0$$
Solving these equations, the derivation proceeds as follows:
$$\begin{cases} V = AX - L \\ CX + W = 0 \\ A^TPV + C^TK = 0 \end{cases}$$
$$A^TPAX + C^TK - A^TPL = 0$$
$$K = \left(C(A^TPA)^{-1}C^T\right)^{-1}\left(C(A^TPA)^{-1}A^TPL + W\right)$$
$$X = (A^TPA)^{-1}\left(A^TPL - C^T\left(C(A^TPA)^{-1}C^T\right)^{-1}\left(C(A^TPA)^{-1}A^TPL + W\right)\right)$$
We then calculate the quaternion update:
q 0 = q 00 + d q 0 q 1 = q 01 + d q 1 q 2 = q 02 + d q 2 q 3 = q 03 + d q 3
We iterate until $X = [dq_0\ dq_1\ dq_2\ dq_3]^T$ converges; the quaternion estimate is then obtained from the above model.
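For illustration, the constrained least-squares solution above can be implemented generically (a minimal sketch; the function name is our own, and in the actual adjustment $A$, $L$, $C$, $W$ would be rebuilt from the linearized observation equations at each iteration):

```python
import numpy as np

def constrained_lsq(A, P, L, C, W):
    # Minimize V'PV with V = AX - L subject to CX + W = 0,
    # following the normal-equation solution derived above.
    N = A.T @ P @ A
    Ninv = np.linalg.inv(N)
    t = A.T @ P @ L
    K = np.linalg.solve(C @ Ninv @ C.T, C @ Ninv @ t + W)
    return Ninv @ (t - C.T @ K)
```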
(3)
Constructing the measurement equation
From the fusion results of the multiple star sensor observations shown above, we can construct the attitude measurement equation. The three attitude axis vectors in the body coordinate system, their measured values in the inertial coordinate system, and their true values are:
$$\begin{cases} l_{b1} = [x_{b1}, y_{b1}, z_{b1}]^T \\ l_{b2} = [x_{b2}, y_{b2}, z_{b2}]^T \\ l_{b3} = [x_{b3}, y_{b3}, z_{b3}]^T \end{cases}, \qquad \begin{cases} l_{mi1} = [x_{mi1}, y_{mi1}, z_{mi1}]^T \\ l_{mi2} = [x_{mi2}, y_{mi2}, z_{mi2}]^T \\ l_{mi3} = [x_{mi3}, y_{mi3}, z_{mi3}]^T \end{cases}, \qquad \begin{cases} l_{i1} = [x_{i1}, y_{i1}, z_{i1}]^T \\ l_{i2} = [x_{i2}, y_{i2}, z_{i2}]^T \\ l_{i3} = [x_{i3}, y_{i3}, z_{i3}]^T \end{cases}$$
The measurement equation can then be written compactly as:
$$Z(t) = h(X,t) + V(t), \qquad h(X,t) = \begin{bmatrix} l_{i1} \\ l_{i2} \\ l_{i3} \end{bmatrix}_{9\times1} = \begin{bmatrix} h_1 \\ h_2 \\ h_3 \end{bmatrix}, \qquad V(t) = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \end{bmatrix}_{9\times1}$$
As measurement Equation (14) is continuous and non-linear in the state, to apply the filtering algorithm we first linearize it about the best estimate $\hat{X}(t)$ with sampling period $T$, and then perform discrete recursive calculations. The detailed derivation is as follows:
$$h_1 = l_{i1} = A_{bi}^T(q)\,l_{b1}, \qquad A_{bi}(q) = A_{bi}(\Delta q)A_{bi}(\hat{q})$$
where A b i ( q ) represents the rotational transformation matrix from the inertial to the body coordinate system. As the error quaternion is small, it can be reduced to:
$$A(\Delta q) = \begin{bmatrix} 1 & 2\Delta q_3 & -2\Delta q_2 \\ -2\Delta q_3 & 1 & 2\Delta q_1 \\ 2\Delta q_2 & -2\Delta q_1 & 1 \end{bmatrix} = I_{3\times3} - 2[\Delta q\times], \qquad A_{bi}(q) = \{I_{3\times3} - 2[\Delta q\times]\}A_{bi}(\hat{q})$$
Further solving it:
$$h_1 = l_{i1} = A_{bi}^T(q)\,l_{b1} = A_{bi}^T(\hat{q})\,l_{b1} - \{2[\Delta q\times]A_{bi}(\hat{q})\}^T l_{b1} = A_{bi}^T(\hat{q})\,l_{b1} + 2A_{bi}^T(\hat{q})[\Delta q\times]\,l_{b1} = A_{bi}^T(\hat{q})\,l_{b1} - 2A_{bi}^T(\hat{q})[l_{b1}\times]\Delta q$$
We could similarly get:
$$h_2 = l_{i2} = A_{bi}^T(\hat{q})\,l_{b2} - 2A_{bi}^T(\hat{q})[l_{b2}\times]\Delta q, \qquad h_3 = l_{i3} = A_{bi}^T(\hat{q})\,l_{b3} - 2A_{bi}^T(\hat{q})[l_{b3}\times]\Delta q$$
Now we calculate the measurement matrix:
$$\frac{\partial h_1}{\partial \Delta q} = -2A_{bi}^T(\hat{q})[l_{b1}\times], \qquad \frac{\partial h_2}{\partial \Delta q} = -2A_{bi}^T(\hat{q})[l_{b2}\times], \qquad \frac{\partial h_3}{\partial \Delta q} = -2A_{bi}^T(\hat{q})[l_{b3}\times]$$
Therefore, Equation (14) becomes linear and discrete:
$$Z_k = H_kX_k + V_k, \qquad H_k = \left.\frac{\partial h[X(t_k),t_k]}{\partial X(t_k)}\right|_{X(t_k)=\hat{X}_{k/k-1}} = \begin{bmatrix} \frac{\partial h_1}{\partial \Delta q} & 0_{3\times3} \\ \frac{\partial h_2}{\partial \Delta q} & 0_{3\times3} \\ \frac{\partial h_3}{\partial \Delta q} & 0_{3\times3} \end{bmatrix}_{9\times6} = \begin{bmatrix} -2A_{bi}^T(\hat{q})[l_{b1}\times] & 0_{3\times3} \\ -2A_{bi}^T(\hat{q})[l_{b2}\times] & 0_{3\times3} \\ -2A_{bi}^T(\hat{q})[l_{b3}\times] & 0_{3\times3} \end{bmatrix}_{9\times6}, \qquad Z_k = \begin{bmatrix} l_{mi1} - A_{bi}^T(\hat{q})\,l_{b1} \\ l_{mi2} - A_{bi}^T(\hat{q})\,l_{b2} \\ l_{mi3} - A_{bi}^T(\hat{q})\,l_{b3} \end{bmatrix}_{9\times1}$$
V k is observation noise sequence, satisfying:
$$E(V_k) = 0, \qquad E(V_kV_j^T) = R_k\delta_{kj}$$
where $\delta_{kj}$ is the Kronecker delta and $R_k$ is the covariance matrix of the measurement noise. Assuming the optical-axis observation error is $\sigma_s$, we obtain $R_k = \sigma_s^2 I_{9\times9}$.

2.3. Construction of the State Equation Based on the Gyroscope and Attitude Kinematic Equations

The measurement model of a gyro, a common inertial attitude sensor, is critical in attitude determination. The gyro output angular velocity is integrated to predict the satellite attitude at the next epoch, and the gyro bias is treated as an additional estimated state; the gyroscope measurements are thus used directly in the state equation rather than in the measurement equation. According to the configuration of the on-board gyroscopes, the gyroscope measurement model is:
ω g = ω + b + η g
in which $\omega_g$ is the gyro output measurement; $\omega$ is the rotation rate of the satellite body frame relative to inertial space; $b$ is the gyro bias; $\eta_g$ is the gyro measurement noise with mean square error $\sigma_g$; and $\delta$ is the Dirac delta function, satisfying:
$$E(\eta_g(t)) = 0, \qquad E(\eta_g(t)\eta_g^T(t')) = \sigma_g^2\,\delta(t - t')$$
The gyro bias is not static; assuming it is driven by white noise, it follows the random walk model:
$$\dot{b} = \eta_b, \qquad E(\eta_b(t)) = 0, \qquad E(\eta_b(t)\eta_b^T(t')) = \sigma_b^2\,\delta(t - t')$$
Furthermore, we assume that the two types of noise are independent. η b represents the gyro bias white noise; and σ b is the mean square error of the gyro bias white noise.
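To make the roles of $b$, $\eta_g$, and $\eta_b$ concrete, the gyro measurement model can be simulated as follows (a sketch with assumed noise magnitudes and a satellite at rest; the actual performance figures are those of Table 1):

```python
import numpy as np

rng = np.random.default_rng(42)
dt, n = 0.25, 1000                 # sampling interval [s] and sample count (assumed)
sigma_g, sigma_b = 1e-4, 1e-6      # assumed gyro noise and bias-drift levels [rad/s]

omega_true = np.zeros((n, 3))      # true body rate; zero here purely for illustration
bias = np.zeros(3)
omega_meas = np.empty((n, 3))
for k in range(n):
    # random-walk bias: b_dot = eta_b
    bias = bias + sigma_b * np.sqrt(dt) * rng.standard_normal(3)
    # gyro output: omega_g = omega + b + eta_g
    omega_meas[k] = omega_true[k] + bias + sigma_g * rng.standard_normal(3)
```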
With the quaternion kinematics:
$$\dot{q} = \frac{1}{2}\,q \otimes \omega_{bi}$$
in which $q$ is the attitude of the satellite body frame relative to the inertial frame and $\omega_{bi}$ is the angular velocity of the body frame relative to the inertial frame, expressed in the body frame; integrating gives the quaternion. As the gyro-measured angular rate contains measurement error and bias error, we can only obtain the corresponding estimates $\hat{q}$ and $\hat{\omega}_{bi}$.
The error between the true satellite attitude quaternion $q$ and the quaternion estimate $\hat{q}$ can be expressed as the error quaternion $\Delta q = [\Delta q_0\ \Delta q_1\ \Delta q_2\ \Delta q_3]^T$, so that:
$$q = \hat{q} \otimes \Delta q$$
where the error quaternion Δ q represents a small rotation angle in which Δ q 0 1 , so we just need to consider the vector part of the quaternion error, and the error quaternion can be reduced into three independent variables.
Change Equation (26) into:
Δ q = q ^ 1 q
and calculate the derivative of both sides:
Δ q ˙ = q ^ ˙ 1 q + q ^ 1 q ˙
Furthermore, from Equation (28):
$$\Delta\dot{q} = \dot{\hat{q}}^{-1} \otimes q + \hat{q}^{-1} \otimes \dot{q} = -\frac{1}{2}\hat{\omega}_{bi} \otimes \hat{q}^{-1} \otimes q + \frac{1}{2}\hat{q}^{-1} \otimes q \otimes \omega_{bi} = -\frac{1}{2}\hat{\omega}_{bi} \otimes \Delta q + \frac{1}{2}\Delta q \otimes \omega_{bi}$$
from which Δ ω b i = ω b i ω ^ b i is obtained:
$$\Delta\dot{q} = -\frac{1}{2}\hat{\omega}_{bi} \otimes \Delta q + \frac{1}{2}\Delta q \otimes \hat{\omega}_{bi} + \frac{1}{2}\Delta q \otimes \Delta\omega_{bi}$$
Because the error quaternion is a small amount, we could obtain:
$$\Delta q = [\Delta q_0\ \Delta q_1\ \Delta q_2\ \Delta q_3]^T \approx [1\ 0\ 0\ 0]^T, \qquad \frac{1}{2}\Delta q \otimes \Delta\omega_{bi} = \frac{1}{2}\begin{bmatrix} \Delta q_0 & -\Delta\vec{q}^{\,T} \\ \Delta\vec{q} & \Delta q_0 I_{3\times3} + [\Delta\vec{q}\times] \end{bmatrix}\begin{bmatrix} 0 \\ \Delta\omega_{bi} \end{bmatrix} = \frac{1}{2}\begin{bmatrix} 0 \\ \Delta\omega_{bi} \end{bmatrix} + O(|\Delta\vec{q}||\Delta\omega_{bi}|)$$
By incorporating Equations (29) and (30) into (31), and ignoring the second-order small quantities, we can derive kinematic equations based on the error quaternion:
$$\Delta\dot{\vec{q}} = \begin{bmatrix} \Delta\dot{q}_1 \\ \Delta\dot{q}_2 \\ \Delta\dot{q}_3 \end{bmatrix} = -[\hat{\omega}_{bi}\times]\Delta\vec{q} - \frac{1}{2}\Delta b - \frac{1}{2}\eta_g, \qquad \Delta\dot{q}_0 = 0, \qquad \Delta\dot{b} = \eta_b$$
Using Equation (32), we can constitute a linear filtering state equation based on the state X 6 × 1 = [ Δ q T       Δ b T ] T , and get:
$$\dot{X}(t) = F(t)X(t) + W(t), \qquad F(t) = \begin{bmatrix} -[\hat{\omega}_{bi}\times] & -0.5I_{3\times3} \\ 0_{3\times3} & 0_{3\times3} \end{bmatrix}, \qquad W(t) = [-0.5\eta_g^T\ \ \eta_b^T]^T$$
Equation (33) is a continuous dynamic system filter and will be linear and discrete:
$$X_k = \Phi_{k/k-1}X_{k-1} + \Gamma_{k-1}W_{k-1}, \qquad X(t_k) = \Phi(t_k,t_{k-1})X(t_{k-1}) + \int_{t_{k-1}}^{t_k}\Phi(t_k,\tau)W(\tau)\,d\tau$$
When a sampling interval is small, the calculation of the state transition matrix will be:
$$\dot{\Phi}(t,t_{k-1}) = F(t)\Phi(t,t_{k-1}), \qquad \Phi(t_{k-1},t_{k-1}) = I, \qquad \Phi(t_k,t_{k-1}) = \exp\left(\int_{t_{k-1}}^{t_k}F(t)\,dt\right)$$
When the filter period T ( T = t k t k 1 ) is small, F ( t ) can be approximated into a constant matrix:
$$F(t) \approx F(t_{k-1}),\ \ t_{k-1} \le t \le t_k, \qquad \Phi(t_k,t_{k-1}) = \exp\left(T\,F(t_{k-1})\right)$$
In addition, the system noise sequence W k 1 in the state equation and the driving array Γ k 1 can be expressed as:
$$\Gamma_{k-1} = \int_{t_{k-1}}^{t_k}\Phi(t_k,\tau)\,d\tau \approx T\,I, \qquad W_{k-1} = [-0.5\eta_g^T\ \ \eta_b^T]^T, \qquad E\{W_k\} = 0,\ \ E\{W_kW_j^T\} = Q_k\delta_{kj}, \qquad Q_k = \mathrm{diag}(0.25\sigma_g^2 I_{3\times3},\ \sigma_b^2 I_{3\times3})$$
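The transition matrix $\Phi(t_k,t_{k-1}) = \exp(TF(t_{k-1}))$ can be evaluated numerically as follows (a minimal sketch; the truncated-series exponential is our own implementation choice):

```python
import numpy as np

def skew(w):
    # Cross-product matrix [w x]
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def state_transition(omega_hat, T):
    # F for the 6-state model [dq13, db], then Phi = exp(T * F)
    F = np.zeros((6, 6))
    F[:3, :3] = -skew(omega_hat)
    F[:3, 3:] = -0.5 * np.eye(3)
    # matrix exponential via a truncated Taylor series (T is a short filter period)
    Phi, term = np.eye(6), np.eye(6)
    for i in range(1, 10):
        term = term @ (T * F) / i
        Phi = Phi + term
    return Phi
```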

2.4. Information Fusion Filter Design

On the basis of the measurement equation and the state equation, we use the bidirectional Kalman filter and overall weighted smoothing method to realize the optimal satellite attitude estimation and derive it specifically [27,28]. Figure 2 shows the schematic diagram of attitude data processing with the bidirectional Kalman filter.
(1)
Attitude forecast processing
When the star sensor does not output a measured value, the gyro measurement model is integrated from time $t_{k-1}$ using the following equation, from which the predicted satellite quaternion $(\hat{q}_{bi})_{k/k-1}$ is derived:
$$\hat{q}(t_{k/k-1}) = e^{\frac{1}{2}\int_{t_{k-1}}^{t_k}\Omega(\hat{\omega}_{bi}(t_{k-1}))\,dt}\,\hat{q}(t_{k-1}) = \left[I + \frac{\frac{1}{2}\Delta\Theta}{1!} + \frac{(\frac{1}{2}\Delta\Theta)^2}{2!} + \frac{(\frac{1}{2}\Delta\Theta)^3}{3!} + \cdots\right]\hat{q}(t_{k-1})$$
$$\Delta\Theta = \int_{t_{k-1}}^{t_k}\Omega(\hat{\omega}_{bi}(t_{k-1}))\,dt = \int_{t_{k-1}}^{t_k}\begin{bmatrix} 0 & -\omega_x & -\omega_y & -\omega_z \\ \omega_x & 0 & \omega_z & -\omega_y \\ \omega_y & -\omega_z & 0 & \omega_x \\ \omega_z & \omega_y & -\omega_x & 0 \end{bmatrix}dt = \begin{bmatrix} 0 & -\Delta\theta_x & -\Delta\theta_y & -\Delta\theta_z \\ \Delta\theta_x & 0 & \Delta\theta_z & -\Delta\theta_y \\ \Delta\theta_y & -\Delta\theta_z & 0 & \Delta\theta_x \\ \Delta\theta_z & \Delta\theta_y & -\Delta\theta_x & 0 \end{bmatrix}$$
$$\Delta\Theta^2 = -\Delta\theta^2 I, \qquad \Delta\theta^2 = \Delta\theta_x^2 + \Delta\theta_y^2 + \Delta\theta_z^2$$
where $\Delta\theta_x, \Delta\theta_y, \Delta\theta_z$ are the incremental angles of the gyro about the X, Y, and Z axes over a sampling interval $[t_{k-1}, t_k]$. The recurrence relation is then derived as follows:
$$\hat{q}(t_{k/k-1}) = \left\{I\left[1 - \frac{(\Delta\theta/2)^2}{2!} + \frac{(\Delta\theta/2)^4}{4!} - \cdots\right] + \frac{\Delta\Theta}{\Delta\theta}\left[\frac{\Delta\theta/2}{1!} - \frac{(\Delta\theta/2)^3}{3!} + \frac{(\Delta\theta/2)^5}{5!} - \cdots\right]\right\}\hat{q}(t_{k-1}) = \left[I\cos\frac{\Delta\theta}{2} + \frac{\Delta\Theta}{\Delta\theta}\sin\frac{\Delta\theta}{2}\right]\hat{q}(t_{k-1})$$
Therefore, with Equation (39), the satellite quaternion predictive value q ^ ( t k / k 1 ) can be obtained.
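The closed-form prediction step of Equation (39) can be sketched as follows (a minimal sketch; function and variable names are our own):

```python
import numpy as np

def propagate_quaternion(q, dtheta):
    # One prediction step: q(t_k/k-1) = [I cos(a/2) + (DTheta/a) sin(a/2)] q(t_k-1),
    # where dtheta holds the gyro angle increments over [t_{k-1}, t_k].
    dx, dy, dz = dtheta
    DTheta = np.array([[0.0, -dx, -dy, -dz],
                       [dx, 0.0, dz, -dy],
                       [dy, -dz, 0.0, dx],
                       [dz, dy, -dx, 0.0]])
    a = float(np.linalg.norm(dtheta))
    if a < 1e-12:                  # no measurable rotation in the interval
        return np.asarray(q, float)
    return (np.eye(4) * np.cos(a / 2) + DTheta / a * np.sin(a / 2)) @ np.asarray(q, float)
```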
The equation specific for the gyro bias predictive value b ^ k / k 1 is:
b ^ k / k 1 = b ^ k 1
In addition, the equation specific for the predictive value P ^ k / k 1 of error covariance matrix is:
P ^ k / k 1 = Φ k / k 1 P ^ k 1 Φ k / k 1 T + Γ k 1 Q k 1 Γ k 1 T
(2)
Attitude correction processing
At time t k , the observation matrix H k can be calculated according to the measurement equation, and the filter gain can be calculated using the following equation:
$$K_k = P_{k/k-1}H_k^T\left[H_kP_{k/k-1}H_k^T + R_k\right]^{-1}$$
Correspondingly, the correction on filter status updates is:
$$\hat{X}_k = \hat{X}_{k/k-1} + K_k\left(Z_k - H_k\hat{X}_{k/k-1}\right)$$
After obtaining the state variable X ^ k = [ Δ q ^ k T       Δ b ^ k T ] T at time t k , the gyro bias can be corrected by the conventional method:
b ^ k = b ^ k / k 1 + Δ b ^ k
The corrected values ( q ^ b i ) k at time t k are:
$$(\hat{q}_{bi})_k = (\hat{q}_{bi})_{k/k-1} \otimes (\Delta\hat{q}_{bi})_k$$
As the constraint on the quaternion modulus equals 1, we can obtain the following results:
$$(\Delta\hat{q}_{bi})_k = \begin{bmatrix} \sqrt{1 - \Delta\hat{q}_k^T\Delta\hat{q}_k} \\ \Delta\hat{q}_k \end{bmatrix}$$
The error covariance matrix can then be updated:
$$P_k = (I - K_kH_k)P_{k/k-1}(I - K_kH_k)^T + K_kR_kK_k^T$$
$H_k$ is then recomputed using the updated quaternion estimate, which improves the convergence of the filtering process. After the satellite attitude quaternion and gyro bias have been corrected, the predicted value of the state variable is zero, so the state variable $\hat{X}_k$ must be reset to zero.
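The correction stage above (gain, state update, quaternion correction, covariance update, reset) can be combined into one sketch (a simplified illustration using a generic 3-row measurement matrix rather than the stacked 9-row form of the measurement equation; `quat_mul` is a helper defined here with the scalar-first convention):

```python
import numpy as np

def quat_mul(p, q):
    # Hamilton product, scalar-first convention [q0, q1, q2, q3].
    p0, pv = p[0], np.asarray(p[1:], float)
    q0, qv = q[0], np.asarray(q[1:], float)
    return np.concatenate(([p0 * q0 - pv @ qv],
                           p0 * qv + q0 * pv + np.cross(pv, qv)))

def correct(q_pred, b_pred, P_pred, H, R, z):
    # One measurement update; the predicted error state is zero after the reset.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x = K @ z
    dq_vec, db = x[:3], x[3:]
    # Unit-modulus constraint fixes the scalar part of the error quaternion.
    dq = np.concatenate(([np.sqrt(max(0.0, 1.0 - dq_vec @ dq_vec))], dq_vec))
    q_new = quat_mul(q_pred, dq)
    q_new = q_new / np.linalg.norm(q_new)
    I = np.eye(P_pred.shape[0])
    P_new = (I - K @ H) @ P_pred @ (I - K @ H).T + K @ R @ K.T   # Joseph form
    return q_new, b_pred + db, P_new
```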
(3)
Covariance-weighted smoothing
Results of the optimal estimation are obtained by averaging the forward and backward state estimates with the weights based on their error covariance matrix, which minimizes the covariance of the optimal estimation results. The algorithm for this covariance-weighted smoothing is as follows:
$$\hat{q}_{fb}(k) = \hat{q}_b^{-1}(k) \otimes \hat{q}_f(k)$$
$$\Delta\hat{x}_{fb}(k) = \left[\mathrm{sgn}(\hat{q}_{fb0})\,[\hat{q}_{fb1}\ \hat{q}_{fb2}\ \hat{q}_{fb3}]\ \ \left(\hat{b}_f(k) - \hat{b}_b(k)\right)^T\right]^T$$
$$\hat{P}_s(k) = \left(\hat{P}_f^{-1}(k) + \hat{P}_b^{-1}(k)\right)^{-1}$$
$$\Delta\hat{x}_s(k) = [\Delta\hat{q}_s^T(k)\ \ \Delta\hat{b}_s^T(k)]^T = \hat{P}_s(k)\hat{P}_f^{-1}(k)\Delta\hat{x}_{fb}(k)$$
$$\Delta\hat{q}_{s0}(k) = \sqrt{1 - \Delta\hat{q}_{s1}^2(k) - \Delta\hat{q}_{s2}^2(k) - \Delta\hat{q}_{s3}^2(k)}$$
$$\hat{q}_s = \hat{q}_b(k) \otimes \Delta\hat{q}_s(k), \qquad \hat{b}_s(k) = \hat{b}_b(k) + \Delta\hat{b}_s(k)$$
in which $\otimes$ denotes quaternion multiplication, and the subscripts $f$, $b$, and $s$ denote the forward filtering, backward filtering, and covariance-weighted smoothing results, respectively.
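The core covariance weighting can be illustrated on additive error states (a simplified sketch that omits the quaternion bookkeeping of the full algorithm above):

```python
import numpy as np

def covariance_weighted_combine(xf, Pf, xb, Pb):
    # P_s = (Pf^-1 + Pb^-1)^-1 ; the estimates are weighted by their inverse
    # covariances, which minimizes the covariance of the combined estimate.
    Pfi, Pbi = np.linalg.inv(Pf), np.linalg.inv(Pb)
    Ps = np.linalg.inv(Pfi + Pbi)
    xs = Ps @ (Pfi @ xf + Pbi @ xb)
    return xs, Ps
```

When the forward and backward covariances are equal, the result reduces to the simple average of the two estimates.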

2.5. Attitude Data Model Construction

The high-resolution optical satellite carries a line push-broom camera with an imaging frequency of up to tens of thousands of hertz (i.e., tens of thousands of image lines per second), while the frequency of the attitude data is much lower. Optical image geometric correction requires an accurate attitude for each image line; therefore, a reasonable model must be used to meet the image geometry processing requirement. Common fitting methods for the attitude data include [29,30,31]:
(1)
Lagrange polynomial interpolation
Lagrange polynomial interpolation is a common interpolation method because it is simple, fast, and widely used. The attitude can be parameterized by Euler angles or quaternions. Assuming that $(\varphi, \omega, \kappa)$ are the Euler angle parameters and $(q_0, q_1, q_2, q_3)$ are the quaternion parameters, the interpolation model is:
$$\varphi = \sum_{j=1}^{n}\varphi_jW_j, \quad \omega = \sum_{j=1}^{n}\omega_jW_j, \quad \kappa = \sum_{j=1}^{n}\kappa_jW_j, \qquad W_j = \prod_{\substack{k=1 \\ k\ne j}}^{n}\frac{t - t_k}{t_j - t_k}$$
$$q_1 = \sum_{j=1}^{n}q_{1j}W_j, \quad q_2 = \sum_{j=1}^{n}q_{2j}W_j, \quad q_3 = \sum_{j=1}^{n}q_{3j}W_j, \qquad q_0 = \pm\sqrt{1 - q_1^2 - q_2^2 - q_3^2}$$
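The interpolation above can be sketched for a single attitude channel (a minimal sketch; in practice each Euler angle or quaternion component is interpolated this way over the attitude epochs):

```python
def lagrange_interp(ts, ys, t):
    # Evaluate the Lagrange interpolating polynomial through (ts, ys) at t.
    total = 0.0
    for j, (tj, yj) in enumerate(zip(ts, ys)):
        w = 1.0
        for k, tk in enumerate(ts):
            if k != j:
                w *= (t - tk) / (tj - tk)   # basis weight W_j
        total += yj * w
    return total
```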
(2)
Orthogonal polynomial fitting
Unlike ordinary polynomial fitting models, the orthogonal polynomial model effectively avoids an ill-conditioned normal matrix. The orthogonal polynomial fit of order $m-1$ for the $\varphi$ parameter can be expressed as:
$$P_\varphi(t) = a_0 + a_1t + a_2t^2 + \cdots + a_{m-1}t^{m-1}, \qquad (m \le n)$$
Assuming the above equation is a linear combination of the orthogonal polynomials δ j ( t ) ( j = 0 , 1 , , m 1 ) , we can obtain:
P φ ( t ) = c 0 δ 0 ( t ) + c 1 δ 1 ( t ) + + c m 1 δ m 1 ( t )
in which the orthogonal polynomials δ j ( t ) can be constructed with a recursive equation as shown below:
δ 0 ( t ) = 1 ; δ 1 ( t ) = ( t α 1 ) ; δ j ( t ) = ( t α j ) δ j 1 ( t ) β j δ j 2 ( t ) , j = 2 , , m 1
The orthogonal polynomial fitting of the Euler angle parameters $\omega$, $\kappa$ and of the quaternion follows the same principle as above, giving:
$$P_{q_1}(t) = \sum_{k=0}^{m-1}c_{q_1k}\,\delta_{q_1k}(t), \quad P_{q_2}(t) = \sum_{k=0}^{m-1}c_{q_2k}\,\delta_{q_2k}(t), \quad P_{q_3}(t) = \sum_{k=0}^{m-1}c_{q_3k}\,\delta_{q_3k}(t), \qquad q_0 = \pm\sqrt{1 - q_1^2 - q_2^2 - q_3^2}$$
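The recurrence-based fit can be sketched for one parameter channel (a minimal sketch; here $\alpha_j$ and $\beta_j$ are computed from the sample epochs via the standard discrete orthogonality conditions, a detail the text above leaves implicit):

```python
import numpy as np

def orthogonal_poly_fit(t, y, m):
    # Build discrete orthogonal polynomials delta_j(t) by the three-term
    # recurrence and fit y by least squares in that basis.
    t, y = np.asarray(t, float), np.asarray(y, float)
    deltas = [np.ones_like(t)]
    if m > 1:
        deltas.append(t - t.mean())                # delta_1 = t - alpha_1
    for j in range(2, m):
        d1, d2 = deltas[-1], deltas[-2]
        a = np.sum(t * d1 * d1) / np.sum(d1 * d1)  # alpha_j
        b = np.sum(d1 * d1) / np.sum(d2 * d2)      # beta_j
        deltas.append((t - a) * d1 - b * d2)
    coeffs = [np.sum(y * d) / np.sum(d * d) for d in deltas]
    fitted = sum(c * d for c, d in zip(coeffs, deltas))
    return coeffs, fitted
```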
(3)
Spherical linear interpolation (SLERP) model
If $q_1, q_2$ are regarded as two points on the unit sphere in four-dimensional space, SLERP interpolates along the shortest connecting arc at constant angular speed, which makes it suitable for quaternion interpolation. We therefore obtain:
$$q(t) = C_1(t)q_1 + C_2(t)q_2, \qquad C_1(t) = \frac{\sin(1-t)\theta}{\sin\theta}, \quad C_2(t) = \frac{\sin t\theta}{\sin\theta}$$
$$q(t) = \frac{\sin(1-t)\theta}{\sin\theta}\,q_1 + \frac{\sin t\theta}{\sin\theta}\,q_2, \qquad \theta = \cos^{-1}(q_1 \cdot q_2) = \cos^{-1}(q_{10}q_{20} + q_{11}q_{21} + q_{12}q_{22} + q_{13}q_{23})$$
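The SLERP model translates directly into code (a minimal sketch; the near-parallel fallback guard is our own addition):

```python
import numpy as np

def slerp(q1, q2, t):
    # Spherical linear interpolation between unit quaternions q1, q2 for t in [0, 1].
    q1, q2 = np.asarray(q1, float), np.asarray(q2, float)
    d = np.clip(np.dot(q1, q2), -1.0, 1.0)
    theta = np.arccos(d)
    if theta < 1e-8:                # nearly identical attitudes: linear blend
        return (1.0 - t) * q1 + t * q2
    return (np.sin((1.0 - t) * theta) * q1 + np.sin(t * theta) * q2) / np.sin(theta)
```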

2.6. Precise Attitude Inversion in a Geometric Calibration Field

Due to the intrinsic nature of attitude, it is difficult to evaluate and verify its accuracy effectively. Based on real panchromatic images and the DOM/DEM of the geometric calibration field, dense image matching is used to automatically measure high-precision control points, while a strict geometric imaging equation is used to compute the attitude of the Earth observation camera. With these we can verify and evaluate the accuracy of the attitude determination algorithm. Figure 3 displays the flowchart of precision control-point matching and high-precision attitude inversion in a geometric calibration field; it is the core of the on-ground processing of the Yaogan-24 remote sensing satellite's attitude data and its verification. It comprises the following key steps: feature point extraction between the real panchromatic images and the DOM of the geometric calibration field, image simulation, pyramid image matching, whole-pixel matching, sub-pixel matching, and gross error elimination [32,33,34]. This method achieves sub-pixel matching accuracy.
The attitude accuracy includes two aspects: absolute and relative accuracies. Absolute accuracy means the external error including the datum error, while relative accuracy means the internal error after the deduction of datum error. For the first, we will analyze the geometric positioning accuracy of a panchromatic image to evaluate the absolute accuracy; for the second, we will take the inversion attitude of the panchromatic camera as a reference to evaluate the relative accuracy. The absolute and relative accuracy of inversion attitude could respectively reach 0.3 arcsec and 0.06 arcsec under the following conditions:
(a)
The Yaogan-24 remote sensing satellite uses dual-frequency GPS observations, precise ephemeris, and a dynamic model to determine the orbit, achieving centimeter-level orbit accuracy;
(b)
Satellite payloads have achieved high-precision time synchronization, and the synchronization error is negligible;
(c)
The camera internal parameters have been precisely calibrated in the pixel pointing angle model, and the calibration accuracy is better than 0.3 pixel;
(d)
The geometric calibration field is used, in which the absolute and relative accuracy of the control points are respectively within 1 m and 0.2 m in plane.
The camera calibration model used in this paper is [3,4,5]:
$$(V_{\mathrm{Image}})_{\mathrm{cam}} = \left(\frac{x}{f}, \frac{y}{f}, 1\right)^{T} = \left(\tan(\psi_x(s)), \tan(\psi_y(s)), 1\right)^{T}, \qquad \begin{cases} \psi_x(s) = a_{x0} + a_{x1}s + a_{x2}s^2 + a_{x3}s^3 \\ \psi_y(s) = a_{y0} + a_{y1}s + a_{y2}s^2 + a_{y3}s^3 \end{cases}$$
In which $(\psi_x(s), \psi_y(s))$ represents the direction angle of detector element $s$; $a_{x0}, a_{x1}, a_{x2}, a_{x3}, a_{y0}, a_{y1}, a_{y2}, a_{y3}$ represent the fitting coefficients of the pointing angle; $x, y$ represent the coordinates of the detector element in the camera coordinate system; $f$ represents the focal length of the camera; and $(V_{\mathrm{Image}})_{\mathrm{cam}}$ represents the look vector of the detector element. Compared with the standard camera intrinsic parameter model, Equation (52) simplifies the camera internal error model and avoids coupling among the error parameters.
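As a concrete illustration, the pixel pointing angle model of Equation (52) can be evaluated as below. The coefficient values here are hypothetical placeholders purely for illustration; in practice they come from on-orbit geometric calibration.

```python
import numpy as np

# Hypothetical cubic pointing-angle coefficients (ax0..ax3, ay0..ay3);
# real values are produced by on-orbit geometric calibration.
ax = np.array([1.0e-4, 2.0e-6, -1.0e-12, 3.0e-17])
ay = np.array([-5.0e-5, 1.0e-6, 2.0e-12, -1.0e-17])

def look_vector(s, ax, ay):
    """Camera-frame look vector of detector element s.

    psi_x(s) and psi_y(s) are cubic polynomials in the detector index s;
    the (scale-free) vector is (tan(psi_x), tan(psi_y), 1)^T.
    """
    powers = np.array([1.0, s, s**2, s**3])
    psi_x = ax @ powers          # direction angle in the x plane (rad)
    psi_y = ay @ powers          # direction angle in the y plane (rad)
    return np.array([np.tan(psi_x), np.tan(psi_y), 1.0])
```

Because each detector element's direction is an independent polynomial of its index, this form avoids the parameter coupling of the classical focal-length/principal-point/distortion model.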
The tight geometric imaging model used in this paper is:
$$\begin{pmatrix} \tan(\psi_x(s)) \\ \tan(\psi_y(s)) \\ 1 \end{pmatrix} = \lambda\, R_{\mathrm{body}}^{\mathrm{cam}} \left( R_{\mathrm{J2000}}^{\mathrm{body}}\, R_{\mathrm{wgs}}^{\mathrm{J2000}} \begin{bmatrix} X_g - X_{\mathrm{gps}} \\ Y_g - Y_{\mathrm{gps}} \\ Z_g - Z_{\mathrm{gps}} \end{bmatrix}_{\mathrm{wgs}} - \begin{bmatrix} B_X \\ B_Y \\ B_Z \end{bmatrix}_{\mathrm{body}} \right)$$
In which $(X_g, Y_g, Z_g)$ represent the object space coordinates of the ground points; $(X_{\mathrm{gps}}, Y_{\mathrm{gps}}, Z_{\mathrm{gps}})$ and $(B_X, B_Y, B_Z)$ respectively represent the object space coordinates of the camera center and the GPS eccentric error; $\lambda$ represents a scaling factor; and $R_{\mathrm{wgs}}^{\mathrm{J2000}}$, $R_{\mathrm{J2000}}^{\mathrm{body}}$, and $R_{\mathrm{body}}^{\mathrm{cam}}$ represent the rotation matrices from, respectively, the WGS84 coordinate system to the J2000 coordinate system, the J2000 coordinate system to the satellite body coordinate system, and the satellite body coordinate system to the camera coordinate system.
From Equation (57), we can obtain a conversion model between the observation vector in the J2000 coordinate system and the observation vector in the camera measurement coordinate system:
$$\lambda^{-1} \left( R_{\mathrm{body}}^{\mathrm{cam}} \right)^{-1} \begin{pmatrix} \tan(\psi_x(s)) \\ \tan(\psi_y(s)) \\ 1 \end{pmatrix} = R_{\mathrm{J2000}}^{\mathrm{body}}\, R_{\mathrm{wgs}}^{\mathrm{J2000}} \begin{bmatrix} X_g - X_{\mathrm{gps}} \\ Y_g - Y_{\mathrm{gps}} \\ Z_g - Z_{\mathrm{gps}} \end{bmatrix}_{\mathrm{wgs}} - \begin{bmatrix} B_X \\ B_Y \\ B_Z \end{bmatrix}_{\mathrm{body}}$$
For the line push-broom camera, when at least two non-collinear observation vectors of matched control points are available in a scan line, the attitude parameters of that scan line at the corresponding time can be calculated from Equation (58). In theory, to ensure the precision and reliability of the attitude parameters, a larger number of matched control points is required, distributed evenly along the scan line [35,36]. The attitude accuracy depends mainly on the accuracy of the GPS orbit, the DEM/DOM accuracy of the reference calibration field data, and the number and distribution of the control points per scan line [34,37,38,39]. In principle, the frequency of the inversion attitude equals the imaging frequency of the linear array camera and can reach tens of thousands of hertz. However, changes in the calibration field are relatively large, making it impossible to match control points for every line. Therefore, we down-sample the frequency to tens of hertz.
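The per-scan-line attitude solution from two or more non-collinear control-point vectors can be illustrated with a generic least-squares rotation estimate (Wahba's problem, solved here via SVD). This is a simplified stand-in for the paper's rigorous push-broom inversion: it assumes the observation vectors have already been reduced to unit vectors in a fixed pair of frames.

```python
import numpy as np

def solve_attitude(v_cam, v_ref):
    """Least-squares rotation R such that v_cam_i ~ R @ v_ref_i.

    Solves Wahba's problem via SVD of the attitude profile matrix B;
    at least two non-collinear vector pairs are needed, matching the
    per-scan-line condition in the text.
    """
    B = sum(np.outer(c, r) for c, r in zip(v_cam, v_ref))
    U, _, Vt = np.linalg.svd(B)
    # Force det(R) = +1 so the result is a proper rotation.
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt

# Example: recover a known 30-degree rotation about the z axis from
# two noiseless control-point vector pairs.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
v_ref = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
v_cam = [R_true @ v for v in v_ref]
R_est = solve_attitude(v_cam, v_ref)
```

With real (noisy) matches, more evenly distributed control points per scan line improve both the precision and the reliability of the estimate, which is why the text calls for them.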

3. Experiment and Discussion

An experiment was carried out using the data provided by the Yaogan-24 remote sensing satellite that was launched on 20 November 2014. The on-ground attitude data processing algorithm has been applied in the ground processing system in the China Resources Satellite Application Center.

3.1. Experimental Data

(1)
Observation data of the Yaogan-24 remote sensing satellite
The Yaogan-24 remote sensing satellite's original observation data used for the experimental analysis mainly include dual-frequency GPS original observations, original observations of the Astro10A and Astro10B star sensors, gyro observation data, imaging line time data, and panchromatic image data. These data were acquired during the satellite's in-orbit test.
(2)
Geometric calibration field
The geometric calibration fields included Songshan, Anyang, Dongying, Sanya, Taiyuan, and Yili, all in China. In this paper, the Songshan and Anyang calibration fields were used. The Songshan field is located in Henan Province, central China, and features hilly terrain, 112°42′–113°54′ E/34°13′–35°2′ N, coverage 100 × 80 = 8000 km2, an average altitude of approximately 500 m (the highest point is at 1491.73 m), and a maximum relief ≤2000 m. The Songshan calibration field provides 1:2000-scale digital orthophoto (DOM) and digital elevation model (DEM) reference data, in which the DOM ground sample distance (GSD) was 0.2 m with a plane accuracy ≤1 m, and the DEM GSD was 1 m with an accuracy ≤2 m (Figure 4).
The Anyang field is also located in Henan Province, China, and features plain terrain, 114°19′–115°12′ E/35°44′–36°1′ N, coverage 90 × 30 = 2700 km2, an average altitude of approximately 40 m (the highest point is 70 m), and a maximum relief ≤100 m. The field provides 1:1000-scale DOM and DEM reference data, in which the DOM GSD was ≤0.1 m with a plane accuracy ≤0.5 m, and the DEM GSD was ≤0.5 m with an accuracy ≤1 m (Figure 5).

3.2. Experimental Results and Analysis

(1)
Quality of the star sensor observation data
The star sensors are connected through a bracket and fixed vertically. In theory, the angle between any two star sensor optical axes should be constant. Therefore, we analyzed the quality of the original observation data by detecting variations in the optical axis angles. Because the general auxiliary data included only the raw observations of the Astro10A and Astro10B star sensors, we focused our analysis on the raw data of these two sensors.
First, we analyzed the quality of the raw observations through the angle changes of the star sensors. Figure 6 and Figure 7 show the errors in the angle change before and after treatment. Because the optical axis measurement accuracy of each Astro10 star sensor is ≤5 arcsec, according to the law of error propagation, the optical axis angle error between the two star sensors should be ≤7 arcsec.
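The ≤7 arcsec bound follows from combining the two independent ≤5 arcsec per-axis errors in quadrature, which a one-line check confirms:

```python
import math

sigma_axis = 5.0  # arcsec, optical-axis accuracy of each star sensor
# The inter-boresight angle combines two independent axis errors, so by
# error propagation sigma_angle = sqrt(5^2 + 5^2) ~= 7.07 arcsec.
sigma_angle = math.hypot(sigma_axis, sigma_axis)
```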
As Figure 6 depicts, when a photo was taken in the Yili, Anyang, or Songshan calibration field, gross errors of 20–40 arcsec, much greater than the star sensor measurement accuracy, were present in the observation data during some epochs. After preprocessing with our algorithm, the gross errors were effectively corrected, and the optical axis angle error satisfied the accuracy indicator (Figure 7).
Figure 8 shows the distribution of the variations in the star sensor optical axis angle error, and Table 2 lists the statistics of the optical axis angle error characteristics. As Figure 8 and Table 2 show, the error of the angle change followed a normal distribution. After preprocessing, gross errors no longer existed in the observations, and 95% of the deviations between the optical axes fell within ±5 arcsec.
As described in Section 2.6, we further used the precise inversion attitude parameters as reference data to analyze the relative precision of the star sensor raw observation data. Figure 9 and Figure 10 show, respectively, the error distributions of the Astro10A and Astro10B star sensors in the yaw, roll, and pitch directions. The maximum deviations of Astro10A were −15 to 15 arcsec in the yaw direction, −10 to 10 arcsec in the roll direction, and −5 to 5 arcsec in the pitch direction, while those of Astro10B were −15 to 20, −15 to 15, and −15 to 15 arcsec, respectively. This is because only the optical axis of a star sensor achieves high pointing accuracy; the attitude determination accuracy of a single star sensor is limited, so its data cannot be used directly for attitude determination. In addition, the error distributions in the three directions of the two sensors were all normal (Figure 11 and Figure 12), and the relative precision of the measurements in the three directions was ≤12 arcsec (Figure 13 and Figure 14), consistent with the star sensor design accuracy (optical axis error ≤5″ (3σ) and horizontal axis error ≤35″ (3σ)). Therefore, the data quality of the star sensor observations used in our experiment was reliable for the subsequent processing.
(2)
Convergence and stability after bidirectional filtering
On the basis of the measurement and state equations, we used a bidirectional filter and overall weighted smoothing method to process the attitude data. Raw observational attitude data taken in the Anyang calibration field on 1 January 2015 were applied for data fusion, and the experimental results were analyzed.
Understanding the variation trend of a state error parameter is important to determine whether the filter of the attitude determination system is convergent and stable. To test the convergence and stability of the bidirectional filter and overall weighted smoothing method, we chose the three vector components of the attitude error quaternion and the gyro bias error as the state variables of the system: $X = [\Delta q_{13}^T, \Delta b^T]^T$, $\Delta q_{13} = [\Delta q_1\ \Delta q_2\ \Delta q_3]^T$. The error quaternion is the difference between the attitude predicted by the gyro and that corrected by the star sensor, and the gyro bias error is the bias correction estimated from the star sensor. With the bidirectional filter, the star sensor and gyro information can be fused, and the variation trends of the error parameters stand out (Figure 15 and Figure 16). We found that the Euler angle errors in the yaw, roll, and pitch directions varied steadily and randomly, as did the gyro bias errors. The range of the Euler angle error was −0.02 to 0.03 arcsec, and that of the bias error was −4.0 × 10−4 to 6.0 × 10−4 deg/h. The estimates of the gyro bias changed over time (Figure 17). Therefore, we conclude that the gyro bias in the X, Y, and Z directions tended toward a constant value within 0.2 deg/h, meeting the gyro bias requirement of ≤2.0 deg/h.
To further verify the convergence and stability of the designed filter, we present a detailed description of the change in the gyro angular velocity estimates after star sensor and gyro information fusion. As shown in Figure 18, the gyro angular velocity values in the three directions approached the true state of the satellite flight: when the satellite was in stable flight, the angular velocity measured by the gyro was the orbital angular velocity of the satellite around the Earth, about 0.06 degrees per second, and the measurement noise was effectively smoothed out. In addition, we list the mean and mean square values of the Euler angle errors and bias errors for photos taken in different calibration fields at different times, and both errors tended toward a Gaussian distribution (Table 3 and Table 4). Therefore, the bidirectional Kalman filter was reliable and maintained convergence and stability.
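The bidirectional filtering and overall weighted smoothing idea can be sketched on a scalar toy model (a random-walk state with noisy measurements, not the paper's full quaternion/gyro-bias filter): run a Kalman pass forward, run a second pass backward, and fuse the two estimates by inverse-variance weighting.

```python
import numpy as np

def kalman_pass(z, q, r, x0, p0):
    """One-directional Kalman filter over measurements z for a
    random-walk state (process variance q, measurement variance r)."""
    xs, ps = [], []
    x, p = x0, p0
    for zk in z:
        p = p + q                   # predict: random-walk state
        k = p / (p + r)             # Kalman gain
        x = x + k * (zk - x)        # measurement update
        p = (1.0 - k) * p
        xs.append(x)
        ps.append(p)
    return np.array(xs), np.array(ps)

def bidirectional_smooth(z, q=1e-4, r=1e-2):
    """Fuse forward and backward passes, weighting each estimate by
    the inverse of its error variance (overall weighted smoothing)."""
    xf, pf = kalman_pass(z, q, r, z[0], r)
    xb, pb = kalman_pass(z[::-1], q, r, z[-1], r)
    xb, pb = xb[::-1], pb[::-1]
    w = pb / (pf + pb)              # larger weight where forward variance is smaller
    return w * xf + (1.0 - w) * xb
```

Because the backward pass is well conditioned exactly where the forward pass has not yet converged (and vice versa), the fused estimate avoids the one-sided transient that a single-direction filter shows at the start of a data segment.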
(3)
Relative attitude accuracy
Due to the nature of attitude data, it is difficult to verify their accuracy and reliability. As described in Section 2.6, we used the precise attitude data calculated from an optical image of the geometric calibration field as reference data. We converted the attitude parameters from the body coordinate system relative to the inertial coordinate system into the body coordinate system relative to the orbit coordinate system, which makes it more convenient to analyze the geometric processing precision of the optical image in the along-track direction and the direction perpendicular to the track. Figure 19, Figure 20 and Figure 21 present the relative attitude error distributions for photos taken in the Yili, Anyang, and Songshan calibration fields. As parts (a) and (b) of each figure indicate, we used a multi-star sensor combination and a star sensor and gyro combination to process the attitude data.
The results show that with the multi-star sensor combination, the relative attitude error in the yaw, roll, and pitch directions ranged from −2 to 3 arcsec, while with the star sensor and gyro combination, the relative attitude error was within the sub-arcsec level. We also analyzed the reliability of our proposed algorithm; Table 5 presents the statistics on the mean square error of the relative attitude for the different attitude-sensor combinations. For the multi-star sensor combination, the mean square error was approximately 1 arcsec, and it was ≤0.5 arcsec for the star sensor and gyro combination. The processing effect of the multi-star sensor combination was worse because its attitude contained high-frequency noise; in the star sensor and gyro combination, the star sensor high-frequency noise could be smoothed out, and the gyro bias could be estimated to optimize the attitude estimation.
(4)
Accuracy analysis of attitude fitting model
Although the precision attitude parameters could be obtained through a star sensor and gyro combination, the sampling frequency was only 4–8 Hz, far below the imaging frequency (20,000 Hz) of the satellite line push-broom camera. The resolution of the satellite panchromatic camera in this research was 1 m and the orbital altitude was 645 km; to meet a panchromatic geometric relative accuracy of better than 1 pixel, the fitting model accuracy of the attitude parameter should be within 0.3 arcsec. Therefore, we focused on how to obtain the precise attitude parameters of each scan line of the push-broom camera. Three attitude-fitting methods are described in detail in Section 2.5: Lagrange polynomial interpolation, orthogonal polynomial fitting, and the spherical linear interpolation model. We used different fitting models in our experiment on attitude parameters obtained from the star sensor and gyro combination, and evaluated the fitting accuracy in its precise attitude.
The attitude data used to fit the different fitting models were down-sampled from 8 to 4 Hz, allowing us to use the different fitting models to fit attitude parameters in different forms of expression. Finally, we used the 8 Hz attitude parameters as reference values to evaluate the fitting accuracy. Figure 22, Figure 23 and Figure 24 describe, respectively, the attitude fitting accuracy distributions of the Lagrange polynomial interpolation, orthogonal polynomial fitting, and spherical linear interpolation models.
Parts (a) and (b) of each figure indicate the attitude parameter expressed as the Euler angle and as the quaternion, respectively. These figures show that the attitude fitting accuracy in the yaw, roll, and pitch directions can be controlled within 0.3 arcsec. Attitude parameters in both Euler angle and quaternion form could be used to build mathematical fitting models, and all three types of fitting model could be used to fit the attitude parameters. Table 6 lists the statistical details of the fitting accuracy in the yaw, roll, and pitch directions for the different fitting methods. The results show that the orthogonal polynomial fitting model was the most suitable for building the mathematical model and could ensure the relative geometric accuracy of the optical image.
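The down-sample, fit, and evaluate procedure used here can be sketched with NumPy's orthogonal Chebyshev basis. The attitude history below is a synthetic stand-in for one Euler angle channel (the paper uses real star sensor/gyro output); the degree and the signal are illustrative assumptions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical smooth roll-angle history (arcsec) sampled at 8 Hz.
t8 = np.arange(0.0, 8.0, 1.0 / 8.0)
roll8 = 2.0 * np.sin(0.1 * t8) + 0.5 * t8

# Down-sample to 4 Hz, fit an orthogonal (Chebyshev) polynomial, then
# evaluate the fit back on the 8 Hz grid to measure fitting accuracy.
t4, roll4 = t8[::2], roll8[::2]
coeffs = C.chebfit(t4, roll4, deg=5)
residual = np.abs(C.chebval(t8, coeffs) - roll8).max()
```

An orthogonal basis keeps the normal equations of the least-squares fit well conditioned, which is one reason such models behave better than plain power-basis polynomials at the accuracies (0.3 arcsec) required here.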
(5)
Imagery processing accuracy
We used post-processed precise orbit data, whose accuracy reaches the centimeter level, to provide precise exterior orientation line elements for image geometry processing. Therefore, thanks to high-precision calibration and time synchronization, the attitude data can be used directly for image geometry processing, which directly reflects the quality of the data. Section 3.1 describes in detail the geometric calibration field data for checking geometric accuracy, including the Songshan, Anyang, Dongying, Sanya, Taiyuan, and Yili calibration fields; we checked the quality of the attitude parameters on the basis of the geometric calibration field images taken by the panchromatic camera and the DOM/DEM reference data.
Wuhan University developed an optical satellite ground pretreatment system for Yaogan-24 remote sensing satellite image processing, and we conducted our experiments on this platform. The optical satellite ground pretreatment includes radiation treatment, sensor calibration, geometric correction, and so on. We took the attitude data as the input for image preprocessing, and then analyzed the geometric accuracy on the basis of the geometric correction product and the DOM/DEM reference data of the calibration field. Figure 25 and Figure 26 show the distribution of correspondence points between the DOM images and the geometric correction images taken by the satellite panchromatic camera in Songshan on 16 March 2015 and in Anyang on 9 February 2015. The correspondence points between the DOM and the geometric correction images were sufficient in number and evenly distributed, ensuring the reliability of the geometric precision. Figure 27 and Figure 28 show the distribution of the relative geometric accuracy in the cross-track and along-track directions of the panchromatic image with respect to the DOM/DEM of the calibration field. The relative geometric accuracies in the cross-track and along-track directions were clearly between 1.5 and 2.0 pixels. Meanwhile, as shown in Section 3.2, the relative accuracy of the attitude data was 0.3–0.5 arcsec with respect to the precise attitude data calculated from the optical image of the geometric calibration field; the two conclusions thus confirm each other. Table 7 and Table 8 show detailed statistics on, respectively, the uncontrolled and relative positioning accuracy of the satellite geometric correction images taken by the panchromatic camera based on on-board and on-ground processed attitude data.
Furthermore, we conclude that the side swing angle did not affect the uncontrolled image geometric correction accuracy when images were taken in different calibration fields. The uncontrolled and relative positioning accuracies of the geometric correction image based on on-ground processed attitude data were about 15 m and 1.3 pixels, compared with about 30 m and 2.4 pixels based on on-board attitude data, an improvement of about 50%. Since the attitude determination accuracy of the star sensors configured on the satellite was 5 arcsec and the orbital altitude was 645 km, 1 arcsec corresponds to a ground error of 3.127 m. Theoretically, the uncontrolled positioning precision of the satellite should therefore be within about 15 m, so the star sensor attitude determination accuracy and the image positioning accuracy without control points are consistent.
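The 3.127 m per arcsec figure follows directly from the 645 km orbital altitude:

```python
import math

h = 645_000.0                          # orbital altitude in meters
arcsec = math.radians(1.0 / 3600.0)    # one arcsecond in radians
ground_error = h * math.tan(arcsec)    # ~3.127 m of ground error per arcsec
# A 5 arcsec star-sensor accuracy therefore bounds the uncontrolled
# positioning error to roughly 5 * 3.127 ~ 15.6 m, consistent with the text.
```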

4. Conclusions

In this paper, we proposed a method for on-ground processing of the Yaogan-24 remote sensing satellite's attitude data and its verification based on a geometric calibration field. By addressing the algorithms, we achieved significant results. First, the star sensor observation data can be effectively preprocessed: the optical axis angle error and the original observation errors in the yaw, roll, and pitch directions were in line with the star sensor design accuracy indicators and were normally distributed. In addition, application of the bidirectional filter and overall weighted smoothing for attitude data information fusion, together with the performance evaluation, showed that the on-ground processing model could achieve bidirectional convergence, ensuring robustness and feasibility. Furthermore, different attitude fitting models were analyzed. The results showed that attitude parameters in both Euler angle and quaternion form could be used to build a mathematical fitting model, among which the orthogonal polynomial fitting model was the most suitable and ensured the relative geometric accuracy.
Finally, evaluating the relative and absolute accuracies of the attitude result obtained from the proposed algorithm was important. The experimental results show that the relative accuracy of the attitude data was 0.3–0.5 arcsec, and the relative geometric accuracy in the cross-track and along-track directions was between 1.5 and 2.0 pixels. The attitude determination accuracy of the star sensors configured on the satellite was 5 arcsec, while the uncontrolled positioning precision of the satellite's panchromatic image was within 15 m; these two conclusions confirm each other. In addition, the uncontrolled and relative geometric positioning accuracy of the panchromatic image was effectively improved compared with the result based on on-board attitude data.
Note that this paper does not address topics such as error sources in the attitude sensors or sensor calibration. In the future, we will focus on the error characteristics, error model construction, relative and absolute calibration model construction, and the effect of the generated calibration parameters on image geometric precision. Moreover, because the accuracy and frequency of attitude data have become key factors in high-resolution optical satellite image geometry processing, attitude data from a high-frequency angular displacement sensor will be considered and discussed in future work.

Acknowledgments

This work was substantially supported by the National Basic Research Program of China 973 Program (Project No. 2014CB744201, 2012CB719902, 2012CB719901), the National Natural Science Foundation of China (Project No. 41371430, 91438112, 91438203, 91438111). The authors would like to thank the anonymous reviewers for their detailed review, valuable comments, and constructive suggestions.

Author Contributions

The first author conceived the study and designed the experiments; the second author developed the algorithm and wrote the program; the third author performed the experiments; the fourth and fifth authors helped perform the analysis with constructive discussions and contributed to manuscript preparation.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Iwata, T. Precision Geolocation Determination and Pointing Management for the Advanced Land Observing Satellite (ALOS). Available online: http://www.researchgate.net/publication/4064183_Precision_geolocation (accessed on 17 January 2016).
  2. Tang, X.M.; Xie, J.F. Summary of high-resolution remote sensing satellite mapping key technology research. In Proceedings of the 2011 China Satellite Conference, Beijing, China, 26–28 October 2011; pp. 182–191.
  3. Yang, B. Research on On-Orbit Geometric Calibration Theories and Methods for Optical Linear Pushbroom Satellite Imagery. Ph.D. Dissertation, Wuhan University, Wuhan, China, 2014. [Google Scholar]
  4. Jacobsen, K. Geometry of Satellite Images-Calibration and Mathematical Models. Available online: http://120.52.72.40/www.ipi.uni-hannover.de/c3pr90ntcsf0/uploads/tx_tkpublikationen/GeomSatJac.pdf (accessed on 17 January 2016).
  5. Zhang, G.; Jiang, Y.H. In-orbit geometric calibration and validation of ZY-3 linear array sensors. Photogramm. Rec. 2014, 29, 68–88. [Google Scholar] [CrossRef]
  6. Delvit, J.; Greslou, D.; Latry, C. Geometric improvement for Earth observation applications. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Honolulu, HI, USA, 25–30 July 2010; pp. 3632–3635.
  7. Bar-Itzhack, I. REQUEST: A recursive QUEST algorithm for sequential attitude determination. J. Guid. Control Dyn. 1996, 19, 1034–1038. [Google Scholar] [CrossRef]
  8. Shuster, M. A simple Kalman filter and smoother for spacecraft attitude. J. Astronaut. Sci. 1989, 37, 89–106. [Google Scholar]
  9. Christian, J.A.; Lightsey, E.G. Sequential optimal attitude recursion filter. J. Guid. Control Dyn. 2010, 33, 1787–1800. [Google Scholar] [CrossRef]
  10. Zanetti, R.; Ainscough, T.; Christian, J.A.; Spanos, P.D. Q-method extended Kalman filter. J. Guid. Control Dyn. 2015, 38, 752–760. [Google Scholar]
  11. Shuster, M.; Oh, S. Three-axis attitude determination from vector observations. J. Guid. Control Dyn. 1981, 4, 70–77. [Google Scholar] [CrossRef]
  12. Mortari, D. Second estimator of the optimal quaternion. J. Guid. Control Dyn. 2000, 23, 885–888. [Google Scholar] [CrossRef]
  13. Liu, J.; Zhang, Y.S.; Wang, D.H. Precise positioning of high spatial resolution satellite images based on RPC Models. Surv. Map 2006, 35, 30–34. [Google Scholar]
  14. Robertson, B.C. Rigorous geometric modeling and correction of QuickBird imagery. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Toulouse, France, 21–25 July 2003; pp. 797–802.
  15. Schwind, P.; Schneider, M.; Palubinskas, G. Processors for ALOS optical data: Deconvolution, DEM generation, orthorectification, and atmospheric correction. IEEE Trans. Geosci. Remote Sens. 2009, 47, 4074–4082. [Google Scholar] [CrossRef] [Green Version]
  16. Toutin, T. Review article: Geometric processing of remote sensing images: Models, algorithms and methods. Int. J. Remote. Sens. 2004, 25, 1893–1924. [Google Scholar] [CrossRef]
  17. Wu, J.; Zheng, Z.; Feng, H. Restoration of TDI camera images with motion distortion and blur. Opt. Laser Technol. 2010, 42, 1198–1203. [Google Scholar] [CrossRef]
  18. Iwasaki, A. Detection and estimation of satellite attitude jitter using remote sensing imagery. Adv. Spacecr. Technol. 2011, 257–272. [Google Scholar] [CrossRef]
  19. Teshima, Y.; Iwasaki, A. Correction of attitude fluctuation of Terra spacecraft using ASTER/SWIR imagery with parallax observation. IEEE Trans. Geosci. Remote Sens. 2008, 46, 222–227. [Google Scholar] [CrossRef]
  20. Soken, H.E.; Hajiyev, C.; Sakai, S.I. Robust Kalman filtering for small satellite attitude estimation in the presence of measurementfaults. Eur. J. Control 2014, 20, 64–72. [Google Scholar] [CrossRef]
  21. Goh, Y.H.; Raveendran, P.; Goh, Y.L. Robust speech recognition system using bidirectional Kalman filter. IET Signal Process. 2015, 9, 491–497. [Google Scholar] [CrossRef]
  22. Liu, L.; Zhang, L.; Zheng, X. Current situation and development trend of star sensor technology. Infrared Laser Eng. 2007, 36, 529–533. [Google Scholar]
  23. Markley, F.L. Attitude determination using vector observations: A fast optimal matrix algorithm. J. Astronaut. Sci. 1993, 41, 261–280. [Google Scholar]
  24. Lv, Z.D.; Lei, Y.J. Satellite Attitude Measurement and Determination; The Press of National Defence Industry: Beijing, China, 2013. [Google Scholar]
  25. Xie, J.F. The Critical Technology of Data Processing of Satellite Attitude Determination Based on Star Sensor. Ph.D. Dissertation, Wuhan University, Wuhan, China, 2009. [Google Scholar]
  26. Liu, B. Space-ground Intergated Attitude Determination of High-Resolution Satellite and Geometric Image Processing under Jitter Conditions. Ph.D. Dissertation, Wuhan University, Wuhan, China, 2011. [Google Scholar]
  27. Iwata, T. Precision Attitude and Position Determination for the Advanced Land Observing Satellite (ALOS). Available online: http://proceedings.spiedigitallibrary.org/ConferenceProceedings.aspx (accessed on 17 January 2016).
  28. Julier, S.; Uhlmann, J.; Durrant-Whyte, H.F. A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Autom. Control 2000, 45, 477–482. [Google Scholar] [CrossRef]
  29. Liu, J.; Wang, D.H.; Zhang, L. Orientation image adjustment of three line images based on simplified slerp equation. Cehui Kexue Jishu Xuebao 2012, 29, 357–361. [Google Scholar]
  30. Liu, A.D.; Huang, B.; Lu, Z.W. Research on method of coordinate rotation and interpolation based on quaternion. Jisuanji Yu Xiandaihua 2012, 198, 44–51. [Google Scholar]
  31. Sun, H.Y.; Li, R.M.; Yan, L. Semi-parametric fitting of the track of perspective center and attitude of airborne TLS CCD system. Wuhan Daxue Xuebao Xinxi Kexue Ban 2003, 28, 706–709. [Google Scholar]
  32. Zhang, L.; Gruen, A. Multi-image matching for DSM generation from IKONOS imagery. ISPRS J. Photogramm. 2006, 60, 195–211. [Google Scholar] [CrossRef]
  33. Fraser, C.S.; Dial, G.; Grodecki, J. Sensor orientation via RPCs. ISPRS J. Photogramm. 2006, 60, 182–194. [Google Scholar] [CrossRef]
  34. Kocaman, S.; Armin, G. Orientation and self-calibration of ALOS PRISM imagery. Photogramm. Rec. 2008, 23, 323–340. [Google Scholar] [CrossRef]
  35. Zhang, Y.J.; Zheng, M.T. On-orbit geometric calibration of ZY-3 three-linear array imagery with multistrip data sets. IEEE Trans. Geosci. Remote Sens. 2014, 52, 224–234. [Google Scholar] [CrossRef]
  36. Ayoub, F.; Leprince, S.; Binet, R. Influence of camera distortions on satellite image registration and change detection applications. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Boston, MA, USA, 7–11 July 2008; pp. 1072–1075.
  37. Wang, P.; Zhang, Y.C.; Qiang, W.Y. Research on the algorithm of on-orbit calibration based on gyro/star-sensor. In Proceedings of the Fifth International Conference on Machine Learning and CyBernetics, Dalian, China, 13–16 August 2006; pp. 303–307.
  38. Pittelkau, M.E. Kalman filtering for spacecraft system alignment calibration. J. Guid. Control Dyn. 2001, 24, 1187–1195. [Google Scholar] [CrossRef]
  39. Fu, M.Y.; Deng, Z.H. Theory of Kalman Filter Applied in Navigation System; Science Press: Beijing, China, 2003. [Google Scholar]
Figure 1. Flow chart for attitude data processing and verification.
Figure 1. Flow chart for attitude data processing and verification.
Sensors 16 01203 g001
Figure 2. Schematic diagram of attitude data processing with bidirectional Kalman filter.
Figure 2. Schematic diagram of attitude data processing with bidirectional Kalman filter.
Sensors 16 01203 g002
Figure 3. Flowchart of precision control-point matching and high-precision attitude inversion in a geometric calibration field.
Figure 3. Flowchart of precision control-point matching and high-precision attitude inversion in a geometric calibration field.
Sensors 16 01203 g003
Figure 4. Reference data of the Songshan calibration field. (a) Digital orthophoto; (b) Digital elevation model.
Figure 4. Reference data of the Songshan calibration field. (a) Digital orthophoto; (b) Digital elevation model.
Sensors 16 01203 g004
Figure 5. Reference data of the Anyang calibration field. (a) Digital orthophoto; (b) Digital elevation model.
Figure 5. Reference data of the Anyang calibration field. (a) Digital orthophoto; (b) Digital elevation model.
Sensors 16 01203 g005
Figure 6. Variations of the star sensor optical axis angle before treatment using photos taken in different calibration fields. (a) Yili field; (b) Songshan field; (c,d) Anyang field.
Figure 6. Variations of the star sensor optical axis angle before treatment using photos taken in different calibration fields. (a) Yili field; (b) Songshan field; (c,d) Anyang field.
Sensors 16 01203 g006
Figure 7. Variations of the star sensor optical axis angle after treatment using photos taken in different calibration fields. (a) Yili field; (b) Songshan field; (c,d) Anyang field.
Figure 7. Variations of the star sensor optical axis angle after treatment using photos taken in different calibration fields. (a) Yili field; (b) Songshan field; (c,d) Anyang field.
Sensors 16 01203 g007
Figure 8. Normal distribution of the variations in star sensor optical axis angle when photos were taken in different calibration fields. (a) Yili field; (b) Songshan field; (c,d) Anyang field.
Figure 8. Normal distribution of the variations in star sensor optical axis angle when photos were taken in different calibration fields. (a) Yili field; (b) Songshan field; (c,d) Anyang field.
Sensors 16 01203 g008
Figure 9. The original observation accuracies of the Astro10A star sensor when photos were taken in the Anyang calibration field on 1 January 2015.
Figure 10. The original observation accuracies of the Astro10B star sensor when photos were taken in the Anyang calibration field on 1 January 2015.
Figure 11. Normal distribution of the original observation accuracies of the Astro10A star sensor: (a) yaw; (b) roll; (c) pitch.
Figure 12. Normal distribution of the original observation accuracies of the Astro10B star sensor: (a) yaw; (b) roll; (c) pitch.
Figure 13. Precision of the onboard Astro10A original observation data at different times and places (unit: arcsec).
Figure 14. Precision of the onboard Astro10B original observation data at different times and places (unit: arcsec).
Figure 15. Variation of Euler angle error based on star sensor and gyro information fusion.
Figure 16. Variation of error bias based on star sensor and gyro information fusion.
Figure 17. Gyro bias estimates after star sensor and gyro information fusion.
Figure 18. Gyro angular velocity estimates after star sensor and gyro information fusion.
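Figures 15 through 18 summarize the star sensor and gyro information fusion. The paper's bidirectional filter is not reproduced here; the sketch below only illustrates the underlying principle, namely propagating the attitude with gyro rates and correcting the accumulated drift with star sensor measurements, using a simple one-axis complementary filter (the `gain` value is purely illustrative):

```python
def fuse(star_angles, gyro_rates, dt, gain=0.02):
    """One-axis complementary filter: propagate the attitude angle with
    gyro rates, then pull the estimate toward each star sensor sample.
    star_angles and gyro_rates are equal-length sequences; dt is the
    sample interval in seconds; gain is an illustrative blending factor."""
    est = star_angles[0]
    fused = []
    for star, rate in zip(star_angles, gyro_rates):
        est += rate * dt            # gyro propagation (drifts over time)
        est += gain * (star - est)  # star sensor correction removes drift
        fused.append(est)
    return fused
```

With a biased gyro (constant nonzero rate) and a steady star sensor reading, the estimate settles at a small constant offset instead of drifting without bound, which is the same drift-suppression effect the gyro bias estimates in Figure 17 provide in the full filter.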
Figure 19. Distribution of relative attitude error using photos taken in the Yili calibration field: (a) in multi-star sensor combination; (b) in star sensor and gyro combination.
Figure 20. Distribution of relative attitude error using photos taken in the Anyang calibration field: (a) in multi-star sensor combination; (b) in star sensor and gyro combination.
Figure 21. Distribution of relative attitude error using photos taken in the Songshan calibration field: (a) in multi-star sensor combination; (b) in star sensor and gyro combination.
Figure 22. Distribution of attitude fitting accuracy in the Lagrange polynomial interpolation model: (a) fitting based on Euler angle parameters; (b) fitting based on quaternion parameters.
Figure 23. Distribution of attitude fitting accuracy in the orthogonal polynomial fitting model: (a) fitting based on Euler angle parameters; (b) fitting based on quaternion parameters.
Figure 24. Distribution of attitude fitting accuracy in the spherical linear interpolation model.
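The spherical linear interpolation (slerp) model evaluated in Figure 24 operates on quaternions directly. A self-contained sketch of the standard formulation, assuming unit quaternions in (w, x, y, z) order:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1
    (tuples (w, x, y, z)), with parameter t in [0, 1]."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                 # flip one quaternion to take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    dot = min(1.0, dot)
    theta = math.acos(dot)        # angle between the quaternions
    if theta < 1e-9:              # nearly identical attitudes
        return q0
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))
```

Interpolating halfway between the identity and a 90-degree rotation yields the 45-degree rotation, i.e. slerp moves at constant angular rate along the great arc, which is why it has no Euler angle counterpart in Table 6.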
Figure 25. Correspondence points between the DOM image and geometric correction image of the satellite panchromatic camera taken in Songshan on 16 March 2015: (a) panchromatic image; (b) reference image.
Figure 26. Correspondence points between the DOM and the geometric correction images of the satellite panchromatic camera taken in Anyang on 9 February 2015: (a) panchromatic image; (b) reference image.
Figure 27. Distribution of the geometric relative accuracy in the cross-track and along-track directions of a panchromatic image taken in Songshan on 16 March 2015.
Figure 28. Distribution of the geometric relative accuracy in the cross-track and along-track directions of a panchromatic image taken in Anyang on 9 February 2015.
Table 1. The Yaogan-24 remote sensing satellite’s star sensor and gyro performance parameters.

| Attitude Sensor | Performance Parameters |
| --- | --- |
| Star sensor ASTRO10 | Optical axis error ≤5″ (3σ); horizontal axis error ≤35″ (3σ); frequency 4 Hz |
| APS star sensor | Optical axis error ≤20″ (3σ); horizontal axis error ≤35″ (3σ); frequency 4 Hz |
| Gyro components | Random bias ≤0.39°/h (3σ); constant bias ≤2°/h; frequency 8 Hz |
Table 2. Statistics of the optical axis angle change in the Astro10A and Astro10B star sensors at different times and places (unit: arcsec).

| Date and Field | Average | Skewness | Kurtosis | Mean Square Error |
| --- | --- | --- | --- | --- |
| 27 November 2014, Yili | 2.3483 × 10⁻¹¹ | 0.117 | 2.868 | 2.089 |
| 16 December 2014, Songshan | −1.3952 × 10⁻¹¹ | −0.438 | 2.360 | 2.087 |
| 24 December 2014, Anyang | −1.15568 × 10⁻¹⁰ | 0.188 | 3.213 | 1.282 |
| 1 January 2015, Anyang | 5.1159 × 10⁻¹¹ | 0.100 | 2.789 | 1.231 |
| 23 January 2015, Yili | 7.548 × 10⁻¹² | −0.034 | 2.285 | 1.669 |
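The columns of Table 2 (near-zero mean, small skewness, kurtosis close to 3) indicate an approximately normal error distribution, consistent with Figure 8. These statistics can be reproduced from the optical-axis-angle residuals; a sketch, interpreting the "Mean Square Error" column as the standard deviation of the residuals (an assumption about the table's convention):

```python
import math

def moments(xs):
    """Sample mean, skewness, kurtosis and standard deviation.
    Kurtosis uses the non-excess definition, so a normal distribution
    gives a value near 3, matching the Kurtosis column of Table 2."""
    n = len(xs)
    mean = sum(xs) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    skew = sum((x - mean) ** 3 for x in xs) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * std ** 4)
    return mean, skew, kurt, std
```

For a symmetric sample the skewness comes out at zero, so the small skewness values in Table 2 indicate a roughly symmetric error distribution.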
Table 3. Statistics of the Euler angle error at different times and places (unit: arcsec).

| Calibration Field | Avg. Yaw | Avg. Roll | Avg. Pitch | MSE Yaw | MSE Roll | MSE Pitch |
| --- | --- | --- | --- | --- | --- | --- |
| Yili | 0.00156 | 0.00035 | 0.00031 | 0.00863 | 0.01199 | 0.01067 |
| Anyang | −0.00026 | 0.00003 | 0.00041 | 0.00423 | 0.00429 | 0.00333 |
| Anyang | −0.00019 | −0.00017 | 0.00018 | 0.01194 | 0.00991 | 0.01013 |
| Yili | −0.00164 | 0.00300 | 0.00119 | 0.01045 | 0.01298 | 0.00988 |
| Songshan | −0.00109 | −0.00132 | −0.00008 | 0.00717 | 0.00732 | 0.00542 |
Table 4. Statistics of gyro bias error at different times and places (unit: deg/h).

| Calibration Field | Avg. Bx | Avg. By | Avg. Bz | MSE Bx | MSE By | MSE Bz |
| --- | --- | --- | --- | --- | --- | --- |
| Yili | −9.7333 × 10⁻⁶ | −7.3 × 10⁻⁶ | −3.5 × 10⁻⁵ | 0.00026 | 0.000237 | 0.000188 |
| Anyang | −4.55149 × 10⁻⁷ | −4 × 10⁻⁶ | 2.5 × 10⁻⁶ | 3.97 × 10⁻⁵ | 3.06 × 10⁻⁵ | 3.98 × 10⁻⁵ |
| Anyang | 4.19674 × 10⁻⁶ | −4.5 × 10⁻⁶ | 4.98 × 10⁻⁶ | 0.000211 | 0.000218 | 0.000246 |
| Yili | −6.8224 × 10⁻⁵ | −2.7 × 10⁻⁵ | 3.84 × 10⁻⁵ | 0.000278 | 0.000214 | 0.000234 |
| Songshan | 1.52775 × 10⁻⁵ | 1.1 × 10⁻⁶ | 1.19 × 10⁻⁵ | 7.91 × 10⁻⁵ | 5.79 × 10⁻⁵ | 7.67 × 10⁻⁵ |
Table 5. Mean square error of the relative attitude in different attitude sensor combinations (unit: arcsec).

| Calibration Field | Multi-Star: Yaw | Multi-Star: Roll | Multi-Star: Pitch | Star + Gyro: Yaw | Star + Gyro: Roll | Star + Gyro: Pitch |
| --- | --- | --- | --- | --- | --- | --- |
| Yili | 0.727 | 0.868 | 0.811 | 0.272 | 0.611 | 0.477 |
| Anyang | 0.699 | 0.773 | 0.543 | 0.418 | 0.303 | 0.161 |
| Anyang | 1.005 | 0.782 | 0.796 | 0.326 | 0.386 | 0.294 |
| Yili | 0.789 | 0.993 | 0.733 | 0.354 | 0.416 | 0.159 |
| Songshan | 1.022 | 1.029 | 0.777 | 0.425 | 0.513 | 0.291 |
| RMS | 0.859 | 0.895 | 0.738 | 0.363 | 0.458 | 0.299 |
Table 6. The mean square error of attitude fitting accuracy in different fitting models (unit: arcsec).

| Fitting Model | Euler: Yaw | Euler: Roll | Euler: Pitch | Quaternion: Yaw | Quaternion: Roll | Quaternion: Pitch |
| --- | --- | --- | --- | --- | --- | --- |
| Lagrange polynomial | 0.209 | 0.207 | 0.249 | 0.204 | 0.203 | 0.244 |
| Orthogonal polynomial | 0.141 | 0.101 | 0.136 | 0.142 | 0.105 | 0.135 |
| Spherical linear | — | — | — | 0.272 | 0.225 | 0.119 |
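Of the models compared in Table 6, Lagrange interpolation is the simplest to sketch: the fitted attitude at an arbitrary time is the unique polynomial through the sampled attitude parameters. A self-contained evaluation routine (variable names are illustrative):

```python
def lagrange(ts, ys, t):
    """Evaluate at time t the Lagrange polynomial passing through the
    attitude samples (ts[i], ys[i]); ts must be distinct sample times."""
    total = 0.0
    for i, (ti, yi) in enumerate(zip(ts, ys)):
        basis = 1.0
        for j, tj in enumerate(ts):
            if j != i:
                basis *= (t - tj) / (ti - tj)  # Lagrange basis polynomial L_i(t)
        total += yi * basis
    return total
```

The same routine applies whether `ys` holds an Euler angle or one quaternion component per sample, which is why Table 6 reports both variants; the orthogonal polynomial model instead fits a least-squares polynomial over many samples, which Table 6 shows to be the more accurate choice here.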
Table 7. Uncontrolled and relative positioning accuracy of a geometric correction image taken by the panchromatic camera based on on-board attitude data (unit: m).

| Time | Side Swing (°) | Region | Positioning Accuracy | Average Offset dx | Average Offset dy | Relative Accuracy mx | Relative Accuracy my |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 27 December 2014 15:27 | −3.820 | Yili | 19.643 | 16.788 | −10.199 | 1.942 | 2.438 |
| 9 February 2015 11:10 | −5.532 | Anyang | 25.364 | 21.329 | 13.727 | 2.101 | 1.854 |
| 16 March 2015 13:20 | 4.152 | Songshan | 31.570 | −18.693 | −25.441 | 2.478 | 2.135 |
| 1 May 2015 16:13 | 3.318 | Anyang | 10.883 | 9.233 | −5.762 | 1.873 | 2.195 |
| 23 July 2015 15:27 | −2.243 | Yili | 35.982 | −34.288 | 10.912 | 2.711 | 2.389 |
| 12 September 2015 14:15 | 7.436 | Dongying | 29.998 | 22.788 | −19.509 | 2.788 | 1.993 |
| 20 October 2015 13:30 | −5.616 | Sanya | 45.548 | 37.255 | −26.205 | 1.893 | 2.154 |
| 25 December 2015 16:55 | 2.378 | Taiyuan | 27.536 | −24.406 | −12.752 | 3.117 | 2.082 |
| RMS | | | 29.941 | 24.622 | 17.036 | 2.404 | 2.162 |
Table 8. Uncontrolled and relative positioning accuracy of a geometric correction image taken by the panchromatic camera based on ground processing attitude data (unit: m).

| Time | Side Swing (°) | Region | Positioning Accuracy | Average Offset dx | Average Offset dy | Relative Accuracy mx | Relative Accuracy my |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 27 December 2014 15:27 | −3.820 | Yili | 13.141 | −11.581 | −6.212 | 1.665 | 1.721 |
| 9 February 2015 11:10 | −5.532 | Anyang | 6.429 | 5.356 | −3.557 | 1.355 | 1.399 |
| 16 March 2015 13:20 | 4.152 | Songshan | 7.020 | 4.271 | −5.572 | 1.323 | 1.413 |
| 1 May 2015 16:13 | 3.318 | Anyang | 8.381 | −3.564 | −7.586 | 0.831 | 1.896 |
| 23 July 2015 15:27 | −2.243 | Yili | 15.747 | −14.197 | 6.815 | 1.385 | 1.125 |
| 12 September 2015 14:15 | 7.436 | Dongying | 18.288 | 15.734 | −9.323 | 1.255 | 1.133 |
| 20 October 2015 13:30 | −5.616 | Sanya | 21.099 | 16.388 | −13.29 | 1.421 | 0.847 |
| 25 December 2015 16:55 | 2.378 | Taiyuan | 17.676 | −14.988 | −9.372 | 1.006 | 1.155 |
| RMS | | | 14.464 | 11.916 | 8.197 | 1.302 | 1.374 |
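The roughly 50% accuracy improvement claimed in the abstract can be checked directly from the RMS rows of Tables 7 and 8:

```python
# RMS values from the last rows of Tables 7 (on-board attitude) and
# Table 8 (on-ground processed attitude); units are metres.
onboard = {"positioning": 29.941, "offset dx": 24.622, "offset dy": 17.036}
onground = {"positioning": 14.464, "offset dx": 11.916, "offset dy": 8.197}

for key in onboard:
    gain = 100.0 * (1.0 - onground[key] / onboard[key])
    print(f"{key}: {gain:.1f}% improvement")
```

All three ratios come out slightly above 50%, consistent with the "about 50%" figure quoted in the abstract.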

Wang, M.; Fan, C.; Yang, B.; Jin, S.; Pan, J. On-Ground Processing of Yaogan-24 Remote Sensing Satellite Attitude Data and Verification Using Geometric Field Calibration. Sensors 2016, 16, 1203. https://doi.org/10.3390/s16081203