Abstract
This paper studies staring imaging attitude tracking and control for satellite videos based on image information. An improved spatio-temporal context learning algorithm is employed to extract the image information. Based on this, a hyperbolic tangent fuzzy sliding mode control law is proposed to achieve attitude tracking and control, in which the hyperbolic tangent function and a fuzzy logic system are introduced into the sliding mode controller. In the experiments, the improved spatio-temporal context learning algorithm is applied to the space target video sequence captured by Jilin-1 in orbit, and the extracted image information is used as the input of the control loop; the control part is realized through simulation. The image change caused by attitude adjustment is achieved successfully, and the target image can be kept in the center of the image plane, realizing effective gaze tracking control of the space target.
1. Introduction
With the rapid development of remote sensing technology, video satellites have attracted much attention due to their continuous observation ability [1–8]. As a new type of Earth observation satellite, a video satellite can employ the payload on its platform for push-broom scan imaging with the help of the orbital motion of the satellite. Moreover, in staring imaging mode, the video satellite adjusts its attitude in real time to obtain the dynamic information of the target area continuously, so that the optical axis of the optical payload points at the target at all times. In Ref. [9], Liu et al. introduce the staring imaging technique, where this process is generally referred to as staring attitude control. In addition, video satellites can use agility and attitude control technology to realize continuous imaging of ground targets. For this reason, compared with traditional Earth observation satellites, video satellites have been widely applied in many fields, such as real-time vehicle monitoring [10, 11], rapid response to natural disaster emergencies [12], and major engineering monitoring [13].
Up to now, there are mainly two types of staring imaging satellites in orbit: satellites in geostationary orbit and video satellites in low orbit [14–16]. Figure 1 shows the schematic diagram of the ground gazing attitude control of a video satellite. The attitude control system of the video satellite adjusts the attitude in real time so that the optical axis of the optical sensor always points to the ground target area for continuous photography. Staring imaging is the main working mode of video satellites [17, 18]. In essence, the staring imaging control problem is a dynamic attitude tracking problem, in which it is difficult to keep the optical axis of the satellite optical sensor pointing at the observed object with high stability.
Figure 1.

Schematic diagram of ground gazing attitude control of the video satellite.
In the last decades, the staring imaging attitude control of video satellites has been regarded as a spacecraft attitude tracking and control problem [19–21]. Much research has been done on attitude tracking control for satellites, and several controllers have been employed for satellite attitude control, such as sliding mode controllers, robust controllers, and intelligent controllers [19–34].
Building on attitude tracking and control for spacecraft and satellites, several satellite staring attitude control methods have been introduced to achieve real-time tracking [24–32]. For instance, Lian et al. [27] investigated the attitude problems of small satellites in staring operation. Liang et al. [28] designed a fuzzy logic control law for the staring imaging attitude problem of a satellite in LEO, which has a quick response and excellent robustness. In Ref. [29], Chen et al. presented a quaternion-based PID feedback tracking controller with gyroscopic term cancellation, with which a desired target on the Earth can be tracked. Chen et al. [30] investigated a staring imaging attitude controller based on the double-gimbaled control moment gyroscope (DGCMG), which is simple and effective for agile small satellites. In Ref. [32], Li and Liang proposed a robust finite-time controller for satellite attitude maneuvers and demonstrated its robustness to typical perturbations such as disturbance torque, model uncertainty, and actuator error. Li et al. [31] implemented a neural network controller for staring imaging, with which real-time performance can be achieved.
However, the above staring attitude controllers do not consider the image information directly; the image information is separated from the attitude tracking controller. We note that in the satellite staring mode, the optical axis of the camera should point at the target for a long time, while both the video satellite and the target may be moving in the inertial coordinate system, so their relative velocity and position may change over time.
Introducing visual information into the closed control loop is commonly known as visual servoing and was first applied in the field of robotics [35–38]. Recently, robot visual servoing has achieved numerous results in both theory and practical application. Robot visual servoing is generally divided into two structures: position-based visual servoing and image-based visual servoing. Position-based servoing must calibrate the internal parameters of the camera to determine the relative attitude between the target and the camera coordinate system, which increases the amount of computation of the system. In contrast, image-based visual servoing directly uses the visual feature error of the target in the image plane and treats the controlled object and the visual system as a whole.
Motivated by these advantages, visual information is introduced into the control closed loop of the satellite in this paper. An improved spatio-temporal context learning (ISTC) algorithm is employed to extract the image information. Based on this, a hyperbolic tangent fuzzy sliding mode control law (HTFSMC) for small video satellites is designed to achieve attitude tracking and control. In particular, the related coordinate systems are defined for attitude transformation. Subsequently, a sliding mode tracking controller is presented based on the image information from satellite videos, in which the hyperbolic tangent function and a fuzzy logic system are employed.
In summary, the contributions of this paper are threefold.
An ISTC algorithm is employed to obtain the image information. Hence, visual information can be used effectively in visual tracking control based on spatial moving images, without the cumbersome and complex calibration of the camera's internal parameters or accurate information on the target and camera motion.
Based on the image information, this paper proposes the HTFSMC, where the hyperbolic tangent function and a fuzzy logic system are introduced into the sliding mode controller.
In the experiments, the image information of the space target video sequence captured by Jilin-1 in orbit is used as the input of the controller, and the control part is realized through simulation. The image change caused by attitude adjustment is achieved successfully, and the target image can be kept in the center of the image plane, realizing effective gaze tracking control of the space target.
The rest of this paper is arranged as follows. Section 2 describes the staring imaging attitude dynamics model in detail and then presents the sliding mode controller and the fuzzy sliding mode controller for staring imaging attitude tracking of small video satellites. Section 3 introduces the experimental results and some discussion. Finally, Section 4 concludes this article.
2. Materials and Methods
In this paper, the ISTC algorithm is employed for moving object tracking in order to extract the image information. Based on this, an attitude controller based on image information feedback is designed to realize gaze tracking control of moving targets. The structure diagram is shown in Figure 2. In a nutshell, the method obtains the centroid coordinates of the target and calculates its position deviation from the image center. Therefore, the cumbersome processes of camera calibration and relative pose estimation can be avoided, so the computation cost is reduced.
Figure 2.

Structure frame of staring imaging attitude tracking control system based on image information.
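As a concrete illustration, the feedback quantity just described, the position deviation of the tracked centroid from the image center, reduces to a simple pixel-space computation. This is a minimal sketch; the function and variable names are our own:

```python
def center_deviation(centroid, image_size):
    """Pixel deviation of the tracked target centroid from the image
    center; this deviation is the feedback signal of the control loop."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    return centroid[0] - cx, centroid[1] - cy
```

For a 4000 × 4000 frame, a centroid tracked at (2000, 2000) gives zero deviation, i.e., the target is already imaged at the center.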
2.1. An Improved Spatio-Temporal Context Learning Algorithm
For moving target video tracking, the local context consists of the target and a certain area of background nearby. In fact, there is a strong spatio-temporal relationship in the local scene around the target between consecutive frames. According to this relationship, the spatio-temporal context (STC) learning algorithm constructs a spatio-temporal context model of the target and its nearby area based on the gray-level features of the image. The confidence map of the target is then calculated, and the position of maximum likelihood in the confidence map is taken as the estimated target position. The ISTC algorithm is described in detail below.
Let the current target position be denoted x∗. The luminance feature set of the context around x∗ is defined as
| (1) |
where c(z) and I(z) are the luminance feature and the image intensity at position z, respectively, and Ωc(x∗) denotes the context area around position x∗. The confidence map c(x) can then be calculated as follows:
| (2) |
where P(x|c(z), o) is the conditional probability, which represents the spatial relationship between the target position and its context information, and P(c(z)|o) is the prior probability, which models the appearance of the local context. P(x|c(z), o) can be defined as
| (3) |
where hsc(x − z) is a function of the relative distance and direction between the target position x and its local context position z. Subsequently, P(c(z)|o) can also be defined as
| (4) |
where ωσ(z − x∗) is a weight function, a is a normalization constant that keeps P(c(z)|o) in (4) between 0 and 1, and σ is a scale parameter. Hence, the confidence map c(x) in (2) can be rewritten as
| (5) |
where ⊗ is the convolution operator and β is a shape parameter. Applying the fast Fourier transform to both sides of (5), Equation (5) can be updated as
| (6) |
where ℱ is the fast Fourier transform (FFT) and ⊙ denotes the element-wise product. Subsequently, hsc(x) can be obtained as
| (7) |
where ℱ−1 is the inverse FFT. In this way, based on (7), the spatio-temporal context model can be derived as
| (8) |
where htsc is the spatial context model at the t-th frame obtained from (7). Hence, the spatio-temporal context model ht+1sc for the next frame can also be obtained. Thus, the confidence map at the (t+1)-th frame is expressed as
| (9) |
Maximizing the confidence map yields the estimated location of the target:
| (10) |
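The Fourier-domain learning and detection steps of (5)–(10) can be sketched as follows. This is a minimal single-channel illustration: the parameter values and the Wiener-style regularization of the spectral division in (7) are our assumptions, not the paper's exact ISTC implementation:

```python
import numpy as np

def weight(shape, center, sigma):
    """Gaussian spatial weight w_sigma(z - x*) over the context region,
    normalized so the prior of (4) sums to 1 (the constant a)."""
    ys, xs = np.indices(shape)
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    w = np.exp(-d2 / sigma ** 2)
    return w / w.sum()

def confidence_prior(shape, center, alpha=2.25, beta=1.0):
    """Target confidence map c(x) = exp(-(|x - x*| / alpha)^beta) of (5);
    the values of alpha and beta are assumed."""
    ys, xs = np.indices(shape)
    d = np.sqrt((ys - center[0]) ** 2 + (xs - center[1]) ** 2)
    return np.exp(-((d / alpha) ** beta))

def learn_context(frame, center, sigma, eps=1e-6):
    """Solve (7) for the spatial context model h_sc in the Fourier domain.
    Eq. (7) is a plain spectral division; a Wiener-style regularization
    (eps) is used here so near-zero spectrum values cannot blow up."""
    c_spec = np.fft.fft2(confidence_prior(frame.shape, center))
    p_spec = np.fft.fft2(frame * weight(frame.shape, center, sigma))
    return np.real(np.fft.ifft2(c_spec * np.conj(p_spec)
                                / (np.abs(p_spec) ** 2 + eps)))

def track(frame, h_sc, center, sigma):
    """Evaluate (9)-(10): confidence map of the new frame via the
    convolution theorem, then argmax as the estimated target position."""
    p_spec = np.fft.fft2(frame * weight(frame.shape, center, sigma))
    conf = np.real(np.fft.ifft2(np.fft.fft2(h_sc) * p_spec))
    return np.unravel_index(np.argmax(conf), conf.shape)
```

Learning the context on a frame and re-detecting on the same frame returns the training position, which is a quick consistency check on the spectral round trip.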
Since the target attitude may change during target movement, the size of the target may also change, and the background information may differ from frame to frame. Therefore, a scale update strategy can be employed for the target, which is given as
| (11a) |
| (11b) |
| (11c) |
| (11d) |
However, the denominator in Equation (11a) may be close to zero, so the scale estimate may occasionally jump abruptly and corrupt the moving object tracking results. For this reason, an improved scale update strategy with a penalty term p(s) is introduced to avoid such abrupt changes:
| (12) |
where ς is a constant. In this way, the updated scale can be rewritten as
| (13) |
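A single step of such a regularized scale update can be sketched as follows. The exact penalty term p(s) of (12) is not reproduced here; the guard constant, the clamp standing in for the penalty, and the smoothing factor are all assumptions for illustration:

```python
import math

def scale_step(conf_now, conf_prev, s_prev, lam=0.25, zeta=1e-3,
               max_change=0.1):
    """One step of a regularized scale update, sketched after (11)-(13).

    conf_now / conf_prev are the confidence-map peak values at frames t
    and t-1.  zeta guards the ratio of (11a) against a near-zero
    denominator, and the clamp stands in for the penalty term p(s) of
    (12) -- its exact form is not reproduced here."""
    s_est = math.sqrt(conf_now / (conf_prev + zeta))          # guarded (11a)
    s_est = min(max(s_est, 1.0 - max_change), 1.0 + max_change)
    return (1.0 - lam) * s_prev + lam * s_est                 # smoothed update
```

With equal consecutive confidences the scale stays put, and even a zero previous confidence cannot produce an abrupt jump.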
2.2. Staring Imaging Attitude Dynamics Model
2.2.1. The Definitions of the Related Coordinate Systems
Some related coordinate systems are shown in Figure 3. The Earth-centered inertial coordinate system is defined as Oi − XiYiZi, where the coordinate origin Oi is located at the center of mass of the Earth, the Xi-axis lies in the equatorial plane and points at the vernal equinox of the epoch, the Zi-axis is aligned with the Earth's rotation axis, and the Yi-axis lies in the equatorial plane and completes the right-handed orthogonal frame. The satellite body coordinate system is defined as Ob − XbYbZb, where the origin Ob is the center of mass of the satellite. The image coordinate system is O − XpYp, the camera coordinate system is Oc − XcYcZc, and the image pixel coordinate system is I − xy.
Figure 3.

Schematic diagram of ground gazing attitude control of the video small satellite.
2.2.2. The Attitude Solution Based on Satellite Images
In this paper, we assume that the camera coordinate system Oc − XcYcZc coincides with the satellite body coordinate system Ob − XbYbZb. The unit vector in the ObZb direction is r=(0,0,1)T. As shown in Figure 4, the coordinates of the target P in the pixel coordinate system I − xy are (u, v). In the satellite body coordinate system Ob − XbYbZb, the line-of-sight direction rP of the target can be described as
| (14) |
where (xP, yP, zP) are the coordinates of the target in the satellite body coordinate system Ob − XbYbZb, f is the focal length of the spaceborne camera, and l is the pixel size.
Figure 4.

The diagram of target deviation on image plane.
The purpose is to keep the target imaged at the center of the image, so r must be made to coincide with rP. In the process of staring tracking, the images are required to remain stable and free of rotation, which is convenient for image observation and analysis. Staring tracking imaging is thus the process of controlling r to track rP.
In the satellite coordinate system Ob − XbYbZb, the following assumptions are given as
| (15a) |
| (15b) |
First, we rotate around the ObYb axis so that r coincides with r1. The rotation angle θ is expressed as
| (16) |
Then, we rotate around the ObXb axis so that r1 coincides with rP. The rotation angle φ is given as
| (17) |
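The line-of-sight construction of Eq. (14) and the two rotation angles of Eqs. (16)–(17) can be sketched as follows. This is a minimal sketch: the sign conventions are assumptions and should be checked against Figure 4:

```python
import math

def los_vector(u, v, f, l):
    """Unit line-of-sight vector r_P in the body frame, built from the
    pixel offsets (u, v) of the target relative to the image center,
    the pixel size l and the focal length f (cf. Eq. (14)).  The sign
    conventions are assumptions."""
    x, y, z = u * l, v * l, f
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

def pointing_angles(rp):
    """Rotation angles of Eqs. (16)-(17): theta about ObYb brings the
    boresight into the plane containing r_P, then phi about ObXb
    completes the alignment (the sign of phi is an assumed convention)."""
    x, y, z = rp
    theta = math.atan2(x, z)
    phi = -math.atan2(y, math.hypot(x, z))
    return theta, phi
```

A target already at the image center (u = v = 0) gives the boresight itself, so both angles vanish and no attitude correction is commanded.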
The attitude quaternion q is defined as
| (18a) |
| (18b) |
| (18c) |
where qv is the vector part and q0 is the scalar part. Rotating by θ around (0,1,0)T yields the quaternion in Equation (19).
| (19) |
Rotating by φ around (1,0,0)T, the quaternion is obtained as
| (20a) |
| (20b) |
where ⊗ is the quaternion multiplication operator. Thereby, the expected attitude error quaternion qe is expressed as
| (21) |
Therefore, the expected attitude quaternion qt relative to the Earth inertial frame can be expressed as
| (22) |
where qb is the attitude quaternion of the satellite body coordinate system relative to the Earth inertial frame.
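The quaternion algebra used in (18)–(22) can be sketched as follows. The scalar-first storage and the composition order in `target_quaternion` are assumptions, since quaternion conventions vary between references:

```python
import numpy as np

def quat_mul(p, q):
    """Quaternion product p ⊗ q, scalar part first: (q0, q1, q2, q3)."""
    p0, pv = p[0], np.asarray(p[1:], dtype=float)
    q0, qv = q[0], np.asarray(q[1:], dtype=float)
    s = p0 * q0 - pv @ qv
    v = p0 * qv + q0 * pv + np.cross(pv, qv)
    return np.concatenate(([s], v))

def quat_conj(q):
    """Conjugate (the inverse for a unit quaternion)."""
    return np.concatenate(([q[0]], -np.asarray(q[1:], dtype=float)))

def axis_angle_quat(axis, angle):
    """Unit quaternion for a rotation of `angle` about a unit `axis`,
    as used to build the theta- and phi-rotations of (19)-(20)."""
    a = np.asarray(axis, dtype=float)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * a))

def target_quaternion(q_b, q_e):
    """Expected attitude quaternion in the spirit of (22); the
    composition order q_t = q_b ⊗ q_e is an assumption."""
    return quat_mul(q_b, q_e)
```

A quaternion composed with its conjugate returns the identity, and two rotations about a common axis add their angles, which checks the product formula.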
The attitude kinematics equation is shown as
| (23a) |
| (23b) |
| (23c) |
where the antisymmetric matrix is formed from the angular velocity vector and I3×3 is the identity matrix. Then, the following equation can be verified:
| (24) |
Using Equation (23a), the expected angular velocity ωt can be solved inversely as
| (25a) |
| (25b) |
The expected attitude error angular velocity is given as
| (26) |
For any quaternion, we can obtain
| (27) |
where A(qe) is the attitude matrix determined by qe.
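The attitude matrix A(qe) of (27) and the error angular velocity of (26) can be sketched as follows. The scalar-first quaternion convention and the direction-cosine-matrix sign convention are assumptions and may differ from the paper's:

```python
import numpy as np

def attitude_matrix(q):
    """Direction cosine matrix A(q) for a unit quaternion
    q = (q0, q1, q2, q3), scalar part first (standard spacecraft
    attitude-matrix form; the convention is an assumption)."""
    q0, q1, q2, q3 = q
    return np.array([
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 + q0*q3),     2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3),     1 - 2*(q1**2 + q3**2), 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2),     2*(q2*q3 - q0*q1),     1 - 2*(q1**2 + q2**2)],
    ])

def error_rate(omega_b, omega_t, q_e):
    """Attitude-error angular velocity in the spirit of Eq. (26):
    omega_e = omega_b - A(q_e) @ omega_t."""
    return np.asarray(omega_b, dtype=float) \
        - attitude_matrix(q_e) @ np.asarray(omega_t, dtype=float)
```

The identity quaternion yields the identity matrix, and any unit quaternion yields an orthogonal matrix, which checks the expansion.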
2.3. Sliding Mode Controller
With three orthogonally mounted reaction flywheels as actuators, the satellite attitude dynamics equation is given as
| (28) |
where J is the moment of inertia of the satellite, ωb is the angular velocity of the satellite body coordinate system, h is the angular momentum of the flywheels, u is the control torque, and d is the external disturbance torque.
In this paper, the sliding mode function is designed as
| (29a) |
| (29b) |
| (29c) |
| (29d) |
| (29e) |
where ζ=[ζ1, ζ2, ζ3]T and K=diag(ki), i=1,2,3. If ζ⟶0, both the attitude and the angular velocity of the system track their desired states.
The reaching-law method is used to obtain the sliding mode control law, and the exponential reaching law is adopted:
| (30) |
where sgn(ζ)=[sgn(ζ1), sgn(ζ2), sgn(ζ3)]T, l=diag(li), and ε=diag(εi), i=1,2,3. The control torque u is obtained as
| (31) |
where K1 is a controller parameter matrix. We note that the chattering of the sliding mode controller is mainly caused by sgn(ζ) and d. Let P(ζ)=sgn(ζ). In order to reduce chattering, P(ζ) is rewritten as
| (32a) |
| (32b) |
where the steepness of the hyperbolic tangent function near its inflection point is determined by the value of ε (ε > 0).
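The switching term of (30) and its smooth replacement in (32) can be sketched as follows. This sketch keeps only the reaching-law part of the control torque; the full law (31) also contains model-dependent feedforward terms (inertia, flywheel momentum) that are omitted here, and the gain values are assumptions:

```python
import numpy as np

def smc_switch(zeta, eps=0.05, use_tanh=True):
    """Switching term of the reaching law (30): sgn(zeta), or the smooth
    hyperbolic-tangent replacement of (32) with steepness set by eps."""
    zeta = np.asarray(zeta, dtype=float)
    return np.tanh(zeta / eps) if use_tanh else np.sign(zeta)

def htfsmc_torque(zeta, l_gain=None, eps_gain=None, eps_tanh=0.05):
    """Reaching-law part of the control torque,
    u ~ -(l @ zeta + eps @ P(zeta)); the assumed default gains are for
    illustration only."""
    l_gain = np.diag([0.5] * 3) if l_gain is None else l_gain
    eps_gain = np.diag([0.02] * 3) if eps_gain is None else eps_gain
    zeta = np.asarray(zeta, dtype=float)
    return -(l_gain @ zeta + eps_gain @ smc_switch(zeta, eps_tanh))
```

Unlike sgn, tanh is continuous through ζ = 0, so the commanded torque no longer flips between ±ε on the sliding surface, which is exactly the chattering reduction the text describes.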
2.4. Stability Analysis
In order to ensure that the state of the system moves from any initial point to s=0 in finite time under the designed sliding mode controller, the following assumptions are given:
Suppose d is bounded and dmax ≥ D2 is the bound of d.
Supposed .
The stability of the system is proved as follows. A Lyapunov function is constructed as
| (33) |
The derivative of (33) is represented as
| (34) |
where the derivative vanishes if and only if ζ=0, so it is a negative semidefinite function. Therefore, the system converges under the sliding mode control.
2.5. Fuzzy Logic System
A fuzzy logic system (FLS) consists of a fuzzifier, a fuzzy rule base, a fuzzy inference engine, and a defuzzifier, as shown in Figure 5. In this paper, we assume that x1 ∈ X1, x2 ∈ X2, …, xp ∈ Xp are the p inputs and y ∈ Y is the output. Furthermore, the fuzzy rule base is composed of k rules, expressed as
| (35) |
where l=1,2,…, k. Thereby, the fuzzy rules can be simplified as a mapping from the fuzzy input sets F1l × ⋯ × Fpl to the fuzzy output set Y, denoted by F1l × ⋯ × Fpl=Al. In this way, (35) can be rewritten as
| (36) |
Figure 5.

The diagram of Fuzzy logic system.
The membership function μR(l)(x, y) can be utilized to describe R(l) as
| (37) |
where x=(x1, x2,…,xp)T. Hence, (37) can be rewritten as
| (38) |
where ⋆ denotes the t-norm connecting the juxtaposed antecedents.
Ax is the fuzzy set of the p inputs of Rl, and its membership function μAx(x) is defined as
| (39) |
According to each fuzzy rule, a fuzzy set Bl about the set Y is given as
| (40) |
Meanwhile, based on the commutativity of the t-norm, we obtain the membership function μBl(y) in (41):
| (41) |
Applying the singleton fuzzifier to (40) and (41), they can be rewritten as
| (42) |
Using the centroid defuzzifier, the FLS output can be expressed as
| (43a) |
| (43b) |
where B is the output fuzzy set and yc(x) is the crisp output.
2.6. Fuzzy Sliding Mode Controller
The input and output fuzzy sets of the system are defined as
| (44a) |
| (44b) |
where (44a) defines the fuzzy sets of the input and (44b) defines those of the output ΔD1, the variation of D1. In Equations (44a) and (44b), NB, NM, NS, Z, PS, PM, and PB denote negative large, negative middle, negative small, zero, positive small, positive middle, and positive large, respectively. Therefore, the following seven rules are designed:
R1 : If is PB, THEN ΔD1 is PB.
R2 : If is PM, THEN ΔD1 is PM.
R3 : If is PS, THEN ΔD1 is PS.
R4 : If is Z, THEN ΔD1 is Z.
R5 : If is NS, THEN ΔD1 is NS.
R6 : If is NM, THEN ΔD1 is NM.
R7 : If is NB, THEN ΔD1 is NB.
Besides, Figure 6 shows the input and output membership functions of the fuzzy control system. The centroid defuzzifier is employed to obtain the value of ΔD1 from the input value. Meanwhile, D1 can be rewritten as
| (45) |
Figure 6.

The input/output membership function of fuzzy control system: (a) membership function of fuzzy input; (b) membership function of fuzzy output.
Hence, the fuzzy sliding mode controller can be designed as
| (46) |
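The seven-rule gain adaptation above can be sketched as follows. The triangular membership functions, the normalized universe, and the update gain are assumptions for illustration; Figure 6 defines the actual membership functions used in the paper:

```python
import numpy as np

# Centers of the seven fuzzy sets NB, NM, NS, Z, PS, PM, PB on an
# assumed normalized universe [-1, 1].
CENTERS = np.linspace(-1.0, 1.0, 7)

def tri(x, c, w=1.0 / 3.0):
    """Triangular membership function with center c and half-width w."""
    return np.maximum(0.0, 1.0 - np.abs(x - c) / w)

def delta_d1(x):
    """Centroid defuzzification of the seven rules R1..R7 (input set i
    maps to output set i): the membership-weighted mean of the output
    set centers."""
    x = float(np.clip(x, -1.0, 1.0))
    mu = tri(x, CENTERS)
    return float(mu @ CENTERS / mu.sum())

def update_d1(d1, x, gamma=0.1):
    """Adaptation of the switching gain in the spirit of Eq. (45):
    D1 <- D1 + gamma * delta_D1 (the update gain gamma is assumed)."""
    return d1 + gamma * delta_d1(x)
```

Because the rule table is the identity map between input and output sets, the defuzzified ΔD1 interpolates the input linearly: a zero input leaves D1 unchanged, while larger inputs grow the gain proportionally.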
3. Results and Discussion
In this section, numerical simulations are conducted for the video satellite Jilin-1 in order to verify the performance of the proposed method. The experiments are implemented in Matlab R2018b on an NVIDIA GeForce RTX 2080 Ti GPU. First, the ISTC algorithm is employed to extract the image information; the results of moving target tracking by ISTC are presented in Figure 7. Based on this, comparative experiments are designed for the traditional sliding mode controller and the HTFSMC. The initial conditions of the simulations are given in Table 1. Note that the image size is 4000 pixels × 4000 pixels, which is very large.
Figure 7.

The results of moving target tracking by ISTC.
Table 1.
Initial conditions of the simulation.
| Parameters | Parameter values |
|---|---|
| Inertia matrix | diag(264, 264, 28) kg m2 |
| Initial attitude angle | [0°,0°,0°] |
| Initial attitude angle angular velocity | [0,0,0] rad/s |
| Pixel initial position | [1100,800] |
| Image size | 4000 pixels × 4000 pixels |
| Disturbing torques | 10−6 N·m |
| Image processing error | 5 pixels |
| Pixel size | [8.3, 8.3] mm |
| Camera focal length | 4.2 mm |
According to the above simulation parameters, we assume that the target is located at (0, 510) in the pixel coordinate system at the initial time. The traditional sliding mode controller and the proposed controller are then applied to obtain the variation curves of the output torque, attitude angle, and angular velocity.
In Figure 8, Tx and Ty, which represent the output torques in the x and y directions, converge after about 40 s with the sliding mode controller, whereas in Figure 9 they converge after about 25 s with the HTFSMC. Compared with the sliding mode controller, the proposed controller thus improves the convergence speed. Besides, Tz, the output torque in the z direction, converges faster, after about 5 s, for both controllers.
Figure 8.

Variation curve of output torque by the traditional sliding mode controller.
Figure 9.

Variation curve of output torque by the HTFSMC.
In Figures 10 and 11, for the traditional sliding mode controller and the HTFSMC, θ converges after about 30 s and 20 s, respectively, and φ converges after about 40 s and 20 s, respectively. Ψ converges after about 20 s.
Figure 10.

Variation curve of attitude angle by the traditional sliding mode controller.
Figure 11.

Variation curve of attitude angle by the HTFSMC.
In Figures 12 and 13, for the traditional sliding mode controller and the HTFSMC, ω1 converges after about 30 s and 23 s, respectively; ω2 after about 40 s and 27 s; and ω3 after about 30 s and 23 s.
Figure 12.

Variation curve of angular velocity by the traditional sliding mode controller.
Figure 13.

Variation curve of angular velocity by the HTFSMC.
In Figure 14, the image information of the space target video is used as the input of the control loop, and the image change caused by attitude adjustment is simulated. The satellite visual field, shown in black, is scaled to 2000 pixels × 2000 pixels, while the actual image size is set to 4000 pixels × 4000 pixels; the satellite visual field is embedded in the actual image. The red ∗ denotes the target and the green box denotes the visual field center. It can be seen that attitude control based on image information feedback is achieved by the proposed controller, and the target image can be kept in the center of the image plane, realizing effective gaze tracking control of the space target. Accordingly, Figure 15 shows the trajectory of the target in the image plane under the fuzzy sliding mode control. The position of the visual field center is (2000, 2000). At first, the target is not at the visual field center; under the feedback control of image information, the target image is driven to the center. Figure 16 shows that the optical axis pointing error converges to 0 after about 60 s with the proposed controller.
Figure 14.

Simulation results of moving target gaze tracking based on the HTFSMC.
Figure 15.

Trajectory of the target in image plane based on the HTFSMC.
Figure 16.

Optical axis pointing error based on the HTFSMC.
Figure 17 shows the simulation results of moving target gaze tracking in the Jilin-1 video, in which the image changes caused by attitude adjustment are simulated and the moving target is an airplane. The moving airplane is not at the visual field center at the initial moment; under the proposed image-information-based controller, the airplane is imaged at the visual field center. Accordingly, Figure 18 shows the trajectory of the target in the image plane under the fuzzy sliding mode control. Moreover, Figure 19 shows that the optical axis pointing error converges to 0 quickly with the proposed controller. This means that the gaze tracking of a moving space target is effectively simulated.
Figure 17.

Simulation results of moving target gaze tracking based on the HTFSMC.
Figure 18.

Trajectory of the target in image plane based on the HTFSMC: (a) trajectory of the target in the entire movement process; (b) local enlarged trajectory image of the target.
Figure 19.

Optical axis pointing error based on the HTFSMC.
4. Conclusions
The staring imaging attitude tracking and control for satellite videos based on image information is studied in this paper. An ISTC algorithm is designed to obtain the image information. Based on this, an HTFSMC law is introduced to achieve attitude tracking and control, with the hyperbolic tangent function and a fuzzy logic system incorporated into the sliding mode controller.
In the experiments, the image information of the space target video sequence captured by Jilin-1 in orbit is used as the input of the control loop, and the control part is realized through simulation. Compared with the traditional sliding mode controller, the proposed controller achieves the image change caused by attitude adjustment successfully and quickly, and the target image can be kept in the center of the image plane, realizing effective gaze tracking control of the space target. In future work, space target video sequences will be used as the input of the control loop directly.
Acknowledgments
This work was supported in part by the NSFC (62133001 and 61520106010) and the National Basic Research Program of China 973 Program (2012CB821200 and 2012CB821201).
Data Availability
The data used to support the findings of the study are available from the corresponding author upon request.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this paper.
References
- 1.Lei L., Guo D. Multitarget detection and tracking method in remote sensing satellite video. Computational Intelligence and Neuroscience . 2021;2021:7. doi: 10.1155/2021/7381909. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Larsen S. Ø., Salberg A. B., Eikvil L. Automatic system for operational traffic monitoring using very-high-resolution satellite imagery. International Journal of Remote Sensing . 2013;34:4850–4870. doi: 10.1080/01431161.2013.782708. [DOI] [Google Scholar]
- 3.Ao W., Fu Y., Hou X., Xu F. Needles in a haystack: tracking City-scale moving vehicles from continuously moving satellite. IEEE Transactions on Image Processing . 2020;29:1944–1957. doi: 10.1109/tip.2019.2944097. [DOI] [PubMed] [Google Scholar]
- 4.Kopsiaftis G., Karantzalos K. Vehicle detection and traffic density monitoring from very high resolution satellite video data. Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS); July 2015; Milan, Italy. IEEE; pp. 1881–1884. [Google Scholar]
- 5.Mo L., Guo S. Consensus of linear multi-agent systems with persistent disturbances via distributed output feedback. Journal of Systems Science and Complexity . 2019;32(3):835–845. doi: 10.1007/s11424-018-7265-y. [DOI] [Google Scholar]
- 6.Guo Y., Yang D., Chen Z. Object tracking on satellite videos: a Correlation Filter-based tracking method with trajectory Correction by Kalman Filter. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing . 2019;12(9):3538–3551. doi: 10.1109/jstars.2019.2933488. [DOI] [Google Scholar]
- 7.Pei W., Lu X. Moving object tracking in satellite videos by Kernelized Correlation Filter based on Color-Name features and Kalman Prediction. Wireless Communications and Mobile Computing . 2022;2022:16. doi: 10.1155/2022/9735887. [DOI] [Google Scholar]
- 8.Shi Z., Yu X., Jiang Z., Li B. Ship detection in high-resolution optical imagery based on Anomaly Detector and local shape feature. IEEE Transactions on Geoscience and Remote Sensing . 2014;52(8):4511–4523. doi: 10.1109/tgrs.2013.2282355. [DOI] [Google Scholar]
- 9.Liu Z., Chen W. Space applications of staring imaging technology with area FPA. Infrared and Laser Engineering . 2006;35:541–545. [Google Scholar]
- 10.Wang W. L., Li Q. Q., Tang L. L. Algorithm of vehicle detection in low Altitude Aerial video. Journal of Wuhan University of Technology . 2010;32:155–158. [Google Scholar]
- 11.Luo Y. L., Liang Y. P., Wang Y. Traffic flow parameter estimation from satellite video data based on optical flow. Computer Engineering & Applications . 2018;54:204–207. [Google Scholar]
- 12.Gueguen L., Hamid R. Large-scale damage detection using satellite imagery. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; June 2015; Boston, MA, USA. IEEE; pp. 1321–1328. [Google Scholar]
- 13.Hu L. H. Evaluation research on the Application of GF-1 satellite for monitoring major engineering Land. Journal of North China Institute of Science and Technology . 2015;12:110–115. [Google Scholar]
- 14.Zhao X. F., Yu Y., Mao Y. D., Tang Z. H. Long-term photometric signature study of two GEO satellites. Advances in Space Research . 2021;67(8):2241–2251. doi: 10.1016/j.asr.2021.01.051. [DOI] [Google Scholar]
- 15.Crisp N. H., Roberts P., Romano F., et al. System modelling of very low earth orbit satellites for earth observation. Acta Astronautica . 2021;187:475–491. doi: 10.1016/j.actaastro.2021.07.004. [DOI] [Google Scholar]
- 16.Karim R., Malcovati P. On-chip-antennas: Next Milestone in the Big World of small satellites—a survey of Potentials, Challenges, and future directions. IEEE Aerospace and Electronic Systems Magazine . 2021;36(1):46–60. doi: 10.1109/maes.2020.3016751. [DOI] [Google Scholar]
- 17.Toth C., Jóźków G. Remote sensing platforms and sensors: a survey. ISPRS Journal of Photogrammetry and Remote Sensing . 2016;115:22–36. doi: 10.1016/j.isprsjprs.2015.10.004. [DOI] [Google Scholar]
- 18.Li H., Zhao Y., Li B., Li G. Attitude control of staring-imaging satellite using Permanent Magnet momentum Exchange Sphere. Proceedings of the 2019 22nd International Conference on Electrical Machines and Systems; August 2019; Harbin, China. ICEMS; pp. 1–6. [Google Scholar]
- 19.Lu K., Xia Y. Adaptive attitude tracking control for rigid spacecraft with finite-time convergence. Automatica . 2013;49:3591–3599. doi: 10.1016/j.automatica.2013.09.001. [DOI] [Google Scholar]
- 20.Tiwari P. M., Janardhanan S., Nabi M. U. Rigid spacecraft attitude control using Adaptive Non-singular Fast Terminal sliding mode. Journal of Control, Automation and Electrical Systems . 2015;26(2):115–124. doi: 10.1007/s40313-014-0164-0. [DOI] [Google Scholar]
- 21.Zou A., Kumar K. D., Hou Z., Liu X. Finite-time attitude tracking control for spacecraft using Terminal sliding mode and Chebyshev Neural network. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) . 2011;41(4):950–963. doi: 10.1109/tsmcb.2010.2101592. [DOI] [PubMed] [Google Scholar]
- 22.Jiang L., Yang X. Study on enlarging the Searching Scope of staring area and tracking imaging of dynamic targets by optical satellites. IEEE Sensors Journal . 2021;21(4):5349–5358. doi: 10.1109/jsen.2020.3031626. [DOI] [Google Scholar]
- 23.Huo B., Xia Y., Lu K., Fu M. Adaptive fuzzy finite-time fault-tolerant attitude control of rigid spacecraft. Journal of the Franklin Institute . 2015;352(10):4225–4246. doi: 10.1016/j.jfranklin.2015.05.042. [DOI] [Google Scholar]
- 24.Sanyal A. K., Lee-Ho Z. Attitude tracking control of a small satellite in low earth orbit. Family Business Review . 2013;19(4):289–300. [Google Scholar]
- 25.Liang H. Z., Wang J. Y., Sun Z. W. New simple robust attitude controller of staring-imaging satellite in LEO. Journal of Harbin Institute of Technology . 2011;43:26–30. [Google Scholar]
- 26.Feng Y., Liu K., Zhang W. Simulation of staring imaging attitude tracking finite time control of TV satellite. Journal of System Simulation . 2016;28(1):226–234. [Google Scholar]
- 27.Lian Y., Gao Y., Zeng G. Staring imaging attitude control of small satellites. Journal of Guidance, Control, and Dynamics . 2017;40(5):1278–1285. doi: 10.2514/1.g002197. [DOI] [Google Scholar]
- 28.Liang H., Sun Z., Wu X. Attitude fuzzy logic controller design of A staring-imaging satellite in LEO. Proceedings of the 2009 International Conference on Mechatronics and Automation; September 2009; Changchun, China. pp. 762–766. [Google Scholar]
- 29.Chen X., Steyn W., Hashida Y. Ground-target tracking control of earth-pointing satellites. Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit; August 2000; Dever,CO,U.S.A. AIAA; pp. 1–11. [Google Scholar]
- 30.Chen X. Q., Ma Y. H., Geng Y. H., Wang F., Dong Y. Staring imaging attitude tracking control of agile small satellite. Proceedings of the 2011 6th IEEE Conference on Industrial Electronics and Applications; August 2011; Beijing, China. IEEE; pp. 143–148. [Google Scholar]
- 31.Li P., Dong Y., Li H. Staring imaging real-time Optimal control based on Neural network. International Journal of Aerospace Engineering . 2020;2020:14. doi: 10.1155/2020/8822223. [DOI] [Google Scholar]
- 32.Li Y., Liang H. Robust finite-time control algorithm based on dynamic sliding mode for satellite attitude maneuver. Mathematics . 2021;10(1):p. 111. doi: 10.3390/math10010111. [DOI] [Google Scholar]
- 33.Mo L., Yu Y., Zhao L., Cao X. Distributed continuous-time optimization of second-order multiagent systems with nonconvex input constraints. IEEE Transactions on Systems, Man, and Cybernetics: Systems . 2021;51(10):6404–6413. doi: 10.1109/tsmc.2019.2961421. [DOI] [Google Scholar]
- 34.Mo L., Guo S., Yu Y. Mean-square consensus of heterogeneous multi-agent systems with nonconvex constraints Markovian switching topologies and delays. Neurocomputing . 2018;291:167–174. doi: 10.1016/j.neucom.2018.02.075. [DOI] [Google Scholar]
- 35.Hutchinson S., Hager G. D., Corke P. I. A tutorial on visual servo control. IEEE Transactions on Robotics and Automation . 1996;12(5):651–670. doi: 10.1109/70.538972. [DOI] [Google Scholar]
- 36.Kosmopoulos D. I. Robust Jacobian matrix estimation for image-based visual servoing. Robotics and Computer-Integrated Manufacturing . 2011;27(1):82–87. doi: 10.1016/j.rcim.2010.06.013. [DOI] [Google Scholar]
- 37.Wang Y., Zhang G. L., Lang H., Zuo B., De Silva C. W. A modified image-based visual servo controller with hybrid camera configuration for robust robotic grasping. Robotics and Autonomous Systems . 2014;62(10):1398–1407. doi: 10.1016/j.robot.2014.06.003. [DOI] [Google Scholar]
- 38.Serra P., Cunha R., Silvestre C., Hamel T. Visual servo aircraft control for tracking parallel curves. Proceedings of the IEEE Conference on Decision & Control; December 2012; Maui, Hawaii, USA. IEEE; pp. 1148–1153. [Google Scholar]