

United States Patent No.

7765062

Inventor(s)

Ariyur et al.

Date of Patent

July 27, 2010


Method and system for autonomous tracking of a mobile target by an unmanned aerial vehicle



ABSTRACT

A method and system for autonomous tracking of a mobile target such as a ground vehicle by an unmanned aerial vehicle are provided. The method and system utilize an approach that tracks a mobile ground target by using a ground vehicle model with an unmanned aerial vehicle model, with velocity and acceleration constraints. These real-world constraints ensure that the method is applicable to a general class of unmanned aerial vehicles and ground targets. One or more sensors are employed on the unmanned aerial vehicle, with the sensors having at least one field-of-view sensing cone over the ground. A position and path of the mobile target are monitored through input from the sensors on the unmanned aerial vehicle. The method and system detect and estimate the position and path of the mobile target when the target is inside the field-of-view sensing cone.


Inventors:

Kartik B Ariyur (Minnetonka, MN), Kingsley O. C. Fregene (Andover, MN)

Assignee:

Honeywell International Inc. (Morristown, NJ)

Family ID:

38620522

Appl. No.:

11/380,141

Filed:

April 25, 2006

Prior Publication Data

Document Identifier        Publication Date
US 20070250260 A1          Oct 25, 2007

U.S. Patent Classification:

701/519; 342/357.54

Cooperative Patent Classification (CPC):

G01S 3/7864 (20130101); G01S 13/723 (20130101); G05D 1/0094 (20130101)

International Patent Classification (IPC):

G01C 21/00 (20060101)

Field of Search:

701/200, 207, 213-215; 342/357.07; 340/988

References Cited


U.S. Patent Documents

5438517         August 1995      Sennott et al.
5457634         October 1995     Chavravarty
5555503         September 1996   Kyrtsos et al.
5559695         September 1996   Daily
5610815         March 1997       Gudat et al.
5615116         March 1997       Gudat et al.
5629855         May 1997         Kyrtsos et al.
5640323         June 1997        Kleimenhagen et al.
5684696         November 1997    Rao et al.
5740047         April 1998       Pilley et al.
5956250         September 1999   Gudat et al.
2002/0176605    November 2002    Stafsudd et al.
2003/0203717    October 2003     Chuprun et al.
2003/0212478    November 2003    Rios
2004/0141770    July 2004        Maeda et al.
2004/0252864    December 2004    Chang et al.
2005/0004723    January 2005     Duggan et al.
2005/0004759    January 2005     Siegel
2005/0040985    February 2005    Hudson et al.
2005/0114023    May 2005         Williamson et al.
2005/0150997    July 2005        Sjanic
2005/0197749    September 2005   Nichols et al.

Foreign Patent Documents

0236587       Sep 1987    EP
1560096       Aug 2005    EP
2005123502    Dec 2005    WO

Other References


Ogren, Petter et al., "A Control Lyapunov Function Approach to Multiagent Coordination", IEEE Transactions on Robotics and Automation, Oct. 2002, pp. 847-851, vol. 18, no. 5, Publisher: IEEE.

Primary Examiner: Beaulieu; Yonel
Attorney, Agent or Firm: Fogg & Powers LLC

Government Interests





GOVERNMENT LICENSE RIGHTS



The U.S. Government may have certain rights in the present invention as provided for by the terms of Contract No. F33615-01-C-1848 with AFRL/Wright Research Site.

CLAIMS



What is claimed is:

1. A method for autonomous tracking of a mobile ground target by an unmanned aerial vehicle, comprising: tracking the mobile ground target by using a ground vehicle model with an unmanned aerial vehicle model, with velocity and acceleration constraints; and monitoring a position and path of the mobile ground target through input from one or more sensors on the unmanned aerial vehicle, the sensors having at least one field-of-view sensing cone over the ground.

2. The method of claim 1, wherein the ground vehicle model comprises a two dimensional double integrator point mass model.

3. The method of claim 1, wherein the unmanned aerial vehicle model comprises a three dimensional double integrator point mass model.

4. The method of claim 1, wherein acceleration of the mobile ground target is estimated from a finite impulse response filter.

5. The method of claim 1, wherein monitoring the position and path of the mobile ground target depends upon the mobile target being inside of the field-of-view sensing cone.

6. The method of claim 1, wherein the one or more sensors comprise at least one of a visual sensor, a radar sensor, an acoustic sensor, or a laser radar sensor.

7. The method of claim 1, wherein the sensors have a plurality of field-of-view sensing cones over the ground.

8. The method of claim 1, wherein the mobile ground target comprises a motor vehicle.

9. The method of claim 1, wherein the unmanned aerial vehicle comprises a hover-capable aerial vehicle.

10. The method of claim 1, wherein the unmanned aerial vehicle comprises a fixed-wing aerial vehicle.

11. A system for autonomous tracking of a mobile ground target by an unmanned aerial vehicle, comprising: a computer configured to track the mobile ground target by using a ground vehicle model with an unmanned aerial vehicle model, with velocity and acceleration constraints; and one or more sensors on the unmanned aerial vehicle that have at least one field-of-view sensing cone over the ground; wherein detection of a position and path of the mobile ground target depends upon the mobile target being inside of the field-of-view sensing cone.

12. The system of claim 11, wherein the ground vehicle model comprises a two dimensional double integrator point mass model.

13. The system of claim 11, wherein the unmanned aerial vehicle model comprises a three dimensional double integrator point mass model.

14. The system of claim 11, wherein acceleration of the mobile ground target is estimated from a finite impulse response filter.

15. The system of claim 11, wherein the one or more sensors comprise at least one of a visual sensor, a radar sensor, an acoustic sensor, or a laser radar sensor.

16. The system of claim 11, wherein the unmanned aerial vehicle comprises a hover-capable aerial vehicle.

17. The system of claim 11, wherein the unmanned aerial vehicle comprises a fixed-wing aerial vehicle.

18. A computer program product, comprising: a computer readable medium having instructions operable to be executed to implement a method for autonomous tracking of a mobile ground target by an unmanned aerial vehicle, the method comprising: tracking the mobile ground target by using a ground vehicle model with an unmanned aerial vehicle model, with velocity and acceleration constraints; and monitoring a position and path of the mobile ground target through input from one or more sensors on the unmanned aerial vehicle.

19. The computer program product of claim 18, wherein the ground vehicle model comprises a two dimensional double integrator point mass model.

20. The computer program product of claim 18, wherein the unmanned aerial vehicle model comprises a three dimensional double integrator point mass model.


DESCRIPTION



BACKGROUND TECHNOLOGY

Unmanned aerial vehicles (UAVs) are remotely piloted or self-piloted aircraft that can carry cameras, sensors, communications equipment, or other payloads. They have been used in a reconnaissance and intelligence-gathering role for many years. More recently, UAVs have been developed for the purpose of surveillance and target tracking.

Autonomous surveillance and target tracking performed by UAVs in either military or civilian environments is becoming an important aspect of intelligence-gathering. However, tracking a moving target on the ground, such as a ground vehicle in motion on a road, with an unmanned aerial vehicle (UAV) presents various difficulties that need to be addressed in order to have an effective autonomous surveillance and target tracking system. For example, if the unmanned aerial vehicle has a minimum speed limit, as any fixed-wing UAV does, the ground vehicle can easily give the tracking UAV the slip. Another difficulty that needs to be addressed in a system for autonomous tracking of a moving target is the delay and noise inherent in visual recognition.


BRIEF DESCRIPTION OF THE DRAWINGS



Features of the present invention will become apparent to those skilled in the art from the following description with reference to the drawings. Understanding that the drawings depict only typical embodiments of the invention and are not therefore to be considered limiting in scope, the invention will be described with additional specificity and detail through the use of the accompanying drawings, in which:

FIG. 1 is a schematic diagram depicting a system for aerial tracking of a ground vehicle according to one embodiment of the invention;

FIG. 2 is a schematic overhead view depicting the path of a ground vehicle and the chase path covered by a hover-capable unmanned aerial vehicle in an urban setting;

FIG. 3 is a graph of the vertical motion above ground level (AGL) of the hover-capable unmanned aerial vehicle of FIG. 2; and

FIG. 4 is a schematic overhead view depicting the path of a ground vehicle and the chase path covered by a fixed-wing unmanned aerial vehicle in an urban setting.


DETAILED DESCRIPTION



In the following detailed description, embodiments are described in sufficient detail to enable those skilled in the art to practice the invention. It is to be understood that other embodiments may be utilized without departing from the scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense.

The present invention relates to a method and system for autonomous tracking of a mobile target, such as a ground motor vehicle, by an unmanned aerial vehicle (UAV). The method and system utilize an approach that tracks a mobile ground target by using a ground vehicle model with an unmanned aerial vehicle model, with velocity and acceleration constraints. These real-world constraints ensure that the method is applicable to a general class of unmanned aerial vehicles and ground targets.

In one approach of the present invention, the tracking of a mobile target is provided by using a ground vehicle model, comprising a two dimensional double integrator point mass model, with an unmanned aerial vehicle model comprising a three dimensional double integrator point mass model, with velocity and acceleration constraints. These constraints capture the capabilities of the real vehicle, thereby ensuring that the method of the invention is applicable to any other vehicle model used. The point mass models capture typical vehicle motion--indeed, an aircraft with closed loop attitude control and position and velocity tracking control loops behaves like a three dimensional double integrator with position and velocity tracking time constants. A sensor model applicable to a wide range of sensors or sensor systems (giving target position and velocity through different means such as vision, radar, or acoustics) can also be used.
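As an illustration of how velocity and acceleration constraints of this kind might be imposed on a double integrator point mass model, the following Python sketch clamps the commanded acceleration and the resulting speed before integrating one step. The function names and the simple norm-clamping scheme are assumptions for illustration, not the patent's implementation.

    import numpy as np

    def clamp_norm(vec, limit):
        """Scale a vector so its Euclidean norm does not exceed limit."""
        vec = np.asarray(vec, dtype=float)
        norm = np.linalg.norm(vec)
        return vec if norm <= limit else vec * (limit / norm)

    def constrained_step(x, v, a_cmd, T, v_max, a_max, v_min=0.0):
        """One step of a double integrator point mass with acceleration and
        speed limits; v_min > 0 mimics a fixed-wing minimum-speed constraint."""
        a = clamp_norm(a_cmd, a_max)                      # acceleration constraint
        v_next = clamp_norm(np.asarray(v, float) + T * a, v_max)
        speed = np.linalg.norm(v_next)
        if v_min > 0.0 and speed < v_min:                 # keep speed above the floor
            v_next = v_next * (v_min / max(speed, 1e-9))
        x_next = np.asarray(x, dtype=float) + T * v_next
        return x_next, v_next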

It should be understood that the double integrator point mass models described hereafter are merely a simplification of complex dynamic models for ground vehicles and unmanned aerial vehicles. Other model systems may also be employed to implement the present invention.

The present invention can be implemented by utilizing a computer hardware and/or software system, which provides a means for tracking a mobile ground target by using a ground vehicle model with an unmanned aerial vehicle model, with velocity and acceleration constraints. A position and path of the mobile ground target are monitored through input from one or more sensors on the UAV, with the sensors having at least one field-of-view (FOV) sensing cone over the ground. For example, several sensors can be employed by the UAV, giving several FOV cones or a much larger FOV cone. The system and method detect and estimate the position and path of the mobile target when the target is inside the field-of-view sensing cone.

A wide variety of sensors can be used in the UAV, such as visual, radar, acoustic, or laser radar (ladar) sensors. For example, a tracking camera can be used in the UAV. The method and system of the invention also provide for maintaining stable tracking even with extremely noisy tracking sensors. The present invention is described in further detail hereafter.

Sensor Model

A camera sensor is modeled as being able to maintain target detection within a right circular cone vertically beneath the UAV, with the cone angle θ being equal to the field-of-view (FOV) angle α of the camera. Such an arrangement is illustrated in FIG. 1, which is a schematic diagram depicting a system 100 for aerial tracking of a ground vehicle 110 by a UAV 112 having at least one sensor 114. The UAV 112 can either be a hover-capable aerial vehicle or a fixed-wing aerial vehicle. An FOV cone 118 projected by sensor 114 has an FOV circle 120 on the ground. The FOV circle 120 has a radius of

$$z \tan\!\left(\frac{\alpha}{2}\right)$$

where z is the altitude of UAV 112.
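As a concrete illustration of this sensor model, the short Python sketch below computes the FOV-circle radius and tests whether a target's planar position lies inside it. The function names are illustrative and not taken from the patent.

    import numpy as np

    def fov_radius(z, alpha):
        """Radius of the field-of-view circle on the ground for a downward
        sensing cone of full angle alpha (radians) at altitude z."""
        return z * np.tan(alpha / 2.0)

    def target_in_view(uav_xy, uav_z, target_xy, alpha):
        """True when the target lies inside the FOV circle beneath the UAV."""
        offset = np.asarray(target_xy, dtype=float) - np.asarray(uav_xy, dtype=float)
        return np.linalg.norm(offset) <= fov_radius(uav_z, alpha)

    # Example: a 120 degree field of view at 40 m AGL gives a radius of about 69 m.
    print(fov_radius(40.0, np.radians(120.0)))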

Tracking control laws are described hereafter that are exponentially stable and can maintain a stable tracking system even with extremely noisy tracking sensors. The tracking system abstracts essential features of the tracking problem without the distractions of detailed UAV dynamics and various constraints. Furthermore, the present system eases tracking design for UAVs whose attitude stabilization control laws (commonly known as the inner loop) are already implemented, and are therefore a given.

Chaser and Prey Models

Purely discretized models are used in the method of the invention, as the handling of delays is natural in this setting. However, analogous methods can be developed for the continuous-time setting, which is more advantageous if sensor noise characteristics are well known. In that case, a Kalman filter and Kalman predictor could be used to estimate prey vehicle motion (position, velocity, and acceleration). The sampling time is denoted by T; x_p, v_p denote the planar position (x_p, y_p) and velocity vectors of the prey (i.e., a mobile target such as a ground vehicle), and x_c, v_c denote the three dimensional position (x_c, y_c, z_c) and velocity vectors of the chaser (i.e., a UAV). The prey model is simply a double integrator with an unknown acceleration input a_p:

$$x_p(k+1) = x_p(k) + T\,v_p(k)$$
$$v_p(k+1) = v_p(k) + T\,a_p(k)$$

where k = 1, 2, 3, . . . is the sampling instant. The chaser model incorporates information about the position tracking and velocity tracking time constants (τ_x and τ_v) of the inner-loop controller on board the UAV:

$$x_c(k+1) = x_c(k) + T\,v_c(k)$$
$$v_c(k+1) = \Bigl(1 - \frac{T}{\tau_v}\Bigr)v_c(k) + \frac{T}{\tau_x \tau_v}\bigl(x_c^{ref}(k) - x_c(k)\bigr)$$

where x_c^ref is the current desired location of the chase vehicle to maintain tracking of the target vehicle. The next equations give the planar position error between the chaser and the prey. The planar components of the chaser position and velocity are denoted respectively by x_c^pl and v_c^pl:

$$\delta x \equiv x_c^{pl} - x_p, \qquad \delta v \equiv v_c^{pl} - v_p$$
$$\delta x(k+1) = \delta x(k) + T\,\delta v(k)$$
$$\delta v(k+1) = \Bigl(1 - \frac{T}{\tau_v}\Bigr)\delta v(k) - \frac{T}{\tau_x \tau_v}\,\delta x(k) + \frac{T}{\tau_x \tau_v}\Bigl(x_c^{ref,pl}(k) - x_p(k) - \tau_x v_p(k) - \tau_x \tau_v a_p(k)\Bigr)$$

where x_c^{ref,pl} is the planar part of the chaser position set point.

Tracking Control Law

If the tracking set point is set to cancel the terms arising from prey vehicle position, velocity, and acceleration in the error equation above, there will be exponential tracking of the prey. The control law in this case would be:

$$x_c^{ref,pl}(k) = x_p(k) + \tau_x v_p(k) + \tau_x \tau_v a_p(k)$$

However, it is necessary to work from delayed and noisy measurements of the prey position and velocity. To this end, estimates are made of the current prey position, velocity, and acceleration from the measurements. It is assumed that the delay (nT) is an integral multiple of the sampling time T, which is realistic since the sampling time is small compared to the delay. The measurements are:

$$x_p^{meas}(k) = x_p(k-n) + v_1$$
$$v_p^{meas}(k) = v_p(k-n) + v_2$$

where v_1 and v_2 represent measurement noise, whose properties under different operating conditions may be available. To estimate the acceleration driving the prey dynamics, a FIR (finite impulse response) filter has been developed. The filter simply takes a weighted average of the m past estimates of acceleration, assuming it to be constant over that time period and giving maximum weight to the most recent estimate:

$$\hat{a}_p(k) = \sum_{i=1}^{m} c_i\,\frac{v_p^{meas}(k-i+1) - v_p^{meas}(k-i)}{T}, \qquad \sum_{i=1}^{m} c_i = 1$$

While the number of past points used and filter coefficients used can be chosen to optimize some objective function, the following were chosen: m = 5 and c_1 = 17/32, c_2 = 1/4, c_3 = 1/8, c_4 = 1/16, c_5 = 1/32. Using the estimate of the acceleration, prediction of the current state of the prey (position and velocity) can be performed using the prey double integrator model:

$$\begin{bmatrix} \hat{x}_p(k) \\ \hat{v}_p(k) \end{bmatrix} = \begin{bmatrix} I_2 & nT\,I_2 \\ 0 & I_2 \end{bmatrix} \begin{bmatrix} x_p^{meas}(k) \\ v_p^{meas}(k) \end{bmatrix} + \begin{bmatrix} \tfrac{n(n-1)}{2}\,T^2\,I_2 \\ nT\,I_2 \end{bmatrix} \hat{a}_p(k)$$

where n is the measurement delay in samples and I_2 is the 2×2 identity matrix. It should be noted that the FIR filter is used to account for general or unknown noise characteristics. If noise characteristics are known, optimal filters, such as discrete FIR, discrete Kalman filters, or continuous Kalman filters and predictors can be used.
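To make the estimation and control steps above concrete, the following Python sketch strings them together: the FIR weighted average, the n-step constant-acceleration prediction, and the tracking set point. It is an illustrative reading of the equations as reconstructed above; the function names, and the use of backward differences of measured velocity as the raw acceleration estimates, are assumptions rather than details stated in the patent.

    import numpy as np

    # FIR weights c_1..c_5 from the text above; they sum to 1.
    C = np.array([17/32, 1/4, 1/8, 1/16, 1/32])

    def estimate_acceleration(v_meas_hist, T, c=C):
        """Weighted average of the m most recent acceleration estimates, each
        obtained here by differencing consecutive velocity measurements
        (v_meas_hist has m+1 rows, newest measurement first)."""
        v = np.asarray(v_meas_hist, dtype=float)
        diffs = (v[:-1] - v[1:]) / T              # m backward differences
        return np.sum(c[:, None] * diffs, axis=0)

    def predict_prey(x_meas, v_meas, a_hat, T, n):
        """Propagate the n-step-delayed measurement forward through the
        discrete double integrator, holding the estimated acceleration constant."""
        x_hat = x_meas + n * T * v_meas + 0.5 * n * (n - 1) * T**2 * a_hat
        v_hat = v_meas + n * T * a_hat
        return x_hat, v_hat

    def tracking_setpoint(x_hat, v_hat, a_hat, tau_x, tau_v):
        """Planar set point that cancels the prey terms in the error dynamics."""
        return x_hat + tau_x * v_hat + tau_x * tau_v * a_hat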

Finally, the vertical coordinate of the unmanned aerial vehicle is updated with the following gradient descent type law that minimizes the cost function:

$$J(z_c) = \frac{1}{2}\left(\bigl\lVert \delta x^{pl} \bigr\rVert^2 - \frac{z_c^2 \tan^2(\alpha/2)}{2}\right)^{2}$$

with respect to z_c, giving

$$\dot{z}_c = -\gamma\,\frac{\partial J}{\partial z_c} = \gamma\, z_c \tan^2\!\left(\frac{\alpha}{2}\right)\left(\bigl\lVert \delta x^{pl} \bigr\rVert^2 - \frac{z_c^2 \tan^2(\alpha/2)}{2}\right)$$

where γ is the gain of the gradient scheme (in units of distance²/time). The above cost function is motivated by the idea of maintaining the position of the unmanned aerial vehicle, and therefore its tracking camera, within a square inscribed inside the field-of-view circle on the ground.
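A discretized version of this altitude law, using the cost function in the form reconstructed above (the exact cost is an assumption; only the gradient-descent structure is stated explicitly), might look as follows in Python:

    import numpy as np

    def altitude_step(z_c, delta_pl, alpha, gamma, T):
        """One gradient-descent step on the altitude coordinate: climb when the
        planar error approaches the FOV boundary, descend when well inside it."""
        t2 = np.tan(alpha / 2.0) ** 2
        err = float(np.dot(delta_pl, delta_pl)) - 0.5 * z_c**2 * t2
        z_dot = gamma * z_c * t2 * err            # = -gamma * dJ/dz_c
        return z_c + T * z_dot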

The following examples are given to illustrate the present invention, and are not intended to limit the scope of the invention.


EXAMPLES



Simulations were performed with the above control law on a representation of a typical military operations urban terrain (MOUT) with both a hover-capable chaser UAV and a fixed-wing chaser UAV. The fixed-wing chaser cannot go slower than a minimum velocity. The UAV capabilities were as follows: maximum speeds of 25 m/s (hover-capable) and 40 m/s (fixed-wing), maximum acceleration of 10 m/s², τ_x = 0.25 s, τ_v = 0.5 s, the minimum speed for the fixed-wing vehicle was 25 m/s, and the vehicle flew between 25 m and 50 m above ground level (AGL). The field-of-view of the camera was taken as α = 120°.
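For convenience, the simulation parameters quoted above can be collected in one place; the following is a plain restatement in Python, not the patent's simulation code.

    # Simulation parameters from the example above.
    SIM_PARAMS = {
        "v_max_hover": 25.0,         # m/s, hover-capable UAV
        "v_max_fixed_wing": 40.0,    # m/s
        "v_min_fixed_wing": 25.0,    # m/s, fixed-wing minimum speed
        "a_max": 10.0,               # m/s^2
        "tau_x": 0.25,               # s, position tracking time constant
        "tau_v": 0.5,                # s, velocity tracking time constant
        "agl_range_m": (25.0, 50.0), # altitude band above ground level
        "alpha_deg": 120.0,          # camera field-of-view angle
    }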

FIGS. 2 and 4 are schematic overhead map views depicting the chase path results for the hover-capable and fixed-wing UAVs, respectively. The instantaneous target vehicle (prey) position in each case is denoted by solid circles at times of 1, 2, 3, 7, 14, 21, and 28 seconds (moving from left to right on the maps in FIGS. 2 and 4). The corresponding UAV (chaser) position is represented by an open circle at those same times. The path of the target vehicle is represented by a continuous solid line and the path of the UAV is represented by a dotted line. The target vehicle is always either accelerating or decelerating at 3 m/s². The target vehicle has a maximum velocity of 25 m/s (almost 60 mph) and it negotiates turns with a velocity of 7.5 m/s. The measurement noise variance for both position and velocity is 5 units (very large compared to actual values).
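The noisy measurements used in such a simulation can be mimicked with a few lines of Python; the Gaussian noise model is an assumption, since the text states only the variance of 5 units.

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_measurement(x_true, v_true, variance=5.0):
        """Corrupt the (delayed) prey position and velocity with noise of the
        stated variance, as in the simulation example above."""
        sigma = np.sqrt(variance)
        x_meas = np.asarray(x_true, dtype=float) + rng.normal(0.0, sigma, size=2)
        v_meas = np.asarray(v_true, dtype=float) + rng.normal(0.0, sigma, size=2)
        return x_meas, v_meas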

While FIG. 2 depicts the chase path covered by the hover-capable UAV, FIG. 3 is a graph of the vertical motion of the hover-capable UAV, showing the variation of the UAV's altitude AGL during the chase. In this case, the UAV chaser is able to maintain its prey in view while following the prey. The altitude remains in a small range, which is desirable since UAV actuation authority should not be wasted in altering altitude except when necessary.

As shown in FIG. 4, the chase path covered by the fixed-wing UAV includes figure-eights and circles around the prey as the prey slows down at the turns. Such a path is required since the fixed-wing UAV cannot fly slower than 25 m/s.

The UAV chasers are locally exponentially stable by design, since the x and y position error equations are exponentially stable, and the gradient descent is also stable. It may be possible to determine a stability region of the tracking in the presence of UAV constraints using Sum of Squares programming.¹ Performance in the presence of noise and occlusions, with no measurement for a few time steps, is also amenable to analysis in the Sum of Squares framework.

¹ See S. Prajna et al., SOSTOOLS: Sum of squares optimization toolbox for MATLAB, available from http://www.cds.caltech.edu/sostools and http://www.mit.edu/~parillo/sostools, 2004.

Instructions for carrying out the various methods, process tasks, calculations, control functions, and the generation of signals and other data used in the operation of the system and method are implemented, in some embodiments, in software programs, firmware, or computer readable instructions. These instructions are typically stored on any appropriate medium used for storage of computer readable instructions, such as floppy disks, conventional hard disks, CD-ROM, flash memory ROM, nonvolatile ROM, RAM, and other like media.

The present invention may be embodied in other specific forms without departing from its essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is therefore indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.

* * * * *