An Optimization Approach to Adaptive Kalman Filtering

M. Karasalo and X. Hu

Abstract

In this paper, an optimization-based adaptive Kalman filtering method is proposed. The method produces an estimate of the process noise covariance matrix Q by solving an optimization problem over a short window of data. The algorithm recovers the observations h(x) from a system ẋ = f(x), y = h(x) + v without a priori knowledge of the system dynamics. Potential applications include target tracking using a network of nonlinear sensors, servoing, mapping, and localization. The algorithm is demonstrated in simulations on a tracking example for a target with coupled and nonlinear kinematics. Simulations indicate superiority over a standard MMAE algorithm for a large class of systems.

Keywords: Adaptive filtering, optimization, tracking

B.1 Introduction

In mobile robotics and sensor networks the need for adaptive and robust filtering methods is increasing with the growing use of large networks of cheap mobile agents with simple sensors. For instance, [18] and [19] discuss the application of adaptive filtering to the localization of mobile robots. Examples in the field of visual servoing include [20], where an adaptive extended Kalman filter is applied to improve pose estimates, and the recent contributions in [21], where a similar approach is used to enhance the robustness of position and orientation estimates of moving 3D objects.

A thorough treatment of the fundamentals of Kalman filtering can for instance be found in the classic series [1-3], where some adaptive methods of filtering are also outlined. The adaptive Kalman filtering schemes most frequently found in the literature are Innovation-based Adaptive Estimation (IAE) and Multiple Model Adaptive Estimation (MMAE). A neat summary of both strategies is found in [4].
IAE methods estimate the covariance matrix of the process noise Q and/or the measurement noise R, exploiting the fact that for the right values of Q and R the innovation sequence of the Kalman filter is white noise. By tuning Q and/or R and studying the resulting innovation sequence, one can get an idea of the appropriate values of the covariance matrices. An early example of IAE is found in [5], while more recent attempts include [15] and [16]. However, convergence to the "right" values of Q and R is not guaranteed with IAE, and most algorithms require estimation over rather large windows of data to achieve reliable covariance measurements, making the method impractical for rapidly changing systems.

MMAE methods handle model uncertainty by implementing a bank of several different models and computing the Bayesian probability of each model being the true system model, given the measurement sequence and under the assumption that one of the models in the bank is the correct one. The state estimate can be either the output of the most probable model or a weighted sum of the outputs of all models. This method is suitable for applications such as fault detection, where one has some a priori information on the system dynamics. For instance, if the dynamics of an engine is well known, each model in the bank can represent the engine dynamics when one or several components fail. If the probability of one of the failure models gets too high, an alarm is raised. Examples of MMAE algorithms are found in, for instance, [17] and [22]. Since MMAE works under the assumption that one of the models in the model bank is the right one, it is an unsuitable method for systems with unknown dynamics. It has been shown in the series of papers [7-13] that using too large a model bank actually decreases performance since, in that case, the true model gets too much competition from false models.
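To make the MMAE weight update described above concrete, the following is a minimal sketch for a bank of scalar random-walk Kalman filters (A = B = C = 1). The function name and the dictionary representation of the bank are our own illustrative choices, not the notation of the cited MMAE papers.

```python
import numpy as np

def mmae_step(y, filters, weights, R):
    """One MMAE step: run each Kalman filter in the bank on measurement y,
    then update the Bayesian model probabilities from the Gaussian
    likelihoods of the innovations. Scalar models, for brevity."""
    likelihoods = np.empty(len(filters))
    for i, f in enumerate(filters):
        P_pred = f['P'] + f['Q']        # time update (A = B = 1)
        S = P_pred + R                  # innovation covariance
        innov = y - f['xhat']           # innovation
        K = P_pred / S                  # Kalman gain
        f['xhat'] += K * innov
        f['P'] = (1.0 - K) * P_pred
        # likelihood of the innovation under model i
        likelihoods[i] = np.exp(-0.5 * innov**2 / S) / np.sqrt(2 * np.pi * S)
    weights = weights * likelihoods
    weights /= weights.sum()            # renormalize model probabilities
    # weighted state estimate over the bank
    xhat = sum(w * f['xhat'] for w, f in zip(weights, filters))
    return xhat, weights
```

Note that a model whose weight decays toward zero effectively leaves the bank, which is exactly the convergence-to-zero problem for model probabilities mentioned later in the paper.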
Therefore, safeguarding with a large number of plausible system models is not an option. Attempts with dynamic model banks have been made in [7-13] and [14], but to the extent of our knowledge all such algorithms assume one large model bank, hopefully including the true model (which may vary over time), and then study different subsets of this large bank. Therefore some way of guessing possible models beforehand is still needed. Convergence results for MMAE methods in general are still lacking.

In this paper we propose an optimization approach to adaptive Kalman filtering, Optimization-based Adaptive Estimation (OAE), that estimates Q componentwise for measurable states at each time step. The method is based on solving an optimization problem to find the Q-value that minimizes both bias and oscillation of the state estimate. The approach is scalable and requires no a priori knowledge of the system dynamics. Q is computed over short windows of data, making the algorithm suitable for rapidly changing systems and online applications. Intended applications include path following, servoing, localization and tracking of a target or signal with unknown dynamics.

The paper is organized as follows. In Section B.2 we introduce notation and review the basics of Kalman filtering. Section B.3 derives the OAE method, which is the main contribution of this paper. In Section B.4 we evaluate OAE in simulation compared with an MMAE approach. Finally, Section B.5 summarizes the paper and provides concluding remarks.

B.2 Preliminaries

Consider a linear discrete-time system

    x(k + 1) = Ax(k) + Bw(k),    (B.1)

where x(k) ∈ Rⁿ, and a measurement y(k) ∈ Rᵐ governed by

    y(k) = Cx(k) + v(k),    (B.2)

with A ∈ Rⁿˣⁿ, B ∈ Rⁿˣˢ and C ∈ Rᵐˣⁿ. The signals w(k) and v(k) are random process noise and measurement noise, respectively, and are assumed to be independent Gaussian noise with covariance matrices

    Q(k) ≜ E(w(k)w(k)ᵀ),  R(k) ≜ E(v(k)v(k)ᵀ).    (B.3)

Here, E(·) is the expectation value of (·), Q(k) ∈ Rˢˣˢ and R(k) ∈ Rᵐˣᵐ. The Kalman filter is an observer that gives an optimal estimate x̂(k) of the state x(k) at time step k, given the estimate x̂(k − 1) and the observation y(k), in a least squares sense. In other words, if we define the estimation error as

    e(k) ≜ x(k) − x̂(k),    (B.4)

then the Kalman filter will give the estimate x̂(k) that minimizes E(e(k)ᵀe(k)). Now let

    P(k) ≜ E(e(k)e(k)ᵀ)    (B.5)

denote the covariance matrix of e(k). Then, given initial estimates x̂₀ and P₀, the discrete Kalman filter is the recursive process

    x̂(k)⁻ = Ax̂(k − 1)
    P(k)⁻ = AP(k − 1)Aᵀ + BQ(k − 1)Bᵀ
    K(k) = P(k)⁻Cᵀ(CP(k)⁻Cᵀ + R(k))⁻¹
    x̂(k) = x̂(k)⁻ + K(k)(y(k) − Cx̂(k)⁻)
    P(k) = (I − K(k)C)P(k)⁻.    (B.6)

R(k) and Q(k) play important roles in the recursions. Convergence to the optimal estimate x̂(k) requires correct values of both matrices. In this paper, we assume that R(k) is known and focus on the estimation of Q(k). Basically, too small values of the components of Q(k) yield a biased estimate, while too large values yield an estimate that oscillates around the true state. In the next section we derive the OAE method, which finds a Q(k) value that locally minimizes both bias and oscillations.

B.3 An Optimization Approach To Finding Q

Consider the system

    ẋ = f(x)
    y = h(x) + v,    (B.7)

where f(x) represents the system dynamics, which are assumed unknown, h(x) = (h₁(x) … h_N(x))ᵀ is the measurement vector and v is a Gaussian measurement noise with unknown covariance. In this section, we derive the OAE approach to Kalman filtering, which recovers h(x) = (h₁(x) … h_N(x))ᵀ, and possibly the derivatives of its components. From now on we use the notation h(x) = (h₁ … h_N)ᵀ for brevity. f(x) and h(x) of (B.7) may be nonlinear and coupled, but we will use n-dimensional linear systems of the type (B.1)-(B.2) as a system model for each component hᵢ, i = 1, …
, N, and adaptively design filters for each of the N linear systems separately. Associated with each such linear system, we define a state vector hᵢ = (hᵢ¹ … hᵢⁿ)ᵀ such that hᵢ¹ ≜ hᵢ. Denoting the output of system i by yᵢ, we get, for i = 1, …, N,

    hᵢ(k + 1) = Aᵢhᵢ(k) + Bᵢwᵢ(k)
    yᵢ(k) = Cᵢhᵢ(k) + vᵢ(k),    (B.8)

where Aᵢ ∈ Rⁿˣⁿ, Bᵢ ∈ Rⁿˣˢ and Cᵢ = (1 0 … 0) ∈ Rᵐˣⁿ, so that Cᵢhᵢ = hᵢ¹ = hᵢ. The unknown properties of the underlying system (B.7), such as coupling and nonlinearity, are modeled as an added scalar Gaussian noise wᵢ(k). As Qᵢ(k) is the covariance of wᵢ(k), each of the Kalman filters has a scalar covariance matrix Qᵢ to be estimated. Viewed over the entire N-dimensional vector h(x), this in a sense amounts to assuming a diagonal covariance matrix, an assumption frequently used in the literature. Modeling the dynamics of each component hᵢ separately allows for scalability without increasing model complexity. Let us illustrate the concept and the notation with an example.

Example B.3.1 Consider tracking a target in 3D, such as an airplane or helicopter whose planned trajectory is unknown but whose position (x₁ x₂ x₃)ᵀ can be observed (N = 3). Applying the OAE approach, we would have a measurable state vector h(x) = (h₁(x) h₂(x) h₃(x))ᵀ = (x₁ x₂ x₃)ᵀ representing the position of the target, and need to model the dynamics of h₁, h₂ and h₃ separately. Introduce state vectors hᵢ and linear systems Aᵢ, Bᵢ, Cᵢ for i = 1, 2, 3. For tracking in 1D a natural model for the target dynamics is a random walk model of order n = 2, where the noise w represents the unknown acceleration of the target. Therefore, we let (B.8) be realized as

    hᵢ(k + 1) = [1 δ; 0 1] hᵢ(k) + [δ²/2; δ] wᵢ(k)
    yᵢ(k) = (1 0) hᵢ(k) + vᵢ(k).    (B.9)

Here δ is the sampling rate, hᵢ¹ corresponds to the position and hᵢ² to the velocity of the target.
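The model (B.9) and the recursion (B.6) can be written out directly. The following is a minimal sketch (function names and test values are our own, not the paper's):

```python
import numpy as np

def random_walk_model(delta):
    """Matrices of the order-2 random walk model (B.9); the scalar noise
    w_i plays the role of an unknown acceleration."""
    A = np.array([[1.0, delta],
                  [0.0, 1.0]])
    B = np.array([[delta**2 / 2.0],
                  [delta]])
    C = np.array([[1.0, 0.0]])     # only the position h_i^1 is observed
    return A, B, C

def kalman_step(xhat, P, y, A, B, C, Q, R):
    """One iteration of the discrete Kalman recursion (B.6)."""
    xhat_pred = A @ xhat                          # time update
    P_pred = A @ P @ A.T + B @ Q @ B.T
    S = C @ P_pred @ C.T + R                      # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)           # Kalman gain
    xhat = xhat_pred + K @ (y - C @ xhat_pred)    # measurement update
    P = (np.eye(len(xhat)) - K @ C) @ P_pred
    return xhat, P
```

Running `kalman_step` in a loop with the matrices from `random_walk_model` gives the per-component filter that OAE tunes by choosing Q.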
OAE computes state estimates ĥᵢ = (ĥᵢ¹ ĥᵢ²)ᵀ for each component hᵢ, and the final output is the complete estimate ĥ = (ĥ₁ᵀ ĥ₂ᵀ ĥ₃ᵀ)ᵀ = (x̂₁ ẋ̂₁ x̂₂ ẋ̂₂ x̂₃ ẋ̂₃)ᵀ of the 3D position and its derivatives.

The idea of OAE is to, at each iteration k, compute an, in some sense, optimal value of Qᵢ(k) for each component model i, and apply separate Kalman filters to each component. To emphasize that we are dealing with scalars, we will from now on use qᵢ to denote the process noise covariance for hᵢ. The rest of this section is devoted to deriving the OAE approach to computing qᵢ(k).

B.3.1 General Idea of OAE

As mentioned in the preliminaries, a too small estimate of the process noise causes bias, while a too large estimate causes oscillation. For very large values of qᵢ(k), ĥᵢ may even have a larger mean amplitude of oscillation around the true value hᵢ than the noisy sample data yᵢ, so that the Kalman filter actually adds disturbances rather than filtering them out. This is illustrated in Figure 1, where two very different ĥ's are obtained for the same trajectory, dynamic model, and data set due to different choices of Q.

Figure 1: Two Kalman estimates (dotted) for the same trajectory and data set. Left: Q = 0, right: Q = 5.

The ideal choice of qᵢ(k) lies somewhere between these extremes. We want to find a qᵢ(k) such that both the bias and the mean amplitude are small. Therefore we define a cost function

    Jᵢ(q) = εaᵢ(q) + (1 − ε)bᵢ(q),    (B.10)

where aᵢ(q) and bᵢ(q) are defined below: aᵢ(q) is a measure of the amplitude of ĥᵢ, and bᵢ(q) is a measure of the bias of ĥᵢ, compared to the true value hᵢ. ε ∈ (0, 1) is a scalar weight. Note that ĥᵢ is a function of q through the Kalman recursions (B.6). As will be seen below, aᵢ(q) and bᵢ(q) are actually functions of ĥᵢ and thereby implicit functions of q. Let

    (qᵢ(k))* = arg min_{q≥0} Jᵢ(q).    (B.11)
The most critical part is to construct aᵢ(q) and bᵢ(q) without prior knowledge of the true vector h(x). Our approach is described in the following.

B.3.2 Measure of Bias and Amplitude

First, recall that the output of system i is modeled as yᵢ(k) = hᵢ¹(k) + v(k) = hᵢ(k) + v(k). Since v(k) is a zero mean Gaussian noise, yᵢ is not biased. Therefore yᵢ(k) and ĥᵢ(k) can be compared to measure the bias of ĥᵢ(k). This is done simply by fitting second order polynomials to yᵢ(κ) and ĥᵢ(κ), where κ ∈ [k ± ∆] and ∆ is a positive integer. We define the vectors f_yᵢ and f_ĥᵢ as the discretizations of these polynomials with sampling points defined by κ. Locally, f_yᵢ can be considered a non-biased estimate of hᵢ, while f_ĥᵢ is a smoothing of ĥᵢ on the interval κ. The bias measure can now be defined as

    bᵢ(q) ≜ ||f_yᵢ − f_ĥᵢ||²,    (B.12)

where ||·|| is the Euclidean norm. Note that, for large values of q, ĥᵢ may oscillate rapidly enough for f_yᵢ − f_ĥᵢ to change sign within the window κ. In such cases, bᵢ(q) is not a measure of bias but a measure of the amplitude of the oscillations. Therefore bᵢ(q) punishes bias for small values of q and enhances the punishment on amplitude for large values of q.

The construction of aᵢ(q) follows along the same lines as that of bᵢ(q). Note that if the amplitude of ĥᵢ is high, the mean distance between ĥᵢ and f_ĥᵢ is large. Therefore, let

    aᵢ(q) ≜ ||f_ĥᵢ − ĥᵢ||².    (B.13)

This choice of bias and amplitude measurements has proved to be robust in simulations and is easy to implement. In the next section it is shown that the resulting expression for Jᵢ(q) is actually a convex quadratic function in ĥᵢ(q). However, from a theoretical viewpoint, other cost functions might be a more reasonable choice. The best choice of measurements aᵢ(q) and bᵢ(q) should be investigated further in future work. If no knowledge of the tracked system is available beforehand, the most natural choice of weight is ε = 0.5.
To give an idea of the effect of the terms in Jᵢ(q), we have constructed estimates ĥ for the same curve and data set with a few different values of the weight ε. The results are shown in Figure 2. One way to construct a strict bias measure is to set bᵢ(q) to zero if f_yᵢ − f_ĥᵢ changes sign on the interval κ. However, the resulting cost function Jᵢ(q) becomes much less appealing, and performance actually deteriorates, as is illustrated in Figure 2.

Figure 2: Tracking with OAE using different values of ε. Top left: ε = 0.5. Top right: ε = 1 yields a smooth but biased estimate. Bottom left: ε = 0 yields an oscillating estimate. Bottom right: only including bᵢ(q) if f_yᵢ − f_ĥᵢ does not change sign gives more oscillations.

B.3.3 Derivation of Jᵢ(q)

First, expressions for f_yᵢ and f_ĥᵢ are given. Let κ = 1, …, 2∆ + 1 and let yᵢ(κ) be measurement data. A second order polynomial has the form

    f_yᵢ(κ) = α₀ + α₁κ + α₂κ²    (B.14)

and the coefficients α₀, α₁ and α₂ are determined in a least squares sense by minimizing

    Σ_{κ=1}^{2∆+1} (yᵢ(κ) − (α₀ + α₁κ + α₂κ²))².    (B.15)

Using matrix notation, (B.15) amounts to solving

    [1 1 1; 1 2 4; …; 1 (2∆+1) (2∆+1)²] (α₀ α₁ α₂)ᵀ = (yᵢ(1) … yᵢ(2∆+1))ᵀ    (B.16)

in the least squares sense. Note that for a general κ = k − ∆, …, k + ∆ the matrix in (B.16) may become ill-conditioned if k is large. However, for the purpose of finding f_yᵢ(κ) we can always translate the data set and use the matrix in (B.16) to avoid singularities. Introducing the notation

    K ≜ [1 1 1; 1 2 4; …; 1 (2∆+1) (2∆+1)²],  yᵢ(κ) ≜ (yᵢ(1) … yᵢ(2∆+1))ᵀ,    (B.17)

and noting that (α₀ α₁ α₂)ᵀ = (KᵀK)⁻¹Kᵀyᵢ(κ), we can write

    f_yᵢ(κ) = K(KᵀK)⁻¹Kᵀyᵢ(κ) ≜ 𝒦yᵢ(κ).    (B.18)
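The smoothing operator 𝒦 of (B.18) and the measures (B.12)-(B.13) are easy to compute directly. A minimal sketch (function names are ours):

```python
import numpy as np

def smoothing_projector(delta):
    """Projection matrix of (B.18): least-squares fit of a second order
    polynomial over a window of 2*delta + 1 samples. Requires delta > 0."""
    kappa = np.arange(1, 2 * delta + 2)      # sample points 1, ..., 2*delta + 1
    K = np.column_stack([np.ones_like(kappa, dtype=float), kappa, kappa**2])
    return K @ np.linalg.inv(K.T @ K) @ K.T  # symmetric and idempotent

def cost(hhat, y, proj, eps=0.5):
    """J_i(q) = eps*a + (1-eps)*b with a, b as in (B.19)-(B.20);
    hhat and y are the windowed estimate and measurements."""
    a = np.sum((proj @ hhat - hhat)**2)      # amplitude measure a_i(q)
    b = np.sum((proj @ (hhat - y))**2)       # bias measure b_i(q)
    return eps * a + (1 - eps) * b
```

Since 𝒦 reproduces any quadratic exactly, `proj @ y` recovers `y` whenever the window data lies on a second order polynomial, which is a useful sanity check.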
Note that KᵀK is singular for ∆ = 0. From now on it is assumed that ∆ > 0, i.e., a minimum of 3 measurements in yᵢ(κ) for each computation of Jᵢ(q). This yields, with I ∈ R^(2∆+1)×(2∆+1) denoting the identity matrix,

    aᵢ(q) = ||(𝒦 − I)ĥᵢ(κ)||²,    (B.19)
    bᵢ(q) = ||𝒦(ĥᵢ(κ) − yᵢ(κ))||²,    (B.20)

where ĥᵢ(κ) ≜ (ĥᵢ(1) … ĥᵢ(2∆ + 1))ᵀ. It is easy to show that 𝒦 ≜ K(KᵀK)⁻¹Kᵀ is symmetric and idempotent. After some straightforward calculations (recalling that yᵢ(κ) is a vector of given measurement data and not a function of q, while ĥᵢ(κ) is a function of q, as it is the first component of the state vector of the Kalman estimate based on q), the final expression for the cost function (B.10) on the interval κ and for component hᵢ is

    Jᵢ,κ(q) = ½ ĥᵢ(κ)ᵀMĥᵢ(κ) − 2(1 − ε)yᵢ(κ)ᵀ𝒦ĥᵢ(κ)    (B.21)

with

    M ≜ 2((1 − 2ε)𝒦 + εI).    (B.22)

For the special case ε = 0.5 we get

    Jᵢ,κ(q) = ½ ĥᵢ(κ)ᵀĥᵢ(κ) − yᵢ(κ)ᵀ𝒦ĥᵢ(κ).    (B.23)

The particular choices of aᵢ(q) and bᵢ(q) yield the following result.

Proposition B.3.1 The function Jᵢ,κ as defined by (B.21) is convex in ĥᵢ.

Proof Note that Jᵢ,κ is a quadratic function in the argument ĥᵢ. For convexity in ĥᵢ, it suffices that M is positive definite. For idempotent matrices such as 𝒦 it holds that the eigenvalues are either 0 or 1. Since 𝒦 is also symmetric, it is positive semidefinite. Now, for ε ≤ 0.5,

    xᵀMx = 2xᵀ((1 − 2ε)𝒦 + εI)x = 2(1 − 2ε)xᵀ𝒦x + 2εxᵀx
         ≥ {(1 − 2ε) ≥ 0, 𝒦 pos. semidef.} ≥ 2εxᵀx > 0  ∀x ≠ 0, x ∈ R^(2∆+1).    (B.24)

(For ε > 0.5 the same conclusion holds, since the eigenvalues of M are then 2ε and 2(1 − ε), both positive.)

It should be noted that Proposition B.3.1 is not sufficient for convexity in q. Since ĥᵢ as a function of q depends on the input data and is defined by the Kalman recursions, the optimization problem is in general not convex in q, and thus in most cases we get a locally optimal solution. Our method can be summarized as follows: Given ∆ and the interval κ = k − ∆, …
, k + ∆, for each measurement yᵢ(k) and each model i, let

    (qᵢ(k))* = arg min_{q≥0} Jᵢ,κ(q).    (B.25)

The value (ĥᵢ(k))* corresponding to (qᵢ(k))* is chosen as the output estimate ĥᵢ(k). Next we illustrate how OAE works with an example.

B.3.4 A Simple Example

In this section J(q) is derived for a simple example, assuming h(x) ∈ R (N = 1) and using a first order one-dimensional random walk as a system model (n = 1):

    h(k + 1) = h(k) + w(k)
    y(k) = h(k) + v(k).    (B.26)

For a scalar state variable h, all matrices of the Kalman filter are scalar and the recursions boil down to

    ĥ(k) = ĥ(k − 1) + (P(k − 1) + Q(k − 1))(y(k) − ĥ(k − 1)) / (P(k − 1) + Q(k − 1) + R(k − 1))
    P(k) = R(k − 1)(P(k − 1) + Q(k − 1)) / (P(k − 1) + Q(k − 1) + R(k − 1)).    (B.27)

We will give an analytic expression for the cost function Jᵢ,κ(q) for this example with the shortest possible window, κ = 1, 2, 3. With initial values ĥ₀, P₀ and assuming a constant measurement noise covariance matrix R, we get

    y(κ) = (y₀ y₁ y₂)ᵀ,  K = [1 1 1; 1 2 4; 1 3 9]  ⇒  𝒦 = I,    (B.28)

    ĥ(κ) = (ĥ₀, ĥ(1), ĥ(2))ᵀ,  where
    ĥ(1) = ĥ₀ + (P₀ + q)(y₁ − ĥ₀)/(P₀ + q + R),
    ĥ(2) = ĥ(1) + (P(1) + q)(y₂ − ĥ(1))/(P(1) + q + R),  P(1) = R(P₀ + q)/(P₀ + q + R).    (B.29)

With ε = 0.5 and 𝒦 = I the explicit expression for the cost function reads

    J(q) = ½ ĥ(κ)ᵀĥ(κ) − y(κ)ᵀĥ(κ) = ½(ĥ₀² + ĥ(1)² + ĥ(2)²) − (y₀ĥ₀ + y₁ĥ(1) + y₂ĥ(2)).    (B.30)

With a model order bigger than 1, such as for instance (B.9) in Example B.3.1, the Kalman recursions become more complex, and increasing the window size ∆ increases the number of terms in the cost function. In the general case, the cost function J(q) consists of a sum of 2(2∆ + 1) quotients of polynomials in q, with increasing degree for larger indices.
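The scalar example above, together with the discretized-interval search discussed next, can be sketched as follows. This is our own illustrative grid search over q, not the authors' implementation; the test values are assumptions.

```python
import numpy as np

def oae_step_scalar(y_win, hhat0, P0, R, q_grid):
    """Pick q on a discretized interval minimizing the window cost for the
    scalar model (B.26) on the shortest window kappa = 1, 2, 3, for which
    the smoothing projector is the identity and eps = 0.5 gives (B.30)."""
    y_win = np.asarray(y_win, dtype=float)
    best_q, best_J, best_h = q_grid[0], np.inf, hhat0
    for q in q_grid:
        hhat, P = hhat0, P0
        traj = [hhat]                        # hhat(kappa) = (hhat0, hhat(1), hhat(2))
        for y in y_win[1:]:
            gain = (P + q) / (P + q + R)     # scalar Kalman gain
            hhat = hhat + gain * (y - hhat)  # scalar form of (B.27)
            P = R * (P + q) / (P + q + R)
            traj.append(hhat)
        traj = np.asarray(traj)
        J = 0.5 * traj @ traj - y_win @ traj # cost (B.30), projector = identity
        if J < best_J:
            best_q, best_J, best_h = q, J, hhat
    return best_q, best_h
```

The resolution of `q_grid` directly sets the accuracy of the returned q, mirroring the discretization argument made below.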
As mentioned before, in general J(q) is not convex, and consequently the best we can hope for is a locally optimal solution. However, if we limit the domain to an interval q ∈ [q_min, q_max], we can find the optimal solution on the interval with an accuracy determined by the chosen discretization of [q_min, q_max]. Computationally, this method requires a small value of ∆ and a simple model such as (B.26) to be applicable online. Choosing the optimal q on an interval may be viewed as a kind of MMAE with a very large model bank given by the discretization of the interval. The advantages of this approach over regular MMAE are that here accuracy increases with the number of models, and the problem of convergence to zero for model probabilities is eliminated.

B.4 Simulation Results

B.4.1 Evaluation of Performance

To evaluate the method we have performed simulations on both smooth (S) and non-smooth (NS) trajectories with different levels of noise. The smooth trajectories were generated by adding a random number of sines and cosines with varying amplitudes and periods, and the non-smooth trajectories by adding a random number of piecewise constant functions. In both cases we let the parameter η describe the rate of change in the trajectories. In the smooth case, increasing η decreases the period, while in the non-smooth case it decreases the length of each constant piece of the trajectory. Examples of 1D trajectories generated with this procedure are shown in Figure 3. A curve in 3D, for instance, is represented by three such sets, one for each state component. The final simulated data is generated by adding a Gaussian noise with standard deviation σ. To summarize, a simulated data set is defined by (S, η, σ) or (NS, η, σ), and generally sets with large values of η and σ are more challenging. We let η = {0.05, 0.10, 0.15, …, 0.50}, σ = {0.05, 0.20, 0.35, …, 1.40} and generate 100 data sets of the type (S, η, σ) and (NS, η, σ) respectively for each combination (η, σ).

To evaluate OAE we let it run on each of the data sets, using the system model (B.9) with δ = 1. Since we assume no a priori knowledge of system dynamics, we let ε = 0.5. Note that for a fixed value of ε, ∆ is the only tuning parameter of OAE. To investigate sensitivity to tuning, we let ∆ = 5 for all 20,000 investigated trajectories. For comparison we run an MMAE algorithm on the same sets, using a model bank of 4 models of the type (B.9). The characteristics of each model in the bank are defined by its value of Q. We use Q₁ = 0, Q₂ = 10⁻¹, Q₃ = 10⁰, Q₄ = 10¹. The choice of Q-values and the number of models in the bank will inevitably affect the performance of MMAE. For some of the investigated trajectories, performance might increase by adding or removing models or using other Q-values. However, sensitivity to the choice of model bank is one of the drawbacks of MMAE, and since we allow no alterations of OAE for different trajectories, we keep the model bank constant for MMAE. To get an in some sense fair comparison, we impose an upper bound q_max = 10 on (B.11), so that both methods employ Q-values on the interval [0, 10]. Figure 4 shows tracking with OAE and MMAE for a typical smooth (left) and non-smooth (right) trajectory. Performance is measured as the relative error

    e(η, σ) ≜ ||h(η, σ) − ĥ(η, σ)|| / ||h(η, σ)||    (B.31)

where e = e_OAE for ĥ = ĥ_OAE and e = e_MMAE for ĥ = ĥ_MMAE. To get a robust analysis, 100 data sets are generated for each pair (η, σ) and the average ē(η, σ) over the 100 runs is measured.

Figure 3: Examples of trajectories. Left: η = 0.05, right: η = 0.5. Top: smooth, bottom: non-smooth.

The resulting
error difference ē_MMAE(η, σ) − ē_OAE(η, σ) is plotted in Figure 5 for all pairs (η, σ).

B.4.1.1 Smooth Curves

(ē_MMAE − ē_OAE) > 0 for all pairs (η, σ) except (σ, η) = (0.5, 0.5), that is, for 99% of the tested pairs. See the left plot of Figure 5. Averaging over all runs and all pairs (η, σ), the error of OAE is 83% of the error of MMAE for the smooth test cases.

B.4.1.2 Non-Smooth Curves

(ē_MMAE − ē_OAE) > 0 for 87 out of the 100 tested pairs (η, σ); see the right plot of Figure 5. MMAE was on average the better choice for σ > 0.5, η ∈ [0.2, 0.3]. Averaging over all runs and all pairs (η, σ), the error of OAE is 85% of the error of MMAE for the non-smooth test cases.

Figure 4: Tracking with MMAE and OAE (σ = 0.35, η = 0.15). Left: smooth trajectory. Right: non-smooth trajectory.

Figure 5: Difference in errors for MMAE and OAE as a function of η and σ. Left: smooth curves. Right: non-smooth curves.

B.4.2 Example: Tracking a Platform With Unicycle Dynamics

In this section we demonstrate OAE in a tracking application where the true target kinematics are both nonlinear and coupled. We assume the target has unicycle kinematics:

    ẋ₁ = u cos φ
    ẋ₂ = u sin φ
    φ̇ = ω,    (B.32)

where (x₁, x₂) is the position of the target and φ is its heading. (u, ω) are control inputs.
In terms of Equation (B.7), we have, with state vector x = (x₁ x₂ x₃)ᵀ = (x₁ x₂ φ)ᵀ,

    (ẋ₁ ẋ₂ ẋ₃)ᵀ = (f₁(x, u, ω) f₂(x, u, ω) f₃(x, u, ω))ᵀ = (u cos x₃  u sin x₃  ω)ᵀ    (B.33)
    y = (h₁(x) h₂(x))ᵀ + (v₁ v₂)ᵀ = (x₁ x₂)ᵀ + (v₁ v₂)ᵀ.    (B.34)

The task for the tracking device is to track the target position using noise-contaminated observations y = h(x) + v, obtained from range sensors. The tracking device has no information on the true target kinematics f(x, u, ω) or the control (u, ω). With no a priori information, a natural choice is to model the target kinematics as a random walk (RW) of the type (B.9). The model uncertainty w with covariance Q represents the nonlinearity of the true model and the coupling between states. For highly nonlinear and coupled true target kinematics such as (B.32), a good estimate of Q is essential. With the notation introduced in Section B.3, we get

    N = 2                      dimension of h(x)
    h = (h₁ h₂)ᵀ = (x₁ x₂)ᵀ    observation
    h₁ = (h₁ ḣ₁)ᵀ              state in the RW model of x₁
    h₂ = (h₂ ḣ₂)ᵀ              state in the RW model of x₂

The dynamics of h₁ and h₂ are modeled as (B.9), and the estimates ĥ₁ = (ĥ₁ ḣ̂₁)ᵀ and ĥ₂ = (ĥ₂ ḣ̂₂)ᵀ are computed using OAE.

B.4.2.1 Tracking using OAE

Target tracking is performed for a target with the kinematics (B.32), traveling along a path generated by plugging the controls

    u(φ) = min{|tan(4φ)|, 1}    (B.35)
    ω(φ) = −1.8 / sin(φ²)    (B.36)

into Equation (B.32). A Gaussian noise with standard deviation [σ_x σ_y σ_φ] = 0.05 [mean(x₁) mean(x₂) mean(φ)] was added to the path to simulate measurement noise. The resulting OAE estimate is shown in Figure 6, together with the OAE values of q₁ and q₂.

B.4.2.2 Estimation of Non-Measurable States

According to the assumptions made, φ is not measurable. However, using a second order RW model, OAE outputs estimates of (ẋ, ẏ) as well. Noting that

    φ = arctan(ẏ/ẋ),    (B.37)

we can compute a rough estimate φ̂ = arctan(ŷ̇/x̂̇). This estimate is plotted together with the true value in Figure 7.
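The test path of (B.32) under the controls (B.35)-(B.36) can be reproduced by simple Euler integration. The following is a hedged sketch: the initial heading, step size and number of steps are our assumptions (the paper does not state them), and a nonzero initial heading is chosen to avoid the singularity of (B.36) at φ = 0.

```python
import numpy as np

def simulate_unicycle(phi0=1.0, dt=0.01, steps=500):
    """Euler integration of the unicycle kinematics (B.32) under the
    controls (B.35)-(B.36). Initial state and step size are assumptions."""
    x1, x2, phi = 0.0, 0.0, phi0
    path = []
    for _ in range(steps):
        u = min(abs(np.tan(4 * phi)), 1.0)   # control (B.35)
        omega = -1.8 / np.sin(phi**2)        # control (B.36)
        x1 += dt * u * np.cos(phi)           # kinematics (B.32)
        x2 += dt * u * np.sin(phi)
        phi += dt * omega
        path.append((x1, x2, phi))
    return np.asarray(path)
```

Adding Gaussian noise to the (x₁, x₂) columns of the returned path, as described above, yields the measurement data fed to OAE.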
B.5 Conclusions

A novel adaptive Kalman filtering method was presented in this paper. An estimate of the covariance matrix Q was found by solving an optimization problem over a short window of data. No a priori knowledge of the system dynamics is needed, and the method is scalable to N dimensions.

Figure 6: Tracking a unicycle target using OAE. Left: target trajectory given by (B.35)-(B.36), noisy data and the resulting OAE estimate. Right: the OAE values of q₁ and q₂.

Figure 7: The estimate φ̂.

Trajectory tracking simulations show satisfying performance for a large class of systems and superiority over the reference method, a standard MMAE algorithm. Simulations indicate robustness to tuning in the sense that the value of ∆ can be chosen the same for vastly different trajectories, such as the examples in Figure 3, and still perform better than the reference method. Possible extensions of this work include introducing dynamic weights in the cost function J(q), to emphasize the punishment on bias or amplitude depending on knowledge of traits of the system or the magnitude of the measurement noise, and further investigation of the relation between ∆ and performance for different types of systems. Alternative designs of a(q) and b(q) should also be evaluated.

B.6 Acknowledgments

The authors gratefully acknowledge the helpful comments of Danica Kragic at CAS, KTH.

B.7 References

[1] P.S. Maybeck, Stochastic models, estimation, and control, vol. 1, Academic Press, 1979.
[2] P.S. Maybeck, Stochastic models, estimation, and control, vol. 2, Academic Press, 1979.
[3] P.S. Maybeck, Stochastic models, estimation, and control, vol. 3, Academic Press, 1982.
[4] A.H. Mohamed and K.P.
Schwarz, Adaptive Kalman filtering for INS/GPS, Journal of Geodesy, vol. 73, pp. 193–203, 1999.
[5] R. Mehra, On the identification of variances and adaptive Kalman filtering, IEEE Trans. on Automatic Control, vol. 15, pp. 175–184, 1970.
[6] P.S. Maybeck and P.D. Hanlon, Performance enhancement of a multiple model adaptive estimator, IEEE Trans. on Aerospace and Electronic Systems, vol. 31, pp. 1240–1254, 1995.
[7] X.R. Li and Y. Bar-Shalom, Multiple-model estimation with variable structure, IEEE Trans. on Automatic Control, vol. 41, no. 4, pp. 478–493, 1996.
[8] X.R. Li, Multiple-model estimation with variable structure: some theoretical considerations, Proc. of the 33rd IEEE Conference on Decision and Control (CDC), vol. 2, pp. 1199–1204, 1994.
[9] X.R. Li, Multiple-model estimation with variable structure. II. Model-set adaptation, IEEE Trans. on Automatic Control, vol. 45, no. 11, pp. 2047–2060, 2000.
[10] X.R. Li, X. Zhi, and Y. Zhang, Multiple-model estimation with variable structure. III. Model-group switching algorithm, IEEE Trans. on Aerospace and Electronic Systems, vol. 35, no. 1, pp. 225–241, 1999.
[11] X.R. Li, Y. Zhang, and X. Zhi, Multiple-model estimation with variable structure. IV. Design and evaluation of model-group switching algorithm, IEEE Trans. on Aerospace and Electronic Systems, vol. 35, no. 1, pp. 242–254, 1999.
[12] X.R. Li and Y. Zhang, Multiple-model estimation with variable structure. V. Likely-model set algorithm, IEEE Trans. on Aerospace and Electronic Systems, vol. 36, no. 2, pp. 448–466, 2000.
[13] X.R. Li, V. Jilkov, and J. Ru, Multiple-model estimation with variable structure. Part VI: expected-mode augmentation, IEEE Trans. on Aerospace and Electronic Systems, vol. 41, no. 3, pp. 853–867, 2005.
[14] P.S. Maybeck and K.P. Hentz, Investigation of moving-bank multiple model adaptive algorithms, Proc. of the 24th IEEE Conference on Decision and Control (CDC), vol. 24, pp. 1874–1881, 1985.
[15] J.W. Ding and C.
Rizos, Improving adaptive Kalman estimation in GPS/INS integration, Journal of Navigation, vol. 60, pp. 517–529, 2007.
[16] Y. Yang and W. Gao, An optimal adaptive Kalman filter, Journal of Geodesy, vol. 80, no. 4, pp. 177–183, 2006.
[17] X. Wang and V. Syrmos, Vehicle health monitoring system using multiple-model adaptive estimation, Proc. of the 11th IEEE Mediterranean Conference on Control and Automation, 2003.
[18] L. Jetto, S. Longhi, and G. Venturini, Development and experimental validation of an adaptive extended Kalman filter for the localization of mobile robots, IEEE Trans. on Robotics and Automation, vol. 15, no. 2, pp. 219–229, 1999.
[19] G. Reina, A. Vargas, K. Nagatani, and K. Yoshida, Adaptive Kalman filtering for GPS-based mobile robot localization, Proc. of the IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR), pp. 1–6, 2007.
[20] M. Ficocelli and F. Janabi-Sharifi, Adaptive filtering for pose estimation in visual servoing, Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 19–24, 2001.
[21] V. Lippiello and L. Villani, Adaptive extended Kalman filtering for visual motion estimation of 3D objects, Control Engineering Practice, vol. 15, no. 1, pp. 123–134, 2007.
[22] P. Maybeck and P. Hanlon, Performance enhancement of a multiple model adaptive estimator, IEEE Trans. on Aerospace and Electronic Systems, vol. 31, no. 4, pp. 1240–1254, 1995.




