
CT2DT

The questions below are due on Tuesday December 07, 2021; 09:00:00 AM.
 

1) Goals

This problem focuses on CT-to-DT conversion. We will use three examples we have already seen in the lab and in prelabs: the propeller speed control problem, the pendulum, and the L-motor-driven arm.

2) CT-to-DT Conversion

Recall that a single-input, single-output CT state-space system has the form

\textbf{E} \frac{d}{dt} \textbf{x}(t) = \textbf{A}\textbf{x}(t) + \textbf{B} u(t)
and
y(t) = \textbf{C} \textbf{x}(t),
where \textbf{x}(t) is an n-length vector of states, u(t) is the input, y(t) is the observed output, \textbf{E} and \textbf{A} are n \times n evolution matrices, \textbf{B} is the n \times 1 input matrix, and \textbf{C} is the 1 \times n output matrix.

The single-input, single-output DT state space system has the form

\textbf{x}[n] = \textbf{A}_d\textbf{x}[n-1] + \textbf{B}_d u[n-1]
and
y[n] = \textbf{C}_d \textbf{x}[n],
where \textbf{x}[n] is an n-length vector of states, u[n] is the input, y[n] is the observed output, \textbf{A}_d is the n \times n evolution matrix, \textbf{B}_d is the n \times 1 input matrix and \textbf{C}_d is the 1 \times n output matrix.

It is uncommon to write DT state-space systems in descriptor form; that is, one rarely uses an \textbf{E}_d matrix. In CT, using an \textbf{E} matrix often leads to a simpler or clearer translation from physical system to state-space model, but we do not usually translate physical systems directly into DT state-space models. Instead, \textbf{E}_d is avoided, as it leads to a less efficient state update and slows the sample rate for observer-based controllers. To see this, suppose \textbf{E}_d \ne \textbf{I}. Then to compute \textbf{x}[n] from \textbf{x}[n-1] and u[n-1], as in

\textbf{E}_d \textbf{x}[n] = \textbf{A}_d \textbf{x}[n-1] + \textbf{B}_d u[n-1],
we must solve a system of linear equations for \textbf{x}[n], but with \textbf{E}_d = \textbf{I}, computing \textbf{x}[n] requires only matrix-vector multiplication,
\textbf{x}[n] = \textbf{A}_d \textbf{x}[n-1] + \textbf{B}_d u[n-1].
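The cost difference can be made concrete with a small numeric sketch (Python with numpy; the 2 \times 2 matrices below are made up purely for illustration). With \textbf{E}_d \ne \textbf{I} every step needs a linear solve, whereas folding \textbf{E}_d into the other matrices once, up front, restores the multiply-only update.

```python
import numpy as np

# Hypothetical 2x2 example (matrices chosen only for illustration).
Ed = np.array([[2.0, 0.0], [0.0, 0.5]])   # some E_d != I
Ad = np.array([[0.9, 0.1], [-0.1, 0.9]])
Bd = np.array([[0.0], [1.0]])
x_prev = np.array([[1.0], [0.0]])
u_prev = 1.0

# With E_d != I: each step requires solving E_d x[n] = A_d x[n-1] + B_d u[n-1].
rhs = Ad @ x_prev + Bd * u_prev
x_solve = np.linalg.solve(Ed, rhs)

# With E_d = I the step would be a single matrix-vector multiply plus the input term.
# Folding E_d into A_d and B_d once recovers that cheap form for every later step.
Ad_folded = np.linalg.solve(Ed, Ad)
Bd_folded = np.linalg.solve(Ed, Bd)
x_folded = Ad_folded @ x_prev + Bd_folded * u_prev

assert np.allclose(x_solve, x_folded)
```

The per-step linear solve is O(n^3) in general, while the folded form costs only an O(n^2) matrix-vector multiply, which is why \textbf{E}_d is eliminated before running an observer in real time.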

If we assume that \textbf{E}^{-1} exists and u(t) = u\left[\lfloor \frac{t}{\Delta T} \rfloor\right] (that is, u(t) is piecewise constant over intervals of length \Delta T), then we can make an "exact" conversion from a CT to a DT state-space system. By "exact", we mean that the relationships

\textbf{x}[n] = \textbf{x}(n \Delta T),\;\;\; y[n] = y(n\Delta T)
hold exactly.

To derive the exact conversion, let us start with the solution of the CT state-space system with initial condition \textbf{x}(0), and constant input u(t) = u[0], given by

\textbf{x}(t) = e^{\textbf{E}^{-1}\textbf{A}t} \textbf{x}(0) + \left(e^{\textbf{E}^{-1}\textbf{A}t} - \textbf{I}\right) \textbf{A}^{-1} \textbf{B}u[0].
We can verify that the above formula is a solution by substituting it for \textbf{x}(t) in the CT state-space equations and showing that they are satisfied. To do so, we will need to make use of one of the defining properties of the matrix exponential,
\frac{d}{dt} e^{\textbf{E}^{-1}\textbf{A}t} = \textbf{E}^{-1}\textbf{A} e^{\textbf{E}^{-1}\textbf{A}t}.

If we want to compute \textbf{x} (n \Delta T ) given \textbf{x}((n-1) \Delta T ) , and we know that the input will be constant over this short interval ((n-1)\Delta T, n \Delta T), we can treat \textbf{x}((n-1) \Delta T) as if it were an initial condition, and adapt the above solution to get

\textbf{x}(n \Delta T) = e^{\textbf{E}^{-1}\textbf{A} \Delta T} \textbf{x}((n-1) \Delta T) + \left(e^{\textbf{E}^{-1}\textbf{A} \Delta T} - \textbf{I}\right) \textbf{A}^{-1} \textbf{B}u[n-1].
If we define \textbf{x}[n] \equiv \textbf{x}(n \Delta T) and y[n] \equiv y(n \Delta T), then
\textbf{A}_d = e^{\textbf{E}^{-1}\textbf{A} \Delta T}
\textbf{B}_d = \left(e^{\textbf{E}^{-1}\textbf{A} \Delta T} - \textbf{I}\right) \textbf{A}^{-1} \textbf{B}
and \textbf{C}_d = \textbf{C}.
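These formulas translate directly into code. Below is a minimal Python sketch using numpy and scipy (the matlab 'c2d' command mentioned later does the same job); it is checked against the scalar propeller example of section 3.1, where \textbf{E}^{-1}\textbf{A} reduces to -25/3.

```python
import numpy as np
from scipy.linalg import expm

def ct2dt_exact(E, A, B, C, dT):
    """Exact CT-to-DT conversion, assuming piecewise-constant input
    and invertible E and A."""
    M = np.linalg.solve(E, A)                                # E^{-1} A
    Ad = expm(M * dT)                                        # e^{E^{-1} A dT}
    Bd = (Ad - np.eye(A.shape[0])) @ np.linalg.solve(A, B)   # (Ad - I) A^{-1} B
    return Ad, Bd, C                                         # Cd = C

# Scalar propeller example from section 3.1: E = 3, A = -25, B = 15, C = 1.
E = np.array([[3.0]]); A = np.array([[-25.0]])
B = np.array([[15.0]]); C = np.array([[1.0]])
Ad, Bd, Cd = ct2dt_exact(E, A, B, C, dT=0.01)

# For a scalar system Ad is just e^{(A/E) dT}.
assert np.isclose(Ad[0, 0], np.exp(-25.0 / 3.0 * 0.01))
```

The dT = 0.01 here is arbitrary, chosen only to exercise the function.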

Although exact conversion seems desirable, it is quite common to approximate the CT-to-DT conversion. For example, we used approximate conversion in the first few labs, when we equated the change in angular velocity with the product of \Delta T and angular acceleration. Our approximation was a particular example of the "Forward-Euler" approach, given in general by

\textbf{x}[n] = \textbf{x}[n-1] + \Delta T \textbf{E}^{-1} \left(\textbf{A} \textbf{x}[n-1] + \textbf{B} u[n-1]\right).

In terms of I, A, B, deltaT, and E, what is Ad for the Forward-Euler approximation?

In terms of I, A, B, C, E, and deltaT, what is Bd for the Forward-Euler approximation?

In terms of I, A, B, C, E, and deltaT, what is Cd for the Forward-Euler approximation?

As an alternative to the "Forward-Euler" approximation, suppose we derived approximate \textbf{A}_d and \textbf{B}_d matrices by using a first-order Taylor approximation to the matrix exponential. More specifically, expanding e^{\textbf{E}^{-1}\textbf{A} \Delta T } about \Delta T = 0 and keeping only the first-order terms gives

e^{\textbf{E}^{-1}\textbf{A} \Delta T} \approx \textbf{I} + \Delta T \textbf{E}^{-1}\textbf{A}.
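As a quick numeric check of this expansion (a Python/numpy/scipy sketch using the pendulum \textbf{A} matrix from section 3.2, where \textbf{E} = \textbf{I} so \textbf{E}^{-1}\textbf{A} = \textbf{A}), the exact matrix exponential and its first-order truncation agree to within terms of order \Delta T^2:

```python
import numpy as np
from scipy.linalg import expm

# Pendulum A matrix from section 3.2 (E = I there).
A = np.array([[0.0, 1.0], [-9.0, 0.0]])
dT = 1e-3  # a small step, chosen just for illustration

exact = expm(A * dT)              # e^{A dT}
first_order = np.eye(2) + dT * A  # I + dT A

# The neglected terms start at (dT A)^2 / 2, so the error is O(dT^2).
assert np.allclose(exact, first_order, atol=10 * dT**2)
```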

In terms of I, A, B, deltaT, and E, what is Ad for the first-order Taylor approximation?

In terms of I, A, B, C, E, and deltaT, what is Bd for the first-order Taylor approximation?

Given the above, it is not surprising that "Forward-Euler" and "First-Order" are used interchangeably to describe the above approximate CT-to-DT conversion.

3) Three numerical examples

In order to gain some concrete experience with CT to DT conversion, we will consider three examples:

3.1) Propeller Speed

We developed a model relating the propeller motor's back-EMF voltage, v_{emf}, which is proportional to rotational velocity, to the voltage applied to the motor, v_{pwm}, of the form

J_m \frac{d v_{emf}(t)}{dt}= K_m K_e \frac{\left(v_{pwm}(t) - v_{emf}(t)\right)}{\left(R_m+R_s\right)}-K_f v_{emf}(t).
For this scalar example, v_{emf}(t) is the scalar state, which we will denote generically as x(t), v_{pwm}(t) will be denoted as u(t), and we substituted values for the above parameters to yield a scalar state-space model,
3 \frac{d x(t)}{dt} = -25 x(t) + 15 u(t), \; \; \; y(t) = x(t).
In this case, E = 3, A = -25, B = 15, and C = 1, and the matrices are all 1 \times 1.

3.2) Pendulum

For the pendulum, the 2-length state vector is

\textbf{x}(t) = \begin{bmatrix} \theta(t) \\ \omega(t) \\ \end{bmatrix}
and its evolution is described by
\frac{d}{dt} \begin{bmatrix}\theta(t) \\ \omega(t) \end{bmatrix} = \begin{bmatrix}0 & 1\\-9 & 0\end{bmatrix}\begin{bmatrix}\theta(t) \\ \omega (t)\end{bmatrix} + \begin{bmatrix}0 \\ 2 \end{bmatrix} u(t), \;\;\; y(t) = \begin{bmatrix}1 & 0 \end{bmatrix}\begin{bmatrix}\theta(t) \\ \omega (t)\end{bmatrix},
where \textbf{E} is the 2 \times 2 identity matrix, \textbf{A} is 2 \times 2, \textbf{B} is 2 \times 1, and \textbf{C} is 1 \times 2.

3.3) L-motor Driven Arm

For the Arm, the 3-length state vector is

\textbf{x}(t) = \begin{bmatrix} i_m(t) \\ \omega(t) \\ \theta (t) \end{bmatrix}
and its evolution is described by
\begin{bmatrix}0.005 & 0 & 0 \\0 & 0.0015 & 0\\ 0 & 0 & 1\end{bmatrix}\frac{d}{dt} \begin{bmatrix}i_m(t) \\ \omega(t) \\ \theta(t) \end{bmatrix} = \begin{bmatrix}-7 & -0.46 & 0 \\0.3 & -0.00073 & 0\\ 0 & 1 & 0\end{bmatrix}\begin{bmatrix}i_m (t) \\ \omega (t) \\ \theta(t) \end{bmatrix} + \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} u(t),
y(t) = \begin{bmatrix}0 & 0 & 1 \end{bmatrix}\begin{bmatrix} i_m(t) \\ \omega (t)\\ \theta(t) \end{bmatrix},
where \textbf{E} is a 3 \times 3 diagonal matrix, \textbf{A} is 3 \times 3, \textbf{B} is 3 \times 1, and \textbf{C} is 1 \times 3.

Notice that this is the L-motor driven arm model for the case with NO rubber bands.

4) Poles and Eigenvalues for CT and DT state-space

If we use the z-transform to analyze the DT state-space system,

\textbf{X}(z) = z^{-1} \textbf{A}_d\textbf{X}(z) + z^{-1} \textbf{B}_d U(z)
and
Y(z) = \textbf{C}_d \textbf{X}(z).
Combining,
Y(z) = \textbf{C}_d \left( \textbf{I} - z^{-1} \textbf{A}_d \right)^{-1} z^{-1} \textbf{B}_d U(z),
or
Y(z) = \textbf{C}_d \left( z \textbf{I} - \textbf{A}_d \right)^{-1} \textbf{B}_d U(z) = H(z) U(z).

4.1) Poles of H(z) are the eigenvalues of \textbf{A}_d

Notice the similarity to the transfer function in continuous time,

Y(s) = \textbf{C} \left( s \textbf{I} - \textbf{A} \right)^{-1} \textbf{B} U(s) = H(s) U(s).
In CT, we assumed that \textbf{A} had unique eigenvalues, and used diagonalization by eigenvectors to show that the poles of the transfer function from U(s) to Y(s) are the eigenvalues of \textbf{A}. An alternative argument starts with the observation that if p is a pole of H(s), then H(p) = \infty. But H(p) can only be infinite if the inverse of \left( p \textbf{I} - \textbf{A} \right) is infinite.

If there exists a non-zero vector \textbf{w} such that

\textbf{A} \textbf{w} = p \textbf{w}
then p and \textbf{w} are an eigenvalue-eigenvector pair for \textbf{A}, and
\left( p \textbf{I} - \textbf{A} \right) \textbf{w} = 0.
That is, \left( p \textbf{I} - \textbf{A} \right) is zero in at least one direction (it is singular), so its inverse is infinite.

An identical argument can be made in the DT case, giving us the fact that if the eigenvalues of \textbf{A}_d are unique, then

poles(H(z)) = eig(\textbf{A}_d).
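This fact is easy to check numerically: the poles of H(z) are the roots of the denominator det(z\textbf{I} - \textbf{A}_d), the characteristic polynomial of \textbf{A}_d. A small Python/numpy sketch, with an \textbf{A}_d made up purely for the check:

```python
import numpy as np

# Hypothetical DT evolution matrix, chosen only for this numeric check.
Ad = np.array([[0.9, 0.2], [-0.1, 0.8]])

# The denominator of H(z) is det(zI - Ad); its roots are the poles.
char_poly = np.poly(Ad)              # coefficients of det(zI - Ad)
poles = np.roots(char_poly)
eigs = np.linalg.eigvals(Ad)

# Same set of numbers, up to ordering and roundoff.
assert np.allclose(np.sort(poles), np.sort(eigs))
```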

4.2) The Spectral Mapping Theorem for Matrices

Given a polynomial function,

f(z) \equiv \sum_{l=0}^L \alpha_l z^l,
if \lambda is an eigenvalue of an n \times n matrix \textbf{A}, then f(\lambda) is an eigenvalue of f(\textbf{A}), given by
f(\textbf{A}) \equiv \sum_{l=0}^L \alpha_l \textbf{A}^l.

So for example, if \lambda is an eigenvalue of \textbf{A}, then 3 + 4\lambda is an eigenvalue of

3 \textbf{I} + 4\textbf{A},
and \lambda^{17} is an eigenvalue of \textbf{A}^{17}.
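A quick numeric check of these two examples (Python/numpy, using a hypothetical triangular matrix so its eigenvalues can be read off the diagonal):

```python
import numpy as np

# Hypothetical matrix with eigenvalues 1 and 5 (upper triangular).
A = np.array([[1.0, 2.0], [0.0, 5.0]])
lam = np.linalg.eigvals(A)

# f(z) = 3 + 4z: eigenvalues of 3I + 4A are 3 + 4*lambda.
fA = 3 * np.eye(2) + 4 * A
assert np.allclose(np.sort(np.linalg.eigvals(fA)), np.sort(3 + 4 * lam))

# f(z) = z**17: eigenvalues of A**17 are lambda**17.
A17 = np.linalg.matrix_power(A, 17)
assert np.allclose(np.sort(np.linalg.eigvals(A17)), np.sort(lam ** 17))
```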

If the eigenvalues of \textbf{A} are distinct, the spectral mapping theorem is easy to prove. Recall that for a matrix with distinct eigenvalues, the matrix of its eigenvectors, \textbf{V}, is invertible, and

\textbf{V} \Lambda \textbf{V}^{-1} = \textbf{A},
where \Lambda is the diagonal matrix of \textbf{A}'s eigenvalues.

For \textbf{A}\textbf{A} = \textbf{A}^2,

\textbf{A}^2 = \textbf{V} \Lambda \textbf{V}^{-1} \textbf{V} \Lambda \textbf{V}^{-1} = \textbf{V} \Lambda^2 \textbf{V}^{-1}
and by induction,
\textbf{A}^l = \textbf{V} \Lambda^l \textbf{V}^{-1}.
Finally by linearity,
f(\textbf{A}) = \textbf{V} \sum_{l=0}^L \alpha_l \Lambda^l \textbf{V}^{-1}
where
\sum_{l=0}^L \alpha_l \Lambda^l = \begin{bmatrix} \sum_{l=0}^L \alpha_l \lambda_1^l & 0 & \cdots & 0\\ 0 & \sum_{l=0}^L \alpha_l \lambda_2^l & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & \sum_{l=0}^L \alpha_l \lambda_n^l \end{bmatrix}.

The Taylor series expansion for the exponential function can be used to define the matrix exponential,

e^{\textbf{A}} = \sum_{l=0}^\infty \frac{1}{l!} \textbf{A}^l,
and assuming an invertible matrix of eigenvectors,
e^{\textbf{A}} = \sum_{l=0}^\infty \frac{1}{l!} \textbf{V} \Lambda^l \textbf{V}^{-1} = \textbf{V} \left(\sum_{l=0}^\infty \frac{1}{l!} \Lambda^l\right) \textbf{V}^{-1},
or
e^{\textbf{A}} = \textbf{V} \begin{bmatrix} e^{\lambda_1} & 0 & \cdots & 0\\ 0 & e^{\lambda_2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & e^{\lambda_n} \end{bmatrix} \textbf{V}^{-1}.
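This diagonalization formula is easy to verify numerically against scipy's 'expm', for example on the pendulum \textbf{A} matrix from section 3.2, whose eigenvalues \pm 3j are distinct:

```python
import numpy as np
from scipy.linalg import expm

# Pendulum A matrix from section 3.2; eigenvalues are +-3j (distinct).
A = np.array([[0.0, 1.0], [-9.0, 0.0]])
lam, V = np.linalg.eig(A)

# e^A via diagonalization: V diag(e^{lambda_i}) V^{-1}.
eA = V @ np.diag(np.exp(lam)) @ np.linalg.inv(V)

# Agrees with the series-based matrix exponential; the imaginary parts
# cancel (up to roundoff) because A is real.
assert np.allclose(eA, expm(A))
assert np.allclose(eA.imag, 0, atol=1e-10)
```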

4.3) Using the spectral mapping theorem!

You are welcome to use matlab to help answer the following questions. Commands you might find useful include:

'dss': Converts A, B, C, D, E matrices into a CT state-space system.

'c2d': Creates a DT system from a CT system using the formulas above.

'expm(A)': computes the matrix exponential e^{A}. DO NOT USE 'exp(A)'!!!!

Please enter numerical answers as real or complex scalars, e.g. 8.3 or 5.0+3.0j, or in python list format, e.g. [4.0,5.0+3.0j,5.0-3.0j]. When answering questions with expressions, please use python syntax; for example, e^{6\Delta T} becomes e**(6*DeltaT).

This set of questions examines the pole of DT state-space models of the single-state propeller speed example. Notice that for small values of \Delta T, the DT pole is close to, but less than, 1.0.
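To see where this observation comes from (a Python sketch rather than the suggested matlab commands; the step sizes below are arbitrary, not the ones asked about in the questions): for a scalar system, the exact \textbf{A}_d is the scalar e^{p \Delta T}, where p = A/E is the CT pole, so a stable CT pole maps to a DT pole just below 1 for small \Delta T.

```python
import numpy as np

# CT pole of the scalar propeller model from section 3.1: p = A/E.
p = -25.0 / 3.0

# For a scalar system the exact DT pole is e^{p * dT}.
for dT in [0.001, 0.01, 0.1]:
    pole_dt = np.exp(p * dT)
    # Stable CT pole (p < 0) maps inside the unit circle, approaching 1 as dT -> 0.
    assert 0 < pole_dt < 1.0
```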

What is the pole of the CT propeller speed system (example 3.1)?

If deltaT = 1/20, what is the pole of the exact DT propeller speed system?

If deltaT = 1/20, what is the pole of the first-order approximate DT propeller speed system?

For what deltaT (to FOUR SIGNIFICANT DIGITS) is the pole of the exact DT propeller speed system equal to 0.99?

For what deltaT (to FOUR SIGNIFICANT DIGITS) is the pole of the first-order approximate DT propeller speed system equal to 0.99?

What is the smallest deltaT for which the first-order approximate DT propeller speed system is unstable (if never unstable, enter -1, if never stable, enter 0)?

What is the smallest deltaT for which the exact DT propeller speed system is unstable (if never unstable, enter -1, if never stable, enter 0)?

This set of questions refers to DT models of the two-state pendulum example without friction, given above. With no input and a nonzero initial condition, the pendulum oscillates forever, so its two CT poles are purely imaginary. The associated exact and approximate DT state-space systems can behave in surprising ways, particularly for larger \Delta T's.

What are the poles of the CT pendulum system? Remember to enter as a python list, e.g. [2.0+7j,2.0-7j].

What are the MAGNITUDES of the poles of the exact DT pendulum system (the magnitudes do not depend on DeltaT)? Remember to enter as a python list, e.g. [5.0,3.7].

What is the smallest deltaT for which the first-order approximate DT pendulum system is unstable (if never unstable, enter -1, if never stable, enter 0)?

What is the smallest positive deltaT for which the Ad matrix of the exact DT system is a negative identity matrix (equal to -I)? (Hint: what must the DT poles be? Then use the spectral mapping theorem.)

What is the smallest positive deltaT for which the poles of the exact DT system are purely imaginary? If no such deltaT exists, enter 0.

What is the Bd matrix for the exact DT system with purely imaginary poles, whose deltaT was found in the previous question? Please give your answer as a list, e.g. [3.7, 4.8].

Notice that the 2 \times 1 \textbf{B} matrix for the CT pendulum system has only one nonzero entry, as does the \textbf{B}_d matrix generated by the first-order approximation. But for the exact DT system, the entries of the associated \textbf{B}_d matrix are usually all nonzero, particularly when \Delta T is larger. To see why, suppose the state is zero when the input changes from zero to one. For the CT system, after a single \Delta T, all the states will have changed. And since the DT system is exact, it must represent the change in the states after one \Delta T, from all the states being zero to all the states being nonzero. Multiplying the starting all-zero state by \textbf{A}_d will always yield zero, so the nonzero state at the end of the \Delta T interval has to be represented in \textbf{B}_d.
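This structural difference is easy to see numerically. The sketch below (Python with numpy/scipy) uses the pendulum matrices from section 3.2 with an arbitrary \Delta T = 0.1, which is not the \Delta T from the questions above:

```python
import numpy as np
from scipy.linalg import expm

# Pendulum matrices from section 3.2 (E = I).
A = np.array([[0.0, 1.0], [-9.0, 0.0]])
B = np.array([[0.0], [2.0]])
dT = 0.1  # arbitrary step, for illustration only

# Exact conversion: Bd = (Ad - I) A^{-1} B.
Ad_exact = expm(A * dT)
Bd_exact = (Ad_exact - np.eye(2)) @ np.linalg.solve(A, B)

# Forward-Euler conversion: Bd = dT * E^{-1} B, which keeps B's zero entry.
Bd_euler = dT * B

assert Bd_euler[0, 0] == 0.0       # first entry stays zero
assert abs(Bd_exact[0, 0]) > 0.0   # exact conversion fills it in
assert abs(Bd_exact[1, 0]) > 0.0
```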

The following questions refer to the L-motor driven arm example.

What are the poles of the CT L-motor driven arm example? Remember to enter as a python list, e.g. [2.0, 2.0+7j, 2.0-7j].

What is the largest pole for the exact DT system for the L-motor driven arm example (note that it does not depend on deltaT)?

For what value of deltaT is 0.1 the smallest pole of the exact DT system for the L-motor driven arm example?

What is the smallest deltaT for which the exact DT system for the L-motor driven arm example has at least one negative pole? If there is none, enter 0.