State Space And LQR

Introducing State Space

The first few problems of this prelab are intended to help you get started learning about state space while you wrap up the continuous-time copter and maglev lab. And we think that a good way to start is to connect transfer functions and state-space descriptions, and to show you that poles are related to matrix eigenvalues. We start with a 2x2 pendulum example to introduce state-space. Then we will look at a more realistic model of an operational amplifier, starting with its transfer function representation and deriving a state-space model. Then we will reverse the process, and start with a motor state-space model, and derive its transfer function. In each case, we will show the direct connection between natural frequencies (or poles) and eigenvalues.
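The transfer-function/eigenvalue connection can be previewed numerically. The sketch below uses a hypothetical second-order transfer function, 1/(s^2 + 3s + 2), rather than any of the lab's models; `scipy.signal.tf2ss` converts it to a state-space model, and the poles of the transfer function match the eigenvalues of the resulting A matrix:

```python
import numpy as np
from scipy import signal

# Hypothetical second-order transfer function H(s) = 1 / (s^2 + 3s + 2)
num, den = [1.0], [1.0, 3.0, 2.0]

# Poles are the roots of the denominator polynomial: -1 and -2
poles = np.roots(den)

# Convert to a state-space model and take the eigenvalues of A
A, B, C, D = signal.tf2ss(num, den)
eigs = np.linalg.eigvals(A)

# Both computations yield the same set: {-2, -1}
print(np.sort(poles.real), np.sort(eigs.real))
```

The same check works in reverse: starting from a state-space model, the eigenvalues of A reappear as the poles of the derived transfer function.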

As additional background material, chapters 6 and 7 from the on-line (and free) textbook Feedback Systems: An Introduction for Scientists and Engineers by Karl J. Åström and Richard M. Murray are particularly good matches to the material over the next few weeks. Those interested in standing robots will be particularly interested in Example 7.2.

Links to Åström and Murray: Chapter 6, Chapter 7

Very useful state-space videos:

Summary of the State-space basics.

From the above examples, lectures, and supplementary material, we hope that four key ideas about linear state-space control are clear:

  1. The Matrix is the Model. We can model many systems with an \textbf{A}, \textbf{B}, and \textbf{C} state-space model,

    \frac{d}{dt}\textbf{x}(t) = \textbf{A}\textbf{x}(t) + \textbf{B}u(t)
    y(t) = \textbf{C}\textbf{x}(t)
    where \textbf{x} is the state, u is the "plant" input (a scalar for most of our examples), and y is the observed output (which can be a single state, a weighted combination of states, or, for vector outputs, several weighted combinations of states). For us, u(t) is usually the command we send to the coil or propeller motor, and y(t) is usually a scalar such as motor speed, arm angle, or umbrella distance.

  2. Eigenvalues are Natural. The natural frequencies (aka poles) of a system described by an \textbf{A},\textbf{B},\textbf{C} state-space model are the eigenvalues of \textbf{A}. And the natural frequencies of a state-feedback system with gain matrix \textbf{K} are the eigenvalues of \left(\textbf{A} - \textbf{B} \textbf{K}\right). The more negative the real parts of the eigenvalues, the faster the system responds, and if any eigenvalue has a positive real part, the system is unstable.

  3. The state, the whole state, and nothing but the state. We can improve a feedback system's performance by using a controller that monitors more system states. If the system is described by an \textbf{A},\textbf{B},\textbf{C} state-space model, full state-feedback is of the form,

    u(t) = K_r r(t) -\textbf{K} \textbf{x}(t),
    where \textbf{K} is a matrix of feedback gains, r is a primary input and K_r is a scaling of the primary input. For most of our feedback systems, r(t) is y_{d}(t), the desired value for the output y(t). And we usually select K_r so that if the input settles to a steady-state (and the system is stable) then y(t) \approx y_d(t) for t large enough.

  4. Gaining the system. Given modest assumptions about \textbf{A} and \textbf{B}, it is possible to determine a gain matrix \textbf{K} that places the natural frequencies (aka poles) of the feedback system, that is, the eigenvalues of \left(\textbf{A} - \textbf{B} \textbf{K}\right), anywhere we wish. In fact, the process of computing the needed gains from a set of desired natural frequencies is completely algorithmic (for example, using Ackermann's formula and its variants, as in Matlab's place and Python's acker functions).
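The four ideas above can be sketched numerically. The plant below is a hypothetical unstable two-state system, not one of the lab's models; `scipy.signal.place_poles` computes the gain matrix \textbf{K}, and K_r is then chosen from the closed-loop DC gain so that y settles to r:

```python
import numpy as np
from scipy import signal

# Hypothetical two-state plant (illustrative only, not the lab's model)
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])   # eigenvalues +/- sqrt(2): open loop is unstable
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Place the closed-loop eigenvalues of (A - B K) at -2 and -3
K = signal.place_poles(A, B, [-2.0, -3.0]).gain_matrix

# Verify: the eigenvalues of A - B K are the requested natural frequencies
Acl = A - B @ K
print(np.linalg.eigvals(Acl))   # the set {-2, -3}

# Choose K_r for zero steady-state error: in steady state,
# 0 = Acl x + B Kr r  and  y = C x,  so  y = -C Acl^{-1} B Kr r.
Kr = -1.0 / (C @ np.linalg.inv(Acl) @ B)[0, 0]
print(Kr)
```

Note that this K_r is exactly right only because the same (assumed perfect) A, B, C were used to compute it, which is the model-accuracy issue revisited in the lab.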

Once you feel confident in your understanding of the basics of state space, then move on to the next two problems, using LQR and adding an integral term.

LQR plus...

For the first state-space lab, we will return to the problem of several weeks ago, controlling the angle of a propeller arm. In the lab, you will compare two approaches for computing state-feedback gains: eigenvalue placement (also known as pole placement) and selecting weights for an optimization approach (the linear-quadratic-regulator, or LQR, approach). As you will see, it can be difficult to design a controller by selecting candidate poles, because pole locations do not relate well to our usual goals: maximizing tracking accuracy and disturbance rejection while not exceeding equipment limits. The LQR weights are easier to determine, because they are more clearly (albeit not perfectly) correlated with those goals.
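As a sketch of the LQR computation, again using a hypothetical two-state plant and made-up weights rather than the lab's values: the gains follow from solving the continuous-time algebraic Riccati equation, which SciPy provides directly.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical two-state plant (illustrative only, not the lab's model)
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

# LQR weights: Q penalizes state deviations, R penalizes input effort
Q = np.diag([100.0, 1.0])   # weight the first state heavily
R = np.array([[1.0]])

# Solve the Riccati equation, then K = R^{-1} B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# LQR guarantees a stable closed loop: all eigenvalues in the left half-plane
print(np.linalg.eigvals(A - B @ K))
```

Rather than picking pole locations directly, here we pick how much we care about each state versus input effort, and the pole locations fall out of the optimization.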

In lab, you will also address the issue of the input scaling K_r (sometimes referred to as precompensation), which is set to ensure zero steady-state error. In the examples above, we computed such K_r's, but of course we assumed the models were perfect. Modeling errors lead to inaccurate K_r's, and inaccurate K_r's produce steady-state errors that no adjustment of the state-feedback gains can remove, so you will also investigate adding an integrator to your state-space controller.
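One standard way to add such an integrator, sketched here for a hypothetical two-state plant (this shows the augmentation pattern, not the lab's actual model), is to append an integral-of-error state \xi with \dot{\xi} = y - r, and then place the eigenvalues of the augmented system:

```python
import numpy as np
from scipy import signal

# Hypothetical two-state plant (illustrative only, not the lab's model)
A = np.array([[0.0, 1.0],
              [2.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Augment with an integrator state xi, where xi' = y - r = C x - r.
# The r term enters separately, so the open-loop augmented matrices are:
Aaug = np.block([[A, np.zeros((2, 1))],
                 [C, np.zeros((1, 1))]])
Baug = np.vstack([B, [[0.0]]])

# Place all three augmented eigenvalues; the integrator drives the
# steady-state error to zero even when the model (and hence K_r) is off
Kaug = signal.place_poles(Aaug, Baug, [-2.0, -3.0, -4.0]).gain_matrix
print(np.linalg.eigvals(Aaug - Baug @ Kaug))
```

The last entry of Kaug is the integral gain; in steady state the integrator state stops moving only when y = r, which is what makes the error robust to modeling errors.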

In the first problem below, we examine the impact of different choices of weights for LQR applied to the L-motor example from above, and then examine adding an integral term to the state-space controller.