Prelab - Observer

State of State-Space.

In lectures we took a continuous-time path through state-space, staying with continuous-time state-space for modeling, measured-state feedback, and optimization-based gain selection. In lab, we used continuous-time models to determine measured-state feedback gains (using pole placement or LQR), and ignored the fact that the motor voltage on the propeller (the u in our model of the "plant") was only updated every millisecond.

Perhaps we grew annoyed at re-calibrating the back-EMF measurements again and again, or were frustrated with trying to denoise the angular velocity estimates, but now we are ready for an alternative to measuring all the states of our propeller-levitated arm. And there is an alternative: we can use observer-state feedback, and then only need to measure arm angle. More specifically,

  • Evolve the observer-estimated state using the state-space model,
  • Use the difference between the measured and observer-predicted arm angle to correct the observer's estimated state,
  • Use the observer-estimated state instead of the measured state when adjusting the propeller's motor command.
But exactly how do we "evolve" the observer-estimated state? If we stay in continuous time, "evolving" means solving differential equations to update the observer state, and that is expensive to do on a microcontroller. If instead we work in discrete time, "evolving" means multiplying the current state by a matrix, as we will see.
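The three bullets above amount to one matrix-arithmetic update per controller timestep. Here is a minimal sketch in Python, where the matrices and gains are hypothetical placeholders (not the arm's actual values, which come from the steps later in this prelab):

```python
import numpy as np

# Hypothetical 2-state discrete-time model and gains (placeholders only).
A_d = np.array([[1.0, 0.001],
                [0.0, 1.0]])       # discrete-time state evolution matrix
B_d = np.array([[0.0], [0.001]])   # discrete-time input matrix
C = np.array([[1.0, 0.0]])         # we measure only the arm angle
K = np.array([[10.0, 2.0]])        # state-feedback gains (assumed given)
L = np.array([[0.5], [5.0]])       # observer correction gains (assumed given)

x_hat = np.zeros((2, 1))           # observer's estimated state
u = np.zeros((1, 1))               # most recent motor command

def observer_step(y_measured):
    """One controller timestep: evolve, correct, and compute the new u."""
    global x_hat, u
    # 1. Evolve the estimate with the model (a matrix multiply, not an ODE solve).
    x_pred = A_d @ x_hat + B_d @ u
    # 2. Correct using the measured-vs-predicted arm angle difference.
    x_hat = x_pred + L @ (y_measured - C @ x_pred)
    # 3. Feed back the estimated state instead of the measured state.
    u = -K @ x_hat
    return u
```

Note that the only sensor reading the loop consumes is the arm angle y; every other state enters through the model-evolved estimate.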

So, it is time to switch to discrete time. But we cannot simply "switch" to discrete time: our physical system is in continuous time, and only our controller is in discrete time. The two do connect, through sampling and a zero-order hold. The controller "samples" the state and output of the physical system every \Delta T seconds. That is,

y[n] = y(n\Delta T),
where n is the sample index. The u(t) the controller sends to the plant is updated only every \Delta T seconds, and is "held" between samples as
u(t) = u[\lfloor \frac{t}{\Delta T} \rfloor],
where the notation \lfloor \tau \rfloor means "the greatest integer less than or equal to \tau." A typical zero-order-hold continuous waveform is shown in the figure below:
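Concretely, the zero-order hold is just a lookup by sample index; a small sketch (the sample period and the u[n] values here are made up for illustration):

```python
import math

dT = 0.001                         # sample period ΔT (1 ms, as in the lab)
u_samples = [0.0, 1.0, 0.5, 2.0]   # hypothetical controller outputs u[n]

def u_held(t):
    """Continuous-time u(t) produced by a zero-order hold of u[n]."""
    n = math.floor(t / dT)         # sample index ⌊t/ΔT⌋
    return u_samples[n]
```

Between samples, u_held returns the most recent u[n]; the waveform is the staircase in the figure.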

Because the physical system is in continuous time, our continuous-time plant model is still valid,

\textbf{E}\frac{d}{dt}\textbf{x}(t) = \textbf{A}\textbf{x}(t) + \textbf{B}u(t),
y(t) = \textbf{C}\textbf{x}(t),
though we now understand that if u(t) is generated by a microcontroller, it is piecewise constant in time. In addition, we are only interested in samples of y(t) at t = n \Delta T. In particular,
\textbf{x}[n] = \textbf{A}_d\textbf{x}[n-1] + \textbf{B}_du[n-1],
and
y[n] = \textbf{C}\textbf{x}[n],
where
\textbf{A}_d = e^{\textbf{E}^{-1}\textbf{A} \Delta T}
and
\textbf{B}_d = \left(e^{\textbf{E}^{-1}\textbf{A} \Delta T} - \textbf{I}\right) \textbf{A}^{-1} \textbf{B}.
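These formulas can be evaluated numerically with a matrix exponential. A sketch using SciPy, with a hypothetical two-state (E, A, B) standing in for the arm's model:

```python
import numpy as np
from scipy.linalg import expm

dT = 0.001  # 1 ms sample period, matching the lab's update rate
# Hypothetical continuous-time model E dx/dt = A x + B u (not the arm's values).
E = np.eye(2)
A = np.array([[0.0, 1.0],
              [-1.0, -2.0]])
B = np.array([[0.0],
              [1.0]])

M = np.linalg.inv(E) @ A
A_d = expm(M * dT)                              # A_d = e^{E^{-1} A ΔT}
B_d = (A_d - np.eye(2)) @ np.linalg.inv(A) @ B  # B_d = (A_d - I) A^{-1} B
```

Note that the B_d formula assumes A is invertible; when A is singular, B_d can instead be obtained by integrating the matrix exponential directly (SciPy's scipy.signal.cont2discrete with method='zoh' does this for you).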

This gives us a path to practical observer-based control, one that we will examine in the examples below:

1. Determine a continuous-time state-space model of your physical system,
2. Create an exact discrete-time model (assuming a zero-order hold on u),
3. Determine state-feedback gains (using pole placement or LQR),
4. Determine observer correction gains (using pole placement or an LQR-like alternative).
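As a sketch of steps 3 and 4, SciPy's pole placement can supply both gain sets; the observer gains come from the dual problem of placing the eigenvalues of (A_d - L C). The discrete-time model and pole locations below are made-up placeholders, not the arm's values:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical discrete-time model (placeholders, not the arm's matrices).
A_d = np.array([[1.0, 0.001],
                [-0.001, 0.998]])
B_d = np.array([[0.0], [0.001]])
C = np.array([[1.0, 0.0]])

# Step 3: state-feedback gains, placing closed-loop poles inside the unit circle.
K = place_poles(A_d, B_d, [0.9, 0.85]).gain_matrix

# Step 4: observer correction gains via duality: place the eigenvalues of
# (A_d - L C) by applying pole placement to (A_d^T, C^T). The observer poles
# are chosen faster (closer to zero) than the state-feedback poles.
L = place_poles(A_d.T, C.T, [0.5, 0.4]).gain_matrix.T
```

In discrete time, "stable" means eigenvalue magnitudes below one, which is why all the placed poles above sit inside the unit circle.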

We will start with an example of discrete-time state-space control, then derive and apply the above continuous-to-discrete formulas, and end with a complete example of observer-based control.