SIPE_TUDelft

System Identification and Parameter Estimation

http://ocw.tudelft.nl/courses/biomedical-engineering/system-identification-and-parameter-estimation/lectures/

 

Lecture 1 - Introduction

The first lecture gives an overview of the course and a general introduction to system identification and parameter estimation. System identification, in this course, aims to elucidate the dynamic relation between time signals and to parameterize this relation in a mathematical model (where the model is based on differential equations). The course emphasizes system identification in the frequency domain; a key element of this approach is the Fourier transform. A major advantage of the frequency-domain approach is that no a priori knowledge of the model type (the order of the system) is required. Every recorded time signal will be contaminated with noise. Noise is, by nature, a random process, and consequently measured signals are stochastic. In stochastic theory it is not the individual realization that matters but the statistical properties, e.g. mean, standard deviation, and probability density functions. Under ergodicity, one sufficiently long realization is representative of many realizations. This implies that it is sufficient to capture one recording of a signal to assess its (statistical) properties, instead of multiple recordings. Cross-product and cross-covariance functions are measures to estimate the relation between two (stochastic) signals in the time domain.

A zip file containing a Matlab model that demonstrates noise removal in a second-order system by averaging a number of realizations, and the effect of a stochastic input signal: HistoryLecture1.zip
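As an illustration of the averaging principle, a minimal Matlab sketch (with arbitrarily chosen system parameters and noise level, not the contents of HistoryLecture1.zip):

% Minimal sketch (assumed parameters, not the HistoryLecture1.zip model):
% averaging noisy output realizations of a second-order system.
fs = 100;  t = (0:1/fs:10)';          % sample rate and time vector
wn = 2*pi*2;  zeta = 0.2;             % assumed natural frequency and damping
sys = tf(wn^2, [1 2*zeta*wn wn^2]);   % second-order low-pass system
u = ones(size(t));                    % step input
y_clean = lsim(sys, u, t);            % noise-free response

nReal = 50;                           % number of realizations to average
Y = zeros(numel(t), nReal);
for k = 1:nReal
    Y(:,k) = y_clean + 0.2*randn(size(t));  % each realization: output plus white noise
end
y_avg = mean(Y, 2);                   % ensemble average over realizations

plot(t, Y(:,1), t, y_avg, t, y_clean, 'LineWidth', 1);
legend('single realization', sprintf('average of %d', nReal), 'noise-free');
xlabel('time [s]');  ylabel('output');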

Lecture 2 - Correlation functions in time and frequency domain

This lecture first explains correlation functions in the time domain, followed by general properties of estimators and how to estimate these correlation functions. After that the Fourier transform is reviewed, and the lecture ends with correlation functions in the frequency domain.
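A minimal Matlab sketch of these estimators, assuming a simple FIR system and white-noise input (all parameters chosen arbitrarily): auto- and cross-covariance via xcorr in the time domain, and the corresponding Welch-averaged spectral densities via pwelch and cpsd:

% Minimal sketch (assumed signals): estimating auto- and cross-covariance
% in the time domain and the corresponding spectral densities.
fs = 100;  N = 2^12;
u = randn(N,1);                        % white-noise input (zero mean)
b = fir1(30, 0.2);                     % assumed FIR system relating u to y
y = filter(b, 1, u) + 0.1*randn(N,1);  % output with additive noise

maxlag = 200;
[cuu, lags] = xcorr(u, u, maxlag, 'biased');  % auto-covariance estimate of u
[cuy, ~   ] = xcorr(y, u, maxlag, 'biased');  % cross-covariance between u and y

% Frequency-domain counterparts via Welch-averaged spectral estimators
[Suu, f] = pwelch(u, hann(512), 256, 512, fs);     % auto-spectral density
[Suy, ~] = cpsd(u, y, hann(512), 256, 512, fs);    % cross-spectral density

subplot(2,1,1); plot(lags/fs, cuy);     xlabel('lag [s]'); ylabel('c_{uy}');
subplot(2,1,2); semilogy(f, abs(Suy));  xlabel('f [Hz]');  ylabel('|S_{uy}|');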

Lecture 3 - Impulse and frequency response functions

This lecture first discusses the estimation of an Impulse Response Function (IRF) and a Frequency Response Function (FRF), then explains how to improve the estimation of spectral densities using the Welch method and frequency averaging. Other topics are open-loop versus closed-loop situations and coherence. A zipfile containing some examples of impulse and frequency response functions in Matlab: Lec3_examples.zip
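A minimal sketch of Welch-based FRF and coherence estimation on an assumed second-order system (illustrative only, not taken from Lec3_examples.zip):

% Minimal sketch (assumed second-order system): FRF and coherence estimation
% with Welch-averaged spectral estimators.
fs = 200;  N = 2^14;
u = randn(N,1);                           % random (open-loop) input
wn = 2*pi*5;  zeta = 0.1;                 % assumed plant parameters
sys = tf(wn^2, [1 2*zeta*wn wn^2]);
y = lsim(sys, u, (0:N-1)'/fs) + 0.05*randn(N,1);   % output with measurement noise

nfft = 1024;  win = hann(nfft);  nov = nfft/2;
[H, f] = tfestimate(u, y, win, nov, nfft, fs);     % Welch-based FRF estimate
[C, ~] = mscohere(u, y, win, nov, nfft, fs);       % magnitude-squared coherence

subplot(2,1,1); loglog(f, abs(H));  ylabel('|H(f)|');
subplot(2,1,2); semilogx(f, C);     ylabel('\gamma^2'); xlabel('f [Hz]');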

Lecture 4 - Assignment 1 & Perturbation Signal Design

This lecture first explains the answers to Assignment 1 and then discusses sources of error in estimators and ways to improve the estimation. In most cases system identification is a battle against noise: one should try to decrease the power of the noise and/or increase the power of the signal. Several methods exist to boost the power of the signal and thus improve the signal-to-noise ratio (SNR). However, random signals always introduce leakage, an effect of the finite observation time and the resulting discrete frequency resolution. Multisine signals are composed of multiple sines. These deterministic signals do not introduce leakage, and as the power is distributed over a limited number of frequencies, the power per frequency can be high. With cresting, a technique to minimize the ratio between the peaks of the time signal and its standard deviation (the crest factor), the power can be increased even further. Finally, the effect of the input signal on the system identification procedure is discussed.
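A minimal sketch of a multisine perturbation signal and its crest factor; the excited frequency lines and random phases are arbitrary choices, and no phase optimization (cresting) is performed here:

% Minimal sketch (assumed design choices): a multisine perturbation signal
% with power at a limited set of frequencies, plus its crest factor.
fs = 100;  T = 20;  N = T*fs;            % observation time defines df = 1/T
f0 = 1/T;  t = (0:N-1)'/fs;
kex = [2 3 5 8 13 21 34 55];             % excited frequency lines (multiples of f0)

rng(1);
phi = 2*pi*rand(size(kex));              % random phases (cresting would optimize these)
u = zeros(N,1);
for i = 1:numel(kex)
    u = u + cos(2*pi*kex(i)*f0*t + phi(i));
end
u = u / std(u);                          % normalize the signal power

crest = max(abs(u)) / std(u);            % crest factor: peak value over standard deviation
fprintf('crest factor = %.2f\n', crest);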

Lecture 5 - Open- and Closed Loop Systems & Multivariable Systems

This lecture gives some background on estimators in general and describes the ins and outs of identification of closed-loop systems. An estimator gives an estimate of a certain 'true' variable or function. All estimators have an error; this error can be divided into random errors (variance of the estimator) and structural errors (bias of the estimator). A good estimator has negligible, or no, bias and low variance. For a consistent estimator the variance decreases with the number of samples. The raw (non-averaged) estimator for the spectral density is not consistent! With increasing observation time the resolution in the frequency domain increases, but the variance of this estimator remains the same. Furthermore, when the raw spectral density estimators are used, the estimated coherence is always 1. The variance of the spectral density estimator can be reduced by averaging over adjacent frequency bins, however at the cost of resolution. Using averaged spectral densities results in a better estimate of the FRF and the coherence. Note that the coherence is always overestimated.
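A minimal sketch illustrating that the raw (single-segment) coherence estimate is identically 1, whereas averaging over segments yields a meaningful estimate; the system and noise level are assumed:

% Minimal sketch (assumed signals): raw spectral estimators give coherence 1,
% segment averaging gives a meaningful coherence estimate at reduced resolution.
fs = 100;  N = 2^13;
u = randn(N,1);
y = filter(fir1(40, 0.3), 1, u) + 0.5*randn(N,1);   % assumed system plus heavy noise

% Raw estimators: one full-length segment, no averaging
Craw = mscohere(u, y, rectwin(N), 0, N, fs);
fprintf('max deviation of raw coherence from 1: %.2e\n', max(abs(Craw - 1)));

% Welch-averaged estimators: variance reduced at the cost of frequency resolution
nfft = 512;
[Cavg, f] = mscohere(u, y, hann(nfft), nfft/2, nfft, fs);
plot(f, Cavg); xlabel('f [Hz]'); ylabel('\gamma^2 (averaged)');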

Lecture 6 - Time Domain Models

Continuous systems can be approximated well by discrete-time models. Discrete-time models are in fact the regression coefficients of a discrete impulse response function. Given N discrete signal values, discrete models normally require far fewer regression coefficients (n < N) than FRFs (N). A major advantage of time-domain models is that noise can be separated from the signal: whereas in the frequency domain noise is mixed with the signal, which requires a posteriori averaging, in the discrete time domain noise models are estimated a priori. This immediately requires a priori knowledge of the system's structure, which is often not known beforehand. In this lecture, different time-domain models are presented. Although the models are all linear input-output models, they are not all linear in their parameters (i.e. the regression coefficients). Linearity in the parameters means a linear contribution of the parameters to the model error, which is typically the difference between the modeled output and the system's output. The advantage of linearity in the parameters is that the model parameters can be obtained algebraically from the input and output signals only. Linearity in the parameters depends on the chosen noise model: e.g. ARX is linear in its parameters and ARMAX is nonlinear in its parameters. Two discrete closed-loop estimators are presented (two-stage and coprime factorization), both utilizing two open-loop estimation steps but each in a different way.
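A minimal sketch of ARX estimation by ordinary least squares, illustrating linearity in the parameters; the model orders and the 'true' system are assumed:

% Minimal sketch (assumed orders and system): ARX parameters obtained by
% ordinary least squares, illustrating linearity in the parameters.
N = 2000;
u = randn(N,1);
a = [1 -1.5 0.7];  b = [0 0.5 0.3];              % assumed 'true' ARX system
e = 0.05*randn(N,1);
y = filter(b, a, u) + filter(1, a, e);           % noise filtered through 1/A(q): ARX noise model

% ARX(2,2,1): y(t) + a1*y(t-1) + a2*y(t-2) = b1*u(t-1) + b2*u(t-2) + e(t)
rows = (3:N)';
Phi = [-y(rows-1), -y(rows-2), u(rows-1), u(rows-2)];   % regression matrix
theta = Phi \ y(rows);                           % least-squares estimate [a1 a2 b1 b2]
disp(theta');                                    % compare with [-1.5 0.7 0.5 0.3]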

Lecture 7 - Assignment 2 & Overview of System Identification

In the first part of this lecture Assignment 2 will be thoroughly explained. Then lectures 1-6, which form the System Identification part of the course, will be summarized.

Lecture 8 - Optimization methods

This lecture gives an overview of different types of optimization methods for static and dynamic systems.
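As a minimal illustration of a derivative-free optimization method applied to a static problem (using the standard Rosenbrock test function, which is not taken from the lecture):

% Minimal sketch: unconstrained minimization of a static cost function with
% a derivative-free simplex method (fminsearch) on the Rosenbrock test function.
cost = @(p) 100*(p(2) - p(1)^2)^2 + (1 - p(1))^2;   % Rosenbrock "banana" function
p0 = [-1.2, 1];                                     % assumed starting point
opts = optimset('TolX', 1e-8, 'TolFun', 1e-8, 'MaxFunEvals', 1e4);
[p_opt, J_opt] = fminsearch(cost, p0, opts);
fprintf('minimum at (%.4f, %.4f), cost %.2e\n', p_opt(1), p_opt(2), J_opt);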

Lecture 9 - Physical Modeling, Model and Parameter Accuracy

This lecture deals with the application of optimization routines to retrieve the parameters of a model. Parameterization in both the time and the frequency domain will be explained, and the basic steps of an 'ideal' experiment will be given.
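A minimal sketch of such a parameter-retrieval step in the frequency domain: fitting the mass, damping and stiffness of an assumed mass-spring-damper model to a simulated FRF estimate by minimizing a cost function (the model, data and cost are illustrative assumptions):

% Minimal sketch (assumed model and data): retrieving mass-spring-damper
% parameters by fitting a parametric FRF to a nonparametric FRF estimate.
f = logspace(-1, 1, 100)';  w = 2*pi*f;                 % frequency grid [Hz]
frf_model = @(p, w) 1 ./ (-p(1)*w.^2 + 1i*p(2)*w + p(3));   % H(w) for m, b, k

p_true = [2, 4, 100];                                   % assumed 'measured' system [m b k]
H_meas = frf_model(p_true, w) .* (1 + 0.05*randn(size(w)));  % FRF estimate with noise

% Logarithmic cost on the complex FRF error, minimized with a simplex search
cost = @(p) sum(abs(log(frf_model(p, w) ./ H_meas)).^2);
p0 = [1, 1, 50];                                        % initial guess
p_hat = fminsearch(cost, p0);
fprintf('estimated [m b k] = [%.2f %.2f %.2f]\n', p_hat);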

Lecture 10 - Assignment 3 & Nonlinear Models

The first hour of the lecture is spent on explaining Assignment 3. The remainder of the lecture is about nonlinear models, discussing nonlinear behaviour, harmonics and the Volterra series.
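A minimal sketch of how a static nonlinearity generates harmonics; the cubic nonlinearity and excitation frequency are assumed:

% Minimal sketch (assumed nonlinearity): a static cubic nonlinearity excited
% by a single sine produces harmonics at multiples of the excitation frequency.
fs = 1000;  T = 2;  t = (0:1/fs:T-1/fs)';
f0 = 10;                                   % excitation frequency [Hz]
u = sin(2*pi*f0*t);
y = u + 0.5*u.^3;                          % assumed static nonlinearity

N = numel(t);
Y = fft(y)/N;  f = (0:N-1)'*fs/N;
stem(f(1:100), 2*abs(Y(1:100)));           % harmonics visible at f0 and 3*f0
xlabel('f [Hz]'); ylabel('|Y(f)|');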

Lecture 11 - Final Assignment & Time Variant Identification

The first part of this lecture is an explanation of the final assignment. Then 'Nonlinear Models' will be discussed, continuing the topic of lecture 10. The third part of the lecture will explain more about 'Time Variant Identification'.

Lecture 12 - Identification of Joint Impedance

In this final lecture, three case studies are explained: the first two cases concern linear SIPE and the third case concerns nonlinear Parameter Estimation (PE). Of the linear cases, case 1 deals with a 1-DOF system and case 2 with a 3-DOF system. The nonlinear case 3 is about intrinsic and reflexive torque of the ankle in stroke patients.