Saturday, June 5, 2010

Classification of Signals

Along with the classification of signals below, it is also important to understand the Classification of Systems.
Continuous-Time vs. Discrete-Time:
As the names suggest, this classification is determined by whether the time axis (x-axis) is discrete (countable) or continuous (Figure 1). A continuous-time signal has a value for every real number along the time axis. In contrast, a discrete-time signal is often created by sampling a continuous signal in accordance with the sampling theorem, so it has values only at equally spaced intervals along the time axis.
Figure 1
Analog vs. Digital:
The difference between analog and digital is similar to the difference between continuous-time and discrete-time. In this case, however, the difference is with respect to the value of the function (y-axis) (Figure 2). Analog corresponds to a continuous y-axis, while digital corresponds to a discrete y-axis. An easy example of a digital signal is a binary sequence, where the values of the function can only be one or zero.
Figure 2
Periodic vs. Aperiodic:
Periodic signals repeat with some period T, while aperiodic, or nonperiodic, signals do not (Figure 3). We can define a periodic function through the following mathematical expression, where t can be any number and T is a positive constant:
f(t) = f(t + T) ……(1)
The fundamental period of our function f(t) is the smallest value of T that still allows Equation (1) to be true.
(a) A periodic signal with period T0


(b) An aperiodic signal
Figure 3


Causal vs. Anticausal vs. Noncausal:
Causal signals are signals that are zero for all negative time, while anticausal signals are zero for all positive time. Noncausal signals have nonzero values in both positive and negative time (Figure 4).
(a) A causal signal


(b) An anticausal signal

(c) A noncausal signal
Figure 4

Even vs. Odd:
An even signal is any signal f such that f(t) = f(-t). Even signals can be easily spotted, as they are symmetric about the vertical axis. An odd signal, on the other hand, is a signal f such that f(t) = -f(-t) (Figure 5).
(a) An even signal

(b) An odd signal
Figure 5

Using the definitions of even and odd signals, we can show that any signal can be written as a combination of an even and an odd signal. That is, every signal has an odd-even decomposition. To demonstrate this, we have to look no further than a single equation:
f(t) = 1/2(f(t) + f(-t)) + 1/2(f(t) - f(-t)) ……(2)
By expanding and simplifying this expression, it can be shown to be true. Also, it can be shown that f(t) + f(-t) fulfills the requirement of an even function, while f(t) - f(-t) fulfills the requirement of an odd function (Figure 6). A short numerical sketch of the decomposition follows Figure 6.
(a) The signal we will decompose using odd-even decomposition

(b) Even part: e(t)=1/2(f(t)+f(-t))

(c) Odd part: o(t)=1/2(f(t)−f(-t))
Figure 6
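As a quick numerical check of Equation (2), here is a minimal sketch in Python with NumPy; the example signal and the symmetric time grid are assumptions chosen for illustration.

import numpy as np

t = np.linspace(-5, 5, 1001)          # symmetric time grid, so f[::-1] samples f(-t)
f = np.exp(-t) * (t >= 0)             # example signal: a causal decaying exponential

f_rev = f[::-1]                       # f(-t)
even = 0.5 * (f + f_rev)              # e(t) = 1/2 (f(t) + f(-t))
odd = 0.5 * (f - f_rev)               # o(t) = 1/2 (f(t) - f(-t))

assert np.allclose(even + odd, f)     # the two parts sum back to the original signal
assert np.allclose(even, even[::-1])  # even part is symmetric about the vertical axis
assert np.allclose(odd, -odd[::-1])   # odd part is antisymmetric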
Deterministic vs. Random:
A deterministic signal is a signal in which each value is fixed and can be determined by a mathematical expression, rule, or table. Because of this, the future values of the signal can be calculated from past values with complete confidence. A random signal, on the other hand, has considerable uncertainty about its behavior: its future values cannot be predicted exactly and can usually only be estimated from averages over sets of signals (Figure 7).
(a) Deterministic Signal

(b) Random Signal
Figure 7

Right-Handed vs. Left-Handed Signals:
Right-handed and left-handed signals are those whose values are zero between a given point in time t1 and negative or positive infinity, respectively. Mathematically speaking, a right-handed signal is defined as any signal where f(t) = 0 for t < t1 < ∞, and a left-handed signal is defined as any signal where f(t) = 0 for t > t1 > -∞. See Figure 8 for an example. Both signals "begin" at t1 and then extend to positive or negative infinity with mostly nonzero values.
(a) Right-handed signal
(b) Left-handed signal
Figure 8

Finite vs. Infinite Length:
As the name implies, signals can be characterized by whether they have a finite or an infinite length set of values. Most finite-length signals are used when dealing with discrete-time signals or a given sequence of values. Mathematically speaking, f(t) is a finite-length signal if it is nonzero only over a finite interval
t1 < t < t2, where t1 > -∞ and t2 < ∞. An example can be seen in Figure 9. Similarly, an infinite-length signal f(t) is defined as nonzero over all real numbers, -∞ < t < ∞.
Figure 9: Finite-Length Signal.
Note that it has nonzero values only on a set, finite interval.

Signal Operations

This module will look at three signal operations: time shifting, time scaling, and time reversal. Signal operations are operations on the time variable of the signal. These operations are very common components of real-world systems and, as such, should be understood thoroughly when learning about signals and systems.

Time Shifting:
Time shifting is, as the name suggests, the shifting of a signal in time. This is done by adding or subtracting the amount of the shift to the time variable in the function. Subtracting a fixed amount from the time variable will shift the signal to the right (delay) that amount, while adding to the time variable will shift the signal to the left (advance).
Figure 1: f(t - T) moves (delays) the signal to the right by T.
Time Scaling:
Time scaling compresses and dilates a signal by multiplying the time variable by some amount. If that amount is greater than one, the signal becomes narrower and the operation is called compression, while if the amount is less than one, the signal becomes wider and the operation is called dilation. It often takes people quite a while to get comfortable with these operations, as intuition often suggests that multiplying by an amount greater than one should dilate and by an amount less than one should compress.
Figure 2: f(at) compresses the signal by a factor of a.

Time Reversal:
A natural question to consider when learning about time scaling is: What happens when the time variable is multiplied by a negative number? The answer to this is time reversal. This operation is the reversal of the time axis, or flipping the signal over the y-axis.
Figure 3: Reverse the time axis.
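The following minimal sketch (Python with NumPy; the rectangular test pulse is an illustrative assumption) applies all three operations by acting directly on the time variable:

import numpy as np

def f(t):
    # Example signal: a rectangular pulse on [0, 1).
    return np.where((t >= 0) & (t < 1), 1.0, 0.0)

t = np.linspace(-4, 4, 801)

delayed = f(t - 2)       # subtract from t: shift right (delay) by 2
advanced = f(t + 2)      # add to t: shift left (advance) by 2
compressed = f(2 * t)    # factor > 1: signal becomes narrower (compression)
dilated = f(0.5 * t)     # factor < 1: signal becomes wider (dilation)
reversed_sig = f(-t)     # negative factor: time reversal, flip over the y-axis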


Friday, June 4, 2010

Useful Signals

Before looking at this module, hopefully you have some basic idea of what a signal is and what basic classifications and properties a signal can have. To review, a signal is merely a function defined with respect to an independent variable. This variable is often time but could represent an index of a sequence or any number of things in any number of dimensions. Most, if not all, signals that you will encounter in your studies and the real world can be created from the basic signals we discuss below. Because of this, these elementary signals are often referred to as the building blocks for all other signals.
Sinusoids:
Probably the most important elemental signal that you will deal with is the real-valued sinusoid. In its continuous-time form, we write the general form as
x(t) = Acos(ωt + Ф) ……(1)
where A is the amplitude, ω is the (angular) frequency, and Ф represents the phase. Note that it is common to see ωt replaced with 2πft. Since sinusoidal signals are periodic, we can express the period of these, or any periodic signal, as
T = 2π/ω
Figure 1: Sinusoid with A = 2, ω = 2, and Ф = 0.
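A short sketch, assuming the parameter values of Figure 1, that generates the sinusoid and numerically confirms the period T = 2π/ω:

import numpy as np

A, w, phi = 2.0, 2.0, 0.0     # amplitude, frequency, and phase from Figure 1
T = 2 * np.pi / w             # period

t = np.linspace(0, 3 * T, 601)
x = A * np.cos(w * t + phi)   # x(t) = A cos(wt + phi)

# shifting the argument by one full period leaves the signal unchanged
assert np.allclose(A * np.cos(w * (t + T) + phi), x)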

Complex Exponential Function:
Maybe as important as the general sinusoid, the complex exponential function will become a critical part of your study of signals and systems. Its general form is written as
f(t) = Be^(st) ……(2)
where s, shown below, is a complex number in terms of σ, the attenuation constant, and ω, the frequency:
s = σ + jω
Real Exponentials:
Just as the name suggests, real exponentials contain no imaginary numbers and are expressed simply as
f(t) = Be^(αt) ……(3)
where both B and α are real parameters. Unlike the complex exponential, which oscillates, the real exponential either decays or grows depending on the value of α:
· Decaying Exponential, when α < 0
· Growing Exponential, when α > 0
Figure 2: Examples of Real Exponentials (a) Decaying Exponential (b) Growing Exponential
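A minimal sketch, assuming B = 1 and illustrative values for σ, ω, and α, showing how the real parameter controls decay versus growth while ω produces oscillation:

import numpy as np

t = np.linspace(0, 5, 501)

sigma, w = -0.5, 2 * np.pi    # illustrative attenuation constant and frequency
s = sigma + 1j * w            # s = sigma + j*omega
f_complex = np.exp(s * t)     # B e^(st) with B = 1: a decaying oscillation

f_decay = np.exp(-0.5 * t)    # real exponential with alpha < 0: decays
f_grow = np.exp(0.5 * t)      # real exponential with alpha > 0: grows

# the envelope of the complex exponential is the real exponential e^(sigma t)
assert np.allclose(np.abs(f_complex), np.exp(sigma * t))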

Unit Impulse Function:
The unit impulse "function" (or Dirac delta function) is a signal that has infinite height and infinitesimal width. However, because of the way it is defined, it actually integrates to one. While in the engineering world this signal is quite nice and aids in the understanding of many concepts, some mathematicians take issue with it being called a function, since it is not well defined at t = 0. Engineers reconcile this problem by keeping it inside integrals, where it is well behaved. The unit impulse is most commonly denoted as
δ (t)
The most important property of the unit impulse is shown in the following integral:
∫_{-∞}^{∞} δ(t) dt = 1
Unit-Step Function:
Another very basic signal is the unit-step function, defined as
u(t) = { 0, t < 0
         1, t > 0 }
and, in discrete time, as
u[n] = { 0, n < 0
         1, n ≥ 0 }
Figure 5: Basic Step Functions (a) Continuous-Time Unit-Step Function (b) Discrete-Time Unit- Step Function.

Note that the step function is discontinuous at the origin; however, it does not need to be defined here as it does not matter in signal theory. The step function is a useful tool for testing and for defining other signals. For example, when different shifted versions of the step function are multiplied by other signals, one can select a certain portion of the signal and zero out the rest.
Ramp Function:
The ramp function is closely related to the unit step discussed above. Where the unit step goes from zero to one instantaneously, the ramp function better resembles a real-world signal, where some time is needed for the signal to increase from zero to its set value, one in this case. We define a ramp function that rises over an interval t0 as follows:
r(t) = { 0, t < 0
         t/t0, 0 ≤ t ≤ t0
         1, t > t0 }
Figure 7: Ramp Function.
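A minimal sketch of both signals; the helper names unit_step and ramp, and the rise time t0, are introduced here purely for illustration:

import numpy as np

def unit_step(t):
    # u(t): 0 for t < 0, 1 for t > 0 (the value at the origin does not matter)
    return np.where(t >= 0, 1.0, 0.0)

def ramp(t, t0=1.0):
    # rises linearly from 0 to 1 over [0, t0], then holds at 1
    return np.clip(t / t0, 0.0, 1.0)

t = np.linspace(-2, 3, 501)
u = unit_step(t)
r = ramp(t, t0=1.0)

# shifted steps can gate out a portion of another signal and zero out the rest
gated = np.sin(t) * (unit_step(t) - unit_step(t - 2))   # sin(t) on [0, 2), zero elsewhere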


The Impulse Function

In engineering, we often deal with the idea of an action occurring at a point. Whether it is a force at a point in space or a signal at a point in time, it becomes worthwhile to develop some way of quantitatively defining it. This leads us to the idea of a unit impulse, probably the second most important function, next to the complex exponential, in a signals and systems course.

Dirac Delta Function:
The Dirac Delta function, often referred to as the unit impulse or delta function, is the function that defines the idea of a unit impulse. This function is infinitesimally narrow and infinitely tall, yet integrates to unity (one). Perhaps the simplest way to visualize it is as a rectangular pulse from a - Є/2 to a + Є/2 with a height of 1/Є. As we take the limit of this pulse as Є → 0, we see that the width tends to zero and the height tends to infinity while the total area remains constant at one. The impulse function is often written as δ(t).

Figure 1: This is one way to visualize the Dirac Delta Function.

Figure 2: Since it is quite difficult to draw something that is infinitely tall, we represent the Dirac delta with an arrow centered at the point where it is applied. If we wish to scale it, we write the scaling value next to the point of the arrow. This is a unit impulse (no scaling).

The Sifting Property of the Impulse:
The first step to understanding what this unit impulse function gives us is to examine what happens when we multiply another function by it.
f(t)δ(t) = f(0)δ(t) ……(1)
Since the impulse function is zero everywhere except the origin, we essentially just "pick off" the value of the function we are multiplying by evaluated at zero.
At first glance this may not appear to give us much, since we already know that the impulse evaluated at zero is infinite, and anything times infinity is infinity. However, what happens if we integrate this?

∫_{-∞}^{∞} f(t)δ(t) dt = f(0) ……(2)
It quickly becomes apparent that what we end up with is simply the function evaluated at zero. Had we used δ(t - T) instead of δ(t), we could have "sifted out" f(T). This is what we call the Sifting Property of the Dirac function, which is often used to define the unit impulse.

The Sifting Property is very useful in developing the idea of convolution, which is one of the fundamental principles of signal processing. By using convolution and the sifting property, we can represent an approximation of any system's output if we know the system's impulse response and input.
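A numerical illustration of the sifting property, approximating δ(t - T) by a narrow rectangular pulse of width Є and height 1/Є; the test function and the values of T and Є are arbitrary choices:

import numpy as np

f = np.cos                    # any smooth test function
T, eps = 0.7, 1e-3            # sift out f(T); pulse width eps

t = np.linspace(T - eps / 2, T + eps / 2, 1001)
delta_approx = np.full_like(t, 1.0 / eps)    # rectangular pulse of unit area

vals = f(t) * delta_approx                   # integrand f(t) * delta(t - T)
integral = np.sum((vals[:-1] + vals[1:]) / 2) * (t[1] - t[0])   # trapezoid rule

assert abs(integral - f(T)) < 1e-6           # the integral "sifts out" f(T)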

Other Impulse Properties:
Below we will briefly list a few of the other properties of the unit impulse without going into detail of their proofs - we will leave that up to you to verify as most are straightforward. Note that these properties hold for continuous and discrete time.

Unit Impulse Properties
· δ(αt) = (1/|α|)δ(t)
· δ(t) = δ(-t) (the impulse is an even function)
· u(t) = ∫_{-∞}^{t} δ(τ) dτ (the unit step is the running integral of the impulse)
Discrete-Time Impulse (Unit Sample):
The extension of the unit impulse function to discrete time is quite trivial. All we really need to realize is that integration in continuous time equates to summation in discrete time. Therefore, we are looking for a signal that sums to one and is zero everywhere except at zero.

Discrete-Time Impulse
δ[n] = { 1, n = 0
         0, otherwise }


Figure 3: The graphical representation of the discrete-time impulse function


Looking at the plot of any discrete-time signal, one can notice that all discrete signals are composed of a set of scaled, time-shifted unit samples. If we let the value of a sequence at each integer k be denoted by s[k] and write the unit sample that occurs at k as δ[n - k], we can write any signal as the sum of delayed unit samples scaled by the signal values, or weighted coefficients:
s[n] = Σ_{k=-∞}^{∞} s[k] δ[n - k]
This decomposition is strictly a property of discrete-time signals and proves to be a very useful property.
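A minimal sketch of this decomposition; the helper unit_sample and the random example sequence are illustrative assumptions:

import numpy as np

def unit_sample(n, k):
    # delta[n - k]: 1 where n == k, 0 elsewhere
    return (n == k).astype(float)

n = np.arange(-5, 6)          # a short range of sample indices
s = np.random.randn(len(n))   # any discrete signal s[k] over that range

# rebuild the signal as a sum of unit samples scaled by the signal values
rebuilt = sum(s[i] * unit_sample(n, k) for i, k in enumerate(n))
assert np.allclose(rebuilt, s)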

The Impulse Response:
The impulse response is exactly what its name implies: the response of an LTI system, such as a filter, when the system's input is the unit impulse (or unit sample). A system can be completely described by its impulse response, due to the idea mentioned above that any signal can be represented as a superposition of scaled, shifted unit impulses. An impulse response gives an equivalent description of a system to a transfer function, since they are Laplace transforms of each other.
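A small sketch of this idea in discrete time, assuming an illustrative geometric impulse response: once h[n] is known, np.convolve carries out the superposition sum, and feeding in a unit sample returns h[n] itself.

import numpy as np

h = 0.5 ** np.arange(8)    # impulse response of an example LTI filter
x = np.random.randn(32)    # an arbitrary input signal

y = np.convolve(x, h)      # output: superposition of scaled, shifted copies of h

delta = np.zeros(8)
delta[0] = 1.0             # discrete-time unit sample
assert np.allclose(np.convolve(delta, h)[:8], h)   # the response to delta is h itself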

Notation: Most texts use δ(t) and δ[n] to denote the continuous-time and discrete-time unit impulse, respectively; the corresponding impulse responses are commonly written h(t) and h[n].

System Classifications and Properties

In this module some of the basic classifications of systems will be briefly introduced and the most important properties of these systems explained. As can be seen, the properties of a system provide an easy way to distinguish one system from another. Understanding these basic differences between systems, and their properties, is a fundamental concept used in all signals and systems courses, such as digital signal processing (DSP). Once a set of systems can be identified as sharing particular properties, one no longer has to prove a certain characteristic of a system each time; it can simply be accepted due to the system's classification. Also remember that the classification presented here is neither exclusive (systems can belong to several different classifications) nor unique (there are other methods of classification).

Classification of Systems:
Along with the classification of systems below, it is also important to understand other Classification of Signals.

Continuous vs. Discrete:
This may be the simplest classification to understand, as the idea of discrete time and continuous time is one of the most fundamental properties of all signals and systems. A system where the input and output signals are continuous is a continuous system, and one where the input and output signals are discrete is a discrete system.

Linear vs. Nonlinear:
A linear system is any system that obeys the properties of scaling (homogeneity) and superposition (additivity), while a nonlinear system is any system that does not obey at least one of these. To show that a system H obeys the scaling property is to show that
H(kf(t)) = kH(f(t)) ……(1)

Figure 1: A block diagram demonstrating the scaling property of linearity
To demonstrate that a system H obeys the superposition property of linearity is to show that
H(f1(t) + f2(t)) = H(f1(t)) + H(f2(t)) ……(2)

Figure 2: A block diagram demonstrating the superposition property of linearity

It is possible to check a system for linearity in a single (though larger) step. To do this, simply combine the first two steps to get
H(k1f1(t) + k2f2(t)) = k1H(f1(t)) + k2H(f2(t)) ……(3)
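Finite testing cannot prove linearity, but a quick numerical sketch can falsify it; the two example systems and the helper looks_linear below are illustrative assumptions:

import numpy as np

def scaler(x):     # y(t) = 2 x(t): linear
    return 2 * x

def squarer(x):    # y(t) = x(t)^2: nonlinear
    return x ** 2

def looks_linear(H, x1, x2, k1=2.0, k2=-3.0):
    # check Equation (3) on sample inputs: H(k1 x1 + k2 x2) == k1 H(x1) + k2 H(x2)
    return np.allclose(H(k1 * x1 + k2 * x2), k1 * H(x1) + k2 * H(x2))

t = np.linspace(0, 1, 101)
x1, x2 = np.sin(t), np.cos(t)
print(looks_linear(scaler, x1, x2))    # True
print(looks_linear(squarer, x1, x2))   # False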

Time Invariant vs. Time Variant:
A time invariant system is one that does not depend on when it occurs: the shape of the output does not change with a delay of the input. That is to say that for a system H where H(f(t)) = y(t), H is time invariant if for all T
H(f(t - T)) = y(t - T) ……(4)

Figure 3: This block diagram shows the condition for time invariance: the output is the same whether the delay is applied to the input or to the output.

When this property does not hold for a system, then it is said to be time variant, or time-varying.
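The same kind of numerical sketch can falsify time invariance; the example systems are illustrative, and np.roll (a circular shift) stands in for a delay:

import numpy as np

def moving_average(x):    # y[n] = (x[n] + x[n-1]) / 2: time invariant
    return (x + np.roll(x, 1)) / 2

def time_gain(x):         # y[n] = n * x[n]: time varying (depends on when the input occurs)
    return np.arange(len(x)) * x

def looks_time_invariant(H, x, shift=3):
    # compare: delay the input then apply H, versus apply H then delay the output
    return np.allclose(H(np.roll(x, shift)), np.roll(H(x), shift))

x = np.random.randn(64)
print(looks_time_invariant(moving_average, x))   # True
print(looks_time_invariant(time_gain, x))        # False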

Causal vs. Non-causal:
A causal system is one that is non-anticipative; that is, the output may depend on current and past inputs, but not future inputs. All "real-time" systems must be causal, since they cannot have future inputs available to them.

One may think the idea of future inputs does not seem to make much physical sense; however, we have only been dealing with time as our independent variable so far, which is not always the case. Imagine instead that we wanted to do image processing. Then the independent variable might represent pixels to the left and right (the "future") of the current position in the image, and we would have a non-causal system.

Figure 4: (a) For a typical system to be causal, (b) the output at time t0 can only depend on the portion of the input signal before t0.

Stable vs. Unstable:
A stable system is one where the output does not diverge as long as the input does not diverge. There are many ways to say that a signal "diverges"; for example it could have infinite energy. One particularly useful definition of divergence relates to whether the signal is bounded or not. Then a system is referred to as bounded input-bounded output (BIBO) stable if every possible bounded input produces a bounded output.

Representing this in a mathematical way, a stable system must have the following property, where x(t) is the input and y(t) is the output. The output must satisfy the condition
|y(t)| ≤ My < ∞ ……(5)
whenever we have an input to the system that can be described as
|x(t)| ≤ Mx < ∞ ……(6)
Mx and My are finite positive numbers, and these relationships must hold for all t.
If these conditions are not met, i.e., a system's output grows without limit (diverges) from a bounded input, then the system is unstable. Note that the BIBO stability of a linear time-invariant (LTI) system is neatly described in terms of whether or not its impulse response is absolutely integrable.
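For a discrete-time LTI system, the analogous condition is an absolutely summable impulse response. A minimal sketch with two illustrative impulse responses:

import numpy as np

n = np.arange(0, 200)

h_stable = 0.5 ** n      # geometric decay: sum of |h[n]| converges, so BIBO stable
h_unstable = 1.1 ** n    # geometric growth: sum of |h[n]| diverges, so unstable

print(np.sum(np.abs(h_stable)))     # approaches 2 (finite)
print(np.sum(np.abs(h_unstable)))   # keeps growing as more terms are included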