KEEE494: 2nd Semester 2009
Lab II
Lab 2: Generation of Correlated Random Samples

1 Random Vector

• A random vector X with n components is written as X = [X1 X2 · · · Xn]', where each Xk is a random variable.

• Definition (vector correlation): The correlation of a random vector X is an n × n matrix RX with (i, j)th element RX(i, j) = E[Xi Xj]. In vector notation, RX = E[XX'].

• Example: If X = [X1 X2 X3]', the correlation matrix of X is

            [ E[X1^2]   E[X1 X2]  E[X1 X3] ]
       RX = [ E[X2 X1]  E[X2^2]   E[X2 X3] ]
            [ E[X3 X1]  E[X3 X2]  E[X3^2]  ]
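As a quick numerical illustration of the definition RX = E[XX'] (a NumPy sketch added here for reference; the lab itself uses MATLAB, and the choice of three independent standard normal components is just illustrative), the correlation matrix can be estimated by averaging the outer products of sample vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 100_000
# Illustrative choice: three independent, zero-mean, unit-variance components
X = rng.standard_normal((n, N))
# Estimate R_X = E[X X'] by averaging X X' over the N sample columns
R_hat = X @ X.T / N
# For independent zero-mean unit-variance components, R_X is the identity matrix
print(np.round(R_hat, 2))
```

With N = 100000 samples the off-diagonal entries come out near zero and the diagonal near one, matching RX = I for this choice of components.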
• Definition (vector covariance): The covariance of a random vector X is an n × n matrix CX with components CX(i, j) = Cov[Xi, Xj]. In vector notation, CX = E[(X − µX)(X − µX)'].

• Example: If X = [X1 X2 X3]', the covariance matrix of X is

            [ Var[X1]      Cov[X1, X2]  Cov[X1, X3] ]
       CX = [ Cov[X2, X1]  Var[X2]      Cov[X2, X3] ]
            [ Cov[X3, X1]  Cov[X3, X2]  Var[X3]     ]

• Theorem: For a random vector X with correlation matrix RX, covariance matrix CX, and expected value vector µX,

       CX = RX − µX µX'.

  Proof: Expanding the product inside the expectation,

       CX = E[(X − µX)(X − µX)']
          = E[XX' − X µX' − µX X' + µX µX']
          = E[XX'] − E[X] µX' − µX E[X'] + µX µX'.

  Since E[X] = µX is a constant vector, the last three terms combine to −µX µX', so CX = RX − µX µX'.

• Example: Find the expected value E[X], the correlation matrix RX, and the covariance matrix CX of the 2-dimensional random vector X with PDF

       fX(x) = 2   for 0 ≤ x1 ≤ x2 ≤ 1,
               0   otherwise.
The elements of the expected value vector are

       E[Xi] = ∫_{-∞}^{∞} ∫_{-∞}^{∞} xi fX(x) dx1 dx2 = ∫_0^1 ∫_0^{x2} 2 xi dx1 dx2,   i = 1, 2.

The integrals give E[X1] = 1/3 and E[X2] = 2/3, so that µX = E[X] = [1/3 2/3]'. The elements of the correlation matrix are

       E[X1^2]  = ∫_0^1 ∫_0^{x2} 2 x1^2  dx1 dx2 = 1/6,
       E[X2^2]  = ∫_0^1 ∫_0^{x2} 2 x2^2  dx1 dx2 = 1/2,
       E[X1 X2] = ∫_0^1 ∫_0^{x2} 2 x1 x2 dx1 dx2 = 1/4.

Therefore,

       RX = [ 1/6  1/4 ]
            [ 1/4  1/2 ]

Then the covariance matrix can be calculated as

       CX = RX − µX µX' = [ 1/6  1/4 ] − [ 1/9  2/9 ] = [ 1/18  1/36 ]
                          [ 1/4  1/2 ]   [ 2/9  4/9 ]   [ 1/36  1/18 ]
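These moments can be checked by simulation (a NumPy sketch added here for reference; the lab itself uses MATLAB). The key observation is that the joint density fX(x) = 2 on 0 ≤ x1 ≤ x2 ≤ 1 is exactly the joint density of the sorted pair (min, max) of two independent uniform(0, 1) variables, so such vectors are easy to generate:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
U = rng.random((2, N))
# Sorting two iid U(0,1) samples column-wise yields pairs with
# joint density 2 on the triangle 0 <= x1 <= x2 <= 1
X1, X2 = np.sort(U, axis=0)
print(X1.mean(), X2.mean())        # close to 1/3 and 2/3
print((X1 * X2).mean())            # close to E[X1 X2] = 1/4
print(np.cov(X1, X2, bias=True))   # close to [[1/18, 1/36], [1/36, 1/18]]
```

The sample means, cross-moment, and covariance agree with the analytic values 1/3, 2/3, 1/4, and CX computed above.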
• MATLAB: Generate Gaussian random vectors X = [X1 X2 X3]' with mean zero and variances sigma^2 = 1, 4, and 9 for X1, X2, X3, respectively, and estimate the correlation matrix. Note that sigma1, sigma2, and sigma3 must be defined before use:

  N=100000;
  sigma1=1; sigma2=2; sigma3=3;
  X1=sigma1*randn(1,N);
  X2=sigma2*randn(1,N);
  X3=sigma3*randn(1,N);
  R(1,1)=mean(X1.^2);  R(1,2)=mean(X1.*X2); R(1,3)=mean(X1.*X3);
  R(2,1)=mean(X2.*X1); R(2,2)=mean(X2.^2);  R(2,3)=mean(X2.*X3);
  R(3,1)=mean(X3.*X1); R(3,2)=mean(X3.*X2); R(3,3)=mean(X3.^2);
  R

  R =
      1.0073    0.0026   -0.0070
      0.0026    4.0102    0.0210
     -0.0070    0.0210    8.9729
As expected, since Xi for i = 1, 2, 3 are independent of each other, the cross-correlations between the components of X are near zero, while the diagonal components are close to the variances of X1, X2, X3. To compute the covariance matrix, you may simply use the built-in function cov:

  X=[X1' X2' X3'];   % N by 3
  R=cov(X)

  R =
      1.0073    0.0026   -0.0070
      0.0026    4.0102    0.0210
     -0.0070    0.0210    8.9729

(Since the mean is zero here, the covariance and correlation matrices coincide.)
• Now, the question is how we can generate correlated random vectors. This requires a little background in linear algebra, described below.

• Theorem: Given an n-dimensional standard normal random vector Z, an invertible n × n matrix A, and an n-dimensional vector b, X = AZ + b is an n-dimensional Gaussian random vector with expected value µX = b and covariance matrix CX = AA'.

• Theorem: For a Gaussian vector X with covariance matrix CX, there always exists a matrix A such that CX = AA'.

• Proof: To verify this fact, we connect some simple facts:
  – CX is a positive semidefinite covariance matrix (i.e., every eigenvalue of CX is nonnegative).
  – The definition of the Gaussian vector PDF requires the existence of CX^(-1). Hence, for a Gaussian vector X, all eigenvalues of CX are nonzero and, combined with the above, positive.
  – Since CX is a real symmetric matrix, it has an eigendecomposition CX = U D U' (which, for a symmetric positive definite matrix, coincides with its singular value decomposition), where D = diag[d1, ..., dn] is the diagonal matrix of eigenvalues of CX. Since each di is positive, we can define D^(1/2) = diag[sqrt(d1), ..., sqrt(dn)] and write

       CX = U D^(1/2) D^(1/2) U' = (U D^(1/2)) (U D^(1/2))'.

    We see that A = U D^(1/2).

• MATLAB: Generate Gaussian random vectors having a specified mean vector mx and covariance matrix Cx:

  N=100000;
  mx=[0 0];
  Cx=[1 1/2;1/2 1];
  [U D]=eig(Cx);   % if you multiply U*D*U', then you get Cx
  A=U*sqrt(D);
  Z=randn(2,N);    % Z=[z1 z2]' where z1 and z2 are standard Gaussian random variables
  X=A*Z;
  X1=X(1,:)+mx(1);
  X2=X(2,:)+mx(2);
  X=[X1' X2'];
  R=cov(X)

  R =
      1.0033    0.5010
      0.5010    1.0013
As expected, the estimated covariance matrix is very close to the specified covariance matrix Cx.
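An alternative factorization satisfying CX = AA' is the Cholesky decomposition; the MATLAB code above uses the eigendecomposition instead, so this NumPy sketch is an equivalent added for comparison, not the lab's method:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
mx = np.array([0.0, 0.0])
Cx = np.array([[1.0, 0.5],
               [0.5, 1.0]])
# The lower-triangular Cholesky factor A satisfies Cx = A A',
# just like A = U D^(1/2) from the eigendecomposition
A = np.linalg.cholesky(Cx)
Z = rng.standard_normal((2, N))   # independent standard normal components
X = A @ Z + mx[:, None]           # X = A Z + b has covariance A A' = Cx
print(np.round(np.cov(X), 2))     # close to Cx
```

Any A with AA' = CX works; Cholesky is a common choice because it is cheap to compute and triangular.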
2 Autocorrelation
In the previous section we generated correlated random vectors and estimated their covariance. In this section we generate a sample function of a stationary random process and estimate its autocorrelation and power spectrum.

• Generate a discrete-time sequence of N = 1000 i.i.d. uniformly distributed random numbers in the interval (−1/2, 1/2) and compute the estimate of the autocorrelation of the sequence {Xn}:

       phi_hat_x(m) = 1/(N−m)   * sum_{n=1}^{N−m}    Xn X(n+m),   m = 0, 1, ..., M
       phi_hat_x(m) = 1/(N−|m|) * sum_{n=|m|+1}^{N}  Xn X(n+m),   m = −1, −2, ..., −M
  echo on
  N=1000; M=50;
  phi_av=zeros(1,M+1);   % initialize the ensemble-average accumulator
  for j=1:10,            % take the ensemble average over ten realizations
    X=rand(1,N)-1/2;     % N iid uniformly distributed random variables
                         % between -1/2 and 1/2
    phi=phi_est(X,M);    % autocorrelation of the realization
    phi_av=phi_av+phi;   % sum of the autocorrelations
    echo off;
  end;
  echo on;
  phi_av=phi_av/10;      % ensemble-average autocorrelation
  plot([0:1:M],phi_av);
  xlabel('m')
  ylabel('\phi(m)')
  title('Autocorrelation function')
  function [phi]=phi_est(X,M)
  % [phi]=phi_est(X,M)
  % phi_est estimates the autocorrelation of the sequence of
  % random variables given in X. Only phi(0), phi(1), ..., phi(M)
  % are computed.
  % Note that phi(m) actually means phi(m-1).
  N=length(X);
  phi=zeros(1,M+1);
  for m=1:M+1,
    for n=1:N-m+1,
      phi(m)=phi(m)+X(n)*X(n+m-1);
    end;
    phi(m)=phi(m)/(N-m+1);
  end;
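For reference, the same estimator can be written compactly in NumPy (a sketch added alongside the MATLAB phi_est, not part of the original lab). For uniform samples on (−1/2, 1/2) the true autocorrelation is phi(0) = 1/12 ≈ 0.083 (the variance) and phi(m) = 0 for m ≠ 0:

```python
import numpy as np

def phi_est(x, M):
    """Estimate phi(m) = E[X_n X_{n+m}] for m = 0, 1, ..., M."""
    N = len(x)
    return np.array([np.dot(x[:N - m], x[m:]) / (N - m) for m in range(M + 1)])

rng = np.random.default_rng(3)
N, M = 1000, 50
# Average the estimate over ten independent realizations, as in the MATLAB script
phi_av = np.mean([phi_est(rng.random(N) - 0.5, M) for _ in range(10)], axis=0)
print(phi_av[0])   # close to 1/12, the variance of U(-1/2, 1/2)
```

Plotting phi_av against m should show a single spike near 1/12 at m = 0 and values near zero elsewhere, as expected for an i.i.d. (white) sequence.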
Lab Homework 2

1. Consider the Gaussian random vector X = [X1, X2, X3]' with mean E[X] = [0 1 2]' and variances Var(X) = [1 exp(−δ) exp(−2δ)], where δ is a parameter set to 0.5 for this problem. The covariance matrix of X is given as

            [ 1    0.6      0.2      ]
       CX = [ 0.6  exp(−δ)  0.2      ]        (1)
            [ 0.2  0.2      exp(−2δ) ]

   Generate 100000 samples of X, estimate the covariance matrix, and compare your estimated value with (1).