Adaptive WCDMA: Theory and Practice. Savo G. Glisic. Copyright 2003 John Wiley & Sons, Ltd. ISBN 0-470-84825-1

3 Code acquisition

3.1 OPTIMUM SOLUTION

In this case, the theory starts with a simple problem where, for a received signal r(t) = s(t, θ) + n(t), we have to estimate a generalized time-invariant vector of parameters θ (frequency, phase, delay, data, ...) of a signal s(t, θ) in the presence of Gaussian noise n(t). The best that we can do is to find an estimate θ̂ of the parameter θ for which the a posteriori probability p(θ̂|r) is maximum; hence the name maximum a posteriori probability (MAP) estimate. In other words, the chosen estimate, based on the received signal r, is correct with the highest probability. Practical implementation requires us to locally generate a number of trial values θ̂, to evaluate p(θ̂|r) for each such value, and then to choose the θ̂ for which p(θ̂|r) is maximum. In this chapter we focus only on code acquisition, so the parameter θ includes only the code delay (θ ⇒ τ) and becomes a scalar. Analytically this can be expressed as

θ̂_MAP = arg max_θ̂ p(θ̂|r)    (3.1)

Very often in practice, evaluation of p(θ̂|r) in closed form is not possible. By using the Bayes rule for the joint probability distribution function p(r, θ̂),

p(r) p(θ̂|r) = p(θ̂) p(r|θ̂)    (3.2)

and assuming a uniform prior distribution of θ̂, maximizing p(θ̂|r) becomes equivalent to maximizing p(r|θ̂), a function that can be determined more easily.
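As a minimal numerical sketch (not from the book; the signal model, noise level, and variable names are illustrative assumptions), the MAP search can be implemented by evaluating the Gaussian likelihood p(r|θ̂) on a finite grid of trial delays and weighting by the prior, as in equation (3.2); with a uniform prior the posterior maximum coincides with the likelihood maximum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a known +/-1 chip sequence, cyclically delayed by an
# unknown integer number of samples and observed in Gaussian noise.
N = 64
code = rng.choice([-1.0, 1.0], size=N)           # local code replica s(t, theta)
true_delay = 17
r = np.roll(code, true_delay) + 0.5 * rng.standard_normal(N)

sigma2 = 0.25                                    # assumed noise variance
trial_delays = np.arange(N)                      # grid of trial values of theta
prior = np.full(N, 1.0 / N)                      # uniform prior p(theta)

# Gaussian log-likelihood log p(r | theta), up to an additive constant
loglik = np.array([-np.sum((r - np.roll(code, d)) ** 2) / (2 * sigma2)
                   for d in trial_delays])

# Posterior p(theta | r) is proportional to p(theta) p(r | theta), eq. (3.2);
# subtract the max log-likelihood before exponentiating for numerical safety
post = prior * np.exp(loglik - loglik.max())
post /= post.sum()

theta_map = int(trial_delays[np.argmax(post)])   # eq. (3.1) on the grid
print(theta_map)
```

Because the prior is uniform, the same delay would be selected by maximizing the likelihood alone, which is exactly the ML simplification derived next.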
This algorithm is known as maximum likelihood (ML) estimation and can be defined analytically as

θ̂_ML = arg max_θ̂ p(r|θ̂)    (3.3)

It is straightforward to show that, in the case of Gaussian noise, the ML principle necessitates the search for the value of θ̂ that maximizes the likelihood function defined as

L(θ̂) = ∫ r(t) s(t, θ̂) dt − (1/2) ∫ s²(t, θ̂) dt    (3.4)

where s(t, θ̂) is the locally generated replica of the signal with a trial value θ̂. For a given signal power, the second term in the previous equation is a constant, so that the maximization is equivalent to the maximization of the first term only. This can be expressed as
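A hedged sketch of this correlation search (variable names and signal parameters are illustrative, not from the book): for a constant-envelope code the energy term in equation (3.4) is the same for every trial delay, so the ML acquisition reduces to sliding the local replica across the received samples and picking the delay that maximizes the correlation term ∫ r(t) s(t, θ̂) dt.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical constant-envelope (+/-1) code: the energy term of eq. (3.4)
# is identical for all trial delays, leaving only the correlation term.
N = 128
code = rng.choice([-1.0, 1.0], size=N)
true_delay = 41
r = np.roll(code, true_delay) + 0.7 * rng.standard_normal(N)

# Discrete analogue of the first term of eq. (3.4): correlate the received
# samples with every cyclic shift of the local code replica.
corr = np.array([np.dot(r, np.roll(code, d)) for d in range(N)])

theta_ml = int(np.argmax(corr))                  # eq. (3.3) for code delay
print(theta_ml)
```

In practice this sliding correlation is usually computed in one pass as a circular correlation via the FFT rather than with an explicit loop over shifts.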