% -*- mode: noweb; noweb-default-code-mode: R-mode; -*-
\documentclass[nojss]{jss}
\usepackage{dsfont}
\usepackage{bbm}
\usepackage{amsfonts}
\usepackage{amsmath}
\usepackage{amssymb}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% declarations for jss.cls
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%% just as usual
\author{Robin K. S. Hankin\\Auckland University of Technology}
\title{The complex multivariate Gaussian distribution}
%\VignetteIndexEntry{A vignette for the cmvnorm package}

%% for pretty printing and a nice hypersummary also set:
\Plainauthor{Robin K. S. Hankin}

%% an abstract and keywords
\Abstract{Here I introduce \pkg{cmvnorm}, a complex generalization of
  the \pkg{mvtnorm} package.  A complex generalization of the
  Gaussian process is suggested and numerical results presented using
  the package.  An application in the context of approximating the
  Weierstrass sigma function using a complex Gaussian process is
  given.  An earlier version of this vignette appeared as
  \cite{hankin2015}, and the package is maintained at
  \url{http://github.com/RobinHankin/cmvnorm}.}

\Keywords{Complex multivariate Gaussian distribution, Gaussian
  process, Weierstrass sigma function, emulator}

%% publication information
%% NOTE: This needs to filled out ONLY IF THE PAPER WAS ACCEPTED.
%% If it was not (yet) accepted, leave them commented.
%% \Volume{13}
%% \Issue{9}
%% \Month{September}
%% \Year{2004}
%% \Submitdate{2004-09-29}
%% \Acceptdate{2004-09-29}

%% The address of (at least) one author should be given
%% in the following format:
\Address{
  Robin K. S. Hankin\\
  Auckland University of Technology\\
  2-14 Wakefield Street\\
  Auckland NZ\\
  \email{hankin.robin@gmail.com}\hfill\includegraphics[width=1in]{\Sexpr{system.file("help/figures/cmvnorm.png",package="cmvnorm")}}
}
%% It is also possible to add a telephone and fax number
%% before the e-mail in the following format:
%% Telephone: +43/1/31336-5053
%% Fax: +43/1/31336-734

%% for those who use Sweave please include the following line (with % symbols):
%% need no \usepackage{Sweave.sty}

%% end of declarations %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\newcommand{\bx}{\mathbf x}
\newcommand{\bX}{\mathbf X}
\newcommand{\bt}{\mathbf t}
\newcommand{\by}{\mathbf y}
\newcommand{\bw}{\mathbf w}
\newcommand{\bz}{\mathbf z}
\newcommand{\bZ}{\mathbf Z}
\newcommand{\bm}{\boldsymbol\mu}
\newcommand{\bbeta}{\boldsymbol\beta}
\newcommand{\bepsilon}{\boldsymbol\epsilon}
\newcommand{\bzero}{\mathbf 0}
\newcommand{\mathi}{\mathrm{i}}
\newcommand{\expect}[1]{\mathbb{E}\left[#1\right]}
\newcommand{\norm}[2]{\mathcal{N}_n\left(#1,#2\right)}
\newcommand{\cnorm}[2]{\mathcal{N}\mathcal{C}_n\left(#1,#2\right)}
\newcommand{\herm}[1]{{#1}^\ast}
\newcommand{\COV}{\operatorname{COV}}

\SweaveOpts{}

\begin{document}

\hfill\includegraphics[width=1in]{\Sexpr{system.file("help/figures/cmvnorm.png",package="cmvnorm")}}

\section{Introduction}

Complex-valued random variables find applications in many areas of
science such as signal processing~\citep{kay1989}, radio
engineering~\citep{ozarow1994}, and atmospheric
physics~\citep{mandic2009}.  In this short paper I introduce
\pkg{cmvnorm}, a package for investigating one commonly encountered
complex-valued probability distribution, the complex Gaussian.
The real multivariate Gaussian distribution is well supported
in~\proglang{R}~\citep{rcore2014,genz2014}, having density function

\begin{equation}\label{real_gaussian_PDF}
f\left(\bx;\bm,\Sigma\right) =
\frac{e^{-\frac{1}{2}\left(\bx-\bm\right)^T\Sigma^{-1}\left(\bx-\bm\right)}}{\sqrt{\left|2\pi\Sigma\right|}}\qquad\bx\in\mathbb{R}^n,
\end{equation}

where~$\left|M\right|$ denotes the determinant of matrix~$M$.  Here,
$\bm=\expect{\bX}\in\mathbb{R}^n$ is the mean vector
and~$\Sigma=\expect{\left(\bX-\bm\right)\left(\bX-\bm\right)^T}$ the
covariance of random vector~$\bX$; we
write~$\bX\sim\norm{\bm}{\Sigma}$.

One natural generalization would be to
consider~$\bZ\sim\cnorm{\bm}{\Gamma}$, the complex multivariate
Gaussian, with density function

\begin{equation}\label{complex_Gaussian_PDF}
f\left(\bz;\bm,\Gamma\right) =
\frac{e^{-\herm{\left(\bz-\bm\right)}\Gamma^{-1}\left(\bz-\bm\right)}}{\left|\pi\Gamma\right|}\qquad\bz\in\mathbb{C}^n,
\end{equation}

where $\herm{\bz}$ denotes the Hermitian transpose of complex
vector~$\bz$.  Now~$\bm\in\mathbb{C}^n$ is the complex mean
and~$\Gamma=\expect{\left(\bZ-\bm\right)\herm{\left(\bZ-\bm\right)}}$
is the complex variance; $\Gamma$ is a Hermitian positive definite
matrix.  Note the simpler form of~(\ref{complex_Gaussian_PDF}),
essentially due to Gauss's integral operating more cleanly over the
complex plane than the real line:

\[
\int_\mathbb{C}e^{-\herm{z}z}\,dz=
\int_{x\in\mathbb{R}}\int_{y\in\mathbb{R}}e^{-\left(x^2+y^2\right)}\,dx\,dy=
\int_{\theta=0}^{2\pi}\int_{r=0}^\infty e^{-r^2}r\,dr\,d\theta=\pi.
\]

%That the variance of this distribution
%is~$\Gamma=\mathbb{E}\left(\bz-\bm\right)\left(\bz-\bm\right)^\star$
%follows directly from the definition of expectation:
%\[
%\mathbb{E}z^\star z=
%\int_\mathbb{C}z^\star z e^{-z^\star z}\,dz=
%\int_{\theta=0}^{2\pi}\int_{r=0}^\infty r^2e^{-r^2}r\,dr\,d\theta
%\]

A zero mean complex random vector~$\bZ$ is said to be {\em circularly
symmetric}~\citep{goodman1963} if~$\expect{\bZ\bZ^T}=0$, or
equivalently if~$\bZ$ and~$e^{\mathi\alpha}\bZ$ have identical
distributions for every~$\alpha\in\mathbb{R}$.
Equation~(\ref{complex_Gaussian_PDF}) clearly has this property.

Most results from real multivariate analysis have a direct
generalization to the complex case, as long as ``transpose'' is
replaced by ``Hermitian transpose''.  For example,
$\bX\sim\norm{\bzero}{\Sigma}$ implies
$B\bX\sim\norm{\bzero}{B\Sigma B^T}$ for any constant
matrix~$B\in\mathbb{R}^{m\times n}$, and analogously
$\bZ\sim\cnorm{\bzero}{\Gamma}$
implies~$B\bZ\sim\cnorm{\bzero}{B\Gamma\herm{B}}$,
$B\in\mathbb{C}^{m\times n}$.  Similar generalizations operate for
Schur complement methods on partitioned matrices.

Linear regression also generalizes in the same way.  Specifically,
consider~$\by\in\mathbb{R}^n$.  If~$\by=X\bbeta+\bepsilon$ where~$X$
is an~$n\times p$ design matrix, $\bbeta\in\mathbb{R}^p$ a vector of
regression coefficients, and~$\bepsilon\sim\norm{\bzero}{\Sigma}$ a
vector of errors,
then~$\hat{\bbeta}=\left(X^T\Sigma^{-1}X\right)^{-1}X^T\Sigma^{-1}\by$
is the maximum likelihood estimator for~$\bbeta$.  The complex
generalization is to write~$\bz=Z\bbeta+\bepsilon$,
$Z\in\mathbb{C}^{n\times p}$, $\bbeta\in\mathbb{C}^p$,
$\bepsilon\sim\cnorm{\bzero}{\Gamma}$, which
gives~$\hat{\bbeta}=\left(\herm{Z}\Gamma^{-1}Z\right)^{-1}\herm{Z}\Gamma^{-1}\bz$.
Such considerations suggest a natural complex generalization of the
Gaussian process.
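As a concrete check of the complex regression estimator~$\hat{\bbeta}$
above, the estimator can be computed in a few lines of
base~\proglang{R}.  The following is a minimal sketch, not part of
the package: the design matrix \code{Z}, coefficient vector
\code{beta}, and error variance \code{Gamma} are arbitrary
illustrative values, and the response is taken noise-free so that the
estimator recovers~$\bbeta$ exactly.

<<eval=FALSE>>=
## Minimal sketch of the complex GLS estimator; Z, beta, Gamma are
## arbitrary illustrative values, not package objects.
n <- 5; p <- 2
Z <- matrix(complex(real=rnorm(n*p), imaginary=rnorm(n*p)), n, p)  # design matrix
beta <- c(1+2i, 3-1i)            # coefficients to be recovered
Gamma <- diag(n)                 # error variance, here the identity
z <- Z %*% beta                  # noise-free response for clarity
ct <- function(M){Conj(t(M))}    # Hermitian transpose
solve(ct(Z) %*% solve(Gamma) %*% Z) %*% ct(Z) %*% solve(Gamma) %*% z  # recovers beta
@

Adding draws from~$\cnorm{\bzero}{\Gamma}$ to \code{z} would
reproduce the stochastic setting described above.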
This short vignette introduces the \pkg{cmvnorm} package, which
furnishes some functionality for the complex multivariate Gaussian
distribution, and applies it in the context of a complex
generalization of the \pkg{emulator} package~\citep{hankin2005},
which implements functionality for investigating (real) Gaussian
processes.

\section{The package in use}

Random complex vectors are generated using the \code{rcmvnorm()}
function, analogous to \code{rmvnorm()}:

<<>>=
set.seed(1)
library("cmvnorm", quietly=TRUE)
cm <- c(1,1i)
cv <- matrix(c(2,1i,-1i,2),2,2)
(z <- rcmvnorm(6, mean=cm, sigma=cv))
@

Function \code{dcmvnorm()} returns the density according
to~(\ref{complex_Gaussian_PDF}):

<<>>=
dcmvnorm(z, cm, cv)
@

So it is possible to determine a maximum likelihood estimate for the
mean using direct numerical optimization:

<<>>=
helper <- function(x){c(x[1]+1i*x[2], x[3]+1i*x[4])}
objective <- function(x,cv){-sum(dcmvnorm(z, mean=helper(x), sigma=cv, log=TRUE))}
helper(optim(c(1,0,1,0), objective, cv=cv)$par)
@

(helper functions are needed because \code{optim()} optimizes
over~$\mathbb{R}^n$ as opposed to~$\mathbb{C}^n$).  This shows
reasonable agreement with the true value of the mean and indeed with
the analytic value of the MLE, specifically:

<<>>=
colMeans(z)
@

\section{The Gaussian process}

In the context of the emulator, a (real) Gaussian process is usually
defined as a random
function~$\eta\colon\mathbb{R}^p\longrightarrow\mathbb{R}$ such that,
for any set of points~$\left\{\bx_1,\ldots,\bx_n\right\}$ in its
domain~${\mathcal D}$, the random
vector~$\left(\eta\left(\bx_1\right),\ldots,\eta\left(\bx_n\right)\right)$
is multivariate Gaussian.

It is convenient to specify
$\expect{\left.\eta\left(\bx\right)\right|\bbeta}=h\left(\bx\right)^T\bbeta$,
that is, the expectation of the process at point~$\bx\in{\mathcal D}$
conditional on the (unknown) vector of coefficients~$\bbeta$.
Here~$h\colon\mathbb{R}^p\longrightarrow\mathbb{R}^q$ specifies
the~$q$ known regressor functions
of~$\bx=\left(x_1,\ldots,x_p\right)^T$; a common choice
is~$h\left(\bx\right)=\left(1,x_1,\ldots,x_p\right)^T$
[giving~$q=p+1$], but one is in principle free to choose any function
of~$\bx$.  One
writes~$H^T=\left(h\left(\bx_1\right),\ldots,h\left(\bx_n\right)\right)$
when considering the entire design matrix~$H$; the \proglang{R} idiom
is \code{regressor.multi()}.

The covariance is typically given by

\[
\COV\left(\eta(\bx),\eta(\bx')\right)=V\left(\bx-\bx'\right),
\]

where~$V\colon\mathbb{R}^p\longrightarrow\mathbb{R}$ must be chosen
so that the variance matrix of any finite set of observations is
always positive-definite.  Bochner's
theorem~\citep[chapter~XIX]{feller1971} shows
that~$V\left(\cdot\right)$ must be proportional to the characteristic
function (CF) of a symmetric probability Borel measure.

\cite{oakley1999} uses techniques which have clear complex analogues
to show that the posterior mean of~$\eta\left(\bx\right)$ is given by

\begin{equation}\label{emulator}
h\left(\bx\right)^T\hat{\bbeta} +
\bt\left(\bx\right)^TA^{-1}\left(\by-H\hat{\bbeta}\right),
\end{equation}

where~$\bt\left(\bx\right)$ is the vector of correlations
between~$\eta\left(\bx\right)$ and the observations, that is,
$\sigma^2\bt\left(\bx\right)_i=\COV\left(\eta\left(\bx\right),\eta\left(\bx_i\right)\right)$.
Here~$A$ is an~$n\times n$ matrix of correlations between the
observations,
$\sigma^2A_{ij}=\COV\left(\eta\left(\bx_i\right),\eta\left(\bx_j\right)\right)$
where~$\sigma^2$ is an overall variance term;
and~$\hat{\bbeta}=\left(H^TA^{-1}H\right)^{-1}H^TA^{-1}\by$ is the
maximum likelihood estimator for~$\bbeta$.
Equation~(\ref{emulator}) furnishes a cheap approximation
to~$\eta\left(\bx\right)$ and is known as the `emulator'.
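Equation~(\ref{emulator}) is simple enough to evaluate directly.  The
following base~\proglang{R} sketch computes the emulator mean at a
new point for the real case; it is not part of the package (which
automates this calculation), and all objects in it are arbitrary
illustrative toy values.

<<eval=FALSE>>=
## Direct evaluation of the emulator posterior mean, real case.
## All objects here are illustrative toy values.
c_fun <- function(x, xp, B){exp(-drop(crossprod(x-xp, B %*% (x-xp))))}
n <- 10
xold <- matrix(runif(2*n), n, 2)                  # design points in R^2
y <- apply(xold, 1, function(x){sin(5*sum(x))})   # toy observations
B <- diag(2)                                      # roughness matrix
H <- cbind(1, xold)                               # regressors h(x) = (1, x1, x2)
A <- matrix(apply(expand.grid(1:n, 1:n), 1,
            function(ij){c_fun(xold[ij[1],], xold[ij[2],], B)}), n, n)
Ainv <- solve(A)
betahat <- solve(t(H) %*% Ainv %*% H, t(H) %*% Ainv %*% y)
x <- c(0.5, 0.5)                                  # point at which to emulate
tx <- apply(xold, 1, function(xp){c_fun(x, xp, B)})
drop(c(1, x) %*% betahat) + drop(tx %*% Ainv %*% (y - H %*% betahat))
@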
\subsection{Complex Gaussian processes}

The complex case is directly analogous,
with~$\eta\colon\mathbb{C}^p\longrightarrow\mathbb{C}$
and~$\bbeta\in\mathbb{C}^q$.
Writing~$\COV\left(\eta\left(\bz_1\right),\ldots,\eta\left(\bz_n\right)\right)=\Omega$,
so that element $(i,j)$ of matrix~$\Omega$
is~$\COV\left(\eta\left(\bz_i\right),\eta\left(\bz_j\right)\right)$,
we may relax the requirement that~$\Omega$ be symmetric positive
definite to requiring only Hermitian positive definiteness.  This
allows one to use the characteristic function of {\em any}, possibly
non-symmetric, random variable~$\Psi$ on~$\mathbb{C}^p$ with density
function~$f\colon\mathbb{C}^p\longrightarrow\mathbb{R}$ and
characteristic function~$\phi$:

\begin{equation}
\Omega_{ij}=
\COV\left(\eta\left(\bz_i\right),\eta\left(\bz_j\right)\right) =
\phi\left(\bz_i-\bz_j\right).
\end{equation}

That~$\Omega$ remains Hermitian positive definite may be shown by
evaluating a quadratic form with it and arbitrary~$\bw\in\mathbb{C}^n$
and establishing that it is real and non-negative:

\begin{align*}
\herm{\bw}\Omega\bw&=
\sum_{i,j}\overline{\bw_i}\COV\left(\eta\left(\bz_i\right),\eta\left(\bz_j\right)\right){\bw_j}&\mbox{\small definition of quadratic form}\\
&=
\sum_{i,j}\overline{\bw_i}\phi\left(\bz_i-\bz_j\right)\bw_j&\mbox{\small covariance function is the CF of~$\Psi$}\\
&=
\sum_{i,j}\overline{\bw_i}\left[\int_{\bt\in\mathbb{C}^p}e^{\mathi\operatorname{Re}\herm{\bt}(\bz_i-\bz_j)}f\left(\bt\right)\,d\bt\right]{\bw_j}\qquad&\mbox{\small definition of CF of~$\Psi$}\\
&=
\int_{\bt\in\mathbb{C}^p}\left[\sum_{i,j}\overline{\bw_i}e^{\mathi\operatorname{Re}\herm{\bt}(\bz_i-\bz_j)}{\bw_j}f\left(\bt\right)\right]\,d\bt & \mbox{\small integration and summation commute}\\
&=
\int_{\bt\in\mathbb{C}^p}\left[\sum_{i,j}\overline{\bw_i}e^{\mathi\operatorname{Re}(\herm{\bt}\bz_i)}\overline{\overline{\bw_j}e^{\mathi\operatorname{Re}(\herm{\bt}\bz_j)}}f\left(\bt\right)\right]\,d\bt&\mbox{\small expand and rearrange}\\
&=
\int_{\bt\in\mathbb{C}^p}\left|\sum_{i}\overline{\bw_i}e^{\mathi\operatorname{Re}(\herm{\bt}\bz_i)}\right|^2 f\left(\bt\right)\,d\bt&\mbox{\small algebra}\\
&\geqslant 0. &\mbox{\small integral of a real, non-negative function}
\end{align*}

(This motivates the definition of the characteristic function of a
complex multivariate random variable~$\bZ$
as~$\expect{e^{\mathi\operatorname{Re}\left(\herm{\bt}\bZ\right)}}$.)
Thus the covariance matrix is Hermitian non-negative definite:
although its entries are not necessarily real, its eigenvalues are
all real and non-negative.

In the real case one typically chooses~$\Psi$ to be a zero-mean
Gaussian distribution; in the complex case one can use the complex
multivariate Gaussian distribution given in
equation~(\ref{complex_Gaussian_PDF}), which has characteristic
function

\begin{equation}\label{complex_gaussian_CF}
\exp\left(\mathi\operatorname{Re}\left(\herm{\bt}\bm\right) -
\frac{1}{4}\herm{\bt}\Gamma\bt\right)
\end{equation}

and, following~\cite{hankin2012} in writing~$\mathfrak{B}=\Gamma/4$,
we can write the variance matrix as the product of a (real)
scalar~$\sigma^2$ term and

\begin{equation}\label{ct}
c\left(\bt\right) =
\exp\left(\mathi\operatorname{Re}\left(\herm{\bt}\bm\right) -
\herm{\bt}\mathfrak{B}\bt\right).
\end{equation}

Thus the covariance matrix~$\Omega$ is given by

\begin{equation}
\Omega_{ij}=\COV\left(\eta\left(\bz_i\right),\eta\left(\bz_j\right)\right)=
\sigma^2c\left(\bz_i-\bz_j\right).
\end{equation}

In~(\ref{ct}), $\mathfrak{B}$ has the same meaning as in conventional
emulator techniques and controls the modulus of the covariance
between~$\eta\left(\bz\right)$ and~$\eta\left(\bz'\right)$; $\bm$
governs the phase.

Given the above, it seems reasonable to follow~\cite{oakley1999} and
admit only diagonal~$\mathfrak{B}$; but now distributions with
nonzero mean can be considered (compare the real case, which requires
a zero mean).  A parametrization using diagonal~$\mathfrak{B}$ and a
complex mean vector requires~$3p$ (real) hyperparameters;
compare~$2p$ if~$\mathbb{C}^p$ is identified with~$\mathbb{R}^{2p}$.

\section{Functions of several complex variables}

Analytic functions of several complex variables are an important and
interesting class of objects; \cite{krantz1987} motivates and
discusses the discipline.  Formally,
consider~$f\colon\mathbb{C}^n\longrightarrow\mathbb{C}$,
$n\geqslant 2$, and write~$f\left(z_1,\ldots,z_n\right)$.
Function~$f$ is {\em analytic} if it satisfies the Cauchy-Riemann
conditions in each variable separately, that is,
$\partial f/\partial\overline{z}_j=0$, $1\leqslant j\leqslant n$.
Such an~$f$ is continuous (due to a ``non-trivial theorem of
Hartogs'') and continuously differentiable to arbitrarily high order.
\citeauthor{krantz1987} goes on to state some results which are
startling if one's exposure to complex analysis is restricted to
functions of a single variable: for example, any isolated singularity
is removable.

%\begin{itemize}
%\item Any isolated singularity is removable
%\item If~$\Omega\subseteq\mathbb{C}^n$ is bounded, and~$f$ is
%  continuous on the closure~$\overline{\Omega}$ of~$\Omega$, then the
%  image of~$\left.f\right|_{\partial\Omega}$ contains the full image
%  of~$f$ on~$\overline{\Omega}$.
%\end{itemize}

\section{Numerical illustration of these ideas}

The natural definition of complex Gaussian processes above, together
with the features of analytic functions of several complex variables,
suggests that complex emulation of analytic functions of several
complex variables might be a useful technique.  The ideas presented
above, and the \pkg{cmvnorm} package, can now be used to sample
directly from an appropriate complex Gaussian distribution and
estimate the roughness parameters:

<<>>=
val <- latin.hypercube(40, 2, names=c("a","b"), complex=TRUE)
head(val)
@

(function \code{latin.hypercube()} is used to generate a random
complex design matrix).  We may now specify a variance matrix using
simple values for the roughness
hyperparameters~$\mathfrak{B}=\left(\begin{smallmatrix}1&0\\0&2\end{smallmatrix}\right)$
and~$\bm=\left(1,i\right)^T$:

<<>>=
true_scales <- c(1,2)
true_means <- c(1,1i)
A <- corr_complex(val, means=true_means, scales=true_scales)
round(A[1:4,1:4],2)
@

Function \code{corr\_complex()} is a complex generalization of
\code{corr()}; matrix \code{A} is Hermitian positive-definite:

<<>>=
all(eigen(A)$values > 0)
@

It is now possible to make a single multivariate observation~$d$ of
this process, using~$\bbeta=\left(1,1+i,1-2i\right)^T$:

<<>>=
true_beta <- c(1,1+1i,1-2i)
d <- drop(rcmvnorm(n=1, mean=regressor.multi(val) %*% true_beta, sigma=A))
head(d)
@

Thus \code{d} is a single observation from a complex multivariate
Gaussian distribution.  Most of the functions of the \pkg{emulator}
package operate without modification.
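The covariance function~(\ref{ct}) is short enough to code directly.
As a sanity check, the following sketch compares a hand-coded version
of~$c\left(\cdot\right)$, with~$\mathfrak{B}$ taken to be
\code{diag(true\_scales)}, against an element of \code{A}.  Assuming
\code{corr\_complex()} follows the same convention for the scales and
means (an assumption about the package internals, not a documented
guarantee), the two values should agree:

<<eval=FALSE>>=
## Hand-coded c(t) = exp(i*Re(t* m) - t* B t) with diagonal B;
## assumes corr_complex() uses the same convention for B and m.
ct_fun <- function(t, means, scales){
  exp(1i*Re(sum(Conj(t)*means)) - sum(scales*Mod(t)^2))
}
ct_fun(val[1,] - val[2,], means=true_means, scales=true_scales)
A[1,2]    # should match the value above
@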
In particular, \code{betahat.fun()}, which calculates the maximum
likelihood
estimate~$\hat{\bbeta}=\left(\herm{H}A^{-1}H\right)^{-1}\herm{H}A^{-1}\by$,
takes complex values directly:

<<>>=
betahat.fun(val, solve(A), d)
@

However, because the likelihood function is different, the
\code{interpolant()} functionality is implemented in the
\pkg{cmvnorm} package by \code{interpolant.quick.complex()}, named in
analogy to \code{interpolant.quick()} of package \pkg{emulator}.

For example, it is possible to evaluate the posterior distribution of
the process at~$(0.5,0.3+0.1i)$, a point at which no observation has
been made:

<<>>=
interpolant.quick.complex(rbind(c(0.5,0.3+0.1i)), d, val, solve(A),
    scales=true_scales, means=true_means, give.Z=TRUE)
@

<<echo=FALSE>>=
answer <- interpolant.quick.complex(rbind(c(0.5,0.3+0.1i)), d, val, solve(A),
    scales=true_scales, means=true_means, give.Z=TRUE)
@

Thus the posterior distribution for the process is complex Gaussian
at this point, with a mean of
about~$\Sexpr{round(answer$mstar.star,2)}$ and a variance of
about~$\Sexpr{round(answer$Z,2)}$.

\subsection{Analytic functions}

These techniques are now used to emulate an analytic function of
several complex variables.  A complex function's being analytic is a
very strong restriction; \cite{needham2004} uses `rigidity' to
describe the severe constraint that analyticity represents.

Here the Weierstrass~$\sigma$-function~\citep{chandrasekharan1985} is
chosen as an example, on the grounds that~\cite{littlewood1948}
consider it to be a typical entire function in a well-defined sense.
The \pkg{elliptic} package~\citep{hankin2006} is used for numerical
evaluation.  The~$\sigma$-function takes a primary argument~$z$ and
two invariants~$g_1,g_2$, so a three-column complex design matrix is
required:

<<>>=
library("elliptic")
valsigma <- 2+1i + round(latin.hypercube(30, 3,
    names=c("z","g1","g2"), complex=TRUE)/4, 2)
head(valsigma)
@

(an offset is needed
because~$\sigma\left(z,g_1,g_2\right)=z+\mathcal{O}\left(z^5\right)$).
The~$\sigma$-function can now be evaluated at the points of the
design matrix:

<<>>=
dsigma <- apply(valsigma, 1, function(u){sigma(u[1], g=u[2:3])})
@

One way of estimating the roughness parameters is to use maximum
likelihood.  The likelihood for any set of roughness parameters is
given by~\cite{oakley1999}
as~$\left(\sigma^2\right)^{-\frac{n-q}{2}}\left|A\right|^{-1/2}
\left|H^TA^{-1}H\right|^{-1/2}$, with complex
generalization~$\left(\sigma^2\right)^{-(n-q)}\left|A\right|^{-1}
\left|\herm{H}A^{-1}H\right|^{-1}$, which is calculated in the
package by function \code{scales.likelihood.complex()}; this can be
used to return the log-likelihood for a specific set of roughness
parameters:

<<>>=
scales.likelihood.complex(scales=c(1,1,2), means=c(1,1+1i,1-2i),
    zold=valsigma, z=dsigma, give_log=TRUE)
@

Numerical methods can then be used to find the maximum likelihood
estimate.  Because function \code{optim()} optimizes
over~$\mathbb{R}^n$, helper functions are again needed to translate
from the optimand to scales and means:

<<>>=
scales <- function(x){exp(x[c(1,2,2)])}
means  <- function(x){x[c(3,4,4)] + 1i*x[c(5,6,6)]}
@

Because the diagonal elements of~$\mathfrak{B}$ are strictly
positive, their {\em logarithms} are optimized,
following~\cite{hankin2005}; it is implicitly assumed that the scales
and means associated with~$g_1$ and~$g_2$ are equal.
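For example (the optimand value here is arbitrary, chosen only to
show the mapping):

<<eval=FALSE>>=
## The helpers tie the hyperparameters of g1 and g2 together:
## a point of R^6 maps to three scales and three means.
x <- c(0, log(2), 1, 2, 3, 4)
scales(x)   # c(1, 2, 2)
means(x)    # c(1+3i, 2+4i, 2+4i)
@

The negative log-likelihood may now be minimized: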
<<>>=
objective <- function(x, valsigma, dsigma){
    -scales.likelihood.complex(scales=scales(x), means=means(x),
                               zold=valsigma, z=dsigma)
}
start <- c(-0.538, -5.668, 0.6633, -0.0084, -1.73, -0.028)
jj <- optim(start, objective, valsigma=valsigma, dsigma=dsigma,
            method="SANN", control=list(maxit=100))
(u <- jj$par)
@

Function \code{corr\_complex()} may now be used to calculate the
covariance of the observations:

<<>>=
Asigma <- corr_complex(z1=valsigma, scales=scales(u), means=means(u))
@

So now we can compare the emulator against the ``true'' value:

<<>>=
interpolant.quick.complex(rbind(c(2+1i,2+1i,2+1i)), zold=valsigma,
    d=dsigma, Ainv=solve(Asigma), scales=scales(u), means=means(u))
sigma(2+1i, g=c(2+1i,2+1i))
@

showing reasonable agreement.

It is also possible to test the
hypothesis~$H_\mathbb{R}\colon\bm=\bzero$ (that is, that the variance
matrix~$A$ is real: setting~$\bm=\bzero$ removes the phase term
from~(\ref{ct})), by calculating the likelihood ratio of the
unconstrained model~(\ref{ct}) to that obtained
under~$H_\mathbb{R}$.  This may be achieved by constraining the
optimization to satisfy~$\bm=\bzero$:

<<>>=
ob2 <- function(x, valsigma, dsigma){
    -scales.likelihood.complex(scales=scales(x), means=c(0,0,0),
                               zold=valsigma, z=dsigma)
}
jjr <- optim(u[1:2], ob2, method="SANN", control=list(maxit=1000),
             valsigma=valsigma, dsigma=dsigma)
(ur <- jjr$par)
@

so the test statistic~$D$ is given by

<<>>=
LR <- scales.likelihood.complex(scales=scales(ur), means=c(0,0,0),
                                zold=valsigma, z=dsigma)
LC <- scales.likelihood.complex(scales=scales(u), means=means(u),
                                zold=valsigma, z=dsigma)
(D <- 2*(LC-LR))
@

Observing that~$D$ is in the tail region of its asymptotic
distribution, $\chi^2_4$ (four real hyperparameters, the real and
imaginary parts of the two free components of~$\bm$, are fixed
under~$H_\mathbb{R}$), the hypothesis~$H_\mathbb{R}$ may be rejected.

\section{Conclusions}

The \pkg{cmvnorm} package for the complex multivariate Gaussian
distribution has been introduced and motivated.  The Gaussian process
has been generalized to the complex case, and a complex
generalization of the emulator technique has been applied to an
analytic function of several complex variables.  The complex variance
matrix was specified using a novel parameterization which
accommodated non-real covariances in the context of circularly
symmetric random variables.  Further work might include numerical
support for the complex multivariate Student $t$~distribution.

\bibliography{complex_gaussian}

\end{document}