| Version: | 0.0.3 |
| Title: | Robust Regression and Estimation Through Maximum Mean Discrepancy Minimization |
| Description: | The functions in this package compute robust estimators by minimizing a kernel-based distance known as the MMD (Maximum Mean Discrepancy) between the sample and a statistical model. Recent works proved that these estimators enjoy a universal consistency property and are extremely robust to outliers. Various optimization algorithms are implemented: stochastic gradient descent is available for most models, and the package also provides gradient descent for the few models in which an exact formula for the gradient is available. In terms of distribution fit, a large number of continuous and discrete distributions are available: Gaussian, exponential, uniform, gamma, Poisson, geometric, etc. In terms of regression, the models available are: linear, logistic, gamma, beta and Poisson. Alquier, P. and Gerber, M. (2024) <doi:10.1093/biomet/asad031>; Cherief-Abdellatif, B.-E. and Alquier, P. (2022) <doi:10.3150/21-BEJ1338>. |
| License: | GPL (≥ 3) |
| Encoding: | UTF-8 |
| RoxygenNote: | 7.3.2 |
| RdMacros: | Rdpack |
| Imports: | Rdpack (≥ 0.7) |
| Author: | Pierre Alquier |
| Maintainer: | Pierre Alquier <pierre.alquier.stat@gmail.com> |
| NeedsCompilation: | no |
| Packaged: | 2025-10-15 08:01:36 UTC; alquier |
| Repository: | CRAN |
| Date/Publication: | 2025-10-16 08:10:03 UTC |
MMD estimation
Description
Fits a statistical model to the data, using the robust procedure based on maximum mean discrepancy (MMD) minimization introduced and studied in Briol et al. (2019) and Chérief-Abdellatif and Alquier (2022).
Usage
mmd_est(x, model, par1, par2, kernel, bdwth, control= list())
Arguments
| x | Data. Must be a vector for univariate models, or a matrix of dimension n by d for multivariate models, where n is the sample size and d the dimension of the model. |
| model | Parametric model to be fitted to the data. No default. See details for the list of available models. |
| par1 | First parameter of the model. In models where the first parameter is fixed, it is necessary to provide a value for par1. |
| par2 | Second parameter of the model (if any). In models where the second parameter is fixed, it is necessary to provide a value for par2. |
| kernel | Kernel to be used in the MMD. |
| bdwth | Bandwidth parameter for the kernel. |
| control | A list of control parameters for the numerical optimization of the objective function; see details. |
Details
Available options for model are:
- "
beta" Beta distribution with pdf
~x^{a-1}(1-x)^(b-1)on[0,1],par1=aandpar2=bare both estimated.- "
binomial" Binomial distribution with pmf
~p^{x}(1-p)^{N-x}on\{0,1,...,N\},par1=Nandpar2=pare both estimated. Note that in this case, if the user specifies a value forN, it is used as an upper bound rather than an initialization.- "
binomial.prob" Binomial distribution with pmf
~p^{x}(1-p)^{N-x}on\{0,1,...,N\},par1=Nis fixed and must be specified by the user whilepar2=pis estimated.- "
binomial.size" Binomial distribution with pmf
~p^{x}(1-p)^{N-x}on\{0,1,...,N\},par1=Nis estimated whilepar2=pfixed and must be specified by the user. Note that in this case, if the user specifies a value forN, it is used as an upper bound rather than an initialization.- "
Cauchy" Cauchy distribution with pdf
~1/(1+(x-m)^2),par1=mis estimated.- "
continuous.uniform.loc" Uniform distribution with pdf
~1on[m-L/2,m+L/2],par1=mis estimated whilepar2=Lis fixed and must be specified by the user.- "
continuous.uniform.upper" Uniform distribution with pdf
~1on[a,b],par1=ais fixed and must be specified by the user whilepar2=bis estimated.- "
continuous.uniform.lower.upper" Uniform distribution with pdf
~1on[a,b],par1=aandpar2=bare estimated.- "
Dirac" Dirac mass at point
aon the reals,par1=ais estimated.- "
discrete.uniform" Uniform distribution with pmf
~1on\{1,2,..,M\},par1=Mis estimated. Note that in this case, if the user specifies a value forM, it is used as an upper bound rather than an initialization.- "
exponential" Exponential distribution with pdf
~\exp(-b x)on positive realsR_+,par1=bis estimated.- "
gamma" Gamma distribution with pdf
~x^{a-1}\exp(-b x)on positive realsR_+,par1=a>=0.5andpar2=bare estimated.- "
gamma.shape" Gamma distribution with pdf
~x^{a-1}\exp(-b x)on positive realsR_+,par1=a>=0.5is estimated whilepar2=bis fixed and must be specified by the user.- "
gamma.rate" Gamma distribution with pdf
~x^{a-1}\exp(-b x)on positive realsR_+,par1=a>=0.5is fixed and must be specified by the user whilepar2=bis estimated.- "
Gaussian" Gaussian distribution with pdf
~\exp(-(x-m)^2/2s^2)on realsR,par1=mandpar2=sare estimated.- "
Gaussian.loc" Gaussian distribution with pdf
~\exp(-(x-m)^2/2s^2)on realsR,par1=mis estimated whilepar2=sis fixed and must be specified by the user.- "
Gaussian.scale" Gaussian distribution with pdf
~\exp(-(x-m)^2/2s^2)on realsR,par1=mis fixed and must be specified by the user whilepar2=sis estimated.- "
geometric" Geometric distribution with pmf
~p(1-p)^xon\{0,1,2,...\},par1=pis estimated.- "
multidim.Dirac" Dirac mass at point
aonR^d,par1=a(d-dimensional vector) is estimated.- "
multidim.Gaussian" Gaussian distribution with pdf
~\exp(-(x-m)'U'U(x-m)onR^d,par1=m(d-dimensional vector) andpar2=U(d-dmatrix) are estimated.- "
multidim.Gaussian.loc" Gaussian distribution with pdf
~\exp(-\|x-m\|^2/2s^2)onR^d,par1=m(d-dimensional vector) is estimated whilepar2=sis fixed.- "
multidim.Gaussian.scale" Gaussian distribution with pdf
~\exp(-(x-m)'U'U(x-m)onR^d,par1=m(d-dimensional vector) is fixed and must be specified by the user while andpar2=U(d-dmatrix) is estimated.- "
Pareto" Pareto distribution with pmf
~1/x^{a+1}on the reals>1,par1=ais estimated.- "
Poisson" Poisson distribution with pmf
~b^x/x!on\{0,1,2,...\},par1=bis estimated.
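As an illustration of the fixed-versus-estimated parameter convention above, the following hedged sketch (illustrative data and values, not taken from the package examples) fits the "binomial.prob" model, in which par1=N must be supplied and par2=p is estimated:
# hedged sketch: "binomial.prob" fixes par1=N, so a value for N must be supplied
set.seed(123)
x <- rbinom(100, size = 10, prob = 0.3)
fit <- mmd_est(x, model = "binomial.prob", par1 = 10)
summary(fit)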
The control argument is a list that can supply any of the following components:
- burnin: Length of the burn-in period in GD or SGD. burnin must be a non-negative integer; by default burnin=500.
- nsteps: Number of iterations performed after the burn-in period in GD or SGD. nsteps must be an integer strictly larger than 2; by default nsteps=1000.
- stepsize: Stepsize parameter. An adaptive gradient step is used (adagrad), but it is possible to pre-multiply it by stepsize. stepsize must be a strictly positive number; by default stepsize=1.
- epsilon: Parameter used in adagrad to avoid numerical errors in the computation of the step-size. epsilon must be a strictly positive real number; by default epsilon=10^{-4}.
- method: Optimization method to be used: "EXACT" for exact computation, "GD" for gradient descent and "SGD" for stochastic gradient descent. Not all methods are available for all models. By default, the exact method is preferred to GD, which is preferred to SGD.
Value
mmd_est returns an object of class "estMMD".
The function summary can be used to print the results.
An object of class estMMD is a list containing the following components:
| model | Model estimated. |
| par1 | In models where the first parameter is fixed, this is the value provided by the user. |
| par2 | In models where the second parameter is fixed, this is the value provided by the user. |
| kernel | Kernel used in the MMD. |
| bdwth | Bandwidth used, that is, either the value specified by the user or the bandwidth computed by the median heuristic. |
| burnin | Number of steps in the burn-in period of the GD or SGD algorithm. |
| nstep | Number of steps in the GD or SGD algorithm. |
| stepsize | Stepsize parameter in GD or SGD. |
| epsilon | Parameter used in adagrad to avoid numerical errors in the computation of the step-size. |
| method | Optimization method used. |
| error | Error message (if any). |
| estimator | Estimated parameter(s). |
| trajectory | The full trajectory of the optimization algorithm (GD or SGD). |
| type | Type of the fitted object. |
References
Briol F, Barp A, Duncan AB, Girolami M (2019).
“Statistical inference for generative models with maximum mean discrepancy.”
arXiv preprint arXiv:1906.05944.
Chérief-Abdellatif B, Alquier P (2022).
“Finite Sample Properties of Parametric MMD Estimation: Robustness to Misspecification and Dependence.”
Bernoulli, 28(1), 181-213.
Garreau D, Jitkrittum W, Kanagawa M (2017).
“Large sample analysis of the median heuristic.”
arXiv preprint arXiv:1707.07269.
Examples
#simulate data
x = rnorm(50,0,1.5)
# estimate the mean and variance (assuming the data is Gaussian)
Est = mmd_est(x, model="Gaussian")
# print a summary
summary(Est)
# estimate the mean (assuming the data is Gaussian with known standard deviation =1.5)
Est2 = mmd_est(x, model="Gaussian.loc", par2=1.5)
# print a summary
summary(Est2)
# estimate the standard deviation (assuming the data is Gaussian with known mean = 0)
Est3 = mmd_est(x, model="Gaussian.scale", par1=0)
# print a summary
summary(Est3)
# test robustness: contaminate the sample with an outlier
x[42] = 100
# the empirical mean is strongly affected by the outlier
mean(x)
# estimate the mean and variance (assuming the data is Gaussian)
Est4 = mmd_est(x, model="Gaussian")
summary(Est4)
MMD regression
Description
Fits a regression model to the data, using the robust procedure based on maximum mean discrepancy (MMD) minimization introduced and studied in Alquier and Gerber (2024).
Usage
mmd_reg(y, X, model, intercept, par1, par2, kernel.y, kernel.x, bdwth.y, bdwth.x,
control= list())
Arguments
| y | Response variable. Must be a vector of length n, where n is the sample size. |
| X | Design matrix. |
| model | Regression model to be fitted to the data. By default, the linear regression model with \mathcal{N}_1(0,\phi^2) error terms, with \phi unknown, is used. See details for the list of available models. |
| intercept | If intercept=TRUE an intercept is added to the model, while no intercept is added if intercept=FALSE. |
| par1 | Values of the regression coefficients of the model used as starting values to numerically optimize the objective function. |
| par2 | A value for the auxiliary parameter \phi of the model, when relevant (see details). |
| kernel.y | Kernel applied on the response variable. |
| kernel.x | Kernel applied on the explanatory variables. |
| bdwth.y | Bandwidth parameter for the kernel kernel.y. |
| bdwth.x | Bandwidth parameter for the kernel kernel.x. |
| control | A list of control parameters for the numerical optimization of the objective function; see details. |
Details
Available options for model are:
"linearGaussian"Linear regression model with
\mathcal{N}_1(0,\phi^2)error terms, with\phiunknown."linearGaussian.loc"Linear regression model with
\mathcal{N}_1(0,\phi^2)error terms, with\phiknown."gamma"Gamma regression model with unknown shape parameter
\phi. The inverse function is used as link function."gamma.loc"Gamma regression model with known shape parameter
\phi. The inverse function is used as link function."beta"Beta regression model with unknown precision parameter
\phi. The logistic function is used as link function."beta.loc"Beta regression model with known precision parameter
\phi. The logistic function is used as link function."logistic"Logistic regression model.
"exponential"Exponential regression.
"poisson"Poisson regression model.
When bdwth.x>0 the function mmd_reg computes the estimator \hat{\theta}_n introduced in Alquier and Gerber (2024). When bdwth.x=0 it computes the estimator \tilde{\theta}_n introduced in the same paper. The former estimator has stronger theoretical properties but is more expensive to compute (see below).
When bdwth.x=0 and model is "linearGaussian", "linearGaussian.loc" or "logistic", the objective function and its gradient can be computed in \mathcal{O}(n) operations, where n is the sample size (i.e. the dimension of y). In this case, gradient descent with backtracking line search is used to perform the minimization. The algorithm stops when the maximum number of iterations maxit is reached, or as soon as the change in the objective function is less than eps_gd times the current function value. In the former case, a warning message is generated. By default, maxit=5\times 10^4 and eps_gd=sqrt(.Machine$double.eps), and the value of these two parameters can be changed using the control argument of mmd_reg.
When bdwth.x>0 and model is "linearGaussian", "linearGaussian.loc" or "logistic", the objective function and its gradient can be computed in \mathcal{O}(n^2) operations. To reduce the computational cost, the objective function is minimized using norm adagrad (Duchi et al. 2011), an adaptive step size stochastic gradient algorithm. Each iteration of the algorithm requires \mathcal{O}(n) operations. However, the algorithm has an initialization step that requires \mathcal{O}(n^2) operations and a memory requirement of size \mathcal{O}(n^2).
When model is not in c("linearGaussian", "linearGaussian.loc", "logistic"), the objective function and its gradient cannot be computed explicitly and the minimization is performed using norm adagrad. The cost per iteration of the algorithm is \mathcal{O}(n) but, for bdwth.x>0, the memory requirement and the initialization cost are both of size \mathcal{O}(n^2).
When adagrad is used, burnin iterations are performed as a warm-up step. The algorithm then stops when burnin+maxit iterations are performed, or as soon as the norm of the average value of the gradient evaluations computed in all the previous iterations is less than eps_sg. A warning message is generated if the maximum number of iterations is reached. By default, burnin=10^3, maxit=5\times 10^4 and eps_sg=10^{-5}, and the value of these three parameters can be changed using the control argument of mmd_reg.
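The following hedged sketch (illustrative data only) contrasts the two estimators discussed above on the default linear regression model: bdwth.x=0 gives the cheaper estimator \tilde{\theta}_n, while bdwth.x="auto" gives \hat{\theta}_n at an \mathcal{O}(n^2) initialization cost:
# hedged sketch: \tilde{\theta}_n (bdwth.x=0) versus \hat{\theta}_n (bdwth.x>0)
set.seed(3)
n <- 300
X <- matrix(rnorm(2 * n), nrow = n, ncol = 2)
y <- as.vector(X %*% c(1, -1) + rnorm(n))
fit_tilde <- mmd_reg(y, X, bdwth.x = 0)
fit_hat <- mmd_reg(y, X, bdwth.x = "auto")
summary(fit_tilde)
summary(fit_hat)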
If bdwth.y="auto" then the value of the bandwidth parameter of kernel.y is equal to H_n/\sqrt{2} with H_n the median value of the set \{|y_i-y_j|\}_{i,j=1}^n, where y_i denote the ith component of y. This definition of bdwth.y is motivated by the results in Garreau et al. (2017). If H_n=0 the bandwidth parameter of kernel.y is set to 1.
If bdwth.x="auto" then the value of the bandwidth parameter of kernel.x is equal to 0.01H_n/\sqrt{2} with H_n is the median value of the set \{\|x_i-x_j\|\}_{i,j=1}^n, where x_i denote the ith row of the design matrix X. If H_n=0 the bandwidth parameter of kernel.x is set to 1.
The control argument is a list that can supply any of the following components:
- rescale: If rescale=TRUE the (non-constant) columns of X are rescaled before performing the optimization, while if rescale=FALSE no rescaling is applied. By default rescale=TRUE.
- burnin: A non-negative integer.
- eps_gd: A non-negative real number.
- eps_sg: A non-negative real number.
- maxit: An integer strictly larger than 2.
- stepsize: Scaling constant for the step-sizes used by adagrad. stepsize must be a strictly positive number; by default stepsize=1.
- trace: If trace=TRUE then the parameter value obtained at the end of each iteration (after the burn-in period for adagrad) is returned. By default, trace=FALSE, and trace is automatically set to TRUE if the maximum number of iterations is reached.
- epsilon: Parameter used in adagrad to avoid numerical errors in the computation of the step-sizes. epsilon must be a strictly positive real number; by default epsilon=10^{-4}.
- alpha: Parameter for the backtracking line search. alpha must be a real number in (0,1); by default alpha=0.8.
- c_det: Parameter used to control the computational cost of the algorithm when bdwth.x>0; see the Supplementary Material of Alquier and Gerber (2024) for more details. c_det must be a real number in (0,1); by default c_det=0.2.
- c_rand: Parameter used to control the computational cost of the algorithm when bdwth.x>0; see the Supplementary Material of Alquier and Gerber (2024) for more details. c_rand must be a real number in (0,1); by default c_rand=0.1.
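The control components above can be passed as in the following hedged sketch (illustrative data and settings), which tightens the gradient-descent stopping rule for a linear Gaussian fit with bdwth.x=0:
# hedged sketch: passing control parameters to mmd_reg
set.seed(2)
n <- 200
p <- 3
X <- matrix(rnorm(n * p), nrow = n, ncol = p)
y <- as.vector(1 + X %*% (1:p) + rnorm(n))
fit <- mmd_reg(y, X, model = "linearGaussian", bdwth.x = 0,
               control = list(maxit = 10^4, eps_gd = 1e-8, rescale = TRUE))
summary(fit)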
Value
mmd_reg returns an object of class "regMMD".
The function summary can be used to print the results.
An object of class regMMD is a list containing the following components:
| coefficients | Estimated regression coefficients. |
| intercept | If intercept=TRUE, the estimated value of the intercept. |
| phi | If relevant (see details), either the estimated value of the auxiliary parameter \phi or the value of \phi provided by the user. |
| kernel.y | Kernel applied on the response variable used to fit the model. |
| bdwth.y | Value of the bandwidth for the kernel applied on the response variable used to fit the model. |
| kernel.x | Kernel applied on the explanatory variables used to fit the model. |
| bdwth.x | Value of the bandwidth for the kernel applied on the explanatory variables used to fit the model. |
| par1 | Value of the parameter par1 used to fit the model. |
| par2 | Value of the parameter par2 used to fit the model. |
| trace | If the control parameter trace=TRUE, the parameter values obtained at each iteration of the optimization algorithm. |
References
Alquier P, Gerber M (2024).
“Universal robust regression via maximum mean discrepancy.”
Biometrika, 111(1), 71-92.
Duchi J, Hazan E, Singer Y (2011).
“Adaptive subgradient methods for online learning and stochastic optimization.”
Journal of Machine Learning Research, 12(7), 2121-2159.
Garreau D, Jitkrittum W, Kanagawa M (2017).
“Large sample analysis of the median heuristic.”
arXiv preprint arXiv:1707.07269.
Examples
#Simulate data
n<-1000
p<-4
beta<-1:p
phi<-1
X<-matrix(data=rnorm(n*p,0,1),nrow=n,ncol=p)
data<-1+X%*%beta+rnorm(n,0,phi)
##Example 1: Linear Gaussian model
y<-data
Est<-mmd_reg(y, X)
summary(Est)
##Example 2: Logistic regression model
y<-data
y[data>5]<-1
y[data<=5]<-0
Est<-mmd_reg(y, X, model="logistic")
summary(Est)
Est<-mmd_reg(y, X, model="logistic", bdwth.x="auto")
summary(Est)
Summary method for the class "estMMD"
Description
Summary method for the class "estMMD"
Usage
## S3 method for class 'estMMD'
summary(object, ...)
Arguments
| object | An object of class "estMMD". |
| ... | Additional arguments (not used). |
Value
No return value, called only to print information on the output of "estMMD".
Summary method for the class "regMMD"
Description
Summary method for the class "regMMD"
Usage
## S3 method for class 'regMMD'
summary(object, ...)
Arguments
| object | An object of class "regMMD". |
| ... | Additional arguments (not used). |
Value
No return value, called only to print information on the output of "regMMD".