N.I.M.R.O.D.  

derivation.f90 File Reference


Functions/Subroutines

subroutine derivRVS (b, m, v, rl)
 Robust Variance Scoring function.
subroutine derivMARC (b, m, v, rl)
 Gradient and Hessian computation by finite differentiation.
subroutine derivVRAISTOT (b, m, v, rl)
 Gradient and Hessian computation by finite differentiation for the total log-likelihood conditionally on the random effects.
subroutine derivRANDOMEFFECT (b, m, v, rl)
 Gradient and Hessian computation by finite differentiation for the random-effects likelihood.

Function Documentation

subroutine derivMARC ( double precision, dimension(m), intent(in) b,
integer, intent(in) m,
double precision, dimension(m*(m+3)/2), intent(out) v,
double precision, intent(inout) rl
)

Gradient and Hessian computation by finite differentiation.

AUTHOR: Melanie Prague, Daniel Commenges, Julia Drylewicz, Jeremy Guedj, Rodolphe Thiebaut

DESCRIPTION :

Implements an optional automatic switch to the classical Levenberg-Marquardt algorithm of Marquardt 1963 (An algorithm for least-squares estimation of nonlinear parameters. Journal of the Society for Industrial and Applied Mathematics) when the optimization gets stuck, i.e. when between two iterations there is neither a change in the log-likelihood nor any movement of the parameters, although the main convergence criterion, based on the relative distance to the maximum (RDM), is not yet met. A sketch of the stuck test is given below.
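A minimal Fortran sketch of the stuck test just described; the names (rl_prev, b_prev, eps_rl, eps_b, isStuck) are hypothetical illustrations, not identifiers from derivation.f90:

logical function isStuck(rl, rl_prev, b, b_prev, m, eps_rl, eps_b)
  implicit none
  integer, intent(in) :: m
  double precision, intent(in) :: rl, rl_prev, eps_rl, eps_b
  double precision, dimension(m), intent(in) :: b, b_prev
  ! Stuck: neither the log-likelihood nor the parameters moved between
  ! two iterations (the RDM criterion itself is checked by the caller).
  ! All names here are illustrative, not taken from derivation.f90.
  isStuck = (abs(rl - rl_prev) < eps_rl) .and. &
            (maxval(abs(b - b_prev)) < eps_b)
end function isStuck

When this test succeeds while the RDM criterion is still above its threshold, the optimizer would fall back on Levenberg-Marquardt damping, i.e. replace the approximate Hessian H by H + lambda*diag(H) with an adaptively inflated lambda.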

MODIFICATION:

01/09/2012 - Prague - Refactoring

INFORMATION:

Parameters:
    [in]     b   parameter vector
    [in]     m   parameter vector length
    [out]    v   gradient and Hessian, packed in a vector of length m*(m+3)/2
    [in,out] rl  function value (log-likelihood)
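The packed length m*(m+3)/2 in the declaration of v follows from storing the m gradient components together with the m(m+1)/2 entries of the upper triangle of the symmetric Hessian: m + m(m+1)/2 = m(m+3)/2.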

Definition at line 325 of file derivation.f90.

References WorkingSharedValues::firstFuncpa, WorkingSharedValues::fth, WorkingSharedValues::fthn, gradients_penalization(), hessian_penalization(), WorkingSharedValues::hessianNonPenalisee, WorkingSharedValues::hessianPenalisee, mpimod::numproc, WorkingSharedValues::recap, WorkingSharedValues::score, and WorkingSharedValues::writefuncpaFichier.

Referenced by optim().


subroutine derivRANDOMEFFECT ( double precision, dimension(m), intent(in) b,
integer, intent(in) m,
double precision, dimension(m*(m+3)/2), intent(out) v,
double precision, intent(inout) rl
)

Gradient and Hessian computation by finite differentiation for the random-effects likelihood.

AUTHOR: Melanie Prague, Daniel Commenges, Julia Drylewicz, Jeremy Guedj, Rodolphe Thiebaut

DESCRIPTION :

Computes and stores the gradient and the Hessian by finite differences; the difference step can be adjusted internally (th, thn) to increase accuracy, as sketched below.
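A minimal sketch of central finite differences with this packed output, under two stated assumptions that are not taken from derivation.f90: the objective is exposed as a dummy function f, and v stores the m*(m+1)/2 upper-triangle Hessian entries followed by the m gradient entries (the actual ordering in the real code may differ).

subroutine fd_grad_hess(b, m, v, rl, f)
  implicit none
  integer, intent(in) :: m
  double precision, dimension(m), intent(in) :: b
  double precision, dimension(m*(m+3)/2), intent(out) :: v
  double precision, intent(out) :: rl
  interface
     double precision function f(x, m)
       integer, intent(in) :: m
       double precision, dimension(m), intent(in) :: x
     end function f
  end interface
  integer :: i, j, k
  double precision :: th, fpp, fpm, fmp, fmm
  double precision, dimension(m) :: bw, grad

  th = 1.d-4            ! difference step; the real code adapts it (th, thn)
  bw = b                ! work copy, so the input vector is left untouched
  rl = f(bw, m)

  ! central-difference gradient: (f(b + th*e_i) - f(b - th*e_i)) / (2*th)
  do i = 1, m
     bw(i) = b(i) + th
     grad(i) = f(bw, m)
     bw(i) = b(i) - th
     grad(i) = (grad(i) - f(bw, m)) / (2.d0*th)
     bw(i) = b(i)
  end do

  ! upper triangle of the Hessian by symmetric double differences
  k = 0
  do j = 1, m
     do i = 1, j
        k = k + 1
        bw(i) = bw(i) + th; bw(j) = bw(j) + th; fpp = f(bw, m)
        bw(j) = bw(j) - 2.d0*th;                fpm = f(bw, m)
        bw(i) = bw(i) - 2.d0*th;                fmm = f(bw, m)
        bw(j) = bw(j) + 2.d0*th;                fmp = f(bw, m)
        v(k) = (fpp - fpm - fmp + fmm) / (4.d0*th*th)
        bw(i) = b(i); bw(j) = b(j)              ! restore the work copy
     end do
  end do
  v(m*(m+1)/2+1 : m*(m+3)/2) = grad             ! gradient after the Hessian
end subroutine fd_grad_hess

For i = j the double difference collapses to the standard three-point second derivative (f(b+2h) - 2 f(b) + f(b-2h)) / (2h)^2 with step h = th.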

MODIFICATION:

01/09/2012 - Prague - Refactoring

INFORMATION:

Parameters:
    [in]     b   parameter vector
    [in]     m   parameter vector length
    [out]    v   gradient and Hessian, packed in a vector of length m*(m+3)/2
    [in,out] rl  function value (log-likelihood)

Definition at line 503 of file derivation.f90.

Referenced by marquardt().


subroutine derivRVS ( double precision, dimension(m), intent(inout) b,
integer, intent(in) m,
double precision, dimension(m*(m+3)/2), intent(out) v,
double precision, intent(out) rl
)

Robust Variance Scoring function.

AUTHOR: Melanie Prague, Daniel Commenges, Julia Drylewicz, Jeremy Guedj, Rodolphe Thiebaut

DESCRIPTION :

To decrease the computation time, an approximation of the second derivative of the log-likelihood is used, leading to the Robust Variance Scoring (RVS) algorithm of Commenges et al. 2006 (A Newton-like algorithm for likelihood maximization: the robust-variance scoring algorithm. arXiv:math/0610402), an improved version of the BHHH algorithm of Berndt et al. 1974 (Estimation and inference in nonlinear structural models. Annals of Economic and Social Measurement).

The individual score for a given value of the parameter $\theta$, $U_i(\theta)$, is calculated using Louis' formula and the system of sensitivity equations, given by $\left(\frac{d f(X^i(t),\xi^i(t))}{d \xi^i_l(t)}\right)_{l=1 \dots p}$, as described in Guedj et al. 2007 (Maximum likelihood estimation in dynamical models of HIV. Biometrics). The penalized scores $U^P_i(\theta)$ are then derived as $U^P_i(\theta)=U_i(\theta)-\frac{1}{n}\frac{\partial J(\theta)}{\partial \theta}$. The observed log-likelihood and the scores are obtained as sums over the subjects: $L^P(\theta)=\sum_{i=1}^n L^P_i(\theta)$, $U(\theta)=\sum_{i=1}^n U_i(\theta)$, $U^P(\theta)=\sum_{i=1}^n U^P_i(\theta)$. The Hessian of $-L^P(\theta)$, denoted $H_{L^P}(\theta)$, is approximated by an empirical variance estimator built from the individual scores, plus the second derivative of the penalization $J$:

\[ G(\theta) = \sum_{i=1}^n U_{i}(\theta)U^T_{i}(\theta) - \frac{1}{n}\,U(\theta)U^T(\theta) + \frac{\partial^2 J(\theta)}{\partial \theta^2}. \]

For $\theta$ close to the true value and $n$ large, $\sum_{i=1}^n U_{i}(\theta)U^T_{i}(\theta)-\frac{1}{n}U(\theta)U^T(\theta)$ approximates $H_{L}(\theta)$ and $G(\theta)$ approximates $H_{L^P}(\theta)$, the Hessians of $-L$ and $-L^P$ respectively.
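A minimal Fortran sketch of the formula for $G(\theta)$; the interface (score_i holding the individual scores $U_i(\theta)$ column by column, d2J holding $\partial^2 J/\partial\theta^2$) is a hypothetical illustration, not the interface actually used by derivRVS:

subroutine rvs_hessian(score_i, n, m, d2J, G)
  implicit none
  integer, intent(in) :: n, m
  ! Hypothetical interface: score_i(:,i) = U_i(theta), d2J = d2 J / d theta2.
  double precision, dimension(m,n), intent(in) :: score_i
  double precision, dimension(m,m), intent(in) :: d2J
  double precision, dimension(m,m), intent(out) :: G
  double precision, dimension(m) :: U
  integer :: i

  U = sum(score_i, dim=2)        ! total score U(theta) = sum_i U_i(theta)
  G = d2J                        ! second derivative of the penalty J
  do i = 1, n
     ! accumulate the outer products U_i(theta) U_i(theta)^T
     G = G + spread(score_i(:,i), dim=2, ncopies=m) &
           * spread(score_i(:,i), dim=1, ncopies=m)
  end do
  ! subtract (1/n) U(theta) U(theta)^T
  G = G - spread(U, dim=2, ncopies=m) * spread(U, dim=1, ncopies=m) / dble(n)
end subroutine rvs_hessian

Note that the first two terms equal $\sum_{i=1}^n (U_i - \bar U)(U_i - \bar U)^T$ with $\bar U = U/n$, so G is symmetric and, whenever the penalty curvature is positive semi-definite, usable as a Newton-like substitute for the Hessian.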

MODIFICATION:

01/09/2012 - Prague - Refactoring

INFORMATION:

Parameters:
    [in,out] b   parameter vector
    [in]     m   parameter vector length
    [out]    v   gradient and Hessian, packed in a vector of length m*(m+3)/2
    [out]    rl  function value (log-likelihood)

Definition at line 50 of file derivation.f90.

References WorkingSharedValues::abserr, WorkingSharedValues::abserr1, WorkingSharedValues::firstFuncpa, gradients_penalization(), hessian_penalization(), WorkingSharedValues::hessianNonPenalisee, WorkingSharedValues::hessianPenalisee, WorkingSharedValues::nbpatOK, mpimod::numproc, WorkingSharedValues::patOK, WorkingSharedValues::scoreERROR, WorkingSharedValues::scorePRECISION, scoreRVS(), WorkingSharedValues::withexclusion, and WorkingSharedValues::writefuncpaFichier.

Referenced by optim().


subroutine derivVRAISTOT ( double precision, dimension(m), intent(inout) b,
integer, intent(in) m,
double precision, dimension(m*(m+3)/2), intent(out) v,
double precision, intent(out) rl
)

Gradient and Hessian computation by finite differentiation for the total log-likelihood conditionally on the random effects.

AUTHOR: Melanie Prague, Daniel Commenges, Julia Drylewicz, Jeremy Guedj, Rodolphe Thiebaut

DESCRIPTION :

Computes and stores the gradient and the Hessian by finite differences; the difference step can be adjusted internally (th, thn) to increase accuracy (see the sketch under derivRANDOMEFFECT).

MODIFICATION:

01/09/2012 - Prague - Refactoring

INFORMATION:

Parameters:
    [in,out] b   parameter vector
    [in]     m   parameter vector length
    [out]    v   gradient and Hessian, packed in a vector of length m*(m+3)/2
    [out]    rl  function value (log-likelihood)

Definition at line 425 of file derivation.f90.

References vraistotEXP().

Referenced by funcpa().
