hpstat/docs/intcox.tex

94 lines
7.0 KiB
TeX
Raw Normal View History

\documentclass[a4paper,12pt]{article}
\usepackage[math-style=ISO, bold-style=ISO]{unicode-math}
\setmainfont{TeX Gyre Termes}
\setmathfont{TeX Gyre Termes Math}
\usepackage{parskip}
\usepackage{microtype}
\usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry}
\frenchspacing
\setlength{\emergencystretch}{3em}
2023-04-28 01:02:09 +10:00
\usepackage[hidelinks]{hyperref}
\usepackage{mathtools}
2023-04-29 18:33:58 +10:00
\newcommand{\bbeta}{\kern -0.1em\symbf{β}}
\newcommand{\blambda}{\kern -0.1em\symbf{Λ}}
\newcommand{\nablasub}[1]{\nabla_{\kern -0.15em #1}}
\begin{document}
{\centering\bfseries Supplemental documentation for hpstat \textit{intcox} command\par}
2023-04-28 01:02:09 +10:00
The hpstat \textit{intcox} command implements Cox proportional hazards regression for interval-censored observations, using an iterative convex minorant algorithm. The general algorithm is proposed by Huang \& Wellner [1]. When computing the baseline hazard, we apply a damped iterative convex minorant algorithm described by Aragón \& Eberly [2] and Pan [3]. This documentation discusses technical details of the implementation.
2023-04-29 18:33:58 +10:00
Let the baseline cumulative hazard be a step function $Λ(t)$ with steps at times $\{t_1, t_2, …, t_m\}$. We seek to optimise for the vector $\blambda = (Λ(t_1), Λ(t_2), …, Λ(t_m))^\mathrm{T}$. The cumulative hazard $Λ(t; \symbf{Z}_i)$ given covariates $\symbf{Z}_i$ is $\exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(t)$. The cumulative distribution function for failure times is $F(t; \symbf{Z}_i) = 1 - \exp\left(-\exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(t)\right)$.
2023-04-28 01:02:09 +10:00
Take the $i$-th observation, whose failure time falls in $(L_i, R_i]$.
2023-04-28 01:02:09 +10:00
The log-likelihood $\mathcal{L}_i$ for the $i$-th observation is:
%
\begin{align*}
2023-04-28 01:02:09 +10:00
\mathcal{L}_i &= \log( F(R_i; \symbf{Z}_i) - F(L_i; \symbf{Z}_i) ) \\
2023-04-29 18:33:58 +10:00
&= \log\left( \exp\left(-\exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(L_i)\right) - \exp\left(-\exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(R_i)\right) \right) \\
\intertext{Let $S_{Li} = \exp\left(-\exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(L_i)\right)$, and $S_{Ri} = \exp\left(-\exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(R_i)\right)$, so:}
2023-04-28 01:02:09 +10:00
\mathcal{L}_i &= \log\left( S_{Li} - S_{Ri} \right)
\end{align*}
%
2023-04-29 18:33:58 +10:00
Note the gradient $\nablasub{\blambda} S_{Li}$ with respect to $\blambda$ is the vector whose $j$-th element is:
%
2023-04-29 18:33:58 +10:00
$$\frac{\partial S_{Li}}{\partial Λ(t_j)} = - S_{Li} \exp(\symbf{Z}_i^\mathrm{T} \bbeta) \mathrm{I}(t_j = R_i)$$
%
2023-04-29 18:33:58 +10:00
$\nablasub{\blambda} S_{Ri}$ is derived similarly, and so the gradient $\nablasub{\blambda} \mathcal{L}_i$ is the vector whose $j$-th element is:
%
2023-04-28 01:02:09 +10:00
$$\frac{\partial}{\partial Λ(t_j)} \log\left( S_{Li} - S_{Ri} \right) = \frac{A_{ij}}{S_{Li} - S_{Ri}}$$
%
2023-04-29 18:33:58 +10:00
Where $A_{ij} = \partial\left( S_{Li} - S_{Ri} \right) / \partial Λ(t_j) = S_{Ri} \exp(\symbf{Z}_i^\mathrm{T} \bbeta) \mathrm{I}(t_j = R_i) - S_{Li} \exp(\symbf{Z}_i^\mathrm{T} \bbeta) \mathrm{I}(t_j = L_i)$. The sum of all $\nablasub{\blambda} \mathcal{L}_i$ yields $\nablasub{\blambda} \mathcal{L}$.
2023-04-28 01:02:09 +10:00
2023-04-29 18:33:58 +10:00
Note that $\partial A_{ij} / \partial Λ(t_j) = -A_{ij} \exp(\symbf{Z}_i^\mathrm{T} \bbeta)$, so applying the quotient rule and simplifying, the Hessian $\nabla^2_{\blambda} \mathcal{L}_i$ has diagonal $(j, j)$-th elements:
%
2023-04-29 18:33:58 +10:00
$$\frac{\partial^2}{\partial Λ(t_j){}^2} \log\left( S_{Li} - S_{Ri} \right) = \frac{-A_{ij} \exp(\symbf{Z}_i^\mathrm{T} \bbeta)}{S_{Li} - S_{Ri}} - \left( \frac{A_{ij}}{S_{Li} - S_{Ri}} \right)^2$$
%
2023-04-29 18:33:58 +10:00
And the sum of all $\nabla^2_{\blambda} \mathcal{L}_i$ yields $\nabla^2_{\blambda} \mathcal{L}$.
2023-04-28 01:02:09 +10:00
2023-04-29 18:33:58 +10:00
Let $\symbf{G}$ be a diagonal matrix of the diagonal elements of $-\nabla^2_{\blambda} \mathcal{L}$. As discussed by Pan [3], we update $\blambda$ by iteratively applying:
%
2023-04-29 18:33:58 +10:00
$$\blambda \leftarrow \mathrm{Proj}[ \blambda + α \symbf{G}^{-1} \nablasub{\blambda} \mathcal{L}, G, \mathbb{R} ]$$
%
2023-04-28 01:02:09 +10:00
Where $\mathrm{Proj}$, as defined by Pan [3], essentially represents monotonic (isotonic) regression. We port an efficient pool adjacent violators algorithm (PAVA) implementation from scikit-learn.
2023-04-29 18:33:58 +10:00
Turning to the estimation of $\bbeta$, note the gradient $\nablasub{\bbeta} S_{Li}$ with respect to $\bbeta$ is:
%
2023-04-29 18:33:58 +10:00
$$\nablasub{\bbeta} S_{Li} = \frac{\partial S_{Li}}{\partial\bbeta} = -S_{Li} \exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(L_i) \symbf{Z}_i$$
%
2023-04-29 18:33:58 +10:00
$\nablasub{\bbeta} S_{Ri}$ is derived similarly. The gradient $\nablasub{\bbeta} \mathcal{L}_i$ is:
%
2023-04-29 18:33:58 +10:00
$$\nablasub{\bbeta} \mathcal{L}_i = \left[ \frac{B_{Ri} - B_{Li}}{S_{Li} - S_{Ri}} \right] \symbf{Z}_i$$
%
2023-04-29 18:33:58 +10:00
Where $B_{Li} = S_{Li} \exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(L_i)$ and $B_{Ri} = S_{Ri} \exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(R_i)$.
2023-04-28 01:02:09 +10:00
2023-04-29 18:33:58 +10:00
Note $\partial B_{Li} / \partial \bbeta = \exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(L_i) (S_{Li} - B_{Li}) \symbf{Z}_i$, and $\partial B_{Ri} / \partial \bbeta$ is analogous. Applying the quotient rule and simplifying, the Hessian $\nabla^2_{\blambda} \mathcal{L}_i$ is:
2023-04-28 01:02:09 +10:00
%
2023-04-29 18:33:58 +10:00
$$\nabla^2_{\bbeta} \mathcal{L}_i = \left[ \frac{\exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(R_i) (S_{Ri} - B_{Ri}) - \exp(\symbf{Z}_i^\mathrm{T} \bbeta) Λ(L_i) (S_{Li} - B_{Li})}{S_{Li} - S_{Ri}} - \left( \frac{B_{Ri} - B_{Li}}{S_{Li} - S_{Ri}} \right)^2 \right] \symbf{Z}_i \symbf{Z}_i^\mathrm{T}$$
%
2023-04-29 18:33:58 +10:00
And the sum of all $\nabla^2_{\bbeta} \mathcal{L}_i$ yields $\nabla^2_{\bbeta} \mathcal{L}$.
2023-04-28 01:02:09 +10:00
As discussed by Huang \& Wellner [1], we now apply standard Newton–Raphson optimisation by iteratively applying:
%
2023-04-29 18:33:58 +10:00
$$\bbeta \leftarrow \bbeta + [-\nabla^2_{\bbeta} \mathcal{L}]^{-1} \nablasub{\bbeta} \mathcal{L}$$
2023-04-28 01:02:09 +10:00
%
2023-04-29 18:33:58 +10:00
We alternate between updating $\blambda$ and updating $\bbeta$ until convergence is reached.
2023-04-28 01:02:09 +10:00
2023-04-29 18:33:58 +10:00
The covariance matrix for $\bbeta$ is computed as suggested by Zeng, Gao \& Lin [4] as the inverse of the empirical Fisher information, which is in turn estimated as the outer product of the gradient of the profile log-likelihood.
2023-04-28 01:02:09 +10:00
{\vspace{0.5cm}\scshape\centering References\par}
\begin{enumerate}
2023-04-29 18:33:58 +10:00
\item Huang J, Wellner JA. Interval censored survival data: a review of recent progress. In: Lin DY, Fleming TR, editors. \textit{Proceedings of the First Seattle Symposium in Biostatistics: survival analysis}; 1995 Nov 20–21; University of Washington, Seattle. New York: Springer-Verlag; 1997. p. 123–69. \href{https://doi.org/10.1007/978-1-4684-6316-3_8}{doi: 10.1007\slash 978-1-4684-6316-3\_8}
2023-04-28 01:02:09 +10:00
\item Aragón J, Eberly D. On convergence of convex minorant algorithms for distribution estimation with interval-censored data. \textit{Journal of Computational and Graphical Statistics}. 1992;1(2):129–40. \href{https://doi.org/10.2307/1390837}{doi: 10.2307\slash 1390837}
\item Pan W. Extending the iterative convex minorant algorithm to the Cox model for interval-censored data. \textit{Journal of Computational and Graphical Statistics}. 1999;8(1):109–20. \href{https://doi.org/10.1080/10618600.1999.10474804}{doi: 10.1080\slash 10618600.\allowbreak{}1999.10474804}
\item Zeng D, Gao F, Lin DY. Maximum likelihood estimation for semiparametric regression models with multivariate interval-censored data. \textit{Biometrika}. 2017;104(3):505–25. \href{https://doi.org/10.1093/biomet/asx029}{doi: 10.\allowbreak{}1093\slash biomet\slash asx029}
\end{enumerate}
2023-04-28 01:02:09 +10:00
\end{document}