
Patricia A. Apellániz

Information Processing

and Telecommunications Center

Universidad Politécnica de Madrid

Madrid

patricia.alonsod@upm.es

&Juan Parras

Information Processing

and Telecommunications Center

Universidad Politécnica de Madrid

Madrid

j.parras@upm.es

&Santiago Zazo

Information Processing

and Telecommunications Center

Universidad Politécnica de Madrid

Madrid

santiago.zazo@upm.es

###### Abstract

As in many fields of medical research, survival analysis has witnessed a growing interest in the application of deep learning techniques to model complex, high-dimensional, heterogeneous, incomplete, and censored medical data. Current methods often make assumptions about the relations between data that may not be valid in practice. In response, we introduce SAVAE (Survival Analysis Variational Autoencoder), a novel approach based on Variational Autoencoders. SAVAE contributes significantly to the field by introducing a tailored ELBO formulation for survival analysis, supporting various parametric distributions for covariates and survival time (as long as the log-likelihood is differentiable). It offers a general method that consistently performs well on various metrics, demonstrating robustness and stability through different experiments. Our proposal effectively estimates time-to-event, accounting for censoring, covariate interactions, and time-varying risk associations. We validate our model in diverse datasets, including genomic, clinical, and demographic data, with varying levels of censoring. This approach demonstrates competitive performance compared to state-of-the-art techniques, as assessed by the Concordance Index and the Integrated Brier Score. SAVAE also offers an interpretable model that parametrically models covariates and time. Moreover, its generative architecture facilitates further applications such as clustering, data imputation, and the generation of synthetic patient data through latent space inference from survival data.

**Keywords** Survival Analysis $\cdot$ Time to Event $\cdot$ Deep Learning $\cdot$ Variational Autoencoders

## 1 Introduction

In recent years, there has been a significant transformation in medical research methodologies towards the adoption of Deep Learning (DL) techniques for predicting critical events, such as disease development and patient mortality. Despite their potential to handle complex data, practical applications in this domain remain limited, with most studies still relying on traditional statistical methods.

Survival Analysis (SA), or time-to-event analysis, is an essential tool for studying specific events in various disciplines, not only in medicine but also in fields such as recommendation systems [1], employee retention [2], market modeling [3], and financial risk assessment [4].

According to the existing literature, the Cox proportional hazards model (Cox-PH) [5] is the dominant SA method, offering a semiparametric regression alternative to the non-parametric Kaplan-Meier estimator [6]. Unlike the Kaplan-Meier method, which uses a single covariate, Cox-PH incorporates multiple covariates to predict event times and assess their impact on the hazard rate at specific time points. However, it is crucial to acknowledge that the Cox-PH model is built on certain strong assumptions. One of these is the proportional hazards assumption, which posits that the ratio between the hazard functions of any two individuals remains constant over time. The model also assumes a linear relation between the natural logarithm of the relative hazard (the ratio of the hazard at time $t$ to the baseline hazard) and the covariates, as well as the absence of interactions among these covariates. These assumptions may not hold in real-world datasets, where complex interactions between covariates and non-linear relations might exist. Other traditional parametric statistical models for SA make specific assumptions about the distribution of event times. For instance, the models presented in [7, 8] assume exponential and Weibull distributions, respectively, for event times. A drawback of these models is their lack of flexibility in changing the assumed distribution for survival times, making them less adaptable to diverse datasets.

In response, researchers have explored Deep Neural Networks (DNNs) to effectively capture the intricate and non-linear relations between predictive variables and a patient's risk of failure. Significant emphasis has been placed on improving the Cox-PH model, which has been the standard approach in SA.

Recent approaches have introduced Neural Networks (NNs) in various configurations, either enhancing the Cox-PH model with neural components or proposing entirely novel architectures. This exploration of NN applications for SA traces back to 1995 with the work of [9], who initially employed a simple feed-forward NN to replace linear interaction terms while incorporating non-linearities. Subsequently, the field saw the emergence of DeepSurv [10], a model designed to extract non-linearities from input data, albeit still relying on the proportional hazards assumption. This assumption persists in other related models like the one proposed by [11]. Beyond addressing non-linearity, some researchers have sought to enhance prediction accuracy and model interpretability by combining Bayesian networks with the Cox-PH model, as demonstrated by [12]. Additionally, efforts have been made to introduce concepts that facilitate analysis when data availability is limited, as seen in the work of [13, 14]. However, it is essential to note that all these models still depend on the proportional hazards assumption. As a result, novel architectures such as DeepHit [15] have emerged as alternatives that do not rely on the proportional hazards assumption. While DeepHit has exhibited superior performance compared to other state-of-the-art models, it operates exclusively in the discrete-time domain, which comes with certain limitations, notably the requirement for a dataset with a substantial number of observations, a condition that may not be feasible in real-world scenarios.

In light of the persistent limitations of existing approaches in the realm of SA, this paper introduces a novel, versatile algorithm grounded in DL advances, named SAVAE (Survival Analysis Variational Autoencoder). SAVAE has been meticulously designed to predict the distribution of the time to a predefined event and is adaptable to various application domains, with a specific focus on the medical context. Our main contributions are:

- •
We introduce a generative approach that underpins the development of a flexible tool, SAVAE, based on Variational Autoencoders (VAEs). SAVAE can effectively reproduce the data by analytically modeling the discrete or continuous time to a specific event. This analytical approach enables the calculation of all necessary statistics with precision, since the outputs provided by SAVAE are the parameters of the predicted time distribution.

- •
SAVAE is a flexible tool that enables us to use a wide variety of distributions to model the time-to-event and the covariates, so we do not need to assume proportional hazards. By using NNs, it also permits modeling complex, non-linear relations between the covariates and the time-to-event, as opposed to the linearity assumptions in the state of the art. Furthermore, the time-to-event is trained with standard likelihood techniques, unlike state-of-the-art models such as DeepHit, which directly optimizes the Concordance Index (C-index). This makes our approach more general and flexible, since any differentiable distribution can be used to model the time and the covariates.

- •
Furthermore, our proposal can be trained on censored data, effectively leveraging information from patients who have not yet experienced the event of interest.

- •
We have conducted comprehensive time-to-event estimation experiments using datasets characterized by continuous and discrete time-to-event values and varying covariate natures, encompassing both clinical and genomic data. These experiments involve a comparative analysis with the traditional Cox-PH model and other DL techniques. The results indicate that SAVAE is competitive with these models in terms of the C-index and the Integrated Brier score (IBS).

## 2 Background

To establish context, we first define SA, a branch of applied statistics that examines random processes related to system failure and mortality. We then provide an analytical overview of VAEs before introducing SAVAE.

### 2.1 Survival Analysis

In a conventional time-to-event or SA setup, $N$ observations are given. Each observation is described by a triplet in $D=(x_{i},t_{i},d_{i})^{N}_{i=1}$, where $x_{i}=(x_{i}^{1},...,x_{i}^{L})$ is an $L$-dimensional vector in which $l=1,2,...,L$ indexes the covariates, $t_{i}$ is the time-to-event, and $d_{i}\in\{0,1\}$ is the censoring indicator. When $d_{i}=0$ (censored), the subject has not experienced an event up to time $t_{i}$, while $d_{i}=1$ indicates an observed event (ground truth). SA models characterize the time-to-event conditional on covariates through the time probability density function $p(t|x)$, the hazard rate function (the instantaneous rate of occurrence of the event at a specific time) $h(t|x)$, or the survival function $S(t|x)=P(T>t|x)=1-F(t|x)$, i.e., the probability of a failure occurring after time $t$, where $F(t|x)$ is the Cumulative Distribution Function (CDF) of the time. From the standard definitions of the survival function, the relations between these three characterizations are:

$p(t|x)=h(t|x)S(t|x).$ | (1) |
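As a sanity check on Equation 1, the following minimal sketch evaluates the three characterizations for a hypothetical exponential time-to-event model with rate $\lambda$ (a constant-hazard special case not prescribed by the paper), and confirms that the density recovered as $h(t)S(t)$ matches the closed-form exponential pdf:

```python
import math

# Exponential model (hypothetical illustration):
#   S(t) = exp(-lam * t)   survival function
#   h(t) = lam             constant hazard rate
#   p(t) = h(t) * S(t)     density, per Equation 1

def survival(t, lam):
    return math.exp(-lam * t)

def hazard(t, lam):
    # Constant hazard: the exponential distribution is memoryless.
    return lam

def density(t, lam):
    # Equation 1: p(t) = h(t) * S(t)
    return hazard(t, lam) * survival(t, lam)

# The recovered density matches the closed form lam * exp(-lam * t).
lam, t = 0.5, 2.0
assert abs(density(t, lam) - lam * math.exp(-lam * t)) < 1e-12
```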

### 2.2 Vanilla Variational Autoencoder

In 2013, [16] proposed the original VAE, a powerful approach employing DNNs for Bayesian inference. It addresses a dataset consisting of $N$ i.i.d. samples $x_{i}$, $i\in\{1,2,...,N\}$, of a continuous or discrete variable, generated by the following random process, depicted in Figure 1:

- 1.
A latent variable $z_{i}$ is sampled from a given prior probability distribution $p(z)$. [16] assumes a form $p_{\theta}(z)$, i.e., the prior depends on some parameters $\theta$, but its main result drops this dependence. Therefore, in this paper, a simple prior $p(z)$ is assumed.

- 2.
A conditional distribution, $p_{\theta}(x|z)$, with parameters $\theta$ generates the observed values, $x_{i}$. This process is governed by a generative model. Certain assumptions are made, including the differentiability of probability density functions (pdfs), $p(z)$, and $p_{\theta}(x|z)$, regarding $\theta$ and $z$.

The latent variable $z$ and the parameters $\theta$ are unknown. Without simplifying assumptions, evaluating the marginal likelihood $p_{\theta}(x)=\int p(z)p_{\theta}(x|z)dz$ is infeasible, and so is the true posterior density, which can be defined as Equation 2 using Bayes' theorem:

$p_{\theta}(z|x)=\frac{p_{\theta}(x|z)p(z)}{p_{\theta}(x)}.$ | (2) |

Variational methods offer a solution by introducing a variational approximation, $q_{\phi}(z|x)$, to the true posterior. This approximation involves finding the best parameters for a chosen family of distributions through optimization. The quality of the approximation depends on the expressiveness of this parametric family.

#### 2.2.1 ELBO derivation

Since an optimization problem must be solved, the optimization target needs to be derived. Because the $x_{i}$ are assumed to be i.i.d., the marginal log-likelihood of a set of points $\{x_{i}\}_{i=1}^{N}$ can be expressed as

$\log p_{\theta}(x_{1},x_{2},...,x_{N})=\sum_{i=1}^{N}\log p_{\theta}(x_{i}),$ | (3) |

where

$\begin{split}p_{\theta}(x)=\int p_{\theta}(x,z)dz=\int p_{\theta}(x,z)\frac{q_{\phi}(z|x)}{q_{\phi}(z|x)}dz=\mathbb{E}_{q_{\phi}(z|x)}\left[\frac{p_{\theta}(x,z)}{q_{\phi}(z|x)}\right].\end{split}$ | (4) |

Using Jensen’s inequality, we can obtain:

$\begin{split}\log p_{\theta}(x)=\log\left[\mathbb{E}_{q_{\phi}(z|x)}\left[\frac{p_{\theta}(x,z)}{q_{\phi}(z|x)}\right]\right]\geq\mathbb{E}_{q_{\phi}(z|x)}\left[\log\frac{p_{\theta}(x,z)}{q_{\phi}(z|x)}\right].\end{split}$ | (5) |

Rearranging Equation 5, we can express it as follows:

$\begin{split}\mathbb{E}_{q_{\phi}(z|x)}\left[\log\frac{p_{\theta}(x,z)}{q_{\phi}(z|x)}\right]=\int q_{\phi}(z|x)\log\frac{p_{\theta}(x|z)p(z)}{q_{\phi}(z|x)}dz=\int q_{\phi}(z|x)\log\frac{p(z)}{q_{\phi}(z|x)}dz\\+\int q_{\phi}(z|x)\log p_{\theta}(x|z)dz=-\int q_{\phi}(z|x)\log\frac{q_{\phi}(z|x)}{p(z)}dz+\int q_{\phi}(z|x)\log p_{\theta}(x|z)dz\\=-D_{KL}(q_{\phi}(z|x)||p(z))+\mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta}(x|z)\right]=\pazocal{L}(x,\theta,\phi),\end{split}$ | (6) |

where $D_{KL}(p||q)$ is the Kullback-Leibler divergence between distributions $p$ and $q$, and $\pazocal{L}(x,\theta,\phi)$ is the Evidence Lower BOund (ELBO), whose name comes from Equation 5:

$\begin{split}\log p_{\theta}(x)\geq-D_{KL}(q_{\phi}(z|x)||p(z))+\mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta}(x|z)\right]=\pazocal{L}(x,\theta,\phi),\end{split}$ | (7) |

that is, the ELBO is a lower bound for the marginal log-likelihood of the relevant set of points. Thus, by maximizing the ELBO, the log-likelihood of the data is maximized. This would be the optimization problem to solve.

#### 2.2.2 Implementation

The ELBO derived in Equation 7 can be effectively implemented using a DNN-based architecture. However, computing the gradient of the ELBO with respect to $\phi$ presents challenges due to the presence of $\phi$ in the expectation term (the second part of the ELBO in Equation 7). To address this issue, [16] introduced the reparameterization trick. This method modifies the latent space sampling process to make it differentiable, enabling the use of gradient-based optimization techniques. Rather than sampling directly from the latent space distribution, VAEs sample $\epsilon$ from a simple distribution, often a standard normal distribution. Subsequently, a deterministic transformation $g_{\phi}$ is applied to $\epsilon$, producing $z=g_{\phi}(x,\epsilon)$, where $z\sim q_{\phi}(z|x)$ and $\epsilon\sim p(\epsilon)$. In this case, the ELBO can be estimated as follows:

$\begin{split}\pazocal{\hat{L}}(x,\theta,\phi)=\frac{1}{N}\sum_{i=1}^{N}\bigg(-D_{KL}(q_{\phi}(z|x_{i})||p(z))+\log p_{\theta}(x_{i}|g_{\phi}(x_{i},\epsilon_{i}))\bigg).\end{split}$ | (8) |

This modification facilitates the calculation of the gradient of the ELBO with respect to $\theta$ and $\phi$, allowing the application of standard gradient optimization methods.

Equation 8 offers a solution using DNNs, with functions parameterized by $\phi$ and $\theta$. Gradients can be conveniently computed using the Backpropagation algorithm, which is automated by various programming libraries. The term VAE derives from the fact that Equation 8 resembles the architecture of an Autoencoder (AE) [17], as illustrated in Figure 2. Notably, the variational distribution $q_{\phi}$ can be implemented using a DNN with weights $\phi$, taking an input sample $x$ and outputting parameters for the deterministic transformation $g_{\phi}$. The VAE’s latent space comprises the distribution of the latent variable $z$, which is a deterministic transformation $g_{\phi}$ of the encoder DNN output and random ancillary noise $\epsilon$. A sampled value $z_{i}$ is drawn from the latent distribution and used to generate an output sample, where another DNN with weights $\theta$ acts as a decoder, taking $z$ as input and providing parameters of the distribution $p_{\theta}(x|z)$ as output.
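As an illustration of the reparameterization trick, the following minimal sketch replaces the encoder DNN with a hypothetical toy mapping (the names `encode` and `reparameterize` are ours, not from the paper). It draws $\epsilon$ from a standard normal and applies the deterministic transformation $z=g_{\phi}(x,\epsilon)=\mu+\sigma\epsilon$:

```python
import math
import random

def encode(x):
    # Hypothetical toy encoder standing in for a DNN with weights phi:
    # it maps an input vector to the mean and log-variance of q_phi(z|x).
    mu = [0.1 * v for v in x]
    log_var = [math.log(0.5) for _ in x]
    return mu, log_var

def reparameterize(mu, log_var, rng):
    # z = g_phi(x, eps) = mu + sigma * eps, with eps ~ N(0, 1).
    # Sampling eps instead of z keeps the path from phi to z differentiable.
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]

rng = random.Random(0)
mu, log_var = encode([1.0, -2.0, 0.5])
z = reparameterize(mu, log_var, rng)
assert len(z) == len(mu)  # one latent coordinate per mean parameter
```

In an actual implementation, `encode` would be a trained network and the sampled `z` would feed the decoder that outputs the parameters of $p_{\theta}(x|z)$.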

Two key observations emerge.

- 1.
The ELBO in Equation 7 includes a regularization term penalizing deviations from the prior in the latent space and a reconstruction error term that enforces similarity between samples generated from the latent space and the inputs.

- 2.
In contrast to standard AEs, VAEs incorporate intermediate sampling, rendering them non-deterministic. This dual sampling process is retained in applications where the distribution of output variables is of interest, facilitating the derivation of input value distribution parameters.

## 3 Materials and Methods

The interest lies in using VAEs to obtain the predictive distribution of the time-to-event given the covariates. The proposed approach, termed Survival Analysis VAE (SAVAE) and depicted in Figure 3, extends the Vanilla VAE. SAVAE includes a continuous latent variable $z$, an observable covariate vector $x$, the time-to-event $t$, and generative models $p_{\theta_{1}}(x|z)$ and $p_{\theta_{2}}(t|z)$, assuming conditional independence, a characteristic inherent to VAEs and their ability to effectively model the joint distribution of variables. This means that, knowing $z$, the components of the vector $x$ and $t$ can be generated independently. To define the predictive distribution based on covariates, a single variational distribution estimates the variational posterior $p(z|x)$. While it is possible to include the effect of time ($p(z|t,x)$), this approach uses only the covariates to obtain the latent space, since the time $t$ is unknown when predicting survival times for test patients and may be censored. SAVAE combines VAEs and survival analysis, offering a flexible framework for modeling complex event data.

### 3.1 Goal

To achieve the main objective, which is to obtain the predictive distribution of the time-to-event, variational methods are used as defined in [18]:

$\begin{split}p\left(t^{*}|x^{*},\left\{x_{i},t_{i}\right\}^{N}_{i=1}\right)=\int p\left(t^{*}|z,\left\{x_{i},t_{i}\right\}^{N}_{i=1}\right)p\left(z|x^{*},\left\{x_{i},t_{i}\right\}^{N}_{i=1}\right)dz,\end{split}$ | (9) |

where $x^{*}$ represents the covariates of a certain patient, and its survival time distribution $p\left(t^{*}|z,\left\{x_{i},t_{i}\right\}^{N}_{i=1}\right)$ needs to be estimated.

### 3.2 ELBO derivation

Considering our main objective and the use of VAE as the architecture on which we base our approach, the ELBO development seen previously can be extended to apply to our case. SAVAE assumes that the two generative models $p_{\theta_{1}}(x|z)$ and $p_{\theta_{2}}(t|z)$ are conditionally independent. This implies that if $z$ is known, it is possible to generate $x$ or $t$. Furthermore, due to the VAE architecture, it is assumed that each component of the covariate vector $x$ is also conditionally independent given $z$. Therefore,

$p(x,t,z)=p_{\theta_{1}}(x|z)p_{\theta_{2}}(t|z)p(z)=p_{\theta}(x,t|z)p(z).$ | (10) |

It also assumes that the distribution families of $p_{\theta_{1}}(x|z)$ and $p_{\theta_{2}}(t|z)$ are known, but not the parameters $\theta_{1}$ and $\theta_{2}$. Taking these assumptions into account, the ELBO can be computed in a similar way to the Vanilla VAE case. First, the conditional likelihood of a set of points $\left\{x_{i},t_{i}\right\}^{N}_{i=1}$ can be expressed as follows:

$\begin{split}\log p_{\theta}(x_{1},x_{2},...,x_{N},t_{1},t_{2},...,t_{N}|z)=\sum_{i=1}^{N}\log p_{\theta}(x_{i},t_{i}|z)\\=\sum_{i=1}^{N}\left(\log p_{\theta_{2}}(t_{i}|z)+\sum_{l=1}^{L}\log p_{\theta_{1}}(x_{i}^{l}|z)\right),\end{split}$ | (11) |

where the expected conditional likelihood can be expressed as:

$\begin{split}\mathbb{E}_{z}\left[p_{\theta}(x,t|z)\right]=\int p_{\theta}(x,t|z)p(z)dz=\int\frac{p_{\theta}(x,t,z)}{p(z)}p(z)dz\\=\int p_{\theta}(x,t,z)dz=p_{\theta}(x,t)=\int p_{\theta}(x,t,z)\frac{q_{\phi}(z|x)}{q_{\phi}(z|x)}dz\\=\mathbb{E}_{q_{\phi}(z|x)}\left[\frac{p_{\theta}(x,t,z)}{q_{\phi}(z|x)}\right].\end{split}$ | (12) |

As the interest lies in computing the log-likelihood:

$\begin{split}\log p_{\theta}(x,t)=\log\left[\mathbb{E}_{q_{\phi}(z|x)}\left[\frac{p_{\theta}(x,t,z)}{q_{\phi}(z|x)}\right]\right]\\\geq\mathbb{E}_{q_{\phi}(z|x)}\left[\log\frac{p_{\theta}(x,t,z)}{q_{\phi}(z|x)}\right],\end{split}$ | (13) |

where the inequality comes from applying Jensen’s inequality. Then, this could be rearranged as:

$\begin{split}\mathbb{E}_{q_{\phi}(z|x)}\left[\log\frac{p_{\theta}(x,t,z)}{q_{\phi}(z|x)}\right]=\int q_{\phi}(z|x)\log\frac{p_{\theta_{1}}(x|z)p_{\theta_{2}}(t|z)p(z)}{q_{\phi}(z|x)}dz\\=-\int q_{\phi}(z|x)\log\frac{q_{\phi}(z|x)}{p(z)}dz+\int q_{\phi}(z|x)\left(\log p_{\theta_{1}}(x|z)+\log p_{\theta_{2}}(t|z)\right)dz\\=-D_{KL}(q_{\phi}(z|x)||p(z))+\mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta_{1}}(x|z)+\log p_{\theta_{2}}(t|z)\right]\\=\pazocal{L}(x,\theta_{1},\theta_{2},\phi).\end{split}$ | (14) |

After computing this ELBO, it can be seen that it is similar to that of the Vanilla VAE (Equation 7). The only difference lies in the reconstruction term, which is expressed differently in order to explicitly distinguish between the covariates and the time-to-event. Using Equation 11 and the reparameterization trick, the ELBO estimator is obtained, explicitly accounting for each dimension of the covariate vector:

$\begin{split}\pazocal{\hat{L}}(x,\theta_{1},\theta_{2},\phi)\\=\frac{1}{N}\sum_{i=1}^{N}\Bigg(-D_{KL}(q_{\phi}(z|x_{i})||p(z))+\log p_{\theta_{2}}(t_{i}|g_{\phi}(x_{i},\epsilon_{i}))+\sum_{l=1}^{L}\log p_{\theta_{1}}(x_{i}^{l}|g_{\phi}(x_{i},\epsilon_{i}))\Bigg).\end{split}$ | (15) |

In terms of implementation, three DNNs have been used, as specified in Figure 4. Note that the decoder DNNs output the parameters of each distribution.

#### 3.2.1 Divergence computation

SAVAE assumes that $q_{\phi}(z|x)$ follows a multidimensional Gaussian distribution defined by a vector of means $\mu$, with elements $\mu_{j}$, and a diagonal covariance matrix $C$ whose main diagonal consists of the variances $\sigma^{2}_{j}$. According to [16], it can be stated that:

$-D_{KL}(q_{\phi}(z|x)||p(z))=\frac{1}{2}\sum_{j=1}^{J}(1+\log(\sigma_{j}^{2})-\mu_{j}^{2}-\sigma_{j}^{2}),$ | (16) |

where $J$ is the dimension of the latent space $z$. This means that the Kullback-Leibler divergence from the ELBO Equation 15 can be calculated analytically.
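The closed form of Equation 16 can be sketched directly in code; the sketch below parameterizes each coordinate by its mean and log-variance (a common implementation choice, assumed here rather than stated in the paper):

```python
import math

def neg_kl_gaussian(mu, log_var):
    # -D_KL(q || p) for q = N(mu, diag(sigma^2)) and p = N(0, I), Equation 16:
    #   0.5 * sum_j (1 + log sigma_j^2 - mu_j^2 - sigma_j^2)
    # log_var[j] stores log sigma_j^2, so sigma_j^2 = exp(log_var[j]).
    return 0.5 * sum(1.0 + lv - m * m - math.exp(lv)
                     for m, lv in zip(mu, log_var))

# When q equals the prior (mu = 0, sigma^2 = 1), the divergence term is zero.
assert abs(neg_kl_gaussian([0.0, 0.0], [0.0, 0.0])) < 1e-12
# Any other q yields a strictly negative term (i.e., positive divergence).
assert neg_kl_gaussian([1.0, 0.0], [0.0, 0.0]) < 0.0
```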

#### 3.2.2 Time modeling

One significant challenge in handling survival data is the issue of censorship, which occurs when a patient has not yet experienced the event of interest. In such cases, the true survival time remains unknown, resulting in partial or incomplete observations. Consequently, SA models must employ specific techniques capable of accommodating censored observations along with uncensored ones to reliably estimate relevant parameters.

In our case, to account for censoring in survival data, we start from the time $t$ reconstruction term from Equation 15 for a single patient:

$\pazocal{\hat{L}}_{time}(x_{i},\theta_{2},\phi)=\log p_{\theta_{2}}(t_{i}|g_{\phi}(x_{i},\epsilon_{i})).$ | (17) |

Taking into account the censoring indicator $d_{i}$:

$d_{i}=\begin{cases}0\;\;\;\text{if censored}\\1\;\;\;\text{if event experienced}\end{cases},$ | (18) |

we could just use the information given by uncensored patients. However, we would waste information, since we know that the censored patients have not experienced the event up to time $t_{i}$. Hence, considering Equation 1 and following [19], we model the time pdf as:

$p_{\theta_{2}}(t_{i}|g_{\phi}(x_{i},\epsilon_{i}))=h(t_{i}|g_{\phi}(x_{i},\epsilon_{i}))^{d_{i}}S(t_{i}|g_{\phi}(x_{i},\epsilon_{i})).$ | (19) |

Therefore, the hazard function term is only taken into account when the event has been experienced, that is, when the data are not censored. This way, SAVAE incorporates information from censored observations, providing consistent parameter estimates.

Regarding the distribution chosen for the event time, we have followed several publications, such as [8], where the Weibull distribution is used. It is a two-parameter distribution with positive support, that is, $p(t)=0,\;\forall t<0$. Its two scalar parameters are $\lambda>0$, which controls the scale, and $\alpha>0$, which controls the shape, as follows:

$\begin{cases}p(t;\alpha,\lambda)=\frac{\alpha}{\lambda}\left(\frac{t}{\lambda}\right)^{\alpha-1}\exp\left(-\left(\frac{t}{\lambda}\right)^{\alpha}\right)\\S(t;\alpha,\lambda)=\exp\left(-\left(\frac{t}{\lambda}\right)^{\alpha}\right)\\h(t;\alpha,\lambda)=\frac{p(t;\alpha,\lambda)}{S(t;\alpha,\lambda)}=\frac{\alpha}{\lambda}\left(\frac{t}{\lambda}\right)^{\alpha-1}\end{cases}.$ | (20) |

Although the Weibull distribution is our primary choice for modeling time-to-event data in SAVAE, it is crucial to highlight that other distributions are feasible, as long as their hazard functions and CDFs can be analytically calculated. This versatility distinguishes SAVAE from other models. For example, the exponential distribution, a special case of Weibull with $\alpha=1$, can represent constant hazard functions. Integrating alternative distributions, such as the exponential, into SAVAE is straightforward and only requires adjusting the terms in Equation 19. The ability of SAVAE to predict the distribution parameters for each patient facilitates the calculation of various statistics, such as means, medians and percentiles, providing flexibility beyond the models customized to a single distribution.
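Combining Equations 19 and 20, the censoring-aware Weibull log-likelihood for a single patient can be sketched as below. This is a minimal illustration: in SAVAE the parameters $\alpha$ and $\lambda$ would be produced by the time decoder, whereas here they are passed in directly.

```python
import math

def weibull_log_lik(t, d, alpha, lam):
    # Censoring-aware log-likelihood of Equation 19:
    #   log p = d * log h(t; alpha, lam) + log S(t; alpha, lam)
    # with, from Equation 20:
    #   h(t) = (alpha / lam) * (t / lam)^(alpha - 1)
    #   S(t) = exp(-(t / lam)^alpha)
    log_h = math.log(alpha / lam) + (alpha - 1.0) * math.log(t / lam)
    log_s = -((t / lam) ** alpha)
    return d * log_h + log_s

# For an uncensored observation (d = 1) this is the full Weibull log-density;
# for a censored one (d = 0) only the survival term log S(t) remains.
t, alpha, lam = 2.0, 1.5, 3.0
assert abs(weibull_log_lik(t, 0, alpha, lam) - (-((t / lam) ** alpha))) < 1e-12
```

Swapping in another distribution (e.g., the exponential, $\alpha=1$) only requires replacing `log_h` and `log_s` with the corresponding hazard and survival expressions.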

#### 3.2.3 Marginal log-likelihood computation

Assigning distribution models to the patient covariates in the reconstruction term is essential in SAVAE. This choice enables control over the resulting output variable distribution, but it also implies that the model approximates the chosen distribution even if the actual distribution differs. The third component of the ELBO (Equation 15) depends on the log-likelihood of the data, which for some representative distributions is:

- •
Gaussian distribution: Suitable for real-numbered variables ($x_{i}^{l}\in(-\infty,+\infty)$), it has parameters $\mu\in(-\infty,+\infty)$ and $\sigma\in(0,+\infty)$, known for its symmetric nature. Its log-likelihood function is:

$\begin{split}\log(p(x_{i}^{l};\mu,\sigma))=-\log(\sigma\sqrt{2\pi})-\frac{1}{2}\left(\frac{x_{i}^{l}-\mu}{\sigma}\right)^{2}.\end{split}$ (21) - •
Bernoulli distribution: Applied to binary variables ($x_{i}^{l}\in\{0,1\}$), it has a single parameter $\beta\in[0,1]$, representing the probability of $x_{i}^{l}=1$. Its log-likelihood function is:

$\log(p(x_{i}^{l};\beta))=x_{i}^{l}\log(\beta)+(1-x_{i}^{l})\log(1-\beta).$ (22) - •
Categorical distribution: Models discrete variables with $K$ possible values. We can think of $x_{i}^{l}$ as a categorical scalar random variable with $K$ different values, where each possible outcome is assigned a probability $\theta_{k}$ (note that $\sum_{k=1}^{K}\theta_{k}=1$). The log-likelihood function can be computed from the Probability Mass Function (PMF) as:

$\log(p(x_{i}^{l}|\theta_{1},\theta_{2},...,\theta_{K}))=\log\left(\prod_{k=1}^{K}\theta_{k}^{\mathbb{I}(x_{i}^{l}=k)}\right),$ (23) where the indicator function means:

$\mathbb{I}(x_{i}^{l}=k)=\begin{cases}1\quad x_{i}^{l}=k\\0\quad x_{i}^{l}\neq k\end{cases}.$ (24)

Recall that other desired distributions can be implemented in SAVAE, as long as their log-likelihood is differentiable.
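The three log-likelihoods above (Equations 21 to 24) can be sketched as plain functions; the names and the list-based probability encoding are illustrative choices of ours:

```python
import math

def gaussian_ll(x, mu, sigma):
    # Equation 21: log-density of N(mu, sigma^2) at x.
    return -math.log(sigma * math.sqrt(2.0 * math.pi)) \
           - 0.5 * ((x - mu) / sigma) ** 2

def bernoulli_ll(x, beta):
    # Equation 22: x in {0, 1}, beta = P(x = 1).
    return x * math.log(beta) + (1 - x) * math.log(1.0 - beta)

def categorical_ll(x, probs):
    # Equations 23-24: the indicator keeps only the observed category,
    # so the product collapses to the probability of class x.
    return math.log(probs[x])

assert abs(gaussian_ll(0.0, 0.0, 1.0) + math.log(math.sqrt(2.0 * math.pi))) < 1e-12
assert abs(bernoulli_ll(1, 0.25) - math.log(0.25)) < 1e-12
assert abs(categorical_ll(2, [0.2, 0.3, 0.5]) - math.log(0.5)) < 1e-12
```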

## 4 Results and Discussion

Once SAVAE has been defined, the next step is the experimental validation. First, the data used as input to the model are described, followed by the experimental setup (network architecture and training process). Finally, SAVAE's performance evaluation is analyzed. The code can be found at https://github.com/Patricia-A-Apellaniz/savae.

### 4.1 Survival data

Dataset | # Samples | # Censored | # Covariates | Event Time (mean, (min - max)) | Censoring Time (mean, (min - max))
---|---|---|---|---|---
WHAS | 1638 | 948 (57.88%) | 5 | 1045.42 (1 - 1999) days | 1298.92 (371 - 1999) days
SUPPORT | 9104 | 2904 (31.89%) | 14 | 478.45 (3 - 2029) days | 1060.22 (344 - 2029) days
GBSG | 1546 | 965 (43.23%) | 7 | 44.49 (0.26 - 87.36) months | 65.15 (0.26 - 87.36) months
FLCHAIN | 6524 | 4562 (69.92%) | 8 | 3647.5 (0 - 5166) days | 4296.74 (1 - 5166) days
NWTCO | 4028 | 3457 (85.82%) | 6 | 2276.68 (4 - 6209) days | 2588.23 (4 - 6209) days
METABRIC | 1980 | 854 (56.18%) | 21 | 2944.81 (3 - 9193) days | 3424.81 (21 - 9193) days
PBC | 418 | 257 (61.48%) | 17 | 63.93 (1.37 - 159.8) months | 75.22 (17.77 - 159.83) months
STD | 877 | 530 (60.43%) | 21 | 369 (1 - 1519) days | 420 (1 - 1519) days
PNEUMON | 3470 | 3397 (97.9%) | 13 | 9.84 (0.5 - 12) months | 9.98 (0.5 - 12) months

In SA datasets, each patient contributes information about whether the event of interest occurred during the study period, categorizing them as censored or uncensored, along with their respective follow-up times. To evaluate SAVAE, we trained it on nine diverse disease datasets: WHAS, SUPPORT, GBSG, FLCHAIN, NWTCO, METABRIC, PBC, STD, and PNEUMON. We followed pre-processing procedures similar to those of state-of-the-art models, ensuring a fair evaluation on established SA benchmarks.

The Worcester Heart Attack Study (WHAS) [20] focuses on patients with acute myocardial infarction (AMI), providing clinical and demographic data. The Study to Understand Prognoses Outcomes and Risks of Treatment (SUPPORT) [21] investigates seriously ill hospitalized adults and includes information on demographics, comorbidities, and physiological measurements. The Rotterdam & German Breast Cancer Study Group (GBSG) [22, 23] combines data from node-positive breast cancer patients and a chemotherapy trial. The FLCHAIN [24] dataset studies the relationship between mortality and serum immunoglobulin Free Light Chains, which are important in hematological disorders. NWTCO [25] studies Wilms tumor in children, Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) [26] explores breast cancer, PBC focuses on Primary Biliary Cholangitis, STD deals with sexually transmitted diseases, and PNEUMON examines infant pneumonia.

Table 1 offers a more comprehensive view of the temporal aspects and occurrences of events within the various datasets considered. It becomes evident that a deliberate selection of various disease datasets has been made, each characterized by distinct types and quantities of information. Significantly, the evaluation of the model has been carried out systematically in datasets that show varying proportions of censored samples and differing time-to-event ranges. This strategic approach aims to provide a broader perspective on how the model might perform when applied to other real-world datasets.

### 4.2 Performance metrics

Recalling from Section 2.1, each dataset is described by $D=(x_{i},t_{i},d_{i})^{N}_{i=1}$ triplets, where $x_{i}=(x_{i}^{1},...,x_{i}^{L})$ is an $L$-dimensional vector of covariates, $t_{i}$ is the time to event, and $d_{i}\in\{0,1\}$ is the censoring indicator.

When evaluating an SA model, the literature shows that the most commonly used metric is the C-index, a generalization of the area under the ROC curve to censored data. It is a measure of the rank correlation between predicted risks and observed times, arising from the intuition that a higher risk of an event should correspond to a shorter time to the event. Therefore, a high number of concordant pairs, i.e., pairs of samples that meet this expectation, indicates that the model has good predictive quality.

In this case, the time-dependent C-index described in [27] is used, since the original one [28] cannot reflect possible changes in risk over time, being computed only at the initial time of observation. This C-index is defined as follows:

$\begin{split}C_{index}=P\Big(\hat{F}(t|x_{i})>\hat{F}(t|x_{j})|d_{i}=1,t_{i}<t_{j},t_{i}\leq t\Big),\end{split}$ | (25) |

where $\hat{F}(t|x_{i})$ is the CDF estimated by the model at the time $t$ given a set of covariates $x_{i}$. The probability is estimated by comparing the relative risks pairwise, as already mentioned.

Based on the prediction index defined in [29], [30] proposed the second evaluation metric used in this analysis: the Brier Score (BS). It is essentially a squared prediction error based on Inverse Probability of Censoring Weighting (IPCW) [31], a technique designed to recreate an unbiased scenario by compensating for censored samples, giving more weight to uncensored samples with similar features. Given a time $t$, the BS can be calculated as follows, with $G(\cdot)$ being the survival function corresponding to censoring ($1/G(t)$ is the IPCW weight):

$BS(t)=\frac{1}{N}\sum_{i=1}^{N}\left[\frac{(S(t|x_{i}))^{2}}{G(t_{i})}\,\mathbb{I}(t_{i}<t,d_{i}=1)+\frac{(1-S(t|x_{i}))^{2}}{G(t)}\,\mathbb{I}(t_{i}\geq t)\right].$ (26)

Since the C-index does not take into account the actual values of the predicted risk scores, the BS can be used to assess calibration: if a model predicts a 10% risk of experiencing an event at time $t$, the observed frequency in the data should match this percentage for a well-calibrated model. It is also a measure of discrimination, i.e., whether the model predicts risk scores that correctly order the events.
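A minimal NumPy sketch of Eq. (26), assuming the censoring survival function $G$ is supplied externally (e.g., a Kaplan-Meier estimate fitted on the censoring indicator); the function name and signature are illustrative:

```python
import numpy as np

def brier_score(S_t, times, events, G, t):
    """IPCW Brier score at horizon t, per Eq. (26).

    S_t:    predicted survival probabilities S(t | x_i), one per subject.
    times:  observed times t_i; events: censoring indicators d_i.
    G:      callable giving the censoring survival function G(u).
    """
    S_t = np.asarray(S_t, float)
    times = np.asarray(times, float)
    events = np.asarray(events, int)
    early = (times < t) & (events == 1)   # event before t: S(t|x_i) should be ~0
    late = times >= t                     # still at risk at t: S(t|x_i) should be ~1
    G_ti = np.array([G(ti) for ti in times])
    return np.mean(early * S_t**2 / np.where(G_ti > 0, G_ti, 1.0)
                   + late * (1.0 - S_t)**2 / G(t))
```

With no censoring ($G \equiv 1$), perfect predictions give a BS of 0 and maximally wrong ones give 1.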

In this case, the evaluation uses the integrated form of the BS, since it does not depend on the selection of a specific time $t$:

$IBS(t_{max})=\frac{1}{t_{max}}\int_{0}^{t_{max}}BS(t)\,dt.$ (27)
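In practice the integral in Eq. (27) is approximated numerically over a grid of evaluation times; a small sketch (assuming the grid starts at 0, trapezoidal rule written out to avoid NumPy version differences):

```python
import numpy as np

def integrated_brier_score(bs_values, grid):
    """Approximate Eq. (27): average BS(t) over [0, t_max].

    bs_values: BS(t) evaluated at each point of `grid`.
    grid:      increasing time grid, assumed to start at 0 and end at t_max.
    """
    bs = np.asarray(bs_values, float)
    t = np.asarray(grid, float)
    # trapezoidal rule, then divide by t_max = grid[-1]
    area = np.sum(0.5 * (bs[1:] + bs[:-1]) * np.diff(t))
    return area / t[-1]
```

A constant BS of 0.2 over the grid yields an IBS of 0.2, as expected.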

To statistically assess the performance of each model based on the C-index globally, we propose the Mean Reciprocal Rank (MRR) as the third metric. For each dataset, the compared models are ranked by their C-index; the Reciprocal Rank (RR) of a model is the inverse of its position in that ranking. For example, if a model ranks first, its RR is 1; if it ranks second, the RR is 0.5; if third, the RR is approximately 0.33, and so on. The MRR of a model is then the average of its RRs over the datasets:

$MRR=\frac{1}{Q}\sum_{i=1}^{Q}\frac{1}{rank_{i}},$ (28)

where $Q$ is the number of datasets over which each model is evaluated, and $rank_{i}$ is the position of the model's C-index in the ranking for the $i$-th dataset. Higher MRR values indicate that the model tends to rank near the top across datasets.
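A sketch of this computation (hypothetical helper, with ties broken by column order rather than shared ranks):

```python
import numpy as np

def mean_reciprocal_rank(scores, higher_is_better=True):
    """MRR per Eq. (28): scores is a (datasets x models) array of C-indices.

    For each dataset, models are ranked (rank 1 = best); a model's MRR is
    the mean reciprocal rank across datasets. Ties are broken by column
    order, unlike a shared-rank convention.
    """
    scores = np.asarray(scores, float)
    if higher_is_better:
        scores = -scores
    # argsort of argsort yields 0-based ranks per row; +1 makes rank 1 = best
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1
    return (1.0 / ranks).mean(axis=0)
```

For two models that each win on one of two datasets, both get an MRR of $(1 + 0.5)/2 = 0.75$.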

Finally, to add more statistical information on the performance of the models, we performed hypothesis tests comparing the mean C-index and IBS values of our model with those of the state-of-the-art models across the folds of our five-fold cross-validation. Specifically, we formulated a null hypothesis that the mean performance metrics of a state-of-the-art model are greater than our model's. We assessed this hypothesis using $p$-values with a significance threshold of 0.05, a common practice in hypothesis testing. When the obtained $p$-value fell below this threshold, we rejected the null hypothesis; in practical terms, this indicated that our model exhibited superior performance. Conversely, if the $p$-value exceeded 0.05, we concluded that there was no statistically significant difference between our model and the other. Note that this approach accounts for the variation in results across folds, providing a more comprehensive assessment of model performance than average results alone.
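One way to realize this one-sided test on the per-fold metrics is a paired t-test via SciPy; the numbers below are illustrative, not the paper's results:

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold C-indices from the five cross-validation folds.
savae = np.array([0.79, 0.78, 0.80, 0.77, 0.81])
baseline = np.array([0.70, 0.71, 0.69, 0.70, 0.72])

# H0: baseline mean >= SAVAE mean. alternative='less' tests whether the
# mean of (baseline - savae) is negative, i.e., SAVAE performs better.
_, p = stats.ttest_rel(baseline, savae, alternative="less")
significant = p < 0.05  # reject H0: SAVAE is significantly better
```

For an IBS comparison, where lower is better, the roles of the two samples would be swapped.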

### 4.3 Experimental setting

To begin with, SAVAE was implemented using the PyTorch framework [32]. As defined in Section 3.2, three different DNNs were trained: one encoder and two decoders, which infer the covariate and time distribution parameters, respectively. The Gaussian encoder has a straightforward architecture: a single hidden linear layer with a Rectified Linear Unit (ReLU) activation function, followed by an output linear layer with a hyperbolic tangent activation. Its input consists of the covariate vectors from the training dataset, and its output parameterizes a Gaussian latent space whose dimensionality was fixed to 5. The latent space serves as input for both decoders, each featuring two linear layers: the first employs a ReLU activation function with a dropout rate of 20%, while the final layer uses an activation function that depends on the specified distribution, tailoring the output to the parameters of the respective covariate distribution. The number of neurons in each hidden layer was fixed at 50. Training ran for 3000 epochs with a batch size of 64 samples, with an Early Stopping mechanism triggered by an insufficient reduction in validation loss.
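The described architecture can be sketched in PyTorch as follows; this is an illustrative reconstruction, not the authors' code, and the softplus output is only a placeholder for the distribution-specific activation:

```python
import torch
import torch.nn as nn

LATENT_DIM, HIDDEN = 5, 50  # dimensions stated in the text

class Encoder(nn.Module):
    """Gaussian encoder: one ReLU hidden layer, then a tanh output layer
    that parameterizes the mean and log-variance of the latent Gaussian."""
    def __init__(self, n_covariates):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_covariates, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, 2 * LATENT_DIM), nn.Tanh(),
        )

    def forward(self, x):
        mu, logvar = self.net(x).chunk(2, dim=-1)
        return mu, logvar

class Decoder(nn.Module):
    """Covariate/time decoder: ReLU layer with 20% dropout, then an output
    layer whose activation depends on the chosen parametric distribution
    (softplus here as a placeholder for positive parameters)."""
    def __init__(self, n_params, out_activation=nn.Softplus()):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, HIDDEN), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(HIDDEN, n_params), out_activation,
        )

    def forward(self, z):
        return self.net(z)
```

A Weibull time decoder, for instance, would output two positive parameters (shape and scale), hence `Decoder(2)`.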

To evaluate the results while ensuring their robustness against data partitioning, we used a five-fold cross-validation technique. This method was applied not only to our model but also to the state-of-the-art models used for performance comparison: Cox-PH, DeepHit, and DeepSurv. Moreover, given the inherent sensitivity of VAE architectures to initial conditions, we trained with up to 10 different random seeds and averaged the C-index over the three best-performing seeds, which we consider a representative and sufficient evaluation. Lastly, note that the three state-of-the-art models were implemented using the Pycox package [33], as were the validation metrics, C-index and IBS. The MRR was calculated manually, while the $p$-values were obtained using the SciPy package [34].
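The seed-selection step of this protocol can be sketched as follows (hypothetical helper names; the per-seed values would come from the cross-validation loop):

```python
import numpy as np

def top3_seed_average(cindex_by_seed):
    """Average the metric over the three best-performing random seeds.

    cindex_by_seed: dict mapping seed -> mean C-index across the 5 folds.
    Returns the mean of the three largest per-seed values, as in the
    evaluation protocol described above.
    """
    best = sorted(cindex_by_seed.values(), reverse=True)[:3]
    return float(np.mean(best))
```

For example, seeds scoring {0.70, 0.80, 0.60, 0.75} would be summarized by the mean of 0.80, 0.75, and 0.70, i.e., 0.75.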

### 4.4 Results

In this section, we present a comprehensive assessment of the performance of our proposed model, SAVAE, compared to three well-established state-of-the-art models: Cox-PH, DeepSurv, and DeepHit. We conducted extensive experiments across multiple datasets covering a diverse range of medical and clinical scenarios, focusing on the models' ability to predict survival outcomes from both censored and uncensored data points.

As the initial set of results, we compare performance in terms of the C-index. Table 2 shows that our model is fully comparable to the state-of-the-art models in terms of the average C-index. Additionally, note that the (min, max) intervals across folds overlap for all models, indicating consistent performance across different data subsets. The table also reveals that our model consistently achieves a higher MRR than the others across multiple datasets, showcasing its superiority in many cases regarding the average C-index. However, the C-index results among the different models are generally similar, highlighting the competitiveness of our model within the field. The broad intervals are primarily attributable to the limited sample sizes commonly found in medical databases, a characteristic that complicates the assessment of model performance; the cross-validation procedure described above mitigates this issue and ensures that our model's performance is robust and reliable. In summary, while our model outperforms the others in terms of MRR and achieves competitive average C-index scores, the overall similarity in C-index results underscores its robustness and suitability for various medical datasets.

Table 2: Average C-index per model and dataset, with (min, max) over the five cross-validation folds. Bold highlights the best mean; for C-index and MRR, higher is better.

| Dataset | COXPH | DEEPSURV | DEEPHIT | SAVAE |
|---|---|---|---|---|
| WHAS | 0.74 (0.66, 0.81) | 0.78 (0.57, 0.88) | **0.89** (0.82, 0.95) | 0.74 (0.67, 0.80) |
| SUPPORT | 0.58 (0.39, 0.78) | 0.57 (0.37, 0.82) | 0.55 (0.37, 0.73) | **0.61** (0.40, 0.86) |
| GBSG | 0.66 (0.61, 0.71) | **0.67** (0.58, 0.73) | 0.66 (0.58, 0.72) | **0.67** (0.62, 0.72) |
| FLCHAIN | 0.69 (0.50, 0.80) | 0.67 (0.55, 0.80) | 0.78 (0.73, 0.82) | **0.79** (0.75, 0.83) |
| NWTCO | 0.71 (0.64, 0.79) | 0.70 (0.60, 0.79) | **0.72** (0.66, 0.78) | 0.71 (0.63, 0.79) |
| METABRIC | 0.59 (0.52, 0.68) | **0.61** (0.52, 0.69) | 0.56 (0.46, 0.64) | **0.61** (0.53, 0.70) |
| PBC | **0.81** (0.64, 0.94) | 0.80 (0.65, 0.92) | 0.80 (0.62, 0.93) | **0.81** (0.62, 0.95) |
| STD | **0.60** (0.47, 0.72) | **0.60** (0.49, 0.71) | 0.59 (0.50, 0.68) | 0.59 (0.46, 0.71) |
| PNEUMON | 0.62 (0.54, 0.70) | 0.65 (0.49, 0.80) | **0.67** (0.57, 0.77) | 0.65 (0.53, 0.77) |
| MRR | 0.56 | 0.60 | 0.62 | **0.76** |

Table 3: $p$-values for the C-index comparison between SAVAE and each state-of-the-art model. Bold implies a $p$-value below our threshold, 0.05, meaning that SAVAE is significantly better than the corresponding model.

| Model | WHAS | SUPPORT | GBSG | FLCHAIN | NWTCO | METABRIC | PBC | STD | PNEUMON |
|---|---|---|---|---|---|---|---|---|---|
| COXPH | 0.579 | 0.058 | **0.0** | **0.0** | 0.268 | **0.003** | 0.45 | 0.887 | **0.003** |
| DEEPSURV | 1.0 | **0.02** | 0.149 | **0.0** | 0.135 | 0.549 | 0.28 | 0.927 | 0.382 |
| DEEPHIT | 1.0 | **0.0** | **0.0** | **0.01** | 0.644 | **0.0** | 0.228 | 0.727 | 0.935 |

In our validation process, we performed a statistical analysis using $p$-values to determine whether our model exhibited superior performance in terms of the C-index. To carry out this analysis, we compared the C-index values of our model across the folds with those obtained by each state-of-the-art model, under the significance threshold of 0.05. Our findings in Table 3 reveal several instances in which our model outperformed the state-of-the-art models, as evidenced by $p$-values below the threshold, highlighting the effectiveness and competitiveness of our proposed approach. This analysis, which considers the diverse C-index values across folds, provides a robust evaluation of model performance, extending beyond simple average comparisons.

Our validation through IBS values (Tables 4 and 5) yielded conclusions that closely parallel those derived from the C-index analysis. Overall, our model's IBS results align closely with those of the state-of-the-art models, demonstrating comparable performance, while our model remained consistently competitive and emerged as the top performer across the various datasets in our study. This convergence of results across different evaluation metrics reinforces the robustness and effectiveness of our approach, further solidifying its potential and utility in the field of SA.

Table 4: Average IBS per model and dataset, with (min, max) over the five cross-validation folds. Bold highlights the best mean; for IBS, lower is better, and for MRR, higher is better.

| Dataset | COXPH | DEEPSURV | DEEPHIT | SAVAE |
|---|---|---|---|---|
| WHAS | 0.171 (0.109, 0.279) | 0.134 (0.067, 0.260) | **0.120** (0.067, 0.175) | 0.159 (0.114, 0.205) |
| SUPPORT | 0.208 (0.074, 0.374) | **0.205** (0.057, 0.363) | 0.219 (0.086, 0.370) | 0.208 (0.063, 0.385) |
| GBSG | 0.182 (0.142, 0.223) | **0.179** (0.137, 0.228) | 0.208 (0.168, 0.248) | **0.179** (0.139, 0.222) |
| FLCHAIN | 0.137 (0.089, 0.185) | 0.142 (0.088, 0.186) | 0.121 (0.098, 0.145) | **0.102** (0.078, 0.124) |
| NWTCO | **0.107** (0.080, 0.138) | 0.109 (0.082, 0.149) | 0.111 (0.083, 0.147) | 0.127 (0.101, 0.152) |
| METABRIC | 0.186 (0.137, 0.233) | 0.191 (0.143, 0.244) | 0.214 (0.153, 0.275) | **0.180** (0.127, 0.236) |
| PBC | 0.147 (0.043, 0.281) | 0.146 (0.046, 0.268) | 0.195 (0.087, 0.340) | **0.138** (0.034, 0.267) |
| STD | 0.210 (0.121, 0.302) | 0.212 (0.123, 0.305) | 0.224 (0.142, 0.315) | **0.209** (0.121, 0.307) |
| PNEUMON | **0.016** (0.004, 0.031) | 0.017 (0.004, 0.034) | **0.016** (0.004, 0.031) | 0.021 (0.007, 0.037) |
| MRR | 0.55 | 0.55 | 0.47 | **0.71** |

Table 5: $p$-values for the IBS comparison between SAVAE and each state-of-the-art model. Bold implies a $p$-value below our threshold, 0.05, meaning that SAVAE is significantly better than the corresponding model.

| Model | WHAS | SUPPORT | GBSG | FLCHAIN | NWTCO | METABRIC | PBC | STD | PNEUMON |
|---|---|---|---|---|---|---|---|---|---|
| COXPH | 1 | 0.47 | 0.998 | 1 | **0.0** | 0.995 | 0.888 | 0.575 | **0.0** |
| DEEPSURV | **0.0** | 0.341 | 1 | **0.0** | 1.0 | 0.549 | 0.868 | 0.746 | **0.0** |
| DEEPHIT | **0.0** | 0.950 | 1 | 1 | **0.0** | 1 | 1.0 | 0.995 | **0.0** |

## 5 Conclusions

In this paper, we have presented an SA model, SAVAE, that stands out for avoiding assumptions that can limit performance in real-world scenarios. It is a VAE-based model that estimates continuous or discrete survival times, first, modeling complex non-linear relations among covariates through highly expressive DNNs and, second, using a combination of loss functions that captures the censoring inherent to survival data. Our model is competitive with various state-of-the-art models, namely Cox-PH, DeepSurv, and DeepHit, while being free from assumptions of linearity and proportional hazards. In contrast to DeepHit, which directly learns the C-index metric, we train using standard likelihood techniques. This makes our approach more flexible, as it allows many different distributions to model the data, while its performance remains competitive in terms of both C-index and IBS.

Furthermore, the adaptability of our model is a notable strength. While we have assumed specific distributions for both survival times and covariates in our experiments, SAVAE accommodates any other parametric distribution, as long as its CDF and hazard function are differentiable, making it a flexible tool. Notably, our model efficiently handles censoring to mitigate bias, introducing a novel improvement in results.

This work opens several promising lines for the future. An additional advantage lies in our model's architecture, where time and covariates are reconstructed from latent space information. This opens opportunities to extend its utility to various tasks developed with VAEs, including clustering [35], imputation of missing data [36], and data augmentation [37] through the generation of synthetic patients. Thus, this tool has great potential and can be exploited in future work with different functionalities, even in the world of Federated Learning [38, 39].

In summary, SAVAE emerges as a versatile and robust model for SA, surpassing state-of-the-art methods while offering extensibility to a broader range of healthcare applications. It presents a compelling solution for healthcare professionals looking for enhanced performance and adaptability in SA tasks.

## Acknowledgments

This research was supported by GenoMed4All project. GenoMed4All has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101017549. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## References

- [1] How Jing and Alexander Smola. Neural survival recommender. pages 515–524, 02 2017.
- [2] Carolyn Grob, Dorothea Lerman, Channing Langlinais, and Natalie Villante. Assessing and teaching job-related social skills to adults with autism spectrum disorder. Journal of Applied Behavior Analysis, 52, 09 2018.
- [3] Rong Wang, Yves Balkanski, Olivier Boucher, Philippe Ciais, Greg Schuster, Frédéric Chevallier, Bjørn Samset, Junfeng Liu, Shilong Piao, Myrto Valari, and Shu Tao. Estimation of global black carbon direct radiative forcing and its uncertainty constrained by observations: Radiative forcing of black carbon. Journal of Geophysical Research: Atmospheres, 121, 05 2016.
- [4] Scott Dellana and David West. Survival analysis of supply chain financial risk. The Journal of Risk Finance, 17:130–151, 03 2016.
- [5] D. R. Cox. Regression models and life-tables. Journal of the Royal Statistical Society. Series B (Methodological), 34(2):187–220, 1972.
- [6] Norman E. Breslow. Introduction to Kaplan and Meier (1958): Nonparametric estimation from incomplete observations. 1992.
- [7] Eric R. Ziegel. Statistical methods for survival data analysis. Technometrics, 35(1):101–101, 1993.
- [8] Rajesh Ranganath, Jaan Altosaar, Dustin Tran, and David M. Blei. Operator variational inference, 2016.
- [9] David Faraggi and Richard M. Simon. A neural network model for survival data. Statistics in Medicine, 14(1):73–82, 1995.
- [10] Jared Katzman, Uri Shaham, Alexander Cloninger, Jonathan Bates, Tingting Jiang, and Yuval Kluger. Deep survival: A deep Cox proportional hazards network. 06 2016.
- [11] Margaux Luck, Tristan Sylvain, Héloïse Cardinal, Andrea Lodi, and Yoshua Bengio. Deep learning for patient-specific kidney graft survival analysis, 2017.
- [12] Jidapa Kraisangka and Marek J. Druzdzel. A Bayesian network interpretation of the Cox's proportional hazard model. International Journal of Approximate Reasoning, 103:195–211, 2018.
- [13] Bhanukiran Vinzamuri and Chandan K. Reddy. Cox regression with correlation based regularization for electronic health records. 2013 IEEE 13th International Conference on Data Mining, pages 757–766, 2013.
- [14] Bhanukiran Vinzamuri, Yan Li, and Chandan K. Reddy. Active learning based survival regression for censored data. CIKM '14, page 241–250, New York, NY, USA, 2014. Association for Computing Machinery.
- [15] Changhee Lee, William R. Zame, Jinsung Yoon, and Mihaela van der Schaar. DeepHit: A deep learning approach to survival analysis with competing risks. In AAAI, 2018.
- [16] Diederik P. Kingma and Max Welling. Auto-encoding variational Bayes, 2013.
- [17] Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
- [18] Rajesh Ranganath, Adler Perotte, Noémie Elhadad, and David Blei. Deep survival analysis, 2016.
- [19] Silvia Liverani, Lucy Leigh, Irene Hudson, and Julie Byles. Clustering method for censored and collinear survival data. Computational Statistics, 36, 03 2021.
- [20] David W. Hosmer Jr., Stanley Lemeshow, and Susanne May. Applied survival analysis: regression modeling of time-to-event data. John Wiley & Sons, 2011.
- [21] William A. Knaus, Frank E. Harrell, Joanne Lynn, Lee Goldman, Russell S. Phillips, Alfred F. Connors, Neal V. Dawson, William J. Fulkerson, Robert M. Califf, Norman Desbiens, et al. The SUPPORT prognostic model: Objective estimates of survival for seriously ill hospitalized adults. Annals of Internal Medicine, 122(3):191–203, 1995.
- [22] John A. Foekens, Harry A. Peters, Maxime P. Look, Henk Portengen, Manfred Schmitt, Michael D. Kramer, Nils Brunner, Fritz Jaanicke, Marion E. Meijer-van Gelder, Sonja C. Henzen-Logmans, et al. The urokinase system of plasminogen activation and prognosis in 2780 breast cancer patients. Cancer Research, 60(3):636–643, 2000.
- [23] M. Schumacher, G. Bastert, H. Bojar, K. Hübner, M. Olschewski, W. Sauerbrei, C. Schmoor, C. Beyerle, R. L. Neumann, and H. F. Rauschecker. Randomized 2 x 2 trial evaluating hormonal treatment and the duration of chemotherapy in node-positive breast cancer patients. German Breast Cancer Study Group. Journal of Clinical Oncology, 12(10):2086–2093, 1994.
- [24] Angela Dispenzieri, Jerry Katzmann, Robert Kyle, Dirk Larson, Terry Therneau, Colin Colby, Raynell Clark, Graham Mead, Shaji Kumar, L. Melton, and S. Rajkumar. Use of nonclonal serum immunoglobulin free light chains to predict overall survival in the general population. Mayo Clinic Proceedings, 87:517–23, 06 2012.
- [25] Norman E. Breslow and Nilanjan Chatterjee. Design and analysis of two-phase studies with binary outcome applied to Wilms tumour prognosis. Journal of the Royal Statistical Society: Series C (Applied Statistics), 48(4):457–468, 1999.
- [26] Bernard Pereira, Suet-Feung Chin, Oscar M. Rueda, Hans-Kristian Moen Vollan, Elena Provenzano, Helen A. Bardwell, Michelle Pugh, Linda Jones, Roslin Russell, Stephen-John Sammut, Dana W. Y. Tsui, Bin Liu, Sarah-Jane Dawson, Jean Abraham, Helen Northen, John F. Peden, Abhik Mukherjee, Gulisa Turashvili, Andrew R. Green, Steve McKinney, Arusha Oloumi, Sohrab Shah, Nitzan Rosenfeld, Leigh Murphy, David R. Bentley, Ian O. Ellis, Arnie Purushotham, Sarah E. Pinder, Anne-Lise Børresen-Dale, Helena M. Earl, Paul D. Pharoah, Mark T. Ross, Samuel Aparicio, and Carlos Caldas. The somatic mutation profiles of 2,433 breast cancers refines their genomic and transcriptomic landscapes. Nature Communications, 7:11479, May 2016.
- [27] Laura Antolini, Patrizia Boracchi, and Elia Mario Biganzoli. A time-dependent discrimination index for survival data. Statistics in Medicine, 24, 2005.
- [28] Frank E. Harrell Jr., Robert M. Califf, David B. Pryor, Kerry L. Lee, and Robert A. Rosati. Evaluating the yield of medical tests. JAMA, 247(18):2543–2546, 05 1982.
- [29] Glenn W. Brier. Verification of forecasts expressed in terms of probability. Monthly Weather Review, 78(1):1–3, 1950.
- [30] E. Graf, C. Schmoor, W. Sauerbrei, and M. Schumacher. Assessment and comparison of prognostic classification schemes for survival data. Statistics in Medicine, 18(17-18):2529–2545, 1999.
- [31] James M. Robins. Information recovery and bias adjustment in proportional hazards regression analysis of randomized trials using surrogate markers. In Proceedings of the Biopharmaceutical Section, American Statistical Association, volume 24, page 3. San Francisco, CA, 1993.
- [32] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library, 2019.
- [33] Håvard Kvamme, Ørnulf Borgan, and Ida Scheel. Time-to-event prediction with neural networks and Cox regression, 2019.
- [34] Pauli Virtanen, Ralf Gommers, Travis E. Oliphant, Matt Haberland, Tyler Reddy, David Cournapeau, Evgeni Burovski, Pearu Peterson, Warren Weckesser, Jonathan Bright, Stéfan J. van der Walt, Matthew Brett, Joshua Wilson, K. Jarrod Millman, Nikolay Mayorov, Andrew R. J. Nelson, Eric Jones, Robert Kern, Eric Larson, CJ Carey, İlhan Polat, Yu Feng, Eric W. Moore, Jake VanderPlas, Denis Laxalde, Josef Perktold, Robert Cimrman, Ian Henriksen, E. A. Quintero, Charles R. Harris, Anne M. Archibald, Antônio H. Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods, 17:261–272, 2020.
- [35] Kart-Leong Lim, Xudong Jiang, and Chenyu Yi. Deep clustering with variational autoencoder. IEEE Signal Processing Letters, 27:231–235, 2020.
- [36] John T. McCoy, Steve Kroon, and Lidia Auret. Variational autoencoders for missing data imputation with application to a simulated milling circuit. IFAC-PapersOnLine, 51(21):141–146, 2018. 5th IFAC Workshop on Mining, Mineral and Metal Processing MMM 2018.
- [37] Clément Chadebec and Stéphanie Allassonnière. Data augmentation with variational autoencoders and manifold sampling, 2021.
- [38] Zhipin Gu, Liangzhong He, Peiyan Li, Peng Sun, Jiangyong Shi, and Yuexiang Yang. FREPD: A robust federated learning framework on variational autoencoder. Computer Systems Science and Engineering, 39:307–320, 01 2021.
- [39] Mirko Polato. Federated variational autoencoder for collaborative filtering. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1–8, 2021.