The hazard ratio (HR) has been the most popular measure for quantifying the magnitude of a treatment effect on time-to-event outcomes in clinical research. However, the HR estimated by Cox's method has several drawbacks. One major issue is that it has no clear interpretation when the proportional hazards (PH) assumption does not hold, because in non-PH cases it is affected by the study-specific censoring time distribution. Another major issue is that the absence of an absolute hazard value for each group obscures the clinical significance of the magnitude of the treatment effect. Given these limitations, we propose the average hazard with survival weight (AH-SW) as a summary metric of the event time distribution, and use the difference in AH-SW (DAH-SW) or the ratio of AH-SW (RAH-SW) to quantify the magnitude of the treatment effect. The proposed AH-SW is a new, digestible metric interpretable as the person-time event rate that would be observed in the absence of random censoring. It is defined as the ratio of the tau-year cumulative event rate to the tau-year restricted mean survival time (RMST), and it can be estimated nonparametrically. Numerical studies demonstrate that DAH-SW and RAH-SW offer power almost identical to that of Cox's method under PH scenarios and can be more powerful under the delayed-difference patterns often seen in immunotherapy trials. The proposed metrics (i.e., AH-SW, DAH-SW, and RAH-SW) and their inferential methods offer a digestible interpretation of the survival benefit of a new therapy that the conventional Cox method cannot provide. These metrics will increase the likelihood that results from clinical studies are correctly interpreted.
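As a minimal illustrative sketch (not the authors' implementation), the AH-SW at a truncation time tau can be estimated nonparametrically from the Kaplan-Meier curve: the numerator is the tau-year cumulative event rate F(tau) = 1 - S(tau), and the denominator is the RMST, i.e., the area under S(t) from 0 to tau. The pure-Python function below assumes right-censored data given as follow-up times with event indicators (1 = event, 0 = censored):

```python
def average_hazard(times, events, tau):
    """Estimate AH-SW(tau) = F(tau) / RMST(tau) via Kaplan-Meier.

    times  : follow-up times
    events : 1 = event observed, 0 = right-censored
    tau    : truncation time (must not exceed the largest follow-up time)
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0       # current Kaplan-Meier survival estimate S(t)
    rmst = 0.0    # accumulated area under the KM curve up to tau
    prev_t = 0.0
    i = 0
    while i < len(data) and data[i][0] <= tau:
        t = data[i][0]
        rmst += s * (t - prev_t)           # area of the step ending at t
        # count events and censorings tied at time t
        d = c = 0
        while i < len(data) and data[i][0] == t:
            d += data[i][1]
            c += 1 - data[i][1]
            i += 1
        if d > 0:
            s *= 1.0 - d / n_at_risk       # KM multiplicative update
        n_at_risk -= d + c
        prev_t = t
    rmst += s * (tau - prev_t)             # final step from last time to tau
    return (1.0 - s) / rmst                # AH-SW = F(tau) / RMST(tau)
```

Computing this quantity in each arm and taking the difference (DAH-SW) or ratio (RAH-SW) yields the treatment-effect measures described above; inference (standard errors, confidence intervals) would follow the methods developed in the paper.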


