I am a PhD candidate in Biostatistics, working jointly with Mark van der Laan and Alan Hubbard. I am a founding core developer of the tlverse project, the software ecosystem for Targeted Learning. At UC Berkeley, I am affiliated with the Center for Computational Biology and the NIH Biomedical Big Data initiative. During my time in graduate school, I have also enjoyed scientific and statistical collaborations with the Bill & Melinda Gates Foundation, the Kaiser Permanente Division of Research, the Fred Hutchinson Cancer Research Center, and Netflix.
My research interests sit primarily at the intersection of causal inference and machine learning, with a particular focus on developing efficient and robust statistical procedures for evaluating complex target estimands in observational studies and randomized trials. Broadly, my work draws on ideas from non/semi-parametric estimation in large, flexible statistical models; high-dimensional inference; targeted loss-based estimation; statistical computing; computational biology; and statistical epidemiology. Of late, my methodological work has touched on causal mediation analysis, stochastic treatment regimes, robust inference in two-phase designs, and efficient estimation with sieve-type methods. I am also keenly interested in designing open source statistical software to promote computational reproducibility in applied scientific practice.
PhD in Biostatistics (designated emphasis in Computational and Genomic Biology), 2017-2021 (expected)
University of California, Berkeley
MA in Biostatistics, 2017
University of California, Berkeley
BA with a triple major in Molecular and Cell Biology (emphasis in Neurobiology), Psychology, and Public Health, 2015
University of California, Berkeley
Causal mediation analysis has historically been limited in two important regards: (i) a focus has traditionally been placed on binary treatments and static interventions, and (ii) direct and indirect effect decompositions have been pursued that are only identifiable in the absence of intermediate confounders affected by treatment. We present a theoretical study of an (in)direct effect decomposition of the population intervention effect, defined by stochastic interventions jointly applied to the treatment and mediators. In contrast to existing proposals, our causal effects can be evaluated regardless of whether a treatment is categorical or continuous and remain well-defined even in the presence of intermediate confounders affected by treatment. Our (in)direct effects are identifiable without a restrictive assumption on cross-world counterfactual independencies, allowing for substantive conclusions drawn from them to be validated in randomized controlled trials. Beyond the novel effects introduced, we provide a careful study of nonparametric efficiency theory relevant for the construction of flexible, multiply robust estimators of our (in)direct effects, all the while avoiding undue restrictions induced by assuming parametric models of nuisance parameter functionals. To complement our nonparametric estimation strategy, we introduce inferential techniques for constructing confidence intervals and hypothesis tests, and discuss open source software implementing the proposed methodology.
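As a rough sketch of the decomposition this abstract describes (the notation below is mine, not taken from the paper): writing $A_\delta$ for the treatment under the stochastic intervention and $Y(a, m)$ for the counterfactual outcome under treatment $a$ and mediator value $m$, the population intervention effect may be split into a part acting through the mediators and a part acting around them:

```latex
% Sketch only; notation assumed, not drawn from the paper itself.
\underbrace{\mathbb{E}\{Y(A_\delta, M(A_\delta))\} - \mathbb{E}\{Y(A, M)\}}_{\text{population intervention effect}}
 = \underbrace{\mathbb{E}\{Y(A_\delta, M(A_\delta)) - Y(A_\delta, M)\}}_{\text{indirect effect (through } M\text{)}}
 + \underbrace{\mathbb{E}\{Y(A_\delta, M) - Y(A, M)\}}_{\text{direct effect (not through } M\text{)}}
```

Because the contrast compares the natural treatment value $A$ to its stochastically intervened counterpart $A_\delta$, it remains well-defined for continuous treatments, per the abstract.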
Interventional effects for mediation analysis were proposed as a solution to the lack of identifiability of natural (in)direct effects in the presence of a mediator-outcome confounder affected by exposure. We present a theoretical and computational study of the properties of the interventional (in)direct effect estimands based on the efficiency bound in the nonparametric statistical model. We derive the efficient influence function, using it to develop two asymptotically optimal, nonparametric estimators that leverage data-adaptive regression for estimation of the nuisance parameters: a one-step estimator and a targeted minimum loss estimator. A free and open source R package implementing our proposed estimators is made readily available on GitHub. We further present results establishing the conditions under which these estimators are consistent, rate multiply robust, $n^{1/2}$-consistent, and efficient. We illustrate the finite-sample performance of the estimators and corroborate our theoretical results in a simulation study. We also demonstrate the use of the estimators in our motivating application to elucidate the mechanisms behind the unintended harmful effects that a housing intervention had on adolescent girls’ risk behavior.
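For readers unfamiliar with interventional effects, a common formulation (my notation, assumed rather than quoted from the paper) replaces the counterfactual mediator with a random draw $G_a$ from the counterfactual mediator distribution under exposure level $a$, conditional on covariates. The total effect contrasting exposure levels $a$ and $a'$ then decomposes without requiring cross-world assumptions:

```latex
% Sketch only; notation assumed, not drawn from the paper itself.
\mathbb{E}\{Y(a, G_a) - Y(a', G_{a'})\}
 = \underbrace{\mathbb{E}\{Y(a, G_a) - Y(a, G_{a'})\}}_{\text{interventional indirect effect}}
 + \underbrace{\mathbb{E}\{Y(a, G_{a'}) - Y(a', G_{a'})\}}_{\text{interventional direct effect}}
```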
The advent and subsequent widespread availability of preventive vaccines has altered the course of public health over the past century. Despite this success, effective vaccines to prevent many high-burden diseases, including HIV, have been slow to develop. Vaccine development can be aided by the identification of immune response markers that serve as effective surrogates for clinically significant infection or disease endpoints. However, measuring immune response is often costly, which has motivated the use of two-phase sampling for immune response measurement in clinical trials of preventive vaccines. In such trials, measurement of immunological markers is performed on a subset of trial participants, where enrollment in this second phase is potentially contingent on the observed study outcome and other participant-level information. We propose nonparametric methodology for efficiently estimating a counterfactual parameter that quantifies the impact of a given immune response marker on the subsequent probability of infection. Along the way, we fill in a theoretical gap pertaining to the asymptotic behavior of nonparametric efficient estimators in the context of two-phase sampling, including a multiple robustness property enjoyed by our estimators. Techniques for constructing confidence intervals and hypothesis tests are presented, and an open source software implementation of the methodology, the txshift R package, is introduced. We illustrate the proposed techniques using data from a recent preventive HIV vaccine efficacy trial.
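The counterfactual parameter in question can be sketched as follows (my notation, assumed rather than quoted from the paper): for a continuous immune response marker $A$ and an additive shift intervention $d(a, w) = a + \delta$, the estimand is the mean outcome had every participant's marker been shifted by $\delta$,

```latex
% Sketch only; notation assumed, not drawn from the paper itself.
\psi_0(\delta) = \mathbb{E}\{Y(A + \delta)\},
```

with the added complication that $A$ is observed only on the second-phase sample, so estimation must correct for the outcome-dependent sampling design.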
(see CV for a full list)
Public Health 290: Biomedical Big Data Capstone Seminar (Spring 2020), as graduate student instructor with Prof. Alan Hubbard
Public Health 242C & Statistics 247C: Longitudinal Data Analysis (Fall 2019), as graduate student instructor with Prof. Alan Hubbard
Public Health 290: Targeted Learning in Biomedical Big Data (Spring 2018), as graduate student instructor with Prof. Mark van der Laan
Course materials here | GitHub repositories here
The tlverse software ecosystem for targeted learning at the Conference on Statistical Practice; 2020 February; co-taught with Alan Hubbard, Jeremy Coyle, Ivana Malenica, Rachael Phillips
Course materials here | GitHub repository here
The tlverse software ecosystem for causal inference at the Atlantic Causal Inference Conference; 2019 May; co-taught with Mark van der Laan, Alan Hubbard, Jeremy Coyle, Ivana Malenica, Rachael Phillips
Course materials here | GitHub repository here
I am a member of Software Carpentry and Data Carpentry, through which I work on curriculum development, maintenance of lesson materials, and workshop delivery.
Software Carpentry: Shell, Git, and R at the Berkeley Institute for Data Science; 2019 January; co-taught with Scott Peterson and Nelle Varoquaux
Course materials here | GitHub repository here
Software Carpentry: Shell, Git, and Python at the Berkeley Institute for Data Science; 2018 July; co-taught with Kunal Marwaha
Course materials here | GitHub repository here
Data Carpentry: Genomics at Lawrence Berkeley National Laboratory; 2018 May; co-taught with Adam Orr
Course materials here | GitHub repository here
Collected collateral damage from doing statistics research, hopefully useful to others.
tlverse
The tlverse is an ecosystem of R packages for Targeted Learning, of which I am a co-founder and core developer. A few of the tlverse packages to which I’ve made significant contributions include:
sl3: An R package providing a modern implementation of the Super Learner ensemble modeling algorithm that simultaneously exposes a flexible grammar for composing arbitrary pipelines for machine learning tasks. Joint work with Jeremy Coyle, Ivana Malenica, Rachael Phillips, and Oleg Sofrygin.
[Docs] | [GitHub]
origami: An R package exposing a generalized framework for applying a great variety of cross-validation schemes to arbitrary estimation functions. Joint work with Jeremy Coyle, Ivana Malenica, and Rachael Phillips.
[Docs] | [GitHub] | [CRAN] | [Paper]
hal9001: An R package providing an efficient implementation of the Highly Adaptive Lasso (HAL), a nonparametric regression estimator achieving near-parametric convergence rates under relatively mild assumptions. Joint work with Jeremy Coyle and Mark van der Laan.
[Docs] | [GitHub] | [CRAN]
tmle3shift: An R package for targeted maximum likelihood estimation of the causal effects of modified treatment policies for continuous-valued exposures, incorporating working marginal structural models for summarization of effect estimates. Joint work with Jeremy Coyle and Mark van der Laan.
[Docs] | [GitHub]
A significant focus of my research program centers on the intersection of causal inference and statistical machine learning. I’ve (co-)developed R packages for a range of problems: causal mediation analysis, evaluating stochastic interventions under two-phase sampling, conditional density estimation, and survival analysis.
medshift: An R package for estimating the population intervention (in)direct effects based on stochastic interventions. Classical and efficient estimators are supported for the effects of incremental propensity score interventions and modified treatment policies. Joint work with Iván Díaz.
[Docs] | [GitHub]
medoutcon: An R package for efficient estimation of interventional (in)direct effects subject to intermediate confounding, including one-step and targeted minimum loss estimators. Joint work with Iván Díaz and Kara Rudolph.
[Docs] | [GitHub]
txshift: An R package for efficient estimation of and inference on the causal effects of stochastic interventions on continuous-valued exposures. Robust estimation and efficient inference under two-phase sampling are supported. Joint work with David Benkeser.
[Docs] | [GitHub]
haldensify: An R package for nonparametric conditional density estimation based on the highly adaptive lasso, designed for estimating the generalized propensity score. Joint work with David Benkeser and Mark van der Laan.
[Docs] | [GitHub] | [CRAN]
survtmle: An R package for the construction of targeted maximum likelihood estimates of marginal cumulative incidence in right-censored survival settings with and without competing risks, including estimation procedures that respect bounds. Joint work with David Benkeser.
[Docs] | [GitHub] | [CRAN]
A parallel thread of my research concerns the development of novel statistical methodologies for application in high-dimensional and computational biology settings. Consequently, I have (co-)developed several R packages extending the Bioconductor Project.
biotmle: An R package for the model-free discovery of biomarkers from biological expression data, introducing a generalization of moderated statistics for variance stabilization of semiparametric estimators. Joint work with Alan Hubbard and Mark van der Laan.
[Docs] | [GitHub] | [Bioconductor] | [Paper]
scPCA: An R package for sparse contrastive principal component analysis, facilitating the recovery of stable and low-dimensional patterns from high-dimensional biological data while removing technical artifacts by making use of control samples. Joint work with Philippe Boileau and Sandrine Dudoit.
[GitHub] | [Bioconductor] | [Paper]
methyvim: An R package for genome-wide assessment of differential methylation based on estimation of variable importance measures at the level of CpG sites. Joint work with Mark van der Laan.
[Docs] | [GitHub] | [Bioconductor]
adaptest: An R package for multiple hypothesis testing with data adaptive target parameters in high-dimensional settings using Targeted Learning. Joint work with Weixin Cai and Alan Hubbard.
[GitHub] | [Bioconductor] | [Paper]