Research Seminar in Statistics

Would you like to be notified about these presentations via e-mail? Please subscribe here.

Autumn Semester 2019

Title: Experimenting in Equilibrium
Speaker, Affiliation: Stefan Wager, Stanford University
Date, Time: 6 September 2019, 15:15-16:00
Location: HG G 19.1
Abstract: Classical approaches to experimental design assume that intervening on one unit does not affect other units. Recently, however, there has been considerable interest in settings where this non-interference assumption does not hold, e.g., when running experiments on supply-side incentives on a ride-sharing platform or subsidies in an energy marketplace. In this paper, we introduce a new approach to experimental design in large-scale stochastic systems with substantial cross-unit interference, under an assumption that the interference is structured enough that it can be captured using mean-field asymptotics. Our approach enables us to accurately estimate the effect of small changes to system parameters by combining unobtrusive randomization with lightweight modeling, all while remaining in equilibrium. We can then use these estimates to optimize the system by gradient descent. Concretely, we focus on the problem of a platform that seeks to optimize supply-side payments p in a centralized marketplace where different suppliers interact via their effects on the overall supply-demand equilibrium, and show that our approach enables the platform to optimize p based on perturbations whose magnitude can be made vanishingly small in large systems.
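
The recipe sketched in this abstract - perturb each supplier's payment slightly, stay in equilibrium, read the local response off the randomization, and feed a mean-field gradient estimate into gradient ascent - can be illustrated with a small simulation. The logistic participation model, the utility function and all constants below are illustrative assumptions, not the paper's specification.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 20_000  # suppliers
    v = 3.0     # toy value the platform earns per active supplier (assumption)

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    def simulate_round(p, eps):
        """One round: supply q solves the mean-field fixed point
        q = mean_i sigmoid(p + eps_i - q), then each supplier acts."""
        q = 0.5
        for _ in range(100):
            q = np.mean(sigmoid(p + eps - q))
        active = (rng.uniform(size=n) < sigmoid(p + eps - q)).astype(float)
        return q, active

    p, sigma, eta = 0.0, 0.1, 0.3
    for _ in range(50):
        eps = rng.normal(0, sigma, size=n)  # small per-supplier perturbations
        q, active = simulate_round(p, eps)
        # Regressing individual participation on the own perturbation
        # estimates the marginal response pi'(p - q) while in equilibrium.
        dpi = np.cov(active, eps)[0, 1] / sigma**2
        # Mean-field model: q(p) solves q = pi(p - q), so dq/dp = pi'/(1 + pi').
        dq_dp = dpi / (1 + dpi)
        # Toy platform utility U(p) = (v - p) * q(p); ascend its gradient in p.
        p += eta * ((v - p) * dq_dp - q)
    print(f"payment p = {p:.2f}, equilibrium supply q = {q:.2f}")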

Title: Integer programming and linear programming relaxation on the junction tree polytope for Influence Diagrams
Speaker, Affiliation: Guillaume Obozinski, Swiss Data Science Center
Date, Time: 20 September 2019, 15:15-16:00
Location: HG G 19.1
Abstract: Influence Diagrams (IDs) provide a flexible framework to represent discrete stochastic optimization problems, including Markov Decision Processes (MDPs) and Partially Observable MDPs as standard examples. In an Influence Diagram, the random variables are associated with a probabilistic graphical model whose vertices are partitioned into three types: chance, decision and utility vertices. The user has to choose the distribution of the decision vertices conditionally on their parents in order to maximize the expected utility. Leveraging a notion of rooted junction tree that we introduced with collaborators, I will show how the maximum expected utility problem on an influence diagram can be advantageously reformulated as a mixed integer linear program on the marginal polytope of this junction tree. I will then propose a way to obtain a good LP relaxation by identifying maximal sets that are invariant under the choice of the policy, in the sense of the literature on causality. These LP relaxations allow for more efficient branch-and-bound algorithms but could also have other applications.
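
To make the object of the talk concrete, here is the maximum expected utility problem on the smallest possible influence diagram - one chance node C, one decision node D observing C, one utility node U(C, D) - solved by brute-force enumeration over deterministic policies. The enumeration merely stands in for the mixed-integer formulation discussed in the abstract, and the probabilities and utilities are made up for illustration.

    import itertools
    import numpy as np

    p_c = np.array([0.3, 0.7])   # P(C): distribution of the chance node
    U = np.array([[1.0, 0.0],    # U[c, d]: utility of decision d in state c
                  [0.2, 0.8]])

    best_value, best_policy = -np.inf, None
    # A deterministic policy maps each observed state c to a decision d(c);
    # for binary C and D there are only 2**2 = 4 of them.
    for policy in itertools.product(range(2), repeat=2):
        value = sum(p_c[c] * U[c, policy[c]] for c in range(2))
        if value > best_value:
            best_value, best_policy = value, policy
    print(best_policy, best_value)   # (0, 1): 0.3 * 1.0 + 0.7 * 0.8 = 0.86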

Title: Adaptive inference and its relations to sequential decision making
Speaker, Affiliation: Alexandra Carpentier, Universität Magdeburg
Date, Time: 27 September 2019, 15:15-16:00
Location: HG G 19.1
Abstract: Adaptive inference - namely, adaptive estimation and adaptive confidence statements - is particularly important in high- or infinite-dimensional models in statistics. Indeed, whenever the dimension becomes high or infinite, it is important to adapt to the underlying structure of the problem. While adaptive estimation is often possible, adaptive and honest confidence sets often do not exist; this is known as the adaptive inference paradox, and it has consequences for sequential decision making. In this talk, I will present some classical results of adaptive inference and discuss how they impact sequential decision making. (Based on joint works with Andrea Locatelli, Matthias Loeffler, Olga Klopp and Richard Nickl.)
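
One classical instance of the paradox mentioned in the abstract can be stated for pointwise estimation over Hölder balls (this setting is an illustrative choice, not necessarily the one treated in the talk). For Hölder balls Σ(β, L) with β₁ < β₂, a single estimator can adapt, achieving

    \sup_{f \in \Sigma(\beta, L)} \mathbb{E}_f \bigl|\hat f_n(x_0) - f(x_0)\bigr|
        \;\lesssim\; \Bigl(\tfrac{n}{\log n}\Bigr)^{-\beta/(2\beta+1)}
        \quad \text{simultaneously for } \beta \in \{\beta_1, \beta_2\},

yet any confidence interval C_n that is honest (maintains coverage) over the larger class Σ(β₁, L) must satisfy

    \sup_{f \in \Sigma(\beta_2, L)} \mathbb{E}_f\, |C_n| \;\gtrsim\; n^{-\beta_1/(2\beta_1+1)},

so its expected length cannot shrink at the faster rate even when the truth is smooth (a Low-type result).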

Title: The risk of approximate cross validation
Speaker, Affiliation: Ashia Wilson, Microsoft Research
Date, Time: 11 October 2019, 15:15-16:00
Location: HG G 19.1
Abstract: Cross-validation (CV) is the de facto standard for selecting accurate predictive models and assessing model performance. However, CV suffers from a need to repeatedly refit a learning procedure on a large number of training datasets. To reduce the computational burden, a number of works have introduced approximate CV procedures that simultaneously reduce runtime and provide model assessments comparable to CV when the prediction problem is sufficiently smooth. An open question, however, is whether these procedures are suitable for model selection. In this talk, I'll describe (i) broad conditions under which the model selection performance of approximate CV nearly matches that of CV, (ii) examples of prediction problems where approximate CV selection fails to mimic CV selection, and (iii) an extension of these results and the approximate CV framework more broadly to non-smooth prediction problems like L1-regularized empirical risk minimization. This is joint work with Lester Mackey and Maximilian Kasy.
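
A common flavor of approximate CV replaces each leave-one-out refit with a single Newton-step correction from the full-data optimum. The sketch below implements that idea for L2-regularized logistic regression; it illustrates the general class of procedures the abstract refers to, not the speakers' exact construction.

    import numpy as np

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    def fit(X, y, lam, iters=30):
        """Newton's method on the L2-regularized logistic loss."""
        n, d = X.shape
        theta = np.zeros(d)
        for _ in range(iters):
            p = sigmoid(X @ theta)
            grad = X.T @ (p - y) / n + lam * theta
            H = X.T @ (X * (p * (1 - p))[:, None]) / n + lam * np.eye(d)
            theta -= np.linalg.solve(H, grad)
        return theta

    def exact_loo(X, y, lam):
        """Exact leave-one-out predictions: n refits (the expensive baseline)."""
        n = len(y)
        preds = np.empty(n)
        for i in range(n):
            mask = np.arange(n) != i
            preds[i] = sigmoid(X[i] @ fit(X[mask], y[mask], lam))
        return preds

    def approx_loo(X, y, lam):
        """Approximate leave-one-out: one Newton-step correction per point."""
        n, d = X.shape
        theta = fit(X, y, lam)
        p = sigmoid(X @ theta)
        H = X.T @ (X * (p * (1 - p))[:, None]) / n + lam * np.eye(d)
        preds = np.empty(n)
        for i in range(n):
            # Dropping point i shifts the average gradient by roughly
            # grad_i / n, so take one Newton step from the full-data optimum.
            grad_i = X[i] * (p[i] - y[i])
            preds[i] = sigmoid(X[i] @ (theta + np.linalg.solve(H, grad_i) / n))
        return preds

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (rng.uniform(size=200) < sigmoid(X @ rng.normal(size=5))).astype(float)
    # The two sets of held-out predictions should agree closely.
    print(np.max(np.abs(exact_loo(X, y, 1e-2) - approx_loo(X, y, 1e-2))))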

Title: Functional Sequential Treatment Allocation
Speaker, Affiliation: Anders Kock, Oxford University
Date, Time: 7 November 2019, 16:15-17:00
Location: HG G 19.1
Abstract: Consider a setting in which a policy maker assigns subjects to treatments, observing each outcome before the next subject arrives. Initially, it is unknown which treatment is best, but the sequential nature of the problem permits learning about the effectiveness of the treatments. While the multi-armed-bandit literature has shed much light on the situation in which the policy maker compares the effectiveness of the treatments through their mean, economic decision making often requires targeting purpose-specific characteristics of the outcome distribution, such as its inherent degree of inequality, welfare or poverty. In the present paper, we introduce and study sequential learning algorithms when the distributional characteristic of interest is a general functional of the outcome distribution. In particular, it turns out that intuitively reasonable approaches, such as first conducting an experiment on an initial group of subjects and then rolling out the inferred best treatment to the population, are dominated by the policies we develop, which we show to be optimal.
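
The shift from mean-targeting bandits to a general distributional functional is easy to picture in code. Below, a crude UCB-style index is built on a plug-in estimate of the lower quartile (a stand-in for an inequality- or poverty-oriented target), so the arm with the higher mean loses to the arm with the better quartile. The index, its constants and the toy arms are illustrative assumptions, not the policies from the paper.

    import numpy as np

    def functional(samples):
        """Target characteristic of the outcome distribution: lower quartile."""
        return np.quantile(samples, 0.25)

    def allocate(arms, T, rng):
        # Two forced pulls per arm, then index-based allocation.
        outcomes = [list(rng.normal(mu, sd, size=2)) for mu, sd in arms]
        for t in range(2 * len(arms), T):
            # Plug-in functional plus an exploration bonus that
            # shrinks with the number of pulls of each arm.
            idx = [functional(o) + np.sqrt(2 * np.log(t) / len(o))
                   for o in outcomes]
            k = int(np.argmax(idx))
            mu, sd = arms[k]
            outcomes[k].append(rng.normal(mu, sd))
        return [len(o) for o in outcomes]

    rng = np.random.default_rng(1)
    # Arm 1 has the higher mean but a much heavier left tail, so it loses
    # on the lower quartile: N(0, 0.5) beats N(0.3, 2.0) for this target.
    arms = [(0.0, 0.5), (0.3, 2.0)]
    print(allocate(arms, 2000, rng))   # most pulls should go to arm 0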

Title: Explaining the Success of AdaBoost, Random Forests and Deep Neural Nets as Interpolating Classifiers
Speaker, Affiliation: Abraham Wyner, Wharton, University of Pennsylvania
Date, Time: 7 January 2020, 11:15-12:00
Location: HG G 19.2
Abstract: AdaBoost, random forests and deep neural networks are the present-day workhorses of the machine learning universe. We introduce a novel perspective on AdaBoost and random forests, proposing that the two algorithms work for similar reasons. While both classifiers achieve similar predictive accuracy, random forests cannot be conceived as a direct optimization procedure. Rather, a random forest is a self-averaging, "interpolating" algorithm that creates what we denote a "spiked-smooth" classifier, and we view AdaBoost in the same light. We conjecture that both AdaBoost and random forests succeed because of this mechanism. We provide a number of examples to support this explanation. We conclude with a brief mention of new research suggesting that deep neural nets are effective (at least in part and in some contexts) for the same reasons.
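
The interpolation claim is easy to check empirically: both a boosted ensemble of deep trees and a random forest can fit noisy training labels (nearly) perfectly and still predict well out of sample. A minimal sketch with scikit-learn (1.2+ for the estimator keyword); the dataset and settings are illustrative choices, not the speaker's experiments.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # 10% of labels are flipped, so a near-perfect train score means the
    # ensemble interpolates noise - and yet test accuracy stays high.
    X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.1,
                               random_state=0)
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

    models = [
        # Moderately deep base trees keep boosting going until it interpolates.
        AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=5),
                           n_estimators=300, random_state=0),
        RandomForestClassifier(n_estimators=500, random_state=0),
    ]
    for clf in models:
        clf.fit(Xtr, ytr)
        print(type(clf).__name__,
              f"train acc = {clf.score(Xtr, ytr):.3f}",   # close to 1.0
              f"test acc = {clf.score(Xte, yte):.3f}")    # still well above chance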

Note: you can also subscribe to the iCal/ics calendar.
