Research Seminar


Autumn Semester 2016

16 September 2016
Venkat Chandrasekaran
California Institute of Technology, USA
Learning Semidefinite Regularizers via Matrix Factorization (HG G 19.1)
Abstract: Regularization techniques are widely employed in the solution of inverse problems in data analysis and scientific computing due to their effectiveness in addressing difficulties due to ill-posedness. In their most common manifestation, these methods take the form of penalty functions added to the objective in optimization-based approaches for solving inverse problems. The purpose of the penalty function is to induce a desired structure in the solution, and these functions are specified based on prior domain-specific expertise. We consider the problem of learning suitable regularization functions from data in settings in which prior domain knowledge is not directly available. Previous work under the title of 'dictionary learning' or 'sparse coding' may be viewed as learning a polyhedral regularizer from data. We describe generalizations of these methods to learn semidefinite regularizers by computing structured factorizations of data matrices. Our algorithmic approach for computing these factorizations combines recent techniques for rank minimization problems along with operator analogs of Sinkhorn scaling. The regularizers obtained using our framework can be employed effectively in semidefinite programming relaxations for solving inverse problems. (Joint work with Yong Sheng Soh)
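For readers unfamiliar with the polyhedral special case mentioned in the abstract, the following is a minimal sketch of classical dictionary learning / sparse coding via alternating proximal-gradient steps on synthetic data. The semidefinite generalization discussed in the talk replaces this factorization with a structured one; the dimensions, step sizes, and update scheme below are illustrative assumptions, not the speaker's algorithm.

    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of the l1 norm.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def dictionary_learning(Y, n_atoms, lam=0.1, n_iter=100, seed=0):
        # Factor Y ~ D @ X with sparse codes X and unit-norm atoms (columns of D).
        rng = np.random.default_rng(seed)
        d, n = Y.shape
        D = rng.standard_normal((d, n_atoms))
        D /= np.linalg.norm(D, axis=0)
        X = np.zeros((n_atoms, n))
        for _ in range(n_iter):
            # Proximal gradient step on the sparse codes X.
            step_x = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-12)
            X = soft_threshold(X - step_x * D.T @ (D @ X - Y), step_x * lam)
            # Gradient step on the dictionary D, then renormalize the atoms.
            step_d = 1.0 / (np.linalg.norm(X, 2) ** 2 + 1e-12)
            D -= step_d * (D @ X - Y) @ X.T
            D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
        return D, X

    # Toy usage on random data.
    D, X = dictionary_learning(np.random.default_rng(1).standard_normal((20, 200)), n_atoms=30)
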
23 September 2016
Helen Ogden
University of Southampton, UK
Inference with approximate likelihoods (HG G 19.1)
Abstract: Many statistical models have likelihoods which are intractable: it is impossible or infeasibly expensive to compute the likelihood exactly. In such settings, a common approach is to replace the likelihood with an approximation, and proceed with inference as if the approximate likelihood were the exact likelihood. For example, in latent variable models, where the likelihood is an integral over the latent variables, a Laplace approximation to the likelihood is often used in place of the exact likelihood to do inference. I will describe general conditions which guarantee that this naive inference with an approximate likelihood has the same first-order asymptotic properties as inference with the exact likelihood, and discuss in detail the implications of these results for inference using a Laplace approximation to the likelihood in generalized linear mixed models.
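As a concrete instance of the approximation discussed above, here is a minimal sketch of a Laplace approximation to the intractable likelihood contribution of a single cluster in a random-intercept logistic model, one of the generalized linear mixed models the talk addresses. The model, data, and parameter values are hypothetical illustrations, not the speaker's examples.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def neg_log_integrand(u, y, beta, sigma):
        # -log of p(y | u) * phi(u; 0, sigma^2) for one cluster's random intercept u.
        eta = beta + u
        loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
        logprior = -0.5 * (u / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
        return -(loglik + logprior)

    def laplace_log_marginal(y, beta, sigma):
        # Laplace: log int e^{-g(u)} du ~ -g(u_hat) + 0.5 * log(2 pi / g''(u_hat)).
        res = minimize_scalar(neg_log_integrand, args=(y, beta, sigma))
        u_hat, g_hat = res.x, res.fun
        eps = 1e-5  # numerical second derivative at the mode
        h = (neg_log_integrand(u_hat + eps, y, beta, sigma) - 2.0 * g_hat
             + neg_log_integrand(u_hat - eps, y, beta, sigma)) / eps ** 2
        return -g_hat + 0.5 * np.log(2.0 * np.pi / h)

    y = np.array([1.0, 0.0, 1.0, 1.0])  # hypothetical binary responses for one cluster
    print(laplace_log_marginal(y, beta=0.2, sigma=1.0))
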
28 October 2016
Samantha Leorato
Università Tor Vergata, Roma
Distribution and Quantile Regressions (HG G 19.1)
Abstract: Given a continuous random variable Y and a random vector X defined on the same probability space, the conditional distribution function (CDF) and the conditional quantile function (CQF) give rise to two competing approaches to the estimation of the conditional distribution of Y given X. One approach -- distribution regression (DR) -- is based on direct estimation of the CDF; the other -- quantile regression (QR) -- is based on direct estimation of the CQF. Since the CDF and the CQF are generalized inverses of each other, estimates of any functional of the distribution may be obtained by appropriately transforming the direct estimates of the CDF and the CQF. Similarly, indirect estimates of the CQF and the CDF may be obtained by taking the generalized inverse of the direct estimates. Unlike the QR estimator, which typically refers to a conditional ALAD (asymmetric least absolute deviation) estimator, there is no unique choice for the DR estimator. One possibility is to define a binary choice model, for any given threshold $y$, for the corresponding dummy variable $1\{Y\leq y\}$. This choice is particularly suited to comparisons with the QR estimator since, in the unconditional case, the two approaches are equivalent. Our paper compares the QR and DR approaches and their performance in terms of efficiency, both asymptotically and in finite samples. Asymptotic efficiency is measured by the asymptotic MSE of the rescaled estimators of the CDF (or of the CQF), where the asymptotic MSE is the sum of the asymptotic variance and the squared asymptotic bias. The asymptotic bias is allowed to be nonzero, thus accounting for some form of local misspecification of either the QR or the DR model. For the asymptotic variance, we show that the choice of the link function used for DR estimation matters, and that under the most popular error distributions (i.e. logistic and normal) QR is uniformly more efficient (in expectation). The finite-sample performance is assessed by an extensive Monte Carlo exercise.
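To make the two direct estimators concrete, the following minimal sketch estimates the conditional median by quantile regression and the CDF at a threshold by a logit binary-choice distribution regression, assuming a linear location model with logistic errors (so the logit link for DR is correctly specified); the data-generating process and the threshold $y_0$ are illustrative assumptions.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    x = rng.standard_normal(n)
    y = 1.0 + 2.0 * x + rng.logistic(size=n)  # linear location model, logistic errors
    X = sm.add_constant(x)

    # Quantile regression: direct estimate of the conditional median (CQF at 0.5).
    qr = sm.QuantReg(y, X).fit(q=0.5)

    # Distribution regression: logit binary choice model for the dummy 1{Y <= y0},
    # giving a direct estimate of the CDF at the threshold y0.
    y0 = 1.0
    dr = sm.Logit((y <= y0).astype(float), X).fit(disp=0)

    print("QR coefficients (median):", qr.params)
    print("DR logit coefficients at y0:", dr.params)
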
4 November 2016
Davy Paindaveine
Université libre de Bruxelles
Inference on the mode of weak directional signals: A Le Cam perspective on hypothesis testing near singularities (HG G 19.2)
Abstract: We revisit, from an original and challenging perspective, the problem of testing the null hypothesis that the mode of a directional signal is equal to a given value. Motivated by a real data example where the signal is weak, we consider this problem under asymptotic scenarios in which the signal strength goes to zero at an arbitrary rate $\eta_n$. Both under the null and under the alternative, we focus on rotationally symmetric distributions. We show that, while they are asymptotically equivalent under fixed signal strength, the classical Wald and Watson tests exhibit very different (null and non-null) behaviours when the signal becomes arbitrarily weak. To fully characterize how challenging the problem is as a function of $\eta_n$, we adopt a Le Cam convergence-of-statistical-experiments point of view and show that the resulting limiting experiments crucially depend on $\eta_n$. In the light of these results, the Watson test is shown to be adaptively rate-consistent and essentially adaptively Le Cam optimal. Throughout, our theoretical findings are illustrated via Monte Carlo simulations. The practical relevance of our results is also shown on the real data example that motivated the present work.
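A minimal Monte Carlo sketch of the weak-signal regime described above, assuming circular data from a von Mises distribution (a rotationally symmetric model) whose concentration plays the role of the signal strength $\eta_n$; the shrinking rate and sample sizes are illustrative, and the mean direction below is an estimator of the mode, not either of the tests studied in the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    mu = 0.0  # true mode (the null value)

    for n in [100, 1000, 10000]:
        eta_n = n ** -0.25  # hypothetical rate at which the signal strength vanishes
        reps = 2000
        est = np.empty(reps)
        for r in range(reps):
            theta = rng.vonmises(mu, eta_n, size=n)
            # Mean direction: the angle of the resultant vector.
            est[r] = np.arctan2(np.sin(theta).sum(), np.cos(theta).sum())
        print(f"n={n:6d}  eta_n={eta_n:.3f}  sd of mean direction={est.std():.3f}")
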
18 November 2016
Gabor Lugosi
Universitat Pompeu Fabra
Title T.B.A. (HG G 19.1)
2 December 2016
Martyn Plummer
IARC Lyon, France
A Bayesian Information Criterion for Singular Models (HG G 19.1)
Abstract: We consider approximate Bayesian model choice for model selection problems that involve models whose Fisher information matrices may fail to be invertible along other competing submodels. Such singular models do not obey the regularity conditions underlying the derivation of Schwarz’s Bayesian information criterion (BIC) and the penalty structure in BIC generally does not reflect the frequentist large sample behaviour of their marginal likelihood. Although large sample theory for the marginal likelihood of singular models has been developed recently, the resulting approximations depend on the true parameter value and lead to a paradox of circular reasoning. Guided by examples such as determining the number of components of mixture models, the number of factors in latent factor models or the rank in reduced rank regression, we propose a resolution to this paradox and give a practical extension of BIC for singular model selection problems.
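For orientation, the two large-sample expansions at issue can be written side by side; this is the standard statement (Schwarz's approximation for regular models, Watanabe's singular learning theory for singular ones), not the speaker's exact formulation.

    % Regular model with k free parameters (Schwarz): the penalty does not
    % depend on the true parameter, giving the usual BIC.
    \log p(Y_{1:n}) = \log L(\hat\theta) - \frac{k}{2}\log n + O_p(1)

    % Singular model (Watanabe): the learning coefficient \lambda and its
    % multiplicity m depend on the unknown true parameter -- the circularity
    % that the proposed extension of BIC resolves.
    \log p(Y_{1:n}) = \log L(\hat\theta) - \lambda \log n + (m - 1)\log\log n + O_p(1)
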

Archive: AS 16  SS 16  AS 15  SS 15  AS 14  SS 14  AS 13  SS 13  AS 12  SS 12  AS 11  SS 11  AS 10  SS 10  AS 09 
