Post/Doctoral Seminar in Mathematical Finance

Spring Semester 2020

Title Hedging with neural networks
Speaker, Affiliation Prof. Dr. Johannes Ruf, LSE
Date, Time 3 March 2020, 15:15-16:15
Location HG G 19.2, Rämistrasse 101
Abstract We study the use of neural networks as nonparametric estimation tools for the hedging of options. To this end, we design a network, named HedgeNet, that directly outputs a hedging strategy given relevant features as input. This network is trained to minimise the hedging error instead of the pricing error. Applied to end-of-day and tick prices of S&P 500 and Euro Stoxx 50 options, the network is able to reduce the mean squared hedging error of the Black-Scholes benchmark significantly. We illustrate, however, that a similar benefit arises from delta-vega hedging, thanks to the presence of the so-called leverage effect. Finally, we argue that the outperformance of neural networks reported in previous studies is most likely due to a lack of data hygiene. In particular, data leakage is sometimes unnecessarily introduced by a faulty training/test data split, possibly along with an additional 'tagging' of data. Joint work with Weiguan Wang.
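
As a rough illustration of the training objective described above (a sketch only, not the speakers' HedgeNet implementation; the architecture, feature choice, and all names here are assumptions), a network can be trained to minimise the one-period hedging error rather than a pricing error:

```python
import torch
import torch.nn as nn

class HedgeNet(nn.Module):
    """Illustrative hedging network: maps option features to a hedge ratio."""

    def __init__(self, n_features: int = 2, width: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 1), nn.Sigmoid(),  # delta of a call lies in [0, 1]
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def hedging_loss(delta, dC, dS):
    # Mean squared replication error over one rebalancing period:
    # change in option price minus the hedge position times the stock move.
    return ((dC - delta * dS) ** 2).mean()

# Toy data; in the talk's setting these would be S&P 500 / Euro Stoxx 50
# option features (e.g. moneyness, time to maturity) and observed increments,
# with a careful train/test split to avoid the data leakage mentioned above.
features = torch.randn(1024, 2)
dC, dS = torch.randn(1024), torch.randn(1024)

model = HedgeNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = hedging_loss(model(features), dC, dS)
    loss.backward()
    opt.step()
```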

Title CANCELED: Personal non-life insurance decisions and the welfare loss of flat deductibles
Speaker, Affiliation Dr. Julie Thogersen, Aarhus University
Date, Time 17 March 2020, 15:15-16:15
Location HG G 19.2
Abstract We view the retail non-life insurance decision from the perspective of the insured. We formalize different consumption-insurance problems depending on the flexibility of the insurance contract. For exponential utility and power utility we find the optimal flexible insurance decision or insurance contract. For exponential utility we also find the optimal position in standard contracts that are less flexible and therefore, for certain non-linear pricing rules, lead to a welfare loss for the individual insuree compared to the optimal flexible insurance decision. For the exponential loss distribution, we quantify a significant welfare loss. This calls for product development in the retail insurance business.
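
A minimal numerical sketch of the insured's side of this problem, under assumptions chosen here purely for illustration (exponential utility, exponential losses, an expected-value premium with loading; the talk's comparison with fully flexible contracts under non-linear pricing rules is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, gamma, theta, w = 1.0, 0.5, 0.3, 10.0    # loss rate, risk aversion, loading, wealth
L = rng.exponential(1.0 / lam, size=200_000)  # exponential loss distribution

def certainty_equivalent(d):
    """Certainty equivalent of buying a contract with flat deductible d."""
    premium = (1.0 + theta) * np.maximum(L - d, 0.0).mean()  # expected-value principle
    wealth = w - premium - np.minimum(L, d)                  # retained loss capped at d
    expected_utility = -np.exp(-gamma * wealth).mean()       # exponential utility
    return -np.log(-expected_utility) / gamma

deductibles = np.linspace(0.0, 5.0, 101)
ces = np.array([certainty_equivalent(d) for d in deductibles])
best = deductibles[ces.argmax()]
print(f"optimal flat deductible ~ {best:.2f}, certainty equivalent {ces.max():.4f}")
```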

Title The Universal Approximation Capabilities of Neural Networks with Modified Feature and Readout Maps
Speaker, Affiliation Dr. Anastasis Kratsios, ETH Zurich, Switzerland
Date, Time 16 April 2020, 16:00-17:00
Location Online seminar on Zoom
Abstract Modifications made to a neural network's input and output maps to suit a learning task are prevalent throughout learning theory. Perhaps the most common examples are random feature maps in transfer learning and modified readouts in classification tasks. In the former case, a random feature map is built by randomizing the first few layers of an architecture and only training the final layers. In the latter case, a network's output map is modified either with a softmax function or with a component-wise sigmoid function followed by a thresholding function. We address these cases as consequences of a broad result describing pairs of feature and readout maps which preserve an architecture's universal approximation capabilities between continuous functions. Applications to deep geometric learning are also considered; in particular, it is shown that hyperbolic feed-forward networks are universal approximators. We also show how the folklore justification of the continuous classification capabilities of neural networks is flawed.
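
The random-feature case can be made concrete with a small sketch (an illustration of the general idea, not the paper's construction): the first layer is randomized and frozen, and only the linear readout is trained, here by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen random feature map: randomize the first layer, never train it.
n_in, width = 1, 512
W = rng.normal(size=(width, n_in))
b = rng.normal(size=width)

def phi(x):
    return np.maximum(x @ W.T + b, 0.0)   # random ReLU features

# Train only the readout (least squares) to fit a continuous target.
x = rng.uniform(-3.0, 3.0, size=(256, 1))
y = np.sin(2.0 * x[:, 0])
beta, *_ = np.linalg.lstsq(phi(x), y, rcond=None)

# For classification one would instead compose the readout with a softmax
# or a component-wise sigmoid plus thresholding, as in the abstract.
x_test = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)
print(np.c_[np.sin(2.0 * x_test[:, 0]), phi(x_test) @ beta])
```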

Title Efficient computation of confidence bounds for neural network regression
Speaker, Affiliation Dr. Wahid Khosrawi-Sardroudi, ETH Zurich, Switzerland
Date, Time 28 April 2020, 16:00-17:00
Location Zoom (link will be sent later)
Abstract A prime example of machine learning is the supervised learning problem. Mathematically, this corresponds to a high-dimensional non-linear regression problem, where the unknown function is approximated by a neural network and the goal is to find reasonable parameters for this network. In applications, an important question is how certain one can be about the estimated function evaluated at a given point of the domain. We present an efficient approximating method to compute such confidence regions and analyze its performance by means of an appropriate test.
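
The abstract does not specify the approximating method; as a point of reference, a common (more expensive) baseline for pointwise confidence bands is a bootstrap ensemble, sketched below with random-feature regressors standing in for the trained networks (all details here are assumptions).

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(-2.0, 2.0, 200))
y = np.sin(3.0 * x) + 0.2 * rng.normal(size=x.size)

def fit_random_feature_net(xs, ys, width=256, seed=0):
    """Fit a shallow net with frozen random ReLU features; return a predictor."""
    r = np.random.default_rng(seed)
    W, b = r.normal(size=width), r.normal(size=width)
    Phi = np.maximum(np.outer(xs, W) + b, 0.0)
    beta, *_ = np.linalg.lstsq(Phi, ys, rcond=None)
    return lambda t: np.maximum(np.outer(t, W) + b, 0.0) @ beta

# Bootstrap ensemble: refit on resampled data, read off pointwise quantiles.
grid = np.linspace(-2.0, 2.0, 50)
preds = []
for s in range(50):
    idx = rng.integers(0, x.size, x.size)          # resample with replacement
    preds.append(fit_random_feature_net(x[idx], y[idx], seed=s)(grid))
lo, hi = np.percentile(np.array(preds), [2.5, 97.5], axis=0)  # ~95% pointwise band
print(np.c_[grid, lo, hi][:5])
```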

Title Functional Polynomial Processes
Speaker, Affiliation Dr. Wahid Khosrawi-Sardroudi, ETH Zurich, Switzerland
Date, Time 19 May 2020, 15:00-16:00
Location Zoom (link will be sent later)
Abstract Polynomial processes have been studied in detail over the past years. These processes are semimartingales with predictable characteristics of a certain polynomial type, allowing for efficient moment computations by taking advantage of certain algebraic invariance properties. So far, all cases considered in the literature are of (almost) Markovian type. In this work, we extend the class of polynomial processes to a functional Itô calculus setting allowing for fully non-Markovian dynamics. Joint work with Christa Cuchiero, Josef Teichmann and Thorsten Schmidt.
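
The moment computation that polynomial processes allow can be illustrated in the classical Markovian case (a textbook example, not the functional extension of the talk): for an Ornstein-Uhlenbeck process the generator maps polynomials of degree at most n to polynomials of degree at most n, so conditional moments reduce to a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Ornstein-Uhlenbeck process dX = kappa*(theta - X) dt + sigma dW.
# On the monomial basis (1, x, x^2) the generator acts as a matrix G, and
# E[p(X_T) | X_0 = x] = (1, x, x^2) @ expm(T*G) @ coeffs(p).
kappa, theta, sigma = 2.0, 1.0, 0.5
G = np.array([
    [0.0, kappa * theta, sigma**2],
    [0.0, -kappa,        2.0 * kappa * theta],
    [0.0, 0.0,           -2.0 * kappa],
])  # columns: A(1)=0, A(x)=kappa*theta - kappa*x,
    # A(x^2) = sigma^2 + 2*kappa*theta*x - 2*kappa*x^2

x0, T = 0.3, 1.5
H = np.array([1.0, x0, x0**2])
M = expm(T * G)
mean, second = H @ M[:, 1], H @ M[:, 2]   # E[X_T] and E[X_T^2] given X_0 = x0

# Sanity check against the known closed forms for the OU mean and variance.
mean_cf = theta + (x0 - theta) * np.exp(-kappa * T)
var_cf = sigma**2 / (2.0 * kappa) * (1.0 - np.exp(-2.0 * kappa * T))
print(mean - mean_cf, (second - mean**2) - var_cf)   # both ~ 0
```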

Title Demand functions in mean field games
Speaker, Affiliation Vincenzo Ignazio, ETH Zurich, Switzerland
Date, Time 26 May 2020, 15:00-16:00
Location Zoom (link will be sent later)
Abstract We connect a system of mean field game partial differential equations to the general concept of nonlinear demand functions. The goal is to prove the existence of classical solutions to a system of mean field games arising in the study of exhaustible resource production under market competition. Individual trajectories are modeled by a controlled diffusion process with jumps, which adds a nonlocal term to the PDE system. The assumptions on the Hamiltonian are sufficiently general to cover a large class of examples proposed in the literature on Bertrand and Cournot mean field games. Uniqueness also holds under a sufficient restriction on the structure of the Hamiltonian, which in practice amounts to a small upper bound on the substitutability of goods.
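
A toy stand-in for the demand-function structure (an N-player discretization with a linear demand function chosen here for illustration; the talk treats the full PDE system with jumps): each producer best-responds to the average production, and a small substitutability parameter makes the iteration a contraction, echoing the uniqueness condition above.

```python
import numpy as np

# Differentiated-goods inverse demand: producer i receives the price
# p_i = a - q_i - eps * q_bar, where q_bar is the average production and
# eps measures the substitutability of goods.
a, c, eps, N = 10.0, 1.0, 0.4, 1000   # intercept, unit cost, substitutability, players

q = np.zeros(N)
for _ in range(100):                  # best-response iteration
    q_bar = q.mean()                  # mean field: individual impact is negligible
    # maximize q_i * (a - q_i - eps*q_bar - c)  =>  q_i = (a - c - eps*q_bar) / 2
    q = np.full(N, (a - c - eps * q_bar) / 2.0)

print(q.mean(), (a - c) / (2.0 + eps))  # converges to the fixed point for eps < 2
```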

Title How implicit regularization of artificial neural networks characterizes the learned function, or a mathematical point of view on the psychology of artificial neural networks
Speaker, Affiliation Jakob Heiss, ETH Zurich, Switzerland
Date, Time 23 June 2020, 15:00-16:00
Location Zoom
Abstract Empirical results suggest that typical algorithms for training neural networks favor regularized solutions. These observations motivate us to analyze properties of the solutions found by gradient descent initialized close to zero, which is frequently employed to perform the training task. For one-dimensional shallow ReLU neural networks in which weights are chosen randomly and only the terminal layer is trained, we show that the resulting solution converges to the smooth spline interpolation of the training data as the number of hidden nodes tends to infinity. Moreover, we derive a correspondence between early-stopped gradient descent and smoothing spline regression. This gives valuable insight into the properties of the solutions obtained using gradient descent methods in general settings. In psychology, one tries to understand the behaviour of humans and other animals; here we prove mathematical theorems about how certain neural networks will behave in new situations after they have experienced a certain training.
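
A qualitative sketch of the random-feature result (illustrative only; the precise scaling and the weighted spline penalty of the talk are not reproduced): train only the readout of a wide random ReLU network by early-stopped gradient descent from zero, and compare with a cubic smoothing spline.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(3)
x = np.sort(rng.uniform(-2.0, 2.0, 40))
y = np.sin(2.0 * x) + 0.1 * rng.normal(size=x.size)

# Shallow ReLU network: random frozen first layer, trainable readout beta.
width = 2000
w, b = rng.normal(size=width), rng.normal(size=width)
Phi = np.maximum(np.outer(x, w) + b, 0.0) / np.sqrt(width)

beta = np.zeros(width)                 # gradient descent initialized at zero
lr, steps = 0.1, 500                   # finite number of steps = early stopping
for _ in range(steps):
    beta -= lr * Phi.T @ (Phi @ beta - y) / x.size

grid = np.linspace(-2.0, 2.0, 9)
nn_pred = (np.maximum(np.outer(grid, w) + b, 0.0) / np.sqrt(width)) @ beta
spline = UnivariateSpline(x, y, k=3, s=x.size * 0.1**2)  # cubic smoothing spline
print(np.c_[nn_pred, spline(grid)])    # qualitatively similar smooth fits
```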

Note: you can subscribe to the iCal/ics calendar.
