Visitor seminars
Upcoming seminars
Title: | Symplectic groupoid and cluster structure on the Teichmueller space of closed genus two surfaces |
Speaker: | Dr. Michael Shapiro (Michigan State University) |
Date: | Tuesday, February 6, 2024 |
Time: | 12:00 pm |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: |
The symplectic groupoid of unipotent nxn upper-triangular matrices is formed by pairs (B, A), where B is a nondegenerate nxn matrix, A is a unipotent upper-triangular nxn matrix, and BAB^t is unipotent upper-triangular. The symplectic groupoid is equipped with the natural symplectic form defined by Weinstein, which induces a Poisson bracket on the space of unipotent upper-triangular matrices studied by Bondal, Dubrovin-Ugaglia, and others. We compute the cluster structure compatible with this Poisson structure and discuss its connection with the Teichmueller space of genus g curves with one or two holes equipped with the Goldman Poisson bracket. As an unexpected byproduct, we obtain a previously unknown cluster structure on the Teichmueller space of closed genus two curves. This is a joint project with L. Chekhov. |
Past seminars
Title: | CS4ML: A general framework for active learning with arbitrary data based on Christoffel functions |
Speaker: | Dr. Ben Adcock (Simon Fraser University) |
Date: | Tuesday, December 5, 2023 |
Time: | 11:00 am |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: |
Active learning is an important concept in machine learning, in which the learning algorithm is able to choose where to query the underlying ground truth to improve the accuracy of the learned model. As machine learning techniques come to be more commonly used in scientific computing problems, where data is often expensive to obtain, the use of active learning is expected to be particularly important in the design of efficient algorithms. In this work, we introduce a general framework for active learning in regression problems. Our framework extends the standard setup by allowing for general types of data, rather than merely pointwise samples of the target function. This generalization covers many cases of practical interest, such as data acquired in transform domains (e.g., Fourier data), vector-valued data (e.g., gradient-augmented data), data acquired along continuous curves, and multimodal data (i.e., combinations of different types of measurements). Our framework considers random sampling according to a finite number of sampling measures and arbitrary nonlinear approximation spaces (model classes). We introduce the concept of generalized Christoffel functions and show how these can be used to optimize the sampling measures. We prove that this leads to near-optimal sample complexity in various important cases. This work focuses on applications in scientific computing, where, as noted, active learning is often desirable, since it is usually expensive to generate data. We demonstrate the efficacy of our framework for gradient-augmented learning with polynomials, Magnetic Resonance Imaging (MRI) using generative models, and adaptive sampling for solving PDEs using Physics-Informed Neural Networks (PINNs). This is joint work with Juan M. Cardenas (UC Boulder) and Nick Dexter (Florida State). The presentation will be available on Zoom. |
Title: | Strategic Loss Reporting |
Speaker: | Dr. Bin Zou (University of Connecticut) |
Date: | Friday, November 24, 2023 |
Time: | 2:00 pm |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: |
We study a strategic loss reporting problem for an insured who seeks an optimal strategy to dictate what losses she should report to the insurer, with the aim of maximizing her expected wealth (risk-neutral case) or expected utility of wealth (risk-averse case). To explore the potential advantage of underreporting her insurable losses, the insured follows a barrier strategy and only reports losses above the barrier to the insurer. Under a tractable 2-class bonus-malus model and given that the insured has purchased full insurance, we obtain a unique equilibrium reporting strategy in (semi)closed form. We find that the two equilibrium barriers are the same and strictly greater than zero, offering a theoretical explanation for the underreporting of insurable losses. If time allows, we will discuss the generalization to the case where the insured has deductible insurance and may even take into account strategic reporting in the decision of contract deductibles. This talk is based on joint work with Jingyi Cao, Dongchen Li, and Jenny Young. The presentation will be available on Zoom. |
Title: | Applications of Hawkes Processes in Finance and Insurance |
Speaker: | Dr. Anatoliy Swishchuk (University of Calgary) |
Date: | Friday, November 17, 2023 |
Time: | 2:00 pm |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: |
In this talk, I will introduce various stochastic models based on Hawkes processes (HP) and present their applications in finance and insurance. The models include the compound HP, general compound HP, multivariate general compound HP, and exponential general HP, to name a few. Applications will be given in finance, including limit order books (LOB), option pricing (an analogue of the Black-Scholes formula, Margrabe's, spread and basket options pricing, the Merton optimization problem), and in insurance, such as a Hawkes-based risk model, ruin probabilities and the Merton optimization problem. The models are justified and supported by numerical examples based on real data. The presentation will be available on Zoom. |
Title: | A Different Approach to Endpoint Weak-Type Estimates for Calderón-Zygmund Operators |
Speaker: | Dr. Cody Stockdale (Clemson University) |
Date: | Wednesday, May 17, 2023 |
Time: | 2:30 pm |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: |
The weak-type (1,1) estimate for Calderón-Zygmund operators is fundamental in harmonic analysis. We investigate weak-type inequalities for Calderón-Zygmund singular integral operators using the Calderón-Zygmund decomposition and ideas inspired by Nazarov, Treil, and Volberg. We discuss applications of these techniques in the Euclidean setting, in weighted settings, for multilinear operators, for operators with weakened smoothness assumptions, and in studying the dimensional dependence of the Riesz transforms. |
Title: | Limiting absorption principle and virtual levels of operators in Banach spaces |
Speaker: | Dr. Andrew Comech (Texas A&M University) |
Date: | Friday, March 17, 2023 |
Time: | 1:30 pm |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: |
Virtual levels, also known as threshold resonances, admit several equivalent characterizations: 1. there are corresponding "virtual states" from a space "slightly weaker" than L^2; 2. there is no limiting absorption principle in their vicinity (e.g., no weights exist such that the "sandwiched" resolvent is uniformly bounded); 3. an arbitrarily small perturbation can produce an eigenvalue. We develop a general approach to virtual levels in Banach spaces and provide applications to Schroedinger operators with nonselfadjoint potentials in any dimension, deriving optimal estimates on the resolvent (including the lower-dimensional cases). This is joint work with Nabile Boussaid (Université de Franche-Comté, Besançon). The presentation will be available via Zoom. |
Title: | Equilibria in Reinsurance Markets: Monopolistic vs. Competitive Pricing |
Speaker: | Mario Ghossoub (University of Waterloo) |
Date: | Wednesday, December 7, 2022 |
Time: | 12:30 pm |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: |
The notion of a Bowley optimum (or Stackelberg equilibrium) has gained recent popularity as an equilibrium concept in reinsurance markets, but it assumes a monopolistic structure on the supply side. This is in contrast to reinsurance markets in which equilibrium pricing arises through strategic price competition between reinsurers. In this talk, I will discuss both market structures and argue that the notion of a Subgame Perfect Nash Equilibrium (SPNE) is the appropriate solution concept in the latter market setting. I will provide a characterization of equilibrium reinsurance contracts in each case, under fairly general assumptions about the preferences of market participants. Finally, I will discuss the Pareto-efficiency of equilibria and whether efficient allocations can be decentralized in each market structure. Specifically, Bowley-optimal contracts lead to Pareto-efficient allocations, but they make the insurer indifferent to the status quo. Moreover, only those Pareto-efficient contracts that make the insurer indifferent between suffering the loss and entering into the reinsurance contract are Bowley optimal. This is indicative of the limitations of Bowley optimality as an equilibrium concept. In the second market structure, equilibrium contracts induced by an SPNE result in Pareto-efficient allocations. Additionally, under mild conditions, the insurer realizes a strict welfare gain, which addresses the shortcomings of the Bowley setting and thereby ultimately reflects the benefit to the insurer of competition on the supply side. |
Title: | p-adic methods for solving Diophantine equations |
Speaker: | Amnon Besser (Ben Gurion University) |
Date: | Friday, December 2, 2022 |
Time: | 3:30 pm |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: |
The problem of finding rational or integer solutions to polynomial equations is one of the oldest problems in mathematics and one of the key driving forces in the development of Number Theory. In the last 15 years, new methods have been developed that can sometimes solve this problem effectively. These methods attempt to find the solutions inside the larger set of solutions of the same equation in the field of p-adic numbers as the vanishing set of some computable function. When these methods work, they give the rational solutions to arbitrarily large p-adic precision, which usually suffices to rigorously recover the full set of solutions. I will survey the new methods, originating from the work of Kim and from the more recent work of Lawrence and Venkatesh. I will then explain my work with Muller and Srinivasan that uses a p-adic version of the notion of norms on line bundles and associated heights, as used for example in arithmetic dynamics, to give a new approach to some Kim-type results. |
Title: | Machine Learning and Dynamical Systems Meet in Reproducing Kernel Hilbert Spaces |
Speaker: | Dr. Boumediene Hamzi (Johns Hopkins University) |
Date: | Thursday, October 27, 2022 |
Time: | 10:00 am |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: |
The intersection of the fields of dynamical systems and machine learning is largely unexplored, and the objective of this talk is to show that working in reproducing kernel Hilbert spaces offers tools for a data-based theory of nonlinear dynamical systems. We use the method of parametric and nonparametric kernel flows to predict some prototypical chaotic dynamical systems as well as geophysical observational data. We also consider microlocal kernel design for detecting critical transitions in some fast-slow random dynamical systems. We then show how kernel methods can be used to approximate center manifolds, propose a data-based version of the center manifold theorem, and construct Lyapunov functions for nonlinear ODEs. We also introduce a data-based approach to estimating key quantities which arise in the study of nonlinear autonomous, control and random dynamical systems. Our approach hinges on the observation that much of the existing linear theory may be readily extended to nonlinear systems, with a reasonable expectation of success, once the nonlinear system has been mapped into a high- or infinite-dimensional reproducing kernel Hilbert space. In particular, we develop computable, non-parametric estimators approximating controllability and observability energies for nonlinear systems. We apply this approach to the problem of model reduction of nonlinear control systems. It is also shown that the controllability energy estimator provides a key means for approximating the invariant measure of an ergodic, stochastically forced nonlinear system. |
Title: | Studying nonlinear dynamics using computational polynomial optimization |
Speaker: | Dr. David Goluskin (University of Victoria) |
Date: | Tuesday, October 4, 2022 |
Time: | 3:00 pm |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: |
For nonlinear ODEs and PDEs that cannot be solved exactly, various properties can be inferred by constructing functions that satisfy suitable inequalities. Although the most familiar example is proving nonlinear stability of an equilibrium by constructing Lyapunov functions, similar approaches can produce many other types of mathematical statements, including for systems with chaotic behavior. Such statements include bounds on attractor properties or on transient behavior, estimates of basins of attraction, and design of nonlinear controls. Analytical results of these types often give overly conservative results in order to remain tractable. Much stronger results can be achieved by using computational methods of polynomial optimization to construct functions that satisfy the desired inequalities. This talk will provide an overview of the different ways in which polynomial optimization can be used to study dynamics. I will show various examples in which polynomial optimization produces arbitrarily sharp results while other methods do not. I will focus on the ODE case, where theory and computational methods are more complete. |
Title: | Noncausal Affine Processes with Applications to Derivative Pricing |
Speaker: | Dr. Yang Lu (Université Paris 13, France) |
Date: | Tuesday, January 21, 2020 |
Time: | 11:30 am |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Linear factor models, where the factors are affine processes, play a key role in finance, since they allow for quasi-closed-form expressions of the term structure of risks. We introduce the class of noncausal affine linear factor models by considering factors that are affine in reverse time. These models are especially relevant for pricing sequences of speculative bubbles. We show that they feature much more complicated non-affine dynamics in calendar time, while still providing (quasi) closed-form term structures and derivative pricing formulas. The framework is illustrated with zero-coupon bond and European call option pricing examples. |
Title: | Optimal Reinsurance-Investment Strategy for a Dynamic Contagion Claim Model |
Speaker: | Ms. Jingyi Cao (University of Waterloo, ON) |
Date: | Friday, January 17, 2020 |
Time: | 10:00 am |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | We study the optimal reinsurance-investment problem for the compound dynamic contagion process introduced by Dassios and Zhao (2011). This model allows for self-exciting and externally-exciting clustering effects in the claim arrivals, and includes the well-known Cox process with shot noise intensity and the Hawkes process as special cases. For tractability, we assume that the insurer's risk preference is the time-consistent mean-variance criterion. By utilizing the dynamic programming and extended HJB equation approach, a closed-form expression is obtained for the equilibrium reinsurance-investment strategy. An excess-of-loss reinsurance type is shown to be optimal even in the presence of self-exciting and externally-exciting contagion claims, and the strategy depends on both the claim size and claim arrival assumptions. Further, we show that the self-exciting effect is of a more dangerous nature than the externally-exciting effect, as the former requires more risk management controls than the latter. In addition, we find that the reinsurance strategy does not always become more conservative (i.e., transferring more risk to the reinsurer) when the claim arrivals are contagious. Indeed, the insurer can be better off retaining more risk if the claim severity is relatively light-tailed. This is a joint work with David Landriault and Bin Li (both from University of Waterloo). |
Title: | Application of Random Effects in Dependent Compound Risk Model |
Speaker: | Mr. Himchan Jeong (University of Connecticut, CT) |
Date: | Friday, January 10, 2020 |
Time: | 10:00 am |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | In ratemaking for general insurance, the calculation of a pure premium has traditionally been based on modeling both frequency and severity in an aggregated claims model. Additionally, for simplicity, it has been standard practice to assume the independence of loss frequency and loss severity. In recent years, however, there has been sporadic interest in the actuarial literature in models that depart from this independence. Moreover, typical property and casualty insurance data enable us to explore the benefits of using random effects for predicting insurance claims observed longitudinally, or over a period of time. Thus, in this talk, I introduce research that utilizes random effects in a dependent two-part model for insurance ratemaking, testing the presence of random effects via Bayesian sensitivity analysis, with its own theoretical development as well as empirical results and performance measures using out-of-sample validation procedures. |
Title: | Mixture of Experts Regression Models for Insurance Ratemaking and Reserving |
Speaker: | Mr. Tsz Chai Fung (University of Toronto, ON) |
Date: | Thursday, January 9, 2020 |
Time: | 11:30 am |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Understanding the effect of policyholders' risk profiles on the number and the amount of claims, as well as the dependence among different types of claims, is critical to insurance ratemaking and IBNR-type reserving. To accurately quantify such features, it is essential to develop a regression model which is flexible, interpretable and statistically tractable. In this presentation, I will discuss a highly flexible nonlinear regression model we have recently developed, namely the logit-weighted reduced mixture of experts (LRMoE) models, for multivariate claim frequency or severity distributions. The LRMoE model is interpretable as it has two components: gating functions to classify policyholders into various latent sub-classes and expert functions to govern the distributional properties of the claims. The model is also flexible enough to fit many types of claim data accurately, which minimizes the issue of model selection. Model implementation is illustrated in two ways using a real automobile insurance dataset from a major European insurance company. We first fit the multivariate claim frequencies using an Erlang count expert function. Apart from showing excellent fitting results, we can interpret the fitted model from an insurance perspective and visualize the relationship between policyholders' information and their risk level. We further demonstrate how the fitted model may be useful for insurance ratemaking. The second illustration deals with insurance loss severity data that often exhibit heavy-tail behavior. Using a Transformed Gamma expert function, our model can fit the severity and reporting delay components of the dataset, which is ultimately shown to be useful and crucial for an adequate prediction of the IBNR reserve. This project is joint work with Andrei Badescu and Sheldon Lin. |
Title: | Compressive Sensing and its Applications in Data Science and in Computational Mathematics |
Speaker: | Dr. Simone Brugiapaglia (Simon Fraser University, BC) |
Date: | Monday, January 21, 2019 |
Time: | 11:45 am |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Compressive sensing (CS) is a general paradigm that enables us to measure objects (such as images, signals, or functions) by using a number of linear measurements proportional to their sparsity, i.e. to the minimal amount of information needed to represent them with respect to a suitable system. The vast popularity of CS is due to its impact in many practical applications of data science and signal processing, such as magnetic resonance imaging, X-ray computed tomography, or seismic imaging. In this talk, after presenting the main theoretical ingredients that made the success of CS possible and discussing recovery guarantees in the noise-blind scenario, we will show the impact of CS in computational mathematics. In particular, we will consider the problem of computing sparse polynomial approximations of functions defined over high-dimensional domains from pointwise samples, highly relevant for the uncertainty quantification of PDEs with random inputs. In this context, CS-based approaches are able to substantially lessen the curse of dimensionality, thus enabling the effective approximation of high-dimensional functions from small datasets. We will illustrate a rigorous noise-blind recovery error analysis for these methods and show their effectiveness through numerical experiments. Finally, we will present some challenging open problems for CS-based techniques in computational mathematics. |
Title: | The Algorithmic Hardness Threshold for Continuous Random Energy Models |
Speaker: | Dr. Pascal Maillard (Institut de Mathématique d’Orsay, France) |
Date: | Friday, January 18, 2019 |
Time: | 10:00 am |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | I will report on recent work with Louigi Addario-Berry on algorithmic hardness for finding low-energy states in the continuous random energy model of Bovier and Kurkova. This model can be regarded as a toy model for strongly correlated random energy landscapes such as the Sherrington-Kirkpatrick model. We exhibit a precise and explicit hardness threshold: finding states of energy above the threshold can be done in linear time, while below the threshold this takes exponential time for any algorithm, with high probability. If time permits, I will further discuss what insights this yields for understanding algorithmic hardness thresholds for random instances of combinatorial optimization problems. |
Title: | Rapid Mixing Bounds for Hamiltonian Monte Carlo under Strong Log-Concavity |
Speaker: | Dr. Oren Mangoubi (École polytechnique fédérale de Lausanne [EPFL], Switzerland) |
Date: | Thursday, January 10, 2019 |
Time: | 9:30 am |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Sampling from a probability distribution is a fundamental algorithmic problem. We discuss applications of sampling to several areas, including machine learning, Bayesian statistics and optimization. In many situations, for instance when the dimension is large, such sampling problems become computationally difficult. Markov chain Monte Carlo (MCMC) algorithms are among the most effective methods used to solve difficult sampling problems. However, most of the existing guarantees for MCMC algorithms only handle Markov chains that take very small steps and hence can oftentimes be very slow. Hamiltonian Monte Carlo (HMC) algorithms, which are inspired by Hamiltonian dynamics in physics, are capable of taking longer steps. Unfortunately, these long steps make HMC difficult to analyze. As a result, non-asymptotic bounds on the convergence rate of HMC have remained elusive. In this talk, we obtain rapid mixing bounds for HMC in an important class of strongly log-concave target distributions encountered in statistical and machine learning applications. Our bounds show that HMC is faster than its main competitor algorithms, including the Langevin and random walk Metropolis algorithms, for this class of distributions. Finally, we consider future directions in sampling and optimization. Specifically, we discuss how one might design adaptive online sampling algorithms for applications to reinforcement learning. We also discuss how Markov chain algorithms can be used to solve difficult non-convex sampling and optimization problems, and how one might be able to obtain theoretical guarantees for the MCMC algorithms that can solve these problems. |
Title: | Suboptimality of Local Algorithms for Optimization on Sparse Graphs |
Speaker: | Dr. Mustazee Rahman (KTH Royal Institute of Technology, Sweden) |
Date: | Tuesday, January 8, 2019 |
Time: | 11:15 am |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Suppose we want to find the largest independent set or maximum cut in a large yet sparse graph, where the average vertex degree is asymptotically constant. These are two basic optimization problems relevant to both theory and practice. For typical, or randomized, sparse graphs, many natural algorithms proceed by way of local decision rules; examples include Glauber dynamics, belief propagation, etc. I will explain a form of local algorithms that captures many of these. I will then explain how they provably fail to find optimal independent sets or cuts once the average degree of the graph becomes large. This answers a question that traces back to computer science, probability and statistical physics. Along the way, we will find connections to entropy and spin glasses. |
Title: | Ranks of Elliptic Curves, Limiting Distributions and Moments of Arithmetical Sequences |
Speaker: | Dr. Daniel Fiorilli (University of Ottawa, ON) |
Date: | Friday, February 3, 2017 |
Time: | 10:30 a.m. - 12:00 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | I will begin with an introduction to elliptic curves, with some historical comments. Concepts such as the rank of an elliptic curve will be discussed, as well as famous open problems such as extreme ranks and the Birch and Swinnerton-Dyer conjecture. My work links such problems to statements about the limiting distributions involved, which allow one to apply tools such as the central limit theorem and the theory of large deviations. In particular, I will describe how one can combine probability, analytic number theory, Galois theory and algebraic geometry to understand elliptic curves over function fields. Finally, I will describe my work on moments of arithmetical sequences in progressions, in particular bounds and asymptotics for the first and second moments (both in the classical case and with the circle method), as well as a probabilistic study of the second moment. |
Title: | Quantum Chaos and Arithmetic |
Speaker: | Dr. Stephen Lester (Concordia University & CRM-ISM, QC) |
Date: | Tuesday, January 24, 2017 |
Time: | 1:30 - 3:00 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | In this talk I will discuss some problems in Quantum Chaos and describe results in arithmetic settings where more can be proved. Given a compact, smooth Riemannian manifold (M,g), a central problem in Quantum Chaos is to understand the behavior of eigenfunctions of the Laplace-Beltrami operator in the limit as the eigenvalue tends to infinity. The Quantum Ergodicity Theorem of Shnirelman, Colin de Verdière, and Zelditch asserts that if the geodesic flow on M is ergodic then the mass of almost all of the eigenfunctions equidistributes. I will discuss problems which go beyond the Quantum Ergodicity Theorem, such as quantum unique ergodicity and small scale quantum ergodicity, in the setting of arithmetic surfaces such as the torus and modular surface. Limitations on equidistribution will also be discussed. I will also indicate how these problems are related to arithmetic objects such as L-functions, modular forms, and multiplicative functions. |
Title: | The Arithmetic of L-functions and their P-Adic Properties |
Speaker: | Dr. Giovanni Rosso (University of Cambridge, UK) |
Date: | Monday, January 23, 2017 |
Time: | 11:00 a.m. - 12:30 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | L-functions are complex functions which are defined from interesting arithmetic objects, such as number fields and elliptic curves. These functions contain a lot of interesting information about the objects that interest us. During the talk, I shall give some examples of how this information can be recovered, stressing the importance of the p-adic and mod p behaviour of these functions. I shall conclude with an overview of related conjectures (Iwasawa Main Conjecture, Greenberg's conjecture on trivial zeroes, existence of eigenvarieties,...) and my results on these topics. |
Title: | Families of Modular Forms |
Speaker: | Dr. John Bergdall (Boston University, MA) |
Date: | Thursday, January 19, 2017 |
Time: | 2:00 - 3:30 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Modular forms are central objects in number theory. They famously appear in the proof of Fermat's Last Theorem and more generally they play a role in the so-called Langlands program. This talk is concerned with the eigenforms, modular forms that are more algebraic in nature, and our ability to deform eigenforms into geometric families. My goal is to explain the history as well as the motivation for modular forms and families thereof. In addition, I will discuss recent results on the geometric properties of these families. This talk is intended for a general audience. A portion of this work is joint with David Hansen. |
Title: | The Geometry of Modular Forms, from Ramanujan to Moonshine |
Speaker: | Dr. Luca Candelori (Louisiana State University, LA) |
Date: | Tuesday, January 10, 2017 |
Time: | 2:00 - 3:30 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Modular forms often appear as generating series of interesting mathematical data, from Ramanujan's work on integer partitions to John McKay's computations on the representation theory of the Monster group. In this talk we introduce new geometric perspectives on these classical topics, or rather on some of the mathematics they have directly inspired: the theory of mock modular forms and that of vector-valued modular forms associated to vertex operator algebras. Specifically, we employ the geometry of modular curves to study the fields of definition of coefficients of mock modular forms and to classify the structure of graded modules of vector-valued modular forms. |
Title: | The Pricing of Idiosyncratic Risk in Option Markets |
Speaker: | Mr. Jean-François Bégin (HEC Montréal, QC) |
Date: | Monday, February 1, 2016 |
Time: | 2:15 - 4:00 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | The recent literature provides conflicting empirical evidence on the sign of the relationship between firm-specific risk and equity returns. This paper sheds new light on this relationship by developing a discrete-time jump-diffusion model in which a firm's systematic and idiosyncratic risk have both a diffusive and a tail component. Our discrete-time model is estimated using a two-stage procedure based exclusively on returns and options data. As this model includes latent variables, each stage relies on an adaptation of Gordon et al.'s (1993) bootstrap particle filter and the smoothed resampling method of Malik and Pitt (2011). We exploit the richness of stock option data to extract the expected risk premium associated with each risk factor, thereby avoiding the exclusive use of noisy realizations of historical returns. This also allows us to filter the jumps more efficiently. We estimate the model on 117 firms that are or were part of the S&P 500 index. First, we find that firm-specific jump risk and systematic risk are priced to a similar extent. Second, we show that the diffusive part of idiosyncratic risk is not priced in option markets, once other sources of risk are accounted for. |
Title: | Inference Concerning Intraclass Correlation for Binary Responses |
Speaker: | Dr. Debaraj Sen (Concordia University, QC) |
Date: | Friday, January 22, 2016 |
Time: | 1:45 - 3:30 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | In the analysis of several treatment groups with binary outcome data, it is often of interest to determine whether the treatments have stabilizing effects. This inference can be based on the confidence interval for a common intraclass correlation coefficient, an approach preferred by practitioners in many epidemiological applications. Inference procedures concerning the intraclass correlation have been well developed for single-sample problems, but little attention has been paid to extending them to multiple-sample problems. In this talk, we focus on constructing confidence interval procedures for a common intraclass correlation coefficient across several treatment groups. We compare different approaches with a large-sample procedure in terms of coverage and expected length through a simulation study. An application to a solar protection study illustrates the proposed methods. |
Title: | Design and Analysis of Experiments on Non-convex Regions |
Speaker: | Dr. Ofir Harari (Simon Fraser University, BC) |
Date: | Wednesday, January 20, 2016 |
Time: | 10:45 a.m. - 12:30 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Modeling a response over a non-convex design region is a common problem in diverse areas such as engineering and geophysics. The tools available to model and design for such responses are limited and have received little attention. We propose a new method for selecting design points over non-convex regions that is based on the application of multidimensional scaling to the geodesic distance. Optimal designs for prediction are described, with special emphasis on Gaussian process models, followed by a simulation study and an application in glaciology. |
Title: | Mixtures of Shifted Asymmetric Laplace Distributions |
Speaker: | Dr. Brian C. Franczak (McMaster University, ON) |
Date: | Monday, January 18, 2016 |
Time: | 1:45 - 3:30 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Cluster analysis can be lucidly defined as the process of sorting similar objects into groups. By construction, finite mixture models are a natural choice for performing cluster analysis; when a finite mixture model is used for this purpose, the process is called model-based clustering. To date, the Gaussian mixture model (GMM) has been the focal point in the development of finite mixtures for clustering. However, the GMM assumes that clusters are symmetric, which is often an unrealistic assumption. This talk discusses the development of a finite mixture of shifted asymmetric Laplace (SAL) distributions, which facilitates clustering, or classification, in situations where the clusters are not symmetric. The mixture of SAL distributions allows for the parameterization of skewness, in addition to location and scale, and is a substantial departure from the Gaussian model-based clustering paradigm. The mixture is fitted using a variant of the expectation-maximization (EM) algorithm, and we demonstrate this novel mixture modelling approach on simulated and real data sets. The Bayesian information criterion (BIC) and the integrated complete likelihood (ICL) are used for model selection. Finally, extensions of the general SAL mixture that arise by decomposing the component scale matrices are also discussed. |
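For readers unfamiliar with model-based clustering, a minimal EM fit of the two-component Gaussian mixture mentioned in the abstract (not the SAL mixture of the talk, whose EM updates are more involved) can be sketched as follows; the deterministic initialization at the data extremes is an arbitrary choice for illustration.

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """Minimal EM for a two-component univariate Gaussian mixture."""
    mu = np.array([x.min(), x.max()])          # initialize at data extremes
    sig = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities (posterior component probabilities).
        dens = np.stack([
            w[k] * np.exp(-0.5 * ((x - mu[k]) / sig[k]) ** 2)
            / (sig[k] * np.sqrt(2 * np.pi))
            for k in range(2)
        ])
        resp = dens / dens.sum(axis=0)
        # M-step: reweighted moment updates.
        nk = resp.sum(axis=1)
        w = nk / len(x)
        mu = (resp * x).sum(axis=1) / nk
        sig = np.sqrt((resp * (x - mu[:, None]) ** 2).sum(axis=1) / nk)
    return w, mu, sig, resp
```

Hard cluster labels are recovered by taking the argmax of the responsibilities per observation, which is exactly the "model-based clustering" step the abstract describes.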
Title: | Bayesian Learning, A Framework for Automatic Learning from Big Data |
Speaker: | Dr. Vahid Partovi Nia (École Polytechnique de Montréal, QC) |
Date: | Friday, January 15, 2016 |
Time: | 1:45 - 3:30 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | We live in the era of massive data generation through satellites, cell phones, VoIP and video calls, social websites, smart watches, the online stock market, and so on. This revolution in data generation calls for computationally efficient algorithms with a proper generalization ground to tackle various data structures. As an applied statistician, I dedicate a third of my talk to introducing applied projects that my research team and I executed recently. I demonstrate interesting questions on real data that initiated my collaboration with industry; such questions motivated my methodological contributions in statistics, data analysis, and smart technology. The remaining two thirds of the talk is dedicated to my main theme of research: Bayesian supervised, semi-supervised, and unsupervised learning. The focus is on unsupervised learning, because the supervised and semi-supervised variants are special cases of it. I pick up a simple unsupervised learning algorithm and build a computationally efficient Bayesian version, then discuss the advantages of this new view, especially when data are massive. |
Title: | Tree-Structured Estimation |
Speaker: | Dr. Nathalie Akakpo (Laboratoire de Probabilités et Modèles Aléatoires, Université Pierre et Marie Curie, Paris, France) |
Date: | Friday, January 8, 2016 |
Time: | 1:45 - 3:30 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Density estimation may already seem a well-documented topic. Yet some problems remain difficult to solve, even for univariate data, such as adapting to spatially varying smoothness. For multivariate data, the smoothness is also likely to vary with the direction. Moreover, the dimension of the data weighs on the performance of nonparametric density estimators, a phenomenon known as the curse of dimensionality. To overcome such problems, we will present tree-structured models, based either on piecewise polynomials or on wavelets, that allow us to approximate a wide range of densities. We will explain how to select a good model from the data by using an appropriate penalized criterion, and we will outline theoretical results and algorithmic properties of these new estimation procedures. |
Title: | Exponentially Convergent Algorithm to Generate Uniformly Random Points in a d-Dimensional Convex Body |
Speaker: | Dr. Tomasz Szarek (University of Gdansk, Poland) |
Date: | Wednesday, February 19, 2014 |
Time: | 1:00 - 2:00 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | An algorithm to generate random points inside an arbitrary d-dimensional convex body X with respect to the flat (Lebesgue) measure is proposed. It can be considered as an iterated function system (IFS) with an infinite number of functions acting on X. We analyze the corresponding Markov operator, which acts on probability measures, and show that any initial measure converges exponentially to the uniform measure. Estimates for the convergence rate are derived in terms of the dimension d and the ratio between the radius of the sphere inscribed in X and the radius of the circumscribed sphere. Concrete estimates are provided for the Birkhoff polytope of bistochastic matrices, the set of quantum states acting on an N-dimensional Hilbert space, and its subset consisting of states with positive partial transpose. |
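The IFS-based algorithm is the talk's subject and is not reproduced here. As a point of comparison, the naive baseline for the same task (uniform sampling from a convex body, given only a membership oracle and a bounding box) is rejection sampling, whose acceptance rate, unlike the exponential convergence advertised in the abstract, degrades rapidly with the dimension d:

```python
import numpy as np

def rejection_sample(member, box_lo, box_hi, n, rng=None):
    """Uniform samples from a convex body given a membership oracle
    and a bounding box [box_lo, box_hi] (naive rejection sampling)."""
    rng = rng or np.random.default_rng()
    out = []
    while len(out) < n:
        p = rng.uniform(box_lo, box_hi)   # propose uniformly in the box
        if member(p):                     # keep only points inside the body
            out.append(p)
    return np.array(out)

# Example: 1000 uniform points in the unit ball in dimension 3.
pts = rejection_sample(lambda p: p @ p <= 1.0, [-1] * 3, [1] * 3, 1000)
```

For the unit ball in a cube, the acceptance probability is the volume ratio, which shrinks to zero exponentially in d; this is precisely the regime where Markov-chain approaches such as the one in the talk become necessary.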
Title: | Tail Dependence and its Influence on Risk Measures |
Speaker: | Mr. Lei (Larry) Hua (University of British Columbia, BC) |
Date: | December 7, 2011 |
Time: | 10:30 a.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | In this talk, Mr. Hua will present his contributions to the study of the tail behavior of copulas and its influence on risk measures such as Conditional Tail Expectation and Value at Risk. Tail order and intermediate tail dependence will be discussed for copula families. In particular, for Archimedean copulas, we use a Laplace transform (LT) representation to relate the tail heaviness of a positive random variable to the tail behavior of copulas. Conditions that lead to intermediate tail dependence have been obtained, and they are related to a Taylor expansion of the LT up to a fractional order. Then the sensitivity of risk measures to dependence modeling will be discussed in terms of tail comonotonicity, an asymptotically full dependence structure. Conservativity and asymptotic additivity of risk measures will be discussed with various theoretical results, simulations, and a data analysis of auto insurance claims. |
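The Laplace-transform representation of Archimedean copulas mentioned in the abstract also yields the standard Marshall-Olkin sampling recipe: if psi is the LT of a positive random variable V, then U_i = psi(E_i / V) with iid unit exponentials E_i has the Archimedean copula generated by psi. A sketch for the Clayton family, where V is Gamma-distributed (this is a textbook construction, not material from the talk):

```python
import numpy as np

def sample_clayton(n, d, theta, rng=None):
    """Sample n points from a d-dimensional Clayton copula via the
    Laplace-transform (Marshall-Olkin) construction:
    psi(t) = (1 + t)^(-1/theta) is the LT of Gamma(1/theta, 1)."""
    rng = rng or np.random.default_rng()
    v = rng.gamma(1.0 / theta, 1.0, size=(n, 1))   # mixing variable V
    e = rng.exponential(size=(n, d))               # iid unit exponentials
    return (1.0 + e / v) ** (-1.0 / theta)         # U_i = psi(E_i / V)
```

Each margin is exactly uniform by the identity P(psi(E/V) <= u) = psi(psi^{-1}(u)) = u, which is the mechanism linking the tail heaviness of V to the tail dependence of the copula.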
Title: | Ruin Models Featuring Interest and Diffusion |
Speaker: | Dr. Ilie-Radu Mitric (University of Connecticut, CT) |
Date: | December 6, 2011 |
Time: | 10:30 a.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | A number of extensions to risk theory models are analyzed. Initially, we consider a multi-threshold compound Poisson surplus process with interest earned at a constant rate. Then, we analyze a multi-layer compound Poisson surplus process perturbed by diffusion and examine the behavior of the Gerber-Shiu discounted penalty function. Lastly, we consider the absolute ruin problem in a risk model with debit and credit interest, under both renewal and non-renewal structures. Our first results apply to MAP processes, which we later restrict to the Sparre Andersen renewal risk model with interclaim times that are generalized Erlang(n) distributed and claim amounts following a matrix-exponential (ME) distribution. Under this scenario, we present a general methodology to analyze the Gerber-Shiu discounted penalty function defined at absolute ruin as a solution of high-order linear differential equations with non-constant coefficients. Closed-form solutions for the absolute ruin probabilities in the generalized Erlang(2) case complement recent results from the literature obtained under the classical risk model. |
Title: | Modelling of Dependent Risk Processes Based on the Class of Erlang Mixtures |
Speaker: | Dr. Jae Kyung Woo (Columbia University, NY) |
Date: | December 1, 2011 |
Time: | 11:30 a.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | A generalized Sparre Andersen risk process is examined, whereby the joint distribution of the interclaim time and the ensuing claim amount is assumed to have a particular mathematical structure. This structure is present in various dependency models which have previously been proposed and analyzed. It is then shown that this structure in turn often implies particular functional forms for joint discounted densities of ruin-related variables, including some or all of the deficit at ruin, the surplus immediately prior to ruin, and the surplus after the second-last claim. Then, employing a fairly general interclaim time structure which involves a combination of Erlang-type densities, a complete identification of a generalized Gerber-Shiu function is provided. Various examples and special cases of the model are then considered, including one involving a bivariate Erlang mixture model. Moreover, a class of multivariate mixed Erlang distributions is discussed for modelling different types of dependency structures. |
Title: | Risk Measures and Dependence Concepts |
Speaker: | Ms. Mélina Mailhot (Université Laval, QC) |
Date: | November 28, 2011 |
Time: | 2:15 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Consider a portfolio of n possibly dependent risks X1, …, Xn. The objective is to determine the amount of economic capital needed for the whole portfolio and to compute the amount of capital to be allocated to each risk X1, …, Xn, assuming a multivariate compound model for (X1, …, Xn). The TVaR-based capital allocation method is used to quantify the contribution of each risk, and numerical illustrations are provided. The multivariate lower and upper orthant Value-at-Risk (VaR) is another risk measure used in actuarial science, finance, and quantitative risk management. Since these risk measures have been introduced only recently, their behavior in the bivariate case is studied: several propositions are established from their definitions, and bounds on these risk measures and confidence regions are obtained. Applications illustrate the results. Finally, related projects will be discussed. |
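The TVaR-based allocation rule mentioned in the abstract assigns to each risk its expected loss in the scenarios where the aggregate loss exceeds its VaR; by construction the allocations sum to the TVaR of the portfolio. A Monte Carlo sketch (the distributional model is left abstract here; any simulated loss matrix will do):

```python
import numpy as np

def tvar_allocation(X, kappa):
    """TVaR-based capital allocation by Monte Carlo.
    X: (n_sim, n_risks) array of simulated losses.
    Returns (tvar_S, allocations); the allocations sum to tvar_S."""
    S = X.sum(axis=1)                  # aggregate loss per scenario
    var_S = np.quantile(S, kappa)      # VaR of the aggregate at level kappa
    tail = S > var_S                   # tail scenarios driving the TVaR
    alloc = X[tail].mean(axis=0)       # E[X_i | S > VaR_kappa(S)]
    return S[tail].mean(), alloc
```

The additivity property (allocations summing exactly to the portfolio TVaR) is what makes this rule attractive for splitting economic capital across lines of business.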
Title: | On Algebraic Connectivity of Graphs With at Most Two Cut-Points |
Speaker: | Dr. Arbind Lal (Institute of Technology, India) |
Date: | November 17, 2010 |
Time: | 2:45 p.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Let G be a connected graph and L(G) its Laplacian matrix. Fiedler coined the term "algebraic connectivity of the graph G" for the second-smallest eigenvalue of L(G). This talk will start with definitions and results related to the study of the algebraic connectivity and the Fiedler vector of graphs. Then some basic and fundamental results in this area that have been generalized to graphs with at most two cut-points will be presented. This work was done in collaboration with Prof. R. B. Bapat (Stat-Math Unit, Indian Statistical Institute, Delhi Centre) and Prof. S. Pati (Department of Mathematics, IIT Guwahati). |
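The quantity in question is easy to compute numerically: with L(G) = D - A for adjacency matrix A and degree matrix D, the algebraic connectivity is the second-smallest eigenvalue of L(G). A small illustration:

```python
import numpy as np

def algebraic_connectivity(A):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A
    (the Fiedler value), computed from an adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A            # Laplacian D - A
    return np.sort(np.linalg.eigvalsh(L))[1]  # eigenvalues ascending

# Path graph on 3 vertices: Laplacian eigenvalues are 0, 1, 3,
# so the algebraic connectivity is 1.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
```

The value is positive exactly when G is connected, which is why Fiedler's eigenvalue serves as a quantitative measure of connectivity.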
Title: | Application of Optimization Theory in Statistics |
Speaker: | Dr. Jinde Wang (Nanjing University, China) |
Date: | August 12, 2010 |
Time: | 10:30 a.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Mathematically, many important statistical problems are optimization problems. Typical examples are regression problems, maximum likelihood estimation problems, maximum likelihood ratio testing problems, and so on. Some of them can be solved by classical calculus. However, some problems are not easy to solve in this way, such as nondifferentiable regression (least absolute deviation estimation) problems, inequality-constrained estimation problems, and some complex functional estimation problems. We show how these difficult problems can be solved by applying optimization theory. |
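As a concrete instance of the nondifferentiable regression problem mentioned above, least absolute deviation regression can be recast as a linear program by introducing slack variables that bound the absolute residuals. A sketch, assuming SciPy's `linprog` is available:

```python
import numpy as np
from scipy.optimize import linprog

def lad_regression(X, y):
    """Least absolute deviation fit: min_b sum_i |y_i - X_i b|,
    reformulated as an LP with slacks t_i >= |residual_i|."""
    n, p = X.shape
    # Variables: [b (p entries, free), t (n entries, >= 0)]; objective: sum t.
    c = np.concatenate([np.zeros(p), np.ones(n)])
    # Constraints:  Xb - y <= t   and   y - Xb <= t.
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]])
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]
```

Unlike least squares, this fit is robust to a gross outlier: a line through the bulk of the data minimizes the sum of absolute residuals even when one observation is badly contaminated.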
Title: | From Spectral Theory of Polyhedral Surfaces to Geometry of Hurwitz Spaces |
Speaker: | Dr. Alexey Kokotov (Concordia University, QC) |
Date: | February 5, 2010 |
Time: | 10:30 a.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | In this talk we show how to make use of an important spectral invariant of polyhedral surfaces (Riemann surfaces with conformally flat conical metrics), the determinant of the Laplacian, to study the geometry of moduli spaces. For a special class of polyhedral surfaces, the surfaces with trivial holonomy, we obtain a holomorphic factorization formula for the determinant of the Laplacian. The holomorphic factor appearing in this formula is the so-called Bergman tau-function, a universal object arising in areas ranging from isomonodromic deformations of Fuchsian ODEs to random matrices and Frobenius manifolds. The Bergman tau-function gives rise to a section of the Hodge line bundle over the space of admissible covers (the Harris-Mumford compactification of the Hurwitz space, i.e., the moduli space of meromorphic functions on Riemann surfaces). Analysis of the asymptotics of the Bergman tau-function near the boundary of the Hurwitz space leads to an explicit expression for the Hodge class of the space of admissible covers in terms of boundary divisors. This expression generalizes previously known results of Lando-Zvonkine for spaces of rational functions and of Cornalba-Harris for spaces of hyperelliptic curves. |
Title: | Convex Bodies, Intersection Theory and Toric Degenerations |
Speaker: | Dr. Kiumars Kaveh (McMaster University, ON) |
Date: | January 29, 2010 |
Time: | 10:30 a.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | In this talk, we discuss a new connection between algebraic geometry and convex geometry. We explain a basic construction which associates to an algebraic variety a convex body encoding information about its geometry. The construction generalizes the well-known Newton polytope of a toric variety. As an application, we give a formula for the number of solutions of a system of equations in terms of the volumes of these bodies, greatly generalizing the celebrated Kushnirenko theorem in toric geometry. Next we will see how convex polytopes naturally appearing in representation theory are special cases of this construction. This gives rise to natural degenerations of flag varieties and Grassmannians to toric varieties, and can be used to approach mirror symmetry problems on flag varieties. The methods and constructions apply to a wide range of situations. Some of the work presented here is joint with A. G. Khovanskii and builds on earlier work of A. Okounkov. |
Title: | Gromov-Witten and Donaldson-Thomas Theories and Quantum McKay Correspondence |
Speaker: | Dr. Amin Gholampour (California Institute of Technology, CA) |
Date: | January 27, 2010 |
Time: | 10:30 a.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Let G be a finite subgroup of SU(2) or SO(3). The classical McKay correspondence describes the classical geometry of Y, the natural resolution of C^3/G, in terms of the representation theory of G. We determine the quantum geometry of Y by solving for its Gromov-Witten theory. We describe the answer in terms of an ADE root system canonically associated to G. In the case G < SU(2), we solve for three variations of the Donaldson-Thomas theory of Y and verify the conjectural relations with the Gromov-Witten theory. Finally, in some cases, we verify the Crepant Resolution Conjecture, which relates the quantum geometries of Y and the orbifold [C^3/G], for both the Gromov-Witten and Donaldson-Thomas theories of Y. |
Title: | Enumerative Geometry, Gromov-Witten Theory, and Orbifolds |
Speaker: | Dr. Linda Chen (Swarthmore College, PA) |
Date: | January 22, 2010 |
Time: | 11:00 a.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | It is a fundamental problem in geometry to understand and use moduli spaces parametrizing objects satisfying certain criteria. In the 1990s, guided by ideas in physics, Kontsevich obtained a beautiful formula for the number of rational plane curves of degree d passing through 3d-1 general points by using the Gromov-Witten theory of the projective plane. More recently, orbifold Gromov-Witten theory has been used to study generalizations of the McKay correspondence and crepant resolutions. Dr. Chen will discuss these developments as well as further approaches to problems in enumerative geometry. |
Title: | Singularities of Polynomials in Characteristic 0 and Characteristic p |
Speaker: | Dr. Karl Schwede (University of Michigan) |
Date: | January 20, 2010 |
Time: | 11:00 a.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | I will discuss the singularities of the zero locus of a complex-valued polynomial equation, with particular attention paid to comparing different singularities. I will discuss two different approaches to this question: analytic (characteristic zero) and algebraic (positive characteristic). |
Title: | Improved Confidence Interval for the Dispersion Parameter in Count Data Using Profile Likelihood |
Speaker: | Dr. Krishna K. Saha (Central Connecticut State University) |
Date: | July 3, 2009 |
Time: | 10:30 a.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | The dispersion parameter is an important and versatile measure in the analysis of one-way layouts of count data in biological studies. Many simulation studies have examined the bias and efficiency of different estimators of the dispersion parameter for finite data sets, but little attention has been paid to the accuracy of its confidence interval. In this paper, he investigates the small-sample coverage probabilities of four different approaches for computing confidence intervals for the dispersion parameter, based on a parametric model as well as on models specified by only the first two moments of the counts. The first procedure compared is an asymptotic approach (AA) based on the Wald statistic, and the second is a parametric bootstrap approach (PBA) based on estimators of the dispersion parameter. The remaining two procedures are the hybrid profile variance approach (HPVA) and the profile likelihood approach (PLA). As assessed by Monte Carlo simulation, the PLA has the best small-sample performance, followed by HPVA and PBA, and he strongly recommends its use in practice. Finally, these methods are applied to a set of cancer tumor data. |
Title: | L1 Penalty and Shrinkage Estimation Strategies in Generalized Linear Models |
Speaker: | Dr. Ejaz S. Ahmed (University of Windsor) |
Date: | June 19, 2009 |
Time: | 10:00 a.m. |
Location: | LB 921-4 (Concordia University, Library Building, 1400 de Maisonneuve West) |
Abstract: | Penalized and shrinkage regression have been widely used in high-dimensional data analysis. Much recent work has been done on penalized least squares methods in linear models. In this talk, I consider estimation in generalized linear models when there are many potential predictor variables and some of them may have no influence on the response of interest. In the context of two competing models, where one model includes all predictors and the other restricts variable coefficients to a candidate linear subspace based on prior knowledge, we investigate the relative performance of Stein-type shrinkage in the direction of the subspace, pretest estimators, the L1 penalty, and candidate-subspace-restricted estimators. We develop large-sample theory for the estimators, including derivations of asymptotic bias and mean squared error. Asymptotics and a Monte Carlo simulation study show that the shrinkage estimator performs best overall and, in particular, performs better than the L1 penalty estimator when the dimension of the restricted parameter space is large. A real data set analysis is also presented to compare the suggested methods. Joint work with K. Doksum and S. Hossain. |