Seminar Series

**Date:** Tuesday, August 25, 2015; 2:00 - 3:00 PM; Math Tower, Room 1-122

**Speaker:** Xiaolei Chen, AMS Department

**Title:** Army Research Office

**Abstract:** The High School Apprenticeship Program (HSAP), managed by the Army Research Office (ARO), is an Army Educational Outreach Program (AEOP) that matches talented high school juniors and seniors with practicing scientists and engineers in Army-sponsored university/college research laboratories throughout the country. This summer, Corey Emery and David Bishop joined our group. They focused on *Numerical Modeling of Fabric Surfaces and Simulation of Parachute Inflation*.

Corey worked on optimizing the stability of the C9 and T10 parachutes by adjusting variables such as the payload weight, the length of the cords, and the radii of the canopy and air vent. Corey also started a pilot chute simulation in the *FronTier++* library.

David worked on a stability analysis of the cross parachute. Multiple simulations were conducted, sweeping each variable over a spectrum of values; only one variable was changed at a time, with everything else held at its original setting in the *FronTier++* package.
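The one-variable-at-a-time sweep described above can be sketched generically. The variable names, baseline values, and `run_simulation` function below are hypothetical placeholders for illustration, not part of the *FronTier++* API:

```python
# Illustrative one-at-a-time parameter sweep for a stability study.
# `run_simulation` and all parameter values are hypothetical stand-ins.
baseline = {"payload_weight": 100.0, "cord_length": 5.0, "canopy_radius": 3.0}
sweeps = {
    "payload_weight": [80.0, 100.0, 120.0],
    "cord_length": [4.0, 5.0, 6.0],
    "canopy_radius": [2.5, 3.0, 3.5],
}

def run_simulation(params):
    # Placeholder "stability metric"; a real study would invoke the solver here.
    return params["canopy_radius"] / (params["payload_weight"] * params["cord_length"])

results = {}
for name, values in sweeps.items():
    for v in values:
        params = dict(baseline)   # reset all other variables to baseline
        params[name] = v          # change only one variable at a time
        results[(name, v)] = run_simulation(params)

print(len(results))  # one run per (variable, value) pair
```

Keeping every other variable at its baseline isolates each variable's effect, at the cost of missing interactions between variables.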

In the presentation, they will demonstrate some numerical simulation results and some analysis.

**Date:** Tuesday, May 19, 2015; 2:00 - 3:00 PM; Math Tower, Seminar Room 1-122

**Speaker:** Dr. Qiang Zhang, Department of Mathematics, City University of Hong Kong

**Title:** Pricing Options with Stochastic Volatility

**Abstract:** The well-known Heston model for stochastic volatility captures the reality of the movement of stock prices in our financial market. However, the solutions for option prices under this stochastic volatility model are expressed in terms of integrals in the complex plane, which are difficult to evaluate numerically. We present closed-form solutions for option prices and implied volatility under the Heston model of stochastic volatility. Our method is based on a multiple-scale analysis in singular perturbation theory. Our theoretical predictions are in excellent agreement with numerical solutions of the Heston model. We also show that our approximate solution is valid not only in the fast mean-reverting regime but also in the slow mean-reverting regime. This means that the solutions in these two different regimes can be approximated by the same function. We further apply our new approach of multiple-scale analysis to pricing Asian options with stochastic volatility. The results are also in excellent agreement with the exact numerical solutions.
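For orientation, the Heston dynamics the talk builds on can be simulated with a minimal Monte Carlo sketch (full-truncation Euler). This is not the closed-form perturbation solution discussed in the talk, and all parameter values below are illustrative assumptions:

```python
import numpy as np

# Minimal Monte Carlo sketch of the Heston stochastic-volatility model:
#   dS = r S dt + sqrt(v) S dW1,  dv = kappa (theta - v) dt + xi sqrt(v) dW2,
# with corr(dW1, dW2) = rho. Full-truncation Euler; illustrative parameters.
rng = np.random.default_rng(0)
S0, v0, r = 100.0, 0.04, 0.02
kappa, theta, xi, rho = 1.5, 0.04, 0.3, -0.7
T, n_steps, n_paths, K = 1.0, 200, 20_000, 100.0
dt = T / n_steps

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
for _ in range(n_steps):
    z1 = rng.standard_normal(n_paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
    vp = np.maximum(v, 0.0)                      # full truncation keeps variance usable
    S *= np.exp((r - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    v += kappa * (theta - vp) * dt + xi * np.sqrt(vp * dt) * z2

call = np.exp(-r * T) * np.mean(np.maximum(S - K, 0.0))
print(round(call, 2))  # discounted at-the-money European call estimate
```

The cost and noise of such simulation is precisely the motivation for closed-form approximations like the multiple-scale analysis described in the abstract.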

**Date:** Monday, May 11, 2015; 2:00 - 3:30 PM, Math Tower, Seminar Room 1-122

**Speaker:** Dr. Xinghui Zhong, Department of Mathematics, Michigan State University

**Title:** Discontinuous Galerkin Methods: Algorithm Design and Applications

**Abstract:** In this talk, we discuss discontinuous Galerkin (DG) methods with emphasis on algorithm design targeted at applications in shock calculation and plasma physics. The DG method is a class of finite element methods that has gained popularity in recent years due to its flexibility on arbitrary unstructured meshes, its compact stencil, and its ability to easily accommodate arbitrary h-p adaptivity. However, some challenges still remain in specific application problems.

In the first part of my talk, we design a new limiter using weighted essentially non-oscillatory (WENO) methodology for DG methods solving conservation laws, with the goal of obtaining a robust, high-order limiting procedure that simultaneously achieves uniform high-order accuracy and sharp, non-oscillatory shock transitions. The main advantage of this limiter is its simplicity of implementation, especially on multi-dimensional unstructured meshes.

In the second part, we propose energy-conserving numerical schemes for Vlasov-type systems. These equations are fundamental models in the simulation of plasma physics, and the total energy is an important physical quantity that they conserve. Our method is the first Eulerian solver that preserves fully discrete total energy conservation. The main features of our methods include energy-conservative temporal and spatial discretizations. In particular, an energy-conserving operator splitting is proposed to enable efficient calculation of fully implicit methods. We validate our schemes by rigorous derivations and benchmark numerical examples.
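The flavor of fully discrete energy conservation can be seen on a toy problem: for a harmonic oscillator, the implicit midpoint rule conserves the discrete energy exactly, while explicit Euler lets it grow. This is a generic illustration only, not the Vlasov scheme from the talk:

```python
# Toy illustration of discrete energy conservation: harmonic oscillator
# x' = v, v' = -x, with energy E = (x^2 + v^2)/2. The implicit midpoint
# rule conserves E exactly for this linear system; explicit Euler does not.
# Generic sketch, not the Vlasov solver described in the talk.
dt, n = 0.1, 1000
xe, ve = 1.0, 0.0      # explicit Euler state
xm, vm = 1.0, 0.0      # implicit midpoint state
a = dt / 2
denom = 1 + a * a      # closed-form solve of the implicit 2x2 update

for _ in range(n):
    xe, ve = xe + dt * ve, ve - dt * xe                       # explicit Euler
    xm, vm = (((1 - a * a) * xm + dt * vm) / denom,           # implicit midpoint
              ((1 - a * a) * vm - dt * xm) / denom)

print(round(0.5 * (xm**2 + vm**2), 6))  # → 0.5 (midpoint conserves energy)
print(0.5 * (xe**2 + ve**2) > 0.5)      # → True (Euler energy grows)
```

The midpoint update matrix has unit norm for this quadratic energy, so conservation holds to round-off even over long runs; the same design goal, at far greater effort, drives the energy-conserving Vlasov discretizations in the abstract.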

**"Pizza with the Professor" seminar series for AMS faculty and graduate students**

**Date:** Wednesday, April 15, 2015; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor Roman Samulyak

**Title:** Novel Lagrangian Particle and Particle-Mesh Methods for Complex Systems

**Abstract:** Complex physics systems involving interactions of non-uniform matter, relativistic particles, and fields require new approaches to mathematical modeling and numerical algorithms capable of resolving the relevant length scales and physics processes, and applicable to modern heterogeneous supercomputers.

We have developed new methods based on Lagrangian particles that are capable of transforming the nature of simulations of coupled multiphysics systems by eliminating computational meshes. The first example is a Lagrangian particle method for compressible hydrodynamics that handles complex interfaces and continuously adapts to density changes. It is based on rigorous mathematical approximation theory, leading to robust, high-quality, high-convergence-order algorithms. In the second example, I will present the new Particle-in-Cloud method for meshless simulations of the Vlasov-Poisson equations, which are traditionally solved by the Particle-in-Cell method. Our method is highly adaptive, accurate, and free of artificial numerical effects. In the third example, I will discuss the more traditional Particle-in-Cell approach for the Vlasov-Maxwell equations, where our contribution is the resolution of complex plasma chemistry.

Application examples include new cooling techniques for relativistic particle beams for future particle accelerators, an alternative hybrid magneto-inertial fusion concept that uses high Mach number plasma jets and magnetized plasma targets, and high-power targets for particle accelerators.

*"Pizza with the Professor"* seminar series for AMS faculty and graduate students

**Date:** Wednesday, March 25, 2015; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor Xiaolin Li

**Title:** Computing Partial Differential Equations with Deforming Domains and Moving Boundaries

**Abstract:** Mathematically, you need to understand the general behavior of hyperbolic, parabolic, and elliptic PDEs. They arise in science and engineering as advection, diffusion, reaction, and steady-state problems in a continuum medium. Numerically, the challenges come from the partitioning and management of deforming domains and moving boundaries, in addition to the finite difference schemes.

I will show how to build such a computational platform to solve free boundary problems, and how one can apply such software to demonstrate nature and predict its evolution into the future. I will present simulations of fluid instabilities, bubbling, and jet spray; of phase transition problems such as deposition, dissolution, erosion, freezing, and melting; and of fluid-structure interaction, from coffee stirring and pumps to windmills, bullets, and parachute inflation. Yes, I will also show you that the stock market is a continuum medium involving moving boundaries, and that our tool can also be applied to quantitative finance.

**Date:** Friday, March 6, 2015; 11:00 AM - 12:00 PM; Math Tower, Seminar Room 1-122A

**Speaker:** Professor Jie Yu, Civil Engineering Program, Department of Mechanical Engineering, and School of Marine and Atmospheric Sciences, Stony Brook University

**Title:** Fluid ratcheting by oscillating channel walls with sawteeth

**Abstract:** Motions rectified by symmetry-breaking mechanisms in oscillating flows have been of great interest in microfluidics, biological locomotion, and other engineering applications. It is well known in water waves that steady streaming can be induced close to a rigid or flexible boundary due to nonlinearity and viscosity; this is known as mass transport or peristaltic pumping. This mechanism is shared by the phenomenon of ratcheting fluid in a narrow channel by vibrating channel walls lined with asymmetric sawteeth, demonstrated in a recent experiment. A theory is presented here to describe the ratcheting effects in such a channel. In a conformally transformed plane, the vorticity dynamics is analysed, yielding a solution that elucidates the essential physics of rectifying oscillatory momentum into steady flows. The geometric asymmetry renders the effects spatially biased, leading to a unidirectional component in the steady flow, i.e. net pumping. Various influences on the net pumping rate are analysed. Potential applications of the mechanism include valveless micro-pumping.

**"Pizza with the Professor" seminar series for AMS faculty and graduate students**

**Date:** Wednesday, February 18, 2015; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor Matthew Reuter

**Title:** Mathematics for Nanoscience

**Abstract:** In this talk I will overview some of the mathematical tools and challenges in studying nanoscience. As a motivating example, I will discuss how block quasi-separable matrices and matrix Möbius transformations can be used to characterize nanomaterials. These mathematical tools both simplify computation and provide physical insight.

**Date:** Tuesday, February 3, 2015; 11:00 AM - 12:00 PM; Math Tower, Seminar Room 1-122A

**Speaker:** Dr. Barbara Chapman, University of Houston

**Title:** Large-Scale Application Development in an Age of Diversity

**Abstract:** The developers of large-scale scientific and technical applications expend significant effort adapting their code to make efficient use of increasingly complex high performance computing (HPC) systems. Today, this requires learning how to exploit parallelism within each compute node as well as between nodes. There is considerable diversity with respect to intranode architectures, and a faster rate of change in parameters that affect application performance. As a result, the challenge of creating portable code that achieves high performance is greater than ever.

For nearly two decades, the vast majority of HPC applications were created via the insertion of MPI’s library routines. In order to exploit the multiple cores, shared memory and (potentially) accelerators within a node, MPI is now increasingly being used together with OpenMP and/or OpenACC. Both OpenMP and OpenACC are directive-based APIs developed by vendor-driven consortia that are still evolving. In this presentation we discuss the status of these programming interfaces and the challenges ahead for application developers and systems software alike.

**CV – Dr. Barbara Chapman**:

Dr. Chapman is a Professor of Computer Science at the University of Houston, TX, USA, where she also directs the Center for Advanced Computing and Data Systems. Chapman has performed research on parallel programming languages and the related implementation technology for over 20 years and has been involved in the OpenMP directive-based programming standard development since 2001. She also contributes to the OpenSHMEM and OpenACC programming standards efforts. Her research group has developed OpenUH, a state-of-the-art open source compiler that is used to explore language, compiler and runtime techniques, with a special focus on multithreaded programming. Dr. Chapman's research also explores optimization of partitioned global address space programs, strategies for runtime code optimizations, compiler-tools interactions and high-level programming models for embedded systems.

*Dr. Jie's seminar for January 28, 2015 has been cancelled and will be re-scheduled at a later date.*

**Date:** Wednesday, January 28, 2015; 1:00 - 2:00 PM; Math Tower, Seminar Room 1-122A

**Speaker:** Jie Yu, Civil Engineering Program, Department of Mechanical Engineering, and School of Marine and Atmospheric Sciences, Stony Brook University

**Title:** Fluid ratcheting by oscillating channel walls with sawteeth

**Abstract:** Motions rectified by symmetry-breaking mechanisms in oscillating flows have been of great interest in microfluidics, biological locomotion, and other engineering applications. It is well known in water waves that steady streaming can be induced close to a rigid or flexible boundary due to nonlinearity and viscosity; this is known as mass transport or peristaltic pumping. This mechanism is shared by the phenomenon of ratcheting fluid in a narrow channel by vibrating channel walls lined with asymmetric sawteeth, demonstrated in a recent experiment. A theory is presented here to describe the ratcheting effects in such a channel. In a conformally transformed plane, the vorticity dynamics is analysed, yielding a solution that elucidates the essential physics of rectifying oscillatory momentum into steady flows. The geometric asymmetry renders the effects spatially biased, leading to a unidirectional component in the steady flow, i.e. net pumping. Various influences on the net pumping rate are analysed. Potential applications of the mechanism include valveless micro-pumping.

**"Pizza with the Professor" seminar series for AMS faculty and graduate students**

**Date:** Wednesday, November 19, 2014; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor Allen Tannenbaum

**Title:** Theory and Practice of Medical Imaging Analysis

**Abstract:** In this talk, we will describe some theory and practice of medical imaging analysis. This includes projects such as radiation planning in cancer therapy, traumatic brain injury, and left atrial fibrillation. Accordingly, we will describe several models of active contours for which both local (edge-based) and global (statistics-based) information may be included for various segmentation tasks. Segmentation is the process of partitioning an image into its constituent parts.

In addition to segmentation, the second key component of many medical imaging tasks is registration. The registration problem is still one of the great challenges in vision and medical image processing. Registration is the process of establishing a common geometric reference frame between two or more data sets obtained by possibly different imaging modalities. A substantial literature is devoted to registration, with numerous approaches ranging from optical flow to computational fluid dynamics. For this purpose, we propose using ideas from optimal mass transport.
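In one dimension, the optimal mass transport map between two densities reduces to the monotone rearrangement built from their cumulative distribution functions. The toy sketch below illustrates only that idea; real image registration works in 2-D/3-D with full Monge-Kantorovich formulations:

```python
import numpy as np

# Toy 1-D optimal mass transport: the optimal map is the monotone
# rearrangement F_target^{-1}(F_source(x)). Illustrative only; the
# registration setting in the talk is far more general.
rng = np.random.default_rng(1)
source = rng.normal(0.0, 1.0, 5000)    # samples from the "moving" density
target = rng.normal(3.0, 2.0, 5000)    # samples from the "fixed" density

def transport_map(x, src, tgt):
    # Empirical source CDF, then the target's inverse CDF (quantiles).
    u = np.searchsorted(np.sort(src), x) / len(src)
    return np.quantile(tgt, np.clip(u, 0.0, 1.0))

mapped = transport_map(source, source, target)
print(np.mean(mapped), np.std(mapped))  # ≈ 3.0 and ≈ 2.0: matches the target
```

Pushing the source samples through this map reproduces the target distribution, which is the 1-D analogue of deforming one image's intensity mass onto another's.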

The talk is designed to be accessible to a general applied mathematical audience with an interest in medical imaging. We will demonstrate our techniques on a wide variety of data sets from various medical imaging modalities.

**"Pizza with the Professor" seminar series for AMS faculty and graduate students**

**Date:** Wednesday, November 12, 2014; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor Yuefan Deng

**Title:** Parallel Computing for Anything Big

**Abstract:** Parallel computers with 3 million cores are capable of 18×10^15 floating-point operations per second (18 petaflops) today, and they are expected to reach 200 million cores later this decade for 1,000 petaflops. Our research considers (1) the design of interconnection networks, particularly their topologies, for effectively coupling the millions of cores in such supercomputers; (2) the mapping of super applications onto such monsters for scalable performance; and (3) the development of production applications for science and engineering. In grander terms, we study the Architectures, Algorithms, and Applications, i.e., AAA, of parallel computers.

**Date:** Wednesday, October 15, 2014; 1:00 - 2:00 PM; Math Tower, Room 1-122

**Speaker:** San-Yih Lin, Department of Aeronautics and Astronautics, National Cheng Kung University, Tainan, Taiwan

**Title:** Pressure Correction Method for Fluid-Particle Interaction and Two-Phase Flow Problems

**Abstract:** A pressure correction method coupled with a direct-forcing immersed boundary (IB) method and the volume of fluid (VOF) method is developed to simulate fluid-particle interaction and two-phase flows. This method uses a pressure correction method to solve incompressible flow fields, an IB method to handle fluid-particle interactions, and the VOF method to solve the two-phase flow. A direct forcing method is introduced in the IB method to capture particle motions. A third-order modified monotone upwind scheme for conservation laws (modified MUSCL) is used to solve the advection equation. Moreover, by applying the Gauss theorem, the formulas for computing the hydrodynamic force and torque acting on the particle are derived from a volume integral over the particle instead of an integral over the particle surface. To demonstrate the efficiency and capability of the present method, sedimentation of many spherical particles in an enclosure, the three-dimensional broken dam problem, a three-dimensional rising bubble, and three-dimensional wave impact on a tall structure are simulated. Finally, the numerical method is applied to investigate granular flow impact on a tall structure, bubble generation, and the flow of a falling ellipse.

**Date:** Friday, September 5, 2014; 3:00 - 4:15 PM; Math Tower, Seminar Room 1-122A

**Title:** The Modeling and Simulation of Turbulent Flows

**Speaker:** Foluso Ladeinde, Ph.D.

**Affiliation:** Department of Mechanical Engineering, Stony Brook University, Stony Brook, NY 11794-2300

**Abstract:** The common hierarchies of turbulent-flow modeling are discussed, including the Reynolds-Averaged Navier-Stokes (RANS) equations, Large-Eddy Simulation (LES), and Direct Numerical Simulation (DNS), with a focus on LES and, to a lesser extent, on RANS. For RANS, we briefly review the zero-, one-, two-, three-, and six-equation models. The important issues in LES are discussed, including the various types of the method, the generation of inflow conditions, the (mathematical) kernels in explicit filtering procedures, the commonly used probability density functions, the completeness of LES, and the limitations of the approach as a function of the controlling physics of the problem being analyzed. Both compressible and incompressible flows will be discussed, as will the application of high-order RANS and LES modeling to turbulent flow over a complete Boeing 747-200 commercial aircraft and to supersonic combustion in scramjets.
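The filtering operation at the heart of LES can be illustrated on a 1-D signal: a box (top-hat) filter splits the field into a resolved part and a subgrid residual. This is a concept demo only, with made-up signal content, not one of the filtering procedures from the talk:

```python
import numpy as np

# LES-style filtering sketch: a box (top-hat) filter separates a 1-D signal
# into resolved (filtered) and subgrid (residual) parts. Concept demo only;
# the signal and filter width below are illustrative choices.
n, width = 1024, 17
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.2 * np.sin(40 * x)          # large-scale + small-scale content

kernel = np.ones(width) / width
padded = np.pad(u, width, mode="wrap")        # periodic signal, so wrap the ends
u_filtered = np.convolve(padded, kernel, mode="same")[width:-width]
u_subgrid = u - u_filtered                    # what a subgrid model must represent

print(np.std(u_subgrid) < np.std(u))  # → True: the residual carries less energy
```

The subgrid residual is exactly the part of the field that an LES closure must model rather than resolve, which is why the choice of filter kernel and width matters so much in practice.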

**Short Bio:** Dr. Ladeinde received his B.Sc. degree from the Faculty of Technology, University of Ibadan (Nigeria) in 1979, with First Class Honours. He moved to the United States in 1981. Since then, he has earned several post-graduate degrees at Cornell University, Ithaca, New York, including his Ph.D. in Mechanical and Aerospace Engineering in 1988. He worked for several years as a high-performance computing (HPC) software developer before joining the faculty of the State University of New York at Stony Brook (Stony Brook University) in 1991 as an Assistant Professor of Mechanical Engineering, where he has since been promoted and tenured. Dr. Ladeinde is a Visiting Professor and a Faculty Fellow of the United States Air Force and the United States National Research Council, and a Visiting Scientist at the United States Department of Energy's Brookhaven National Laboratory. He is a Lifetime Member and an Associate Fellow of the American Institute of Aeronautics and Astronautics (AIAA), a Fellow of the American Society of Mechanical Engineers (ASME), a Lifetime Member of the American Physical Society (APS), and a Member of the Society for Industrial and Applied Mathematics (SIAM). Dr. Ladeinde chaired the External Review Board of the NASA Center for Aerospace Research at North Carolina (NC) A&T University in Greensboro, NC. Dr. Ladeinde has produced over 250 publications in internationally recognized archival journals and peer-reviewed conference proceedings. He served as an Associate Editor (2009-2013) of the AIAA Journal, the flagship journal of the AIAA, as well as a Book Reviewer for Cambridge University Press in the area of theoretical fluid dynamics. Dr. Ladeinde's 2010 paper on scramjet computer simulation won an AIAA Best Paper Award. Dr. Ladeinde was a Visiting Scholar at Tsinghua University in Beijing, China, during the summer of 2012, and has been a Weekly Columnist on Information and Communication Technology for the Nigeria-based international Daily Trust newspaper since 2011.

**Monday, March 4, 2013; 2:00 - 3:00 PM; Location: S-240 Mathematics Tower**

**Speaker:** Allen Tannenbaum, Comprehensive Cancer Center/ECE, UAB

**Title:** Geometric PDEs and Optimal Mass Transport with Applications to Medical Image Computing

**Abstract:** Geometric PDEs and optimal mass transport ideas have recently been introduced into computer vision and medical image computation. We will describe their use for three of the key problems in vision: enhancement, segmentation, and registration. First, all real images have noise, and many times the noise must be selectively removed as a pre-processing step. We will describe several filters based on curvature-driven flows that denoise the image while preserving key features such as edges.

We will then move on to segmentation, that is, the partitioning of an image into its "key" components. We describe several models of active contours for which both local (edge-based) and global (statistics-based) information may be included for various segmentation tasks. We will indicate how statistical estimation and prediction ideas (e.g., particle filtering) may be naturally combined with this methodology. A novel directional active contour model based on the Finsler metric, which may be employed for white matter brain tractography, will be considered. Very importantly, we will describe some ideas from feedback control that may be used to close the loop around, and robustify, the typical open-loop segmentation algorithms in computer vision.

Finally, we treat image registration. The registration problem (especially in the elastic case) is still one of the great challenges in vision and medical image processing. Registration is the process of establishing a common geometric reference frame between two or more data sets obtained by possibly different imaging modalities. Registration has a substantial literature devoted to it, with numerous approaches ranging from optical flow to computational fluid dynamics. For this purpose, we propose using ideas from optimal mass transport (Monge-Kantorovich). The optimal mass transport approach has strong connections to optimal control, and can be the basis for a geometric observer theory for tracking in which shape information is explicitly taken into account. Finally, we will describe how mass transport ideas may be utilized to generate hexahedral meshes with applications to problems in biomechanics.

The talk is designed to be accessible to a general mathematical/engineering/computer science audience.

**Friday, August 16, 2013; 1:00PM - 2:00PM; Mathematics Tower, Room 1-122**

**Speaker:** Yu-Chen Shu, Assistant Professor; Department of Mathematics, National Cheng Kung University

**Title:** Accurate Gradient Approximation for Complex Interface Problems in 3D by an Improved Coupling Interface Method

**Abstract:** Most elliptic interface solvers become complicated for complex interface problems at "exceptional points", where there are not enough neighboring interior points for high-order interpolation. Such complications increase especially in three dimensions, and the solvers are usually reduced to low-order accuracy there. In this talk, we classify these exceptional points and propose two recipes to maintain the order of accuracy. The proposed recipes improve our previous method, the coupling interface method, but the idea is also applicable to other interface solvers. The main idea is as follows. The goal is to have at least a first-order approximation of the second-order derivatives at the exceptional points. Recipe 1 uses the finite difference approximation of the second-order derivatives at a nearby interior grid point, if available. Recipe 2 flips domain signatures and introduces a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface; the original state is recovered by post-processing using nearby states and the jump conditions. The choice of recipe is determined by the classification of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computational domain. Careful numerical tests show second-order accuracy of the gradients for complex interface problems in two and three dimensions, including a real molecule (1D63) composed of hundreds of atoms.
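The order-of-accuracy bookkeeping behind such recipes is usually verified with a grid-refinement study. The sketch below runs one for the standard centered second-derivative stencil on a smooth function; it is a generic illustration of the verification procedure, not the coupling interface method itself:

```python
import math

# Generic grid-refinement study: estimate the convergence order of the
# centered second-difference approximation to u''(x) for u(x) = sin(x).
# Illustrates order-of-accuracy checks, not the coupling interface method.
def second_diff_error(h, x=1.0):
    u = math.sin
    approx = (u(x - h) - 2 * u(x) + u(x + h)) / h**2
    return abs(approx - (-math.sin(x)))   # exact second derivative is -sin(x)

# Halving h should cut the error by ~4 for a second-order stencil.
e1, e2 = second_diff_error(1e-2), second_diff_error(5e-3)
order = math.log(e1 / e2) / math.log(2.0)
print(round(order, 1))  # → 2.0
```

The same log-ratio-of-errors diagnostic, applied to the solution and its gradient near the exceptional points, is how uniform second-order accuracy claims like the one in the abstract are typically demonstrated.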

**Friday, July 26, 2013; 2:00-3:00PM; Mathematics Tower, Room 1-122**

**Speaker:** Baolian Cheng, Staff Scientist, Los Alamos National Laboratory

**Title:** Thermonuclear Ignition Criterion and Scaling Laws for ICF Capsules

**Abstract:** We have developed an analytical physics model from fundamental physics principles and used the model to derive a thermonuclear ignition criterion and implosion energy scaling laws applicable to inertial confinement fusion (ICF) capsules. First, we derived a general thermonuclear burn criterion in terms of the areal density and temperature of the hot fuel. The newly derived thresholds include the minimum required hot fuel pressure, mass, areal density, burn fraction, and empirical ignition threshold factor (ITFX). We compared our criteria with existing theories, code simulations, and NIF experimental data. Our ignition thresholds are higher than the current ignition thresholds and explain the NIF data better than existing theories and simulations. Differences between this model and existing models will be discussed. Second, we analytically derived the scaling laws for implosion and the ignition threshold factor (ITF) from first principles. The scaling laws relate the fuel pressure and the minimum implosion energy required for ignition to the peak implosion velocity and the equation of state of the pusher and the hot fuel. When a specific cold adiabat path is used for the cold fuel, the new scaling laws recover the current ITF dependence on the implosion velocity, but when a hot adiabat path is chosen, the model agrees with the NIF data. Model predictions for the ratios of mass, aspects, volumes, areal densities, and energies of the hot spot to pusher and to total fuel and predictions of the hot fuel pressure and neutron yields are in good agreement with the NIF experiments. The newly derived ITF shows a much stronger dependence on both equation of state and implosion velocity than the simulations-based ITF.

**Thursday and Friday, August 23-24, 2012, 8:30 a.m. - 4:00 p.m.; Location: S240 Mathematics Tower**

**Speaker:** Dr. Tulin Kaman, Dr. HyunKyung Lim, Gaurish Telang

**Title:***High Performance Computing Workshop*

**Abstract:** This workshop will provide New York Blue supercomputer users with valuable information on services and resources, as well as technical details of the IBM Blue Gene architecture.

**Wednesday, May 16, 2012, 1:00 - 2:00PM, Math Tower, Seminar Room 1-122**

**Speaker:** Z.J. Wang, Iowa State University

**Title:** A Unifying Discontinuous Formulation for the Navier-Stokes Equations on Mixed Grids

**Abstract:**

A recent breakthrough in computational fluid dynamics (CFD) is the emergence of adaptive high-order (order > 2) methods. The leader is the discontinuous Galerkin (DG) method, among several others including the multi-domain spectral, spectral volume (SV), and spectral difference (SD) methods. Recently, the correction procedure via reconstruction (CPR) formulation was developed to unify all of these methods under a single framework. These methods share the following properties: k-exactness on arbitrary grids, and compactness, which is especially important for parallel computing on clusters of CPUs and GPUs. In this talk, I will describe the CPR formulation and explain its connection to the DG, SV, and SD methods. In addition, the application of high-order methods to compute transitional flow over an SD7003 wing and flow over flapping wings will be presented. The talk will conclude with several remaining challenges in research on high-order methods.

**Short Bio of Z.J. Wang**, Department of Aerospace Engineering and CFD Center, Iowa State University, Ames, IA 50011

Z.J. Wang, Wilson Professor of Engineering and Director of the Computational Fluid Dynamics (CFD) Center at Iowa State University (ISU), received his Ph.D. in Aerospace Engineering from the University of Glasgow in 1990. He then conducted post-doctoral research in Glasgow and Oxford before joining CFD Research Corporation in Huntsville, Alabama in 1991 as a Research Engineer, later becoming a Technical Fellow. In 2000, he joined the faculty of Michigan State University as an Associate Professor of Mechanical Engineering. In 2005 he returned to Aerospace Engineering at ISU. He has been active in CFD research since the early 1990s, with over 160 journal and conference publications. His research areas include adaptive high-order methods for the Navier-Stokes equations; algorithm and flow solver development for structured and unstructured, overset, and adaptive Cartesian grids; computational aeroacoustics and electromagnetics; large eddy simulation of transitional and bio-inspired flow problems; high-performance computing on CPU and GPU clusters; and geometry modeling and grid generation. He was an invited lecturer of the von Karman Institute Lecture Series on High-Order CFD Methods in 2005 and 2008. He is an Associate Fellow of AIAA and an Associate Editor of the AIAA Journal. He was awarded the degree of Doctor of Science in Engineering by the University of Glasgow in 2008, and a Distinguished Service Award by the Fluid Dynamics Technical Committee of AIAA in 2010.

**Wednesday, November 30, 2011; 1:00 - 2:00 PM; Location: Seminar Room 1-122**

**Speaker:** Professor Hyunsun Lee, Department of Mathematics, Florida State University, Tallahassee, FL, USA

**Title:**

**Abstract:** An acoustic analogy based on decomposing the Lighthill source term into ten sub-terms is discussed in light of a high-fidelity numerical simulation of a subsonic jet, at Mach number 0.9 and Reynolds number 100,000, with a baseline nozzle (SMC000) as a benchmark problem. These sub-terms involve the density, velocity, vorticity, and dilatation fields and their reciprocal non-linear interactions. To understand the aerodynamic noise generation mechanism, intrinsic links between turbulence and the emitted sound waves, such as cross-correlation functions, are necessary. This causality method is applied directly to the LES data to identify fundamental noise sources by calculating the cross-correlation between each spatial sub-term in the near field and the acoustic pressure fluctuation at a far-field position, showing its contribution to noise generation. Three principal noise production terms, related to the Laplacian of the turbulence kinetic energy and the divergence of the Lamb vector, are identified and interpreted, showing encouraging agreement with previous predictions. As future work, the observation can be extended to the SMC006 chevron jet nozzle configuration, possibly allowing its structure to be characterized by comparing correlation profiles with those of SMC000. Furthermore, this study is expected to lead to better understanding and prediction of other sound control devices.
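The cross-correlation diagnostic used in such causality studies can be sketched with two synthetic signals. The signals and delay below are invented for illustration; the talk applies the same idea to LES source terms and far-field pressure:

```python
import numpy as np

# Cross-correlation between a synthetic "source" signal and a delayed, noisy
# "far-field" copy of it: the lag of the correlation peak recovers the delay.
# Purely illustrative; not the Lighthill sub-term data from the talk.
rng = np.random.default_rng(2)
n, delay = 2000, 37
source = rng.standard_normal(n)
farfield = np.roll(source, delay) + 0.3 * rng.standard_normal(n)

# Mean-removed full cross-correlation; lags run from -(n-1) to n-1.
corr = np.correlate(farfield - farfield.mean(), source - source.mean(), mode="full")
lags = np.arange(-n + 1, n)
print(lags[np.argmax(corr)])  # → 37, the imposed propagation delay
```

In the jet-noise setting, a pronounced correlation peak between a given near-field sub-term and the far-field pressure, at a lag consistent with the acoustic propagation time, is what flags that sub-term as a noise source.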

**Wednesday, Nov. 2, 2011 Time: 1-2pm Location: Seminar Room 1-122**

**Speaker:** Dr. Rajeev Jaiman, Director of CFD Development at Altair Engineering, Inc.

**Title:** Stable and Accurate Techniques for Transient Multiphysics Simulations

**Abstract:** This presentation summarizes recent results obtained in the development of a novel numerical treatment of coupled multiphysics problems, with emphasis on the simulation of transient fluid-structure interaction (FSI) applications. The talk will begin with example problems ranging from the propagation of shocks and blast waves along deformable structures, flutter instability, and aeroelasticity-driven failure events in solid propellant rockets to offshore marine risers and pipelines, large scale wind turbines, nuclear energy, bio-medical applications, and many more. The talk will focus on two aspects of the on-going research in the area of multiphysics simulations: (i) the development of an accurate scheme used to transfer fluid-induced loads across non-matching discretized interfaces; and (ii) the formulation and implementation of new stable and accurate coupling schemes between fluid and structural solvers. Beyond a presentation of the load transfer and coupling schemes, the talk will include results of a detailed comparative study between the proposed methods and existing schemes. These comparative assessments are based on a set of FSI applications of increasing complexity involving flat and curved fluid interfaces. The talk will conclude with a brief report on the successful applications of the new methods and their impact on the current state of the art in computational mechanics.

**Friday, October 21, 2011, Time: 11:00-12:00PM Location: Seminar Room 1-122**

**Speaker:** Prof. Falai Chen, University of Science and Technology of China

**Title:** Splines over T-meshes and Applications in Geometric Modeling and Numerical Analysis

**Abstract:** In this talk, I will introduce the notion of splines over T-meshes and present some recent advances on the theory and applications of splines over T-meshes. The applications in Geometric Modelling and Isogeometric Analysis (IGA) are emphasized.

**Wednesday, September 28, 2011, Time: 12:00-1:00PM. Location: AMS 1-122**

**Speaker:** The Department of Mathematics and Statistics, McGill University

**Title:** Discretizing Solutions vs. Discretizing Operators

**Abstract:** The linear advection equation possesses one of the simplest solutions of any PDE, yet its numerical solution is still challenging. Interestingly, most numerical approaches ignore the structure of the solution and focus on discretizing the operator instead. I will compare and contrast the two points of view and provide some useful insight for devising methods to solve more complicated problems. One such problem is Poisson's equation with internal jumps. This equation arises in the computation of fluid flows with multiple components (e.g., water and air). I will illustrate the various key points by presenting numerical solutions of fluid systems including drops, bubbles, and soap films.

**Friday, September 23, 2011, Time: 10:00AM-11:30AM. Location: AMS 1-122**

**Speaker:** Prof. Hong Qian, University of Washington

**Title:** Delbruck-Gillespie Processes: Nonlinear Stochastic Dynamics, Phase Transition, Thermodynamics and Analytical Mechanics

**Abstract:** Agent-based population dynamics articulates a distribution in the behavior of individuals and considers deterministic behavior at the population level as an emergent phenomenon. Using chemical species inside a small aqueous volume as an example, we introduce the Delbruck-Gillespie birth-and-death process for chemical reaction dynamics. Using this formalism, we (1) illustrate the relation between nonlinear saddle-node bifurcation and first-order phase transition; (2) introduce a thermodynamic theory for entropy and entropy production; and (3) show how an analytical mechanics (i.e., Lagrangian and Hamiltonian systems) arises and the meaning of kinetic energy. We suggest the inter-attractoral stochastic dynamics as a possible mechanism for isogenetic variations in cellular biology.

**Friday, March 11, 2011, 1:00 pm, AMS Seminar Room 1-122A**

**Title:** Numerical simulations of ideal MHD and applications in astrophysics

**Speaker:** Dr. Christian Klingenberg, Institute of Applied Mathematics, Würzburg University, Germany

**Abstract**: We introduce a finite volume code for ideal MHD. Its ingredients are an approximate Riemann solver, extension to multiple dimensions via a Powell source term, and a positivity-preserving second-order scheme. We have extensively tested our code. We then show driven turbulence simulations applied to star formation.

Wednesday, Feb. 9, 2011, 1:00 pm, AMS Seminar room 1-122A

Dr. Michael Siegel, New Jersey Institute of Technology

**Title:** A hybrid numerical method for fluid interfaces with soluble surfactant

**Abstract:** We address a significant difficulty in the numerical computation of fluid interfaces with soluble surfactant that occurs in the practically important limit of large bulk Peclet number Pe. At the high values of Pe in typical fluid-surfactant systems, there is a transition layer near the interface in which the surfactant concentration varies rapidly. Accurately resolving this layer is a challenge for traditional numerical methods but is essential to evaluate the exchange of surfactant between the interface and bulk flow. We present recent work that uses the slenderness of the layer to develop a fast and accurate "hybrid" numerical method that incorporates a separate analysis of the dynamics in the transition layer into a full numerical solution of the interfacial free boundary problem.

Tuesday, February 1, 2011, 1:00 pm, AMS Seminar Room 1-122A

**Title:** The FLASH Code Architecture and Abstractions

Dr. Anshu Dubey, Flash Center at University of Chicago

FLASH is a publicly available high performance application code that has evolved into a modular, extensible software system from a collection of unconnected legacy codes. The current version, FLASH 3, consists of interoperable modules that can be combined to generate different applications. The FLASH architecture allows multiple alternative implementations of its components to co-exist and interchange with each other, resulting in greater flexibility. Further, a simple and elegant mechanism exists for customization of code functionality without the need to modify the core implementation of the source. A built-in unit test framework providing verifiability, combined with a rigorous software maintenance process, allows the code to operate simultaneously in the dual mode of production and development.

This presentation will give an overall view of the code architecture and capabilities, and the abstractions that enable the extensibility. In addition, there will be a discussion of the interaction between the infrastructure and the solvers, highlighting the challenges of running on leadership class machines.

**Wednesday, Dec. 1, 2010, 1:00 pm, AMS Seminar room 1-122A**

Dr. John Grove, Los Alamos National Laboratory

**Title:** So You Think You Want to Use a "Real" Equation of State

**Abstract:** We discuss the algorithmic, numerical, and practical considerations needed to use general equations of state (EOS) in a hydrodynamic code. Such models raise a host of issues that must be addressed in a useful code. These include the implementation of the EOSs (e.g., analytic versus tabular), how material stiffness will affect the hydro solver, what happens when a material leaves the domain of its EOS, and how mixtures of materials can be treated.
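
As a toy illustration of the analytic-versus-tabular distinction (not code from the talk), the sketch below defines a hypothetical `pressure(rho, e)` interface with an analytic ideal-gas EOS and a tabulated EOS that raises an error when a query leaves the table's domain, one of the failure modes the abstract mentions. All class and parameter names are invented for this example.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

class IdealGasEOS:
    """Analytic EOS: p = (gamma - 1) * rho * e."""
    def __init__(self, gamma=1.4):
        self.gamma = gamma
    def pressure(self, rho, e):
        return (self.gamma - 1.0) * rho * e

class TabularEOS:
    """Tabulated p(rho, e) with multilinear interpolation; raises
    ValueError if the query leaves the table's domain."""
    def __init__(self, rho_grid, e_grid, p_table):
        self._interp = RegularGridInterpolator(
            (rho_grid, e_grid), p_table, bounds_error=True)
    def pressure(self, rho, e):
        return self._interp((rho, e)).item()

# Build a table by sampling the analytic EOS, so the two should agree
# (p is bilinear in rho and e, so the interpolation is exact here).
rho_g = np.linspace(0.1, 10.0, 50)
e_g = np.linspace(0.1, 5.0, 50)
P = (1.4 - 1.0) * rho_g[:, None] * e_g[None, :]
tab = TabularEOS(rho_g, e_g, P)
gas = IdealGasEOS(1.4)
print(tab.pressure(1.0, 2.0), gas.pressure(1.0, 2.0))  # both 0.8
```

A query such as `tab.pressure(20.0, 2.0)` falls outside the tabulated density range and raises, which is exactly the "material leaves the domain of its EOS" situation a production code must handle more gracefully.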

Wednesday, Oct. 27, 2010, 1:00 pm, AMS Seminar room 1-122A

**Title:** Phase-field models for multiphase complex fluids: modeling, numerical analysis and simulations

**Speaker**: Jie Shen, Purdue University

**Abstract**: I shall present an energetic variational phase field model for multiphase incompressible flows which leads to a coupled nonlinear system consisting of a phase equation and the Navier-Stokes equations. We shall pay particular attention to situations with large density ratios, as they lead to formidable challenges in both analysis and simulation.

I shall present efficient and accurate numerical schemes for solving this coupled nonlinear system, in many cases prove that they are energy stable, and show ample numerical results (an air bubble rising in water, a Newtonian bubble rising in a polymeric fluid, etc.) which not only demonstrate the effectiveness of the numerical schemes, but also validate the flexibility and robustness of the phase-field model.

Wednesday, Oct. 13, 2010, 1:30 pm, AMS Seminar room 1-122A

Dr. Shengtai Li, Los Alamos National Laboratory

**Title:** Higher-Order Divergence-Free Methods for MHD Flows on Overlapping Grids

**Abstract:** Magnetic fields have an intrinsic divergence-free property, and it is essential to preserve this property in numerical magnetohydrodynamics (MHD) simulations. However, it is difficult to achieve higher than second-order accuracy with conventional divergence-free finite-volume methods. In this talk I will present a higher-order (>=3) divergence-free method for MHD flows on overlapping grids. Our method uses a central scheme on an overlapping grid: it uses the solutions on a dual mesh, whose vertices consist of the centroids of the primal mesh, and by solving for the solutions on the dual and primal meshes simultaneously, we derive a divergence-free numerical method for MHD of any high order. We also use the dual-mesh information to develop a more compact scheme that has better resolution and accuracy than using only the primal mesh. If there is enough time, I will also present an efficient method to preserve the divergence-free condition on an adaptive mesh refinement grid.
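
The classical second-order staggered (constrained-transport) identity that such higher-order methods generalize can be checked in a few lines: initializing face-centered fields from a node-centered vector potential makes the discrete cell divergence vanish to round-off. This sketch shows only that textbook background, not the overlapping-grid method of the talk.

```python
import numpy as np

# Face-centered B from a node-centered vector potential Az:
# Bx = dAz/dy on x-faces, By = -dAz/dx on y-faces. With these
# finite differences the discrete cell divergence cancels exactly.
nx, ny = 32, 32
dx, dy = 1.0 / nx, 1.0 / ny
x = np.linspace(0.0, 1.0, nx + 1)
y = np.linspace(0.0, 1.0, ny + 1)
X, Y = np.meshgrid(x, y, indexing="ij")
Az = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)  # arbitrary smooth potential

Bx = (Az[:, 1:] - Az[:, :-1]) / dy       # shape (nx+1, ny), on x-faces
By = -(Az[1:, :] - Az[:-1, :]) / dx      # shape (nx, ny+1), on y-faces

# Cell-centered discrete divergence: telescopes to zero by construction.
divB = (Bx[1:, :] - Bx[:-1, :]) / dx + (By[:, 1:] - By[:, :-1]) / dy
print(np.max(np.abs(divB)))  # round-off level
```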

Thursday, September 23, 2010, 12:00 pm, Math Tower Room S-240

Dr. Xiaoye Li, Lawrence Berkeley National Laboratory

**Title:** Factorization-based Sparse Solvers and Preconditioners

**Abstract:** Efficient solution of large-scale, ill-conditioned, and highly indefinite algebraic equations often relies on high quality preconditioners together with iterative solvers. Because of their robustness, factorization-based algorithms often play a significant role in developing scalable solvers.

We present our recent work on using state-of-the-art sparse factorization techniques to build domain-decomposition type direct/iterative hybrid solvers and efficient incomplete factorization preconditioners. In addition to algorithmic principles, we also address many practical aspects that need to be taken into consideration in order to deliver high speed and robustness to the users of today's sophisticated high performance computers.
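
A minimal example of the general idea, a factorization used as a preconditioner for an iterative solver, using SciPy's incomplete LU on a model Poisson matrix rather than the specialized solvers discussed in the talk:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# 2D Poisson matrix as a stand-in for a harder system.
n = 32
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()
b = np.ones(A.shape[0])

# Incomplete LU factorization, wrapped as a preconditioner operator.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, ilu.solve)

x, info = spla.gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))  # info == 0 on convergence
```

Dropping `M` makes GMRES take far more iterations on the same matrix, which is the point the abstract makes about factorization-based preconditioning.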

**Wednesday, September 8, 2010, 1:00 pm, AMS Seminar Room (Math Tower 1-122A)**

**Title**: Sensitivity Analysis, Uncertainty Quantification and Multiscale Modeling of Complex Systems

Dr. Guang Lin, Pacific Northwest National Laboratory

**Abstract**: Experience suggests that uncertainties often play an important role in quantifying the performance of complex systems. Uncertainty-based optimization, in particular, allows for optimizing a large set of objectives that may be varying in time as the mission requirements of a specific design change in time. Therefore, uncertainty needs to be treated as a core element in modeling, simulation, and optimization of complex systems. In this talk, a new formulation for quantifying uncertainty in the context of aerodynamic problems will be discussed, with extensions to other fields of mechanics and to dynamical systems. An integrated simulation framework will be presented that quantifies both numerical and modeling errors in an effort to establish "error bars" in CFD. In particular, a review of high-order methods (Spectral Elements, Discontinuous Galerkin, and WENO) will be presented for deterministic flow problems. Subsequently, stochastic formulations based on Galerkin and collocation versions of the generalized Polynomial Chaos (gPC), and some stochastic sensitivity analysis techniques, will be discussed in some detail. Several specific examples on the stochastic piston problem, lift enhancement due to random roughness, and stochastic modeling of ion-electron two-fluid plasma flow will be presented to illustrate the main idea of our approach.
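
The non-intrusive collocation idea behind gPC can be illustrated on a scalar toy model: statistics of a random output are computed from a handful of deterministic model evaluations at Gaussian quadrature nodes. This one-dimensional stand-in (a made-up model `u(xi) = exp(xi)`, not the flow problems of the talk) shows the mechanics:

```python
import numpy as np

def collocation_mean(f, order):
    """Mean of f(xi) with xi ~ N(0, 1) by Gauss-Hermite collocation.
    The 'model' f is evaluated only at the quadrature nodes, which is
    what makes the approach non-intrusive."""
    x, w = np.polynomial.hermite.hermgauss(order)
    # Physicists' Hermite nodes use weight exp(-x^2); rescale by
    # sqrt(2) and 1/sqrt(pi) to get the standard normal measure.
    return np.sum(w * f(np.sqrt(2.0) * x)) / np.sqrt(np.pi)

# Toy model u(xi) = exp(xi); the exact mean is exp(1/2).
approx = collocation_mean(np.exp, 10)
print(approx, np.exp(0.5))
```

With only 10 model evaluations the quadrature already matches the exact mean to many digits, reflecting the spectral convergence that motivates gPC for smooth response surfaces.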

In catalytic reactor applications there is often a need to accurately model multiscale reactive transport across several orders of magnitude in space and time scales. A multiple-scale model in both time and space can overcome this difficulty and provide a unified description of reactive transport in a catalytic reactor from the nanoscale to larger scales. We propose a new multiscale formalism based upon a hybrid model, which combines kinetic Monte Carlo (KMC) with a continuum model. Thermal diffusion and mass transport of different species are solved in the continuum model. A non-iterative coupling of the different-scale models will be presented, which makes the approach more efficient than most existing hybrids and amenable to application to complex problems. A simple one-dimensional example will be demonstrated.

**Friday, March 26, 2010, 1:30 - 2:30pm, Math Tower 1-122**

Jian Du, Mathematics Department, University of Utah

**Title:** A Computational Method for Simulating Two-Phase Gel Dynamics

In this talk, I will present a parallel computational algorithm for simulating models of gel dynamics in which the gel is described by two phases, a networked polymer and a fluid solvent. The models consist of transport equations for the two phases, two coupled momentum equations, and a volume-averaged incompressibility constraint. Multigrid with a Vanka-type box relaxation scheme is used as the preconditioner for the Krylov subspace solver (GMRES) applied to the momentum and incompressibility equations. Through numerical experiments on a model problem, the efficiency, robustness, and scalability of the algorithm are illustrated.

**Friday, March 19, 2010, 1:30 - 2:30pm, Math Tower 1-122**

Tianshi Lu, Mathematics and Statistics Department, Wichita State University

**Title**: Theory and Computation of the Grad-Shafranov Equation

**Abstract:** In this talk, I will review the theory and computation of the Grad-Shafranov (GSh) equation, which describes toroidal magnetohydrodynamic equilibrium. The equation is a Poisson-type equation whose source terms are unknown functions of the potential. Depending on the physical measurements and models, the equilibrium condition can also be posed as a nonlinear eigenvalue problem or a free boundary problem. The GSh equation is often studied in its simpler form in planar geometry. The solution is nonunique in radially symmetric geometry, while the solution is unique in the presence of a corner. For physically more relevant smooth domains, the uniqueness of the solution remains an open question. A class of analytical solutions to the GSh equation known as Soloviev solutions can be used as benchmark tests for proposed numerical solvers.

On the computational side, the most popular method to solve the GSh equation is to iteratively adjust a few parameters characterizing the toroidal current density profile. For a given profile, the GSh equation can be solved by direct or Newton-type iteration. The use of advancing flux coordinates can improve the accuracy of the solution. A boundary integral equation approach based on Green's functions for polynomial sources was recently introduced. A "front tracking" method for the solution of the GSh equation will be proposed in the talk. An alternative method using flux coordinates will also be described. The data from motional Stark effect (MSE) measurements, which will be part of ITER diagnostics, will significantly change the reconstruction process of the current density profile. Its effect will be discussed too.
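
As a quick sanity check of the kind of benchmark mentioned above, a simple member of the Soloviev family can be verified symbolically. The particular polynomial form below is chosen for brevity and assumes the linear source case Delta* psi = a R^2 + b:

```python
import sympy as sp

R = sp.symbols("R", positive=True)
z = sp.symbols("z", real=True)
a, b = sp.symbols("a b")

# Candidate Soloviev-type solution of the Grad-Shafranov equation
# with source a*R**2 + b.
psi = a * R**4 / 8 + b * z**2 / 2

# Grad-Shafranov operator: Delta* = R d/dR (1/R d/dR) + d^2/dz^2.
gs = R * sp.diff(sp.diff(psi, R) / R, R) + sp.diff(psi, z, 2)
print(sp.simplify(gs - (a * R**2 + b)))  # -> 0, so psi solves the equation
```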

**Wednesday, March 17, 2010, 1:00 - 2:00pm, Math Tower 1-122**

Jinjie Liu, Department of Mathematical Sciences, Delaware State University

**Title:** The Overlapping Yee FDTD Method on Nonorthogonal Grids

**Abstract:** We present an overlapping Yee (OY) method for solving time-domain Maxwell's equations on non-orthogonal grids. The OY method is a direct extension of the Finite-Difference Time-Domain (FDTD) method (Yee's scheme) to irregular grids, and it overcomes the late-time instability of previous FDTD algorithms on non-orthogonal grids. When a material interface is present, the diagonal split-cell model is applied to achieve better accuracy. Numerical simulations of scattering problems and optical force computations will be presented.
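
For background, the classical Yee scheme that the OY method extends can be sketched in one dimension (normalized units with c = 1; this is the textbook orthogonal-grid scheme, not the overlapping method of the talk):

```python
import numpy as np

# 1D Yee scheme: Ez at integer nodes, Hy at half nodes, leapfrog
# update staggered in both space and time.
nx, steps = 200, 150
dx = 1.0
dt = 0.5 * dx                  # Courant number 0.5, within the CFL limit
Ez = np.zeros(nx)
Hy = np.zeros(nx - 1)
i = np.arange(50, 60)
Ez[i] = np.exp(-0.5 * ((i - 55) / 2.0) ** 2)  # initial Gaussian pulse

for _ in range(steps):
    Hy += dt / dx * (Ez[1:] - Ez[:-1])        # H half-step behind E
    Ez[1:-1] += dt / dx * (Hy[1:] - Hy[:-1])  # ends held at 0 (PEC walls)

print(np.max(np.abs(Ez)))  # stays bounded: stable at this Courant number
```

The pulse splits into two counter-propagating waves and the fields remain bounded; on a distorted, non-orthogonal mesh the naive analogue of this update is exactly what develops the late-time instability the abstract refers to.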

**Friday, March 12, 2010, 1:30 - 2:30pm, Math Tower 1-122**

Xinfeng Liu, Department of Mathematics, University of South Carolina

**Title**: Computational studies for spatial dynamics of cell signaling with localized scaffold

**Abstract:** The specificity of cellular responses to receptor stimulation is encoded by the spatial and temporal dynamics of downstream signaling networks. In many cases, spatially localized scaffold proteins that bind and organize multiple proteins into complexes have emerged as essential factors in shaping the quantitative response behavior of a signaling pathway. Through studying various scaffold models, I will show novel regulation induced by the scaffold's spatial location and switch-like responses due to the scaffold. To efficiently compute the models, I shall introduce a new class of fast numerical algorithms incorporated with adaptive mesh refinement techniques for solving stiff systems with spatial dynamics in complex domains.

**Friday, March 5, 2010, 1:30 - 2:30pm, Math Tower 1-122**

Marc Laforest, Department of Mathematics and Industrial Engineering

École Polytechnique de Montréal

**Title**: An adaptive version of Glimm's scheme

**Abstract**: We describe a local error estimator for Glimm's scheme for hyperbolic systems of conservation laws and use it to replace the usual random choice in Glimm's scheme by an optimal choice. As a by-product of the local error estimator, the procedure provides a global error estimator that is shown numerically to be a very accurate estimate of the error in L^1(\mathbb{R}) for all times. Although there is partial mathematical evidence for the proposed error estimator, at this stage it must be considered ad hoc. Nonetheless, the error estimator is simple to compute, relatively inexpensive, free of adjustable parameters, and at least as accurate as other existing error estimators. Numerical experiments in 1-D for Burgers' equation and for Euler's system are performed to measure the asymptotic accuracy of the resulting scheme and of the error estimator.
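
For a scalar law, the random-choice step that the talk proposes to replace with an optimal choice looks like the following staggered sketch for Burgers' equation (the error estimator and the adaptive selection of the sampling point are not shown; all function names are ours):

```python
import numpy as np

def riemann_burgers(ul, ur, xi):
    """Self-similar solution u(x/t = xi) of the Burgers Riemann problem."""
    if ul > ur:                       # shock with Rankine-Hugoniot speed
        s = 0.5 * (ul + ur)
        return ul if xi < s else ur
    if xi <= ul:                      # rarefaction fan otherwise
        return ul
    return ur if xi >= ur else xi

def glimm_step(u, dx, dt, theta):
    """One staggered Glimm step: sample each interface Riemann solution
    at the randomly chosen point theta in (0, 1)."""
    xi = (theta - 0.5) * dx / dt
    return np.array([riemann_burgers(u[i], u[i + 1], xi)
                     for i in range(len(u) - 1)])

rng = np.random.default_rng(0)
u = np.where(np.linspace(0.0, 1.0, 101) < 0.5, 1.0, 0.0)  # shock data
dx = 0.01
dt = 0.4 * dx / np.max(np.abs(u))                          # CFL < 1/2
for _ in range(40):
    u = glimm_step(u, dx, dt, rng.random())
print(u.min(), u.max())  # maximum principle: values stay in [0, 1]
```

Because each new cell value is sampled from an exact Riemann solution, the shock stays perfectly sharp; the statistical quality of the sampling sequence is exactly what the optimal-choice modification targets.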

**Friday, February 19, 2010, 1:30 - 2:30 pm, Math Tower 1-122**

Dr. Tong Fang, Manager of Adaptive Techniques

Real-Time Vision and Modeling Department, Siemens Corporate Research, Inc.

3D Geometric Modeling for Direct Digital Manufacturing

Direct Digital Manufacturing is one of the hot topics in industry. It is a manufacturing process which produces physical parts directly from 3D data using additive fabrication techniques; it is also called additive manufacturing, layered manufacturing, or 3D printing. In this talk, a 3D geometric modeling application for the digital manufacturing of hearing aids will be introduced. In addition, some medical applications of 3D geometric modeling technologies will be discussed.

Bio: Dr. Tong Fang received the Ph.D. degree in the area of image processing from Rutgers University in 2000. He also received his Bachelor's degree in E.E. from Hefei University of Technology, China in 1988, and three Master's degrees in Management Science (1992), Industrial Engineering (1997), and Computer Engineering (1999) from the University of Science & Technology of China and Rutgers University, respectively. At Siemens Corporate Research, he currently leads the Adaptive Techniques R&D Program and the Real Time Systems and Optimization Competence Group to conduct research and development in the fields of computer vision, industrial and medical image processing, pattern recognition, 3D geometric modeling, and visualization. He has 11 US patents and 6 international patents awarded, 40+ papers published, and 50+ patents pending.

**Wednesday, December 2, 2009, 10:30 - 12:00, Math Tower 1-122**

Valmor de Almeida, Oak Ridge National Laboratory

**Title:** Challenges for Modeling and Simulation of Solvent Extraction in Nuclear Fuel Reprocessing

**Abstract:** Solvent extraction is a central process in spent nuclear fuel reprocessing. This talk will describe on-going modeling and simulation work addressing the principal length and time scales necessary for developing a predictive computational capability. Approaches for modeling the plant-level, unit operation, and molecular scales will be discussed, and a path forward presented for a modern, scientifically based simulation method. This work is prompted by the US government plan to expand the energy portfolio of the nation, including nuclear energy, to reduce the use of fossil fuels. The DOE Office of Nuclear Energy has recently announced (http://www.ne.doe.gov) the Hub for Modeling and Simulation, which confirms the interest in using simulation tools to expedite the expansion of nuclear energy capabilities.

**Wednesday, October 28, 2009, 9:30 am, AMS Seminar Room 1-122**

**Speaker**: Min Zhou, Rensselaer Polytechnic Institute

**Title**: Petascale Adaptive Computational Fluid Dynamics

**Abstract**: In this study, we identify and resolve several bottlenecks facing unstructured, adaptive, implicit finite element methods as they march toward petascale simulations. With those obstacles resolved, our method demonstrates strong scalability on large scale supercomputers and the ability to solve problems of interest requiring intensive numerical computations in a reasonable time frame. The performance of our implicit solver is improved by two algorithms developed in this work. The first algorithm, multiple compute-object partition improvement, incrementally improves the load balance, and hence the scalability, of both the equation formation and the equation solution of the finite element analysis (FEA). The second algorithm, data reordering, enables effective usage of the memory subsystem by increasing data locality, so as to accelerate the per-core performance of the FEA.

We present excellent strong scaling for several applications performed on various supercomputers including IBM Blue Gene (BG/L and BG/P), Cray (XT3 and XT5), and the Sun Constellation Cluster. The applications involve flow simulations of a bifurcation pipe model with relatively small meshes and cardiovascular flow of an abdominal aorta aneurysm model with a much bigger mesh (more than 1 billion elements). The other application involves blood flow in a "whole" body model composed of 78 arteries, from the neck to the toes. The effectiveness of our methodologies and the algorithms developed in this work is investigated in those applications. With the ability to solve real-world problems having complex geometry/physics in a realistic time, this work provides a reliable and efficient computational tool that can be used by researchers for design and development purposes.

**Wednesday, October 21, 2009, 1:00 pm, AMS Seminar Room 1-122**

Ravi Samtaney, Princeton Plasma Physics Laboratory, Princeton University

**Title**: Overcoming spatial and temporal stiffness in MHD simulations.

**Abstract:** Magnetohydrodynamics (MHD) is arguably the most popular mathematical model for macroscopic simulations of fusion plasmas. In this talk we will focus on the resistive single-fluid MHD equations, the solutions of which can exhibit near-singular layers (or even discontinuities in the absence of diffusion terms). We rely on locally adaptive structured mesh refinement (AMR) methods to mitigate the separation of spatial scales in MHD. We will present results from AMR simulations of MHD applications: (a) pellet injection, a proven method to refuel tokamaks; (b) magnetic reconnection, a canonical problem in plasma physics involving thin current sheets; and (c) an example of MHD shock refraction where five or more discontinuities meet at a single point.

For a tokamak fusion plasma, the presence of a large background field and toroidal geometry results in a large separation of temporal scales. Explicit time-stepping methods to simulate fusion plasmas become prohibitively expensive due to the CFL constraint on the time-step. To overcome the temporal stiffness associated with the fast compressive and Alfven waves in MHD, we have developed a nonlinearly implicit time stepping method using a Jacobian-Free Newton-Krylov (JFNK) approach, and have begun exploring nonlinear multigrid methods. At the heart of our JFNK method is a PDE-operator-based preconditioner (exact for a 1D system of hyperbolic PDEs) used to effectively solve the resulting large, ill-conditioned linear system.
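
The core JFNK idea, approximating Jacobian-vector products by finite differences of the residual so that the Jacobian is never formed, can be sketched on a toy nonlinear system (the preconditioner and the MHD residual of the talk are omitted, and the difference increment is a crude fixed value rather than a properly scaled one):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, u0, tol=1e-10, max_newton=20, eps=1e-7):
    """Jacobian-free Newton-Krylov: each Newton correction solves
    J(u) du = -F(u) with GMRES, where J @ v is approximated by a
    finite difference of F, so J is never assembled."""
    u = u0.copy()
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        Jv = LinearOperator((u.size, u.size),
                            matvec=lambda v: (F(u + eps * v) - r) / eps)
        du, _ = gmres(Jv, -r)      # inexact Newton: loose inner solve
        u += du
    return u

# Toy nonlinear system F(u) = u**3 - b, with root u = b**(1/3).
b = np.array([8.0, 27.0])
u = jfnk_solve(lambda u: u**3 - b, np.array([1.0, 1.0]))
print(u)  # approximately [2, 3]
```

In a production solver the GMRES call would be preconditioned, which is precisely the role of the PDE-operator-based preconditioner described in the abstract.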

Wednesday, October 7, 2009, 10:30 am, AMS Seminar Room 1-122

Viktor Kilchyk, Purdue University, vkilchyk@purdue.edu

Pressure-wave Amplification of Flame Area in Wave Rotor Channels

**Abstract**: Recent interest in novel engine concepts such as wave rotor combustors or pulse detonation engines highlighted the need for better understanding of the pressure wave-flame interaction phenomenon. For the optimum design of such devices, burning rate variation and thus flame area change following pressure wave passage should be well understood.

Deformation of an interface between fluids with different densities following a shock passage is referred to as the Richtmyer-Meshkov instability. To characterize the interface growth produced by the instability, the perturbation amplitude growth is commonly studied. However, it is the area of the interface that is crucial to flame speed and burning rate predictions. Therefore, in our work we studied numerically the area increase of a flame following the passage of a shock or an expansion wave. Numerical solutions to the Navier-Stokes equations were obtained using an in-house second-order CFD code. The code is specialized in handling ideal and real compressible fluids. An upwind finite-volume spatial discretization was used with an approximate Riemann solver adapted to the generalized form of the governing equations.

It was found that the area of a sinusoidally perturbed flame increases almost linearly, for a time period significantly exceeding the duration of growth of the perturbation amplitude. Contrary to what is expected from Richtmyer-Meshkov theory, for a given set of initial parameters, faster interface growth rates were observed in shock refractions where the shock approached from the “hot” side of the interface (fast/slow refractions). More importantly, the computed interface growth rates produced by shocks and expansion waves showed a nearly linear correlation with the deposited circulation.

Using an analytical solution for shock- and expansion-wave-deposited circulation, the contribution of the flame area increase to the overall burning rate variation was examined. The results showed that the flame area increase plays a dominant role in the burning rate change for relatively weak shocks and expansion waves. In the case of expansion waves, it was also shown that expansion wave-flame interaction may temporarily increase the burning rate: the negative chemical kinetic effect of the expansion wave passage is offset by the flame area increase.

Wednesday, September 23, 2009, 10:30 am, AMS Seminar room, Math Tower 1-122

**Title:** The Common Component Architecture for Scalable Scientific Software Engineering

Kostadin Damevski

Department of Mathematics and Computer Science, Virginia State University

**Abstract**: In recent years, component technology has been a successful methodology for large-scale commercial software development. Component technology encapsulates a set of frequently used functions into a component and makes the implementation transparent to the users. Application developers typically use a group of components, connecting them to create an executable application. The Common Component Architecture (CCA) is a project whose goal is to use component technology in scientific computing to tame the software complexity required in coupling multiple disciplines, multiple scales, and/or multiple physical phenomena. The CCA is designed to fit the needs of the scientific computing community by imposing very low overhead, supporting parallel components, and enabling interoperability with legacy code. The CCA component model has already been used in several application domains, creating components for large simulations involving accelerator design, climate modeling, combustion, and accidental fires and explosions. These simulations are able to execute on sets of distributed memory machines spanning several computational and organizational domains. This talk will introduce the CCA and its associated tools and discuss some of the recent advancements made by this project.

**Wednesday, June 24, 2009, 11:00am**, AMS Seminar room, Math Tower 1-122

Alexandre Tartakovsky

Scientist

Computational Mathematics

Pacific Northwest National Laboratory

Title: Multi-scale simulations of multiphase flow and reactive transport in fractured and porous media.

Particle methods such as smoothed particle hydrodynamics are very robust and versatile for pore-scale flow and transport simulations, and it is relatively easy to add complex physical, chemical and biological processes into particle codes. However, the computational efficiency of particle methods is low relative to continuum methods. Multiscale particle methods and hybrid (particle-particle and particle-continuum) methods may be needed to improve computational efficiency and make effective use of emerging computational capabilities.

An SPH multiphase flow model was used to study the effects of pore-scale heterogeneity and anisotropy on infiltration/drainage cycles, entrapment and dissolution of non-wetting fluids, and pressure/saturation relationships.

An SPH reactive transport model was used as part of a multi-scale numerical and experimental study of mixing-induced reactions and mineral precipitation. In a laboratory experiment, solutions containing Na2CO3 and CaCl2 were each injected into different halves of a quasi two-dimensional flow cell filled with quartz sand. Pore-scale simulations were conducted to help understand the mechanism of precipitation layer formation.

A meso-scale Langevin model and a hybrid model were developed to bridge the gap between pore-scale and Darcy-scale descriptions of transport processes.

**Wednesday, April 22, 2009, 12:00pm**, AMS Seminar Room, Math Tower 1-122

Title: Shock Wave Propagation in Tissue and Bone

Professor Randall J. LeVeque

Department of Applied Mathematics

University of Washington, Seattle

Studying the physical and biological mechanisms of extracorporeal shock wave therapy (ESWT) requires modeling the propagation of strong shock waves through tissue and bone. Interfaces between different biological materials lead to reflections and focusing of shock waves and the creation of strong rarefaction zones and cavitation fields. I will discuss recent numerical work using high-resolution finite volume methods in which each grid cell is allowed to have distinct material properties. Sharp interfaces either occur at cell edges (if an appropriate geometry-conforming grid can be obtained) or are represented by averaging the material properties over grid cells on a Cartesian grid. In either case, logically rectangular grids with adaptive mesh refinement are used to efficiently deal with multiscale problems where the medium has heterogeneities at various length scales.

**Wednesday, April 1, 2009, 12:00pm**, AMS Seminar Room, Math Tower 1-122


Dr. Patrick M. Knupp

Distinguished Member of Technical Staff

Optimization and Uncertainty Estimation Department

Sandia National Laboratories

Updating meshes on deforming domains via the target-matrix paradigm

Mesh quality can impact simulation accuracy and efficiency, as well as determine the time needed to create a mesh. Mesh optimization is one of the more rigorous methods to improve quality. A new target-matrix paradigm for mesh optimization is proposed in which targets, based on reference Jacobians of the local map, are constructed from application-specific requirements. An important use of the paradigm is updating meshes on deforming domains in order to maintain the quality of the original mesh.

**Wednesday, March 18, 2009, 12:00pm**, AMS Seminar Room, Math Tower 1-122

Title: Central Discontinuous Galerkin Method and Hierarchical Reconstruction on Overlapping Cells

Professor Yingjie Liu, Department of Mathematics, Georgia Institute of Technology

The central scheme (Nessyahu and Tadmor '90) can be extended to staggered overlapping cells, on which the O(1/dt) dissipation error due to grid shifting can be removed while keeping the benefit of using no flux function or Riemann solver. This strategy allows us to develop a semi-discrete central discontinuous Galerkin (DG) method on overlapping cells that combines the benefit of the central scheme with the compact stencil of the DG method. It also allows standard Runge-Kutta time discretization methods to be used. We are still at an early stage in understanding the properties of central DG on overlapping cells. For example, its CFL number can be shown to decrease much more slowly than that of the conventional DG method on non-staggered grids as the order increases. Another interesting property is that hierarchical reconstruction on overlapping cells seems to generate higher resolution and smoother numerical solutions than on non-staggered grids. Combining a new technique which uses partial neighboring cells for hierarchical reconstruction, we expect even better performance on overlapping cells. I will briefly introduce the recently developed hierarchical reconstruction technique on overlapping cells and report our newest results. This technique does not use any characteristic decomposition. It is compact and can be formulated naturally on unstructured meshes. The talk is based on several collaborative works with C.-W. Shu, E. Tadmor, Z.-L. Xu and M.-P. Zhang.
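As a minimal illustration of the staggered central idea the talk builds on, here is a first-order, Lax-Friedrichs-type sketch (not the second-order Nessyahu-Tadmor scheme itself; the periodic grid, flux, and step sizes are hypothetical):

```python
import numpy as np

def staggered_central_step(u, f, dt, dx):
    """Advance cell averages one step with the first-order staggered central
    scheme: averaging over staggered cells supplies the needed dissipation,
    so no flux splitting or Riemann solver is used. Periodic boundaries;
    the returned values live on the staggered (shifted) grid."""
    u_right = np.roll(u, -1)  # u_{j+1} with periodic wrap-around
    return 0.5 * (u + u_right) - (dt / dx) * (f(u_right) - f(u))
```

With periodic boundaries the step conserves the total cell-average mass exactly, which is one reason central schemes are attractive building blocks for the overlapping-cell DG methods described above.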

**Wednesday, March 11, 2009, 10:30am**, AMS Seminar Room, Math Tower 1-122

Title: DG/Spectral Volume and HR Limiting

Professor Zhiliang Xu, Department of Mathematics, University of Notre Dame

Hierarchical reconstruction for spectral volume and RKDG methods for solving hyperbolic conservation laws

In this talk, I will discuss the recent development of hierarchical reconstruction (HR) [Liu et al., Central discontinuous Galerkin methods on overlapping cells with a non-oscillatory hierarchical reconstruction. SIAM J. Numer. Anal., 45:2442-2467, 2007 and Xu et al., Hierarchical reconstruction for discontinuous Galerkin methods on unstructured grids with a WENO type linear reconstruction and partial neighboring cells. J.C.P. (in press)] for limiting solutions computed by spectral volume and RKDG methods for solving hyperbolic conservation laws. HR is applied to a piecewise quadratic polynomial on two-dimensional unstructured grids as a limiting procedure to prevent spurious oscillations in numerical solutions. The key features of this HR are that the reconstruction on each element uses only adjacent neighbors, which form a compact stencil set, and that there is no truncation of higher degree terms of the polynomial. We explore a WENO-type linear reconstruction on each hierarchical level for the reconstruction of high degree polynomials. We demonstrate that hierarchical reconstruction can generate essentially non-oscillatory solutions while keeping the resolution and desired order of accuracy for smooth solutions.

**Feb 18, 2009 11:00am**, AMS Seminar Room

Title: An inverse problem arising in flow in porous media

Professor Dan Marchesin, Institute for Pure and Applied Mathematics, Brazil

Most oil is produced by pumping water into some wells and recovering oil from others. The injected water often contains suspended particles that penetrate the rock and are retained in the pores. The rock becomes less permeable, and the well may become useless. This is deep bed filtration with formation damage. It is modeled by two conservation laws describing transport and retention of particles, together with Darcy's law. The model contains an empirical "filtration function" of the deposited concentration, which cannot be measured directly. It must be recovered from experimental data by solving an ill-posed inverse problem in the form of a functional equation. We present a robust mathematical method for solving this inverse problem, which gives rise to a robust numerical procedure. We show some numerical applications for real data.

**Date:** Monday, May 4, 2015; 1:00 - 2:00 PM, Math Tower, Sublevel S240

**Speaker:** Leng Han, University of Texas MD Anderson Cancer Center; Computational Biology Faculty Candidate

**Title:**Dissecting novel genetic elements in cancer from The Cancer Genome Atlas

**Abstract:**The Cancer Genome Atlas (TCGA) is a comprehensive effort to understand the molecular basis of cancer. We generated pseudogene expression profiles in patient samples of seven cancer types from TCGA RNA-seq data using a newly developed computational pipeline. Supervised analysis revealed a significant number of pseudogenes that were differentially expressed among established tumor subtypes and that pseudogene expression alone can accurately classify the major histological subtypes of endometrial cancer. Across cancer types, the tumor subtypes revealed by pseudogene expression showed extensive and strong concordance with the subtypes defined by other molecular data. In kidney cancer, the pseudogene expression subtypes not only significantly correlated with patient survival, but also helped stratify patients in combination with clinical variables. We further characterized the global A-to-I RNA editing profiles of 17 cancer types from TCGA and revealed a striking diversity of RNA-editing patterns in tumors relative to normal tissues. We identified an appreciable number of clinically relevant RNA editing events that are particularly enriched in non-silent coding regions. We experimentally demonstrated the effects of several cross-tumor recoding RNA editing events on cell viability and provided the first evidence that RNA editing could selectively affect drug sensitivity. These results highlighted pseudogene expression and RNA editing as exciting paradigms for investigating cancer mechanisms, biomarkers and treatments.

**Date:**Monday, April 27, 2015; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:**Dr. Dmytro Kozakov, Boston University, Computational Biology Faculty Candidate

**Title:** Modeling and Modulating Protein-Protein Interactions

**Abstract:** The proteomics revolution provided a blueprint for the networks of molecular interactions in the cell. However, full understanding of how molecules interact requires information on three-dimensional structures. Despite recent progress in structure determination of individual proteins using X-ray crystallography or nuclear magnetic resonance (NMR), structures of complexes remain difficult to obtain. In addition, modulating protein interactions for therapeutic purposes has become one of the modern frontiers of biomedical research. Thus, modeling of protein interactions has important motivations. My talk consists of three parts. First, I will describe the development of a multi-stage protein-protein docking method that includes global sampling based on the Fast Fourier Transform (FFT) correlation algorithm, Monte Carlo Minimization and Semi-Definite Underestimation (SDU) methods for medium-range optimization, and finally manifold and combinatorial optimization for local structure refinement. I will demonstrate that the energy evaluation model used is accurate enough not only to model the structure of the complex, but also to provide insight into protein-protein association, revealing that the protein interaction energy landscape resembles a canyon-like terrain where the low energy areas lie in a lower dimensional subspace. I will show that this finding can potentially enable the design of effective approaches to docking. The second part of the talk will focus on understanding the key principles of disrupting protein-protein interactions using small molecules, macrocycles, or other compounds. This will be done by introducing the concept of hot spots of protein-protein interactions, i.e., regions of the surface that disproportionately contribute to the binding free energy. Hot spots will be determined by modeling the interaction of proteins with a number of small molecules used as probes. The method is a direct computational analogue of experimental techniques and uses optimization approaches similar to those described above. I will demonstrate that the hot spots provide information on the “druggability” of protein-protein interactions, i.e., on the ability of the target protein to bind drug-like small molecules. Finally, in the third part of the talk I will demonstrate how these approaches can potentially be scaled to the system level by considering all known structures in the entire kinome and determining hot spots that provide potential allosteric sites.

**Date:**Thursday, April 23, 2015; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Robert Unckless, Cornell University, Computational Biology Faculty Candidate

**Title:**Balancing selection and convergent evolution in the Drosophila immune system

**Abstract:** Conventional thinking has been that antibacterial peptides (AMPs) are functionally redundant and evolutionarily dispensable at the individual gene level. In *Drosophila*, this inference has been drawn from observations that antibacterial peptide genes show low rates of amino acid divergence between species and high rates of genomic duplication and deletion, and that genetic variation in individual AMP genes makes little or no contribution to organism-level defense phenotypes. However, we identified a serine/arginine polymorphism in the *Diptericin A* gene of *Drosophila melanogaster* that is highly predictive of resistance to specific bacterial infections. The same amino acid polymorphism is segregating in the *Diptericin A* gene of the sister species *D. simulans*, with equivalent phenotypic effect, having arisen convergently by independent mutation of the homologous codon. Examination of the larger *Drosophila* phylogeny reveals that the arginine mutation has arisen independently at least 5 times in the genus. These observations prompted us to revisit the molecular evolution of other antibacterial peptide genes, and we find that molecular convergence and shared interspecific polymorphism are surprisingly common. We additionally have found multiple loss-of-function mutations, which cause high susceptibility to infection in *D. melanogaster* and *D. simulans*. We have reevaluated the previously supposed mode of evolution of *Diptericin* and other antibacterial peptide genes, and now favor the hypothesis that AMP genes evolve under a model where the selection pressure favoring alternative amino acid states fluctuates over time and space. The frequent incidence of loss-of-function alleles in nature suggests that AMP function in immune defense is balanced by deleterious consequences in the absence of infection, and serial pseudogenization and duplication-subfunctionalization may explain the rapid gene family dynamics. Since previous screens for molecular adaptation have explicitly tested for adaptive divergence, they would have failed to detect convergent or balanced mutations.

**Date:**Thursday, April 16, 2015;1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:**Dr. Leonid Chindelevitch, Postdoctoral Fellow in Epidemiology, Harvard School of Public Health; Mathematical Modeling of the Tuberculosis Epidemic in Sub-Saharan Africa.

**Title:** Modeling tuberculosis, from pathogen to host to population

**Abstract:**Tuberculosis (TB) continues to cause over a million deaths a year worldwide. Multi-drug resistance is on the rise, causing concern among public-health experts. Computational biology has an important role to play in supporting worldwide control of TB infections.

This talk will give an overview of my work on modeling TB at different scales. I will start by presenting MetaMerge, an approach for unifying disparate sources of knowledge about TB metabolism, which results in more comprehensive models and improved predictive power. I will go on to propose a simple biochemical model that can predict some key features of the behavior of antibacterial drugs within a host, such as their minimum inhibitory concentration. Finally, I will discuss how an accurate classification of complex TB infections as originating from mutation or mixed infection is possible using ClassTR, an optimization-based method I developed for population-level analysis. I will conclude with some challenges of extracting insights from large-scale genomic data in the context of infectious disease research.

*"Pizza with the Professor"* **seminar series for AMS faculty and graduate students**

**Date:**Wednesday, April 8, 2015;1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:**Professor Robert Rizzo

**Title:**Computer-aided Drug Design

**Abstract:**Computational approaches for predicting how compounds interact with a therapeutic target are an important component of modern-day drug discovery. Under the broad umbrella of computational structural biology, research in our group includes development, validation, and application of new modeling procedures and protocols to improve predictions for how small molecules interact with biologically important proteins implicated in human disease. The underlying goal is to improve methods for high-throughput virtual screening, the procedure by which large chemical libraries are computationally assessed for compatibility with a target to identify promising drug-like leads for subsequent experimental testing. Improved computational methods have great potential to save billions of dollars in drug development costs and reduce the time associated with bringing clinically useful medicines to market. Today’s talk will focus on new methods our group has developed and implemented in the virtual screening program DOCK, and give examples of application to several drug targets.

**Date:**Thursday, February 19, 2015;10:00 AM- 12:00 PM; Math Tower, Seminar Room 1-122

**Speaker:** Dwight McGee, Ph.D., Visiting Post Doctoral Fellow (R. Hernandez Lab), Georgia Institute of Technology, Atlanta, GA

**Title:** "A Computational Study of HIV-1 Protease: Evaluating Drug-Resistance and the Protonation of the Catalytic-Dyad"

**Abstract:** Human Immunodeficiency Virus (HIV), the virus that causes Acquired Immunodeficiency Syndrome (AIDS), is still one of the most prominent diseases in the world. One of the main targets in anti-retroviral therapy is the protease because of its role in the life cycle of the virus, and inhibition would prevent the maturation and the spread of the virus to neighboring cells. The protonation state of the catalytic aspartates of HIV-1 protease (HIVPR) is atypical, and as a result, is the subject of much debate. Using pH Replica Exchange Molecular Dynamics, we simulated the apo and the bound forms of HIV-1 protease with twelve different protease inhibitors to investigate the pKa of not only the catalytic dyad, but also the other titrating residues in HIVPR. The mechanism by which drug-pressure-selected mutations, especially those not located in the active site, confer resistance is not well understood. Through our results we provide an explanation as to how the mutations G48T and L89M reduce the efficacy of the protease inhibitor saquinavir. The results of this work could offer valuable insight on ways to improve protease inhibitors.

*"Pizza with the Professor"* **seminar series for AMS faculty and graduate students**

**Date:**Wednesday, November 19, 2014;1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:**Professor Allen Tannenbaum

**Title:**Theory and Practice of Medical Imaging Analysis

**Abstract:**In this talk, we will describe some theory and practice of medical imaging analysis. This includes projects such as radiation planning in cancer therapy, traumatic brain injury, and left atrial fibrillation. Accordingly, we will describe several models of active contours for which both local (edge-based) and global (statistics-based) information may be included for various segmentation tasks. Segmentation is the process of partitioning an image into its constituent parts.

In addition to segmentation, the second key component of many medical imaging tasks is registration. The registration problem is still one of the great challenges in vision and medical image processing. Registration is the process of establishing a common geometric reference frame between two or more data sets obtained by possibly different imaging modalities. Registration has a substantial literature devoted to it, with numerous approaches ranging from optical flow to computational fluid dynamics. For this purpose, we propose using ideas from optimal mass transport.

The talk is designed to be accessible to a general applied mathematical audience with an interest in medical imaging. We will demonstrate our techniques on a wide variety of data sets from various medical imaging modalities.

*"Pizza with the Professor"* **seminar series for AMS faculty and graduate students**

**Date:**Wednesday, October 22, 2014;1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor Evangelos Coutsias

**Title:** Mechanisms and Molecules

**Abstract:** This will be a mainly pictorial discussion of matrix-based algorithms for the computer modeling of motion in macromolecules. My own interest is in modeling macrocyclic drugs which are currently a hot topic in drug design. I will give an overview of some mathematical questions in the field and connect with problems in the control of robotic manipulators.

*Math/AMS Seminar*

**Date:**Wednesday, October 22, 2014;10:00 AM - 12:00 PM; Math Tower, Room 1-122A

**Speaker:** Maria Hempel, ETH Zurich; http://www.math.ethz.ch/

**Title:** Flexibility of Polyhedral Surfaces

**Abstract:** In this talk I sketch how a description of polyhedra in terms of their surface and their dihedral angles may be used to study their rigidity and flexibility. In particular, I explain how the flexibility of polyhedral cones may be used to explain when a polyhedron is rigid and when it is flexible.

*"Pizza with the Professor"* **seminar series for AMS faculty and graduate students**

**Date:** Wednesday, October 8, 2014; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor David Green

**Title:** Computational Protein Design: From Combinatorial Optimization to Biological Engineering

**Abstract:** A major grand challenge in biological engineering is the ability to robustly design proteins to function in a desired manner. Solving this design problem requires consideration of multiple sub-problems: how to efficiently search over possible amino acid replacements at a subset of positions; how to determine the native (low-energy) conformation for a given amino acid sequence; and how to appropriately predict the relative performance of potential solutions.

In this week's talk, I will present an overview of the computational protein design problem and introduce a mapping of this problem onto a general class of combinatorial optimization problems. This will set the stage for the discussion of two algorithms, Dead-End Elimination and A* search, which allow for a tractable solution of the problem. Finally, I will illustrate the application of these approaches to a number of real-world problems, and summarize some of the remaining open questions in the field.
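To make the combinatorial flavor of the problem concrete, here is a toy sketch of the simplest Dead-End Elimination criterion: a rotamer is pruned when its best-case energy is still worse than some alternative's worst-case energy. The positions, rotamer labels, and energy tables below are hypothetical; real implementations use Goldstein-style refinements and physical energy functions.

```python
def dead_end_eliminate(E_self, E_pair, rotamers):
    """Prune rotamers that cannot appear in the global minimum-energy
    conformation. E_self[(i, r)] is the self energy of rotamer r at
    position i; E_pair[(i, r, j, s)] is the pairwise energy between
    rotamer r at i and rotamer s at j; rotamers[i] lists choices at i."""
    pruned = set()
    positions = sorted({i for i, _ in E_self})
    for i in positions:
        for r in rotamers[i]:
            for t in rotamers[i]:
                if t == r or (i, t) in pruned:
                    continue
                # Best case for r (partners chosen to favor r) ...
                lhs = E_self[(i, r)] + sum(
                    min(E_pair[(i, r, j, s)] for s in rotamers[j])
                    for j in positions if j != i)
                # ... still worse than the worst case for t => prune r.
                rhs = E_self[(i, t)] + sum(
                    max(E_pair[(i, t, j, s)] for s in rotamers[j])
                    for j in positions if j != i)
                if lhs > rhs:
                    pruned.add((i, r))
                    break
    return pruned
```

In practice this pruning shrinks the search space enough that an exact method such as A* can enumerate the surviving combinations.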

**Tuesday, May 31, 2011, Time 11:00am - 12:00pm, Location: AMS Seminar Room, Math Tower 1-122**

**Speaker**: Dr. Karunesh Arora

**Title:** Multiscale Modeling and Simulation as a Tool to Integrate Biomolecular Structure, Dynamics, and Function

**Abstract:**

chemical step of enzyme catalysis. This understanding will have broad implications for our understanding of enzyme mechanisms and for the design of novel protein catalysts.

**Friday May 6th, 2011, Time 1:15pm - 2:15pm, Location: AMS seminar room, Math Tower 1-122**

Speaker: Dr. Rosemary Braun

**Title**: Spectral Clustering for Pathway Analysis of Gene Expression Data

**Abstract:** Gene profiling experiments have become a ubiquitous tool in the study of disease, and the vast number of gene transcripts assayed by modern microarrays has driven forward our understanding of biological processes tremendously. However, because most phenotypes studied by gene expression profiling are complex, there is a need for analytical techniques that can identify relationships between samples that are driven by many genes and that may exist on several scales. In this talk, I will describe a spectral-clustering-based technique -- the Partition Decoupling Method (PDM) -- and present its application to several gene expression data sets, showing how PDM may be used to classify samples based on multi-gene expression patterns and to identify pathways associated with phenotype. The PDM uses iterated spectral clustering steps, revealing at each iteration progressively finer structure in the geometry of the data; these iterations, each of which provides a partition of the data that is decoupled from the others, are carried forward until the structure in the data is indistinguishable from noise, preventing over-fitting. Because it can reveal non-linear and non-convex geometries present in the data, the PDM is an improvement over typical gene expression analysis algorithms, permitting a multi-gene analysis that can reveal phenotypic differences even when the individual genes do not exhibit differential expression. After describing the method, I will present the results of its application to several publicly available gene expression data sets, demonstrating that PDM is able to identify cell types and treatments with higher accuracy than other approaches. By applying PDM in a pathway-by-pathway fashion, I will illustrate how the PDM may be used to find sets of mechanistically related genes that discriminate phenotypes.
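A bare-bones version of a single spectral-clustering split of the kind PDM iterates might look like the sketch below. It assumes Gaussian similarities, an unnormalized graph Laplacian, and a two-way split on the sign of the Fiedler vector; PDM's resampling-based noise test and iteration are not shown, and `sigma` is a hypothetical bandwidth parameter.

```python
import numpy as np

def spectral_partition(X, sigma=1.0):
    """Split the rows (samples) of X into two groups by the sign of the
    Fiedler vector of the graph Laplacian built from Gaussian similarities.
    Returns a boolean array: True/False marks the two groups."""
    # Pairwise squared distances and Gaussian similarity weights
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    D = np.diag(W.sum(axis=1))
    L = D - W                       # unnormalized graph Laplacian
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    fiedler = vecs[:, 1]            # eigenvector of 2nd-smallest eigenvalue
    return fiedler >= 0.0
```

Because the split is driven by the geometry of the similarity graph rather than by any single coordinate, it can separate groups that no individual gene distinguishes, which is the property the abstract emphasizes.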

**Wednesday, April 20, 2011, Time 1:00-2:00PM, Location: AMS Seminar Room 1-122**

**Speaker**: Prof. Evangelos A. Coutsias

Dept. of Mathematics and Statistics, University of New Mexico

**Title**: Protein Loop Modeling with Inverse Kinematics

**Abstract:** Protein loops are the sections of the polypeptide chain connecting regions of secondary structure such as helices and beta strands. They may contain functional residues or have purely structural roles, and often they can be the sites of evolutionary changes. In contrast to the relatively rigid helices and strands, loops can be flexible, allowing a protein to rapidly respond to changes and bind to ligands. Structure determination of flexible loops with given endpoints is a challenging problem, commonly referred to as the Loop Closure problem. Loop closure has been studied by computational methods since the pioneering work of Go and Scheraga in the '70s. Our Triaxial Loop Closure (TLC) method provides a simple and robust algebraic formulation of the loop closure problem for loops of arbitrary length and geometry. We present results of several recent studies showing that TLC samples loop conformations more efficiently than other currently available methods: TLC sampling augmented with a simulated annealing protocol using the Rosetta scoring potential was able to predict the native structures of several standard loop test sets with up to 12-residue loops with sub-Angstrom mean accuracy; TLC with a Jacobian-guided Fragment Assembly scheme was shown to outperform other methods in generating near-native ensembles; and finally, TLC-based local moves were incorporated in a new Monte Carlo scheme that hierarchically samples backbones and sidechains, making it possible to make large moves that cross energy barriers. The latter method, applied to the flexible loop in triosephosphate isomerase that caps the active site, was able to generate loop ensembles agreeing well with key observations from previous structural studies. Further applications of kinematic geometry to protein modeling will be discussed as time permits.

Monday April 18th, 2011, Time 1:00pm - 2:00pm, Location: AMS seminar room, Math Tower 1-122

**Speaker: **Prof. Michal Brylinski, Georgia Institute of Technology, Atlanta

**Title:** Ligand Homology Modeling as a new computational platform to support modern drug discovery

**Abstract:** As an integral part of drug development, high-throughput virtual screening is a widely used tool that could in principle significantly reduce the cost and time needed to discover new pharmaceuticals. In practice, virtual screening algorithms suffer from a number of limitations and the development of new methodologies is required. In this talk, I will discuss the ideas of Ligand Homology Modeling (LHM), which is likely one of the first approaches in cheminformatics that successfully extends template-based techniques, commonly used in protein structure prediction, to the modeling of protein-ligand interactions. Our intensive research in this field culminated in the development of a novel virtual screening approach, which appears as a powerful compound prioritization technique applicable to the early stages of proteome-scale drug design projects. As an example, I will describe the application of LHM to all kinase domains in humans, which has provided the scientific community with a very extensive structural and functional characterization of the human kinome to support the discovery of novel kinase inhibitors.

Wednesday April 13th, 2011, Time 1:00pm - 2:00pm, Location: AMS seminar room, Math Tower 1-122

**Speaker:** Dr. Thomas MacCarthy

**Title**: Modeling somatic hypermutation in B-cells

**Abstract:** Somatic hypermutation (SHM) is a fundamental process in antibody diversity generation that functions by introducing point mutations into the variable regions of immunoglobulin (Ig) heavy and light chain genes of B-cells. The enzyme activation-induced cytidine deaminase (AID) has been found to play a central part in SHM by generating C→U mutations. AID achieves this by cytosine site deamination, which occurs preferentially at so-called hotspot motifs. Computational models can be used to produce simulated sequences which in turn can be compared to “control” (e.g. wildtype) datasets. I previously used a computational model of AID activity to quantify the contribution of simple hotspot motif targeting to the mutation process, and found that the model could only account for ~50% of the complexity of the full *in vivo* mutation process. Extending the model by incorporating features such as processivity and DNA entry sites for AID increases the explained complexity to over 80% when compared to a large dataset of human IGHV3-23 sequences. We have also investigated AID entry sites experimentally by inserting a cluster of overlapping hotspot motifs into the human heavy chain V region expressed by the Ramos Burkitt’s lymphoma cell line using both a cell-free in vitro assay and intact Ramos cells. Clustering analysis of the in vitro data shows that wildtype sequences contain a protected segment in the 3’ half of the V region. The protection appears to occur stochastically, affecting only a subset of sequences. When the cluster of hotspots was inserted, the protection disappeared. In Ramos cells when the hotspot cluster was inserted into the endogenous Ig locus, only one of five Ramos clones displayed a focusing of mutation within the cluster as well as a concentration of mutations 3’ to the cluster suggesting that the stochastic use of entry sites and disruption of 3’ protection also apply in the in vivo context.

**Monday, March 28, 2011, 2:30 - 3:30pm. Simon Center Lecture Hall.**

**Speaker:** Dr. James Chen

Senior Biomedical Research Service / Senior Mathematical Statistician, National Center for Toxicological Research, FDA; Fellow, American Statistical Association.

**Title**: Statistics in the Analysis of High-Dimensional Biological Data

**Abstract:** High-throughput genomic, proteomic, and metabolomic technologies are widely used in biomedical research to develop molecular biomarkers of exposure, toxicity, disease risk, disease status and response to therapy. High-dimensional data often refer to a data set where each sample is described by hundreds or thousands of correlated measurements of attributes; the number of samples can be large too. This talk presents salient problems and challenges encountered in the analysis of high-dimensional data: multiple testing, gene set enrichment analysis, dimensionality reduction with feature selection and feature extraction, and ensemble classification.
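As one concrete instance of the multiple-testing problem mentioned above, the Benjamini-Hochberg step-up procedure for false-discovery-rate control can be sketched as follows (the p-values in the example are made up for illustration):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: sort the p-values, find the
    largest rank k with p_(k) <= (k/m) * alpha, and reject the hypotheses
    with the k smallest p-values. Controls the false discovery rate when
    tests are independent (or positively dependent)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, idx in enumerate(order, start=1):
        if pvals[idx] <= rank * alpha / m:
            k = rank  # step-up: keep the largest qualifying rank
    rejected = set(order[:k])
    return [i in rejected for i in range(m)]
```

For example, with p-values [0.005, 0.01, 0.03, 0.04, 0.5] and alpha = 0.05 this rejects the first four hypotheses, whereas a Bonferroni correction at the same level would reject only the first two.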

**Friday, March 27, 2pm**,

**AMS Seminar Room, Math Tower 1-122**

**Title:** Understanding Embryonic Robustness: Quantitative Experiments and Theory

Alexander Spirov, Adjunct Associate Professor, Department of Applied Mathematics and Statistics

Center for Developmental Genetics, Stony Brook University

The primary aim of our research is to understand how gene regulation generates precise spatial patterns in embryonic development. However, the chemical reactions and transport processes underlying pattern formation are subject to numerous sources of variability and noise. Extrinsic sources include variability in temperature, size and maternally-supplied factors. Intrinsic noise arises from the low concentrations of many biological molecules and the random aspects of cell shape, orientation and movement. For development to reliably form complex body plans, gene network dynamics must be robust to these disruptive influences. We use one of the genetically best characterized model systems for embryonic patterning, anterior-posterior (AP) segmentation in *Drosophila*. We combine quantified data acquisition, statistical extraction of trends and noise components, and stochastic and evolutionary modeling of gene networks. Such an integrated approach is required to properly characterize the different aspects of developmental noise (within an embryo, e.g. nucleus-to-nucleus) and variability (embryo-to-embryo), and to understand how these are controlled. Our long-term goal is to provide a mathematically quantified understanding of the interactions which give the robust spatial patterning underlying the development of complex body plans. Studying how networks maintain robustness, and how they lose it, should have direct bearing on heritable human diseases, particularly birth defects, which display variable outcome.

**Monday, April 13, 2009 10:30 AM, AMS Seminar Room, Math Tower 1-122**

**Title:** Using Novel X-Ray Crystallographic Methods to Identify Side Chain Polymorphism in Protein-Ligand Interactions: Applications to Calmodulin Peptide Binding Specificity

P. Therese Lang, University of California, Berkeley Department of Molecular & Cell Biology

Although proteins populate ensembles of structures in solution, X-ray diffraction data are traditionally interpreted using a single dominant model. To detect ensembles of side chain motions in X-ray electron density, we developed a new computational method called Ringer. Using this approach, we have identified structural fluctuations in protein active sites and explored their effects on the biophysical properties of ligand binding. Using experimental density, Ringer identified unmodeled alternate rotamers in 5-15% of side chains, supporting the idea that the newly detected conformations are widespread. With this new method, we are exploring X-ray structures of calmodulin (CaM), a calcium signaling protein that recognizes approximately 200 different peptide sequences, to test the idea that free receptors contain structural fluctuations required for bound conformations. We have identified several previously unmodeled alternate side chain conformations in the active site of the apo-CaM structure necessary for diverse binding. We have also seen a correlation with NMR experiments that detect changes in side chain rotamers. The identified alternate conformations support predictions about which residues within the binding site can influence recognition selectivity by modulating the ensemble of side chain motions. These studies have the potential to provide new tools to explore the underpinnings of ligand specificity in CaM and other systems.

**Date:** Friday, April 10, 2015; 1:00 - 2:15 PM; Math Tower, Room S-240

**Speaker:** Dr. Boris Mordukhovich, Wayne State University, www.math.wayne.edu/~boris

**Title:** Variational Analysis: What is it?

**Abstract:** Variational analysis has been recognized as an active and rapidly growing area of mathematics and operations research motivated mainly by the study of constrained optimization and equilibrium problems, while also applying perturbation ideas and variational principles to a broad class of problems and situations that may not be of a variational nature. One of the most characteristic features of modern variational analysis is the intrinsic presence of nonsmoothness, which naturally enters not only through the initial data of the problems under consideration but largely via variational principles and perturbation techniques applied to a variety of problems with even smooth data. Nonlinear dynamics and variational systems in applied sciences also give rise to nonsmooth structures and motivate the development of new forms of analysis that rely on generalized differentiation.

This talk is devoted to discussing some basic constructions and results of variational analysis and its remarkable applications.

*ABOUT THE SPEAKER:*

Dr. Boris Mordukhovich is Distinguished University Professor of Mathematics at Wayne State University. He has more than 370 publications, including several monographs. Among his best known achievements are the introduction and development of powerful constructions of generalized differentiation and their applications to broad classes of problems in variational analysis, optimization, equilibrium, control, economics, engineering, and other fields. Dr. Mordukhovich is a SIAM Fellow, an AMS Fellow, and a recipient of many international awards and honors, including Doctor Honoris Causa degrees from five universities around the world. He is on the list of Highly Cited Researchers in Mathematics. His research has been supported by continuing grants from the National Science Foundation.

**"Pizza with the Professor" seminar series for AMS faculty and graduate students**

**Date:** Wednesday, March 4, 2015; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor Zhenhua Liu

**Title:** Sustainability, IT and Market Design: a “Rigor + Relevance” Approach

**Abstract:** Energy and sustainability are among the most critical issues facing our society. My broader view is that Information Technology can, and should, play a significant role in improving the sustainability and efficiency of the broad energy infrastructure. In this talk, I will briefly introduce my previous work and future directions for the following three topics.

(1) Sustainable IT: geographical load balancing to exploit the spatial flexibility in cloud workloads for renewable energy integration. I have designed algorithms with theoretically provable guarantees to deal with information uncertainties and the need for distributed control.

(2) IT for Sustainability: a tale of data center demand response. I will discuss its great potential and challenges, as well as my recent efforts in both control algorithm design for customers and market design for utility companies and our society.

(3) Analytics framework: cloud platforms for energy management and big data applications. I will talk about recent work on multi-resource fair allocation.

Additionally, I will talk about my own experience of building the bridge between theory and industry for technology transfer: in particular, the complete lifecycle of technology development, taking a mathematical idea all the way through R&D to implementation and deployment through industrial transfer ("Net-zero Energy Data Center" with HP, 2013 Computerworld Honors Laureate).

Fig. Research Overview

**"Pizza with the Professor" seminar series for AMS faculty and graduate students**

**Date:** Wednesday, February 25, 2015; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor Eugene Feinberg

**Title:** Research Directions in MDPs (Markov Decision Processes)

**Abstract:** A successful professor in an applied discipline is usually an expert in at least one area of fundamental research and in at least one area of applications. My area of fundamental studies is Markov decision processes (MDPs), and the area of applications is electric power systems.

An MDP, also known under the name of a stochastic dynamic program, is a mathematical model for sequential decision-making in stochastic environments. It has numerous applications in business, engineering, biology, ecology, computer science, and other fields. It also uses a large variety of mathematical methods and sometimes leads to new mathematical discoveries.

Over the last fifteen years I have been working on developing forecasting, optimization, and statistical methods for electric power systems. Currently these topics are important for the development of the new generation of electric power systems called Smart Grids.

My talk concentrates on research directions in MDPs, but I’ll also try to talk a little bit about Smart Grid research.

*"Pizza with the Professor"* seminar series for AMS faculty and graduate students

**Date:** Wednesday, September 10, 2014; 1:00 - 2:00 PM; Math Tower, Seminar Room 1-122

**Speaker:** Professor Joseph Mitchell

**Title:** "Guarding Art Galleries, Patrolling Prisons, Shoveling Snow, and Surveying Planets: One of Many Areas of Study in Computational Geometry"

**Abstract:** A famous problem posed by Victor Klee in the early 1970's is the Art Gallery Problem: How many points ("guards") are sufficient to place within a simple polygon P having n vertices so that every point of P is "seen" by at least one guard? This problem falls into a rich class of computational geometry problems that ask one to optimally cover a domain. We discuss several interesting mathematical and algorithmic questions that arise in this class, both in the case of stationary guards and mobile robotic guards. The problems are simple to state, easy to visualize, but often very challenging to solve.

Monday, March 10, 2014; 1:00 - 2:00 PM, Math Tower, Room 1-122A

**Speaker:** Ilya Tkachev, Delft University of Technology, The Netherlands

**Title:** Metrics and Equivalence Relations between Stochastic Models with Different State Spaces

**Abstract:** Comparing stochastic models with different state spaces is an important task. For example, if the original stochastic model has an uncountable state space one may be interested in finding an equivalent finite model in order to do analysis over the latter and infer properties over the original model. Clearly, it is rarely the case that an infinite system admits an equivalent finite one – hence, one may be further interested in defining metrics between systems that still allow inferring properties over the original system based on the analysis carried over its finite approximants.

This talk concerns particular stochastic models arising in gambling theory that are similar to Markov Decision Processes. We show how an equivalence relation introduced at the level of transition kernels extends to an equivalence relation between collections of strategic measures. We further introduce a total-variation-based metric between gambling models and provide bounds on its propagation in time. One particularly important concept we use is the lifting of a relation between states to one between probability measures. As a separate result, we show that this lifting corresponds to a functor in the category of Borel spaces and analytic relations.

**Wednesday, September 18, 2013; 12:00 - 1:00 PM; AMS Seminar Room 1-122A**

**Speaker:** Michal Stern, Tel-Aviv-Yaffo Academic College, Israel

**Title:** The optimal clustering tree problem

Monday, August 5, 2013; 11:00AM - 12:30PM; SCGP in Simons Auditorium

**Speakers:** Sandor P. Fekete (TU Braunschweig) and James McLurkin (Rice University)

*Special Summer Double Feature!!*

Please join us for a special event this coming Monday, August 5. We have two distinguished visitors, Sandor Fekete and James McLurkin, experts in distributed/swarm robotics and computational geometry.

Feature (1): We will have two short movies (see below) [Sandor Fekete], followed by

Feature (2): A talk and demo [James McLurkin], "Distributed Computational Geometry and Multi-Robot Systems: Twins Separated at Birth?", with a live demo of a small swarm of real robots!

**(1) Speaker: Sandor P. Fekete (TU Braunschweig)**

**Abstract:** One of the driving engines of Computational Geometry is the interaction with practical problems; one of the application areas with strong ties to geometry is the field of robotics. In this talk, we start by presenting two 10-minute videos that document ongoing collaborations between computational geometry and robotics.

The first [1] shows how building detailed three-dimensional maps with a robot platform that carries a powerful laser scanner is related to the classical Art Gallery Problem (AGP). We develop different methods for solving such problems to optimality, and demonstrate the resulting application. This is joint work with colleagues from the University of Campinas (Brazil) and Jacobs University Bremen (Germany).

The second [2] considers exploration and triangulation with a swarm of small "r-one" robots with relatively few individual capabilities; we develop ideas, provide theory and present a practical demonstration of how such a swarm can be used to explore an unknown territory, and guard it. This is ongoing joint work between Braunschweig and Rice University (USA).

In the second part of the talk, James McLurkin will perform some show-and-tell magic tricks with these very r-one robots.

*References:*

[1] D. Borrmann, P.J. de Rezende, C.C. de Souza, S.P. Fekete, S. Friedrichs, A. Kroeller, A. Nuechter, C. Schmidt, and D.C. Tozoni. Point Guards and Point Clouds: Solving General Art Gallery Problems. Video, in: SoCG 2013. Abstract at http://dl.acm.org/citation.cfm?id=246236

[2] A. Becker, S.P. Fekete, A. Kroeller, L.S. Kyou, J. McLurkin, and C. Schmidt. Triangulating Unknown Environments Using Robot Swarms. Video, in: SoCG 2013. Abstract at http://dl.acm.org/citation.cfm?id=2462360

=============================

**(2) Speaker: James McLurkin (Rice University)**

**Title:** Distributed Computational Geometry and Multi-Robot Systems: Twins Separated at Birth?

**Abstract:** This is a talk in three parts. We start with an overview of robotics in general, and why multi-robot systems are the next frontier of robotics research. We focus on the opportunities and challenges presented by large populations; they enable simultaneous coverage of large areas, highly parallel operations, and other novel solutions, but require distributed algorithms for sensing, computation, communication, and actuation. Distributed computational geometry will be required to address these problems.

We present an overview of our work with multi-robot systems, including distributed algorithms for robot recovery, angular coordinate systems, and massive manipulation. Our systems are best modeled as geometric graphs embedded in the plane, with various connectivity constraints and self-mobile vertices. Our low-cost robot platform enables this work, and typifies the hardware (read: algorithmic) constraints of large populations.

I conclude with a call for action from the computational geometry community. There is a need for more discussion between our communities on interesting new problems, and we all stand to gain from knowledge of each other's fields. I present several engineering problems that require algorithmic solutions, and algorithmic problems that are solved with proper engineering.

=============================

James McLurkin is an Assistant Professor at Rice University in the Department of Computer Science. Current interests include using distributed computational geometry for multi-robot configuration estimation and control, and defining complexity metrics that quantify the relationships between algorithm execution time, inter-robot communication bandwidth, and robot speed. Previous positions include lead research scientist at iRobot Corporation, where McLurkin was the manager of the DARPA-funded Swarm project. Results included the design and construction of 112 robots and distributed configuration control algorithms, including robust software to search indoor environments. He holds an S.B. in Electrical Engineering with a minor in Mechanical Engineering from M.I.T., an M.S. in Electrical Engineering from the University of California, Berkeley, and an S.M. and Ph.D. in Computer Science from M.I.T.

**Wednesday, March 6, 2013; 10:00-11:00AM; Location: Mathematics Tower, Room P-131**

**Speaker:** Pavlo O. Kasyanov, National Technical University of Ukraine

This talk describes sufficient conditions for the existence of optimal policies for Partially Observable Markov Decision Processes (POMDPs). The objective criterion is either minimization of the expected total discounted costs or minimization of the expected total nonnegative costs. It is well known that a POMDP can be reduced to a Completely Observable Markov Decision Process (COMDP) whose state space is the set of belief probabilities for the POMDP. Thus, a policy is optimal in the POMDP if and only if it corresponds to an optimal policy in the COMDP. Here we provide sufficient conditions for the existence of optimal policies for the COMDP and therefore for the POMDP.

The talk is based on a joint paper with Eugene A. Feinberg and Michael Z. Zgurovsky.
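The POMDP-to-COMDP reduction mentioned above rests on propagating a belief vector with the Bayes filter. As a minimal illustrative sketch (the two-state chain, the matrices `P` and `O`, and the function `belief_update` below are invented for the example and are not taken from the paper):

```python
# Hypothetical 2-state POMDP (toy numbers, not from the paper).
P = [[0.9, 0.1],   # P[s][s2] = Pr(next state = s2 | current state = s)
     [0.2, 0.8]]
O = [[0.7, 0.3],   # O[s][z] = Pr(observation = z | state = s)
     [0.4, 0.6]]

def belief_update(b, z):
    """One Bayes-filter step: predict with P, then correct with observation z."""
    predicted = [sum(b[s] * P[s][s2] for s in range(2)) for s2 in range(2)]
    unnorm = [predicted[s2] * O[s2][z] for s2 in range(2)]
    total = sum(unnorm)
    return [x / total for x in unnorm]

b = [0.5, 0.5]                 # initial belief
for z in (0, 0, 1):            # a hypothetical observation sequence
    b = belief_update(b, z)
print(b)                       # a probability vector over the two states
```

The COMDP's state is exactly this belief vector, which is why optimality of a policy for the COMDP translates back to the POMDP.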

**Tuesday, February 5, 2013; 10:15-11:15AM; Simons Lecture Hall 102**

**Speaker:** Sheldon Jacobson, NSF Operations Research Program Director

**Title:** Funding Opportunities at the National Science Foundation

**Abstract:** This presentation discusses opportunities for funding in Operations Research and other programs at the National Science Foundation. It also discusses recent changes implemented for proposals submitted to NSF, as well as new issues related to proposal review.

**Biography:** Sheldon H. Jacobson is on leave from the University of Illinois at Urbana-Champaign, serving as the Program Director for Operations Research in the Division of Civil, Mechanical and Manufacturing Innovation at the National Science Foundation. He has a B.Sc. in Mathematics from McGill University and a Ph.D. in Operations Research from Cornell University. His research interests span theory and practice, covering decision-making under uncertainty and discrete optimization modeling and analysis, with applications in aviation security, health care, and sports.

**Friday, November 16, 2012; 1:30-2:30 PM; Harriman Hall, Room 102**

**Speaker:** Michael C. Fu, University of Maryland (joint work with Huashuai Qu)

**Title:** Augmenting Simulation Metamodels with Direct Gradient Estimates

**Abstract:** Traditional response surface methods such as regression fit a function using only output data from the response itself. However, in many settings found in stochastic simulation, direct gradient estimates are available. We propose several approaches that augment traditional regression and stochastic kriging models by incorporating the additional gradient information. Theoretical results for the regression setting, along with numerical experiments for both settings, are reported, which indicate the potential improvements that can be achieved.

**Biography:** Michael Fu is Ralph J. Tyser Professor of Management Science in the Robert H. Smith School of Business, University of Maryland at College Park, with a joint appointment in the Institute for Systems Research and affiliate faculty appointment in the Department of Electrical and Computer Engineering (in the A. James Clark School of Engineering).

He received degrees in mathematics and EE/CS from MIT in 1985, and a Ph.D. in applied mathematics from Harvard University in 1989. His research interests include simulation optimization and applied probability, with applications in supply chain management and financial engineering. At Maryland, he received the Business School's Allen J. Krowe Award for Teaching Excellence in 1995, the Institute for Systems Research Outstanding Systems Engineering Faculty Award in 2002, and was named a Distinguished Scholar-Teacher for 2004-2005. He has published four books: *Conditional Monte Carlo: Gradient Estimation and Optimization Applications*, which received the INFORMS Simulation Society Outstanding Publication Award in 1998; *Simulation-based Algorithms for Markov Decision Processes*; *Perspectives in Operations Research*; and *Advances in Mathematical Finance*.

He served as Stochastic Models and Simulation Department Editor of *Management Science* from 2006-2008, as Simulation Area Editor of *Operations Research* from 2000-2005, and also on the editorial boards of *Mathematics of Operations Research*, *INFORMS Journal on Computing*, *IIE Transactions*, and *Production and Operations Management*.

He also served as Program Chair of the 2011 Winter Simulation Conference and as Program Director of the Operations Research Program at the National Science Foundation from September 2010 to August 2012. He is a Fellow of INFORMS and IEEE.

**Thursday, April 26, 10:45 - 11:45AM, Math Tower 1-122**

**Speaker:** Michael Kokkolaras, Department of Mechanical Engineering, University of Michigan, Ann Arbor

**Title:** Rigorous Engineering Design by means of Mathematical Programming

**Friday, April 27, 10:30-12:00PM, Math Tower 4-130 (Math Department Seminar Room)**

**Speaker:** Albert N. Shiryaev, Steklov Mathematical Institute and Moscow State University, Moscow, Russia

**Title:** On Some Sequential Time-Dependent Bayesian Problems

**Abstract:** We present solutions of several sequential decision problems characterized by the property that the optimal stopping time is the time when a process-sufficient statistic reaches some (unknown) nonlinear boundary that depends on time. These problems include:

1. Testing of three statistical hypotheses about the drift of a Brownian motion;

2. The Chernoff problem of testing two hypotheses (m > 0 and m < 0) about the drift m of an observable Brownian motion;

3. A stochastic version of the trading rule "Buy and Hold."

For all these problems, the optimal boundaries satisfy Volterra-type integral equations of the second kind.

Tuesday, April 17, 2012, 1:00 - 2:30PM, Math Tower 1-122

**Speaker:** Mark S. Squillante, IBM T.J. Watson Research Center, Yorktown Heights, NY

**Title:** Linear Stochastic Loss Networks: Analysis and Optimization

**Abstract:** We investigate fundamental properties of throughput and cost in linear stochastic loss networks where customers enter at a fixed source node, are relayed from one node to the next node in a fixed sequence of nodes, and exit the network at a fixed destination node. The maximum throughput of the stochastic network with exponential service times is derived and the arrival process that maximizes throughput, given a fixed arrival rate, is established. We first show that it is feasible to achieve an asymptotic throughput scalability of $c/\sqrt{k}$ in a linear $k$-node loss network as the size of the stochastic network grows, where the maximum achievable $c$ in a network with exponential service times is shown to be equal to the service rate multiplied by $1/\sqrt{\pi}$. Then, for general service times, an asymptotically critical loading regime is identified such that the probability of an arbitrary customer being lost is strictly within $(0, 1)$ as the network size increases. This regime delivers throughput comparable to the maximum at a relatively low network cost, and we further establish the asymptotic throughput and network cost under this critical loading. These results support a general framework for the optimization of trade-offs between throughput and cost in the critical regime under a wide variety of utility functions. Some previous work that motivated and led to our investigation of this particular stochastic network will be discussed.

**Tuesday, February 28, Math Tower 1-122, 1:00-2:30**

**Speaker:** Pavlo O. Kasyanov, Institute for Applied System Analysis, National Technical University of Ukraine "Kyiv Polytechnic Institute"

**Title:** Average-Cost Markov Decision Processes with Weakly Continuous Transition Probabilities

**Abstract:** This talk presents sufficient conditions for the existence of stationary optimal policies for average-cost Markov Decision Processes with Borel state and action sets and with weakly continuous transition probabilities. The one-step cost functions may be unbounded, and action sets may be noncompact. Our main contributions are: (i) general sufficient conditions for the existence of stationary discount-optimal and average-cost optimal policies and descriptions of properties of value functions and sets of optimal actions, (ii) a sufficient condition for the average-cost optimality of a stationary policy in the form of optimality inequalities, and (iii) approximations of average-cost optimal actions by discount-optimal actions.

**The talk is based on a joint paper with Eugene A. Feinberg and Nina V. Zadoianchuk**
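Contribution (iii), approximating average-cost optimal actions by discount-optimal ones, can be illustrated on a toy finite MDP via the vanishing-discount idea: solve the discounted problem for a discount factor β near 1, and read off both a policy and the average-cost estimate (1-β)v. The two-state machine-replacement model below (matrices `COST` and `TRANS`, function `discounted_vi`) is invented for the illustration, not taken from the talk:

```python
# Toy machine-replacement MDP (hypothetical numbers): state 0 = working,
# state 1 = broken; action 0 = keep, action 1 = replace (cost 1, back to working).
COST = {(0, 0): 0.0, (1, 0): 2.0, (0, 1): 1.0, (1, 1): 1.0}
TRANS = {  # TRANS[(s, a)] = [Pr(next = 0), Pr(next = 1)]
    (0, 0): [0.8, 0.2], (1, 0): [0.0, 1.0],
    (0, 1): [1.0, 0.0], (1, 1): [1.0, 0.0],
}

def discounted_vi(beta, iters=10000):
    """Value iteration for the beta-discounted problem; returns (values, policy)."""
    v = [0.0, 0.0]
    for _ in range(iters):
        v = [min(COST[s, a] + beta * sum(p * v[s2] for s2, p in enumerate(TRANS[s, a]))
                 for a in (0, 1))
             for s in (0, 1)]
    policy = [min((0, 1),
                  key=lambda a: COST[s, a]
                  + beta * sum(p * v[s2] for s2, p in enumerate(TRANS[s, a])))
              for s in (0, 1)]
    return v, policy

for beta in (0.9, 0.99, 0.999):
    v, policy = discounted_vi(beta)
    # (1 - beta) * v approaches the optimal average cost as beta -> 1.
    print(beta, policy, [(1 - beta) * x for x in v])
```

As β increases, the discount-optimal policy stabilizes (keep when working, replace when broken) and (1-β)v converges to the optimal average cost of this toy chain, 1/6.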

**Wednesday, December 22, 10:30-12:00, Math Tower 1-122**

**Speaker:** Konstantin Avrachenkov, INRIA - Sophia Antipolis, France

**Title:** Monte Carlo Methods for Top-k Personalized PageRank Lists with Application to Name Disambiguation

**Abstract:** We study the problem of quick detection of top-k Personalized PageRank lists. This problem has a number of important applications, such as finding local cuts in large graphs, estimation of similarity distance, and name disambiguation. In particular, we apply our results to construct efficient algorithms for the person-name disambiguation problem. We argue that two observations are important when finding top-k Personalized PageRank lists. First, it is crucial to quickly detect the top-k most important neighbors of a node, while the exact order within the top-k list, as well as the exact values of PageRank, are far less crucial. Second, a small number of wrong elements in a top-k list does not really degrade its quality, but tolerating them can lead to significant computational savings. Based on these two key observations, we propose Monte Carlo methods for fast detection of top-k Personalized PageRank lists. We provide a performance evaluation of the proposed methods and supply stopping criteria.
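A minimal sketch of the random-walk-with-restart estimator that underlies Monte Carlo methods of this kind (the toy graph, the function name `top_k_ppr`, and the parameter values are invented for illustration; the talk's actual algorithms and stopping criteria are more refined):

```python
import random
from collections import Counter

def top_k_ppr(graph, seed, k, alpha=0.15, runs=20000, rng=random.Random(0)):
    """Estimate the top-k Personalized PageRank list by random walks with restart.

    Each walk starts at `seed`; at every step it terminates with probability
    alpha, otherwise it moves to a uniformly random out-neighbor.  Normalized
    visit frequencies approximate the PPR vector personalized to `seed`."""
    visits = Counter()
    for _ in range(runs):
        node = seed
        while True:
            visits[node] += 1
            if rng.random() < alpha or not graph[node]:
                break                      # restart (or dead end): walk ends
            node = rng.choice(graph[node])
    total = sum(visits.values())
    return [(n, c / total) for n, c in visits.most_common(k)]

graph = {  # toy directed graph (adjacency lists)
    'a': ['b', 'c'], 'b': ['c'], 'c': ['a'], 'd': ['a'],
}
print(top_k_ppr(graph, 'a', 3))
```

Note that only the identity of the top-k nodes needs to stabilize, not their exact scores, which is precisely why a modest number of walks suffices.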

Tuesday, October 19, 2010, 2:30 pm, AMS Seminar Room 1-122A

**Speaker**: Yuri Suhov, Statistical Laboratory, Faculty of Mathematics, University of Cambridge, UK

**Title**: Service systems with limited selection of location

**Abstract:** In a standard single-server queueing system, an arriving task and a (conservative) server do not have a choice: every task joins the same queue and will be served in accordance with the queueing discipline. Many modern systems give a possibility of choice: an arriving task can select a shorter queue, while servers may be preferentially allocated to queues that are longer, or vice versa. An interesting situation arises when the choice is limited, e.g., an arriving task selects two queues at random out of N and joins the shorter, while a server selects two queues at random and serves the longer, or, again, vice versa. As N becomes large (and the arrival rates are properly re-scaled), the situation simplifies and admits some exact (and surprising) solutions, first shown by Dobrushin, Karpelevich, and Vvedenskaya (1996).

In the talk I'll give a review and report some new results in this direction.
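A classical static analogue of this limited-choice phenomenon, the "power of two choices" in balls-into-bins, is easy to simulate. The sketch below illustrates the general effect only; it is not the queueing model of the talk, and the function `max_load` and its parameters are invented for the example:

```python
import random

def max_load(n_bins, n_balls, d, rng):
    """Throw n_balls into n_bins; each ball samples d bins uniformly at random
    and goes into the least loaded of them.  Returns the maximum bin load."""
    bins = [0] * n_bins
    for _ in range(n_balls):
        choice = min(rng.sample(range(n_bins), d), key=lambda i: bins[i])
        bins[choice] += 1
    return max(bins)

rng = random.Random(1)
n = 100_000
# With one random choice the max load grows like ln n / ln ln n;
# with two choices it drops to about ln ln n / ln 2: an exponential improvement.
print("d = 1:", max_load(n, n, 1, rng))
print("d = 2:", max_load(n, n, 2, rng))
```

The same mechanism is what makes the "join the shorter of two sampled queues" policy so effective as N grows.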

Friday, May 7, 2010, 2:45-3:45 pm, AMS Seminar Room (Math Tower 1-122A)

Stochastic analysis of the trading rule "Buy and Hold"

Albert N. Shiryaev, Steklov Mathematical Institute and Moscow State University, Moscow, Russia

We consider several variants of nonstandard problems of finding the optimal time to sell a stock, trying, for example, to maximize the expectation of the ratio S(T)/max S(t), where T is a stopping time with values in the trading time interval (0,1) and the max is taken over t in this interval. Different assumptions about the stock process and different formulations of the optimization problems will be presented.
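The quantity E[S(T)/max S(t)] can be explored by straightforward Monte Carlo. The sketch below (the function name, the geometric-Brownian-motion model, the discretization, and the parameter values are illustrative assumptions, not from the talk) estimates it for the pure buy-and-hold rule that always sells at the horizon T = 1:

```python
import math
import random

def goodness_of_buy_and_hold(mu, sigma, steps=100, paths=5000, seed=0):
    """Monte Carlo estimate of E[S(1) / max_{0<=t<=1} S(t)] for a geometric
    Brownian motion dS = mu*S dt + sigma*S dW, discretized on a uniform grid."""
    rng = random.Random(seed)
    dt = 1.0 / steps
    drift = (mu - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(paths):
        log_s = 0.0
        log_max = 0.0                       # log of the running maximum of S
        for _ in range(steps):
            log_s += drift + vol * rng.gauss(0.0, 1.0)
            log_max = max(log_max, log_s)
        total += math.exp(log_s - log_max)  # S(1) / max_t S(t), always <= 1
    return total / paths

print(goodness_of_buy_and_hold(mu=0.10, sigma=0.20))
```

The ratio is at most 1 by construction; comparing this hold-to-the-end estimate against other stopping rules is one way to appreciate the optimization problems discussed in the talk.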

Wednesday, May 5, 2010, 11:30-12:30 pm, Computer Science 1211

Sensor Localization Using Data Correlation and the Occam's Razor Principle

Alon Efrat, University of Arizona

**Abstract:** We present an algorithm for computing a combined solution to two problems in sensor networks: (1) clustering the sensors into groups, each meaningfully related, and (2) solving the localization problem of determining position estimates (in global coordinates) for each sensor. We assume that initially only a rough approximation of the set of sensor positions is known. Our algorithm applies the "Occam's razor principle" by computing a "simplest" explanation (in a precise sense, defined below) for the measurement data collected by the sensors. We present both a centralized and a distributed algorithm for this problem, as well as efficient heuristics.

**Monday, May 3, 2010, 10:00-11:00 am, AMS Seminar Room (Math Tower 1-122A)**

Optimality of Trunk Reservation for an M/M/k/N Queue with Several Customer Types and Holding Costs

Fenghsu Yang, Department of Applied Mathematics and Statistics, Stony Brook University

We study optimal admission to an M/M/k/N queue with several customer types. The reward structure consists of revenues collected from admitted customers and holding costs, both of which depend on customer types. The goal is to find an admission policy that maximizes average rewards per unit time. The talk describes the structures of optimal, canonical, bias optimal, and Blackwell optimal policies. Under natural conditions, there exists an optimal trunk reservation policy. In addition, any stationary optimal policy is either a trunk reservation policy or can be transformed easily into a trunk reservation policy. Similar to the case without holding costs, bias optimal and Blackwell optimal policies are unique, coincide, and are defined by the largest optimal control levels for each customer type. Problems with one holding cost rate for all customers have been studied previously in the literature. The talk is based on a joint paper with Eugene Feinberg.

Thursday, April 8th, 2010, 11:00 - 12:00 pm, AMS Seminar Room (Math Tower 1-122A)

Mark E. Lewis, Cornell University, School of Operations Research and Information Engineering

**Title**: Dynamic Control of a Service Center with Abandonments

In this talk we study the dynamic control of a single server that must meet the service requirements of two parallel queues. Customers arrive to each queue according to independent Poisson processes. Each customer either waits until the service is completed, or their (independent and exponential) patience runs out. The service requirements of customers are exponential, and the service rate of the server is fixed and known. Two cost/reward models are considered. In the first, customers are differentiated by their holding cost rate and the penalty charged for customer abandonment. In the second model, a reward is received for service of each customer class. The challenges include the fact that none of the traditional methods (interchange arguments, uniformization) easily extend. This talk will explain these challenges, how they are addressed, and where the optimal policy defies intuition.

**Monday, November 23, 2009, 1:00-2:00, Math Tower 1-122**

Mark Kelbert, Department of Mathematics, Swansea University, UK

Abstract. The "bird's eye" view of the actuarial ruin problem is captured by the so-called Cramer-Lundberg model, which represents the current capital as the difference between incoming payments and outgoing claims. The simplest model for the claim flow is a compound Poisson process. We are interested in an asymptotic expansion of the ruin probability on a large time interval, when the initial capital tends to infinity and the ratio of the capital to the time tends to a constant. The standard saddle-point approximation fails on the Stokes lines; however, some refinements of this method provide uniform asymptotic expansions.
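For intuition about the model (not about the asymptotic analysis in the talk), the finite-horizon ruin probability in the Cramer-Lundberg setup is easy to estimate by simulation. The function `ruin_probability` and all parameter values below are illustrative assumptions:

```python
import random

def ruin_probability(u, c, lam, mean_claim, horizon, paths=5000, seed=0):
    """Monte Carlo estimate of the probability that the capital
    u + c*t - (compound Poisson claims) drops below 0 before `horizon`.
    Claims arrive at rate `lam` with exponentially distributed sizes."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(paths):
        t, claims = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)        # waiting time to the next claim
            if t > horizon:
                break                        # survived the whole interval
            claims += rng.expovariate(1.0 / mean_claim)
            if u + c * t - claims < 0.0:     # capital can only cross 0 at a claim
                ruined += 1
                break
    return ruined / paths

# Premium rate 1.2 vs. expected claim outflow 1.0: positive safety loading.
for u in (0.0, 5.0, 10.0):
    p = ruin_probability(u, 1.2, 1.0, 1.0, 50.0)
    print(f"u = {u:4}:  P(ruin before t = 50) ~ {p:.3f}")
```

The simulated probabilities decay as the initial capital u grows; the talk's asymptotics make that decay precise in the regime where u and the horizon grow together.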

This seminar is partially supported by the Grad School.

**Tuesday, May 5, 2009, 11:00 am, AMS Seminar Room, Math Tower 1-122**

Evdokia Nikolova, MIT, Stochastic Shortest Paths

How do we get to the airport on time? Ideally we would like to take the shortest path, however in the presence of uncertain traffic what does that mean? Is that the path with smallest expected travel time, or should we minimize the path variance or some other metric? One natural objective is to choose the path which maximizes our probability of arriving on time. This turns out to be equivalent to a non-convex optimization problem, for which no efficient algorithms are available. We develop algorithms that bridge stochastic, nonconvex and combinatorial optimization. In fact, our algorithms extend to solve a much more general framework of stochastic optimization that incorporates risk, beyond shortest paths.

In an alternative route planning model, we seek adaptive algorithms which tell us where to go at every node along the way, given the realized edge values so far and the edges adjacent to our current position. This problem, called the Canadian traveler problem, turns out very challenging even with simple linear objectives which aim to minimize the expected route length. We provide the optimal policies (adaptive algorithms) for a class of graphs based on Markov Decision Processes and conclude with intriguing open problems.
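The on-time-arrival objective from the first paragraph can be made concrete with Gaussian edge delays: for a fixed path the probability of arriving by deadline T is Φ((T - μ)/σ), and which path wins depends on the deadline, which is why the problem is not simply minimizing expected travel time. The routes and numbers in this toy example are invented for illustration:

```python
import math

def on_time_probability(edges, deadline):
    """P(total travel time <= deadline) for a path whose edge times are
    independent Gaussians given as (mean, variance) pairs."""
    mu = sum(mean for mean, var in edges)
    sigma = math.sqrt(sum(var for mean, var in edges))
    z = (deadline - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Two hypothetical routes to the airport (times in minutes):
highway = [(30.0, 100.0)]                 # fast on average, but high variance
back_road = [(20.0, 2.0), (20.0, 2.0)]    # slower on average, very reliable

for deadline in (35.0, 60.0):
    p_h = on_time_probability(highway, deadline)
    p_b = on_time_probability(back_road, deadline)
    winner = "highway" if p_h > p_b else "back road"
    print(f"deadline {deadline}: highway {p_h:.3f}, back road {p_b:.3f} -> {winner}")
```

With a tight deadline the risky highway maximizes the on-time probability; with a loose one the reliable back road does. Optimizing this objective over all paths of a graph is the nonconvex combinatorial problem the talk addresses.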

**Monday, March 23, 2009, 4:00pm, AMS Seminar Room, Math Tower 1-122**

Production Systems Engineering: Main Problems, Solutions, and Applications

S.M. Meerkov, Department of Electrical Engineering and Computer Science, University of Michigan, Ann Arbor, MI

Production Systems Engineering (PSE) is an emerging branch of Engineering intended to uncover fundamental principles that govern production systems and utilize them for the purposes of analysis, continuous improvement, and design. In PSE, the machines are assumed to be unreliable and the buffers are finite. Under these assumptions, production lines are nonlinear stochastic systems. The study of their statics and dynamics is the goal of PSE.

In this talk, the main problems of PSE and their solutions will be described along with a few applications. In addition, the so-called PSE Toolbox, which implements the methods and algorithms developed, will be discussed.

The main results of PSE are summarized in a recent textbook: J. Li and S.M. Meerkov, *Production Systems Engineering*, Springer 2009. More information on the textbook and a demo of the toolbox can be found at http://www.ProductionSystemsEngineering.com/

**Thursday, March 19, 2009, 11:30 am, AMS Seminar Room, Math Tower 1-122**

Accuracy Certificates for Computational Problems with Convex Structure

Uriel G. Rothblum, Technion, Haifa, Israel

This talk introduces the notion of certificates which verify the accuracy of solutions of computational problems with convex structure; such problems include minimizing convex functions, variational inequalities with monotone operators, computing saddle points of convex-concave functions, and solving convex Nash equilibrium problems. We demonstrate how the implementation of the Ellipsoid method and other cutting plane algorithms can be augmented with the computation of such certificates without essential increase of the computational effort. Further, we show that (computable) certificates exist whenever an algorithm is capable of producing solutions of guaranteed accuracy. This talk is based on a joint paper with Arkadi Nemirovski and Shmuel Onn.

**Date:** Monday, March 2, 2015; 1:00 - 2:00 PM; Math Tower, Seminar Room 1-122A

**Speaker:** Raphaël Douady, CNRS (National Center for Scientific Research)

**Title:** Nonlinear Polymodels and the StressVaR: New Risk Concepts for Fund Allocation

**Abstract:** We introduce a novel approach to risk estimation based on nonlinear "poly-models": rather than one multi-factor model, the risk of an investment is represented by a collection of nonlinear dynamic single-factor models. Using this approach, we build a risk measure, the "StressVaR" (SVaR), which combines the notions of Value-at-Risk and stress scenarios. Developed to evaluate the risk of hedge funds, the SVaR appears to be applicable to a wide range of investments. The computation of the StressVaR is a three-step procedure whose main components we describe in relative detail. Its principle is to use the fairly short and sparse history of the hedge fund returns to identify relevant risk factors among a very broad set of possible risk sources. This risk profile is obtained by calibrating a collection of nonlinear single-factor models, as opposed to a single multi-factor model. We then use the risk profile and the very long and rich history of the factors to assess the possible impact of known past crises on the funds, unveiling their hidden risks and so-called "black swans".

In backtests using data on 1,060 hedge funds, we demonstrate that the SVaR has properties better than or comparable to several common VaR measures: it shows fewer VaR exceptions and, perhaps even more importantly, exceeds the threshold by smaller amounts when an exception does occur.

The ultimate test of the StressVaR, however, is in its use as a fund allocation tool. By simulating a realistic investment in a portfolio of hedge funds, we show that the portfolio constructed using the StressVaR on average outperforms both the market and the portfolios constructed using common VaR measures.

For the period from Feb. 2003 to June 2009, the StressVaR constructed portfolio outperforms the market by about 6% annually, and on average the competing VaR measures by around 3%.

The performance numbers from Aug. 2007 to June 2009 are even more impressive. The SVaR portfolio outperforms the market by 20%, and the best competing measure by 4%.

Joint work with Ilija I. Zovko, Cyril Coste and Alexander Cherny.
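As a rough illustration of the polymodel idea, the sketch below fits one nonlinear (polynomial) single-factor model per factor on the fund's short return history, stresses each factor at an extreme quantile of its long history, and takes the worst predicted loss as a StressVaR-like number. The polynomial form, the 1% stress quantile, and the synthetic data are all illustrative assumptions, not the authors' actual three-step procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

def stress_var(fund_returns, factor_histories, q=0.01, deg=3):
    """Toy 'polymodel' sketch: one nonlinear single-factor fit per factor,
    each factor stressed at its extreme quantile q; return the worst
    predicted loss across factors (illustrative only)."""
    worst = 0.0
    n = len(fund_returns)
    for hist in factor_histories:
        recent = hist[-n:]                              # align with short fund history
        coefs = np.polyfit(recent, fund_returns, deg)   # nonlinear single-factor model
        stress = np.quantile(hist, q)                   # extreme move from long factor history
        loss = -np.polyval(coefs, stress)               # predicted fund loss under stress
        worst = max(worst, loss)
    return worst

# Synthetic example: the fund loads nonlinearly on factor 0 only.
factors = [rng.normal(0, 0.01, 2500) for _ in range(3)]
fund = 0.5 * factors[0][-250:] - 4.0 * factors[0][-250:]**2 + rng.normal(0, 0.002, 250)
print(round(stress_var(fund, factors), 4))
```

The key design point the abstract emphasizes survives even in this toy: each factor gets its own nonlinear model, and the long factor histories (2,500 observations here) supply the stress scenarios that the short fund history (250 observations) cannot.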

**Date:**Monday, November 10, 2014; 1:00 - 2:00 PM; Math Tower, Room 1-122

**Speaker:**Pawel Polak, Columbia University

**Title: **Portfolio Selection with Active Risk Monitoring

**Abstract:** The paper proposes a unified framework for large-scale portfolio optimization which utilizes advanced risk measures, monitors the risk exposure of the optimal portfolio, and accounts for all the major stylized facts of multivariate financial returns, including volatility clustering, dynamics in the dependency structure, asymmetry, heavy tails, and non-ellipticity. It introduces a risk fear portfolio strategy, which combines portfolio optimization with active risk monitoring. The former selects optimal portfolio weights; the latter, independently, initiates market exit in case of excessive risks. The strategy agrees with the stylized fact of major stock market sell-offs during the initial stage of market downturns. The advantages of the new framework and the new portfolio strategy are illustrated with an extensive empirical study. It is shown that the framework leads to superior multivariate density and Value-at-Risk forecasting, and to better portfolio performance. The proposed risk fear portfolio strategy outperforms all types of optimal portfolios even in the presence of conservative transaction costs and frequent rebalancing. In particular, the new strategy avoids all the losses during the 2008 financial crisis, and it profits from the market recovery after the crisis.

**Bio:** Pawel Polak is a visiting scholar at Columbia University in the Department of Statistics, and a postdoc at the University of Zurich in the Department of Banking and Finance. He received a Ph.D. in Banking and Finance in December 2013 from the University of Zurich and the Swiss Finance Institute; in June 2009 he graduated from the Institute for Advanced Studies in Vienna with a diploma in Economics; and he received an M.S. in Mathematics in June 2007 from the University of Warsaw. He is the recipient of a Swiss National Science Foundation (SNSF) Early Post-Doc Mobility Fellowship (2014), a Swiss Finance Institute Scholarship (2010), and an Austrian Lottery Excellence Award (2009). His research interests lie in applications of statistical methods in portfolio optimization, risk management, and option pricing.

**"Pizza with the Professor" seminar series for AMS faculty and graduate students**

**Date:** Wednesday, September 24, 2014; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor Sergio Focardi

**Title:** The Mathematics of Financial and Economic Crises. **Subtitle:** Market Crashes, Forest Fires, Earthquakes: What Do They Have in Common?

**Abstract:** Market crashes and economic crises are fascinating phenomena with serious consequences for single investors and, often, for the economy at large. The classical intuitive explanations for crises given by Charles Kindleberger, 1978, *Manias, Panics, and Crashes: A History of Financial Crises*, and by Hyman Minsky, 1992, "The Financial Instability Hypothesis," have been analyzed using the mathematical tools of the theory of complexity. The theory of random graphs, percolation, and interacting particle systems offers mathematical tools to explain phenomena such as the aggregation of decision-making processes and the propagation of financial distress among connected entities. At ETH Zurich, Didier Sornette argues that the theory of self-reinforcing processes explains the formation of bubbles and instabilities that lead to financial crises. This survey seminar will introduce the tools that are at the leading edge of scientific explanations of crises. The same tools are used to explain many catastrophic phenomena, from forest fires to earthquakes.

**Date:** Wednesday, September 24, 2014; 2:30 - 3:30 PM; Math Tower, Seminar Room 1-122

**Speaker:** Dr. Bo Zhang, Research Staff Member in Mathematical Sciences and Analytics, IBM Research

**Title: **Efficient Monte Carlo Counterparty Credit Risk Pricing and Measurement

**Abstract:** Counterparty credit risk (CCR), a key driver of the 2007-08 credit crisis, has become one of the main focuses of the major global and U.S. regulatory standards. Financial institutions invest large amounts of resources employing Monte Carlo simulation to measure and price their counterparty credit risk. We have developed the first efficient Monte Carlo CCR estimation framework by focusing on the most widely used and regulatory-driven CCR measures: expected positive exposure (EPE), credit value adjustment (CVA), and effective expected positive exposure (EEPE). Efficiency criteria under consideration are variance, bias, and computing time of the Monte Carlo estimators.

From a mathematical point of view, our framework is one for efficiently estimating the Riemann-Stieltjes integral of the mean of a stochastic process that depends on many other stochastic processes in a complicated way. I shall illustrate this framework by discussing in detail the estimation for the simplest CCR measure, EPE. I shall briefly comment on the extensions to CVA and EEPE. Numerical examples will be presented to demonstrate that our proposed efficient Monte Carlo estimators outperform the existing crude estimators of various CCR measures substantially in terms of the mean square error.

This is based on joint work with Samim Ghamami at the U.S. Federal Reserve Board.
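To make the quantity concrete: for an exposure process V_t, EPE over [0, T] is the time average of E[max(V_t, 0)]. A deliberately crude Monte Carlo estimator, of the kind the talk's efficient estimators are compared against, can be sketched as follows; the toy Brownian exposure standing in for the real portfolio value is an assumption of this example:

```python
import numpy as np

rng = np.random.default_rng(1)

def epe_crude(n_paths=20000, n_steps=50, T=1.0, sigma=0.2):
    """Crude Monte Carlo estimate of expected positive exposure
    EPE = (1/T) * integral_0^T E[max(V_t, 0)] dt
    for a toy exposure V_t driven by Brownian motion (illustration only,
    not the paper's efficient estimator)."""
    dt = T / n_steps
    v = np.zeros(n_paths)
    acc = 0.0
    for _ in range(n_steps):
        v += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        acc += np.maximum(v, 0.0).mean() * dt   # Riemann sum over the time grid
    return acc / T

print(round(epe_crude(), 4))
```

For this toy exposure the answer is known in closed form (sigma * (2/3) * sqrt(T) / sqrt(2*pi), about 0.053 here), which makes it a convenient sanity check for any estimator of the Riemann-Stieltjes integral described in the abstract.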

**Bio:**

*Bo Zhang is a Research Staff Member in Mathematical Sciences and Analytics at IBM Research. His current research focuses on three application areas: financial risk management; resource management and data security for information technology and systems (in particular, cloud computing and mobile technology-based systems); and decision-making for emerging healthcare practices. He uses mathematical tools in probability, statistics, dynamical systems, and optimization to solve practical problems, and also works on theoretical problems in these mathematical fields. He has held visiting scholar positions with the U.S. Federal Reserve Board, Columbia University, National University of Singapore, Bell Labs, New York University, and the Center for Mathematics and Computer Science in the Netherlands, and has taught operations and inventory management at Columbia University. His research has been recognized by an IBM Outstanding Innovation Award, the INFORMS George Nicholson Prize, and the Best Student Paper award at the IFIP Performance 2010 conference.*

**Speaker:** Sergei Levendorskii, University of Leicester, UK

Monday, August 18, 2014

2:00 - 3:00 PM; AMS Seminar Room (1-122)

**Title:**Efficient Laplace and Fourier inversions and Wiener-Hopf factorization in financial applications

**Abstract:**

Many important problems in computational finance can be reduced to calculation of expectations of stochastic expressions. The Fourier transform, Laplace transform and Wiener-Hopf factorization are standard tools for calculation of these expectations in terms of integrals over appropriate contours in the complex plane. In applications, these integrals must be calculated sufficiently accurately and very fast, especially when a model is calibrated to the data. Insufficiently accurate numerical procedures can lead to seriously incorrect calibration results such as sundial calibration, when a correct model with heavy tails is never seen; ghost calibration, when a local minimum of the calibration error is found when there is none; and a moderately reasonable calibration to vanillas which leads to large errors in prices of barrier options.

In the talk, it is explained how a family of conformal deformations of the contours of integration in pricing formulas can be used to significantly increase the accuracy and speed of calculations. The same idea can be used for simulations, and in other situations when accurate Laplace and Fourier inversions are needed.

**Thursday, April 25, 2013; 1:15 - 2:15PM; Math Tower, Seminar Room 1-122**

**Speaker:**Xinyun Chen, Columbia University

**Title:**A Continuous Time Stochastic Model for Spread-Price Dynamics in Order-Driven Markets

**Abstract:** The availability of high-frequency order book data has stimulated a great deal of research on the relationship between order flow, liquidity, and price dynamics. Most of the literature concentrates on pricing models which are not fully informed by the order flow. In contrast, we construct and study a continuous time model that incorporates the whole order book to inform the joint evolution of the spread and the price processes in a bottom-up approach. The construction of our model is guided by empirical investigations on limit order books. In particular, empirical observations suggest a multi-scale asymptotic regime, under which we obtain a jump-diffusion process governing the evolution of the spread-price dynamics from the order flows. The simulation results for the price-spread process reproduce stylized features observed empirically. This is joint work with Jose Blanchet.

**Monday, April 22, 2013; 1:15-2:15PM; Location: Mathematics Tower, Seminar Room 1-122A**

**Speaker:**Maxim Bichuch, Princeton University

**Title**: Pricing with Transaction Costs

**Abstract**: I price a contingent claim liability using the utility indifference argument. An agent with exponential utility invests in a stock and a money market account with the goal of maximizing the utility of his investment at the final time in the presence of a positive transaction cost, in two cases: with and without a contingent claim liability. I provide a rigorous derivation of the asymptotic expansion of the value function in the transaction cost parameter around the known value function for the case of zero transaction cost, in both cases. Additionally, using the utility indifference method, I derive an asymptotic expansion of the price of the contingent claim liability. In both cases, I also obtain a "nearly optimal" strategy, whose utility asymptotically matches the leading terms of the value function.

**Thursday, April 4, 2013; 1:30-2:30PM; Location: Mathematics Tower, Seminar Room 1-122A**

**Speaker:**Keli Xiao

**Title:** Data Mining and Its Business Applications: Case Studies in High-Value House Discovery and Mobile Recommender Systems

**Abstract:** Data mining techniques have become a significant part of approaches to solving problems in e-commerce and e-finance. Based on four basic technical tasks (clustering, association analysis, predictive modeling or classification, and anomaly detection), researchers seek effective solutions to problems in different fields while considering time and space costs. In this talk, I briefly introduce the four tasks and the general process of data mining and its applications in business. Then, I present two of my works as case studies: one addresses the residential real estate ranking problem, and the other is a driving-route mobile recommender system.

In the first work, we aim to provide a residential real estate ranking system based on property-related information, such as geographic information and human mobility patterns. The features from urban geography data are summarized by spatial statistics, and the features from human mobility patterns are derived by popularity modeling. Finally, we conduct a comprehensive experimental study on real-world data and develop an interactive discovery system to illustrate the power of the proposed ranking system.

In the second work, we provide a focused study of extracting energy-efficient transportation patterns from location traces. As a case study, we develop a mobile recommender system that recommends a sequence of pick-up points for taxi drivers, or a sequence of potential parking positions, based on our proposed Potential Travel Distance (PTD) function. The experimental results show that the proposed system provides effective mobile sequential recommendations and that the knowledge extracted from location traces can be used for coaching drivers.

**Biography:** Mr. Keli Xiao is completing his Ph.D. in Finance at Rutgers Business School - Newark & New Brunswick. He received his master's degree in quantitative finance from Rutgers Business School in 2009 and his master's degree in computer science from Queens College, City University of New York, in 2008. Mr. Xiao's general area of research in finance is real estate finance and asset pricing, with a focus on the study of housing bubbles. His dissertation aims to analyze the generating process of housing bubbles and to study the factors associated with or contributing to them. In addition, data mining and its business applications are also part of his research interests. Some of his works have been published in refereed journals and conference proceedings in the field of data mining, such as *ACM Transactions on Knowledge Discovery from Data* (TKDD) and the *ACM SIGKDD International Conference on Knowledge Discovery and Data Mining* (KDD).

**Monday, March 18, 2013; 1:30-2:30PM; Location: Mathematics Tower, Seminar Room 1-122**

**Speaker:**Stefan Trueck, Professor of Finance at Macquarie University in Sydney

**Bio:** Stefan Trueck is a Professor of Finance at Macquarie University in Sydney. Before joining Macquarie, he held positions at Queensland University of Technology and the University of Karlsruhe in Germany, where he received a PhD in Business Engineering. His current research interests focus on risk management and financial econometrics, including the fields of credit risk, operational risk, commodity markets, emissions trading, and real estate finance. He also has several years of consulting experience with financial institutions. Stefan has published in international high-impact journals including The Journal of Banking and Finance, European Journal of Finance, Energy Economics, The Economic Record, Global Environmental Change, Pacific-Basin Finance Journal, The Journal of Credit Risk, Computational Statistics, Studies in Nonlinear Dynamics & Econometrics, and Journal of Property Investment and Finance. He also holds two ARC Discovery Grants, on "Managing the risk of price spikes, dependencies and contagion effects in Australian electricity markets" and "Risk management with real-time financial and business conditions indicators".

**Title:** How Is Convenience Yield Risk Priced?

**Abstract:** We investigate how convenience yield risk is priced in commodity markets. The convenience yield can be considered one of the most important features in commodity derivatives markets and may well be one of the key risk factors that impact the pricing of such contracts. While there is an extensive body of literature examining convenience yields in commodity markets, existing research provides only limited knowledge about the convenience yield risk premium. Based on the Gibson-Schwartz two-factor model, we construct portfolios that are only sensitive to convenience yield risk and investigate the nature of the risk premium. We find that, for the commodity markets considered, observed convenience yield risk premiums are generally positive and time-varying. We also find that risk premiums are very large for metals and grains, while we do not find significant convenience yield risk premiums for oil and gas. Our results indicate that variations in the convenience yield risk premium across different commodities are related to the market structure.

**Wednesday February 6, 2013; 12:00-1:00 PM; Math Tower, Room S-240**

**Speaker:**Youri Kabanov, University of Franche-Comte, France

**Title**: Arbitrage Theory under Transaction Costs

**Abstract:** We give an overview of key results of a "general theory" of financial markets with proportional transaction costs. It covers no-arbitrage criteria (the analogs of FTAP for frictionless markets) and hedging theorems for European and American contingent claims.

**Friday, November 16, 2012; 1:30-2:30 PM; Harriman Hall, Room 102**

**Speaker:** Michael C. Fu, University of Maryland (joint work with Huashuai Qu)

**Title:** Augmenting Simulation Metamodels with Direct Gradient Estimates

**Abstract:** Traditional response surface methods such as regression fit a function using only output data from the response itself. However, in many settings found in stochastic simulation, direct gradient estimates are available. We propose several approaches that augment traditional regression and stochastic kriging models by incorporating the additional gradient information. Theoretical results for the regression setting, along with numerical experiments for both settings, are reported, which indicate the potential improvements that can be achieved.
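One way to see the idea: if both responses and direct gradient estimates are observed, a regression metamodel can be fit by stacking both kinds of observations into a single least-squares system. The quadratic model, noise levels, and data below are hypothetical, and this sketches only a regression variant, not the talk's stochastic kriging approach:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fit a quadratic metamodel y ~ b0 + b1*x + b2*x^2 using both
# noisy responses y and noisy direct gradient observations
# dy/dx ~ b1 + 2*b2*x, stacked into one least-squares system.
x = rng.uniform(-1, 1, 30)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, 30)    # responses
g = 2.0 - 6.0 * x + rng.normal(0, 0.1, 30)                 # direct gradients

A_resp = np.column_stack([np.ones_like(x), x, x**2])                  # rows for y
A_grad = np.column_stack([np.zeros_like(x), np.ones_like(x), 2 * x])  # rows for dy/dx
A = np.vstack([A_resp, A_grad])
b = np.concatenate([y, g])

beta, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(beta, 1))  # close to the true coefficients [1.0, 2.0, -3.0]
```

The gradient rows double the number of equations without any extra simulation replications, which is the source of the variance reduction the abstract alludes to.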

**Biography:** Michael Fu is Ralph J. Tyser Professor of Management Science in the Robert H. Smith School of Business, University of Maryland at College Park, with a joint appointment in the Institute for Systems Research and affiliate faculty appointment in the Department of Electrical and Computer Engineering (in the A. James Clark School of Engineering).

He received degrees in mathematics and EE/CS from MIT in 1985, and a Ph.D. in applied mathematics from Harvard University in 1989. His research interests include simulation optimization and applied probability, with applications in supply chain management and financial engineering. At Maryland, he received the Business School's Allen J. Krowe Award for Teaching Excellence in 1995, the Institute for Systems Research Outstanding Systems Engineering Faculty Award in 2002, and was named a Distinguished Scholar-Teacher for 2004-2005. He has published four books: *Conditional Monte Carlo: Gradient Estimation and Optimization Applications*, which received the INFORMS Simulation Society Outstanding Publication Award in 1998; *Simulation-based Algorithms for Markov Decision Processes*; *Perspectives in Operations Research*; and *Advances in Mathematical Finance*.

He served as Stochastic Models and Simulation Department Editor of *Management Science* from 2006-2008, as Simulation Area Editor of *Operations Research* from 2000-2005, and also on the editorial boards of *Mathematics of Operations Research*, *INFORMS Journal on Computing*, *IIE Transactions*, and *Production and Operations Management*.

He also served as Program Chair of the 2011 Winter Simulation Conference and as Program Director of the Operations Research Program at the National Science Foundation from September 2010 to August 2012. He is a Fellow of INFORMS and IEEE.

**Friday, November 2, 2012; 10:00 - 11:00 AM; Location: Mathematics Tower, Math Seminar Room 1-122A**

**Speaker:** Clemens Puppe, Dean, School of Economics and Business Engineering, KIT, Karlsruhe, Germany

**Title**: Majority Voting over Interconnected Propositions: The Condorcet Set

**Abstract:** Judgement aggregation is a model of social choice where the space of social alternatives is the set of consistent evaluations ('views') on a family of logically interconnected propositions, or yes/no issues. Unfortunately, simply complying with the majority opinion in each issue often yields a logically inconsistent collection of judgements. Thus, we consider the Condorcet set: the set of logically consistent views which agree with the majority in as many issues as possible. Any element of this set can be obtained through a process of diachronic judgement aggregation, where the evaluations of the individual issues are decided through a sequence of majority votes unfolding over time, with earlier decisions possibly imposing logical constraints on later decisions. Thus, for a fixed profile of votes, the ultimate social choice can depend on the order in which the issues are decided; this is called path-dependence. We investigate the size and structure of the Condorcet set, and hence the scope and severity of path-dependence, for several important classes of judgement aggregation problems.
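A minimal instance of the phenomenon is the classical discursive dilemma, which also makes the Condorcet set concrete. The three-judge profile below is a standard textbook example, not taken from the talk:

```python
from itertools import product

# Issues: p, q, and the conjunction r = (p and q).
# A view (p, q, r) is logically consistent iff r == (p and q).
judges = [(1, 1, 1), (1, 0, 0), (0, 1, 0)]

# Issue-wise majority: p passes 2-1, q passes 2-1, r fails 1-2,
# giving (1, 1, 0), which is logically inconsistent.
majority = tuple(int(sum(j[i] for j in judges) > len(judges) / 2) for i in range(3))
assert majority == (1, 1, 0)

# Condorcet set: consistent views agreeing with the majority on as
# many issues as possible.
consistent = [v for v in product((0, 1), repeat=3) if v[2] == (v[0] and v[1])]
agreement = {v: sum(v[i] == majority[i] for i in range(3)) for v in consistent}
best = max(agreement.values())
condorcet_set = [v for v, a in agreement.items() if a == best]
print(condorcet_set)  # [(0, 1, 0), (1, 0, 0), (1, 1, 1)]
```

Each of the three views in the Condorcet set agrees with the majority on two of the three issues, and each can be reached by deciding the issues in a different order, which is exactly the path-dependence the abstract describes.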

**Wednesday, May 16, 2012, 2:00 - 3:00PM, Math Tower, Seminar Room 1-122**

**Speaker:** Stephen F. LeRoy, Dept. of Economics, University of California, Santa Barbara

**Title:** Infinite Portfolio Strategies

**Abstract:** In continuous-time stochastic calculus a limit in probability is used to extend the definition of the stochastic integral to the case where the integrand is not square-integrable at the endpoint of the time interval under consideration. When the extension is applied to portfolio strategies, absence of arbitrage in finite portfolio strategies is consistent with existence of arbitrage in infinite portfolio strategies. The doubling strategy is the most common example. We argue that this extension may or may not make economic sense, depending on whether or not one thinks that valuation should be continuous. We propose an alternative extension of the definition of the stochastic integral under which valuation is continuous and absence of arbitrage is preserved. The extension involves appending a date and state called ∞ to the payoff index set and altering the definition of convergence under which gains on infinite portfolio strategies are defined as limits of gains on finite portfolio strategies.

**Tuesday, May 8, 2012, 2:00 - 3:00PM, Math Tower, Seminar Room 1-122**

**Speaker:** Alexander Melnikov, Professor, University of Alberta, Edmonton, Canada, E-mail: melnikov@ualberta.ca

**Title:** On Quantitative Risk-Management in Equity-Linked Life Insurance

**Abstract:** In the talk, we study equity-linked life insurance contracts with fixed and stochastic guarantees linked to the evolution of a financial market. The presence of a client's mortality risk does not allow perfect hedging, and we utilize imperfect hedging methodologies. These methodologies were developed in mathematical finance based on loss function conceptions (quantile and efficient hedging) and risk measures. We allow an insurance company to be exposed to a financial risk. The price of the contracts will be subject to a maximization/minimization of the expected loss function/risk measure under initial budget constraints. In the Black-Scholes and jump-diffusion settings, we derive equations separating financial and insurance risks embedded in the contracts and propose a methodology for effective risk-management of the contracts. Pooling homogeneous clients together enables the insurance company to take advantage of diversification of mortality risk. A large enough portfolio of life insurance contracts will result in a more predictable mortality exposure and reduced prices. The results will be illustrated with the help of financial indices (S&P 500 and the Russell 2000).

**Monday, May 7, 2012, 2:00 - 3:00PM, Math Tower, Seminar Room 1-122**

**Speaker: **Domenico Mignacca, Risk Management Director, Eurizon Capital SGR

**Title:** Risk Attribution, Risk Budgeting, and Portfolio Expected Return

**Abstract:** After a review of factor models, we concentrate our attention on risk attribution and risk budgeting. We will stress the importance of implied return and risk attribution and present an example of risk attribution from a management point of view.

**Thursday, May 3, 2012, 2:00 - 3:00PM, Math Tower, Seminar Room 1-122**

**Speaker:** Sergio Focardi, Professor, Finance, Law & Accounting Department, Edhec Business School, Nice France

**Title:** Factor Models in Finance

**Abstract:** This presentation discusses factor models in the theory and practice of finance. It presents the development of static and dynamic factor models and sketches the state-of-the-art of these models. It then outlines the development of modern approximate factor models and the techniques used to determine the number of factors, including methods based on random matrix theory and model selection criteria. Next, using results of original research conducted jointly with Professors Frank Fabozzi and Svetlozar Rachev, it discusses how factor models of prices offer better forecasting capability than factor models of returns.

**Monday, April 30, 2012, 1:00 - 2:00PM, Simons Center, Lecture Hall Room 102**

**Speaker:** John Mulvey

Professor, Bendheim Center for Finance Operations Research and Financial Engineering Department, Princeton University

**Title:** Optimal Asset Allocation and Asset-liability Management Systems: Lessons from the 2008/09 Crash

**Abstract:** Many institutional and individual investors lost considerable capital (25 to 30% or more) during the 2008/09 crash period. The traditional Markowitz model assumptions, such as fixed correlations, failed during the turbulent periods when contagion occurred across most asset categories. We discuss extensions of optimal portfolio models based on regimes via Hidden Markov Models and advanced overlay strategies. The role of managed futures and commodities in enhancing performance will be discussed.

**Wednesday, March 21, 2012, 2:00 - 3:00 PM, Math Tower, Seminar Room 1-122**

**Speaker:**Rosella Giacometti

**Title:** The Credit Default Swap Market and Its Implied Information

**Abstract:** After an examination of the characteristics of the European market for credit default swaps, we will discuss how these instruments can be used to extract forward-looking measures of credit risk, in particular the implied probability of default of reference entities and the joint probability of default of the reference entities and the counterparty.

**Monday, March 12, 2012, Math Tower, Seminar Room 1-122, 2:00PM-3:00PM**

**Speaker:** Ilya Pollak

Associate Professor of Electrical & Computer Engineering, Purdue University

**Title:** Stochastic Image Models with Applications to the Analysis of Alloy Micrographs.

**Abstract:** In the development cycle of advanced materials, computerized data analysis and simulation has the potential to make a very significant impact by drastically reducing the time necessary to synthesize and test a new material. It is critically important to develop image segmentation procedures that are able to accurately extract and classify structures of interest in microscope imagery, such as individual grains or different phases in a material. Further analysis can then be performed to discover the relationships between these structures and properties of the material.

The emergence of computerized microscopes and the resulting high volume of collected imagery have made it impossible for a human operator to perform image segmentation and analysis, necessitating the development of effective techniques that require little or no human intervention. Importantly, since abundant prior information is usually available regarding the shape of the structures of interest, any viable segmentation method must properly account for such information.

Similar segmentation problems arise in many other applications: analysis of microscopy images of cells; extraction of buildings and road networks from remote sensing images; and population counting in surveillance, remote sensing, and microscopy.

To address such problems, we propose a novel way of constructing and enforcing shape priors within a maximum a posteriori (MAP) segmentation framework. After computing a preliminary segmentation through matching pursuit, our algorithm uses it to construct a prior model for a MAP segmentation problem. This problem is then solved using a min-cut algorithm. The resulting algorithm has a small number of parameters that need to be selected by the user, and produces very accurate segmentations. It significantly outperforms both the MAP segmentations obtained without the shape prior, and the matching pursuit segmentations.

We then introduce the concept of point processes with multiresolution marks and show how they can be used to generalize our models and algorithms.

This is joint work with Landis Huffman, Jeff Simmons, and Marc De Graef.

**Wednesday, February 29, 2012, Time from 2:00PM -3:00PM,Location: Math Tower, Seminar Room 1-122**

**Speaker:** Alan De Genaro Dario

Courant Institute of Mathematical Sciences - NYU

**Title:** Properties of Doubly Stochastic Poisson Processes with Affine Intensities

**Abstract:** This paper discusses properties of a Doubly Stochastic Poisson Process (DSPP) where the intensity process belongs to a class of affine diffusions. For any intensity process from this class, we derive an analytical expression for the probability distribution functions of the corresponding DSPP. A specification of our results is provided in the particular case where the intensity is given by a one-dimensional Feller process and its parameters are estimated by Kalman filtering from high-frequency transaction data.
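A DSPP of this kind is easy to simulate, which gives a feel for the object under study: conditional on the intensity path, the count over [0, T] is Poisson with mean the integrated intensity. The sketch below uses an Euler scheme for a Feller (CIR) intensity; the parameter values and discretization are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_dspp(T=1.0, n_steps=1000, kappa=2.0, theta=5.0, sigma=0.5, lam0=5.0):
    """Simulate one count of a doubly stochastic Poisson process whose
    intensity follows the Feller (CIR) diffusion
        d lambda_t = kappa*(theta - lambda_t) dt + sigma*sqrt(lambda_t) dW_t.
    Given the intensity path, N_T ~ Poisson(integral_0^T lambda_t dt)."""
    dt = T / n_steps
    lam = lam0
    cum = 0.0
    for _ in range(n_steps):
        cum += lam * dt                          # accumulate integrated intensity
        lam = abs(lam + kappa * (theta - lam) * dt
                  + sigma * np.sqrt(max(lam, 0.0)) * np.sqrt(dt)
                  * rng.standard_normal())       # reflected Euler step keeps lam >= 0
    return rng.poisson(cum)

counts = [simulate_dspp() for _ in range(2000)]
print(round(float(np.mean(counts)), 2))  # close to theta = 5 when lam0 = theta
```

Sampling the Poisson count from the integrated intensity, rather than thinning event by event, mirrors the conditional structure that makes analytical distribution functions tractable in the affine case.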

**Thursday, November 17th, 2011, Time: 2:00pm-3:30pm Location: Simons Center, Room 102**

**Speaker:** Dr. Andrew Mullhaupt

Research Professor, AMS, Stony Brook (Ph.D., 1984, New York University); many years in finance (Morgan Stanley, Renaissance Technologies, SAC Capital)

**Title:** Information Geometry and Prediction

**Abstract:** New ideas can dramatically outperform classical statistical methods. Why? Careful application of classical applied mathematics is part of the story. But deep ideas in analysis and geometry lie at the heart of this success. Connections will be drawn from an actual prediction example in finance, to the geometry of Hardy space, and to the geometry of a particular Kähler manifold of rational functions.

**Wednesday, November 2nd, Time: 11:00am-12:00pm Location: Math Tower, Seminar Room 1-122**

**Speaker: **

**Title:** Lessons from Hedge Fund Replication: Information Asymmetry, Risk Management and Asset Allocation

**Abstract:** What lessons have we learnt from hedge fund investing? How does this experience guide asset allocation and regulatory policy? How has this experience guided the application of quantitative methodology to asset management? I examine these questions with perspectives gained from academic studies and practitioner products on hedge fund replication. My focus is to relate our understanding of the sources of hedge fund returns to the broader theoretical constructs of the Efficient Market Hypothesis (EMH), option pricing, market microstructure, and risk management. This analysis leads to some interesting insights and research directions relevant to asset allocation and the regulatory/disclosure framework.

**Thursday, October 27th, 2011, Time: 2:00pm-3:30pm Location: Simon Center Room 102**

**Speaker:** Dr. Michel Balinski

Laboratoire d'Econométrie

Ecole Polytechnique

91128 Palaiseau Cedex, France

**Title:** "Judge, Don't Vote!"

**Abstract:** (Based on joint work with Rida Laraki, Ecole Polytechnique and CNRS, Paris.)

The intent of this talk is to make three major points.

1. The traditional model of the theory of voting and social choice fails for two separate reasons:

   • the voters' and judges' inputs are inadequate;

   • the theory that emerges is inconsistent and contradictory.

2. The traditional majority methods of voting, notably first-past-the-post and two-past-the-post, fail in practice.

3. A more meaningful and realistic model gives voters the right to express their opinions fully and leads to a method that best meets the traditional criteria of what constitutes a good method of election and of judging competitors: majority judgment. It is described via a recent representative national presidential poll conducted in France.

Reference: Michel Balinski and Rida Laraki, *Majority Judgment: Measuring, Ranking, and Electing*. Cambridge, MA and London, England: The MIT Press, 2010.

**Tuesday October 11, 2011, Time from 12:00pm -1:00pm, Location: AMS 1-122**

**Speaker:** Prof. Dr. Ludger Rüschendorf

**Title:** Stochastic dependence, extremal risk and optimal portfolio diversification

**Abstract:** This talk is concerned with describing the possible influence of positive dependence on the magnitude of risk in a portfolio vector. We discuss and review developments on the classical problem of Fréchet-type bounds with univariate and multivariate marginals, and their applications to various related dependence orderings. As an application we identify the worst-case dependence structure of a portfolio of $d$-dimensional risks. In the second part we consider some new developments on the portfolio diversification problem. In the framework of multivariate extreme value theory we determine risk-optimal portfolios and consider statistical properties of their empirical versions.
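For reference, the Fréchet bounds of the abstract are, in the bivariate case, the classical Fréchet-Hoeffding bounds: any joint distribution function $H$ with marginals $F$ and $G$ satisfies

$$\max\{F(x)+G(y)-1,\ 0\} \;\le\; H(x,y) \;\le\; \min\{F(x),\ G(y)\},$$

with the upper bound attained by the comonotone (maximally positively dependent) coupling; worst-case dependence results of the kind discussed in the talk build on bounds of this form.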

**Wednesday, September 28, 2011, Time from 11:00AM - 12:00PM, Location: Physics Building P127**

**Speaker:** Stanislav (Stan) Lazarov

(Chief Architect, Cognity, FinAnalytica Inc)

**Title:** Developing a Risk Management System: Cognity

**Abstract:** FinAnalytica is a leading provider of real-world portfolio and risk management solutions for quantitative analysts and portfolio managers. FinAnalytica's Cognity software suite incorporates the latest and most transparent advances in analytics, including comprehensive treatment of real-world fat-tailed and skewed asset returns. With offices in New York, London and Sofia, FinAnalytica supports leading asset managers, hedge funds, pension funds, endowments and funds of funds globally. Started 10 years ago with a team of 5 people, today the company has more than 50 talented individuals who work together to constantly develop the ever-growing software solution. The team comprises a dozen quants, most of them holding PhDs in applied mathematics; developers with computer science degrees; business analysts and client services managers with MBA backgrounds; and IT professionals. Cognity has been designed from the ground up with a multiplatform, highly scalable architecture allowing hundreds of thousands of assets to be simulated and their risk statistics calculated. In addition to innovative analytics, we employ connectors to several data providers and offer both interactive reports and Excel reports.

I have been with FinAnalytica for almost 9 years, 6 of them in senior positions: Chief Architect, Head of the Framework team, Head of position-based Cognity, and now Head of Technical Services. I was a direct witness of the company's progress, which allows me to present its development history first hand.

**Tuesday August 16, 2011, Time from 2:45pm -3:45pm, Location: AMS 1-122**

**Speaker:** Prof. Stefan Mittnik

Chair of Financial Econometrics, Department of Statistics and Center for Quantitative Risk Analysis, University of Munich, Germany

**Title:** Solvency II Calibrations: Where Curiosity Meets Spuriosity

**Abstract:** The European Union’s new regulatory framework for the insurance industry, called Solvency II, is scheduled to come into force in 2013. By imposing a mark-to-market regime for solvency-capital requirements (SCR), the new regulation will greatly impact the way insurers and reinsurers compose their asset allocation. With assets totaling US$ 9.5 trillion, this will have far reaching consequences for global capital markets.

To derive a company’s SCR, the Solvency II framework specifies a Standard Formula, which has two inputs: the SCRs of individual risk components and their correlations. To appropriately calibrate these input parameters, several Quantitative Impact Studies have been conducted. Focusing on the equity-risk module, the most significant risk component, making up about 25% of total SCR, we demonstrate that the proposed calibrations of the input parameters are seriously flawed. As a consequence, implementing the Standard Formula with the currently proposed calibration settings will lead to spurious, empirically unfounded and highly erratic SCR calculations.

**Thursday May 26th, 2011, Time 2:30pm - 3:30pm, Location: Humanities Building. Room 1003**

**Speaker:** William H. May, Senior Vice President and FRM Program Manager, Research Center, Global Association of Risk Professionals (GARP)

**Title:** GARP programs, including the Financial Risk Manager (FRM) and Energy Risk Professional (ERP) Certifications, the Advocacy program and risk education and training from entry level to board level.

**Abstract:** The Global Association of Risk Professionals (GARP) is made up of over 150,000 risk management practitioners and researchers representing banks, investment management firms, government agencies, academic institutions and corporations from more than 195 countries and territories worldwide. GARP’s mission is to help develop leaders within the risk management community by encouraging communications between practitioners, academics and regulators. The speaker will discuss GARP programs, including the Financial Risk Manager (FRM) and Energy Risk Professional (ERP) Certifications, the Advocacy program and risk education and training from entry level to board level. Participants will have the opportunity to learn how GARP can add value to their risk management careers.

**Wednesday, March 16, 2011, 1:00-2:00pm, Math Tower 1-122**

**Speaker**: Yuedong Wang, Chair, Dept of Statistics & Applied Probability, University of California Santa Barbara

**Title:** Nonparametric Nonlinear Regression Models

**Abstract**: Almost all current nonparametric regression methods, such as smoothing splines, generalized additive models and varying coefficient models, assume a linear relationship when nonparametric functions are regarded as parameters. In this talk we present a general class of nonparametric nonlinear models that allow nonparametric functions to act nonlinearly. They arise in many fields as either theoretical or empirical models. We propose new estimation methods based on an extension of the Gauss-Newton method to infinite-dimensional spaces and the backfitting procedure. We extend the generalized cross validation and the generalized maximum likelihood methods to estimate smoothing parameters. Connections between nonlinear nonparametric models and nonlinear mixed effects models are established. Approximate Bayesian confidence intervals are derived for inference. We will also present a user-friendly R function for fitting these models. The methods will be illustrated using two real data examples.

**Friday, September 30, 2011, Time from 2:00PM - 3:00PM, Location: AMS 1-122**

**Speaker:** Dmitry Malioutov, DRW Holdings, algorithmic trading research

**Title:** Smooth Isotonic Covariances for Interest Rate Risk Modeling

**Abstract:** In this talk we consider the problem of estimating the covariance matrix of a high-dimensional random vector in the scarce data setting, where the number of samples is less than or comparable to the dimension. The sample covariance matrix is a poor choice in this setting, and a variety of structural assumptions or priors have been considered in the literature: covariance selection models with sparse precision matrices, low-rank models (PCA and factor analysis), sparse plus low-rank, covariance shrinkage, and others.

We suggest using another type of structure, which plays an important role in applications such as interest rate modeling in computational finance: we assume that the random vectors can be indexed over a low-dimensional manifold, and that the covariance matrix has smoothness and monotonicity properties over the manifold. We describe how these assumptions can be enforced in a convex optimization framework using semidefinite programming (SDP) via interior point methods and first-order proximal gradient methods. Furthermore, we describe how this framework can be applied to problems with missing data and with asynchronous measurements.
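As a rough illustration of the general idea (not the SDP formulation of the talk), one can smooth a noisy sample covariance along its index, e.g. maturity on a yield curve, and then project back onto the positive semidefinite cone by clipping negative eigenvalues. All names and constants below are illustrative:

```python
import numpy as np

def smooth_psd_covariance(S, window=3):
    """Smooth a sample covariance along its index (e.g. maturity), then
    project the result onto the positive semidefinite cone."""
    n = S.shape[0]
    K = np.zeros((n, n))
    for i in range(n):  # simple moving-average smoothing kernel
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        K[i, lo:hi] = 1.0 / (hi - lo)
    smoothed = K @ S @ K.T                    # smooth rows and columns
    smoothed = (smoothed + smoothed.T) / 2.0  # enforce symmetry
    w, V = np.linalg.eigh(smoothed)
    w = np.clip(w, 0.0, None)                 # clip negative eigenvalues
    return V @ np.diag(w) @ V.T

# Noisy sample covariance in the scarce-data regime: fewer samples than dims
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 10))
S = np.cov(X, rowvar=False)
Sigma = smooth_psd_covariance(S)
```

The SDP approach of the talk would instead impose smoothness and monotonicity as explicit convex constraints; this two-step sketch only conveys the flavor of regularizing over the index.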

**Wednesday September 07, 2011, Time from 2:30pm -3:30pm, Location: Simon Center Room 102**

**Speaker:** Prof. Gennady Samorodnitsky, School of Operations Research and Information Engineering, Cornell University

**Title:** Tail Inference: Where Does the Tail Begin?

**Abstract:** The quality of estimation of tail parameters, such as the tail index in the univariate case, or the spectral measure in the multivariate case, depends crucially on the part of the sample included in the estimation. A simple approach involving sequential statistical testing is proposed in order to choose this part of the sample. This method can be used both in the univariate and multivariate cases. It is computationally efficient, and can be easily automated. No visual inspection of the data is required. We establish consistency of the Hill estimator when used in conjunction with the proposed method, as well as describe its asymptotic fluctuations. We compare our method to existing methods in univariate and multivariate tail estimation, and use it to analyze Danish fire insurance data.
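The Hill estimator mentioned in the abstract depends critically on k, the number of upper order statistics used, which is exactly the "where does the tail begin" question. A minimal sketch in Python (k is fixed by hand here; the talk's contribution is choosing it automatically):

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimate of the tail index alpha from the k largest observations."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    tail = x[n - k:]                                  # k largest order statistics
    h = np.mean(np.log(tail)) - np.log(x[n - k - 1])  # mean log-excess over threshold
    return 1.0 / h

# Pareto sample with P(X > x) = x^(-2), so the true tail index is 2
rng = np.random.default_rng(0)
sample = (1.0 - rng.random(20000)) ** (-1.0 / 2.0)
alpha_hat = hill_estimator(sample, k=1000)
```

Varying k and watching the estimate drift is the classical "Hill plot" diagnostic that the proposed sequential-testing method replaces.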

**Friday July 29, 2011, Time from 1:00pm -2:00pm, Location: AMS Seminar Room, Math Tower 1-122**

**Speaker:** Andrey Bernstein, Operations Research Visiting Faculty Candidate

**Title:** Adaptive Decision Making in Complex Environments: Some Algorithms and Applications

**Abstract:** We consider the general problem of adaptive decision making in complex and possibly unpredictable/adversarial environments. In this context, two different frameworks will be discussed:

(i) Reinforcement Learning (RL), where the environment is modeled as an MDP with unknown reward/transition structure. We discuss the exploration-exploitation tradeoff: how to balance between choosing actions for the purpose of learning and trying to maximize the return based on the information gathered so far. We present an algorithm which solves this trade-off efficiently for very large spaces, using adaptive state aggregation.

(ii) No-Regret Learning, where (part of) the environment is unpredictable and possibly adversarial. The power of a no-regret algorithm is that it performs as well as an offline algorithm that knows (in hindsight) the whole history of that unpredictable component. We discuss the possibility of devising online learning algorithms which enjoy the "best of two worlds": they will have the same guarantees as standard RL algorithms if the environment is stochastic and stationary, while if the environment has an unpredictable/adversarial component they will retain the no-regret property.

Finally, we discuss some applications where both RL and no-regret algorithms can be used. These include various control problems, the online routing problem, the online classification problem, and applications related to the smart power grid and cognitive radio.

**Friday, June 17, 2011, Time 11:00am - 12:00pm, Location: AMS Seminar Room, Math Tower 1-122**

**Speaker**: Jian Hu

A Ph.D. candidate in the Department of Industrial Engineering and Management Sciences at Northwestern University. He received an M.S. degree in Logistics/Transportation from the University of Arkansas at Little Rock and a B.Eng. degree in Control Engineering from Xi’an Jiaotong University. He will receive his Ph.D. in 2011.

**Title:** Risk Adjusted Budget Allocation Models with Application in Homeland Security

**Abstract:** Multi-criteria optimization and risk-averse stochastic programming have been widely used to solve complex resource allocation problems under uncertainty, in energy, finance, supply chain management, health care, emergency response, and other areas. In many of these problems, high levels of uncertainty create the challenge of efficient information collection. Indeed, decision makers often hesitate to choose trade-off weights indicating the relative importance of different criteria and to express a risk-averse utility function evaluating the economic benefit of allocated resources. In this talk, we present two robust approaches, the robust weighted sum method and stochastic dominance, that reduce the impact of incomplete information. The proposed approaches are applied to the allocation of budget to urban areas in the United States under the Urban Areas Security Initiative (UASI). Numerical results and analyses are reported to demonstrate the efficiency of these approaches.

**Monday, June 06, 2011, Time 11:00am - 12:00pm, Location: AMS Seminar Room, Math Tower 1-122**

**Speaker**: Dr. Evrim Dalkiran

Evrim Dalkiran is a postdoctoral associate and an adjunct faculty member in the Grado Department of Industrial and Systems Engineering at Virginia Tech. She received her Ph.D. in Operations Research from Virginia Tech in May 2011. She holds B.S. and M.S. degrees from the Industrial Engineering Department at Bogazici University, Turkey, earned in 2003 and 2006, respectively.

**Title:** Discrete and Continuous Nonconvex Optimization: Decision Trees, Valid Inequalities, and Reduced Basis Techniques

**Abstract:** This talk addresses the modeling and analysis of a strategic risk management problem via a novel decision tree optimization approach, as well as the development of enhanced Reformulation-Linearization Technique (RLT)-based linear programming (LP) relaxations for solving nonconvex polynomial programming problems, through the generation of valid inequalities and reduced representations, along with the design and implementation of efficient algorithms. We first conduct a quantitative analysis for a strategic risk management problem involving the allocation of resources and selection of decision alternatives to minimize the risk in the event of a hazardous occurrence. Using a decision tree to represent the cascading sequences of events as controlled by decision and investment alternatives, the problem is modeled as a nonconvex mixed-integer 0-1 factorable program. A branch-and-bound algorithm is developed for which convergence and computational results are discussed. Next, we enhance RLT-based LP relaxations for polynomial programming problems by developing two classes of valid inequalities: *v-semidefinite cuts* and *bound-grid-factor constraints*. The first of these uses concepts derived from semidefinite programming by imposing positive semidefiniteness on (constraint-factor scaled) dyadic variable-product matrices. We explore various strategies for generating cuts, and exhibit their relative effectiveness for tightening relaxations and solving the underlying polynomial programs. As a second cutting plane strategy, we introduce a new class of bound-grid-factor constraints that can be judiciously used to augment the basic RLT relaxations in order to improve the quality of lower bounds and enhance the performance of global branch-and-bound algorithms. Certain theoretical properties are established that shed light on the effect of these valid inequalities in driving the discrepancies between the RLT variables and their associated nonlinear products to zero. The results indicate that certain classes of *v*-semidefinite cuts and bound-grid-factor constraints significantly improve the computational performance. Finally, we explore equivalent, reduced-size RLT-based formulations for polynomial programs. Utilizing a basis partitioning scheme for an embedded linear equality subsystem, a strict subset of RLT equalities is shown to imply the remaining ones. Certain static and dynamic basis selection strategies are proposed to implement this procedure via an algorithm that assures convergence to a global optimum. Computational results are presented to demonstrate the improvement in overall effort.

**Monday April 4th, 2011, Time 2:30pm - 3:30pm, Location AMS seminar room, Math Tower 1-122**

**Speaker**: Prof. Andrzej Ruszczynski, Department of Management Science and Information Systems, Rutgers University

**Title**: Dynamic Risk-Averse Optimization

**Abstract:** We present the concept of a dynamic risk measure and discuss its important properties. In particular, we focus on time-consistency of risk measures. Next, we focus on dynamic optimization problems for Markov models. We introduce the concept of a Markov risk measure and we use it to formulate risk-averse control problems for two Markov decision models: a finite horizon model and a discounted infinite horizon model. For both models we derive risk-averse dynamic programming equations and a value iteration method. For the infinite horizon problem we also develop a risk-averse policy iteration method and we prove its convergence. We propose a version of the Newton method to solve a non-smooth equation arising in the policy iteration method and we prove its global convergence. Finally, we discuss relations to Markov games.

**Monday, March 14, 2011, Time 2:30pm - 3:30pm, Location: Math Tower 1-122**

**Speaker**: Huizhen (Janey) Yu

**Title:** Q-Learning and Enhanced Policy Iteration in Discounted Dynamic Programming

**Abstract**: We consider the classical finite-state discounted Markovian decision problem, and we introduce a new policy iteration-like Q-learning algorithm for finding the optimal Q-factors. Instead of policy evaluation by solving a linear system of equations, our algorithm requires (possibly inexact) solution of a nonlinear system of equations, involving estimates of state costs as well as Q-factors. This is Bellman's equation for an optimal stopping problem that can be solved with simple Q-learning iterations, in the case where a lookup table representation is used; it can also be solved with the Q-learning algorithm of Tsitsiklis and Van Roy [TsV99], in the case where feature-based Q-factor approximations are used. In exact/lookup table representation form, our algorithm admits asynchronous and stochastic iterative implementations, in the spirit of asynchronous/modified policy iteration, with lower overhead and/or more reliable convergence advantages over existing Q-learning schemes. Furthermore, for large-scale problems, where linear basis function approximations and simulation-based temporal difference implementations are used, our algorithm resolves effectively the inherent difficulties of existing schemes due to inadequate exploration.

Joint work with Dimitri P. Bertsekas.
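For readers unfamiliar with Q-factors, here is a minimal sketch of classical tabular Q-learning on a toy deterministic MDP. This is the standard baseline algorithm, not the enhanced policy-iteration variant of the talk, and the MDP and constants are illustrative:

```python
import numpy as np

# Toy deterministic 2-state, 2-action MDP: next_state[s][a], reward[s][a]
next_state = [[0, 1], [1, 0]]
reward = [[0.0, 1.0], [2.0, 0.0]]
gamma, step = 0.9, 0.1

rng = np.random.default_rng(2)
Q = np.zeros((2, 2))  # lookup-table Q-factors
s = 0
for _ in range(20000):
    a = rng.integers(2)  # uniform exploration (fine for a lookup table)
    s2, r = next_state[s][a], reward[s][a]
    # Standard Q-learning: move Q(s,a) toward the one-step Bellman target
    Q[s, a] += step * (r + gamma * Q[s2].max() - Q[s, a])
    s = s2

policy = Q.argmax(axis=1)  # greedy policy w.r.t. the learned Q-factors
```

Here the optimal behavior is to move to state 1 (reward 1) and then stay there (reward 2 per step), so Q(1, stay) should approach 2/(1 - 0.9) = 20.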

**Bio**: Huizhen Yu is currently a postdoctoral researcher at Laboratory for Information and Decision Systems (LIDS), Massachusetts Institute of Technology. She received the Ph.D. degree in computer science and electrical engineering from Massachusetts Institute of Technology in 2006. Her research interests include stochastic control, machine learning, and nonlinear and convex optimization.

**Friday, March 4, 2011, 2:30-3:30pm, Math Tower 1-122.**

**Speaker:** Michael Fu

Program Director for Operations Research, National Science Foundation (on leave from his position as Ralph J. Tyser Professor of Management Science, Department of Decision, Operations and Information Technologies, The University of Maryland, College Park)

**Title:** Stochastic Gradient Estimation: Tutorial Review and Recent Research

**Abstract:** Stochastic gradient estimation techniques are methodologies for deriving computationally efficient estimators used in simulation optimization and sensitivity analysis of complex stochastic systems that require simulation to estimate their performance. Using a simple illustrative example, the three most well-known direct techniques that lead to unbiased estimators are presented: perturbation analysis, the likelihood ratio (score function) method, and weak derivatives. Applications are discussed and then some recent research results in financial engineering and revenue management are presented. Opportunities for NSF funding in the Operations Research Program will be discussed at the end of the talk.
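As a small illustration of one of the techniques named above, the likelihood ratio (score function) method estimates a gradient by weighting simulated outputs with the score of the sampling density. For X ~ N(theta, 1) and f(x) = x^2, the score is (x - theta) and the true gradient of E[f(X)] = theta^2 + 1 is 2*theta (the example and sample size are illustrative, not from the talk):

```python
import numpy as np

def lr_gradient(theta, n=2_000_000, seed=3):
    """Score-function (likelihood ratio) estimate of d/dtheta E[X^2]
    for X ~ N(theta, 1); the score of N(theta, 1) is (x - theta)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(theta, 1.0, size=n)
    return np.mean(x**2 * (x - theta))  # Monte Carlo average of f(X) * score

grad_hat = lr_gradient(1.5)  # analytic value is 2 * 1.5 = 3.0
```

The estimator is unbiased but its variance depends heavily on the score weighting, which is one reason the talk compares it against perturbation analysis and weak derivatives.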

**Bio**: Michael Fu is Director of the Operations Research Program at NSF. He is on leave from the University of Maryland at College Park, where he is Ralph J. Tyser Professor of Management Science in the Robert H. Smith School of Business, with a joint appointment in the Institute for Systems Research and an affiliate faculty appointment in the Department of Electrical and Computer Engineering, both in the A. James Clark School of Engineering. He received degrees in mathematics and EE/CS from MIT in 1985, and a Ph.D. in applied mathematics from Harvard University in 1989. His research interests include simulation optimization and applied probability, with applications in supply chain management and financial engineering. At Maryland, he received the Business School's Allen J. Krowe Award for Teaching Excellence in 1995 and the Institute for Systems Research Outstanding Systems Engineering Faculty Award in 2002, and was named a University of Maryland Distinguished Scholar-Teacher for 2004-2005. He has published four books: Conditional Monte Carlo: Gradient Estimation and Optimization Applications (1997, co-author J.Q. Hu), which received the INFORMS Simulation Society Outstanding Publication Award in 1998; Simulation-based Algorithms for Markov Decision Processes (2007, co-authors H.S. Chang, J. Hu, S.I. Marcus); Perspectives in Operations Research (2006, co-editors F.B. Alt, B.L. Golden); and Advances in Mathematical Finance (2007, co-editors R.A. Jarrow, J.-Y. Yen, R.J. Elliott). He served as Stochastic Models and Simulation Department Editor of Management Science from 2006-2008 and as Simulation Area Editor of Operations Research from 2000-2005, and also on the editorial boards of Mathematics of Operations Research, INFORMS Journal on Computing, IIE Transactions, and Production and Operations Management. He is a Fellow of INFORMS and IEEE.

**Wednesday, Sept 29, 2010, 3:00pm**, AMS Seminar Room 1-122

**Speaker:** Greg Frank

**Title**: "Using Database Systems for Tick Data Mining"

**Abstract:** As high frequency traders of instruments in various asset classes, we are faced with the challenge of analyzing the characteristics of vast quantities of data. Tools like Matlab and QuantLib are great for quickly investigating high-order relationships in financial data. But how does one approach analysis when data sets run into terabytes? And what about when the data is streaming in real-time?

In this talk, we'll take a practical look at how common relational database systems and commercial business intelligence platforms can be used for analyzing tick data. We'll take a look at how various estimation and classification techniques like Logistic Regression or ARIMA can be deployed - and their relative performances compared - with common database tools.

In some asset classes, such as spot FX, getting the data itself into a form that can be analyzed with traditional techniques poses a challenge. Price updates are irregularly spaced in time, there are data drop-outs and spurious "zero" prices, and because FX is traded between banks rather than on an exchange, there is no centrally authoritative source for reporting what the "correct" numbers are. Yet the mathematical techniques we use usually only work correctly on regularly spaced, clean, accurate input data. We'll look at some lessons learned for basic data conditioning, which we view as an important step to real-world financial data analysis.

About the speaker: Greg Frank is a founding partner of Presagium, a proprietary trading firm. Previously, he was managing partner of Ovation Capital, a venture capital firm investing in software companies. He is chairman of Connectiva Systems, a 400-person company providing revenue assurance and fraud management solutions to telecommunications companies globally. He has held senior positions at Microsoft in Redmond, WA, and at Murray & Roberts. Greg has an MBA from Harvard Business School and a degree in electronic engineering from the University of Cape Town, South Africa. He is a recreational glider pilot and distance runner, and lives in Manhattan with his wife and two sons.

**Wednesday, September 15, 2010, 3:00 pm, AMS Seminar Room 1-122A**

**Speaker**: Robert Almgren

**Title:** Algorithmic Trading for Interest Rate Futures

**Abstract:** Interest rate futures markets present several novel microstructural features not found in equities and foreign exchange markets. For algorithmic trading, these features must be fully understood and properly exploited. Three features are the most important. First is pro rata order matching, which has strong effects on the optimal order placement strategy. Second is implied quoting via calendar spread and butterfly contracts, which presents opportunities to find hidden liquidity and better order fills. Third is the highly coupled nature of contracts at different points on the yield curve, requiring an inherently multidimensional analysis even to trade a single contract. We shall provide an overview of all these aspects, and the quantitative tools that are used to model them.
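Pro rata matching, the first feature mentioned above, allocates an incoming aggressive order across resting orders in proportion to their resting size, rather than strictly by time priority. A simplified sketch (real exchanges layer top-order priority, minimum-fill and specific rounding rules on top of this; the leftover-lot rule here is illustrative):

```python
def pro_rata_fills(incoming_qty, resting_sizes):
    """Allocate an incoming order across resting orders in proportion to
    resting size, rounding down; leftover lots go to the largest orders.
    Simplified illustration, not any exchange's exact algorithm."""
    total = sum(resting_sizes)
    fills = [incoming_qty * s // total for s in resting_sizes]
    leftover = incoming_qty - sum(fills)
    # hand out the remaining lots one at a time, largest resting order first
    for i in sorted(range(len(resting_sizes)),
                    key=lambda i: resting_sizes[i], reverse=True):
        if leftover == 0:
            break
        fills[i] += 1
        leftover -= 1
    return fills

fills = pro_rata_fills(100, [600, 300, 100])  # proportional: [60, 30, 10]
```

The strategic consequence for order placement is that posting larger size earns a larger share of each fill, which is why optimal quoting under pro rata matching differs sharply from price-time-priority markets.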

**Speaker Bio**: Robert Almgren is co-founder of Quantitative Brokers, which provides agency algorithmic execution and cost measurement in fixed income markets. Until 2008, Dr. Almgren was a Managing Director and Head of Quantitative Strategies in the Electronic Trading Services group of Banc of America Securities. From 2000-2005, he was a tenured Associate Professor of Mathematics and Computer Science at the University of Toronto, and Director of its Master of Mathematical Finance program. Before that, he was an Assistant Professor of Mathematics at the University of Chicago and Associate Director of the Program on Financial Mathematics; he is currently a Fellow in the Mathematics in Finance Program at New York University. Dr. Almgren holds a B.S. in Physics and Mathematics from the Massachusetts Institute of Technology, an M.S. in Applied Mathematics from Harvard University and a Ph.D. in Applied and Computational Mathematics from Princeton University. He has an extensive research record in applied mathematics, including several papers on optimal securities trading, transaction cost measurement, and portfolio formation.

**Monday, April 19, 2010, 2:15pm, AMS Seminar Room 1-122**

**Speaker:** Prof. Dr.Sci. Svetlozar Rachev

**Title:** Operational Risk Assessment: Advanced Statistical Methodology and its Practical Implementation

**Abstract:** The main topics of this talk include:

a. Compound Cox process models for operational losses;

b. Fitting loss distributions to truncated and full operational loss data;

c. Fitting non-homogeneous Poisson process models to operational frequency data;

d. Applications of heavy-tailed α-stable distributions to loss data;

e. Estimation of the dependence (copula) structure of losses from various business lines and event-types;

f. Forecasting of one-period ahead Value-at-Risk and Expected Tail Loss for (i) every individual business-line, event-type, and for (ii) the total operational loss from all business-lines and event types;

g. In-sample goodness-of-fit tests (such as Kolmogorov-Smirnov, Kuiper, Anderson-Darling, Cramér-von Mises);

h. Backtesting;

i. Robust modeling techniques and comparative analysis with classical models.

**Tuesday, March 23, 2010, 2:30 pm, AMS Seminar Room 1-122**

**Speaker:** Eugene Stern, Research Group, RiskMetrics

**Title:** Risk Management and Real Life

**Abstract:** This talk addresses the challenges of managing market and credit risk inside a large organization. We will analyze some simple trades and hedges, and discuss which risks are hedged and which remain, how to model the risk, and how to incorporate the analysis into a firm-wide risk model. We’ll explore which risks can be modeled statistically and which can’t, and how to measure and manage both kinds.

**Wednesday, October 28, 2009, 3:50PM - 5:10PM, Physics Tower S-240**

**Speaker**: Ann Tucker, Ph.D.

**Title**: Momentum and the Financial Crisis

**Abstract:** The momentum factor is a well-documented market anomaly that continues to exhibit strength well after it was first documented in the academic literature. There is evidence that momentum exposure, or long exposure to assets with good recent performance and short exposure to assets with poor recent performance, is especially widespread within the hedge fund community. The extent of the exposure became painfully clear during the second half of 2008, when Lehman’s bankruptcy triggered a global unwinding of risk in almost every asset class. This talk explores the contribution of momentum-related strategies in equities, commodities, interest rates and foreign exchange to the buildup of risk in the global financial system and the chaos that ensued when the great reversal occurred. In addition, the roles played by the U.S. dollar and the Japanese yen as carry currencies of choice during this period are examined in the context of the momentum environment, possible intervention of the Chinese in the currency markets, and the unwinding of the aforementioned carry trades.

**Wednesday, October 14, 2009, 3:50PM - 5:10PM, AMS Seminar room, Math Tower 1-122**

**Speaker:** Greg Van Inwegen, Ph.D.

**Title:** "Risk Management in a Non-Transparent and Non-Linear World: Perspectives and Challenges from a Fund of Hedge Funds"

**Abstract:**

- Multi-Factor Risk Modeling & Stress Testing
- Simulations based on Sector Exposures, Yield Curve Sensitivities and Greeks
- Volatility Regime Shift Modeling
- Measuring and Adjusting for Illiquidity
- Non-Normal Risk Budgeting

Dr. Van Inwegen is a Managing Director and Chief Investment Risk Officer at Ivy Asset Management, where he has worked since 2004. He chairs the Investment Risk Management Committee at Ivy and leads the Risk Management and Quantitative Research team at this Hedge Fund of Funds. His professional career started at Syracuse University, where he taught Finance as an Assistant Professor before moving to Wall St. He has worked at Verizon, Paine Webber, Bankers Trust, Deutsche Bank, and a hedge fund start-up. In addition to risk management, he has been involved in a number of elements of the asset management business, including stock selection models, asset allocation, enhanced indexing and high frequency statistical arbitrage models. Dr. Van Inwegen has degrees from the University of California at Berkeley, the Sloan School at MIT, and the Wharton School at the University of Pennsylvania.

**Wednesday, October 7, 2009, 3:50pm - 5:10PM , AMS Seminar room 1-122**

**Speaker:** Michael Driscoll, Ph.D.

**Title:** Challenges in Assessing Credit Risk in Today's Financial Crisis

**Abstract:** In the current environment, the financial services industry and its regulators are concerned about exposure to credit risk: the distribution of financial losses due to changes in the credit quality of a counterparty to a financial agreement.

Credit risk pervades virtually all financial transactions. The rise in the complexity and globalization of financial services has contributed to stronger linkages between counterparties. While higher connectivity facilitates economic growth through credit allocation and risk diversification, it also increases the potential for disruptions to spread throughout the system. Financial engineering further enabled risk transfers that were not fully accounted for by regulators or by the institutions themselves, thereby complicating the assessment of counterparty risk, risk management, and policy responses. The current crisis highlights how systemic linkages can arise not just from financial institutions’ solvency concerns but also from the lack of market liquidity and other stress events.

At the center of the issue is the quantification of the probability of a default, an event resulting from a complex decision process. This process is affected by the intricate network of business relations between firms, and in turn, the default decision of a single firm affects the entire system. Corporate defaults cluster, induced by the correlation among firms; this correlation is driven by individual firms' sensitivity to common economic factors such as interest rates or inflation, but also by the feedback of an individual firm's event to the entire system. The assessment of credit risk for trading strategies in credit and across markets, its risk management, and policy development encompass a broad set of topics.

All facets of credit risk assessment face a wide range of challenges ranging from the availability of historical events to measure and calibrate models to the transparency of risk within the system and the uncertainty of available information.

Michael Driscoll is a Managing Director at Cogent Partners, specializing in capital markets and risk management advisory services in Private Equity and Alternative Investments. Dr. Driscoll has been a Principal and Global Head of Risk Management for Allianz (ART Group) and a member of their Underwriting, Risk and Investment Management Committees. He began his career in the research division of AT&T Bell Laboratories and received his Ph.D., M.S., and B.S. degrees from SUNY Stony Brook, where he was elected to Sigma Xi and Tau Beta Pi. Dr. Driscoll also serves as a member of the Stony Brook Center for Quantitative Finance Advisory Board.

**Wednesday, September 30, 2009, 3:50 - 5:10 pm, Math Common Room 4-125**

**Speaker**: Michael Driscoll, Ph.D.

**Title**: Challenges in Assessing Credit Risk in Today's Financial Crisis

**Abstract:** In the current environment, the financial services industry and its regulators are concerned about exposure to credit risk: the distribution of financial losses due to changes in the credit quality of a counterparty to a financial agreement.

Credit risk pervades virtually all financial transactions. The rise in the complexity and globalization of financial services has contributed to stronger linkages between counterparties. While higher connectivity facilitates economic growth through credit allocation and risk diversification, it also increases the potential for disruptions to spread throughout the system. Financial engineering further enabled risk transfers that were not fully accounted for by regulators or by the institutions themselves, thereby complicating the assessment of counterparty risk, risk management, and policy responses. The current crisis highlights how systemic linkages can arise not just from financial institutions’ solvency concerns but also from the lack of market liquidity and other stress events.

At the center of the issue is the quantification of the probability of a default, an event resulting from a complex decision process. This process is affected by the intricate network of business relations between firms, and in turn, the default decision of a single firm affects the entire system. Corporate defaults cluster, induced by the correlation among firms; this correlation is driven by individual firms' sensitivity to common economic factors such as interest rates or inflation, but also by the feedback of an individual firm's event to the entire system.

The assessment of credit risk for trading strategies in credit and across markets, its risk management, and policy development encompass a broad set of topics, e.g.:

- Forecasting of individual defaults,
- Valuation of credit-sensitive securities and quantification of credit risk on portfolios of securities,
- Simulation of dependent default events and losses, and
- Statistical validation of models.

All facets of credit risk assessment face a wide range of challenges ranging from the availability of historical events to measure and calibrate models to the transparency of risk within the system and the uncertainty of available information.

Michael Driscoll is a Managing Director at Cogent Partners, specializing in capital markets and risk management advisory services in Private Equity and Alternative Investments. Dr. Driscoll has been a Principal and Global Head of Risk Management for Allianz (ART Group) and a member of their Underwriting, Risk and Investment Management Committees. He began his career in the research division of AT&T Bell Laboratories and received his Ph.D., M.S., and B.S. degrees from SUNY Stony Brook, where he was elected to Sigma Xi and Tau Beta Pi. Dr. Driscoll also serves as a member of the Stony Brook Center for Quantitative Finance Advisory Board.

**Wednesday, September 16, 2009, 3:50 PM to 5:00 PM, AMS Seminar Room, Math Tower 1-122**

**Speaker:** Andrew P. Mullhaupt, Ph.D.

**Topic:** TBA

Dr. Mullhaupt recently retired as Director of Research and Portfolio Manager at SAC Meridien Fund, a systematic hedge fund. Dr. Mullhaupt has worked at Renaissance Technologies as a Senior Research Analyst and at Morgan Stanley. He has held various academic posts at SUNY Buffalo, the University of New Mexico, and the Courant Institute. Dr. Mullhaupt received his Ph.D. in Applied Mathematics from the Courant Institute and his B.S. from Stevens Institute of Technology.

**Wednesday, September 9, 2009, 3:50 PM, AMS Seminar Room, Math Tower 1-122**

David Cru, Ph.D. Candidate SUNY Stony Brook

Asst. Vice President, Ivy Asset Management

"Dynamic Hedge Fund Asset Allocation Under Multiple Regimes"

**Abstract:** Portfolio Selection as introduced by Harry Markowitz laid the foundation for Modern Portfolio Theory. However, the assumptions that underlying asset returns follow a normal distribution and that investors are indifferent to skew and kurtosis are not practically suited to the hedge fund environment. Additionally, the lockup and notice provisions built into hedge fund contracts make portfolio rebalancing difficult and justify the need for dynamic allocation strategies. Market conditions are dynamic; therefore, rebalancing constraints in the face of changing market environments can have a severe impact on return generation. There is a need for sophisticated yet tractable solutions to the multi-period problem of hedge fund portfolio construction and rebalancing. We generalize the hedge fund asset return distribution to a multivariate K-mean Gaussian mixture distribution; cast the multi-period hedge fund allocation problem as a constrained optimization problem; and propose practical rebalancing strategies that represent a convergence of the literature on hedge fund investing, regime switching, and dynamic portfolio optimization.
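The regime intuition behind such a mixture can be illustrated with a toy two-regime Gaussian mixture for a single fund's monthly returns. All parameters below are invented for illustration and are not from the talk; the point is only that a mixture of Gaussians reproduces the fat tails that a single normal cannot.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical regimes for one hedge-fund return series:
# a calm regime and a stressed regime (parameters are made up).
weights = np.array([0.8, 0.2])          # regime probabilities
means   = np.array([0.010, -0.030])     # mean monthly return per regime
stds    = np.array([0.020, 0.060])      # volatility per regime

n = 100_000
regime = rng.choice(2, size=n, p=weights)        # draw a regime per month
returns = rng.normal(means[regime], stds[regime])

# The mixture mean is the weighted average of the regime means...
mix_mean = weights @ means
# ...and the mixture shows excess kurtosis even though each
# component is Gaussian, matching the fat tails seen in fund returns.
z = (returns - returns.mean()) / returns.std()
excess_kurtosis = (z**4).mean() - 3.0

print(round(mix_mean, 4))        # -> 0.002
print(excess_kurtosis > 0.5)     # heavier tails than a single Gaussian
```

A regime-switching extension would make `regime` a Markov chain rather than i.i.d. draws, which is the usual next step when rebalancing decisions depend on the current market environment.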

**Monday, June 8, 2009, 11:30am**, AMS Seminar room, Math Tower 1-122

American Options: Free-Boundary-Value Problems in Finance

Qiang Zhang, Department of Mathematics, City University of Hong Kong

A vanilla option is a right to buy or sell an underlying security at a fixed price. Exotic options have more complicated payoff structures and depend on more state variables. It is well known that, in a simple setting, the prices of European options, which can only be exercised on the maturity date, are given by the Black-Scholes formulae. However, most options traded in the market are American-type, which can be exercised at any time up to and including the maturity date. So far, except in a few special cases, no closed-form expressions for American option prices have been found, and numerical computation is the main method for pricing them. The difficulty stems from the fact that American options are free-boundary-value problems: at what critical price of the underlying should one exercise the option? In this talk we will discuss the theoretical properties of American options and analytical approximations for their solutions and free boundaries. We show that this approximation method is applicable to both American-type vanilla and exotic options. We will also discuss free-boundary-value problems in other types of financial products.
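Since American options generally lack closed forms and are priced numerically, the textbook Cox-Ross-Rubinstein binomial tree (a standard baseline, not the speaker's approximation method) illustrates the early-exercise premium of an American put over its European counterpart:

```python
import math

def crr_put(S0, K, r, sigma, T, steps, american):
    """Price a put on a Cox-Ross-Rubinstein binomial tree."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))   # up factor
    d = 1.0 / u                           # down factor
    p = (math.exp(r * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-r * dt)

    # payoffs at maturity
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]

    # backward induction; the American put compares continuation
    # value against immediate exercise at every node
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            if american:
                S = S0 * u**j * d**(i - j)
                cont = max(cont, K - S)
            values[j] = cont
    return values[0]

euro = crr_put(100, 100, 0.05, 0.2, 1.0, 500, american=False)
amer = crr_put(100, 100, 0.05, 0.2, 1.0, 500, american=True)
print(amer >= euro)   # the early-exercise right is never worth less -> True
```

The exercise decision embedded in the `max(cont, K - S)` step is exactly the free boundary the abstract refers to: the critical underlying price at each time below which immediate exercise beats holding.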

**Monday, May 4, 2009, 1:00pm**, AMS Seminar Room, Math Tower 1-122

**Title:** Market Crashes and Modeling Volatility

Professor Svetlozar Rachev, School of Economics and Business Engineering, University of Karlsruhe, Germany

**Tuesday, April 21, 2009 4:00pm**, AMS Seminar Room, Math Tower 1-122

**Title:** Option Pricing Under a Stressed-Beta Model

Adam Tashman, UC-Santa Barbara Department of Statistics and Applied Probability

The Capital Asset Pricing Model (CAPM) was a fundamental contribution to the field of financial economics, relating the sensitivity of an asset's return to the stock market return. This sensitivity (or slope), referred to as beta, is ubiquitous in modern finance. An assumption of CAPM is that there is a linear relationship between asset returns and market returns, but this does not always hold in practice.

We consider a continuous-time CAPM model where beta is not constant, but rather is piecewise constant. This allows us to introduce regime-switching dynamics while keeping things tractable. When the market level crosses below a given threshold, an additive term increases the slope, resulting in a higher sensitivity of asset returns to market returns. We develop the price of an equity option using this approach. Along the way, several interesting quantities appear, such as the occupation time of a Brownian motion in an interval, and Brownian local time.
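A minimal Monte Carlo sketch of the piecewise-constant beta idea, with invented dynamics and parameters rather than the paper's exact model: the asset's sensitivity to market moves increases by an additive term whenever the market sits below a threshold, and the occupation time below that threshold (one of the quantities the abstract mentions) is tracked along the way.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameters: beta jumps by delta while the market
# level is below the threshold L (all numbers are made up).
beta, delta, L = 1.0, 0.5, 90.0
mu_m, sigma_m, sigma_e = 0.05, 0.2, 0.1   # market drift/vol, idiosyncratic vol
dt, n_steps, n_paths = 1 / 252, 252, 2000

M = np.full(n_paths, 100.0)        # market level per path
stressed_time = np.zeros(n_paths)  # occupation time below the threshold

asset_ret = np.zeros(n_paths)
market_ret = np.zeros(n_paths)
for _ in range(n_steps):
    dW = rng.normal(0, np.sqrt(dt), n_paths)
    dZ = rng.normal(0, np.sqrt(dt), n_paths)
    dM = mu_m * dt + sigma_m * dW               # market log-return increment
    b = np.where(M < L, beta + delta, beta)     # piecewise-constant beta
    asset_ret += b * dM + sigma_e * dZ          # asset driven by market + noise
    market_ret += dM
    stressed_time += dt * (M < L)
    M *= np.exp(dM)

# Realized beta across paths, estimated by regressing asset on market returns;
# it lands between beta and beta + delta, weighted by time spent stressed.
cov = np.cov(asset_ret, market_ret)
realized_beta = cov[0, 1] / cov[1, 1]
print(round(realized_beta, 3), round(stressed_time.mean(), 3))
```

Pricing an option under this model requires the distribution of the occupation time accumulated in `stressed_time`, which is where the Brownian occupation-time and local-time results cited in the abstract enter.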

One of the future goals of this research will be to introduce a calibration technique for the slope in each regime based on estimated option price parameters of both the asset and the market index.

**Thursday, March 19, 2009, 4pm**, AMS Seminar Room, Math 1-122

**Title:** Challenges in Pricing Mortgage Backed Securities

Dr. Ying Chen, Former JP Morgan Analyst

After a brief review of the development of the US mortgage market, a widely used Mortgage-Backed Securities (MBS) pricing procedure, consisting of Option-Adjusted Spread (OAS) analysis and prepayment modeling, is introduced. We then discuss some challenges in pricing MBS, including the evaluation of prepayment risk, interest rate modeling, and the analysis of loans with different characteristics. Finally, several key factors behind the current subprime mortgage credit crisis are examined and recommendations are provided to improve MBS pricing models.

**February 19, 2009, 4:00pm**, AMS Seminar Room

**Title**: Quantitative Challenges in Algorithmic Execution

Professor Robert Almgren of NYU Courant Institute

**Date:** Tuesday, June 16, 2015; 10:00 - 11:00 AM; Math Tower, Seminar Room 1-122

**Speaker:** Professor Rongling Wu, Center for Statistical Genetics, Department of Public Health Sciences, Pennsylvania State University

**Title:** A unifying high-dimensional platform to infer the global genetic architecture of trait development

**Abstract:** Current developments in genotyping and phenotyping techniques are revolutionizing our capacity to collect data at every level of organization, from molecules and cells to organs and organisms. Traditional methods for constructing a genotype-phenotype map from these data are based on a marginal regression model and are thus incapable of estimating and testing the net and cumulative effects of individual genes. We have implemented a high-dimensional model to better infer the genetic architecture of complex phenotypes by selecting a sparse but full set of significant genes from an extremely large pool of genetic data. This model was further unified with functional mapping, a model aimed at dissecting phenotypic formation into developmental components through mathematical equations, to provide an unprecedented opportunity to chart a detailed picture of genotype-phenotype relationships and ultimately to predict or engineer phenotypes through genotypes. We have packaged our unified high-dimensional genetic association model into a computer platform, 2HiGWAS, for public use.

**Date:** Wednesday, May 20, 2015; 3:00 - 4:00 PM, Math Tower, Seminar Room 1-122

**Speaker:** Professor I. Memming Park, Department of Neurobiology and Behavior, Stony Brook University

**Title:** Finding structure in neural signal: generalized quadratic model and dimensionality reduction

**Abstract:** To understand how the brain computes as an integrated system, powerful statistical tools are necessary for analyzing neural signals. One approach to discovering what information neurons represent in their spatiotemporal activity is to build an encoding model that predicts neural signals given other observables. Neurons in the early visual system are known to represent only a small subspace of the visual stimulus space, so dimensionality reduction methods have been widely used to characterize their responses to stimuli. However, traditional methods require the stimulus distribution to be rotationally symmetric. Here we show a connection between widely used moment-based dimensionality reduction techniques and a probabilistic encoding model that we call the generalized quadratic model (GQM). We derive a computationally efficient spectral estimator for the model parameters. In addition, unlike the traditional approaches, GQMs place no restrictions on the input distribution and allow Bayesian extensions. I will show results on primary visual cortical neurons.

*"Pizza with the Professor" seminar series for AMS faculty and graduate students*

**Date:** Wednesday, May 13, 2015; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor Song Wu

**Title:** "Parallel Processing of Sequence Alignment and Assembly"

**Abstract:** High-throughput next-generation sequencing (NGS) technology has quickly emerged as a powerful tool in many aspects of biomedical research. However, along with its rapid development, the data magnitude and analysis complexity of NGS far exceed the capacity and capability of traditional small-scale computing facilities, such as multithreaded algorithms on standalone workstations. To address this issue, we present a solution that draws on the ever-increasing supply of processing power from massively parallel processing systems. Through collaboration with Prof. Yuefan Deng's team, we have designed scalable hierarchical multitasking algorithms for porting classical sequencing algorithms to modern parallel computers. More specifically, we have developed a novel parallel infrastructure, which includes a portable NGS-oriented messaging package that adapts well to heterogeneous communication systems and a scheduling package that provides a dynamic balancing strategy for efficient task scheduling. The parallel infrastructure has been adopted and applied to two fundamental tasks in NGS data processing: NGS read alignment to a reference genome and *de novo* assembly of NGS reads. In this talk, I will demonstrate both applications.

**"Pizza with the Professor" seminar series for AMS faculty and graduate students**

**Date:** Wednesday, April 22, 2015; 1:00 - 2:00 PM; Math Tower, Room S-240

**Speaker:** Professor Hongshik Ahn

**Title:** Ensemble Approaches for Classification

**Abstract:** Recently, classification methods have been developed based on ensembles of classifiers. By combining classifiers built from different subsets of the feature space, these methods substantially improve the accuracy of class prediction. A primary area of application is the classification of subjects into cancer-risk or cancer-type categories based on high-dimensional genomics or proteomics data. The methods have also been used to develop a Clinical Decision Support System.
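One classic instance of combining classifiers built from different subsets of the feature space is the random-subspace ensemble: train each base classifier on a random subset of features and combine by majority vote. The sketch below uses a nearest-centroid base classifier on synthetic data, as an illustration of the general technique rather than the speaker's specific method:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic two-class data: class 1 is shifted in the first 10 of 50 features.
n, p, informative = 400, 50, 10
X = rng.normal(0, 1, (n, p))
y = rng.integers(0, 2, n)
X[y == 1, :informative] += 1.0

train, test = np.arange(0, 300), np.arange(300, 400)

def nearest_centroid_predict(Xtr, ytr, Xte):
    """Simplest base classifier: assign each point to the closer class centroid."""
    c0 = Xtr[ytr == 0].mean(axis=0)
    c1 = Xtr[ytr == 1].mean(axis=0)
    d0 = ((Xte - c0)**2).sum(axis=1)
    d1 = ((Xte - c1)**2).sum(axis=1)
    return (d1 < d0).astype(int)

# Random-subspace ensemble: each member sees a random subset of
# features, and the ensemble predicts by majority vote.
votes = np.zeros(len(test))
n_members, subspace = 51, 15
for _ in range(n_members):
    feats = rng.choice(p, size=subspace, replace=False)
    votes += nearest_centroid_predict(X[np.ix_(train, feats)], y[train],
                                      X[np.ix_(test, feats)])
ensemble_pred = (votes > n_members / 2).astype(int)

single_pred = nearest_centroid_predict(X[train], y[train], X[test])
acc = lambda pred: (pred == y[test]).mean()
print(acc(single_pred), acc(ensemble_pred))
```

In high-dimensional genomics settings, the appeal of the subspace idea is that each member works in a low-dimensional projection, which stabilizes estimation when features vastly outnumber subjects.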

**"Pizza with the Professor" seminar series for AMS faculty and graduate students**

**Date:** Wednesday, March 11, 2015; 1:00 - 2:00 PM; Math Tower, Seminar Room 1-122A

**Speaker:** Professor Song Wu

**Title:** "When Statistics Meets Genetics"

**Abstract:** Historically, statistics and genetics are cognate disciplines, in the sense that many great statisticians were also renowned geneticists. Nowadays, the interplay between statistics and genetics has reached an unprecedented level with the advent of state-of-the-art technologies that generate complicated genetic data, whose analysis often requires and motivates the development of modern statistical methodologies.

In this talk, I will give a brief overview of how statistics may be applied to genetic studies. In particular, I will discuss some recent progress in my group on genome-wide association analyses with multiple genetic markers, i.e., single nucleotide polymorphisms (SNPs). The main advantage is that our models directly incorporate genetic principles into model construction. Our studies demonstrate that the new methods significantly improve the power and robustness of identifying causal genetic factors. Various statistical methods, including the EM algorithm, functional linear models, and neural networks, will be discussed. Lastly, I will briefly mention the most recent developments in the analysis of next-generation sequencing data.

**Date:** Monday, September 29, 2014; 10:30 - 11:30 AM; Math Tower, Room 5-127

**Speaker:** Professor Jeffrey Simonoff, New York University Stern School of Business

**Title:** Regression Trees for Longitudinal and Clustered Data Based on Mixed Effects Models: Methods, Applications, and Extensions

**Abstract:** Longitudinal data refer to the situation where repeated observations are available for each sampled object. Clustered data, where observations are nested in a hierarchical structure within objects (without time necessarily being involved), represent a similar type of situation. Methodologies that take this structure into account allow for the possibility of systematic differences between objects that are not related to attributes, as well as autocorrelation within objects across time periods. This talk discusses work related to tree-based methods designed for this situation. After describing methods based on tree construction for multiple response data, we focus on a methodology that combines the structure of mixed effects models for longitudinal and clustered data with the flexibility of tree-based estimation methods through the use of an EM-type algorithm. The resulting estimation method, called the RE-EM tree, is less sensitive to parametric assumptions and can provide improved predictive power compared to linear models with random effects and regression trees without random effects.
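The EM-type alternation at the heart of such a method can be sketched in a toy setting: a random intercept per object plus a fixed-effects function, with a single-split regression stump standing in for the full CART fit and a shrunken per-object residual mean standing in for the formal random-effects update. These are illustrative simplifications, not the exact RE-EM estimator:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy longitudinal data: 50 objects, 10 observations each.
n_obj, n_obs = 50, 10
group = np.repeat(np.arange(n_obj), n_obs)
b_true = rng.normal(0, 1.0, n_obj)          # random intercepts per object
x = rng.uniform(0, 1, n_obj * n_obs)
f_true = np.where(x < 0.5, 0.0, 2.0)        # step-shaped fixed effect
y = f_true + b_true[group] + rng.normal(0, 0.3, x.size)

def fit_stump(x, y):
    """Best single-split regression tree (a stump) by sum of squared errors."""
    best = (np.inf, 0.5, y.mean(), y.mean())
    for s_cand in np.quantile(x, np.linspace(0.05, 0.95, 19)):
        left, right = y[x <= s_cand], y[x > s_cand]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best[0]:
            best = (sse, s_cand, left.mean(), right.mean())
    return best[1], best[2], best[3]

b_hat = np.zeros(n_obj)
for _ in range(20):  # EM-style alternation
    # tree step: fit the stump to y with current random effects removed
    s, mL, mR = fit_stump(x, y - b_hat[group])
    f_hat = np.where(x <= s, mL, mR)
    # random-effects step: shrunken per-object mean of the residuals
    resid = y - f_hat
    for g in range(n_obj):
        r = resid[group == g]
        b_hat[g] = r.sum() / (len(r) + 0.1)   # ridge-style shrinkage

print(round(s, 2), round(mR - mL, 2))
```

Removing the estimated random effects before each tree fit is what lets the split search see the fixed-effects structure rather than the between-object differences; the real method replaces the stump with CART and the shrinkage constant with maximum-likelihood variance components.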

The RE-EM tree as originally formulated was based on the CART tree algorithm proposed by Breiman et al. (1984) for tree building, and therefore it inherits the tendency of CART to split on variables with more possible splits. I propose a revised version of the RE-EM regression tree that corrects for this bias by using the conditional inference tree proposed by Hothorn et al. (2006) as the underlying tree algorithm instead of CART. The new version is apparently unbiased, and has several improvements over the original RE-EM regression tree in terms of prediction accuracy and the ability to recover the correct true structure.

If time permits I will then describe how the RE-EM tree can be used as a diagnostic check when applying the most commonly used model for longitudinal and clustered data, the linear multilevel model (which combines a linear model for the population-level fixed effects, a linear model for normally distributed individual-level random effects, and normally distributed observation-level errors with constant variance). Application of the RE-EM tree to (functions of) the residuals from a linear multilevel model fit can be used to construct goodness-of-fit tests for nonlinearity of the fixed effects or heteroscedasticity of the errors. The resultant tests have good power to identify explainable model violations (that is, ones that are related to available covariate information in the data).

Portions of this talk are based on Sela and Simonoff (2012, *Machine Learning*) and Simonoff (2013, *Statistical Modelling*), and on joint work with Wei Fu.

**Bio:** Jeff graduated from the Applied Mathematics and Statistics Department with a BS degree in 1976 and then received his Master's and PhD degrees from Yale University.

AMS Statistics Research Seminar

**Wednesday, June 26, 2013, 12:00 - 1:00 PM; AMS Seminar Room, Math Tower 1-122A**

**Speaker:**

Faculty Candidate for Statistics Recruitment

**Title:** Modeling Epigenetics Data: From ChIP-Seq to DNA Methylation

**Abstract:** The advancement of high-throughput instruments has revolutionized the study of epigenetic events. These include (1) chromatin immunoprecipitation followed by sequencing (ChIP-Seq) for profiling protein-DNA binding, histone modifications, and nucleosome occupancy, and (2) DNA methylation profiling via Illumina BeadArray or whole-genome bisulfite sequencing. My talk is organized into two parts. In the first part, I will discuss the utility of naked/deproteinized DNA and input DNA in building a background model for ChIP-Seq data, and how this leads to our proposed mixture model for detecting peaks in both one- and two-sample ChIP-Seq data analysis. In the second part, I will discuss statistical issues in differential methylation analysis. I will introduce a statistical model that systematically incorporates informative features and improves CpG rankings, which could lead to a better understanding of DNA methylation patterns.

**Friday, May 17, 2013, 11:30-12:30PM; AMS Seminar Room, Math Tower 1-122A**

**Speaker:**

Faculty Candidate for Statistics Recruitment

**Title:** Another look at statistical testing and integrative analysis in a big(ger) data era

**Abstract:** A decade after the draft sequencing of the human genome, basic statistical issues of multiple testing remain important for discovery-based and translational science. These issues are not unique to genomics, but this area can be used to highlight shortcomings in standard multiple testing: (i) at the extreme testing thresholds required for many genomics platforms, standard testing approaches can have highly inflated false positive rates, leading to false discoveries; (ii) standard approaches to analyses of sets of features (such as genetic "pathways") can also lead to numerous false discoveries, unless correlation structures are handled carefully. Permutation analysis provides a rigorous framework for testing, but is computationally intensive and cumbersome.

In this talk, I will introduce the rationale for multiple testing and permutation analysis when dealing with high-dimensional data, and describe a series of interconnected approaches to handle testing in a computationally efficient manner. For testing individual features in 'omics platforms, I will describe the Moment-Corrected Correlation (MCC) approach to perform extremely fast and accurate testing against trend alternatives, with careful control of false positives. For data with stratified covariates, MCC has a close connection to exact conditional testing in generalized linear models. For testing pathways, or other defined groups of features, I will introduce *safeExpress*, a new software package to perform highly rigorous pathway testing. A sensible extension of these ideas leads us to exact permutation moments and methods for quadratic forms.

Finally, I will describe several additional projects and software packages that incorporate the ideas from MCC and *safeExpress* and are being applied to methylation, genotyping, and RNA-Seq datasets.

**Wednesday, May 15, 2013; 1:15 - 2:15PM; AMS Seminar Room, Math Tower, 1-122A**

**Speaker:** Xuyang Lu, Ph.D. Candidate, University of California

Faculty Candidate for Statistics Recruitment

**Title**: A Bayesian Approach for Instrumental Variable Analysis with Right-Censored Time-to-Event Outcome

**Abstract**: The method of instrumental variable (IV) analysis is widely used in economics, epidemiology, and other fields to estimate the causal effects of intermediate covariates on outcomes in the presence of unobserved confounders and/or measurement errors in covariates. Consistent estimation of the effect has been developed for continuous outcomes, while existing methods for binary outcomes produce inconsistent estimates. We extend the IV method to time-to-event outcomes for linear models with right-censored data. We propose a parametric Bayesian model with elliptically contoured error distributions, and a semiparametric Bayesian model with Dirichlet process mixtures for the random errors, in order to relax the parametric assumptions and address heterogeneous clustering problems. The performance of our method is examined in simulation studies. We illustrate the method on the Women's Health Initiative Observational Study.

**Wednesday, April 25, 2012, 1:15 PM - 2:15 PM, Mathematics Tower, Room S-240**

**Title:** Mapping the regulatory network of the genotype-phenotype map

**Speaker:** Rongling Wu, Professor of Biostatistics, Bioinformatics, Statistics, and Biology

Director, Center for Statistical Genetics, The Pennsylvania State University

**Abstract**: Genetic mapping has been instrumental for identifying specific quantitative trait loci (QTLs) that control complex traits in different organisms. Traditional approaches for genetic mapping assume a direct relationship between genotype and phenotype, ignoring a network of biochemical and developmental pathways involved in a process from DNA to a high-order phenotype. We present a new conceptual framework of QTL mapping by integrating regulatory networks of trait formation. This framework, named network mapping, treats trait formation as a dynamic system in which transcriptional, proteomic, metabolomic and developmental components coordinate and interact through a cascade of biochemical pathways. Network mapping models and quantifies a complex web of biochemical interactions using a system of differential equations (DE). By estimating the pattern of how mathematical parameters of DE change jointly or individually in time and space, the dynamic behavior and outcome of the system can be predicted by genetic and other information. Network mapping pinpoints a new research direction of genetic mapping by integrating it with systems biology.

**Wednesday, August 17th, 2011, Time 11:30am - 12:30pm, Location: AMS Seminar Room, Math Tower 1-122**

**Speaker:** Dr. Qiang Zhang, Department of Mathematics, City University of Hong Kong

**Title:** An Investment Strategy for both Good and Bad Economic Times

**Abstract:** The well-known Merton strategy is a power-utility-maximization strategy. Although this strategy performs better than several others, it is optimal only in the sense of ensemble averaging. In reality, however, only one random path will be realized, and the value of the portfolio at the end of the investment horizon could be dramatically lower than its historical high. This was evident in the recent financial crisis. We will present a new strategy to overcome this problem. The new strategy performs well in both good and bad economic times.

**Tuesday, August 9th, 2011, Time 11:30 AM - 12:30 PM, Location: AMS 1-122**

**Speaker:** Juanjuan Fan, Department of Mathematics and Statistics, San Diego State University

**Title:** Trees and Random Forests for Correlated Survival Data

**Abstract:** We are interested in developing rules for the assignment of tooth prognosis based on actual tooth loss in the VA Dental Longitudinal Study. It is also of interest to rank the relative importance of various clinical factors for tooth loss. A multivariate survival tree procedure is proposed. The procedure is built on a parametric exponential frailty model, which leads to greater computational efficiency. We adopted the goodness-of-split pruning algorithm of LeBlanc and Crowley (1993) to determine the best tree size. In addition, the variable importance method is extended to trees grown by goodness-of-fit using an algorithm similar to the random forest procedure in Breiman (2001). Simulation studies for assessing the proposed tree and variable importance methods are presented. To limit the final number of meaningful prognostic groups, an amalgamation algorithm is employed to merge terminal nodes that are homogeneous in tooth survival. The resulting prognosis rules and variable importance rankings seem to offer simple yet clear and insightful interpretations.

**Thursday, May 19th, 2011, Time 10:30am - 11:30am, Location: AMS Seminar Room, Math Tower 1-122**

**Speaker:** Professor Angshuman Sarkar, Department of Statistics, Visva-Bharati University, India

**Title:** Two-level and multi-level search designs under a tree structure

**Abstract:** Search designs provide an indispensable tool under model uncertainty. After the pioneering work of Srivastava (1975), many authors considered the problem of constructing search designs for different situations. Considering the hierarchy of factorial effects, Srivastava and Hveberg (1992) pointed out the importance of a tree structure in the factorial effects when analyzing data arising from the behavioral sciences. That is, for a factorial experiment involving factors F1, F2, F3 and F4, the non-negligibility of the interactions F1F2 and F3F4 may imply the non-negligibility of at least one of F1F2F3, F1F2F4, F1F3F4, or F2F3F4. This talk considers the problem of constructing a new class of search designs for situations where it is desired to search for and estimate two 2-factor and one 3-factor interactions under a tree structure, in addition to estimating all main effects and the general mean. We first propose a necessary condition for the existence of such a search design; this condition is also sufficient in the noiseless case. We then propose the required search designs for both the two-level and multi-level cases. The performance of the proposed designs is judged in terms of the probability of correct searching.

**Wednesday April 6th, 2011, Time 1:00pm - 2:00pm, Location Simons Center Auditorium, Room 103**

**Speaker**: Dr. Song Wu, Department of Biostatistics, St. Jude Children’s Research Hospital, Memphis, TN

**Title**: Multiple-Marker Linkage Disequilibrium Mapping of Quantitative Traits

**Abstract:** Single nucleotide polymorphisms (SNPs) comprise a major part of the DNA variants that contribute to disease onset and progression. SNP microarrays provide a platform for surveying SNPs on a genome-wide scale. In past years, many methodologies have been developed to analyze SNP data, and most of them treat SNPs as independent markers and analyze them separately. A single-marker association test is suitable for somatic SNPs, which are acquired in non-reproductive cells and cannot be passed on to offspring. For the majority of inheritable germline SNPs, however, the single-marker method may suffer by ignoring the linkage information contained in adjoining SNPs that are co-segregated with the quantitative trait loci (QTL). In this study, we propose a more powerful framework for linkage disequilibrium (LD) mapping of quantitative traits using multiple SNP markers. It can be shown theoretically that the four disequilibrium parameters involved in a trigenic model can be used to test the association between a QTL and its two flanking SNPs. Simulation studies demonstrate that our new method significantly improves the power and robustness of mapping disease genes when the QTL is in linkage with its neighboring SNP markers. Additionally, when the QTL is at the exact location of a SNP, our method maintains power comparable to the single-marker method. A real data example is analyzed to illustrate the utility of the method. In addition to SNP array data, I will also discuss how other high-throughput genomic data may contribute to biological discoveries, using examples from The Cancer Genome Atlas (TCGA) project.