
Seminars & Colloquia

MA 725 Seminars
Friday, October 30, 2009 - 2:00 p.m. - Science Center 356
Hai Lin, Division of Mathematics and Computer Science, Clarkson University
"Algorithms for Cryptographic Protocol Verification in the Presence of Algebraic Properties"
Abstract: 

The design of cryptographic protocols is error-prone: flaws have been found in seemingly secure protocols years after they were proposed. For example, the famous Needham-Schroeder protocol was found to be flawed almost 20 years after it was originally proposed. This dissertation is about giving automatic proofs that protocols are secure. We propose several methods that search for attacks systematically and automatically.

Traditionally, formal verification of protocols has relied on the perfect encryption assumption: attackers can extract nothing from an encrypted message without knowing the key. In some scenarios this assumption does not hold, because cryptographic primitives can have algebraic properties. Under these properties, partial information about an encrypted message may be released, and there can be attacks that take advantage of this partial information. Traditional methods cannot find attacks based on these algebraic properties. In this dissertation, we propose several methods that take these properties into account.

In this proposal, I will focus mostly on the motivation for our work. First, I will briefly introduce cryptographic protocols and their security issues. Then I will discuss automated reasoning and its applications to protocol verification. Finally, I will describe algebraic properties in some protocols, why they can be dangerous, and why they are difficult to handle.
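As a concrete illustration of the kind of algebraic property at issue (my own toy example, not taken from the talk): if encryption is simple XOR with a key, then reusing the key leaks the XOR of the plaintexts, so the perfect encryption assumption fails.

```python
# Toy illustration (not from the talk): a "cipher" with an algebraic
# property.  Under the perfect encryption assumption a ciphertext leaks
# nothing without the key; with XOR encryption, two ciphertexts made
# with the same key leak the XOR of the two plaintexts.

def xor_encrypt(message: bytes, key: bytes) -> bytes:
    # XOR each message byte with the corresponding key byte.
    return bytes(m ^ k for m, k in zip(message, key))

key = b"\x13\x37\x42\x99\x01\x7f"   # hypothetical shared key
c1 = xor_encrypt(b"attack", key)
c2 = xor_encrypt(b"defend", key)    # second message, same key

# An eavesdropper who never sees the key still recovers m1 XOR m2:
# partial information that protocol attacks can exploit.
leak = bytes(a ^ b for a, b in zip(c1, c2))
```

Because XOR is self-inverse, the key cancels out of `c1 XOR c2` entirely, which is exactly the sort of released partial information the proposed verification methods must model.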

Thursday, August 28, 2009 - 11:00 a.m. - Science Center 346
 Godfred Yamoah, Department of Mathematics, Clarkson University
Conservative Temporal and Spatial Adaptive Methods for Groundwater Flow
Abstract: Variably saturated groundwater flow problems are often modeled by Richards' equation, a highly nonlinear partial differential equation. Efficient and robust numerical approximation of Richards' equation remains challenging for most common problems due to the nonlinearities of the model and the nonsmooth properties of its constitutive relations for certain physical parameters. These problems can lead to a series of difficulties, including loss of mass, poorly resolved fronts, and failure of nonlinear and iterative linear solvers. Standard methods that use uniform temporal and spatial discretization for these problems are often inefficient and have given way to dynamic methods that adapt in time and space. While the advantages of each component of such adaptation strategies have been demonstrated, the joint use of spatially and temporally adaptive methods for solving Richards' equation has received little or no attention. This work has two parts: joint adaptation for Richards' equation, and mass conservation issues associated with spatial discretizations. For the joint adaptation, we propose a method for solving Richards' equation that is adaptive in both space and time. Next, we present conservation schemes that address mass conservation issues associated with grid coarsening in spatial adaptation. The motivation for this work is the Adaptive Hydrology Model (ADH), being developed by the U.S. Army Corps of Engineers for solving surface and groundwater flow problems. ADH couples 3D unsaturated groundwater modeling to 2D shallow water modeling at the surface. It advances in time implicitly, solving the nonlinear equations with an inexact Newton method with a two-level domain decomposition preconditioner. We provide promising numerical results using a 1-D simulation tool, structured similarly to ADH, that incorporates a joint spatial and temporal adaptation scheme with a choice of three methods for mass conservation.
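For reference, Richards' equation is commonly written in the following mixed form (standard notation, not taken from the abstract: θ the volumetric water content, ψ the pressure head, K the hydraulic conductivity, z elevation):

```latex
\frac{\partial \theta(\psi)}{\partial t}
  \;=\; \nabla \cdot \bigl[\, K(\psi)\, \nabla (\psi + z) \,\bigr]
```

The nonlinear dependence of both θ and K on ψ is the source of the solver and mass-conservation difficulties the abstract describes.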
Thursday, April 28, 2009 - 11:00 a.m. - Science Center 344
 Mufutau Akinwande, Department of Mathematics, Clarkson University
Homomorphisms of Nonbinary DeBruijn Graphs: Construction and Application to Parallel Random Number Generators
Abstract: DeBruijn sequences are combinatorial objects that have been applied in several disciplines of science and engineering, such as communications (shift register sequences), cryptography, pseudorandom number generation, and DNA algorithms, to name just a few.


A deBruijn sequence of order n defined over a finite alphabet (binary, decimal, etc.) contains every possible pattern of length n as a substring, exactly once each, when viewed cyclically. Such sequences can be interpreted as Hamiltonian cycles in the so-called deBruijn digraph. For a given order n, there are many known algorithms for generating deBruijn sequences. However, the number of known sequences is vanishingly small compared to the total number of possible sequences.
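As a sketch of one classical construction (the Lyndon-word, or FKM, algorithm; shown here only for context, it is not the homomorphism-based construction proposed in the talk):

```python
def de_bruijn(k: int, n: int) -> list:
    """Generate one deBruijn sequence of order n over the alphabet
    {0, ..., k-1} using the standard Lyndon-word (FKM) construction."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        # Recursively enumerate Lyndon words whose length divides n.
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq
```

For example, `de_bruijn(2, 3)` produces a binary sequence of length 2^3 = 8 in which every 3-bit pattern occurs exactly once cyclically, illustrating the "exactly once each" property above.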


In this proposal, we will describe and characterize a class of homomorphisms between deBruijn digraphs of different orders (n + k and n). We will show how a homomorphism from this class can be used to recursively generate an exponential (in n) number of deBruijn sequences of any alphabet size. We will show some important properties of these generated sequences, and also describe an algorithm that efficiently implements the proposed construction.


The second part of the proposed work is to apply these homomorphisms to the problem of generating multiple "independent sequences" of pseudorandom numbers that can be used in, e.g., parallel Monte Carlo simulations. We will show how this "parallelization" technique avoids some issues present in currently existing methods. Supporting results of preliminary tests of randomness and independence will be demonstrated.

Thursday, April 23, 2009 - 2:30 p.m. - Science Center 354
 Joseph Skufca, Department of Mathematics, Clarkson University
Application of Evidence Theory to Fusion of Network Data
Abstract: Suppose we are trying to understand the structure of a network composed of nodes (point objects) connected in some functional way by linear objects, where this network directly represents some physical object. At any particular instant, the actual network has a specific configuration, determined by the nodes and edges that form it. Suppose that the exact configuration is not completely known. Instead, we possess two different estimated configurations of the network, or depictions. An estimate may differ from the actual network by some combination of (1) including nodes and edges that are not in the actual network, or (2) excluding nodes and edges that are part of the actual network. The accuracy of the estimates is not precisely known, but is assigned by some type of subjective description. The goal is to (1) fuse the data from the estimates to form a new estimate of the network, and (2) assign a new description of the accuracy of the fused data. We assume that the actual network is not accessible: the "truth" will never be known. Therefore, the goal of the automated fusion process is to find an algorithm that accurately uses the information presented to produce fused data that is defensible from the perspective of its consistency with the source data. We will show how Evidence Theory might be used to achieve these goals.


Evidence Theory serves as an alternative to probability theory as a mathematical model of uncertainty, with particular robustness in applications where the uncertainty is epistemic (rooted in lack of information). The talk will provide an introduction to Evidence Theory before illustrating its application to the above network problem.
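To make the fusion step concrete, here is a minimal sketch of Dempster's rule of combination, the basic fusion operation of Evidence Theory (the frame of discernment below, an edge being 'present' or 'absent', is my own toy example; the talk's actual algorithm may differ):

```python
from itertools import product

def dempster_combine(m1: dict, m2: dict) -> dict:
    """Dempster's rule of combination for two mass functions whose
    focal elements are frozensets over a common frame of discernment."""
    combined = {}
    conflict = 0.0
    for (s1, w1), (s2, w2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            # Mass assigned to the intersection of compatible focal sets.
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            # Mass on disjoint sets measures conflict between the sources.
            conflict += w1 * w2
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    norm = 1.0 - conflict
    return {s: w / norm for s, w in combined.items()}

# Two depictions report evidence that a given edge is in the network;
# mass on the whole frame {'present', 'absent'} expresses ignorance.
m1 = {frozenset({'present'}): 0.6, frozenset({'present', 'absent'}): 0.4}
m2 = {frozenset({'present'}): 0.7, frozenset({'present', 'absent'}): 0.3}
fused = dempster_combine(m1, m2)
```

The fused mass function both sharpens the estimate (more mass on 'present') and carries a defensible residual ignorance, which is the behavior the abstract asks of the fusion process.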

Thursday, April 22, 2009 - 3:00 p.m. - Science Center 354
 Scott Lalonde, Department of Mathematics, Clarkson University
A Computational Approach to Measuring Homeomorphic Defect
Abstract:  An important concept in the field of dynamical systems is the notion of conjugacy. Two dynamical systems are said to be conjugate if their dynamics are topologically equivalent. In other words, there is a homeomorphism between the underlying spaces which preserves the dynamics of the two systems. In this thesis we will be discussing an extension of this idea called mostly conjugacy.


In the context of mostly conjugacy, we deal with functions called commuters. These relate two dynamical systems that are not necessarily conjugate. We can determine the amount by which two dynamical systems fail to be conjugate by studying certain properties of their associated commuter. As commuters are not generally homeomorphisms, we will do so by studying a quantity called the homeomorphic defect. The work presented here is devoted largely to developing computational techniques for measuring this defect. In particular, we will construct an algorithm for approximating the Lebesgue measure of subsets of R^n, which relies heavily on the concept of Monte Carlo integration.

Once we have constructed the algorithm, we will present results from its deployment on various benchmark sets. We will analyze these results and use them to argue for the validity of the algorithm. Finally, we will discuss open questions and possible future work toward the goal of measuring homeomorphic defect to a reasonable degree of accuracy.
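The Monte Carlo idea mentioned above can be sketched as follows (a generic sketch, not the thesis algorithm): sample points uniformly from a bounding box and scale the fraction that lands in the set by the box volume.

```python
import random

def mc_measure(indicator, lower, upper, samples=100_000, seed=0):
    """Monte Carlo estimate of the Lebesgue measure of a bounded set
    S in R^n, given its indicator function and a bounding box
    [lower[0], upper[0]] x ... x [lower[n-1], upper[n-1]]."""
    rng = random.Random(seed)
    box_volume = 1.0
    for lo, hi in zip(lower, upper):
        box_volume *= hi - lo
    hits = 0
    for _ in range(samples):
        x = [rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        if indicator(x):
            hits += 1
    # Hit fraction times box volume converges to the measure of S.
    return box_volume * hits / samples

# Benchmark: the unit disk in R^2, whose true measure is pi.
est = mc_measure(lambda x: x[0]**2 + x[1]**2 <= 1.0, [-1, -1], [1, 1])
```

The estimator's standard error shrinks like 1/sqrt(samples) independent of dimension, which is why Monte Carlo integration is attractive for subsets of R^n.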

Thursday, April 9, 2009 - 12:30 p.m. - TAC 206
Brian McClune, Department of Mathematics, Clarkson University
New Hybrid and Surrogate Techniques for Simulation-based Optimization of a Polymer Extrusion Filter
Abstract: In this thesis, we propose two novel hybrid approaches created by combining a particle swarm optimization (PSO) algorithm and a genetic algorithm (GA). The first method hybridizes the two algorithms by nesting the swarm search functionality of the PSO within the GA as a replacement for the traditional genetic algorithm crossover routine; the second method combines the two by introducing a crossover function to be executed on a conditionally determined fraction of the swarm after the particle locations are updated. In addition, we introduce alternate hybrid versions augmented with surrogate construction functionality and search subroutines in an effort to further improve their search efficiencies. It is our hope that this work will not only provide a powerful new means by which to improve upon previous results in our polymer extrusion filter optimization research, but also demonstrate the potential of the new hybrid and surrogate designs for application to a far wider set of optimization problems.
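For context, here is a minimal sketch of the canonical PSO update that such hybrids build on (standard textbook form with assumed parameter values; the thesis variants modify how crossover interacts with this step):

```python
import random

def pso_step(positions, velocities, pbest, gbest, f,
             w=0.7, c1=1.5, c2=1.5, rng=random.Random(0)):
    """One canonical particle-swarm update: each particle is pulled
    toward its personal best and the swarm's global best, then the
    personal and global bests are refreshed."""
    for i, (x, v) in enumerate(zip(positions, velocities)):
        for d in range(len(x)):
            r1, r2 = rng.random(), rng.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])   # cognitive pull
                    + c2 * r2 * (gbest[d] - x[d]))     # social pull
            x[d] += v[d]
        if f(x) < f(pbest[i]):
            pbest[i] = list(x)
            if f(x) < f(gbest):
                gbest[:] = x
    return positions, velocities, pbest, gbest
```

The hybrids described above would either run this swarm search inside the GA in place of crossover, or apply a crossover operator to part of the swarm after the position update.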
Wednesday, April 2, 2009 - 2:30 p.m. - Science Center Room 354
Suman Sanyal, Visiting Assistant Professor, Department of Mathematics and Computer Science, Clarkson University
Brownian Motion Indexed by a Time Scale
Abstract: In this talk, I will present a construction of a countable dense subset of an arbitrary time scale (a closed subset of the reals) and use it to prove the existence of a Brownian motion on a specific set by proving a generalization of the Kolmogorov-Centsov theorem.
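As a purely illustrative sketch (not the existence construction in the talk), one can sample a Brownian path at points of a time scale by drawing independent Gaussian increments whose variances equal the gaps between consecutive points:

```python
import math
import random

def brownian_on_points(times, seed=0):
    """Sample a Brownian path at an increasing sequence of time points
    (e.g. points taken from a closed subset of the reals, as in a time
    scale): increments are independent N(0, t_{i+1} - t_i)."""
    rng = random.Random(seed)
    w = [0.0]  # standard Brownian motion starts at 0
    for t_prev, t_next in zip(times, times[1:]):
        w.append(w[-1] + rng.gauss(0.0, math.sqrt(t_next - t_prev)))
    return w

# A hypothetical time scale mixing interval points with isolated points:
ts = [0.0, 0.25, 0.5, 1.0, 2.0, 2.0 + 1 / 3, 3.0]
path = brownian_on_points(ts)
```

The mathematical content of the talk is what this sketch takes for granted: proving that such finite-dimensional samples extend to a genuine process indexed by the whole time scale.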
Wednesday, March 11, 2009 - 1:00 p.m. - Science Center Room 356
Yuefeng Tang, Department of Mathematics and Computer Science, Clarkson University
Interpolation Algorithms in Program Verification
Abstract: Recently, interpolants have been used as an important component of program verification. In this thesis, we propose three different procedures for interpolant generation. The first method, based on efficient rewriting techniques, constructs interpolants for linear inequalities over the reals, and it does not require the use of combination procedures. The process of building interpolants reflects the structure of the rewrite proof. The second method, built on the linear arithmetic solver of Yices (an SMT solver), constructs interpolants for linear inequalities over the reals or linear equalities over the integers, and it is designed to work well in an SMT theorem prover. We have implemented this method and give experimental results.
The third method is built on the linear arithmetic solver of Yices with a Gomory cut strategy for linear inequalities over the integers. The interpolant is constructed by repeatedly refining a conditional formula, derived from a contradictory equation, until it becomes an interpolant.
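A small worked example of the object being computed (my own example, not from the thesis): for jointly unsatisfiable formulas A and B over the reals, a Craig interpolant is implied by A, contradicts B, and uses only their shared symbols.

```latex
% z occurs only in A; the shared symbols of A and B are x and y.
A \;=\; (x \le z) \wedge (z \le y), \qquad B \;=\; (y + 1 \le x).
% A \wedge B is unsatisfiable, and
I \;=\; (x \le y)
% is an interpolant: A \models I, the conjunction I \wedge B is
% unsatisfiable (it yields y + 1 \le y), and I mentions only x and y.
```

In verification, A and B typically come from splitting an infeasible program path, and the interpolant serves as a candidate invariant at the split point.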
Thursday, March 5, 2009 - 2:30 p.m. - Science Center Room 354
Suman Sanyal, Department of Mathematics, Clarkson University
Global Stability of Complex-Valued Neural Networks
Abstract:  In this talk, I will discuss the activation dynamics of a complex-valued neural network on general time scales.  Besides presenting conditions guaranteeing the existence of a unique equilibrium pattern, its global exponential stability will be discussed. Some numerical examples for different time scales will be given in order to highlight the results.
Friday, January 30, 2009 - 11:30 p.m. - Science Center Room 356
Jie Sun
Department of Mathematics, Clarkson University
Networked Networks: Uncovering the Scale of Your Network Dynamics
Abstract: The modeling of dynamical systems on networks has wide application across many fields of science. Many of these problems require understanding spatial and temporal interactions and scales in non-homogeneous media. In this thesis proposal, we will present our current study of, and planned approach to, this central issue.

In the first part, we will present theoretical tools for analyzing the synchronization stability of networked oscillators: a coupled dynamical system. Unlike traditional analyses, which assume that the individual oscillators are identical, our generalized analysis allows for error in various forms: random noise, parameter mismatch, or effects from external oscillators. Other forms of synchronization, including cluster synchronization and generalized synchronization, will also be discussed within the framework of our analysis.


In the second part, we use the tools developed above to define spatial scales in terms of the synchronizability of clusters: densely connected subnetworks that share similar dynamics. Interestingly, we connect a classic concept called shadowing to the problem of the modelability of synchronized oscillators. We will also present a new type of network model, sequence networks, as a canonical model for modular networks (irrespective of the dynamics).


Other fundamental problems related to the modeling of networks and of dynamics on networks, including the optimal representation of a network and the efficient computation of network statistics via update schemes, will also be discussed.

Thursday, December 4, 2008 - 2:30 p.m. - Science Center Room 354
Abbas Alhakim
Department of Mathematics, Clarkson University
The Classical Serial Test of Randomness with a New Look
Abstract: An improvement of the well-known overlapping serial test will be presented. This is a statistical test that checks the validity of the assumption of randomness of computer-generated sequences. A short introduction will be given to the main methods and deficiencies of pseudo-random number generation.
Thursday, November 20, 2008 - 3:30 p.m. - Petersen Board Room
Guohua Zhou
Department of Mathematics, Clarkson University
Fourier Analysis and Truncation Error Estimates of Multigrid Methods and Conservative Jacobians on Hexagonal Grids
Abstract: Multigrid methods are fast solvers for elliptic partial differential equations (PDEs), which lead to all kinds of applications in numerical computation, such as the multigrid solver on geodesic grids. Multigrid methods use approximations on grids of different mesh sizes. Comparing the approximate truncation error on two consecutive grids leads to a higher-order approximation to the truncation error on the coarse grid, known as relative truncation error estimates. Hexagonal grids offer some advantages over rectangular grids, and hexagonal grids approximate geodesic grids. This talk presents multigrid methods on hexagonal grids. Quantitative insights for the multigrid solver on geodesic grids can be obtained by local Fourier analysis on hexagonal grids. The framework for one- and two-grid analysis is given, and numerical results confirm the analysis. An accurate formulation of the truncation error estimates for linear problems, limited to non-staggered grids, has been investigated previously. We extend this formulation to nonlinear problems on staggered grids. We apply the method to the Cauchy-Riemann equations, the Stokes equations, and the Navier-Stokes equations. The discretizations of the conservative Jacobian operator on hexagonal grids are investigated. Numerical results from solving the nondivergent barotropic vorticity equation, which is expressed using the Jacobian, verify the conservation properties of the discretizations. A multigrid method is applied when solving the Poisson equation for updating the streamfunction. With a flow through the boundary, we test the boundary formulations.
Thursday, November 13, 2008 - 2:30 p.m. - Science Center 354
Kirsten Morris
Department of Mathematics, University of Waterloo
Controller Design for Partial Differential Equations
Abstract: There are essentially two approaches to controller design for systems modelled by partial differential equations. In the first approach, the full model of the system is used in controller design. The designed controller is generally infinite-dimensional and is often subsequently reduced before implementation.

This first approach is generally not feasible, since a closed-form expression for the solution is not available. For most practical examples, therefore, a finite-dimensional approximation of the system is first obtained and the controller is designed using this finite-dimensional approximation. The hope is that the controller has the desired effect when implemented on the original system. That this method is not always successful was first documented more than 30 years ago. A controller that stabilizes a reduced-order model need not stabilize the original model, or some other aspect of the system performance may be unacceptable. Systems with infinitely many eigenvalues either on or asymptotic to the imaginary axis are notorious candidates for problems.

In this talk, some issues associated with approximation of systems for the purpose of controller design are discussed along with conditions under which satisfactory controllers can be obtained using approximations.

Thursday, October 16, 2008 - 2:30 p.m. - Science Center 354
Kathleen Fowler
Department of Mathematics, Clarkson University
The Existence of Free Lunch
Abstract: In this talk I will describe my experience with the American Institute of Mathematics (AIM) Research Conference Center (ARCC) in Palo Alto, CA where I hosted a workshop on Derivative-Free Hybrid Optimization Methods for Simulation-based Problems in Hydrology. ARCC hosts week-long focused workshops in all areas of the mathematical sciences with an emphasis on a specific mathematical goal or progress on a significant unsolved problem. AIM provides funding for roughly 30 participants (chosen by the host) to travel and attend the workshop, including hotel, airfare, per diem, etc. The next call for proposals is November 1, 2008. Come learn about how you can host a workshop to advance your own area of research.
Thursday, October 2, 2008 - 2:30 p.m. - Science Center 354
Suman Sanyal
Department of Mathematics, Clarkson University
Stochastic Dynamic Equations and Their Applications
Abstract: To be posted
Friday, September 19, 2008 - 12:00 p.m. - Science Center 348
Karabi Sinha, Ph.D.
Departments of Biostatistics and Nursing, UCLA Schools of Public Health and Nursing
Preparation for a Career in Quantitative Research
Abstract: An interactive discussion about necessary skills and knowledge important for students to hone as they prepare for careers in quantitative research. Some such areas are developing oral and written communication skills and familiarizing oneself with research resources and common practices. This discussion should prove helpful to students who are going to graduate school to pursue higher studies and academic careers as well as to students who are graduating with undergraduate/graduate degrees to pursue industry jobs.
Thursday, September 18, 2008 - 2:30 p.m. - Science Center 354
Karabi Sinha, Ph.D.
Departments of Biostatistics and Nursing, UCLA Schools of Public Health and Nursing
Some Aspects of Diallel Cross Designs with Correlated Observations
Abstract: Diallel Cross Designs [DCDs] have been extensively studied in the literature during the last few decades. Complete and partial diallel crosses have been studied from sampling and genetical experiment perspectives. Generally, experiments are planned to accommodate all crosses in randomized blocks, which makes the block sizes prohibitively large. Therefore, incomplete block designs as well as partial diallel crosses have been recommended. In this context, there has been a great deal of emphasis on and study of optimal partial diallel crosses, in both unblocked and blocked situations. However, all these studies are based on the assumption that the errors are homoscedastic, even though the same parental line may be involved in crosses within a block.
The objective of this work is to study diallel cross designs in blocked experiments, after introducing a natural correlation structure among the observations within each block and the implications on data analysis thereof. It is observed that when the information matrix for parental line effects possesses a completely symmetric structure for uncorrelated observations, the same structure continues to hold even in the correlated set-up, though the computation of the constant is non-trivial. We have resolved this computation for small block sizes.
Thursday, September 4, 2008 - 2:30 p.m. - Science Center 354
Joseph Skufca
Department of Mathematics, Clarkson University
Stadium Chants and Soccer Cheers: Synchronization of Networks in the Presence of Delay
Abstract: We consider the dynamics of networks of coupled oscillators where communication between the systems is delayed by some transmission lag. We examine several different schemes and investigate how the delay can control the dynamics available to the network. We focus upon a symmetrically coupled scheme of chaotic oscillators and show that the coupling will stabilize a periodic orbit, where the specific delay value determines the period of the system, with a very natural period-doubling route to chaos as the delay tends toward zero. We explore this phenomenon as a probe to understand the implications of delay for self-organizing behavior.
Tuesday, March 25, 2008 - 3:00 p.m. - Petersen Boardroom
Naratip Santitissadeekorn
Department of Mathematics, Clarkson University
Transport Analysis and Motion Estimation of Dynamical Systems of Time-Series Data
Abstract: This thesis addresses the global analysis of dynamical systems known only via experimental time-series data or video data (image sequences). We wish to identify mechanisms of global transport in dynamical systems that are known only through experimental observations. A primary interest of this thesis is to reconstruct, from a 2-D image sequence, a velocity field that transforms the intensity pattern from each image to the next in the sequence. The problem of motion estimation has been intensively studied over the last two decades in many areas of computer vision, to perform motion detection and tracking, object segmentation, and time-to-collision estimation. However, motion estimation has not yet been demonstrated in the area of measurable dynamical systems, in particular to approximate the Frobenius-Perron transfer operator from sequential image data. We therefore offer a practical technique to reconstruct a dynamical model from movies (image sequences) from various areas of science. We provide a mathematical formalism in the framework of the Frobenius-Perron operator to validate our technique. For questions of transport analysis, we explore and apply certain graph-theoretic methods to identify almost-invariant sets. These almost-invariant sets correspond to the situation of basin-hopping in a multi-stable system under a small stochastic perturbation. We perform several numerical experiments to investigate the validity of the method. For a general time-dependent vector field, we employ the concept of the finite-time Lyapunov exponent (FTLE) to define the time-varying Lagrangian coherent structure, which prevents flux from traversing the structure. We demonstrate applications that combine the above tools in a study of transport and mixing for certain real-world mixing devices.
 
Wednesday, March 12, 2008 - 12:00 noon - SC 348 (presented by SIAM)
Prof. C.K. Poon
Department of Computer Science, City University of Hong Kong
A Glimpse of Data Stream Computations
Abstract: Due to the pressing need to process massive volumes of data efficiently in many areas, such as data mining, network traffic management, and sensor networks, data stream computation has received a lot of attention recently. In this talk, I will describe several models, problems, and a number of interesting results in the area.
 
Friday, February 29, 2008 - 12:00 noon - SC 356
Carmeliza Navasca
Department of Mathematics, Rochester Institute of Technology
Applications and Numerical Methods in High Dimensions
Abstract: Standard methods for solving PDEs, for example, suffer from the curse of dimensionality, since the computation grows exponentially as the state dimension increases. These equations in high dimension are important because they have real-world applications. In this talk, I will present some numerical techniques for solving optimal control problems in high dimension, as well as describe recent tools such as tensor decomposition. In addition, I will feature some of its important applications in signal processing, pursuit-evasion games, parameter identification, and liver cancer treatment.
 
Wednesday, February 20, 2008 - 12:00 noon - SC 356
Xudong Yao
Department of Mathematics, University of Arkansas
Minimax Methods for Numerically Finding Multiple Solutions of Nonlinear PDEs
Abstract: In this talk, several well-known model problems will be presented first as background for our research. Then, the development of minimax methods for numerically finding multiple solutions of nonlinear PDEs will be reviewed, and our contribution to this development will be discussed. Some numerical results will be shown for the p-Laplacian equations, the eigenproblem of the p-Laplacian operator, and the Gross-Pitaevskii equation, i.e., a mathematical model describing the single-particle properties of a Bose-Einstein condensate. Several future research topics will be suggested.
 
Tuesday, February 19, 2008 - 12:00 noon - SC 356
Shuhua Hu
Center for Research in Scientific Computation, North Carolina State University
Modeling Shrimp Biomass and Viral Infection for Production of Biological Countermeasures
Abstract: A hybrid model of a shrimp biomass and vaccine production system was developed for the production of large quantities of biological countermeasures, in which the output of the biomass production model for healthy shrimp serves as input to the vaccine production model for shrimp that have been infected with a recombinant viral vector expressing a foreign antigen. The biomass production model has size as its only structure variable, while the vaccine production model entails both size and class-age structure. The sensitivity of the size-structured population model with respect to growth and mortality rates is one important factor in optimizing the entire biomass production system. A rigorous derivation of the partial differential equations for the sensitivities of the population density with respect to initial conditions and to the growth, mortality, and fecundity rates was established via the method of characteristics. A reassuring aspect of our investigations is that they reveal that the correct sensitivity equations can be formally obtained by simply differentiating the population equation of interest with respect to the functions in question.
Monday, February 11, 2008 - 12:00 noon - SC 356
Aaron Luttman
Bethany Lutheran College, Division of Science and Mathematics
Inverse Problems for Botanical and Astronomical Image Analysis
Abstract: Many image analysis problems are formulated as inverse problems, where the goal is to minimize a particular energy functional over some set of allowable functions. Two such problems are image segmentation and image deblurring. In botany it is useful to capture image data of leaves as they fluoresce in the infrared, and the goal is to segment the videos or images into regions of fluorescence and non-fluorescence. The botanical problem will be described, as well as a variational technique with numerical methods for video segmentation in this context. Astronomical data measured from the ground is blurred as it passes through the atmosphere, and this effect must be reversed in order to analyze the data. This deblurring is formulated as an inverse problem, and we present theoretical analysis and numerical results demonstrating that Poisson negative log-likelihood estimation can be used to reconstruct such astronomical data when regularized using the total variation of the reconstruction.
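The deblurring formulation described above can be sketched as the following variational problem (notation mine: A the blurring operator, f the measured image, u the reconstruction, α > 0 a regularization weight):

```latex
\min_{u \ge 0} \;
  \int_\Omega \Bigl( (Au)(x) - f(x)\,\log (Au)(x) \Bigr)\, dx
  \;+\; \alpha \,\mathrm{TV}(u),
\qquad
\mathrm{TV}(u) \;=\; \int_\Omega |\nabla u|\, dx
```

The first integral is the Poisson negative log-likelihood (up to terms independent of u), and the total-variation term is the regularizer mentioned in the abstract.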
Friday, January 18, 2007 - 12:00 noon - SC 356
Yanzhao Cao
Department of Mathematics, Florida A&M University
Numerical Solutions of Stochastic Partial Differential Equations
Abstract: Using computer simulations, we are now able to study the dependence of computed solutions on variations or uncertainties in the initial data, the forcing terms, or even in the coefficients or the physical properties of the system. The results of such studies suggest that both natural and engineering phenomena commonly framed in terms of deterministic systems of partial differential equations may be more correctly modeled and deeply understood as stochastic partial differential equations (SPDE's) instead. Stochastic models are more complex than deterministic ones; as part of this complexity, the solution of an SPDE is not simply a function, but rather a stochastic process which expresses the implicit variability of the system. This is the reason that SPDE's are able to more fully capture the behavior of interesting phenomena; it also means that the corresponding numerical analysis of the model will require new tools to model the systems, produce the solutions, and analyze the information stored within the solutions.
In this talk, I will present some recent developments on numerical solutions for stochastic partial differential equations. In particular, I will discuss finite element approximations for a class of nonlinear stochastic elliptic equations with white noise forcing terms. I will also briefly discuss an engine noise reduction problem related to Monte Carlo simulations for stochastic Helmholtz equations and the modeling of water flow and contaminant transport in karst aquifers related to the stochastic Darcy equation.
Wednesday, January 16, 2008 - 12:00 noon - SC 356
Kevin Vixie
Mathematical Modeling and Analysis, Theoretical Division, Los Alamos National Labs
Measuring shapes: Image analysis, Geometric Measure Theory, and Stochastic Processes
Abstract: In this talk I will highlight recent work showing that the new Chan-Esedoglu L1TV image functional generalizes a special case of the important flat norm from geometric measure theory.
This relationship not only suggests a very simple generalization of the flat norm, it also generalizes the L1TV functional. These observations lead to promising new applications to image analysis, shape analysis and probability in shape spaces.
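For context, the two objects can be written side by side in their standard forms from the L1TV and flat-norm literature (the notation and the scale parameter λ below are my additions for illustration, not necessarily those used in the talk):

```latex
% Chan--Esedoglu L1TV functional for a given image f:
E_\lambda(u) \;=\; \int |\nabla u| \, dx \;+\; \lambda \int |u - f| \, dx

% Flat norm of a current T, with a scale parameter lambda inserted:
\mathbb{F}_\lambda(T) \;=\; \min_{S} \bigl\{\, \mathbf{M}(T - \partial S) \;+\; \lambda\, \mathbf{M}(S) \,\bigr\}
```

When f is the characteristic function of a set, minimizing E_λ amounts to a flat-norm-type decomposition of the set's boundary; this is the correspondence referred to above.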
After discussing these innovations, I will describe my newest research components at the interface of geometric measure theory and stochastic analysis. In these new threads we will be studying the precise quantification of uncertainty in geometric measure vectors computed on the output of specific image segmentation functionals. After outlining the motivation and a short guide to our plan of attack, I will introduce various background pieces that are of independent interest. These include various facts about tubular neighborhoods, sets of positive reach and curvature measures.
The lecture, including as it does some fairly technical details, will be kept accessible to graduate students through an emphasis on intuitive explanation and visual illustration.
Monday, January 14, 2008 - 12:00 noon - SC 356
Jichun Li
Department of Mathematical Sciences, University of Nevada, Las Vegas
Mathematical and numerical study of Maxwell's equations in complex media
Abstract: Solving Maxwell's equations plays an important role in many science and engineering areas. Examples include device design (such as antennas, radar and waveguides), nondestructive testing and imaging (geophysical probing for oil reservoirs and tumor detection), near field control and manipulation (detecting low levels of chemical and biological agents). In this talk I will start with the general time dependent Maxwell's equations in dispersive media. Three most popular dispersive media models (cold plasma, one-pole Debye medium and two-pole Lorentz medium) will be discussed. Next, I will show some finite element methods and corresponding error estimates for solving those model equations. Then I will extend the discussion to double negative metamaterials and present some numerical results. I will conclude the talk by posing some open problems and potential applications in nanotechnology and biomedical applications.
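As a point of reference, the one-pole Debye medium mentioned here is typically specified through a complex permittivity of the following textbook form (the sign of the iωτ term depends on the time-harmonic convention; this is a standard statement, not necessarily the exact model treated in the talk):

```latex
\varepsilon(\omega) \;=\; \varepsilon_\infty \;+\; \frac{\varepsilon_s - \varepsilon_\infty}{1 + i\,\omega\tau}
```

where ε_s and ε_∞ are the static and infinite-frequency permittivities and τ is the relaxation time.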
Monday, November 5, 2007 - 3:00p.m. - SC 348
Godfred Yamoah
Department of Mathematics, Clarkson University
Temporal and Spatial Adaptive Methods for Groundwater/Surface Water Flow
Abstract: Variably saturated groundwater flow problems are often modeled by Richards’ equation. Efficient and robust numerical approximations of Richards’ model continue to be challenging for most common problems due to the nonlinearities of the model and the nonsmooth properties of its constitutive relations for certain physical parameters. Standard methods that use uniform temporal and spatial discretization for these problems are often inefficient and computationally expensive. In this work we implement a temporal adaptive scheme using a fixed grid to solve Richards’ equation in two simulations: one in which no infiltration fronts form in the domain and one in which sharp fronts form and move in the domain. Our simulations were done on clay, silt and sand. We implement a spatially adaptive refinement scheme using an a priori error indicator to perform the adaptive spatial refinement and a heuristic method to compute the time step sizes. We also describe a 1D grid mass conservative coarsening scheme using an optimization approach and compare it with an averaging method suggested by Kee. We discuss a joint temporal and spatial adaptive scheme that we will be implementing. The motivation for this work is the ADaptive Hydrology Model (ADH), being developed by the US Army Corps of Engineers, for solving surface and ground water flow problems.
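For reference, Richards' equation in its standard mixed form reads (notation mine; θ is volumetric water content, ψ pressure head, K hydraulic conductivity, and z elevation):

```latex
\frac{\partial \theta(\psi)}{\partial t} \;-\; \nabla \cdot \bigl( K(\psi)\, \nabla (\psi + z) \bigr) \;=\; 0
```

The nonlinear dependence of θ and K on ψ is what makes the equation so challenging numerically.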
Thursday, September 13, 2007 - 2:30p.m. - SC 354
Michael Radin
Department of Mathematics, Rochester Institute of Technology
Boundedness, Periodic and Monotonic Character of the Positive Solutions of a Non-Autonomous Rational Difference Equation
Abstract: It is our goal to investigate the boundedness, periodic and monotonic character of the positive solutions depending on the order of the period of the sequence and how the terms of the sequence are rearranged. In addition, we investigate how the period of the sequence affects the periodic character of the solutions.

This is joint work with Mark R. Bellavia and Gerasimos Ladas from the University of Rhode Island.
  
Friday, August 23, 2007, 3:00p.m. - SC 354
Xunyang Shen
Department of Mathematics, Clarkson University
Towards a Practical Solution of Handling Over/underflow Exceptions with Alternate Number Formats
Abstract: One of the primary drawbacks of floating-point (FLP) arithmetic is its susceptibility to overflow and underflow. Accordingly, many alternate number formats have been proposed to overcome over/underflow exceptions in scientific computing. With the goal of promoting their use in practice, this dissertation discusses the effectiveness of some alternate number formats using both theoretical analysis and experimental examples.
  
Symmetric level-index (SLI) arithmetic is the preferred alternative. It allows such a large representable number range that it is essentially over/underflow-free. Some algorithm improvements, precision analysis, and a software implementation are discussed, followed by its use in three practical applications. The satisfactory results indicate that we have achieved a practical solution for handling over/underflow problems with software, and that a hardware solution can be realistic in the future.
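To give a concrete feel for the level-index idea, here is a minimal sketch (my own illustration, not Shen's implementation; the "symmetric" part of SLI, which handles signs and magnitudes below 1 via reciprocals, is omitted):

```python
# Level-index representation sketch: a nonnegative number x is stored
# as a level l (how many nested exponentials) and an index f in [0, 1).
import math

def li_encode(x):
    """Encode x >= 0 as (level, index) with x = exp applied `level` times to index."""
    level = 0
    while x >= 1.0:
        x = math.log(x)   # peel off one exponential
        level += 1
    return level, x

def li_decode(level, index):
    """Invert li_encode by applying exp `level` times."""
    for _ in range(level):
        index = math.exp(index)
    return index
```

Because the level grows extremely slowly (level 4 already covers numbers near 1e300), a small fixed level field makes such a format effectively free of over/underflow.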
 
July 11 , 2007, 3:30p.m. - SC 356
Chen Yao
Department of Mathematics, Clarkson University
Modelling Low-Dimensional Submanifolds in Dynamical Systems
Abstract: In this dissertation, we present a new technique to model a low-dimensional dynamical system embedded in a high dimensional dynamical system. The technique involves a method of discovering a low-dimensional nonlinear submanifold, known only empirically through the data set from a high-dimensional system, and a method of globally modelling the low-dimensional manifold to obtain an analytic form that reproduces a data set approximating the low-dimensional manifold. When a reliable analytic form of the low-dimensional system is attained, further work such as short-time prediction, structure investigation, and control may be possible to accomplish.
  
The technique of modelling low-dimensional systems developed here does not require analytic forms of the original systems, and the embedded low-dimensional submanifolds can be highly nonlinear. Moreover, in some specified systems, the analytic forms generated by the modelling technique can give us some insight into the structure of the system restricted to the low-dimensional submanifolds.
 
Friday, April 20, 2007, 9:30 a.m. - SC 344
Mehul N. Vora
Department of Mathematics, Clarkson University
A Novel Approach to Data Mining: Genetic Algorithm for Feature Selection
Abstract: This dissertation presents the Genetic Algorithm (GA) as a data microscope for sorting, probing, and uncovering relationships in multivariate data. Identifying a relationship or a pattern in a multivariate dataset is a challenging problem. Sometimes relationships cannot be expressed in quantitative terms; they are better expressed in terms of similarity and dissimilarity among groups of multivariate data. Feature selection, the process of identifying the most informative features, is a crucial step in any data mining and pattern recognition study. The selection of an appropriate feature subset can simplify the problem and lead to improved results. Feature selection, however, is itself non-trivial.
  
The pattern recognition GA identifies a subset of features whose variance or information is primarily about differences between the groups in a data set. The attributes of a genetic search strategy towards selecting the best feature subset can potentially overcome the difficulties inherent in feature selection. Application of the pattern recognition GA to a wide range of problems from the field of chemometrics and bioinformatics demonstrates the utility of the method.
 
Thursday, April 12, 2007, 2:30-3:30 - SC 354
Genetha Gray
Sandia National Labs
Some Considerations for Evaluating the Predictive Capabilities of Xyce, An Electrical Circuit Simulator
Abstract: Significant advances in computing capabilities, decreasing storage costs, and the rising costs associated with physical experiments have contributed to an increase in the use of numerical modeling and simulation. However, the inclusion of computer simulations in the study and design of complex engineering systems has introduced many new challenges.
  
As programs turn to numerical modeling and simulation to aid their design, validation and verification (V&V) tools are critical for determining simulation-based confidence and predictive capabilities. For example, code verification must be used to confirm that the underlying equations are being solved correctly. In addition, validation processes should be applied to answer questions about the correctness of the equations and models for the physics being modeled and the application being studied. Moreover, validation metrics must be carefully chosen in order to explicitly compare experimental and computational results and quantify the uncertainties in these comparisons.
  
Overall, the V&V process for computational experimentation can provide the best estimates of what can happen and the likelihood of it happening when uncertainties are taken into account. The process is not independent of physical experimentation. In order to carry out the validation activities, experiments must be carefully planned to provide adequate and appropriate data. In this talk, we will describe the validation process as it relates to Xyce, an electrical circuit simulator developed at Sandia. Specifically, we will address the issues of physical experiments, simulator calibration, and uncertainty quantification.
 
Wednesday, April 4, 2007, 12:30-1:30 - SC 356 - a SIAM Visiting Lecture, sponsored by Clarkson University Chapter of SIAM
John Hamilton
Photographic Science and Technology, Eastman Kodak Company
Algorithms for Digital Color Cameras
Abstract: While digital color imaging has many problems in common with conventional silver halide imaging, it also has its own particular problems not faced in the analog world.  Two of these problems, and two corresponding algorithmic solutions, are illustrated by example and discussed in detail.  In addition, a mathematical perspective is presented to explain how these algorithms work.
  
The first problem is that of color interpolation (also called demosaicking).  The pixels of most silicon sensors capture a single color measurement, usually a red, green, or blue color value.  Because a fully processed color image requires all three color values at each pixel, two additional color values must be provided at each pixel.  An algorithm is presented that addresses this problem for sensors using the Bayer color filter pattern.
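As a rough illustration of what such an interpolation does, here is a generic bilinear-style sketch assuming an RGGB layout (my own simplification, not Kodak's production algorithm, which exploits cross-channel correlations):

```python
# Minimal bilinear-style demosaicking sketch for an RGGB Bayer mosaic.
import numpy as np
from scipy.signal import convolve2d

def bayer_masks(shape):
    """Return boolean masks marking the R, G, B sample sites of an RGGB pattern."""
    r = np.zeros(shape, dtype=bool); r[0::2, 0::2] = True
    b = np.zeros(shape, dtype=bool); b[1::2, 1::2] = True
    g = ~(r | b)
    return r, g, b

def demosaic_bilinear(mosaic):
    """Fill in the two missing color values at each pixel by averaging
    the known same-color samples in its 3x3 neighborhood."""
    kernel = np.ones((3, 3))
    out = np.empty(mosaic.shape + (3,))
    for c, mask in enumerate(bayer_masks(mosaic.shape)):
        vals = np.where(mask, mosaic, 0.0)
        num = convolve2d(vals, kernel, mode="same")
        den = convolve2d(mask.astype(float), kernel, mode="same")
        out[..., c] = num / den   # local average of available samples
    return out
```

Real demosaicking algorithms improve on this by using gradients and inter-channel correlation to avoid color fringing along edges, which is part of what the talk addresses.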
  
The second problem is that of color aliasing.  While the problem of aliasing is always present in a discrete imaging system, it is compounded for color sensors because the different color channels can alias in different ways.  The resulting interference patterns have distinctive color components which constitute an obvious and annoying imaging artifact.  An algorithm is presented that addresses this problem for sensors using the Bayer color filter pattern.
 
Tuesday, April 3, 2007, 4:00-5:00 - SC 454
Ye Chen - Department of Mathematics, Ph.D. Thesis Proposal
Multigrid Methods for Monge-Ampere Equations and Systems
Abstract: The equations governing the motions of the atmosphere express the conservation of mass, momentum and energy. Various approximations lead to different dynamical models. For example, an assumption of dynamical balance between the mass and wind fields can simplify the dynamics, e.g. by filtering high-frequency waves. These models are called balanced models. In principle, one can recover both the mass and wind fields from the potential vorticity. Doing so requires solving an elliptic problem, which we refer to as an invertibility relation.
  
Some linear invertibility relations can be solved by existing software. However, for more complicated problems, the invertibility relations are nonlinear and difficult to solve efficiently. We have developed efficient 2nd-order multigrid solvers for the semigeostrophic model and the balanced vortex model. We propose developing an efficient 2nd-order multigrid solver for the zonally symmetric balanced flow model, and we expect to develop robust 4th-order multigrid solvers for the three models mentioned above.
 
Wednesday, February 7, 2007, 4:00-5:00 - CAMP 176 - co-sponsored with Electrical and Computer Engineering
Thomas Hemker
Department of Computer Science, TU Darmstadt
Surrogate Optimization for Mixed-Integer Nonlinear Problems in Engineering Applications of Black-Box Type
Abstract: Simulation-based optimization is becoming increasingly important in engineering applications, but the software typically employed for computational modeling and simulation of complex engineering problems has been developed over many years and was usually not designed to meet the specific needs of optimization methods. Underlying iterative algorithms, approximation of tabular data, etc., additionally result in very low smoothness properties of the objective function.
  
Thus, non-smooth optimization problems of black-box type arise. If, in addition to continuous real-valued variables, discrete integer-valued optimization variables must also be considered, only a few optimization methods based solely on evaluations of the objective function can be applied, aside from computationally expensive random search methods.
  
In this talk we present a surrogate optimization approach to solve such problems based on sequentially improved stochastic approximations. Numerical results for the proposed approach will be discussed for electro-magnetic design problems for electrical engineering, for groundwater management problems from civil engineering, as well as for walking speed optimization problems of humanoid robots.
 
Monday, February 5, 2007, 4:00-5:00 - SC 356
Maria Emelianenko
Department of Mathematical Sciences, Carnegie Mellon University
Mathematical Modeling and Simulation of Texture Evolution
Abstract: Preparing a texture suitable for a given purpose is a central problem in materials science, which presents many challenges for mathematical modeling, simulation, and analysis. In this talk, I will focus on the mesoscopic behavior of the grain boundary system and on understanding the role of topological reconfigurations during evolution. Several types of evolution equations based on purely probabilistic and stochastic descriptions will be formulated and compared against the results provided by simulation. Their advantages and limitations, numerical characteristics and possible extensions to higher dimensions will be discussed.
 
Thursday, February 1, 2007, 2:30-3:30 - SC 354
Takashi Nishikawa
Department of Mathematics, Southern Methodist University, Dallas
Optimal Networks for Synchronization: Maximum Performance at Minimum Cost
Abstract: In this talk, I will consider two optimization problems on synchronization of oscillator networks: maximization of synchronizability and minimization of synchronization cost. I will first develop an extension of the well-known master stability framework to the case of non-diagonalizable Laplacian matrices. Using this, I can show that the solution sets of the two optimization problems coincide and are simultaneously characterized by a simple condition on the Laplacian eigenvalues. To further characterize the optimal networks, I will identify a subclass of hierarchical networks, characterized by the absence of feedback loops and the normalization of inputs. I will also show that most optimal networks are directed and non-diagonalizable, necessitating the extension of the master stability framework. The results may provide insights into the evolutionary origin of structures in complex networks for which synchronization plays a significant role.
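The eigenvalue condition can be seen on a toy example (the network below is my own minimal instance of the "no feedback loops, normalized inputs" structure, not one from the talk):

```python
# A directed star: node 0 drives nodes 1..3, each node has exactly one
# input and there are no feedback loops. For such networks all nonzero
# Laplacian eigenvalues coincide, which is the optimality condition
# discussed in the abstract.
import numpy as np

def directed_laplacian(adj):
    """L = D_in - A for a directed adjacency matrix A (row i lists the inputs of node i)."""
    return np.diag(adj.sum(axis=1)) - adj

A = np.array([[0, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)

eig = np.sort_complex(np.linalg.eigvals(directed_laplacian(A)))
# One zero eigenvalue, and all remaining eigenvalues equal (here, 1).
```

Note that this Laplacian is non-symmetric and, for larger stars, non-diagonalizable, which is why the extension of the master stability framework mentioned above is needed.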
 
Wednesday, January 31, 2007, 4:00-5:00 - SC 356
Sumona Mondal
Department of Mathematics, University of Louisiana, Lafayette
Tolerance Regions for Some Multivariate Linear Models
Abstract: In this talk, we shall first outline an improved method of computing tolerance factors for a multivariate normal distribution. This method involves an approximation to the distribution of a linear function of independent noncentral chi-square random variables and simulation. Nevertheless, it is stable and more accurate than several approximate methods considered in Krishnamoorthy and Mathew (1999, Technometrics, 41, 234-249). The accuracies of the tolerance regions are evaluated using Monte Carlo simulation. Simulation study shows that the new approach is very satisfactory even for small samples. The proposed method is also extended to the multivariate linear regression model to construct tolerance regions for the response vector given a set of explanatory variables. The proposed approach is compared with the available approximate methods. Comparison study indicates that our approach is better than other available methods.
 
Monday, January 29, 2007, 4:00-5:00 - SC 356
Florence George
Department of Mathematics and Statistics, University of South Florida, Tampa
Johnson System of Distributions and Microarray Data Analysis
Abstract: A common task in analyzing microarray data is to determine which genes are differentially expressed across two kinds of tissue samples or samples obtained under two experimental conditions. In recent years several statistical methods have been proposed to accomplish this goal when there are replicated samples under each condition. In this talk Johnson system of curves will be introduced. We will discuss how Johnson system can be used for gene expression data analysis. An empirical Bayes method for gene selection using mixture model approach will also be discussed. Comparisons with other commonly used methods will be presented.
 
Beginning Tuesday, December 5, 2006, Noon - SC 342
Athanassios S. Fokas
Chair of Nonlinear Analysis, University of Cambridge, UK
"Medical Imaging": A Basic Introduction to Magnetoencephalography
Series of 2 lectures:
  • Tuesday, December 5, 12:00 - 12:50, in SC 342
  • Thursday, December 7, 10:30 - 11:30, in Snell 212
 
Friday, October 27, 2006, 2:00-3:00 - SC 356
Mr. Xunyang Shen
Department of Mathematics, Clarkson University
Towards a Practical Solution of Handling Over/underflow Exceptions with Alternate Number Formats
Abstract: One of the primary drawbacks of floating-point arithmetic is its susceptibility to overflow and underflow. Accordingly, many alternate number formats are proposed to overcome over/underflow exceptions in scientific computing. With the goal of promoting their use in practice, we are now trying to pick out some potential alternatives by comparisons, to modify and improve the number representation schemes together with their algorithms, to further justify their effectiveness by both theoretical analysis and experimental examples, and to implement them in a way that the software implementations are ready to be used in real life computational problems.

The major part of my thesis will be about symmetric level-index (SLI) arithmetic, which is over/underflow free when doing all four basic arithmetic operations. I am also working on an extended-range floating-point format, which could be efficient in a software implementation, and a tapered over/underflow scheme, which would have performance advantages in a potential hardware implementation.
 
Monday, October 9, 2006, 4:00-5:00 - SC 354
Mr. Naratip Santitissadeekorn
Department of Mathematics, Clarkson University
Vector fields from movies: The infinitesimal operator for the semigroup of the Frobenius-Perron operator from image sequence data
Abstract: Estimation of velocity fields from image sequences has many important applications in meteorology, oceanography, and experimental fluid mechanics. We will present a mathematical model to measure the velocity field based on the infinitesimal operator for the semigroup of the Frobenius-Perron operator. A variational approach is used to solve this model, which assumes as a constraint that the velocity of the brightness pattern varies smoothly throughout the image. Empirical results from applications of the algorithm to several examples of image sequences are also included. We will include some discussion regarding our motivating research toward characterizing ergodic properties, including mixing rate, Lyapunov exponents, basin structure, and transport.
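A common finite-dimensional handle on the Frobenius-Perron operator is Ulam's method: estimate a transition matrix between bins of the state space from observed data. The sketch below builds such a matrix from input/output pairs of the logistic map (the map, sample size, and bin count are my own choices for illustration, not the talk's):

```python
# Ulam-style approximation of the Frobenius-Perron (transfer) operator
# from observed (x, y = map(x)) data pairs on [0, 1].
import numpy as np

def ulam_matrix(x, y, n_bins):
    """Estimate P[i, j] ~ Prob(state moves from bin i to bin j)."""
    bi = np.minimum((np.asarray(x) * n_bins).astype(int), n_bins - 1)
    bj = np.minimum((np.asarray(y) * n_bins).astype(int), n_bins - 1)
    P = np.zeros((n_bins, n_bins))
    for i, j in zip(bi, bj):
        P[i, j] += 1.0
    return P / P.sum(axis=1, keepdims=True)   # normalize each row

x = (np.arange(20000) + 0.5) / 20000          # samples covering [0, 1]
y = 4.0 * x * (1.0 - x)                       # fully chaotic logistic map
P = ulam_matrix(x, y, 16)
```

P is row-stochastic, so its leading eigenvalue is 1; its leading left eigenvector approximates the invariant density, and the subleading eigenvalues carry information about the mixing rate mentioned in the abstract.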
 
Thursday, September 21, 2006, 2:30-3:30 - SC 301
Dr. Owen Eslinger
U.S. Army Corps of Engineers
Optimizing Tetrahedral Element Quality through the use of Mesh Smoothing Techniques for Studying Near Surface Phenomena
Abstract: An algorithm for rapid, large-scale mesh generation will be presented in the context of studying near-surface phenomena. The output of black-box mesh generation software is post-processed with a mesh-smoothing technique to ensure quality tetrahedral elements in the final mesh. The result is an optimization problem with more than 100k degrees of freedom. The entire procedure will be presented with a specific focus on the smoothing algorithm, the optimization problem, and the treatment of buried objects.
 
Thursday, April 20, 2006, 2:45-3:45 - SC 354
Dr. Edward Schneider
Dept. of Information & Communication Technology, SUNY Potsdam
The State of Research and Development in Video Game Technology
Abstract: The Game Developers Conference is the video game industry's largest developers event, and attracts 10,000 people to Silicon Valley each year. The conference started as an informal gathering of PC game programmers, and now has grown to the point that Bill Gates first officially unveiled the Xbox in 2000 at GDC, and Sony's Phil Harrison unveiled the insides of the Playstation 3 this year. This session will begin with an overview of some of the themes in game design that were being discussed at the conference, and summaries of the major keynotes by Sims designer Will Wright and Nintendo President Satoru Iwata.

The Serious Games Summit portion of the GDC was added for developers that use video games to teach and train. A research study from SUNY Potsdam that was presented at the Serious Games Summit will be discussed in more detail. The study involved testing a group of subjects' commitment to instructions against desire for exploration in a virtual environment. Implications for virtual world design that arose from the study's results will be given.

The presentation will close with a brief overview of the game industry from the point of view of a person seeking first employment with a video game company. The presenter spoke with industry representatives at the GDC job fair and with current industry people throughout the conference, asking their advice for college students looking to break into the industry.
 
Friday, March 24, 2006, 1:00 - SC 348
Kaleem Siddiqi
School of Computer Science, McGill University, Montreal
Medial Integrals: Mathematics, Algorithms and Applications
Abstract: In the late 60's Blum developed the notion of axis-morphologies for describing 2D and 3D forms. He proposed an interpretation of the local reflective symmetries of an object as a "medial graph" and suggested that the implied part structure could be used for object categorization and recognition. In this talk I will discuss a type of integral performed on a vector field defined as the gradient of the Euclidean distance function to the bounding curve (or surface) of an object. The limiting behavior of this integral as the enclosed area (or volume) shrinks to zero reveals an invariant which can be used to compute the Blum skeleton as well as to reveal the geometry of the object that it describes. I will also discuss applications of this technique to the problem of 2D and 3D object retrieval.
 
Thursday, March 9, 2006, 2:30 - SC 354
Dan Schult
Department of Mathematics, Colgate University
Network Structure meets Dynamics
Abstract: Basic models of disease spread ignore details of the transmission network. Everyone is assumed to be equally likely to infect anyone else. More detailed models divide the population into groups with different transmission rates between groups. In extreme cases, models try to keep track of each individual and transmission contact explicitly. Why keep track of the transmission network? Because it seems clear that the structure of the network can change the dynamics of the disease in important ways. Still, not much is known about how or when the structure of a network affects dynamics on that network. These questions apply more generally. Every contact process depends on the contact network's structure. But what is it about the structure that is important? What are the important features of a network? What features of the dynamics depend on that structure? This talk will discuss our recent attempts to find relationships between common measures of network structure and measures of dynamic behavior.
 
Beginning Monday, January 16, 2006, Noon - SC 348
Kevin R. Vixie
Los Alamos National Laboratory
An Invitation to Geometry: Image Analysis, Geometric Analysis, and High-Dimensional Geometry
Series of 5 lectures:
  • Monday, January 16, 12:00 - 12:50, in SC 348
  • Wednesday, January 18, 12:00 - 12:50, in SC 348
  • Friday, January 20, 12:00 - 12:50, in SC 348
  • Monday, January 23, 12:00 - 12:50, in SC 348
  • Tuesday, January 24, 12:00 - 12:50, in SC 348

Abstract: In this 5 lecture short course I will present an introduction to metrics and regularization for image-driven data analysis and then look more deeply at the related geometric analysis and high-dimensional geometry. The first 3 lectures will be essentially what I presented in the short course from last summer's Graduate Summer School at the Institute for Pure and Applied Mathematics. These lectures provide a solid motivation for the purer aspects to follow and demonstrate the deeply useful nature of various aspects of mathematical analysis.

In the last 2 lectures I will (1) concentrate on BV functions and the TV seminorm, using them as a vehicle into geometric analysis, (2) linger (a bit) on the concentration of measure phenomena and (3) briefly mention the Johnson-Lindenstrauss lemma and its applications. The lectures, intended to be lively and somewhat informal, are also designed to (further) convince you that the "division" of mathematics into pure and applied is artificial, even unnatural and destructive. Those who realize and exploit this are well positioned to establish a solidly funded research position. But even if this were not the case, it would still be true that embracing both the applied and pure aspects of mathematics refreshes, invigorates and deepens the resulting research.
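For reference, the Johnson-Lindenstrauss lemma mentioned above is usually stated as follows (a standard formulation; the constants vary by source):

```latex
\text{For any } 0 < \varepsilon < 1 \text{ and any } n \text{ points in } \mathbb{R}^d,
\text{ there is a map } f : \mathbb{R}^d \to \mathbb{R}^k \text{ with } k = O(\varepsilon^{-2} \log n)
\text{ such that}

(1 - \varepsilon)\,\|u - v\|^2 \;\le\; \|f(u) - f(v)\|^2 \;\le\; (1 + \varepsilon)\,\|u - v\|^2

\text{for all pairs } u, v \text{ among the } n \text{ points.}
```

The striking point for data analysis is that the target dimension k depends on the number of points, not on the ambient dimension d.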

 
Friday, October 21, 2005, 11:00a.m. - Science Center 346
Guohua Zhou
Department of Mathematics, Clarkson University
Multigrid methods and finite differences on hexagonal and geodesic grids
 
Abstract:
Geodesic grids are widely used in models constructed on the sphere because of their isotropy and homogeneity. They consist of hexagons except for 12 pentagons. To study geodesic grids, we will start from the regular hexagonal grid. A few papers have studied the properties of the hexagonal grid; it has some advantages compared with the rectangular grid in some applications. Multigrid methods are fast iterative methods for solving elliptic equations. I will study multigrid methods on the regular hexagonal grid and on geodesic grids, as well as some finite difference schemes on both. Finally, I will apply the results to the shallow water model and a 3D climate model.
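For concreteness, the natural second-order Laplacian stencil on a regular hexagonal grid couples each cell to its six neighbors (a standard 7-point form; here h is the distance between neighboring cell centers, notation mine rather than the talk's):

```latex
\Delta u(\mathbf{x}_0) \;\approx\; \frac{2}{3h^2} \left( \sum_{k=1}^{6} u(\mathbf{x}_k) \;-\; 6\,u(\mathbf{x}_0) \right)
```

The isotropy of this six-neighbor stencil is one reason hexagonal and geodesic grids are attractive for spherical models.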
 
Friday, October 14, 2005, 4:00 p.m. - Science Center 356
Chen Yao
Department of Mathematics, Clarkson University
Modeling and Nonlinear Parameter Estimation for Dynamical Systems
 
Abstract:
It is common that a high dimensional dynamical system has a low dimensional invariant manifold or some other form of global attractor. We focus here on the questions of dimensionality reduction and global modeling; that is, building a set of ordinary differential equations (ODEs) of minimal dimension which models a multivariate time dependent data set given from some high dimensional process. We also investigate convergence of the given modeling method based on different integration schemes. We will furthermore introduce a new method that adapts the least-squares best approximation by a Kronecker product representation to analyze underlying structure when it exists as a system of coupled oscillators. Several examples are presented, including diffusively coupled chaotic systems.
 
Thursday, October 6, 2005, 2:30 p.m. - Science Center 354
John Dennis
Professor Emeritus in the Department of Computational and Applied Mathematics at Rice University
Optimization Using Surrogates for Engineering Design

Abstract:
This talk will outline the surrogate management framework for nasty nonsmooth optimization problems with expensive function evaluations. It is provably convergent to an optimal design of the problem as posed, not just to an optimizer of the surrogate problem.

This line of research was motivated by industrial applications, indeed, by a question I was asked by Paul Frank of Boeing Phantom Works. His group was often asked for help in dealing with very expensive low dimensional design problems from all around the company. Everyone there was dissatisfied with the common practice of substituting inexpensive surrogates for the expensive "true" objective and constraint functions in the optimal design formulation. I hope to demonstrate in this talk just how simple the answer to Paul's question is.

The surrogate management framework has been implemented successfully by several different groups. All the implementations are effective in practice, where most of the applications are extended-valued and certainly nondifferentiable. This has forced my colleagues and me to begin to learn some nonsmooth analysis, which in turn has led to MADS, a replacement for the GPS infrastructure algorithm. This talk is directed to a general audience, and so we will slight the mathematical infrastructure. However, the rigorous mathematics is there to back up our approach, and we can provide details to anyone interested in them.