
Seminars & Colloquia

Friday, August 23, 2007, 3:00 p.m. - SC 354
Xunyang Shen
Department of Mathematics, Clarkson University
Towards a Practical Solution of Handling Over/underflow Exceptions with Alternate Number Formats
Abstract: One of the primary drawbacks of floating-point (FLP) arithmetic is its susceptibility to overflow and underflow. Accordingly, many alternate number formats have been proposed to overcome over/underflow exceptions in scientific computing. With the goal of promoting their use in practice, this dissertation discusses the effectiveness of some alternate number formats using both theoretical analysis and experimental examples.
  
Symmetric level-index (SLI) arithmetic is the preferred alternative. It allows such a large representable number range that it is essentially over/underflow-free. Some algorithm improvements, precision analysis, and a software implementation are discussed, followed by its use in three practical applications. The satisfactory results indicate that we have achieved a practical software solution for handling over/underflow problems, and that a hardware solution could be realistic in the future.
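The over/underflow behavior at issue is easy to reproduce, and the level-index idea is simple to sketch. The Python fragment below is an illustration added here, not material from the talk; `level_index` is a hypothetical helper showing only the basic representation idea (x = exp(exp(...exp(f))) with l nested exponentials), not the dissertation's SLI algorithms:

```python
import math

# IEEE double precision overflows past ~1.8e308 and underflows below ~5e-324.
big = 1e308
print(big * 10)    # inf -> overflow
print(5e-324 / 2)  # 0.0 -> underflow

def level_index(x, max_level=6):
    """Return (l, f) with x = exp(exp(...exp(f))), l nested exponentials.
    Assumes x >= 1.  A hypothetical sketch of the level-index idea only."""
    l = 0
    while x >= 1.0 and l < max_level:
        x = math.log(x)
        l += 1
    return l, x

# 1e308 is at the edge of the FLP range, but its level-index form is tame:
l, f = level_index(1e308)   # l = 4, f ~ 0.63
```

Because the index f stays in [0, 1) while the level l grows extremely slowly, enormous magnitudes remain representable, which is the sense in which SLI is "essentially over/underflow-free."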
 
Wednesday, July 11, 2007, 3:30 p.m. - SC 356
Chen Yao
Department of Mathematics, Clarkson University
Modelling Low-Dimensional Submanifolds in Dynamical Systems
Abstract: In this dissertation, we present a new technique to model a low-dimensional dynamical system embedded in a high dimensional dynamical system. The technique involves a method of discovering a low-dimensional nonlinear submanifold, known only empirically through the data set from a high-dimensional system, and a method of globally modelling the low-dimensional manifold to obtain an analytic form that reproduces a data set approximating the low-dimensional manifold. When a reliable analytic form of the low-dimensional system is attained, further work such as short-time prediction, structure investigation, and control may be possible to accomplish.
  
The technique of modelling low-dimensional systems developed here does not require analytic forms of the original systems, and the embedded low-dimensional submanifolds can be highly nonlinear. Moreover, in some specified systems, the analytic forms generated by the modelling technique can give us some insight into the structure of the system restricted to the low-dimensional submanifolds.
 
Friday, April 20, 2007, 9:30 a.m. - SC 344
Mehul N. Vora
Department of Mathematics, Clarkson University
A Novel Approach to Data Mining: Genetic Algorithm for Feature Selection
Abstract: This dissertation presents the Genetic Algorithm (GA) as a data microscope for sorting, probing, and uncovering relationships in multivariate data. Identifying a relationship or a pattern in a multivariate dataset is a challenging problem. Sometimes relationships cannot be expressed in quantitative terms; they are better expressed in terms of similarity and dissimilarity among groups of multivariate data. Feature selection, the process of identifying the most informative features, is a crucial step in any data mining and pattern recognition study. The selection of an appropriate feature subset can simplify the problem and lead to improved results. Feature selection, however, is itself non-trivial.
  
The pattern recognition GA identifies a subset of features whose variance or information is primarily about differences between the groups in a data set. The attributes of a genetic search strategy towards selecting the best feature subset can potentially overcome the difficulties inherent in feature selection. Application of the pattern recognition GA to a wide range of problems from the field of chemometrics and bioinformatics demonstrates the utility of the method.
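As a rough illustration of this kind of genetic search, here is a generic GA over bit-mask chromosomes, where each bit switches one feature on or off. This is a sketch of the general technique only, not the dissertation's pattern recognition GA; the fitness function, operators, and all parameter values are assumptions:

```python
import random

def ga_feature_selection(fitness, n_features, pop_size=20, generations=40,
                         mutation_rate=0.05, rng_seed=1):
    """Toy GA over bit masks: mask[i] == 1 means feature i is selected.
    `fitness` scores a mask (higher is better).  A generic sketch, not the
    talk's actual algorithm."""
    rng = random.Random(rng_seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ int(rng.random() < mutation_rate)  # bit-flip mutation
                     for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)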
 
Thursday, April 12, 2007, 2:30-3:30 - SC 354
Genetha Gray
Sandia National Labs
Some Considerations for Evaluating the Predictive Capabilities of Xyce, an Electrical Circuit Simulator
Abstract: Significant advances in computing capabilities, decreasing storage costs, and the rising costs associated with physical experiments have contributed to an increase in the use of numerical modeling and simulation. However, the inclusion of computer simulations in the study and design of complex engineering systems has introduced many new challenges.
  
As programs turn to numerical modeling and simulation to aid their design, validation and verification (V&V) tools are critical for determining simulation-based confidence and predictive capabilities. For example, code verification must be used to confirm that the underlying equations are being solved correctly. In addition, validation processes should be applied to answer questions of correctness of the equations and models for the physics being modeled and the application being studied. Moreover, validation metrics must be carefully chosen in order to explicitly compare experimental and computational results and quantify the uncertainties in these comparisons.
  
Overall, the V&V process for computational experimentation can provide the best estimates of what can happen and the likelihood of it happening when uncertainties are taken into account. The process is not independent of physical experimentation. In order to carry out the validation activities, experiments must be carefully planned to provide adequate and appropriate data. In this talk, we will describe the validation process as it relates to Xyce, an electrical circuit simulator developed at Sandia. Specifically, we will address the issues of physical experiments, simulator calibration, and uncertainty quantification.
 
Wednesday, April 4, 2007, 12:30-1:30 - SC 356 - a SIAM Visiting Lecture, sponsored by Clarkson University Chapter of SIAM
John Hamilton
Photographic Science and Technology, Eastman Kodak Company
Algorithms for Digital Color Cameras
Abstract: While digital color imaging has many problems in common with conventional silver halide imaging, it also has its own particular problems not faced in the analog world.  Two of these problems, and two corresponding algorithmic solutions, are illustrated by example and discussed in detail.  In addition, a mathematical perspective is presented to explain how these algorithms work.
  
The first problem is that of color interpolation (also called demosaicking).  The pixels of most silicon sensors capture a single color measurement, usually a red, green, or blue color value.  Because a fully processed color image requires all three color values at each pixel, two additional color values must be provided at each pixel.  An algorithm is presented that addresses this problem for sensors using the Bayer color filter pattern.
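For intuition, the simplest (non-adaptive) baseline for this problem is bilinear interpolation of the missing green samples. The sketch below assumes an RGGB Bayer layout and is only a generic baseline for illustration, not the Kodak algorithm discussed in the talk:

```python
import numpy as np

def interpolate_green(raw):
    """Bilinear green-channel interpolation on a Bayer mosaic (RGGB layout
    assumed, so green samples sit where row + col is odd).  At red/blue
    sites the missing green value is the average of the four green
    neighbors.  Border pixels are left untouched for simplicity."""
    h, w = raw.shape
    green = raw.astype(float).copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            if (i + j) % 2 == 0:  # red or blue site: green sample missing
                green[i, j] = (raw[i - 1, j] + raw[i + 1, j]
                               + raw[i, j - 1] + raw[i, j + 1]) / 4.0
    return green
```

Adaptive algorithms such as the one presented in the talk improve on this baseline by, for example, avoiding averaging across edges.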
  
The second problem is that of color aliasing.  While the problem of aliasing is always present in a discrete imaging system, it is compounded for color sensors because the different color channels can alias in different ways.  The resulting interference patterns have distinctive color components which constitute an obvious and annoying imaging artifact.  An algorithm is presented that addresses this problem for sensors using the Bayer color filter pattern.
 
Tuesday, April 3, 2007, 4:00-5:00 - SC 454
Ye Chen - Department of Mathematics, Ph.D. Thesis Proposal
Multigrid Methods for Monge-Ampère Equations and Systems
Abstract: The equations governing the motions of the atmosphere express the conservation of mass, momentum, and energy. Various approximations lead to different dynamical models. For example, an assumption of dynamical balance between the mass and wind fields can simplify the dynamics, e.g. by filtering high-frequency waves. These models are called balanced models. In principle, one can recover both the mass and wind fields from the potential vorticity. Doing so requires solving an elliptic problem, which we refer to as an invertibility relation.
  
Some linear invertibility relations can be solved by existing software. However, for more complicated problems, the invertibility relations are nonlinear and difficult to solve efficiently. We have developed efficient 2nd-order multigrid solvers for the semigeostrophic model and the balanced vortex model. We propose developing an efficient 2nd-order multigrid solver for the zonally symmetric balanced flow model, and we expect to develop robust 4th-order multigrid solvers for the three models mentioned above.
 
Wednesday, February 7, 2007, 4:00-5:00 - CAMP 176 - co-sponsored with Electrical and Computer Engineering
Thomas Hemker
Department of Computer Science, TU Darmstadt
Surrogate Optimization for Mixed-Integer Nonlinear Problems in Engineering Applications of Black-Box Type
Abstract: Simulation-based optimization is becoming increasingly important in engineering applications, but the software commonly employed for computational modeling and simulation of complex engineering problems has been developed over many years and was usually not designed to meet the specific needs of optimization methods. Underlying iterative algorithms, approximations of tabular data, etc., additionally result in very low smoothness of the objective function.
  
Thus, non-smooth optimization problems of black-box type arise. If, in addition to continuous real-valued variables, discrete integer-valued optimization variables must also be considered, only a few optimization methods based solely on evaluations of the objective function can be applied, apart from computationally expensive random search methods.
  
In this talk we present a surrogate optimization approach to solve such problems based on sequentially improved stochastic approximations. Numerical results for the proposed approach will be discussed for electromagnetic design problems from electrical engineering, groundwater management problems from civil engineering, and walking-speed optimization problems for humanoid robots.
 
Monday, February 5, 2007, 4:00-5:00 - SC 356
Maria Emelianenko
Department of Mathematical Sciences, Carnegie Mellon University
Mathematical Modeling and Simulation of Texture Evolution
Abstract: Preparing a texture suitable for a given purpose is a central problem in materials science, which presents many challenges for mathematical modeling, simulation, and analysis. In this talk, I will focus on the mesoscopic behavior of the grain boundary system and on understanding the role of topological reconfigurations during evolution. Several types of evolution equations based on pure probabilistic and stochastic descriptions will be formulated and compared against the results provided by simulation. Their advantages and limitations, numerical characteristics and possible extensions to higher dimensions will be discussed.
 
Thursday, February 1, 2007, 2:30-3:30 - SC 354
Takashi Nishikawa
Department of Mathematics, Southern Methodist University, Dallas
Optimal Networks for Synchronization: Maximum Performance at Minimum Cost
Abstract: In this talk, I will consider two optimization problems on synchronization of oscillator networks: maximization of synchronizability and minimization of synchronization cost. I will first develop an extension of the well-known master stability framework to the case of non-diagonalizable Laplacian matrices. Using this, I can show that the solution sets of the two optimization problems coincide and are simultaneously characterized by a simple condition on the Laplacian eigenvalues. To further characterize the optimal networks, I will identify a subclass of hierarchical networks, characterized by the absence of feedback loops and the normalization of inputs. I will also show that most optimal networks are directed and non-diagonalizable, necessitating the extension of the master stability framework. The results may provide insights into the evolutionary origin of structures in complex networks for which synchronization plays a significant role.
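The eigenvalue condition mentioned above can be explored numerically. As a small illustration, the sketch below computes the common eigenratio measure of synchronizability for undirected networks, a simpler setting than the directed, non-diagonalizable case the talk treats:

```python
import numpy as np

def laplacian_eigenvalues(adj):
    """Eigenvalues of the graph Laplacian L = D - A, sorted ascending."""
    degrees = adj.sum(axis=1)
    lap = np.diag(degrees) - adj
    return np.sort(np.linalg.eigvals(lap).real)

def eigenratio(adj):
    """lambda_max / lambda_2: a standard synchronizability measure for
    undirected networks; smaller is more synchronizable, with 1 optimal."""
    ev = laplacian_eigenvalues(adj)
    return ev[-1] / ev[1]

# Complete graph K4: all nonzero eigenvalues equal (0, 4, 4, 4) -> ratio 1.
K4 = np.ones((4, 4)) - np.eye(4)
# Path graph on 4 nodes: a spread-out spectrum -> ratio well above 1.
P4 = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
```

Networks whose nonzero Laplacian eigenvalues are all equal, like K4 here, are exactly the kind characterized as optimal by the condition in the talk.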
 
Wednesday, January 31, 2007, 4:00-5:00 - SC 356
Sumona Mondal
Department of Mathematics, University of Louisiana, Lafayette
Tolerance Regions for Some Multivariate Linear Models
Abstract: In this talk, we shall first outline an improved method of computing tolerance factors for a multivariate normal distribution. This method involves an approximation to the distribution of a linear function of independent noncentral chi-square random variables, together with simulation. Nevertheless, it is stable and more accurate than several approximate methods considered in Krishnamoorthy and Mathew (1999, Technometrics, 41, 234-249). The accuracies of the tolerance regions are evaluated using Monte Carlo simulation. The simulation study shows that the new approach is very satisfactory even for small samples. The proposed method is also extended to the multivariate linear regression model to construct tolerance regions for the response vector given a set of explanatory variables. The proposed approach is compared with the available approximate methods. The comparison study indicates that our approach is better than other available methods.
 
Monday, January 29, 2007, 4:00-5:00 - SC 356
Florence George
Department of Mathematics and Statistics, University of South Florida, Tampa
Johnson System of Distributions and Microarray Data Analysis
Abstract: A common task in analyzing microarray data is to determine which genes are differentially expressed across two kinds of tissue samples or samples obtained under two experimental conditions. In recent years several statistical methods have been proposed to accomplish this goal when there are replicated samples under each condition. In this talk the Johnson system of curves will be introduced, and we will discuss how the Johnson system can be used for gene expression data analysis. An empirical Bayes method for gene selection using a mixture model approach will also be discussed. Comparisons with other commonly used methods will be presented.
 
Beginning Tuesday, December 5, 2006, Noon - SC 342
Athanassios S. Fokas
Chair of Nonlinear Analysis, University of Cambridge, UK
Medical Imaging: A Basic Introduction to Magnetoencephalography
Series of 2 lectures:
  • Tuesday, December 5, 12:00 - 12:50, in SC 342
  • Thursday, December 7, 10:30 - 11:30, in Snell 212
 
Friday, October 27, 2006, 2:00-3:00 - SC 356
Mr. Xunyang Shen
Department of Mathematics, Clarkson University
Towards a Practical Solution of Handling Over/underflow Exceptions with Alternate Number Formats
Abstract: One of the primary drawbacks of floating-point arithmetic is its susceptibility to overflow and underflow. Accordingly, many alternate number formats have been proposed to overcome over/underflow exceptions in scientific computing. With the goal of promoting their use in practice, we are now trying to identify some potential alternatives by comparison, to modify and improve the number representation schemes together with their algorithms, to further justify their effectiveness by both theoretical analysis and experimental examples, and to implement them in such a way that the software implementations are ready to be used in real-life computational problems.

The major part of my thesis will be about symmetric level-index (SLI) arithmetic, which is over/underflow free when doing all four basic arithmetic operations. I am also working on an extended-range floating-point format, which could be efficient in a software implementation, and a tapered over/underflow scheme, which would have performance advantages in a potential hardware implementation.
 
Monday, October 9, 2006, 4:00-5:00 - SC 354
Mr. Naratip Santitissadeekorn
Department of Mathematics, Clarkson University
Vector fields from movies: The infinitesimal operator for the semigroup of the Frobenius-Perron operator from image sequence data
Abstract: Estimation of the velocity field from image sequences has many important applications in meteorology, oceanography, and experimental fluid mechanics. We will present a mathematical model to measure the velocity field based on the infinitesimal operator for the semigroup of the Frobenius-Perron operator. A variational approach is used to solve this model, which assumes as a constraint that the velocity of the brightness pattern varies smoothly throughout the image. Empirical results from applications of the algorithm to several examples of image sequences are also included. We will include some discussion of our motivating research toward characterizing ergodic properties, including mixing rate, Lyapunov exponents, basin structure, and transport.
 
Thursday, September 21, 2006, 2:30-3:30 - SC 301
Dr. Owen Eslinger
U.S. Army Corps of Engineers
Optimizing Tetrahedral Element Quality through the use of Mesh Smoothing Techniques for Studying Near Surface Phenomena
Abstract: An algorithm for rapid, large-scale mesh generation will be presented in the context of studying near-surface phenomena. The output of black-box mesh generation software is post-processed with a mesh-smoothing technique to ensure quality tetrahedral elements in the final mesh. The result is an optimization problem with more than 100k degrees of freedom. The entire procedure will be presented with a specific focus on the smoothing algorithm, the optimization problem, and the treatment of buried objects.
 
Thursday, April 20, 2006, 2:45-3:45 - SC 354
Dr. Edward Schneider
Dept. of Information & Communication Technology, SUNY Potsdam
The State of Research and Development in Video Game Technology
Abstract: The Game Developers Conference is the video game industry's largest developers event, and attracts 10,000 people to Silicon Valley each year. The conference started as an informal gathering of PC game programmers, and has now grown to the point that Bill Gates first officially unveiled the Xbox at GDC in 2000, and Sony's Phil Harrison unveiled the insides of the PlayStation 3 this year. This session will begin with an overview of some of the themes in game design discussed at the conference, and summaries of the major keynotes by Sims designer Will Wright and Nintendo President Satoru Iwata.

The Serious Games Summit portion of the GDC was added for developers that use video games to teach and train. A research study from SUNY Potsdam that was presented at the Serious Games Summit will be discussed in more detail. The study involved testing a group of subjects' commitment to instructions against desire for exploration in a virtual environment. Implications for virtual world design that arose from the study's results will be given.

The presentation will close with a brief overview of the game industry from the point of view of a person seeking first employment with a video game company. The presenter spoke with industry reps at the GDC job fair and current industry people throughout the conference and asked them for advice for college students looking to break into the industry.
 
Friday, March 24, 2006, 1:00 - SC 348
Kaleem Siddiqi
School of Computer Science, McGill University, Montreal
Medial Integrals: Mathematics, Algorithms and Applications
Abstract: In the late 1960s, Blum developed the notion of axis-morphologies for describing 2D and 3D forms. He proposed an interpretation of the local reflective symmetries of an object as a "medial graph" and suggested that the implied part structure could be used for object categorization and recognition. In this talk I will discuss a type of integral performed on a vector field defined as the gradient of the Euclidean distance function to the bounding curve (or surface) of an object. The limiting behavior of this integral as the enclosed area (or volume) shrinks to zero reveals an invariant which can be used to compute the Blum skeleton as well as to reveal the geometry of the object that it describes. I will also discuss applications of this technique to the problem of 2D and 3D object retrieval.
 
Thursday, March 9, 2006, 2:30 - SC 354
Dan Schult
Department of Mathematics, Colgate University
Network Structure meets Dynamics
Abstract: Basic models of disease spread ignore details of the transmission network. Everyone is assumed to be equally likely to infect anyone else. More detailed models divide the population into groups with different transmission rates between groups. In extreme cases, models try to keep track of each individual and transmission contact explicitly. Why keep track of the transmission network? Because it seems clear that the structure of the network can change the dynamics of the disease in important ways. Still, not much is known about how or when the structure of a network affects dynamics on that network. These questions apply more generally. Every contact process depends on the contact network's structure. But what is it about the structure that is important? What are the important features of a network? What features of the dynamics depend on that structure? This talk will discuss our recent attempts to find relationships between common measures of network structure and measures of dynamic behavior.
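A minimal example of the network-explicit models contrasted here with well-mixed ones is a discrete-time SIR process on a contact graph. The sketch below is a toy illustration with assumed parameters, not a model from the talk:

```python
import random

def sir_on_network(neighbors, beta=0.3, seed=0, rng_seed=42):
    """Discrete-time SIR spread on an explicit contact network.
    neighbors: dict node -> list of contact nodes.  Each infected node
    infects each susceptible neighbor with probability beta per step,
    then recovers.  Returns the final outbreak size."""
    rng = random.Random(rng_seed)
    susceptible = set(neighbors) - {seed}
    infected = {seed}
    recovered = set()
    while infected:
        newly = set()
        for node in infected:
            for nb in neighbors[node]:
                if nb in susceptible and rng.random() < beta:
                    newly.add(nb)
        susceptible -= newly
        recovered |= infected
        infected = newly
    return len(recovered)

# A 6-node ring: each node contacts only its two ring neighbors, so the
# disease must travel along the ring rather than jumping anywhere, unlike
# in a well-mixed model with the same average transmission rate.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
```

Swapping `ring` for a different contact structure with the same number of edges changes the outbreak dynamics, which is precisely the structure-versus-dynamics question the talk addresses.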
 
Beginning Monday, January 16, 2006, Noon - SC 348
Kevin R. Vixie
Los Alamos National Laboratory
An Invitation to Geometry: Image Analysis, Geometric Analysis, and High-Dimensional Geometry
Series of 5 lectures:
  • Monday, January 16, 12:00 - 12:50, in SC 348
  • Wednesday, January 18, 12:00 - 12:50, in SC 348
  • Friday, January 20, 12:00 - 12:50, in SC 348
  • Monday, January 23, 12:00 - 12:50, in SC 348
  • Tuesday, January 24, 12:00 - 12:50, in SC 348

Abstract: In this 5-lecture short course I will present an introduction to metrics and regularization for image-driven data analysis and then look more deeply at the related geometric analysis and high-dimensional geometry. The first 3 lectures will be essentially what I presented in the short course at last summer's Graduate Summer School at the Institute for Pure and Applied Mathematics. These lectures provide a solid motivation for the purer aspects to follow and demonstrate the deeply useful nature of various aspects of mathematical analysis.

In the last 2 lectures I will (1) concentrate on BV functions and the TV seminorm, using them as a vehicle into geometric analysis, (2) linger (a bit) on the concentration of measure phenomena and (3) briefly mention the Johnson-Lindenstrauss lemma and its applications. The lectures, intended to be lively and somewhat informal, are also designed to (further) convince you that the "division" of mathematics into pure and applied is artificial, even unnatural and destructive. Those who realize and exploit this are well positioned to establish a solidly funded research position. But even if this were not the case, it would still be true that embracing both the applied and pure aspects of mathematics refreshes, invigorates and deepens the resulting research.

 
Friday, October 21, 2005, 11:00a.m. - Science Center 346
Guohua Zhou
Department of Mathematics, Clarkson University
Multigrid methods and finite differences on hexagonal and geodesic grids
 
Abstract:
Geodesic grids are widely used in models constructed on the sphere because of their isotropy and homogeneity. They consist of hexagons except for 12 pentagons. To study geodesic grids, we will start from the regular hexagonal grid. A few papers have studied the properties of the hexagonal grid; it has some advantages over the rectangular grid in some applications. Multigrid methods are fast iterative methods for solving elliptic equations. I will study multigrid methods on the regular hexagonal grid and on geodesic grids, and I will also study some finite difference schemes on the hexagonal grid and the geodesic grids. Finally, I will apply the results to the shallow water model and a 3D climate model.
 
Friday, October 14, 2005, 4:00 p.m. - Science Center 356
Chen Yao
Department of Mathematics, Clarkson University
Modeling and Nonlinear Parameter Estimation for Dynamical Systems
 
Abstract:
It is common that a high dimensional dynamical system has a low dimensional invariant manifold or some other form of global attractor. We focus here on the questions of dimensionality reduction and global modeling. That is, building a set of ordinary differential equations (ODEs) of minimal dimension which models a multivariate time dependent data set given from some high dimensional process. We also investigate convergence of the given modeling method based on different integration schemes. We will furthermore introduce a new method that adapts the least-squares best approximation by a Kronecker product representation, to analyze underlying structure when it exists as a system of coupled oscillators. Several examples are presented including diffusively coupled chaotic systems.
 
Thursday, October 6, 2005, 2:30 p.m. - Science Center 354
John Dennis
Professor Emeritus in the Department of Computational and Applied Mathematics at Rice University
Optimization Using Surrogates for Engineering Design

Abstract:
This talk will outline the surrogate management framework for nasty nonsmooth optimization problems with expensive function evaluations. It is provably convergent to an optimal design of the problem as posed, not just to an optimizer of the surrogate problem.

This line of research was motivated by industrial applications, indeed, by a question I was asked by Paul Frank of Boeing Phantom Works. His group was often asked for help in dealing with very expensive low dimensional design problems from all around the company. Everyone there was dissatisfied with the common practice of substituting inexpensive surrogates for the expensive "true" objective and constraint functions in the optimal design formulation. I hope to demonstrate in this talk just how simple the answer to Paul's question is.

The surrogate management framework has been implemented successfully by several different groups. All the implementations are effective in practice, where most of the applications are extended-valued and certainly nondifferentiable. This has forced my colleagues and me to begin to learn some nonsmooth analysis, which in turn has led to MADS, a replacement for the GPS infrastructure algorithm. This talk is directed to a general audience, and so we will slight the mathematical infrastructure. However, the rigorous mathematics is there to back up our approach, and we can provide details to anyone interested in them.