
Past Seminars

Friday, May 27, 2005, 2 p.m. - Science Center 356
Hon-Ching Lo
Department of Computer Science, Clarkson University
Program Verification
Abstract:
The objective of this thesis is to develop software that automates program verification, based on existing formal methods and the theorem prover Integrated Canonizer and Solver (ICS). We developed software, called Converter To Constraints (CTC), which produces constraints for ICS to verify programs, and we defined a language, CC, as the input to CTC. By applying the software to a variety of CC programs, we explore techniques for widening the range of programs that can be verified and for increasing the usability of the software, and we devise strategies to maximize automation. In the course of building on the basic framework of CTC, we discovered various problems and limitations of ICS; part of the effort therefore went into adjusting CTC so that it remains compatible with ICS.
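As a minimal illustration of the kind of constraint such a converter produces (a hypothetical example, not taken from the thesis), verifying the assignment x := x + 1 against the precondition x >= 0 and the postcondition x >= 1 amounts to checking the validity of the implication

$x \ge 0 \;\Rightarrow\; x + 1 \ge 1,$

which a solver such as ICS can discharge automatically.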
 
Tuesday, May 24, 2005, 2 p.m. - Science Center 356
Hon-Ting Lo
Department of Computer Science, Clarkson University
Cryptographic Protocol Analysis
Abstract:
Cryptographic protocols provide secure services in communication. Methods such as model checking and theorem proving are used to verify the correctness of cryptographic protocols, but model checking can only examine a bounded number of states, and theorem-proving methods generally do not find attacks. Our thesis provides a method for verifying protocols automatically and also for finding attacks automatically when one exists, using the automated theorem prover SPASS. We show how to transform cryptographic protocols from conventional notation into SPASS format and then verify the protocol in SPASS. We also provide a method for reconstructing the steps of an attack from the SPASS output. The advantages of our method are that it is completely automated, it uses ordered resolution for efficiency, and it uses equations to model cryptographic primitives, which distinguishes it from other methods.
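To give a feel for this style of encoding (hypothetical clauses for orientation, not the thesis's actual SPASS input), intruder capabilities can be written as first-order Horn clauses over a predicate knows, and cryptographic primitives as equations:

$\mathit{knows}(x) \wedge \mathit{knows}(k) \Rightarrow \mathit{knows}(\mathit{enc}(x,k)), \qquad \mathit{knows}(\mathit{enc}(x,k)) \wedge \mathit{knows}(k) \Rightarrow \mathit{knows}(x), \qquad \mathit{dec}(\mathit{enc}(x,k),k) \approx x.$

Saturation with ordered resolution then searches for a derivation of a secrecy violation, from which an attack trace can be read off.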
 
Monday, March 21, 2005, 4 p.m. - Science Center 342
Jeffrey Jackson
Department of Mathematics and Computer Science, Duquesne University
Average-Case Learnability
Abstract:
For many seemingly-simple problems in learning theory, no polynomial-time algorithm has been discovered. For example, no efficient algorithms are known for approximating either boolean decision trees or monotone disjunctive normal form (DNF) expressions from uniform examples. We simplify these problems somewhat by asking whether these classes can be learned "on average" relative to certain natural probability distributions defined over the classes. In this average-case setting, we obtain efficient algorithms for learning boolean decision trees and for learning a substantial subclass of monotone DNF (as well as a slightly more restricted subclass of nonmonotone DNF). This is joint work with Rocco Servedio (Columbia University).
 
Friday, February 25, 2005, 4 p.m. - Science Center 356
Michael Molloy
Department of Computer Science, University of Toronto
Random Constraint Satisfaction Problems

Abstract:
Constraint satisfaction problems are generalizations of CNF boolean formulas. We are given a set of variables, and a domain of values that the variables can take. Some subsets of the variables have "constraints" which restrict the value-assignments that can be made to those variables. A problem is satisfiable if there is an assignment of values to the variables which does not violate any of the constraints. Some well-studied special cases are k-SAT, graph colourability and graph homomorphism.
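To make the definition concrete, here is a small Python sketch (illustrative only, not part of the talk's models) that checks whether an assignment satisfies a CSP given as a list of constraints, each a tuple of variables together with the set of value tuples those variables may jointly take:

    def satisfies(assignment, constraints):
        """Return True if `assignment` (a dict var -> value) violates no constraint.

        Each constraint is a pair (vars, allowed): a tuple of variable names and
        a set of value tuples that those variables are permitted to take jointly."""
        for vars_, allowed in constraints:
            if tuple(assignment[v] for v in vars_) not in allowed:
                return False
        return True

    # Example: a tiny 2-colouring instance on a triangle -- unsatisfiable, so every
    # assignment violates at least one "endpoints differ" constraint.
    domain = (0, 1)
    neq = {(a, b) for a in domain for b in domain if a != b}
    constraints = [(("x", "y"), neq), (("y", "z"), neq), (("x", "z"), neq)]
    print(satisfies({"x": 0, "y": 1, "z": 0}, constraints))  # False: x == z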

In this talk, we discuss models of random constraint satisfaction problems. The main issue is the following: if we form a random problem by adding random constraints one-at-a-time, how does the probability of the problem being satisfiable change? It is clear that this probability is monotonically decreasing as the number of constraints increases, but how suddenly does it decrease? How many constraints must be added before the decrease is significant? (In formal terms, these questions can be rephrased as "does the problem have a sharp threshold of satisfiability?" and "how many constraints must be added before it is no longer almost surely satisfiable?") These questions generalize the well-studied notions of the threshold of satisfiability for a random instance of k-SAT and the threshold of k-colourability for a random graph.

Monday, January 24, 2005, 4 p.m. - Science Center 356
Brigitte Pientka
Department of Computer Science, McGill University
Overcoming Performance Barriers: Efficient Proof Search in Logical Frameworks

Abstract:
The logical framework Twelf provides an experimental platform to specify, implement and execute formal systems. One of its applications is in proof-carrying code and proof-carrying authentication, where it is successfully used to specify and verify formal guarantees about the run-time behavior of programs. These real-world applications have exposed some critical bottlenecks: execution of many logical specifications is severely hampered and some are not executable at all, thereby limiting its application in practice.

In this talk, I will describe three optimizations which substantially improve the performance and extend the power of the existing system. First, I give an optimized unification algorithm for logical frameworks that allows us to eliminate unnecessary occurs-checks. Second, I present a novel execution model based on selective memoization, and third, I discuss term indexing techniques to sustain performance in large-scale examples. As experimental results will demonstrate, these optimizations taken together constitute a significant step toward exploring the full potential of logical frameworks in real-world applications.

 
Thursday, January 13, 2005, 2:30 p.m. - Science Center 354
Michael Jones
Montclair State University, Department of Mathematical Sciences
Dynamics of Nim-Induced Difference Equations
Abstract:
Nim is a well-known combinatorial two-player game in which the players alternate turns removing tokens from several piles. As such, Nim is the canonical example of an impartial game, where from any position the same moves are legal for both players. In the 1901-02 issue of The Annals of Mathematics, Bouton used arithmetic over $Z_2$ to provide a complete analysis of the first player's optimal behavior, or winning strategy, in multi-pile Nim, where players can remove any number of tokens from a pile. A single-pile variant of Nim requires players to remove only a restricted number of tokens (the allowed amounts form the subtraction set) from the pile on their turns. Classifying optimal behavior for an arbitrary subtraction set is one of the outstanding unsolved problems in combinatorial game theory. Winning and losing positions for player 1 in single-pile Nim are defined recursively as a two-symbol sequence depending on the subtraction set, or parameter set. The key to the approach is to write the recursion as a nonlinear dynamical system defined on the phase space $Z_2^{s_k}$, where $s_k$ is the largest element in the subtraction set. Under the appropriate initial conditions, the difference equation yields optimal play for Nim. The transient dynamics and Garden of Eden points are completely determined for arbitrarily sized subtraction sets. A characterization of cycle lengths for two-parameter subtraction sets is determined, and extensions of the two-parameter case to an arbitrarily sized subtraction set are explored. This talk is based on the paper "Nim Induced Dynamical Systems" by Michael A. Jones and Diana M. Thomas.
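The recursion defining winning and losing positions can be computed directly, as in the short sketch below (an illustrative calculation, not the paper's dynamical-system formulation): a pile size is a losing position for the player to move exactly when every legal removal leads to a winning position for the opponent.

    def losing_positions(subtraction_set, limit):
        """Return the losing positions (for the player to move) in single-pile Nim
        with the given subtraction set, for pile sizes 0..limit."""
        losing = [True] * (limit + 1)  # position 0 is losing: no legal move remains
        for n in range(1, limit + 1):
            # n is losing iff no legal move reaches a losing position for the opponent
            losing[n] = not any(losing[n - s] for s in subtraction_set if s <= n)
        return [n for n in range(limit + 1) if losing[n]]

    # Example: subtraction set {1, 3}; the losing positions are the even pile sizes.
    print(losing_positions({1, 3}, 20))  # [0, 2, 4, ..., 20]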
 
Friday, January 7, 2005, 10 a.m. - Science Center 354
Lin Zhang
Exponential Tail Bounds

Abstract:
Given a sequence of independent random variables, how close is their sum to its mean? The answer is that it is exponentially close, assuming that the random variables are well-behaved. Some explicit bounds are known under several reasonable assumptions on the random variables. We describe here well-known exponential tail bounds that are relevant to computer science. These are organized in the following categories:

(a) standard bounds: Chernoff and Hoeffding bounds.
(b) martingale bounds: McDiarmid's bound and Azuma's inequality.
(c) isoperimetric bound: Talagrand's inequality.

We survey the basic ideas behind these bounds and describe the most direct proofs. Then, we observe a simple quantitative connection between Chernoff-Hoeffding bounds and the notion of negatively associated random variables.
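For reference, a representative bound from category (a) is Hoeffding's inequality: if $X_1, \dots, X_n$ are independent with $a_i \le X_i \le b_i$ and $S_n = X_1 + \cdots + X_n$, then for every $t > 0$,

$\Pr\left[\, |S_n - \mathbb{E}[S_n]| \ge t \,\right] \;\le\; 2 \exp\!\left( \frac{-2t^2}{\sum_{i=1}^n (b_i - a_i)^2} \right).$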

 
Friday, December 10, 2004, 3 p.m. - Snell Hall 214
Todd Deshane
Web-Based Collaborative Exploration and Characterization of Large SQL Databases
Abstract:
Groups of people in many diverse fields face the challenge of characterizing or mining information from a large data set. People seeking to understand such data, which is typically stored in a relational database, often need to submit long-running SQL queries that make inefficient use of valuable computing resources. In addition, the web interfaces to large data sets are often written by and for engineers or scientists, without accommodating a range of user skill levels. Our goal is to shift computational data mining from the laborious task of the individual to the cooperative task of the group. To address these problems, we developed a query caching system with a web-based collaborative interface that promotes knowledge sharing among users of varying skill and experience. We discuss our design for this system along with our implementation of a prototype that uses a large set of Border Gateway Protocol (BGP) data. By using this system to explore and characterize large BGP databases, we demonstrate that collaborative techniques can in fact be applied effectively to mining large SQL databases.
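A minimal sketch of the query-caching idea (hypothetical code; the normalization rule and the execute_sql backend are assumptions, not details from the abstract): results of expensive queries are stored under a normalized form of the SQL text, so repeated or shared explorations hit the cache rather than the database.

    import hashlib

    class QueryCache:
        """Toy cache mapping normalized SQL text to previously computed results."""

        def __init__(self, execute_sql):
            self.execute_sql = execute_sql  # callable: SQL text -> result rows
            self.store = {}

        @staticmethod
        def _key(sql):
            normalized = " ".join(sql.lower().split())  # crude normalization
            return hashlib.sha1(normalized.encode()).hexdigest()

        def run(self, sql):
            key = self._key(sql)
            if key not in self.store:       # cache miss: run the query once
                self.store[key] = self.execute_sql(sql)
            return self.store[key]          # cache hit: reuse the shared result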
 
Friday, December 10, 2004, 10 a.m. - Snell Hall 214
Matt Finlayson
OfficeBeans – An Interactive Component Toolkit for Data Visualization
Abstract:
Office Beans provides a portable, web-based solution for viewing large amounts of data quickly and concisely. The project gives the user the ability to arrange their data using dynamic charts and graphs, as well as to share the results with other users. Office Beans was developed as a generic front end to any large data repository; in particular, it was designed to meet the needs of jETIA (pronounced "juh tie ah"), a report generation infrastructure and toolkit for enterprise reporting. Office Beans is implemented as a collection of Java applets that receive data through an XML stream. A single instance of an XML data set is shared by the reporting applets, and changes are reflected in each of the applets through inter-applet communication. This modified instance of the XML data set can be passed back to jETIA to be used at a later time or to be shared with other users. Although Office Beans has been designed specifically to work with jETIA, any Office Beans-compliant XML stream can be represented with spreadsheets, pivot tables, and line, bar, or pie charts. The implementation of Office Beans is an exercise in the application of modern software engineering practices and open-standard technologies to produce a simple and robust application that meets end users' needs.
 
Wednesday, December 8, 2004, 9:30 a.m. - Snell Hall 175
Eli Michael Dow
Dept. of Computer Science, Clarkson University
Project E.D.E.N.: Extra Data Everyone Needs

Abstract:
Interested in learning about an open source alternative to Microsoft's oft-delayed WinFS, Google's recently announced desktop search system, or Apple's Spotlight search technology? Would you find it useful if your computer held on to extra information about your files so you could locate them with greater ease?

I have implemented my own approach to metadata storage and search that parallels the functionality found in these systems. This includes a method for reliably storing additional information about your files seamlessly and quickly. Furthermore, my approach is unlike those commercial solutions because it is extremely portable across operating systems, underlying file systems, and database implementations.

Come to New Snell Hall room 175 Wednesday, December 8, 9:30 – 11:30 a.m. to hear more about the E.D.E.N. system architecture, a discussion of keeping stored metadata synchronized with the actual files on disk (a.k.a. the coherence problem), and the interesting advances in search it can facilitate. The talk should be fairly complete, and accessible to anyone with a basic interest in the subject matter.

 
Friday, November 12, 2004, Noon - Snell 212 (jointly sponsored by the Bio-Interface Seminar Series and The Department of Biology)
Phil Long
Center for Computational Learning Systems, Columbia University
Microarray Data and the Theory of Generalization in Boosting

Abstract:
Boosting is a widely successful approach to classification in which a number of "base classifiers" are incorporated into an aggregate classifier. The behavior of boosting seemed to run counter to the common view that a learning algorithm should strike a balance between the complexity of a hypothesis and its fit to the data. The influential "margin analysis" provided one explanation of this phenomenon.

Microarrays allow scientists to measure the rate of expression of a large number of genes simultaneously in a given tissue sample. These rates are referred to collectively as an expression profile for the tissue sample. Some biological problems can be addressed by learning a classifier for expression profiles.

Applying boosting, and some related algorithms, to microarray data and related data reveals that the margin analysis provides an incomplete explanation of generalization in boosting. Examining how it falls short suggests an alternative mode of analysis, and we have carried out a preliminary analysis of this type. Suitably modified, boosting appears to be a useful tool for classifying expression profiles. (This talk includes joint research with Vinsensius Vega and Sanjoy Dasgupta.)

 
Monday, November 8, 2004, 12:30 p.m. - SC 346
Supraja Gurajala
Computer Science, Clarkson University
A New Multi-Rate QoS-Aware MAC Protocol for Multi-hop Ad hoc Networks
Abstract:
Ad hoc networks are an emerging area of research due to the growing popularity of portable wireless devices. Advances in wireless communications and the growth of real-time applications have necessitated the development of wireless networks that support a high quality of service (QoS). The performance of a single-channel ad hoc network depends largely on the medium access control (MAC) protocol. A number of MAC protocols have been proposed in recent years, and each of these has focused on improving and optimizing a few selected QoS issues. We introduce a new MAC protocol, called the Multi-rate Multi-hop QoS-aware MAC Protocol (MMMP), for ad hoc networks that concurrently addresses several QoS issues. MMMP is an asynchronous scheme that can handle multi-rate and multi-hop real-time traffic, reduce hidden-terminal problems, provide different QoS requirements for different classes of real-time traffic, and bound end-to-end delay for real-time traffic in multi-hop environments. Two additional new features, Smart Drop and Release Bandwidth, are incorporated in MMMP to enable bandwidth preservation and admission control. Simulations were conducted on the QualNet platform to test the performance of the new scheme. The results indicate that MMMP outperforms IEEE 802.11 on all performance measures and can efficiently handle a range of load conditions.
 
Thursday, October 21, 2004, 2:30 p.m. - SC 354
Mark Goldberg
Department of Computer Science, Rensselaer Polytechnic Institute
Network Algorithms for Homeland Security, or How to find a Small Hidden Group in a Large Communication Network

Abstract:
A small group of actors in a large communication network is planning a terrorist act against people they disagree with. They are very careful in three ways:

  • they take time to plan all details, to guarantee their "success";
  • all their messages are well camouflaged: the content is encrypted so that it is difficult for an outsider to discern their real meanings; and
  • the group members do not trust any outsider with the details of the plan and preparation.

Do we have any chance of discovering this group before they start implementing whatever they are planning? We describe models and efficient algorithms for detecting groups that function within a communication network while attempting to hide their activity: hidden groups. As expected, we find that if the background communications are dense or more structured, then the hidden group is harder to detect. Surprisingly, we also find that when the hidden group is non-trusting, it is easier to detect that group.

 
Beginning Friday, September 10, 2004, Noon - SC 354
Athanassios S. Fokas
University of Cambridge, UK and Clarkson University
Medical Imaging
Series of 5 lectures on Medical Imaging (lecture notes will be available)
  • Friday, September 10, Noon – SC 346
  • Monday, September 13, Noon – SC 346
  • Tuesday, September 14, 2:30 – 3:30 p.m. – SC 354
  • Wednesday, September 15, Noon – 1 p.m. – SC 342
  • Thursday, September 16, 2:30 – 3:30 p.m. – SC 354
 
Thursday, September 9, 2004, 2:30 p.m. - Science Center 354
Rick Chartrand
LANL
The Monge-Kantorovich Problem and a Nice Solution
Abstract:
In 1781, Gaspard Monge proposed the following problem: given a hole in the ground to fill and a pile of dirt with which to fill it, what is the best way to move the dirt to the hole? This problem and its generalizations turn out to be quite challenging mathematically. Moreover, problems from many different fields can be formulated in this optimal transport framework. We will present some of the theory of the Monge-Kantorovich problem and its dual, and how the theory leads to a numerical implementation method that is elegant and fast in comparison to current methods in the literature. Examples from the application of image warping will be shown.
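In modern notation (a standard formulation, included only for orientation), Monge's problem seeks a transport map $T$ pushing the "dirt" distribution $\mu$ onto the "hole" distribution $\nu$ at minimal cost, and Kantorovich's relaxation optimizes over couplings $\pi$ of $\mu$ and $\nu$:

$\inf_{T \,:\, T_\# \mu = \nu} \int c(x, T(x))\, d\mu(x) \qquad \text{and} \qquad \inf_{\pi \in \Pi(\mu, \nu)} \int c(x, y)\, d\pi(x, y).$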
 
Wednesday, September 1, 2004, 2 p.m. - Science Center 348
Zhaoying Chen
MS Thesis Defense - "A new QoS MAC protocol for Ad Hoc Wireless Network"
Committee Members: Sunil Kumar (Advisor), Chris Lynch and Christino Tamon
Abstract:
Many MAC protocols have been developed to support mobile ad hoc networks. Their main goals are to solve the hidden-terminal and exposed-terminal problems that are unique to ad hoc networks. With the development of high-speed wireless communications, real-time transmissions are becoming increasingly common in wireless ad hoc networks. To provide Quality of Service (QoS), services should be differentiated, but not at the cost of starving the remaining transmissions. We developed a new MAC scheme based on the MMACA/PR scheme, combining features of DPS and ASAP with new features of our own, to provide differentiated but reasonably fair service, address the hidden-terminal problem, and provide aggregated services.
 
Friday August 20, 2004, 1 p.m. - Science Center 348
Dalia Solomon
MS Thesis Defense
Committee Members: Profs. Jeanna Matthews, Chris Lynch and Tino Tamon
Abstract:
HoneyPot machines are designed to mimic real systems in order to mislead intruders from breaking into the real system. By luring the intruder to a HoneyPot machine, an administrator can monitor the activity of the trespasser. The administrator can then learn about the vulnerabilities of the current system and redesign it to be more secure. But in order to do so, the administrator must build the HoneyPot machine in such a way that it fools the attacker into believing that it is the real system, so that he or she can effectively log information about the attacker's behavior. I will discuss building a HoneyPot with VMware, and I will also talk about fingerprinting, countermeasures, and related issues.
 
RESCHEDULED - Friday, May 7, 2004, 10 a.m. - SC 354
(Co-Sponsored with the Department of Mechanical and Aeronautical Engineering)
Clancy Rowley
Department of Mechanical Engineering, Princeton University
Low-Order Modeling and Control of Fluids

Abstract:
The ability to effectively control a fluid would enable many exciting technological advances, including the design of quieter, more efficient aircraft. Most of the flow control strategies tried so far have been largely ad hoc, and have not used many of the available tools from control theory and dynamical systems, which can guide controller design as well as placement of sensors and actuators. These tools require knowledge of a model of the system in terms of a system of differential equations, and the equations governing a fluid, though known, are too complex for these tools to apply. This talk addresses model reduction techniques, which are used to simplify existing models, to obtain low-order models tractable enough to be used for analysis and control, while retaining the essential physics. These techniques provide a bridge between complex problems and the mathematical tools useful for their analysis.

Specifically, the talk will focus on recent developments of two techniques, Proper Orthogonal Decomposition (POD) and balanced truncation. Each of these techniques has strengths and weaknesses, and we show how ideas from both techniques may be combined, to exploit their strengths. We illustrate the methods by obtaining reduced-order models for a compressible flow past a cavity, and an incompressible channel flow.

 
Friday, April 30, 2004, 3 p.m. - Science Center 354
Thanasis Fokas
Clarkson University, Department of Mathematics and Computer Science and University of Cambridge, UK
Fourier Transforms, Their Nonlinearization and the Imaging of the Brain
 
Thursday, April 22, 2004, 4:15 p.m. - Science Center 307
Thai Giang
Clarkson University, Department of Mathematics and Computer Science
Using Batch to Improve RSA
Abstract:
Security is a major issue in today's networks. Passwords and private messages sent across the network are not as secure as one might expect. Although encryption schemes are available, applications that use them are known to perform worse than those that do not use any encryption scheme. This thesis analyzes the standard RSA encryption algorithm and compares it with Fiat's Batch RSA algorithm. The two algorithms are compared in a secure Server/Client Handshake Protocol. Our goal is to test Batch RSA and show that the standard Handshake Protocol can be improved with Batch RSA.
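As a reminder of the standard scheme the thesis builds on (textbook RSA; the batching details are in Fiat's paper and in the thesis): with modulus $n = pq$ and exponents satisfying $ed \equiv 1 \pmod{\varphi(n)}$, encryption and decryption are

$c = m^e \bmod n, \qquad m = c^d \bmod n,$

and the expensive step is the private-key exponentiation, whose cost Batch RSA amortizes across several simultaneous requests.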
 
Friday, April 16, 2004, 3 p.m. - Science Center 354
Joe Skufca
US Naval Academy, University of Maryland, US Navy
Travellers on a Loop Without Passing - A Generalized Random Walk
Abstract:
We consider the problem of stochastic flow of multiple particles traveling on a closed loop, with a constraint that particles move without passing. We use a Markov chain description that reduces the problem to a generalized random walk on a hyperplane (with boundaries). By expressing positions via a moving reference frame, the geometry of the no-passing criteria is greatly simplified, with the resultant condition expressible as the coordinate system planes which bound the first orthant. To determine state transition probabilities, we decompose transitions into independent events and construct a digraph representation in which calculating transition probability is reduced to a shortest path determination on the digraph. The resultant decomposition digraph is self-converse, and we exploit that property to establish the necessary symmetries to find the stationary density for the process.
 
Friday, March 5, 2004, 4 p.m. - CAMP 176
(Co-Sponsored with the Department of Electrical and Computer Engineering)
James G. Nagy
Department of Mathematics and Computer Science, Emory University
Exploiting Kronecker Product Structure in Image Restoration
Abstract:
Image restoration is the process of minimizing or removing degradations from an observed image, which may be distorted by such things as blurring and noise. Such problems arise in applications ranging from microscopy to astronomy. Computational methods for restoring the image require solving large, very ill-conditioned linear systems. The structure of the "blurring" matrix determines whether it is feasible to use matrix factorizations, or whether iterative methods are more appropriate for solving the linear systems. In this talk we describe some aspects of image restoration algorithms, and show that the blurring matrix can often be represented in terms of Kronecker products. We also show how to exploit the Kronecker product structure to improve computational efficiency. Examples from various applications will be presented.
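The computational payoff rests on a standard identity (stated for orientation): if the blur separates into row and column factors $A_r$ and $A_c$, then

$(A_r \otimes A_c)\, \mathrm{vec}(X) \;=\; \mathrm{vec}(A_c X A_r^{T}),$

so the full blurring matrix never needs to be formed explicitly; its action reduces to two small matrix-matrix products.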
 
Friday, February 27, 2004, 10 a.m. - Snell 213
Jeanna Matthews
Department of Mathematics and Computer Science, Clarkson University
Security Exploits and Defenses Up and Down the Protocol Stack
Abstract:
I will discuss some of the security holes in Internet protocols at various layers of the network protocol stack, from application-level protocols to link-layer protocols, along with defenses that can be implemented at each layer. I will also discuss some of the most common classes of attacks on computer systems.
 
Friday, February 23, 2004, 4 p.m. - Camp 176
(Co-Sponsored with the Department of Mechanical and Aeronautical Engineering)
Pei Yu
Department of Mechanical Engineering, University of Western Ontario
Bifurcation Dynamics in Control Systems
Abstract:
This talk focuses on bifurcation dynamics in control systems, which are described by ordinary differential equations, partial differential equations and delayed differential equations. In particular, bifurcations related to double Hopf, combined double-zero and Hopf, and chaos are presented. Center manifold theory and normal form theory are applied to simplify the analysis. Explicit stability conditions are derived, and routes of bifurcation leading to various complex dynamics are given. A system with time-delayed feedback control is studied to show that time delay plays an important role in controlling and anti-controlling chaotic motions. Furthermore, a simple feedback controller is designed for anti-controlling the Hopf bifurcation arising in the Lorenz system.
 
Friday, February 20, 2004, 10 a.m. - Snell 213
John Aycock
Department of Computer Science, University of Calgary
The Sky is Falling!

Abstract:
Last fall the University of Calgary offered a course on computer viruses and malicious software, the first of its kind in Canada and one of only a handful that have ever been offered worldwide. The course generated a global stream of controversy and media attention, in part because students were taught how to write viruses as well as defend against them.

This talk will be the first public presentation about the course. I will talk about the rationale behind the course, what I taught the students, how we put together a secure laboratory for coursework, and the experience we gained.

 
POSTPONED - to be rescheduled!
(Co-Sponsored with the Department of Mechanical and Aeronautical Engineering)
Clancy Rowley
Department of Mechanical Engineering, Princeton University
Low-Order Modeling and Control of Fluids

Abstract:
(The abstract is identical to the one given under the rescheduled listing of this talk, Friday, May 7, 2004, above.)

 
Thursday, January 29, 2004, 2:30 p.m. - SC 354
Erik Bollt
Department of Mathematics and Computer Science, Clarkson University
Modeling, Measurement and the Role of Partition Placement in Deriving Information from an Experiment
Abstract:
A popular method of encoding chaotic time series from a physical experiment is the "threshold crossings" technique, relative to an arbitrary partition. The intent of symbolic dynamics is to describe a dynamical system, but only a generating partition is expected to uniquely encode trajectories. We rigorously investigate some consequences of using itineraries relative to a nongenerating partition. We find that the misrepresentation can be severe, including diminished topological entropy (though in a devil's-staircase-like and surprisingly nonmonotone fashion), and a high degree of nonuniqueness.
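A tiny sketch of the threshold-crossings encoding (illustrative only; the talk's rigorous results concern what such itineraries can and cannot capture): iterate a chaotic map and record a 0/1 symbol according to which side of the partition point the orbit lands on.

    def threshold_itinerary(x0, threshold, steps):
        """Symbolic itinerary of the logistic map x -> 4x(1-x) relative to an
        arbitrary threshold partition (symbol 0 if x < threshold, else 1)."""
        x, symbols = x0, []
        for _ in range(steps):
            symbols.append(0 if x < threshold else 1)
            x = 4.0 * x * (1.0 - x)
        return symbols

    # The generating partition for this map sits at x = 0.5; an arbitrary
    # threshold such as 0.3 generally misrepresents the dynamics.
    print(threshold_itinerary(0.123, 0.5, 20))
    print(threshold_itinerary(0.123, 0.3, 20))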
 
Friday, December 5, 2003, 3 p.m. - SC 342
Juan Wang
Department of Mathematics and Computer Science, Clarkson University
To Design and Implement a Minitype Proxy Firewall System

Abstract:
This thesis analyzes the causes of Internet security problems and the advantages and disadvantages of commonly used network security techniques such as encryption, authentication, secure protocols, and firewalls. It also introduces the basic concepts and functions of firewalls, and then analyzes the implementation cost and security performance of firewall systems with different architectures. Research on firewall techniques is important because of the effective protection they provide for network security.

Since a proxy firewall has proven more secure than a packet-filtering firewall, this system is designed as a proxy firewall. The system runs on a bastion host with the Red Hat Linux operating system. It provides network address translation, hiding of internal information, strong user authentication, detailed logging, and related capabilities, and it is sufficient for normal Internet connections.

The focus of the thesis is the implementation of a small proxy firewall system, which includes a telnet proxy, an FTP proxy, a user authentication service, logging, and accounting. The telnet proxy tn-gw and the FTP proxy jftpgw control telnet and FTP connections, respectively: they validate the connection, perform the necessary user authentication, forward data between client and server, log relevant information, and record accounting data. The user authentication service authsrv controls access by user rather than by IP address, which makes the system more secure. It supports UNIX passwords and one-time passwords; the client is embedded in the proxies, the server creates challenges and verifies responses, and user information is maintained by a management program. The detailed log is used for future audit and tracing, and the accounting data is used to calculate communication fees. Total incoming and outgoing bytes and online time are tallied by user name for the e-mail application, and by user name or IP address for other applications, all automatically every month.

In view of ongoing developments in network and security technology, the thesis also surveys new firewall techniques and improvements that might be considered in future firewall designs.

 
Thursday, December 4, 2003, 2:30 p.m. - SC 354
Alexis Maciel
Department of Mathematics and Computer Science, Clarkson University
A Simple Proof of the Weak Pigeonhole Principle

Abstract:
The Pigeonhole Principle states that if n pigeons are placed in n-1 holes, then at least one hole will contain more than one pigeon. The Weak Pigeonhole Principle refers to a version where the number of pigeons is significantly larger than the number of holes, typically at least twice as large. Both of these basic principles have numerous applications.
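In propositional form (the standard encoding studied in proof complexity), the principle for $m$ pigeons and $n < m$ holes, with variables $p_{ij}$ meaning "pigeon $i$ sits in hole $j$", reads

$\bigwedge_{i=1}^{m} \bigvee_{j=1}^{n} p_{ij} \;\Rightarrow\; \bigvee_{j=1}^{n} \bigvee_{1 \le i < i' \le m} \left( p_{ij} \wedge p_{i'j} \right),$

and the weak version takes $m \ge 2n$.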

These theorems are easy to prove. How easy? Questions about the complexity of proving mathematical theorems are natural but they also have connections to important problems in computational complexity, automated theorem proving and software verification.

In this talk, we will introduce the study of proof complexity and present one of the simplest known proofs of the Weak Pigeonhole Principle. This is joint work with Toni Pitassi and Alan Woods.

 
Monday, November 17, 2003, Noon - SC 356
(Co-Sponsored with the Department of Biology and Seminar Series in Science and Technology at the Bio-Interface)
Katherine St. John
City University of New York, Department of Mathematics and Computer Science
Computational Methods for Analyzing Phylogenetic Trees
Abstract:
Evolutionary histories, or phylogenies, form an integral part of much work in biology. In addition to the intrinsic interest in the interrelationships between species, phylogenies are used for drug design, multiple sequence alignment, and even as evidence in a recent criminal trial. Much work has been done on designing algorithms that build phylogenetic trees given representative sequences of their DNA. The optimization criteria preferred by biologists for building trees are NP-hard, so heuristics are often used that return many possible trees instead of a single optimal tree. This talk concentrates on the heuristics used for tree reconstruction, as well as how to summarize, analyze, and visualize these sets of trees. In particular, we will focus on fast reconstruction methods with provably nice properties, calculating biologically meaningful distances between trees quickly, and visualizing large sets of trees using the treecomp package designed by our group as a module for the Mesquite system (developed by Wayne and David Maddison).
 
Tuesday, November 11, 2003, 3 p.m. - Science Center 348
Allan Borodin
University of Toronto, Department of Computer Science
What is a Greedy Algorithm?

Abstract:
Many undergraduate algorithms courses and texts organize the material in terms of "algorithmic paradigms", such as greedy algorithms, backtracking, dynamic programming, local search, primal-dual, etc. We seem to be able to intuitively describe what we have in mind when we discuss these classes of algorithms, but rarely (if ever) do we (or any of the standard texts) attempt to define precisely what we mean by greedy, dynamic programming, etc.

In a paper co-authored with Nielsen and Rackoff, we proposed a definition for greedy-like (approximation) algorithms, called priority algorithms. We did so in the context of classical scheduling problems. Angelopoulos and Borodin then applied the priority algorithm framework to the facility location and set cover problems. Impagliazzo and Davis have recently applied the framework to a number of traditional graph theoretic optimization problems.

Similar to online algorithms, we want to derive limitations on the (approximation) power of priority algorithms based only on the structure of the algorithm (allowing unbounded time complexity). Hopefully such a study will also lead to new algorithms. We motivate the priority algorithm framework by discussing some well known greedy algorithms and the corresponding lower bounds provable within this framework. We also discuss some extensions of the model, such as the "one-pass" framework of Erlebach and Spieksma.
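To fix intuition about the greedy-like template being formalized, here is a familiar example as a short Python sketch (interval scheduling by earliest finish time; an illustration of the general shape of a priority-style algorithm, not the paper's formal model): items are considered in an order given by a priority function, and each is irrevocably accepted or rejected.

    def greedy_interval_scheduling(intervals):
        """Accept a maximum-size set of pairwise-disjoint intervals.

        Each interval is a (start, finish) pair; the priority order is
        "earliest finish time first", and decisions are irrevocable."""
        accepted, last_finish = [], float("-inf")
        for start, finish in sorted(intervals, key=lambda iv: iv[1]):
            if start >= last_finish:   # compatible with everything accepted so far
                accepted.append((start, finish))
                last_finish = finish
        return accepted

    print(greedy_interval_scheduling([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (8, 10)]))
    # -> [(1, 4), (5, 7), (8, 10)]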

 
Friday, November 7, 2003, 3 p.m. - Snell Hall (hill) 213
Robert Vrablik
Senior Strategist/Solution Architect, Enterprise Systems Grid Computing, IBM Corp.
The Era of Grid Computing in an "On-Demand" Environment

Abstract:
The hype of the Internet boom has faded, but the challenges of how to drive business value from technology investments are more critical now than ever before. Products will be built only when ordered. Orders will automatically trigger a ripple effect across your business, from billing to manufacturing to shipping. Whatever you deliver, whether it's products to consumers, information to employees, or parts to a factory halfway around the world, will need to be coordinated through a technology backbone that makes the most of tightly integrated, streamlined critical business processes.

Enter the new world of "e-business on demand": a place where it's not just being on the Net but being a part of it, with the ability to respond with speed to any customer demand, market opportunity or competitive threat. To meet this challenge, Grid Computing has emerged as an enabling technology that forms the foundation of the "on-demand" computing environment. Grid Computing is a new, services-oriented architecture based on the Open Grid Services Architecture (OGSA) that embraces heterogeneous systems and involves distributed computing, over the Internet or any private network, via open standards. It is designed to help businesses efficiently consolidate, pool, share and manage IT resources.

Come learn about how Grid Computing helps:

  • manage resources across distributed heterogeneous platforms
  • deliver seamless Quality of Service (QoS) across integrated Grid resources
  • define open, published interfaces
  • integrate with existing IT resources
 
Thursday, October 30, 2003, 2:30 p.m. - SC 354
(Co-Sponsored by Mechanical and Aeronautical Engineering)
Serpil Kocabiyik
Memorial University of Newfoundland, Department of Mathematics and Statistics
The Flow Induced by a Circular Cylinder Subject to Superimposed Oscillations
Abstract:
Numerical calculations of two-dimensional viscous incompressible flow induced by an impulsively started circular cylinder which performs rotational and/or rectilinear vibrations are reported at Reynolds number $R=855$. These oscillations are harmonic and superimposed on the constant translational velocity to examine the effect of increasing the oscillation frequency on the near-wake structure and the hydrodynamic forces acting on the cylinder. The numerical results lead to the postulation of asymmetric and symmetric vortex shedding regimes depending on the oscillation conditions, which are discussed in some detail for exemplary cases. Comparisons are made with previously existing results to validate the solution methodology.
 
Wednesday, October 22, 2003, 1 p.m. - Science Center 356
Alexander Vladimirsky
Cornell University, Department of Mathematics
Non-Iterative Methods for Boundary Value Problems in Control Theory

Abstract:
We consider a wide class of anisotropic optimal trajectory problems, review the dynamic programming approach and derive the corresponding static Hamilton-Jacobi-Bellman PDE. A usual numerical approach requires some discretized version of the PDE to be satisfied at every mesh point, thus leading to a necessity to solve a coupled system of N nonlinear equations. Ordered Upwind Methods (OUMs) present an alternative (non-iterative) approach to obtaining the convergent numerical solution. These methods are based on a careful use of the direction of information propagation, stemming from the Optimality Principle. The resulting computational complexity is O(N log N) on any unstructured mesh with N mesh points.

We illustrate OUMs on several continuous & hybrid optimal trajectory test-problems. In addition, we briefly describe applicability of OUMs to problems in anisotropic front propagation, in optimal control under uncertainty, and in dynamical systems. Several of the above projects are joint work with J.A. Sethian and with J. Guckenheimer.

 
Thursday, October 16, 2003, 5 p.m. - Science Center 360
(Co-Sponsored with Pi Mu Epsilon Math Honorary Society)
Cristina Ruiz
Binghamton University, Department of Mathematics
From Matrices to Oriented Matroids
Abstract:
Oriented matroids are a combinatorial abstraction of linear algebra. They are a natural concept which has applications in areas like topology, combinatorics and algebraic geometry to name a few. In this talk I will introduce oriented matroids by using points, vectors and lines on the plane. As an example we will use oriented matroids to read off the faces of a high dimensional polyhedron. This talk is accessible to undergraduates. However some knowledge of linear algebra is required. (Students interested in pursuing a Ph.D. are particularly encouraged to talk with the speaker.)
 
Thursday, October 16, 2003, 2:30 p.m. - Science Center 354
Ira B. Schwartz
Naval Research Laboratory, Nonlinear System Dynamics Section, Washington, DC
Stochastic Epidemic Outbreaks or Why Epidemics Behave Like Lasers
Abstract:
Many diseases, such as childhood diseases, dengue fever, and West Nile virus, appear to oscillate randomly as a function of seasonal environmental or social changes. Such oscillations appear to have a chaotic bursting character, although it is still uncertain how much is due to random fluctuations. Such bursting in the presence of noise is also observed in driven lasers. In this talk, I will show how noise can excite random outbreaks in simple models of seasonally driven outbreaks, as well as in lasers. The models for both systems will be shown to share the same class of underlying topology, which plays a major role in the cause of the observed stochastic bursting. Finally, I will speculate on some ideas for stochastic epidemic control, and how to test these ideas in a physical optics experiment.
 
Thursday, October 9, 2003, 4 p.m. - SC 354
(Co-Sponsored by Pi Mu Epsilon Math Honorary Society)
Jeffrey Weeks
MacArthur Fellow, Canton, NY
The Shape of Space
Abstract:
When we look out on a clear night, the universe seems infinite. Yet this infinity might be an illusion. During the first half of the presentation, computer games will introduce the concept of a "multiconnected universe". Interactive 3D graphics will then take the viewer on a tour of several possible shapes for space. Finally, we'll see how satellite data released in February 2003 are beginning to reveal the true shape of our universe. The only prerequisites for this talk are curiosity and imagination. For first-year students on up.
 
Thursday, October 2, 2003, 4 p.m. - Science Center 346
(Co-Sponsored with Pi Mu Epsilon Math Honorary Society)
John Derrico
Eastman Kodak
Spline Interpolation and Modeling at Eastman Kodak
Abstract:
Splines have many uses in industry, ranging from process control contexts to research and development. While splines are often seen as functions of one variable, there are also uses for higher-dimensional splines. (You may even be making use of 3D splines and never know it.) In this talk I'll sample some of these uses, and I'll also suggest why we have used a particular class of spline model as opposed to some other model form.
 
Thursday, September 25, 2003, 4 p.m. - SC 356
Joel Foisy
SUNY Potsdam, Department of Mathematics
Knots and Links in Spatially Embedded Graphs
Abstract:
A graph is said to be intrinsically knotted if it contains a knotted cycle in every (tame) spatial embedding. Similarly, a graph is said to be intrinsically 3-linked if it contains a non-split 3 component link in every spatial embedding. It is an open question to determine all "minor-minimal" intrinsically knotted (or intrinsically 3-linked) graphs. We will discuss recent results concerning this open question. Some of the results discussed represent work done jointly with Garry Bowlin. No prior knowledge of knot theory will be assumed.
 
Thursday, September 11, 2003, 2:30 p.m. - SC 354
Alireza Ziarani
Clarkson University, Department of Electrical and Computer Engineering
A Nonlinear Adaptive System for Filtering and Parameter Estimation
Abstract:
This talk will present the mathematical properties and applications of a recently introduced technique of extraction of nonstationary sinusoidal signals and estimation of their parameters, namely amplitude, phase and frequency. The nonlinear dynamical system governing the algorithm presents open stability problems, which will be discussed in this presentation. An outline of applications of the technique in biomedical engineering, nondestructive testing of materials and power engineering will be presented.
 
Tuesday, August 26, 2003, 4 p.m. - Science Center 342
Kevin Vixie
Los Alamos National Laboratory
Variational Analysis, PDE's and Image Analysis: A Sampling of Details and The Big Picture
Abstract:
In this talk I introduce the mathematics involved in the variational approach to image analysis, comment on the connections to robust statistics and discuss in some detail one particular image denoising model: the Rudin-Osher-Fatemi Total Variation minimization method. The talk is expository in nature and will be friendly to graduate students. I will also comment on our most recent results.
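For reference, the Rudin-Osher-Fatemi model mentioned here denoises an observed image $f$ by solving, in its standard form,

$\min_{u} \; \int_{\Omega} |\nabla u| \, dx \;+\; \frac{\lambda}{2} \int_{\Omega} (u - f)^2 \, dx,$

where the total-variation term permits sharp edges while the fidelity term keeps $u$ close to the data.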
 
Monday, August 18, 2003, 11:30 a.m. - Science Center 354
Vineet S. Raghavan
Clarkson University, Department of Mathematics and Computer Science
Medium Access Control Protocols and Quality of Service in Ad Hoc Wireless Networks
Abstract:
Ad hoc wireless networks are a relatively new field, and as they gain popularity, new applications are bound to use them for a variety of reasons. In a network, the medium access control (MAC) protocols are responsible for enabling communication between two nodes. In a wireless network, however, these protocols gain even more importance, since the communication channel is inherently prone to errors and to unique problems such as those caused by hidden terminals and signal-fading effects. Although a great deal of research has been conducted on MAC protocols, the various issues involved have mostly been presented in isolation from each other. We therefore attempt to present a comprehensive review of several schemes, integrating the related issues and challenges with a view to providing a big-picture outlook on this vast area. We present a classification of MAC protocols based on their operating principles and underlying features. We look at the aspect of Quality of Service (QoS) in the context of ad hoc wireless networks, and attempt to correlate the challenges involved with suitable approaches presented in the literature. We also consider the unique application of sensor networks, and review some solutions that have been presented in this regard. In conclusion, we present a brief summary of key ideas and a general direction for future work.
 
Wednesday, July 23, 2003 - 2 p.m. - Science Center 356
Bojan Orel
University of Ljubljana, Slovenia
Geometric Integration

Abstract:
Qualitative properties of numerical solutions of ordinary differential equations (ODEs) have attracted a lot of attention recently. It has been shown that classical numerical methods for solving ODEs are very poor at preserving invariants. Linear multistep formulas can preserve only linear invariants, while some of the implicit Runge-Kutta methods can also preserve quadratic invariants.
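For orientation (a standard fact, not specific to this talk): a Runge-Kutta method with coefficients $(a_{ij}, b_i)$ preserves every quadratic invariant of the flow whenever

$b_i a_{ij} + b_j a_{ji} - b_i b_j = 0 \quad \text{for all } i, j,$

a condition satisfied, for example, by the Gauss collocation methods but by no explicit method.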

The term Geometric integrator means a method that is designed not just to minimize (in the classical sense) the numerical error but also to render correctly the invariants (i.e. the geometry) of the solution.

Since an invariant of an ODE can be interpreted as a homogeneous space of some Lie group, the natural environment for investigating invariant-preserving numerical methods for ODEs is the framework of Lie groups. If we know how to retain Lie-group invariance under discretization, we can also keep the solution on the homogeneous space.

We will review briefly recent advances in numerical methods that advance solutions of ODEs on Lie groups. In particular we will mention Runge-Kutta Munthe-Kaas methods for solving general nonlinear ODEs and Magnus methods for the solution of linear ODEs.

 
Wednesday, July 23, 2003, 9 a.m. - Science Center 356
Mehul Vora
Clarkson University, Department of Mathematics and Computer Science
Dynamic Management of QoS with Priority (DMQP): A Dynamic Mechanism for Multimedia Multicasting

Abstract:
Multimedia on the Internet has become not only a reality but also a necessity in the Web's appeal to millions of users, and the lack of effective end-to-end quality of service (QoS) in such applications is now a cause for significant concern. In addition, the heterogeneity of networks and end systems makes it difficult to multicast Internet multimedia in an efficient and flexible way. The interaction between multicasting and the delivery of multiple time-correlated continuous media streams with real-time delay requirements poses various new and interesting problems in research on communication protocols and architectures.

We propose a novel resource reservation mechanism, Dynamic Management of QoS with Priority (DMQP), for multimedia multicasting over the Internet. DMQP is a resource-reservation-based approach to providing QoS guarantees for real-time data traffic, in which an application requests QoS by specifying a range of values and the network tries to provide service at a specified point within this range, depending upon its service class. DMQP achieves service differentiation by classifying end-users into two classes: normal users and prioritized users.

 
Wednesday, June 18, 2003, 2 p.m. - Science Center 307
Barbara Morawska
Clarkson University, Department of Mathematics and Computer Science
Goal-Directed E-Unification - Completeness, Decidability and Complexity

Abstract:
In the thesis I explore a new approach to solving the E-unification problem, i.e., the problem of solving equations modulo an equational theory presented as a set of identities.
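A small example of the problem (illustrative, not drawn from the thesis): with the equational theory $E = \{ f(x, y) \approx f(y, x) \}$, the goal equation $f(a, z) \approx^{?}_{E} f(b, a)$ has the E-unifier $\sigma = \{ z \mapsto b \}$, since $f(a, b) \approx_E f(b, a)$ by commutativity, even though the two terms are not syntactically unifiable.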

This approach is goal-directed, because it searches for a solution for an equation by analyzing the structure of the goal (the equation to be solved), as opposed to procedures based on a completion of an equational theory. It is top-down, since it analyzes the goal and applies inference rules to the goal only with respect to the top symbols in the terms in the goal equation.

Such a goal-directed, top-down procedure is shown to be a complete, semi-decision procedure.

Next I will examine different ways of putting syntactic restrictions on a presentation of an equational theory, so that this semi-decision procedure becomes a decision procedure (it always terminates).

Hence we are able to prove complexity results for E-unification in several classes of equational theories.

 
Thursday, June 12, 2003, 10 a.m. - Science Center 307
Shuangtiao Huang
Clarkson University, Department of Mathematics and Computer Science
Distributed Network Management System with CORBA
Abstract:
Network management systems (NMS) play an important role in today's telecommunication networks. Telecommunication network environments have become more complicated, with heterogeneous devices provided by different vendors, and the number of network devices is rapidly increasing. These trends require an NMS to provide standard interfaces to share information and to be scalable enough to manage complex networks. This paper presents a method for designing an open, standards-based, distributed NMS with the Common Object Request Broker Architecture (CORBA). CORBA facilitates intercommunication among different NMSs, and its inherent distributed capability makes it possible to manage large-scale networks. For a large number of devices, a single server is not enough to host the NMS, so I propose an efficient algorithm to divide the managed system into partitions and distribute each partition to a server, so that every server holds an equivalent workload. At the same time, the method tries to reduce server-to-server communication. The basic approach is a heuristic optimization strategy.
 
Friday, May 23, 2003, 10:30 a.m. - Science Center 307
Jingyi Zou
Clarkson University, Department of Mathematics and Computer Science
Migrating Between Relational Databases and XML Documents
Abstract:
XML (eXtensible Markup Language) is fast emerging as the dominant standard for representing and exchanging information over the Internet. However, for the foreseeable future, a significant amount of business data will continue to be stored in relational database systems. It is therefore necessary to be able to extract relational data from XML documents and store it in a database, as well as to generate XML documents from existing databases. We build two frameworks, XML Generator and X2R, to solve the following problems: 1) generating an XML document and DTD from a relational schema, and 2) mapping a relational schema from an XML document with a DTD.