
Seminars & Colloquia

Thursday, November 5, 2009 - 5:30 p.m. - Snell Hall 213
Lisa Torrey, Department of Computer Science, St. Lawrence University
Machine Learning by Advice Taking and Transfer
Abstract:  In a type of machine learning called reinforcement learning (RL), an artificial agent learns to take actions in its environment to earn rewards.  Typically there is a lot of trial and error involved in exploring the environment, and agents can take a long time to learn.  This talk will present recent research into two ways of speeding up the process.  The first is enabling agents to take advice from a human.  The second is enabling agents to transfer knowledge from one task to another.
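
The flavor of both ideas can be sketched in a few lines of Python. The toy chain world below, advice as an optimistic bias on Q-values, and transfer as a warm start are generic illustrations of the two mechanisms, not the speaker's actual algorithms.

    import random

    # Minimal Q-learning on a chain: move right from state 0 to the goal.
    N, ACTIONS = 6, ('left', 'right')
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    def train(Q, episodes=300):
        for _ in range(episodes):
            s = 0
            while s != N - 1:
                if random.random() < EPS:            # explore
                    a = random.choice(ACTIONS)
                else:                                # exploit current estimates
                    a = max(ACTIONS, key=lambda b: Q[(s, b)])
                s2 = min(s + 1, N - 1) if a == 'right' else max(s - 1, 0)
                r = 1.0 if s2 == N - 1 else 0.0      # reward only at the goal
                target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
                Q[(s, a)] += ALPHA * (target - Q[(s, a)])
                s = s2
        return Q

    Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
    for s in range(N):
        Q[(s, 'right')] += 0.5    # advice: "prefer moving right" guides exploration
    Q = train(Q)                  # learns faster than from all-zero values
    Q_new_task = dict(Q)          # transfer: reuse the table as a warm start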

Friday, October 30, 2009 - 2:00 p.m. - Science Center 356

Hai Lin, Division of Mathematics and Computer Science, Clarkson University
"Algorithms for Cryptographic Protocol Verifications in Presence of Algebraic Properties"
Abstract:   

The design of cryptographic protocols is error-prone; people have found flaws in seemingly secure protocols years after they were proposed. For example, the famous Needham-Schroeder protocol was found to be flawed almost 20 years after it was originally proposed. This dissertation is about giving automatic proofs that protocols are secure. We propose several methods that can search for attacks systematically and automatically.

Traditionally, the perfect encryption assumption has been made when verifying protocols with formal methods. This assumption means that attackers can get nothing out of an encrypted message without knowing the key. But in some scenarios this assumption does not hold: cryptographic primitives can have algebraic properties. Under these properties, partial information about an encrypted message may be released, and there can be attacks that take advantage of this partial information. Traditional methods cannot find attacks based on these algebraic properties. In this dissertation, we propose several methods that take these properties into account.
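
As a concrete illustration (my example, not the dissertation's): encryption by bitwise XOR has the algebraic property C1 xor C2 = M1 xor M2, which releases partial information and enables an attack that never learns the key.

    # One known plaintext/ciphertext pair under a reused XOR key lets an
    # attacker decrypt any other message under that key: the keys cancel.
    def xor_encrypt(message: bytes, key: bytes) -> bytes:
        return bytes(m ^ k for m, k in zip(message, key))

    key = b'secretkey!'
    known_plain = b'HELLO WORL'
    known_cipher = xor_encrypt(known_plain, key)
    target_cipher = xor_encrypt(b'ATTACK NOW', key)   # attacker sees only this

    recovered = bytes(c1 ^ c2 ^ m1 for c1, c2, m1 in
                      zip(known_cipher, target_cipher, known_plain))
    print(recovered)   # b'ATTACK NOW', recovered without knowing the key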

In this proposal, I will mostly focus on the motivations of our work. First I will briefly introduce cryptographic protocols and their security issues. Then I will talk about automated reasoning and its applications to protocol verification. Next, I will talk about algebraic properties in some protocols, why they can be dangerous, and why it is difficult to take them into account.

Friday, April 3, 2009 - 2:00 p.m. - Science Center Room 356
Mamoon Hajja, Department of Computer Science, Clarkson University
Resource Isolation of Shares for VMware ESX Server and Xen Express
Abstract:  There have been many improvements in computing hardware over the past years; a number of them enable a computing machine to contain several operating systems and run them simultaneously: virtualization systems. This is achieved by adding another layer to the design of computing machines that most of us know, a model where an operating system sits on bare hardware, controls the machine, and provides an interface to the users of that machine. The purpose of the new layer is to emulate the hardware in such a way that each operating system believes it is sitting alone on the hardware. Virtualization has given industry many benefits, such as software testing and the consolidation of idle machines. A further and more detailed discussion of virtualization and its benefits and uses takes place in the Introduction.
A number of vendors, such as VMware and Xen, have competed and still compete to deliver a product that meets the needs of their clients, which has driven many researchers to compare and study those products. A number of papers have been published quantifying the overhead that virtualization imposes on a virtual machine compared with a base OS machine. Others have researched the scalability of those products and the performance degradation that virtual machines can experience as more VMs are added to the system. Finally, some papers have been published on performance isolation when one VM misbehaves by trying to conquer all hardware resources.
Our work looks at performance isolation when all VMs in a machine use their full shares of resources and one of them then attempts to use as much of a resource as possible. Each resource is examined separately; they include the hard drive, memory, and the network. I will describe the design of the benchmark and the results of the tests. The underlying virtualization environments tested are ESX Server, developed by VMware, and XenExpress, by XenSource; they are examples of fully virtualized and para-virtualized systems, respectively. Our results show the degradation that virtual machines experience when one machine tries to use as much as it can of a certain resource; we are not concerned with the misbehaving machine itself.
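
The measurement idea can be sketched simply. This hedged fragment (illustrative workload, file sizes, and timing, not the thesis benchmark) times a well-behaved VM's disk throughput before and during a misbehaving neighbor's stress load:

    import os, time

    def disk_throughput(path='bench.tmp', mb=64):
        """Write `mb` megabytes synchronously and return MB/s."""
        block = os.urandom(1024 * 1024)
        start = time.time()
        with open(path, 'wb') as f:
            for _ in range(mb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())    # force the data to the disk
        os.remove(path)
        return mb / (time.time() - start)

    baseline = disk_throughput()
    # ... start the disk-hogging workload in the neighboring VM here ...
    stressed = disk_throughput()
    print('degradation: %.1f%%' % (100 * (1 - stressed / baseline)))
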
Wednesday, March 11, 2009 - 1:00 p.m. - Science Center Room 356
Yuefeng Tang, Department of Computer Science, Clarkson University
Interpolation Algorithms in Program Verification
Abstract:  Recently, interpolants have been used as an important component of program verification.  In this thesis, we propose three different procedures for interpolant generation.  The first method, based on efficient rewriting methods, constructs interpolants for linear inequalities over the reals, and it does not require the use of combination procedures.  The process of building interpolants reflects the structure of the rewrite proof.  The second method, built on the linear arithmetic solver of Yices (an SMT solver), constructs interpolants for linear inequalities over the reals or linear equalities over the integers, and it is designed to work well in an SMT theorem prover. We have implemented this method and give experimental results.
The third method is built on the linear arithmetic solver of Yices with a Gomory-cut strategy for linear inequalities over the integers.  The interpolant is constructed by repeatedly refining a conditional formula derived from a contradictory equation until it becomes an interpolant.
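
For readers new to interpolants, here is a worked toy example (mine, not the thesis procedures): when A and B are jointly inconsistent sets of linear inequalities, combining A's inequalities with nonnegative Farkas multipliers to eliminate A-local variables yields a formula over the shared symbols that A implies and that contradicts B, i.e., an interpolant.

    from fractions import Fraction

    # A: { x - y <= 0,  y <= 0 }  over variables x, y
    # B: { 1 - x <= 0 }           over variable x (i.e., x >= 1)
    # Shared variable: x. Each inequality is ({var: coeff}, const),
    # meaning sum(coeff * var) + const <= 0.
    A = [({'x': 1, 'y': -1}, Fraction(0)),
         ({'y': 1}, Fraction(0))]
    farkas = [Fraction(1), Fraction(1)]   # nonnegative multipliers eliminate y

    combo, const = {}, Fraction(0)
    for lam, (coeffs, c) in zip(farkas, A):
        const += lam * c
        for v, a in coeffs.items():
            combo[v] = combo.get(v, Fraction(0)) + lam * a

    interpolant = {v: a for v, a in combo.items() if a != 0}
    print(interpolant, '+', const, '<= 0')   # {'x': 1} + 0 <= 0, i.e. x <= 0
    # A implies x <= 0; adding B's x >= 1 gives 1 <= 0, a contradiction;
    # and the formula mentions only the shared variable x.
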
Friday, November 14, 2008, 2:00-3:00 p.m. - Science Center Room 356
Tao Yang
Department of Computer Science, Clarkson University
The Impact of Software Configuration on Power Consumption in Desktop Computing Systems
 
Abstract: There has been increased emphasis on minimizing the power consumed by computing systems. This thesis explores the impact of various software configuration options on the amount of power consumed by commodity desktop computers. In particular, we focus on the impact of BIOS settings and the choice of operating system. We (i) present a detailed overview of the power management concepts defined by ACPI, list the possible tuning options in the BIOS settings, and give general configuration principles, from the perspective of power efficiency, for common desktop users; (ii) present quantitative measurements that isolate the common activities that consume the most power in a desktop computer system; and (iii) compare two popular desktop operating systems, Windows XP and Ubuntu Linux, in terms of their performance and power characteristics. We find that (i) the BIOS configuration options can significantly affect power consumption; (ii) the CPU and the hard drive are the largest power consumers in a commodity desktop, and network communication and graphics processing are the most power-consuming hardware activities; and (iii) Windows XP achieves lower hard drive bandwidth than Ubuntu Linux, and to finish the same task, Windows needs more power because of its longer processing time.
 
Friday, August 22, 2008, 10:00a.m. - Science Center Room 344
Cheng Wang
Department of Computer Science, Clarkson University
An Empirical Study of Function Overloading in C++
 
Abstract: The usefulness and usability of programming tools (for example, languages, libraries, and frameworks) may greatly impact programmer productivity and software quality. Ideally, these tools should be designed to be both useful and usable. However, in reality, there always exist some tools or features whose essential characteristics can be fully understood only after they have been extensively used. The study described in this paper is focused on discovering how C++’s function overloading is used in production code using an instrumented g++ compiler. Our principal findings for the system studied are that the most ‘advanced’ subset of function overloading tends to be defined in only a few utility modules, which are probably developed and maintained by a small number of developers; that the majority of application modules use only the ‘easy’ subset of function overloading; and that most overloaded names are used locally within modules rather than across module interfaces.
 
Monday, April 21, 2008, 2:00p.m. - Science Center Room 346
Yuefeng Tang
Department of Computer Science, Clarkson University
Interpolation Algorithms in Program Verification
 
Abstract: Recently, interpolants have been used as an important part of program verification. In this thesis, we propose two different procedures for interpolant generation. The first method, based on efficient rewriting methods, constructs interpolants for linear inequalities over the reals, and it does not require the use of combination procedures. The interpolant is constructed in such a way that it reflects the structure of the rewrite proof. The second method, built on the linear arithmetic solver of Yices (an SMT solver), constructs interpolants for linear inequalities over the reals or the integers, and it is designed to work well in an SMT theorem prover. We have implemented this method and give experimental results.
 
Thursday, April 3, 2008, 11:00a.m. - Snell Hall: Peterson Board Room
James Owens
Department of Computer Science, Clarkson University
A Study of Passwords and Methods Used in Brute-Force SSH Attacks
 
Abstract: In its Top-20 Security Risks report for 2007, the SANS Institute called brute-force password guessing attacks against SSH, FTP and Telnet servers “the most common form of attack to compromise servers facing the Internet.” Another recent study also suggests that Linux systems may play an important role in the command and control systems for botnets. Defending against brute-force SSH attacks may therefore prove to be a key factor in the effort to disrupt these networks. We report on a study of brute-force SSH attacks observed on three very different networks: an Internet-connected small business network, a residential system with a DSL Internet connection, and a university campus network. The similarities observed in the methods used to attack these disparate systems are quite striking. The evidence suggests that many brute-force attacks are based on pre-compiled lists of usernames and passwords, which are widely shared. We were able to confirm the existence of one such pre-compiled list after it was discovered in an SSH attack toolkit captured in a related honeypot project.  Analysis of the passwords used in actual malicious SSH traffic suggests that the common understanding of what constitutes a strong password may no longer be sufficient to protect systems from compromise. Study data are also used to evaluate the effectiveness of a variety of techniques designed to defend against these attacks.
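
One defense in the spirit of those the study evaluates can be sketched in a few lines: a fail2ban-style counter that blocks an address after repeated failures within a window. The log format and thresholds here are assumptions, not details from the talk.

    import re, time
    from collections import defaultdict

    FAIL_RE = re.compile(r'Failed password for .* from (\d+\.\d+\.\d+\.\d+)')
    WINDOW, THRESHOLD = 600, 5          # seconds, failed attempts

    failures = defaultdict(list)
    blocked = set()

    def process_log_line(line, now=None):
        now = time.time() if now is None else now
        m = FAIL_RE.search(line)
        if not m or m.group(1) in blocked:
            return
        ip = m.group(1)
        failures[ip] = [t for t in failures[ip] if now - t < WINDOW] + [now]
        if len(failures[ip]) >= THRESHOLD:
            blocked.add(ip)             # here one would add a firewall rule
            print('blocking', ip)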
 
Friday, March 14, 2008, 12:00p.m. - Science Center Room 356
Joachim S. Stahl
Department of Computer Science, University of South Carolina
Image Segmentation with Open Boundaries Capability Using Feature Maps
 
Abstract: Edge grouping methods aim at segmenting salient structures in noisy images. These methods are an intermediate step in computer vision and have applications in medical imaging, surveillance, and content-based image retrieval. In this presentation a new edge-grouping method is introduced that exhibits several useful properties. First, it combines both boundary and region information by defining a unified grouping cost. The region information of the desirable structures is included as a binary feature map of the same size as the input image. Second, it finds the globally optimal solution of this grouping cost by extending a prior graph-based algorithm. Third, it can detect both closed boundaries, where the structure of interest lies completely within the image perimeter, and open boundaries, where the structure of interest is cropped by the image perimeter. Given this capability for detecting both open and closed boundaries, the proposed method can be extended to segment an image into disjoint regions in a hierarchical way.
 
Thursday, March 13, 2008, 3:00p.m. - Science Center Room 342
Kaiqi Xiong
Department of Computer Science, North Carolina State University
Attack-Resilient State Estimation in Power Grids
 
Abstract: A power grid is a complex system connecting electric power generators to consumers through power transmission and distribution networks across a large geographical area.  State estimation is used to understand the state of power grids based on readings of sensors placed at important power grid components.  Current state estimation approaches are highly vulnerable to malicious attacks; an attacker can compromise one or a few sensors to mislead state estimation and thus the power grid control algorithms, leading to catastrophic consequences (e.g., a large-scale blackout and nuclear meltdown).  In this talk, I will present a series of attack-resilient state estimation algorithms for power grids.  These algorithms use the intrinsic relationship among the state variables and the sensor measurements to effectively tolerate a large portion of malicious sensor readings. I will further discuss my investigation of the properties of these algorithms through theoretical analysis and simulation, which both demonstrate the effectiveness of the proposed approaches. I will also give a brief overview of my other research, including public key-based group authentication, Denial of Service (DoS) attack mitigation and secure localization in sensor networks, and network bandwidth provisioning in access networks, and outline my future work.
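
The core estimation idea can be made concrete with a hedged numpy sketch of the classical residual-based approach: estimate the state by least squares, discard the measurement with the largest residual as a suspected bad or malicious reading, and re-estimate. The measurement matrix, noise levels, and thresholds below are toy assumptions, not the speaker's algorithms.

    import numpy as np

    rng = np.random.default_rng(0)
    n_state, n_meas = 3, 8
    H = rng.normal(size=(n_meas, n_state))      # measurement matrix
    x_true = np.array([1.0, -2.0, 0.5])
    z = H @ x_true + 0.01 * rng.normal(size=n_meas)
    z[2] += 5.0                                 # one compromised sensor reading

    meas = list(range(n_meas))
    for _ in range(2):                          # tolerate up to two bad readings
        x_hat, *_ = np.linalg.lstsq(H[meas], z[meas], rcond=None)
        residuals = np.abs(H[meas] @ x_hat - z[meas])
        worst = int(np.argmax(residuals))
        if residuals[worst] < 0.1:              # everything consistent: stop
            break
        del meas[worst]                         # drop the suspect measurement

    print('estimate:', x_hat)                   # close to x_true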
 
Wednesday, March 12, 2008, 4:00p.m. - CAMP 176 (Co-sponsored with Electrical and Computer Engineering)
Zhenlin Wang
Department of Computer Science, Michigan Technological University
Modeling and Predicting Program Locality
 
Abstract: Given that the memory locality of an application is often the dominant factor in determining its overall performance, much research has been done to explore and exploit the temporal and spatial reuse characteristics of programs.  In this talk, I will present recent findings in our research group that advance our understanding of program memory behavior. This research has been supported by NSF through a CAREER Award. I'll first present a model that predicts temporal locality on a per-instruction basis. This model offers us a quantitative measure of temporal locality. I'll then discuss a unified model that integrates prediction for both temporal and spatial reuses in a quantitative manner. Specifically, given an instruction that references a memory location, we predict the memory locations to which the current reference is spatially correlated and when, in terms of reuse distance, those memory locations will be accessed in the future. This memory locality model will help us design new compiler optimizations and architectures to improve memory system performance. In this talk, I will share a couple of applications of this model. First, I will describe how to use it to predict critical instructions that cause most cache misses. This prediction pinpoints the locations in a program that need attention. I'll further introduce a new prefetcher, which uses the predicted temporal and spatial reuse patterns to guide hardware prefetching, and show that this prefetcher is able to compete with a state-of-the-art dynamic scheme.

Bio: Dr. Zhenlin Wang is an assistant professor in the Department of Computer Science of Michigan Technological University. His research interests are broadly in the areas of compilers, computer architecture, and operating systems. In particular, he is interested in modeling and predicting program memory behavior, building compilers for multi-core systems, and designing cooperative memory management schemes for virtual machines. Professor Wang received his B.S. and M.S. in Computer Science from Beijing University in 1992 and 1995, respectively, and his Ph.D. in Computer Science from the University of Massachusetts, Amherst, in 2004. He is an NSF CAREER Award recipient.
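
Reuse distance, the quantity these models predict, is simple to state in code: for each access, count the distinct addresses touched since the previous access to the same address. A small illustrative sketch (the trace is a toy):

    def reuse_distances(trace):
        stack = []    # LRU stack of addresses, most recently used at the end
        out = []
        for addr in trace:
            if addr in stack:
                out.append(len(stack) - 1 - stack.index(addr))
                stack.remove(addr)
            else:
                out.append(None)    # cold miss: infinite reuse distance
            stack.append(addr)
        return out

    print(reuse_distances(['a', 'b', 'c', 'a', 'b', 'b']))
    # [None, None, None, 2, 2, 0]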
 
Tuesday, March 11, 2008, 3:00p.m. - Science Center 342
Wensheng Wu
IBM Almaden Research Center, San Jose, California
Schema Matching for Large-Scale Data Integration
 
Abstract: A data integration system provides uniform access to a set of data sources. Such a system largely facilitates the retrieval of information from multiple sources. An important task in building a data integration system is schema matching: discovering semantic correspondences, or mappings, among the elements of different schemas. While schema matching has been studied for decades, the scalability issue has not received much attention until recently. In particular, there are two dimensions of scale: the number of elements in a schema, and the number of schemas to be matched. In this talk, I will first present a solution for discovering the topical structures of databases to support a divide-and-conquer approach to matching large schemas.  I will then describe a clustering-based algorithm for matching a large number of schemas. I will also briefly describe a new search-driven matching paradigm for focused integration when a complete integration is not required. Finally, I will discuss future directions and conclude the talk.
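
A hedged sketch of the simplest ingredient of such matchers, scoring element pairs by name-token overlap and pairing off the best matches greedily (real matchers also use data types, instances, and structure; these schemas are toys):

    def tokens(name):
        return set(name.lower().replace('_', ' ').split())

    def jaccard(a, b):
        ta, tb = tokens(a), tokens(b)
        return len(ta & tb) / len(ta | tb)

    s1 = ['contact_name', 'phone_number', 'zip_code']
    s2 = ['name', 'phone', 'postal code']

    pairs = sorted(((jaccard(a, b), a, b) for a in s1 for b in s2),
                   reverse=True)
    used1, used2 = set(), set()
    for score, a, b in pairs:                 # greedy one-to-one matching
        if score > 0 and a not in used1 and b not in used2:
            print('%-13s <-> %-12s %.2f' % (a, b, score))
            used1.add(a); used2.add(b)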
 
Tuesday, November 13, 2007, 10:00a.m. - Petersen Board Room, Snell Hall
Scott Aaronson
Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology
Computational Intractability as a Law of Physics
 
Abstract: Some of the most basic principles in physics can be seen as limits on technology: for example, the Second Law of Thermodynamics and the impossibility of superluminal signaling.  In this talk, I'll ask whether the hardness of NP-complete problems would likewise be useful to assume as a physical principle.  In studying this question, I'll touch on the computational implications of living in hypothetical universes with closed timelike curves, a nonlinear Schrödinger equation, particular many-particle entangled states left over from the Big Bang, or the ability to 'postselect' on quantum measurement outcomes.  Perhaps surprisingly, I'll show that one can make definite, nontrivial statements about what computational problems could and couldn't be efficiently solved in each of these universes.      
 
Friday, October 19, 2007, 3:00p.m. - SC 348
Siva Anantharaman
University of Orleans, France, Department of Computer Science
A Clausal View for Access Control and the Evaluation of XPath Queries
 

Abstract: More and more information is stored in XML and other hierarchical data formats, and query languages such as XPath are used to query this data. We study how XPath queries can be restricted by access control mechanisms to protect confidential or restricted data in XML documents. By expressing both the behavior of the query, and the rules restricting access, as constrained first-order clauses, we arrive at an efficient and powerful way of combining queries on documents and restrictions on access to those documents. In this way, our method gives users the best answers that do not reveal confidential information.
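
A toy illustration of the access-control idea (my sketch, using the lxml library rather than the clausal machinery of the talk): the user's XPath query is answered only after conjoining the policy as a predicate, so restricted nodes never appear in answers.

    from lxml import etree

    doc = etree.fromstring(
        '<patients>'
        '<patient><name>Ann</name><diagnosis restricted="yes">X</diagnosis></patient>'
        '<patient><name>Bob</name><diagnosis>Y</diagnosis></patient>'
        '</patients>')

    user_query = '//diagnosis'
    # Policy: only nodes not marked restricted may be revealed.
    safe_query = user_query + "[not(@restricted='yes')]"

    print([e.text for e in doc.xpath(safe_query)])   # ['Y']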

 
Tuesday, September 25, 2007, 1:00p.m. - SC 348
Bing Li
Clarkson University, Department of Computer Science
Optimized Layout on the Sphere of Dynamically Changing Planar Graphs
 
Abstract: We implemented an algorithm to embed a planar graph on the surface of a sphere, and to modify this embedding as edges are removed from the graph.  This relates to work by fellow student Rui Huang, which generates a random planar graph by randomly inserting and deleting edges.

To embed a random planar graph without distortion, we embed the graph on the surface of a sphere rather than on the Euclidean plane.  We find an optimized layout by minimizing an energy function, which is designed to penalize distortions by assigning them a high energy.  We experimented with different algorithms for finding the minimum energy.  As the graph changes, the energy function changes, and we must efficiently adjust the layout to minimize the new energy function.
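
A hedged sketch of such a layout loop, using a spring-electric energy (attraction along edges, repulsion between all vertex pairs) minimized by projected gradient descent. The particular energy and step size are my assumptions, not necessarily the ones used in this work.

    import numpy as np

    def layout_on_sphere(n, edges, steps=500, lr=0.05):
        rng = np.random.default_rng(1)
        P = rng.normal(size=(n, 3))
        P /= np.linalg.norm(P, axis=1, keepdims=True)   # start on the sphere
        for _ in range(steps):
            grad = np.zeros_like(P)
            for i, j in edges:                          # attraction on edges
                grad[i] += P[i] - P[j]
                grad[j] += P[j] - P[i]
            for i in range(n):                          # repulsion everywhere
                d = P[i] - P
                dist2 = (d ** 2).sum(axis=1) + 1e-9
                grad[i] -= (d / dist2[:, None]).sum(axis=0)
            P -= lr * grad
            P /= np.linalg.norm(P, axis=1, keepdims=True)  # project back
        return P

    print(layout_on_sphere(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))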
 
Thursday, August 23, 2007, 10:00a.m. - Snell 129
Wenjin Hu
Clarkson University, Department of Computer Science
A Study of the Performance Isolation Properties of Virtualization Systems
 
Abstract: In recent years, there have been a number of papers comparing the performance of different virtualization environments for x86. However, one important aspect of comparing virtualization systems is often not quantified – the degree to which they limit the impact of a misbehaving virtual machine on other virtual machines. Quantifying performance isolation is challenging because the impact can vary based on the type of “misbehavior”. Nevertheless, performance isolation is an essential feature especially in commercial service hosting environments. I present the design of a performance isolation benchmark that quantifies the impact on well-behaved VMs of various categories of extreme resource consumption in a misbehaving VM. I describe the design of our benchmark suite and present results of testing three flavors of virtualization – VMware (an example of full virtualization), Xen (an example of paravirtualization) and Solaris containers (an example of operating system level virtualization). Our results highlight the difference between these classes of virtualization systems as well as the importance of considering multiple categories of resource consumption when evaluating the performance isolation properties of a virtualization system.
 
Thursday, August 23, 2007, 2:00p.m. - SC 354
Rui Huang
Clarkson University, Department of Computer Science
An Optimized Triconnected Component Decomposition for the Generation of Random Labeled Planar Graphs
 
Abstract:
An algorithm for producing random labeled planar graphs uniformly from the set of all such graphs was proposed by Denise et al.  This algorithm performs a random walk through the set of all labeled planar graphs with a fixed number of nodes by randomly adding and removing edges.  We implement their algorithm by creating an optimized data structure for planar graphs that keeps track of all the ways the graph can be embedded (drawn) in the plane.  There was no previously available open-source implementation of this data structure, and our implementation is optimized for high performance on large planar graphs dominated by a single large triconnected component.  This allows our implementation to handle graphs that are orders of magnitude larger than previous work.
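
The walk step itself is compact. A hedged sketch using networkx's planarity test in place of the optimized triconnected-component structure described above: toggle a random vertex pair's edge, rejecting additions that would break planarity (toggling is symmetric, so the uniform distribution is stationary).

    import random
    import networkx as nx

    def planar_walk_step(G):
        u, v = random.sample(list(G.nodes), 2)
        if G.has_edge(u, v):
            G.remove_edge(u, v)          # removals always preserve planarity
        else:
            G.add_edge(u, v)
            if not nx.check_planarity(G)[0]:
                G.remove_edge(u, v)      # reject: stay at the current graph

    G = nx.empty_graph(8)                # 8 labeled vertices, no edges
    for _ in range(1000):
        planar_walk_step(G)
    print(G.number_of_edges(), 'edges; planar:', nx.check_planarity(G)[0])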
 
Monday, March 19, 2007, 10:00 a.m. - Snell212
Leslie Raju Cherian
Clarkson University, Department of Computer Science
A Study of the TCP Congestion Control Variables in the Linux Kernel
 
Abstract:
The Transmission Control Protocol (TCP), a transport-layer protocol, allows data to be transferred from one computer to another over the network. It guarantees reliable, in-order data delivery. It achieves this by making sure that it does not flood the network or the receiver with more data than they can handle at a time. The various algorithms that control how the sender achieves this are explained in this thesis. We then examine how these algorithms are implemented in the Linux kernel today. Does the implementation follow the theory closely? If not, are the changes significant? Did the changes make the implementation more efficient? There were a number of departures of the code from the accepted literature, and not all of these changes were efficient. In this thesis, I also propose possible ways to make these implementations more efficient.
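
For reference, the textbook sender behavior that the thesis compares against the kernel code can be sketched in a few lines (window sizes in segments; the loss event is simulated):

    def on_ack(cwnd, ssthresh):
        if cwnd < ssthresh:
            return cwnd + 1            # slow start: +1 per ACK, doubling per RTT
        return cwnd + 1.0 / cwnd       # congestion avoidance: +1 per RTT

    def on_loss(cwnd):
        new_ssthresh = max(cwnd / 2, 2)    # multiplicative decrease
        return new_ssthresh, new_ssthresh  # cwnd restarts at the new threshold

    cwnd, ssthresh = 1.0, 64.0
    for i in range(200):
        cwnd = on_ack(cwnd, ssthresh)
        if i == 120:                   # a simulated congestion loss
            cwnd, ssthresh = on_loss(cwnd)
    print('cwnd=%.1f ssthresh=%.1f' % (cwnd, ssthresh))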
 
Wednesday, February 7, 2007, 4:00-5:00 - CAMP 176 - co-sponsored with Electrical and Computer Engineering
Thomas Hemker
Department of Computer Science, TU Darmstadt
Surrogate Optimization for Mixed-Integer Nonlinear Problems in Engineering Applications of Black-Box Type
Abstract: Simulation-based optimization is becoming increasingly important in engineering applications, but the software typically employed for computational modeling and simulation of complex engineering problems has been developed over many years and has usually not been designed to meet the specific needs of optimization methods. Underlying iterative algorithms, approximations of tabular data, and the like additionally result in very low smoothness of the objective function.
  
Thus, non-smooth optimization problems of black-box type arise. If, in addition to continuous real-valued variables, discrete integer-valued optimization variables must also be considered, only a few optimization methods based solely on evaluations of the objective function can be applied, apart from computationally expensive random-search methods.
  
In this talk we present a surrogate optimization approach for solving such problems, based on sequentially improved stochastic approximations. Numerical results for the proposed approach will be discussed for electromagnetic design problems from electrical engineering, groundwater management problems from civil engineering, and walking-speed optimization problems for humanoid robots.
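
A hedged sketch of the surrogate loop for a black-box objective with one continuous and one integer variable: fit a cheap model to the points evaluated so far, optimize the model, evaluate the expensive function there, and refit. The inverse-distance-weighted surrogate and toy objective are my assumptions, not the method presented in the talk.

    import numpy as np

    def expensive(x, k):    # stands in for a costly simulation
        return (x - 1.5) ** 2 + (k - 2) ** 2 + 0.1 * np.sin(5 * x)

    rng = np.random.default_rng(0)
    X = [(rng.uniform(0, 3), int(rng.integers(0, 5))) for _ in range(6)]
    y = [expensive(x, k) for x, k in X]

    for _ in range(20):
        def surrogate(x, k):    # inverse-distance interpolation of samples
            d = np.array([np.hypot(x - xi, k - ki) for xi, ki in X]) + 1e-9
            w = 1.0 / d ** 2
            return float(np.dot(w, y) / w.sum())
        cands = [(surrogate(x, k), x, k)    # cheap dense search on the model
                 for x in np.linspace(0, 3, 61) for k in range(5)]
        _, x_new, k_new = min(cands)
        X.append((x_new, k_new))
        y.append(expensive(x_new, k_new))   # one true evaluation per iteration

    best_y, best_point = min(zip(y, X))
    print('best value %.3f at %s' % (best_y, best_point))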
 
Friday, January 5, 2007, 10:00 a.m. - Snell 175
Sailaja Yagnavajhala
Clarkson University, Department of Computer Science
Analysis of the Retransmission Timer of TCP
 
Abstract:
The TCP/IP protocol suite used for the Internet includes TCP as its transmission protocol, among other protocols. For reliable, timely delivery of data, TCP maintains a timer called the retransmission timer, or timeout. The timeout value is one of the performance factors that affect TCP: setting it incorrectly leads either to an underutilized connection or to unnecessary retransmissions. The retransmission timer computation algorithm has undergone various changes over time. In this thesis, we analyze the present, widely accepted retransmission timer algorithm under different network conditions in a Linux environment.
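
The widely accepted algorithm analyzed here is the one standardized in RFC 2988 (now RFC 6298): Jacobson's smoothed RTT plus a variance term. A runnable sketch with the RFC's constants:

    class RtoEstimator:
        ALPHA, BETA, G = 1 / 8, 1 / 4, 0.1    # RFC gains; clock granularity (s)

        def __init__(self):
            self.srtt = None

        def update(self, r):                  # r: a measured RTT sample (s)
            if self.srtt is None:             # first measurement
                self.srtt, self.rttvar = r, r / 2
            else:
                self.rttvar = (1 - self.BETA) * self.rttvar \
                              + self.BETA * abs(self.srtt - r)
                self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * r
            return max(1.0, self.srtt + max(self.G, 4 * self.rttvar))

    est = RtoEstimator()
    for sample in [0.10, 0.12, 0.30, 0.11]:
        print('RTO = %.3f s' % est.update(sample))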
 
Monday, November 20, 2006, 3:00 p.m. - Snell 175
Rui Huang
Clarkson University, Department of Computer Science
An Implementation of a Markov Chain Generating Random Labeled Planar Graphs
 
Abstract:
We are implementing a fully dynamic algorithm to add and remove edges from a planar graph. Our implementation maintains the decomposition of the planar graph into triconnected components as edges are changed. We use this data structure to simulate a Markov chain on labeled planar graphs that generates random labeled planar graphs, and to measure various statistical properties of these random labeled planar graphs. Because we are using this data structure only on random planar graphs, it is optimized for graphs with a single large triconnected component, and should run much faster than the general algorithm for arbitrary planar graphs, which takes O(n^(2/3)) time to add or remove an edge. There is currently no freely available software implementing this decomposition of planar graphs into triconnected components.
 
Thursday, April 13, 2006, 2:30 p.m. - Science Center 354
Amy Felty
School of Information Technology and Engineering, University of Ottawa
Applying Program Verification to Privacy in Data Mining
Abstract:
In today's society, people have very little control over what kinds of personal data are collected and stored by various agencies in both the private and public sectors. Moreover, the ability to infer new knowledge from existing data is increasing rapidly with advances in database and data mining technologies. We describe an approach to addressing this problem that allows individuals to specify constraints on the way their own data is used. We use program correctness methods to allow developers of software that processes personal data to provide assurances that the software meets the specified privacy constraints. Our notion of "privacy correctness" differs from general software correctness in two ways. First, properties of interest are simpler and thus their proofs are generally easier to automate.
 
Thursday, April 13, 2006, 11:00 a.m. - Science Center 348
Xunyang Shen
Clarkson University, Department of Computer Science
A Fast and Reliable Software Implementation of SLI-FLP Hybrid System
 

Abstract:
Symmetric level-index (SLI) arithmetic was introduced to overcome the problems of overflow and underflow in scientific computations. This thesis reports on progress in implementing SLI arithmetic, toward a practical framework for scientific computation. The implementation is programmed in C++ and combines the benefits of both floating-point (FLP) arithmetic and SLI arithmetic.

Following a general introduction to SLI arithmetic, the number representation scheme, the algorithms for the arithmetic operations, and some implementation details will be discussed. Based on the results of a systematic testing process and two real-life applications, we claim that the implementation is fast and reliable, and is ready to be used in practical computational problems.
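
The heart of the level-index representation is easy to sketch (my illustration of the scheme, not the thesis's C++ code): take logarithms until the value drops below 1, counting the levels; magnitudes below 1 are first reciprocated, which is the "symmetric" part.

    import math

    def to_sli(x):
        """Represent x > 0 as (sign, level, index) with index in [0, 1)."""
        assert x > 0
        sign = 1
        if x < 1:
            sign, x = -1, 1 / x    # symmetric: store the reciprocal instead
        level = 0
        while x >= 1:
            x = math.log(x)        # peel one exponential per level
            level += 1
        return sign, level, x

    def from_sli(sign, level, f):
        for _ in range(level):
            f = math.exp(f)
        return f if sign == 1 else 1 / f

    s, l, f = to_sli(1e300)
    print(l, f, from_sli(s, l, f))   # even 1e300 needs only level 4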

 
Monday, January 16, 2006, 9:30 a.m. - Science Center 344
Bin Zan
Clarkson University, Department of Computer Science
Low Energy and Reliable Transport Protocols in Wireless Sensor Networks
 

Abstract:
Wireless sensor networks (WSN) rely on the combined effort of many micro-sensor nodes to detect both spontaneous, on-the-fly events and persistent events in various parts of the network. It is therefore important to consider the fast and reliable detection of these events. In the first part of this thesis, we study two transport-layer protocols for wireless sensor networks, ESRT and PSFQ, in order to ascertain their reliability and congestion performance.

Grouping wireless sensor nodes into clusters to detect events is carried out for prolonged network lifetime, load balancing, and scalability. In the second part of this work, we propose a cluster-based, energy-aware event-detection scheme in which events are reliably relayed to the sink in the form of aggregated data packets. The clustering scheme provides faster and better event detection and reliability control in the areas of the network where an event is occurring. It also reduces the overhead, latency, and loss of event information due to cluster rotation. Simulations done in ns-2 show that the proposed scheme improves the system lifetime, achieves the desired reliability in event detection, and lowers energy consumption.

Wireless sensor and actor networks (WSAN) are comprised of heterogeneous sensor and actor nodes. A hierarchical topology of sensors, actors and sink(s) is considered for measurement and control of discrete events in industry. In the third part of this work, a novel multi-hop, reliable and scalable transport layer scheme is proposed for providing robust control and end-to-end reliable data delivery. The proposed scheme is very well distributed over the sink and the actor nodes, and requires minimum functionality at the resource constrained sensor nodes. Simulations are conducted in ns-2 to study the performance of the proposed schemes, in terms of achieving the desired event detection reliability, re-tasking efficiency, responsiveness to various error conditions, and better congestion avoidance and control.

 
Thursday, November 17, 2005, 2:30 p.m. - Science Center 354
David Mix Barrington
University of Massachusetts Amherst, Amherst, MA
Grid Graph Reachability Problems
 

Abstract:
A grid graph is a graph embedded in a two-dimensional square grid: think of Manhattan with some streets closed and others one-way. The grid graph reachability problem is to decide, given a grid graph, whether there is a path between any two given vertices s and t.

This problem is in the complexity class NL (problems solvable by nondeterministic logspace Turing machines) but is not known to be in L (logspace computable problems). In fact, if it is in L then the reachability problem for general planar graphs is in L.

By putting restrictions on the allowed graphs, we can simplify the problem. If we prohibit "one way streets", or choices of which direction to go, we can use simpler algorithms that require fewer resources. In this talk I will present upper and lower bounds for the complexity of algorithms for many of these problems, and some surprising reductions among the problems. The resulting framework offers a promising direction to study the relationship between L and NL, and between L and the class NC^1. This is joint work with Eric Allender, Tanmoy Chakraborty, Samir Datta, and Sambuddha Roy.
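
For readers unfamiliar with the problem, the baseline it starts from is plain BFS reachability on a directed grid ("Manhattan with one-way streets"); the talk is about resource bounds, and BFS's linear space is exactly what the restricted classes try to beat. A small sketch:

    from collections import deque

    def grid_reachable(open_moves, start, target):
        """open_moves: set of usable directed edges ((r,c),(r2,c2))."""
        seen, queue = {start}, deque([start])
        while queue:
            r, c = queue.popleft()
            if (r, c) == target:
                return True
            for nbr in [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]:
                if ((r, c), nbr) in open_moves and nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        return False

    edges = {((0, 0), (0, 1)), ((0, 1), (1, 1)), ((1, 1), (1, 0))}
    print(grid_reachable(edges, (0, 0), (1, 0)))   # True
    print(grid_reachable(edges, (1, 0), (0, 0)))   # False: one-way streets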

 
Wednesday, August 10, 2005, 1 p.m. - Science Center 307
Chad Homan
Department of Computer Science, Clarkson University
A Virtual Reality Environment that Uses an Intelligent Tutoring System to Teach the Fundamental Concepts of Operating a Power Wheelchair
Abstract:
This thesis describes the design and construction of a virtual environment in which an intelligent tutoring system (ITS) has been implemented. The intelligent tutoring system can be used to help students with physical and cognitive disabilities gain the primitive skills they need to operate a power wheelchair. The virtual environment itself consists of several rooms, each containing an exercise that helps students gain knowledge of power wheelchair operation. The intelligent tutoring system is a computer simulation in which the user can navigate through the virtual world and experience joystick and power wheelchair movements comparable to those of a real power wheelchair. In addition, the ITS provides the user with both real-time feedback and post-exercise data and suggestions on the student's actions within the simulation. The virtual reality ITS is implemented using the Virtools™ Dev 3.0 Software Suite, which incorporates C++ into a higher-level programming language.