CSE Technical Reports Sorted by Technical Report Number
|In this paper, the possibility of verifying pi-calculus processes via
Promela translation is investigated. A general method for translating
pi-calculus processes into Promela models is presented, and its usefulness
is demonstrated by performing verification tasks on translated pi-calculus
examples with SPIN. Model checking translated pi-calculus processes
is shown to overcome shortcomings of the Mobility Workbench, which uses
a theorem-proving-style mu-calculus model checking algorithm for the
pi-calculus.
|IP spoofing has been exploited by Distributed Denial of Service (DDoS) attackers to (1) conceal flooding sources and localities of flooding
traffic, and (2) coax uncompromised hosts into becoming reflectors,
redirecting and amplifying flooding traffic. Thus, the ability to filter
spoofed IP packets near victims is essential to their own protection
as well as to their avoidance of becoming involuntary DoS reflectors.
Although an attacker can forge any field in the IP header, he cannot
falsify the number of hops an IP packet takes to reach its destination.
This hop-count information can be inferred from the Time-to-Live (TTL)
value in the IP header. Based on this observation, we propose a novel
filtering technique for Internet servers to winnow away spoofed IP
packets. By clustering address prefixes based on hop-counts, Hop-Count
Filtering (HCF) builds an accurate IP to hop-count (IP2HC) mapping table
to detect and discard spoofed IP packets. Through analysis using network
measurement data, we show that HCF can identify and then discard close
to 90% of spoofed IP packets with little collateral damage. We implement
and evaluate HCF in the Linux kernel, demonstrating its benefits experimentally.
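The hop-count inference described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the list of common initial TTL values and the prefix-keyed table layout are assumptions.

```python
# Assumption: senders use one of a handful of common OS-default initial TTLs.
COMMON_INITIAL_TTLS = (30, 32, 60, 64, 128, 255)

def infer_hop_count(observed_ttl: int) -> int:
    """Hop count = initial TTL minus observed TTL, taking the smallest
    common initial TTL that is >= the observed value."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def looks_spoofed(src_prefix: str, observed_ttl: int, ip2hc: dict) -> bool:
    """Flag a packet whose inferred hop count disagrees with the stored
    IP2HC mapping for its source address prefix."""
    expected = ip2hc.get(src_prefix)
    return expected is not None and infer_hop_count(observed_ttl) != expected
```

For example, a packet arriving with TTL 49 is assumed to have started at 60, giving an inferred hop count of 11; if the IP2HC table records a different hop count for that prefix, the packet is flagged.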
|Group security protocols have primarily focused on the three major security problems -- authentication, confidentiality and integrity --
and overall these problems have been well addressed. However, the
problem of traffic analysis, in which an attacker monitors messages and
extracts useful information from protocol header fields as well as the
size, number, or timing of packets, has received little attention.
In this paper, we discuss these problems from the point of view of
anonymity -- privacy of the group's identity and
membership. We present modifications to secure group communication
protocols that provide anonymity to groups. Our scheme lets only
members of group g know whether or not a packet is for group g. We
also present an authenticated and private key-recovery scheme that
allows only group members to detect missed rekey messages from
subsequent group messages. We discuss the security guarantees our
solutions provide, analyze the performance costs and discuss several
performance optimizations. We show that adequate performance can be
achieved for many applications, including protecting traffic on
broadcast-based networks (such as Ethernets) from traffic analysis.
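A minimal sketch of the membership-test property described above, using a keyed MAC over a per-packet nonce: only holders of the group key can recognize a packet as belonging to the group. The function names and the 16-byte-nonce/8-byte-tag layout are assumptions for illustration, not the paper's protocol.

```python
import hmac, hashlib, os

def tag_packet(group_key: bytes, payload: bytes) -> bytes:
    """Prepend a fresh random nonce and a truncated MAC over it; the tag
    is unlinkable to the group without group_key."""
    nonce = os.urandom(16)
    tag = hmac.new(group_key, nonce, hashlib.sha256).digest()[:8]
    return nonce + tag + payload

def is_for_my_group(group_key: bytes, packet: bytes) -> bool:
    """A member recomputes the MAC over the nonce and compares in
    constant time; non-members see only random-looking bytes."""
    nonce, tag = packet[:16], packet[16:24]
    expected = hmac.new(group_key, nonce, hashlib.sha256).digest()[:8]
    return hmac.compare_digest(tag, expected)
```

Because the nonce is fresh per packet, two packets for the same group carry unrelated tags, which is what frustrates traffic analysis based on header fields.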
|Two ASs are connected in the Internet AS graph only if they have a business ``peering relationship.'' By focusing on the
AS subgraph AS_PC whose links represent provider-customer relationships,
we present an empirical study that identifies three crucial causal
forces at work in the design of AS connectivity:
(i) AS-geography, i.e., locality and number of PoPs within individual ASs;
(ii) AS-specific business models, abstract toy models that describe
how individual ASs choose their ``best'' provider; and
(iii) AS evolution, a historic account of the ``lives'' of
individual ASs in a dynamic ISP market. Based on these findings
that directly relate to how provider-customer relationships may
be determined in the actual Internet, we develop a new
optimization-driven model for Internet growth at the AS_PC level.
Its defining feature is an explicit construction of a novel class of
intuitive, multi-objective, local optimizations by which
the different ASs determine in a fully distributed and
decentralized fashion their ``best'' upstream provider. We show
that our model is broadly robust, perforce yields graphs that match
inferred AS connectivity with respect to many different metrics,
and is ideal for exploring the impact of new peering incentives or
policies on AS-level connectivity.
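A toy sketch of the kind of local, multi-objective provider choice described above: each AS ranks candidate upstream providers by a weighted combination of objectives. The objectives (price, distance) and the weights are assumptions for illustration, not the paper's actual optimization.

```python
def choose_provider(candidates, alpha=1.0, beta=1.0):
    """candidates: list of (name, price, distance) tuples for candidate
    upstream providers. Returns the name minimizing the weighted
    objective alpha*price + beta*distance -- a crude stand-in for a
    multi-objective local optimization made by one AS in isolation."""
    return min(candidates, key=lambda c: alpha * c[1] + beta * c[2])[0]
```

Varying the weights models different AS-specific business models: a price-sensitive AS (large `alpha`) and a geography-sensitive AS (large `beta`) can pick different ``best'' providers from the same candidate set.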
|Optimal behavior is a very desirable property of autonomous agents and, as such, has received much attention over the years. However, making optimal decisions and executing optimal actions typically requires a substantial effort on the part of an agent, and in some situations the agent might lack the necessary sensory, computational, or actuating resources to carry out the optimal policy. In such cases, the agent will have to do the best it can, given its architectural constraints. We distinguish between three ways in which an agent's architecture can affect policy optimality. An agent might have limitations that impact its ability to formulate, operationalize (convert to internal representation), or execute an optimal policy. In this paper, we focus on agents facing the latter two types of limitations. We adopt the Markov decision problem framework in our search for optimal policies and show how gradations of increasingly constrained agent architectures create correspondingly more complex optimization problems, ranging from polynomial-time solvable to NP-complete. We also present algorithms based on linear and integer programming that work across a range of such constrained optimization problems.
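The Markov decision problem framework mentioned above can be illustrated with a minimal value-iteration sketch; the paper's own algorithms are based on linear and integer programming, so this is only an unconstrained baseline. The transition/reward encoding is an assumption for illustration; a constrained agent would, roughly, restrict the action sets available at each state.

```python
def value_iteration(P, R, gamma=0.9, iters=200):
    """P[s][a] = list of (prob, next_state); R[s][a] = immediate reward.
    Returns the greedy (optimal, unconstrained) policy and state values."""
    n = len(P)
    V = [0.0] * n
    for _ in range(iters):
        # Bellman optimality backup over all states
        V = [max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                 for a in range(len(P[s])))
             for s in range(n)]
    policy = [max(range(len(P[s])),
                  key=lambda a, s=s: R[s][a]
                  + gamma * sum(p * V[t] for p, t in P[s][a]))
              for s in range(n)]
    return policy, V
```

On a two-state example where action 1 in state 0 earns reward 1 and moves to an absorbing zero-reward state, the computed policy takes action 1 in state 0.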
|This paper presents the results from a detailed, experimental study of OSPF, an intra-domain routing protocol, running on a mid-size regional Internet
service provider. Using multiple, distributed probes running custom
monitoring tools, we collected continuous protocol information for a full
year. We use this data to analyze the health of the network, including the
amount, source, duration, and periodicity of routing instability. We found
that information from external routing protocols produces significant levels
of instability within OSPF. We also examine the evolution of the routing
topology over time, showing that short-term changes are incremental and that
the long-term trend shows constant change. Finally, we present a set of
detailed investigations into several large scale anomalies. These anomalies
demonstrate the significant impact external routing protocols have on OSPF.
In addition, they highlight the need for new network management tools that
can incorporate information from routing protocols.
|Client emulation tools play a central role in the performance evaluation, capacity planning, and workload characterization of servers. However, traditional emulation tools, because of their limited scalability and extensibility, fail to keep up with servers as they increase in complexity and performance. In this paper, we propose Eve, a scalable client emulation tool that is capable of stress-testing powerful servers. Eve relies on an open and modular architecture that provides a simple and extensible programming environment. By incorporating I/O call handling into its user-thread library, Eve is capable of simultaneously emulating thousands of clients. Furthermore, Eve uses a distributed shared variable (DSV) core to facilitate communication between different clients, thus enhancing scalability to multiple machines.
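The thousands-of-clients-per-process idea can be sketched with an event loop in place of one OS thread per client; this is a minimal illustration, not Eve's API or architecture.

```python
import asyncio

async def client(cid, results):
    """One emulated client. A real emulator would issue socket I/O here;
    the sleep(0) simply yields control back to the event loop."""
    await asyncio.sleep(0)
    results.append(cid)

async def emulate(n):
    """Run n emulated clients concurrently in a single process."""
    results = []
    await asyncio.gather(*(client(i, results) for i in range(n)))
    return results

# results = asyncio.run(emulate(1000))  # 1000 concurrent clients, one process
```

Intercepting blocking I/O calls inside a user-thread library, as the abstract describes, serves the same purpose as the explicit `await` points here: one blocked client must not stall the thousands of others.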
|In a wide variety of embedded control applications, it is often the energy resources that form the fundamental limits on the system, not
the system's computing capacity. Various techniques have been
developed to improve energy efficiency in hardware, such as Dynamic
Voltage Scaling (DVS), effectively extending the battery life of these
systems. However, a comprehensive mechanism of task adaptation is
needed in order to make the best use of the available energy
resources, even in the presence of DVS or other power-reducing
mechanisms. Further complicating this are the strict timeliness
guarantees required by real-time applications commonly found in
such systems.
This paper develops a new framework called Energy-aware Quality
of Service (EQoS) that can manage real-time tasks and adapt their
execution to maximize the benefits of their computation for a limited
energy budget. The concept of an adaptive real-time task and the
notion of utility, a measure of the benefit or value gained from their
execution, are introduced. Optimal algorithms and heuristics are
developed to maximize the utility of the system for a desired system
runtime and a given energy budget, and then extended to optimize
utility without regard to runtime. We demonstrate the effects of DVS
on this system and how EQoS in conjunction with DVS can provide
significant gains in utility for fixed energy budgets. Finally, we
evaluate this framework through both simulations and experimentation
on a working implementation.
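The core trade-off above, choosing a service level per task so as to maximize total utility within an energy budget, has the shape of a multiple-choice knapsack. A minimal dynamic-programming sketch under that assumption (integer energy units, one level chosen per task) is shown below; it is an illustration of the problem, not the paper's algorithms.

```python
def max_utility(tasks, budget):
    """tasks: list of task descriptors, each a list of (energy, utility)
    service levels; exactly one level must be chosen per task.
    budget: total energy budget in integer units.
    Returns the maximum total utility achievable within the budget."""
    NEG = float("-inf")
    best = [NEG] * (budget + 1)   # best[e] = max utility using energy e
    best[0] = 0.0
    for levels in tasks:
        nxt = [NEG] * (budget + 1)
        for e_used in range(budget + 1):
            if best[e_used] == NEG:
                continue
            for energy, utility in levels:
                if e_used + energy <= budget:
                    nxt[e_used + energy] = max(nxt[e_used + energy],
                                               best[e_used] + utility)
        best = nxt
    return max(best)
```

With two tasks offering a cheap low-utility level and a costly high-utility level each, the DP picks the one task worth upgrading when the budget only covers a single upgrade.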
|By using a columnar subnetwork of continuous-time sigmoid activation units as a building block, hierarchical neural networks can be constructed that serve as cognitive models of algorithmic behavior, including goal-directed problem solving. Previous work by Polk et al. (Cognitive Brain Research, 2002) demonstrated the power of a hierarchical composition of local attractor networks for modeling problem solving in the Tower of London task, but critical timing issues were addressed by non-neural components. The essential function of the column structure proposed here for addressing timing issues is to produce a controllable propagation delay in the signal from a column's input unit to its output unit. Lateral inhibition between input units and between output units in different columns forms layered decision-making modules. These modules use winner-take-all attractor dynamics to compute the results of simple if-then rules applied to the feedforward outputs of other modules. Strong recurrent excitation in multiple layers of these columnar modules produces activation-based short-term memory that preserves the symbolic results of these computations during algorithmic processing. We show that propagation delay between layers allows the application of timing methods from digital sequential circuit design that solve the timing problems inherent in the existing Tower of London model without relying on discrete-time updating or binary values.
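The winner-take-all dynamics described above can be sketched with a forward-Euler integration of a layer of sigmoid units under lateral inhibition. The gain, inhibition strength, and step size are assumptions for illustration, not the model's parameters.

```python
import math

def winner_take_all(inputs, steps=200, dt=0.05, inhibition=2.0):
    """Synchronously integrate da_i/dt ~ sigmoid(input_i - inhibition *
    sum of other activations) - a_i; with distinct inputs, the unit
    receiving the largest input ends up most active."""
    n = len(inputs)
    a = [0.5] * n                     # initial activations
    for _ in range(steps):
        new_a = []
        for i in range(n):
            drive = inputs[i] - inhibition * sum(a[j] for j in range(n) if j != i)
            target = 1.0 / (1.0 + math.exp(-4.0 * drive))  # gain of 4 (assumed)
            new_a.append(a[i] + dt * (target - a[i]))
        a = new_a
    return a.index(max(a))
```

Because the update is monotone in the drive and all units start equal, the ordering of activations follows the ordering of inputs throughout the run, which is the attractor behavior the decision-making modules rely on.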
|Mobile computing platforms are performing increasingly complex and computationally intensive tasks. To help lengthen useful battery
life, these platforms often incorporate some form of hardware
power-down that is controlled by the system software. Unfortunately, these
mechanisms often incur substantial transition latencies when switching
between power-down and active states, making them difficult to use in
time-critical embedded systems.
This paper introduces a class of sprint-and-halt schedulers
that attempt to maximize the energy savings of software-controlled
power-down mechanisms, while simultaneously maintaining hard real-time
deadline guarantees. Several different algorithms are proposed to
reclaim unused processing time, defer processing, and extend
power-down intervals while respecting task deadlines. Sprint-and-halt
schedulers are shown to reduce energy consumption by 40-70% over
typical operating parameters. For very large or small state
transition latencies, simple approaches work very close to theoretical
limits, but over a critical range of latencies, advanced schedulers
show 10-20% energy reduction over simpler methods.
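The basic halt-or-stay-active decision underlying such schedulers reduces to a break-even comparison over an idle interval. The sketch below illustrates that comparison only; the actual sprint-and-halt algorithms also reclaim slack and defer work subject to deadlines, which this omits. Parameter names are assumptions.

```python
def worth_powering_down(idle_s, active_w, sleep_w, transition_s, transition_w):
    """Given a known idle interval (seconds), compare the energy of
    staying active against powering down: pay the transition cost,
    then sleep for the remainder of the interval."""
    if idle_s < transition_s:
        return False  # the transition latency cannot be amortized at all
    stay_energy = active_w * idle_s
    halt_energy = transition_w * transition_s + sleep_w * (idle_s - transition_s)
    return halt_energy < stay_energy
```

For long idle intervals, halting wins easily; near the transition latency, it loses, which is why the abstract's advanced schedulers work to *extend* power-down intervals rather than take them as given.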
|We present a new stability analysis for hybrid legged locomotion systems based on the ``symmetric'' factorization of return maps. We
apply this analysis to 2- and 3-degree-of-freedom (DOF) models of the Spring
Loaded Inverted Pendulum (SLIP) with different leg recirculation
strategies. Despite the non-integrability of the SLIP dynamics, we
obtain a necessary condition for asymptotic stability (and a
sufficient condition for instability) at a fixed point, formulated
as an exact algebraic expression in the physical parameters. We use
this expression to study a variety of 2 DOF SLIP models that have
previously been posited as low dimensional representations of
running, focusing on the sensory ``cost'' required to achieve ``fast''
transients as measured by the degree of singularity of the linearized
dynamics. We introduce a new 3 DOF SLIP model with pitching dynamics
whose stability properties, revealed by this analysis, provide for
the first time the beginnings of a formal explanation for the
surprisingly stable gaits of the open-loop controlled robot, RHex.
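The fixed-point stability question above has a simple numerical counterpart: linearize the return map at the fixed point and examine the eigenvalue magnitudes (the paper derives this condition as an exact algebraic expression instead). A toy finite-difference sketch for a 2D return map, with an assumed test map:

```python
import cmath

def jacobian_eigen_radius(f, x, eps=1e-6):
    """Spectral radius of the Jacobian of a 2D return map f at the point x,
    estimated by forward finite differences; for a 2x2 matrix the
    eigenvalues follow from the trace and determinant."""
    fx = f(x)
    j = [[(f([x[0] + (eps if k == 0 else 0.0),
              x[1] + (eps if k == 1 else 0.0)])[i] - fx[i]) / eps
          for k in range(2)] for i in range(2)]
    tr = j[0][0] + j[1][1]
    det = j[0][0] * j[1][1] - j[0][1] * j[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max(abs((tr + disc) / 2), abs((tr - disc) / 2))
```

A fixed point is asymptotically stable when the radius is below 1; for the linear toy map `[0.5*a, 0.25*b]` the radius at the origin is 0.5, so it is stable.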
|This paper evaluates, via both analysis and simulation, the end-to-end (e2e) delay performance of aggregate scheduling
with guaranteed-rate (GR) algorithms.
Deterministic e2e delay bounds for a single aggregation
are derived under the assumption that all incoming
flows at an aggregator conform to the token bucket model.
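The token-bucket conformance assumption above can be sketched as follows; the class name and units are illustrative, not from the paper.

```python
class TokenBucket:
    """Conformance test against a (rate, burst) token bucket: a packet of
    `size` units conforms iff enough tokens have accumulated since the
    last arrival. Non-conforming packets consume no tokens."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = burst, 0.0   # bucket starts full

    def conforms(self, now, size):
        # refill at `rate` tokens per second, capped at the burst size
        self.tokens = min(self.burst, self.tokens + self.rate * (now - self.last))
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False
```

A flow that bursts its full bucket must then wait for tokens to accumulate, which is exactly the bound on input traffic that the deterministic delay analysis relies on.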
Each aggregator can use any of three types of GR
scheduling algorithms: stand-alone GR, two-level hierarchical GR,
and rate-controlled two-level hierarchical GR.
We also derive an e2e delay bound for the case of multiple
aggregations when the rate-controlled two-level
hierarchical GR is used at aggregators.
By using the GR scheduling algorithms to handle traffic aggregates,
we show not only the existence of delay bounds, but also
the fact that under certain conditions (e.g., when
the aggregate traverses a long
path after the aggregation point) the bounds are even tighter
than those of per-flow scheduling.
We then compare the analytic delay bounds numerically,
and conduct in-depth simulation to (i) confirm the analytic
results, and (ii) compare the e2e delays of aggregate and per-flow
scheduling. The simulation results show that aggregate
scheduling is very robust and can take advantage of statistical
multiplexing gains. It performs better than per-flow scheduling
in most simulation scenarios we considered.
Overall, aggregate scheduling is shown theoretically to
provide bounded e2e delays, and practically to
provide excellent e2e delay performance.
Moreover, it incurs lower scheduling and state-maintenance
overheads at routers than per-flow scheduling.
All of these salient features make aggregate scheduling
very attractive for Internet core networks.