Integrated Services
An Introduction
Prof. David Bernstein
James Madison University
Computer Science Department
bernstdh@jmu.edu
Motivation
- Properties of IP:
- "Cross traffic" is unpredictable and impacts traffic
on other paths
- IP provides best effort service (i.e., no service guarantees)
- Possible Improvements:
- Service guarantees (e.g., for real-time applications)
- Service classes
Overview
- Service Classes:
- Guaranteed service class
- Controlled-load service class (i.e., can tolerate some
delay/loss)
- Protocols and Algorithms:
- Resource management
- State management
Overview (cont.)
- Approach:
- Based on sessions and/or flows
- Each flow has a stable path
- Routers along the path are responsible for providing QoS
- QoS Methods:
- RSVP - for reservations
- Traffic Shaping - controls turbulent flow
- Admission Control - admits/rejects flow
- Scheduling - of packets
Traffic Shaping
- Goal:
- Control the arrivals of packets to avoid/reduce turbulence
- Leaky-Bucket Traffic Shaping (see the sketch below):
- Some incoming packets are discarded (i.e., leak)
- Other packets are "regularized" (i.e., transmitted smoothly)
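The following is a minimal sketch of a leaky-bucket shaper under simple assumptions: the bucket is measured in packets and is drained at one packet per clock tick. The class and method names (LeakyBucketShaper, arrive, tick) are illustrative, not part of any standard API.

import java.util.ArrayDeque;
import java.util.Queue;

public class LeakyBucketShaper {
    private final Queue<byte[]> bucket = new ArrayDeque<>();
    private final int capacity;  // maximum number of packets the bucket holds

    public LeakyBucketShaper(int capacity) {
        this.capacity = capacity;
    }

    // Called on each arrival: packets that would overflow the bucket
    // "leak" (i.e., are discarded).
    public boolean arrive(byte[] packet) {
        if (bucket.size() >= capacity) {
            return false;  // discarded
        }
        return bucket.offer(packet);
    }

    // Called once per clock tick: at most one packet departs, so the
    // output is "regularized" no matter how bursty the arrivals are.
    public byte[] tick() {
        return bucket.poll();
    }
}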
Admission Control
- Goal:
- Admit or reject requests for new flows (see the sketch below)
- Approach:
- Flow is admitted only if currently available resources can
service the flow without affecting the QoS of
previously admitted flows
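A minimal sketch of such a check, assuming each flow is described only by the rate it requests (a real controller would use a fuller traffic specification); the names AdmissionController, admit, and release are illustrative.

public class AdmissionController {
    private final double capacity;  // total outgoing rate of the router (Mb/s)
    private double reserved = 0.0;  // rate already promised to admitted flows

    public AdmissionController(double capacity) {
        this.capacity = capacity;
    }

    // Admit the flow only if the unreserved rate can cover the request,
    // so that previously admitted flows keep their QoS.
    public synchronized boolean admit(double requestedRate) {
        if (reserved + requestedRate <= capacity) {
            reserved += requestedRate;
            return true;
        }
        return false;
    }

    // Return the reserved rate when a flow ends.
    public synchronized void release(double rate) {
        reserved -= rate;
    }
}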
Packet Scheduling
- Goal:
- Manage packets that are in queues in order to provide
the required QoS
- Components:
- Queues that hold waiting packets
- A discipline for choosing the next packet to serve (e.g., FIFO,
priority, earliest deadline first, fair queueing)
First-In, First-Out (FIFO) Scheduling
- Defined:
- Packets are served in the order they arrive (see the sketch below)
- Issues:
- Source has an incentive to transmit at a high rate
(since the queue "favors" the most greedy flow)
- It is hard to control the delay of packets through
a network of FIFO queues
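A minimal sketch of a FIFO scheduler, representing packets simply as byte arrays; the names are illustrative.

import java.util.ArrayDeque;
import java.util.Queue;

public class FifoScheduler {
    // All flows share one queue, so arrival order alone determines the
    // service order and an aggressive sender can crowd out everyone else.
    private final Queue<byte[]> queue = new ArrayDeque<>();

    public void enqueue(byte[] packet) {
        queue.offer(packet);
    }

    // Serve the packet that has been waiting longest.
    public byte[] next() {
        return queue.poll();
    }
}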
Priority Queue Scheduling
- Defined:
- Packets are placed in a priority-specific FIFO queue
- The nonempty queue with the highest priority is
served first
- Issues:
- Queues with low priorities can be starved of bandwidth
- Types:
- Preemptive - service of a lower-priority queue is interrupted when
a packet arrives in a higher-priority queue
- Non-Preemptive (sketched below)
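A minimal sketch of the non-preemptive variant, assuming a fixed number of priority levels with 0 as the highest; the Packet record and all other names are illustrative.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class PriorityScheduler {
    record Packet(int flowId, int lengthBits) {}

    private final List<Queue<Packet>> queues = new ArrayList<>();  // index 0 = highest priority

    public PriorityScheduler(int levels) {
        for (int i = 0; i < levels; i++) {
            queues.add(new ArrayDeque<>());
        }
    }

    public void enqueue(Packet p, int priority) {
        queues.get(priority).offer(p);  // FIFO within one priority level
    }

    // Serve the highest-priority nonempty queue; low-priority queues can
    // starve if higher-priority traffic never lets up.
    public Packet next() {
        for (Queue<Packet> q : queues) {
            if (!q.isEmpty()) {
                return q.poll();
            }
        }
        return null;  // nothing is waiting
    }
}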
Earliest Deadline First Scheduling
- Defined:
- Serve the packet with the earliest deadline first
- Process:
- Compute the departure deadline for incoming packets
- "Sort" packets based on deadline (using a heap)
Fair Scheduling
- Idea:
- Allocate resources in a fair manner
- A Difficulty:
- Deciding what "fair" means (see the criteria and the example that follow)
Some Concepts from Microeconomics
- Efficient Policies:
- Social welfare is increased
(a.k.a., the Kaldor-Hicks/utilitarian criterion)
- Equitable Policies:
- Nobody is made worse off
(a.k.a., the Pareto/egalitarian criterion)
- Fair/Superfair Policies:
- People would rather have what they have than what others have
(a.k.a., the Baumol/envy criterion)
A Networking Example
- The Situation:
- Sources A and B route through router R to C
- A wants 10Mb/s
- B wants 100Mb/s
- R has a capacity of 1.1Mb/s
- What is Fair?
- A and B get equal shares (i.e., 0.55Mb/s each)
- Shares are based on the 1:10 ratio of wants (i.e.,
A gets 0.1Mb/s and B gets 1Mb/s)
Max-Min Fairness
- The Setting:
- \(N\) sources share a link of rate \(C\)
- \(W(f)\) denotes the "wanted" rate of \(f\)
- \(R(f)\) denotes the "received"/allocated rate for
\(f\)
- The Algorithm:
- while \(N \gt 0\)
- Pick the source \(f\) with the smallest \(W(f)\)
- If \(W(f) \lt C/N\) set \(R(f) = W(f)\)
- else set \(R(f) = C/N\)
- Set \(N = N - 1\)
- Set \(C = C - R(f)\)
An Example of Max-Min Fairness
- The Situation:
- \(N = 3\)
- \(W(1) = 0.1\)
- \(W(2) = 0.5\)
- \(W(3) = 5.0\)
- \(C = 1.0\)
- Max-Min Fairness (see the sketch below):
- \(R(1) = 0.10\)
- \(R(2) = 0.90 / 2 = 0.45\)
- \(R(3) = 0.45 / 1 = 0.45\)
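A minimal sketch of the algorithm from the previous slide; its main method reproduces this example. The class, method, and variable names are illustrative.

import java.util.Arrays;

public class MaxMinFairness {
    // Returns an array R where R[i] is the rate allocated to the source
    // whose wanted rate is wanted[i]; capacity is the link rate C.
    public static double[] allocate(double[] wanted, double capacity) {
        int n = wanted.length;
        double[] allocated = new double[n];

        // Visit sources in increasing order of wanted rate.
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) {
            order[i] = i;
        }
        Arrays.sort(order, (a, b) -> Double.compare(wanted[a], wanted[b]));

        double c = capacity;
        int remaining = n;
        for (int idx : order) {
            double fairShare = c / remaining;
            // A source that wants less than its fair share gets what it
            // wants; otherwise it gets an equal share of what is left.
            allocated[idx] = Math.min(wanted[idx], fairShare);
            c -= allocated[idx];
            remaining--;
        }
        return allocated;
    }

    public static void main(String[] args) {
        // W = {0.1, 0.5, 5.0} and C = 1.0 yield R = {0.10, 0.45, 0.45}.
        System.out.println(Arrays.toString(allocate(new double[] {0.1, 0.5, 5.0}, 1.0)));
    }
}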
Bit-by-Bit Fair Queueing
- Setup:
- Packets belonging to a flow are placed in a FIFO
queue for that flow
- Servicing the Queues:
- Queues are served one bit at a time in round-robin order
Weighted Bit-by-Bit Fair Queueing
- Setup:
- Packets belonging to flow \(f\) are placed in a FIFO
queue for that flow
- Servicing the Queues:
- Queues are served \(w_f\) bits at a time in round-robin order
- Also Known As:
- Generalized Processor Sharing (GPS)
Packetized Weighted Fair Queueing
- Setup:
- Packets belonging to flow \(f\) are placed in a FIFO
queue for that flow
- Servicing the Queues:
- Determine what time a packet would complete if we served
flows bit-by-bit
- Serve packets in order of increasing finishing time (see the sketch below)
- Also Known As:
- Packetized Generalized Processor Sharing (PGPS)
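A simplified sketch of packetized weighted fair queueing: each packet is tagged with an estimate of when it would finish under bit-by-bit service, and packets are served in order of increasing tag. To keep the sketch short it uses a self-clocked approximation of the virtual time (the tag of the most recently served packet) rather than the exact GPS virtual-time computation, and all names are illustrative.

import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class WeightedFairQueueing {
    record TaggedPacket(int flowId, int lengthBits, double finishTag) {}

    private final Map<Integer, Double> weight = new HashMap<>();      // w_f for each flow
    private final Map<Integer, Double> lastFinish = new HashMap<>();  // tag of the flow's previous packet
    private final PriorityQueue<TaggedPacket> queue =
        new PriorityQueue<>(Comparator.comparingDouble(TaggedPacket::finishTag));
    private double virtualTime = 0.0;  // advanced as packets are served

    // Flows must be registered (with their weights) before sending.
    public void addFlow(int flowId, double w) {
        weight.put(flowId, w);
    }

    // Tag the packet with the (approximate) time at which it would complete
    // if flows were served w_f bits at a time, and keep packets sorted by tag.
    public void enqueue(int flowId, int lengthBits) {
        double start = Math.max(virtualTime, lastFinish.getOrDefault(flowId, 0.0));
        double finish = start + lengthBits / weight.get(flowId);
        lastFinish.put(flowId, finish);
        queue.offer(new TaggedPacket(flowId, lengthBits, finish));
    }

    // Serve packets in order of increasing finishing time.
    public TaggedPacket next() {
        TaggedPacket p = queue.poll();
        if (p != null) {
            virtualTime = p.finishTag();
        }
        return p;
    }
}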
Some Observations
- Many routers support weighted fair queueing
- Weighted fair queueing can be used to provide different rates to
different flows
Summary of the Complete Process
- Source asks for a guaranteed QoS
- Source negotiates an arrival rate with each router
- Routers perform admission control to ensure they have
sufficient resources
- Each router along the path reserves resources using RSVP
- Source starts sending packets at agreed rate
- Router determines the flow for each packet and puts it in the
appropriate queue
- Router uses weighted fair queueing to serve packets (see the sketch below)
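A minimal sketch that ties the earlier sketches together at one router, reusing the AdmissionController and WeightedFairQueueing classes from above; RSVP message handling is not shown, and the capacity value and all names are illustrative.

public class IntServRouter {
    private final AdmissionController admission = new AdmissionController(100.0);  // Mb/s, assumed
    private final WeightedFairQueueing scheduler = new WeightedFairQueueing();

    // Handle a reservation request (the role RSVP plays hop by hop).
    public boolean reserve(int flowId, double rateMbps) {
        if (!admission.admit(rateMbps)) {
            return false;                     // insufficient resources: reject
        }
        scheduler.addFlow(flowId, rateMbps);  // weight proportional to the reserved rate
        return true;
    }

    // Handle an arriving data packet: classify it and queue it for its flow.
    public void onPacket(int flowId, int lengthBits) {
        scheduler.enqueue(flowId, lengthBits);
    }
}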
There's Always More to Learn