
Description (continued)
Egress Queue Processor (EQP)
The EQP stores received cells from the switch fabric in an egress buffer located in external cell data RAM. It also schedules cells for transmission to their destination MPHY port. The stored cells are organized into M output queues (M ≤ 31, where M is the number of MPHY ports configured to be supported by the ABM) with four delay priorities (subqueues) per queue. Each queue is structured in FIFO order and corresponds to one of the output MPHY ports. The EQP handles multicasting to the output MPHY ports as necessary. The EQP also maintains thresholds and accumulates various cell and queue statistics.
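For illustration, the queue organization described above can be modeled as in the following C sketch. The type and field names (egress_queue, subqueue, and so on) are assumptions for this example, not identifiers from the ABM documentation.

#define MAX_MPHY_PORTS   31   /* M <= 31 output queues */
#define DELAY_PRIORITIES  4   /* four subqueues per queue */

/* One FIFO subqueue: head/tail indices into the cell data RAM,
 * plus a running length used for threshold comparisons. */
struct subqueue {
    int head;      /* index of first linked cell, -1 if empty */
    int tail;      /* index of last linked cell, -1 if empty */
    int length;    /* current number of cells queued */
};

/* One egress queue per configured MPHY port, with one subqueue
 * per delay priority. */
struct egress_queue {
    struct subqueue sub[DELAY_PRIORITIES];
};

struct egress_queue queues[MAX_MPHY_PORTS];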
Each time slot, the length of every subqueue is compared with its applicable egress backpressure (EBP) threshold. If the egress backpressure option is enabled and an EBP threshold is exceeded, then egress backpressure for the corresponding delay priority is generated to the external switch fabric. The egress backpressure status bit map is transmitted in the ingress cell stream.
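A minimal sketch of this per-time-slot check, using the structures assumed above. A single threshold per delay priority and the 4-bit map layout are illustrative assumptions; the brief does not specify either.

/* Scan all subqueues once per time slot and build a backpressure
 * status bit map, one bit per delay priority, for the switch fabric. */
unsigned ebp_status_bitmap(const struct egress_queue *q, int num_ports,
                           const int ebp_threshold[DELAY_PRIORITIES])
{
    unsigned map = 0;
    for (int p = 0; p < num_ports; p++)
        for (int dp = 0; dp < DELAY_PRIORITIES; dp++)
            if (q[p].sub[dp].length > ebp_threshold[dp])
                map |= 1u << dp;   /* assert backpressure for this priority */
    return map;
}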
Processing of Egress Cells Received from the Switch Fabric
Egress cell processing is activated when a cell is received by VTOP. The EQP uses the connection tag field in the local routing header of the incoming cell (see the data formats section) to look up the corresponding MPHY port bit map stored during call setup. The MPHY port bit map is used to route the cell to its MPHY egress queue(s). This bit map is programmed by the user. The type of connection, unicast or multicast (as indicated by the MPHY port bit map), determines the subsequent operations performed by the EQP.
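The lookup and the unicast/multicast decision might be modeled as below. The table name, its width, and the one-bit-set convention for unicast are assumptions made for this sketch.

#include <stdint.h>
#include <stdbool.h>

extern uint32_t mphy_bitmap_table[];   /* programmed by the user at call setup */

/* Fetch the MPHY port bit map for an incoming cell's connection tag. */
uint32_t lookup_mphy_bitmap(unsigned connection_tag)
{
    return mphy_bitmap_table[connection_tag];
}

/* More than one bit set in the bit map indicates a multicast connection. */
bool is_multicast(uint32_t bitmap)
{
    return (bitmap & (bitmap - 1)) != 0;
}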
For a unicast cell, the MPHY port bit map will contain the MPHY port queue number to which it is destined. The EQP uses this MPHY port queue number to look up the current length of the target queue. This queue length is then compared with the applicable egress thresholds (CLP1, CLP0+1, and EPD). If a threshold is exceeded, then the cell is dropped. Otherwise, the cell is linked to the target queue at the egress buffer location obtained from the egress free list, using an index pair (IP) data structure obtained from the index pair free list. The EQP requires the user to initialize the egress free list and the index pair free list as linked lists of free pointer locations (organized as LIFO stacks). The egress free list and IP free list are located in the pointer RAM. Various individual statistic counters are updated.
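The free-list pop and FIFO link steps might look as follows under the structures assumed earlier. Sharing one next-pointer array between the free list and the queues stands in for the pointer RAM; all names here are illustrative, not the device's actual layout.

extern int next_ptr[];      /* pointer RAM: next-free / next-in-queue links */
int egress_free_head = -1;  /* head of the LIFO free stack, set up by the user */

/* Pop one free buffer location from a LIFO free list; -1 if exhausted. */
int pop_free(int *head)
{
    int idx = *head;
    if (idx >= 0)
        *head = next_ptr[idx];
    return idx;
}

/* Link a stored cell to the tail of a FIFO subqueue. */
void enqueue_cell(struct subqueue *sq, int cell_idx)
{
    next_ptr[cell_idx] = -1;
    if (sq->tail >= 0)
        next_ptr[sq->tail] = cell_idx;   /* append after current tail */
    else
        sq->head = cell_idx;             /* queue was empty */
    sq->tail = cell_idx;
    sq->length++;
}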
For a multicast cell, the EQP uses the delay priority field in the local header to determine the target multicast delay priority subqueue. The cell is stored at the egress buffer location in the cell data RAM obtained from the egress free list. Each time slot, the EQP can link one cell from the multicast queue to up to two destination MPHY port queues. A programmable weighted round-robin schedule is used to determine how frequently each multicast delay priority subqueue is serviced. The weighted round-robin schedule for the multicast queue is configurable by the microprocessor. The schedule provides a 16-entry (weight) table that determines the sequence of multicast delay priorities to be serviced and, therefore, the fraction of total bandwidth allocated to each delay priority. If a cell is available from the subqueue chosen by the weighted round-robin schedule, then that cell is linked. Otherwise, the highest nonempty delay priority subqueue is chosen. Once a cell is selected from the multicast queue, the EQP retrieves the MPHY port bit map for the cell. For each MPHY port in the MPHY port bit map, the EQP performs thresholding as described earlier for unicast cells. If the cell is not dropped, the EQP obtains a free IP and links the cell to the appropriate MPHY port queues.
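A sketch of the multicast scheduling decision, under the same assumed structures: a 16-entry table names the delay priority for each slot, with fallback to the highest-priority (lowest-numbered) nonempty subqueue. The table contents below are only an example weighting; the real values are configured by the microprocessor.

static const int wrr_table[16] = {
    0, 1, 0, 2, 0, 1, 0, 3,
    0, 1, 0, 2, 0, 1, 0, 1
};
static int wrr_pos;   /* advances one table entry per time slot */

/* Return the delay priority subqueue to serve this slot, -1 if
 * the multicast queue is entirely empty. */
int select_mcast_priority(const struct egress_queue *mcast_q)
{
    int dp = wrr_table[wrr_pos];
    wrr_pos = (wrr_pos + 1) & 15;        /* wrap at 16 entries */

    if (mcast_q->sub[dp].length > 0)
        return dp;                       /* scheduled priority has a cell */

    for (dp = 0; dp < DELAY_PRIORITIES; dp++)
        if (mcast_q->sub[dp].length > 0)
            return dp;                   /* fall back to highest nonempty */

    return -1;
}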