Sunday, 18 December 2011

Optical Networks Communication System Design Considerations


Fiber Optic Communication System Design Considerations
When designing a fiber optic communication system, the following factors must be taken into consideration:
  • Which modulation and multiplexing technique is best suited for the particular application?
  • Is enough power available at the receiver (power budget)?
  • Rise-time and bandwidth characteristics
  • Noise effects on system bandwidth, data rate, and bit error rate
  • Are erbium-doped fiber amplifiers required?
  • What type of fiber is best suited for the application?
  • Cost
1. Power Budget
The power arriving at the detector must be sufficient to allow clean detection with few errors. Clearly, the signal at the receiver must be larger than the noise. The power at the detector, Pr, must be above the threshold level or receiver sensitivity Ps.
        Pr >= Ps
The receiver sensitivity Ps is the signal power, in dBm, at the receiver that results in a particular bit error rate (BER). Typically the BER is chosen to be one error in 10^9 bits, i.e. a BER of 10^-9.
The received power at the detector is a function of:
  1. Power emanating from the light source (laser diode or LED)—(PL)
  2. Source to fiber loss (Lsf)
  3. Fiber loss per km (FL) for a length of fiber (L)
  4. Connector or splice losses (Lconn)
  5. Fiber to detector loss (Lfd)
The allocation of power loss among system components is the power budget. The power margin Lm is the amount by which the received power Pr exceeds the receiver sensitivity Ps:
          Lm = Pr – Ps
where Lm is the loss margin in dB, Pr is the received power in dBm, and Ps is the receiver sensitivity in dBm.
If all of the loss mechanisms in the system are taken into consideration, the loss margin can be expressed as the following equation. All units are dB and dBm.
          Lm = PL – Lsf – (FL × L) – Lconn – Lfd – Ps
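As a quick sanity check on the loss-margin equation, here is a small sketch that plugs in hypothetical component values (the source power, losses, and sensitivity below are made-up example figures, not values from a real design):

```python
# Hypothetical fiber optic power budget (all values are example figures).
def loss_margin_db(p_source_dbm, l_source_fiber_db, fiber_loss_db_per_km,
                   length_km, l_connectors_db, l_fiber_detector_db,
                   receiver_sensitivity_dbm):
    """Lm = PL - Lsf - (FL x L) - Lconn - Lfd - Ps, all values in dB/dBm."""
    return (p_source_dbm
            - l_source_fiber_db
            - fiber_loss_db_per_km * length_km
            - l_connectors_db
            - l_fiber_detector_db
            - receiver_sensitivity_dbm)

# Example: 0 dBm laser, 3 dB source-to-fiber loss, 0.35 dB/km over 40 km,
# 1.5 dB of connector/splice loss, 1 dB fiber-to-detector loss, -28 dBm sensitivity.
lm = loss_margin_db(0, 3, 0.35, 40, 1.5, 1, -28)
print(f"Loss margin: {lm:.1f} dB")   # 8.5 dB -> the link closes with margin to spare
```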


2. Bandwidth and Rise Time Budgets
The transmission data rate of a digital fiber optic communication system is limited by the rise time of the various components, such as amplifiers and LEDs, and the dispersion of the fiber. The cumulative effect of all the components should not limit the bandwidth of the system. The rise time tr and bandwidth BW are related by
           BW = 0.35/tr
This equation is used to determine the required system rise time. The appropriate components are then selected to meet the system rise time requirements. The relationship between the total system rise time and the component rise times is given by
           ts = (tr1^2 + tr2^2 + tr3^2 + …)^(1/2)
where ts is the total system rise time and tr1, tr2, … are the rise times associated with the various components.
To simplify matters, divide the system into five groups:
  1. Transmitting circuits (ttc)
  2. LED or laser (tL)
  3. Fiber dispersion (tf)
  4. Photodiode (tph)
  5. Receiver circuits (trc)
The system rise time can then be expressed as
           ts = (ttc^2 + tL^2 + tf^2 + tph^2 + trc^2)^(1/2)
The system bandwidth can then be calculated from the total rise time ts using
          BW = 0.35/ts
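Putting the two equations together, the following sketch computes a total rise time from five hypothetical component rise times (the values are illustrative only) and the resulting system bandwidth:

```python
import math

# Hypothetical rise-time budget (component rise times in ns are example figures).
rise_times_ns = {
    "transmitter circuits": 2.0,
    "laser/LED":            1.0,
    "fiber dispersion":     3.0,
    "photodiode":           1.5,
    "receiver circuits":    2.5,
}

# Total system rise time: root-sum-of-squares of the component rise times.
ts_ns = math.sqrt(sum(t**2 for t in rise_times_ns.values()))
bw_mhz = 0.35 / (ts_ns * 1e-9) / 1e6   # BW = 0.35 / ts

print(f"System rise time: {ts_ns:.2f} ns")
print(f"System bandwidth: {bw_mhz:.0f} MHz")
```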
Electrical and Optical Bandwidth
  • Electrical bandwidth (BWel) is defined as the frequency at which the ratio current out/current in (Iout/Iin) drops to 0.707. (Analog systems are usually specified in terms of electrical bandwidth.)
  • Optical bandwidth (BWopt) is the frequency at which the ratio power out/power in (Pout/Pin) drops to 0.5.
Because Pin and Pout are directly proportional to Iin and Iout (not to Iin^2 and Iout^2), the half-power point is equivalent to the half-current point. This results in a BWopt that is larger than the BWel, as given in the following equation
          BWel = 0.707 × BWopt

3. Fiber Connectors
Many types of connectors are available for fiber optics, depending on the application. The most popular are:
  • SC—snap-in single-fiber connector
  • ST and FC—twist-on single-fiber connector
  • FDDI—fiber distributed data interface connector
In the 1980s, there were many different connector types and manufacturers. Today, the industry has shifted to standardized connector types, with details specified by organizations such as the Telecommunications Industry Association (TIA), the International Electrotechnical Commission (IEC), and the Electronic Industries Alliance (EIA).

Snap-in connector (SC)—developed by Nippon Telegraph and Telephone of Japan. Like most fiber connectors, it is built around a cylindrical ferrule that holds the fiber, and it mates with an interconnection adapter or coupling receptacle. A push on the connector latches it into place, with no need to turn it in a tight space, so a simple tug will not unplug it. It has a square cross section that allows high packing density on patch panels and makes it easy to package in a polarized duplex form that ensures the fibers are matched to the proper fibers in the mated connector.

Twist-on single-fiber connectors (ST and FC)—long used in data communication; one of several fiber connectors that evolved from designs originally used for copper coaxial cables.

Duplex connectors—A duplex connector includes a pair of fibers and generally has an internal key so it can be mated in only one orientation. Polarizing the connector in this way is important because most systems use separate fibers to carry signals in each direction, so it matters which fibers are connected. One simple type of duplex connector is a pair of SC connectors, mounted side by side in a single case. This takes advantage of their plug-in-lock design.
Other duplex connectors have been developed for specific types of networks, as part of comprehensive standards. One example is the fixed-shroud duplex (FSD) connector specified by the fiber distributed data interface (FDDI) standard.


4. Fiber Optic Couplers
A fiber optic coupler is a device used to connect a single (or multiple) fiber to many other separate fibers. There are two general categories of couplers:
  • Star couplers
  • T-couplers

A. Star Couplers
Transmissive type
Optical signals sent into a mixing block are available at all output fibers. Power is distributed evenly. For an n × n star coupler (n-inputs and n-outputs), the power available at each output fiber is 1/n the power of any input fiber.

The output power from a star coupler is simply
             Po = Pin/n
where n = number of output fibers.
An important characteristic of transmissive star couplers is cross talk or the amount of input information coupled into another input. Cross coupling is given in decibels and is typically greater than 40 dB.
The reflective star coupler has the same power division as the transmissive type, but cross talk is not an issue because power from any fiber is distributed to all others.
B. T-Couplers
In the following figure, power launched into port 1 is split between ports 2 and 3. The power split does not have to be equal; the power division is given in decibels or in percent. For example, an 80/20 split means 80% to port 2 and 20% to port 3. In decibels, this corresponds to a loss of about 0.97 dB for port 2 and 7.0 dB for port 3.

Directivity describes the transmission between the ports. For example, if P3/P1 = 0.5, P3/P2 does not necessarily equal 0.5. For a highly directive T-coupler, P3/P2 is very small. Typically, no power is expected to be transferred between any two ports on the same side of the coupler.
Another type of T-coupler uses a graded-index (GRIN) lens and a partially reflective surface to accomplish the coupling. The power division is a function of the reflecting mirror. This coupler is often used to monitor optical power in a fiber optic line.
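The decibel figures quoted above follow directly from the split ratios. The sketch below, assuming ideal (lossless) couplers with no excess loss, converts an n × n star coupler's 1/n split and an 80/20 T-coupler split into dB:

```python
import math

def split_loss_db(fraction):
    """Loss in dB for a port receiving the given fraction of the input power."""
    return -10 * math.log10(fraction)

# Ideal n x n transmissive star coupler: each output gets 1/n of the input power.
n = 8
print(f"8x8 star coupler splitting loss: {split_loss_db(1/n):.1f} dB per port")  # ~9.0 dB

# Ideal 80/20 T-coupler.
print(f"Port 2 (80%): {split_loss_db(0.80):.2f} dB")   # ~0.97 dB
print(f"Port 3 (20%): {split_loss_db(0.20):.2f} dB")   # ~6.99 dB
```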

5. Wavelength-Division Multiplexers (WDM)
The couplers used for wavelength-division multiplexing (WDM) are designed specifically to make the coupling between ports a function of wavelength. The purpose of these couplers is to separate (or combine) signals transmitted at different wavelengths. Essentially, the transmitting coupler is a mixer and the receiving coupler is a wavelength filter. Wavelength-division multiplexers use several methods to separate different wavelengths depending on the spacing between the wavelengths. Separation of 1310 nm and 1550 nm is a simple operation and can be achieved with WDMs using bulk optical diffraction gratings. Wavelengths in the 1550-nm range that are spaced at greater than 1 to 2 nm can be resolved using WDMs that incorporate interference filters. An example of an 8-channel WDM using interference filters is given in the following figure. Fiber Bragg gratings are typically used to separate very closely spaced wavelengths in a DWDM system (< 0.8 nm).


6. Erbium-Doped Fiber Amplifiers (EDFA)
Erbium-doped fiber amplifiers (EDFA)—The EDFA is an optical amplifier used to boost the signal level in the 1530-nm to 1570-nm region of the spectrum. When it is pumped by an external laser source of either 980 nm or 1480 nm, signal gain can be as high as 30 dB (1000 times). Because EDFAs allow signals to be regenerated without having to be converted back to electrical signals, systems are faster and more reliable. When used in conjunction with wavelength-division multiplexing, fiber optic systems can transmit enormous amounts of information over long distances with very high reliability.



7. Fiber Bragg Gratings (FBG)
Fiber Bragg gratings—Fiber Bragg gratings are devices used for separating wavelengths through diffraction, similar to a diffraction grating (see the following figure). They are of critical importance in DWDM systems in which multiple closely spaced wavelengths require separation. Light entering the fiber Bragg grating is diffracted by the induced periodic variations in the index of refraction. By spacing the periodic variations at multiples of the half-wavelength of the desired signal, each variation reflects light with a 360° phase shift, causing constructive interference at a very specific wavelength while allowing others to pass. Fiber Bragg gratings are available with bandwidths ranging from 0.05 nm to >20 nm.
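The reflected wavelength is set by the Bragg condition, λB = 2 × neff × Λ, where Λ is the grating period and neff the effective index of the guided mode (the period is half the in-fiber wavelength, as described above). The sketch below, assuming an effective index of 1.468, estimates the period needed to reflect a 1550 nm channel:

```python
# Bragg condition: lambda_B = 2 * n_eff * grating_period.
n_eff = 1.468            # assumed effective index of the fiber mode
lambda_b_nm = 1550.0     # target reflection wavelength

period_nm = lambda_b_nm / (2 * n_eff)
print(f"Required grating period: {period_nm:.1f} nm")   # roughly 528 nm
```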

Fiber Bragg gratings are typically used in conjunction with circulators, which are used to drop single or multiple narrowband WDM channels and to pass other “express” channels. Fiber Bragg gratings have emerged as a major factor, along with EDFAs, in increasing the capacity of next-generation high-bandwidth fiber optic systems.

The following figure depicts a typical scenario in which DWDM and EDFA technology is used to transmit a number of different channels of high-bandwidth information over a single fiber. As shown, n individual wavelengths of light operating in accordance with the ITU grid are multiplexed together using a multichannel coupler/splitter or wavelength-division multiplexer. An optical isolator is used with each optical source to minimize troublesome back reflections. A tap coupler then removes 3% of the transmitted signal for wavelength and power monitoring. After traveling through a substantial length of fiber (50–100 km), an EDFA is used to boost the signal strength. After a couple of stages of amplification, an add/drop channel consisting of a fiber Bragg grating and circulator is introduced to extract and then reinject the signal operating at the λ3 wavelength. After another stage of amplification via EDFA, a broadband WDM is used to combine a 1310-nm signal with the 1550-nm window signals. At the receiver end, another broadband WDM extracts the 1310-nm signal, leaving the 1550-nm window signals. The 1550-nm window signals are finally separated using a DWDM that employs an array of fiber Bragg gratings, each tuned to a specific transmission wavelength. This system represents the current state of the art in high-bandwidth fiber optic data transmission.





Thursday, 13 October 2011

SDH Q&A: helpful to crack SDH-based interviews

Q. What is SDH ?

SDH stands for Synchronous Digital Hierarchy and is an international standard for a high-capacity optical telecommunications network. It is a synchronous digital transport system aimed at providing a simpler, more economical, and more flexible telecommunication infrastructure.

Q. What is the difference between SONET and SDH?
A. To begin with, there is no STS-1. The first level in the SDH hierarchy is STM-1 (Synchronous Transport Module 1), which has a line rate of 155.52 Mb/s. This is equivalent to SONET's STS-3c. Then come STM-4 at 622.08 Mb/s and STM-16 at 2488.32 Mb/s. The other difference is in the overhead bytes, which are defined slightly differently for SDH. A common misconception is that STM-Ns are formed by multiplexing STM-1s. In fact, STM-1s, STM-4s and STM-16s that terminate on a network node are broken down to recover the VCs which they contain; the outbound STM-Ns are then reconstructed with new overheads.

Q. What are the advantages of SDH over PDH ?

The increased configuration flexibility and bandwidth availability of SDH provides significant advantages over the older telecommunications system.
These advantages include:
  • A reduction in the amount of equipment and an increase in network reliability.
  • The provision of overhead and payload bytes - the overhead bytes permit management of the payload bytes on an individual basis and facilitate centralized fault sectionalisation; nearly 5% of the signal structure is allocated for this purpose.
  • The definition of a synchronous multiplexing format for carrying lower-level digital signals (such as 2 Mbit/s, 34 Mbit/s, 140 Mbit/s), which greatly simplifies the interface to digital switches, digital cross-connects, and add-drop multiplexers.
  • The availability of a set of generic standards, which enable multi-vendor interoperability.
  • The definition of a flexible architecture capable of accommodating future applications, with a variety of transmission rates. Existing and future signals can be accommodated.

Q. What are the main limitations of PDH ?

The main limitations of PDH are:
  • Inability to identify individual channels in a higher-order bit stream.
  • Insufficient capacity for network management.
  • Most PDH network management is proprietary.
  • There is no standardised definition of PDH bit rates greater than 140 Mbit/s.
  • There are different hierarchies in use around the world, so specialized interface equipment is required to interwork between them.


Q. What are some timing/sync defining rules of thumb?
A.
1. A node can only receive the synchronization reference signal from another node that contains a clock of equivalent or superior quality (Stratum level).
2. The facilities with the greatest availability (absence of outages) should be selected as synchronization facilities.
3. Where possible, all primary and secondary synchronization facilities should be diverse, and synchronization facilities within the same cable should be minimized.
4. The total number of nodes in series from the stratum 1 source should be minimized. For example, the primary synchronization network would ideally look like a star configuration with the stratum 1 source at the center. The nodes connected to the star would branch out in decreasing stratum level from the center.
5. No timing loops may be formed in any combination of primary and secondary synchronization facilities.


Q. What is meant by "Plesiochronous" ?

If two digital signals are Plesiochronous, their transitions occur at "almost" the same rate, with any variation being constrained within tight limits. These limits are set down in ITU-T recommendation G.811. For example, if two networks need to interwork, their clocks may be derived from two different PRCs. Although these clocks are extremely accurate, there's a small frequency difference between one clock and the other. This is known as a plesiochronous difference.

Q. What is meant by "Synchronous" ?

In a set of Synchronous signals, the digital transitions in the signals occur at exactly the same rate. There may however be a phase difference between the transitions of the two signals, and this would lie within specified limits. These phase differences may be due to propagation time delays, or low-frequency wander introduced in the transmission network. In a synchronous network, all the clocks are traceable to one Stratum 1 Primary Reference Clock (PRC).


Q. What is meant by "Asynchronous" ?

In the case of Asynchronous signals, the transitions of the signals don't necessarily occur at the same nominal rate. Asynchronous, in this case, means that the difference between two clocks is much greater than a plesiochronous difference. For example, if two clocks are derived from free-running quartz oscillators, they could be described as asynchronous.


Q. What are the various steps in multiplexing ?

The multiplexing principles of SDH follow, using these terms and definitions:

Mapping: A process used when tributaries are adapted into Virtual Containers (VCs) by adding justification bits and Path Overhead (POH) information.

Aligning: This process takes place when a pointer is included in a Tributary Unit (TU) or an Administrative Unit (AU), to allow the first byte of the Virtual Container to be located.

Multiplexing: This process is used when multiple lower-order path layer signals are adapted into a higher-order path signal, or when the higher-order path signals are adapted into a Multiplex Section.

Stuffing: As the tributary signals are multiplexed and aligned, some spare capacity has been designed into the SDH frame to provide enough space for all the various tributary rates. Therefore, at certain points in the multiplexing hierarchy, this spare capacity is filled with "fixed stuffing" bits that carry no information, but are required to fill up the particular frame.

Q. Explain 1+1 protection.

A. In 1+1 protection switching, there is a protection facility (backup line) for each working facility. At the near end, the optical signal is bridged permanently (split into two signals) and sent over both the working and the protection facilities simultaneously, producing a working signal and a protection signal that are identical. At the far end of the section, both signals are monitored independently for failures. The receiving equipment selects either the working or the protection signal. This selection is based on the switch initiation criteria, which are either a signal fail (a hard failure such as loss of frame (LOF) within an optical signal) or a signal degrade (a soft failure caused by the error rate exceeding some pre-defined value).

Q. Explain 1:N protection.

A. In 1:N protection switching, there is one protection facility for several working facilities (the range is from 1 to 14). In 1:N protection architecture, all communication from the near end to the far end is carried out over the APS channel, using the K1 and K2 bytes. All switching is revertive; that is, the traffic reverts to the working facility as soon as the failure has been corrected.

In 1:N protection switching, optical signals are normally sent only over the working facilities, with the protection facility being kept free until a working facility fails.



Q. If voice traffic is still intelligible to the listener in a relatively poor communication channel, why isn't it easy to pass it across a network optimized for data?

A. Data communication requires very low Bit-error Ratio (BER) for high throughput but does not require constrained propagation, processing, or storage delay. Voice calls, on the other hand, are insensitive to relatively high BER, but very sensitive to delay over a threshold of a few tens of milliseconds. This insensitivity to BER is a function of the human brain's ability to interpolate the message content, while sensitivity to delay stems from the interactive nature (full-duplex) of voice calls. Data networks are optimized for bit integrity, but end-to-end delay and delay variation are not directly controlled. Delay variation can vary widely for a given connection, since the dynamic path routing schemes typical of some data networks may involve varying numbers of nodes (for example, routers). In addition, the echo-cancellers deployed to handle known excess delay on a long voice path are automatically disabled when the path is used for data. These factors tend to disqualify data networks for voice transport if traditional public switched telephone network (PSTN) quality is desired.

Q. How does synchronization differ from timing?

A. These terms are commonly used interchangeably to refer to the process of providing suitable accurate clocking frequencies to the components of the synchronous network. The terms are sometimes used differently. In cellular wireless systems, for example, "timing" is often applied to ensure close alignment (in real time) of control pulses from different transmitters; "synchronization" refers to the control of clocking frequencies.

Q. If I adopt sync status messages in my sync distribution plan, do I have to worry about timing loops?

A. Yes. Sync Status Messages (SSMs) are certainly a very useful tool for minimizing the occurrence of timing loops, but in some complex connectivities they are not able to absolutely preclude timing loop conditions. In a site with multiple Synchronous Optical Network (SONET) rings, for example, there are not enough capabilities for communicating all the necessary SSM information between the SONET network elements and the Timing Signal Generator (TSG) to cover the potential timing paths under all fault conditions. Thus, a comprehensive fault analysis is still required when SSMs are deployed to ensure that a timing loop does not develop.

Q. If ATM is asynchronous by definition, why is synchronization even mentioned in the same sentence?

A. The term Asynchronous Transfer Mode applies to layer 2 of the OSI 7-layer model (the data link layer), whereas the term synchronous network applies to layer 1 (the physical layer). Layers 2, 3, and so on, always require a physical layer which, for ATM, is typically SONET or Synchronous Digital Hierarchy (SDH); thus the "asynchronous" ATM system is often associated with a "synchronous" layer 1. In addition, if the ATM network offers circuit emulation service (CES), also referred to as constant bit-rate (CBR), then synchronous operation (that is, traceability to a primary reference source) is required to support the preferred timing transport mechanism, Synchronous Residual Time Stamp (SRTS).

Q. Most network elements have internal stratum 3 clocks with 4.6 ppm accuracy, so why does the network master clock need to be as accurate as one part in 10^11?

A. Although the requirements for a stratum 3 clock specify a free-run accuracy (also pull-in range) of 4.6 ppm, a network element (NE) operating in a synchronous environment is never in free-run mode. Under normal conditions, the NE internal clock tracks (and is described as being traceable to) a Primary Reference Source that meets stratum 1 long-term accuracy of one part in 10^11.
This accuracy was originally chosen because it was available as a national primary reference source from a cesium-beam oscillator, and it ensured adequately low slip-rate at international gateways.
Note: If primary reference source (PRS) traceability is lost by the NE, it enters holdover mode. In this mode, the NE clock's tracking phase-locked loop (PLL) does not revert to its free-run state; it freezes its control point at the last valid tracking value. The clock accuracy then drifts gradually away from the desired traceable value until the fault is repaired and traceability is restored.

Q. What are the acceptable limits for slip and/or pointer adjustment rates when designing a sync network?

A. When designing a network's synchronization distribution sub-system, the targets for sync performance are zero slips and zero pointer adjustments during normal conditions. In a real-world network, there are enough uncontrolled variables that these targets will not be met over any reasonable time, but it is not acceptable practice to design for a given level of degradation (with the exception of multiple timing island operation, when a worst-case slip-rate of no more than one slip in 72 days between islands is considered negligible). The zero-tolerance design for normal conditions is supported by choosing distribution architectures and clocking components that limit slip-rates and pointer adjustment rates to acceptable levels of degradation during failure (usually double-failure) conditions.

Q. Why is it necessary to spend time and effort on synchronization in telecom networks when the basic requirement is simple, and when computer LANs have never bothered with it?

A. The requirement for PRS traceability of all signals in a synchronous network at all times is certainly simple, but it is deceptively simple. The details of how to provide traceability in a geographically distributed matrix of different types of equipment at different signal levels, under normal and multiple-failure conditions, in a dynamically evolving network, are the concerns of every sync coordinator. Given the number of permutations and combinations of all these factors, the behavior of timing signals in a real-world environment must be described and analyzed statistically. Thus, sync distribution network design is based on minimizing the probability of losing traceability while accepting the reality that this probability can never be zero.

Q. How many stratum 2 and/or stratum 3E TSGs can be chained either in parallel or series from a PRS?

A. There are no defined figures in industry standards. The sync network designer must choose sync distribution architecture and the number of PRSs and then the number and quality of TSGs based on cost-performance trade-offs for the particular network and its services.

Q. Is synchronization required for non-traditional services such as voice-over-IP?

A. The answer to this topical question depends on the performance required (or promised) for the service. Usually, Voice-over-IP is accepted to have a low quality reflecting its low cost (both relative to traditional PSTN voice service). If a high slip-rate and interruptions can be accepted, then the voice terminal clocks could well be free-running. If, however, a high voice quality is the objective (especially if voice-band modems including Fax are to be accommodated) then you must control slip occurrence to a low probability by synchronization to industry standards. You must analyze any new service or delivery method for acceptable performance relative to the expectations of the end-user before you can determine the need for synchronization.

Q. Why is a timing loop so bad, and why is it so difficult to fix?

A. Timing loops are inherently unacceptable because they preclude having the affected NEs synchronized to the PRS. The clock frequencies are traceable to an unpredictable unknown quantity; that is, the hold-in frequency limit of one of the affected NE clocks. By design, this is bound to be well outside the expected accuracy of the clock after several days in holdover, so performance is guaranteed to become severely degraded.
The difficulty in isolating the instigator of a timing loop condition is a function of two factors: first, the cause is unintentional (a lack of diligence in analyzing all fault conditions, or an error in provisioning, for example), so no obvious evidence exists in the network's documentation; second, there are no sync-specific alarms, since each affected NE accepts the situation as normal. Consequently, you must carry out trouble isolation without the usual maintenance tools, relying on knowledge of the sync distribution topology and on an analysis of slip-count and pointer-count data that is not usually automatically correlated.


Q. How do you get the value of an E1 as 2.048 Mbps?

A. A voice channel occupies roughly 0.3–3.4 kHz and is treated as a 4 kHz channel. By the Nyquist criterion it must be sampled at a rate of at least 2f = 8 kHz, and in PCM each sample is encoded as one byte (8 bits). A DS0 therefore provides one 64 kbps channel, and an E1 carries 32 DS0s (32 channels of 64 kbps each).

Since an E1 frame consists of 32 such byte time slots, the E1 rate is

= 2 × 4 kHz × 8 bits × 32 slots
= 2.048 Mbps



OR

PCM multiplexing is carried out with the sampling process, sampling the analog sources sequentially. These
sources may be the nominal 4-kHz voice channels or other information sources that have a 4-kHz bandwidth, such as data or freeze-frame video. The final result of the sampling and subsequent quantization and coding is a series of electrical pulses, a serial bit stream of 1s and 0s that requires some identification or indication of the beginning of a sampling sequence. This identification is necessary so that the far-end receiver knows exactly when the sampling sequence starts. Once the receiver receives the “indication,” it knows a priori (in the case of DS1) that 24 eight-bit slots follow. It synchronizes the receiver. Such identification is carried out by a framing bit, and one full sequence or cycle of samples is called a frame in PCM terminology.
Consider the framing structure of E1, a PCM system using 8-bit coding (i.e., 2^8 = 256 quantizing steps or distinct PCM code words). Each sample is therefore represented by an 8-bit code word consisting of 1s and 0s.

The E1 European PCM system is a 32-channel system. Of the 32 channels, 30 transmit speech (or data) derived from incoming telephone trunks and the remaining 2 channels transmit synchronization-alignment and signaling information. Each channel is allotted an 8-bit time slot (TS), and we tabulate TS 0 through 31 as follows:
TS      Type of information
0       Synchronizing (framing)
1–15    Speech
16      Signaling
17–31   Speech

In TS 0 a synchronizing code or word is transmitted every second frame, occupying digits 2 through 8 as 0011011. In those frames without the synchronizing word, the second bit of TS 0 is frozen at a 1 so that in these frames the synchronizing word cannot be imitated. The remaining bits of TS 0 can be used for the transmission of supervisory information signals. Again, E1 in its primary rate format transmits 32 channels of 8-bit time slots. An E1 frame therefore has 8 × 32 = 256 bits. There is no framing bit; frame alignment is carried out in TS 0.

The E1 bit rate to the line is: 256 bits/frame × 8000 frames/s = 2,048,000 bps, or 2.048 Mbps.
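As a minimal cross-check of the arithmetic, the sketch below recomputes the DS0 and E1 rates from the sampling rate, sample size, and number of time slots given above:

```python
# E1 line rate derived from PCM parameters (values as given in the text).
sampling_rate_hz = 8000      # 4 kHz voice channel sampled at the Nyquist rate (2 x 4 kHz)
bits_per_sample = 8          # one byte per PCM sample
timeslots_per_frame = 32     # 30 speech + framing (TS 0) + signaling (TS 16)

ds0_bps = sampling_rate_hz * bits_per_sample              # 64,000 bps
e1_bps = ds0_bps * timeslots_per_frame                    # 2,048,000 bps
bits_per_frame = bits_per_sample * timeslots_per_frame    # 256 bits, sent 8000 times per second

print(f"DS0 rate: {ds0_bps} bps")
print(f"E1 frame size: {bits_per_frame} bits")
print(f"E1 line rate: {e1_bps} bps ({e1_bps / 1e6} Mbps)")
```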



Question for you: is an electrical E1 signal AC or DC in nature?

Monday, 29 August 2011

Ethernet Testing Parameters


Testing Ethernet Services

Ethernet connections must be tested to ensure that they are operating correctly and performing to the required levels. This is done by testing the bandwidth, the delay, and the loss of frames on the connection. In Ethernet terms these are called Throughput, Latency and Frame Loss.

Throughput

Data throughput is simply the maximum amount of data that can be transported from source to destination. However, the definition and measurement of throughput is complicated by the need to define an acceptable level of quality. For example, if 10% errored or lost frames were deemed acceptable, then the throughput would be measured at a 10% error rate. The generally accepted definition is that throughput should be measured with zero errors or lost frames.

In any given Ethernet system the absolute maximum throughput will be equal to the data rate, e.g. 10 Mbit/s, 100 Mbit/s or 1000 Mbit/s. In practice these figures cannot be achieved because of the effect of frame size. Smaller frames have a lower effective throughput than larger ones because of the addition of the preamble and the inter-packet gap bytes, which do not count as data.
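To make the frame-size effect concrete, the sketch below estimates the maximum achievable data throughput for a few frame sizes on a 1000 Mbit/s link, assuming the usual 8-byte preamble/SFD and 12-byte inter-packet gap overhead per frame:

```python
# Effective Ethernet throughput vs. frame size (line rate and overheads as described above).
LINE_RATE_BPS = 1_000_000_000   # 1000 Mbit/s
PREAMBLE_SFD_BYTES = 8
INTER_PACKET_GAP_BYTES = 12

def max_throughput_bps(frame_size_bytes):
    """Frame rate allowed by the line rate, times the data bits carried per frame."""
    wire_bytes = frame_size_bytes + PREAMBLE_SFD_BYTES + INTER_PACKET_GAP_BYTES
    frames_per_second = LINE_RATE_BPS / (wire_bytes * 8)
    return frames_per_second * frame_size_bytes * 8

for size in (64, 512, 1518):
    print(f"{size:>5}-byte frames: {max_throughput_bps(size) / 1e6:7.1f} Mbit/s")
# 64-byte frames yield roughly 762 Mbit/s; 1518-byte frames come close to line rate.
```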

Latency
Latency is the total time taken for a frame to travel from source to destination. This total time is the sum of the processing delays in the network elements and the propagation delay along the transmission medium. To measure latency, a test frame containing a time stamp is transmitted through the network. The time stamp is then checked when the frame is received. For this to happen, the test frame needs to return to the original test set by means of a loopback (round-trip delay).

Frame Loss
Frame loss is simply the number of frames that were transmitted successfully from the source but never received at the destination. It is usually referred to as the frame loss rate and is expressed as a percentage of the total frames transmitted. For example, if 1000 frames were transmitted but only 900 were received, the frame loss rate would be: (1000 – 900) / 1000 × 100% = 10%. Frames can be lost, or dropped, for a number of reasons, including errors, over-subscription and excessive delay.
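The frame loss rate calculation itself is trivial; a one-line sketch reproducing the worked example:

```python
def frame_loss_rate(frames_sent, frames_received):
    """Frame loss rate as a percentage of frames transmitted."""
    return (frames_sent - frames_received) / frames_sent * 100

print(f"{frame_loss_rate(1000, 900):.0f}%")   # 10%, matching the worked example above
```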

Errors - most layer 2 devices will drop a frame with an incorrect FCS. This means that a single bit error in transmission will result in the entire frame being dropped. For this reason BER, the most fundamental measure of a SONET/SDH service, has no meaning in Ethernet, since the ratio of good to errored bits cannot be ascertained.

Oversubscription - the most common reason for frame loss is oversubscription of the available bandwidth. For example, if two 1000 Mbit/s Ethernet services are mapped into a single 622 Mbit/s SONET/SDH pipe (a common scenario), then the bandwidth limit is quickly reached as the two gigabit Ethernet services are loaded. When the limit is reached, frames may be dropped.

Excessive Delay - the nature of Ethernet networks means that it is possible for frames to be delayed for considerable periods of time. This is important when testing, as the tester is “waiting” for all of the transmitted frames to be received and counted. At some point the tester has to decide that a transmitted frame will not be received and count the frame as lost. The most common time period used to make this decision is the RFC specification of two seconds. Thus any frame received more than two seconds after it is transmitted is counted as lost.






Ethernet Frame explained.

Actually, Ethernet frames look like this (see the figure): Preamble/SFD | Destination Address | Source Address | VLAN Tag (optional) | Length/Type | Data | Frame Check Sequence.

The function of the various parts is as follows:

Preamble/Start of Frame Delimiter, 8 Bytes - alternate ones and zeros for the preamble, 11010101 for the SFD (Start of Frame Delimiter). This allows for receiver synchronisation and marks the start of the frame.

Destination Address, 6 Bytes - the MAC destination address of the frame, usually written in hex, is used to route frames between devices. Some MAC addresses are reserved or have special functions. For example, FF:FF:FF:FF:FF:FF is the broadcast address, which goes to all stations.

Source Address, 6 Bytes - the MAC address of the sending station, usually written in hex. The source address is usually built into a piece of equipment at manufacture. The first three bytes identify the manufacturer and the last three bytes are unique to the equipment. However, there are some devices, test equipment for example, in which the address is changeable.

VLAN Tag, 4 Bytes (optional) - the VLAN tag is optional. If present, it provides a means of separating data into “virtual” LANs, irrespective of MAC address. It also provides a “priority tag” which can be used to implement quality-of-service functions.

Length/Type, 2 Bytes - this field gives either the length of the frame or the type of data being carried in the data field. If the length/type value is 05DC hex (1500) or less, the value represents the length of the data field. If the value is 0600 hex (1536) or greater, it represents the type of protocol in the data field; for example, 0800 hex means the frame is carrying IP, and 809B hex means the frame is carrying AppleTalk.
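A minimal sketch of that decision rule (using the standard 1500/1536 boundary values and a few well-known EtherTypes):

```python
def classify_length_type(value):
    """Interpret the 2-byte Length/Type field of an Ethernet frame."""
    if value <= 0x05DC:          # 1500 or less: payload length in bytes
        return f"length = {value} bytes"
    if value >= 0x0600:          # 1536 or greater: EtherType
        names = {0x0800: "IPv4", 0x809B: "AppleTalk", 0x8100: "VLAN tag", 0x86DD: "IPv6"}
        return f"EtherType 0x{value:04X} ({names.get(value, 'unknown protocol')})"
    return "undefined (reserved range between 1501 and 1535)"

print(classify_length_type(0x0800))   # EtherType 0x0800 (IPv4)
print(classify_length_type(46))       # length = 46 bytes
```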

Data, 46 to 1500 Bytes - the client data to be transported. This would normally include some higher-layer protocol, such as IP or AppleTalk.

Frame Check Sequence, 4 Bytes - the check sequence is calculated over the whole frame by the transmitting device. The receiving device recalculates the checksum and ensures it matches the one inserted by the transmitter. Most types of Ethernet equipment will drop a frame with an incorrect or missing FCS.

The minimum legal frame size, including the FCS but excluding the preamble, is 64 bytes. Frames below the minimum size are known as “runts” and are discarded by most Ethernet equipment.
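The FCS is a CRC-32 over the frame contents; Python's zlib.crc32 uses the same polynomial as IEEE 802.3, so a toy check (ignoring the on-the-wire bit ordering of the FCS field) can be sketched as follows:

```python
import zlib

def ethernet_fcs(frame_without_fcs: bytes) -> int:
    """CRC-32 over the frame fields (destination address through data)."""
    return zlib.crc32(frame_without_fcs) & 0xFFFFFFFF

def check_frame(frame_without_fcs: bytes, received_fcs: int) -> bool:
    """Recompute the checksum and compare it with the value inserted by the transmitter."""
    return ethernet_fcs(frame_without_fcs) == received_fcs

# Toy example: made-up bytes stand in for a real frame's address and data fields.
frame = bytes.fromhex("ffffffffffff") + bytes.fromhex("001122334455") + b"hello"
fcs = ethernet_fcs(frame)
print(f"FCS: 0x{fcs:08X}, check passes: {check_frame(frame, fcs)}")
print(f"Corrupted frame check passes: {check_frame(frame + b'!', fcs)}")   # False
```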
The maximum standard frame size is 1522 bytes if VLAN tagging is being used and 1518 bytes if it is not. It is possible to use frames larger than the maximum size. Such frames are called “jumbo frames” and are supported by some manufacturers' equipment in various sizes up to 64 Kbyte. Jumbo frames are identical in form to standard frames but with a bigger data field. This produces a better ratio of “overhead” bytes to data bytes and hence more efficient transmission. Jumbos are non-standard and manufacturer-specific, and therefore interoperability cannot be guaranteed.

Frames are transmitted from left to right, least significant bit first. Frames are separated by an “inter-packet gap”. The minimum length of the inter-packet gap is 12 bytes. The inter-packet gap exists because, in a half-duplex system, time is needed for the medium to go quiet before the next frame starts transmission. The inter-packet gap is not really needed for full-duplex operation but is still used for consistency.

Auto-Negotiation
Most Ethernet devices support auto-negotiation. When two devices are first connected together, they send information to each other to “advertise” their capabilities. The devices then configure themselves to the highest common setting. The capabilities negotiated are speed, full- or half-duplex operation, and the use of flow control.