Thursday, 30 October 2008

CDMA TECHNOLOGY


History of CDMA
The Cellular Challenge
The world's first cellular networks were introduced in the early 1980s, using analog
radio transmission technologies such as AMPS (Advanced Mobile Phone System).
Within a few years, cellular systems began to hit a capacity ceiling as millions of new
subscribers signed up for service, demanding more and more airtime. Dropped calls and
network busy signals became common in many areas.
To accommodate more traffic within a limited amount of radio spectrum, the
industry developed a new set of digital wireless technologies called TDMA (Time
Division Multiple Access) and GSM (Global System for Mobile Communications). TDMA and GSM used
a time-sharing protocol to provide three to four times more capacity than analog systems.
But just as TDMA was being standardized, an even better solution was found in CDMA.
Commercial Development
The founders of QUALCOMM realized that CDMA technology could be used in
commercial cellular communications to make even better use of the radio spectrum than
other technologies. They developed the key advances that made CDMA suitable for
cellular, then demonstrated a working prototype and began to license the technology to
telecom equipment manufacturers.
The first CDMA networks were commercially launched in 1995, and provided
roughly 10 times more capacity than analog networks - far more than TDMA or GSM.
Since then, CDMA has become the fastest-growing of all wireless technologies, with
over 100 million subscribers worldwide. In addition to supporting more traffic, CDMA
brings many other benefits to carriers and consumers, including better voice quality,
broader coverage and stronger security.
The world is demanding more from wireless communication technologies than ever
before. More people around the world are subscribing to wireless services and consumers
are using their phones more frequently. Add in exciting Third-Generation (3G) wireless
data services and applications - such as wireless email, web, digital picture
taking/sending and assisted-GPS position location applications - and wireless networks
are asked to do much more than just a few years ago. And these networks will be asked to
do more tomorrow.
This is where CDMA technology fits in. CDMA consistently provides better capacity
for voice and data communications than other commercial mobile technologies, allowing
more subscribers to connect at any given time, and it is the common platform on which
3G technologies are built.
CDMA is a "spread spectrum" technology, allowing many users to occupy the same
time and frequency allocations in a given band/space. As its name implies, CDMA
assigns unique codes to each communication to differentiate it from others in the same
spectrum.
Brief Working of CDMA
CDMA takes an entirely different approach from TDMA. CDMA, after digitizing
data, spreads it out over the entire available bandwidth. Multiple calls are overlaid on
each other on the channel, with each assigned a unique sequence code. CDMA is a form
of spread spectrum, which simply means that data is sent in small pieces over a number
of the discrete frequencies available for use at any time in the specified range.




In CDMA, each phone's data has a unique code.
All of the users transmit in the same wide-band chunk of spectrum. Each user's signal is
spread over the entire bandwidth by a unique spreading code. At the receiver, that same
unique code is used to recover the signal. Because CDMA systems need to put an
accurate time-stamp on each piece of a signal, they reference the GPS system for this
information. Between eight and 10 separate calls can be carried in the same channel
space as one analog AMPS call.
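As an illustration of how unique codes keep overlaid calls separable, here is a minimal sketch, assuming two users with ideal chip synchronization and made-up 4-chip orthogonal codes (real systems use long PN sequences and must handle noise, timing and RF carriers):

import numpy as np

# Two users share the same band at the same time; each spreads its bits with
# its own code. The codes' dot product is zero, so the users don't interfere.
code_a = np.array([1,  1,  1,  1])
code_b = np.array([1, -1,  1, -1])

bits_a = np.array([ 1, -1,  1])   # user A data as +1/-1
bits_b = np.array([-1, -1,  1])   # user B data

# Overlay both spread signals on the shared channel.
channel = np.concatenate([a * code_a + b * code_b
                          for a, b in zip(bits_a, bits_b)])

# Receiver for user A: correlate each 4-chip block with code_a.
recovered_a = [int(np.sign(block @ code_a))
               for block in channel.reshape(-1, 4)]
print(recovered_a)   # [1, -1, 1] -- user A's bits, despite the overlay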
Spread Spectrum Communications
CDMA is a form of Direct Sequence Spread Spectrum communications. In general,
Spread Spectrum communications is distinguished by three key elements:
1. The signal occupies a bandwidth much greater than that which is necessary to send the
information. This results in many benefits, such as immunity to interference and jamming
and multi-user access, which we’ll discuss later on.
2. The bandwidth is spread by means of a code which is independent of the data. The
independence of the code distinguishes this from standard modulation schemes in which
the data modulation will always spread the spectrum somewhat.
3. The receiver synchronizes to the code to recover the data. The use of an independent
code and synchronous reception allows multiple users to access the same frequency band
at the same time.
In order to protect the signal, the code used is pseudo-random. It appears random, but
is actually deterministic, so that the receiver can reconstruct the code for synchronous
detection. This pseudo-random code is also called pseudo-noise (PN).
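One standard way to generate such a pseudo-noise code is a linear feedback shift register (LFSR). The sketch below is illustrative only, with an assumed 5-bit register and taps chosen to give a maximal-length sequence; it is deterministic (the same seed always reproduces the same chips) yet statistically noise-like:

def pn_sequence(taps=(5, 3), seed=0b10101):
    n = max(taps)
    state = seed
    period = (1 << n) - 1          # m-sequence repeats every 2^n - 1 chips
    chips = []
    for _ in range(period):
        chips.append(state & 1)    # output the low bit
        fb = 0
        for t in taps:             # XOR the tapped bits for feedback
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (n - 1))
    return chips

chips = pn_sequence()
print(len(chips), sum(chips))      # 31 chips per period, 16 of them ones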
Three Types of Spread Spectrum Communications
Frequency hopping.
The signal is rapidly switched between different frequencies within the hopping
bandwidth pseudo-randomly, and the receiver knows before hand where to find the signal
at any given time.
Time hopping.
The signal is transmitted in short bursts pseudo-randomly, and the receiver knows
beforehand when to expect the burst.
Direct sequence.
The digital data is directly coded at a much higher frequency. The code is generated
pseudo-randomly, the receiver knows how to generate the same code, and correlates the
received signal with that code to extract the data.
Direct Sequence Spread Spectrum


CDMA is a Direct Sequence Spread Spectrum system. The CDMA system works directly
on 64 kbit/sec digital signals. These signals can be digitized voice, ISDN channels,
modem data, etc.
Figure 1 shows a simplified Direct Sequence Spread Spectrum system. For clarity,
the figure shows one channel operating in one direction only.
Signal transmission consists of the following steps:
1. A pseudo-random code is generated, different for each channel and each successive
connection.
2. The Information data modulates the pseudo-random code (the Information data is
“spread”).
3. The resulting signal modulates a carrier.
4. The modulated carrier is amplified and broadcast.
Signal reception consists of the following steps:
1. The carrier is received and amplified.
2. The received signal is mixed with a local carrier to recover the spread digital signal.
3. A pseudo-random code is generated, matching the anticipated signal.
4. The receiver acquires the received code and phase locks its own code to it.
5. The received signal is correlated with the generated code, extracting the Information
data.
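A baseband sketch of those steps (the RF carrier steps are assumed ideal and omitted, and the spreading code here is a random ±1 sequence standing in for a true PN generator):

import numpy as np

rng = np.random.default_rng(1)

pn = rng.choice([-1, 1], size=64)       # step 1: pseudo-random code, 64 chips/bit
data = np.array([1, -1, -1, 1])         # information bits as +1/-1
spread = np.concatenate([bit * pn for bit in data])   # step 2: spreading

# Channel: additive noise stands in for everything between the antennas.
received = spread + 0.5 * rng.standard_normal(spread.size)

# Reception steps 3-5: regenerate the same code, then correlate per bit.
recovered = [int(np.sign(block @ pn)) for block in received.reshape(-1, 64)]
print(recovered)   # [1, -1, -1, 1]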
Implementing CDMA Technology
The following sections describe how a system might implement the steps illustrated in
Figure 1.
Input data
CDMA works on Information data from several possible sources, such as digitized voice
or ISDN channels. Data rates can vary; here are some examples:



The system works with 64 kbit/s data, but can accept input rates of 8, 16, 32, or 64
kbit/s. Inputs of less than 64 kbit/s are padded with extra bits to bring them up to
64 kbit/s.
For inputs of 8, 16, 32, or 64 kbit/s, the system applies Forward Error Correction
(FEC) coding, which doubles the bit rate, up to 128 kbit/s. The Complex Modulation
scheme (which we'll discuss in more detail later) transmits two bits at a time, in two-bit
symbols. For inputs of less than 64 kbit/s, each symbol is repeated to bring the
transmission rate up to 64 kilosymbols/sec. Each component of the complex signal
carries one bit of the two-bit symbol, at 64 kbit/s, as shown below.
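For example, tracing a 16 kbit/s input through that chain (a worked sketch of the arithmetic above; the 4x repetition factor is inferred from the stated rates):

input_rate = 16_000                  # bits/s, one of the accepted rates
fec_rate = 2 * input_rate            # FEC doubles the bit rate -> 32 kbit/s
symbol_rate = fec_rate // 2          # 2 bits per complex symbol -> 16 ksym/s
repeat = 64_000 // symbol_rate       # repeat symbols to reach 64 ksym/s -> 4x
print(fec_rate, symbol_rate, repeat) # 32000 16000 4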

Generating Pseudo-Random Codes
For each channel the base station generates a unique code that changes for every
connection. The base station adds together all the coded transmissions for every
subscriber. The subscriber unit correctly generates its own matching code and uses it to
extract the appropriate signals. Note that each subscriber uses several independent
channels.
In order for all this to occur, the pseudo-random code must have the following properties:
1. It must be deterministic. The subscriber station must be able to independently generate
the code that matches the base station code.
2. It must appear random to a listener without prior knowledge of the code (i.e. it has the
statistical properties of sampled white noise).
3. The cross-correlation between any two codes must be small (see below for more
information on code correlation).
4. The code must have a long period (i.e. a long time before the code repeats itself).
Code Correlation
In this context, correlation has a specific mathematical meaning. In general the
correlation function has these properties:
• It equals 1 if the two codes are identical
• It equals 0 if the two codes have nothing in common
Intermediate values indicate how much the codes have in common. The more they have
in common, the harder it is for the receiver to extract the appropriate signal.
There are two correlation functions:
Cross-Correlation: The correlation of two different codes. As we’ve said, this should
be as small as possible.
Auto-Correlation: The correlation of a code with a time-delayed version of itself. In
order to reject multi-path interference, this function should equal 0 for any time delay
other than zero.
The receiver uses cross-correlation to separate the appropriate signal from signals
using other codes, and auto-correlation to reject multi-path interference.
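The sketch below exercises both functions on short made-up ±1 codes; real spreading codes are chosen so that cross-correlation and off-zero auto-correlation stay near zero:

import numpy as np

def correlate(a, b):
    return float(a @ b) / len(a)   # 1.0 if identical, 0.0 if orthogonal

code_a = np.array([1, -1,  1,  1, -1, -1,  1, -1])
code_b = np.array([1,  1, -1,  1, -1,  1, -1, -1])

print(correlate(code_a, code_a))               # 1.0  (identical codes)
print(correlate(code_a, code_b))               # 0.0  (nothing in common)
print(correlate(code_a, np.roll(code_a, 1)))   # auto-correlation at delay 1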




Figure 2a. Pseudo-Noise Spreading





Figure 2b. Frequency Spreading
Pseudo-Noise Spreading
The FEC coded Information data modulates the pseudo-random code, as shown in Figure
2a. Some terminology related to the pseudo-random code:
• Chipping Frequency (fc): the bit rate of the PN code.
• Information rate (fi): the bit rate of the digital data.
• Chip: One bit of the PN code.
• Epoch: The length of time before the code starts repeating itself (the period of the
code). The epoch must be longer than the round trip propagation delay (The epoch
is on the order of several seconds).
Figure 2b shows the process of frequency spreading. In general, the bandwidth of a
digital signal is twice its bit rate. The bandwidths of the information data (fi) and the PN
code are shown together. The bandwidth of the combination of the two, for fc>fi, can be
approximated by the bandwidth of the PN code.

Figure 3a. Complex Modulator

Figure 3b. Complex Modulation





Transmitting Data
The resultant coded signal next modulates an RF carrier for transmission using
Quadrature Phase Shift Keying (QPSK). QPSK uses four different states to encode each
symbol. The four states are phase shifts of the carrier spaced 90° apart. By convention,
the phase shifts are 45, 135, 225, and 315 degrees. Since there are four possible states
used to encode binary information, each state represents two bits. This two bit “word” is
called a symbol. Figure 3 shows in general how QPSK works.
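As a sketch of that mapping, assuming one particular Gray-coded assignment of bit pairs to the four phases named above (several equivalent assignments are possible):

import cmath
import math

# Bit pair -> carrier phase in degrees (the pairing itself is an assumption).
PHASES = {(0, 0): 45, (0, 1): 135, (1, 1): 225, (1, 0): 315}

def qpsk_symbol(bits):
    """Return the unit-amplitude carrier phasor for a two-bit symbol."""
    return cmath.exp(1j * math.radians(PHASES[bits]))

for bits, deg in PHASES.items():
    s = qpsk_symbol(bits)
    print(bits, deg, round(s.real, 3), round(s.imag, 3))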

GSM Technology

GSM (Global System for Mobile communications: originally from Groupe Spécial Mobile) is the most popular standard for mobile phones in the world. Its promoter, the GSM Association, estimates that 82% of the global mobile market uses the standard. GSM is used by over 3 billion people across more than 212 countries and territories. Its ubiquity makes international roaming very common between mobile phone operators, enabling subscribers to use their phones in many parts of the world. GSM differs from its predecessors in that both signaling and speech channels are digital, and thus is considered a second generation (2G) mobile phone system. This has also meant that data communication was easy to build into the system.
The ubiquity of the GSM standard has been an advantage to both consumers (who benefit from the ability to roam and switch carriers without switching phones) and also to network operators (who can choose equipment from any of the many vendors implementing GSM[4]). GSM also pioneered a low-cost (to the network carrier) alternative to voice calls, the Short message service (SMS, also called "text messaging"), which is now supported on other mobile standards as well. Another advantage is that the standard includes one worldwide Emergency telephone number, 112[5]. This makes it easier for international travellers to connect to emergency services without knowing the local emergency number.
Newer versions of the standard were backward-compatible with the original GSM phones. For example, Release '97 of the standard added packet data capabilities, by means of General Packet Radio Service (GPRS). Release '99 introduced higher speed data transmission using Enhanced Data Rates for GSM Evolution (EDGE).

History
In 1982, the European Conference of Postal and Telecommunications Administrations (CEPT) created the Groupe Spécial Mobile (GSM) to develop a standard for a mobile telephone system that could be used across Europe. In 1987, a memorandum of understanding was signed by 13 countries to develop a common cellular telephone system across Europe.
In 1989, GSM responsibility was transferred to the European Telecommunications Standards Institute (ETSI) and phase I of the GSM specifications were published in 1990. The first GSM network was launched in 1991 by Radiolinja in Finland with joint technical infrastructure maintenance from Ericsson. By the end of 1993, over a million subscribers were using GSM phone networks being operated by 70 carriers across 48 countries.



Technical details
GSM is a cellular network, which means that mobile phones connect to it by searching for cells in the immediate vicinity. GSM networks operate in four different frequency ranges. Most GSM networks operate in the 900 MHz or 1800 MHz bands. Some countries in the Americas (including Canada and the United States) use the 850 MHz and 1900 MHz bands because the 900 and 1800 MHz frequency bands were already allocated.
The rarer 400 and 450 MHz frequency bands are assigned in some countries, notably Scandinavia, where these frequencies were previously used for first-generation systems.
GSM-900 uses 890–915 MHz to send information from the mobile station to the base station (uplink) and 935–960 MHz for the other direction (downlink), providing 124 RF channels (channel numbers 1 to 124) spaced at 200 kHz. Duplex spacing of 45 MHz is used. In some countries the GSM-900 band has been extended to cover a larger frequency range. This 'extended GSM', E-GSM, uses 880–915 MHz (uplink) and 925–960 MHz (downlink), adding 50 channels (channel numbers 975 to 1023 and 0) to the original GSM-900 band.
Time division multiplexing is used to allow eight full-rate or sixteen half-rate speech channels per radio frequency channel. There are eight radio timeslots (giving eight burst periods) grouped into what is called a TDMA frame. Half rate channels use alternate frames in the same timeslot. The channel data rate for all 8 channels is 270.833 kbit/s, and the frame duration is 4.615 ms.
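Those band edges and the 200 kHz spacing give the usual channel-number-to-frequency arithmetic. The sketch below follows the figures in the text (uplink = 890 MHz + 0.2 MHz × channel number, downlink 45 MHz higher, E-GSM channels wrapping below 890 MHz); treat it as illustrative rather than a complete ARFCN table:

def gsm900_freqs_mhz(arfcn: int):
    """Return (uplink, downlink) carrier frequencies in MHz for GSM-900."""
    if 1 <= arfcn <= 124:                        # primary GSM-900 band
        uplink = 890.0 + 0.2 * arfcn
    elif arfcn == 0 or 975 <= arfcn <= 1023:     # E-GSM extension channels
        uplink = 890.0 + 0.2 * (arfcn - 1024 if arfcn else 0)
    else:
        raise ValueError("not a GSM-900/E-GSM channel number")
    return uplink, uplink + 45.0                 # 45 MHz duplex spacing

print(gsm900_freqs_mhz(1))     # (890.2, 935.2)
print(gsm900_freqs_mhz(124))   # (914.8, 959.8)
print(gsm900_freqs_mhz(975))   # (880.2, 925.2)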
The transmission power in the handset is limited to a maximum of 2 watts in GSM850/900 and 1 watt in GSM1800/1900.
GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 5.6 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, called Half Rate (5.6 kbit/s) and Full Rate (13 kbit/s). These used a system based upon linear predictive coding (LPC). In addition to being efficient with bitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal.
GSM was further enhanced in 1997[11] with the Enhanced Full Rate (EFR) codec, a 12.2 kbit/s codec that uses a full rate channel. Finally, with the development of UMTS, EFR was refactored into a variable-rate codec called AMR-Narrowband, which is high quality and robust against interference when used on full rate channels, and less robust but still relatively high quality when used in good radio conditions on half-rate channels.
There are five different cell sizes in a GSM network—macro, micro, pico, femto and umbrella cells. The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where the base station antenna is installed on a mast or a building above average roof top level. Micro cells are cells whose antenna height is under average roof top level; they are typically used in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Femtocells are cells designed for use in residential or small business environments and connect to the service provider’s network via a broadband internet connection. Umbrella cells are used to cover shadowed regions of smaller cells and fill in gaps in coverage between those cells.
Cell horizontal radius varies depending on antenna height, antenna gain and propagation conditions from a couple of hundred meters to several tens of kilometres. The longest distance the GSM specification supports in practical use is 35 kilometres (22 mi). There are also several implementations of the concept of an extended cell, where the cell radius could be double or even more, depending on the antenna system, the type of terrain and the timing advance.
Indoor coverage is also supported by GSM and may be achieved by using an indoor picocell base station, or an indoor repeater with distributed indoor antennas fed through power splitters, to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. These are typically deployed when a lot of call capacity is needed indoors, for example in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of the radio signals from nearby cells.
The modulation used in GSM is Gaussian minimum-shift keying (GMSK), a kind of continuous-phase frequency shift keying. In GMSK, the signal to be modulated onto the carrier is first smoothed with a Gaussian low-pass filter prior to being fed to a frequency modulator, which greatly reduces the interference to neighboring channels (adjacent channel interference).



Network structure
The network behind the GSM system seen by the customer is large and complicated in order to provide all of the services which are required. It is divided into a number of sections:
• the Base Station Subsystem (the base stations and their controllers);
• the Network and Switching Subsystem (the part of the network most similar to a fixed network), sometimes also just called the core network;
• the GPRS Core Network (the optional part which allows packet-based Internet connections).
All of the elements in the system combine to produce many GSM services such as voice calls and SMS.
The BSS
The Base Station Subsystem is shown containing the Base Station Controller (BSC) and the Base Transceiver Station (BTS) connected together on the A-bis interface. The Packet Control Unit (PCU) is also shown connected to the BTS although the exact position of this depends on the vendor's architecture.
The BSS is connected by the Air Interface or Um to the mobile & is connected by the A interface to the NSS.

The NSS
The Network and Switching Subsystem is shown containing the MSC connected via the SS7 network to the HLR. The AUC and EIR, although technically separate functions from the HLR, are shown together since combining them is almost standard in all vendors' networks.
The NSS is connected by the A interface to the BSS. It has a direct connection to the PSTN from the MSC. There is also a connection to the Packet Core (called the Gs) although this is optional and not always implemented.

The GPRS Core Network
The GPRS Core Network shown here is simplified to just have the SGSN (connected to the BSS by the Gb interface) and the GGSN. The two are connected together by a private IP network called the GPRS backbone shown as the Gn Reference Point.

BTS Configuration Diagram


The main equipment inside the BTS is configured as follows:
1. Main Processor Unit
2. Clock Source
3. Interface Unit
4. Base band Unit
5. Power Supply Unit
6. RF Unit
7. Antenna

Main Processor Unit
The functions of this unit are as a brain for the BTS:
· BTS initialization and self-testing
· configuration
· O&M signaling
· software download
· collection and management of external and internal alarms
Clock Source Unit
The basic function of this unit is like a heart for the BTS:
· deliver a stable clocking pulse to all digital equipment inside the BTS.
Interface Unit
The interface unit translates source data carried on a specific electrical standard (E1, T1 or IP) into digital data, which is then delivered to the other digital units for further processing.
Base Band Unit
In the baseband unit, the digital data is processed according to the GSM standard, producing data that is ready to be fed to the RF unit.
Power Supply Unit
The power supply unit is like a stomach for the BTS: it produces power for all the equipment in the BTS, taking AC voltage as input (like food for a human) and producing DC voltage as output. The power consumption of one macro outdoor BTS with six transceiver units is around 1500 watts.
RF Unit
The RF unit converts the digital signal to a radio frequency (RF) signal (the air interface signal) following the GSM standard. At this point the signal is still electrical.
Antenna Unit
The antenna converts the electrical signal into an electromagnetic signal. It is a very important unit for shaping the cell: the combination of horizontal/vertical polarization, antenna height and antenna tilt determines the cell's radiation pattern. A real example of a BTS is shown below:




GSM - Architecture - Configuration

A GSM network is composed of several functional entities, whose functions and interfaces are defined.
Figure 1 shows the layout of a generic GSM network. The GSM network can be divided into three broad parts. The Mobile Station is carried by the subscriber, the Base Station Subsystem controls the radio link with the Mobile Station. The Network Subsystem, the main part of which is the Mobile services Switching Center, performs the switching of calls between the mobile and other fixed or mobile network users, as well as management of mobile services, such as authentication. Not shown is the Operations and Maintenance center, which oversees the proper operation and setup of the network. The Mobile Station and the Base Station Subsystem communicate across the Um interface, also known as the air interface or radio link. The Base Station Subsystem communicates with the Mobile service Switching Center across the A interface.

SIM: Subscriber Identity Module
MS: Mobile Station
BTS: Base Transceiver Station
BSC: Base Station Controller
MSC: Mobile services Switching Center
VLR: Visitor Location Register
HLR: Home Location Register
EIR: Equipment Identity Register
AC: Authentication Center
PSTN: Public Switched Telephone Network
ISDN: Integrated Services Digital Network
FIGURE 1
Mobile Station
The mobile station (MS) consists of the physical equipment, such as the radio transceiver, display and digital signal processors, and a smart card called the Subscriber Identity Module (SIM). The SIM provides personal mobility, so that the user can have access to all subscribed services irrespective of both the location of the terminal and the use of a specific terminal. By inserting the SIM card into another GSM cellular phone, the user is able to receive calls at that phone, make calls from that phone, or receive other subscribed services.
The mobile equipment is uniquely identified by the International Mobile Equipment Identity (IMEI). The SIM card contains the International Mobile Subscriber Identity (IMSI), identifying the subscriber, a secret key for authentication, and other user information. The IMEI and the IMSI are independent, thereby providing personal mobility. The SIM card may be protected against unauthorized use by a password or personal identity number.
Base Station Subsystem
The Base Station Subsystem is composed of two parts, the Base Transceiver Station (BTS) and the Base Station Controller (BSC). These communicate across the specified A-bis interface, allowing (as in the rest of the system) operation between components made by different suppliers.
The Base Transceiver Station houses the radio transceivers that define a cell and handles the radio-link protocols with the Mobile Station. In a large urban area, there will potentially be a large number of BTSs deployed. The requirements for a BTS are ruggedness, reliability, portability, and minimum cost.
The Base Station Controller manages the radio resources for one or more BTSs. It handles radio-channel setup, frequency hopping, and handovers, as described below. The BSC is the connection between the mobile and the Mobile service Switching Center (MSC). The BSC also translates the 13 kbps voice channel used over the radio link to the standard 64 kbps channel used by the Public Switched Telephone Network or ISDN.
Network Subsystem
The central component of the Network Subsystem is the Mobile services Switching Center (MSC). It acts like a normal switching node of the PSTN or ISDN, and in addition provides all the functionality needed to handle a mobile subscriber, such as registration, authentication, location updating, handovers, and call routing to a roaming subscriber. These services are provided in conjunction with several functional entities, which together form the Network Subsystem. The MSC provides the connection to the public fixed network (PSTN or ISDN), and signalling between functional entities uses the ITU-T Signalling System Number 7 (SS7), used in ISDN and widely used in current public networks.
The Home Location Register (HLR) and Visitor Location Register (VLR), together with the MSC, provide the call-routing and (possibly international) roaming capabilities of GSM. The HLR contains all the administrative information of each subscriber registered in the corresponding GSM network, along with the current location of the mobile. The current location of the mobile is in the form of a Mobile Station Roaming Number (MSRN) which is a regular ISDN number used to route a call to the MSC where the mobile is currently located. There is logically one HLR per GSM network, although it may be implemented as a distributed database.
The Visitor Location Register contains selected administrative information from the HLR, necessary for call control and provision of the subscribed services, for each mobile currently located in the geographical area controlled by the VLR. Although each functional entity can be implemented as an independent unit, most manufacturers of switching equipment implement one VLR together with one MSC, so that the geographical area controlled by the MSC corresponds to that controlled by the VLR, simplifying the signalling required. Note that the MSC contains no information about particular mobile stations - this information is stored in the location registers.
The other two registers are used for authentication and security purposes. The Equipment Identity Register (EIR) is a database that contains a list of all valid mobile equipment on the network, where each mobile station is identified by its International Mobile Equipment Identity (IMEI). An IMEI is marked as invalid if it has been reported stolen or is not type approved. The Authentication Center is a protected database that stores a copy of the secret key stored in each subscriber's SIM card, which is used for authentication and ciphering of the radio channel.
ADSL Technology

ADSL is a modem technology on the access network, which changes the existing infrastructure of the copper pair from the customer's house and the entire communications network, into a broadband network from end to end. ADSL enables, under optimum conditions, the transfer of multimedia, video, audio, high-speed Internet, by means of the existing access network up to a speed of 8 Mbps from the exchange to the subscriber, and up to 768 Kbps from the subscriber to the exchange (hence the name "asymmetric subscriber line").
The basic idea behind the technology is the need to transfer large amounts of information from the exchange to the subscriber's home (downloads of games, movies, etc.) while in the upstream channel (from the subscriber's home to the exchange) a slower channel is sufficient, enabling communication with the content provider, sending emails or uploading to FTP servers.
ADSL technology uses the existing copper infrastructure deployed all over the country, making the broadband network possible without having to set up a new infrastructure. The technology enables maximum utilization of the typical bandwidth of the copper lines by means of complex data processing and encoding. Instead of using frequencies up to 4 kHz, as was done until now, we use a range of frequencies between 0 kHz and 1.1 MHz, where standard ADSL systems use 256 frequency channels (for the information moving downstream from the exchange to the subscriber and for the upstream channel) with a bandwidth of 4 kHz per channel, thus enabling the transfer of much more information.

Technical Characteristics of ADSL
• Asymmetric distribution of the rate: up to 8 Mbps on the downstream channel and up to 768 Kbps on the upstream channel.
• The range of frequencies is higher than the basic telephone frequency, up to 1 MHz.
• It is enabled on a regular analog telephone line or on an ISDN line.
• All the features of the line are maintained (such as conference call, call waiting, etc.).
• It is possible to surf the Internet and talk on the same line simultaneously (the use of filters does away with the need for a separate line for the Internet).

Asymmetric Digital Subscriber Line (ADSL) is a form of DSL, a data communications technology that enables faster data transmission over copper telephone lines than a conventional voiceband modem can provide. It does this by utilizing frequencies that are not used by a voice telephone call. A splitter (or microfilter) allows a single telephone connection to be used for both ADSL service and voice calls at the same time. Because phone lines vary in quality and were not originally engineered with DSL in mind, it can generally only be used over short distances, typically less than 4 km[1].
At the telephone exchange the line generally terminates at a DSLAM, where another frequency splitter separates the voice band signal for the conventional phone network. Data carried by the ADSL is typically routed over the telephone company's data network and eventually reaches a conventional Internet network. In the UK, under British Telecom, the data network in question is its ATM network, which in turn sends it to its IP network, IP Colossus.

Explanation
The distinguishing characteristic of ADSL over other forms of DSL is that the volume of data flow is greater in one direction than the other, i.e. it is asymmetric. Providers usually market ADSL as a service for consumers to connect to the Internet in a relatively passive mode: able to use the higher speed direction for the "download" from the Internet but not needing to run servers that would require high speed in the other direction.
There are both technical and marketing reasons why ADSL is in many places the most common type offered to home users. On the technical side, there is likely to be more crosstalk from other circuits at the DSLAM end (where the wires from many local loops are close to each other) than at the customer premises. Thus the upload signal is weakest at the noisiest part of the local loop, while the download signal is strongest at the noisiest part of the local loop. It therefore makes technical sense to have the DSLAM transmit at a higher bit rate than does the modem on the customer end. Since the typical home user in fact does prefer a higher download speed, the telephone companies chose to make a virtue out of necessity, hence ADSL. On the marketing side, limiting upload speeds limits the attractiveness of this service to business customers, often causing them to purchase higher cost Digital Signal 1 services instead. In this fashion, it segments the digital communications market between business and home users.

On the wire
Currently, most ADSL communication is full-duplex. Full-duplex ADSL communication is usually achieved on a wire pair by either frequency-division duplex (FDD), echo-cancelling duplex (ECD), or time-division duplexing (TDD). FDD uses two separate frequency bands, referred to as the upstream and downstream bands. The upstream band is used for communication from the end user to the telephone central office. The downstream band is used for communicating from the central office to the end user.

Frequency plan for ADSL. The red area is the frequency range used by normal voice telephony (PSTN), the green (upstream) and blue (downstream) areas are used for ADSL.
With standard ADSL (annex A), the band from 25.875 kHz to 138 kHz is used for upstream communication, while 138 kHz – 1104 kHz is used for downstream communication. Each of these is further divided into smaller frequency channels of 4.3125 kHz. During initial training, the ADSL modem tests which of the available channels have an acceptable signal-to-noise ratio. The distance from the telephone exchange, noise on the copper wire, or interference from AM radio stations may introduce errors on some frequencies. By keeping the channels small, a high error rate on one frequency thus need not render the line unusable: the channel will not be used, merely resulting in reduced throughput on an otherwise functional ADSL connection.
Vendors may support usage of higher frequencies as a proprietary extension to the standard. However, this requires matching vendor-supplied equipment on both ends of the line, and will likely result in crosstalk problems that affect other lines in the same bundle.
There is a direct relationship between the number of channels available and the throughput capacity of the ADSL connection. The exact data capacity per channel depends on the modulation method used.
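As a quick check of the channel arithmetic above (bin indices are approximate; exact edge bins vary by implementation):

BIN_HZ = 4312.5        # DMT sub-channel spacing from the text (4.3125 kHz)

def bin_center_khz(n: int) -> float:
    return n * BIN_HZ / 1000.0

print(bin_center_khz(6))     # 25.875 kHz -- bottom of the upstream band
print(bin_center_khz(32))    # 138.0 kHz  -- the upstream/downstream split
print(bin_center_khz(256))   # 1104.0 kHz -- top of the downstream band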

Modulation
ADSL initially existed in two flavours (similar to VDSL), namely CAP and DMT. CAP was the de facto standard for ADSL deployments up until 1996, deployed in 90 percent of ADSL installs at the time. However, DMT was chosen for the first ITU-T ADSL standards, G.992.1 and G.992.2 (also called G.dmt and G.lite respectively). Therefore all modern installations of ADSL are based on the DMT modulation scheme.

ADSL standards
Annexes J and M shift the upstream/downstream frequency split up to 276 kHz (from 138 kHz used in the commonly deployed annex A) in order to boost upstream rates. Additionally, the "all-digital-loop" variants of ADSL2 and ADSL2+ (annexes I and J) support an extra 256 kbit/s of upstream if the bandwidth normally used for POTS voice calls is allocated for ADSL usage.
While the ADSL access utilizes the 1.1 MHz band, ADSL2+ utilizes the 2.2 MHz band.
The downstream and upstream rates displayed are theoretical maxima. Note also that because Digital subscriber line access multiplexers and ADSL modems may have been implemented based on differing or incomplete standards some manufacturers may advertise different speeds. For example, Ericsson has several devices that support non-standard upstream speeds of up to 2 Mbit/s in ADSL2 and ADSL2+.
Installation issues
Due to the way it uses the frequency spectrum, ADSL deployment presents some issues. It is necessary to install appropriate frequency filters at the customer's premises, to avoid interferences with the voice service, while at the same time taking care to keep a clean signal level for the ADSL connection.
In the early days of DSL, installation required a technician to visit the premises. A splitter was installed near the demarcation point, from which a dedicated data line was installed. This way, the DSL signal is separated earlier and is not attenuated inside the customer premises. However, this procedure is costly, and also caused problems with customers complaining about having to wait for the technician to perform the installation. As a result, many DSL vendors started offering a self-install option, in which they ship equipment and instructions to the customer. Instead of separating the DSL signal at the demarcation point, the opposite is done: the DSL signal is filtered at each phone outlet by use of a low-pass filter for voice and a high-pass filter for data, usually enclosed in what is known as a microfilter. This microfilter can be plugged directly into any phone jack, and does not require any rewiring at the customer's premises.
A side effect of the move to the self-install model is that the DSL signal can be degraded, especially if more than 5 voiceband devices are connected to the line. The DSL signal is now present on all telephone wiring in the building, causing attenuation and echo. A way to circumvent this is to go back to the original model, and install one filter upstream from all telephone jacks in the building, except for the jack to which the DSL modem will be connected. Since this requires wiring changes by the customer and may not work on some household telephone wiring, it is rarely done. It is usually much easier to install filters at each telephone jack that is in use.

Wednesday, 29 October 2008

Broadband

By Irwin S
The Data Over Cable Service Interface Specification (DOCSIS) Radio Frequency Interface Specification includes a variety of assumed RF performance characteristics for downstream and upstream data channels, cable modem input and output, cable modem termination system (CMTS) output, and a number of other parameters. Collectively these define a DOCSIS-compliant cable network.
Check out the table on page 16, the "Electrical Input to CM" table from the Radio Frequency Interface Specification.
Two of the table's parameters are of particular importance for ensuring reliable cable modem operation: level range (one channel) and total input power.
The first of these says the DOCSIS digitally modulated signal is supposed to be in the -15 to +15 dBmV range. Signal levels outside of the stated range might cause modem operational problems.
In theory, a DOCSIS-compliant modem should work fine if the digitally modulated signal's input is kept in the -15 to +15 dBmV range. Some cable operators with whom I've spoken like to keep modem input in a "sweet spot" between -5 and +5 dBmV.
Making sure the downstream digitally modulated signal is indeed in the desired input range is important. It should be verified at the time the modem is installed at the customer premises and during any follow-up modem-related service calls. But how does one accurately measure the downstream signal? My recommendation is to use test equipment that includes a digital channel power function. Most quadrature amplitude modulation (QAM) analyzers, newer signal level meters (SLMs) and some spectrum analyzers support this feature. Push a button or select an appropriate menu function, and the instrument measures the digitally modulated signal's average power automatically. No bandwidth, detector or other corrections are needed.
The other important cable modem input parameter is total power—that is, the total power of all downstream signals combined. The spec is less than +30 dBmV. Exceed this value, and the odds are pretty good that the modem input will be overloaded.
How to figure input power
So how does one go about figuring out what the total downstream power is at a cable modem's input?
One way is to use Agilent’s 8591C spectrum analyzer. (Note: Other spectrum analyzers may have this capability, too. Check with the respective manufacturer of the analyzer you’re using.) There are two methods to measure total RF input power using the 8591C.
Make sure the instrument is turned off. Connect the RF source being measured to the spectrum analyzer’s RF input connector. Turn on the analyzer by pressing [LINE]. Let the analyzer go through its power-up procedure. When it’s finished, the screen will display the total RF power present at the RF input connector. Alternatively, while the 8591C is operating, press the green [PRESET] button and let the analyzer perform its reboot routine. When it’s finished, the total RF power present at the RF input connector will be displayed.
A more convenient method uses the 8591C’s menu functions. While the analyzer is operating, press the following keys in the order shown: [MODE], [CABLE TV ANALYZER], [Setup], [Analyzer Input], [TOTL PWR @ INPUT]. The spectrum analyzer will measure and display the total RF power present at the input connector. When the measurement is finished, press [Prev Menu] twice followed by [CHANNEL MEAS].
Another way
All right, I can hear you saying something like "That's nice, but there aren't too many installers or technicians with spectrum analyzers in their company vehicles. Isn't there another way to figure out total power?"
I'm glad you asked. There is indeed another way.
This method requires nothing more than a conventional SLM and a calculator or even the back of a napkin and will yield a number that's within a couple dB or so of the actual total power.
Using the SLM, measure three or four channels across the downstream spectrum. Next, average the readings. Let's say Ch. 2 is +2 dBmV, Ch. 52 is 0 dBmV, and Ch. 116 is -2 dBmV. If we average these three readings, the average per-channel level is 0 dBmV.
Assume that you have only one channel on your system, and its level is 0 dBmV. What's the total power? The answer is 0 dBmV (I'm excluding the power from the aural carrier as well as system noise—these will have a minor impact on the actual value). If you had two channels on your system, each at 0 dBmV, the total power would be about +3 dBmV. Four channels, each at 0 dBmV, would be about +6 dBmV. Eight channels, each at 0 dBmV, would be +9 dBmV. Sixteen channels, each at 0 dBmV, would be +12 dBmV, and so on. Every time the number of channels is doubled—and assuming all channels have the same per-channel signal level—the total power goes up by 3 dB. So, even with 128 channels, each at 0 dBmV, the total power would be in the vicinity of +21 dBmV.
This says a couple things. First, it's easy to estimate the approximate total power, and second, given typical drop levels, there shouldn't be a problem meeting the DOCSIS cable modem total power input parameter. Where things can get iffy is in hot drop situations often found at duplexes and other multiple dwelling units (MDUs). If the per-channel signal level is, say, +20 dBmV, the total power with 128 channels will be in the vicinity of +41 dBmV. It's very likely that the modem input will be overloaded, and the modem probably won't work reliably—if at all. Pad the input down to get the total power down a bunch, while keeping the digitally modulated signal in the -15 to +15 dBmV range.
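The back-of-the-napkin doubling rule is just total power = average per-channel level + 10·log10(channel count); a minimal sketch:

import math

def total_power_dbmv(per_channel_dbmv: float, channels: int) -> float:
    """Estimate total power assuming all channels sit at the same level."""
    return per_channel_dbmv + 10 * math.log10(channels)

print(round(total_power_dbmv(0, 128), 1))    # ~21.1 dBmV, the example above
print(round(total_power_dbmv(20, 128), 1))   # ~41.1 dBmV, the hot-drop case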
Broadband: Modulation Error Ratio
By Irwin. S
A while back I went target shooting with a friend. While at the range, it occurred to me that what is also known as plinking is a little like modulation error ratio (MER), used to characterize, say, the 64- and 256-QAM (quadrature amplitude modulation) digitally modulated signals we transmit to our customers. OK, before you start to wonder whether I’ve had too much coffee today, bear with me as I discuss this somewhat off-the-wall analogy.
Similarities
A typical target used at the range comprises a set of concentric circles printed on a piece of paper. The center of the target is called the bull’s-eye, which carries the highest point value. The further away from the bull’s-eye, the lower the assigned points. Ideally, one would always hit the bull’s-eye and get the maximum possible score. In the real world, this seldom happens. Instead, one or two shots might hit at or near the bull’s-eye, and most of the rest hit somewhere in the circles surrounding the center of the target. For a person who is a decent shot, plinking usually results in a fairly uniform “fuzzy cloud” of holes in and around the bull’s-eye. The smaller the diameter of this cloud and the closer it is to the bull’s-eye, the higher the score.
Factors affecting how close to the bull’s-eye the shots land include the quality and accuracy of the firearm, type of ammunition used, weather conditions if outdoors, ambient lighting, and the distance to the target. But the biggest factor by far is the person doing the shooting: amount of plinking experience, squeezing vs. jerking the trigger, steadiness of aim, breathing control and so on. My targets’ fuzzy clouds are definitely related to the person pulling the trigger. Those targets don’t magically jump out of the way when I shoot, although I’d swear that’s what happens sometimes. But I digress…
Now visualize the constellation display on a QAM analyzer. Each symbol landing on the constellation can be thought of as a target of sorts. For instance, a 64-QAM constellation has 64 targets arranged in an eight-by-eight square-shaped grid. Ideally, when the 64 symbols are transmitted, they should land exactly on their respective targets’ “bull’s-eyes.” In reality, the symbols form a fuzzy cloud at and around the constellation’s target centers. When we measure MER, we are in effect measuring the fuzziness of those clouds. The smaller the fuzzy clouds, the higher the MER. Like a high score in target shooting, the higher the MER, the better.
What MER is ... and isn’t
All right, high MER is good, and low MER is not. Just what the heck is MER, anyway?
Modulation error ratio is the ratio, in decibels, of average symbol power to average error power: MER(dB) = 10log(average symbol power/average error power). From this, you can see that the fuzzier the symbol cloud—that is, the greater the average error power—the lower the MER. Mathematically, a more precise definition of MER is:

MER(dB) = 10log[ Σ(I² + Q²) / Σ(δI² + δQ²) ]

where I and Q are the real (in-phase) and imaginary (quadrature) parts of each sampled ideal target symbol vector, and δI and δQ are the real (in-phase) and imaginary (quadrature) parts of each modulation error vector. This definition assumes that a long enough sample is taken so that all the constellation symbols are equally likely to occur.
MER is affected by pretty much everything in a digitally modulated signal’s transmission path: transmitted phase noise; carrier-to-noise ratio (CNR); nonlinear distortions (composite triple beat, CTB; composite second order, CSO; cross modulation, X-mod; common path distortion, CPD); linear distortions (micro-reflections, amplitude tilt/ripple, group delay); in-channel ingress; laser clipping; data collisions; and even suboptimal modulation profiles. Some of these can be controlled fairly well, but no matter what we do, a digitally modulated signal is going to be impaired as it makes its way through a cable network. The worse these impairments, the fuzzier the constellation landings. The fuzzier the constellation landings, the lower the MER.
As such, the constellation’s symbol landings will never be perfectly small points. They will always be spread out at least a little, the extent of which is described by MER. By itself, the measured MER value doesn’t tell us what caused it to be low in the first place, only that it is low. Crummy CNR? Beats? Group delay? Hard to say, until you do some additional diagnostics with your trusty QAM analyzer. For more on this, see “Digital Troubleshooting, Part 1” and “Troubleshooting Digitally Modulated Signals, Part 2” in the June and July 2006 issues of Communications Technology.
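As a numerical sketch of that definition (the constellation slice and noise level below are made up; a real measurement averages over all symbols of the full constellation):

import numpy as np

rng = np.random.default_rng(7)

# Ideal target symbols: a made-up 4-point constellation slice, repeated.
ideal = np.tile(np.array([1+1j, 1-1j, -1+1j, -1-1j]), 250)
# Received symbols: ideal points plus complex noise (the "fuzzy cloud").
rx = ideal + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

error = rx - ideal
mer_db = 10 * np.log10(np.mean(np.abs(ideal)**2) / np.mean(np.abs(error)**2))
print(round(mer_db, 1))   # ~26 dB for this noise level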
Confusion
I’ve written on a number of occasions about the confusion that exists regarding MER and CNR. They are not the same thing. Adding to the confusion is the fact that MER is often called signal-to-noise ratio, or SNR. A good example is a cable modem termination system’s (CMTS’s) reported upstream SNR. That parameter is MER, not CNR. Likewise, most set-tops and cable modems can report an SNR value, but here, too, it’s MER—downstream MER, that is.
Not confused enough yet? MER can be an equalized value or an unequalized value. Both are legitimate parameters, but they are different. Equalized MER is the value after the QAM receiver’s adaptive equalizer compensates for some or most of the in-channel complex frequency response impairments. Unequalized MER is the value before the QAM receiver’s adaptive equalizer does its magic. This means that for the same signal under identical conditions, unequalized MER will always be at least a few decibels less than an equalized value. So if you replace a CMTS (or line card) that reports equalized upstream MER with one that reports unequalized MER, you’ll find that your upstream “SNR” (MER) is likely a few decibels less than before. This is normal. And no, you can’t simply add a correction factor to the unequalized MER number to get an equivalent equalized MER. It doesn’t work that way.
Most QAM analyzers report equalized MER, as do set-tops and cable modems. Some CMTSs report equalized upstream MER; some report unequalized upstream MER. Some test equipment supports measurement of both equalized and unequalized MER—downstream and upstream. My personal preference is unequalized MER, since a low value may indicate the presence of linear distortions if the CNR checks out OK.
More
If you’re interested in a deep dive into the subjects of CNR, SNR and MER, I suggest you take a look at the white paper that Broadcom’s Bruce Currivan and I recently co-authored. It’s 41 pages long, includes some gnarly math, and treats the subject matter in-depth—you might want to get a strong cup of coffee when you read it. You’ll find “Digital Transmission: Carrier-to-Noise Ratio, Signal-to-Noise Ratio, and Modulation Error Ratio” online at the following URLs:
http://www.cisco.com/en/US/products/hw/cable/ps2209/products_white_paper0900aecd805738f5.shtml
www.cisco.com/application/pdf/en/us/guest/products/ps2209/c1244/cdccont_0900aecd805738f5.pdf
VoIP Testing
By Irwin.S
It looks like cable operators have struck oil with telephony service. Several MSOs are experiencing penetration rates of 20 percent and above. One independent operator has even noted that "all you need to do is crack open your window, announce you are offering telephony, and you have two weeks' backlog of work." The oil analogy has more dimensions than revenue, however. Like crude oil, the information that defines a telephone call needs to be processed and transported to realize value. To keep the revenue flowing, cable operators need to test at three levels to ensure quality transport and a product that exceeds a century of consumer expectations.
Figure 1 correlates three levels of testing and measurement to locations in a cable system. At the lowest level, signal carrier quality is observed via common analog measurements such as carrier-to-noise ratio (CNR), composite second order (CSO), composite third order—more commonly called composite triple beat (CTB)—and channel power. At the next level, constellations and quantitative measurements such as modulation error ratio (MER) and bit error rate (BER) provide an indication of the health of the signal. Finally, the quality of the call content itself—the voice conversation—is measured by its mean opinion score (MOS) or closely related derived parameters. Where testing starts depends upon the situation. For new service introduction, it makes sense to begin with basic carrier quality to build a firm foundation for the offering. On the other hand, the likely path when solving a customer's complaint about voice quality for an offering that's been marketed for some time would be to start with MOS measurements.

The pipe is physical
Telephony quality begins with physical media. For cable systems, the medium is HFC plant that carries analog and digital information to network interfaces at hubs or headends. Although our telephony service is digital information, our transport mechanism is still analog. Impairments that adversely affect CNR and the presence of intermodulation distortion (CSO and CTB) have the same effect on digital information as a clogged pipeline does to petroleum transport. The damage done by changing data bit representations on an analog carrier is cumulative. Exceeding a packet loss threshold first garbles and then completely interrupts a voice conversation. While a maximum 3 percent packet loss was suggested as a guideline during early VoIP implementations, experience shows that it is best to shoot for between 0.1 and 0.5 percent for voice, and no more than 1 percent for high speed data.
DOCSIS sets the guidelines in Table 1 for limits on physical impairments.

Channel power has a different effect. Because digital modulation spreads information across a frequency spectrum, for normal operating conditions, the sum of average channel power for all digital services plus analog channel peak power must not exceed laser power specifications. Note that ingress noise will add to the total.
Traditionally, a spectrum analyzer is used to measure these parameters. However, because plant quality is so critical to digital signal transmission, most test instrument vendors have incorporated numerical readouts of these indicators into multifunctional hand-held devices easily deployed for field tests at all three levels. Examples include the JDSU Digital Service Activation Meter, Trilithic 860 DSPi, and Sunrise CM1000.
Navigating by the stars
Quadrature amplitude modulation (QAM) creates a constellation diagram. "Stars" in the diagram represent combinations of ones and zeros that are the coded equivalent of a snapshot sample of a voice conversation. Mathematicians have proven that perfect reproduction of a telephone conversation requires 8,000 unimpaired samples per second, so any faults that cause these stars to move or blur progressively degrade the quality of a call. The analog parameters discussed earlier can be one cause of such impairments. The codecs used to transform analog voice into the digital equivalent can be another.

Because the precise amount of degradation is difficult to determine using pure visual analysis, mathematical sampling theory is applied to quantify the resulting errors. The two most widely used parameters for measuring modulation error are MER and BER. The two are mathematically related, allowing both to be available as readouts on test equipment.
For digital signals, MER has been developed as a single indicator of system physical health, similar to baseband signal-to-noise ratio (SNR) in analog systems. MER can be defined in terms of two vectors, one pointing to an ideal constellation point, and the other pointing from an actual measured point to an ideal point, as shown in Figure 3. Statistical sampling and mapping constellations to mathematical coordinates create a numerical history of deviations from ideal star locations, allowing the derivation of a single number for MER that represents performance over time.
This numerical figure for MER is mathematically expressed as:
MER (dB) = 20 log (average signal magnitude/average error magnitude)
As a benchmark, downstream MER should be 27 dB or better for 64-QAM and 31 dB or better for 256-QAM. Upstream MER should be 12 dB or better for quadrature phase shift keying (QPSK) and 18 dB or better for 16-QAM.
The ear is the boss
Though carrier and bit integrity underlie voice quality, in the end, subscriber perception is the determining factor.
MOS is both the gold standard and grandfather of voice quality testing. Its methodology, which goes back to Bell Laboratories testing of network equipment, consists of assembling a panel of human listeners who rate the quality of several hundred speech samples from 1 to 5, with 5 indicating best quality.
Perceptual speech quality measure (PSQM), perceptual analysis measurement system (PAMS), perceptual evaluation of voice quality (PESQ), and PESQ-LQ are variations of models that predict MOS scores based upon comparison of a voice file that has been processed by the network under test against a clean reference file. Tests using these models are called intrusive because they require a dedicated test call, rather than the use of actual conversations. The PESQ model provides scores from -0.5 to 4.5, with 4.5 indicating best quality, while PESQ-LQ scores range from 1 to 5, the same as MOS. PsyTechnics, one of the developers of PESQ, claims a correlation of better than 90 percent between PESQ and MOS.
The ITU E Model is a design tool that predicts the average voice quality of calls processed by a network, based upon mathematical estimates of the effects of delay, jitter, packet loss and codec performance. It generates an R factor that rates a network from 0 to 100, with 100 indicating best quality. Because E model scores are based upon parameters that can be measured by test equipment, several vendors have correlated them to MOS scores to create MOS readouts. Similarly, ITU-T P.563 and PsyTechnics PsyVoIP are nonintrusive models that predict an MOS score based upon live traffic. These models analyze real-time protocol (RTP) streams for source and destination addresses, sequence number, and jitter profile and predict the impact of the Internet protocol (IP) bearer on the MOS value with an 80 to 90 percent correlation.
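For reference, the conversion vendors use when deriving MOS readouts from E model scores is the R-factor-to-MOS mapping in ITU-T G.107; a minimal sketch:

def r_to_mos(r: float) -> float:
    """Convert an E model R factor to an estimated MOS (ITU-T G.107)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

for r in (50, 70, 80, 90):
    print(r, round(r_to_mos(r), 2))   # e.g. R=70 -> MOS about 3.6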
Automated voice quality testing measures voice quality by averaging several call samples over time. The most valid application is between network aggregation points, such as gateway to gateway, with a large number of test calls to simulate behavior under actual network traffic volumes. Applied this way, a network quality number can be established as a metric for other tests. When individual test scores for network endpoints are observed, they can be compared against network averages to determine possible faults such as a malfunctioning codec in a multimedia terminal adapter (MTA), or trends, such as poor routes.
Measuring voice quality
The two common methods to obtain derived MOS scores in a cable network are to use information from either actual calls or test voice files generated by a server located in the network. As shown in Figure 4, both methods use software probes located at strategic measurement points in the network to collect statistics on packet loss and delay and make comparisons. Measurement devices can be either rack-mounted or hand-held units.

Prior to PacketCable 1.5, embedded MTAs (EMTAs) were not required to forward data to generate voice quality scores. Most live call data analysis occurs at network aggregation points via rack-mounted units. Poor MOS scores at aggregation points indicate a problem in the network, but further testing is required to narrow the problem to RF plant conditions, traffic blockage or packet degradation at the EMTA.
In addition to analog testing and constellation analysis, intrusive troubleshooting can be done by comparing network test MOS scores to EMTA endpoint test scores observed at portable test equipment. In this case, a voice file server in the network under test generates test call files that travel as packets to measurement devices both in the network and at the EMTA. With current handheld test equipment, this type of MOS test can be done in both the upstream and downstream. After a downstream MOS is generated from a network-based voice file, test equipment at the customer premises sends its own test file, which generates the upstream counterpart. Both scores are provided as readouts at the field test gear.
PacketCable 1.5 simplifies the troubleshooting process and allows it to be done without service interruption by specifying that endpoints such as EMTAs must exchange real-time transport protocol (RTP) control protocol extended reports (RTCP XR) voice over IP (VoIP) metric information for live calls. This requirement mandates that the EMTA transmit E model R factors and derived MOS scores, as well as underlying packet loss, packet discards related to delay, signal and noise levels, and residual echo loss. PacketCable 1.5 compliant EMTAs are expected to become more prevalent in the latter part of 2006.
The bottom line
When and where to test are determined by a combination of quality and economics. Extensive plant prequalification testing makes sense prior to initial telephony offerings and when a system is reconfigured. Without the automatic availability of the end point information specified by PacketCable 1.5, potential quality degradation that might result in lost customers must be balanced against the cost of dispatching a field technician to take quality readings. The availability of RTCP XR data and automated data collection at centralized network management locations makes continuous monitoring viable and should result in data that makes optimum voice quality possible.
Justin J. Junkus is president of KnowledgeLink and telephony editor for Communications Technology. Reach me at Irwin@ssk.co.id
Sidebar
RTCP XR Explained
Real-time transport protocol (RTP) control protocol extended reports (RTCP XR) is an Internet Engineering Task Force protocol defined in RFC 3611 that adds information to the RTCP packet streams being transmitted by PacketCable-compliant embedded multimedia terminal adapters (EMTAs). The extended report blocks contained in an XR packet provide a way to send information from Internet protocol (IP) endpoints that is useful for assessing voice quality and reasons for degradation. The seven block types defined by the specification contain information on received packet losses and duplicates, packet reception times, receiver reference time information, receiver inter-report delays, detailed reception statistics, and voice quality scores. Although the information is intended for exchanges between IP endpoints such as EMTAs and gateways, it may be accessed by software probes at intermediate points in a network, particularly at points where test equipment is assessing network health.