Modem

A modem (modulator-demodulator) is a device that modulates an analog carrier signal to encode digital information, and also demodulates such a carrier signal to decode the transmitted information. The goal is to produce a signal that can be transmitted easily and decoded to reproduce the original digital data. Modems can be used over any means of transmitting analog signals, from light emitting diodes to radio. The most familiar example is a voice band modem that turns the digital data of a personal computer into modulated electrical signals in the voice frequency range of a telephone channel. These signals can be transmitted over telephone lines and demodulated by another modem at the receiver side to recover the digital data.

Modems are generally classified by the amount of data they can send in a given unit of time, usually expressed in bits per second (bit/s, or bps). Modems can alternatively be classified by their symbol rate, measured in baud. The baud unit denotes symbols per second, or the number of times per second the modem sends a new signal. For example, the ITU V.21 standard used audio frequency-shift keying, that is to say, tones of different frequencies, with two possible frequencies corresponding to two distinct symbols (or one bit per symbol), to carry 300 bits per second using 300 baud. By contrast, the original ITU V.22 standard, which was able to transmit and receive four distinct symbols (two bits per symbol), handled 1,200 bit/s by sending 600 symbols per second (600 baud) using phase shift keying.
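
The relationship is simply bit rate = symbol rate × bits per symbol. A minimal sketch in Python, using the V.21 and V.22 figures quoted above:

```python
# Bit rate follows from the symbol rate (baud) and the bits carried per symbol.
def bit_rate(baud, bits_per_symbol):
    return baud * bits_per_symbol

print(bit_rate(300, 1))  # V.21: 300 baud x 1 bit/symbol  = 300 bit/s
print(bit_rate(600, 2))  # V.22: 600 baud x 2 bits/symbol = 1,200 bit/s
```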

History

News wire services in the 1920s used multiplex equipment that met the definition of a modem, but the modem function was incidental to the multiplexing function, so they are not commonly included in the history of modems.

TeleGuide terminal

Modems grew out of the need to connect teletype machines over ordinary phone lines instead of the more expensive leased lines which had previously been used for current-loop-based teleprinters and automated telegraphs. In 1943, IBM adapted this technology to its unit record equipment and was able to transmit punched cards at 25 bits per second.[citation needed]

Mass-produced modems in the United States began as part of the SAGE air-defense system in 1958, connecting terminals at various airbases, radar sites, and command-and-control centers to the SAGE director centers scattered around the U.S. and Canada. SAGE modems were described by AT&T’s Bell Labs as conforming to their newly published Bell 101 dataset standard. While they ran on dedicated telephone lines, the devices at each end were no different from commercial acoustically coupled Bell 101, 110 baud modems.

In the summer of 1960, the name Data-Phone was introduced to replace the earlier term digital subset. The 202 Data-Phone was a half-duplex asynchronous service that was marketed extensively in late 1960. In 1962, the 201A and 201B Data-Phones were introduced. They were synchronous modems using two-bit-per-baud phase-shift keying (PSK). The 201A operated half-duplex at 2,000 bit/s over normal phone lines, while the 201B provided full duplex 2,400 bit/s service on four-wire leased lines, the send and receive channels running on their own set of two wires each.

The famous Bell 103A dataset standard was also introduced by AT&T in 1962. It provided full-duplex service at 300 bps over normal phone lines. Frequency-shift keying was used with the call originator transmitting at 1,070 or 1,270 Hz and the answering modem transmitting at 2,025 or 2,225 Hz. The readily available 103A2 gave an important boost to the use of remote low-speed terminals such as the KSR33, the ASR33, and the IBM 2741. AT&T reduced modem costs by introducing the originate-only 113D and the answer-only 113B/C modems.

The Carterfone decision

The Novation CAT acoustically coupled modem

For many years, the Bell System (AT&T) maintained a monopoly on the use of its phone lines, allowing only Bell-supplied devices to be attached to its network. Before 1968, AT&T maintained a monopoly on what devices could be electrically connected to its phone lines. This led to a market for 103A-compatible modems that were mechanically connected to the phone through the handset, known as acoustically coupled modems. Particularly common models from the 1970s were the Novation CAT and the Anderson-Jacobson, the latter spun off from an in-house project at Stanford Research Institute (now SRI International). Hush-a-Phone v. FCC was a seminal ruling in United States telecommunications law decided by the DC Circuit Court of Appeals on November 8, 1956. The court found that it was within the FCC’s authority to regulate the terms of use of AT&T’s equipment. Subsequently, the FCC examiner found that as long as the device was not physically attached it would not threaten to degrade the system. Later, in the Carterfone decision of 1968, the FCC passed a rule setting stringent AT&T-designed tests for electronically coupling a device to the phone lines. AT&T’s tests were complex, making electronically coupled modems expensive,[citation needed] so acoustically coupled modems remained common into the early 1980s.

In December 1972, Vadic introduced the VA3400. This device was remarkable because it provided full duplex operation at 1,200 bit/s over the dial network, using methods similar to those of the 103A in that it used different frequency bands for transmit and receive. In November 1976, AT&T introduced the 212A modem to compete with Vadic. It was similar in design to Vadic’s model, but used the lower frequency set for transmission. It was also possible to use the 212A with a 103A modem at 300 bit/s. According to Vadic, the change in frequency assignments made the 212 intentionally incompatible with acoustic coupling, thereby locking out many potential modem manufacturers. In 1977, Vadic responded with the VA3467 triple modem, an answer-only modem sold to computer center operators that supported Vadic’s 1,200-bit/s mode, AT&T’s 212A mode, and 103A operation.

The Smartmodem and the rise of BBSes

US Robotics Sportster 14,400 Fax modem (1994)

The next major advance in modems was the Smartmodem, introduced in 1981 by Hayes Communications. The Smartmodem was an otherwise standard 103A 300-bit/s modem, but it was attached to a small controller that let the computer send commands to it, enabling it to operate the phone line. The command set included instructions for picking up and hanging up the phone, dialing numbers, and answering calls. The basic Hayes command set remains the basis for computer control of most modern modems.

Prior to the Hayes Smartmodem, dial-up modems almost universally required a two-step process to activate a connection: first, the user had to manually dial the remote number on a standard phone handset, and then plug the handset into an acoustic coupler. Hardware add-ons, known simply as dialers, were used in special circumstances, and generally operated by emulating someone dialing a handset.

With the Smartmodem, the computer could dial the phone directly by sending the modem a command, thus eliminating the need for an associated phone instrument for dialing and the need for an acoustic coupler. The Smartmodem instead plugged directly into the phone line. This greatly simplified setup and operation. Terminal programs that maintained lists of phone numbers and sent the dialing commands became common.

The Smartmodem and its clones also aided the spread of bulletin board systems (BBSs). Previously, modems had typically been either call-only, acoustically coupled models used on the client side, or much more expensive answer-only models used on the server side. The Smartmodem could operate in either mode depending on the commands sent from the computer. There was now a low-cost server-side modem on the market, and BBSs flourished.

Almost all modern modems can interoperate with fax machines. Digital faxes, introduced in the 1980s, are simply a particular image format sent over a high-speed (commonly 14.4 kbit/s) modem. Software running on the host computer can convert any image into fax format, which can then be sent using the modem. Such software was at one time an add-on, but has since become largely universal.

Softmodem

A PCI Winmodem/softmodem (on the left) next to a traditional ISA modem (on the right).

Main article: Softmodem

A Winmodem or softmodem is a stripped-down modem that replaces tasks traditionally handled in hardware with software. In this case the modem is a simple interface designed to create voltage variations on the telephone line and to sample the line voltage levels (digital-to-analog and analog-to-digital converters). Softmodems are cheaper than traditional modems, since they have fewer hardware components. One downside is that the software generating and interpreting the modem tones is not simple (most of the protocols are complex), and the performance of the computer as a whole often suffers while it is being used. For online gaming this can be a real concern. Another problem is lack of portability: non-Windows operating systems (such as Linux) often do not have an equivalent driver to operate the modem.

Narrow-band/phone-line dialup modems

A standard modem of today contains two functional parts: an analog section for generating the signals and operating the phone, and a digital section for setup and control. This functionality is often incorporated into a single chip nowadays, but the division remains in theory. In operation the modem can be in one of two modes, data mode in which data is sent to and from the computer over the phone lines, and command mode in which the modem listens to the data from the computer for commands, and carries them out. A typical session consists of powering up the modem (often inside the computer itself) which automatically assumes command mode, then sending it the command for dialing a number. After the connection is established to the remote modem, the modem automatically goes into data mode, and the user can send and receive data. When the user is finished, the escape sequence, “+++” followed by a pause of about a second, may be sent to the modem to return it to command mode, then a command (e.g. “ATH”) to hang up the phone is sent. Note that on many modem controllers it is possible to issue commands to disable the escape sequence so that it is not possible for data being exchanged to trigger the mode change inadvertently.
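
As a hedged illustration of this sequence (not taken from any particular modem's documentation), the sketch below drives a modem through a serial port with the pyserial library; the device path and phone number are placeholders, and real modems differ in their responses and timing.

```python
import time
import serial  # pyserial

port = serial.Serial("/dev/ttyS0", 115200, timeout=2)  # placeholder device path

def command(cmd):
    """Send a Hayes-style command while the modem is in command mode."""
    port.write((cmd + "\r").encode("ascii"))
    return port.read(64).decode("ascii", errors="replace")

print(command("ATZ"))          # reset
print(command("ATDT5551234"))  # dial a placeholder number; modem enters data mode on connect

# ... exchange data in data mode ...

time.sleep(1.2)                # guard time before the escape sequence
port.write(b"+++")             # escape back to command mode
time.sleep(1.2)                # guard time after
print(command("ATH"))          # hang up
```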

The commands themselves are typically from the Hayes command set, although that term is somewhat misleading. The original Hayes commands were useful only for 300 bit/s operation, and were then extended for Hayes's 1,200 bit/s modems. Faster speeds required new commands, leading to a proliferation of command sets in the early 1990s. Things became considerably more standardized in the second half of the 1990s, when most modems were built from one of a very small number of chipsets. The result is still called the Hayes command set today, although it has three or four times the number of commands of the actual standard.

Increasing speeds (V.21, V.22, V.22bis)

The 300 bit/s modems used audio frequency-shift keying to send data. In this system the stream of 1s and 0s in computer data is translated into sounds which can be easily sent on the phone lines. In the Bell 103 system the originating modem sends 0s by playing a 1,070 Hz tone and 1s at 1,270 Hz, with the answering modem putting its 0s on 2,025 Hz and 1s on 2,225 Hz. These frequencies were chosen carefully: they are in the range that suffers minimum distortion on the phone system, and they are not harmonics of each other.
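
A minimal sketch of audio FSK in Python with NumPy, using the Bell 103 originate-side frequencies quoted above. The sample rate is an assumption made for this sketch, and real modems keep the phase continuous across bit boundaries, which this naive version does not.

```python
import numpy as np

FS = 8000                  # sample rate in Hz (assumption for this sketch)
BAUD = 300                 # 300 symbols per second, one bit per symbol
SPACE, MARK = 1070, 1270   # Bell 103 originate side: 0 = 1,070 Hz, 1 = 1,270 Hz

def fsk_modulate(bits):
    """Return an audio waveform encoding the bits with audio frequency-shift keying."""
    samples_per_bit = FS // BAUD
    t = np.arange(samples_per_bit) / FS
    tones = {0: np.sin(2 * np.pi * SPACE * t), 1: np.sin(2 * np.pi * MARK * t)}
    return np.concatenate([tones[b] for b in bits])

waveform = fsk_modulate([1, 0, 1, 1, 0, 0, 1, 0])
print(len(waveform))       # 8 bits x 26 samples per bit = 208 samples
```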

In the 1,200 bit/s and faster systems, phase-shift keying was used. In this system the two tones for any one side of the connection are sent at similar frequencies to those in the 300 bit/s systems, but slightly out of phase. By comparing the phase of the two signals, the 1s and 0s could be recovered. Voiceband modems generally remained at 300 and 1,200 bit/s (V.21 and V.22) into the mid-1980s. A V.22bis 2,400-bit/s system similar in concept to the 1,200-bit/s Bell 212 signalling was introduced in the U.S., and a slightly different one in Europe. By the late 1980s, most modems could support all of these standards, and 2,400-bit/s operation was becoming common.

For more information on baud rates versus bit rates, see the companion article list of device bandwidths.

Increasing speeds (one-way proprietary standards)

Many other standards were also introduced for special purposes, commonly using a high-speed channel for receiving, and a lower-speed channel for sending. One typical example was used in the French Minitel system, in which the user’s terminals spent the majority of their time receiving information. The modem in the Minitel terminal thus operated at 1,200 bit/s for reception, and 75 bit/s for sending commands back to the servers.

Three U.S. companies became famous for high-speed versions of the same concept. Telebit introduced its Trailblazer modem in 1984, which used a large number of 36 bit/s channels to send data one-way at rates up to 18,432 bit/s. A single additional channel in the reverse direction allowed the two modems to communicate how much data was waiting at either end of the link, and the modems could change direction on the fly. The Trailblazer modems also supported a feature that allowed them to spoof the UUCP g protocol, commonly used on Unix systems to send e-mail, and thereby speed UUCP up by a tremendous amount. Trailblazers thus became extremely common on Unix systems, and maintained their dominance in this market well into the 1990s.

U.S. Robotics (USR) introduced a similar system, known as HST, although this supplied only 9,600 bit/s (in early versions at least) and provided for a larger backchannel. Rather than offer spoofing, USR instead created a large market among Fidonet users by offering its modems to BBS sysops at a much lower price, resulting in sales to end users who wanted faster file transfers. Hayes was forced to compete, and introduced its own 9,600-bit/s standard, Express 96 (also known as Ping-Pong), which was generally similar to Telebit’s PEP. Hayes, however, offered neither protocol spoofing nor sysop discounts, and its high-speed modems remained rare.

4,800 and 9,600 bit/s (V.27ter, V.32)

Echo cancellation was the next major advance in modem design. Local telephone lines use the same wires to send and receive, which results in a small amount of the outgoing signal bouncing back. This signal could confuse the modem, which was unable to distinguish between the echo and the signal from the remote modem. This was why earlier modems split the signal frequencies into ‘answer’ and ‘originate’; each modem could then ignore its own transmitting frequencies. Even with improvements to the phone system allowing higher speeds, this splitting of the available phone signal bandwidth still imposed a half-speed limit on modems.

Echo cancellation got around this problem. Measuring the echo delays and magnitudes allowed the modem to tell if the received signal was from itself or the remote modem, and create an equal and opposite signal to cancel its own. Modems were then able to send over the whole frequency spectrum in both directions at the same time, leading to the development of 4,800 and 9,600 bit/s modems.
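
A hedged toy model of the idea: if the modem knows (or has measured) the delay and attenuation of its own echo, it can synthesize that echo and subtract it from what it receives, leaving only the far-end signal. Real echo cancellers estimate these parameters adaptively; the delay and attenuation below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
local_tx = rng.standard_normal(1000)        # what this modem transmits
far_signal = rng.standard_normal(1000)      # what the remote modem transmits

# The line returns the far-end signal plus an echo of our own transmission,
# modelled here as a 5-sample delay attenuated to 30% (invented parameters).
echo = 0.3 * np.concatenate([np.zeros(5), local_tx[:-5]])
received = far_signal + echo

# Knowing the delay and magnitude, synthesize the same echo and subtract it.
estimated_echo = 0.3 * np.concatenate([np.zeros(5), local_tx[:-5]])
cleaned = received - estimated_echo

print(np.allclose(cleaned, far_signal))     # True: only the far-end signal remains
```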

Increases in speed have used increasingly complicated communications theory. The 1,200 and 2,400 bit/s modems used the phase-shift keying (PSK) concept, which could transmit two or three bits per symbol. The next major advance encoded four bits into a combination of amplitude and phase, known as Quadrature Amplitude Modulation (QAM).

The new V.27ter and V.32 standards raised both the symbol rate and the number of bits per symbol: V.27ter sent 3 bits per symbol at 1,600 baud for 4,800 bit/s, while V.32 sent 4 bits per symbol at 2,400 baud for an effective bit rate of 9,600 bit/s. The carrier frequency was 1,650 Hz. For many years, most engineers considered this rate to be the limit of data communications over telephone networks.
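
A sketch of the idea behind QAM, using a generic 16-point constellation rather than the exact V.32 signal set: each group of four bits selects one of sixteen combinations of amplitude and phase, expressed here as in-phase (I) and quadrature (Q) levels. At 2,400 baud this carries 4 × 2,400 = 9,600 bit/s.

```python
# Generic 16-QAM mapper (illustrative; not the actual V.32 constellation).
LEVELS = [-3, -1, 1, 3]

def qam16_map(bits):
    """Map a bit list (length divisible by 4) to complex constellation points."""
    symbols = []
    for i in range(0, len(bits), 4):
        b = bits[i:i + 4]
        i_level = LEVELS[2 * b[0] + b[1]]   # first two bits choose the I amplitude
        q_level = LEVELS[2 * b[2] + b[3]]   # last two bits choose the Q amplitude
        symbols.append(complex(i_level, q_level))
    return symbols

print(qam16_map([1, 0, 1, 1, 0, 0, 0, 1]))   # 8 bits -> 2 symbols: [(1+3j), (-3-1j)]
```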

Error correction and compression

Operation at these speeds pushed the limits of the phone lines, resulting in high error rates. This led to the introduction of error-correction systems built into the modems, made most famous with Microcom's MNP systems. A string of MNP standards came out in the 1980s, each increasing the effective data rate by minimizing overhead, from about 75% of the theoretical maximum in MNP 1 to 95% in MNP 4. MNP 5 took this a step further, adding data compression to the system and thereby increasing the data rate above the modem's rating; generally the user could expect an MNP 5 modem to transfer at about 130% of the modem's normal data rate. Details of MNP were later released and it became popular on a series of 2,400-bit/s modems, ultimately leading to the development of the V.42 and V.42bis ITU standards. V.42 and V.42bis were incompatible with MNP but similar in concept: error correction and compression.

Another common feature of these high-speed modems was the concept of fallback, or speed hunting, allowing them to talk to less-capable modems. During call initiation the modem would play a series of signals into the line and wait for the remote modem to respond. They would start at high speeds and progressively slow down until an answer was heard. Thus, two USR modems would be able to connect at 9,600 bit/s, but when a user with a 2,400-bit/s modem called in, the USR would fall back to the common 2,400-bit/s speed. The same would happen if a V.32 modem and an HST modem were connected: because they used different standards at 9,600 bit/s, they would fall back to their highest commonly supported standard at 2,400 bit/s. The same applied to a V.32bis modem and a 14,400 bit/s HST modem, which could still communicate with each other, but only at 2,400 bit/s.

Breaking the 9.6k barrier

In 1980, Gottfried Ungerboeck from IBM Zurich Research Laboratory applied channel coding techniques to search for new ways to increase the speed of modems. His results were astonishing but only conveyed to a few colleagues.[1] Finally in 1982, he agreed to publish what is now a landmark paper in the theory of information coding.[citation needed] By applying parity check coding to the bits in each symbol, and mapping the encoded bits into a two-dimensional diamond pattern, Ungerboeck showed that it was possible to increase the speed by a factor of two with the same error rate. The new technique was called mapping by set partitions (now known as trellis modulation).

Error-correcting codes encode code words (sets of bits) in such a way that they are far apart from one another; a word corrupted by a small error is then still closest to the original word rather than being confused with another. This can be thought of as analogous to sphere packing, or packing pennies on a surface: the further apart two bit sequences are, the easier it is to correct minor errors.
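
A hedged sketch of the distance idea: with code words far apart (large Hamming distance), a received word with one flipped bit is still nearest to the word that was sent.

```python
def hamming_distance(a, b):
    """Count the bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

sent     = "0000000"
other    = "1111111"          # the only other code word in this toy code
received = "0010000"          # one bit flipped in transit
# The received word is distance 1 from 'sent' but distance 6 from 'other',
# so a decoder corrects it back to 'sent'.
print(hamming_distance(received, sent), hamming_distance(received, other))   # 1 6
```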

V.32bis was so successful that the older high-speed standards had little to recommend them. USR fought back with a 16,800 bit/s version of HST, while AT&T introduced a one-off 19,200 bit/s method they referred to as V.32ter (also known as V.32 terbo or tertiary), but neither non-standard modem sold well.

V.34/28.8k and 33.6k

An ISA modem manufactured to conform to the V.34 protocol.

Interest in these systems was destroyed during the lengthy introduction of the 28,800 bit/s V.34 standard. While waiting, several companies decided to release hardware early, introducing modems they referred to as V.FAST. In order to guarantee compatibility with V.34 modems once the standard was ratified (in 1994), the manufacturers were forced to use more flexible parts, generally a DSP and microcontroller, as opposed to purpose-designed ASIC modem chips.

Today, the ITU V.34 standard represents the culmination of these joint efforts. It employs the most powerful coding techniques, including channel encoding and shape encoding. From a mere 4 bits per symbol (9.6 kbit/s), the new standards used the functional equivalent of 6 to 10 bits per symbol, plus increased baud rates from 2,400 to 3,429, to create 14.4, 28.8, and 33.6 kbit/s modems. This rate is near the theoretical Shannon limit. The Shannon capacity of a channel is C = B · log2(1 + S/N), where B is the bandwidth in hertz and S/N is the (linear) signal-to-noise ratio. Narrowband phone lines have a bandwidth of roughly 300–4,000 Hz, so taking S/N = 1000 (30 dB), the capacity is approximately 35 kbit/s.
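
A quick check of that figure (a sketch; the exact result depends on the usable bandwidth assumed for the line):

```python
from math import log2

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon channel capacity C = B * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * log2(1 + snr_linear)

# With a 30 dB (1000:1) signal-to-noise ratio, the limit works out to roughly
# 31-37 kbit/s depending on the usable bandwidth assumed, i.e. about 35 kbit/s.
for bandwidth_hz in (3100, 3700):
    print(bandwidth_hz, "Hz ->", round(shannon_capacity(bandwidth_hz, 1000)), "bit/s")
```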

Without the discovery and eventual application of trellis modulation, maximum telephone rates using voice-bandwidth channels would have been limited to 3,429 baud × 4 bits/symbol, or approximately 14 kbit/s, using traditional QAM. (DSL makes use of the bandwidth of traditional copper-wire twisted pairs between subscriber and the central office, which far exceeds that of analog voice circuitry.)

V.61/V.70 Analog/Digital Simultaneous Voice and Data

The V.61 standard introduced Analog Simultaneous Voice and Data (ASVD). This technology allowed users of V.61 modems to engage in point-to-point voice conversations with each other while their respective modems communicated.

In 1995, the first DSVD (Digital Simultaneous Voice and Data) modems became available to consumers, and the standard was ratified as V.70 by the International Telecommunication Union (ITU) in 1996.

Two DSVD modems can establish a completely digital link between each other over standard phone lines. Sometimes referred to as “the poor man’s ISDN”, and employing a similar technology, V.70-compatible modems allow a maximum speed of 33.6 kbit/s between peers. By using a majority of the bandwidth for data and reserving part for voice transmission, DSVD modems allow users to pick up a telephone handset interfaced with the modem and initiate a call to the other peer.

One practical use for this technology was realized by early two-player video gamers, who could hold a voice conversation with each other while playing over the PSTN.

Advocates of DSVD envisioned whiteboard sharing and other practical applications for the standard. However, with the advent of cheaper 56 kbit/s analog modems intended for Internet connectivity, peer-to-peer data transmission over the PSTN quickly became irrelevant. The standard was also never expanded to allow for making or receiving arbitrary phone calls while the modem was in use, due to the cost of infrastructure upgrades for telephone companies and the advent of ISDN and DSL technologies, which effectively accomplished the same goal.

Today, Multi-Tech is the only known company to continue to support a V.70-compatible modem. While their device also offers V.92 at 56 kbit/s, it remains significantly more expensive than comparable modems without V.70 support.

Using digital lines and PCM (V.90/92)

Modem bank at an ISP.

In the late 1990s, Rockwell/Lucent and U.S. Robotics introduced new competing technologies based upon the digital transmission used in modern telephony networks. The standard digital transmission in modern networks is 64 kbit/s, but some networks use a part of the bandwidth for remote office signaling (e.g., to hang up the phone), limiting the effective rate to 56 kbit/s DS0. This new technology was adopted into the ITU V.90 standard and is common in modern computers. The 56 kbit/s rate is only possible from the central office to the user site (downlink). In the United States, government regulation limits the maximum power output, resulting in a maximum data rate of 53.3 kbit/s. The uplink (from the user to the central office) still uses V.34 technology at 33.6 kbit/s.

Later, in V.92, the digital PCM technique was applied to increase the upload speed to a maximum of 48 kbit/s, but at the expense of download rates. For example, a 48 kbit/s upstream rate would reduce the downstream to as low as 40 kbit/s, due to echo on the telephone line. To avoid this problem, V.92 modems offer the option to turn off the digital upstream and instead use a 33.6 kbit/s analog connection, in order to maintain a high digital downstream of 50 kbit/s or higher.[2] V.92 also adds two other features. The first is the ability for users who have call waiting to put their dial-up Internet connection on hold for extended periods of time while they answer a call. The second is the ability to quickly connect to one’s ISP, achieved by remembering the analog and digital characteristics of the telephone line and using this saved information to reconnect quickly.

Using compression to exceed 56k

Today’s V.42, V.42bis and V.44 standards allow the modem to transmit data faster than its basic rate would imply. For instance, a 53.3 kbit/s connection with V.44 can transmit up to 53.3 × 6 ≈ 320 kbit/s using pure text. However, the compression ratio tends to vary due to noise on the line, or due to the transfer of already-compressed files (ZIP files, JPEG images, MP3 audio, MPEG video).[3] At some points the modem will be sending compressed files at approximately 50 kbit/s, uncompressed files at 160 kbit/s, and pure text at 320 kbit/s, or any value in between.[4]
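
V.44 itself is not reproduced here, but the general effect can be illustrated with any lossless compressor: repetitive text shrinks dramatically, while already-compressed (effectively random) data barely shrinks at all. A sketch using Python's zlib as a stand-in for the modem's compressor:

```python
import os
import zlib

text = ("Modems are classified by the amount of data they can send "
        "in a given unit of time. ") * 200            # highly repetitive text
already_compressed = os.urandom(len(text))             # stands in for a ZIP/JPEG payload

for label, payload in [("text", text.encode()), ("pre-compressed", already_compressed)]:
    out = zlib.compress(payload, 9)
    print(f"{label}: {len(payload)} -> {len(out)} bytes "
          f"({len(payload) / len(out):.1f}:1)")
```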

In such situations a small amount of memory in the modem, a buffer, is used to hold the data while it is being compressed and sent across the phone line; to prevent the buffer from overflowing, it is sometimes necessary to tell the computer to pause the data stream. This is accomplished through hardware flow control, using extra lines on the modem–computer connection. The computer is then set to supply the modem at some higher rate, such as 320 kbit/s, and the modem tells the computer when to start or stop sending data.
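
With the pyserial library, for example, this amounts to opening the port with RTS/CTS hardware flow control enabled and letting the modem throttle the stream; the port name is a placeholder and the details vary by platform (a sketch, not a complete program):

```python
import serial  # pyserial

# Open the serial port at a DTE rate well above the modem's line rate and let
# RTS/CTS hardware flow control pause transmission when the modem's buffer fills.
port = serial.Serial("/dev/ttyS0", 230400, rtscts=True, timeout=5)
port.write(b"...a large block of data to be compressed by the modem...")
port.close()
```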

Compression by the ISP

As telephone-based 56k modems began losing popularity, some Internet service providers such as Netzero and Juno started using pre-compression to increase throughput and maintain their customer base. For example, the Netscape ISP used a compression program that compressed images, text, and other objects at the modem server, just prior to sending them across the phone line. Certain content using lossy compression (e.g., images) may be recompressed (transcoded) using different parameters to the compression algorithm, making the transmitted content smaller but of lower quality. The server-side compression operates much more efficiently than the on-the-fly compression of V.44-enabled modems because V.44 is a generalized compression algorithm, whereas the server-side techniques are application-specific (JPEG, MPEG, Vorbis, etc.). Typically, website text is compacted to 4% of its original size,[citation needed] thus increasing effective throughput to approximately 1,300 kbit/s. The accelerator also pre-compresses Flash executables and images to approximately 30% and 12%, respectively.

The drawback of this approach is a loss in quality: the GIF and JPEG images are lossily recompressed, which causes the content to become pixelated and smeared. However, the speed is dramatically improved, such that Web pages load in less than 5 seconds, and the user can manually choose to view the uncompressed images at any time. The ISPs employing this approach advertise it as “surf 5× faster” or simply “accelerated dial-up”.[5][6]

These accelerated downloads are now integrated into the Opera web browser.

List of dialup speeds

Note that the values given are maximum values, and actual values may be slower under certain conditions (for example, noisy phone lines).[7] For a complete list see the companion article list of device bandwidths. A baud is one symbol per second; each symbol may encode one or more data bits.

Connection | Modulation | Bitrate [kbit/s] | Year released
110 baud (Bell 101) | FSK | 0.1 | 1958 [8]
300 baud (Bell 103 or V.21) | FSK | 0.3 | 1962
1,200 bit/s (1,200 baud) (Bell 202) | FSK | 1.2 |
1,200 bit/s (600 baud) (Bell 212A or V.22) | QPSK | 1.2 | 1980?[9][10]
2,400 bit/s (600 baud) (V.22bis) | QAM | 2.4 | 1984 [9]
2,400 bit/s (1,200 baud) (V.26bis) | PSK | 2.4 |
4,800 bit/s (1,600 baud) (V.27ter) | PSK | 4.8 | [11]
9,600 bit/s (2,400 baud) (V.32) | QAM | 9.6 | 1984 [9]
14.4 kbit/s (2,400 baud) (V.32bis) | QAM | 14.4 | 1991 [9]
28.8 kbit/s (3,200 baud) (V.34) | QAM | 28.8 | 1994 [9]
33.6 kbit/s (3,429 baud) (V.34) | QAM | 33.6 | [12]
56 kbit/s (8,000/3,429 baud) (V.90) | | 56.0/33.6 | 1998 [9]
56 kbit/s (8,000/8,000 baud) (V.92) | | 56.0/48.0 | 2000 [9]
Bonding modem (two 56k modems) (V.92)[13] | | 112.0/96.0 |
Hardware compression (variable) (V.90/V.42bis) | | 56.0–220.0 |
Hardware compression (variable) (V.92/V.44) | | 56.0–320.0 |
Server-side web compression (variable) (Netscape ISP) | | 100.0–1,000.0 |

Radio modems

Direct broadcast satellite, WiFi, and mobile phones all use modems to communicate, as do most other wireless services today. Modern telecommunications and data networks also make extensive use of radio modems where long distance data links are required. Such systems are an important part of the PSTN, and are also in common use for high-speed computer network links to outlying areas where fibre is not economical.

Even where a cable is installed, it is often possible to get better performance or to simplify other parts of the system by using radio frequencies and modulation techniques over the cable. Coaxial cable has a very large bandwidth, but signal attenuation becomes a major problem at high data rates if a baseband digital signal is used. By using a modem, a much larger amount of digital data can be transmitted through a single wire. Digital cable television and cable Internet services use radio-frequency modems to meet the increasing bandwidth needs of modern households. Using a modem also allows frequency-division multiple access to be used, making full-duplex digital communication with many users possible over a single wire.

Wireless modems come in a variety of types, bandwidths, and speeds. Wireless modems are often referred to as transparent or smart. They transmit information that is modulated onto a carrier frequency to allow many wireless communication links to operate simultaneously on different frequencies.

Transparent modems operate in a manner similar to their phone-line modem cousins. Typically, they are half duplex, meaning that they cannot send and receive data at the same time. Transparent modems are typically polled in a round-robin manner to collect small amounts of data from scattered locations that do not have easy access to wired infrastructure. They are most commonly used by utility companies for data collection.

Smart modems come with a media access controller inside, which prevents random data from colliding and resends data that is not received correctly. Smart modems typically require more bandwidth than transparent modems, and typically achieve higher data rates. The IEEE 802.11 standard defines a short-range modulation scheme that is used on a large scale throughout the world.

WiFi and WiMax

Wireless data modems are used in the WiFi and WiMax standards, operating at microwave frequencies.

WiFi is principally used in laptops for Internet connections (wireless access point) and wireless application protocol (WAP).

Mobile modems and routers

Modems which use a mobile telephone system (GPRS, UMTS, HSPA, EVDO, WiMax, etc.) are known as wireless modems (sometimes also called cellular modems). Wireless modems can be embedded inside a laptop or appliance, or be external to it. External wireless modems include connect cards, USB modems for mobile broadband, and cellular routers. A connect card is a PC Card or ExpressCard which slides into a PCMCIA/PC Card/ExpressCard slot on a computer. USB wireless modems use a USB port on the laptop instead of a PC Card or ExpressCard slot. A cellular router may have an external datacard (AirCard) that slides into it; most cellular routers allow such datacards or USB modems. Cellular routers may not be modems per se, but they contain modems or allow modems to be slid into them. The difference between a cellular router and a wireless modem is that a cellular router normally allows multiple people to connect to it (since it can route, or support multipoint-to-multipoint connections), while a modem is made for one connection.

Most GSM wireless modems come with an integrated SIM card holder (e.g., the Huawei E220 and Sierra 881), and some models also provide a microSD memory slot and/or a jack for an additional external antenna, such as the Huawei E1762 and Sierra Wireless Compass 885.[14][15] The CDMA (EVDO) versions do not use R-UIM cards, but use an Electronic Serial Number (ESN) instead.

The cost of using a wireless modem varies from country to country. Some carriers implement flat-rate plans for unlimited data transfers. Some have caps (or maximum limits) on the amount of data that can be transferred per month. Other countries have plans that charge a fixed rate per unit of data transferred, per megabyte or even kilobyte of data downloaded; this tends to add up quickly in today’s content-filled world, which is why many people are pushing for flat data rates.

Wireless modems that support the faster data rates of the newest technologies (UMTS, HSPA, EVDO, WiMax) are also considered broadband wireless modems and compete with the other broadband modems described below.

As of the end of April 2011, worldwide shipments of USB modems still surpassed embedded 3G and 4G modules by 3:1, because USB modems can be easily discarded; however, embedded modems could start to gain popularity as tablet sales grow and as the incremental cost of the modems shrinks, so by 2016 the ratio may change to 1:1.[16]

Broadband

ADSL modems, a more recent development, are not limited to the telephone’s voiceband audio frequencies. Some ADSL modems use coded orthogonal frequency-division modulation, known in ADSL as discrete multitone (DMT) and also used, as COFDM, for digital TV in much of the world.

Cable modems use a range of frequencies originally intended to carry RF television channels. Multiple cable modems attached to a single cable can use the same frequency band, using a low-level media access protocol to allow them to work together within the same channel. Typically, ‘up’ and ‘down’ signals are kept separate using frequency division multiple access.

New types of broadband modems are beginning to appear, such as two-way satellite and power-line modems.

Broadband modems should still be classed as modems, since they use complex waveforms to carry digital data. They are more advanced devices than traditional dial-up modems as they are capable of modulating/demodulating hundreds of channels simultaneously.

Many broadband modems include the functions of a router (with Ethernet and WiFi ports) and other features such as DHCP, NAT, and firewalls.

When broadband technology was introduced, networking and routers were unfamiliar to consumers. However, many people knew what a modem was, as most Internet access was through dial-up. Due to this familiarity, companies started selling broadband modems using the familiar term modem rather than vaguer ones like adapter, transceiver, or bridge.

Home networking

Although the name modem is seldom used in this case, modems are also used for high-speed home networking applications, especially those using existing home wiring. One example is the G.hn standard, developed by ITU-T, which provides a high-speed (up to 1 Gbit/s) Local area network using existing home wiring (power lines, phone lines and coaxial cables). G.hn devices use orthogonal frequency-division multiplexing (OFDM) to modulate a digital signal for transmission over the wire.
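
As a hedged sketch of the OFDM idea (not the actual G.hn physical layer), the fragment below maps bits onto QPSK subcarriers and uses an inverse FFT to produce one time-domain OFDM symbol; the subcarrier count and cyclic-prefix length are arbitrary choices for illustration.

```python
import numpy as np

N = 64                                   # subcarriers (arbitrary for this sketch)
CP = 16                                  # cyclic prefix length (arbitrary)
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=2 * N)    # 2 bits per QPSK subcarrier

# Map bit pairs to QPSK points, one per subcarrier.
qpsk = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

# The inverse FFT turns the parallel subcarrier values into time-domain samples,
# and the cyclic prefix (a copy of the tail) guards against inter-symbol interference.
ofdm_symbol = np.fft.ifft(qpsk)
ofdm_symbol = np.concatenate([ofdm_symbol[-CP:], ofdm_symbol])
print(len(ofdm_symbol))                  # 80 samples for one OFDM symbol
```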

The phrase “null modem” was used to describe attaching a specially wired cable between the serial ports of two personal computers: the transmit output of each computer was wired to the receive input of the other. The same software used with modems (such as Procomm or Minicom) could be used with a null modem connection.

Deep-space telecommunications

Many modern modems have their origin in deep space telecommunications systems of the 1960s.

Differences between deep space telecom modems and landline modems:

  • digital modulation formats that have high Doppler immunity are typically used
  • waveform complexity tends to be low, typically binary phase-shift keying
  • error correction varies from mission to mission, but is typically much stronger than in most landline modems

Voice modem

Voice modems are regular modems that are capable of recording or playing audio over the telephone line. They are used for telephony applications. See Voice modem command set for more details on voice modems. This type of modem can be used as an FXO card for Private branch exchange systems (compare V.92).

Popularity

A CEA study in 2006 found that dial-up Internet access was in notable decline in the U.S. In 2000, dial-up Internet connections accounted for 74% of all U.S. residential Internet connections. The U.S. demographic pattern of dial-up modem users per capita has been more or less mirrored in Canada and Australia for the past 20 years.

Dial-up modem use in the US had dropped to 60% by 2003, and in 2006 stood at 36%. Voiceband modems were once the most popular means of Internet access in the U.S., but with the advent of new ways of accessing the Internet, the traditional 56K modem is losing popularity.

References

  1. ^ IEEE History Center. “Gottfried Ungerboeck Oral History”. Retrieved 2008-02-10.
  2. ^ See November and October 2000 update at http://www.modemsite.com/56k/v92s.asp
  3. ^ Modem compression: V.44 against V.42bis
  4. ^ Re: Modems FAQ – Wolfgang Henke.
  5. ^ Accelerated Dial-Up Service – © 1998-2010 NetZero, Inc.
  6. ^ ISP Review: Netscape with Web Accelerator – Copyright © 2004-2008 by Great-ISP-Deals.com.
  7. ^ Data communication over the telephone network
  8. ^ blogspot.com – baudline signal analyzer
  9. ^ a b c d e f g tldp.org – 29.2 Historical Modem Protocols
  10. ^ concordia.ca – Data Communication and Computer Networks
  11. ^ garretwilson.com – Group 3 Facsimile Communication
  12. ^ upatras.gr – Implementation of a V.34 modem on a Digital Signal Processor
  13. ^ About bonding modems
  14. ^ http://www.3gmodem.com.hk/Huawei/E1762.html
  15. ^ http://www.reghardware.com/2008/07/16/review_sierra_compass_885/
  16. ^ http://news.yahoo.com/s/pcworld/20110503/tc_pcworld/laptopusersstillpreferusbmodems


Antivirus

Antivirus or anti-virus software is used to prevent, detect, and remove malware, including but not limited to computer viruses, computer worms, Trojan horses, spyware, and adware. This article describes the software used for the prevention and removal of such threats, rather than computer security implemented by software methods in general.

A variety of strategies are typically employed. Signature-based detection involves searching for known patterns of data within executable code. However, it is possible for a computer to be infected with new malware for which no signature is yet known. To counter such so-called zero-day threats, heuristics can be used. One type of heuristic approach, generic signatures, can identify new viruses or variants of existing viruses by looking for known malicious code, or slight variations of such code, in files. Some antivirus software can also predict what a file will do by running it in a sandbox and analyzing what it does to see if it performs any malicious actions.

No matter how useful antivirus software can be, it can sometimes have drawbacks. Antivirus software can impair a computer’s performance. Inexperienced users may also have trouble understanding the prompts and decisions that antivirus software presents them with. An incorrect decision may lead to a security breach. If the antivirus software employs heuristic detection, success depends on achieving the right balance between false positives and false negatives. False positives can be as destructive as false negatives.[1] Finally, antivirus software generally runs at the highly trusted kernel level of the operating system, creating a potential avenue of attack.[2]

History

An example of free antivirus software: ClamTk 3.08.

Most of the computer viruses written in the early and mid 1980s were limited to self-reproduction and had no specific damage routine built into the code.[3] That changed when more and more programmers became acquainted with virus programming and created viruses that manipulated or even destroyed data on infected computers.

There are competing claims for the innovator of the first antivirus product. Possibly the first publicly documented removal of a computer virus in the wild was performed by Bernd Fix in 1987.[4][5] There were also two antivirus applications for the Atari ST platform developed in 1987. The first was G Data[6] and the second was UVK 2000.[7]

Fred Cohen, who published one of the first academic papers on computer viruses in 1984,[8] began to develop strategies for antivirus software in 1988[9] that were picked up and continued by later antivirus software developers.

Also in 1988 a mailing list named VIRUS-L[10] was started on the BITNET/EARN network where new viruses and the possibilities of detecting and eliminating viruses were discussed. Some members of this mailing list like John McAfee or Eugene Kaspersky later founded software companies that developed and sold commercial antivirus software.

Before internet connectivity was widespread, viruses were typically spread by infected floppy disks. Antivirus software came into use, but was updated relatively infrequently. During this time, virus checkers essentially had to check executable files and the boot sectors of floppy disks and hard disks. However, as internet usage became common, viruses began to spread online.[11]

Over the years it has become necessary for antivirus software to check an increasing variety of files, rather than just executables, for several reasons:

  • Powerful macros used in word processor applications, such as Microsoft Word, presented a risk. Virus writers could use the macros to write viruses embedded within documents. This meant that computers could now also be at risk from infection by opening documents with hidden attached macros.[12]
  • Later email programs, in particular Microsoft’s Outlook Express and Outlook, were vulnerable to viruses embedded in the email body itself. A user’s computer could be infected by just opening or previewing a message.[13]

As always-on broadband connections became the norm, and more and more viruses were released, it became essential to update virus checkers more and more frequently. Even then, a new zero-day virus could become widespread before antivirus companies released an update to protect against it.

Identification methods

Malwarebytes Anti-Malware version 1.46 – a proprietary freeware antimalware product

There are several methods which antivirus software can use to identify malware.

Signature based detection is the most common method. To identify viruses and other malware, antivirus software compares the contents of a file to a dictionary of virus signatures. Because viruses can embed themselves in existing files, the entire file is searched, not just as a whole, but also in pieces.[14]
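
A toy sketch of the idea in Python; the signature names and byte patterns below are invented placeholders, not real malware signatures, and a production scanner would stream large files and use far more efficient multi-pattern matching.

```python
# Toy signature scanner: report which known byte patterns occur anywhere in a file.
SIGNATURES = {
    "Example.TestSig.A": bytes.fromhex("deadbeef01020304"),   # placeholder pattern
    "Example.TestSig.B": b"THIS-IS-A-PLACEHOLDER-PATTERN",    # placeholder pattern
}

def scan_file(path):
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, sig in SIGNATURES.items() if sig in data]

# Usage: scan_file("suspect.bin") returns the list of matching signature names.
```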

Heuristic-based detection, like malicious activity detection, can be used to identify unknown viruses.

File emulation is another heuristic approach. File emulation involves executing a program in a virtual environment and logging what actions the program performs. Depending on the actions logged, the antivirus software can determine if the program is malicious or not and then carry out the appropriate disinfection actions.[15]

Signature-based detection

Traditionally, antivirus software heavily relied upon signatures to identify malware. This can be very effective, but cannot defend against malware unless samples have already been obtained and signatures created. Because of this, signature-based approaches are not effective against new, unknown viruses.

As new viruses are being created each day, the signature-based detection approach requires frequent updates of the virus signature dictionary. To assist the antivirus software companies, the software may allow the user to upload new viruses or variants to the company, allowing the virus to be analyzed and the signature added to the dictionary.[14]

Although the signature-based approach can effectively contain virus outbreaks, virus authors have tried to stay a step ahead of such software by writing “oligomorphic“, “polymorphic” and, more recently, “metamorphic” viruses, which encrypt parts of themselves or otherwise modify themselves as a method of disguise, so as to not match virus signatures in the dictionary.[16]

Heuristics

Some more sophisticated antivirus software uses heuristic analysis to identify new malware or variants of known malware.

Many viruses start as a single infection and, through either mutation or refinement by other attackers, can grow into dozens of slightly different strains, called variants. Generic detection refers to the detection and removal of multiple threats using a single virus definition.[17]

For example, the Vundo trojan has several family members, depending on the antivirus vendor’s classification. Symantec classifies members of the Vundo family into two distinct categories, Trojan.Vundo and Trojan.Vundo.B.[18][19]

While it may be advantageous to identify a specific virus, it can be quicker to detect a virus family through a generic signature or through an inexact match to an existing signature. Virus researchers find common areas that all viruses in a family share uniquely and can thus create a single generic signature. These signatures often contain non-contiguous code, using wildcard characters where differences lie. These wildcards allow the scanner to detect viruses even if they are padded with extra, meaningless code.[20] A detection that uses this method is said to be “heuristic detection.”
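
A hedged sketch of a generic signature using a regular expression over bytes: the two fixed fragments are shared by all variants in a hypothetical family, and the wildcard in the middle tolerates up to 16 bytes of padding or variation. The byte values are invented for illustration.

```python
import re

# Hypothetical generic signature: two byte fragments common to a virus family,
# separated by up to 16 arbitrary bytes of padding (the wildcard region).
GENERIC_SIG = re.compile(rb"\x55\x8b\xec\x83.{0,16}\x6a\x40\x68", re.DOTALL)

def matches_family(data: bytes) -> bool:
    return GENERIC_SIG.search(data) is not None

# A 'variant' padded with 7 filler bytes between the fragments still matches.
sample = b"\x55\x8b\xec\x83" + b"\x90" * 7 + b"\x6a\x40\x68" + b"\x00" * 32
print(matches_family(sample))   # True
```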

Rootkit detection

Main article: Rootkit

Anti-virus software can also scan for rootkits; a rootkit is a type of malware that is designed to gain administrative-level control over a computer system without being detected. Rootkits can change how the operating system functions and in some cases can tamper with the anti-virus program and render it ineffective. Rootkits are also difficult to remove, in some cases requiring a complete re-installation of the operating system.[21][22]

Issues of concern

Unexpected renewal costs

Some commercial antivirus software end-user license agreements include a clause that the subscription will be automatically renewed, and the purchaser’s credit card automatically billed, at the renewal time without explicit approval. For example, McAfee requires users to unsubscribe at least 60 days before the expiration of the present subscription[23] while BitDefender sends notifications to unsubscribe 30 days before the renewal.[24] Norton Antivirus also renews subscriptions automatically by default.[25]

Rogue security applications

Some apparent antivirus programs are actually malware masquerading as legitimate software, such as WinFixer and MS Antivirus.[26]

Problems caused by false positives

A “false positive” is when antivirus software identifies a non-malicious file as a virus. When this happens, it can cause serious problems. For example, if an antivirus program is configured to immediately delete or quarantine infected files, a false positive in an essential file can render the operating system or some applications unusable.[27] In May 2007, a faulty virus signature issued by Symantec mistakenly removed essential operating system files, leaving thousands of PCs unable to boot.[28] Also in May 2007, the executable file required by Pegasus Mail was falsely detected by Norton AntiVirus as being a Trojan and was automatically removed, preventing Pegasus Mail from running. Norton AntiVirus had falsely identified three releases of Pegasus Mail as malware, and would delete the Pegasus Mail installer file when that happened.[29] In response, Pegasus Mail stated:

On the basis that Norton/Symantec has done this for every one of the last three releases of Pegasus Mail, we can only condemn this product as too flawed to use, and recommend in the strongest terms that our users cease using it in favour of alternative, less buggy anti-virus packages.[29]

In April 2010, McAfee VirusScan detected svchost.exe, a normal Windows binary, as a virus on machines running Windows XP with Service Pack 3, causing a reboot loop and loss of all network access.[30][31]

In December 2010, a faulty update to the AVG anti-virus suite damaged 64-bit versions of Windows 7, rendering them unable to boot due to an endless boot loop.[32]

When Microsoft Windows is damaged by faulty anti-virus products, fixing the damage incurs technical support costs, and businesses can be forced to close whilst remedial action is undertaken.[33][34]

System and interoperability related issues

Running multiple antivirus programs concurrently can degrade performance and create conflicts.[35] However, using a concept called multiscanning, several companies (including G Data[36] and Microsoft[37]) have created applications which can run multiple engines concurrently.

It is sometimes necessary to temporarily disable virus protection when installing major updates such as Windows Service Packs or updating graphics card drivers.[38] Active antivirus protection may partially or completely prevent the installation of a major update.

A minority of software programs are not compatible with anti-virus software. For example, the TrueCrypt troubleshooting page reports that anti-virus programs can conflict with TrueCrypt and cause it to malfunction.[39]

Support issues also exist around antivirus application interoperability with common solutions like SSL VPN remote access and network access control products.[40] These technology solutions often have policy assessment applications which require that an up-to-date antivirus application is installed and running. If the antivirus application is not recognized by the policy assessment, whether because it has been updated or because it is not part of the policy assessment library, the user will be unable to connect.

Effectiveness

Studies in December 2007 showed that the effectiveness of antivirus software had decreased in the previous year, particularly against unknown or zero day attacks. The computer magazine c’t found that detection rates for these threats had dropped from 40-50% in 2006 to 20-30% in 2007. At that time, the only exception was the NOD32 antivirus, which managed a detection rate of 68 percent.[41]

The problem is magnified by the changing intent of virus authors. Some years ago it was obvious when a virus infection was present. The viruses of the day, written by amateurs, exhibited destructive behavior or pop-ups. Modern viruses are often written by professionals, financed by criminal organizations.[42]

Independent testing on all the major virus scanners consistently shows that none provide 100% virus detection. The best ones provided as high as 99.6% detection, while the lowest provided only 81.8% in tests conducted in February 2010. All virus scanners produce false positive results as well, identifying benign files as malware.[43]

Although methodologies may differ, some notable independent quality testing agencies include AV-Comparatives, ICSA Labs, West Coast Labs, VB100 and other members of the Anti-Malware Testing Standards Organization.[44]

New viruses

Anti-virus programs are not always effective against new viruses, even those that use non-signature-based methods that should detect new viruses. The reason for this is that the virus designers test their new viruses on the major anti-virus applications to make sure that they are not detected before releasing them into the wild.[45]

Some new viruses, particularly ransomware, use polymorphic code to avoid detection by virus scanners. Jerome Segura, a security analyst with ParetoLogic, explained:[46]

It’s something that they miss a lot of the time because this type of [ransomware virus] comes from sites that use a polymorphism, which means they basically randomize the file they send you and it gets by well-known antivirus products very easily. I’ve seen people firsthand getting infected, having all the pop-ups and yet they have antivirus software running and it’s not detecting anything. It actually can be pretty hard to get rid of, as well, and you’re never really sure if it’s really gone. When we see something like that usually we advise to reinstall the operating system or reinstall backups.[46]

A proof of concept virus has used the Graphics Processing Unit (GPU) to avoid detection from anti-virus software. The potential success of this involves bypassing the CPU in order to make it much harder for security researchers to analyse the inner workings of such malware.[47]

Rootkits

Detecting rootkits is a major challenge for anti-virus programs. Rootkits have full administrative access to the computer and are invisible to users and hidden from the list of running processes in the task manager. Rootkits can modify the inner workings of the operating system[48] and tamper with antivirus programs.[21]

Damaged files

Files which have been damaged by computer viruses are normally damaged beyond recovery. Anti-virus software removes the virus code from the file during disinfection, but this does not always restore the file to its undamaged state. In such circumstances, damaged files can only be restored from existing backups; installed software that is damaged requires re-installation.[49]

Firmware issues

Active anti-virus software can interfere with a firmware update process.[50] Any writeable firmware in the computer can be infected by malicious code.[51] This is a major concern, as an infected BIOS could require the actual BIOS chip to be replaced to ensure the malicious code is completely removed.[52] Anti-virus software is not effective at protecting firmware and the motherboard BIOS from infection.[53]

Other methods

A command-line virus scanner, Clam AV 0.95.2, running a virus signature definition update, scanning a file and identifying a Trojan

Installed antivirus software running on an individual computer is only one method of guarding against viruses. Other methods are also used, including cloud-based antivirus, firewalls and on-line scanners.

Cloud antivirus

Cloud antivirus is a technology that uses lightweight agent software on the protected computer, while offloading the majority of data analysis to the provider’s infrastructure.[54]

One approach to implementing cloud antivirus involves scanning suspicious files using multiple antivirus engines. This approach was proposed by an early implementation of the cloud antivirus concept called CloudAV. CloudAV was designed to send programs or documents to a network cloud where multiple antivirus and behavioral detection programs are used simultaneously in order to improve detection rates. Parallel scanning of files using potentially incompatible antivirus scanners is achieved by spawning a virtual machine per detection engine and therefore eliminating any possible issues. CloudAV can also perform “retrospective detection,” whereby the cloud detection engine rescans all files in its file access history when a new threat is identified thus improving new threat detection speed. Finally, CloudAV is a solution for effective virus scanning on devices that lack the computing power to perform the scans themselves.[55]

[edit] Network firewall

Network firewalls prevent unknown programs and processes from accessing the system. However, they are not antivirus systems and make no attempt to identify or remove anything. They may protect against infection from outside the protected computer or network, and limit the activity of any malicious software which is present by blocking incoming or outgoing requests on certain TCP/IP ports. A firewall is designed to deal with broader system threats that come from network connections into the system and is not an alternative to a virus protection system.
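
As a rough illustration of the port-blocking behaviour described above, the sketch below checks hypothetical connection attempts against a small allow-list of TCP destination ports. The port numbers and address tuples are invented for the example; a real firewall filters packets in the kernel or on dedicated hardware rather than in application code.

    # Permit only web traffic (ports 80 and 443) and block everything else.
    ALLOWED_TCP_PORTS = {80, 443}

    def permit(connection):
        """connection is a (source_ip, destination_port) pair."""
        _source_ip, destination_port = connection
        return destination_port in ALLOWED_TCP_PORTS

    attempts = [("203.0.113.5", 443), ("198.51.100.7", 25), ("192.0.2.1", 80)]
    for conn in attempts:
        print(conn, "ALLOW" if permit(conn) else "BLOCK")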

[edit] Online scanning

Some antivirus vendors maintain websites with free online scanning capability of the entire computer, critical areas only, local disks, folders or files. Periodic online scanning is a good idea for those who run antivirus applications on their computers, because those applications are frequently slow to catch threats. One of the first things that malicious software does in an attack is disable any existing antivirus software, and sometimes the only way to know of an attack is by turning to an online resource that is not installed on the infected computer.[56]

[edit] Specialist tools

Using rkhunter to scan for rootkits on an Ubuntu Linux computer.

Virus removal tools are available to help remove stubborn infections or certain types of infection. Examples include Trend Micro’s Rootkit Buster[57] and rkhunter for the detection of rootkits, Avira’s AntiVir Removal Tool,[58] the PC Tools Threat Removal Tool,[59] and AVG’s Anti-Virus Free 2011.[60]

A bootable rescue disk, such as a CD or USB storage device, can be used to run antivirus software outside of the installed operating system, in order to remove infections while they are dormant. A bootable antivirus disk can be useful when, for example, the installed operating system is no longer bootable or has malware that is resisting all attempts to be removed by the installed antivirus software. Examples of such bootable disks include the Avira AntiVir Rescue System,[58] the PC Tools Alternate Operating System Scanner,[61] and the AVG Rescue CD.[62] The AVG Rescue CD software can also be installed onto a USB storage device that is bootable on newer computers.[62]

[edit] Popularity

A survey by Symantec in 2009 found that a third of small to medium-sized businesses did not use antivirus protection at that time, whereas more than 80% of home users had some kind of antivirus installed.[63]

[edit] See also

[edit] References

  1. ^ “Softpedia Exclusive Interview: Avira 10”. Ionut Ilascu. Softpedia. 14 April 2010. Retrieved 2011-09-11.
  2. ^ “Norton AntiVirus ignores malicious WMI instructions”. Munir Kotadia. CBS Interactive. 21 October 2004. Retrieved 2009-04-05.
  3. ^ History of viruses
  4. ^ Kaspersky Lab Virus list
  5. ^ Wells, Joe (1996-08-30). “Virus timeline”. IBM. Retrieved 2008-06-06.
  6. ^ G Data Software AG (2011). “G Data presents security firsts at CeBIT 2010”. Retrieved 22 August 2011.
  7. ^ Karsmakers, Richard (January 2010). “The ultimate Virus Killer UVK 2000”. Retrieved 22 August 2011.
  8. ^ Fred Cohen 1984 “Computer Viruses – Theory and Experiments”
  9. ^ Fred Cohen 1988 “On the implications of Computer Viruses and Methods of Defense”
  10. ^ VIRUS-L mailing list archive
  11. ^ Panda Security (April 2004). “(II) Evolution of computer viruses”. Retrieved 2009-06-20.
  12. ^ Szor 2005, pp. 66–67
  13. ^ Slipstick Systems (February 2009). “Protecting Microsoft Outlook against Viruses”. Retrieved 2009-06-18.
  14. ^ a b Landesman, Mary (2009). “What is a Virus Signature?”. Retrieved 2009-06-18.
  15. ^ Szor 2005, pp. 474–481
  16. ^ Szor 2005, pp. 252–288
  17. ^ “Generic detection”. Kaspersky. Retrieved 2009-02-24.
  18. ^ Symantec Corporation (February 2009). “Trojan.Vundo”. Retrieved 2009-04-14.
  19. ^ Symantec Corporation (February 2007). “Trojan.Vundo.B”. Retrieved 2009-04-14.
  20. ^ “Antivirus Research and Detection Techniques”. ExtremeTech. Retrieved 2009-02-24.
  21. ^ a b Rootkit: Definition, Prevention and Removal
  22. ^ Rootkit
  23. ^ Kelly, Michael (October 2006). “Buying Dangerously”. Retrieved 2009-11-29.
  24. ^ Bitdefender (2009). “Automatic Renewal”. Retrieved 2009-11-29.
  25. ^ Symantec (undated). “Ongoing Protection”. Retrieved 2009-11-29.
  26. ^ SpywareWarrior (2007). “Rogue/Suspect Anti-Spyware Products & Web Sites”. Retrieved 2009-11-29.
  27. ^ Emil Protalinski (November 11, 2008). “AVG incorrectly flags user32.dll in Windows XP SP2/SP3”. Ars Technica. Retrieved 2011-02-24.
  28. ^ Aaron Tan (May 24, 2007). “Flawed Symantec update cripples Chinese PCs”. CNET Networks. Retrieved 2009-04-05.
  29. ^ a b David Harris (June 29, 2009). “January 2010 – Pegasus Mail v4.52 Release”. Pegasus Mail. Retrieved 2010-05-21.
  30. ^ “McAfee DAT 5958 Update Issues”. 21 April 2010. Retrieved 22 April 2010.
  31. ^ “Botched McAfee update shutting down corporate XP machines worldwide”. 21 April 2010. Retrieved 22 April 2010.
  32. ^ John Leyden (December 2, 2010). “Horror AVG update ballsup bricks Windows 7”. The Register. Retrieved 2010-12-02.
  33. ^ McAfee to compensate businesses for buggy update, retrieved Thursday, 2 December 2010
  34. ^ Buggy McAfee update whacks Windows XP PCs, retrieved Thursday, 2 December 2010
  35. ^ Microsoft (January 2007). “Plus! 98: How to Remove McAfee VirusScan”. Retrieved 2009-11-29.
  36. ^ Robert Vamosi (May 28, 2009). “G-Data Internet Security 2010”. PC World. Retrieved 2011-02-24.
  37. ^ Kelly Jackson Higgins (May 5, 2010). “New Microsoft Forefront Software Runs Five Antivirus Vendors’ Engines”. Darkreading. Retrieved 2011-02-24.
  38. ^ Microsoft (April 2009). “Steps to take before you install Windows XP Service Pack 3”. Retrieved 2009-11-29.
  39. ^ “Troubleshooting”. Retrieved 2011-02-17.
  40. ^ Field Notice: FN – 63204 – Cisco Clean Access has Interoperability issue with Symantec Anti-virus – delays Agent start-up
  41. ^ Dan Goodin (December 21, 2007). “Anti-virus protection gets worse”. Channel Register. Retrieved 2011-02-24.
  42. ^ Dan Illett (July 13, 2007). “Hacking poses threats to business”. Computer Weekly. Retrieved 2009-11-15.
  43. ^ AV Comparatives (February 2010). “Anti-Virus Comparative No. 25”. Retrieved 18 April 2010.
  44. ^ Guidelines released for antivirus software tests
  45. ^ Kotadia, Munir (July 2006). “Why popular antivirus apps ‘do not work'”. Retrieved 14 April 2010.
  46. ^ a b The Canadian Press (April 2010). “Internet scam uses adult game to extort cash”. CBC News. Retrieved 17 April 2010.
  47. ^ Researchers up evilness ante with GPU-assisted malware – Coming to a PC near you, by Dan Goodin
  48. ^ GIBSON RESEARCH CORPORATION SERIES: Security Now!
  49. ^ “How Anti-Virus Software Works”. Retrieved 2011-02-16.
  50. ^ “BT Home Hub Firmware Upgrade Procedure”. Retrieved 2011-03-06.
  51. ^ “The 10 faces of computer malware”. July 17, 2009. Retrieved 2011-03-06.
  52. ^ “New BIOS Virus Withstands HDD Wipes”. Friday 27 March 2009. Retrieved 2011-03-06.
  53. ^ “Phrack Inc. Persistent BIOS Infection”. June 1, 2009. Retrieved 2011-03-06.
  54. ^ Zeltser, Lenny (October 2010). “What Is Cloud Anti-Virus and How Does It Work?”. Retrieved 2010-10-26.
  55. ^ Jon Erickson (August 6, 2008). “Antivirus Software Heads for the Clouds”. Information Week. Retrieved 2010-02-24.
  56. ^ Brian Krebs (March 9, 2007). “Online Anti-Virus Scans: A Free Second Opinion”. Washington Post. Retrieved 2011-02-24.
  57. ^ Ryan Naraine (February 2, 2007). “Trend Micro ships free ‘rootkit buster'”. ZDNet. Retrieved 2011-02-24.
  58. ^ a b Neil J. Rubenking (March 26, 2010). “Avira AntiVir Personal 10”. PC Magazine. Retrieved 2011-02-24.
  59. ^ Neil J. Rubenking (September 16, 2010). “PC Tools Spyware Doctor with AntiVirus 2011”. PC Magazine. Retrieved 2011-02-24.
  60. ^ Neil J. Rubenking (October 4, 2010). “AVG Anti-Virus Free 2011”. PC Magazine. Retrieved 2011-02-24.
  61. ^ Neil J. Rubenking (November 19, 2009). “PC Tools Internet Security 2010”. PC Magazine. Retrieved 2011-02-24.
  62. ^ a b Carrie-Ann Skinner (March 25, 2010). “AVG Offers Free Emergency Boot CD”. PC World. Retrieved 2011-02-24.
  63. ^ Michael Kaiser (April 17, 2009). “Small and Medium Size Businesses are Vulnerable”. National Cyber Security Alliance. Retrieved 2011-02-24.

Szor, Peter (2005), The Art of Computer Virus Research and Defense, Addison-Wesley, ISBN 0-32-130454-3

Here are a few suggestions for free antivirus downloads:

Avira

Smadav

PCMAV

Norton

credit to: http://en.wikipedia.org/wiki/Antivirus

Standard

Internet map. The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide.


A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communications channels that allow sharing of resources and information.[1]

Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope.

The rules and data formats for exchanging information in a computer network are defined by communications protocols. Well-known communications protocols are Ethernet, a hardware and Link Layer standard that is ubiquitous in local area networks, and the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, as well as host-to-host data transfer, and application-specific data transmission formats.

Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of these disciplines.

Contents


[edit] History


Before the advent of computer networks based upon some type of telecommunications system, communication between calculation machines and early computers was performed by human users who carried instructions between them. Many of the social behaviors seen in today’s Internet were demonstrably present in the 19th century and arguably in even earlier networks using visual signals.

Today, computer networks are the core of modern communication. All modern aspects of the public switched telephone network (PSTN) are computer-controlled, and telephony increasingly runs over the Internet Protocol, although not necessarily the public Internet. The scope of communication has increased significantly in the past decade, and this boom in communications would not have been possible without the progressively advancing computer network. Computer networks, and the technologies needed to connect and communicate through and between them, continue to drive computer hardware, software, and peripherals industries. This expansion is mirrored by growth in the numbers and types of users of networks from the researcher to the home user.

[edit] Properties

Computer networks:

Facilitate communications 
Using a network, people can communicate efficiently and easily via email, instant messaging, chat rooms, telephone, video telephone calls, and video conferencing.
Permit sharing of files, data, and other types of information
In a network environment, authorized users may access data and information stored on other computers on the network. The capability of providing access to data and information on shared storage devices is an important feature of many networks.
Share network and computing resources
In a networked environment, each computer on a network may access and use resources provided by devices on the network, such as printing a document on a shared network printer. Distributed computing uses computing resources across a network to accomplish tasks.
May be insecure
A computer network may be used by computer hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network normally (denial of service).
May interfere with other technologies
Power line communication strongly disturbs certain forms of radio communication, e.g., amateur radio.[5] It may also interfere with last mile access technologies such as ADSL and VDSL.[6]
May be difficult to set up
A complex computer network may be difficult to set up. It may also be very costly to set up an effective computer network in a large organization or company.

[edit] Communication media

Computer networks can be classified according to the hardware and associated software technology that is used to interconnect the individual devices in the network, such as electrical cable (HomePNA, power line communication, G.hn), optical fiber, and radio waves (wireless LAN). In the OSI model, these are located at levels 1 and 2.

A well-known family of communication media is collectively known as Ethernet. It is defined by IEEE 802 and utilizes various standards and media that enable communication between devices. Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium.

[edit] Wired technologies

  • Twisted pair wire is the most widely used medium for telecommunication. Twisted-pair cabling consists of copper wires that are twisted into pairs. Ordinary telephone wires consist of two insulated copper wires twisted into pairs. Computer networking cabling (wired Ethernet as defined by IEEE 802.3) consists of 4 pairs of copper cabling that can be utilized for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. The transmission speed ranges from 2 million bits per second to 10 billion bits per second. Twisted pair cabling comes in two forms, Unshielded Twisted Pair (UTP) and Shielded Twisted Pair (STP), which are rated in categories manufactured in different increments for various scenarios.
  • Coaxial cable is widely used for cable television systems, office buildings, and other work-sites for local area networks. The cable consists of copper or aluminum wire wrapped with an insulating layer, typically of a flexible material with a high dielectric constant, all of which is surrounded by a conductive layer. The layers of insulation help minimize interference and distortion. Transmission speeds range from 200 million to more than 500 million bits per second.
  • Optical fiber cable consists of one or more filaments of glass fiber wrapped in protective layers that carry data by means of pulses of light. It transmits light which can travel over extended distances. Fiber-optic cables are not affected by electromagnetic radiation. Transmission speed may reach trillions of bits per second. The transmission speed of fiber optics is hundreds of times faster than that of coaxial cables and thousands of times faster than that of twisted-pair wire. This capacity may be further increased by the use of colored light, i.e., light of multiple wavelengths. Instead of carrying one message in a stream of monochromatic light impulses, this technology can carry multiple signals in a single fiber.

[edit] Wireless technologies

  • Terrestrial microwave – Terrestrial microwave communication uses Earth-based transmitters and receivers. The equipment looks similar to satellite dishes. Terrestrial microwaves use the low-gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 48 km (30 miles) apart. Microwave antennas are usually placed on top of buildings, towers, hills, and mountain peaks.
  • Communications satellites – The satellites communicate via microwave radio waves, which are not deflected by the Earth’s atmosphere. The satellites are stationed in space, typically 35,400 km (22,200 miles) above the equator (for geosynchronous satellites). These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.
  • Cellular and PCS systems – These use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next area.
  • Wireless LANs – Wireless local area networks use a high-frequency radio technology similar to digital cellular, as well as a low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. An example of open-standards wireless radio-wave technology is IEEE 802.11.
  • Infrared communication can transmit signals between devices within small distances of typically no more than 10 meters. In most cases, line-of-sight propagation is used, which limits the physical positioning of communicating devices.
  • A global area network (GAN) is a network used for supporting mobile communications across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off the user communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.[7]

[edit] Exotic technologies

There have been various attempts at transporting data over more or less exotic media:

  • Extending the Internet to interplanetary dimensions via radio waves.[9]
  • IP over Avian Carriers, a humorous April Fools’ Day Request for Comments issued as RFC 1149, which was implemented in real life in 2001.[8]

A practical limit in both cases is the round-trip delay time, which constrains useful communication.

[edit] Communications protocol

A communications protocol defines the formats and rules for exchanging information via a network; the protocols in use typically form a complete protocol suite which describes the protocols used at various levels. An interesting feature of communications protocols is that they may be – and in fact very often are – stacked above each other, which means that one is used to carry the other. An example of this is HTTP running over TCP over IP over IEEE 802.11, where the second and third are members of the Internet Protocol Suite, while the last is a member of the Ethernet protocol suite. This is the stacking which exists between the wireless router and the home user’s personal computer when surfing the World Wide Web.
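
The HTTP-over-TCP-over-IP stacking just described can be made concrete with a few lines of code. The sketch below opens a TCP connection (which the operating system carries over IP, and over Ethernet or 802.11 below that) and sends a plain HTTP/1.1 request across it; the hostname example.com is used purely for illustration.

    import socket

    # TCP connection, carried by IP underneath.
    with socket.create_connection(("example.com", 80)) as sock:
        request = (
            "GET / HTTP/1.1\r\n"
            "Host: example.com\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))   # HTTP, the application-layer protocol
        response = b""
        while True:
            chunk = sock.recv(4096)             # TCP delivers the byte stream back
            if not chunk:
                break
            response += chunk

    print(response.split(b"\r\n", 1)[0].decode())   # e.g. "HTTP/1.1 200 OK"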

Communication protocols have themselves various properties, such as whether they are connection-oriented versus connectionless, whether they use circuit mode or packet switching, or whether they use hierarchical or flat addressing.

There exist a multitude of communication protocols, a few of which are described below.

[edit] Ethernet

Main article: Ethernet

Ethernet is a family of connectionless protocols used in LANs, described by a set of standards together called IEEE 802 published by the Institute of Electrical and Electronics Engineers. It has a flat addressing scheme and is mostly situated at levels 1 and 2 of the OSI model. For home users today, the most well-known member of this protocol family is IEEE 802.11, otherwise known as Wireless LAN (WLAN). However, the complete protocol suite deals with a multitude of networking aspects not only for home use, but especially when the technology is deployed to support a diverse range of business needs. MAC bridging (IEEE 802.1D) deals with the routing of Ethernet packets using a Spanning Tree Protocol, IEEE 802.1Q describes VLANs, and IEEE 802.1X defines a port-based Network Access Control protocol which forms the basis for the authentication mechanisms used in VLANs, but also found in WLANs – it is what the home user sees when they have to enter a “wireless access key”.

[edit] Internet Protocol Suite

The Internet Protocol Suite, often also called TCP/IP, is the foundation of all modern internetworking. It offers connection-less as well as connection-oriented services over an inherently unreliable network traversed by datagram transmission at the Internet Protocol (IP) level. At its core, the protocol suite defines the addressing, identification, and routing specifications in the form of the traditional Internet Protocol Version 4 (IPv4) and IPv6, the next generation of the protocol with a much enlarged addressing capability.
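
The difference in addressing capability between IPv4 and IPv6 can be illustrated with Python’s standard ipaddress module. The prefixes below come from the documentation/example address ranges and are chosen only for demonstration.

    import ipaddress

    ipv4_net = ipaddress.ip_network("192.0.2.0/24")    # IPv4: 32-bit addresses
    ipv6_net = ipaddress.ip_network("2001:db8::/64")   # IPv6: 128-bit addresses

    print(ipv4_net.num_addresses)   # 256
    print(ipv6_net.num_addresses)   # 18446744073709551616

    # Check whether a host address falls inside a given network.
    print(ipaddress.ip_address("192.0.2.17") in ipv4_net)   # True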

[edit] SONET/SDH

Synchronous Optical NETworking (SONET) and Synchronous Digital Hierarchy (SDH) are standardized multiplexing protocols that transfer multiple digital bit streams over optical fiber using lasers. They were originally designed to transport circuit mode communications from a variety of different sources, primarily to support real-time, uncompressed, circuit-switched voice encoded in PCM format. However, due to its protocol neutrality and transport-oriented features, SONET/SDH also was the obvious choice for transporting Asynchronous Transfer Mode (ATM) frames.

[edit] Asynchronous Transfer Mode

Asynchronous Transfer Mode (ATM) is a switching technique for telecommunication networks. It uses asynchronous time-division multiplexing and encodes data into small, fixed-sized cells. This differs from other protocols such as the Internet Protocol Suite or Ethernet that use variable-sized packets or frames. ATM has similarities with both circuit-switched and packet-switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.

While the role of ATM is diminishing in favor of next-generation networks, it still plays a role in the last mile, which is the connection between an Internet service provider and the home user. For an interesting write-up of the technologies involved, including the deep stacking of communications protocols used, see [10].
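
ATM’s fixed-size cells are 53 bytes each: a 5-byte header plus a 48-byte payload. The sketch below gives a rough count of how many cells a variable-length packet occupies; it deliberately ignores the padding and trailer that a real ATM adaptation layer (such as AAL5) adds, so it slightly undercounts for some packet sizes.

    import math

    ATM_CELL_PAYLOAD = 48   # payload bytes per 53-byte cell (the other 5 bytes are header)

    def cells_needed(packet_length: int) -> int:
        """Rough number of ATM cells required to carry a packet of the given size."""
        return math.ceil(packet_length / ATM_CELL_PAYLOAD)

    print(cells_needed(1500))   # a typical Ethernet-sized payload -> 32 cells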

[edit] Scale


Networks are often classified by their physical or organizational extent or their purpose. Usage, trust level, and access rights differ between these types of networks.

[edit] Personal area network

A personal area network (PAN) is a computer network used for communication among computers and other information devices close to one person. Some examples of devices that are used in a PAN are personal computers, printers, fax machines, telephones, PDAs, scanners, and even video game consoles. A PAN may include wired and wireless devices. The reach of a PAN typically extends to 10 meters.[11] A wired PAN is usually constructed with USB and FireWire connections, while technologies such as Bluetooth and infrared communication typically form a wireless PAN.

[edit] Local area network

A local area network (LAN) is a network that connects computers and devices in a limited geographical area such as home, school, computer laboratory, office building, or closely positioned group of buildings. Each computer or device on the network is a node. Current wired LANs are most likely to be based on Ethernet technology, although new standards like ITU-T G.hn also provide a way to create a wired LAN using existing home wires (coaxial cables, phone lines and power lines).[12]

Typical library network, in a branching tree topology and controlled access to resources

All interconnected devices must understand the network layer (layer 3), because they are handling multiple subnets. Those inside the library, which have only 10/100 Mbit/s Ethernet connections to the user devices and a Gigabit Ethernet connection to the central router, could be called “layer 3 switches” because they only have Ethernet interfaces and must understand IP. It would be more correct to call them access routers, where the router at the top is a distribution router that connects to the Internet and to the academic networks’ customer access routers.

The defining characteristics of LANs, in contrast to WANs (wide area networks), include their higher data transfer rates, smaller geographic range, and lack of a need for leased telecommunication lines. Current Ethernet or other IEEE 802.3 LAN technologies operate at data transfer rates up to 10 Gbit/s. The IEEE has projects investigating the standardization of 40 and 100 Gbit/s.[13] LANs can be connected to wide area networks by using routers.

[edit] Home network

A home network is a residential LAN which is used for communication between digital devices typically deployed in the home, usually a small number of personal computers and accessories, such as printers and mobile computing devices. An important function is the sharing of Internet access, often a broadband service through a cable TV or Digital Subscriber Line (DSL) provider.

[edit] Storage area network

A storage area network (SAN) is a dedicated network that provides block-level access to consolidated data storage, making storage devices such as disk arrays appear to servers as locally attached devices.

[edit] Campus network

A campus network is a computer network made up of an interconnection of LANs within a limited geographical area. The networking equipment (switches, routers) and transmission media (optical fiber, copper plant, Cat5 cabling, etc.) are almost entirely owned by the campus tenant or owner: an enterprise, university, government, etc.

In the case of a university campus network, the network is likely to link a variety of campus buildings including, for example, academic colleges or departments, the university library, and student residence halls.

[edit] Backbone network

A backbone network is part of a computer network infrastructure that interconnects various pieces of a network, providing a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks in the same building, in different buildings in a campus environment, or over wide areas. Normally, the backbone’s capacity is greater than that of the networks connected to it.

A large corporation which has many locations may have a backbone network that ties all of these locations together, for example, if a server cluster needs to be accessed by different departments of a company which are located at different geographical locations. The equipment which ties these departments together constitutes the network backbone. Network performance management, including the handling of network congestion, is a critical consideration when designing a network backbone.

A specific case of a backbone network is the Internet backbone, which is the set of wide-area network connections and core routers that interconnect all networks connected to the Internet.

[edit] Metropolitan area network

A Metropolitan area network (MAN) is a large computer network that usually spans a city or a large campus.

Sample EPN made of Frame relay WAN connections and dialup remote access.

Sample VPN used to interconnect 3 offices and remote users

[edit] Wide area network

A wide area network (WAN) is a computer network that covers a large geographic area such as a city, country, or spans even intercontinental distances, using a communications channel that combines many types of media such as telephone lines, cables, and air waves. A WAN often uses transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI reference model: the physical layer, the data link layer, and the network layer.

[edit] Enterprise private network

An enterprise private network is a network built by an enterprise to interconnect various company sites, e.g., production sites, head offices, remote offices, shops, in order to share computer resources.

[edit] Virtual private network

A virtual private network (VPN) is a computer network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network when this is the case. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.

A VPN may have best-effort performance, or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider. Generally, a VPN has a topology more complex than point-to-point.

[edit] Internetwork

An internetwork is the connection of multiple computer networks via a common routing technology using routers. The Internet is an aggregation of many connected internetworks spanning the Earth.

[edit] Organizational scope

Networks are typically managed by the organizations which own them. From the owner’s point of view, networks are seen as intranets or extranets. A special case of network is the Internet, which has no single owner but a distinct status when seen by an organizational entity – that of permitting virtually unlimited global connectivity for a great multitude of purposes.

[edit] Intranets and extranets

Intranets and extranets are parts or extensions of a computer network, usually a LAN.

An intranet is a set of networks, using the Internet Protocol and IP-based tools such as web browsers and file transfer applications, that is under the control of a single administrative entity. That administrative entity closes the intranet to all but specific, authorized users. Most commonly, an intranet is the internal network of an organization. A large intranet will typically have at least one web server to provide users with organizational information.

An extranet is a network that is limited in scope to a single organization or entity but which also has limited connections to the networks of one or more other, usually (but not necessarily) trusted, organizations or entities. For example, a company’s customers may be given access to some part of its intranet even though the customers may not be considered trusted from a security standpoint. Technically, an extranet may also be categorized as a CAN, MAN, WAN, or other type of network, although an extranet cannot consist of a single LAN; it must have at least one connection with an external network.

[edit] Internet

The Internet is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet Protocol Suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet is also the communications backbone underlying the World Wide Web (WWW).

Participants in the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet Protocol Suite, and an addressing system (IP addresses) administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.

[edit] Network topology

[edit] Common layouts

A network topology is the layout of the interconnections of the nodes of a computer network. Common layouts are:

  • A bus network: all nodes are connected to a common medium along which data is transmitted. This was the layout used in the original Ethernet, called 10BASE5 and 10BASE2.
  • A star network: all nodes are connected to a special central node. This is the typical layout found in a Wireless LAN, where each wireless client connects to the central wireless access point.
  • A ring network: each node is connected to its left and right neighbour node, such that all nodes are connected and each node can reach every other node by traversing nodes leftwards or rightwards. The Fiber Distributed Data Interface (FDDI) made use of such a topology.
  • A mesh network: each node is connected to an arbitrary number of neighbors in such a way that there is at least one traversal from any node to any other.
  • A fully connected network: each node is connected to every other node in the network.

Note that the physical layout of the nodes in a network may not necessarily reflect the network topology. As an example, with FDDI, the network topology is a ring (actually two counter-rotating rings), but the physical topology is a star, because all neighboring connections are routed via a central physical location.
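
The connectivity differences between these layouts can be made concrete with a short sketch that builds adjacency lists for two of them; the function names and node labels below are invented for the illustration.

    def ring_topology(n):
        """Each node i is linked to its left and right neighbours (modulo n)."""
        return {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

    def fully_connected_topology(n):
        """Each node is linked to every other node."""
        return {i: [j for j in range(n) if j != i] for i in range(n)}

    print(ring_topology(4))             # {0: [3, 1], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
    print(fully_connected_topology(3))  # {0: [1, 2], 1: [0, 2], 2: [0, 1]}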

[edit] Overlay network

An overlay network is a virtual computer network that is built on top of another network. Nodes in the overlay are connected by virtual or logical links, each of which corresponds to a path, perhaps through many physical links, in the underlying network. The topology of the overlay network may (and often does) differ from that of the underlying one.

A sample overlay network: IP over SONET over Optical

For example, many peer-to-peer networks are overlay networks because they are organized as nodes of a virtual system of links run on top of the Internet. The Internet was initially built as an overlay on the telephone network.[14]

The most striking example of an overlay network, however, is the Internet itself: At the IP layer, each node can reach any other by a direct connection to the desired IP address, thereby creating a fully connected network; the underlying network, however, is composed of a mesh-like interconnect of subnetworks of varying topologies (and, in fact, technologies). Address resolution and routing are the means that allow the mapping of the fully connected IP overlay network to the underlying networks.

Overlay networks have been around since the invention of networking when computer systems were connected over telephone lines using modems, before any data network existed.

Another example of an overlay network is a distributed hash table, which maps keys to nodes in the network. In this case, the underlying network is an IP network, and the overlay network is a table (actually a map) indexed by keys.
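
A minimal sketch of the key-to-node mapping idea behind a distributed hash table is shown below, using simple modular hashing over a fixed node list. Real DHTs such as Chord or Kademlia use consistent hashing and per-node routing tables so that nodes can join and leave gracefully; the node names here are invented for the example.

    import hashlib

    NODES = ["node-a", "node-b", "node-c", "node-d"]   # hypothetical overlay nodes

    def node_for_key(key: str) -> str:
        """Map a key to one of the overlay nodes by hashing it."""
        digest = hashlib.sha1(key.encode("utf-8")).digest()
        index = int.from_bytes(digest, "big") % len(NODES)
        return NODES[index]

    print(node_for_key("some-file.txt"))   # always resolves to the same node for the same key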

Overlay networks have also been proposed as a way to improve Internet routing, such as through quality of service guarantees to achieve higher-quality streaming media. Previous proposals such as IntServ, DiffServ, and IP Multicast have not seen wide acceptance largely because they require modification of all routers in the network.[citation needed] On the other hand, an overlay network can be incrementally deployed on end-hosts running the overlay protocol software, without cooperation from Internet service providers. The overlay has no control over how packets are routed in the underlying network between two overlay nodes, but it can control, for example, the sequence of overlay nodes a message traverses before reaching its destination.

For example, Akamai Technologies manages an overlay network that provides reliable, efficient content delivery (a kind of multicast). Academic research includes end system multicast and overcast for multicast; RON (resilient overlay network) for resilient routing; and OverQoS for quality of service guarantees, among others.

[edit] Basic hardware components

Apart from the physical communications media themselves as described above, networks comprise additional basic hardware building blocks interconnecting their terminals, such as network interface cards (NICs), hubs, bridges, switches, and routers.

[edit] Network interface cards

A network card, network adapter, or NIC (network interface card) is a piece of computer hardware designed to allow computers to physically access a networking medium. It provides a low-level addressing system through the use of MAC addresses.

Each Ethernet network interface has a unique MAC address which is usually stored in a small memory device on the card, allowing any device to connect to the network without creating an address conflict. Ethernet MAC addresses are composed of six octets. Uniqueness is maintained by the IEEE, which manages the Ethernet address space by assigning 3-octet prefixes to equipment manufacturers. The list of prefixes is publicly available. Each manufacturer is then obliged to both use only their assigned prefix(es) and to uniquely set the 3-octet suffix of every Ethernet interface they produce.
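
The prefix/suffix structure of a MAC address described above can be seen by splitting the address into its six octets; the address used below is made up for illustration.

    mac = "00:1A:2B:3C:4D:5E"   # a made-up example address

    octets = mac.split(":")
    oui = ":".join(octets[:3])          # 3-octet manufacturer prefix assigned by the IEEE
    device_part = ":".join(octets[3:])  # 3-octet suffix set uniquely by the manufacturer

    print(oui)          # 00:1A:2B
    print(device_part)  # 3C:4D:5E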

[edit] Repeaters and hubs

A repeater is an electronic device that receives a signal, cleans it of unnecessary noise, regenerates it, and retransmits it at a higher power level, or to the other side of an obstruction, so that the signal can cover longer distances without degradation. In most twisted pair Ethernet configurations, repeaters are required for cable that runs longer than 100 meters. A repeater with multiple ports is known as a hub. Repeaters work on the Physical Layer of the OSI model. Repeaters require a small amount of time to regenerate the signal. This can cause a propagation delay which can affect network communication when there are several repeaters in a row. Many network architectures limit the number of repeaters that can be used in a row (e.g. Ethernet’s 5-4-3 rule).

Today, repeaters and hubs have been made mostly obsolete by switches (see below).

[edit] Bridges

A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model. Bridges broadcast to all ports except the port on which the broadcast was received. However, bridges do not promiscuously copy traffic to all ports, as hubs do, but learn which MAC addresses are reachable through specific ports. Once the bridge associates a port and an address, it will send traffic for that address to that port only.

Bridges learn the association of ports and addresses by examining the source address of frames that they see on various ports. Once a frame arrives through a port, its source address is stored and the bridge assumes that MAC address is associated with that port. The first time a previously unknown destination address is seen, the bridge will forward the frame to all ports other than the one on which the frame arrived.
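
The learning and forwarding behaviour just described can be sketched in a few lines. The class name, port numbers, and MAC addresses below are invented for the illustration, and real bridges also age out old table entries, which this sketch omits.

    class LearningBridge:
        def __init__(self, ports):
            self.ports = ports
            self.table = {}   # MAC address -> port on which it was last seen

        def handle_frame(self, src_mac, dst_mac, in_port):
            self.table[src_mac] = in_port          # learn where the sender is reachable
            out_port = self.table.get(dst_mac)
            if out_port is None or out_port == in_port:
                # Unknown destination: flood to every port except the arrival port.
                return [p for p in self.ports if p != in_port]
            return [out_port]                      # known destination: forward there only

    bridge = LearningBridge(ports=[1, 2, 3, 4])
    print(bridge.handle_frame("aa:aa", "bb:bb", in_port=1))   # [2, 3, 4]  (flooded)
    print(bridge.handle_frame("bb:bb", "aa:aa", in_port=3))   # [1]        (learned earlier)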

Bridges come in three basic types:

  • Local bridges: Directly connect LANs
  • Remote bridges: Can be used to create a wide area network (WAN) link between LANs. Remote bridges, where the connecting link is slower than the end networks, largely have been replaced with routers.
  • Wireless bridges: Can be used to join LANs or connect remote stations to LANs.

[edit] Switches

A network switch is a device that forwards and filters OSI layer 2 datagrams (chunks of data communication) between ports (connected cables) based on the MAC addresses in the packets.[15] A switch is distinct from a hub in that it only forwards the frames to the ports involved in the communication rather than all ports connected. A switch breaks the collision domain but represents itself as a broadcast domain. Switches make forwarding decisions of frames on the basis of MAC addresses. A switch normally has numerous ports, facilitating a star topology for devices, and cascading additional switches.[16] Some switches are capable of routing based on Layer 3 addressing or additional logical levels; these are called multi-layer switches. The term switch is used loosely in marketing to encompass devices including routers and bridges, as well as devices that may distribute traffic on load or by application content (e.g., a Web URL identifier).

[edit] Routers

A router is an internetworking device that forwards packets between networks by processing information found in the datagram or packet (Internet Protocol information from Layer 3 of the OSI Model). In many situations, this information is processed in conjunction with the routing table (also known as the forwarding table). Routers use routing tables to determine the interface on which to forward packets; this can include the “null”, also known as the “black hole”, interface, into which data can be sent but from which no further processing is done.
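
The forwarding decision in IP routers is typically a longest-prefix match against the routing table. The sketch below performs such a lookup with Python’s standard ipaddress module; the prefixes and interface names are invented for the illustration.

    import ipaddress

    ROUTING_TABLE = [
        (ipaddress.ip_network("10.0.0.0/8"),  "eth0"),
        (ipaddress.ip_network("10.1.0.0/16"), "eth1"),
        (ipaddress.ip_network("0.0.0.0/0"),   "eth2"),   # default route
    ]

    def lookup(destination: str) -> str:
        dest = ipaddress.ip_address(destination)
        matches = [(net, iface) for net, iface in ROUTING_TABLE if dest in net]
        # The most specific (longest) matching prefix wins.
        return max(matches, key=lambda m: m[0].prefixlen)[1]

    print(lookup("10.1.2.3"))    # eth1  (the /16 is more specific than the /8)
    print(lookup("192.0.2.9"))   # eth2  (falls through to the default route)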

[edit] Firewalls

A firewall is an important aspect of a network with respect to security. It typically rejects access requests from unsafe sources while allowing actions from recognized ones. The vital role firewalls play in network security grows in parallel with the constant increase in ‘cyber’ attacks for the purpose of stealing/corrupting data, planting viruses, etc.

[edit] Network performance

Main article: network performance

Network performance refers to the service quality of a telecommunications product as seen by the customer. It should not be seen merely as an attempt to get “more through” the network.

The following list gives examples of Network Performance measures for a circuit-switched network and one type of packet-switched network, viz. ATM:

  • Circuit-switched networks: In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads.[17] Other types of performance measures can include noise, echo and so on.
  • ATM: In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique, and modem enhancements.[18]

There are many different ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modelled instead of measured; one example of this is using state transition diagrams to model queuing performance in a circuit-switched network. These diagrams allow the network planner to analyze how the network will perform in each state, ensuring that the network will be optimally designed.[19]
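
As a worked example of the grade-of-service measure mentioned above, the Erlang B formula gives the probability that a call is rejected (blocked) when N circuits are offered A erlangs of traffic. The code below computes it with the standard iterative recurrence; the traffic and circuit values are chosen arbitrarily for illustration.

    def erlang_b(offered_traffic: float, circuits: int) -> float:
        """Blocking probability for `circuits` lines offered `offered_traffic` erlangs."""
        b = 1.0
        for n in range(1, circuits + 1):
            b = (offered_traffic * b) / (n + offered_traffic * b)
        return b

    # 10 circuits offered 6 erlangs of traffic:
    print(round(erlang_b(6.0, 10), 4))   # ~0.0431, i.e. roughly 4.3% of calls are rejected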

[edit] Network security

Main article: network security

In the field of networking, the area of network security[20] consists of the provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and network-accessible resources. Network security is the authorization of access to data in a network, which is controlled by the network administrator. Users are assigned an ID and password that allow them access to information and programs within their authority. Network security covers a variety of computer networks, both public and private, that are used in everyday jobs conducting transactions and communications among businesses, government agencies, and individuals. Networks can be private, such as within a company, or open to public access. Network security is applied in organizations, enterprises, and other types of institutions: as its title explains, it secures the network, and it protects and oversees the operations carried out over it.

[edit] Network resilience

Main article: resilience (network)

In computer networking: “Resilience is the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation.”[21]

[edit] Views of networks

Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest has less of a connection to a local area, and should be thought of as a set of arbitrarily located users who share a set of servers, and possibly also communicate via peer-to-peer technologies.

Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application layer gateways) that interconnect the physical media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more physical media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using virtual LAN (VLAN) technology.

Both users and administrators will be aware, to varying extents, of the trust and scope characteristics of a network. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees).[22] Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).[22]

Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, which share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).

Over the Internet, there can be business-to-business (B2B), business-to-consumer (B2C) and consumer-to-consumer (C2C) communications. Especially when money or sensitive information is exchanged, the communications are apt to be secured by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure Virtual Private Network (VPN) technology.

[edit] See also

 

[edit] References

  1. ^ Computer network definition
  2. ^ Michael A. Banks (2008). On the way to the web: the secret history of the internet and its founders. Apress. p. 1. ISBN 9781430208693.
  3. ^ Christos J. P. Moschovitis (1998). History of the Internet: a chronology, 1843 to the present. ABC-CLIO. p. 36. ISBN 9781576071182.
  4. ^ Chris Sutton. “Internet Began 35 Years Ago at UCLA with First Message Ever Sent Between Two Computers”. UCLA. Archived from the original on March 8, 2008.
  5. ^ The National Association for Amateur Radio: Broadband Over Powerline
  6. ^ “The Likelihood and Extent of Radio Frequency Interference from In-Home PLT Devices”. Ofcom. Retrieved 18 June 2011.
  7. ^ Mobile Broadband Wireless connections (MBWA)
  8. ^ Bergen Linux User Group’s CPIP Implementation
  9. ^ Interplanetary Internet, 2000 Third Annual International Symposium on Advanced Radio Technologies, A. Hooke, September 2000
  10. ^ Martin, Thomas. “Design Principles for DSL-Based Access Solutions”. Retrieved 18 June 2011.
  11. ^ “personal area network (PAN)”. Retrieved January 29, 2011.
  12. ^ New global standard for fully networked home, ITU-T Press Release
  13. ^ IEEE P802.3ba 40Gb/s and 100Gb/s Ethernet Task Force
  14. ^ D. Andersen, H. Balakrishnan, M. Kaashoek, and R. Morris. Resilient Overlay Networks. In Proc. ACM SOSP, Oct. 2001.
  15. ^ “Define switch.”. http://www.webopedia.com. Retrieved April 8, 2008.
  16. ^ “Basic Components of a Local Area Network (LAN)”. NetworkBits.net. Retrieved April 8, 2008.
  17. ^ ITU-T Study Group 2, Teletraffic Engineering Handbook (PDF), Retrieved on February 13, 2005.
  18. ^ Telecommunications Magazine Online, Americas January 2003, Issue Highlights, Online Exclusive: Broadband Access Maximum Performance, Retrieved on February 13, 2005.
  19. ^ “State Transition Diagrams”. Retrieved July 13, 2003.
  20. ^ Simmonds, A; Sandilands, P; van Ekert, L (2004). “An Ontology for Network Security Attacks”. Lecture Notes in Computer Science 3285: 317–323. doi:10.1007/978-3-540-30176-9_41.
  21. ^ The ResiliNets Research Initiative definition of resilience.
  22. ^ a b RFC 2547

 This article incorporates public domain material from the General Services Administration document “Federal Standard 1037C”.

[edit] Further reading

credit to: http://en.wikipedia.org/wiki/Computer_network

Printer

Standard

In computing, a printer is a peripheral which produces a text or graphics rendering of documents stored in electronic form, usually on physical print media such as paper or transparencies. Many printers are primarily used as local peripherals, and are attached by a printer cable or, in most new printers, a USB cable to a computer which serves as a document source. Some printers, commonly known as network printers, have built-in network interfaces, typically wireless and/or Ethernet based, and can serve as a hard copy device for any user on the network. Individual printers are often designed to support both local and network connected users at the same time. In addition, a few modern printers can directly interface to electronic media such as memory cards, or to image capture devices such as digital cameras and scanners; some printers are combined with scanners and/or fax machines in a single unit, and can function as photocopiers. Printers that include non-printing features are sometimes called multifunction printers (MFP), multi-function devices (MFD), or all-in-one (AIO) printers. Most MFPs include printing, scanning, and copying among their many features.

Consumer and some commercial printers are designed for low-volume, short-turnaround print jobs; requiring virtually no setup time to achieve a hard copy of a given document. However, printers are generally slow devices (30 pages per minute is considered fast; and many inexpensive consumer printers are far slower than that), and the cost per page is actually relatively high. However, this is offset by the on-demand convenience and project management costs being more controllable compared to an out-sourced solution. The printing press remains the machine of choice for high-volume, professional publishing. However, as printers have improved in quality and performance, many jobs which used to be done by professional print shops are now done by users on local printers; see desktop publishing. Local printers are also increasingly taking over the process of photofinishing as digital photo printers become commonplace. The world’s first computer printer was a 19th century mechanically driven apparatus invented by Charles Babbage for his Difference Engine.[1]

A virtual printer is a piece of computer software whose user interface and API resembles that of a printer driver, but which is not connected with a physical computer printer.

Contents


Printing technology

Printers are routinely classified by the printer technology they employ; numerous such technologies have been developed over the years. The choice of engine has a substantial effect on what jobs a printer is suitable for, as different technologies are capable of different levels of image or text quality, print speed, cost, and noise. Some printer technologies don’t work with certain types of physical media, such as carbon paper or transparencies.

A second aspect of printer technology that is often forgotten is resistance to alteration: liquid ink, such as from an inkjet head or fabric ribbon, becomes absorbed by the paper fibers, so documents printed with liquid ink are more difficult to alter than documents printed with toner or solid inks, which do not penetrate below the paper surface.

Cheques should either be printed with liquid ink or on special cheque paper with toner anchorage.[2] For similar reasons carbon film ribbons for IBM Selectric typewriters bore labels warning against using them to type negotiable instruments such as cheques. The machine-readable lower portion of a cheque, however, must be printed using MICR toner or ink. Banks and other clearing houses employ automation equipment that relies on the magnetic flux from these specially printed characters to function properly.

Modern print technology

The following printing technologies are routinely found in modern printers:

Toner-based printers

Main article: Laser printer

A laser printer rapidly produces high quality text and graphics. As with digital photocopiers and multifunction printers (MFPs), laser printers employ a xerographic printing process but differ from analog photocopiers in that the image is produced by the direct scanning of a laser beam across the printer’s photoreceptor.

Another toner-based printer is the LED printer which uses an array of LEDs instead of a laser to cause toner adhesion to the print drum.

Liquid inkjet printers

Inkjet printers operate by propelling variably-sized droplets of liquid or molten material (ink) onto almost any size of page. They are the most common type of computer printer used by consumers. Today’s photo-quality inkjet printers have resolutions in the thousands of DPI (1200 to 4800 dpi). They can produce acceptable-quality photo prints from images with 140-200 ppi resolution, and high-quality prints from images with 200-300 ppi resolution.[3]
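
These ppi figures translate directly into maximum print sizes: the printable dimension in inches is simply the image’s pixel dimension divided by the target ppi. The sketch below works this out for a 4000 x 3000 pixel (12-megapixel) image, a size chosen only for illustration.

    def max_print_size(width_px: int, height_px: int, ppi: int):
        """Maximum print dimensions in inches at the given pixels-per-inch."""
        return (round(width_px / ppi, 1), round(height_px / ppi, 1))

    print(max_print_size(4000, 3000, 300))   # (13.3, 10.0) inches at high quality
    print(max_print_size(4000, 3000, 150))   # (26.7, 20.0) inches at acceptable quality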

Solid ink printers

Main article: Solid ink

Solid ink printers, also known as phase-change printers, are a type of thermal transfer printer. They use solid sticks of CMYK-coloured ink, similar in consistency to candle wax, which are melted and fed into a piezo crystal operated print-head. The printhead sprays the ink on a rotating, oil coated drum. The paper then passes over the print drum, at which time the image is transferred, or transfixed, to the page. Solid ink printers are most commonly used as colour office printers, and are excellent at printing on transparencies and other non-porous media. Solid ink printers can produce excellent results. Acquisition and operating costs are similar to laser printers. Drawbacks of the technology include high energy consumption and long warm-up times from a cold state. Also, some users complain that the resulting prints are difficult to write on, as the wax tends to repel inks from pens, and are difficult to feed through automatic document feeders, but these traits have been significantly reduced in later models. In addition, this type of printer is only available from one manufacturer, Xerox, manufactured as part of their Xerox Phaser office printer line. Previously, solid ink printers were manufactured by Tektronix, but Tek sold the printing business to Xerox in 2001.

Dye-sublimation printers

A dye-sublimation printer (or dye-sub printer) is a printer which employs a printing process that uses heat to transfer dye to a medium such as a plastic card, paper or canvas. The process is usually to lay one colour at a time using a ribbon that has colour panels. Dye-sub printers are intended primarily for high-quality colour applications, including colour photography; and are less well-suited for text. While once the province of high-end print shops, dye-sublimation printers are now increasingly used as dedicated consumer photo printers.

Inkless printers

Thermal printers

Main article: Thermal printer

Thermal printers work by selectively heating regions of special heat-sensitive paper. Monochrome thermal printers are used in cash registers, ATMs, gasoline dispensers and some older inexpensive fax machines. Colours can be achieved with special papers and different temperatures and heating rates for different colours; these coloured sheets are not required in black-and-white output. One example is the ZINK technology.

UV printers

Xerox is working on an inkless printer which will use a special reusable paper coated with a few micrometres of UV light sensitive chemicals. The printer will use a special UV light bar which will be able to write and erase the paper. As of early 2007 this technology is still in development and the text on the printed pages can only last between 16–24 hours before fading.

Obsolete and special-purpose printing technologies

Epson MX-80, a popular model in use for many years

The following technologies are either obsolete or limited to special applications, though most were, at one time, in widespread use.

Impact printers rely on a forcible impact to transfer ink to the media, similar to the action of a typewriter. All but the dot matrix printer rely on the use of formed characters, letterforms that represent each of the characters that the printer was capable of printing. In addition, most of these printers were limited to monochrome printing in a single typeface at one time, although bolding and underlining of text could be done by “overstriking”, that is, printing two or more impressions in the same character position. Impact printer varieties include typewriter-derived printers, teletypewriter-derived printers, daisy wheel printers, dot matrix printers, and line printers. Dot matrix printers remain in common use in businesses where multi-part forms are printed, such as car rental services. An overview of impact printing[4] contains a detailed description of many of the technologies used.

Pen-based plotters were an alternative printing technology once common in engineering and architectural firms. Pen-based plotters rely on contact with the paper (but not impact, per se) and on special-purpose pens that are mechanically moved over the paper to create text and images.

Typewriter-derived printers

Several different computer printers were simply computer-controllable versions of existing electric typewriters. The Friden Flexowriter and IBM Selectric typewriter were the most-common examples. The Flexowriter printed with a conventional typebar mechanism while the Selectric used IBM’s well-known “golf ball” printing mechanism. In either case, the letter form then struck a ribbon which was pressed against the paper, printing one character at a time. The maximum speed of the Selectric printer (the faster of the two) was 15.5 characters per second.

Teletypewriter-derived printers

Main article: Teleprinter

The common teleprinter could easily be interfaced to a computer and became very popular except on computers manufactured by IBM. Some models used a “typebox” that was positioned, in the X- and Y-axes, by a mechanism, and the selected letter form was struck by a hammer. Others used a type cylinder in a similar way to how the Selectric typewriters used their type ball. In either case, the letter form then struck a ribbon to print the letterform. Most teleprinters operated at ten characters per second, although a few achieved 15 CPS.

Daisy wheel printers

Main article: Daisy wheel printer

Daisy-wheel printers operate in much the same fashion as a typewriter. A hammer strikes a wheel with petals, the “daisy wheel”, each petal containing a letter form at its tip. The letter form strikes a ribbon of ink, depositing the ink on the page and thus printing a character. By rotating the daisy wheel, different characters are selected for printing. These printers were also referred to as letter-quality printers because, during their heyday, they could produce text which was as clear and crisp as a typewriter, though they were nowhere near the quality of printing presses. The fastest letter-quality printers printed at 30 characters per second.

Dot-matrix printers

Main article: Dot matrix printer

In the general sense many printers rely on a matrix of pixels, or dots, that together form the larger image. However, the term dot matrix printer is specifically used for impact printers that use a matrix of small pins to create precise dots. The advantage of dot-matrix printers over other impact printers is that they can produce graphical images in addition to text; however, the text is generally of poorer quality than that of impact printers that use letterforms (type).

A Tandy 1000 HX with a Tandy DMP-133 dot-matrix printer.

Dot-matrix printers can be broadly divided into two major classes, referring to the configuration of the print head:

  • character-based printers
  • line-based printers (that is, with the print head covering a single horizontal series of pixels across the page at a time)

At one time, dot matrix printers were one of the more common types of printer for general use, such as in homes and small offices. Such printers would have either 9 or 24 pins on the print head. 24-pin print heads were able to print at a higher quality. Once the price of inkjet printers dropped to the point where they were competitive with dot matrix printers, dot matrix printers began to fall out of favour for general use.

Some dot matrix printers, such as the NEC P6300, can be upgraded to print in colour. This is achieved through the use of a four-colour ribbon mounted on a mechanism (provided in an upgrade kit that replaces the standard black ribbon mechanism) that raises and lowers the ribbon as needed. Colour graphics are generally printed in four passes at standard resolution, thus slowing down printing considerably. As a result, colour graphics can take up to four times longer to print than standard monochrome graphics, or up to 8-16 times as long in high-resolution mode.
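
The slowdown figures follow from simple multiplication, and the toy Python sketch below (illustrative only) spells it out; the four-pass count comes from the paragraph above, while the guess that high-resolution mode needs two to four times as many dot rows is an assumption made purely to reproduce the quoted 8-16x range.

    # Rough arithmetic behind the colour slowdown figures (illustrative only).
    passes_per_line = 4                 # one pass per ribbon colour
    hi_res_factor_range = (2, 4)        # assumed extra dot rows in high-resolution mode

    print(f"standard resolution: ~{passes_per_line}x slower than monochrome")
    for factor in hi_res_factor_range:
        print(f"high resolution (x{factor} dot rows): ~{passes_per_line * factor}x slower")
    # prints ~8x and ~16x, matching the 8-16 times range quoted above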

Dot matrix printers are still commonly used in low-cost, low-quality applications like cash registers, or in demanding, very high volume applications like invoice printing. The fact that they use an impact printing method allows them to be used to print multi-part documents using carbonless copy paper, like sales invoices and credit card receipts, whereas other printing methods are unusable with paper of this type. Dot-matrix printers are now (as of 2005) rapidly being superseded even as receipt printers.

Line printers

Line printers, as the name implies, print an entire line of text at a time. Three principal designs existed. In drum printers, a drum carries the entire character set of the printer repeated in each column that is to be printed. In chain printers, also known as train printers, the character set is arranged multiple times around a chain that travels horizontally past the print line. In either case, to print a line, precisely timed hammers strike against the back of the paper at the exact moment that the correct character to be printed is passing in front of the paper. The paper presses forward against a ribbon which then presses against the character form and the impression of the character form is printed onto the paper.

Comb printers, also called line matrix printers, represent the third major design. These printers were a hybrid of dot matrix printing and line printing. In these printers, a comb of hammers printed a portion of a row of pixels at one time, such as every eighth pixel. By shifting the comb back and forth slightly, the entire pixel row could be printed, continuing the example, in just eight cycles. The paper then advanced and the next pixel row was printed. Because far less motion was involved than in a conventional dot matrix printer, these printers were very fast compared to dot matrix printers and were competitive in speed with formed-character line printers while also being able to print dot matrix graphics.
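
As a toy model of that interleaving (an illustration only, not a description of any particular machine), the short Python sketch below “prints” a 32-pixel row with hammers spaced eight pixels apart: each cycle fires every eighth position, and shifting the comb one pixel per cycle covers the whole row in eight cycles.

    # Toy simulation of comb (line matrix) printing: a comb with hammers every
    # 8th pixel covers a full pixel row in 8 shifted cycles.
    ROW_WIDTH = 32
    PITCH = 8                      # spacing between hammers on the comb

    row = [False] * ROW_WIDTH
    for cycle in range(PITCH):     # shift the comb one pixel per cycle
        for pos in range(cycle, ROW_WIDTH, PITCH):
            row[pos] = True        # this hammer strikes its pixel
        print(f"after cycle {cycle + 1}: {sum(row)} of {ROW_WIDTH} pixels printed")

    assert all(row)                # the whole row is covered after 8 cycles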

Line printers were the fastest of all impact printers and were used for bulk printing in large computer centres. They were virtually never used with personal computers and have now been replaced by high-speed laser printers. The legacy of line printers lives on in many computer operating systems, which use the abbreviations “lp”, “lpr”, or “LPT” to refer to printers.

Pen-based plotters

Main article: Plotter

A plotter is a vector graphics printing device which operates by moving a pen over the surface of paper. Plotters have been used in applications such as computer-aided design, though they are rarely used now and are being replaced by wide-format conventional printers, which nowadays have sufficient resolution to render high-quality vector graphics using a rasterized print engine. It is commonplace to refer to such wide-format printers as “plotters”, even though such usage is technically incorrect. There are two types of plotters: flatbed and drum.

Sales

Since 2005, the world’s top-selling brand of inkjet and laser printers has been HP, which now has 46% of sales in inkjet printers and 50.5% in laser printers.[5]

Other printers

A number of other sorts of printers are important for historical reasons or for special-purpose uses.

Printing mode

The data received by a printer may be:

  • plain text (a string of characters)
  • a vector image
  • a bitmapped (raster) image

Some printers can process all three types of data, others not.

  • Character printers, such as daisy wheel printers, can handle only plain text data or rather simple point plots.
  • Pen plotters typically process vector images. Inkjet-based plotters can adequately reproduce all three.
  • Modern printing technology, such as laser printers and inkjet printers, can adequately reproduce all three. This is especially true of printers equipped with support for PostScript and/or PCL, which includes the vast majority of printers produced today.

Today it is common to print everything (even plain text) by sending pre-rendered bitmapped images to the printer, because this allows better control over formatting. Many printer drivers do not use the text mode at all, even if the printer is capable of it.
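
As a rough illustration of this “send a bitmap, not text” approach, the hypothetical sketch below rasterises a line of plain text into a 1-bit page image of the kind a driver might hand to a printer; the Pillow library, the page dimensions and the output file name are assumptions for the example, not part of any real driver.

    # Minimal sketch, assuming the Pillow imaging library is installed; the page
    # size, text position and file name are arbitrary choices for the example.
    from PIL import Image, ImageDraw

    PAGE_W, PAGE_H = 1654, 2339                       # roughly A4 at 200 dpi

    page = Image.new("1", (PAGE_W, PAGE_H), color=1)  # 1-bit page, white background
    draw = ImageDraw.Draw(page)
    draw.text((100, 100), "Hello, printer!", fill=0)  # render the text as black pixels

    # A real driver would wrap this bitmap in the printer's raster or page
    # description format and send it to the device; here it is just saved to a file.
    page.save("rendered_page.png")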

Monochrome, colour and photo printers

A monochrome printer can only produce an image consisting of one colour, usually black. A monochrome printer may also be able to produce various tones of that colour, such as a greyscale. A colour printer can produce images of multiple colours. A photo printer is a colour printer that can produce images that mimic the colour range (gamut) and resolution of prints made from photographic film. Many can be used on a standalone basis without a computer, using a memory card or USB connector.

The printer manufacturing business

Often the razor and blades business model is applied. That is, a company may sell a printer at cost, and make profits on the ink cartridge, paper, or some other replacement part. This has caused legal disputes regarding the right of companies other than the printer manufacturer to sell compatible ink cartridges. To protect their business model, several manufacturers invest heavily in developing new cartridge technology and patenting it.

Other manufacturers, in reaction to the challenges of this business model, choose to make more money on printers and less on the ink, promoting the latter approach through their advertising campaigns. This results in two clearly different propositions: “cheap printer – expensive ink” or “expensive printer – cheap ink”. Ultimately, the consumer’s decision depends on their reference interest rate or their time preference. From an economics viewpoint, there is a clear trade-off between cost per copy and cost of the printer.[6]
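
To make that trade-off concrete, the hedged sketch below compares the total cost of owning a hypothetical “cheap printer, expensive ink” model with an “expensive printer, cheap ink” one and finds the page count at which the two costs meet; every price and yield in it is invented for illustration.

    # Illustrative only: all prices and per-page costs are made-up numbers,
    # not data about any real printer.
    cheap_printer_price, expensive_ink_per_page = 60.0, 0.08    # "cheap printer - expensive ink"
    pricey_printer_price, cheap_ink_per_page = 220.0, 0.02      # "expensive printer - cheap ink"

    def total_cost(printer_price, ink_per_page, pages):
        """Total cost of ownership after printing `pages` pages."""
        return printer_price + ink_per_page * pages

    # Page count at which the two total-cost lines cross.
    break_even = (pricey_printer_price - cheap_printer_price) / (expensive_ink_per_page - cheap_ink_per_page)
    print(f"break-even at about {break_even:.0f} pages")                    # ~2667 pages
    print(total_cost(cheap_printer_price, expensive_ink_per_page, 5000))    # 460.0
    print(total_cost(pricey_printer_price, cheap_ink_per_page, 5000))       # 320.0

With these invented numbers the cheaper printer wins below roughly 2,700 pages and loses above it, which is exactly the cost-per-copy versus printer-cost trade-off described above.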

Printing speed

The speed of early printers was measured in units of characters per second. More modern printers are measured in pages per minute. These measures are used primarily as a marketing tool and are not as well standardised as toner yields. Usually pages per minute refers to sparse monochrome office documents rather than dense pictures, which usually print much more slowly, especially colour images. PPM figures usually refer to A4 paper in Europe and letter paper in the United States, resulting in a 5-10% difference.
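
A back-of-the-envelope calculation, sketched below, shows where a difference of that order can come from: A4 sheets are noticeably longer than US letter sheets, so an engine feeding paper at a fixed linear speed delivers fewer A4 pages per minute. The sheet lengths are the standard ones; the constant-feed-speed model and the 30 ppm rating are simplifying assumptions.

    # Standard sheet lengths in millimetres.
    A4_LENGTH_MM = 297.0
    LETTER_LENGTH_MM = 279.4

    # Simplifying assumption: the printer feeds paper at a constant linear speed,
    # so pages per minute scale inversely with sheet length.
    ratio = A4_LENGTH_MM / LETTER_LENGTH_MM
    print(f"A4 is about {(ratio - 1) * 100:.1f}% longer than letter")   # ~6.3%

    letter_ppm = 30                                 # hypothetical letter-size rating
    print(f"~{letter_ppm / ratio:.1f} ppm on A4")   # ~28.2 ppm for the same engine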

Printer steganography

An illustration showing small yellow tracking dots on white paper, generated by a color laser printer.

Main article: Printer steganography

Printer steganography is a type of steganography produced by colour printers, including Brother, Canon, Dell, Epson, HP, IBM, Konica Minolta, Kyocera, Lanier, Lexmark, Ricoh, Toshiba and Xerox[7] brand colour laser printers, where tiny yellow dots are added to each page. The dots are barely visible and contain encoded printer serial numbers, as well as date and time stamps.
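
The sketch below is not the scheme any manufacturer actually uses (the real dot patterns are proprietary and vary between vendors); it only illustrates the general idea of hiding a serial number and timestamp by packing them into bits and mapping each set bit to a position in a sparse grid of yellow dots.

    # Toy illustration of the general idea only; real tracking-dot layouts are
    # proprietary and differ from this sketch.
    from datetime import datetime

    def dot_positions(serial, when, cols=16):
        """Pack a serial number and timestamp into 16-bit fields, then map each
        set bit to a (row, column) cell where a tiny yellow dot would go."""
        fields = [serial, when.year, when.month, when.day, when.hour, when.minute]
        bits = "".join(f"{value:016b}" for value in fields)
        return [(i // cols, i % cols) for i, bit in enumerate(bits) if bit == "1"]

    dots = dot_positions(serial=12345, when=datetime(2007, 3, 14, 9, 30))
    print(len(dots), "dots, e.g.", dots[:4])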

See also

References


credit to: http://en.wikipedia.org/wiki/Printer_%28computing%29

one single free walk @bugis

Standard

yeaah! new post!

these past few days i forgot to post this story: i’ve had my free walk!

it’s fun to find myself walking alone without direction and let my feet lead the way.

so, while my parents were out for business, i was at bugis junction already.

i didn’t go to many places in bugis, just bugis junction and bugis market, but it was kind of fun too, LOL

at bugis junction, one of the many things i like is the interior of the mall and, of course, the little stalls around there make me feel like i’m somewhere good.

i like the window design, it’s part of the hotel intercontinental at bugis. i’ve tried the room and it’s totally awesome, i felt like a princess

unfortunately, i haven’t taken many photos yet, just a few at bugis market, which i’ll tell you about later.

so, based on my investigation *LOL, investigation* at that hotel, i like their bed, soft and comfortable, and of course, BIG. then, i also like their bathroom, with a BIG bathtub, yes! unfortunately, no cartoon network there *i’m in love with cartoons, you know!*

but all of that was nothing compared to their unbeatable buffet breakfast! OMG, i love my parents *because they took the breakfast package too ;P*

i’m a big fan of croissants, and i guarantee, they have the best ones! still warm and golden and crispy! with the cold butter and your selection of jam, it’s even more perfect! still, i haven’t taken any pictures of them T^T

and the nuts there really drive me nuts! lots of varieties were served. oh yeah, and the bacon was delightful, crispy!!

there was also a lot of other food which i haven’t mentioned yet, but i DO suggest you stay for a couple of nights, the price is worth it!

eh, one more thing, please wear your own sandals or shoes to breakfast, don’t use the hotel’s sandals, trust me, they won’t let you in with the hotel’s sandals.

okay, back to the topic!

phew, looks like i’ve written too much, huh?

next, about bugis junction, we can simply go there through a door near the breakfast room *it’s actually a restaurant but i forgot the name, if i’m not wrong, it’s olive or something like that* or by the main entrance of bugis.

the stalls there sell many things which, from my observation, are almost all connected to fashion: socks, perfumes, scarves, bags, wallets, etc.

beware while you’re shopping there, as we know, the shopkeepers are all nice, so watch your wallet! before you realize it, you’ve bought soooo many things, hahaha, but the service and the quality were worth it. ;D

if you’re looking for a wallet, bag, diary or doll, i DO suggest you go to eIII (read: e three). they sell those with characters such as emily the strange, domo, kunio, NBC, etc.

as i said, watch your wallet! kkkk

then after i got bored *kkk, lie, i never get bored there* i went to bugis market, it’s a short walk from bugis junction.

i bought a very nice punk t-shirt with a rabbit drawing on it. the cotton was so soft that i was tempted to buy it, and since my sister didn’t come, i bought it for her, it’s S$12, quite cheap eh?

and one more thing i like about that shop (where i bought the t-shirt): they are honest, i didn’t realize that i dropped a S$100 note there, but the shopkeeper returned it, i was shy (>//<)

so, after exploring the whole market, i think i’d better have my shopping spree here, it’s much cheaper than bugis junction, heehee

i found a very sweet rose-design skirt, but my money was quite low after bugis junction T^T, so i passed on it, that skirt was S$14, oh, kinda sad i didn’t buy it.

then, i found this ice cream man

hoho, unique way to scoop the ice cream!

also beware of this ice cream man, he likes to kid around with you, it’s fun anyway, but some people will stare at you when he’s kidding with you, heehee, what an experience

umhh, i wasn’t at bugis all day actually, i went to plaza singapura in the evening and luckily got a seat at swensen, i ordered a classic hotdog that you’ve got to try, the taste was great! love the soft hotdog bread. and, i ordered a hotplate ice cream! lovely~

next post will be…KOREA!

free

Standard

it doesn’t mean free as in the price of something, but free to explore. wouldn’t it be nice if we just had a day or two in life for walking without direction? must be soooo nice, for me, and maybe for you too, haha, but preferably ALONE, much more calming without a companion~