
High Speed Internet Connection



Internet access is the ability of individuals and organisations to connect to the Internet using computer terminals, computers, and mobile devices, sometimes via computer networks. Once connected to the Internet, users can access Internet services such as email and the World Wide Web. Internet service providers (ISPs) offer Internet access through a variety of technologies that provide a wide range of data signaling rates (speeds).

Consumer use of the Internet first became popular through dial-up Internet access in the 1990s. By the first decade of the 21st century, many consumers in developed nations used faster, broadband Internet access technologies. By 2014 this was almost ubiquitous worldwide, with a global average connection speed exceeding 4 Mbit/s.







History

The Internet developed from the ARPANET, which was funded by the US government to support projects within the government and at universities and research laboratories in the US, but grew over time to include most of the world's large universities and the research arms of many technology companies. Use by a wider audience only came in 1995, when restrictions on the use of the Internet to carry commercial traffic were lifted.

In the early to mid-1980s, most Internet access was from personal computers and workstations directly connected to local area networks or from dial-up connections using modems and analog telephone lines. LANs typically operated at 10 Mbit/s, while modem data rates grew from 1200 bit/s in the early 1980s to 56 kbit/s by the late 1990s. Initially, dial-up connections were made from terminals or computers running terminal emulation software to terminal servers on LANs. These dial-up connections did not support end-to-end use of the Internet protocols and only provided terminal-to-host connections. The introduction of network access servers supporting the Serial Line Internet Protocol (SLIP) and later the Point-to-Point Protocol (PPP) extended the Internet protocols and made the full range of Internet services available to dial-up users, although at the lower data rates available over dial-up.

Broadband Internet access, often shortened to just broadband, is simply defined as "Internet access that is always on, and faster than the traditional dial-up access" and so covers a wide range of technologies. Broadband connections are typically made using a computer's built-in Ethernet networking capabilities or a NIC expansion card.

Most broadband services provide a continuous "always on" connection; there is no dial-in process required, and it does not interfere with voice use of phone lines. Broadband provides improved access to Internet services such as:

  • Faster World Wide Web browsing
  • Faster downloading of documents, photographs, videos, and other large files
  • Telephony, radio, television, and videoconferencing
  • Virtual private networks and remote system administration
  • Online gaming, especially massively multiplayer online role-playing games which are interaction-intensive

In the 1990s, the National Information Infrastructure initiative in the U.S. made broadband Internet access a public policy issue. In 2000, most Internet access to homes was provided using dial-up, while many businesses and schools were using broadband connections. In 2000 there were just under 150 million dial-up subscriptions in the 34 OECD countries and fewer than 20 million broadband subscriptions. By 2004, broadband had grown and dial-up had declined so that the number of subscriptions was roughly equal at 130 million each. In 2010, in the OECD countries, over 90% of Internet access subscriptions used broadband, broadband had grown to more than 300 million subscriptions, and dial-up subscriptions had declined to fewer than 30 million.

The broadband technologies in widest use are ADSL and cable Internet access. Newer technologies include VDSL and optical fibre extended closer to the subscriber in both telephone and cable plants. Fibre-optic communication, while only recently being used in fibre-to-the-premises and fibre-to-the-curb schemes, has played a crucial role in enabling broadband Internet access by making transmission of information at very high data rates over longer distances much more cost-effective than copper wire technology.

In areas not served by ADSL or cable, some community organizations and local governments are installing Wi-Fi networks. Wireless and satellite Internet are often used in rural, undeveloped, or other hard to serve areas where wired Internet is not readily available.

Newer technologies being deployed for fixed (stationary) and mobile broadband access include WiMAX, LTE, and fixed wireless, e.g., Motorola Canopy.

Starting in roughly 2006, mobile broadband access has been increasingly available at the consumer level using "3G" and "4G" technologies such as HSPA, EV-DO, HSPA+, and LTE.





Availability

In addition to access from home, school, and the workplace, Internet access may be available from public places such as libraries and Internet cafes, where computers with Internet connections are available. Some libraries provide stations for physically connecting users' laptops to local area networks (LANs).

Wireless Internet access points are available in public places such as airport halls, in some cases just for brief use while standing. Some access points may also provide coin-operated computers. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels also have public terminals, usually fee based.

Coffee shops, shopping malls, and other venues increasingly offer wireless access to computer networks, referred to as hotspots, for users who bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A Wi-Fi hotspot need not be limited to a confined location: multiple hotspots combined can cover a whole campus or park, or even an entire city.

Additionally, mobile broadband access allows smart phones and other digital devices to connect to the Internet from any location from which a mobile phone call can be made, subject to the capabilities of that mobile network.

Speed

The bit rates for dial-up modems range from as little as 110 bit/s in the late 1950s to a maximum of 33 to 64 kbit/s (V.90 and V.92) in the late 1990s. Dial-up connections generally require the dedicated use of a telephone line. Data compression can boost the effective bit rate for a dial-up modem connection to around 220 kbit/s (V.42bis) or 320 kbit/s (V.44). However, the effectiveness of data compression is quite variable, depending on the type of data being sent, the condition of the telephone line, and a number of other factors. In reality, the overall data rate rarely exceeds 150 kbit/s.
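
As a rough, illustrative calculation (the compression ratios are inferred from the figures above and the 56 kbit/s line rate is the nominal V.90 maximum; real results depend heavily on how compressible the data is), the effective throughput is simply the line rate multiplied by the achieved compression ratio:

```python
# Rough estimate of effective dial-up throughput with modem data compression.
# The 56 kbit/s line rate and the compression ratios below are illustrative
# assumptions derived from the figures quoted in the text; real results vary
# widely with the compressibility of the data and line conditions.

LINE_RATE_KBITS = 56.0  # nominal V.90 downstream rate

compression_ratios = {
    "no compression":                  1.0,
    "V.42bis best case (~220 kbit/s)": 220.0 / 56.0,
    "V.44 best case (~320 kbit/s)":    320.0 / 56.0,
    "typical mixed traffic (assumed)": 150.0 / 56.0,
}

for name, ratio in compression_ratios.items():
    effective = LINE_RATE_KBITS * ratio
    print(f"{name:34s} -> ~{effective:3.0f} kbit/s effective")
```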

Broadband technologies supply considerably higher bit rates than dial-up, generally without disrupting regular telephone use. Various minimum data rates and maximum latencies have been used in definitions of broadband, ranging from 64 kbit/s up to 4.0 Mbit/s. In 1988 the CCITT standards body defined "broadband service" as requiring transmission channels capable of supporting bit rates greater than the primary rate which ranged from about 1.5 to 2 Mbit/s. A 2006 Organization for Economic Co-operation and Development (OECD) report defined broadband as having download data transfer rates equal to or faster than 256 kbit/s. And in 2015 the U.S. Federal Communications Commission (FCC) defined "Basic Broadband" as data transmission speeds of at least 25 Mbit/s downstream (from the Internet to the user's computer) and 3 Mbit/s upstream (from the user's computer to the Internet). The trend is to raise the threshold of the broadband definition as higher data rate services become available.

The higher data rate dial-up modems and many broadband services are "asymmetric", supporting much higher data rates for download (toward the user) than for upload (toward the Internet).

Data rates, including those given in this article, are usually defined and advertised in terms of the maximum or peak download rate. In practice, these maximum data rates are not always reliably available to the customer. Actual end-to-end data rates can be lower due to a number of factors. In late June 2016, internet connection speeds averaged about 6 Mbit/s globally. Physical link quality can vary with distance and for wireless access with terrain, weather, building construction, antenna placement, and interference from other radio sources. Network bottlenecks may exist at points anywhere on the path from the end-user to the remote server or service being used and not just on the first or last link providing Internet access to the end-user.

Network congestion

Users may share access over a common network infrastructure. Since most users do not use their full connection capacity all of the time, this aggregation strategy (known as contended service) usually works well and users can burst to their full data rate at least for brief periods. However, peer-to-peer (P2P) file sharing and high-quality streaming video can require high data rates for extended periods, which violates these assumptions and can cause a service to become oversubscribed, resulting in congestion and poor performance. The TCP protocol includes flow-control mechanisms that automatically throttle back on the bandwidth being used during periods of network congestion. This is fair in the sense that all users that experience congestion receive less bandwidth, but it can be frustrating for customers and a major problem for ISPs. In some cases the amount of bandwidth actually available may fall below the threshold required to support a particular service such as video conferencing or streaming live video, effectively making the service unavailable.
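
A minimal sketch of why contended service usually works, and when it breaks down, using entirely made-up subscriber counts, per-user rates, and activity levels:

```python
# Toy model of a contended (oversubscribed) access link.
# All numbers are illustrative assumptions, not measurements.

SUBSCRIBERS = 200
PER_SUBSCRIBER_RATE_MBITS = 50.0   # advertised peak rate per user
SHARED_UPLINK_MBITS = 1000.0       # capacity of the shared upstream link

def expected_demand(active_fraction: float) -> float:
    """Average aggregate demand if each user bursts to full rate
    but is only active for `active_fraction` of the time."""
    return SUBSCRIBERS * PER_SUBSCRIBER_RATE_MBITS * active_fraction

for scenario, active in [("bursty web browsing", 0.05),
                         ("evening video streaming", 0.30),
                         ("sustained P2P on many hosts", 0.60)]:
    demand = expected_demand(active)
    status = "OK" if demand <= SHARED_UPLINK_MBITS else "CONGESTED"
    print(f"{scenario:28s}: demand ~{demand:6.0f} Mbit/s "
          f"vs {SHARED_UPLINK_MBITS:.0f} Mbit/s uplink -> {status}")
```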

When traffic is particularly heavy, an ISP can deliberately throttle back the bandwidth available to classes of users or for particular services. This is known as traffic shaping, and careful use can ensure a better quality of service for time-critical services even on extremely busy networks. However, overuse can lead to concerns about fairness and network neutrality, or even charges of censorship when some types of traffic are severely or completely blocked.

Outages

An Internet blackout or outage can be caused by local signaling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.

On April 25, 1997, due to a combination of human error and a software bug, an incorrect routing table at MAI Network Service (a Virginia Internet service provider) propagated across backbone routers and caused major disruption to Internet traffic for a few hours.




Technologies

When the Internet is accessed using a modem, digital data is converted to analog for transmission over analog networks such as the telephone and cable networks. A computer or other device accessing the Internet would either be connected directly to a modem that communicates with an Internet service provider (ISP) or the modem's Internet connection would be shared via a Local Area Network (LAN) which provides access in a limited area such as a home, school, computer laboratory, or office building.

Although a connection to a LAN may provide very high data rates within the LAN, actual Internet access speed is limited by the upstream link to the ISP. LANs may be wired or wireless. Ethernet over twisted pair cabling and Wi-Fi are the two most common technologies used to build LANs today, but ARCNET, Token Ring, LocalTalk, FDDI, and other technologies were used in the past.

Ethernet is the name of the IEEE 802.3 standard for physical LAN communication, and Wi-Fi is a trade name for a wireless local area network (WLAN) that uses one of the IEEE 802.11 standards. Ethernet cables are interconnected via switches and routers. Wi-Fi networks are built using one or more wireless antennas called access points.

Many "modems" provide the additional functionality to host a LAN so most Internet access today is through a LAN, often a very small LAN with just one or two devices attached. And while LANs are an important form of Internet access, this raises the question of how and at what data rate the LAN itself is connected to the rest of the global Internet. The technologies described below are used to make these connections.

Hardwired broadband access

The term broadband includes a broad range of technologies, all of which provide higher data rate access to the Internet. The following technologies use wires or cables in contrast to wireless broadband described later.

Dial-up access

Dial-up Internet access uses a modem and a phone call placed over the public switched telephone network (PSTN) to connect to a pool of modems operated by an ISP. The modem converts a computer's digital signal into an analog signal that travels over a phone line's local loop until it reaches a telephone company's switching facilities or central office (CO) where it is switched to another phone line that connects to another modem at the remote end of the connection.

Operating on a single channel, a dial-up connection monopolizes the phone line and is one of the slowest methods of accessing the Internet. Dial-up is often the only form of Internet access available in rural areas, as it requires no new infrastructure beyond the already existing telephone network. Typically, dial-up connections do not exceed a speed of 56 kbit/s, as they are primarily made using modems that operate at a maximum data rate of 56 kbit/s downstream (towards the end user) and 34 or 48 kbit/s upstream (toward the global Internet).
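
For a sense of scale, here is a small worked example of transfer times at the dial-up rates quoted above; the 5 MB file size is an arbitrary assumption, and real-world throughput is usually lower than the nominal rate:

```python
# Transfer times at the dial-up rates quoted above, for an assumed 5 MB file.
# Real throughput is usually lower than the nominal modem rate.

FILE_MB = 5.0
file_bits = FILE_MB * 8_000_000   # using decimal megabytes: 1 MB = 8,000,000 bits

for direction, kbits in [("download at 56 kbit/s", 56),
                         ("upload at 48 kbit/s", 48),
                         ("upload at 34 kbit/s", 34)]:
    minutes = file_bits / (kbits * 1000) / 60
    print(f"{direction:22s}: ~{minutes:4.1f} minutes")
```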

Multilink dial-up

Multilink dial-up provides increased bandwidth by channel bonding multiple dial-up connections and accessing them as a single data channel. It requires two or more modems, phone lines, and dial-up accounts, as well as an ISP that supports multilinking - and of course any line and data charges are also doubled. This inverse multiplexing option was briefly popular with some high-end users before ISDN, DSL and other technologies became available. Diamond and other vendors created special modems to support multilinking.

Integrated Services Digital Network

Integrated Services Digital Network (ISDN) is a switched telephone service capable of transporting voice and digital data, and is one of the oldest Internet access methods. ISDN has been used for voice, video conferencing, and broadband data applications. ISDN was very popular in Europe, but less common in North America. Its use peaked in the late 1990s before the availability of DSL and cable modem technologies.

Basic rate ISDN, known as ISDN-BRI, has two 64 kbit/s "bearer" or "B" channels. These channels can be used separately for voice or data calls or bonded together to provide a 128 kbit/s service. Multiple ISDN-BRI lines can be bonded together to provide data rates above 128 kbit/s. Primary rate ISDN, known as ISDN-PRI, has 23 bearer channels (64 kbit/s each) for a combined data rate of 1.5 Mbit/s (US standard). An ISDN E1 (European standard) line has 30 bearer channels and a combined data rate of 1.9 Mbit/s.
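
The combined rates quoted above follow directly from bonding 64 kbit/s bearer channels, as this quick check shows:

```python
# ISDN bearer-channel arithmetic, using the 64 kbit/s B channel from the text.
B_CHANNEL_KBITS = 64

print("BRI, 2 bonded B channels :", 2 * B_CHANNEL_KBITS, "kbit/s")    # 128
print("PRI (US), 23 B channels  :", 23 * B_CHANNEL_KBITS, "kbit/s")   # 1472, ~1.5 Mbit/s
print("PRI (E1), 30 B channels  :", 30 * B_CHANNEL_KBITS, "kbit/s")   # 1920, ~1.9 Mbit/s
```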

Leased lines

Leased lines are dedicated lines used primarily by ISPs, businesses, and other large enterprises to connect LANs and campus networks to the Internet using the existing infrastructure of the public telephone network or other providers. Delivered using wire, optical fiber, and radio, leased lines are used to provide Internet access directly as well as the building blocks from which several other forms of Internet access are created.

T-carrier technology dates to 1957 and provides data rates that range from 56 or 64 kbit/s (DS0) to 1.5 Mbit/s (DS1 or T1), to 45 Mbit/s (DS3 or T3). A T1 line carries 24 voice or data channels (24 DS0s), so customers may use some channels for data and others for voice traffic, or use all 24 channels for clear channel data. A DS3 (T3) line carries 28 DS1 (T1) channels. Fractional T1 lines are also available in multiples of a DS0 to provide data rates between 56 and 1,500 kbit/s. T-carrier lines require special termination equipment that may be separate from or integrated into a router or switch and which may be purchased or leased from an ISP. In Japan the equivalent standard is J1/J3. In Europe, a slightly different standard, E-carrier, provides 32 user channels (64 kbit/s each) on an E1 (2.0 Mbit/s) and 512 user channels or 16 E1s on an E3 (34.4 Mbit/s).
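
The T-carrier rates above are multiples of the 64 kbit/s DS0 channel; a short sketch of the arithmetic (payload rates only, ignoring framing overhead):

```python
# T-carrier channel arithmetic from the paragraph above (payload only;
# framed line rates such as 1.544 and 44.736 Mbit/s are slightly higher).
DS0_KBITS = 64

ds1_kbits = 24 * DS0_KBITS        # a T1 carries 24 DS0s
ds3_kbits = 28 * ds1_kbits        # a T3 carries 28 T1s

print(f"DS1 (T1) payload : {ds1_kbits} kbit/s  (~1.5 Mbit/s)")
print(f"DS3 (T3) payload : {ds3_kbits / 1000:.1f} Mbit/s (~45 Mbit/s)")

# Fractional T1: any multiple of a DS0, up to the full 24 channels.
for n in (1, 4, 12, 24):
    print(f"Fractional T1, {n:2d} x DS0 : {n * DS0_KBITS} kbit/s")
```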

Synchronous Optical Networking (SONET, in the U.S. and Canada) and Synchronous Digital Hierarchy (SDH, in the rest of the world) are the standard multiplexing protocols used to carry high-data-rate digital bit-streams over optical fiber using lasers or highly coherent light from light-emitting diodes (LEDs). At lower transmission rates data can also be transferred via an electrical interface. The basic unit of framing is an OC-3c (optical) or STS-3c (electrical) which carries 155.520 Mbit/s. Thus an OC-3c will carry three OC-1 (51.84 Mbit/s) payloads each of which has enough capacity to include a full DS3. Higher data rates are delivered in OC-3c multiples of four providing OC-12c (622.080 Mbit/s), OC-48c (2.488 Gbit/s), OC-192c (9.953 Gbit/s), and OC-768c (39.813 Gbit/s). The "c" at the end of the OC labels stands for "concatenated" and indicates a single data stream rather than several multiplexed data streams.
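
The OC hierarchy can be reproduced by starting from the 155.52 Mbit/s OC-3c framing unit and multiplying by four at each step, as this small check illustrates:

```python
# SONET/SDH rate hierarchy: each step is a four-fold multiple of OC-3c.
OC1_MBITS = 51.84
rate = 3 * OC1_MBITS              # OC-3c / STS-3c: 155.520 Mbit/s

for name in ("OC-3c", "OC-12c", "OC-48c", "OC-192c", "OC-768c"):
    print(f"{name:8s}: {rate:10.3f} Mbit/s")
    rate *= 4
```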

The 1, 10, 40, and 100 gigabit Ethernet (GbE, 10 GbE, and 40/100 GbE) IEEE 802.3 standards allow digital data to be delivered over copper wiring at distances up to 100 m and over optical fiber at distances up to 40 km.

Cable Internet access

Cable Internet provides access using a cable modem on hybrid fiber coaxial wiring originally developed to carry television signals. Either fiber-optic or coaxial copper cable may connect a node to a customer's location at a connection known as a cable drop. In a cable modem termination system, all nodes for cable subscribers in a neighborhood connect to a cable company's central office, known as the "head end." The cable company then connects to the Internet using a variety of means - usually fiber optic cable or digital satellite and microwave transmissions. Like DSL, broadband cable provides a continuous connection with an ISP.

Downstream, the direction toward the user, bit rates can be as much as 400 Mbit/s for business connections, and 250 Mbit/s for residential service in some countries. Upstream traffic, originating at the user, ranges from 384 kbit/s to more than 20 Mbit/s. Broadband cable access tends to service fewer business customers because existing television cable networks tend to service residential buildings and commercial buildings do not always include wiring for coaxial cable networks. In addition, because broadband cable subscribers share the same local line, communications may be intercepted by neighboring subscribers. Cable networks regularly provide encryption schemes for data traveling to and from customers, but these schemes may be thwarted.

Digital subscriber line (DSL, ADSL, SDSL, and VDSL)

Digital Subscriber Line (DSL) service provides a connection to the Internet through the telephone network. Unlike dial-up, DSL can operate using a single phone line without preventing normal use of the telephone line for voice phone calls. DSL uses the high frequencies, while the low (audible) frequencies of the line are left free for regular telephone communication. These frequency bands are subsequently separated by filters installed at the customer's premises.

DSL originally stood for "digital subscriber loop". In telecommunications marketing, the term digital subscriber line is widely understood to mean Asymmetric Digital Subscriber Line (ADSL), the most commonly installed variety of DSL. The data throughput of consumer DSL services typically ranges from 256 kbit/s to 20 Mbit/s in the direction to the customer (downstream), depending on DSL technology, line conditions, and service-level implementation. In ADSL, the data throughput in the upstream direction (i.e. in the direction to the service provider) is lower than that in the downstream direction (i.e. to the customer), hence the designation asymmetric. With a symmetric digital subscriber line (SDSL), the downstream and upstream data rates are equal.
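
To make the practical effect of asymmetry concrete, here is a small example with assumed rates (8 Mbit/s down, 1 Mbit/s up, both within the ranges discussed above) and an arbitrary 100 MB file:

```python
# Effect of ADSL asymmetry on transferring the same file in each direction.
# The 8 Mbit/s down / 1 Mbit/s up profile and the 100 MB file size are
# assumed example values, not measurements of any particular service.

FILE_MB = 100.0
DOWN_MBITS, UP_MBITS = 8.0, 1.0

file_bits = FILE_MB * 8_000_000
down_minutes = file_bits / (DOWN_MBITS * 1_000_000) / 60
up_minutes = file_bits / (UP_MBITS * 1_000_000) / 60

print(f"Download at {DOWN_MBITS:.0f} Mbit/s: ~{down_minutes:.1f} min")
print(f"Upload   at {UP_MBITS:.0f} Mbit/s: ~{up_minutes:.1f} min "
      f"({up_minutes / down_minutes:.0f}x longer)")
```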

Very-high-bit-rate digital subscriber line (VDSL or VHDSL, ITU G.993.1) is a digital subscriber line (DSL) standard approved in 2001 that provides data rates up to 52 Mbit/s downstream and 16 Mbit/s upstream over copper wires and up to 85 Mbit/s down- and upstream on coaxial cable. VDSL is capable of supporting applications such as high-definition television, as well as telephone services (voice over IP) and general Internet access, over a single physical connection.

VDSL2 (ITU-T G.993.2) is a second-generation version and an enhancement of VDSL. Approved in February 2006, it is able to provide data rates exceeding 100 Mbit/s simultaneously in both the upstream and downstream directions. However, the maximum data rate is achieved at a range of about 300 meters, and performance degrades as distance and loop attenuation increase.

DSL Rings

DSL Rings (DSLR) or Bonded DSL Rings is a ring topology that uses DSL technology over existing copper telephone wires to provide data rates of up to 400 Mbit/s.

Fiber to the home

Fiber-to-the-home (FTTH) is one member of the Fiber-to-the-x (FTTx) family that includes Fiber-to-the-building or basement (FTTB), Fiber-to-the-premises (FTTP), Fiber-to-the-desk (FTTD), Fiber-to-the-curb (FTTC), and Fiber-to-the-node (FTTN). These methods all bring data closer to the end user on optical fibers. The differences between the methods have mostly to do with just how close to the end user the delivery on fiber comes. All of these delivery methods are similar to hybrid fiber-coaxial (HFC) systems used to provide cable Internet access.

The use of optical fiber offers much higher data rates over relatively longer distances. Most high-capacity Internet and cable television backbones already use fiber optic technology, with data switched to other technologies (DSL, cable, POTS) for final delivery to customers.

Australia began rolling out its National Broadband Network across the country using fiber-optic cables to 93 percent of Australian homes, schools, and businesses. The project was abandoned by the subsequent LNP government in favour of a hybrid FTTN design, which turned out to be more expensive and introduced delays. Similar efforts are underway in Italy, Canada, India, and many other countries (see Fiber to the premises by country).

Power-line Internet

Power-line Internet, also known as Broadband over power lines (BPL), carries Internet data on a conductor that is also used for electric power transmission. Because of the extensive power line infrastructure already in place, this technology can provide people in rural and low population areas access to the Internet with little cost in terms of new transmission equipment, cables, or wires. Data rates are asymmetric and generally range from 256 kbit/s to 2.7 Mbit/s.

Because these systems use parts of the radio spectrum allocated to other over-the-air communication services, interference between the services is a limiting factor in the introduction of power-line Internet systems. The IEEE P1901 standard specifies that all power-line protocols must detect existing usage and avoid interfering with it.

Power-line Internet has developed faster in Europe than in the U.S. due to a historical difference in power system design philosophies. Data signals cannot pass through the step-down transformers used and so a repeater must be installed on each transformer. In the U.S. a transformer serves a small cluster of from one to a few houses. In Europe, it is more common for a somewhat larger transformer to service larger clusters of from 10 to 100 houses. Thus a typical U.S. city requires an order of magnitude more repeaters than in a comparable European city.
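
A back-of-the-envelope comparison makes the repeater-count difference concrete; the homes-per-transformer figures are taken from the rough ranges above, and the city size is an arbitrary assumption:

```python
# Back-of-the-envelope comparison of BPL repeater counts.
# Homes-per-transformer figures are rough midpoints of the ranges in the
# text; the city size is an arbitrary assumption.

HOMES_IN_CITY = 100_000
HOMES_PER_TRANSFORMER_US = 3    # assumed midpoint of "one to a few"
HOMES_PER_TRANSFORMER_EU = 50   # assumed midpoint of "10 to 100"

repeaters_us = HOMES_IN_CITY / HOMES_PER_TRANSFORMER_US
repeaters_eu = HOMES_IN_CITY / HOMES_PER_TRANSFORMER_EU

print(f"US-style grid : ~{repeaters_us:,.0f} repeaters (one per transformer)")
print(f"EU-style grid : ~{repeaters_eu:,.0f} repeaters")
print(f"Ratio         : ~{repeaters_us / repeaters_eu:.0f}x more in the US-style grid")
```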

ATM and Frame Relay

Asynchronous Transfer Mode (ATM) and Frame Relay are wide-area networking standards that can be used to provide Internet access directly or as building blocks of other access technologies. For example, many DSL implementations use an ATM layer over the low-level bitstream layer to enable a number of different technologies over the same link. Customer LANs are typically connected to an ATM switch or a Frame Relay node using leased lines at a wide range of data rates.

While still widely used, with the advent of Ethernet over optical fiber, MPLS, VPNs and broadband services such as cable modem and DSL, ATM and Frame Relay no longer play the prominent role they once did.

Wireless broadband access

Wireless broadband is used to provide both fixed and mobile Internet access with the following technologies.

Satellite broadband

Satellite Internet access provides fixed, portable, and mobile Internet access. Data rates range from 2 kbit/s to 1 Gbit/s downstream and from 2 kbit/s to 10 Mbit/s upstream. In the northern hemisphere, satellite antenna dishes require a clear line of sight to the southern sky, due to the equatorial position of all geostationary satellites. In the southern hemisphere, this situation is reversed, and dishes are pointed north. Service can be adversely affected by moisture, rain, and snow (known as rain fade). The system requires a carefully aimed directional antenna.

Satellites in geostationary Earth orbit (GEO) operate in a fixed position 35,786 km (22,236 miles) above the Earth's equator. At the speed of light (about 300,000 km/s or 186,000 miles per second), it takes a quarter of a second for a radio signal to travel from the Earth to the satellite and back. When other switching and routing delays are added and the delays are doubled to allow for a full round-trip transmission, the total delay can be 0.75 to 1.25 seconds. This latency is large when compared to other forms of Internet access with typical latencies that range from 0.015 to 0.2 seconds. Long latencies negatively affect some applications that require real-time response, particularly online games, voice over IP, and remote control devices. TCP tuning and TCP acceleration techniques can mitigate some of these problems. GEO satellites do not cover the Earth's polar regions. HughesNet, Exede, AT&T and Dish Network have GEO systems.
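
The quarter-second figure follows directly from the GEO altitude and the speed of light; a short check of the arithmetic:

```python
# Propagation delay to a geostationary satellite, using the figures above.
ALTITUDE_KM = 35_786      # GEO altitude above the equator
C_KM_PER_S = 300_000      # approximate speed of light

one_hop_s = ALTITUDE_KM / C_KM_PER_S         # ground -> satellite
out_s = 2 * one_hop_s                        # ground -> satellite -> ground
round_trip_s = 2 * out_s                     # request out plus reply back

print(f"One hop (ground to satellite)       : {one_hop_s * 1000:.0f} ms")
print(f"Earth to satellite and back         : {out_s * 1000:.0f} ms")
print(f"Minimum full round trip (four hops) : {round_trip_s * 1000:.0f} ms")
# Switching and routing delays push the observed total to roughly 0.75-1.25 s.
```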

Satellites in low Earth orbit (LEO, below 2000 km or 1243 miles) and medium Earth orbit (MEO, between 2000 and 35,786 km or 1,243 and 22,236 miles) are less common, operate at lower altitudes, and are not fixed in their position above the Earth. Lower altitudes allow lower latencies and make real-time interactive Internet applications more feasible. LEO systems include Globalstar and Iridium. The O3b Satellite Constellation is a proposed MEO system with a latency of 125 ms. COMMStellation™ is a LEO system, scheduled for launch in 2015, that is expected to have a latency of just 7 ms.

Mobile broadband

Mobile broadband is the marketing term for wireless Internet access delivered through mobile phone towers to computers, mobile phones (called "cell phones" in North America and South Africa, and "hand phones" in Asia), and other digital devices using portable modems. Some mobile services allow more than one device to be connected to the Internet using a single cellular connection using a process called tethering. The modem may be built into laptop computers, tablets, mobile phones, and other devices, added to some devices using PC cards, USB modems, and USB sticks or dongles, or separate wireless modems can be used.

New mobile phone technology and infrastructure is introduced periodically and generally involves a change in the fundamental nature of the service: non-backwards-compatible transmission technology, higher peak data rates, new frequency bands, and wider channel bandwidths. These transitions are referred to as generations. The first mobile data services became available during the second generation (2G).

The download (to the user) and upload (to the Internet) data rates quoted for mobile broadband technologies are peak or maximum rates, and end users will typically experience lower data rates.

WiMAX was originally developed to deliver fixed wireless service with wireless mobility added in 2005. CDPD, CDMA2000 EV-DO, and MBWA are no longer being actively developed.

In 2011, 90% of the world's population lived in areas with 2G coverage, while 45% lived in areas with 2G and 3G coverage.

WiMAX

Worldwide Interoperability for Microwave Access (WiMAX) is a set of interoperable implementations of the IEEE 802.16 family of wireless-network standards certified by the WiMAX Forum. WiMAX enables "the delivery of last mile wireless broadband access as an alternative to cable and DSL". The original IEEE 802.16 standard, now called "Fixed WiMAX", was published in 2001 and provided 30 to 40 megabit-per-second data rates. Mobility support was added in 2005. A 2011 update provides data rates up to 1 Gbit/s for fixed stations. WiMAX offers a metropolitan area network with a signal radius of about 50 km (30 miles), far surpassing the 30-metre (100-foot) wireless range of a conventional Wi-Fi local area network (LAN). WiMAX signals also penetrate building walls much more effectively than Wi-Fi.

Wireless ISP

Wireless Internet service providers (WISPs) operate independently of mobile phone operators. WISPs typically employ low-cost IEEE 802.11 Wi-Fi radio systems to link up remote locations over great distances (Long-range Wi-Fi), but may use other higher-power radio communications systems as well.

Traditional 802.11b is an unlicensed omnidirectional service designed to span between 100 and 150 m (300 to 500 ft). By focusing the radio signal using a directional antenna, 802.11b can operate reliably over a distance of many kilometres (miles), although the technology's line-of-sight requirements hamper connectivity in areas with hilly or heavily foliated terrain. In addition, compared to hard-wired connectivity, there are security risks (unless robust security protocols are enabled); data rates are significantly slower (2 to 50 times slower); and the network can be less stable, due to interference from other wireless devices and networks, weather, and line-of-sight problems.

Deploying multiple adjacent Wi-Fi access points is sometimes used to create city-wide wireless networks. Some are operated by commercial WISPs, but grassroots efforts have also led to wireless community networks. Rural wireless-ISP installations are typically not commercial in nature and are instead a patchwork of systems built up by hobbyists mounting antennas on radio masts and towers, agricultural storage silos, very tall trees, or whatever other tall objects are available. There are a number of companies that provide this service.

Proprietary technologies like Motorola Canopy and Expedience can be used by a WISP to offer wireless access to rural and other markets that are hard to reach using Wi-Fi or WiMAX.

Local Multipoint Distribution Service

Local Multipoint Distribution Service (LMDS) is a broadband wireless access technology that uses microwave signals operating between 26 GHz and 29 GHz. Originally designed for digital television transmission (DTV), it was conceived as a fixed wireless, point-to-multipoint technology for use in the last mile. Data rates range from 64 kbit/s to 155 Mbit/s. Distance is typically limited to about 1.5 miles (2.4 km), but links of up to 5 miles (8 km) from the base station are possible in some circumstances.

LMDS has been surpassed in both technological and commercial potential by the LTE and WiMAX standards.




Pricing and spending

Internet access is limited by the relation between pricing and the resources available to spend on it. It is estimated that 40% of the world's population has less than US$20 per year available to spend on information and communications technology (ICT). In Mexico, the poorest 30% of society has an estimated US$35 per year (US$3 per month), and in Brazil the poorest 22% of the population has merely US$9 per year (US$0.75 per month) to spend on ICT. Data from Latin America suggest that the borderline between ICT as a necessity good and ICT as a luxury good is roughly the "magical number" of US$10 per person per month, or US$120 per year; this is the amount of ICT spending people consider a basic necessity. Current Internet access prices far exceed the available resources in many countries.

Dial-up users pay the costs for making local or long distance phone calls, usually pay a monthly subscription fee, and may be subject to additional per minute or traffic based charges, and connect time limits by their ISP. Though less common today than in the past, some dial-up access is offered for "free" in return for watching banner ads as part of the dial-up service. NetZero, BlueLight, Juno, Freenet (NZ), and Free-nets are examples of services providing free access. Some Wireless community networks continue the tradition of providing free Internet access.

Fixed broadband Internet access is often sold under an "unlimited" or flat rate pricing model, with price determined by the maximum data rate chosen by the customer, rather than a per minute or traffic based charge. Per minute and traffic based charges and traffic caps are common for mobile broadband Internet access.

Internet services like Facebook, Wikipedia and Google have built special programs to partner with mobile network operators (MNOs) to zero-rate the data used by their services as a means to provide them more broadly in developing markets.

With increased consumer demand for streaming content such as video on demand and peer-to-peer file sharing, demand for bandwidth has increased rapidly and for some ISPs the flat rate pricing model may become unsustainable. However, with fixed costs estimated to represent 80-90% of the cost of providing broadband service, the marginal cost to carry additional traffic is low. Most ISPs do not disclose their costs, but the cost to transmit a gigabyte of data in 2011 was estimated to be about $0.03.
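
A rough sketch of that marginal-cost argument, using the $0.03 per gigabyte estimate above together with assumed (hypothetical) values for monthly usage and subscription price:

```python
# Marginal-cost arithmetic for flat-rate broadband, using the $0.03/GB
# transit estimate quoted above. Monthly usage and subscription price are
# assumed, hypothetical values.

COST_PER_GB = 0.03
MONTHLY_USAGE_GB = 200
FLAT_RATE_PRICE = 40.00

traffic_cost = COST_PER_GB * MONTHLY_USAGE_GB
print(f"Traffic cost for {MONTHLY_USAGE_GB} GB    : ${traffic_cost:.2f}")
print(f"Share of a ${FLAT_RATE_PRICE:.0f} monthly bill : "
      f"{100 * traffic_cost / FLAT_RATE_PRICE:.0f}%")
```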

Some ISPs estimate that a small number of their users consume a disproportionate portion of the total bandwidth. In response, some ISPs are considering, are experimenting with, or have implemented combinations of traffic-based pricing, time-of-day or "peak" and "off-peak" pricing, and bandwidth or traffic caps. Others claim that because the marginal cost of extra bandwidth is very small, with 80 to 90 percent of the costs fixed regardless of usage level, such steps are unnecessary or motivated by concerns other than the cost of delivering bandwidth to the end user.

In Canada, Rogers Hi-Speed Internet and Bell Canada have imposed bandwidth caps. In 2008 Time Warner began experimenting with usage-based pricing in Beaumont, Texas. In 2009 an effort by Time Warner to expand usage-based pricing into the Rochester, New York area met with public resistance, however, and was abandoned. On August 1, 2012 in Nashville, Tennessee and on October 1, 2012 in Tucson, Arizona, Comcast began tests that impose data caps on area residents. In Nashville, exceeding the 300 Gbyte cap requires the temporary purchase of 50 Gbytes of additional data.




Digital divide

Despite its tremendous growth, Internet access is not distributed equally within or between countries. The digital divide refers to "the gap between people with effective access to information and communications technology (ICT), and those with very limited or no access". The gap between people with Internet access and those without is one of many aspects of the digital divide. Whether someone has access to the Internet can depend greatly on financial status, geographical location, and government policies. Low-income, rural, and minority populations have received special scrutiny as the technological "have-nots".

Government policies play a tremendous role in bringing Internet access to or limiting access for underserved groups, regions, and countries. For example, in Pakistan, which is pursuing an aggressive IT policy aimed at boosting its drive for economic modernization, the number of Internet users grew from 133,900 (0.1% of the population) in 2000 to 31 million (17.6% of the population) in 2011. In North Korea there is relatively little access to the Internet due to the government's fear of the political instability that might accompany the benefits of access to the global Internet. The U.S. trade embargo is a barrier limiting Internet access in Cuba.

Access to computers is a dominant factor in determining the level of Internet access. In 2011, in developing countries, 25% of households had a computer and 20% had Internet access, while in developed countries the figures were 74% and 71%, respectively. When buying computers was legalized in Cuba in 2007, private ownership of computers soared (there were 630,000 computers available on the island in 2008, a 23% increase over 2007).

Internet access has changed the way in which many people think and has become an integral part of people's economic, political, and social lives. The United Nations has recognized that providing Internet access to more people in the world will allow them to take advantage of the "political, social, economic, educational, and career opportunities" available over the Internet. Several of the 67 principles adopted at the World Summit on the Information Society convened by the United Nations in Geneva in 2003 directly address the digital divide. To promote economic development and a reduction of the digital divide, national broadband plans have been and are being developed to increase the availability of affordable high-speed Internet access throughout the world.

Growth in number of users

Access to the Internet grew from an estimated 10 million people in 1993, to almost 40 million in 1995, to 670 million in 2002, and to 2.7 billion in 2013. With market saturation, growth in the number of Internet users is slowing in industrialized countries, but continues in Asia, Africa, Latin America, the Caribbean, and the Middle East.

There were roughly 0.6 billion fixed broadband subscribers and almost 1.2 billion mobile broadband subscribers in 2011. In developed countries people frequently use both fixed and mobile broadband networks. In developing countries mobile broadband is often the only access method available.

Bandwidth divide

Traditionally the divide has been measured in terms of the existing number of subscriptions and digital devices (the "haves and have-nots" of subscriptions). Recent studies have measured the digital divide not in terms of technological devices, but in terms of the existing bandwidth per individual (in kbit/s per capita). Measured this way, the divide in kbit/s does not decrease monotonically but re-opens with each new innovation. For example, the massive diffusion of narrow-band Internet and mobile phones during the late 1990s increased digital inequality, as did the initial introduction of broadband DSL and cable modems during 2003-2004. This is because a new kind of connectivity is never introduced instantaneously and uniformly to society as a whole, but diffuses slowly through social networks. During the mid-2000s, communication capacity was more unequally distributed than during the late 1980s, when only fixed-line phones existed. The most recent increase in digital equality stems from the massive diffusion of the latest digital innovations (fixed and mobile broadband infrastructures such as 3G and FTTH fiber optics). Measured in terms of bandwidth, Internet access was more unequally distributed in 2014 than it was in the mid-1990s.

In the United States

In the United States, billions of dollars have been invested in efforts to narrow the digital divide and bring Internet access to more people in low-income and rural areas of the United States. Internet availability varies widely state by state in the U.S. In 2011 for example, 87.1% of all New Hampshire residents lived in a household where Internet was available, ranking first in the nation. Meanwhile, 61.4% of all Mississippi residents lived in a household where Internet was available, ranking last in the nation. The Obama administration has continued this commitment to narrowing the digital divide through the use of stimulus funding. The National Center for Education Statistics reported that 98% of all U.S. classroom computers had Internet access in 2008 with roughly one computer with Internet access available for every three students. The percentage and ratio of students to computers was the same for rural schools (98% and 1 computer for every 2.9 students).

Rural access

One of the great challenges for Internet access in general and for broadband access in particular is to provide service to potential customers in areas of low population density, such as to farmers, ranchers, and small towns. In cities where the population density is high, it is easier for a service provider to recover equipment costs, but each rural customer may require expensive equipment to get connected. While 66% of Americans had an Internet connection in 2010, that figure was only 50% in rural areas, according to the Pew Internet & American Life Project. Virgin Media advertised over 100 towns across the United Kingdom "from Cwmbran to Clydebank" that have access to their 100 Mbit/s service.

Wireless Internet service providers (WISPs) are rapidly becoming a popular broadband option for rural areas. The technology's line-of-sight requirements may hamper connectivity in some areas with hilly and heavily foliated terrain. However, the Tegola project, a successful pilot in remote Scotland, demonstrates that wireless can be a viable option.

The Broadband for Rural Nova Scotia initiative is the first program in North America to guarantee access to "100% of civic addresses" in a region. It is based on Motorola Canopy technology. As of November 2011, under 1000 households have reported access problems. Deployment of a new cell network by one Canopy provider (Eastlink) was expected to provide the alternative of 3G/4G service, possibly at a special unmetered rate, for areas harder to serve by Canopy.

A rural broadband initiative in New Zealand is a joint project between Vodafone and Chorus, with Chorus providing the fibre infrastructure and Vodafone providing wireless broadband, supported by the fibre backhaul.

Access as a civil or human right

The actions, statements, opinions, and recommendations outlined below have led to the suggestion that Internet access itself is or should become a civil or perhaps a human right.

Several countries have adopted laws requiring the state to work to ensure that Internet access is broadly available and/or preventing the state from unreasonably restricting an individual's access to information and the Internet:

  • Costa Rica: A 30 July 2010 ruling by the Supreme Court of Costa Rica stated: "Without fear of equivocation, it can be said that these technologies [information technology and communication] have impacted the way humans communicate, facilitating the connection between people and institutions worldwide and eliminating barriers of space and time. At this time, access to these technologies becomes a basic tool to facilitate the exercise of fundamental rights and democratic participation (e-democracy) and citizen control, education, freedom of thought and expression, access to information and public services online, the right to communicate with government electronically and administrative transparency, among others. This includes the fundamental right of access to these technologies, in particular, the right of access to the Internet or World Wide Web."
  • Estonia: In 2000, the parliament launched a massive program to expand access to the countryside. The Internet, the government argues, is essential for life in the 21st century.
  • Finland: By July 2010, every person in Finland was to have access to a one-megabit-per-second broadband connection, according to the Ministry of Transport and Communications, and by 2015, access to a 100 Mbit/s connection.
  • France: In June 2009, the Constitutional Council, France's highest court, declared access to the Internet to be a basic human right in a strongly-worded decision that struck down portions of the HADOPI law, a law that would have tracked abusers and, without judicial review, automatically cut off network access to those who continued to download illicit material after two warnings.
  • Greece: Article 5A of the Constitution of Greece states that all persons have a right to participate in the Information Society and that the state has an obligation to facilitate the production, exchange, diffusion, and access to electronically transmitted information.
  • Spain: Starting in 2011, Telefónica, the former state monopoly that holds the country's "universal service" contract, has to guarantee to offer "reasonably" priced broadband of at least one megabit per second throughout Spain.

In December 2003, the World Summit on the Information Society (WSIS) was convened under the auspices of the United Nations. After lengthy negotiations between governments, businesses and civil society representatives, the WSIS Declaration of Principles was adopted, reaffirming the importance of the Information Society to maintaining and strengthening human rights. The Declaration makes specific reference to the importance of the right to freedom of expression in the "Information Society".

A poll of 27,973 adults in 26 countries, including 14,306 Internet users, conducted for the BBC World Service between 30 November 2009 and 7 February 2010 found that almost four in five Internet users and non-users around the world felt that access to the Internet was a fundamental right. 50% strongly agreed, 29% somewhat agreed, 9% somewhat disagreed, 6% strongly disagreed, and 6% gave no opinion.

The 88 recommendations made by the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, in a May 2011 report to the Human Rights Council of the United Nations General Assembly, include several that bear on the question of the right to Internet access.

Network neutrality

Network neutrality (also net neutrality, Internet neutrality, or net equality) is the principle that Internet service providers and governments should treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. Advocates of net neutrality have raised concerns about the ability of broadband providers to use their last mile infrastructure to block Internet applications and content (e.g. websites, services, and protocols), and even to block out competitors. Opponents claim net neutrality regulations would deter investment into improving broadband infrastructure and try to fix something that isn't broken.




Natural disasters and access

Natural disasters disrupt internet access in profound ways. This is important not only for telecommunication companies who own the networks and the businesses who use them, but also for emergency crews and displaced citizens. The situation is worsened when hospitals or other buildings necessary to disaster response lose their connection. Knowledge gained from studying past internet disruptions by natural disasters could be put to use in planning or recovery. Additionally, because of both natural and man-made disasters, studies in network resiliency are now being conducted to prevent large-scale outages.

One way natural disasters impact internet connection is by damaging end sub-networks (subnets), making them unreachable. A study on local networks after Hurricane Katrina found that 26% of subnets within the storm coverage were unreachable. At Hurricane Katrina's peak intensity, almost 35% of networks in Mississippi were without power, while around 14% of Louisiana's networks were disrupted. Of those unreachable subnets, 73% were disrupted for four weeks or longer and 57% were at "network edges where important emergency organizations such as hospitals and government agencies are mostly located". Extensive infrastructure damage and inaccessible areas were two explanations for the long delay in returning service. The company Cisco has revealed a Network Emergency Response Vehicle (NERV), a truck that makes portable communications possible for emergency responders despite traditional networks being disrupted.

A second way natural disasters destroy internet connectivity is by severing submarine cables, the fiber-optic cables placed on the ocean floor that provide international internet connections. The 2006 undersea earthquake near Taiwan (Richter scale 7.2) cut six out of seven international cables connected to that country and caused a tsunami that wiped out one of its cable landing stations. The impact slowed or disabled internet connection for five days within the Asia-Pacific region as well as between the region and the United States and Europe.

With the rise in popularity of cloud computing, concern has grown over access to cloud-hosted data in the event of a natural disaster. Amazon Web Services (AWS) has been in the news for major network outages in April 2011 and June 2012. AWS, like other major cloud hosting companies, prepares for typical outages and large-scale natural disasters with backup power as well as backup data centers in other locations. AWS divides the globe into five regions and then splits each region into availability zones. A data center in one availability zone should be backed up by a data center in a different availability zone. Theoretically, a natural disaster would not affect more than one availability zone. This theory plays out as long as human error is not added to the mix. The June 2012 major storm only disabled the primary data center, but human error disabled the secondary and tertiary backups, affecting companies such as Netflix, Pinterest, Reddit, and Instagram.
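
A simplified way to see why replication across availability zones helps, and why it is not a guarantee, is to compare the idealized case of independent zone failures with a correlated cause such as the operator error described above. The per-zone failure probability below is an assumed, illustrative value, not an AWS figure:

```python
# Toy model of multi-availability-zone redundancy.
# p_zone is an assumed, illustrative probability that a single zone is down
# during some incident; it is not an AWS figure, and real failures are often
# correlated rather than independent.

p_zone = 0.01

for zones in (1, 2, 3):
    p_all_down = p_zone ** zones   # valid only if zone failures are independent
    print(f"{zones} zone(s): P(all replicas down) = {p_all_down:.6f}")

# The independence assumption is the weak point: a single correlated cause,
# such as an operator error applied to every backup, makes the effective
# failure probability that of the common cause, not p_zone ** zones.
```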

Source of the article : Wikipedia


