The Internet Supports Point to Point Asynchronous Communication
Internet Today
Mobile Network and Transport Layer
Vijay K. Garg , in Wireless Communications & Networking, 2007
Internet Protocol version 6 (IPv6)
Today's Internet operates over the common network layer datagram protocol, Internet Protocol version 4 (IPv4). In the early 1990s, work on a new addressing scheme was initiated within the Internet Engineering Task Force (IETF) due to the recognized weaknesses of IPv4. The result was IPv6 (see Figure 14.7). The single most significant advantage IPv6 offers is increased destination and source addresses. IPv6 quadruples the number of network address bits from 32 bits in IPv4 to 128 bits, which provides more than enough globally unique IP addresses for every network device on the planet. This will lead to network simplification, first through less need to maintain routing state within the network and second through a reduced need for address translation; hence, it will improve the scalability of the Internet.
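To get a feel for the jump from 32 to 128 address bits, the short Python sketch below (not part of the chapter; it uses only the standard ipaddress module and documentation-only example addresses) compares the two address spaces and expands an IPv6 address:

```python
import ipaddress

# IPv4 uses 32 address bits, IPv6 uses 128.
print(2 ** 32)    # 4294967296 possible IPv4 addresses
print(2 ** 128)   # roughly 3.4e38 possible IPv6 addresses

# The ipaddress module handles both families; these are documentation addresses.
v4 = ipaddress.ip_address("192.0.2.1")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.version, v6.version)   # 4 6
print(v6.exploded)              # 2001:0db8:0000:0000:0000:0000:0000:0001
```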
IPv6 will allow a return to a global end-to-end environment where the addressing rules of the network are transparent to applications. The current IP address space is unable to satisfy the potentially large increase in number of users or the geographical needs of Internet expansion, let alone the requirements of emerging applications such as Internet-enabled personal digital assistants (PDAs), personal area networks (PANs), Internet-connected transportation, integrated telephony services, and distributed gaming.
The use of globally unique IPv6 addresses simplifies the mechanisms used for reachability and end-to-end security for network devices, functionality crucial to the applications and services driving the demand for the addresses.
The lifetime of IPv4 has been extended using techniques such as address reuse with translation and temporary use allocations. Although these techniques appear to increase the address space and satisfy the traditional client/server setup, they fail to meet the requirements of new applications. The need for an always-on environment to be connectable precludes these IP address conversion, pooling, and temporary allocation techniques, and the "plug and play" required by consumer Internet applications further increases address requirements. The flexibility of the IPv6 address space provides support for private addresses but should reduce the use of network address translation (NAT) because global addresses are widely available. IPv6 reintroduces end-to-end security that is not always readily available throughout a NAT-based network.
The success of IPv6 will depend ultimately on the innovative applications that run over IPv6. A key part of IPv6 design is its ability to integrate into and coexist with existing IP networks. It is expected that IPv4 and IPv6 hosts will need to coexist for a substantial time during the steady migration from IPv4 to IPv6, and the development of transition strategies, tools, and mechanisms has been part of the basic IPv6 design from the start. Selection of a deployment strategy will depend on the current network environment and on factors such as the forecast of traffic for IPv6 and the availability of IPv6 applications on end systems.
IPv6 does not allow for fragmentation and reassembly at an intermediate router; these operations can be performed only by the source and destination. If an IPv6 datagram received by a router is too large to be forwarded over the outgoing link, the router simply drops the datagram and sends an ICMP "Packet Too Big" message back to the sender. The checksum field in IPv4 was considered redundant and was removed because the transport layer and data link layer protocols perform their own checksums.
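Because routers no longer fragment, an IPv6 source must discover the smallest MTU along the path and size its datagrams accordingly (Path MTU Discovery). The following Python sketch is a simplified, hypothetical illustration of that sender-side loop; the network is simulated and the MTU values are examples only:

```python
# Hypothetical sketch of the sender side of IPv6 Path MTU Discovery.
# Router behavior is simulated; names and values are illustrative only.

IPV6_MINIMUM_MTU = 1280  # every IPv6 link must support at least this

def simulated_path(datagram_size, narrowest_link_mtu=1400):
    """Pretend network: drop datagrams larger than the narrowest link and
    report that link's MTU, as an ICMPv6 Packet Too Big message would."""
    if datagram_size > narrowest_link_mtu:
        return "packet_too_big", narrowest_link_mtu
    return "delivered", None

def send(payload_len, first_hop_mtu=1500):
    """Shrink the datagram size until routers stop reporting Packet Too Big."""
    mtu = first_hop_mtu
    while True:
        status, reported = simulated_path(min(payload_len, mtu))
        if status == "delivered":
            return mtu
        mtu = max(reported, IPV6_MINIMUM_MTU)

print(send(4000))   # settles on 1400 in this simulated example
```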
URL: https://www.sciencedirect.com/science/article/pii/B978012373580550048X
The Internet and TCP/IP Networks
Jean Walrand , Pravin Varaiya , in High-Performance Communication Networks (Second Edition), 2000
4.1 THE INTERNET
The Internet today interconnects a large number of computers and networks throughout the world. There were 1 million such computers in early 1993, 5 million in 1995, 16 million in 1997, and over 50 million in 1999, organized in 2 million domains.
The Internet has its origin in the ARPANET network sponsored by the U.S. Department of Defense starting in the 1960s. The ARPANET was a datagram store-and-forward network that the Department of Defense liked for its ability to reroute packets around failures. This feature makes datagram networks survivable. Another important objective of ARPANET was to enable the interconnection of heterogeneous networks. The technical success of the Internet is due, on the one hand, to the large variety of applications (from e-mail and telnet to file transfer and the WWW) that IP can support and, on the other hand, to the many different networks that can implement IP. (See Figure 4.2 in section 4.2.) We will discuss this key feature at the end of this section.
A major factor that contributed to the popularity of the Internet was the exploitation of network externalities. This was achieved through early standardization and free distribution of its protocols and their software implementations, which could run on personal computers as well as on workstations and mainframe computers. Network externalities were created by the National Science Foundation's subsidy of the construction and use of the Internet. Since 1995, expansion and development have been funded by private enterprise.
The spectacular growth of the Internet is fueled largely by the World Wide Web, which makes multimedia information available at the click of a mouse. The vast increase in demand for Internet traffic explains the rapid growth in network capacity. It is estimated that in 1999 the volume of data traffic was comparable to that of voice. Moreover, because data traffic doubles every year whereas voice traffic increases only by about 10% a year, voice traffic will soon amount to only a small fraction of the total. These observations lead to the conclusion that the future network should be optimized for data and, in the background, should be able to carry voice traffic reliably and with a small delay.
The Internet is a network of networks. It comprises backbone networks of point-to-point links that connect regional gateways called network access points (NAPs). Routers attached to the NAPs are called points of presence (PoPs). Subscribers connect to a PoP either with a dial-in modem over their telephone line, or with a digital subscriber line (typically ADSL), a cable modem, or a leased digital line. Some businesses connect to a PoP with an optical link. The customer's computers are typically interconnected with a local network.
Over time, and incrementally, link speeds have increased from 56 Kbps to 1.5 Mbps to 45 Mbps. The recent explosive growth in demand is being met by 155-Mbps, 622-Mbps, and higher-speed (2.4 Gbps and 9.6 Gbps) SONET links, leased from telephone companies. Local area network speeds have increased as well to 100-Mbps Ethernet LANs and FDDI rings, Gbps Ethernet LANs, and soon to 10-Gbps Ethernet LANs. The Internet is used for applications that require (relatively) low transmission rates and that can tolerate large delays. Recent advances are aimed at accommodating applications that need higher network performance.
Until April 1995, the backbone of the Internet was managed under the auspices of the National Science Foundation. The topology of the backbone was simple, with four public NAPs and two private ones. When competing commercial carriers took over the backbone, the topology and routes became much more complex. Today, there are 75 public NAPs around the world, 12 of them in the United States. Many telecommunication companies are building U.S.-wide Internet backbones. As of May 1999, these backbones include those of ANS, AT&T, BBN/GTE, CERFNET, DIGEX, EBONE, MCI, NETCOM, PSI, Qwest, Sprint, UUNET, and Verio. Figure 4.1 shows the topology of the ANS backbone. ANS (Advanced Networks & Services, Inc.) was a subsidiary of America Online until 1998, when it was acquired by WorldCom. You can find the maps of the other backbones at http://boardwatch.internet.com/isp. Much of the traffic is routed through private interconnections. These "private peering" arrangements between ISPs can take place anywhere the transaction is mutually convenient, and they account for an estimated two-thirds of all Internet traffic.
URL: https://www.sciencedirect.com/science/article/pii/B9780080508030500095
Overview
Jean Walrand , Pravin Varaiya , in High-Performance Communication Networks (Second Edition), 2000
1.3.1 The Internet
The Internet today comprises hundreds of thousands of local area networks (LANs) worldwide, interconnected by a backbone wide area network (WAN). LANs typically operate at rates of 10 to 100 Mbps. Until 1995 links of the WAN supported lower bit rates, but the dramatic increase in traffic, combined with the reduction in the cost of optical links, has increased backbone link rates to as much as 10 Gbps. To deal with these large link rates, some network service providers deploy IP routers interconnected by ATM switched networks.
Corporate data networks typically have a much lower-speed connection to the Internet backbone compared with their LAN speeds. This is appropriate because LANs support the high bit rate traffic between workstations and file servers within a single organization. The WAN, on the other hand, supports electronic mail and infrequent file transfers that can be accomplished with low-speed connections. For example, in 1995 40,000 students, faculty, and staff at the University of California, Berkeley, had access to the Internet using 20,000 workstations and PCs. Within the university campus these computers were interconnected by 10-Mbps Ethernets and 100-Mbps FDDI rings. The Internet traffic between the campus and the rest of the world was handled by two 1.5-Mbps links, with an average utilization of 30%. By comparison, the telephone links between the campus and the rest of the world have a capacity of 200 Mbps.
Users access the Internet in one of two ways. Within a large company, government agency, or university, the user's PC or workstation is attached to a LAN that is part of the Internet. Users at home and in small companies subscribe to an ISP. Subscribers use low-speed modems to connect their PCs to ISP hosts, which, in turn, have Internet access. Some users are changing to higher speed access through cable TV or ADSL.
We consider here some of the many factors that have contributed to the spectacular success of the Internet, which by 1999, at 25 years of age, connected 300 million users worldwide. First, for users connected to LANs the incremental cost of Internet access is a small fraction of the cost of the LANs. This insignificant incremental cost, combined with the network externalities of services like e-mail, Web access, and the formation of special interest groups, led to exponential growth. Second, more and more people own PCs, and newer PCs come equipped with networking hardware, including a built-in modem, and software. For these users, the cost of Internet access is only the charges of their ISP. In 1999 ISPs provide unlimited access at 28.8 Kbps for an affordable $20 monthly charge. Higher speed access over cable TV or ADSL costs $40 per month. The combination of positive networking externalities and the steady reduction in the cost of computers accounts for some of the Internet's exponential growth.
Additional growth is fueled by innovative, low bit rate, delay-insensitive applications such as the World Wide Web (WWW), with icon-driven interfaces that make browsing easy. Internet applications such as e-mail and file transfer can be provided at a cost that no alternative network can match. Lastly, designers of these new applications often distribute them freely. They do so because the Internet has been developed by, and in turn has helped to sustain, a remarkable cadre of experts who strongly support keeping the Internet a free and open network. The successful introduction of commercial software such as WWW browsers may also require an initially subsidized distribution of the software to overcome the critical size depicted in Figure 1.14. These issues are discussed in Chapter 10.
One future development of the Internet, then, is more growth of the same kind: more users and more low bit rate, delay-insensitive applications for which the Internet has an overwhelming cost advantage.
Another possibility is that the Internet will develop into the information superhighway by supporting real-time, high bit rate, delay-sensitive applications such as interactive voice and video applications. To support those applications, the Internet will need to change in three ways. The backbone links will have to be upgraded, and the network switches for those links must be replaced by switches with very large throughput and low delays. This change is well on its way. At that point network designers may replace the IPv4 (Internet Protocol version 4) network layer, which cannot guarantee the delay, bandwidth, and loss bounds that real-time applications need. IPv4 could be replaced with a newer version, IPv6, or with ATM, or more likely, with enhancements to IPv4 that meet some of those needs. (The Internet Protocol is discussed in Chapter 4.) Recent commercial routers support some rudimentary resource reservation needed for real-time applications. The decision to migrate to ATM for the high-speed, low-delay links of the Internet may be forced by the advantages of ATM over IP. However, at present, applications and operating systems that exploit this native ATM capability are rare. These latter considerations point to an Internet growth path built on ATM technology and high-capacity links.
URL: https://www.sciencedirect.com/science/article/pii/B978008050803050006X
Introduction to Videoconferencing
Kevin Jeffay , in Readings in Multimedia Computing and Networking, 2002
BEST-EFFORT DELIVERY OF LIVE DIGITAL AUDIO AND VIDEO
Given that today's Internet does not support real-time communications, the fundamental networking problem is that of coding the media streams and biasing how they are introduced into the network so as to maximize the probability that they can be received by conference participants and played out in a satisfactory manner. Chapter 9 introduces many of the classical problems and solutions to the best-effort multimedia networking problem. Here we focus on application-level techniques for dealing with the effects of network congestion that are particularly well suited to the transmission and playout of live media.
It is useful to differentiate between congestion in the small and congestion in the large. By congestion in the small, we refer to the state of the network wherein switches are temporarily unable to forward all arriving packets in real time and queues build up at the switches. As queues at switches grow and shrink, the end-to-end transmission delay experienced by packets belonging to a videoconference grows and shrinks as well. This variation in end-to-end delay is referred to as delay jitter. Delay jitter is problematic because applications such as videoconferencing typically require continual playout of media samples. Ideally, media samples such as video frames should be displayed continuously, with each successive frame displayed immediately after its predecessor. Because frames are typically generated and displayed at fixed intervals, it follows that continuous playout can occur only if each frame experiences the same end-to-end delay. The existence of delay jitter implies that continuous playout is, in general, impossible. Because of this limitation, a buffer is introduced at the receiver to smooth the variation in end-to-end delay. However, the problem of setting the depth (size) of this buffer remains. A large buffer ensures that gross delay jitter can be accommodated, but at the cost of increasing the effective end-to-end delay (the transmission-to-playout delay) of the application. Given that latency is a key performance measure for videoconferencing, such a large buffer may be unacceptable, especially if gross delay jitter is an infrequent occurrence. On the other hand, a small buffer ensures that minimal additional delay is incurred, but it is ineffective in times of high delay jitter. In this case, the buffer may empty, starving the application of new frames to display. The result is a "gap" in the playout, an equally undesirable event.
Videoconferencing applications (as well as most streaming media applications) must explicitly manage the fundamental tradeoff between minimizing end-to-end display latency and minimizing the frequency with which gaps appear in the playout. This is done through what are known as jitter-buffer, or elastic queue management, algorithms. These algorithms monitor changes in end-to-end delay and attempt to dynamically set a playout buffer depth at the receiver that is appropriate for the requirements of the application and the current perceived network conditions. The paper "An empirical study of delay jitter management policies," by D. L. Stone and K. Jeffay, presents two families of policies for adaptively setting the depth of the playout queue at the receiver and an empirical evaluation of the policies based on a set of network traces. These classic jitter-buffer algorithms represent the best practice for adaptive playout queue management.
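As a rough illustration (this is not one of the Stone and Jeffay policies), the Python sketch below tracks an estimate of end-to-end delay and its variation and derives a playout delay from them; the smoothing constants and sample values are illustrative only:

```python
# Simplified, hypothetical sketch of adaptive playout-buffer sizing: estimate
# the average one-way delay and its variation, then add jitter headroom.

class JitterBuffer:
    def __init__(self, alpha=0.998, k=4.0):
        self.alpha = alpha          # smoothing factor for the delay estimate
        self.k = k                  # how many "jitter units" of headroom to add
        self.avg_delay = None
        self.jitter = 0.0

    def observe(self, one_way_delay):
        """Update estimates from the measured delay of one arriving frame."""
        if self.avg_delay is None:
            self.avg_delay = one_way_delay
            return
        deviation = abs(one_way_delay - self.avg_delay)
        self.avg_delay = self.alpha * self.avg_delay + (1 - self.alpha) * one_way_delay
        self.jitter += (deviation - self.jitter) / 16   # RFC 3550-style smoothing

    def playout_delay(self):
        """Delay (relative to generation time) at which frames should be played."""
        return self.avg_delay + self.k * self.jitter

buf = JitterBuffer()
for d in [40, 42, 39, 60, 41, 43]:       # measured delays in milliseconds
    buf.observe(d)
print(round(buf.playout_delay(), 1))     # a larger k trades latency for fewer gaps
```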
Congestion in the large refers to the state of the network wherein queues at switches have filled to capacity and packets are dropped. When such a state is reached, applications must reduce their load on the network to ensure the network doesn't reach a state of congestive collapse. Whereas most applications reduce their load on the network in times of congestion by relying on the underlying transport protocol (most notably TCP) to reduce the transmission rate, videoconferencing and other streaming applications must develop their own means of reducing the load they generate. The process of modulating the bit-rate generated by a multimedia application to match a transmission rate that is sustainable in the network is called media scaling. The paper "Media scaling for audiovisual communication with the Heidelberg transport system," by L. Delgrossi and colleagues, presents a framework for addressing this problem for video streams. They present a taxonomy of methods for reducing the bit rate of a video stream and comment on the tradeoffs between the level of reduction achieved and the likely degradation in the perceptual quality of the stream that results.
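The following Python sketch is a hypothetical illustration of the media-scaling idea rather than the Delgrossi framework itself: the sender drops to a lower encoder operating point when receiver reports show loss and probes back up when the path looks clean. The rates and thresholds are invented for the example:

```python
# Hypothetical media-scaling sketch: pick a lower target bit rate when receiver
# reports indicate loss, and creep back up when the path looks clean.

RATES_KBPS = [1500, 1000, 600, 300, 128]   # available encoder operating points

def adjust_rate(current_index, loss_fraction):
    """Return the index of the rate to use for the next reporting interval."""
    if loss_fraction > 0.05 and current_index < len(RATES_KBPS) - 1:
        return current_index + 1            # congestion in the large: scale down
    if loss_fraction < 0.01 and current_index > 0:
        return current_index - 1            # path is clean: probe a higher rate
    return current_index

idx = 0
for loss in [0.00, 0.08, 0.12, 0.02, 0.00]:
    idx = adjust_rate(idx, loss)
    print(RATES_KBPS[idx], "kbps")          # 1500, 1000, 600, 600, 1000
```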
Although media scaling attempts to reduce the load placed on the network by a videoconferencing application and thereby implicitly reduce the loss rate seen by the application, methods are still required to ameliorate the effects of packet loss. The final paper in this chapter, "Retransmission-based error control for interactive video applications over the Internet," by I. Rhee, considers a method for retransmitting lost video packets in a videoconference. Retransmission requires the receiver to detect when a packet has been lost and to inform the sender of this fact, which will then retransmit the packet. Because each stage in this process takes time, conventional wisdom held that retransmission would be ineffective for interactive applications such as videoconferencing because it would increase the end-to-end latency to unacceptable levels. Although this is indeed frequently the case, Rhee demonstrates that there is still significant utility in retransmitting missing media samples. For example, although a retransmitted media sample may arrive too late to be played, in the case of inter-coded samples (such as MPEG video frames), the retransmitted sample may be used to improve the quality of the playout of future samples.
URL: https://www.sciencedirect.com/science/article/pii/B9781558606517501303
Administration
Philip Bourne , ... Joseph McMullen , in UNIX for OpenVMS Users (Third Edition), 2003
11.7.2 TCP/IP Basics
The primary networking protocol used on the Internet today is referred to as TCP/IP. This is a suite of protocols and related products that includes the following (a minimal socket sketch contrasting TCP and UDP appears after the list):
- Internet Protocol (IP): provides the underlying transfer layer
- Transmission Control Protocol (TCP): provides connection-oriented communication between processes and adds error detection and other services
- User Datagram Protocol (UDP): provides a low-overhead connectionless datagram protocol
- Serial Line IP (SLIP): supports networking over serial communication lines
- Point-to-Point Protocol (PPP): supports networking over synchronous and asynchronous communication lines
- Internet Control Message Protocol (ICMP): supports messages such as those used by the ping utility
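As a minimal illustration of the difference between the two transport protocols listed above (this example is not from the chapter and assumes the host example.com is reachable), a TCP socket establishes a connection and exchanges a byte stream, while a UDP socket simply sends datagrams:

```python
import socket

# TCP: connection-oriented, reliable byte stream.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))                     # three-way handshake
tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(tcp.recv(64))                                  # delivered in order, or error
tcp.close()

# UDP: connectionless datagrams, no delivery guarantee.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"ping", ("192.0.2.10", 9999))            # fire and forget
udp.close()
```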
Programs called daemons, which are started at boot time and listen for network requests, control much of the work on networked systems. On OpenVMS, there are several background processes that are similar in concept to daemons, including NETACP and EVL. Some of the most important UNIX network daemons include the following:
- inetd: detects TCP and UDP connection requests
- rpcbind: controls remote procedure call requests
- routed: detects network routing packets sent using the Routing Information Protocol (RIP)
- gated: handles multiprotocol routing for external network gateways
- syslogd: detects messages and forwards them to other processes
- nfsd: detects requests for operations using NFS
- ftpd: transfers files between systems
- telnetd: controls remote login sessions
The master network daemon is inetd; it oversees all network activity on a UNIX machine connected to a TCP/IP network. It listens for connection requests. When it receives a connection request, it starts the appropriate daemons.
Some of the important files involved in this process are as follows (see Figure 11.4):
- At boot time, /etc/inetd reads the /etc/inetd.conf file, which defines the services that inetd should oversee.
- Next, /etc/inetd.conf maps a service to a protocol and a daemon that starts it.
- The service is in turn mapped to an Internet port number in the /etc/services file (the short lookup sketch after this list shows the same mapping queried from a program).
- The service is further mapped to a specific protocol in the /etc/protocols file.
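The mapping kept in /etc/services can also be queried from a program; the short sketch below uses standard Python socket calls, which consult the same system database (the port numbers in the comments are the usual well-known assignments):

```python
import socket

# Service name to port, as recorded in /etc/services.
print(socket.getservbyname("telnet", "tcp"))   # 23
print(socket.getservbyname("ftp", "tcp"))      # 21

# The reverse direction: port to service name.
print(socket.getservbyport(513, "tcp"))        # 'login' on most systems
```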
11.7.2.1 Internet Addressing
In TCP/IP networks, each computer that is part of the network is called a host. Each host is identified by a unique host name in addition to a unique IP host address. The TCP/IP protocol translates the host name to the host address, as required by the IP protocol. Users can supply either the host name or IP address to UNIX networking commands.
The Internet uses distributed name and address mechanisms. The Domain Name Service (DNS) provides a hierarchical mapping of host names to IP addresses and distributes it across the network.
On TCP/IP networks, hosts are grouped hierarchically in domains. The top-level domain name in the hierarchy can represent an organizational domain or a geographical domain. Examples of organizational domains include .com for commercial organizations, .edu for educational institutions, and .gov for government institutions. Typically, there is one domain assigned to an entity. For example, hp.com is the domain assigned to HP, and umich.edu is the domain assigned to the University of Michigan. The top-level domain can be divided into subdomains that further identify the host. The subdomains are separated by periods. An example of a subdomain is music.umich.edu. Figure 11.5 illustrates this Internet domain example.
Associated with any computer on a TCP/IP network, referred to as a host, is an Internet address expressed in numeric form (e.g., 128.59.98.1). By comparison, in DECnet Phase IV networks, each host is identified by a unique, but nonhierarchical, host name (e.g., pluto) and a unique address consisting of its area number and node number (e.g., 1.121). Host-name-to-number mapping is not distributed. Users can supply either the host name or the host address where it is desired to specify a host.
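The same name-to-address translation is available to programs through the resolver. The sketch below is illustrative only; it uses an example host name, so the addresses printed depend on the local DNS configuration:

```python
import socket

# Forward lookup: host name to IP address, as the IP protocol requires.
print(socket.gethostbyname("www.example.com"))

# getaddrinfo returns both IPv4 and IPv6 answers where they exist.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", None):
    print(family.name, sockaddr[0])
```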
See Section 13.2.5 for more information on Internet host naming.
URL: https://www.sciencedirect.com/science/article/pii/B978155558276050011X
Routing Issues
In IP Addressing & Subnetting INC IPV6, 2000
From Millions to Thousands of Networks
For engineers, the biggest push on the Internet today is to devise a plan to limit the huge growth in the number of networks advertised on the Internet. We have learned in the previous section that the addition of so many networks on the Internet has severely hindered the ability to maintain effective routing tables for all the new networks that have been added. It was becoming more difficult to route packets to their destinations because the route to the destination was sometimes not included in the large routing tables maintained by these routing domains. This threat, much like a tornado, was due to touch down on the Internet before the dreaded exhaustion of IP addresses.
Now that CIDR has come to the rescue, the problem is to implement CIDR fast enough to consolidate these networks and minimize the number of entries in the routing tables. From the millions of networks out there, CIDR is able to consolidate contiguous blocks of IP addresses into a smaller number of networks that contain more hosts, a technique known as supernetting. The only caveat with CIDR is that these must be contiguous class C addresses. The authority for assigning IP addresses has assigned large contiguous blocks of IP addresses to large Internet Service Providers. These large ISPs assign a smaller subset of contiguous addresses from their block to other ISPs or large network customers, as illustrated in Figure 6.2.
The bottom line is that the large ISP maintains a large block of contiguous addresses that it can report to a higher authority for CIDR address aggregation. With CIDR, the large ISP does not have to report every class C address that it owns; it only has to report the prefix that every class C address has in common. These addresses are aggregated into a single supernetted address for routing purposes. In our example, the prefix is 198.113.201, which is what all of the IP addresses have in common. Instead of advertising six routes, we are advertising only one, a decrease of 83 percent. Imagine if every ISP were able to decrease the routes it advertises by this much. This could literally bring the number of advertised networks from millions down to thousands. Not only does this decrease the number of networks, but it also significantly reduces the number of routing table entries. By March of 1998, the number of global routing table entries was around 50,000. Without CIDR, it is speculated that the number of global routes would have been nearly twice that number. You can always count on the standards committees behind the scenes of the Internet to deliver effective solutions when adversity stares them in the face.
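The aggregation can be shown with a small worked example. The Python sketch below (the specific networks are illustrative, not the exact ones in Figure 6.2) collapses eight contiguous class C networks into the single /21 supernet that would be advertised in their place:

```python
import ipaddress

# Hypothetical block of eight contiguous class C (/24) networks.
class_c_routes = [ipaddress.ip_network(f"198.113.{n}.0/24") for n in range(200, 208)]

# CIDR aggregation ("supernetting"): advertise one prefix instead of eight routes.
supernet = list(ipaddress.collapse_addresses(class_c_routes))
print(supernet)                      # [IPv4Network('198.113.200.0/21')]
print(supernet[0].num_addresses)     # 2048 host addresses behind a single route
```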
URL: https://www.sciencedirect.com/science/article/pii/B9781928994015500097
Routing and Peering
Walter Goralski , in The Illustrated Network (Second Edition), 2017
The Internet Today
There is really no such thing as the Internet today. The concept of "the Internet" is a valid one, and people still use the term all the time. But the Internet is no longer a thing to be charted and understood and controlled and administered as in the early days. What we have today is an interlocking grid of ISPs, an ISP "grid-net," so to speak. Actually, the graph of the Internet is a bit less organized than this, although ISPs closer to the core have a higher level of interconnection than those at the edge. This is an interconnected mesh of ISPs and related Internet-connected entities such as government bureaus and learning institutions. Also, keep in mind that in addition to the "big-I internet," there are other internetworks that are not part of this global, public whole.
If we think of the Internet as a unity, and have no appreciation of actual ISP connectivity, then the role of routing protocols and routing policies on the Internet today cannot be understood. Today, Internet talk is peppered with terms like peers, aggregates, summaries, Internet exchange points (IXPs), backbones, border routers, edge routers, and points of presence (POPs). These terms don't make much sense in the context of the Internet as a unified network.
The Internet as the spaghetti bowl of connected ISPs is shown in Figure 14.2. There are large national ISPs, smaller regional ISPs, and even tiny local ISPs. There are also pieces of the Internet that act as exchange points for traffic, such as the Internet Exchange Points (IXPs). IXPs can be housed in POPs (also called carrier hotels), formal places dedicated for this purpose, and in various collocation facilities, where organizations rent floor space for a rack of equipment ("broom closet") or larger floor space for more elaborate arrangements, such as redundant links and power supplies. The IXPs are also called transit exchanges in some contexts and are often run by former telephone companies.
Each cloud, except the one at the top of the figure, basically represents an ISP's AS. Within these clouds, the routing protocol can be an IGP such as OSPF, because it is presumed that each and every network device (such as the backbone routers) in the cloud is controlled by the ISP. However, between the clouds, an EGP such as BGP must be used, because no ISP can or should be able to directly control a router in another ISP's network.
The ISPs are all chained together by a complex series of links with only a few hard and fast rules (although there are exceptions). As long as local rules are followed, as determined by contract, the smallest ISP can link to another ISP and thus give their users the ability to participate in the global public Internet. Increasingly, the nature of the linking between these ISPs is governed by a series of agreements known as peering arrangements. Peers are equals, and national ISPs may be peers to each other, but treat smaller ISPs as just another customer, although it's not all that unusual for small regional ISPs to peer with each other.
Peering arrangements detail the reciprocal way that traffic is handed off from one ISP (and that means AS) to another. Peers might agree to deliver each other's packets for no charge, but bill non-peer ISPs for this privilege, because it is assumed that the national ISP's backbone will be shuttling a large number of the smaller ISPs' packets, while the national ISP won't be using the small ISP much. A few examples of national ISPs, peer ISPs, and customer ISPs are shown in the figure. These large "Tier 1" ISPs all connect to one another and usually pass traffic from one to another without worrying about payments. This is just an example, and very large ISPs often have plenty of very small customers, some of which will be attached to more than one other ISP and employ high-capacity links. There will also be "stub AS" networks with no downstream customers.
Millions of PCs and Unix systems act as clients, servers, or both on the Internet. These hosts are attached to LANs (typically) and linked by routers to the Internet. The LANs and "site routers" are just "customers" to the ISPs. Now, a customer of even moderate size could have a topology similar to that of an ISP with a distinct border, core, and aggregation or services routers. Although all attached hosts conform to the client–server architecture, many of them are strictly Web clients (browsers) or Web servers (Web sites), but the Web is only one part of the Internet (although probably the most important one). It is important to realize that the clients and servers are on LANs, and that routers are the network nodes of the Internet. The number of client hosts greatly exceeds the number of servers.
The link from the client user to the ISP is often a simple cable or DSL link. In contrast, the link from a server LAN's router to the ISP could be a leased, private line, but there are important exceptions to this (Metro Ethernet at speeds greater than 10 Mbps is very popular). There are also a variety of Web servers within the ISP's own network. For example, the Web server for the ISP's customers to create and maintain their own Web pages is located inside the ISP cloud.
The smaller ISPs link to the backbones of the larger, national ISPs. Some small ISPs link directly to national backbones, but others are forced for technical or financial reasons to link in a "daisy-chain" fashion to other ISPs, which link to other ISPs, and so on until an ISP with direct access to an IXP is reached. Peering bypasses the need to use the IXP structure to deliver traffic.
Many other countries obtain Internet connectivity by linking to an IXP in the United States, although many countries have established their own IXPs. Large ISPs routinely link to more than one IXP for redundancy, while truly small ones rarely link to more than one other ISP for cost reasons. Peer ISPs often have multiple, redundant links between their border routers. (Border routers are routers that have links to more than one AS.) For a good listing of the world's major IXPs, see http://en.wikipedia.org under Internet Exchange Point.
Speeds vary greatly in different places on the global Internet. Client access in rural areas is still often by way of low-speed dial-up telephone lines, typically 33.6 to 56 kbps. Servers are connected by Metro Ethernet or by medium-speed private leased lines, typically faster than 1.5 Mbps. The high-speed backbone links between national ISPs run at yet higher speeds, and between the IXPs themselves, speeds of 155 Mbps (known as OC-3c), 622 Mbps (OC-12c), 2.4 Gbps (OC-48c), and 10 Gbps (OC-192c) can be used, although "n × 10" Gbps Ethernet trunks are less expensive. Higher speeds, such as 40 Gbps and 100 Gbps Ethernet, are always needed, both to minimize large Web site content-transfer latency times (for items like video and audio files) and because the backbones concentrate and aggregate traffic from millions of clients and servers onto a single network.
URL: https://www.sciencedirect.com/science/article/pii/B978012811027000014X
Multicast
Walter Goralski , in The Illustrated Network (Second Edition), 2017
PIM-SM
The most important multicast routing protocol for the Internet today is PIM sparse mode, defined in RFC 2362. PIM-SM is ideal for a number of reasons, such as its protocol-independent nature (PIM can use regular unicast routing tables for RPF checks and other things), and it is a nice fit with SSM (in fact, not much else fits at all with SSM). So, we'll look at PIM-SM in a little more detail, not least because that's what we'll be using on the Illustrated Network's routers.
If a potential receiver is interested in the content of a particular multicast group, it sends an IGMP Join message to the local router—which must know the location of the network RPs servicing that group. If the local router is not currently on the distribution tree for that group, the router sends a PIM Join message (not an IGMP message) through the network until the router becomes a leaf on the shared tree (RPT) to the RP. Once multicast packets are flowing to the receiver, the routers all check to see if there is a shorter path from the source to the destination than through the RP. If there is, the routers will transition the tree from an RPT to an SPT using PIM Join and Prune messages (technically, they are PIM Join/Prune messages, but it is common to distinguish them). The SPT is rooted at the designated router of the source. All of this is done transparently to the receivers and usually works very smoothly.
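On the receiver side, the IGMP Join is generated by the host's IP stack when an application subscribes to a group. The sketch below is a generic, hypothetical example (the group address and port are placeholders, and it is unrelated to the Illustrated Network configuration) showing the socket options that trigger an IGMP membership report on an IPv4 host:

```python
import socket
import struct

GROUP, PORT = "239.1.1.1", 5000            # administratively scoped example group

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP takes the group address plus the local interface (0.0.0.0 = any).
# Setting it makes the host announce group membership via IGMP to the local router.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)          # blocks until multicast traffic arrives
print(len(data), "bytes from", sender)
```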
There are other reasons to transition from an RPT to an SPT, even if the SPT is actually longer than the RPT. An RP might become quite busy, and the shortest path might not be optimal as determined by unicast routing protocols. A lot of multicast discussion at ISPs involves issues such as how many RPs there should be (how many groups should each service?) and where they should be located (near their sources? centrally?). A related issue is how routers know about RPs (statically? Auto-RP? BSR?), but these discussions have no clear or accepted answers.
There is only one PIM-SM feature left that needs to be explained: how does traffic get from the sender's local router to the RP? The rendezvous point could create a tree directly to every source, but if there are a lot of sources, there is a lot of state information to maintain. It would be better if the senders' local routers could send the content directly to the RP.
But how? The destination address of all multicast packets is a group address and not a unicast address. So, the source's router (actually, the DR) encapsulates the multicast packets inside unicast packets (PIM Register messages) sent to the RP and tunnels them to the RP in this form. The RP decapsulates the multicast content and makes it available for distribution over the RPT.
There is much more to PIM-SM that has not been detailed here, such as PIM-SM for SSM (sometimes seen as PIM-SSM). But it is enough to explain the interplay among host receivers, IGMP (in IPv4), MLD (in IPv6), PIM itself, the RP, and the source.
URL: https://www.sciencedirect.com/science/article/pii/B9780128110270000187
Layer 7: The Application Layer
In Hack the Stack, 2006
Other Protocols
One of the most popular protocols used on the Internet today is Hypertext Transfer Protocol (HTTP). HTTP is not as old as some of the other protocols discussed in this chapter, but it still contains security issues. The majority of attacks that take place over HTTP target the Web applications that run on top of the protocol.
Other application layer protocols commonly used are the Post Office Protocol (POP3) and the Internet Message Access Protocol (IMAP). These protocols are used to retrieve e-mail from servers and are typically used in conjunction with SMTP. While both protocols have built-in authentication mechanisms, they still suffer from a variety of weaknesses (e.g., eavesdroppers can obtain authentication information from these protocols).
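As a simple illustration of the eavesdropping risk (the host name and credentials below are placeholders, not real values), a plaintext POP3 login sends the user name and password in the clear:

```python
# The USER/PASS exchange below travels unencrypted unless TLS (POP3S) is used.
import poplib

pop = poplib.POP3("mail.example.com", 110)   # cleartext connection on port 110
pop.user("alice")                            # user name sent in the clear
pop.pass_("hunter2")                         # password sent in the clear
print(pop.stat())                            # (message count, mailbox size)
pop.quit()

# poplib.POP3_SSL (or STARTTLS on servers that support it) avoids this exposure.
```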
URL: https://www.sciencedirect.com/science/article/pii/B9781597491099500125
The Botnet Problem
Xinyuan Wang , Daniel Ramsbrock , in Computer and Information Security Handbook, 2009
7. Summary
Botnets are one of the biggest threats to the Internet today, and they are linked to most forms of Internet crime. Most spam, DDoS attacks, spyware, click fraud, and other attacks originate from botnets and the shadowy organizations behind them. Running a botnet is immensely profitable, as several recent high-profile arrests have shown. Currently, many botnets still rely on a centralized IRC C&C structure, but more and more botmasters are using P2P protocols to provide resilience and avoid a single point of failure. A recent large-scale example of a P2P botnet is the Storm Worm, widely covered in the media.
A number of botnet countermeasures exist, but most are focused on bot detection and removal at the host and network level. Some approaches exist for Internet-wide detection and disruption of entire botnets, but we still lack effective techniques for combating the root of the problem: the botmasters who conceal their identities and locations behind chains of stepping-stone proxies.
The three biggest challenges in botmaster traceback are stepping stones, encryption, and the low traffic volume. Even if these problems can be solved with a technical solution, the trace must be able to continue beyond the reach of the Internet. Mobile phone networks, open wireless access points, and public computers all provide an additional layer of anonymity for the botmasters.
Short of a perfect solution, even a partial traceback technique could serve as a very effective deterrent for botmasters. With each botmaster that is located and arrested, many botnets will be eliminated at once. Additionally, other botmasters could decide that the risks outweigh the benefits when they see more and more of their colleagues getting caught. Currently, the economic equation is very simple: Botnets can generate large profits with relatively low risk of getting caught. A botmaster traceback solution, even if imperfect, would drastically change this equation and convince more botmasters that it simply is not worth the risk of spending the next 10–20 years in prison.
URL: https://www.sciencedirect.com/science/article/pii/B978012374354100008X
Source: https://www.sciencedirect.com/topics/computer-science/internet-today