16 The Medium Access Sublayer: 802.11 (Wi-Fi) –II

Prof. Bhushan Trivedi

epgp books

 

Introduction

 

We looked at the 802.11 network and its two modes of operation, DCF and PCF, in the previous module. We also saw how the DCF mode operates and the issues that make it inferior to the Ethernet it tries to model. Both ways of communicating in the DCF mode, with and without CSMA/CA, were discussed, along with the PCF mode and the various service primitives used to communicate in it.

 

Now it is time to demonstrate how the PCF and DCF modes can work together on a given network. We will also learn about the hybrid mode, which combines both. Interestingly, Wi-Fi networks are increasingly used for voice and video (VoIP calls, Skype calls with video, WhatsApp calls). We have already seen that such real-time traffic demands additional quality measures, and the extensions are designed to provide that quality of service. Another issue is that wireless devices need better power management; the extensions provide elaborate measures for that as well. A further critical point is to keep the protocol fair to all devices: a fast device should not be hampered because it is given the same service as a slow one. The extensions can also divide traffic into multiple classes and forward it according to the priority assigned to each class.

 

Managing PCF and DCF modes together

 

Transmission in the PCF and DCF modes introduces some delay. The SIFS and DIFS delays that we looked at in the previous module sound wasteful at first glance: they seem to waste time unnecessarily when we know that nobody is transmitting or plans to transmit. When one station completes its transmission, why does the next sender wait for DIFS? Why does the receiver wait for SIFS? There does not seem to be any reason for such delays.

 

As it turns out, this is not wasteful but a very clever design for managing prioritized transmission; Ethernet has nothing similar. Permitting PCF and DCF transmission together demands some form of sequencing, and even in the PCF mode, where only the access point decides the transmission sequence, these delays exist. This seemingly strange design allows both PCF and DCF transmissions to take place in a single cell. PCF is given higher priority over DCF: a node communicating with the AP, or the AP communicating with a node, sends after a delay called PIFS, which is shorter than DIFS. That means if a node wants to send to the AP while some other device plans to send directly to another node, the node sending to the AP gets preference, and there is no chance of a collision. A directly communicating node that waited for somebody else to finish must then wait a further DIFS; meanwhile, the AP or an associated node sends after only PIFS, which is shorter, and thus gets the first chance to transmit. As per the rule, if a node finds the channel busy after waiting for DIFS, it goes back to waiting for the transmission to complete and then waits for DIFS again. If the PCF transmission is not over and the next frame is to be sent, that frame too waits only the shorter period; thus a DCF transmission can only take place when there is no PCF transmission. Only when there is no transmission from or to an AP, and the channel remains idle after the DIFS time has elapsed, can the DCF transmission take place.

 

This mechanism also extends to transmissions that are critical for completing an operation, like RTS, CTS, and ACK. They wait an even shorter time (SIFS) than PIFS, so these operations enjoy the highest priority. Even when the AP wants to send, once the current sender finishes, the ACK to that sender goes out first; only then can the PCF transmission take place, and finally the DCF transmission takes its turn.

 

The scheme of choosing different wait times for different operations is quite clever: it automatically sets the priority of each operation without needing any field to indicate priority, any logic for testing it, or any priority queues. The scheme is simple and efficient. RTS, CTS, and ACK get the highest priority, PCF communication gets the next, and DCF is third, taking place only when no other traffic is on. Interestingly, a node can also respond with a negative acknowledgment for a garbled frame it has received. The negative ack indicates that the frame arrived but was garbled, so the sender should retransmit immediately instead of waiting for a timeout. For this, one more interval, called EIFS, slightly larger than DIFS, is introduced.
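The timing-based priority scheme above can be pictured with a small model: every pending operation waits out its interframe spacing, and whichever operation has the shortest wait seizes the idle channel first. The numeric spacing values below are illustrative only, not taken from any particular 802.11 physical layer.

```python
# Illustrative interframe spacings in microseconds (example values,
# not the exact figures of any specific 802.11 PHY).
IFS = {"SIFS": 10, "PIFS": 30, "DIFS": 50, "EIFS": 90}

# Map each kind of pending operation to the spacing it must wait out.
OPERATION_IFS = {
    "ACK": "SIFS", "CTS": "SIFS", "RTS": "SIFS", "fragment": "SIFS",
    "PCF_poll": "PIFS",       # AP-coordinated (PCF) traffic
    "DCF_data": "DIFS",       # ordinary contention-based traffic
    "negative_ack": "EIFS",   # report of a garbled frame
}

def next_to_transmit(pending):
    """Given pending operations on an idle channel, return the one that
    transmits first: the one with the shortest interframe wait."""
    return min(pending, key=lambda op: IFS[OPERATION_IFS[op]])
```

Because SIFS < PIFS < DIFS < EIFS, the priority ordering falls out of the timing alone, with no priority field or queue logic needed.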

 

When fragments of a single frame are to be sent, the fragments are given the same priority as RTS, CTS, and ACK: after the first fragment, which starts after DIFS, all subsequent fragments start after only SIFS. That means once the first fragment gets through, the remaining fragments enjoy the highest priority and do not compete with other DCF or even PCF transmissions. The idea is to send all the fragments of the same frame as fast as we can.

 

 

Whenever a receiver receives a frame or a fragment, the response must get back to the sender immediately to prevent it from retransmitting. That job is done by giving the ACK the highest priority. Similarly, when a frame needs to be sent without collision, the RTS is given equally high priority, so it is sent before any PCF or DCF transmission. The CTS, for the same reason, gets the same level of priority.

 

Figure 18.1 depicts the complete priority list. You can easily see that the priority is set according to what we have discussed so far.

 

Wi-Fi Extensions

 

802.11 is almost omnipresent today. The original design discussed so far has some problems that became more apparent with increasing usage. There were quality of service issues, both PCF and DCF modes needed improvement, and researchers and designers felt the need for another mode based on both of them, called the hybrid mode. All this led to an extension of the 802.11 standard known as 802.11e. Most of the extensions are optional right now1 but may soon become mainstream.

 

802.11 networks are increasingly used for real-time audio and video such as VoIP and video-conferencing solutions like Skype. These applications are delay sensitive: users notice even a little transmission delay and get irritated when it crosses their tolerance limit. The original standard was like Ethernet, with no service classes and no preference for specific traffic. Any solution that provides better QoS begins by segregating traffic into classes and treating delay-sensitive applications with more urgency. Segregation demands identifying the frames belonging to such connections and treating them the way they need. 802.11e provides the services of defining traffic classes and segregating frames into those classes.

 

EDCA

 

Let us look at the first category of channel access mode, which gives higher-priority traffic a better chance of transmission: EDCA, or Enhanced Distributed Channel Access. EDCA provides two different services to users. The first allows categorizing traffic into multiple classes with different priorities. The commonly used categories, from higher priority to lower, are Voice, Video, Best Effort, and Background. Best Effort is normal traffic, like web access, where the user is directly involved; Background traffic is where the user is not directly involved, for example a file download. SIFS, PIFS, DIFS, and EIFS are interframe spacings that users have no control over. EDCA allows the administrator to provide additional, dynamic interframe spacings such that the first access to the channel goes to the highest-priority traffic. Defining and using such spacings, however, depends on the physical layer beneath: even if the MAC layer defines new spacing values, the scheme cannot function unless the physical layer is in a position to sense transmission after those spacings. The wireless card must also ensure that incoming traffic is properly categorized and given the service it needs. That means the card the user chooses decides whether these extensions are provided.

 

1 There is an alliance called Wi-Fi Multimedia (WMM). APs that claim to be certified by this alliance must support two services, EDCA and TXOP, i.e. priority-based transmission. The rest of the extensions are optional even for these APs.

 

The process of adding and using additional frame spacings is not possible without a protocol to manage it. Senders, receivers, and APs must all use this protocol to coordinate the spacings and work accordingly. The Tiered Contention Multiple Access (TCMA) protocol is designed for this purpose.

 

The administrators need to design different interframe spacings for each category2. Once that is done, higher-priority traffic gets earlier access to the channel and thus has a better chance of acquiring it than low-priority traffic. These additional interframe spacings are known as Arbitration Interframe Spacing (AIFS). For every service class, the admin provides an AIFS based on its priority.
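One way to picture AIFS: each access category gets its own spacing, shorter for higher priority, and the earliest contender wins exactly as with the fixed spacings. The slot counts below mirror common EDCA defaults but should be read as illustrative values an administrator might set.

```python
# Illustrative AIFS values, in slot times, per EDCA access category;
# higher-priority categories wait fewer slots before contending.
AIFS_SLOTS = {"Voice": 2, "Video": 2, "BestEffort": 3, "Background": 7}

def first_contender(categories):
    """Return the access category that may begin contending for the
    channel earliest, i.e. the one with the smallest AIFS."""
    return min(categories, key=AIFS_SLOTS.get)
```

Note that Voice and Video share the shortest wait here; in practice the contention-window parameters (not shown) break such ties probabilistically.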

 

Providing services based on the traffic is one thing; providing services based on the sender is another. Wireless devices operate at various speeds: for example, 802.11b operates at 11 Mbps, 802.11a and 802.11g at 54 Mbps, 802.11n at 600 Mbps, 802.11ac at about 1 Gbps, and so on. When two senders with different speeds are transmitting, the system should ensure that a slow sender does not slow the others down. The problem gets more complex because the speeds mentioned above are maximum speeds; a sender may transmit more slowly for many reasons, including its distance from the destination.

 

Suppose there are two senders, one sending at 12 Mbps and another at 54 Mbps. If you remember our discussion of DCF transmission in the previous module, only when one frame is sent and an ack received can the next sender send. No sender is given priority; whoever can transmit after the current sender finishes sends its frame. In the above case, the 12 Mbps sender takes about 4.5 times longer to transmit a frame. Assume the algorithm works fairly and both senders get an equal opportunity to transmit, i.e. they send frames alternately. This mechanism, however fair it seems, is quite unfair to the fast sender: it stalls its transmission and makes it as slow as the other sender. This situation is known as the rate anomaly and demands a solution.

 

The solution should schedule transmission not in units of frames but in units of time, and 802.11e provides exactly that. A fixed time is given to each station in the fray. Each station can transmit as many frames as it can during that period, without waiting for acks. Thus low-rate stations send at their slow transmission rate while high-rate stations send at their fast rate. This scheme is much better than the original one of allowing an equal chance to all. When this method is deployed and a frame is too long to be transmitted in the given duration, the frame is fragmented and the largest fragment possible in that period is transmitted; the next fragment goes in the next iteration. If each sender is given 1 or 2 ms to transmit, one can see that a 54 Mbps sender will send about 4.5 times what a 12 Mbps sender could. This scheme is known as Transmission Opportunity, or TXOP, and it solves the rate anomaly problem.
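The rate-anomaly arithmetic can be checked numerically. Under per-frame alternation, both stations complete the same number of frames per second, dragging the fast one down to the shared round time; under per-time TXOP, each station's throughput is simply proportional to its own rate. This is a simplified model that ignores ACK and interframe-spacing overhead.

```python
def per_frame_throughput(rates_mbps, frame_bits):
    """Frame-by-frame alternation: in each round every station sends one
    frame of frame_bits, so the round lasts the sum of all transmission
    times. Every station then gets the same throughput (in Mbps)."""
    round_time_us = sum(frame_bits / r for r in rates_mbps)  # bits / Mbps = microseconds
    return [frame_bits / round_time_us for _ in rates_mbps]

def per_time_throughput(rates_mbps, txop_us):
    """TXOP: each station gets the same airtime slice, so its share of
    the total throughput is proportional to its own rate (in Mbps)."""
    total_us = len(rates_mbps) * txop_us
    return [r * txop_us / total_us for r in rates_mbps]
```

With a 12 Mbps and a 54 Mbps sender and 12 000-bit frames, per-frame fairness pins both at roughly 9.8 Mbps, while TXOP lets the fast sender reach 27 Mbps (half of its 54 Mbps, since it holds the channel half the time).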

 

2  These traffic categories are known as access categories in EDCA.

 

TXOP helps in providing treatment according to the speed of the sender, while EDCA provides traffic services based on priorities. Above all, the AP may periodically announce exactly how much bandwidth is available, so senders can increase their speed if the available bandwidth permits. The EDCA access-category durations are also designed according to the traffic: heavier traffic gets a longer duration and lighter traffic a shorter one. Background and best-effort traffic contend with window values between 15 and 1023 slots and the sender can send only one frame, while voice and video get durations of approximately 1.5 ms and 3 ms respectively (reflecting their higher priorities). For a given class of data, EDCA also deploys admission control: the admission control process denies any further traffic of a class when enough data of that class is already due for transmission. When the AP publishes the available bandwidth, senders check it before adding traffic.
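The admission-control idea can be sketched as follows: the AP advertises its spare bandwidth, and each admitted stream reduces what remains, so later requests of the same class are refused once the budget is spent. The class name and interface are illustrative, not part of the standard.

```python
class AdmissionControl:
    """Toy EDCA-style admission control at the AP: a stream is admitted
    only if the advertised spare bandwidth can absorb its demand."""
    def __init__(self, available_mbps):
        self.available = available_mbps   # bandwidth the AP advertises

    def request(self, mbps):
        """Admit a stream needing `mbps`; deduct it from the budget on
        success, refuse without side effects otherwise."""
        if mbps <= self.available:
            self.available -= mbps
            return True
        return False
```

A sender would consult the advertised figure before adding traffic, exactly as the text describes.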

 

HCCA

 

HCCA (Hybrid Coordination Function Controlled Channel Access) is basically PCF with extended functionality. Beacon frames can now be sent at desired intervals rather than fixed ones as in the original design. HCCA is flexible enough to allow DCF or EDCA as well as conventional PCF-type communication; that is why it is called a hybrid mode. Machines can choose the mode they want to operate in. It implements TXOP, like EDCA, so QoS can be honored. A station may set the data rate at which it wants to transmit and the amount of jitter it can tolerate. As in PCF, the AP polls each member, and those who want to send through the AP may respond. How it allows both modes together is also interesting. The AP announces that anybody who wants to send may do so, as in DCF mode; this is called direct link access. Conventional PCF disallows two communicating nodes from talking to each other directly; HCCA allows it. As long as the AP does not need to interfere, it lets the machines communicate directly. As soon as it has a frame for a receiver, or expects one, it chooses a shorter frame spacing for that operation to avoid collisions with ongoing transmissions. Once this phase commences, the AP starts acting as in conventional PCF mode, where it is the boss. The AP is called the hybrid coordinator, or HC, and controls access during this period. The traffic classes we discussed for EDCA are also present here. Rather than using a pure round-robin algorithm that gives equal time and priority to each sender, it prioritizes the traffic and sends higher-priority traffic before lower. HCCA can also assign priority based on the communicating parties (i.e. based on who the sender is). In some cases, for example when the network contains a few servers, such a scheme is quite effective: when the servers are given higher priority than other nodes, communication efficiency increases considerably. As each client gets a faster response, it can speed up its local processing and eventually provide a much faster experience to the user.

 

That means HCCA allows both DCF-like and PCF-like modes and alternates between them. It also allows station-based priority. The HCCA mode is much more complex than the EDCA mode and has not yet become widespread.

 

HCCA function

 

Every communication is followed by some period during which nobody sends. When the typical timeframe is over, for some period anybody can send. For example, any AP-based communication can take place after SIFS and before DIFS elapses. Once a sender starts sending and everybody starts listening, again nobody else sends for that period. The period when nobody is expected to transmit and garble others is known as the contention-free period (CFP), while the period when multiple users might send together is called the contention period. HCCA functions by letting the AP set the CFP to any value it wants. For example, when a frame is being sent, all neighbors remain silent for that CFP. HCCA can extend this CFP to any value; the extended period is called the Controlled Access Phase. In this phase no machine is allowed to send, but the AP can, and thus the phase is controlled by the AP, which can send or receive frames during it. Outside the controlled access phase, stations function in EDCA. Traffic classes and streams (the data being transmitted) are defined and used. The HC (Hybrid Coordinator) is more powerful than a plain AP. An AP allocates every station a turn and a channel for transmission; that is known as per-station scheduling. The HC can also decide to schedule traffic based on the sessions (traffic streams) running on a machine. Stations can report their traffic volume for each priority, and the HC can optimize scheduling based on that information. TXOP is also allowed, which means each station sends for a period of time rather than a number of frames; faster senders are thus no longer starved of bandwidth.

 

HCCA is more advanced in one more way: it asks each station for a good deal of information and provides quality of service based on that information and the station's requirements. For example, a station transmitting video or audio might have specific jitter and delay tolerances, which HCCA can honor. If such measures are properly chosen and implemented, delay-sensitive applications like VoIP and video streaming can function much better. HCCA is not a mandatory standard, and very few APs are actually designed to provide this service right now.

 

Another critical feature of the 802.11 extensions is an elaborate mechanism to manage power, known as APSD, or Automatic Power Save Delivery, which we discuss next.

 

Automatic Power Save Delivery

 

The original design of 802.11 allowed only a single power-save bit. Whenever a machine is about to enter sleep mode due to a dipping battery, it turns that bit on and then goes to sleep. Once the machine is asleep, the AP wakes it only if there is a frame for it, and not otherwise. The extension builds on this process and provides a better method to save power. Here is the description.

 

This method is known as APSD, or Automatic Power Save Delivery. Wi-Fi is increasingly used as a platform for VoIP calls, and during a VoIP call the voice packets are sent at fixed intervals. A station whose battery is dipping while a voice call is going on can instruct the AP about its periodic requirements before powering down. After that, the machine settles into a rhythm: it sends its data, dozes for the scheduled rest period, wakes when the next scheduled time arrives, sends the next frame, and dozes off again. Thus the machine switches between sleeping and sending. This scheduled automatic power-saving mechanism is known as S-APSD (Scheduled APSD). Moreover, the AP holds all frames that arrive for the station during its sleep period; only when the machine wakes and sends its frame does the AP deliver the buffered frames together. That is why the word Delivery appears in the name. This is quite a clever scheme: since the frames are sent in bulk, all the DIFS-frame-SIFS-ACK sequences (known as signaling overhead) are eliminated, and only the time to send the frames is needed. The result is a synchronized service: at the scheduled time, the AP delivers all frames for the machine together, without any signaling. This can also be implemented as a synchronized service even when the machine has not gone to sleep. Once a machine is in power-save mode, sending and receiving happen only at the scheduled time slots; because the schedule is known a priori, the AP can deliver frames without any preceding signaling.
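The buffering behavior described above can be sketched as a small model of the AP's side of S-APSD: frames for a dozing station are held and then handed over in one burst at the scheduled wake-up, eliminating per-frame signaling. The class and method names are illustrative.

```python
class ApsdBuffer:
    """Sketch of S-APSD at the AP: frames addressed to a dozing station
    are buffered and delivered back-to-back at the scheduled wake-up."""
    def __init__(self, period_ms):
        self.period_ms = period_ms   # agreed wake-up interval
        self.buffer = []

    def frame_arrives(self, frame):
        """Station is asleep: hold the frame instead of transmitting."""
        self.buffer.append(frame)

    def scheduled_wakeup(self):
        """Scheduled instant: hand over everything in one burst, with a
        single signaling exchange instead of one per frame."""
        burst, self.buffer = self.buffer, []
        return burst
```

Delivering the burst in one go is exactly what removes the repeated DIFS-frame-SIFS-ACK overhead the text mentions.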

 

This scheme is so good that it can be used even for unscheduled traffic. The AP continues to buffer all frames destined for the machine, and only when the machine decides to send a frame are all the buffered frames passed to it in bulk. Unlike scheduled delivery, this unsynchronized delivery happens only when the device decides to send a frame, and not otherwise; there is no a priori commitment to any time slot. Scheduled APSD is available in both EDCA and HCCA, but U-APSD is possible only with EDCA.

 

APSD has a few advantages over the conventional power-saving mode. Here is a list.

 

1. Reduced signaling traffic, and thus saved network bandwidth.

 

2. Collisions are avoided when timeslots are fixed and communication takes place only during the scheduled periods.

 

3. The buffered frames are sent back to back, which reduces overhead and the power required.

A few other features of 802.11e

 

There are quite a few other features, which we cover together below.

 

1. Only one ack needs to be sent when a bulk of frames is sent using TXOP. The TXOP defines a block as the number of frames sent during that period. For example, VoIP traffic can send as many frames as it can during its TXOP, and all the receiver needs to do is send one ACK back; the entire block of frames is acknowledged only once.

 

2. It is also possible for the sender to indicate that it does not want ACKs for the frames it is sending. This is specifically useful where real-time data is being transmitted and sending ACKs and retransmitting is out of the question. The service classes clearly define both variants, one that requires ACKs and one that does not.

 

3. It is also possible for nodes to communicate directly with each other under the AP's control, unlike the conventional way. This is known as direct link setup. If both stations are part of the same Basic Service Set (connected to the same access point), station-to-station direct transfer of frames is possible. Streaming video from a smartphone to a TV, or smartphone-to-printer communication for printing a file, are examples of traffic that performs much better over a direct connection.

 

Let us now see the content of the frame header and understand what each field of the frame means.

 

The 802.11 frame structure

 

The 802.11 frame structure is depicted in figure 18.2. The first field, called frame control, is of 2 bytes and contains many flags, which are also shown in the figure with their respective sizes in bits. Let us begin with the frame control field.

 

Frame Control

 

An 802.11 frame begins with a two-byte Frame Control (FC) field. The first two bits of FC give the version of the 802.11 protocol used to send this frame; the current version is zero. The next two bits indicate the type of the frame: management, control, or data, as mentioned in the previous module. Control frames can be RTS, CTS, or ACK. Management frames include beacon, authentication and de-authentication, and association, de-association, and re-association frames, which help in communication. All other frames, which carry data from sender to receiver, are data frames. It is important to note that management and control frames are much shorter and do not contain the other fields of the header. Figure 18.3 lists the types together.

 

 

Let us now look at the other flags. The To-AP and From-AP fields indicate whether an intermediary AP is involved in this transmission. If the frame travels from sender to receiver without any AP involved, both bits are zero. If the sender is sending the frame to the AP (to be delivered to the receiver later on), the To-AP bit is on. If the AP is sending the frame to the receiving node, the From-AP bit is on. We will soon see how these bits are put to use.

 

When the frame is fragmented, the next field, more fragments, indicates so: when a frame with this bit on arrives, the receiver concludes that other fragments of the same frame are still to come. The frames are numbered, so sequence numbers do exist. There is also a bit called Retry, which indicates whether the frame is a repeat transmission. The power management bit that we mentioned during the discussion of APSD comes next: if the sender turns this bit on, the AP understands that the sender wants to go into power-save mode. Another field, more data, indicates whether the current transmission is over. The last two bits, W and O, are not very significant. W indicates whether the old-fashioned WEP security is applied; with the additional security measures provided by 802.11i, this field has lost its relevance. O indicates whether the data needs to be kept in order at the receiver; again, with traffic classification and detailed quality provisioning, this field makes little sense now.
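The Frame Control flags above can be made concrete by unpacking the two-byte field bit by bit. One caveat: the real 802.11 layout also carries a 4-bit subtype between the type bits and the flags, which the discussion above omits; it is included here so the bit positions are accurate.

```python
def parse_frame_control(fc):
    """Unpack the 16-bit Frame Control value into its fields, following
    the 802.11 bit layout: 2-bit version, 2-bit type, 4-bit subtype,
    then the eight one-bit flags discussed in the text."""
    return {
        "version":   fc & 0b11,
        "type":      (fc >> 2) & 0b11,     # 0=management, 1=control, 2=data
        "subtype":   (fc >> 4) & 0b1111,
        "to_ap":     (fc >> 8) & 1,
        "from_ap":   (fc >> 9) & 1,
        "more_frag": (fc >> 10) & 1,
        "retry":     (fc >> 11) & 1,
        "pwr_mgmt":  (fc >> 12) & 1,
        "more_data": (fc >> 13) & 1,
        "protected": (fc >> 14) & 1,       # the W bit
        "order":     (fc >> 15) & 1,       # the O bit
    }
```

For example, a data frame headed to the AP has type 2 and the To-AP bit set, with everything else zero.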

 

Duration

 

Please recall our discussion of the CSMA/CA method, where the neighbors run the NAV while the communication is on. How do they know the time? By reading the value carried in this field, which indicates how long the channel will be occupied while this frame is being sent. The field exists in the control frames RTS and CTS too, and is thus available to all neighbors. The frame transmitted after them carries the same value in this field.

 

Distributed Service Set and four address fields

 

A wireless cell covered by a single access point is termed a BSS, or Basic Service Set. Whenever a sender and a receiver belong to two different cells (BSSs), the interconnection between the cells comes into the picture. Most of the time, multiple BSSs are connected by wired links like Ethernet, but sometimes the connections are wireless. The complete network, consisting of a few BSSs connected together, is known as a Distributed Service Set, or DSS. Every BSS has one SSID, so a DSS contains a few BSSs with individual SSIDs. Figure 18.4 shows how two BSSs combine to form one DSS; we have named them Cell1 and Cell2.

 

 

Both BSSs are connected by a switch or a router. It is also possible to have multiple routers and a wired segment, but we have shown only one interconnecting device in our figure. Now consider a case where node 1 wants to communicate with node 4. This happens in three phases: first from node 1 to AP-1 (of Cell1), second from AP-1 to AP-2 (over the switch and wired connection), and third from AP-2 to node 4. Figure 18.5 shows the values of the third and fourth address fields in all three phases.

 

 

When the frame travels from node-1 to AP-1, the To-AP bit is true, as the frame is sent to the AP. The first two addresses are those of node-1 of BSS1 and node-4 of BSS2, i.e. the sender and receiver as in other cases, so there is not much to discuss about them. The third address is that of the AP to which the frame is being sent.

 

In the second phase, where AP-1 sends to AP-2, if the interconnection is a wired network as shown in figure 18.4, the frame is not a wireless frame: it does not carry these fields, and the values we are discussing do not apply. If it is wireless, address-3 is that of AP-1 and address-4 that of AP-2. When both addresses are valid, both flags are also true, as shown in 18.5(c). Note that in most cases the interconnection is wired, and thus the fourth address is hardly used.

 

In the third phase, when AP-2 sends the frame to node-4, the From-AP flag becomes true and the third address (yes, the third and not the fourth) is used to store AP-2's address, as shown in 18.5(b).

 

Ethernet needs only two addresses, not four like 802.11, because there are no APs in Ethernet. An 802.11 frame may traverse both APs and routers or switches when the DSS is to be crossed. Thus we have one more layer of hierarchy, and that is why four addresses are needed.

 

When we only need intra-cell communication, i.e. when both sender and receiver belong to the same cell, only the first two addresses are used, and both flag values are zero.
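The three phases above can be summarized as a simple lookup keyed on the To-AP and From-AP flags. This is a rough sketch of which role each carried address plays, following the discussion in this section; the standard's own address table is more detailed.

```python
def address_roles(to_ap, from_ap):
    """Return the roles of the address fields a frame carries, keyed by
    the To-AP / From-AP flags (simplified reading of the text)."""
    if not to_ap and not from_ap:
        return ["receiver", "sender"]                     # direct, intra-cell
    if to_ap and not from_ap:
        return ["AP", "sender", "final destination"]      # phase 1: node -> AP
    if from_ap and not to_ap:
        return ["destination", "AP", "original sender"]   # phase 3: AP -> node
    # phase 2, wireless AP-to-AP link: all four addresses are valid
    return ["receiving AP", "sending AP", "final destination", "original sender"]
```

Only the wireless AP-to-AP case uses all four addresses, which is why the fourth address is hardly ever seen on wired-backboned DSSs.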

 

When the data crosses a BSS and travels over the wired network, another issue arises: converting a wireless frame into a wired frame, and vice versa at the other end. This is an issue in itself, which we do not address in this module but will discuss in the next, when we cover connecting multiple networks at the data link layer. For more information, you may refer to reference 1.

 

The final three fields are self-explanatory. The sequence number is unique for every frame and helps the receiver send acknowledgments: the first 12 bits identify the frame, and the next four identify the fragment. The payload field carries the network layer packet, and the CRC is the same cyclic redundancy check we have looked at earlier.
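The 12-plus-4-bit split of the sequence field can be shown with a little bit arithmetic, following the ordering stated in the text (12 sequence bits, then 4 fragment bits):

```python
def make_sequence_field(seq, frag):
    """Pack a 12-bit frame sequence number and a 4-bit fragment number
    into the 16-bit sequence field."""
    return ((seq & 0xFFF) << 4) | (frag & 0xF)

def split_sequence_field(field):
    """Inverse of make_sequence_field: recover (sequence, fragment)."""
    return field >> 4, field & 0xF
```

All fragments of one frame share the same 12-bit sequence number and differ only in the 4-bit fragment number, which is how the receiver reassembles them.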

 

The maximum payload size (2312 bytes) is much bigger than Ethernet's 1500 bytes. However, while most Ethernet frames are as big as the 1500-byte maximum, wireless frames are rarely anywhere near their maximum size.

 

Summary

 

We have looked at the different 802.11 interframe spacings and moved on to describe two additional modes provided by the extensions. The extensions are designed to provide quality of service: it is possible to define traffic classes and divide the traffic among them so that each class receives a different service. Two additional modes are provided over and above the two we saw in the previous module. The EDCA mode is more common; it provides additional interframe spacings and differentiates between real-time and non-real-time traffic. The HCCA mode is less common but very flexible, providing both DCF-like and EDCA-like service together; the AP is known as the Hybrid Coordinator here. Finally, we looked at the complete frame structure of 802.11.


References

  1. Computer Networks by Bhushan Trivedi, Oxford University Press
  2. Data Communication and Networking, Bhushan Trivedi, Oxford University Press