After the radio on your roof, the main parts of the MCSNet network are the access points (APs) on the towers that subscribers connect to, the tower-to-tower feeds, and the connection to the fibre for the region. The best way to understand how performance is affected through this network is a roadway analogy, so let's dig in and see if we can figure out this wireless ISP thing.
To illustrate some of the difficulties and obstacles of providing Internet, we will use a highway analogy. Imagine a busy two-lane highway full of cars going from point A to point B, with all of the cars travelling at the speed limit. If we can't increase the speed limit, how can we get more cars to point B in less time? By increasing the number of lanes. With the Internet, the number of cars that can get to point B in a given time is called the bandwidth, a term that points directly at where the limitation is. The bandwidth on our highway is the number of lanes for the traffic to share; with wireless, it's the width of the slice of radio spectrum the connection uses (e.g. a 20 MHz channel). If you read through the Wi-Fi section above, you won't be surprised that there are only a few slices of frequencies available for wireless, so it's not possible to keep adding lanes to this highway.
Why can't we increase the speed limit of the highway? After all, 16 lanes still won't get many cars to point B if they are only going 3 km/h. The reason is that in our analogy, the speed of the cars is the speed of the data, and the data is already at a real speed limit: the speed of light, for most of the route. The time it takes to get from point A to point B is called the latency (latency can be measured with a ping test, which times how long the data takes to get there and back), and it is mainly determined by the distance the data needs to travel.
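To see why distance, not lane count, sets the floor on latency, here is a minimal sketch that computes the best-case round-trip time from the speed of light alone. The 3000 km distance is just an illustrative example, and real pings are always higher because of routing and equipment delays.

```python
# Rough lower bound on ping from distance alone, ignoring routing,
# queuing, and equipment delays. The distance used is hypothetical.
SPEED_OF_LIGHT_KM_S = 300_000  # approx. speed of light, km/s

def min_ping_ms(distance_km: float) -> float:
    """Best-case round-trip time (ping) in milliseconds for a one-way distance."""
    return 2 * distance_km / SPEED_OF_LIGHT_KM_S * 1000

# A server 3000 km away costs at least ~20 ms of ping,
# no matter how many "lanes" the connection has.
print(round(min_ping_ms(3000), 1))  # -> 20.0
```

No amount of extra bandwidth changes this number, which is why adding lanes and raising the speed limit are genuinely different problems.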
Here we see our highway; point B is on the right, where it feeds multiple fictional cities.
If your connection was right at point B off of the highway with no other cities to share with, that would be called a dedicated connection, which is more consistent but can be much more expensive to operate (think thousands of dollars). Instead, the vast majority of Internet connections share a feed off of point B like in the image above. When the Internet slows down due to congestion, it's not because the highway is moving slower; it's that the highway can't hold enough cars to keep all of the feeds off of point B fed at the same time. It's rush hour.
This was a more dominant model when the Internet started way back when. In our highway analogy, each city would have a guaranteed rate of cars coming in; the idea is that each city's traffic rate is reserved so that it's available when needed. We have 4 lanes and 8 cities, so each city gets about half a lane worth of traffic. This solves the slowdown issue: as busy as any city may be, it can't affect the rate of cars to the others. The big downside is that the city of Bedrock can no longer get its double-wide shipment of brontosaurus burgers in, since it is only allotted half a lane of traffic, which is a shame since the highway is actually only busy for a few hours in the evening. The same holds true for the Internet. It's mostly only busy from about 8pm to midnight, so while a guaranteed rate is nice, it leads to slower top speeds (no "double lane downloads" like HD video), and the network sits barely utilized for a large portion of the day. It's also more expensive, because it doesn't allow more cities/subscribers to split the bill and bring costs down. This was the early age of 'broadband' Internet, when a 1.5 Mbps T1 line was more than $1,000 a month and was essentially business-only; everyone else was still rocking dialup, which averaged about 0.024-0.048 Mbps. It's important to note that the majority of these concerns only apply to wireless Internet; the cable and fibre connections in the more urban centres have an enormous amount of bandwidth available to them.
Let's scrap the reservation system. If the highway is free enough, then Bedrock can get its double-lane shipments in at the low times of the day, and if it's during the busy time, then there won't be any room for them and the problem takes care of itself. Traditional Internet use has only been in short bursts, so this sharing system works very well to get the best speeds for everyone. If the highway is open and a city can use all four lanes to get its cars in within 1 second (instead of 8 seconds with the reservation system), let them have at it; now the highway is free that much sooner for another city to do the same. The traditional Internet works exactly this way: one subscriber grabs a webpage in 0.1 seconds using the full pipe, then a couple of moments later another subscriber downloads some emails in 0.15 seconds, and things are just zipping along! If we watch the traffic on our highway during the busy time, we find that at any moment, for every 8 cities connected, there is about 1 city's worth of traffic on the highway, so we can connect 8 times as many cities to share the cost.
Because of the long-sleep, short-burst nature of Internet activity, it became possible to share the cost of an expensive highway among many subscribers so that any one subscriber could get up to 5 Mbps for under $100 (this is also why almost all ISPs list their speeds as 'up to'). Our highway is now accessible for all, and it feeds many more than the original 8 cities. The number of connections off of the highway changes regularly, so it's very difficult to predict the average or lowest rates for any individual city at a certain time. The 4 lanes are available through most of the day for each city, and we know it will be less at certain times. If a competing highway advertises their 3-lane highway as 'up to' 3-lane speeds, it forces other providers to use the same simple metric, and thus the 'up to' arms race is created. Accessibility is way up and costs are way down, but what remains is finding a way to prevent one hoarding city from filling the highway constantly. We want all cities to have a chance at 4-lane speeds, so it's not fair if Springfield is using all 4 lanes constantly for some abnormal activity.
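The sharing arithmetic above can be sketched in a few lines. The feed size and plan speed here are illustrative numbers, not MCSNet's actual figures; only the 8:1 sharing observation comes from the text.

```python
# Sketch of the oversubscription math: if at peak only about 1 in
# `sharing_ratio` subscribers is active at full rate, a feed can
# carry `sharing_ratio` times more subscribers than a reserved system.
# Feed size and plan speed below are hypothetical examples.
def subscribers_supported(feed_mbps: float, plan_mbps: float, sharing_ratio: int) -> int:
    """Subscribers a shared feed can carry at a given sharing ratio."""
    return int(feed_mbps / plan_mbps * sharing_ratio)

# A 40 Mbps feed with "up to 5 Mbps" plans:
print(subscribers_supported(40, 5, 1))  # reserved lanes: 8 subscribers
print(subscribers_supported(40, 5, 8))  # 8:1 sharing:   64 subscribers
```

The same feed supports eight times the subscribers, which is exactly where the cost savings (and the 'up to' wording) come from.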
We need a way to mitigate the effect of a tiny number of heavy users clogging the lanes; without one, speeds will be slow for everyone because free lanes are never available. Throttling used to be a popular way to prevent a single connection from clogging the highway continuously. It simply means allowing all connections free access to the whole highway, but if a connection exceeds a certain amount of traffic over a set period of time, limiting it down to its portion of a lane to make things more fair for the other connections sharing with it. Bandwidth hogs only make up around 1-2% of users, so most connections should never notice it. Throttling ran into a reputation problem, though, mostly due to abuse by providers using it too aggressively, or using it to hamper or block certain Internet services. The big grief with a throttled connection is that the throttling hits you right when you need the Internet for a download or an update, so it makes the connection very unpredictable. Throttling became a faux pas because of its misuse and either disappeared or was weaseled into some fine print. MCSNet's main competitor still uses throttling, setting the evening rates for non-preferred services down to 0.3 Mbps during peak usage times.
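As a rough sketch, a throttling policy boils down to a rule like the one below: full speed for everyone until a connection moves more than some threshold of data within a window, then a much lower rate. The 0.3 Mbps floor comes from the competitor example above; the full rate, threshold, and window are hypothetical values for illustration.

```python
# Minimal sketch of a usage-based throttling rule. The 0.3 Mbps
# throttled rate is from the text; the other numbers are assumptions.
FULL_RATE_MBPS = 5.0
THROTTLED_RATE_MBPS = 0.3
THRESHOLD_MB = 100        # data allowed within the window before throttling
WINDOW_SECONDS = 600      # size of the rolling measurement window

def allowed_rate(mb_used_in_window: float) -> float:
    """Rate a connection is allowed right now, given recent usage."""
    if mb_used_in_window > THRESHOLD_MB:
        return THROTTLED_RATE_MBPS
    return FULL_RATE_MBPS

print(allowed_rate(20))   # light user: full speed  -> 5.0
print(allowed_rate(500))  # heavy user: throttled   -> 0.3
```

The unpredictability complaint falls straight out of this rule: the throttle only triggers while you are in the middle of heavy use, which is exactly when you care about speed.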
What if we let the heavy users throttle themselves? We give everyone a set amount for the month, and once they use that allowance up, they are done. This treats all parts of the Internet equally, so there is no making one part go slow to make another go fast, and non-heavy users don't have to worry about getting throttled during the one time they do something intensive on the Internet. This is the system that MCSNet uses. It doesn't have the immediate, reactive fix that throttling does, so there can be times when the heavy users are going hard and making things slow, but there is at least some assurance that they won't be able to do it constantly.
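The allowance model is even simpler to sketch than throttling: the only input is total usage for the month. The cap and rates below are hypothetical examples; the "slower unlimited mode" after the cap reflects the MCSNet approach described further down.

```python
# Sketch of a monthly-allowance model: no mid-month throttling, just a
# cap, after which the connection drops to a slow unlimited mode.
# All numbers here are hypothetical examples.
MONTHLY_CAP_GB = 100
NORMAL_RATE_MBPS = 5.0
OVERCAP_RATE_MBPS = 0.5   # assumed "slow unlimited" rate

def current_rate(gb_used_this_month: float) -> float:
    """Rate for the rest of the month, based only on total usage."""
    if gb_used_this_month >= MONTHLY_CAP_GB:
        return OVERCAP_RATE_MBPS
    return NORMAL_RATE_MBPS

print(current_rate(40))   # under cap: full rate          -> 5.0
print(current_rate(120))  # over cap: slow unlimited mode -> 0.5
```

Note what is missing compared to throttling: no window, no per-service rules, and no surprise slowdowns mid-download until the cap itself is reached.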
Most of the time you just have to read the fine print to see what funny definition of 'unlimited' is in play. We already know that if some limit or throttling isn't in place, then any reasonably popular service will be slow for the average user. If true unlimited existed, it would be a boon for the really heavy users, but even the dishonest 'unlimited' ISPs can't alienate the other 98% of their subscribers and watch the network slow to a crawl, so it's generally safe to assume there is some hidden throttling in place when the traffic limits are buried in the fine print. One larger wireless provider in Canada, with claims of unlimited usage, used to throttle a connection down to 0.7 Mbps after about 3 MB of continuous traffic (approximately 1 song's worth). MCSNet is proud to have a clear definition of its traffic allowances, and we also provide a slower unlimited mode that can be used for free for the remainder of the month if the traffic cap is breached.
Streaming video like YouTube and Netflix is an incredible advancement of the Internet, but it's our villain for today, because it changes the sleep-burst pattern of Internet access into continual, non-stop burst. Watching a 2-hour HD movie on Netflix is like trying to use all 4 lanes of the highway for 2 continuous hours, and if enough other connections try to do the same at the same time, there won't be enough bandwidth to support any of them. Streaming video requires the data to reach you within a very short window to avoid interruptions in playback. A bandwidth-hungry HD video stream is not only bad for others sharing the highway, but also for the stream itself, which is sensitive to any delay in keeping the video going. This is why it's important, if you are using Netflix, to set the playback setting so it doesn't need the whole 4-lane highway to work. Here's the support page about fixing the Netflix playback setting.
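To put numbers on the "4 lanes for 2 continuous hours" picture, here is the arithmetic for a sustained HD stream. The 5 Mbps figure is an assumed, commonly cited rate for HD streaming, not a number from the text.

```python
# Data moved by a sustained HD video stream over a 2-hour movie.
# 5 Mbps is an assumed typical HD streaming rate.
stream_mbps = 5.0
hours = 2

# Mbps -> megabytes per second: divide by 8 bits per byte.
megabytes = stream_mbps / 8 * hours * 3600
gigabytes = megabytes / 1000
print(round(gigabytes, 1))  # -> 4.5 (GB of continuous traffic)
```

Compare that to the rest of traditional usage: a webpage or a batch of emails is a fraction of a megabyte in a sub-second burst, while one movie is thousands of megabytes with no gaps for anyone else to slip into.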
The sad part about the Netflix conundrum is that the cities don't have to use the highway at all to get their videos in; satellite feeds, cable, and physical media have been providing the service for years. Prior to Netflix, the main bandwidth hogs were file sharing, where people share whole movie files with each other, and more recently extremely large video game downloads. The big difference is that these activities can be done during off hours when the highway is free, whereas most Netflix usage lands right in the peak usage times in the late evening.
Worldwide top Internet applications during peak times in 2014. Netflix and YouTube make up about half of the downstream usage during peak times.
Why not just block Netflix and keep the Internet fast and reliable? While this would be an immediate fix for the speeds, it leaves the Internet in a dark age where streaming video is blocked. This is a big part of the net neutrality fight that has been going on: Internet providers shouldn't specifically choose which parts of the Internet work or don't work. Instead, it's the much longer road of adapting our highway to support it, and offering higher-priced tiers to take advantage of it and pay for it. This is what is happening with new technologies like OFDM (as well as LTE) equipment that can support many more 'lanes' over wireless. Just like replacing an entire highway system, the upgrades are expensive and take years to complete. The funny part is, once all of the upgrades are done, we'll be back to the old reserved-lane system that was too expensive and wasted network resources so long ago, except this time prices have come down some, and the guaranteed half a lane might be 4 or 8 lanes. In the meantime, Netflix will only work up to the point that a small enough number of subscribers choose to use it. It's not guaranteed to work when the highway is busy; consider it like other things available only in the more urban centres, like dry cleaning and door-to-door mail delivery, where the copper and fibre running right to the block afford a relatively enormous amount of bandwidth to play with. OFDM is a 'fix' for the Netflix problem right now, but when is there enough bandwidth? Streaming video services will graduate to 3D, 60 frames per second, or ultra-high definition (UHD), where the requirements double and double again. We'll surely be looking for another acronym to save the day when we undoubtedly take this ride again.
These are the four main frequency bands used in most fixed wireless Internet feeds like MCSNet's. This is different from the wireless that the cell providers use; they have a much wider range of frequencies available. The 3.65 GHz band is a licensed band that was only recently opened up for use. A higher frequency has a shorter wavelength, so it is worse at shooting through obstacles than a lower-frequency radio. The 900 MHz band offers the most penetration through trees by far.
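The frequency-to-wavelength relationship behind this penetration difference is a one-liner: wavelength = c / frequency. The 5.8 GHz value below is an assumed example of one of the higher fixed-wireless bands; only 900 MHz and 3.65 GHz are named in the text.

```python
# Why higher frequencies penetrate worse: shorter wavelengths.
# wavelength = c / f, with c ~ 3e8 m/s.
C_M_S = 3e8  # speed of light, m/s

def wavelength_cm(freq_mhz: float) -> float:
    """Wavelength in centimetres for a frequency in MHz."""
    return C_M_S / (freq_mhz * 1e6) * 100

print(round(wavelength_cm(900), 1))   # 900 MHz -> ~33.3 cm
print(round(wavelength_cm(3650), 1))  # 3.65 GHz -> ~8.2 cm
print(round(wavelength_cm(5800), 1))  # 5.8 GHz (assumed band) -> ~5.2 cm
```

A 33 cm wave works its way around and through foliage far better than a 5 cm wave, which is the physical reason 900 MHz is the band of last resort for heavily treed locations.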
The slower top rate on 900 MHz is due to the tiny size of the band available to use. To make a comparison with our highway analogy, if the 900 MHz band is a 2-lane highway, the other bands would be a 5-lane highway or more. It's a letdown that the band that is best at getting connections is the slowest. The tiny size also contributes to interference issues, as there is little room for multiple providers to co-exist. If your location is too heavily treed, then 900 MHz may be required to get any connection, unless you have a tower (a 40 ft tower is about $600ish) that lets us mount the radio at your location with enough elevation to shoot over the tops of the trees. There is no OFDM technology option on the 900 MHz band at this time (2015), but we hope to see some in the next couple of years.
In simpler terms, wireless technology uses aspects of the radio waves (amplitude, phase angle, etc.) to encode the 1s and 0s of data; this is called modulation. Covering how the amplitude and phase angle are modulated is beyond the scope of this page, but if you are interested and technically minded, there is a good run-down in this Anandtech article here. To put it simply, OFDM is just a way to get more lanes into our highway in the same frequency space. To get an OFDM connection, we first have to have OFDM access points (APs) on a nearby tower that you can connect to, and you have to have a strong enough signal to connect to them. OFDM is available on all bands except the one that needs it most, the 900 MHz band. The decision to leave 900 MHz behind was made by the main device manufacturer, Cambium (which purchased the product line from Motorola), who may have identified the tiny size of the band and the likelihood of interference in heavily populated areas as reasons to pass on it.
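A small sketch of the modulation idea: a scheme with M distinct amplitude/phase combinations (a "constellation") carries log2(M) bits in each radio symbol, and OFDM then runs many such narrow subcarriers side by side in the same channel. The specific constellations below are standard textbook examples, not a statement of what any particular MCSNet gear uses.

```python
import math

# A modulation scheme with M distinct amplitude/phase combinations
# (constellation points) carries log2(M) bits per transmitted symbol.
def bits_per_symbol(constellation_points: int) -> int:
    """Bits encoded in one symbol of an M-point constellation."""
    return int(math.log2(constellation_points))

print(bits_per_symbol(4))    # QPSK    -> 2 bits per symbol
print(bits_per_symbol(64))   # 64-QAM  -> 6 bits per symbol
print(bits_per_symbol(256))  # 256-QAM -> 8 bits per symbol
```

Denser constellations need cleaner signals to be decoded reliably, which is why a strong signal to the AP matters: a marginal link falls back to the low-density schemes and the "extra lanes" evaporate.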
After you get connected to the tower site, we still have to get from the tower to the rest of the Internet, which is another place where speeds can vary. Most of the tower feeds are wireless: the tower has a wireless feed from another tower, eventually reaching the fibre optic cable that feeds the region. With these wireless tower links, we are dealing with the same limited-lane highway problem; it's just on a grander scale and is more predictable. Since the highway feeds a single point (referred to as point-to-point), the expected throughput can be forecast. Most of these links in the MCSNet network already use the new OFDM technology to allow for faster rates, but they can still get saturated and affect performance. On any given weekday, it's likely that one part or another of our network is being upgraded somewhere; it's never ending. After the tower feed, the traffic reaches the fibre that serves the region (some towers are fed directly by fibre), which offers an incredible amount of bandwidth. This fibre has traditionally been the Alberta SuperNet, which is a good example of how even fibre can slow down if not properly maintained. The fibres converge at Edmonton, where they connect to the rest of the Internet.