Extra Credits: Spectrum Crunch

This week, we discuss a major bandwidth problem we're about to run into.

Show Notes:

Would you like James to come speak at your school or organization?

For info, contact us at: kate@extra-credits.net

Recent Comments:

  • Just wondering if anyone has any recent news regarding the spectrum crunch?
    According to the video we are supposed to reach the spectrum wall in less than half a year from now.
    It might be nice to go over this topic again.

  • I posted this on the video as well, but I want to let everyone have this bit of info about a potential solution to this problem, if it can just be implemented. Check this guy out to see what I mean; it will only take about 10 minutes of your time:

    http://www.ted.com/talks/harald_haas_wi ...


  • Sorry in advance for the wall of text. And no, there is no TLDR...

    Steve Perlman, the creator of OnLive and QuickTime, among other things, has, I believe, an answer to this problem: Distributed-Input-Distributed-Output (DIDO) wireless technology.


    That is quite possibly the worst white paper I have ever read. It literally explained nothing in any greater detail than a grade school science presentation...

    Anyway, I think I will take a stab at trying to parse some of the mechanics from that... thing. It seems to be a combination of some of the ideas found in the ATM/Frame-Relay Layer 2 protocol and some of the waveform modulation ideas found in AESA radar systems and 3D MRI systems.

    Quick primer on OSI model protocols: http://en.wikipedia.org/wiki/Osi_model

    With ATM/Frame-Relay (they are different protocols, but ATM is the basis for Frame-Relay, and hence they share quite a few attributes), operation is "asynchronous," meaning there is no dedicated connection. Rather, all subscribers in a given area share the bandwidth of the copper/fiber serial channel and are allotted a certain maximum usage of the available bandwidth at a given time. However, the bandwidth is not statically assigned; instead, each DTE subscriber is allotted a certain time slot on the channel, usually no more than several hundred nanoseconds. If the time slot goes unused, it is forfeited to the next subscriber in line, and so on. When sped up to real time, this creates a seamless, multi-user/single-connection interface.
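The slot-forfeiting behavior described above can be sketched as a round-robin scheduler that skips idle subscribers. This is a toy model with made-up queue sizes, not ATM's actual cell format:

```python
def schedule_slots(pending, num_slots):
    """Round-robin over subscribers; an idle subscriber's slot is
    immediately forfeited to the next subscriber with queued data."""
    pending = dict(pending)            # frames each subscriber has queued
    names = list(pending)
    order = []
    i = 0                              # whose turn it is
    for _ in range(num_slots):
        for step in range(len(names)):
            name = names[(i + step) % len(names)]
            if pending[name] > 0:      # this subscriber uses the slot
                pending[name] -= 1
                order.append(name)
                i = (names.index(name) + 1) % len(names)
                break
        else:
            order.append(None)         # nobody had data; channel idle
    return order

# B is idle, so its turns pass straight to A and C.
print(schedule_slots({"A": 3, "B": 0, "C": 2}, 6))
# → ['A', 'C', 'A', 'C', 'A', None]
```

Slot 2 would have been B's turn, but B has nothing queued, so C takes it; once everyone's queue is empty the channel simply idles.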

    Now, for the actual broadcast of the radio signal, it seems to use back-end control software and hardware to pre-compute destructive and constructive interference patterns, creating unique broad-spectrum waveforms that carry the data back and forth to the APs. This also gives the APs the information they need to create their own broad-spectrum waveforms, so that each piece of client hardware gets its unique data over the same chunk of wireless spectrum. Each piece of client hardware then gets nanosecond-level leases on its chunk of spectrum, so as not to waste the processing power needed to calculate an open, unused connection on the waveform...
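The pre-computed interference idea boils down to superposition: choose transmit phases so signals reinforce where you want them and cancel where you don't. A toy one-dimensional illustration of that principle (not DIDO itself):

```python
import math

def peak_amplitude(phase_offset, samples=1000):
    """Peak of sin(t) + sin(t + phase_offset) over one period."""
    return max(abs(math.sin(t) + math.sin(t + phase_offset))
               for t in (2 * math.pi * k / samples for k in range(samples)))

constructive = peak_amplitude(0.0)       # in phase: amplitudes add (~2.0)
destructive = peak_amplitude(math.pi)    # out of phase: they cancel (~0.0)
print(round(constructive, 3), round(destructive, 3))
```

Two transmitters driven in phase double the amplitude at a receiver; driven half a cycle apart, they null it out. Precomputing those phases per client location is (very roughly) the trick being claimed.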

    Very convoluted...

    Although, why you wouldn't just use an EtherChannel connection between the control hardware and the APs like every other wireless network topology, I don't know... It sure would eliminate a ton of latency and processing overhead...

    Now, I can definitely see how this can be used to connect 10 users at full speed, like they are claiming. But the processing power needed to calculate the waveform for 100 users would be staggering, at least exponentially greater than that needed for 10... far greater than current AP control hardware can muster...

    "I thought I'd read something earlier in the thread about there being harmful bands of radiation. But I assume they're taking that into account."

    Not really; as long as it's at a low enough power, microwave radiation, which is non-ionizing, will just bounce off of your skin. For short-range, pico-cell communication, you would only need a few watts of broadcast power to achieve decent range and throughput. And for long-range broadcast... it would essentially be the same radiation and power levels that we are already exposed to on a regular basis.

    My issue... is how the hell they are going to get microwaves to go through walls and objects on a pico-cell level?

    "I never realized dropped packets were generating that much junk traffic. Given that, though, I'm kind of surprised it took them so long to design a solution."

    Oh hell yeah! Even with wired WAN connections, a connection quality of 80-90 percent is considered good, which means 10-20 percent packet loss/corruption, PLUS Layer 4 TCP resend requests for missing packets, PLUS the 10-20 percent loss for the resend requests, etc., etc. And this doesn't even include ICMP Source Quench requests for link over-saturation and the resulting Sliding Window tug-of-war. And this is just for seemingly reliable physical links!

    With wireless, 50 percent is a more realistic number, and that is not even factoring in that a good 2.5 percent of an 802.11 MTU is nothing more than Layer 1 and 2 header data, like local network control information (to prevent two devices from communicating at the same time) and error-correction protocols. And that doesn't even include the Layer 3 and Layer 4 control protocols that get stacked on top of that, and that a good portion of a given payload may just be padding to get the payload up to a total of 1500 bytes.
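Putting those two overheads together, using the figures above as assumed inputs rather than measurements: each lost frame triggers a resend that can itself be lost, so the expected number of transmissions per delivered frame is 1/(1 - loss rate), and only the non-header share of each frame is payload.

```python
def goodput_fraction(loss_rate, header_share):
    """Fraction of raw link capacity that ends up as delivered payload."""
    expected_sends = 1 / (1 - loss_rate)   # geometric series of resends
    return (1 - header_share) / expected_sends

# Wired link: ~15% loss, 2.5% of each frame is header -> ~83% goodput.
wired = goodput_fraction(0.15, 0.025)
# Wireless link: 50% loss per the estimate above -> under half the
# raw capacity survives as payload.
wireless = goodput_fraction(0.50, 0.025)
print(wired, wireless)
```

This is a simplification (it ignores the loss of ACKs themselves and the sliding-window backoff mentioned above), but it shows how loss and header overhead multiply rather than add.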


    Ultimately, I think the cure for this problem lies in infrastructure expansion, both physical and wireless. After all, wireless is useless without a physical route to the backbone connection.

    With wireless, the easiest method to decrease spectrum saturation, in terms of technological development, would be to decrease the size of cellular broadcast domains.

    Instead of having one broadcast domain spanning 10 km, you create 10 broadcast domains spanning 1 km each, but all assigned the same chunk of spectrum so as not to divide up the bandwidth. Then, just have them overlap and communicate with each other through control hardware to keep them from assigning bits of spectrum to users in the border areas that would cause interference.

    So, instead of having 1000 users all sharing 100 Mb/s of bandwidth, you have 100 users sharing 100 Mb/s. The problem is that this is quite a bit less cost-efficient. But, given current tech, it is totally doable.
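The arithmetic in that last step, as a sketch (idealized: the same spectrum is reused in every cell, and border interference is assumed to be handled by the control hardware):

```python
def per_user_mbps(total_mbps, users, cells=1):
    """Per-user share when the same spectrum is reused in every cell."""
    users_per_cell = users / cells
    return total_mbps / users_per_cell

print(per_user_mbps(100, 1000))            # one 10 km cell: 0.1 Mb/s each
print(per_user_mbps(100, 1000, cells=10))  # ten 1 km cells: 1.0 Mb/s each
```

Splitting one macro-cell into ten smaller cells that reuse the same spectrum multiplies per-user bandwidth by the number of cells, which is exactly the 10x jump described above.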

    End wall-o-text

  • Norway is taking a big step by killing off FM radio in two years.

  • FM radio is 20-30MHz of spectrum, and not even the juicy portion of it. When we are talking about people not being able to fit the traffic in 5GHz, what are these 30MHz going to do?

    That said, going over to digital radio is a good idea. Way better channel selection. More music/talk. Less advertisement.

    Everybody wins.
