Tangentially related: http://www.digitalspy.co.uk/tech/news/a ... posal.html (Brief synopsis: the analog switchover in the UK has freed up the 600 and 800 MHz frequency bands. At the moment the 800 band is being auctioned off to 4G providers, while the 600 band is temporarily being used to provide up to 10 new HD over-the-air channels.)
I never realized dropped packets were generating that much junk traffic. Given that, though, I'm kind of surprised it took them so long to design a solution. CD players have been using similar encoding to read past scratches and dirt on the disc surface for, well, as long as CD players have been around. And while error correction really is a complex field of study, it's been around long enough for people to just apply the results to a problem.
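To make the CD comparison concrete, here's a toy Hamming(7,4) encoder/decoder in Python. It's the same family of forward-error-correction ideas (add redundancy so the receiver can repair a flipped bit without a resend); CDs actually use the more powerful Reed-Solomon codes, so treat this as an illustrative sketch, not what any real player does:

```python
def encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # re-check positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # re-check positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # re-check positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else error position
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1         # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = encode(data)
word[5] ^= 1                  # simulate one corrupted bit in transit
assert decode(word) == data   # receiver repairs it, no resend needed
```

Same principle as reading over a scratch: the receiver fixes the damage locally instead of asking for the data again, which is exactly the junk-resend traffic being avoided.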
Just wondering if anyone has any recent news regarding the spectrum crunch?
According to the video we are supposed to reach the spectrum wall in less than half a year from now.
It might be nice to go over this topic again.
I posted this on the video as well, but I want to let everyone have this bit of info about a potential solution to this problem, if it can just be implemented. Check this guy out to see what I mean; it will only take about 10 minutes of your time.
That is quite possibly the worst white paper I have ever read. It literally explained nothing in any greater detail than a grade school science presentation...
Anyway, I think I will take a stab at trying to parse some of the mechanics from that... thing. It seems to be a combination of some of the ideas found in the ATM/Frame-Relay Layer 2 protocols and some of the waveform modulation ideas found in AESA radar systems and 3D MRI systems.
With ATM/Frame-Relay (they are distinct but closely related protocols, and share quite a few attributes), operation is "asynchronous," meaning there is no dedicated connection. Rather, all subscribers in a given area share the bandwidth of the copper/fiber serial channel and are allotted a certain maximum usage of the available bandwidth at a given time. The bandwidth is not statically assigned, though; instead, each DTE subscriber is allotted a certain time slot on the channel, usually no more than several hundred nanoseconds. If the time slot goes unused, it is forfeited to the next subscriber in line, and so on and so forth. Sped up to real time, this creates a seamless, multi-user, single-connection interface.
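The slot-forfeiture idea is easy to sketch. Here's a minimal round-robin scheduler in Python where an idle subscriber's slot immediately passes to the next one in line; the names and queue structure are mine for illustration, not from any real ATM gear:

```python
from collections import deque

def run_slots(subscribers, num_slots):
    """subscribers: dict of name -> deque of pending frames.
    Returns what was transmitted in each time slot (None if all idle)."""
    order = deque(subscribers)          # round-robin order of names
    log = []
    for _ in range(num_slots):
        sent = None
        for _ in range(len(order)):     # offer the slot to each sub once
            name = order[0]
            order.rotate(-1)            # next slot starts with next sub
            if subscribers[name]:       # has traffic: the slot is used
                sent = (name, subscribers[name].popleft())
                break                   # slot consumed; else it's forfeited
        log.append(sent)
    return log

subs = {"A": deque(["a1", "a2"]), "B": deque([]), "C": deque(["c1"])}
print(run_slots(subs, 4))
# B is idle, so its slots fall through to the next subscriber in line;
# the last slot goes unused because everyone's queue is empty.
```

Run fast enough, that "offer, forfeit, move on" loop is what looks like everyone having their own dedicated channel.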
Now, for the actual broadcast of the radio signal, it seems to use back-end control software and hardware to pre-compute destructive and constructive interference patterns, creating unique broad-spectrum waveforms that carry the data back and forth to the APs. Those same computations give the APs the information they need to create their own broad-spectrum waveforms, so that each piece of client hardware gets its own unique data over the same chunk of wireless spectrum. Each client is then given nanosecond-level leases to its chunk of spectrum, so as not to waste the processing power needed to calculate an open, unused connection on the waveform...
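To show what "pre-computing interference patterns" means in the simplest case, here's a toy Python example: two transmitters emitting the same carrier can choose a relative phase so their waves arrive in step (constructively) at a chosen receiver. The geometry and numbers are made up, and real systems steer many antennas at once:

```python
import math

C = 3e8            # speed of light, m/s
FREQ = 2.4e9       # a Wi-Fi-ish carrier frequency, Hz
WAVELEN = C / FREQ

def amplitude_at(rx, txs, phases):
    """Sum of unit-amplitude waves from each transmitter at point rx."""
    total = 0.0
    for (x, y), phase in zip(txs, phases):
        dist = math.hypot(rx[0] - x, rx[1] - y)
        total += math.cos(2 * math.pi * dist / WAVELEN + phase)
    return total

txs = [(0.0, 0.0), (0.0, WAVELEN / 2)]   # two antennas half a wavelength apart
rx_a = (10.0, 0.0)

# Pre-compute the second antenna's phase so both waves arrive at rx_a in step:
d0 = math.hypot(rx_a[0] - txs[0][0], rx_a[1] - txs[0][1])
d1 = math.hypot(rx_a[0] - txs[1][0], rx_a[1] - txs[1][1])
phase = 2 * math.pi * (d0 - d1) / WAVELEN
peak = amplitude_at(rx_a, txs, [0.0, phase])
print(round(peak, 3))   # close to 2.0: fully constructive at rx_a
```

Scaling that phase computation up to unique waveforms for many simultaneous clients, recomputed on nanosecond leases, is where the claimed processing load comes from.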
Although, why you wouldn't just use an EtherChannel connection between the control hardware and the APs like every other wireless network topology, I don't know... It sure would eliminate a ton of latency and processing overhead...
Now, I can definitely see how this could be used to connect 10 users at full speed, like they are claiming. But the processing power needed to calculate the waveform for 100 users would be staggering, at least exponentially greater than that needed for 10... far greater than current AP control hardware can muster...
I thought I'd read something earlier in the thread about there being harmful bands of radiation. But I assume they're taking that into account.
Not really. As long as it's low enough power, microwave radiation, which is non-ionizing electromagnetic radiation (not particle radiation like beta), will mostly just bounce off of your skin. For short-range, pico-cell communication, you would only need a few watts of broadcast power to achieve decent range and throughput. And for long-range broadcast... it would essentially be the same radiation and power levels that we are already exposed to on a regular basis.
My issue... is how the hell they are going to get microwaves to go through walls and objects on a pico-cell level?
I never realized dropped packets were generating that much junk traffic. Given that, though, I'm kind of surprised it took them so long to design a solution
Oh hell yeah! Even with wired WAN connections, a connection quality of 80-90 percent is considered good, which means 10-20 percent packet loss/corruption, PLUS Layer 4 TCP resend requests for missing packets, PLUS the 10-20 percent loss for the resend requests, etc., etc. And this doesn't even include ICMP Source Quench requests for link over-saturation and the resulting Sliding Window tug-of-war. And this is just for seemingly reliable physical links!
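The compounding effect of resends is easy to put numbers on. Assuming independent per-packet loss with probability p (a simplification; real loss is bursty), the expected number of transmissions to deliver one packet is 1/(1-p), since every attempt, including each resend, faces the same loss. A quick Python back-of-the-envelope:

```python
def expected_transmissions(loss_rate):
    """Expected sends per delivered packet with independent loss."""
    return 1.0 / (1.0 - loss_rate)

def overhead_percent(loss_rate):
    """Extra traffic beyond the ideal one send per packet, in percent."""
    return (expected_transmissions(loss_rate) - 1.0) * 100.0

for p in (0.10, 0.20):
    print(f"{p:.0%} loss -> {overhead_percent(p):.1f}% extra traffic")
# 10% loss -> about 11.1% extra; 20% loss -> 25.0% extra, and that's
# before counting ACKs, ICMP traffic, and sliding-window back-off.
```

So a "good" 80 percent link is already paying a quarter again in raw retransmission traffic, which lines up with the junk-traffic figures being discussed.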