Blog

The problem of selling the NBN

Posted on June 14th, 2013

I came across a picture on Twitter that I think shows one of the biggest problems with how the NBN is being promoted:

NBN Poster

In my opinion, this is a terrible slogan. For the most part, average people don’t actually care about broadband speed. Anyone interested in technology wants faster than 100Mb/s internet at home today (I know I do), but the average person can’t see past the needs of today and doesn’t understand that these speeds will be required in the next ten years. So advertising faster internet isn’t enough.

But I think it’s worse than this – people know that the NBN is quite expensive, and if they think it’s only about faster internet, then they will think they might as well vote for Malcolm Turnbull’s ridiculous, backward-thinking NBN – cheaper in the short term, but more expensive in the long run and a complete waste of money. If the NBN is just about speed, then it appears to be indulgent (I know it’s not, but that’s how it seems).

We need to be telling these people why they need the NBN. And I think the biggest reason is that Telstra’s copper needs complete replacement by 2018. There is simply no way that Malcolm Turnbull’s fraudband, or the option of doing nothing, will work. We need to replace the copper network, and we need to do it properly with optical fibre, since keeping the copper limping along with FTTN will cost billions of dollars a year in maintenance. Your grandmother probably doesn’t care about gigabit internet, but she does care that her phone line will stop working in a few years unless it’s replaced (and it only makes sense to replace it with fibre).

We need to be focusing on the fact that optical fibre is the most reliable communications medium. There are no dropouts like ADSL, and no blackspots like wireless. It doesn’t slow to a crawl at peak times like HFC. Fibre to the node can’t fix that – only fibre to the premises can. We need people to know that the NBN (unlike Fraudband) isn’t just the internet – it will replace over-the-air TV with quality many times better than what we have now (Ultra High Definition TVs cost $1300 in the US already – we’ll have them soon in Australia, and FTTN can’t support that!), it will carry your phone line, it will allow video conferencing and point-to-point links for business, and applications that don’t exist yet.

There is so much that is better about the real, optical fibre NBN, but if we only talk about speed, many people will just think that Malcolm’s plan can do it almost as well, and cheaper. That’s a problem.

IPv6 and NAT

Posted on April 11th, 2013

It’s interesting that one of the biggest reasons people seem to have for not wanting to move to IPv6 is the lack of NAT – people just seem flabbergasted when they hear that there is no trace of NAT in IPv6. I can’t say NAT is something I ever liked in the slightest, having played around with running my own servers over a home internet connection in my youth and getting frustrated with all the limitations it brought. But it seems that some people really do like it.

The biggest problem though is that the main thing people like about NAT is based on a dangerous misconception. Basically, the idea is that NAT adds some level of security. This is completely false. Let me emphasise this:

NAT does not provide any security
Not any security at all. None. If you think it does, then you probably shouldn’t be involved in running anything bigger than a home network.

What is NAT?

As I said before, the biggest misconception about NAT is that it has something to do with security. This is not at all the case. So what actually is it for?

NAT is simply a hack that was designed to stave off IP address space exhaustion by hiding a network of computers behind one public IP address, giving them private IPs, and doing bodgy address translation at the router. A side effect is that the computers behind it aren’t reachable on any port except the ones that are explicitly forwarded. This breaks reachability if you want to run, say, two web servers on port 80 – only one can be reached from the outside world. From a security standpoint, it basically gives you a very simple, badly configured firewall. You still need a real firewall for proper security with NAT, and almost any firewall gives better security than NAT. I’ve read people talking about NAT being another ‘layer’ that increases security, but it just isn’t – the firewall does exactly the same thing and more, rendering any security from NAT redundant.

All home routers now contain a stateful firewall built in, and all operating systems have their own firewalls as well. There is absolutely no reason why NAT would need to be relied on for security in any modern network.

NAT breaks the Internet

A fundamental part of how the Internet is supposed to work assumes end-to-end routability – that is, you should be able to route packets from any IP to any other IP. NAT breaks this by hiding whole networks of computers behind single public addresses.

The other part is that any computer should have the option of hosting services for others to consume – but with NAT, you are severely limited in this. With port forwarding you can only have one machine accessible on a given port, so you have to mess around with proxies and other extra hardware for no good reason. NAT just adds cost, bottlenecks, extra points of failure and unnecessary complexity.

Routability vs. Reachability

Just because an address is routable does not necessarily mean it is always reachable though. People seem to have the idea that as soon as NAT is gone and they have a public IP, they are completely open and at the mercy of the internet. This is what the firewall is for – all ports should be blocked by default, and only opened to an endpoint if that computer is hosting a service on that port. Given that there is a firewall both on the computer and on the router, adding NAT does nothing extra.

NAT is a hack that only breaks things, and I can’t wait to get rid of it. The fact that IPv6 totally does away with it is something that I’m very happy about, and one of the reasons why we need to drastically speed up our deployment of the next generation internet protocol.

The Coalition’s NBN policy failure

Posted on April 9th, 2013

Well, completely as I expected, the Coalition’s broadband plan that they just released is an utterly shameful farce of a policy, and as expected, they’re using shameless lies to try and sell it.

It turns out that the $90 billion lie the Coalition were spreading was there to make their plan look cheap by comparison – because it’s a lot more expensive than they had been saying it would be.

And they are spouting that it will be “just as good” or “better” than fibre to the premises (FttP), which is completely, objectively false.

It still uses outdated technology for most people – 70% of the population are going to be stuck on last-decade VDSL technology for the foreseeable future (this is the technology that the former CTO of BT called “one of the biggest mistakes humanity has made”, given the disaster of trying to use it in the UK), and it will cost billions of dollars extra to rip out the nodes and expensive VDSL DSLAMs and replace them with the fibre network we should have built in the first place. But hey, it’s a bit less on the upcoming budget, so who cares about the future? That’s a couple of terms away, by which time Malcolm Turnbull will have retired…

And as usual, since the plan suits the Murdoch media, who will lavish praise on it while ignoring the actual technology, you know lots of people are going to be misinformed into thinking this is a good idea.

A primer on how to understand the Coalition’s Broadband Policy from a technical perspective

Posted on April 8th, 2013

So today two things came to light from the Coalition – the first is that they’re still trying to spread fear, uncertainty and doubt about Labor’s NBN plan by spreading completely evidence-free falsehoods (saying the NBN will cost $90 billion). The second is that apparently their broadband policy will be released very soon – some sources saying even tomorrow. If they actually do it (they have been promising it for quite a while now), this is great news, because it will be far easier to objectively show how it is a failure of a policy and call out the Coalition’s lies.

Now, for anyone who isn’t up to speed with the technology (this is probably the vast majority of Australians), I decided to put together a quick guide on how to evaluate the technology in the policy when it comes out:

If it uses Hybrid Fibre Coaxial (HFC), it’s not worth considering

The Coalition have said that, in order to cut costs, they won’t be rolling out any new infrastructure to places where cable internet is already available – or, at other times, that those areas would be upgraded last, if at all. So for many people in Australia, the NBN under the Coalition would be no different from what they get now for a long time. It would also push more people onto the HFC network, which was an afterthought bolted onto the Foxtel cable network. This is a big problem.

On paper, cable internet is very fast. But the problem is that unlike most other internet technologies, you’re sharing this speed with up to hundreds of people in your neighbourhood. This means that performance degrades rapidly the more people you have connected, and if anyone who lives near you is a big downloader, your internet is going to suffer. For many parts of Australia, HFC is already woefully insufficient. Internet bandwidth requirements have been steadily rising since the internet began, and are not going to stop for the foreseeable future. So if HFC isn’t good enough now, why do the Coalition think it can be part of the NBN?

Labor’s current NBN plan is to replace this coaxial network with fibre to the home. Fibre that does not corrode like copper coaxial cable. Fibre that can be upgraded to 10 times faster in the next few years just by replacing some of the equipment at either end (a tiny fraction of the cost of putting in the fibres at the start). Network providers are even trialling 40 and 100 times faster connections over the very same fibre – and it will be able to go even faster in the future. HFC, though, is a complete dead end.

Inclusion of HFC as one of the technologies to make up the NBN would prove that the Coalition’s plan is worthless.

If it relies on the existing copper network (e.g. FttN), it’s a failure

There are many, many reasons why Fibre to the Node (FttN) is not sufficient for a next generation broadband network. VDSL (the carrier technology for the ‘last mile’ between the node and the home) will not provide speeds that are a great deal better than current ADSL2+ – it will never deliver more than about 0.2 Gb/s, whereas fibre will be upgradable to hundreds of gigabits per second. FttN also requires large, fridge-sized cabinets on every block, each needing big backup batteries and drawing far more power than a passive optical network (possibly even pushing Australia’s power consumption to a point where new power stations would have to be built).

The biggest problem, though, is that it’s not going to be a great deal cheaper than the current NBN. Perhaps this is why the Coalition are spreading the lie about the NBN possibly costing ninety billion dollars – to make their plan look like a bargain. Worse, when in five or six years it is realised that FttN is insufficient, it could cost more than $21 billion to upgrade from the Coalition’s plan to FttP.

This technology is not future proof. Unlike fibre, it’ll be obsolete before it’s finished. It will not be much cheaper, and it will make it harder and more expensive to get to the FttP network we need in the future.

If you’re interested in the specifics, ABC News’s technology section has more details about the problems with FttN, and about why a fibre network is far more effective and future-proof.

Cheaper and Faster mean nothing if it’s not going to serve its purpose

The main thrust of the Coalition’s plan from the start was that it would be both cheaper and faster. What they would actually deliver cheaper and faster, though, can only be described as a complete mess of a network. There’s no point installing infrastructure if it’s not going to be useful and do what it’s supposed to do. The Coalition’s plan, in short, is to build a white elephant.

Fibre to the premises is the only technology that can provide the speeds required by business and homes into the future, and the only technology that provides a cost-effective upgrade path. This is too important a project to scrap just to save a few billion dollars, when the economic benefits will be many times that.

Observations on browsing the web with IPv6

Posted on April 5th, 2013

Over the last few years I’ve been getting more interested in the lower-level networking technologies and protocols that power the internet, like routing, BGP, IP, and so on. It’s really quite fascinating to go further in-depth and play around with this stuff. So, the other day I bought a bit of really cheap network gear from eBay, and started messing around with an IPv6 tunnel using Hurricane Electric’s free tunnelbroker service.

Currently, my internet provider (TPG) doesn’t offer native IPv6 connectivity, but there is a transition technology that allows you to create a tunnel to a network that does have IPv6 and route your traffic through that. I couldn’t get the tunnel to work with my current modem/router (the NAT seemed to block protocol 41), so I picked up a TP-Link TD-8817 ADSL2+ modem for $10 and bridged that using PPPoE to a Ubuntu virtual machine. I set up this VM to forward IPv6 traffic, but for the sake of experimentation, not IPv4. I also installed radvd to advertise the IPv6 prefix that HE assigned me to the machines on my network so they could configure themselves with stateless autoconfiguration. It works like a charm – all in all, it was pretty simple to set up!

Now, having a router that only forwards IPv6 traffic is quite an unusual setup, and will continue to be for many years. As more and more providers offer IPv6, pretty much everyone will move to a dual-stack configuration – that is, machines will get both a private IPv4 address shared over NAT (as is currently the norm) and an IPv6 address. Any IPv6-ready sites will work over that protocol, and any that are lagging behind will go over the old IPv4.

Running just with IPv6 though, I’m temporarily free from the tyranny of network address sharing! But while this is awesome, there are a few downsides too.

One thing I eventually noticed was that my own sites simply did not work – but I quickly fixed this.

All in all, while a lot of sites on the internet do work when browsing on an IPv6 only network (like Google, Facebook, Wikipedia, etc), there are a surprising number of big-name sites that don’t work at all. I would have expected that ISPs of all people would have this sorted out, but unbelievably Telstra and Optus’s web sites don’t work at all! TPG’s does, but they still have some big problems with their site – going to http://tpg.com.au doesn’t work with IPv6 – you have to go to http://www.tpg.com.au. With IPv4, the redirect works properly, but they’ve messed it up for IPv6. Also, when you try to go to their page with HTTPS, it just sends regular HTTP traffic instead of doing a proper TLS handshake.

Some other sites I would have thought would work are news sites like the ABC, Hacker News, The Verge and Ars Technica. I guess these will come online eventually, but since most of them are technology-focused, I would have expected them to be on the ball with the change.
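Incidentally, you don’t even need an IPv6 connection to check whether a site is ready – a forward lookup restricted to AAAA records tells you straight away. Here’s a rough little C++ sketch of the kind of check I mean (the default hostname is just an example – pass whichever site you’re curious about):

    // Rough sketch: ask only for IPv6 addresses, so a site with no AAAA
    // record fails the lookup. The default hostname is just an example.
    #include <cstdio>
    #include <cstring>
    #include <netdb.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(int argc, char* argv[])
    {
        const char* host = (argc > 1) ? argv[1] : "www.tpg.com.au";

        addrinfo hints;
        std::memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET6;       // IPv6 only - ignore A records
        hints.ai_socktype = SOCK_STREAM;

        addrinfo* results = 0;
        int err = getaddrinfo(host, "80", &hints, &results);
        if (err != 0)
        {
            std::printf("%s: no AAAA record (%s)\n", host, gai_strerror(err));
            return 1;
        }

        for (addrinfo* p = results; p != 0; p = p->ai_next)
        {
            char text[INET6_ADDRSTRLEN];
            sockaddr_in6* sa = reinterpret_cast<sockaddr_in6*>(p->ai_addr);
            inet_ntop(AF_INET6, &sa->sin6_addr, text, sizeof(text));
            std::printf("%s has IPv6 address %s\n", host, text);
        }

        freeaddrinfo(results);
        return 0;
    }

If getaddrinfo fails with the family pinned to AF_INET6, the site publishes no AAAA record and won’t be reachable from an IPv6-only network like mine.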

The next experiment I’ll do will be setting up a DNS server on a VM to delegate reverse DNS to. Then I can allocate addresses to some of my server VMs and have the forward and reverse DNS agree.
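Once that delegation is working, checking that an address maps back to the right name is just a reverse (PTR) lookup, which is easy enough to do from C++ as well. A rough sketch (the address here is from the 2001:db8::/32 documentation prefix, standing in for one of my real addresses):

    // Rough sketch of a reverse (PTR) lookup. 2001:db8::1 is only a
    // documentation-prefix placeholder, not one of my real addresses.
    #include <cstdio>
    #include <cstring>
    #include <netdb.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(int argc, char* argv[])
    {
        const char* address = (argc > 1) ? argv[1] : "2001:db8::1";

        sockaddr_in6 sa;
        std::memset(&sa, 0, sizeof(sa));
        sa.sin6_family = AF_INET6;
        if (inet_pton(AF_INET6, address, &sa.sin6_addr) != 1)
        {
            std::fprintf(stderr, "Not a valid IPv6 address: %s\n", address);
            return 1;
        }

        char host[1025];   // big enough for any hostname (NI_MAXHOST)
        int err = getnameinfo(reinterpret_cast<sockaddr*>(&sa), sizeof(sa),
                              host, sizeof(host), 0, 0, NI_NAMEREQD);
        if (err != 0)
        {
            std::printf("%s: no PTR record (%s)\n", address, gai_strerror(err));
            return 1;
        }
        std::printf("%s resolves back to %s\n", address, host);
        return 0;
    }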

Sites now available via IPv6

Posted on April 3rd, 2013

I’ve been playing around with IPv6 recently, and noticed that none of my sites were reachable over it. It turns out that adding IPv6 addresses to all my domains and subdomains was free and really easy with the provider I’m using, so now all my sites are ready for the future of the internet!

There shouldn’t be any change in functionality for people still using IPv4.

Helicopter

Posted on January 29th, 2012

Doing some tests with our new RED Scarlet-X camera rig, I shot this little clip of my remote-control helicopter. Turns out our white infinity isn’t as infinite as we thought!

Final Cut X – Exactly as I thought

Posted on July 3rd, 2011

Apple caused a fair bit of controversy when they released their new Final Cut X non-linear editing software a few weeks ago – it had very little third-party support, and no XML, AAF or OMF exports (those are the files for taking your edit out of the editing software and into other tools like Pro Tools or Logic for sound, and Blackmagic Resolve or similar for grading and finishing). There was also limited support for tape and for several video file formats, like RED’s R3D files and Sony XDCAM files, and no multi-cam editing.

There is a whole lot about Final Cut X that is really good (like its new 64-bit architecture built on Core Video and other modern APIs instead of the old, crufty QuickTime, and much more), but for some reason everybody writing reviews on the internet seemed to have decided that Final Cut X was never going to be updated, and that it was as fully featured as it was ever going to be. This was the end. Apple wasn’t going to add any more features. Apple hates the pros! They’ve abandoned us!

That idea, though, is completely wrong, and just plain silly. Apple has released a set of frequently asked questions about FCX that addresses almost all of the concerns I’ve seen, and it says pretty much exactly what I had been saying about it. For instance: multicam support isn’t there yet, but it’s coming in an update. AAF, XML and OMF export will be available via an API for third-party developers soon. Monitoring through hardware like AJA’s KONA 3 card will be possible when drivers are updated. More import formats are coming.

There are some people saying that Apple should have communicated better from the start, but I thought it was all fairly obvious…

Vodafone – possibly the worst mobile carrier in Australia

Posted on February 15th, 2011

I can’t say this enough – Vodafone is simply the worst mobile phone carrier I have ever experienced. I have had an iPhone 4 with Vodafone for about six months now, and while the iPhone 4 is an amazing phone, being on Vodafone has been nothing but a painful experience. I’ve had constant problems with the service itself, their billing, their customer support (where “within five working days” seems to mean “in ten working days”), and outages of their MyVodafone website (where you check your data usage, although it seems to be several days out of date when you check the bill).

But apart from the customer service and all that, the biggest problem is that the 3G is just terrible. Unreliable is probably being too kind – most of the time it just downright doesn’t work. I’m in Brisbane, and on most towers, 3G requests will frequently just time out. You often have to repeatedly stop and refresh the page loading, sometimes for several minutes before anything will load. Sometimes, this will not work at all, and you have to either reboot the phone or switch to aeroplane mode and back, and then try repeatedly refreshing to get it to work. An example of this was when I was in the middle of Brisbane’s central business district one night (if your carrier has good coverage somewhere, you would expect it to be in the CBD!), trying to transfer some money between my accounts using the Commonwealth Bank’s NetBank online banking site. It took about ten minutes before I could get any part of the login window to appear, and in the end it took about twenty minutes to do a transaction that would have taken two minutes on WiFi.

In Sydney, the 3G was somewhat reliable, but still slow. But it’s certainly nowhere near the reliability that my brother’s iPhone on Optus, or other relatives’ phones on Optus and Telstra, enjoy.

I am looking through the contract to see if there’s any way to get out of it without paying exit fees – certainly common sense would suggest that by not providing the level of service I’m paying them for, they are in breach of the contract, but given my experience with their customer service, I don’t know if they really care.

If I am unable to leave, there’s no way that I am staying with Vodafone Australia after my contract expires, and I expect that I will never purchase any product or service from them in the future. I only went with Vodafone because it was slightly cheaper than Optus, but given my experience and the far superior service my brother (who is on Optus) gets, Vodafone was just not worth the saving at all.

Web Development in C++

Posted on February 1st, 2011

If you have a look around the ‘net, there really aren’t many frameworks or libraries that are intended for developing web applications in good old compiled C++. I’m sure that this does not surprise most people – these days all the cool kids are using fancy languages and frameworks like Ruby on Rails, Python, and so on. In the enterprise, it’s all Java (ugh…) and ASP.Net. But hardly anything is said about any compiled languages, and the thought of writing web apps in C++ draws looks as if you just said you were going to use Perl!

I, though, think that writing web applications in C++ really has a lot of benefits. First of all is the speed of compiled code – interpreted languages like Ruby, Python and PHP really are pretty rubbish compared to the performance of an application written in C++, and this is especially important when creating web applications. This is mainly about scalability – for a large site, the more requests you can handle per second on a single server, the fewer servers you need (of course, scaling the data store and using caching are also important). This is one thing that Facebook found out very quickly. It prompted them to write a compiler that converts PHP code into C++ and compiles it into a binary along with a built-in, threaded web server.

I’m sure the biggest problem most people have with using C++ for web development is that it is hard. This was one of Facebook’s concerns too – since PHP is easier to learn, they are able to hire more people to work on their code, and can write code faster. I find this quite concerning, though. One thing my experience in software development has taught me is that bad coders can write bad code in any language – and the easier the language, the more bad coders there tend to be. Now, of course I’m not trying to say that PHP is a bad language and that you shouldn’t be using it (in fact, I like it and use it for small projects and prototyping), but choosing it for Facebook’s reasons makes it more likely that they hire more substandard programmers than they would otherwise. And as for being faster to develop in – well, a good C++ web framework should help here, and to be honest, often the faster you code, the less care you take and the more bugs you write.

I’m writing my own framework to make development faster and easier, called Nexus. It runs behind a normal web server (my preference is the extremely fast Cherokee web server, but you can use Apache, nginx or lighttpd if you wish), and is based around the FastCGI++ library, which is excellent and provides a far superior C++ API to the other FastCGI libraries I’ve looked at.
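To give an idea of the level Nexus builds on top of, here’s roughly what a bare request handler looks like in plain fastcgi++ – this is along the lines of the library’s hello-world tutorial from memory, so check its documentation for the exact interface:

    // Roughly the fastcgi++ hello-world shape (from memory - see the
    // library's own tutorial for the exact API). A request class overrides
    // response() and writes to the out stream; the Manager dispatches requests.
    #include <fastcgi++/request.hpp>
    #include <fastcgi++/manager.hpp>

    class Hello: public Fastcgipp::Request<char>
    {
        bool response()
        {
            out << "Content-Type: text/html; charset=utf-8\r\n\r\n";
            out << "<html><body><h1>Hello from C++</h1></body></html>";
            return true;   // true means the request is finished
        }
    };

    int main()
    {
        // The web server (Cherokee, nginx, etc.) talks to this process over
        // FastCGI; the Manager loops, handing each request to a Hello instance.
        Fastcgipp::Manager<Hello> manager;
        manager.handler();
        return 0;
    }

Nexus’s job is to take care of everything above that level, so you’re not writing raw handlers like this for every page.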