Going Postal Social

I was fortunate/unfortunate (please delete as applicable) enough to receive an invite to Google+ yesterday.  I thought it would only be fair to give it a try (even though I eventually cancelled my account on its competitor), to see what it was like and whether it would be better than “the other social network”.

I quite like the concept of “circles” in G+ (and it’s something I’m sure “the other site” will be emulating fairly quickly) in that it appears possible to restrict certain postings to certain people – as long as there aren’t any bugs in the G+ code, I suppose.  One more thing that I like about G+ is that it seems a lot “simpler”.  As far as I can see, you can’t install apps into it, which for me is a plus because (from talking to my various social networking friends) apps are one of the big distractions.  As you might expect, it’s quite closely tied in with other Google services, and I can’t find a way to change my primary email address to my “proper” one (unlike my gmail one, which I hardly ever use).

So… first impressions are okay, but obviously time will tell (a) whether this becomes really popular and (b) what the privacy implications are.  And I still need to read the terms and conditions properly…

New Humax firmware

New Humax firmware recently out for my HDR-Fox T2 — and it’s got some fixes, but also some new features.  The big one for me is that it can now act as a DLNA server as well as a client, so I can watch my recorded programmes on my PC.  Which is quite cool, and I expect I’ll get bored of it soon and go back to putting myself in front of the telly as I usually do.  There is also a YouTube feature on the TV portal now.

You can get the new firmware from the Humax UK web site, or you can just wait for the OTA download to come round on the Freeview HD mux.

How (not to) upgrade your hard disks

This should have been easy.  But like everything “important” that I try to do, it ended up as a bit of a nightmare.  The problem was simple: 300GB disks too small, need more space.  The solution was also simple: Buy three shiny new 3TB disks and fit them, copy data over, job done.  Simple?  Of course not…

The first thing was, will my computer support them?  Some of you will know that Windows really, really hates drives bigger than 2TiB, and the only way to make them work is to (a) use 64-bit Windows Vista Service Pack 1 or later and (b) use a motherboard with UEFI instead of a BIOS, a compatible SATA disk controller, and a GPT partition table (rather than the old-fashioned MBR style), since MBR partition tables cannot address drives bigger than 2TiB.  I do not have a motherboard with UEFI, but thankfully neither do I run Windows on this particular server – it runs Linux (Debian 6.0.1 “Squeeze”, to be precise).  This is a plus, because it means that with GRUB 2 I do not have to worry about any such problems: it already supports GPT partition tables, 64-bit LBA, and everything else you need to actually boot from a 3TB disk.  Everything fine so far then!
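
For anyone checking their own kit, a couple of quick sanity checks along these lines will confirm what you’re dealing with (the device name here is just an example, not my actual layout):

    # How big is the disk, in bytes?  A "3TB" disk is roughly 3,000GB.
    blockdev --getsize64 /dev/sdb

    # Physical vs logical sector size: 4096/512 means a 4K "512e" drive.
    cat /sys/block/sdb/queue/physical_block_size
    cat /sys/block/sdb/queue/logical_block_size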

So, I went ahead and bought myself three hard disks – one Seagate, and two WD Caviar Green disks, one of which was to be the backup disk.  After suitably “de-green”ing the WD drives and turning the automatic head parking off (using the idle3ctl utility, available from SourceForge, which is a Linux version of Western Digital’s own wdidle3 utility), I got to work.  All four drives were fitted, and on each new disk I created an EFI System Partition (for future use, as I don’t have a UEFI motherboard yet), a BIOS Boot Partition to hold GRUB 2 (this is necessary when booting a GPT disk from a BIOS, since there is no ‘spare’ space after the partition table to embed the boot code, so I normally create a 1MiB partition for it), and the rest was a RAID partition to hold the LVM logical volumes.  All seems reasonable so far, taking care to align the partitions to MiB boundaries, since the Western Digital drives have 4K physical sectors (though the Seagate one claims not to).
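
For illustration, the preparation went something like the sketch below – the device name and partition sizes are examples rather than my exact layout:

    # Turn off the WD Green idle head-parking timer (idle3-tools from SourceForge):
    idle3ctl -d /dev/sdb

    # GPT label, with everything specified in MiB so it all stays 4K-aligned:
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart ESP fat32 1MiB 513MiB    # EFI System Partition (future use)
    parted -s /dev/sdb set 1 boot on
    parted -s /dev/sdb mkpart grub 513MiB 514MiB       # 1MiB BIOS Boot Partition for GRUB 2
    parted -s /dev/sdb set 2 bios_grub on
    parted -s /dev/sdb mkpart raid 514MiB 100%         # the rest for RAID + LVM
    parted -s /dev/sdb set 3 raid on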

So, backups done, I shut down the machine, pulled the power, and installed the new disks as SATA3 and SATA4 on the controller so that the existing Maxtor 300GB disks would boot up as normal, which they duly did.  Once the system was booted, it was time to configure the RAID-1 array on the new disks, and then to move the data across.  I’d been practising this the week before on a kvm virtual machine so that I’d know what to do if it went wrong.  Couldn’t go wrong twice, could it…?
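
The RAID and LVM side of it boils down to something like this (md device, partition and volume group names are invented for the example):

    # Mirror the big RAID partitions on the two new disks:
    mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdc3

    # Make the new array an LVM physical volume and add it to the volume group:
    pvcreate /dev/md1
    vgextend vg0 /dev/md1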

Well, yes it could.  The first “mistake” I made was to install the old system in such a way that the / (root) partition was on a logical volume.  Normally this wouldn’t matter, but it tends to matter when you’re trying to move it to another volume.  When I tried this before, it seemed to work, but this time something went wrong.  When I issued the fateful lvm pvmove command to move the logical volumes from one physical disk to another, everything stopped.  Oops.  I had started the process and then gone to bed – and when I got up the next morning it was still going and nothing had been printed on the screen (even though I had verbose on).  What should I do?  At worst, I could just restore the backups (even though it would take ages)…  I ended up pushing the reset button.  The machine then failed to boot, but it did make it into the initramfs (since it couldn’t find the RAID array).  This is the bit where I think you’re supposed to panic!

Thankfully, I managed to reassemble the RAID array from the initramfs and then ran lvm pvmove --abort to abort any move already in progress.  Personally I don’t think it had even got started, so I took the risk and ran the lvm pvmove command again from inside the initramfs (which is probably a good place to do it, since no filesystems are mounted at that point).  It started.  It printed percentages.  It was going.  And then I had to go to work…
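
For the curious, what I ran from the initramfs shell was roughly this (device names invented again):

    # Reassemble the arrays, abort the stuck move, then start it afresh:
    mdadm --assemble --scan
    lvm pvmove --abort
    lvm pvmove -v /dev/md0 /dev/md1    # old physical volume -> new one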

Got home to discover that it appeared to have worked.  All the LVs had been successfully moved, or so it said.  So the final thing to do was to get the system booted properly, by mounting the root volume and exiting the initramfs, and then (to cut a long story short) checking that /etc/mdadm/mdadm.conf had the right info in it, running update-initramfs -u once booted so that the new RAID-1 array would be detected, and installing GRUB 2 on the new disks.  And rebooted, and held my breath, and ….
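
The tidying-up amounts to something like the following sketch (again, device names are examples):

    # Record the new array, rebuild the initramfs, and install the bootloader:
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # then tidy the file by hand
    update-initramfs -u
    grub-install /dev/sdb
    grub-install /dev/sdc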

Phew!  It booted!  Thank goodness for that!  Then I had to shut the machine down again to fit the hotplug SATA caddy which the backup drive went into, and also the new SATA DVD writer.  Once that was done, the old disks were removed, and the new machine was booted.

There was one final challenge – some idiot had set up all his kvm virtual machines so that none of their partition tables were 4K-aligned.  So I ended up spending quite a few hours sorting that one out – and that’s another (long) story.

But I got it done, and now it’s working, and hopefully I won’t have to do that again for another 5 years… (by which time I expect we’ll all be buying 30TB drives for £100!)  And next time, I’ll be using the Debian Rescue disk to boot into before doing the lvm pvmove…

Google+ vs Facebook

There appears to be a lot of hype at the moment concerning the new Google+ service from, well, Google.  Is it really the “Facebook killer” that everyone’s been waiting for, or is the hype surrounding it just that?

Currently, Google+ is in limited beta so you (à la Google Wave) need an “invite” to get on it – which apparently I don’t have.  At the moment, I have no idea whether I even want an invite – after all, to quote the words of this xkcd cartoon — “it’s not Facebook but it’s like Facebook.”

So, those of you that know me also know that I cancelled my Facebook account (mainly because I read the Terms of Service and didn’t really like what I read); so what is it (if anything) that makes Google+ worth signing up for, that Facebook doesn’t have?  Should I even bother?  After all, both companies have one thing in common: they are making money out of your personal data (and mine as well).  And this is the sticking point for me; whether I offer my data to Facebook or Google or anyone else, someone else is making money out of that data.  Perhaps Google will take a slightly better attitude to data privacy; maybe they won’t.

Will it be successful? Will it overtake Facebook? Does anyone even care? Who knows — I haven’t even got an invite!  (But, I suppose, if I ever do decide to sign up, I can always cancel my account just like I did for their main competitor…)

World IPv6 Day – what now?

Hopefully most people will have realised by now that last Wednesday (8 June) was World IPv6 Day.  The idea of this day was to enable IPv6 on various web sites, including some quite famous ones like Google, Facebook and Yahoo!, and see what would happen.

Irritatingly, many of the press articles published about World IPv6 Day seemed to me to be either largely sceptical of the whole thing or dismissive of it as a pointless exercise.  But really, the whole point of the day was simply to see what would happen when people enabled IPv6 on their servers.  Perhaps The Register put it best with their headline, even if the article itself wasn’t much good in my opinion – “World IPv6 Day fails to kill Internet”, as if that was some kind of surprise.  (I knew it wouldn’t, but then again this web site has been IPv6-enabled for ages…)

But even if the press think or thought it was all one great big massive publicity stunt, what now?  Because now, the day is over, and presumably we can all go back to sleep — but sadly we can’t, because IPv4 addresses are still running out.  We can’t use NAT, or double NAT, or treble NAT forever.  And it left me thinking about what should happen next, because the world after World IPv6 Day doesn’t seem much different to the world before World IPv6 Day.  Maybe we should have another one next year…?

Thankfully, things do seem to be moving, but slowly.  Two big UK ISPs have announced on their respective web sites that they either have plans to roll out v6 or are trialling it – O2/Be are hopefully going to have v6 available at the end of the year, and Plusnet are trialling it now.  And Billion now have a UK version of their IPv6-capable ADSL wireless router shipping (albeit with beta firmware).

I’m hoping that when O2/Be roll out their v6 implementation, being one of the larger UK ISPs, it will entice the likes of BT Retail and TalkTalk to follow suit.  Plusnet, although owned by BT, is not really big enough to be called a ‘big player’ in the market.  It’s the old chicken-and-egg problem – the ISPs claim there’s no demand, and the service providers can’t roll it out because the ISPs aren’t offering it, which means they can’t create demand.  Perhaps things won’t change until the companies that got the last APNIC allocations use them up, 9 months hence.

It’s all slightly depressing, isn’t it? :)

Dilemma of the week – to HTML or not to HTML?

I was only considering earlier today whether it might be time to give up all my principles and ‘convert’ to the ‘standard’ that Microsoft probably should have, but in fact never, invented – HTML email*.  Up until now, all my emails have been resolutely plain-text, monospaced affairs designed to take up as little space as possible.  However, in today’s world everyone else is using HTML mail and I’m not – so perhaps it’s time to give in and convert.

The big disadvantage, of course, is that your email basically needs to be sent twice: once as a text/plain MIME part for the ‘legacy’ mail readers that can only understand plain text, and once as a text/html MIME part for the HTML-capable readers.  This does noticeably increase the size of an email, but in today’s “broadband” world that probably isn’t anywhere near as much of a problem as it used to be.  The bigger downside of HTML email is that it dramatically increases the scope for nasties to enter your e-mail client via nefarious HTML tags.  Thankfully, modern email clients are much better at sanitising HTML and at not loading remote images by default, so much of this risk is reduced – but not all of it.  The upside of plain text is that it is just that – plain text.  Nothing to go wrong.
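
For those who’ve never looked inside one, a dual-format email is a multipart/alternative message along these lines (addresses and boundary string made up, and most headers trimmed) – the plain text part comes first, as the “least preferred” alternative:

    From: someone@example.org
    To: someone-else@example.org
    Subject: An example
    MIME-Version: 1.0
    Content-Type: multipart/alternative; boundary="sep123"

    --sep123
    Content-Type: text/plain; charset=utf-8

    Hello, in boring old plain text.
    --sep123
    Content-Type: text/html; charset=utf-8

    <p>Hello, in <em>lovely</em> HTML.</p>
    --sep123--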

Another ‘attraction’ of HTML mail is that you can make your mails much ‘prettier’ using colours, boxes, CSS, whatever (I think you can even put in a background if you’re that desperate!), as well as different font styles/sizes (although this is limited to what fonts are on the client machine, which basically forces you to specify either “serif” or “sans-serif” and hope for the best, unless there is a way of embedding fonts these days.)

Perhaps I’ll trial it and see how it goes.  I can always go back to my old-fashioned ways if it doesn’t work out … :)

* Believe it or not, it was Netscape!

Review: IPv4 “Significant Announcement” ceremony and press conference

So now we know what blocks of IPv4 look like.  They’re glass!  Today was the live webcast of the Number Resource Organization’s “Significant Announcement” ceremony from some hotel in Florida, USA.  Each of the Regional Internet Registries was awarded a commemorative glass block and some kind of large white certificate as they were each given their final /8 allocation of 16,777,216 IPv4 addresses.  Each award was followed by a speech, the quality of which (in my opinion at least) ranged from ‘appalling’ to ‘not that good’.  This was followed by a press conference in which, I understand, the questions were not that great and in some cases were answered inaccurately.  So I’m expecting a whole raft of wrong news articles tomorrow.

Now we can say they are all gone.  They truly are.  You can check the official list – they really are all allocated!  Goodbye and thanks for all the fish…

Update: The actual ceremony and press conference are on YouTube now – announcement and press conference.

LDAP, pGina 2.1 and Single Sign-on

Now that the frenzy of IPv4 exhaustion is over for a little while, it was time to turn my hand to some of the more mundane aspects of computing.  One of the “things to do” on my list was single sign-on; that is, being able to log in using the same user name and password at any machine on my network.

There are a couple of problems in getting this to work – the main one being that I have a mix of Windows and Linux machines on my network, which requires a bit of thought.  Many years ago, there was a fantastic piece of software called pGina, which implemented the Microsoft GINA specification (which, if you want the simple explanation, is the bit of code that does the login box).  Using pGina, you could add plugins to authenticate users via something other than local users or a Windows domain controller.  So I used the LDAP plugin, and it worked, and it was great.

Then something happened.  Microsoft released Windows Vista.  And in that version of Windows, Microsoft decided to revamp the way the login box was done, replacing the GINA stuff with something called ‘Credential Providers’.  And my beloved pGina stopped working.  With the author at the time indicating that a Vista version wasn’t going to be forthcoming any time soon, I gave up and went back to local authentication.

But…

Last week I discovered that there was a new 2.x version of pGina which *did* implement a Credential Provider, so now Vista and Windows 7 users can once again use LDAP login on the Windows box.  Great news!  So, it was time to get all this up and running again.  To cut a long story short, I have pGina 2.1 installed, but it is not working yet.  The reason why is that I wanted to concentrate on getting the Linux part of it working first, and then sort pGina out later.

The Linux part was going to be interesting.  Using concepts that I first discovered the best part of 15 years ago, and remembering how to do it, was going to be fun.  The first job was to implement a common login system between all the Linux machines.  There are multiple ways to do this, and I would have preferred to have gone the Kerberos+LDAP route, but this isn’t actually possible yet using pGina, since it has an LDAP plugin but not a Kerberos one.  So plain LDAP it was.
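
For the record, a user entry for this sort of setup looks something like the LDIF below – every name and number here is invented for illustration:

    dn: uid=jbloggs,ou=people,dc=example,dc=org
    objectClass: inetOrgPerson
    objectClass: posixAccount
    objectClass: shadowAccount
    uid: jbloggs
    cn: Joe Bloggs
    sn: Bloggs
    uidNumber: 10000
    gidNumber: 10000
    homeDirectory: /home/jbloggs
    loginShell: /bin/bash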

It’s been a long time since I’d used OpenLDAP for any kind of sensible purpose, and a bit of a surprise was waiting for me – in Debian squeeze, they have moved to the ‘dynamic’ cn=config style of configuration, where all the config is stored in the directory itself, rather than the old-fashioned slapd.conf method.  It took a while to figure this out… but once I had, it was just a case of firing up Eclipse with Apache Directory Studio to navigate the LDAP tree and put all the right options and permissions and suchlike in.
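
As a sketch of how cn=config editing works (the DN and the ACLs here are purely illustrative – Debian’s default slapd lets the local root user in via SASL EXTERNAL over the ldapi socket):

    # acl.ldif -- replace the access rules on the main database:
    dn: olcDatabase={1}hdb,cn=config
    changetype: modify
    replace: olcAccess
    olcAccess: {0}to attrs=userPassword by self write by anonymous auth by * none
    olcAccess: {1}to * by self write by users read by * none

    # ...then load it, as root, on the server itself:
    ldapmodify -Y EXTERNAL -H ldapi:/// -f acl.ldif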

So, now I have an LDAP tree which will support single sign-on.  It was then just a case of installing the libpam-ldap and libnss-ldap packages and configuring them appropriately.  One thing that did catch me out was the fact that Debian link these packages against GnuTLS rather than OpenSSL.  Although I knew this, it wasn’t working properly.  Much frustration later, it turned out the reason was that you cannot use the tls_cacertdir parameter when using GnuTLS – only tls_cacertfile will work, presumably because GnuTLS has no equivalent of OpenSSL’s hashed certificate directories.  So, having figured that out, all my clients are now talking to each other using StartTLS rather than plain text.
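
The client-side configuration boils down to a few lines like these in /etc/libnss-ldap.conf and /etc/pam_ldap.conf (server name, base DN and certificate path all invented):

    uri ldap://ldap.example.org/
    base dc=example,dc=org
    ldap_version 3
    ssl start_tls
    tls_checkpeer yes
    # A single CA file, not a directory -- the directory variant doesn't
    # work with the GnuTLS-linked builds:
    tls_cacertfile /etc/ssl/certs/my-ca.pem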

That done, the next job was to somehow make my ‘network’ home directories appear on all the machines.  NFS is the obvious choice for this, but for one reason and another, using straight NFS mounts was not likely to work on my network.  Specifically, I didn’t want a situation where the NFS drives could not be mounted at boot because the virtual machines didn’t come up in the right order, or one got ‘stuck’.  So, I decided to resurrect the automounter.  I haven’t used this in donkeys’ years, but I was pleasantly surprised to see that the latest version of autofs, autofs5, comes with LDAP support – which is handy, since I had just set up my LDAP server anyway.

So, a little scratching of heads and a few entries in my LDAP server later, I had the automounter configured.  It picks my NFS directories up from the file server but (most importantly) only mounts them when required, which means that none of my virtual machines will hang at boot if the file server hasn’t started up yet, since the home directories aren’t needed at that point.
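
For illustration, the automounter entries in the directory look roughly like this, using the common automountMap/automount schema (autofs5 supports a few schema variants; the DNs and the file server name here are invented):

    dn: automountMapName=auto.master,dc=example,dc=org
    objectClass: automountMap
    automountMapName: auto.master

    dn: automountKey=/home,automountMapName=auto.master,dc=example,dc=org
    objectClass: automount
    automountKey: /home
    automountInformation: auto.home

    dn: automountMapName=auto.home,dc=example,dc=org
    objectClass: automountMap
    automountMapName: auto.home

    # Wildcard entry: mount /home/<user> from the file server on demand
    dn: automountKey=*,automountMapName=auto.home,dc=example,dc=org
    objectClass: automount
    automountKey: *
    automountInformation: -fstype=nfs fileserver:/export/home/&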

So far, all is well, and it seems to be quite a good solution.  I still haven’t got pGina working, mainly because I haven’t had time, but hopefully that shouldn’t be too difficult to get going, now that I know the rest of it works.

IPv4 all gone

The news has been announced.  In the last two hours, APNIC have been allocated the last two /8s in the IPv4 address pool, which triggers the distribution of the ‘final five’ blocks, one to each Regional Internet Registry.  That officially means there are no more IPv4 addresses left in the IANA pool.

What does this mean now?  Well, each RIR still has a stock of addresses.  With APNIC taking the last two blocks, they now have in the region of 3.2 /8s left, ARIN have about the same, and RIPE have nearly 4.  Each RIR will also get one additional /8 from the ‘final five’ on top of this.  Current estimates are that these addresses will be gone in around 6 months.

Party time!

IPv4 Exhaustion: Could tomorrow be the big day?

The Internet has been buzzing over the past few days about the exact date on which IANA will ‘push the button’ and finally exhaust their stock of /8s by allocating the last two blocks to APNIC.  The rumour has been for quite some time that 31st Jan/1st Feb was going to be the big day, and now some big (and not very subtle) hints have been dropped by several people who should know.  The date also neatly coincides with NANOG 51, the perennial meeting of the North American Network Operators’ Group, and is the day before Chinese New Year’s Eve.

So, watch this space!  NANOG 51 starts today, with the main events tomorrow, Tuesday and Wednesday.  I’m expecting an announcement around 09:30 EST (so 14:30 UK time, and around 00:30 in APNIC’s office in Brisbane) tomorrow.

Anyone for a party? :)