Brexit

I was awake soon after 5:30 yesterday morning. As I got to my computer, the EU referendum results weren’t confirmed, but it was looking certain that the country had voted (narrowly, but decisively) to leave the European Union. My thoughts during the day are nicely summed up by my tweets and retweets.

My initial reaction was anger.

(Hmm… the downside of rolling news coverage – that story has changed dramatically since I first linked to it.)

A few minutes later I was slightly more coherent (and almost philosophical).

Then the reality of the situation started to sink in.

I tried to be positive.

I was being sarcastic, of course. We’ll return to this subject later on.

I started to see life imitating art in a quite frightening way.

(And, yes, I know I should replace that picture with one of Boris Johnson)

Nigel Farage is (and, apparently, always has been) a despicable man. So it should have come as no surprise that his victory speech was insulting and divisive.

I don’t mind not being considered ordinary, but I’m certain I’m real and I like to think I’m decent. Tom Coates inverted Farage’s phrase nicely.

When Cameron resigned, I immediately became worried about the fall-out.

Really, if your best option is a man who stuck his penis into a pig’s mouth, then it must be clear that you’re in trouble.

Then I checked the stock market and realised that many of the Brexit supporters may have shot themselves in the foot.

A story in the FT illustrated the fall nicely (“nicely” isn’t really the right word!)

The markets bounced back a bit later in the day – but it was one of the most volatile days of trading in history.

Fox News can, of course, always be relied on to get important facts wrong.

Then I started to see data on the demographics of the voting – where it became obvious that it was mainly the older generations who were voting against the EU.

Can I just point out that it’s #NotAllBabyBoomers :-/

Remember the £350m a week that was going to be diverted to the NHS? Turns out that was a lie.

It was a lie on many fronts.

  • It was a lie because the UK doesn’t send £350m a week to the EU
  • It was a lie because it ignored the money that we get back from the EU
  • It was a lie because any money saved was never going to be spent on the NHS

It was a lie that the Leave campaign were called out on many times, but they refused to retract it.

To be fair to Farage (and that’s not a phrase I ever expected to write) he wasn’t part of the official Leave campaign, so he wasn’t the right person to ask about this. But someone should certainly take Johnson or Gove to task over it.

Going back to the baby-boomers, I retweeted a friend’s innocent question.

Then it started to look like Cameron might not be the only party leader to go in the fallout from the referendum.

Incidentally, has anyone seen any evidence of the Lib Dems in this campaign? A couple of days ago I saw footage of Tim Farron in a crowd somewhere. Took me a few seconds to remember who he was; and then another minute or so to remember that he was the leader of the Lib Dems.

Euro-myths have always really annoyed me.

More bad news from the City.

I should point out that Morgan Stanley have denied the story. I guess time will tell who is telling the truth here.

By mid-afternoon, I was working on alternative plans.

A final thought struck me.

I mean, they were a single-issue party. And they’ve won that battle. Surely, there’s no need for the party to exist any longer. They can hardly expect people to vote for them now (although UK voters are a very strange bunch). If they closed down, they could all go back to the Tories, and Farage and Carswell could get places in the new Johnson/Gove cabinet.

Oh, now I’m really depressed.

Ten Years?

It’s been some considerable time since I wrote anything about Nadine Dorries. I still keep an eye on what she’s up to, but most of the time it’s just the same old nonsense and it’s not worth writing about.

But I was interested to read her recent blog post explaining why she had given up Twitter (again). Of course, she uses it to rehash many of her old claims of stalking and the like, but what I found really interesting was when she said:

After almost ten years on Twitter (so long I can’t remember) and with 28,000 followers, I have made my own modest exit.

Because that “almost ten years” didn’t fit my recollections. Twitter has just had its tenth anniversary. As I wrote recently, almost no-one has been on Twitter for ten years – certainly not any British MPs.

It’s simple enough to use one of the many “how long have I been on Twitter?” sites to work out when her current @NadineDorriesMP account joined Twitter. It seems to be January 2012.
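If you’d rather not rely on a third-party site, the Twitter API will tell you the same thing. Here’s a minimal sketch using the Net::Twitter module from CPAN (the credential placeholders are things you get by registering an app with Twitter):

use strict;
use warnings;
use Net::Twitter;

my $nt = Net::Twitter->new(
    traits              => [qw/API::RESTv1.1/],
    consumer_key        => 'YOUR_CONSUMER_KEY',
    consumer_secret     => 'YOUR_CONSUMER_SECRET',
    access_token        => 'YOUR_ACCESS_TOKEN',
    access_token_secret => 'YOUR_ACCESS_TOKEN_SECRET',
);

# users/show returns the account's metadata, including its creation date
my $user = $nt->show_user({ screen_name => 'NadineDorriesMP' });
print "$user->{screen_name} joined on $user->{created_at}\n";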

But that’s not the full story. She has joined and left Twitter a few times. Let’s see what we can find out.

Firstly, here’s a blog post from May 2009 where she doesn’t seem to be planning to join Twitter any time soon.

Anyway, safe to say, I shan’t be joining the legions of twitters any day soon.

It’s several months later, in September 2009, when she announces that she has joined Twitter. So that “ten years” is more like six and a half.

I’m pretty sure that first account was also called @NadineDorriesMP. At some point over the next couple of years, she closed that account (I’ll dig through her blog later to see if I can find any evidence to date that) and some time later she returned with a new account called @Nadine_MP. I know that because in May 2011 she gave up that second account and forgot to remove the Twitter widget from her web site. Then someone else took over the now-abandoned username and used it to deface her site. And then, as we saw above, she rejoined in January 2012.

So I think the list of Nadine’s Twitter accounts goes like this:

  • NadineDorriesMP (Sept 2009 – Unknown)
  • Nadine_MP (Unknown – May 2011)
  • NadineDorriesMP (Jan 2012 – Mar 2016)

That last account is still registered. She just chooses not to use it any more. If past behaviour is anything to go by, she’ll be back at some point.

Anyway, here’s another good example of why you can’t trust anything that Dorries says. Even on a simple fact like how long she has been using Twitter, she just pulls numbers out of the air. She makes stuff up to suit her and she’s been doing it for years.

Twitter’s Early Adopters

You’ll be seeing that tweet (“just setting up my twttr”) a lot over the next few days. It’s the first ever public tweet that was posted to the service we now know as Twitter. And it was sent ten years ago by Jack Dorsey, one of Twitter’s founders.

Today, Twitter has over a hundred million users, who send 340 million tweets a day (those numbers are almost certainly out of date already), but I thought it would be interesting to look back at Twitter’s earliest users.

Every Twitter user has a user ID. That’s an integer which uniquely identifies them to the system. This is a simple incrementing counter[1]. You can use a site like MyTwitterID to get anyone’s ID given their Twitter username. It’s worth noting that you can change your username, but your ID is fixed. When I registered a new account last week, I got an ID that was eighteen digits long. But back in 2006, IDs were far shorter. Jack’s ID, for example, is 12. That’s the lowest currently active ID on the system. I assume that the earlier numbers were used for test accounts.

Using the Twitter API you can write a program that will give you details of a user from their ID. Yesterday I wrote a simple program to get the details of the first 100,000 Twitter users (the code is available on Github). The results from running the program are online. That’s a list of all of the currently active Twitter users with an ID less than 100,000.
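The real code is on Github, but a minimal sketch of the approach looks something like this (it uses the Net::Twitter module from CPAN; the credential placeholders are, again, things you get by registering an app with Twitter):

use strict;
use warnings;
use Net::Twitter;

my $nt = Net::Twitter->new(
    traits              => [qw/API::RESTv1.1/],
    consumer_key        => 'YOUR_CONSUMER_KEY',
    consumer_secret     => 'YOUR_CONSUMER_SECRET',
    access_token        => 'YOUR_ACCESS_TOKEN',
    access_token_secret => 'YOUR_ACCESS_TOKEN_SECRET',
);

# users/lookup accepts at most 100 ids at a time, so work in batches
for (my $id = 1; $id <= 100_000; $id += 100) {
    my $users = eval {
        $nt->lookup_users({ user_id => [ $id .. $id + 99 ] });
    };
    next unless $users;  # a batch containing no active users is an error

    # deleted and suspended accounts are silently missing from the results
    for my $user (@$users) {
        print join("\t", $user->{id}, $user->{screen_name},
                   $user->{created_at}, $user->{location} // ''), "\n";
    }

    sleep 5;  # stay well inside the API's rate limits
}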

The first thing you’ll notice is that there are far fewer than you might expect. The API only returns details on currently active users. So anyone who has closed their account won’t be listed. I expected that perhaps 20-25% of accounts might fall into that category, but it was much higher than that.

There are 12,435 users in the file. That means that 87,565 of the first 100,000 Twitter accounts are no longer active. That was such a surprise to me that I assumed there was a bug in my program. But I can’t find one. It really looks like almost 90% of the early Twitter users are no longer using the service.

The dates that the accounts were created range from Jack’s on 21st March 2006 to Jeremy Hulette (ID 99983 – the closest we have to 100,000) exactly nine months later on 21st December 2006. I guess you could get a good visualisation of Twitter’s early growth by plotting ID against creation date – but I’ll leave that to someone else.

My file also contains each user’s location. But it’s important to note that I’m getting the location that is currently associated with that account – not the original location (I wonder if Twitter still have that information). I know a large number of people who were in London when they joined Twitter but who are now in San Francisco, so any conclusions you draw from the location field are necessarily sketchy. But bearing that in mind, here are some “firsts”.

  • First non-Californian: rabble (ID 22, PDX & MVD)
  • First non-American: florian (ID 38, Berlin)
  • First Brit: blaine (ID 246, London)

That last one seems a little high to me. I might have missed someone earlier who didn’t put “UK” in their location.

So who’s on the list? Is there anyone famous? Not that I’ve seen yet. Oh, there are well-known geeks on the list. But no-one you’d describe as a celebrity. No musicians, no actors, no politicians, no footballers or athletes. I may have missed someone – please let me know if you spot anyone.

Oh, and I’m on the list. I’m at number 14,753. I signed up (as @davorg) at 11:30 on Wednesday 22nd November 2006. I suspect I’m one of the first thousand or so Brits on the list – but it’s hard to be sure of that.

Anyway, happy birthday to Twitter. I hope that someone finds this data interesting. Let me know what you find.

[1] Actually, there’s a good chance that this is no longer the case – but it was certainly true back in 2006.

My Family in 1939

Here in the UK, a census has been taken almost every ten years since 1841. There were a few censuses before that, but before 1841 they only counted people – they didn’t include lists of names.

These census records are released 100 years after the date of the census and this data is of great interest to genealogists. The most recent census that we have access to is from 1911 and the one from 1921 will be released at the start of 2022.

But occasionally, other records emerge that are almost as useful as a census. For example, in September 1939, on the eve of the Second World War, the British government took a national register which was used to issue identity cards to everyone.

Last November, FindMyPast made the contents of this register available to everyone. Initially I didn’t look at it as I have a FindMyPast subscription and I was annoyed that this didn’t cover the new records. I assumed that eventually the new data would be rolled into my existing subscription, so I decided to wait.

I didn’t have to wait very long. Yesterday I got access to the records. So I settled down last night to find out what I could about my ancestors in 1939. As it turned out, it didn’t take long. There were only ten of them and they were split across four households.

[Image: the 1939 register entry for George and Lily Clarke’s household]

This is most of my father’s family. You can see his parents, James and Ivy Cross. They are living with Ivy’s parents George and Lily Clarke. George worked for Greene King all of his life (for over sixty years) and this is the last job he did for them – running an off-licence in Holland-on-Sea. James and Ivy lived in the same building until James died in 1970. I remember spending a lot of time there when I was a child. I even have vague memories of George who died when I was three or four.

My father was born three months after this register was taken – in January 1940 – so it’s interesting to note that Ivy is, at this time, six months pregnant.

[Image: the 1939 register entry for Albert and Lily Cross’s household]

Just down the road are the rest of my father’s family – James’ parents Albert and Lily Cross living with their daughter (my great-aunt) Grace. Albert’s father (another James) was the lifeboatman who I have written about before.

[Image: the 1939 register entry for Robert and Agnes Sowman’s household]

Looking a bit further afield, we find most of my mother’s family living in Thorpe-le-Soken. You’ll see my great-grandparents, Robert and Agnes Sowman, along with three closed records. Records are closed if the people in them were born less than 100 years ago and aren’t known to have died. The first two closed records here are my grandmother, Cecilia, and her sister Margaret. Both of these women are no longer alive, so I should be able to get FindMyPast to open these records by sending them copies of their death certificates. The third closed record will be for Constance, the third daughter in the family.

[Image: the 1939 register entry for Maud Turpin]

And finally, here’s the final part of my family. Maud Turpin, living alone in Maldon. Maud is Agnes Sowman’s mother. Actually, this record showed me the only piece of information that I didn’t already know. Previously, I wasn’t sure when Maud’s husband Alfred died. He was still alive in the 1911 census and this record gives me strong evidence that he died before 1939. I think I’ve found a good candidate for his death record in 1931.


So that’s a pretty good summary of what you’ll find in the 1939 register. It’s a good substitute for a census (particularly as there was no census in 1941 – as the country was too busy fighting a war) and it’s nice that it’s not covered by census privacy laws, so it has been released to the public about 25 years sooner than you might expect. But, certainly in my case, I already had a lot of knowledge about my family in this period so I didn’t learn very much that was new. If I had paid the £7 per household that FindMyPast had initially asked for, I think I would have been very disappointed.

I should point out that you don’t just get this information. Each results page gives a map (actually, a selection of maps) showing where your ancestors lived. This is a nice touch. There are also random newspaper cuttings and photos from the locality. You might find these interesting – I really didn’t.

Has anyone else used these records yet? Have you found anything interesting?

p.s. And yes, if you’re paying close attention, you’ll notice that there’s one grandparent missing from my list above. Ask me about that in the pub one day.

2015 in Gigs

As has become traditional round these parts, it’s time for my annual review of the gigs I saw last year.

I saw 48 gigs in 2015. That’s up on 2014’s 45, but still short of my all-time high of 60 in 2013. I saw Chvrches, Stealing Sheep and Paper Aeroplanes twice. I was supposed to see a couple of other artists twice, but Natalie Prass cancelled the second show and I couldn’t get to the second Soak show as I was ill.

As always, there were some disappointments. Renaissance really weren’t very good (I waited to hear “Northern Lights” and then buggered off) and Elbow weren’t as good as I’d seen them before. But the biggest disappointment this year has to be Bob Dylan. He was terrible. I left at the interval.

About half-way through the year, I stopped writing reviews on my gig site. I’ve put up posts with just the data about the shows and I hope to back-fill some of the reviews at some point, but I can’t see it happening soon. Hopefully I’ll keep the site more up to date this year.

So here (in chronological order) are my favourite gigs of the year:

  • Stealing Sheep – It’s been far too long since I saw Stealing Sheep, but the release of a new album brought them to London a couple of times. I’m going to go with the Chat’s Palace show as my favourite as I like smaller venues.
  • Laura Marling – This was simply astonishing in every way. I was completely spellbound throughout this show. Almost certainly gig of the year.
  • Soak – If there’s any justice in the world, Soak is going to be huge. See her in intimate venues while you can.
  • Amanda Palmer – There always has to be an Amanda Palmer gig on the list. It’s the law.
  • Chvrches – Another act I saw twice. The small album launch show at the Tufnell Park Dome just pipped the huge extravaganza at Alexandra Palace.
  • Heaven 17 – Another band I’ve started seeing whenever I can.
  • Garbage – Sometimes, seeing bands decades after their peak can be a little disappointing. That certainly wasn’t the case for Garbage.
  • John Grant – First time I’d seen John Grant. I hope it won’t be the last.
  • Fuzzbox – Another act from my youth who made an impressive return.
  • The Unthanks – I’ve been meaning to get round to see the Unthanks for years. I’m glad I did. I’ll be seeing them again as soon as possible.

Gigs that fell just outside of the top ten included Julian Cope, Suzanne Vega, Paper Aeroplanes and Smoke Fairies. Oh, and the Indie Daze Festival was great too.

I already have tickets for a dozen shows in 2016. I’m particularly looking forward to ELO in April and seeing the Cure for the first time for far too many years in December.

Doctor Who Festival

In 2013, to celebrate the 50th anniversary of Doctor Who, the BBC put on a big celebration at the Excel centre in London’s Docklands. They must have thought that it went well as this year they decided to do it all over again at the Doctor Who Festival which took place last weekend. Being the biggest Doctor Who fan I know, I was at both events and I thought it might be interesting to compare them.

Each event ran over three days (Friday to Sunday). I visited both events on the Sunday on the basis that there would be one more episode of the show to talk about. This was particularly important in 2013 when the 50th anniversary special was broadcast on the Saturday night.

Price

Let’s start with the basics. This year’s event was more expensive than the 2013 one. And the price increases were both large and seemingly random. Here’s a table comparing the prices.

             Standard                      Tardis
             Adult     Child    Family     Adult     Child    Family
2013         £45.00    £20.00   £104.00    £95.50    £44.25   £218.00
2015         £68.00    £32.35   £171.00    £116.00   £52.75   £293.00
Increase     51.11%    61.75%   64.42%     21.47%    19.21%   34.40%

You’ll see that some prices “only” went up by about 20% while others increased by an eye-watering 65%. There’s obviously money to be made in these events. And, equally obviously, Doctor Who fans are happy to pay any price for entrance to these events. I don’t know about you, but those increases, over two years in which inflation has hovered around 0%, scream “rip-off” to me.

You’ll notice that I’ve quoted prices for two different types of ticket. There are standard tickets and “Tardis” tickets. Tardis tickets give you certain extras. We’ll look at those next.

Tardis Tickets

I’ll admit here that I went for the Tardis ticket both times. The big advantage that this ticket gives you is that in the big panels (and we’ll see later how those panels are the main part of the day) the front eight or so rows are reserved for Tardis ticket holders. So if you have a Tardis ticket you are guaranteed to be close enough to see the people on the stage. Without a Tardis ticket you can be at the far end of the huge hall where you might be able to make out that some people are on the stage, but you’ll be relying on the big video screens to see what is going on.

To me, that’s the big advantage of the Tardis ticket. Does it justify paying almost double the standard ticket price? I’m not sure. But you get a couple of other advantages. You get a free goodie bag. In 2013, that contained a load of tat (postcards, stickers, a keyfob, stuff like that) that I ended up giving away. This year we got the show book (which was pretty interesting and very nearly worth the £10 they were charging for it) and a t-shirt (which was being sold on the day for £25). So the 2015 goodie bag was a massive improvement on the 2013 one.

Tardis ticket-holders also got access to a special lounge where you could relax and partake of free tea, coffee and biscuits. In 2013 this was in a private area away from the rest of the show. This year it was a cordoned-off corner of the main exhibition hall, which didn’t seem like quite so much of a haven of calm.

Main Panels

The main structure of the day is made up of three big discussion panels that are held in a huge room. Each panel is run twice during the day, but when you buy your ticket you know which time you’ll be seeing each panel.

Each panel has people who are deeply involved in the show. In 2013 we had the following panels:

  • Danny Hargreaves of Real SFX talking about the special effects on the show.
  • Peter Davison, Colin Baker and Sylvester McCoy talking about playing the Doctor. I think Tom Baker also came to this panel on one of the three days.
  • Matt Smith, Jenna Coleman and Steven Moffat talking about the show.

This year we had:

  • Kate Walsh of Millennium FX (who make a lot of the prosthetics for the show) talking to Mark Gatiss.
  • Steven Moffat, Toby Whithouse and Jamie Mathieson talking about writing for the show. This panel had different writers on each of the three days.
  • Peter Capaldi, Jenna Coleman, Michelle Gomez, Ingrid Oliver and Steven Moffat talking about the show. Jenna Coleman was only on this panel on Sunday.

Both sets of panels were equally interesting. Having the former Doctors taking part in the 50th anniversary year made a lot of sense.

Exhibition Hall

The other main part of the event was an exhibition hall where various things were taking place. I think this was disappointing this year. Here are some comparisons:

Sets from the show

As far as I can remember, in 2013 there was only the entrance to Totter’s Yard and the outside of a Tardis. This year there was Davros’ hospital room, Clara’s living room and the outside of a Tardis (although this clearly wasn’t a “real” Tardis – the font on the door sign was terrible). So there were more sets this year, but I rather questioned their description of Clara’s living room as an “iconic” set.

Merchandise

There were a lot of opportunities to buy stuff, but it seemed to me that there were rather fewer stalls there this year. Merchandise seemed to fall into two categories. There was stuff that you would have been better off buying from Amazon (DVDs, board games, books, stuff like that). And there was really expensive stuff. I really can’t justify spending £60 or £80 for incredibly intricate replicas of props from the show or £200(!) for a copy of one of the Doctor’s coats.

There was one big exception to the “cheaper on Amazon” rule. The BBC shop had a load of classic DVDs on sale for £6 each.

In 2013 I bought a couple of postcards. This year I managed to resist buying anything. But I appeared to be rather unusual in that – there were a lot of people carrying many large bags of stuff.

Other Stages

Both years, around the edge of the main hall there were areas where other talks and workshops were taking place. This year’s seemed slightly disappointing. For example, on one stage in 2013 I saw Dick Maggs giving an interesting talk about working with Delia Derbyshire to create the original theme tune. The equivalent area this year had a group of assistant directors giving a list of the people who work on set when an episode of the show is being made.

In 2013, the centre of this room was given over to an area where many cast members from the show’s history were available for autographs and photos. This year, that’s where Clara’s living room was set up. In fact the four cast members who were in the panel I mentioned above were the only cast members who were involved in this event at all. I realise that it makes more sense for there to be lots of cast members involved in the 50th anniversary celebrations, but surely there were some other current cast members who could have turned up and met their fans.

Also in this hall was an area where the Horror Channel (who are the current home of Classic Doctor Who in the UK) were showing old episodes. There was something similar in 2013, but (like the Tardis lounge) it was away from the main hall. Moving this and the Tardis lounge to the main hall made me think that they were struggling a bit to fill the space.

In Summary

This year’s event was clearly a lot more expensive than the one in 2013 and I think attendees got rather less for their money. All in all I think it was slightly disappointing.

The big panels are clearly the centrepiece of the event and they are well worth seeing. But I think you need a Tardis ticket in order to guarantee getting a decent view. Oh, yes you can get in the ninth row without a Tardis ticket, but you’d be competing with a lot of people for those seats. You’d spend the whole day queuing to stand a chance of getting near the front.

I don’t know what the BBC’s plans for this event are, but it’s clearly a good money-spinner for them and I’d be surprised if they didn’t do it again either next year or in 2017. And the fans don’t really seem to mind how much they pay to attend, so it’ll be interesting to see how the next one is priced.

I think that the big panels still make the event worth attending, but there’s really not much else that I’m interested in. So I’m undecided as to whether I’d bother going again in the future.

Were you at the event? What did you think of it? How much money did you spend in total?

Eighteen Classic Albums

A couple of months ago, I wrote a post about a process I had developed for producing ebooks. While dabbling in a few projects (none of which are anywhere near being finished) I established that the process worked and I was able to produce ebooks in various different formats.

But what I really needed was a complete book to try the process on, so that I could push it right through the pipeline so it was for sale on Amazon. I didn’t have the time to write a new book, so I looked around for some existing text that I could reuse.

Long-time readers might remember the record club that I was a member of back in 2012. It was a Facebook group where each week we would listen to a classic album and then discuss it with the rest of the group. I took it a little further and wrote up a blog post for each album. That sounded like a good set of posts to use for this project.

So I grabbed the posts, massaged them a bit, added a few other files and, hey presto, we have a book. All in all it took about two or three hours of work. And a lot of that was my amateur attempts at creating a cover image. If you’re interested in the technical stuff, then you can find all the input files on Github.

There has been some confusion over the title of the book. Originally, I thought there were seventeen reviews in the series. But that was because I had mis-tagged one. And, of course, you only find problems like that after you create the book and upload it to Amazon. So there are rare “first printing” versions available with only seventeen reviews and a different title. Currently the book page on Amazon is still showing the old cover. I hope that will be sorted out soon. It’ll be interesting to see how quickly the fixed version is pushed out to people who have already bought the older edition.

My process for creating ebooks is working well. And the next step of the process (uploading the book to Amazon) was pretty painless too. You just need to set up a Kindle Direct Publishing account and then upload a few files and fill in some details of the book. I’ve priced it at $2.99 (which is £1.99) as that’s the cheapest rate at which I can get 70% of the money. The only slight annoyance in the process is that once you’ve uploaded a book and given all the details, you can’t upload a new version or change any of the information (like fixing the obvious problems in the current description) until the current version has been published across all Amazon sites. And that takes hours. And, of course, as soon as you submit one version you notice something else that needs to be fixed. So you wait. And wait.

But I’m happy with the way it has all gone and I’ll certainly be producing more books in the future using this process.

Currently three people have bought copies. Why not join them? It only costs a couple of quid. And please leave a review.

How To Travel From London To Paris

Imagine that you want to travel from London to Paris. Ok, so that’s probably not too hard to imagine. But also imagine that you have absolutely no idea how to do that and neither does anyone that you know. In that situation you would probably go to Amazon and look for a book on the subject.

Very quickly you find one called “Teach Yourself How To Travel From London To Paris In Twenty-One Days”. You look at the reviews and are impressed.

I had no idea how to get from London to Paris, but my family and I followed the instructions in this book. I’m writing this from the top of the Eiffel Tower – five stars.

And

I really thought it would be impossible to get from London to Paris, but this book really breaks it down and explains how it’s done – five stars.

There are plenty more along the same lines.

That all looks promising, so you buy the book. Seconds later, it appears on your Kindle and you start to read.

Section one is about getting from London to Dover. Chapter one starts by ensuring that all readers are starting from the same place in London and suggests a particular tavern in Southwark where you might meet other travellers with the same destination. It then suggests a walking route that you might follow from Southwark to Canterbury. It’s written in slightly old-fashioned English and details of the second half of the route are rather sketchy.

Chapter two contains a route to walk from Canterbury to Dover. The language has reverted to modern English and the information is very detailed. There are reviews of many places to stay on the way – many of which mention something called “Trip Advisor”.

Section two is about crossing the channel. Chapter three talks about the best places in Dover to find the materials you are going to need to make your boat and chapter four contains detailed instructions on how to construct a simple but seaworthy vessel. The end of the chapter has lots of advice on how to judge the best weather conditions for the crossing. Chapter five is a beginner’s guide to navigating the English Channel and chapter six has a list of things that might go wrong and how to deal with them.

Section three is about the journey from Calais to Paris. Once again there is a suggested walking route and plenty of recommendations of places to stay.

If you follow the instructions in the book you will, eventually, get to Paris. But you’re very likely to come away thinking that it was all rather more effort than you expected it to be and that next time you’ll choose a destination that is easier to get to.

You realise that you have misunderstood the title of the book. You thought it would take twenty-one days to learn how to make the journey, when actually it will take twenty-one days (at least!) to complete the journey. Surely there is a better way?

And, of course, there is. Reading further in the book’s many reviews you come across the only one-star review:

If you follow the instructions in this book you will waste far too much time. Take your passport to St. Pancras and buy a ticket for the Eurostar. You can be in Paris in less than four hours.

The reviewer claims to be the travel correspondent for BBC Radio Kent. The other reviewers were all people with no knowledge of travel who just happened to come across the book in the same way that you did. Who are you going to trust?

I exaggerate, of course, for comic effect. But reviews of technical books on Amazon are a lot like this. You can’t trust them because in most cases the reviewers are the very people who are least likely to be able to give an accurate assessment of the technical material in the book.

When you are choosing a technical book you are looking for two things:

  • You want the information in the book to be as easy to understand as possible
  • You want the information in the book to be as accurate and up to date as possible

Most people pick up a technical book because they want to learn about the subject that it covers. That means that, by definition, they are unable to judge that second point. They know how easily they understood the material in the book. They also know whether or not they managed to use that information to achieve their goals. But, as my overstretched metaphor above hopefully shows, it’s quite possible to follow terrible advice and still achieve your goals.

I first became aware of this phenomenon in the late 1990s. At the time a large number of dynamic web pages were built using Perl and CGI. This meant that a lot of publishers saw this as a very lucrative market and dozens of books on the subject were published, many of which covered the Perl equivalent of walking from London to Paris. And because people read these books and managed to get to Paris (albeit in a ridiculously roundabout manner) they thought the books were great and gave them five-star reviews. Much to the chagrin of Perl experts who were standing on the kerbside on the A2 shouting “but there’s a far easier way to do that!”

This is still a problem today. Earlier this year I reviewed a book about penetration testing using Perl. I have to assume that the author knew what he was doing when talking about pen testing, but his Perl code was positively Chaucerian.

It’s not just book reviews that are affected. Any kind of technical knowledge transfer mechanism is open to the same problems. A couple of months ago I wrote a Perl tutorial for Udemy. It only covered the very basics, so they included a link to one of their other Perl courses. But having sat through the first few lessons of this course, I know that it’s really not very good. How did the people at Udemy choose which one to link to? Well it’s the one with the highest student satisfaction ratings, of course. It teaches the Perl equivalent of boat-building. A friend has a much better Perl course on Udemy, but they wouldn’t use that as it didn’t have enough positive feedback.

Can we blame anyone for this? Well, we certainly can’t blame the reviewers. They don’t know that they are giving good reviews to bad material. I’m not even sure that we can blame the authors in many cases. It’s very likely that they don’t know how much they don’t know (obligatory link to the Dunning–Kruger effect). I think that in some cases the authors must know that they are chancing their arm by putting themselves forward as an expert, but most of them probably believe that they are giving good advice (because they learned from an expert who taught them how to walk from London to Paris and so the chain goes back to the dawn of time).

I think a lot of the blame must be placed with the publishers. They need to take more responsibility for the material they publish. If you’re publishing in a technical arena then you need to build up contacts in that technical community so that you have people you can trust who can give opinions on your books. If you’re publishing a book on travelling from London to Paris then see if you can find a travel correspondent to verify the information in it before you publish it and embarrass yourselves. In fact, get these experts involved in the commissioning process. If you want to publish a travel book then ask your travel correspondent friends if they know anyone who could write it. If someone approaches you with a proposal for a travel book then run the idea past a travel correspondent or two before signing the contract.

I know that identifying genuine experts in a field can be hard. And I know that genuine experts would probably like to be compensated for any time they spend helping you, but I think it’s time and money well-spent. You will end up with better books.

Or, perhaps some publishers don’t care about the quality of their books. If bad books can be published quickly and cheaply and people still buy them, then what business sense does it make to make the books better?

If you take any advice away from this piece, then don’t trust reviews and ratings of technical material.

And never try to walk from London to Paris (unless it’s for charity).

Writing Books (The Easy Bit)

Last night I spoke at a London Perl Mongers meeting. As part of the talk I spoke about a toolchain that I have been using for creating ebooks. In this article I’ll go into a little more detail about the process.

Basically, we’re talking about a process that takes one or more files in some input format and (as easily as possible) turns them into one or more output formats which can be described as “ebooks”. So before we can decide which tools we need, we should decide what those various file formats should be.

For my input format I chose Markdown. This is a text-based format that has become popular amongst geeks over the last few years. Geeks tend to like text-based formats more than the proprietary binary formats like those produced by word processors. This is for a number of reasons. You can read them without any specialised tools. You’re not tied down to using specific tools to create them. And it’s generally easier to store them in a revision management system like Github.

For my output formats, I wanted EPUB and Mobipocket. EPUB is the generally accepted standard for ebooks and Mobipocket is the ebook format that Amazon use. And I also wanted to produce PDFs, just because they are easy to read on just about any platform.

(As an aside, you’ll notice that I said nothing in that previous paragraph about DRM. That’s simply because nice people don’t do that.)

Ok, so we know what file formats we’ll be working with. Now we need to know a) how we create the input format and b) how we convert between the various formats. Creating the Markdown files is easy enough. It’s just a text file, so any text editor would do the job (it would be interesting to find out if any word processor can be made to save text as Markdown).

To convert our Markdown into EPUB, we’ll need a new tool. Pandoc describes itself as “a universal document converter”. It’s not quite universal (otherwise that would be the only tool that we would need), but it is certainly great for this job. Once you have installed Pandoc, the conversion is simple:

pandoc -o your_book.epub title.txt your_book.md --epub-metadata=metadata.xml --toc --toc-depth=2

There are two extra files you need here (I’m not sure why it can’t all be in the same file, but that’s just the way it seems to be). The first (which I’ve called “title.txt”) contains two lines. The first line has the title of your book and the second has the author’s name. Each line needs to start with a “%” character. So it might look like this:

% Your title
% Your name

The second file (which I’ve called “metadata.xml”) contains various pieces of information about the book. It’s (ew!) XML and looks like this:

<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
<dc:title id="main">Your Title</dc:title>
<meta refines="#main" property="title-type">main</meta>
<dc:language>en-GB</dc:language>
<dc:creator opf:file-as="Surname, Forename" opf:role="aut">Forename Surname</dc:creator>
<dc:publisher>Your name</dc:publisher>
<dc:date opf:event="publication">2015-08-14</dc:date>
<dc:rights>Copyright ©2015 by Your Name</dc:rights>
</metadata>

So after creating those files and running that command, you’ll have an EPUB file. Next we want to convert that to a Mobipocket file so that we can distribute our book through Amazon. Unsurprisingly, the easiest way to do that is to use a piece of software that you get from Amazon. It’s called Kindlegen and you can download it from their site. Once it is installed, the conversion is as simple as:

kindlegen your_book.epub

This will leave you with a file called “your_book.mobi” which you can upload to Amazon.

There’s one last conversion that you might need. And that’s converting the EPUB to PDF. Pandoc will make that conversion for you. But it does it using a piece of software called LaTeX which I’ve never had much luck with. So I looked for an alternative solution and found it in Calibre. Calibre is mainly an ebook management tool, but it also converts between many ebook formats. It’s pretty famous for having a really complex user interface but, luckily for us, there’s a command-line program called “ebook-convert” which we can use.

ebook-convert your_book.epub your_book.pdf

And that’s it. We start with a Markdown file and end up with an ebook in three formats. Easy.
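If you get tired of typing those three commands, it’s simple enough to wrap them in a little driver script. Here’s a minimal sketch in Perl (it assumes that pandoc, kindlegen and ebook-convert are all installed and on your path, and that your files are named as above):

use strict;
use warnings;

# build all three formats from your_book.md (or pass a different basename)
my $book = shift || 'your_book';

system('pandoc', '-o', "$book.epub", 'title.txt', "$book.md",
       '--epub-metadata=metadata.xml', '--toc', '--toc-depth=2') == 0
    or die "pandoc failed\n";

# kindlegen writes $book.mobi; it exits non-zero for mere warnings,
# so we only warn rather than die here
system('kindlegen', "$book.epub") == 0
    or warn "kindlegen reported warnings or errors\n";

system('ebook-convert', "$book.epub", "$book.pdf") == 0
    or die "ebook-convert failed\n";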

Of course, that really is the easy part. There’s a bit that comes before (actually writing the book) and a bit that comes after (marketing the book) and they are both far harder. Last year I read a book called Author, Publisher, Entrepreneur which covered these three steps to a very useful level of detail. Their step two is rather different to mine (they use Microsoft Word if I recall correctly) but what they had to say about the other steps was very interesting. You might find it interesting if you’re thinking of writing (and self-publishing) a book.

I love the way that ebooks have democratised the publishing industry. Anyone can write and publish a book and make it available to everyone through the world’s largest book distribution web site.

So what are you waiting for? Get writing. If you find my toolchain interesting (or if you have any comments on it) then please let me know.

And let me know what you’ve written.

Financial Account Aggregation

Three years ago, I wrote a blog post entitled Internet Security Rule One about the stupidity of sharing your passwords with anyone. I finished that post with a joke.

Look, I’ll tell you what. I’ve got a really good idea for an add-on for your online banking service. Just leave the login details in a comment below and I’ll set it up for you.

It was a joke because it was obviously ridiculous. No-one would possibly think it was a good idea to share their banking password with anyone else.

I should know not to make assumptions like that.

Yesterday I was made aware of a service called Money Dashboard. Money Dashboard aggregates all of your financial accounts so that you can see them all in one convenient place. They can then generate all sorts of interesting reports about where your money is going and can probably make intelligent suggestions about things you can do to improve your financial situation. It sounds like a great product. I’d love to have access to a system like that.

There’s one major flaw though.

In order to collect the information they need from all of your financial accounts, they need your login details for the various sites that you use. And that’s a violation of the Internet Security Rule One. You should never give your passwords to anyone else – particularly not passwords that are as important as your banking password.

I would have thought that was obvious. But they have 100,000 happy users.

Of course they have a page on their site telling you exactly how securely they store your details. They use “industry-standard security practices”, their application is read-only “which means it cannot be used for withdrawals, payments or to transfer your funds”. They have “selected partners with outstanding reputations and extensive experience in security solutions”. It all sounds lovely. But it really doesn’t mean very much.

It doesn’t mean very much because at the heart of their system, they need to log on to your bank’s web site pretending to be you in order to get hold of your account information. And that means that no matter how securely they store your passwords, at some point they need to be able to retrieve them in plain text so they can use them to log on to your bank’s web site. So there must be code somewhere in their system which punches through all of that security and gets the string “pa$$word”. So in the worst case scenario, if someone compromises their servers they will be able to get access to your passwords.
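A crude illustration of the difference – and this is my sketch of the general problem, not a description of Money Dashboard’s actual code, which I obviously haven’t seen:

use strict;
use warnings;
use Digest::SHA qw(sha256_hex);
use Crypt::CBC;  # plus a cipher module such as Crypt::Rijndael

my $password = 'pa$$word';

# how a password *should* be stored: a salted one-way hash;
# the site can check a login attempt but can never recover the password
my $salt   = 'some-random-salt';
my $hashed = sha256_hex($salt . $password);

# how an aggregator has to store it: reversibly, because it must replay
# the password to your bank's login form; anyone who gets hold of $key
# (from a compromised server, say) gets the plain text back
my $key    = 'a-master-key-somewhere-on-their-servers';
my $cipher = Crypt::CBC->new(-key => $key, -cipher => 'Rijndael');
my $stored = $cipher->encrypt($password);

print $cipher->decrypt($stored), "\n";  # prints pa$$word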

If that doesn’t convince you, then here’s a simpler reason for not using the service. Sharing your passwords with anyone else is almost certainly a violation of your bank’s terms and conditions. So if someone does get your details from Money Dashboard’s system and uses that information to wreak havoc in your bank account – good luck getting any compensation.

Here, for example, are First Direct’s T&Cs about this (in section 9.1):

You must take all reasonable precautions to keep safe and prevent fraudulent use of any cards, security devices, security details (including PINs, security numbers, passwords or other details including those which allow you to use Internet Banking and Telephone Banking).

These precautions include but are not limited to all of the following, as applicable:

[snip]

  • not allowing anyone else to have or use your card or PIN or any of our security devices, security details or password(s) (including for Internet Banking and Telephone Banking) and not disclosing them to anyone, including the police, an account aggregation service that is not operated by us

Incidentally, that “not operated by us” is a nice piece of hubris. First Direct run their own account aggregation service which, of course, they trust implicitly. But they can’t possibly trust anybody else’s service.

I started talking about this on Twitter yesterday and I got this response from the @moneydashboard account. It largely ignores the security aspects and concentrates on why you shouldn’t worry about breaking your bank’s T&Cs. They seem to be campaigning to get T&Cs changed to allow explicit exclusions for sharing passwords with account aggregation services.

I think this is entirely wrong-headed. I think there is a better campaign that they should be running.

As I said above, I think that the idea of an account aggregation service is great. I would love to use something like Money Dashboard. But I’m completely unconvinced by their talk of security. They need access to your passwords in plain text. And it doesn’t matter that their application only reads your data. If someone can extract your login details from Money Dashboard’s systems then they can do whatever they want with your money.

So what’s the solution? Well I agree with one thing that Money Dashboard say in their statement:

All that you are sharing with Money Dashboard is data; data which belongs to you. You are the customer, you should be telling the bank what to do, not the other way around!

We should be able to tell our banks to share our data with third parties. But we should be able to do it in a manner that doesn’t entail giving anyone full access to our accounts. The problem is that there is only one level of access to your bank account. If you have the login details then you can do whatever you want. But what if there was a secondary set of access details – ones that could only read from the account?

If you’ve used the web much in recent years, you will have become familiar with this idea. For example, you might have wanted to give a web app access to your Twitter account. During this process you will be shown a screen (which, crucially, is hosted on Twitter’s web site, not the new app) asking if you want to grant rights to this new app. And telling you which rights you are granting (“This app wants to read your tweets.” “This app wants to tweet on your behalf.”) You can decide whether or not to grant that access.

This is called OAuth. And it’s a well-understood protocol. We need something like this for the finance industry. So that I can say to First Direct, “please allow this app to read my account details, but don’t let them change anything”. If we had something like that, then all of these problems will be solved. The Money Dashboard statement points to the Financial Data and Technology Association – perhaps they are the people to push for this change.
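To make that concrete, here’s a sketch of what the token exchange in such a flow might look like. Everything here is hypothetical – the bank, the endpoints and the scope name are all invented – but the shape is bog-standard OAuth:

use strict;
use warnings;
use LWP::UserAgent;
use JSON qw(decode_json);

my $ua = LWP::UserAgent->new;

# the bank has already shown you an approval screen for a read-only
# scope and redirected back to the app with a short-lived code
my $code = 'CODE_FROM_THE_REDIRECT';

# the app swaps that code for an access token
my $resp = $ua->post('https://api.example-bank.co.uk/oauth/token', {
    grant_type => 'authorization_code',
    code       => $code,
    scope      => 'accounts:read',  # crucially, no payment scope
});
my $token = decode_json($resp->decoded_content)->{access_token};

# the token can read balances and transactions, but nothing more -
# and you can revoke it at the bank without changing your password
my $accounts = $ua->get(
    'https://api.example-bank.co.uk/accounts',
    Authorization => "Bearer $token",
);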

I know why Money Dashboard are doing what they are doing. And I know they aren’t the only ones doing it (Mint, for example, is a very popular service in the US). And I really, really want what they are offering. But just because a service is a really good idea, shouldn’t mean that you take technical short-cuts to implement it.

I think that the “Financial OAuth” I mentioned above will come about. But the finance industry is really slow to embrace change. Perhaps the Financial Data and Technology Association will drive it. Perhaps one forward-thinking bank will implement it and other banks’ customers will start to demand it.

Another possibility is that someone somewhere will lose a lot of money through sharing their details with a system like this and governments will immediately close them all down until a safer mechanism is in place.

I firmly believe that systems like Money Dashboard are an important part of the future. I just hope that they are implemented more safely than the current generation.