Python 3.5, GTK+ 3, Glade and OpenCV

I’ve just spent an hour or so figuring out how to display an OpenCV image in a GTK+ 3 window that’s created through a Glade UI using Python 3.  Since it’s not at all obvious even where to find the documentation, I’m writing it down here.

Background – Python 3 and GTK+

Time was, to use GTK in Python you installed PyGTK.  Those days are gone.  What we have now is called GObject Introspection – or ‘gi’.  What it does is pretty cool – it can expose any GObject-based library in Python.  Any new GObject-based library that’s written is immediately available in Python.  Just like that.
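
To give you a flavour of what that looks like in practice, here’s a minimal sketch of mine (nothing project-specific, just the boilerplate) that puts an empty GTK+ 3 window on screen through gi:

# The same 'gi' machinery exposes all of GTK+ 3 without any GTK-specific
# Python binding package being installed.
import gi
gi.require_version('Gtk', '3.0')   # ask for the GTK+ 3 bindings explicitly
from gi.repository import Gtk

win = Gtk.Window(title="Hello from gi")
win.connect("delete-event", Gtk.main_quit)
win.show_all()
Gtk.main()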

What’s really really dumb about it is calling it ‘gi’.  Try Googling that!

So, here’s where the documentation is: https://lazka.github.io/pgi-docs/.  Once you’ve found the documentation, it’s pretty easy to use.  Finding it is the hard part.

So, show me how to do it

Here’s code that takes an OpenCV feed from a webcam and displays it in a Glade UI.  First, the Glade file:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Generated with glade 3.18.3 -->
<interface>
  <requires lib="gtk+" version="3.12"/>
  <object class="GtkWindow" id="window1">
    <property name="can_focus">False</property>
    <signal name="delete-event" handler="onDeleteWindow" swapped="no"/>
    <child>
      <object class="GtkBox" id="box1">
        <property name="visible">True</property>
        <property name="can_focus">False</property>
        <property name="orientation">vertical</property>
        <child>
          <object class="GtkToggleButton" id="greyscaleButton">
            <property name="label" translatable="yes">Greyscale</property>
            <property name="visible">True</property>
            <property name="can_focus">True</property>
            <property name="receives_default">True</property>
            <signal name="toggled" handler="toggleGreyscale" swapped="no"/>
          </object>
          <packing>
            <property name="expand">False</property>
            <property name="fill">True</property>
            <property name="position">0</property>
          </packing>
        </child>
        <child>
          <object class="GtkImage" id="image">
            <property name="visible">True</property>
            <property name="can_focus">False</property>
            <property name="stock">gtk-missing-image</property>
          </object>
          <packing>
            <property name="expand">False</property>
            <property name="fill">True</property>
            <property name="position">1</property>
          </packing>
        </child>
      </object>
    </child>
  </object>
</interface>

The main thing to note here is that we’re using a GtkImage object to display the video. Each frame, we’ll replace the GtkImage’s image data with the frame from the camera. I’ve also added a button to switch between greyscale and colour. Note that the developers are all Americans and so spell ‘grey’ and ‘colour’ wrong.

And here’s the Python code:

import cv2
import numpy as np
import gi

gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, Gdk, GLib, GdkPixbuf

cap = cv2.VideoCapture(1)

builder = Gtk.Builder()
builder.add_from_file("test.glade")

greyscale = False

class Handler:
    def onDeleteWindow(self, *args):
        Gtk.main_quit(*args)

    def toggleGreyscale(self, *args):
        global greyscale
        greyscale = not greyscale

window = builder.get_object("window1")
image = builder.get_object("image")
window.show_all()
builder.connect_signals(Handler())

def show_frame(*args):
    ret, frame = cap.read()
    frame = cv2.resize(frame, None, fx=2, fy=2, interpolation = cv2.INTER_CUBIC)
    if greyscale:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frame = cv2.cvtColor(frame, cv2.COLOR_GRAY2RGB)
    else:
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

    pb = GdkPixbuf.Pixbuf.new_from_data(frame.tostring(),
                                        GdkPixbuf.Colorspace.RGB,
                                        False,
                                        8,
                                        frame.shape[1],
                                        frame.shape[0],
                                        frame.shape[2]*frame.shape[1])
    image.set_from_pixbuf(pb.copy())
    return True

GLib.idle_add(show_frame)
Gtk.main()

Things to note here:

  • It’s quite important to handle the window’s delete_event signal. Otherwise it can be quite difficult to kill the program (Ctrl+C doesn’t work; try Ctrl+Z and then kill -9 %1).
  • I’m resizing the video to twice its native resolution.
  • To convert to greyscale, I first convert BGR to greyscale and then greyscale to RGB. GTK+ can apparently only handle the RGB colourspace, so you need to end up there one way or another. OpenCV natively generates BGR, not RGB, so even to display colour you need to do a conversion.
  • To get the data into a form that GtkImage understands, we first convert the numpy ndarray to a byte array using .tostring(). We then use GdkPixbuf.Pixbuf.new_from_data to convert this to a pixbuf. The False argument is to say there is no alpha channel. 8 is the only bit depth supported. frame.shape[1] is the image width and frame.shape[0] is the image height, and the last argument is the number of bytes in one row of the image (ie. the number of channels times the width in pixels).
  • We don’t display the pixbuf directly but instead display a copy of it. This gets around a wrinkle in the memory management which would otherwise require us to manually clean up the pixbuf object when we’re done with it.
  • The function gets called whenever the GLib main loop is idle; GLib.idle_add(show_frame) adds it to the list of functions to call at idle time.
  • You have to return True from idle functions or they don’t get called again.  (If you’d rather cap the frame rate than poll flat out, see the timer-based sketch below.)
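
The timer-based alternative: a hedged tweak of mine (not in the original code) that swaps the GLib.idle_add call for a GLib timer, so the camera is polled at a fixed rate rather than flat out.

# show_frame must still return True so the timer keeps firing.
GLib.timeout_add(33, show_frame)   # roughly 30 frames per second
Gtk.main()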

That’s it!

Election Pickings

In case you’ve been under a rock, or otherwise dead, there was an election and Donald Trump won it.  Let me state my position up front:  I don’t live in the USA.  I’m generally on the right wing of politics.  I’m not a fan of open borders.  I backed brexit.  But I don’t like Donald Trump.  I dislike anyone who I wouldn’t trust in the same room with my female relatives.  I don’t really like anyone much who’s on reality TV.  I take a fairly dim view of American influence in the world, and so the slogan, “Make America great again,” sends shivers down my spine, not because of the damage he might cause while failing but because he might succeed.

The travails of the Republican party have been pretty well covered in this election.  To briefly caricature, the Democrats chose the candidate the party machine told them to choose, the candidate who represented the Washington machine, big corporate interests and the influence of foreign money, the candidate with 1.3 billion dollars to spend on the campaign, and dressed her up in hope, love and feminism (“and the greatest of these is feminism,” one is tempted to add).  The Republican party machine chose a candidate, too, and their membership told them where to stick it.  Half (ie. some number that’s not ‘all’ and not ‘none’) of the party establishment refused to endorse their own party’s candidate.  Normally that kind of party division ends in disaster, but not this time.  This time, the party grass-roots have elected a president over the will of the party hierarchy.

All that’s been pretty well picked over in the last few weeks.  I want to comment on the problems on the left.  So many on the left are scratching their heads and wondering, “How did this happen?  Couldn’t the electorate see what was happening?  Why didn’t they listen?”  To me, lots of the problems are summed up in this election-eve clip from Bill Maher.  It’s not that long and worth watching in full.

Any Democrats still out there wondering why no-one listened?  No?  Good.  Bill freely admits he lied to you about George Bush Jr.  He lied to you about John McCain.  He lied to you about Mitt Romney.  He said all those guys were racist, sexist and homophobic and would be the end of the world as we know it, but that was just a trick to try to convince you.  That was lying to you for your own good, to scare you into choosing the ‘right’ candidate when election time came.

But this time, you should trust him.  This Republican candidate is different.  This time, “shit just got real.”  The man’s racist!  He’s sexist!  He’s homophobic!  He’ll be the end of the world as we know it!

Do you think these people go on courses to help them suppress their self-awareness, or is the lack innate?

As much as there is undoubtedly to dislike about Trump, it’s hard, on a few minutes’ reflection, not to conclude that a good deal of the mud thrown at him is just made up.  That’s a mistake.  People are not quite as stupid as the commentariat assumes and they can, generally, spot mud being thrown in the hope that some will stick.

Let’s have a look at some specifics.

The accusation that Trump is sexist is doubtless true in some ways, but it’s quite a complicated, nuanced picture.  The mistake here was to take someone who openly objectifies women and to assume (be it naively or cynically) that he therefore is against women’s rights.  It’s not a very convincing assumption.  Finding someone who claims Trump sexually assaulted them is easy; finding someone who claims he paid them less because they’re a woman turned out to be much harder.  Finding someone who claims he treats women in the workplace well is easy; he’s got a daughter who runs a largish slice of his business for him.  He might well cause a twenty-year backlog of sexual assault cases in the courts, but the idea that he would “put women’s rights back twenty years” is pretty hard to sustain.  His opponents opted to demonise him, and it quickly became such a caricature that even the remnant of the accusation that is true tended to get written off.  Feminists still don’t seem to have realised that rights-feminism and objectification of women go hand in hand; removing responsibility from men frees them to be the alpha male.

The accusation that Trump is in the Russians’ pockets is a puzzling one.  It’s based on the idea that Russian state-sponsored hackers stole and leaked documents to make the Democrats look bad.  There is a certain amount of evidence that is “consistent with” that picture, but it seems a long shot to call it certain.  That and a couple of vaguely positive remarks that Trump has made about Putin are the sum of the evidence.

What’s puzzling is this:  Why would Clinton make an issue of this?  Her own skeletons here are positively bursting out of the closet.  For her the concern is not Russia but the Middle East.  The Clinton Foundation’s receipts of hundreds of millions of dollars from Middle-Eastern governments are well documented, and the correlations between that money and things that happened at the State department are at least near enough to make a curious person raise their eyebrows.  Why risk bringing that up?  And why didn’t Trump’s campaign bring it up?  My guess there, at least, is simple political ineptness.  This was a campaign run on domestic concerns and in simple, single-colour slogans.  Lining up funny money from overseas with complex deals done with foreign governments in a way that would resonate with the electorate was just too hard.

The accusation that Trump will start World War III, that the nuclear codes would not be safe in his hands, is frankly bizarre.  The man is an isolationist protectionist.  If someone attacks the United States then, have no doubt, the response will be swift and brutal.  But an expeditionary warrior he is not.  Far from provoking confrontation, he’s more likely to let allies be steamrollered by another power without concern.  The dual concerns that he might start World War III and that he’s too friendly with the Russians just don’t line up.

One of the candidates in this election did more-or-less advocate provoking World War III and it wasn’t Trump.  When I asked on Facebook if anyone could name a policy of the Clinton campaign, the only one that anyone could come up with was a no-fly zone over Aleppo.  And guess who’s bombing Aleppo?  That’s right:  Clinton openly advocated policy that would necessitate the USA shooting down Russian jets.  Again the lack of self-consciousness in the people making these accusations is, well, surprising.

And so we come to Maher’s most terrifying accusation:  “Once fascists get power, they don’t give it up.  You’ve got President Trump for life.”  He’s saying that Trump will find a way either of rigging the next election or of avoiding it altogether.

I guess I can’t rule out that he’s right.  My history of predicting politics is, after all, pretty bleak.  I thought brexit was a losing cause.  I had a tenner on Michael Gove for PM.  I thought Tony Abbott would make a good PM.  Only a few months ago, I predicted that Trump would win the primary but crash in the election.  But haven’t we heard this before?  Does anyone remember 2008 and the last days of the Bush Jr presidency?  There was a serious segment of the internet that firmly believed that Bush Jr would declare martial law and suspend presidential elections.  You can still find articles on the Huffington Post, published this year, by people who really believe that Bush was threatening to do so if some piece of legislation or other wasn’t passed (eg this).  If you’re looking for the origins of popular fake news on the internet, this wouldn’t be a bad place to start.  It’s especially perplexing when another accusation is that Trump doesn’t really want to be president and only won by mistake.  The two don’t really go hand in hand.

And, again, it seems an odd objection from a left wing that is so vulnerable on this point.  While it’s true that Obama has made relatively few executive orders during his presidency, he does seem to have a knack for making ones that get invalidated by the courts, and of those that remain, some make big policy in important areas where it’s hard to see how they shouldn’t also be invalidated.  An executive order can’t contradict the constitution or statute law; how then can “deferring” the deportation of illegal immigrants, which is required by statute law, be a valid presidential order?  I guess the argument is that he’s not cancelling their deportation, just scheduling a date for it that’s not in the near future.  My argument here is not about the attractiveness of one immigration policy over another, it’s about the rule of law and the ends not justifying the means; it’s hard not to call this kind of sophistry what it is.  It’s hard to avoid the conclusion that he’s tried to rule without Congress.

It’s not only Obama’s legacy.  The thousands of people out on the streets this week shouting, “Not my president,” “Mr Hate leave my state,” and “Dump the Trump,” don’t seem to realise the irony, the nearly-five-million people who have signed a petition on change.org urging electors to vote against their mandate and elect Clinton don’t seem to realise the irony, that for fear of a president who might refuse to give up power they are advocating overturning the result of a valid election against the will of the electorate (see this fine article for a good analysis of why even the attempt is a rotten idea – in summary, changing the rules after the election so your candidate wins is a slippery slope the other side will eventually burn you with, and it’s especially pointless when the chances of it coming off are as slim as they are this year).

“Trust me,” says Maher, “This one’s different.”  “Trust me.”  People didn’t trust him.  Why?  Because he’s lied to them repeatedly, and it seems fairly certain he’s lying again.  The problems of the right are an unhealthy streak of xenophobia and racism; the problems of the left are that they have burned their own trustworthiness on the altar of electoral success.  If the left wants to succeed, they need to rebuild some of that trust.  It’s going to take a lot of time and a lot of honesty, and there are not many signs of it so far.

Making an ESP8266 Web-Accessible

I’ve spent a few hours recently making an ESP8266 web-accessible. Here I’m documenting my progress for the benefit of others.

Requirements

My project is to control some lamps around my house from my smartphone.  There are lots of ways of solving this, such as WiFi-enabled power sockets or having a slave with an email account to sit next to the switch.  My requirements boil down to these:

  • Reasonably cheap.  WiFi-enabled switches seem to start at around £25 a go.  £75 to control three lamps is too much.
  • Accessible from outside my home network.  It can’t depend on being on the same WiFi network.
  • Secure.  No-one else should be able to twiddle my lights.  This is not to be another IoT project where security is an afterthought (or non-thought).
  • It’s not a hard requirement, but I didn’t want to depend on a free messaging server.  Too many of them don’t worry too much about privacy or security.

The Setup

This diagram shows the overall architecture:

arch

I’ve bought some radio-controlled power sockets off eBay.  They come with a remote control which uses a 433MHz radio to send a 24-bit control signal.  I’ve bought an ESP8266 and a 433MHz transmitter/receiver module pair.  The receiver is only useful for sniffing the codes sent by the remote.  When you first plug one of the sockets in, you have to hold down a button on the remote to ‘programme’ the switch to recognize that button on the remote.  It’d be possible to make the ESP8266 send a new code when you press a particular button on the website, to programme a socket, but I haven’t bothered so far.

I’m programming the ESP8266 using the Arduino ESP8266 platform available on GitHub here.

The ESP8266 connects to my home WiFi network and, through that, to my cloud virtual private server (VPS).  This is a virtual machine running on someone’s cloud (in my case WebSound).  It costs me £2.50 per month.  I actually already had this server available for a different project I maintain, so it hasn’t added anything to the cost.  I have a DNS registration through GoDaddy, so I just added a new host name to the DNS configuration so the same machine now has two separate names (ie. a.example.com and b.example.com).

The communication from the ESP8266 to the VPS uses the MQTT protocol.  MQTT originally stood for MQ Telemetry Transport.  You can read about it online, but basically it’s a protocol for publish-subscribe communication.  Some clients subscribe to topics and others publish messages on those topics; the subscribers receive the messages published by the publishers.  In my case, I’ve opted for MQTT encrypted using TLS.  By requiring that clients connect using certificates that I’ve signed using my CA or master key, this solves both the authentication and encryption problems on the ESP8266 side.

Also running on the VPS is a web server.  The front end of this is an nginx instance, mainly because it does everything I need and I was already using it for the other project.  The webserver is running two “virtual servers” (it’s easy to get lost in the various types of “virtual” here).  This means that you get a different “virtual server” depending on which DNS name you use for the host – if someone looks up a.example.com they get my web-enabled ESP8266, if they look up b.example.com they get my other project.  The nginx part handles this as well as encryption using TLS.

The back end of the webserver is using Flask, a Python micro-web-framework.  Flask implements an API called WSGI, which is a generic interface between web servers and Python web applications.  You can run a Flask application by just starting a Python interpreter and loading your web service module.  This is fine for debugging, but in production you want cool features like multiple workers and resilience to exceptions; you get these for free with a WSGI container, in my case gunicorn.  This loads your web service module and runs a configurable number of worker processes to handle the incoming load.
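
For the curious, WSGI itself is tiny.  Here’s a minimal sketch of the interface (not part of this project, just an illustration): any callable with this signature can be served by gunicorn, and Flask’s application object is a much cleverer version of exactly this.

# A bare WSGI application: the container calls it once per request and
# sends back whatever iterable of bytes it returns.
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from bare WSGI\n']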

The web server uses the Paho library to connect to the MQTT server, again encrypted using TLS.

Web Server Setup

My VPS is running Ubuntu Server 14.04 (my VPS host spun up a new instance for me with this already installed).  This makes installing the various bits and pieces fairly straightforward:

$ sudo apt-get install nginx python3.4 python-virtualenv mosquitto git supervisor openssl
$ virtualenv -p /usr/bin/python3.4 ha
$ . ha/bin/activate
(ha) $ pip install Flask Flask-SQLAlchemy Flask-Login python-social-auth paho-mqtt gunicorn psycopg2

Here’s a quick overview of what I’ve installed:

  • nginx – front end web server
  • python3.4, python-virtualenv – the Python interpreter and a tool called virtualenv for setting up isolated Python configurations.  This lets me have a different set of installed Python packages for each project.  In this case, I’ve created a new virtualenv called ‘ha’ and activated it.
  • mosquitto – MQTT server
  • git – source control system used for getting the Let’s Encrypt software
  • supervisor – a service monitoring framework
  • openssl – tools for creating certificates etc
  • Flask – back end WSGI framework
  • Flask-SQLAlchemy – a Python framework for accessing databases
  • Flask-Login – session handling for Flask; the authentication code later on imports it (via flask.ext.login)
  • psycopg2 – the PostgreSQL driver SQLAlchemy uses; building it needs the libpq-dev and python3.4-dev packages from apt
  • python-social-auth – a Python framework for authenticating against social media services, such as Google.
  • paho-mqtt – a library for accessing MQTT servers from Python
  • gunicorn – a Python WSGI container

Certificates

All the encryption involved uses the public key infrastructure (PKI).  Public-private key cryptography relies on a pair of keys.  Anything encrypted using one of the keys can only be decrypted by using the other key in the pair.  So you can publish your public key and keep your private key private.  Now you can encrypt something with your private key and anyone with your public key can check that it was really you that encrypted it; this is a sort of digital signature.  And someone with your public key can use it to encrypt a message and send it to you; only you can read it.

This is extended to the concept of signed certificates.  A certificate is a public key that is signed with someone else’s private key.  That “someone else” is a certificate authority or CA.  The CA is saying that this public key really belongs to the person claiming to own it and, so long as no-one has managed to steal either of their private keys, it’s cryptographically verifiable.

Who you choose to be your CA depends on what you’re using the certificate for.  Web browsers have a list of “trusted” CAs and their public keys built in to them.  If you want to run an encrypted website, you really need to have a certificate that’s signed by one of these trusted CAs or your users will get horrible warnings when they try to access your site.  It used to be that getting a certificate signed by one of these trusted CAs was hideously expensive – tens of thousands of pounds.  These days, there is the Let’s Encrypt project.  They are a CA who give away certificates.  You can’t use them to prove that you are who you say you are, but you can use them to prove that you control the server the certificate is issued for.  This is enough to set up a trusted HTTPS web server.  The only people who are likely to have trouble with it are those running Windows XP and frankly they deserve everything they get.

On the MQTT side, I have created my own root certificate and use it to sign the certificates used by the ESP8266 and the web server.  These are never publicly visible and all I want to be certain of is that the certificates used to connect to mosquitto are ones that I have signed, not someone trying to connect without a certificate I issued.

Create your CA certificate/key pair like this:

$ openssl req -new -x509 -days 1095 -extensions v3_ca -keyout ca.key -out ca.crt

This creates a new key and certificate pair that we will use to sign other certificates.  A couple of things to note here:

  • Keep your key (ca.key) safe!  Anyone who gets hold of it can sign certificates and you won’t be able to tell you didn’t sign them.
  • Mark in your diary now a day just short of three years from now to renew your certificates.  That 1095 on the command line is three years, and after that your CA certificate will expire.  You are not likely to remember this until everything stops working – rather embarrassing.  (A quick way to check the expiry date is sketched below.)
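
Here’s a small helper I’d use to check that date (a hedged sketch, assuming you have the pyca/cryptography Python package available – it isn’t part of the install list above):

# Print the expiry date of a certificate so renewal doesn't sneak up on you.
from cryptography import x509
from cryptography.hazmat.backends import default_backend

with open("ca.crt", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read(), default_backend())

print("ca.crt expires on", cert.not_valid_after)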

Next, create a key and a certificate request for mosquitto:

$ openssl genrsa -out mosquitto.key 2048
$ openssl req -out mosquitto.csr -key mosquitto.key -new

And sign it with your CA key:

$ openssl x509 -req -in mosquitto.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out mosquitto.crt -days 1095

You now have a key/certificate pair, called mosquitto.key and mosquitto.crt.  Put them in /etc/mosquitto/certs.  Also, put ca.crt (NOT ca.key) in /etc/mosquitto/ca_certificates.

Repeat the process above to create a key/certificate pair for your ESP8266 (esp8266.key / esp8266.crt) and for the Python framework (server.key / server.crt).

Nginx configuration

We start by getting a certificate from Let’s Encrypt.  Stop nginx:

$ sudo service nginx stop

Next, download the Let’s Encrypt software and request a certificate

$ git clone https://github.com/letsencrypt/letsencrypt
$ cd letsencrypt
$ sudo ./letsencrypt-auto certonly --standalone

I had to run this a few times before it worked, apparently just because the Let’s Encrypt servers are a bit overloaded. It will ask you some questions along the way, most importantly what the server you want the certificate for is called. You have to get this right, and it has to be the server where you are actually running these commands!

Once this completes successfully, you should have a directory called /etc/letsencrypt that contains the key and certificate that has been created.  In particular, you should have /etc/letsencrypt/live/a.example.com (substitute in the name of your site) which will contain:

  • cert.pem – your web server’s certificate
  • chain.pem – the intermediate CA certificate(s) linking your certificate back to a trusted root
  • fullchain.pem – combination of cert.pem and chain.pem
  • privkey.pem – your web server’s private key

Now create a file called /etc/nginx/sites-available/a.example.conf (the actual name of this file doesn’t matter, so long as it’s in the right directory) and put this in it:

# Declare a back-end server listening on port 8100 that we will
# hand requests on to
upstream ha {
    server 127.0.0.1:8100;
}

# Declare our server
server {
    listen 443 ssl; # Listening on the default SSL port for SSL connections
    server_name a.example.com;
    client_max_body_size 1000M; # You could reduce this...
    keepalive_timeout 15;

    # Configure it to use our shiny new certificate
    ssl_certificate /etc/letsencrypt/live/a.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/a.example.com/privkey.pem;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Only use newer protocols
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'; # Only use reasonably secure ciphers
    ssl_prefer_server_ciphers on;

    location / {
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Protocol $scheme;
        proxy_pass http://ha; # This is the back end we declared above
    }

    location /static/ {
        alias /home/me/ha/static/; # /static/foo.css is served from /home/me/ha/static/foo.css
    }
    location /robots.txt {
        root /home/me/ha/static;
    }
    location /favicon.ico {
        root /home/me/ha/static;
    }
}

This sets up the server so that it serves /robots.txt, /favicon.ico and anything under /static/ straight from the filesystem, and forwards everything else on to our Python framework which is (or will be) listening on port 8100.  We serve these files directly in nginx because it is a lot more efficient at doing so than Flask is.

Python configuration

Log in to the VPS and activate our Python virtualenv:

$ . ha/bin/activate
(ha) $

Now, create a basic web service in ~/ha/project/ha/wsgi.py with this content:

from flask import Flask
application = Flask(__name__)
application.config.from_object('ha.settings')
@application.route('/')
def index():
    return 'Hello, world!'

This just returns the string ‘Hello, world!’ whenever someone tries to access our server.  By the way, the ‘ha’ name that keeps coming up isn’t special, it’s just what I decided to call my project (originally it stood for ‘home automation’ though my ambitions have got less grand).
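
Before gunicorn and nginx enter the picture, it’s worth checking this works on its own.  A small addition I find handy (mine, not in the original file) at the bottom of wsgi.py lets you run it with the Flask development server:

# Quick local test only: run 'python3 -m ha.wsgi' from ~/ha/project.
# In production gunicorn imports 'application' and this block never runs.
if __name__ == '__main__':
    application.run(port=8100, debug=True)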

Next create ~/ha/project/ha/gunicorn.conf.py:

command = '/home/me/ha/bin/gunicorn'
pythonpath = '/home/me/ha/project'
bind = '127.0.0.1:8100'
workers = 2
user = 'me'

This is just a Python file with variables to configure the WSGI container, gunicorn.  It will run two worker processes and listen on port 8100 (this has to match what you put in the nginx configuration).

Lastly, we’ll use a programme called ‘supervisor’ to run gunicorn as a service.  Create /etc/supervisor/conf.d/ha.conf:

[group:ha]
programs=gunicorn_ha

[program:gunicorn_ha]
command=/home/me/ha/bin/gunicorn -c gunicorn.conf.py -p gunicorn.pid ha.wsgi
directory=/home/me/ha/project
user=me
autostart=true
redirect_stderr=true
environment=LANG="en_US.UTF-8",LC_ALL="en_US.UTF-8",LC_LANG="en_US.UTF-8"

Now you can start your Python framework service (if supervisor was already running, you may need to run sudo supervisorctl reread followed by sudo supervisorctl update first so it picks up the new config file):

$ sudo supervisorctl start ha:*

Supervisor is actually quite sophisticated and we’re not really using much of its capabilities here.  It can set up groups of programmes that can be started and stopped together but we have just one group with one programme.

Finally, restart nginx:

$ sudo service nginx restart

and you should be able to open https://a.example.com/ (or whatever your server is called) and it should respond with ‘Hello, world!’.  Phew!

Mosquitto configuration

Create /etc/mosquitto/conf.d/local.conf (actually the name doesn’t matter, so long as it’s in that directory and ends in ‘.conf’):

allow_anonymous false
password_file /etc/mosquitto/passwd

listener 8883
cafile /etc/mosquitto/ca_certificates/ca.crt
keyfile /etc/mosquitto/certs/mosquitto.key
certfile /etc/mosquitto/certs/mosquitto.crt
require_certificate true

Create the password database like this:

$ sudo mosquitto_passwd -c /etc/mosquitto/passwd esp8266_user
$ sudo mosquitto_passwd /etc/mosquitto/passwd python_user

Enter a suitable password when prompted.  Restart mosquitto:

$ sudo service mosquitto restart

You should now have an MQTT server listening on port 8883.  It will only accept clients that present a certificate signed by your CA.

Code

I won’t present all my ESP8266 code here, just the bit to do with connecting to the MQTT server.

I’m using the Arduino platform support for ESP8266; if you’re using some other programming system for the ESP8266, you’ll have to figure out things for yourself.

Note that the ESP8266 is just powerful enough to do TLS encryption.  You’ll need to make sure the CPU is running at 160MHz, not 80MHz or else the hardware watchdog trips while connecting to the server and the thing reboots.  Even running at 160MHz I’ve had it reboot occasionally if the server takes too long to respond.

I’m using the pubsubclient Arduino library available here for the MQTT connection.  It has a port to ESP8266 and a reasonable ESP8266 example; the hard thing is that the example doesn’t cover using TLS.  Here’s a brief sketch of how it’s done:

#include <ESP8266WiFi.h>
#include <PubSubClient.h>
#include "certificates.h"

WiFiClientSecure espClient;
PubSubClient client(espClient);

// Called by PubSubClient whenever a message arrives on a topic we have
// subscribed to.
void callback(char* topic, byte* payload, unsigned int length) {
    // Act on incoming commands here (e.g. key the 433MHz transmitter)
}

void setup() {
    WiFi.begin("my_ssid", "my_password");
    while(WiFi.status() != WL_CONNECTED)
        delay(500);
    espClient.setCertificate(certificates_esp8266_bin_crt, certificates_esp8266_bin_crt_len);
    espClient.setPrivateKey(certificates_esp8266_bin_key, certificates_esp8266_bin_key_len);
    client.setServer("a.example.com", 8883);
    client.setCallback(callback);
}

void reconnect() {
    while(!client.connected()) {
        if(client.connect("ESP8266Client", "esp8266_user", "your_password_here")) {
            // Resubscribe to all your topics here so that they are
            // resubscribed each time you reconnect
        } else {
            delay(500);
        }
    }
}

void loop() {
    if(!client.connected()) {
        reconnect();
    }
    client.loop();
    // Your control logic here
}

Basically, to connect to MQTT using TLS, you just use WiFiClientSecure instead of WiFiClient and set the certificate and private key before connecting.  The tricky bit really is generating the file certificates.h.  OpenSSL by default generates keys and certificates in a human-readable text format (PEM), but here we need them in binary (DER).  OpenSSL can get us part of the way by converting the text files to binary ones; Ubuntu provides the xxd utility to get us the rest of the way by converting each binary file to a C array definition we can use directly in our code.  First convert the text files to binary ones, putting the result in a directory called certificates:

$ mkdir certificates
$ openssl x509 -in esp8266.crt -out certificates/esp8266.bin.crt -outform DER
$ openssl rsa -in esp8266.key -out certificates/esp8266.bin.key -outform DER
$ xxd -i certificates/esp8266.bin.crt > certificates.h
$ xxd -i certificates/esp8266.bin.key >> certificates.h

Now if you look in certificates.h, you will find two C arrays called certificates_esp8266_bin_crt and certificates_esp8266_bin_key, and two variables with their lengths, which you can use directly when setting up the WiFiClientSecure instance.
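
If you don’t have xxd handy (or would rather keep everything in Python), here’s a rough equivalent I’d sketch for the same conversion.  It mimics xxd’s array-naming scheme, so double-check the generated names against what your sketch expects:

# Rough Python 3 stand-in for 'xxd -i': dump each DER file as a C array
# plus a matching length variable, writing everything to certificates.h.
def to_c_array(path):
    name = path.replace('/', '_').replace('.', '_').replace('-', '_')
    data = open(path, 'rb').read()
    body = ', '.join('0x{:02x}'.format(b) for b in data)
    return ('unsigned char {0}[] = {{ {1} }};\n'
            'unsigned int {0}_len = {2};\n'.format(name, body, len(data)))

with open('certificates.h', 'w') as out:
    for path in ('certificates/esp8266.bin.crt', 'certificates/esp8266.bin.key'):
        out.write(to_c_array(path))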

There are quite a few things that can go wrong in this process and I can’t remember all of the dead ends I went down.  Here are a few:

  • All the certificates have to be signed by your CA!
  • Remember to set the ESP8266 to run at 160MHz.
  • You may also have to disable the software watchdog timeout – use ESP.wdtDisable() to do this.
  • Error messages at both the mosquitto end and the ESP8266 end are pretty unhelpful.  Make sure all your certificates are set up right and that your username and password are correct; otherwise you just get generic TLS failure messages with no indication of what is wrong.

Python MQTT Connection

Again I’m not going to paste all my code here, but here’s a sketch of how to connect to the MQTT server from Python:

import paho.mqtt.client as paho
client = paho.Client()
client.tls_set("ca.crt", certfile="server.crt", keyfile="server.key")
client.username_pw_set("python_user", password="your_password_here")
client.connect("a.example.com", port=8883)
client.loop_start()

There are a couple of options for how to make the client loop; you can either set off a background thread as I have done above, or you can call .loop() regularly to process incoming messages.

Using MQTT

MQTT, as I said above, is a publish-subscribe messaging system.  So, in my example, I’ve assigned an ID to each ESP8266.  Each ESP8266 then subscribes to a topic whose name is this ID, in hexadecimal.  So if the device ID is 0xDEADBEEF1234, it subscribes to ‘/DEADBEEF1234’.  The webserver then publishes commands on this topic which the ESP8266 broadcasts over the 433MHz radio.

For more complex scenarios, more complex topics can be constructed.  For instance, I could have a topic called ‘/DEADBEEF1234/temperature’ where the ESP8266 could publish a measured temperature, ‘/DEADBEEF1234/command’ where the webserver could publish commands and so on.  You can subscribe to ‘/DEADBEEF1234/+’, which would match either ‘/DEADBEEF1234/temperature’ or ‘/DEADBEEF1234/command’, or you can subscribe to ‘/DEADBEEF1234/#’ which will match either of them as well as ‘/DEADBEEF1234/measurements/measurement1’ and so on.  That is, a ‘+’ matches any string but only one path level, while ‘#’ matches any string and any number of path levels.
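
To make the wildcard rules concrete, here’s a rough Python sketch using the same paho client as before; the device ID and topic names are just illustrative, and the connection details are the ones from the earlier snippet:

import paho.mqtt.client as paho

DEVICE_ID = 'DEADBEEF1234'   # illustrative device ID

def on_message(client, userdata, msg):
    # Called for every message that matches one of our subscriptions.
    print(msg.topic, msg.payload)

client = paho.Client()
client.on_message = on_message
client.tls_set("ca.crt", certfile="server.crt", keyfile="server.key")
client.username_pw_set("python_user", password="your_password_here")
client.connect("a.example.com", port=8883)

client.subscribe('/' + DEVICE_ID + '/+')   # one level: .../temperature, .../command
client.subscribe('/' + DEVICE_ID + '/#')   # any depth: .../measurements/measurement1
client.publish('/' + DEVICE_ID + '/command', 'lamp1 on')
client.loop_forever()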

At the moment, I’m using the value returned by ESP.getChipId() as the unique identifier.  This has a couple of issues:

  • It’s only 24 bits, so there are only ~16.8 million unique devices possible.  That’s not a small number, but I don’t think it’s impossible that more ESP8266 devices than that will be made.  Collisions would be bad – it would let two people turn each other’s lights on and off (for example).
  • It’s the bottom 24 bits of the MAC address.  This is a bit more serious.  Anyone who can connect to the WiFi network can see the device’s MAC address, and once you know the MAC address you are about half way to being able to control it.  My database setup means that I record which users are associated with which devices and only those users can control those devices through the web interface.  But if someone got hold of a client key and certificate then they could use them to connect directly to the MQTT server and control any device for which they know the MAC address.  Or they could just flood the system with messages for random device IDs and see what havoc they could cause!  And where would they get the client key and certificate?  Well, they come burned into the flash of the ESP8266 device.  So once I start selling these, I’m also giving away the key to connect directly to the MQTT server, if someone is keen enough to download the flash from the ESP8266 and figure out which bits are the key, certificate, username, password and hostname for the MQTT server.  I haven’t figured out a good way around this yet!

Web Client Authentication

Authenticating web users is one of the hardest bits of this to get working, and one of the hardest bits to get right.  Once again I’ll sketch how I went about it here, but whether what I’ve done will be suitable for you will depend a lot on what you want to achieve.  In particular, I just wanted to require that a user have a Google account, and for Google to authenticate them for me.  If you want to use other services, you’re on your own from here.

First you need to set up an application in your Google developer’s console.  Go to https://console.developers.google.com/.  Create a new project.  Open the API Manager screen and click on Credentials.  Create an OAuth Client ID.  You will have to fill in some details for the OAuth consent screen the user will be presented with when logging in.  Note the client ID and client secret.  The type should be set to ‘Web Application’.

You need an object type to store user details in.  I’m using a PostgreSQL database to store my data and SQLAlchemy to access it in Python.  That’s not very important, so long as you have some sort of User object.  Here’s the guts of mine:

from sqlalchemy import create_engine, Column, Integer, String, Boolean
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
from ha.settings import DB_USER, DB_PASS
from flask.ext.login import UserMixin

# ha is the database instance name
engine = create_engine('postgresql+psycopg2://' + DB_USER + ':' + DB_PASS + '@localhost/ha')
db = declarative_base()
db_session = scoped_session(sessionmaker(autocommit=False, autoflush=False, bind=engine))

class User(db, UserMixin):
    __tablename__ = 'users'
    id = Column(Integer, primary_key = True)
    email = Column(String)
    active = Column(Boolean, nullable = False, default = False)
    fname = Column(String)
    lname = Column(String)
    accesstoken = Column(String)
    username = Column(String(400))

    @property
    def is_authenticated(self):
        return True

    @property
    def is_active(self):
        return self.active

    @property
    def is_anonymous(self):
        return False

    def get_auth_token(self):
        return self.accesstoken

I’ve put this in models.py, putting it in a module called ha.models.  Don’t forget to create the matching schema in your database.
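
One hedged way to create that schema (a shortcut of mine, not necessarily how you’d want to manage it long-term) is to let SQLAlchemy generate it from the models:

# Emit CREATE TABLE statements for every model registered on the
# declarative base ('db' in ha/models.py above).
from ha.models import db, engine

db.metadata.create_all(engine)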

Then in my wsgi.py:

import ha.models, ha.settings

from flask import Flask, redirect, render_template, session, jsonify, g, url_for, abort
from flask.ext.login import LoginManager, current_user, login_user, logout_user, login_required

from social.apps.flask_app import routes
from social.apps.flask_app.routes import social_auth
from social.apps.flask_app.template_filters import backends
from social.apps.flask_app.default.models import init_social
from social.apps.flask_app.default import models

application = Flask(__name__)
application.config.from_object('ha.settings')
application.register_blueprint(social_auth)
init_social(application, ha.models.db_session)

login = LoginManager()
login.login_view = 'login'
login.init_app(application)

@login.user_loader
def load_user(id):
    return ha.models.db_session.query(ha.models.User).get(int(id))

@login.token_loader
def load_user_from_token(token):
    return ha.models.db_session.query(ha.models.User).filter(ha.models.User.accesstoken == token).first()

@application.before_request
def global_user():
    g.user = current_user

@application.teardown_appcontext
def commit_on_success(error=None):
    if error is None:
        ha.models.db_session.commit()
    else:
        ha.models.db_session.rollback()

    ha.models.db_session.remove()

@application.context_processor
def inject_user():
    try:
        return {'user': g.user}
    except AttributeError:
        return {'user': None}

application.context_processor(backends)

@application.route('/login')
def login():
    return redirect(url_for("social.auth", backend="google-oauth2"))

@application.route('/')
@login_required
def index():
    if g.user.is_anonymous:
        return redirect(url_for('login'))
    if not g.user.is_active:
        return redirect(url_for('nouser'))
    return render_template('index.html')

@application.route('/logout')
@login_required
def logout():
    logout_user()
    return redirect('/')

@application.route('/nouser')
def nouser():
    return render_template('nouser.html')

There’s a lot to get your head around here.  Essentially, the python-social-auth package handles authentication for you.  Given the configuration at the beginning, you can then decorate any web endpoint with ‘@login_required’ and Flask will automatically redirect any non-logged-in user to the Google login service before letting them access that page.

When someone new accesses your service, they’ll get registered in your database.  You’ll then have to have some mechanism to check that they really are someone you want accessing your service and make them active.  At the moment I’m the only user of my service and I’ve manually updated my database to make my account active; if you’re aiming for the big time, you’ll need something better than this.
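
For reference, my manual activation amounts to something like this (a sketch against the models above; the email address is obviously made up):

# Flip the 'active' flag for one user so the login_required views let them in.
from ha.models import db_session, User

user = db_session.query(User).filter(User.email == 'me@example.com').first()
user.active = True
db_session.commit()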

I think the only thing left that is very mysterious here is the ‘ha.settings’ module.  It is just a file called settings.py which contains:

SECRET_KEY='something nice and random here'
SESSION_COOKIE_NAME='ha_session'
SESSION_PROTECTION='strong'

SOCIAL_AUTH_LOGIN_URL='/login'
SOCIAL_AUTH_REDIRECT_URL='/'
SOCIAL_AUTH_LOGIN_REDIRECT_URL='/'
SOCIAL_AUTH_USER_MODEL='ha.models.User'
SOCIAL_AUTH_AUTHENTICATION_BACKENDS=('social.backends.google.GoogleOAuth2',)
SOCIAL_AUTH_USERNAME_IS_FULL_EMAIL=True
SOCIAL_AUTH_REDIRECT_IS_HTTPS=True

SOCIAL_AUTH_GOOGLE_OAUTH2_KEY='Your Google oauth2 key here'
SOCIAL_AUTH_GOOGLE_OAUTH2_SECRET='Your Google oauth2 secret here'

DB_USER='Your database username here'
DB_PASS='Your database password here'

SERVER_NAME='a.example.com'

Randomness really does matter in your SECRET_KEY.  Try this to get some reasonable randomness:

$ head -c 32 /dev/urandom | base64
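
Or, if you’d rather do it from Python (a trivial alternative, not what I originally used):

# Generate 32 random bytes and print them base64-encoded, for SECRET_KEY.
import base64, os
print(base64.b64encode(os.urandom(32)).decode())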

And, as a final trap for young players, don’t forget to exclude this file from source control before you push it all to GitHub!

Conclusion

Putting all these together took me quite a few hours of puzzling through stuff.  I hope a description of a real-world setup, with encryption throughout, is helpful to people.  If there are things that aren’t clear, please comment below and I’ll try to clear them up.

Detained Under Section 2 of the Immigration Act

It’s been a weird sort of day.  Long and weird.  It started, more or less, in immigration detention, and has kept weird since.

Yesterday (depending on your time zone) I drove to Adelaide, got on a plane, flew to Singapore, spent five hours more-or-less in a swimming pool, got on another plane and flew to London.  The end of this was me being disgorged onto an unsuspecting Heathrow at a time alleged to be 5:35 before the M, though personally I doubt the evidence for such a time existing.  After a blessedly short wait in a queue, I reached the Border Force officer (apparently Border Agency is so very 2013) who would decide if I was the sort of person who should be let loose on a green and pleasant land.

You never know what to expect with such people.  Sometimes they wave you through with a smile and a nod, other times they want the history of your life and ancestry unto the fourth generation.  I have often suspected that it depends mostly on how near they are to the end of a shift.

This time I am particularly nervous.  Last time I left the UK, my relations with the immigration authorities were decidedly ambiguous.  Not to put too fine a point on it, they were trying to get rid of me.  I suppose, in a way, they succeeded.  How they might feel about my return, even for two weeks, was therefore uncertain, even if they have changed their name in the meantime.

“So you’re staying for two weeks?”

“Yes, that’s right.”

“What’s the reason for your visit to the UK?”

“Part business, part pleasure.  Visiting a client, catching up with friends.”

“Okay.”  That wasn’t too bad.  He puts my passport in his scanner.  “Hmmm.”  His brow furrows.  Uh-oh.

“Have you lived here before?”

“Yes.  I was here for about six years.”  He starts flicking through my (brand new) passport.

“Where’s your visa?”

“It was in my old passport, which has expired.”  And which the Border Agency never quite got around to returning.

“Did you apply for indefinite leave to remain in the UK?”

“Yes, but it was refused.”

“Did you leave within the visa expiry?”

“No.”

“But within the appeals process?”

“Yes.”

“Was the appeal rejected?”

“No, it was allowed, but I decided to leave anyway before that came through.”

“Hmmm.”

I don’t like the sound of that Hmmm.

“I’m going to have to detain you under section 2 of the immigration act.”  Possibly he mentioned a paragraph too, my memory’s a bit hazy on this point.  That sounds bad.

“I…”

“It’s nothing to worry about.” I have my doubts about that, too.  At this point I have a slightly bizarre picture of being shipped to a camp on a Pacific Island, held for several years and then deported.  I’m only staying for two weeks, it’d be quicker and cheaper just to let me in.  I’m aware that such things happen in the world, of course, but my ideas about the world include a pretty firm idea that they don’t happen to me.  Because, when you get right down to it, I’m white, and immigration detention centres aren’t for Our Sort of People.

“I’ll have to confiscate your passport, too.”  A bad sign, methinks.  “If you’d like to just follow me?”  No, I wouldn’t like, but what choice do I have?

Don’t worry, folks.  This is the nice, white-people’s version of immigration detention.  As it turns out, “detention” is about a dozen seats roped off from the queue of people waiting.  He gives me a receipt for my passport, then disappears with it, but comes back ten minutes later with an explanation I didn’t really catch.  As near as I could make out, whoever was doing the paperwork related to my case basically pushed it all to one side when I left, creating the impression in their records that I hadn’t actually left the country and had over-stayed my visa by quite a stretch.  A bit of a problem when here I am trying to enter the country again.  He’s very helpfully fixed their records so that next time I try to enter the country it won’t happen again.  I’m free to go, my passport returned with a shiny new stamp in it.

Since then, I’ve been staying awake and trying to shake a feeling that something is badly wrong.  Staying awake is absolutely key to beating jet-lag, but is easier said than done.  When they turf you out of the airport at 6:30 AM, on the back of four hours of patchy sleep, you think, How hard can it be?  I feel great!  By about midday, your body is starting to say, Hey-ho, time for bed, what?  By the late afternoon, your limbs feel like they are made of lead and every step is an effort.  By bed-time, you’re walking into door frames and forgetting in which order you take your pants and trousers off, and also possibly when it is socially acceptable to do so.

The key to staying awake is to stay active.  It’s often tempting to see if alcohol will help.  After all, you’ve nothing else to do, no car you can drive anyway, and the well-known time-compressing effects of alcohol could be beneficial.  But it can’t be done.  Any drink at all, up to around six minutes before you intend to go to bed, will end your effort to stay awake.  I demonstrated this ably the first time I came to the UK, by falling asleep on a table in the Bayswater Arms Hotel, on the strength of most of a pint of ale.  The staff, all Australian ex-pats, kindly woke me up and gave me a shove in the direction of my hotel.

Caffeine can be helpful, but can also wreak its own particular revenge when it does come time to sleep.  No, the key is activity.  In my case today, coffee with friends in Bath, an excellent midday all-day breakfast at the Wine Bar in the Keynsham high street and a haircut (I have yet to find a barber within 80 miles of where we live in Australia) got me through to about half past two.  Then I found myself at a loose end, and there is nothing more fatal to the staying-awake project.  First you’ll sit down.  Then you’ll rest your head back on the seat.  Just for a minute.  Then you snore.

So, I said jokingly to my wonderful hosts, “I’m going to go and see if Bristol is still there.”  This got a laugh, but as I got off the train at Temple Meads station, I realised that I really was there to see if Bristol is still there.  And the reason I’ve felt all day that something is very wrong is that it is still all there.  So far I can personally confirm that Heathrow Airport, Paddington station, the Great Western Railway, Bath, Keynsham and Bristol are all still there, as of various points in the day, and this has come as something of a shock to me.

It’s an extraordinarily self-centred view of the world, but I expected them all to be largely gone, or at least changed much for the worse.  You see, my leaving England in June was a very stressful time and has resulted in a very large change to my circumstances; I expected, subconsciously, that everything and everyone within about a twenty mile radius (around two million people live in that area) would have also found it terrifically stressful and experienced a very large change for the worse.

But, not only have they got over my departure without any noticeable hiccough, they have even had the gall to go on gradually changing things without me.  Keynsham train station has a pedestrian ramp (finally!)  The Old Bank has changed hands, and the new management is not colour blind (it’s not painted bright pink any more) and, rumour has it, even does vaguely decent food (man was not meant to meddle in such things!)  My ideas of my own importance are suitably recalibrated.

I’ve made it to quarter past six without sleeping; another couple of hours, and I can give up the battle.

Quality, Microsoft, Quality

My day job involves writing software.  Today, actually, it involves porting software from Windows to Linux.  Until now, our software has been built with Microsoft Visual C++ but, strangely enough, VC++ doesn’t support targeting Linux, so we’re having to build it with GCC.

The application is command-line-only.  It involves no networking, only basic file IO, no console input and minimal console output.  For the most part, it is a number-cruncher.  How hard can this be?

Very tricky, it turns out.  Mostly because the Microsoft C++ compiler, even after all these years, is a buggy heap of shite.

To be fair, some would say that VC++ has always been a buggy heap of shite.  The impression that’s put around the industry is that this used to be the case, but now it’s pretty much on a par with other compilers.

Well, I’m here to tell you that it just ain’t so.  VC++ is still full of bugs.  Most of the ones I’ve come across so far have been bugs of the worst sort for maintainability and portability – the compiler allows things that the language specification says are illegal.  So your developers can sit and happily spew out non-conforming code, compile it and run it, never even realising they are writing rubbish that won’t compile anywhere else.

Here’s my top 5 I love to hate:

1.  Preprocessor phases are done in the wrong order.

Seriously.  The standard specifies exactly what order things happen in.  How hard is it to just implement that order?  Specifically, the preprocessor is supposed to remove comments before it expands macros.  It doesn’t.  It means that an enterprising developer can write this piece of genius:

#ifdef INCLUDE_SOME_CODE
#define SOME_CODE
#else
#define SOME_CODE /##/
#endif

That ## in the second definition of the SOME_CODE macro means, ‘Paste these two things together and put the result in the output.’  Of course it works out to a one-line comment.  Someone has found a way to conditionally comment out code – so long as your compiler (specifically the preprocessor) is completely broken.

2.  Templates are parsed at the wrong time.

Seriously.  The compiler is supposed to parse template definitions as it finds them, then instantiate them when they are used.  Microsoft decided that was too easy.  When the VC++ compiler finds a template definition, it sticks its thumb in the page there to remember where that template is defined.  Then, when it finds an instantiation of that template, it goes back and parses the template.  This causes grief, mainly because it means that the compiler already knows what the template arguments are when it parses the template.  This lets it try to be clever.  In the process of being clever, it deviates from the standard in all sorts of ways.  For instance:

template<typename T>
class A {
    My compiler is a smelly, buggy heap of shite.
};

template<>
class A<int>
{
    // Actually do something useful in here.
};

int main() {
    A<int> a;
    return 0;
}

This compiles just fine.  When the compiler is parsing this file, it doesn’t bother to parse the templated class definitions until it gets to the instantiation in the main function.  When it sees

A<int>

it spots that this is an instantiation of the template A.  Of course, it now knows what the template parameter is, so it knows that it only needs to parse the specialisation of the template for T=int, so that’s exactly what it does.  It never bothers to parse the general template class.  Hey, what’s the point, anyway?  It never gets used.

If you can’t see the problem here, your brain needs looking at (or you don’t write software for a living).  Suppose for a minute that the template definition is in a library header file somewhere.  Suppose it’s at the bottom of a long chain of template definitions.  Suppose that chain of template definitions is rarely used; in fact, it’s not used at all in your library code.  Some people who use your library might want to use it in some obscure corner cases, though.  Your library compiles fine, but when they use this obscure feature of your library, their code won’t compile any more, and they get several pages of incomprehensible template error messages.  Whose code is to blame?  It can’t be yours; your library compiles fine!  Your poor user will probably spend several days poring over his code before he even realises it could be your fault.

3.  Dependent and non-dependent namespaces are combined.

I guess this is a consequence of number 2, but I think it deserves a separate item.

When the compiler is parsing a template, it can come across two different types of name, dependent and non-dependent.  Dependent names are ones that depend on the template parameters; non-dependent names don’t.  When the compiler is looking up a non-dependent name, it isn’t supposed to search dependent names.  This is because, until you know what the template parameters are, you don’t know what is a dependent name and what isn’t.

To demonstrate this, consider this line of code:

A<B> C;

What is this?  Is it a declaration of a variable called C with type A<B>?  Or is it two comparison operations, meant to be equivalent to

( A<B ) > C

?  Either would be legal, and without knowing what A, B and C are, you can’t tell.  This is why writing a parser for C++ is so hard and why people designing new languages bang on so long and hard about the importance of context-free grammars.

Now, suppose the compiler finds the above line in a template definition and A, B and C are template parameters.  If template parsing is done right (according to the standard) then the compiler doesn’t know what A, B and C are and so it can’t tell what the statement is meant to mean.  The standard provides a way out, though; if you’re in this situation and you’re trying to declare a variable C of type A<B>, you’re supposed to write:

typename A<B> C;

This tells the compiler that you’re naming a type, not constructing an expression.  It’s rather an ugly kludge, but it’s what we have in the standard.
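
For what it’s worth, the place this rule bites most often is a dependent nested type rather than a declaration like the one above.  Here’s a minimal conforming sketch (S and Holder are names invented purely for the example):

template<typename T>
struct S {
    typename T::type member;   // T::type is a dependent name, so 'typename' is required
    // Without the 'typename', a conforming compiler rejects the declaration.
};

struct Holder {
    typedef int type;
};

int main() {
    S<Holder> s;    // here T::type turns out to be int
    s.member = 42;
    return 0;
}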

But if you do template parsing the Microsoft way, you already know what the template parameters are when you parse the template.  You know what A, B and C are.  You can tell what is meant by that line of code.  What’s the point of issuing an error message, just because the standard says you should?

To drive this home, here’s an example of code with an error according to the standard but which the Microsoft compiler has no problem with:

#include <vector>
template<typename T>
class Base {
protected:
    std::vector<T> data;
};

template<typename T>
class Deriv : public Base<T>{
public:
    Deriv() {
        data.push_back(T());
    }
};

Spotted the problem?  No?  Who would, without a compiler to tell you it’s there?  The error is a name lookup that should fail: data should not be directly available in class Deriv.  The reason is that it is actually Base<T>::data – a dependent name!  Written on its own, data looks like a non-dependent name, so the compiler isn’t supposed to go looking in the dependent base class to find it.  To write this code correctly, you’re supposed to say:

#include <vector>
template<typename T>
class Base {
protected:
    std::vector<T> data;
};

template<typename T>
class Deriv: public Base<T> {
protected:
    using Base<T>::data;
public:
    Deriv() {
        data.push_back(T());
    }
};
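
An alternative fix, for what it’s worth, is to qualify the access with this->, which makes the name dependent and so defers the lookup until the template is instantiated.  A sketch reusing Base from above (Deriv2 is just an invented name):

template<typename T>
class Deriv2 : public Base<T> {
public:
    Deriv2() {
        this->data.push_back(T());   // this-> makes 'data' a dependent name
    }
};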

And, while you’re thinking about how hard that bug was to spot in the original version, remember that the compiler might not even be parsing the content of those templates if they’re not instantiated.

4.  What is a static enum, anyway?

I have no idea who thought this was necessary, or a good idea, but someone had written code like this:

class A {
    static enum B {
        V1
    };
};

It turns out that static isn’t the only possibility; you can declare your enums to be volatile, register or (if your version of VC++ is pre-C++11) auto.  And various combinations of the above.
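
For comparison, all the standard actually allows here is the enum on its own, with no storage-class specifier in front of it.  A trivial sketch (the main() is only there to make it a complete program):

class A {
    enum B {
        V1
    };
};

int main() {
    return 0;
}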

5.  const_iterator is for what again?

Don’t try this at home:

template<typename T>
void Erase(std::vector<T>& t, typename std::vector<T>::const_iterator& ci) {
    t.erase(ci);
}

And you thought const_iterators were harmless…
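
For anyone without an old copy of VC++ to hand: in C++03, vector::erase only takes a plain iterator, so a conforming compiler rejects that call rather than let you modify the container through a const_iterator, while VC++ compiles it happily.  (C++11 later changed erase to take a const_iterator, which rather lets Microsoft off the hook.)  A conforming pre-C++11 version looks something like this sketch:

#include <vector>

template<typename T>
void Erase(std::vector<T>& t, typename std::vector<T>::iterator it) {
    t.erase(it);
}

int main() {
    std::vector<int> v(3, 7);   // three elements, all 7
    Erase(v, v.begin());        // erase the first one
    return 0;
}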

The Devil Incarnate?

The Church of England (and Wales) is in a bit of a fluff over the devil.  Should we ask candidates for baptism, or their parents and godparents, to reject him and all his works?  Or is that too confronting, too difficult for them to understand, too medieval a mode of expression?  Do we even believe in him any more?

Anne Atkins, in today’s Thought for the Day, argues for the existence of the devil:

Michael Green, in his book, I Believe in Satan’s Downfall, points out that we regard the highest forms of life to be those that are sentient, capable of awareness and planning. … God thinks: He speaks, and argues. He feels: He loves and hates. He wills: deciding on action and carrying it out.

If this is so, it is at least a rational supposition that the same could be true of evil. Indeed, otherwise it’s hard to see how evil ultimately exists. The difference between a wicked crime and an unfortunate accident is intent: one is wilful, the other fortuitous. If there is no evil objective behind the sorrows of the world, then they are not wrong but random…

This type of thinking about evil goes back a long way; indeed, it is essentially the dualism of ancient Greek philosophy, which sees good and evil as two great powers in contest over the world.  As rational as the supposition might be, it has never been a Christian view of the world.  It misunderstands the nature of the devil.  And it misunderstands the nature of God.

The devil is not the embodiment of all evil; not evil incarnate.  God expects Satan to do good, just as every other being is expected to do good; on what other grounds is he punished for doing evil?  His punishment is quite clear: "Because you have done this…" (Gen 3:14)  That Satan is created is so fundamental that it is not even mentioned explicitly: "we believe in one God, the Father, the Almighty, maker of heaven and earth, of all that is, seen and unseen."  Satan’s eternal punishment is seen by John: "the devil who had deceived them was thrown into the lake of fire and sulfur where the beast and the false prophet were, and they will be tormented day and night forever and ever." (Rev 20:10)

Satan’s role is not to create evil per se but to drag humanity away from God.  From the very first temptation in the garden, Satan led not to destruction and death but to disobedience to God.  Satan did not afflict Job as an end in itself but so that he could accuse Job.  Jesus was not tempted to use his power to cause misery, suffering and death; he was tempted to bow down and worship Satan.

And God is not merely the embodiment of all good.  God is not good because he has to be good, as though there were some law of goodness to which he is subject.  God is good because he chooses to be good.  If it were not so, what would the temptation of Jesus mean?  Temptation is not temptation if there is not the real possibility of giving in to it; resisting temptation is easy when the thing you are resisting is impossible anyway.  It is very easy for me, say, to resist the temptation of levitation, or of eating nitrogen, or of living in Wales, because those things are simply physically impossible.  And if God’s goodness were not by choice but by compulsion, on what grounds would it be glorious and worthy of praise?

It does not do to ponder a sort of parallel universe where God is not like this; thanks be to God that he is.

Fifty Shades – Five Weirdest Things

Perhaps I should be ashamed of it, but I’ve been reading Fifty Shades of Grey.  The book that’s become famous for its kinky, kooky, oddball practices does indeed contain some strange ideas.  From the sultry heat of Oregon to the Red Room of Pain, here are the five weirdest things I’ve come across:

5. Wine.  Unless you earn approximately $100,000 per hour, you’ve never tasted wine and know nothing about it.  As the sign in my local wine bar has it, “Wine:  How classy people get wasted.”

4. Accents.  British accents are quite common in Washington state, it seems, and always indicate adventure and sophistication.  This girl needs to spend half a day in Leeds some time (I wouldn’t wish more than half a day of Leeds on anyone).  Also, it seems Irish accents are hard to recognise.  Who knew?

3. Wine again.  Pink champagne is the utter height of sophistication.  Apparently.

2. Cars.  Christian is obsessive about Ana’s safety and insists on checking the safety ratings of all the cars he buys her and only purchasing the safest.  It is, then, a shame that neither of the cars he’s bought her so far has been rated by either the NHTSA or Euro-NCAP.  But it’s okay, they’re European-designed cars, and therefore the safest available.  This guy needs to test-drive a Reliant.

1. Breakfast.  Forget all that Red Room of Pain rubbish; bacon with maple syrup is a serious perversion and should be banned immediately.