Reality has Levels

It’s been a while since I blogged. I’ve been busy.

A major theme emerging from ‘writing my book’ is that we humans are very prone to confusing our models of reality with the reality we are modelling.

I started planning with the ‘Freemind mind-mapping tool for hierarchical brains’, before finding that my own creative process had a network architecture and discovering ‘concept mapping’, which uses graphs to represent concepts and propositions. I saw that graphs were what I needed and decided to experiment with building my own software tools from bits I had lying around.

I didn’t have a current programming language, so I set out to learn Clojure. Being a Lisp, Clojure uses tree-structures internally to represent lists and extends the idea to abstractions such as collections, but the only native data structures available to me appeared to be 1-dimensional. I confidently expected to find ways to extend this to graphs of 3 or more dimensions but, despite much reading and learning lots of other things, I’d failed to find what I was looking for. I had in mind the kind of structures you can build with pointers, in languages like ‘C’. There are graph libraries, but I was too new to Clojure to believe my first serious program needed to depend on language extensions when I hadn’t securely grasped the basics.

This morning, I think I ‘got it’. I am trying to build a computational model of my graphical view of a mathematical idea, which in turn models a cognitive model of reality. There was always scope for confusion. Graphs aren’t really a picture; they are a set of 1-dimensional connections, plus any number of views of those connections, each constructed for a particular purpose. I was trying to build a data structure of a view of a graph, not the graph itself, and that was a really bad idea. At least I didn’t get software confused with ‘actual’ reality, so there’s still hope for me.
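A minimal sketch of the idea in Clojure, treating a graph as nothing more than a set of connections and deriving a ‘view’ from it on demand; the node names and the neighbours function are just for illustration, not code from the actual project:

;; The graph is just data: a set of connections (unordered pairs of nodes).
(def connections
  #{#{:reality :model} #{:model :map} #{:map :book}})

;; One possible 'view' of the graph: the neighbours of a given node.
(defn neighbours [graph node]
  (->> graph
       (filter #(contains? % node))
       (map #(first (disj % node)))
       set))

;; (neighbours connections :model) => #{:reality :map}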

Yesterday, I used Clojure/Leiningen’s in-built Test-Driven Development tool for the first time. It looks great. The functional model makes TDD simple.
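For the curious, this is roughly the shape of a test that ‘lein test’ runs, using clojure.test; the myapp namespaces are invented and the neighbours function is the illustrative one sketched above:

(ns myapp.core-test
  (:require [clojure.test :refer [deftest is]]
            [myapp.core :refer [neighbours]]))

(deftest neighbours-test
  (is (= #{:reality :map}
         (neighbours #{#{:reality :model} #{:model :map}} :model))))

Because the function is pure, the test needs no set-up or tear-down, which is a large part of why the functional model makes TDD feel simple.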

When your Netbook falls off its Sky-hook

I have an Eee PC 1000 ‘netbook’ that has been with me for a while. It’s not very fast but then I type quite slowly. It’s become a personal challenge to see how long I can keep using it. It’s always run Linux but the latest version of Ubuntu won’t fit into its (ample, in my opinion) 1GB of memory. I can’t upgrade it in place either because it has 2 SSDs, currently configured as / on the 8GB and /home on the 32GB. A couple of days ago my desktop environment went AWOL.

Trying to deal with the space limitations, I’d tried booting it from a Lubuntu Live memory stick. It seemed quicker with LXDE, but Lubuntu also has leaner apps than the ones I’m used to, so I decided not to install it permanently. I may find another way to keep my current apps but replace Unity with LXDE. Afterwards, I think I rebooted it to check it was OK, but I may have shut down from the login screen. The following morning I logged in and got an empty, frozen desktop display. I couldn’t even open a terminal window, but I found I could log in to the Guest account. Odd. I opened a console window from my normal account and rolled up my sleeves. I had a /home (a different one, I later discovered).

To cut a long story short, the answer was:

$ sudo mount /dev/sdb1 /home

My home directory wasn’t there, and because it wasn’t, my login had fallen back to the original /home folder on the system disk. The Guest account logs in on /tmp on the system disk, so didn’t have a problem. Now I just need to work out what was auto-magically mounting it for me and why it decided to stop.

[ Update: The permanent fix was to find out the UUID of my device with
$ sudo blkid

then add the following line to /etc/fstab

UUID=4b18fe5c-2d2a-4d12-938b-a38046a3cf84 /home ext4  errors=remount-ro 0  0

I still haven’t found the hole in the sky where the hook came detached. ]

Living a Virtual Life

There is a Taoist story about it being impossible to know at the time whether an event is lucky or unlucky. At my age, you start to reflect on how things have gone, from a safe distance.

I planned to go to Birmingham University to study mathematics with a side-order of computer science. My ‘A’ Level results were, to put it mildly, ‘below expectation’, so I scraped into Aston through the Clearing process, to study mathematics, computer science and physics. The teaching language was Algol 68 and the visionary assumption throughout the course was that within a few years all computers would be virtual memory systems. We would never have to worry about physical restrictions on memory allocation. We had a linear address space to play with that could be as big as the available disk space, and there would be a garbage collector to tidy up after me. A few years later, PCs were to make those assumptions invalid.

I actually graduated into a recession caused by a war to the death between Margaret Thatcher and the unions. Many large companies cancelled their graduate recruitment programmes. I was unemployed until just before Christmas, when I took the first job I was offered, as a programmer in a College of Higher Education in Cambridge. I’d never heard of the computer they used. It was one of the first batch of half a dozen DEC VAXes delivered to the UK: a 32-bit super-mini running the new Virtual Memory System OS, VAX/VMS. I specialised in VMS/OpenVMS for the next 25 years, gradually becoming a system manager and a specialist in high-availability clusters and development environments. I had side-stepped Bill Gates’ “No-one needs more than 640K” pronouncement and all the mess that went with it.

I lost direct touch with software development until a few years ago, when I joined an agile team as analyst and decided I wanted to get back into writing code. Initially I picked Python, until I saw a demonstration of Clojure. I knew I had to have it. Clojure designer Rich Hickey says that we can treat disk space as effectively infinite. That has a huge impact on our ability to design software as temporal flow rather than last known state. Servers have become virtual too. Software is doing everything it can to escape the physical realm entirely. I’m holding on for a free ride, hoping to stay lucky, with a link to a virtual copy of ‘The Wizard Book’ on my Cloud-drive. Nothing is Real. I’m not even sure about Time.

Things I used to be Wrong about – Part 1

I get very annoyed about politicians being held to account for admitting they were wrong, rather than forcefully challenged when they were wrong in the first place. Unless they lied, if someone was wrong and admits it, they should be congratulated. They have grown as a human being.

I am about to do something very similar. I’m going to start confessing some wrong things I used to think, that the world has come to agree with me about. I feel I should congratulate you all.

You can’t design a Database without knowing how it will be used

I was taught at university that you could create a single abstract data model of an organisation’s data. “The word database has no plural”, I was told. In my second job, I tried to create a model of all the street furniture (signs and lighting) in Staffordshire. I couldn’t do it. I concluded that it was impossible to know which things were entities and which were attributes. I now know this is because models are always created for a purpose. If you aren’t yet aware of that purpose, you can’t design for it. My suspicion was confirmed in a talk at Wolverhampton University by Michael ‘JSD’ Jackson. The revelation seemed a big shock to the large team from the Inland Revenue. I guess they had made unconscious assumptions about likely processes.

Relations don’t understand time

(They would probably say the same about me.) A transaction acting across multiple tables is assumed to be instantaneous. This worried me. A complex calculation requiring reads could not be guaranteed to be consistent unless all the accessed tables were locked against writes throughout the transaction. Jackson also confirmed that the Relational Model has no concept of time. A dirty fix is data warehousing, which achieves consistency without locking at the cost of guaranteeing that the data is old.
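As an aside, Clojure’s software transactional memory illustrates one escape from this problem: all the reads inside a transaction see a consistent snapshot, without locking writers out for the duration. A minimal sketch, with invented account names and amounts:

(def savings (ref 1000))
(def current (ref 250))

;; A calculation that reads both refs inside one transaction sees a
;; consistent snapshot, even while other transactions are writing.
(defn total-balance []
  (dosync (+ @savings @current)))

;; A concurrent transfer can never let total-balance observe a half-moved amount.
(defn transfer! [amount]
  (dosync
    (alter savings - amount)
    (alter current + amount)))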

The Object Model doesn’t generalise

I’d stopped developing software by the time I heard about the Object Oriented Programming paradigm. I could see a lot of sense in OOP for simulating real-world objects. Software could be designed to be more modular when the data structures representing the state of a real-world object and the code which handled state-change were kept in a black box with a sign on it that said “Beware of the leopard”. I couldn’t grasp how people filled the space between the objects with imaginary software objects that followed the same restrictions, or why they needed to.

A new wave of Functional Programming has introduced immutable data structures. I have recently learned through Clojure author Rich Hickey’s videos that reflecting state-change by mutating the value of variables is now a sin punishable by a career in Java programming. Functional Programmers have apparently always agreed with me that not all data structures belong in an object.
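For anyone who hasn’t met immutable data, a minimal Clojure sketch of what it means in practice, with made-up data:

(def langs #{:basic :algol68})

;; conj doesn't mutate langs; it returns a new set that shares structure with the old one.
(def more-langs (conj langs :clojure))

;; langs      is still #{:basic :algol68}
;; more-langs contains :basic, :algol68 and :clojure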

There are others I’m still waiting for everyone to catch up on:

The Writable Web is a bad idea

The Web wasn’t designed for this and isn’t very good at it. Throwing complexity bombs at an over-simplified model rarely helps.

Rich Hickey’s Datomic doesn’t appear to have fixed my entity:attribute issue

Maybe that one is impossible.

Agility vs Momentum

[ This post is aimed at readers with at least a basic understanding of agile product development. It doesn’t explain some of the concepts discussed.]

We often talk of software development as movement across a difficult terrain, to a destination. Organisational change projects are seen as a lightning attack on an organisation, though in reality they have historically proved much slower than the speed of light. Large projects often force through regime change for ‘a leader’. Conventionally, this leader has been unlikely to travel with the team. Someone needs to “hold the fort”. There may be casualties due to friendly firings.

Project Managers make ‘plans’ of a proposed ‘change journey’ from one system state to another, between points in ‘change space’, via the straightest line possible, whilst ignoring the passage of time which makes change possible. Time is seen as distance and its corollary, cost. The language of projects is “setting-off”, “pushing on past obstacles” and “blockers” such as “difficult customers”, along a fixed route, “applying pressure” to “overcome resistance”. A project team is an army on the march, smashing their way through to a target, hoping it hasn’t been moved. Someone must pay for the “boots on the ground” and their travel costs. This mind-set leads to managers who perceive a need to “build momentum” to avoid “getting bogged down”.

Now let us think about the physics:

  •  momentum = mass x velocity, conventionally abbreviated to p = mv.
    At this point it may also be worth pointing out Newton’s Second Law of Motion:
  • force = mass x acceleration, or F = ma
    (Interpreted by Project Managers as “if it gets stuck, whack it hard with something heavy.”)

What about “agile software developments”? There is a broad range of opinion on precisely what those words mean but there is much greater consensus on what agility isn’t.

People outside the field are frequently bemused by the words chosen as Agile jargon, particularly in the Scrum framework:
A Scrum is not held only when a product development is stuck in the mud.
A Scrum Master doesn’t tell people what to do.
Sprints are conducted at a sustainable pace.
Agility is not the same as speed. Arguably, in agile environments, speed isn’t the same thing as velocity either.

Many teams measure velocity, a crude metric of progress that is only useful for estimating how much work should be scheduled for the next iteration. It is often guessed in ‘story-points’ representing relative ‘size’, but in agile environments everything is optional and subject to change, including the length of the journey.
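To show just how crude the metric is, here is a minimal sketch of the arithmetic in Clojure: a rolling average of recent story-point totals, used only as a guess at capacity for the next iteration. The sprint figures are invented.

(def completed-points [21 18 24 19])

;; Average of the last n sprints' story points; nothing more scientific than that.
(defn velocity [points n]
  (let [recent (take-last n points)]
    (/ (reduce + recent) (count recent))))

;; (velocity completed-points 3) => 61/3, roughly 20 points per sprint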

If agility isn’t speed, what is it? It is lots of things but the one that concerns us here is the ability to change direction quickly, when necessary. Agile teams set off in a direction, possibly with a destination in mind but aware that it might change. If the journey throws up unexpected new knowledge, the customer may wish to use the travelling time to reach a destination now considered more valuable. The route is not one straight line but a sequence of lines. It could end anywhere in change-space, including where it started (either through failing fast or because the value of the journey was exploration rather than transportation). Velocity is therefore current progress along a potentially winding road of variable length, not average speed through change-space to a destination. An agile development is really an experiment to test a series of hypotheses about an organisational value proposition, not a journey. Agile’s greatest cost savings come from ‘wrong work not done’.

Agility is lightweight, particularly on up-front planning. Agile teams are small and aim to carry everything they need to get the job done. This enables them to set off sooner, at a sensible pace and, if they are going to fail, to fail fast, at low cost. Agility delivers value as soon as possible and it front-loads value. If we measured velocity in terms of value instead of distance, agile projects would be seen to decelerate until they stop. If you are light, immovable objects can be avoided rather than smashed through. Agile teams neither need nor want momentum, in case they decide to turn fast.

A Brexit Thought Experiment

I’m a big fan of thought experiments. I like science but I’m too lazy to do real experiments. Why do something when you can think about doing it?

I’ve been observing the political manoeuvring around Brexit and 2nd referendums. I think some people are saying things they don’t really believe, in order to get an outcome they believe to be right, while others are saying things which sound good, to hide the evil swirling beneath the surface.

I asked myself: Which is the greater wrong: doing a good thing for a bad reason or a bad thing for a good reason?

I thought:

‘A good thing’ is highly subjective, depending on your personal values and consequent belief in what is fair. A comparison of ‘bad things’ is probably even more fluid. I see it in terms of the balance between good and harm to self and others. It’s complex.

‘Good’ and ‘bad’ reasons also depend on your personal targets and motivations along with another subjective moral evaluation of those.

An individual may see a good thing as a positive value and a bad thing as a negative value and believe that as long as the sum is positive, so is the whole package. People call this “pragmatism”. They also tell me it is easier to ask for forgiveness than permission. These people get things done and, generally, only hurt other people.

‘A reason’ sounds like dressing up something you feel you want in logic. Is that always reasonable?

We need to balance what we want and our chances of success against the risks and uncertainty of what we might lose or fail to achieve. To measure success objectively, we need to have specified some targets before we start.

Brexit didn’t have either a plan or targets. It appears to be driven by things that people don’t want. How will we know if it has succeeded or failed? We are told the strategy and tactics must be kept secret or the plan will fail and targets will be missed. If this was a project I was working on, I’d be reading the jobs pages every lunch time. I’ve stopped worrying about the thought experiment.

Women’s Day Intuition

The first thing I did yesterday, on International Women’s Day 2017, was retweet a picture of Margaret Hamilton, allegedly the first person in the world to have the job title ‘Software Engineer’. The tweet claimed the pile of printout she was standing beside, as tall as her, was all the tweets asking “Why isn’t there an International Men’s Day?” (There is. It’s November 19th, the first day of snowflake season.) The listings were actually the source code which her team wrote to make the Apollo moon mission possible. She was the first virtual woman on the Moon.

I followed up with a link to a graph showing the disastrous decline of women working in software development since 1985, by way of an explanation of why equal opportunities aren’t yet a done deal. I immediately received a reply from a man, saying there had been plenty of advances in computer hardware and software since 1985, so perhaps that wasn’t a coincidence. This post is dedicated to him.

I believe that the decade 1975 – 1985, when the number of women in computing was still growing fast, was the most productive since the first, starting in the late 1830s, when Ada, Countess of Lovelace, made up precisely 50% of the computer software workforce worldwide. It also roughly coincides with the period between my first encounter with computing, in about 1974, and when I stopped writing software, in about 1986.

1975 – 1985:
As I entered: Punched cards, then a teletype connected to a 24-bit ICL 1900-series mainframe via a 300 Baud acoustic coupler and phone line. A trendy new teaching language called BASIC, complete with GOTOs.

As I left: Terminals containing a ‘microprocessor’, screen addressable via ANSI escape sequences or bit-mapped graphics terminals, connected to 32-bit super-minis, enabling ‘design’. I used a programming language-agnostic environment with a standard run-time library and a symbolic debugger. BBC Micros were in schools. The X windowing system was about to standardise graphics. Unix and ‘C’ were breaking out of the universities along with Free and Open culture, functional and declarative programming and AI. The danger of the limits of physics and the need for parallelism loomed out of the mist.

So, what was this remarkable progress in the 30 years from 1986 to 2016?

Good:

Parallel processing research provided Communicating Sequential Processes and the Inmos Transputer.
Declarative, non-functional languages that led to ‘expert systems’. Lower expectations got AI moving.
Functional languages got immutable data.
Scripting languages like Python and Ruby on Rails, leading to the death of BASIC in schools.
Wider access to the Internet.
The read-only Web.
The idea of social media.
Lean and agile thinking. The decline of the software project religion.
The GNU GPL and Linux.
Open, distributed platforms like git, free from service monopolies.
The Raspberry Pi and computer science in schools.

Only looked good:

The rise of PCs to under-cut Unix workstations and break the Data Processing department’s control. Microsoft took control instead.
Reduced Instruction Set Computers were invented, providing us with a free 30-year window to work out the problem of parallelism, but meaning we didn’t bother.
By 1980, Alan Kay had invented Smalltalk and the Object Oriented paradigm of computing, allowing complex real-world objects to be simulated and everything else to be modelled as though it were a simulation of objects, even if you had to invent them. Smalltalk did no great harm, but in 1983 Bjarne Stroustrup left the lab door open and C++ escaped into the wild. By 1985, objects had become uncontrollable. They were EVERYWHERE.
Software Engineering. Because writing software is exactly like building a house, despite the lack of gravity.
Java, a mutant C++, forms the largely unrelated brand-hybrid JavaScript.
Microsoft re-invents DEC’s VMS and Sun’s Java as 32-bit Windows NT, .NET and C#, then destroys all the evidence.
The reality of social media.
The writeable Web.
Multi-core processors for speed (don’t panic, functions can save us.)

Why did women stop seeing computing as a sensible career choice in 1985, when “mine is bigger than yours” PCs arrived, and reconsider now that everyone at school uses the same Raspberry Pi and multi-tasking is becoming important again? Probably that famous ‘female intuition’. They can see that the world of computing needs real, functioning humans again.