Category Archives: Information Systems

A new target for Software Developers: Sensei.

I originally wrote this as an answer to a question on Quora, but I’m increasingly concerned about the cost of higher education for young people from families that are not wealthy. I had parents who would have sacrificed anything for my education, but I had clever friends who were not so fortunate. The system is bleeding talent into dead-end jobs. Below, I consider other models of training, in the hope of starting a conversation in the technology community and in the political infrastructure that trickles money down into it.

Through learning about ‘Agile’ software development, I became interested in the related ‘Lean’ thinking. It borrows from Japanese cultural ideas and the way the martial arts are taught. I think the idea is that first you do, then you learn and finally you understand (as illustrated by the film ‘The Karate Kid’). That requires a ‘master’ or ‘Sensei’ to guide and react to what s/he sees of each individual’s current practice. It seems a good model for programming too. There may be times when doing is easier if you gain some understanding before you ‘do’, and advice and assistance with problem-solving could be part of this. I’m not alone in thinking this way, as I see phrases like “kata” and “koans” appearing around software development.

I’ve also seen several analogies to woodworking craft, which suggest that a master-apprentice relationship might be appropriate. There is even a ‘Software Craftsmanship’ movement. This could work as well in agile software development teams as it did for weavers of mediaeval tapestries.

A female Scrum Master friend assures me that the word “master” is not gendered in either of these contexts. Of course, not all great individual craftspeople make good teachers, but teams with the best teachers would start to attract the best apprentices.

If any good programmers aren’t sure about spending their valuable development time teaching, I recommend the “fable in novella form” Jonathan Livingston Seagull, written by Richard Bach, about a young seagull that wants to excel at flying.

Small software companies ‘have a lot on’ but how much would they need to be paid to take on an apprentice in their development teams, perhaps with weekly day-release to a local training organisation? I’d expect a sliding scale to employment as they became productive or were rejected back into the cold, hard world if they weren’t making the grade.

Model Software

Today, I had to stop myself writing “solving the problem” about developing software. Why do we say that? Why do software people call any bounded area of reality “the problem domain”?

My change of mind has been fermenting for a while, through modelling business processes, learning about incremental, agile software development and, more recently, writing and learning functional programming. In the shower this morning, I finally concluded that software is primarily a modelling medium. We solve problems using the models we build.

Wanting to create another first-person shooter game or to model the fluids in a thermo-nuclear reactor are challenges, not problems. We build models of systems we have defined and the systems don’t even have to be real. I read a couple of days ago that a famous modern philosopher said our world is made of both reality and our ideas. Assuming the computer hardware is real, the software can model either reality or our imagination; our chosen narrative.

‘Digital’ gets everyone working with software models instead of reality. Once everyone lives inside the shared model, when does it become our reality?

Or when did it?

What I Don’t Know

A Wiki is sometimes described by the ‘backronym’ “What I Know Is”.

Recently I’ve been using Quora. You can ask a question or, if you see someone else’s question and think you know the answer, you can reply. Over time, you become associated with areas of knowledge that interest you. The obvious equivalent to the Wiki translation would be “What I want to know is”, but you always have to use social systems to understand their dynamics. I didn’t ‘get’ Twitter until I’d used it for some time because, at first, nothing happens.

I discovered Quora is really about “What I Don’t Know Is”. It’s obvious that by asking a question, you declare a ‘known unknown’, but human ignorance goes deeper than that. As someone who tries to answer questions, you learn about the gaps in your own knowledge: that you are unable to explain a concept you thought you understood, and that there are things many people don’t know or understand that you had assumed were obvious to everyone. We all struggle to provide good, clear, concise, unambiguous questions and answers, because we don’t know everything.

I discovered that other people’s thinking and motivations are often very different from mine. I wasn’t aware of how much more of a rush many young people are in to ‘be a star’ at something, often without much understanding of what that something is. I never hurried or set targets, so I wasn’t aware of how much I’d learned about life, until I read some of their questions.

A journey of a thousand thoughts can begin with a single question. The view from the far end of that trail may be different. We need to be curious about everything around us rather than too ambitious to arrive at a fixed destination by the fastest or shortest route. You can dream about the future but it may not arrive packaged as you expect it and pieces may be missing. Plan your early moves, travel at a sustainable rate and stay aware. I’m worried that many of the young hopefuls on Quora will burn out before they get close to their targets and become disillusioned.

I’m still hungry to learn. There is so much I don’t know and it’s growing all the time.

Reality has Levels

It’s been a while since I blogged. I’ve been busy.

A major theme emerging from ‘writing my book’ is that we humans are very prone to confusing our models of reality with the reality we are modelling.

I started planning with the ‘Freemind mind-mapping tool for hierarchical brains’ before finding my own creative process had a network architecture and discovering ‘concept mapping’ which uses graphs to represent concepts and propositions. I saw that graphs were what I needed and decided to experiment with building my own software tools from bits I had lying around.

I didn’t have a current programming language, so I set out to learn Clojure. Being a Lisp, Clojure uses tree structures internally to represent lists and extends the idea to abstractions such as collections, but the only native data structures available to me appeared to be 1-dimensional. I confidently expected to find ways to extend this to graphs of 3 or more dimensions but, despite much reading and learning lots of other things, I failed to find what I was looking for. I had in mind the kind of structures you can build with pointers, in languages like ‘C’. There are graph libraries, but I was too new to Clojure to believe my first serious program needed to depend on language extensions when I hadn’t securely grasped the basics.

This morning, I think I ‘got it’. I am trying to build a computational model of my graphical view of a mathematical idea, which models a cognitive model of reality. There was always scope for confusion. A graph isn’t really a picture; it is a set of 1-dimensional connections, plus potentially another set of views of those connections, constructed for a particular purpose. I was trying to build a data structure of a view of a graph, not the graph itself, and that was a really bad idea. At least I didn’t get software confused with ‘actual’ reality, so there’s still hope for me.
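To make that distinction concrete, here is a minimal sketch in Clojure of the direction I mean (the names and data are my own, invented for illustration): the graph itself is just flat, 1-dimensional data, and any ‘view’ of it is computed separately, for a purpose.

```clojure
;; A graph held as plain Clojure data: a set of nodes and a set of
;; directed edges. (Names and data invented for illustration.)
(def concept-graph
  {:nodes #{:reality :model :software}
   :edges #{[:reality :model] [:model :software]}})

;; One possible *view*, built for a purpose: an adjacency map derived
;; from the edges. It is computed when needed, not stored as the graph.
(defn adjacency [{:keys [edges]}]
  (reduce (fn [acc [from to]]
            (update acc from (fnil conj #{}) to))
          {}
          edges))

(adjacency concept-graph)
;; => {:reality #{:model}, :model #{:software}}
```

The point is that the ‘picture’ (the adjacency view) is derived from the connections on demand, rather than being the thing I try to store.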

Yesterday, I used Clojure/Leiningen’s built-in Test-Driven Development tool for the first time. It looks great. The functional model makes TDD simple.
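The tool in question is clojure.test, which Leiningen runs with `lein test`. Here is a toy example of the style (my own, not from the book project); ordinarily the function under test would live in its own namespace:

```clojure
(ns graph.views-test
  (:require [clojure.test :refer [deftest is testing]]))

;; Toy function under test (normally it would live in its own
;; namespace): the set of nodes mentioned by a set of edges.
(defn nodes [edges]
  (set (mapcat identity edges)))

(deftest nodes-from-edges
  (testing "every endpoint appears exactly once"
    (is (= #{:a :b :c} (nodes #{[:a :b] [:b :c]})))
    (is (= #{} (nodes #{})))))
```

Dropped into a project’s test/ directory, `lein test` finds and runs it. Because `nodes` is a pure function of its input, each assertion needs no set-up or tear-down, which is part of what makes the functional model such a comfortable fit for TDD.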

Things I used to be Wrong about – Part 1

I get very annoyed about politicians being held to account for admitting they were wrong, rather than being forcefully challenged when they were wrong in the first place. Unless they lied, if someone was wrong and admits it, they should be congratulated. They have grown as a human being.

I am about to do something very similar. I’m going to start confessing some wrong things I used to think, that the world has come to agree with me about. I feel I should congratulate you all.

You can’t design a Database without knowing how it will be used

I was taught at university that you could create a single abstract data model of an organisation’s data. “The word database has no plural”, I was told. In my second job, I tried to create a model of all street furniture (signs and lighting) in Staffordshire. I couldn’t do it. I concluded that it was impossible to know which things were entities and which were attributes. I now know this is because models are always created for a purpose. If you aren’t yet aware of that purpose, you can’t design for it. My suspicion was confirmed in a talk at Wolverhampton University by Michael ‘JSD’ Jackson. The revelation seemed a big shock to the large team from the Inland Revenue. I guess they had made unconscious assumptions about likely processes.

Relations don’t understand time

(They would probably say the same about me.) A transaction acting across multiple tables is assumed to be instantaneous. This worried me. A complex calculation requiring reads cannot be guaranteed to be consistent unless all accessed tables are locked against writes throughout the transaction. Jackson also confirmed that the Relational Model has no concept of time. A dirty fix is data warehousing, which achieves consistency without locking, at the cost of guaranteeing the data is old.

The Object Model doesn’t generalise

I’d stopped developing software by the time I heard about the Object Oriented Programming paradigm. I could see a lot of sense in OOP for simulating real-world objects. Software could be designed to be more modular when the data structures representing the state of a real-world object and the code which handled state-change were kept in a black box with a sign on it that said “Beware of the leopard”. I couldn’t grasp how people filled the space between the objects with imaginary software objects that followed the same restrictions, or why they needed to.

A new wave of Functional Programming has introduced immutable data structures. I have recently learned through Clojure author Rich Hickey’s videos that reflecting state-change by mutating the value of variables is now a sin punishable by a career in Java programming. Functional Programmers have apparently always agreed with me that not all data structures belong in an object.
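By way of a minimal illustration of what immutable data structures mean in practice in Clojure (my own toy example, not Hickey’s): ‘updating’ a value returns a new value and leaves the original untouched, and genuine state-change is pushed into managed references such as atoms.

```clojure
;; A toy example of immutability in Clojure (illustrative only).
(def team {:name "Apollo" :members 3})

;; assoc returns a *new* map; team itself is unchanged.
(def bigger-team (assoc team :members 4))

team         ;; => {:name "Apollo", :members 3}
bigger-team  ;; => {:name "Apollo", :members 4}

;; Where state really must change over time, the value is wrapped
;; in a managed reference (an atom) rather than mutated in place.
(def head-count (atom 3))
(swap! head-count inc)   ;; => 4
(deref head-count)       ;; => 4
```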

There are others I’m still waiting for everyone to catch up on:

The Writable Web is a bad idea

The Web wasn’t designed for this and isn’t very good at it. Throwing complexity bombs at an over-simplified model rarely helps.

Rich Hickey’s Datomic doesn’t appear to have fixed my entity:attribute issue

Maybe that one is impossible.

Agility vs Momentum

[ This post is aimed at readers with at least basic understanding of agile product development. It doesn’t explain some of the concepts discussed.]

We often talk of software development as movement across a difficult terrain, to a destination. Organisational change projects are seen as a lightning attack on an organisation, though in reality they have historically proved much slower than the speed of light. Large projects often force through regime change for ‘a leader’. Conventionally, this leader has been unlikely to travel with the team. Someone needs to “hold the fort”. There may be casualties due to friendly firings.

Project Managers make ‘plans’ of a proposed ‘change journey’ from one system state to another, between points in ‘change space’, via the straightest line possible, whilst ignoring the passage of time which makes change possible. Time is seen as distance and its corollary, cost. The language of projects is “setting-off”, “pushing on past obstacles” and “blockers” such as “difficult customers”, along a fixed route, “applying pressure” to “overcome resistance”. A project team is an army on the march, smashing its way through to a target, hoping it hasn’t been moved. Someone must pay for the “boots on the ground” and their travel costs. This mind-set leads to managers who perceive a need to “build momentum” to avoid “getting bogged down”.

Now let us think about the physics:

  •  momentum = mass x velocity, conventionally abbreviated to p = mv.
    At this point it may also be worth pointing out Newton’s Second Law of Motion:
  • force = mass x acceleration, or F = ma
    (Interpreted by Project Managers as “if it gets stuck, whack it hard with something heavy.”)

What about “agile software developments”? There is a broad range of opinion on precisely what those words mean but there is much greater consensus on what agility isn’t.

People outside the field are frequently bemused by the words chosen as Agile jargon, particularly in the Scrum framework:
  • A Scrum is not held only when a product development is stuck in the mud.
  • A Scrum Master doesn’t tell people what to do.
  • Sprints are conducted at a sustainable pace.
  • Agility is not the same as speed. Arguably, in agile environments, speed isn’t the same thing as velocity either.

Many teams measure velocity, a crude metric of progress that is only useful for estimating how much work should be scheduled for the next iteration, often guessed in ‘story-points’ representing relative ‘size’. In agile environments, though, everything is optional and subject to change, including the length of the journey.

If agility isn’t speed, what is it? It is lots of things, but the one that concerns us here is the ability to change direction quickly, when necessary. Agile teams set off in a direction, possibly with a destination in mind but aware that it might change. If the journey throws up unexpected new knowledge, the customer may wish to use the travelling time to reach a destination now considered more valuable. The route is not one straight line but a sequence of lines. It could end anywhere in change-space, including where it started (either through failing fast or because the value of the journey was exploration rather than transportation). Velocity is therefore current progress along a potentially winding road of variable length, not average speed through change-space to a destination. An agile development is really an experiment to test a series of hypotheses about an organisational value proposition, not a journey. Agile’s greatest cost savings come from ‘wrong work not done’.

Agility is lightweight, particularly on up-front planning. Agile teams are small and aim to carry everything they need to get the job done. This enables them to set off sooner, at a sensible pace and, if they are going to fail, to fail fast, at low cost. Agility delivers value as soon as possible and it front-loads value. If we measured velocity in terms of value instead of distance, agile projects would be seen to decelerate until they stop. If you are light, immovable objects can be avoided rather than smashed through. Agile teams neither need nor want momentum, in case they decide to turn fast.

Women’s Day Intuition

The first thing I did yesterday, on International Women’s Day 2017, was retweet a picture of Margaret Hamilton, allegedly the first person in the world to have the job title ‘Software Engineer’. The tweet claimed the pile of printout she was standing beside, as tall as her, was all the tweets asking “Why isn’t there an International Men’s Day?” (There is. It’s November 19th, the first day of snowflake season.) The listings were actually the source code which her team wrote to make the Apollo moon mission possible. She was the first virtual woman on the Moon.

I followed up with a link to a graph showing the disastrous decline of women working in software development since 1985, by way of an explanation of why equal opportunities aren’t yet a done deal. I immediately received a reply from a man, saying there had been plenty of advances in computer hardware and software since 1985, so perhaps that wasn’t a coincidence. This post is dedicated to him.

I believe that the decade 1975 – 1985, when the number of women in computing was still growing fast, was the most productive since the first, starting in the late 1830s, when Ada Lovelace made up precisely 50% of the computer software workforce worldwide. It also happens to coincide approximately with my own time in the field: I first encountered computing in about 1974 and stopped writing software in about 1986.

1975 – 1985:
As I entered: Punched cards, then a teletype, connected to a 24-bit ICL 1900-series mainframe via a 300 Baud acoustic coupler and phone line. A trendy new teaching language called BASIC, complete with GOTOs.

As I left: Terminals containing a ‘microprocessor’, screen addressable via ANSI escape sequences or bit-mapped graphics terminals, connected to 32-bit super-minis, enabling ‘design’. I used a programming language-agnostic environment with a standard run-time library and a symbolic debugger. BBC Micros were in schools. The X windowing system was about to standardise graphics. Unix and ‘C’ were breaking out of the universities along with Free and Open culture, functional and declarative programming and AI. The danger of the limits of physics and the need for parallelism loomed out of the mist.

So, what was this remarkable progress in the 30 years from 1986 to 2016?

Good:

Parallel processing research provided Communicating Sequential Processes and the Inmos Transputer.
Declarative, non-functional languages that led to ‘expert systems’. Lower expectations got AI moving.
Functional languages got immutable data.
Scripting languages like Python & Ruby on Rails, leading to the death of BASIC in schools.
Wider access to the Internet.
The read-only Web.
The idea of social media.
Lean and agile thinking. The decline of the software project religion.
The GNU GPL and Linux.
Open, distributed platforms like git, free from service monopolies.
The Raspberry Pi and computer science in schools.

Only looked good:

The rise of PCs to under-cut Unix workstations and break the Data Processing department’s control. Microsoft took control instead.
Reduced Instruction Set Computers were invented, providing us with a free 30 year window to work out the problem of parallelism but meaning we didn’t bother.
By 1980, Alan Kay had invented Smalltalk and the Object Oriented paradigm of computing, allowing complex real-world objects to be simulated and everything else to be modelled as though it was a simulation of objects, even if you had to invent them. Smalltalk did no great harm but in 1983 Bjarne Stroustrup left the lab door open and C++ escaped into the wild. By 1985, objects had become uncontrollable. They were EVERYWHERE.
Software Engineering. Because writing software is exactly like building a house, despite the lack of gravity.
Java, a mutant C++, lends its name to the largely unrelated brand-hybrid JavaScript.
Microsoft re-invents DEC’s VMS and Sun’s Java as 32-bit Windows NT, .NET and C#, then destroys all the evidence.
The reality of social media.
The writeable Web.
Multi-core processors for speed (don’t panic, functions can save us.)

Why did women stop seeing computing as a sensible career choice in 1985, when “mine is bigger than yours” PCs arrived, and why are they reconsidering now, when everyone at school uses the same Raspberry Pi and multi-tasking is becoming important again? Probably that famous ‘female intuition’. They can see that the world of computing needs real functioning humans again.