
Women’s Day Intuition

The first thing I did yesterday, on International Women’s Day 2017, was retweet a picture of Margaret Hamilton, allegedly the first person in the world to have the job title ‘Software Engineer’. The tweet claimed the pile of printout she was standing beside, as tall as her, was all the tweets asking “Why isn’t there an International Men’s Day?” (There is. It’s November 19th, the first day of snowflake season.) The listings were actually the source code which her team wrote to make the Apollo moon mission possible. She was the first virtual woman on the Moon.

I followed up with a link to a graph showing the disastrous decline of women working in software development since 1985, by way of an explanation of why equal opportunities aren’t yet a done deal. I immediately received a reply from a man, saying there had been plenty of advances in computer hardware and software since 1985, so perhaps that wasn’t a coincidence. This post is dedicated to him.

I believe that the decade 1975 – 1985, when the number of women in computing was still growing fast, was the most productive since the first, starting in the late 1830s, when Ada Lovelace made up precisely 50% of the world’s computer software workforce. It also roughly coincides with my own first spell in the field: I encountered computing in about 1974 and stopped writing software in about 1986.

1975 – 1985:
As I entered: Punched cards, then a teletype connected to a 24-bit ICL 1900-series mainframe via a 300 baud acoustic coupler and a phone line. A trendy new teaching language called BASIC, complete with GOTOs.

As I left: Terminals containing a ‘microprocessor’, with screens addressable via ANSI escape sequences, or bit-mapped graphics terminals, connected to 32-bit super-minis, enabling ‘design’. I used a programming-language-agnostic environment with a standard run-time library and a symbolic debugger. BBC Micros were in schools. The X Window System was about to standardise graphics. Unix and ‘C’ were breaking out of the universities, along with Free and Open culture, functional and declarative programming and AI. The danger of the limits of physics and the need for parallelism loomed out of the mist.

So, what was this remarkable progress in the 30 years from 1986 to 2016?

Good:

Parallel processing research provided Communicating Sequential Processes and the Inmos Transputer.
Declarative, non-functional languages that led to ‘expert systems’. Lower expectations got AI moving.
Functional languages got immutable data.
Scripting languages like Python & Ruby for Rails, leading to the death of BASIC in schools.
Wider access to the Internet.
The read-only Web.
The idea of social media.
Lean and agile thinking. The decline of the software project religion.
The GNU GPL and Linux.
Open, distributed platforms like git, free from service monopolies.
The Raspberry Pi and computer science in schools.

Only looked good:

The rise of PCs to under-cut Unix workstations and break the Data Processing department control. Microsoft took control instead.
Reduced Instruction Set Computers were invented, providing us with a free 30-year window in which to work out the problem of parallelism, but meaning we didn’t bother.
By 1980, Alan Kay had invented Smalltalk and the Object Oriented paradigm of computing, allowing complex real-world objects to be simulated and everything else to be modelled as though it were a simulation of objects, even if you had to invent them. Smalltalk did no great harm, but in 1983 Bjarne Stroustrup left the lab door open and C++ escaped into the wild. By 1985, objects had become uncontrollable. They were EVERYWHERE.
Software Engineering. Because writing software is exactly like building a house, despite the lack of gravity.
Java, a mutant C++, lends its name to the largely unrelated brand-hybrid JavaScript.
Microsoft re-invents DEC’s VMS and Sun’s Java as 32-bit Windows NT, .NET and C#, then destroys all the evidence.
The reality of social media.
The writeable Web.
Multi-core processors for speed (don’t panic, functions can save us; see the sketch below.)
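
Since functional programming comes up again later in these posts, here is a minimal Clojure sketch of the ‘functions can save us’ claim; the `slow-square` function and its timings are invented for illustration. Because a pure function shares no mutable state, it can be spread across cores without locks:

```clojure
;; slow-square is pure: it reads and writes no shared state.
(defn slow-square [n]
  (Thread/sleep 100)   ; simulate 100 ms of real work
  (* n n))

;; map uses one core; pmap spreads the same pure function across cores.
(time (doall (map  slow-square (range 8))))   ; roughly 800 ms
(time (doall (pmap slow-square (range 8))))   ; a fraction of that
```

Nothing about `slow-square` had to change to run in parallel, which is the whole point.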

Why did women stop seeing computing as a sensible career choice in 1985 when “mine is bigger than yours” PCs arrived and reconsider when everyone at school uses the same Raspberry Pi and multi-tasking is becoming important again? Probably that famous ‘female intuition’. They can see the world of computing needs real functioning humans again.


Concerns

I learned to write ‘computer code’ in the era of Structured Programming. In the last few months I have come to question how much science I was exposed to in my Computer Science education. I was taught facts and current best practice but that wasn’t enough.

I’d stopped writing code professionally by the time Software Engineering became trendy, so I skipped relational databases, object orientation and coding for the web, before deciding to reconnect with software development via the Business Analysis, UML modelling and Scrum Product Owner route. I THINK I know what objects are now.

When I decided to do some coding again, I first chose to learn Python but quickly jumped ship to Clojure. I’m finding the functional model new and exciting but also unfamiliar and strange. I’ve made a huge leap into the dark, from a direction that the text books I’m reading weren’t expecting. This post represents me taking a breath of air.

I thought I had my head around ‘separation of concerns’ into code modules, in a world made of objects. An object is a model of a real-world entity in a software simulation of reality. It holds the state of that entity’s data and, via calls to its methods, implements message passing between objects. What functional programming texts have shown me is that OO also invented objects that had no equivalent in the real world. In the functional world, concerns are implemented in stateless functions, and state is represented by the flow of change over data structures, outside the functions.
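
Here is a minimal Clojure sketch of that last sentence, using an `account` map I have invented for illustration. The data structure is immutable and the function is stateless; ‘change’ is a new value flowing out of the function:

```clojure
;; State as plain, immutable data; no object wraps it.
(def account {:owner "Ada" :balance 100})

;; A stateless function: it takes a value and returns a new value.
(defn deposit [acct amount]
  (update acct :balance + amount))

(deposit account 50)   ;=> {:owner "Ada", :balance 150}
account                ;=> still {:owner "Ada", :balance 100}
```

No method mutated anything; the new state exists alongside the old one, outside any function.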

What I haven’t yet worked out is what the “logically discrete functions, interacting through well-defined interfaces” of ‘Top-Down Design’ and ‘Step-wise refinement’ were supposed to represent. Anything we liked, I suspect, because no-one else knew either. I feel now that I was equipped with excellent knowledge of woodworking tools and the idea of furniture without being shown any woodworking joints. At least I recognised at the time that I was clueless and stopped. Many didn’t. OK, I think I’m ready to carry on.

Lean or Agile? Pick any two.

I still see many people writing about adopting Lean and/or Agile software development. I can remember how difficult it was for my team to work out what ‘Agile’ was and I think it has got harder since, as growing popularity has drawn charlatans into the area. I see two main types of useful articles.

  1. What (theory) : “It’s a philosophy” articles which usually point first towards the different values of agile and lean practitioners. But you can’t “do” a philosophy, so we get:
  2. What (practice) : Methodology – the study of methods that embody the philosophies. Many will say that Lean & Agile are not processes but I disagree; I think they are ‘software development process’ change processes.

I’d like to try something different: WHY?

The old ways of planning engineering projects, used for building a tower block, didn’t work for software. We don’t know enough, with sufficient certainty, at the beginning of development to design top-down, and we are rarely sufficiently constrained by physics to be forced to build bottom-up.

Unusually for computing, the words ‘lean’ and ‘agile’ have useful meanings.

Lean is about ‘travelling light’, by avoiding waste in your software development process. It uses observation and incremental changes of your current process, while you incrementally deliver business value via working software.

Agile is about doing only valuable work, being nimble and able to change direction in response to changed requirements or better understanding. It recognises that there are very few completely stable business processes, so software developers need to identify changes that will have an impact on the software under development and apply effort in a new direction.

I recommend that you consider both approaches, as they are complementary. Neither removes the need for appropriate engineering practices. We’re only throwing out the hard-engineering stuff we packed but which didn’t prove useful on the software journey; we throw out what we don’t need, to prevent the weight of unnecessary baggage holding us back.

Applying the Science Process to Process Change (see Recursion)

Now that we’ve established that very few people who work in “IT” have anything to do with technology, except as a tool, and that “computer science” isn’t about computers, we’re back at the original reason I began to write a book. I was once an ‘Information Systems Engineer’ and wasn’t very sure what any of those words meant, particularly “information”.

Software development teams often see their role as solving their customer’s problems. Software package providers say they are “solution providers”. What does the ‘unspeakable profession’ actually do? We got some clues from Hal Abelson in the video I linked to in my last post, that new areas of intellectual endeavour often confuse the ‘essence’ of their subject with their tools; so do we really engineer software?

Hal said that writing software is the process (or function) of formalising our intuitions about process (function). Our software is a speculative formal abstraction of our intuitive understanding of a process we may not entirely understand. No wonder software projects so often fail. Like the rest of science, software is built on ideas that haven’t been proved wrong yet.

Software developers are presented with, or attempt to discover, experts’ (declarative) knowledge of the business process in the ‘domain’ we are about to change. This is ‘the abstract requirements’. Some of it may have to be inferred from imperative knowledge embedded in existing software. It may be presented as imperative solutions. It may be incomplete.
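
To make the declarative/imperative distinction concrete, here is a hypothetical Clojure sketch; the invoice rule is my own invention, not anyone’s requirements document:

```clojure
;; Imperative knowledge: HOW to walk the list and collect matches,
;; the form in which a rule tends to be buried in existing code.
(defn overdue-imperative [invoices today]
  (loop [xs invoices, acc []]
    (if (empty? xs)
      acc
      (recur (rest xs)
             (if (< (:due (first xs)) today)   ; the rule, buried mid-loop
               (conj acc (first xs))
               acc)))))

;; Declarative knowledge: WHAT an overdue invoice is.
(defn overdue? [today invoice]
  (< (:due invoice) today))

;; The declarative version leaves the 'how' to the language.
(defn overdue-declarative [invoices today]
  (filter (partial overdue? today) invoices))
```

Recovering `overdue?` from code shaped like `overdue-imperative` is the hard, backwards direction described above.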

We then follow our own process (function) for ‘the way we do software’, in order to design a new process, some of which is also likely to have to be embedded in software. Applying functions to change functions? That’s what functional programming does, isn’t it?
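
It is, at least in the small. A toy Clojure sketch, with pricing rules invented for illustration: `comp` is a function that takes functions and returns a new function, a process built by applying a function to processes:

```clojure
(defn vat      [price] (* price 1.2))   ; the existing 'process'
(defn discount [price] (- price 5))     ; the newly required step

;; comp takes functions and returns a new one: discount, then vat.
(def new-price (comp vat discount))

(new-price 100)   ;=> 114.0
```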

I believe that the ‘stuff’ of ‘computer science’, is state-change of systems of process and data. Who remembers ‘Data Processing’? Functional programming points out that the processes themselves are data and dynamic state-change is unpredictable and therefore dangerous.
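
Both halves of that claim fit in a few lines of Clojure; this is my own toy example, not anything from the texts. The process is literally a data structure, and state-change is quarantined behind an explicit reference type:

```clojure
;; The process is data: a vector of functions we can store and inspect.
(def pipeline [inc inc #(* % 10)])

;; A pure function that runs the process over a value.
(defn run [state]
  (reduce (fn [v f] (f v)) state pipeline))

;; The only mutable thing here is the atom; swap! moves it between
;; states by applying the pure function run.
(def system-state (atom 5))
(swap! system-state run)   ;=> 70
```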

Engineering would apply ‘project thinking’ to this unstable, poorly defined change. I may once have tried to be an information systems engineer but I saw that it didn’t work and took a 20 year break from software development.

Agile frameworks recognise such changes as risky experiments and carefully apply the scientific method, incrementally with feedback loops, to check assumptions.

We may need to return another time to see what functional programming has to say about two projects making concurrent changes from the same initial state. Until then, good luck with those.

Computational Science + Informatics = Software Development

1970s At university, I studied Physics and:

  • Computer Science

but it had very little to do with computers. It was far more about becoming a

  • Computer Programmer

1980s When I started work I heard about the increasing formalisation of the software development process and I wanted to be a

  • Software Engineer

1990s I’d moved into server management by the time I qualified as a

  • Chartered Information Systems Engineer

which fitted my thinking that information is the important resource in any organisation, but took me no closer to knowing how to make great software efficiently. I became disillusioned watching people try to apply the rigorous methods of hard engineering to the uncertainties and unknown complexities of software. I became interested in prototyping, incremental delivery and Agile.

2000s I was increasingly drawn to the idea that constructing software is a design discipline and that a software developer needed to be a

  • Software Craftsman

2010s After a few years working with an agile software team, I decided I wanted to try writing software again myself. A few weeks ago, an online MIT course on Python programming introduced me to an idea that I felt very comfortable with: a good developer is a

  • Computational Scientist (I’d settle for ‘Computing Scientist’)

and I’d add the option

  • Informatician, which overlaps with what librarians do now

Agile development processes are adaptations of the scientific method, to research what customers want and work out how to give it to them in the way that meets their opinion of best value, and I don’t see why software developers can’t be their own customer. Every great new software component starts off as someone’s experiment.

Software Life-cycle. Part 1 – From Engineering to Craftsmanship

I graduated just after the Structured Programming War was won. I was probably in the first generation to be taught to program by someone who actually knew how; to be warned of the real and present danger of the GOTO statement, and to be exposed first to a language that didn’t need it. I didn’t need to fall back to assembler when the going got tough, to read hex dumps, or to deal with physical memory constraints. I entered the computing profession just as people were starting to re-brand their programmers as ‘software engineers’, and academics were talking of ‘formal methods’, then of ‘iterative development’ and ‘prototyping’ as we lost faith and retreated, because the techniques borrowed from other engineering disciplines continued to disappoint when applied to software.

After some years away from software development, I returned to find ‘Agile’, ‘Lean’ and ‘Software Craftsmanship’. We’d surrendered to the chaos, accepted that we weren’t designers of great engineering works but software whittlers. I was pleased that we’d dropped the pretence that we knew what we were doing but disappointed that we’d been reduced to hand-weaving our systems like hipsters.

There had been another good change too: The Object Model. The thrust of software engineering had often been decomposition but our model had been the parts breakdown structure, the skeletal parts of a dead system. Objects allowed us to model running systems, the process network at the heart of computation. I’ve recently seen a claim that the Unix command line interface with its pipes and redirection was the first object system. Unix begat GNU and Free software and Linux and close to zero costs for the ‘means of production’ of software. I don’t believe that Agile or Lean start-ups could have happened in a world without objects, the Internet or Free software. Do you know how much work it takes to order software on a tape from the US by post? I do.

So here we are, in our loft rooms, on a hand-crafted loom made of driftwood and sweat, iterating towards a vague idea emerging out of someone’s misty musings and watching our salaries being eroded towards the cost of production. Is this why I studied Computer Science for 3 years? Who turned my profession into a hobby activity?

The British Computing Society

I just engaged in a debate about the BCS on LinkedIn. Yes, THAT again: https://andywootton.wordpress.com/2013/11/08/is-it-whats-in-a-name/

Officially, the three letters B, C and S, don’t stand for anything. They used to mean British Computer Society and that is what it still says on the Royal Charter that bestows upon the BCS the royal privilege of awarding Chartered status to members. I’ve suggested we get out the correcting fluid (or an appropriately skilled scribe) and change the middle word to “Computing”.

In his famous book title, Niklaus Wirth said, “Algorithms + Data Structures = Programs”
https://en.wikipedia.org/wiki/Algorithms_%2B_Data_Structures_%3D_Programs

My suggestion was that ‘Computing + Information = what BCS members do’

Look what Wiki-P says about the word “computing”: https://en.wikipedia.org/wiki/Computing

“Computing is any goal-oriented activity requiring, benefiting from, or creating algorithmic processes—e.g. through computers. Computing includes designing, developing and building hardware and software systems; processing, structuring, and managing various kinds of information; doing scientific research on and with computers; making computer systems behave intelligently; and creating and using communications and entertainment media. The field of computing includes computer engineering, software engineering, computer science, information systems, and information technology.”

At the moment, the ambitions of BCS Council only extend as far as the ‘IT’, which comes last in the list. Some of us ‘software engineering, computer science and information systems’ people feel we are not being adequately represented. I promise I didn’t change Wikipedia to prove my case.

There aren’t many things in informatics (the Anglicised version of what the rest of Europe call ‘computing + information’) that can’t be represented by a ‘directed graph’.
https://en.wikipedia.org/wiki/Directed_graph#/media/File:Directed.svg

The ‘blobs’ normally show the ‘processes’ where the computing happens (people or machines) or ‘information at rest’, typically with a different colour or blob-shape. The ‘arcs’ or ‘edges’ typically show potential ‘flows of information’ or ‘control’. It is an unfortunate coincidence that the directed graph example I found shows us going around in circles.
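
Since I keep reaching for these graphs, here is how one might represent a directed graph as plain data in Clojure; the node names are invented. Each blob maps to the set of blobs its arcs point at:

```clojure
;; A directed graph as a map: node -> set of nodes its arcs point at.
;; :stock-check pointing back at :order-entry provides the circle.
(def flow {:customer    #{:order-entry}
           :order-entry #{:stock-check :billing}
           :stock-check #{:order-entry}
           :billing     #{}})

;; Follow the arcs out of a node.
(defn points-at [graph node]
  (get graph node #{}))

(points-at flow :order-entry)   ;=> #{:stock-check :billing}
```

Colours and blob-shapes would just be more data attached to the nodes.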