
Women’s Day Intuition

The first thing I did yesterday, on International Women’s Day 2017, was retweet a picture of Margaret Hamilton, allegedly the first person in the world to have the job title ‘Software Engineer’. The tweet claimed the pile of printout she was standing beside, as tall as her, was all the tweets asking “Why isn’t there an International Men’s Day?” (There is. It’s November 19th, the first day of snowflake season.) The listings were actually the source code which her team wrote to make the Apollo moon mission possible. She was the first virtual woman on the Moon.

I followed up with a link to a graph showing the disastrous decline of women working in software development since 1985, by way of an explanation of why equal opportunities aren’t yet a done deal. I immediately received a reply from a man, saying there had been plenty of advances in computer hardware and software since 1985, so perhaps that wasn’t a coincidence. This post is dedicated to him.

I believe that the decade 1975 – 1985, when the number of women in computing was still growing fast, was the most productive since the first, starting in the late 1830s, when Ada Lovelace made up precisely 50% of the computer software workforce worldwide. It also roughly coincides with the period when I was writing software myself: I first encountered computing in about 1974 and stopped in about 1986.

1975 – 1985:
As I entered: Punched cards, then a teletype connected to a 24-bit ICL 1900-series mainframe via a 300 baud acoustic coupler and a phone line. A trendy new teaching language called BASIC, complete with GOTOs.

As I left: Terminals containing a ‘microprocessor’, screen addressable via ANSI escape sequences or bit-mapped graphics terminals, connected to 32-bit super-minis, enabling ‘design’. I used a programming language-agnostic environment with a standard run-time library and a symbolic debugger. BBC Micros were in schools. The X windowing system was about to standardise graphics. Unix and ‘C’ were breaking out of the universities along with Free and Open culture, functional and declarative programming and AI. The danger of the limits of physics and the need for parallelism loomed out of the mist.
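
For anyone who never met one of those terminals, here is a minimal Python sketch of what ‘screen addressable via ANSI escape sequences’ means in practice. The function names are mine and it assumes an ANSI-compatible terminal:

import sys

def clear_screen() -> None:
    sys.stdout.write("\033[2J")                # CSI 2 J: erase the display

def move_cursor(row: int, col: int) -> None:
    sys.stdout.write(f"\033[{row};{col}H")     # CSI row ; col H: position the cursor

if __name__ == "__main__":
    clear_screen()
    move_cursor(10, 20)
    print("Hello from row 10, column 20")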

So, what was this remarkable progress in the 30 years from 1986 to 2016?

Good:

Parallel processing research provided Communicating Sequential Processes and the Inmos Transputer (a CSP-style sketch follows this list).
Declarative, non-functional languages that led to ‘expert systems’. Lower expectations got AI moving.
Functional languages gave us immutable data.
Scripting languages like Python and Ruby (on Rails), leading to the death of BASIC in schools.
Wider access to the Internet.
The read-only Web.
The idea of social media.
Lean and agile thinking. The decline of the software project religion.
The GNU GPL and Linux.
Open, distributed platforms like git, free from service monopolies.
The Raspberry Pi and computer science in schools.
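
To show what I mean by CSP, here is a minimal sketch in Python (my choice of language; the process names are invented): two sequential processes that share nothing and communicate only by passing messages over a channel, in the spirit of Hoare’s CSP and the Transputer’s links.

from multiprocessing import Process, Queue

def producer(channel: Queue) -> None:
    # A sequential process: it owns its own state and only communicates
    # by sending messages down the channel.
    for n in range(5):
        channel.put(n * n)
    channel.put(None)            # sentinel: tell the consumer we have finished

def consumer(channel: Queue) -> None:
    while True:
        msg = channel.get()      # blocks until a message arrives
        if msg is None:
            break
        print("received", msg)

if __name__ == "__main__":
    channel = Queue()
    procs = [Process(target=producer, args=(channel,)),
             Process(target=consumer, args=(channel,))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()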

Only looked good:

The rise of PCs to undercut Unix workstations and break the Data Processing department’s control. Microsoft took control instead.
Reduced Instruction Set Computers were invented, providing us with a free 30-year window to work out the problem of parallelism but meaning we didn’t bother.
By 1980, Alan Kay and his colleagues at Xerox PARC had given us Smalltalk and the Object Oriented paradigm of computing, allowing complex real-world objects to be simulated and everything else to be modelled as though it were a simulation of objects, even if you had to invent them. Smalltalk did no great harm but in 1983 Bjarne Stroustrup left the lab door open and C++ escaped into the wild. By 1985, objects had become uncontrollable. They were EVERYWHERE.
Software Engineering. Because writing software is exactly like building a house, despite the lack of gravity.
Java, a mutant C++, lends its name to the largely unrelated brand-hybrid JavaScript.
Microsoft re-invents DEC’s VMS and Sun’s Java as 32-bit Windows NT, .NET and C#, then destroys all the evidence.
The reality of social media.
The writeable Web.
Multi-core processors for speed (don’t panic, functions can save us; see the sketch after this list).
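
Here is the sketch promised above, assuming nothing more than the Python standard library: because a pure function depends only on its arguments and shares no mutable state, it can be mapped across all the cores without locks. The prime-counting function is just an illustrative workload.

from multiprocessing import Pool

def count_primes_below(n: int) -> int:
    # A pure function: the result depends only on n, so parallel calls
    # cannot interfere with each other.
    sieve = [True] * max(n, 2)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve)

if __name__ == "__main__":
    limits = [100_000, 200_000, 300_000, 400_000]
    with Pool() as pool:                       # one worker per core by default
        results = pool.map(count_primes_below, limits)
    print(dict(zip(limits, results)))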

Why did women stop seeing computing as a sensible career choice in 1985, when “mine is bigger than yours” PCs arrived, and start reconsidering now that everyone at school uses the same Raspberry Pi and multi-tasking is becoming important again? Probably that famous ‘female intuition’. They can see that the world of computing needs real functioning humans again.


Open Rights (in Birmingham)

Last night I went to this: https://wordpress.com/read/blog/id/94628536/, the first meeting of the ‘Open Rights Group Birmingham’, to see what THAT is all about.

There was a table full of us, gathered from the worlds of computing, art and politics. Thinking about what happened, I’ve realised that although I’m interested in all three areas, I’ve never experienced them mashed-up before. We were in the cafe at Birmingham Open Media, after closing time, like radicals, ready to change the world.

Our mission from HQ, should we choose to accept it, was to consider what Brum could do to help ORG’s ‘Snooper’s Charter’ campaign: “We demand an end to indiscriminate retention, collection and analysis of everyone’s Internet communications, regardless of whether they are suspected of a crime. We want the police and intelligence agencies to have powers that are effective and genuinely protect our privacy and freedom of speech.”
https://www.openrightsgroup.org/campaigns/dont-let-the-snoopers-charter-bounce-back

What fascinated me most was the different intuitive responses of the three groups. The techies saw it as a problem to be fixed or provided with tools. Those in public services and the world of politics saw a policy decision to be campaigned on and influenced, using their knowledge of the tools of our broken democracy. Those from the art world saw it as something to be responded to, a way of influencing public opinion. That is a heady combination: identify a problem, motivate popular demand for change to generate political appetite, provide a technical solution. It also demonstrates that politicians are often the blockers rather than the enablers of societal change.

I’ve also watched a video on the societal imperatives driving the move of businesses from hierarchies to networks. Imagine that applied to democracy. Netwocracy?

Software Life-cycle. Part 2 – From Craftsmanship to Computational Science

I decided to learn the programming language Python. I was steered towards the MIT OpenCourseWare ‘Introduction to Computer Science and Programming’ 6.00 course, taught by Prof. Eric Grimson and Prof. John Guttag (they say it is a course about computational thinking).

http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-00-introduction-to-computer-science-and-programming-fall-2008/video-lectures/

As the first lecture felt a bit basic, at great personal risk of uncovering ‘a spoiler’, I skipped on to the course summary in the last lecture to see if it was worth sticking around. I found it inspirational. Prof. John Guttag explains computation in the context of ‘The Scientific Method’. I’ve since realised that his explanation maps with great accuracy onto Agile iterative methods. Agilists aren’t engineers, we’re scientists again. Engineering Project Management uses experience of similar previous projects. Why would you ever write similar software twice? Most of the work is already done. Every change to a computational system should be R&D.

Every Scrum Sprint is a suite of computational experiments. The Product Owner is our test subject. This feels right. I never felt like a computer scientist. In James Gleick’s ‘The Information’, he explains that Alan Turing introduced Babbage’s mechanical Difference Engine when talking to non-specialists, to emphasise that computing is an abstract concept, independent of computers and electricity. I’ve always had a computational scientist trying to get out <Woo asplodes>.

Software Life-cycle. Part 1 – From Engineering to Craftsmanship

I graduated just after the Structured Programming War was won. I was probably in the first generation to be taught to program by someone who actually knew how; to be warned of the real and present danger of the GOTO statement and to be exposed first to a language that didn’t need it. I didn’t need to fall back to assembler when the going got tough, or to read hex dumps, or to deal with physical memory constraints. I entered the computing profession just as people were starting to re-brand their programmers as ‘software engineers’ and academics were talking of ‘formal methods’, then of ‘iterative development’ and ‘prototyping’, as we lost faith and retreated, because the techniques borrowed from other engineering disciplines continued to disappoint when applied to software.

After some years away from software development, I returned to find ‘Agile’, ‘Lean’ and ‘Software Craftsmanship’. We’d surrendered to the chaos, accepted that we weren’t designers of great engineering works but software whittlers. I was pleased that we’d dropped the pretence that we knew what we were doing but disappointed that we’d been reduced to hand-weaving our systems like hipsters.

There had been another good change too: The Object Model. The thrust of software engineering had often been decomposition but our model had been the parts breakdown structure, the skeletal parts of a dead system. Objects allowed us to model running systems, the process network at the heart of computation. I’ve recently seen a claim that the Unix command line interface with its pipes and redirection was the first object system. Unix begat GNU and Free software and Linux and close to zero costs for the ‘means of production’ of software. I don’t believe that Agile or Lean start-ups could have happened in a world without objects, the Internet or Free software. Do you know how much work it takes to order software on a tape from the US by post? I do.
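
That claim about pipes is easier to see in code. Here is a minimal Python sketch of a Unix-style pipeline built from small, composable stages; the stage names echo cat, grep and wc but are my own inventions for illustration:

from typing import Iterable, Iterator

def cat(lines: Iterable[str]) -> Iterator[str]:
    # Source stage: just passes the stream of lines on.
    yield from lines

def grep(pattern: str, lines: Iterable[str]) -> Iterator[str]:
    # Filter stage: keeps only the lines containing the pattern.
    return (line for line in lines if pattern in line)

def wc_l(lines: Iterable[str]) -> int:
    # Sink stage: counts the lines, like `wc -l`.
    return sum(1 for _ in lines)

if __name__ == "__main__":
    text = ["GOTO considered harmful", "objects everywhere", "GOTO 10"]
    # Roughly equivalent to: cat text | grep GOTO | wc -l
    print(wc_l(grep("GOTO", cat(text))))       # prints 2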

So here we are, in our loft rooms, on a hand-crafted loom made of driftwood and sweat, iterating towards a vague idea emerging out of someone’s misty musings and watching our salaries being eroded towards the cost of production. Is this why I studied Computer Science for 3 years? Who turned my profession into a hobby activity?

The British Computing Society

I just engaged in a debate about the BCS on LinkedIn. Yes, THAT again: https://andywootton.wordpress.com/2013/11/08/is-it-whats-in-a-name/

Officially, the three letters B, C and S don’t stand for anything. They used to mean British Computer Society and that is what it still says on the Royal Charter that bestows upon the BCS the royal privilege of awarding Chartered status to members. I’ve suggested we get out the correcting fluid (or an appropriately skilled scribe) and change the middle word to “Computing”.

In his famous book title, Niklaus Wirth said, “Algorithms + Data Structures = Programs”
https://en.wikipedia.org/wiki/Algorithms_%2B_Data_Structures_%3D_Programs

My suggestion was that ‘Computing + Information = what BCS members do’

Look what Wiki-P says about the word “computing”: https://en.wikipedia.org/wiki/Computing

“Computing is any goal-oriented activity requiring, benefiting from, or creating algorithmic processes—e.g. through computers. Computing includes designing, developing and building hardware and software systems; processing, structuring, and managing various kinds of information; doing scientific research on and with computers; making computer systems behave intelligently; and creating and using communications and entertainment media. The field of computing includes computer engineering, software engineering, computer science, information systems, and information technology.”

At the moment, the ambitions of BCS Council only extend as far as the ‘IT’, which comes last in the list. Some of us “software engineers, computer scientists and information systems” people feel we are not being adequately represented. I promise I didn’t change Wikipedia to prove my case.

There aren’t many things in informatics (the Anglicised version of what the rest of Europe call ‘computing + information’) that can’t be represented by a ‘directed graph’.
https://en.wikipedia.org/wiki/Directed_graph#/media/File:Directed.svg

The ‘blobs’ normally show the ‘processes’ where the computing happens (people or machines) or ‘information at rest’, typically with a different colour or blob-shape. The ‘arcs’ or ‘edges’ typically show potential ‘flows of information’ or ‘control’. It is an unfortunate coincidence that the directed graph example I found shows us going around in circles.
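
As a concrete (and entirely invented) example, here is how such a directed graph might be written down in Python. The node names and their roles are made up, but the shape is the one described above: blobs with types and arcs showing potential flows.

from collections import defaultdict

# 'Blobs': each node is either a process (where computing happens)
# or information at rest.
nodes = {
    "customer":   "process",
    "order_form": "information",
    "billing":    "process",
    "invoice":    "information",
}

# 'Arcs': directed edges showing potential flows of information.
edges = defaultdict(list)
for source, target in [("customer", "order_form"),
                       ("order_form", "billing"),
                       ("billing", "invoice"),
                       ("invoice", "customer")]:
    edges[source].append(target)

for source, targets in edges.items():
    for target in targets:
        print(f"{nodes[source]} '{source}' --> '{target}' ({nodes[target]})")

Appropriately enough, this little example also goes around in circles.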

Living in The Future (but not as you knew it, in 1991)

I’ve been going through some press cuttings from 1991. In a single column from the UK professional publication ‘Computing’, 7 November:

“Connecting information systems is emerging as an alternative to corporate acquisition or merger. The beauty of this approach is it allows two companies to combine resources without the need for amalgamation.” Predicting the networked society?

Except: “The first step towards information sharing outside the company is electronic data interchange”. Remember EDI?

“some organisations are finding that their customer data is a source of considerable income if shared with a third party. American Airlines learnt this almost by accident when it implemented its computer reservation system. Sales of its customer information now generate higher profit margins than sales of flight tickets.” Google improved the model by giving the product away and only renting their customers out by the hit.

“Ultimately this will lead to a global information infrastructure, where datacom lines are as important as the roads and railways to the health of the economy”.

Then in “WHAT’S IN STORE FOR THE OFFICE”:

“Professor John Larmouth, director of the IT institute at Salford University, suffers from technological schizophrenia – he sees two futures for office computing in the next decade.

‘It is arguable that everyone will have a notebook PC that fits into their briefcase. They will use it wherever they are and perhaps just have a VDU on their office desk.’ He describes a future where first-class rail carriages have telephone links, where executives are entirely independent of the IS department. ‘But,’ he says, ‘that rather goes against the X-terminals approach.’

This is Larmouth’s second scenario: no processors on the desktop but a pool of CPUs held at a central point and accessed via a terminal on the local area network. Certain CPUs would be dedicated to particular users so they don’t have to wait for processor time. That brings control back to the IS department.”

“‘That’s just an extension of what we have now,’ says Mayon-White at Cranfield Institute of Technology, ‘though people are right to point to it. If this is the tail-end of the PC and network age, then the next is parallelism.’” He and Andy Bytheway of Cranfield go on to describe nanotechnology, “the next steps during this decade”, and biological intelligence, where “an inventor can translate his design into an electronic signal, transmit it over a telephone line straight into a bucket of this universal matter. The matter would, of its own accord, then take the form of this design. And that’s where the economy shifts from one based on information to one based on biology.”

Close but no cigar? They didn’t even predict the smoking ban.