
It’s Getting Harder to Love Linux


I used to be an operating system specialist. I’m not any more. I want to use a computer as a tool to get something done, like most people do.

I’ve used Linux for many years without ever learning too much about it. There was a time when I considered that to be proof it had caught up with Windows for ease-of-use. Windows has crashed and burned on me more than once.

Things seemed to change when Canonical took their premature decision to move Ubuntu to the Unity graphical shell on the Gnome desktop. I finally lost patience with Unity a few months ago and installed the Gnome desktop instead. It’s been mostly OK but I’ve had a couple of odd disappearances of freemind (Java) and umbrello (KDE). Umbrello is running with its icons missing and I’ve manually reinstalled freemind. I now have to work out how to add a Java app to the Gnome desktop. This is too hard for ‘us normal people’.

In the meantime, freemind is started from my terminal with the command

sh -c "cd ~/bin/freemind && sh 'freemind.sh'"

At least that’s nice and simple, if you like that kind of thing. Sadly, I don’t.
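
For what it’s worth, the Gnome way of adding an app to the desktop seems to be a .desktop launcher file. Here’s a minimal sketch, assuming freemind lives in ~/bin/freemind as in the command above (the file name and menu entries are my own guesses):

cat > ~/.local/share/applications/freemind.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=FreeMind
Comment=Mind mapping (Java)
Exec=sh -c "cd $HOME/bin/freemind && exec sh freemind.sh"
Terminal=false
Categories=Office;
EOF

Gnome should pick the launcher up from ~/.local/share/applications without a re-login, though I wouldn’t swear to that on every version.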

Update: I’ve fixed Umbrello. Via the KDE bug system I discovered that Fedora users were short of an icon library, so I experimentally searched for the same library in the Ubuntu repos and added it. The oxygen-icon-theme transitional package adds oxygen5-icon-theme.

I’ve updated the bug report:
https://bugs.launchpad.net/ubuntu/+source/umbrello/+bug/1598401

Update 2: It came unfixed the first time I tried to save, after a couple of hours’ work, obviously. I’ve backed up to paper before applying this workaround:

https://bugs.launchpad.net/ubuntu/+source/umbrello/+bug/1585611

I used the Synaptic package manager to add kio then let it sort itself out. UML updates saved, with no loss of data. This is why I use a real operating system. Morning not wasted after all. The Linux love is returning.
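
For anyone who would rather skip Synaptic, the terminal equivalent of both fixes would be something like this (the package names are as given in the bug reports; whether your Ubuntu release carries them is another matter):

sudo apt-get install oxygen-icon-theme   # transitional package, pulls in oxygen5-icon-theme
sudo apt-get install kio                 # the workaround for the save crash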


Software Life-cycle. Part 1 – From Engineering to Craftsmanship

I graduated just after the Structured Programming War was won. I was probably in the first generation to be taught to program by someone who actually knew how; to be warned of the real and present danger of the GOTO statement and to be exposed first to a language that didn’t need it. I didn’t need to fall back to assembler when the going got tough, or to be able to read hex dumps, or to deal with physical memory constraints. I entered the computing profession just as people were starting to re-brand their programmers as ‘software engineers’ and academics were talking of ‘formal methods’, then of ‘iterative development’ and ‘prototyping’ as we lost faith and retreated, because the techniques borrowed from other engineering disciplines continued to disappoint when applied to software.

After some years away from software development, I returned to find ‘Agile’, ‘Lean’ and ‘Software Craftsmanship’. We’d surrendered to the chaos, accepted that we weren’t designers of great engineering works but software whittlers. I was pleased that we’d dropped the pretence that we knew what we were doing but disappointed that we’d been reduced to hand-weaving our systems like hipsters.

There had been another good change too: The Object Model. The thrust of software engineering had often been decomposition but our model had been the parts breakdown structure, the skeletal parts of a dead system. Objects allowed us to model running systems, the process network at the heart of computation. I’ve recently seen a claim that the Unix command line interface with its pipes and redirection was the first object system. Unix begat GNU and Free software and Linux and close to zero costs for the ‘means of production’ of software. I don’t believe that Agile or Lean start-ups could have happened in a world without objects, the Internet or Free software. Do you know how much work it takes to order software on a tape from the US by post? I do.

So here we are, in our loft rooms, on a hand crafted loom made of driftwood and sweat, iterating towards a vague idea emerging out of someone’s misty musings and watching our salary eroded towards the cost of production. Is this why I studied Computer Science for 3 years? Who turned my profession into a hobby activity?

20 Years Since Historic Brum Linux Event – ‘A storm was coming’

I did some Twitts this morning:

“The history of Welsh computing: Inmos Transputer, Raspberry Pi. Impressive. Whatever happened to parallel processing? Or druids.”

[At this point I did a search to check ‘David’ Cox’s name]

“…I should probably have included Alan Cox’s networking contributions to the Linux kernel in between those two.”

“… At least I now know what happened to one of the druids.”

[When I saw the photos, I thought 2 of them were a young RMS]
https://en.wikipedia.org/wiki/Richard_Stallman

“…I just learned that Alan Cox comes from Solihull, which may explain why I think I may have met him at the first Linux event I ever attended”

[Then I went back to the search window and found this link]

“…Isn’t The Internet good? This meeting!

[18th September, but which year: 1994? The first release of the kernel was in 1991, but http://www.ukuug.org/about/timeline/
shows MH was UKUUG newsletter editor in 1995-6.
Are we approaching 20 years of Linux (or “Free Unix”) in Birmingham?]

Yes kids, my first Linux distro was Lasermoon. Martin Houston also wrote the magazine article that caused me to be there and started SBLUG.”

Martin Houston was a quiet, unassuming programmer who first brought Linux to my attention and, probably, to that of most people in Britain who’d heard of it at that point. He was “the organiser” of the UK Unix User Group Linux SIG. I think his article in one of the DEC magazines was the first time I ever saw Linux mentioned. This meeting was at DEC’s office on the Birmingham Business Park, organised by the DEC User Society, DECUS. They must have been trying hard to recover from Ken Olsen’s accusation that Unix was snake-oil.

Soon afterwards I went to either the first or second meeting of the South Birmingham Linux User Group. Martin understood the importance of marketing and coined the phrase “A storm is coming and its name is Linux” which, for 1994, showed remarkable foresight and possibly misplaced confidence. A few years later, Martin turned up at Powergen in Coventry as a contract programmer but I haven’t heard of him since.

I remember that the demonstration of a Linux installation on a “portable PC” (they didn’t fit on your lap then) by Colin Bruce of Coventry University involved floppy disks and a parallel-port network adapter (‘portables’ didn’t have a network connection; what do you think this was, The Future?).

And yes, Linux kernel hacker Alan Cox, famously Welsh, is a Brummie.

My search for a “GNU/Linux ‘Shiny’ OS” to be a minimum-cost competitor to Google’s Chrome OS

I think I’ve made it fairly clear that I don’t completely trust Google not to behave like IBM or Microsoft or Apple (in music), if they find themselves in a position of monopoly power in services. I believe there is a significant possibility that the UK government are soon going to jump out of the Microsoft frying pan, into the everlasting fire of Google services, using support for Free and Open Source Software as their excuse. This will delight their Google handlers and perhaps earn them a tickle of their cash-hungry bellies.

As I pointed out in a recent post, use of the Linux kernel and free-to-use services no longer guarantees you any real Freedom. We face a future where cheap Google Boxes in our houses and Google phones in our pockets/handbags will be the portal through which every message we send or receive passes. It seems likely that Google Chrome OS and Android will merge in some way, into an impenetrable fortress, keeping our data safe for us and Google and our government.

We need an alternative. The Free Software movement seems to be blindly following Google towards a destination of its own eventual destruction. What happened to the community’s ideals? Are we so easily bought?

I’m looking for an alternative way forward. I want a simple web and application server built from FOSS and available from multiple providers, because “The Market is Good” and “Competition Benefits the Consumer”, RIGHT? I only want to use server software that I could take away and have run elsewhere if I were not happy with my service provider, and I want a web-based client that uses entirely open Web standards, with no proprietary extensions “for greater power” (see: Chrome). Obviously, being a Real Linux, it would also have the option of local desktop applications, rather than driving consumers to company shops.

Please tell me if you think there is an obvious alternative FOSS solution to the Google monLOLopoly. If there isn’t then we need to elect one soon because democracy needs choices to stay viable. Our ‘choice of Freedom’ is at risk.

Management Summary of ‘Social vs Capital’ parts 1 – 3

  1. Avoid hardware vendor lock-in by using Unix-style operating systems that can run on any appropriate hardware.
  2. Avoid software vendor lock-in by using Free & Open Source Software (FOSS) so development can be done by anyone with appropriate skills and shared for the common good.
  3. Avoid service provider lock-in by only using FOSS on Linux to provide services, operated by multiple service providers in a distributed network with easy data transfer/mirroring between providers.

All 3 suggestions encourage fair competition in a free market, without the potential to abuse a market-leading position, to the benefit of consumers.

Social vs Capital Part 2

To recap:

Unix happened because companies trying to run their businesses using software didn’t like being dependent on the whims of hardware manufacturers. Each manufacturer defined their own hardware architecture. Customers wanted hardware-independence.

GNU/Linux and the Free & Open-Source Software movement happened because coders didn’t like waiting for someone else to fix their problems or to decide not to fix them. They wanted access to the source code and the legal right to change software and share their changes. They wanted software-supplier independence.

I simplified last time. BSD, GNU & Linux weren’t the only game in town. An important book was published: ‘Operating Systems: Design and Implementation’ by Andrew Tanenbaum. Tanenbaum believed in giving his students access to working source code. He had used Unix but when AT&T pulled up their drawbridge, he needed a replacement that his readers could use and change – so he wrote one. ‘Minix’ was a minimal rewrite of the key functions of Unix v7 which ran (just about) on a twin-floppy IBM PC. Unfortunately, the publishers, Prentice-Hall, insisted on retaining copyright to the software. To get a copy, you had to buy a computer science book you probably didn’t want. But I bought it, as did a young man called Linus Torvalds. Tanenbaum was also instrumental in the production of the Amsterdam Compiler Kit, which Stallman originally hoped to use for the GNU C compiler before its licensing ruled that out. My first sight of the GNU effort was on a listing for a 9-track tape from the DEC User Society: it included emacs, gcc & the ACK, to be run on your own commercial Unix system.

Stallman and the GNU organisation were writing another replacement for Unix, free of copyrighted code. Their aim was to ensure that if you used their software, no-one else would ever prevent you from using code, particularly if you had contributed to it. Copyleft was born. ‘Free’ (as in beer) licences were not new. The BSD licence allows anyone to take BSD code and do what they wish with it, including building non-Free code on top and selling it. OS X is built on FreeBSD but Apple sells licences and protects “its” intellectual property from re-use by competitors, including those companies that contributed code that Apple used. ‘Strangely’, Stallman didn’t think this was fair, so worked towards creating the GNU licence, the GPL. The original idea has been described as “viral”: if you used GNU-licensed code then your code was required to be GNU-licensed too, and you were required to make it available to anyone who wanted it, for only the cost of reproduction. The GPL was arguably ‘less free’ because it enforced sharing and prevented commercial exploitation. GPL supporters point out that the code you contribute becomes a marketing tool to sell a future service. You offer a service (typically to write other software) rather than sell a software product licence. There is downward market pressure on price and upward pressure on quality, to provide the best value service.
Compromises were made to the GPL later, leading to the ‘Lesser’ LGPL. This allowed software libraries to be used in conventionally licensed commercial software.

Having established the new ground-rules, GNU started work, from the top downwards. The bottom layer, the HURD kernel, has still not been delivered. Fortunately, that guy Linus started at the bottom and worked up. When he and his Internet recruits started needing to test their kernel, the GNU tools were ready. Because GNU & Linux were both copying the same open system interfaces, they worked together.

The Free Software movement happened because all these individuals knew they couldn’t do everything on their own. If they wrote something new or fixed a problem, they gave it away. In return, someone else would have fixed a future problem before they found it and shared the solution with them, free.

People sometimes question what happened to ‘the hippy generation’. It appears that many of them went into computer science and carried right on with implementing a community based on freedom & love, inside any institution that would pay them a salary. ‘The Community’ developed a culture and distributed processes and tools that anyone was allowed to use. When people who had grown up in this community started their own companies, they didn’t follow the IBM model, as Bill Gates and Steve Jobs had. They adopted the Free tool-kits of the hippies. Facebook, Google, the Nokia Maemo/MeeGo team, Red Hat, Canonical (Ubuntu Linux) and the Steam gaming platform come from a new breed of entrepreneur. They are not in business to sell you hardware or software as a product. They sell you a service on the Internet. Many of these services are ‘free at point of delivery’.

But there are costs and someone has to pay. You are no longer locked into a hardware or software supplier, but you are locked to a single service provider, and they have your information, probably the most valuable asset of your organisation.

Masters of TeX

What are these things we call ‘posts’, ‘documents’, ‘essays’ or ‘books’? If a picture is worth a thousand words then pictures are clearly CHEATING so, for now, let’s restrict our attention to text-only ‘things’. In an earlier post, I called them “text objects”.

I have an unusual writing technology history. I was late enough for Computer Science to exist but early enough for it not to have had much effect. My final-year project was typed on a manual typewriter, by my Mum who had the necessary ‘office skills’. I joined the workplace when type-writers were on the wane but before word-processors had taken hold. A time of daisy-wheel printers as an alternative to clickety-clack line-printing onto stripy fan-fold paper. PCs hadn’t happened yet. My first software for writing, after a text editor, was a text processor. I learned to use DEC’s Runoff to format text but I might equally have used roff, nroff or troff on Unix. They were similar tagging languages derived from Runoff for Multics. Runoff arrived at DEC via MIT’s CTSS and University of California at Berkeley. This was an age in which an idea for software was not patentable. Everyone copied the good ideas and progressed together.
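
If you’ve never seen one of these tagging languages, a few lines of troff input using the ms macros give the flavour (the macro package is my choice; Runoff’s dot-commands looked much the same):

cat > project.ms <<'EOF'
.TL
My Final-Year Project
.AU
A. N. Author
.PP
Lines beginning with a dot are instructions to the formatter;
everything else is body text to be filled and justified.
EOF
nroff -ms project.ms | less    # format for a character terminal or line-printer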

One of the key decisions in the design of the Unix operating system was that all data was a bit-stream. One type of bit-stream was the text-file: printable characters, each represented by a unique 8-bit byte-code. Unix came with a free set of tools that allowed people who thought like programmers to manipulate text files very efficiently. An author created a text stream with embedded ‘markup’ language to give hints about structure and style. It was a technique borrowed from publishing.
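
As a sketch of the kind of thing those tools made easy, here is the classic word-frequency pipeline (the file name is invented):

# list the ten commonest words in a chapter
tr -cs '[:alpha:]' '\n' < chapter.txt | tr '[:upper:]' '[:lower:]' | sort | uniq -c | sort -rn | head

Each small program does one job and passes a text stream on to the next.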

Word processors were for typists. They were an incremental upgrade to a type-writer. ‘Normal’ people didn’t think like programmers. Normal people just wanted to print letters, not ‘run off’ a new copy of their maths PhD, full of strange characters, diagrams and correct pagination.

In 1978, mathematician Donald Knuth moved text-processing forward into full, commercial-grade type-setting with TeX. It had all the complexity that always comes free with flexibility. Technical authors no longer needed to risk having their beautiful formulae mangled by an innumerate type-setter. Leslie Lamport introduced a macro package, LaTeX, to make TeX easier to use but it was too late. The ‘adequate mouse-trap’ had been sold, to the lowest time-investment bidder. You probably came in at WordStar or WordPerfect or Word or even Google Docs and you have my sincere sympathy.
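
For anyone who has only ever met a word processor, this is roughly what Lamport’s LaTeX looks like from the author’s chair (a minimal sketch; the file name is mine):

cat > note.tex <<'EOF'
\documentclass{article}
\begin{document}
\section{A formula no type-setter can mangle}
\[ e^{i\pi} + 1 = 0 \]
\end{document}
EOF
pdflatex note.tex    # typesets to note.pdf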

Now, you are unlikely to use a ‘text processing language’ unless you are a Unix, or more likely Linux, weirdo. Which is a great shame because, even if you are normal, you have probably started to think like a programmer. You may find yourself wanting version control, or shared document authorship, or multiple output formats from the same source, or with variants.
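
A sketch of what that looks like when your documents are plain tagged text (git is the obvious tool for the version control; pandoc is just my example of a converter, not the only choice):

git init && git add note.tex && git commit -m 'first draft'   # whole-history version control, free
pandoc note.tex -o note.html    # same source, different output formats
pandoc note.tex -o note.epub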

We all need to take a few steps back before we can move forward. Word processors and WYSIWYG are wrong-headed. What-You-Get is a many-headed monster. You can’t see it because it may not exist yet.

I’d love to paint a happy picture now, of software tools, available Free, that are going to make it all easy – but I can’t. There are 3 viable competing tagging standards: LaTeX, DocBook XML and DITA. The newer standards have the fewest mature tools. The state of the Free market in this area is ‘broken’. There are commercial tools, and Unix can offer you tagging modes for text editors.

This is about where I came in, in 1982. 30 years of progress lost because accountants picked the wrong path for computer science to take.