Have I ensured that a world socialist revolution will never happen?
A book by Steve Wallis (www.socialiststeve.me.uk)
Continuing with my trade at university
In chapter 4, I explained that my mum had asked at Cardiff University’s computing centre where the best place to study computing was, and that she was told Manchester. I also explained that I had visited the University of Manchester on its “open day” and was impressed by the ancient machines that had been built at the university, on display in the computing centre downstairs from the Department of Computer Science.
However, I was not aware of the full significance of those machines until many years later, when boards appeared in the city centre celebrating the city’s achievements – one of them said that the first computer was built in Manchester! I was sceptical, since I had been told, when I studied the history of computing in my O-level computer studies course at school, that the first computer was Charles Babbage’s “analytical engine” – and as far as I can recall, the university’s role in the history of computing wasn’t mentioned at all! In reality, although the analytical engine was designed to be programmable, it was never completed, so it never ran a program at all! The first real computer, i.e. the first successfully to run a stored program, was developed at that university: a prototype nicknamed the “Baby”, which grew into the “Mark I”!
Even more recently, I received an email from somebody called Simon in Venezuela who, due to his poor knowledge of English, had misread something I had written on one of my web pages. I realised he could be a very important contact of mine, since I hadn’t previously come across anyone from Venezuela and that country had (and still has) a very left-wing government (led by Hugo Chávez). I therefore wanted to give Simon some advice on automatic translators on the internet. My knowledge of Spanish was very limited, so the best thing to do was to advise him on how to search for a translation program. I thought that “Ask Jeeves” could be the best search engine to try, since it takes natural language questions, but I decided to test it first by asking it where the first computer running a program was built. I tried the main Ask Jeeves site (presumably a US-based website) first, and all the answers it gave were gobbledegook! I therefore tried the UK site; searching the whole of the internet from there yielded the same or similar rubbish, but when I searched the UK only, the second web page it gave (below similar gobbledegook) was a BBC site about an event commemorating Mark I being built at Manchester. I therefore advised Simon to search using the UK or Spanish “Ask Jeeves” site and choose a program that translated English to Spanish well. Later, mainly for the benefit of voters in Manchester who might have had difficulty reading my manifesto for the 2005 general election (which I ultimately didn’t stand in, as I’ll explain in chapter YYY), I constructed a web page giving advice on, and links to, automatic translation programs.
So why was there a reluctance, particularly in the USA, to reveal where the first real computer was built? Well, computer science is clearly the most important science subject, since programs are used to control many things in the modern world and artificial intelligence techniques can be used to make computers more powerful still. The forces of big business didn’t want many of the world’s best programmers to congregate in Manchester! Such a concentration of talent at one university, with many students from working class backgrounds, could have seriously threatened the capitalist system itself! Also, the US ruling class has a vested interest in pretending that their own companies (such as IBM and Micro$oft) were great innovators, rather than acknowledging that they nicked most of the techniques used in their hardware and software from elsewhere. [Admittedly, window-based user interfaces did originate in America – pioneered at Xerox’s PARC research centre and popularised by Apple, before Microsoft Windows appeared.]
I started a BSc (“Bachelor of Science” is the full sexist name) degree in computer science at the Victoria University of Manchester (to use its official title) in the autumn of 1984. At the time, there was an institution known as UMIST (standing for the University of Manchester Institute of Science and Technology), which for most purposes was a separate university but was known as the Faculty of Technology of my university for registration and graduation purposes. In XXX, the universities merged to become Manchester University, which is what we normally called the Victoria University of Manchester anyway! Confusing or what?
At that time, the computer science degree was a mixture of software and hardware, but you could alternatively do a software-only degree called “computing and information systems” or a hardware-only degree called “computer engineering”. I chose computer science because I quite liked the hardware concepts that I studied at school (such as AND and OR gates) and I wasn’t keen on doing a degree with “information systems” in the title, since business programming didn’t interest me (particularly due to my left-wing views). However, I couldn’t understand transistors at all when they were “explained” in the hardware course in my first year at university. At the time, I blamed a bad lecturer, but others coped, so I think I had a mental block – either due to my subconscious sabotaging my efforts in that subject in order to encourage me to specialise in software, or simply because software concepts were dominating the upper levels of my mind, drowning out my attempts to understand this subject.
I therefore switched my degree to computing and information systems after the first term, doing an extra maths course for the rest of the first year. Despite the title of my degree, I only did one course on information systems, studying COBOL (the language virtually all commercial programs were written in at the time) in my second year. I thought COBOL was a dreadful language – programs looked like English but were extremely longwinded and the language’s rules were ridiculously complicated. My first COBOL program, an extremely simple one of about 40 lines, generated about 20 error messages!
In the third and final year of my BSc, I had to do a project as well as attend lectures. I was most interested in artificial intelligence (AI) and my project was on travelling from one place to another, using different forms of transport, as quickly as possible. My project supervisor was initially keen for me to use Logo, a simple language used to teach programming to children, with programs usually controlling a “turtle” that moved across the screen drawing lines. It was similar to Forth, which I had written a compiler for and used, as described in chapter YYY, with operations acting on a stack. It did have some fairly advanced features, but it was not particularly suitable for AI programming, and I eventually switched to the much more suitable logic programming language Prolog. The program I wrote wasn’t really original, but doing the project gave me some knowledge of AI searches and the Prolog language, which came in handy later. I had to write a report, which I spent the first half of the 1987 Easter holiday on, before concentrating on revising for my finals (final year exams).
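A search of that general kind – finding the quickest journey through a network of transport links – can be sketched in Python. This is only an illustration of the idea; the network and names here are invented, and his actual project was written in Prolog.

```python
import heapq

# Hypothetical transport network: origin -> [(destination, minutes, mode)].
NETWORK = {
    "home": [("station", 15, "walk"), ("airport", 40, "bus")],
    "station": [("city", 50, "train")],
    "airport": [("city", 70, "plane")],
}

def quickest_route(network, start, goal):
    """Uniform-cost search: always expand the cheapest partial route first."""
    frontier = [(0, start, [])]          # (elapsed minutes, place, route so far)
    best = {}                            # cheapest known time to each place
    while frontier:
        time, place, route = heapq.heappop(frontier)
        if place == goal:
            return time, route
        if best.get(place, float("inf")) <= time:
            continue                     # already reached more quickly
        best[place] = time
        for nxt, mins, mode in network.get(place, []):
            heapq.heappush(frontier, (time + mins, nxt, route + [(mode, nxt)]))
    return None

print(quickest_route(NETWORK, "home", "city"))
# -> (65, [('walk', 'station'), ('train', 'city')])
```

The search correctly prefers the 65-minute walk-and-train route over the 110-minute bus-and-plane one.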
In my second year exams, I had received a mark of about 65%. Although this was a 2:1, those exams were only worth 20% of the marks for my final degree, which meant I needed about 71% in my final year to get a “first” (first class honours) – assuming the usual 70% threshold applied. I did receive a first, but when I finally got a breakdown of the marks I discovered that I had only just done well enough. My overall mark was 70.8%, with exactly 70% for my final year project (worth 20% of the marks). There was one course in the final year, involving complex mathematical calculations, that was much harder than my other courses, and I worked out that I had needed over 80% in that course’s exam in order to reach 70% overall. Fortunately, we had an “open book exam” for that course (unlike all the others), allowing us to take in books and examples of such calculations that we had already performed, including from previous years’ exam papers, and I got about 95% in it.
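The weighting arithmetic can be checked in a couple of lines, using the stated figures of 65% and a 20/80 split:

```python
# Second-year exams carried 20% of the degree at a mark of 65%;
# the final year carried the remaining 80%.
second_year_contribution = 0.20 * 65                    # 13.0 percentage points
needed_final_year = (70 - second_year_contribution) / 0.80
print(needed_final_year)                                # approx. 71.25
```

With a second-year mark of exactly 65%, a first requires about 71.25% across the final year, consistent with the figure quoted.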
I started my PhD (which stands for “Doctor of Philosophy” although I didn’t study that subject) in computer science the following autumn, at the same department of the same university, working with the Mushroom Group (although others in that group were doing different research). The topic of my PhD ended up being integrating object-oriented programming (OOP) with knowledge representation (the sub-field of AI concerned with representing knowledge on a computer) on a declarative basis, but I only had a vague idea of what the topic would be for the first year and a half. I had two supervisors: Trevor Hopkins (with OOP expertise) and Alan Rector (for AI help).
Declarative programming consists of specifying things that are true, typically logical or mathematical statements, which can be contrasted with imperative programming where you provide a series of instructions to be performed in the order specified.
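A tiny contrast in Python (illustrative only) makes the distinction concrete – the same result computed both ways:

```python
# Imperative: say *how* -- a series of instructions mutating state in order.
def sum_of_squares_imperative(numbers):
    total = 0
    for n in numbers:
        total += n * n
    return total

# Declarative flavour: say *what* -- the result is the sum of the squares.
def sum_of_squares_declarative(numbers):
    return sum(n * n for n in numbers)

print(sum_of_squares_imperative([1, 2, 3]))   # 14
print(sum_of_squares_declarative([1, 2, 3]))  # 14
```

In a truly declarative language such as Prolog, the whole program consists of statements of this second kind, and the system works out the execution order itself.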
I started off by learning the language Smalltalk – in my opinion the best OOP language; the widely used C++ and Java are more modern OOP languages. In OOP, an object typically sends a message to another object, perhaps passing it other objects as arguments of the message. The receiving object executes a routine (called a “method” in Smalltalk) depending on the kind (“class” in Smalltalk) of object it is, and that method can itself send further messages before finally returning yet another object to the sender. Classes are arranged in a hierarchy, with subclasses “inheriting” methods from their superclasses. Sending a message like this is an imperative approach, and most OOP languages (including the three mentioned in this paragraph) are imperative.
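The message-sending and inheritance just described can be sketched in Python (illustrative names only, not Smalltalk syntax):

```python
# Calling a method is the "message"; the receiver's class determines
# which method runs, and subclasses inherit methods from superclasses.
class Shape:                          # superclass
    def describe(self):               # inherited by all subclasses
        return f"a shape with area {self.area()}"

class Square(Shape):                  # subclass of Shape
    def __init__(self, side):
        self.side = side
    def area(self):                   # the message 'area' answered per class
        return self.side * self.side

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return 3.14159 * self.radius ** 2

print(Square(3).describe())   # -> a shape with area 9
```

Here `describe` is defined once on the superclass, yet sends the further message `area`, which each subclass answers in its own way – the essence of inheritance plus dynamic method lookup.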
My supervisor Trevor and a researcher called Mario Wolczko from time to time ran a short course teaching Smalltalk-80 programming to people from industry or universities, and I worked through the notes from that course on a Sun workstation.
I realised that the Model-View-Controller paradigm in Smalltalk, used to display objects on the screen (in views) and allowing them to be modified by the user (using controllers), was rather cumbersome and error-prone, and I devised my own mechanism called “Representations”. In my mechanism, you could set up a hierarchy of different “representations” of the same object, perhaps with “translators” between them in the hierarchy modifying the values stored in the representations in an arbitrary way. Some representations corresponded to views on the screen, perhaps with controllers to change the representations’ values in some way. Changes could also be made to abstract representations by a program. Irrespective of how the change originated, a change made to any representation was automatically propagated to other representations in the hierarchy.
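The core idea of the mechanism can be sketched in Python (a simplified two-representation illustration with invented names, not the original Smalltalk code):

```python
# Two "representations" of the same quantity, with translator functions
# between them; a change to either is automatically propagated to the other.
class Representation:
    def __init__(self, value=None):
        self.value = value
        self.links = []   # list of (other_representation, translate_function)

    def link(self, other, to_other, to_self):
        self.links.append((other, to_other))
        other.links.append((self, to_self))

    def set(self, value, origin=None):
        self.value = value
        for other, translate in self.links:
            if other is not origin:               # don't bounce straight back
                other.set(translate(value), origin=self)

celsius = Representation()
fahrenheit = Representation()
celsius.link(fahrenheit, lambda c: c * 9 / 5 + 32, lambda f: (f - 32) * 5 / 9)

celsius.set(100)
print(fahrenheit.value)   # 212.0 -- propagated automatically
fahrenheit.set(32)
print(celsius.value)      # 0.0  -- propagation works in both directions
```

The point is that neither end needs to know how a change originated – from a controller on screen or from a program – for the other representations to stay consistent.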
I devised a system called RUN by combining my Representations mechanism with two computing techniques: unification (matching two structures assigning values to variables, used a lot in Prolog) and non-deterministic finite automata (NFAs, which are simple graphs representing routes through networks). RUN operated on semantic networks, one of the simplest forms of knowledge representation; such networks consisted of nodes typically representing objects or concepts in the world and arcs representing relationships between them. RUN could be used to perform searches, and automatically updated intermediate sets of data and search results due to changes made to the semantic network by the user.
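Unification can be illustrated with a short Python sketch (a simplified version of what Prolog does internally; variables are marked with a leading “?”):

```python
# Unify two structures, binding variables so that both become equal;
# return the bindings on success, or None on mismatch.
def unify(a, b, bindings=None):
    bindings = dict(bindings or {})

    def resolve(term):
        # Follow any chain of existing bindings for a variable.
        while isinstance(term, str) and term.startswith("?") and term in bindings:
            term = bindings[term]
        return term

    a, b = resolve(a), resolve(b)
    if a == b:
        return bindings
    if isinstance(a, str) and a.startswith("?"):
        bindings[a] = b
        return bindings
    if isinstance(b, str) and b.startswith("?"):
        bindings[b] = a
        return bindings
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):               # unify element by element
            bindings = unify(x, y, bindings)
            if bindings is None:
                return None
        return bindings
    return None                              # constants that don't match

# ('likes', ?x, 'prolog') unifies with ('likes', 'steve', ?y):
print(unify(("likes", "?x", "prolog"), ("likes", "steve", "?y")))
# -> {'?x': 'steve', '?y': 'prolog'}
```

Matching a query pattern against arcs of a semantic network in this way is what lets a system like RUN find all nodes satisfying some relationship.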
RUN wasn’t particularly fast or complicated, and wasn’t object-oriented (apart from being implemented in such a language), so I needed the inspiration that taking six months off in the middle to work on another project gave me.
The project I worked on, being paid a salary as a Research Assistant, was developing a Computer-Based Training (CBT) authoring toolkit, which we called CAT. It was to be used for teaching fault-finding in industrial machines such as a printing press. I wrote a language that was suitable for both simulation of such machines and developing user interfaces for the fault-finders, which I called FOOD (standing for Framework of Object-Oriented Declarations). As used in the project, it was therefore CAT FOOD!
I worked with Stephanie Wilson (who liked to be called “Steph”); she developed a graphical user interface for FOOD while I worked on the language itself. It was a bit frustrating working with Steph – it was ages before she produced any software that could be used to construct FOOD structures, so I had to provide a temporary user interface in the meantime, but she did a good job in the end. We were both supervised by Trevor Hopkins and also worked with some people in industry, including Khawar Iqbal who actually used FOOD.
In FOOD, components had a number of attributes, which were defined as constants, linked to attributes of other components, or specified as arbitrarily complex formulae involving current values of other attributes and perhaps previous values of attributes. For simulation purposes, components had classes, which were in a hierarchy and defined their general properties. For user interface construction, components were based on typical examples known as prototypes, which were stored in a library. FOOD was written entirely in Smalltalk, with the formulae to determine attributes’ values compiled into Smalltalk; it was therefore reasonably but not extremely fast.
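The attribute scheme can be sketched in Python (a simplified illustration with invented names, not actual FOOD syntax):

```python
# A component's attribute is either a constant or a formula over the
# current values of other attributes, evaluated on demand.
class Component:
    def __init__(self, **attrs):
        self.attrs = attrs           # name -> constant, or formula(component)

    def get(self, name):
        value = self.attrs[name]
        return value(self) if callable(value) else value

# A hypothetical printing-press roller whose speed is derived by formula
# from the drive's speed and a gear ratio.
drive = Component(rpm=300)
roller = Component(
    ratio=0.5,
    rpm=lambda self: drive.get("rpm") * self.get("ratio"),
)
print(roller.get("rpm"))   # 150.0
```

Because the roller’s speed is a formula rather than a stored number, re-reading it after changing the drive’s speed automatically reflects the new state – the behaviour a simulation of a machine needs.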
Whereas RUN wasn’t particularly object-oriented and FOOD was more to do with simulations than knowledge representation, both featured in my PhD thesis. However, the final and most relevant system I developed for my PhD, which took influences from both, was called ROOK (Representation of Object-Oriented Knowledge).
In ROOK, classes could have one or more “representations” (apologies for me using this word in yet another sense!), each of which had a set of attributes. Conversion formulae could be defined between the attributes of two different representations of the same class, so that a function defined for one representation was applicable to the other (automatically converting as required). “Multiple inheritance” was provided, so that the classes of all arguments of a function (rather than just a single object) were used to determine the formula that specified the result.
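The dispatch idea – choosing a formula from the classes of all arguments rather than just the first – can be sketched in Python (an illustration of the technique only, with invented names; this is not ROOK itself):

```python
# Formulas are registered against a (name, argument classes) key, and
# looked up using the classes of *every* argument at call time.
FORMULAS = {}

def define(name, *classes):
    """Register a formula for a given combination of argument classes."""
    def register(fn):
        FORMULAS[(name, classes)] = fn
        return fn
    return register

def apply_formula(name, *args):
    """Dispatch on the classes of all arguments."""
    key = (name, tuple(type(a) for a in args))
    return FORMULAS[key](*args)

@define("combine", int, int)
def combine_ints(a, b):
    return a + b

@define("combine", str, str)
def combine_strings(a, b):
    return a + " " + b

print(apply_formula("combine", 2, 3))        # 5
print(apply_formula("combine", "a", "b"))    # a b
```

In mainstream single-dispatch OOP languages only the receiver’s class picks the method; dispatching on all arguments, as here, is usually called multiple dispatch (or multimethods).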
ROOK was really a set of structures and routines to manipulate them, rather than a fully-fledged language. I wrote routines to query the structures, perhaps inverting formulae to find the arguments of a function that yield a particular result. I wrote a compiler from ROOK structures into FOOD, which generated inefficient code but could be used in some circumstances where a compiler into C (which I also wrote, and which generated very efficient code) could not. [C is a fairly low-level language designed for speed, on which C++, which provides additional object-oriented facilities, was based. I don’t think C++ had been released at the time. XXX]
The research council that provided my PhD grant refused to support me after three years, despite my taking six months out, so in the autumn of 1990 I became employed as a Research Associate (much the same as a Research Assistant but with slightly higher pay) on a multimedia communications project called MultiComms. I worked with Rhodri Davies (who liked to be called “Rhod”) and Ian Piumarta, and was again supervised by Trevor Hopkins. Trevor’s supervision was useful because he allowed me to spend a lot of time finishing off my PhD that I should have been spending on the new project, partly because a funding council gave incentives for people to finish PhDs within four years of starting the research. I finished my PhD thesis in the early autumn of 1991, just meeting the four-year deadline.
In the MultiComms project, we developed software to facilitate communications between Smalltalk objects representing multimedia entities on different machines across the internet (in the early days of the internet when this was much less straightforward than it is now). I wrote software to do this in Smalltalk and C (but the particularly low-level C routines were written by someone else), and provided examples in Smalltalk including a videophone.
During my second year on the MultiComms project, Mario put me in touch with Scott Moss at Manchester Polytechnic, who wanted some Smalltalk programming doing for him. I did the odd day while I was still working on MultiComms, and then a few weeks in the summer when the project had finished. He was impressed with my work and, after an informal interview, offered me a job working with him – starting just when the polytechnic was renamed Manchester Metropolitan University. I’ll write about my work at that institution, developing a simulation/AI language called SDML, in chapter YYY.