DeL 2012 paper: Distance learning in archives-rich environments

Proposal for the paper delivered at the Designs on eLearning 2012 conference at the University of the Arts, London, about the MA (Res) TD programme. The description below is in academic-speak, but is useful for those interested in how we’re approaching this.

 

Developing a New Model for Distance-Learning in an Archives-Rich Discipline

This paper describes the challenges in the development of a new programme targeting distance learners in a domain where the conventional literature is not easily available, and engagement with original artefacts is essential for building research skills.

 

Context

Typeface design is a field that has experienced considerable growth in the last decade. Central to this growth have been the strategy of OEM suppliers to support global markets without localising instances of their products, and the shift to region-wide or worldwide branding by major companies. The specialised skills required for high-quality multi-script typefaces exclude autodidacts, and underline the need for structured education in multi-script typeface design. The University of Reading has pioneered teaching in this area through a very successful full-time residential MA programme in the Department of Typography & Graphic Communication, whose graduates occupy dominant positions in the industry. The programme has inspired similar initiatives at postgraduate level, most notably in Argentina, Mexico, and Switzerland. Teaching relies heavily on the use of artefacts from the Department’s Collections & Archives, and particularly the Non-Latin Collection. The Collection comprises around 10,000 drawings of letterforms, commercial correspondence, and material relating to the technology of typesetting non-Latin typefaces. The artefacts are unique and irreplaceable, and generally sensitive to repeated handling. Student work on the MA is split evenly between practical and academic components. The main academic output takes the form of a rigorous dissertation based on original research. The better examples are of publication quality, and contribute to the nascent scholarship in the field.

 

Objective, and a Challenge

We surveyed the field and identified a community of practitioners transitioning to teaching careers, and educators seeking to gain higher qualifications in a research-intensive environment. Unlike early-career designers, this community does not require practical skills building, but is characterised by a lack of engagement with the literature in the field, and a lack of understanding in specialist areas, most notably working with archival material, documenting artefact-based research, and integrating artefact-based research into practice. We have also identified a broader lack of academic writing skills. Seeking to capture this audience, we designed a new MA programme: we expanded the academic elements to occupy the full credit weighting, and strengthened in particular the research methods elements. However, our target community is international in location, and limited in mobility: professionals cannot interrupt their practice, and educators cannot easily take a full twelve months out. This presented significant challenges for three reasons: firstly, because the print literature in typeface design is not generally present in university libraries, even in institutions that run graphic design programmes; secondly, because our methodology for building research skills is founded on intimate engagement with original artefacts; and, thirdly, because we place considerable expectations on group-based learning and peer engagement.

 

Programme Development

In response to the limitations on student mobility, and the three challenges we identified, we developed a hybrid mode of study. Our model combines a part-time, distance-learning mode for the majority of the 24-month registration with three full-time residential periods of two weeks each. The aim is to combine self-directed learning through guided study, discourse development through engagement with an online community of peers, face-to-face feedback on presentations and discussion, and hands-on experience with sensitive artefacts. The programme follows a three-term-per-academic-year structure. Students will start the course in October of Year 1, with the first residential period towards the end of the first term (late November – early December). The second residential period will take place in the summer of the first year, at a time that coincides with the vacation period of most HEIs. The third residential period will take place in the autumn of Year 2. Currently we intend to recruit only one cohort every two years.

 

Online Presence

We audited the literature we intend students to have access to, and identified only partial coverage by our institutional provision, especially for a worldwide cohort. We are addressing this by making ex-copyright material available online in an environment that allows shared use and annotation, and by working with our institution to enable global access to copyrighted print resources in electronic form. We will be employing collaborative tools for asynchronous seminars, and building a knowledge base around the core texts of the programme. For the second residential period we will run parallel student-led blogs on predetermined areas of study. All material will be shared amongst the whole cohort and staff, and final states of texts made available more publicly.

Countdown!

Gerard Unger on national trends in typeface design

Only a couple of weeks to go!

We planned the TDi course as a complete short course on typeface design, not [just] an introduction to the subject or a tutorial on software. We started by cramming in the best elements of the full-blown MATD programme, then cranked the staff-student ratio up to a level closer to private tuition than class-based learning. We have a lot of flexibility to customise what we deliver to fit each participant’s interests, and structure sessions on active learning principles; this means we get you to ask the questions, and find the answers through guided research.

We focus on type, but from a wider, typographically rich perspective. We consider the documents typefaces are used to typeset, the technologies used for typesetting or rendering, the user conditions for reading, and the historical and cultural context of their development and use. For example, if we are discussing the optimal texture for reading paragraphs with a Latin typeface, we don’t just tell people ‘this or that proportion or pattern’. Instead, we start by looking at a table-full of material that spans time and genre (from some Estienne books from the 1570s, to Modern books from the late 1700s, to some 19C publicity, to a range of 20C material, to articles parsed on Instapaper on an iPad). We then guide the group’s observations until people arrive themselves at valid conclusions. We use this approach especially strongly in our sessions on non-Latin scripts, where developing an understanding of how the script works is essential. For that we discuss (and try out) tools, and examine a lot of archival material that sheds light on why some scripts have the forms they do today.

The days of the first week are full of alternating practical sessions and hands-on seminars, often in groups of two or three students per staff member. We make Fiona Ross, Gerard Unger, and Gerry Leonidas (the three main contributors to the MATD) available all week, all the time, and bring in a number of additional contributors for specific sessions. The practicals take place both in group crits (on the wall) and one-to-one. We have a few evening lectures in the first week, usually to round off the day’s teaching, shared by Gerard Unger and Gerry Leonidas. A typical day starts with the cohort having a common session, then splitting into smaller group sessions, then reconvening. The small group sizes allow us to run the tutorials and seminars very informally, and to trust people to experience material from our collections and archives intimately. From incunables to original type drawings, and from Otl Aicher’s original posters for the Munich Olympics to a full run of Octavo and Emigre issues, participants have unprecedented access to typographic treasures. (It is difficult to describe the impact on a designer contemplating the forms of their italic of being presented with Giovannantonio Tagliente’s original writing manual.) At the same time, we can answer questions about the current state of typeface design in a range of areas, with our own work and case-studies of flagship OEM projects. By the end of the first week, we aim to have helped participants develop a deep understanding of typeface design, a solid set of skills for type development, and a good research-based process for expanding their knowledge and practice in new areas.

The second week takes these foundations, and focuses on developing the practical work started in the first week, guided more by the objectives set by the student for their own project. (Often people come with a specific typeface to work on, or a script they want to build experience in, and so on.)

Most of the participants are international: the past three years saw people from 21 different countries coming to Reading. A small number come during or after their MA courses in other institutions, or to support work on type-related PhDs. Most are mature professionals, designers and typographers, or educators.

Full details in the PDF on this page.

p.s. We’ve got two places left for this year’s course. Get in touch if you’re interested.

A comics reminiscence, and the Daytripper

 

It gets more difficult for books to take me by surprise, as I get older. It may be down to the books I read, but I tend to find this in comics more than in prose (my generation never called them ‘graphic novels’, but this is what I’m talking about).

Manara's HP and Giuseppe Bergman
Giuseppe Bergman and the calm before the storm

My love of comics goes back to the dawn of my literacy – the combination of story and images speaks to me very strongly. During my thirteenth year a new magazine called βαβέλ (babel) hit the newsstands in Athens, translating into Greek a knowledgeable selection of mostly European comics. Monthly instalments of anarchic, fantastical, irreverent, and sometimes profound illustrated stories held a mirror up to two deeply messed up decades, full of crises, political fluctuations, and social unrest. Post-1968 European artists had little patience for the self-absorbed, blathering demigods of 2000AD or Marvel. Instead, I got Liberatore and Tamburini’s dystopian Ranxerox, anticipating the broken down cities of Blade Runner; Édika, Gottlieb, and Lauzier, showing up the absurdities of urban middle classness; the dark, black humour of Altan and Vuillemin (still going strong); and Reiser, subversive even thirty years after his death. I balanced these with Will Eisner‘s deeply human stories, Hugo Pratt‘s languorously adventurous Corto Maltese, and Manara’s extended Bergman stories: like Corto Maltese, a man caught in a turbulent stream of fate, but dealing with his predicament rather less gracefully. (By the way, has anybody noticed that Hayao Miyazaki’s Porco Rosso is really a porcine Corto Maltese?)

Abuli & Bernet’s Torpedo

The French and Italians dominated my early collection: Giardino, Battaglia, Varenne, Saudelli, Crepax, most of them alternating between adaptations of noir story lines and wonderfully indulgent fantasies. I suspect that my love of noir literature was seeded with Abuli & Bernet’s Torpedo, and Muñoz & Sampayo’s Alack Sinner. These partnerships of superb storytellers and image-makers (Spanish and Argentinian, respectively) have superlative peers today: Darwyn Cooke’s coldly amoral Parker, an exceptional translation of Richard Stark‘s character, is rivalled for impact by Jacques Tardi’s adaptation of Manchette’s West Coast Blues. I re-read both frequently: they are masterpieces of telling a story with the least expenditure of words – only situation, and action.

‘Your sorrow, my sorrow’

All of these stories have characters (men, mostly) in different stages of coming to terms with a world that exceeds them. In noir, the main character may have the odds stacked against him, but has perseverance, cunning, and strength to carry him forward. The most interesting stories introduce a range of character flaws, making the personalities more human. Unlike Stark’s ruthlessly efficient Parker, Andrea Pazienza‘s Zanardi is amoral in a self-destructive way, just as Moebius‘ John DiFool is a hunkering coward. By far my favourite ‘man-in-over-his-head’ character has been Pierre Christin & Enki Bilal‘s Alcide Nikopol: dislocated in time (through a bungled hibernation) and frame of reference (an Earth where ancient Egyptian gods play politics), he strives to adapt while still sucking in as much as he can of this new world he finds himself in.

Alcide Nikopol and Horus the God

I knew Fábio Moon and Gabriel Bá from De-Tales (and Bá from The Umbrella Academy). A few days ago I got a copy of Daytripper. I started reading, and it hit me like a sledgehammer.

The book is about Brás, a man with ambitions to be a writer, a good father, a worthy son, and a friend. Each chapter picks one part of his life, but weaves into the narrative the unpredictability of accidents, a series of plausible ‘what ifs’ which interrupt the storyline. This is a device every Greek understands well: the Three Moirai (or Three Fates), Clotho, Lachesis, and Atropos, spin, apportion, and cut the thread of life. (Yes, that’s the origin of the phrase.) In the Daytripper the story picks up in the next chapter, the point of interruption unknown. This wonderful device, a cross between parallel universes and a linear world, is life laid bare: a microcosm of emotions and personal, immediate relationships, within a maelstrom of unpredictability. Most will pass with little effect; some will upturn everything.

Daytripper: three generations under a tree

There is a lot to read in Brás’ desire for his life to exceed the limits of the immediate action and relationships. He strives to be a good friend, and father, but has deeper desires: he captures perfectly the frustration at the heart of the modern human condition, where a wider consciousness, contemplation, and ambition can place seemingly insurmountable obstructions. For most of the Daytripper, Brás embodies F Scott Fitzgerald’s famous aphorism: ‘This is what I think now: that the natural state of the sentient adult is a qualified unhappiness.’

The dialogue is economical, like reality distilled. With the excess of words removed, the force of the environment and of the unspoken, imagined expressions grows. And it underlines the silent moments, when what is not said carries more weight than paragraphs of text. This is right at the heart of the power of comics: the illustrator does not supplant the visual imagination of the reader, but fires it up and channels it in new directions. The experience of reading becomes imaginatively richer because of the presence of images.

The women in Brás’ life offer a fascinating insight into the mind of the troubled male. They are ever-present, but in the periphery; they represent the family, continuity, and the next generation, but do not share in his contemplation. Only towards the end do we see a shift: when the son has taken on the role of father himself, companionship and affection have proven a stronger constant. This is juxtaposed with the role of Jorge, Brás’ friend: stronger in intensity, alternately present and missing, catalytic at times, but ultimately absent. The overarching feeling of solitude, the man and his thoughts alone, is accepted and embraced brilliantly at the end of a life full of people.

The Daytripper is the best example of visual poetry I have read in quite a while.

Daytripper
A man and his thoughts, with coffee, by the sea

Od „Ala Ma Kota” Do E-Matury slides

These are the slides from my presentation at the Od „Ala Ma Kota” Do E-Matury conference in Warsaw, on 18 April 2012. The talks were 16 minutes long, so this is a fairly condensed deck. As always, my slides are starting points for each idea; there’s about a minute’s worth of elaboration for the key ones. I’ve been asked to write the talk up, and I’ll link to that when it goes public.

Slide 1: “It’s typography, Jim, but not as we know it…”

Six observations, four challenges, one conclusion, and five predictions…

Slide 2: Observations
Slide 3: People run out of time faster than they run out of options
Slide 4: There’s no such thing as “new media”, only new users
Slide 5: Internationalisation and geography matter
Slide 6: The permanent and ephemeral are reversed
Slide 7: Materials become precious
Slide 8: The market does not wait for teachers to write lesson plans
Slide 9: Physical properties are no help in predicting potential uses
Slide 10: Challenges for designers
Slide 11: Support for text and typography is not Good Enough
Slide 12: Support for text and typography is not Good Enough
Slide 13: Typographers lose the reassurance of familiar, visible, tactile structures
Slide 14: (A conventional structure is defined by spreads and sequence in the bound object)
Slide 15: (An e-publication’s structure relies on content sections…)
Slide 16: (…which are not differentiated without reference to external navigation)
Slide 17: (…and therefore rely on the root-level navigation for the publication to explain itself)
Slide 18: We lack great models for integrating inline and immersive content
Slide 19: Texts are becoming nodes in networks, but typography has been volume-bound
Slide 20: (A single one of the sections we identified has four aspects that traditional typography has no solution for:)
Slide 21: (links within the text to other texts,)
Slide 22: (annotations by the user,)
Slide 23: (annotations by other users,)
Slide 24: (and links within the annotations.)
Slide 25: (Developments in literature, which is easy to parse, show some ways forward)
Slide 26: (uncovering meaningful connections in the text)
Slide 27: Conclusion
Slide 28: This emerging typography is traditional at the paragraph level, and potentially innovative at the semantic level
Slide 29: Predictions
Slide 30: Three to five years of conservative solutions: just ‘digital books’
Slide 31: Local interfaces will adopt traditional solutions
Slide 32: Personal aggregators will threaten traditional authorship models
Slide 33: Knowledge platforms will push beyond ‘digital books’ when trust systems mature
Slide 34: Authors, publishers, and students will continue to adapt much faster than teachers (and schools, and education authorities)
Slide 35: Thank you

 

Time Capsule reborn

If you have a Time Capsule, you won’t have escaped the news that they tend to die sooner than they should. The Time Capsule Memorial Register will have been on your mind, especially if you’ve read the news reports (see ‘Apple finally admits problems with Time Capsule and offers replacement’ in The Guardian, for example). All the more worrying if you use the thing for a Time Machine backup.

I sent mine to Chris Fackrell, who tested and repaired the power supply, then punched a hole in the baseplate to enable ventilation. Well, ‘punched a hole’ is what I would have done; what Chris did is indistinguishable from OEM work, save for the use of a metal mesh (which does look better than the cutouts in plastic Apple would have used, had they designed the thing with adequate ventilation). He also replaced the hard drive, so I got a hefty upgrade in capacity.

The cost of doing this was less than the cost of a new Time Capsule, which would be at risk of failing itself anyway. And much less than the cost of a new Base Station and a NAS, which is what the Time Capsule is in my setup.

If you’ve got one, check it out. My warning signs were overheating of the case, and brown marks on the baseplate.

In search of the digital Cresci (1999)

The text below is a pre-publication version of an article that appeared in Information Design Journal, in 2000. Although somewhat dated, it is useful for typeface design students. It picks up references from a cluster of texts from four issues of Visible Language that form part of the sources for one of the seminars in the MATD and MA (Res) TD:

Donald E Knuth, ‘The concept of a Meta-Font.’ In vol XVI, no 1, 1982

Douglas R Hofstadter, ‘Metafont, metamathematics, and metaphysics.’ And:
[various] ‘Other replies to Donald E Knuth’s article, “The concept of a MetaFont”.’ In vol XVI, no 4, 1982

Geoffrey Sampson, ‘Is Roman type an open-ended system? A response to Douglas Hofstadter.’ And:
Douglas R Hofstadter, ‘A reply from Douglas Hofstadter.’ In vol XVII, no 4, 1983

Donald E Knuth, ‘Lessons learned from Metafont.’ In vol XIX, no 1, 1985

 

**********************

No review of digital typography is complete without a long stopover in Don Knuth’s neighbourhood. A mathematician based in Stanford, California, Knuth is primarily active in the field of computer programming. During the mid-1970s, out of frustration with the quality of phototypeset mathematics, Knuth was driven to address the problem of at least matching the standard of hot-metal Monotype output. [1. Knuth’s first two volumes of The art of computer programming (Reading, MA: Addison-Wesley, 1968 and 1969 respectively) were typeset on Monotype machines. A good source on the standard achievable by hot-metal composition is the winter 1956 issue of the Monotype Recorder (vol. 40, no. 4), which was devoted to the setting of mathematics with the Monotype 4-line system.] The product of this endeavour was Tex, a versatile and powerful typesetting system which outputs device-independent documents. Alongside Tex, Knuth developed Metafont, [2. Knuth typesets Tex and Metafont as TEX and METAFONT respectively throughout the book. Here the words are treated as software titles, not logotypes.] a system for generating typeforms. (The term “typeform” is used to signify the rendering of a typographic character, that is, a mark intended to function in conjunction with other marks with which it forms a collection (normally a typeface), without prior knowledge of the context of its use. On the other hand, a letterform is a one-off mark produced for a particular context, e.g. a manuscript or a piece of calligraphy.) From early on Knuth made both systems freely available, and it is not an overstatement to say that Tex has transformed the production of scientific writing. Tex users number in the tens (if not the hundreds) of thousands, and it will be a rare math-intensive paper that is not so typeset.

Digital typography, published in 1999, is the third in a planned series of eight books of Knuth’s published works, together with some new material. It is a hefty 680 pages, comprising 34 papers and articles, including the transcripts from three relatively recent question & answer sessions. The majority of the contents have been previously published in TUGboat, the journal of the Tex Users Group. Knuth has already begun the process of identifying errata; characteristically, readers who contribute can look forward to a reward of $2.56. (The list is updated at Knuth’s Stanford page.) To his credit, not only is the prose very readable, but the mathematical exposition managed to flatter this reader that half-forgotten business studies algebra was up to the task of digesting the arguments. However, for reviews of the programming content, and opinions on literate programming (an area of Knuth’s activity to which he attaches great importance), readers are advised to turn to TUGboat vol 20, no 2, 1999: its editor has announced reviews by Knuth’s fellow computer scientists.

At one level, the book is an archival collation of technical papers and notes; at another, it is a source of pertinent ideas and fascinating suggestions – especially so when addressing the nature of letters and typeforms. Inevitably, the more technical chapters will make non-specialists feel they are eavesdropping on a conversation having missed the key remark. Even so, reading the early projections back-to-back with the mature re-evaluations (mostly through the question & answer transcripts) sheds a revealing light on the development of a significant body of work.

The papers fall into two broad categories, Tex and Metafont, with a few further items on rasterization. Of the two main themes, Tex is the more significant, if less interesting – a corollary of its undisputed status: the value of Knuth’s contribution to electronic typesetting is as significant as the superiority of Tex’s line-breaking algorithms over pretty much anything else available now, let alone twenty years ago. Indeed, it is only this year, with the launch of Adobe’s InDesign, that we get a ‘multi-line composer’ – a line-breaking algorithm that monitors several lines before fixing line-breaks. Adobe properly acknowledge the credit, [3. See ‘Adobe InDesign in depth: text and typography’ pp. 3–4, 8.6.99. Adobe’s description of InDesign’s ‘multi-line composer’ is impressively close to Tex, and they even use the term ‘badness’ (a Tex trademark) in their documentation.] but, at the time of writing, it remains to be seen whether InDesign can match the typographic texture Tex can achieve.

Tex is based on the twin concepts of approaching documents as lists of boxes joined by stretchable ‘glue’, and defining values of ‘badness’ for deviations from optimal spacing values. Knuth repeatedly mentions that the potential of these simple premises was not fully foreseen. Indeed, a non-Texpert looking at the typesetting complexity of material produced with Tex cannot but admire the elegance and economy of the concept. In this respect Digital Typography is a showcase item – and if the typesetting of the mathematical matter is more obviously impressive, the evenness of the texture in less extravagant pages sets a subtle benchmark. (The only gripe in this department being a propensity for too-wide, almost double, spaces after full stops – an annoyingly persistent legacy of typists, which is avoidable in Tex.) Indeed, the combination of quality printing on good paper and the effectiveness of Tex is enough to render the singularly unattractive Computer Modern typeface used for most of the book digestible – no mean feat by any standards.
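For the technically curious, the ‘glue’ and ‘badness’ arithmetic can be sketched in a few lines. The Python fragment below is purely illustrative (the function name, numbers, and behaviour are mine, not Knuth’s; Tex’s own computation is integer-based and considerably more involved), but it shows the principle of penalising a line by the cube of how much of its available stretch or shrink it must use.

```python
def badness(natural: float, target: float, stretch: float, shrink: float) -> int:
    """Sketch of a Tex-style 'badness' score for one line.

    The line's boxes have a natural width; the glue between them can
    stretch or shrink by the given amounts. Badness grows with the cube
    of the proportion of that flexibility used, scaled by 100 and capped
    at 10000 ('infinitely bad'). Illustrative only.
    """
    excess = target - natural
    if excess == 0:
        return 0
    flexibility = stretch if excess > 0 else shrink
    if flexibility <= 0:
        return 10000                      # no glue left to absorb the difference
    ratio = abs(excess) / flexibility
    return min(10000, round(100 * ratio ** 3))

# A line 6pt short of its target, with 10pt of stretchable glue:
print(badness(natural=344, target=350, stretch=10, shrink=3))  # -> 22
```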

By far the more interesting parts of the book are the chapters on the inception, development, and use of Metafont. Particularly enjoyable for documenting the evolution of a design is the chapter on AMS Euler, a typeface family for mathematical typesetting that Knuth developed in association with Hermann Zapf for the American Mathematical Society. [4. One cannot help but think that Zapf, probably the best known representative of the calligrapher-as-type-designer approach, was the ideal choice for a meta-collaborator: Zapf’s technique lends itself readily to interpretation in the Meta-font idiom. It is tempting to speculate on developments had Knuth collaborated so closely with a type designer from a punchcutting or drafting background. It is, however, interesting to compare other documented uses of Metafont for large projects (see, for example: Southall, Richard, ‘Metafont in the Rockies: the Colorado typemaking project.’ In Roger D. Hersch et al (eds.), Electronic publishing, artistic imaging and digital typography. Berlin: Springer Verlag, 1998, pp. 167–180, where the emphasis was on control of the rasterized output).] Predictably, work on Metafont started as an attempt to address the practical problem of supplying typefaces for Tex – remember, this is long before the days of PostScript and TrueType, and digital typefaces suitable for mathematical typesetting were thin on the ground. Knuth’s original goal was ‘to find a purely mathematical way to define the letter shapes and to construct the corresponding raster patterns’. [5. Digital typography, p. 35] This statement can be something of a Pandora’s box, depending on whether one interprets ‘to define letter shapes’ to mean: ‘to codify an explicit description of a typeform (or a group of typeforms)’ – or: ‘to specify a process for the generation of new typeforms’. Throughout the book, one gets the impression that Metafont does the former, and its creator thinks (sometimes, at least) that it can do the latter, as if Knuth saw in Metafont more than the technology implied. In Digital Typography he recounts how he studied letterforms and saw regularities in the design, from which he realised that he ‘shouldn’t just try to copy the letterforms, but [he] should somehow try to capture the intelligence, the logic behind those letterforms’. [6. Digital typography, p. 607] One cannot but think that, at some point, Knuth must have made a mental leap from devising a description system for typeface families to a broader generic system for typeform description and generation. Perhaps it was his enthusiasm for letterforms that led him to such statements. In any case, this quote raises two fundamental questions: given that there is some ‘intelligence’ behind typeforms, is it possible to make it explicit? And, secondly, assuming that it is so, is it possible to codify this ‘intelligence’ in mathematical terms?

In any case, Knuth seems to have been aiming at a new approach for designing a typeface family, an approach that could ensure consistency in the design of various, not necessarily predetermined, weights and styles. (A goal that Adobe’s Multiple Master fonts have also sought to address – bar the ‘not necessarily predetermined’ bit.) The first version of the system, Metafont 79, defined ‘pens’ and ‘erasers’, and prescribed the trajectories between co-ordinates in two dimensions that these pens (or erasers) would have to execute in order to render each typeform. The dimensions and behaviours of pens and points were parameters definable by the user. A particular Metafont would be a collection of a finite set of parametric mark-makers and behaviours, each parameter assuming one of a potential infinity of values. In other words, a single ‘master description’ could output infinite variations on a theme. Key to this point is the fact that there exists not a singular, explicit collection of typeforms from which variations are extrapolated; rather, the specified parameters define a ‘design space’ within which any instance is as valid as the next. In essence Metafont-the-application is a system for the description of categories of typeforms; each Metafont-family is a classification system with a fixed number of ‘pigeonholes’ of infinite depth; each Metafont-typeface is the compilation of a selection from each pigeonhole.
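To make the idea of a parametric ‘design space’ concrete, here is a very loose Python sketch. It is not Metafont code and does not reproduce Knuth’s grammar of pens and trajectories; the names and geometry are invented purely to show how one parametric description plus a set of values yields one concrete instance among arbitrarily many.

```python
from dataclasses import dataclass

@dataclass
class StemParams:
    """Hypothetical parameters; illustrative only, not Metafont's grammar."""
    pen_width: float   # thickness of the tracing 'pen'
    x_height: float    # height of the stroke
    slant: float       # shear applied to the trajectory

def meta_stem(p: StemParams, steps: int = 10):
    """Trace a simple stem as a pen path: one parametric description,
    arbitrarily many concrete instances depending on the values chosen."""
    return [(p.slant * (p.x_height * i / steps),   # x follows the slant
             p.x_height * i / steps,               # y runs up the stem
             p.pen_width)                          # pen size at this point
            for i in range(steps + 1)]

# Two instances drawn from the same 'master description':
regular = meta_stem(StemParams(pen_width=20, x_height=500, slant=0.0))
slanted = meta_stem(StemParams(pen_width=16, x_height=480, slant=0.2))
```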

Knuth’s scientism was presented as the most recent chapter in a tradition that started with Felice Feliciano in the 1460s, continued with Damiano da Moyle, Fra Luca de Pacioli, and nearly twenty others, to reach a spectacular highpoint around 1700 in the engravings supervised by Philippe Grandjean. Almost without exception, these attempts at instructing on the ‘correct’ or ‘proper’ formation of letterforms (mainly capitals) were no more than fascinating red herrings of rationalisation. The most important exception to this trend was the Milanese scribe Giovanni Francesco Cresci, who pointed out the futility of his contemporaries’ propositions – and is in fact quoted in Digital Typography. But Knuth then does an about-face and writes: ‘Well, Cresci was right. But fortunately there have been a few advances in mathematics during the last 400 years, and we now have some other tricks up our sleeves beyond straight lines and circles. In fact, it is now possible to prescribe formulas that match the nuances of the best type designers’. [7. Digital typography, pp. 38–39] This remark can be interpreted as either ‘we can codify an existing design without any information loss’ (which is perfectly acceptable), or ‘it is possible to specify algorithms for the creation of letterforms’ – to which we should add the caveat: to Cresci’s standard. Neither of these interpretations is a correct description of Metafont, but the latter is closer to Knuth’s writing about it.

Metafont is a tool for creating typeforms in the same way that a chisel is for creating letterforms. A meta-designer will approach the computer with more or less pre-formed intentions about the general style of typeforms, if not a wholly clear notion of a specific instance of the Metafont he will be defining. He would then have to mentally translate his intentions into the Metafont grammar of pens, erasers, trajectories, and edges, and enter the appropriate code. And, as with all similar activities, we can expect the designer to undertake several proof-and-revision cycles until the result is deemed satisfactory. The meta-designer uses a computer to facilitate the expression in a usable format of a pre-conceived set of typeforms, in the same way as someone using Fontographer or FontLab: the concept of a typeform is visualised internally, then translated into a formal grammar understood by a tool, then entered in the tool’s memory for processing. For sure, as with all tools, Metafont will in some way affect this ‘double translation’ of the designer’s intentions. But to claim that Metafont aims at ‘the explicit implementation of the design ideas in a computer system’ [8. Bigelow, Charles, [contribution to] ‘Other replies to Donald E. Knuth’s article, “The concept of a Meta-Font”.’ In Visible Language, vol. XVI, no. 4, 1982, p. 342] misses the point that Metafont simply implements the product of the design ideas in a specific medium. What results from meta-designing is nothing more than the final trace of the process, not in any way a representation of the design process itself – let alone the ideas that generated it. Ultimately Metafont rests on two flawed assumptions: one, that by studying the finished product of designers’ work we could understand what was going through their minds, and isolate these intentions from the effects of their tools; and, two, that we could then express the range of these intentions in code for a computer ‘to carry out the same ideas. Instead of merely copying the form of the letters, […] to copy the intelligence underlying the form’. [9. Digital typography, p. 8]

What is important in type design? Type designers would say: patterns, relationships, the interplay of negative space and positive mass. A designer may intellectualise the design process ex post facto, but it is highly questionable that this process can be made explicit before the design is complete – indeed, it is safe to assume that it is largely crystallised during designing. To seek to represent the internal process of designing as the making of marks is to mistake a procedure for its motive.

Given that a considerable part of Knuth’s research was based on manuscript books and fine printing (which would, most probably, make use of oldstyle typefaces), it was perhaps not unexpected for him to adopt a model that replicated the rendering of calligraphic letterforms. However, the fluidity and potential for optical balance in formal calligraphy does not survive in Metafont. It is indicative that, in the process of refining the mathematical description of an ‘S’, Knuth chose Francesco Torniello’s geometric description from 1517. Torniello’s essay was one more entry on the list of the Renaissance geometric fallacies that started with Feliciano – one could add that Torniello’s were among the less beautiful letterforms. Knuth follows and amplifies Torniello’s instructions, then ‘solves’ the problem of a mathematically correct representation of the ‘S’. However, even though the meta-S is far more detailed and flexible than Torniello’s, it does not hold any information on what makes a good ‘S’.

Translation of Torniello’s instructions in Metafont

It could be argued that a procedural model that replicates an existing, non-computer-based activity does little justice to the potential of computers as design tools. A computer application for designing typeforms is unconstrained by lettering procedures. Shouldn’t this liberation suggest a new design paradigm? In its ductal rationale Metafont is suggestive of a digital version of Gerrit Noordzij’s scheme for description and classification of typeforms according to the translation and expansion of a rendering nib. [10. Noordzij, Gerrit, The stroke of the pen: fundamental aspects of western writing. The Hague, 1982. (A somewhat expanded version of this paper was published under the title ‘Typeface classification’ in the first volume of Bernd Holthusen & Albert-Jan Pool, Scangraphic digital type collection (edition 4). Mannesmann Scangraphic, 1990, pp 65–81)] But Noordzij has his feet firmly planted in the western lettering tradition, and approaches typeforms as inextricably linked to this background. Significantly, his analysis is exegetical, and not intended as a tool for the generation or manipulation of new typefaces. Furthermore, Noordzij acknowledges the presence of exceptions outside the possible design space (the ‘typographic universe’) of his system. This internal consistency is not obvious in Metafont’s approach, in which analysis of one idiom is presented as a tool for describing typeforms of any pedigree. In other words, Knuth adopted an idiom that seems foreign to his tools. Admittedly, the significantly redesigned Metafont 84 made provision for the definition of typeform edges, but Knuth still believed that the pen metaphor ‘gives the correct “first order” insights about how letters are drawn; the edge details are important second-order corrections that refine the designs, but they should not distract us from the chief characteristics of the forms’. [11. Digital typography p. 330]

For a type designer from a non-calligraphic background the ‘moving pen’ paradigm will probably limit the formal freedom of defining typeforms on a virtual plane. The designer will have to translate the contours and patterns in his mind to the grammar of Metafont, a process which is counter-intuitive – not to mention the possibility of the intended forms being unreasonably difficult to convert into Metafont. Moreover, type designers do not generally think in nebulous trends (‘some sort of serif about here’ or ‘a bit of a notch around there’) – they tend to approach new designs with a set of curves or patterns, in positive or negative space, that they then develop to construct the whole of the typeface. In this respect the flexibility afforded by Metafont in creating new designs is of debatable benefit.

Instances of the same meta-S

This issue highlights a common denominator in discussions of digital type: the relationship between the designer’s intentions and the typeforms that are finally rendered is not fixed. The designer will record disembodied – perhaps idealised? – typeforms as they would render under hypothetical, ‘perfect’ conditions, then edit the font data or enrich it with information to make the typeforms render as faithfully to the original intention as the technology allows. It is arguable that, to some extent, a similar process existed in all typeform-making technologies. However, currently available type design software depends heavily on this dissociation of intended and realised typeform. PostScript Type 1 fonts are the best example of this: dependence on a system- or printer-resident rasterizer for rendering means that as the rasterizer is upgraded the same font information may generate different output. In TrueType fonts rasterizing is primarily controlled by information in the font file itself, but the process of specifying this information is quite distinct from the design of the outlines.  Metafont is exceptional in that the typeforms generated from the user’s definitions of [groups of] typeforms are inextricably linked to the resolution of the output device. ‘Pens’ are digitised to the appropriate raster before rendering the typeforms. It is also possible for the user to define explicitly the raster frequency, as pixels-per-point or per-millimetre. In other words, the ‘enrichment’ is not a separate stage in font design and production, but an integral aspect of working with Metafont. It might well be that in this respect Metafont’s approach is truer to the digital world – or not?

The middle S in the sequence of the previous illustration, with defining points highlighted. Notice the white regions in the counters, where ‘erasing’ has been specified

There is another argument against Knuth’s scientism: Metafont fails the typographic equivalent of the Turing Test.  He asserts that ‘Metafont programs can be used to communicate knowledge about type design, just as recipes convey the expertise of a chef’. I would argue that neither is the case, but I am not a professional cook. To stick to the culinary analogy, however, Metafont can be seen as one of those multi-function tools that chop, grate, slice, and generally do faster and in some cases better all sorts of things that you could do by hand – but it does not join you at the table when the dinner is ready. Can we supply a computer-literate person with the Metafont-programs for a typeface family, and in any way expect them to develop an understanding of the original designer’s concepts for the design?

Indeed it was a designer and educator with far more experience than this writer that interpreted Metafont as implying that ‘the parameters of a design are more important than the design itself – that is: than the idea behind the design and how the face looks and reads’.  Here our attention is pointed to a fact that Knuth seems to overlook: typeforms are social animals. Type designers must marry their personal (culturally coloured) viewpoint with their speculations on the reception of their typeforms within their environment. Furthermore, it does not follow that the eventual interpretation of the typeforms will be the anticipated one. This cycle inevitably informs the design process. The changing appreciation of designs through time is witness to how the same typeform – the same procedure – can elicit different interpretations. (Does Akzidenz Grotesk carry the same connotations today as it did at the turn of the century?) Conversely, if the environment within which a typeform is created is foreign to us, Metafont’s ‘capture of the intelligence behind the design’ will not reveal the designer’s intentions. (If a contemporary designer works in a script we are not familiar with, could the Metafont files offer insight into which parts of typeforms were essential elements, and which were embellishments?) Since the computer operates in a social vacuum, it cannot understand what it’s doing. Not only is Metafont merely replicating a procedure, but its product is meaningless within the computer’s frame of reference: Metafont cannot judge whether the typeforms it generates are acceptable in the type designer’s environment. In other words, it is possible that an instance of a meta-typeface is valid within the Metafont-system (indeed, assuming a debugged program, it would not be produced were it not so), but not acceptable in the social context.

 

Now the reasoning behind the caveat about generating letterforms ‘to Cresci’s standard’ becomes obvious: the qualitative sufficiency of the outcome is not guaranteed by the otherwise valid implementation of a Metafont. Knuth has forgotten his brief recognition that Cresci had got it right. Like a true descendant of the 15th-century deterministic letterers so enamoured of the ruler and compass, he defines a formula for determining the optimal curvature at a given point along a typeform. But what he comes up with is, inevitably, his own interpretation of a ‘most pleasing curve’. Clearly, each type designer has his own inner set of ‘most pleasing curves’, proportions, and patterns that he returns to in each design. It could probably be argued that the mathematical expression of curves constrains the range of possible ‘most pleasing’ curves for each circumstance. (W. A. Dwiggins might have had something to add to this. While he was satisfied with the Mergenthaler draftsmen’s transfer of his own drawings to blueprints for pattern-making, he did comment on the over-regularised Legibility Group typefaces. In copying Dwiggins’ drawings the draftsmen were only using their french curves to follow patterns that had been designed according to one person’s vision. On the other hand, the Legibility Group typefaces were designed in-house, and – despite C. H. Griffith’s supervision – were a collective effort. It is not difficult to imagine that in such an environment french curves would suggest patterns, instead of just following them.)

The subjectivity of what constitutes a ‘most pleasing curve’ is borne out by the variety of opinions on any type design. Despite any generalisations we may care to make, the optical compensation of shapes so that they look and feel correct, rather than actually measure so according to a geometric rule, is very much up to the designer’s judgement (and retina). It is this subjectivity which demands that every type designer goes through long sessions of printing out, looking at, editing, and starting all over again. It is the presence of these internal, deeply subjective ‘french curves’ that leads many designers to revisit similar themes across quite different typefaces. In this respect, by drawing on existing cultural patterns and expanding on them to create new, personal interpretations, is it an exaggeration to compare typeface design to composing a piece of music – a case for type design as jazz?

When Knuth’s concepts first reached a wider, typographically minded audience, the debate generated arguments that still resonate. At its best, Digital Typography is a source of provocative inspiration, an opportunity for debate that should not be missed. Its challenges should be heartily taken on by typographers, type designers, and educators alike. We may disagree with Knuth’s adherence to Torniello’s ways, but his thoughts have brought us that little bit closer to our search for a Cresci for the digital age.

‘In search of the digital Cresci: some thoughts on Don Knuth’s Digital Typography (Stanford, CA: CSLI Publications, 1999)’. In Information Design Journal, vol 9, no 2 & 3, 2000, pp 111–118

Languages, scripts, and typefaces (2006)

[Response published in tipoGrafica no 70 (2006)]

Do you consider that new technologies will enable those languages that have historically not been represented in font design, to incorporate the sounds appropriate to their tongues?

Hm… the question is somewhat misleading. The ‘languages that have historically not been represented in font design’ bit suggests that typeface designers who are native users of those other languages, the ones that have ‘historically not been represented in font design’, designed their typefaces with the sounds of their language in mind. This is not the same as saying ‘I can hear the words when I read’ or something like that; it means that the designer would have had specific sounds in mind when designing a particular glyph. I’m pretty certain this is not the case; even more, I think the hypothesis runs counter to the basic mechanics of natural scripts.

Are fonts developed for specific languages? Even in the old days of 8-bit codepages, when each font file could take up to 256 characters, any single font allowed many languages to be typeset; the same font would do for English, Spanish, French, Finnish, and Italian, just as the same font with the declaration ‘CE’ following its name would cover Croatian, Czech, Estonian, Hungarian, Latvian, Lithuanian, Polish, Romanian, Latin-based Serbian, Slovak, Slovenian and Turkish (I think that’s all).
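A small illustration of the codepage point, for readers who like to see it in practice: the Python sketch below (with an arbitrary Polish example word) shows the same text fitting the Central European 8-bit encoding but not the Western European one, which is precisely the kind of constraint that produced those ‘CE’ groupings.

```python
# 'żółw' contains letters covered by the Central European codepage
# (ISO 8859-2, the 'CE' grouping) but not by Western European Latin-1.
text = "żółw"  # arbitrary Polish example word
print(text.encode("iso8859_2"))       # fits the 'CE' codepage
try:
    text.encode("latin_1")
except UnicodeEncodeError as err:
    print("not representable in Latin-1:", err)
```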

Such groupings (and there were plenty) were a combination of technical limitations (fitting all the characters in the font file) and commercial expediency: development, distribution, and retail channels. Each of these fonts claimed it could be used to typeset all these languages – and it did, offering a more or less adequate typographic representation of a script’s subset. I am choosing my words carefully here, because the point I am making is that typefaces offer interpretations of scripts, not languages.

We can shed some light on this distinction if we consider another distinction. So far I’ve been using the term ‘character’, but in fact this is not strictly correct. At the heart of contemporary applications and typefaces is the Unicode standard: a system for assigning a unique identifier to each character in any script ever used by humans. In this sense, ‘Latin small letter a’ and ‘Greek small letter alpha’ are characters, but ‘a’ and ‘α’ are not: they are glyphs, typographic representations of characters. In other words, all the ‘α’s in all the typefaces in the world are the same character, Greek alpha, and all the ‘a’s are Latin [script] ays (how do you spell ‘a’?) – not English or Spanish or Finnish ays. To put it bluntly: the character implies a specification for the formal configuration of the glyph (the relationship of positive and negative spaces) but is ignorant of the specific shape.
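This character/glyph distinction is exactly what the Unicode character database records, and it is easy to verify; a quick illustration using Python’s standard unicodedata module:

```python
import unicodedata

# The Latin 'a' and the Greek 'α' are distinct characters with their own
# code points; typefaces then supply glyphs -- the visible shapes -- for them.
for ch in ("a", "α"):
    print(f"U+{ord(ch):04X}  {unicodedata.name(ch)}")

# U+0061  LATIN SMALL LETTER A
# U+03B1  GREEK SMALL LETTER ALPHA
```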

The relationship between character and glyph is, in my view, strongly analogous to that of a glyph and its voicing within a language. The Latin ‘a’ implies an ‘envelope’ of sounds within each language that is written with the Latin script, and a set of relationships of this sound with neighbouring glyphs. The leeway in speaking the glyph is, however, considerable; even to my unfamiliar ears a word such as ‘tipografia’ sounds very different when spoken by my Argentinian, Mexican, or Spanish students. Should they be writing with different glyphs for the ‘a’ in each language?

If, for the sake of argument, we posited that: yes, each of these three languages requires a different ‘a’ (or a different combination of ‘gr’, for that matter) then we must automatically decide what is the minimum difference in enunciation between the two variants that will trigger a choice one way or the other. Do we document the range of possible sounds that pass for ‘a’ in speech in each of these languages? This can quite quickly turn into a massive exercise in mapping speech patterns and deviations – the age-old classification problem of the infinite pigeonholes, the ‘duck-billed platypus’.

I can almost hear you say: ‘hold on there, you’ve missed the point! We should only be looking at each language in turn, not compare across languages!’ OK; but what will we do with dialects, regional variations, inflections dependent on age, social class, education level, professional affiliations, and the like? Again, this is a dead end. Should I write English with different glyphs from my children? I have an expat’s accent, forever getting the length of vowels and the strength of consonants wrong; but my children, who go to nursery with all the other children in the area, speak English with an impeccable accent (so much so that they already correct their parents…).

There is only one area where we can strive for a close, one-to-one relationship between spoken sounds and the glyphs of a typeface, and that is the domain of linguists who document spoken language. (The International Phonetic Alphabet is fairly comprehensive in its coverage of the sounds the human larynx can produce, and is only extended when someone researching vanishing or spoken-only languages comes across a new sound.)

Going down that route will take us further away from the visible form of language, and into questions that deflect from and confuse the study of typeface design; this must, by definition, be limited to the representation of characters, not of sounds. The formal qualities of the glyphs may bear many influences, from the direct (mark-making tools such as nibs and brushes) to the lofty (theories of construction); and they will normally take a long time to define an envelope of expression for each time and place (the strength of which is tested each time we come across the ‘design-by-diktat’ approach seen in Korea, Russia, Turkey, and – most recently – in the ex-Soviet states of the Caspian).

So what about the bells and whistles? Current technology promises a range of options that were not available before outside specialised environments. These must be seen as limited to enriching the typographic expression of a script, not as reaching down to the level of the sounds the glyphs will generate in specific users. So, if a Polish typographer prefers diacritics to lie flat over the vowels, whereas a French one may opt for upright ones, all the better if the font files can provide both, change from one to the other on the fly, and keep both users happy. Similarly, if the Dutch have a mark that looks like an ‘i’ and a ‘j’ close together and are taught at school that this is one letter, you would hope that the whole chain of text editors, word processors, spell checkers, dictionaries, and thesauri would recognise it as such. Speaking is interpreting sounds within a culturally sensitive envelope; so is typeface design: defining glyphs within the acceptable spectrum of each character. But the designer cannot interfere where there is no linguistic ground to step on: if it means different things, go ahead and make a new glyph – but if it just sounds different, well, that’s just a reflection of the imprecise, fluid, and constantly mutable nature of human expression.

Let’s stick to mark-making, I say.