We’re nearing the end of this journey, which makes it a good moment to zoom out and look at the path so far. The science of legibility is about 140 years old and was originally developed within psychology. Type designers didn’t really engage with it (at least in writing) until the 1940s, and a more rigorous, design-relevant approach didn’t emerge until the late 1990s.

Over time, both psychology and design have evolved in how they understand legibility, moving from the search for a mythical universal legibility to a more nuanced view of it as a multidimensional phenomenon, full of layered factors and conflicting goals. Today, legibility research has finally matured: a science capable of isolating typographic variables in meaningful ways and asking questions that actually matter, not just to scientists, readers, or graphic designers, but to type designers as well. Several of these studies have shaped Tiphares, the first typeface in our library.

Ovink Sans and Ovink Serif by Sofie Beier, superimposed. The only difference between them is the presence or absence of serifs.

Study 01: Building a semi-serif
The first two studies that informed Tiphares are by Sofie Beier. The first one, The Influence of Serifs on ‘h’ and ‘i’ (co-written with Mary Dyson in 2014)¹, is a decade old but still razor-sharp. The study builds on a 1973 experiment by Harris², one of many over the past century or so that have tried to determine the impact of serifs versus sans serifs without finding data consistent enough to draw any firm conclusions. Harris compared Univers, Gill Sans Medium, and Baskerville, and found that when four letters were shown briefly in the central field of vision, the confusion rate for letter pairs like b > h, a > n, n > u, and u > n was much higher in the serif font than in the sans.
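
In case the notation is unfamiliar: “b > h” means the letter b was shown but reported as h, and the confusion rate is simply how often that happens. Here is a tiny sketch of how such a tally works (illustrative only, with invented trial data):

```python
# Tally a letter-confusion matrix from (shown, reported) trial pairs.
# The trial data below is made up purely for illustration.
from collections import Counter

trials = [("b", "h"), ("b", "b"), ("a", "n"), ("n", "u"), ("u", "n"), ("n", "n")]
confusions = Counter((shown, seen) for shown, seen in trials if shown != seen)

for (shown, seen), n in confusions.most_common():
    print(f"{shown} > {seen}: {n}")  # e.g. "b > h: 1"
```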

Beier and Dyson followed up on this idea using two fonts that are nearly identical except for the presence of serifs: Ovink Serif and Ovink Sans. Their experiment focused on letters like j, i, l, b, h, n, u, and a, shown at a distance in the central field of vision. The data confirmed that serifs placed at the vertical extremes (not the base) improved recognition, especially in letters like l, b, h, n, and u.

Tiphares takes this as a direct reference, resulting in a slightly unusual yet highly functional semi-serif. The only exception lies in the i and the j. Since the recommendations focus on how serifs help reduce ambiguity between letterforms, the opportunity here is clear: in Tiphares, I added a serif to the i but not to the j, making the difference between them more obvious. The reasoning is tied to character frequency: on the relatively rare j, a serif would be little more than an anecdote, while on the much more frequent i it reinforces the serifs’ role as part of the typeface’s identity.
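
As a quick sanity check on that frequency argument, here is a minimal sketch (mine, not from the study) that counts letter frequencies; corpus.txt is a placeholder for any large English text file you have on hand:

```python
# Minimal letter-frequency check; "corpus.txt" is any large English text you supply.
from collections import Counter

with open("corpus.txt", encoding="utf-8") as f:
    text = f.read().lower()

counts = Counter(ch for ch in text if ch.isalpha())
total = sum(counts.values())

for ch in "ij":
    print(f"{ch}: {counts[ch] / total:.2%}")
# In typical English running text, i lands around 7% and j well under 0.5%,
# so a design decision on the i is seen orders of magnitude more often.
```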

Recommendations on serif placement by Beier and Dyson

Serif positioning in Tiphares, following Beier and Dyson’s study almost to the letter, except for the i and the j.

Study 02: How to design high-performance numbers
For a typeface designed with signage or interface design in mind, numerals are critical. I once read a phrase, supposedly from Maxim Zhukov (apologies if it wasn’t you, Mr. Zhukov, but I hope you’d approve nonetheless), that went something like: “If you want to know how good a typeface is, look at its numbers”. I suspect he said that because it’s more common to put design energy into the letters, while so-called secondary glyphs often get less attention. It’s one of those thoughts that has made my life significantly worse: every time I’m designing an @, a §, or the shape of an 8, I picture Mr. Zhukov and a jury of typographic purists silently judging my decisions over my shoulder.

The second study that fed into Tiphares is another one by Beier, this time with Jean-Baptiste Bernard and Eric Castet³. They tested three different designs for the digits 1–9 (no 0) to identify which had the highest recognition rates. Rather than distance reading, the experiment used another clever approach: numbers were displayed for just 150 milliseconds, in groups of three, in the peripheral visual field (specifically, the right and lower areas) while participants focused on a central point. The task was to identify the middle number. This method forced them to rely on parafoveal vision, which has a far lower density of high-acuity photoreceptors, so resolution and accuracy drop.
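
To picture the trial structure, here is a minimal reconstruction of that kind of brief peripheral presentation using the PsychoPy library. This is not the authors’ code; everything beyond the 150 ms exposure (timings, position, digits) is my own assumption:

```python
# A toy version of a brief peripheral-presentation trial; not the authors' setup.
from psychopy import visual, core, event

win = visual.Window(fullscr=True, color="grey", units="pix")
fixation = visual.TextStim(win, text="+", pos=(0, 0), height=30)
triplet = visual.TextStim(win, text="3 7 4", pos=(400, 0), height=40)  # right periphery

fixation.draw()
win.flip()
core.wait(1.0)            # participant locks onto the central point

fixation.draw()
triplet.draw()
win.flip()
core.wait(0.150)          # 150 ms exposure, as in the study

fixation.draw()
win.flip()                # stimulus gone; participant reports the middle digit
keys = event.waitKeys(keyList=[str(d) for d in range(1, 10)])
print("reported:", keys[0])
win.close()
```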

The results are gold for type designers, full of practical takeaways for maximizing recognizability:
• 6, 9, and 2 work better with rigid diagonals.
• 7 and 4 should go serif-less.
• 3 and 5 benefit from large apertures.
• 1 should be narrow.

Again, Tiphares follows these rules as closely as possible.

Image from Beier, Bernard, and Castet comparing two rows of digits used in their experiment. The top row proved collectively more legible than the bottom one

Applying those findings to Tiphares’ set of lining proportional figures

Study 03: The highway and the negative
The third study (or set of studies) is the Clearview project. Here, the insights are less direct but no less meaningful. Rather than examining individual typographic features (as Beier et al. do), Clearview is a comparative case study: one font versus another, and the broader implications that come with it.

Clearview started in the late 90s as an attempt to improve road sign legibility in the U.S., especially under poor visibility conditions. It was initiated by information designer Don Meeker and type designer James Montalbano, with research support from Penn State and funding from 3M.

In 2004, the Federal Highway Administration (FHWA) granted Clearview provisional approval. But in 2016, that approval was revoked, citing a lack of conclusive evidence and concerns over inconsistent implementation. In 2018, Congress stepped in and restored its optional use, allowing states to choose whether or not to adopt Clearview*.

A traffic sign using Clearview. 

The team conducted several studies—some independent, others peer-reviewed—but for Tiphares, the most relevant are those by Garvey and colleagues, published in 1998 and 2015⁴. They tested how quickly drivers could read signs set in Clearview versus Highway Gothic. The method was simple: drive toward a sign, stop when you can read it, and measure the distance. At night, Clearview outperformed Highway Gothic by 16%, with no significant difference during the day†. That might sound modest, but they estimated it could translate into an extra second or more to brake or maneuver, which at highway speeds means about 33 meters (roughly 110 feet) of additional distance. The authors credit this to Clearview’s lighter weights and improved negative space, which better handled the halation caused by headlights and reflective sheeting.
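
The back-of-the-envelope arithmetic behind that figure is easy to verify; this tiny sketch assumes a highway speed of 120 km/h, a number the text above implies rather than states:

```python
# One extra second of legible distance at highway speed; 120 km/h is my assumption.
speed_kmh = 120
speed_ms = speed_kmh * 1000 / 3600      # ≈ 33.3 m/s
extra_seconds = 1.0                     # the extra reading time Garvey et al. estimate
extra_meters = speed_ms * extra_seconds

print(f"{extra_meters:.0f} m ≈ {extra_meters * 3.281:.0f} ft")  # ~33 m ≈ ~109 ft
```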

Image from the Clearview website. On the top left, E and D are two samples of Highway Gothic in mixed case, which set the 100% performance baseline. On the right are the Clearview sign samples that were tested.

This is the point where we need to be careful about overgeneralizing scientific conclusions. The Clearview study doesn’t prove that lighter fonts work better than heavier ones on dark backgrounds. What it actually shows is that carefully balancing black and white relationships in letterforms can improve type performance under specific conditions.

That’s the insight that became the jumping-off point for Tiphares Negative: a complementary version of Tiphares that keeps the same horizontal widths in all its glyphs but is drawn slightly lighter, so that white text on dark backgrounds (inverse-polarity interfaces) still looks visually balanced. It’s surprising how different the same typeface feels in reversed polarity. In fact, the effect can seem like a trick the first time you notice it. But it’s not‡.
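
You can get a feel for the effect yourself. A minimal sketch (my own illustration of halation, not anything from the Clearview team; the font path is a placeholder) renders the same text in both polarities and applies a mild blur as a stand-in for light scatter, after which the white-on-black version visibly gains weight:

```python
# Same text, both polarities, plus a mild blur standing in for light scatter:
# the white-on-black version will appear to grow heavier.
from PIL import Image, ImageDraw, ImageFilter, ImageFont

FONT_PATH = "Tiphares-Book.otf"  # placeholder; any .otf/.ttf file works
font = ImageFont.truetype(FONT_PATH, 96)

def render(fg, bg, name):
    img = Image.new("L", (900, 180), bg)
    ImageDraw.Draw(img).text((40, 40), "Hamburgefonstiv", fill=fg, font=font)
    img.filter(ImageFilter.GaussianBlur(1.5)).save(name)

render(fg=0, bg=255, name="positive.png")  # black on white
render(fg=255, bg=0, name="negative.png")  # white on black: strokes bleed outward
```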

① Tiphares Book; ② Tiphares Book; ③ Tiphares Negative Book; ④ Tiphares Negative Book. If what I am saying here makes any sense, you should see ② as the heaviest, even though it is the same font as ①; and ④ should look equal in weight to ①, even though it is actually lighter (it is the same font as ③).

On the left, Tiphares Book; on the right Tiphares Negative Book

04: Phototypesetting and the Soft
The last piece is not a study, but a set of intuitive observations. If you’re under 50 as I’m writing this (2025), chances are you have no idea what phototypesetting is. I only know about it secondhand myself, since I’m also a spring chicken.

Phototypesetting was a technical process where light was projected through a film negative containing the glyphs of a typeface, exposing them onto photographic paper. It meant the end of the Monotype and Linotype machines (not the companies), and the end of the cool, giant drawers of metal blocks in different sizes. A single master could now project type at virtually any scale. That’s something we take for granted today, since it’s how digital typography works, but back then, it changed everything. It also quietly killed optical sizes (so sad). Suddenly, one design had to work everywhere.

I stumbled on this photo and it’s way too good not to share§. It shows the inside of a Harris Fototronic 4000, circa 1982, at New England Typographic Service in Bloomfield, CT. Those five glass disk fonts are mounted on a turret, spinning at 3,600 RPM when in use. Each disk carried two typefaces, typically the roman and italic of the same weight.

But phototypesetting also brought another curious side effect. Display typefaces gained finesse and precision, while text faces acquired the slightly gummy look typical of the era: type designers started making their designs much softer and fluffier to control the flare and bloom caused by light projection, especially at small sizes.
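
To build an intuition for what that flare does to a letterform, here is a toy simulation (my own, nothing from the era’s engineering): blur a black-on-white glyph image to mimic light scatter, then re-threshold it the way photographic paper responds; the dark areas spread, thin joints clog, and sharp corners round off. glyph.png is a placeholder:

```python
# Toy model of phototype flare: light scatter (blur) + paper response (threshold).
from PIL import Image, ImageFilter

glyph = Image.open("glyph.png").convert("L")            # placeholder glyph image
scattered = glyph.filter(ImageFilter.GaussianBlur(3))   # light scatter softens edges
# A biased threshold darkens the near-edge grays, so the dark areas spread,
# clogging thin joints and rounding sharp corners.
flared = scattered.point(lambda v: 0 if v < 160 else 255)
flared.save("glyph_flared.png")
```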

Look at text fonts made for phototype: ITC Garamond, for example, feels like an almost comic version of the Garamond models, with ultra-softened serifs compensating for light flare. It’s a perfect demonstration of the kind of practical expertise type designers had to develop back then: a bug turned into a feature. Decades later, Erik Spiekermann applied the same kind of thinking in his design for FF Info, where he introduced softened terminals to counteract light scatter on signage at Düsseldorf Airport, the first place the typeface was used⁵.

Look at two very different takes on the same starting point. Adobe Garamond stays sharp and historically faithful, while ITC Garamond feels almost like a parody, with an oversized x-height no Renaissance punchcutter would have dared, and plenty of fluffy little details¶.

More fluffiness. (1) Adobe Garamond; (2) ITC Garamond

The idea of creating a high-performance typeface with a softer counterpart (something less rigid but still tuned for demanding reading conditions) became the rationale behind Tiphares Soft. The result is a version of Tiphares with slightly rounded corners, stroke widths aligned with Tiphares Negative, and the same metrics as both Standard and Negative. In everyday use, Tiphares Soft feels more approachable yet remains fully capable. And in backlit scenarios or on dark, high-contrast interfaces, those rounded details help absorb the visual glitches of imperfect rendering, especially at smaller sizes. That said, I doubt anyone will ever use it strictly for that purpose. So let’s just call Tiphares Soft the friendly sibling of the family⁶.


Conclusion

If there’s one thing to take from this article, it’s that type design doesn’t have to lean only on history. For centuries, designers built on the work of punchcutters, printers, and earlier models. My aim here was to show how contemporary scientific research can also become part of that lineage. We already have a wealth of rigorous studies that can inspire a new design workflow, one that enhances the real-world performance of typefaces.

And here’s the crucial point: practice should guide research. Only by designing, testing, and applying type in the different contexts where it’s actually used will researchers know what questions still need answers.

Now that we’ve unpacked the origins, the next article will finally introduce Tiphares itself.

Notes

* For more information about the project, there is a dedicated website: www.clearviewhwy.com

† In fact, subsequent studies have found a wide range of improvement percentages.

‡ (If you’re curious, there’s a wealth of research on polarity and reading: Piepenbrock et al., 2013 & 2014; Dobres et al., 2017; Chan & Lee, 2005; A.-H. Wang et al., 2003; Westenberg, 2020; Aleman et al., 2018; Legge, Pelli et al., 1985; Buchner et al., 2009... and the list goes on.)

§ Of course, I am going to share the origin of this badass picture: https://hoxsie.org/2019/01/16/a-phototypesetting-mystery/

¶ So yes, I’m doing something not exactly rigorous here: I’m talking about a model of typefaces, the Garamonds, and a specific one, phototype ITC Garamond, while showing digital revivals instead of their original representations. As I write this, I’m in Japan without access to proper sources, so the “real” references will have to wait until I’m back (and remember). Chill :P

¹ Beier, S., & Dyson, M. C. (2014). The influence of serifs on ‘h’ and ‘i’: Useful knowledge from design-led scientific research. Visible Language, 47(3), 74–95.

² Harris, J. (1973). Confusions in letter recognition. Printing Technology, 17(2), 29–34.

³ Beier, S., Bernard, J.-B., & Castet, E. (2018). Numeral legibility and visual complexity. DRS Design Research Society, 2018.

⁴ Garvey, P. M., Pietrucha, M. T., & Meeker, D. T. (1998). Clearer road signs ahead. Ergonomics in Design, 6(3), 7–11. | Garvey, P. M., Klena, M. J., Eie, W.-Y., Meeker, D., & Pietrucha, M. T. (2015). The Legibility of the Clearview Typeface System Versus Standard Highway Alphabets on Negative- and Positive-Contrast Signs. Mid-Atlantic Universities Transportation Center.

⁵ https://spiekermann.com/en/unit-rounded/

⁶ Thanks a lot to Juanjo López, Stephen Coles, and David Berlow for their advice and insights about phototype and its effects on type <3