
Learning A Stroke‐Based Representation for Fonts

Author(s): Balashova, Elena; Bermano, Amit H.; Kim, Vladimir G.; DiVerdi, Stephen; Hertzmann, Aaron; Funkhouser, Thomas

Abstract: Designing fonts and typefaces is a difficult process for both beginner and expert typographers. Existing workflows require the designer to create every glyph, while adhering to many loosely defined design suggestions to achieve an aesthetically appealing and coherent character set. This process can be significantly simplified by exploiting the similar structure that character glyphs share across different fonts and the stylistic elements shared within the same font. To capture these correlations, we propose learning a stroke‐based font representation from a collection of existing typefaces. To enable this, we develop a stroke‐based geometric model for glyphs and a fitting procedure to reparametrize arbitrary fonts to our representation. We demonstrate the effectiveness of our model through a manifold learning technique that estimates a low‐dimensional font space. Our representation captures a wide range of everyday fonts with topological variations and naturally handles both discrete and continuous variations, such as the presence or absence of stylistic elements as well as slants and weights. We show that our learned representation can be used for iteratively improving fit quality, as well as for exploratory style applications such as completing a font from a subset of observed glyphs, interpolating between fonts, or adding and removing stylistic elements in existing fonts.
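The abstract's manifold-learning step, estimating a low-dimensional font space from fitted stroke parameters, can be illustrated with a minimal sketch. The stroke parametrization and data below are invented for demonstration (the paper's actual model and fitting procedure are more involved); the sketch only shows the general idea of embedding per-font parameter vectors into a low-dimensional space via PCA and reconstructing from it:

```python
import numpy as np

# Hypothetical setup: each font is represented as a fixed-length vector of
# stroke parameters (e.g., control points and widths concatenated over its
# glyphs). The dimensions and data here are synthetic, for illustration only.
rng = np.random.default_rng(0)
n_fonts, n_params, n_dims = 50, 120, 4

# Synthetic "font collection": low-dimensional latent style factors plus noise.
latent = rng.normal(size=(n_fonts, n_dims))
basis = rng.normal(size=(n_dims, n_params))
fonts = latent @ basis + 0.01 * rng.normal(size=(n_fonts, n_params))

# PCA via SVD of the mean-centered data estimates a low-dimensional font space.
mean = fonts.mean(axis=0)
U, S, Vt = np.linalg.svd(fonts - mean, full_matrices=False)
coords = U[:, :n_dims] * S[:n_dims]   # low-dimensional embedding of each font
components = Vt[:n_dims]              # principal stroke-parameter directions

# Reconstructing fonts from the embedding is the basic operation behind
# applications like interpolation or completing a partially observed font.
recon = coords @ components + mean
err = np.linalg.norm(recon - fonts) / np.linalg.norm(fonts)
```

Interpolating two fonts then amounts to blending their `coords` rows and decoding through `components`, which is what makes a learned low-dimensional space useful for the exploratory applications the abstract lists.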
Publication Date: 2019
Citation: Balashova, Elena, Amit H. Bermano, Vladimir G. Kim, Stephen DiVerdi, Aaron Hertzmann, and Thomas Funkhouser. "Learning A Stroke‐Based Representation for Fonts." Computer Graphics Forum 38, no. 1 (2019): 429-442. doi: 10.1111/cgf.13540
DOI: 10.1111/cgf.13540
ISSN: 0167-7055
EISSN: 1467-8659
Pages: 429 - 442
Type of Material: Journal Article
Journal/Proceeding Title: Computer Graphics Forum
Version: Author's manuscript

Items in OAR@Princeton are protected by copyright, with all rights reserved, unless otherwise indicated.