Building out an HTML5 game UI pipeline with Figma

Veronica Vega

A few months back, I was once again unplugging my laptop’s display cord from my Dell UltraSharp and leaving my desk for the common area of my team’s modern, ultra trendy, kombucha-tap-equipped shared workspace in FiDi. It was going to be another low-key afternoon breaking from my usual engineering work and instead moving stuff around in the Cocos Creator editor, so I figured I’d take advantage of a comfy couch and renewed cup of coffee while I pieced some new UI into place. As refreshing as switching things up can be, I think we all knew at this point that drag-and-dropping new menu dialogs together is probably not the best use of engineering time.

While the Cocos Creator editor could serve as a quick way to put together layouts on the design side, it lacks the robust graphics creation, custom shapes, and real-time collaboration that make Figma ideal for rapid UX design iteration and approval. To truly integrate the workflow between product and engineering, we needed a way to translate finalized concepts into UI data our engine understands, with no tedious recreation necessary on the engineering side. Awesomely enough, Figma provides an intuitive RESTful API for accessing your document data from anywhere, so you have everything you need to build a design-to-engine UI pipeline.

Figma’s getter endpoints range in specificity from getting entire file lists by project to specific nodes or style data in a given file, so feel free to browse the available endpoints in the docs. We’ll be coming back to a few of them shortly.

For now, let’s start with grabbing the data for a single Figma file. To perform a basic user-authenticated request for a Figma file, you will need two primary pieces of information:

  1. An access token tied to your Figma account. Note that you can also generate an OAuth2 token if you’d prefer app-level authentication for your tool.
  2. A file ID for the file data you want to pull. It can easily be gleaned from the file’s URL (see below).

Once you have these pieces of information, you’ll just need a way to make a request. cURL works, but if you want some ease-of-poking-around and beautified responses as you explore the API, I’d recommend Postman.

For purposes of this example let’s leverage some trendy Medium code formatting and use cURL:
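A minimal sketch of that request (the token and file ID below are placeholders — substitute your own values):

```shell
# Fetch the full document tree for a single Figma file.
# X-Figma-Token carries your personal access token.
curl -H "X-Figma-Token: <YOUR_ACCESS_TOKEN>" \
  "https://api.figma.com/v1/files/<FILE_ID>"
```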

If everything went as planned, you should have received a JSON document tree in response, as displayed below:

Note how frames and layers are represented, with the bottom-most appearing first in the list of children

For this particular request, the resulting data gets you about 80% of the way to recreating the handiwork of your UX designer in whatever format you need. It comprehensively describes your entire document, including (but not limited to):

  1. Bounding box sizing and position
  2. Parent-child relationships
  3. Layer type (shapes, text, etc.)
  4. RGBA color descriptors
  5. Gradient, image, and 2d shape fill data
  6. Layer opacity
  7. Font styles and strokes

Therefore, you can easily transpose this structure to HTML, to an engine-specific layout file such as the Cocos Creator prefabs I was using, or to another form of pre-processed JSON layout data that suits the needs of your application.
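As a sketch of that transposition, here’s one way to walk the returned node tree and flatten it into simple layout records. The field names (`absoluteBoundingBox`, `children`, and so on) follow Figma’s file JSON; the sample document is a made-up stand-in for a real response:

```python
# Walk a Figma document tree and flatten it into simple layout records.
# Node fields ("absoluteBoundingBox", "children", ...) follow Figma's
# file JSON; the sample document below is a minimal stand-in.

def flatten_layout(node, parent=None, out=None):
    if out is None:
        out = []
    box = node.get("absoluteBoundingBox")
    if box:
        out.append({
            "id": node["id"],
            "name": node["name"],
            "type": node["type"],          # FRAME, RECTANGLE, TEXT, ...
            "parent": parent,
            "x": box["x"], "y": box["y"],
            "width": box["width"], "height": box["height"],
        })
    # Children are listed bottom-most first, so draw order is preserved.
    for child in node.get("children", []):
        flatten_layout(child, parent=node["id"], out=out)
    return out

sample = {
    "id": "1:2", "name": "MainMenu", "type": "FRAME",
    "absoluteBoundingBox": {"x": 0, "y": 0, "width": 750, "height": 1334},
    "children": [
        {"id": "1:3", "name": "PlayButton", "type": "RECTANGLE",
         "absoluteBoundingBox": {"x": 200, "y": 900,
                                 "width": 350, "height": 120}},
    ],
}

layout = flatten_layout(sample)
```

From records like these, emitting a prefab, an HTML fragment, or your own layout JSON is a straightforward serialization step.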

So, what’s missing? Well, let’s take a closer look at the results. If you look at any layers that contain graphics, you’ll typically see one of two types of sources:

  1. A user-provided bitmap image: something uploaded to the Figma document to be used as a layer fill.
  2. A custom vector-based asset: something created in-app with Figma’s graphics creation tools.

Structure of an uploaded image fill. Note the “imageRef” field is an ID hash.

For bitmap image fills, the data includes an imageRef field, but its value doesn’t map to anything else in the current data structure. To map this ID to a real image URL, we’ll need to ask Figma for the image fill data associated with this file. In this example, we’ll fetch against the file’s images endpoint. Let’s bring our request command back out:
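The request is nearly identical to the file fetch; only the path changes (placeholders again stand in for real values):

```shell
# Fetch the imageRef -> URL map for every image fill in the file.
curl -H "X-Figma-Token: <YOUR_ACCESS_TOKEN>" \
  "https://api.figma.com/v1/files/<FILE_ID>/images"
```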

The result should be a map of IDs to bucket URLs. Here is where you will find the actual references to each asset attached to the file.

From here, you can easily automate fetching assets to your project’s asset directories to be referenced by your in-game UI. Awesome bonus detail: Figma manages image assets in such a way that identical image fills are already de-duped — meaning if you have multiple instances of the same image throughout your mock, the corresponding nodes will reference a single image.
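A sketch of that automation, assuming you’ve already parsed the images response into a dict (the directory layout and `.png` naming are my own hypothetical choices, not anything Figma prescribes):

```python
# Map imageRef IDs to local files and download each unique asset once.
# Because Figma de-dupes identical fills, each URL is fetched exactly once.
import os
import urllib.request

def local_path(image_ref, asset_dir):
    # Hypothetical convention: name the file after its imageRef hash.
    return os.path.join(asset_dir, f"{image_ref}.png")

def download_image_fills(image_map, asset_dir):
    os.makedirs(asset_dir, exist_ok=True)
    paths = {}
    for image_ref, url in image_map.items():
        dest = local_path(image_ref, asset_dir)
        urllib.request.urlretrieve(url, dest)   # fetch the asset
        paths[image_ref] = dest                 # imageRef -> local path
    return paths

# image_map comes from GET /v1/files/<FILE_ID>/images; its shape is
# {"meta": {"images": {"<imageRef hash>": "https://..."}}}
```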

Getting vector-based graphics out of Figma may depend on the capabilities of your game engine or on personal preference. Our initial response data contains all of the fields necessary to recreate custom shapes, which is useful if your engine supports (or you prefer) granular custom 2d graphics in your layout models.

In the case of Cocos Creator prefabs, basic shapes such as ellipses and rectangles were easily supported by pairing a geometric mask component with a nested fill layer, but we needed a way to support more complex 2d graphics such as custom polygons, rounded rectangles, and paths. A simpler way to include custom vector art in general, especially if it’s multi-layered, is to flatten and export it as a raster to use as an image fill. Figma enables us to automate this, too!

Leveraging the images endpoint, you can provide specific node ids (as seen in the initial file request data) that you’d like to flatten and export as a bitmap image. You can specify scaling, export format, and more:
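For example, to flatten two nodes and export them as 2x PNGs (the node ids, like the other values here, are placeholders):

```shell
# Render specific nodes as rasters; Figma responds with temporary
# URLs to the exported images, keyed by node id.
curl -H "X-Figma-Token: <YOUR_ACCESS_TOKEN>" \
  "https://api.figma.com/v1/images/<FILE_ID>?ids=1:5,1:8&format=png&scale=2"
```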

An example of a complex asset flattened and provided as a raster by the Figma images API endpoint

So, how can we identify whether or not a layer requires flattening and exporting? One way is for the UX designer to flag the layer in a way your tooling understands. Options include toggling the “Export” flag in Figma’s UI (represented as a boolean in the returned file data) or adopting a convention for the layer name. In our case, because our designer needs to flag and manually export rasters of entire frames for inclusion in pitch decks and product meetings, it was easiest to decouple our tooling from that feature and rely instead on naming conventions.
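As a sketch, suppose the convention is that any layer whose name starts with a given prefix gets flattened. The `img:` prefix here is an arbitrary example of a team convention, not a Figma feature:

```python
# Collect ids of nodes flagged for flattening via a naming convention.
# The "img:" prefix is an arbitrary team convention, not a Figma feature.

def nodes_to_flatten(node, prefix="img:", out=None):
    if out is None:
        out = []
    if node.get("name", "").startswith(prefix):
        out.append(node["id"])
        return out  # don't descend; the whole subtree gets rasterized
    for child in node.get("children", []):
        nodes_to_flatten(child, prefix, out)
    return out

doc = {
    "id": "0:1", "name": "Page 1",
    "children": [
        {"id": "1:5", "name": "img:island_badge", "children": []},
        {"id": "1:9", "name": "ScoreLabel"},
    ],
}

flagged = nodes_to_flatten(doc)
```

The flagged ids can then be passed straight to the images endpoint’s `ids` parameter.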

While our document data already provides the basics for constructing text elements, such as characters, font weight, size, and alignment, the Figma API unfortunately provides no native mapping to the corresponding custom font assets. The only font data we can extract from text styles is the family and PostScript name.

There’s still a programmatic solution! If you source your custom fonts from Google Fonts, you can leverage another tool to fetch them: the Google Fonts developer API. Similar to the Figma API, you’ll first need to acquire an API key to make requests.

To get a comprehensive list of all the fonts available, you can hit the web fonts endpoint like so:
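The request looks like this (API key placeholder included):

```shell
# List every family Google Fonts serves, with download URLs per variant.
curl "https://www.googleapis.com/webfonts/v1/webfonts?key=<YOUR_API_KEY>"
```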

Note that your key in this case is a GET parameter passed to the endpoint instead of a header value. The response is a large list of all the font data available through Google, with asset references included:

Found “Nunito”!

From here, you can discern your variant of the font, fetch the corresponding font asset, and include it in your project asset directory just as you would images.
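A sketch of that variant lookup, using a made-up slice of the webfonts response (the real response has the same `items`/`files` shape, but the URLs below are invented):

```python
# Pick the download URL for a given family/variant from the webfonts list.
def find_font_url(webfonts, family, variant="regular"):
    for item in webfonts["items"]:
        if item["family"] == family:
            return item["files"].get(variant)
    return None

# Minimal stand-in for the real response; actual URLs differ.
webfonts = {
    "items": [
        {"family": "Nunito",
         "files": {"regular": "https://fonts.gstatic.com/s/nunito/v1/regular.ttf",
                   "700": "https://fonts.gstatic.com/s/nunito/v1/bold.ttf"}},
    ],
}

url = find_font_url(webfonts, "Nunito", "700")
```

Once you have the URL, fetching the font file into your asset directory works exactly like the image downloads above.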

With structure, graphics, and fonts easily accessible through these APIs, you’ll have everything you need to build out an automated design-to-engine UI layout pipeline with a little bit of collaboration with your designer.

The possibilities of design-to-engineering interfacing don’t stop at UI mocks; entire 2d levels with implicit data can be built this way, too!

Here is an example from our latest title, Stickerpets Island, where island progression is defined on the product side in Figma. There are a lot of creative ways your workflow can leverage the provided data to plug visual structure into all types of contexts. Happy tooling!