What are DTCG tokens?
There is a lot of talk about tokens being in the Design Token Community Group (DTCG) format. It’s a step toward standardizing how we format our tokens, so they can be consumed by everything and anything.
But right now, that format is not always the most usable when it comes to putting them into a codebase.
What was our problem with this token format?
In our codebase we didn’t want to use tokens in the DTCG format directly, as it would mean incredibly long dot notation and you’d have to add .$value at the end of every token used.
There is also the issue that, when exported from zeroheight, the aliases come through unresolved, and it would be unreasonable to expect engineers to resolve them by hand every time a token is used.
Also, our codebase is (mostly) in TypeScript and so there was a lot to be gained from having them typed.
How did we go about transforming them?
When you export your tokens from a zeroheight automation, if you are using modes or collections in your variables, you will get a handful of files. We found that merging these into a single file made them easier to export and use in code.
Creating a better structure
Firstly, you will want to start off by transforming each of these into a more refined JSON structure. This can be achieved with the following snippet of code:
function transformToken(obj) {
  // Return primitive values as-is
  if (typeof obj !== 'object' || obj === null) {
    return obj;
  }
  const transformed = {};
  for (const key in obj) {
    if (typeof obj[key] === 'object' && obj[key] !== null && '$value' in obj[key]) {
      // Replace the token object with its $value
      transformed[key] = obj[key]['$value'];
    } else {
      // Recursively transform nested objects
      transformed[key] = transformToken(obj[key]);
    }
  }
  return transformed;
}
This will take us from a deeply nested format, such as:
"logo": {
"text": {
"$type": "color",
"$value": "#212121"
},
"background": {
"$type": "color",
"$value": "#FFFFFF"
}
}
To a slightly more flat structure, like this:
"logo": {
"text": "#212121",
"background": "#FFFFFF"
}
Resolving token aliases
The next step is to resolve the aliased values being used in other tokens.
For this, we need to know the structure of our token sets, so we know which sets the aliases point to. In our case, the primitives and the brand tokens are used as alias values, which means we can resolve them with the following function:
// Where tokenSet contains the tokens needed to be resolved
function recursiveReplace(tokenSet, primitives, tokens) {
  if (typeof tokenSet === 'object' && tokenSet !== null && !Array.isArray(tokenSet)) {
    const newTokenSet = {};
    for (const key in tokenSet) {
      newTokenSet[key] = recursiveReplace(tokenSet[key], primitives, tokens);
    }
    return newTokenSet;
  } else if (Array.isArray(tokenSet)) {
    return tokenSet.map(item => recursiveReplace(item, primitives, tokens));
  } else if (typeof tokenSet === 'string') {
    // Swap alias strings for their resolved values
    return resolveValue(tokenSet, primitives, tokens);
  }
  return tokenSet;
}
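This relies on a resolveValue helper to do the actual lookup. Its implementation depends on the alias syntax in your export; assuming DTCG-style references such as {color.gray.900}, a minimal sketch might look like this (the sample primitives object is purely illustrative):

```javascript
// Hypothetical helper for recursiveReplace above. Assumes aliases are
// strings like "{color.gray.900}"; anything else is returned unchanged.
function resolveValue(value, primitives, tokens) {
  const match = typeof value === 'string' && value.match(/^\{(.+)\}$/);
  if (!match) return value; // not an alias, keep the literal value

  const path = match[1].split('.');
  // Look the path up in the primitives first, then in the brand tokens
  for (const source of [primitives, tokens]) {
    let current = source;
    for (const segment of path) {
      current = current?.[segment];
      if (current === undefined) break;
    }
    if (current !== undefined) {
      // An alias may point at another alias, so resolve recursively
      return resolveValue(current, primitives, tokens);
    }
  }
  return value; // leave unresolved aliases untouched
}

const primitives = { color: { gray: { 900: '#212121' } } };
resolveValue('{color.gray.900}', primitives, {}); // "#212121"
```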
Formatting token names
The tokens in Figma are not usually camel case, but in our codebases it doesn’t make sense to have dashes (or even spaces) in names, so we run the transformed tokens through the following function to update them:
function renameKeysToCamelCase(obj) {
  if (Array.isArray(obj)) {
    return obj.map(item => renameKeysToCamelCase(item));
  } else if (typeof obj === 'object' && obj !== null) {
    return Object.keys(obj).reduce((acc, key) => {
      const camelCaseKey = toCamelCase(key);
      acc[camelCaseKey] = renameKeysToCamelCase(obj[key]);
      return acc;
    }, {});
  } else {
    return obj;
  }
}
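The toCamelCase helper isn’t shown above; one possible sketch, assuming token names are separated by spaces, dashes, or underscores, is:

```javascript
// Hypothetical helper for renameKeysToCamelCase above. Splits on spaces,
// dashes and underscores, lowercases the first word and capitalises the rest.
function toCamelCase(key) {
  return key
    .split(/[\s_-]+/)
    .filter(Boolean)
    .map((word, index) =>
      index === 0
        ? word.toLowerCase()
        : word.charAt(0).toUpperCase() + word.slice(1).toLowerCase()
    )
    .join('');
}

toCamelCase('border-radius-large'); // "borderRadiusLarge"
```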
Getting your token set together
Depending on how many files you have, you may need to run these functions over multiple files and then combine the outputs into one big JSON object to export.
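One possible way to combine the transformed outputs is a simple deep merge (the sample token sets and variable names here are illustrative, not our actual export):

```javascript
// Recursively merge one token set into another; later sets win on conflicts.
function deepMerge(target, source) {
  for (const key in source) {
    if (
      typeof target[key] === 'object' && target[key] !== null &&
      typeof source[key] === 'object' && source[key] !== null &&
      !Array.isArray(source[key])
    ) {
      deepMerge(target[key], source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// e.g. one transformed file per collection or mode from the export
const primitivesJson = { color: { gray: { 900: '#212121' } } };
const brandJson = { logo: { text: '#212121', background: '#FFFFFF' } };

const combined = [primitivesJson, brandJson].reduce(
  (acc, tokenSet) => deepMerge(acc, tokenSet),
  {}
);
```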
At zeroheight we distribute this in a private NPM package that our other codebases can import and use as they build out new components and screens.
Adding types for our new tokens
We generate a tokens.d.ts file to export alongside our tokens. To do this, take your finalised tokens JSON file and run it through the following function:
function jsonToTypeScriptType(json, typeName = 'Tokens') {
  const getType = (value) => {
    if (typeof value === 'string') return 'string';
    if (typeof value === 'number') return 'number';
    if (typeof value === 'boolean') return 'boolean';
    if (Array.isArray(value)) {
      // Assume homogeneous arrays
      return `${getType(value[0])}[]`;
    }
    if (typeof value === 'object') return 'object';
    return 'any';
  };

  // Recursive function to generate the type structure
  const generateType = (obj, indent = 2) => {
    let typeStr = '';
    const indentation = ' '.repeat(indent);
    for (const key in obj) {
      const value = obj[key];
      // Quote keys that are purely numeric
      const safeKey = !isNaN(key) ? `"${key}"` : key;
      if (typeof value === 'object' && value !== null && !Array.isArray(value)) {
        // Nested object, generate its type recursively
        typeStr += `${indentation}${safeKey}: {\n${generateType(value, indent + 2)}${indentation}};\n`;
      } else {
        // Base types
        typeStr += `${indentation}${safeKey}: ${getType(value)};\n`;
      }
    }
    return typeStr;
  };

  // Create the full type definition
  return `interface ${typeName} {\n${generateType(json)}}\n\nexport declare const Tokens: ${typeName};\n`;
}
This creates a fairly loose type structure, e.g. using string rather than stating exactly what that string could be. But this function can be refined to add more specificity if required.
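For instance, one hypothetical refinement (not part of the function above) would be to emit TypeScript string literal types, so that each token’s exact value shows up in editor autocomplete:

```javascript
// Hypothetical stricter variant of getType: emits literal types
// instead of the broad string/number/boolean primitives.
const getLiteralType = (value) => {
  if (typeof value === 'string') return `"${value}"`; // e.g. the type "#212121"
  if (typeof value === 'number' || typeof value === 'boolean') return `${value}`;
  if (Array.isArray(value)) return `${getLiteralType(value[0])}[]`;
  return 'any';
};

getLiteralType('#212121'); // the literal type "#212121"
```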
Overall, this final step of automating the design token flow from Figma through to the codebase can really help get your engineers on board with using new tokens quickly!