What is Explainable AI?

A Policy Primer exploring Recommendation 12 of the Australian Human Rights and Technology Report.
Explore the Policy Primer in Figma
This interactive Policy Primer explains some of the findings from the Australian Human Rights and Technology Report for a general audience.
At over 200 pages, there’s a lot of content covered in the Report. For the purposes of this project, I focused on just a few paragraphs that discussed the concept of Explainable AI.

Machine learning algorithms (a type of AI) are used all over the internet, from tailored recommendations to content moderation. In recent years, ML algorithms have been used to make increasingly impactful decisions, such as whether your mortgage is approved.

"Explainable AI" seeks to create algorithms that not only output decisions, but explanations for their decisions. While it sounds great in theory, this may be a more difficult task than it seems. The Policy Primer dives into the complexities of trying to explain a machine learning algorithm, with plenty of background information for the less knowledgeable.

Design Notes
Figma as medium
After exploring different publication options, this project ended up coming to life as a Figma prototype.

Figma is hugely powerful in its own right, even when there is no intention of turning the design into code. With better embed and export options, Figma could be used far beyond interface design.
Intentional interactivity
Keeping the interactions simple and relying on arrows for navigation keeps the focus on the content.

Letting users click on different topics to explore further (as on the Algorithmic Bias page) keeps the content from feeling overwhelming.
The power of silly little drawings
I’m no illustrator, so I made the graphics and background for the Policy Primer in a pixel art style. A consistent, playful visual language is essential when the content is a bit dry and abstract, and these silly little drawings completely transformed the project.
Further Reading / References