Implementing and consuming APIs is very error-prone. I've now worked on multiple single page applications written in Elm and I've grown to love the constraint of having to explicitly define JSON decoders. They are very helpful in exposing bugs and errors in documentation.

In this post, I describe a strategy I've adopted to catch compatibility mistakes as early as possible. It involves JSON Schema in automated tests for both the backend and frontend. JSON Schema is not a silver bullet, but it has been useful in my personal experience and is worth the trouble to gain some extra confidence in a program.

The main ideas behind the approach are:

  1. the API producer and consumers should always comply with the same schema version
  2. maintaining and sharing the schema, and validating against it, should be very convenient (automated if possible); otherwise you will stop doing it
  3. tests should fail when either side breaks the contract

This post uses Elm 0.18 and Ruby on Rails, but it may be relevant to future Elm versions and certainly applies to other backend programming languages and ecosystems.

Describing JSON schemas in Elm

json-elm-schema is an Elm library for defining JSON schemas. For example:

module Schema.Profile exposing (schema)

import Json.Encode as Encode
import JsonSchema exposing (..)

schema : Schema
schema =
    object
        [ title "User profile"
        , properties
            [ required "name" <| string []
            , optional "age" <|
                integer
                    [ description "Age in years"
                    , minimum 0
                    ]
            ]
        , examples Encode.object
            [ [ ( "name", Encode.string "Jane Doe" )
              , ( "age", Encode.int 42 )
              ]
            , [ ( "name", Encode.string "John Smith" )
              , ( "age", Encode.int 25 )
              ]
            ]
        ]

This package lets us describe the schema once, in plain Elm, and it is going to be the foundation of the entire approach.
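For reference, the Elm definition above compiles to roughly the following JSON Schema document (a sketch; the exact output of json-elm-schema, such as a `$schema` field or property order, may differ):

```json
{
  "title": "User profile",
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "age": {
      "type": "integer",
      "description": "Age in years",
      "minimum": 0
    }
  },
  "required": [ "name" ],
  "examples": [
    { "name": "Jane Doe", "age": 42 },
    { "name": "John Smith", "age": 25 }
  ]
}
```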

Easy fuzz tests for JSON decoders

Say we have a function allowing us to decode the API response described above into an Elm record.

module Profile exposing (Profile, decoder)

import Json.Decode as Decode exposing (Decoder)

{-| `age` is a `Maybe` because the field is optional in the schema. -}
type alias Profile =
    { name : String
    , age : Maybe Int
    }

decoder : Decoder Profile
decoder =
    Decode.map2 Profile
        (Decode.field "name" Decode.string)
        (Decode.maybe (Decode.field "age" Decode.int))

One way to test the decoder is to give it some example input and assert that we get correctly decoded data back.

decoderTest : Test
decoderTest =
    test "profile decoder" <|
        \_ ->
            Json.Decode.decodeString Profile.decoder
                """{ "name": "Jane Doe", "age": 42 }"""
                |> Expect.equal
                    (Ok { name = "Jane Doe", age = Just 42 })

Instead of writing examples by hand, we can generate a large number of random JSON documents that conform to the schema we wrote before and make sure that we can decode all of them. The JsonSchema.Fuzz module provides just what we need.

module ProfileTest exposing (suite)

import Test exposing (..)
import JsonSchema.Fuzz exposing (schemaValue)
import Schema.Profile
import Profile
import Json.Decode exposing (decodeValue)
import Helpers exposing (expectOk)

suite : Test
suite =
    describe "Profile"
        [ fuzz (schemaValue Schema.Profile.schema)
            "decoder complies with the profile JSON schema"
          <|
            \value ->
                decodeValue Profile.decoder value
                    |> expectOk
        ]

(Read more about fuzzers and fuzz tests in the elm-test documentation.)

expectOk is a custom expectation which takes a Result (which decodeValue returns) and only succeeds if that result is not an error.

module Helpers exposing (..)

import Expect exposing (Expectation)

expectOk : Result String value -> Expectation
expectOk result =
    case result of
        Ok _ ->
            Expect.pass

        Err err ->
            Expect.fail err

I use this helper because JSON values are randomly generated and we do not know anything about their contents except whether they are valid according to the schema. Some decoders may still benefit from example-based tests where we do make assertions about the concrete output values.

I have not yet needed JSON encoders on the Elm side, but json-elm-schema also has a validator module which can help test them.

Sharing the JSON schemas with Ruby

Next, we want to make the schemas available to the Ruby code. Elm cannot do file system operations, but the elm-json-schema-compiler npm module allows you to compile the schemas defined in Elm code to regular JSON schema files. Unfortunately, I could not use it directly since I wanted some extra flexibility. Namely, I want to generate all the schemas in one go and keep the Elm json-elm-schema package as a test dependency only.

I modified the command line tool as shown below. There are two parts: an Elm program that encodes the schemas and sends them through a port and a Node.js program that compiles and runs the Elm program, saving the JSON strings to files as they come in through the port.

port module Main exposing (..)

import JsonSchema exposing (Schema)
import JsonSchema.Encoder exposing (encode)
import Schema.Profile

type alias NamedSchema =
    ( String, Schema )

schemas : List NamedSchema
schemas =
    [ ( "profile", Schema.Profile.schema ) ]

emitSchemas : List NamedSchema -> Cmd ()
emitSchemas namedSchemas =
    namedSchemas
        |> List.map (encodeNamedSchema >> emit)
        |> Cmd.batch

encodeNamedSchema : NamedSchema -> ( String, String )
encodeNamedSchema ( title, schema ) =
    ( title, encode schema )

main : Platform.Program Never () ()
main =
    Platform.program
        { init = ( (), emitSchemas schemas )
        , update = \_ _ -> ( (), Cmd.none )
        , subscriptions = \_ -> Sub.none
        }
{-| Emits pairs of schema name and schema JSON string -}
port emit : ( String, String ) -> Cmd a

The important part for everyday use is the schemas function where we declare the file name for each schema (the node script takes care of adding the .json suffixes). In this example we only have one schema:

[ ( "profile", Schema.Profile.schema ) ]

The program which does all the input/output work takes two arguments: the path to the Elm file presented above and an output directory for the generated schemas.

#!/usr/bin/env node

if (process.argv.length < 4) {
  fail(`Generate JSON schema files using json-elm-schema

Usage:
  bin/generate-schemas <elm-schema-file> <output-directory>`)
}
const fs = require('fs')

const path = require('path')
const temp = require('temp').track()
const compiler = require('node-elm-compiler')

const sourcePath = path.resolve(process.argv[2])
const prefixPath = path.resolve(process.argv[3])

const targetPath = temp.path({ suffix: '.js' })

compiler.compileSync(sourcePath, {
  yes: true,
  output: targetPath,
  cwd: 'tests',
})

const Elm = require(targetPath)
const app = Elm.Main.worker()

app.ports.emit.subscribe(function (payload) {
  const title = payload[0]
  const json = payload[1]
  const fileName = title + '.json'
  const schemaPath = path.join(prefixPath, fileName)

  fs.writeFile(schemaPath, json, (err) => {
    if (err) throw err

    console.log('Wrote schema for "' + title + '" to ' + schemaPath)
  })
})

function fail (msg) {
  console.error(msg)
  process.exit(1)
}

In hindsight, there is probably a way to solve this with the original npm package and a bash script, but on the upside I learned how to run "headless" Elm programs and carry out side effects for them.

Schema validation in Ruby tests

Thanks to the json_schema gem we can verify that the API server complies with the schema. Generate the schemas into a pre-defined directory such as test/schemas.

$ bin/generate-schemas tests/Schema/Main.elm test/schemas

Don't forget to include this as a build step in your continuous integration pipeline to ensure you are always using the latest schemas.
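As a sketch, the relevant pipeline steps might look something like this (hypothetical step syntax; adapt to your CI system):

```yaml
# Regenerate the schemas before the backend tests so that both sides
# are checked against the same schema version.
- run: elm-test
- run: bin/generate-schemas tests/Schema/Main.elm test/schemas
- run: bin/rails test
```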

Define a test helper like so:

require 'json_schema'

module SchemaHelper
  def assert_response_schema(schema_name)
    full_path =
      Rails.root.join('test', 'schemas', schema_name)

    schema_data = JSON.parse(File.read(full_path))
    schema = JsonSchema.parse!(schema_data)

    payload = JSON.parse(response.body)

    valid, errors = schema.validate(payload)
    assert valid, errors.map(&:message).join("\n")
  end
end

This loads the specified schema and validates the server response against it. It is meant for use in a controller test.

require 'test_helper'
require 'schema_helper'

class ProfilesControllerTest < ActionDispatch::IntegrationTest
  include SchemaHelper

  test 'the output corresponds to the schema' do
    get '/profile'

    assert_response :success
    assert_response_schema 'profile.json'
  end
end
You could also modify the helper assertion to explicitly take the JSON structure as an argument, allowing you to unit test your JSON serializers.
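A sketch of such a variant (the `assert_schema` name and signature are my own; it still assumes the json_schema gem and the generated schema files from above):

```ruby
require 'json'

# Hypothetical variant of the helper: it validates any Ruby hash against a
# named schema file instead of reading `response.body`, so serializers can
# be unit tested without going through a controller.
module SchemaHelper
  def assert_schema(schema_name, payload)
    full_path = Rails.root.join('test', 'schemas', schema_name)

    schema_data = JSON.parse(File.read(full_path))
    schema = JsonSchema.parse!(schema_data)

    valid, errors = schema.validate(payload)
    assert valid, errors.map(&:message).join("\n")
  end
end
```

A serializer unit test could then call something like `assert_schema 'profile.json', ProfileSerializer.new(profile).as_json`.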