This page is a brief summary of the Chirp technology platform.

Chirp is a platform to send data over sound: it is intended as a way for many kinds of device to communicate over the air.

Any device with a speaker can emit a chirp, and any sufficiently powerful device with a microphone can decode one. It is initially available as a free iOS application. The system has been designed for extension into many use-cases where existing network technologies would be impractical.

How does it work?

The Chirp platform comprises two parts: an audio protocol, which encodes a character sequence as a series of pitched tones; and a network protocol, which stores an arbitrary blob of data and assigns it a unique short code of ten characters.

Chirp audio protocol

The Chirp audio protocol was designed to be friendly. That is: simple to implement.

It has an alphabet of 32 characters [0-9, a-v] mapped to 32 pitches a semitone apart.

0 = 1760Hz
1 = 1864Hz
…
v = 10.5kHz
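The mapping can be sketched in a few lines of Python – a minimal illustration assuming equal-tempered semitones starting at 1760Hz, as the figures above suggest:

```python
# Chirp alphabet-to-pitch mapping: 32 symbols, one semitone apart.
# A sketch inferred from the figures above, not the reference implementation.
ALPHABET = "0123456789abcdefghijklmnopqrstuv"

def char_to_freq(c: str) -> float:
    """Return the tone frequency in Hz for a Chirp character."""
    index = ALPHABET.index(c)
    return 1760.0 * 2 ** (index / 12)  # one semitone = a ratio of 2^(1/12)
```

This reproduces the table above: char_to_freq('1') gives roughly 1864.7Hz, and char_to_freq('v') tops out around 10.55kHz.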

An entire chirp is a sequence of 20 pure tones of 87.2ms each. The first two tones are a common ‘frontdoor’ pair – hj – indicating to a listening device that the tones that follow are a chirp shortcode; the next 10 tones carry the 10-character payload; and the final 8 tones are Reed-Solomon error-correction characters.

[FD] [SHORTCODE] [ERROR-C]

With 5-bits per character and 10 characters per chirp, our total address space is 50 bits.
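In code, the frame layout and address-space arithmetic look like this (the constant names are ours, chosen for illustration):

```python
# Frame layout of a single chirp, as described above.
FRONTDOOR = "hj"     # fixed 2-tone preamble
PAYLOAD_LEN = 10     # shortcode characters
ECC_LEN = 8          # Reed-Solomon error-correction characters
FRAME_LEN = len(FRONTDOOR) + PAYLOAD_LEN + ECC_LEN   # 20 tones in total

BITS_PER_CHAR = 5    # log2(32) for the 32-symbol alphabet
ADDRESS_SPACE = 2 ** (BITS_PER_CHAR * PAYLOAD_LEN)   # 2**50 possible shortcodes
```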

Error correction means that Chirp transmissions are resilient to noise. A code can be reconstituted when over 25% of it is missing or misheard.

The beak (encoder)

A sending, or encoding device (in our terminology, a “beak”) needs only to be able to emit a series of sine tones between 1.7khz and 10.5khz with accurate timing. To render audio more birdlike and distinctively chirp-ish, we do some other stuff, on which topic much more later.
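A minimal beak might look like this – a hedged sketch that writes plain, unshaped sine tones straight to a WAV file (the real encoder, as noted, does more to sound birdlike):

```python
import math
import struct
import wave

SAMPLE_RATE = 44100
TONE_SECONDS = 0.0872  # 87.2ms per tone

def render_tones(freqs, path):
    """Write a sequence of sine tones (frequencies in Hz) to a mono WAV file."""
    n = int(SAMPLE_RATE * TONE_SECONDS)  # samples per tone
    samples = []
    for f in freqs:
        samples.extend(math.sin(2 * math.pi * f * i / SAMPLE_RATE) for i in range(n))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)            # 16-bit samples
        w.setframerate(SAMPLE_RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))
```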

Further beakage

Beaks can be decoupled from brains. We intend to develop and freely release chirp encoders written in HTML5, Processing, SuperCollider, MAX, PureData and anything else you can think of.

87.2ms note-lengths give a convenient number for MIDI sequencers, where chirps can be written as twenty 16th notes at around 170bpm.

The brain (decoder)

A receiving, or decoding device (in our terminology, a “brain”) needs to be able to track and decode successive pitches with error correction. The brain is the result of a significant research effort to make it robust against noise whilst remaining efficient on devices with limited DSP capabilities. We’ll be publishing more information on this topic shortly.
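To illustrate just the basic idea (the production decoder is considerably more sophisticated), here is a naive pitch detector that takes one tone’s worth of samples, picks the FFT peak, and maps it back to the nearest character. NumPy and the semitone mapping are our assumptions:

```python
import numpy as np

SAMPLE_RATE = 44100
ALPHABET = "0123456789abcdefghijklmnopqrstuv"

def detect_char(tone_samples):
    """Naive decode of a single tone: FFT peak -> nearest semitone above 1760Hz."""
    spectrum = np.abs(np.fft.rfft(tone_samples))
    peak_hz = np.argmax(spectrum) * SAMPLE_RATE / len(tone_samples)
    semitones = int(round(12 * np.log2(peak_hz / 1760.0)))
    return ALPHABET[min(max(semitones, 0), 31)]  # clamp to the 32-symbol alphabet
```

A real brain must also find tone boundaries, survive reverberation and background noise, and apply the Reed-Solomon correction across the whole frame – none of which this sketch attempts.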

Network protocol

An inherent limitation of the audio protocol is its very low transmission rate: at 87.2ms per tone, a complete chirp takes around 1.7 seconds to carry its 50-bit payload.

To send larger amounts of data, we have built a RESTful network infrastructure which allows arbitrary pieces of data to be associated with Chirp shortcodes. A sending device can thus upload a photo to the cloud and obtain a shortcode representing it, to be sent over the air. A receiving device hears the shortcode over its microphone and resolves it with a GET request.

Network stack

Our network stack is built with MongoDB, nginx and Pyramid running on Amazon’s cloud. Images and larger data objects are stored separately on Amazon S3. We’ll be publishing more detailed information about the challenges of building and maintaining this system shortly.

Example transaction

To create a new shortcode, a client POSTs a packet of JSON data to the server, where it is validated against one of a number of schemas (image/jpeg, text/plain, etc.).

A successful request returns a JSON response containing the shortcode, ready to be sent over the air. A “long” version of the code, with error correction already applied, is also included for simpler devices that cannot easily generate the error-correction characters themselves; it can be output as unmodified audio.
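As a hedged sketch of the round trip described above – the host, paths and field names here are invented for illustration, not the documented API:

```python
import json
import urllib.request

# Hypothetical endpoint; the real API base URL is not published yet.
BASE = "https://chirp.example.com"

def build_create_request(payload: dict) -> urllib.request.Request:
    """Build the POST that registers a blob of data and yields a shortcode."""
    return urllib.request.Request(
        BASE + "/chirps",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def build_resolve_request(shortcode: str) -> urllib.request.Request:
    """Build the GET that resolves a heard shortcode back into its data."""
    return urllib.request.Request(BASE + "/chirps/" + shortcode)
```

A client would then pass either request to urllib.request.urlopen() and parse the JSON response, which (per the description above) includes both the shortcode and its pre-error-corrected “long” form.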

iOS Application and beyond

Our current offering is a free iOS app which rolls all of the above services together into a simple tool for sharing and receiving photos, text and links. Many more use-cases are anticipated. A major part of Chirp’s appeal is that it is platform-agnostic: it is, after all, just sound.

Implementations for various different platforms will appear in the coming months.

Okay, okay. So where’s the API?

Coming soon. Well, as soon as.

And lastly, if you’re reading this, you’re probably the kind of person we’d enjoy talking to. Feel free to get in touch. Thanks!

[Last Updated 22.07.12]