This tutorial is part of the "Seeing Sounds with Three.JS" tutorial series. In this part we'll get started with the Web Audio API to add music to an existing scene.
We're building on the previous part of this series where we created a terrain. The purpose is to get that terrain moving to the beat of any chosen music track, so it looks like the floor is jumping.
However, before that can be done we first need to analyse the chosen sound file and detect the beat, and that is what we will do in this tutorial by creating an audio analyser object. The audioAnalyser will allow us to detect the beat of the music; the next tutorial will then use that beat to move our terrain.
The Audio Analyser
The final audioAnalyser script we'll create can be seen here:
Overview
Overall, the audioAnalyser consists of a constructor and 5 functions (a skeleton sketch follows the list):
- constructor - sets up the object's main properties, such as the sound track.
- loadAudio() - loads the sound file from the given path
- decodeAudio() - once the audio is loaded, it needs to be decoded.
- setUpAnalyser() - sets up everything needed to analyse the music
- analyseBoost() - finds the beat
- play() - call this to start everything off
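To give a feel for the overall shape before we step through it, here's a minimal skeleton, assuming the methods live on the prototype (the constructor body is shown in full below; setUpAnalyser() and analyseBoost() are covered in the next part):

//a minimal skeleton of the audioAnalyser - the constructor body
//is shown in full later in this tutorial
function AudioAnalyser(sound) {
    this.sound = sound; //path to the sound file
    this.audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    this.source = this.audioCtx.createBufferSource();
    this.request = new XMLHttpRequest();
}

AudioAnalyser.prototype = {
    loadAudio: function() { /* load the sound file */ },
    decodeAudio: function() { /* decode the loaded arraybuffer */ },
    setUpAnalyser: function(decodedData) { /* covered in the next part */ },
    analyseBoost: function() { /* covered in the next part */ },
    play: function() { this.loadAudio(); }
};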
Usage
You can use the audio analyser like this:
//create an instance (providing your sound file path)
var audioAnalyser = new AudioAnalyser('objects/audio/music.mp3');

//play music
audioAnalyser.play();
How it works
The Audio Context
When instantiating the audioAnalyser, a number of things are set up in the constructor. The main parts are the creation of an Audio Context, the Buffer Source, and the XMLHttpRequest:
//create audio context
this.audioCtx = new (window.AudioContext || window.webkitAudioContext)();

//create buffer source
this.source = this.audioCtx.createBufferSource();

//create XMLHttpRequest
this.request = new XMLHttpRequest();
The Audio Context is used to manage the sound passed into the constructor, and also to play it. All the functionality we need to analyse our music is provided by the Audio Context, such as decoding the music.
A Buffer Source is then created from the Audio Context by calling audioCtx.createBufferSource(); this source will eventually hold the decoded audio and play it back.
The XMLHttpRequest property is used for fetching the sound from the filepath.
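To see how these three pieces fit together, here's a rough sketch of the wiring that happens once the audio has been decoded. In our audioAnalyser this wiring lives in setUpAnalyser(), which we'll cover in the next part:

//a rough sketch of the wiring once decoding is done (the real
//version lives in setUpAnalyser, covered in the next part)
this.source.buffer = decodedData;               //hand the decoded audio to the buffer source
this.source.connect(this.audioCtx.destination); //route the source to the speakers
this.source.start(0);                           //begin playback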
The Audio Buffer
Calling audioAnalyser.play() starts everything off by triggering loadAudio(), which uses the request property created in the constructor to load our sound (via the this.request.open call):
loadAudio:function(){
    //request the sound file from the path given to the constructor
    this.request.open('GET', this.sound, true);

    //ask for the raw binary data so it can be decoded
    this.request.responseType = 'arraybuffer';

    var audioAnalyserInstance = this;

    //on load decode the audio
    this.request.onload = function() {
        audioAnalyserInstance.decodeAudio();
    }

    //then send the request
    this.request.send();
}
Setting the responseType to arraybuffer tells the request to return the sound file as raw binary data. This arraybuffer then becomes part of our request object (as this.request.response), and is what we use to decode the audio. That happens in the onload handler, where we call decodeAudio().
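As an aside, in more recent code the same load step could be written with the fetch API rather than XMLHttpRequest. This isn't what the audioAnalyser uses - just a sketch of the alternative, where decodeAudio would need a small change to accept the buffer as a parameter:

//a sketch of the same load step using fetch (not the approach
//used in this series)
var audioAnalyserInstance = this;
fetch(this.sound)
    .then(function(response) {
        //get the raw binary data, like responseType = 'arraybuffer'
        return response.arrayBuffer();
    })
    .then(function(arrayBuffer) {
        //decodeAudio would need adjusting to take the buffer directly
        audioAnalyserInstance.decodeAudio(arrayBuffer);
    });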
Decode the Audio
Here, we again use the Audio Context (this.audioCtx), this time calling its decodeAudioData function with this.request.response as a parameter, so that it can access the arraybuffer which the request object holds:
decodeAudio:function(){
    var audioAnalyserInstance = this;
    this.audioCtx.decodeAudioData(this.request.response, function(decodedData) {
        //success - pass the decoded data on to the analyser set-up
        audioAnalyserInstance.setUpAnalyser(decodedData);
    }, function(e){
        //log any decoding errors
        console.log("Error with decoding audio data" + e.err);
    });
}
decodeAudioData also takes a callback function as its second parameter, which receives the decoded data and simply passes it on to our audioAnalyser's setUpAnalyser function.
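It's also worth knowing that in current browsers decodeAudioData returns a promise, so the same decode step could be written like this (a sketch; older webkit-prefixed browsers only support the callback form used above):

//the same decode step in promise form (a sketch - older
//webkit-prefixed browsers only support the callback form)
var audioAnalyserInstance = this;
this.audioCtx.decodeAudioData(this.request.response)
    .then(function(decodedData) {
        audioAnalyserInstance.setUpAnalyser(decodedData);
    })
    .catch(function(e) {
        console.log('Error with decoding audio data: ' + e.message);
    });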
That's all for now
That's enough for one tutorial. In the next part, we'll look at the last section of the audioAnalyser code, and find out how the Audio Context's createAnalyser function can be used to analyse the beat.