Capturing Sound for Analysis

As previously described, sound is vibration traveling through a substance. These vibrations can be captured by a microphone, which converts the vibrations traveling through air into a continuously varying electrical current. When a computer captures sound through a microphone, that signal is digitized: amplitude samples of a specific size (the sample size) are taken many times a second (the sample rate). This stream of data is called a PCM (pulse code modulation) stream, which forms the foundation of digital audio. Taken together, the samples in the PCM stream digitally represent the captured audio waveform. The higher the sample rate, the more accurate the representation and the higher the frequency of audio that can be captured.
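To make the idea of a PCM stream concrete, the following sketch synthesizes one second of a 440 Hz tone as an array of 16-bit amplitude samples, the same representation AudioRecord delivers. The class and method names here are our own illustration, not part of the Android API.

```java
// Sketch: synthesize one second of a pure tone as 16-bit PCM samples.
// Each array element is one amplitude sample; the array as a whole is
// the digital representation of the waveform.
public class PcmSketch {

    public static short[] sineWave(double frequencyHz, int sampleRate) {
        short[] samples = new short[sampleRate]; // one second of audio
        for (int i = 0; i < samples.length; i++) {
            double t = (double) i / sampleRate;  // time of this sample in seconds
            samples[i] = (short) (Math.sin(2 * Math.PI * frequencyHz * t) * Short.MAX_VALUE);
        }
        return samples;
    }

    public static void main(String[] args) {
        short[] tone = sineWave(440.0, 8000);
        System.out.println(tone.length); // 8000 samples: one second at 8 kHz
    }
}
```

Raising the sample rate simply produces more samples per second, which is why it tracks the waveform more closely.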

As we learned in the previous chapter when working with the AudioRecord class, these parameters are passed into the constructor of AudioRecord when creating an object. To revisit what each of the parameters means, please see the "Raw Audio Recording with AudioRecord" section in Chapter 7.

NOTE: The Nyquist sampling theorem, named after Harry Nyquist, an engineer at Bell Labs in the early to mid-twentieth century, states that the highest frequency that can be captured by a digitizing system is one half of the sample rate used. Therefore, in order to capture audio at 440 Hz (concert A), our system needs to capture samples at 880 Hz or higher.
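The arithmetic in the note can be sketched as two one-line helpers; the class and method names are our own, used only to make the relationship explicit.

```java
// Sketch: the Nyquist limit is half the sample rate, so capturing a
// given frequency requires sampling at least twice that fast.
public class NyquistSketch {

    // Highest frequency a given sample rate can faithfully capture.
    public static int nyquistLimit(int sampleRate) {
        return sampleRate / 2;
    }

    // Minimum sample rate needed to capture a given frequency.
    public static int minimumSampleRate(int highestFrequencyHz) {
        return highestFrequencyHz * 2;
    }

    public static void main(String[] args) {
        System.out.println(nyquistLimit(8000));     // 4000: an 8 kHz recording tops out at 4 kHz
        System.out.println(minimumSampleRate(440)); // 880: minimum rate to capture concert A
    }
}
```

This is why the 8,000 Hz rate used in the recording code below is fine for speech and simple tones but would miss the upper range of music, which is typically captured at 44,100 Hz.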

Here is a quick recap of the steps required to capture audio using an object of type AudioRecord.

int frequency = 8000;
int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;

int bufferSize = AudioRecord.getMinBufferSize(frequency,
        channelConfiguration, audioEncoding);

AudioRecord audioRecord = new AudioRecord(MediaRecorder.AudioSource.MIC,
        frequency, channelConfiguration, audioEncoding, bufferSize);

short[] buffer = new short[bufferSize];
audioRecord.startRecording();

while (started) {
    int bufferReadResult = audioRecord.read(buffer, 0, bufferSize);
}

audioRecord.stop();

The foregoing code doesn't actually do anything with the audio that is captured. Normally we would want to write it to a file or analyze it in some other manner.
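As a minimal example of analyzing the captured audio rather than discarding it, the following sketch computes the peak amplitude of a block of 16-bit samples. On a device this calculation would run inside the read loop on each filled buffer; the class and method names here are our own, not part of the Android API.

```java
// Sketch: inspect a block of captured 16-bit PCM samples by finding
// its peak amplitude (0 = silence, 32767 = full scale).
public class BufferAnalysis {

    public static int peakAmplitude(short[] buffer, int validSamples) {
        int peak = 0;
        for (int i = 0; i < validSamples; i++) {
            int magnitude = Math.abs(buffer[i]); // widened to int before abs
            if (magnitude > peak) {
                peak = magnitude;
            }
        }
        return peak;
    }

    public static void main(String[] args) {
        // Stand-in for a buffer filled by audioRecord.read(...)
        short[] fakeBuffer = {0, 1000, -3000, 2000};
        System.out.println(peakAmplitude(fakeBuffer, fakeBuffer.length)); // 3000
    }
}
```

Inside the recording loop, the second argument would be the value returned by audioRecord.read, since a read may fill less than the whole buffer.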
