Playing Sound with Audio Track

Use the AudioTrack class to play raw audio directly into the hardware buffers. Create a new Audio Track object, specifying the streaming mode, frequency, channel configuration, and the audio encoding type and length of the audio to play back.

AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                                       frequency,
                                       channelConfiguration,
                                       audioEncoding,
                                       audioLength,
                                       AudioTrack.MODE_STREAM);


Because this is raw audio, there is no meta-data associated with the recorded files, so it's important to correctly set the audio data properties to the same values as those used when recording the file.

When your Audio Track is initialized, call the play method to begin asynchronous playback, and use the write method to add raw audio data into the playback buffer.

audioTrack.write(audio, 0, audioLength);

You can write audio into the Audio Track buffer either before play has been called or after. In the former case, playback will commence as soon as play is called, while in the latter playback will begin as soon as you write data to the Audio Track buffer.

Listing 11-22 plays back the raw audio recorded in Listing 11-21, but does so at a different speed: because an Audio Track consumes samples at its declared rate, halving the sample rate passed to the Audio Track means each second of recorded audio takes two seconds to play, so playback runs at half speed (and an octave lower in pitch).
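The effect of changing the declared sample rate follows directly from the arithmetic: playback duration equals the number of samples divided by the rate at which they are consumed. A tiny plain-Java sketch of that relationship (the class and method names are illustrative, not part of the listing):

```java
public class PlaybackMath {
    // Playback duration, in seconds, of a buffer of mono 16-bit samples
    // when consumed at the given sample rate.
    public static double durationSeconds(int sampleCount, int sampleRate) {
        return (double) sampleCount / sampleRate;
    }
}
```

One second of audio captured at 11025 Hz (11025 samples) lasts roughly two seconds when the declared playback rate is halved.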

LISTING 11-22: Playing raw audio with Audio Track

int frequency = 11025/2;
int channelConfiguration = AudioFormat.CHANNEL_CONFIGURATION_MONO;
int audioEncoding = AudioFormat.ENCODING_PCM_16BIT;

File file = new File(Environment.getExternalStorageDirectory(), "raw.pcm");

// Short array to store audio track (16 bit so 2 bytes per short)
int audioLength = (int)(file.length()/2);
short[] audio = new short[audioLength];

InputStream is = new FileInputStream(file);
BufferedInputStream bis = new BufferedInputStream(is);
DataInputStream dis = new DataInputStream(bis);

// Read the raw audio data into the audio array.
int i = 0;
while (dis.available() > 0) {
  audio[i] = dis.readShort();
  i++;
}

// Close the input streams.
dis.close();

// Create and play a new AudioTrack object
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                                       frequency,
                                       channelConfiguration,
                                       audioEncoding,
                                       audioLength,
                                       AudioTrack.MODE_STREAM);
audioTrack.play();
audioTrack.write(audio, 0, audioLength);
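One subtlety when filling the short array: DataInputStream.readShort reads big-endian values, while raw PCM captured on Android hardware is normally little-endian, so samples read that way may need their byte order swapped. A minimal plain-Java sketch (the PcmUtil class name is an assumption, not part of the listing) that decodes little-endian 16-bit PCM bytes into a short array ready for AudioTrack.write:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class PcmUtil {
    // Decode raw little-endian 16-bit PCM bytes into shorts suitable
    // for passing to AudioTrack.write(short[], int, int).
    public static short[] bytesToShorts(byte[] pcm) {
        short[] audio = new short[pcm.length / 2];
        ByteBuffer.wrap(pcm)
                  .order(ByteOrder.LITTLE_ENDIAN)
                  .asShortBuffer()
                  .get(audio);
        return audio;
    }
}
```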


Since Android 1.5 (API level 3), Android has supported voice input and speech recognition using the RecognizerIntent class.

This API lets you accept voice input into your application using the standard voice input dialog shown in Figure 11-1.

Voice recognition is initiated by calling startActivityForResult, passing in an Intent that specifies the RecognizerIntent.ACTION_RECOGNIZE_SPEECH action constant.

The launch Intent must include the RecognizerIntent.EXTRA_LANGUAGE_MODEL extra to specify the language model used to parse the input audio. This can be either LANGUAGE_MODEL_FREE_FORM or LANGUAGE_MODEL_WEB_SEARCH; both are available as static constants from the RecognizerIntent class.

You can also specify a number of optional extras to control the language, potential result count, and display prompt using the following RecognizerIntent constants:

> EXTRA_PROMPT Specify a string that will be displayed in the voice input dialog (shown in Figure 11-1) to prompt the user to speak.

> EXTRA_MAX_RESULTS Use an integer value to limit the number of potential recognition results returned.

> EXTRA_LANGUAGE Use a language constant from the Locale class to specify an input language other than the device default. You can find the current default by calling the static getDefault method on the Locale class.


The engine that handles the speech recognition may not be capable of understanding spoken input from all the languages available from the Locale class.

Not all devices will include support for speech recognition. In such cases it is generally possible to download the voice recognition library from the Android Market.

Listing 11-23 shows how to initiate voice recognition in English, returning one result, and using a custom prompt.

LISTING 11-23: Initiating a speech recognition request

Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);

// Specify free form input
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
intent.putExtra(RecognizerIntent.EXTRA_PROMPT,
                "or forever hold your peace");
intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 1);
intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.ENGLISH);
startActivityForResult(intent, VOICE_RECOGNITION);

When the user has completed his or her voice input, the resulting audio will be analyzed and processed by the speech recognition engine. The results will then be returned through the onActivityResult handler as an ArrayList of strings in the EXTRA_RESULTS extra, as shown in Listing 11-24.

Each string returned in the ArrayList represents a potential match for the spoken input.
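The matches are generally ordered by descending recognizer confidence, so the first element is the engine's best guess. A minimal plain-Java helper for picking it safely (the SpeechResults name and fallback parameter are illustrative assumptions, not part of the API):

```java
import java.util.List;

public class SpeechResults {
    // Return the highest-confidence match, or a fallback when the
    // recognizer produced no results at all.
    public static String bestMatch(List<String> results, String fallback) {
        if (results == null || results.isEmpty()) {
            return fallback;
        }
        return results.get(0);
    }
}
```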

LISTING 11-24: Finding the results of a speech recognition request

@Override
protected void onActivityResult(int requestCode,
                                int resultCode,
                                Intent data) {
  if (requestCode == VOICE_RECOGNITION && resultCode == RESULT_OK) {
    ArrayList<String> results;
    results = data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
    // TODO Do something with the recognized voice strings
  }
  super.onActivityResult(requestCode, resultCode, data);
}


In this chapter you learned how to play, record, and capture multimedia within your application.

Beginning with the Media Player, you learned how to play back audio and video from local files, application resources, and online streaming sites. You were introduced to the Video View and learned how to create and use Surface Views to play back video content, provide video recording preview, and display a live camera feed.

You learned how to use Intents to leverage the native applications to record video and take pictures, as well as use the Media Recorder and Camera classes to implement your own still and moving image capture solutions.

You were also shown how to read and modify Exif image data, add new media to the Media Store, and manipulate raw audio.

Finally, you were introduced to the voice and speech recognition libraries, and learned how to use them to add voice input to your applications.

In the next chapter you'll explore the low-level communication APIs available on the Android platform.

You'll learn to use Android's telephony APIs to monitor mobile connectivity, calls, and SMS activity. You'll also learn to use the telephony and SMS APIs to initiate outgoing calls and send and receive SMS messages from within your application.
