<aside> ⚠️ Caution: Still in Heavy Development

</aside>

Get Started


  1. Install into your project using yarn or npm:

    yarn add apla-responder
    
    npm install apla-responder
    
  2. Include the dependency components near the top of the handler JS files where you want to use APLA.

    const { AudioResponse, Components: Apla } = require('apla-responder');
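
To see where those pieces fit, here's a minimal handler file sketch using the standard ask-sdk-core setup (the handler itself is illustrative; only AudioResponse, speak and getResponse come from apla-responder):

const Alexa = require('ask-sdk-core');
const { AudioResponse, Components: Apla } = require('apla-responder');

const LaunchRequestHandler = {
	canHandle(handlerInput) {
		return Alexa.getRequestType(handlerInput.requestEnvelope) === 'LaunchRequest';
	},
	handle(handlerInput) {
		const res = new AudioResponse(handlerInput);
		res.speak("Welcome!");
		return res.getResponse();
	}
};

exports.handler = Alexa.SkillBuilders.custom()
	.addRequestHandlers(LaunchRequestHandler)
	.lambda();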
      

Examples


A simple Alexa-spoken response and a reprompt to keep the session open

handle(handlerInput) {
	// some logic
	const prompt = "Would you like another?";
	
	const res = new AudioResponse(handlerInput);
	res.speak("That's the correct answer! " + prompt)

	// or res.speak("<speak>Can also include SSML like this instead of PlainText</speak>", "SSML");

	res.repromptWith(prompt)
	
	return res.getResponse(); //
}

Use getResponseBuilder() to continue adding further directives as usual

handle(handlerInput) {
	// some logic
	const prompt = "Which colour would you like?";
	const slotName = "colour";
	
	const res = new AudioResponse(handlerInput, "my-custom-directive-token");
	res.speak("Nice! " + prompt);
	res.repromptWith(prompt);

	return res.getResponseBuilder() // add more directives using this method
		.addElicitSlotDirective(slotName) // or other ask-sdk functions, etc.
		.getResponse();
}

🎵 Use multiple audio items sequentially in a response

const fanfareUrl = "<https://somepath.com/to/audio.mp3>";
const fanfare = new Apla.Audio(fanfareUrl);

handle(handlerInput) {
	// some logic
	
	const res = new AudioResponse(handlerInput);

	res.playAudio(fanfare);
	res.playAudio(fanfareUrl); // a plain URL string also works
	res.speak("That's correct!");
	
	return res.getResponse();
}

Behaviour

  1. Plays the audio file
  2. Plays the audio file again
  3. Finally, Alexa speaks her sentence.
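
For the curious: APLA responses are delivered via the Alexa.Presentation.APLA.RenderDocument directive, and sequential playback like the above maps onto APLA's Sequencer component. A rough sketch of the document shape (version string per the APLA beta docs; not necessarily byte-for-byte what apla-responder emits):

{
	"type": "APLA",
	"version": "0.8",
	"mainTemplate": {
		"parameters": ["payload"],
		"items": [{
			"type": "Sequencer",
			"items": [
				{ "type": "Audio", "source": "https://somepath.com/to/audio.mp3" },
				{ "type": "Audio", "source": "https://somepath.com/to/audio.mp3" },
				{ "type": "Speech", "contentType": "PlainText", "content": "That's correct!" }
			]
		}]
	}
}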

Prefer the JS class approach for individual components, e.g. new Apla.Audio(), so that more details can be passed in if/when more features become available.
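
In practice that just means preferring the second of these two equivalent calls (both forms appear in the example above):

res.playAudio("https://somepath.com/to/audio.mp3"); // plain URL string works today
res.playAudio(new Apla.Audio("https://somepath.com/to/audio.mp3")); // component instance, with room for future options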


🎶 Create a Mixer to form a soundscape and play multiple audio at once

const fanfareUrl = "<https://somepath.com/to/audio.mp3>";
const fanfare = new Apla.Audio(fanfareUrl);
const applauseUrl = "<https://somepath.com/to/applause.mp3>";

const queenEntranceMixer = new Apla.Mixer([
	fanfare,
	new Apla.Speech("<speak><break time=\\"1s\\"/>Please welcome the Queen!</speak>", "SSML"),
	new Apla.Audio(applause);
]);

handle(handlerInput) {
	// some logic
	
	const res = new AudioResponse(handlerInput);

	res.useMixer(queenEntranceMixer); // this could come from a CMS or 'content' part of the voice app
	res.silence(1500);
	res.speak("The Queen then looked upon her subjects.");
	
	return res.getResponse();
}

Behaviour

  1. The fanfare and applause audio tracks start playing at the same time.
  2. One second later, Alexa says "Please welcome the Queen!".
  3. Once the Mixer component has finished playing, there's 1.5 seconds of silence.
  4. Alexa then says "The Queen then looked upon her subjects.".
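
As with the sequential example, this presumably boils down to a Mixer component in the rendered APLA document; a rough sketch of just that fragment (assumed shape, not guaranteed output):

{
	"type": "Mixer",
	"items": [
		{ "type": "Audio", "source": "https://somepath.com/to/audio.mp3" },
		{ "type": "Speech", "contentType": "SSML", "content": "<speak><break time='1s'/>Please welcome the Queen!</speak>" },
		{ "type": "Audio", "source": "https://somepath.com/to/applause.mp3" }
	]
}

The silence(1500) call presumably becomes a Silence component alongside it in the outer document.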

You may want to reuse certain soundscapes across multiple handlers/handler groups (I guess that's the whole point of APLA documents in the first place), so moving content such as queenEntranceMixer to a different part of the voice app and returning it from a function may be the best way to go.
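
A minimal sketch of that idea (the file name and function name are just suggestions):

// content/soundscapes.js
const { Components: Apla } = require('apla-responder');

function getQueenEntranceMixer() {
	return new Apla.Mixer([
		new Apla.Audio("https://somepath.com/to/audio.mp3"),
		new Apla.Speech("<speak><break time=\"1s\"/>Please welcome the Queen!</speak>", "SSML"),
		new Apla.Audio("https://somepath.com/to/applause.mp3")
	]);
}

module.exports = { getQueenEntranceMixer };

Any handler can then do res.useMixer(getQueenEntranceMixer());.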

Last Updated: 28th July 2020


Introducing Alexa Presentation Language (APL) for Audio

apla-responder


Github

https://github.com/fx-adr/apla-responder

If you want to see more flexibility, please 👍 my feature request on alexa.uservoice.com to be able to use APLA for reprompts.

APLA Responder Features

APLA Responder Changelog

<aside> 👀 This is a live Notion document, you may see things change in front of you.

</aside>