Rapport Web Viewer Configuration
General Setup
projectId: Your project’s ID. (String, required)
projectToken: Your project’s token. (String, required)
lobbyZoneId: ID of the lobby zone that your AI is added to. (String, required)
aiUserId: Your AI user’s ID associated with this project. (String, required)
Rendering
backgroundColor: Hex value of a color or 'transparent'. (String, default 'transparent') E.g. #fcba03
loadingImage: URL of the image shown until the AI is fully loaded. (String, default null)
orbitalControls: Allow camera rotation around the avatar. (Boolean, default true)
ocAngle: Maximum angle of rotation around the avatar. (Number, default 45 degrees)
ocZoom: Allow zoom in and out on the avatar. (Boolean, default false)
cameraPosition: Allows adjustment of the default calculated camera position. E.g. use the param object cameraPosition = { x: 0, y: 0.5, z: 2 } or the camera-position attribute as a string '{"x":0,"y":0.5,"z":2}'.
signalIndicator: Controls visibility of the signal indicator icon. The value 'auto' makes the signal indicator visible during poor network conditions. (String, default 'auto', enum ['never', 'always', 'auto'])
progressBar: A 0-100% loading bar indicating progress of room session setup. (Boolean, default false)
statusBar: A text notification bar indicating steps taken during a session request. Log in, create a room, download a model, communicate with CPI tower, etc. (Boolean, default false)
showLogo: Controls the visibility of the Rapport logo and link rendered on the scene. Free-tier projects always have the logo set to visible. (Boolean, default false)
targetFps: Limit the frame rate to a certain value. The screen refresh rate is the hard limit of this value. Devices that can support fewer frames per second than this value will render as many frames per second as possible, up to the target. (Integer, default 50)
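The General Setup and Rendering options above can be combined into a single configuration object. A sketch is shown below; the credential values are placeholders and the specific option values are illustrative, not recommendations.

```javascript
// Sketch of a configuration object combining General Setup and Rendering
// options (credential values are placeholders).
const renderingConfig = {
  projectId: 'yourProjectId',
  projectToken: 'yourProjectToken',
  lobbyZoneId: 'yourLobbyZoneId',
  aiUserId: 'yourAiUserId',
  backgroundColor: '#fcba03',
  loadingImage: 'https://example.com/loading.png',
  orbitalControls: true,
  ocAngle: 30,
  cameraPosition: { x: 0, y: 0.5, z: 2 },
  signalIndicator: 'auto',
  targetFps: 30,
};
```

Such an object would typically be passed to sessionRequest, described in the Methods section.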
Advanced
openingText: Sends the value to the AI CPIM after the session is connected. (String, default null)
ttsOpeningText: Sends the value to the TTS CPIM after the session is connected. (String, default null)
emotions: If true, icons will appear in the footer indicating the emotion (positive, negative, neutral) detected in stream from the user's microphone. (Boolean, default false)
micControl: If enabled, a microphone icon for muting/unmuting the user's microphone will appear. (Boolean, default false)
volumeControl: If true, a slider will appear to allow the user to control the output volume, along with a speaker icon for muting/unmuting the output. (Boolean, default false)
timeout: Maximum duration of a session. After this time, the session will disconnect and a sessionDisconnected info notification will be sent. (Seconds, default 600)
inactivity: An inactivity timer used to disconnect a session. Inactivity is reset by messages received from a CPIM module, e.g. aiMessage, asrMessage. (Seconds, default 180)
logLevel: Sets which Status Code notifications the integrator receives. All notifications will be sent if set to 'info'. Set to 'warning' to receive 'warning' and 'error' notifications. Set to 'error' to only receive notifications of error/terminating sessionDisconnected events. (String, default 'info')
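The Advanced options can likewise be grouped into a configuration object. The values below are illustrative only.

```javascript
// Sketch of the Advanced options as a configuration object.
const advancedConfig = {
  openingText: 'Hello! How can I help you today?',
  emotions: false,
  micControl: true,
  volumeControl: true,
  timeout: 600,        // seconds; the session disconnects after this duration
  inactivity: 180,     // seconds; reset by CPIM messages such as aiMessage
  logLevel: 'warning', // receive only 'warning' and 'error' notifications
};
```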
Audio
Startup attributes
aecRequired: Echo cancellation should be available for most conversations with a microphone. If echo cancellation cannot be established and aecRequired is set to true, the session will terminate with a sessionDisconnected notification. If AEC cannot be established but is not required, warning messages will be sent instead. (Boolean, default false)
Microphone
Startup attributes
micRequired: When set to false, the microphone permission request is skipped. RWV won't use the microphone in this mode and won't start the session until there is a user interaction on the page. (Boolean, default true)
micMuted: When set to true, the session is started with a muted microphone. (Boolean, default false)
micDelay: The session is started with a muted microphone and unmutes after a delay. The delay value is in milliseconds. Setting the delay to 0 disables this behavior. (Number, default 300)
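The microphone startup attributes can be sketched as a configuration fragment. This combination starts the session with a muted microphone and unmutes it after one second.

```javascript
// Sketch: microphone startup options as a configuration fragment.
const micConfig = {
  micRequired: true,
  micMuted: false,
  micDelay: 1000, // milliseconds; 0 disables the delayed unmute
};
```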
Methods
muteMic(mute = boolean)
scene.muteMic(true);
Mutes the microphone if the mute boolean is true. Unmutes the microphone if the mute boolean is false. Throws an error if called before sessionConnected.
getMic()
await scene.getMic();
The getMic method returns a promise which resolves to true if microphone access is granted. It can be called after the sessionConnected event is emitted and if the session was started with mic-required set to false. Rejects if the call fails to get access to the user's microphone.
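A typical flow is to start without a microphone and request access later on a user gesture. The sketch below assumes only sessionRequest, getMic and muteMic from the API above; the `button` argument stands in for a real DOM button and is a hypothetical part of this example.

```javascript
// Sketch: start a session without a microphone, then request access later
// on a user gesture (the `button` argument is a hypothetical DOM button).
async function enableMicLater(scene, button) {
  let connected = false;
  await scene.sessionRequest({
    micRequired: false,
    sessionConnected: () => { connected = true; },
  });
  button.addEventListener('click', async () => {
    if (!connected) return; // getMic is only valid after sessionConnected
    try {
      await scene.getMic(); // resolves once microphone access is granted
      scene.muteMic(false); // start listening
    } catch (err) {
      console.warn('Microphone access was denied', err);
    }
  });
}
```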
Speaker
Startup attributes
speakerMuted: When set to true the session is started with muted speakers. (Boolean, default false)
Methods
muteSpeaker(mute = boolean)
scene.muteSpeaker(true);
Mutes the speaker if the mute boolean is true. Unmutes the speaker if the mute boolean is false. Throws an error if called before sessionConnected.
Noise Gate
Rapport provides a built-in noise gate to improve the user experience in noisy environments. The user can fine-tune the noise gate in the options modal.
Properties
noiseGateThreshold: Noise gate threshold getter/setter. Value is in decibels and must be between -100 and +100. (Number, default -50)
scene.audio.mic.noiseGateThreshold = -100;
Methods
disableNoiseGate()
scene.audio.mic.disableNoiseGate();
Disables noise gate by setting the noiseGateThreshold property to -100.
Push to talk
Rapport provides a built-in push-to-talk feature accessible from the options modal. By default, this feature is turned off. Users can open the configuration modal during the session and set a push to talk key.
The built-in push-to-talk feature works only while the cursor is over the Rapport scene. To circumvent this restriction, see below for a custom push-to-talk implementation.
Startup attributes
pttKey: Set a predefined push-to-talk key. The value must be a KeyboardEvent code property value. By setting this value, the session will be started with a muted microphone and push-to-talk enabled. (String, default null)
Custom implementation
It is possible to implement custom push-to-talk functionality.
scene.sessionRequest({
sessionConnected: () => {
// Start session with muted microphone and disabled noise gate.
scene.muteMic(true);
scene.audio.mic.disableNoiseGate();
}
});
// Listen to specific key press events and toggle the microphone based on button interaction.
window.addEventListener('keydown', (e) => {
if (e.code === 'Space') {
scene.muteMic(false);
}
});
window.addEventListener('keyup', (e) => {
if (e.code === 'Space') {
scene.muteMic(true);
}
});
Events
stateChanged
The stateChanged event is emitted when the internal audio state changes, either by a user interaction or by internal logic. This event's callback is called with the new state. E.g. the user mutes the microphone by clicking the microphone icon.
scene.addEventListener('stateChanged', (e) => {
console.log(e.detail);
});
Animations
The animations module allows you to play animation clips built into the model file.
Methods
get()
scene.animations.get();
Returns an array of animation clip names contained by the model.
play(name = string, loop = boolean (default false), speed = number (default 1))
scene.animations.play('Idle', true, 1);
Plays the given animation by name. The animation can be looped and the speed of the animation can be set.
stop()
scene.animations.stop();
Stops the currently playing animation clip.
setSpeed(speed = number)
scene.animations.setSpeed(2);
Changes the currently running animation clip’s speed to the given value.
Events
animationFinished
The animationFinished event is emitted when the requested animation clip has played to its end.
scene.addEventListener('animationFinished', () => {});
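The methods above can be combined into a small workflow: list the clips contained by the model, play the first one once, and react when it finishes. This is a sketch; clip names depend entirely on your model file.

```javascript
// Sketch: list the model's animation clips, play the first one once at
// normal speed, and log when it finishes.
function playFirstClip(scene) {
  const clips = scene.animations.get();
  if (clips.length === 0) return null;
  scene.addEventListener('animationFinished', () => {
    console.log('animation clip finished');
  });
  scene.animations.play(clips[0], false, 1); // play once, normal speed
  return clips[0];
}
```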
Manage scene lights
The lights module provides methods to read, create, update and delete lights on the scene during runtime. Multiple different types of light are supported. Each type has its own properties that can be modified.
Example light object:
{
type: 'DirectionalLight',
intensity: 1,
color: '#ffffff',
shadows: true,
rotation: {
x: 0,
y: 0
}
}
Types of lights:
AmbientLight:
Reference: https://threejs.org/docs/?q=light#api/en/lights/AmbientLight
{
type: 'AmbientLight',
intensity: 1,
color: '#ffffff',
}
DirectionalLight:
{
type: 'DirectionalLight',
intensity: 1,
color: '#ffffff',
shadows: true,
rotation: {
x: 0,
y: 0,
},
}
Reference: https://threejs.org/docs/?q=light#api/en/lights/DirectionalLight
SpotLight:
{
type: 'SpotLight',
intensity: 1,
color: '#ffffff',
distance: 0,
angle: Math.PI / 3,
penumbra: 0,
decay: 1,
rotation: {
x: 0,
y: 0,
}
}
Reference: https://threejs.org/docs/?q=light#api/en/lights/SpotLight
HemisphereLight:
{
type: 'HemisphereLight',
intensity: 1,
skyColor: '#bae8ff',
groundColor: '#ffedd2',
}
Reference: https://threejs.org/docs/?q=light#api/en/lights/HemisphereLight
Create:
Create a light object based on the given properties and attach it to the scene.
Returns the new light object.
myRapportScene.lights.create({
type: 'DirectionalLight',
intensity: 1,
color: '#ffffff',
shadows: true,
rotation: {
x: 0,
y: 0,
},
});
Read:
Returns all light objects attached to the scene in an array.
myRapportScene.lights.read();
Update:
Update the given light object with the new properties.
myRapportScene.lights.update(lightObject, {
intensity: 1,
color: '#ffffff',
shadows: true,
rotation: {
x: 0,
y: 0,
},
});
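The read and update methods can be combined, for example to dim the whole scene. This is a sketch using only the lights API described above.

```javascript
// Sketch: read every light on the scene and halve its intensity using the
// update method. Returns the number of lights touched.
function dimLights(scene) {
  const lights = scene.lights.read();
  for (const light of lights) {
    scene.lights.update(light, { intensity: light.intensity / 2 });
  }
  return lights.length;
}
```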
Delete:
Delete the given light from the scene.
myRapportScene.lights.delete(lightObject);
Delete all:
Delete all lights from the scene.
myRapportScene.lights.delete();
Light tester playground: https://demos.rapport.cloud/demos/tech/light/
Events
The <rapport-scene> element emits events during lifecycle points. You can listen to these events in the following ways.
myRapportScene.addEventListener('sessionDisconnected', (reason) => {});
myRapportScene.addEventListener('sessionConnected', () => {});
You can also pass callback functions to the configuration object.
myRapportScene.sessionRequest({
sessionDisconnected: (reason) => {},
sessionConnected: () => {},
});
sessionDisconnected(reason)
Called after the session ends because of an error or timeout. It is not invoked after a disconnect triggered with rapportScene.sessionDisconnect(). The reason argument can be used to determine the cause of the disconnect. Learn more in the Status codes section.
sessionConnected()
Called after the scene and the AI is fully ready to start the conversation. Useful if you want to show a loader until the session is fully connected.
Methods
sessionRequest()
> myRapportScene.sessionRequest(configuration = {});
The sessionRequest method is used to request a Rapport session. It returns a Promise that resolves if a Rapport session has successfully been requested and rejects with a reason if it fails. Parameters for starting the session are taken either from <rapport-scene> attributes or from the given configuration object. If the configuration parameter is omitted, the element's attributes are used. Element attributes always take priority over configuration parameters. A resolved sessionRequest does not mean the AI is ready for a conversation yet; use the sessionConnected event to determine when the AI is fully ready to start the conversation.
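Because sessionRequest returns a Promise, failures can be caught and a loader shown until the session is actually connected. The showLoader/hideLoader callbacks below are hypothetical page functions; only sessionRequest and the sessionConnected callback come from the API above.

```javascript
// Sketch: request a session behind a loader. A resolved sessionRequest only
// means the request succeeded, so the loader is hidden in sessionConnected.
async function startSession(scene, showLoader, hideLoader) {
  showLoader();
  try {
    await scene.sessionRequest({
      sessionConnected: () => hideLoader(),
    });
  } catch (reason) {
    hideLoader();
    console.error('Session request failed:', reason);
  }
}
```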
sessionDisconnect()
> myRapportScene.sessionDisconnect();
The synchronous sessionDisconnect method is used to disconnect the session with the AI. After disconnecting, the scene will show the loading image. It returns after the session has made a best effort at disconnecting. You can call sessionRequest() again in a disconnected state. Removing myRapportScene from the DOM automatically calls sessionDisconnect(). If the network connection is lost midstream, it will take up to 30 seconds for the session to be fully disconnected by CPI.
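Since sessionRequest can be called again after a disconnect, the two methods can drive a simple connect/disconnect toggle. This is a sketch; only sessionRequest and sessionDisconnect come from the API above.

```javascript
// Sketch: a connect/disconnect toggle. sessionDisconnect is synchronous and
// sessionRequest can be called again while disconnected.
function makeSessionToggle(scene) {
  let connected = false;
  return async function toggle() {
    if (connected) {
      scene.sessionDisconnect(); // synchronous best-effort disconnect
    } else {
      await scene.sessionRequest({});
    }
    connected = !connected;
    return connected;
  };
}
```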
CPI (Cloud Processing Infrastructure) modules
CPI modules are a way to configure aspects of your AI session. The main modules currently are: ASR (Automatic Speech Recognition), AI, TTS (Text-To-Speech), and AC (Animation Controller). You can send commands to specific modules and listen to responses from specific modules. See the aecRequired attribute above for echo cancellation behavior.
Modules and their capabilities will continue to expand in the future, including language translations between steps.
The general flow of data is:
The user speaks into the microphone:
The ASR module converts microphone audio into text (if there is no microphone, ASR is bypassed and the integrator would use myRapportScene.modules.ai.sendText(text)).
ASR text is sent to the AI module to get a text response.
AI text response is converted into audio by the TTS module.
TTS audio is sent to the AC module to generate animation.
You can listen for responses from specific CPI modules with event listeners:
// moduleName = 'asrMessage' || 'aiMessage' || 'acMessage'
rapportScene.addEventListener(moduleName, (e) => {}); // see e.detail.params
or as sessionRequest callback parameters:
myRapportScene.sessionRequest({
asrMessage: (e) => {}, // see e.params
aiMessage: (e) => {},
acMessage: (e) => {},
});
Listening to asrMessage will give a transcript of what the user said, while listening to aiMessage will give the AI's response (for audio-stream driven AIs).
If a command is sent, a message will be sent back to the module's event listener and callback. These messages need to be checked for results indicating success or potential error notifications.
General
Events
moduleError
The moduleError event (moduleErrorCallback) is emitted when a CPIM or its related service responds with an error.
rapport.addEventListener('moduleError', (e) => {
console.log(e.detail);
});
ASR (Automatic Speech Recognition)
This module controls the ASR service connected to your project.
Methods
scene.modules.asr.setLanguage(languageCode, provider)
Set the ASR service's language.
Possible provider values: ['amazon', 'azure', 'google', 'lex'].
The given provider needs to match the provider configured in the project.
scene.modules.asr.setLanguage('en-GB', 'amazon');
scene.modules.asr.identifyLanguage({ languageHints, timeout })
Trigger the CPI language identification sequence. CPI will try to detect the speaker's language from the given list (languageHints) during the given time frame (timeout).
Language hints: en-US,fr-FR,de-DE,it-IT,ja-JP,ko-KR,pt-BR,es-US,zh-CN
The timeout value must be between 5-30 seconds (inclusive).
myRapportScene.modules.asr.identifyLanguage({
languageHints: 'en-US,de-DE,fr-FR',
timeout: 10,
});
CPI will respond with an asynchronous message either when the timeout maximum duration is reached or the speaker’s language is identified during the given time frame.
myRapportScene.addEventListener('asrMessage', (e) => {
switch (e.detail.method) {
case 'asr-text': {
// What ASR heard.
console.log(e.detail.params.text);
break;
}
case 'asr-lang': {
if (e.detail.params.timeout) {
console.log('language identification timed out');
break;
}
console.log(`identified language: ${e.detail.params.lang}`);
break;
}
default: {
break;
}
}
});
Magic marker
The magic marker flag will be present in the ASR message payload when it is detected.
scene.addEventListener('asrMessage', (e) => {
console.log(e.detail.magic_marker);
});
TTS (Text-To-Speech)
TTS can be configured when setting up an AI project on Accounts.
The TTS language can be set as shown in the two examples below. This is demonstrated in the Real-time translation demo.
rapport.modules.asr.setLanguage('en-GB');
rapport.modules.tts.setLanguage('ja-JP', 'Mizuki', undefined, false);
rapport.modules.asr.setLanguage('en-US');
rapport.modules.tts.setLanguage('es-US', 'Lupe');
sendText(text)
> myRapportScene.modules.tts.sendText(text);
The TTS sendText method is used to directly make the AI say something to the user. Used by 'tts-opening-text'. Throws an error if the text is not a string.
Events
ttsStart
The ttsStart event is emitted when audio playback starts. This event's callback is called with the ID of the command that started playing, if applicable, and with the transcript of the spoken text. The emission of this event is synchronized with the character's start-of-speech animation.
Example:
scene.addEventListener('ttsStart', (e) => {
console.log(e.detail.commandId);
console.log(e.detail.text);
});
ttsEnd
The ttsEnd event is emitted when audio playback ends. This event's callback is called with the ID of the command that stopped playing, if applicable, and with the transcript of the spoken text. The emission of this event is synchronized with the character's end-of-speech animation.
scene.addEventListener('ttsEnd', (e) => {
console.log(e.detail.commandId);
console.log(e.detail.text);
});
AC (Animation Controller)
Properties
moods: Mood list getter. Value is an array containing strings of possible mood names. (array)
scene.modules.ac.moods;
Methods
scene.modules.ac.setMood(mood = string)
scene.modules.ac.setMood('positive');
Sets the mood for the avatar.
The mood is set until a new mood overwrites it. Setting a mood overrides the default AC behavior, which automatically detects mood and switches between moods depending on the detected audio.
scene.modules.ac.setScale(scale = number)
scene.modules.ac.setScale(1.5);
Sets the amount the animation is exaggerated. Usually a number between 0 and 2; the closer to zero the value is, the smaller the mouth movements. The default value is 1.
scene.modules.ac.setSpeed(speed = number)
scene.modules.ac.setSpeed(1.5);
Sets the animation play speed. Usually a number between 0 and 2; the closer to zero the value is, the slower the animation. The default value is 1.
scene.modules.ac.setFrequency(frequency = number)
scene.modules.ac.setFrequency(1.5);
Sets the frequency of animation state changes. Usually a number between 0 and 2; the closer to zero the value is, the longer each animation state is played. The default value is 1.
scene.modules.ac.setModifier(modifier = string, value = number)
scene.modules.ac.setModifier('SG_COM_MOD_NONVERBAL_SPEED', 1.5);
Sets the value for SG Com behaviour modifiers. The value argument is a number between 0 and 2, with the default set to 1.
List of modifiers: SG Com Commands and Modifiers
AI (Artificial Intelligence)
This module is the AI you configured in accounts. Useful if you have a session with no microphone access and want to send text to an AI.
sendText(text)
> myRapportScene.modules.ai.sendText(text);
The AI sendText method is used to send a string directly to the AI. The string can be in SSML format. (Insert SSML documentation here.) Useful if you want to create a button-driven or hybrid AI. Throws an error if the text is not a string.
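A button-driven setup might wire page buttons to modules.ai.sendText and surface the AI's replies from aiMessage events. The onReply callback below is a hypothetical hook for your own UI; only addEventListener and modules.ai.sendText come from the API above.

```javascript
// Sketch of a microphone-less, button-driven exchange: user choices are sent
// with modules.ai.sendText and the AI's replies arrive as aiMessage events.
function wireButtonDrivenAi(scene, onReply) {
  scene.addEventListener('aiMessage', (e) => onReply(e.detail));
  return (text) => scene.modules.ai.sendText(text); // sendText throws on non-strings
}
```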
setLexUserId(userId)
> myRapportScene.modules.ai.setLexUserId(userId);
The AI setLexUserId method is used for setting the Lex AI userId. Throws an error if userId is not a string.
getUserId();
> myRapportScene.modules.ai.getUserId();
The AI getUserId() method is used to get the current Lex AI userId. Returns the current Lex AI userId.
Commands
This module gets and triggers the commands you configured for your project in Accounts.
Methods
await scene.modules.commands.get()
await scene.modules.commands.get();
Resolves into an array containing available command IDs predefined in your project.
await scene.modules.commands.trigger(commandId = string)
await scene.modules.commands.trigger('command_001');
Triggers the command in your project associated with the given command ID.
scene.modules.commands.stopAllSpeech()
scene.modules.commands.stopAllSpeech();
Stops all ongoing and queued TTS requests and commands.
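The get and trigger methods combine naturally: fetch the available command IDs first and only trigger a command that actually exists. This is a sketch using only the commands API above.

```javascript
// Sketch: trigger a command only if its ID is among the project's
// predefined commands, avoiding a failed trigger for unknown IDs.
async function triggerIfAvailable(scene, commandId) {
  const ids = await scene.modules.commands.get();
  if (!ids.includes(commandId)) return false;
  await scene.modules.commands.trigger(commandId);
  return true;
}
```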
VA (Voice Analytics)
This module provides access to voice analytic services that are connected with your project.
SoapBox Labs
Methods
scene.modules.va.trigger(command = object)
Start voice recording.
scene.modules.va.trigger({
method: 'start-mic-capture',
params: { type: 'soapbox-verification', category: ['hello world'] },
});
Stop voice recording and send recorded audio to SoapBox Labs service.
scene.modules.va.trigger({ method: 'stop-mic-capture' });
Events
vaMessage
The VA message event is emitted when the service responds with the voice analytic result. This event’s callback is called with the voice analytic result.
rapport.addEventListener('vaMessage', (e) => {
console.log(e.detail);
});
SA (Sentiment Analysis)
This module provides access to sentiment analysis services that are connected with your project.
Methods
scene.modules.sa.analyseSentiment(params = object)
Requests sentiment analysis on the given text. The result is provided in the SA message event.
scene.modules.sa.analyseSentiment({ text: 'Hello! How are you?' });
Events
saMessage
The SA message event is emitted when the service responds with the sentiment analysis result. This event’s callback is called with the sentiment analysis result.
rapport.addEventListener('saMessage', (e) => {
console.log(e.detail.result);
});
User interface
Configure behavior of user interface elements.
Error modal
By default, an error modal is shown when an error occurs during runtime or during a session request.
Properties
enabled: Controls the visibility of the error modal. (Boolean, default true)
scene.modalError.enabled = false;
Iframe-based integration:
The iframe-based integration approach is good for quick prototyping and basic integration. However, we suggest using the web component-based integration for faster initial loading, a smaller download size, and more overall flexibility.
Browser permissions:
By default, you need to set the allow="microphone" attribute on the iframe in order to grant microphone access permission to the iframe. The allow="microphone" attribute can be omitted if the micRequired RWV param is set to false. In this case, the iframe content will automatically show a button that the user needs to press in order to start the session. User interaction on the iframe is necessary to create an audio context for the iframe if microphone permission is not provided.
API:
You can pass parameters to the Rapport scene via the iframe src attribute value as URL query parameters. The base URL is https://accounts.rapport.cloud/avatar-iframe/. Make sure you pass the query parameter values in URL-encoded format.
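Building the src with URLSearchParams is one way to guarantee every query value is URL-encoded. The sketch below uses placeholder credential values.

```javascript
// Sketch: build the iframe src with URLSearchParams so every query value
// is URL-encoded automatically (credential values are placeholders).
function buildIframeSrc(params) {
  const base = 'https://accounts.rapport.cloud/avatar-iframe/';
  return `${base}?${new URLSearchParams(params)}`;
}

const src = buildIframeSrc({
  projectId: 'yourProjectId',
  projectToken: 'yourProjectToken',
  aiUserId: 'yourAiUserId',
  lobbyZoneId: 'yourLobbyZoneId',
  buttonLabel: 'Start demo',
});
```

The resulting string can then be set as the iframe's src attribute.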
projectId:
Your project's id. (String, required)
projectToken:
Your project's token. (String, required)
aiUserId:
Your AI user's ID associated with the given project. (String, required)
lobbyZoneId:
ID of the lobby zone you added your AI user to. (String, required)
micRequired:
When set to false microphone permission request is skipped. RWV won’t use the microphone in this mode and won’t start the session until there is a user interaction on the iframe. (Boolean, default true)
openingText:
A string sent to the AI via the ai module sendText method after the session is connected. (String, default null)
orbitalControls:
Allow camera rotation around the avatar. (Boolean, default true)
ocZoom:
Allow zoom in and out on the avatar with the mouse scroll wheel. (Boolean, default true)
ocAngle:
Maximum angle of rotation around the avatar. (Number, default 45)
backgroundColor:
Background color of the scene. (String, default 'transparent')
initialMood:
Sets the initial mood for the avatar via the ac module setMood method after the session is connected. (String, default null)
buttonLabel:
Sets the label of the button shown when 'micRequired' is false. (String, default 'Start conversation')
autoConnect:
Setting this to false won't start the session. Useful for experimenting with styling during integration. (Boolean, default true)
showLogo:
Controls the visibility of the Rapport logo and link rendered on the scene. Free-tier projects always have the logo set to visible. (Boolean, default false)
Styling:
It’s possible to remove the default iframe border and scrollbar with the following attributes.
<iframe
scrolling="no"
style="border: 0;"
></iframe>
Example utilizing an <iframe>:
<iframe
src="https://accounts.rapport.cloud/avatar-iframe/?projectId=yourProjectId&projectToken=yourProjectToken&aiUserId=yourAiUserId&lobbyZoneId=yourLobbyZoneId&buttonLabel=Start%20demo"
title="Rapport"
allow="microphone"
scrolling="no"
style="border: 0; width: 600px; height: 600px; max-width: 100%; max-height: 100%;"
></iframe>