Initialize the SDK
Create a single `Squire` instance after the user logs in or when your EHR client opens. Pass your access token and the language in which reports should be generated. See supported language codes for accepted values.
Initialize the SDK once per session. Reinstantiating it on every page load is unnecessary and may cause missed events.
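A minimal sketch of session-scoped initialisation. The `Squire` class name and option fields here are assumptions (check the SDK reference for the exact constructor signature); the in-file stub stands in for the real SDK so the memoisation pattern is self-contained:

```typescript
interface SquireOptions {
  accessToken: string;
  outputLanguage: string; // e.g. 'en' — see supported language codes
}

// Stand-in for the real SDK class so this sketch runs on its own.
class Squire {
  constructor(public readonly options: SquireOptions) {}
}

let instance: Squire | null = null;

/** Returns the existing instance, creating it only on the first call. */
function getSquire(options: SquireOptions): Squire {
  if (instance === null) {
    instance = new Squire(options);
  }
  return instance;
}
```

Routing every page through `getSquire` keeps one instance alive for the whole session, which avoids the missed-events problem described above.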
Get microphones
Before starting a consultation, you can retrieve the list of available microphones to let the user choose their preferred device. Calling this method also triggers the browser’s microphone permission prompt if the user hasn’t already granted access.

| Property | Type | Description |
|---|---|---|
| `deviceId` | string | Unique identifier for the microphone device |
| `label` | string | Human-readable display name |
| `isDefault` | boolean | `true` if this is the system default microphone |
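The table above describes the device objects the SDK returns. As a sketch, you might preselect the default device in a picker like this (the `Microphone` interface mirrors the table; the helper is a hypothetical convenience, not part of the SDK):

```typescript
// Device shape as described in the table above.
interface Microphone {
  deviceId: string;
  label: string;
  isDefault: boolean;
}

/**
 * Pick an initial selection for a device dropdown:
 * the system default if present, otherwise the first device.
 */
function preferredMicrophone(mics: Microphone[]): Microphone | undefined {
  return mics.find((m) => m.isDefault) ?? mics[0];
}
```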
Register event listeners
Register a `summary-ready` callback before starting a consultation so you don’t miss the report when it arrives. The SDK emits this event once the consultation has finished processing.
For the full structure of the `summary` object, see the output documentation.
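A sketch of the register-before-start pattern. The `on('summary-ready', …)` method name and the emitter class below are assumptions standing in for the real SDK surface:

```typescript
type SummaryHandler = (summary: unknown) => void;

// Minimal stand-in for the SDK's event surface; the real `on`
// method name and summary type may differ.
class SquireEvents {
  private handlers: SummaryHandler[] = [];
  on(event: 'summary-ready', handler: SummaryHandler): void {
    this.handlers.push(handler);
  }
  emit(event: 'summary-ready', summary: unknown): void {
    this.handlers.forEach((h) => h(summary));
  }
}

const sdk = new SquireEvents();
const received: unknown[] = [];

// Register BEFORE starting the consultation so the report isn't missed.
sdk.on('summary-ready', (summary) => {
  received.push(summary);
});
```

Because the handler is attached before any consultation starts, a report that finishes processing immediately is still delivered.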
Start a consultation
Call `startRecording` to begin capturing audio and start a new consultation session. The method returns a promise that resolves with a `Consultation` object once the session is established.
By default, `inputLanguage` falls back to `outputLanguage`, and `audioInput` uses the system default microphone. The `templateId` option controls the structure of the generated report. Supported values are `specialist_general_api` and `soap_new_complaint_api`.
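The language fallback described above can be made explicit before calling `startRecording`. The `StartOptions` shape is an assumption based on the option names in this section, and `resolveStartOptions` is an illustrative helper, not an SDK export:

```typescript
interface StartOptions {
  outputLanguage: string;
  inputLanguage?: string; // defaults to outputLanguage
  audioInput?: string;    // deviceId; omit for the system default microphone
  templateId?: 'specialist_general_api' | 'soap_new_complaint_api';
}

/** Apply the documented fallback before starting a consultation. */
function resolveStartOptions(opts: StartOptions): StartOptions {
  return {
    ...opts,
    // Documented fallback: inputLanguage defaults to outputLanguage.
    inputLanguage: opts.inputLanguage ?? opts.outputLanguage,
  };
}
```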
Stop a consultation
Call `stopRecording` to end the session. The SDK finalises the audio stream, sends any remaining data to the server, and emits `summary-ready` when the report is ready.
Pause and resume a consultation
To temporarily halt recording without ending the session, use `pauseRecording` and `resumeRecording`. This is useful when the patient steps out of the room or the provider needs a break.
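One way to reason about the call sequence is as a small state machine. This sketch is illustrative only; which transitions the SDK actually permits should be confirmed against the SDK reference (stopping from a paused session is assumed to be allowed here):

```typescript
type RecordingState = 'idle' | 'recording' | 'paused' | 'stopped';

// Valid calls from each state, per the lifecycle described above.
const transitions: Record<string, RecordingState> = {
  'idle:startRecording': 'recording',
  'recording:pauseRecording': 'paused',
  'paused:resumeRecording': 'recording',
  'recording:stopRecording': 'stopped',
  'paused:stopRecording': 'stopped', // assumption: stop while paused is allowed
};

/** Returns the next state, or throws on an invalid call for the current state. */
function next(state: RecordingState, call: string): RecordingState {
  const target = transitions[`${state}:${call}`];
  if (target === undefined) {
    throw new Error(`Cannot call ${call} while ${state}`);
  }
  return target;
}
```

Guarding UI buttons with a check like this prevents, for example, pausing a session that was never started.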
Retrieve a consultation summary
You can fetch the consultation summary on demand at any time after the session completes, rather than waiting for the `summary-ready` event.
- From the `Consultation` object
- From a consultation ID
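Both retrieval paths above can be sketched as follows. The `getSummary` method, the `Summary` shape, and the ID-based lookup function are assumed names; consult the SDK reference for the real retrieval API:

```typescript
// Hypothetical shapes — the actual method names may differ.
interface Summary {
  report_status: string;
}

interface Consultation {
  id: string;
  getSummary(): Promise<Summary>;
}

// Option 1: from the Consultation object returned by startRecording.
async function summaryFromConsultation(c: Consultation): Promise<Summary> {
  return c.getSummary();
}

// Option 2: from a stored consultation ID, via a lookup function
// standing in for the SDK's ID-based retrieval method.
async function summaryFromId(
  id: string,
  fetchSummary: (id: string) => Promise<Summary>,
): Promise<Summary> {
  return fetchSummary(id);
}
```

The ID-based path is useful when the original `Consultation` object is no longer in memory, e.g. after a page reload.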
Use the `Consultation` object returned by `startRecording`.

Dictation mode
The SDK supports real-time dictation, delivering both finalised and in-progress transcription results as the provider speaks. Use the `dictation_api` template to enable this mode.
Start a dictation session
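A hedged sketch of the options you might pass when starting a dictation session. Only the `dictation_api` template ID comes from this page; the surrounding option shape is an assumption:

```typescript
// Assumed option shape for dictation; verify against the SDK reference.
interface DictationOptions {
  templateId: 'dictation_api';
  outputLanguage: string;
  audioInput?: string; // deviceId; omit for the system default microphone
}

/** Build the options object passed when starting a dictation session. */
function dictationOptions(outputLanguage: string, audioInput?: string): DictationOptions {
  return { templateId: 'dictation_api', outputLanguage, audioInput };
}
```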
Handle live transcription updates
Listen for the `summary-ready` event, which fires multiple times during the session with intermediate results. The transcription is split into two fields:
- `fixed` — text that has been finalised and will not change
- `ongoing` — text still being processed; may change with the next update
When the session ends, a final `summary-ready` event fires with `report_status` set to `'final'` and the complete transcript in `fixed`. For the full summary structure, see the output documentation.
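A small helper can combine the two fields for display. The `LiveTranscript` shape mirrors the fields above; joining with a single space is an assumption about how the SDK delimits text:

```typescript
interface LiveTranscript {
  fixed: string;   // finalised text, will not change
  ongoing: string; // provisional text, may change with the next update
}

/** Render the live view shown to the provider during dictation. */
function renderTranscript(t: LiveTranscript): string {
  // A UI might instead style `ongoing` differently, e.g. greyed out.
  return [t.fixed, t.ongoing].filter((s) => s.length > 0).join(' ');
}
```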
Error handling
For built-in reconnection logic, explicit `try/catch` error handling, SDK state events, and log level configuration, see the error handling page.
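As a minimal sketch of the `try/catch` pattern, wrapping a call that may fail (the `stopRecording` signature here is assumed; the error handling page covers reconnection and state events in depth):

```typescript
/**
 * Stop the consultation, returning whether the call succeeded
 * instead of letting a network failure escape to the caller.
 */
async function safeStop(stopRecording: () => Promise<void>): Promise<boolean> {
  try {
    await stopRecording();
    return true;
  } catch (err) {
    // Surface the failure to the user or your logging pipeline.
    console.error('Failed to stop consultation:', err);
    return false;
  }
}
```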