Once you have installed the SDK and obtained an access token from your backend, follow the steps below to record consultations and receive structured medical reports. All SDK interactions happen on the client side — never expose your token generation logic to the browser.

Initialize the SDK

Create a single Squire instance after the user logs in or when your EHR client opens. Pass your access token and the language in which reports should be generated. See supported language codes for accepted values.
import Squire from '@squirehealth/sdk';

// Your logic to fetch an access token from your backend goes here ...

const squire = new Squire({
  token: 'REPLACE_WITH_ACCESS_TOKEN',
  outputLanguage: 'nl',
});
Initialize the SDK once per session. Reinstantiating it on every page load is unnecessary and may cause missed events.
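One way to enforce once-per-session initialisation is to memoise the constructor call. The sketch below uses a hypothetical `createSquire` factory standing in for `new Squire({ token, outputLanguage })`, so the pattern can be shown without the SDK itself:

```javascript
// Minimal sketch of once-per-session initialisation. `getSquire` and
// `createSquire` are our names, not part of the SDK.
let squireInstance = null;

function getSquire(createSquire) {
  // Reuse the existing instance instead of reinstantiating on every call.
  if (!squireInstance) {
    squireInstance = createSquire();
  }
  return squireInstance;
}
```

In an EHR client you would call `getSquire(() => new Squire({ token, outputLanguage: 'nl' }))` from whichever views need the SDK, and every caller receives the same instance.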

Get microphones

Before starting a consultation, you can retrieve the list of available microphones to let the user choose their preferred device. Calling this method also triggers the browser’s microphone permission prompt if the user hasn’t already granted access.
const audioInputs = await squire.getAudioInputs();
Each item in the returned array has the following properties:
Property  | Type    | Description
deviceId  | string  | Unique identifier for the microphone device
label     | string  | Human-readable display name
isDefault | boolean | true if this is the system default microphone
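If you want to preselect a device in your picker, a small helper can prefer a previously saved deviceId and fall back to the system default. The function name and the localStorage convention below are ours, not part of the SDK:

```javascript
// Hypothetical helper: choose which microphone to preselect in a device
// menu. `inputs` is shaped like the array returned by getAudioInputs();
// `savedId` is a deviceId you stored earlier (e.g. in localStorage), or null.
function pickMicrophone(inputs, savedId) {
  if (savedId) {
    const saved = inputs.find((d) => d.deviceId === savedId);
    if (saved) return saved;
  }
  // Fall back to the system default, then to the first device, if any.
  return inputs.find((d) => d.isDefault) || inputs[0] || null;
}
```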

Register event listeners

Register a summary-ready callback before starting a consultation so you don’t miss the report when it arrives. The SDK emits this event once the consultation has finished processing.
squire.on('summary-ready', (summary) => {
  // `summary` contains the structured report data generated by Squire.
  // Your logic to store consultation report data in the EHR goes here.
});
For the full structure of the summary object, see the output documentation.

Start a consultation

Call startRecording to begin capturing audio and start a new consultation session. The method returns a promise that resolves with a Consultation object once the session is established.
const consultation = await squire.startRecording();
You can customise the recording by passing options:
const consultation = await squire.startRecording({
  inputLanguage: 'fr',          // Language spoken during the consultation
  audioInput: audioInputs[0],   // Microphone selected by the user
  templateId: 'specialist_general_api', // Output template
});
By default, inputLanguage falls back to outputLanguage and audioInput uses the system default microphone.
The templateId option controls the structure of the generated report. Supported report templates are specialist_general_api and soap_new_complaint_api; the separate dictation_api template, described under Dictation mode below, produces a live transcript instead of a structured report.
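As an illustration, you could guard template selection in your own code with a small helper. The constant and function names below are ours, not part of the SDK:

```javascript
// Illustrative guard for the two report template IDs mentioned above.
const REPORT_TEMPLATES = ['specialist_general_api', 'soap_new_complaint_api'];

function resolveTemplateId(requested, fallback = 'specialist_general_api') {
  // Fall back to a known template if the requested ID is not recognised.
  return REPORT_TEMPLATES.includes(requested) ? requested : fallback;
}
```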

Stop a consultation

Call stopRecording to end the session. The SDK finalises the audio stream, sends any remaining data to the server, and emits summary-ready when the report is ready.
squire.stopRecording();

Pause and resume a consultation

To temporarily halt recording without ending the session, use pauseRecording and resumeRecording. This is useful when the patient steps out of the room or the provider needs a break.
// Pause
squire.pauseRecording();

// Resume
squire.resumeRecording();
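In a UI, pause and resume are typically wired to a single toggle button. A minimal sketch, assuming the `squire` instance from the earlier steps (the local `paused` flag and helper name are ours):

```javascript
// Sketch of a single pause/resume toggle for a UI button.
function makePauseToggle(squire) {
  let paused = false;
  return () => {
    if (paused) {
      squire.resumeRecording();
    } else {
      squire.pauseRecording();
    }
    paused = !paused;
    return paused; // true while recording is paused
  };
}
```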

Retrieve a consultation summary

You can fetch the consultation summary on demand at any time after the session completes, rather than waiting for the summary-ready event.
Use the Consultation object returned by startRecording:
const summary = await consultation.getSummary();
For the complete summary structure, see the output documentation. For advanced SOAP operations like splitting and merging, see Multi-SOAP.
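The two approaches can be combined: stop the session, then fetch the report on demand. A sketch assuming the `squire` and `consultation` objects from the earlier steps; passing them in as parameters keeps the helper easy to test:

```javascript
// Sketch: end the session and fetch the report on demand instead of
// waiting for the 'summary-ready' event. The helper name is ours.
async function finishConsultation(squire, consultation) {
  squire.stopRecording();
  // getSummary resolves once the consultation has finished processing.
  return consultation.getSummary();
}
```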

Dictation mode

The SDK supports real-time dictation, delivering both finalised and in-progress transcription results as the provider speaks. Use the dictation_api template to enable this mode.

Start a dictation session

const consultation = await squire.startRecording({
  inputLanguage: 'en',
  templateId: 'dictation_api',
});

Handle live transcription updates

Listen for the summary-ready event, which fires multiple times during the session with intermediate results. The transcription is split into two fields:
  • fixed — text that has been finalised and will not change
  • ongoing — text still being processed; may change with the next update
squire.on('summary-ready', (summary) => {
  const transcription = summary.data?.find(d => d.template === 'dictation_api');
  if (!transcription) return;

  const transcriptionSection = transcription.sections?.find(
    s => s.section_id === 'transcription'
  );
  if (!transcriptionSection) return;

  const fixedText = transcriptionSection.fields.fixed?.text || '';
  const ongoingText = transcriptionSection.fields.ongoing?.text || '';

  // Update your UI with the fixed and ongoing text.
  document.getElementById('fixed-text-output').textContent = fixedText;
  document.getElementById('ongoing-text-output').textContent = ongoingText;
});
When the dictation stops, a final summary-ready event fires with report_status set to 'final' and the complete transcript in fixed. For the full summary structure, see the output documentation.
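A helper mirroring the listener above can pull the transcript out of a summary payload and report whether it is final. The function name is ours, and it assumes report_status sits at the top level of the summary object:

```javascript
// Hypothetical helper: extract the dictation transcript from a summary
// payload shaped like the example above.
function extractDictation(summary) {
  const report = summary.data?.find((d) => d.template === 'dictation_api');
  const section = report?.sections?.find(
    (s) => s.section_id === 'transcription'
  );
  return {
    done: summary.report_status === 'final', // assumed top-level field
    text: section?.fields.fixed?.text || '',
  };
}
```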

Error handling

For built-in reconnection logic, explicit try/catch error handling, SDK state events, and log level configuration, see the error handling page.