Improving Speaker Diarization in Live Transcription with Deepgram

In this guide, we explore how to improve the accuracy of speaker diarization when using Deepgram's real-time transcription, particularly for conference calls such as those conducted over Google Meet.

Understanding Speaker Diarization

Speaker diarization is the process of partitioning an audio stream into homogeneous segments according to the identity of the speaker. This feature allows users to differentiate between speakers during a live transcription session. However, the model can occasionally misattribute speech, detecting speakers who are not actually present in the call, which degrades the quality of the results.
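
To make this concrete, below is a minimal sketch of how diarized output can be grouped into per-speaker segments. It assumes the usual shape of a live transcript message when diarize is enabled, where each word object carries a speaker index (and a punctuated_word when punctuation or smart formatting is on); the helper name is our own.

    # Sketch: group the words of one transcript message into speaker turns.
    # Assumes the JSON payload has already been parsed into a Python dict
    # and that diarize=true was set, so each word carries a "speaker" index.
    def group_words_by_speaker(transcript_message: dict) -> list[tuple[int, str]]:
        words = (
            transcript_message
            .get("channel", {})
            .get("alternatives", [{}])[0]
            .get("words", [])
        )
        segments: list[tuple[int, str]] = []
        for word in words:
            speaker = word.get("speaker", 0)
            text = word.get("punctuated_word", word.get("word", ""))
            if segments and segments[-1][0] == speaker:
                # Same speaker as the previous word: extend the current turn.
                segments[-1] = (speaker, segments[-1][1] + " " + text)
            else:
                # Speaker changed: start a new turn.
                segments.append((speaker, text))
        return segments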

Configuration for Optimal Diarization

To improve speaker diarization, it is crucial to use the correct parameters in your setup. Below are some recommended configurations with an explanation of each; a minimal connection sketch follows the list:

  • Model: The nova-2 model, which supports Spanish-language transcription, can improve accuracy in these scenarios; pair it with the appropriate language parameter for the call.
  • Smart Formatting and Punctuation: These options help improve the readability of the transcription output, making it easier to follow.
  • Diarization: Ensure this option is set to true to activate speaker identification.
  • Utterance Ending: Setting utterance_end_ms to 1100 helps the engine determine appropriate points to consider an utterance complete. Modifying this can refine speech segmentation accuracy.
  • Endpointing: This parameter sets how much trailing silence the engine waits for before deciding that the end of an utterance has been reached. Experimenting with a value such as 500ms can balance responsiveness against premature segment breaks.
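
As a concrete reference, here is a minimal sketch of how these options map onto the query string of Deepgram's live-streaming WebSocket endpoint. The language, encoding, and sample_rate values are assumptions and should match your own audio source.

    # Sketch: assembling the parameters discussed above for a live
    # transcription WebSocket connection.
    from urllib.parse import urlencode

    params = {
        "model": "nova-2",
        "language": "es",           # assumption: the call is in Spanish
        "smart_format": "true",
        "punctuate": "true",
        "diarize": "true",
        "interim_results": "true",  # required for UtteranceEnd events
        "utterance_end_ms": "1100",
        "endpointing": "500",
        "encoding": "linear16",     # placeholder: match your audio stream
        "sample_rate": "16000",
    }

    url = f"wss://api.deepgram.com/v1/listen?{urlencode(params)}"
    # Open a WebSocket to `url` with an "Authorization: Token <API key>" header,
    # then stream raw audio frames and read JSON transcript messages back.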

Alternatives: Multichannel Transcription

In certain situations, such as conference calls, multichannel transcription can offer better accuracy. With this approach, each speaker's audio is delivered on its own channel and transcribed independently, which makes speaker boundaries explicit and reduces misattribution (see the sketch below).

For more information on this strategy, refer to Deepgram's Multichannel Documentation.
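
Below is a sketch of the same connection switched to multichannel transcription. It assumes each participant's audio is captured on a separate channel (for example, a two-channel mix from your call-recording pipeline) and that each streaming transcript message carries a channel_index identifying which channel the words belong to; the helper name is our own.

    # Sketch: enabling multichannel transcription instead of diarization.
    from urllib.parse import urlencode

    params = {
        "model": "nova-2",
        "language": "es",
        "smart_format": "true",
        "punctuate": "true",
        "multichannel": "true",  # transcribe each channel independently
        "channels": "2",         # must match the number of channels you send
        "encoding": "linear16",
        "sample_rate": "16000",
    }
    url = f"wss://api.deepgram.com/v1/listen?{urlencode(params)}"

    # Each transcript message identifies its source channel, and therefore
    # the participant, via channel_index.
    def channel_of(transcript_message: dict) -> int:
        return transcript_message.get("channel_index", [0])[0]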

Conclusion

To achieve more accurate speaker diarization, experiment with the settings above and consider the multichannel transcription feature where separate channels are available. Adjusting the utterance_end_ms and endpointing parameters can significantly affect speech segmentation.

If issues persist or the system behavior seems inconsistent, reach out to your Deepgram support representative (if you have one) or visit our community for assistance: Deepgram Community.
