ARCHETYPE Speech intelligibility test (openEHR-EHR-OBSERVATION.speech_intelligibility_test.v0)

Archetype ID: openEHR-EHR-OBSERVATION.speech_intelligibility_test.v0
Concept: Speech intelligibility test
Description: Record of results from an audiological speech intelligibility test conducted for the purpose of assessing speech recognition, speech discrimination, or speech intelligibility.
Use: Use to record the results of audiological speech tests carried out to assess the ability of a subject to understand speech in quiet or in noise, i.e. speech recognition, speech discrimination and speech intelligibility. Results can either be speech intelligibility scores for given stimulus levels (fixed SNR), or speech recognition thresholds obtained from adaptive procedures.
Misuse: Not to be used for audiological speech testing that is used for phonemic confusion analysis. Not to be used to assess speech production. Not to be used to record audiological speech tests where the presentation level is not known, for example: unmonitored live voice.
Purpose: To record results from an audiological speech test conducted for the purpose of assessing speech recognition, speech discrimination, or speech intelligibility.
References: Derived from: Audiology Speech Test Result, Draft archetype [Internet]. Australian Digital Health Agency (NEHTA), ADHA Clinical Knowledge Manager. Authored: 2013 Jan 13. Available at: http://dcm.nehta.org.au/ckm#showArchetype_1013.1.1174_3 (discontinued).

Taylor B. Predicting Real World Hearing Aid Benefit with Speech Audiometry: An Evidence-Based Review; 2007 May 07 [cited 2013 Feb 08]. Available from: http://www.audiologyonline.com/articles/predicting-real-world-hearing-aid-946.

Madell JR, Flexer C. Pediatric Audiology: Diagnosis, Technology, and Management. Thieme Medical Publishers; 2008. Chapter 10, Evaluation of Speech Perception in Infants and Children; p. 89-105.

Gordon-Salant S. Age-related differences in speech recognition performance as a function of test format and paradigm. Ear Hear. 1987 Oct;8(5):277-82. PubMed PMID: 3678641.

Nilsson M, Soli SD, Sullivan JA. Development of the Hearing In Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am. 1994;95(2):1085-1099. DOI: 10.1121/1.408469.
Copyright: © openEHR Foundation, HiGHmed, Hearing4all
Authors
Author name: Mareike Buhl
Organisation: Hearing Institute, Institut Pasteur, Paris, France; University of Oldenburg and Cluster of Excellence Hearing4all, Germany
Email: mareike.buhl@uni-oldenburg.de
Date originally authored: 2025-12-01
Other Details Language
Author name: Mareike Buhl
Organisation: Hearing Institute, Institut Pasteur, Paris, France; University of Oldenburg and Cluster of Excellence Hearing4all, Germany
Email: mareike.buhl@uni-oldenburg.de
Date originally authored: 2025-12-01
Other Details (Language Independent)
  • Licence: This work is licensed under the Creative Commons Attribution-ShareAlike 3.0 License. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0/.
  • Custodian Organisation: HiGHmed
  • Current Contact: Heather Leslie, Atomica Informatics, Australia
  • Original Namespace: org.highmed
  • Original Publisher: HiGHmed
  • Custodian Namespace: org.highmed
  • MD5-CAM-1.0.1: AEE12A6346A695DEB68AE59E16551EB3
  • Build Uid: f63cb2e9-748b-48c2-9563-c1d300b921ad
  • Revision: 0.0.1-alpha
Keywords: speech test, speech recognition, speech discrimination, hearing, audiology
Lifecycle: in_development
UID: 0bff2e30-cf81-43c1-9160-24743105b0cc
Language used: en
Citeable Identifier: 1246.145.3018
Revision Number: 0.0.1-alpha
protocol
Test name: The name of the conducted speech test.
Use published name (including a reference) if possible.
Test language: The language of the speech stimulus.
Add language terminology at template level to include all coded options.
Voice type: The voice type (sex) of the speech stimulus.
  • Female [Stimuli presented by a female talker.]
  • Male [Stimuli presented by a male talker.]
  • Synthetic female [Stimuli presented by a synthetically generated female voice.]
  • Synthetic male [Stimuli presented by a synthetically generated male voice.]
  • Child [Stimuli presented by a child.]
  • Synthetic child [Stimuli presented by a synthetically generated child's voice.]
Announcement: The type of announcement before presenting the speech stimulus.
  • Sentence [An announcement sentence is presented prior to each stimulus.]
  • Non-speech auditory cue [A non-speech auditory announcement is presented prior to each stimulus (e.g., beep).]
  • Visual cue [A visual cue is presented prior to each stimulus (e.g., flashing light).]
  • Without [No announcement prior to each stimulus.]
Test in background noise: To describe whether the speech intelligibility test is conducted in noise or in quiet.
  • Quiet [The speech test is conducted without background noise.]
  • Noise [The speech test is conducted in background noise.]
Fixed stimulus: To indicate for which stimulus (speech or noise) the level is kept constant during the measurement.
Only applicable for speech in noise tests measured adaptively.
  • Speech [The speech level is kept constant during the test.]
  • Noise [The noise level is kept constant during the test.]
Level control: The description of the adaptive level control procedure.
Applicable for adaptive measurements.
Adaptive procedure name: The name of the adaptive level control procedure.
Choice of:
  •  Coded Text
    • One-up one-down [Level control according to one-up one-down procedure (e.g., Levitt, 1971).]
    • Brand & Kollmeier (2002) [Level control according to Brand & Kollmeier (2002).]
  •  Text
Step size: The step size of level changes.
Property: Loudness
Units: dB
Limit decimal places: 0
Adaptive procedure details: Additional details about the adaptive level control procedure.
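The simplest adaptive procedure named above, one-up one-down, can be sketched as follows. This is a minimal illustration only (hypothetical function name, fixed step size); it is not a specification of the Brand & Kollmeier (2002) procedure, which adapts its step sizes.

```python
def one_up_one_down(responses, start_level=0.0, step=2.0):
    """Sketch of a one-up one-down adaptive track (Levitt, 1971).

    `responses` is a sequence of booleans (True = item repeated
    correctly). The presentation level (or SNR) is lowered after a
    correct response and raised after an incorrect one, so the track
    converges on the 50% point of the psychometric function.
    Returns the level presented for each trial.
    """
    level = start_level
    track = []
    for correct in responses:
        track.append(level)
        level += -step if correct else step
    return track

# Alternating correct/incorrect responses make the track oscillate
# around its convergence point.
track = one_up_one_down([True, False, True, False], start_level=0.0, step=2.0)
# → [0.0, -2.0, 0.0, -2.0]
```

In practice the SRT estimate is then taken from the track, for example as the mean level over the final trials or reversals.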
Speech stimulus: Properties of the speech stimulus.
Type of speech stimulus: The type of speech stimulus used.
Choice of:
  •  Coded Text
    • Nonsense syllable [A consonant-vowel (CV) or CCV or VC or VCC item that is not a real word but is phonotactically correct.]
    • Nonsense CVC [Nonsense word comprising a consonant, then a vowel, then a final consonant, for example, "wub" or "yat".]
    • Nonsense word [A speech stimulus that is not a real word but is phonotactically correct.]
    • Monosyllabic word [A word comprised of a single syllable. For example, 'green'.]
    • Spondee word [A word comprised of 2 syllables with equal stress on each syllable. For example, 'sunshine'.]
    • Trochee word [A word that is comprised of two syllables with stress on the first syllable. For example 'bucket'.]
    • Matrix sentence [Sentence composed of a fixed syntactical structure without meaningful context.]
    • Meaningful sentence [A grammatical unit of one or more words that expresses an independent statement, question, request, command, exclamation, everyday-sentences etc.]
    • Number [A number.]
    • Digit triplet [A sequence of three digits.]
    • Phoneme [A phoneme.]
  •  Text
Azimuth: Azimuth angle of 'Speech stimulus'.
Only applicable if 'Directionality characteristics' is 'Virtual acoustics'.
Property: Angle, plane
Units: 0.0..360.0 °
Elevation: Elevation angle of 'Speech stimulus'.
Only applicable if 'Directionality characteristics' is 'Virtual acoustics'.
Property: Angle, plane
Units: 0.0..360.0 °
Assumed value: 0.0°
Limit decimal places: -1
Noise stimulus: Properties of the noise stimulus.
Can be repeated for several directional noise stimuli, for example: competing talkers from different azimuth directions.
Type of noise stimulus: The type of noise stimulus used during speech in noise testing. Further details can be given in 'Noise stimulus details'.
Not applicable for speech in quiet tests.
Choice of:
  •  Coded Text
    • White noise [Noise that has the same power at all frequencies (i.e., a flat power spectrum).]
    • Speech spectrum noise [Noise spectrum that approximates the average long term speech spectrum.]
    • Multitalker babble [A recording of the voices of many people who are talking simultaneously, resulting in an unintelligible babble.]
    • Alternate speaker [The masker is a single person speaking and this speaker is different to the speaker used for the test stimulus.]
  •  Text
Azimuth: Azimuth angle of 'Noise stimulus'.
Only applicable if 'Directionality characteristics' is 'Virtual acoustics'.
Property: Angle, plane
Units: 0.0..360.0 °
Elevation: Elevation angle of 'Noise stimulus'.
Only applicable if 'Directionality characteristics' is 'Virtual acoustics'.
Property: Angle, plane
Units: 0.0..360.0 °
Assumed value: 0.0°
Limit decimal places: -1
Noise stimulus details: Details on signals used as noise in 'Type of noise stimulus', e.g., name of established noise, number of talkers included in a babble noise, or other specific characteristics.
Not applicable for speech in quiet tests.
Directionality characteristics: The directionality characteristics of the presented stimuli. Only applicable if 'Tested side' is 'Binaural'.
Diotic and dichotic: no virtual acoustics.
  • Diotic [The same sound is presented to both ears.]
  • Dichotic [Different sounds are presented to both ears.]
  • Virtual acoustics [The sound is rendered using head-related transfer functions, enabling directionality of sound sources.]
Masking: The description of the signal used for masking (at the contralateral ear) if applicable, for example: in case of asymmetric hearing loss and monaural (aided) measurements.
Type of masking stimulus: The type of masking stimulus used during speech in noise testing.
Choice of:
  •  Coded Text
    • White noise [Noise that has the same power at all frequencies (i.e., a flat power spectrum).]
  •  Text
Presentation method: The method used to present the test stimuli.
Choice of:
  •  Coded Text
    • Insert earphone [The stimulus is presented via insert earphones.]
    • Headphone [The stimulus is presented via external headphones - either circumaural or supraaural.]
    • Direct streaming to sound processor [The stimulus is directly streamed to the sound processor of a hearing device.]
  •  Text
Presentation method details: Details of the device used to present the test stimulus as specified in 'Presentation method'.
For example: type of headphone or speaker.
Include:
openEHR-EHR-CLUSTER.device.v1
Test environment: The environment in which the speech test is administered.
Choice of:
  •  Coded Text
    • Sound treated room [Test environment that has been treated acoustically.]
    • Non-sound treated room [Test environment that has not been treated acoustically.]
    • Free-field [Room with free-field characteristics, also called 'anechoic room'.]
  •  Text
Test environment details: Additional details of 'Test environment'.
For example: specific audiometric booth or a free text description of the room's sound treatment.
Include:
openEHR-EHR-CLUSTER.device.v1
Presentation method: The method used to present the test stimuli.
Choice of:
  •  Coded Text
    • Loudspeaker [The stimulus is presented via a loudspeaker.]
    • Insert earphone [The stimulus is presented via insert earphones.]
    • Headphone [The stimulus is presented via external headphones - either circumaural or supraaural.]
    • Direct streaming to sound processor [The stimulus is directly streamed to the sound processor of a hearing device.]
    • Live voice [The stimulus is spoken.]
  •  Text
Presentation method details: Details of the device used to present the test stimulus as specified in 'Presentation method'.
For example: type of headphone or speaker.
Include:
openEHR-EHR-CLUSTER.device.v1
Presented sensory modalities: The sensory modalities in which the speech test stimulus is presented.
For example: a visual stimulus can be used to test lip reading skills.
  • Auditory test [An auditory stimulus is presented to the test subject.]
  • Audiovisual test [A combination of auditory and visual stimuli are presented to the test subject.]
  • Visual test [A visual stimulus is presented to the test subject.]
Response set: The type of the response set.
  • Open set [The response set is unlimited.]
  • Closed set [The response set is limited.]
Response mode: The mode used to enter or record the response.
  • Computer [The response is entered via a software interface.]
  • Vocal [The response is given verbally.]
  • Picture pointing [The response is given by pointing to a picture of the stimulus item.]
  • Written response alternatives [The response is given by pointing to written text corresponding to the stimulus item.]
  • Concrete object pointing [The response is given by pointing to a 3-dimensional object.]
Scoring method: The scoring method used.
  • Word scoring [Counting correct words in responses.]
  • Sentence scoring [Counting correct sentences in responses.]
  • Phoneme scoring [Counting correct phonemes in responses.]
  • Keyword scoring [Counting correct keywords in responses.]
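As an illustration of the first scoring method above, word scoring counts the proportion of presented words repeated correctly. A minimal sketch with a hypothetical matching rule (case-insensitive, order-insensitive); real tests define their own matching rules:

```python
def word_score(presented, response):
    """Percentage of presented words found in the response (word scoring).

    Case-insensitive, order-insensitive matching is an assumption
    made for this sketch.
    """
    targets = [w.lower() for w in presented]
    given = {w.lower() for w in response}
    correct = sum(1 for w in targets if w in given)
    return 100.0 * correct / len(targets)

# 3 of 5 words repeated correctly → 60% word score.
score = word_score(["Peter", "buys", "three", "green", "cups"],
                   ["peter", "sees", "three", "red", "cups"])
```

Phoneme, keyword, and sentence scoring follow the same pattern with a different unit of counting.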
state
Hearing device during test: Information about hearing device use during the speech test.
Hearing device: Details of the hearing device used.
For example: hearing aid or cochlear implant as type of device.
Include:
openEHR-EHR-CLUSTER.device.v1
Side of hearing device: Identification of the side where the hearing device is worn during the test.
  • Left [The hearing device is worn at the left side.]
  • Right [The hearing device is worn at the right side.]
Comment: Additional information about the hearing device that is not captured in 'Hearing device' or 'Side of hearing device'.
For example: information about hearing device settings.
Confounding factors: Additional issues or factors that may impact the speech test, not captured in other fields.
For example: medication or noise exposure prior to test.
Language proficiency of the listener: Language proficiency in broad categories.
Choice of:
  •  Coded Text
    • Native [Listener is native speaker in the presented test language.]
    • Fluent [Listener is fluent in the presented test language.]
    • Basic [Listener has only basic proficiency in the presented test language.]
    • None [Listener has no proficiency in the presented test language.]
  •  Text
data
Tested side: Identification of the tested ear(s).
  • Right ear [The test stimuli were presented to the right ear.]
  • Left ear [The test stimuli were presented to the left ear.]
  • Binaural [The test stimuli were presented to both ears simultaneously.]
Measurement of test list: Results obtained with one test list.
Applicable to both adaptive and fixed level/SNR measurements.
Measurement identifier: An identifier used to group several 'Measurement of test list' entries from which a derived SRT is estimated.
Test lists used together can be grouped using 'Related measurements' in 'Speech recognition threshold derived'.
Specification of test list: The name or number of the test list used.
If applicable for the specific test, for example: Matrix sentence test or German Freiburg monosyllabic speech test.
Speech level at start: The start level of the presented speech stimulus at the beginning of the test list.
Applicable if the presented level at the beginning of a test list is not captured in the levels stored per test list item.
Property: Loudness
Units: -50.0..200.0 dB[SPL]{SPL}
Limit decimal places: 2
SNR at start: The signal-to-noise ratio at the beginning of the test list.
Applicable if the presented level at the beginning of a test list is not captured in the levels stored per test list item. Only applicable for measurements in noise.
Property: Loudness
Units: -50.0..200.0 dB SNR
Limit decimal places: 2
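The SNR fields in this archetype follow the usual convention: the speech level minus the noise level, in dB. A trivial sketch with hypothetical values:

```python
def snr_db(speech_level_db, noise_level_db):
    """Signal-to-noise ratio in dB: speech level minus noise level."""
    return speech_level_db - noise_level_db

# Hypothetical example: speech at 65 dB SPL in 60 dB SPL noise.
snr = snr_db(65.0, 60.0)  # → 5.0 dB SNR
```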
Speech intelligibility of test list: The speech intelligibility of the test list.
Only applicable for fixed level/SNR measurements (speech and noise level are constant during the complete test list).
  • Percent
Numerator: 0.0..100.0
Measurement of test list item: To record speech intelligibility results per test list item (for example: word, sentence).
Test list item identifier: The name, description, or testing order of the test list item.
Speech level: The level of the presented speech stimulus (test list item).
Property: Loudness
Units: -50.0..200.0 dB[SPL]{SPL}
Limit decimal places: 2
SNR: The signal-to-noise ratio of the test list item.
Property: Loudness
Units: -50.0..200.0 dB SNR
Limit decimal places: 2
Speech intelligibility of test list item: The speech intelligibility of the test list item (for example: word, sentence).
  • Percent
Numerator: 0.0..100.0
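For a fixed level/SNR list, the list-level intelligibility is commonly the average of the per-item scores. A minimal sketch under that assumption (some tests instead weight items, e.g. by word count):

```python
def list_intelligibility(item_scores):
    """Mean of per-item intelligibility percentages (0..100).

    Only meaningful for fixed level/SNR lists, where every item was
    presented under the same condition; unweighted averaging is an
    assumption of this sketch.
    """
    return sum(item_scores) / len(item_scores)

overall = list_intelligibility([100.0, 80.0, 60.0, 100.0])  # → 85.0
```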
Speech recognition threshold adaptive: The estimated speech recognition threshold (SRT) resulting from an adaptive measurement.
Not applicable for fixed level measurements.
Target speech intelligibility: The target percentage of the speech intelligibility measurement.
For example: 50%.
  • Percent
Numerator: 0.0..100.0
Assumed value: 50.0

Speech level at target: The level of the speech stimulus required to achieve the target speech intelligibility.
Property: Loudness
Units: -50.0..200.0 dB[SPL]{SPL}
Limit decimal places: 2
SNR at target: The signal-to-noise ratio required to achieve the target speech intelligibility.
Only applicable for measurements in noise.
Property: Loudness
Units: -50.0..200.0 dB SNR
Limit decimal places: 2
Slope at target: The slope of the psychometric function at the target intelligibility.
Property: null
Units: 0.0..100.0 [arb'U]{%}/dB
Limit decimal places: 2
Speech recognition threshold derived: The estimated speech recognition threshold (SRT) derived from several 'Measurement of test list' entries.
Not applicable for adaptive level measurements.
Target speech intelligibility: The target percentage of the speech intelligibility measurement.
For example: 50%.
  • Percent
  • Fraction
Speech level at target: The level of the speech stimulus required to achieve the target speech intelligibility.
Property: Loudness
Units: -50.0..200.0 dB[SPL]{SPL}
Limit decimal places: 2
SNR at target: The signal-to-noise ratio required to achieve the target speech intelligibility.
Only applicable for measurements in noise.
Property: Loudness
Units: -50.0..200.0 dB SNR
Limit decimal places: 2
Slope at target: The slope of the psychometric function at the target intelligibility.
Property: null
Units: 0.0..100.0 [arb'U]{%}/dB
Limit decimal places: 2
SRT calculation method: The description of how the SRT is derived from the 'Measurement of test list' entries.
Related measurements: The identifiers of the 'Measurement of test list' entries from which the SRT is derived, as specified in 'Measurement identifier'.
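One common way to derive an SRT from several fixed-level measurements is to interpolate between the two (level, intelligibility) points that bracket the target intelligibility; the local slope falls out of the same pair of points. A sketch under that linear-interpolation assumption (function name and values are hypothetical; logistic psychometric-function fits are an equally common choice):

```python
def derive_srt(measurements, target=50.0):
    """Estimate the SRT at a target intelligibility (in %) from
    fixed-level 'Measurement of test list' results.

    `measurements` is a list of (level_dB, intelligibility_pct)
    pairs, one per test list. Returns (srt_dB, slope_pct_per_dB).
    """
    pts = sorted(measurements, key=lambda p: p[1])
    for (l0, i0), (l1, i1) in zip(pts, pts[1:]):
        if i0 <= target <= i1:
            frac = (target - i0) / (i1 - i0)
            slope = (i1 - i0) / (l1 - l0)  # %/dB around the target
            return l0 + frac * (l1 - l0), slope
    raise ValueError("target intelligibility is not bracketed by the data")

# Three hypothetical test lists at fixed speech levels:
srt, slope = derive_srt([(35.0, 20.0), (45.0, 60.0), (55.0, 90.0)], target=50.0)
# → SRT 42.5 dB, slope 4.0 %/dB
```

Whatever method is actually used belongs in 'SRT calculation method', and the contributing lists are referenced via 'Related measurements'.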
Overall comment: Additional narrative about the speech intelligibility measurement not captured in other fields.
No test result: Details to explicitly record that this examination was not performed.
Include:
openEHR-EHR-CLUSTER.exclusion_exam.v1
Training condition: Indicates whether the measurement was performed for training or test purposes.
Applicable if the speech intelligibility test requires training, for example: Matrix test.
events
Any event: Default, unspecified point in time or interval event which may be explicitly defined in a template or at run-time.
Other contributors
Stephen Chu, NEHTA, Australia
Kathy Currie, Northern Territory Health, Australia
Sam Heard, Ocean Informatics, Australia (Editor)
Anthony Leech, Hearing Health, Australia
Kerrie Lee, Ngaanyatjarra Health Service, Australia
Heather Leslie, Atomica Informatics, Australia (Editor)
Ian McNicoll, Ocean Informatics UK, United Kingdom
Kirsten Wagener, Hörzentrum Oldenburg, Germany
Tahereh Afghah, Hörzentrum Oldenburg, Germany
Ania Warzybok, University of Oldenburg and Cluster of Excellence Hearing4all, Germany
Birger Kollmeier, University of Oldenburg and Cluster of Excellence Hearing4all, Germany
Lena Schell-Majoor, University of Oldenburg and Cluster of Excellence Hearing4all, Germany
Daniel Berg, University of Oldenburg and Cluster of Excellence Hearing4all, Germany
Eugen Kludt, Hannover Medical School and Cluster of Excellence Hearing4all, Germany
Antje Wulff, University of Oldenburg and Cluster of Excellence Hearing4all, Germany
Pascal Biermann, University of Oldenburg and Cluster of Excellence Hearing4all, Germany
Translators