Abstract
The objective of this study was to assess the ability of physicians to identify asynchronies during noninvasive ventilation (NIV) through ventilator waveforms, according to experience and interface, and to ascertain the influence of breathing pattern and respiratory drive on sensitivity and on the prevalence of asynchronies.
35 expert and 35 nonexpert physicians evaluated 40 5-min NIV reports displaying flow–time and airway pressure–time tracings; identified asynchronies were compared with those ascertained by three examiners who evaluated the same reports displaying, additionally, tracings of diaphragm electrical activity. We determined: 1) sensitivity, specificity, and positive and negative predictive values; 2) the correlation between the double true index (DTI) of each report (i.e., the ratio between the sum of true positives and true negatives, and the overall breath count) and the corresponding asynchrony index (AI); and 3) the influence of breathing pattern and respiratory drive on both AI and sensitivity.
Sensitivity for detecting asynchronies was low both according to experience (0.20 (95% CI 0.14–0.29) for expert versus 0.21 (95% CI 0.12–0.30) for nonexpert physicians, p=0.837) and interface (0.28 (95% CI 0.17–0.37) for mask versus 0.10 (95% CI 0.05–0.16) for helmet, p<0.0001). DTI was inversely correlated with the AI (r2=0.67, p<0.0001). Breathing pattern and respiratory drive affected neither the prevalence of asynchronies nor sensitivity.
Patient–ventilator asynchrony during NIV is difficult to recognise solely by visual inspection of ventilator waveforms.
Introduction
Modes of partial ventilatory assistance, in which the patient's breathing effort drives the ventilator, offer clinical advantages such as a reduced need for sedation, a lower risk of respiratory muscle atrophy, improved oxygenation and less haemodynamic impairment. Good interaction between patient effort and ventilator assistance is necessary for optimal performance of partially supported modes. Poor interaction is characterised by asynchrony, i.e. the condition in which the patient and the ventilator do not work in unison [1]. In intubated patients, a rate of asynchrony as high as 10%, mainly caused by ineffective triggering, has been associated with worsened outcomes, such as longer durations of mechanical ventilation [2] and intensive care unit (ICU) stay [3, 4], a higher rate of tracheotomy [3], and reduced rates of survival [2] and home discharge [4].
Because the patient's tolerance is a determinant of successful noninvasive ventilation (NIV) [5], optimal patient–ventilator interaction may be crucial in these patients. Recent studies indicate that high rates of asynchrony also occur during NIV [6–11]. Several strategies, such as the use of ventilators with algorithms for air-leak detection and compensation [12], application of leak-insensitive ventilatory modes [8–10], reduction of the applied pressure [7] and choice of the appropriate interface [11, 13], may limit the number of asynchronies during NIV. Nevertheless, the proportion of patients with severe asynchrony, as characterised by an asynchrony index (AI) ≥10% [3], remains high, ranging between 50% and 80% [6, 8–10].
The capacity of ICU physicians to detect the major patient–ventilator asynchronies during invasive ventilation by visual inspection of flow and pressure waveforms, as displayed on the ventilator screen, is low and only slightly influenced by the observer's clinical experience [14]. No study has so far evaluated the ability of ICU physicians to recognise asynchronies during NIV. We therefore designed this international, prospective, multicentre study to: 1) assess the ability of ICU physicians to recognise patient–ventilator asynchronies during pressure support ventilation by visual inspection of flow and pressure waveforms; 2) determine the impact of the interface, operators' experience and geographic origin on the ability to detect asynchronies; and 3) ascertain the influence of support level, breathing pattern and respiratory drive on sensitivity and prevalence of asynchronies.
Materials and methods
Expanded methods are available in the supplementary material. The study was performed in eight ICUs in China, Italy and the Netherlands, after approval from local ethics committees.
Patients and protocol
Flow, airway pressure (Paw) and diaphragm electrical activity (EAdi) tracings had been obtained from 40 patients enrolled in previous studies [6, 8, 15], who had received NIV, 20 through facial masks and 20 through a helmet, for treatment of acute respiratory failure (ARF) of various aetiologies. Patients' characteristics at enrolment and ventilator settings are reported in table S1. EAdi had been obtained through a dedicated nasogastric feeding tube (EAdi catheter; Maquet Critical Care, Solna, Sweden), positioned as previously described [8, 16]. Airflow, Paw and EAdi were acquired from the ventilator, recorded by means of dedicated software (Nava Tracker Version 3.0; Maquet Critical Care) and stored on a hard disk.
One 5-min epoch of data was randomly extracted from each of the 40 recordings, whose overall durations ranged between 20 and 30 min. Flow–time and Paw–time tracings were scaled to simulate the waveforms displayed on ventilator screens, and uploaded to a dedicated online website.
70 physicians were drawn at random from the medical staff of eight ICUs (three in China, four in Italy and one in the Netherlands): 35 had been on staff for ≥3 years and were classified as expert (Ex), while the other 35 were residents with ≥6 months of ICU training and were considered nonexpert (N-Ex). All of them were familiar with NIV. Physicians were asked to independently identify patient–ventilator asynchronies according to previously published criteria [3, 7, 17].
We considered the following asynchronies: 1) ineffective effort (IE), defined by a drop in Paw and a positive deflection of expiratory flow not triggering ventilator support; 2) autotriggering (AT), identified by a ventilator cycle without a preceding Paw deflection; and 3) double-triggering (DT), i.e. one properly triggered breath followed by a second ventilator insufflation after a time <50% of the inspiratory time. To simulate bedside conditions, physicians had ≤5 min to analyse each report.
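To make these criteria concrete, the following is a minimal, hypothetical sketch of how the three rules could be expressed programmatically, assuming each candidate event has already been segmented from the flow and Paw tracings; the BreathEvent fields are illustrative abstractions and not part of the study software.

```python
# Illustrative sketch of the asynchrony criteria described above (not the study's code).
# The BreathEvent structure assumes the relevant waveform features have already
# been extracted from the flow and Paw tracings.
from dataclasses import dataclass


@dataclass
class BreathEvent:
    paw_deflection: bool             # transient negative deflection on Paw
    exp_flow_deflection: bool        # positive deflection on the expiratory flow signal
    triggered: bool                  # ventilator delivered a supported cycle
    time_from_previous_cycle: float  # s, interval from the preceding triggered breath
    previous_insp_time: float        # s, inspiratory time of the preceding breath


def classify_event(ev: BreathEvent) -> str:
    """Label a candidate event as IE, AT, DT or synchronous."""
    if ev.paw_deflection and ev.exp_flow_deflection and not ev.triggered:
        return "IE"  # ineffective effort: visible effort, no ventilator cycle
    if ev.triggered and not ev.paw_deflection:
        return "AT"  # autotriggering: cycle without a preceding Paw deflection
    if ev.triggered and ev.time_from_previous_cycle < 0.5 * ev.previous_insp_time:
        return "DT"  # double-triggering: second insufflation within <50% of inspiratory time
    return "synchronous"
```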
Data analysis
Three examiners independently reviewed the tracings in a booklet that additionally included the EAdi tracings. The predefined criterion for considering an event asynchronous was agreement between at least two examiners. This analysis was considered the gold standard and used as reference [14]. Accordingly, we calculated the AI of each tracing as the number of asynchronous events divided by the overall breath count, i.e. the sum of ventilator cycles and nontriggered breaths [3].
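Restated as a formula, for each tracing:

$$\mathrm{AI} = \frac{n_\mathrm{IE} + n_\mathrm{AT} + n_\mathrm{DT}}{n_\mathrm{ventilator\ cycles} + n_\mathrm{nontriggered\ breaths}} \times 100\%$$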
The breath-by-breath analysis performed online by the 70 physicians on the 40 reports via a dedicated website, i.e. presence (yes or no) and type of asynchrony (IE, AT or DT) for every breath, was matched against the reference; this is referred to as breath analysis (BA) [14]. Sensitivity, specificity, and positive (PPV) and negative (NPV) predictive values were calculated for each physician and tracing. The physicians' performance in detecting asynchronies was also assessed by evaluating their ability to identify the reports with AI ≥10%, referred to as report analysis (RA). The AI based on each physician's scores for each report was calculated and compared with the reference to determine sensitivity, specificity, PPV and NPV [14].
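As an illustration only (a minimal sketch under assumed data structures, not the software actually used for the study), the per-observer BA metrics can be derived from two aligned breath-by-breath label vectors, reference versus observer:

```python
# Minimal sketch of the breath analysis (BA) scoring; data layout is an assumption.
# `reference` and `observer` are aligned breath-by-breath booleans
# (True = asynchronous) for one report and one physician.
def ba_metrics(reference, observer):
    tp = sum(r and o for r, o in zip(reference, observer))
    tn = sum(not r and not o for r, o in zip(reference, observer))
    fp = sum(not r and o for r, o in zip(reference, observer))
    fn = sum(r and not o for r, o in zip(reference, observer))
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    npv = tn / (tn + fn) if tn + fn else float("nan")
    return sensitivity, specificity, ppv, npv


# Report analysis (RA) applies the same contingency logic report by report,
# scoring the observer's "AI >= 10%" call against the reference AI.
def ra_labels(reference_ai, observer_ai, threshold=0.10):
    return reference_ai >= threshold, observer_ai >= threshold
```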
For each waveform analysis by every single observer, a double true index (DTI) was calculated as the ratio between the sum of true positives and true negatives, and the overall breath count (i.e. the sum of ventilator cycles and IEs). DTI represents the ability to properly identify both synchronous and asynchronous breaths, and ideally should be 100%.
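Expressed as a formula:

$$\mathrm{DTI} = \frac{\mathrm{TP} + \mathrm{TN}}{n_\mathrm{ventilator\ cycles} + n_\mathrm{IE}}$$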
Ventilator cycling (RRmec), patient's (neural) respiratory rate, inspiratory duty cycle, inspiratory trigger delay, inspiratory tidal volume (VT) and air leaks were computed as previously described [8, 16]. EAdi amplitude from baseline to peak (EAdipeak) and the EAdi–time product were computed to assess the neural drive [18, 19].
Statistics
Fleiss' κ coefficient was computed to quantify the degree of agreement among the three examiners in the classification used for the gold standard analysis. Normality of distribution was ascertained by means of the Kolmogorov–Smirnov test. To assess the ability of ICU physicians to detect patient–ventilator asynchrony, sensitivity, specificity, PPV and NPV were calculated for both BA and RA, and reported overall as mean±sd or median (interquartile range), as indicated. Data were then grouped according to: 1) level of experience (Ex or N-Ex); 2) interface (mask or helmet); and 3) geographic origin (Asia or Europe). The Mann–Whitney U-test or Student's t-test was applied to assess statistical differences between groups, as appropriate. Linear regression was used to assess the correlation between the mean DTI (mean value of all observers) of each tracing and the corresponding AI, both overall and separately for the mask and helmet subgroups. The Chi-squared test for linear trend was applied to ascertain the influence of the level of pressure support, RRmec, VT and EAdipeak on both the AI and the ability to properly recognise asynchronies (i.e. sensitivity). For all tests, the null hypothesis was rejected for p-values <0.05.
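For illustration, a minimal sketch of the main comparisons is given below, assuming the per-observer sensitivities and the per-report DTI and AI values are already tabulated; variable names and data layout are assumptions, not the authors' code.

```python
# Illustrative sketch of the main statistical steps; not the study's actual code.
from scipy import stats


def compare_groups(values_a, values_b, normally_distributed=False):
    """Unpaired comparison between two groups of observers
    (e.g. expert versus nonexpert sensitivities)."""
    if normally_distributed:
        return stats.ttest_ind(values_a, values_b)      # Student's t-test
    return stats.mannwhitneyu(values_a, values_b,
                              alternative="two-sided")  # Mann-Whitney U-test


def dti_vs_ai_regression(mean_dti_per_report, ai_per_report):
    """Linear regression of each tracing's mean DTI on its AI."""
    return stats.linregress(ai_per_report, mean_dti_per_report)
```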
Results
We assessed the performance of 70 ICU physicians, 30 in China and 40 in Europe (32 in Italy and eight in the Netherlands). All participating centres were experienced with NIV delivered through both mask and helmet. All 70 observers completed the analysis of each tracing within the 5-min time limit. Overall, each physician evaluated 4215 breaths. The agreement between the three examiners for the gold standard analysis was very high (κ=0.98).
Types and distribution of asynchronies in the tracings are reported in table 1, both overall and separately for mask and helmet. Breathing pattern, respiratory drive and air leaks for each tracing are reported in table S2.
Figure 1 depicts portions of two representative reports, one during NIV through a helmet and the other through a mask. For BA, the overall median sensitivity was 0.20 (95% CI 0.13–0.29), specificity was 0.88 (95% CI 0.84–0.93), PPV was 0.18 (95% CI 0.12–0.24) and NPV was 0.89 (95% CI 0.88–0.90). For RA, overall sensitivity was 0.10 (95% CI 0.05–0.25), both specificity and PPV were 1.00 (95% CI 1.00–1.00), and NPV was 0.53 (95% CI 0.51–0.57). Sensitivity, specificity, PPV and NPV for subgroups defined according to level of experience, type of interface and geographic origin are displayed in table 2 for both BA and RA. Furthermore, sensitivity, specificity, PPV and NPV for each type of asynchrony are reported separately by interface in table S3. Regardless of the analysis, asynchronies were more frequently detected with the mask than with the helmet, while expertise and geographic origin did not affect the rate of detection. In table S4, the rates of false negatives, i.e. unrecognised asynchronies, are reported as a percentage of the total number of each of the three considered asynchronies (IE, AT and DT), both overall and separately for mask and helmet. The high rates of false negatives explain the low sensitivity and PPV, and the high specificity and NPV.
The overall DTI was 0.79 (95% CI 0.66–0.88), DTI with a mask was 0.82 (95% CI 0.70–0.90) and DTI with a helmet was 0.76 (95% CI 0.66–0.88) (p=0.54). Figure 2 presents the regression lines for the overall data, and mask and helmet separately.
Table 3 illustrates the influence of pressure support level, VT, RRmec and EAdipeak on sensitivity, as defined by a rate of recognition exceeding the median value of sensitivity (0.20), and the prevalence of asynchronies, as defined by AI ≥10%. None of these variables significantly affected either sensitivity or prevalence of asynchronies. Likewise, tables S5 and S6, displaying separate data for mask and helmet, respectively, do not show significant differences for any of the variables considered, irrespective of the interface.
Discussion
Our study shows that: 1) the overall ability of ICU physicians to detect patient–ventilator asynchrony during NIV by inspection of flow and pressure waveforms, and their performance in detecting AI ≥10%, are low; 2) the clinician's expertise and geographic origin do not affect the rate of detection; 3) asynchrony detection is slightly, though significantly, higher with the mask than with the helmet; and 4) the rate of proper detection is inversely related to the prevalence of asynchrony.
To our knowledge, this is the first study aiming to assess the ability of ICU physicians to recognise asynchronies during NIV by visual inspection of Paw and flow ventilator waveforms. Previous work conducted in invasively mechanically ventilated patients reported an overall ability to properly recognise patient–ventilator asynchronies that was quite low, as indicated by a sensitivity for BA of 22% [14].
Patient–ventilator synchrony during partial ventilatory assistance has increasingly gained attention in the last decade. Poor patient–ventilator synchrony increases the work of breathing [20, 21] and worsens patient comfort [22, 23], which holds true also during NIV [7]. Poor comfort causes NIV intolerance, and represents one of the major determinants of NIV failure and endotracheal intubation [24], both in hypercapnic [25] and hypoxaemic [26] ARF.
Consistent with the results obtained in intubated patients [14], in the present work asynchrony detection was inversely related to asynchrony prevalence, indicating that the chance of correctly quantifying asynchrony by waveform observation decreases as its occurrence increases. In contrast, while clinical experience affected the ability to recognise asynchronies in the study by Colombo et al. [14], we did not observe the same finding in the present study. On the one hand, this might suggest that 6 months of training is sufficient to reach a plateau in the learning curve. On the other hand, it might be that detection of asynchronies during NIV is extremely problematic irrespective of the level of experience, which is, in our opinion, the most likely explanation for this finding.
We also found sensitivity to be higher with the mask than with the helmet, which is likely due to the different physical properties of the two interfaces, the helmet having a larger inner volume and being more compliant. To overcome these drawbacks, the NIV settings with the helmet were adjusted according to the indications of Vargas et al. [27], who showed that increasing positive end-expiratory pressure and pressure support improves pressurisation rate and muscle unloading. It is worth remarking that, compared with the mask, the helmet was characterised by more IEs and fewer DTs. Because DTs are easier to detect than IEs, owing to a stronger signal on the Paw and flow tracings, this might partially explain the higher detection rate with the mask than with the helmet.
The proportion of tracings with AI ≥10% is consistent with the results of previous investigations [7–10, 15, 28, 29], while our average AI (18.1%) is slightly lower than that previously reported by Vignaux et al. [7] (26%, interquartile range 15–54%). It should be noted, however, that in that study the asynchronies were related to the extent of air leaks, and all but six patients received NIV through ventilators not equipped with software for leak compensation [7]. Notably, the six patients ventilated with a dedicated NIV ventilator in the study by Vignaux et al. [7] did not experience any asynchronous events. In our study, air leaks were limited and, in addition, we used an ICU ventilator equipped with dedicated NIV software compensating for leaks.
The results of our study are of clinical interest. In fact, as timely detection of asynchrony would lead to adjustments of the interface and/or ventilator settings [3, 7, 30], not recognising patient–ventilator asynchrony may affect NIV outcome. Automatic detection of asynchronies has been proposed for both invasive ventilation [31–34] and NIV [32, 35], by means of algorithms using different methodologies. Applying a noise filter and an unintentional leak compensation algorithm for IE and DT detection during both invasive ventilation (n=10) and NIV (n=10), Mulqueeny et al. [32] reported overall high sensitivity (94.7%) and specificity (95.1%) compared with manual assessment based on transdiaphragmatic pressure measurements. This algorithm, however, considered neither ATs, which accounted for 40.5% of all asynchronies in our study, nor IEs occurring during the inspiratory phase. In 14 children with cystic fibrosis undergoing NIV, Cuvelier et al. [35] applied an algorithm using phase portraits of temporal modifications of patient–ventilator interaction and identified 94.6% of all IEs observed on oesophageal pressure tracings, without considering other asynchronies. Because of the poor performance of visual inspection of ventilator waveforms, algorithms able to recognise patient–ventilator asynchrony might indeed represent an important advance for the management of patients undergoing NIV. The preliminary data from these case series, however, need to be confirmed by further studies enrolling much larger numbers of patients.
Applying an algorithm that automatically calculates an index based on EAdi signal analysis [36] in 12 chronic obstructive pulmonary disease (COPD) patients receiving NIV for an episode of ARF, Doorduin et al. [37] recently found that IEs increase considerably when the timing errors between EAdi and Paw reach 20% of the overall breath count, and accordingly suggested that threshold as the limit for "acceptable" synchrony. We computed sensitivity and specificity for RA setting the cut-off at higher AI values (15%, 20% and 25%), overall and separately for the two interfaces. Increasing the AI threshold from 10% to 25% progressively worsened sensitivity while not affecting specificity (table S7). It is worth mentioning that, in contrast to the study by Doorduin et al. [37], only 12.5% of our tracings included COPD patients.
Our study has strengths, including the multicentre design, the high level of experience of the involved centres, and their acquaintance with both masks and helmets. Considering the lack of differences due to geographic origin and the high level of NIV-related expertise of the centres involved, it is reasonable to assume that the rather poor performance we observed can be generalised to the vast majority of ICUs worldwide.
Our study has one major limitation. We based our study only on ventilator waveform interpretation, without the possibility of analysing additional "visual" signs (e.g. the patient's respiratory rate compared with the ventilator rate) and parameters available at the bedside, which help the physician to recognise the mismatch between the patient's spontaneous breathing and ventilator assistance. Moreover, because during NIV the patient receives either no sedation at all or only small amounts of sedatives, the patient is able to communicate the discomfort arising from poor synchrony with the ventilator. To reduce this limitation as much as possible, we purposely avoided considering "minor asynchronies" in our evaluation, such as premature and delayed cycled breaths [38], taking into account only the "major asynchronies", i.e. IEs, ATs and DTs [3, 14]. Nonetheless, it is reasonable to assume that our results would have been better had bedside patient observation been included.
Conclusions
In conclusion, recognising patient–ventilator asynchrony during NIV by visual inspection of the ventilator-displayed waveforms is difficult. Because the use of invasive means to precisely detect the patient's own respiratory activity is unreasonable for most patients undergoing NIV, the future development of dedicated tools for this purpose is advisable.
Supplementary material
Please note: supplementary material is not edited by the Editorial Office, and is uploaded as it has been supplied by the author.
Footnotes
This article has supplementary material available from openres.ersjournals.com
Conflict of interest: Disclosures can be found alongside this article at openres.ersjournals.com
- Received June 21, 2017.
- Accepted July 30, 2017.
- Copyright ©ERS 2017
This article is open access and distributed under the terms of the Creative Commons Attribution Non-Commercial Licence 4.0.