2003-2006: Developing the IPDAS checklist
The IPDAS Collaboration’s Steering Group led this initial task; see Elwyn et al. for the procedural details [6]. To begin, the Steering Group sought opinions from the participants in the 2003 ISDM Meeting and from the shared decision making electronic listserv (“SMDM-L”) about which broad evaluative dimensions should be used for assessing the quality of patient decision aids. Twelve such quality dimensions were identified.
From there, 12 workgroups were formed. Each workgroup was assigned one of the quality dimensions and was given the following tasks: 1) offer a definition of that quality dimension; 2) outline the theoretical rationale for considering that dimension as an important aspect of the quality of patient decision aids; 3) provide a summary of the relevant evidence base underlying that quality dimension; and 4) list the relevant theoretical and empirical references for that dimension. The IPDAS Collaboration’s resulting 12-chapter “Original Background Document” was published in 2005; it can be found at the IPDAS collaboration website (http://ipdas.ohri.ca/resources.html). Of the 12 chapters, three were subsequently published as peer-reviewed manuscripts focused on providing information, measuring decision quality, and communicating probabilities [7–9].
The Steering Group envisioned a checklist of standard criteria reflecting these 12 key evaluative dimensions. Therefore, during this initial phase, each workgroup was also asked to propose and draft the specific evaluative criteria that they thought should be used to gauge whether or not a patient decision aid satisfactorily addressed their quality dimension.
Next, a modified Delphi consensus voting process was used to select a final set of criteria for the checklist. Five groups of stakeholders participated in the Delphi voting process: patients, practitioners, developers, researchers, and policy makers or payers. Each voter was provided with a series of 12 half-page summaries of the quality dimensions (e.g., theoretical rationale, evidence) plus the dimension-specific criteria for voting. For more detailed information on the quality dimensions, the IPDAS Collaboration’s 12-chapter 2005 Original Background Document was provided. Each voter was asked to rate the importance of each criterion on a 9-point scale. More than 100 stakeholders from 14 countries participated in the first phase of voting; in the second phase, stakeholders were provided summary ratings for each criterion from the first round’s results and asked to re-rate their importance on the 9-point scale. Criteria with median voting scores of 7 or higher on the 9-point scale were retained in the final IPDAS checklist. More details of the voting process and results can be found at the IPDAS website (http://ipdas.ohri.ca).
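The retention rule described above is simple arithmetic. As a minimal sketch (using made-up criterion names and ratings, not the actual Delphi data), it amounts to:

```python
from statistics import median

def retained_criteria(ratings_by_criterion, threshold=7):
    """Keep a criterion only if the median of its 9-point importance
    ratings meets or exceeds the threshold (7 in the IPDAS process).
    Criterion names and ratings here are purely illustrative."""
    return [name for name, ratings in ratings_by_criterion.items()
            if median(ratings) >= threshold]

# Hypothetical second-round ratings from five stakeholders
votes = {
    "plain-language summary": [8, 9, 7, 8, 9],  # median 8 -> retained
    "uses patient stories":   [5, 6, 7, 4, 6],  # median 6 -> dropped
}
print(retained_criteria(votes))  # -> ['plain-language summary']
```

Using the median rather than the mean makes the rule robust to a few extreme votes, which matters in a multi-stakeholder panel.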
The final IPDAS Checklist included 74 criteria from 11 of the 12 quality dimensions. Criteria from the quality dimension about “addressing patient stories” did not reach the median score threshold, largely due to uncertainty about the potential benefits and biasing effects of stories in patient decision aids. However, these criteria were added to the checklist as additional criteria to consider when stories are used in patient decision aids. A shorter version of the checklist, limited to those criteria receiving ratings of 9, is used to rate the patient decision aids that are included in the Ottawa A to Z Decision Aid Inventory (http://decisionaid.ohri.ca).
2006-2009: Developing the IPDAS instrument
The IPDAS Checklist provides broad assessments of the quality of a patient decision aid across the 12 quality dimensions. But it does not provide precise, quantitative judgments of the decision aid’s quality at the criterion (item), dimension, or global level. To address this concern, the IPDAS Collaboration undertook a project to develop, validate, and report the inter-rater reliability of a measurement instrument designed for quantitatively assessing the quality of patient decision aids—that is, the IPDAS instrument or IPDASi [10].
Criterion-items on the original IPDAS Checklist were refined, items were removed if they did not apply to all decision aids, and the items from the “balancing the presentation of options” dimension were combined with those from the “providing information” dimension. A 4-point rating scale was adopted for each criterion-item, with the following response options: strongly agree, agree, disagree, and strongly disagree. The refinement and confirmation steps yielded 47 items representing 10 dimensions. From there, a validation study was conducted in which the inter-rater reliability of responses on these 47 criterion-items was assessed, using 15 patient decision aids from major producers plus 66 decision aids randomly selected from the Cochrane inventory that is maintained by the Ottawa Patient Decision Aids Group. A short version of the IPDASi was produced, comprising 19 items across 8 of the original dimensions from the checklist [10].
2009-2013: Agreeing on minimal standards for certifying patient decision aids
Recognizing that certification of patient decision aids is becoming a priority of health systems in several countries, the IPDAS Collaboration undertook the challenge of identifying a minimal set of standards that could be used to certify the quality of a patient decision aid [11]. A modified Delphi process was used to assess each IPDASi criterion on the basis of the potential for risk or harmful bias to the patient’s decision making if the criterion were not present or of low quality in a decision aid. One hundred and one individuals with experience in the field of shared decision making and decision aids voted in the first round, and 87 voted in a second round. The initial set of 47 items from the IPDASi was reduced to 44 items.
A panel of 11 experts was established to reach consensus on minimal criteria based on: a) ratings for the 47 IPDASi items from a modified 2-stage Delphi process; b) qualitative feedback from voters on each of the 47 items in this Delphi process; c) original IPDAS consensus process scores; and d) feedback from 4 trained raters. The expert panel grouped the criterion-items into three broad categories: qualifying criteria, certification criteria, and quality criteria. Six “qualifying criteria” were identified as essential for a tool to be considered a patient decision aid; a tool would not be considered a patient decision aid unless all of these criteria were met. Six additional criteria were identified as “certification criteria,” plus 4 further certification criteria that apply when the patient decision aid addresses screening. These criteria were scored on a 1-4 scale (1 = strongly disagree and 4 = strongly agree); a decision aid must have a score of 3 or higher on each certification criterion to reach certification standards. Finally, a large group of criteria were identified as “quality criteria”, and included those items that were not essential for reducing harms to patients when using decision aids. No threshold is offered for the quality criteria, which are scored on the same 1-4 scale.
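The two-tier decision rule above (all qualifying criteria met, then every certification criterion scored at 3 or higher) can be sketched as follows; the criterion names and data structures are hypothetical, chosen only to illustrate the logic:

```python
def meets_certification(qualifying, certification_scores, threshold=3):
    """Sketch of the minimal-standards logic: a tool must satisfy every
    qualifying criterion to count as a patient decision aid at all, and
    must score >= threshold on each 1-4 certification criterion."""
    if not all(qualifying.values()):
        return False  # fails qualification: not a patient decision aid
    return all(score >= threshold for score in certification_scores.values())

# Hypothetical ratings for one decision aid
aid = {
    "qualifying": {"describes the decision": True, "lists the options": True},
    "certification": {"cites evidence sources": 4, "reports funding": 2},
}
print(meets_certification(aid["qualifying"], aid["certification"]))  # -> False
```

Here the aid qualifies but fails certification because one certification criterion scores below 3; no analogous threshold is applied to the quality criteria.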
2011-2013: Updating the evidence underlying the IPDAS checklist
In 2009, the IPDAS Collaboration argued that new concepts and empirical evidence had accumulated since 2005, and consequently there was a need to update the 2005 Original IPDAS Background Document. The IPDAS Background Document Updating Group was charged with this effort.
Strategy
The IPDAS Background Document Updating Group consisted of 12 chapter-writing teams (i.e., one team for each of the original 12 quality dimensions). Volunteers who were interested in serving as team leaders, co-leaders, and/or members were identified using several strategies. These included postings on the listservs “Shared-L” and “SMDM-L”, advertisement at the 2009 International Shared Decision Making Conference in Boston, Massachusetts, review of the roster of participants on the 2005 Original IPDAS Background Document, and informal networking among the participants.
Each chapter-writing team leader was identified from among senior individuals who had indicated an interest in serving as a team leader and who had conducted relevant research in the area addressed by the specific chapter. Each confirmed team leader was provided with: a) an outline of the updating processes (see below) that their team should follow; b) lists of the names of potential co-leaders and team members who had volunteered through the recruitment strategies described above; and c) the names and contact information of any individuals—including any of those who had co-authored the original 2005 Background Document chapters—who had not directly volunteered but were experts in the field and who might be interested in being involved.
Team leaders selected their co-leaders, and then together the leaders and co-leaders selected the members for their teams. In doing so, they were instructed to consider the diversity of their team (e.g., a mix of basic scientists, decision aid developers, and clinicians), and the importance of international representation on the teams. Final decisions about team membership were made by the individual leaders and co-leaders. Taken together, these teams involved 92 co-authors from 9 countries.
Updating processes
Each of the 12 chapter-writing teams was charged with creating an updated chapter consisting of 7 major sections:
1. Current authors and affiliations
2. A chapter summary
3. An updated definition (conceptual/operational) of the quality dimension
4. An updated theoretical rationale for inclusion of the quality dimension
5. An updated evidence base underlying the quality dimension
6. Updated references
7. Appendices, including supporting materials (if needed), and the relevant 2005 Original Background Document Chapter.
Sections 3, 4, and 5 required the most work on the part of a writing team. Within each of these sections, a team’s search, retrieval, and appraisal of the relevant theoretical and empirical literature focused primarily on: a) high-quality publications squarely in the field of patient decision aids and patient decision making; and b) high-quality publications clearly in the larger field of health care in general. Within each of these sections, a team could also identify particularly important relevant publications in other, non-health-related fields (e.g., psychology, business, adult education); the points raised by these publications could then be outlined in the “emerging issues / future research” sub-sections of the new chapter in the updated Background Document.
Each team followed established writing guidelines and used a common writing format, as they considered, summarized, and presented the theoretical and evidentiary literature relevant to the quality dimension addressed by their chapter. However, each writing team necessarily worked out its own procedures for dividing up the team’s work, circulating initial drafts among its members, resolving points of discussion via e-mail and/or conference call, and preparing the final submitted version of their updated chapter.
These updating efforts resulted in the IPDAS Collaboration’s new 12-chapter “2012 Updated Background Document” [1]; it can be found at the IPDAS collaboration website (http://ipdas.ohri.ca/resources.html).