Design
We evaluated 2 educational interventions (DrugFactsBoxes and the SMART program) using a 2 × 2 factorial research design and adhering to Consolidated Standards of Reporting Trials (CONSORT) guidelines (28). Data were collected at 4 time points: baseline and 6 weeks, 3 months, and 6 months after baseline. At each time point, data were collected via a combination of telephone interviews and online questionnaires. Immediately after completion of the baseline interview, we used a 1:1:1:1 allocation sequence to randomly assign participants to 1 of 4 study groups: DrugFactsBox with the SMART program, DrugFactsBox without the SMART program, other CMI with the SMART program, or other CMI without the SMART program. Participants and all staff involved with data collection were blinded to participants’ group assignment. The study was approved by the Institutional Review Board at the University of North Carolina at Chapel Hill (UNC-CH) and is registered (ClinicalTrials.gov identifier: NCT02820038).
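The article does not specify the randomization software or block structure; as a rough illustration only, the Python sketch below (with an assumed permuted-block scheme and a hypothetical block size of 8) shows how a 1:1:1:1 allocation sequence across the 4 study groups could be generated.

import random

GROUPS = ["DrugFactsBox + SMART", "DrugFactsBox only",
          "Other CMI + SMART", "Other CMI only"]

def allocation_sequence(n_participants, block_size=8, seed=2016):
    # Permuted blocks: each block contains every group the same number of
    # times, preserving the 1:1:1:1 ratio throughout enrollment.
    assert block_size % len(GROUPS) == 0
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = GROUPS * (block_size // len(GROUPS))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n_participants]

# Example: allocations for the first 12 enrolled participants.
print(allocation_sequence(12))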
Participants
We recruited participants from the following sources: 1) 4 large academic rheumatology practices; 2) CreakyJoints, an online arthritis patient support community; 3) social media (e.g., Facebook, Twitter); 4) the Carolina Data Warehouse for Health, which includes patients treated at all inpatient and outpatient facilities at UNC-CH; and 5) Join the Conquest, a website administered by the UNC Translational and Clinical Sciences Institute that allows members of the general public to volunteer to participate in posted research studies. To be eligible to participate, individuals had to meet the following criteria: be ≥18 years of age; have physician-confirmed RA or be undergoing therapy with a DMARD approved for the treatment of RA; speak English; not have hearing or visual impairments that would prevent completion of data-collection procedures; have an email address and internet access; have moderate or high disease activity, as evidenced by a score of >6 on the 0–30 Routine Assessment of Patient Index Data 3 (RAPID3) scale (29, 30); and not have any health problems that prevented changes in his/her RA medication regimen (e.g., ongoing serious infection). Participant recruitment began in September 2016 and ended in May 2018. Data collection was completed in December 2018. Participants received $125 for participating in the study: $25 after completing each of the baseline, 6-week, and 3-month data collections, and $50 after completing the 6-month data collection.
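As a purely illustrative sketch (not the study’s actual screening instrument), the Python function below encodes the eligibility criteria listed above; the parameter names and the way each criterion is represented are assumptions made for illustration.

def is_eligible(age, confirmed_ra_or_ra_dmard, speaks_english,
                disqualifying_sensory_impairment, has_email_and_internet,
                rapid3_score, contraindication_to_regimen_change):
    # Eligibility per the criteria above: adults with active RA
    # (RAPID3 > 6 on the 0-30 scale) who can complete English-language
    # telephone/online data collection and have no health problems
    # precluding changes to their RA medication regimen.
    return (age >= 18
            and confirmed_ra_or_ra_dmard
            and speaks_english
            and not disqualifying_sensory_impairment
            and has_email_and_internet
            and rapid3_score > 6
            and not contraindication_to_regimen_change)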
At rheumatology clinic sites, clinic staff or a research assistant identified potentially eligible patients and obtained verbal consent to administer a screening interview that assessed disease activity, age, email address, and access to the internet. They then contacted the patient’s rheumatologist to obtain confirmation of diagnosis and presence/absence of health problems that would prevent changes in the patient’s medication regimen. If the patient was eligible to participate in the study, written informed consent and Health Insurance Portability and Accountability Act (HIPAA) authorization were obtained. The information collected via these screening procedures was then forwarded to staff in the central office at UNC-CH to initiate data collection. For potential participants identified via other mechanisms, research staff at UNC-CH administered the screening interview via telephone. If the patient appeared to be eligible to participate, he/she was mailed a consent form and HIPAA authorization to sign and return. When HIPAA authorization was obtained, staff contacted the patient’s physician to obtain confirmation of diagnosis and presence/absence of health problems that would prevent medication regimen changes.
Interventions
The original DrugFactsBox format used a standardized 2-page summary that followed plain language guidelines and clinical best practices to convey relevant facts to individuals with limited literacy or numeracy skills (18–20). For the present study, we created a website that contained 16 DrugFactsBoxes for the medications most commonly used to treat RA in the US (i.e., abatacept, adalimumab, certolizumab, etanercept, golimumab, hydroxychloroquine, infliximab, leflunomide, methotrexate pill, methotrexate subcutaneous, prednisone, rituximab, sulfasalazine, tocilizumab infusion, tocilizumab subcutaneous, and tofacitinib). A pill bottle icon for each medication appeared on the website landing page. When an icon was clicked, an overview of the medication appeared. The overview included a section labeled “Bottom Line,” which contained a narrative summary of potential medication benefits and harms, emphasizing the gist (31). The overview page also provided links to other pages within the website that contained additional information about the medication; these links were labeled “trials,” “side effects,” “how to use,” “lifestyle changes,” and “interactions.” The trials page provided quantitative information concerning potential medication benefits and harms, mirroring the original DrugFactsBox format. Participants in the other CMI groups were given access to a website that contained CMI for the same 16 medications. For medications that have an FDA-approved medication guide (i.e., all biologics and tofacitinib), a link to the guide was provided. For the remaining medications, the website provided a link to CMI developed by the American Society of Health-System Pharmacists, which is similar to the written information given to patients in the US when prescriptions are dispensed.
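As an illustration only (the article does not describe the site’s implementation), the following Python sketch shows one way the per-medication content structure described above could be represented; the dictionary keys and example text are hypothetical.

# Hypothetical content model for one DrugFactsBox entry on the website.
drugfactsbox_pages = ("trials", "side effects", "how to use",
                      "lifestyle changes", "interactions")

methotrexate_pill = {
    "name": "methotrexate pill",
    "overview": {
        "bottom_line": "Narrative gist summary of potential benefits and harms.",
    },
    # One linked page per label; the trials page carries the quantitative
    # benefit/harm information that mirrors the original DrugFactsBox format.
    "pages": {label: "page content here" for label in drugfactsbox_pages},
}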
The SMART program is designed to enhance gist reasoning ability by training participants on the use of the following 3 metacognitive strategies: strategic attention (e.g., ignoring or eliminating distractions to facilitate single-minded focus on understanding the specific topic at hand), integrated reasoning (e.g., strengthening integrative mental capacity to synthesize information from multiple sources), and innovation (e.g., examining multiple perspectives and information sources to best understand the information available) (21–27). The program was delivered by research personnel at the Center for BrainHealth at the University of Texas at Dallas using an online video conferencing platform that permitted synchronous audio and visual communication between trainers and participants. In most cases, the program was delivered in small groups of 3–4 participants. Initially, the program was delivered in four 90-minute sessions spanning a 1-month period. However, because many participants had difficulty committing to sessions of this length, midway through the project we reduced the length of each session to 1 hour in an effort to increase participant engagement.
Measures
Our primary outcome variable was informed decision-making regarding the use of DMARDs. Informed decision-making is typically conceptualized as making a value-consistent decision that is based on adequate knowledge (32–34). Consistent with this approach, the online questionnaires included items asking participants to indicate the extent to which they agreed or disagreed with 10 value statements pertaining to the management of RA (e.g., “It is important to accept the risk of side effects now in order to improve my chances of being healthy in the future”), which were developed based on theory and empirically validated (35). Responses were recorded on a 4-point scale ranging from 1 (strongly agree) to 4 (strongly disagree). Responses were summed and transformed to a composite score ranging from –15 to 15, with positive numbers reflecting values favoring aggressive treatment. Participants were classified as meeting the criteria for informed decision-making if they: 1) answered at least 85% of the knowledge items (described below) correctly, scored >0 on the values measure, and were taking ≥1 DMARD; or 2) answered at least 85% of the knowledge items correctly, scored ≤0 on the values measure, and were not taking a DMARD. All other individuals were classified as not meeting the criteria for informed decision-making.
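As a minimal sketch of this classification rule (not the study’s actual code), the Python function below applies the knowledge, values, and DMARD-use criteria described above; the 85% threshold and the values cut point of 0 follow the text, while the function and variable names are illustrative.

def informed_decision_making(knowledge_pct, values_score, taking_dmard):
    # knowledge_pct: percentage of knowledge items answered correctly (0-100)
    # values_score:  composite values score (-15 to 15; >0 favors aggressive treatment)
    # taking_dmard:  True if currently taking at least 1 DMARD
    if knowledge_pct < 85:
        return False
    # Value-consistent behavior: values favoring aggressive treatment with
    # DMARD use, or values not favoring aggressive treatment without DMARD use.
    if values_score > 0:
        return taking_dmard
    return not taking_dmard

# Example: 88% knowledge, values score of 3, currently taking a DMARD -> True
print(informed_decision_making(88, 3, True))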
Knowledge was assessed by 3 separate instruments administered via telephone interview, including an 8-item measure assessing knowledge concerning methotrexate (which is often first-line therapy for RA) (36), a 20-item measure assessing knowledge concerning biologic treatment options (35), and an 8-item measure assessing knowledge of RA and RA treatment options more generally (37). Correct answers were summed across all 3 measures and transformed to a 100-point scale, reflecting the percentage of questions answered correctly.
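For example, under this scoring, a participant who answered 27 of the 36 items correctly across the 3 measures would receive a knowledge score of 75.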
DMARD use was assessed via a checklist of 19 RA medications (abatacept, adalimumab, azathioprine, certolizumab pegol, cyclosporine, etanercept, golimumab, gold, hydroxychloroquine, infliximab, leflunomide, methotrexate pill, methotrexate shot, minocycline, rituximab, sulfasalazine, tocilizumab infusion, tocilizumab shot, and tofacitinib) included in the online questionnaires. Participants were asked to check all medications with which they were currently being treated or to check an option labeled “none of the above.”
Gist reasoning ability was assessed by the Test of Strategic Learning (TOSL). The TOSL was developed to systematically quantify participants’ capacity to abstract gist meanings from complex input (26, 38). The TOSL consists of nonmedical text passages varying in length (from 291 to 575 words) and complexity. At each time point, participants read one of the text passages presented via an online questionnaire. After reading the passage, participants clicked on a link to the next page of the questionnaire, which included a single item asking participants to summarize the original text, focusing on bottom-line meaning (i.e., “the moral of the story”) rather than specific details. Participants had up to 5 minutes to complete this task and were not allowed to return to the page on which the passage appeared while writing the summary.
The next page of the questionnaire asked participants to take up to 3 minutes to list the lessons learned (i.e., take-home messages) from the text. A total of 4 different text passages were used. These were balanced across participants over the course of the study such that each participant viewed a different passage at each time point, with the order in which passages were viewed randomized across participants. Participants’ responses were scored using a manualized, objective scoring system by a trained and experienced rater (MK) who was blinded to participants’ group assignment and time of testing. Two separate scores, complex abstraction and lesson quality, were derived from participants’ responses. To assess interrater reliability prior to the initiation of coding, 2 raters scored 25 responses for each of the 4 text passages. The mean intraclass correlation coefficient for a single score was 0.74 for complex abstraction (range 0.43–0.94) and 0.95 for lesson quality (range 0.84–0.99), indicating good reliability.
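The article does not specify which intraclass correlation model was used; assuming a single-rater, absolute-agreement style ICC and using the pingouin package, a reliability check of this kind could be sketched in Python as below (the data frame columns, example scores, and choice of ICC type are assumptions).

import pandas as pd
import pingouin as pg

# ratings: one row per (response, rater) pair, e.g., 25 responses x 2 raters
# for a given passage, with each rater's complex-abstraction score.
ratings = pd.DataFrame({
    "response_id": [1, 1, 2, 2, 3, 3],
    "rater":       ["A", "B", "A", "B", "A", "B"],
    "score":       [4, 5, 7, 7, 2, 3],
})

icc = pg.intraclass_corr(data=ratings, targets="response_id",
                         raters="rater", ratings="score")
# Report a single-rater ICC (e.g., the ICC2 row for absolute agreement).
print(icc[["Type", "ICC"]])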
Information seeking was assessed using behavioral measures. First, we created a website that provided easy access to information about RA, treatment options, and illness self-management. All participants were emailed a link to the website following the 6-week follow-up, regardless of their group assignment. We used Google Analytics to track whether participants accessed the website. Second, after the 6-week follow-up, we also emailed all participants an invitation to participate (free of charge) in BetterChoices, BetterHealth, an online chronic illness self-management program, and tracked class enrollment. Finally, we assessed the following sociodemographic characteristics: age, sex (male, female), race (White, other), ethnicity (Hispanic, non-Hispanic), education (less than bachelor’s degree, bachelor’s degree or more), marital status (currently married, other), and difficulty affording RA medications (no trouble, a little trouble, a lot of trouble).
Analyses
Characteristics of study participants are presented as means or percentages, depending on the measurement properties of the variables. We used logistic regression to assess the effects of the 2 interventions on our primary outcome: informed decision-making at the 6-month follow-up. A separate regression model was estimated at each follow-up time point (i.e., 6-week, 3-month, and 6-month). Each model included informed decision-making at baseline (0 = did not meet criteria, 1 = met criteria) as a covariate, along with indicator variables indexing assignment to the SMART program (0 = no, 1 = yes) and to the DrugFactsBox group (0 = no, 1 = yes). We also included 3 two-way interaction terms in each model. The first interaction term assessed whether the effects of the 2 interventions were dependent on one another; the other 2 assessed whether the effects of the interventions varied as a function of informed decision-making at baseline. Interaction terms that were not statistically significant (P < 0.05) were dropped, and the models were re-run to examine main effects.
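The models were fit in SAS; as a rough analog only, the Python sketch below (using the statsmodels formula interface, simulated stand-in data, and hypothetical variable names) shows a logistic regression with the two intervention indicators, the baseline covariate, and the 3 two-way interaction terms described above.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data (the actual analysis used the trial data set).
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "idm_base": rng.integers(0, 2, n),  # informed decision-making at baseline
    "smart": rng.integers(0, 2, n),     # SMART program assignment
    "dfb": rng.integers(0, 2, n),       # DrugFactsBox assignment
})
logit_p = -1 + 1.5 * df["idm_base"] + 0.3 * df["smart"] + 0.3 * df["dfb"]
df["idm_fu"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))  # outcome at follow-up

# Full model with the 3 two-way interaction terms.
full_model = smf.logit(
    "idm_fu ~ idm_base + smart + dfb + smart:dfb + smart:idm_base + dfb:idm_base",
    data=df,
).fit()
print(full_model.summary())

# Non-significant interactions (P < 0.05) are dropped and the main-effects model re-run.
main_effects = smf.logit("idm_fu ~ idm_base + smart + dfb", data=df).fit()
print(main_effects.summary())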
When significant interactions were observed, we used stratified analyses to determine the nature of the interaction. Statistical significance was evaluated with alpha (2-tailed test) set at 0.05. Missing baseline data were imputed by substituting the full-sample mean (for continuous variables) or mode (for categorical variables). Missing follow-up data were imputed using multiple imputation methods, via PROC MI and PROC MIANALYZE in SAS. Because the pattern of missing data was not monotonic, we used the Markov chain Monte Carlo method. As recommended by Sullivan and colleagues (39), imputation procedures were carried out separately for each randomized group. In PROC MIANALYZE, we used the EDF option to specify the complete-data degrees of freedom for parameter estimates. Power analyses conducted a priori indicated that a sample of 300 would provide 80% power to detect a between-group difference of 25% in the percentage of participants meeting the criteria for informed decision-making (e.g., 35% versus 60%). This anticipated effect size was based on previous research (35) and corresponds to a moderate-sized effect (40). Power calculations were performed with alpha (2-tailed test) set at 0.05 and allowed for 15% attrition from baseline to final follow-up. All analyses were performed using SAS PC, version 9.4.
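The imputation itself was carried out in SAS (PROC MI with the MCMC method, pooled via PROC MIANALYZE); as a loose Python analog only, assuming a chained-equations approach rather than MCMC, simulated stand-in data, and hypothetical variable names, an impute-then-pool workflow could be sketched with statsmodels as follows. Note that the sketch omits the study’s additional steps of imputing separately within each randomized group and specifying complete-data degrees of freedom.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

# Simulated data with missing follow-up outcomes (stand-in for the trial data).
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "idm_base": rng.integers(0, 2, n).astype(float),
    "smart": rng.integers(0, 2, n).astype(float),
    "dfb": rng.integers(0, 2, n).astype(float),
    "idm_fu": rng.integers(0, 2, n).astype(float),
})
df.loc[rng.random(n) < 0.15, "idm_fu"] = np.nan  # ~15% missing follow-up data

# Chained-equations imputation, then pooling of the logistic model across
# imputed data sets (analogous in spirit to PROC MI followed by PROC MIANALYZE).
imp = mice.MICEData(df)
fit = mice.MICE("idm_fu ~ idm_base + smart + dfb", sm.Logit, imp).fit(
    n_burnin=10, n_imputations=20)
print(fit.summary())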