Quality and readability of patient information on robotic-assisted hip arthroplasty
Highlight box
Key findings
• Online patient information on robotic-assisted hip arthroplasty (RAHA) is low quality and difficult to read.
• Key issues include lack of citations, unclear risk-benefit discussion, and poor transparency.
What is known and what is new?
• Online health information is often unreliable and hard to understand.
• This study provides the first systematic evaluation of RAHA patient-directed websites. It identifies consistent deficiencies across quality, readability, and compliance with Journal of the American Medical Association standards.
What is the implication, and what should change now?
• Poor-quality information may create misleading expectations, impaired decision-making, and patient dissatisfaction.
• Clear, transparent, and readable patient education materials are urgently needed.
• Healthcare providers and institutions should develop standardized, evidence-based online resources and guide patients toward trustworthy information.
Introduction
Robotic-assisted hip arthroplasty (RAHA) represents a significant advancement in orthopedic surgery. Robotic systems give surgeons greater precision and control during hip replacement than conventional manual techniques. Other potential benefits of RAHA include reduced blood loss, less soft tissue damage, shorter recovery periods, and improved overall patient outcomes (1).
With the increasing popularity and diffusion of RAHA, there is a corresponding need for comprehensive and accessible patient information about the procedure. The Internet has become a primary source of health information for many patients, covering a vast array of medical conditions, treatments, and procedures (2). This accessibility is a double-edged sword: while it offers immediate access to information that can empower patient education, it also raises challenges concerning the quality, reliability, and readability of that information. High-quality, easily understandable information is critical for optimal patient care. However, patients may encounter outdated, biased, or poorly written material that can lead to misinformation and inappropriate therapeutic decisions.
Despite the critical importance of high-quality patient information, previous studies have shown that online health information often fails to meet these standards (3-12). This is a particular concern for patients considering complex procedures such as RAHA, where they must understand intricate surgical details, the benefits and risks of the intervention, and postoperative care. Providing patients with accurate and complete information so that they can make informed decisions about their treatment options is therefore increasingly important.
Effective patient education is essential for ensuring informed consent, promoting adherence to preoperative and postoperative instructions, and enhancing patient satisfaction with the surgical experience. High-quality online health information can improve patients' participation by providing clear, comprehensive information about their condition and all available treatment options. Such empowerment reduces anxiety, fosters realistic expectations, and increases the likelihood that patients will follow medical recommendations, leading to better health outcomes. Conversely, poor-quality online health information can create misconceptions, increase anxiety, and lead to poor health-related choices (13). Information that is outdated, biased, or too complicated for the average reader may undermine patients' ability to understand their condition and the available treatments.
Since the Internet is increasingly used as a source of health information, a systematic evaluation of the quality and readability of patient-oriented educational resources on RAHA is needed. However, only limited data are currently available on the quality and readability of online information related to RAHA.
This study aims to address this gap by critically evaluating the top 50 websites in Google, Bing, and Yahoo that provide patient-oriented information on RAHA. Using the DISCERN instrument, Flesch-Kincaid Reading Ease (FRE), Flesch-Kincaid Grade Level (FGL), and Journal of the American Medical Association (JAMA) benchmarks, we aim to provide a comprehensive assessment of the current state of online patient information.
Methods
Search strategy
On July 18, 2024, we conducted systematic searches using the query "robotic hip arthroplasty" on three major Internet search engines: Google, Bing, and Yahoo. The term "robotic hip arthroplasty" was selected as a patient-centered umbrella keyword, as it is commonly used in online health information and encompasses related terms such as "robotic-assisted hip arthroplasty" and "robotic hip replacement" within search engine results. Search engines typically recognize these terms as closely related and return overlapping results, thereby minimizing the risk of systematically excluding relevant patient-facing content. Moreover, the term "arthroplasty" is preferred in orthopedic literature for its clinical accuracy, while remaining commonly used in patient education materials. Importantly, websites retrieved using our search strategy frequently employed alternative terminology within their content, indicating that these variants were effectively captured in the analyzed sample.
These searches were performed using standard browser settings in incognito mode to minimize personalization and ensure that the results reflect what a typical patient would encounter. The searches returned approximately 1,214,000 English-language results; the first 50 results from each search engine were combined, for a total of 150 URLs. Duplicate entries, advertisements, non-English websites, video platforms, and websites unrelated to patient education (such as academic journal articles without patient summaries) were excluded from the analysis. In total, 27 unique websites focusing on RAHA were eligible and included in this study (Table S1). Figure 1 shows the search strategy.
Evaluation tools
For each query, the websites in the sample were classified by affiliation. Source authorship was grouped into five categories: universities, national health system facilities, private hospitals, individual surgeons, and scientific-related web magazines (information sites).
Four instruments were used to assess the quality and readability of the selected websites. The DISCERN instrument is a standardized tool developed in 1999 by a scientific panel as a collaboration between the National Health Service and the British Library to evaluate the quality of written health information. It consists of 16 questions divided into three sections: reliability of the publication, specific details concerning treatment choices, and an overall quality rating. The first 8 questions assess the reliability of the publication in terms of the source of information, aim of the publication, and relevance. Questions 9 to 15 address specific details regarding treatment options, including efficacy, risks, and how a particular treatment works. Question 16 provides an overall summary of the quality rating as determined by the first two sections of the questionnaire. Each question is scored on a scale from 1 to 5, with higher scores representing better quality. The instrument assesses the clarity of aims, relevance and balance of information, coverage of treatment options, and the evidence base supporting the information provided (14).
The most widely used readability measures for English texts are the Flesch-Kincaid readability tests, commonly applied in both education and professional research to gauge the complexity and readability of a text. The tests, developed by Rudolf Flesch and later revised by J. Peter Kincaid, comprise two distinct formulas: the FRE and the FGL. The FRE considers the average sentence length and the average number of syllables per word; scores between 60 and 70 are considered acceptable for the average reader. The FGL converts readability into a U.S. school grade level, indicating the education level required to comprehend the text. A lower grade level is preferred; patient education materials should be written at the 8th-grade level or below. These tests are integral to the design and evaluation of educational materials, ensuring content is tailored to the reading proficiency of the target audience (15).
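For illustration, the two Flesch-Kincaid formulas can be sketched in a few lines of Python. This is a minimal sketch, not the tool used in this study: the syllable counter here is a rough vowel-group heuristic, whereas production readability calculators use more refined counting.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count contiguous vowel groups (at least one)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (FRE, FGL) using the standard Flesch-Kincaid coefficients."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # average words per sentence
    spw = syllables / len(words)   # average syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fgl = 0.39 * wps + 11.8 * spw - 15.59
    return round(fre, 1), round(fgl, 1)
```

The formulas make the trade-off explicit: short sentences of short words push the FRE up toward "easy" and the FGL down toward lower school grades, while polysyllabic medical terminology does the opposite.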
The JAMA benchmarks, proposed by Silberg et al. in 1997, evaluate the quality and credibility of online health information against four criteria: authorship, attribution, disclosure, and currency. The model is simple: scores range from 0 to 4, with higher scores indicating higher quality. Authorship requires a list of authors or contributors with their qualifications and affiliations. Attribution involves proper citation of sources: websites should list references for the information presented so that readers can verify its origin and accuracy. Disclosure refers to transparency about any conflicts of interest that could influence the content, such as sponsorship by pharmaceutical or medical device companies. Lastly, currency emphasizes up-to-date information: the date of publication or the most recent update must be noted, preferably within the last two years. Taken together, these criteria make online health information more reliable and trustworthy for healthcare providers and the general public (13).
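Because each JAMA criterion is a simple present/absent judgment, the benchmark score reduces to a four-item checklist. The sketch below is illustrative only; the field names are our own shorthand, not part of the original instrument.

```python
from dataclasses import dataclass

@dataclass
class JamaChecklist:
    """One boolean per Silberg criterion (names are illustrative)."""
    authorship: bool   # authors/contributors listed with credentials
    attribution: bool  # sources and references cited
    disclosure: bool   # ownership, sponsorship, conflicts stated
    currency: bool     # publication or last-update date shown

    def score(self) -> int:
        """JAMA benchmark score, 0-4: one point per criterion met."""
        return sum([self.authorship, self.attribution,
                    self.disclosure, self.currency])
```

For example, a site naming its authors and citing its sources, but with no disclosure statement or update date, would score 2 of 4.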
Data collection and statistical analysis
Three independent reviewers assessed each website using the DISCERN instrument. A reviewer effect was tested via analysis of variance (ANOVA), with significance set at P<0.05. The final DISCERN score for each source was computed as the average of the three individual scores, providing an overall quality rating for each website. Readability scores were calculated using automated tools on a freely available online platform (16). The JAMA benchmarks were evaluated qualitatively, and the presence or absence of each criterion was recorded. Any discrepancies in the FRE and FGL readability tests or JAMA benchmarks were resolved through discussion and consensus. The mean (standard deviation) was used to summarize the DISCERN and readability scores; the percentage of websites meeting each JAMA benchmark was also reported. Data were analyzed with R version 4.2.2 (2022-10-31 ucrt).
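The per-site averaging and the mean (SD) summary described above can be sketched as follows. The reviewer scores in the usage test are hypothetical values for illustration, not data from this study, and the analysis itself was performed in R.

```python
import statistics

def summarize_discern(per_site_scores: list[list[float]]) -> tuple[float, float]:
    """per_site_scores holds one list of reviewer DISCERN totals per website.

    Each site's final score is the mean of its reviewers' totals; the
    cohort is then summarized as (mean, sample SD) across sites.
    """
    site_means = [statistics.mean(scores) for scores in per_site_scores]
    cohort_mean = round(statistics.mean(site_means), 1)
    cohort_sd = round(statistics.stdev(site_means), 1)
    return cohort_mean, cohort_sd
```

Averaging within sites first keeps each website as the unit of analysis, so a site rated by three reviewers is not weighted three times in the cohort summary.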
Results
Differences in individual evaluations by the reviewers were tested, and no systematic or statistically significant divergences were detected (P=0.3). The average DISCERN score was 45.2 (5.8), indicating a fair level of information quality; the highest score was reached by "jointreplacementhawaii.com" (54.3) and the lowest by "americanhipinstitute.com" (34.7). The average JAMA score was 0.96 (0.98), suggesting limited transparency and credibility of the evaluated websites: 2 sites (7.4%) reached a score of 3 and 11 (40.7%) scored 0. The average FRE score was 41.9 (8.5), corresponding to 'difficult' readability, typically requiring college-level reading skills and well below the recommended level for patient education materials. The average FGL score was 11.6 (1.8), indicating that the content requires approximately an 11th- to 12th-grade reading level, exceeding the recommended maximum of 8th grade for patient-oriented health information. The DISCERN, FRE, and FGL scores are summarized in Table 1, while the JAMA benchmark distributions are reported in Table 2.
Table 1
| Score | Mean (SD) | Min | Site with min score | Max | Site with max score |
|---|---|---|---|---|---|
| DISCERN | 45.2 (5.8) | 34.7 | americanhipinstitute.com | 54.3 | jointreplacementhawaii.com |
| FRE | 41.9 (8.5) | 20 | bswhealth.med | 60 | melbournehipandknee.com.au |
| FGL | 11.6 (1.8) | 8.5 | melbournehipandknee.com.au | 15.3 | bswhealth.med |
FGL, Flesch-Kincaid Grade Level; FRE, Flesch-Kincaid Reading Ease; SD, standard deviation.
Table 2
| JAMA benchmark score | Sites |
|---|---|
| 0 | 11 (40.7) |
| 1 | 8 (29.6) |
| 2 | 6 (22.2) |
| 3 | 2 (7.4) |
Data are presented as n (%). JAMA, Journal of the American Medical Association.
Table 3 reports the scores by website type. The only scientific-related web magazine (orthoinfo.aaos.org) had the highest average DISCERN and FRE scores, 54 and 48 respectively, and the lowest average FGL score, 9.7. The only university-affiliated site (uchicagomedicine.org) had the lowest average FRE and the highest average FGL score, while sites run by private hospitals had the lowest average DISCERN score.
Table 3
| Category | Scientific-related web magazines | Private hospitals | Universities | Individual surgeons | National health system facilities |
|---|---|---|---|---|---|
| Number | 1 | 17 | 1 | 6 | 2 |
| DISCERN | 54 (NA) | 43.1 (5.7) | 49.7 (NA) | 46.8 (4.6) | 51.3 (2.8) |
| FRE | 48 (NA) | 41.1 (9.3) | 35 (NA) | 45.3 (6.9) | 39 (8.5) |
| FGL | 9.7 (NA) | 11.9 (1.9) | 12.6 (NA) | 10.8 (1.7) | 12 (1.5) |
Data are presented as mean (standard deviation). FGL, Flesch-Kincaid Grade Level; FRE, Flesch-Kincaid Reading Ease; NA, not applicable.
Discussion
RAHA is an innovative procedure that has recently gained wide interest owing to its possible advantages over conventional hip replacement surgery. Robotic systems in RAHA were designed to enhance surgical precision, minimize damage to soft tissues, and improve implant alignment, thus potentially leading to better functional outcomes and longer implant survival (17). Patients considering RAHA often have high expectations because of the perceived advantages of robotic technology. Understanding patient expectations and meeting their information needs are important for supporting informed decision-making and ensuring satisfaction with the surgical procedure. Brinkman et al. [2022] highlighted that public interest in robotic total joint replacement increased from 2011 through 2020 (18). Similarly, Chang et al. [2024] confirmed growing interest in robotic total joint replacement, but found that many patients hold misconceptions about it, which may affect their information needs and postoperative results (19).
In a complex and appealing context such as RAHA, the quality of patient education is critical. Over the last decades, the Internet has become one of the preferred sources of information on medical conditions. Patients increasingly use online resources to learn about their health, and this frequently influences treatment decisions. Patients want to be part of the decision-making process but are often unable to judge the content and quality of the information they find. For this reason, the quality of online information is key to a sound and responsible therapeutic path.
The present study provides a focused evaluation of the quality and readability of patient-oriented online information related to RAHA and identifies several important shortcomings. Overall, the quality of information, as measured by the DISCERN instrument, was rated as fair. While many websites adequately described the procedural concept and potential advantages of RAHA, a substantial proportion lacked comprehensive or balanced discussions of risks, uncertainties, and alternative treatment options. Such omissions are particularly relevant in the setting of elective orthopedic surgery, where informed consent relies on a clear understanding of both benefits and limitations. Incomplete or selectively presented information may contribute to unrealistic expectations and undermine shared decision-making.
Transparency and credibility were further limited, as reflected by the generally low JAMA benchmark scores. Many websites failed to clearly identify authorship, cite sources, disclose potential conflicts of interest, or provide evidence of content currency. Given the increasing commercialization and marketing of robotic technologies, these deficiencies raise concerns about promotional bias and highlight the need for higher standards in patient-facing educational materials.
Readability analysis revealed an additional, clinically meaningful barrier. The deficiencies shared by most sites included unclear aims, the absence of balanced or unbiased viewpoints, inadequate information on the risks versus benefits of treatment, and poor referencing. The Flesch-Kincaid tests yielded an average reading grade of 11.6, indicating that the information is too difficult for the average patient to understand; this may impair patients' ability to make appropriate decisions about their care. Comparison against the JAMA benchmarks raised further concerns about credibility and transparency. Whereas 37% of the websites stated clear authorship credentials, only 29% provided proper citations and references. Disclosure of conflicts of interest was highly inadequate: only 18% of the websites addressed it, casting doubt on the objectivity of the information. These findings call for improvement in the quality, readability, and transparency of health information about RAHA, and the variable fulfillment of the JAMA criteria raises further questions about the reliability of the available resources.
The results of our study are consistent with several studies investigating the quality and readability of online health information in other medical fields. Venosa et al. [2021] reviewed the quality of Internet information on stem cells for cartilage disorders in orthopedic practice using the DISCERN scale (249 selected sites in English, Italian, French, and Spanish). Only 76 sites (38%) were rated as fair or better, so the quality of the information promoted for stem cells in orthopedics can be considered generally low, although a significant minority offered good-quality information (3). In another study, Venosa et al. [2024] evaluated the quality of online patient educational tools concerning posterior cruciate ligament (PCL) reconstruction using a different methodological approach (six readability formulas, JAMA benchmark criteria, and HONcode detection); they found that the reading level of these materials is too high for the average reader, requiring advanced comprehension skills (4). Daraz et al., in a systematic review published in 2019, assessed the quality of online health information for different medical conditions using the DISCERN instrument (20). They found that the mean quality across websites was consistently rated as good, but none of the websites examined received an excellent score. Walsh and Volsko [2008] examined the readability of patient education materials on the top five medical causes of death in the United States (heart disease, cancer, stroke, chronic obstructive pulmonary disease, and diabetes) using three readability tools: SMOG (Simple Measure of Gobbledygook), Gunning FOG (Frequency of Gobbledygook), and the Flesch-Kincaid Grade.
The authors concluded that most of the articles exceeded the 7th-grade reading level and were therefore written above the reading levels recommended by the United States Department of Health and Human Services (USDHHS) (21). Additionally, McInnes and Haglund [2011], examining the readability of online information on 22 health conditions using the Gunning FOG, SMOG, and Flesch-Kincaid tests, found that most materials were written at a high school or college reading level, which is too difficult for many patients. They highlighted that the most readable websites were on ".gov" and ".nhs" domains, while ".edu" sites were the least readable; some of the most frequent search results, such as Wikipedia pages, were among the most difficult to read (22). This is consistent with our findings and highlights the need for healthcare providers to prioritize readability when creating patient education tools. These findings are summarized in Table 4.
Table 4
| Study [year] | Medical field/topic | Methodology | Main findings |
|---|---|---|---|
| Venosa et al. (3) [2021] | Stem cells for cartilage disorders | English, Italian, French, and Spanish; DISCERN scale | Only 38% of websites were rated as fair or better; overall quality of information was generally low |
| Venosa et al. (4) [2024] | Posterior cruciate ligament reconstruction | English language only; six readability formulas, JAMA benchmark criteria, HONcode detection | Reading level exceeded that of the average reader; patient materials required high comprehension skills |
| Daraz et al. (20) [2019] | Multiple medical conditions | Systematic review; English language only; DISCERN scale, HONcode detection | Mean quality across websites was rated as good; no website achieved an excellent quality score |
| Walsh & Volsko (21) [2008] | Top-5 causes of death in the United States | SMOG, Gunning FOG, Flesch-Kincaid Grade | Most materials exceeded the 7th-grade reading level, surpassing USDHHS recommendations |
| McInnes & Haglund (22) [2011] | Top-10 cause of mortality and burden in high-income countries | Gunning FOG, SMOG, Flesch-Kincaid Grade Level and Flesch-Kincaid Reading Ease | Most materials were written at a high school or college level; .gov and .nhs domains were the most readable |
FOG, Frequency of Gobbledygook; JAMA, Journal of the American Medical Association; SMOG, Simple Measure of Gobbledygook; USDHHS, United States Department of Health and Human Services.
The findings of our study suggest several areas for improvement in the quality and readability of online patient information about RAHA. Simplifying medical language and applying plain-language principles can make the information more accessible, and incorporating visual aids such as diagrams and videos can help patients understand complex concepts. Healthcare organizations should aim to produce materials that meet the recommended readability level of 8th grade or below; this can be achieved by using readability tools and involving patients in the development and review process. Providers should also address patients' information needs by eliciting their concerns and explaining the procedure. Websites must be informative and present balanced data on all aspects of the procedure: how it works, its benefits and risks, and alternative treatments. Using the DISCERN instrument and readability tools as a guide can help authors ensure that their materials are comprehensive and reliable. Clear authorship and attribution should be provided, along with up-to-date information and proper disclosure of conflicts of interest; adhering to the JAMA benchmarks can enhance the credibility and trustworthiness of the information. Artificial intelligence may prove a valuable tool for rapidly and effectively bringing the readability of web-based patient education materials up to recommended levels, as shown in a recent study by Kirchner et al. (23). Healthcare providers should guide patients toward trustworthy, high-quality resources. Policymakers and professional organizations should collaborate on standardized guidelines for the development and dissemination of health information on the web; these guidelines should address quality, readability, and transparency so that the information given to patients is reliable and accessible.
Rather than relying solely on unguided internet searches, patients may benefit from prescriptive recommendations of high-quality websites or institutionally curated educational materials. Such an approach could help mitigate exposure to misleading or low-quality information and strengthen the informed consent process.
Despite the comprehensive approach of our study to assessing the quality and readability of online patient information on RAHA, some limitations exist. The study assessed only the top websites returned by Google, Bing, and Yahoo; while this selection is broad, it does not reflect the full range of available online resources, some of which might have been ranked very low by the search engines yet still provide relevant information. The dynamic nature of online content also implies that what was available during our research may have changed afterward. While the tools used in the evaluation (DISCERN, the Flesch-Kincaid readability tests, and the JAMA benchmarks) are reasonably robust, they have limitations. The DISCERN tool is subjective, dependent on evaluators' perceptions, and hence susceptible to bias despite efforts to calibrate the evaluators. Similarly, the Flesch-Kincaid tests rely mainly on sentence length and syllable counts, which cannot account for other critical dimensions of readability, including text structure, the presence of medical jargon, and the broader context in which information is provided. Furthermore, the analysis excluded increasingly common multimodal features such as video, infographics, and interactivity, which greatly facilitate patient understanding; these can deliver useful learning content not captured by traditional text-based readability metrics. A further limitation is that regional bias cannot be completely ruled out: the study targeted English-language content, which might limit the generalizability of the findings to a more diverse audience, whose differences in health communication and patterns of Internet use might reveal different insights.
In addition, the study did not include direct feedback from patients themselves, which would provide valuable insight into practical utility and comprehensibility from the end-user's perspective. Patient feedback is invaluable when monitoring the real-life impact of health information, as it reflects the user's experience and the effectiveness of the materials in supporting informed decisions; incorporating it in future studies could shed light on real-world comprehension, perceived usefulness, and decision-making impact. Finally, this analysis was limited to text-based websites and did not include other increasingly popular sources of patient education, such as video-sharing platforms or social media. Visual and multimedia content, particularly on platforms such as YouTube, may enhance understanding but would require different, dedicated evaluation frameworks. Future research should address these formats to provide a more comprehensive assessment of online patient education.
Lastly, the rapid development of RAHA technology and its literature means that the information assessed here may soon be superseded by newly emerging research and clinical practice. Continuous updating and longitudinal studies are essential in such a fast-moving field so that evaluation results remain relevant and correct. Further research should consider expanding the resources examined, including multimedia content, patient feedback, and regular updating, to ensure health information remains accurate, accessible, and relevant.
Conclusions
This study highlights the need for significant improvements in the quality and readability of online patient information regarding RAHA. While some websites provide high-quality, credible information, many fail in clarity, comprehensiveness, and accessibility. By addressing these deficiencies and prioritizing patient education, healthcare providers and organizations can enhance informed decision-making and ultimately improve health outcomes in the context of RAHA. Continued efforts to develop standardized guidelines and collaborate on patient education strategies are essential for achieving these goals.
Acknowledgments
None.
Footnote
Peer Review File: Available at https://aoj.amegroups.com/article/view/10.21037/aoj-2025-1-87/prf
Funding: None.
Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://aoj.amegroups.com/article/view/10.21037/aoj-2025-1-87/coif). The authors have no conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Perets I, Mu BH, Mont MA, et al. Current topics in robotic-assisted total hip arthroplasty: a review. Hip Int 2020;30:118-24. [Crossref] [PubMed]
- Power EGM. Considerations for Effective Communication of Medical Information. Pharmaceut Med 2023;37:97-101. [Crossref] [PubMed]
- Venosa M, Tarantino A, Schettini I, et al. Stem Cells in Orthopedic Web Information: An Assessment with the DISCERN Tool. Cartilage 2021;13:519S-25S.
- Venosa M, Cerciello S, Zoubi M, et al. Readability and Quality of Online Patient Education Materials Concerning Posterior Cruciate Ligament Reconstruction. Cureus 2024;16:e58618. [Crossref] [PubMed]
- Badarudeen S, Sabharwal S. Assessing readability of patient education materials: current role in orthopaedics. Clin Orthop Relat Res 2010;468:2572-80. [Crossref] [PubMed]
- Duncan IC, Kane PW, Lawson KA, et al. Evaluation of information available on the Internet regarding anterior cruciate ligament reconstruction. Arthroscopy 2013;29:1101-7. [Crossref] [PubMed]
- Shah AK, Yi PH, Stein A. Readability of Orthopaedic Oncology-related Patient Education Materials Available on the Internet. J Am Acad Orthop Surg 2015;23:783-8. [Crossref] [PubMed]
- Luciani AM, Foster BK, Hayes D, et al. Readability of Online Spine Patient Education Resources. World Neurosurg 2022;162:e640-4. [Crossref] [PubMed]
- Ghodasra JH, Wang D, Jayakar RG, et al. The Assessment of Quality, Accuracy, and Readability of Online Educational Resources for Platelet-Rich Plasma. Arthroscopy 2018;34:272-8. [Crossref] [PubMed]
- Venosa M, Calvisi V, Iademarco G, et al. Evaluation of the Quality of ChatGPT’s Responses to Top 20 Questions about Robotic Hip and Knee Arthroplasty: Findings, Perspectives and Critical Remarks on Healthcare Education. Prosthesis 2024;6:913-22.
- Mącznik AK, Mehta P, Kaur M. Can We Go Online for Sports Injury Prevention? A Systematic Review of English-Language Websites with Exercise-Based Sports Injury Risk Reduction Programmes. Sports Med Open 2021;7:80.
- Oxman AD, Paulsen EJ. Who can you trust? A review of free online sources of “trustworthy” information about treatment effects for patients and the public. BMC Med Inform Decis Mak 2019;19:35.
- Silberg WM, Lundberg GD, Musacchio RA. Assessing, controlling, and assuring the quality of medical information on the Internet: Caveant lector et viewor--Let the reader and viewer beware. JAMA 1997;277:1244-5.
- Charnock D, Shepperd S, Needham G, et al. DISCERN: an instrument for judging the quality of written consumer health information on treatment choices. J Epidemiol Community Health 1999;53:105-11. [Crossref] [PubMed]
- Kincaid PJ, Fishburne RP, Rogers RL, et al. Derivation Of New Readability Formulas (Automated Readability Index, Fog Count And Flesch Reading Ease Formula) For Navy Enlisted Personnel. Institute for Simulation and Training; 1975.
- Readability test - WebFX. Accessed: July 24, 2024. Available online: https://www.webfx.com/tools/read-able/
- Borsinger TM, Chandi SK, Puri S, et al. Total Hip Arthroplasty: An Update on Navigation, Robotics, and Contemporary Advancements. HSS J 2023;19:478-85. [Crossref] [PubMed]
- Brinkman JC, Christopher ZK, Moore ML, et al. Patient Interest in Robotic Total Joint Arthroplasty Is Exponential: A 10-Year Google Trends Analysis. Arthroplast Today 2022;15:13-8. [Crossref] [PubMed]
- Chang J, Wu C, Hinton Z, et al. Patient Perceptions and Interest in Robotic-Assisted Total Joint Arthroplasty. Arthroplast Today 2024;26:101342. [Crossref] [PubMed]
- Daraz L, Morrow AS, Ponce OJ, et al. Can Patients Trust Online Health Information? A Meta-narrative Systematic Review Addressing the Quality of Health Information on the Internet. J Gen Intern Med 2019;34:1884-91.
- Walsh TM, Volsko TA. Readability assessment of internet-based consumer health information. Respir Care 2008;53:1310-5.
- McInnes N, Haglund BJ. Readability of online health information: implications for health literacy. Inform Health Soc Care 2011;36:173-89. [Crossref] [PubMed]
- Kirchner GJ, Kim RY, Weddle JB, et al. Can Artificial Intelligence Improve the Readability of Patient Education Materials? Clin Orthop Relat Res 2023;481:2260-7. [Crossref] [PubMed]
Cite this article as: Venosa M, Romanini E, Petralia G, Vespasiani A, Ciminello E, Fidanza A, Logroscino G. Quality and readability of patient information on robotic-assisted hip arthroplasty. Ann Jt 2026;11:23.


