Trustworthy and ethical AI-enabled cardiovascular care: a rapid review

Abstract

Background

Artificial intelligence (AI) is increasingly used for the prevention, diagnosis, monitoring, and treatment of cardiovascular diseases. Despite AI’s potential to improve care, ethical concerns and mistrust of AI-enabled healthcare persist among the public and the medical community. Given the rapid and transformative growth of AI in cardiovascular care, and to inform practice guidelines and regulatory policies that facilitate its ethical and trustworthy use in medicine, we conducted a literature review to identify key ethical and trust barriers and facilitators from patients’ and healthcare providers’ perspectives on the use of AI in cardiovascular care.

Methods

In this rapid literature review, we searched six bibliographic databases to identify publications discussing transparency, trust, or ethical concerns (outcomes of interest) associated with AI-based medical devices (interventions of interest) in the context of cardiovascular care from patients’, caregivers’, or healthcare providers’ perspectives. The search was completed on May 24, 2022 and was not limited by date or study design.

Results

After reviewing 7,925 papers from six databases and 3,603 papers identified through citation chasing, 145 articles were included. Key ethical concerns included privacy, security, or confidentiality issues (n = 59, 40.7%); risk of healthcare inequity or disparity (n = 36, 24.8%); risk of patient harm (n = 24, 16.6%); accountability and responsibility concerns (n = 19, 13.1%); problematic informed consent and potential loss of patient autonomy (n = 17, 11.7%); and issues related to data ownership (n = 11, 7.6%). Major trust barriers included data privacy and security concerns, potential risk of patient harm, perceived lack of transparency about AI-enabled medical devices, concerns about AI replacing human aspects of care, concerns about prioritizing profits over patients’ interests, and lack of robust evidence related to the accuracy and limitations of AI-based medical devices. Ethical and trust facilitators included ensuring data privacy and data validation, conducting clinical trials in diverse cohorts, providing appropriate training and resources to patients and healthcare providers and improving their engagement in different phases of AI implementation, and establishing further regulatory oversights.

Conclusion

This review revealed key ethical concerns and barriers and facilitators of trust in AI-enabled medical devices from patients’ and healthcare providers’ perspectives. Successful integration of AI into cardiovascular care necessitates implementing mitigation strategies. These strategies should focus on enhancing regulatory oversight of the use of patient data and promoting transparency around the use of AI in patient care.

Background

Artificial intelligence (AI) is increasingly used in healthcare to improve the prevention, diagnosis, treatment, and management of health conditions [1]. These interventions have enormous potential to assist in the management of cardiovascular diseases, the leading cause of death in the US, given the high number of AI-based devices authorized for use and under review by the FDA for cardiovascular diseases, the breadth of use cases spanning clinical practice to consumer-facing AI-enabled solutions, and the potential for improving clinical outcomes [2,3,4,5].

Previous studies have shown that patients may be willing to accept the use of AI in healthcare and see its potential benefits if certain conditions are met, including transparency about the capture and use of their data by AI systems and the ability to opt out from data sharing at any time [6]. Moreover, patients place greater trust in a healthcare provider’s assessment of their health than in an AI’s, and often want assurance that their physicians are involved in, and ultimately responsible for, AI-enabled decisions, owing to concerns about the risk of AI failures during care [7, 8]. On a similar note, healthcare providers express specific needs for information transparency, such as explanations about known strengths and limitations of interventions when using AI-based software in clinical decision-making [9]. Healthcare providers also recognize the potential impact of AI on patient-clinician trust and seek support for transparent and effective communication with patients about AI use in their care [10]. Thus, to fully achieve the appropriate uptake of AI in medicine, patients’ and healthcare providers’ ethical and trust concerns must be addressed [11].

Although prior research has begun to explore patient and clinician perspectives on the use of AI in medicine, none has focused explicitly on stakeholders’ concerns about transparency, trust, and ethics, nor on cardiovascular care, an area of rapid and transformative recent growth [12]. Accordingly, there remains a significant gap in understanding the specific barriers and facilitators to addressing these stakeholder concerns related to transparency, trust, and ethics when implementing AI in cardiovascular care. This gap could hinder the development of effective practice guidelines and regulatory policies necessary for ensuring the ethical and trustworthy use of AI in medicine. To bridge this gap and to provide actionable insights into the nuanced requirements for trusted use of these AI-based technologies, this study reviewed the literature to identify key ethical concerns, potential mitigation strategies, and barriers and facilitators to trustworthy AI-informed cardiovascular care.

Methods

Inclusion and exclusion criteria

We conducted a rapid review of the literature, a form of information synthesis aiming to generate evidence through a resource-efficient approach by simplifying or removing certain components of the traditional systematic review process [13]. Eligible for inclusion were publications discussing transparency, trust, or ethical concerns (outcomes of interest) associated with AI-based medical devices (interventions of interest) in the context of cardiovascular care from patients’, caregivers’, or healthcare providers’ perspectives. Our search was not limited by date or study design. All papers published as full manuscripts, including qualitative and quantitative analyses, commentaries, editorials, expert opinions, perspective pieces, and guidelines were included. Conference abstracts, book chapters, pre-prints, animal studies, and publications that were not in English were excluded. Prior to the formal article screening process, we conducted a calibration exercise by piloting the screening of 10% of the sample. This ensured that all authors involved in the screening process consistently applied the inclusion and exclusion criteria.

Search strategy and data sources

A medical librarian with literature review expertise (AAG) developed the search strategy with input from all authors. The search was developed as an Ovid Embase search strategy, which was subsequently reviewed by a second librarian not otherwise associated with the project using Peer Review of Electronic Search Strategies (PRESS) [14]. After the strategy had been finalized and unanimously approved by all authors, it was adapted to the syntax and subject headings of other databases. Details on the search strategy can be found in Appendix 1. The search was conducted on the following six bibliographic databases: Cochrane Library, Embase, Google Scholar, Ovid Medline, Scopus, and Web of Science Core Collection, and was completed on May 24, 2022.

Study selection

Search results were downloaded to EndNote 20 (Clarivate, Philadelphia, PA), and duplicate citations were removed using the Yale Deduplicator Tool [15]. Individual citations were ingested into Covidence, a software tool dedicated to literature review management that facilitates collaboration between independent reviewers in the article screening and review processes. The review process was divided into two major steps: title/abstract screening and full-text screening. Titles and abstracts of each paper identified by the search were independently screened by two authors [MM and AMS, AAG, or DWY] against the inclusion criteria. Next, full-text articles were obtained for all studies that had not been excluded at the first level of screening and were assessed by two independent reviewers [MM and AMS or DWY], with reasons for exclusion recorded. Disagreements on eligibility were resolved by consensus or through the input of a third investigator. After screening, CitationChaser was used to perform citation chasing on all included studies to identify other potentially relevant studies [16]. One reviewer [MM, AMS, or DWY] screened the identified papers to decide whether they met the eligibility criteria. Reviewers were not blinded to the journal titles, authors, or institutions.

Data extraction and synthesis

Using Qualtrics software [17], an author [MM, AMS, or DWY] extracted the following fields for each included paper: article type; article title; publication year; first author; purpose and indication(s) of AI-based medical device; and device users (patients, caregivers, and healthcare providers). Next, the conceptualization and characteristics used to describe barriers and facilitators of transparency and trust and ethical concerns from patients’, caregivers’, and healthcare providers’ perspectives were recorded. For validation, a second reviewer [MM, AMS, or DWY] independently performed data extraction on a randomly selected 20% of the final sample. Disagreements occurred in less than 5% of fields and were resolved by discussion or through the input of a third investigator [JEM]. Data generated from this project will be actively preserved for three years per Yale Research Data and Materials Policy—Retention 6001.2 unless otherwise required by the journal. Content analyses were performed by MM, using Qualtrics 2022 and Microsoft Excel 2018 (Microsoft Corp) to facilitate data management and organization. In keeping with content analysis methods, abstracted data were independently categorized by two researchers [JEM and MM], who then met and, through iterative discussion, reached 100% agreement on the final categorization of findings. Categories were then summarized into key themes pertaining to concerns and mitigation strategies for ethics and barriers and facilitators for trust in AI-enabled care, with unanimous agreement among all researchers.

Results

Search results

The search resulted in 10,171 papers, of which 7,925 were unique. After the first level of screening, 7,799 titles and abstracts were excluded, leaving 126 full-text articles for review. Of those, 71 did not meet eligibility criteria, due to ineligible area of care, i.e., non-cardiovascular (n = 10); ineligible intervention, i.e., non-AI tools (n = 26); ineligible outcome (n = 22); or ineligible format, i.e., conference abstracts, book chapters, or preprints (n = 13), leaving a total of 55 eligible publications. Citation chasing of these articles resulted in 3,603 additional citations, 3,330 of which were eliminated upon title and abstract review. Of the 273 full texts reviewed, 90 articles were found to be eligible; the remaining papers were excluded for ineligible area of care (n = 69), intervention (n = 14), outcome (n = 88), or format (n = 12). Overall, 145 papers were included in this review (Fig. 1). Because we reached information saturation upon reviewing the additional papers identified through citation chasing, we did not conduct further rounds of chasing.

Fig. 1 Sample Construction Using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Diagram
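
The screening arithmetic above can be verified directly. The following minimal Python sketch (illustrative only; every number is taken from the counts reported in this section) reproduces the flow from unique records to the 145 included papers.

```python
# Arithmetic check of the screening flow reported above; every number is
# taken directly from the text, and nothing here is new data.
unique = 7_925                                   # of 10,171 retrieved records
full_text = unique - 7_799                       # 126 left after title/abstract screening
eligible = full_text - (10 + 26 + 22 + 13)       # 55 left after full-text exclusions

chased_full_text = 3_603 - 3_330                 # 273 full texts from citation chasing
chased_eligible = chased_full_text - (69 + 14 + 88 + 12)  # 90 eligible

assert eligible + chased_eligible == 145         # total papers included in the review
print(full_text, eligible, chased_full_text, chased_eligible)
```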

Sample characteristics

Included articles were published from 2014 to 2022, except for one paper [18] published in 1996. Of the 145 articles, 88 (60.7%) were review articles; 32 (22.1%) were commentaries, editorials, or perspective pieces; 22 (15.2%) were original research; and 3 (2.1%) were case studies.

The AI-based interventions discussed in 43 (29.7%) papers were devices used for the diagnosis or monitoring of cardiovascular diseases (e.g., AI-enabled cardiac imaging), while 5 (3.4%) were therapeutic devices (e.g., clinical decision support tools for heart pump implants). The remaining papers (101 [69.7%]) discussed both diagnostic and therapeutic AI-based medical devices. The indications for use of the AI-based devices were not specified in most papers (122 [84.1%]). Among those that did specify, arrhythmia was the most frequently reported indication (8 [5.5%]), followed by heart failure (7 [4.8%]). Although all papers discussed AI-based devices in the cardiovascular context, 88 (60.7%) were specific to the cardiovascular specialty, while the remaining articles also included other areas of medicine.

Among the reviewed articles, 3 (2.1%) studied self-management software used directly by patients [19,20,21], whereas 48 (33.1%) discussed devices whose main users were healthcare providers. The remaining 94 (64.8%) papers did not specify the users. Only 2 (1.4%) papers specified the device sponsor; both studied HeartMan, a personal decision support system for heart failure management funded by the Horizon 2020 Framework Programme of the European Union [19, 20].

Ethical concerns and mitigation strategies

The literature discussed six key ethical concerns: privacy, security, or confidentiality issues; risk of healthcare inequity or disparity; risk of patient harm; accountability and responsibility concerns; problematic informed consent and potential loss of patient autonomy; and issues related to data ownership (Fig. 2). Three papers discussed the lack of human involvement in patient care and the altered relationship between patients and healthcare providers as an ethical concern associated with AI-enabled medical care [22,23,24]. One paper discussed the additional complexity that AI-based medical devices could add to end-of-life care [25].

Fig. 2 Ethical Concerns and Mitigation Strategies for the Use of Artificial Intelligence-Based Medical Devices in Cardiovascular Care

Privacy, security, and confidentiality concerns

Fifty-nine (40.7%) publications discussed ethical concerns related to privacy, security, or confidentiality. Specific concerns included potential inappropriate access to and misuse of personal information stored in medical devices and inadvertent release of private patient healthcare data [22, 26]. Protecting sensitive patient information from data leakage and cyberattacks, especially for data used by private for-profit organizations [27], and protecting the stored medical data, particularly by cloud-assisted AI medical devices or commercial smartphone-based applications with poorly secured servers, were other areas of concern [28, 29]. Moreover, transferring data between institutions for the reproducibility of results could cause additional security problems [30]. Lastly, ensuring confidentiality could be difficult owing to the circulation of sensitive patient information among unregulated companies and a lack of de-identification of raw data input for AI algorithms [30, 31].

Mitigation strategies

We identified mitigation strategies from the literature to address some of the aforementioned ethical concerns. Data de-identification or anonymization and the use of highly secure data platforms could protect patient data used for the development and training of AI-based medical devices [31,32,33]. Additionally, more secure health systems need to be built across localities, and policymakers could help construct the adapted infrastructure and develop guidelines regarding patient privacy, data storage, and data sharing to ensure optimal implementation of AI tools in healthcare [34,35,36]. Several papers emphasized the need for more regulation and legislation on patient data use, such as performing regular privacy audits, mandating security breach notifications, and setting greater penalties for data misuse [27, 33, 37,38,39].
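
To make the de-identification strategy concrete, the sketch below shows one common form it can take: dropping direct identifiers and replacing the record key with a salted one-way hash. This is a minimal illustration written for this review, not a method from any of the cited papers, and all field names are hypothetical.

```python
import hashlib

# Direct identifiers that would be stripped before records are used to train
# an AI model; field names are hypothetical.
DIRECT_IDENTIFIERS = {"name", "address", "phone", "mrn"}

def deidentify(record: dict, salt: str) -> dict:
    """Drop direct identifiers and replace the record key with a salted hash."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    # A salted one-way hash lets de-identified records be linked across tables
    # without exposing the original medical record number.
    cleaned["pseudo_id"] = hashlib.sha256((salt + record["mrn"]).encode()).hexdigest()
    return cleaned

record = {"mrn": "12345", "name": "Jane Doe", "address": "10 Main St",
          "phone": "555-0100", "age": 64, "qt_interval_ms": 410}
print(deidentify(record, salt="project-specific-secret"))
```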

Risk of healthcare inequity or disparity

Thirty-six (24.8%) papers raised concerns that AI-based medical devices could create new or exacerbate healthcare inequities or disparities based on factors such as gender, race, ethnicity, or pathology-driven specificities. Potential unfairness in algorithmically automated decisions was described as the major cause of inequities and disparities. Papers discussed the risk of the AI intervention being less effective or providing inaccurate recommendations for under-represented patients if the training datasets for algorithms are based on unrepresentative patient samples [37, 40]. This in turn could lead to discrimination against certain patient populations and increase the gap in healthcare outcomes among different social groups. Furthermore, some were concerned that data could be used to improperly profile patients and differentially provide healthcare (e.g., avoidance of highest-cost or highest-risk patients) [26]. There were also concerns regarding social justice and potential unfairness in the distribution of the benefits and burdens of AI applications [22].

Mitigation strategies

Several papers described important considerations for the data sources used by AI tools, both to help healthcare providers recognize when a specific AI tool could be inappropriate for certain patient groups and to ensure that access to AI-based tools is not affected by demographic, geographic, or temporal constraints [41,42,43]. Strategies to mitigate concerns related to health inequity when using AI in medical care include using a balanced dataset by collecting sufficient data from under-represented populations, validating AI algorithms in different minority and low-income groups, and obtaining robust input from the different stakeholders involved in the development, use, and regulation of AI tools [44,45,46]. Moreover, creating a distinct algorithm in AI systems for each group of patients, rather than using a universal algorithm for all patients, could improve fairness in decision-making [47]. Lastly, conducting evidence-based assessment and implementing further regulatory oversight could help ensure the fairness of AI tools [28, 45].
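
As an illustration of subgroup validation, the minimal Python sketch below (with invented example data, not results from the reviewed literature) tallies a model's accuracy separately per demographic group, so that strong aggregate performance cannot mask poor performance in an under-represented group.

```python
from collections import defaultdict

# Invented example data: (subgroup, true label, model prediction).
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0),
]

per_group = defaultdict(lambda: [0, 0])  # subgroup -> [correct, total]
for group, truth, prediction in records:
    per_group[group][0] += int(truth == prediction)
    per_group[group][1] += 1

for group, (correct, total) in sorted(per_group.items()):
    print(f"{group}: accuracy {correct / total:.2f} (n={total})")
# A large gap between subgroups flags the model as unsuitable for the
# under-performing population without retraining on more representative data.
```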

Risk of patient harm

Concerns about the risk of suboptimal care or patient harm associated with AI tools were raised by 24 (16.6%) papers. Inaccurate data used by AI-based decision tools, flawed AI algorithms, and deliberate hacking of algorithms were discussed as potentially leading to erroneous recommendations and patient harm on a massive scale [33, 48]. The risk of errors would be greater when AI systems function independently, with unchecked decision-making and actions [49], particularly because errors made by complex, opaque AI systems are difficult to trace and debug [50]. Moreover, the complexity of AI-based systems, potentially unpredictable system output, and the uncertainty of human–AI interactions could result in substantial variation in the performance of AI-based medical devices, causing further safety challenges [51]. Lastly, there were concerns about AI-based devices programmed to function in unethical ways, for example by suggesting clinical actions that generate higher profits without patient care benefits [31].

Mitigation strategies

Several papers described the importance of providing sufficient training to device users to reduce the risk of patient harm, with an emphasis on educating healthcare providers about the potential pitfalls and limitations of AI technologies [48, 52]. Additionally, rigorous validation and continuous assessment of the algorithms used in AI-based medical devices, including conducting clinical trials that compare AI-supported care with the standard of care, could identify potential bias in AI algorithms and minimize patient harm [50, 53,54,55]. Establishing further regulatory and ethical guidelines in the postmarket stage and implementing standard frameworks for regular assessment of the safety of AI tools are also necessary [33, 46].

Problematic informed consent and loss of patient autonomy

We found 17 papers (11.7%) discussing ethical concerns about obtaining informed consent for care provided with AI-enabled medical devices. The main drivers of problematic informed consent are the lack of transparency and interpretability of AI tools and insufficient information about the different aspects of care provided by AI-enabled medical devices [45, 56, 57]. Moreover, informing patients about all aspects of health data collection and its use across different platforms and for training algorithms may not always be feasible [36, 58]. Withdrawing consent for the use of these data would cause further challenges [59]. Eight papers (5.5%) argued that patient autonomy could be negatively affected by AI-enabled care. This issue is especially likely to arise if the devices function independently and take unchecked actions [49], which could undermine patients’ confidence in their ability to change their medical decisions, i.e., refuse care, if later desired [50].

Mitigation strategies

To improve informed decision-making, several papers described the necessity of providing patients and healthcare providers with sufficient information and ensuring that patients are freely able to change their medical decisions if desired [50, 60]. Moreover, further regulations on obtaining valid, unambiguous consent when using patient data should be established [27].

Accountability and responsibility concerns

Another key ethical concern, raised by 19 (13.1%) papers, related to accountability and responsibility. Since multiple groups of professionals are involved in the design, manufacture, and use of AI-based medical devices, accountability and liability for the decisions made by these devices could be difficult to determine. While some suggested that users of the devices should ultimately be responsible for the output of algorithms [25, 61], there is considerable debate around accountability for actions suggested or performed by AI-based technologies and for the potential misuse of data [36, 37]. The complexity, opaqueness, and lack of transparency of AI-based medical devices make accountability and responsibility issues even more challenging [50, 62].

Mitigation strategies

To address questions of accountability, several papers described the importance of improving the engagement of all stakeholders, including physicians and developers. Papers also suggested improving the transparency of AI tools’ function so that the reasons behind decisions and actions taken by the devices are clear [63, 64]. Moreover, there is a need for regulatory and legal systems to oversee the implementation of AI-based medical devices and determine the responsibilities of patients, healthcare providers, and others [65].

Data ownership issues

There were further ethical concerns discussed by 11 (7.6%) papers related to ownership of the patient data being used by AI-based technologies, particularly if the data is identifiable [66]. The rules and regulations related to data ownership vary significantly across different regions and may be absent in some jurisdictions, which makes it unclear whether patients, hospitals, or private companies own the data analyzed by AI tools [67, 68]. This issue is directly associated with how AI and its data are monetized [68], as there are controversies about who should profit from the collected data and for how long these institutions or individuals can and should retain patient health information [69].

Mitigation strategies

To address these concerns, several papers described the importance of clear regulations around data ownership and preparing models of health data ownership with rights to the individual ahead of using AI-based devices in healthcare [33, 38].

Trust barriers and facilitators

We identified 53 (36.6%) and 58 (40.0%) papers discussing trust barriers and facilitators, respectively, from patients’ and healthcare providers’ perspectives when using AI-based medical devices in cardiovascular care (Fig. 3).

Fig. 3 Trust Barriers and Facilitators for the Use of Artificial Intelligence-Based Medical Devices in Cardiovascular Care

Shared (patient and healthcare provider) perspective

Data privacy and security issues

Data privacy and security concerns were discussed as key trust barriers for patients and healthcare providers [17, 62]. In particular, patients were described as worried about the potential alteration of data, unauthorized use of data, information sharing with commercial partners, and data loss [59, 70]. These issues are particularly concerning in the absence of uniform federal privacy regulations on collecting, storing, and using patient health information in different settings [41].

Facilitators

To address data privacy and security concerns, the literature discussed encrypting patient data according to the Health Insurance Portability and Accountability Act of 1996 (HIPAA), removing data identifiers, documenting the purpose of datasets, establishing ethical standards for data use and access, and securing communications between patients and healthcare providers [41, 71, 72]. Regulatory bodies could ensure the competence of AI systems and their users and establish standardized codes of ethics and conduct for device developers [72].
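
As one concrete form the encryption recommendation can take, the sketch below uses symmetric encryption to protect a record at rest. It is a minimal illustration assuming the Python cryptography package; real HIPAA-aligned deployments would add key management, access controls, and audit logging, which are omitted here.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a symmetric key; in practice this would live in a key-management
# service, never alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

plaintext = b'{"pseudo_id": "ab12...", "qt_interval_ms": 410}'
token = cipher.encrypt(plaintext)   # ciphertext safe to store or transmit
assert cipher.decrypt(token) == plaintext
```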

Risk of suboptimal care or patient harm

Users have expressed concerns about the possibility of device malfunction and are hesitant about the trustworthiness of diagnostic decisions or medical advice generated automatically by AI tools, especially if the advice contradicts their previous experiences [50]. Another important trust barrier is uncertainty about the reliability and quality of the data used in the algorithms, which could be incomplete, unrepresentative, or outdated [73]. This lack of generalizability could exacerbate health inequities and further decrease trust among populations who feel that AI would be inaccurate when applied to their cases [74]. Certain populations may also feel that they may not benefit equally from AI technologies because of the deployment and marketing strategies that manufacturers might adopt [74]. Healthcare providers are also concerned that AI-based medical devices could provide inaccurate or biased recommendations, especially if the systems are not regularly updated [75, 76]. Moreover, clinicians may not trust the generalizability of the outputs of AI systems to their own patients due to a lack of diversity in the clinical datasets [77,78,79].

Facilitators

To address these trust barriers, the literature discussed the importance of keeping AI systems updated by introducing new rules and cases along with routine performance assessments to enhance the accuracy of decisions made by AI-based medical devices [75, 80]. Further regulations and legislation could also increase trust by ensuring the balance between innovation and patient safety and confirming that AI algorithms meet appropriate standards of clinical benefit [81, 82].

Lack of transparency and insufficient knowledge

Substantial barriers to trust in AI-enabled medical devices are their lack of transparency, opaqueness (black-box nature), and poor interpretability [76, 83, 84]. Physicians tend to trust a device less if they do not fully understand how it functions or how its outputs are generated, even if the device performs well [37, 40, 54]. Multiple barriers to transparent AI-based medical devices exist, including a lack of understanding of what information is being used by the AI tools, what the AI systems are learning, and how the AI algorithms reach conclusions based on the inputs [30, 85,86,87]. Algorithmic transparency can also be difficult to achieve because of the complicated structure, dynamic learning, and constant evolution of AI algorithms [36, 56]. These factors make AI models difficult to explain and justify, and therefore uninterpretable [88]. In addition, inadequate education and experience with AI tools can create further barriers to trustworthy AI-enabled care [76, 89].

Facilitators

To improve explainability and physicians’ understanding of AI-based medical devices, it is essential to clarify AI algorithm training data, explain the computational model and its output, and acknowledge the existing limitations of AI-based medical devices [76, 78, 87, 90, 91]. Making the datasets, codes, and trained models publicly available and using interpretable models that will allow healthcare providers to review and provide feedback to the AI decision-making tools could further improve transparency [47, 92]. Some argued that healthcare providers may not need detailed explanations of the validated predictions and decisions made by AI-enabled medical devices but need to have sufficient information about the major components that affect the decisions [43]. Additionally, a visual display of the consensus between decision support tools and clinicians’ assessments could enhance clinicians’ trust in AI systems [55].

Restricting the complexity of AI tools as well as providing clarity on how AI devices are regulated could facilitate patient trust [19, 21, 59, 93]. It is also essential to provide patients with appropriate education about how to use AI tools and enhance their engagement in different phases of the design and implementation of AI technologies [50, 89, 94, 95].

Other important factors for facilitating transparency are to clarify all the interactions within and among different sectors that led to the development of AI systems and to maintain open and clear communication between healthcare providers and developers [88, 96]. Regulatory bodies could establish more rigorous regulations for the enforcement of transparency in datasets and algorithms used in AI-based medical devices [47, 92].

Replacing human aspects of care

Patients and healthcare providers seem to trust AI tools less if the devices are meant to entirely replace the human aspect of care [53].

Facilitators

Trust could improve if patients and healthcare providers are assured that AI-based devices are supplementary to care, rather than outright replacing clinicians or other human aspects of care [53, 92].

Patient perspective

Prioritizing profits over patients’ interests

From the patient perspective, trust is diminished when patients feel AI devices are used mainly for economic efficiency at the cost of patient interests and benefits [72].

No facilitators were identified in the reviewed literature for this trust barrier.

Healthcare provider perspective

Lack of robust evidence

A significant barrier to clinician trust is the lack of robust evidence on the accuracy and limitations of AI-based medical devices, compounded by inadequate education and training in the use of AI tools [76, 97, 98].

Facilitators

Several papers argued that while it might not be feasible to explain all aspects of AI, trust can be improved by generating more reliable evidence and standards through rigorous internal and external validation, prospective clinical trials in diverse cohorts that demonstrate the safety, efficacy, and generalizability of AI devices, and peer-reviewed publications [99,100,101,102,103]. Collaborative practices with healthcare providers for the development and continuous assessment of AI devices are therefore essential [75, 98]. Lastly, complying with established legislation and regulations is critical when producing trustworthy AI research [88].

Discussion

In this rapid review of the literature on the use of AI-based interventions in cardiovascular care, which screened more than 11,000 publications, we identified key concerns among healthcare providers and patients relating to transparency, trust, and ethics in the use of AI in cardiovascular care. Concerns focused on data privacy and security, the risk of patient harm, and the possibility that AI-based medical care could exacerbate healthcare inequities or advance unfair algorithmically automated decisions. Inadequately obtaining informed consent from patients regarding the use of AI and various forms of data collection while providing AI-enabled care was also described, as was the difficulty of determining who is ultimately responsible for regulating the development, performance, and use of AI in medicine and who owns the collected data. The absence of rigorous clinical trials supporting the safety and efficacy of AI-enabled medical devices and the lack of transparency about the data used by AI devices and their subsequent recommendations remain other significant barriers to patients’ and healthcare providers’ trust. Given the rapid and transformative recent growth of AI in cardiovascular care [12], these challenges should be carefully identified and addressed to ensure that AI systems are developed and implemented in an ethical and trustworthy manner.

We identified mitigation strategies to address most of the key ethical and trust concerns about the use of AI in medicine; implementing them will require a collaborative effort involving AI developers, regulators, hospital systems, healthcare providers, and patients. Regulatory agencies were identified as having multiple inroads to addressing patient and clinician concerns. Notably, we found that establishing further regulations and legislation around the development, adoption, and use of AI in healthcare is a key facilitator for addressing almost all the identified ethical concerns and trust barriers. Certain proposed frameworks and guidance documents have carved out actions for oversight bodies to delineate the scope of liability, strengthen data privacy protections, and clarify data ownership regulations [104, 105]. Moreover, requiring postapproval studies could ensure continuous monitoring of AI devices’ performance, potential biases, and unintended consequences.

AI developers similarly have a significant stake in addressing patient and clinician concerns and need to be attentive to data stewardship practices, safety, and transparency as models are researched, developed, and marketed. Moreover, current medical device labeling does not always address the unique challenges of AI-based software, such as training data sources, model accuracy, potential biases, and opting out of use, which can hinder shared decision-making with patients and trust in AI-enabled care. Providing AI model facts labels would establish clear, standardized communication with users and enhance transparency and trust [52]. Furthermore, self-governance approaches may serve as a potential mechanism, in tandem with regulatory intervention, for implementing mitigation strategies. Submitting to a set of industry standards as well as certification processes may help to mitigate the risks of AI tools and facilitate trust in models [106].
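
As a purely illustrative sketch of what such a model facts label might contain, covering the elements named above (training data sources, accuracy, biases, and opt-out), the following structure uses entirely hypothetical field values and does not follow any published label format.

```python
import json

# Every field value below is hypothetical.
model_facts = {
    "model_name": "ExampleECG-Classifier",
    "intended_use": "Adjunct screening for atrial fibrillation from 12-lead ECG",
    "training_data_sources": ["Hypothetical multicenter ECG registry, 2015-2020"],
    "reported_accuracy": {"auroc": 0.91, "validation": "external, two sites"},
    "known_limitations": ["Patients under 30 under-represented", "Paced rhythms excluded"],
    "potential_biases": "Lower reported sensitivity in under-represented cohorts",
    "opt_out": "Patients may decline AI-assisted interpretation at the point of care",
}
print(json.dumps(model_facts, indent=2))
```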

Hospital systems and clinicians will also be faced with key decisions regarding AI tools adopted in their practices. As hospitals become a source of data for the development of numerous models, appropriate privacy protections and transparency about data use and model deployment would be relevant, especially as they act in coordination with third-party developers [107]. As end-users of most healthcare AI tools, clinicians may become responsible for providing appropriate information about these systems to patients at the point of care and for appropriately integrating model insights into clinical decision-making.

While our findings are indicative of many strategies that would be taken up by clinical, technical, and regulatory stakeholders, there are also opportunities for including patients. Stakeholder engagement with patient populations and the public in the research and design of AI tools may be relevant to mitigating bias and developing trust, particularly by communicating the underlying design of AI tools in ways that are understandable to patients and leveraging advisory groups to inform the creation of such tools [108]. Identifying opportunities for patient engagement will be incumbent upon all stakeholders with more formal decision-making authority. Thus, regulatory oversight on using and sharing patient information, safety and transparency of AI tools, and responsibilities of healthcare providers, device manufacturers, and patients would facilitate the application of AI in medical care.

Overall, we found that most papers briefly touched upon issues related to trust and ethics and potential mitigation strategies without providing in-depth information. Additional studies translating ethical principles into tangible tools and guidance for stakeholders will be an important next step in implementation of responsible and trustworthy AI-enabled healthcare [109]. Moreover, we did not find any ethical concerns or trust barriers and facilitators from the caregivers’ perspective, necessitating further research in this area.

Our study has limitations. First, as with all reviews of published literature, publication and reporting biases may have affected our findings. Second, while we identified and reviewed a significant number of relevant papers, the vast majority were review articles, commentaries, editorials, or perspective pieces, with few original research articles. While our search was exhaustive, inconsistency in the level of detail reported across papers may have led to some relevant papers being missed; however, citation chasing was undertaken to identify additional relevant articles that did not include the three main concepts of our search. Lastly, this study focused on the use of AI in cardiovascular care and may not generalize to uses in other areas of medicine.

Conclusion

This rapid review of the literature on the use of AI-based interventions in cardiovascular care identified key ethical and trust concerns from patients’ and healthcare providers’ perspectives, including issues related to data privacy and security, potential inequity and bias, risk of patient harm, patient consent and autonomy, and a lack of transparency about the function of AI-based medical devices. Given the rapid and transformative recent growth of AI in cardiovascular care [12], certain mitigation strategies seem necessary, particularly further regulatory oversight of the use of patient data and of the safety and transparency of AI tools.

Availability of data and materials

Relevant data are available on reasonable request from the corresponding author.

Abbreviations

AI: Artificial Intelligence

HIPAA: Health Insurance Portability and Accountability Act of 1996

References

1. Fogel AL, Kvedar JC. Artificial intelligence powers digital medicine. NPJ Digit Med. 2018;1:5.

2. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. 2020;3:118.

3. de Marvao A, Dawes TJ, Howard JP, O’Regan DP. Artificial intelligence and the cardiologist: what you need to know for 2020. Heart. 2020;106(5):399–400.

4. Ladejobi AO, Cruz J, Attia ZI, van Zyl M, Tri J, Lopez-Jimenez F, et al. Digital health innovation in cardiology. Cardiovasc Digit Health J. 2020;1(1):6–8.

5. Centers for Disease Control and Prevention. Leading Causes of Death. 2023. Available from: https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm.

6. McCradden MD, Sarker T, Paprica PA. Conditionally positive: a qualitative study of public perceptions about using health data for artificial intelligence research. BMJ Open. 2020;10(10):e039798.

7. Quinn TP, Senadeera M, Jacobs S, Coghlan S, Le V. Trust and medical AI: the challenges we face and the expertise needed to overcome them. J Am Med Inform Assoc. 2021;28(4):890–4.

8. Fritsch SJ, Blankenheim A, Wahl A, Hetfeld P, Maassen O, Deffge S, et al. Attitudes and perception of artificial intelligence in healthcare: a cross-sectional survey among patients. Digit Health. 2022;8:20552076221116772.

9. Cai CJ, Winter S, Steiner D, Wilcox L, Terry M. “Hello AI”: Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making. Proc ACM Hum-Comput Interact. 2019;3(CSCW):Article 104.

10. Barry B, Zhu X, Behnken E, Inselman J, Schaepe K, McCoy R, et al. Provider Perspectives on Artificial Intelligence-Guided Screening for Low Ejection Fraction in Primary Care: Qualitative Study. JMIR AI. 2022;1(1):e41940.

11. Reis L, Maier C, Mattke J, Creutzenberg M, Weitzel T. Addressing User Resistance Would Have Prevented a Healthcare AI Project Failure. MIS Q Exec. 2020;19(4).

12. Elias P, Jain SS, Poterucha T, Randazzo M, Jimenez FL, Khera R, et al. Artificial Intelligence for Cardiovascular Care—Part 1: Advances. J Am Coll Cardiol. 2024;83(24):2472–86.

13. Garritty C, Gartlehner G, Kamel C, King VJ, Nussbaumer-Streit B, Stevens A, et al. Cochrane Rapid Reviews: Interim Guidance from the Cochrane Rapid Reviews Methods Group. Cochrane; 2020.

14. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. J Clin Epidemiol. 2016;75:40–6.

15. Yale University Harvey Cushing/John Hay Whitney Medical Library. Reference Deduplicator. 2021.

16. Haddaway NR, Grainger MJ, Gray CT. citationchaser: an R package for forward and backward citation chasing in academic searching. Version 0.0.3; 2021.

17. Qualtrics. 2022. Available from: https://www.qualtrics.com/.

18. Itchhaporia D, Snow PB, Almassy RJ, Oetgen WJ. Artificial neural networks: Current status in cardiovascular medicine. J Am Coll Cardiol. 1996;28(2):515–21.

19. Derboven J, Voorend R, Slegers K. Design trade-offs in self-management technology: the HeartMan case. Behav Inf Technol. 2019;39(1):72–87.

20. Luštrek M, Bohanec M, Barca CC, Ciancarelli MC, Clays E, Dawodu AA, et al. A personal health system for self-management of congestive heart failure (HeartMan): Development, technical evaluation, and proof-of-concept randomized controlled trial. JMIR Med Inform. 2021;9(3).

21. Kela N, Eytam E, Katz A. Supporting Management of Noncommunicable Diseases With Mobile Health (mHealth) Apps: Experimental Study. JMIR Hum Factors. 2022;9(1):e28697.

22. Antes AL, Burrous S, Sisk BA, Schuelke MJ, Keune JD, DuBois JM. Exploring perceptions of healthcare technologies enabled by artificial intelligence: an online, scenario-based survey. BMC Med Inform Decis Mak. 2021;21(1):221.

23. Davenport TH, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6(2):94–8.

24. Lekadir K, Leiner T, Young AA, Petersen SE. Current and Future Role of Artificial Intelligence in Cardiac Imaging. Front Cardiovasc Med. 2020;7:137.

25. Nguyen DN, Ngo B, vanSonnenberg E. AI in the Intensive Care Unit: Up-to-Date Review. J Intensive Care Med. 2020;36(10):1115–23.

26. Rumsfeld JS, Joynt KE, Maddox TM. Big data analytics to improve cardiovascular care: promise and challenges. Nat Rev Cardiol. 2016;13(6):350–9.

27. Mathur P, Srivastava S, Xu X, Mehta JL. Artificial intelligence, machine learning, and cardiovascular disease. Clin Med Insights Cardiol. 2020;14:1179546820927404.

28. Park CW, Seo SW, Kang N, Ko BS, Choi BW, Park CM, et al. Artificial Intelligence in Health Care: Current Applications and Issues. J Korean Med Sci. 2020;35(42):e379.

29. Lareyre F, Adam C, Carrier M, Raffort J. Artificial Intelligence in Vascular Surgery: Moving from Big Data to Smart Data. Ann Vasc Surg. 2020;67:e575–6.

30. Kowlgi GN, Ezzeddine FM, Kapa S. Artificial Intelligence Applications to Improve Risk Prediction Tools in Electrophysiology. Curr Cardiovasc Risk Rep. 2020;14(9):1–9.

31. Pesapane F. Legal and Regulatory Framework for AI Solutions in Healthcare in EU, US, China, and Russia: New Scenarios after a Pandemic. Radiation. 2021;1(4):261–76.

32. Dai H, Younis A, Kong JD, Puce L, Jabbour G, Yuan H, Bragazzi NL. Big Data in Cardiology: State-of-Art and Future Prospects. Front Cardiovasc Med. 2022;9:844296.

33. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44–56.

34. Krittanawong C, Rogers AJ, Johnson KW, Wang Z, Turakhia MP, Halperin JL, Narayan SM. Integration of novel monitoring devices with machine learning technology for scalable cardiovascular management. Nat Rev Cardiol. 2020;18(2):75–91.

35. Lareyre F, Lê CD, Ballaith A, Adam C, Carrier M, Amrani S, et al. Applications of Artificial Intelligence in Non-cardiac Vascular Diseases: A Bibliographic Analysis. Angiology. 2022:33197211062280.

36. Constantinides P, Fitzmaurice D. Artificial intelligence in cardiology: applications, benefits and challenges. Br J Cardiol. 2018;25(3):1–3.

37. Su J, Zhang Y, Ke Q-q, Su J-k, Yang Q-h. Mobilizing artificial intelligence to cardiac telerehabilitation. Rev Cardiovasc Med. 2022;23(2):45.

38. Kheradvar A, Jafarkhani H, Guy TS, Finn JP. Prospect of artificial intelligence for the assessment of cardiac function and treatment of cardiovascular disease. Future Cardiol. 2020;17(2):183–7.

39. Price WN, Cohen IG. Privacy in the age of medical big data. Nat Med. 2019;25(1):37–43.

40. Turchioe MR, Volodarskiy A, Pathak J, Wright D, Tcheng JE, Slotwiner DJ. Systematic review of current natural language processing methods and applications in cardiology. Heart. 2022;108(12):909–16.

41. Aggarwal N, Ahmed M, Basu S, Curtin JJ, Evans BJ, Matheny ME, et al. Advancing Artificial Intelligence in Health Settings Outside the Hospital and Clinic. NAM Perspect. 2020.

42. Siontis KC, Noseworthy PA, Attia ZI, Friedman PA. Artificial intelligence-enhanced electrocardiography in cardiovascular disease management. Nat Rev Cardiol. 2021;18(7):465–78.

43. Payrovnaziri SN, Chen Z, Rengifo-Moreno P, Miller T, Bian J, Chen JH, et al. Explainable artificial intelligence models using real-world electronic health record data: a systematic scoping review. J Am Med Inform Assoc. 2020;27(7):1173–85.

44. Paulus JK, Kent DM. Predictably unequal: understanding and addressing concerns that algorithmic clinical prediction may increase health disparities. NPJ Digit Med. 2020;3(1):1–8.

45. Petersen E, Potdevin Y, Mohammadi E, Zidowitz S, Breyer S, Nowotka D, et al. Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions. IEEE Access. 2022;10:58375–418.

46. Tat E, Bhatt DL, Rabbat MG. Addressing bias: artificial intelligence in cardiovascular medicine. Lancet Digit Health. 2020;2(12):e635–6.

47. Fletcher R, Nakeshimana A, Olubeko O. Addressing Fairness, Bias, and Appropriate Use of Artificial Intelligence and Machine Learning in Global Health. Front Artif Intell. 2021;3:561802.

48. Lopez-Jimenez F, Attia ZI, Arruda-Olson AM, Carter RE, Chareonthaitawee P, Jouni H, et al. Artificial Intelligence in Cardiology: Present and Future. Mayo Clin Proc. 2020;95(5):1015–39.

49. Kanwar M, Kilic A, Mehra MR. Machine learning, artificial intelligence and mechanical circulatory support: A primer for clinicians. J Heart Lung Transplant. 2021;40(6):414–25.

50. Morley J, Machado CCV, Burr C, Cowls J, Joshi I, Taddeo M, Floridi L. The ethics of AI in health care: A mapping review. Soc Sci Med. 2020;260:113172.

51. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25(1):30–6.

52. van de Sande D, Van Genderen ME, Smit JM, Huiskens J, Visser JJ, Veen RER, et al. Developing, implementing and governing artificial intelligence in medicine: a step-by-step approach to prevent an artificial intelligence winter. BMJ Health Care Inform. 2022;29(1):e100495.

53. Kilic A. Artificial intelligence and machine learning in cardiovascular health care. Ann Thorac Surg. 2020;109(5):1323–9.

54. Biller-Andorno N, Ferrario A, Joebges S, Krones T, Massini F, Barth P, et al. AI support for ethical decision-making around resuscitation: proceed with care. J Med Ethics. 2022;48(3):175–83.

55. Yang Q, Steinfeld A, Zimmerman J. Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes. In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems; Glasgow, Scotland, UK. Association for Computing Machinery; 2019. Paper 238.

56. Avanzo M, Trianni A, Botta F, Talamonti C, Stasi M, Iori M. Artificial intelligence and the medical physicist: Welcome to the machine. Appl Sci. 2021;11(4):1–17.

57. Xie Y, Lu L, Gao F, He S-J, Zhao H-J, Fang Y, et al. Integration of Artificial Intelligence, Blockchain, and Wearable Technology for Chronic Disease Management: A New Paradigm in Smart Healthcare. Curr Med Sci. 2021;41(6):1123–33.

58. Stewart JE, Goudie A, Mukherjee A, Dwivedi G. Artificial intelligence-enhanced echocardiography in the emergency department. Emerg Med Australas. 2021;33(6):1117–20.

59. Petersen SE, Abdulkareem M, Leiner T. Artificial intelligence will transform cardiac imaging—opportunities and challenges. Front Cardiovasc Med. 2019;6:133.

60. Miller DD. Machine Intelligence in Cardiovascular Medicine. Cardiol Rev. 2020;28(2):53–64.

61. Gandhi S, Mosleh W, Shen J, Chow CM. Automation, machine learning, and artificial intelligence in echocardiography: a brave new world. Echocardiography. 2018;35(9):1402–18.

62. Skaria R, Satam P, Khalpey Z. Opportunities and Challenges of Disruptive Innovation in Medicine Using Artificial Intelligence. Am J Med. 2020;133(6):e215–7.

63. Barrett M, Boyne J, Brandts J, Brunner-La Rocca HP, De Maesschalck L, De Wit K, et al. Artificial intelligence supported patient self-care in chronic heart failure: a paradigm shift from reactive to predictive, preventive and personalised care. EPMA J. 2019;10(4):445–64.

64. Pesapane F, Codari M, Sardanelli F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp. 2018;2(1):35.

65. Gama F, Tyskbo D, Nygren J, Barlow J, Reed J, Svedberg P. Implementation Frameworks for Artificial Intelligence Translation Into Health Care Practice: Scoping Review. J Med Internet Res. 2022;24(1):e32215.

66. Esteva A, Chou K, Yeung S, Naik N, Madani A, Mottaghi A, et al. Deep learning-enabled medical computer vision. NPJ Digit Med. 2021;4(1):1–9.

67. Krajcer Z. Artificial Intelligence for Education, Proctoring, and Credentialing in Cardiovascular Medicine. Tex Heart Inst J. 2022;49(2).

68. Gaffar S, Gearhart A, Chang AC. The Next Frontier in Pediatric Cardiology: Artificial Intelligence. Pediatr Clin North Am. 2020;67(5):995–1009.

69. Gearhart A, Gaffar S, Chang AC. A primer on artificial intelligence for the paediatric cardiologist. Cardiol Young. 2020;30(7):934–45.

70. Taralunga DD, Florea BC. A Blockchain-Enabled Framework for mHealth Systems. Sensors (Basel). 2021;21(8):2828.

71. Arafati A, Hu P, Finn JP, Rickers C, Cheng AL, Jafarkhani H, Kheradvar A. Artificial intelligence in pediatric and adult congenital cardiac MRI: an unmet clinical need. Cardiovasc Diagn Ther. 2019;9(Suppl 2):S310.

72. Feldman RC, Aldana E, Stein K. Artificial intelligence in the health care space: how we can trust what we cannot know. Stan L & Pol’y Rev. 2019;30:399.

73. Shaw J, Rudzicz F, Jamieson T, Goldfarb A. Artificial Intelligence and the Implementation Challenge. J Med Internet Res. 2019;21(7):e13659.

74. Fenech ME, Buston O. AI in Cardiac Imaging: A UK-Based Perspective on Addressing the Ethical, Social, and Political Challenges. Front Cardiovasc Med. 2020;7.

75. Sheikhtaheri A, Sadoughi F, Dehaghi ZH. Developing and Using Expert Systems and Neural Networks in Medicine: A Review on Benefits and Challenges. J Med Syst. 2014;38(9):1–6.

76. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res. 2020;22(6):e15154.

77. Yang Q, Zimmerman J, Steinfeld A, Carey L, Antaki JF. Investigating the Heart Pump Implant Decision Process: Opportunities for Decision Support Tools to Help. In: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems; San Jose, CA. Association for Computing Machinery; 2016.

78. Lang M, Bernier A, Knoppers BM. AI in Cardiovascular Imaging: “Unexplainable” Legal and Ethical Challenges? Can J Cardiol. 2021;38(2):225–33.

79. Trayanova NA, Popescu DM, Shade JK. Machine Learning in Arrhythmia and Electrophysiology. Circ Res. 2021;128(4):544–66.

80. Alaqra AS, Kane B, Fischer-Hübner S. Machine Learning-Based Analysis of Encrypted Medical Data in the Cloud: Qualitative Study of Expert Stakeholders’ Perspectives. JMIR Hum Factors. 2021;8(3):e21810.

81. Adedinsewo DA, Pollak AW, Phillips SD, Smith TL, Svatikova A, Hayes SN, et al. Cardiovascular disease screening in women: leveraging artificial intelligence and digital tools. Circ Res. 2022;130(4):673–90.

82. Wang F, Preininger AM. AI in Health: State of the Art, Challenges, and Future Directions. Yearb Med Inform. 2019;28(1):16–26.

83. Manlhiot C, Van den Eynde J, Kutty S, Ross HJ. A primer on the present state and future prospects for machine learning and artificial intelligence applications in cardiology. Can J Cardiol. 2021;38(2):169–84.

84. Itchhaporia D. Artificial intelligence in cardiology. Trends Cardiovasc Med. 2020.

85. Ranka S, Reddy M, Noheria A. Artificial intelligence in cardiovascular medicine. Curr Opin Cardiol. 2021;36(1):26–35.

86. Cau R, Cherchi V, Micheletti G, Porcu M, Di Cesare ML, Bassareo PP, et al. Potential Role of Artificial Intelligence in Cardiac Magnetic Resonance Imaging: Can It Help Clinicians in Making a Diagnosis? J Thorac Imaging. 2021;36(3):142–8.

87. Aristidou A, Jena R, Topol EJ. Bridging the chasm between AI and clinical implementation. Lancet. 2022;399(10325):620.

88. Vollmer SJ, Mateen BA, Bohner G, Király FJ, Ghani R, Jonsson P, et al. Machine learning and artificial intelligence research for patient benefit: 20 critical questions on transparency, replicability, ethics, and effectiveness. BMJ. 2020;368:l6927.

89. Lee D, Yoon SN. Application of Artificial Intelligence-Based Technologies in the Healthcare Industry: Opportunities and Challenges. Int J Environ Res Public Health. 2021;18(1):271.

90. Palla K, Hyland SL, Posner K, Ghosh P, Nair B, Bristow M, et al. Intraoperative prediction of postanaesthesia care unit hypotension. Br J Anaesth. 2022;128(4):623–35.

91. Triantafyllidis A, Kondylakis H, Katehakis D, Kouroubali A, Koumakis L, Marias K, et al. Deep Learning in mHealth for Cardiovascular Disease, Diabetes, and Cancer: Systematic Review (Preprint). 2021.

92. Langlais ÉL, Thériault-Lauzier P, Marquis-Gravel G, Kulbay M, So DY, Tanguay J-F, et al. Novel Artificial Intelligence Applications in Cardiology: Current Landscape, Limitations, and the Road to Real-World Applications. J Cardiovasc Transl Res. 2022.

93. Muehlematter UJ, Daniore P, Vokinger KN. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015–20): a comparative analysis. Lancet Digit Health. 2021;3(3):e195–203.

94. Sun J-Y, Shen H, Qu Q, Sun W, Kong X. The application of deep learning in electrocardiogram: Where we came from and where we should go? Int J Cardiol. 2021;337:71–8.

95. Fitzsimons D, Hill L, McNulty A. Back to the future: what patients, carers, nurses and doctors can gain from artificial intelligence-based heart failure solutions. Br J Card Nurs. 2021;16(11):1–3.

96. van Assen M, Banerjee I, De Cecco CN. Beyond the artificial intelligence hype: what lies behind the algorithms and what we can achieve. J Thorac Imaging. 2020;35:S3–10.

97. Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, et al. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proc IEEE. 2021;109(5):820–38.

98. Winter P, Carusi A. ‘If You’re Going to Trust the Machine, Then That Trust Has Got to Be Based on Something’: Validation and the Co-Constitution of Trust in Developing Artificial Intelligence (AI) for the Early Diagnosis of Pulmonary Hypertension (PH). Sci Technol Stud. 2022.

99. Tarakji KG, Silva J, Chen LY, Turakhia MP, Perez M, Attia ZI, et al. Digital Health and the Care of the Patient With Arrhythmia: What Every Electrophysiologist Needs to Know. Circ Arrhythm Electrophysiol. 2020;13(11):e007953.

100. Feeny AK, Chung MK, Madabhushi A, Attia ZI, Cikes M, Firouznia M, et al. Artificial intelligence and machine learning in arrhythmias and cardiac electrophysiology. Circ Arrhythm Electrophysiol. 2020;13(8):e007952.

101. van den Oever LB, Vonder M, van Assen M, van Ooijen PMA, de Bock GH, Xie XQ, Vliegenthart R. Application of artificial intelligence in cardiac CT: From basics to clinical practice. Eur J Radiol. 2020;128:108969.

    Article  PubMed  Google Scholar 

  102. Kelly C, Karthikesalingam A, Suleyman M, Corrado GS, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019;17(1):1–9.

    Article  CAS  Google Scholar 

  103. Loncaric F, Camara O, Piella G, Bijnens B. Integration of artificial intelligence into clinical patient management: focus on cardiac imaging. Revista Española de Cardiología (English Edition). 2021;74(1):72–80.

    Article  Google Scholar 

  104. World Health O. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021. p. 2021.

    Google Scholar 

  105. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc. 2020;27(3):491–7.

    Article  PubMed  Google Scholar 

  106. Roski J, Maier EJ, Vigilante K, Kane EA, Matheny ME. Enhancing trust in AI through industry self-governance. J Am Med Inform Assoc. 2021;28(7):1582–90.

    Article  PubMed  PubMed Central  Google Scholar 

  107. Ozalp H, Ozcan P, Dinckol D, Zachariadis M, Gawer A. “Digital Colonization” of Highly Regulated Industries: An Analysis of Big Tech Platforms’ Entry into Health Care and Education. Calif Manage Rev. 2022;64(4):78–107.

    Article  Google Scholar 

  108. Banerjee S, Alsop P, Jones L, Cardinal RN. Patient and public involvement to build trust in artificial intelligence: A framework, tools, and case studies. Patterns (N Y). 2022;3(6): 100506.

    Article  PubMed  Google Scholar 

  109. Prem E. From ethical AI frameworks to tools: a review of approaches. AI and Ethics. 2023;3(3):699–716.

    Article  Google Scholar 

Download references

Acknowledgements

Not applicable.

Funding

This publication was supported by the Food and Drug Administration (FDA) of the U.S. Department of Health and Human Services (HHS) as part of a financial assistance award [Center of Excellence in Regulatory Science and Innovation grant to Yale University, U01FD005938] totaling $712,431, with 100 percent funded by FDA/HHS. The contents are those of the author(s) and do not necessarily represent the official views of, nor an endorsement by, FDA/HHS or the U.S. Government.

Author information


Contributions

MM and JEM are the guarantors of the review. MM and JEM drafted the protocol. MM, JEM, and AAG developed the search strategy with input from all the authors. MM, AAG, AMS, and DWY screened the articles and extracted the findings. MM summarized the data and wrote the first draft of the article. AAG, AMS, BAB, DWY, JSR, JEM, and XZ critically reviewed and revised the manuscript for publication.

Corresponding author

Correspondence to Maryam Mooghali.

Ethics declarations

Ethics approval and consent to participate

Ethical approval was not required because only publicly available, nonclinical data were used. Informed consent was not needed because no patient data were used.

Consent for publication

Not applicable.

Competing interests

Dr. Mooghali currently receives research support through Yale University from Arnold Ventures outside of the submitted work. Mr. Stroud has no competing interests. Dr. Yoo has no competing interests. Dr. Barry currently receives research support through the Mayo Clinic Department of Cardiology from Anumana, Inc. Ms. Grimshaw has no competing interests. Dr. Ross reported receiving grants from the US Food and Drug Administration; Johnson and Johnson; Medical Device Innovation Consortium; Agency for Healthcare Research and Quality; National Heart, Lung, and Blood Institute; and Arnold Ventures outside the submitted work. Dr. Ross was also an expert witness at the request of relator attorneys, the Greene Law Firm, in a qui tam suit alleging violations of the False Claims Act and Anti-Kickback Statute against Biogen Inc. that was settled in September 2022. Dr. Zhu offers scientific input to research studies through a contracted services agreement between Mayo Clinic and Exact Sciences Corporation outside of the submitted work. Dr. Miller reported receiving grants from the US Food and Drug Administration during the conduct of the study and grants from Arnold Ventures and Scientific American, and serving on the board of the nonprofit Bioethics International and as a bioethics advisor at GalateoBio, outside the submitted work.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Mooghali, M., Stroud, A.M., Yoo, D.W. et al. Trustworthy and ethical AI-enabled cardiovascular care: a rapid review. BMC Med Inform Decis Mak 24, 247 (2024). https://doi.org/10.1186/s12911-024-02653-6


Keywords