Because of recent advances in technology, the healthcare system produces a vast amount of data. Behavioral, biological, medical, and environmental data are collected through diverse sources (e.g., wearables, medical devices, electronic health records, and social media). Given this availability, it is not surprising that big data has become the main driving force behind the transformation of the healthcare industry. Human capability alone to analyze such data reaches its limits, which paves the way for technological assistance. Breakthroughs in algorithmic methods such as machine learning and deep-learning-based artificial intelligence (AI) have helped unlock the potential of big data for healthcare analytics [1,2,3].
AI can increase the speed and reduce the costs of high-quality healthcare [4, 5]. Yet the creation of beneficial AI applications strongly depends on the quality and quantity of relevant health data [6]: the data must be disclosed in the first place and, once made available, must be valid and reliable. AI applications can create value for patients, clinicians, healthcare organizations, pharmaceutical companies, and health insurers, among others. It is well known that the entity that requests personal information from individuals influences their likelihood of disclosing data, with willingness to disclose being highest for hospitals [7]. However, the explanatory mechanisms for the differences relative to other stakeholders, such as pharmaceutical companies [7], and their boundary conditions often remain unexplored. We argue that these differences arise because individuals attribute motives to the requesting entities (particularly for-profit vs. not-for-profit organizations), with different consequences for intentions to disclose. Beyond the consequences of who the requesting entities are, we assess when and how entities may increase the likelihood that the request is successful. The latter is particularly important to for-profit organizations such as pharmaceutical companies, which can use these data to improve their products and services and to innovate [8, 9].
The goal of the present study is to assess the downstream effects of who requests personal information from individuals for AI-based healthcare research purposes—be it a pharmaceutical company (as an example of a for-profit organization) or a university hospital (as an example of a not-for-profit organization)—as well as their boundary conditions on individuals’ likelihood of releasing personal information about their health. For the latter, we consider two dimensions: the tendency to self-disclose (which should be high so that AI applications can reach their full potential) and the tendency to falsify (which should be low so that AI applications are based on valid and reliable data). Both dimensions have been shown to be important in past research [10].
We conducted a series of experimental studies and contribute to the literature by (1) introducing motive perception pathways that shape individuals’ likelihood of disclosing personal information depending on the type of requester (for-profit vs. not-for-profit organization) and (2) considering both message appeal and message endorser characteristics as important moderators of the relationship between the requesting entity, motive perception, and the likelihood of releasing personal information (Additional file 4).
The remainder of this article is organized as follows. We briefly review the relevance of AI in healthcare and introduce our conceptual framework. We then sequentially motivate and present the results of three experimental studies. We conclude with a general discussion of our findings and outline limitations and opportunities for future research.
Artificial intelligence in healthcare
AI applications in healthcare are expected to advance medical decision-making systems by leveraging large amounts of patient-level data. Decision-makers such as healthcare organizations or clinicians can benefit from improved workflows and reduced medical errors. Healthcare analytics models the risk of adverse events based on clinical and/or non-clinical patterns in data. The prediction of future health-related outcomes, such as medical complications [11], treatment responses [12], patient readmissions [13], and patient mortality [14], increases efficiency and precision to the mutual benefit of patients and healthcare organizations.
AI applications can also consider various patient-specific factors and assist healthcare providers in assessing patients’ risks more granularly, thereby attaining the goals of preventive and personalized care [15]. Pattern recognition using deep learning supports clinicians in many disciplines (e.g., radiology, pathology, dermatology, and cardiology); the rapid and accurate interpretation of medical scans can facilitate accurate diagnoses [16]. These tools have also been shown to be useful in many other clinical settings, such as helping paramedics identify heart attacks or helping anesthesiologists avoid low oxygenation during surgery [17, 18].
Pharmaceutical companies invest in AI since it shows promising results in the realm of drug discovery [6]. Here, the most obvious advantage of algorithms is their capability to increase efficiency by examining millions of molecular structures, searching the biomedical literature at high speed, and designing and making new molecules [8, 9]. Another promising aspect is that they can identify entirely new drugs, operating detached from existing expert techniques [19], and discover previously unidentified drug interactions by leveraging pooled datasets [20]. Predicting off-target effects, toxicity, and the right dose for experimental drugs can reduce unintended adverse effects [21].
Another benefit of AI is that healthcare can be personalized to individual needs along all stages of care, including prevention, diagnosis, treatment, and follow-up [22]. With their value-based care framework, Agarwal et al. (2020) highlight that the availability of data and analytical tools creates an opportunity for healthcare to increase patient empowerment. Information about individuals’ preferences not only helps to gain a better understanding of what outcomes really matter to patients but can also improve decision making [23]. Treatment plans can be tailored to individual needs according to genomic characteristics, personality traits, or situational context.
As the amount of health data increases, so do the concerns [24]. Efforts toward technological advancement can be undermined when the main source of health data runs dry. Patients may restrict access to their health information if they perceive more risks than benefits. Privacy concerns are a constant topic in healthcare information technology research [7, 25,26,27]. Since health data are perceived as sensitive, individuals ascribe high risk to revealing such information and are often reluctant to disclose it [26, 28,29,30].
Further major concerns are the exposure of personal health information and the legitimate use of health data. One of the main reasons is the fear of real consequences, such as discrimination in health insurance or in employment based on preexisting health conditions [4]. The growing reluctance of patients to give their data to healthcare organizations is related not only to privacy risks but also to the perception of being exploited. Even if patients release personal information for purposes of AI-based research on improving health, healthcare organizations reap the majority of the financial benefits, while the contributors may get little (or nothing) in return [5]. Since healthcare research is increasingly performed by for-profit companies that serve investors’ needs (according to the rules of the capital market), individuals will be even more cautious with their data. Even though these organizations may protect individuals’ privacy by using only anonymized data, identities can still be leaked by third-party firms that link pieces of data together [31].
Besides their hesitance to self-disclose personal health information, individuals engage in control strategies. In particular, they falsify information—that is, they create and convey wrong information to others [32] to protect their privacy [33, 34]. Misrepresentation facilitates self-protection in response to a request for sensitive information. To reduce their vulnerability to opportunistic behavior, individuals might fabricate such information. This enables them to keep their privacy and simultaneously placate or satisfy others [33]. Misrepresentation of information does not disturb the social exchange, but allows individuals to proceed with an interaction. This behavior is detrimental to the effectiveness of big data technologies in healthcare since it may negatively affect the validity and reliability of results and may thus have further negative downstream consequences. In the healthcare environment, accurate information is critical to achieve high-quality outcomes for patients. To this end, the present research considers both factors of disclosure management: the behavioral intention to self-disclose personal information and the behavioral intention to falsify this information. Information boundary management, which will be explained next, provides the conceptual framework for studying these two behavioral intentions.
When individuals release true personal information
Communication Privacy Management Theory was initially developed to understand how individuals make decisions regarding the disclosure of information in interpersonal relationships [35, 36]. The theory has also been used to explain individual-organization interactions in both the for-profit and the not-for-profit sector [7, 37]. It uses the metaphor of boundaries to illustrate how individuals control and govern the information flow with others. A boundary represents a psychological contract between the information sender and the receiver and defines the amount, nature, and circumstances of information exchange [38]. When individuals wish to reveal private information, boundaries are opened and the flow of information to and from the self is not restricted, which encourages further information requests. When individuals wish to restrict information exchange, boundaries are closed.
Individuals control their boundaries based on the ratio of benefits and risks associated with the privacy of the information (see the various benefits and risks of AI applications in healthcare above). Important to the present research, Communication Privacy Management Theory has been successfully applied to person-organization relationship contexts [7, 39,40,41], supporting the relevance of the key variables in the business-to-consumer domain. Boundary rules are formed on the basis of criteria, such as culture, context, and the risk-benefit ratio, that are salient to individuals at the time they make the decision [42]. Another factor that is of particular interest to the present research is the perception of motives [42]. This becomes relevant when individuals wonder why an entity might ask for their personal information. In the following, we argue that differences in the type of requester (for-profit vs. not-for-profit organization) will influence individuals’ perception of the motives behind the organization’s request, with resulting consequences for the release of true personal information.
The type of requester of personal information and motive perception
Based on differences in objectives, performance criteria, ownership level, and trust [7, 43,44,45,46,47,48,49,50,51], individuals may attribute different motives to health organizations when these organizations request personal information. This is because individuals make use of cues available in their environment to make causal inferences. While the ownership structure of pharmaceutical companies often reflects the status of for-profit organizations whose activities are governed by capital market-oriented structures, the ownership structure of hospitals often reflects the status of not-for-profit or public organizations, mostly financed by the state, charities or research and education funds.
Attribution Theory illustrates the underlying cognitive process by which individuals assess the motives of others’ behaviors. It is based on the assumption that individuals seek to develop an understanding of the events that they observe or experience [52, 53]. Individuals, exposed to some form of marketing activity of organizations (here: requests for personal healthcare information), make inferences about their motives, which then drive evaluations and behaviors [54,55,56,57]. Individuals have been shown to attribute two main types of motives: altruistic motives that aim at the well-being of individuals external to the firm and egoistic motives that focus on the potential benefit to the organization itself. Prior research used various labels for these two motives including socially motivated versus profit-motivated [58] and public-serving versus firm-serving [56].
Altruistic motives are attributed to organizations when individuals perceive that they perform a behavior because they care about others’ welfare [59] and are driven by sincere and benevolent intentions [60]. These attributions affect individuals’ responses positively [61]. Given their not-for-profit ownership (and the mission behind this structure to benefit the community, which might increase trust [7]), we expect individuals to attribute higher altruistic motives to university hospitals’ requests for personal information (as an example of healthcare research-relevant, not-for-profit healthcare organizations) compared to when pharmaceutical companies request personal information (as an example of research-relevant for-profit healthcare organizations) for healthcare research purposes.
Egoistic motives center around the ego-driven needs and self-interests of organizations. Goals such as increased market share or publicity are highlighted. Egoistic motives cause negative responses among individuals [58], because the organizations’ activities are judged as manipulative [60]. Given their for-profit ownership (and the mission behind this structure to benefit the organization), we expect individuals to attribute egoistic motives to pharmaceutical companies as compared to university hospitals when they request personal information for healthcare research purposes. Egoistic motives are inferred from for-profit organizations’ ability to profit from their relations with consumers [59], while this might not be true for not-for-profit organizations. In this context, consumers might evaluate for-profit organizations from a profit-maximization logic, expecting the organization to act mainly out of self-interested or egoistic motives [62, 63]. This might not be the case for not-for-profit organizations such as a university hospital. H1a and H1b are stated as follows:
Hypothesis 1a
Attributions of altruistic motives for the request of personal information for healthcare research purposes will be lower for pharmaceutical companies compared to university hospitals.
Hypothesis 1b
Attributions of egoistic motives for the request of personal information for healthcare research purposes will be higher for pharmaceutical companies compared to university hospitals.
The downstream relations of perceived motives
The underlying motives that individuals attribute to a health organization’s information request might relate to individuals’ information disclosure tactics. We argue that, first, the perception of altruistic motives will be associated with the opening of boundaries and will facilitate information flow between the individual and the healthcare organization (and hence affect self-disclosure of information), and, second, the perception of egoistic motives will make information-protective behavior (in the form of falsification of information) more likely. In what follows, we explain our arguments in more detail.
Individuals are aware that they need to release personal information in exchange for certain benefits to satisfy their needs [64]. The exchange of information is part of what is known as a social contract: individuals have something of value to others, and both parties decide to engage in a mutually agreeable trade [65]. Perceptions of altruistic motives indicate that healthcare organizations emphasize the creation of social and common benefits. As a consequence, these perceptions might open the boundary and make individuals more likely to disclose personal information. Altruistic motives lower the barrier for action. Hence, altruistic motives should act as a mediator between the type of organization that requests personal information (pharmaceutical company vs. university hospital) and the willingness to self-disclose personal information.
Hypothesis 2a
Attributed altruistic motives mediate the relationship between the healthcare organization that is requesting personal information (pharmaceutical companies vs. university hospitals) and an individual’s self-disclosure intentions.
The social contract between requesters and releasers of personal information comprises commonly understood obligations or social norms for both parties; this is critical for the prevention of opportunistic behaviors [66]. Most importantly to the present study, we can assume that when individuals attribute egoistic motives to the information requester, they might be concerned that the organization may not honor the social contract, so that they act only in their own best interest. The egoistic motive might fuel individuals’ skepticism and lead to negative reactions [56]. To retain control while still reaping the benefits of the exchange, individuals may misrepresent their data [34, 67]. This need for a defensive tactic might stem from the underlying motives that individuals attribute to the information request. Subsequently, individuals will be more likely to misrepresent their data in the information exchange with the health organization. We therefore postulate that egoistic motives act as a mediator between the type of organization that requests personal information (pharmaceutical company vs. university hospital) and the willingness to falsify personal information.
Hypothesis 2b
Attributed egoistic motives mediate the relationship between the healthcare organization that is requesting personal information (pharmaceutical companies vs. university hospitals) and individuals’ falsification intentions.
Figure 1 provides an overview of the conceptual model that guided our research. Study 1, which is presented in the following, aims to test H1 and H2.
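The mediation logic behind H2a and H2b can be made concrete: the indirect effect of requester type on a disclosure outcome via an attributed motive is the product of the path from requester to motive (a) and the path from motive to outcome controlling for requester (b), commonly tested with a percentile bootstrap. The sketch below illustrates this on simulated data only; the variable names, effect sizes, and the bootstrap test are illustrative assumptions, not the authors’ actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical simulated data: x = requester type (0 = university hospital,
# 1 = pharmaceutical company), m = attributed egoistic motives,
# y = falsification intention. Effect sizes are purely illustrative.
x = rng.integers(0, 2, n).astype(float)
m = 0.6 * x + rng.normal(0, 1, n)            # path a: requester -> motive
y = 0.5 * m + 0.1 * x + rng.normal(0, 1, n)  # path b plus a small direct effect

def ols_coefs(predictors, response):
    """OLS coefficients of response on [intercept, predictors...]."""
    X = np.column_stack([np.ones(len(response))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, response, rcond=None)
    return beta

def indirect_effect(x, m, y):
    a = ols_coefs([x], m)[1]        # x -> m
    b = ols_coefs([x, m], y)[2]     # m -> y, controlling for x
    return a * b

# Percentile bootstrap confidence interval for the indirect effect a*b
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
eff = indirect_effect(x, m, y)
print(f"indirect effect = {eff:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Mediation is supported when the bootstrap confidence interval for the indirect effect excludes zero; with the simulated effect sizes above, the true indirect effect is 0.6 × 0.5 = 0.3.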
Study 1
The purpose of Study 1 is to provide initial evidence that individuals attribute different motives to not-for-profit (vs. for-profit) healthcare organizations’ requests that individuals share certain personal information with them. Moreover, the study assesses whether attributed altruistic and egoistic motives mediate the relationship between the type of information requester and individuals’ intentions to self-disclose or falsify personal information.