Visualized Emotion Ontology: a model for representing visual cues of emotions

Abstract

Background

Healthcare services, particularly in patient-provider interaction, often involve highly emotional situations, and it is important for physicians to understand and respond to their patients’ emotions to best ensure their well-being.

Methods

In order to model the emotion domain, we have created the Visualized Emotion Ontology (VEO) to provide a semantic definition of 25 emotions based on established models, as well as visual representations of emotions utilizing shapes, lines, and colors.

Results

As determined by ontology evaluation metrics, VEO exhibited better machine-readability (z=1.12), linguistic quality (z=0.61), and domain coverage (z=0.39) compared to a sample of cognitive ontologies. Additionally, a survey of 1082 participants through Amazon Mechanical Turk revealed that a significantly higher proportion of people agree than disagree with 17 out of our 25 emotion images, validating the majority of our visualizations.

Conclusion

Through the development, evaluation, and serialization of the VEO, we have defined a set of 25 emotions in OWL and linked a surveyed visualization to each emotion. In the future, we plan to use the VEO in patient-facing software tools, such as embodied conversational agents, to enhance interactions between patients and providers in a clinical environment.

Background

Emotions form the core of people’s thought processes, decisions, and actions, so it is crucial to investigate and understand them [1, 2]. In particular, some of the most highly emotional experiences for patients arise in healthcare scenarios regarding both acute and chronic illnesses. Clearly, the emotions felt in these situations are very complex, as evidenced by mixed emotions that may arise concerning a surgery—on one hand, hope that the surgery will successfully treat the patient’s disorder; on the other, fear that the surgery could fail and even jeopardize the patient’s life. A patient’s journey involves moving from sub-event to sub-event within one overarching emotion episode (for example, going from an emergency room visit to an inpatient hospital stay) in a state of continuous emotional engagement [3]. Unfortunately, these heightened emotions are likely to have a negative effect on patients, and influence the choices they make. For instance, patients experiencing high levels of anxiety tend to prefer safer (low-risk, low-reward) options, while patients experiencing high levels of sadness tend to prefer more comforting (high-risk, high-reward) options [4]. Moreover, patients often feel a sensation of powerlessness and lack of control over their bodies as well as their mental states, which may ultimately result in motivational, cognitive, and emotional deficits, and even depression [5]. These negative outcomes are reflected in the emotions of healthcare providers as well, who experience a significant amount of stress that may even increase their likelihood to commit malpractice [6].

On the other hand, fortunately, positive emotions initiate upward spirals toward enhanced emotional well-being [7]. Furthermore, patients who report higher levels of positivity tend to also participate more during health care service encounters [8], which is beneficial for all parties involved in a clinical experience, improving both perceived quality of service and customer satisfaction. This underscores the importance of promoting positive emotions in one’s patients. To do so, a phenomenon called emotional contagion, or “the tendency to automatically mimic and synchronize facial expressions, vocalizations, postures, and movements with those of another person and to converge emotionally” [9], can be utilized to invoke certain emotions—a healthcare provider could purposely express positive emotions so that the patient mirrors them. Emotional contagion is also supported by neural evidence: an fMRI study revealed that observing others’ happiness activates the left anterior cingulate gyrus, while observing others’ sadness activates the right inferior frontal gyrus [10]. Nonetheless, it does not make sense for physicians to remain positive all the time, as they often need to deliver upsetting diagnoses or prognoses, so communication skills training [11, 12] would be useful in teaching them how to give bad news while minimizing detrimental effects to a patient’s mental state. In any case, it is essential that healthcare providers can adequately understand and respond to their patients’ emotions to best ensure their well-being.

The first step in understanding emotions is to define what, exactly, an emotion is. According to Paul Ekman, emotions correspond to six universal facial expressions: joy, sadness, anger, disgust, fear, and surprise [13]. However, variations in response have undermined the reliability of using facial expressions to distinguish emotions, as well as using other characteristics such as skin conductance, heart rate variability, distinctive behaviors, patterns of feeling, and neuroimaging [14]. Rather, Ortony, Clore, and Collins’ model of emotions (“the OCC model”) differentiates 22 emotions depending on the psychological scenario that causes the emotion and the subsequent affective reactions that appear [15]. These affective reactions may include bodily, expressive, experiential, and behavioral responses—for example, the emotion “fear” is reflected by wide-eyed facial expressions and anxious thoughts that are caused by a threat.

The OCC model corresponds to a psychological constructivist approach for understanding emotion. According to this approach to the mind, discrete emotion categories are represented by general brain networks rather than localized ones corresponding to specific brain functions [16]. In fact, it is the interactions of domain-general networks like the salience network that cause different emotions to arise [17]. The OCC model is compatible with this constructivist approach because it proposes that emotions are composed of a collection of behaviors rather than independent entities that then cause the behaviors. This model is especially helpful for our use, because describing emotions based on situations rather than on patterns of physiology, neurology, experience, expression, and motivation is more straightforward and reliable for computers to understand. Additionally, the OCC model organizes emotions into three categories: those concerning consequences of events, actions of agents, and aspects of objects. For instance, one can be happy or sad about a consequence (of an event), proud or ashamed about an action (of an agent), and love or hate an aspect (of an object). Later, in 2009, the emotions “interest” and “disgust” were added to the OCC model and its logical structure was changed so that it became inheritance-based [18]—this “revised OCC model” is what helped inform the structure for our own emotion model.

Even though the revised OCC model is inheritance-based and popular in the computer science realm [15], it had not yet been formally incorporated into a machine-readable artifact, so we decided to represent its information by constructing an ontological model. An ontology describes a knowledge domain by representing its concepts and the relationships that connect them. These concepts and relationships can be encoded in a machine language using semantic web languages (e.g., OWL and RDF), thereby allowing machines to process and understand the domain knowledge. The resulting software artifact can then be integrated with other software components to provide extended capabilities, perform tasks, and enable machine reasoning.
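
For illustration, here is a minimal sketch (not the authors' code) of how such concepts and relationships can be declared with the rdflib Python library; the veo namespace URI is a placeholder.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

VEO = Namespace("http://example.org/veo#")  # hypothetical namespace URI
g = Graph()
g.bind("veo", VEO)

# Declare a concept, a subconcept, and a human-readable label
g.add((VEO.Emotion, RDF.type, OWL.Class))
g.add((VEO.PositiveEmotion, RDF.type, OWL.Class))
g.add((VEO.PositiveEmotion, RDFS.subClassOf, VEO.Emotion))
g.add((VEO.PositiveEmotion, RDFS.label, Literal("Positive Emotion")))

# Serialize to a machine-readable syntax (Turtle here; OWL/XML also works)
print(g.serialize(format="turtle"))
```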

Thus, the first purpose of our Visualized Emotion Ontology (VEO) is to semantically define emotions based on the Ekman [13] and revised OCC [18] models in a machine-readable artifact; the second purpose of the VEO is to create visualizations for each of the 25 emotions in our model (see Note 1) by connecting them to shapes, lines, and colors.

To investigate the relationship between emotions and shapes, Bar and Neta [19] asked subjects to rate pictures of everyday objects (e.g., a watch or a sofa) with either curved or angular features, finding that participants liked curved objects significantly more than angular objects. Similarly, other studies have found that humans associate circles with positive emotions and triangles with negative emotions [20, 21]. In particular, humans find triangles with downward-pointing vertices to be the most unpleasant shape, because in comparison to triangles with upward-pointing vertices, viewing downward-pointing triangles resulted in significantly higher levels of activation in the threat detection areas of the brain such as the amygdala, subgenual anterior cingulate cortex (ACC), superior temporal gyrus (STG), and fusiform gyrus [22]. One plausible explanation is that these shapes mirror human facial features–when people are happy, their facial expressions naturally appear rounder, but when people are angry, their facial expressions appear more angular, much like a downward-pointing triangle [20]. Moreover, mirroring the findings about emotions and shapes, studies about emotions and lines have established that curved lines evoke a positive response while sharp lines evoke a negative response [19], and that a greater number of lines provokes a stronger response [20].

In terms of the relationship between emotions and colors, in one study, when participants were asked to categorize anger and sadness words presented in red or blue, they categorized anger words faster and more accurately when the font color was red rather than blue, and vice versa for sadness words [23]. Multiple other studies have confirmed that the color red is associated with anger [24–27] and danger [28], though it is also associated with romance [24, 29]. Additionally, people identify yellow with happiness [24, 25] and orange with cheerfulness [25], though they associate blue with sadness [23–25] as well as calmness [24]. Green is linked to success [30] and safety [28], but disgust as well [24, 25]. Brown is also associated with disgust [25] and white is connected to innocence and hope [24], while purple and black are both linked to power, contempt, sadness, and fear [24, 25].

Thus, the VEO serves as a machine-understandable artifact with human-friendly visualizations; as such, one of the future directions of this work is towards human-computer integration. The focus on using situations to define emotions in the revised OCC model could help computers understand how different emotions arise and provide some artificial emotional intelligence to machines. In the next section, we discuss some applications for emotion-related ontologies; however, the aim, aside from modeling the emotion domain and visualizations, is to incorporate our emotion images into embodied conversational agents as an alternative to more complex virtual facial features and to create an ontology-driven “face-plate”, specifically for use in healthcare applications.

Overall, we assert 1) that we can faithfully represent the OCC model of emotion as a high-quality ontological artifact in a semantic web language (OWL2) that links evidence-based visual cues to each defined emotion, and 2) that the aforesaid visualizations can accurately symbolize each emotion defined in our ontology. For the first assertion, we will evaluate the ontology using Burton-Jones’ semiotic metric suite, which measures quality along dimensions from semiotic theory. The ratings will be produced by OntoKeeper, and we will compare the results with other cognitive ontologies. For the second, we will use a survey submitted through a crowdsourcing platform to gauge the symbolic visualizations of emotions.

This paper extends our previous work [31], in which we briefly discussed the design of the VEO. Here, we expound on the detailed design motivations behind the VEO and its linked visualizations; in addition, we provide an evaluation of the ontology using Burton-Jones’ semiotic metric suite and validate the visualizations using a crowdsourcing platform.

Related studies on emotion ontologies and visualization

The Human Emotion Ontology (HEO) by Grassi [32] was an ontology aimed at annotating emotions in multimedia content. Developed in OWL, its central concept was Emotion, which incorporated components of emotions described by the W3C Emotion Incubator Group [33]. HEO also models concepts and ideas from Ekman’s and Douglas-Cowie’s classifications of emotions, the actions related to emotions described in [3], and Scherer’s appraisal model [34]. It also represents the modality of the emotion, ranging from voice and text to gesture and face. As of this writing, HEO is not publicly available, and there is no evidence of further updates since the 2009 publication.

An ontology that converges on similar ideas as the VEO is the Smiley Ontology [35] for “representing the structure and semantics” of an emoticon. In their ontology model, each emoticon is associated with an emotion, and the emoticon is further defined by concepts concerning the verbal features of the emoticon, the textual context, the analogous human facial expression, etc. Like HEO, the Smiley Ontology is no longer active. Another important work involves Garcia-Rojas and colleagues’ use of an ontology to semantically annotate MPEG-based facial animation characteristics for virtual human characters [36]. While not an emotion ontology, WN-AFFECT [37] is an extension of the WordNet ontology with annotations that describe the emotional valence of words based on the W3C lists of emotions.

The Emotion Ontology (EMO) [38] is another formal representation of emotions and related affective phenomena; it is aligned with the Basic Formal Ontology (BFO) [39, 40] and the Ontology of Mental Disease (OMD) [41], which allows it to express philosophical concepts. It distinguishes “emotions proper”, such as anger and fear, from appraisals (cognitive judgments, e.g., “appraisal of dangerousness”) and subjective feelings (inner awareness of affective feelings, e.g., “feeling restless”) [42]. We decided to align the VEO with the EMO, though we chose not to use all of the emotions in EMO because our model is more concise in the number of emotions it includes, leaving out behavioral and cognitive responses that are not technically emotions, such as confusion, boredom, and guilt. Rather than being emotions themselves, they would appear in response to an emotion; for instance, “guilt” would stem from the emotion “shame”.

Additionally, one research group utilized visualizations to model emotions by developing a mobile messaging system called eMoto for users to send and receive affective messages [43]. Users navigated a circular background of colors, shapes, and animations where the vertical axis indicates arousal (moving upward corresponds to increasing arousal, from a few slow animations to many fast animations) and the horizontal axis indicates valence (moving right corresponds to increasing positive valence, from blue-purple-red to green-yellow-orange and from sharper shapes to rounder shapes). Compared to the VEO, eMoto was driven by the user’s interpretation of the emotions of their message, so it was much more fluid in both the types of shapes and the spectrum of hues that it uses, whereas the VEO provides fixed combinations of colors and shapes representing specific emotions.

Methods

Development of the Visualized Emotion Ontology

We organized the Visualized Emotion Ontology (VEO) around the revised OCC model, pairing positive emotions (solid-lined boxes) with negative emotions (dotted-lined boxes) (Fig. 1).

Fig. 1

The VEO model of emotions framed from [15, 18]. The boxes with solid lines are of positive valence and the boxes with dotted lines are of negative valence

Our ontology is defined as a polyarchy with five branches, including Action, Aspect, Consequence, Emotion, and Visualization (Fig. 2). An Action is defined as either an Action of Self Agent or an Action of Other Agent, an Aspect is defined as either a Familiar Aspect or Unfamiliar Aspect of an object, and a Consequence is defined as either a Prospective Consequence or an Actual Consequence of an event. A Prospective Consequence can be further divided into Prospective Desirable Consequence or Prospective Undesirable Consequence, and an Actual Consequence can be further divided into a Consequence Desirable for Other or a Consequence Undesirable for Other, as well as a Confirmed Consequence or Disconfirmed Consequence. These terms are all in accordance with the revised OCC model [18]. As an example, a person would feel relief when a prospective undesirable consequence is disconfirmed, and in our model, that would be represented as a Disconfirmed Undesirable Consequence. Similarly, a person feels happy for another person when the other person experiences a desirable consequence, which we express as a Consequence Desirable for Other.
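
As an illustration of this branch structure, the following sketch declares the five branches and the Consequence subtree with rdflib; the class names are our CamelCase renderings of the terms above, and the namespace URI is hypothetical.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

VEO = Namespace("http://example.org/veo#")  # hypothetical namespace URI
g = Graph()

# The five top-level branches of the polyarchy
for branch in ("Action", "Aspect", "Consequence", "Emotion", "Visualization"):
    g.add((VEO[branch], RDF.type, OWL.Class))

# The Consequence subtree as described in the text
hierarchy = {
    "ProspectiveConsequence": "Consequence",
    "ActualConsequence": "Consequence",
    "ProspectiveDesirableConsequence": "ProspectiveConsequence",
    "ProspectiveUndesirableConsequence": "ProspectiveConsequence",
    "ConsequenceDesirableForOther": "ActualConsequence",
    "ConsequenceUndesirableForOther": "ActualConsequence",
    "ConfirmedConsequence": "ActualConsequence",
    "DisconfirmedConsequence": "ActualConsequence",
}
for child, parent in hierarchy.items():
    g.add((VEO[child], RDF.type, OWL.Class))
    g.add((VEO[child], RDFS.subClassOf, VEO[parent]))
```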

Fig. 2

Brief class level conceptualization of the VEO

An Emotion is divided into either a Positive Emotion or a Negative Emotion subclass; a Positive Emotion is further divided into Approving, Liking, and Pleased subclasses, and a Negative Emotion into Disapproving, Disliking, and Displeased subclasses. Emotions are then categorized into one or more of these subclasses in accordance with the revised OCC model. Beyond being defined hierarchically, they are defined further semantically. For instance, the emotion Joy is a subclass of Pleased and inherits the property concernsConsequence, but clarifies that the type of Consequence the property describes is an Actual Consequence; following this, the emotion Satisfaction, which is a subclass of Joy, further classifies the type of Actual Consequence as a Confirmed Desirable Consequence. Similarly, the emotion Gloating is also a subclass of Joy, but the type of Actual Consequence that it concerns is a Consequence Undesirable for Other. As another example, the emotion Anger is a subclass of both Distress and Reproach, which are subclasses of Displeased and Disapproving, respectively, so it inherits both properties of concerning an Actual Consequence and an Action of Other Agent. Finally, to show an example for the Liking/Disliking branch, the emotion Love inherits the property concernsAspect of a Familiar Aspect.
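
These property restrictions correspond to the standard OWL pattern of anonymous owl:Restriction classes. Below is a sketch of how Joy and Satisfaction might be declared with rdflib; the namespace URI and exact property URIs are assumptions.

```python
from rdflib import BNode, Graph, Namespace
from rdflib.namespace import OWL, RDF, RDFS

VEO = Namespace("http://example.org/veo#")  # hypothetical namespace URI
g = Graph()

# Joy: a Pleased emotion whose concernsConsequence is an Actual Consequence
joy_restriction = BNode()
g.add((joy_restriction, RDF.type, OWL.Restriction))
g.add((joy_restriction, OWL.onProperty, VEO.concernsConsequence))
g.add((joy_restriction, OWL.someValuesFrom, VEO.ActualConsequence))
g.add((VEO.Joy, RDFS.subClassOf, VEO.Pleased))
g.add((VEO.Joy, RDFS.subClassOf, joy_restriction))

# Satisfaction narrows the consequence type it inherits from Joy
sat_restriction = BNode()
g.add((sat_restriction, RDF.type, OWL.Restriction))
g.add((sat_restriction, OWL.onProperty, VEO.concernsConsequence))
g.add((sat_restriction, OWL.someValuesFrom, VEO.ConfirmedDesirableConsequence))
g.add((VEO.Satisfaction, RDFS.subClassOf, VEO.Joy))
g.add((VEO.Satisfaction, RDFS.subClassOf, sat_restriction))
```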

Finally, Visualization (see Fig. 3) contains the subclasses Color, Shape, Lines, and Composite Visualization. The Color class involves Black, Blue, Brown, Green, Grey, Orange, Pink, Purple, Red, White, and Yellow; the Shape class includes Circle and Triangle, which can be either a Downward Pointing Triangle or an Upward Pointing Triangle; and the Lines class consists of Curved Lines and Sharp Lines. Within the Curved/Sharp Lines classes, we defined two subclasses Curved/Sharp Line and Curved/Sharp Lines Doubled, which have the data property hasNumberOfLines with value 1 and 2, respectively. Ultimately, these three subclasses of Visualization allowed us to create the Composite Visualization class, which combines a Color and a Shape or a Color and Lines to create such visualizations as Yellow Circle and Black Sharp Line by using the object properties hasColor, hasShape, and/or hasLines. Furthermore, Composite Visualization has an association with one Emotion with an object property called isEmotionallyLinkedTo. This allows us to define individual emotion visualizations, such as Admiration Visualization, which is a combination of a Pink Circle and a Red Curved Line that is linked to the emotion Admiration.
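
As a sketch of how such a composite visualization might be assembled and linked to its emotion: the individual names below come from the text, but the namespace URI and the hasPart property used to combine the two components are our placeholders, since the exact combining property is not spelled out here.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF

VEO = Namespace("http://example.org/veo#")  # hypothetical namespace URI
g = Graph()

# Pink Circle: a composite of a color and a shape
g.add((VEO.PinkCircle, RDF.type, VEO.CompositeVisualization))
g.add((VEO.PinkCircle, VEO.hasColor, VEO.Pink))
g.add((VEO.PinkCircle, VEO.hasShape, VEO.Circle))

# Red Curved Line: a composite of a color and lines
g.add((VEO.RedCurvedLine, RDF.type, VEO.CompositeVisualization))
g.add((VEO.RedCurvedLine, VEO.hasColor, VEO.Red))
g.add((VEO.RedCurvedLine, VEO.hasLines, VEO.CurvedLine))

# Admiration Visualization combines the two and is linked to its emotion;
# hasPart is our placeholder for however the VEO relates the components
g.add((VEO.AdmirationVisualization, RDF.type, VEO.CompositeVisualization))
g.add((VEO.AdmirationVisualization, VEO.hasPart, VEO.PinkCircle))
g.add((VEO.AdmirationVisualization, VEO.hasPart, VEO.RedCurvedLine))
g.add((VEO.AdmirationVisualization, VEO.isEmotionallyLinkedTo, VEO.Admiration))
```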

Fig. 3

Concepts of Visualization classes from the VEO

We defined the Emotion class in the VEO as equivalent to the emotion process class in EMO as well as any emotions that overlapped between the two ontologies, though we must recognize that there are emotions listed in our ontology that are not in EMO (e.g., “happy-for”) and vice versa (e.g., “boredom”). The Emotion classes equivalent between the VEO and the EMO were Positive Emotion, Pride, Interest, Pleased (pleasure), Hope, Joy (happiness), Negative Emotion, Shame, Disgust, Hate, Distress (sadness), Anger, Disappointment, Fear, and Surprise. Additionally, the Action class in the VEO was set as equivalent to the behavior and behavior process classes in EMO. For each emotion in the VEO, we included a definition, a description of the visualization, as well as a link to an actual image. Our initial version of the VEO is available here: https://bioportal.bioontology.org/ontologies/VEO.

Development of visualizations for emotions

Next, in terms of visualizing the emotions, we combined results from current literature about emotions and their relationships with colors, shapes, and lines to create a unique visualization for each emotion (Table 1). The visualizations were created in Microsoft Word using standard colors (with the exception of pink, brown, and grey) and basic shapes. We decided to use two colors for each emotion–one color for the shape and one for the lines–because it is possible for a color to have either a positive or a negative connotation (e.g., red can represent anger or romance/love [24, 27–29]), so using more colors helps pinpoint the emotion that the visualization is supposed to represent. This also ensures that no two emotions have the same visualization. However, it is important to recognize that not all of the emotions in the revised OCC model have yet been examined by other studies and linked to exact colors (e.g., pride); in these instances, we made assumptions based on the connotations of the color and emotion. All of the positive emotions (e.g., joy) were portrayed as circles surrounded by curved lines and all of the negative emotions (e.g., distress) were portrayed as downward-pointing triangles surrounded by sharp lines. Also, after noticing that most of the emotions in the Ekman model overlapped with those in the OCC model (joy, distress, anger, disgust, fear), we indicated those emotions by doubling the lines surrounding the shape to increase their perceived significance.

Table 1 Visualization motifs for emotions
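
For illustration, a few of the motifs described in the following paragraphs can be expressed as a simple lookup table; the tuple structure is our own rendering and not part of the published ontology.

```python
# A few visualization motifs (values taken from the prose below) as a
# (shape, shape color, line style, line color, line count) lookup table
MOTIFS = {
    "joy":      ("circle", "yellow", "curved", "orange", 2),
    "fear":     ("downward triangle", "black", "sharp", "purple", 2),
    "hope":     ("circle", "white", "curved", "yellow", 1),
    "surprise": ("upward triangle", "yellow", "sharp", "blue", 2),
}

shape, shape_color, line_style, line_color, n_lines = MOTIFS["joy"]
print(f"joy: {shape_color} {shape}, {n_lines} {line_color} {line_style} line(s)")
```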

Thus, joy is visualized as a yellow circle surrounded by double curved orange lines due to the association of the color yellow with happiness and orange with cheerfulness [24, 25]. Distress, anger, disgust, and fear are all depicted as downward-pointing triangles surrounded by double sharp lines, with colors of the triangles and lines as blue and purple, red and black, green and brown, and black and purple, respectively. Both blue and purple are associated with sadness, red and black with anger, green and brown with disgust, and black and purple with fear [24, 25]. Hupka et al. [44] found that even across cultures (Germany, Mexico, Poland, Russia, and the United States), people associate anger with black and red, fear with black, and jealousy with red.

Though surprise is not in the revised OCC model, it is in the Ekman model, so we decided to add it as an emotion in our VEO model with the property that it arises when a consequence disconfirms a prospective consequence. However, an interesting issue arises because people can experience surprise in either a positive or negative context–for instance, in the workplace, receiving a raise would be a good surprise, while getting laid off would be a bad surprise. So in the VEO model, we included surprise as both a positive and a negative emotion: a subclass of both joy and distress. Due to these two parent classes, we expressed its colors as yellow and blue [25], and its shape as an upward-pointing triangle (because its valence is between that of a circle and a downward-pointing triangle) [22]. Thus, its complete visualization is a yellow upward-pointing triangle with double sharp blue lines.

Naturally, positive emotions are linked to joy, so the color yellow appears often in the visualizations for other positive emotions as well. For example, happy-for is visualized as a yellow circle surrounded by curved orange lines, which is the same color and shape combination as joy, except the lines are not doubled, indicating that joy is the “stronger” of the two emotions. Interest is depicted as an orange circle surrounded by curved yellow lines, which also indicates a sense of cheerfulness and joy, but the circle being orange rather than yellow lends a sense of unfamiliarity to the visualization (since interest is liking an unfamiliar aspect of an object) (See Table 2). Next, hope is portrayed as a white circle surrounded by curved yellow lines due to the association of white with hope and yellow with joy.

Table 2 Definition of positive emotions

Furthermore, we illustrated pride as a purple circle surrounded by curved yellow lines and gloating as a purple circle surrounded by curved black lines because purple has connotations of arrogance and power, corresponding to both pride and gloating. However, pride is taking joy in one’s own accomplishments, so we used yellow as a complementary color to purple to express the relative positivity of this emotion, whereas gloating is taking joy in another’s misfortunes, so we used black as a complementary color to the purple to express the relative negativity of that emotion. Similarly, gratification is visualized as a yellow circle surrounded by curved purple lines, combining the colors of the visualizations of joy and pride in accordance with its definition (Table 2).

Next, love is presented as a red circle surrounded by curved pink lines due to the color red’s connection with romance [24, 29]. The color pink is technically a lighter shade of red created from mixing red and white, so by extension, it is also connected to romance. Due to this, we depicted admiration as a pink circle surrounded by curved red lines because it is very similar to love, while at the same time, possessing more emotional distance and less romantic feelings than love. Likewise, gratitude is portrayed as a yellow circle surrounded by curved pink lines, combining the colors of the visualizations of joy and admiration in accordance with its definition (Table 2). Additionally, satisfaction is illustrated as a green circle surrounded by curved yellow lines due to the association between green and success [30] as well as yellow and joy. Meanwhile, relief is illustrated as a green circle surrounded by curved blue lines due to the association between green and safety [28] as well as blue and calmness [24].

As for negative emotions, we depicted fears-confirmed as a black downward-pointing triangle surrounded by sharp purple lines, which is the same color and shape combination as fear, except the lines are not doubled, indicating that fear is the “stronger” of the two emotions. Next, hate is portrayed as a black downward-pointing triangle surrounded by sharp red lines due to the association of black and red with fear, anger, and a sense of evil. In turn, reproach is presented as a green downward-pointing triangle surrounded by sharp black lines due to the connections between green and disgust and between black and hate. Then, we characterized pity as a brown downward-pointing triangle surrounded by sharp blue lines due to the feelings it evokes of disgust and distress. Additionally, we characterized disappointment as a red downward-pointing triangle surrounded by sharp blue lines due to the association of red with failure [28, 30] and blue with sadness.

In addition, many negative emotions are related to distress (See Table 3 for negative emotion definitions), so blue is a prominent color among these visualizations. For instance, resentment is depicted as a blue downward-pointing triangle surrounded by sharp black lines due to its connotations of distress and contempt toward another person. Shame is presented as a grey downward-pointing triangle surrounded by sharp blue lines because both grey and blue have associations with sadness and depression [24, 25], but using grey also represents the contempt toward oneself that shame evokes. Similarly, remorse is presented as a blue downward-pointing triangle surrounded by sharp grey lines, with a reversed color and shape combination because it is derived from shame but places more emphasis on sorrow than on self-hatred.

Table 3 Definition of negative emotions

Surveys

We conducted surveys (see Note 2) to validate and assess our visualizations of emotions with adult participants (n = 1082) of any gender residing in the United States, recruited through Amazon Mechanical Turk (MTurk). Studies have shown that data obtained from MTurk are at least as reliable as those obtained via traditional methods [45]. Using Qualtrics, we created a 51-question survey covering our 25 distinct emotions (25 correctly matched emotion-image pairs, 25 incorrectly matched pairs, and one control question), in which we asked MTurk participants to rate the validity of a statement matching an emotion to an image based on our model. The incorrect emotion-image pairs were selected randomly from the 24 other emotions in our model. For instance, the word “distress” displayed with our visualization for “distress” would be a correctly matched emotion-image pair, but the word “distress” displayed with our visualization for “fear” would be an incorrectly matched pair. Finally, we included one randomly placed control question in each survey (e.g., “So we can be sure that you are reading the questions carefully, please answer ’Strongly agree’ to this question.”) to identify and remove participants who rushed through the survey. Each MTurk Human Intelligence Task (HIT) included one assignment with a link to this Qualtrics survey; the HIT was launched from August 5-14, 2017, and the reward was $0.20 per assignment. In total, 1189 people completed the HIT, but 107 failed to answer the control question and were filtered out, giving the 1082 responses used in our data analysis. The order in which all questions were presented was randomized (Fig. 4 shows an example question).
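
A hypothetical sketch of this filtering and tallying step in pandas; the file and column names are assumptions, not the actual Qualtrics export.

```python
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # assumed export file

# Drop participants who failed the attention-check ("control") question
df = df[df["control_answer"] == "Strongly agree"]

# Collapse the 5-point scale per emotion-image pair: 1-2 disagree, 3 neutral,
# 4-5 agree, then report each bucket as a proportion of respondents
rating_cols = [c for c in df.columns if c not in ("participant_id", "control_answer")]
for col in rating_cols:
    ratings = df[col].astype(int)
    print(f"{col}: agree={(ratings >= 4).mean():.1%} "
          f"neutral={(ratings == 3).mean():.1%} "
          f"disagree={(ratings <= 2).mean():.1%}")
```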

Fig. 4

Example of a survey question for hope visualization

Results

Visualized Emotion Ontology

The VEO was encoded in the Protégé ontology authoring tool [46] in OWL2 format. The ontology contains a total of 126 classes, 11 object and data properties, and 25 instances. We scored the quality of the VEO using OntoKeeper, a web application currently in development [47]. We compared the VEO to a sample of five cognitive ontologies (Mental State Assessment, Emotion Ontology, Mental Functioning Ontology, the Behavior Change Technique Taxonomy, and the Cognitive Atlas Ontology), which would provide us with a baseline measurement. Results of our comparison are presented in Table 4.

Table 4 Quality scores comparing the VEO with cognitive ontologies

For the VEO, the syntactic score, a score that measures the machine-readability of the ontology, based on breaches of syntax (lawfulness metric) and utilization of ontology features (richness metric), was rated at 0.76, with lawfulness and richness at 1.00 and 0.54, respectively. The semantic score, a score that measures the label quality of the ontology based on the consistency of labeling of concepts and instances (consistency metric), the ambiguity of term labels (clarity metric), and the meaning of ontology term labels (interpretability metric), was rated at 0.97, with consistency, clarity, and interpretability at 1.00, 0.99, and 0.97, respectively.

The pragmatic score, a score that assesses the utility of the ontology based on the comprehensiveness metric (i.e., domain coverage), was 0.82. The overall quality score based on equal weighting of syntactic (0.76), semantic (0.97), and pragmatic (0.82) scores was 0.85.

We calculated z-scores to compare the VEO’s metrics with those of the sample of cognitive ontologies. The z-scores for the syntactic, semantic, and pragmatic metrics were 1.12, 0.61, and 0.39, respectively, indicating above-average machine-readability, linguistic quality, and domain coverage. The z-score for the overall quality was 0.98, indicating higher overall quality for the VEO than for the other cognitive ontologies.
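
For reference, a z-score here is the VEO's score minus the baseline mean, divided by the baseline standard deviation. A small sketch follows, with placeholder baseline values rather than the Table 4 figures; we assume a sample standard deviation, and OntoKeeper's exact convention may differ.

```python
from statistics import mean, stdev

def z_score(veo_score: float, baseline: list[float]) -> float:
    """How far the VEO's score sits above the baseline ontologies' mean."""
    return (veo_score - mean(baseline)) / stdev(baseline)

baseline_syntactic = [0.70, 0.72, 0.68, 0.65, 0.74]  # hypothetical scores
print(round(z_score(0.76, baseline_syntactic), 2))
```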

Additionally, we reviewed and conferred with each other on the ontology’s veracity, and we agreed that the ontology reflected the information described in the revised OCC model. Two of the co-authors (RL, CL) have cognitive science backgrounds.

Crowdsourced survey

In total, 1082 participants were surveyed through Amazon Mechanical Turk, and for each emotion-image pair, we determined the percentage of people that disagreed (1 or 2), were neutral (3), and agreed (4 or 5) that the image represented the emotion (Table 5).

Table 5 Survey results of visualization

For the majority of the emotions (17 in total; p < 0.001 for 16 emotions and p = 0.014 for shame), people tended to agree that our visualization matched the emotion more than they disagreed, which validates our model; these emotions were admiration, anger, fear, fears-confirmed, gratification, gratitude, happy-for, hate, hope, interest, joy, love, pride, relief, satisfaction, shame, and surprise. This conclusion is based on a rigorous hypothesis-testing procedure. Specifically, we assumed that each participant’s choice followed a multinomial distribution with parameters p1, p2, p3 corresponding to the proportions of “Disagreed”, “Neutral”, and “Agreed”, respectively. We then performed, for each of the 25 emotions, a one-sided test of the null hypothesis H0: p1 = p3 against the alternative H1: p1 < p3, i.e., that the proportion of people who agreed is greater than the proportion who disagreed. Bonferroni correction was applied to control the family-wise error rate at 5%.

P-values are reported in Table 5. In statistical hypothesis testing, the p-value quantifies the evidence from the data in favor of the alternative hypothesis over the null hypothesis; a smaller p-value indicates stronger evidence against the null hypothesis. A critical value is the p-value cut-off for deciding whether to reject the null hypothesis. In this study, the alternative hypothesis is that the proportion of participants who agreed is greater than the proportion who disagreed, while the null hypothesis is that the two proportions are equal. Accounting for the multiple testing, we reject the null hypothesis for p-values less than 0.002 (0.05/25). Significantly higher proportions of agreement than disagreement (p < 0.001) were found for 16 out of 25 emotions, including all of the emotions previously stated except shame (p = 0.014).
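
The following is a reconstruction, not the authors' code, of this one-sided test using the normal approximation to the multinomial; the counts below are hypothetical.

```python
from math import sqrt

from scipy.stats import norm

def agree_vs_disagree_pvalue(disagree: int, neutral: int, agree: int) -> float:
    """One-sided p-value for H1: p3 > p1 under a multinomial model."""
    n = disagree + neutral + agree
    p1, p3 = disagree / n, agree / n
    # Var(p3_hat - p1_hat) for a multinomial = (p1 + p3 - (p3 - p1)**2) / n
    se = sqrt((p1 + p3 - (p3 - p1) ** 2) / n)
    z = (p3 - p1) / se
    return norm.sf(z)  # upper-tail probability of the z statistic

p = agree_vs_disagree_pvalue(300, 200, 582)  # hypothetical counts
print(p, "reject" if p < 0.05 / 25 else "fail to reject")  # Bonferroni cut-off 0.002
```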

For the remaining eight emotions, more people disagreed than agreed with our visualization. However, for five of these emotions, including disappointment, disgust, gloating, pity, and remorse, more people agreed with our emotion-image pairs than they did for the incorrect emotion-image pairs. In these cases, the randomly-selected incorrect emotion-image pairs included disappointment-interest, disgust-satisfaction, gloating-gratitude, pity-admiration, and remorse-gratification, respectively. For distress, reproach, and resentment, however, more people agreed with the incorrect emotion-image pairs than they did with the correct ones; these incorrect pairs included distress-fear, reproach-resentment, and resentment-disappointment, respectively.

Discussion

In the future, we could expand the VEO by creating nuances within certain emotion types–for instance, fear-like states can range from those that are mild (e.g., concern) to those that are intense (e.g., terror). These types of states could be included as subclasses in the ontology. We also intend to expand the terminological space with some of the affective terms found in WN-AFFECT. Additionally, we could add instances in the future that represent an individual user’s emotions.

Overall, the survey results validated the accuracy of our emotion visualizations. More people agreed than disagreed that the image matched the emotion displayed for 17 emotions (with 16 of these 17 found to be statistically significant), and vice versa for eight emotions. However, only for the three emotions of distress, reproach, and resentment did people prefer the incorrect emotion-image pair to the correct one. One reason the incorrect emotion-image pair was preferred for distress could be its name: distress and sadness have slightly different connotations, and if we had used the name “sadness”, perhaps the percentage of people agreeing with our visualization would have been higher. After all, even though people thought that the image for fear represented distress (in the incorrect emotion-image pair), they still confirmed that the image for fear was accurate at a high rate (65.0%).

Additionally, based on the findings of the survey, it would be helpful in future studies to further investigate the eight emotions whose visualizations were not supported, by comparing them against different incorrect emotion-image pairs. This would allow us to understand whether the specific randomly chosen incorrect pair influenced our results or whether the results hold with different pairs. If so, these results can inform us in editing our visualizations so that they are more representative of each emotion. Our research also does not consider the use of motion, which could enhance the visualizations in the future.

This study will permit machines to utilize the VEO to interpret and understand emotions, with the purpose of improving interaction with human users, such as patients. For clarification, recall that ontologies are artifacts of encoded knowledge that help machines understand domain concepts and the relationships between them. Codifying affective knowledge would help intelligent agents, specifically conversational agents, to understand the underlying emotions during their interactions with humans. Consider an emotion like love, which according to the OCC model involves positive emotional valence concerning the appraisal of some aspect of an object, or anger, which involves negative emotional valence relating to someone’s actions and the subsequent outcomes of those actions. A software agent can potentially capture contextual information and emotional valence data and, through the use of description logic queries, reason about what the user is feeling or expressing (see Fig. 5). Using ontologies to define emotions for machines is what makes this possible. Further research could investigate processing of the user’s emotions from utterances or other modalities of expression. This would also include developing the software that interfaces with the ontology and employing it in conversational agents.
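
As a simplified stand-in for description logic queries, a SPARQL lookup against the ontology can retrieve the emotion linked to a recognized visualization; in this sketch, the file path and namespace URI are assumptions.

```python
from rdflib import Graph

g = Graph().parse("veo.owl", format="xml")  # assumed local copy of the VEO

# Which emotion is the recognized composite visualization linked to?
query = """
PREFIX veo: <http://example.org/veo#>
SELECT ?emotion WHERE {
    veo:AdmirationVisualization veo:isEmotionallyLinkedTo ?emotion .
}
"""
for row in g.query(query):
    print(row.emotion)
```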

Fig. 5

Utilization of the VEO and the processing of expression information to infer emotion of the patient. “People Patient Male Icon” by Icons-Land [48], and “Steampunk Robot Icon” by mirella.design [49] - licensed free for non-commercial use

Conclusion

Based on metrics for ontology evaluation, the Visualized Emotion Ontology (VEO) was shown to have better domain coverage, machine-readability, and linguistic quality than the selected cognitive ontologies from BioPortal. The VEO also links to composite visualizations, based on published research, that express each emotion defined in the VEO. From the Amazon Mechanical Turk survey we conducted, we determined that the majority of the visualizations accurately represented their emotions, validating our model.

The genesis of this work was to provide a means to enhance patient-provider communication for patient education by defining emotions for machines. Specifically, conversational agents assisting physicians with vaccine counseling could augment the experience by emoting through visualizations to enhance otherwise deadpan synthesized utterances. This would serve as an alternative to more complex and resource-intensive options like avatars or computer-generated faces. The visualized emotions and the VEO could presumably be utilized in other applications that involve human-computer interaction.

Notes

  1. The Ekman emotions and the revised OCC model’s list of emotions overlap, except for one emotion, surprise, which was added to the VEO.

  2. With approval by the University of Texas Health Science Center’s Committee for the Protection of Human Subjects – HSC-SBMI-17-0641

Abbreviations

ACC:

Anterior cingulate cortex

BFO:

Basic formal ontology

EMO:

The emotion ontology

HEO:

Human emotion ontology

HIT:

Human intelligence task

MTurk:

Amazon Mechanical Turk

OCC:

Ortony, Clore, & Collins

OMD:

Ontology of mental disease

OWL:

Web ontology language

OWL2:

Web ontology language, version 2

RDF:

Resource description framework

STG:

Superior temporal gyrus

VEO:

Visualized emotion ontology

W3C:

World wide web consortium

References

  1. McColl-Kennedy JR, Danaher TS, Gallan AS, Orsingher C, Lervik-Olsen L, Verma R. How do you feel today? Managing patient emotions during health care experiences to enhance well-being. J Bus Res. 2017; 79:247–259.

  2. Lerner JS, Li Y, Valdesolo P, Kassam KS. Emotion and decision making. Annu Rev Psychol. 2015; 66:799–823.

  3. Frijda NH. Moods, emotion episodes, and emotions. Handbook of emotions. New York: The Guilford Press; 1993. p. 381–404.

  4. Raghunathan R, Pham MT, Corfman KP. Informational properties of anxiety and sadness, and displaced coping. J Consum Res. 2006; 32(4):596–601.

  5. Faulkner M. Empowerment, disempowerment and the care of older people: Mark Faulkner considers the effects on patients’ independence of nursing care that empowers, and the consequences of disempowering care, with reference to two key psychological theories: learned mastery and learned helplessness. Nurs Older People. 2001; 13(5):18–20.

  6. Mor S, Rabinovich-Einy O. Relational malpractice. Seton Hall L Rev. 2012; 42:601.

  7. Fredrickson BL, Joiner T. Positive emotions trigger upward spirals toward emotional well-being. Psychol Sci. 2002; 13(2):172–5.

  8. Gallan AS, Jarvis CB, Brown SW, Bitner MJ. Customer positivity and participation in services: an empirical test in a health care context. J Acad Mark Sci. 2013; 41(3):338–56.

  9. Hatfield E, Cacioppo JT, Rapson RL. Emotional contagion. Curr Dir Psychol Sci. 1993; 2(3):96–100.

  10. Harada T, Hayashi A, Sadato N, Iidaka T. Neural correlates of emotional contagion induced by happy and sad expressions. J Psychophysiol. 2016; 30(3):114.

  11. Back AL, Arnold RM, Baile WF, Fryer-Edwards KA, Alexander SC, Barley GE, et al. Efficacy of communication skills training for giving bad news and discussing transitions to palliative care. Arch Intern Med. 2007; 167(5):453–60.

  12. Back AL, Arnold RM, Baile WF, Tulsky JA, Barley GE, Pea RD, et al. Faculty development to change the paradigm of communication skills teaching in oncology. J Clin Oncol. 2009; 27(7):1137–41.

  13. Ekman P, Friesen WV, Ellsworth P. Emotion in the Human Face: Guidelines for Research and an Integration of Findings. Elmsford, NY: Pergamon; 1972.

  14. Clore GL, Ortony A. Psychological construction in the OCC model of emotion. Emot Rev. 2013; 5(4):335–43.

  15. Ortony A, Clore GL, Collins A. The cognitive structure of emotions. Cambridge, UK: Cambridge University Press; 1990.

  16. Lindquist KA, Wager TD, Kober H, Bliss-Moreau E, Barrett LF. The brain basis of emotion: a meta-analytic review. Behav Brain Sci. 2012; 35(3):121–43.

  17. Touroutoglou A, Lindquist KA, Dickerson BC, Barrett LF. Intrinsic connectivity in the human brain does not reveal networks for ‘basic’ emotions. Soc Cogn Affect Neurosci. 2015; 10(9):1257–65.

  18. Steunebrink BR, Dastani M, Meyer JJC. The OCC model revisited. In: Proc. of the 4th Workshop on Emotion and Computing. Palo Alto: Association for the Advancement of Artificial Intelligence: 2009.

  19. Bar M, Neta M. Humans prefer curved visual objects. Psychol Sci. 2006; 17(8):645–8.

  20. Aronoff J, Woike BA, Hyman LM. Which are the stimuli in facial displays of anger and happiness? Configurational bases of emotion recognition. J Pers Soc Psychol. 1992; 62(6):1050.

  21. Larson CL, Aronoff J, Steuer EL. Simple geometric shapes are implicitly associated with affective value. Motiv Emot. 2012; 36(3):404–13.

  22. Larson CL, Aronoff J, Sarinopoulos IC, Zhu DC. Recognizing threat: a simple geometric shape activates neural circuitry for threat detection. J Cogn Neurosci. 2009; 21(8):1523–35.

  23. Fetterman AK, Robinson MD, Meier BP. Anger as “seeing red”: evidence for a perceptual association. Cogn Emot. 2012; 26(8):1445–58.

  24. Epps HH, Kaya N. Color matching from memory. In: AIC 2004 Color and Paints, Interim Meeting of the International Color Association, Proceedings. Porto Alegre: 2004. p. 18–21.

  25. Oberascher L, Gallmetzer M. Colour and emotion. In: Proceedings of AIC 2003 Bangkok: Color Communication Management. Bangkok: 2003. p. 370–374.

  26. Stephen ID, Oldham FH, Perrett DI, Barton RA. Redness enhances perceived aggression, dominance and attractiveness in men’s faces. Evol Psychol. 2012; 10(3):147470491201000312.

  27. Wiedemann D, Burt DM, Hill RA, Barton RA. Red clothing increases perceived dominance, aggression and anger. Biol Lett. 2015; 11(5):20150166.

  28. Pravossoudovitch K, Cury F, Young SG, Elliot AJ. Is red the colour of danger? Testing an implicit red–danger association. Ergonomics. 2014; 57(4):503–10.

  29. Elliot AJ, Niesta D. Romantic red: red enhances men’s attraction to women. J Pers Soc Psychol. 2008; 95(5):1150.

  30. Moller AC, Elliot AJ, Maier MA. Basic hue-meaning associations. Emotion. 2009; 9(6):898.

  31. Lin R, Amith MT, Liang C, Tao C. Designing an ontology for emotion-driven visual representations. In: 2017 IEEE International Conference on Bioinformatics and Biomedicine (BIBM). Kansas City: IEEE: 2017. p. 1280–1283.

  32. Grassi M. Developing HEO human emotions ontology. In: European Workshop on Biometrics and Identity Management. Berlin: Springer: 2009. p. 244–251.

  33. W3C Emotion Incubator Group. Available from: https://www.w3.org/2005/Incubator/emotion/XGR-emotion-20070710. Accessed 27 Jul 2017.

  34. Scherer KR, Schorr A, Johnstone T. Appraisal processes in emotion: Theory, methods, research. New York: Oxford University Press; 2001.

  35. Radulovic F, Milikic N. Smiley ontology. In: Proceedings of The 1st International Workshop On Social Networks Interoperability (SNI 2009) in conjunction with the 4th Asian Semantic Web Conference. Shanghai: 2009.

  36. Vexo F, Thalmann D, Raouzaiou A, Karpouzis K, Kollias S, Moccozet L, et al. Emotional face expression profiles supported by virtual human ontology. Comput Animat Virtual Worlds. 2006; 17(3-4):259–69.

  37. Strapparava C, Valitutti A, et al. WordNet Affect: an Affective Extension of WordNet. In: Proceedings of the International Conference on Language Resources and Evaluation. Lisbon: 2004. p. 1083–86.

  38. Hastings J, Ceusters W, Smith B, Mulligan K. The emotion ontology: enabling interdisciplinary research in the affective sciences. In: International and Interdisciplinary Conference on Modeling and Using Context. Berlin: Springer: 2011. p. 119–123.

  39. Smith B, Grenon P. The Cornucopia of Formal-Ontological Relations. Dialectica. 2004; 58(3):279–96.

  40. Grenon P, Smith B, Goldberg L. Biodynamic ontology: applying BFO in the biomedical domain. Studies in health technology and informatics. Amsterdam: IOS Press; 2004. p. 20–38.

  41. Ceusters W, Smith B. Foundations for a realist ontology of mental disease. J Biomed Semant. 2010; 1(1):10.

  42. Hastings J, Brass A, Caine C, Jay C, Stevens R. Evaluating the Emotion Ontology through use in the self-reporting of emotional responses at an academic conference. J Biomed Semant. 2014; 5(1):38.

  43. Ståhl A, Sundström P, Höök K. A foundation for emotional expressivity. In: Proceedings of the 2005 conference on Designing for User eXperience. San Francisco: AIGA: American Institute of Graphic Arts: 2005. p. 33.

  44. Hupka RB, Zaleski Z, Otto J, Reidl L, Tarabrina NV. The colors of anger, envy, fear, and jealousy: A cross-cultural study. J Cross-Cult Psychol. 1997; 28(2):156–71.

  45. Buhrmester M, Kwang T, Gosling SD. Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspect Psychol Sci. 2011; 6(1):3–5.

  46. Musen MA. The protégé project: a look back and a look forward. AI Matters. 2015; 1(4):4–12.

  47. Amith M, Tao C. A Web Application Towards Semiotic-based Evaluation of Biomedical Ontologies. In: Song D, Fermier A, Tao C, Schilder F, editors. Proceedings of International Workshop on Biomedical Data Mining, Modeling, and Semantic Integration: A Promising Approach to Solving Unmet Medical Needs (BDM2I 2015). No 1428 in CEUR Workshop Proceedings: 2015. Available from: http://ceur-ws.org/Vol-1428/BDM2I_2015_paper_5.pdf. Accessed 28 Jan 2018.

  48. Icons-Land. People Patient Male Icon. IconArchive.com. Free for non-commercial use. 2013. Available from: http://www.iconarchive.com/show/medical-icons-by-icons-land/people-patient-male-icon.html. Accessed 28 Jan 2018.

  49. mirella design. Steampunk Robot Icon. IconArchive.com. Free for non-commercial use. 2013. Available from: http://www.iconarchive.com/show/steampunk-icons-by-mirella-gabriele/Steampunk-Robot-icon.html. Accessed 28 Jan 2018.


Acknowledgements

This research was supported by the UTHealth Innovation for Cancer Prevention Research Training Program Summer Intern Program (Cancer Prevention and Research Institute of Texas grant # RP160015), the National Library of Medicine of the National Institutes of Health under Award Number R01LM011829, and the National Institute of Allergy and Infectious Diseases of the National Institutes of Health under Award Number R01AI130460. Dr. Yong Chen was supported in part by the following NIH grants: 1R01LM012607, 1R01AI130460, R01AI116794, 7R01LM009012, K24AR055259, P50MH113840.

Funding

Publication of this article was supported by the UTHealth Innovation for Cancer Prevention Research Training Program Summer Intern Program (Cancer Prevention and Research Institute of Texas grant # RP160015), the National Library of Medicine of the National Institutes of Health under Award Number R01LM011829, the National Institute of Allergy and Infectious Diseases of the National Institutes of Health under Award Number R01AI130460, and the following National Institutes of Health grants: 1R01LM012607, 1R01AI130460, R01AI116794, 7R01LM009012, K24AR055259, P50MH113840.

About this supplement

This article has been published as part of BMC Medical Informatics and Decision Making Volume 18 Supplement 2, 2018: Selected extended articles from the 2nd International Workshop on Semantics-Powered Data Analytics. The full contents of the supplement are available online at https://bmcmedinformdecismak.biomedcentral.com/articles/supplements/volume-18-supplement-2.

Author information

Contributions

The work presented here was carried out in collaboration among all authors. MA conceptualized the research project. MA, RL, and CT designed the methods and experiments. RL and MA carried out the experiments. CL provided input from cognitive science perspectives. RD and YC conducted the statistical analysis. All authors have contributed to, seen, and approved the manuscript.

Corresponding author

Correspondence to Cui Tao.

Ethics declarations

Ethics approval and consent to participate

The University of Texas Health Science Center’s Committee for the Protection of Human Subjects approved this study (HSC-SBMI-17-0641).

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Lin, R., Amith, M.T., Liang, C. et al. Visualized Emotion Ontology: a model for representing visual cues of emotions. BMC Med Inform Decis Mak 18 (Suppl 2), 64 (2018). https://doi.org/10.1186/s12911-018-0634-6
