Article Type : Research Article
Authors : Pauline G, Pierre B, Juliette P, Vitalie R, Alessandra T, Pauline N, and Nathalie E
Keywords : Emotion; Social; Laughter; Smile; Neuropsychology; Facial expression
Objective:
Facial mimicry, or congruent facial muscle activation in response to a perceived emotional facial expression, has mainly been explored with electromyography. The objectives of this study were to establish whether facial mimicry in healthy adults could be documented for joyful expressions from the visual observation of video recordings, and to test the applicability of this methodology in neuropsychology.
Method:
Twenty-two healthy participants and four brain-damaged patients (parietal or frontal lesion) were included. While being videotaped, participants judged stimuli expressing different emotions that varied in presentation medium and emotion transmitter. Three independent raters assessed participants’ happy facial expressions (presence and intensity).
Results:
Healthy participants produced more joyful expressions for stimuli expressing joy compared to other emotions, and for laughter compared to smiles, suggesting that the video-based visual observation method can be used to quantify facial mimicry. In contrast, inconsistent results were obtained for the intensity of the joyful expressions produced. The patients showed impaired performances: one patient produced significantly more joyful expressions than controls despite normal judgments, while another presented the reverse pattern.
Conclusions:
The findings obtained in healthy participants suggest that it is possible to quantify facial mimicry through visual observation, allowing an evaluation of emotional productions in clinical practice. The application in neurological patients suggests a double dissociation between explicit emotional judgment and facial mimicry, and highlights the importance of having a test available to clinically assess facial emotional productions.
Mimicry consists of the unconscious and unintentional imitation of the behavior of an interaction partner, such as the other person’s posture, prosody or facial expressions [1]. Facial mimicry refers to the congruent muscular facial activations in response to an emotional facial expression perceived in another person [2]. Even though mimicry supposedly leads to a mirroring of another’s expression, the mechanisms of emotional communication can lead to facial expressions that are congruent with the perceived emotion but expressed in another modality; for instance, we can express joy with our facial expression when laughter is heard. The notion of facial mimicry will be used in this sense in this article. Facial mimicry plays an important role in the communication of affective states [3-4], participating in social cognition [5-8]. In human adults, the existence of facial mimicry has been robustly demonstrated using facial electromyography (EMG), which consists of recording the action potentials of the motor units of the facial muscles through electrodes placed on the surface of the skin. Studies show that these facial reactions are rapidly triggered, in less than 500 ms [9-10], even when the stimuli are not consciously perceived [11-12]. These results suggest that emotional stimuli provoke an automatic muscular facial reaction. Using EMG, facial mimicry has been documented for different types of material: images of faces [2,9,12-15], video excerpts of faces [15-19], morphed videos in which a face shifts from a neutral to an emotional expression [10,20] and sounds [21]. However, an absence of mimicry has sometimes been observed with pictures of faces [15,22-23]. Considering the nature of the emotion, the congruence between the stimulus emotion and the one expressed by contagion is robust for anger and joy [2,9-10,12-13,17-18,21,24-26]. Data for sadness and surprise are scarcer [14], and even diverging for fear and disgust [11,14-15,17-18].
Several variables may affect the presence of mimicry as measured by EMG (e.g. gender and/or age, personality, task instructions). For example, stronger facial mimicry is observed in women compared to men [14,27-29]. Older adults are also more expressive than young people, but only for disgust [19], no difference being documented for joy or anger [19,30-31]. Regardless of sex and in younger people, facial mimicry has been found to be more pronounced in highly empathetic people than in less empathetic people, who present poor to no mimicry [15,32-35]. The task in which the subject is engaged is also thought to have an influence, with a non-emotional judgment generating a reduction [36] or even a disappearance of facial mimicry [24] in comparison to processing centered on emotions. Concerning the emitter of the emotion, an increase in mimicry is observed when the emotion is expressed by a woman rather than a man for joy, sadness and anger [27,37-38], regardless of the sex of the observer. As for possible interactions between emotions, a paradoxical habituation effect could exist, since participants report a form of disgust after numerous laughter presentations [39]. These studies have several limitations, however. Even though the influence of the emotion emitter’s sex has been reported, most studies only include women [9,18,24,26,40-43]. The stimuli are often very selective, conveyed by a single type of material such as images, sounds or videos, and focus on a limited number of emotions. Most of them are emotionally intense, presented in a static manner [2,9,12-13,14,25,44] and selected from the “images of facial affect” [45]. In an attempt to overcome some of these drawbacks, some studies used more artificial stimuli, such as avatars [22,26,42] or morphed images [10,20,23,46]. The effect of the task to be accomplished has seldom been considered [47-49], the stimuli often being processed in a passive manner, without any specific instructions.
The perception of emotional facial expressions has been widely documented, whereas the assessment of emotional facial productions has been neglected, owing to the cumbersomeness of the methodology needed to objectify it. In order to collect expressive variables, previous studies have mainly used EMG as the criterion reflecting facial mimicry. With this cumbersome methodology, however, it is difficult to document facial productions, whether experimentally or in clinical practice. Despite its obvious clinical interest with respect to neuropsychological or psychiatric diagnoses, and despite the importance of facial productions in social cognition [5-8], simple visual observation instead of EMG has rarely been applied [50-51]. Moreover, the results of these two studies are debatable, since the former used a number of facial expressions too small to be exploitable experimentally, whilst the latter only demonstrated the feasibility of this methodology. The aim of the present study was to establish whether the facial mimicry of healthy adults could be reliably documented by untrained coders from the simple visual observation of video recordings. This approach was chosen with the possible transfer to clinical practice in mind. We focused on joyful expressions because of the robust facial mimicry effects demonstrated with EMG for this emotion. We predicted that healthy participants would exhibit more joyful expressions, and that their expressions would be more intense, for stimuli expressing joy than for other emotions. The study also had two secondary goals. The first was to identify the experimental conditions that make it possible to observe the greatest number of joyful facial responses in healthy participants, so as to maximize its future neuropsychological application. To this end, we manipulated different variables linked to the stimuli (material, emitter of the emotion) and to the task carried out by the participants. Consistent with the literature, we predicted that healthy participants’ facial expressions would be more frequent and more intense for dynamic joyful stimuli than for static joyful stimuli, and for stimuli judged on an emotional rather than a non-emotional characteristic. The second goal was to verify the applicability of this methodology in neuropsychology in order to document the emotional expressive capacities of patients.
We therefore included four patients with a frontal or parietal lesion, who were likely to present sociocognitive disorders with respect to humor [52-58], theory of mind [59-60] and, more generally, the mirror neuron system [61-62]. We predicted that the patients might be impaired both in the perception and in the expression of emotions.
Materials and Methods
Participants: Twenty-two healthy participants (11 men and 11
women; mean age = 22.64 ± 1.59 (20-25); average number of
years of study = 14.36 ± 1.62 (12-17)), without any reported
neurologic or psychiatric history, participated in this study. The
data included were obtained in compliance with the Helsinki
Declaration. All the participants signed a first informed consent
form. In order to avoid biasing the results of the study, the participants were filmed under a cover story: the goal was initially presented as the observation of ocular movements during the processing of social stimuli. At the end of the protocol, the participants signed a second informed consent form (necessary for the use of their experimental data), which indicated the real goal (the analysis of emotional facial expressions). No participant withdrew from the study at this second signature.
The study also included four patients (Table 1). JM, a 23-year-old man with an educational level of 15 years, presented with an intra-parenchymal expansive cerebral process of the posterior part of the right parietal lobe (Table 1) indicative of a grade III glioma, revealed by two epileptic seizures. No new seizure had occurred since the introduction of treatment. SL, a 27-
year-old woman with an educational level of 15 years, presented
an intra-parenchymal expansive cerebral process of the left
precentral region indicative of a low-grade glioma. LM, a 54-
year-old man with an educational level of 14 years, presented
with an intra-parenchymal expansive cerebral process of the left
superior frontal lobe indicative of a low-grade glioma, revealed
by one epileptic seizure. IS, a 56-year-old woman with an
educational level of 12 years, presented with an expansive intra-parenchymal cerebral process of the right frontal region indicative
of a meningioma. Only IS was tested postoperatively, while the
other three patients (JM, LM, SL) were tested preoperatively.
Patients also signed the two consent forms for the experimental
protocol along with a third one allowing the exploitation of these
clinical data and cerebral imagery for this study.
Material: The stimuli for the facial mimicry task were free of rights or their use for research had been approved by their authors (e.g. https://freesound.org/; https://www.istockphoto.com/fr; https://www.youtube.com/) (Figure 1). The average duration of the stimuli (static and dynamic) was 6 seconds (± 2.15 seconds), and they were selected by the four authors for their cultural naturalness for a French public. Seventy-two stimuli expressed a joyful emotion (36 smiles; 36 laughs), twenty-seven a negative emotion (anger, fear or sadness) and thirty-six surprise (a positive emotion), totalling 135 stimuli. Concerning the laughs, one third of the items were static visual stimuli (n=12), one third dynamic visual stimuli (n=12) and one third dynamic acoustic stimuli (n=12). For the smiles, one third were static stimuli (n=12) and the other two thirds were dynamic stimuli (n=24). For the laughs, the dynamic visual stimuli were further broken down into strictly visual stimuli (n=6) and stimuli associated with sounds (n=6). For all the joyful stimuli, the emitter was a baby (n=24), a woman (n=24) or a man (n=24).
Complementary to the experimental task, questionnaires were administered. Optimism was evaluated with a French adaptation of the Life Orientation Test (LOT-R [63]), a self-report questionnaire comprising 10 items for which the participants had to indicate to what extent they agreed with each proposition on a 5-point scale (0 = don’t agree at all; 4 = agree completely). The anxio-depressive profile was evaluated with the HADS self-report questionnaire (Hospital Anxiety and Depression Scale [64]); answers to the 14 questions were given on a 4-point Likert scale and concerned the previous week. Empathy was measured with the IRI self-report questionnaire (Interpersonal Reactivity Index [65]), consisting of 28 items for each of which the participant had to indicate to what extent the statement corresponded to him/her on a 5-point Likert scale (1 = doesn’t describe me at all; 5 = describes me perfectly), giving an overall score and four sub-scores (“Perspective taking”, “Empathic concern”, “Personal distress” and “Fantasy”).
Procedure: Following the reading of the information note and the
signature of the first informed consent form, the participants
completed the two facial mimicry tasks, during which they were
filmed. One of these tasks focused on a specifically emotional
goal (judgment of the subjective emotional intensity) whereas the
second focused on the natural or real aspect of the stimuli
(judgment of realism). The questionnaires were then
administered, followed by the second informed consent.
The judgment of emotional intensity task consisted of indicating orally how intense the presented emotion was on a 10-point scale (1 = not at all intense; 10 = very intense). For the judgment of realism task, the participant had to indicate how natural or real the presented emotion was on a 10-point scale (1 = not at all natural or real; 10 = very natural or real). Each participant completed both tasks. Furthermore, two series (n=67, n=68) were constituted from the 135 original stimuli. The order of the tasks and of the two series of stimuli was counterbalanced between participants according to a Latin square design. The total time taken to complete the task was recorded for each participant. At the end of the test, participants were asked to evaluate on Likert scales their feelings regarding the task (from 1, very pleasant, to 5, very unpleasant) and its subjective duration (from 1, very short, to 5, very long).
Coding of facial mimicry: The participants were filmed during the facial mimicry task with a camera installed on the scorers’ personal computers. A sound was added between each of the 135 stimuli in order to segment the participants’ videos more easily for each stimulus (Windows “Photos” application). Once this segmentation had been achieved, the raters scored the participants’ facial expressions, without seeing the stimuli, from the segmented files, which were presented randomly and without sound.
Three independent scorers, without any specific training in the processing of facial emotions, assessed the participants’ joyful facial expressions. A binary scoring of joyful emotions was applied for each stimulus (presence or absence of a joyful facial expression), along with a quantitative scoring of the expressive intensity (from 0, “lack of expression”, to 5, “very strong expression”). Analysis of the film for each participant took about six and a half hours (one hour of segmenting, one and a half hours of scoring per scorer, one hour of data entry).
We analyzed two expressive variables (the number of joyful expressions and their expressive intensity) as well as two behavioral variables (judged emotional intensity and realism) (Table 2). Considering the number of participants, non-parametric tests were applied (Kruskal-Wallis ANOVAs, Wilcoxon tests, Spearman correlations). Each patient's performance was compared to that of the group of healthy participants with Crawford's t-tests. The JAMOVI program [66] was used and the significance threshold was set at p≤0.05.
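The single-case comparison can be made concrete: Crawford's t-test compares one patient's score with a small control sample using a modified t statistic with n−1 degrees of freedom. A minimal Python sketch, with illustrative values rather than the study's data (the function name is ours):

```python
from math import sqrt
from statistics import mean, stdev

from scipy import stats


def crawford_t(case_score, control_scores):
    """Crawford & Howell single-case t-test: compare one patient's score
    with the mean of a small control sample (df = n - 1)."""
    n = len(control_scores)
    m, s = mean(control_scores), stdev(control_scores)
    t = (case_score - m) / (s * sqrt((n + 1) / n))
    p = stats.t.sf(abs(t), df=n - 1)  # one-tailed p-value
    return t, p


# Illustrative: a patient producing 15 joyful expressions vs. 8 controls.
t, p = crawford_t(15, [10, 12, 11, 13, 12, 11, 10, 12])
```

With these illustrative numbers the patient's score lies significantly above the control mean (p < 0.01); the inflation factor sqrt((n+1)/n) is what distinguishes this test from a plain z-score against the control mean.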
Results of the healthy participants
Effect of the order of tasks and the series of stimuli: The task order and series of stimuli did not have any significant effect on the average number of joyful expressions (tasks: U=47, p=0.4; series: U=42, p=0.24), their intensity (tasks: U=57, p=0.85; series: U=40, p=0.19), the judgment of emotional intensity (tasks: U=56, p=0.79; series: U=55.5, p=0.77) or the judgment of realism (tasks: U=57, p=0.85; series: U=46, p=0.37). The data of the four conditions were therefore pooled in the following analyses. Effect of demographic variables: The sex of the participants did not have a significant effect on the number of joyful expressions (U=52, p=0.63), their intensity (U=38, p=0.16), the judgment of emotional intensity (U=44, p=0.31) or of realism (U=45, p=0.35). The data of the two sexes were therefore pooled in the following analyses. The following correlations were not significant: between the number of joyful expressions for joyful stimuli and age or level of education (age: rho=-0.11, p=0.64; level of education: rho=0.4, p=0.07); between the intensity of the joyful expressions for joyful stimuli and age or level of education (age: rho=0.12, p=0.59; level of education: rho=0.23, p=0.3); between the judgment of emotional intensity and age or level of education (age: rho=-0.2, p=0.1; level of education: rho=-0.2, p=0.38). A significant correlation was found, however, between the judgment of realism and both age and level of education (age: rho=0.44, p=0.04; level of education: rho=0.42, p=0.05).
Transversal scores
The interrater reliability was measured with correlations between the scorers’ ratings, averaging the coefficients across the three scorers. The interrater reliability was high for the average number of joyful expressions (rho=0.71, p<0.001) and for the mean intensity of joy expressed (rho=0.71, p<0.001). The healthy participants judged the task as relatively pleasant (mean score of 2.7) and as moderately short (mean score of 3.1). The average duration of the task was 24.6 minutes.
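As a sketch of how such interrater agreement can be computed, assuming pairwise Spearman correlations averaged over the three rater pairs (the ratings and function name below are illustrative, not the study's data):

```python
from itertools import combinations

from scipy.stats import spearmanr


def mean_pairwise_spearman(ratings):
    """Average Spearman rho over all pairs of raters.
    `ratings` maps each rater's name to their scores per stimulus."""
    rhos = []
    for a, b in combinations(ratings, 2):
        rho, _ = spearmanr(ratings[a], ratings[b])
        rhos.append(rho)
    return sum(rhos) / len(rhos)


# Illustrative intensity ratings from three raters on five stimuli.
ratings = {"rater1": [1, 2, 3, 4, 5],
           "rater2": [2, 3, 4, 5, 6],
           "rater3": [1, 3, 2, 5, 4]}
rho = mean_pairwise_spearman(ratings)
```

Spearman's rho only requires ordinal agreement, which suits the 0-5 intensity scale; an intraclass correlation would be an alternative if absolute agreement between raters mattered.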
Effects of the factors linked to the stimuli
Judgment of the emotional intensity: The healthy participants judged the stimuli expressing joy as significantly more intense than those expressing a negative emotion (W=65, p=0.05). On the other hand, the intensity was judged as similar when the stimuli expressed joy in comparison to any other emotion (W=98, p=0.37) or to surprise considered separately (W=128.5, p=0.96). Furthermore, when the stimuli corresponded to laughter, the emotional intensity was evaluated as significantly higher than when the stimuli expressed another emotion (W=240, p<0.001), whether a negative emotion (W=238, p<0.001) or surprise (W=244, p<0.001). The stimuli corresponding to laughter were also judged as significantly more intense than those corresponding to a smile (W=253, p<0.001). The reverse profile was observed for smiles, which were judged as significantly less intense than the other emotions (W=2, p<0.001), whether negative emotions (W=3, p<0.001) or surprise (W=15, p<0.001). The dynamic stimuli expressing joy were judged as significantly more intense than the static stimuli (W=252, p<0.001). Among the dynamic stimuli, the visual-and-acoustic dynamic stimuli (W=174.5, p=0.01) and the visual dynamic stimuli (W=231, p<0.001) were perceived as more intense than the acoustic dynamic stimuli. On the other hand, the difference between visual dynamic stimuli and visual-and-acoustic dynamic stimuli was not statistically significant (W=164, p=0.23). Concerning the emitter of the expressed emotion, the joy expressed by the babies was judged as significantly more intense than that expressed by the adults (W=52, p=0.02), but no significant difference was found depending on the sex of the adult emitter (W=65, p=0.08). Male and female participants’ judgments did not differ according to the emitter of the emotion (male emitter: W=53.5, p=0.69; female emitter: W=49.5, p=0.51).
Judgment of realism
The healthy participants judged the stimuli expressing joy as significantly more natural than those expressing another emotion (W=225, p=0.002), whether a negative emotion (W=209, p=0.006) or surprise (W=252, p<0.001). Similarly, when the stimuli corresponded to laughter, the expressed emotion was judged as significantly more natural than when the stimuli expressed another emotion (W=253, p<0.001), whether a negative emotion (W=253, p<0.001) or surprise (W=253, p<0.001). The stimuli corresponding to laughter were also judged as significantly more natural than those corresponding to a smile (W=253, p<0.001). When the stimuli corresponded to a smile, the expressed emotion was judged as significantly more natural in comparison to surprise considered separately (W=188.5, p=0.05), but no significant difference was found in comparison to all the stimuli expressing an emotion other than joy (W=92, p=0.28), while smiles were judged as less natural than the negative emotions (W=3, p<0.001). The dynamic stimuli expressing joy were judged as significantly more natural than the static stimuli expressing joy (W=236, p<0.001); more particularly, the visual-and-acoustic dynamic stimuli (W=253, p=0.01) and the visual dynamic stimuli (W=235, p<0.001) were judged as more natural than the acoustic dynamic stimuli. On the other hand, the difference between visual dynamic stimuli and visual-and-acoustic dynamic stimuli was not statistically significant (W=99, p=0.84). Concerning the emitter of the expressed emotion, the joy expressed by the babies was judged as significantly more natural than that expressed by the adults (W=0, p<0.001). The joy expressed by a man was judged as more natural than that expressed by a woman (W=166, p=0.02). Male and female participants’ judgments did not differ according to the emitter of the emotion (male emitter: W=51, p=0.58; female emitter: W=56.5, p=0.84).
Number of joyful expressions
The healthy participants produced more joyful expressions when the stimuli expressed joy than when they expressed another emotion (W=20, p<0.001), whether a negative emotion (W=238, p<0.001) or surprise (W=202, p=0.01). Among the joyful stimuli, the healthy participants produced more joyful expressions when the stimuli corresponded to laughter than when they expressed another emotion (W=246, p<0.001), whether a negative emotion (W=248, p<0.001) or surprise (W=18, p<0.001). The healthy participants also produced more joyful expressions for smiles than for the negative emotions (W=46, p=0.009), but this difference disappeared when all the other emotions were considered (W=169, p=0.18) or solely surprise (W=137, p=0.75). Furthermore, the number of joyful expressions was higher for laughs than for smiles (W=248, p<0.001). The dynamic stimuli generated more joyful expressions than the static stimuli (W=0, p<0.001). Among the dynamic stimuli, the visual-and-acoustic dynamic stimuli (W=68, p=0.06) and the visual dynamic stimuli (W=183.5, p=0.019) led to a greater number of joyful expressions than the acoustic dynamic stimuli. No significant difference was observed between the visual dynamic stimuli and the visual-and-acoustic dynamic stimuli (W=117, p=0.33). Concerning the emitter of the expressed emotion, babies elicited more joyful expressions than adults (W=66, p=0.05), but no significant difference appeared depending on the sex of the adult emitter (W=127.5, p=0.69). The task carried out did not have a significant effect on the number of joyful expressions produced (W=149.5, p=0.24). In order to target the number of expressions specifically related to facial mimicry, as opposed to the total number of joyful expressions produced, we subtracted, for each participant, the number of joyful expressions obtained for stimuli expressing a negative emotion from the number of joyful expressions recorded for joyful stimuli.
With this weighting, the participants again produced on average a greater number of joyful expressions for laughs than for smiles (W=5, p<0.001). Considering the greater number of joyful expressions collected for the visual dynamic stimuli (visual stimuli and visual-and-acoustic stimuli) and for babies, we applied the same weighting to these variables. The difference between laughs and smiles was significant for the visual dynamic stimuli (W=218, p=0.002) but not for the babies (W=152, p=0.009). Men did not produce more joyful expressions than women according to the emitter of the emotion (male emitter: W=49.5, p=0.51; female emitter: W=48.5, p=0.47).
Intensity of the emotion expressed
The intensity of the joyful expressions (Table 1) did not differ between the stimuli expressing joy and those expressing another emotion (W=89, p=0.24), including surprise (W=110, p=0.87). On the other hand, the intensity of the joyful expressions was significantly higher when the stimuli expressed a negative emotion in comparison to joy (W=59, p=0.03). When laughter was considered separately among the joyful stimuli, the intensity did not differ for laughter compared to the other stimuli (W=164, p=0.24), whether negative emotions (W=130, p=0.92) or surprise (W=176, p=0.11). On the other hand, smiles considered separately among the joyful stimuli generated less intense joyful expressions than the other emotions (W=46, p=0.01), whether negative emotions (W=31, p=0.001) or surprise (W=185, p=0.06). The joy expressed was more intense for laughter than for smiles (W=223, p<0.001). In order to target the emotional intensity of joyful expressions specifically related to facial mimicry, as opposed to that produced overall, we subtracted the intensity of the joyful expressions recorded for the stimuli expressing negative emotions from the intensity of the joyful expressions for the joyful stimuli. With this method of calculation, the participants expressed joy more intensely in response to laughter than to smiles (W=223, p=0.001). The joyful expressions produced by the men were not significantly more intense than those of the women according to the emitter of the emotion (male emitter: W=58, p=0.92; female emitter: W=42, p=0.25).
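The weighting used in the two preceding analyses (subtracting the response to negative stimuli from the response to joyful stimuli, then comparing laughs and smiles with a paired Wilcoxon test) can be sketched as follows; all counts are hypothetical, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-participant counts of joyful expressions
# produced for laughs, for smiles, and for negative stimuli.
laughs = np.array([14, 12, 15, 10, 13, 11, 12, 14])
smiles = np.array([6, 5, 8, 4, 7, 5, 6, 7])
negative = np.array([2, 1, 3, 1, 2, 2, 1, 2])

# Mimicry-specific scores: responses to negative stimuli are
# subtracted, as in the weighting described in the text.
laugh_score = laughs - negative
smile_score = smiles - negative

# Paired Wilcoxon signed-rank test on the weighted scores.
stat, p = wilcoxon(laugh_score, smile_score)
```

Note that in this paired laugh/smile contrast the common baseline cancels out algebraically; the weighting matters when each condition is examined against its own baseline-corrected level.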
Correlations
The correlation between the number of joyful expressions for the joyful stimuli and the intensity of the joyful expressions was not significant (rho=0.19, p=0.4). The correlations between the number of joyful expressions and the questionnaire scores (HAD, LOT-R, IRI) only revealed a statistical link with the LOT-R (rho=0.48, p=0.02). No significant correlation was found between the intensity of the joy expressed and the questionnaire scores. The number of joyful expressions for the joyful stimuli and their intensity were not significantly correlated with the judgments of emotional intensity (number of joyful expressions: rho=0.01, p=0.97; intensity: rho=0.04, p=0.88) or of realism (number of joyful expressions: rho=-0.07, p=0.75; intensity: rho=0.03, p=0.91). The correlations between these same variables and the questionnaire scores (HAD, LOT-R, IRI) revealed a moderate statistical link only between the judgment of emotional intensity and the scores of the LOT-R (rho=0.47, p=0.03), the IRI (rho=0.46, p=0.02) and the IRI “Perspective taking” sub-score (rho=0.55, p=0.01), and between the judgment of realism and the scores of the IRI (rho=0.44, p=0.04) and the IRI “Empathic concern” sub-score (rho=0.43, p=0.05). The group of healthy participants was split into two subgroups according to the IRI score: the half with the highest scores formed the “high empathy” group, the other half the “low empathy” group. The “high empathy” group did not express a greater number of joyful expressions (U=44, p=0.3) or more intense joyful expressions (U=49, p=0.48) than the “low empathy” group, but judged the joyful expressions as more intense (U=30, p=0.05) and more natural (U=12, p=0.02). For the HAD, no participant exceeded the depression cut-off. For anxiety, six participants obtained a score greater than or equal to 11, indicating a risk of anxiety. Since social anxiety may negatively moderate facial mimicry (Dimberg and Thunberg, 2007), we controlled for the effect of pathological anxiety on the number of happy facial expressions produced for happy items and on their intensity, by contrasting the results of the six participants with a pathological anxiety score with those of the remaining participants. The results did not show a significant difference between the groups (number of joyful expressions: U=40, p=0.59; intensity of joyful expressions: U=32, p=0.26; judgment of emotional intensity: U=44.5, p=0.83; judgment of realism: U=40, p=0.58).
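The median-split comparison can be sketched as follows, with a Mann-Whitney U test between the two empathy subgroups (illustrative scores and a hypothetical function name, not the study's data):

```python
import numpy as np
from scipy.stats import mannwhitneyu


def median_split_compare(iri_scores, counts):
    """Split participants at the median IRI score, then compare the
    joyful-expression counts of the two halves (Mann-Whitney U)."""
    iri = np.asarray(iri_scores)
    counts = np.asarray(counts)
    high = counts[iri > np.median(iri)]   # "high empathy" half
    low = counts[iri <= np.median(iri)]   # "low empathy" half
    u, p = mannwhitneyu(high, low, alternative="two-sided")
    return u, p


# Illustrative: six participants' IRI scores and expression counts.
u, p = median_split_compare([50, 60, 70, 80, 90, 100],
                            [5, 6, 7, 8, 9, 10])
```

Scores equal to the median fall into the "low" group here; how ties at the median are assigned is a design choice that should be reported.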
Results of the Patients
Neuropsychological data
As indicated in Table 1, the patient JM presented a normal IQ (107, WAIS-IV), with a dissociation between his preserved verbal performances (episodic memory, language) and his affected visuo-spatial capacities (constructive disturbances, problems with the perception of overall shape, low visual abstraction, difficulty reading time on clock faces, slowness in copying tasks). These data are in favor of a classic functional cerebral lateralization, with dysfunctions at the level of the minor hemisphere. The patient LM also presented a normal IQ (90, WAIS-IV), with a dissociation between his preserved verbal performances (episodic memory, language) and his affected visuo-spatial capacities (constructive disturbances, visual abstraction, bells test time, simultagnosia). These data are in favor of a reversed functional cerebral lateralization, with dysfunctions at the level of the minor hemisphere. The patient SL presented a low-normal IQ (82, WAIS-IV), with a selective disturbance of numeracy and language skills and a preservation of the other skills (episodic memory, constructive and gnosic skills). These data are in favor of a classic functional cerebral lateralization, with dysfunctions at the level of the major hemisphere. The patient IS presented a normal IQ (95, WAIS-IV), with a disturbance of constructive and visual executive skills (abstraction, planning, attention) and a preservation of verbal executive, numeric, lexical and gnosic skills. These data are in favor of a classic functional cerebral lateralization, with dysfunctions at the level of the minor hemisphere.
The patients JM, LM and SL did not present a negativity bias (LOT-R scores: JM: t=-0.14, p=0.44; LM: t=-1.02, p=0.16; SL: t=-0.72, p=0.24) or a pathological anxio-depressive tendency (HAD anxiety scores: JM: t=-0.7, p=0.25; LM: t=-0.23, p=0.41; SL: t=0.49, p=0.32; HAD depression scores: JM: t=0.45, p=0.33; LM: t=-0.79, p=0.22; SL: t=0.45, p=0.33), and their empathy was normal (IRI scores: JM: t=-1.16, p=0.13; LM: t=-0.47, p=0.32; SL: t=-0.16, p=0.43). The patient IS showed neither a pathological anxiety tendency (HAD anxiety score: t=1.44, p=0.08) nor reduced empathy (IRI score: t=1.03, p=0.16), but presented a negativity bias (LOT-R score: t=2.73, p=0.006) and a pathological depressive tendency (HAD depression score: t=2.93, p=0.004).
Judgments
For the task of judgment of emotional intensity, there was no significant difference between the scores of the patients JM, LM and SL and those of the healthy participants, regardless of the category of the items (Table 2). However, the patient IS judged the stimuli to be more intense than the healthy participants did, regardless of the emotion expressed. Moreover, for joyful stimuli, the patient IS judged the emotion to be more intense than the healthy participants did, regardless of the emitter and the emitter's sex, but selectively for static and acoustic stimuli. For the task of judgment of realism, there was no significant difference between the scores of the patients JM and IS and those of the healthy participants, regardless of the category of the items. The patient LM judged the stimuli to be more real than the healthy participants did when they expressed joy (laughter and smiles) or surprise, and the patient SL judged the stimuli to be more real than the healthy participants did, but selectively for smiles. For joyful stimuli, the patient LM judged the emotion to be more real than the healthy participants did when the stimuli were static and expressed by male adults, and the patient SL judged the emotion to be more real when the stimuli were static and expressed by women. The patients JM, LM and SL also evaluated the characteristics of the tasks similarly to the healthy participants (pleasantness: JM: t=0.89, p=0.32; LM: t=-1.04, p=0.16; SL: t=-1.04, p=0.16; subjective duration: JM: t=-0.13, p=0.45; LM: t=-0.13, p=0.45; SL: t=-0.13, p=0.45) and their completion times were comparable (objective duration: JM: t=-0.92, p=0.18; LM: t=0.7, p=0.25; SL: t=-0.92, p=0.18).
The patient IS also evaluated the duration of the tasks similarly to the healthy participants (subjective duration: t=-0.13, p=0.45) and her completion time was comparable (objective duration: t=-0.92, p=0.18), but she found the tasks to be more pleasant than the healthy participants did (t=-2.57, p=0.009).
Number of joyful expressions
In view of the results obtained in healthy participants, only the
number of joyful facial productions was analyzed.
When compared with the healthy participants (Table 2), no
significant difference was found for the joyful expressions of the
patient LM, regardless of the category of the items. The
patients SL and IS expressed significantly more joy, but
selectively when the item expressed a negative emotion (SL:
t=2.67, p=0.01; IS: t=2.67, p=0.01). No other significant
difference was observed between the joyful expressions of the
patients SL and IS and those of the healthy participants,
regardless of the category of the items.
Moreover, the patient JM expressed significantly more joy when
the item expressed a joyful emotion (t=2.24, p=0.02). This greater
number of joyful expressions was observed both for stimuli
expressing laughter (t=2.36, p=0.01) and smiles (t=2.01, p=0.03),
regardless of the sex of the emitter (woman: t=2.47, p=0.01;
man: t=1.75, p=0.05) and the task carried out (judgment of
realism: t=2.11, p=0.02; judgment of emotional intensity: t=2.01,
p=0.03). Concerning the emitter of the emotion, the patient JM
expressed significantly more joy when the emitter was an adult
(t=2.11, p=0.02), while the difference with healthy participants
was smaller when the emitter was a baby (t=1.6, p=0.06).
Concerning the presentation format of the stimuli, the patient JM
expressed significantly more joy when the item was presented in
the form of images (t=3.62, p=0.001), with a lesser difference
with healthy participants for dynamic items (t=1.47, p=0.08),
whatever the modality (sounds: t=1.61, p=0.06; multimodal
videos: t=1.21, p=0.12; unimodal videos: t=0.67, p=0.26).
However, the patient JM also expressed significantly more joy
than the healthy participants when the item expressed a negative
emotion (t=3.39, p=0.001), another emotion than joy (t=3.2,
p=0.002) or surprise (t=2.65, p=0.01).
Discussion
Since EMG, the technique previously used to document facial
mimicry experimentally, is extremely cumbersome and precludes
any transfer to clinical practice, the goal of this study was to show
the feasibility of quantifying, from the simple observation of
videos, the number of joyful facial expressions produced in
reaction to emotional stimuli. Four elements seem to support the
validation of simple observation as an alternative methodology.
Firstly, and in accordance with our main hypothesis, the number
of emotions produced was sufficiently high when the stimuli
expressed joy (40% of the productions of the healthy
participants), enabling a quantitative analysis to be applied on this
variable. Even though this protocol is less precise than
electromyography, which also allows the recording of non-visible
muscular responses, the video observation of faces could enable the documenting of joyful facial mimicry with less cumbersome
equipment. Secondly, a greater number of joyful expressions were
recorded for stimuli involving laughter than other emotions or
smiles. This suggests that the joyful facial productions observed
are not incidental and are believed to be linked on the one hand to
the nature of the emotions presented and on the other hand to the
intensity of the joyful expressions, in favor of facial mimicry.
Thirdly, the scoring of the different observers was convergent.
This high interrater reliability, in the absence of any training
regarding the scoring of facial expressions and without seeing the
stimuli, would allow easy application in standard clinical practice.
This supports the results of Sato and Yoshikawa [51] who
reported the validity of naked-eye visual observation both with
qualified scorers and naïve evaluators. Our results show however
a higher occurrence of joyful facial expressions (double that
reported by Sato and Yoshikawa [51]), which can be explained
by the use of more varied stimuli than those used by these
authors. Fourthly, the tasks seem to be well tolerated by the
participants since the stimuli were judged as natural, particularly
the joyful stimuli, and the task duration was evaluated as short.
All these elements warrant considering future studies based on the
present methodology to quantify facial mimicry from video
recordings.
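The interrater convergence invoked above can be checked with a chance-corrected agreement statistic; Cohen's kappa for two raters is one plausible choice. This is a sketch under assumptions: the paper does not state which reliability statistic was computed, and the presence/absence codings below are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters (Cohen's kappa).
    One plausible way to quantify the interrater convergence described
    in the text; the statistic actually used is not named there."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreements
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical presence (1) / absence (0) codings of joyful expressions
a = [1, 1, 0, 1, 0, 0, 1, 0]
b = [1, 1, 0, 1, 0, 1, 1, 0]
print(round(cohens_kappa(a, b), 2))  # -> 0.75
```

For the graded intensity ratings (rather than presence/absence), a weighted kappa or an intraclass correlation would be the analogous choice.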
A secondary goal of the present study was to select the
experimental conditions that generate the best joyful facial
response possible for the operationalization of facial mimicry.
The tasks presented need to be reduced in number to allow
clinical and experimental application. Among the two facial
indices recorded in the present study (number of joyful
expressions and intensity of these expressions), intensity does not
seem to constitute a good indicator. The intensity expressed was
equivalent for laughter and for non-joyful emotions, and lower for
smiles in comparison to non-joyful emotions. Thus, joyful stimuli
generate a higher number of joyful expressions than non-joyful
stimuli, but the expressions are less intense. This result can be
explained by a defensive process, as laughter can have the
function of relieving tension [67]. Intensity cannot therefore be
retained as a good indicator of facial mimicry, even when
weighted by the intensity recorded for joyful stimuli compared to
negative stimuli. The number of joyful expressions, on the other
hand, constitutes a variable that seems consistent with the
emotional category of the stimuli (a greater number of joyful
expressions were observed for joyful emotions in comparison to
non-joyful emotions) and with the intensity expressed (laughter
generated a greater number of joyful expressions than smiles).
Among the variables linked to the presentation medium, and in accordance with
our first secondary hypothesis and with the literature [15,22-
23,46], a greater number of joyful responses were produced for
dynamic stimuli in comparison to static stimuli. The same
difference was also found when we weighted the number of joyful
expressions produced according to a base level, suggesting its
robustness. We also recorded a greater number of joyful
expressions for stimuli involving babies in comparison to adults.
For this variable however, the difference disappeared when
weighting was applied. The responses recorded for babies cannot
therefore be linked to facial mimicry but rather reflect a
positive emotional feeling towards them. This category of stimuli
does not seem relevant therefore for future studies regarding the
observation of joyful facial mimicry. It can be concluded that
dynamic stimuli involving adults and corresponding to laughter
are the most suitable stimuli. Future studies will have to verify
however the possible anchoring effects produced if the variability
of the emotional stimuli presented is diminished [68-70]. Besides
the selection of stimuli, we took the precaution of weighting the
number of joyful expressions produced with the number of joyful
expressions recorded for negative emotions. This method of
analysis should make it possible to quantify facial mimicry more
specifically [71-72] by distinguishing it from emotional
expressions non-linked to imitation [73]. All of these elements
(stimuli selected and calculation method) should lead to an
optimization of the scoring of the recorded joyful expressions. A
last methodological precaution concerns the statistical link
between the number of joyful expressions and the score on the
positivity scale (LOT-R), which will require controlling this
parameter in future studies, both for group results and for
individual clinical cases.
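The weighting precaution described above, counting joyful expressions produced for joyful stimuli against those produced for negative stimuli as a baseline, can be sketched as follows. The exact weighting formula is not given in the text, so a simple baseline subtraction is assumed here, and the counts are hypothetical.

```python
def mimicry_index(joy_count, negative_baseline_count):
    """Baseline-corrected count of joyful expressions.
    The paper weights joyful productions by the number of joyful
    expressions recorded for negative stimuli; the exact formula is
    not stated, so a subtraction is assumed for illustration."""
    return joy_count - negative_baseline_count

# Hypothetical counts: 9 joyful expressions to joyful stimuli,
# 2 joyful expressions to negative stimuli
print(mimicry_index(9, 2))  # -> 7
```

Subtracting the baseline in this way separates stimulus-congruent mimicry from a participant's general tendency to smile, which is the distinction the authors draw between imitation and other emotional expressions.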
Another secondary goal was the clinical application of the present
protocol in order to explore the explicit and implicit processing of
joyful emotions. The explicit processing was operationalized
using the judgment tasks (emotional intensity and realism) and
the implicit processing using the joyful expressions produced.
Concerning the present protocol, JM presented explicit judgments
(emotional intensity and realism) equivalent to those of the
control group. On the other hand, his rate of joyful expressions
was higher regardless of the stimuli presented, including those
expressing negative emotions. LM presented the opposite profile:
his rate of joyful expressions was equivalent to that of the control
group, regardless of the category of stimuli. On the other hand,
although his judgments in terms of emotional intensity were
equivalent to those of the control group, his judgments in terms of
realism were significantly higher for stimuli expressing joy
(laughs and smiles), surprise, as well as for static stimuli and
stimuli expressed by men. The results of JM and LM suggest a
double dissociation between explicit emotional judgments and
implicit facial mimicry. Clinically, these two impairments do not
lead to the same disability and have to be handled differently.
Whereas LM may not require specific care if no ecological
difficulty exists, the inappropriate expressions of JM have to be
improved because of the social consequences in non-verbal
communication. Finally, SL and IS presented intermediate profiles: their rate of joyful expressions was higher than that of
the control group, but the difference was significant only for
stimuli expressing a negative emotion, suggesting a profile
similar to JM but of lesser magnitude. For SL and IS, judgments
were also affected, but each for only one type of judgment
and for certain categories of stimuli. A possible
explanation for these impairments could be a mood (thymic) bias, but the
scores obtained with the scales of anxiety, depression and
positivity were normal, excluding such an alternative explanation
for JM, LM and SL. Overall, these results suggest a disturbance of
facial mimicry for three of the four patients included, with a
selective impairment of facial mimicry for one patient while his
judgments seemed preserved. These results support the
importance of a clinical assessment of facial emotional
productions.
With respect to future studies of brain-damaged or psychiatric
patients and considering the absence of difference obtained
between the judgment of realism and that of emotional intensity,
only the latter should be retained as the operationalization of the
explicit processing of emotions. Furthermore, contrary to the
judgment of realism, the judgment of intensity was not correlated
to the demographic data (age and level of education). It will
however be important to consider the links between this variable
and empathy, since a correlation was evidenced between the IRI scale
and the judgment of intensity.
Several results obtained with the present protocol seem to diverge
from what was documented with the EMG technique. The effect
of the sex of the participant [14,27,28-29] was not found in our
study. Other studies will be necessary to evaluate the robustness
of this result. In EMG, the hypothesis of a high sensitivity of the
recording in women when compared to men, possibly due to the
thinness of the skin, has been proposed [74]. This possible bias
could explain the presence of an effect of the sex which is
selective to the EMG technique. The links between empathy and
joyful expressions documented in EMG [15,32-35] appeared
selectively with the judgments but were not evidenced with the
number of joyful expressions. To our knowledge, this point has
never been explored, as the two studies that used the observation
of videos did not document this empathetic capacity.
Complementary studies based on observation will be necessary.
The effect of the sex of the emitter of the emotion demonstrated
in the case of joy [27,37-38] was not observed. This result could be
explained by the presence of various emitters in the present
protocol, including babies, possibly reducing the effect of sex
between the adults. Finally, while the judgment of non-emotional
characteristics was described as generating a lesser mimicry in
comparison to an emotional judgment in EMG [24,36], the
judgment of emotional intensity and of realism tasks generated a
comparable number of joyful expressions in our study. Two
hypotheses seem to explain this profile. Either the absence of
an effect is linked to the fact that the realism judgment task
in any case requires an analysis of emotions, or the observation
of videos cannot capture a difference that is only demonstrable
with the EMG technique.
Unfortunately, we were not able to create a more neutral
instruction for our protocol concerning the processing of
emotions, considering the variability in format of the stimuli
presented. The presence of anxiety-depressive symptoms was not
considered an exclusion factor from the study, which is a
limitation. Indeed, six participants exceeded the threshold of
pathological anxiety, which could be explained by the current
health crisis. However, the anxious participants did not differ
significantly from the non-anxious participants on the variables of
interest.
In conclusion, the results of this preliminary study seem to
indicate the feasibility of the quantification of joyful facial
emotions through the observation of videos, in healthy
participants and in brain-damaged patients, suggesting a possible
clinical application. The data collection could be optimized by
using dynamic stimuli rather than static ones, and by weighting
the recorded expressions against a baseline level of expression,
in order to optimize the quantification of facial mimicry in
clinical neuropsychology. The judgment of emotional intensity
task should be preferred. This methodology should make
it possible to carry out group studies in neurology and in
psychiatry. These studies should lead to the integration of the
expressive dimension of emotions in future clinical practice, this
point having been neglected until now despite the possibility of a
disturbance of facial mimicry (here, three of the four patients are
concerned) and the major impact of facial expressive impairments
in non-verbal communication. Finally, this methodology of visual
observation seems to represent a good alternative to EMG for
documenting the production of facial emotional expressions in
clinical and experimental practice.
Acknowledgments
We are grateful to the patients JM, SL, LM and IS for their participation in this study, and to Elizabeth Rowley-Jolivet for English language editing.
The authors report no conflict of interest.
References
1. Seibt B, Mühlberger A, Likowski K, Weyers P. Facial mimicry in its social setting. Frontiers in Psychology. 2015; 6: 1122.
2. Dimberg U. Facial reactions to facial expressions. Psychophysiology. 1982; 19: 643-647.
3. Lipps T. Das Wissen von fremden Ichen. Psychologische Untersuchungen. 1907; 1: 694-722.
4. Bavelas JB, Black A, Lemery CR, Mullett J. "I show how you feel": Motor mimicry as a communicative act. Journal of Personality and Social Psychology. 1986; 50: 322.
5. Adolphs R. Cognitive neuroscience of human social behaviour. Nature Reviews Neuroscience. 2003; 4: 165-178.
6. De Gelder B, Snyder J, Greve D, Gerard G, Hadjikhani N. Fear fosters flight: a mechanism for fear contagion when perceiving emotion expressed by a whole body. Proceedings of the National Academy of Sciences. 2004; 101: 16701-16706.
7. Hess U, Philippot P, Blairy S. Mimicry: Facts and fiction. The Social Context of Nonverbal Behavior. 1999; 213-241.
8. Levenson RW. Blood, sweat, and fears: The autonomic architecture of emotion. Annals of the New York Academy of Sciences. 2003; 1000: 348-366.
9. Dimberg U, Thunberg M. Rapid facial reactions to emotional facial expressions. Scandinavian Journal of Psychology. 1998; 39: 39-45.
10. Achaibou A, Pourtois G, Schwartz S, Vuilleumier P. Simultaneous recording of EEG and facial muscle reactions during spontaneous emotional mimicry. Neuropsychologia. 2008; 46: 1104-1113.
11. Tamietto M, de Gelder B. Emotional contagion for unseen bodily expressions: evidence from facial EMG. In: 2008 8th IEEE International Conference on Automatic Face & Gesture Recognition. 2008.
12. Dimberg U, Thunberg M, Elmehed K. Unconscious facial reactions to emotional facial expressions. Psychological Science. 2000; 11: 86-89.
13. Dimberg U, Thunberg M, Grunedal S. Facial reactions to emotional stimuli: Automatically controlled emotional responses. Cognition & Emotion. 2002; 16: 449-471.
14. Lundqvist LO, Dimberg U. Facial expressions are contagious. Journal of Psychophysiology. 1995; 9: 203-203.
15. Rymarczyk K, Żurawski Ł, Jankowiak-Siuda K, Szatkowska I. Emotional empathy and facial mimicry for static and dynamic facial expressions of fear and disgust. Frontiers in Psychology. 2016; 7: 1853.
16. Vaughan KB, Lanzetta JT. Vicarious instigation and conditioning of facial expressive and autonomic responses to a model's expressive display of pain. Journal of Personality and Social Psychology. 1980; 38: 909.
17. McHugo GJ, Lanzetta JT, Sullivan DG, Masters RD, Englis BG. Emotional reactions to a political leader's expressive displays. Journal of Personality and Social Psychology. 1985; 49: 1513.
18. Hess U, Blairy S. Facial mimicry and emotional contagion to dynamic emotional facial expressions and their influence on decoding accuracy. International Journal of Psychophysiology. 2001; 40: 129-141.
19. Hühnel I, Fölster M, Werheid K, Hess U. Empathic reactions of younger and older adults: No age related decline in affective responding. Journal of Experimental Social Psychology. 2014; 50: 136-143.
20. Sato W, Kochiyama T, Yoshikawa S, Naito E, Matsumura M. Enhanced neural activity in response to dynamic facial expressions of emotion: an fMRI study. Cognitive Brain Research. 2004; 20: 81-91.
21. Hietanen JK, Surakka V, Linnankoski I. Facial electromyographic responses to vocal affect expressions. Psychophysiology. 1998; 35: 530-536.
22. Weyers P, Mühlberger A, Hefele C, Pauli P. Electromyographic responses to static and dynamic avatar emotional facial expressions. Psychophysiology. 2006; 43: 450-453.
23. Sato W, Fujimura T, Suzuki N. Enhanced facial EMG activity in response to dynamic facial expressions. International Journal of Psychophysiology. 2008; 70: 70-74.
24. Hess U, Philippot P, Blairy S. Facial reactions to emotional facial expressions: Affect or cognition? Cognition & Emotion. 1998; 12: 509-531.
25. Dimberg U, Petterson M. Facial reactions to happy and angry facial expressions: Evidence for right hemisphere dominance. Psychophysiology. 2000; 37: 693-696.
26. Likowski KU, Mühlberger A, Gerdes AB, Wieser MJ, Pauli P, Weyers P. Facial mimicry and the mirror neuron system: simultaneous acquisition of facial electromyography and functional magnetic resonance imaging. Frontiers in Human Neuroscience. 2012; 6: 214.
27. Dimberg U, Lundquist LO. Gender differences in facial reactions to facial expressions. Biological Psychology. 1990; 30: 151-159.
28. Deckers L, Kuhlhorst L, Freeland L. The effects of spontaneous and voluntary facial reactions on surprise and humor. Motivation and Emotion. 1987; 11: 403-412.
29. Thunberg M, Dimberg U. Gender differences in facial reactions to fear-relevant stimuli. Journal of Nonverbal Behavior. 2000; 24: 45-51.
30. Bailey PE, Henry JD. Subconscious facial expression mimicry is preserved in older adulthood. Psychology and Aging. 2009; 24: 995.
31. Bailey PE, Henry JD, Nangle MR. Electromyographic evidence for age-related differences in the mimicry of anger. Psychology and Aging. 2009; 24: 224.
32. Dimberg U, Andréasson P, Thunberg M. Emotional empathy and facial reactions to facial expressions. Journal of Psychophysiology. 2011; 25: 26.
33. Sonnby-Borgström M. Automatic mimicry reactions as related to differences in emotional empathy. Scandinavian Journal of Psychology. 2002; 43: 433-443.
34. Sonnby-Borgström M, Jönsson P, Svensson O. Emotional empathy as related to mimicry reactions at different levels of information processing. Journal of Nonverbal Behavior. 2003; 27: 3-23.
35. Harrison NA, Morgan R, Critchley HD. From facial mimicry to emotional empathy: a role for norepinephrine? Social Neuroscience. 2010; 5: 393-400.
36. Cannon PR, Hayes AE, Tipper SP. Sensorimotor fluency influences affect: Evidence from electromyography. Cognition & Emotion. 2010; 24: 681-691.
37. Calder AJ, Young AW, Rowland D, Perrett DI. Computer-enhanced emotion in facial expressions. Proceedings of the Royal Society of London. Series B: Biological Sciences. 1997; 264: 919-925.
38. Vrana SR, Gross D. Reactions to facial expressions: effects of social context and speech anxiety on responses to neutral, anger, and joy expressions. Biological Psychology. 2004; 66: 63-78.
39. Provine RR. Contagious laughter: Laughter is a sufficient stimulus for laughs and smiles. Bulletin of the Psychonomic Society. 1992; 30: 1-4.
40. Dimberg U, Thunberg M. Speech anxiety and rapid emotional reactions to angry and happy facial expressions. Scandinavian Journal of Psychology. 2007; 48: 321-328.
41. Dimberg U. Psychophysiological reactions to facial expressions. 1997.
42. Likowski KU, Mühlberger A, Seibt B, Pauli P, Weyers P. Modulation of facial mimicry by attitudes. Journal of Experimental Social Psychology. 2008; 44: 1065-1072.
43. Likowski KU, Weyers P, Seibt B, Stöhr C, Pauli P, Mühlberger A. Sad and lonely? Sad mood suppresses facial mimicry. Journal of Nonverbal Behavior. 2011; 35: 101-117.
44. Dimberg U, Thunberg M. Empathy, emotional contagion, and rapid facial reactions to angry and happy facial expressions. PsyCh Journal. 2012; 1: 118-127.
45. Ekman P. Pictures of Facial Affect. Consulting Psychologists Press. 1976.
46. Rymarczyk K, Biele C, Grabowska A, Majczynski H. EMG activity in response to static and dynamic facial expressions. International Journal of Psychophysiology. 2011; 79: 330-333.
47. Oberman LM, Winkielman P, Ramachandran VS. Face to face: Blocking facial mimicry can selectively impair recognition of emotional expressions. Social Neuroscience. 2007; 2: 167-178.
48. Kulesza WM, Cisłak A, Vallacher RR, Nowak A, Czekiel M, Bedynska S. The face of the chameleon: The experience of facial mimicry for the mimicker and the mimickee. The Journal of Social Psychology. 2015; 155: 590-604.
49. Murata A, Saito H, Schug J, Ogawa K, Kameda T. Spontaneous facial mimicry is enhanced by the goal of inferring emotional states: evidence for moderation of "automatic" mimicry by higher cognitive processes. PLoS ONE. 2016; 11: e0153128.
50. Cacioppo JT, Petty RE, Losch ME, Kim HS. Electromyographic activity over facial muscle regions can differentiate the valence and intensity of affective reactions. Journal of Personality and Social Psychology. 1986; 50: 260.
51. Sato W, Yoshikawa S. Enhanced experience of emotional arousal in response to dynamic facial expressions. Journal of Nonverbal Behavior. 2007; 31: 119-135.
52. Neely MN, Walter E, Black JM, Reiss AL. Neural correlates of humor detection and appreciation in children. Journal of Neuroscience. 2012; 32: 1784-1790.
53. Campbell DW, Wallace MG, Modirrousta M, Polimeni JO, McKeen NA, Reiss JP. The neural basis of humour comprehension and humour appreciation: The roles of the temporoparietal junction and superior frontal gyrus. Neuropsychologia. 2015; 79: 10-20.
54. Wild B, Rodden FA, Rapp A, Erb M, Grodd W, Ruch W. Humor and smiling: cortical regions selective for cognitive, affective, and volitional components. Neurology. 2006; 66: 887-893.
55. Goel V, Dolan RJ. The functional anatomy of humor: segregating cognitive and affective components. Nature Neuroscience. 2001; 4: 237-238.
56. Iwase M, Ouchi Y, Okada H, Yokoyama C, Nobezawa S, Yoshikawa E, Tsukada H, Takeda M, Yamashita K, Takeda M, Yamaguti K. Neural substrates of human facial expression of pleasant emotion induced by comic films: a PET study. NeuroImage. 2002; 17: 758-768.
57. Samson AC, Zysset S, Huber O. Cognitive humor processing: different logical mechanisms in nonverbal cartoons - an fMRI study. Social Neuroscience. 2008; 3: 125-140.
58. Chan YC, Lavallee JP. Temporo-parietal and fronto-parietal lobe contributions to theory of mind and executive control: an fMRI study of verbal jokes. Frontiers in Psychology. 2015; 6: 1285.
59. Saxe R, Powell LJ. It's the thought that counts: specific brain regions for one component of theory of mind. Psychological Science. 2006; 17: 692-699.
60. Saxe R. The right temporo-parietal junction: a specific brain region for thinking about thoughts. Handbook of Theory of Mind. 2010; 1-35.
61. Dinstein I, Hasson U, Rubin N, Heeger DJ. Brain areas selective for both observed and executed movements. Journal of Neurophysiology. 2007; 98: 1415-1427.
62. Iacoboni M. Imitation, empathy, and mirror neurons. Annual Review of Psychology. 2009; 60: 653-670.
63. Carver CS, Scheier MF, Segerstrom SC. Optimism. Clinical Psychology Review. 2010; 30: 879-889.
64. Zigmond AS, Snaith RP. The hospital anxiety and depression scale. Acta Psychiatrica Scandinavica. 1983; 67: 361-370.
65. Davis MH. A multidimensional approach to individual differences in empathy. Catalog of Selected Documents in Psychology. 1980; 10: 1-17.
66. The jamovi project. jamovi [Computer Software]. 2020.
67. Freud S. Jokes and Their Relation to the Unconscious. WW Norton & Company. 1960.
68. Hess U, Kappas A, McHugo GJ, Lanzetta JT, Kleck RE. The facilitative effect of facial expression on the self-generation of emotion. International Journal of Psychophysiology. 1992; 12: 251-265.
69. Laird JD, Alibozak T, Davainis D, Deignan K, Fontanella K, Hong J, et al. Individual differences in the effects of spontaneous mimicry on emotional contagion. Motivation and Emotion. 1994; 18: 231-247.
70. Schneider F, Gur RC, Gur RE, Muenz LR. Standardized mood induction with happy and sad facial expressions. Psychiatry Research. 1994; 51: 19-31.
71. Izard CE. Human Emotions. New York, NY: Plenum Press. 1977.
72. Fridlund A. Human Facial Expression: An Evolutionary View. San Diego, CA: Academic Press. 1994.
73. Darwin C. The Expression of the Emotions in Man and Animals. London: Murray. 1872.
74. Thunberg M. Rapid facial reactions to emotionally relevant stimuli (Doctoral dissertation, Universitetsbiblioteket). 2007.