Experimental Statistics through Replicating a Paper

Weekly content



What Makes a Good Paper

  • A clear research question and objectives: the significance of the problem statement

  • Good positioning: connection to, and differentiation from, prior work

  • Choice of an appropriate research methodology

  • A restrained, straightforward interpretation of the results

  • Concise, easy-to-follow writing: sentence structure, logical structure, effective use of visualization


Paper summary

1. Research Background and Purpose

  • A basic human need: people want to feel understood, validated, and valued.

  • Research questions: Can AI provide these feelings? And how do reactions change when people know a response came from an AI?

2. Research Methods

  • Experimental design: a 2 × 2 design.

    • Message source: AI vs. human

    • Label: AI vs. human

  • Participants described a complex personal situation and then received either an AI- or a human-generated response.

3. Key Results

  • Strengths of AI responses:

    • AI recognized emotions accurately and provided a higher level of emotional support.

    • AI responses made people feel more "heard" than human responses did.

  • Weakness of the AI label:

    • When people learned that a response came from an AI, they reacted negatively to it.

    • The AI label had the effect of devaluing the response.

4. Additional Findings

  • Differences between AI and human responses:

    • AI provided emotional support better, but offered fewer practical suggestions.

    • Humans shared personal experiences more often, but were less effective than AI at providing emotional support.

5. Applications and Implications

  • AI-human collaboration:

    • AI has an advantage in emotion recognition and in providing emotional support.

    • People may still value interactions with humans more than interactions with AI.

    • AI can complement human help in situations that call for emotional support.

    • When deploying AI, transparently disclosing that a response is AI-generated may be important.

6. Future Research Directions

  • Research is needed on AI transparency and trust-building in AI-human collaboration.

  • Further research is needed on what role AI responses can play in improving human-to-human relationships.

7. Conclusion

  • AI's potential: AI can play an important role in emotional support and has the potential to improve understanding between people.

  • Limitation: given that an AI label can have a negative impact, AI is unlikely to replace human interaction.


What Makes a Good Introduction

  • An engaging opening (research background)

    • Funnel structure, but don't open too grandiosely.

  • A clear problem statement

    • Present the specific problem the research aims to solve

    • Serves to justify why the research is needed

  • Research background and literature review

    • Usually a one- to two-paragraph summary of Section 2 (related work)

    • Build-up for presenting the paper's positioning and contribution

  • Research questions and hypotheses

  • Explicit statement of the research purpose and importance

    • Emphasize the academic or practical significance of the work and why it matters

  • A brief introduction to the research methods

    • So that readers can form a big-picture view of the research process

    • The introduction should give an overview of the approach (without too much detail)

  • A roadmap of the paper's structure (the story to come)



Introduction

Background

The rapid integration of artificial intelligence (AI) in various aspects of daily life has led to significant discussions around its potential and limitations in fulfilling fundamental human psychological needs. One crucial aspect of human interaction is the desire to “feel heard,” which involves perceiving oneself as understood, validated, and valued. This perception impacts both mental and physical health, as being heard is associated with better well-being.

The study titled “AI Can Help People Feel Heard, but an AI Label Diminishes This Impact” delves into whether AI can effectively simulate human-like empathy and whether people truly feel heard when they know the response is AI-generated.

Key Questions

  1. Can AI generate responses that make people feel heard?

  2. How do recipients react when they believe the response is generated by AI compared to a human?

Experiment Overview

To answer these questions, the study employed a between-subjects design, varying the source (AI or human) and the label (AI or human) provided to participants. Participants were asked to describe a complex personal situation and then received responses that were either AI- or human-generated. The responses were labeled as coming from either a human or an AI, allowing the researchers to disentangle the “response effect” and the “label effect.”

Significance of Findings

Initial findings indicated that AI-generated responses were more effective at making recipients feel heard, demonstrating high empathic accuracy in understanding emotions. However, when recipients knew the response was from an AI, the sense of being heard was diminished. This reflects a bias against AI-generated content and suggests that while AI can excel at creating emotionally supportive responses, people's beliefs about AI's capabilities shape the impact of these responses.

The results also highlighted that AI-generated responses excelled at providing emotional support but did not engage in practical suggestions as much as human responses did. This aligns with the idea that emotional validation can be more effective for the feeling of being heard than practical advice.

Methodological Note

The experiment used a 2x2 factorial design:

  • Response Source: Human vs. AI

  • Response Label: Human vs. AI

This setup allowed for a comprehensive examination of how actual and perceived sources influence participants’ feelings of being heard, perceived accuracy, and connection to the responder.
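The 2 × 2 structure can be sketched in a few lines of R; the cell labels below are illustrative, not the dataset's actual coding:

```r
# Illustrative sketch of the 2x2 between-subjects design:
# each participant falls into exactly one of these four cells.
design <- expand.grid(response_source = c("human", "AI"),
                      response_label  = c("human", "AI"))
design
#   response_source response_label
# 1           human          human
# 2              AI          human
# 3           human             AI
# 4              AI             AI
```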


Data Explanation

The dataset used for this analysis contained responses from participants who described complex personal situations. These responses were paired with either AI- or human-generated replies, with each reply being assigned a label indicating its source. Key variables included:

  • Feeling Heard Score: A composite measure based on participants’ ratings.

  • Perceived Response Accuracy: The degree to which participants believed the response accurately captured their sentiments.

  • Connection Score: A measure reflecting how connected participants felt to the responder.

The data were analyzed using ANOVAs to assess the main effects of the response source and label on these dependent variables.


Methods

Study Design

The study was conducted using a 2 (response source: human vs. AI) × 2 (response label: human vs. AI) between-subjects experimental design. This design enabled the researchers to evaluate the independent effects of both the actual source of the response and the label given to the participants.

Phases of the Study

The study was divided into three main phases:

  1. Part 1: Initial Situation Description

    • Participants were recruited through Prolific, an online research platform, and asked to describe a complex personal situation they were currently dealing with. These descriptions were recorded as audio files and transcribed using Phonic AI.

    • Participants then rated the intensity of six basic emotions (happiness, sadness, fear, anger, surprise, and disgust) they felt in that situation using a 7-point Likert scale (1 = not at all, 7 = very much).

  2. Part 2: Response Generation

    • Participants from Part 1 were divided into two groups: those who would receive human-generated responses and those who would receive AI-generated responses.

    • In the human response condition, participants were paired with another participant recruited to read and respond to the situation descriptions. These human responders were instructed to write replies that would make the original participant feel understood. The length of responses was standardized by using the median word count of previous responses.

    • For the AI response condition, Bing Chat was used to generate replies. The AI was prompted with the transcribed situation and instructed to respond empathetically. The response length was adjusted to match the median length of human responses for consistency.

  3. Part 3: Response Evaluation

    • The original participants from Part 1 were invited back to read the response they received. They were informed whether the response came from another human or Bing Chat, creating the label manipulation.

    • Participants were asked to evaluate how much the response made them feel heard, the perceived accuracy of the response, and their connection to the responder. These measures included multiple items adapted from established scales:

      • Feeling Heard: Measured using six items (e.g., “This response makes me feel understood”, “This response makes me feel affirmed”).

      • Response Accuracy: Participants rated how accurately the response captured their sentiments (e.g., “The response accurately summarizes what I said”).

      • Connection to Responder: Measured using questions like “How connected do you feel to the responder?”

Sample Characteristics

  • A total of 540 participants completed Part 1, but 39 participants were excluded for insufficient descriptions (e.g., one-sentence responses), resulting in a final sample of 501.

  • In Part 2, 233 participants received human-generated responses, and 250 received AI-generated responses.

  • 456 participants completed Part 3, providing evaluations of the responses they received.

  • Demographics included diverse age and gender distributions, with participants primarily from the United States.

Statistical Analysis

  • The data were analyzed using analysis of variance (ANOVA) to explore the main effects and interactions between response source and label on dependent variables such as feeling heard, perceived accuracy, and connection.

  • Moderator analyses were conducted to examine if attitudes toward AI or perceived agency of the AI influenced the effects.
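A moderator analysis of this kind can be sketched as a linear model with an interaction term. The snippet below assumes the `all` data frame and the `atti_gpt` attitude column that appear in the Hands-on section; it illustrates the general approach and is not necessarily the paper's exact specification:

```r
# Does attitude toward AI (here: attitude toward ChatGPT) moderate
# the label effect on feeling heard?
mod <- lm(feelheard ~ labelR * atti_gpt, data = all)
summary(mod)  # inspect the labelR:atti_gpt interaction coefficient
```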


Hands-on Practice


Table 1

library(tidyverse)
── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
✔ dplyr     1.1.4     ✔ readr     2.1.5
✔ forcats   1.0.0     ✔ stringr   1.5.1
✔ ggplot2   3.5.1     ✔ tibble    3.2.1
✔ lubridate 1.9.3     ✔ tidyr     1.3.1
✔ purrr     1.0.2     
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
library(ggpubr)
library(gridExtra)

Attaching package: 'gridExtra'

The following object is masked from 'package:dplyr':

    combine
library(plotrix)
library(ggplot2)
library(lsr)

###############################################

all = read.csv('data/Being Heard by AI OSF.csv')

all %>% glimpse
Rows: 455
Columns: 117
$ id                    <chr> "614ea93d581d6f4281e9d232", "5dd5378596afdf4eb31…
$ happiness_d           <int> 2, 4, 4, 2, 2, 1, 2, 2, 1, 3, 2, 4, 1, 1, 2, 1, …
$ sadness_d             <int> 7, 6, 4, 4, 6, 7, 6, 1, 4, 5, 4, 5, 4, 6, 7, 1, …
$ fear_d                <int> 5, 5, 1, 4, 1, 7, 5, 4, 6, 2, 5, 6, 4, 5, 3, 5, …
$ anger_d               <int> 5, 5, 1, 3, 5, 7, 1, 1, 2, 1, 5, 1, 1, 5, 5, 6, …
$ surprise_d            <int> 4, 1, 1, 2, 4, 7, 2, 1, 1, 1, 1, 2, 1, 6, 2, 2, …
$ disgust_d             <int> 3, 1, 1, 2, 4, 7, 1, 1, 3, 1, 5, 1, 1, 7, 1, 6, …
$ age_d                 <int> 38, 44, 37, 25, 33, 52, 48, 40, 36, 56, 36, 41, …
$ gender_d              <chr> "Female", "Female", "Make", "Male", "female", "F…
$ edu_d                 <int> 3, 5, 3, 5, 4, 3, 2, 5, 3, 2, 3, 5, 5, 5, 2, 3, …
$ race_d                <chr> "3", "3", NA, "2", "3", "3", "3", "3", "3", "3",…
$ race_d_5_TEXT         <chr> "", "", "", "", "", "", "", "", "", "", "", "", …
$ employment_d          <int> 9, 12, 9, 9, 9, 14, 9, 9, 16, 9, 9, 9, 9, 10, 13…
$ political_d           <int> 2, 1, 3, 4, 6, 2, 6, 6, 2, 1, 5, 3, 5, 3, 5, 2, …
$ class_d               <int> 2, 2, 2, 3, 2, 1, 3, 3, 2, 3, 1, 2, 2, 2, 1, 2, …
$ ladder_d              <int> 4, 4, 5, 6, 4, 2, 5, 5, 4, 6, 1, 4, 4, 3, 5, 4, …
$ AI.response           <chr> "That sounds like a very difficult situation to …
$ happiness_ai          <int> 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, 1, …
$ sadness_ai            <int> 6, 6, 4, 5, 7, 7, 6, 5, 5, 6, 4, 5, 7, 6, 5, 3, …
$ fear_ai               <int> 5, 7, 3, 6, 5, 7, 6, 6, 4, 5, 5, 6, 6, 5, 3, 4, …
$ anger_ai              <int> 4, 5, 3, 4, 6, 7, 2, 4, 2, 3, 3, 3, 4, 4, 4, 5, …
$ surprise_ai           <int> 2, 4, 2, 3, 2, 6, 3, 2, 1, 2, 2, 1, 2, 2, 1, 2, …
$ disgust_ai            <int> 3, 2, 2, 2, 4, 7, 1, 3, 1, 2, 2, 2, 3, 3, 2, 4, …
$ Human.response        <chr> NA, NA, NA, NA, NA, NA, NA, "I am sorry for the …
$ happiness_r           <int> NA, NA, NA, NA, NA, NA, NA, 3, NA, 3, 2, 4, NA, …
$ sadness_r             <int> NA, NA, NA, NA, NA, NA, NA, 4, NA, 4, 5, 5, NA, …
$ fear_r                <int> NA, NA, NA, NA, NA, NA, NA, 6, NA, 5, 7, 6, NA, …
$ anger_r               <int> NA, NA, NA, NA, NA, NA, NA, 1, NA, 2, 4, 2, NA, …
$ surprise_r            <int> NA, NA, NA, NA, NA, NA, NA, 1, NA, 1, 3, 1, NA, …
$ disgust_r             <int> NA, NA, NA, NA, NA, NA, NA, 1, NA, 1, 4, 1, NA, …
$ understood            <int> 6, 7, 5, 6, 7, 7, 6, 7, 6, 7, 3, 5, 3, 7, 6, 7, …
$ validated             <int> 5, 7, 4, 6, 7, 7, 4, 7, 6, 7, 5, 5, 3, 6, 5, 7, …
$ affirmed              <int> 5, 7, 4, 5, 7, 7, 4, 7, 6, 7, 5, 5, 3, 7, 6, 7, …
$ seen                  <int> 6, 7, 3, 6, 7, 7, 6, 6, 6, 7, 3, 6, 3, 6, 6, 7, …
$ accepted              <int> 6, 7, 3, 6, 7, 7, 6, 6, 6, 7, 4, 4, 3, 6, 5, 4, …
$ caredfor              <int> 6, 7, 4, 5, 7, 7, 7, 6, 6, 7, 5, 3, 3, 6, 6, 4, …
$ accuracy1             <int> 6, 7, 4, 5, 7, 4, 5, 6, 6, 7, 3, 5, 2, 7, 6, 7, …
$ accuracy2             <int> 6, 7, 5, 5, 7, 4, 7, 7, 6, 7, 2, 5, 2, 7, 6, 7, …
$ knewmean_p            <int> NA, NA, NA, 6, 7, 7, 5, 7, NA, NA, 2, NA, 2, 7, …
$ knewmean_b            <int> 7, 7, 4, NA, NA, NA, NA, NA, 6, 7, NA, 3, NA, NA…
$ understood_p          <int> NA, NA, NA, 5, 7, 7, 6, 7, NA, NA, 3, NA, 2, 7, …
$ understood_b          <int> 7, 7, 4, NA, NA, NA, NA, NA, 6, 7, NA, 5, NA, NA…
$ close_p               <int> NA, NA, NA, 5, 6, 7, 5, 7, NA, NA, 3, NA, 2, 7, …
$ connect_p             <int> NA, NA, NA, 5, 6, 7, 5, 7, NA, NA, 3, NA, 2, 7, …
$ trust_p               <int> NA, NA, NA, 6, 5, 7, 6, 7, NA, NA, 3, NA, 4, 7, …
$ close_b               <int> 6, 7, 3, NA, NA, NA, NA, NA, 6, 7, NA, 4, NA, NA…
$ connect_b             <int> 6, 7, 3, NA, NA, NA, NA, NA, 6, 7, NA, 4, NA, NA…
$ trust_b               <int> 6, 7, 3, NA, NA, NA, NA, NA, 6, 7, NA, 5, NA, NA…
$ lonely                <int> 3, 1, 1, 3, 2, 4, 4, 1, 1, 6, 4, 4, 2, 1, 6, 1, …
$ connected             <int> 3, 7, 4, 5, 6, 6, 3, 7, 6, 2, 4, 4, 3, 6, 2, 4, …
$ distressed            <int> 3, 1, 1, 3, 1, 2, 3, 1, 1, 7, 1, 2, 1, 1, 6, 1, …
$ excited               <int> 3, 4, 1, 2, 4, 3, 2, 3, 1, 1, 4, 2, 5, 5, 1, 4, …
$ upset                 <int> 4, 1, 1, 2, 1, 4, 2, 1, 1, 2, 1, 1, 1, 2, 2, 1, …
$ guilty                <int> 5, 1, 1, 2, 1, 1, 3, 1, 1, 7, 1, 1, 1, 5, 5, 1, …
$ scared                <int> 4, 1, 1, 2, 1, 2, 5, 1, 1, 1, 2, 1, 1, 2, 5, 1, …
$ enthusiastic          <int> 4, 5, 1, 3, 4, 6, 2, 5, 1, 2, 4, 3, 5, 5, 4, 5, …
$ ashamed               <int> 4, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 2, 1, …
$ nervous               <int> 4, 1, 1, 2, 1, 4, 6, 2, 1, 5, 1, 1, 1, 1, 1, 1, …
$ happy                 <int> 2, 5, 4, 4, 5, 4, 2, 5, 6, 4, 6, 4, 5, 6, 4, 7, …
$ sad                   <int> 4, 1, 1, 2, 1, 5, 3, 1, 1, 2, 2, 1, 1, 2, 6, 1, …
$ surprised             <int> 2, 5, 1, 2, 4, 3, 1, 4, 5, 1, 1, 3, 1, 1, 1, 1, …
$ hopeful               <int> 3, 7, 4, 4, 6, 6, 5, 5, 5, 4, 6, 5, 4, 6, 4, 6, …
$ optimistic            <int> 5, 7, 4, 5, 5, 7, 6, 5, 6, 4, 6, 4, 5, 6, 5, 6, …
$ ambivalent            <int> 5, 2, 4, 2, 4, 3, 1, 4, 1, 5, 1, 1, 1, 5, 1, 1, …
$ uneasy                <int> 1, 1, 1, 2, 1, 4, 2, 1, 1, 6, 1, 3, 1, 1, 2, 4, …
$ unnerved              <int> 1, 1, 1, 4, 4, 2, 1, 1, 1, 5, 2, 1, 1, 1, 2, 1, …
$ creeped               <int> 1, 1, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, …
$ uncomfortable         <int> 4, 1, 1, 2, 1, 3, 4, 1, 1, 6, 1, 5, 1, 1, 1, 1, …
$ bothered              <int> 6, 1, 1, 2, 1, 4, 4, 1, 1, 5, 1, 2, 1, 5, 1, 1, …
$ loneliness1           <int> 4, 2, 3, 4, 4, 5, 4, 1, 2, 6, 2, 5, 4, 4, 6, 1, …
$ loneliness2           <int> 3, 4, 3, 3, 4, 5, 4, 1, 2, 4, 5, 4, 2, 5, 4, 1, …
$ loneliness3           <int> 3, 4, 3, 4, 4, 5, 4, 1, 2, 5, 5, 5, 2, 4, 5, 1, …
$ convey.thoughts       <int> 6, 5, 4, 2, 5, 5, 6, 3, 5, 7, 4, 5, 1, 5, 2, 5, …
$ have.exp              <int> 5, 6, 3, 2, 6, 5, 4, 4, 5, 6, 3, 5, 3, 5, 5, 4, …
$ longing.or.hoping     <int> 5, 1, 2, 1, 4, 2, 5, 4, 1, 7, 2, 2, 1, 7, 2, 1, …
$ exp.embrssment        <int> 4, 1, 2, 1, 4, 3, 2, 2, 1, 5, 1, 2, 1, 5, 2, 1, …
$ understand.feeling    <int> 6, 1, 3, 3, 6, 5, 6, 4, 5, 7, 3, 5, 1, 5, 5, 6, …
$ feel.afraid           <int> 4, 1, 2, 1, 5, 2, 4, 2, 1, 4, 1, 2, 1, 3, 2, 1, …
$ feel.hungry           <int> 1, 1, 2, 1, 2, 3, 1, 1, 1, 4, 1, 1, 1, 1, 2, 1, …
$ exp.joy               <int> 5, 2, 2, 2, 4, 5, 2, 4, 1, 4, 1, 2, 1, 5, 5, 4, …
$ remember              <int> 7, 7, 5, 5, 6, 5, 4, 5, 6, 7, 6, 7, 7, 6, 6, 6, …
$ tell.right.from.wrong <int> 6, 1, 3, 4, 6, 5, 4, 3, 5, 7, 2, 5, 2, 6, 3, 6, …
$ exp.pain              <int> 2, 1, 2, 1, 4, 2, 1, 2, 1, 6, 2, 2, 1, 2, 5, 1, …
$ personality           <int> 5, 6, 2, 2, 5, 3, 4, 5, 1, 7, 3, 4, 2, 5, 3, 7, …
$ make.plans            <int> 5, 2, 4, 4, 4, 5, 5, 3, 2, 6, 6, 6, 4, 6, 6, 4, …
$ exp.pleasure          <int> 2, 1, 2, 1, 4, 2, 1, 4, 1, 7, 3, 2, 1, 1, 2, 4, …
$ exp.pride             <int> 5, 1, 2, 2, 5, 2, 3, 4, 1, 4, 1, 5, 1, 3, 3, 6, …
$ exp.anger             <int> 2, 1, 2, 1, 4, 3, 3, 2, 1, 1, 1, 3, 1, 6, 2, 1, …
$ self.restraint        <int> 4, 1, 3, 2, 4, 5, 6, 4, 1, 5, 6, 3, 1, 3, 5, 4, …
$ think                 <int> 7, 7, 4, 4, 6, 5, 6, 3, 6, 7, 6, 7, 1, 7, 7, 6, …
$ familiar_bing         <int> 3, 4, 1, 4, 4, 1, 2, 3, 4, 1, 1, 4, 1, 2, 4, 7, …
$ familiar_gpt          <int> 3, 7, 3, 6, 5, 2, 6, 6, 6, 4, 7, 4, 2, 6, 5, 4, …
$ familiar_bard         <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5, 2, 1, …
$ often_bing            <int> 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 4, 1, 1, 1, 4, …
$ often_gpt             <int> 3, 6, 3, 3, 2, 1, 4, 5, 2, 2, 7, 4, 1, 6, 3, 1, …
$ often_bard            <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 5, 1, 1, …
$ atti_bing             <int> 3, 3, 0, 0, 0, 1, 0, 0, 2, 0, 0, 1, -3, 0, -1, 2…
$ atti_gpt              <int> 2, 3, 0, 1, 0, 1, 0, 2, 2, 1, 3, 2, -3, 2, 0, 2,…
$ atti_bard             <int> 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, -3, 1, 0, 0,…
$ loneliness            <dbl> 3.333333, 3.333333, 3.000000, 3.666667, 4.000000…
$ responseR             <chr> "ai response", "ai response", "ai response", "ai…
$ labelR                <chr> "ai label", "ai label", "ai label", "human label…
$ empathicaccuracy.ai   <dbl> 0.8333333, 1.3333333, 1.5000000, 1.0000000, 1.50…
$ empathicaccuracy.r    <dbl> NA, NA, NA, NA, NA, NA, NA, 1.0000000, NA, 0.833…
$ experience            <dbl> 3.636364, 2.000000, 2.090909, 1.363636, 4.272727…
$ agency                <dbl> 5.857143, 3.428571, 3.714286, 3.428571, 5.285714…
$ feelheard             <dbl> 5.666667, 7.000000, 3.833333, 5.666667, 7.000000…
$ accuracy              <dbl> 6.0, 7.0, 4.5, 5.0, 7.0, 4.0, 6.0, 6.5, 6.0, 7.0…
$ understoodme          <dbl> 7.0, 7.0, 4.0, 5.5, 7.0, 7.0, 5.5, 7.0, 6.0, 7.0…
$ connection            <dbl> 6.000000, 7.000000, 3.000000, 5.333333, 5.666667…
$ statelonely           <dbl> 3.0, 4.0, 2.5, 4.0, 4.0, 5.0, 3.5, 4.0, 3.5, 4.0…
$ excitement            <dbl> 3.5, 4.5, 1.0, 2.5, 4.0, 4.5, 2.0, 4.0, 1.0, 1.5…
$ hope                  <dbl> 4.0, 7.0, 4.0, 4.5, 5.5, 6.5, 5.5, 5.0, 5.5, 4.0…
$ fear                  <dbl> 4.0, 1.0, 1.0, 2.0, 1.0, 3.0, 5.5, 1.5, 1.0, 3.0…
$ discomfort            <dbl> 2.000000, 1.000000, 1.000000, 2.666667, 2.000000…
$ distress              <dbl> 4.25, 1.00, 1.00, 2.25, 1.00, 3.75, 3.00, 1.00, …
$ shame                 <dbl> 4.5, 1.0, 1.0, 2.0, 1.0, 1.0, 2.0, 1.0, 1.0, 4.5…
  • tidyverse: A collection of R packages for data manipulation, visualization, and analysis.

  • ggpubr: Used for creating publication-ready plots.

  • gridExtra: Provides functions to arrange multiple grid-based plots.

  • plotrix: Contains various plotting functions, including those for error bars.

  • ggplot2: Part of the tidyverse, specifically for creating data visualizations.

  • lsr: Used for calculating effect sizes, including eta-squared.
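Before running the ANOVAs, it helps to check how participants are distributed across the four design cells (using the `responseR` and `labelR` columns shown in the glimpse above):

```r
# Cross-tabulate actual response source against the label shown
# to participants: one count per cell of the 2x2 design.
all %>% count(responseR, labelR)
```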


etaSquared(aov(feelheard~responseR*labelR,all), type = 2, anova = T) %>% 
  as_tibble %>% round(.,3)
# A tibble: 4 × 7
  eta.sq eta.sq.part      SS    df     MS      F      p
   <dbl>       <dbl>   <dbl> <dbl>  <dbl>  <dbl>  <dbl>
1  0.044       0.047  40.5       1 40.5   22.0    0    
2  0.06        0.063  55.8       1 55.8   30.3    0    
3  0           0       0.011     1  0.011  0.006  0.937
4  0.899      NA     830.      451  1.84  NA     NA    
etaSquared(aov(accuracy~responseR*labelR,all), type = 2, anova = T) %>% 
  as_tibble %>% round(.,3)
# A tibble: 4 × 7
  eta.sq eta.sq.part     SS    df    MS     F      p
   <dbl>       <dbl>  <dbl> <dbl> <dbl> <dbl>  <dbl>
1  0.059       0.06   55.8      1 55.8  28.8   0    
2  0.014       0.015  13.1      1 13.1   6.78  0.01 
3  0.001       0.001   1.10     1  1.10  0.57  0.451
4  0.927      NA     874.     451  1.94 NA    NA    
etaSquared(aov(understoodme~responseR*labelR,all), type = 2, anova = T) %>% 
  as_tibble %>% round(.,3)
# A tibble: 4 × 7
  eta.sq eta.sq.part      SS    df     MS      F      p
   <dbl>       <dbl>   <dbl> <dbl>  <dbl>  <dbl>  <dbl>
1  0.059       0.062  62.9       1 62.9   29.9    0    
2  0.056       0.06   60.3       1 60.3   28.7    0    
3  0           0       0.146     1  0.146  0.069  0.792
4  0.888      NA     949.      451  2.10  NA     NA    
etaSquared(aov(connection~responseR*labelR,all), type = 2, anova = T) %>% 
  as_tibble %>% round(.,3)
# A tibble: 4 × 7
  eta.sq eta.sq.part       SS    df      MS      F      p
   <dbl>       <dbl>    <dbl> <dbl>   <dbl>  <dbl>  <dbl>
1  0.038       0.041   51.6       1  51.6   19.3    0    
2  0.077       0.079  104.        1 104.    38.9    0    
3  0           0        0.243     1   0.243  0.091  0.763
4  0.889      NA     1205.      451   2.67  NA     NA    



  • aov() Function: Performs an ANOVA to test the main and interaction effects of the factors responseR (the source of the response) and labelR (how the response was labeled) on each dependent variable (feelheard, accuracy, understoodme, and connection).

  • etaSquared() Function: Calculates the effect size (eta-squared) for the ANOVA results. The type = 2 argument specifies Type II sums of squares, which test each main effect after accounting for the other main effect (but not the interaction); this is a common choice when the design is unbalanced and the interaction is negligible.

    • The etaSquared() function provides an indication of how much of the variance in each dependent variable is explained by the independent variables (responseR, labelR, and their interaction). This helps in understanding the practical significance of the findings.

    • The anova = T argument ensures that the output includes ANOVA results alongside the effect size.

Purpose of the Analysis

  • feelheard: Measures how much participants felt heard based on the response.

  • accuracy: Evaluates the perceived accuracy of the response.

  • understoodme: Assesses how well participants felt understood.

  • connection: Captures the perceived connection to the responder.

Explanation of the Output

  1. Eta Squared (η²):

    • eta.sq: The proportion of total variance explained by each factor (overall effect size). For instance, responseR has an eta squared value of approximately 0.0439, indicating that it accounts for about 4.39% of the variance in the “feeling heard” scores.

    • eta.sq.part: The partial eta squared, computed as SS_effect / (SS_effect + SS_residual): the proportion of variance explained by a factor after partialling out the other factors. This is often the quantity reported for interpretation because it isolates each factor's effect.

  2. Sum of Squares (SS):

    • SS: The variability attributed to each factor. Higher SS values mean that the factor contributes more to the variability in the dependent variable.

  3. Degrees of Freedom (df):

    • Represents the number of values that are free to vary for each factor. For responseR and labelR, df = 1 because they are categorical variables with two levels (e.g., AI vs. human).

  4. Mean Squares (MS):

    • The average variability per degree of freedom, calculated as SS/df.

  5. F-statistic (F):

    • Indicates the ratio of the variance explained by the factor to the variance within groups (residual variance). A higher F value signifies a more substantial effect.

    • For responseR, the F-statistic is approximately 22.00, which is highly significant with a p-value of 3.61e-06, showing that the source of the response significantly impacts how participants feel heard.
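As a sanity check, the two eta-squared columns can be reproduced by hand from the sums of squares printed in the `feelheard` table above:

```r
# SS values copied from the feelheard ANOVA output above
ss_response <- 40.5; ss_label <- 55.8; ss_inter <- 0.011; ss_resid <- 830
ss_total <- ss_response + ss_label + ss_inter + ss_resid

# eta.sq = SS_effect / SS_total
round(ss_response / ss_total, 3)                  # 0.044, matches eta.sq

# eta.sq.part = SS_effect / (SS_effect + SS_residual)
round(ss_response / (ss_response + ss_resid), 3)  # 0.047, matches eta.sq.part
```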


all %>% group_by(labelR) %>% 
  summarise(feelheard = mean(feelheard,na.rm=T),
            accuracy = mean(accuracy,na.rm=T),
            understoodme = mean(understoodme,na.rm=T),
            connection = mean(connection,na.rm=T)) %>%
  mutate_at(vars(feelheard:connection), round, 3)
# A tibble: 2 × 5
  labelR      feelheard accuracy understoodme connection
  <chr>           <dbl>    <dbl>        <dbl>      <dbl>
1 ai label         5.13     5.44         5.09       3.94
2 human label      5.81     5.75         5.80       4.88
all %>% group_by(labelR) %>% 
  summarise(feelheard = sd(feelheard,na.rm=T),
            accuracy = sd(accuracy,na.rm=T),
            understoodme = sd(understoodme,na.rm=T),
            connection = sd(connection,na.rm=T)) %>%
  mutate_at(vars(feelheard:connection), round, 3)
# A tibble: 2 × 5
  labelR      feelheard accuracy understoodme connection
  <chr>           <dbl>    <dbl>        <dbl>      <dbl>
1 ai label         1.46     1.51         1.62       1.86
2 human label      1.30     1.35         1.35       1.42


all %>% group_by(responseR) %>% 
  summarise(feelheard = mean(feelheard,na.rm=T),
            accuracy = mean(accuracy,na.rm=T),
            understoodme = mean(understoodme,na.rm=T),
            connection = mean(connection,na.rm=T)) %>%
  mutate_at(vars(feelheard:connection), round, 3)
# A tibble: 2 × 5
  responseR      feelheard accuracy understoodme connection
  <chr>              <dbl>    <dbl>        <dbl>      <dbl>
1 ai response         5.74     5.92         5.79       4.71
2 human response      5.17     5.24         5.07       4.07
all %>% group_by(responseR) %>% 
  summarise(feelheard = sd(feelheard,na.rm=T),
            accuracy = sd(accuracy,na.rm=T),
            understoodme = sd(understoodme,na.rm=T),
            connection = sd(connection,na.rm=T)) %>%
  mutate_at(vars(feelheard:connection), round, 3)
# A tibble: 2 × 5
  responseR      feelheard accuracy understoodme connection
  <chr>              <dbl>    <dbl>        <dbl>      <dbl>
1 ai response         1.22     1.12         1.30       1.61
2 human response      1.56     1.65         1.67       1.79
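The marginal means above can also be broken out by both factors at once to see the four cell means (same `all` data frame as above):

```r
# Cell means (and cell sizes) for the full 2x2 design
all %>% group_by(responseR, labelR) %>% 
  summarise(feelheard  = mean(feelheard,  na.rm = TRUE),
            connection = mean(connection, na.rm = TRUE),
            n = n(), .groups = "drop")
```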

The end of the TABLE 1


Figure 1

all$labelRR = ifelse(all$labelR=='ai label',"AI label","human label")
all$responseRR = ifelse(all$responseR=='ai response',"AI response","human response")

dodge = position_dodge(width=0.9)

apatheme=theme_bw()+theme(panel.grid.major=element_blank(),
                          panel.grid.minor=element_blank(),
                          panel.border=element_blank(),
                          axis.line=element_line(),
                          text=element_text(family='Helvetica',size=15),
                          axis.text.x = element_text(color="black"))


diff.heard.label = independentSamplesTTest(feelheard~labelR,all)
Warning in independentSamplesTTest(feelheard ~ labelR, all): group variable is
not a factor
delta.heard.label= diff.heard.label$mean[1]-diff.heard.label$mean[2]
conf.heard.label = diff.heard.label$conf.int

diff.heard.label

   Welch's independent samples t-test 

Outcome variable:   feelheard 
Grouping variable:  labelR 

Descriptive statistics: 
            ai label human label
   mean        5.131       5.813
   std dev.    1.463       1.301

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  -5.261 
   degrees of freedom:  451.406 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [-0.937, -0.427] 
   estimated effect size (Cohen's d):  0.493 
delta.heard.label
  ai label 
-0.6819172 
conf.heard.label
[1] -0.9366634 -0.4271710
attr(,"conf.level")
[1] 0.95
diff.accuracy.label =independentSamplesTTest(accuracy~labelR,all)
Warning in independentSamplesTTest(accuracy ~ labelR, all): group variable is
not a factor
delta.accuracy.label= diff.accuracy.label $mean[1]-diff.accuracy.label $mean[2]
conf.accuracy.label= diff.accuracy.label$conf.int
diff.understoodme.label = independentSamplesTTest(understoodme~labelR,all)
Warning in independentSamplesTTest(understoodme ~ labelR, all): group variable
is not a factor
delta.understoodme.label= diff.understoodme.label$mean[1]-diff.understoodme.label$mean[2]
conf.understoodme.label = diff.understoodme.label$conf.int
diff.connection.label = independentSamplesTTest(connection~labelR,all)
Warning in independentSamplesTTest(connection ~ labelR, all): group variable is
not a factor
delta.connection.label = diff.connection.label$mean[1]-diff.connection.label$mean[2]
conf.connection.label = diff.connection.label$conf.int
diff.heard.response = independentSamplesTTest(feelheard~responseR,all)
Warning in independentSamplesTTest(feelheard ~ responseR, all): group variable
is not a factor
delta.heard.response = diff.heard.response$mean[1]-diff.heard.response$mean[2]
conf.heard.response = diff.heard.response$conf.int
diff.accuracy.response = independentSamplesTTest(accuracy~responseR,all)
Warning in independentSamplesTTest(accuracy ~ responseR, all): group variable
is not a factor
delta.accuracy.response = diff.accuracy.response $mean[1]-diff.accuracy.response $mean[2]
conf.accuracy.response = diff.accuracy.response$conf.int
diff.understoodme.response = independentSamplesTTest(understoodme~responseR,all)
Warning in independentSamplesTTest(understoodme ~ responseR, all): group
variable is not a factor
delta.understoodme.response = diff.understoodme.response$mean[1]-diff.understoodme.response$mean[2]
conf.understoodme.response = diff.understoodme.response$conf.int
diff.connection.response = independentSamplesTTest(connection~responseR,all)
Warning in independentSamplesTTest(connection ~ responseR, all): group variable
is not a factor
delta.connection.response = diff.connection.response$mean[1]-diff.connection.response$mean[2]
conf.connection.response = diff.connection.response$conf.int
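The `group variable is not a factor` warnings above are benign (lsr coerces the grouping variable itself), but the same Welch tests can be reproduced with base R's `t.test()`, converting the group column to a factor explicitly:

```r
# Equivalent Welch two-sample t-test in base R
welch <- t.test(feelheard ~ factor(labelR), data = all)
welch$statistic  # t ~ -5.26, matching the lsr output above
welch$conf.int   # 95% CI for the mean difference (AI label - human label)
```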
deltaplot = data.frame(effect=c('Label','Label','Label','Label','Response','Response','Response','Response'),
                       dv = c('Feeling\nheard',"Response\naccuracy","Responder\nunderstood me",
                              "Connection\nto responder",'Feeling\nheard',"Response\naccuracy","Responder\nunderstood me",
                              "Connection\nto responder"),
                       delta =c(delta.heard.label,delta.accuracy.label,delta.understoodme.label,delta.connection.label,
                                delta.heard.response,delta.accuracy.response,delta.understoodme.response,delta.connection.response),
                       lower = c(conf.heard.label[1],conf.accuracy.label[1],conf.understoodme.label[1],conf.connection.label[1],
                                 conf.heard.response[1],conf.accuracy.response[1],conf.understoodme.response[1],conf.connection.response[1]),
                       high=c(conf.heard.label[2],conf.accuracy.label[2],conf.understoodme.label[2],conf.connection.label[2],
                              conf.heard.response[2],conf.accuracy.response[2],conf.understoodme.response[2],conf.connection.response[2]))

deltaplot$dv= factor(deltaplot$dv, levels = c("Feeling\nheard", "Response\naccuracy", "Responder\nunderstood me","Connection\nto responder"))

deltaplot
    effect                       dv      delta      lower        high
1    Label           Feeling\nheard -0.6819172 -0.9366634 -0.42717101
2    Label       Response\naccuracy -0.3174962 -0.5809131 -0.05407935
3    Label Responder\nunderstood me -0.7046254 -0.9786939 -0.43055695
4    Label Connection\nto responder -0.9347243 -1.2394719 -0.62997669
5 Response           Feeling\nheard  0.5747863  0.3150993  0.83447332
6 Response       Response\naccuracy  0.6899196  0.4289092  0.95092994
7 Response Responder\nunderstood me  0.7207139  0.4435613  0.99786651
8 Response Connection\nto responder  0.6431205  0.3287753  0.95746568
apatheme=theme_bw()+theme(panel.grid.major=element_blank(),
                          panel.grid.minor=element_blank(),
                          panel.border=element_blank(),
                          axis.line=element_line(),
                          text=element_text(family='Helvetica',size=12),
                          axis.text.x = element_text(color="black"))
figure1=deltaplot  %>%
  ggplot(aes(x = dv, y = delta, fill = effect))+
  geom_bar(stat='identity', position=dodge)+
  geom_errorbar(aes(ymin= lower, ymax = high), 
                position = dodge,width = 0.1)+
  ylab('Delta (AI - Human)')+xlab('')+
  apatheme+
  scale_fill_manual(values = c("darkred", "lightgreen"),name="Manipulations")

figure1

  • Findings Highlighted:

    • AI responses labeled as human were rated higher than the same responses labeled as AI, suggesting participants evaluated a response more positively when they believed a human wrote it, even though it was AI-generated.

    • Human responses labeled as AI were rated lower than when labeled as human, demonstrating a consistent bias: an AI label by itself diminished the perceived quality of a response.

    • The interaction between response source and label was not significant, consistent with the overlapping error bars in some conditions. The effects of response source and label therefore appear to operate independently, without an interactive component.

Interpretation:

  • The figure underscores that while AI-generated responses can effectively make participants feel heard, an AI label diminishes this perception, likely reflecting bias against AI. Human responses labeled as AI suffer the same penalty, pointing to a broader skepticism toward AI in empathetic communication.


Multi group contrast

########multi group contrast###############

temp1 = subset(all,responseR=='ai response'&labelR=='ai label')
temp2 = subset(all,responseR=='human response'&labelR=='human label')
temp3 = subset(all,responseR=='human response'&labelR=='ai label')
temp4 = subset(all,responseR=='ai response'&labelR=='human label')

temp1$cond = 'ai response ai label'
temp2$cond = 'human response human label'
temp3$cond = 'human response ai label'
temp4$cond = 'ai response human label'

all.long = rbind(temp1,temp2,temp3,temp4)
all.long$cond= factor(all.long$cond, levels = c("ai response human label", "ai response ai label", "human response human label","human response ai label"))

all.long %>% head(10)
                         id happiness_d sadness_d fear_d anger_d surprise_d
1  614ea93d581d6f4281e9d232           2         7      5       5          4
2  5dd5378596afdf4eb31c657b           4         6      5       5          1
3  6413586e7bdde1f6740283b8           4         4      1       1          1
9  5efb31fa8cd32f04bf048643           1         4      6       2          1
16 63a92e2605e59f352e566b83           1         1      5       6          2
17 5f0fd43f114b6a07e3c05d1b           3         5      4       4          4
19 5e79340b6d9a624e3f39586d           1         6      2       5          2
24 60eb1a7c95641da67ae7d611           1         6      4       3          3
25 5e0d256753e8464d64940925           1         6      4       4          1
28 5a80bf6e2a842c0001fc9064           1         7      7       4          1
   disgust_d age_d gender_d edu_d race_d race_d_5_TEXT employment_d political_d
1          3    38   Female     3      3                          9           2
2          1    44   Female     5      3                         12           1
3          1    37     Make     3   <NA>                          9           3
9          3    36   female     3      3                         16           2
16         6    50   female     3      3                         10           2
17         3    47     Male     6      1                         11           3
19         2    45     male     5      5                          9           4
24         2    63   Female     5      3                         10           7
25         4    56   female     5      3                         12           4
28         7    38     Male     2      2                         11           4
   class_d ladder_d
1        2        4
2        2        4
3        2        5
9        2        4
16       2        4
17       3        6
19       3        7
24       3        5
25       4        7
28       1        1
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           AI.response
1  That sounds like a very difficult situation to deal with. I can understand why you feel conflicted about sending your son to live with his dad. You love your son and you want what's best for him, but you also struggle with his behavior and your own frustration. You are not a failure as a mom, you are just facing a challenge that many parents face. Maybe you could talk to his dad and see if he has any ideas on how to help Sebastian. Maybe you could also try some different strategies to communicate with your son and reward him for positive behavior. Sending him away might not be the only option, but if you decide to do that, it doesn't mean you are giving up on him. It just means you are trying a different approach. Whatever you do, I hope you find a way to make things better for yourself and your son. You deserve to be happy and so does he.
2                                                                                                                                                                                                                                                                                                                                                                                                                  Wow, that sounds like a very complex and difficult situation. I'm so sorry that your partner was hit by a car and suffered such serious injuries. That must have been terrifying for both of you. I'm glad that you are staying with her and taking care of her, even though it's hard and exhausting. You are very brave and compassionate. I hope that she recovers well and that your relationship continues to heal and grow. You deserve happiness and peace.❤️
3                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          That sounds like a very stressful situation to be in. I can understand how you feel torn between fixing your current truck or buying a new one. Both options have pros and cons, and it's not easy to decide what to do. You have to weigh the costs, benefits, risks, and opportunities of each choice. I hope you can find a solution that works for you and your budget.
9                                                                                                                                                                                                                                                                                                                                                                                                                                                                        That sounds like a very difficult situation to be in. You have a lot of responsibilities and expectations to balance, and you want to do what's best for your family and yourself. I can understand why you feel torn between going back to work or staying home. There is no easy answer, but I hope you can find a solution that works for you and your loved ones. You deserve to be happy and fulfilled.🌻
16                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       Wow, that sounds like a very frustrating and stressful situation. You are doing your best to learn the lottery machine and follow the rules, but your coworker is not clearing it out and causing problems for you. You have tried to talk to her and your manager, but nothing has changed. You must feel very angry and helpless. I'm sorry you have to deal with this. You deserve better.
17                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             I'm so sorry to hear that you got laid off. That must have been a huge shock and a lot of stress for you. You have every right to feel worried and overwhelmed by this situation. It's not easy to deal with such a sudden change and uncertainty. I hope you can find some support and resources to help you cope and plan your next steps. You are not alone in this.
19                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                That sounds like a very difficult and painful situation. I'm sorry you have to deal with that. It must be hard to see your mom change and act in ways that hurt you and your family. You have every right to feel confused and frustrated. I hope you can find some support and help for yourself and your mom. You deserve to be treated with respect and kindness.
24                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      That sounds like a very difficult situation to be in. You care about the young assistant and you don't want to see her get hurt by her husband's behavior. You also don't want to interfere in their marriage or cause any trouble at work. I can understand why you are feeling disturbed and conflicted about what to do. You have a good heart and you want the best for everyone involved.
25                                                                                                                                                                                                                                                                                                                                                                                                                                        Wow, that sounds like a very stressful and frustrating situation. You must feel overwhelmed by your cousin's problems and their lack of initiative. It's admirable that you've tried to help them, but you can't do everything for them. They need to take some responsibility for their own lives and well-being. You deserve to have some peace of mind and not worry about their choices. I hope things get better for you and them soon.
28                                                                                                                                                                                                                                                                                                                                                                                                                                      I'm sorry to hear that you're going through so much. You have a lot of challenges and responsibilities that are weighing on you. It's understandable that you feel overwhelmed and stressed. You're not a fool, you're a human being who made some mistakes and learned from them. You're also a caring son who is looking after your father. You have some goals and aspirations that you want to pursue. You deserve compassion and support.
   happiness_ai sadness_ai fear_ai anger_ai surprise_ai disgust_ai
1             1          6       5        4           2          3
2             2          6       7        5           4          2
3             1          4       3        3           2          2
9             1          5       4        2           1          1
16            1          3       4        5           2          4
17            1          6       5        4           7          2
19            1          6       5        4           3          3
24            1          6       4        5           3          4
25            1          6       5        4           2          3
28            1          6       5        4           2          3
   Human.response happiness_r sadness_r fear_r anger_r surprise_r disgust_r
1            <NA>          NA        NA     NA      NA         NA        NA
2            <NA>          NA        NA     NA      NA         NA        NA
3            <NA>          NA        NA     NA      NA         NA        NA
9            <NA>          NA        NA     NA      NA         NA        NA
16           <NA>          NA        NA     NA      NA         NA        NA
17           <NA>          NA        NA     NA      NA         NA        NA
19           <NA>          NA        NA     NA      NA         NA        NA
24           <NA>          NA        NA     NA      NA         NA        NA
25           <NA>          NA        NA     NA      NA         NA        NA
28           <NA>          NA        NA     NA      NA         NA        NA
   understood validated affirmed seen accepted caredfor accuracy1 accuracy2
1           6         5        5    6        6        6         6         6
2           7         7        7    7        7        7         7         7
3           5         4        4    3        3        4         4         5
9           6         6        6    6        6        6         6         6
16          7         7        7    7        4        4         7         7
17          5         5        5    5        5        5         4         5
19          6         6        6    5        4        3         6         6
24          6         6        6    6        6        6         6         6
25          7         7        7    7        7        7         7         7
28          7         7        7    7        7        7         7         7
   knewmean_p knewmean_b understood_p understood_b close_p connect_p trust_p
1          NA          7           NA            7      NA        NA      NA
2          NA          7           NA            7      NA        NA      NA
3          NA          4           NA            4      NA        NA      NA
9          NA          6           NA            6      NA        NA      NA
16         NA          7           NA            7      NA        NA      NA
17         NA          5           NA            5      NA        NA      NA
19         NA          4           NA            3      NA        NA      NA
24         NA          6           NA            6      NA        NA      NA
25         NA          7           NA            7      NA        NA      NA
28         NA          7           NA            6      NA        NA      NA
   close_b connect_b trust_b lonely connected distressed excited upset guilty
1        6         6       6      3         3          3       3     4      5
2        7         7       7      1         7          1       4     1      1
3        3         3       3      1         4          1       1     1      1
9        6         6       6      1         6          1       1     1      1
16       5         5       6      1         4          1       4     1      1
17       5         5       5      5         4          3       3     2      3
19       2         2       4      1         7          1       4     1      1
24       5         5       5      1         6          1       5     1      1
25       7         7       7      1         7          1       3     1      1
28       6         6       6      4         5          4       3     4      3
   scared enthusiastic ashamed nervous happy sad surprised hopeful optimistic
1       4            4       4       4     2   4         2       3          5
2       1            5       1       1     5   1         5       7          7
3       1            1       1       1     4   1         1       4          4
9       1            1       1       1     6   1         5       5          6
16      1            5       1       1     7   1         1       6          6
17      2            5       2       2     4   2         4       6          6
19      1            6       1       1     6   2         1       5          5
24      1            5       1       1     6   1         6       6          6
25      1            3       1       1     6   1         4       5          7
28      5            5       1       3     3   4         5       7          5
   ambivalent uneasy unnerved creeped uncomfortable bothered loneliness1
1           5      1        1       1             4        6           4
2           2      1        1       1             1        1           2
3           4      1        1       3             1        1           3
9           1      1        1       1             1        1           2
16          1      4        1       1             1        1           1
17          4      3        3       3             2        4           4
19          3      1        1       1             1        1           1
24          1      1        1       1             1        1           3
25          1      1        1       1             1        1           1
28          4      4        1       3             3        4           6
   loneliness2 loneliness3 convey.thoughts have.exp longing.or.hoping
1            3           3               6        5                 5
2            4           4               5        6                 1
3            3           3               4        3                 2
9            2           2               5        5                 1
16           1           1               5        4                 1
17           4           4               2        1                 2
19           1           1               5        2                 2
24           2           2               4        5                 1
25           1           1               6        1                 2
28           6           6               5        5                 4
   exp.embrssment understand.feeling feel.afraid feel.hungry exp.joy remember
1               4                  6           4           1       5        7
2               1                  1           1           1       2        7
3               2                  3           2           2       2        5
9               1                  5           1           1       1        6
16              1                  6           1           1       4        6
17              2                  1           1           1       2        6
19              2                  5           2           1       2        4
24              3                  4           2           1       1        6
25              2                  6           1           1       2        6
28              4                  4           4           1       4        7
   tell.right.from.wrong exp.pain personality make.plans exp.pleasure exp.pride
1                      6        2           5          5            2         5
2                      1        1           6          2            1         1
3                      3        2           2          4            2         2
9                      5        1           1          2            1         1
16                     6        1           7          4            4         6
17                     4        1           2          4            2         1
19                     5        1           3          5            2         2
24                     4        2           4          3            1         2
25                     5        1           1          5            1         3
28                     4        5           7          6            4         4
   exp.anger self.restraint think familiar_bing familiar_gpt familiar_bard
1          2              4     7             3            3             1
2          1              1     7             4            7             1
3          2              3     4             1            3             1
9          1              1     6             4            6             1
16         1              4     6             7            4             1
17         2              2     1             4            5             3
19         2              2     5             1            5             3
24         2              4     3             4            4             1
25         1              4     5             3            3             1
28         4              4     7             4            7             1
   often_bing often_gpt often_bard atti_bing atti_gpt atti_bard loneliness
1           1         3          1         3        2         0   3.333333
2           1         6          1         3        3         0   3.333333
3           1         3          1         0        0         0   3.000000
9           1         2          1         2        2         0   2.000000
16          4         1          1         2        2         0   1.000000
17          2         5          1         0        1         0   4.000000
19          1         5          3         0        2         2   1.000000
24          3         3          1         0        0         0   2.333333
25          2         2          1         2        2         0   1.000000
28          2         7          1         2        3         0   6.000000
     responseR   labelR empathicaccuracy.ai empathicaccuracy.r experience
1  ai response ai label           0.8333333                 NA   3.636364
2  ai response ai label           1.3333333                 NA   2.000000
3  ai response ai label           1.5000000                 NA   2.090909
9  ai response ai label           0.8333333                 NA   1.363636
16 ai response ai label           1.0000000                 NA   2.818182
17 ai response ai label           1.3333333                 NA   1.545455
19 ai response ai label           1.0000000                 NA   1.909091
24 ai response ai label           0.6666667                 NA   2.181818
25 ai response ai label           0.5000000                 NA   1.454545
28 ai response ai label           1.3333333                 NA   4.181818
     agency feelheard accuracy understoodme connection statelonely excitement
1  5.857143  5.666667      6.0          7.0   6.000000         3.0        3.5
2  3.428571  7.000000      7.0          7.0   7.000000         4.0        4.5
3  3.714286  3.833333      4.5          4.0   3.000000         2.5        1.0
9  4.285714  6.000000      6.0          6.0   6.000000         3.5        1.0
16 5.285714  6.000000      7.0          7.0   5.333333         2.5        4.5
17 2.857143  5.000000      4.5          5.0   5.000000         4.5        4.0
19 4.428571  5.000000      6.0          3.5   2.666667         4.0        5.0
24 4.000000  6.000000      6.0          6.0   5.000000         3.5        5.0
25 5.285714  7.000000      7.0          7.0   7.000000         4.0        3.0
28 5.285714  7.000000      7.0          6.5   6.000000         4.5        4.0
   hope fear discomfort distress shame  labelRR  responseRR
1   4.0    4   2.000000     4.25   4.5 AI label AI response
2   7.0    1   1.000000     1.00   1.0 AI label AI response
3   4.0    1   1.000000     1.00   1.0 AI label AI response
9   5.5    1   1.000000     1.00   1.0 AI label AI response
16  6.0    1   2.000000     1.00   1.0 AI label AI response
17  6.0    2   2.666667     2.75   2.5 AI label AI response
19  5.0    1   1.000000     1.25   1.0 AI label AI response
24  6.0    1   1.000000     1.00   1.0 AI label AI response
25  6.0    1   1.000000     1.00   1.0 AI label AI response
28  6.0    4   2.666667     4.00   2.0 AI label AI response
                   cond
1  ai response ai label
2  ai response ai label
3  ai response ai label
9  ai response ai label
16 ai response ai label
17 ai response ai label
19 ai response ai label
24 ai response ai label
25 ai response ai label
28 ai response ai label
library(rempsyc)
Suggested APA citation: Thériault, R. (2023). rempsyc: Convenience functions for psychology. 
Journal of Open Source Software, 8(87), 5466. https://doi.org/10.21105/joss.05466
table.stats1 <- nice_contrasts(
  response = "feelheard",
  group = "cond",
  data = all.long
)

(my_table1 <- nice_table(table.stats1))

Dependent Variable: feelheard

Comparison                                              df      t          p      d         95% CI
ai response human label - ai response ai label          451   4.00  < .001***   0.52   [0.31, 0.75]
ai response human label - human response human label    451   3.33    .001***   0.45   [0.21, 0.68]
ai response human label - human response ai label       451   7.10  < .001***   0.96   [0.70, 1.21]
ai response ai label - human response human label       451  -0.58       .561  -0.08  [-0.35, 0.22]
ai response ai label - human response ai label          451   3.31     .001**   0.43   [0.14, 0.71]
human response human label - human response ai label    451   3.78  < .001***   0.51   [0.20, 0.81]
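`nice_contrasts` reports, for each pair of conditions, a t statistic against the error term of the full four-group model (hence df = 451 throughout) together with a standardized effect size. As a rough illustration of the underlying quantities — not the rempsyc implementation, just the textbook two-sample formulas with a pooled standard deviation and Cohen's d — a sketch:

```python
import math
import statistics as st

def contrast(g1, g2):
    """Pooled-SD two-sample t statistic and Cohen's d (textbook formulas)."""
    n1, n2 = len(g1), len(g2)
    sp = math.sqrt(((n1 - 1) * st.variance(g1) + (n2 - 1) * st.variance(g2))
                   / (n1 + n2 - 2))                      # pooled SD
    diff = st.mean(g1) - st.mean(g2)
    d = diff / sp                                        # Cohen's d
    t = diff / (sp * math.sqrt(1 / n1 + 1 / n2))         # t statistic
    return t, d

t, d = contrast([2, 4, 6], [1, 3, 5])
```

Reading the tables with this in mind: d around 0.5 (e.g., the label penalty on feeling heard) is a medium effect, while d around 1.0 (AI response/human label vs. human response/AI label) is a large one.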

table.stats2 <- nice_contrasts(
  response = "accuracy",
  group = "cond",
  data = all.long
)

(my_table2 <- nice_table(table.stats2))

Dependent Variable: accuracy

Comparison                                              df      t          p      d         95% CI
ai response human label - ai response ai label          451   1.34       .181   0.18  [-0.03, 0.38]
ai response human label - human response human label    451   3.20     .001**   0.43   [0.19, 0.68]
ai response human label - human response ai label       451   5.55  < .001***   0.75   [0.49, 1.01]
ai response ai label - human response human label       451   1.95       .051   0.26   [0.00, 0.50]
ai response ai label - human response ai label          451   4.37  < .001***   0.57   [0.29, 0.85]
human response human label - human response ai label    451   2.36      .019*   0.32   [0.00, 0.63]

table.stats3 <- nice_contrasts(
  response = "understoodme",
  group = "cond",
  data = all.long
)


(my_table3 <- nice_table(table.stats3))

Dependent Variable: understoodme

Comparison                                              df      t          p      d         95% CI
ai response human label - ai response ai label          451   4.02  < .001***   0.53   [0.31, 0.74]
ai response human label - human response human label    451   4.00  < .001***   0.54   [0.29, 0.75]
ai response human label - human response ai label       451   7.53  < .001***   1.02   [0.75, 1.26]
ai response ai label - human response human label       451   0.09       .926   0.01  [-0.27, 0.29]
ai response ai label - human response ai label          451   3.73  < .001***   0.49   [0.19, 0.77]
human response human label - human response ai label    451   3.55  < .001***   0.48   [0.18, 0.77]

table.stats4 <- nice_contrasts(
  response = "connection",
  group = "cond",
  data = all.long
)

(my_table4 <- nice_table(table.stats4))

Dependent Variable: connection

Comparison                                              df      t          p      d         95% CI
ai response human label - ai response ai label          451   4.26  < .001***   0.56   [0.32, 0.80]
ai response human label - human response human label    451   2.85     .005**   0.38   [0.17, 0.61]
ai response human label - human response ai label       451   7.40  < .001***   1.00   [0.73, 1.26]
ai response ai label - human response human label       451  -1.33       .183  -0.17  [-0.45, 0.09]
ai response ai label - human response ai label          451   3.36    .001***   0.44   [0.15, 0.72]
human response human label - human response ai label    451   4.57  < .001***   0.61   [0.32, 0.89]

# print(my_table1,my_table2,my_table3,my_table4, preview ="docx")
# flextable::save_as_docx(my_table1,my_table2,my_table3,my_table4, path = "contrasts.docx")


Figure 2

## four condition plots

all.long=all.long%>%
  mutate(cond = recode(cond, 
                       "ai response human label" = "AI Response\nHuman Label",
                       "ai response ai label" = "AI Response\nAI Label",
                       "human response human label" = "Human Response\nHuman Label",
                       "human response ai label" = "Human Response\nAI Label"))
apatheme=theme_bw()+
  theme(panel.grid.major=element_blank(),
        panel.grid.minor=element_blank(),
        panel.border=element_blank(),
        axis.line=element_line(),
        text=element_text(family='Helvetica',size=10,colour='black'),
        axis.text.x = element_text(color="black"))
heard = all.long %>%
  group_by(cond)%>%
  summarize(ratings = mean(feelheard), se.ratings = std.error(feelheard))%>%
  ggplot(aes(x = cond, y = ratings,fill=cond))+
  geom_bar(stat='identity', position=dodge,color="black",linewidth=1)+
  geom_errorbar(aes(ymin= ratings - se.ratings, ymax = ratings + se.ratings), 
                position = dodge,width = 0.1)+coord_cartesian(ylim = c(1,7))+
  scale_y_continuous(breaks=seq(1,7,1))+
  ylab('Feeling Heard')+xlab('')+  
  scale_fill_manual(values = c("black", "darkgrey","grey","white"))+
  apatheme
heard
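The error bars in these four-condition plots are standard errors of the mean (`std.error`, presumably from plotrix). The quantity is simply the sample standard deviation divided by the square root of n; a one-line Python equivalent for reference:

```python
import math
import statistics as st

def std_error(x):
    """Standard error of the mean: sample SD / sqrt(n)."""
    return st.stdev(x) / math.sqrt(len(x))
```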

accuracy = all.long %>%
  group_by(cond)%>%
  summarize(ratings = mean(accuracy), se.ratings = std.error(accuracy))%>%
  ggplot(aes(x = cond, y = ratings,fill=cond))+
  geom_bar(stat='identity', position=dodge,color="black",linewidth=1)+
  geom_errorbar(aes(ymin= ratings - se.ratings, ymax = ratings + se.ratings), 
                position = dodge,width = 0.1)+coord_cartesian(ylim = c(1,7))+
  scale_y_continuous(breaks=seq(1,7,1))+
  ylab('Response Accuracy')+xlab('')+  
  scale_fill_manual(values = c("black", "darkgrey","grey","white"))+
  apatheme
accuracy

understood = all.long %>%
  group_by(cond)%>%
  summarize(ratings = mean(understoodme), se.ratings = std.error(understoodme))%>%
  ggplot(aes(x = cond, y = ratings,fill=cond))+
  geom_bar(stat='identity', position=dodge,color="black",linewidth=1)+
  geom_errorbar(aes(ymin= ratings - se.ratings, ymax = ratings + se.ratings), 
                position = dodge,width = 0.1)+coord_cartesian(ylim = c(1,7))+
  scale_y_continuous(breaks=seq(1,7,1))+
  ylab('Responder Understood Me')+xlab('')+  
  scale_fill_manual(values = c("black", "darkgrey","grey","white"))+
  apatheme
understood

connection = all.long %>%
  group_by(cond)%>%
  summarize(ratings = mean(connection), se.ratings = std.error(connection))%>%
  ggplot(aes(x = cond, y = ratings,fill=cond))+
  geom_bar(stat='identity', position=dodge,color="black",linewidth=1)+
  geom_errorbar(aes(ymin= ratings - se.ratings, ymax = ratings + se.ratings), 
                position = dodge,width = 0.1)+coord_cartesian(ylim = c(1,7))+
  scale_y_continuous(breaks=seq(1,7,1))+
  ylab('Connection to Responder')+xlab('')+  
  scale_fill_manual(values = c("black", "darkgrey","grey","white"))+
  apatheme
connection

figure2=ggarrange(heard, accuracy,understood,connection, ncol=2, nrow=2, common.legend = TRUE, legend="none")
figure2


Understanding Moderators in Statistical Analysis

Moderators are variables that affect the strength or direction of the relationship between independent and dependent variables. Including moderators in your analysis allows you to explore complex interactions and understand under what conditions certain effects hold true.

How to Include Moderators in R

In R, moderators can be incorporated into linear models to examine their interaction with independent variables. The general approach involves specifying the interaction term in the model formula.

Example Code for Moderator Analysis

Below is an example of how to perform moderator analysis in R:

# Perform ANOVA with moderator interaction
anova(lm(feelheard ~ labelR * atti_bing, all))
Analysis of Variance Table

Response: feelheard
                  Df Sum Sq Mean Sq F value    Pr(>F)    
labelR             1  52.85  52.852  33.623 1.259e-08 ***
atti_bing          1 132.62 132.623  84.372 < 2.2e-16 ***
labelR:atti_bing   1  29.37  29.374  18.687 1.897e-05 ***
Residuals        451 708.92   1.572                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(accuracy ~ labelR * atti_bing, all))
Analysis of Variance Table

Response: accuracy
                  Df Sum Sq Mean Sq F value    Pr(>F)    
labelR             1  11.46  11.457  6.4862 0.0112025 *  
atti_bing          1 110.10 110.102 62.3322 2.227e-14 ***
labelR:atti_bing   1  24.11  24.114 13.6518 0.0002471 ***
Residuals        451 796.63   1.766                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(understoodme ~ labelR * atti_bing, all))
Analysis of Variance Table

Response: understoodme
                  Df Sum Sq Mean Sq F value    Pr(>F)    
labelR             1  56.43  56.430  30.122 6.780e-08 ***
atti_bing          1 131.88 131.883  70.397 6.291e-16 ***
labelR:atti_bing   1  34.93  34.934  18.647 1.935e-05 ***
Residuals        451 844.90   1.873                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(connection ~ labelR * atti_bing, all))
Analysis of Variance Table

Response: connection
                  Df Sum Sq Mean Sq F value    Pr(>F)    
labelR             1  99.30  99.303  48.804 1.020e-11 ***
atti_bing          1 285.82 285.823 140.472 < 2.2e-16 ***
labelR:atti_bing   1  53.11  53.109  26.101 4.792e-07 ***
Residuals        451 917.67   2.035                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Explanation:

  • lm(): Creates a linear model to analyze the relationship between dependent and independent variables.

  • labelR * atti_bing: Specifies that the interaction between labelR (the label of the response) and atti_bing (participants’ attitudes toward Bing) should be included.

  • anova(): Performs ANOVA to determine the significance of the main and interaction effects.

Steps for Conducting Moderator Analysis

  1. Fit the Model: Use lm() to fit a linear model with the interaction term.

  2. Run ANOVA: Use anova() to evaluate the significance of the interaction effect.

  3. Interpret the Results:

    • Check the p-value for the interaction term to see if it is statistically significant.

    • If significant, this indicates that the effect of labelR on the dependent variable varies depending on atti_bing.

  4. Report the Findings:

    • Include the F-statistic, degrees of freedom, and p-values to show the significance of the interaction.

    • Discuss how the moderator influences the relationship between the independent and dependent variables.


Example Interpretation:

If the interaction term labelR:atti_bing is significant, this suggests that participants’ attitudes toward Bing moderate the impact of response labeling on feeling heard. For instance, participants with more positive attitudes may be less affected by an AI label compared to those with neutral or negative attitudes.
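The decomposition that `lm(y ~ labelR * atti_bing)` performs is not R-specific. The block below is a minimal language-neutral sketch on simulated data (all coefficient values and variable names are hypothetical, loosely mirroring the reported estimates): a moderator analysis is just an ordinary regression whose design matrix carries an extra interaction column.

```python
import numpy as np

# Simulated moderation: the effect of a binary label (0 = AI, 1 = human)
# on "feeling heard" depends on attitude toward the chatbot.
rng = np.random.default_rng(0)
n = 455                                   # sample size matching the analysis above
label = rng.integers(0, 2, n)             # hypothetical stand-in for labelR
attitude = rng.normal(0, 1.5, n)          # hypothetical stand-in for atti_bing
# Assumed true model (coefficients are illustrative, not the paper's):
y = 4.9 + 0.9*label + 0.57*attitude - 0.41*label*attitude + rng.normal(0, 1.25, n)

# Design matrix [intercept, label, attitude, interaction] = lm(y ~ label * attitude)
X = np.column_stack([np.ones(n), label, attitude, label * attitude])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))  # estimates land near [4.9, 0.9, 0.57, -0.41]
```

If the recovered interaction coefficient is reliably non-zero, the slope of attitude differs by label condition, which is exactly what the `labelR:atti_bing` term in the ANOVA tests.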

Main Findings

The analysis revealed several key outcomes regarding the effectiveness of AI-generated versus human-generated responses:

  1. Feeling Heard: AI-generated responses scored significantly higher on participants’ feeling heard (mean = 5.74) compared to human responses (mean = 5.17). This demonstrates that, in terms of content quality, AI responses had a superior effect on making recipients feel heard.

  2. Perceived Accuracy and Understanding: Participants rated AI-generated responses as more accurate (mean = 5.92) and understanding (mean = 5.79) compared to human responses (mean = 5.24 for accuracy and 5.07 for understanding). This suggests that AI responses were perceived as better at capturing the essence of the participants’ emotions.

  3. Connection to the Responder: While AI responses were effective at making participants feel heard, they fell short in fostering a connection when labeled as AI-generated. Participants reported a lower sense of connection to responders labeled as AI (mean = 4.06) compared to responders labeled as human (mean = 4.71), indicating that knowing the response was from an AI reduced perceived personal closeness.

  4. Label Effects: The analysis showed that labeling responses as AI reduced the perceived impact across all measures, including feeling heard, response accuracy, and connection. The “AI label” effect demonstrated a consistent decrease in ratings by approximately 0.68 points for feeling heard and 0.31 points for response accuracy.


Let’s see the real code:

Figure 3

###################Moderators####################
summary(lm(feelheard~labelR*atti_bing,all))

Call:
lm(formula = feelheard ~ labelR * atti_bing, data = all)

Residuals:
    Min      1Q  Median      3Q     Max 
-5.0169 -0.5647  0.2236  0.9023  3.1716 

Coefficients:
                            Estimate Std. Error t value Pr(>|t|)    
(Intercept)                  4.87483    0.08592  56.736  < 2e-16 ***
labelRhuman label            0.90153    0.12167   7.410  6.3e-13 ***
atti_bing                    0.57102    0.05747   9.936  < 2e-16 ***
labelRhuman label:atti_bing -0.41234    0.09539  -4.323  1.9e-05 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 1.254 on 451 degrees of freedom
Multiple R-squared:  0.2326,    Adjusted R-squared:  0.2275 
F-statistic: 45.56 on 3 and 451 DF,  p-value: < 2.2e-16
anova(lm(feelheard~labelR*atti_bing,all))
Analysis of Variance Table

Response: feelheard
                  Df Sum Sq Mean Sq F value    Pr(>F)    
labelR             1  52.85  52.852  33.623 1.259e-08 ***
atti_bing          1 132.62 132.623  84.372 < 2.2e-16 ***
labelR:atti_bing   1  29.37  29.374  18.687 1.897e-05 ***
Residuals        451 708.92   1.572                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(accuracy~labelR*atti_bing,all))
Analysis of Variance Table

Response: accuracy
                  Df Sum Sq Mean Sq F value    Pr(>F)    
labelR             1  11.46  11.457  6.4862 0.0112025 *  
atti_bing          1 110.10 110.102 62.3322 2.227e-14 ***
labelR:atti_bing   1  24.11  24.114 13.6518 0.0002471 ***
Residuals        451 796.63   1.766                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(understoodme~labelR*atti_bing,all))
Analysis of Variance Table

Response: understoodme
                  Df Sum Sq Mean Sq F value    Pr(>F)    
labelR             1  56.43  56.430  30.122 6.780e-08 ***
atti_bing          1 131.88 131.883  70.397 6.291e-16 ***
labelR:atti_bing   1  34.93  34.934  18.647 1.935e-05 ***
Residuals        451 844.90   1.873                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(connection~labelR*atti_bing,all))
Analysis of Variance Table

Response: connection
                  Df Sum Sq Mean Sq F value    Pr(>F)    
labelR             1  99.30  99.303  48.804 1.020e-11 ***
atti_bing          1 285.82 285.823 140.472 < 2.2e-16 ***
labelR:atti_bing   1  53.11  53.109  26.101 4.792e-07 ***
Residuals        451 917.67   2.035                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~labelR*experience,all))
Analysis of Variance Table

Response: feelheard
                   Df Sum Sq Mean Sq F value    Pr(>F)    
labelR              1  52.85  52.852 29.0762 1.125e-07 ***
experience          1  45.48  45.476 25.0184 8.149e-07 ***
labelR:experience   1   5.66   5.658  3.1129   0.07835 .  
Residuals         451 819.78   1.818                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(accuracy~labelR*experience,all))
Analysis of Variance Table

Response: accuracy
                   Df Sum Sq Mean Sq F value   Pr(>F)   
labelR              1  11.46 11.4571  5.6664 0.017707 * 
experience          1  18.90 18.9045  9.3497 0.002363 **
labelR:experience   1   0.06  0.0555  0.0274 0.868494   
Residuals         451 911.89  2.0219                    
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(understoodme~labelR*experience,all))
Analysis of Variance Table

Response: understoodme
                   Df Sum Sq Mean Sq F value    Pr(>F)    
labelR              1  56.43  56.430 26.4095 4.121e-07 ***
experience          1  47.63  47.630 22.2911 3.132e-06 ***
labelR:experience   1   0.42   0.418  0.1958    0.6583    
Residuals         451 963.67   2.137                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(connection~labelR*experience,all))
Analysis of Variance Table

Response: connection
                   Df  Sum Sq Mean Sq F value    Pr(>F)    
labelR              1   99.30  99.303 40.8039 4.196e-10 ***
experience          1  139.59 139.593 57.3591 2.073e-13 ***
labelR:experience   1   19.42  19.419  7.9792  0.004942 ** 
Residuals         451 1097.59   2.434                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~labelR*agency,all))
Analysis of Variance Table

Response: feelheard
               Df Sum Sq Mean Sq F value    Pr(>F)    
labelR          1  52.85  52.852 31.9319 2.834e-08 ***
agency          1 119.41 119.407 72.1428 2.931e-16 ***
labelR:agency   1   5.04   5.041  3.0459   0.08162 .  
Residuals     451 746.47   1.655                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(accuracy~labelR*agency,all))
Analysis of Variance Table

Response: accuracy
               Df Sum Sq Mean Sq F value    Pr(>F)    
labelR          1  11.46  11.457  5.9881   0.01478 *  
agency          1  66.55  66.554 34.7846 7.232e-09 ***
labelR:agency   1   1.39   1.389  0.7262   0.39458    
Residuals     451 862.91   1.913                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(understoodme~labelR*agency,all))
Analysis of Variance Table

Response: understoodme
               Df Sum Sq Mean Sq F value    Pr(>F)    
labelR          1  56.43  56.430 28.7923 1.291e-07 ***
agency          1 121.56 121.563 62.0245 2.555e-14 ***
labelR:agency   1   6.24   6.236  3.1816   0.07514 .  
Residuals     451 883.92   1.960                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(connection~labelR*agency,all))
Analysis of Variance Table

Response: connection
               Df Sum Sq Mean Sq  F value    Pr(>F)    
labelR          1  99.30  99.303  45.0419 5.806e-11 ***
agency          1 241.50 241.503 109.5407 < 2.2e-16 ***
labelR:agency   1  20.78  20.783   9.4268  0.002268 ** 
Residuals     451 994.31   2.205                       
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
library(interactions)
library(jtools)
library(ggplot2)
m1=lm(feelheard~labelRR*atti_bing,all)
f1=interact_plot(m1, pred = atti_bing, modx = labelRR,interval=T,y.label = "Feel Heard",x.label = 'Attitude towards Bing Chat',legend.main = c("Label")) + theme_apa()
m2=lm(accuracy~labelRR*atti_bing,all)
f2=interact_plot(m2, pred = atti_bing, modx = labelRR,interval=T,y.label = "Response Accuracy",x.label = 'Attitude towards Bing Chat',legend.main='Label') + theme_apa()
f2

m3=lm(understoodme~labelRR*atti_bing,all)
f3=interact_plot(m3, pred = atti_bing, modx = labelRR,interval=T,y.label = "Responder Understood Me",x.label = 'Attitude towards Bing Chat',legend.main='Label') + theme_apa()
f3

m4=lm(connection~labelRR*atti_bing,all)
f4=interact_plot(m4, pred = atti_bing, modx = labelRR,interval=T,y.label = "Connection to Responder",x.label = 'Attitude towards Bing Chat',legend.main='Label') + theme_apa()
f4

m5=lm(connection~labelRR*agency,all)
f5=interact_plot(m5, pred = agency, modx = labelRR,interval=T,y.label = "Connection to Responder",x.label = 'Mind Perception of Bing Chat \n- Agency',legend.main='Label') + theme_apa()
f5

m6=lm(connection~labelRR*experience,all)
f6=interact_plot(m6, pred = experience, modx = labelRR,interval=T,y.label = "Connection to Responder",x.label = 'Mind Perception of Bing Chat \n- Experience',legend.main='Label') + theme_apa()
f6

library(grid)
library(gridExtra)

figure3=ggarrange(f1, f2,f3,f4,f5,f6,nrow=2,ncol=3,common.legend = TRUE,legend='bottom')
figure3

End of Figure 3.
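What `interact_plot` draws in each panel are simple slopes: the predicted effect of the moderator at each label level. Using the coefficients from the `feelheard` summary printed above, the two slopes can be recovered by hand (a sketch in Python; the numbers are copied from the regression table, not recomputed from data):

```python
# Coefficients from lm(feelheard ~ labelR * atti_bing) as reported above
b0, b_label, b_att, b_inter = 4.87483, 0.90153, 0.57102, -0.41234

def predicted(label_is_human: int, attitude: float) -> float:
    """Model prediction for a given label condition and attitude score."""
    return (b0 + b_label * label_is_human + b_att * attitude
            + b_inter * label_is_human * attitude)

slope_ai = b_att                # attitude slope under the AI label
slope_human = b_att + b_inter   # attitude slope under the human label
print(round(slope_ai, 3), round(slope_human, 3))  # 0.571 0.159
```

The attitude slope is steeper under the AI label, matching the pattern in the plots: the human-label advantage (0.90 at attitude 0) shrinks as attitudes toward Bing Chat grow more favorable, so the AI-label penalty is concentrated among participants with unfavorable attitudes.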


Empathic Accuracy

######################empathic accuracy########################


# Empathic accuracy: absolute deviation of each rater's emotion rating from
# the discloser's own rating (_ai = AI response, _r = human responder, _d = discloser)
all$happiness_ai.d = abs(all$happiness_ai-all$happiness_d)
all$sadness_ai.d = abs(all$sadness_ai-all$sadness_d)
all$fear_ai.d = abs(all$fear_ai-all$fear_d)
all$anger_ai.d = abs(all$anger_ai-all$anger_d)
all$surprise_ai.d = abs(all$surprise_ai-all$surprise_d)
all$disgust_ai.d = abs(all$disgust_ai-all$disgust_d)
all$happiness_r.d = abs(all$happiness_r-all$happiness_d)
all$sadness_r.d = abs(all$sadness_r-all$sadness_d)
all$fear_r.d = abs(all$fear_r-all$fear_d)
all$anger_r.d = abs(all$anger_r-all$anger_d)
all$surprise_r.d = abs(all$surprise_r-all$surprise_d)
all$disgust_r.d = abs(all$disgust_r-all$disgust_d)
pairedSamplesTTest( formula= ~happiness_ai.d + happiness_r.d, data=all )
Warning in pairedSamplesTTest(formula = ~happiness_ai.d + happiness_r.d, : 234
case(s) removed due to missingness

   Paired samples t-test 

Variables:  happiness_ai.d , happiness_r.d 

Descriptive statistics: 
            happiness_ai.d happiness_r.d difference
   mean              0.864         0.995     -0.131
   std dev.          0.949         1.085      0.961

Hypotheses: 
   null:        population means equal for both measurements
   alternative: different population means for each measurement

Test results: 
   t-statistic:  -2.03 
   degrees of freedom:  220 
   p-value:  0.044 

Other information: 
   two-sided 95% confidence interval:  [-0.259, -0.004] 
   estimated effect size (Cohen's d):  0.137 
pairedSamplesTTest( formula= ~sadness_ai.d + sadness_r.d, data=all )
Warning in pairedSamplesTTest(formula = ~sadness_ai.d + sadness_r.d, data =
all): 234 case(s) removed due to missingness

   Paired samples t-test 

Variables:  sadness_ai.d , sadness_r.d 

Descriptive statistics: 
            sadness_ai.d sadness_r.d difference
   mean            1.249       1.579     -0.330
   std dev.        1.155       1.265      1.208

Hypotheses: 
   null:        population means equal for both measurements
   alternative: different population means for each measurement

Test results: 
   t-statistic:  -4.066 
   degrees of freedom:  220 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [-0.49, -0.17] 
   estimated effect size (Cohen's d):  0.274 
pairedSamplesTTest( formula= ~fear_ai.d + fear_r.d, data=all )
Warning in pairedSamplesTTest(formula = ~fear_ai.d + fear_r.d, data = all): 234
case(s) removed due to missingness

   Paired samples t-test 

Variables:  fear_ai.d , fear_r.d 

Descriptive statistics: 
            fear_ai.d fear_r.d difference
   mean         1.579    1.846     -0.267
   std dev.     1.261    1.494      1.320

Hypotheses: 
   null:        population means equal for both measurements
   alternative: different population means for each measurement

Test results: 
   t-statistic:  -3.007 
   degrees of freedom:  220 
   p-value:  0.003 

Other information: 
   two-sided 95% confidence interval:  [-0.442, -0.092] 
   estimated effect size (Cohen's d):  0.202 
pairedSamplesTTest( formula= ~disgust_ai.d + disgust_r.d, data=all )
Warning in pairedSamplesTTest(formula = ~disgust_ai.d + disgust_r.d, data =
all): 234 case(s) removed due to missingness

   Paired samples t-test 

Variables:  disgust_ai.d , disgust_r.d 

Descriptive statistics: 
            disgust_ai.d disgust_r.d difference
   mean            1.163       1.430     -0.267
   std dev.        1.075       1.465      1.410

Hypotheses: 
   null:        population means equal for both measurements
   alternative: different population means for each measurement

Test results: 
   t-statistic:  -2.815 
   degrees of freedom:  220 
   p-value:  0.005 

Other information: 
   two-sided 95% confidence interval:  [-0.454, -0.08] 
   estimated effect size (Cohen's d):  0.189 
pairedSamplesTTest( formula= ~surprise_ai.d + surprise_r.d, data=all )
Warning in pairedSamplesTTest(formula = ~surprise_ai.d + surprise_r.d, data =
all): 234 case(s) removed due to missingness

   Paired samples t-test 

Variables:  surprise_ai.d , surprise_r.d 

Descriptive statistics: 
            surprise_ai.d surprise_r.d difference
   mean             1.281        1.471     -0.190
   std dev.         1.076        1.441      1.654

Hypotheses: 
   null:        population means equal for both measurements
   alternative: different population means for each measurement

Test results: 
   t-statistic:  -1.708 
   degrees of freedom:  220 
   p-value:  0.089 

Other information: 
   two-sided 95% confidence interval:  [-0.409, 0.029] 
   estimated effect size (Cohen's d):  0.115 
pairedSamplesTTest( formula= ~anger_ai.d + anger_r.d, data=all )
Warning in pairedSamplesTTest(formula = ~anger_ai.d + anger_r.d, data = all):
234 case(s) removed due to missingness

   Paired samples t-test 

Variables:  anger_ai.d , anger_r.d 

Descriptive statistics: 
            anger_ai.d anger_r.d difference
   mean          1.611     1.611      0.000
   std dev.      1.173     1.434      1.526

Hypotheses: 
   null:        population means equal for both measurements
   alternative: different population means for each measurement

Test results: 
   t-statistic:  0 
   degrees of freedom:  220 
   p-value:  1 

Other information: 
   two-sided 95% confidence interval:  [-0.202, 0.202] 
   estimated effect size (Cohen's d):  0 
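The effect sizes printed by `pairedSamplesTTest` follow the standard paired-samples conversion d = |t| / sqrt(n), where n = df + 1 is the number of pairs. A quick sanity check in Python against two of the results above:

```python
import math

def paired_cohens_d(t: float, df: int) -> float:
    """Cohen's d for a paired t-test: mean(diff)/sd(diff) = |t| / sqrt(n pairs)."""
    return abs(t) / math.sqrt(df + 1)

# Sadness above:   t = -4.066, df = 220 -> reported d = 0.274
# Happiness above: t = -2.03,  df = 220 -> reported d = 0.137
print(round(paired_cohens_d(-4.066, 220), 3))  # 0.274
print(round(paired_cohens_d(-2.03, 220), 3))   # 0.137
```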


Follow-up study

######################follow up study####################################
library(dplyr)
library(lsr)
fu = read.csv('data/Followup Study OSF.csv')
fu$id = fu$OriginalDiscloserID
all = left_join(all,fu,by='id')
independentSamplesTTest(m_emotional ~aiorhuman.response,fu,var.equal = T)
Warning in independentSamplesTTest(m_emotional ~ aiorhuman.response, fu, :
group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_emotional 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     5.763 4.641
   std dev. 0.778 1.380

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  11.089 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.924, 1.321] 
   estimated effect size (Cohen's d):  1.011 
independentSamplesTTest(m_practical ~aiorhuman.response,fu,var.equal = T)
Warning in independentSamplesTTest(m_practical ~ aiorhuman.response, fu, :
group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_practical 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     3.284 4.314
   std dev. 1.380 1.699

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  -7.327 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [-1.306, -0.754] 
   estimated effect size (Cohen's d):  0.668 
independentSamplesTTest(m_specifics_1 ~aiorhuman.response,fu,var.equal = T)
Warning in independentSamplesTTest(m_specifics_1 ~ aiorhuman.response, fu, :
group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_1 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.497 0.354
   std dev. 0.239 0.278

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  6.061 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.097, 0.189] 
   estimated effect size (Cohen's d):  0.552 
independentSamplesTTest(m_specifics_2 ~aiorhuman.response,fu,var.equal = T)
Warning in independentSamplesTTest(m_specifics_2 ~ aiorhuman.response, fu, :
group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_2 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.583 0.416
   std dev. 0.201 0.293

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  7.325 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.122, 0.211] 
   estimated effect size (Cohen's d):  0.668 
independentSamplesTTest(m_specifics_3 ~aiorhuman.response,fu,var.equal = T)
Warning in independentSamplesTTest(m_specifics_3 ~ aiorhuman.response, fu, :
group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_3 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.569 0.462
   std dev. 0.216 0.277

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  4.75 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.063, 0.152] 
   estimated effect size (Cohen's d):  0.433 
independentSamplesTTest(m_specifics_4 ~aiorhuman.response,fu,var.equal = T)
Warning in independentSamplesTTest(m_specifics_4 ~ aiorhuman.response, fu, :
group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_4 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.449 0.349
   std dev. 0.248 0.278

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  4.178 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.053, 0.147] 
   estimated effect size (Cohen's d):  0.381 
independentSamplesTTest(m_specifics_5 ~aiorhuman.response,fu,var.equal = T)
Warning in independentSamplesTTest(m_specifics_5 ~ aiorhuman.response, fu, :
group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_5 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.324 0.367
   std dev. 0.258 0.277

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  -1.779 
   degrees of freedom:  480 
   p-value:  0.076 

Other information: 
   two-sided 95% confidence interval:  [-0.091, 0.005] 
   estimated effect size (Cohen's d):  0.162 
independentSamplesTTest(m_specifics_6 ~aiorhuman.response,fu,var.equal = T)
Warning in independentSamplesTTest(m_specifics_6 ~ aiorhuman.response, fu, :
group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_6 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.417 0.335
   std dev. 0.264 0.279

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  3.323 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.034, 0.131] 
   estimated effect size (Cohen's d):  0.303 
independentSamplesTTest(m_specifics_7 ~aiorhuman.response,fu,var.equal = T)
Warning in independentSamplesTTest(m_specifics_7 ~ aiorhuman.response, fu, :
group variable is not a factor
Warning in independentSamplesTTest(m_specifics_7 ~ aiorhuman.response, fu, : 1
case(s) removed due to missingness

   Student's independent samples t-test 

Outcome variable:   m_specifics_7 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.609 0.462
   std dev. 0.186 0.267

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  7.012 
   degrees of freedom:  479 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.105, 0.187] 
   estimated effect size (Cohen's d):  0.64 
independentSamplesTTest(m_specifics_8 ~ aiorhuman.response, fu, var.equal = T)
Warning in independentSamplesTTest(m_specifics_8 ~ aiorhuman.response, fu, :
group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_8 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.201 0.248
   std dev. 0.222 0.265

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  -2.1 
   degrees of freedom:  480 
   p-value:  0.036 

Other information: 
   two-sided 95% confidence interval:  [-0.09, -0.003] 
   estimated effect size (Cohen's d):  0.191 
independentSamplesTTest(m_specifics_9 ~ aiorhuman.response, fu, var.equal = T)
Warning in independentSamplesTTest(m_specifics_9 ~ aiorhuman.response, fu, :
group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_9 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.408 0.222
   std dev. 0.266 0.247

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  7.92 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.139, 0.232] 
   estimated effect size (Cohen's d):  0.722 
independentSamplesTTest(m_specifics_10 ~ aiorhuman.response, fu, var.equal = T)
Warning in independentSamplesTTest(m_specifics_10 ~ aiorhuman.response, : group
variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_10 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.345 0.208
   std dev. 0.264 0.256

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  5.764 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.09, 0.184] 
   estimated effect size (Cohen's d):  0.525 
independentSamplesTTest(m_specifics_11 ~ aiorhuman.response, fu, var.equal = T)
Warning in independentSamplesTTest(m_specifics_11 ~ aiorhuman.response, : group
variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_11 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.251 0.143
   std dev. 0.236 0.217

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  5.242 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.068, 0.149] 
   estimated effect size (Cohen's d):  0.478 
independentSamplesTTest(m_specifics_12 ~ aiorhuman.response, fu, var.equal = T)
Warning in independentSamplesTTest(m_specifics_12 ~ aiorhuman.response, : group
variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_12 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.252 0.127
   std dev. 0.236 0.201

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  6.227 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.085, 0.164] 
   estimated effect size (Cohen's d):  0.568 
independentSamplesTTest(m_specifics_13 ~ aiorhuman.response, fu, var.equal = T)
Warning in independentSamplesTTest(m_specifics_13 ~ aiorhuman.response, : group
variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_13 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.256 0.147
   std dev. 0.245 0.208

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  5.275 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.069, 0.151] 
   estimated effect size (Cohen's d):  0.481 
independentSamplesTTest(m_specifics_14 ~ aiorhuman.response, fu, var.equal = T)
Warning in independentSamplesTTest(m_specifics_14 ~ aiorhuman.response, : group
variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_specifics_14 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     0.490 0.275
   std dev. 0.244 0.287

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  8.87 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.167, 0.263] 
   estimated effect size (Cohen's d):  0.808 
independentSamplesTTest(m_motivation ~ aiorhuman.response, fu, var.equal = T)
Warning in independentSamplesTTest(m_motivation ~ aiorhuman.response, fu, :
group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   m_motivation 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     4.897 4.303
   std dev. 0.865 1.251

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  6.104 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.403, 0.786] 
   estimated effect size (Cohen's d):  0.556 
fu$heard = with(fu, apply(data.frame(m_understood, m_validated, m_affirmed, m_seen, m_accepted, m_caredfor), 1, mean, na.rm = T))
independentSamplesTTest(heard ~ aiorhuman.response, fu, var.equal = T)
Warning in independentSamplesTTest(heard ~ aiorhuman.response, fu, var.equal =
T): group variable is not a factor

   Student's independent samples t-test 

Outcome variable:   heard 
Grouping variable:  aiorhuman.response 

Descriptive statistics: 
               ai human
   mean     5.636 5.128
   std dev. 0.777 1.053

Hypotheses: 
   null:        population means equal for both groups
   alternative: different population means in each group

Test results: 
   t-statistic:  6.062 
   degrees of freedom:  480 
   p-value:  <.001 

Other information: 
   two-sided 95% confidence interval:  [0.344, 0.674] 
   estimated effect size (Cohen's d):  0.553 
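The printed statistics for the `heard` composite can be reproduced from the summary statistics alone, up to rounding of the displayed means and SDs. A minimal Python sketch of the pooled-variance (Student's) t and Cohen's d, assuming an equal 241/241 split between conditions (the output fixes only n1 + n2 − 2 = 480, not the exact split):

```python
import math

def pooled_t_and_d(m1, s1, n1, m2, s2, n2):
    """Student's t (equal-variance) and Cohen's d from group summary stats."""
    # Pooled variance: weighted average of the two sample variances
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    sp = math.sqrt(sp2)
    d = (m1 - m2) / sp                      # Cohen's d with pooled SD
    t = d / math.sqrt(1 / n1 + 1 / n2)      # t = (m1 - m2) / (sp * sqrt(1/n1 + 1/n2))
    return t, d

# Reported descriptives for 'heard': ai mean 5.636 (SD 0.777), human mean 5.128 (SD 1.053)
t, d = pooled_t_and_d(5.636, 0.777, 241, 5.128, 1.053, 241)
print(round(t, 2), round(d, 2))  # close to the reported t = 6.062, d = 0.553
```

The small discrepancy from the reported values comes entirely from the two- and three-decimal rounding of the printed descriptives.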
###### correlation table ###########
library(apaTables)
apa.cor.table(all[, c(107, 141:156)], filename = "table.all.doc", table.number = 1)


Table 1 

Means, standard deviations, and correlations with confidence intervals
 

  Variable           M    SD   1           2            3          4         
  1. feelheard       5.46 1.43                                               
                                                                             
  2. m_practical     3.79 1.62 -.02                                          
                               [-.11, .07]                                   
                                                                             
  3. m_specifics_1   0.43 0.27 .10*        -.07                              
                               [.01, .19]  [-.16, .02]                       
                                                                             
  4. m_specifics_2   0.50 0.26 .12*        -.12*        .40**                
                               [.02, .20]  [-.21, -.03] [.32, .47]           
                                                                             
  5. m_specifics_3   0.52 0.25 .11*        .06          .46**      .48**     
                               [.02, .20]  [-.04, .15]  [.39, .53] [.41, .55]
                                                                             
  6. m_specifics_4   0.40 0.27 .14**       -.03         .31**      .37**     
                               [.04, .23]  [-.12, .06]  [.22, .39] [.28, .44]
                                                                             
  7. m_specifics_5   0.34 0.27 .01         .34**        .20**      .12*      
                               [-.08, .10] [.26, .42]   [.11, .29] [.03, .21]
                                                                             
  8. m_specifics_6   0.38 0.27 .02         .13**        .15**      .22**     
                               [-.07, .11] [.04, .22]   [.06, .24] [.13, .31]
                                                                             
  9. m_specifics_7   0.54 0.24 .13**       -.13**       .45**      .58**     
                               [.04, .22]  [-.22, -.04] [.38, .52] [.51, .64]
                                                                             
  10. m_specifics_8  0.22 0.24 .04         .14**        .15**      .15**     
                               [-.05, .13] [.05, .23]   [.06, .24] [.06, .24]
                                                                             
  11. m_specifics_9  0.32 0.27 .09*        -.07         .36**      .34**     
                               [.00, .18]  [-.16, .02]  [.28, .44] [.26, .42]
                                                                             
  12. m_specifics_10 0.28 0.27 .15**       .07          .25**      .25**     
                               [.06, .24]  [-.02, .16]  [.16, .33] [.16, .34]
                                                                             
  13. m_specifics_11 0.20 0.23 .12**       -.07         .22**      .21**     
                               [.03, .21]  [-.16, .03]  [.13, .30] [.12, .30]
                                                                             
  14. m_specifics_12 0.19 0.23 .09         -.08         .21**      .26**     
                               [-.00, .18] [-.17, .01]  [.12, .29] [.18, .35]
                                                                             
  15. m_specifics_13 0.21 0.24 .10*        -.05         .19**      .30**     
                               [.01, .19]  [-.14, .05]  [.10, .28] [.21, .38]
                                                                             
  16. m_specifics_14 0.39 0.29 .07         -.15**       .33**      .38**     
                               [-.02, .16] [-.24, -.06] [.24, .41] [.30, .46]
                                                                             
  17. m_motivation   4.60 1.09 .19**       .35**        .27**      .23**     
                               [.10, .28]  [.26, .43]   [.18, .35] [.14, .31]
                                                                             
  5          6          7          8          9          10         11        
                                                                              
                                                                              
                                                                              
                                                                              
                                                                              
                                                                              
                                                                              
                                                                              
                                                                              
                                                                              
                                                                              
                                                                              
                                                                              
                                                                              
  .46**                                                                       
  [.39, .53]                                                                  
                                                                              
  .36**      .21**                                                            
  [.27, .44] [.12, .29]                                                       
                                                                              
  .36**      .29**      .36**                                                 
  [.28, .44] [.21, .37] [.28, .44]                                            
                                                                              
  .59**      .43**      .18**      .29**                                      
  [.53, .65] [.35, .50] [.09, .26] [.20, .37]                                 
                                                                              
  .21**      .20**      .25**      .23**      .13**                           
  [.12, .30] [.11, .29] [.17, .34] [.14, .31] [.04, .22]                      
                                                                              
  .34**      .37**      .23**      .39**      .41**      .25**                
  [.26, .42] [.29, .45] [.14, .31] [.30, .46] [.33, .48] [.16, .34]           
                                                                              
  .37**      .32**      .27**      .40**      .33**      .22**      .62**     
  [.29, .45] [.24, .40] [.19, .36] [.32, .47] [.24, .41] [.13, .31] [.56, .67]
                                                                              
  .25**      .20**      .12*       .19**      .26**      .14**      .36**     
  [.16, .33] [.11, .28] [.03, .21] [.10, .27] [.17, .34] [.04, .23] [.28, .44]
                                                                              
  .21**      .28**      .16**      .27**      .27**      .16**      .37**     
  [.12, .30] [.20, .37] [.07, .25] [.18, .35] [.18, .35] [.07, .25] [.29, .44]
                                                                              
  .20**      .28**      .14**      .29**      .32**      .17**      .37**     
  [.11, .29] [.19, .36] [.05, .23] [.21, .38] [.24, .40] [.08, .25] [.29, .45]
                                                                              
  .39**      .35**      .15**      .28**      .50**      .11*       .42**     
  [.31, .46] [.27, .43] [.06, .24] [.20, .37] [.43, .57] [.01, .20] [.34, .49]
                                                                              
  .38**      .25**      .31**      .24**      .29**      .15**      .33**     
  [.30, .46] [.16, .34] [.22, .39] [.16, .33] [.20, .37] [.06, .24] [.24, .40]
                                                                              
  12         13         14         15         16        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
                                                        
  .32**                                                 
  [.24, .40]                                            
                                                        
  .33**      .35**                                      
  [.25, .41] [.27, .43]                                 
                                                        
  .35**      .30**      .65**                           
  [.26, .43] [.21, .38] [.59, .70]                      
                                                        
  .37**      .40**      .45**      .47**                
  [.28, .44] [.32, .47] [.38, .52] [.39, .54]           
                                                        
  .35**      .18**      .27**      .26**      .28**     
  [.27, .43] [.09, .26] [.18, .35] [.18, .35] [.20, .36]
                                                        

Note. M and SD are used to represent mean and standard deviation, respectively.
Values in square brackets indicate the 95% confidence interval.
The confidence interval is a plausible range of population correlations 
that could have caused the sample correlation (Cumming, 2014).
 * indicates p < .05. ** indicates p < .01.
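The bracketed intervals in these tables are confidence intervals for each correlation; the standard way to obtain them is the Fisher z-transform. A small Python sketch, using r = .19 and N = 482 (the feelheard–m_motivation cell) as the example — the exact bounds differ slightly from the table because the table prints r to only two decimals:

```python
import math

def pearson_r_ci(r, n):
    """Approximate 95% CI for a Pearson correlation via the Fisher z-transform."""
    z = math.atanh(r)              # Fisher transform of r
    se = 1 / math.sqrt(n - 3)      # standard error of z
    lo, hi = z - 1.96 * se, z + 1.96 * se
    # Back-transform the z-scale bounds to the correlation scale
    return math.tanh(lo), math.tanh(hi)

lo, hi = pearson_r_ci(0.19, 482)
print(round(lo, 2), round(hi, 2))  # roughly [.10, .28], as in Table 1
```

Note that the interval is asymmetric around r once back-transformed, which is expected: the z-scale is symmetric, the r-scale is not.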
 
apa.cor.table(all[all$labelR == 'ai label', c(107, 141:156)], filename = "table.ai label.doc", table.number = 2)


Table 2 

Means, standard deviations, and correlations with confidence intervals
 

  Variable           M    SD   1           2            3           4          
  1. feelheard       5.13 1.46                                                 
                                                                               
  2. m_practical     3.71 1.60 -.06                                            
                               [-.19, .07]                                     
                                                                               
  3. m_specifics_1   0.42 0.25 .09         -.19**                              
                               [-.04, .21] [-.31, -.06]                        
                                                                               
  4. m_specifics_2   0.50 0.25 .13*        -.20**       .32**                  
                               [.01, .26]  [-.32, -.07] [.20, .43]             
                                                                               
  5. m_specifics_3   0.51 0.25 .08         .04          .41**       .46**      
                               [-.05, .20] [-.08, .17]  [.30, .51]  [.35, .56] 
                                                                               
  6. m_specifics_4   0.39 0.26 .11         -.13         .28**       .41**      
                               [-.02, .23] [-.25, .00]  [.16, .40]  [.30, .51] 
                                                                               
  7. m_specifics_5   0.33 0.26 .01         .35**        .12         .07        
                               [-.12, .13] [.23, .46]   [-.01, .24] [-.06, .20]
                                                                               
  8. m_specifics_6   0.40 0.26 .03         .09          .11         .18**      
                               [-.10, .16] [-.04, .21]  [-.02, .24] [.05, .30] 
                                                                               
  9. m_specifics_7   0.53 0.24 .11         -.18**       .44**       .60**      
                               [-.02, .24] [-.30, -.05] [.33, .54]  [.51, .67] 
                                                                               
  10. m_specifics_8  0.23 0.23 .00         .11          .05         .05        
                               [-.13, .13] [-.02, .24]  [-.08, .18] [-.08, .18]
                                                                               
  11. m_specifics_9  0.34 0.27 .06         -.17**       .34**       .31**      
                               [-.06, .19] [-.30, -.05] [.22, .45]  [.19, .42] 
                                                                               
  12. m_specifics_10 0.29 0.26 .11         .05          .20**       .22**      
                               [-.01, .24] [-.08, .18]  [.07, .32]  [.10, .34] 
                                                                               
  13. m_specifics_11 0.20 0.24 .11         -.11         .25**       .25**      
                               [-.02, .23] [-.23, .02]  [.13, .37]  [.13, .37] 
                                                                               
  14. m_specifics_12 0.18 0.22 .03         -.11         .15*        .27**      
                               [-.10, .16] [-.24, .02]  [.03, .28]  [.14, .38] 
                                                                               
  15. m_specifics_13 0.21 0.23 .08         -.14*        .14*        .32**      
                               [-.05, .21] [-.26, -.01] [.02, .27]  [.20, .43] 
                                                                               
  16. m_specifics_14 0.39 0.29 .01         -.20**       .31**       .38**      
                               [-.12, .13] [-.32, -.07] [.19, .42]  [.27, .49] 
                                                                               
  17. m_motivation   4.54 1.11 .15*        .25**        .20**       .21**      
                               [.02, .27]  [.13, .37]   [.07, .32]  [.09, .33] 
                                                                               
  5           6          7           8          9          10        
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
  .48**                                                              
  [.37, .57]                                                         
                                                                     
  .37**       .17**                                                  
  [.25, .47]  [.04, .29]                                             
                                                                     
  .42**       .24**      .40**                                       
  [.30, .52]  [.11, .35] [.29, .50]                                  
                                                                     
  .56**       .42**      .11         .24**                           
  [.47, .64]  [.31, .52] [-.02, .24] [.12, .36]                      
                                                                     
  .12         .13*       .24**       .17*       .13*                 
  [-.01, .25] [.00, .25] [.11, .36]  [.04, .29] [.00, .26]           
                                                                     
  .40**       .37**      .18**       .41**      .42**      .21**     
  [.28, .50]  [.25, .47] [.05, .30]  [.29, .51] [.31, .52] [.09, .33]
                                                                     
  .44**       .32**      .34**       .47**      .33**      .24**     
  [.33, .54]  [.20, .43] [.22, .45]  [.36, .56] [.21, .44] [.11, .35]
                                                                     
  .35**       .23**      .14*        .23**      .26**      .19**     
  [.24, .46]  [.10, .35] [.01, .27]  [.10, .35] [.13, .37] [.06, .31]
                                                                     
  .27**       .22**      .17**       .25**      .25**      .21**     
  [.15, .38]  [.10, .34] [.04, .29]  [.12, .37] [.12, .36] [.08, .33]
                                                                     
  .23**       .23**      .15*        .25**      .32**      .17**     
  [.10, .35]  [.11, .35] [.02, .28]  [.12, .36] [.20, .43] [.04, .29]
                                                                     
  .43**       .37**      .18**       .28**      .50**      .14*      
  [.32, .53]  [.26, .48] [.05, .30]  [.16, .39] [.40, .59] [.01, .27]
                                                                     
  .40**       .18**      .27**       .29**      .30**      .14*      
  [.29, .50]  [.05, .30] [.15, .38]  [.17, .41] [.18, .42] [.01, .26]
                                                                     
  11         12         13         14         15         16        
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
  .57**                                                            
  [.47, .65]                                                       
                                                                   
  .39**      .35**                                                 
  [.27, .49] [.23, .45]                                            
                                                                   
  .27**      .26**      .26**                                      
  [.15, .38] [.14, .38] [.14, .38]                                 
                                                                   
  .33**      .31**      .23**      .61**                           
  [.21, .44] [.19, .42] [.10, .35] [.52, .68]                      
                                                                   
  .40**      .35**      .40**      .44**      .43**                
  [.29, .50] [.24, .46] [.29, .50] [.33, .54] [.32, .53]           
                                                                   
  .27**      .28**      .17**      .23**      .25**      .32**     
  [.15, .39] [.16, .40] [.04, .29] [.10, .35] [.13, .37] [.20, .43]
                                                                   

Note. M and SD are used to represent mean and standard deviation, respectively.
Values in square brackets indicate the 95% confidence interval.
The confidence interval is a plausible range of population correlations 
that could have caused the sample correlation (Cumming, 2014).
 * indicates p < .05. ** indicates p < .01.
 
apa.cor.table(all[all$labelR == 'human label', c(107, 141:156)], filename = "table.human label.doc", table.number = 3)


Table 3 

Means, standard deviations, and correlations with confidence intervals
 

  Variable           M    SD   1           2           3          4         
  1. feelheard       5.81 1.30                                              
                                                                            
  2. m_practical     3.87 1.63 .00                                          
                               [-.13, .13]                                  
                                                                            
  3. m_specifics_1   0.43 0.28 .12         .04                              
                               [-.01, .25] [-.09, .17]                      
                                                                            
  4. m_specifics_2   0.49 0.27 .11         -.05        .47**                
                               [-.02, .24] [-.18, .09] [.36, .56]           
                                                                            
  5. m_specifics_3   0.53 0.25 .13         .07         .51**      .51**     
                               [-.00, .26] [-.07, .20] [.40, .60] [.41, .60]
                                                                            
  6. m_specifics_4   0.41 0.27 .16*        .06         .33**      .33**     
                               [.03, .29]  [-.07, .19] [.21, .44] [.21, .44]
                                                                            
  7. m_specifics_5   0.36 0.28 -.01        .33**       .28**      .16*      
                               [-.15, .12] [.21, .44]  [.15, .40] [.03, .29]
                                                                            
  8. m_specifics_6   0.36 0.28 .05         .19**       .19**      .26**     
                               [-.08, .18] [.06, .31]  [.06, .31] [.13, .38]
                                                                            
  9. m_specifics_7   0.54 0.24 .15*        -.09        .46**      .56**     
                               [.02, .28]  [-.22, .05] [.35, .56] [.46, .64]
                                                                            
  10. m_specifics_8  0.22 0.25 .09         .17*        .24**      .23**     
                               [-.04, .22] [.04, .29]  [.11, .36] [.10, .35]
                                                                            
  11. m_specifics_9  0.29 0.28 .18**       .04         .38**      .38**     
                               [.05, .30]  [-.09, .18] [.26, .49] [.26, .48]
                                                                            
  12. m_specifics_10 0.26 0.28 .23**       .10         .29**      .28**     
                               [.10, .35]  [-.03, .23] [.17, .41] [.15, .40]
                                                                            
  13. m_specifics_11 0.20 0.23 .16*        -.02        .18**      .17*      
                               [.02, .28]  [-.15, .11] [.05, .31] [.04, .30]
                                                                            
  14. m_specifics_12 0.20 0.23 .14*        -.05        .26**      .27**     
                               [.01, .27]  [-.18, .08] [.13, .38] [.14, .38]
                                                                            
  15. m_specifics_13 0.21 0.25 .14*        .04         .23**      .27**     
                               [.00, .26]  [-.09, .18] [.10, .35] [.15, .39]
                                                                            
  16. m_specifics_14 0.38 0.28 .16*        -.09        .35**      .38**     
                               [.02, .28]  [-.22, .04] [.23, .46] [.26, .49]
                                                                            
  17. m_motivation   4.66 1.08 .23**       .44**       .34**      .24**     
                               [.10, .35]  [.33, .54]  [.22, .45] [.11, .36]
                                                                            
  5          6          7           8          9          10         
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
                                                                     
  .45**                                                              
  [.34, .55]                                                         
                                                                     
  .35**      .24**                                                   
  [.23, .46] [.11, .36]                                              
                                                                     
  .32**      .35**      .34**                                        
  [.20, .43] [.23, .46] [.21, .45]                                   
                                                                     
  .62**      .43**      .24**       .34**                            
  [.53, .70] [.32, .53] [.11, .36]  [.21, .45]                       
                                                                     
  .29**      .27**      .27**       .28**      .13*                  
  [.17, .41] [.14, .39] [.14, .39]  [.15, .40] [.00, .26]            
                                                                     
  .30**      .38**      .28**       .36**      .40**      .29**      
  [.17, .41] [.26, .49] [.16, .40]  [.24, .47] [.28, .50] [.16, .41] 
                                                                     
  .31**      .33**      .22**       .33**      .33**      .21**      
  [.18, .42] [.20, .44] [.09, .34]  [.20, .44] [.20, .44] [.08, .33] 
                                                                     
  .13*       .16*       .09         .14*       .26**      .08        
  [.00, .26] [.03, .29] [-.04, .22] [.01, .27] [.14, .38] [-.05, .21]
                                                                     
  .15*       .34**      .16*        .29**      .28**      .13        
  [.02, .27] [.22, .46] [.03, .28]  [.16, .41] [.16, .40] [-.00, .26]
                                                                     
  .18**      .33**      .13         .34**      .32**      .16*       
  [.05, .31] [.21, .44] [-.01, .25] [.21, .45] [.20, .43] [.03, .29] 
                                                                     
  .34**      .34**      .12         .29**      .51**      .07        
  [.22, .45] [.22, .45] [-.01, .25] [.16, .40] [.40, .60] [-.06, .20]
                                                                     
  .36**      .33**      .34**       .21**      .26**      .16*       
  [.24, .47] [.20, .44] [.22, .46]  [.08, .33] [.14, .38] [.03, .29] 
                                                                     
  11         12         13         14         15         16        
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
                                                                   
  .67**                                                            
  [.59, .73]                                                       
                                                                   
  .33**      .30**                                                 
  [.21, .44] [.17, .41]                                            
                                                                   
  .47**      .40**      .45**                                      
  [.36, .57] [.28, .51] [.34, .55]                                 
                                                                   
  .41**      .38**      .37**      .68**                           
  [.30, .52] [.26, .49] [.25, .48] [.61, .75]                      
                                                                   
  .44**      .38**      .40**      .47**      .50**                
  [.32, .54] [.26, .48] [.28, .50] [.36, .57] [.40, .60]           
                                                                   
  .39**      .43**      .18**      .31**      .27**      .24**     
  [.27, .50] [.31, .53] [.05, .31] [.19, .43] [.15, .39] [.12, .36]
                                                                   

Note. M and SD are used to represent mean and standard deviation, respectively.
Values in square brackets indicate the 95% confidence interval.
The confidence interval is a plausible range of population correlations 
that could have caused the sample correlation (Cumming, 2014).
 * indicates p < .05. ** indicates p < .01.
 
# No significant moderation by label condition for any of the message features below
anova(lm(feelheard~m_emotional*labelR,all))
Analysis of Variance Table

Response: feelheard
                    Df Sum Sq Mean Sq F value    Pr(>F)    
m_emotional          1  31.68  31.678 17.0197 4.406e-05 ***
labelR               1  51.29  51.292 27.5580 2.353e-07 ***
m_emotional:labelR   1   1.38   1.381  0.7422    0.3894    
Residuals          451 839.42   1.861                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_practical*labelR,all))
Analysis of Variance Table

Response: feelheard
                    Df Sum Sq Mean Sq F value    Pr(>F)    
m_practical          1   0.36   0.360  0.1868    0.6658    
labelR               1  53.39  53.390 27.7083 2.186e-07 ***
m_practical:labelR   1   1.00   1.001  0.5196    0.4714    
Residuals          451 869.02   1.927                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_1*labelR,all))
Analysis of Variance Table

Response: feelheard
                      Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_1          1  10.08  10.079  5.2756   0.02208 *  
labelR                 1  51.98  51.981 27.2073 2.791e-07 ***
m_specifics_1:labelR   1   0.05   0.046  0.0239   0.87730    
Residuals            451 861.66   1.911                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_2*labelR,all))
Analysis of Variance Table

Response: feelheard
                      Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_2          1  12.27  12.267  6.4555   0.01139 *  
labelR                 1  54.04  54.040 28.4379 1.533e-07 ***
m_specifics_2:labelR   1   0.44   0.442  0.2325   0.62994    
Residuals            451 857.02   1.900                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_3*labelR,all))
Analysis of Variance Table

Response: feelheard
                      Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_3          1  10.59  10.593  5.5445   0.01897 *  
labelR                 1  51.21  51.208 26.8026 3.401e-07 ***
m_specifics_3:labelR   1   0.30   0.303  0.1586   0.69061    
Residuals            451 861.66   1.911                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_4*labelR,all))
Analysis of Variance Table

Response: feelheard
                      Df Sum Sq Mean Sq F value   Pr(>F)    
m_specifics_4          1  17.10  17.102  9.0188 0.002821 ** 
labelR                 1  51.11  51.110 26.9529 3.16e-07 ***
m_specifics_4:labelR   1   0.34   0.345  0.1817 0.670141    
Residuals            451 855.21   1.896                     
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_5*labelR,all))
Analysis of Variance Table

Response: feelheard
                      Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_5          1   0.07   0.070  0.0360    0.8496    
labelR                 1  52.79  52.791 27.3406 2.615e-07 ***
m_specifics_5:labelR   1   0.09   0.089  0.0462    0.8300    
Residuals            451 870.82   1.931                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_6*labelR,all))
Analysis of Variance Table

Response: feelheard
                      Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_6          1   0.40   0.402  0.2085    0.6482    
labelR                 1  53.81  53.811 27.9098 1.982e-07 ***
m_specifics_6:labelR   1   0.01   0.012  0.0061    0.9378    
Residuals            451 869.54   1.928                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_7*labelR,all))
Analysis of Variance Table

Response: feelheard
                      Df Sum Sq Mean Sq F value   Pr(>F)    
m_specifics_7          1  15.46  15.456  8.1305 0.004553 ** 
labelR                 1  52.43  52.427 27.5794 2.33e-07 ***
m_specifics_7:labelR   1   0.17   0.169  0.0887 0.765936    
Residuals            450 855.43   1.901                     
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_8*labelR,all))
Analysis of Variance Table

Response: feelheard
                      Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_8          1   1.44   1.442  0.7494    0.3871    
labelR                 1  53.09  53.091 27.5856 2.321e-07 ***
m_specifics_8:labelR   1   1.24   1.245  0.6467    0.4217    
Residuals            451 867.99   1.925                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_9*labelR,all))
Analysis of Variance Table

Response: feelheard
                      Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_9          1   8.13   8.132  4.2785   0.03917 *  
labelR                 1  56.52  56.523 29.7379 8.164e-08 ***
m_specifics_9:labelR   1   1.90   1.902  1.0008   0.31766    
Residuals            451 857.21   1.901                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_10*labelR,all))
Analysis of Variance Table

Response: feelheard
                       Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_10          1  20.92  20.924 11.1772 0.0008971 ***
labelR                  1  57.01  57.012 30.4555 5.771e-08 ***
m_specifics_10:labelR   1   1.57   1.566  0.8365 0.3608789    
Residuals             451 844.27   1.872                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_11*labelR,all))
Analysis of Variance Table

Response: feelheard
                       Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_11          1  14.07  14.072  7.4135  0.006725 ** 
labelR                  1  53.32  53.318 28.0882 1.817e-07 ***
m_specifics_11:labelR   1   0.28   0.282  0.1484  0.700250    
Residuals             451 856.10   1.898                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_12*labelR,all))
Analysis of Variance Table

Response: feelheard
                       Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_12          1   7.73   7.733  4.0431   0.04495 *  
labelR                  1  51.39  51.388 26.8681 3.294e-07 ***
m_specifics_12:labelR   1   2.07   2.069  1.0815   0.29891    
Residuals             451 862.58   1.913                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_13*labelR,all))
Analysis of Variance Table

Response: feelheard
                       Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_13          1  10.06  10.061  5.2725   0.02212 *  
labelR                  1  52.84  52.842 27.6908 2.205e-07 ***
m_specifics_13:labelR   1   0.23   0.234  0.1228   0.72621    
Residuals             451 860.63   1.908                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
anova(lm(feelheard~m_specifics_14*labelR,all))
Analysis of Variance Table

Response: feelheard
                       Df Sum Sq Mean Sq F value    Pr(>F)    
m_specifics_14          1   4.22   4.216  2.2063    0.1381    
labelR                  1  53.38  53.378 27.9323 1.961e-07 ***
m_specifics_14:labelR   1   4.33   4.330  2.2659    0.1330    
Residuals             451 861.85   1.911                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
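As a quick sanity check on the ANOVA tables above: the F value is simply the effect mean square divided by the residual mean square (MS = SS / df). A minimal stdlib-only sketch, using the rounded values printed for the `feelheard ~ m_emotional * labelR` model (the helper name is illustrative, not from the analysis code):

```python
def f_ratio(ms_effect, ms_residual):
    """F statistic: ratio of effect mean square to residual mean square."""
    return ms_effect / ms_residual

# Rounded values from the feelheard ~ m_emotional * labelR table above:
# m_emotional MS = 31.678; residual SS = 839.42 on 451 df
ms_residual = 839.42 / 451
print(round(f_ratio(31.678, ms_residual), 2))  # → 17.02 (table prints 17.0197 from unrounded sums)
```

The small discrepancy against the printed 17.0197 comes only from rounding in the displayed sums of squares.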
############### EMOTIONS #####################
etaSquared(aov(hope~responseR*labelR,all), type = 2, anova = T)
                      eta.sq eta.sq.part          SS  df        MS        F
responseR        0.010180055 0.010274948   12.602707   1 12.602707 4.682110
labelR           0.005670529 0.005749556    7.020002   1  7.020002 2.608045
responseR:labelR 0.004037889 0.004100952    4.998827   1  4.998827 1.857145
Residuals        0.980584553          NA 1213.944280 451  2.691672       NA
                          p
responseR        0.03100264
labelR           0.10702212
responseR:labelR 0.17363467
Residuals                NA
etaSquared(aov(distress~responseR*labelR,all), type = 2, anova = T)
                      eta.sq eta.sq.part         SS  df       MS        F
responseR        0.011139204 0.011244133   9.052433   1 9.052433 5.128772
labelR           0.004706754 0.004782141   3.825011   1 3.825011 2.167109
responseR:labelR 0.005075130 0.005154489   4.124377   1 4.124377 2.336719
Residuals        0.979528947          NA 796.028192 451 1.765029       NA
                          p
responseR        0.02400621
labelR           0.14168854
responseR:labelR 0.12705648
Residuals                NA
etaSquared(aov(uncomfortable~responseR*labelR,all), type = 2, anova = T)
                      eta.sq eta.sq.part         SS  df       MS         F
responseR        0.006528221 0.006562075   6.479152   1 6.479152 2.9790444
labelR           0.004461404 0.004493876   4.427870   1 4.427870 2.0358870
responseR:labelR 0.001033812 0.001044944   1.026041   1 1.026041 0.4717626
Residuals        0.988312836          NA 980.884199 451 2.174910        NA
                          p
responseR        0.08503395
labelR           0.15431646
responseR:labelR 0.49253042
Residuals                NA
etaSquared(aov(creeped~responseR*labelR,all), type = 2, anova = T)
                      eta.sq eta.sq.part         SS  df        MS        F
responseR        0.003993515 0.004057479   1.653895   1 1.6538946 1.837378
labelR           0.009730801 0.009829360   4.029963   1 4.0299630 4.477048
responseR:labelR 0.006420865 0.006507657   2.659169   1 2.6591693 2.954178
Residuals        0.980242180          NA 405.962451 451 0.9001385       NA
                          p
responseR        0.17593699
labelR           0.03490043
responseR:labelR 0.08634208
Residuals                NA
etaSquared(aov(ambivalent~responseR*labelR,all), type = 2, anova = T)
                       eta.sq  eta.sq.part           SS  df          MS
responseR        6.564518e-05 6.641173e-05 8.229828e-02   1  0.08229828
labelR           1.026786e-02 1.028164e-02 1.287265e+01   1 12.87264518
responseR:labelR 1.316776e-03 1.330468e-03 1.650820e+00   1  1.65082040
Residuals        9.883919e-01           NA 1.239131e+03 451  2.74751803
                          F         p
responseR        0.02995368 0.8626730
labelR           4.68519043 0.0309477
responseR:labelR 0.60084061 0.4386637
Residuals                NA        NA
etaSquared(aov(happy~responseR*labelR,all), type = 2, anova = T)
                       eta.sq  eta.sq.part           SS  df           MS
responseR        2.300237e-06 2.320692e-06 3.052793e-03   1  0.003052793
labelR           1.275157e-03 1.284847e-03 1.692344e+00   1  1.692343894
responseR:labelR 7.541573e-03 7.551203e-03 1.000891e+01   1 10.008910481
Residuals        9.911831e-01           NA 1.315463e+03 451  2.916770291
                           F          p
responseR        0.001046635 0.97420586
labelR           0.580211578 0.44662753
responseR:labelR 3.431504535 0.06461697
Residuals                 NA         NA
etaSquared(aov(shame~responseR*labelR,all), type = 2, anova = T)
                       eta.sq  eta.sq.part          SS  df        MS         F
responseR        0.0032635947 0.0032865604   1.9364521   1 1.9364521 1.4871263
labelR           0.0067334277 0.0067571992   3.9952756   1 3.9952756 3.0682295
responseR:labelR 0.0005459829 0.0005513338   0.3239587   1 0.3239587 0.2487887
Residuals        0.9897486226           NA 587.2668014 451 1.3021437        NA
                          p
responseR        0.22330024
labelR           0.08051608
responseR:labelR 0.61817245
Residuals                NA
etaSquared(aov(excitement~responseR*labelR,all), type = 2, anova = T)
                       eta.sq  eta.sq.part           SS  df         MS
responseR        0.0078426455 0.0078560298   10.6304646   1 10.6304646
labelR           0.0001975106 0.0001993745    0.2677195   1  0.2677195
responseR:labelR 0.0015780287 0.0015907039    2.1389694   1  2.1389694
Residuals        0.9904536595           NA 1342.5294599 451  2.9767837
                          F          p
responseR        3.57112428 0.05943355
labelR           0.08993584 0.76439691
responseR:labelR 0.71855048 0.39706942
Residuals                NA         NA
etaSquared(aov(fear~responseR*labelR,all), type = 2, anova = T)
                       eta.sq  eta.sq.part          SS  df        MS         F
responseR        0.0005258186 0.0005283172   0.4454088   1 0.4454088 0.2383970
labelR           0.0038366008 0.0038420510   3.2498960   1 3.2498960 1.7394480
responseR:labelR 0.0009797304 0.0009839372   0.8299070   1 0.8299070 0.4441927
Residuals        0.9947448279           NA 842.6253880 451 1.8683490        NA
                         p
responseR        0.6256036
labelR           0.1878781
responseR:labelR 0.5054465
Residuals               NA
etaSquared(aov(surprised~responseR*labelR,all), type = 2, anova = T)
                       eta.sq  eta.sq.part           SS  df        MS
responseR        0.0009369237 0.0009388514    1.5142541   1 1.5142541
labelR           0.0018196104 0.0018217429    2.9408504   1 2.9408504
responseR:labelR 0.0001466783 0.0001470966    0.2370612   1 0.2370612
Residuals        0.9970098169           NA 1611.3650749 451 3.5728716
                          F         p
responseR        0.42381991 0.5153687
labelR           0.82310554 0.3647582
responseR:labelR 0.06635031 0.7968443
Residuals                NA        NA
etaSquared(aov(statelonely~responseR*labelR,all), type = 2, anova = T)
                       eta.sq  eta.sq.part           SS  df          MS
responseR        2.334348e-04 2.341368e-04 5.217395e-02   1 0.052173954
labelR           2.932779e-03 2.933657e-03 6.554922e-01   1 0.655492204
responseR:labelR 9.161450e-06 9.191071e-06 2.047634e-03   1 0.002047634
Residuals        9.967680e-01           NA 2.227831e+02 451 0.493975907
                           F         p
responseR        0.105620442 0.7453371
labelR           1.326972015 0.2499549
responseR:labelR 0.004145211 0.9486935
Residuals                 NA        NA
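The `eta.sq.part` column in the `etaSquared` output above is partial eta squared: the effect sum of squares relative to the effect plus residual sums of squares. A minimal stdlib-only sketch, using the values printed for the `hope ~ responseR * labelR` model (the helper name is illustrative):

```python
def partial_eta_sq(ss_effect, ss_residual):
    """Partial eta squared: SS_effect / (SS_effect + SS_residual)."""
    return ss_effect / (ss_effect + ss_residual)

# Values from the etaSquared(aov(hope ~ responseR*labelR)) output above:
# responseR SS = 12.602707; residual SS = 1213.944280
print(round(partial_eta_sq(12.602707, 1213.944280), 6))  # → 0.010275 (table: 0.010274948)
```

Unlike plain eta squared, the partial version ignores the other effects' sums of squares in the denominator, which is why it reproduces the printed `eta.sq.part` exactly.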


Discussion

The discussion section typically synthesizes the study’s findings, places them in the context of existing research, and addresses implications, limitations, and future research directions. Here is what a discussion section should explain:

  1. Interpretation of Findings:

    • Summarize how the results support or challenge previous assumptions or findings in the field. For instance, the study found that while AI responses were effective at making participants feel heard, labeling them as AI reduced their perceived effectiveness. This finding is significant as it highlights human biases against AI, even when the content is supportive.
  2. Implications:

    • Discuss the real-world applications of these findings. In practical terms, this could mean that organizations using AI for customer support or therapy might consider ways to mitigate the negative impact of an AI label or design interfaces where AI interactions appear more human-like to foster connection.
  3. Limitations:

    • Acknowledge any limitations of the study, such as the representativeness of the participant sample, potential biases in response interpretation, or limitations in generalizing findings across different AI platforms or interaction types.
  4. Future Directions:

    • Suggest areas for further investigation. For example, future research could explore how different types of AI, like chatbots with varying levels of anthropomorphism or transparency about their AI nature, impact perceptions. Additionally, testing these dynamics in more diverse and real-world settings could provide more insights.
  5. Theoretical Contributions:

    • Detail how the study contributes to theoretical frameworks, such as media richness theory or models of human-computer interaction. It supports the idea that emotional connection and perceived understanding can be influenced by how AI is presented and perceived by users.
  6. Practical Recommendations:

    • Offer actionable strategies, such as training AI to mimic human-like empathy cues more convincingly or developing hybrid models where human agents complement AI responses to maintain high levels of perceived connection.


Now, compare this template with the actual Discussion of the paper:

  • Integration of Findings with Existing Literature: The discussion starts by positioning the findings within the broader context of existing literature on human-AI interaction and empathy. The study confirms that AI can be highly effective in creating responses that make individuals feel heard, aligning with prior research on the capability of natural language processing systems to simulate human-like empathy. However, the discussion emphasizes the new insight that AI labels reduce the perceived quality of these responses, showcasing a bias against AI that persists despite its high performance.

  • Interpretation of the AI Label Effect: The section delves into why labeling a response as AI might negatively impact its reception. It theorizes that individuals may associate AI with a lack of genuine emotional capacity or authenticity, which diminishes their emotional connection. This bias can lead participants to undervalue the empathetic quality of AI responses, even when the content itself is indistinguishable from that of a human.

  • Implications for AI Design and Human-AI Collaboration: The findings have important implications for designing AI systems, particularly in fields like customer service, mental health support, and personal assistance. Developers and stakeholders are advised to consider the framing and transparency of AI interactions. The study suggests that reducing the salience of the AI label or integrating AI with human oversight might help mitigate biases and enhance the perceived quality of interaction.

  • The Role of Emotional Validation vs. Practical Support: A notable aspect of the discussion highlights that AI responses focused more on emotional validation rather than practical advice, which proved effective for making participants feel heard. This reinforces the idea that validation of emotions is a critical component of empathy and supportive communication. Future development of AI systems should prioritize emotional attunement when the goal is to enhance perceived empathy.

  • Moderating Factors: The study identifies that individual differences, such as attitudes toward AI, play a moderating role. Participants with more favorable views of AI were less influenced by the negative labeling effect. This insight prompts further research into how user characteristics, like trust in technology or familiarity with AI, affect the reception of AI-generated content.

  • Limitations and Future Research Directions: The authors acknowledge the limitations, such as the demographic homogeneity of the sample (e.g., primarily U.S.-based participants) and the artificial nature of the experimental setup compared to real-world applications. They suggest that future studies should include more diverse populations and real-world scenarios to enhance external validity. Additionally, examining other types of AI (e.g., those with more advanced emotional intelligence features) and varying levels of anthropomorphism could provide more comprehensive insights.

  • Theoretical Contributions: The paper contributes to theories of media richness and human-computer interaction by demonstrating that the perceived empathy of a response depends not just on the content but also on the framing and perceived source of the response. It underscores that biases can shape user experiences in ways that could limit the effectiveness of even highly capable AI systems.

  • Concluding Thoughts: The discussion wraps up by emphasizing the dual-edged nature of AI in supportive roles: it is capable of understanding and validating emotions effectively, but societal biases and perceptions may limit its impact. Addressing these biases is crucial for maximizing the potential of AI in roles traditionally reserved for humans.