[Air-L] CFM: Symbolic Interaction and AI--2nd Call

Sarina Chen sarina.chen at uni.edu
Mon Jan 29 06:25:22 PST 2024


Dear Colleagues,

Greetings!

Below please find the second call for manuscripts (CFM) for *Symbolic
Interaction and AI*, Vol. 61 of *Studies in Symbolic Interaction: A
Bi-annual Book Series*, published by Emerald Publishing.

Thank you very much for your consideration.

Looking forward to hearing from you.

Sincerely,

Shing-Ling Sarina Chen
*Studies in Symbolic Interaction*

*********************************************

CFM:  Symbolic Interaction and AI


Artificial Intelligence (AI), an umbrella term for any device capable of
emulating or even exceeding human capabilities, emerged as computer
technologies became more sophisticated and complex.  While AI currently
evokes fearful responses associated with human replacement, it has a long
history of usefulness in human enterprises. For instance, mathematical
analysts have used AI to find patterns in huge sets of data.  Recent rapid
developments, however, have harnessed this pattern-matching capacity to
create words, images, and sounds independent of human intention, a
development often referred to as Generative AI.


In 2021, OpenAI introduced DALL-E, a text-to-image generator that creates
artworks from users' text prompts.  In 2022, OpenAI unveiled ChatGPT, a
computer program designed to simulate conversation with users.  After
receiving inputs from users, ChatGPT displayed the capacity to provide
lifelike answers, including grammatically correct essays that appeared
filled with human insight.


Also in 2022, Google introduced an AI text-to-video generator that creates
high-definition videos from text prompts.  In light of these developments,
AI has become equated with raw human intelligence, from authoring cogent
essays to completing artistic and intricate designs.  Most alarming, those
who consume information and design feel that they cannot easily discern
the difference between a genuine human creation and the work of AI.


As mentioned in regard to mathematical analysis, using AI for analytical
purposes is not a new phenomenon, nor is it new to our day-to-day lives.
Long before DALL-E, ChatGPT, and AI video tools, humans employed a kind of
"assisted living," using AI technologies such as Alexa and Siri to
generate and assist voices, turn appliances on and off, answer short
queries, and ease the transmission of information.  In addition, various
AI programs appeared in workplaces as useful aids for typing, making phone
calls, and even transporting people from place to place.


One dimension of AI that fascinates students of technology is not only its
capacity to simulate human activity but also its capacity to learn ways of
communicating.  Such learning emerges through the use of mathematical
systems that find connections in huge data sets.  The responses AI
generates consist of pattern matching, a process that connects words,
images, and sounds to sequences of subsequent symbols.


In effect, AI is powered by large language models, through which a bot
ingests huge amounts of data. Such ingestion enables the bot to predict
and produce the most appropriate next word or symbol.  While these
chatbots do not duplicate human intelligence, in that they lack the
capacities to understand the environment around them or to distinguish
factual from non-factual information, they still represent potential
dangers. Their incapacity to duplicate human intelligence becomes a source
of intriguing content; the chatbots may churn out wrong or nonsensical
information (a phenomenon called "hallucinations") that is nonetheless
regarded as potential insight. In response to these hallucinations, humans
work closely with AI as attendants in various fields, including business,
medicine, education, journalism, and public safety.


Despite the rapid advancement of AI, and the astonishing potential of AI in
transforming social life, social scientists have not accumulated much
knowledge regarding human-AI interaction.  For this reason, it seems crucial
to explore the implications of the emergent interaction between humans and
AI technologies.  *Studies in Symbolic Interaction: A Bi-annual Book Series*,
Vol. 61, *Symbolic Interaction and AI*, is devoted to examinations of the
implications of utilizing AI in human reflexivity and social interaction.


Human Reflexivity and AI:  From Dyad to Triad


AI, although designed to enhance efficiency and productivity, challenges
human reflexivity in ways we did not anticipate.  When a user types a
sentence, AI accompanies the user in an intellectually intimate fashion,
suggesting what is to be typed (specific words or phrases) and flagging
misspelled words and grammatical errors.  Hence the user not only authors
an intended statement but also constantly attends to AI suggestions.
AI-assisted typing may expedite the typing and enhance the quality of the
text.  However, the level of reflexivity needed during assisted typing
seems substantial and laborious.  AI-assisted writing also gives new
meaning to single authorship.


To further examine the activity of reflexivity and the use of AI, one may
examine the interaction of AI with (a) the I and the Me, (b) the Self and
the Generalized Other, and (c) the Self and Other(s) in the SI literature.


I, Me, and AI


In classic SI literature, reflexivity is conceptualized as the interaction
between the I and the Me.  Using AI creates the appearance of a generated
I and Me, one that emerges in the creative process. Such emergence leads
one to question the relationship within the newly formed triad of I, Me,
and AI.  Specifically, is AI now challenging the authority of the Me? The
question arises because AI seems to rest on vast (objective) databases,
while the Me rests on one's experiential (and subjective) socialization.


Self, Generalized Other, and AI


Mead discussed the Self in reference to the Generalized Other, especially
as engaged in the process of making an existential decision. Currently,
however, AI's emergence on the scene could make AI appear more applicable
than the Generalized Other. As an immediate (and often forceful) source of
correction, AI represents the seemingly reliable patterns found in vast
databases, while the Generalized Other appears less linked to current
technology (and more associated with a less sophisticated process of
socialization). As interactionists, we ask: what is the relationship
between AI and the Self, and between AI and the Generalized Other?  Are we
in the process of confronting a newly formed triad consisting of the Self,
the Generalized Other, and AI?


Self, Other(s), and AI


Interactionists have maintained that humans take each other into account
when engaging in social interaction. The emergence of AI, however, raises
the question of how humans incorporate AI into their interaction with
others.  Should humans assign more weight to AI than to the responses of
other human interactants if, after all, AI becomes a more valid and
objective source of information?


However, while AI may seem more valid, it could subvert the human creative
process. Suggestions by AI may amount to a form of second-guessing,
providing a barrage of useless suggestions. AI could clog a user's
creative process by dispensing warnings or “error signals” that interrupt
the flow of human activity. Instead of prompting new insights, AI can
interrupt the reflexivity associated with creativity.



Social Interaction and AI


Among the different forms of interaction with AI, the one that has
garnered the most attention is the use of AI companions for mental health.
Although lacking the distinctly familiar companionship that humans
experience with one another, AI companions exhibit potential for mental
health care: they can assist with users’ wellness and provide social
support.


AI companions work to support mental wellness, first, by allowing a user
to describe his/her mood in writing, by voice note, or on video.  The AI
companion then quickly analyzes the user's statement for signs of anxiety,
depression, or loneliness and provides a personalized, conversational
recommendation for how the user can feel his/her best.


Chatbots designed to provide digital companionship offer conversations
that appear helpful owing to the immediacy of their responsiveness.
Drawing on vast examples of human responsiveness, chatbots effectively (at
least seemingly so) display provocative personality traits and
conversational flair.  In so doing, AI companions allow users to share
their feelings and thoughts, provide empathetic language to support them,
and help humans deal with loneliness, anxiety, and depression. These AI
“therapists” provide a sense of connection and contribute to users'
well-being.


Questions to be addressed in this area also include whether users develop
relationships with AI companions as they do with traditional mental health
providers, and how trauma is processed with AI companions.  Given the vast
possibilities associated with the above, we welcome empirical research
articles and conceptual papers that analyze and assess how AI is
experienced across all human life endeavors.  Topics to be addressed
include, but are not limited to:


Mind and AI

Self and AI

Mental Health and AI

Social Relationships and AI

Medicine and AI

Business and AI

Education and AI

Public Safety and AI


Please send an abstract of no more than 750 words to Shing-Ling Sarina Chen
(sarina.chen at uni.edu) by March 1, 2024.

If the abstract is selected for inclusion, the final manuscript is due
September 1, 2024.


Looking forward to hearing from you.


Sincerely,


Shing-Ling Sarina Chen

Studies in Symbolic Interaction


***********************************************

