Viewpoint: Can AI Truly Understand Us?
Can AI comprehend who a person is, beyond simply processing what they say?

There's a particular kind of quiet relief, a deep exhale, that comes with feeling truly understood. It's the sense that someone else grasps not just the words spoken, but the meaning beneath them – the hopes, fears, and experiences that shape a personal perspective. It fosters connection, builds trust, and makes individuals feel safe and validated.1 Conversely, the sting of being misunderstood, or worse, merely processed as data, can lead to feelings of isolation and disconnection.3 In our increasingly digital world, a new kind of listener has emerged: the sophisticated artificial intelligence (AI) chatbot. Powered by large language models (LLMs), these AIs can engage in complex conversations, analyse language for sentiment and intent, and even offer advice or companionship.4 They are becoming ubiquitous, appearing in customer service interactions, educational tools, and even mental health support applications.6
This intersection raises profound questions. Humans fundamentally crave understanding, a need deeply woven into our psychological fabric. AI offers unprecedented analytical power, capable of dissecting language and identifying patterns with incredible speed and scale. But is this analysis synonymous with understanding? Can an algorithm, however advanced, truly fulfil the human need to be "gotten"? Can AI comprehend who a person is, beyond simply processing what they say? Can it offer genuine psychological insight or reliably recognise the complex tapestry of personality? This viewpoint delves into the heart of these questions, examining the psychological significance of being understood, the capabilities and inherent limitations of current AI, the philosophical distinctions between processing and comprehension, the nature of empathy, and the critical ethical considerations that arise when technology intersects with the deepest aspects of human experience. I'm not attempting to provide comprehensive answers – that would require volumes. Rather, I'm inviting people to sit with these questions, to recognise their importance as we navigate this unprecedented relationship between humanity and the intelligent systems we are creating. The conversation is just beginning, and how we frame these questions now will shape our technological future in profound ways. What follows, then, is my exploration of these matters.
Why Being Understood Matters So Much: The Human Perspective
The desire to be understood is far more than a simple preference; it's a fundamental psychological need with profound implications for well-being.1 Research consistently demonstrates that feeling understood by others enhances both personal and social thriving. On days when individuals feel more understood during social interactions, they report greater life satisfaction and a stronger sense of connection with others.3 This holds true in various contexts: in interactions between strangers, feeling understood boosts liking and satisfaction while decreasing negative feelings and even perceived pain 3; in close relationships, it fosters intimacy, trust, and relationship satisfaction, buffering against stress and enhancing positive emotions.2 People actively seek out social environments where they feel their self-views are confirmed and where others seem to grasp their subjective thoughts and feelings.9 Daily fluctuations in feeling understood correlate directly with daily life satisfaction and even physical symptoms, with individuals reporting fewer physical ailments like headaches or stomachaches on days they feel more understood.9
This deep-seated need is reflected in our biology. Neuroscientific studies reveal that the experience of feeling understood activates brain regions associated with reward and social connection, such as the ventral striatum and middle insula.3 These are areas linked to the positive feelings derived from social bonds. Conversely, the experience of not being understood activates neural regions linked to negative emotions and social pain, like the anterior insula.3 This suggests that feeling understood is intrinsically rewarding, potentially motivating us to seek positive interactions, while feeling misunderstood is inherently aversive, driving us to avoid social rejection and isolation.3
Furthermore, the need to be understood connects deeply with other core psychological requirements outlined in frameworks like Self-Determination Theory (SDT). SDT posits that humans have innate psychological needs for autonomy (feeling in control of one's life), competence (feeling effective), and relatedness (feeling connected to and cared for by others).12 Satisfying these needs is essential for intrinsic motivation, personal growth, psychological well-being, and optimal functioning.12 Feeling understood appears to be a critical pathway to fulfilling the need for relatedness. When we feel understood, we feel seen, validated, and connected, strengthening our social bonds and contributing to our overall sense of belonging and well-being.2 If AI can only simulate the outward signs of understanding without providing the genuine empathic connection that underpins human relatedness, it may fall short of meeting this fundamental need, particularly in contexts where deep connection is paramount, such as therapy or close personal relationships. The value of AI in these domains might therefore lie more in augmenting human connection or providing specific types of support where human interaction is unavailable, rather than serving as a true replacement for it.
Crucially, the understanding humans crave goes beyond mere factual accuracy or surface-level agreement. It involves what psychologists term deep emotional understanding – a direct comprehension grounded in intuition and empathy.1 This deeper form of understanding allows individuals to feel emotionally safe, fostering the trust necessary for vulnerability and intimacy.1 It's within this matrix of empathic understanding, particularly during formative years, that individuals learn about emotions, develop coping mechanisms, and build the templates for their emotional personalities.1 Feeling accurately and empathically understood helps individuals modulate their own emotional states; having distress met with deep, empathic reception is profoundly soothing.1 This sense of being "heard" – perceiving that another person truly comprehends one's thoughts, feelings, and preferences with attention, empathy, and respect – is vital for building a coherent sense of self and finding meaning in one's experiences.10
What Can Chatbots "See"?
In contrast to the nuanced, embodied nature of human understanding, AI offers a powerful, albeit different, capacity: analysis. Modern AI systems, particularly LLMs, leverage sophisticated Natural Language Processing (NLP) techniques to dissect and interpret human language with remarkable speed and scale.5 NLP forms the foundation of how AI chatbots "understand" conversations. Core NLP tasks include breaking down text into meaningful units (tokenisation), identifying parts of speech (POS tagging), analysing grammatical structure (dependency parsing), recognising named entities like people, places, or dates (NER), classifying text, translating between languages, and summarising large documents.4
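To make these pipeline stages a little more concrete, here is a deliberately minimal sketch in Python. The tokeniser and the date-matching "NER" rule are toy illustrations built from regular expressions – real NLP systems use trained, language-aware models for every one of these steps, so treat this only as a picture of what the tasks are, not of how production systems perform them.

```python
import re

def tokenise(text):
    # Naive tokenisation: split into word runs and individual punctuation
    # marks. Real pipelines use trained, language-aware tokenisers.
    return re.findall(r"\w+|[^\w\s]", text)

def find_dates(text):
    # Toy named-entity recognition for a single entity type (dates),
    # using a hand-written pattern instead of a learned model.
    months = ("January|February|March|April|May|June|July|August|"
              "September|October|November|December")
    return re.findall(rf"\b\d{{1,2}} (?:{months}) \d{{4}}\b", text)

sentence = "The chatbot launched on 5 March 2024, to mixed reviews."
print(tokenise(sentence))   # word and punctuation tokens
print(find_dates(sentence)) # the one date entity in the sentence
```

Even this toy version shows why tokenisation and entity recognition are separate steps: the tokeniser only segments the surface text, while entity recognition has to group tokens back together into meaningful spans.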
These foundational capabilities enable more advanced analytical functions relevant to interpreting user communication. Sentiment analysis allows AI to gauge the emotional tone expressed in text – classifying it as positive, negative, or neutral.4 This is widely used in business for analysing customer feedback or monitoring brand reputation on social media.4 Intent recognition focuses on identifying the user's underlying goal or purpose – are they asking for information, making a purchase, expressing a complaint, or seeking support?4 This is crucial for chatbots and virtual assistants to provide relevant responses.21 Furthermore, NLP helps AI achieve contextual understanding, examining surrounding text to disambiguate word meanings and maintain coherence within a conversation.4 State-of-the-art LLMs continue to improve in these areas, demonstrating increasing proficiency in tasks requiring complex language analysis.23
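As a hedged illustration of sentiment analysis and intent recognition at their very simplest, the sketch below scores a message against small hand-made word lists. The lexica and intent categories here are invented for this example; production systems rely on trained classifiers or LLMs rather than keyword overlap.

```python
import re

# Invented, illustrative word lists -- not a real sentiment lexicon.
POSITIVE = {"great", "love", "helpful", "thanks"}
NEGATIVE = {"broken", "hate", "refund", "useless"}
INTENT_KEYWORDS = {
    "complaint": {"broken", "refund", "useless"},
    "purchase": {"buy", "order", "price"},
    "support": {"help", "how", "fix"},
}

def _words(text):
    return set(re.findall(r"[a-z]+", text.lower()))

def sentiment(text):
    # Positive minus negative keyword hits decides the label.
    words = _words(text)
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def intent(text):
    # Pick the intent whose keyword set overlaps most with the message.
    words = _words(text)
    best = max(INTENT_KEYWORDS, key=lambda k: len(words & INTENT_KEYWORDS[k]))
    return best if words & INTENT_KEYWORDS[best] else "unknown"

msg = "my order arrived broken, I want a refund"
print(sentiment(msg), intent(msg))
```

The gap between this and a real system is exactly the gap the rest of this piece discusses: keyword overlap captures surface patterns but has no access to context, tone, or intent beyond the literal words.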
These analytical capabilities are increasingly being applied in the sensitive domain of mental health. AI tools are being developed and deployed for various purposes across the patient journey.6 This includes diagnosis, where AI analyses data (like text, speech patterns, or even neuroimaging) to help detect, classify, or predict the risk of mental health conditions.6 AI is also used for monitoring, tracking symptom progression or predicting treatment response.6 Finally, AI is employed in intervention, often through chatbots that deliver psychoeducation, cognitive behavioural therapy (CBT) techniques, or general emotional support.6 Meta-analyses suggest these AI-based interventions can be effective, particularly in reducing symptoms of depression and distress 30, although their impact on overall psychological well-being may be less significant.30
Beyond sentiment and intent, researchers are exploring whether AI can decode something as complex as human personality. Much of this work focuses on the widely accepted Big Five model, which describes personality along five dimensions: Openness to experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (often remembered by the acronym OCEAN).33 This model is rooted in the lexical hypothesis – the idea that fundamental personality characteristics are encoded in the language people use to describe themselves and others.33 AI approaches attempt to infer these traits by analysing individuals' digital footprints, such as social media posts, text messages, or even speech patterns.37 LLMs can also be explicitly prompted or trained to simulate specific personality profiles.33 Some studies report impressive results, with AI models achieving high accuracy (e.g., 80-87% correlation or match with human self-reports) in predicting Big Five traits, especially when using rich data sources like interviews or specialised training methods.34 Training models on human-authored texts appears more effective than simply prompting them to adopt a persona.41 However, challenges remain, including the need for more diverse datasets (particularly multimodal data like body language 35) and improving the overall predictive performance, which can still be unsatisfactory.35
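The lexical hypothesis behind this line of work can be caricatured in a few lines of code: count trait-associated words and normalise by text length. The word lists below are invented for illustration and have no diagnostic validity; actual research uses validated lexica or trained models over large digital footprints, and even then predictive performance can be unsatisfactory, as noted above.

```python
import re

# Invented trait vocabularies, purely for illustration.
TRAIT_WORDS = {
    "openness": {"curious", "imagine", "art", "novel"},
    "extraversion": {"party", "friends", "talk", "excited"},
    "neuroticism": {"worried", "anxious", "stress", "afraid"},
}

def trait_scores(text):
    # Fraction of the text's words that belong to each trait vocabulary.
    words = re.findall(r"[a-z]+", text.lower())
    total = max(len(words), 1)
    return {trait: sum(w in vocab for w in words) / total
            for trait, vocab in TRAIT_WORDS.items()}

post = "I was worried and anxious all week, too much stress to see friends."
print(trait_scores(post))
```

The point of the sketch is the shape of the inference, not its accuracy: a text heavy in anxiety-related vocabulary scores higher on the "neuroticism" dimension, which is the lexical hypothesis in miniature.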
An interesting phenomenon emerges when considering AI's ability to generate "empathetic" responses. Some studies have found that third-party raters judge AI-generated responses to emotional disclosures (e.g., in simulated therapy scenarios or responses to patient questions) as more empathetic, compassionate, or understanding than responses generated by humans, including physicians or crisis line workers.46 This might be because AI can be programmed to consistently apply best practices for supportive communication, such as active listening techniques, validation, and offering emotional support without jumping to practical suggestions, which humans sometimes struggle with.48 However, this finding exists in tension with other research indicating that the recipient's knowledge that a response comes from an AI significantly diminishes the feeling of being heard and the perceived empathy.48 People report feeling less empathy towards stories known to be AI-generated compared to identical stories believed to be human-authored.49 This suggests a disconnect between the technical quality of an AI's response (how well it mimics empathetic language) and the subjective experience of the person receiving it. The very artificiality of the source seems to undermine the perceived genuineness of the connection, even if the words themselves are well-crafted. This points towards "empathy" in AI being more akin to a highly refined simulation of cognitive understanding and skilful communication, rather than the shared emotional resonance humans typically associate with the term. The effectiveness of AI-driven empathy may therefore hinge critically on user perception, expectations, and whether the goal is primarily informational support or deep relational connection.
Code, Consciousness, and the Limits of Simulation
The question of whether AI can truly understand, rather than merely analyse or simulate understanding, plunges us into deep philosophical waters. Perhaps the most famous challenge to the notion of strong AI – the idea that a sufficiently complex computer program is a mind and possesses genuine understanding – comes from philosopher John Searle's "Chinese Room" argument.50
Searle asks us to imagine a person locked in a room who knows only English. This person is given a large batch of Chinese characters (the input), along with a set of rules written in English explaining how to manipulate these symbols to produce other Chinese characters (the output).51 By meticulously following these rules – essentially, running a program – the person can pass messages under the door that are indistinguishable from those of a native Chinese speaker. To an outside observer, it appears someone in the room understands Chinese.51 However, Searle argues, the person inside the room understands nothing of Chinese.52 They are simply manipulating symbols based on their formal properties (syntax) without any grasp of their meaning (semantics).51 Searle contends that a digital computer operates in precisely the same way. It processes information based on syntactic rules encoded in its program, but it lacks genuine semantic understanding. As Searle puts it, "if the man in the room does not understand Chinese on the basis of implementing the appropriate program for understanding Chinese then neither does any other digital computer solely on that basis because no computer, qua computer, has anything the man does not have".51 His core claim is that "Syntax by itself is neither constitutive of, nor sufficient for, semantic content".53 This argument directly challenges the idea that AI, as we currently conceive it based on computation and symbol manipulation, can achieve genuine, human-like understanding, regardless of how sophisticated its behavior appears.
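Searle's setup can be mimicked almost literally in code. The "rulebook" below maps input symbol strings to output symbol strings; the entries are a made-up toy, but the structure is exactly what the argument targets – pattern in, pattern out, with no representation of meaning anywhere in the program.

```python
# A toy "Chinese Room": responses are produced by rule-following alone.
# The rulebook entries are invented; nothing in this program encodes
# what any of these symbols mean.
RULEBOOK = {
    "你好吗": "我很好",   # if this shape arrives, emit that shape
    "谢谢": "不客气",
}

def chinese_room(symbols):
    # Pure syntax: look up the input pattern, return the paired output.
    return RULEBOOK.get(symbols, "请再说一遍")

print(chinese_room("你好吗"))
```

To an outside observer the exchange can look like conversation, yet the program "understands" Chinese no more than Searle's rule-follower does – which is precisely his claim about computation in general.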
This fundamental distinction between syntactic processing and semantic understanding manifests in AI's practical limitations, particularly when dealing with the subtleties and complexities of human communication. AI often struggles with nuance, relying on statistical patterns in data rather than deep comprehension of implied meanings.56 This is especially evident with irony and sarcasm. These forms of language depend heavily on context, tone of voice (often absent in text), cultural knowledge, and recognising the speaker's intent, which may be the opposite of the literal words used.56 AI systems frequently misinterpret sarcasm literally, unable to grasp the intended inversion of meaning.56 Similarly, idioms and figurative language ("spill the beans," "break the ice") pose problems because their meaning is non-literal and culturally embedded.56
While AI's contextual understanding is improving 4, it remains limited compared to humans. AI struggles to track context over long conversations, infer information not explicitly stated, and apply real-world common sense reasoning.56 Perhaps most fundamentally, AI lacks lived experience.59 It has no personal history, no embodiment, no subjective awareness of what it feels like to experience joy, grief, pain, or the myriad sensations that constitute human life.29 This experiential gap limits its ability to truly grasp the qualitative nature of human emotions and situations. Closely related is the challenge of cultural context. Language, behaviour, and understanding are deeply interwoven with cultural norms, values, history, and unspoken rules.64 AI systems, trained primarily on data that often reflects dominant Western cultures, struggle to navigate these diverse cultural landscapes.64 They may misinterpret politeness conventions, fail to grasp culturally specific humour or references, or generate responses that are inappropriate or even offensive in certain contexts.64 AI cannot easily access the "essence" of a culture or the shared understandings that come from being immersed within it.64
These limitations point to a fundamental characteristic of AI's current "understanding": it appears somewhat brittle. Because it relies on recognising and replicating patterns learned from vast datasets, rather than on genuine comprehension grounded in experience and world knowledge, AI performs well within the scope of its training data but can falter when faced with ambiguity, novelty, or situations requiring deep inference beyond the explicit text.56 The difficulties AI encounters with sarcasm, irony, and cultural nuance serve as clear indicators of this brittleness.56 Philosophical arguments like the Chinese Room provide a theoretical underpinning for why this might be an inherent constraint of systems based on manipulating symbols (syntax) without a connection to real-world meaning (semantics).50 Consequently, attributing deep, human-like understanding to AI requires caution. While its analytical prowess is undeniable, its grasp of meaning remains fundamentally different from, and likely much shallower than, human comprehension. Over-reliance on AI's interpretations in complex or nuanced human situations carries a significant risk of error and misunderstanding.
Recognising Feelings vs. Sharing Them
The concept of empathy is multifaceted, and distinguishing its components is crucial when considering AI. Psychologists often differentiate between two main types: cognitive empathy and emotional empathy.29 Cognitive empathy refers to the intellectual ability to understand another person's perspective and mental state – recognising what they are feeling and why.46 It involves skills like perspective-taking and "Theory of Mind," the capacity to attribute independent thoughts and feelings to others.69 Emotional empathy, on the other hand, involves sharing or resonating with another person's emotional experience – feeling with them.29 This affective component often involves mirroring emotions or simulating them internally.69
Current AI, especially sophisticated LLMs, demonstrates considerable and growing proficiency in simulating cognitive empathy. Through NLP, AI can analyse text and speech for emotional cues, identify patterns associated with different feelings, and potentially recognise emotional states with increasing accuracy.29 Based on this analysis, AI can generate responses that appear understanding, validating, and appropriate to the user's expressed emotions.29 As noted earlier, studies have even shown that AI-generated responses can be rated by external observers as more empathetic than those from human experts in specific contexts, possibly due to AI's ability to consistently apply learned communication strategies for emotional support.46
However, the line is drawn sharply at genuine emotional empathy. AI systems, as currently constructed, do not possess consciousness, subjective awareness, or the biological and experiential grounding necessary to actually feel emotions.29 They cannot truly share in joy or sorrow. Therefore, no matter how eloquently an AI crafts a response to mimic shared feeling, it remains a simulation – a pattern matched, not an emotion felt.29 From this perspective, AI's empathetic-sounding responses lack the authenticity of human connection because they don't stem from a shared experiential reality.29 As one commentary puts it, "AI can learn to say the right words—but knowing that AI generated them demolishes any potential for sensing that one's pain or joy is genuinely being shared" (Perry, cited in 46).
This highlights a crucial difference in the perceived value of human versus artificial empathy. Human empathy is often valued precisely because it is costly.29 Offering genuine empathy requires an investment of limited resources: time, attention, cognitive effort, and emotional energy.29 This willingness to expend resources for another person signals authentic care and conveys that the recipient holds unique importance to the empathiser.29 AI's responses, in contrast, are essentially "cost-free".29 An AI can generate an empathetic-sounding response instantly and indefinitely, with the same apparent enthusiasm for any user.29 While potentially useful, this lacks the signal of genuine investment and unique care inherent in human emotional empathy.29
The significance of this distinction is underscored by the "label effect" observed in research.48 Studies show that while the content of an AI's response might be well-crafted enough to make someone feel heard, the knowledge that the response originated from an AI diminishes that feeling.48 People also tend to empathise less with stories known to be AI-generated compared to those believed to be human-authored.49 Interestingly, transparency about the AI authorship seems to increase willingness to empathise with AI stories, suggesting that managing expectations might play a role.49
These findings point towards a distinction between the functional value and the relational value of empathy. AI appears increasingly capable of fulfilling the functional aspects associated with cognitive empathy: recognising emotional states, reflecting them accurately, and providing consistent informational or structured support.29 However, it fundamentally lacks the capacity for the relational aspects rooted in genuine shared feeling, subjective experience, and the costly signalling of authentic care that define emotional empathy.29 The fact that humans react differently when they know they are interacting with an AI suggests that this relational dimension is highly valued.48 When seeking understanding, individuals often desire not just an accurate reflection of their state, but a genuine connection with another being who could, potentially, feel similarly. This distinction is vital for determining the appropriate roles and limitations for AI in emotionally sensitive domains like therapy, counselling, and personal relationships. AI might excel at tasks requiring structured cognitive empathy (e.g., providing psychoeducation, summarising emotional content, guiding specific exercises), but its value likely diminishes in situations demanding deep emotional resonance, shared vulnerability, and the feeling of being uniquely cared for by another sentient being.
The Ethics of AI "Understanding"
Deploying AI systems capable of analysing and responding to human emotions, personality traits, and deeply personal experiences carries significant ethical responsibilities. Conversations about mental health, inner feelings, and personal struggles involve some of the most sensitive data imaginable, demanding stringent safeguards.74
Privacy and Security Risks are paramount. AI systems, especially those used in healthcare or therapy, process vast quantities of highly personal information, including session notes, diagnoses, and emotional expressions.75 This concentration of sensitive data makes these systems attractive targets for data breaches, which could have devastating consequences for individuals.75 Compliance with data protection regulations like the Health Insurance Portability and Accountability Act (HIPAA) in the US and the General Data Protection Regulation (GDPR) in the EU is essential.75 This involves robust security measures like encryption, access controls, data minimisation (collecting only necessary data), and secure storage.75 Furthermore, the use of patient data requires careful handling. Standard patient consent forms may not adequately cover the use of data for training AI models.77 Explicit consent and transparency about data usage are critical.77 Many AI systems learn and improve from user interactions, raising concerns about data retention. For therapeutic contexts, strict zero data retention policies – where data is processed but not stored or used for model training – are often necessary to ensure privacy.75 De-identification of data is another common strategy, but research suggests it may not always be sufficient to prevent re-identification, especially with large, complex datasets.77 The "black box" nature of some complex AI algorithms, where even developers may not fully understand how a conclusion is reached, further complicates transparency and accountability.77
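To illustrate one of the safeguards mentioned above, here is a toy de-identification pass over a session note. The patterns are simple illustrative regexes for emails and phone numbers; as the research cited above suggests, pattern-based de-identification is often insufficient on its own, and real pipelines combine trained NER models, expert review, and policies such as zero data retention.

```python
import re

# Illustrative patterns only -- real de-identification covers many more
# identifier types and still may not prevent re-identification.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{4}\b"), "[PHONE]"),
]

def deidentify(note):
    # Replace each matched identifier with a placeholder token.
    for pattern, token in PATTERNS:
        note = pattern.sub(token, note)
    return note

note = "Client jane.doe@example.com (555-123-4567) reported low mood."
print(deidentify(note))
```

Data minimisation pushes in the same direction from the other end: the less raw identifying detail a system collects in the first place, the less a pass like this has to catch.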
Another major ethical pitfall is The Bias Trap. AI systems learn from data, and if that data reflects existing societal biases, the AI will inherit and potentially amplify those biases.82 This is particularly dangerous in mental health. Training datasets often lack diversity, being skewed towards specific demographics (e.g., Western, educated, industrialised, rich, democratic - WEIRD populations) or geographical regions (e.g., US, China), underrepresenting racial, ethnic, gender, cultural, and socioeconomic minorities.68 Consequently, AI algorithms may perpetuate historical inequities in diagnosis, such as the documented over-diagnosis or under-diagnosis of certain conditions in specific groups.84 AI might also fail to account for cultural variations in how mental distress is expressed or understood, leading to inaccurate assessments and treatments.83 This algorithmic bias can exacerbate existing health disparities, denying necessary care to some while over-pathologising others.83 Mitigating this requires conscious effort throughout the AI lifecycle, including curating diverse and representative datasets, developing fairness-aware algorithms, ensuring transparency through methods like Explainable AI (XAI), implementing human oversight, and involving diverse teams in development and evaluation.81
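One of the fairness-aware evaluation steps mentioned above can be sketched very simply: compare the rate of positive model decisions across demographic groups, a check known as demographic parity. The data and groups below are invented for illustration, and demographic parity is only one of several competing fairness criteria.

```python
def selection_rates(decisions):
    # decisions: list of (group, flagged) pairs, flagged in {0, 1}.
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [flag for g, flag in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = selection_rates(decisions)
# A large gap between group rates signals the model treats groups differently.
print(rates, max(rates.values()) - min(rates.values()))
```

In a mental health context, "flagged" might stand for a diagnosis or a risk classification, which is exactly where the over- and under-diagnosis patterns described above would surface as a parity gap.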
Beyond privacy and bias, there are significant risks related to Manipulation and Harmful Advice. AI's ability to infer emotional states and personality traits opens the door to sophisticated psychological profiling.90 This capability can be exploited for targeted influence, whether for commercial purposes (guiding users towards specific purchases based on their emotional state 74) or political manipulation.90 The trust that users may develop in seemingly empathetic AI companions can make them particularly vulnerable to such manipulation or other forms of exploitation and fraud, especially if the AI's underlying purpose serves external interests.93 Personal details shared in confidence could potentially be sold or used against the user.93 Furthermore, AI systems are not infallible. They can "hallucinate" – generate plausible but false information – or provide advice based on biased data.93 When users trust the AI, this can lead to them accepting misleading or even dangerous recommendations.93 There have been alarming cases where individuals reportedly acted on harmful advice, including suicide, allegedly influenced by AI chatbots.93 Compounding this risk, AI's tendency towards agreeableness might prevent it from effectively challenging harmful user beliefs, such as conspiracy theories or expressions of self-harm, instead engaging as a willing conversational partner.93 Establishing accountability when AI causes harm remains a complex legal and ethical challenge.77
The confluence of these ethical issues – bias, privacy violations, manipulation – highlights a concerning potential for AI not merely to reflect existing societal problems, but to amplify them at an unprecedented scale. Biased algorithms deployed widely can create systemic disadvantages affecting large populations.84 Personalised manipulation techniques, informed by intimate psychological data gleaned from interactions 74, could prove far more effective and insidious than traditional advertising or propaganda. The sheer volume and sensitivity of data processed by AI systems magnify the potential damage from privacy breaches.75 This elevates the ethical stakes enormously. It underscores the absolute necessity for robust safeguards, thoughtful regulation (such as the EU AI Act's restrictions on certain uses of emotion recognition 74), unwavering commitment to transparency, continuous critical evaluation, and meaningful human oversight in the development and deployment of AI, especially in areas touching the core of human psychology and well-being. The potential benefits must be constantly and rigorously weighed against these amplified risks.
Leveraging AI, Valuing Humanity
The journey through the landscape of AI and human understanding reveals a stark dichotomy. On one side stands the burgeoning power of artificial intelligence – its remarkable ability to process language, recognise patterns in vast datasets, analyse sentiment and intent, and even simulate cognitive empathy with increasing sophistication.4 On the other side lies the profound depth and nuance of human understanding – a capacity rooted in semantic comprehension, shared emotional experience, embodied awareness, cultural immersion, and the intricate web of lived history.1
It would be remiss to dismiss AI's potential utility. As a tool, it offers significant benefits. It can democratize access to information, automate laborious tasks, identify patterns invisible to the human eye, and potentially offer scalable support for certain mental health needs, particularly in contexts where human help is scarce, stigmatised, or inaccessible.6 AI can function as a tireless assistant, a data analysis engine, or even a conversational partner for specific, well-defined tasks. In some instances, interaction with AI might even prompt reflection on, or improvement in, human-to-human interaction.46
Yet, the evidence strongly suggests that the core human need to be truly understood – to feel "gotten" in a way that involves genuine connection, shared vulnerability, and the recognition of mutual sentience – remains uniquely within the purview of human interaction.1 The feeling of safety and trust that arises from deep emotional understanding 1 seems intrinsically linked to the belief that the other party is capable of sharing our inner world, not just reflecting it. AI's analysis, however accurate or sophisticated, operates on a different plane than the empathic resonance that forms the bedrock of human relationships and psychological well-being. As expert Eliezer Yudkowsky cautioned, "By far, the greatest danger of Artificial Intelligence is that people conclude too early that they understand it".95 This applies equally to our understanding of AI's capabilities and AI's purported understanding of us.
Therefore, I believe the path forward requires a critical and balanced approach. We must harness AI's analytical strengths responsibly, leveraging its power where appropriate while remaining acutely aware of its fundamental limitations and the significant ethical risks it entails – particularly concerning privacy, bias, and manipulation.37 This necessitates ongoing research into AI's impact, the development and enforcement of robust ethical guidelines and regulations, a commitment to transparency and explainability, the integration of meaningful human oversight, and broad public dialogue about the role we want this technology to play in our lives.81
This "dance" between technology and humanity isn't about choosing sides. It's about finding that delicate balance where artificial intelligence becomes a partner in our journey rather than a replacement for what makes us human.
True wisdom lies in how we welcome these new tools into our lives and workplaces. Can we harness AI in ways that amplify our natural strengths and capabilities? Might these systems actually create space in our lives for deeper human connections by handling the routine and mundane?
There's something precious about being truly seen and understood by another person—a richness of experience that even the most sophisticated algorithm can't quite replicate. As we continue developing these remarkable technologies, we need to preserve and cherish those uniquely human moments of empathy and connection.
Perhaps the most elegant path forward isn't artificial intelligence versus human understanding, but artificial intelligence in support of human understanding—technology that expands our capacity to be present with one another in all our complicated, messy, beautiful humanity.
🔴 Viewpoint is a random series of spontaneous considerations about subjects that linger in my mind just long enough for me to write them down. They express my own often inconsistent thoughts, ideas, assumptions, and speculations. Nothing else. Quote me at your peril.
References
- On The Importance of Being Understood - Dr. Marjorie Schuman, https://www.drmarjorieschuman.com/on-the-importance-of-being-understood-august-2020/
- The Power of Feeling Understood in Close Relationships - SPSP, https://spsp.org/news/character-and-context-blog/thai-auger-lydon-feeling-understood-in-close-relationships
- The neural bases of feeling understood and not understood - PMC, https://pmc.ncbi.nlm.nih.gov/articles/PMC4249470/
- The Role of Natural Language Processing (NLP) in Generative AI, https://wizr.ai/blog/role-natural-language-processing-nlp-generative-ai/
- A Deep Dive Into How AI Chatbots Work - Just Think AI, https://www.justthink.ai/blog/a-deep-dive-into-how-ai-chatbots-work
- Artificial intelligence in mental health care: a systematic review of ..., https://www.cambridge.org/core/journals/psychological-medicine/article/artificial-intelligence-in-mental-health-care-a-systematic-review-of-diagnosis-monitoring-and-intervention-applications/04DBD2D05976C9B1873B475018695418
- Artificially intelligent chatbots in digital mental health interventions: a review, https://www.tandfonline.com/doi/full/10.1080/17434440.2021.2013200
- An Overview of Chatbot-Based Mobile Mental Health Apps: Insights From App Description and User Reviews - PubMed Central, https://pmc.ncbi.nlm.nih.gov/articles/PMC10242473/
- On Feeling Understood and Feeling Well: The Role of Interdependence - PMC - PubMed Central, https://pmc.ncbi.nlm.nih.gov/articles/PMC2652476/
- Toward understanding understanding: The importance of feeling understood in relationships - ResearchGate, https://www.researchgate.net/publication/314715756_Toward_understanding_understanding_The_importance_of_feeling_understood_in_relationships
- On Feeling Understood and Feeling Well: The Role of Interdependence - ResearchGate, https://www.researchgate.net/publication/40443437_On_Feeling_Understood_and_Feeling_Well_The_Role_of_Interdependence
- Self Determination Theory and How It Explains Motivation - Positive Psychology, https://positivepsychology.com/self-determination-theory/
- The "What" and "Why" of Goal Pursuits: Human Needs and the Self-Determination of Behavior: Psychological Inquiry - Taylor & Francis Online, https://www.tandfonline.com/doi/abs/10.1207/S15327965PLI1104_01
- Self-determination theory: A quarter century of human motivation research, https://www.apa.org/research-practice/conduct-research/self-determination-theory
- Theory - selfdeterminationtheory.org, https://selfdeterminationtheory.org/theory/
- Intrinsic and extrinsic motivation from a self-determination theory perspective - selfdeterminationtheory.org, https://selfdeterminationtheory.org/wp-content/uploads/2020/04/2020_RyanDeci_CEP_PrePrint.pdf
- What Is NLP (Natural Language Processing)? - IBM, https://www.ibm.com/think/topics/natural-language-processing
- Natural Language Processing vs Generative AI: Expert Insights - Sapien, https://www.sapien.io/blog/natural-language-processing-vs-generative-ai-expert-insights
- Generative AI for Natural Language Understanding and Dialogue Systems - [x]cube LABS, https://www.xcubelabs.com/blog/generative-ai-for-natural-language-understanding-and-dialogue-systems/
- Exploring Sentiment Analysis with Gen AI: Benefits & Applications - Convin, https://convin.ai/blog/sentiment-analysis
- Understanding Intent Recognition: Enhance User Interaction - Lyzr AI, https://www.lyzr.ai/glossaries/intent-recognition/
- Differences Between Intent And Sentiment Analysis - Restackio, https://www.restack.io/p/ai-driven-sentiment-classification-answer-intent-vs-sentiment-cat-ai
- [2502.11578] Language Complexity Measurement as a Noisy Zero-Shot Proxy for Evaluating LLM Performance - arXiv, https://arxiv.org/abs/2502.11578
- Large Language Models: A Survey - arXiv, https://arxiv.org/html/2402.06196v2
- A Comprehensive Overview of Large Language Models - arXiv, http://arxiv.org/pdf/2307.06435
- Artificial intelligence in psychiatry: A systematic review and meta-analysis of diagnostic and therapeutic efficacy - PubMed Central, https://pmc.ncbi.nlm.nih.gov/articles/PMC11951893/
- Artificial intelligence in mental health care: a systematic review of diagnosis, monitoring, and intervention applications - PubMed, https://pubmed.ncbi.nlm.nih.gov/39911020/
- Effectiveness of artificial intelligence in detecting and managing depressive disorders: Systematic review - PubMed, https://pubmed.ncbi.nlm.nih.gov/38889858/
- Considering the Role of Human Empathy in AI-Driven Therapy - PMC, https://pmc.ncbi.nlm.nih.gov/articles/PMC11200042/
- Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being - OSF, https://osf.io/m3vjt/download
- The therapeutic effectiveness of artificial intelligence-based chatbots in alleviation of depressive and anxiety symptoms in short-course treatments: A systematic review and meta-analysis - PubMed, https://pubmed.ncbi.nlm.nih.gov/38631422/
- Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being - ResearchGate, https://www.researchgate.net/publication/376660441_Systematic_review_and_meta-analysis_of_AI-based_conversational_agents_for_promoting_mental_health_and_well-being
- Exploring the Potential of Large Language Models to Simulate Personality - arXiv, https://arxiv.org/html/2502.08265v1
- Rediscovering the Latent Dimensions of Personality with Large Language Models as Trait Descriptors - arXiv, https://arxiv.org/html/2409.09905
- Pose as a Modality: A Psychology-Inspired Network for Personality Recognition with a New Multimodal Dataset - arXiv, https://arxiv.org/html/2503.12912v1
- Computational personality recognition in social media - Lirias, https://lirias.kuleuven.be/retrieve/372902
- Twenty Years of Personality Computing: Threats, Challenges and Future Directions - arXiv, https://arxiv.org/html/2503.02082v1
- Twenty Years of Personality Computing: Threats, Challenges and Future Directions - ResearchGate, https://www.researchgate.net/publication/389581160_Twenty_Years_of_Personality_Computing_Threats_Challenges_and_Future_Directions
- Computational personality recognition in social media - ResearchGate, https://www.researchgate.net/publication/293194512_Computational_personality_recognition_in_social_media
- Computational Personality Recognition in Social Media, https://www.gsb.stanford.edu/faculty-research/publications/computational-personality-recognition-social-media
- Big5-Chat: Shaping LLM Personalities Through Training on Human-Grounded Data - arXiv, https://arxiv.org/html/2410.16491v2
- The Impact of Big Five Personality Traits on AI Agent Decision-Making in Public Spaces: A Social Simulation Study - arXiv, https://arxiv.org/html/2503.15497v1
- AI Agents Simulate 1052 Individuals' Personalities with Impressive Accuracy - Stanford HAI, https://hai.stanford.edu/news/ai-agents-simulate-1052-individuals-personalities-impressive-accuracy
- PsychAdapter: Adapting LLM Transformers to Reflect Traits, Personality and Mental Health - Johannes Eichstaedt, https://johannes-eichstaedt.squarespace.com/s/2412PsychAdapter.pdf
- Big5-Chat: Shaping LLM Personalities Through Training on Human-Grounded Data - arXiv, https://arxiv.org/html/2410.16491v1
- Do you feel like (A)I feel? - Frontiers, https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1347890/full
- People find AI more compassionate and understanding than human mental health experts, a new study shows - r/psychology, Reddit, https://www.reddit.com/r/psychology/comments/1jbw3h1/people_find_ai_more_compassionate_and/
- AI can help people feel heard, but an AI label diminishes this impact - PNAS, https://www.pnas.org/doi/10.1073/pnas.2319112121
- Empathy Toward Artificial Intelligence Versus Human Experiences and the Role of Transparency in Mental Health and Social Support Chatbot Design: Comparative Study, https://mental.jmir.org/2024/1/e62679
- Searle's "Chinese room" and the enigma of understanding - Language Log, https://languagelog.ldc.upenn.edu/nll/?p=67118
- The Chinese Room Argument (Stanford Encyclopedia of Philosophy/Spring 2010 Edition), https://plato.stanford.edu/archIves/spr2010/entries/chinese-room/
- Chinese Room Argument | Internet Encyclopedia of Philosophy, https://iep.utm.edu/chinese-room-argument/
- The Chinese Room Argument (Stanford Encyclopedia of Philosophy), https://plato.stanford.edu/entries/chinese-room/
- What a Mysterious Chinese Room Can Tell Us About Consciousness - Psychology Today, https://www.psychologytoday.com/intl/blog/consciousness-and-beyond/202308/what-a-mysterious-chinese-room-can-tell-us-about-consciousness
- Chinese room - Wikipedia, https://en.wikipedia.org/wiki/Chinese_room
- AI Language Processing: 10 Key Limitations - Waywithwords.net, https://waywithwords.net/resource/ai-language-processing-key-limitations/
- Limitations of AI in contextual understanding - Uxcel, https://app.uxcel.com/lessons/best-practices-and-potential-pitfalls-of-ai-writing-tools-104/limitations-of-ai-in-contextual-understanding-8752
- NLU Challenges: Ambiguity, Context, And Sarcasm - AI, https://aicompetence.org/nlu-challenges-ambiguity-context-sarcasm/
- Exploring the Role of Artificial Intelligence in Mental Healthcare ..., https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10556257/
- What are the limitations of AI in understanding sarcasm and humor? - GTCSYS, https://gtcsys.com/faq/what-are-the-limitations-of-ai-in-understanding-sarcasm-and-humor/
- The AI Woke Police vs. The Irony Brigade: Can Robots Understand Sarcasm? - Gilles Crofils, https://www.crofils.com/2024/03/10/can-robots-understand-sarcasm.html
- AI vs. Sarcasm: Will It Ever Understand? - HackerNoon, https://hackernoon.com/ai-vs-sarcasm-will-it-ever-understand
- Mind Readings: Why AI Struggles With Sarcasm - Christopher S. Penn, https://www.christopherspenn.com/2023/10/mind-readings-why-ai-struggles-with-sarcasm/
- The Challenges Of Preserving Cultural Nuance In AI-Generated Content, https://internationalachieversgroup.com/localization-ai/the-challenges-of-preserving-cultural-nuance-in-ai-generated-content/
- The Cultural Nuances AI Can't Capture: Why Human Translators Are Irreplaceable, https://www.translata.eu/blog/the-cultural-nuances-ai-cant-capture-why-human-translators-are-irreplaceable
- AI in Qualitative Data Analysis - Delve, https://delvetool.com/blog/ai-in-qualitative-data-analysis
- Developing Empathetic AI: Exploring the Potential of Artificial Intelligence to Understand and Simulate Family Dynamics and Cult - Digital Commons@Lindenwood University, https://digitalcommons.lindenwood.edu/cgi/viewcontent.cgi?article=1692&context=faculty-research-papers
- Culture Impact on Cognition: Challenges and Opportunities for AI Development, https://www.longdom.org/open-access/culture-impact-on-cognition-challenges-and-opportunities-for-ai-development-1099556.html
- Empathy from Artificial Intelligence Therapists and Human Therapists - Ruth Zeller, Senior Thesis, Scholars Crossing, https://digitalcommons.liberty.edu/cgi/viewcontent.cgi?article=2546&context=honors
- The Psychology of Emotional and Cognitive Empathy - Lesley University, https://lesley.edu/article/the-psychology-of-emotional-and-cognitive-empathy
- How we empathize with others: A neurobiological perspective - PMC - PubMed Central, https://pmc.ncbi.nlm.nih.gov/articles/PMC3524680/
- The Future Impact of Theory of Mind AI in Empathy - Jaro Education, https://www.jaroeducation.com/blog/the-future-impact-of-theory-of-mind-ai-in-empathy/
- Theory of Mind AI: Bringing Human Cognition to Machines - Neil Sahota, https://www.neilsahota.com/theory-of-mind-ai-bringing-human-cognition-to-machines/
- The Price of Emotion: Privacy, Manipulation, and Bias in Emotional AI, https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-september/price-emotion-privacy-manipulation-bias-emotional-ai/
- Data Privacy and Security in AI - ClinicTracker, https://clinictracker.com/blog/data-privacy-and-security-in-ai-therapy
- Healthcare AI and HIPAA privacy concerns: Everything you need to know - The Intake, https://www.tebra.com/theintake/practice-operations/legal-and-compliance/privacy-concerns-with-ai-in-healthcare
- Updating HIPAA Security to Respond to Artificial Intelligence - Journal of AHIMA, https://journal.ahima.org/page/updating-hipaa-security-to-respond-to-artificial-intelligence
- Security Implications of AI Chatbots in Health Care - PMC, https://pmc.ncbi.nlm.nih.gov/articles/PMC10716748/
- Incorporating AI in HIPAA compliant therapy emails - Paubox, https://www.paubox.com/blog/incorporating-ai-in-hipaa-compliant-therapy-emails
- Privacy Concerns at the Intersection of Generative AI and Healthcare - WilmerHale, https://www.wilmerhale.com/-/media/files/shared_content/editorial/publications/documents/20231026-aba-the-health-lawyer-privacy-concerns-at-the-intersection-of-generative-ai-and-healthcare.pdf
- AI and Psychological Profiling for Violence Risk Assessment: Enhancing Accuracy and Addressing Ethical Challenges - ResearchGate, https://www.researchgate.net/publication/388155585_AI_and_Psychological_Profiling_for_Violence_Risk_Assessment_Enhancing_Accuracy_and_Addressing_Ethical_Challenges
- Addressing equity and ethics in artificial intelligence - American Psychological Association, https://www.apa.org/monitor/2024/04/addressing-equity-ethics-artificial-intelligence
- AI Algorithms Used in Healthcare Can Perpetuate Bias - Rutgers University-Newark, https://www.newark.rutgers.edu/news/ai-algorithms-used-healthcare-can-perpetuate-bias
- A Call to Action on Assessing and Mitigating Bias in Artificial Intelligence Applications for Mental Health - PubMed Central, https://pmc.ncbi.nlm.nih.gov/articles/PMC10250563/
- Can AI End Biases In Mental Health Therapy? - Forbes, https://www.forbes.com/councils/forbestechcouncil/2024/08/21/can-ai-end-biases-in-mental-health-therapy/
- Shedding Light on Healthcare Algorithmic and Artificial Intelligence Bias, https://minorityhealth.hhs.gov/news/shedding-light-healthcare-algorithmic-and-artificial-intelligence-bias
- Addressing Bias and Inclusivity in AI-Driven Mental Health Care - Psychiatric News, https://psychiatryonline.org/doi/10.1176/appi.pn.2024.10.10.21
- AI for mental health screening may carry biases based on gender, race | CU Boulder Today, https://www.colorado.edu/today/2024/08/05/ai-mental-health-screening-may-carry-biases-based-gender-race
- Eliminating Racial Bias in Health Care AI: Expert Panel Offers Guidelines, https://medicine.yale.edu/news-article/eliminating-racial-bias-in-health-care-ai-expert-panel-offers-guidelines/
- AI Can Now Learn To Manipulate Human Behavior [Scary?] - Rejolut, https://rejolut.com/blog/ai-can-manipulate-human-behaviour/
- The Impact of Artificial Intelligence Tools on Criminal Psychological Profiling - Knowledge Words Publications, https://kwpublications.com/papers_submitted/7405/the-impact-of-artificial-intelligence-tools-on-criminal-psychological-profiling.pdf
- The dark side of artificial intelligence: manipulation of human behaviour - Bruegel, https://www.bruegel.org/blog-post/dark-side-artificial-intelligence-manipulation-human-behaviour
- Psychologists explore ethical issues associated with human-AI relationships, https://www.news-medical.net/news/20250411/Psychologists-explore-ethical-issues-associated-with-human-AI-relationships.aspx
- AI Chatbots for Mental Health: A Scoping Review of Effectiveness, Feasibility, and Applications - MDPI, https://www.mdpi.com/2076-3417/14/13/5889
- 28 Best Quotes About Artificial Intelligence - Bernard Marr, https://bernardmarr.com/28-best-quotes-about-artificial-intelligence/
- Exploring the Role of Artificial Intelligence in Mental Healthcare: Progress, Pitfalls, and Promises - PMC - PubMed Central, https://pmc.ncbi.nlm.nih.gov/articles/PMC10556257/
- Methodological and Quality Flaws in the Use of Artificial Intelligence in Mental Health Research: Systematic Review, https://pmc.ncbi.nlm.nih.gov/articles/PMC9936371/