Addicted to AI?

Imagine talking to an AI that gives you answers faster than any human, yet you feel uneasy. You're not sure whether you can trust it. You sense you're losing something: maybe control, maybe the meaning of the interaction. Or maybe the opposite happens, and you find yourself becoming dependent on AI, on its illusion of understanding and its instant life advice. This isn't just about technology; it's about relationships. New tools don't just change how we work. They shape our emotions: from awe to fear, from excitement to confusion.
In a world where AI is becoming part of everyday communication—both with clients and within teams—the ability to manage emotions and build human–technology relationships is emerging as a key psychological competence.
📌 Why does this matter?
- Working with AI can lead to alienation or boost efficiency—depending on the emotional context.
- Without a language to describe our relationship with technology, we fall into denial, passivity, or hostility.
- Teams that understand emotions toward AI build trust faster and adapt more effectively.
📊 What does research show?
- Research by Afroogh, Akbari, Malone, Kargar, and Alambeigi (2024) shows that trust in artificial intelligence is neither uniform nor static. It depends on the context of use, the type of interaction (human–machine, machine–human, machine–machine), and users' psychological traits, such as openness to exploration, need for control, or level of anxiety. AI's trustworthiness rests not only on accuracy and technical reliability but also on alignment with social and ethical values.
- According to Kirk, Gabriel, Summerfield, Vidgen, and Hale (2025), AI companions are not simply new technologies but powerful tools of emotional interaction that can become addictive. We aim to build safe AI systems that are aligned with our goals and remain under our control, yet as AI develops, a new challenge emerges: the formation of deeper, more lasting relationships between humans and AI systems. This becomes especially evident as AI acquires more personalized and agentic (i.e., autonomous) features.
- Findings from Polish researchers Maj, Grzyb, Doliński, and colleagues (2025) suggest that although robots can elicit obedience, the effect is weaker than with human supervisors. For employers and HR departments, this highlights the importance of considering the psychological aspects of introducing robots into the workplace: how they are perceived as authority figures, the level of trust they inspire, and potential resistance to following their commands.
📌 What will you learn in this module?
- Recognizing emotions triggered by AI: learn to talk about fear, distrust, excitement, or frustration, both your own and your team's.
- Building a human–technology language: develop ways to describe relationships with AI without oversimplifying or demonizing, using metaphors, narratives, and micro-behaviors.
- Managing emotions in AI-augmented teamwork: how to foster empathy and adaptation when part of the team is an algorithm, and how to build trust and avoid techno-tensions in hybrid projects.
- Handling unpredictability and AI errors: build emotional resilience for when AI makes mistakes, surprises you, or "performs too well."
🧭 Technology won’t replace emotions—but it can trigger them
Working with AI is not just a task—it’s a new language of presence, influence, and responsibility at work and in life.
🎓 Do you want to learn how to collaborate with AI without feeling a loss of control or meaning?
Join our module “Managing Emotions and Communicating with AI.”
You will learn how to:
- recognize and regulate emotions triggered by technology,
- build an empathetic human–AI collaboration language,
- create teams that trust one another, even when one of the members is an algorithm.
👉 If you manage people, implement new technologies, or simply want to understand yourself better in the AI age—this module is for you.
📚 Sources and research:
- Afroogh, S., Akbari, A., Malone, E., Kargar, M., & Alambeigi, H. (2024). Trust in AI: progress, challenges, and future directions. Humanities and Social Sciences Communications, 11, Article 1568.
- Kirk, H. R., Gabriel, I., Summerfield, C., Vidgen, B., & Hale, S. A. (2025). Why human–AI relationships need socioaffective alignment. arXiv preprint.
- Maj, K., Grzyb, T., Doliński, D., & Franjo, M. (2025). Comparing obedience and efficiency in tedious task performance under human and humanoid robot supervision. Cognition, Technology & Work.
🔔 Sign up before frustration with AI becomes the new normal.
This module will be led by Dr. Anna Syrek-Kosowska and Dr. Konrad Maj.
