Decisions and Ethics in the Age of Algorithms

Your team is rolling out a new tool to support HR decision-making. The AI scans résumés, compares profiles, and recommends candidates. Over time, managers start asking: "Why should I even verify this, if the system already knows?" Why does this matter at work?
This isn't science fiction. The future of work is also about the relationship between human judgment, data, and AI-driven processes. According to researchers Pflanzer, Traylor, Lyons, Dubljević, and Nam (2022), models of human judgment and decision-making will be critical for embedding ethical principles into AI systems, and for explaining why people trust (or distrust) machines.
In the era of AI recommendations, it's not just about efficiency; it's about who truly decides and who bears responsibility.
In the workplace:
• A lack of clear rules for shared responsibility leads to decision chaos and team demoralization.
• Blind trust in algorithms weakens ethical reflection and conversations about values.
• AI can strengthen good decisions, but it can also legitimize biases unless someone asks "why?"
Why must leaders and teams learn ethical co-decision-making?
AI introduces three key risk areas: privacy, bias, and the erosion of human agency.
- Many people perceive AI recommendations as objective or emotion-free, so trust gets automated faster than the decisions themselves.
- Research by Koslosky and Lohn (2023) shows that humans tend to over-rely on AI recommendations (automation bias), which can result in poor decisions even when users know the system might be wrong.
How can you strengthen ethical thinking when working with AI?
- Keep humans in the loop: ensure human involvement in decisions that directly affect people (e.g., hiring, promotions, loans, evaluations).
- Analyze intentions and impacts: don't just ask "does it work?" but also "who benefits?", "who is excluded?", and "does this support our organization's values?"
- Create space for dialogue on dilemmas: ethics begins with conversation. Host regular team sessions to reflect on AI-supported decisions.
- Treat AI as a partner, not an oracle: instead of surrendering judgment, learn to collaborate. Define clear rules for when the system supports a decision and when humans must have the final say.
AI is not just a tool; it's a mirror of our values
Algorithms don't decide in a vacuum. We either teach them what matters, or we let them learn from data shaped by our unconscious biases. In an age of automation, we need conscious humans, not passive operators.
Join our module "Decisions and Ethics in the Age of Algorithms" and learn how to:
• Identify hidden dilemmas in AI-supported decisions
• Strengthen team accountability and responsibility
• Build trust without blind faith in algorithms
• Lead difficult conversations about fairness, errors, and values
If you're using AI tools to make decisions, this module is for you.
Because a decision is not just an outcome; it's a responsibility.
Sources and research:
- Pflanzer, M., Traylor, Z., Lyons, J. B., Dubljević, V., & Nam, C. S. (2022). Ethics in human–AI teaming: principles and perspectives. AI and Ethics, 3, 917–935.
- Koslosky, L., & Lohn, A. (2023). AI Safety and Automation Bias. Center for Security and Emerging Technology (CSET).
- Pazzanese, C. (2020, October 26). Ethical concerns mount as AI takes bigger decision-making role. The Harvard Gazette.
Sign up before you make your next decision you can't explain.
This module will be led by Dr. Anna Syrek-Kosowska and Agnieszka Bochacka.
