Ethical design of AI offers the balance we need between innovation and respect for humanity, and as such must be embraced, says María José Tellez, senior design manager at EGGS, part of Sopra Steria Spain.
Today, it can be difficult to remember what the world was like before artificial intelligence and generative AI. Neural networks and unsupervised learning emerged years ago, and now we find ourselves immersed in AI workflows, agents, and soon LWMs (large world models) – AI embedded in robots, brain-computer interfaces, and other technological innovations we can't yet fully imagine.
All of this presents different opportunities and threats and gives rise to a range of responses: euphoria, passive integration, caution, paralysis, or outright denial. Finding the right space for opportunity and adaptation is difficult due to the frenzy of change and distorted narratives, but going back to the past no longer seems like an option.
Adaptation, change, and transition are often costly. Meaningful transformation sometimes demands cognitive effort: changing beliefs and behaviours and asking “why?” Often, the right frameworks to support these changes and move beyond short-term effects are missing.
AI for humanists
In the face of exponential technological advancement and its algorithms, the imperative of maximum productivity, and legal frameworks that demand audits and certifications, the humanities-based approach from the field of design offers concrete actions and narratives that integrate empathy, cultural context, and social responsibility – bridging the gap between innovation and human values.
While engineering prioritises efficiency and law ensures compliance with regulation, design – guided by a humanistic and systemic vision – can identify and address conflicts. These include dilemmas like those between security and privacy posed by facial recognition cameras, the creation of journey maps that contrast technical data flows with emotional experiences, the organisation of shared human-robot spaces, and the management of consequences and potential externalities.
We already know that AI doesn’t operate in a vacuum: it requires enormous amounts of energy and natural resources like minerals and water. It can also replicate biases, influence critical decisions, and redefine social interactions. At the same time, we know we have the capacity to apply containment measures, pursue eco-efficiency, and take responsibility for AI’s impact – on how we use it, how we communicate, and how, through our roles, we safeguard fundamental rights, intellectual property, neuro-rights, and data sovereignty. If we don’t pay attention, we won’t be able to intervene.
The need for ethical design
Thus, the ethical design approach – or digital ethics applied to the design of digital products and services – is not just about avoiding harm but about creating systems that promote fairness and wellbeing, beyond the simplistic triangle of saviour, victim, and perpetrator. Design frameworks help us anticipate risks and vulnerabilities through critical thinking, participatory inquiry, humanistic perspectives, and questioning – a key tool for reorienting and co-creating more responsible solutions.
This involves being transparent with interfaces that explain how AI makes decisions, optimising digital footprints, prioritising modular components that extend the lifecycle of design systems, and creating experiences that encourage healthy habits. Above all, it means asking questions that prompt reflection within teams.
AI will never be neutral, as it reflects the priorities of those who design it. So, how can we ensure that each solution we create contributes to a more just, meaningful society and a more liveable world? Beyond working in multidisciplinary teams and involving all stakeholders actively in design and usage, we must assess whether a technology enhances capabilities or merely adds noise, and understand its full lifecycle, from design and development to integration, use, and obsolescence.
While compliance with regulation can serve as a trigger to prevent greater harm, it’s not enough. A genuine commitment is needed to build the futures we want, starting now. At the crossroads AI presents, where technology risks becoming an end in itself, we must design with the intention of finding a balance between innovation and respect for humanity. In this context, design brings a crucial layer of operational ethical intentionality. We can either drift toward dystopia or strive to use technology in service of life’s intelligence.