Biometric technologies are already part of our daily lives. Through fingerprints, facial recognition, or iris scans we can unlock our smartphones, access mobile apps, or enter our workplaces. Private companies use these tools to enhance security and streamline processes or consumer experiences.
Governments, in turn, employ these technologies in identity systems and migration control, embedding them in e-passports, visas, and official procedures. They are also central to population registries, licensing, and public security services.
The expansion of biometric technologies promises greater security, efficiency, and organizational control, but it also opens an unsettling ethical frontier: how do they impact human dignity?
Biometric technologies not only safeguard identities and prevent risks; they can also monitor, dehumanize, and perpetuate inequalities. These tools act as a double-edged sword. In a world shaped by surveillance capitalism, the challenge for organizations is not whether to adopt biometrics, but how to do so without sacrificing the intrinsic value of people.
The article “Supporting and Humiliating Dignity with Biometric Technologies: An Affordance Perspective” (Journal of Business Ethics, 2024), coauthored with Jayson Killoran (University of Victoria, Canada), Jasmin Manseau (Telfer School of Management, Canada), and Andrew Park (University of Victoria, Canada), analyzes the ethical impacts of biometric technologies on human dignity, a topic less explored compared to the frequent debates on privacy and discrimination.
In this study, we propose a conceptual framework based on affordance theory (the possibilities for action that a technology offers) to explain how biometrics can simultaneously support and humiliate human dignity in organizational contexts.
Evolution of Biometrics
The development of biometrics can be understood in three main generations, each marked by technical advances and specific ethical dilemmas.
- Who are you?: The first generation, which became popular in the late 1990s, focused on answering the question “who are you?” Its goal was to authenticate a person’s identity based on static physiological attributes, such as fingerprints, iris, or retina. Everyday examples include using fingerprints to access ATMs, unlock a car door, or enter restricted buildings.
The main ethical risk at this stage was invasion of privacy, since the mass storage of personal biometric data raised questions about who had access to this information and how it was being used.
- How are you?: With the second generation, consolidated in the 2010s, the question shifted to “how are you?” The focus was no longer just on identity verification but also on the interpretation of dynamic behaviors. Technologies such as voice recognition, keystroke patterns, cursor movements, or gait analysis began to be used in both security sectors and workplace or consumer environments. Banks, for instance, adopted handwriting analysis systems to detect fraud, while call centers started using voice recognition to evaluate employees’ emotional tone.
However, this generation introduced a new ethical dilemma: discrimination. By identifying gender, age, ethnicity, or sexual orientation, these systems could reinforce biases and exclusionary practices, such as denying job opportunities to certain groups or generating risk profiles that perpetuate stigmatization.
- Why are you this way? / What will happen next?: In recent years, a third generation of biometrics has emerged, aiming to answer deeper questions: “why are you this way?” and “what will happen next?” Driven by artificial intelligence (AI) and deep learning, this stage goes beyond simple identification or behavioral monitoring to address the interpretation of intentions and prediction of future outcomes. Illustrative examples include Amazon’s use of AI-enabled cameras to analyze delivery drivers’ facial expressions and emotional states, in order to correct risky behaviors before accidents occur. Another case is mining giant BHP Billiton, which introduced smart helmets capable of detecting signs of fatigue in workers and predicting when tasks might become unsafe. Similarly, health and fertility apps collect body temperature and menstrual rhythm data to predict reproductive cycles, sparking controversy over the sale of this information to insurers without explicit consent.
Although this third generation offers benefits in terms of accident prevention, service personalization, and workplace health, it raises a major ethical concern: surveillance capitalism. In this model, biometric information becomes a resource to control and monetize human behavior. Intimate data cease to be mere indicators and instead become valuable commodities, consolidating the power of a handful of tech corporations and threatening to dehumanize individuals by reducing them to mere data flows.
Reclaiming vs. Humiliating Human Dignity
Discussing biometric technologies involves more than questions of security, efficiency, or privacy. At the heart of the debate lies a fundamental value that spans philosophy, ethics, and organizational life: human dignity. This concept refers to the recognition of every person’s intrinsic worth and the obligation to treat individuals with respect, without reducing them to mere instruments of control or production.
From Kant to contemporary human rights frameworks, dignity has been understood as the core of human existence. Yet its expressions vary: from inherent dignity that all people possess by simply existing, to behavioral dignity earned through virtuous conduct, or meritocratic dignity achieved through accomplishments and social recognition. Biometrics can either uphold these forms of dignity (claims) or undermine them (affronts)—for example, by protecting a worker’s safety or, conversely, reducing them to a dataset. Understanding this duality is essential to grasp the true ethical implications of biometric adoption in workplaces and organizations.
Six Biometric Affordances
Affordances are the possibilities for action that an object, technology, or environment offers to its users. Biometric data create distinct action potentials that shape how people, companies, and governments interact with this technology and manage identity. These affordances can be grouped into two categories:
A) Inhibiting (control):
- Authenticability: enables quick and secure identity verification without the need for passwords—for example, e-passports or fingerprint scans used in airport security or ATMs.
- Behavioral controllability: optimizes operations by constraining human movement and reducing inefficiencies. For instance, facial recognition and gait analysis can monitor employees’ actions and correct behaviors deemed unproductive.
- Adhering to safety: reduces workplace accidents and injuries. At BHP Billiton, smart helmets detect miners’ fatigue and alert supervisors to adjust shifts and prevent injuries.
B) Augmenting (knowledge and feedback):
- Acquiring strategic insight: leverages aggregated organizational data for future-oriented opportunities. Companies use facial recognition in warehouses to analyze employee movements, productivity, and inefficiencies, redesigning layouts for optimal inventory retrieval.
- Facilitating knowledge sharing: fosters learning and collaboration across the organization. For example, call centers employ behavioral biometrics to analyze voice patterns in cases of aggression and prompt real-time adjustments, creating shared learning experiences.
- Digitizing personal feedback: provides employees with objective performance data. Wearable devices like Fitbit or Apple Watch capture health indicators such as heart rate or temperature; these data can demonstrate improved performance over time.
Paradoxical Tensions of Biometrics
The analysis of biometric affordances shows that these technologies have the capacity not only to support dignity but also to humiliate it. This generates a set of ethical tensions that the study conceptualizes as paradoxes, since they involve contradictory forces that coexist simultaneously. Six of the most relevant tensions are explained below.
- Maintaining safety vs. imposing surveillance: Biometric technologies can contribute to protecting the physical integrity of workers by identifying risks before they materialize. However, that same constant monitoring can turn into a form of invasive surveillance, where every movement is observed and evaluated. Thus, what initially seeks to safeguard inherent dignity —the right to a safe workplace— may end up undermining behavioral dignity, by restricting autonomy and generating an atmosphere of distrust.
- Enhancing accountability vs. omitting context: The collection of biometric data allows organizations to attribute responsibilities with precision: knowing who accessed a restricted area or who committed an operational error. This supports meritocratic dignity, since it guarantees that recognition or sanction falls on the correct person. Nevertheless, this same objectivity can become blind to the human context. A camera can record that an employee arrived late, but not capture that they were caring for a sick family member. By ignoring the circumstances, the organization risks labeling as negligence what actually responds to legitimate factors, thus humiliating the dignity of someone who is unjustly judged.
- Strengthening security vs. perpetuating discrimination: The use of biometrics to authenticate identities offers high levels of security in access to banks, airports, or mobile devices. This potential supports behavioral dignity, since it protects people from impersonation or fraud. However, facial recognition algorithms and other tools are not neutral: research has shown that they can present racial or gender biases, increasing surveillance of certain groups and reproducing exclusionary practices. In this case, the same mechanism that seeks to protect may end up humiliating inherent dignity, by treating some people as less trustworthy than others.
- Providing feedback vs. dehumanizing employees: Biometric data offer a valuable source of objective feedback: a worker can use performance records to request a promotion or improve their work practices. This strengthens meritocratic dignity by recognizing achievements based on concrete evidence. However, when biometric metrics become the only evaluation criterion, employees risk being seen as mere numbers or physiological patterns. Instead of valuing the complexity of their experience and creativity, their humanity is reduced to measurable data, humiliating inherent dignity.
- Standardizing training vs. homogenizing performance: Biometrics allow companies to identify optimal behaviors and design training programs that raise the quality of work. This supports behavioral dignity by offering employees the opportunity to align with the organization’s values and best practices. However, when training is excessively standardized, there is a risk of nullifying the diversity of talents. Workers stop standing out for their unique skills and feel pressured to fit into a uniform mold, which limits the possibility of achieving distinctions and individual recognition. In this way, meritocratic dignity is humiliated, since the opportunity to excel is restricted.
- Liberating autonomy vs. encouraging short-term goals: Some biometric systems offer information that helps establish clear and measurable objectives, allowing employees to freely choose how to achieve them. This supports inherent dignity by recognizing their capacity for self-determination. However, autonomy can also lead to shortsighted behaviors, where workers prioritize immediate rewards —such as meeting monthly sales targets— at the expense of more sustainable objectives. Thus, even if individual freedom is respected, there is a risk of humiliating behavioral dignity by encouraging unethical actions or those contrary to the common good of the organization.
Taken together, these tensions highlight that biometrics are a double-edged tool: they can be a mechanism to protect, recognize, and empower people, but also to monitor, homogenize, and dehumanize them. The ethical responsibility of organizations lies in recognizing these paradoxes and consciously managing them, so that the value of human dignity is not sacrificed in the name of efficiency or control.
In our article, we argue that companies must assume the ethical responsibility of managing these tensions, balancing value creation with the protection of human dignity. Affordance theory provides a framework for designing policies and practices that recognize both the risks and the opportunities of biometrics in the workplace.