Trust is at the heart of virtually all organizational interactions. It facilitates collaboration among actors and increases the effectiveness of formal and informal transactions. But the current wave of digital transformation is fundamentally altering the dynamics of organizational trust. Instead of trusting the integrity, competence, and benevolence of individuals, are we now left to trust the reliability, functionality, and usefulness of machines?
To assess the shift in the forms, modes, and targets of trust in organizations, Oliver Schilke, together with Fabrice Lumineau (HKU Business School) and Wenqian Wang (Krannert School of Management), published the article “Organizational Trust in the Age of the Fourth Industrial Revolution” (Journal of Management Inquiry, 2023). In it, the authors argue that the advent of the Fourth Industrial Revolution (4th IR) is substantially changing trust patterns within and across organizations.
The 4th IR is currently under way with the proliferation of autonomous systems that facilitate high levels of interconnectivity and interoperability among humans and machines. These technologies include blockchain, the Internet of Things (IoT), cloud computing, machine learning (ML), and artificial intelligence (AI).
The 4th IR is associated with an explosion in the volume, variety, and velocity of data being analyzed in near real time, including online footprints, social connections, location traces, and facial-recognition data. Analyzing all these data requires computationally powerful machines that can assist or even replace human agents. Ongoing advances in AI provide the basis for an unprecedented computerization of intelligence, with ML techniques learning from experience and performing key tasks. Economic activities are increasingly executed automatically: information collected by smart sensors and mobile devices is uploaded to the cloud, where it feeds systems that carry out routinized decision-making tasks prescribed by algorithms. To facilitate such automated, fast, and flexible adaptation, the structures for organizing economic activities are becoming more decentralized.
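To make this concrete, here is a minimal Python sketch of how sensor data might feed a routinized, algorithm-prescribed decision rule with no human in the loop; the device names, telemetry fields, and thresholds are entirely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    device_id: str
    temperature_c: float  # hypothetical telemetry field

def route_reading(reading: SensorReading) -> str:
    """Routinized decision rule fixed in advance by an algorithm:
    no human reviews individual readings."""
    if reading.temperature_c > 75.0:   # illustrative threshold
        return "shut_down_machine"
    if reading.temperature_c > 60.0:
        return "schedule_maintenance"
    return "continue_operation"

# Readings would stream in from devices via the cloud; hard-coded here.
readings = [
    SensorReading("press-01", 58.2),
    SensorReading("press-02", 81.7),
]
for r in readings:
    print(r.device_id, "->", route_reading(r))
```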
All these trends make it clear that the 4th IR fundamentally alters how employees and organizations interact. As a result, it is important to revisit when and how to trust. The 4th IR is likely to have especially important consequences for trust when two conditions are met:
An increasing number of tasks can be executed quasi-automatically, obviating the need for human intervention. As a result, system trust becomes increasingly important. An example is blockchain, where people trust the information they receive without needing interpersonal trust in other participants. This technology is often employed together with smart contracts, which support the autonomous execution of agreements, thereby limiting human interaction. Some banks, for example, now rely on an entirely digital loan-approval process that involves no human bankers. Trust is thus shifting from the individual level to the reliability, functionality, and usefulness of the technological system.
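As an illustration of the banking example above, here is a stylized Python sketch of a fully automated approval rule. The field names and cutoffs are hypothetical, and a production system would encode such checks in smart-contract or back-end code rather than a standalone script:

```python
def approve_loan(credit_score: int, annual_income: float, amount: float) -> bool:
    """Fully automated approval: the decision follows fixed, pre-agreed
    rules, so applicants trust the system rather than a human banker."""
    if credit_score < 650:                   # illustrative credit cutoff
        return False
    return amount <= 0.35 * annual_income    # illustrative affordability rule

print(approve_loan(credit_score=720, annual_income=80_000, amount=25_000))  # True
print(approve_loan(credit_score=600, annual_income=80_000, amount=25_000))  # False
```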
One of the downsides, however, may be the possibility of overtrusting the system, as in the Air France Flight 447 crash or the self-driving Uber car accidents. Further, the loss of human agency may create a sense of alienation from decisions, ultimately leading to frustration among organizational members. Moreover, when systems fail, responsibility tends to be more distant and diffuse, creating ambiguity about the real cause of the failure and the identity of the actors responsible for the trust breach. Such ambiguity also creates significant hurdles for trust repair. In short, the implications of these shifts in trust for the future of work are significant.
Many technologies of the 4th IR rely on a set of protocols and codes that determine the trustworthiness of a prospective partner not exclusively on the basis of past interactions but also through categorization: if the category to which a trustee belongs is deemed trustworthy, the trustee will be considered trustworthy as well. As such, the type of information on which trust decisions are based is shifting from past interpersonal experience to data that allow an actor to be categorized. For example, insurance companies offer customized products to applicants based on their background information, and e-commerce providers determine the trustworthiness of reviewers based on category matching.
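A minimal sketch of such category-based trust, with made-up categories and ratings: the trustee simply inherits the trust score of its category instead of being evaluated on a history of interactions:

```python
# Hypothetical category ratings; in practice these would be learned or
# maintained by the platform, not hard-coded.
CATEGORY_TRUST = {
    "verified_purchaser": 0.9,
    "new_account": 0.4,
    "flagged_reviewer": 0.1,
}

def trust_by_category(trustee_category: str, threshold: float = 0.5) -> bool:
    """Trust decision by categorization: no interaction history consulted."""
    score = CATEGORY_TRUST.get(trustee_category, 0.0)  # unknown categories distrusted
    return score >= threshold

print(trust_by_category("verified_purchaser"))  # True
print(trust_by_category("new_account"))         # False
```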
One downside, however, is that algorithms merely follow pre-defined rules of action and leave no room for affect-based decision-making. While both interpersonal and interorganizational trust typically involve a mix of rational and emotional components, this shift may take away some of the uniquely human skills that help discriminate trustworthy targets from untrustworthy ones. Another problem relates to the lack of transparency and replicability of technologies relying on AI. Most people, and sometimes even the developers themselves, do not understand how ML systems arrive at their outputs. As a result, AI decisions can be difficult to predict, and the logic behind any given decision tends to be poorly understood.
Even though the importance of interpersonal trust tends to decrease with the 4th IR, there will nonetheless always be a need to trust certain people or entities. Increasingly, however, these actors are not the counterparts in a collaboration but rather the third parties in charge of developing and maintaining digital systems, such as software engineers and the companies providing technological infrastructure. Trustors may face considerable uncertainty regarding who designed the system, who provides the information that feeds the algorithm, and who has access to the data. For example, a coder or system architect may have introduced, intentionally or not, cultural and/or personal biases into the code supporting the technology, as in Google's allegedly racist image labeling. Likewise, bad input may produce mistakes ranging from topology errors to the effects of malicious attacks, so trust in the parties providing the data is key.
Finally, much data sharing rests on the trusting expectation that the data will be kept confidential and gathered only with consent. In reality, however, a great deal of sensitive information is collected without consent and employed for a variety of purposes, as the Facebook-Cambridge Analytica scandal illustrated.
In sum, in the era of the 4th IR, organizational trust is undergoing a fundamental shift from individuals to technology, presenting both opportunities and challenges in redefining when, how, and whom to trust.
The author is Distinguished Visiting Professor in Leadership and Effective Organizations at EGADE Business School.