Whether it is the messages we send, the commands we relay to our smart assistants, or the photos we upload online, data flows from individuals to organisations have increasingly come to define our interactions in the digital space. While we may be unaware of the inferences and decisions made using our data, the sharing of this data is something we usually consent to – through the lengthy notice-and-consent privacy forms we (unwittingly) accept. Governments across the world are looking to create interoperable digital spaces to generate greater value from existing datasets while protecting users. In Europe, the proposed Data Governance Act recognises a new class of data intermediaries to enable data exchange while safeguarding individuals’ interests. In India, the draft Non-Personal Data Framework seeks to increase access to community-generated data for domestic actors, with the interests of these communities to be safeguarded by data trustees. These interventions attempt to address the failures of current consent mechanisms by recognising intermediaries that can protect users’ interests on the internet while facilitating data sharing.
Delineating the interests that the law seeks to protect will help us evaluate the effectiveness of legal mechanisms in making meaningful interventions. The paper begins by briefly tracing the evolution of privacy harms in law and the difficulties that current legal interventions, such as notice-and-consent mechanisms, face in recognising these harms in the data economy. Building on this, the paper examines theoretical approaches that view relationships in the data economy through a fiduciary lens. Finally, the paper looks at data trusts as an intervention, identifying the legislative and regulatory challenges in recognising them.
Understanding privacy harms
To appreciate why consent should be meaningful, we need to understand the evolution of privacy harms in law and its constraints. Prosser’s seminal work in 1960 was one of the early efforts to conceptualise these harms. Prosser viewed privacy through the lens of specific injuries – reputational and proprietary – and, relying on tort law, outlined four types of invasion: intrusion upon the plaintiff’s seclusion, public disclosure of private facts, publicity placing the plaintiff in a false light, and appropriation of one’s identity. Intrusion seeks to capture the gaps between the offences of trespass and nuisance: it covers prying beyond physical intrusion and aims to address the mental discomfort a reasonable person may experience. Public disclosure and false light are similar to the extent that both are concerned with an individual’s reputation; however, false light – like defamation – is concerned with falsehoods, whereas public disclosure is concerned with truths. Lastly, the tort of appropriation of identity in effect confers certain proprietary rights upon an individual’s own name and likeness.
However, Citron and Solove argue that Prosser’s conceptualisation of privacy through tort law adopts a narrow and outdated interpretation of privacy which, among other shortcomings, does not account for the law of informational privacy. The law of privacy torts emphasises identifying tangible and concrete harms, which limits how meaningfully courts can intervene. It is also particularly difficult to attribute the cause of harm to corporations in the data economy, where the harms arising from the collection and processing of data are not necessarily tangible, financially or physically. Moreover, the use of aggregated data means that harms can also occur at a collective level, or as an accumulation of numerous minor harms.
Lawmakers have turned towards notice-and-consent mechanisms to address this. In practice, however, individuals cannot bargain with or resist these practices and are left with little choice but to accept the long, convoluted, and often intrusive privacy policies set by corporations. Such mechanisms also leave questions of accountability unanswered: even where there is a privacy breach, individuals and communities have little to no recourse to seek compensation for it.
Naturally, there is a growing realisation that data regulation needs to address these privacy harms by recognising the asymmetry of knowledge and control in the data economy. The much-celebrated work of Balkin on ‘information fiduciaries’ attempts to address this imbalance. Drawing from established fiduciary relationships such as those between doctors and patients, or lawyers and clients, Balkin proposes recognising companies engaged in data-driven algorithmic processing as ‘information fiduciaries’. He does not, however, classify all corporations as information fiduciaries, restricting the label to corporations that claim to be privacy-friendly and present themselves as having secure, purpose-defined data-sharing practices. To Balkin, such corporations ought to be bound by a higher standard of care, beyond what they claim in their privacy policies or terms and conditions. The fiduciary role protects consumers from harms that notice-and-consent frameworks alone do not address.
Balkin’s proposal received support from sections of lawmakers, practitioners, and corporations (including Facebook). However, it also saw pushback from scholars like Khan, Pozen, and Grimmelmann, who, while fundamentally in agreement that technology corporations wield disproportionate power and influence, differed from Balkin on imposing fiduciary obligations on them. They contend that such fiduciary duties are not practicable: corporations owe divided loyalties to consumers on the one hand and shareholders on the other, and these interests are bound to conflict.
The Indian approach to information fiduciaries
Within the Indian context, ‘data trustees’ and ‘fiduciaries’ find mention – albeit with differing conceptions – in the proposed Personal Data Protection Bill (PDPB) and the Report by the Committee of Experts on Non-Personal Data Governance Framework (NPDR). The PDPB uses ‘fiduciary’ to define the functional equivalent of the GDPR’s data controllers: legal or juristic persons that determine the purposes and means of processing data subjects’ personal data. The PDPB places various obligations on the data fiduciary, such as notice-and-consent requirements, purpose limitation, and fair and reasonable processing. While these obligations can be viewed as efforts to contour these interactions as fiduciary relationships, framing these entities as ‘data fiduciaries’ can be misleading. Firstly, not all data processors warrant the fiduciary classification, yet the PDPB’s scope extends to all entities (regardless of size or scope of activities) that determine the means and purpose of processing data. Secondly, the obligations and rights emanating from the PDPB are fundamentally based on notice-and-consent frameworks that do not impose a higher standard of care. Lastly, the draft legislation does not bind entities to act in the best interests of the data principal, an element that is essential to a fiduciary relationship.
The NPDR proposes ‘data trustees’ to act as intermediaries between data custodians – who undertake the collection, storage, and processing of NPD – and data requesters, in order to manage high-value datasets (HVD). A data trustee can either be a government organisation or a non-profit private organisation (Section 8 company / Society / Trust), and the government can function both as a data trustee and as a data custodian. In representing the rights of communities, data trustees, like data custodians, are bound only by a duty of care. In the absence of clear fiduciary responsibilities, whether data trustees can be made to represent the best interests of communities will depend on how the rights of communities and the liabilities of data trustees are defined. Given the different roles and expectations attached to a data custodian and a data trustee, the government’s ability to assume both roles may create a conflict of interest.
Data trusts as an alternative
However, problems arising from digitally mediated consent cannot be addressed solely through legislative interventions focused on data protection. Such regulations fall short of addressing the asymmetry of control and information. Similarly, attempting to impose fiduciary duties on data collectors that process information ignores the underlying issue in these relationships in our data economy: divided loyalties between consumers and shareholders. In recent times, academics like Delacroix, Lawrence, and McDonald have proposed data trusts as a stewardship model to overcome these challenges. Data stewards are intermediaries that facilitate decision-making for users on the use of their data, and these models may vary in the nature of the steward’s involvement and in how data is shared between individuals and communities. Data trusts push back against the asymmetries in the data economy enabled by the atomised nature of data rights, by devising an intermediary that can pool user data (or the rights over it) to enhance its value. The concept draws inspiration from the common law trust to recognise a fiduciary relationship between trustees and beneficiaries, underscored by the trustee’s duty of care, skill, and undivided loyalty towards the trust’s beneficiaries (data subjects). Data trusts can thus represent the interests of data producers, ensuring that their participation in the data economy is not reduced to merely providing consent.
For data trusts to be realised in practice, three underlying issues need to be addressed. First, in legal systems that do not recognise trusts, legislative interventions or functionally equivalent structures must be identified that can instantiate data trusts. Even within legal systems – like India’s – that recognise trusts, there are questions around the treatment of data as subject matter and the delegation of rights over data; this may require legislative and regulatory reforms to address these gaps and uncertainties. Second, designing a data trust will require careful consideration of its objects and governance structures, which must balance suitably incentivising the trustee with advancing the beneficiaries’ interests. The absence of top-down intervention from policymakers to recognise this could also lead to a homogeneity of data trusts that prioritise the monetisation of data over other purposes, and sustainable models must safeguard data trusts from regulatory capture by corporations. Third, and relatedly, the sustainability of data trusts as a model of stewardship is contingent upon scale and plurality. Operating at scale will enhance the negotiating capabilities of data trusts as intermediaries, and a plurality of data trusts will ensure healthy competition by giving data producers the freedom to move from one data trust to another.
While laws like the GDPR and the PDPB aim to protect personal data through notice-and-consent requirements, they fail to address the imbalance of knowledge and control between individuals and organisations, which can render those requirements ineffectual. Relying primarily on consent mechanisms risks overlooking the actual harms, both individual and collective. Given these limitations, policymakers need to create pathways that address the concerns posed by the atomised nature of data rights. Data trusts offer one such pathway, one that can complement data protection legislation and make consent on the internet more meaningful. Imposing fiduciary obligations on independent intermediaries – rather than on data requesters – not only addresses concerns around technology corporations’ conflicting interests but also provides a platform for individuals and communities to come together to advance a shared vision. At their heart, therefore, these structures need to be grounded in clear fiduciary obligations that ensure intermediaries prioritise users’ interests above all else.