Steve Quenette

Are privacy protections applied to technology platforms enough to enable AI?

Some recent work and articles have us thinking...


Are privacy protections applied to technology platforms enough to enable AI (and the growth of industries from data) without overly weakening the liberties of individuals? Where do we see strong data liberties leading to greater AI potential?

The linked article is interesting because it calls out shortfalls of relying on privacy alone. For example:


"It transfers the responsibilities for the harms of the data ecosystem to the individuals without giving them the information or tools to enact change."

That is, individuals are empowered to control who can use their data at the point of providing it. We influence how it is shared. However, we are not afforded the same opportunity when the data is used, and the potential for use is endless. Moreover, platform business models focus on driving more data input through personalisation and attention-grabbing features, providing yet more data for undetermined future use.


A great business model! Except for the degree to which harm is readily enabled and prevalent. Moreover, there may be better ways to attain the scale of data needed to draw value from AI.



The article's solution is to establish data cooperatives - entities that hold data on individuals' behalf and, as an extensive collection of users, can counter the weight of the platforms.


We're not suggesting this is commercially wise for an individual platform, or even an urgent societal priority. Rather, all types of organisations face this dynamic. We're asking:


"If one begins investing in a strategic AI future, are there other models worth considering?"

It is helpful to point out that data collectives are pervasive in the research sector. We've been through a decade or so of building such collectives, where initially we did not know how the data would be used, nor the ROI of the effort. Data collectives / repositories / registries are emerging as the primary prerequisite by which the research sector applies AI to itself. The resultant datasets are far larger than the hoarded dataset of any one researcher, one group or, sometimes, one discipline. Hence, the ability to coordinate large datasets is increasingly the rate-limiter to discoveries. The lubricant enabling trust and buy-in into large datasets is belief in the governance over the dataset/collective.
