Privacy by Design and Theories of Organizational Resistance Thereto

Garrett Groos
12 min read · Mar 10, 2021

Every day, most of the world interacts with code. In the information age, if you access something on your phone, drive your car, or watch your favorite movie or TV show, code enables you to do so. That code is usually written by a human, with all of the quirks and fallibilities that implies. Yet, by some estimates, only one-third of one percent of humans know how to code, and therefore know how to write the programs that streaming services, phones, cars, and websites need in order to work.[i] This is not to say that everyone in modern society needs to know how to code, or to build everything they use (that amount of knowledge could scarcely be acquired within a lifetime, and wouldn’t be very useful), but the resulting inequality forms the crux of a very large and very real problem for modern organizational change.

When you click a link on a website, or tap or swipe on your phone, a data point is generated and stored for later use. It can be kept by whoever wrote the code that enabled you to generate it, or it can be sold to someone who will use it for their own ends. By seeing what you’ve chosen in the past and predicting what you might like in the future, a company or service can build an increasingly accurate picture of who you are, so that it can push more of the products you might buy. These days, this process can happen nearly instantaneously. In some cases, the rapidly developing profile is extremely useful and helpful, for example when you search the internet for something you use often, or when you need to “friend” the coworker you just met. Sometimes, however, that same profile can be extremely harmful, for example when you buy insurance or file an insurance claim, or when you open a new line of credit at your bank.

Most of us generate thousands, if not millions, of data points per day. But few of us, less than a third of one percent, have the knowledge to understand how that data can be accessed, who can access it, by what means, or to what use it could be put. Further, no one is able to predict what aggregate effects our reliance on systems built this way may have on society.[ii] The argument for privacy by design (henceforth, “pbd”), at its largest scope, is essentially that this informational and educational inequality, the aforementioned “crux,” needs to be regulated. Pbd’s aim is to alter the designs of the devices and services we depend on for modern living, making them more secure against actors who can use the data we generate to manipulate our worlds without our knowledge, without our ability to know where we would have ended up absent the manipulation, and without our ability to stop the manipulations we don’t want, precisely because most people aren’t in the less-than-one-third-of-one-percent who have the knowledge to correct the structure of those devices and services at the source.

“less than a third of one percent [of people] have the knowledge to understand how. . .data can be accessed, who can access it, by what means, or to what use it could be put. . . No one is able to predict what aggregate effects our reliance on systems built this way may have on society.”

The alternative to pbd, in terms of a company doing its privacy due diligence, is compliance with current privacy laws — something this article will call “privacy by law.” There are a host of problems, ethical and otherwise, with merely complying with current US privacy laws, however; these will be discussed later in this article. Discussion of the merits and drawbacks of the two very divergent paths has only grown in intensity and earnestness with each new privacy debacle, and shows no signs of calming anytime soon. Indeed, the privacy issue has escalated to the status of one of the great questions to arise out of the information age and the turn toward a more all-encompassing information economy. Whichever way the question is resolved will undeniably alter what the future of technology — and society — looks like. The current level of privacy effort has resulted in breaches of some of the largest technology companies the world has ever seen, the exposure of over one billion private records to bad actors, and an identity theft occurring roughly every two seconds in the United States alone.[iii] This should not be acceptable to consumers. With stakes this high, and with the triple-bottom-line goal of social consciousness playing such a prominent role in today’s market, it’s a wonder that more companies haven’t made greater efforts to protect their customers by integrating more pbd functions.[iv]

The pbd ideology, although initially enshrined in very early and rudimentary form in 1970s-era legislation, is still catching on, gaining relevance, precision, form, and popularity with the help of entrepreneurs, (one hopes) concerned tech philanthropists, and new regulations in the EU, California, and other states. The questions of pbd’s virtues and its path to normalization, however, are still very much open at many companies today, and the discussion is no longer confined to what people commonly think of as “technology” companies. The focus of this article is to tease out pbd’s virtues as well as its faults, and to examine the reluctance of many companies to move toward a more pbd-forward structure in their products and service platforms. Additionally, although this article is framed as an empirical analysis, the data that would be most useful for testing my theory and hypothesis does not yet exist in complete enough form. My hope is that, by first examining what useful data for this analysis could look like, it will become easier to find or establish methods to create it.

“the privacy issue has escalated to the status of one of the great questions to arise out of the information age. . .Whichever way the question is resolved will undeniably alter what the future of technology — and society — looks like.”

It’s understandable that companies can become afflicted with decision paralysis over how to cover the privacy risk inherent in their offerings. Privacy by law stands on shifting sands. Even as companies have started to bring privacy risk mitigation in-house by appointing Chief Privacy Officers to oversee comprehensive privacy programs, the full breadth of the CPO’s message seems not yet to have permeated many companies’ cultures.[v] Privacy testing continues to be relegated to the end of the design process, and is still treated as synonymous with, or at best tangential to, penetration testing. A better theory of building in pbd frames privacy testing as an iterative process, with periodic review as different features are designed, to test for integration and overall synergy with other aspects of the product.[vi]

Pbd thus offers companies a critical chance to take privacy into their own hands and circumvent the ambiguity in the law’s conception of their designs and engineering standards.[vii] It may require philosophical or incentive changes at some levels of the organization to ingrain the idea into existing design processes, but company pivots to areas of untapped customer demand aren’t unheard of, especially among the technology companies traditionally thought to be the chief sources of consumer privacy harms.[viii] So why aren’t more companies utilizing pbd structure in their products or platforms?

I posit that companies will be more resistant to change in proportion to the power they have over consumers, as exemplified by the magnitude of changes in a product or platform’s usage rates following impactful privacy events like breaches, integrations with other products or platforms, or a decision to deploy pbd. This theory covers only companies and industries in which privacy is a relevant concern. That sample would nonetheless be quite large in today’s information-gathering ethos; many different types of companies utilize data gathering and analysis techniques, from grocery stores and materials suppliers to social networking sites and search engines.

A. Testable Hypothesis

I would begin to test my theory with the hypothesis that powerful companies whose products or platforms have very active users will be less likely to institute pbd changes than less powerful companies with less active users. Both power and activity matter because I hypothesize that the magnitude of this inverse relationship depends on the privacy elasticity of demand for the product: companies where negative privacy events are more likely to reduce a product or platform’s number of daily active users (henceforth, “DAUs”), i.e., those with a greater privacy elasticity of demand, will be more likely to make positive pbd changes, all else being equal.[ix]

Low privacy elasticity of demand is one piece of evidence that a company has a high degree of power over consumers. It could arise, for example, when a product or service faces few competitors, owing to attributes like novelty (as with a market first-mover or a genuine innovation), necessity of use, integration with other platforms, or market-share hegemony.
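To make this measure concrete, here is a minimal sketch, with invented numbers and a simplified definition of my own framing, of how privacy elasticity of demand might be computed from observed changes in usage around a privacy event:

```python
# Hypothetical sketch: "privacy elasticity of demand" as the percentage change
# in daily active users (DAUs) divided by the percentage change in a
# privacy-position index (e.g., an index that falls after a breach and rises
# after a pbd rollout). All numbers are invented for illustration.

def privacy_elasticity(daus_before, daus_after, privacy_before, privacy_after):
    pct_change_daus = (daus_after - daus_before) / daus_before
    pct_change_privacy = (privacy_after - privacy_before) / privacy_before
    return pct_change_daus / pct_change_privacy

# A product that loses 8% of its DAUs when its privacy index falls by 20%
# has an elasticity of 0.4 -- relatively inelastic demand, which under my
# theory is one sign of substantial power over consumers.
print(privacy_elasticity(10_000_000, 9_200_000, 50, 40))  # ~0.4
```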

B. Data and Methods

In order to test my hypothesis, I’d need data about companies in various industries and varying market environments. To test for products that carry sensitive data and have varyingly active user bases, I’d need a cross-section of company data from industries in which data breaches and transitions to pbd have been common — industries like technology, retail, healthcare, and hospitality. Additionally, to test for varying amounts of power over consumers, I’d need another cross-section of data on companies operating within market environments such as near-perfect competition, oligopolistic competition, and monopolistic competition.

Furthermore, wherever the two cross-sections converge, as may be the case with a monopolistic, highly used, highly integrated product like Google’s G Suite, I’d need to split the data in two: one data set describing the company in its U.S. conception, and another describing the company’s E.U. counterpart operating under the General Data Protection Regulation (or “GDPR”). The reason for this is threefold. First, monopolistic companies, by definition, have no real competitors; thus, instead of analyzing other, much smaller companies in the same market that don’t represent real competition, I could more easily see the effects of integrating pbd functions by comparing the monopolist’s product metrics under two different regimes. Second, the GDPR provides an excellent pseudo-counterfactual scenario because it mandates a kind of pbd in Chapter 4, Article 25, which states:

[A] controller shall implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons.[x]

Third, while this article endeavors to generalize its conclusions to companies that utilize sensitive data across the board, the most notable privacy concerns of the day lie with large technology companies that are monopolists with high-use products and incredible amounts of power relative to their users, such as Facebook, Google, Microsoft, Netflix, Amazon, and Apple.

The variables I’d use to test my hypotheses would be: (1) pbd_likelihood, (2) num_daus, (3) percent_daus, (4) market_share, (5) privacy_elasticity_of_demand, (6) company_power, and (7) org_structure.

Thus, utilizing an OLS regression analysis, the model would take the following form:

pbd_likelihood = ß0 + ß1(num_daus) + ß2(percent_daus) + ß3(market_share) + ß4(privacy_elasticity_of_demand) + ß5(company_power) + ß6(org_structure) + ε
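As a minimal sketch of how this estimation could be run, assuming the data described above were collected into a flat file with one row per company (the file and column names here are hypothetical placeholders), the model could be fit with pandas and statsmodels:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical cross-sectional data set: one row per company (or per US/EU
# counterpart of a company), with the variables discussed below as columns.
df = pd.read_csv("pbd_companies.csv")  # placeholder file name

model = smf.ols(
    "pbd_likelihood ~ num_daus + percent_daus + market_share"
    " + privacy_elasticity_of_demand + company_power + org_structure",
    data=df,
)
results = model.fit()
print(results.summary())  # coefficient estimates, standard errors, R-squared
```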

Discussion of each variable follows, below.

The variable pbd_likelihood. The model’s sole dependent variable, pbd_likelihood represents the degree to which a company (or its European counterpart) would incorporate pbd features into its product offerings. This variable is a continuous ratio measurement with scores from 0 to 100; a score of 0 would mean that the company is completely resistant to integrating pbd features in its products, whereas a score of 100 would mean that the company would employ as many pbd features as possible in its products, given the impetus to do so. A feature might be considered a pbd feature if it embodies the seven foundational principles of pbd framed by Ann Cavoukian, former information and privacy commissioner of Ontario, Canada.[xi] The threshold of what is considered a pbd feature may initially be lowered to be sure that enough data can be gathered at this point in the trend’s lifecycle. As time goes on, hopefully, the trend will continue to grow so that researchers can be more selective about the data they gather.
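One hypothetical way to operationalize the 0-to-100 score, offered only as a sketch, would be to grade each product feature by how many of Cavoukian’s seven principles it embodies and rescale the average:

```python
# Hypothetical scoring rubric for pbd_likelihood: grade each product feature by
# how many of Cavoukian's seven foundational principles it embodies (0-7),
# then rescale the average to a 0-100 score for the product as a whole.

def pbd_score(principles_met_per_feature):
    if not principles_met_per_feature:
        return 0.0
    return 100 * sum(principles_met_per_feature) / (7 * len(principles_met_per_feature))

# Example: a product with three features embodying 7, 4, and 2 principles.
print(round(pbd_score([7, 4, 2]), 1))  # 61.9
```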

The variables num_daus and percent_daus. These variables, both ratio measurements, represent the number of daily active users of a product and the percentage of the company’s user base they constitute, respectively. Since this article’s testable hypothesis concerns whether the installation of pbd features affects a company’s daily active users, these variables are necessary to determine the relationship between the two.

Unless I used time-series data here, this couldn’t explain the shape of the relationship over time — that is, whether a company’s product offering would suffer a large hit to DAUs initially and recover later, or whether DAUs would be lost forever. All I can say as a result of this variable is whether a company with a higher pbd_likelihood would tend to have more DAUs or not. Because an absolute number of daily active users can mean different things depending on the size of a product’s user base, I include percent_daus to control for that effect.

The variables market_share, privacy_elasticity_of_demand, and company_power. The ratio measurement market_share represents a company’s market share, while the interval measurement privacy_elasticity_of_demand represents the degree to which users would continue to demand the product or platform given various privacy positions. A company’s position would become more negative with fallout from breaches or with “privacy lurches,” wherein a change in company policy increases privacy risk for users of the product or platform.[xii] The position would become more positive with the institution of pbd features or other privacy-positive changes to the company’s privacy program. Company_power, another interval measure, embodies an interaction of privacy_elasticity_of_demand and market_share, since either of those factors, or the combination of the two, might have more impact upon a company’s embrace of largely external directives such as pbd.
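Because company_power is framed as an interaction of the other two measures, one option, sketched here against the same hypothetical data file as above, is to let the regression formula expand the interaction directly rather than pre-computing a separate column:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pbd_companies.csv")  # same hypothetical file as above

# In patsy/statsmodels formula syntax, "a * b" expands to a + b + a:b, i.e.,
# both main effects plus their interaction, so the company_power interaction
# need not be computed by hand.
model = smf.ols(
    "pbd_likelihood ~ num_daus + percent_daus + org_structure"
    " + market_share * privacy_elasticity_of_demand",
    data=df,
)
print(model.fit().summary())
```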

These measures assume that companies whose power derives from novelty, in particular, will have large market shares because they operate in a blue-ocean market, one lacking in competitors, often because the category of products or services is new.[xiii] However, this may not always be the case, particularly in industries like technology, where small competitors or disruptors can quickly gain popularity even in what would seem to be crowded markets.

The variable org_structure. The model’s variables, to this point, assume that more activity and more company power equal more organizational resistance to change. There may, however, be another impactful relationship underlying that one. Companies like Facebook claim to utilize a “flatter” organizational structure, which embodies the philosophy that those with more responsibility (like managers, as opposed to the employees under their management) don’t necessarily, or at least don’t unilaterally, control the actions of those with less responsibility.[xiv] This may or may not increase the impact or dynamism of organizational messages, so I include this variable to control for it. There are five main types of organizational structures, each with its own score, ranging from one to five, respectively: (1) a traditional hierarchical structure, (2) a flatter structure, (3) a flat structure, (4) a flatarchy structure, and (5) a holacratic structure.[xv]
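A small sketch of how the five structure types might be coded follows; the labels track Morgan’s taxonomy, the one-to-five scale is the one proposed above, and the categorical alternative noted in the comments is simply a modeling option:

```python
# Hypothetical coding of org_structure on the one-to-five scale proposed above.
ORG_STRUCTURE_CODES = {
    "hierarchical": 1,
    "flatter": 2,
    "flat": 3,
    "flatarchy": 4,
    "holacratic": 5,
}

# If one preferred to treat structure type as a set of categories rather than
# a numeric scale, the patsy formula syntax used earlier allows wrapping the
# variable as "C(org_structure)", which expands it into dummy indicators.
```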

[i] Vox (@voxdotcom), Twitter (Oct. 24, 2019), https://twitter.com/voxdotcom/status/1187432316202819584?lang=en.

[ii] Julie E. Cohen, What Privacy Is For, 126 Harv. L. Rev. 1904, 1904–08 (2013).

[iii] 30 Eye-Watering Identity Management Statistics, SelfKey Blog (Jan. 8, 2019), https://selfkey.org/30-eye-watering-identity-management-statistics/.

[iv] Triple Bottom Line (TBL), Investopedia (May 3, 2019), https://www.investopedia.com/terms/t/triple-bottom-line.asp.

[v] Kenneth A. Bamberger & Deirdre K. Mulligan, Privacy on the Books and on the Ground, 63 Stan. L. Rev. 247, 249–279 (2011); Ari Ezra Waldman, Designing Without Privacy, 55 Hous. L. Rev. 659, 659–705 (2018).

[vi] Waldman, supra note v.

[vii] Birnhack et al., Privacy Mindset, Technological Mindset, 55 Jurimetrics 55, 114 (2014); Deirdre K. Mulligan & Jennifer King, Privacy Jurisprudence as an Instrument of Social Change: Bridging the Gap Between Privacy and Design, 14 U. Pa. J. Const. L. 989, 997 (2012).

[viii] Waldman, supra note v.

[ix] Privacy elasticity of demand would measure the change in demand relative to privacy increases or decreases.

[x] E.U. General Data Protection Regulation, ch. 4, art. 25.

[xi] Ann Cavoukian, Privacy by Design: The 7 Foundational Principles, https://iapp.org/media/pdf/resource_center/Privacy%20by%20Design%20-%207%20Foundational%20Principles.pdf (last visited Dec. 16, 2019).

[xii] Paul Ohm, Branding Privacy, 97 Minn. L. Rev. 907, 908–913 (2013).

[xiii] Steve Denning, Moving to the Blue Ocean Strategy: A Five-Step Process to Make the Shift, Forbes (Sept. 24, 2017), https://www.forbes.com/sites/stevedenning/2017/09/24/moving-to-blue-ocean-strategy-a-five-step-process-to-make-the-shift/#24d43f707f11.

[xiv] Jacob Morgan, The 5 Types of Organizational Structures: Part 2, ‘Flatter’ Organizational Structures, Forbes (Jul. 8, 2015), https://www.forbes.com/sites/jacobmorgan/2015/07/08/the-5-types-of-organizational-structures-part-2-flatter-organizations/#6d832c206dac.

[xv] Id.


Garrett Groos

Technology-proficient Juris Doctor / MBA. Loves music, comedy, and technology, especially of the artificial intelligence variety.