[RRE] Market Power and Technologies of Privacy
---
This message was forwarded through the Red Rock Eater News Service (RRE). Send any replies to the original author, listed in the From: field below. You are welcome to send the message along to others but please do not use the "redirect" command. For information on RRE, including instructions for (un)subscribing, see http://dlis.gseis.ucla.edu/people/pagre/rre.html or send a message to requests@lists.gseis.ucla.edu with Subject: info rre
---
Identity and Choice: The Implications of Market Power for the Technologies of Privacy
Phil Agre http://dlis.gseis.ucla.edu/pagre/
April 1999
Paper presented at CFP'99.
Draft -- references to follow.
You are welcome to forward this article electronically to anyone for any noncommercial purpose.
2200 words.
The success of the Internet has provided a simple and ubiquitous platform for the construction of a vast range of technologically mediated relationships. Interactions between individuals and organizations that had formerly been constrained by the practicalities of paper mail, office visits, and the telephone can now be structured in new ways through electronic mail, the World Wide Web, and an emerging generation of information appliances. Internet-based media afford considerable opportunities, but they also pose considerable risks. Perhaps the most significant of these risks is the threat that inappropriate application of new digital media can pose to personal privacy. As technology makes it possible for organizations to capture ever greater amounts of information about the individuals with whom they transact business, an awareness has grown of the need for clear rules, and in particular of the ways in which the technologies themselves both articulate and enforce such rules. It follows that the way that personal information ends up getting handled will be determined through the interaction of three seemingly quite different realms: technology, markets, and law. How do these interactions work, and how can we draw on our understanding of them to design the optimal combination of technological architectures, market mechanisms, and legal rules to protect personal privacy?
Two stories are circulating in response to this question. The first story imagines technologies of choice. On this story, organizations will use the new digital media to offer individuals a menu of privacy options that they can take or leave, individuals will decide for themselves which options they like best, and competition will ensure that the options offered bear a rational relationship to the benefits to be obtained by choosing them.
The second story imagines technologies of identity. On this story, the focus of concern is personal identification. Information that is not personally identifiable is vastly less sensitive than information that is, and technologies both old and new provide a wide and complex range of architectural options from the traditional methods of complete identifiability to the opposite extreme of complete anonymity. Designers will choose the option that provides the parties to a transaction with all of the functionality and guarantees they need while minimizing the degree of identifiability of any information that pertains to individuals.
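One intermediate point on that architectural spectrum, pseudonymity, can be sketched concretely. The example below is not any particular system described in this paper; it is a minimal illustration, assuming a hypothetical keyed-hash scheme in which a trusted party holds the key. Records remain linkable to one another (a transaction history can still be assembled), but not to the person, for anyone who lacks the key. All names and keys are illustrative.

```python
import hmac
import hashlib

def pseudonymize(customer_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same customer always maps to the same pseudonym, so the
    organization keeps the functionality it needs, but without the
    key the pseudonym cannot be traced back to the individual.
    """
    digest = hmac.new(secret_key, customer_id.encode("utf-8"),
                      hashlib.sha256).hexdigest()
    return digest[:16]  # a short, stable pseudonym

# Hypothetical key, held only by a trusted party.
key = b"held-by-a-trusted-party-only"

p1 = pseudonymize("alice@example.com", key)
p2 = pseudonymize("alice@example.com", key)
p3 = pseudonymize("bob@example.com", key)

assert p1 == p2  # stable: repeat transactions remain linkable
assert p1 != p3  # distinct customers remain distinct
```

The design choice worth noticing is that identifiability here is a parameter of the architecture: moving the key (to the consumer, to a third party, or destroying it) slides the same system along the spectrum from full identifiability toward anonymity.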
It will be noted that the technologies invoked in these two stories are compatible with one another, even if the design strategies that the stories prescribe are not. The gap arises because underlying these stories are quite different pictures of the place of personal information in the market. Let us, therefore, consider the two stories separately, beginning with the technologies of choice.
The argument for technologies of choice rests on a certain argument from neoclassical economics. In a perfect neoclassical market, rights to the use of personal information will be commodities like any others, freely bought and sold, either singly or in bundles with other commodities. Businesses will buy these information rights from consumers if they can profit by doing so, for example by providing the consumers with customized products, and consumers will sell these rights if the benefits they obtain by doing so exceed the costs. This picture is intuitively compelling, but everybody agrees that it does not conform to reality. First of all, perfect neoclassical markets presuppose that businesses and consumers already have all of the information about one another that they need, thus making the whole issue moot. And secondly, the transaction costs of this trade in information rights are numerous and considerable. Consumers must expend the effort to comprehend their options, and predict what consequences each option would have. This prediction is especially difficult because the consequences arise through the workings of numerous institutions that are likely to be distant from the consumer's experience. Organizations must expend the effort to explain the options and capture the consumers' choices. And then individuals must monitor whether organizations stick to their promises, and take legal or other action to enforce their agreements if not.
Proponents of technologies of choice do not claim to eliminate these transaction costs, but they do claim to reduce them to such an extent that individualized negotiation can replace uniform rules. Let us accept for the sake of argument that the economic analysis behind this claim is accurate and adequate, so that moral and political considerations, for example, do not greatly change the overall picture. When is the claim true?
To evaluate the choice proponents' claim, let us begin by observing -- as they have themselves -- the conceptual convergence between their own proposals and the European Union's Data Protection Directive. This convergence would seem odd, given that the EU Directive is often held up as the opposite of the market-driven approach. In particular, it would seem odd because the Directive, having originated with a concern over databases maintained by a centralized welfare state, is explicitly framed in terms of a set of political rights: rights of notification, correction, and so on. But the Directive can also be interpreted in the market context as a conventional regulatory strategy to correct a market failure, providing consumers with necessary information about market offerings when the market does not produce enough such information by itself. In this light, technologies of choice might be viewed as providing the technological conditions for market institutions to self-correct, so that regulatory intervention is no longer needed. Is this view accurate? One item of evidence against it is the fact that industry is developing and adopting these technologies under insistent threats of regulatory intervention.
Whatever the case, the underlying continuity between the regulatory and technical approaches suggests a spectrum of ways in which the two approaches might be complementary. The technologies might be viewed as a means of implementing the regulations, or regulation may be necessary to compel stragglers to adopt the technology. Whatever the right answer, the great virtue of this whole family of approaches to privacy protection is conceptual uniformity: whatever its problems, the EU Directive (unlike current US policy) applies a common conceptual framework to the full range of privacy issues. This allows citizens to economize on intellectual effort: having comprehended the system in one domain, they can apply their understanding in other domains. It also facilitates institutional learning: experience implementing the policy in one domain is likely to be transferable to other domains. The adoption of technologies of choice may have the same virtues, inasmuch as they propose a standardized architectural platform that is equally applicable in a wide range of domains.
That said, the hopeful story about reducing transaction costs in the exchange of information rights threatens to mask a deeper assumption. Personal information is typically exchanged in the context of a transaction concerning some underlying product or service, and the question remains whether the market in that product or service works correctly. A vendor with market power can extract money rents, of course, but also informational rents. To the extent that technologies of choice make it easier to capture personal information, in the context of market power they threaten to lower the barriers to the capture of information rents. Because this is quite the opposite of their stated purpose, the question of market power should loom large in any evaluation of the choice proponents' claims.
What is market power? Although the term "power" suggests a political concept, market power is an economic phenomenon that is found to the extent that a single buyer or seller in a market has the unilateral capacity to affect the prices or terms under which exchanges in that market take place. Obvious examples include monopoly and monopsony, but market power is likely to be present in any oligopoly whose members focus their attention on somewhat different, if overlapping, parts of the market. Industry concentration should at least raise the question as to whether the largest firms have power to affect the market. Although industry structure can be affected by many variables, a particularly salient variable in the present context is information technology. Although the Internet has raised hopes for a return to Adam Smith's vision of large numbers of small businesses meeting through an impersonal price system, in fact the Internet very plausibly contributes to industry consolidation by amplifying the vast economies of scale that are inherent in information work. The Internet is surely not the sole cause of the current spectacular wave of mergers, but economic theory would urge attention to the hypothesis.
When and where these concerns are valid, and to the extent that they are valid, technologies of choice risk becoming the opposite of what their proponents claim for them: not levellers of the playing field but incliners of it. This is a serious matter. What is more, to the extent that the proponents of technologies of choice are themselves part of or allied with firms in the computer industry whose control of de facto standards provides them with the leverage to shape the development of further standards, existing market power threatens to amplify market power in a systematic way throughout the economy, and to do so while flying the flag of market efficiency and voluntary local choice.
These considerations throw light on a longstanding concern about the EU's Data Protection Directive, that while placing a robust conceptual grid around the practices of capturing and using personal information, it is essentially powerless to stop the proliferation of new technologies and practices of data collection. This concern brings to the surface a conflict of assumptions at the heart of privacy policy. To the extent that one regards present-day democratic political systems as functioning correctly, privacy policies to restrain new technologies of data collection should not be necessary, since the political system will accurately express the collective will in relation to each next technology that is proposed. Likewise, to the extent that one regards the market as functioning according to the idealizations of neoclassical economics, the market in rights to personal information should operate to determine which technologies of data collection provide a net social benefit and which do not. In each case, the conceptual framework of both the EU Directive and the technologies of choice should suffice. But if those things are not true, then something more or different may be required.
These considerations should encourage us to return our attention to the other main category of technologies of privacy protection, namely technologies of identity. Whereas technologies of choice presuppose and facilitate an idealized market whose participants have no power to influence the terms of exchange beyond the allocations determined by market efficiency, technologies of identity make no such presumption. Where technologies of choice create a commodity -- personal information -- and enable bargaining over it, technologies of identity prevent that commodity from existing in the first place. As such they promise to deliver many of the benefits of new digital media without lending themselves to the extraction of informational rents. Of course, it is conceivable that technologies of identity would be created in a perfectly functioning market, simply as a way of implementing the assignment of information rights that is dictated by a particular (and enduring) market equilibrium. But the special interest of technologies of identity lies in their potential for solving some of the problems posed by an imperfect market. The most obvious means of adopting such technologies is through the democratic political process, as the parties who suffer from the distortions of market power organize to redress those distortions and restore something more closely resembling a fair bargaining situation. But technologies of identity might also be adopted voluntarily, for example to economize on the potentially considerable costs of monitoring an organization's contractual commitments not to abuse personal information. If this scenario seems far-fetched, consider the situation of a business that contemplates a shift from paper-and-cash-based transactions to computerized ones; consumer resistance to computerized information capture, on the assumption that the captured information will be abused, may cause the business to refuse or postpone the shift.
The foregoing analysis, then, offers potential criteria for determining which technologies of privacy are appropriate or inappropriate in a given situation. Because market power is rarely an all-or-nothing matter, of course, most real situations will be found somewhere in the middle of the spectrum I have sketched. Further progress thus depends on an analysis of this spectrum, and indeed of several spectra, not least that between identification and anonymity, and between the technologies of identity and choice. It will be important to understand the full range of ways in which real markets -- both the market in information rights and the markets in the various underlying commodities -- do and do not correspond to the neoclassical ideal, and what consequences these market structures hold for the gains to informational trade and the depredations of informational rents.
It is common to frame these matters in terms of a metaphorical contrast between "top-down" and "bottom-up" approaches to protecting privacy. I think that this is exactly right. We should resist the top-down regime by which an oligopolist extracts informational rents in an unfair bargaining situation, or by which a software monopolist imposes potentially unfair rules on large sections of the market. Instead, we should strengthen the bottom-up practices of democracy whereby societal values about information exchange and collective resistance to dysfunctional market power are given expression in rules that create a level playing field for all.