Communities and Institutions: The Internet and the Structuring of Human Relationships
Philip E. Agre Department of Communication University of California, San Diego La Jolla, California 92093-0503 USA
http://communication.ucsd.edu/pagre/
Copyright 1998 by the author.
This is a draft. Please do not circulate or quote without permission. Comments welcome. References and footnotes to follow.
1/ Introduction
One day last fall I received a phone call from a prosecutor in the Orange County District Attorney's office. She was trying the case of a troubled young man who had sent an e-mail message filled with vile ethnic slurs to sixty Asian-American students at UC Irvine, blaming them for everything that was wrong with the university and swearing to hunt them down and kill them. As the trial proceeded, she had been taken aback when the defense produced an expert witness, a well-known academic expert on the social psychology of e-mail, who had testified that the young man's message was, in her words, "a classic flame" of a sort that is common on the Internet, and that no reasonable person should have felt threatened by it. The prosecutor, though experienced in the matter of death threats, had never encountered this notion before, and, bewildered, was calling around to find an expert who could testify to the contrary the next morning at 9AM. I was booked, so she found someone else. Yet even then the case resulted in a hung jury, and the prosecutor only obtained a conviction at the retrial by introducing a wider range of evidence of the young man's malicious intent beyond what, to her, was the plain meaning of his e-mailed words.
I want to use this story as a point of departure for a brief analysis of the changing ways that we understand the Internet and its place in society. What was news to the prosecutor is not news to most of us in academia -- the sense, inchoate perhaps but widespread nonetheless, that things that happen on the Internet are not quite real, or real in a different way, on a different plane, in a place apart called cyberspace that operates on different principles than the corporeal world. I find this idea terribly limiting, both as an empirical matter and as a guide to useful action, and I want to examine it and propose some ways of getting beyond it.
My substantive thesis is that the Internet is only now, and only slowly, becoming integrated with the institutional world around it, and that the very notion of cyberspace as a single, separate zone of reality is an artifact of this transient situation. On another level, my epistemological thesis is that we only understand any technology, and a fortiori the Internet, once technology is 5% of our story. It does no good to frame an investigation by asking "what effect will the Internet have on this?" or "what effect will the Internet have on that?". Rather, we must start from a serious analysis of the institutional world from which the Internet arose, and of the many and various institutional worlds with which the Internet is now coevolving, and make sense of the technology in that dynamic context. It's a big job, to be sure, but I think we can see its outlines beginning to emerge in the literature.
2/ Cyberspace as American culture
It is clear enough, I hope, that cyberspace is a utopian idea that stands in the main line of a long millennialist tradition. The vendors of this idea, after all, promise that it will level hierarchies, decentralize power, and bring peace and prosperity to the world. Technologists, on this account, are the vanguard of an inevitable revolution. Much has been written about this kind of technological determinism and the secularized religion that has so often shaped the imagination of the engineers -- see, for example, David Noble's new book on religious themes in the history of technology.
For our purposes here, however, I want to focus on a particularly American aspect of cyberspace -- its understanding of community. Of course, the word community has been used in many ways, but I have in mind the utopian conception of community found in the reformed-Protestant communitarianism of the colonial period, which continues to influence many varieties of American political culture to this day.
Beginning at the beginning, many observers remarked way back at the dawn of the cyberspace era, around 1993 and 1994, on its recurring use of colonialist tropes -- electronic frontiers, the necessity of civilizing cyberspace, and so on. And indeed, the intellectual construction of cyberspace has proceeded along lines very similar to what the historian Jack Greene called "the intellectual construction of America". "Once [Columbus'] discoveries had been identified as a new world, ... no aspect of that world probably operated more powerfully upon the European imagination than did its immense space". Quoting the Dutch historian Henri Baudet, he said that America had become a place "onto which all identification and interpretation, all dissatisfaction and desire, all nostalgia and idealism seeking expression could be projected". Indeed, he cites evidence that Thomas More had America in mind when, shortly thereafter, he initiated the European utopian tradition. Nothing becomes so dated as yesterday's tomorrow, however, and Greene points out that "[the] early utopias, like European perceptions of Amerindians, were all heavily shaped by older European intellectual traditions. Almost without exception", he says, "they looked backward to Europe's 'own ideal past' rather than forward into some wholly novel world of the future." And indeed, he points out, "virtually every one of the new English colonies established in America ... represented an effort to create in some part of the infinitely pliable world of America ... some specific old world vision for the recovery of an ideal past in a new and carefully constructed society".
Such are the origins of the American ideal of community. In Barry Shain's account, this ideal had two crucially counterposed moments. One was the freedom of conscience that figures so heavily in what he calls "the myth of American individualism". The other, disproving the myth, was the reformed-Protestant communitarianism that produced in each separate community an almost totalitarian order of invasive social control. Freedom of conscience, he argues, was not the freedom to rebel against this order, but rather the freedom to leave one such community and settle in another. If this picture of society is not overtly present in the writings of the tiny elite who produced the Constitution, that only illustrates what is for Shain, a political conservative, a defining tension of American history, a tension between a localist communitarianism and a nationalist individualism. Whatever its utility as historiographical narrative, Shain's account illuminates many contemporary conflicts, from the internal politics of the Republican party to the emerging war over Internet content filtering in community institutions such as schools and libraries.
3/ The fall of cyberspace
Even if the conjectured causality is difficult to prove, then, the analogy is clear enough. The intellectual construction of both America and cyberspace has proceeded along similar lines: utopian visions projected onto a putatively blank space in the form of consciously designed communities. Of course, in each case the original and more rigorous utopias fell apart, giving way to a proportionally greater focus on money-making. Yet in each case too, the original conception of discrete, self-regulating, homogeneous communities of intimates continued to shape both thought and practice in profound ways. In the modern world of academic research on cyberspace, we see this in the focus upon assumed identities and the peculiar practice of studying online communities -- MUDs and the like -- without finding out who their participants are.
Yet this is changing. Research on organizational computing, which has not generally used the language of cyberspace, provides at least one model for work that investigates online relationships in terms of their embedding in some larger context. And as the technologies of cyberspace migrate into organizational practice, they are made to shed their aura of irreality -- numerous business people, for instance, have explained with a straight face that MUD stands for Multi User Domain, or Dimension, or something like that.
A significant example of the internal tensions of cyberspace theories can be found in the influential work on cyberlaw by David Johnson and David Post. For them, cyberspace is a place whose boundary can be found in "the screens and passwords that separate the virtual world from the 'real world' of atoms", and it deserves to be understood as a jurisdiction unto itself, distinct from existing geographical jurisdictions.
Although framed in the language of contemporary libertarianism, Johnson and Post's theory tracks the historians' America in some detail. Far from conceiving cyberspace as a single unified entity, they envision it as a population of distinct "spaces" or "territories" -- that is, self-governing communities. And central to their theory of government is the idea that individuals can move freely from one community to the next, thereby creating a sort of market in government. As they point out, however, this market will only function correctly if it lacks externalities -- that is, if the rules chosen by one community have no consequences for other communities. This requires strict border controls that regulate the movement of information across borders. The problem, of course, is that much information will cross borders in people's minds. If the borders of cyberspace are located in screens and passwords, then the residents of cyberspace in fact perpetually straddle at least two jurisdictions -- their online community and the state where they are sitting. Someone who defames another in an online community with weak laws of defamation also slanders that same person's reputation in other jurisdictions as well.
Johnson and Post's border problems get worse, moreover, as we contemplate the ongoing integration of the Internet into the world around it. Why should a corporate intranet, for example, be reckoned part of cyberspace? And where should we locate the borders of cyberspace when TCP/IP begins flowing in the veins of cars or appliances or other kinds of embedded systems? The borders between cyberspace and real life are less obvious than they seem, and they are getting less distinct every day.
4/ Individuals and institutions
These are some of the reasons why I see a transition in the works, from an intellectual analysis of the Internet based on utopian concepts such as cyberspace community to an analysis based on the technology's place in the larger institutional world. By institutions here I do not mean organizations but rather the more fundamental set of arrangements from which organizations and other typified human relationships are built. Institutional phenomena are understood using various metaphors -- connective tissue, raw material, rules of the game, nervous system, grammar. As the economic historian Paul David has pointed out, an institution such as the corporation or the family can remain stable in its workings across centuries, even as particular corporations and families come and go. The concept of institution entered social analysis of computing partly in an attempt to explain the famous productivity paradox -- the longstanding difficulty of demonstrating clear-cut productivity improvements from industry's vast investments in information technology, beyond very specific applications such as banking in which an existing variety of informational work could be automated in place without much affecting its surroundings. One explanation for this mystery, John King has suggested, is that information technology can only have a profound impact when institutions change. Institutions, however, are built deeply into laws, customs, language, installed bases of technology, and much else, and so they change only very slowly. The story of information technology, on this account, is one of history's great episodes of an irresistible force meeting an immovable object, and this tension may be responsible for a diffuse sense that the information revolution has its feet planted on both the accelerator and the brake.
Several distinct literatures have described the historical evolution of institutions. One economic tradition, descended from Thorstein Veblen and John Commons, portrays the common-law underpinnings of the market, for example, as the outcome of successive episodes of collective bargaining among social groups. Another, more recent tradition, descending from the work of scholars like Ronald Coase and Douglass North, views economic institutions as successive approximations to the idealizations of neoclassical economics. Pitting themselves against economic explanations and particularly the individualism of rational choice theories, sociologists such as Paul DiMaggio have described the diverse mechanisms that bring about a remarkably high degree of isomorphism among the organizations in a given institutional field. And numerous political scientists have provided descriptive, and increasingly prescriptive, accounts of the design of political institutions -- accounts that are being put into practice in several emerging democracies. What these literatures provide are powerful means of understanding the interactions among economics, law, organizational form, and social structure through which institutions evolve over long periods. What they lack, thus far, is a sophisticated account of the interaction between institutional structures and information infrastructure.
Disputes over technology, after all, are frequently disputes about the workings of institutions. Electronic commerce protocols, for example, seek to reshape important market institutions, and as Robin Mansell has pointed out, the workings of those protocols are the object of a series of generally low-profile contests over the biases that will be built into the market's new electronic playing field. Information technology has also given new life to proposals about political institutions such as plebiscitary democracy; such proposals, however, founder on technical difficulties such as authentication, and on analytical difficulties such as their failure to distinguish between the distribution of political information, which the technology does facilitate, and the analysis and synthesis of that information, which remain very much the province of intermediaries such as interest groups and legislative staffs. And controversies about Web content filtering are also controversies about the family, as children gain new abilities to establish social connections, for good or ill, beyond their parents' control.
5/ Relationships and boundaries
I want to focus particularly on the role of institutions in defining the individual. To see this connection, think of Austin's theory of speech acts, according to which the import of a human utterance depends on its institutional context. A promise or a wedding vow has both prerequisites and consequences that are defined in institutional terms -- that is, they only make sense, they only are what they are, against a particular institutional background.
This perspective on institutions is particularly useful in understanding the role of information technology in establishing or undermining the conditions of personal privacy. Far beyond the state of seclusion or the right to be left alone, privacy is more usefully understood as integral to the very construction of a moral individual -- a person. People become who they are largely through relationships with others, and information technology increasingly establishes the ground rules under which human relationships are negotiated. Yet, as the daily newspaper makes clear, the negotiation of human relationships over the Internet is in a state of crisis. Much of the problem can be understood in the traditional terms of privacy policy analysis, for example the secondary use of personal information that might be captured by an online service. But more fundamentally, I think, the Internet is currently failing its users by providing them with inadequate technical means of constructing boundaries for themselves.
Consider, for example, the problem of computer viruses. In a provocative paper, the sociologist of science Steve Woolgar observes that tales of computer viruses bear all the marks of an urban myth. Urban myths almost always concern disasters that follow from accidental and surprising breaches of personal boundaries, and so it is with computer viruses. Putting aside the ontological problems posed by Woolgar's reflexive theory of social reality, the origins of the problem lie with personal computer operating systems. The people who invented personal computers thought of them as personal, and their tacit model was that each person used his or her own computer quite in isolation from others. As a result of this model, and the minute memory and mass storage capacities of the early PC's, the PC operating systems did not incorporate the well-understood security mechanisms of time-sharing operating systems such as Multics. In fact, of course, real persons use their computers in a much more social way than the designers imagined, and yet they enjoy little technological protection against the hazards of sharing software.
A similar analysis applies to privacy and security problems on the Web. The client-server model of computing originated in environments in which the institutional relationship between client and server was fixed and well-understood by all; as a result, it made sense to render the boundary between client and server invisible to the user. Such is not the case, however, on the Web, where the client-server model has been generalized into a platform for an arbitrary variety of relationships. Yet because the boundary between client and server is invisible, users have little power to comprehend, much less control, the flow of potentially sensitive information across it.
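The invisibility of that boundary can be made concrete. The short Python sketch below assembles the raw text of an ordinary Web request; every header value here is invented for illustration, but fields of these kinds routinely cross from client to server with no visible sign to the user.

```python
# Sketch: what silently crosses the client-server boundary in one page view.
# All header values below are invented for illustration.

def build_request(host: str, path: str) -> str:
    """Assemble the raw text of a typical browser request."""
    headers = {
        "Host": host,
        "User-Agent": "ExampleBrowser/1.0",  # identifies the user's software
        "Referer": "http://example.edu/private/reading-list.html",  # the page the user just left
        "Cookie": "session=a1b2c3",  # links this visit to earlier visits
    }
    lines = [f"GET {path} HTTP/1.0"]
    lines += [f"{name}: {value}" for name, value in headers.items()]
    return "\r\n".join(lines) + "\r\n\r\n"

request = build_request("www.example.com", "/index.html")
# Each of these fields is sent automatically; none appears on the screen.
for line in request.splitlines():
    if line:
        print(line)
```

The point of the sketch is not the protocol details but the asymmetry: the server sees all of these fields, while the user interface shows none of them.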
Another example is unsolicited bulk e-mail, or spam. The influx of paper junk mail, annoying though it is, is naturally regulated by the cost of sending it; only those vendors who can expect a fairly substantial profit from each response will be economically motivated to conduct a mailing. On the net, however, this useful limitation of the physical world is no longer present. As a result, the metaphor of e-mail is starting to break down, and computer scientists are investigating mechanisms for establishing boundaries around one's electronic mailbox.
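The economic asymmetry can be put in rough numbers. In the sketch below (all figures are invented for illustration), a paper mailing must convert a far larger fraction of its recipients than an electronic one to break even, which is why the physical world's natural limit disappears on the net.

```python
# Break-even response rate for a mailing: the sender profits only if
# responses * profit_per_response exceeds pieces * cost_per_piece.
# All figures below are invented for illustration.

def break_even_rate(cost_per_piece: float, profit_per_response: float) -> float:
    """Minimum fraction of recipients who must respond for the mailing to pay."""
    return cost_per_piece / profit_per_response

paper = break_even_rate(cost_per_piece=0.50, profit_per_response=20.0)   # postage, printing
email = break_even_rate(cost_per_piece=0.0001, profit_per_response=20.0) # near-zero marginal cost

print(f"paper mail must convert 1 in {1/paper:.0f} recipients")   # 1 in 40
print(f"bulk e-mail must convert 1 in {1/email:.0f} recipients")  # 1 in 200000
```

With a near-zero marginal cost, even a vanishingly rare response makes the mailing profitable, so volume is limited only by the sender's bandwidth.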
A final example is content filtering. In the normal, non-virtual world, anything new becomes part of our life through a relatively slow incorporation into a set of routines: whole categories of publications, for example, never enter our lives unless we intentionally propel ourselves into a different aisle of the bookstore. Web browsers, however, are capable of bringing anybody into contact with anything on little notice. Although it may sound convenient in the abstract to have the whole world at one's fingertips, in practice one learns the benefits of the relatively stable connections between people and things that characterize our meatspace lives. More generally, the Internet's ability to establish arbitrary connections instantly is proving a bit much. Content filtering is an understandable response, but it is also proving far too simple a response. We are learning, I think, that institution-building on the net has hardly begun, and a central issue in defining the necessary institutions is the social organization of boundaries: allocating the power to define them, set them, and keep the very fact of them to oneself.
6/ Networks and institutions
Consideration of the institutional aspects of online interaction, then, leads us into a new world of social and technical work to be done. Let me fill out the picture with a series of three conjectures about larger-scale phenomena concerning the intersection of information technology and institutions.
First is the question of the mechanism by which advances in information technology lead to institutional change. The question may never be settled, of course, because much more is changing in the world than information technology. And yet many people feel that the immense capabilities of new information technologies will necessarily bring massive, qualitative, discontinuous changes in the workings of institutions such as higher education. We might call this the creative destruction model, after Schumpeter's theory of the succession of older, hidebound firms by entrepreneurial upstarts better attuned to changing conditions. Observe, however, that Schumpeter was speaking of firms, not institutions. Institutions are so deeply woven into the larger social order that we might well question whether their wholesale replacement is even possible. Another potential model might be called the digestion model: as a new technology arises, various organized groups of participants in an existing institutional field selectively appropriate the technology in order to do more of what they are already doing -- assimilating new technology to old roles, old practices, and old ways of thinking. And yet once this appropriation takes place, the selective amplification of particular functions disrupts the equilibrium of the existing order, giving rise to dynamics internal to the institution and the eventual emergence of a new, perhaps qualitatively different equilibrium. Which model, if any, actually applies in a given case is of course an empirical matter. I would conjecture that the digestion model is the norm, but in any case the simple existence of alternative models may help rescue us from premature conclusions.
Second is the question of the evolution of technical architectures. Information technologies, of course, evolve in part through the straightforward acceleration of processing speeds, multiplication of bandwidths, refinement of screen displays, and so on. But these quantitative improvements tell us little about the qualitative organization of technical systems and how that organization evolves. Even though textbooks generally present the modularity of technical systems as the product of ahistorical design norms, in fact the modularity of systems interacts to a considerable degree with the dynamics of markets. Consider, for example, the victory of the IBM model of personal computing over the model of Apple. Although the cognitive power of the IBM brand was surely one factor, another was the (largely accidental) decision to standardize the PC's modularity and to open the standardized components to competition. Selling its computer as a package deal, Apple was never able to obtain the manufacturing economies of the PC clone-makers, and as the PC began to dominate the market, all of the firms that participated in the PC model began to enjoy overwhelming economies of scale that reinforced the PC's position and ensured Apple's defeat. Something similar may well be happening in the emerging competition between the telephone system and the Internet: the Internet decouples functions that the phone companies bundle together. The general point, and the second conjecture, is that the market wants functionalities on different layers to be decoupled whenever significant economies of scope arise for the application of one layer to different purposes.
Third is the question of professions. Ever since the rise of the industrial economy, knowledge-intensive work has been organized in a matrix structure, with professions cross-cutting organizations. This phenomenon is often not visible in research that reifies either organizations or professions as entities unto themselves, but it has enormous consequences for the dynamics of social learning. In recent years, however, the relationship between organizations and professions has begun to shift. The private sector, for example, has seen the stupendous rise of two seemingly contradictory phenomena: concentration and outsourcing. Corporate mergers worldwide are running at around $2 trillion/year, and yet functions as complex as information systems management are now contracted out in similarly large amounts. What these developments have in common, as a rough generalization, is an increase in the homogeneity of the activities that take place within a firm. Although these trends find their conditions partly in globalization and regulatory change, I conjecture that they are also driven by advances in distributed information technology. The reason for this is information economics: if two firms in the same industry are merged, then all activities that distribute information to all points in the firm -- personnel policies, for example, or payroll software -- become much more efficient. If this trend continues, the distinction between organizations and professions will largely collapse. Already, for example, many of the traditional functions of the American Medical Association have been taken over by the bureaucracies of large HMO's. The result is an emerging global corporatism.
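The information-economics point can be put as back-of-the-envelope arithmetic. In the sketch below (all figures invented for illustration), a firm-wide informational function with a large fixed cost and a small marginal cost gets cheaper per employee as the firm grows, so merging two firms that each run such a function roughly halves its unit cost.

```python
# Back-of-the-envelope: why firm-wide informational functions favor mergers.
# A payroll system has a large fixed cost and a small marginal cost per
# employee, so its per-employee cost falls as the firm grows.
# All figures are invented for illustration.

def per_employee_cost(fixed: float, marginal: float, employees: int) -> float:
    """Annual cost per employee of a system with fixed and marginal components."""
    return fixed / employees + marginal

separate = per_employee_cost(fixed=1_000_000, marginal=10, employees=5_000)
merged = per_employee_cost(fixed=1_000_000, marginal=10, employees=10_000)

print(f"each firm alone: ${separate:.0f} per employee per year")  # $210
print(f"after a merger:  ${merged:.0f} per employee per year")    # $110
```

The same arithmetic applies whether the larger pool of employees is assembled by merger or by an outsourcing firm serving many clients, which is one way to reconcile the two seemingly contradictory trends.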
7/ Standards and rules
Having sketched a formidably messy world in the interaction of institutions and information technology, let me conclude with some comments on rule-making and rules. Joel Reidenberg has observed that the technical standards embodied in digital media perform a rule-setting function: software is a kind of law, and the software that underwrites human relationships also regulates them. The law, and particularly the English common law, is conservative in its reactive and deliberative approach to social rule-setting. Yet I would argue that information technology, too, despite its revolutionary reputation, is likewise conservative. The development of technical practice is much like that of the common law: a reactive response to problems, and the periodic systematization of accumulated experience, for example in the emergence of the entity-relationship model in database design.
Information technology is conservative in another way: technical compatibility standards, once entrenched in a sufficient proportion of the installed base, tend to persist in the marketplace. For reasons of backward compatibility, therefore, as well as network effects and economies of scale, competition among information technology standards has a winner-take-all character. In these ways and more, information technology markets increasingly resemble legislatures that set rules for a whole population. Legislatures are already well-known for their increasing resemblance to markets, and this convergence between the institutional dynamics of economics and politics is already a daily fact of life in the computer industry, and becoming more so as technical disputes are increasingly fought in actual legislatures as well as standards organizations.
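The winner-take-all dynamic can be illustrated with a toy model, in the spirit of increasing-returns models of standards competition; the parameters are arbitrary and the model is an illustration, not a measurement. Each new adopter chooses a standard with probability proportional to the square of its installed base, so compatibility benefits compound and an early random lead snowballs into near-total dominance.

```python
import random

# Toy model of winner-take-all standards competition. Each new adopter picks
# a standard with probability proportional to the SQUARE of its installed
# base, so compatibility benefits compound: whichever standard happens to
# pull ahead early attracts nearly all subsequent adopters.
# Parameters are arbitrary; this is an illustration, not a measurement.

def simulate(installed, adopters, seed=2):
    """Return installed bases after a sequence of self-reinforcing adoptions."""
    rng = random.Random(seed)
    base = list(installed)
    for _ in range(adopters):
        weights = [b * b for b in base]  # increasing returns to adoption
        choice = rng.choices(range(len(base)), weights=weights)[0]
        base[choice] += 1
    return base

final = simulate([10, 10], adopters=10_000)
shares = [x / sum(final) for x in final]
print(f"final shares: {shares[0]:.1%} vs {shares[1]:.1%}")
```

Starting from an even split, repeated runs of this model almost always end with one standard holding nearly the whole market; which one wins depends on early accidents, not on merit, which is the sense in which such markets "set rules for a whole population".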
The deeper phenomenon, then, is not markets or polities as such, but the agenda-setting process by which our increasingly global society decides what values should be embodied in its institutions and its information technologies alike. Software is not just developed by vendors and standards are not just developed by standards bodies; both evolve through the interaction of activities in many sites. It is a process that already far exceeds the simple imaginations of the utopians. To engage in this process well, we need a post-utopian imagination that embraces the complexity of human institutional life, as well as a critical technical practice that embraces the complex coevolution of technologies and institutions. Both the imagination and the practice are far off, but both can nonetheless be seen taking form around us.
Acknowledgements
This paper was presented as part of a symposium at the University of Michigan, and I appreciate the comments of many of the symposium's participants.