Social Science, Technical Systems and Cooperative Work
---
This message was forwarded through the Red Rock Eater News Service (RRE). Send any replies to the original author, listed in the From: field below. You are welcome to send the message along to others but please do not use the "redirect" command. For information on RRE, including instructions for (un)subscribing, send an empty message to rre-help@weber.ucsd.edu
---
Date: Mon, 22 Sep 1997 10:30:04 -0500
From: "Geoffrey C. Bowker"
[...]
SOCIAL SCIENCE, TECHNICAL SYSTEMS AND COOPERATIVE WORK
Beyond the Great Divide

Edited by:
Geoffrey C. Bowker, University of Illinois, Urbana-Champaign
Susan Leigh Star, University of Illinois, Urbana-Champaign
William Turner, CERESI/CNRS
Les Gasser, University of Southern California

A Volume in the Computers, Cognition and Work Series
This book is the first to directly address the question of how to bridge what has been termed the "great divide" between the approaches of systems developers and those of social scientists to computer supported cooperative work -- a question that has been vigorously debated in the systems development literature. Traditionally, developers have been trained in formal methods and oriented to engineering and formal theoretical problems; many social scientists in the CSCW field come from humanistic traditions in which results are reported in a narrative mode. In spite of their differences in style, the two groups have been cooperating more and more in the last decade, as the "people problems" associated with computing become increasingly evident to everyone.
The authors have been encouraged to examine, rigorously and in depth, the theoretical basis of CSCW. With contributions from field leaders in the United Kingdom, France, Scandinavia, and Mexico as well as the United States, this volume offers an exciting overview of the cutting edge of research and theory. It constitutes a solid foundation for the rapidly coalescing field of social informatics.
Divided into three parts, this volume covers social theory, design theory, and the sociotechnical system with respect to CSCW. The first set of chapters looks at ways of rethinking basic social categories with the development of distributed collaborative computing technology -- concepts of the group, technology, information, user, and text. The next section concentrates more on the lessons that can be learned at the design stage given that one wants to build a CSCW system incorporating these insights -- what kind of work does one need to do and how is understanding of design affected? The final part looks at the integration of social and technical in the operation of working sociotechnical systems. Collectively they make the argument that the social and technical are irremediably linked in practice and so the "great divide" not only should be a thing of the past, it should never have existed in the first place.
Contents:
G.C. Bowker, S.L. Star, W. Turner, L. Gasser, General Introduction.

Part I: Social Theory and CSCW.
S.L. Star, Introduction: Social Theory and CSCW.
M. Lea, R. Giordano, Representations of the Group and Group Processes in CSCW Research: A Case of Premature Closure?
J.A. Goguen, Towards a Social, Ethical Theory of Information.
Y. Rogers, Reconfiguring the Social Scientist: Shifting From Telling Designers What to Do to Getting More Involved.
W. Sharrock, G. Button, Engineering Investigations: Practical Sociological Reasoning in the Work of Engineers.
J. Yoneyama, Computer Systems as Text and Space: Towards a Phenomenological Hermeneutics of Development and Use.

Part II: Design Theory and CSCW.
L. Gasser, Introduction: Design Theory and CSCW.
P.E. Agre, Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI.
E. Axel, Creating Meaningful Tools Within the Organization of Concrete Work Situations.
J. Blomberg, L. Suchman, R. Trigg, Relating Work Practice and System Design: Two Cases From a Law Firm.
S. Bødker, E. Christiansen, Scenarios as Springboards in CSCW Design.
J-P. Poitou, Building a Collective Knowledge Management System: Knowledge Editing Versus Knowledge Eliciting Techniques.
M. Robinson, "As Real as It Gets..." -- Taming Models and Reconstructing Procedures.
C.A. Macias-Chapula, An Approach to Identifying the Role of "Information" in a Health Care System: Implications for the Quality of Health.

Part III: The Sociotechnical System and CSCW.
W. Turner, Introduction: The Sociotechnical System and CSCW.
M. Berg, Formal Tools and Medical Practices: Getting Computer-Based Decision Techniques to Work.
I.A. Monarch, S.L. Konda, S.N. Levy, Y. Reich, E. Subrahmanian, C. Ulrich, Mapping Sociotechnical Networks in the Making.
L. Bannon, Dwelling in the "Great Divide": The Case of HCI and CSCW.
J. Taylor, G. Gurd, T. Bardini, The Worldviews of Cooperative Work.
I. Wagner, On Multidisciplinary Grounds: Interpretation Versus Design Work.
K. Keller, Understanding of Work and Explanation of Systems.

0-8058-2402-2 [cloth] / 1997 / 496pp / $99.95
0-8058-2403-0 [paper] / 1997 / 496pp / $49.95
Introduction to Beyond the Great Divide: Social Science Research, Technical Systems, and Cooperative Work
"Things perceived as real are real in their consequences." - -W.I.Thomas and D. Thomas
Geoffrey C. Bowker, William Turner, Susan Leigh Star and Les Gasser
I.) Background

Philosopher Michel Serres (1980) has characterized the charting of a passage between the human sciences and the natural sciences as a form of "northwest passage." This image is meant to invoke the sense of uncertainty, the lack of institutional safe havens, the water raging with ever-shifting ice floes that can wreck the ship of the intrepid traveler passing between the old world and the new. Within the field of computer supported cooperative work (CSCW), or interpretive informatics, one such great divide between the realms (cf. Snow's (1963) famous "two cultures" designation) has pitted the complex, contingent, political, and emotion-laden human landscape against the equally complex, but rational, formal (or formalizable), universal nature of the systems development process. At its extremes, the difference between the two worldviews takes on a caricatural form: the humane, soft-headed social scientists seeing contingency everywhere as against the impersonal, technocratic developers and computer scientists who seek only working systems of information flow.
For over two decades, researchers drawn from the shores of both continents have worked to navigate a path, and in so doing have created the conditions of a new partnership. The goal of this book is to set out the foundations on which this community of interests is being built. Social scientists are not just adding in human factors to system design; computer scientists are not simply providing tools for social scientists to use in their own research. Rather, this synergy is changing the nature of both social science and of computer science. Indeed, even identities are shifting -- as a social scientist, one is just as likely to attend conferences on CSCW or interpretive informatics as, say, the American Sociological Association. As a computer scientist, one might read social theorists ranging from Georg Simmel or Karl Marx on the one hand to Gloria Anzaldúa or Patricia Hill Collins on the other (Star, 1995). In the first case, the study of large-scale systems raises old questions of cooperation and conflict, public and private, and the nature of scalability (level of analysis), and binds them to an emergent technological infrastructure. In the second case, equally venerable questions of community and who is a member of any community (virtual or not) become crucial for designing tools for large-scale information spaces, and understanding the nature of community boundaries.
As we collectively move toward a highly dense, distributed information technological infrastructure (the Internet, collaboratories, digital libraries, and attendant integrated technologies), such questions take on new urgency. Orwell's 1984 seems wholly plausible from a technological standpoint, as do the wilder reaches of pleasurable virtual reality and its manipulation. As we live in this transition time, we stand to learn a great deal from the work presented in this volume about how the contingent, messy, and emotional/political aspects of people's work and leisure are linked with new technological developments and visions. Cui bono? The answer rests in part on making accessible tools of analysis that will help us navigate a northwest passage and so permit the growth of humanly informed technology and technically informed analysis.
The attempt to understand how human and technical issues come together in computing systems is now widespread, but its inception as an area of academic research only goes back to the 1960s. Initially, such research was informed by concerns about automation (e.g., "deskilling," stratification, and job loss), as well as by psychological and management studies of user interface design and efficiency. It also included a healthy dose of social criticism and philosophically informed debate about the nature of thought and decision-making. The Scandinavian tradition of participatory design of computing ("co-determination" or "co-design") began in the late 1960s and was conducted jointly by social scientists, computing scientists, unions and workers (Bødker 1991; Bødker and others 1991). Researchers from this approach (see Christiansen and Bødker, this volume; Greenbaum and Kyng, 1991) have studied workplaces in an ethnographic style, using the results to fuel the development of easier-to-use systems which seek to enhance the workplace, not impoverish it.
Artificial intelligence -- as possibility and as a rich, going concern in computer science research -- played a large role in joint social science/computer science research from the 1960s on. Many writers, such as Hubert Dreyfus (1972) and Joseph Weizenbaum (1976), assembled philosophical arguments about whether human beings could ever be replicated by machines. Responding to the drive within artificial intelligence to replicate and model human intelligence, they claimed a unique cluster of attributes as irreducibly human. What is human about human beings? Close examinations of machine functioning in real-world settings saw computers as rigid, often agents of power and bureaucracy, but not sentient in their own right. Computer scientists began to show interest in ethnographic studies of the practice of computer use. Such studies continually focused on human creativity and the local nature of contingency in workplaces, making universal, formal, rational systems a seemingly impossible goal (Star, 1989a). Suchman's (1987) landmark book and Forsythe's (1992) ethnographic work on artificial intelligence research, along with Gasser's (1986) study of people struggling with standardized systems at work, for instance, all emphasized how people always "work around" the rigidities or remoteness of computing from human experience. Each of these studies conveyed a sense of the limits of computers, of formal modeling, and of rationalization.
But aside from the limits, this work also began to point the way to an exciting new landscape of research possibilities. The tendency for people to wrestle with and change the meaning, attributes, and consequences of system design became in a sense a feature, not a bug -- a topic, not just a resource.
In the late 1970s and early 1980s, social scientists increasingly found themselves invited to provide researchers in artificial intelligence with metaphors for sophisticated modeling of collective cognitive processes. For example, at the Message Passing Semantics Group at the Massachusetts Institute of Technology in the United States, system designers actively explored the possibility of using the scientific community as a metaphor for large artificial intelligence systems. It was felt that scientific communities, as robust natural problem-solving entities, might prove superior to brains or neurons as models for decision-making in large distributed systems (Hewitt, 1985). But as technological infrastructures have improved, the nature of relationships between social and computer scientists has also changed, and we are no longer locked into a structure of metaphor-supplier/model-developer (Bannon, 1990).
The idea of computer-supported cooperative work is accepted as a feature of the 1990s work environment, even if results have not always lived up to the hopes invested in these new technical support systems. Engineers have joined sociologists in wanting to build mechanisms into their systems which will account for the social costs of changing the infrastructural arrangements of working together. Even in the realm of scientific cognition, the metaphor of scientists as robust problem-solvers is now giving way to models of expertise which attempt to account for issues related to such contingent and messy things as disciplinary and professional politics, or the nature, availability and rights of access to laboratory materials (Berg, forthcoming 1996).
However, the biggest impetus for closer cooperation between system designers and social scientists probably lies outside the realm of artificial intelligence or CSCW, and more in the field of information science. Government agencies are actively sponsoring joint research projects; this is easy to understand given the levels of public investment in digital highways. Will these highways lead to heaven or hell? Will they be a fabulous opportunity for working creatively together, or the call of the siren luring us onto the rocks to drown in the flow of infojunk currently polluting the Web? The dynamic nature of open systems means that the benchmarks needed to fix the cognitive limits of joint action are not easy to determine. Fence building and fence-tending in systems like the Internet have become major concerns. What tools can help to build the frames of reference needed to organize and coordinate collective intellectual activity?
Since the 1970s, a shift has occurred in open systems research from individualist, formal cognitive models to models situated in, and grappling with, complex real-world dynamics (Hutchins, 1995). Computer scientists working in such areas as distributed artificial intelligence, or on such questions as modeling artificial life, have developed new approaches to system dynamics based on the hypothesis of unequal individual access to collective information resources. Information is not universally available in central stores (libraries, databases, organizational memories); rather, it is a private good distributed with parsimony for a multitude of reasons given personal strategies of cognitive development (Callon, 1994). This shift in perspective has had considerable impact on empirical studies of collective action and work, which we will now discuss briefly in order to illustrate the synergy which is building up between computer and social science research. It is, in fact, now common to hear the call for fuller recognition of the following methodological rule: technology models social structures but is shaped in return by its use in the workplace (see Thomas, 1994 for some rich case studies of technology in organizations). As this book shows, the conceptual and methodological tools needed to understand this adjustment process are still in the process of development and are the source of ongoing debate. It is precisely this debate which makes work in the great divide such an intellectual challenge.
Studies of scientific work have traditionally focused on knowledge production practices in a laboratory, but it is far from clear that the models, results and interpretations can be easily transposed to account for these same practices in a collaboratory. The collaboratory concept has been defined as a new organizational structure for scientific activity that specifically accounts for computer-mediated collaborations. Collaboratories are "centers without walls, in which the nation's researchers can perform research without regard to geographical location -- interacting with colleagues, accessing instrumentation, sharing data and computational resources, [and] accessing information in digital libraries" (Lederberg and Uncapher, 1989: 19). One of us (Bowker, 1994a and 1994b) argues that current thinking in sociology has not sufficiently studied the impact of 'infrastructural inversion' on knowledge production practices. Simply stated, infrastructural inversion means that in order to understand the dynamics of cognitive change, what is generally held as being behind the scenes, a backdrop to the action played out on stage, has in fact to be analyzed as determining our understanding of the plot. If we take the example of the Internet, a massive densification of networks is underway, with more and more people using their personal computers as external memory for their specific knowledge production strategies. These external memories serve to store documents relating to work underway locally, those that arrive over the net through email, or those downloaded from web sites. A wide variety of cognitive operations -- cut, copy, paste, annotate, associate, link, combine -- can be applied to these document sets, generating new perspectives suitable for grounding individual plans of action.
Jack Goody (1986) has already made the point: new cognitive technologies tend to encourage social practices of criticism which constantly call into question the legitimacy of recognized sources of authority whether they be moral, institutional, organizational or consensual. It could be that the densification of the Internet infrastructure is akin to opening Pandora's box and that the goal of harnessing technology to the yoke of collective action will in fact destroy the capacity of social groups to work within strategic frames of reference: collective goals may no longer be sufficient to channel the energy that their members invest in the pursuit of their own objectives. System designers, notably in France, are working actively to capture the impact of network densification on strategic planning by using statistical techniques to model information flows, communication patterns and change in symbolic notation systems over time. Statistical models are one way of making visible the background conditions needed to fully understand the limits of collective cognitive actions in a context of rapid network densification. The French group uses the notion of hybrid intelligence networks to position its work in the perspective of a new sociology of collaboratory studies (Turner, 1994): knowledge hybridization makes the development of what is now generally known as knowledge discovery methods in databases (KDD) an integral part of observation practices in the sociology of science field. These techniques are needed to identify the cognitive attractors structuring the information flows that characterize life in collaboratories, and enable us to study how these attractors serve as boundary objects (Star and Griesemer, 1989; Star and King forthcoming) to articulate a wide variety of individual, and largely independent knowledge production strategies.
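As a toy illustration of the kind of statistical technique being described (this sketch is ours: the documents, keywords and scoring rule are invented, and real co-word/KDD studies operate over large bibliographic databases rather than five hand-made records), one can count keyword co-occurrences across a document flow and treat the most densely connected terms as candidate cognitive attractors:

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each "document" is the set of index keywords attached to it.
# Documents and keywords are invented for illustration only.
documents = [
    {"cscw", "design", "ethnography"},
    {"cscw", "design", "boundary-object"},
    {"cscw", "boundary-object", "infrastructure"},
    {"ethnography", "workplace"},
    {"design", "boundary-object", "infrastructure"},
]

# Count how often each pair of keywords appears in the same document.
cooc = Counter()
for doc in documents:
    for a, b in combinations(sorted(doc), 2):
        cooc[(a, b)] += 1

# A keyword's "attractor strength" here is simply the sum of its
# co-occurrence counts -- a crude proxy for its structuring role.
strength = Counter()
for (a, b), n in cooc.items():
    strength[a] += n
    strength[b] += n

for term, score in strength.most_common(3):
    print(term, score)
```

In a real study the co-occurrence counts would feed a clustering or mapping step; here the summed counts merely rank terms by how strongly they bind the toy corpus together.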
We have introduced a number of concepts -- collaboratories, network densification, infrastructural inversion, KDD, hybrid intelligence networks, strategic planning and now boundary objects -- in order to show that the sector of research that this book deals with is a generative one, with its share of debates and controversies on how best to understand the open system feedback dynamics through which technology models society and society shapes technology. The great divide which formerly left social scientists stranded on one shore and systems engineers on the other is increasingly a thing of the past. Social and computer scientists are actively working together in a great many fields and in a great many different ways, seeking to explore a set of issues which we will now describe briefly.
II.) Issues for action
Although the chapters in this book display a robust diversity, there are a number of themes which, it appears to us, are common to all workers in the great divide:
1.) Heterogeneity: This is no doubt the subject which has proven to be the most important common meeting ground between social and computer scientists. Open systems are necessarily internally contradictory: multiple viewpoints necessarily arise from different circumstances and different situated actions, and information in the real world is distributed asynchronously -- there is no global broadcast system, and the timing and order in which one receives information changes the semantics of that information. The notion of necessary contradiction implies the need to better understand how people manage these contradictions in the workplace and how systems can be designed to provide frameworks for their negotiation.
2.) Parallelism: Frames of reference can be devised to run individual strategies along parallel tracks with no overlap -- this is a version of "separate but equal" (or unequal as the case may be). In sociological terms, this is an organic division of labor. This solution is very unstable, since as soon as cooperation is required, power struggles begin over the nature of shared language.
3.) Regime of Translation: This is the classic case described by Callon and Latour, where the symbolic systems of different groups are translated into a coherent and monolithic representational regime, as with the case of Pasteur (Callon, 1986; Latour, 1987). Think of this graphically as a sort of funnel shape with information channeled into a spokesperson. Members' concerns are re-represented (Star, 1989b) under a single rubric, and people learn to see (or are forced to see) their concerns in a second language. Models are typical translation structures in the sense that they are used by system designers to re-represent the diversity of individual symbolic systems.
4.) The Politics of Formalism: It is difficult to distinguish formally between a regime of translation and a universal language, for the reason that the design of any universal language is itself historically and politically situated, and there is no such thing as a neutral vantage point, even for mathematics (or especially for mathematics?). Consequently, the question is how multiple perspectives are integrated into a single worldview. Much research concerns the role of language as a tool of integration, but work is progressing as well on the importance of the availability of materials -- stuff -- and the physical organization of infrastructures to manage information flows.
5.) Working together: What does it mean to work together? Who has the power of decision? Is it the spokesperson who speaks for others, or is power everywhere and nowhere, because it is diffusely distributed over what neural network scientists call the links in the system? System designers are confronted by this alternative: either to set up obligatory passage points (spokespersons) for managing the information flows in their systems, or to use distributed passage points (boundary objects) that are diversely activated by the information flowing through them, and which serve to dynamically connect people, strategies and things. This alternative obviously opens up a wide range of options for creating learning algorithms for working together.
6.) Boundary objects: A framework for cooperative cognition which preserves the sovereignty of viewpoints without the parallelism or the re-representations of the earlier options is provided by the notion of boundary objects (Star and Griesemer, 1989). How do scientific communities, for example, create lasting arrangements in which they address systems of objects which are simultaneously local and global, common and specialized, shared and segregated? The boundary object answer to this question assumes the fundamental ambiguity of objects (an idea taken directly from the Pragmatist notion that meaning is given in use, not in antecedent characteristics) and the durability of arrangements to manage that ambiguity in cooperative ventures. The durability implies the need to develop conventional or routine ways of working with the ambiguity; those conventions themselves may be seen as data structures from the design point of view; as material structures from the organizational point of view; or as working treaties from the political point of view.
7.) Conventions and routines: If everything is open, and all is apt to change, how do durable cooperative arrangements come to exist? Some suggest that a set of organizational, institutional and symbolic conventions for getting on with routine business despite heterogeneity is required, but if these conventions cannot be assumed a priori, how are they constructed? Others seek to overcome this problem by questioning the need for rule-based behavior for getting most things done. "Good enough" is as good as it ever gets, whether you are speaking formally or informally about any form of social cooperation. But in that case, what about the situation which we talked about earlier where machines are becoming active members of hybrid intelligence networks: without rules, how can we define what is "good enough" for a machine in supporting cooperative work?
8.) Ethics and global responsible thinking: Langdon Winner, in a widely-cited article, asks the question in its title: "Do Artifacts Have Politics?" (1986). His primary example is taken from the construction of infrastructure in the city of New York, where city planner Robert Moses decided to build bridge overpasses between New York City and Long Island that were too low in height to allow buses to pass through. From his racist and classist perspective, people who couldn't afford cars were not welcome in Long Island, a rich suburban area, and he built this politics directly into the infrastructure of the roads and transportation systems. What sort of politics will our artifacts have? Will it be possible for us to include in our systems the possibility for compassion, ethics, and globally responsible thinking?
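Returning to point 6, the remark that conventions for managing ambiguity "may be seen as data structures from the design point of view" can be made concrete with a deliberately minimal sketch (the class, field names and specimen record below are our own invention, loosely in the spirit of Star and Griesemer's museum study, not code from any of the chapters): a boundary object keeps one shared, weakly structured core while letting each community attach its own local refinement.

```python
# Minimal sketch of a boundary object as a data structure: one shared
# record, plus per-community "views" that tailor it locally. All names
# and values here are illustrative, not drawn from any CSCW system.
class BoundaryObject:
    def __init__(self, identifier, common_fields):
        self.identifier = identifier       # shared, stable identity
        self.common = dict(common_fields)  # weakly structured common core
        self.local_views = {}              # community -> local annotations

    def annotate(self, community, **local_fields):
        # Each community refines the object for its own use without
        # forcing that refinement on the others.
        self.local_views.setdefault(community, {}).update(local_fields)

    def view(self, community):
        # A community sees the common core plus its own annotations.
        return {**self.common, **self.local_views.get(community, {})}

# An invented museum specimen record, loosely in the spirit of the
# Museum of Vertebrate Zoology case.
specimen = BoundaryObject("specimen-42", {"species": "Sorex trowbridgii"})
specimen.annotate("curators", catalogue_no="MVZ-1907-113")
specimen.annotate("trappers", locality="Alameda County", bounty_paid=True)

print(specimen.view("curators"))
print(specimen.view("trappers"))
```

The design choice that matters is that annotate never writes into the shared core: each community's refinement stays local, so the object remains common enough to travel between groups yet specialized enough to be usable within each.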
Technically, many of the problems of open system dynamics appear as an overwhelming list of differences: heterogeneous representations, incompatible data sources, or the constantly changing contextual meaning of information across dispersed groups are just some examples. Politically, the problem of dealing with differences is one of preserving the primacy and sovereignty of individual experiences without creating regimes of coercion. We must be aware that it is impossible to separate the technical from the political, which clearly means that we cannot ignore the macro-implications of our research as it is taken up and used to build sociotechnical systems for use in the real world.
III.) Outline of the book
We have divided the book into three parts, covering in turn social theory, design theory and the sociotechnical system with respect to CSCW. Clearly a book about superseding divides does not want to spawn new ones; and indeed none of the papers fits into only one of the categories! However, broadly speaking, the first set of chapters looks at ways in which we are rethinking basic social categories with the development of distributed collaborative computing technology -- concepts of the group, technology, information, user and text. The second set concentrates more on the lessons that can be learned at the design stage: given that one wants to build a CSCW system incorporating these insights, what kind of work does one need to do -- and how is our understanding of design affected? The third set looks at the integration of the social and the technical in the operation of working sociotechnical systems; collectively they make the argument that the social and the technical are irremediably linked in practice, and so the great divide not only should be a thing of the past -- it should be seen never to have existed in the first place. An introduction at the beginning of each section points to some of the themes linking the papers in that section together.
IV.) Acknowledgements
This book is the outcome of a conference organized in Paris under the auspices of the CNRS (Centre National de la Recherche Scientifique) with a grant from the Department for Specialized Information of the French Research Ministry, and the editors wish to thank both for their support and hospitality. It grew out of a series of workshops partly funded by the CNRS and the British Council, which we should also like to thank.