[RRE] hazards of design 2/3


Source

Automatically imported from: http://commons.somewhere.com:80/rre/1998/RRE.hazards.of.design.2..html

Content

This web service brought to you by Somewhere.Com, LLC.


was organized around a fundamental technical schema: the schema that permitted the workings of an artifact to be narrated as actions. The digital computer, too, was founded in a similar narrative innovation; as Schaffer [ref] points out, Babbage assigned central importance to the functions of memory and anticipation in the planned workings of his Analytical Engine.

Practices of ascribing future-orientation to machines, then, have repeatedly provided the spark for extended and highly productive interactions between discursive forms and technical innovations. And yet it is equally important not to reduce the complexity of users' accounting practices to the simple ascription of intentionality. The accounts that users construct are, and of necessity must be, frequently very complicated. It is not just a matter of machines taking actions, or even of machines making assertions; the behavior of a contemporary computer is regularly accounted as conveying the actions and voices of numerous parties in various relationships. A computer that displays an e-mail message from another user, for example, is understood to be performing a complex metalinguistic operation, framing some reported speech in a matrix of symbols and speech acts and files. A data entry in a company database, likewise, is not just -- or not at all -- a computer's opinion, but more likely the opinion of someone else in the organization or elsewhere, reflexively accounted against the same background of situational contingencies as any other record. Although computers use language in ways that get successfully interpreted in specific situations of use, therefore, we should not succumb to any simple model of the machine as the agent or the author of those linguistically motivated actions. At a minimum, we need something like Goffman's [ref] typology of the distinguishable footings that a speaker might take up in producing a sequence of words, and the methods by which these footings are made reflexively accountable. Beyond that, however, we can be much more specific about the kind of character, so to speak, that a computer plays in the narrative practices of its designers and users. On one hand, designers really do inscribe into their machines the raw materials for the construction of accounts that confer agency on the machines themselves. 
Computers are absolutely not designed to be accountable in terms of their designers' reasons or motivations, or the other design choices they might have made, or even the kinds of stylistic signatures that a previous generation of literary critics has sought to characterize in other sorts of texts. And yet, on the other hand, the agency that is ascribed to actual, commercially distributed software systems is precisely delineated. Virtually all systems in actual use conform to what Alan Kay [ref] calls the user illusion: the accountable construction of the system's agency as flowing from specific acts of delegation by a user. This is not a limitation of the technology, I should emphasize, but a narrative convention. Artificial intelligence research has sometimes (much less often than is generally imagined) sought to construct software systems whose workings lend themselves to narration in terms of an agency of unrestricted free will, but hardly anybody finds such things very useful in practice.

//5 The concept of hazard

I have already indicated the central dilemma that designers face: their machines will be evaluated in terms of their usefulness in unknown situations potentially distant in space and time from the lived work of designing them. I want to refer to this cluster of difficulties as the hazards of design. I take this word hazard from Keane's anthropological study of ritual in Indonesia. Keane observes that power is always coupled with hazard: power is enacted and manifested through public representations, and yet representation always carries the risk of misinterpretation or infelicity. More specifically, representations are always, to use Derrida's [ref] term, iterable, capable of being repeated in an endless variety of different situations, in which they might take on a variety of interpretations and forces. Neither Keane nor Derrida would argue that these different contextualizations are arbitrary or wholly impossible to anticipate, or that the authors of representations are entirely helpless to influence the uses to which their signs are put. The point is simply that the objectification of meaning in signs, and the autonomous materiality of those signs, creates a complex landscape of potentials for the unravelling of whatever material positions those signs might be bound up in. Many representational practices can be understood in part as attempts to manage the hazards of iterability; consider, for example, the highly developed art form of the sound bite, which seeks to limit the potentials for recontextualization through the miracle of audio or video editing.

Designers resemble Schutz's and Garfinkel's portrayals of the sociological analyst, reifying vernacular categories at a privileged distance from the practical enmeshments of daily life, but with one very significant exception: they are liable to get blamed when things go wrong. It is crucial to understand that, as an ethnomethodological matter, this assignment of blame is just as rational, just as orderly, as the smoothly functioning scenarios of use that designers would naturally prefer to hear about. Although designers aim to facilitate the construction of accounts of use that interrelate users and machines, they are social actors, too, and well known to be parties to every situation in which their machines are used; their own identities and attributes, accordingly, are no more or less an accomplishment of situated action than anyone else's.

In ethnomethodological terms, the hazardous circumstance that Derrida calls iterability is understood as indexicality: the production of a representation's meaning on every next occasion by the members of successive settings of use. Although a designer may have some sense of control over the mimetic function of users' accounts of machine behavior -- their systematic relationships to a supposedly given ontology -- it is much clearer that the designer can have little control over the actions that users employ these accounts to perform. Put another way, although system design is centrally concerned with the lexicon and grammar of possible accounts of a machine's behavior, these design practices exert little influence over the force that particular deployments of an account might take on.

More fundamentally, designers require the practical reality of settings of use to have the opposite properties from those discovered by Schutz and Garfinkel. Whereas Schutz described the cognitive problem of order as prior to classical sociological questions of cooperation and conflict, system designers need this problem to be presolved in order to maintain any sense of knowing what their systems' behavior will amount to in practice. And whereas Garfinkel emphasizes that the full phenomenal detail of any given practical setting is a self-organizing accomplishment of that setting, the designer's scenarios must presuppose that detail to be predetermined, roughly speaking to be the same phenomenal detail that the designers themselves encounter in their own sites.

//6 Asymmetrical intersubjectivity

As a result of these circumstances, every computer system threatens to become a full-time breaching experiment. This is the central tension of computing: even though they are laboriously and systematically designed to be intelligible as the authors of accountable actions, computers are incapable of the improvisational work of upholding the morally sanctioned cognitive order of the situations in which they supposedly participate; they thus inevitably breach innumerable maxims of interactional order to which their putative status of accountable agency would otherwise subject them. If the production of rational action presupposes the reflexive awareness and reciprocal attribution of the normative accountability of that action, then it follows that a computer cannot participate in the concerted production of rational order at all. It will be objected that computers as such have never been proven definitively incapable of such accomplishments, even if contemporary machines do not reach the necessary standard; surely it is folly to make such broad claims against the unknowable range of future technical advances. But in speaking of computers we are not talking about artifacts in general; the difficulty arises specifically because computers are designed to lend themselves to complex and consequential attributions of intentionality. The practices of computer system design comprise a historically specific formation that, popular representations notwithstanding, has not changed very much in its essentials from its early days. Technical schemata may multiply and evolve, but the methods by which computers are intendedly rendered accountable have not changed in any fundamental way, and would have to change in a fundamental way before this argument would need revisiting.

Let us consider the question more precisely. Computer system design, despite its daunting hazards, is successful in the significant sense that people relate to computers in much the same way that, on Schutz's analysis, they achieve intersubjectivity with one another [cf Nass]. Schutz affirms that people cannot know one another's subjective experience in a direct way, so that their knowledge of one another is inherently a matter of inference and their capacity to share a situation depends crucially on a series of otherwise unjustified assumptions, what Schutz refers to as the general thesis of reciprocal perspectives. The first assumption, an assumption of symmetry, is the interchangeability of standpoints -- the other, although like me, occupies a distinct location and possesses a distinct perspective. The second assumption, an assumption of identity, is the congruency of the system of relevancies -- as a practical situation unfolds, the other experiences the same circumstances and appreciates their consequences for action in the same way. These assumptions conflict to a certain degree and are defeasible in specific detail, and yet their massive and ongoing application as defaults is the sole grounds on which intersubjectivity is maintained.

When applied to a computer, these two assumptions are a considerable and ongoing act of charity. And since they are predominantly tacit, it can easily escape notice just how much intentional ascription is really going on from moment to moment, and just how completely the machine is failing to reciprocate. This failing has been remarked by authors such as Button [ref] who have taken vehement exception to the appropriation of, for example, conversation analysis as the discursive raw material for the fashioning of technical grammars of action. Yet this critique does not help us to identify the considerable success that such systems have sometimes had in the world, or to trace the members' construction of categories such as success and failure. The ethnomethodological principle of non-irony, of course, recommends that such members' critiques be treated in a neutral way, as local accomplishments, and not raised up to principles of exogenous evaluation. At the same time, the ethnomethodological analysis provides resources for reconstructing the nature of an instability that seems inherent in the social project of computing, and for specifying the systemic difficulties with which both designers and users contend in making whatever accounts they do make of what is going on, here and now, in the use of a particular machine for a particular purpose. What is more, a central question for the members -- who, if anyone, is taking action here? -- also provides an analytical caution for the ethnomethodologist: just as the identity of particular actions is something accomplished by a setting's members, the enumeration of those members and the nature of their membership is likewise produced in and as the setting. It is not good enough to say that the computer is human, or to say that the computer takes actions if the people think it does. 
Instead, we are compelled to take seriously the proviso that these accomplishments are just for all practical purposes in the setting, those purposes themselves being provisional accomplishments of the setting's members as well. If we fall in with an uncritical anthropomorphization of the computer then we will be faced with an anomalous social situation in which the reflexivity of interpretation does admit time out, and in which many actions are not submitted to reflexive interpretation as constituting an ongoing order of activity. A shortcoming in theorists' claims for a machine would be transformed into an altogether radical shortcoming in a particular setting of action.

//7 Patterns of trouble

Such are the challenges facing the otherwise understandable effort to employ ethnomethodological investigation in the analysis of the patterning of what, by members' lights, is troublesome in computer use. Suchman [ref], for example, cites cases in which a machine issued the same instruction twice by simply repeating it; the resulting sequence of interaction was perfectly orderly and accomplished any number of topics of rational order. To speak of repair work in such a setting is perfectly valid, provided that we understand that repair glosses members' categories, and that it is the interaction that the members accounted as broken, not the methodic production of rational order itself.

That being understood, ethnomethodological accounts can be given of a wide variety of breakdowns in computer use; these breakdowns will interest designers who have produced them as troublesome, as recurrently so, and so on. Ethnomethodology cannot rule on such matters, but it can provide the "what's more" by which they are produced. In doing so, of course, the ethnomethodologist is liable to set in motion a formalistic topicalization of interpretive themes that have been designed to refuse such treatment. Such misconstruals may well follow if system design carries on in the exact same fashion as previously; whether it will do so remains to be seen, but it is not inevitable.

To give one example of what I have in mind, let us consider the problem of "where" one is located in using a computer. "Where" one is, whether in the word processor or Web browser or operating system command interface, is perhaps the central relevancy for producing what's going on in using a computer. Much of the phenomenal detail of a machine's outward states depends for its very identity on where one is; likewise, what action one is taking by typing or moving the mouse also depends on this "where". Yet many beginners are not able to see where they are, and other beginners discover that the methods they know for finding out where they are depend on where they are. The instructible visibility of "where" is one prominent requirement of good interface design by the lights of the interface design movement, yet experts routinely underestimate the difficulty of seeing it. Beginners are regularly befuddled, for example, by applications that are themselves open but have no open windows, since they have learned to see where they are by inspecting certain features of the currently opened windows.

The example is trivial, but the pattern is pervasive: an expert in computing who proceeds within the frame of the natural attitude will see, as transparently evident, all manner of detail that is not at all visible to the beginner, and attempts to explain "what's wrong" or "what's going on" or "what to do next" will persistently presuppose details whose necessarily instructed visibility is actually the crux of the difficulty. This is, of course, a potential in any situation of differential expertise. What is distinctive in computing is the nature of the expertise: the expert is not just interpreting the visible evidences of an object in an instructed manner, but is recovering with some reliability the very accounts of the artifact that were originally inscribed into it by the designer. Whereas beginners will formulate accounts that dwell upon more culturally generalized features of the interface, it is as if the expert can see inside the box. Phenomenologically, the skill is similar to that of air traffic controllers who maintain a vivid imaginative sense of the whole picture in the skies, larger than and phenomenally subsuming the monitor upon which the various data about those skies are displayed. But it is different, too, in a crucial way: for the expert, the hand of the system's designer is tangible and the ontology and grammar inscribed in the system are legible. Above all, the abstractions that are real to the designer are real to the expert. This is what expertise in computer use is, and this communion between designer and expert user, laborious yet normally unremarkable, takes considerable pressure off of the thesis of reciprocal perspectives in the case of the expert, while only intensifying it in the case of the beginner. 
The point is not that the expert no longer accounts for the machine's behavior in terms of actions, but that the very category of actions is transformed, or transcoded, into something resembling the technical version of the category that guided the designers. This transformation is not simply rhetorical but represents in some degree the expert's capacity to orient to the designer's dilemma, and to impute reflexively to machine and designer alike awareness of only those relevancies that were, and are, actually available to them.

//8 The theology of engineering

It has often been observed that the rise and institutionalization of quantifying science in the West brought with it a "view from nowhere", a discursively constructed epistemological standpoint that has underwritten generations of claims to objective knowledge and neutral reason, as well as being a material instrument for the centralization of numerous types of authority. Recent research has begun to recover some sense of the massive campaigns of infrastructure and standardization that made the view from nowhere possible [Porter, Bud-Frierman, Latour, Crosby]. The point of this research is sometimes difficult for its subjects -- the technical specialists -- to appreciate. If the universe is ultimately governed by uniform laws then surely no special effort should be necessary to ensure that measurements made in one location are commensurable with those made in other locations. But even the most basic physical reality becomes a human reality, meaningful in ways that can have social consequences, through the mediation of institutions and their practices. Infrastructures are hybrids of the human and nonhuman [Latour], and the workings of these hybrids must be sought in the locales where the work is done, and not just in the results that they authoritatively produce. Bowker [ref] has thus justly emphasized the infrastructural preconditions of computing. In order for the numbers that circulate in a computer to have any consequences in the human world worth worrying about, numbers must also flow in a much larger institutional circuitry; numbers are only meaningful relative to the practices that make them so.

Nonetheless, as I remarked at the outset, a considerable tradition identifies the subject of quantification with a certain conception of God. This is a God, for the most part, whose sole properties are omniscience (perfect knowledge of a technically ordered universe) and omnipotence (having created it all) -- a God not so much known as implied, a discursive location projected into the skies, the supplement of technical reason. Nor can this theology be understood
