TNO 3(4), worth the wait.


Source

Automatically imported from: http://commons.somewhere.com:80/rre/1996/TNO.3.4.worth.the.wait.html

Content


TNO 3(4), worth the wait.

``` ---

T H E N E T W O R K O B S E R V E R

VOLUME 3, NUMBER 4 APRIL 1996

---

"You have to organize, organize, organize, and build and build, and train and train, so that there is a permanent, vibrant structure of which people can be part."

-- Ralph Reed, Christian Coalition

---

This month:
  From librarians to communitarians
  Methods for spontaneous noticing
  Toward a universal event calendar

---

Welcome to TNO 3(4).

This month I begin a two-part series on the future of libraries. Right now is a good time to be thinking about this, as experience accumulates from "digital library" research projects. A great deal is at stake: we can just digitize everything as fast as possible, or we can back up and rethink how people think together and how technology could possibly help. The consequences for librarians are particularly serious: on the first scenario their jobs get smaller as the ideal of public access to information gets narrowed into a technical question; on the second scenario their jobs get bigger as we use technology in new ways to support the diverse needs of different communities.

I've also included a peculiar article about an experiment that my friends and I conducted many years ago at MIT. We felt, and I still feel, that technical work -- at least as it has been organized historically -- incorporates a distinctive way of thinking that can separate us from our experience of our own lives. We wanted to reverse this effect, both for our own good and to help us develop better technical ideas. Here I describe what we did, and what happened. I think that our experiences suggest the outlines of a teachable skill that can actually make people smarter.

A footnote. The Internet is heavily burdened with myths, many of which I have already discussed in TNO. But none of these myths astounds me more than the idea that the Internet owes its success to free enterprise. At practically every public debate about the Internet, it seems, someone gets up during the question period (if not on the panel itself) and launches into a rant about how entrepreneurial foresight and freedom built up the Internet and now the government is coming in to regulate it. This word "regulate" is often spoken with a constricted throat and slow-motion hand gestures that look like the speaker is grabbing at or throttling something. The problem, of course, is that the Internet was a government program from the beginning, and was almost exclusively government-supported for most of its life. The vision for the Internet came from some guys within the military who were influenced not by the methods of industry vendors, with their closed proprietary standards, but by the methods of the scientific community, with its desire for openly shared information. As the Arpanet grew and became the Internet, they saw correctly that a network based on strong principles of interoperability would powerfully support both an open society and a strong economy. The mythology to the contrary is nowhere more confusing than in the case of the Communications Decency Act, which is willy-nilly ascribed to a nebulous bogey called "the government" -- or just "government" -- without any serious analysis of who the various players are both inside and outside the workings of the state. This most recently came home to me in one of the reports from Declan McCullagh on the ACLU v Reno trial in Philadelphia. With the trial under way, the point is to guess the judges' views from their questions and comments. Describing one of the judges, Stewart Dalzell, and his interaction with one of the ACLU's expert witnesses, Scott Bradner, McCullagh says:

Dalzell has a keen sense of humor and seems sympathetic to our arguments. In fact, I'd guess he's been doing some out-of-court web-surfing himself. In an astounding question at the end of the day, he asked Bradner: "Isn't it true that the exponential and incredible growth of the Internet came about because the government kept their hands off of it?"

Bradner gladly agreed. (What else would he say?)

For one thing, he could have told the truth. Declan has done plenty of good things for the cause of free speech, but in this particular case he is repeating some of the most thoughtless cant of the Internet advocacy movement. Let's get real. "Government" is neither good nor bad as an abstraction. Government is a field over which different forces conflict, in which different sets of values become entrenched and then dislodged, and in which psychopathic ambition jostles cheek-by-jowl with people who work long hours in crummy conditions for low pay because they believe in the principle of public service. This is called democracy, and it's pretty good. It works when the people make it work, and when they understand it rather than generalizing wildly about it. "Government" didn't cause the CDA; the CDA was caused by the fund-raising imperatives of an authoritarian social movement that loses nothing by proposing hare-brained solutions to social problems because it can so readily tar anybody who speaks out against it with the broad brush of a nebulous enemy of its own.

---

The end of information and the future of libraries.

My work is thinking about basic ideas of technology in ways that let us see them as products of social processes, and as part of social processes. For example, computing has very particular ideas about how to represent human activities. These ideas have histories. They could be different, and they have significant consequences for privacy.

Let us consider another basic idea of computing, information. We all think we know what information is. Computer people and librarians both define their work in relation to something they call information. But I want to suggest that information might be an obsolete concept, and that emerging technologies are yelling in our ears to move along to other, different ideas.

What is information? We can define it in a narrow technical way. Shannon defined one notion of information in his theory of the capacity of a communications channel; information for him is measured in bits, and each bit is a distinction that is meaningful to the parties on each end of the channel. Bateson said something similar when he defined information as differences that make a difference. Computer people often speak of information in terms of the states of digital circuits that represent binary states of affairs in the world.
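To make the narrow technical sense concrete, Shannon's measure for a source that emits symbol i with probability p_i is the standard formula

    H = - \sum_i p_i \log_2 p_i    bits per symbol

so that a fair coin flip carries exactly one bit, and a loaded coin carries less. Notice what the formula does not mention: what any of the bits mean.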

In each case, information is an idea that builds a bridge between the states of artifacts and meanings in people's lives. We often hear that this is an information age, or an information revolution, or that information rather than capital is now driving the global economy. It is not at all clear what any of this means. I think that in practice we tell three stories to ourselves about information. Each story profoundly affects our thinking by encoding particular views about the relationship between designers, information users, and information itself. I will refer to these stories as information processing, masculine transcendentalism, and information professionalism.

(1) Information processing

Computers originate in automation; "computer" was originally a job title, not a machine. Early computing methodologies were modeled on industrial automation methods -- a flowchart is really an industrial process chart. When you hear the phrase information processing, therefore, I want you also to hear phrases like food processing and sand and gravel processing. Information, according to this story, is an industrial material like corn or oil or metal.

The information processing story assigns particular roles to designers, users, and information:

designers   - gods
users       - factory machines
information - processed material

(2) masculine transcendentalism

I take this marvelous phrase, masculine transcendentalism, from the historian of technology David Noble. We can see masculine transcendentalism at work in Wired magazine, or in all of the hype around artificial intelligence or virtual reality. The story is this: someday soon, the physical world is going to wither away. Everything is going to become digital. All of our minds will be downloaded onto machines. All of our books and paintings will move into digital media. We will no longer have bodies, and most amazingly of all, we will work in the paperless office. Noble's brilliant insight is that this is a religious worldview, and his historical research demonstrates compellingly that it developed out of a religious worldview without any particular discontinuity along the way. It is a millenarian worldview in that it posits a perfect future in which everything will be transformed. It is a transcendental worldview in that it calls for the whole world to be raised up and dissolved into an incorporeal realm that leaves the body and all the messy stuff in the social world behind. It sounds funny and hyperbolic when you frame it this way, but it is an enormously influential way of speaking in industry and elsewhere.

Here, then, are the basic relationships posited by masculine transcendentalism:

designers   - prophets
users       - caught up in an inevitable rapture
information - the fabric of heaven

(3) information professionalism

Information professionalism is a story that both computer people and librarians tell, but I want to focus on the librarians' version here. This story goes: we are professionals; there is this stuff called information; and our professional expertise consists of managing large bodies of information and connecting people with information. These professionals are generalists, or specialized at most to very broad areas, and libraries treat very disparate kinds of stuff in the same way. This view is understandable when you have a dozen librarians in a library building, and they are buying, cataloguing, and managing information that a hundred different kinds of people are using. The librarians need to routinize their work, and they need highly rationalized, detailed procedures so that the product of their work -- a catalog, for example -- is uniform and so that this product can be produced efficiently. Libraries have themselves been factories in many ways -- thousands of books just have to get catalogued. None of this is a criticism of librarians, who have been working within the constraints of particular technologies and institutions.

Here, then, are the relationships that the information professionalism story posits:

designers   - professionals
users       - individuals with information needs
information - homogenous stuff to be stored and retrieved

I do believe that information technology is contributing to a major change in the world, but I think that this is precisely a change that makes each of these stories obsolete. The old-fashioned factory story is already under heavy attack -- we've automated an awful lot of tasks already, and the resulting machinery requires a lot of skill and expertise to use. But it is striking that we haven't often questioned this view in the context of information.

Masculine transcendentalism, for its part, is really one of those yesterday's tomorrows, like the Jetsons. If we look at what is really happening in the world, we see information technology as a nervous system for the physical world, not as a replacement for it. (See, for example, TNO 1(5).)

But it's information professionalism that I really want to focus on. The problem with information professionalism is really a problem that the others share underneath: it treats information as a homogenous substance. A good way to think about information is that it's the professional object of librarianship. Every profession has its object: for law everything is a case, for medicine everything is a disease, and for librarianship everything is information. In each case, someone walks in the door with a problem, and the professional's job is to find their object in that problem, and to talk about the problem in a way that makes it sound like a case, a disease, or information that can be compared with other cases, other diseases, or other information.

There's a deep trade-off: each profession achieves generality by reducing everything to a common denominator, leveling everything to common terms. Each profession can help everyone, but it cannot help anyone very well. Library materials are indexed in a very sophisticated way -- certainly much more sophisticated than the keyword searches that prevail on the Internet -- but it is one uniform indexing scheme, despite the many different places that different patrons might be coming from in their lives.

We can think about solving this problem by using information technology to support several different coding schemes, and I think this is a good thing to do. But I want to back up and suggest a more radical approach. Let's get beyond the stories we have told ourselves about information and tell different stories about different sorts of objects.

I want to suggest that the defining feature of our new world is that people talk to each other, a lot, routinely, across distances, by several media. It makes no sense any more to ask how individuals use information. Instead, let us ask how communities conduct their collective cognition. Let's define a community, as per TNO 2(7), as a set of people who occupy analogous structural locations in society. The residents of Palo Alto are a community, but so are cancer patients, corporate librarians, and people who are in the market to buy any particular sort of product. Emerging technologies allow communities to think together. The fact that cancer patients can think together is already turning medicine inside-out. The fact that customers in computer-related markets talk intensively to one another on the Internet is increasing the amount and variety of information in the marketplace. The future, in my view, belongs not to information but to this active process of collective cognition in communities.

It might be objected that we will always have libraries and bookstores, and they will still be full of information. But that's not the best way to look at it. The first thing that library cataloguing schemes lose is the dialogic nature of articles and books: they are all turns in a conversation, responding to a particular literature or cultural background and addressed to a particular audience. Every community conducts its collective cognition through diverse mechanisms, from rumors to conferences to newsletters to wandering bards to Internet mailing lists to articles and books. The library is one window on this whole dynamic interplay, but it is not a window that lets us see that dynamic interplay very clearly. Perhaps it is an artificial window, a means to serve a subset of "information needs" that is largely an accident of past technologies and institutions. Many different kinds of energy pass through the library, but the library reduces them all to information retrieval, a homogenous category that it can work with.

The solution, I think, is not to pave the cowpaths by automating the institutions we have now. Instead, I think we should explore the full range of means by which we can support the collective cognition of communities. Every community has its own mix of communications mechanisms, its own history and institutions, its own symbols and vocabularies, its own typified activities, its own constellation of relationships, and perhaps most importantly, as TNO 2(11) suggests, its own genres of communicative materials. If we want a focal concept to replace information, we might want to choose genres. Genres are stable, expectable forms of communication that are well-fitted to certain roles in the life of some particular communities. Business memos, opinion columns, action-adventure movies, Interstate Highway signs, business cards, and talking-head TV political shows all have stable forms that evolve to serve needs in the midst of particular activities.

I don't think we should be automating information professionals out of business. Quite the contrary, I think we should be giving them a bigger job: reaching out to support the collective cognition of particular communities. This might include systems to support the creation, circulation, and transformation of particular genres of materials. It might include setting up and configuring mailing lists or other, more sophisticated tools for shared thinking. It might include both face-to-face and remote assistance. Distributed alliances of librarians might support specific distributed communities, while comparing notes with one another and sharing tools.

This view has many consequences. It follows, for example, that a digital library isn't one big system but a federation of potentially quite different systems, each embracing a range of functionalities and fitting into people's lives in potentially quite different ways.

It also follows that each community will have, to some extent, its own infrastructure with its own evolution. Standards are crucial. Tools for shared thinking work best when everyone is using them, and so supporting a community's transition to new tools will require consensus-building, well-timed coordination, training, and a shifting division of labor between professional librarians -- or, as we might start calling them, communitarians -- and mutual aid and self-help among a community's members. No more factories, no more millenarian fantasies, no more isolated information warehouses. Instead, perhaps, we might be able to build, and help other people to build, the interconnected pluralistic society that we so badly need.

---

A story about noticing.

In graduate school I worked in artificial intelligence. I had chosen to study AI from adolescent sorts of motives: it seemed cool at the time. Along the way, though, I started to grow up. This is never easy, but my lack of a broad liberal education made it much harder. In studying AI, I became socialized into a distinctive way of thinking and using language that made it hard to think anything else. I gradually emerged from the AI worldview by a peculiar route. AI people have frequently used informal introspection to guide their model-building. Together with some friends, I tried something similar but (as it turned out) importantly different, and the results were remarkable if difficult to communicate. I think the story is worth telling, both because of the intrinsic value of the research methods we developed and because of the larger lessons we might learn about technical thinking and technical work.

Many linguists and others have noticed an interesting phenomenon: if you spend a good part of your workday studying a certain formal aspect of human life -- say, a certain grammatical form -- then you will start spontaneously noticing examples of it in your life outside of your research. Many linguists collect the examples they notice this way, making themselves nuisances at dinner parties when they suddenly point out the unusual phrase construction of someone's previous utterance. But we had never heard of anybody actually making this phenomenon into a deliberate strategy of research. That's what we tried to do.

Our basic motivation was our belief that AI's ways of talking about people's lives were wildly at odds with the reality of those lives. But this was a hard argument to make, since AI regularly proceeds by making up little stories that sound like plausible things that could happen in real life while also corresponding conveniently to the capacities of particular technical schemes. How could we show that these types of stories misrepresented everyday life (i.e., real, genuine, authentic everyday life and not the fictional constructions of it in AI papers)? Could we show that such things never happened? That they were atypical of everyday life in some statistical sense?

The only way to begin, we thought, was to start collecting real stories of everyday life. But how to select these stories? We shot several videotapes of people as they made dinner, but then we decided that we weren't about to invent a coding scheme to categorize an hour of complicated videotape. This, then, was the attraction of the spontaneously noticed stories: they were relevant to the theoretical issues that interested us, and we didn't have to undertake any special effort to gather them beyond remembering to write them down. It will no doubt be objected that we couldn't remember the stories accurately. But keep in mind that our baseline was the (most commonly) totally fictional stories of AI papers; if anything the biases of reconstructive memory would bring the real stories back into line with that sort of artificial neatness. And we were only after heuristic stimulation, not hard data in any traditional sense.

Through much trial and error we developed a methodology. The first step is choosing a broad category of real-life situations that you're interested in, let us say "mistakes". Now, it turns out that "mistakes" is far too abstract and general to provoke much noticing. But let's say that we notice a particular mistake and take the trouble to write it out. For example, last night I was using an automatic teller machine and twice hit too many zeroes when entering amounts with the keyboard. To provoke the noticing of further such events, it turns out to be crucial to (1) write out the story from memory in extreme detail, as much detail as you can remember, and then (2) invent a category that includes this story but is half as abstract -- let us say, mistakes caused by trying to do something repetitive too fast, or even mistakes caused by trying to do something repetitive too fast and doing one too many -- and then write out an explanation of that category into your notebook. We referred to this second step as "intermediation", since it involved the invention of a category that is intermediate in its abstraction between the existing abstract category and the specific concrete example at hand. It doesn't matter whether you formulate this category "correctly" -- different people would no doubt formulate it in different ways. What matters is the act of formulating it, explaining to yourself in writing how it subsumes the example, and explaining to yourself in writing how the more abstract category subsumes it in turn.

Intermediation is a sure-fire way to provoke noticing. The effect is amazing. What's really amazing is what happens if you make a habit of it. We spent an hour or two every day writing out episodes that we had noticed and intermediating from them. The more we did this, the more episodes we would notice. After a while we learned that we could deliberately "steer" our noticing in one direction or another, depending on what theoretical questions we were interested in -- just choose the aspect of the new episode that interests you most and define an intermediate category appropriately. With this method we can investigate a given category of phenomena in much more detail and complexity than we could by just collecting anecdotes (already a common method in psychology). After a while you'll accumulate what mathematicians call a "lattice" -- a structure defined by a partial order, in this case the order "is a more general category than". It helps if you can draw the lattice on a sheet of paper.
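For readers who keep their notes on a computer rather than a sheet of paper, here is a minimal sketch in Python of the lattice bookkeeping. It is purely illustrative -- we used paper notebooks -- and all the names are made up:

    # Each category records the more general categories it refines;
    # the order "is a more general category than" emerges from
    # following parent links.

    class Category:
        def __init__(self, name, parents=()):
            self.name = name
            self.parents = list(parents)  # more general categories

        def ancestors(self):
            """All strictly more general categories, by name."""
            seen, stack = set(), list(self.parents)
            while stack:
                c = stack.pop()
                if c.name not in seen:
                    seen.add(c.name)
                    stack.extend(c.parents)
            return seen

    mistakes = Category("mistakes")
    too_fast = Category("repetitive action done too fast", [mistakes])
    one_extra = Category("repetitive action done too fast, one too many",
                         [too_fast])
    assert "mistakes" in one_extra.ancestors()

The point of the exercise, though, is the writing-out, not the bookkeeping.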

You may ask, what earthly use is this? We found it fabulously useful as a way to establish contact between abstract theories and empirical reality. It is similar in this regard to ethnography or other kinds of qualitative description. It is less appealing in that it doesn't seek a thick theorization of its materials, but on the other hand it grounds the concepts in one's own subjectivity as spontaneously noticed and not in the systematically observed behavior of someone else. I find it hard to explain except to say that I found it compelling and kept doing it, as I say, for a few years.

Let me describe a particular observation we made using the method. For a while we used it to explore a particular theory invented by my friend David Chapman, called "semantic cliches". Semantic cliches are simple formal structures that seem to recur frequently in the world's ideas. Mostly they correspond to simple mathematical structures. Take for example the notion of a total order: a structure consisting of a set of entities and a relation on them, such that every pair of entities which are different from one another has a "greater" or a "lesser" according to the relation. Examples are endless in the folk theories of the world: temperature, loudness, smartness, hotness, powerfulness, etc. The point isn't that the reality has that structure but that the ideas have that structure, though of course the relationship between the ideas and reality is probably not arbitrary. In his paper on semantic cliches -- which, true to the culture of the lab where we did our graduate work, was only published as an internal lab report -- David identified a few dozen of these cliches. Another one is propagation: you have a mathematical graph, and one of the vertices has a certain property at a certain time, and then this property spreads out across the arcs of the graph to successively broader sets of vertices. You can refine the cliches to make them more specific (once again, in a lattice). So for example, one kind of total order is a finite totally ordered set, which of course will have a greatest and a least element.
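For the record, the total order cliche can be spelled out precisely. A total order on a set S is a relation \le such that, for all a, b, c in S:

    a \le b  \lor  b \le a                        (any two entities are comparable)
    (a \le b  \land  b \le a) \Rightarrow a = b   (antisymmetry)
    (a \le b  \land  b \le c) \Rightarrow a \le c (transitivity)

and a finite set carrying such a relation must indeed have a greatest and a least element, as the refined cliche says.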

We were studying semantic cliches, then, and this caused us to notice things in the world that were examples of the particular semantic cliches we were studying. So of course we set about intermediating the various semantic cliches and the various other concepts associated with the semantic cliches. Along the way we discovered a great many examples of semantic cliches, based on episodes we noticed in which one or another property of them was at stake. And more interestingly, we started noticing lots of analogies between different parts of life that we did not formerly think of as analogous. More interestingly still, we noticed our lives start changing rapidly -- not in deep, meaningful ways, but in lots and lots of small, simple, logistical sorts of ways. For a long time we thought that we had discovered a previously undetected phenomenon: the continual evolution of the most ordinary routines of daily life. And so we set about intermediating on the category of routine evolution. This was what my dissertation was originally going to be about, until I got derailed by the immense difficulty of using AI's technical concepts to build anything that has any genuine relationship to people's everyday lives.

In any case, we eventually discovered that the routine evolution we were noticing had various components -- that is, a variety of qualitatively different mechanisms of change -- and that the most productive of these components was being induced by our method of investigation. That is, the cycle of noticing, writing down stories, intermediating, and noticing again was causing our lives to change. Why? Precisely because the various categories, and most especially the semantic cliches, were mediating numerous analogies in our heads. Now, if you've read the literature on analogical reasoning (and particularly if you've read Jean Lave's critique of it in "Cognition in Practice") then you're aware that people only really make experience-distant formal analogies when their attention is somehow brought to the analogy. Their attention can be directed in several ways: experimenters can point out the analogy, metaphors or other linguistic means can be used to draw the analogous situations under a common description, printed forms or other mediating artifacts can be used to structure the situations within a common form of activity, and so on. Some of these means might be consciously aimed at causing people to notice analogies. Others might be fortuitous, or might be part of a culture's more deeply meaningful set of metaphors and categorizations, or whatever. In our case, our attention was drawn to the analogies because we were deliberately using a certain abstract vocabulary to describe the forms of everyday events, and the common vocabulary we assigned to otherwise dissimilar situations was causing us to spontaneously notice analogies between them. These analogies, moreover, were frequently causing us to notice slightly better ways to do things that we already did in basically acceptable ways on a routine basis every day.

Let me give you an example. We had an acetylene torch in our kitchen that was operated by a trigger that generated a piezoelectric spark. I often used this torch in the dark. Don't ask why. The problem was that the torch only worked when a certain knob, which turned to one of perhaps four positions, was turned to the second position. For a long time I would have to squint at the knob in the darkness to see if it was in the right position. Eventually, somehow, I came up with the idea of turning the knob all the way to its counterclockwise limit and then turning it one notch clockwise, after which I could guarantee that it would be in the second position. Well, it so happened that a few days later I was in a car with an automatic transmission, shifting back and forth between drive and reverse repeatedly to get out of a tight parking space while pedestrians kept jumping between cars trying to get me to break their legs. Whereupon poof I noticed the analogy with the torch and started whacking the shifter into park and then one notch right into reverse rather than looking at the shifter and clunking it two notches to the left each time I shifted from drive into reverse.

I'm quite sure that the semantic cliche of a finite total order mediated this analogy. Why am I sure of this? Because I was quite conscious of it at the time. Why did I notice and think to write down the fact that the analogy was mediated by a semantic cliche? Yes, that's right, because I had been intermediating on the phenomena of analogical transfer through intermediated categories. By that time it had grown quite common for noticings to trigger other noticings three or four levels deep: I would notice an instance of some intermediated category in the midst of taking out the trash, whereupon I would notice that that noticing was itself an instance of some completely different intermediated category, whereupon I would notice that that noticing was itself an instance of yet a third completely different intermediating category. It would take quite a while to write all of this down on paper. If I wrote it all down right away, or within an hour or two, I could be confident of having remembered it all pretty accurately, since the intermediated categories provided a precise vocabulary for articulating what had just happened. I also intermediated extensively on the process of writing the stuff down. I'll never forget one day when I was writing out a particularly complex chain of these noticings, and found that something I had just thought while writing had triggered a sequence of noticings that chained so fast that I could not remember it all. It was a bizarre, quasi-mystical experience. It persuaded me that it was time to stop this absurd exercise and start writing my dissertation.

What did I gain from all this? It would be hard to tell you, much less convince you. For my own purposes, though, I am convinced that a couple of years of regular intermediation literally made me much smarter. I think the part that did the most to make me smarter was intermediating on the formation of analogies. As I wrote out my thoughts on a variety of topics in my notebook, I would often notice analogies between ideas that I had never connected together before, and even if the analogies seemed pointless I always wrote them out and followed through all of the suggestions that each analogous thought would make for the line of thinking represented by the other. Many of my best ideas in graduate school arose this way, and it is commonly held that many important discoveries (ones far more important than mine) also arose through the noticing of analogies. By intermediating on the process of noticing and working through analogies, I found that I noticed lots more analogies than I had before, and that I therefore had many more ideas than I had had before. They were not always good ideas, but that's alright, since you only need one really good idea to contribute something to the world before you die.

Eventually I stopped intermediating and stopped noticing things in that spontaneous way -- at least I stopped noticing things any more than anybody else does. But I do believe that my experience of intermediation left me thinking much more clearly than I did after my rigorless schooling and the murky commercial culture upon which I wasted so much of my childhood. I got some idea of what concreteness means, and abstractness, and the difference between an idea that sounds good as words and an idea that I can see in my own experience. I learned to be open to spontaneous noticing, and I learned to have respect for the immense complexity and wisdom and order of my own everyday life beyond my conscious awareness of it. And above all I learned to get intellectual concepts -- those of AI, and by extension all others -- in perspective. We don't really know that much, but we know a few good things, and through discipline and humility we can open ourselves to learning more from the simplest things around us.

---

Wish list.

I wish for a universal event calendar. It would be easy to do. Someone would set up a simple WWW form with all the entries you might want: date, time, venue, title, speaker, price, parking, etc. In the simplest mode, the server could generate web pages with a nice layout. Or it could have several different forms for different kinds of events: one-time, weekly, monthly, speaker series, competitive sporting events, political rallies, church services, weekend workshops, film showings, book signings, art exhibitions, legislative hearings, and so on. In successively more complex modes, with more complex form interfaces, event promoters could design more complex ads. These ads might have, for example, links to pages for venues that include directions and a map to print out. City council hearing announcements could include a link to the agenda. The whole thing would be linked to a relational database, some search engines, a notification service, and so on.
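As a way of seeing how little machinery the simplest mode would need, here is a minimal sketch in Python; every field name is made up, and a real service would of course sit on the relational database rather than a list in memory:

    # A hypothetical event record and the query a server might run.

    import datetime

    events = []  # stand-in for the database table

    def post_event(title, venue, city, date, time, price=0, url=None):
        """What the WWW form would submit in the simplest mode."""
        events.append({"title": title, "venue": venue, "city": city,
                       "date": date, "time": time, "price": price,
                       "url": url})

    def events_in(city, on=None):
        """Everything happening in a city, optionally on one date."""
        return [e for e in events
                if e["city"] == city and (on is None or e["date"] == on)]

    post_event("City council hearing", "City Hall", "San Diego",
               datetime.date(1996, 4, 15), "19:00",
               url="http://city.example.org/agenda")  # hypothetical link

The fancier modes would layer more elaborate forms, ads, and links on top of records like these.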

The universal event calendar could be a viable business. It would be free for the event-seeker. It would also be free for nonprofit advertisers using the simplest formats. Others would pay, though it wouldn't take much to keep the computers running. Once people started consulting the calendar regularly, it would profit event organizers to list their events, which would then motivate more people to consult the calendar.

The calendar service could be interconnected with an awful lot of other online services through a distributed object system with a relational database for queries. Take concerts. You could have an object for each artist and each venue, and calendar entries could point to them. If the Firefly music-searching system (which I mentioned in TNO 2(9) when it was still an MIT Media Lab project called HOMR) used these objects then users could easily ask about concerts by bands they have found through Firefly, and they could also jump into Firefly to ask about records by bands they have found out about through the calendar service. Firefly could even be set to notify you whenever a band is coming to town that you are likely to want to see, based on the correlations between your tastes and those of other users.

The relational database would be good for travelers. Someone who traveled a lot on business and always wanted to know where the Alcoholics Anonymous meetings or Unitarian Universalist Church services were could arrange for them to be listed in advance, complete with schedules and maps for each city on a prospective itinerary.

The calendar service would also be a good way for an organization to remind its members of meetings. If the organization posts its meetings and the members use a reminder service then they will always have up-to-date information. If the system lists events that are not public then it will need an authentication system, which might be a nuisance. It will need an authentication system anyway, though, to avoid bogus entries (e.g., an enemy of Greenpeace announcing a Greenpeace meeting in a dark alley late at night) or bogus changes to existing entries.

---

This month's recommendations.

Irresistible Rhythms, Route 1 Box 1320, Buckingham VA 23921. The Fall/Winter 1995-96 Irresistible Rhythms catalog is the best world music catalog I've seen. You could order at random from it and be guaranteed to get terrific music. Its strengths are in the most commercial areas -- African pop, Latin, Caribbean, and Cajun/Zydeco; it makes little attempt to cover music from Asia or the indigenous peoples of Australia or North America. But that's okay; the stuff it does list is all great. If you don't trust the hype or want more detailed information, refer to the Rough Guide to World Music, which I recommended in TNO 2(3). The Smithsonian Folkways catalog is cool, too, and some of it is on the Web: http://www.si.edu/products/shopmall/records/start.htm With Folkways, though, beginners should be sure to choose modern recordings with good stereo imaging; the old ethnomusicological recordings often sound more like laboratory data than music you would want to party to.

Joan Greenbaum, Windows on the Workplace: Computers, Jobs, and the Organization of Office Work in the Late Twentieth Century, New York: Monthly Review Press, 1995. The more hype we hear about impending total revolutions, the more we need historical research that puts things in perspective. Joan Greenbaum's book about the modern development of office work could therefore not be more timely. In focusing on the point of view of the people who actually work in the offices, it escapes the pitfalls of technology-driven utopian and dystopian scenarios, as well as the overly simple visions of managers and consultants who don't really know what their employees do anyway. Applicable history.

Community Technology Center News and Notes. This is the semi- annual newsletter of the Community Technology Centers' Network (CTCNet), which is now part of the Education Development Center in Newton, MA. It's a good place to read about all the excellent things that people are doing with community access to technology in the US. CTCNet was originally known as the Playing to Win Network; it was founded by the most excellent Antonia Stone, who was doing this stuff way before it was fashionable. Subscribing for $20/year can keep you in touch, and if you have more money they could really use it. EDC, 55 Chapel St, Newton MA 02158, (617) 969-7100 x2727, ctcnet@edc.org, or http://www.ctcnet.org

William F. Hanks, Language and Communicative Practices, Boulder: Westview Press, 1996. This textbook is the most accessible introduction to an important intellectual tradition that seeks to synthesize two equally strong but seemingly irreconcilable approaches to the study of human language. Hanks defines the first approach through its emphasis on the "irreducibility" of language -- that is, the sense that language has its own autonomous structure, particularly grammatical structure, that we can study without much reference to the actual activities in which language is used. And he defines the second approach through its emphasis on the "relationality" of language -- that is, the sense that we can only make full sense of language in the context of particular, complicated, historically specific, ongoing relationships between people. The first approach, taken to extremes, produces the militant formalism of contemporary structural linguistics downstream from Noam Chomsky. The second approach, taken likewise to extremes, produces the militant relativism of some postmodernist and poststructuralist analyses of meaning. Going to extremes can be valuable for a while, but somebody has to bring things back to a synthesis before too many generations of students get schooled in the rhetoric and tactics of intellectual intolerance. And that, potentially, is the value of Hanks' book, which is intellectually demanding in the good sense: it requires the reader to travel deeply into the phenomena of language, postponing the leap to classifications and allowing the phenomena to speak through the refractions of various theorists. The resulting synthesis will not please anybody, since so many tensions will remain, but those tensions simply indicate that we have a long way to go yet in our understanding of the actual phenomenon of language.

---

Follow-up.

My wish for a life expectancy server in TNO 2(9) got reprinted in Wired and provoked some replies. One person described the Health Risk Appraisal (HRA) system that the Centers for Disease Control built in the early 1980's "that would take inputs on about 20 variables and predict your life expectency, highest risks for death, and the changed life expectency if you changed certain behavior." It asked you questions; his favorite was "Do you frequently argue with or criticize strangers?" He thinks it loaded up on the homicide-risk factor.

In TNO 2(12) I noted the recent bifurcation in the English word "victim". When pronounced with a normal stress it refers to people who have been harmed by some people's bogeys, but when pronounced with an extra-heavy stress on the first syllable it refers to people who choose to portray themselves as having been harmed by some other people's bogeys. People in the former group have boundless moral authority; people in the latter group make us sick with their whining and refusal to accept personal responsibility. On public radio's "Marketplace" program the other day, I heard a fascinating extension of this distinction. In a report on downsizing, a representative of the National Association of Manufacturers referred obliquely to those who "would rather victimize" the laid-off employees than see their situation as the natural order of things. In other words, the verb "to victimize" can now be used to mean "to portray as a victim". The amateur linguist in me was thrilled.

Web picks.

A thorough directory of Internet resources for job hunters can be found at http://www.jobtrak.com/jobguide/ To sample the hype from the recruiters' end, check out the IBJ interview at http://www.phoenix.ca/sie/publish/ibj/recruit.html Of course it's a good thing to have efficient ways to find a job. But there's a danger too. If employers' costs of hiring are greatly reduced then, by basic economics, other things being equal, jobs will become less secure. Employers' needs evolve continually, so there probably exists a person who is better qualified for your job than you are. If it is expensive to find and hire that person then it's in your employer's interest to provide you with the training and other opportunities that you need to get up to date. But if it's cheap to find and hire that person then you're out the door tomorrow. What is more, as the "human resources" person who was interviewed in IBJ remarked, the net is especially good for finding people who already have jobs but are browsing around to see what might be better. As the costs of hunting for new jobs decrease, the number of people who are eyeing your job increases, thus forcing you to spend a lot of time eyeing other people's jobs as well. This can be terribly inefficient, given how expensive it is to change jobs, but those costs lie squarely on employees, not employers. So think twice before you embrace this brave new world in which you're an interchangeable part.

A company called Offshore Information Services Ltd., located on the 88-square-mile Caribbean island of Anguilla, claims to make it easy to create an "offshore online identity". Their publicity asserts that "Anguilla has no restrictions on publications about dead presidents of France, or information about birth control, etc." Their URL is http://online.offshore.com.ai/

My local webmaster, Bruce Jones, has pointed out that two useful critiques of html programming style are: Art and the Zen of Websites http://www.tlc-systems.com/webtips.shtml and the HTML Bad Style Page http://www.earth.com/bad-style/ Also, a useful guide called "Creating well-designed Web pages that are efficient to transmit and navigate (or: being kind to people with slow modems and those in developing countries with expensive Internet access)" by Philip Bogdonoff of the World Bank Electronic Media Center is at http://www.worldbank.org/html/emc/documents/zippywww.html#fewscreens

A good collection of Web resources on transportation can be found at http://dragon.Princeton.EDU:80/~dhb/

A pretty good brief survey of the conflicting statistics about net use can be found at http://webcom.com/~piper/9512/whois.html

Easily the coolest thing at CFP'96 was a moot court concerning a hypothetical US law banning domestic cryptography. Most of the resulting documents are on the Web in the CFP Web pages at http://swissnet.ai.mit.edu/~switz/cfp96/plenary-court.html

---

Phil Agre, editor                                  pagre@ucsd.edu
Department of Communication
University of California, San Diego                +1 (619) 534-6328
La Jolla, California 92093-0503                    FAX 534-7315
USA

---

The Network Observer is distributed through the Red Rock Eater News Service. To subscribe to RRE, send a message to the RRE server, rre-request@weber.ucsd.edu, whose subject line reads "subscribe firstname lastname", for example "Subject: subscribe Jane Doe". For more information about the Red Rock Eater, send a message to that same address with a subject line of "help". For back issues etc, use a subject line of "archive send index". TNO is also on WWW at http://communication.ucsd.edu/pagre/tno.html

---

Copyright 1996 by the editor. You may forward this issue of The Network Observer electronically to anyone for any non-commercial purpose. Comments and suggestions are always appreciated.

--- ```
