Automatically imported from: http://commons.somewhere.com:80/rre/1999/RRE.notes.from.RRE.reade.html
Red Rock Eater Digest | Most Recent Article: Thu, 18 Jan 2001
[RRE] notes from RRE readers
[I have enclosed a batch of messages that RRE readers have sent me on various topics that I have raised on the list. You may recall that I promised to circulate all of the messages that I received in response to my call for comments about educational and community computing. Well, I've failed to deliver on a couple of such promises in the past, and now I've remembered why. For a dozen miscellaneous reasons it turned out to be quite complicated to determine which messages could be circulated, so I ended up asking everyone for permission. Several people revised their messages, and others asked me to tell you that they hadn't been writing for a broad audience. After I got done with all of that, not many messages on those two topics were left. So I've enclosed (by permission) several more interesting messages by RRE readers on topics such as Microsoft, online bookselling, typography, and science fiction. All the messages have been reformatted. My apologies for any problems with the messages themselves (wrong version sent to the list, etc) or the formatting. I did my best.]
---
This message was forwarded through the Red Rock Eater News Service (RRE). Send any replies to the original author, listed in the From: field below. You are welcome to send the message along to others but please do not use the "redirect" command. For information on RRE, including instructions for (un)subscribing, see http://dlis.gseis.ucla.edu/people/pagre/rre.html or send a message to requests@lists.gseis.ucla.edu with Subject: info rre
---
Date: Mon, 05 Jul 1999 09:49:30 +0001
From: Bruce ONeel
The NetFuture newsletter might be a bit of a help here. You can find it at:
http://www.oreilly.com/people/staff/stevet/netfuture/
They've had a nice set of articles recently about distance learning in universities, with a lot of thoughtful comments. It doesn't tend toward the tech cheerleading end of things.
Good luck with your travels.
cheers
bruce
---
Reality is 80m polygons - Alvy Ray Smith
Bruce O'Neel - beoneel@mindspring.com
http://homepage.iprolink.ch/~bioneel/beo/beo.html - daily stuff
Date: Mon, 05 Jul 1999 11:25:32 -0500
From: "Kenneth D. Forbus"
[...]
A couple of things off the top of my head:
** Classic mistake #1: Technology without curriculum. Some places love making short movies, java applets, etc. that demonstrate, in a flashy way, some principle or idea. Certainly can be more attention- getting than a picture in a textbook or on a whiteboard. But without a solid curriculum setting, students won't learn anything from it (or at least, not what you think they are learning from it). In a constructivist curriculum, for instance, most of these small demonstrators play at best a minor role, as a resource for students to draw on while doing project work. Technological supports for student project work involve more complex design issues.
** Classic mistake #2: High-tech filmstrips. Software that you can't really do anything with, aside from running a small range of canned examples, isn't that much better than traditional media. Can the student use the software in their own investigations? Does the software help students tie what they are learning back to the world somehow?
** Classic problem: Not enough computers, and not enough good computers. Example: My group is working with several Chicago Public School teachers, developing inquiry curriculum as part of an NSF education center that has NWU, U Michigan, and the Chicago and Detroit public school systems as partners. The classrooms we're dealing with have maybe a computer lab somewhere in the building, or a computer that someone can borrow. Over 90% of the students are below the poverty line, so passing out CD's for them to use at home isn't an option, as it is with some schools. And what they do have is well behind the state of the art, which, again, forces one to be very careful and creative in design.
All that said, if you compare having hundreds of students sitting through a traditional lecture to a well-designed multimedia application, they're probably better off with the app, or even a video, to go at their own pace and have some control over what they do when. And of course fast-forwarding through the tedious bits :-). Given the need for lifetime learning and distance learning, it's really important to figure out how to do these sorts of things right.
Date: Tue, 06 Jul 1999 13:29:53 -0500
From: "Kenneth D. Forbus"
We do have some other papers on our web site about our other educational software, and you can download stuff yourself and kick the tires if you have the inclination:
Papers: http://www.qrg.ils.nwu.edu/papers/papers.htm
Articulate virtual laboratories project description: http://www.qrg.ils.nwu.edu/projects/NSF/avl.htm
Virtual Solar System project (public outreach project, NASA-sponsored): http://www.qrg.ils.nwu.edu/projects/vss/index.htm
Various pieces of software: http://www.qrg.ils.nwu.edu/software/software.htm
[...]
Ken
---
Prof. Kenneth D. Forbus
Qualitative Reasoning Group, The Institute for the Learning Sciences
Northwestern University, 1890 Maple Avenue, Evanston, Illinois 60201, USA
email: forbus@ils.nwu.edu  voice: (847) 491-7699  fax: (847) 491-5258
http://www.qrg.ils.nwu.edu/
---
Date: Mon, 05 Jul 1999 14:34:24 -0400
From: flaps@dgp.toronto.edu (Alan J Rosenthal)
To: Phil Agre
>So my question is, what can go wrong with technology-based teaching, especially in large college classes?
I trust you're familiar with the various effects of the asymmetric social connection between, say, two rooms, each containing half the class, where the video/audio connection runs (all or most of the time) only from the room with the lecturer to the other room. People in the other room can't be heard by the lecturer, so they're freer to talk, to say things amusing to the rest of the room which they wouldn't dare say in front of the lecturer, and so on. Sometimes this results in the lecture declining to television status, as most people in the room basically ignore it, but I'd imagine it always has SOME effect.
I haven't been in such a class, but I've seen and heard about this situation in "offload" rooms at popular talks, etc.
regards,
Date: Mon, 5 Jul 1999 11:56:12 -0500
From: Chuck Huff
Phil,
This reference is not directly to the point, but I expect you will find it helpful. Look at the winter 1999 issue of Daedalus. Astin (I forget the first name) has an excellent review of the literature on the effects of higher education on students, with particular emphasis on liberal arts colleges. He lists the variables that have the largest effect on student outcomes and satisfaction. Peer-peer and student-faculty interaction rank high on this list. You might compare this with what the current technology is good at giving to students.
A bonus would be to take the other items Astin lists and do the same analysis: what matters most in education vs. what technology can help with. My issue is at home (on the nightstand) so I can't give you the exact reference from here.
-Chuck
PS: I would warn you away from the "internet-addiction" issues. I have found very little good data to suggest that internet access per se is the problem here. I can give some references here if you need them.
Date: Mon, 5 Jul 1999 12:02:59 -0500
From: Chuck Huff
Alexander W. Astin, (1998, Winter). How the liberal arts college affects students. Daedalus, (vol 128 no. 1), p 77-100.
-Chuck
Date: Tue, 6 Jul 1999 11:18:51 -0700 (PDT)
From: Alan Weiss
Dear Phil,
In response to your request for help (anecdotes, rants, etc.) on what can go wrong in technology-based teaching, I'd rant that one major thing that can go wrong is that a lot of potential students will be missed. In addition to all the people who are simply afraid of computers, there would be people like me who have no interest whatsoever in that kind of experience. I've been playing with computers (and getting paid for it) for about thirty years now. I like computers - they're a lot of fun, and they can do lots of useful things, but when I want to learn in depth about something (as opposed to looking up some facts), I want to have face-to-face dealings with a human.
Cheers, Alan
---
Alan Weiss (805) 893-4633 SAASB 4101NN, Computer Center University of California Santa Barbara, California 93106-3020 USA Alan.Weiss@isc.ucsb.edu
Date: Wed, 7 Jul 1999 03:33:49 -0500
From:
>lectures with interactive multimedia productions that can be used
>over the Internet (e.g.,
>workshop organizers asked me to give a talk about "what can go wrong".
Classroom feedback is often necessary for the lecturer to judge the level at which to teach. Without it, the 'net material can skip steps and have more errors.
In addition, developing on-line courseware is significantly more difficult for a lecturer than just sharing his everyday knowledge. There is great reluctance to spend the effort to create good on-line material because of the time involved (and there is a perception that you can more easily make a fool of yourself when a lesson is on-line).
In addition, a lot of learning happens more informally in the classroom, as the lecturer shares personal anecdotes or uses the material to explain a current event (like the Mars Pathfinder bus contention problem). I still carry around and use some MIT freshman lecturer anecdotes to use in my own teaching.
Now to address these problems (at least in part): Having just used Gregory Abowd's Classroom 2000 (www.cc.gatech.edu/fce) set-up, I think this sort of design can aid both in-lecture communication and the transition of material to the web (the lecture, slides, student notes, and references become part of the on-line material that can be annotated and indexed). Not only that, but it is great for self-feedback, and when a new professor takes on an old class and needs to find ways of wording different concepts, he can just go back and listen to his predecessor.
Hope that helps - I'm writing on no sleep.
Thad
Date: Wed, 7 Jul 1999 07:34:09 -0700 (PDT)
From: Christina Prell
I taught a satellite course for one semester and found that the amount and quality of feedback I gave to my students declined. Why? Too many students. I have also learned from my explorations into human factors research that teaching physical tasks is near to impossible within the computer-mediated context. Again, it is a feedback issue: one needs to involve many senses when learning a physical task, and mediated channels limit the number of cues (i.e. senses) one can receive, use, etc.
It is also harder to develop class plans and/or lectures for mediated classes. So you tend to limit what you teach: what you teach tends to be highly structured and somewhat narrow.
My opinions.... Hope this helps! Chris Prell, Ph.D. student in Communication and Rhetoric, Rensselaer Polytechnic Institute
Date: Mon, 5 Jul 1999 17:43:15 -0600
From: Ted Logan
On your topic of technology-based teaching:
My client, a medical publisher, acquired marketing rights to an unusually sophisticated medical teaching program, which I'll call TP. It is a series of high quality multimedia CDs, each using excellent still photographs plus extended movie clips plus atlases and interactive tests with voice-over narration based on pronunciations from Nomina Anatomica (the Bible of the field) to teach medical students anatomy of the heart, the eye, the hand, the skull, and so on. Neuroanatomy, a notoriously tough subject, alone takes up 3 CDs. Production is of the highest caliber -- by former BBC producers who have been unstinting in their production expenditures. They have acquired outstanding medical photographs and video clips from leading teaching centers around the world with pedagogical input from an international team of anatomy professors.
My client and I spent nearly three years introducing this material to medical schools in the United States and Canada. Nearly everyone we spoke to, including instructors themselves, had the highest praise for the program: "the future of medical instruction," "makes cadavers and models obsolete in student teaching," "destined to supplant all conventional teaching methods in medicine," and so on.
But we never asked the students themselves and earlier this year received the following email from the chairman of a medical department at a large medical school in the University of California system:
"Last year we undertook an experiment in our gross anatomy laboratory and used [TP] CD based material from [my client]. Our experience with the CDs was quite negative. The factors are varied and include the need to run an outdated version of Quick Time, an inability to use multitasking with the program, and some still unresolved glitches that caused the programs to crash during use. However, overall, the big problem we had was that the students had little interest, and some had rather intense dislike, for the CD based approach. Almost unanimously, the students preferred the old approach of dissection and referral to text books."
Most medical publishers have similar stories to tell. Terrifically expensive efforts to introduce CDs in place of printed textbooks have bombed all around. Even drug reaction reference books, where rapid information retrieval is all important, go nowhere on CDs. Doctors say they like going on the Internet to search for journal articles and such, but when they're working with patients in office or hospital settings, only printed reference books will do -- they're easy to use, fast and portable, just as books have always been.
Ted Logan
Logan Writing, Inc. 210 N. 6th Ave. Cleveland, OK 74020-3203 Toll-free: 1-800-484-2957 (Access code: 3833) Fax & alternate voice: 918-358-5920
Date: Thu, 8 Jul 1999 10:17:54 -0600
From: Ted Logan
Phil:
You're welcome to send my note to your list but I can't cite any sort of authority concerning other attempts to replace textbooks with CDs. Medical publishing, despite its enormous impact on society, is a tiny part of publishing, now controlled almost entirely by a few conglomerates. My client, who is small by comparison, is one of the few independent medical publishers remaining who puts out nearly 100 new titles annually. The rest have been acquired or have gone under.
The large conglomerates in medical publishing, like all large firms, don't publish their self-analyses; they prefer to hide their mistakes. The one association, the American Medical Publishers Association, is moribund and collects no meaningful statistics. All medical book and CD sales in North America go through four wholesale distributors and library jobbers and accounts from them are purely anecdotal (they tell me privately that CD publishing is a bust).
If ever there was a hammer looking for a nail, CDs substituting for books is it. Consider the well-designed STM book, with its readable measure, portability, table of contents and index, durability, and accessibility. They work and probably always will work. But technologists have nevertheless spent time and money -- in gigantic amounts -- applying their new tools not to something that needs fixing but to books, which don't.
Somewhere there's an important lesson in all this, I think, but I can't prove it.
Ted
Date: Fri, 9 Jul 1999 22:07:32 -0600
From: Ted Logan
> I guess with book publishing you always know (or think you know) what your customer does with your product. But with CD-ROM's the space of possible ways that the customer might use the product is vast and complicated. If the publisher doesn't have the skills to understand that space, or the clues to even realize that they have to understand it, then failure is guaranteed.
STM publishers aren't often well equipped with knowledge about the markets they serve. Most scientists and medical specialists don't work for publishers, which leaves publishers dependent for market information on authors who present their ideas of books or CDs for publication based on their assumptions and personal agendas. Publishers' editors, though many are paid well enough to know their fields better, will often simply take an author's proposals at face value -- as long as they agree with the editor's own preconceptions. Publishers figure they can't spend five-figure or six-figure sums on market research -- much less technological research -- to sell a book or CD. If it sells, fine. If not, move on.
Anyway, I'm not sure that examples from my tiny branch of publishing can help you. Your earlier example about a CD-based system of instruction for learning anatomy on cadavers wouldn't start with a publisher; that and all concepts like it have to start with the users. Publishers survive because they don't have r&d departments, which would add enormously to costs.
You asked if publishers are stuck in the automate-and-replace paradigm. As long as their authors and customers are, they will be, too. It's not what the technology can do from the technologist's viewpoint, it's what the user wants to do once he comprehends the technology. How can we combine a clinical anatomist and skilled computerist? Put them in the same room and pay them to talk to each other or wait until they catch the same bus one day and hope they sit together.
Date: Sat, 27 Feb 1999 12:33:16 -0600
From: Ted Logan
> It sounds like someone has a market opportunity to open an online bookstore that sells only the books you describe, has them shipped directly from publisher or distributor to customer, and accepts the same ftp files of tables of contents as amazon, while doing much cheaper targeted marketing in the publications that address that very coherent market segment. Such a company could probably cut 30% of Amazon's prices, judging by your account of the market. Don't you think?
Phil:
Yes, except for pricing. My client allows Amazon.com (and B&N and Borders) only 20% discount off list because that is the normal bookstore discount for short-run STM titles. Academic Press was (maybe still is) an exception at 25%, but I don't think anyone goes above 27% discount to retailers, and then only for selected titles on first release, rarely list-wide.
My engineer and computerist friends invariably focus on "saving money." I joke that a true engineer will spend $20 in gasoline to find a swap meet where he can save $10 on a motherboard. But after 35 years selling highly specialized STM books ("Sex Steroids and the Cardiovascular System" from one symposium; "Amyloid and Amyloidosis" from another) I am convinced that maximum sales of these titles come from maximum exposure of title, editors/authors, and contents list for three months before and three months after publication. Price appears to be nearly inconsequential (outside ridiculous extremes of charging $100+ for 150-page festschrifts, as certain European and American houses did for years and some still do) because these are generally must-have, not discretionary, purchases for their specialized readerships. Most of the purchasing decisions for these kinds of books are price-insensitive as long as they are under $150 (the point at which library acquisitions need supervisory approval). Even upwards of $500, price usually doesn't matter if the book contains vital data (CRC in chemistry and West Publishing in law reference volumes, for instance) because individuals aren't paying out of their personal pocketbooks.
A site offering the information you describe would never make money off direct sales. But it could possibly attract some high-level advertisers once it became well-known. STM publishers are bombarded by overpriced advertising media (magazine ad rates that work for equipment manufacturers who can bill $10,000 on a sale don't work for book publishers billing $100 a copy) and have almost no place other than direct mail to turn to advertise their symposium-based and other esoteric titles. Many only list them in catalogs that are mailed to libraries and distributed at conferences, making news of the contents of short-run specialized books essentially unavailable to individual scientists and students, who tend to belong to direct-mail groups that are too large and too expensive for mailings on this or that small-market title.
I would guess that only a few STM publishers would pay for banner ads on your kind of site, but most would upload information as long as the database construction is kept simple with the fewest possible number of fields (Amazon.com offers an excellent model). As to other kinds of advertisers, like employment firms, software companies, universities, pharmaceutical and equipment manufacturers and such, I can't say because I've never worked for them.
This has always been a dilemma in my business. How to get details of specialized STM titles to the people who need to know about them and who will recommend their purchase while they're still new -- but only then. I've toyed with the idea of developing my own Web site, but have neither the computer expertise nor the time. I'll be glad to offer advice to anyone who wants to try, however.
Incidentally, your notion about drop-shipment orders directly from publisher (or publisher's distributor, which in medicine usually means any one of four main organizations, namely Login Brothers -- Chicago, Ohio, NJ, and Canada; J.A. Majors -- Dallas, Houston, Atlanta, LA; Rittenhouse -- Philadelphia; and Matthews-McCoy -- St. Louis) to the buyer is, I think, doable. Drop-shipments are rare in mass-market retailing but are commonplace in STM publishing. J.A. Majors, Rittenhouse, and Matthews-McCoy still stick to medical titles almost exclusively, but Login Brothers has been adding scientific and technical titles, including law, to their distribution list for the past couple of years. I know the Login Brothers operation well and could maybe open some doors if wanted.
Still, I would caution anyone contemplating building such a Web site that it will not receive more than a fraction of the sales generated by the information it offers. Most purchases will come from departments and libraries ordering from distributors and jobbers or directly from publishers. Online ordering should be viewed and offered as a fillip, with newness and software that searches tables of contents as well as titles being the key attractions for scientists and students. Once in a while someone might buy one of the lower-priced titles online (Third World and even European scientists use Amazon.com), but as a money-making venture you have to think eyeballs and advertising.
Ted
P.S. Login Brothers is at http://www.lb.com (sign in for searching, only). Medscape (http://www.medscape.com) links itself (as the "Bookstore" selection under "Databases" under "Search") to http://medbookstore.com , which appears to work off the Login Brothers database. If I were thinking of building an STM titles Web site, I would probably want to talk to Login Brothers first as my key source of new title listings (though not of detailed contents information, which only publishers can supply) and as my inventory source. No relation, by the way.
TL
Date: Tue, 14 Sep 1999 16:26:25 +0200
From: Elisabeth_Binder@pskdd.psk.co.at
To: Phil Agre
What can go wrong with using courseware? First, if you are talking about large classes, you are probably also talking about using off-the-shelf software (I think WebCT is pretty popular, at least I've heard from different schools that they are using this system). Then faculty and students are bound to a system which doesn't leave much room for modification. Not that large classes offer numerous teaching options, but it might still make a difference to the lecturers/faculty if they feel more in control.
Standardization of teaching is only the last step in the standardization of the academy. One should also not forget that there are only two or three serious software products for administrating student records (ask any administrator and listen to their complaints). Add in standardized testing plus the word processing and spreadsheet software faculty, students and administrators use on a daily basis, and it becomes clear that universities and colleges are increasingly losing some of the autonomous status which is associated with the freedom of academic research and teaching.
I also assume that switching to courseware has something to do with the idea of saving money. If you take distance education/online courses seriously, this calculation usually doesn't work out. Saving money in undergraduate education might also have implications for graduate education. Let's suppose courseware is implemented to reduce the number of teaching assistants. I don't know what the exact implications would be for the whole system of graduate education, but there must be some. Certainly it would be good leverage against graduate student unions.
Another problem area is the kind of cross-cultural learning (sorry, for the buzzword) that could or could not happen in multimedia courseware. As a former "overseas student advisor" I know that a number of universities/colleges value international students for the diversity they bring to the classroom (not only for the money they bring in).
Question number one is whether US universities will hold much attraction for international undergraduate students if parts of the freshman and sophomore curriculum are delivered over computer. The reputation of US universities is still very high abroad, but the competition (UK, Canada, Australia, South Africa) is not sleeping. In terms of income, some of the losses in the number of international students might be offset by selling those courses in the form of distance education abroad. But that does not do much good for giving American students the chance to interact with an international student population.
I also doubt that the developers of courseware are really aware of any problems involving international students. I have to admit that I am quite ignorant of the kinds of interactions that are possible between students, and between students and faculty/teaching assistants, through courseware programs. However, I think that there is great potential for misunderstandings if students with different levels of exposure to computer-mediated communication are together in a course. My experience tells me that when international students are involved (or any other kind of group with participants from different countries), nothing works better than f2f, at least for part of the learning process.
Date: Wed, 11 Aug 1999 19:37:11 +0100 (BST)
From: ritter@psychology.nottingham.ac.uk
To: Phil Agre
I have only a short note, and it's late, about using multi-media to teach. Appended for your ease of use are notes on our story of teaching Soar with the web. It's not even very multimedia (OK, it's not multimedia at all, it's just text), but it references a running program and is interactive hypermedia at least.
Cheers,
Frank
Ritter, F. E., & Young, R. M. (1999). Moving the Psychological Soar Tutorial to HTML: An example of using the Web to assist learning. In D. Peterson, R. J. Stevenson, & R. M. Young (Eds.), Proceedings of the AISB '99 Workshop on Issues in Teaching Cognitive Science to Undergraduates, 23-24. The Society for the Study of Artificial Intelligence and Simulation of Behaviour.
[Also used as a case study for New Lecturers' Course Teaching and learning in a technology rich environment at the University of Nottingham.]
Moving the Psychological Soar Tutorial to HTML: An example of using the Web to assist learning
Frank E. Ritter School of Psychology University of Nottingham Nottingham NG7 2RD Frank.Ritter@nottingham.ac.uk
Richard M. Young Former Special Professor of Psychology at Nottingham Department of Psychology University of Hertfordshire Hatfield AL10 9AB R.M.Young@herts.ac.uk
April 13, 1999
This reports on updating and translating the Psychological Soar Tutorial into an online version based on the HTML mark-up language and viewable through the World Wide Web. There are several lessons for those who would like to move teaching documents onto the web. We note the ways in which it is not easy. The payoff should be carefully computed first. There are several tools that are likely to be useful. An extended version of this abstract may be available from the first author.
Background
Cognitive modelling, like the use of simulation in other sciences, consists of creating mathematical models implemented on a computer. Cognitive models simulate human cognition by duplicating its information processing.
Specialised computer languages, called cognitive architectures, are used to implement this information processing. The idea is that the mechanisms, power, and limitations of the computer language duplicate the mechanisms, power, and limitations of human cognition. The programs represent the knowledge humans use to perform the same task. Programs in these languages should perform a task in the same way and at the same pace as a human with the same program (or knowledge). These cognitive architectures are an important trend in psychology, one that advanced students should be exposed to, if not converted by.
Soar is an example cognitive architecture. There are over fifty people around the world working with it to characterise human behaviour. It intentionally suffers under several known constraints of human behaviour, such that while memory access is rapid, learning facts is slow and must be deliberate.
Work with these cognitive architectures has often been limited by training materials. In the past the best way to learn was to visit an existing site and serve a two- to eighteen-month apprenticeship. In 1993 we prepared a six- to eight-hour tutorial to teach workshop participants about Soar. Over the next two years we offered it five times at conferences and workshops (Ritter, Jones, & Young, 1996; Ritter & Young, 1994). In 1995 we received a grant to move the tutorial to the web to be used as a foundation for an advanced undergraduate/postgraduate class at Nottingham.
The teaching and learning environment
The initial tutorial consisted of about 30 overheads, two sample programs (models), printouts of sample runs, and thirteen exercises and answers. With a summer student's help, we translated the overheads from Word to HTML using a program called RTF2HTML. The 30 overheads were revised by us and the student, who had taken the tutorial, and additional textual and graphic material was included. Hypertext links were added to the overheads, including references to glossary items, to the exercises, and to other web-based help resources on Soar and cognitive modelling. The tutorial is at
Student Assessment
Ritter initially believed that students would quite willingly sit down in the first class meeting and read the tutorial while he sat quietly in the background waiting for questions. The first time he tried this at Nottingham, in 1996, the students rebelled and demanded that lectures be presented. (Although when the web version was introduced at a workshop, the audience clapped.) Since then, the online tutorial has only been used as an adjunct textbook for students' revision and use during programming exercises, a purpose it serves very well. Because the material appears more polished, students appear to take it more seriously, and they often print it. They appear to have fewer questions about programming in Soar, and also fewer misconceptions, because they can review the explanations and rationale for this approach. The online tutorial appears to be used mostly for help and reminders while doing the exercises.
A key aspect of such a course is hands-on experience manipulating and extending cognitive models. The exercises are now well understood and have been adjusted to provide a reasonable but increasingly difficult series of tasks to teach the practical implementation of theoretical constructs. Earlier versions of this and other materials for teaching about programming cognitive architectures included dishearteningly difficult problem sequences. Answers are available for most of the exercises, but some answers are not provided, so that those exercises can serve as continuously assessed work. Having the material on the web supports revision by the students and continual updating by the instructor.
Students at Nottingham have a week between hourly lectures to complete the exercises. Those students who spend 30 to 60 minutes working will generally get full marks. People taking the tutorial at a workshop can see in ten minutes what working with Soar would be like, but do not have time to complete the exercises. While this seems somewhat odd, it works well. They can see what the question is, what types of knowledge would be necessary, and what the answer should be like. They just do not develop the full ability to answer the question. This approach allows a more complete story to be presented in limited time.
Completion of the tutorial leaves students ready to pursue independent work creating cognitive models, a level of sophistication rarely achieved at the undergraduate level. While none of these projects have led to fully published work, one of the models has ended up as an example in the published Soar code, and several more have been archived as useful starting places for models.
The materials have been used at other universities for formal classes (Scotland, Japan, Australia, and Bulgaria) and for informal study (e.g. Brazil, Australia). Oddly enough, we have mostly found out about their use accidentally. Only one person has asked us or told us that they were using them.
Evaluation
This tutorial has been offered three times at Nottingham as part of an advanced class, ten times at conferences, and once as staff development at a university. This and associated work is mentioned as an important part of the Soar enterprise by the (US) National Research Council (Pew & Mavor, 1998).
The development of the Psychological Soar Tutorial, including the move to the web, would not have been worth doing for a single audience. However, it has been useful to us for teaching undergraduates, for training postgraduates, and for the Soar community.
Several resources made this work possible. It was useful to have help with the translation to HTML. This amount of effort, about eight weeks of a summer student's time, could in principle be supplied on one's own, but it would have been hard to find the time.
It was useful to have technical support, people nearby who were familiar with HTML and the web. They were able to help us over some technical hurdles at crucial times. Without them, we would have had to spend an extra week of reading to find the answers, which were available but not simple.
If you are considering developing such online materials, keep your audiences in mind, and weigh the costs and benefits carefully. These particular materials did not recoup their cost from any single audience; the development was worthwhile because there were multiple audiences. For example, for only 10 students a year, it would probably not be worth the effort to move material onto the web; it would be better value to photocopy the handouts each year. On the other hand, we now often prepare OHPs and teaching materials in HTML, which removes the need for translation.
Acknowledgements
This work was funded by an Enterprise in Education grant through the University of Nottingham. Gary Jones did the translation. Jeni Tennison and David Osbourne explained some aspects of HTML.
References
Pew, R. W., & Mavor, A. S. (Eds.). (1998). Modeling human and organizational behavior: Application to military simulations. Washington, DC: National Academy Press.
Ritter, F. E., Jones, G., & Young, R. M. (1996). Report on Tutorial 1: Introduction to the Soar cognitive architecture. AISB Quarterly, 95, 18.
Ritter, F. E., & Young, R. M. (1994). Practical introduction to the Soar cognitive architecture: Tutorial report. AISB Quarterly, 88, 62.
Date: Mon, 05 Jul 1999 11:15:16 -0400 (EDT)
From: "Michael Froomkin - U.Miami School of Law"
Probably not exactly what you had in mind, but I'm a non-executive director for fledgling Out2.com, which hopes to enable geographic community on the web. It wants to be every small town's and suburb's local newspaper.
For the beta see http://www.out2.com/stuart
---
A. Michael Froomkin | Professor of Law | froomkin@law.tm U. Miami School of Law, P.O. Box 248087, Coral Gables, FL 33124 USA +1 (305) 284-4285 | +1 (305) 284-6506 (fax) | http://www.law.tm --> It's hot here. <--
Date: Fri, 30 Jul 1999 21:52:23 -0600
From: "Dan Yurman"
The 12th edition of the Chicago Manual of Style still considered hot type as a current medium in the process of book design, and contained detailed instructions for designing a book accordingly. The 14th edition [isbn 0-226-10389-7] incorporates instructions for using word processors to prepare the overall design. The 14th edition notes (pg. 768, 18.13) that "a previous edition of this manual illustrated various typefaces [10 linotype faces] . . . with the advent of electronic composition specimen typefaces no longer serve a similar purpose because no one of them would be universally applicable."
The manual goes on to explain (18.15) that, "the number of characters per pica in a given typeface is no longer fixed." Then the manual makes my earlier point about the crucial role of the typesetter. It says, ". . . the amateur designer should consult the typesetter who is to produce the work before making a decision on which typeface to use."
Market forces have segmented the use of design principles based on the nature of the book, its intended reading audience and hence its market, and the parallel use of typesetters or word processors. Popular paperback editions of best sellers used to reuse the type set for the hardcover through a photocomposition process. Today, many popular and trade paperbacks are designed using desktop publishing programs without ever benefiting from a typographer's rule. However, at the high end of the book publishing market, various segments still use typographic design principles, since they can ensure a level of quality demanded by customers for these products.
The embedding of "intelligence" in word processors and desktop publishing software to help with layout and design could advance the use of typographic design principles, but only if the software houses themselves adhere to the conceptual foundations laid down in lead over the past several hundred years. For instance, it might be useful from a knowledge engineering perspective to build a forward chaining, rule-based expert system that led a designer through the stages of laying out a book, with subsidiary rules for charts, tables, etc., linked back to overall design.
Might not some enterprising researcher compile the conceptual typographic design principles into an inference engine and license it to software houses such as Adobe? Just as in the famous case of Campbell's soup capturing the expertise of its manufacturing engineers to keep the product fully cooked and properly canned, so a typesetter might be teamed with a developer of expert systems to embed the rules (ahem, pun) of the field in software.
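The forward-chaining idea Yurman proposes can be sketched in a few lines. This is a toy illustration only; the design rules and facts below are invented for the example and do not come from any real typographic system.

```python
# Toy forward-chaining rule engine for page-design decisions.
# All facts and rules here are invented for illustration.

def forward_chain(facts, rules):
    """Fire every rule whose conditions are all satisfied, adding its
    conclusion to the fact set, and repeat until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Invented rules of the kind a typographic expert system might encode:
RULES = [
    ({"long-copy", "wide-page"}, "use-two-columns"),
    ({"use-two-columns"}, "measure-under-18-picas"),
    ({"measure-under-18-picas", "serif-body-face"}, "set-10-on-12"),
]

derived = forward_chain({"long-copy", "wide-page", "serif-body-face"}, RULES)
```

The point of forward chaining for this domain is exactly what the message suggests: the designer supplies a handful of facts about the job, and the engine walks the rule base to a full layout recommendation, with subsidiary rules (for tables, charts, and so on) chaining off the overall design in the same way.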
The widespread distribution of desktop publishing software is an expression of our "instant" culture with its egalitarian value that "anybody" can be a designer. That's not going to change. Why not harness the distribution model that is already in place and inject product into the market which does the right thing when it comes to type?
Dan Yurman djy@srv.net Map: 43N 112W Our mountains are high and the emperor is far away
Date: Fri, 30 Jul 1999 21:08:19 -0400 (EDT)
From: steven cherry
On Fri, 30 Jul 1999, Phil Agre wrote:
> [I enclose a sample of the typographical passions that I stirred up
> among RRE readers by taking sides in the one-space-versus-two-spaces-
> after-a-period wars. Forwarded with permission and reformatted to 70
> columns -- but with the spaces unaltered.]
Phil,
The fact that you needed to reformat Ms Kirkland's letter to fewer than 80 columns suggests that however much she knows about type, she still hasn't adjusted to the post-photolithography era, let alone the current one.
While my type bona fides are utterly scant compared to hers, I do know a thing or two, having run an in-house typesetting unit at a NYC publisher for five years, and having served as an executive editor, including at a couple of magazine start-ups, supervising everything including the design.
Ms Kirkland could have just asserted that desktop publishing software does lousy kerning, and left it at that. Adobe fonts, for example, come with only dozens of kern pairs (the information that lets the software overlap the boxes surrounding a 'W' and an 'A', for example, or a 'T' and an 'i') instead of the literally hundreds that one needs for quality type (for example, 'P' and 'y', as in "Monty Python").
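To make the kern-pair mechanism concrete, here is a toy sketch of what a layout engine does with such a table. The advance widths and kern values below are invented for illustration; real values live in the font's metrics files, typically expressed in thousandths of an em.

```python
# Sketch of kern-pair lookup when measuring a line of type.
# Widths and kern adjustments are invented for illustration.

ADVANCE = {"W": 944, "A": 722, "T": 611, "i": 278, "P": 556, "y": 500}

# A kern table maps a glyph pair to a (usually negative) adjustment
# that pulls the second glyph's box into the first one's.
KERN = {("W", "A"): -120, ("T", "i"): -40, ("P", "y"): -30}

def line_width(text):
    """Sum the advance widths, tightening each kerned pair."""
    width = sum(ADVANCE[ch] for ch in text)
    for a, b in zip(text, text[1:]):
        width += KERN.get((a, b), 0)
    return width

# "WA" overlaps the boxes: 944 + 722 - 120 = 1546 units.
```

A font with only a few dozen such entries leaves every pair not in the table at its naive summed width, which is exactly the loose, amateurish setting the complaint is about.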
But one thing desktop publishing software does know is that there needs to be additional space at the end of a sentence. Heck, even your word processor knows that.
And sure enough, Ms Kirkland's example makes one wonder if she uses this software at all. She says:
> Well, any professional designer will tell you that most
> professional page layout programs aren't very professional at all.
> Designers think in terms of line length for copy blocks (like, any
> line longer than 18 picas will tax your readers eye muscles swooping
> from left to right, so don't do it if your copy is long and you want
> to keep them) yet page layout programs demand margins before you
> even open a file--making me work assbackwards.
First of all, that file can be opened in the word processor it was created in, and that word processor can tell you how many characters are in the file, how many words, how many lines, and how many paragraphs there are. This is a hell of a lot more information than I or Ms Kirkland had in the old days, when we had to take the manuscript, count the number of lines in the whole thing, multiply by 65 (the number of 10-pitch characters in a line with one-inch margins), check the manuscript's actual margins and adjust accordingly, all to come up with an estimated character count that MS Word gives you exactly with three keystrokes or mouse clicks.
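The old-school estimate Cherry describes is simple arithmetic and can be written out. This sketch just encodes the figures in his description (65 characters per line for 10-pitch type with one-inch margins, i.e. a 6.5-inch text block at 10 characters per inch); it is an illustration, not any real production tool.

```python
# The manual character-count estimate: count the manuscript's lines,
# then multiply by the characters per line implied by the type pitch
# (characters per inch) and the width of the text block.

def estimated_chars(line_count, pitch=10, text_width_inches=6.5):
    """Estimate total characters: lines x (pitch x text width).
    Defaults give the classic 65 characters per line."""
    chars_per_line = int(pitch * text_width_inches)
    return line_count * chars_per_line

# A 300-line, 10-pitch manuscript estimates to 300 * 65 = 19500 chars.
```

The adjustment step in the old routine amounts to changing `text_width_inches` to match the manuscript's actual margins; the word processor skips all of this and just counts.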
Ms Kirkland cites journalism schools producing brochures that look like high school work. There are two sorts of people cranking out that work: computer-literate folk who have never been trained in type and layout and design, and, sadly, print designers like Ms Kirkland who can't be bothered to learn the tools of the trade she actually practices today, rather than the tools of the trade as she used to practice it.
Oddly, I have to agree with one of Ms Kirkland's conclusions. You should continue to double-space, Phil, because you're writing for a literate crowd who are probably reading your missives in a monospaced font, as I am in my Unix shell (and maybe you're still actually writing them that way too!). But when you turn your manuscripts over to a publisher for a journal article or a book, just know that stripping out the extra spaces is one of the first things that will get done.
That's it for now. Next week we can discuss the topic of French spacing as that phrase was used by British typographers of the late 19th century, and how the spacing after a period is just one aspect of it.
Steven Cherry Senior Managing Editor IEEE Magazines
Date: Mon, 1 Mar 1999 15:12:49 +1100
From: "Adrian Tulloch"
Hi Phil, I really value your list and the intellect and thoughtfulness you bring to it. However, I really think that, in this case, you are not analyzing gnu vs shrinkwrap and NT vs Linux with your normal vigor. At the risk of sounding like a religious Microsoft supporter, I'd like to take you up on some of these points.
Before I start I guess that I should get some of my personal baggage out of the way. I used to work as a developer at Microsoft. However, please don't dismiss me as an ex-msft apologist. I'm no longer at Microsoft because the powers that be made me and my workmates redundant (they decided that it just didn't make sense to have an Australian development group). I certainly don't hold a candle for Microsoft, but I think that my experience at MS means that I've got a rather different view of many issues than you do.
The general point I want to make is that you're doing a disservice to yourself and the list if you repeat the generally accepted Usenet wisdoms about NT and Linux. The nature of these operating systems means that their users will create very different public perceptions of the OSs. Given this, you can't just swallow generally accepted wisdom -- instead, you need to look very hard at the actual technical performance of NT vs Linux before you claim that one is superior to the other.
The open source nature of Linux means that it engenders a sense of pride and ownership amongst its users. This in turn means that people are far less likely to blame their problems on "bugs" in the operating system, and also less likely to tell the world about their ordeal. Instead, the sense of ownership and the hacker mentality means that members of the Linux community rather enjoy running into and solving these glitches. By contrast, the closed, commercial nature of WinNT means that it'll engender exactly the opposite emotions. When you run into features that don't work in the way you expect, it is terribly easy and tempting to blame the faceless (and, as everyone knows, egregious) Microsoft for your woes. Furthermore, there's a much greater tendency to tell the world about your Microsoft woes.
Putting this another way, imagine a possible world where Microsoft developed and sold Linux commercially, and NT was open source. In this possible world, I really believe that you'd see the glowing advocacy for NT that our world reserves for Linux, and correspondingly you'd see shrill complaints about Linux very similar to your accepted wisdom about NT.
I'd also like to discuss your final point, where you say:
I think the linux people aren't very good at selling their work.
I think that exactly the opposite is true! It seems to me that many people have opinions similar to yours on Linux's stability, scalability, and security; views which simply are not yet supported by the evidence. Linux still has a comparatively small user base who use it for relatively similar tasks. Until Linux faces the blow-torch of mass acceptance and use, we have to say that the jury is still out on most of these questions. Despite this, many, many people like you accept the Linux assertions. That's a pretty good sales job!
In closing, I'd just like to repeat that I really do value your list and your contributions to the net. I value them so highly because normally you go beyond the net's generally accepted wisdom. Please go on discussing interesting, empirical points, such as how network effects work in MS's favour, but, again, please avoid gratuitous backhands against what you perceive as "bad software".
Adrian
Date: Fri, 18 Jun 1999 00:18:37 -0700
From: Nicholas Bretagna II
Phil Agre wrote:
>
> [I am astonished that the whole global economy is becoming dependent
> on a technology that is so fundamentally shoddy.
>
> Date: Thu, 17 Jun 1999 08:01:29 -0700 (PDT)
> From: risks@csl.sri.com
>
> RISKS-LIST: Risks-Forum Digest Thursday 17 June 1999 Volume 20 : Issue 45
>
> ----------------------------------------------------------------------
>
> Date: Tue, 15 Jun 1999 16:48:25 -0500
> From: Bruce Schneier
I have to say I take serious issue with the notion, put forth in this missive, that any and all of these security issues are -NOT-, directly or indirectly, almost the sole fault of M$ as the market leader and standard setter.
Clearly it IS their fault.
M$'s so-called "OS"es have never been -anything- of the sort. They are far better described as "a GUI tacked onto a BIOS". They have never dealt seriously or effectively with memory management and access, process isolation, resource tracking and control, or any other of a whole host of related or lesser issues that are clearly the combined job of the OS and CPU/motherboard designers.
Granted, given Intel's CPU and the now-creaking IBM-PC architecture, some of these tasks are daunting, if not impossible, but, had M$ ever taken the task of "OS" writing seriously, then they would long since have come up with a set of expectations for future generations of CPUs and systems which would define elements of the command sets and hardware which would have provided them with the necessary levels of isolation and control over activities. These needs would have been easily described within the purview of a senior-level programming class at any major CIS program in the country.
Intel would -have- to follow suit; otherwise AMD and Cyrix would steal a march on them and implement the command sets, allowing them to claim superior security under M$'s OSes -- thus creating the threat of a major selling point Intel would not dare to ignore. Thus, in this area, M$ is clearly able to call the shots and get the future computer it wants to run its OS on.
Note that this would not (have) require(d) some sudden break with the past -- it could easily have been phased in over the course of the last 10 or so years and the next 5 to 10 years as well, as needed. This easily would have given the software time to migrate into compliance or fall into disuse, as appropriate for the advancing industry in question.
Yes, there would be exceptions, but those would be as rare as use of an MS-DOS program is getting these days.
Instead, M$ has concentrated on bells and whistles, adding new screen savers (long since unnecessary with modern monitor designs) and assorted gewgaws and frippery which do little to resolve well-known and well-documented historical shortcomings of Windows in all its nefarious flavors.
M$ has repeatedly violated its own published standards, causing other companies to do so simply to stay competitive, creating a haphazard environment rife with opportunities for calamity just by fiddling with a few bits at a critical point identified by the presumed malefactor -- less code, more trouble to detect...
It has failed to provide any reliably straightforward method for tracking system extensions (DLLs, etc.). Unless this has changed with Win98, they still add one to a counter every time someone registers a DLL -- which means that if the count is screwed up, the extension DLL can be removed while it is still in use by a user program. Bingo, system crash.
The operating system itself should not generally, except by isolated flaws in the security scheme, be modifiable by external code -- and thus, all installations should be under the control of the OS, which would/should have a complete system for isolating and identifying which programs use which DLLs at any point. Any efforts to modify system components directly should trigger an exception condition via the CPU.
Programs which have any need of such access should be running, almost always, on virtual machines which isolate them from the real OS components and only allow certain types of carefully defined and controlled access for each component.
In short, most systems should support supervisor level security only in the hands of the OS and the OS designers. Individual systems may, under the auspices of a programmer, have such security reduced or disabled, but the vast majority of systems out there require no such reduction in security. This security reduction could easily be something requiring actual physical action to override, thus providing an assurance of security in the face of anything except idiots tampering with things they do not understand and have no business touching, ever... Yes, that will happen, but said idiots will also quickly find themselves working in the fast-food service industry, where they belong.
In short, do not understate the tremendously critical role M$ has had in the development of the current situation. Had they had a programmer cadre, in charge of development, with even -half- the talent that their marketing department has shown, then this situation would not exist. Basic, fundamental design flaws and issues would not have developed, or would have been headed off at much earlier stages than the current border-line crisis.
Instead, you have the situation where the chief supervisor of the initial VBasic project (ca. 1990) does not even have enough programming knowledge to be familiar with the term "BNF grammar" (FACT!). I have no doubt, seeing the results, that similarly incompetent individuals have been in supervisory roles over many other critical development projects.
I do not argue that marketing is a critical element of the business plan. In the short term, I even grant it is more significant... Those, of course, are the very key words: "In the short term". Now the long term flaws of putting marketing-only people in charge of key projects begins to become glaringly self-evident.
A supervisor certainly may or even should have significant training in marketing and marketing issues. But it is also clearly impossible to determine where the likely weaknesses of a design scheme are if you do not understand even the most basic sophomore/junior-level programming issues -- such as those known to anyone who understands the term "BNF Grammar".
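As a gloss for readers who haven't met the term: a BNF grammar is nothing more than a set of rewrite rules defining the syntax of a language. The classic textbook example, covered in any sophomore compilers course, defines arithmetic expressions (this fragment is illustrative and comes from no particular product):

```
<expr>   ::= <term>   | <expr> "+" <term>
<term>   ::= <factor> | <term> "*" <factor>
<factor> ::= <number> | "(" <expr> ")"
```

Nesting <term> inside <expr> is what gives "*" higher precedence than "+". Anyone supervising a language project should be able to read rules like these at sight; that is the writer's complaint.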
Hence, hiring people who only have marketing backgrounds, to supervise complex projects, inevitably results in the decidedly inferior products with which M$ has managed to associate itself. This includes but is not limited to the almost total lack of reasonable security and reliability which has become the virtual hallmark of all systems utilizing one of M$'s OSes.
M$ is at fault, and no one else bears a significant fraction of the responsibility.
End of flame.
---
------- --------- ------- -------- ------- ------- ------- Nicholas Bretagna II mailto:afn41391@afn.org
Date: Fri, 18 Jun 1999 15:27:06 -0700
From: Nicholas Bretagna II
Phil Agre wrote:
> Good flame.
Thanks.
> I personally subscribe to the theory that it is principally
> Microsoft's fault, and that traditional computer science concepts
> (such as those embodied in Multics) provide a large part of the
> necessary solution. They're not the whole solution because the
> problems have a different shape in a networked world, but they are
> much better than what we have on our desks now.
I am not familiar with Multics (my early training was in an IBM OS/HASP/JES2 universe) but certainly many of the mainframe security concepts are at least extensible to the micro domain. The networked world has a number of problems of its own, as you suggest, but certainly any effort to deal with that problem has to do with first creating a machine that is reasonably secure within itself.
The problems with MS-DOS were not unreasonable in 1981, when the machine was created. By the late 80s, however, the simple lack of anything resembling memory and resource managers (even ones that allowed voluntary compliance, which is all the CPUs permitted the OS) was already wreaking havoc on the systems of the time -- many arguments between device drivers and TSRs (there's a historical word for you!) occurred not because the programmers were incompetent or even mistake-prone but because they were waltzing in a pitch-dark ballroom, thanks to M$.
Things have only gotten worse, as M$ has repeatedly short-sheeted the system (a 2 gig HD limit? a 4 gig RAM limit? another HD limit around 16 gig? endless problems with removable media, like zips, because the system registers them exactly like fixed HDs, which they most definitely aren't!).
Each of these demonstrates a lack of foresight which, in my opinion, goes straight to the top. I can recall seeing a comment by Gates in Byte magazine ca. 1984 or so that had him indicating in a speech that he saw no reason a machine should need more than 16Mb of RAM. His own "OS" alone needed more than that a mere 5 years later.
I looked at the design for NT and immediately noticed that 4G (actually, 3) memory limit, and went -- ok, so what happens when we start to get systems with more than that?? This was when we were just breaking free of the 640k problem, mind you... I suppose it may be resolvable via the CPU -- but I don't presume such. NT's basic design has, the last I checked, the memory space from 3G to 4G allocated for itself... What happens when systems routinely have more memory than that?? Has M$ thought of this? I see no reason to presume so. That would take rudimentary foresight. M$'s implementation of NT4 and its inability to install on large hard disks demonstrate their inability to look ahead much over a year.
I cannot overstate what an endless nuisance Zip disks are, because they are treated by the system as fixed HDs, not as true removable media. The zip software behaves in all sorts of weird ways (NT seems to really, really want to do a media scan on every zip disk you have at least once if you boot with it in the drive, where it can "see" it during powerup -- in other words, it treats it as a new hard disk just brought on line). In general, I should also point out that it is extremely dangerous to presume that the little button on the front of the Zip drive communicates with the driver at all to cause it to flush the write buffers -- I am utterly certain it does not (got screwed once that way, ask for details). You could argue that this is a flaw from Iomega, but if you stop and think, in most ways this is still part of the OS design. Certainly high-capacity removable media were on the horizon well before the Zip was introduced -- someone was going to have to come up with it or its functional equivalent -- which means that M$ should have defined a set of basic expectations for how such media would communicate with the OS and what basic functions they should support -- like "flush on eject".
I know that NT4 has problems with HDs over 2 gig and 8 gig (I believe svc. pack 4 addresses some of these problems) -- what the hell is with that? They couldn't see HDs going over that size?? When 4 was first released systems were pushing at that limit!!! Again, no foresight or defensive anticipation at all.
> Do you know of any published articles or credible documented > web sites that cover the same ground as your rant?
Unfortunately, no. The VB info I got first hand from a friend, a software company owner/president who attended a developer conference out in Seattle when VB was initially rolled out. There he met the person in charge of the project (VB turned out bigger than M$ expected it to, I gather, so the person in question was not as high-level as it might seem they should be), and, during casual conversation at a luncheon, he happened to mention a BNF Grammar to this guy. The guy had no clue whatsoever what it was. I believe my friend, after further polite probing, came to the conclusion that the guy was your basic marketdroid with no programming background whatsoever. He was in charge of the project, not just the meeting, mind you.
I asked him again about this, and here is what he wrote me (I didn't ask to quote him, so I decline to divulge his name): "The person, who was the 'VB Project Leader', i.e. 'Technical Project Leader', had no knowledge of pre-fix or post-fix notation.... Microsoft does or did (then) want these people to work as go betweens between Marketing and Development, but make no mistake...this was an actual "coding person" (and, by the way there were other VB project members there that could have spoken up and saved his ass. They did not seem to know what I was talking about either.) My only conclusion is that they are simply building layers on top of work done by other people and do not understand the low-level concepts themselves...."
He's also the source of the "counter" information which he learned as he was researching "install" operations for his software. M$'s install process merely increments a counter when it adds a DLL to the system, rather than registering the "using app", as any intelligent programmer would have done in a modern system (I mean, exactly what are they using all those multi-megs of RAM and HD storage for, for Christ's sake??)
He can also clarify specifically cases of them violating their own standards -- taking, for example, core components which are supposedly ("officially") stable and immutable and re-naming them for no clear reason and without notice. M$ changed the name, apparently, of "msvcrt.dll" to "msvcrt40.dll", not bothering to notify concerned parties of the change -- the time you discover this, of course, is when you send out 10,000 copies of your new software and about 30% of your buyers find the install crashing for no reason which you should have had to worry about. Who gets the blame? You do, for writing a "buggy" install routine. Who's at fault?? M$, of course. It appears as though they may have done the same thing more recently with olepro32.dll, renaming it to oleaut32.dll...
Another fun element is M$'s own install routines. In some cases (I can get you a ref on this, I think), the install routine tells you it must reboot to reset critical files... but somehow never does it -- which means you get the same reboot message every time and the program won't run. It won't run, not because of anything in your code, it won't run because the install routines, written by M$ themselves, won't complete their own self-update processes despite the re-boot.
All these problems are in areas which are clearly not all that complicated -- they mostly deal with things being kept track of in an orderly manner, something computers are REAL GOOD at. Think about this -- M$ manages to make the computer into a bumbling idiot at the very task it's naturally well suited for. Such incompetence is so downright awesome, it's scary.
> What is your opinion of NT?
Most of my experience with NT has been in a non-internet environment, i.e., I have not tried to use it as part of an internet server -- I have used it as a network control system and I'm not impressed all that much.
One of the many problems, again, deals with the inability of the system to isolate itself, which is indirectly M$'s fault, as I commented in the flame -- within days of my first experience implementing it, I managed to blue screen it -- there was a buggy video driver... This is not directly their fault, the driver was not theirs, but it WAS stomping all over memory, including and most especially memory not the driver's own -- and that, they should have been able to prevent, for reasons espoused in the rant.
There are numerous similar problems with the inability to determine who has which resources and so on (such as serial ports -- NO ONE should be programming directly to serial ports these days!) -- because M$ makes no serious effort to track who uses what. NT has this shortcoming too, like all of their OSes. The DLL counter is a clear example -- this should be a list, not a counter, and any idiot who was allowed to graduate without being able to see this needs to have his diploma pulled. In modern systems, the storage space necessary to track this data is utterly insignificant -- measured in mere fractions of a kilobyte of RAM/secondary storage.
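The counter-versus-list point can be made concrete with a toy sketch. Everything below is invented for illustration -- this is not Windows code -- but it shows why a bare use-count fails where a list of registered users would not.

```python
# Toy comparison: a bare use-counter vs. a list of registered users.
# All class, DLL, and app names are invented for illustration.

class CounterRegistry:
    """Counter-style tracking: nothing records *who* registered, so
    one spurious unregister and a shared DLL looks safe to delete
    while another app still uses it."""
    def __init__(self):
        self.count = {}
    def register(self, dll, app):
        self.count[dll] = self.count.get(dll, 0) + 1
    def unregister(self, dll, app):
        self.count[dll] -= 1          # the caller's identity is ignored
        return self.count[dll] <= 0   # True means "safe to delete"

class ListRegistry:
    """List-style tracking: the DLL stays until every named user is
    gone, and a repeated unregister by the same app is harmless."""
    def __init__(self):
        self.users = {}
    def register(self, dll, app):
        self.users.setdefault(dll, set()).add(app)
    def unregister(self, dll, app):
        self.users.get(dll, set()).discard(app)
        return not self.users.get(dll)  # True means "safe to delete"

# A buggy installer unregisters app A twice. The counter then says
# "delete" even though app B still needs the DLL; the list does not.
c, l = CounterRegistry(), ListRegistry()
for reg in (c, l):
    reg.register("shared.dll", "A")
    reg.register("shared.dll", "B")
    reg.unregister("shared.dll", "A")
counter_says_delete = c.unregister("shared.dll", "A")  # double unregister
list_says_delete = l.unregister("shared.dll", "A")     # harmless repeat
```

The list costs a handful of strings per DLL, which is exactly the "fractions of a kilobyte" the writer is talking about.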
Some of this resource tracking is tough with the current CPU architecture, no argument -- but M$ makes no effort to track even those who would voluntarily comply, which is the source of many crashes and "mystery glitches"... too many chefs making the soup, and no one overseeing what goes in the pot.
I also am not impressed with the way that NT handles a LAN -- if one of the systems/resources it is linked to is off-line (gee, that never happens in the real world) it basically yaks and complains to you during the boot process -- and sits there and WAITS until you tell it "OK, I know and don't care, don't bug me about it!" -- which means you often have to shepherd it through the boot process if you are not bringing up everything at once. And there appears to be no way to shut this idiot warning off. Simply detailing in a window at the desktop what resources were unavailable at start time would be perfectly sufficient for most people and most cases.
There are also other similar annoyances in the behavior of the OS when it "updates" after a component registers changes (such as a new file). By all indications, it goes and queries each existing component, rather than just inquiring of it when attention is being paid to it... which means that if some component has gone off-line (such as a machine being shut off), then you suffer a substantial wait as it times out.
All around, there are just so many idiot choices made by M$ in their stuff, it's amazing. I have not encountered too many people who actually have a remotely positive attitude about M$ programmers -- I believe the applicable Jargon File term for it is "Honeywell Brain Damaged"... :-S
I think that term will eventually shift to "Microsoft Brain Damaged", since I truly find it hard to believe that anything Honeywell did back then compares to the endless river of dunderheadedness flowing from Redmond...
Let me know if I can be of any more help or provide any more clarification.
NOTE: Since I wrote this, I have installed NT onto a "legacy" (4-year-old) P166 with a new 17GB HD. The rigamarole required to get NT onto this system has been nothing short of preposterous. There is not the slightest reason why any version of NT4 (including the initial release) should ever have assumed a maximum HD size smaller than something utterly preposterous -- as in terabytes. Yet somehow NT manages to be unable to install on this legacy system without help from OnTrack's disk manager, because it does not recognize the hard drive. Further use of it also requires OnTrack unless the BIOS is updated. (Why? Why should M$'s OS be bound by BIOS limitations beyond the initial install boot and/or the boot to the OS kernel?) It's not as though it doesn't know the HD is large -- it just refuses to believe its own "eyes". Ridiculous!
Large HDs that currently exist are easily within a reasonable extrapolation of historical trends, and 4 years or so is not that long for a system to be in use. Even if it is no longer the "main" system at a location, it can easily do secondary duties with a few component updates, and installation of the OS should not be an issue, not by a long shot. A P166 with a new HD makes a perfectly adequate SOHO server, and there is no reason it cannot be a backup server for a larger office, either.
I would also comment that the very recent revelations regarding the Java security holes in IE and the internet subcomponents of the OS demonstrate once more that M$ is unable to handle the task it has not merely taken on but has essentially demanded.
---
Nicholas Bretagna II   mailto:afn41391@afn.org
Date: Wed, 14 Apr 1999 16:49:00 -0400
From: "Andrew, Suzanne: OCA"
I must say that while I very much appreciate works of thought-provoking cybersocial commentary of this sort, I found some of the strong comments you made about Gibson and Neuromancer to be off the mark. If it was your intent to be contentious by saying Gibson did a "disservice" to society via his cyberfiction, then you succeeded; however, I cannot be convinced that this is true. Instead, it is my view that as a work of fiction, Neuromancer did exactly what good works of fiction do: capture ideas and possibilities and spin a story that is imaginative yet grounded in culturally relevant metaphor and significance:
1. First off, interviews with and biographical works about Gibson make it clear that in writing Neuromancer and the succeeding novels in his cyberspace trilogy, Gibson's intent wasn't necessarily to become the cyberguru/visionary he has since been named by theorists appropriating his ideas. Essentially, Gibson, along with other writers in the cyberpunk genre, was projecting literary and imaginative ideas onto the near future -- ideas that weren't meant to be "forecasts," as you wrote, so much as possibilities. In fact, Gibson didn't even have a computer when he wrote Neuromancer and was startled by how noisy they actually were when he turned one on. _Neuromancer_, _Count Zero_, and _Mona Lisa Overdrive_ -- the latter two being the oft-forgotten books of the trilogy -- are important cultural products as imaginative works of science fiction. While they could be described as "speculative," they are not cybertheory and certainly not "forecast."
2. This is not to say that the trilogy was not a significant accomplishment, or to diminish its importance as one of the preeminent works of its genre and a deeply socially significant cultural product. It was all of these things for its time, and the ideas and possibilities Gibson explores still have relevance today. By "its time" I wish to point out that Neuromancer was written in the early 1980s, almost a full 20 years ago, and as you are aware, and as cybertheorists of the 1990s such as Mark Dery have agreed, technology, and indeed our society, is moving at "escape velocity." With that in mind, I find that your argument that "science fiction has disserved us" rests on a false comparison, since you are comparing a cultural product written almost 20 years ago, during this phase of "escape velocity," with what you see happening with the internet in 1999. If you wish to compare a work of science fiction with actual events -- an exercise of comparing the fictional to the actual, and thus always a flawed one -- you would do better to choose something in this genre written in the late 1990s. For although cyberpunk literature posits ideas for the near-distant future, one of its accomplishments is to explore possibilities of where we might be headed from a given moment, stretching a contemporary piece of time out into possibility and the unknown. Reading Gibson's trilogy as a "forecast," as literally as one reads a weather forecast or a map, is flawed.
3. In your essay you misquote Gibson's statement about cyberspace existing as a consensual hallucination, rendering it as simply "a hallucination" (taken in context, a mere hallucination means something very different from his larger idea). You then argue that this idea is problematic since cyberspace is embedded in the world, not growing away from the world, as you take Gibson's quote to mean. You further go on to say that "the boundary between the real world and the world of computer-mediated services is steadily blurring away" -- a statement that has more to do with Gibson's actual quote, taken in the context of the trilogy, than you allowed at the beginning of your essay. Yes, the characters in the trilogy inhabit a fully developed cyber realm when they are "jacked in," but they still maintain their corporeal existence, jacking out when necessary; moreover, Gibson's cyberspace, with its gleaming cubes representing the gamut of society from governments to businesses, and avatars representing actual people, does describe the multifaceted economy of the late 1990s. I would also point out that Gibson describes corporate concentration as well as weakening governmental powers in his trilogy, and so should not be lumped in with your paragraph claiming "the early visions of cyberspace have disserved us... information connoted freedom..." Your arguments and judgments simply aren't supported here by the actual texts.
4. Finally, I wish to point out that cybertheorists and those studying the socio-economic aspects of internet society have repeatedly quoted William Gibson and his work out of context. Neuromancer is quoted as if it were not a small part of a much larger work (the trilogy), and the "consensual hallucination" quote is repeatedly used as embellishment, without a proper reading of what Gibson meant by it. As a result, most people now have the impression that Neuromancer, and indeed the trilogy itself, is about cyberspace, when in fact the novels are about Artificial Intelligences, the possibility of Artificial Intelligences achieving sentience, and the connection between a technologized society and spirituality (voodoo). Gibson's cyberspace was an environment he invented around these themes -- one that had some plausible connections to the nascent internet and captured the collective imagination.
Gibson's texts by no means constitute a "disservice." I would encourage anyone wishing to discuss Gibson's ideas or quote his texts to take the time to revisit the trilogy. Since the media seems to have caused an erasure effect here through prolific repetition of anecdotal messages about Neuromancer, those who may have read all three of the texts back in the early 80's might be pleasantly surprised by the depth of ideas Gibson posits.
Suzanne Andrew.
end ```