1) Hardware. Religious wars here... I have a Mac Quadra AV, and an EISA-486-50
(MPC-2 compatible), and I don't see any reason to favor one platform over
another. If you fix on a hardware-based solution, you are focusing on the
wrong issue. Sure, more power is better, and all the rest is flame bait.
2) OS. Nobody has mentioned this yet. I wonder if everyone assumes that
their own solution is the "obvious" choice? If so, my own obvious preference
is the X Window System - not because it is an ideal environment, but because
it is here, functional, on the right track, and non-proprietary. (So for my
486 system, I'm either stuck with an X Window package for DOS, or a UNIX...
as it happens, I think Linux is the greatest thing since the development of
gcc, so that is what I use preferentially.) But, both hardware and OS are
relatively unimportant, because of the following issues.
3) Open systems and portability. In order for VR to become truly multi-user
and interactive, it has to be a networked technology, and this implies building
on standards and existing technology.
4) Flexibility and appropriate technology. There are good reasons to design
interfaces with alternative access methods, i.e. parallel interface paradigms
(PIP). Some users are limited by hardware, others by external constraints
(isolation, poor telephone line quality, etc.), and others by physical limits
(blindness, deafness, physical disability, etc.). I see no reason why *anyone*
should be cut out of VR participation entirely because of accidents of fate or
finance.
5) Scalability. As Greg noted in his post.
Now, back to the initial premise. HyTime is the right technology to go with.
It addresses current issues of real-time communications, supporting graphics,
animation, audio, event synchronization, etc. (I'll let Eliot expound on this
further) and is a logical extension of SGML. The technology is scalable and
practical; we can start with current SGML/HTML authoring tools and build from
there. This permits us to begin RIGHT NOW with existing tools, such as Mosaic,
Lynx, and emacs/w3/psgml/html-mode, to start practical work immediately, and to
bring existing documents into the polyverse. (Polyverse = polymorphic virtual
universe.) I suggest that we adopt SGML and HyTime philosophically, and start
from an HTML+ DTD which we extend for VR applications (while staying compatible
with HTML+, since it is not yet fully developed).
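To make this concrete, here is a rough sketch of the kind of DTD extension I
have in mind. The element and attribute names below are my own invention, not
anything from HTML+ or HyTime itself; treat it purely as an illustration of
layering VR structure onto an HTML+-style DTD:

  <!-- Hypothetical VR elements layered onto an HTML+-style DTD -->
  <!ELEMENT scene   - - (object+)  -- a virtual room --                  >
  <!ELEMENT object  - - (alt+)     -- a piece of content in a scene --   >
  <!ELEMENT alt     - - (#PCDATA)  -- one rendition of that content --   >
  <!ATTLIST scene   id      ID     #REQUIRED
                    scale   CDATA  "1.0"                                 >
  <!ATTLIST object  locus   CDATA  #IMPLIED  -- x y z placement --       >
  <!ATTLIST alt     medium  NAME   #REQUIRED -- text, audio, image... --
                    href    CDATA  #IMPLIED  -- link to existing data -- >

The real work, of course, is in deciding what the HyTime-conformant versions
of these structures should be; the point is only that the extension is an
ordinary SGML exercise that today's parsers (sgmls, psgml) can already handle.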
Someday, you'll be able to walk into your virtual room, have your virtual dog
(BIFF, no doubt) bring you slippers and your newspaper, where you can read
your custom newspaper assembled from wire service feeds - read an ad and
place an order on the spot... Listen to an Internet Radio performance and
send in an email criticism on the spot with a few choice epithets in your own
voice. You can reach back and pull a volume of Shakespeare off the shelf and
compare performances of Hamlet by Laurence Olivier and Mel Gibson. You get
the idea. Point is, you'll never get to integrate these diverse elements into
your polyverse if you fail to accommodate the emerging hypermedia document
standards, which are SGML-based. HyTime simply adds to the SGML mix the
real-time structural elements that are necessary for implementing virtual
reality.
So, we need to start with functional technology: SGML. Create a DTD that
encompasses today's technology - HTML+, MIME, etc. Extend it with HyTime
functionality for VR needs. Work with tools that can be extended to use
all the hardware capabilities available, from voice command to mouse, from
power glove to keyboard. But keep the "documents" in a platform-independent
form - SGML - that will support PIP (see the example instance after the list
below). Unless a structure is adopted that accommodates parallel interface
paradigms, we'll be crippled by
* no common platform for our development work (OS or hardware)
* narrow audience/acceptance because of platform restrictions
* slow progress because we cannot readily assimilate existing materials and
immediate "real world" communications such as email and netnews into our
virtual environment
* exclusion from educational/governmental use because we fail to provide
accessibility to the handicapped (true of the USA, at least)
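Here is the example instance I promised above, using the same invented element
names from the earlier DTD sketch. The idea is that one marked-up object
carries parallel renditions, and each interface picks the one it can present -
a glove-and-goggles rig places the object in the room, a text terminal or a
blind user's reader takes the text, a sound-capable station can take the audio:

  <scene id=study>
    <object locus="2.0 0.5 1.0">
      <alt medium=text>Market summary from the wire-service feed</alt>
      <alt medium=audio href="summary.au">Spoken rendition of the same</alt>
    </object>
  </scene>

Nothing about that requires exotic hardware to author or to read; it only
requires that we agree on the structure.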
I'm not saying that we couldn't devise a fine VR system without SGML/HyTime;
I'm just suggesting that we adopt these practical hypermedia standards, which
already exist, as a starting point.
Getting back to the "real" development environment: I plan to use my
MPC-2 compliant Intel system as my VR platform, running Linux. I'll use
both a 14.4kbps modem and an ethernet link to the Internet. I'll do my
authoring in emacs, using psgml mode (and sgmls) and an SGML DTD that is a
superset of HTML+; I'll access my multimedia documents and databases with
emacs w3 mode, XMosaic, and Lynx - whichever method is most convenient for
my purposes of authoring or viewing. I'll display in X bitmaps or filter
PostScript (ghostscript) to the X display (ghostview) or print if I wish.
Linux supports my PAS 16 sound card (through SoundBlaster compatibility) for
audio work, and supports my ET4000-based graphics coprocessor. ALL of the
software I'll use, right down to the OS, is free.
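For what it's worth, a document in that scheme might begin with a prolog like
the following - the public identifier is made up, and I am assuming (as seems
to be the case) that Lynx and Mosaic simply skip tags they don't recognize, so
the plain HTML+ portions stay readable in today's browsers:

  <!DOCTYPE html PUBLIC "-//Polyverse//DTD HTML+ VR Extensions//EN">
  <html>
  <title>My first virtual room</title>
  <p>Ordinary HTML+ text, still readable in Lynx or Mosaic.
  <scene id=study>
    <object locus="0 0 0"><alt medium=text>A bookshelf</alt></object>
  </scene>
  </html>

sgmls can validate such a document against the superset DTD, and psgml can
drive the authoring from that same DTD.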
Final suggestion: I think emacs is, at this time, the only portable,
extensible, publicly available platform on which you can both create and view
multimedia documents, so I seriously suggest that we look at adopting GNU Emacs
as the technology to extend for a first-generation VR solution. This finesses
the GUI issue a bit, and allows us to
make rapid progress - down a mainstream of technological development, rather
than up a blind alley (if I may mix my metaphors).
-lar
Lar Kaufman lark@walden.com
"Smile when you say that." _The Virginian_, by Owen Wister