::
:: To simplify things and cover 96% of what people want to do, I think the
:: concept of 'viewer size' is all that need be specified in the VRML
:: file. Based on the size of the viewer (am I a God? A person? An
:: ant?), a viewer can set reasonable defaults for eye-separation, focal
:: distance, and walk/fly/movement speed. Viewers can also support
:: changing the defaults to cover the other 4% of what people want to do
:: (e.g. I want to move as slowly as an ant, but have eyes 10 feet
:: apart). (yeah, right).
::
:: "Gavin Bell" <gavin@krypton.engr.sgi.com>
Thanks for the thoughts. I simply wanted to assure myself that the data
objects being considered would support such calculations on the part
of VRML viewers. The authors of the languages being considered as the
`springboard` specification would know more about whether the data objects
will support such processing by a *true* virtual reality viewer.
Take, for instance, the question of virtual reality conferences. It seems
to me that the `file` being transferred by a VRML request would have to
be altered for each participant in such a conference. In other words, as
participants `arrive`, a viewing position would have to be `claimed` by
each participant and the file altered to reflect that participant's
presence. The new, altered file would then be transferred by any
additional VRML transaction requests. Can you have a crowded room in
Cyberspace? It seems to me that most of the discussion revolves around
`solitaire`, or single-participant, views on a data set.
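The arrive-and-claim idea above can be sketched in a few lines. This is purely my own illustration of the bookkeeping a conference server might do, not anything drawn from the draft specifications; the class name, position format, and `serialize` output are all hypothetical.

```python
# Hypothetical sketch: a shared world "file" that is rewritten as each
# participant arrives and claims a free viewing position. Later VRML
# requests would receive the serialized, altered file.

class ConferenceWorld:
    def __init__(self, positions):
        # Viewing positions the world offers, e.g. seats around a table.
        self.free = list(positions)
        self.claimed = {}  # participant name -> (x, y, z) position

    def join(self, name):
        """Claim a free viewing position for an arriving participant."""
        if not self.free:
            raise RuntimeError("the room is crowded: no free positions")
        pos = self.free.pop(0)
        self.claimed[name] = pos
        return pos

    def serialize(self):
        """The altered `file` a subsequent VRML request would transfer."""
        lines = ["#Conference world"]
        for name, (x, y, z) in sorted(self.claimed.items()):
            lines.append(f"Participant {name} at {x} {y} {z}")
        return "\n".join(lines)

world = ConferenceWorld([(0, 0, 0), (1, 0, 0), (2, 0, 0)])
world.join("alice")
world.join("bob")
print(world.serialize())
```

Note that a "crowded room" falls out naturally: once every position is claimed, a new arrival simply cannot join until someone leaves.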
If the data object definitions will support a virtually infinite number
of views on the data, then all seems well to me, and the future development
of viewers would not be hindered. Calculating a pair of views (eyes) for
each VRML transaction out of that infinite number would not be impossible.
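To make "a pair of views (eyes)" concrete, here is a minimal sketch of deriving two eye positions from a single viewpoint plus the `viewer size` idea quoted above. The 1/27 eye-separation ratio (roughly 6.5 cm for a 1.75 m human) and the vector arithmetic are my own assumptions for illustration, not part of any candidate specification.

```python
# Hypothetical derivation of a stereo pair from one viewpoint and a
# viewer height: eye separation scales with the viewer, so a God-sized
# or ant-sized viewer gets a proportionally scaled pair of views.

def stereo_eyes(viewpoint, right_vector, viewer_height):
    """Return (left_eye, right_eye) offset along the viewer's right axis.

    viewpoint    -- (x, y, z) single viewing position from the file
    right_vector -- unit vector pointing to the viewer's right
    viewer_height -- the `viewer size` the file would specify
    """
    separation = viewer_height / 27.0      # assumed eye-separation ratio
    half = separation / 2.0
    left = tuple(p - half * r for p, r in zip(viewpoint, right_vector))
    right = tuple(p + half * r for p, r in zip(viewpoint, right_vector))
    return left, right

# A human-scale viewer standing at the origin, eyes 1.6 m up:
left_eye, right_eye = stereo_eyes((0.0, 1.6, 0.0), (1.0, 0.0, 0.0), 1.75)
```

Since the calculation needs only the viewpoint and the viewer size, any number of participants can derive their own pair of views from the same data without the file having to enumerate them.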
Perhaps I have a mistaken idea of the data definitions that would be
transferred by a VRML transaction request?
Thanks, Bob.