I have been following the discussion about units
of measure, orientation, motions, navigation, etc.
with interest. When this group began, I noticed
that there seemed to be a general, albeit unquantified,
agreement that a VRML would be based on abstractions:
that one would enter the space at one abstraction
"level," but could move to more specific or more
general abstraction levels (e.g., point, to room,
to hall, to ..., to city, to ..., to space*) at
will, and that the language would support this
"movement" by changing the abstraction coherently
with the movement.
The notion of space is very much attached to the
notion of abstraction "level." A unit of measure
takes on a completely different meaning when one
"travels" between planets than when one is peering
at microorganisms in a microscope, and an abstraction
level needs to know what the generic unit of
measure is for this kind of "navigational" purpose.
I put "navigational" in quotes because I think this
motion-based (i.e., behavioral) notion of navigation
is more limited than it need be. I would prefer a
functional notion of measurement, one based on what
the user is trying to do, which may often be
navigation. Observation of events, however, is not
navigational, yet the same concerns apply.
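To make the unit-of-measure point concrete, here is a minimal sketch of what "an abstraction level knows its generic unit" might look like. All level names and scale values here are my own illustrative assumptions, not anything proposed for VRML itself:

```python
# Hypothetical sketch: each abstraction "level" carries its own generic
# unit of measure, so the same world-space distance is reported in terms
# natural to that level. Level names and scales are illustrative only.

# Characteristic scale of each level, in meters per level-unit.
LEVEL_SCALES = {
    "microscopic": 1e-6,  # micrometers
    "room":        1.0,   # meters
    "city":        1e3,   # kilometers
    "planetary":   1e9,   # rough order of interplanetary distances
}

def in_level_units(distance_m: float, level: str) -> float:
    """Express a world-space distance (in meters) in the generic
    unit of the given abstraction level."""
    return distance_m / LEVEL_SCALES[level]

# Moving between levels rebinds the unit, not the underlying geometry:
d = 5000.0  # meters
print(in_level_units(d, "room"))  # 5000.0 room-units (meters)
print(in_level_units(d, "city"))  # 5.0 city-units (kilometers)
```

The same table could just as well be keyed by the user's current task (navigating, observing, measuring) rather than by spatial scale, which is the functional notion of measurement argued for above.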
I am sure that none of this is a revelation to
anyone, but I wanted to say that the notion of
abstraction spaces is very much a concern in the
artificial vision, model-based reasoning, and
function-based reasoning communities, quite apart
from the graphics communities. Perhaps some amount
of benefit could be gained by crosstalk at the
level of VRML, since this is one of the areas (the
most notable other being robotics) where AI (about
objects and processes) and imagery have a lot in
common.
Jack Hodges
San Francisco State University
hodges@huckleberry.sfsu.edu
http://futon.sfsu.edu