Sure, but I think it is important that scale be defined relative to an
absolute, agreed-upon standard.
The goal is to be able to take your teapot, put it on my table next to
their cup and saucer, and have everything be the right size without any
user intervention.
> For example, should I choose to implement a model of the local
> galactic cluster, I might find meters a touch on the small side
Easy to support-- just put a Scale that transforms from meters into
light-years at the top of your scene.
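A minimal sketch of that idea in Python, assuming a scene graph where a single uniform scale at the root converts the model's working unit into meters (the `SceneScale` class and `apply` method here are illustrative, not any real VRML or Inventor API):

```python
# Hypothetical sketch: a uniform Scale node at the scene root that
# converts model units (light-years) into the world's standard meters.
METERS_PER_LIGHT_YEAR = 9.4607e15  # standard conversion constant

class SceneScale:
    """Uniform scale applied to every point beneath it in the scene graph."""
    def __init__(self, factor):
        self.factor = factor

    def apply(self, point):
        # Scale an (x, y, z) point from local units into parent units.
        return tuple(c * self.factor for c in point)

# A galaxy modeled in light-years, placed under one Scale so the
# browser still sees real-world meters at the top of the scene:
galaxy_root = SceneScale(METERS_PER_LIGHT_YEAR)
star_in_ly = (4.2, 0.0, 0.0)   # roughly Alpha Centauri's distance
star_in_m = galaxy_root.apply(star_in_ly)
```

The point is that nothing below the root has to know about meters at all; one node at the top reconciles the model's natural unit with the agreed-upon standard.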
> I also think "inside space" co-ordinate systems and "outside space"
> co-ordinate systems are different things. What about the galactic
> model sitting, fishtank like, on my bookshelf? On the outside it is
> quite small, but inside .. really huge ...
That same Scale at the top of your scene can be adjusted to make its
outside space as large or as small as you like.
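To make the fishtank concrete, here's a hedged back-of-the-envelope in Python, assuming (as above) a single scale factor at the scene root; the numbers are illustrative:

```python
# Hypothetical sketch: choosing the root Scale so a galactic model
# sits, fishtank-like, on a bookshelf.
GALAXY_EXTENT_LY = 100_000.0   # roughly the Milky Way's diameter, in light-years
SHELF_EXTENT_M = 0.3           # desired "outside" size: 30 cm

# One number at the top of the scene decides how big the world looks
# from outside; the "inside" coordinates stay in light-years, unchanged.
outside_scale = SHELF_EXTENT_M / GALAXY_EXTENT_LY
edge_in_m = 50_000.0 * outside_scale   # the galaxy's edge, seen from outside
```

Inside and outside space aren't different coordinate systems so much as two sides of the same transform.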
So, imagine a user visits your galactic "page" (we do need jargon for
this...). How does the VRML browser know that navigation gestures
(e.g. move the mouse forward to walk into the world) should move in
light-year-size steps?
I tend to think of this as the "size of observer" problem: if I'm
looking at a galaxy, I'd better have legs that are light-years long to
move me around fast enough to see anything interesting. So I'd be
inclined to encode the size of the observer in the camera's
information (side note: if using Inventor, I'd wedge this into the
camera's focalDistance field, reasoning that when walking around I
tend to focus farther ahead than an ant does).
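The size-of-observer idea can be sketched like this, assuming the browser carries an observer-scale value with the camera and derives navigation steps from it (the function and the 0.25 step fraction are my own illustrative choices, not anything from a real browser):

```python
# Hypothetical sketch of the "size of observer" problem: derive the
# per-gesture navigation step from an observer-scale value carried
# with the camera (in Inventor terms, wedged into focalDistance).
METERS_PER_LIGHT_YEAR = 9.4607e15

def navigation_step(observer_height_m, gesture_fraction=0.25):
    """One forward-walk gesture, as a fraction of the observer's height."""
    return observer_height_m * gesture_fraction

# A human-scale observer takes roughly half-meter steps...
human_step = navigation_step(1.8)

# ...while a galaxy-scale observer with light-year legs covers
# light-years per gesture, using exactly the same navigation code.
galactic_step = navigation_step(4.0 * METERS_PER_LIGHT_YEAR)
```

The navigation code never special-cases galaxies; only the one observer-size number changes.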
Again, defaulting to real-world coordinates makes the common case
easy: the virtual observer in the virtual world is the same size as
real people in the real world (molecule and galaxy visualizations
being the exceptions, of course).