Having read through some of the existing docs on the Wired www server, the
emphasis of those documents seems to be primarily on creating a
three-dimensional information browser, with some future implication of
communication between users of the web and perhaps the notion of
"active objects" in the moo sense. The implementation vision, as I read it, is
of a "front endian" parser taking a Yaccish grammar describing a world,
parsing this, and turning it into a bunch of three-dimensional objects inside
a renderer; user interaction hooks are then added so that attached URLs can
be used to navigate to different world areas, hypertext-style. That strikes
me as an eminently doable and worthwhile task. However, as the current vogue
seems to be to talk about areas outside of that vision, I'd
like to add my $.02 worth as a toolbuilder :-
1) I'd like to have parameterised hooks (events) from the "renderer/interactor"
("renderactor" ???) environment out to an arbitrary application (better still:
"object") of my choice. Things like collision detection, interactions with
polygonal faces of objects, etc. Initially, this would of course be a very simple
set of events. It would be "just peachy" if these events could be described
using some event description language which allows me to build complex
events from lower-level ones. This would allow me to process those events
externally, in a scripting language of my own choosing, then ...
2) Using hooks back the other way, I can dynamically instantiate a **local**
object (in the local machine) in the renderactor, set its properties (for
example, move it about, or color the topmost polygon "red", etc, etc), and
potentially destroy it when I'm done with it. On the script language side I'd
need to maintain an object descriptor of some kind (a CORBA object reference,
or even a simple uni-platform handle). Note, I'm not (here) suggesting shared
objects in some form of OODBMS at the graphical level, just access to the
object hooks inside the renderactor at the local level. (A rough sketch of
both directions follows just below.)
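
To make that concrete, here's a very rough sketch (in C++, for the sake of
argument) of the two-way interface I'm imagining. Every name in it
(Renderactor, Event, ObjectHandle, and so on) is invented for illustration;
it's the shape of the thing, not a proposal for the actual API:

    #include <functional>
    #include <string>

    // A low-level event pushed out of the renderactor: a collision, an
    // interaction with a polygonal face of an object, etc.
    struct Event {
        std::string type;       // e.g. "collision", "face-touch"
        std::string objectId;   // which scene object was involved
        double x, y, z;         // world coordinates of the event
    };

    // Stand-in for an object descriptor on the script side: could be a
    // CORBA object reference, or just a simple uni-platform handle.
    using ObjectHandle = int;

    // The two-way interface. Direction 1: parameterised event hooks out
    // to an arbitrary application. Direction 2: hooks back in, to
    // instantiate and manipulate **local** objects in the renderactor.
    class Renderactor {
    public:
        virtual ~Renderactor() = default;

        // Direction 1: attach external code to a named event type.
        virtual void onEvent(const std::string& type,
                             std::function<void(const Event&)> handler) = 0;

        // Direction 2: create, poke, and destroy local objects.
        virtual ObjectHandle instantiate(const std::string& vrmlFragment) = 0;
        virtual ObjectHandle lookup(const std::string& objectId) = 0;
        virtual void setProperty(ObjectHandle obj,
                                 const std::string& property,
                                 const std::string& value) = 0;
        virtual void destroy(ObjectHandle obj) = 0;

        // "Spit as well as swallow": write the current scene out as VRML.
        virtual std::string saveAsVRML() const = 0;
    };

    // Example wiring: whenever anything collides with an object, color
    // that object's topmost polygon red.
    void wireUp(Renderactor& r) {
        r.onEvent("collision", [&r](const Event& e) {
            r.setProperty(r.lookup(e.objectId), "diffuseColor", "1 0 0");
        });
    }
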
I can thus affect the state of the system in varied ways, and hook user
interaction. Once this basic goal is there, I can do all sorts of cool things. Like
implement an editing system by dynamically instantiating objects in the
"renderactor" and then affecting their properties. (For this to be useful, the
renderactor would need to be able to save out my dynamically constructed
objects as VRML. Spit as well as swallow VRML ...). Or I could implement a
point-to-point or moo-like communications system by affecting the state of
objects on different users' machines, or ... lots of other cool things.
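
For instance (still against the invented interface sketched above, so take
the names with a grain of salt), an editing session might boil down to:

    #include <fstream>

    // A hypothetical editing session: build something interactively,
    // then spit the dynamically constructed world back out as VRML.
    void editSession(Renderactor& r) {
        ObjectHandle cube = r.instantiate("Cube { }");  // VRML-ish fragment
        r.setProperty(cube, "translation", "0 2 0");    // move it about
        r.setProperty(cube, "diffuseColor", "1 0 0");   // color it red

        // Save the scene out, ready to be served up by the web server.
        std::ofstream("world.wrl") << r.saveAsVRML();
    }
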
If such an interface were defined, it would break down the work of implementing
a "Stephensonian" vision into a couple of neat packages, which we can then all
go off and build (to be discussed separately :) (hee-hees quietly, knowing that
this is nowhere near as simple as it sounds). Hey, given the existence of this
two-way interface I could even hook up a VRML renderactor to existing stuff,
like a moo ... or new stuff, like a VRML-based visual language.
Mike (Tamarac)