
Short report from Computer Graphics 91 (UK)
Quote:
>Donning the eyephones and entering the virtual world revealed the
>immersed view to be quite different: the heavy use of lenses meant
>that the image was marred by concentric rings, although these
>disappeared somehow after a minute or two. The eyephones were
>uncomfortable, with a tendency to slip forward and away from the
>eyes, due to most of the weight being concentrated at the front of
>the unit. The goggles also seemed rather warm, causing slightly more
>discomfort to the eyes.
Sounds a lot like an unmodified LEEP Cyberface unit. The optics
are plastic lenses, which produce lots of internal reflections. These
disappear because they are stationary relative to your head, and are low
resolution, so your brain edits them out fairly quickly.
The Cyberface mount never was much good. I think most buyers
replace it with their own mount as soon as they try it out.
Quote:
>Despite the problems I managed to stumble
>into the Teapot room and inspect the teapot in the middle of the
>floor by kneeling close by. I then picked it up and walked over to
>the TV and inserted the Teapot into the side of the TV. This
>revealed two problems that I presume are common to most VR systems
>today.
This illustrates both the problem with and the need for high-level
representational languages for VR. Present VR languages (such as they
are) aren't really set up to handle every possible object manipulation;
you must enumerate the manipulations explicitly. It sounds as if someone
either slipped up in specifying one of the object's parameters, or the
whole thing was put together without an HLL at all.
What we need are libraries of objects (and a standard, first!) that have
realistic parameters automatically assigned: i.e. non-interpenetrability,
a weight (in the future, of course), a center of gravity, a sound when
tapped, etc. No one can throw together a VR world if they have to do
everything in C! That's the VR system writer's job.
Quote:
>Firstly the lack of depth cues meant I was having difficulty
>finding the television because I was so close to it and it seemed to
>have disappeared. Secondly, due to the lack of any force-feedback or
>"bump-detection" I found myself bumbling around inside the space
>occupied by the television wondering where it had gone. Still
>confused I managed to find the door and went back into the corridor -
>teapot still in hand - and entered another room after being greatly
>baffled by the door due to standing in the doorway intersecting the
>closed door!
>My brain gave up at this point and all I could make out
>in the new room were the light blue walls, apparently missing the
>sticks-and-balls molecule-like creature bouncing up and down on the
>floor (this could easily be seen on the monitors, however). Despite
>the visual problems the sound was helpful, with the doors creaking
>opening when necessary (sometimes). The teapot also made a sort of
>quacking noise that became louder as it was approached (!).
Sounds like an AWFULLY bad "trip"! Wonder how much of this it would take
to drive someone completely nuts? Seriously, I think that the low
resolution (lack of detail) in today's VR systems makes it easy for
the brain to ignore things. This is similar to the "Ganzfeld" effect,
where a lack of detail in the visual field (induced by half ping-pong
balls over the eyes) causes a subject to lose all vision after a
few minutes. Lack of intersection feedback and force feedback
doesn't help make you believe, either.
Here's to a better, and let's hope, *sharper*, future.
--------------------------------------------------------------------------
| My life is Hardware, | |
| my destiny is Software, | Dave Stampe |
| my CPU is Wetware... | |
__________________________________________________________________________