Virtual Reality Modeling

Virtual Reality Modeling - PowerPoint PPT Presentation

Electrical and Computer Engineering Dept. Virtual Reality Modeling (image from http://www.okino.com/). Modeling & VR Toolkits: system architecture; the VR object modeling cycle.


  1. Multi-texturing (texture blending): Realistic-looking lighting can be done with 2-D textures called “light maps”; not applicable to real-time rendering (light maps need to be recomputed when the object moves). Figure: light map texture overlaid on top of wall texture – realistic 2-D lighting with a low polygon count, but not real-time (from NVIDIA technical brief).

  2. KINEMATICS MODELING: Homogeneous transformation matrices; object position; transformation invariants; object hierarchies; viewing the 3-D world.

  3. Object Hierarchies: Allow models to be partitioned into a hierarchy and become dynamic; segments are either parents (higher-level objects) or children (lower-level objects). The motion of a parent is replicated by its children, but not the other way around. Example – the virtual human and the virtual hand. At the top of the hierarchy is the “world global transformation” that determines the view of the scene.

  4. VR Kinematics Modeling. Model hierarchy: a) static model (Viewpoint Datalabs); b) segmented model.

  5. Example – tracking the fingertip. Coordinate systems: world, source (fixed tracker emitter), receiver (attached to the palm), and camera systems of coordinates, linked by the transformations T_W←source, T_source←palm(t), T_palm←1(t), T_1←2(t), T_2←3(t), T_3←fingertip(t), and T_global←W(t). The fingertip pose in the scene is the product of the chain of homogeneous transformation matrices:

T_global←fingertip(t) = T_global←W(t) · T_W←source · T_source←palm(t) · T_palm←1(t) · T_1←2(t) · T_2←3(t) · T_3←fingertip(t)
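The chain above can be sketched in code. This is a minimal illustration using pure Python 4×4 matrices; the specific angles and offsets are invented placeholders, not real tracker or glove calibration data.

```python
import math

def transform(angle_deg, tx, ty, tz):
    """4x4 homogeneous transform: rotation about z by angle_deg followed by a
    translation. (Illustrative; real joints rotate about their own axes.)"""
    c, s = math.cos(math.radians(angle_deg)), math.sin(math.radians(angle_deg))
    return [[c, -s, 0, tx],
            [s,  c, 0, ty],
            [0,  0, 1, tz],
            [0,  0, 0, 1]]

def matmul(A, B):
    """Multiply two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Chain from the slide:
# T_global<-fingertip = T_global<-W * T_W<-source * T_source<-palm *
#                       T_palm<-1 * T_1<-2 * T_2<-3 * T_3<-fingertip
chain = [
    transform(0, 0.0, 0.0, 0.0),    # T_global<-W: world placement of the scene
    transform(0, 1.0, 0.0, 0.0),    # T_W<-source: fixed tracker-source offset
    transform(0, 0.2, 0.0, 0.5),    # T_source<-palm(t): measured by the tracker
    transform(30, 0.10, 0.0, 0.0),  # T_palm<-1(t): proximal joint (glove sensor)
    transform(20, 0.04, 0.0, 0.0),  # T_1<-2(t)
    transform(10, 0.03, 0.0, 0.0),  # T_2<-3(t)
    transform(0, 0.02, 0.0, 0.0),   # T_3<-fingertip: fixed fingertip offset
]
T = chain[0]
for M in chain[1:]:
    T = matmul(T, M)

# Fingertip origin expressed in global coordinates (last column of T):
fingertip_global = [row[3] for row in T[:3]]
```

Because a parent transform multiplies everything to its right, moving the palm automatically moves all finger segments – the one-way parent-to-child motion propagation described on the object-hierarchy slide.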

  6. Physical modeling: Physical characteristics of the object and the way they change – inertia, surface roughness and texture, compliance (hard/soft), deformation mode (elastic/plastic). Handled by the haptics rendering pipeline (which should be synchronized with the graphics pipeline).

  7. The Haptics Rendering Pipeline (revisited): Traversal → Collision Detection → Force Calculation → Force Smoothing → Force Mapping → Haptic Texturing → Tactile Display. Shown in parallel with the graphics pipeline: Application (scene traversal) → Geometry (view transform, lighting, projection) → Rasterizer (texturing) → Display. Adapted from (Popescu, 2001).

  8. The Haptics Rendering Pipeline: Traversal → Collision Detection → Force Calculation → Force Smoothing → Force Mapping → Haptic Texturing → Tactile Display. The collision detection stage is equivalent to haptic clipping.

  9. Collision detection: Uses bounding-box collision detection for fast response. Two types of bounding boxes: fixed size, or variable size (depending on the enclosed object's orientation). Fixed size is computationally faster, but less precise. Figure: variable-size vs. fixed-size bounding boxes.
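A minimal sketch of the fast bounding-box test described above, using axis-aligned boxes (the "fixed size depending on orientation" variant would recompute the box from the rotated vertices each frame):

```python
def aabb(vertices):
    """Axis-aligned bounding box of a vertex list, as (mins, maxs) tuples."""
    xs, ys, zs = zip(*vertices)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def boxes_collide(box_a, box_b):
    """Two boxes overlap iff their extents overlap on every axis; this is a
    cheap conservative test run before any exact (polygon-level) check."""
    (amin, amax), (bmin, bmax) = box_a, box_b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

hand = aabb([(0, 0, 0), (1, 1, 1)])
ball = aabb([(0.5, 0.5, 0.5), (2, 2, 2)])
wall = aabb([(3, 3, 3), (4, 4, 4)])
```

Here `boxes_collide(hand, ball)` is true while `boxes_collide(hand, wall)` is false, so only the hand-ball pair would proceed to exact collision detection.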

  10. Collision response: Object deformation (if objects are non-rigid); parametric surfaces vs. polygonal meshes.

  11. Surface cutting: An extreme case of surface “deformation” is surface cutting. This happens when the contact force exceeds a given threshold. When cutting, one vertex gets a co-located twin; subsequently the twin vertices separate based on spring/damper laws and the cut enlarges. Figure: mesh before and after the cut, with the cutting instrument separating vertices V1 and V2.

  12. Collision response – surface deformation.

  13. The Haptics Rendering Pipeline: Traversal → Collision Detection → Force Calculation → Force Smoothing → Force Mapping → Haptic Texturing → Tactile Display.

  14. Haptic interface: the Haptic Interface Point (HIP) I, and its penetration distance into the object polygon.

  15. Force output for homogeneous elastic objects:

F = K · d,     for 0 ≤ d ≤ d_max
F = F_max,     for d_max < d

where F_max is the haptic interface's maximum output force (saturation). Figure: force vs. penetration distance d; a hard object reaches saturation at a smaller penetration (d_max1) than a soft object (d_max2).
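The saturating spring law above is one line of code. The stiffness and force limit below are made-up example values, not parameters of any particular device:

```python
def contact_force(d, K, F_max):
    """Force for a homogeneous elastic object: F = K*d for penetrations up to
    d_max = F_max/K, then clipped at the interface's maximum output F_max."""
    if d <= 0.0:
        return 0.0          # no penetration, no force
    return min(K * d, F_max)

# Example: K = 1000 N/m, F_max = 5 N, so saturation starts at d_max = 5 mm.
shallow = contact_force(0.001, K=1000.0, F_max=5.0)  # linear region
deep = contact_force(0.010, K=1000.0, F_max=5.0)     # saturated region
```

The same clipping also protects the user: no matter how deep the penetration, the commanded force never exceeds what the actuator can safely render.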

  16. Force Calculation – Elastic objects with a harder interior:

F = K1 · d,     for 0 ≤ d ≤ d_discontinuity
F = K1 · d_discontinuity + K2 · (d – d_discontinuity),     for d_discontinuity ≤ d

where d_discontinuity is the object's stiffness change point. Figure: force vs. penetration distance, with a slope change at d_discontinuity.

  17. Force Calculation – Virtual pushbutton:

F = K1 · d · (1 – u_m) + F_r · u_m + K2 · (d – n) · u_n

where u_m and u_n are unit step functions at m and n. Figure: force vs. penetration distance d; F_r is the constant resistance between m and n, and a stiff virtual wall acts beyond n.

  18. Force Calculation – Plastic deformation:

F_initial = K · d,     for 0 ≤ d ≤ m
F_subsequent = K1 · d · u_m,     for 0 ≤ d ≤ n
F = 0 during relaxation,

where u_m is the unit step function at m.

  19. Force Calculation – Virtual wall: A wall with insufficient stiffness generates energy due to the sampling time – the force profile while moving into the wall (v < 0) differs from the profile while moving away from it (v ≥ 0). To avoid system instabilities we add a damping term:

F = K_wall · Δx + B · v,     for v < 0
F = K_wall · Δx,             for v ≥ 0

where B is a directional damper (active only while moving into the wall).
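The two-branch wall law above translates directly into code. The sign convention follows the slide (v < 0 means moving into the wall); the numeric gains are illustrative placeholders:

```python
def wall_force(dx, v, K_wall, B):
    """Virtual wall with a directional damper, per the slide:
    F = K_wall*dx + B*v  while moving into the wall (v < 0),
    F = K_wall*dx        while moving away          (v >= 0).
    dx is the penetration depth into the wall."""
    if dx <= 0.0:
        return 0.0                          # outside the wall: no force
    return K_wall * dx + (B * v if v < 0 else 0.0)

# Example: 1 cm penetration, K_wall = 1000 N/m, B = 10 N*s/m.
pressing = wall_force(0.01, v=-0.1, K_wall=1000.0, B=10.0)   # damped branch
retreating = wall_force(0.01, v=0.1, K_wall=1000.0, B=10.0)  # spring only
```

Applying damping only on the inward stroke dissipates the energy that discrete-time sampling injects, which is what removes the chatter of a stiff sampled wall.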

  20. Wallness: crispness of the initial contact, cleanliness of the final release.

  21. The Haptics Rendering Pipeline: Traversal → Collision Detection → Force Calculation → Force Smoothing → Force Mapping → Haptic Texturing → Tactile Display.

  22. Force shading: Figure compares non-shaded contact forces, real cylinder contact forces, and contact forces after shading.

F_smoothed = K_object · d · N,     for 0 ≤ d ≤ d_max
F_smoothed = F_max · N,            for d_max < d

where N is the direction of the contact force, based on vertex normal interpolation.
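Force shading is the haptic analogue of Phong shading: interpolate vertex normals to smooth the force direction across a flat polygon. A minimal sketch, assuming barycentric weights are already known from the collision query:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def shaded_force_direction(normals, weights):
    """Blend the polygon's vertex normals with the contact point's barycentric
    weights, then renormalize: the smoothed force direction N."""
    blended = tuple(sum(w * n[i] for w, n in zip(weights, normals))
                    for i in range(3))
    return normalize(blended)

def shaded_force(d, d_max, K_object, F_max, N):
    """F_smoothed = K_object*d*N for 0 <= d <= d_max, else F_max*N."""
    mag = K_object * d if d <= d_max else F_max
    return tuple(mag * c for c in N)

# Contact in the middle of an edge between two differently oriented vertices:
N = shaded_force_direction([(1, 0, 0), (0, 1, 0)], (0.5, 0.5))
F = shaded_force(0.01, 0.02, K_object=500.0, F_max=20.0, N=(0, 0, 1))
```

Because N varies continuously across the surface, the user feels a smoothly curved object instead of the facets of the polygonal approximation.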

  23. The haptic mesh: A single HIP is not sufficient to capture the geometry of fingertip-object contact, as in a haptic glove; the curvature of the fingertip and the object deformation need to be realistically modeled. Figure: screen sequence for squeezing an elastic virtual ball.

  24. Haptic mesh: mesh point i and the penetration distance for mesh point i.

  25. Haptic mesh force calculation. For each haptic interface point i of the mesh:

F_haptic-mesh_i = K_object · d_mesh_i · N_surface

where d_mesh_i are the interpenetration distances at the mesh points, and N_surface is the weighted surface normal of the contact polygon.
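Summing the per-point forces of the haptic mesh gives the resultant applied to the fingertip. A small sketch, assuming a shared weighted surface normal as on the slide (the penetration values are invented):

```python
def mesh_force(penetrations, K_object, N_surface):
    """Resultant of the haptic-mesh forces F_i = K_object * d_i * N_surface.
    Mesh points that are not penetrating (d_i <= 0) contribute nothing."""
    total = K_object * sum(max(d, 0.0) for d in penetrations)
    return tuple(total * c for c in N_surface)

# Three fingertip mesh points against a ball surface; one has lifted off.
F = mesh_force([0.01, 0.02, -0.005], K_object=100.0, N_surface=(0, 0, 1))
```

Because each mesh point carries its own penetration depth, squeezing harder deepens several points at once and the resultant grows accordingly, matching the elastic-ball sequence on the previous slide.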

  26. The Haptics Rendering Pipeline: Traversal → Collision Detection → Force Calculation → Force Smoothing → Force Mapping → Haptic Texturing → Tactile Display.

  27. Force mapping. Force displayed by the Rutgers Master interface:

F_displayed = (Σ F_haptic-mesh) · cos θ

where θ is the angle between the mesh force resultant and the piston.
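The cos θ mapping is a projection of the resultant onto the actuator axis. A sketch under one added assumption: negative projections are clipped to zero, since a pneumatic piston can push but not pull (the slide does not state this explicitly).

```python
import math

def displayed_force(resultant, piston_axis):
    """|F| * cos(theta) = projection of the mesh force resultant onto the
    piston axis. Clipped at zero: assumption that the actuator only pushes."""
    dot = sum(r * p for r, p in zip(resultant, piston_axis))
    norm = math.sqrt(sum(p * p for p in piston_axis))
    return max(dot / norm, 0.0)

aligned = displayed_force((0, 0, 3), (0, 0, 1))        # theta = 0
orthogonal = displayed_force((3, 0, 0), (0, 0, 1))     # theta = 90 degrees
```

Only the force component along the piston reaches the user's finger; the orthogonal component is simply lost, which is an inherent limitation of a single-axis actuator.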

  28. The Haptics Rendering Pipeline: Traversal → Collision Detection → Force Calculation → Force Smoothing → Force Mapping → Haptic Texturing → Tactile Display.

  29. Tactile mouse: forces only in the z direction. Figure: tactile patterns produced by the Logitech mouse (force vs. time plots).

  30. Textures can change according to movement direction: velvet haptic mouse texture simulation

  31. Surface haptic texture produced by the PHANToM interface: forces in all directions; friction simulation – force proportional to the normal force; viscosity – force proportional to velocity; inertia – force m·a.

  32. Surface haptic texture produced by the PHANToM interface: equivalent to a displacement (bump) map.

  33. Haptic texturing at the Haptic Interface Point:

F_texture = A · sin(m·x) · sin(n·y)

where A, m, n are constants: A gives the magnitude of the vibrations; m and n modulate the frequency of the vibrations in the x and y directions. F can be perceived as shape or friction.
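The sinusoidal texture force is trivial to sample per haptic frame. The default amplitude and spatial frequencies below are arbitrary example values, not calibrated device parameters:

```python
import math

def texture_force(x, y, A=0.5, m=200.0, n=200.0):
    """F_texture = A * sin(m*x) * sin(n*y): a vibration pattern added to the
    contact force while the HIP slides over the surface. A sets the bump
    height (force magnitude); m and n set the spatial frequency in x and y."""
    return A * math.sin(m * x) * math.sin(n * y)
```

Sampled at the haptic update rate (typically around 1 kHz) while the HIP slides along the surface, this produces a regular grid of bumps that the user perceives as surface texture or friction.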

  34. BEHAVIOR MODELING: The simulation level of autonomy (LOA) is a function of its components. Thalmann et al. (2000) distinguish three levels of autonomy: simulation components can be “guided” (lowest), “programmed” (intermediate), or “autonomous” (highest).

Simulation LOA = f(LOA(Objects), LOA(Agents), LOA(Groups))

Figure: guided, programmed, and autonomous levels for interactive objects, intelligent agents, and groups of agents. Adapted from (Thalmann et al., 2000).

  35. Interactive objects: Have behavior independent of the user's input (e.g., a clock); this is needed in large virtual environments, where it is impossible for the user to provide all required inputs. Examples: system clock; automatic door – reflex behavior.

  36. Interactive objects: The fireflies in NVIDIA's Grove demo have behavior independent of the user's input; the user controls only the virtual camera.

  37. Agent behavior: A behavior model composed of perception, emotions, behavior, and actions. Perception (through virtual sensors) makes the agent aware of its surroundings. Figure: perception → emotions → behavior → actions, within the virtual world.

  38. Reflex behavior: A direct link between perception and actions, following behavior rules (“cells”); does not involve emotions.

  39. Object behavior. Another example of reflex behavior – “Dexter” at MIT [Johnson, 1991]: the autonomous virtual human shakes the user-controlled hand avatar, followed by a head turn.

  40. Agent behavior – avatars. If the user maps to a full-body avatar, then virtual human agents can react through body expression recognition: for example, dance. User-controlled avatar and autonomous virtual human, Swiss Institute of Technology, 1999 (credit Daniel Thalmann).

  41. Emotional behavior: A subjective strong feeling (anger, fear) following perception. Two different agents can have different emotions in response to the same perception, and thus take different actions. Figure: the same perception feeds Emotions 1 and Emotions 2, leading to different behaviors and Actions 1 vs. Actions 2 in the virtual world.

  42. Crowd behavior: Emphasizes group (rather than individual) actions. Crowds can have guided LOA, when their behavior is defined explicitly by the user, or autonomous LOA, with behaviors specified by rules and other complex methods (including memory). Example – political demonstration: for a guided crowd the user needs to specify intermediate path points; an autonomous crowd perceives information on its environment and decides a path to follow to reach the goal. (Thalmann et al., 2000)

  43. MODEL MANAGEMENT: It is necessary to maintain interactivity and constant frame rates when rendering complex models. Several techniques exist: level of detail (LOD) management; cell segmentation; off-line computations; lighting and bump mapping at the rendering stage; portals.

  44. Level of detail management: Level of detail (LOD) relates to the number of polygons on the object's surface. Even if the object has high complexity, its detail may not be visible if the object is too far from the virtual camera (observer). Figure: tree with 27,000 polygons, shown at a distance where the details are not perceived.

  45. Static level of detail management: We should therefore use a simplified version of the object (fewer polygons) when it is far from the camera. There are several approaches: discrete geometry LOD; alpha LOD; geometric morphing (“geo-morph”) LOD.

  46. Discrete Geometry LOD: Uses several discrete models of the same virtual object; models are switched based on their distance from the camera (r < r0; r0 < r < r1; r1 < r < r2; r2 < r). Figure: LOD 0, LOD 1, and LOD 2 shown at the switching radii r0, r1, r2.
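Discrete LOD selection is a threshold lookup on camera distance. A minimal sketch; the radii are arbitrary example values:

```python
def select_lod(r, thresholds=(10.0, 25.0, 60.0)):
    """Discrete geometry LOD: return the model index for camera distance r.
    thresholds = (r0, r1, r2): r < r0 gives LOD 0 (finest), r0 <= r < r1
    gives LOD 1, and so on; beyond r2 the coarsest model is used."""
    for lod, r_i in enumerate(thresholds):
        if r < r_i:
            return lod
    return len(thresholds)  # coarsest model (could also be culled entirely)
```

Each frame the renderer calls this per object and draws the corresponding pre-built model, trading polygon count for distance-masked detail.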

  47. Alpha Blending LOD: Discrete LOD has problems on the r0 = r, r1 = r, r2 = r circles, leading to “popping” and cycling – objects appear and disappear suddenly. One solution to cycling is a distance hysteresis zone. A solution to popping is alpha blending: gradually changing the transparency of the object; fully transparent objects are not rendered. Figure: hysteresis zone around each radius; LOD 0 opaque, LOD 1 less opaque, LOD 2 fully transparent.
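The hysteresis idea can be sketched as a stateful selector: a switch is committed only once the camera is clearly past the boundary circle, so an object hovering near a radius does not cycle between two models. The band width and radii are invented example values:

```python
def select_lod_hysteresis(r, current_lod, thresholds=(10.0, 25.0, 60.0),
                          band=1.0):
    """Distance hysteresis for discrete LOD: keep the current level while the
    camera distance r sits within +/- band of the switching circle."""
    def raw_lod(dist):
        for lod, r_i in enumerate(thresholds):
            if dist < r_i:
                return lod
        return len(thresholds)

    target = raw_lod(r)
    if target == current_lod:
        return current_lod
    # The boundary being crossed lies between the two candidate levels.
    boundary = thresholds[min(target, current_lod)]
    if abs(r - boundary) > band:
        return target          # clearly past the circle: commit the switch
    return current_lod         # inside the hysteresis zone: hold
```

Alpha blending then handles the remaining popping by fading the incoming model in over a few frames rather than swapping it instantly.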

  48. Geometric Morphing LOD: Unlike discrete geometry LOD, which uses several models of the same object, geometric morphing uses only one complex model; the various LODs are obtained from the base model through mesh simplification. A closed triangulated mesh with n vertices has approximately 2n faces and 3n edges. Figure: mesh before and after simplification – collapsing an edge merges V2 into V1.
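The edge-collapse step in the figure can be sketched on an index-based triangle list. This ignores the geometric error metric a real simplifier would use to choose which edge to collapse; it only shows the topological operation:

```python
def collapse_edge(triangles, v_child, v_parent):
    """Collapse the edge (v_child, v_parent): every reference to v_child is
    rewritten to v_parent, and triangles that used both endpoints become
    degenerate (two identical indices) and are discarded."""
    out = []
    for tri in triangles:
        tri = tuple(v_parent if v == v_child else v for v in tri)
        if len(set(tri)) == 3:      # keep only non-degenerate triangles
            out.append(tri)
    return out

# Fan around the edge (1, 2): collapsing V2 into V1 removes two triangles.
simplified = collapse_edge([(0, 1, 2), (1, 2, 3), (1, 3, 4)],
                           v_child=2, v_parent=1)
```

Recording the collapses in order also gives the inverse "vertex split" operation used by adaptive schemes (next slides) to regain detail where needed.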

  49. Single-object adaptive LOD: Used where there is a single, highly complex object that the user wants to inspect (such as in interactive scientific visualization). Static LOD will not work, since detail is lost where needed – for example, the sphere on the right loses shadow sharpness after static LOD simplification. Figure: sphere with 8,192 triangles (uniform high density) vs. sphere with 512 triangles (static LOD simplification) (from Xia et al., 1997).

  50. Single-object Adaptive Level of Detail: Sometimes edge collapse leads to problems, so vertices need to be split again to regain detail where needed. Xia et al. (1997) developed an adaptive algorithm that determines the level of detail based on distance to the viewer as well as normal direction (lighting). Figure: an edge collapse turns the refined mesh into the simplified mesh, merging V2 into V1; a vertex split reverses it; V1 is the “parent” vertex (adapted from Xia et al., 1997).

  51. Single-object Adaptive Level of Detail: Sphere with 8,192 triangles – uniform high density, 0.115 sec to render; sphere with 537 triangles – adaptive LOD, 0.024 sec to render (SGI RE2, single R10000 workstation) (from Xia et al., 1997).

  52. Single-object Adaptive Level of Detail: Bunny with 69,451 triangles – uniform high density, 0.420 sec to render; bunny with 3,615 triangles – adaptive LOD, 0.110 sec to render (SGI RE2, single R10000 workstation) (from Xia et al., 1997).

  53. Static LOD: Geometric LOD, alpha blending, and morphing all have problems maintaining a constant frame rate. This happens when new complex objects suddenly enter the viewing frustum. Figure: frames i and i+1 of a camera “fly-by”; LOD 1 and LOD 2 objects enter the frustum between frames.

  54. Architectural “walk-through” (UC Berkeley Soda Hall): Camera path through the auditorium from Start, past points A, B, and C, to End. With no LOD management – 72,570 polygons – the frame time varies from 0.2 sec at A to 1.0 sec at C. Plots over 250 frames compare static LOD management with no LOD management. From (Funkhouser and Sequin, 1993).

  55. Adaptive LOD Management – continued: An algorithm that selects the LOD of visible objects based on a specified frame rate. The algorithm (Funkhouser and Sequin, 1993) is based on a benefit-to-cost analysis, where cost is the time needed to render object O at level of detail L and rendering mode R. The constraint for the whole scene is

Σ Cost(O, L, R) ≤ Target frame time

where the cost for a single object is

Cost(O, L, R) = max(c1 · Polygons(O, L) + c2 · Vertices(O, L), c3 · Pixels(O, L))

and c1, c2, c3 are experimental constants, depending on R and the type of computer.

  56. Adaptive LOD Management: Similarly, the benefit for a scene is the sum of the visible objects' benefits, Σ Benefit(O, L, R), where the benefit of a given object is

Benefit(O, L, R) = Size(O) · Accuracy(O, L, R) · Importance(O) · Focus(O) · Motion(O) · Hysteresis(O, L, R)

Value = Benefit(O, L, R) / Cost(O, L, R)

Objects with higher value are rendered first: sort according to value and display objects until the target cost is reached.
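The sort-by-value selection above can be sketched as a greedy pass. This is only an approximation of the Funkhouser-Sequin approach (their algorithm refines the previous frame's choice incrementally); the candidate tuples and budget are invented example data:

```python
def choose_lods(candidates, target_cost):
    """Greedy sketch of benefit/cost LOD selection. Each candidate is a tuple
    (object_id, lod, cost, benefit). Candidates are taken in decreasing
    Value = benefit/cost, at most one LOD per object, while the summed cost
    stays within the target frame time."""
    ranked = sorted(candidates, key=lambda c: c[3] / c[2], reverse=True)
    chosen, spent, seen = {}, 0.0, set()
    for obj, lod, cost, benefit in ranked:
        if obj in seen or spent + cost > target_cost:
            continue
        chosen[obj] = lod
        seen.add(obj)
        spent += cost
    return chosen, spent

# Hypothetical candidates: a tree at two LODs and a car at one LOD.
chosen, spent = choose_lods(
    [("tree", 0, 1.0, 10.0), ("tree", 1, 4.0, 12.0), ("car", 0, 2.0, 2.0)],
    target_cost=3.0)
```

Because the constraint is on total cost rather than per-object detail, the scheme degrades detail gracefully scene-wide instead of letting one expensive object blow the frame budget.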

  57. Level of detail segmentation – elision: No detail elision – 72,570 polygons, frame times between 0.2 sec (A) and 1.0 sec (C); optimization algorithm – 5,300 polygons at a 0.1 sec target frame time (10 fps). Plots over 250 frames. From (Funkhouser and Sequin, 1993).

  58. Level of detail segmentation – rendering mode: No detail elision, 19,821 polygons, vs. optimization, 1,389 polygons, at a 0.1 sec target frame time. Level of detail shown in gray – darker gray means more detail. From (Funkhouser and Sequin, 1993).

  59. Cell segmentation: Another method of model management, used in architectural walk-throughs. To maintain the “virtual building” illusion it is necessary to have at least 6 fps (Airey et al., 1990), and to maintain interactivity and constant frame rates when rendering complex models.

  60. Model management: Only the current “universe” needs to be rendered – PVS (Potentially Visible Sets) and portals.

  61. Cell segmentation – increased frame rate: Buildings are large models that can be partitioned into “cells” automatically and off-line, to speed up simulations at run time. Cells approximate rooms; partitioning algorithms use a “priority” factor that favors occlusions (partitioning along walls). Figure: automatic floor plan partition (Airey et al., 1990).

  62. Cell segmentation: The building model resides in a fully associative cache; but cell segmentation alone will not work if the model is so large that it exceeds available RAM. In this case large delays occur whenever there is a page fault and data has to be retrieved from the hard disk. Figure: frame-time spikes at page faults over successive frames. From (Funkhouser, 1993).

  63. Combined cell, LOD, and database methods: It is possible to add database management techniques to prevent page faults and improve fps uniformity during the walk-through – by estimating how far the virtual camera will rotate and translate over the next N frames and pre-fetching the appropriate objects from the hard disk. Architecture: User Interface → Visibility Determination → Detail Elision → Render, supported by Look-ahead Determination, Cache Management, and I/O Operations over the Database. Figure: floor plan partition (Funkhouser, 1993).

  64. Database management: Objects are loaded in increasing order of detail – LOD 0 (lowest level of detail) is loaded first, LOD 3 (highest level of detail) is loaded last; high LODs are loaded for adjacent cells only. Figure: floor plan visibility and highest LOD; frame time vs. frames (Funkhouser, 1990).
