Enhancing Online 3D Products through Crowdsourcing
Thi Phuong Nghiem, Axel Carlier, Geraldine Morin, Vincent Charvillat
University of Toulouse, IRIT / ENSEEIHT
ACM Workshop on Crowdsourcing for Multimedia - ACM MM'12
Outline
● Motivation through three initial observations
● Our crowdsourcing set-up for 3D content
● Results
● Conclusion and perspective
Observation #1
● Crowdsourcing definition
● Workers may be motivated by...
  ● Entertainment / personal enjoyment
  ● Altruism
  ● Financial reward
● Specific incentives in an e-commerce set-up?
  ● Would you like free PnP?
  ● Discount code?
Observation #2
● E-commerce tasks: what could be outsourced to customers?
● [Little et al. 10] proposed two main categories:
  ● Decision tasks (opinion production, rating, comparison)
  ● Creation tasks (content production, composition, editing)
Observation #2 (cont.)
Observation #3(D)
● 3D content and e-commerce: an emerging topic?
● 3D product presentation as an alternative to images and/or video
Our idea
[Diagram: product features from the textual description are linked, by motivated workers, to representative views of the 3D model of the product; the semantic links capture customers' preferences.]
A product webpage with text and 3D
[Diagram: the same overview, instantiated as a product page combining the textual description and the 3D model of the product.]
We use x3dom to render 3D objects (a plug-in-free solution).
Linking product features and representative 3D views
(1) Crowdsourced tasks: locate each visible feature on the 3D model and create a Semantic Link.
(2) Use the crowdsourced data to compute Recommended Views.
Semantic Links are presented as blue bullets with a question mark "?". These links help gather knowledge about a product; they also ease browsing its 3D model.
Semantic Link Definition
● A Semantic Link matches a product feature description with its possibly visible position on the 3D model.
● Example of a Semantic Link for the Mode Dial feature.
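A Semantic Link could be represented as a small record pairing the feature text with its 3D position, as in this minimal Python sketch (the slides give no schema, so all field names and coordinate values here are illustrative assumptions):

```python
from dataclasses import dataclass
from typing import Tuple

# Hypothetical Semantic Link record: a textual feature matched with its
# position on the 3D model (field names are assumptions, not the authors').
@dataclass
class SemanticLink:
    feature_name: str                             # e.g. "Mode Dial"
    position: Tuple[float, float, float]          # marked-point, world coords
    normal: Tuple[float, float, float]            # surface normal at that point
    camera_position: Tuple[float, float, float]   # viewpoint used when marking

link = SemanticLink(
    feature_name="Mode Dial",
    position=(0.12, 0.85, 0.30),
    normal=(0.0, 1.0, 0.0),
    camera_position=(0.5, 2.0, 1.5),
)
print(link.feature_name, link.position)
```

Storing the normal and camera position alongside the marked-point is what later allows a full camera pose, not just a target point, to be reconstructed for the Recommended View.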
Basic Task
(1) A product feature (text item) is selected using a checkbox.
(2) The 3D object is manipulated with mouse interactions (zoom/pan/rotate).
(3) The feature position is located by double-clicking on it; a red dot appears to mark the position (called the marked-point) on the 3D object.
(4) The worker indicates their level of expertise/confidence.
Video Part #1
Crowdsourced Data
The collected data are as follows:
● The selected (textual) feature
● The time it took the user to find each feature's visualization in 3D
● The world coordinates of the 3D marked-point, its normal vector, and the camera position (or an "I don't know" event if the user cannot locate the feature)
● The level of expertise about each product
● The types of events created by the user (zoom/pan/rotate/click)
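Before any view is computed, records carrying an "I don't know" answer (and thus no marked-point) have to be set aside. A minimal Python sketch of that filtering step, with assumed field names and made-up values:

```python
# Illustrative crowdsourced records; every key and value here is an
# assumption for the sketch, not the authors' actual data format.
records = [
    {"feature": "mode dial", "marked_point": (0.1, 0.8, 0.3), "time_s": 12.4,
     "expertise": 3, "events": ["rotate", "zoom", "click"]},
    {"feature": "mode dial", "marked_point": None, "time_s": 30.0,
     "expertise": 1, "events": ["rotate", "pan"]},  # an "I don't know" answer
]

# Keep only the records where the worker actually located the feature.
located = [r for r in records if r["marked_point"] is not None]
print(len(located))  # 1 record usable for view computation
```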
Recommended View Computation
● We aim at producing one representative view for each visible feature.
● To generate a Recommended View, we compute the camera parameters so that its look-at point is the median of the marked-points from the crowd.
● The quality of a recommended view is correlated with the dispersion of the marked-points.
Recommended view for the 'mode button'
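The median look-at point and a dispersion measure can be sketched in a few lines of Python. The component-wise median and the standard-deviation-based dispersion proxy below are assumptions consistent with the slide, not the authors' exact implementation; the coordinates are made up:

```python
from statistics import median, pstdev

# Crowd marked-points for one feature (world coordinates, made up for the
# sketch); the last one is an outlier, e.g. a mistaken double-click.
marked_points = [
    (0.10, 0.82, 0.30),
    (0.12, 0.85, 0.28),
    (0.11, 0.80, 0.33),
    (0.90, 0.10, 0.05),
]

# Component-wise median: robust to the outlier, used as the camera look-at.
look_at = tuple(median(p[i] for p in marked_points) for i in range(3))

# Dispersion proxy: per-axis population standard deviation, averaged.
# High dispersion suggests a less reliable recommended view.
dispersion = sum(pstdev(p[i] for p in marked_points) for i in range(3)) / 3

print(look_at, round(dispersion, 3))
```

With the outlier included, the median look-at stays close to the three consistent clicks, which is the point of choosing the median over the mean here.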
The 4 Steps
(1) Association / Linking: what we just saw!
(2) Exploitation / Evaluation: same operations as in (1), except that each time users choose a feature, a recommended view is automatically displayed to suggest the corresponding 3D visualization of the feature. Then, for each feature, we ask the user whether the recommended view was helpful.
(3) Helpfulness evaluation: same as (2), except that users are also shown the results of the helpfulness evaluation of the recommended views from Part 2.
(4) New interface evaluation: we assess whether the enriched interface is better than the initial one (the 3D product along with its textual description).
Recommended View Integration - Part (2)
A recommendation is automatically given to the user at each textual feature selection.
Video Part #2
Helpfulness Influence (Part 3)
The Recommended View is shown together with its Helpfulness Score at each textual feature selection.
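The slides do not say how the Helpfulness Score shown in Part 3 is computed from the Part-2 votes, so the simple helpful/total ratio below is an assumed aggregation, with made-up vote counts:

```python
# Hypothetical Part-2 vote tallies per feature (values invented for the sketch).
votes = {
    "mode dial":   {"helpful": 18, "not_helpful": 2},
    "jack socket": {"helpful": 9,  "not_helpful": 6},
}

def helpfulness(feature: str) -> float:
    """Assumed score: fraction of users who found the recommended view helpful."""
    v = votes[feature]
    total = v["helpful"] + v["not_helpful"]
    return v["helpful"] / total if total else 0.0

print(f"mode dial: {helpfulness('mode dial'):.0%}")  # 90%
```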
Video Part #3
Semantic Link Creation (Part 4)
The Recommended View for the Mode Dial is shown when its Semantic Link is clicked.
Video Part #4
Protocol of User Study
● 82 participants (47 males + 35 females, aged 19-40) divided into 4 parts:
  ● Part 1 (20 participants): outsourcing tasks are first given to users.
  ● Part 2 (28 participants): outsourcing tasks with recommendations.
    – All types of users
    – Expert users
  ● Part 3 (14 participants): helpfulness evaluation of Recommended Views.
  ● Part 4 (20 participants): novel interface evaluation.
Our Models
● Six models from evermotion (see http://www.evermotion.org)
● Different complexity and aesthetics:
  ● Easy / simple / common (coffee m. filter holder)
  ● Technical details: electric guitars (jack socket)
  ● The camera's stabilizer switch is hard to find
Results
Results
● Average time for users to locate a feature in 3D: the recommendation helps users execute the tasks more quickly (Part 2).
Results
● The recommendation helps users execute the tasks more efficiently: the number of correct answers increases while the number of wrong answers decreases (Part 2).
Results
● Influence of the helpfulness evaluation (Part 3): Part 3 yields fewer "I don't know" answers for technical features (e.g. the guitar's jack socket) and fewer "Wrong" answers for hard features (e.g. the stabilizer switch) than Part 2 does.
Novel Interface Evaluation
● Percentage of users who prefer our enriched interface (Part 4)
Conclusion
Our experiments have shown two particular points:
● First, the proposed enhancement of the 3D interactive product has proved useful in two ways:
  – qualitatively, users have appreciated it;
  – quantitatively, it has improved their performance.
● Second, crowdsourcing has proved useful in this context...
Could this semantic association be created by the seller, or by an expert hired by the seller?
● Does it scale? 15-20 features per product, thousands of products...
● Experts are expensive (and may be biased towards advertising the products).
● Our "experts" are inexpensive, and the customers are the best possible detectors of interesting features.