
creating video drawables dynamically

Hi, I adapted the code from the Wikitude sample app (I'm using the Cordova plugin) to create the augmented experiences dynamically. The target names and media files are stored in their respective arrays, which are read in a loop to create the resources, drawables and Trackable2DObjects. This works fine for images, but it doesn't work at all for video drawables. If I run the same video code outside of a loop, it works. I'm pasting the code below; note that both loops use the same target collection and targets, but I tested them separately and am only pasting them together here for practical reasons. Thanks for your help, regards:

Axel

CODE:

 

createOverlays: function createOverlaysFn() {
    this.tracker = new AR.ClientTracker("assets/magazine.wtc", {
        onLoaded: this.worldLoaded
    });

    var tgtsArray = [ /* target names */ ];
    var videoArray = [ /* video paths, one per target */ ];
    var imgsArray = [ /* image paths, one per target */ ];

    // this loop works: it creates an augmented image for each target
    for (var i = 0; i < tgtsArray.length; i++) {
        var imgRes = new AR.ImageResource(imgsArray[i]);
        var overlay = new AR.ImageDrawable(imgRes, 1, {
            offsetX: 0,
            offsetY: 0
        });

        var aumenta = new AR.Trackable2DObject(this.tracker, tgtsArray[i], { drawables: { cam: [overlay] } });
    }

    // this loop does not work - it points to the same targets as the loop above,
    // so I tested them separately (commented one out and tested the other)
    for (var i = 0; i < tgtsArray.length; i++) {
        var overlay = new AR.VideoDrawable(videoArray[i], 1, {
            offsetX: 0,
            offsetY: 0,
            isTransparent: false
        });

        var aumenta = new AR.Trackable2DObject(this.tracker, tgtsArray[i], { drawables: { cam: [overlay] } });
    }
},

 

Hi Axel,

try this code:
var tgtsArray = [ /* target names */ ];
var video = "path/to/video.mp4"; // single video file (placeholder path)
for (var i = 0; i < tgtsArray.length; i++) {
    var overlay = new AR.VideoDrawable(video, 0.40, {
        offsetY: -0.3
    });
    var pageOne = new AR.Trackable2DObject(this.tracker, tgtsArray[i], {
        drawables: {
            cam: [overlay]
        },
        onEnterFieldOfVision: function onEnterFieldOfVisionFn() {
            overlay.play(-1);
        }
    });
}

Best regards,

Simon

Hi Simon, thanks a lot for your answer, but it did not do the trick. I made only one minimal alteration to your code: the video declaration in your version was a single file,

var video = "path/to/video.mp4"; // placeholder path

and I changed it to an array with one video per target (and used video[i] inside the loop),

var video = [ /* one video path per target */ ];

since what I'm actually trying to do is to have each target augmented with its respective video, played when that target is recognized. So the code almost works, and an earlier version of mine got to the same place and showed the same problem: it always plays the last video in the array. If you scan the target for the first video, the app plays only the sound of the video, and it is always the last video in the array. If you scan the last target, it works fine. I already tried always destroying the video before playing it, putting the drawable objects (the overlays) in an array, and other things, with no luck. If you could help me with this issue I would appreciate it.
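
The symptom described above is the classic var-in-a-loop closure problem in JavaScript: every callback created inside the loop closes over the same loop variable, so by the time a callback runs it only sees the variable's final value. A minimal sketch of the effect, independent of the Wikitude API (the names here are illustrative):

var callbacks = [];
for (var i = 0; i < 3; i++) {
    // every pushed function closes over the same variable i, not a copy of its current value
    callbacks.push(function () {
        console.log("would play video number " + i);
    });
}
// the loop has already finished when the callbacks run, so i === 3 for all of them
callbacks.forEach(function (cb) { cb(); }); // logs "would play video number 3" three times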

Hi Axel,
you are right, of course: this is a JavaScript problem. I made a rough implementation; you can probably find a better solution, but it works:

 

var tgtsArray = [ /* target names */ ];
var video = [ /* one video path per target */ ];
World.overlays = [];
World.trackables = [];
for (var i = 0; i < tgtsArray.length; i++) {
    World.overlays[i] = new AR.VideoDrawable(video[i], 0.40, {
        offsetY: -0.3
    });
    World.trackables[i] = new AR.Trackable2DObject(this.tracker, tgtsArray[i], {
        drawables: {
            cam: [World.overlays[i]]
        },
        // the immediately invoked function freezes the current value of i as ii,
        // so each callback plays its own video instead of the last one
        onEnterFieldOfVision: (function (ii) {
            return function (e) {
                World.overlays[ii].play(-1);
            };
        })(i)
    });
}

 

But now the videos never stop playing, which is probably not what you want once you start a new one. Please have a look at the sample Video - Playback States.

Best regards,
Simon

Hi Simon, thanks again for your answer. The code works perfectly and we're very, very happy here! About the issue of the videos not stopping, I added a pause in onExitFieldOfVision and it works just fine. I'm pasting the final code below in case anybody needs a similar solution. I'm not exactly a JavaScript expert (though not exactly a beginner either) and I could not quite follow the syntax you used in the onEnterFieldOfVision function (which I copied into onExitFieldOfVision); if you could enlighten me about it, just for knowledge's sake, I would appreciate it. Thanks again, a lot, regards:
Axel

for (var i = 0; i < tgtsArray.length; i++) {
    World.overlays[i] = new AR.VideoDrawable(video[i], 0.40, {
        offsetY: -0.3
    });
    World.trackables[i] = new AR.Trackable2DObject(this.tracker, tgtsArray[i], {
        drawables: {
            cam: [World.overlays[i]]
        },
        onEnterFieldOfVision: (function (ii) {
            return function (e) {
                World.overlays[ii].play(-1);
            };
        })(i),
        onExitFieldOfVision: (function (ii) {
            return function (e) {
                World.overlays[ii].pause();
            };
        })(i)
    });
}
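
For completeness, the construct used in onEnterFieldOfVision and onExitFieldOfVision above is an immediately invoked function expression (IIFE): the outer function is called right away with the current value of i, and its parameter ii keeps a private copy of that value for the returned callback. A minimal sketch of the same pattern, with illustrative names outside the Wikitude API:

var callbacks = [];
for (var i = 0; i < 3; i++) {
    callbacks.push((function (ii) {
        // ii is evaluated now, at this iteration, so each returned function remembers its own index
        return function () {
            console.log("callback for index " + ii);
        };
    })(i));
}
callbacks.forEach(function (cb) { cb(); }); // logs indices 0, 1, 2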