I have my .wtc file generated and 3 test images which should fire 3 videos. The imagerecognition.js file is set up like this:
createOverlays: function createOverlaysFn() {
    /*
        First an AR.Tracker needs to be created in order to start the recognition engine.
        It is initialized with a URL specific to the target collection. Optional parameters
        are passed as an object in the last argument. In this case a callback function for
        the onLoaded trigger is set. Once the tracker is fully loaded, the function
        worldLoaded() is called.

        Important: If you replace the tracker file with your own, make sure to change the
        target name accordingly. Use a specific target name to respond only to a certain
        target, or use a wildcard to respond to any target or a certain group of targets.
    */
    this.tracker = new AR.ClientTracker("assets/tracker.wtc", {
        onLoaded: this.worldLoaded
    });

    /*
        The next step is to create the augmentation. In this example an image resource is
        created and passed to the AR.ImageDrawable. A drawable is a visual component that
        can be connected to an IR target (AR.Trackable2DObject) or a geolocated object
        (AR.GeoObject). The AR.ImageDrawable is initialized with the image and its size.
        Optional parameters allow positioning it relative to the recognized target.
    */

    /* Create overlay for page one */
    var imgOne = new AR.ImageResource("assets/FG7809.png");
    var overlayOne = new AR.ImageDrawable(imgOne, 1, {
        offsetX: 0,
        offsetY: 0
    });

    var video = new AR.VideoDrawable("assets/wax2012.mp4", 0.5, {
        offsetX: 0,
        offsetY: 0,
        onLoaded: function videoLoaded() {
            video.enabled = true;
        },
        onPlaybackStarted: function videoPlaying() {
            video.playing = true;
            video.enabled = true;
        },
        onFinishedPlaying: function videoFinished() {
            video.playing = false;
            video.enabled = false;
        },
        onClick: function videoClicked() {
            if (video.playing) {
                video.pause();
                video.playing = false;
            } else {
                video.resume();
                video.playing = true;
            }
        }
    });

    var pageOne = new AR.Trackable2DObject(this.tracker, "imgOne", {
        drawables: {
            cam: [overlayOne, video]    /* the drawables to display on the recognized target */
        },
        onEnterFieldOfVision: function onEnterFieldOfVisionFn() {
            if (video.playing) {
                video.pause();
            }
        },
        onExitFieldOfVision: function onExitFieldOfVisionFn() {
            if (video.playing) {
                video.pause();
            }
        }
    });
If I change the "imgOne" in the pageOne definition to "*" and add in display.alert("hello");, the alert shows. However, no video plays.
I have twenty or thirty trigger images with the same number of videos to display, so I really need to make sure that I have the imagerecognition.js file set up correctly.
Am I doing this correctly, and what should the parameter inside the quotes in new AR.Trackable2DObject(this.tracker, "...") be? I have a project where it is imgOne and another where it is pageOne.
Thanks
PFJ
Andreas Fötschl said almost 8 years ago
Hi there!
Please try remote WebView debugging to find potential JS errors, and apply your changes step by step, starting from the provided sample applications.
best regards
Paul Johnson said almost 8 years ago
Hi,
The web console is showing no errors at all, which is really annoying! If I set the trigger on pageOne to "*", then it looks to be working, so the only thing I can surmise is that what I have in place of "*" is incorrect on my other pages.
In case you just have 3 target images / videos, you may also just create the trackables in a loop, using the same approach as the sample application's "multiple targets" example (see the sketch below).
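Roughly like this, as a minimal sketch of the loop approach — the target names ("page1".."page3") and video file names here are placeholders, so substitute the names your own target collection and assets actually use:

var tracker = this.tracker;
/* Hypothetical target/video pairs -- replace with the names from your own .wtc and assets */
var pages = [
    { target: "page1", video: "assets/video1.mp4" },
    { target: "page2", video: "assets/video2.mp4" },
    { target: "page3", video: "assets/video3.mp4" }
];

pages.forEach(function (page) {
    /* Each iteration gets its own video, captured by the trackable's triggers below */
    var video = new AR.VideoDrawable(page.video, 0.5, {
        onLoaded: function videoLoaded() {
            video.enabled = true;
        }
    });
    new AR.Trackable2DObject(tracker, page.target, {
        drawables: {
            cam: [video]
        },
        onEnterFieldOfVision: function () {
            video.play();    /* start this target's video when it comes into view */
        },
        onExitFieldOfVision: function () {
            video.pause();
        }
    });
});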
Using "*" as targetName is fine if you use the very same augmentations but needs some extra work (smart caching and add/removeCamdrawables) in terms of asset management, which is something to consider for more than 5 videos.
Paul Johnson