Solved

Extended tracking - multiple models

Hi, 

I'm using the JavaScript iOS SDK to create an augmented reality app that shows models based on different images.

Using the examples I'm able to show one model; my problem is that I can't show two different models based on two different images.


With my code I can only see the model of the FIRST tracked image; if I then try to track the other image, nothing appears, only the first one works.

If I track the second image first, I see the second model, but then I can't see the first model. This is my code:


 

    init : function() {

        try {
            // create resource to load tracker from
            var targetCollectionResource = new AR.TargetCollectionResource(this.WTC_PATH);

            // create the trainer model.
            trainer = new AR.Model(this.WT3_PATH, {
                scale: {
                    x: 0.1,
                    y: 0.1,
                    z: 0.1
                },
                onDragBegan: function(x, y) {
                    World.requestedModel = 1;
                    oneFingerGestureAllowed = true;
                },
                onDragChanged: function(relativeX, relativeY, intersectionX, intersectionY) {
                   
                    if (oneFingerGestureAllowed)
                        this.translate = {x:intersectionX, y:intersectionY};
                
                },
                onDragEnded: function(x, y) {
                // react to the drag gesture ending
                },
                onRotationBegan: function(angleInDegrees) {
                    // react to the rotation gesture beginning
                    oneFingerGestureAllowed = false;
                },
                onRotationChanged: function(angleInDegrees) {
                    this.rotate.z = rotationValues[0] - angleInDegrees;
                },
                onRotationEnded: function(angleInDegrees) {
                    rotationValues[0] = this.rotate.z;
                    oneFingerGestureAllowed = true;
                },
                onScaleBegan: function(scale) {
                    // react to the scale gesture beginning
                },
                onScaleChanged: function(scale) {
                    //var scaleValue = scaleValues[0] * scale;
                    //this.scale = {x: scaleValue, y: scaleValue, z: scaleValue};
                },
                onScaleEnded: function(scale) {
                    //scaleValues[0] = this.scale.x;
                    //oneFingerGestureAllowed = true;
                }
            });

            rotationValues.push(defaultRotationValue);
            scaleValues.push(defaultScaleValue);

            World.requestedModel = 1;
            World.camDrawables.push(trainer);
            lastAddedModel = trainer;


            // create the earth model.
            earth = new AR.Model(this.WT3_PATH_2, {
                scale: {
                    x: 0.1,
                    y: 0.1,
                    z: 0.1
                },
                onDragBegan: function(x, y) {
                    World.requestedModel = 2;
                    oneFingerGestureAllowed = true;
                },
                onDragChanged: function(relativeX, relativeY, intersectionX, intersectionY) {
                   
                    if (oneFingerGestureAllowed)
                        this.translate = {x:intersectionX, y:intersectionY};
                
                },
                onDragEnded: function(x, y) {
                // react to the drag gesture ending
                },
                onRotationBegan: function(angleInDegrees) {
                    // react to the rotation gesture beginning
                    oneFingerGestureAllowed = false;
                },
                onRotationChanged: function(angleInDegrees) {
                    this.rotate.z = rotationValues[1] - angleInDegrees;
                },
                onRotationEnded: function(angleInDegrees) {
                    rotationValues[1] = this.rotate.z;
                    oneFingerGestureAllowed = true;
                },
                onScaleBegan: function(scale) {
                    // react to the scale gesture beginning
                },
                onScaleChanged: function(scale) {
                    //var scaleValue = scaleValues[1] * scale;
                    //this.scale = {x: scaleValue, y: scaleValue, z: scaleValue};
                },
                onScaleEnded: function(scale) {
                    //scaleValues[1] = this.scale.x;
                    //oneFingerGestureAllowed = true;
                }
            });

            rotationValues.push(defaultRotationValue);
            scaleValues.push(defaultScaleValue);

            World.requestedModel = 2;
            World.camDrawables_2.push(earth);
            lastAddedModel = earth;

            console.log(World.camDrawables);
            console.log(World.camDrawables_2);


            // define tracker object
            var tracker = new AR.ImageTracker(targetCollectionResource, {
				onError: function () {

				},
				onDisabled: function () {

				},
                onTargetsLoaded: function() {
                    
                    try {
                        // add AR.ImageTrackable once the tracker is active, bound to the "trImage" target
                        var first = new AR.ImageTrackable(tracker, "trImage", {
                            enableExtendedTracking: true, // activates extended tracking
                            drawables: {
                                cam: World.camDrawables // drawables shown while this target is tracked
                            },

                            // callback function indicating quality of SLAM tracking.
                            onExtendedTrackingQualityChanged: function (targetName, oldTrackingQuality, newTrackingQuality) {

                            	console.log("trImage: " + oldTrackingQuality + " -> " + newTrackingQuality);

                            },

                            // one of the targets is visible
                            onImageRecognized: function onEnterFieldOfVisionFn(targetName) {

								console.log(targetName);

                                this.isVisible = true;
                            },

                            // called when the target is lost
                            onImageLost: function (targetName) {
                                //this.isVisible = false;

                                console.log("onImageLost: " + targetName);
                            },
                            onDragBegan: function onDragBeganFn(xPos, yPos) {

                                oneFingerGestureAllowed = true;
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragChanged: function onDragChangedFn(xPos, yPos) {
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragEnded: function onDragEndedFn(xPos, yPos) {

                                World.updatePlaneDrag(xPos, yPos);
                                World.initialDrag = false;
                            }
                        });



                        var second = new AR.ImageTrackable(tracker, "target_iot", {
                            enableExtendedTracking: true, // activates extended tracking
                            drawables: {
                                cam: World.camDrawables_2 // drawables shown while this target is tracked
                            },

                            // callback function indicating quality of SLAM tracking.
                            onExtendedTrackingQualityChanged: function (targetName, oldTrackingQuality, newTrackingQuality) {

                            	console.log("target_iot: " + oldTrackingQuality + " -> " + newTrackingQuality);


                            },

                            // one of the targets is visible
                            onImageRecognized: function onEnterFieldOfVisionFn(targetName) {
                                this.isVisible = true;
                            },

                            // called when the target is lost
                            onImageLost: function (targetName) {
                                //this.isVisible = false;

                                console.log("onImageLost: " + targetName);
                            },
                            onDragBegan: function onDragBeganFn(xPos, yPos) {

                                oneFingerGestureAllowed = true;
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragChanged: function onDragChangedFn(xPos, yPos) {
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragEnded: function onDragEndedFn(xPos, yPos) {

                                World.updatePlaneDrag(xPos, yPos);
                                World.initialDrag = false;
                            }
                        });





                    } 
                    catch (err) {
                        World.onError(err);
                    }
                }
            });
        } catch (err) {
            World.onError(err);
        }
    },

 

Where is the error?


Hello Francesco,

The idea behind Extended Tracking is to use the image as an initialization to start tracking the environment of the 2D target. If you want to recognize another target, you need to use the dedicated stopExtendedTracking functionality provided by our SDK. This allows you to recognize other images in your scene. However, it cancels the current tracking of the first image target.
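In the JavaScript API that suggestion could be wired roughly as below. This is a sketch only: the object is a plain stub standing in for an `AR.ImageTrackable`, and whether `stopExtendedTracking()` lives on the trackable or the tracker should be checked against the SDK reference.

```javascript
// Stub standing in for an AR.ImageTrackable, so the wiring can run
// outside the SDK. stopExtendedTracking is the assumed SDK method.
var firstTrackable = {
    extendedTrackingStopped: false,
    stopExtendedTracking: function () {
        this.extendedTrackingStopped = true;
    }
};

// When target A is lost, stop its extended tracking so the tracker
// can enter a new recognition phase and pick up target B.
firstTrackable.onImageLost = function (targetName) {
    this.stopExtendedTracking();
};

firstTrackable.onImageLost("trImage");
```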

 

Thanks

Eva

Dear Eva, 


Thank you so much for the quick answer. So, if I want to show different models with different images, I need to call stopExtendedTracking when an image is lost (let's call this image A). Then, after stopping it, do I need to re-create ImageTrackable A (in case the user wants to rescan image A)?


thank you

Francesco

I'm playing with this function, but I don't understand why calling stopExtendedTracking in "onImageLost" doesn't work, while calling it in onExtendedTrackingQualityChanged works correctly (without re-creating anything), although the model obviously flickers.



Francesco

Hello Francesco,

You can call the 'stopExtendedTracking' function from wherever you wish, so it is not mandatory to call it only when the image is lost. Could you please explain in more detail what you mean when you say that the model flickers? Could you maybe send a video demonstrating this behavior?

Thanks
Eva

 

Dear Eva,

Thanks for the answer. What I want is to call stopExtendedTracking in onImageLost in order to start tracking another image. The problem is that calling it in onImageLost doesn't change anything: if I scan another image nothing happens, no model is shown.

If I call it in onExtendedTrackingQualityChanged I can scan other images and see the models, but the models keep appearing and disappearing. I think this happens because onExtendedTrackingQualityChanged is called many times, and every time it is called I stop and restart the model rendering, which produces the flickering.
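One way to avoid the repeated stop/restart would be to guard the call so it fires at most once per tracking session: use onImageLost only to request a stop, and perform the actual call from the next onExtendedTrackingQualityChanged tick. A sketch with stub objects (not real SDK code) follows.

```javascript
// Workaround sketch: defer stopExtendedTracking to the next quality
// tick, but guard it so it fires at most once per lost target instead
// of on every onExtendedTrackingQualityChanged call.
// (trackable is a plain stub standing in for an AR.ImageTrackable.)
var trackable = {
    stopCalls: 0,
    stopExtendedTracking: function () { this.stopCalls += 1; }
};

var stopRequested = false;

function onImageRecognized(targetName) {
    stopRequested = false;              // new session: allow one stop later
}

function onImageLost(targetName) {
    stopRequested = true;               // defer the stop to the next tick
}

function onExtendedTrackingQualityChanged(targetName, oldQ, newQ) {
    if (stopRequested) {
        trackable.stopExtendedTracking();  // fires exactly once, no flicker
        stopRequested = false;
    }
}

// Simulate one recognize/lose cycle followed by several quality ticks.
onImageRecognized("trImage");
onImageLost("trImage");
onExtendedTrackingQualityChanged("trImage", 1, 0);
onExtendedTrackingQualityChanged("trImage", 0, 0);
onExtendedTrackingQualityChanged("trImage", 0, 1);
```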

I'll post a video later!

Hi Francesco,

Why are you using extended tracking? In order to continue tracking when the target image is no longer visible in the camera frame, or to draw multiple models at the same time?


Our SDK can only track one image at a time. Extended tracking does not help you in that matter.


`stopExtendedTracking` is usually a function that you call once the user explicitly wants to stop extended tracking. It's not something you would connect with one of our other callbacks. Calling this function stops any ongoing extended tracking and starts a new image recognition phase. Once image recognition finds a new target in the camera frame, it starts tracking that one, and just that one.
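Tied to an explicit user action, that would look roughly like the sketch below; the trackable is a stub (in the app it would be the `AR.ImageTrackable` instance), and the button handler name is purely illustrative.

```javascript
// stopExtendedTracking tied to an explicit user action (e.g. a
// "scan new target" button in the HTML overlay), not to a tracking
// callback. currentTrackable is a stub for an AR.ImageTrackable.
var currentTrackable = {
    extendedTrackingActive: true,
    stopExtendedTracking: function () { this.extendedTrackingActive = false; }
};

// Hypothetical button handler: stop extended tracking so the SDK
// enters a new image recognition phase.
function onResetButtonTapped() {
    if (currentTrackable && currentTrackable.extendedTrackingActive) {
        currentTrackable.stopExtendedTracking();
    }
}

onResetButtonTapped();
```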


I'm still not sure what you want to achieve, so simply drop me a line about your intention and I'm happy to help you from there.


Best regards,

Andreas

Hi, I had the same issue; thanks to Eva and Andreas for the replies.


What I did to be able to track other targets after tracking an extended-tracking target is the following, in [onExitFieldOfVision] (this is the Unity plugin, C#):

    public void onTrackableLost(string trackable)
    {
        this.GetComponentInParent<ImageTracker>().StopExtendedTracking();
    }

 

Then I was able to track other targets and afterwards track my extended-tracking target again without losing the extended tracking feature. To check that, I added this, this time in my [onEnterFieldOfVision]:


 

    public void onTrackableFound(string trackable)
    {
        Debug.Log("For " + trackable + " extended tracking is set to " + this.GetComponent<TrackableBehaviour>().ExtendedTracking);
    }

 

It's still set to true.


For my project I needed to use [GetComponentInParent], but you just need to access the <ImageTracker> component wherever it is. Don't forget "using Wikitude;" for compilation.


This is the first time I'm explaining one of my solutions.

Hope it helps.

Caroline.


Hello Caroline,

Thank you for posting your solution here so that other users can also see how this can work. This is a very interesting and efficient approach.

Highly appreciated!
Eva

 
