Solved

Extended tracking - multiple models

Hi, 

I'm using the JavaScript iOS SDK to create an augmented reality app that shows different models based on different images.

Using the examples I'm able to show one model; my problem is that I'm not able to show two different models based on two different images.

With my code I can see only the model of the FIRST tracked image; if I try to track the other image nothing appears, only the first one works.

If I track the second image first, I see the second model, but then I'm not able to see the first one. This is my code:


 

    init : function() {

        try {
            // create resource to load tracker from
            var targetCollectionResource = new AR.TargetCollectionResource(this.WTC_PATH);

            // create the trainer model.
            trainer = new AR.Model(this.WT3_PATH, {
                scale: {
                    x: 0.1,
                    y: 0.1,
                    z: 0.1
                },
                onDragBegan: function(x, y) {
                    World.requestedModel = 1;
                    oneFingerGestureAllowed = true;
                },
                onDragChanged: function(relativeX, relativeY, intersectionX, intersectionY) {
                   
                    if (oneFingerGestureAllowed)
                        this.translate = {x:intersectionX, y:intersectionY};
                
                },
                onDragEnded: function(x, y) {
                    // react to the drag gesture ending
                },
                onRotationBegan: function(angleInDegrees) {
                    // react to the rotation gesture beginning
                    oneFingerGestureAllowed = false;
                },
                onRotationChanged: function(angleInDegrees) {
                    this.rotate.z = rotationValues[0] - angleInDegrees;
                },
                onRotationEnded: function(angleInDegrees) {
                    rotationValues[0] = this.rotate.z;
                    oneFingerGestureAllowed = true;
                },
                onScaleBegan: function(scale) {
                    // react to the scale gesture beginning
                },
                onScaleChanged: function(scale) {
                    //var scaleValue = scaleValues[0] * scale;
                    //this.scale = {x: scaleValue, y: scaleValue, z: scaleValue};
                },
                onScaleEnded: function(scale) {
                    //scaleValues[0] = this.scale.x;
                    //oneFingerGestureAllowed = true;
                }
            });

            rotationValues.push(defaultRotationValue);
            scaleValues.push(defaultScaleValue);

            World.requestedModel = 1;
            World.camDrawables.push(trainer);
            lastAddedModel = trainer;




            // create the earth model.
            earth = new AR.Model(this.WT3_PATH_2, {
                scale: {
                    x: 0.1,
                    y: 0.1,
                    z: 0.1
                },
                onDragBegan: function(x, y) {
                    World.requestedModel = 2;
                    oneFingerGestureAllowed = true;
                },
                onDragChanged: function(relativeX, relativeY, intersectionX, intersectionY) {
                   
                    if (oneFingerGestureAllowed)
                        this.translate = {x:intersectionX, y:intersectionY};
                
                },
                onDragEnded: function(x, y) {
                    // react to the drag gesture ending
                },
                onRotationBegan: function(angleInDegrees) {
                    // react to the rotation gesture beginning
                    oneFingerGestureAllowed = false;
                },
                onRotationChanged: function(angleInDegrees) {
                    this.rotate.z = rotationValues[1] - angleInDegrees;
                },
                onRotationEnded: function(angleInDegrees) {
                    rotationValues[1] = this.rotate.z;
                    oneFingerGestureAllowed = true;
                },
                onScaleBegan: function(scale) {
                    // react to the scale gesture beginning
                },
                onScaleChanged: function(scale) {
                    //var scaleValue = scaleValues[1] * scale;
                    //this.scale = {x: scaleValue, y: scaleValue, z: scaleValue};
                },
                onScaleEnded: function(scale) {
                    //scaleValues[1] = this.scale.x;
                    //oneFingerGestureAllowed = true;
                }
            });

            rotationValues.push(defaultRotationValue);
            scaleValues.push(defaultScaleValue);

            World.requestedModel = 2;
            World.camDrawables_2.push(earth);
            lastAddedModel = earth;



            console.log(World.camDrawables);
            console.log(World.camDrawables_2);


            // define tracker object
            var tracker = new AR.ImageTracker(targetCollectionResource, {
                onError: function () {

                },
                onDisabled: function () {

                },
                onTargetsLoaded: function() {
                    
                    try {
                        // add AR.ImageTrackable once the tracker is active, augmenting the target named "trImage"
                        var first = new AR.ImageTrackable(tracker, "trImage", {
                            enableExtendedTracking: true, // activates extended tracking
                            drawables: {
                                cam: World.camDrawables // the trainer model created above
                            },

                            // callback function indicating quality of SLAM tracking.
                            onExtendedTrackingQualityChanged: function (targetName, oldTrackingQuality, newTrackingQuality) {

                            	console.log("trImage: " + oldTrackingQuality + " -> " + newTrackingQuality);

                            },

                            // one of the targets is visible
                            onImageRecognized: function onEnterFieldOfVisionFn(targetName) {

								console.log(targetName);

                                this.isVisible = true;
                            },

                            // the target was lost
                            onImageLost: function (targetName) {
                                //this.isVisible = false;

                                console.log("onImageLost: " + targetName);
                            },
                            onDragBegan: function onDragBeganFn(xPos, yPos) {

                                oneFingerGestureAllowed = true;
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragChanged: function onDragChangedFn(xPos, yPos) {
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragEnded: function onDragEndedFn(xPos, yPos) {

                                World.updatePlaneDrag(xPos, yPos);
                                World.initialDrag = false;
                            }
                        });



                        var second = new AR.ImageTrackable(tracker, "target_iot", {
                            enableExtendedTracking: true, // activates extended tracking
                            drawables: {
                                cam: World.camDrawables_2 // the earth model created above
                            },

                            // callback function indicating quality of SLAM tracking.
                            onExtendedTrackingQualityChanged: function (targetName, oldTrackingQuality, newTrackingQuality) {

                            	console.log("target_iot: " + oldTrackingQuality + " -> " + newTrackingQuality);


                            },

                            // one of the targets is visible
                            onImageRecognized: function onEnterFieldOfVisionFn(targetName) {
                                this.isVisible = true;
                            },

                            // the target was lost
                            onImageLost: function (targetName) {
                                //this.isVisible = false;

                                console.log("onImageLost: " + targetName);
                            },
                            onDragBegan: function onDragBeganFn(xPos, yPos) {

                                oneFingerGestureAllowed = true;
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragChanged: function onDragChangedFn(xPos, yPos) {
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragEnded: function onDragEndedFn(xPos, yPos) {

                                World.updatePlaneDrag(xPos, yPos);
                                World.initialDrag = false;
                            }
                        });





                    } 
                    catch (err) {
                        World.onError(err);
                    }
                }
            });
        } catch (err) {
            World.onError(err);
        }
    },

 

Where is the error?


Hi Rainer,


I'll add it to our list of feature requests for future releases and discuss further in the team to see if this is something we can include in the future.


As of now, unfortunately, stopping Extended Tracking when you / your end users want to switch the target is the only way to switch between targets.


Thanks and greetings

Nicola

Hi Nicola,

thanks for the feedback.


Would it be hard to allow another marker to be detected? This would make it much easier to work with multiple markers and extended tracking.

Right now the only "workaround" I see is to show the user that they're "locked in" on a marker, and the user needs to explicitly exit that marker (via a button) to get to another one.


Thanks again

Hi Rainer,


The approach and behavior for Extended Tracking are still the same as what is discussed here - so you still have to invoke the stopExtendedTracking method in case you wish to start / restart Extended Tracking with another target.


In case there are changes to the approach, we'll add you to the list of partners to inform.


Should you need anything further, just let me know anytime.


Greetings

Nicola

Hi,

I found a few threads about this topic; this seems to be the one that helped a few people. Since it is three years old, I wanted to ask what the current approach is. Let's say I have a project with 4 markers. I use ExtendedTracking for all markers (e.g. to allow zooming in and out on the marker and to get better results than without ExtendedTracking).


What I want:

  • Detect marker A and use ExtendedTracking there for a better experience.
  • When I move to marker B I want ExtendedTracking for marker B.
  • Marker A should stop tracking completely. Is there a function to call so that Wikitude fires "ImageLost" for marker A?

What I have at the moment:

  • When I use ExtendedTracking and set Concurrent Targets to 2, I can detect one image. ExtendedTracking is working, but no second image is detected.


So one solution would be to create a button for the user to explicitly stop ExtendedTracking [with ImageTracker.StopExtendedTracking()] for the current marker, e.g. marker A. When the camera still points at marker A, the content / scene would start again. This means I have to explain to the user how to switch between markers / scenes, which is not a good experience.
It would be perfect if the user could point at any of the four markers and the correct content would start there.

Any way to achieve this?
Thanks

Hello Caroline,

Thank you for posting your solution here so that other users can also see how this can work. In addition, this is a very interesting and efficient approach.

Highly appreciated!
Eva

 

Hi, I had the same issue; thanks to Eva and Andreas for the replies.


What I did to be able to track other targets after tracking an "Extended Tracking" target is this, in [onExitFieldOfVision]:

    public void onTrackableLost(string trackable)
    {
        this.GetComponentInParent<ImageTracker>().StopExtendedTracking();
    }

 

Then I was able to track other targets and afterwards track my extended-tracking target again without losing the extended tracking feature. To check that, I added this, this time in my [onEnterFieldOfVision]:


 

    public void onTrackableFound(string trackable)
    {
        Debug.Log("For " + trackable + " extended tracking is set to " + this.GetComponent<TrackableBehaviour>().ExtendedTracking);
    }

 

It's still set to true.


For my project I needed to use [GetComponentInParent], but you just need to access the <ImageTracker> component wherever it is. Don't forget "using Wikitude" for compilation matters.


This is the first time I'm explaining one of my solutions.

Hope it helps.

Caroline.


Hi Francesco,

Why are you using extended tracking? In order to continue tracking when the target image is no longer visible in the camera frame, or to draw multiple models at the same time?


Our SDK can only track one image at a time. Extended tracking does not help you in that matter.


`stopExtendedTracking` is usually a function that you call once the user explicitly wants to stop extended tracking. It's nothing you would connect with one of our other callbacks. Calling this function stops any ongoing extended tracking and starts a new image recognition phase. Once image recognition finds a new target in the camera frame, it starts tracking this one and just this one.
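That explicit-stop flow can be sketched roughly like this (a minimal sketch, not official sample code: it assumes `stopExtendedTracking()` is available on the trackable, as in the JS API, and uses a hypothetical HTML overlay button):

```javascript
// Sketch: let the user explicitly leave the current extended tracking
// session so a new image recognition phase starts and another target can
// be recognized. `trackable` stands for an AR.ImageTrackable created with
// enableExtendedTracking: true; the button is a made-up overlay element.
function wireStopButton(trackable, button) {
    button.addEventListener("click", function () {
        // cancels the ongoing extended tracking and restarts recognition
        trackable.stopExtendedTracking();
    });
}
```

Wiring it up would look like `wireStopButton(first, document.getElementById("stopBtn"))`, where `first` is the trackable from the code above and `stopBtn` is an assumed button id.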


I'm still not sure what you want to achieve, so simply drop me a line about what your intention is and I'm happy to help you further from there.


Best regards,

Andreas

Dear Eva,

thanks for this answer. What I want is to call stopExtendedTracking on imageLost to start tracking another image. The problem is that if I call it in onImageLost nothing changes; in fact, if I scan another image nothing happens, no model is shown.

If I call it in onExtendedTrackingQualityChanged I'm able to scan other images and see the models, but the models appear and disappear over and over. I think this is because onExtendedTrackingQualityChanged is called many times, and every time it is called I stop and restart the model rendering, which causes this flickering (the model appearing and disappearing repeatedly).
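If one still wants to trigger the stop from that callback, one way to avoid the repeated stop/restart (an assumption on my part, not an official pattern) is to guard the handler so stopExtendedTracking fires at most once per tracking session:

```javascript
// Hypothetical guard: call stopExtendedTracking at most once per session,
// so the frequently fired onExtendedTrackingQualityChanged callback does
// not keep stopping/restarting rendering (the flicker described above).
function makeStopOnceHandlers(trackable) {
    var stopRequested = false;
    return {
        // reset the guard when the target is recognized again
        onImageRecognized: function (targetName) {
            stopRequested = false;
        },
        onExtendedTrackingQualityChanged: function (targetName, oldQ, newQ) {
            if (!stopRequested) {
                stopRequested = true;
                trackable.stopExtendedTracking();
            }
        }
    };
}
```

The returned handlers would be passed into the AR.ImageTrackable options; resetting the guard in onImageRecognized is an assumption about when a new tracking session starts.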

I'll add a video later!

Hello Francesco,

You can call the 'stopExtendedTracking' function from wherever you wish, so it is not mandatory to call it only when the image is lost. Could you please explain in more detail what you mean when you say that the model flickers? Could you maybe send a video demonstrating this behavior?

Thanks
Eva

 

I'm playing with this function, but I don't understand why calling stopExtendedTracking in "onImageLost" doesn't work, while if I call the function in onExtendedTrackingQualityChanged everything works correctly (without recreating anything), except that the model obviously flickers.



Francesco

Dear Eva, 


thank you so much for this quick answer...so if I want to show a different model for each image, I need to call stopExtendedTracking when an image is lost (let's call this image A), and then, after stopping it, do I need to re-create ImageTrackable A (in case the user wants to rescan image A)?


thank you

francesco

Hello Francesco,

The idea behind Extended Tracking is to use the image as an initialization to start tracking the environment of the 2D target. If you want to recognize another target, then what you need to do is use the dedicated "stopExtendedTracking" functionality provided by our SDK. This allows you to recognize other images in your scene. However, this cancels the current tracking of the first image target.

 

Thanks

Eva
