
Extended tracking - multiple models

Hi,

I'm using the JavaScript iOS SDK to create an augmented reality app that shows different models based on different images.

Using the examples I'm able to show one model; my problem is that I'm not able to show two different models based on two different images.


With my code I can only see the model of the FIRST tracked image. If I then try to track the other image, nothing appears; only the first one works.

If I track the second image first, I see the second model, but I'm then not able to see the first model. This is my code:


 

    init : function() {

        try {
            // create resource to load tracker from
            var targetCollectionResource = new AR.TargetCollectionResource(this.WTC_PATH);

            // create the trainer model.
            trainer = new AR.Model(this.WT3_PATH, {
                scale: {
                    x: 0.1,
                    y: 0.1,
                    z: 0.1
                },
                onDragBegan: function(x, y) {
                    World.requestedModel = 1;
                    oneFingerGestureAllowed = true;
                },
                onDragChanged: function(relativeX, relativeY, intersectionX, intersectionY) {
                   
                    if (oneFingerGestureAllowed)
                        this.translate = {x:intersectionX, y:intersectionY};
                
                },
                onDragEnded: function(x, y) {
                // react to the drag gesture ending
                },
                onRotationBegan: function(angleInDegrees) {
                    // react to the rotation gesture beginning
                    oneFingerGestureAllowed = false;
                },
                onRotationChanged: function(angleInDegrees) {
                    this.rotate.z = rotationValues[0] - angleInDegrees;
                },
                onRotationEnded: function(angleInDegrees) {
                    rotationValues[0] = this.rotate.z;
                    oneFingerGestureAllowed = true;
                },
                onScaleBegan: function(scale) {
                    // react to the scale gesture beginning
                },
                onScaleChanged: function(scale) {
                    //var scaleValue = scaleValues[0] * scale;
                    //this.scale = {x: scaleValue, y: scaleValue, z: scaleValue};
                },
                onScaleEnded: function(scale) {
                    //scaleValues[0] = this.scale.x;
                    //oneFingerGestureAllowed = true;
                }
            });

            rotationValues.push(defaultRotationValue);
            scaleValues.push(defaultScaleValue);

            World.requestedModel = 1;
            World.camDrawables.push(trainer);
            lastAddedModel = trainer;




            // create the earth model.
            earth = new AR.Model(this.WT3_PATH_2, {
                scale: {
                    x: 0.1,
                    y: 0.1,
                    z: 0.1
                },
                onDragBegan: function(x, y) {
                    World.requestedModel = 2;
                    oneFingerGestureAllowed = true;
                },
                onDragChanged: function(relativeX, relativeY, intersectionX, intersectionY) {
                   
                    if (oneFingerGestureAllowed)
                        this.translate = {x:intersectionX, y:intersectionY};
                
                },
                onDragEnded: function(x, y) {
                // react to the drag gesture ending
                },
                onRotationBegan: function(angleInDegrees) {
                    // react to the rotation gesture beginning
                    oneFingerGestureAllowed = false;
                },
                onRotationChanged: function(angleInDegrees) {
                    this.rotate.z = rotationValues[1] - angleInDegrees;
                },
                onRotationEnded: function(angleInDegrees) {
                    rotationValues[1] = this.rotate.z;
                    oneFingerGestureAllowed = true;
                },
                onScaleBegan: function(scale) {
                    // react to the scale gesture beginning
                },
                onScaleChanged: function(scale) {
                    //var scaleValue = scaleValues[1] * scale;
                    //this.scale = {x: scaleValue, y: scaleValue, z: scaleValue};
                },
                onScaleEnded: function(scale) {
                    //scaleValues[1] = this.scale.x;
                    //oneFingerGestureAllowed = true;
                }
            });

            rotationValues.push(defaultRotationValue);
            scaleValues.push(defaultScaleValue);

            World.requestedModel = 2;
            World.camDrawables_2.push(earth);
            lastAddedModel = earth;

            console.log(World.camDrawables);
            console.log(World.camDrawables_2);


            // define tracker object
            var tracker = new AR.ImageTracker(targetCollectionResource, {
				onError: function () {

				},
				onDisabled: function () {

				},
                onTargetsLoaded: function() {
                    
                    try {
                        // add AR.ImageTrackable once the tracker is active, augmenting the "trImage" target
                        var first = new AR.ImageTrackable(tracker, "trImage", {
                            enableExtendedTracking: true, // activates extended tracking
                            drawables: {
                                cam: World.camDrawables // the trainer model added above
                            },

                            // callback function indicating quality of SLAM tracking.
                            onExtendedTrackingQualityChanged: function (targetName, oldTrackingQuality, newTrackingQuality) {

                            	console.log("trImage: " + oldTrackingQuality + " -> " + newTrackingQuality);

                            },

                            // one of the targets is visible
                            onImageRecognized: function onEnterFieldOfVisionFn(targetName) {

								console.log(targetName);

                                this.isVisible = true;
                            },

                            // the target was lost
                            onImageLost: function (targetName) {
                                //this.isVisible = false;

                                console.log("onImageLost: " + targetName);
                            },
                            onDragBegan: function onDragBeganFn(xPos, yPos) {

                                oneFingerGestureAllowed = true;
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragChanged: function onDragChangedFn(xPos, yPos) {
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragEnded: function onDragEndedFn(xPos, yPos) {

                                World.updatePlaneDrag(xPos, yPos);
                                World.initialDrag = false;
                            }
                        });



                        var second = new AR.ImageTrackable(tracker, "target_iot", {
                            enableExtendedTracking: true, // activates extended tracking
                            drawables: {
                                cam: World.camDrawables_2 // the earth model added above
                            },

                            // callback function indicating quality of SLAM tracking.
                            onExtendedTrackingQualityChanged: function (targetName, oldTrackingQuality, newTrackingQuality) {

                            	console.log("target_iot: " + oldTrackingQuality + " -> " + newTrackingQuality);


                            },

                            // one of the targets is visible
                            onImageRecognized: function onEnterFieldOfVisionFn(targetName) {
                                this.isVisible = true;
                            },

                            // the target was lost
                            onImageLost: function (targetName) {
                                //this.isVisible = false;

                                console.log("onImageLost: " + targetName);
                            },
                            onDragBegan: function onDragBeganFn(xPos, yPos) {

                                oneFingerGestureAllowed = true;
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragChanged: function onDragChangedFn(xPos, yPos) {
                                World.updatePlaneDrag(xPos, yPos);
                            },
                            onDragEnded: function onDragEndedFn(xPos, yPos) {

                                World.updatePlaneDrag(xPos, yPos);
                                World.initialDrag = false;
                            }
                        });





                    } 
                    catch (err) {
                        World.onError(err);
                    }
                }
            });
        } catch (err) {
            World.onError(err);
        }
    },

 

Where is the error?


Hi,


Could you please provide the following details:


  • Which version of the SDK are you using?
  • Are you working with the JS API or the Native API?
  • Are you using any of our Extensions (Cordova, Xamarin, Unity)? If yes, which version are you using?


As we offer Extended Tracking samples in the sample app and the documentation, could you please also let us know whether you have checked these?


Thx and greetings

Nicola



How do I enable extended tracking in Wikitude 9?

1. Wikitude_Expert_Edition_Unity_9-5-0_2020_11_30_04_23_05

2. I am not sure. I just downloaded this Wikitude Unity package and imported it in Unity. Kindly explain, please?

3. Unity


I am using the sample scenes.

Hi,


You can find details on the approach used in the Expert Edition here:

https://www.wikitude.com/external/doc/expertedition/Concepts.html#tracking-for-augmented-reality


Please check the Extended Tracking chapter.


Should you need anything further, just let me know anytime.


Greetings

Nicola

Thanks for helping.

I want to ask another thing.

I downloaded the WTC zip (8.5+) from Wikitude Studio, which has more than 10 image targets.

When I import it into the Unity StreamingAssets folder and select that zip file in 'Target Collection', the Image Trackable targets are not showing.

wiki.PNG
(176 KB)

Hi, I had the same issue; thanks to Eva and Andreas for the reply.


What I did to be able to track another target after tracking an "ExtendedTracking Target" is to do this in [onExitFieldOfVision]:

    public void onTrackableLost(string trackable)
    {
        // stop SLAM-based extended tracking so the tracker can
        // recognize other targets again
        this.GetComponentInParent<ImageTracker>().StopExtendedTracking();
    }

 

Then I was able to track another target, and then track my extended-tracking target again without losing the extended tracking feature. To check that, I added this, this time in my [onEnterFieldOfVision]:


 

    public void onTrackableFound(string trackable)
    {
        // verify that the ExtendedTracking flag is still enabled
        Debug.Log("For " + trackable + " extended tracking is set to " + this.GetComponent<TrackableBehaviour>().ExtendedTracking);
    }

 

It's still set to true.


For my project I needed to use [GetComponentInParent], but you just need to access the <ImageTracker> component wherever it is. Don't forget "using Wikitude;" for compilation.


This is the first time I'm explaining my solution.

Hope it helps.

Caroline.


Hello Caroline,

Thank you for posting your solution here so that other users can also see how this can work. In addition, this is a very interesting and efficient approach.

Highly appreciated!
Eva

 

Hi,

I found a few threads about this topic, and this seems to be the one which helped a few people. Since it is three years old, I wanted to ask what the current approach is. Let's say I have a project with 4 markers. I use ExtendedTracking for all markers (e.g. to allow zooming in and out on the marker, with better results than without ExtendedTracking).


What I want:

  • Detect marker A and use ExtendedTracking there for a better experience.
  • When I move to marker B, I want ExtendedTracking for marker B.
  • Marker A should stop tracking completely. Is there a function to call so that Wikitude fires "ImageLost" for marker A?

What I have at the moment:

  • When I use ExtendedTracking and set Concurrent Targets to 2, I can detect one image. ExtendedTracking is working, but no second image is detected.


So one solution would be to create a button for the user to explicitly stop ExtendedTracking [with ImageTracker.StopExtendedTracking()] for the current marker, e.g. marker A. When the camera still points at marker A, the content / scene would start again. This means I must explain to the user how to switch between markers / scenes, which is not a good experience.
It would be perfect if the user could point at any of the four markers and the correct content would start there.

Any way to achieve this?
Thanks

Hi Rainer,


The approach and behavior for Extended Tracking is still the same as what is discussed here - so you still have to invoke the stopExtendedTracking method in case you wish to start / restart Extended Tracking with another target.


In case there are changes to the approach, we'll add you to the list of partners to inform.


Should you need anything further, just let me know anytime.


Greetings

Nicola

Hi Nicola,

thanks for the feedback.


Would it be hard to allow for another marker to be detected? This would make it much easier to work with multiple markers and extended tracking.

Now I only see the "workaround" to show the user that he's "locked in" on a marker and the user needs to explicitly exit (via a button) that marker to get to another one.


Thanks again

Hi Rainer,


I'll add it to our list of feature requests for future releases and discuss further in the team to see if this is something we can include in the future.


As of now, stopping ExtendedTracking when you / your end users want to switch the target is unfortunately the only way to switch between targets.


Thx and greetings

Nicola

Hello Francesco,

The idea behind Extended Tracking is to use the image as an initialization to start tracking the environment of the 2D target. If you want to recognize another target, then what you would need to do is use the dedicated "stopExtendedTracking" functionality provided by our SDK. This allows you to recognize other images in your scene. However, this cancels your current tracking of the first image target.
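
In the JS API used in the code above, that could look roughly like this - a minimal sketch only, assuming (as Francesco's posts below suggest) that stopExtendedTracking() is exposed on the AR.ImageTrackable instance:

    var first = new AR.ImageTrackable(tracker, "trImage", {
        enableExtendedTracking: true,
        drawables: {
            cam: World.camDrawables
        },
        onImageLost: function(targetName) {
            // cancel the SLAM-based tracking of this target so the
            // tracker is free to recognize the other target again
            this.stopExtendedTracking();
        }
    });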

 

Thanks

Eva

Dear Eva,


Thank you so much for this quick answer... So if I want to show a different model with a different image, I need to call stopExtendedTracking when an image is lost (let's call this image A); then, after stopping it, do I need to re-create ImageTrackable A (in case the user wants to rescan image A)?


Thank you

Francesco

I'm playing with this function, but I don't understand why calling stopExtendedTracking in "onImageLost" does not work. If I call the function in onExtendedTrackingQualityChanged, then everything works correctly (without re-creating anything), but obviously the model flickers.
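
For reference, the variant described above would look roughly like this (a sketch only, with the callback signature taken from the code in the first post):

    // workaround: stop extended tracking from the quality callback
    // instead of onImageLost; this frees the tracker for the other
    // target, but makes the model flicker because extended tracking
    // is restarted continuously
    onExtendedTrackingQualityChanged: function (targetName, oldTrackingQuality, newTrackingQuality) {
        this.stopExtendedTracking();
    },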



Francesco
