
Instant Tracking onTrackingStopped, get Last Model position?

I'm using the Android JS SDK 6 and I would like to make the model stick where it is when I get the onTrackingStopped event from my Instant Tracking.


So basically, when instant tracking stops working, I would like to get the last model position and make it stay there instead of disappearing.


PS: What happens when this event is triggered: does the model get deleted, or is it just not shown? I checked the model when the event is triggered and the enabled attribute is still true even when it's not shown...


Thanks! 


Hi Alex,

This is currently not possible with the JS API SDK. You would need to use our Unity plugin or the Native API SDK in order to store the last known model matrix and then continue rendering the model at that position even though tracking is lost. We have a similar concept for image tracking called snapToScreen. Would that be enough for you, or do you want to continue rendering it at the exact same position in 3D space? Anyhow, I will add it to our list of customer feature requests.
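
For reference, snapToScreen on an image trackable looks roughly like this (a sketch from memory of the image-tracking samples; the exact option names, like snapContainer, and the trackable setup should be verified against the JS API reference):

var tracker = new AR.ImageTracker(new AR.TargetCollectionResource("assets/targets.wtc"));

var trackable = new AR.ImageTrackable(tracker, "*", {
    drawables: {
        cam: [model] // your existing AR.Model / drawable
    },
    snapToScreen: {
        // a div in the Architect World HTML that the drawable
        // gets pinned to
        snapContainer: document.getElementById('snapContainer')
    },
    onImageLost: function () {
        // keep the drawable visible, pinned to the screen,
        // after the image target is lost
        trackable.snapToScreen.enabled = true;
    }
});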


Regarding your second question: If tracking is lost, we don't delete the model; we just set an internal flag to stop rendering it. It's expected that the enabled property does not reflect this.


Best regards,

Andreas

Thanks for the quick answer!


The problem is, I would like to keep an animation playing while tracking is lost and later get the animation's onFinish callback.


So is it possible to keep rendering the model with the animation?


As a workaround, can I clear that internal flag on the AR object from JS? Or maybe create the same model at a new RelativeLocation with the animation? It would likely need to be relative to the tracker location to stay stable.


Thanks,

it's amazing software!


Alex Gilbert

Hi Alex,

I'm afraid that with instant tracking you have no way to keep the model rendered (playing an animation) when tracking is lost. Not with our JS API SDK.


You also can't change the rendering behaviour from the JS API. That's hidden in the C++ part of our SDK and is not reachable from JS.

If you create a relative location, the model will be rendered, but it won't stay at the exact position where tracking was lost. You can, however, use the same AR.Model instance and add it as a drawable to an AR.GeoObject that you initialise with the relative location, as sketched below. Give it a try, but I suspect it won't be accurate enough.
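
Roughly like this (an untested sketch; `model` is your existing AR.Model instance):

// Reuse the existing AR.Model and re-attach it at a relative
// location, e.g. 2 m north of the user. The first argument null
// means "relative to the current user location".
var fallbackLocation = new AR.RelativeLocation(null, 2, 0, 0);
var fallbackObject = new AR.GeoObject(fallbackLocation, {
    drawables: {
        cam: [model]
    }
});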


Best regards,

Andreas


PS: Thx for the compliment :)

Okay, thanks!


I'm trying to set up the approach from this answer to find out where my user is facing:


http://www.wikitude.com/developer/developer-forum/-/message_boards/message/287473


Do you have a better way of doing it?

I implemented it and it was fully working until I tried to add the onEnterFieldOfVision event; nothing gets called.


Here is my code that creates those models around me:


var maxBallAround = 8;
var valueNorth = [2, 1, 0, -1, -2, -1, 0, 1];
var valueEast = [0, 1, 2, 1, 0, -1, -2, -1];
var scale = 0.2;

for (var idx = 0; idx < maxBallAround; idx++) {
    // Wrap the body in an IIFE so each callback captures its own
    // idx; with a plain `var` loop variable, every callback would
    // log the final value (8).
    (function (idx) {
        // 8 points on a circle, roughly 2 meters around me
        var position = new AR.RelativeLocation(null, valueNorth[idx], valueEast[idx], 0);

        var model = new AR.Model("js/ball.wt3", {
            scale: { x: scale, y: scale, z: scale }
        });

        var obj = new AR.GeoObject(position, {
            drawables: {
                cam: [model]
            }
        });

        obj.onEnterFieldOfVision = function () {
            console.log("this ball is visible: " + idx);
        };
        obj.onExitFieldOfVision = function () {
            console.log("this ball is invisible: " + idx);
        };

        obj.idx = idx;

        World.local.ballArounds.push(obj);
    })(idx);
}

I would like the onEnterFieldOfVision event to fire when my user is facing my models, and to set a value that gives me their idx so I can identify their position later. The event doesn't get called, but when I console.log my AR.Model, I can see the function is assigned to the events.


Also, can I set those models' opacity to 0% and still have the onEnterFieldOfVision trigger fire?

Hi Alex,

An AR.Model's opacity does not prevent an onEnterFieldOfVision trigger call. If that trigger is not called, something is wrong with your relative location. Please remember that relative locations use magnetic north as their 0/0 reference point. This means that if you e.g. look south and lose instant tracking, the model you place might actually be behind you. That's why I doubt relative locations will work for you. (You could add a direction indicator to show where the model is, as sketched below, but I guess you don't want your users to rotate the device until the location is in sight.)
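
A direction indicator would look roughly like this (a sketch; the arrow image path is a placeholder):

// An indicator drawable is shown at the screen border while the
// GeoObject is out of view, pointing towards its location.
// "assets/indicator.png" is a placeholder asset.
var indicatorImage = new AR.ImageResource("assets/indicator.png");
var indicatorArrow = new AR.ImageDrawable(indicatorImage, 0.1, {
    verticalAnchor: AR.CONST.VERTICAL_ANCHOR.TOP
});
var obj = new AR.GeoObject(position, {
    drawables: {
        cam: [model],
        indicator: [indicatorArrow]
    }
});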


Best regards,

Andreas

Thanks for the answer! 


I just want to get my user's direction: I want to change the 3D wt3 model depending on where the user is facing! If my user is not facing a specific zone, I will remove the model.


(I can't use GeoLocation because I need this to work indoors.)

Right now I use the IndoorAtlas SDK for indoor location with beacons. It's very precise, and I can get my user's lat/lon, X and Y values in meters on a floor plan, and my user's bearing (the problem is that the bearing is not precise when the phone is vertical; I tried getting the pitch/azimuth/roll).


That's why I use the approach of creating invisible 3D spheres around my user to see where he is facing, using the onEnterFieldOfVision/onExitFieldOfVision events.


So what should I do if I can't use relative locations? How can I get my user's bearing and compute a point that represents where he is facing, 10 meters away?

Ex: User location:

X = 10 m, Y = 10 m

Bearing = 90 degrees (so east)

Distance = 10 m

Result: X = 20 m, Y = 10 m (let's say the X axis is aligned with east)
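
Something like this is what I have in mind (plain trigonometry, nothing Wikitude-specific; same axis convention as the example):

// Project a point `distance` meters from (x, y) along a bearing
// measured in degrees clockwise from north; X axis = east,
// Y axis = north.
function project(x, y, bearingDeg, distance) {
    var rad = bearingDeg * Math.PI / 180;
    return {
        x: x + distance * Math.sin(rad), // east component
        y: y + distance * Math.cos(rad)  // north component
    };
}

// project(10, 10, 90, 10) -> { x: 20, y: 10 }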

Thanks so much!!!


Hi Alex,

Thx for describing your thoughts. This makes it much easier for me to help you.

You chose an interesting approach to find out where the user is looking. You don't need to use a 3D model of a sphere; it can also be e.g. an AR.Circle (see the sketch below). But I guess there will always be some offset coming from the compass, which will slightly misplace the drawables. You can use this approach to get a rough direction like north, north-east, east, south-east, and so on.
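
For example (a sketch; as mentioned above, an invisible drawable still fires the field-of-vision triggers):

// A fully transparent AR.Circle as a field-of-vision probe instead
// of a 3D sphere model. Opacity 0 does not suppress the
// onEnterFieldOfVision / onExitFieldOfVision triggers.
var probe = new AR.Circle(1.0, {
    opacity: 0.0
});
var probeObject = new AR.GeoObject(position, {
    drawables: {
        cam: [probe]
    }
});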

But you're right: currently that's your best option for getting the user's orientation if you don't want to implement compass/gyro access in ObjC/Java yourself and inject that information into your Architect World.


Best regards,

Andreas


I'm already injecting the pitch/azimuth/roll into my AR World. Is there something else I can use to get my user's direction?


I want to combine those values with your approach (relative locations) to make things really precise when the phone is vertical! Maybe I can use the AR.radar that you provide to see if I'm facing a ball, then check the relative position of that ball to see if I'm facing north/west, etc.? Something like the sketch below.
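
For example, inside my creation loop from above (World.local.facing is just a placeholder of mine):

// After obj.idx = idx in the loop. idx 0 = north, then clockwise
// in 45-degree steps, matching the valueNorth/valueEast arrays
// (2/0, 1/1, 0/2, ...).
var headings = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"];
obj.onEnterFieldOfVision = function () {
    World.local.facing = headings[idx]; // placeholder storage
};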


Thanks,

Alex Gilbert

The Wikitude SDK unfortunately has no public API (neither JS nor native) to get the user's direction. The AR.radar also doesn't give you this information.

Right now I don't know how you could get more precise information other than injecting more Android SDK/iOS SDK related information from their APIs.


Best regards,

Andreas
