
Image & Object tracking at the same time?

Hi Wikitude,


Congrats on the latest version of your SDK! All these new features are pretty exciting :-)


I work on an app for museums, and until now we were only able to recognize and augment paintings. Now I hope we will also be able to recognize sculptures and other 3D objects.


However, I can foresee a first problem in our case: when a user enters a room, we don't know whether they will start interacting with a sculpture (object recognition and tracking) or a painting (image recognition and tracking).


Is there a way to have an ObjectTracker and an ImageTracker running at the same time? Otherwise, what solution would you recommend?


Thanks,


Amaury.


Hello Amaury,

Thank you for your feedback. Unfortunately, your request is not supported: only one tracker may be active at a time. If you enable a new one, we disable the current one in the background. One workaround would be to have two buttons that load these two experiences separately, so the user would first select whether they wish to recognize a painting or a sculpture and tap the corresponding button.
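A minimal sketch of that two-button workaround, in plain JavaScript. The `enabled` flag mirrors how a tracker might be toggled, but treat the property name and the wiring as assumptions here, not the official Wikitude API:

```javascript
// Keep at most one tracker active at a time: enable the chosen one,
// disable all the others. `trackers` maps a mode name to an object
// carrying a hypothetical `enabled` flag.
function selectTracker(trackers, name) {
  Object.keys(trackers).forEach(function (key) {
    trackers[key].enabled = (key === name);
  });
}

// Hypothetical wiring to the two buttons in the AR view:
// paintingButton.onclick  = function () { selectTracker(trackers, "image"); };
// sculptureButton.onclick = function () { selectTracker(trackers, "object"); };
```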

Thank you
Eva

 

Hi Eva,


Thanks for your quick answer. From a UX perspective, it would be too detrimental to ask our users to manually switch between 3D and 2D recognition depending on the type of object in front of them.


We envision two workarounds here:

  1. Constantly switching between the 2D and 3D trackers every 500 ms (and stopping as soon as something is tracked), but I imagine this would cause a lot of CPU consumption (?).
  2. Creating many image targets all around our 3D objects and switching to 3D recognition as soon as one of these "special" 2D targets is recognized. These 2D targets would exist only to trigger 3D recognition. In this scenario, once 3D recognition is active and nothing has been tracked for some time (~2 s), we would switch back to 2D recognition.
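The switching logic of the first workaround could be sketched as a small state machine in plain JavaScript. The `onSwitch` callback is a hypothetical hook that would enable or disable the actual Wikitude trackers; everything here is illustrative, not SDK API:

```javascript
// Alternate between "2d" and "3d" modes on each tick (e.g. driven by
// setInterval(alt.tick, 500)), pausing the alternation while a target
// is being tracked.
function makeAlternator(onSwitch) {
  var state = { mode: "2d", tracking: false };
  return {
    tick: function () {
      if (!state.tracking) {
        state.mode = state.mode === "2d" ? "3d" : "2d";
        onSwitch(state.mode); // hypothetical hook: enable tracker for `mode`
      }
      return state.mode;
    },
    // Call from the recognized/lost callbacks to pause/resume switching.
    setTracking: function (isTracking) { state.tracking = isTracking; }
  };
}
```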


We would be very interested in your feedback on these two workarounds.

Thanks,


Amaury.




Hi Amaury,


I talked with the team about this, and the general conclusion was that the second workaround sounds better than the first. The issue with the second one is that if you create target images very close to each other around the object, the targets may share the exact same features, which can cause problems with image recognition. If you take this approach, you would have to make sure to create the targets from different sides, and even then we cannot guarantee that it will work.


I would suggest placing markers somewhere near the objects to trigger the switch from image to object tracking.


Best Regards,

Alex


Hi,


I am doing some tests and something unexpected is happening: a 2D and a 3D tracker are just working at the same time, independently and simultaneously :-) On the same screen I can see a 3D object tracked with some augmentation on one side, and an image tracked with its own augmentation on the other side.


I am using a beta version of SDK 7.1; is that why this unexpected feature is enabled? Also, I am doing my tests with an iPad Pro. Can I rely on this behavior on slower devices?

Thanks.

Hello Amaury,



I would not expect this to be possible. Would you mind sharing the code you are using to make this work so I can have a look for myself?


Also, which version of the SDK are you using exactly? What's the source of this pre-release package?



- Daniel


Hi Daniel,


I am using the version from this link, which you sent me some weeks ago:


http://wikitude-web-hosting.s3.amazonaws.com/sdk/support/forum/5000084251/WikitudeSDK.framework.zip


A simplified version of my code:

 var targetCollectionResource3D = new AR.TargetCollectionResource(wtoUrl);

 this.current3DTracker = new AR.ObjectTracker(targetCollectionResource3D, {
     onTargetsLoaded: () => {
         console.log("3D targets loaded!", wtoUrl);
     },
     onError: (err) => {
         console.error("3D target loading error!", err);
     }
 });

 // Wildcard trackable: reacts to any target in the collection.
 this.joker3DTrackable = new AR.ObjectTrackable(this.current3DTracker, "*", {
     drawables: {
         cam: []
     },
     onObjectRecognized: (targetName) => this.onSomethingRecognized(targetName, true),
     onObjectLost: (targetName) => this.onSomethingLost(targetName)
 });

Then I add some drawables dynamically depending on the detected target.


Later I execute very similar code for 2D tracking (basically replacing ObjectTracker with ImageTracker and loading a WTC file), without extended or multiple tracking.
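For completeness, the 2D side described above can be factored into a small helper. The Wikitude `AR` namespace is passed in as a parameter here so the wiring can be exercised outside the SDK; the helper name and callback parameters are illustrative, while the `AR.*` constructor names and callbacks follow the snippet earlier in this thread:

```javascript
// Sketch: mirror of the 3D snippet for 2D image tracking.
// `AR` is the Wikitude namespace (injected here for testability),
// `wtcUrl` points at the 2D target collection.
function create2DTracking(AR, wtcUrl, onRecognized, onLost) {
  var resource = new AR.TargetCollectionResource(wtcUrl);

  var tracker = new AR.ImageTracker(resource, {
    onTargetsLoaded: function () { console.log("2D targets loaded!", wtcUrl); },
    onError: function (err) { console.error("2D target loading error!", err); }
  });

  // Wildcard trackable, as in the 3D case; `false` marks a 2D recognition.
  var trackable = new AR.ImageTrackable(tracker, "*", {
    drawables: { cam: [] },
    onImageRecognized: function (targetName) { onRecognized(targetName, false); },
    onImageLost: onLost
  });

  return { tracker: tracker, trackable: trackable };
}
```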


It seems that enabling a 2D tracker doesn't automatically disable the active tracker, and vice versa, so both trackers can run at the same time.


When both trackers run at the same time, I notice some flickering in the camera feed, and CPU usage goes up to ~180% in Xcode (on a 9.7" iPad Pro). I think CPU usage usually stays under 100% when only one tracker is active.


I really hope you can make a feature out of this bug :-) The use case I presented at the start of this thread is, well, problematic if 2D and 3D recognition can't be done together.


Good morning Amaury,



Very interesting to know that this is actually working. I believe this might be something we want to investigate and have as a properly implemented and tested feature in the SDK. For now, however, I cannot recommend using image tracking and object tracking at the same time, or any other combination of trackers for that matter.


As it is not a feature we had in mind yet, it may break with future releases and we obviously cannot provide any guarantees regarding its functionality.


We do not have this planned as an immediate feature yet. So I cannot make any promises on when this feature might be available either.


I can definitely see the validity of your use case, but I'm afraid that, for now, to stay within the functionality we officially support, you'd have to find a mechanism to decide which type of tracking to use for a particular room.



- Daniel


I would like to revisit this, as we have the exact same issue. From a UX perspective, having to manually toggle between image and object tracking is very awkward; in a gallery with both paintings and sculptures, it prevents a seamless AR experience across all exhibits.


@Amaury what did you end up doing?

Hi Shen Heng!


We ended up dividing the museum into 2D and 3D recognition areas and relying on our indoor positioning system to switch between the two recognition modes.
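That area-based switch boils down to a simple lookup from the current indoor-positioning zone to a recognition mode. A sketch, where the zone names and the default mode are purely illustrative assumptions:

```javascript
// Map each indoor-positioning zone to the recognition mode used there.
// Zone names are made up for this example.
var areaModes = {
  "paintings-hall": "2d",
  "sculpture-court": "3d"
};

// Pick the recognition mode for the user's current area, defaulting to
// image (2D) recognition for unmapped areas; the result would then drive
// which Wikitude tracker is loaded.
function modeForArea(area) {
  return areaModes[area] || "2d";
}
```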


Note that, interestingly, 3D object recognition works well on flat elements, so when an area mixes volumetric (3D) and flat (2D) objects, we can use 3D object recognition for everything.


Hope this helps! Feel free to contact us if you are interested in our complete AR solution for museums (amaury@museopic.com).

Hi Amaury,


I have been trying to get multiple object tracking to work. It sounded like you were able to implement it. Could you kindly share how you got it to work?


For some reason, only the first object in my collection ever gets detected.


