I have managed to compile an Android application that tracks a JoyCon using the sample scenes from the Unity project. The app does detect and track my real object (apart from some minor alignment issues that I am sure I can fine-tune in Unity), but the engine has serious trouble stabilizing the alignment even when the phone is static. The attached video shows the situation: the capturing camera does not move, yet the alignment mesh and cones shake when they should not.
The JoyCon's .wto file was generated from five photos, and the point cloud looked fine to me. The alignment mesh is derived from an existing mesh I found online.
Unity version is 2019.4.11f1, Wikitude Unity SDK package is version 9.4.0. I can provide model training photos or the .wto upon request.
Any clue as to what I am doing wrong, or whether I should set up my own scene instead of replacing an existing one? Is there any issue with the way I generated the training dataset?
Thank you very much in advance.
You can also extend the current target with another set of images taken against different backgrounds. This might also improve recognition and tracking! If you are using the Expert Edition plugin, using ARFoundation or ARBridge there could also greatly improve tracking stability.
I think that might just be because the target isn't very detailed, so tracking is a little shaky. You could try adding a script that smooths the translation from frame to frame to counteract the jittering.
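A minimal sketch of that kind of smoothing is an exponential moving average of the tracked position: each frame, nudge the displayed position a fraction of the way toward the raw tracked position instead of snapping to it. In a Unity script you would typically do this with `Vector3.Lerp` in `LateUpdate`; the snippet below illustrates the same filter in plain Python (the function name, samples, and `alpha` value are all illustrative, not part of the Wikitude SDK):

```python
# Exponential smoothing of a tracked 3D position to damp per-frame jitter.
# alpha near 0 -> heavy smoothing (more lag); alpha near 1 -> little smoothing.

def smooth_position(previous, raw, alpha=0.2):
    """Blend the previous smoothed position a fraction alpha toward the raw sample."""
    return tuple(p + alpha * (r - p) for p, r in zip(previous, raw))

# Simulated raw samples for a static target jittering around (1, 2, 3).
raw_samples = [
    (1.05, 2.00, 2.95),
    (0.95, 2.10, 3.05),
    (1.10, 1.90, 3.00),
    (0.90, 2.05, 2.95),
]

smoothed = raw_samples[0]
for sample in raw_samples[1:]:
    smoothed = smooth_position(smoothed, sample)

# The smoothed position stays much closer to the target's true position
# than the individual jittery samples do.
print(smoothed)
```

The trade-off is lag: a smaller `alpha` suppresses more jitter but makes the mesh trail behind real motion, so it is worth tuning for your use case.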