Hello, I am working on an image-target project and need rotational invariance when dragging the drawable around once the marker is found.
After an issue came up in my project, I tested your "Gestures" example and found out that the issue is not related to my code but is general to how the dragging works.
When I scan the marker upside down and drag the bear, e.g., upwards, it moves down. In other words, the dragging (touches on screen) of the virtual object is always interpreted in marker space, not world space, and I can't get a camera transform to convert the screen touches into world space so that I can drag the virtual object in the right direction.
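To illustrate what I mean: if the camera's roll relative to the marker were accessible, the screen drag vector could be rotated back into the marker's frame before applying the translation. This is only a sketch of the idea with a hypothetical `correctDrag` helper; the roll angle itself is exactly what I cannot obtain from the SDK.

```javascript
// Sketch of the correction I am looking for: rotate the 2D screen drag
// vector by the inverse of the camera's roll relative to the marker, so
// a drag "up" on screen moves the drawable "up" as the user sees it,
// even when the marker was scanned upside down.
// `cameraRollRadians` is assumed to come from some camera/marker pose
// (not exposed in SDK 8 as far as I can tell).
function correctDrag(dx, dy, cameraRollRadians) {
  const cos = Math.cos(-cameraRollRadians);
  const sin = Math.sin(-cameraRollRadians);
  return {
    dx: dx * cos - dy * sin,
    dy: dx * sin + dy * cos,
  };
}

// Example: marker viewed upside down (camera rolled 180 degrees):
// a drag up on screen (dy = -1) must become a drag down in marker
// space (dy = +1) for the drawable to follow the finger.
const corrected = correctDrag(0, -1, Math.PI);
```

The drawable's translation offsets would then be updated with the corrected deltas instead of the raw touch deltas.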
In my opinion this is a major issue, since I now always have to scan the marker from the "front" and can't walk around and drag the drawable, because it does not behave as I expect. Or did I overlook a function/callback that fixes this? (using SDK 8 and the Wikitude JS API with Xamarin)
I can confirm the behaviour, and we already have ideas on how to solve this conceptually. We will tackle it in one of the upcoming releases (not 8.1, though).
Do you have an estimate of when it will be available?
Also: do you have any ideas for a workaround with SDK 8? For example, access to the camera/world or camera/object transformation?