Hi. I'm developing in Unity 2019.2.14 with the Wikitude SDK for Unity 8.10.0.
How can I use all the engines (Instant Tracking, Object Tracking, Cloud Recognition) with InputPlugin? The examples and the documentation only show how to use the Image Tracker engine. I need to use the input plugins API to alter the inputs of the Wikitude SDK with Instant Tracking.
Have you already checked the Input Plugins API reference in our Unity documentation here: https://www.wikitude.com/external/doc/documentation/latest/unity/inputpluginsapiunity.html#input-plugins-api?
Yes, I've checked the documentation and the available examples; they only show how to use the Image Tracker engine. I need to use the other engines, such as Instant Tracking, Object Tracking, and Cloud Recognition.
Thanks for your feedback. Could you please provide further details on your use case, so we can give you the technical details on how best to achieve it with our SDK? Do you wish to use all the trackers at the same time - Image Recognition, Object Recognition, and Instant Tracking (similar to what we show in this video)? How do you plan to use the InputPlugin feature?
Thanks and greetings
I need to use InputPlugin to get the video stream before rendering (and send it via web) and feed it into another device running Wikitude (the only way to do this is with InputPlugin). Initially, I wish to use the Instant Tracker.
- Local device: local stream + InputPlugin (Wikitude render + Instant Tracker)
- Remote device: remote stream (via web) + InputPlugin (Wikitude render + Instant Tracker)

I have already sent streams between the devices and applied them to the InputPlugin sample "Advanced Custom Camera" (which uses the Image Tracker), but I could not use the Instant Tracker engine.
I'm developing in:
Wikitude SDK for Unity 8.10.0
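For context, the capture-and-send half of this setup can be sketched in plain Unity, independent of the Wikitude API itself. This is a minimal, hypothetical sketch: the host, port, and length-prefix framing are placeholders for whatever web transport you already use, and a production version would capture and send asynchronously rather than blocking in `Update()`.

```csharp
using System.Net.Sockets;
using UnityEngine;

// Hypothetical sketch: capture camera frames on the local device and push
// the raw pixel data over a TCP socket, before any Wikitude rendering.
// remoteHost/remotePort and the framing scheme are illustrative placeholders.
public class FrameStreamer : MonoBehaviour
{
    public string remoteHost = "192.168.0.10"; // placeholder address
    public int remotePort = 9000;              // placeholder port

    private WebCamTexture camTexture;
    private Texture2D readbackTexture;
    private NetworkStream stream;

    void Start()
    {
        camTexture = new WebCamTexture();
        camTexture.Play();

        var client = new TcpClient(remoteHost, remotePort);
        stream = client.GetStream();
    }

    void Update()
    {
        if (!camTexture.didUpdateThisFrame)
            return;

        if (readbackTexture == null)
            readbackTexture = new Texture2D(camTexture.width, camTexture.height,
                                            TextureFormat.RGBA32, false);

        // Copy the current camera frame into a CPU-readable texture.
        readbackTexture.SetPixels32(camTexture.GetPixels32());
        byte[] frame = readbackTexture.GetRawTextureData();

        // Minimal framing: 4-byte length prefix, then the raw RGBA32 pixels.
        byte[] length = System.BitConverter.GetBytes(frame.Length);
        stream.Write(length, 0, length.Length);
        stream.Write(frame, 0, frame.Length);
    }
}
```

On the remote device, the received RGBA32 frames would then be handed to the Wikitude InputPlugin as in the "Advanced Custom Camera" sample.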
One of the problems with using InputPlugins with InstantTracking is that InstantTracking needs access to sensor data during the initialization phase to properly align the ground plane. Are you able to get that information from your remote stream? Without that, it should still work the same as Image Tracking, but the ground plane would be incorrect. You could still use plane detection to determine that, but it's not as reliable.
Were there any other reasons why you couldn't get InstantTracking to work? Did you have any errors?
I was able to feed the video stream in via InputPlugin and use it with InstantTracking. But since each device has its own sensors, the placed objects do not end up in the same place.
As you said, "InstantTracking needs access to sensor data during the initialization phase to properly align the ground plane."
So, how can I obtain the sensor data Wikitude uses on one device, and how can I apply that data on the second Wikitude device?
The Instant Tracking documentation does not cover this.
I apologize for the late reply.
If I understand correctly, what you are trying to achieve is to stream the camera frame and sensor data from device A to device B, both running Wikitude? And then run Instant Tracking on device B?
Setting the sensor data is currently not available in the Unity API, only in the Native and JS APIs. We can add that in a future update, but until then, wouldn't it be simpler to run Instant Tracking on device A and stream the tracking pose to device B, instead of the sensor data?
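The suggested alternative can be sketched as follows: run Instant Tracking on device A only, and stream the resulting pose to device B instead of raw sensor data. `PoseMessage` and `SendToRemote` are hypothetical names, not part of the Wikitude API; plug in the same web channel you already use for the video stream.

```csharp
using UnityEngine;

// Hypothetical message type for streaming a tracked pose between devices.
[System.Serializable]
public struct PoseMessage
{
    public Vector3 position;
    public Quaternion rotation;
}

public class PoseSender : MonoBehaviour
{
    // Assign the transform driven by the Instant Tracker on device A,
    // e.g. the augmentation placed on the detected ground plane.
    public Transform trackedTransform;

    void Update()
    {
        var msg = new PoseMessage
        {
            position = trackedTransform.position,
            rotation = trackedTransform.rotation
        };

        // JsonUtility serializes Vector3 and Quaternion out of the box.
        string payload = JsonUtility.ToJson(msg);
        SendToRemote(payload); // placeholder: your existing web transport
    }

    void SendToRemote(string payload)
    {
        // Reuse the channel that already carries the video stream.
    }
}

// On device B, parse the payload and apply it to the matching object:
//   PoseMessage msg = JsonUtility.FromJson<PoseMessage>(payload);
//   targetTransform.SetPositionAndRotation(msg.position, msg.rotation);
```

This sidesteps the ground-plane alignment problem entirely, because only device A's sensors ever feed Instant Tracking; device B just renders the received pose.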