We are developing a solution for the Epson BT-300, and the samples we are evaluating are: "Image on target", "Extended tracking", and "QR & Barcode".
The samples built with the Epson SDK are stable, but their performance is poor: they take a long time to recognise the target image, and you need to be very close to it. We then tried the native Android API on other devices and performance seemed very good, but on the Epson BT-300 the samples were unstable and some of them crashed.
Our questions are:
1) Is it actually possible to use the Android native API on the Epson BT-300 device?
1.1) If it is possible, how can we get rid of the camera video rendered on the device's screen?
2) How can we improve the performance of the Epson SDK? (For instance, does making the calibration box smaller or bigger have any effect?)
Thank you for your help.
With the SDK 7.2 release for the Epson BT-300 we included fixes for crashes with instant and object tracking. Those fixes did not make it into the 7.2 Native or JS SDK, which explains the crashes you saw with those samples. Also please note that performance can differ drastically between devices depending on their hardware.
1) Generally, yes, it is possible to use the Native SDK on the BT-300, but no calibration is included in it.
1.1) You could enable RenderExtension.useSeparatedRenderAndLogicUpdates and then only call onUpdate, not onDrawFrame, so the camera frame is never drawn to the screen.
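To illustrate the separated render/logic pattern, here is a minimal, self-contained Java sketch. The `StubRenderExtension` class below is a hypothetical stand-in written for this example, not the real Wikitude SDK class; the actual `RenderExtension` comes from the Native SDK and has a different surface. The point is only the loop structure: after enabling separated updates, the render loop calls `onUpdate` (tracking logic) every frame and deliberately skips `onDrawFrame` (camera video rendering).

```java
// Hypothetical stand-in for the SDK's RenderExtension, used only to
// demonstrate the "update logic, skip drawing" loop structure.
class StubRenderExtension {
    private boolean separated = false;
    int logicUpdates = 0;
    int framesDrawn = 0;

    // Decouples tracking/logic updates from camera-frame rendering.
    void useSeparatedRenderAndLogicUpdates() { separated = true; }

    // Runs tracking/recognition logic only; no rendering happens here.
    void onUpdate() { logicUpdates++; }

    // Would render the camera feed to the display if called.
    void onDrawFrame() { framesDrawn++; }

    boolean isSeparated() { return separated; }
}

public class SeparatedRenderLoop {
    public static void main(String[] args) {
        StubRenderExtension ext = new StubRenderExtension();
        ext.useSeparatedRenderAndLogicUpdates();

        // Render loop: update tracking every frame, but never call
        // onDrawFrame, so the camera video is not shown on screen.
        for (int frame = 0; frame < 60; frame++) {
            ext.onUpdate();
            // ext.onDrawFrame();  // intentionally omitted
        }

        System.out.println("logic updates: " + ext.logicUpdates);
        System.out.println("frames drawn: " + ext.framesDrawn);
    }
}
```

In the real SDK the same idea applies inside your GLSurfaceView renderer: keep invoking the update path so tracking continues, and leave the draw path out.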
2) Making the calibration box larger or smaller only affects the distance between you and the target image during the calibration steps; it will not improve the recognition itself.