Wikitude Native API question

Hello, I recently switched from the Wikitude JavaScript API to the Native Android API.

I checked the example native Android app provided by Wikitude and found that there isn't much about rendering a 2D image on top of the camera view.

For example, in the JavaScript API it takes just one line of code to display an image (AR.ImageDrawable).

The only solution I can think of right now is to write a custom OpenGL renderer that displays the image as a texture. This seems inefficient and hard to implement.

So does anyone know a better solution?



Yes, our Native API provides access to the Wikitude computer vision engine natively for Android (Java) and iOS (ObjC). The Native API does not include image, video or 3D model rendering; this has to be implemented by you.

The API is intended for developers who want to implement their own rendering. So if you want to make use of our 2D rendering, you'd need to stay with the JS API.
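For readers going the custom-renderer route the answer describes, a minimal sketch of the CPU-side part may help. This is not Wikitude API code: `QuadGeometry` is a hypothetical helper, and it assumes you draw the image as a textured quad with OpenGL ES 2.0 on Android.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Hypothetical helper: geometry for a full-screen textured quad, assuming
// an OpenGL ES 2.0 renderer. Only the GL-independent parts are shown.
public final class QuadGeometry {
    private QuadGeometry() {}

    // Interleaved x, y, z, u, v for a triangle-strip quad covering clip space.
    // Texture v is flipped because GL's texture origin is bottom-left while
    // image data is usually top-left.
    public static float[] unitQuad() {
        return new float[] {
            -1f, -1f, 0f, 0f, 1f,  // bottom-left
             1f, -1f, 0f, 1f, 1f,  // bottom-right
            -1f,  1f, 0f, 0f, 0f,  // top-left
             1f,  1f, 0f, 1f, 0f,  // top-right
        };
    }

    // Wrap the array in a direct, native-order FloatBuffer, which is the
    // form GLES20.glVertexAttribPointer expects on Android.
    public static FloatBuffer asBuffer(float[] data) {
        FloatBuffer buf = ByteBuffer
            .allocateDirect(data.length * 4)  // 4 bytes per float
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
        buf.put(data).position(0);
        return buf;
    }
}
```

At draw time you would upload the image with `glTexImage2D`, point the position attribute at the buffer with a stride of 20 bytes (5 floats) and the UV attribute at an offset of 12 bytes, then issue `glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)`. To place the image over a tracked target rather than full-screen, multiply the positions by the model-view-projection matrix the tracker reports.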




I have integrated the SDK on Android and iOS. In both SDKs I have the same issue: I used the Instant Tracking sample, but no AR view is shown when I scan the image.


Tell me, what should the output be when I scan this image in the SDK sample?

Hello Gagandeep,

Since you are working with the Native API and scanning an image, you should see an orange rectangle. However, you also mentioned that you are using our Instant Tracking sample. Instant tracking is an algorithm that, unlike those previously introduced in the Wikitude SDK, does not aim to recognize a predefined target and start tracking thereafter; instead, it starts tracking immediately in an arbitrary environment.

These are basic topics that are covered in our documentation here, so I would suggest that you have a look at the documentation in order to understand how it works.
