
How to implement image targets with the iOS Native SDK?

Hi, I'm new to Wikitude; I used the Vuforia Unity SDK before.

I have already read the Wikitude iOS documentation and the image target sample, but I couldn't figure out how to recognize an image and then draw an image covering that target.

The documentation is confusing: it says WTWikitudeNativeSDK is the only object a developer has to create manually, so what other objects do I need to use?

And then I need to render an image or a video over that image target. Can I only use OpenGL, or can I use another iOS framework?

Thanks a lot!


Hello Will,

Unity is based on our Native API. If you only see an orange border around the image in the camera view once it’s recognized then you are using our Native API and this is expected behavior. The Native API does not provide any rendering options - you’ll need to take care of the rendering on your end with this API. If you want to make use of the Wikitude rendering you’d need to work with our JS API or Cordova, Titanium, Xamarin extensions. You can find a comparison page here.

If you wish to work with Unity, you need to download the Unity package and check the respective documentation section. Each section includes a set-up guide that helps you set up your project and shows where to put the license key. For instance, our Unity sample shows how we place an augmentation (and not just the orange border) when an image is scanned.

Finally, if you search our forum you will find many posts that could help you get started, such as this one.



