
Information about Wikitude iOS SDK

Can I use the Wikitude iOS SDK without any image tracking?

I want to show a 3D model while a user captures a video or image using the device camera on iOS.

Can I do this with the Wikitude iOS SDK? I do not want to use any image tracking or recognition. I have downloaded the Wikitude iOS SDK and tested the sample examples, but they work with image tracking, recognition, and geo-based AR. Can this be done without those?

Hi Hitesh,
You can use the Wikitude SDK without image tracking. Geo AR works fine without creating any image-tracking-related objects.

You can position a 3D model at a certain GPS location or relative to your current position.
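A rough sketch of both placement options, assuming the Wikitude JavaScript API (the asset path, coordinates, and scale below are placeholders):

```js
// Load a 3D model in Wikitude's .wt3 format (placeholder path and scale).
var modelDrawable = new AR.Model("assets/model.wt3", {
    scale: { x: 0.05, y: 0.05, z: 0.05 }
});

// Option 1: anchor the model at a fixed GPS coordinate
// (latitude, longitude, altitude are placeholders).
var fixedLocation = new AR.GeoLocation(47.07, 15.43, 400.0);
var fixedObject = new AR.GeoObject(fixedLocation, {
    drawables: { cam: [modelDrawable] }
});

// Option 2: anchor the model relative to the user's current position,
// here 10 m to the north, 0 m east, 2 m above the device.
var relativeLocation = new AR.RelativeLocation(null, 10, 0, 2);
var relativeObject = new AR.GeoObject(relativeLocation, {
    drawables: { cam: [modelDrawable] }
});
```

Neither option creates any image-tracking objects; the GeoObject is driven purely by the device's location and orientation sensors.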

It is not possible to capture a video with our SDK unless you write your own plugin which does this for you (plugins have access to each camera frame, which you then need to convert into a video format). To capture a screenshot at a certain point, please have a look at our 'Browsing Pois - Bonus: Capture Screen' example.

I hope this answers your question.

Best regards,


Hello Andreas,

Thanks for the valuable answer. I have one more point of confusion; let me explain what I want to do.

My requirement for now is to capture an image, and while capturing I need to place a 3D model (e.g. an .fbx or .obj model) in the middle of the camera view. The user can manually drag and adjust that 3D model, and then take a screenshot programmatically.

I also want to use face detection, and based on the detected face, place a 3D model of a hat on top of the user's head.

Also, please advise: there are two SDKs available for iOS




which one is suitable in my case.

Your input will be valuable for me.

Best regards,

Hitesh Veshnav

Hi Hitesh,
You can convert .fbx files into .wt3 files (that's the 3D model format we support; you can create .wt3 files using our 3D Encoder desktop application) and render them with our JavaScript API SDK.
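Loading a converted .wt3 file might look like this sketch, assuming the Wikitude JavaScript API (file name and callbacks are placeholders, following the pattern used in the SDK samples):

```js
// Load the hat model produced by the 3D Encoder (placeholder path).
var hatModel = new AR.Model("assets/hat.wt3", {
    scale: { x: 1.0, y: 1.0, z: 1.0 },
    onLoaded: function() {
        // The model finished loading and can now be attached
        // to a GeoObject or other object as a cam drawable.
    },
    onError: function() {
        // Loading failed, e.g. the file is missing or is not a valid .wt3.
    }
});
```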

In order to render a 3D model in the center of the screen, it has to be recognized by our CV engine first (please refer to our example '3d Models - Snap to screen'; it does almost exactly what you want). After the user has positioned the 3D model correctly, they can trigger a screenshot, e.g. by pressing a button.
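The screenshot trigger itself can follow the pattern from the 'Capture Screen' sample: JavaScript hands control to native code via a custom URL, and the native side intercepts that URL and calls the SDK's screen-capture functionality. A minimal sketch (the URL scheme and action name below follow the sample pattern but are assumptions; your native handler must match them):

```js
// Called from an HTML button the user taps after positioning the model.
function captureScreen() {
    // Navigating to the custom "architectsdk://" URL is intercepted by the
    // native side, which then invokes the SDK's capture-screen API.
    document.location = "architectsdk://button?action=captureScreen";
}
```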

To position the 3D model using a custom face recognition plugin, you need to wait for our next release (5.3/1.4), which will be released at the end of August. It contains a feature that allows you to position AR.Drawables from within a plugin.

In general, you could develop your app with either API, but when using the Native API you need to take care of 3D model rendering yourself. So I would say the JavaScript API fits your needs best.

Best regards,
