
HTMLDrawable implementation of API reference is very difficult

Thank you for your continued support.


I am developing with Xamarin.

Let me ask some questions.

I found HtmlDrawable mentioned on this site, but is that information outdated? The overlay does not appear even when I use that code.

Implementing HTMLDrawable from the API reference is very difficult.

Even when I put a URL such as Google's site into the html part of the sample code, the output is displayed garbled.

Is there any concrete example code? I am quite stuck. Thank you.


HTMLDrawables are only supported in our old Studio here and are not supported in our new Studio Editor here. So basically, you have two options:
  • Use the old Studio to create HTMLDrawables, export the project, and integrate it with the Xamarin SDK, or
  • Use the new Studio Editor, but then you need to find a way to include HTMLDrawables in the exported project.
Either way, please note that while you can export the project you created with Studio, further edit the code, and implement it in Xamarin, we do not offer any support or guidance on how to accomplish that.
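As a sketch of the second option: after exporting the project, its JavaScript could be hand-edited to add an HtmlDrawable. This is a hypothetical illustration, not official guidance; the `AR.*` namespace only exists inside the Wikitude SDK's WebView, so a minimal stand-in stub is included here purely so the sketch can run standalone.

```javascript
// Stand-in stub for AR.HtmlDrawable so the sketch runs outside the SDK.
// In a real exported project the SDK provides the AR namespace.
var AR = (typeof AR !== "undefined") ? AR : {
	HtmlDrawable: function (content, width) {
		this.html = content.html; // stub: just records the arguments
		this.width = width;
	}
};

// Wrap overlay markup in a full HTML document and create the drawable
// (the markup content here is illustrative only).
function createHtmlOverlay(bodyHtml) {
	var html = "<!DOCTYPE html><html><head><meta charset='utf-8' /></head>" +
		"<body>" + bodyHtml + "</body></html>";
	return new AR.HtmlDrawable({ html: html }, 1);
}

var overlay = createHtmlOverlay("<p>Hello from an exported project</p>");
```

Inside the SDK the stub is unnecessary, and the resulting drawable would be attached to a trackable through its `drawables.cam` list.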



Thank you.

Let me ask four questions.

1: I can't access the old Studio download site or the new Studio download site.

2: Is this API reference outdated? Even with the new SDK, this API can still be used, right?

3: Please advise on my code below.


var World = {
	loaded: false,

	init: function initFn() {
		this.createOverlays();
	},

	createOverlays: function createOverlaysFn() {
		/*
			First an AR.ClientTracker needs to be created in order to start the recognition engine. It is initialized with a URL specific to the target collection. Optional parameters are passed as an object in the last argument. In this case a callback function for the onLoaded trigger is set. Once the tracker is fully loaded the function worldLoaded() is called.

			Important: If you replace the tracker file with your own, make sure to change the target name accordingly.
			Use a specific target name to respond only to a certain target or use a wildcard to respond to any or a certain group of targets.
		*/
		this.tracker = new AR.ClientTracker("assets/*******-**.wtc", {
			onLoaded: this.worldLoaded
		});

		/*
			The next step is to create the augmentation. In this example an image resource is created and passed to the AR.ImageDrawable. A drawable is a visual component that can be connected to an IR target (AR.Trackable2DObject) or a geolocated object (AR.GeoObject). The AR.ImageDrawable is initialized by the image and its size. Optional parameters allow positioning it relative to the recognized target.
		*/

		/* Create overlay for page one */
		var imgOne = new AR.ImageResource("assets/imageOne.png");

		// Please look at this part
		var htmlTop = "<!DOCTYPE html><html>\n<head>\n" +
			"<title>Test</title>" +
			"<meta charset='utf-8' />" + "\n" +
			"<meta http-equiv='Content-Type' content='text/html; charset=UTF-8' />" + "\n" +
			"<meta name='viewport' content='target-densitydpi=device-dpi, width=263, user-scalable=0' />" + "\n" +
			"<link rel='stylesheet' href='/css/wikitude.css' />" + "\n" +
			"<link rel='stylesheet' href='/css/pois.css' />" + "\n" +
			"<script src=''></script>" + "\n" +
			"</head>" + "<body>";

		var htmlBottom = "</body>\n</html>";

		// precedence example:
		// htmlDrawable will use the html representation
		var overlayOne = new AR.HtmlDrawable({
			html: htmlTop +
				// Attempts at the middle content so far (each with its result):
				// "<iframe src='index2.html'><p>Your browser does not support iframes.</p></iframe>" → stuck on "Loading…"
				// "<iframe src='1_Client$Recognition_1_Image$On$Target/index2.html'><p>Your browser does not support iframes.</p></iframe>" → cannot be displayed
				// "<script src='/js/apichatch.js'></script>" → no reaction
				// "<script>$.ajax({type: 'GET', url: '', dataType: 'json', success: function (json) { var len = json.length; for (var i = 0; i < len; i++) { $(\"#a\").append(json[i].version + ' ' + json[i].codename + '<br>'); } } });</script>" → "NaN" is displayed
				htmlBottom
		}, 1);

		/*
			The last line combines everything by creating an AR.Trackable2DObject with the previously created tracker, the name of the image target and the drawable that should augment the recognized image.
			Please note that in this case the target name is a wildcard. Wildcards can be used to respond to any target defined in the target collection. If you want to respond to a certain target only for a particular AR.Trackable2DObject simply provide the target name as specified in the target collection.
		*/
		var pageOne = new AR.Trackable2DObject(this.tracker, "*", {
			drawables: {
				cam: overlayOne
			}
		});
	},

	worldLoaded: function worldLoadedFn() {
		var cssDivLeft = " style='display: table-cell;vertical-align: middle; text-align: right; width: 50%; padding-right: 15px;'";
		var cssDivRight = " style='display: table-cell;vertical-align: middle; text-align: left;'";
		document.getElementById('loadingMessage').innerHTML =
			"<div" + cssDivLeft + ">Scan Target &#35;1 (cap):</div>" +
			"<div" + cssDivRight + "><img src='assets/cap.PNG'></img></div>";

		// Remove the scan-target message after 3 sec.
		setTimeout(function() {
			var e = document.getElementById('loadingMessage');
			e.parentElement.removeChild(e);
		}, 3000);
	}
};

World.init();
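A likely cause of the "NaN" display noted in the code: when every middle string of the concatenation is commented out, two `+` operators end up adjacent, and JavaScript parses the second one as unary plus, which converts the following string to the number NaN before the string concatenation happens. A minimal demonstration:

```javascript
var htmlTop = "<head></head><body>";
var htmlBottom = "</body>";

// With the middle strings commented out, two '+' operators remain in a row.
// The second one is parsed as unary plus, so +"</body>" evaluates to NaN,
// and the string concatenation then produces "...NaN".
var broken = htmlTop +
	// "<p>middle content</p>"
	+ htmlBottom;

// The fix is a single '+' between the two strings:
var fixed = htmlTop + htmlBottom;

console.log(broken); // "<head></head><body>NaN"
console.log(fixed);  // "<head></head><body></body>"
```

So whenever a line of the concatenation is commented out, the trailing `+` on the previous line has to be removed as well.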




My purpose is to display the JSON string returned by a Web API on the screen.

I am considering two approaches.

1: The coding method above, i.e. calling the Web API directly from imageontarget.js:

imageontarget.js → Web API → JSON display


2: Access the Web API, receive the JSON string, generate the JavaScript that displays it, and then call that JavaScript from an HTML file. In other words:

HTML file → JavaScript → JSON display


Please advise.
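Whichever approach is used, the JSON-to-HTML step can be kept separate from the network call so it can be tested on its own. A minimal sketch, assuming the Web API returns an array of objects with `version` and `codename` fields; the field names are taken from the `$.ajax` attempt above and are an assumption about the real API:

```javascript
// Turn the JSON array returned by the Web API into an HTML fragment.
// The field names (version, codename) mirror the $.ajax attempt above
// and may need adjusting for the real API response.
function renderVersions(json) {
	var html = "";
	for (var i = 0; i < json.length; i++) {
		html += json[i].version + " " + json[i].codename + "<br>";
	}
	return html;
}

// Example input shaped like the assumed API response:
var sample = [
	{ version: "8.0", codename: "Oreo" },
	{ version: "9.0", codename: "Pie" }
];

console.log(renderVersions(sample)); // "8.0 Oreo<br>9.0 Pie<br>"
```

With approach 1, `renderVersions(json)` would be called inside the AJAX success callback and the result inserted into the HtmlDrawable's html string; with approach 2, a separate script would emit this markup and the HTML file would reference that script.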


In order to access the old Studio and the new Studio Editor, please refer to these URLs:

 The API reference you have pasted refers to our Wikitude SDK JavaScript API and not to Studio. This is why HTML Drawable is supported in that document but not in the new Studio Editor.


