Click on Marker using Vuzix M100 or Google Glass

Hello Wikitude,

you are amazing. I am using the Wikitude SDK with the Vuzix M100 and Google Glass. I tested all the examples for Vuzix and they work fine. I have gone through all the documentation and blog posts but didn't find an answer to my question.

My query is:

I want to fetch POI data from a local source and show a particular image at each POI in the AR view, which works and is also explained in the example. I also want to touch or click the image to get more information; that is likewise covered in the example and works fine if I use my Android phone's screen to control the Vuzix glasses' AR view and perform the click action. How can I do the same without the phone, using only the Vuzix glasses' hardware buttons? Thank you!

Regards

Arun Gupta

Hi there!

I guess in your scenario the following implementation would fit best:

* Implement a crosshair as described in this forum post.
* Call architectView.callJavaScript in the native Vuzix/Android environment when the user presses a specific hardware button, e.g. architectView.callJavaScript("World.onCrosshairSelectionTriggered()").
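The native side of the second step might look roughly like the sketch below: intercept a hardware key in the Activity hosting the ArchitectView and inject the JS call. The helper name jsCallForKey is made up for this example, and whether the M100 button reports KEYCODE_DPAD_CENTER is an assumption; log KeyEvents on your device to check.

```java
// Sketch: map a hardware key press to a JS call for the ARchitect world.
// KEYCODE_DPAD_CENTER (23) is the standard Android constant
// android.view.KeyEvent.KEYCODE_DPAD_CENTER; which physical M100 button
// reports it is an assumption - verify by logging KeyEvents.
class CrosshairKeyBridge {

    static final int KEYCODE_DPAD_CENTER = 23;

    /** Returns the JS call to inject for a given key code, or null to ignore the key. */
    static String jsCallForKey(int keyCode) {
        if (keyCode == KEYCODE_DPAD_CENTER) {
            return "World.onCrosshairSelectionTriggered()";
        }
        return null; // let other keys fall through to default handling
    }

    /* In the Activity hosting the ArchitectView, roughly:
     *
     * @Override
     * public boolean onKeyDown(int keyCode, android.view.KeyEvent event) {
     *     String js = CrosshairKeyBridge.jsCallForKey(keyCode);
     *     if (js != null) {
     *         architectView.callJavaScript(js); // runs in the AR world's JS context
     *         return true; // event consumed
     *     }
     *     return super.onKeyDown(keyCode, event);
     * }
     */
}
```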

Hope that helps.

kind regards,
Andreas

Hello Andreas,

Thank you for your quick reply. I tried what you suggested, but it didn't solve the problem. Let me rephrase my question:

I am working on the sample named '3_Point$Of$Interest_4_Selecting$Pois'; I hope you are aware of it. In this sample, many POIs are shown as markers on the AR screen. When the user clicks a particular POI, it performs some action, which works on a phone or tablet because you can touch the screen. My problem is that I am using the Vuzix glasses and cannot touch the screen, so I need to select a POI with the hardware back/forward buttons or by moving my head horizontally.

please have a quick look here for more detail:

http://www.wikitude.com/external/doc/documentation/4.0/vuzix/poi.html#point-of-interest-poi

Thanks & regards

Arun Gupta

Hi again,

Yes, I understand. The concept I described previously does not require any touch input from the user.
Use the crosshair for hands-free selection. It is up to you whether you auto-trigger the "crosshair click" or let the user press a hardware button on the Vuzix device.
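For reference, the crosshair idea boils down to: project each POI to screen coordinates and select the one nearest the screen centre, ignoring anything beyond a tolerance so that looking at empty sky selects nothing. A minimal sketch of that selection logic; in the ARchitect world it would live in JavaScript, and all class and field names here are illustrative:

```java
import java.util.List;

// Sketch of crosshair selection: pick the POI whose projected screen
// position lies closest to the screen centre, within a pixel tolerance.
class CrosshairSelector {

    static class ScreenPoi {
        final String id;
        final float x, y; // projected screen coordinates in pixels
        ScreenPoi(String id, float x, float y) { this.id = id; this.x = x; this.y = y; }
    }

    /**
     * Returns the id of the POI closest to (centerX, centerY), or null if
     * none lies within maxDistPx of the centre.
     */
    static String select(List<ScreenPoi> pois, float centerX, float centerY, float maxDistPx) {
        String best = null;
        float bestDist = maxDistPx;
        for (ScreenPoi p : pois) {
            float dx = p.x - centerX, dy = p.y - centerY;
            float dist = (float) Math.sqrt(dx * dx + dy * dy);
            if (dist <= bestDist) { // keep the nearest POI seen so far
                bestDist = dist;
                best = p.id;
            }
        }
        return best;
    }
}
```

Triggering this either on a timer (auto-trigger) or from a hardware button press gives the two variants described above.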

 

Hi Andreas,

Thank you for your reply. The code for creating the crosshair works, but here is the problem: when I create a button in native XML or in HTML, its onClick calls the JavaScript function (architectView.callJavaScript("World.onCrosshairSelectionTriggered()")). However, I cannot focus the button with the Vuzix hardware left/right buttons, so I am unable to click it to call the function. When I touch the button from my Android phone (the Vuzix Smart Glasses app), it works. Here is my code:

<body>
    <div data-role="page" id="page1" style="background: none;">
        <div data-role="button" onClick="World.clickScreenCenter()" class="button" data-theme="e" style="max-width:70px;">
            SelectPoi
        </div>

I tried it without the div, with two buttons, and with a button defined in native XML, but I am still unable to focus the button in order to click it.

 

 

Thanks & regards

Arun

Hi again.

That's why I wrote "let user press a hardware button". You may intercept a chosen hardware button of the smart eyewear and forward the event to JS as described.

 

Hi Andreas,

Thank you for your guidance. It works now. I have one other query:

I want to open a local HTML page from the assets folder when the user clicks on a marker or a particular image. I tried AR.context.openInBrowser("assets/test.html"), but it doesn't work; it only opens websites over an internet connection.

Also, is it possible to open a PDF when the user clicks on a marker or image? I tried window.open("assets/ACS355_Contact.pdf"), but that doesn't work either. Thank you!

regards,

Arun 

Hi there!

Whatever path you enter there is launched in a "neutral environment". I recommend using a custom link to your native environment via "architectsdk://" links, as in the "Browsing POIs Native POI Detail screen" sample. That way you can tell your native code to open a specific document and handle the execution in your own code (e.g. open a WebView in a new Activity and load the HTML from the assets folder).
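A sketch of how the native side of such an "architectsdk://" link might be routed. The "opendocument" action and "path" parameter are invented names for this example, not part of the Wikitude SDK; the actual URL format is whatever you choose to emit from your world's JavaScript:

```java
import java.net.URI;
import java.net.URISyntaxException;

// Sketch: parse a custom architectsdk:// link of the (invented) form
// architectsdk://opendocument?path=assets/test.html and extract the
// asset path the native code should open.
class ArchitectUrlRouter {

    /** Returns the asset path from architectsdk://opendocument?path=..., or null if the URL doesn't match. */
    static String assetPathFor(String url) {
        try {
            URI uri = new URI(url);
            if (!"architectsdk".equals(uri.getScheme())) return null;
            if (!"opendocument".equals(uri.getHost())) return null;  // the "action" part
            String query = uri.getQuery();                            // e.g. "path=assets/test.html"
            if (query == null || !query.startsWith("path=")) return null;
            return query.substring("path=".length());
        } catch (URISyntaxException e) {
            return null; // malformed URL: ignore
        }
    }

    /* Native side, roughly: in the url-invocation callback of your
     * ArchitectView setup, call assetPathFor(url) and, if it returns
     * non-null, start an Activity whose WebView loads
     * "file:///android_asset/" + path.
     */
}
```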

Hope that helps.

Kind regards,
Andreas

Hi Andreas,

thank you for your answer. I really appreciate it. It is working now. I have got another query:

I am using the Vuzix voice control API (import com.vuzix.speech.VoiceControl) in the same application, so the user can control actions by voice. It works successfully as long as I don't use the ArchitectView (no Wikitude), but when I do, it gives me the error "not allowed to bind to service intent". The relevant code is here:

@Override
protected void onResume() {
    super.onResume();
    this.architectView.onResume();
    this.architectView.setLocation(0, 0, 0);
    vc1.on();
}

@Override
protected void onPause() {
    super.onPause();
    vc1.off();
    // call mandatory life-cycle method of architectView
    if (this.architectView != null) {
        this.architectView.onPause();
    }
}

vc1 is an object of the VoiceControl class, e.g.: VoiceControl vc1; vc1 = new SpeechRecognition(this);

 

Thanks & regards

Arun Gupta

 

Hi again,

Good to hear that you were able to resolve your issue.
Concerning the voice control topic: this does not seem to be related to the Wikitude SDK as such.
Please have a look at Stack Overflow or the Vuzix developer support, and try to implement voice control without AR first.

Kind regards

Hi,

your magazine.wtc looks fine! The original .wtc is bigger because it contains more images, not only the surfer. Have you also updated the target name in the AR.Trackable2DObject (the second parameter when instantiating it)? Alternatively, you can use "*" as a wildcard to match any image.

Hi Philipp,

thank you for your quick reply. Yes, I updated the second parameter to "*", but it still doesn't recognize the surfer image. When I replace my magazine.wtc with the magazine.wtc from your example, it works perfectly.

thanks & regards

Arun 

Hi Philipp and Andreas,

thank you both for your support. My issue has been solved; it's working perfectly. Once I am completely done with my application, my supervisor at the company will buy a Wikitude license.

best regards

Arun 

Hi Andreas,

my application is working great and it is voice-enabled, thanks to you. I have another query, this time regarding 2D object recognition.

I am using 'WikitudeSDK_vuzix_4.0.3_2014-11-07_19-15-54'. I tried the Client Recognition example (IMAGE ON TARGET (1/6)), which works fine: I can recognize the image of the surfer. But when I converted the same surfer image to .wtc on your website and used that, the application no longer recognizes the image. Everything is the same except magazine.wtc. I followed the steps mentioned here: http://www.wikitude.com/developer/documentation/glass#_48_INSTANCE_9XjWBdf5iyrD_=clientrecognition.html%23client-recognition. My magazine.wtc for the surfer image (generated in the 4.1 download format; attached) is around 40 KB, while the magazine.wtc from your example is around 100 KB, so something must be wrong with my magazine.wtc. Thank you!

regards

Arun