
Input Plugin API

Hi,

 


We are working on a remote support project. Most of the features were completed successfully with Wikitude.


Now, we have to use the Input Plugin API to make the "customer's camera" our "support technician source camera". We took a look at your documentation, but we couldn't get it to work.


Would it be possible to share sample code for iOS and Android (Native or JavaScript)?


Our broadcast video comes from an RTMP server. We need to make it the source camera for the augmentations.


Thank you.


Hi Alexandru,


I've replied to Gökhan on the other thread with all the information (code and screenshots) needed to replicate the issue.

We'll wait for the sensor data issue to be addressed.


Thanks

Leo

Hi,


After reading the other thread where Gökhan offered assistance, would it be possible for you to share the project with us in its current state, along with some instructions on how to build and run it properly? It seems there might be slight differences between his version and yours, and I just want to make sure that we're focusing on the exact problem you are facing.

Regarding the sensor data issue, we have it on our roadmap and we'll let you know as soon as it becomes available.


Best regards,

Alexandru

Hi Nicola,


With respect to item 4, Synchronization of rendering, I'm facing some trouble.

When streaming the video feed between two devices with different resolutions and aspect ratios, the objects are placed in different positions. We tried fixing the camera and background textures to the same resolution (with Gökhan's help), but the problem remains.

Another problem is that each device uses its own sensor data, changing the grid dimensions when applying the Plane Detection sample.
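To illustrate the first problem (a minimal, self-contained sketch, not our actual project code): the same annotation point, normalized against the shared camera frame, lands on different screen pixels once each device applies its own aspect-fill crop:

```cpp
#include <algorithm>
#include <cstdio>

struct Size  { float w, h; };
struct Point { float x, y; }; // normalized [0..1] in the *shared* frame

// Map a point defined in the shared camera frame to screen pixels on a
// device that displays that frame with aspect-fill (center crop).
Point frameToScreen(Point p, Size frame, Size screen) {
    const float scale  = std::max(screen.w / frame.w, screen.h / frame.h);
    const float drawnW = frame.w * scale;          // frame size after scaling
    const float drawnH = frame.h * scale;
    const float offX = (screen.w - drawnW) * 0.5f; // negative when cropped
    const float offY = (screen.h - drawnH) * 0.5f;
    return { p.x * drawnW + offX, p.y * drawnH + offY };
}

int main() {
    const Size  frame{1280, 720};       // shared streamed frame (16:9)
    const Point p{0.5f, 0.25f};         // annotation in frame space
    const Size  screenA{1920, 1080};    // 16:9 device
    const Size  screenB{2160, 1080};    // 18:9 device
    Point a = frameToScreen(p, frame, screenA);
    Point b = frameToScreen(p, frame, screenB);
    std::printf("16:9 -> (%.0f, %.0f)\n18:9 -> (%.0f, %.0f)\n",
                a.x, a.y, b.x, b.y); // same point, different screen position
}
```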


Wikitude Unity SDK Professional Edition

Unity 2019.3.15

 

Any suggestions?


Thanks

Leo

Hi Bugra,


Yes, you'd need to implement your own plugin using the Plugins API feature. But, as mentioned, we don't have a specific sample or further code/documentation for the remote AR use case (just the sample in the sample app that shows how the Input Plugins API can be used). If it helps, we can connect you to one of our Premium Partners working on remote AR use cases, who can assist you with the implementation of your app.
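To give you a rough idea of the shape such a plugin takes, here is a hypothetical skeleton. The class and method names below are placeholders only; the real base class and frame callbacks come from the Plugins API headers shipped with the SDK:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <vector>

// Placeholder for the SDK's input-plugin base class; the real one lives in
// the Plugins API headers shipped with the SDK.
class InputPlugin {
protected:
    // Placeholder for the SDK call that hands a camera frame to the tracker.
    void notifyNewInputFrame(const std::uint8_t* yuv, std::size_t size,
                             int width, int height) {
        std::printf("frame %dx%d (%zu bytes) pushed to SDK\n",
                    width, height, size);
        (void)yuv;
    }
public:
    virtual ~InputPlugin() = default;
};

// App-side plugin: every decoded frame from the remote stream is pushed into
// the SDK instead of the local device camera, so the technician's SDK
// instance "sees" the customer's camera.
class RemoteStreamInputPlugin : public InputPlugin {
public:
    // Call this from your RTMP/WebRTC decoder callback per NV21/I420 frame.
    void onRemoteFrame(const std::uint8_t* yuv, std::size_t size,
                       int width, int height) {
        notifyNewInputFrame(yuv, size, width, height);
    }
};

int main() {
    RemoteStreamInputPlugin plugin;
    std::vector<std::uint8_t> dummy(640 * 480 * 3 / 2); // one I420 frame
    plugin.onRemoteFrame(dummy.data(), dummy.size(), 640, 480);
}
```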


Thx and greetings

Nicola

Hi Nicola,


We are working on a remote support project and we need to use the Input Plugin. As you know, it's necessary to broadcast one user's screen to another user, but the receiving side of the broadcast must use the first user's screen as its camera source (World).


I saw that documentation, but it is not very clear. Do we have to write a C++ plugin to feed the broadcast into Wikitude as a source? I need more details.


Thanks.


Bugra.

Hi,


Could you clarify what you are referring to with the YUVInputPlugin? For our Input Plugins feature, you can find sample code in the JS or the Native SDK sample app. Additionally, you can find details in the technical documentation (here is the link to the iOS Native SDK):

https://www.wikitude.com/external/doc/documentation/latest/iosnative/inputpluginsapi.html#input-plugins-api


Here is an additional forum post dealing with WebRTC, which might help:


https://support.wikitude.com/support/discussions/topics/5000091318


Thx and greetings

Nicola

Hi Nicola,


Thank you for your kind response. Almost everything is ready.


The only thing I still need to know is how to use the "YUVInputPlugin" to make a video stream the camera source for the World.


I am using WebRTC to broadcast a video, and I save the instant target objects. Everything is OK, but I need to know more details about the YUVInputPlugin. Is it possible to share a sample code block or a library?
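For reference, the frame handling I have so far looks roughly like this (simplified; the plugin hand-off itself is the part I'm missing). It packs the three strided I420 planes that WebRTC delivers into one contiguous YUV buffer, which is the layout a YUV input plugin typically expects:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Pack three I420 planes (each with its own row stride, as WebRTC delivers
// them) into one contiguous I420 buffer. Width and height must be even.
std::vector<std::uint8_t> packI420(const std::uint8_t* y, int strideY,
                                   const std::uint8_t* u, int strideU,
                                   const std::uint8_t* v, int strideV,
                                   int width, int height) {
    std::vector<std::uint8_t> out(
        static_cast<std::size_t>(width) * height * 3 / 2);
    std::uint8_t* dst = out.data();
    for (int r = 0; r < height; ++r, dst += width)      // Y plane
        std::memcpy(dst, y + r * strideY, width);
    const int cw = width / 2, ch = height / 2;
    for (int r = 0; r < ch; ++r, dst += cw)             // U plane
        std::memcpy(dst, u + r * strideU, cw);
    for (int r = 0; r < ch; ++r, dst += cw)             // V plane
        std::memcpy(dst, v + r * strideV, cw);
    return out; // hand this buffer (plus width/height) to the input plugin
}

int main() {
    const int w = 4, h = 4;
    std::vector<std::uint8_t> y(8 * h, 1), u(4 * (h / 2), 2), v(4 * (h / 2), 3);
    auto frame = packI420(y.data(), 8, u.data(), 4, v.data(), 4, w, h);
    return frame.size() == static_cast<std::size_t>(w * h * 3 / 2) ? 0 : 1;
}
```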


Thanks. 

Hi Bugra,



Thx for reaching out. Unfortunately, we don't have a specific sample on how to build an AR remote assistance app, but as mentioned by Norby we do have Premium Partners experienced with remote AR. Let me highlight a rough way you could realize a use case like this with our SDK:

  1. AR part: As you already found out, the main component behind such an app is good and reliable markerless tracking, or instant tracking, as we call it. This is needed to make the augmentations stick to the position they were added to, even when you move around.
  2. Video streaming: Next, you need a way to stream the video feed to the remote device. This can be another smartphone, tablet, or PC, depending on your requirements and how the instructor should create the augmentations. This is nothing we provide out of the box, but you will find existing solutions for it. One thing you'd have to do is create an input plugin to be able to stream the camera frames.
  3. Rendering / Drawing / Augmentations: Next is the rendering part, which is needed to draw instructions for the person using markerless tracking. For this, the instructor needs a UI to draw arrows, circles, etc. on the camera view. This depends on your use case and is very specific. Here you have the choice of using our JavaScript SDK, which already includes the rendering, Unity, or our Native SDKs, where you have to deal with the rendering yourself.
  4. Synchronization of rendering: Last but not least, you have to bring the augmentations to the user's device. A good entry point for this are our persistent instant target samples. The main difference is that you don't store the positions in a file but send them directly to the user's device (see the sketch after this list).
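To make step 4 more concrete, here is a minimal sketch of the idea. It assumes you already have each augmentation's pose in the shared instant-tracking coordinate system; the message format and the sendOverDataChannel transport are placeholders for whatever channel your app already uses:

```cpp
#include <cstdio>
#include <string>

struct Pose { float x, y, z, rotY; }; // pose in the shared tracking space

// Placeholder for your transport (WebRTC data channel, socket, MQTT, ...).
void sendOverDataChannel(const std::string& msg) {
    std::puts(msg.c_str()); // stand-in: just print the message
}

// Instead of writing the pose to a file (as the persistent instant target
// sample does), serialize it and push it to the remote device, which then
// re-creates the drawable at the same pose.
void shareAugmentation(int id, const Pose& p) {
    char buf[128];
    std::snprintf(buf, sizeof(buf),
                  "{\"id\":%d,\"pos\":[%.3f,%.3f,%.3f],\"rotY\":%.3f}",
                  id, p.x, p.y, p.z, p.rotY);
    sendOverDataChannel(buf);
}

int main() { shareAugmentation(1, {0.1f, 0.0f, -0.5f, 45.0f}); }
```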


Thank you

Nicola

