
WebRTC Integration

 Hi all,

 

As the title might suggest, I am trying to integrate the Wikitude samples into an Android project that uses Chromium WebRTC for video calls. I am currently using Wikitude version 8.3 and have tried both the JavaScript and the native API. I have not written a plugin myself but am trying to import the CustomCameraPlugin into the framework to see if it works. I have integrated and adapted the WikitudeCamera, the CustomCameraExtension/CustomCameraPluginActivity (JavaScript/native), and the FrameInputPluginModule class, as well as the JNI method calls in the corresponding C files, to work with our current framework.

Now to the problem. The SDK seems to work in the background as expected, as the debug messages I put into the C code and into the method calls are logged. Displaying the frame is still not possible because of the different content views and EGL handling. The only difference to the original Wikitude samples project is that I use the SurfaceViewRenderer of the WebRTC framework to display the video stream. The ArchitectView / CustomSurfaceView is also never set via setContentView(...) because I need my own CallView to be active. Is it necessary to set the content view to one of these views for the SDK to work properly?
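
To make the view situation concrete, here is a stripped-down sketch of my setup (class names like CallActivity are just placeholders, and all error handling and Wikitude lifecycle calls are left out):

```java
import android.app.Activity;
import android.os.Bundle;
import android.widget.FrameLayout;

import org.webrtc.SurfaceViewRenderer;

import com.wikitude.architect.ArchitectView;

public class CallActivity extends Activity {

    private FrameLayout callView;                // our own call UI
    private SurfaceViewRenderer remoteRenderer;  // WebRTC video stream
    private ArchitectView architectView;         // never passed to setContentView(...)

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // The WebRTC renderer lives inside our own CallView, which is the content view.
        callView = new FrameLayout(this);
        remoteRenderer = new SurfaceViewRenderer(this);
        callView.addView(remoteRenderer);
        setContentView(callView);

        // The Wikitude view (ArchitectView in the JS API, CustomSurfaceView in the
        // native API samples) is only created; it never becomes the content view.
        // The Wikitude lifecycle calls (onCreate/onPostCreate/load/...) are omitted here.
        architectView = new ArchitectView(this);
    }
}
```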

I took a look into the ArchitectView class (JavaScript API) and saw a GLSurfaceView member, which indicates you use this class to manage the EGL parts. In the native API there is the CustomSurfaceView class, which extends GLSurfaceView, and a GLRenderer that extends GLSurfaceView.Renderer to manage the EGL parts. In the WebRTC library these parts are handled by the SurfaceViewRenderer, SurfaceEglRenderer, EglRenderer and EglBase classes. It seems to me that the classes used by Wikitude and the classes used by the WebRTC library are doing the same thing under the hood, so it should be possible to merge them. As it is difficult to debug obfuscated code such as the ArchitectView class, I do not really know which parts I need to integrate into the corresponding WebRTC classes. I would like to integrate the parts the SDK needs into the WebRTC classes if that is possible. To this end it would be very useful to know which parts of the Wikitude classes are needed for the SDK to work properly.

 

To help put my questions into context, a simple WebRTC project showing the most basic operation flow can be seen at: https://chromium.googlesource.com/external/webrtc/+/refs/heads/master/examples/androidnativeapi/java/org/webrtc/examples/androidnativeapi


Best regards, Samuel

 


Hi Samuel,


Unfortunately, our SDK consumes the camera stream and there is no way to offer WebRTC access to it in parallel. I see your point and understand your interest, but as of now (SDK 8.3), although you may use the data and audio streams of WebRTC, the camera is blocked/already in use - compare the Stack Overflow post.


Best regards,
Andreas



Hi,

 

thank you for the quick response. I am not sure I understand 'our SDK consumes the camera stream' completely, so I will add some further questions. :) Do you need to hold the camera at some point in the SDK other than in the FrameInputPluginModule? Just to clarify: I am modifying the WebRTC files directly and use these modified libraries in my project.


Another approach I tried was to forward frames from WebRTC to the Wikitude functions. I used the listener that is already in the Camera2Session (I moved most of the camera and plugin functionality into that class). This listener (surfaceTextureHelper.startListening((VideoFrame frame) -> { ... })) gives me a VideoFrame, which can be converted to an I420 buffer. From this I420 buffer I can get the YUV ByteBuffers plus their strides, send them to the native functions of the Wikitude plugin, and convert the data back to NV12 by hand - after that I would have to implement a class that can take this ByteBuffer and give me a VideoFrame back. The problem I faced is that there is no existing backwards conversion in the WebRTC framework as of now, so I would have to implement this functionality in the native WebRTC code. I could try to do that by having a closer look at the buffer structures of the WebRTC framework. Could this in theory work? Still, the Wikitude SDK does not seem to execute certain functions if I provide frames this way. It seems to me that the SDK needs something that I do not yet provide.
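
For reference, the Java side of that approach looks roughly like this (nativeOnNewI420Frame is just a placeholder for the JNI bridge into the plugin; in my actual code the NV12 packing happens before the Wikitude call, and converting the processed bytes back into a VideoFrame is the missing piece):

```java
import java.nio.ByteBuffer;

import org.webrtc.SurfaceTextureHelper;
import org.webrtc.VideoFrame;

class WikitudeFrameForwarder {

    // Placeholder for the JNI bridge into the Wikitude plugin; in my code the
    // actual entry point is the NV21/NV12-based notifyNewCameraFrame call.
    private static native void nativeOnNewI420Frame(
            ByteBuffer dataY, int strideY,
            ByteBuffer dataU, int strideU,
            ByteBuffer dataV, int strideV,
            int width, int height);

    void hookInto(SurfaceTextureHelper surfaceTextureHelper) {
        surfaceTextureHelper.startListening((VideoFrame frame) -> {
            // Convert whatever buffer the capturer delivers (usually a texture
            // buffer) into an I420 buffer so the planes are readable from Java.
            VideoFrame.I420Buffer i420 = frame.getBuffer().toI420();

            nativeOnNewI420Frame(
                    i420.getDataY(), i420.getStrideY(),
                    i420.getDataU(), i420.getStrideU(),
                    i420.getDataV(), i420.getStrideV(),
                    i420.getWidth(), i420.getHeight());

            i420.release();

            // The frame still has to continue through the normal WebRTC pipeline;
            // building a new VideoFrame from a processed ByteBuffer is exactly the
            // backwards conversion that does not exist in the framework yet.
        });
    }
}
```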


The WebRTC Camera1Session actually uses the same capturing mechanism as the WikitudeCamera1. I tried inserting notifyNewCameraFrameNV21(data) into the listenForBytebufferFrames(...) function, and I can see that the frames are then forwarded to the native functions. Still, I was not able to get any results.
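
The insertion point, abridged, is essentially this (wikitudeModule is a placeholder for whatever object forwards to the plugin's native handle in my setup; the rest of the original Camera1Session code is only hinted at in comments):

```java
// Abridged sketch of WebRTC's Camera1Session.listenForBytebufferFrames();
// only the lines relevant to the inserted Wikitude hook are shown.
private void listenForBytebufferFrames() {
    camera.setPreviewCallbackWithBuffer((data, callbackCamera) -> {
        // Inserted hook: hand the NV21 preview bytes to the Wikitude plugin
        // before they are wrapped into a WebRTC frame.
        wikitudeModule.notifyNewCameraFrameNV21(data);

        // ... original Camera1Session code continues here: the byte[] is wrapped
        // into an NV21Buffer/VideoFrame, delivered to the CapturerObserver, and
        // the buffer is later returned via camera.addCallbackBuffer(data).
    });
}
```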

 

 

After reading through the SO post again I decided to further clarify my modifications.


I had to adapt the WikitudeCamera2.java class in order to make it work with the WebRTC library (mainly by moving callbacks and functionality to the Camera2Session class of WebRTC). I also do not use the FrameInputPluginModule.java class, as I imported that functionality into the Camera2Session as well. So I am able to let the camera render onto two surfaces, etc. Displaying it is of course not that easy, since we have two different view class situations. That is why I thought it might be important to know which functionality is needed for the SDK to work properly.
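
The two-surface part boils down to giving the capture session and the repeating request both targets; a minimal sketch (not my actual Camera2Session code, error handling and threading omitted):

```java
import java.util.Arrays;

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.view.Surface;

class DualSurfaceCaptureSketch {

    // webrtcSurface: the Surface backed by the capturer's SurfaceTexture;
    // secondSurface: whatever additional consumer should receive the frames.
    void startPreview(final CameraDevice device,
                      final Surface webrtcSurface,
                      final Surface secondSurface) throws CameraAccessException {
        device.createCaptureSession(
                Arrays.asList(webrtcSurface, secondSurface),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(CameraCaptureSession session) {
                        try {
                            CaptureRequest.Builder builder =
                                    device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                            // Both surfaces are targets of the same repeating request,
                            // so the camera renders into them simultaneously.
                            builder.addTarget(webrtcSurface);
                            builder.addTarget(secondSurface);
                            session.setRepeatingRequest(builder.build(), null, null);
                        } catch (CameraAccessException e) {
                            // handle error
                        }
                    }

                    @Override
                    public void onConfigureFailed(CameraCaptureSession session) {
                        // handle error
                    }
                },
                null /* callbacks on the current thread's looper */);
    }
}
```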

I guess I now understand.

Using the InputPlugin to feed in the WebRTC camera stream is the cleanest possible way to consume the stream while also processing it internally. Could you describe the "Still I was not able to get any results" phrase in more detail? The use of InputPlugins is described in the documentation.

BR, Andreas


Hi,

 

thanks again for the quick reply!

From now on I will only describe the Camera1 version, because I think it is easier to get working since it uses the exact same method of obtaining frames (I can still try to get Camera2 to work afterwards). I am currently focusing on the native API (version 8.2). As previously mentioned, I am not writing my own plugin; rather, I am trying to integrate the Advanced Custom Camera Plugin from the samples. As far as I understood, this is essentially the InputPlugin approach you are referring to.

 

What I meant by "Still I was not able to get any results" is that the scanning effect is not shown on the WebRTC video. I added debug messages to all the functions, constructors and native function calls of the WebRTC and Wikitude classes I am using. From this I can see that the calls are the same in the native sample app (Advanced Custom Camera) and in my application. The plugin is registered, the nativeHandle is set and the JNI functions are called when appropriate. Functions that are not called, however, are 'update(...)' in YUVFrameInputPlugin.cpp and 'startRender(...)' in OPENGLESScanningEffectRenderingPluginModule.cpp. These should be called by the SDK, right? Any ideas what might cause them not to be called? Am I not providing something the SDK needs internally?

 

Hi Samuel,



as long as you have your Plugin and the corresponding PluginModules registered and have started the SDK, I would expect these functions to be called. Did you by any chance disable your Plugin or PluginModule (setEnabled(false);)? Otherwise they should default to being enabled.



- Daniel

