I am trying to share my AR scene through a video call (you can think of it as Vuforia Chalk). I am okay if the receiver gets a 2D video stream, but the client making the call should be in a proper 3D AR scene. I am using Unity.
Is this possible via the Wikitude SDK, or with the ARToolKit and ARCore it is consuming?
I'm still having trouble with it. When implementing this approach in the Wikitude ExtendedImageTrackingActivity, for example, it uses the CustomSurfaceView. How can I use an ImageReader? Should I implement another SurfaceView extending CustomSurfaceView, or create an ImageReader inside CustomSurfaceView? And how do I create a CameraCaptureSession for the camera preview that uses the ImageReader's surface? (Based on https://github.com/SundayLab/Camera2BasicStream)
I need the camera stream to share the scene through a video call.
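For context, the Camera2 pattern that sample is built on does not need a second SurfaceView: you create one ImageReader and pass its surface alongside the preview surface when creating the capture session. A rough sketch, not Wikitude-specific; `cameraDevice`, `previewSurface` and `backgroundHandler` are assumed to come from the usual Camera2 setup, and the size is a placeholder:

```java
import android.graphics.ImageFormat;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.media.ImageReader;
import java.util.Arrays;

// Sketch: an ImageReader whose surface receives the same frames as the preview.
ImageReader reader = ImageReader.newInstance(1280, 720, ImageFormat.YUV_420_888, 2);

cameraDevice.createCaptureSession(
        Arrays.asList(previewSurface, reader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override public void onConfigured(CameraCaptureSession session) {
                try {
                    CaptureRequest.Builder builder =
                            cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                    builder.addTarget(previewSurface);       // on-screen preview
                    builder.addTarget(reader.getSurface());  // frames for the call
                    session.setRepeatingRequest(builder.build(), null, backgroundHandler);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }
            @Override public void onConfigureFailed(CameraCaptureSession session) { }
        },
        backgroundHandler);
```

Note this only gives you the raw camera frames, not the augmentations rendered on top of them, which is why the answers below go through the GL surface instead.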
I've been able to implement this using a custom SurfaceView implementation based on RecordableSurfaceView from UncorkedStudios.
Replace the MediaRecorder logic with an ImageReader and use the reader's surface instead of MediaCodec.createPersistentInputSurface().
Add an OnImageAvailableListener to the reader, and you now have access to the rendered video frames.
You might need some image processing on the frames depending on what you want to do with them.
I'm not in a position to share our code, so that's all I can share. Hope it helps.
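To make the ImageReader part of the recipe above concrete, here is a minimal sketch of what the replacement looks like (sizes and `backgroundHandler` are placeholders; the EGL/rendering wiring from RecordableSurfaceView stays as it is):

```java
import android.graphics.PixelFormat;
import android.media.Image;
import android.media.ImageReader;
import android.view.Surface;
import java.nio.ByteBuffer;

// Sketch: replaces the MediaRecorder/MediaCodec surface in the
// RecordableSurfaceView-based setup.
ImageReader imageReader = ImageReader.newInstance(
        1280, 720, PixelFormat.RGBA_8888, /* maxImages= */ 2);

// Hand this surface to the EGL setup instead of
// MediaCodec.createPersistentInputSurface():
Surface inputSurface = imageReader.getSurface();

imageReader.setOnImageAvailableListener(reader -> {
    Image image = reader.acquireLatestImage();
    if (image == null) return;
    try {
        // RGBA has a single plane; note rowStride can exceed width * 4.
        Image.Plane plane = image.getPlanes()[0];
        ByteBuffer pixels = plane.getBuffer();
        // ... convert / hand the frame to the video-call pipeline ...
    } finally {
        image.close(); // always release, or the reader stalls at maxImages
    }
}, backgroundHandler);
```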
Has this feature (video capturing) been developed yet? I also need to get the video stream and send it to another device.
I know I can get the camera frames via a C++ plugin (that's what I have now), but I need the camera frames plus anything drawn over them in the GLSurfaceView (cf. screen sharing).
Currently I've been able to incorporate Grafika's CircularEncoder; now I just need to figure out how to get video frames from the buffer. It's disturbingly complex to accomplish such a seemingly simple task :)
You can get the frame data with a C++ plugin. Please take a look at the Barcode plugin in our example app.
I would recommend using the SDK 8.0 beta when starting with a plugin, since there were big changes to the plugins API from 7 to 8.
I'm also looking for a way to stream AR enriched video frames using the Android native SDK.
Could you maybe point us in the right direction for extracting frames (preferably in NV21 format) from the GLSurfaceView?
I've been looking at the Grafika examples (ContinuousCaptureActivity and RecordFBOActivity), but these always end up recording a video (mp4) instead of providing the actual frames.
Same with RecordableSurfaceView (https://github.com/UncorkedStudios/recordablesurfaceview).
Any hints would be greatly appreciated!
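Partial answer to the NV21 part, in case it's useful: whichever path you use to get at the rendered frame (glReadPixels, or an RGBA_8888 ImageReader), you end up with RGBA pixels, and the NV21 conversion itself is plain arithmetic. A self-contained sketch using a BT.601 full-range integer approximation, assuming tightly packed rows and even width/height:

```java
public final class Nv21Converter {

    /** Converts tightly packed RGBA8888 pixels to NV21 (full Y plane + interleaved VU). */
    public static byte[] rgbaToNv21(byte[] rgba, int width, int height) {
        byte[] nv21 = new byte[width * height * 3 / 2];
        int yIndex = 0;
        int uvIndex = width * height; // VU pairs start after the Y plane
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int i = (row * width + col) * 4;
                int r = rgba[i] & 0xFF, g = rgba[i + 1] & 0xFF, b = rgba[i + 2] & 0xFF;
                // BT.601 full-range coefficients, scaled by 256
                nv21[yIndex++] = (byte) ((77 * r + 150 * g + 29 * b) >> 8);
                if (row % 2 == 0 && col % 2 == 0) { // one chroma pair per 2x2 block
                    int u = ((-43 * r - 84 * g + 127 * b) >> 8) + 128;
                    int v = ((127 * r - 106 * g - 21 * b) >> 8) + 128;
                    nv21[uvIndex++] = (byte) clamp(v); // NV21 stores V before U
                    nv21[uvIndex++] = (byte) clamp(u);
                }
            }
        }
        return nv21;
    }

    private static int clamp(int x) {
        return Math.max(0, Math.min(255, x));
    }
}
```

One caveat: glReadPixels returns rows bottom-up relative to normal video orientation, so you may also need to flip the image vertically before or during the conversion.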
OK, then the described solution would be our approach.
Video capturing is on our roadmap, but we have no specific timeline for this feature yet.
Do I get it right that Device A should augment a scene and send a stream of the scene, including the augmentation, to Device B? And that Device B has no possibility to interact with the augmentation and only consumes the stream?
If streaming the full scene is not important, you could send just the position of the augmentation(s) to the client. This would also be very low bandwidth and fast. The precondition is that Device B also has the augmentation.
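To illustrate how small the position-only payload can be: a pose update is just position plus rotation, 28 bytes per frame. The packet format below is made up for illustration, not anything from the SDK:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public final class PosePacket {
    // 3 floats position + 4 floats quaternion = 28 bytes per update
    public static final int SIZE = 7 * Float.BYTES;

    /** Packs a pose into a fixed-size little-endian packet. */
    public static byte[] encode(float[] position, float[] quaternion) {
        ByteBuffer buf = ByteBuffer.allocate(SIZE).order(ByteOrder.LITTLE_ENDIAN);
        for (float p : position) buf.putFloat(p);   // x, y, z
        for (float q : quaternion) buf.putFloat(q); // x, y, z, w
        return buf.array();
    }

    /** Returns {px, py, pz, qx, qy, qz, qw}. */
    public static float[] decode(byte[] packet) {
        ByteBuffer buf = ByteBuffer.wrap(packet).order(ByteOrder.LITTLE_ENDIAN);
        float[] pose = new float[7];
        for (int i = 0; i < pose.length; i++) pose[i] = buf.getFloat();
        return pose;
    }
}
```

Even at 60 updates per second per augmentation that is under 2 KB/s, versus megabits for a video stream.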
Currently we unfortunately have no way to capture a video of the full AR scene comparable to our captureScreen() function. For streaming the video of the scene, other solutions should be available.