Solved

How to record the AR scene in Wikitude?

Hi all,


Is there a way to record an AR scene in Wikitude?


I am trying to record an MP4 video of what the Wikitude Camera sees, using a Unity plugin called NatCorder. However, the output video only contains the AR objects and does not contain the camera background. I've attached a video here illustrating what I mean.


The plugin works perfectly with ARCore; I am able to create MP4 videos that contain both the AR objects and the camera background. I'm unsure why it works differently in Wikitude, since the Camera components in both seem to be similar. Can anyone give me any pointers?


Thanks


James


Hi Leonardo,


This was not the case before: since release 9.3 the BackgroundCamera is enabled by default, whereas previously it was hidden. The other camera is just a regular Unity camera and should be handled correctly if everything is set up accordingly. Could you share the script in which you initialize the NatCorder part?


Kind regards,
Gökhan

Hi Gökhan


Thanks for the explanation. 

I'm using the Professional Edition Plugin.

But I still can't record both cameras simultaneously with NatCorder. 

It is possible to record the BackgroundCamera, but not the WikitudeCamera (AR objects).

The Unity forum suggests:


"This can probably be worked around by using a screen space - camera UI and setting it to render from your specific camera (set to orthographic mode). The 'overlay' mode doesn't go through the normal render pipeline so it's not injected into a normal camera render."

https://forum.unity.com/threads/render-a-canvas-to-rendertexture.272754/#post-1804847
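
For reference, a minimal sketch of the setup that post describes, assuming a dedicated UI Canvas and an orthographic UI camera (the names are illustrative, not Wikitude-specific):


using UnityEngine;

// Minimal sketch of the "Screen Space - Camera" setup suggested in the Unity forum post.
// uiCanvas and uiCamera are placeholder references assigned in the Inspector.
public class ScreenSpaceCameraSetup : MonoBehaviour
{
    [SerializeField] Canvas uiCanvas;
    [SerializeField] Camera uiCamera;

    void Awake()
    {
        // Render the canvas through a specific camera instead of as an overlay,
        // so it goes through the normal render pipeline and shows up in captures.
        uiCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        uiCanvas.worldCamera = uiCamera;
        uiCanvas.planeDistance = 1f;

        // The forum post recommends setting the UI camera to orthographic mode.
        uiCamera.orthographic = true;
    }
}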


But I could not make Wikitude work this way.

Any suggestions?


Thanks 

Hi Leonardo,
I assume you are using the Professional Edition Plugin. There are some differences in components and scene hierarchy compared to the Expert Edition Plugin.
To clarify the scene hierarchy in both plugins, here is a short summary:

Professional Edition:

The BackgroundCamera gameObject renders the camera texture/frame (in Expert Edition this is the Camera Frame Renderer).
The WikitudeCamera basically renders the scene (in Expert Edition this is the Main Camera).

Expert Edition:

The Camera Frame Renderer renders the camera texture/frame (in Professional Edition this is the Background Camera).
The Main Camera basically renders the scene (in Professional Edition this is the WikitudeCamera).
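
As a rough illustration (the object names below assume the default sample scene setup and may differ in your project), the two cameras could be looked up like this:


using UnityEngine;

// Rough illustration of grabbing both cameras by GameObject name.
// Adjust the names to match your own scene hierarchy.
public class CameraLookupExample : MonoBehaviour
{
    void Start()
    {
        // Professional Edition: the camera frame is rendered by the BackgroundCamera,
        // the scene (augmentations) by the WikitudeCamera.
        Camera frameCamera = GameObject.Find("BackgroundCamera")?.GetComponent<Camera>();
        Camera sceneCamera = GameObject.Find("WikitudeCamera")?.GetComponent<Camera>();

        // Expert Edition: the camera frame is rendered by the Camera Frame Renderer,
        // the scene by the Main Camera.
        // Camera frameCamera = GameObject.Find("Camera Frame Renderer")?.GetComponent<Camera>();
        // Camera sceneCamera = Camera.main;

        Debug.Log($"Frame camera: {frameCamera}, scene camera: {sceneCamera}");
    }
}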

Kind regards,
Gökhan

Hey Leonardo Henrique Neto,


This is what my Unity object hierarchy looks like. If yours looks different, you will have to adapt it to your project.


[screenshot: Unity object hierarchy]


Regards,

Jan

Hello


I'm trying to record the cameras, but the above solutions do not work for me.

I can get the backgroundCamera (the one with the video input), but how can I get the arCamera? This code does not work:


Camera arCamera = GameObject.Find ("Wikitude/Camera Frame Renderer").GetComponent<Camera> ();


I'm using Wikitude 9.4, Unity 2019.14

Thanks


Hello,

the fix is already included in Expert Edition and will be included in Professional Edition with the next release coming next week (9.3).


Kind regards,
Gökhan



I'm still experiencing the issue where augmentations are misplaced in the recording. Has the fix landed already?
I use Unity 2019.3.3f1
Wikitude SDK 9.2
NatCorder for recording.

Hey Gökhan,


This sounds great. Thanks for your effort!


Best Jan

Hi Jan,


we found and resolved the underlying issue with the mismatched augmentations (when recording in a resolution different than the screen size). We are currently testing the fix and investigating how to release it.


Kind regards,

Gökhan



I just figured out that some Android devices crash if the recording resolution is above 1920x1080 because they cannot handle it for some reason.


One of my test devices has a screen size of 2340x1080, which results in one of those crashes as soon as I start recording at that resolution.


I had some back and forth with the main developer of NatCorder and he suggested recording in a resolution that is below Full HD but still keeps the device's screen aspect ratio.
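
A rough sketch of how such a resolution could be computed (the 1280 cap is just an example value; hardware encoders typically also expect even dimensions):


using UnityEngine;

// Sketch: pick a recording resolution below Full HD that keeps the device screen aspect ratio.
public static class RecordingResolution
{
    public static Vector2Int Compute(int maxLongSide = 1280)
    {
        float aspect = (float)Screen.width / Screen.height;
        int width, height;

        if (Screen.width >= Screen.height)
        {
            width = Mathf.Min(Screen.width, maxLongSide);
            height = Mathf.RoundToInt(width / aspect);
        }
        else
        {
            height = Mathf.Min(Screen.height, maxLongSide);
            width = Mathf.RoundToInt(height * aspect);
        }

        // Round down to even numbers for the encoder.
        return new Vector2Int(width & ~1, height & ~1);
    }
}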


This actually fixed the Android crash, but the final recorded video was still off. In my case the augmentation no longer matched the background camera feed (it was misplaced only in the recorded video; it was fine in the Unity Editor as well as on the iOS and Android screens).


The NatCorder developer suggested talking to Wikitude about this, because from his point of view it seems to be a Wikitude Camera issue.


This is his actual response to my last email:


[screenshot: NatCorder developer's email response]


@Wikitude Support: Do you have any idea how to tackle this, or what the issue might be here?

Hi both! I used an older release of NatCorder that didn't support passing an array of cameras to CameraInput, so it's good to see that it does now. For anyone who stumbles upon this question in the future, follow Jan's solution, as the API now allows for a simpler approach.


Anyway, here is my code for those who are interested. NatCorder allows you to obtain the frames it uses for the video as RenderTextures. At the start of the recording, I register a method called OnFrame() to start listening to NatCorder's DispatchUtility.onFrame event. This event triggers every time a new frame is about to be added to the video (for example, if you've set the frame rate to 30, it will trigger 30 times per second). The bulk of the logic happens in this OnFrame() method. It first acquires a RenderTexture frame from the videoRecorder object, then blits the WikitudeCamera.CameraTexture onto it. It then iterates through the other cameras and renders what they see onto the texture. After this, it sends the frame back to the videoRecorder object to be included in the video.

 

using System;
using System.Collections;
using System.IO;
using NatCorder;
using NatCorder.Clocks;
using NatCorder.Dispatch;
using NatCorder.Inputs;
using UnityEngine;
using Wikitude;
using Zenject;

/// <summary>
/// The Recorder class is responsible for recording in-game videos using the Wikitude Camera
/// </summary>
public class Recorder : MonoBehaviour
{
    // The WikitudeCamera is required to access the camera feed (from the device camera)
    [SerializeField] WikitudeCamera wikitudeCamera;

    // The array of other cameras that need to be recorded
    [SerializeField] Camera[] cameras;

    // Boolean to determine whether to record sound/music or not
    [SerializeField] bool recordSound = true;

    // Audiolistener object that listens to sound/music
    [SerializeField] AudioListener audioListener;

    // The horizontal resolution of the captured video. (Note, to preserve the aspect ratio, we'll
    // dynamically compute the vertical resolution so it matches the device).
    [SerializeField] int captureResolutionWidth = 480;

    // The vertical resolution of the captured video. This is computed at runtime using both the 
    // desired horizontal resolution, and the device's aspect ratio to ensure we keep the proportions
    // correct.
    int captureResolutionHeight = 480;

    // Number of frames per second
    [SerializeField] int framesPerSecond = 30;

    // The bitrate used when recording the video.
    [SerializeField] int captureBitrate = 2500000;

    // How long we will be capturing the session for.
    [SerializeField] int captureDuration = 60;

    // Object that records video
    MP4Recorder videoRecorder;

    // Clock used by videoRecorder for correctly timing the video frames and/or audio input
    RealtimeClock recordingClock;

    // Used by the videoRecorder to record audio
    AudioInput audioInput;

    // Temporary path of the created video
    string videoPath;

    // The original render textures that the cameras render to.
    RenderTexture[] originalRenderTextures;

    // Tracks whether frames are currently being captured.
    public bool IsRecording { get; private set; }

    private void Start()
    {
        captureResolutionHeight = Mathf.RoundToInt((float)captureResolutionWidth / ((float)Screen.width / (float)Screen.height));
        originalRenderTextures = new RenderTexture[cameras.Length];

        for (int i = 0; i < cameras.Length; i++)
        {
            originalRenderTextures[i] = cameras[i].targetTexture;
        }
    }

    public void StartRecording()
    {
        if(!enabled)
        {
            Debug.LogError("Component must be enabled in order to record.");
            return;
        }

        recordingClock = new RealtimeClock();

        videoRecorder = new MP4Recorder(
            captureResolutionWidth,
            captureResolutionHeight,
            framesPerSecond,
            recordSound ? AudioSettings.outputSampleRate : 0,
            recordSound ? (int)AudioSettings.speakerMode : 0,
            OnRecordEnd,
            captureBitrate
        );

        if (recordSound)
        {
            audioInput = new AudioInput(videoRecorder, recordingClock, audioListener);
        }

        // The DispatchUtility.onFrame event is triggered at the frame rate specified above (e.g. 30 fps).
        // We listen to this event so we can build the frames that make up the video.
        DispatchUtility.onFrame += OnFrame;
        IsRecording = true;
    }

    public void StopRecording()
    {
        Debug.LogFormat(">>>> StopRecording");
        if (!enabled)
        {
            Debug.LogWarning("Component must be enabled in order to record.");
            return;
        }

        DispatchUtility.onFrame -= OnFrame;
        IsRecording = false;

        if (recordSound)
        {
            audioInput?.Dispose();
            audioInput = null;
        }

        videoRecorder?.Stop();
    }

    public void PauseRecording()
    {
        if (!enabled)
        {
            Debug.LogWarning("Component must be enabled in order to record.");
            return;
        }

        if (!IsRecording || recordingClock == null)
        {
            return;
        }

        recordingClock.Pause();
        DispatchUtility.onFrame -= OnFrame;
        IsRecording = false;
        
        if (recordSound)
        {
            audioInput.Dispose();
            audioInput = null;
        }
    }

    public void ResumeRecording()
    {
        if (!enabled)
        {
            Debug.LogError("Component must be enabled in order to record.");
            return;
        }

        if (IsRecording || recordingClock == null)
        {
            return;
        }

        recordingClock.Resume();
        DispatchUtility.onFrame += OnFrame;
        IsRecording = true;

        if (recordSound)
        {
            audioInput = new AudioInput(videoRecorder, recordingClock, audioListener);
        }
    }

    // OnFrame is called every time a new frame needs to be created for the video
    private void OnFrame()
    {
        // Obtain a blank frame which we will use to create the video
        var frame = videoRecorder.AcquireFrame();

        // Copy the camera feed texture onto the frame. iOS flips the texture for some reason so check for this before rendering on it
        #if UNITY_IOS && !UNITY_EDITOR
        Graphics.Blit(wikitudeCamera.CameraTexture, frame, new Vector2(1, -1), Vector2.zero);
        #else
        Graphics.Blit(wikitudeCamera.CameraTexture, frame);
        #endif

        // Copy what each camera sees onto the frame (e.g. UI elements, etc.), on top of the camera feed texture already on the frame
        for (int i = 0; i < cameras.Length; i++)
        {
            cameras[i].targetTexture = frame;
            cameras[i].Render();
            cameras[i].targetTexture = originalRenderTextures[i];
        }

        // Send the finished frame to the video recorder to be included in the final video
        videoRecorder.CommitFrame(frame, recordingClock.Timestamp);
    }

    // OnRecordEnd is called after the video has been created.
    private void OnRecordEnd(string filePath)
    {
        Debug.LogFormat(">>>> Recorder.OnRecordEnd: {0}", filePath);
        videoPath = filePath;
        videoRecorder.Dispose();
        videoRecorder = null;
    }
}

 



NatCorder 1.7.0 allows you to pass multiple cameras to a recorder. This code snippet works for me. Feel free to copy and modify it.

using System.Collections;
using NatCorder;
using NatCorder.Clocks;
using NatCorder.Inputs;
using UnityEngine;

public class RecordController : MonoBehaviour {

    [Header ("Recording")]
    private int videoWidth = Screen.width;
    private int videoHeight = Screen.height;
    public bool recordMicrophone;

    private IMediaRecorder recorder;
    private CameraInput cameraInput;
    private AudioInput audioInput;
    private AudioSource microphoneSource;

    private IEnumerator Start () {
        // Start microphone
        microphoneSource = gameObject.AddComponent<AudioSource> ();
        microphoneSource.mute =
            microphoneSource.loop = true;
        microphoneSource.bypassEffects =
            microphoneSource.bypassListenerEffects = false;
        microphoneSource.clip = Microphone.Start (null, true, 10, AudioSettings.outputSampleRate);
        yield return new WaitUntil (() => Microphone.GetPosition (null) > 0);
        microphoneSource.Play ();
    }

    private void OnDestroy () {
        // Stop microphone
        if (microphoneSource != null) {
            microphoneSource.Stop ();
            Microphone.End (null);
        }
    }

    public void StartRecording () {
        // Start recording
        var frameRate = 30;
        var sampleRate = recordMicrophone ? AudioSettings.outputSampleRate : 0;
        var channelCount = recordMicrophone ? (int) AudioSettings.speakerMode : 0;
        var clock = new RealtimeClock ();
        recorder = new MP4Recorder (videoWidth, videoHeight, frameRate, sampleRate, channelCount);
        // Create recording inputs
        Camera arCamera = GameObject.Find ("Wikitude/Camera Frame Renderer").GetComponent<Camera> ();
        Camera backgroundCamera = Camera.main;
        Camera[] cameras = new Camera[] { arCamera, backgroundCamera };
        cameraInput = new CameraInput (recorder, clock, cameras);
        audioInput = recordMicrophone ? new AudioInput (recorder, clock, microphoneSource, true) : null;
        // Unmute microphone
        microphoneSource.mute = audioInput == null;
    }

    public async void StopRecording () {
        // Mute microphone
        microphoneSource.mute = true;
        // Stop recording
        audioInput?.Dispose ();
        cameraInput.Dispose ();
        var path = await recorder.FinishWriting ();
        // Playback recording
        Debug.Log ($"Saved recording to: {path}");
        var prefix = Application.platform == RuntimePlatform.IPhonePlayer ? "file://" : "";
        Handheld.PlayFullScreenMovie ($"{prefix}{path}");
    }
}

 

1. How do you "merge" Wikitude CameraTexture on top of the already existing RenderTexture (Camera Background) without replacing it?

2. How do you pass this "merged" RenderTexture to NatCorder?


A short code example would be incredibly useful :)

How did you render the objects on top of the RenderTexture? Could you please share the full recording code? I've been hopelessly stuck for the past 2 days.

I figured out how to record the AR scene! 


The way NatCorder works is that you pass in RenderTextures which act as frames for the video. What I did was render the current camera background onto the RenderTexture (using WikitudeCamera.CameraTexture) and then render the objects that the Wikitude Camera sees on top of that. I then passed this RenderTexture back to NatCorder to use as a frame for the video, and I do this for every frame.
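
In condensed form, the per-frame logic looks roughly like this (see the full Recorder script I posted in this thread for the complete setup):


// Condensed sketch of the per-frame logic; videoRecorder, wikitudeCamera, cameras and
// recordingClock are the fields from the full Recorder script.
private void OnFrame()
{
    // Obtain a blank RenderTexture frame from NatCorder.
    var frame = videoRecorder.AcquireFrame();

    // 1. Copy the device camera feed onto the frame.
    Graphics.Blit(wikitudeCamera.CameraTexture, frame);

    // 2. Render the AR objects (and any other cameras) on top of it.
    foreach (var cam in cameras)
    {
        var original = cam.targetTexture;
        cam.targetTexture = frame;
        cam.Render();
        cam.targetTexture = original;
    }

    // 3. Hand the finished frame back to NatCorder to be included in the video.
    videoRecorder.CommitFrame(frame, recordingClock.Timestamp);
}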

