Multi-GPU rendering

  • When assigning a filter/encode operation to FFmpeg GPU1, the encode uses GPU1's NVENC, but GPU0's CUDA and copy engines. There is bus activity on both GPUs, so this is not an OS reporting anomaly: data is being shuffled from one GPU to the other mid-task.

    When assigning filter/encode operations to both GPUs, FFmpeg GPU1's CUDA and copy engines both remain idle.

    Testing on a 2-GPU system, VoPro version, Vegas 20 build 411. GPU was assigned by setting the target GPU in the CUDA Upload filter.

    NOTE: FFmpeg numbers the GPUs differently than Windows does. In my system:

    • Windows GPU0 = FFmpeg "GPU1"
    • Windows GPU1 = FFmpeg "GPU0"

    You can find the FFmpeg designation for each card with the following command:

    ./ffmpeg -f lavfi -i nullsrc -c:v h264_nvenc -gpu list -f null -
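    The command errors out after listing the devices, which is expected. The sample output below is illustrative only (the card names and compute versions are placeholders, and the exact wording varies with the FFmpeg build and driver); the `GPU #n` index is the number FFmpeg uses:

    ```shell
    # List CUDA devices as FFmpeg/NVENC enumerates them (indices may differ from Windows).
    ./ffmpeg -f lavfi -i nullsrc -c:v h264_nvenc -gpu list -f null -
    # Illustrative output (card names below are placeholders):
    #   [h264_nvenc @ ...] [ GPU #0 - < NVIDIA GeForce RTX 3060 > has Compute SM 8.6 ]
    #   [h264_nvenc @ ...] [ GPU #1 - < NVIDIA GeForce GTX 1070 > has Compute SM 6.1 ]
    ```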

    Scene files to test:

    PS, to run both these Scenes simultaneously, you'll probably have to patch your drivers (because there are more than 3 total output streams):

    nvidia-patch/win at master · keylase/nvidia-patch (GitHub): removes the restriction on the maximum number of simultaneous NVENC video encoding sessions that Nvidia imposes on consumer-grade GPUs.


  • Update: Added a 3rd graphics card into the system. The correct card GPU0/1/2 is always used for NVENC, but all other functions (CUDA and copy engines) are always performed by GPU0.


    • Official Post

    I have already implemented something like this which you will be able to test soon.

    Just want to finish the other tasks and then do a new build.


    Upload filter:

  • Well now I feel a bit silly. It wasn't a VoukoderPro problem at all.

    Removed all video nodes from a test scene, and still observed significant GPU0 copy and CUDA activity on an audio-only VoPro encode. Hmmm.

    I always have GPU acceleration turned off in Vegas because it slows renders down . . . but must have enabled it for some test, and forgot to disable it again. *facepalm*

    Once I turned GPU acceleration off in Vegas, the copy and CUDA activity is no longer present on GPU0. There is a light 3% load on GPU0 3D/CUDA engine when Vegas is onscreen, but this drops to 0% when Vegas is minimized, and seems to be just normal Windows usage of the primary graphics card.

    Tested renders on all 3 cards individually, and all GPU activity takes place on the correct intended card.

    So there is no actual problem.

    On another note . . . To my way of thinking, it would make more sense to have each node 'inherit' the GPU assignation of the previous (upstream) node in Scene Designer, rather than having to set a target GPU for every single node. If a user wanted to transfer the stream from one GPU to another at any point, couldn't they manually use Download/Upload nodes to send data from one card to another?


  • As it happens, using Download/Upload to transfer data between cards doesn't work. I'm not sure it's even possible to move video data between GPUs mid-task, or even use more than one GPU with a single instance of FFmpeg. This might be an FFmpeg limitation? Tried several different ways, and couldn't get it to work.

    When specifying an Upload to GPU0 -> Encode on GPU1, FFmpeg throws an error:

    [FFmpeg:0] Could not set non-existent option 'gpu' to value '1'

    Tried parallel Upload filters directly from Video Input node, uploading to 2 different cards (GPU0/1), and it throws the same type of error:

    [FFmpeg:0] Could not set non-existent option 'gpu' to value '0'

    Using Upload to GPU0 -> Scale -> Download from GPU0 -> Upload to GPU1 -> Encode on GPU1 doesn't work either. Throws the same FFmpeg error:

    It would certainly be nice to run multiple GPUs from a single Vegas render, but if this is in fact an FFmpeg limitation, I guess there's not much anybody can do about it except run multiple Vegas instances, each with a separate VoukoderPro Scene controlling its own GPU. Unless VoPro could run multiple FFmpeg instances from the same Vegas output buffer?


    • Official Post

    So there is no actual problem.

    Nice!! Well, you've still got the improvement of a named device dropdown instead of entering the device number ;)

    It would certainly be nice to run multiple GPUs from a single Vegas render, but if this is in fact an FFmpeg limitation, I guess there's not much anybody can do about it except run multiple Vegas instances, each controlling its own GPU. Unless VoPro could run multiple FFmpeg instances from the same Vegas output buffer?

    Voukoder(Pro) doesn't use an FFmpeg binary (as most other tools do); it uses the FFmpeg DLLs. So for each export it creates a new instance. But you can test your filter chain with the command-line ffmpeg version.

    In line 65 of your previously posted log file you can see the text representation of the filter chain:


    Just append it to the ffmpeg.exe command line using -filter_complex.
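    As a sketch of that workflow (input.avi and the quoted chain here are placeholders; substitute the actual chain copied from the log):

    ```shell
    # Hypothetical example; replace the quoted chain with the one
    # copied from line 65 of the log.
    ffmpeg -i input.avi -filter_complex "[0:v]hwupload_cuda,scale_cuda=1280:720[v]" -map "[v]" -c:v h264_nvenc output.mp4
    ```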

    it would make more sense to have each node 'inherit' the GPU assignation of the previous (upstream) node

    Yes, and setting the encoder pixel format automatically to CUDA. I might add that later.

  • ... using its DLL variants. So for each export it creates a new instance.

    I don't recall ever finding a way to control more than one GPU at a time with FFmpeg command line. Usually you specify "-gpu 1" etc. at the beginning, and that's the only card targeted by the entire command line. For multiple cards, you use multiple command lines.

    Am I to understand that VoukoderPro, with its FFmpeg DLL version, is under the same restrictions of 1 GPU per render/export?

    Not a huge problem for what I do, but just curious. Other people do far heavier stuff.
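    For reference, the one-GPU-per-command-line workaround I mean looks like this (file names and settings are placeholders):

    ```shell
    # Run one ffmpeg instance per GPU, e.g. in two separate terminals;
    # each -gpu option only applies to its own process.
    ffmpeg -i "d:\temp\input.avi" -c:v h264_nvenc -gpu 0 out_gpu0.mp4
    ffmpeg -i "d:\temp\input.avi" -c:v h264_nvenc -gpu 1 out_gpu1.mp4
    ```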


  • Seems like the duplicate assertions of the target GPU are causing problems. Directly encoding (Video Input -> Encoder) on GPU0/1/2 works. But when using (Video Input -> CUDA Upload -> Encoder), FFmpeg scrams.

    This appears to be caused by the duplicate GPU assertion commands issued to FFmpeg (both the Upload and Encoder nodes have options to choose a GPU in Scene Designer), and FFmpeg is not accepting the duplicates. For instance, when commanding an Upload to GPU2 and then an NVENC encode also on GPU2, FFmpeg acknowledges the first assertion but rejects the second.

    As mentioned in the previous post, I don't believe FFmpeg allows you to use the "-gpu " assignment more than once in a single instance. Even if it's to the same GPU.

    This only seems to affect GPUs other than GPU0. When GPU0 is used, I don't see any GPU assertion entries at all in the log file.

    In this log, when attempting to run on GPU2, it looks like FFmpeg is choking on the second assertion of "-gpu 2". See log, especially lines 2 and 14:

    Attempts to encode on GPU0 run properly, but this is probably because according to the log, "-gpu 0" is never asserted:


  • Just taking a couple steps back here and collecting a bit more data. And I triple-checked that GPU acceleration was turned off in Vegas.

    Version 0.7.2.8 does use the intended GPU1/2, with no 3D/CUDA activity on any cards. However, when encoding on GPU0 (FFmpeg designation), which happens to also be Windows' primary card, the 3D/CUDA engine sees a 93% load even with all windows minimized and GPU acceleration turned off in Vegas.

    Version 0.7.4, as mentioned, cannot do any complex operations on GPUs other than GPU0. Anything that requires both CUDA Upload and NVENC Encoder nodes fails because "gpu" is declared/assigned multiple times in the FFmpeg initialization. GPU0 still shows the same 3D/CUDA activity, similar to the v0.7.2.8 behavior above, and only when encoding using GPU0. No GPU0 activity when encoding with GPU1/2, and no 3D/CUDA activity on any cards when encoding on GPU1/2. No CUDA filters are being used in the test scenes.

  • The following may be the problem:


    Correct syntax should be: hwupload_cuda=1

    It is in fact possible to use multiple GPUs from a single FFmpeg command line. There doesn't seem to be much information out there on this topic, but the following example uses GPU0 to perform resolution scaling, then GPU1 and GPU2 to encode different formats (1080p, 720p).

    In FFmpeg command line, assigning each NVENC encoder "-gpu 1" or "-gpu 2" is completely pointless, as the encoder always uses whatever GPU was targeted by the previous hwupload_cuda command, regardless of NVENC GPU preference in the command line. If you (nonsensically) specify separate GPUs for Upload and Encode, the Upload GPU setting overrides the Encode GPU setting, and both actions take place on the Upload GPU. So adding/removing the NVENC "-gpu" options makes no difference in function.
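    A minimal pair illustrating this observation (file names and scaling settings are placeholders):

    ```shell
    # In both commands, upload, scaling and encoding all run on GPU1:
    # hwupload_cuda=1 selects the device and the encoder inherits it,
    # so the differing -gpu values below have no effect.
    ffmpeg -i input.avi -filter_complex "hwupload_cuda=1,scale_cuda=1280:720" -c:v h264_nvenc -gpu 1 out_a.mp4
    ffmpeg -i input.avi -filter_complex "hwupload_cuda=1,scale_cuda=1280:720" -c:v h264_nvenc -gpu 2 out_b.mp4
    ```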

    Using 3 GPUs is hilariously inefficient for such a small job, but the following command is a working example of FFmpeg running a complex operation using 3 GPUs at once. Using uncompressed AVI file input, which seems the most similar to what VoPro is doing (no hardware decoding):

    ffmpeg -y -probesize 42M -analyzeduration 10 -i "d:\temp\input.avi" -filter_complex:a "[0:a]asplit [asplit1][asplit2]" -filter_complex:v "[0:v]hwupload_cuda=0, split [split1][split2], [split1]scale_cuda=1920:1080:interp_algo=4:format=yuv420p:force_original_aspect_ratio=1, hwdownload, hwupload_cuda=1 [split1scaled],[split2]scale_cuda=1280:720:interp_algo=4:format=yuv420p:force_original_aspect_ratio=1, hwdownload, hwupload_cuda=2 [split2scaled]" -map "[asplit1]" -c:a ac3 -b:a 96k -map "[split1scaled]" -c:v h264_nvenc -gpu 1 -2pass 0 -b:v 2500k -maxrate 5000k -bufsize 5000k -bluray-compat 1 -coder 1 -cq 0 -g 48 -level 4 -preset:v p7 -profile:v high -rc:v vbr -rc-lookahead 20 -tune:v hq "output_1080v4.9.mp4" -map "[asplit2]" -c:a aac -profile:a aac_main -b:a 96k -map "[split2scaled]" -c:v h264_nvenc -gpu 2 -2pass 0 -b_ref_mode:v middle -preset:v p7 -profile:v 2 -qp 30 -rc 0 -rc-lookahead 20 -tune:v hq "output_720v2.1.mp4"

    Corresponding VoPro Scene would be something like this (it cannot yet be tested due to the current bug, as of version 0.7.4):


    • Official Post

    Found it. For the hwupload_cuda filter the parameter name is device, not gpu. Fixed it in 0.7.5.

    You might have to edit each hwupload_cuda node, select the gpu and save the scene again to make it work.
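    On the plain command line, the equivalent fixed syntax would be something like this (file names are illustrative):

    ```shell
    # hwupload_cuda takes a 'device' option (it can also be given
    # positionally, as in hwupload_cuda=1):
    ffmpeg -i input.avi -filter_complex "hwupload_cuda=device=1,scale_cuda=1280:720" -c:v h264_nvenc output.mp4
    ```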