Voted Premiere, but I'm considering switching to Resolve in the future if they can get all of the features I need. Primarily, I need an HDR preview in Windows, but also, their replacement for After Effects' Content Aware Fill doesn't appear to handle more difficult scenarios as well as AE does. If they end up doing both things as well as or better than Premiere/AE in the future, I may finally be able to switch and downgrade my Adobe subscription to Photoshop only.
Posts by morphinapg
-
-
You need video filters, or we can't change the resolution or framerate from what the project is set to.
I'm not sure about other editors, but the Premiere export tab allows you to do exactly that.
-
-
After grabbing this log, I also tested it by encoding with video + TrueHD and got the same issue.
By the way, when I tested that, I wanted to set a very low resolution so that the video wouldn't use many resources, and I noticed that if I typed a custom resolution into Premiere, it would return weird random numbers, like an overflow error or something. It worked fine in other encoders, and choosing a custom resolution in Media Encoder worked fine too. Adobe probably broke something with their ugly new export screen in Premiere.
-
-
Yeah, Premiere's encoders annoyed me with that too, now that you mention it lol
Found the spec that defines these values:
mastering-display-color-volume-metadata-supporting-high-luminanc.pdf
Here's the relevant part:
Quote
5.6 Maximum Display Mastering Luminance
The nominal maximum display luminance of the mastering display, as configured for the mastering process, shall be represented in candelas per square meter (cd/m²). The value shall be a multiple of 1 candela per square meter.
A value in the range [5, 10000] shall indicate the nominal maximum display luminance.
5.7 Minimum Display Mastering Luminance
The nominal minimum display luminance of the mastering display, as configured for the mastering process, shall be represented in candelas per square meter (cd/m²). The value shall be a multiple of 0.0001 candelas per square meter.
A value in the range [0.0001, 5.0000] shall indicate the nominal minimum display luminance.
Personally, I would allow a MinMDL of 0. The paper does say out-of-scope values can be used, and it gives some examples.
So I would say 5-10000 for maximum and 0.0000-5.0000 for minimum are reasonable ranges that I wouldn't expect anybody to want to go outside of. Honestly, I wouldn't expect anybody to go below 100 for maximum either, but it's allowed in the spec, so it might as well be allowed in the settings here.
Although technically, to cover out-of-scope values, you could just allow 0-10000 for both and note the nominal range in the tooltip.
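To illustrate how those units end up in an actual encoder: x265 takes mastering display luminance in 0.0001 cd/m² steps, so a hypothetical 1000-nit P3-D65 mastering display with a 0.0001-nit floor becomes L(10000000,1). A rough ffmpeg sketch (the primaries/white point numbers are just the standard P3-D65 coordinates in x265's 0.00002 units; the filenames and CLL values are made up):
Code
ffmpeg -i input.mov -c:v libx265 -x265-params "master-display=G(13250,34500)B(7500,3000)R(34000,16000)WP(15635,16450)L(10000000,1):max-cll=1000,400" output.mkv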
-
I've been doing some tests, considering the possibility of moving to NVENC for the projects I previously reserved for x265. I had been assuming that x265 compressed better at the same file sizes, even though it was slower due to running on the CPU, but some tests I've been doing suggest that may not necessarily be true. I need to do more tests to confirm it.
Anyway, one of these tests would involve HDR content, and in order to do that I am using setparams to set color space values. For my tests that's enough for now, but if I ended up using this workflow for my full projects, I'd need to make use of the "Mastering Display" and "Content Light Level" sections of the Side Data tab, as NVENC doesn't have its own sections for those settings like x265 does.
However, I noticed Maximum Luminance was limited to 4000 nits. While I of course do not have a display greater than 4000 nits, much of what I work with is video game content which can regularly exceed that. The problem is, if MaxMDL is considerably lower than the MaxCLL, many TVs will assume anything above MaxMDL should be clipped, and so it will use tonemapping curves that aren't the most optimal for the content. The HDR10 spec allows for content up to the 10,000 nit level, and MaxMDL should be allowed to be set that high as well.
Similarly, MinMDL seems to have a minimum of 0.001, whereas I am using an OLED that has absolutely perfect blacks, so I'd prefer to set this to exactly 0, since most of my content does indeed make use of perfect blacks.
So are these limitations put in place by something in ffmpeg, or are they just values you chose based on some other information? If these could be expanded to allow the full range of 0 through 10,000, that would probably be better. Min Luminance doesn't need to go that high, of course, but if you have a screen with a 1000:1 native contrast ratio and 1000 nits, then its minimum black level would be 1 nit, which isn't unrealistic, so I'd probably place the upper limit for that at maybe 5. I don't think many TVs actually do anything with the MinMDL setting, but it would still be better to allow a wider range of values if possible.
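(For reference, the setparams approach I mentioned above just tags the frames with the right color info without doing any actual conversion. On the ffmpeg command line it would look roughly like this; the filenames and NVENC settings are only an example:)
Code
ffmpeg -i hdr_input.mov -vf "setparams=range=tv:color_primaries=bt2020:color_trc=smpte2084:colorspace=bt2020nc" -c:v hevc_nvenc -profile:v main10 -pix_fmt p010le hdr_output.mkv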
-
-
-
Thank you very much for coming back to me so quickly, and I completely understand that the guide/thread isn't for this sort of thing; I was just curious as to whether Voukoder may suit some of my needs.
Please find a link to the MediaInfo below. I tried quite a few different combinations, but so far still no luck with YouTube tagging them correctly.
https://drive.google.com/drive/folders/…tvW?usp=sharing
I will definitely look at AviDemux for trimming purposes. To be honest, I ideally didn't want to trim and re-encode the file through Premiere Pro, but I tried it because my original HDR capture kept exhibiting frame drops at certain points in the clip once uploaded to YouTube. The drops were not present when playing in any of my media players, only after YouTube had finished converting.
The same thing happened to one of my other HDR clips, but re-encoding it from H.265 to H.264 and re-uploading to YouTube fixed the stuttering/fps drops.
You're right, YouTube definitely should be recognizing those files as HDR. As far as I can tell, everything you've done here is right. If it's not working, that's a problem on their end, and I would recommend contacting YouTube support to get it fixed. I've had some issues getting files recognized in the past, and YouTube can be inconsistent about when this works and when it doesn't, but letting them know whenever it doesn't work correctly can help solve the problem. Make sure you wait a couple of days after uploading, though, in case processing is just taking extra time.
-
Hi all,
I hope you are doing well.
Apologies for my lack of knowledge regarding some of these processes, but I had hoped someone might be able to advise me on what I may be doing wrong.
Just to be clear, this guide and thread are about encoding using the Voukoder plugin. I'm not too familiar with Adobe's built-in encoders, which have also changed considerably since I wrote this guide. If I had to guess, their YouTube 4K preset is not designed for HDR, but I do know some of their other codec settings work fine. I just don't like them because they don't give me access to the kinds of codecs and options I like, which Voukoder has, so I don't use them.
For HDR recording myself, I use the Atomos Ninja Inferno (the newer version is called the Ninja V), which allows me to manually flag the footage as SDR if I want, or leave the default HDR tags and use the preset I uploaded in the first post. In the future, the Premiere connector for Voukoder will also be able to work with the native HDR support Premiere has now, which will make much of my guide obsolete. As for other ways to record HDR, I believe ShadowPlay has some kind of HDR capture support, and the PS5's built-in game capture records HDR as well, although it's pretty heavily compressed. There may be other ways to record HDR that I'm unfamiliar with.
As for why the image looks less crisp to you when converted to SDR, there could be two things going on. Either your HDR display modes are adding sharpening that the display then lacks in SDR, or you're simply noticing the difference in dynamic range. Higher dynamic range means a larger contrast between dark and light colors, and higher-contrast edges will inevitably look sharper because of this. When you compress that dynamic range down, you lower the contrast of the image, and those edges with it. This can be especially bad if the tonemapping being applied is not ideal for the footage you recorded, which is why it will look different than if you had captured it directly in SDR, like you said: the game will tonemap its visuals to the SDR range much more ideally than any automatic tonemapping solution would.
This is why YouTube allows attaching a LUT to HDR footage, which you would generate using color grading software such as Lumetri Color in Premiere or DaVinci Resolve. That lets you color grade and tonemap the image entirely yourself, giving you full control over the way shadows, midtones, and highlights look, the way color looks, the way the contrast of the image is handled, etc.
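If you just want a quick automatic SDR conversion outside of an editor, ffmpeg can do basic tonemapping too. A rough sketch (hable is only one of several curves, npl is a guess at the content's nominal peak, and the filenames are made up):
Code
ffmpeg -i hdr_input.mkv -vf "zscale=t=linear:npl=1000,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" -c:v libx264 -crf 18 sdr_output.mp4
Like any automatic tonemapper, the result won't match a hand-made LUT, but it's useful for quick previews.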
Quote
What I have recently noticed, though, is that if I import one of the raw captures from the Gamer Bolt into Premiere Pro and then use the razor tool to trim it, no matter what I do in terms of export settings, YouTube will not flag it as HDR, even though in MediaInfo all of the information appears correct and virtually identical to the original file.
Would you mind posting the MediaInfo for such a file? If combining two videos together worked for you, trimming shouldn't work any differently. Also note there are apps you can use to do both of these things without re-encoding, if you don't want to waste extra time or add more compression to the image. The one I like for simple trims and appends is called AviDemux. Doing it that way ensures the bitstream of the original is maintained 100%, without any modifications an additional encode could introduce.
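If you'd rather script it, ffmpeg can do the same kind of lossless trim with stream copy. A minimal sketch (hypothetical timestamps and filenames; note that with -c copy the cuts can only land on keyframes, so the in/out points may shift slightly):
Code
ffmpeg -i hdr_capture.mp4 -ss 00:00:10 -to 00:01:30 -c copy -avoid_negative_ts make_zero trimmed.mp4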
-
Well, you'd need to use side data to set CLL and Master Display values (HDR10).
Once I have some time I'll work on a new Premiere connector, as this requires the 2022 SDK.
True, I meant to specify that it eliminates the need to set color space in those things.
That being said, it's still nice to have those options in case anybody has a specific need to change the color space details without a conversion taking place. It's not always guaranteed that source footage will be correctly set up or interpreted correctly.
-
Okay, that makes sense. If this works, it would effectively eliminate the need to use my Lumetri preset (or, like I do, record the footage incorrectly flagged as Rec.709), and it would also eliminate the need to use setparams or side data, or to set the color space info directly in the encoder. It would save a lot of work for sure!
The only remaining thing would be setting the Max/Min MDL and MaxCLL/FALL metadata. You can get away with not setting it, but content will usually look better if that metadata is set correctly.
Of course, I've already started a project using my old method lol, and Premiere won't let me re-interpret the color space of my footage, so I'll have to wait until my next project to make use of this for my own work, although I could probably do some quick tests.
-
That's a little hard to decipher. Normally I would associate fields with interlacing, but I'm pretty sure UHD doesn't have any interlaced formats, so I'm not sure why those are there unless they refer to something else, and I don't know what "biplanar" means either. If you don't have any other information about these formats, I'd assume the ones that say FRAME and don't say FullRange are probably the best choices.
So what I'd probably do is, in the color space drop down in the connector, add:
ITU-R BT2020 (HDR PQ) - PrPixelFormat_YUV_420_MPEG4_FRAME_PICTURE_BIPLANAR_10u_as16u_2020_HDR
ITU-R BT2020 (HDR HLG) - PrPixelFormat_YUV_420_MPEG4_FRAME_PICTURE_BIPLANAR_10u_as16u_2020_HDR_HLG
-
From what I've read, these formats only work with effects, not with output plugins.
Oh, that's disappointing if true.
-
Okay, Premiere now supports these formats:
Code
PrPixelFormat_RGB_444_12u_PQ_709 = MAKE_PIXEL_FORMAT_FOURCC('@', 'P', 'Q', '7'), // 12 bit integer (in 16 bit words) per component RGB with PQ curve, Rec.709 primaries
PrPixelFormat_RGB_444_12u_PQ_P3 = MAKE_PIXEL_FORMAT_FOURCC('@', 'P', 'Q', 'P'), // 12 bit integer (in 16 bit words) per component RGB with PQ curve, P3 primaries
PrPixelFormat_RGB_444_12u_PQ_2020 = MAKE_PIXEL_FORMAT_FOURCC('@', 'P', 'Q', '2'), // 12 bit integer (in 16 bit words) per component RGB with PQ curve, Rec.2020 primaries
PrPixelFormat_RGB_444_10u_HLG = MAKE_PIXEL_FORMAT_FOURCC('@', 'H', 'L', '1'), // 10 bit integer per component RGB with HLG curve, Rec.2020 primaries
PrPixelFormat_RGB_444_12u_HLG = MAKE_PIXEL_FORMAT_FOURCC('@', 'H', 'L', '2'), // 12 bit integer (in 16 bit words) per component RGB with HLG curve, Rec.2020 primaries
I'm just wondering why they're all RGB. When dealing with BT.2020/HDR, is it not YUV anymore?
For some reason I totally missed this post months ago.
Will you be incorporating these formats into the Premiere connector?
Once encoded, HDR is usually stored as YCbCr, yes, typically in 4:2:0, but it appears Premiere handles its processing in RGB. The main format people would probably use for HDR is
PrPixelFormat_RGB_444_12u_PQ_2020
The first format (PQ with Rec.709 primaries) would probably be pretty unlikely to be used, but all of the others could certainly be common enough in usage. So I'd probably provide options like:
PQ BT2020
PQ DCI-P3
HLG 10bit
HLG 12bit
in addition to the Rec.709/601 options you currently have.
If there's no performance difference between the 10bit and 12bit HLG modes, I'd just go with the higher of the two.
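Just to illustrate the conversion the connector would eventually need: the RGB PQ frames have to end up as the YCbCr 4:2:0 that the encoders expect. A rough ffmpeg equivalent of that step (hypothetical filenames, and not what Voukoder actually does internally):
Code
ffmpeg -i premiere_rgb_pq.mov -vf "zscale=matrix=2020_ncl:range=tv,format=yuv420p10le" -c:v libx265 -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc" hdr10_out.mkv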
-
Okay, it turns out I found a workaround for now, using Premiere's optical flow and frame blending. It was kind of complicated, but it performs very well.
I'm converting 60fps gameplay to 24fps with motion blur. For those who might be interested, this is what I did:
- Have 60fps gameplay
- Have 240fps sample file with same resolution
- Have 48fps sample file with same resolution
- Drag 240fps sample to create a new 240fps sequence.
- Delete the sample and drag the gameplay footage to the 240fps sequence.
- Right click the footage and set Time Interpolation to Optical Flow. This will give you a smooth 240fps version of the source gameplay.
- Next, drag the 48fps sample to create a new 48fps sequence
- Delete the sample and now drag the 240fps sequence onto the 48fps sequence.
- Right click this and select Frame Blending for Time Interpolation. This will blend 5 of the 240fps frames together for each 48fps frame.
- Export as 24fps.
The reason I used 48fps instead of 24fps (getting 10 blended frames) is because typically for motion blur, you want it to be a "180 degree shutter" which is 1/2 of the time. Having a 48fps sequence simulates a full 360 degree shutter, so exporting that to 24fps cuts out half of the frames, simulating the more typical 180 degree shutter. However, if you want the full 360 degree shutter, as it is smoother, you can simply make the second sequence 24fps.
The reason for the sample files is that, unfortunately, it's not possible to manually set sequences to unusual frame rates like 240 or 48. You can also achieve this by re-interpreting the source footage as 240/48 before creating those sequences and then setting it back to its original frame rate afterwards, but of course that's an annoying way to do it.
I don't know for sure how useful this would be for OP's situation, but it may help some people.
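For anyone who wants to try the same idea outside of Premiere, ffmpeg can approximate the whole chain: minterpolate generates the intermediate frames (much slower, and usually not as clean as Optical Flow), tmix averages 5 of the 240fps frames for the 180 degree shutter (use frames=10 for 360), and fps drops the result to 24. A rough sketch with made-up filenames:
Code
ffmpeg -i gameplay_60fps.mp4 -vf "minterpolate=fps=240:mi_mode=mci,tmix=frames=5,fps=24" -c:v libx264 -crf 18 blurred_24fps.mp4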
-
I see you removed the fps filter, but doesn't that make this tmix procedure impossible?
I had the idea to do something similar in a future project, although generating the additional frames with optical flow rather than capturing them natively. However, as it currently stands, tmix will mix frames based on the frame rate selected in the connector, so I'd have to output at the high frame rate and then drop the extra frames later myself. Obviously, that's not the greatest way to do it, as it results in encoding far more frames than necessary for the end product.
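(That second pass would be something simple like the following, with made-up filenames:)
Code
ffmpeg -i tmix_240fps.mp4 -vf fps=24 -c:v libx264 -crf 18 final_24fps.mp4
-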
Did you use the preset I uploaded? The third picture looks like what you would get if you followed the guide but forgot to apply the preset. The encoder assumes the color space is ready for HDR, meaning it should look more like image #2 in the preview monitor in Premiere after you apply my preset.
What you will see in the preview monitor is the entire 10,000-nit range compressed into the Rec.709 preview, with the color space and transfer function converted to the BT.2020 and ST 2084 spaces. This will make the image look dark, washed out, and low in contrast in the preview monitor before you export with Voukoder.
Here's the preset again: https://www.voukoder.org/attachment/744…020-pp2020-zip/
Using the HDR editing mode in Premiere 2021 doesn't work with this process, by the way. Keep the project as Rec.709.
-
Well, of course, now I see this post after I'm already 9 hours into a 40-hour encode with 9.2.
Hopefully it either doesn't affect me or is easy to fix after my encode is done.