Posts by Poo2

    I've never seen the graph plot in Windows move from zero... but if one has an Intel CPU with HD Graphics enabled, then Intel QSV is used to decode video...

    is this all by design? or is my system badly configured?

    my research thus far has found people on the Adobe forums claiming Adobe does not use the hardware decoding functionality on discrete cards, but apparently Adobe does use some features of Intel graphics for decoding, which concurs with my own experience.

    It would be nice if Adobe actually admitted once in a while what they DO support and what they DON'T support, rather than never officially replying to their users.

    From CISC to RISC to ZISC
    By Sheldon Liebman (1998)

    The evolution of computing technology has produced some very interesting devices. However long the list, it’s a good bet that a ZISC chip should be on it. ZISC stands for Zero Instruction Set Computer and it’s a technology that was jointly developed by IBM in Paris and by Guy Paillet, Chairman of Sunnyvale, CA-based Silicon Recognition, Inc.

    Although it may sound like a contradiction to refer to a computer as having zero instructions, that is actually a pretty good description of this technology. The first-generation ZISC chip contains 36 independent cells that can be thought of as neurons or parallel processors. Each of these cells is designed to compare an input vector of up to 64 bytes with a similar vector stored in the cell’s memory.

    If the input vector matches the vector in the cell’s memory, it fires. Otherwise, it doesn’t. As a parallel architecture, the ZISC chip tells all 36 cells to compare their memory to the input vector at the same time. As output, the chip effectively provides the number of the cell that had a match or indicates that no matches occurred.

    Silicon Recognition has developed a technology around the ZISC chips called Parallel Associative Learning Memory, or PALM. PALM Technology combines a control system, typically an FPGA and DRAM memory, with one or more ZISC chips to create a standalone, hardwired recognition system.

    In a traditional serial environment devoted to pattern matching, a computer program basically loads a pattern into memory, then fetches a stored pattern from each location in a large array. After fetching the pattern, it does a comparison, then fetches the data from the next location and continues the process. As the number of patterns you need to check grows, the speed of the process decreases. With a very fast computer, tens or hundreds of patterns can be checked in a real-time environment, but eventually a limit is reached.

    This is because you need to look at what’s in every array location to see if there is a match. With ZISC chips and PALM technology, the system provides the location of the match without having to look at what’s in that location. Instead of looking at the problem as "What’s in array location 12 and does it match?" the problem becomes "location 12 matches." By eliminating the step of loading and comparing the pattern for each location, the speed of the system increases dramatically.
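
    The difference between the two approaches can be sketched in a few lines of Python (illustrative only; the function names are mine, and the dictionary merely emulates the one-step answer a real ZISC chip produces in hardware):

```python
# Illustrative sketch (not real ZISC code): a serial scan versus the
# "location 12 matches" style of answer a ZISC chip returns directly.

def serial_match(stored_patterns, input_vector):
    """Fetch-and-compare every location in turn: O(n) lookups."""
    for location, pattern in enumerate(stored_patterns):
        if pattern == input_vector:
            return location      # match found at this location
    return None                  # no cell fired

def zisc_style_match(stored_patterns, input_vector):
    """All cells compare at once and only the winning location is
    reported; a dict lookup emulates that constant-time behaviour."""
    index = {bytes(p): loc for loc, p in enumerate(stored_patterns)}
    return index.get(bytes(input_vector))

patterns = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(serial_match(patterns, [4, 5, 6]))      # 1
print(zisc_style_match(patterns, [7, 8, 9]))  # 2
```

    The dict is just a stand-in for the broadcast: the real chip presents the input to every cell simultaneously and never inspects the stored contents one by one.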

    If you only need to compare an input pattern to 36 potential matches, it’s tough to see how ZISC computing provides a real advantage over traditional computing. However, the real power of ZISC is in its scalability. A ZISC network can be expanded by adding more ZISC devices without suffering a decrease in recognition speed. According to Silicon Recognition, there is no theoretical limitation on the number of cells that can be in a network. The company regularly discusses networks with 10,000 or more cells.

    One way to think about this is to imagine a large sports arena with seating for 10,000 people. If you make an announcement over the public address system, every person in the stadium hears it at virtually the same time and can process the announcement in a truly parallel fashion. In a serial version, the announcer may move from seat to seat and speak with one person at a time. Let’s say the goal is to determine if "Sheldon Liebman is in the house." With parallel processing, I can immediately stand up and shout "Section A, Row 12, Seat 3." In the serial version, the process is much slower.

    Thus far, we’ve illustrated the process of finding an exact match to an input vector, but ZISC chips can also be used to find fuzzy matches. Instead of asking if there’s an exact match, you can ask for the closest match. Then, cells that are above a certain threshold fire simultaneously and the controller in the chip looks at which one returns the closest value. For example, location 12 may have a 63/64 match and location 14 has 62/64. In this case, the system returns that location 12 is the "best" match. At that point, higher level software can determine if the match is "appropriate" through automatic or manual methods.
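
    As a toy illustration of that behaviour (my own code, not anything from Silicon Recognition; the 64-byte vectors and the 60/64 firing threshold are assumed values), the threshold-and-closest-match step might look like:

```python
# Hypothetical sketch of the fuzzy-match step: every cell whose
# similarity clears the threshold "fires", and the controller keeps
# the closest one.

def closest_match(cells, input_vector, threshold=60):
    best_location, best_score = None, -1
    for location, stored in enumerate(cells):
        # score = number of matching bytes, e.g. 63/64 or 62/64
        score = sum(a == b for a, b in zip(stored, input_vector))
        if score >= threshold and score > best_score:
            best_location, best_score = location, score
    return best_location, best_score

# cell 12 differs from the input in one byte, cell 14 in two bytes
cells = [bytes(64) for _ in range(20)]
target = bytes(range(64))
cells[12] = bytes([target[0] ^ 1]) + target[1:]                  # 63/64
cells[14] = bytes([target[0] ^ 1, target[1] ^ 1]) + target[2:]   # 62/64
print(closest_match(cells, target))   # (12, 63)
```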

    This leads us into the area of training a ZISC-based system to perform pattern recognition. As an example, let’s assume that we are trying to categorize apples on a conveyor belt as Red, Green or Yellow. We might start by picking 100 different Red apples and presenting them to the system. If they match an existing pattern, they are classified as Red. If they don’t, we instruct the system to add this pattern to a new cell. Next we do the same with 100 Green apples and 100 Yellow apples. Now, the first 100 locations define Red, the second 100 define Green and the third 100 define Yellow. Based on which location is returned as a match, we can begin to classify our apples.

    However, it’s reasonable to assume that eventually we’ll get an apple that confuses the system. When this occurs, we can add the pattern for this apple to a new cell and instruct the system as to whether that new cell refers to Red, Yellow or Green. You can also instruct by counterexample. If the system mistakes a Green apple for a Red one, you can "correct" it and add the pattern to a new cell. Eventually, the system will "learn" the difference to a very high degree of accuracy. If you have 1000 sample apples, for example, you may want to use the first 300 to train the system and the next 700 to test it.
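
    The train-by-example loop can be mocked up in the same spirit (a toy sketch; the byte patterns, labels and threshold are invented for illustration):

```python
# Toy sketch of training by example and counterexample.  When the
# system is confused or wrong, the pattern is committed to a new
# cell together with the correct label.

class ToyZisc:
    def __init__(self):
        self.cells = []                   # list of (pattern, label)

    def _score(self, stored, pattern):
        return sum(a == b for a, b in zip(stored, pattern))

    def classify(self, pattern, threshold=60):
        if not self.cells:
            return None
        stored, label = max(self.cells,
                            key=lambda c: self._score(c[0], pattern))
        return label if self._score(stored, pattern) >= threshold else None

    def learn(self, pattern, label):
        """Commit a new or misclassified pattern to a fresh cell."""
        self.cells.append((pattern, label))

engine = ToyZisc()
engine.learn(bytes([200] * 64), "Red")    # train with a Red pattern
print(engine.classify(bytes([200] * 64))) # recognised as Red
print(engine.classify(bytes(64)))         # unknown -> train it next
```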

    The characteristics of the ZISC chip make it useful in two very specific situations. The first is as a recognition engine plugged into a traditional computer. Silicon Recognition actually offers PCI, ISA and VME cards that fit this description. In this environment, the ZISC chips are used to offload the recognition function from a general-purpose computer.

    The second application is where you need to tie recognition to a particular function that isn’t being controlled by a full-size computer. ZISC chips use very little power and can be put into very portable environments. For this type of application, Silicon Recognition offers 84-pin SIMs (Single Inline Modules) that contain either 3 or 6 ZISC chips with up to 216 processors.

    The company provided a real-world example of this type of application in the agriculture industry. A computerized system was developed to spray weeds that grow intermixed with crops: a camera rides on a moving appliance that covers twelve rows of crops at a time and has 12 spray nozzles attached, one for each row. Moving at approximately two miles per hour, the camera captures data and the ZISC chips determine if the spray head is over ground, crop or weed. If it’s over weeds, the spray head is activated. In this type of environment, a full-size computer just can’t be used.

    There are a number of other applications that are particularly well suited to ZISC. Face recognition is one example. If your pattern is recognized, access can be granted to software or to a specific location. If this technology was incorporated into a video camera at your front door, your house could actually unlock automatically and the front door open as you approached.

    Real-time monitoring is another area well suited to ZISC chips. In France, a system is being used to count the number of people that go through a particular area each day. Since the system can continue to learn, it knows the difference between a head and a backpack, for example.

    At Lawrence Livermore Labs, ZISC is being used to inspect the optics of large laser systems. Each time the laser is fired, the optics are checked for cracks and other defects. Depending on what is found, the system decides if it is safe to fire the laser again. In applications like this, secondary processing is used once a "match" is made through ZISC. Perhaps the defect will allow the laser to be fired once more, or twice. That response depends on the location of the match.

    As technology advances, ZISC chips are expected to hold more cells and work even faster for less money. The current generation ZISC36 chip was developed using 1 micron technology at IBM. Work is being done to improve the density of the chips and Silicon Recognition is hopeful that up to 200 cells may be able to exist in a future version. Today’s ZISC chip operates at 20 MHz and can determine a match among 10,000 patterns in less than 3 microseconds. The next generation may operate at up to 100 MHz for a significant speed increase. Current Silicon Recognition products start at approximately $1000 for a board with a single ZISC chip. The company hopes to halve that number by the end of this year.

    From CISC to RISC and now to ZISC.

    Zero has never been such a significant number.


    In those 'shaky' movements, if they are part and parcel of the scene and not something you want to lose, with x265 I would enable tune for grain, and also increase the scenecut threshold to at least 80, with merange at 64 and a subme of at least 7. I believe Voukoder gives you access to these options. But please do note that encoding time and output file size will both increase considerably.
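
    For reference, expressed as an ffmpeg/libx265 command line those settings would look roughly like the sketch below. The file names are placeholders, and in Voukoder itself you would set the equivalent fields in its x265 options rather than run ffmpeg by hand:

```python
# Rough sketch only: the x265 settings mentioned above assembled
# into an ffmpeg/libx265 command.  Paths are placeholders.
import shlex

x265_params = ":".join([
    "scenecut=80",   # raised scene-cut threshold
    "merange=64",    # wider motion-estimation search range
    "subme=7",       # finer subpixel motion refinement
])

cmd = [
    "ffmpeg", "-i", "input.mov",
    "-c:v", "libx265",
    "-tune", "grain",              # preserve film grain
    "-x265-params", x265_params,
    "output.mkv",
]
print(shlex.join(cmd))
```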

    If you want to smooth out the video and don't desire those 'shaky' scenes, then as Vouk suggested, Premiere offers warp stabilisation (google it for instructions). Or you can also consider proDAD's Mercalli plugin (my personal favourite, as it permits you to be a little more selective about how you want to stabilise etc.).


    I think the code could be in the wrong quantum state... or maybe you need to throw your old 386 in the bin and finally make that upgrade journey :S

    Bureaucracy of the super-state without any consideration for reality. Sometimes it makes you wonder whether the fat cats running the show are actually living on the same planet as the rest of us?....

    Apparently this whole video was ray-traced in realtime!

    Always knew since Babylon 5 that one day in my lifetime graphics would become photorealistic in realtime.

    Now I wonder if Adobe will do anything amazing with RTX, because then I could convince myself to budget in a purchase in the near future :/

    When we get piping support, I'll definitely be including nero as an option in my planned workflow gui, still use neroaacenc quite heavily in my scripts even after all these years.

    Ok, I ran the tests again and it appears it is the latest version of Adobe Premiere that has the bug. I had 'Dynamics' configured for the audio track in the Audio Mixer panel, something I always do for signal compression and limiting.

    When I do not have any effects on the audio track in the mixer, everything is fine.

    Please move this to Adobe Premiere discussions instead, as it is not a Voukoder bug. But if you can test whether you can replicate the Premiere bug, it would help me sleep better at night, not wondering if it is my workstation that's doing it.

    Not sure what to make of this one, it is annoying to say the least.

    If you have a look at the attached image of a sequence, you can see the audio is shorter, and the workspace marker at the top is aligned to the end of the video I want to export. Premiere's built-in export functionality fills that void with blank audio, but Voukoder fills it with some very psychedelic noises, kind of like a micro buffer of audio from the stream on fast repeat.

    I tried this both with H.265 and ProRes444, and in both cases there is psychedelia; in both cases the audio was 24-bit 48 kHz PCM.

    I did some tests: uploaded a vid, grabbed it using a YouTube downloader, demuxed it, and ran it through SpectraLayers Pro. I can see what they're doing to achieve a better sound: they're actually tweaking the harmonics for the encode. I need to do a few more tests to be conclusive. Now, this kind of processing will achieve perceptual similarity with very little to no artifacts on any hardware, be it consumer or high-end, but... if you had the original source to listen to, then you do notice the difference.

    For streaming online they've definitely hit the nail on the head... that's the kind of effort all the generic encoders are missing, though most do the typical 'cut high frequencies' etc... but still a far cry from what YouTube appear to be doing.

    Now if we had a batch script that we could run on our raw 24-bit uncompressed audio to prepare it for a quality encoder before encoding... then we could in theory emulate their process...
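
    To make the idea concrete, here is a deliberately crude sketch of such a pre-processing pass: a one-pole low-pass that gently rolls off the top end of a PCM buffer before the encoder sees it. This is pure illustration of "prepare the audio first", not YouTube's actual processing, which remains guesswork:

```python
# Pure illustration -- NOT YouTube's actual processing.  A one-pole
# low-pass gently tames the highest frequencies of a PCM buffer
# before it is handed to a lossy encoder.

def one_pole_lowpass(samples, alpha=0.25):
    """y[n] = y[n-1] + alpha * (x[n] - y[n-1]); smaller alpha = darker."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

steady = [1.0] * 1000         # DC content passes almost untouched
nyquist = [1.0, -1.0] * 500   # the highest frequency is heavily cut
print(one_pole_lowpass(steady)[-1])
print(max(abs(s) for s in one_pole_lowpass(nyquist)[100:]))
```

    In a real chain you would of course work per channel on the decoded 24-bit samples and pick the curve by ear, not with a fixed alpha.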

    the above is all guesswork based on the spectral analysis; for all I know it could be their encoder itself that does the wizardry on the fly...

    either way... yup, you said it, too bad it's not for public usage.

    If you haven't already noticed, there is a new option in Premiere when you create a new project:


    "Preview Cache"

    It's specifically designed for the workstation hypercards apparently.

    This is the blurb: AMD Radeon Pro SSG

    I just wish Adobe would get their rears into gear and optimise their code to start utilising all the cores of modern CPUs efficiently. That has been their biggest failing: an old codebase just dragging its heels.

    Exactly! And this is where it gets annoying, because unless you have a bit-perfect lossless source such as SACD etc., compressed versions sound abysmal on high-end gear. My monitors barely get much use outside of source engineering; indeed they were a worthy investment for master tuning, but playing back anything compressed, bar very extreme bitrates, really shows how terrible lossy audio codecs still are. It is, as they say, 'perceptive', and this would be down to accommodating typical consumer-grade hardware; for example, I often have to put on a pair of consumer-grade Sony headphones to 'hear' what the average person will.

    Unlike the big players, I don't have a team of listeners/viewers to get feedback on edits etc... but I still like to make sure I'm doing all I can to perfect my final outputs even if they may be for a very small closed-door userbase.

    I'm looking forward to FDK Voukoder integration, I can see countless nights of testing coming up ^^

    I recall a recent scientific study claiming not all ears hear the same sounds... I guess it really is as simple as that; maybe I just have far too sensitive eardrums. <X

    That goes without saying, but lossless compressed bridging codecs such as ProRes/Cineform/MXF etc don't really have any impact on speed in the workflow, negligible at worst, which is pretty much why they're used extensively in the industry. Well, when you compare a few terabytes of raw uncompressed YUV 4:4:4 footage to a few hundred gig, it's a no-brainer really.

    But consumer exports and selecting the right AAC encoder for the audio is the nightmare; I can never just put my finger on it and say 'that's the codec' for every source.

    Just did an export a few minutes ago, HE-AAC 64 kbps: QAAC output was terrible, Nero was perfect, but neroaacenc has not had any further development for quite a few years. Maybe an earlier version of the CoreAudio DLLs performed better than the current ones for QAAC.

    Anyhow, not a concern for Voukoder, as long as the options to select the AAC codec are there then it really is down to us to decide which we need to suit whatever perceptive inclination we may hold.