Replies: 3 comments 2 replies
- too dumb to understand
  - not "too dumb" but certainly this is a different approach. So what is the main problem?
- too machine-consuming, methinks
-
Dear all,
the GitHub repo https://github.com/abcnorio/aas-mult offers R scripts, callable from bash via Rscript, that drive ntsc-rs-cli over an arbitrary number of images (or videos). To avoid uniformity of the underlying profile (preset), i.e. "doing it again and again in the same identical manner", a statistical approach is used to introduce slight changes while maintaining the general character of a profile. This is completely controlled by the user through a simple spreadsheet. In effect, it just puts weights on specific parameter values using statistical methods.
The user defines an anchor point (= the preset value) for each parameter PLUS certain limits (their exact definition depends on the parameter type: categorical, linear (integer, float), non-linear, etc.).
For each of those parameter types a probability distribution (i.e. densities acting as weights) is chosen, and the user supplies whatever it requires (mostly weights, a standard deviation, lower/upper limits; no math required). These are simple numbers based on previous work with the GUI and visual inspection of the outcome. NO magic involved. One just has to stick to the sheet, which notes what each parameter expects and how it is defined.
The script then runs over all files to be processed, and each file (image, video) gets its own json profile based on a random draw from the possibility space, using the pre-defined probability distribution for each parameter.
This means that, internally, the script opens up the possibility space of all ntsc-rs parameters, puts a probability distribution (pre-defined by the user) over it, and draws random samples from it. The GitHub repo shows a simple example of how this works; it is actually pretty straightforward.
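The core idea can be sketched in a few lines of R. This is illustrative only, not the repo's actual code: the parameter names, anchors, standard deviations, and limits below are invented for the example, and the real sheet distinguishes more parameter types.

```r
## Illustrative sketch only -- parameter names, anchors, and limits are
## invented, not taken from the aas-mult sheet.
set.seed(1)

## Linear (float) parameter: draw from a normal centred on the anchor
## (= preset value), then clamp to the user's lower/upper limits.
draw_numeric <- function(anchor, sd, lower, upper) {
  min(max(rnorm(1, mean = anchor, sd = sd), lower), upper)
}

## Categorical parameter: weighted draw over the allowed levels.
draw_category <- function(levels, weights) {
  sample(levels, size = 1, prob = weights)
}

## One draw per file -> one slightly varied profile per file.
draw_numeric(anchor = 0.15, sd = 0.03, lower = 0, upper = 0.5)
draw_category(c("ntsc", "pal"), weights = c(0.8, 0.2))
```

Each file's json profile is then just one such draw per parameter, so every output stays close to the preset but is never bit-identical.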
Each file can have multiple (or an arbitrary number of) profiles and therefore multiple outcomes.
Another script crawls the presets section of the ntsc-rs GitHub repo and downloads all zip/json presets (the links in the repo are from some days ago).
Another script takes a json profile and embeds it into a spreadsheet, so a user only has to work on the spreadsheet and nothing else; the rest is a single call. The workflow: create a profile with the GUI, save it as json, use the script to put the json into the sheet, then open the sheet and configure the limits/weights/etc. per parameter to allow for statistical variation. The base empty sheet is NOT USABLE by default. IT REALLY REQUIRES THE USER TO CONFIGURE THIS PART, unless one likes surprises. The main task while tweaking the sheet is to narrow down the randomness so it sticks to the profile.
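The json-to-sheet step amounts to flattening the preset into a table with one row per parameter and empty columns for the statistical settings. A hypothetical sketch (not the repo's actual script; the parameter names and column layout are assumptions, and `jsonlite` is just one common R json parser):

```r
## Hypothetical sketch: flatten a json preset into a sheet-like table.
## Parameter names below are invented; the real aas-mult sheet layout differs.
library(jsonlite)

preset <- fromJSON('{"snow_intensity": 0.15, "head_switching": 1, "mode": "ntsc"}')

sheet <- data.frame(
  parameter = names(preset),
  anchor    = vapply(preset, as.character, character(1)),  # preset value = anchor point
  sd        = NA,  # to be filled in by the user
  lower     = NA,  # to be filled in by the user
  upper     = NA,  # to be filled in by the user
  stringsAsFactors = FALSE
)
write.csv(sheet, "profile-sheet.csv", row.names = FALSE)
```

The user then edits only the sd/lower/upper (or weight) columns, which is exactly the "configure this part" step the sheet requires.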
The amount of variation introduced (i.e. the limits) is chosen by the user. It is RECOMMENDED to visually inspect, for each parameter of a profile, the anchor point (= preset value) plus the lower and upper limits to understand their impact on the outcome. Whatever a parameter offers, the user has to find out by visual inspection what changes tweaking it introduces. Be aware that some parameters depend on each other, i.e. one effect may come out more intensely if another parameter is more intense as well. Such relationships are NOT covered by the script, and it is not really possible to foresee what the user wants. The user is responsible for the profile and its accuracy.
Before applying a whole profile to many videos, it is recommended to apply it to several stills from a video, with several variations per image, to confirm everything works fine.
If one wants to work on a lot of images with the EXACT SAME PROFILE, i.e. no statistical variation, one can write a short bash script to do that. The GitHub repo does not support this because it is too easy to achieve with a simple bash loop using find + ntsc-rs-cli + one profile.
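Such a loop could look roughly like this. The paths are placeholders, and the actual ntsc-rs-cli invocation is deliberately left as a stub: check `ntsc-rs-cli --help` for the real input/output/profile options of the installed version.

```shell
#!/usr/bin/env bash
# Batch-apply ONE fixed json profile (no statistical variation) to every
# PNG below ./src. Paths are placeholders; adapt to your setup.
mkdir -p ./out
find ./src -type f -name '*.png' -print0 |
  while IFS= read -r -d '' f; do
    # Replace the line below with the real ntsc-rs-cli call
    # (see `ntsc-rs-cli --help` for its input/output/profile options).
    ntsc-rs-cli "$f"   # placeholder: one fixed profile.json, one input file
  done
```

Using `-print0` with `read -d ''` keeps filenames with spaces intact, which matters for large mixed collections.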
One can use the random approach here, e.g., to prepare a large sample of images to train an AI/ML model for upscaling while simultaneously removing analogue artifacts, or to remove artifacts without upscaling (choose factor 1 for upscale), using neosr or any other engine environment.
The scripts were developed under Linux but should work under WSL on Windows, or even native Windows (this would require slightly adjusting paths, checking which bash calls are made, and providing Windows-specific shell alternatives for them). Due to a lack of daily Windows usage this was NOT tested; please no complaints if it does not work.
If one is not happy with the chosen probability distributions, one can tweak the script and introduce whatever distribution one likes. Anyone who knows R can do that; the code is simple. The probability distributions are just models, and models are wrong by definition. So whoever wants a different model: just add it, and enjoy.
In case of doubt, an example manual R run (based on the files in the GitHub repo) can be used for comparison with the bash call, which internally calls the same R script.
The GitHub repo should contain all information required to run the scripts and, hopefully, understand them. Besides the statistical part, it is more or less a wrapper around ntsc-rs-cli.
The whole environment was used to work on several tens of thousands of images without any break or stop. Thanks to @valadaptive for the amazing work on ntsc-rs and especially for the CLI version that allows scripting. VERY APPRECIATED!!!
This work is based on a private project, so please don't expect a library-grade environment that rules out every user error. It was developed and used with plain R in RStudio (an IDE for R), not bash. E.g., if one selects images and a source folder does not contain any, a cryptic R error will pop up with no direct relationship to its cause (e.g. a missing file). There is no way around that, and there is no time to write the wrapper so it rules out every user error (too much effort, too little time). The same is true for the sheet: the script expects probability-distribution values matching the data type of each parameter; if the type is changed, the distribution values must be changed as well. IF NOT, something may not work. In case of doubt, please re-check against the worked example.