Remastering old anime

So I've been using AI algorithms to remaster an old anime, Hajime no Ippo (a boxing series), and I've just finished it. The remastering consisted of enhancing the resolution, cleaning up artifacts/noise, and frame interpolation. I've made my own subreddit with more info as well as some FAQs. Check it out!

r/InterpolateAndEnhance

For now I'll be doing anime; the next series will be the old Hunter x Hunter. At some point I want to try live-action content, specifically martial arts movies or straight-up action, but for now the interpolation algorithms aren't quite good enough. Well, the high-end ones kind of are, but good luck running those on anything other than an RTX 3090.
 
You should register on the Cinemageddon tracker; they have a lot of rips of stuff that's still LaserDisc-only, which would be good candidates for AI remastering.
 
I just checked; it looks like they're not doing open signups anymore, you need an invite link or something.
 
Perhaps in another lifetime, then. Maybe I'll be a dog or something, I'll have a computer for dogs, and I'll stumble upon it while surfing the world wide doggy web, wwdw.
 

I do cleanups of my own anime, so I'm rather interested in your process. Are you using VapourSynth?


LaserDisc is an interesting format to rip/preserve to begin with, since it's analog. The Domesday Duplicator has come a long way, but it's not for the faint of heart.
 
No, I use AI algorithms; better results, at least from my experimenting.
I also add interpolation with another AI algorithm, which gives much better results than something like SVP.
 
Yeah, VapourSynth isn't SVP. It's similar to AviSynth but newer and not Windows-dependent, and it uses Python as its scripting language. SVP appears to be some sort of real-time frame interpolator with a snake-oil coating.

For instance, below is my VapourSynth script to process Tenchi Muyo! GXP VOBs, which go through d2v index files. An external script uses it to pipe into an actual encoder:

[CODE lang="python" title="gxp.vpy"]import vapoursynth as vs
import havsfunc as haf

core = vs.get_core()

# 'file' (the path to the .d2v index) is injected by the external wrapper script
video = core.d2v.Source(input=file)
# high-quality deinterlacing; doubles the frame rate
video = haf.QTGMC(video, Preset='Slow', TFF=True, opencl=True)
# deblocking, tuned against this source
video = haf.Deblock_QED(video, bOff1=1, bOff2=1)  # quant1=30, quant2=38
# denoise + sharpen, also tuned against this source
video = core.fft3dfilter.FFT3DFilter(video, sigma=1.5, bt=-1, bw=32, bh=32, ow=16, oh=16, sharpen=0.5)
# Waifu2x wants RGB input
video = core.resize.Point(video, format=vs.RGB24, matrix_in_s="170m")
video = core.fmtc.bitdepth(video, bits=32)
# AI upscaling (2x), GPU-accelerated via cuDNN
video = core.caffe.Waifu2x(video, noise=1, scale=2, model=6, cudnn=True, block_h=240, block_w=360)
video = core.fmtc.bitdepth(video, bits=8)
video = core.resize.Bicubic(video, format=vs.YUV420P8, matrix_s="170m")
# debanding for the poor VHS->DVD conversion
video = core.f3kdb.Deband(video, preset='veryhigh/nograin')
video.set_output()[/CODE]

It's a fairly simple script compared to what some people do, and it's "good enough" for me, which is still quite a bit better than anything fast. In the above I use a high-quality deinterlacer called QTGMC, which doubles the framerate; most of the time you would then process back down to either the framerate of the DVD, or, as some people do, all the way back to the original framerate of the animation (which is not the same as the DVD standards). Then comes a simple deblocker that I tuned against the source, then a high-quality denoiser + sharpener, again tuned against the source. I'm not one of those people who try to clean up the "original noise"; I target the awful job early anime DVDs got in their conversion from VHS by whatever studio handled them. After that comes AI upscaling, in this particular example Waifu2x, again tweaked for this anime, and finally a debander that helps with the poorly-converted-to-DVD-from-VHS junk in the source. The GPU-accelerated portions are the bits that mention cudnn or opencl.
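As a side note on the frame-rate part: QTGMC's double-rate output from 29.97i NTSC is 59.94 fps, and getting back to the animation's native ~23.976 fps amounts to keeping 2 of every 5 frames (59.94 × 2/5 = 23.976). A toy pure-Python illustration of that selection pattern; inside VapourSynth you'd use something like SelectEvery or a decimation filter instead, and the offsets here are illustrative, not tuned to any real cadence:

```python
# Sketch: decimating a 59.94 fps double-rate stream back to ~23.976 fps
# by keeping 2 frames out of every 5 (59.94 * 2/5 = 23.976).
# The offsets (0, 2) are illustrative; the right ones depend on the
# source's actual telecine cadence.

def select_every(frames, cycle, offsets):
    """Keep only the frames at the given offsets within each cycle."""
    return [f for i, f in enumerate(frames) if i % cycle in offsets]

double_rate_fps = 59.94
frames = list(range(60))  # one second of double-rate video, as frame indices
kept = select_every(frames, cycle=5, offsets=(0, 2))

print(len(kept))                                   # 24 frames out of 60
print(double_rate_fps * len(kept) / len(frames))   # ~23.976 fps
```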
 
I use Topaz to upscale; I tried Waifu2x but the results weren't as good. The raw quality was roughly on par, but what differentiated the two was the cleanup: Topaz's video upscaler did quite a good job at cleaning up artifacts in the image, and it handled more heavily artifacted footage better too.
After I upscale, I feed it to the interpolation algorithm, which is built with PyTorch. Unfortunately it handled scene transitions badly, so I had to extend the algorithm to detect scene cuts and skip interpolating across them; no real difference in speed, but scene cuts don't look jarring anymore.
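The scene-cut guard is conceptually simple: compare each pair of neighbouring frames, and if the difference jumps past a threshold, treat it as a cut and duplicate a frame instead of interpolating across it. A toy NumPy sketch of the idea; the mean-absolute-difference metric, the threshold value, and the 50/50 blend standing in for the AI interpolator are all my own assumptions for illustration, not the actual PyTorch code:

```python
import numpy as np

def frame_diff(a, b):
    """Mean absolute pixel difference between two frames (0-255 scale)."""
    return float(np.mean(np.abs(a.astype(np.int16) - b.astype(np.int16))))

def midpoint(a, b):
    """Stand-in for the AI interpolator: a plain 50/50 blend."""
    return ((a.astype(np.uint16) + b) // 2).astype(np.uint8)

def interpolate_2x(frames, cut_threshold=15.0):
    """Double the frame rate, but don't blend across scene cuts."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        if frame_diff(a, b) > cut_threshold:
            out.append(a.copy())       # cut detected: duplicate, don't blend
        else:
            out.append(midpoint(a, b))
    out.append(frames[-1])
    return out

# Two near-identical dark frames, then a hard cut to a bright scene
dark = np.zeros((4, 4), dtype=np.uint8)
dark2 = dark + 3
bright = np.full((4, 4), 200, dtype=np.uint8)

doubled = interpolate_2x([dark, dark2, bright])
print(len(doubled))  # 5 frames: 3 originals + 2 inserted
```

The inserted frame between the two dark frames is a blend, while the one at the cut is just a duplicate, which is why the transition stops looking jarring.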

I remember dabbling in VapourSynth/AviSynth several years ago but gave up because the results just weren't good enough. Fast forward several years and now there's some decent stuff out there; sadly some of it can take a ridiculous amount of time.
Have you heard of DAIN app? From my testing it's so far the best interpolation algorithm. If you go on my lbry.tv channel, I have a Soul Eater intro example there; I generally use that as my benchmark since it's quite a complicated animation, and even though there are some errors here and there, DAIN app handled it the best out of anything I tried.
It takes an insane amount of time though. I think just that clip took me 12-24 hours on my 2060 Super, and I had to downscale it to 720p because even with 8GB of VRAM I couldn't go higher lol.
I'm hoping to save up for a 3090; then I'll be able to do some good interpolation. The PyTorch one works well for more detailed animation, but it struggles with the neo-anime style, where things dart around a lot and there aren't many frames in between actions; DAIN handles that much better.
I had a HxH 2011 example at one point where Gon fights Hisoka, and the PyTorch one didn't handle it that well, but DAIN app worked quite well; again though, it took forever to render lol.
 
Yeah, fast is one thing it definitely isn't. My encodes typically took about 12~16 hours per episode. Agreed on Waifu2x, it's super sensitive to the source content, though I found both the implementation of Waifu2x and the model used made a large difference. GXP was a bit of an outlier in that it primarily suffered from not-too-bad blocking along with a bad VHS transfer, but it "cleaned up nice". I suspect this'll be true for many of the older Pioneer/Geneon releases. These were done a while ago on a 1070 Ti + i9-9900KF + 32GB DDR4; I've actually been meaning to get back to some projects, as I recently upgraded to a 3090 and am very curious to see what it can do!

I haven't played with DAIN yet, nope. I've experimented with a few, with mixed results; I know I used SRMD for one or two. Some of those old Pioneer releases didn't have a lot of detail to begin with and were very "clean", so there are a handful I didn't even bother upscaling after experimenting; they looked great with just a high-quality deinterlacer and very basic cleanup à la my GXP description.

Some of the older stuff is a real shame in that releases have such varying quality. I use my own collection as a source, and American releases tend to shove more episodes onto discs with lower quality. Funimation tends to re-cut their releases too, so you can't (easily) use their subs/dubs with a Japanese source. Their Trigun release is pretty awful in that regard: the episodes are maybe 500MB a piece in VOB form, while the Japanese release is a more standard 1.2GB per episode. Since they recut (and I mean actually recut, not slightly-off timing), it's pretty frustrating putting together a "good" version. That said, the worst release I've come across is Psychic Academy via Tokyopop; the quality of that one is so bad that any amount of cleanup is just playing arts and crafts with a turd.

One of these days I intend to get an analog capture setup going for my old VHS collection (I didn't "collect" them so much as I'm dating myself... they're from before DVDs were a thing), and someday I'd like to do an LD setup as well.
 
as I recently upgraded to a 3090 and am very curious to see what it can do!
Ohhh, so jelly!

Yeah, I noticed it too. I think the American Hajime no Ippo DVD release was pretty bad, but I found a source where someone took the Japanese video and muxed the dub into it; I used that as my source.

DAIN app is a bit tricky to use, as with all things I suppose. One of the major issues was the scene-cut handling; it seemed worse than my PyTorch implementation, so I might have to redo it myself. At any rate, you'll have to spend a while figuring out which scene-detection value to use. I think 15 worked fine, but I didn't do much experimenting as it just isn't feasible for me long-term right now.
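On picking that scene-detection value: since it's just a cutoff on some per-frame difference metric, one way I could imagine sanity-checking a candidate like 15 is to compute the metric over a whole clip and see where the outliers (the real cuts) sit relative to the bulk of ordinary motion. A hypothetical sketch using a robust median-based cutoff; both the statistic and the sample numbers are my own illustration, not DAIN's actual logic:

```python
from statistics import median

def suggest_cut_threshold(diffs, k=10.0):
    """Suggest a cut threshold as median + k * MAD (median absolute
    deviation) of the per-frame difference metric; real scene cuts
    should sit far above the bulk of ordinary motion."""
    m = median(diffs)
    mad = median(abs(d - m) for d in diffs)
    return m + k * mad

# Hypothetical per-frame differences: mostly small motion, two hard cuts
diffs = [2.1, 3.0, 2.4, 1.8, 2.9, 48.0, 2.2, 2.6, 3.1, 52.5, 2.0, 2.3]

threshold = suggest_cut_threshold(diffs)
cuts = [i for i, d in enumerate(diffs) if d > threshold]
print(round(threshold, 1))  # 7.0
print(cuts)                 # [5, 9], the two cut frames
```

A median-based cutoff is deliberately insensitive to the cuts themselves, whereas a mean + standard deviation rule gets dragged upward by exactly the outliers you're trying to catch.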

I was thinking of building a second computer if I can ever get a 3090; that way I can just leave this stuff running for a long time without it disrupting me.

Also, out of curiosity, have you shown your work to anyone else? What did they say? I tend to get mixed reactions to this remastering project: some people like it, and some people hate that it's different from the original because "the creator did not intend it".
 