Early 20th century videos reprocessed into HFR

Discussion about 120fps HFR as well as future Ultra HFR (240fps, 480fps and 1000fps) playing back in real time on high refresh rate displays. See Ultra HFR HOWTO for bleeding edge experimentation.
User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Early 20th century videos reprocessed into HFR

Post by Chief Blur Buster » 01 Sep 2020, 01:07

Although this is only 60fps, this is literally HFR reprocessing done on videos more than 100 years old -- most of which were originally only about 18 frames per second.

phpBB [video]


phpBB [video]


These are generated from footage more than 100 years old --

Artificial intelligence was used to interpolate and colorize very old black and white footage from the years 1902 and 1906. They're not glitch-free, but they represent incredible progress.

In the next five to ten years, AI will likely be able to use its own "imagination" to create a "Holodeck" environment out of old footage, for exploring with a VR headset. There is a Wright Brothers equivalent here (photogrammetried video). Although some guesswork is involved in adding detail to blurry portions, I would surmise that AI could fill-in / autocomplete missing details (to retina resolution levels) based on a large library of world knowledge, as well as having learned from other historical videos, etc.
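As a rough illustration of the timing side of this kind of frame-rate conversion (the AI detail synthesis is the hard part and is not shown here), here is a minimal sketch of how each 60fps output frame maps onto a pair of ~18fps source frames. The function name and the linear-blend framing are my own illustration, not any specific interpolator's API:

```python
# Sketch: mapping 60fps output timestamps onto ~18fps source frames.
# Real AI interpolators estimate per-pixel motion between the two
# neighbouring source frames instead of naive blending, but the
# timestamp math is the same. Hypothetical example.

def interp_plan(src_fps: float, dst_fps: float, n_out: int):
    """For each output frame, return (prev_src_idx, next_src_idx, weight),
    where weight says how far the output time sits between the two source
    frames (0.0 = exactly on prev, 1.0 = exactly on next)."""
    plan = []
    for i in range(n_out):
        t = i / dst_fps        # output timestamp in seconds
        pos = t * src_fps      # same instant, in source-frame units
        prev = int(pos)
        weight = pos - prev
        plan.append((prev, prev + 1, weight))
    return plan

# First few output frames for an 18fps -> 60fps conversion:
for prev, nxt, w in interp_plan(18, 60, 5):
    print(prev, nxt, round(w, 2))
```

Motion-compensated interpolators look smooth rather than ghosted precisely because they warp along estimated motion vectors at that `weight` position instead of cross-fading the two frames.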
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter

Image
Forum Rules wrote:  1. Rule #1: Be Nice. This is published forum rule #1. Even To Newbies & People You Disagree With!
  2. Please report rule violations If you see a post that violates forum rules, then report the post.
  3. ALWAYS respect indie testers here. See how indies are bootstrapping Blur Busters research!

1000WATT
Posts: 391
Joined: 22 Jul 2018, 05:44

Re: Early 20th century videos reprocessed into HFR

Post by 1000WATT » 01 Sep 2020, 11:14

Chief Blur Buster wrote:
01 Sep 2020, 01:07
Although some guesswork is involved in adding detail to blurry portions, I would surmise that AI could fill-in / autocomplete missing details (to retina resolution levels) based on a large library of world knowledge, as well as having learned from other historical videos, etc.
How can a sharpening filter be called an upscale?
It fades out a lot of detail to produce the upscale effect. What is the logic in this?
123.jpg
I often do not clearly state my thoughts. google translate is far from perfect. And in addition to the translator, I myself am mistaken. Do not take me seriously.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Early 20th century videos reprocessed into HFR

Post by Chief Blur Buster » 01 Sep 2020, 15:04

It certainly trades certain details for other details, to produce an improved ("you're actually there") immersion effect. We can go yuck on mediocre filters --

But this wasn't the point of the post.

Instead of sharpening filters, AI would be like a human graphics artist that repaints full-detail imagery (redrawing the whole scenery) using the original frames as hints. Even photogrammetry, superresolutioning, and sharpening would only serve to improve the hinting for the AI artist that repaints the whole frames at full 3D detail. Then these artifacts wouldn't show.

The point of this post is that there are many ways to play back the same scenery (without destroying the original material), with various compromise tradeoffs. Let's surmise that all these filter artifacts eventually disappear with future AI "do-over" algorithms. Smarter AI (of ten years from now or so) would, for example, know what the hell a brick texture looks like, and then draw real bricks in place of blurry black and white bricks, and so on. Smarter AI would be able to detect photoshopping artifacts or filter artifacts and keep redoing the image until it passes what a human would judge realistic, etc. Rinse and repeat for all frames (and add new frames as necessary).

1000WATT
Posts: 391
Joined: 22 Jul 2018, 05:44

Re: Early 20th century videos reprocessed into HFR

Post by 1000WATT » 01 Sep 2020, 16:29

Chief Blur Buster wrote:
01 Sep 2020, 15:04
Smarter AI (of ten years from now or so) would, for example, know what the hell a brick texture looks like, and then draw real bricks in place of blurry black and white bricks, and so on.
Perhaps this is off topic again. If I distract or mislead readers, please delete my posts.

I see the benefit in such developments for the easy transfer of video material to 3D models, use in games and virtual reality.

But not an improvement in the video itself. Yes, the AI will remove some of the film defects, and that will be a plus. But if we talk about bricks, even a person will not be able to make out exactly what the marks on a brick wall are: drips of rain, dirt, cobwebs, etc. If you start correcting these artifacts, the soul of this video will die.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Early 20th century videos reprocessed into HFR

Post by Chief Blur Buster » 02 Sep 2020, 00:48

1000WATT wrote:
01 Sep 2020, 16:29
I see the benefit in such developments for the easy transfer of video material to 3D models, use in games and virtual reality.

But not an improvement in the video itself.
I agree that the original video should never be replaced with these revisualizations.

Remasters of the video / 3D models / games / VR are quite valid, as long as the original video is kept unmodified/preserved. For example, using the video as virtual-reality education when a user doesn't have a headset.

For the sake of argument, it's basically an independent studio refilming of the original scene (aka brand new video) within the imagination of a future supersmart AI, rather than through artifacting filters. Not a classical "video remaster" but an IMAXification / virtually-refilmed version, for the purpose of being able to better immerse oneself in the classical world without a VR headset. Like a remake, but a virtually-generated remake. Perhaps that kind of think-outside-the-box idea.

I'm still a fan of Hollywood Filmmaker Mode, with the original 24fps, too.

thatoneguy
Posts: 181
Joined: 06 Aug 2015, 17:16

Re: Early 20th century videos reprocessed into HFR

Post by thatoneguy » 03 Sep 2020, 00:31

First video looks fake as fuck. It looks like video game graphics.

Second video doesn't look as fake, but I can't stand how it looks. The various film defects (especially the inconstant luminance shifts) look much more annoying and uncomfortable to the eye in color and at 60fps.
I don't get the point of colorizing film. Remember that Monochrome has its advantages over Color.
I don't get why you would upscale film either, considering most film reels can be scanned to at least Full HD. If it is an 8mm reel then at least 480p could be doable, which imo is acceptable to look at even on a 40+ inch display nowadays... but maybe I'm just not as picky.

Overall I just don't get the point of AI when it comes to imagery tbh. Even with old 240p video game graphics I would much rather use a CRT shader than AI upscaling. And by the time AI becomes advanced enough to upscale 4K to super duper 32K for Virtual Reality or whatever, we'll likely have the hardware to do it natively at that point.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Early 20th century videos reprocessed into HFR

Post by Chief Blur Buster » 18 Sep 2020, 12:39

It won't always look fake.

It's like how 1980s TRON looks fake as bleep, but today's CGI looks photorealistic.

As AI gets better, it becomes like a human brain imagination, painting photorealistic fakery that doesn't look fake to human eyes. And even recognizing when an image looks fake, and redoing the image until it stops looking fake.

It certainly won't be using the same algorithms, and we should always keep originals -- it should not replace a master. But we should eventually be able to ask, "Computer, please load this old film into a photorealistic Holodeck environment." Basically, spawn off a best-effort remixed photorealistic reality (that looks perfect) from whatever material we wish. It doesn't replace the material.

Yes, I know it is a double edged sword for humanity.

AI that can do that will need to be hundreds or thousands of times more advanced, but work is already in progress on that type of stuff. It would also decouple the camera, allowing you a choice of movement, like walking off the train, or sitting down/standing up inside the train, or looking at the train controls, etc. (which the computer would reproduce for you, in a non-fake-looking way, on a best-effort basis based on world-history knowledge).

Real human artists would then use the AI tools to further enhance games, like adding graffiti, a post-apocalyptic feel, or fiction elements to existing game art AI-generated from real life, etc. Even though game artists will eventually stop doing photogrammetry, because such smart AI will be able to recreate worlds from films/etc., they will move on to different enhancement tasks (using the AI as an assistant) to expand imagination and whatever.

But the same thing can be a great history learning tool. Imagine asking a computer, "Please create a reproduction of New York's World Fair 1939" so that you can explore the world in VR. AI would analyze all photos & footage, and generate a world based on that, in a complete-repaint way (rather than film-reprocess way).

Basically, instead of planar-to-planar (like these YouTube links), it'd be planar-to-jump-into-a-Holodeck (like a VR headset) or FPS movement around a photorealistic environment, instead of simply a "rewatch a remastered experience".

There are major flaws in the simplified AI algorithms seen in these videos. But if you maximize the window, you will see that the humans walk much more realistically than in black and white films (which are much jerkier and play at the wrong framerate, e.g. 18fps sped to 24fps). So even though there are ugly artifacts, other parts of the immersion are improved when viewing the video in fullscreen mode: try your hardest to ignore the flaws and focus on things like hats and walking motion, as well as the horses, etc. It's correctly real-life speed now (and everybody crisscrosses the street, because there were no fast vehicles back in those olden days).
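A small worked example of the "18-sped-to-24" speed error mentioned above; the helper name is just for illustration:

```python
# Sketch of the "18-sped-to-24" problem: silent-era footage shot at
# ~18fps but transferred/played at 24fps runs about 33% faster than
# real life. Interpolating up from the ORIGINAL capture rate (18fps
# -> 60fps) restores real-life speed while adding smoothness.

def playback_speedup(shot_fps: float, played_fps: float) -> float:
    """How many times faster than real life the footage appears."""
    return played_fps / shot_fps

print(playback_speedup(18, 24))   # ~1.33x too fast when naively played
print(playback_speedup(18, 18))   # 1.0 = correct real-life speed
```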

"Convert film to a Holodeck" is the "lens" (perspective) I am viewing this through.

Instead of applying AI algorithms to the image, the future AI would instead be an instant realtime game artist, a painter, recreating the whole scenery from scratch. This allows retina resolution + retina framerate + retina photorealism.

The stuff I post here is just very Wright Brothers earlybird glimpses...

thatoneguy
Posts: 181
Joined: 06 Aug 2015, 17:16

Re: Early 20th century videos reprocessed into HFR

Post by thatoneguy » 20 Sep 2020, 04:54

Chief Blur Buster wrote:
18 Sep 2020, 12:39
AI that can do that will need to be hundreds or thousands of times more advanced, but work is already in progress on that type of stuff. It would also decouple the camera, allowing you a choice of movement, like walking off the train, or sitting down/standing up inside the train, or looking at the train controls, etc. (which the computer would reproduce for you, in a non-fake-looking way, on a best-effort basis based on world-history knowledge).

Real human artists would then use the AI tools to further enhance games, like adding graffiti, a post-apocalyptic feel, or fiction elements to existing game art AI-generated from real life, etc. Even though game artists will eventually stop doing photogrammetry, because such smart AI will be able to recreate worlds from films/etc., they will move on to different enhancement tasks (using the AI as an assistant) to expand imagination and whatever.

But the same thing can be a great history learning tool. Imagine asking a computer, "Please create a reproduction of New York's World Fair 1939" so that you can explore the world in VR. AI would analyze all photos & footage, and generate a world based on that, in a complete-repaint way (rather than film-reprocess way).
Therein lies the philosophical dilemma. We definitely don't have enough historical knowledge, and I don't think the scientists/researchers who are into these kinds of endeavors (regarding the past) will be satisfied until they invent time travel itself and record the entirety of the universe since time immemorial, both temporally and spatially (crazy, I know, but there are people out there with dreams that big).

For some people the AI-created world will always be an approximation and will never feel like the real thing.
Even if we were to achieve a somewhat feasible interpretation of it in VR, similar to a 4D cinema experience today, interaction with the humans in the footage would be extremely limited.
I can see it having a niche popularity of course, but I don't see it as something that will be disruptive.

As for video games, I don't know if you've noticed, but human artists have largely been replaced in modern games in favor of mimicking realism/film. Technological advancement has ironically come so far that video games are losing their uniqueness and what made them stand out in the first place.
After all, who needs concept art or novel gameplay ideas or creativity when you can simulate the real world?
It's coming to a point where video games as we know them are slowly dying. In the far future, when VR is perfected (think brain-level VR, aka Artificial Reality), VR will be used as a form of lucid dreaming where users can do anything they want and the only limit is their minds.

So essentially it will be pure escapism, an artificial reality people can run away to.
No need to get the latest Call of Duty anymore; you just put on your VR goggles and you're immediately where you want to be, playing with your friends, shooting them and getting shot as if it were real life. No need for video game designers anymore since the fun is user-created, and forget about going outside and having a good old-fashioned paintball match. All for $9.99 a month.

Now this might not even happen before I die but the thought of it saddens me. It definitely seems like that's where video games are heading towards.
Video Games have already been going downhill in the past decade but the death of the medium as an artform seems inevitable.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Early 20th century videos reprocessed into HFR

Post by Chief Blur Buster » 20 Sep 2020, 14:27

thatoneguy wrote:
20 Sep 2020, 04:54
Therein lies the philosophical dilemma.
thatoneguy wrote:
20 Sep 2020, 04:54
Now this might not even happen before I die but the thought of it saddens me. It definitely seems like that's where video games are heading towards. Video Games have already been going downhill in the past decade but the death of the medium as an artform seems inevitable.
<PandoraBox state="open">
Sure, various concerns do exist. We can continue that in the Lounge area though. There are a lot of bigger issues to worry about, such as general historical preservation, since lots of old film reels are degrading in archival vaults, and data rot is leaving lots of data no longer accessible due to obsolete equipment and formats (witness how hard it was to read the old Apollo mission videotapes). And of course, the deepfake concerns and AI-based newsfaking, etc. But these are multiple separate Pandora's boxes being spawned, which I'm leaving out of this thread -- they can make for good vigorous discussion in the Lounge / Offtopic area instead of the Laboratory area.
</PandoraBox>

Without further ado, I'll focus solely on the technology and the angles of the Blur Busters flavour:
thatoneguy wrote:
20 Sep 2020, 04:54
For some people the AI Created world will always be an approximation and will never feel like the real thing.
Perhaps a Holodeck won't ever be five-sigma perfect, but an eventual objective is to get reasonably close, maybe two-sigma or better.

It may take three or four decades, but the objective is to pass a Holodeck Turing Test, where you can A/B test clear ski goggles versus a VR headset, and not be able to tell reality apart from virtual, at least for an economically significant percentage of the human population (say, >90%). That requires retina resolution + retina refresh rate + really good AI.
thatoneguy wrote:
20 Sep 2020, 04:54
Even if we were to achieve a somewhat feasible interpretation of it in VR that is similar to a 4D cinema experience
I'm not even talking about 4D experiences. I'm talking about being immersed in virtual reality without knowing you're in virtual reality. Many of us now know the various vision/display limitations that prevent this from being achievable today.

Now, imagine AI generating that quality.

I think that, technologically, it potentially will happen this century -- the question is whether it's year 2040 or 2050 or 2060 or such. It requires a kind of convergence between retina resolution + retina refresh rate, and the graphics horsepower to keep up (probably with the assistance of frame rate amplification technology). It might need a realtime renderfarm for some batch processing (like preparing unseen worlds beyond view, such as around corners, and/or tough tasks such as ultra-fine-granularity global illumination that can be batch-tasked to different GPUs in very massive parallelization), and some possible divergences away from traditional polygon architectures to improve efficient ultramassive parallelization.
____

Although we likely still have to wear equipment (glasses/headset), there is a theoretical engineering path to an A/B blind test between clear goggles and VR goggles, where the person isn't told what they're wearing, and is then asked whether they're in reality or virtual reality when presented with a perfect 6dof vertigo-sync situation (which is already being done today with the Oculus Rift -- like standing in a virtual room and leaning to look underneath a table surface, taking a step forward and back, tilting the head). Obviously, the touch problem will exist (trying to touch virtual reality objects that don't exist in real life), but perfect-feeling vertigo sync is now mostly a solved problem, at least for a roomwalk (without touching objects).

Walking around a room in Oculus VR (like the Oculus trainer app, "First Contact") is MUCH less dizzying than watching a RealD 3D film or sitting in a 4D amusement ride, because vertigo sync is maintained in that Oculus VR app: you tilt your head 2cm, VR tilts 2cm. You look under a desk, it feels realistic. You crawl (physically on the floor) under a VR desk, it feels realistic (as long as you're not touching the nonexistent VR desk). Etc. Most of the better VR games default to well-synced movement; Half-Life: Alyx is very well vertigo-synced (although you have to teleport to move distances further than the size of your room), since traditional movement unsynced with real life can induce nausea/dizziness.
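The 1:1 vertigo-sync idea above amounts to applying the tracked head motion to the virtual camera unscaled and unsmoothed. A minimal sketch with hypothetical names (not a real VR SDK API):

```python
# Sketch of 1:1 vertigo sync: the virtual camera moves exactly as much
# as the tracked head does. Any scale other than 1.0 (or any added
# smoothing/latency) breaks the sync and can induce nausea.
# Hypothetical minimal example, not any real VR SDK.

class Camera:
    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x, self.y, self.z = x, y, z

    def apply_head_delta(self, dx, dy, dz, scale=1.0):
        # scale MUST stay 1.0 for true vertigo sync;
        # the parameter exists here only to show what would break it.
        self.x += dx * scale
        self.y += dy * scale
        self.z += dz * scale

cam = Camera()
cam.apply_head_delta(0.0, -0.02, 0.0)   # head tilts down 2cm...
print(cam.y)                            # ...camera moves exactly -0.02m
```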

For me specifically, my current high-end VR headset already feels 100x better than the best 4D amusement ride, at least in a sufficiently big room that allows me to walk around (1:1 VR-to-real-life vertigo sync) instead of using teleporting. So perfect vertigo immersion was achieved, but the lowness of the Oculus screen resolution + flicker/stroboscopics (90Hz strobed screen) + lens-based artifacts (aberration at edges) betray that I'm in virtual reality. But if you can (eventually) pretty much retina-out all of that, AND bring the graphics rendering up, then the Matrix effect / Holodeck effect can become real -- not knowing if you're in VR or reality.

Ignore the rollercoaster demos and focus on the immersion, like being able to walk around on an actual public-transit vehicle and leaning physically to look out of the window (without touching the window), and not knowing whether it's real life or virtual reality. You can already roam in VR today by walking around a room. The Oculus App Store uses a comfort rating system with three levels: "Comfortable" (good vertigo sync for most stuff; even grandma will go wow), "Moderate" (occasional locomotion moments that have no vertigo sync to real life), and "Extreme" (VR rollercoasters and nausea-inducing stuff go here). And many of these headsets will automatically pop up a grid (or fade the real room into view, from the headset's exterior cameras) if you almost walk out of bounds of your assigned VR room space, preventing you from accidentally crashing into walls or a coffee table. It might temporarily break the immersion effect, but at least when staying within the room bounds, you're generally operating in perfect or near-perfect vertigo sync during roomscale operation.

If you haven't used a modern 6-degrees-of-freedom VR headset in a roomscale walkabout mode -- the most "Star Trek Holodeck" you can get today -- even just an Oculus Rift, or better yet an HTC Vive Pro (sharper than the Rift) on an RTX 2080+ rendering the best-resolution VR-immersion demos with 1:1 vertigo sync (physical room leaning/jumping/walking around = virtual room leaning/jumping/walking around), you probably haven't yet properly imagined what I'm currently imagining. Just walking up to a 4D ride or a demo VR headset or Google Cardboard and suddenly seeing a rollercoaster that gives you nausea isn't the right approach for a person trying to understand this forum thread.
