Blur Buster's G-SYNC 101 Series Discussion

Talk about NVIDIA G-SYNC, a variable refresh rate (VRR) technology. G-SYNC eliminates stutter and tearing, and reduces input lag. List of G-SYNC Monitors.
User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by Chief Blur Buster » 22 Mar 2023, 19:11

Unlike the usual freebie GPT-3.5 semi-spiel, the paid GPT-4 service is much better at answering niche questions; it's apparently aware of Blur Busters:

You can access GPT-4 via a $20/month subscription to chat.openai.com

It is definitely not perfect, and has parrot-like attributes, but the paid GPT-4 training set includes public Blur Busters content, and it studied it well. I'd rank GPT-4's Blur Busters intelligence as better than ~98% of Internet-wide forum members.

I personally use GPT-4 to cross-check some things nowadays, from how to rescue a marinade missing a cooking ingredient, to rewording messily-worded technojargon, to summarizing a copy-and-pasted chat (meeting-minuting a 15,000-character chat). It's the best meeting-minute generator tool I've found.

Here's a recent conversation I had:
Chat with GPT-4 wrote:Mark Rejhon:
Please describe what "Blur Busters Law" means?


GPT-4:
"Blur Busters Law" refers to a concept developed by Blur Busters, a company that specializes in display technology, specifically focusing on reducing motion blur and improving overall display performance. The law essentially states that the perceived motion blur of a display is proportional to the frame rate and inversely proportional to the refresh rate. In other words, the higher the refresh rate and the frame rate, the lower the perceived motion blur.

Blur Busters Law is often used to guide display technology improvements and to help users understand the relationship between frame rates, refresh rates, and motion blur. The concept aims to optimize gaming and multimedia experiences by minimizing motion blur and providing smooth, crisp visuals.
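
For concreteness, here is a minimal numeric sketch of that relationship (my own illustration, not from the quoted chat), assuming the commonly cited sample-and-hold rule of thumb that 1 ms of persistence produces roughly 1 pixel of motion blur per 1000 pixels/second of eye-tracked motion; the function name and numbers are illustrative only:

Code: Select all

# Minimal sketch of sample-and-hold motion blur (illustrative assumptions):
# blur_px ~= persistence_s * tracking_speed_px_per_s, where persistence is
# 1/refresh_rate when frame rate matches refresh rate and GtG is instantaneous.

def perceived_blur_px(refresh_hz: float, tracking_speed_px_per_s: float) -> float:
    """Approximate eye-tracked motion blur width, in pixels."""
    persistence_s = 1.0 / refresh_hz   # full-persistence sample-and-hold
    return persistence_s * tracking_speed_px_per_s

if __name__ == "__main__":
    speed = 3840  # px/s, e.g. one 4K screen-width per second of panning
    for hz in (60, 120, 240, 480, 1000):
        print(f"{hz:>4} Hz -> ~{perceived_blur_px(hz, speed):.0f} px of blur")
    # At the same tracking speed, 480 Hz shows half the blur of 240 Hz.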




Mark Rejhon:
From Blur Busters Law, please describe the known possible human-visible artifacts that are generated by a finite refresh rate display.
What would be the possible visible differences between a 240 Hz and a 480 Hz display, assuming pixel transition time is instantaneous and that frame rate matches refresh rate?


GPT-4:
When a finite refresh rate display is used, several visible artifacts can occur due to the limitations of the technology. Some of these artifacts include:

Motion blur: Occurs when the refresh rate isn't high enough to keep up with fast-moving objects on the screen, causing them to appear smeared or blurry.

Screen tearing: Happens when the frame rate of the content is not in sync with the display's refresh rate, causing partial frames to be displayed at the same time.

Strobing: This artifact occurs when a display uses a backlight strobing technique to reduce motion blur. In some cases, it can lead to visible flickering or double images, especially at lower refresh rates.

Judder: Caused by an uneven distribution of frames, often due to the mismatch between the frame rate of the content and the refresh rate of the display. This leads to a stuttering or jerky appearance in motion.

Assuming pixel transition time is instantaneous and that the frame rate matches the refresh rate, a 480 Hz display would have potential visible differences compared to a 240 Hz display:

Reduced motion blur: A 480 Hz display would have half the motion blur compared to a 240 Hz display, resulting in clearer and sharper images during fast motion.

Lower input lag: Higher refresh rates lead to lower input lag, which means a more responsive display. A 480 Hz display would have a faster response time compared to a 240 Hz display, making it particularly beneficial for competitive gaming.

Smoother motion: A higher refresh rate results in smoother motion, as more individual frames are displayed per second. This means a 480 Hz display would provide a more fluid experience compared to a 240 Hz display.

Reduced visible artifacts: The likelihood of encountering screen tearing, judder, or other artifacts is lower with a higher refresh rate, as the display can handle a wider range of frame rates more effectively. A 480 Hz display would be less prone to these issues compared to a 240 Hz display.

It's important to note that the improvements in clarity and motion handling between a 240 Hz and 480 Hz display might not be as noticeable for some users, especially if they're not engaged in activities like competitive gaming or watching fast-paced content. The law of diminishing returns applies, meaning that the perceptual differences between the two refresh rates will be less pronounced than the difference between, for example, a 60 Hz and a 120 Hz display.



Mark Rejhon:
I have already fact-checked everything you said. You're right on all counts.
Now please describe the several mechanisms of motion blur on displays.


GPT-4:
Motion blur on displays can be caused by various factors. Here are several mechanisms that contribute to motion blur:

Sample-and-hold effect: This is the primary cause of motion blur on modern LCD and OLED displays. The sample-and-hold effect refers to the fact that pixels maintain a constant state throughout the entire duration of a single frame. Since our eyes track moving objects on the screen, the constant pixel state creates a blurred image in our vision. This effect is less noticeable on CRT displays, which have a natural impulse-driven nature due to their phosphor decay, reducing perceived motion blur.

Pixel response time: Pixel response time is the time it takes for a pixel to change its color or brightness level. Slow pixel response times can cause motion blur, as the pixels are unable to transition fast enough between frames. This type of blur is more common in older or low-quality LCD panels that use technologies like twisted nematic (TN) or in-plane switching (IPS). Modern displays, including OLED and high-quality LCD panels, have significantly faster response times, reducing motion blur caused by pixel transitions.

Persistence: Persistence is related to the amount of time a pixel remains in a specific state before it decays or fades away. In CRT displays, phosphor persistence could cause motion blur as the illuminated phosphors decayed. In modern displays, this is less of an issue, but OLED panels can still exhibit some degree of persistence, leading to motion blur.

Display refresh rate: A low refresh rate can contribute to motion blur as there are fewer frame updates per second. This means that fast-moving objects have larger gaps between their positions in successive frames, causing them to appear smeared or blurry. Higher refresh rates help mitigate motion blur by reducing the time between frame updates and creating a smoother, more fluid image.

Camera motion blur: This type of blur originates from the content creation process, specifically during recording or filming. When the camera shutter speed is too slow compared to the speed of the objects in the scene, motion blur can occur. This form of blur is independent of the display and is inherent in the content being displayed.

Eye-tracking motion blur: This is a physiological cause of motion blur that occurs as our eyes track moving objects on the screen. When our eyes move to follow an object, the image of that object can become smeared across our retinas, causing a perception of motion blur. This effect is present regardless of the display technology, and it can be exacerbated by factors such as slow pixel response times or low refresh rates.

Some of these mechanisms can be addressed or mitigated by display technologies like backlight strobing, black frame insertion, or increasing the refresh rate, but it's important to remember that not all motion blur is caused by the display itself.
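
As a rough back-of-envelope for how two of those mechanisms stack, here is a small sketch (my own simplification, not from the chat) that treats total eye-tracked blur as sample-and-hold persistence plus GtG pixel response time; real GtG curves are nonlinear and panel-dependent, so treat it as an order-of-magnitude estimate only:

Code: Select all

# Rough additive model of eye-tracked blur: persistence + pixel response.
# Simplifying assumption; real panels have nonlinear GtG transitions.

def total_blur_px(refresh_hz: float, gtg_ms: float, speed_px_per_s: float) -> float:
    persistence_ms = 1000.0 / refresh_hz    # full-persistence sample-and-hold
    effective_ms = persistence_ms + gtg_ms  # crude additive approximation
    return effective_ms / 1000.0 * speed_px_per_s

if __name__ == "__main__":
    speed = 1920  # px/s panning speed
    print(total_blur_px(240, gtg_ms=3.0, speed_px_per_s=speed))  # slower LCD
    print(total_blur_px(240, gtg_ms=0.1, speed_px_per_s=speed))  # near-0ms OLED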



Mark Rejhon:
Excellent answer, except "sample and hold effect" is generally more-or-less synonymous with "persistence". The refresh rate contributes to the sample-and-hold effect in the sense that it determines the maximum number of unique frames per second applicable to the sample-and-hold effect. So while you had a minor amount of redundancy in your answer, you're essentially correct on all counts.


GPT-4:
Thank you for pointing out the redundancy in my response. You're right that the sample-and-hold effect and persistence are closely related, as both contribute to motion blur due to the constant state of pixels on modern displays.

As you mentioned, the refresh rate plays a role in determining the maximum number of unique frames per second, which in turn affects the sample-and-hold effect and persistence. Higher refresh rates can help mitigate motion blur caused by these factors by providing more frequent updates, allowing for smoother motion and reduced blur.

I appreciate your feedback, and I'm glad the information provided was helpful. If you have any further questions or require clarification, please feel free to ask.




Mark Rejhon:
In case you weren't aware, I am the creator of Blur Busters, and the concept of Blur Busters Law. I am impressed that you know this much about my business.


GPT-4:
Thank you for your kind words! As an AI language model, my purpose is to provide accurate and helpful information based on the knowledge I've been trained on. It's an honor to have the opportunity to discuss Blur Busters and Blur Busters Law with the creator. Your work has significantly contributed to the understanding of display technology and the pursuit of better motion clarity in gaming and multimedia experiences.

If you have any further questions or would like to discuss any related topics, please feel free to ask. I'm here to help!




Mark Rejhon:
As it pertains to eliminating all possible artifacts on a display, let's conceptualize a theoretical display that is indistinguishable in motion clarity from real life. Assuming we've eliminated all technological limitations for such a hypothetical display in the future, including ultra-fast GPUs, ultra-fast video cables, and full retina resolution, please describe the human limiting factors that pertain to the vanishing point of the diminishing curve of returns for display motion handling.


GPT-4:

In conceptualizing a theoretical display that achieves motion clarity indistinguishable from real life, we need to consider the human visual system's limitations. Although we can eliminate technological constraints, our visual perception still has innate limitations that define the vanishing point of the diminishing curve of returns for display motion handling. Some of these human limiting factors include:

Temporal resolution: The human visual system has a finite temporal resolution, which refers to the ability to distinguish individual events happening in rapid succession. Research suggests that most people can perceive flicker at frequencies up to 60 Hz, while some may be sensitive to flicker up to 100 Hz or even higher in peripheral vision. Beyond a certain point, increasing the refresh rate of a display would not lead to noticeable improvements for the average viewer, as our eyes would not be able to discern the additional frames.

Motion sensitivity: People have varying levels of motion sensitivity, which refers to the ability to detect and perceive movement. Factors such as age, neurological differences, and fatigue can affect motion sensitivity. For some individuals, a display with perfect motion handling may still appear blurry or exhibit artifacts due to their own visual system's limitations.

Spatial resolution: The human eye has a limited spatial resolution, determined by factors such as the density of photoreceptor cells in the retina, the eye's optical quality, and neural processing. As display resolutions continue to increase, there will be a point at which the average human eye will no longer be able to perceive improvements in detail, even when motion handling is perfect.

Contrast sensitivity: The human visual system is more sensitive to contrast at certain spatial and temporal frequencies. This means that even with a display that has perfect motion handling, some details may still be difficult to perceive due to the inherent limitations of our contrast sensitivity.

Cognitive processing: Our brains have a limited capacity to process visual information. Even with a display that achieves perfect motion handling, there is a limit to how much visual information we can process and comprehend at any given moment.

While these human factors define the vanishing point of the diminishing curve of returns for display motion handling, it's worth noting that individual differences exist. Some people may benefit from further improvements in display technology, while others may already be at the point where additional advancements would not lead to noticeable improvements in their experience. Nonetheless, understanding these human limitations helps guide the development of future display technologies and set realistic expectations for the ultimate level of motion clarity achievable.




Mark Rejhon:
Thank you.

Let's consider the "weak link principle". Even beyond the flicker fusion threshold, we still have motion blur and the phantom array effect from a limited refresh rate. These go down as refresh rates go up.

What refresh rate will be needed to push the motion blur below human-detectable levels, for an average human with 20/20 vision? Describe variables such as display resolution, display size, the amount of time spent eye-tracking, and how they may increase or decrease the maximum refresh rate at which no further weak links remain.


GPT-4:
To determine the refresh rate needed to push motion blur below human detectable levels for an average human with 20/20 vision, several variables need to be considered:

Display resolution: Higher resolutions require more frequent updates to maintain the same level of motion clarity as lower resolutions. As the resolution increases, the number of pixels that our eyes need to track increases, which can introduce more motion blur. To compensate for the additional motion blur, a higher refresh rate would be needed.

Display size: Larger displays, when viewed from the same distance as smaller displays, cover a larger field of view. This means that our eyes need to track over a larger angular range, leading to more pronounced motion blur. To reduce motion blur on larger displays, higher refresh rates may be necessary.

Eye tracking time: The longer the duration of eye tracking, the more motion blur is perceived. Fast-moving objects require more prolonged eye tracking, which can introduce motion blur even at high refresh rates. In these scenarios, an even higher refresh rate may be needed to push motion blur below detectable levels.

Viewing distance: The viewing distance affects the perceived motion blur as well. As the distance between the viewer and the display decreases, the angular range our eyes need to track increases, leading to more noticeable motion blur. A higher refresh rate may be required for closer viewing distances to compensate for the increased motion blur.

Considering these variables, the required refresh rate to push motion blur below human detectable levels is not a fixed value. Instead, it depends on the specific combination of display resolution, display size, eye tracking time, and viewing distance. However, it's worth noting that as refresh rates approach and surpass the flicker fusion threshold (which can be as high as 100 Hz or more for peripheral vision), the improvements in motion clarity start to diminish.

While it's difficult to pinpoint an exact refresh rate that would eliminate motion blur for all scenarios, research suggests that refresh rates above 1000 Hz could be a point where motion blur is significantly reduced for most practical use cases. However, individual sensitivity to motion blur and other factors may still lead to some users perceiving motion artifacts even at such high refresh rates.
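
To make those variables concrete, here is an illustrative calculation (my own assumptions, not a quoted formula): it converts display width, horizontal resolution, and viewing distance into pixels per degree, multiplies by an eye-tracking speed to get pixels per second, then asks what sample-and-hold refresh rate keeps blur under a chosen budget (1 pixel here), assuming instantaneous GtG and frame rate matching refresh rate:

Code: Select all

import math

# Illustrative estimate of the refresh rate needed to keep sample-and-hold
# motion blur under a pixel budget, for a given display geometry and
# eye-tracking speed. Assumptions: instantaneous GtG, frame rate == Hz.

def pixels_per_degree(h_res: int, width_m: float, distance_m: float) -> float:
    deg_per_screen = 2 * math.degrees(math.atan((width_m / 2) / distance_m))
    return h_res / deg_per_screen

def required_refresh_hz(h_res: int, width_m: float, distance_m: float,
                        tracking_deg_per_s: float,
                        blur_budget_px: float = 1.0) -> float:
    ppd = pixels_per_degree(h_res, width_m, distance_m)
    tracking_px_per_s = tracking_deg_per_s * ppd
    # blur_px = tracking_px_per_s / refresh_hz  ->  solve for refresh_hz
    return tracking_px_per_s / blur_budget_px

if __name__ == "__main__":
    # Example: ~27" 4K monitor (~0.60 m wide) viewed at 0.60 m, 30 deg/s tracking
    hz = required_refresh_hz(3840, width_m=0.60, distance_m=0.60, tracking_deg_per_s=30.0)
    print(f"~{hz:.0f} Hz needed for <=1 px of tracked blur")  # roughly 2000+ Hz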



Mark Rejhon:
Let's consider a retina resolution 180-degree screen, sample and hold, such as a theoretical virtual reality headset in the distant future.

Given your knowledge of Blur Busters Law, and the average human eye tracking speed, what is the approximate "retina refresh rate"?


GPT-4:
A "retina refresh rate" would be the refresh rate required to make motion artifacts imperceptible to the average human eye in a given scenario. In the case of a retina resolution 180-degree screen, such as a theoretical future virtual reality headset, we need to consider the average human eye tracking speed and the increased demands of a high-resolution, wide field-of-view (FOV) display.

The average human eye tracking speed varies depending on the individual and the nature of the movement. However, a commonly referenced value is around 30 degrees per second for smooth pursuit eye movements.

To calculate the "retina refresh rate" for such a VR headset, we need to consider the maximum speed at which the eye can track an object on the screen without perceiving motion blur or other artifacts. Using the Blur Busters Law, we can estimate the refresh rate required to minimize motion blur to a level below human perception.

Given the 180-degree FOV, a display would need a significantly higher refresh rate than current consumer displays to achieve retina-like motion clarity. While it's challenging to pinpoint an exact value due to individual variations in motion sensitivity, a refresh rate in the range of several thousand hertz would likely be necessary to achieve a truly imperceptible level of motion blur for the average human eye with 20/20 vision.

This estimate assumes a display with instantaneous pixel response times and a frame rate that matches the refresh rate. In practice, achieving such high refresh rates for a retina-resolution 180-degree screen would require significant advancements in display technology and the hardware driving the display, such as GPUs and cables.
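
A quick worked estimate of that order of magnitude (my own numbers, under explicit assumptions: roughly 60 pixels per degree as "retina" angular density, a 1-pixel blur budget, sample-and-hold, instantaneous GtG). The eye-tracking speed is the variable that moves the answer most; a wide 180-degree FOV mainly matters because it permits long, fast tracking sweeps:

Code: Select all

# Back-of-envelope "retina refresh rate" for a retina-density, wide-FOV HMD.
# Assumptions: ~60 px/deg angular density, 1 px blur budget, sample-and-hold.

PX_PER_DEG = 60.0
BLUR_BUDGET_PX = 1.0

for eye_speed_deg_s in (30.0, 60.0, 120.0):   # smooth-pursuit speeds vary widely
    px_per_s = eye_speed_deg_s * PX_PER_DEG
    retina_hz = px_per_s / BLUR_BUDGET_PX
    print(f"{eye_speed_deg_s:>5.0f} deg/s tracking -> ~{retina_hz:,.0f} Hz")
# Prints roughly 1,800 / 3,600 / 7,200 Hz -- i.e. "several thousand hertz".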



Mark Rejhon:
Excellent answer. Now, we also have a stroboscopic effect problem, where humans can see phantom array effects and wagon-wheel effects. Please describe possible mitigations for this, for "retina refresh rate".


GPT-4:
The stroboscopic effect, which includes phantom array effects and the wagon-wheel effect, occurs when a continuous motion is displayed as a series of discrete frames, causing our visual system to perceive a series of static images instead of smooth motion. This can lead to the appearance of objects moving in the wrong direction or in a non-continuous manner. To mitigate these effects for a "retina refresh rate" display, several strategies can be employed:

Higher refresh rates: Increasing the refresh rate reduces the time between consecutive frames, which can help minimize the visibility of stroboscopic effects. As the refresh rate approaches the "retina refresh rate," the stroboscopic effect becomes less noticeable. However, the improvements begin to diminish beyond a certain point, as the human visual system's ability to perceive these artifacts reaches its limit.

Frame interpolation: Frame interpolation techniques can be used to generate additional frames between the original frames, creating a smoother appearance in motion. This can help reduce the stroboscopic effect by filling in the gaps between frames with interpolated content. However, frame interpolation can introduce its own artifacts, such as motion smearing or the "soap opera effect," so it should be used with caution.

Motion blur: Introducing a controlled amount of motion blur in the content can help mask the stroboscopic effect. By simulating the natural motion blur that occurs in real-life scenarios, the visibility of phantom array effects and the wagon-wheel effect can be reduced. However, excessive motion blur can decrease the overall sharpness and clarity of the image.

Hybrid approaches: Combining multiple techniques, such as using a high refresh rate and frame interpolation or adding motion blur, can help mitigate the stroboscopic effect further. By addressing the issue from multiple angles, the visibility of artifacts can be minimized without relying solely on a single approach.

Dynamic frame rate and refresh rate: Adapting the frame rate and refresh rate to the content being displayed can also help reduce the stroboscopic effect. For example, if a scene contains fast motion, the display could increase the refresh rate to minimize artifacts. This approach would require advanced display technologies and algorithms to detect and adapt to different content types and motion speeds effectively.

It's worth noting that completely eliminating the stroboscopic effect for all users and situations might not be feasible due to individual differences in visual perception and the inherent limitations of displaying motion on a finite refresh rate display. However, by employing these mitigation strategies, the visibility of these artifacts can be significantly reduced, bringing the display closer to achieving "retina refresh rate" motion clarity.
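
As a toy illustration of the interpolation and controlled-motion-blur ideas above (a sketch under simplified 1-D assumptions, not any shipping algorithm): for an object moving a known number of pixels per displayed frame, you can either synthesize intermediate sub-frames, or average the object's swept positions across the frame interval to fill the gaps that cause the phantom-array effect:

Code: Select all

import numpy as np

# Toy 1-D illustration of two stroboscopic-effect mitigations for a dot
# moving STEP pixels per displayed frame. Illustrative only.

WIDTH, STEP = 64, 8

def frame_with_dot(x: int) -> np.ndarray:
    f = np.zeros(WIDTH)
    f[int(x) % WIDTH] = 1.0
    return f

def interpolated_subframes(x0: int, x1: int, n: int) -> list:
    """Synthesize n evenly spaced sub-frames between two positions."""
    return [frame_with_dot(round(x0 + (x1 - x0) * i / n)) for i in range(n)]

def blur_injected_frame(x0: int, x1: int) -> np.ndarray:
    """Average the dot's swept positions across one frame interval."""
    return np.mean([frame_with_dot(x) for x in range(min(x0, x1), max(x0, x1))], axis=0)

if __name__ == "__main__":
    subs = interpolated_subframes(0, STEP, n=4)   # fills the temporal gaps
    blurred = blur_injected_frame(0, STEP)        # masks the discrete steps
    print(len(subs), "sub-frames; blurred dot spans pixels", blurred.nonzero()[0])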


Mark Rejhon:
Okay, now adjust the earlier "retina refresh rate" estimate of several thousand Hz to also completely eliminate the stroboscopic effect, based on known human detectability thresholds for stroboscopic detection.


GPT-4:
To completely eliminate the stroboscopic effect in addition to achieving retina-like motion clarity, the "retina refresh rate" would need to be adjusted to an even higher value. The stroboscopic effect is more challenging to eliminate completely because it depends on various factors, including the type of motion, background, and individual sensitivities.

Studies suggest that human sensitivity to stroboscopic stepping artifacts can be noticeable even at very high refresh rates, and the threshold for detecting these effects may be higher than the threshold for perceiving motion blur. Some research indicates that stroboscopic effects might still be perceptible at refresh rates as high as several thousand hertz for certain situations, especially when tracking fast-moving objects or in high-contrast scenes.

Considering the goal of completely eliminating the stroboscopic effect for the average person with 20/20 vision, it's plausible that the "retina refresh rate" would need to be in the tens of thousands of hertz range. However, it's important to note that individual sensitivities and the nature of the content being displayed will play a significant role in determining the exact refresh rate required.

It's also worth mentioning that achieving such extremely high refresh rates would necessitate significant advancements in display technologies, as well as the hardware and infrastructure supporting the display, such as GPUs, video cables, and content delivery systems. While the goal of completely eliminating the stroboscopic effect is ambitious, incremental improvements in display technology will continue to reduce these artifacts and enhance the overall viewing experience.




Mark Rejhon:
You correctly estimated the order of magnitude for the retina refresh rate of a maximal, technological-limits-pushing display. Research indeed confirms that the retina refresh rate could extend into the quintuple-digit range to make VR indistinguishable from real life. I wish you could read my papers at https://blurbusters.com/area51 but your training seems to understand display motion concepts far better than the average researcher and the average human!


GPT-4:
Thank you for your kind words! As an AI language model, my purpose is to provide accurate and helpful information based on the knowledge I've been trained on. While I can't access the latest content on websites like https://blurbusters.com/area51, my training data includes a wide range of information on display technologies and motion artifacts up to September 2021.

I'm glad to hear that my understanding of display motion concepts aligns well with your research and expertise. It's an honor to discuss these topics with someone who has contributed so much to the field of display technology and motion clarity.

If you have any further questions or would like to discuss any related topics, please feel free to ask. I'm here to help!
Stochastic parrot my ass -- nobody can deny that this paid-subscription GPT-4 AI appears to be more knowledgeable about Blur Busters than 98%+ of technology-forum members.

I am smart enough to know that I need to rapidly adopt paid AI tools such as GPT-4 as part of my career -- even just as a techno-jargon grammar checker (it's far better than Grammarly at automatically recognizing my writing style, for example) -- or to help me debug a tough problem.

Those of you in high-tech work positions should quickly adapt too -- even just using an AI pair programmer (GitHub Copilot) -- or asking it the occasional question that currently stumps you (helping you fix a tough bug). These tools are gaining certain human-competing skills now, and one must keep up or risk getting laid off.

Love it or hate it, we don't necessarily have to like the AI tools, but you must keep your frenemies close. Whether you consider AI an electronic friend or a frenemy -- don't avoid it. Learn how to use AI in your workflows -- now.

Tech companies like Google and Facebook are laying off staff scarily quickly, and the remaining programmers need to adopt AI tools into their workflow -- pronto -- stat!
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by Chief Blur Buster » 22 Mar 2023, 19:31

jorimt wrote:
22 Mar 2023, 18:09
drimzi wrote:
22 Mar 2023, 17:31
I asked ChatGPT what it could be, so take this explanation with a grain of salt.
My, that's some word salad :lol:
I'm wondering if it's GPT-3.5 or GPT-4.0.

As an early, near day-and-date subscriber to GPT-4.0, I have absorbed enough answers and fact-checking to more readily trust the legitimacy of GPT-4.0 output (light fact-checking) than GPT-3.5 output (heavy fact-checking needed).

The explanation makes (somewhat) sense, but GPT-3.5 can be stubbornly "sounds right in an incorrect way". Now, reading between the lines, it sounds like there might be a natural behavior in NULL that follows this formulaic pattern. But it might be coincidental and not intentional. Or a natural effect might be accidentally causing it to follow the theorem GPT describes. Regardless, always fact-check any AI output, and always disclose if your forum post has content suggested by an AI.

Based on this forum post, I've updated the Blur Busters Forum Rules to make it mandatory to immediately disclose AI involvement in a primary forum topic, such as an NVIDIA NULL formula you might have discovered. The forum member correctly disclosed, but I am doing this pre-emptively because AI forum spam is already a problem, and legitimate members now have access to AI tools today, which they may personally use to create forum content. This is fine within reason (e.g. sharing an esports image, or decoding why a NULL formula is the way it is) -- AIs are fantastically useful tools if used correctly.

Regardless, a mechanism that bottlenecks 1000fps 1000Hz to <800 is definitely not the ideal way to handle the diminishing curve of returns toward retina refresh rates. So while I don't necessarily like the explanation -- it should definitely be un-bottlenecked as soon as possible -- this is a pretty neat AI discovery of a weird NULL formula pattern, and we should ask NVIDIA about it.

This year, I'll be publishing quite a few 1000fps 1000Hz papers/articles, as a fair bit of Blur Busters income is going to be redirected to 1000Hz advocacy, and the semi-mainstreaming of high-Hz.

Awareness of high-Hz screens is higher thanks to things like 120Hz phones/tablets/consoles, in communities other than esports, and there are gigantic opportunities in this area for Blur Busters. More of our income comes from high-Hz non-esports use, and we are going to expand that further.

More mainstream people (even your grandmother) can tell 240fps-vs-1000fps apart on a 0ms-GtG display than can tell 144Hz-vs-240Hz apart on an LCD. This fact is not yet widely known, but it points to the market opportunities of tomorrow's 1000fps 1000Hz ecosystem, which will hopefully become cheap rapidly (in the 2030s) once OLED hits those rates and games start utilizing reprojection-based systems. Mere browser scrolling and map panning -- www.testufo.com/map -- is an excellent mainstream use case of high-Hz.

People in esports don't even know there's an emerging mainstream high-Hz community (unaware of esports), and it will be years (or a decade) before such communities coalesce, but the effect of seeing 4x motion-quality differences is quite stark on 0ms-GtG displays, much akin to a 1/240sec SLR photo versus a 1/1000sec SLR photo.
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


User avatar
RealNC
Site Admin
Posts: 3741
Joined: 24 Dec 2013, 18:32
Contact:

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by RealNC » 23 Mar 2023, 08:18

jorimt wrote:
22 Mar 2023, 18:09
My, that's some word salad :lol:
ChatGPT has actually mastered the most important human skill. The art of bullshitting :shock:
Steam | GitHub | Stack Overflow
The views and opinions expressed in my posts are my own and do not necessarily reflect the official policy or position of Blur Busters.

User avatar
jorimt
Posts: 2481
Joined: 04 Nov 2016, 10:44
Location: USA

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by jorimt » 23 Mar 2023, 08:37

RealNC wrote:
23 Mar 2023, 08:18
jorimt wrote:
22 Mar 2023, 18:09
My, that's some word salad :lol:
ChatGPT has actually mastered the most important human skill. The art of bullshitting :shock:
Seeing as it is a predictive model trained off of human-generated material, that tracks...8-).
(jorimt: /jor-uhm-tee/)
Author: Blur Busters "G-SYNC 101" Series

Displays: ASUS PG27AQN, LG 48CX VR: Beyond, Quest 3, Reverb G2, Index OS: Windows 11 Pro Case: Fractal Design Torrent PSU: Seasonic PRIME TX-1000 MB: ASUS Z790 Hero CPU: Intel i9-13900k w/Noctua NH-U12A GPU: GIGABYTE RTX 4090 GAMING OC RAM: 32GB G.SKILL Trident Z5 DDR5 6400MHz CL32 SSDs: 2TB WD_BLACK SN850 (OS), 4TB WD_BLACK SN850X (Games) Keyboards: Wooting 60HE, Logitech G915 TKL Mice: Razer Viper Mini SE, Razer Viper 8kHz Sound: Creative Sound Blaster Katana V2 (speakers/amp/DAC), AFUL Performer 8 (IEMs)

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by Chief Blur Buster » 27 Mar 2023, 07:10

RealNC wrote:
23 Mar 2023, 08:18
jorimt wrote:
22 Mar 2023, 18:09
My, that's some word salad :lol:
ChatGPT has actually mastered the most important human skill. The art of bullshitting :shock:
Unfortunately, a gigantic proportion of the human population (here) have become bigger bullshitters than, say, GPT-4 (which bullshits less than GPT-3). Politics. Science. Pandemic. You name it. GPT-4, for example, is a far bigger breath of fresh air compared to that -- when you think of it from that perspective!

As a guy born deaf, I use AI-based transcriber apps that can transcribe human speech much more accurately than Siri, so that I can communicate, as I never used the telephone until age 15, or video conferencing until ~2018. Now all OSes have built-in AI-based Live Caption features that can caption any app. For such reasons, I'm usually fairly quick to adopt AI consumer-use apps.

It's a "trust-in-some-subjects-but-verify" situation for me; when I use GitHub Copilot for my coding tasks, I might accept its sidebar suggestion a few percent of the time and modify it to my needs. It was crappy at first but has gotten much better in the last few months -- obviously a side effect of improved training.

Or I ask the paid GPT-4 some questions about time management or kitchen recipe rescues (it's excellent at substitutions that work yummily!).

Obviously, it is not good at everything. For example, it is hilariously bad at certain complex math questions whenever it does not decide to use an external calculator, yet it becomes professor-like when properly taught to use tools at the right time; a text-only LLM is pretty funny/strange.

People once laughed at online dating -- likewise, at the concept of pocket supercomputers (smartphones) -- and at how these two would combine (e.g. the dating apps that many people seem to use). Pure science fiction if you had asked someone fifty years ago. Likewise, we don't yet know how these tools will evolve.

And the media loves to hype up AI goofs like Bing Chat (before its tweakings and mandatory lobotomization after every 15 lines of chat). But its emergent behaviors show how close we're getting to figuring out certain engineering paths to an AGI. We may not have implemented all paths (implementing text, images, short-term memory, long-term memory, and tool-using ability doesn't necessarily mean we've attached taste AI or emotion AI yet), but it is pretty clear the pace is very rapid.

Bing Chat is very crappy compared to the paid GPT-4 subscription, because I can have unlimited-length conversations -- like 100 parallel conversations that I can resume a week later, such as continuing last week's kitchen recipe chat.

It's like creating/deleting independent AI assistants on demand. They evolve different personalities as the chat continues, without interfering with the other chats in my resumable Chat History list (even 1-month-old chats can be resumed). Put into stasis, each conversation's own short-term memory resumes whether I type the next chat line 1 minute later or 1 month later -- I can even ask a follow-up question about a kitchen recipe.

BTW -- they announced they're going to add a photo feature, so I can photograph my fridge at different times and ask it "please create a shopping list based on things missing from this latest fridge photo" -- the lab version can do that now. Damn useful when I'm a bit ADHD about some things; you saw it in the GPT-4 demo with the joke VGA-connector example (very impressive), but practical applications can be as simple as photographing something and asking how to fix it; it searches for the brand of the broken device, the instructions, etc.
(Attachment: vga-connector.png)
This is a feature I am looking forward to; it's under beta testing now. The photo feature is not part of my consumer $20/month GPT-4 subscription (far better than Bing Chat and free GPT-3.5 for my needs), but my subscription will probably get this feature next month (ish). I can think of various applications such as:
- Photograph your cupboard/fridge and ask it to create shopping list based on usual items missing (based on what it knows about me, and based on previous photographs, etc).
- Photograph something broken and ask how to repair (eventually, it is likely to give nearest repair businesses + links to youtubes + links to the specific manufacturer service manual for the recognized item)
- Photograph something I'm unsure about purchasing, and ask for reviews about the item, is it any good; (literally writes a review on the spot including existing reviews, similar items, better prices/warranty/service elsewhere, etc);
- Etc.

BTW, it is my opinion that LLMs (Large Language Models) are no longer just classical LLMs once they become multimodal, tool-using capable, and self-iterative.

Consider, say, tool-using ability (a measure of living intelligence, though still in dispute when applied to AI) and self-iterative capability.

The ability to recognize a math question and automatically use an external calculator tool, rather than try to neural-network-think the math question (it still answers simple math questions by "AI brain" better than most "human brains", but it becomes more professor-like if you train it to use the correct math tools instead, to verify itself).

Also, the ability to recognize a coding question and automatically run an AI-based IDE + VM to code something up, etc. Apparently, that kind of stuff is already in the lab -- 1000-line programs are running on the first try when you let the AI iterate and repeatedly debug the program using its own AI-operated IDE in an AI-operated VM. That stuff is done in the lab, just not released to the public. The early GPT-4 plugin work is showing some pretty crazy stuff -- like complex travel-agent-style planning plugins.

You see those crazy emergent behaviors occurring, which indicates to me that we're being a little infantile in continuing to call post-GPT-3-era AIs LLM-only devices; we're not synthetically recreating human brain function in all areas equally (we're prioritizing text and images at the moment), but as we attach more than just text-autocomplete behaviors (which human brains do too, in a roundabout manner of speaking), we are witnessing more and more emergent behaviors -- which suggests humankind may have discovered the path to creating future AGIs. I'm frankly surprised they crammed this much AI capability out of less than 1/500th the synapses of a human brain, so we're already well-optimized straight into dangerous territory -- imagine just a few more technological iterations (software and hardware).

From my perspective (if you use the newer/paid stuff, not the free stuff you've played with for a few minutes) -- it is no longer pure classical-LLM-land anymore, now that you've added short-term memory, long-term memory, image recognition, smell AI, taste AI, mistake-admitting, learning from mistakes, and robot-movement AI, and they're already figuring out how to link all of them. But as you can see, it can get out of control (witness Bing Chat). But when in control, it's pretty stellar.

We're not just in LLM-autocomplete-land anymore -- it's far more complex than that, with new emergent behaviors, especially as many members of the human population have a lot of LLM-like spew-out behaviors, and we need a Nobel Prize in Science to converge biological research and AI research. Without creating a Skynet (ha). But I'm not waiting to find out -- I'm happy to adopt the tools that become available to consumers, if sufficiently useful.

While we're definitely still far from AGI, "LLM" in the post-GPT-3 multimodal era is a fairly infantile name, because these systems have grown far beyond text-only LLMs at that stage.

But the era of teaching AIs tool-using abilities -- that's the scary and exciting part. Most humans can't do a 100-line code spew that executes on the first try. Now teach an AI how to use an IDE, and iterate/learn from itself -- ouch. We've got to watch our programming careers and learn how to become conductors/leads of an AI coding orchestra -- inside the window of this decade. They won't replace humans, but could force programmers to change their habits, like the punched-card transition to electronic IDEs. That kind of seismic Jacquard Loom change of era.

If you're not familiar with the Jacquard Loom of 1801, then google that famous history (otherwise we'd still be making T-shirts by hand and selling them for $10,000+ per shirt...).

Fortunately, in our favour, the probable bottleneck to human-worker replacement will be how quickly and cheaply AI can be powered -- e.g. the gigantic increase in GPU manufacturing (beyond mining). Calculations show we need several orders of magnitude more AI capacity -- and that's a massive electricity-consumption and cost concern. Moore's Law slowdowns prevent us from speeding up the hardware side as fast as we used to. GPT-4 is much slower than GPT-3, even if much smarter, and its ability to code and iterate on its code becomes bottlenecked. The eventual AI that can code full 1-million-line applications will still be bottlenecked by its own performance and speed, and it will cost a lot of money in hardware and electricity to power.

Eventually an equilibrium hits, because companies will have a choice between paying $10K/month to an AI versus $10K/month to humans. So we've got a natural "throttle" to the AI explosion under way, probably buying enough time for just enough government regulation to prevent an even-more-disastrous scenario. The hyperbolic claims of programmers becoming obsolete are funny. It's probably theoretically possible given an unlimited budget, but there's just not enough capacity (by probably at least six orders of magnitude -- wild guess) -- even with the new TSMC/Intel factories, NVIDIA probably won't be able to manufacture enough GPUs/AI accelerators this decade to make it possible. So we have time before any AI-pocalypse replaces coders en masse. The world isn't even integrated enough for more than a bunch of mini-skynets (e.g. "a hacker using AI-generated viruses to shut down a city's electricity for two days" type of incidents). The media will undoubtedly hype that up.

So we have time to adapt, but we'd better pay attention and adapt in the meantime.

The reality for society will be a hybrid mix somewhere squarely between utopia and dystopia. And humans still have to operate the AIs themselves -- telling them what kind of program to write, making the zillions of tweaks -- plus humans working on parallel coding projects with parallel instances of AIs doing other parts of a project. It will still be an art, of a different kind. We coders will just be responsible for more lines of code per human, for example. This hopefully means there will be enough time for human workers/coders to adapt and learn as AI computing-power deployments become bigger.

Don't use AIs for things they are terrible at -- use them for what they are excellent at. And the number of things that they are excellent at (if you use newer AIs that you don't have free access to, not even the lobotomized Bing Chat) -- is apparently geometrically increasing. Companies are going to force AI onto us anyway -- keep your friends and enemies close.

Now, 99%+ of AI output will be crazy worthless junk. The most useful <1% of AI (behind paywalls) is the best-kept-secret bee's knees. There's truly amazing stuff going on, including AIs being trained how to use an IDE.

Maybe this needs to be forked into a new thread in the Offtopic Lounge for fun;
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


bumbeen
Posts: 84
Joined: 25 Apr 2023, 14:35

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by bumbeen » 12 May 2023, 02:39

I work as an enterprise network architect, and it's pretty clear AI will eliminate at least 70% of the job I do on a daily basis -- basically telling companies how to build good infrastructure and avoid shooting themselves in the foot at the same time. I knew I had to stop being lazy and grind something new eventually 😭

superblural
Posts: 5
Joined: 02 Dec 2020, 01:15

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by superblural » 23 Oct 2023, 16:59

Chief Blur Buster wrote:
27 Mar 2023, 07:10
Hey Chief, I have a couple of questions about how the GPU communicates with the monitor. Does the GPU send 1 horizontal scan line at a time to the monitor, and is that how you get multiple frames in 1 monitor refresh cycle (screen tearing), or does it switch to frame 2 mid-scanout so you'd get 1 pixel height of vertical tearing?

And then for the scenario G-SYNC on + V-SYNC off with fps capped below the monitor refresh rate, the guide says you can still get tearing on the lower half of the screen. So say for monitor refresh 1, frame 1 is being scanned in, then due to frame-time inconsistency frame 2 can start getting scanned in partway through. The question is how monitor refresh 2 starts: does it wait for a new frame, frame 3, to begin the next monitor refresh? Or does frame 2 get rescanned?

I'd rather ask you than an AI :)
Last edited by superblural on 23 Oct 2023, 20:58, edited 6 times in total.

User avatar
RealNC
Site Admin
Posts: 3741
Joined: 24 Dec 2013, 18:32
Contact:

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by RealNC » 23 Oct 2023, 17:21

superblural wrote:
23 Oct 2023, 16:59
Hey Chief, I have a question about how the GPU communicates with the monitor. Does the GPU send 1 horizontal scan line at a time to the monitor and that's how you get multiple frames on 1 monitor refresh cycle(screen tearing)? Is this why we don't see a vertical screen tear and always horizontal? I'd rather ask you than an AI :)
The signal sent to the monitor is left to right, top to bottom, pixel by pixel:

https://www.youtube.com/watch?v=3BJU2drrtCM&t=153s
Steam | GitHub | Stack Overflow
The views and opinions expressed in my posts are my own and do not necessarily reflect the official policy or position of Blur Busters.

User avatar
Chief Blur Buster
Site Admin
Posts: 11647
Joined: 05 Dec 2013, 15:44
Location: Toronto / Hamilton, Ontario, Canada
Contact:

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by Chief Blur Buster » 23 Oct 2023, 21:47

Yes, rasterization -- it's a de facto serialization of 2D picture data into 1D for things like broadcast or cable delivery. This has persisted in both the analog and digital eras.

The "horizontal scan rate" (horizontal refresh rate) in a Custom Resolution Utility is the number of pixel rows per second being spewed out of the GPU output.

See Custom Resolution Glossary
forums.blurbusters.com/viewtopic.php?t=3248

See High Speed Videos of Scanout
www.blurbusters.com/scanout
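
To tie the numbers together, a small sketch (generic timings, not any specific monitor): horizontal scan rate is roughly Vertical Total multiplied by refresh rate, and with VSYNC OFF a buffer flip that lands mid-scanout produces a tear line at whichever row was being transmitted at that instant:

Code: Select all

# Illustrative scanout arithmetic plus a toy tearing example.
# Generic numbers; real signal timings include blanking/porches (Vertical Total).

REFRESH_HZ = 240
VERTICAL_TOTAL = 1125       # active lines + blanking (e.g. ~1080 active + VBI)

h_scan_rate_hz = REFRESH_HZ * VERTICAL_TOTAL       # pixel rows per second
line_time_us = 1_000_000 / h_scan_rate_hz          # time to transmit one row
print(f"~{h_scan_rate_hz / 1000:.0f} kHz horizontal scan rate, "
      f"~{line_time_us:.2f} us per scanline")

# Toy tearing: if a new frame becomes ready while row `flip_row` is being
# scanned out (VSYNC OFF), rows above it show the old frame, rows below the new.
ACTIVE_LINES, flip_row = 1080, 400
scanout = ["old frame" if row < flip_row else "new frame"
           for row in range(ACTIVE_LINES)]
print(scanout[flip_row - 1], "->", scanout[flip_row], f"(tear at row {flip_row})")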
Head of Blur Busters - BlurBusters.com | TestUFO.com | Follow @BlurBusters on Twitter


rasdore
Posts: 3
Joined: 13 Nov 2023, 22:48

Re: Blur Buster's G-SYNC 101 Series Discussion

Post by rasdore » 13 Nov 2023, 22:53

Good afternoon, and thank you for the guide. I play Hunt: Showdown on PC and my framerate is not stable even on minimum settings (120-160 fps), nor is the frametime. Monitor - Asus PG258Q (G-SYNC, overdrive, 240Hz); video card - nvidia suprimx 3080. After reading the guide, I could not find the right settings for myself. I ask for your help and would be glad for any hint. In search of the lowest latency with an adequate frame rate...

Post Reply