Systems like DLP and plasma buffer a whole refresh cycle inside the display device itself, then output multiple temporally-dithered fields (i.e. frames with limited color depth). The HDMI cable still raster-delivers (CRT-style) into the DLP's framebuffer memory, but the DLP then converts 60Hz into 960Hz, 1440Hz or 2880Hz of 1-bit refresh cycles.
Like monochrome 1-bit images, literally. The mirrors of a DLP chip can only be ON or OFF within one DLP refresh cycle. So a DLP projector is effectively doing 1440 ultrafast 1/1440sec raster scanouts per second.
So the DLP is already doing scan conversion (like converting PAL->NTSC or NTSC->PAL) but in a very integrated way: 60Hz->1440Hz. Consider an older DLP projector with a 960Hz mirror switching rate and a 240Hz color wheel cycling rate. On those, there aren't enough temporal bits per refresh cycle to do 24-bit color in 1/60sec, so they spread the color temporally over multiple refresh cycles via temporal dithering, and that creates contouring during motion. But for simplicity, let's at least understand the math: a 960Hz DLP mirror rate and a 240Hz color switching rate (by colorwheel, or by other color-switching means like LED cycling or laser cycling) means you're effectively doing a sequence like the one below.
A DLP that does 24-bit-per-signal-Hz -- here a 1440Hz DLP chip with a 240Hz color wheel, handling a 60Hz video signal:
Pseudocode.
Code:
// A single 60Hz = 1/60sec refresh cycle on a 1440Hz DLP mirror switch rate with a 240Hz color wheel.
// The below has 24 different 1-bit DLP-chip refresh cycles per 60Hz video-cable refresh cycle
REPEAT (every-new-24bit-frame-from-signal-refresh-cycle) {
Buffer one refresh cycle from signal into a 24-bit framebuffer.
Split 24-bit framebuffer (from VGA/HDMI/DP/whatever) into 24 separate 1-bit images.
1-bit bitmap of red (2 colors containing only black and fullbright red) at T+0/1440sec
1-bit bitmap of red (2 colors containing only black and fullbright red) at T+1/1440sec
1-bit bitmap of red (2 colors containing only black and fullbright red) at T+2/1440sec
1-bit bitmap of red (2 colors containing only black and fullbright red) at T+3/1440sec
// color wheel transitions from red to green
1-bit bitmap of green (2 colors containing only black and fullbright green) at T+4/1440sec
1-bit bitmap of green (2 colors containing only black and fullbright green) at T+5/1440sec
1-bit bitmap of green (2 colors containing only black and fullbright green) at T+6/1440sec
1-bit bitmap of green (2 colors containing only black and fullbright green) at T+7/1440sec
// color wheel transitions from green to blue
1-bit bitmap of blue (2 colors containing only black and fullbright blue) at T+8/1440sec
1-bit bitmap of blue (2 colors containing only black and fullbright blue) at T+9/1440sec
1-bit bitmap of blue (2 colors containing only black and fullbright blue) at T+10/1440sec
1-bit bitmap of blue (2 colors containing only black and fullbright blue) at T+11/1440sec
// color wheel transitions from blue to red
1-bit bitmap of red (2 colors containing only black and fullbright red) at T+12/1440sec
1-bit bitmap of red (2 colors containing only black and fullbright red) at T+13/1440sec
1-bit bitmap of red (2 colors containing only black and fullbright red) at T+14/1440sec
1-bit bitmap of red (2 colors containing only black and fullbright red) at T+15/1440sec
// color wheel transitions from red to green
1-bit bitmap of green (2 colors containing only black and fullbright green) at T+16/1440sec
1-bit bitmap of green (2 colors containing only black and fullbright green) at T+17/1440sec
1-bit bitmap of green (2 colors containing only black and fullbright green) at T+18/1440sec
1-bit bitmap of green (2 colors containing only black and fullbright green) at T+19/1440sec
// color wheel transitions from green to blue
1-bit bitmap of blue (2 colors containing only black and fullbright blue) at T+20/1440sec
1-bit bitmap of blue (2 colors containing only black and fullbright blue) at T+21/1440sec
1-bit bitmap of blue (2 colors containing only black and fullbright blue) at T+22/1440sec
1-bit bitmap of blue (2 colors containing only black and fullbright blue) at T+23/1440sec
// color wheel transitions from blue to red
}
Each of those lines is what a single 1/1440sec DLP refresh cycle looks like in the DLP's framebuffer memory. It's ALWAYS 1-bit monochrome, for every DLP chip ever invented.
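To make the sequence concrete, here's a minimal Python sketch that generates the same subfield schedule. The 1440Hz mirror rate and the R,G,B,R,G,B segment order are taken from the pseudocode above; the function name, parameters, and equal-duration-subfield assumption are illustrative only, not TI's actual proprietary sequencing:
Code:
def subfield_schedule(signal_hz=60, mirror_hz=1440,
                      segments=("red", "green", "blue") * 2):
    subfields = mirror_hz // signal_hz        # 1-bit DLP refresh cycles per signal frame (24)
    per_segment = subfields // len(segments)  # subfields shown under each color segment (4)
    for n in range(subfields):
        color = segments[n // per_segment]
        print(f"1-bit bitmap of {color} at T+{n}/{mirror_hz}sec")

subfield_schedule()   # prints the 24-line sequence from the pseudocode above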

Your eyes see 24-bit color because of the temporal dithering -- 24 different error-diffusion-dithered bitmaps per signal refresh cycle (1440 per second), with the error diffusion patterns in very different places, in precisely mathematically-corrected locations to make sure each pixel is flashed the correct number of times per color. This is done via an FPGA or ASIC.
You cannot easily photograph this accurately, because the colorwheel is always spinning in front of it, and not all pixels refresh at the same time on a DLP chip (the last pixel on a DLP chip refreshes 1/1440sec after the first pixel), as DLP chips are also raster-scanout devices, just with ultrafast sweeps. So it's hard to photograph a 1-bit refresh cycle unless you have an ultrafast camera shutter timed exactly between refresh cycle sweepouts.
The FPGA/ASIC connected to a TI DLP chip generates 24 different error-diffusion-dithered 1-bit bitmaps and then displays them rapidly in 1/1440sec refresh cycles -- this is essentially what Viewpixx's Propixx projector does.
But our commodity projectors only accept a 60Hz or 120Hz signal, and it's scan-converted by the commodity scan-converter ASIC built into every TI DLP, with all the patented proprietary TI algorithms. The error-diffusion dither algorithm is simultaneously spatial AND temporal. Spatially, it just looks like an old 1980 CGA bitmap (yes, that), except displayed at 1440 frames per second on your cheap $500 DLP projector, helped by commodity ASICs/FPGAs and a spinning colorwheel.
The ordinary 60Hz raster signal on HDMI is being scan-converted into a proprietary error-diffused dither pattern with 1440 different dither patterns per second. You can only capture it with a high-speed camera shutter (1/1440sec or 1/2880sec shutter speed + global shutter, not a rolling-scanout shutter), but once you do, you start to recognize a single 1-bit error-diffused bitmap on the DLP.
Just pretend the cable is stuck at 60Hz 24-bit
Just pretend the DLP chip is stuck at 1440Hz 1-bit
Therefore: Mandatory scan conversion
(plasma displays did it too, although using different algorithms than DLP).
The temporal dithering is computed so precisely that it looks like normal color: to generate a dark red, it just has to flash the red pixel a few times, while keeping the mirror off during the green/blue stages of the color wheel. So it's possible to generate 16,777,216 colors for one pixel using just 24 different 1-bit bitmaps displayed in rapid sequence. So even though one 1-bit frame is spatially error-diffusion dithered, the DLP ASIC/FPGA monitors each pixel to make sure it is flashed the correct binary number of times necessary to create a specific pixel color.
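As a hedged illustration of "flashed the correct binary number of times": one classic textbook way to map an 8-bit channel onto 1-bit subfields is binary-weighted bit-planes, where plane k is held on screen for a time proportional to 2^k. This is a standard technique, not necessarily TI's exact proprietary sequence:
Code:
def bitplanes(value8):
    """Split an 8-bit intensity (0-255) into 8 one-bit mirror states.
    If plane k is displayed for a duration proportional to 2^k,
    the eye integrates the flashes back into the original level."""
    return [(value8 >> k) & 1 for k in range(8)]

# Dark red (R=64, G=0, B=0): the mirror is on only during red's 2^6 plane,
# and stays off during every green and blue subfield.
assert bitplanes(64) == [0, 0, 0, 0, 0, 0, 1, 0]
assert bitplanes(0) == [0] * 8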
Doing a half-hearted job will show odd color gradients, minor color flicker, and odd bit-depth reductions. Doing a full-hearted job will result only in clean bit-depth reductions (e.g. turning movies into literal 4-bit 16-color EGA graphics) if you use large BFI factors. You will also get contouring artifacts during motion -- far worse than early-plasma contouring artifacts.
Contouring artifacts used to be caused by plasma or DLP subfields whose error-diffusion dithers weren't properly temporally-error-diffused AND spatially-error-diffused in their simultaneous spatial+temporal dithered subfields. The math to calculate all of this per-pixel in real time could only be handled by FPGA/ASIC, and these were expensive algorithms in early displays until the chips became more commoditized (albeit still proprietary, especially in higher-end displays with fewer contouring motion artifacts).
Common dithering = spatial dithering = the semi-random patterns of adjacent pixels.
Temporal dithering = the very fast semi-random flicker pattern of a single pixel to create a solid color.
DLPs are doing both simultaneously with a complex FPGA algorithm to generate full color.
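Here's a minimal sketch of the simultaneous spatial+temporal idea, assuming NumPy. It uses a per-pixel accumulator (temporal) seeded with a Bayer-matrix phase offset (spatial), so neighbouring pixels don't flash in lockstep; real DLP/plasma firmware uses proprietary error-diffusion variants, which this only approximates:
Code:
import numpy as np

def spatiotemporal_dither(frame, n_subfields):
    """frame: 2-D array of target brightness in [0,1].
    Returns n_subfields 1-bit bitplanes whose temporal average ~ frame."""
    h, w = frame.shape
    bayer = np.array([[ 0,  8,  2, 10],       # 4x4 ordered-dither matrix:
                      [12,  4, 14,  6],       # staggers each pixel's flash
                      [ 3, 11,  1,  9],       # phase relative to its
                      [15,  7, 13,  5]]) / 16.0  # neighbours (spatial part)
    acc = np.tile(bayer, (h // 4 + 1, w // 4 + 1))[:h, :w].copy()
    planes = []
    for _ in range(n_subfields):
        acc += frame                           # temporal accumulation per pixel
        plane = (acc >= 1.0).astype(np.uint8)  # flash when a whole "unit" of light is owed
        acc -= plane                           # subtract the light actually shown
        planes.append(plane)
    return planes

# A 50%-gray patch flashes on in half the subfields, neighbours out of phase:
planes = spatiotemporal_dither(np.full((8, 8), 0.5), 24)
assert abs(sum(p.mean() for p in planes) / 24 - 0.5) < 0.05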
Now do you understand better???

___________
Just picture the DLP chip as an ultrafast 1/1440sec raster device that can't refresh slower than 1/1440sec, regardless of signal input. Every DLP projector ever made has a de facto built-in scan converter.
The problem is that an external BFI wheel will not easily be able to leading-edge-time those ultrafast 1/1440sec sweep-scanouts and splice into the correct stage of the temporal dither. The precision margins just aren't there. Especially since the edge of a shadow (of a BFI wheel) will often be blurry/soft (another error margin). Do you understand yet?
How ARE you going to use your Arduino to "splice correctly" at the correct moment into that dither cycle?
(1) Reconciling the 1/60sec scanout speed of your rolling-BFI with the 1/1440sec scanout speed of a DLP chip (see the error-budget sketch after this list); AND
(2) Splicing the external mechanical shutter correctly between the colorwheel boundaries; AND
(3) Splicing the external mechanical shutter correctly at the correct phase of the proprietary temporal dither (so you don't interrupt the Texas Instruments sequence); AND
(4) Lack of access to the display chip's 1440 VSYNC-equivalents per second.
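To get a feel for the error budget in (1)-(3), here's some back-of-envelope arithmetic. The shadow-edge softness and wheel speed are illustrative assumptions, not measurements:
Code:
subfield = 1 / 1440                 # one 1-bit DLP refresh cycle: ~694 microseconds
colorseg = 4 / 1440                 # one color segment in the sequence above: ~2.8ms
edge_softness_deg = 2.0             # assumed angular blur of the BFI wheel's shadow edge
wheel_rev_hz = 60                   # wheel synchronized to one revolution per 60Hz frame
edge_cross = (edge_softness_deg / 360) / wheel_rev_hz   # time the soft edge smears over
print(f"subfield {subfield*1e6:.0f}us, segment {colorseg*1e3:.2f}ms, "
      f"soft edge {edge_cross*1e6:.0f}us")
# Even a "sharp" 2-degree soft edge smears ~93us across the image -- a sizable
# fraction of a 694us subfield, before counting motor jitter and phase drift.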
When you learn how to take correct photographs of a DLP using a fast shutter speed (e.g. 1/1000sec) without odd-looking color, you gain enough experience to handle this complexity. BFI-splicing a DLP is sometimes harder than correctly fast-shutter-photographing a DLP.
If you do it precisely enough, if your shadow edges are sharp (remember, putting a hand in front of a projector creates blurry shadows -- a BFI wheel will have a blurry-edged shadow, especially if the BFI wheel is close to the projector lens), and if you time it to within a tight error margin, then you might be able to do it without too many artifacts except simple color depth loss (from hiding part of the Texas Instruments temporal dither cycle). But the longer the BFI, the more temporal dither you're hiding, and your color depth falls. Eventually your low-persistence DLP image looks worse than 16-color EGA, because in trying to reduce 90% of DLP persistence by blacking out 90% of the temporal dither, your 24-bit temporal dither falls to just 2.4 bits of temporal dither. That's barely more than 4 colors. Ugh.
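That color-depth arithmetic, as a back-of-envelope calculation (using the 24-subfields-per-frame example from above):
Code:
subfields = 24          # temporal "bits" per 60Hz frame in the example above
for blackout in (0.0, 0.5, 0.9):
    visible = subfields * (1 - blackout)
    print(f"{blackout:.0%} BFI blackout -> ~{visible:.1f} temporal bits left")
# 0% -> 24.0, 50% -> 12.0, 90% -> ~2.4 temporal bits: barely a handful of colors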
You may be able to get some useful BFI results, but the degradations (even if you're super careful and super precise) will invariably be far worse than the improvements from the low-persistence. It's a very high-effort thing.
___________
That's why you prefer MitM processors, not external mechanical BFI, with a DLP. For example, the 240Hz Optoma can fall to 1/4th persistence if you use a MitM processor to display each 60Hz refresh cycle in 1/240sec, with 3 black frames in between. Then you're not violating the proprietary temporal ditherer -- you no longer have to worry about items (2) or (3) if you're letting the DLP projector handle it, instead of imposing on it mechanically without knowing TI's own proprietary 1440-framebuffers-per-second 1-bit error-diffused temporal-color-dithering technique.
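The "1/4th persistence" figure is just the ratio of the visible-frame time to the full sample-and-hold time:
Code:
content_hz, output_hz = 60, 240
visible_time = 1 / output_hz        # each 60Hz frame shown for one 240Hz refresh
hold_time = 1 / content_hz          # persistence without BFI (full sample-and-hold)
print(visible_time / hold_time)     # 0.25 -> 1/4th persistence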
Do you understand better?
Completely separate question from the above. It's like asking how to heal an elbow when your problem is a broken toe. Or asking for Mac-specific instructions when you only have a PC.
But it's still a useful question (you might have both a broken elbow and a broken toe.... and you might own both a PC and a Mac).... because you have multiple problems to solve concurrently.
So yes, this is yet another problem you have to solve (above-and-beyond the other, unrelated problem I just described above).
Older DLP projectors did not do it, but newer DLP projectors can phase-multisync. The colorwheel will slow down/speed up slightly, and the DLP mirror switching rate will slow down/speed up to match 59.94Hz or 60Hz. So if you input 59.94Hz into a DLP, the mirrors may slow down from 1440Hz to 1438.56Hz. So newer DLPs will autophase. Unfortunately, you still have the massively complex problem of needing to splice correctly at the right signal-subrefresh DLP-chip-level refresh, without screwing up the temporal dither sequence calculated inside the ASIC/FPGA.
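Those mirror-rate numbers follow from keeping the subfield count per frame constant while the frame rate shifts:
Code:
subfields_per_frame = 24                     # fixed by the dither sequence
for signal_hz in (60.0, 59.94):
    mirror_hz = subfields_per_frame * signal_hz
    print(f"{signal_hz}Hz signal -> {mirror_hz:.2f}Hz mirror switching")
# 60.0Hz -> 1440.00Hz, 59.94Hz -> 1438.56Hz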
This is why external mechanical BFI on a true sample-and-hold LCD (with a non-PWM backlight) or OLED (not using the PWM method) is much simpler: you only have one refresh rate to worry about.
- No DLP-switch refresh rate
- No colorwheel refresh rate
- No internal temporal pulsing that is difficult to synchronize with your external pulsing
- LCD/OLED raster scanout is the same velocity as signal raster scanout.
- Scan-conversion in the scaler/TCON is not necessary on LCD/OLEDs
So you have fewer variables. BFI is a shutter, subject to the same laws of physics of color mixing as a fast camera shutter interrupting temporally-dithered color.
As a general rule of thumb, a display that looks accurate in a ~1/1000sec camera-shutter photograph is reasonably easy to mechanical-BFI. Think of BFI as a shutter for your eyes: you need to understand camera shutter science (artifacts in a photograph) to partially understand BFI shutter science (artifacts captured by your eyes).
It's much lower effort when the only thing to worry about is 1/60sec, rather than reconciling easy-sync 1/60sec AND hard-to-sync 1/240sec AND hard-to-sync 1/1440sec. Which means:
- For mechanical BFI, you want to only deal with a display that refreshes in sync with signal (CRT, LCD, OLED), so you don't have to worry about the subrefresh behaviours ruining the BFI quality.
- For MitM BFI (video processor method) where BFI are full signal refresh cycles, you don't have to worry about the subrefresh behaviour.
For MitM, you don't even need to know the input Hz exactly. If you're getting a 60Hz input and producing a 240Hz output in a MitM box, you already have APIs to know the input frame rate and output frame rate, and you can process accordingly.
And even if you didn't know the input rate, it can be safely assumed in MitM if you accept 1 frame of latency -- you can at least know the approximate ratio, since you can simply use 240Hz Direct3D VSYNC ON to output 1 visible frame and 3 black frames in a high-precision thread (higher CPU priority and process priority than other parts of the operating system), to create reliable BFI without needing to know the exact input Hz (e.g. 59.9375Hz input and 239.997Hz output). You might get a single stutter or two once every minute, but you wouldn't have erratic BFI flicker at all, if properly done at a sufficiently high priority.
You're just letting the external device handle its own subrefresh processing, not needing to worry about the colorwheel speed, and not needing to worry about the DLP pixel switching speed, because you're not doing subrefresh BFI (e.g. BFI at a fraction of the output refresh cycle) and can thus simply sync via ultra-low-latency variants of Direct3D VSYNC ON (either NVIDIA NULL, the RTSS Scanline Sync technique, or the emulator low-lag VSYNC ON technique programmed directly into RetroArch -- these have only 1 frame of latency, sometimes less, unlike ordinary video game VSYNC ON). A minimal sketch of this BFI cadence follows below.
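As referenced above, here's a minimal sketch of that MitM BFI cadence. The present(), get_latest_frame(), and black_frame names are hypothetical placeholders; present() stands in for a real VSYNC-ON swap call (e.g. a Direct3D present) that blocks until the next 240Hz refresh:
Code:
BFI_RATIO = 4   # 240Hz output / 60Hz content: 1 visible refresh + 3 black refreshes

def bfi_loop(get_latest_frame, present, black_frame):
    """Run at high thread/process priority; present() is assumed to block
    until the next 240Hz VSYNC, which paces the loop without needing to
    know the exact input Hz."""
    while True:
        present(get_latest_frame())        # visible: 1/240sec of persistence
        for _ in range(BFI_RATIO - 1):
            present(black_frame)           # three black refreshes fill out the 60Hz period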