I salute your diligence and dedication, sir.
Mike/Vsat and I were discussing time-produced vs. allpass-produced "stagger" yesterday, and one of the things that occurred to me is that with time-produced stagger (i.e., the so-called "dry" signal is staggered/delayed by a small fixed amount so that the swept-delay signal has the opportunity to arrive at the mixing stage *before* the staggered signal once in a while), the equivalent "phase delay" of that fixed offset, measured in cycles, varies across the spectrum.
I'll try to express it more clearly. If I delay the fixed signal by 1 msec, the cancellation that results when it is combined with a swept signal does not occur instantaneously at all frequencies. If I have a 10 kHz component in the swept path, roughly ten cycles of that signal (each lasting only 0.1 msec) will actually reach the mixer stage BEFORE the 1 msec-delayed fixed signal gets there. Naturally, the number of cycles that pass before any cancellation occurs (remember, it has to be the same waveform, in anti-phase versions, at the mixing junction for any cancellation to occur) will be fewer for lower frequencies and greater for higher frequencies.
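To put some rough numbers on that (just my own back-of-the-envelope arithmetic; the 1 msec stagger is only the example figure from above), here's a little Python sketch of how many cycles of a swept-path component go by before the staggered dry signal shows up at the mixer:

# Back-of-the-envelope sketch: cycles of a swept-path component that elapse
# before a fixed 1 msec "stagger" delay on the dry path reaches the mixer.
stagger_ms = 1.0  # assumed fixed delay on the dry/staggered path

for freq_hz in (100, 1000, 10000):
    period_ms = 1000.0 / freq_hz              # duration of one cycle, in msec
    cycles_elapsed = stagger_ms / period_ms   # cycles that arrive before the dry signal
    print(f"{freq_hz:>6} Hz: period = {period_ms:5.2f} msec, "
          f"{cycles_elapsed:4.1f} cycles pass before the staggered signal arrives")

At 100 Hz you haven't even finished a tenth of a cycle in that 1 msec; at 10 kHz, ten whole cycles have already gone by.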
In a sense, you not only have variation in the distribution of notches with flanging, but also a differential ONSET of cancellation because of those time differences. Not having done the proper experiments, I obviously cannot force the issue and say phase delay will absolutely NOT yield the same sort of effect as true sample/time-based stagger, but the fact that musical effects are used with constantly varying, rather than steady-state, input signals means one has to set aside all those assumptions derived from textbooks and steady-state signals, and consider real-world signals.
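For anyone who wants to poke at the phase-delay side numerically, here's a rough sketch, purely my own construction: a single first-order digital allpass with an arbitrary coefficient stands in for whatever a real allpass-stagger circuit would use, set against a one-sample pure delay. The pure delay holds the same delay at every frequency, while the allpass's equivalent delay drifts across the spectrum:

# Rough comparison (my own sketch, not anybody's actual circuit): phase delay of a
# one-sample pure delay vs. a first-order digital allpass, H(z) = (a + z^-1)/(1 + a*z^-1).
import numpy as np

fs = 48000.0                         # assumed sample rate
a = 0.5                              # arbitrary allpass coefficient
freqs = np.array([100.0, 1000.0, 5000.0, 10000.0, 20000.0])
w = 2 * np.pi * freqs / fs           # normalized angular frequency (radians/sample)

# Pure delay z^-1: phase = -w, so phase delay is exactly 1 sample at every frequency.
pure_delay = np.ones_like(w)

# First-order allpass: flat magnitude, but its phase delay changes with frequency.
z = np.exp(1j * w)
H = (a + 1/z) / (1 + a/z)
allpass_delay = -np.unwrap(np.angle(H)) / w   # phase delay in samples

for f, dp, da in zip(freqs, pure_delay, allpass_delay):
    print(f"{f:8.0f} Hz: pure delay {dp:.2f} samples, allpass {da:.2f} samples")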
My gut sense is that true time-based stagger will produce cancellations in a different manner than phase-delay-produced stagger, because of such differences in cancellation onset. Again, that may just be a different-feeling TZF, and not necessarily a worse, better, or un-TZF-ey TZF.
A reasonable inference? You tell me.