Here's the short explanation.
The square mixer signal would be perfect if it placed rising and falling edges exactly where they ought to be. This happens when sampling instants line up with the desired edge locations (i.e. when the sampling rate is an integer multiple of twice the carrier frequency). Ordinarily we do not operate in this regime, so we place edges near (but not exactly) where they should be.
When the square mixer signal is imperfect, it may be understood as the sum of a perfect square-mixer signal and an error signal. At low carrier frequencies, the error signal is a telegraph signal with low energy relative to the ideal signal. In the frequency domain, the error manifests as a 1/f roll-off in the spectrum of the "real" switching signal, compared to the impulses of the "ideal" switching signal.
As the carrier frequency increases, the energy in the error signal relative to the ideal signal increases. In addition, the width of the 1/f roll-off should (?) increase. My worry was that the mixer became increasingly noisy at higher carrier frequencies, and that its performance would be severely compromised.
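To put rough numbers behind this worry, here's a minimal numerical sketch (not the DMFD implementation) of the decomposition above. The sampling rate and carrier frequencies are placeholders, and the fine "oversampled" grid simply stands in for continuous time so that the ideal edges can fall between sampling instants.

<verbatim>
import numpy as np

fs = 25e6          # placeholder sampling rate
oversample = 64    # fine grid standing in for continuous time
T = 1e-3           # observation window, seconds

t = np.arange(0.0, T, 1.0 / (fs * oversample))  # "continuous" time axis

def square_mixer(fc, edge_grid=None):
    """Unit-amplitude square wave at fc. If edge_grid is given, the
    waveform may only change state on multiples of edge_grid (i.e. at
    sampling instants), like a digitally generated mixer."""
    tt = t if edge_grid is None else np.floor(t / edge_grid) * edge_grid
    return np.where((fc * tt) % 1.0 < 0.5, 1.0, -1.0)

for fc in (100e3, 1e6, 5e6):
    ideal = square_mixer(fc)                  # edges exactly where they belong
    real = square_mixer(fc, edge_grid=1/fs)   # edges snapped to sample instants
    err = real - ideal                        # telegraph-like error (values 0, +/-2)
    rel = np.sum(err**2) / np.sum(ideal**2)
    print(f"fc = {fc/1e6:4.1f} MHz   error energy / ideal energy = {rel:.3f}")
</verbatim>

The printed ratio grows roughly in proportion to fc/fs, which is the effect described above; an FFT of err also shows its energy spread broadly in frequency rather than concentrated at the carrier harmonics.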
False alarm, I think. Here's an illustration.
The series of images below was taken at different carrier frequencies. In each case, a signal at (fc + 100 Hz) is synthesized by an external function generator and demodulated at (fc) by the DMFD; the demodulated signal should be a 100 Hz sinusoid. [n.b. Frequency drift (e.g. thermal) and offset (e.g. crystal inaccuracy) are compensated so that all demodulated signals are centered at 100 Hz.]
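For reference, the measurement can be mimicked numerically. The sketch below (again with placeholder rates, and a plain FIR decimator standing in for whatever filtering the DMFD actually applies) mixes a tone at fc + 100 Hz against a square mixer at fc and checks that the dominant baseband component lands at 100 Hz.

<verbatim>
import numpy as np
from scipy import signal

fs = 25e6            # placeholder ADC sampling rate
fc = 1e6             # mixer (carrier) frequency
f_in = fc + 100.0    # tone from the external function generator
T = 0.1              # 100 ms record -> roughly 10 Hz frequency resolution

n = np.arange(int(T * fs))
tone = np.cos(2 * np.pi * f_in * n / fs)            # input signal
mixer = np.sign(np.cos(2 * np.pi * fc * n / fs))    # square mixer at fc
mixed = tone * mixer                                # demodulation product

# Two-stage FIR decimation (32 x 32 = 1024) keeps only the baseband term;
# mixing products from the square wave's odd harmonics land far above the cutoff.
dec = 32
baseband = signal.decimate(signal.decimate(mixed, dec, ftype='fir'), dec, ftype='fir')

# Locate the strongest non-DC baseband component (expected near 100 Hz).
spec = np.abs(np.fft.rfft(baseband * np.hanning(len(baseband))))
freqs = np.fft.rfftfreq(len(baseband), dec * dec / fs)
print(f"dominant baseband tone: {freqs[np.argmax(spec[1:]) + 1]:.1f} Hz")
</verbatim>

The decimation is done in two stages simply to keep each FIR filter short.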