Product Documentation
Spectre Circuit Simulator RF Analysis Theory
Product Version 23.1, June 2023



Basic Reference Information

Spectre® circuit simulator RF analysis (Spectre RF) adds capabilities to the Spectre circuit simulator (Spectre) that are particularly useful to analog and RF designers.

You can use Spectre RF simulation in combination with both the accurate Fourier analysis capability of Spectre and the Verilog®-A behavioral modeling language to simulate RF circuits. This book describes the theory of operation of Spectre RF simulation.

Spectre RF simulation brings many concepts to the Spectre simulator.

Periodic Analyses

Modulated periodic noise, time domain noise and jitter analyses have been deprecated. For more information on the current use model of periodic noise, refer to the Spectre Circuit Simulator and Accelerated Parallel Simulator RF Analysis in ADE Explorer User Guide.

Periodic Steady-State (PSS) analysis is a large-signal analysis that directly computes the periodic steady-state response of a circuit with a simulation time that is independent of the time constants of the circuit. PSS quickly computes the steady-state response of circuits that exhibit extremely long time constants, such as high-Q filters and oscillators.

Periodic Small-Signal (PAC, PSP, PXF, Pnoise and PSTB) analyses are similar to the conventional small-signal analyses (AC, SP, XF, Noise and STB) but you can apply them to periodic circuits where frequency conversion plays a critical role. The conventional small-signal analyses (AC, SP, XF, Noise and STB) linearize about the DC or time-invariant operating point and they do not include frequency conversion effects.

After a PSS analysis, the circuit is linearized about a periodic (time varying) operating point with frequency conversion effects included. After the PSS analysis, you can perform one or more of the periodic small-signal analyses. Example circuits where you might use the periodic small-signal analyses include conversion gain in mixers, noise in oscillators, and switched-capacitor filters.

Quasi-Periodic Analyses

Quasi-Periodic Steady-State (QPSS) analysis is a large signal analysis you can use for circuits with multiple large tones. QPSS analysis computes the steady-state responses of a circuit driven by two or more signals at unrelated frequencies. You select one sinusoidal or pulse signal as the large signal. Any additional signals, called moderate signals, must be sinusoids.

Quasi-Periodic Small-Signal (QPAC, QPSP, QPXF, and QPnoise) analyses are similar to the periodic small-signal analyses (PAC, PSP, PXF, and Pnoise) but they extend to circuits where frequency conversion and intermodulation effects both play a critical role.

After a QPSS analysis to determine the quasi-periodic operating point and to linearize the circuit about the quasi-periodic operating point, you can perform one or more of the quasi-periodic small-signal analyses. It is the quasi-periodically time-varying nature of the linearized circuit that accounts for the frequency conversion and intermodulation. Example circuits where you might use the quasi-periodic small-signal analyses include conversion gain in mixers, noise in oscillators, switched-capacitor filters and other periodically or quasi-periodically driven circuits.

Envelope Analysis

Envelope analysis allows RF circuit designers to efficiently and accurately predict the envelope transient response of the RF circuits used in communication systems.

The Simulation Engines

Spectre RF provides a choice of simulation engines between the shooting method and the harmonic balance method (HB) with most analyses. The harmonic balance engine complements the capabilities of the shooting method.

Harmonic Balance (HB) Engine

The harmonic balance engine supports frequency domain harmonic balance analyses. It provides efficient and robust simulation for linear and weakly nonlinear circuits. The harmonic balance engine is supported on the Solaris, Linux, HP and IBM platforms for both 32 and 64 bit architectures. See “The Harmonic Balance Engine” for more information on this simulation engine.

Time Domain (TD) Engine

The Spectre RF simulator has traditionally used an engine known as the shooting method [kundert90] to implement periodic and quasi-periodic analyses and the envelope analysis. The shooting method is a time domain method and it is used in most descriptions and examples in this manual.

Periodic Steady-State Analysis

Periodic Steady-State (PSS) analysis directly computes the periodic steady-state response of a circuit. Spectre RF simulation has traditionally used an engine known as the shooting method [kundert90] to implement PSS analysis. Now, Spectre RF supports a choice of simulation engines between the shooting method and the harmonic balance method. See “The Harmonic Balance Engine” for more information.

The Shooting Method

The shooting method is used in the majority of examples in this manual. The shooting method is a time domain method that operates by efficiently finding an initial condition that directly results in steady state, as illustrated in Figure 1-1.

Figure 1-1 The Shooting Method

The shooting method is an iterative method that begins with an estimate of the desired initial condition. The shooting method computes the initial condition that results in the signals being periodic, as defined by vf − vi = Δv = 0. The circuit is evaluated for one period starting from the initial condition. The final state (vf) of the circuit is computed along with the sensitivity of the final state with respect to the initial state (∂vf/∂vi). The nonperiodicity (Δv = vf − vi) and the sensitivities are used to compute a new initial condition. If the final state is a linear function of the initial state, then the new initial condition directly results in periodicity. Otherwise, additional iterations are needed.

Shooting methods require few iterations if the final state of the circuit after one period is a near-linear function of the initial state. This is generally true even for circuits that react in a strongly nonlinear fashion to large stimuli (such as the clock or local oscillator) applied to the circuit. Because the circuit is assumed to be periodic over the shooting interval, the stimulus must also be periodic over that same interval. The shooting method integrates over a whole number of periods of the stimulus, which minimizes the nonlinearity in the relationship between the initial and final state and minimizes the number of iterations needed for convergence. Typically, shooting methods need about five iterations on an average circuit and have little difficulty with the strongly nonlinear behavior that occurs within the shooting interval. This is a strength of shooting methods over other steady-state methods such as harmonic balance [kundert90]. See “The Harmonic Balance Engine” for information on the Spectre RF harmonic balance engine.
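The iteration described above can be sketched for a one-state circuit. The following toy Python example uses forward-Euler transient integration and a finite-difference sensitivity; it is a minimal illustration of the shooting idea, not Spectre's algorithm, and the circuit values (an RC low-pass driven by a square wave) are illustrative.

```python
# Shooting-method sketch for a single-state circuit:
#   dv/dt = (s(t) - v) / tau, with s(t) periodic with period T.
# All names and element values here are illustrative, not Spectre internals.
T, tau = 1.0, 0.3

def s(t):
    # periodic square-wave stimulus (the "clock")
    return 1.0 if (t % T) < 0.5 * T else 0.0

def integrate_one_period(v0, steps=2000):
    """Transient-integrate from initial state v0 over one period T."""
    v, dt = v0, T / steps
    for n in range(steps):
        v += dt * (s(n * dt) - v) / tau   # forward Euler for brevity
    return v

# Newton iteration on the initial condition: find v0 such that vf(v0) = v0.
v0 = 0.0
for it in range(10):
    vf = integrate_one_period(v0)
    dv = vf - v0                          # nonperiodicity, delta-v
    if abs(dv) < 1e-9:
        break
    eps = 1e-6                            # finite-difference sensitivity d vf / d v0
    sens = (integrate_one_period(v0 + eps) - vf) / eps
    v0 = v0 - dv / (sens - 1.0)           # Newton update on the initial condition

print(it, abs(integrate_one_period(v0) - v0))
```

Because this toy circuit is linear, the final state is an exactly linear function of the initial state, so a single Newton update lands on the periodic solution, mirroring the near-linearity argument above.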

One reason why shooting methods are not commonly used is that they can become quite slow on larger problems. Shooting methods form a full N × N sensitivity matrix, where N is the number of equations needed to represent the circuit. During the course of the algorithm, this matrix must be inverted, which requires N³ operations. For a large circuit, doubling the size of the circuit increases the simulation time by a factor of eight. The cubic rise in computational complexity as a function of circuit size makes this algorithm impractical for larger circuits.

Spectre RF simulation uses proprietary algorithms that eliminate the superlinear rise in computation time with problem size [telichevesky95]. Typically, the time required to directly compute the periodic steady-state response of a circuit is about the same as the time required to compute 4 to 5 periods of the circuit’s transient response. As a result, the PSS analysis is capable of directly finding the periodic steady state of very large circuits.

Distributed Components and Hidden State in PSS Analysis

PSS analysis does not support distributed components or components with hidden state. The difficulty with distributed components results from the fact that the state of a distributed component is a waveform, not a simple number, as with capacitors and inductors. To know the state of a distributed component, it is necessary to know the voltage and current along the whole length of the component. The sensitivity of the final state with respect to the initial state would be an infinite dimensional matrix. All of this greatly complicates the shooting method. This limitation prevents the use of ideal delays, transmission lines, microstrip lines, and nports with the PSS analysis.

The Transmission Line Modeler (LMG), which is available with Spectre RF, generates transmission line models that you can use in Spectre RF simulations. See the Spectre Circuit Simulator and Accelerated Parallel Simulator RF Analysis in ADE Explorer User Guide for more information.

The problem of components with hidden state results more from practical difficulties than from computational difficulties. Some components have an internal state that is hidden from the PSS analysis. Because the PSS analysis cannot access this hidden state, it cannot set it at the beginning of the period, nor can it compute the sensitivity of the final state to the initial state. Components that contain hidden state include the z-blocks and some components described using the Verilog-A® behavioral modeling language. Components described in the Verilog-A language that use values computed from a previous time step and saved in local variables to affect the behavior at a later time step are said to have hidden internal state. (Conversely, there is no problem if the only state is that held on internal or external nodes using the integ and dot operators.) You cannot use Verilog-A language components that use internal hidden state with the PSS analysis.

Fundamental Assumptions for PSS Analysis

There are two fundamental assumptions that apply to PSS analysis: the periodicity assumption and the linearity assumption.

Understanding these two assumptions and their consequences allows you to anticipate whether you can apply PSS successfully to your problem.

Periodicity Assumption

PSS analysis assumes that during the shooting interval, all stimuli are periodic and that the circuit supports a T-periodic response, where T is the period specified for the PSS analysis. If the circuit is driven with more than one periodic stimulus, then the frequencies must all be commensurate or co-periodic, and T must be the common period or some integer multiple of the common period. Efficiency of the simulation drops when T is long compared to the periods of the stimuli.
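For commensurate stimuli, the common period is determined by the tone frequencies. The following Python sketch computes it exactly with rational arithmetic; the frequencies are illustrative, and the helper name is hypothetical.

```python
from fractions import Fraction
from math import gcd

# Commensurate tones: the PSS period T must be the common period of all
# stimuli (or an integer multiple of it).  Frequencies are illustrative.
freqs_hz = [Fraction(1_000_000_000), Fraction(750_000_000), Fraction(250_000_000)]

def common_period(freqs):
    """Common period = 1 / gcd(frequencies), for exact rational frequencies."""
    g = freqs[0]
    for f in freqs[1:]:
        # gcd of two fractions a/b and c/d (lowest terms) is gcd(a, c) / lcm(b, d)
        num = gcd(g.numerator, f.numerator)
        den = (g.denominator * f.denominator) // gcd(g.denominator, f.denominator)
        g = Fraction(num, den)
    return 1 / g

T = common_period(freqs_hz)   # gcd(1 GHz, 750 MHz, 250 MHz) = 250 MHz
print(T)                      # common period = 4 ns, as an exact Fraction
```

A long common period relative to the individual tone periods is exactly the situation where, as noted above, simulation efficiency drops.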

In rare cases, circuits with a periodic stimulus generate subharmonics, which PSS can handle if you set the period to be that of the subharmonic. In other cases, periodically driven circuits respond chaotically, as delta-sigma modulators do. In this case, you must use transient analysis rather than PSS analysis.

If the period T used by PSS is not an integral multiple of the period specified for each built-in time-varying independent source present in the circuit, the simulator issues a warning message and skips the PSS analysis. If you use the Verilog-A language to describe independent sources, then you must ensure that the sources are co-periodic with the analysis period T. Failure to do so generally prevents PSS analysis from converging, although in rare cases PSS analysis converges to an incorrect result.

If the circuit is driven by T-periodic stimuli but does not have a T-periodic solution (for example, if the solution is chaotic), PSS analysis fails to converge. In this case, you should use transient analysis to simulate the circuit. Occasionally a circuit generates subharmonics, as is the case with frequency dividers. In this situation, you should specify the PSS period to be equal to that of the subharmonics.

If PSS analysis fails to converge because the circuit does not have a T-periodic solution (which might result if the circuit is not driven by T-periodic stimuli), it is considered a user error, so you are responsible for finding and fixing the problem.

Linearity Assumption

In the circuit under simulation, the relationship between the initial and final points over the shooting interval should be near-linear. The more nonlinear the relationship between the initial and final points, the greater the number of iterations needed by the PSS analysis, and the longer the PSS analysis takes. If the relationship is sufficiently nonlinear, PSS analysis might not converge.

If convergence is achieved, then there are no further consequences (no degradation of accuracy). Failing to converge in this situation is considered a failure of the simulator and should be reported to Cadence. (Send a description of the problem along with the input files [netlist, Verilog-A files, model files] and the log file to spectrerf@cadence.com). Occasionally delaying the starting time of the shooting interval can improve convergence. Arrange for the shooting interval to start at a time when signals are quiescent or changing slowly.

Periodic Small-Signal Analyses

The conventional Spectre small-signal analyses (AC, SP, XF, and Noise) are useful for characterizing a wide variety of analog circuits [kundert95]. For example, you can use them to determine the frequency response, input and output impedances, loop gain characteristics, and supply rejection of circuits such as amplifiers and filters.

However, from an RF perspective, an important shortcoming of the Spectre small-signal analyses is that you cannot apply them to circuits that strongly exhibit frequency translation, either as a desired or inadvertent consequence of the design of the circuit. Commonly, you find these circuits in both analog and RF designs.

You cannot simulate such circuits with conventional small-signal analyses because they exhibit a nonnegligible amount of frequency translation. The conventional analyses begin by linearizing the circuit about a quiescent operating point (the DC solution). The small-signal analyses then construct a linear time-invariant model of the circuit and compute the steady-state solution using phasor analysis. These conventional analyses cannot handle frequency translation because they analyze a linear time-invariant representation of the circuit, and linear time-invariant networks do not exhibit frequency translation.

Spectre RF simulation provides the periodic small-signal analyses: PAC, PSP, PXF, and Pnoise [telichevesky96b]. These analyses start by linearizing the circuit about the periodically time-varying operating point computed by a preceding PSS analysis. A periodically time-varying circuit does exhibit frequency translation, so the periodic small-signal analyses can accurately model frequency translation effects. Instead of using conventional small-signal analyses on amplifiers and filters, you can now make measurements on periodic circuits that exhibit frequency translation using the periodic small-signal analyses.

Applying a periodic small-signal analysis is a two-step process: a PSS analysis first computes the periodic operating point, and the periodic small-signal analysis then computes the small-signal response about that operating point.

For the two-step process of applying the periodic small-signal analyses to be accurate, the input signals applied in the second step must be small enough so that the circuit does not respond to these signals in a nonlinear fashion in any significant way. This is not true of the signals applied in the first step. The only restriction on the large signals used in the initial PSS analysis is that they must be periodic.

This two-step process is widely applicable because most circuits that translate frequency are designed to react in a strongly nonlinear manner to one stimulus (the LO or the clock), while at the same time they react in a nearly linear manner to other stimuli (the inputs). A mixer is a typical example. Its noise and conversion characteristics improve if it is discontinuously switched between two states by the LO, yet it must respond linearly to the input signal over a very wide dynamic range.

Because the periodic small-signal analyses are performed on a linear representation of the circuit, the computed response is a linear function of the stimulus, regardless of the size of the stimulus. In the real circuit, the input must be small enough to avoid violating the assumptions of the small-signal analysis. However, after the circuit has been linearized, the amplitude chosen for the small stimuli is arbitrary. Typically the stimulus is given an amplitude of 1 and a phase of 0 (zero) to allow transfer functions to be computed directly. In this regard, the periodic small-signal analyses are similar to conventional small-signal analyses.

Conventional small-signal analyses do not model frequency translation, so between any input and output there is only one frequency-dependent transfer function. However, with the periodic small-signal analyses, there are many transfer functions between any single input and output. In fact, there are as many transfer functions as there are harmonics in the periodic operating point, which is always zero, one, or infinite.

In practice, usually only one or two transfer functions are interesting. For example, when analyzing the down-conversion mixers found in receivers, the desired transfer function is the one that maps the input signal at the RF to the output signal at the IF, which is usually the LO minus the RF. The Spectre RF simulator using the shooting method, unlike harmonic balance simulators, internally computes the response of the circuit in the time domain and converts the results into the frequency domain using the new Fourier Integral approach to Fourier analysis [kundert94].
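The multiplicity of transfer functions can be illustrated by enumerating the sideband frequencies produced by each harmonic of the periodic operating point. The following Python sketch does this for a hypothetical down-conversion mixer; the LO and RF frequencies are illustrative.

```python
# Each harmonic k of the periodic operating point contributes a transfer
# function mapping an input at f_in to outputs at |k*f_LO - f_in| and
# k*f_LO + f_in.  Frequencies below are illustrative, not from any design.
f_lo, f_rf = 1.00e9, 0.90e9   # Hz

def output_bands(f_lo, f_in, max_harm=3):
    """Enumerate the sideband frequencies for harmonics 0..max_harm."""
    bands = set()
    for k in range(max_harm + 1):
        bands.add(abs(k * f_lo - f_in))   # lower sideband of harmonic k
        bands.add(k * f_lo + f_in)        # upper sideband of harmonic k
    return sorted(bands)

bands = output_bands(f_lo, f_rf)
f_if = min(b for b in bands if b > 0)     # desired IF = f_LO - f_RF = 100 MHz
print(f_if, bands[:4])
```

Only one of these bands (here the 100 MHz IF) is typically of interest, but a sideband exists for every harmonic, which is why the number of transfer functions grows with the number of harmonics.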

The accuracy of the internal time-domain solution is completely independent of the number of harmonics requested. In addition, because Spectre RF analyses use the Fourier Integral rather than discrete Fourier transforms, the accuracy of the frequency-domain results is also independent of the number of harmonics requested. With Spectre RF analyses, you are free to request as few or as many harmonics as desired without concern for the accuracy of the final result. The only exception to this general rule is the periodic noise analysis, Pnoise. Pnoise analysis models the noise folding that occurs in periodically varying circuits. If you request fewer harmonics, fewer folds are included in the accumulated total.

Another important advantage that the combination of the PSS analysis using the shooting method and the periodic small-signal analyses has over methods based on harmonic balance is that it is efficient even if the circuit is responding in a strongly nonlinear manner to the LO or the clock. As a result, you can apply Spectre RF analyses to such strongly nonlinear circuits as switched-capacitor filters, switching mixers, chopper-stabilized amplifiers, PLL-based frequency multipliers, sample-and-holds, and samplers [konrath96]. Note that the combination of the PSS analysis using the harmonic balance method and the periodic small-signal analyses efficiently analyze weakly nonlinear circuits.

Fundamental Assumptions for the Periodic Small-Signal Analyses

There are two fundamental assumptions that apply to the periodic small-signal analyses: linearity and analysis frequency.

Understanding these assumptions and their consequences lets you anticipate whether you can apply the periodic small-signal analyses to your problem.

Linearity

The periodic small-signal analyses all assume that the circuit responds linearly to the applied stimulus, whether sinusoidal (PAC, PXF), noise (Pnoise), or both sinusoidal and noise (PSP). There is no such assumption concerning the periodic signals (such as the LO or clock) applied in the initial PSS analysis. Spectre RF simulation is not capable of computing the distortion caused by the small signals, although you can use the small signals to measure distortion caused by the large signals present in the PSS analysis.

If the signals present in the circuit during a periodic small-signal analysis are large enough to drive the circuit to nonlinear behavior, then the results computed by simulation differ from the results produced by the circuit. If you consider PAC and PXF analyses to be computing transfer functions, which by definition assume arbitrarily small input signals, then this is not an issue. However, Spectre RF simulation can compute inaccurate results on strongly nonlinear circuits that exhibit very high levels of noise, such as when a circuit exhibits high levels of jitter (low-Q, highly nonlinear circuits such as relaxation and ring oscillators).

Analysis Frequency

Internally, the periodic small-signal analyses compute transfer functions using time domain techniques. The time-steps used in these time-domain computations are the same as those used in the preceding PSS analysis. In order for the transfer function to be accurate, the period of the small sinusoidal stimulus must be large compared with the largest time-step used during the PSS analysis.

If the analysis frequency of the periodic small-signal analysis is too high, the accuracy of the results degrades. The Spectre RF simulator warns you when it determines that you are requesting a frequency that is too high. You can use the maxacfreq parameter of the PSS analysis to specify the highest frequency that Spectre RF simulation can use in subsequent periodic small-signal analyses. PSS analysis then chooses the time-step to ensure that the results computed by the small-signal analyses are accurate. Specifying a very large maxacfreq causes both the PSS and small-signal analyses to run slowly and requires a large amount of memory.

Quasi-Periodic Steady-State Analysis

The Quasi-Periodic Steady-State (QPSS) analysis calculates the response due to multiple input frequencies. All of the input signals are treated in the same way as the PSS drive source so that the calculated output includes all the intermodulation distortion effects caused by frequency translation of all harmonics of the input signals. You can analyze the response of a circuit to a sum of signal sinusoids with PSS analysis by including them as components of the PSS drive source, but the resulting sum must be periodic. Spectre RF now supports a choice of simulation engines between the shooting method and the harmonic balance method. See “The Harmonic Balance Engine” for more information.

Because QPSS analysis allows arbitrary signal inputs, including sums of sinusoids that may not be periodic, you can think of it as a quasi-periodic extension of PSS analysis. You can also think of QPSS analysis as an extension of PAC analysis that allows signal inputs capable of producing third-order products.

Choose QPSS analysis when you need to compute the steady-state responses of a circuit driven by two or more signals at unrelated frequencies. Unlike PSS analysis, QPSS analysis does not require that multiple periodic stimuli be co-periodic. When you use QPSS analysis, you declare one signal as large. That signal can be a sinusoid or a pulse. Any additional signals, called moderate signals, must be sinusoids. You must specify the number of harmonics to be simulated for all moderate tones.

Quasi-Periodic Noise Analysis

The Quasi-Periodic Noise (QPnoise) analysis is similar to the conventional Pnoise analysis, except that, in addition, it includes frequency conversion and intermodulation effects. You can use QPnoise analysis to predict the noise behavior of mixers, switched-capacitor filters, and other periodically or quasi-periodically driven circuits.

QPnoise analysis linearizes the circuit about the quasi-periodic operating point computed in the prerequisite QPSS analysis. It is the quasi-periodically time-varying nature of the linearized circuit that accounts for the frequency conversion and intermodulation.

In addition, QPnoise analysis also includes the effect of a quasi-periodically time-varying bias point on the noise generated by the various components in the circuit. The time-average of the noise at the output of the circuit is computed in the form of a spectral density versus frequency. The output of the circuit is specified with either a pair of nodes or a probe component.

Envelope Analysis

Envelope (ENVLP) analysis allows RF circuit designers to efficiently and accurately predict the envelope transient response of the RF circuits used in communication systems. For example, you can apply ENVLP analysis to efficiently and accurately analyze modulation signals in large communication circuits.

Important applications of ENVLP analysis include the prediction of spectral regrowth in circuits such as mixers.

As a typical example, a designer might be interested in simulating a receiver signal path involving a mixer, in particular, to predict the spectral regrowth of the mixer. As shown in Figure 1-2, the inputs to a mixer can be a low-frequency (not necessarily periodic) digital modulation signal and a high-frequency LO. The result of the mixing is a modulated high-frequency signal. However, due to the nonlinearity of the mixer, unwanted harmonics might be generated, and it is important to predict the signal level at these unwanted harmonics in order to validate the design.

Figure 1-2 Time-Domain Modulation

Figure 1-3 shows a typical scenario in the receiver signal path. Due to the nonlinearity of the mixer, it is important to predict the resulting spectral regrowth. Spectral regrowth is extraordinarily expensive to simulate using traditional transient analysis because it requires simulating a very long time interval to achieve the required frequency resolution.

Figure 1-3 Spectral Regrowth Simulated Using Envelope Analysis

ENVLP analysis overcomes this difficulty with traditional transient analysis. ENVLP analysis reduces simulation time without compromising accuracy by exploiting the property that the behavior of a circuit in a given high-frequency clock cycle is similar, but not identical, to its behavior in the preceding and following cycles. In particular, ENVLP analysis follows the envelope of the high-frequency clock by accurately computing the circuit behavior over occasional cycles, which captures the fast transient behavior, while the slowly varying modulation is followed by a smooth curve. As a result, you can obtain the spectrum of the circuit response by combining the spectrum of the smooth curve with the spectrum of the occasional clock cycles.
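The cycle-skipping idea can be illustrated numerically. The following Python sketch resolves only every 50th carrier cycle of an amplitude-modulated toy signal and follows the envelope with a smooth interpolation between those cycles; the signal, sampling choices, and interpolation are all illustrative, not ENVLP's actual algorithm.

```python
import numpy as np

# Envelope-following sketch: the waveform in one carrier cycle changes
# slowly from cycle to cycle, so it suffices to resolve occasional full
# cycles and interpolate the envelope between them.  Toy values only.
f_c = 1.0e9                       # fast carrier (the "clock")
T_c = 1.0 / f_c
A = lambda t: 1.0 + 0.5 * np.cos(2 * np.pi * 1.0e6 * t)   # slow modulation

def peak_over_cycle(t0, pts=64):
    """Resolve one full carrier cycle accurately; return its peak value."""
    t = t0 + np.linspace(0.0, T_c, pts, endpoint=False)
    v = A(t) * np.cos(2 * np.pi * f_c * t)
    return v.max()

# Resolve only every 50th carrier cycle...
sample_times = np.arange(0.0, 1.0e-6, 50 * T_c)
env_samples = np.array([peak_over_cycle(t0) for t0 in sample_times])

# ...then follow the envelope with a smooth (here linear) curve between them.
t_dense = np.linspace(0.0, sample_times[-1], 501)
env_interp = np.interp(t_dense, sample_times, env_samples)
err = np.max(np.abs(env_interp - A(t_dense)))
print(err)
```

Even though only 2% of the carrier cycles are resolved, the interpolated envelope tracks the true modulation closely, which is the source of the speedup described above.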

The RF analysis types such as PSS (periodic steady-state analysis) [1] or QPSS (quasi-periodic steady-state analysis) [2] might not work directly because the modulation signal might be neither periodic nor quasi-periodic.

For switched-capacitor filters, the rapidly varying signal is literally the clock; in other applications, the corresponding signal might be referred to differently (for example, as the LO).

The clock signal is normally the most rapidly changing signal in the circuit and thus causes the most nonlinearity.

Spectre RF now supports a choice of simulation engines between the shooting method and the harmonic balance method.

The Harmonic Balance Engine

The harmonic balance (HB) engine supports the frequency-domain harmonic balance analyses and complements the existing analyses which use the time-domain shooting method (TD) engine. The harmonic balance engine performs efficient and robust simulation of linear and weakly nonlinear circuits on all platforms for both 32 and 64 bit architectures.

The harmonic balance engine (HB) is available for

The HB engine is supported on the Solaris, Linux, HP, and IBM platforms for both 32 and 64 bit architectures.

HB Method Theory

In the time-domain method, the signal waveform u(t) is solved according to circuit Equation 1-1, where s is the source.

i(u(t)) + dq(u(t))/dt = s(t)    (1-1)

At steady state, the solution u(t), the current i, and the charge q are represented by harmonics as shown in Equations 1-2, 1-3, and 1-4, where ω is the fundamental frequency.

u(t) = Σk U(k) e^(jkωt)    (1-2)

i(t) = Σk I(k) e^(jkωt)    (1-3)

q(t) = Σk Q(k) e^(jkωt)    (1-4)

Coefficients U(k), I(k), and Q(k) are the Fourier transforms of u(t), i(t), and q(t), respectively, at the kth harmonic. Notice that I(k) and Q(k) are functions of U(k), determined by i(u) and q(u).

Equivalent to the time-domain circuit Equation 1-1, the circuit equation in the frequency domain is shown in Equation 1-5, where S(k) is the Fourier transform of the periodic source s.

I(k) + jkω Q(k) = S(k)    (1-5)

The fundamental concept of the harmonic balance (HB) method is to solve for U(k) (the frequency-domain waveform) according to Equation 1-5, rather than solving for u(t) (the time-domain waveform) according to Equation 1-1. With the frequency-domain solution U(k) obtained from Equation 1-5, the HB method uses Equation 1-2 to recover the time-domain waveform u(t).

The HB method is very efficient for simulating circuits, such as low-noise amplifiers, that have only low-order harmonics. In these cases, the HB problem is small because only a few U(k) coefficients are needed to represent the waveform u(t) accurately. Mixers with a low moderate-tone power level can also be represented with low-order harmonics. In general, problems related to multi-tone simulation in QPSS can be reduced in scope or even eliminated by using the HB method.
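The frequency-domain formulation can be illustrated with a small numerical experiment. The following Python sketch solves a one-node toy circuit (resistor, capacitor, and cubic nonlinearity, driven by a sinusoid) for its periodic steady state, applying the time derivative spectrally as in Equation 1-5. This is a simple collocation-style variant of harmonic balance written for clarity, not Spectre's implementation; all element values are illustrative.

```python
import numpy as np

# Toy one-node circuit:  i(u) + d q(u)/dt = s(t),
# with i(u) = u/R + g*u**3,  q(u) = C*u,  s(t) = Is*cos(w*t).
# The unknown periodic waveform u is stored as N uniform time samples
# per period; its FFT gives the harmonic coefficients U(k).
R, C, g, Is = 1.0, 1.0, 0.2, 1.0
w = 2 * np.pi                       # fundamental frequency, period T = 1
K = 7                               # harmonics kept
N = 2 * K + 1                       # time samples per period
t = np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)    # harmonic indices 0..K, -K..-1
s = Is * np.cos(w * t)

def residual(u):
    # Frequency-domain circuit equation I(k) + j*k*w*Q(k) - S(k) = 0,
    # evaluated back in the time domain via spectral differentiation.
    i_t = u / R + g * u**3
    dq_dt = np.real(np.fft.ifft(1j * k * w * np.fft.fft(C * u)))
    return i_t + dq_dt - s

# Newton iteration with a numerically assembled Jacobian (fine at this size).
u = np.zeros(N)
for _ in range(50):
    F = residual(u)
    if np.max(np.abs(F)) < 1e-10:
        break
    J = np.empty((N, N))
    for j in range(N):
        e = np.zeros(N)
        e[j] = 1e-7
        J[:, j] = (residual(u + e) - F) / 1e-7
    u = u - np.linalg.solve(J, F)

U = np.fft.fft(u) / N               # harmonic coefficients U(k)
print(np.max(np.abs(residual(u))))
```

For this weakly nonlinear circuit the solution is dominated by the fundamental U(1), with only small higher harmonics, which is exactly the regime where HB is compact and efficient.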

The HB method also effectively supports frequency-dependent components. These are usually provided as frequency-dependent scalar data, such as in an S-parameter file.

HB Convergence Criteria

HB is designed to solve nonlinear equations. For example, you might have the equation

f(x) = 0    (1-6)

which can be solved by using the Newton method:

x_(n+1) = x_n − [∂f(x_n)/∂x]⁻¹ f(x_n)    (1-7)

When the following two conditions are met, the iterations are assumed to have converged.

||f(x_(n+1))|| < tolerance_residual    (1-8)

||x_(n+1) − x_n|| < tolerance_delta    (1-9)

To understand Spectre RF HB convergence, it is useful to know the following two norms, Resd_Norm and Delta_Norm:

In the HB Newton method, iterations are assumed to have converged when

Resd_Norm < 1    (1-14)

and

Delta_Norm < 1    (1-15)

The tolerance_residual is determined by the product of the reltol and residualtol parameters.

The tolerance_delta is determined by the product of the reltol, lteratio, and steadyratio parameters.
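The dual convergence test can be sketched in a few lines. The following Python example applies Newton's method to a scalar equation and declares convergence only when both the normalized residual and the normalized delta fall below 1; the parameter values (reltol, residualtol, lteratio, steadyratio) are illustrative, not Spectre's defaults.

```python
# Sketch of the dual convergence criterion: a Newton iteration is
# accepted only when BOTH the residual norm and the update (delta) norm
# are below their tolerances.  Scalar example with illustrative values.
reltol, residualtol = 1e-3, 1e-3
lteratio, steadyratio = 3.5, 1.0
tol_residual = reltol * residualtol             # cf. reltol * residualtol
tol_delta = reltol * lteratio * steadyratio     # cf. reltol * lteratio * steadyratio

def f(x):                      # equation to solve: f(x) = 0
    return x**3 - 2.0

def dfdx(x):
    return 3.0 * x**2

x = 1.0
for it in range(100):
    dx = -f(x) / dfdx(x)       # Newton update
    x += dx
    resd_norm = abs(f(x)) / tol_residual        # normalized residual
    delta_norm = abs(dx) / tol_delta            # normalized delta
    if resd_norm < 1.0 and delta_norm < 1.0:    # both criteria must hold
        break

print(it, x)                   # converges to the cube root of 2
```

Raising steadyratio enlarges tol_delta, which loosens only the delta criterion; this mirrors the remedy suggested below for the case where the residual is already small but the delta norm stays large.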

Occasionally, Resd_Norm becomes smaller than 1 but Delta_Norm remains large. To achieve convergence in a case like this, consider setting a bigger steadyratio value so that the tolerance_delta is larger. This approach is recommended because

HB Large Signal Simulation for Driven Circuits

The following parameters are provided to specify the HB method for large signal simulations of driven circuits.

Basic HB Parameters for Driven HB Analysis

flexbalance=no

The HB engine shares the same PSS and QPSS analysis statement with the time-domain engine. To run the harmonic balance engine, set flexbalance=yes in the analysis statement, or in the graphical user interface, select Harmonic Balance for Engine.

harms and maxharms

Determine how many harmonics are used to expand the waveform in Equation 1-2.

In PSS, specify the maximum harmonic with harms. In QPSS, specify the maximum harmonic for each tone with maxharms.

It is important to point out that harms in PSS and the first value in the maxharms vector in QPSS are output parameters in the time-domain method but they are input parameters in the HB method and have a direct impact on the accuracy and performance of the HB simulation.

The best choice for the harms or maxharms value depends on the signal waveform and circuit nonlinearity. The faster the signal varies with time, or the more nonlinear the circuit, the more harmonics are required to accurately represent the solution. In multi-tone mixers, the large LO tone has a higher power level than the moderate RF tones and causes more nonlinear effects, so you should usually use more harmonics for the large LO tone than for the moderate tones.

The best harms or maxharms value also depends on the order of nonlinear effect that you want to study. For example, for a mixer IP3 measurement, the maxharms value for the moderate tones must be 2 or more to capture the IM3 mode at frequency 2ω1−ω2−ωLO.

tstab

tstab is a valid parameter for the initial transient analysis in HB. The default tstab value for both PSS and QPSS is one cycle of the signal period. For QPSS analysis, you can choose a specific tone for the tstab integration and only one tone is allowed. One additional cycle is used for the FFT. When you set tstab to 0, the DC results are used as the initial condition for HB.

The following are example HB statements.

pss0 pss fund=1G harms=10 flexbalance=yes
qpss0 qpss funds=["LO" "RF1" "RF2"] maxharms=[5 3 3] flexbalance=yes 

Optional HB Parameters for Driven HB Analysis

oversamplefactor

In the HB method, the nonlinear parts of the functions i(u) and q(u) are evaluated at a series of sample time points tn. The I(k) and Q(k) values are obtained from Fourier transforms of i(tn) and q(tn). In a regular Fourier transform, the number of sample points tn equals the number of harmonics Nk. For highly nonlinear functions i(u) or q(u), if i(t) or q(t) is under-sampled, aliasing effects cause inaccuracy in I(k) or Q(k). Aliasing can be prevented by increasing Nk, but this also increases the number of unknowns U(k). Another way to prevent aliasing is to keep the same Nk but to oversample i(t) and q(t) with mNk time points, where the integer m is the oversample factor. Notice that although the oversampling method does not change the solution size, it does increase the size of the Fourier transform and the number of evaluations of i(tn) and q(tn).

In the PSS or QPSS analysis statement, use oversamplefactor=m to specify the oversample factor. Spectre RF oversamples i(u) and q(u) by a factor of m for each tone. As a result, the size of the Fourier transform is increased by m raised to the number of tones. For multi-tone cases, you can also specify oversample=[m1, m2, m3, ...] to oversample the ith tone by a factor of mi. The size of the Fourier transform is therefore increased by the product m1·m2·m3·... The default oversample factor is 1.

Note that even with oversampling, you must make sure the maximum harmonic meets the accuracy requirement on results of interest.
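The aliasing mechanism, and how oversampling removes it, can be shown with a small numeric experiment (an illustration only; the function names are hypothetical). With K harmonics retained, sampling i(v(t)) at m·(2K+1) points instead of 2K+1 points keeps the discarded high harmonics from folding back onto the retained coefficients:

```python
import cmath, math

def fourier_coeff(samples, k):
    # k-th complex Fourier coefficient of one period of a sampled waveform
    N = len(samples)
    return sum(s * cmath.exp(-2j * math.pi * k * n / N)
               for n, s in enumerate(samples)) / N

def first_harmonic(i_of_v, K, m):
    # Evaluate i(v(t)) for v(t) = cos(t) on m*(2K+1) samples and return
    # the fundamental coefficient retained in a K-harmonic expansion.
    N = m * (2 * K + 1)
    samples = [i_of_v(math.cos(2 * math.pi * n / N)) for n in range(N)]
    return fourier_coeff(samples, 1)

# exp(v) is "highly nonlinear": its spectrum extends far beyond K = 3
# harmonics, so with m = 1 the discarded harmonics alias onto the
# fundamental; with m = 4 the aliases are pushed beyond harmonic 28.
ref = first_harmonic(math.exp, 3, 64)        # heavily oversampled reference
err_plain = abs(first_harmonic(math.exp, 3, 1) - ref)
err_over = abs(first_harmonic(math.exp, 3, 4) - ref)
```

The oversampled coefficient matches the reference to numerical precision while the critically sampled one carries a visible aliasing error, even though both keep only three harmonics.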

itres

Controls the accuracy of the linear solution at each Newton update and, therefore, the convergence of the Newton iteration. The value lies between 0 and 1, and the default is 0.9. When you use the HB method for highly nonlinear circuits, Cadence recommends using the default value. For nearly linear circuits, a small itres (for example, 1e-2) can speed convergence.

maximorder
and
selectharm

These two parameters determine the harmonics selected for multi-tone QPSS analysis. By default, Spectre RF uses a square cut determined by maxharms. A QPSS simulation with M tones solves for all frequencies ω = k1ω1 + k2ω2 + ... where |ki| ≤ Hi for each tone and Hi is the maximum harmonic for tone i. For a diamond cut, ∑|ki| ≤ P, where P is the maximum order set by the maximorder parameter. The selectharm parameter determines the type of cut. The options for cut are: box, diamond, axis, or funnel.

  • The default cut is box.
  • A diamond cut selects harmonics with total order P or less.
    The diamond cut must not be used in a QPSS analysis that is followed by a QPAC, QPXD, QPNOISE, or QPSP analysis because some sidebands are not included in the small signal calculation.
  • An axis cut selects harmonics along an axis. This means that there are no intermodulation harmonics; there are only harmonics of type ω = kiωi where |ki| ≤ Hi.
  • A funnel cut combines the axis and diamond cuts. The funnel cut selects harmonics along an axis and intermodulation harmonics of order P or less.
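The four cuts can be illustrated by enumerating the harmonic index vectors they keep. This is a sketch of the selection rules stated above, not Spectre's internal code:

```python
from itertools import product

def select_harmonics(maxharms, cut="box", maxorder=None):
    # Enumerate index vectors (k1, ..., kM) with |ki| <= Hi, then keep
    # those allowed by the requested cut.
    kept = []
    for k in product(*[range(-H, H + 1) for H in maxharms]):
        order = sum(abs(ki) for ki in k)          # total intermod order
        on_axis = sum(1 for ki in k if ki != 0) <= 1
        if cut == "box":
            keep = True
        elif cut == "diamond":
            keep = order <= maxorder
        elif cut == "axis":
            keep = on_axis                        # no intermod harmonics
        else:                                     # funnel: axis + low order
            keep = on_axis or order <= maxorder
        if keep:
            kept.append(k)
    return kept

# With tones [LO, RF1, RF2], the IM3 mode 2*w_rf1 - w_rf2 - w_lo is the
# index vector (-1, 2, -1): it survives only if the RF1 maximum harmonic
# is at least 2, as noted for the IP3 measurement above.
```

For two tones with maxharms=[2, 2], the box keeps 25 vectors, a diamond of order 2 keeps 13, and the axis cut keeps 9, which is why the diamond and funnel cuts can substantially shrink the QPSS problem size.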

freqdivide

The frequency division ratio of a large signal in QPSS analysis. The default is 1. Specify freqdivide when the circuit has a frequency divider on a large signal.

The ADE Use Model

All the analysis features mentioned above are integrated into the Analog Design Environment (ADE). You can switch between the time domain shooting (TD) engine and the HB engine in the PSS, QPSS, and ENVLP Choosing Analyses forms. For example, Figure 1-4 shows the Choosing Analyses form set up for a large signal HB Quasi-Periodic Steady State Analysis. Notice the Engine field choices labeled Shooting and Harmonic Balance.

Figure 1-4 The HB QPSS Choosing Analyses Form

For a detailed design example, refer to Lab 10, IP3 Calculation (QPSS with Shooting or Harmonic Balance Engine), in the workshop Mixer Design Using SpectreRF. The database for this workshop is available at

<instdir>/tools/spectre/examples/SpectreRFworkshop

HB Large Signal Simulation for Autonomous Circuits

The harmonic balance (HB) engine is supported for oscillators and phase-noise simulation. HB is a frequency domain technique that solves for the spectrum of each node voltage, unlike the time-domain engines (such as shooting PSS), which solve for the waveform of each node voltage over the period.

The harmonic balance method is a good candidate for simulating mildly-nonlinear oscillators with resonators, such as LC oscillators, negative-gain oscillators, and crystal oscillators. The time-domain shooting PSS method is a good candidate for simulating strongly-nonlinear resonatorless oscillators, such as ring oscillators, relaxation oscillators, or oscillators containing digital control components.

Basic HB Parameters

The following parameters are provided to specify the HB method for autonomous circuit large signal simulations.

Basic Parameters for Autonomous HB Analysis

flexbalance

The harmonic balance engine shares the same analysis form with the time-domain engine and is invoked by setting flexbalance=yes in the analysis statement.

harms

Determines how many harmonics are used to expand the waveform in eq. (2). As in the PSS analysis for driven circuits, the maximum number of harmonics is specified by harms. It is an input parameter in the HB method and has a direct impact on the accuracy and performance of the simulation.

oscmethod

Currently, two methods are implemented: onetier and twotier. They are selected by the oscmethod parameter. In the onetier method, the frequency and the voltage spectrum are solved simultaneously in a single set of nonlinear equations. In the twotier method, the nonlinear equations are split into two sets: the inner set solves for the spectrum of the node voltages; the outer set solves for the oscillation frequency. Physically, this is equivalent to adding a sinusoidal voltage probe to a pinning node and adjusting its frequency and amplitude until it has no effect on the oscillator. Spectre RF automatically chooses the pinning node. By default, Spectre RF runs onetier first for n iterations; if necessary, it then runs twotier for the next n iterations (n is set by maxperiods). You can also run only onetier or twotier by specifying oscmethod. The twotier method has a larger convergence zone because its convergence is slightly more robust; however, it is computationally more expensive than the onetier method.
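The two-tier split can be illustrated on a textbook example (an illustration only, not Spectre's implementation): one-harmonic balance for the van der Pol oscillator x'' + x = mu*(1 - x^2)*x' with the ansatz x(t) = A*cos(w*t). The inner tier solves the sin(t) balance for the amplitude; the outer tier adjusts the frequency to zero the cos(t) balance, analogous to tuning the probe frequency until it has no effect on the oscillator.

```python
def vdp_two_tier(mu=0.5, w0=1.3, a0=1.5, tol=1e-12):
    # Harmonic balance of x'' + x = mu*(1 - x^2)*x' with x = A*cos(w*t):
    #   cos(t) balance: A*(1 - w^2)        = 0   (outer tier, solves w)
    #   sin(t) balance: mu*A*w*(A^2/4 - 1) = 0   (inner tier, solves A)
    w = w0
    a = a0
    for _ in range(100):                     # outer Newton on frequency
        for _ in range(100):                 # inner Newton on amplitude
            r = mu * a * w * (a * a / 4.0 - 1.0)
            dr = mu * w * (3.0 * a * a / 4.0 - 1.0)
            step = r / dr
            a -= step
            if abs(step) < tol:
                break
        r_out = a * (1.0 - w * w)
        dr_out = -2.0 * a * w
        step = r_out / dr_out
        w -= step
        if abs(step) < tol:
            break
    return a, w

# The known one-harmonic solution is amplitude 2 and frequency 1.
```

Here each outer frequency update re-solves the inner amplitude problem, which is exactly the structure of the twotier split described above.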

tstab

Specifies the length of the transient analysis used to assist the harmonic balance analysis. Spectre RF runs a transient analysis of the length specified by tstab and then switches to HB. Stable oscillation in the transient analysis can help HB to converge. Stable oscillation can be achieved by

    • Setting an initial condition for a particular node (for example: ic net01=5.0)
    • Setting an initial condition for the inductor or capacitor in the resonator in OSC (for example: L14 (Xtal02 net01) inductor l=6.m ic=0.5m)
    • Adding a damped current source in parallel to the resonator in OSC (for example: Ikicker (net01 net02) isource type=sine freq=1.0G ampl=1m damp=1.0G)
    • Adding a voltage pulse (for example: vdd (vdd 0) vsource type=pulse val0=0.0 val1=3.3 rise=1n)

Optional Parameters for Autonomous HB Analysis

oversamplefactor

The usage of oversamplefactor parameter is similar to the one in driven HB analysis. See oversamplefactor.

The ADE Use Model

All the analysis features mentioned above are integrated into the Analog Design Environment (ADE). You can use the Engine field to switch between the time domain shooting (TD) engine and the HB engine in the PSS, QPSS, and ENVLP Choosing Analyses forms. For example, Figure 1-5 shows the Choosing Analyses form for a large signal HB autonomous PSS Analysis.

Figure 1-5 The Autonomous HB PSS Choosing Analyses Form

For a detailed design example, refer to Lab 1, Phase Noise Simulation with Shooting or Harmonic Balance Engine, in the workshop VCO Design Using SpectreRF. The database for this workshop is available at

<instdir>/tools/spectre/examples/SpectreRFworkshop.

HB Small Signal Setup

HB small signal analysis runs automatically if the large signal analysis uses the HB method. Hence, HB small signal analyses do not require special setup and use the same statement as a time-domain small signal analysis. The only difference is that the maxsideband setting is ignored in HB small signal analysis when the value is larger than the harms and maxharms value of the large signal analysis.

There is no specific small signal set up for HB in ADE.

HB Envelope Analysis

See The Time Domain and Harmonic Balance ENVLP Algorithms for information about the HB ENVLP analysis.

The ADE Use Model

All the analysis features mentioned above are integrated into the Analog Design Environment (ADE). You can use the Engine field to switch between the time domain shooting (TD) engine and the HB engine in the PSS, QPSS, and ENVLP Choosing Analyses forms. For example, Figure 1-6 shows the Choosing Analyses form for an HB Envlp analysis.

Figure 1-6 The HB ENVLP Choosing Analyses Form

The following figure shows the multi-carrier HB envelope analysis set up in ADE.

Figure 1-7 The HB ENVLP Multi-Carrier Choosing Analyses Form

For a detailed design example, refer to Lab 6, Envelope Following Analysis, in workshop PA Design Using SpectreRF. The database for this workshop is available at

<instdir>/tools/spectre/examples/SpectreRFworkshop.

Multi-Thread Acceleration for Harmonic Balance Analysis

You can speed up HB large signal and small signal simulations if you have multiple CPUs. The earlier environment variable CDS_SPECTRERF_MULTITHREAD is no longer supported. Now, you can simply use the Spectre option +mt=.

For example, if you have four CPUs, you can run:

spectre +mt=4 input.scs

Distributed Component Support

The distributed component models, which are usually provided in S-parameter scalar format, are computed directly in the frequency domain for HB analysis. The Spectre nport primitive is supported. If you specify the rational fitting method for the nport, an equivalent circuit is still used in HB, and it might cause convergence issues due to an unstable model from the rational fitting method. Consequently, Cadence recommends using the default spline fitting method or the linear method.

Other distributed components such as mtline and delay are also supported.

Hidden State Issue

The fundamental hidden state issue in Spectre RF still exists with the harmonic balance engine. Hidden states are not exposed to the solution vector, so it is difficult to determine how sensitive other signals are to changes in those hidden states, which is required for predicting new iterations.

Frequently Asked Questions and Answers

What are the parameters that affect accuracy in HB?

Maximum harmonic (harms in PSS and maxharms in QPSS) has the most impact on the accuracy of HB analysis. When you use too few harmonics, the spectrum outside of the maximum harmonic is folded back into harmonics inside by aliasing effects, causing errors. To obtain accurate results, set the maximum harmonic large enough to cover the signal bandwidth of interest.

The reltol or errpreset parameters also affect simulation accuracy. HB uses the same convergence criteria as the TD shooting method.

Spectre RF offers both time-domain shooting and HB options. Which one should I use for my circuits?

The HB method is very efficient for simulating weakly nonlinear circuits such as LNAs. Only a few harmonics are needed to accurately represent the solution. For highly nonlinear circuits with sharply rising or falling signals, the TD shooting method is usually more appropriate. However, HB might still have an advantage when exploring design trade-offs using a few harmonics where accuracy is not the primary concern.

For multi-tone cases such as mixers, HB is significantly more efficient than the shooting method with QPSS analysis due to its natural representation of circuit equations.

The HB method is better than the TD method for handling frequency-dependent components. In post-layout circuits with many linear elements, HB has better performance than TD shooting.

Should I use HB PSS or HB QPSS for circuits driven by multi-tone stimulus?

Because HB multi-tone simulation does not have the convergence and speed issues encountered in TD shooting QPSS analysis, use HB QPSS analysis to simulate multi-tone circuits. HB PSS analysis using the beat frequency as the fundamental is very inefficient for multi-tone cases. When source frequencies are closely spaced, their common frequency is so low that you must use hundreds or even thousands of harmonics. In this situation, you should use QPSS analysis.
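The harmonic-count penalty of single-tone PSS on closely spaced sources is easy to quantify (a sketch; frequencies are assumed to be exact integers in hertz so that the common fundamental is a greatest common divisor):

```python
from math import gcd

def single_tone_cost(f1_hz, f2_hz):
    # Beat (common fundamental) frequency of two sources, and the number
    # of harmonics a single-tone PSS analysis would need just to reach
    # the higher source frequency.
    beat = gcd(f1_hz, f2_hz)
    return beat, max(f1_hz, f2_hz) // beat

# Two closely spaced tones: 1.000 GHz and 1.001 GHz beat at 1 MHz,
# so the single-tone fundamental would need 1001 harmonics.
beat, nharm = single_tone_cost(1_000_000_000, 1_001_000_000)
```

A two-tone QPSS analysis of the same circuit needs only a few harmonics per tone, which is why it is the preferred setup here.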

When should I use oversample factor?

In general, oversampling is not needed. However, for extremely nonlinear circuits driven by sources at a very high power level, oversampling might help with convergence. If the circuit has a frequency divider on the large signal, you might want to specify the oversample factor the same as the divide ratio.

How should I choose the number of harmonics for an HB envelope analysis?

The default value is 3 harmonics. The number of harmonics you choose depends on the linearity in one cycle of the fast signal (LO or clock). For linear cases, even 1 harmonic is enough for accurate results. However, for nonlinear cases, such as a circuit with a square clock, you might need 15 or even more harmonics.

In HB autonomous analysis, why would I set the OSC Newton Method rather than letting the simulator take care of it?

The default value for oscmethod is onetier. If Spectre RF fails to converge, it automatically switches to the twotier method. In such cases, it might be more efficient to use the twotier method from the beginning by specifying oscmethod=twotier.

Perturbation Based Measurements

The perturbation-based approach solves weakly nonlinear circuits based on the Born approximation. The perturbation method does not require explicit high-order device derivatives, and its implementation is straightforward and equivalent to successive small signal analyses. This section analyzes convergence properties of perturbation-based analyses and discusses the connection between the perturbation method and the Volterra series.

Rapid IP2 and IP3 measurements are calculated with 2nd and 3rd order Born approximation under weakly nonlinear conditions. Using the diagrammatic representation of nonlinear interaction, both Volterra series and Born approximation can be constructed in a systematic way. The method is generalized to calculate other high order nonlinear effects such as the IMn product. All equations are formulated as RF harmonics and they can be implemented in both the time and frequency domains.

Specialized PAC and AC Analyses for Measuring Distortion

Spectre RF supplies four specialized PAC and AC analyses based on perturbation methods that provide a basic set of rapid distortion measurements.

These four perturbation-based analyses characterize intermodulation distortion and compression distortion for RF circuits such as mixers and amplifiers.

Distortion is an important issue in RF circuits such as mixers, low-noise amplifiers (LNAs) and power amplifiers (PAs). Compression and intermodulation distortion are the two aspects of distortion that are of greatest concern. At high frequencies, and particularly with narrowband circuits, it is common to characterize the distortion produced by a circuit in terms of a compression point (CP) or an intercept point (IP). These metrics characterize the circuit rather than the signal, and as such it is not necessary to specify the signal level at which the circuit is characterized.

About the IM2 Intermodulation Distortion Summary

The 3rd order intermodulation distortion occurs when input signals at frequencies f1 and f2 mix together to form the response at 2f1 – f2 and 2f2 – f1. When f1 and f2 are close enough in frequency, the intermodulation products 2f1 – f2 and 2f2 – f1 will fall within the bandwidth of the circuit and will interfere with the input signal.

Intermodulation distortion is typically measured in the form of an intercept point. As shown in Figure 1-8, you can determine the 3rd order intercept point (IP3) by plotting the power of the fundamental and the 3rd order intermodulation product (IM3) versus the input power. Plot both input and output power in some form of dB. When you measure IP3, the fundamental power curve is extrapolated from a point where the curve has a slope of 1 over a broad region, and the 3rd order intermodulation product is extrapolated from a point where its curve has a slope of 3 over a broad region. Extrapolate both curves from a low power level and identify the intercept point as the point where they cross.

For Rapid IP3 analysis in Spectre RF, the IP3 is obtained at an input power where the user is confident of 1st order and 3rd order linearity.

Figure 1-8 IP3 Measurement

For some applications, such as direct conversion mixers, IP2 is important to characterize the circuit for the 2nd order intermodulation products (IM2) f1 – f2 or f2 – f1 which will fall within the bandwidth of the output signal. The definition of IP2 is similar to IP3. The difference is that the 2nd order intermodulation product is extrapolated from a point where its curve has a slope of 2.

For Rapid IP2 analysis in Spectre RF, the IP2 is obtained at an input power where you are confident of the 1st order and 2nd order linearity.
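The extrapolation described above reduces to closed-form expressions when a single measurement is taken in the region where the slopes are trustworthy (the function names are illustrative; all powers are in dB units):

```python
def output_intercept(p_fund_dbm, p_im_dbm, order):
    # Extrapolated nth-order output intercept point: the slope-1
    # fundamental line and the slope-n intermod line, both anchored at
    # one measurement, cross at OIPn = Pfund + (Pfund - PIMn)/(n - 1).
    return p_fund_dbm + (p_fund_dbm - p_im_dbm) / (order - 1)

def input_intercept(p_in_dbm, p_fund_dbm, p_im_dbm, order):
    # Refer the intercept to the input by removing the gain.
    gain_db = p_fund_dbm - p_in_dbm
    return output_intercept(p_fund_dbm, p_im_dbm, order) - gain_db

# Synthetic check: 10 dB gain, OIP3 = 20 dBm. At Pin = -30 dBm the
# fundamental is -20 dBm and the slope-3 IM3 line gives -100 dBm.
```

With order=3 the divisor is 2 (the IP3 case) and with order=2 it is 1 (the IP2 case), matching the slope-3 and slope-2 extrapolations above.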

Rapid IP2 analysis produces the value of IP2 for the whole circuit, while the IM2 Distortion Summary analysis produces a list of the various distortion contributors and how much distortion they contribute to the output. That is, IM2 Distortion Summary computes the contribution of each nonlinear device to the total IM2. The result is accurate because no assumptions are made in the computation.

About the Compression Distortion Summary

Consider the circuit in Figure 1-9. The system has a time-dependent operating point V0(t) at frequency ωLO (for an LNA or PA, the operating point is time-independent, so ωLO = 0).

Figure 1-9 A Simple Circuit Representation

As shown in Equation 1-17, when the RF input signal is applied, the output signal at ωRF + nωLO is a function of the input magnitude VRF.

(1-17)

The first term is the linear response to the RF input. Coefficient c1 can be obtained by small signal analysis. The other terms are distortion due to third and higher order nonlinear responses. (For the Compression Distortion Summary analysis in Spectre RF, the highest nonlinear term considered is the third order.) The total distortion, as represented in Equation 1-18, is measured in dB.

(1-18)

Although you want to know the contribution of each individual device to the total distortion, it is mathematically impossible to make such a separation for third or higher order nonlinearity. To quantify the effect of the nonlinear response at a particular device on the output signal, the Compression Distortion Summary assumes that the rest of the circuit responds to the RF signal linearly and computes the distortion in the output caused solely by the nonlinear response at this device. With this approach, although the circuit is hypothetical, the result provides useful information about how sensitive the device is to distortion. The effects of the nonlinear response of the device and the transfer function from the device nodes to the output nodes are reflected in the calculation. What is missing is the interaction between the devices.

It is important to point out that since the operating point is time-dependent, all the other devices are still nonlinear in the calculations. Only their response to the RF signal is linearized. In particular, for mixer cases, transistors in the LO block still function nonlinearly to generate the clock signal. However, their contribution to output distortion is expected to be very small in the distortion summary because the RF signal usually does not disturb the LO part of the circuit. Both linear and nonlinear responses in the LO block are almost zero.

About the Perturbation Method

SpectreRF supplies four specialized PAC analyses to solve weakly nonlinear circuits based on the perturbation approach.

In previous versions of SpectreRF, you could use either QPSS-based or QPAC-based methods to calculate IP3 for systems containing a mixer and an LO. In the QPSS-based method, a three-tone QPSS analysis with LO, RF1, and RF2 tones at the frequencies ωLO, ωRF1, and ωRF2, respectively, is run at a given RF power level. IM3 of harmonic 2ωRF1 − ωRF2 − ωLO is obtained from the solution. Assuming the RF power is low enough that IM3 is dominated by leading-order VRF³ terms, log(VIM3) is expected to be a linear function of log(VRF) with a slope of 3. IP3 is then extrapolated from VIM3. Here VIM3 and VRF are the amplitudes of the IM3 and RF signals, respectively. The QPSS-based method requires very high accuracy to accommodate the large dynamic range between the RF and LO signals because they are mixed in the same solution vector. For large circuits, the QPSS-based method also relies on the speed and convergence of the multi-tone QPSS analysis.

In the QPAC-based method, a two-tone QPSS analysis at frequencies ωRF1 and ωLO is run first. Then the RF2 input is included as a small signal by the QPAC analysis to calculate IM3 at 2ωRF1 − ωRF2 − ωLO. As in the QPSS-based method, the QPAC-based method also has to cover the dynamic range between RF1 and LO and depends on the convergence of the two-tone QPSS analysis.

Compared to the QPSS-based approach, the QPAC-based approach reduces computation from a three-tone QPSS analysis to a two-tone QPSS analysis plus a QPAC analysis by applying first order perturbation to the RF2 signal. You can further reduce the amount of computation by treating both RF signals as perturbation to the steady-state operating point at the LO frequency with zero RF input. In this way, leading order intermodulation between the RF1 and RF2 signals in IM3 can be computed directly from third-order perturbation. Since no multi-tone large signal simulation is needed, this can dramatically improve efficiency and avoid the convergence problem of multi-tone simulation.

Now SpectreRF can use a perturbation-based approach to solve weakly nonlinear circuits based on the Born approximation. The perturbation-based approach does not require explicit high-order derivatives from the device model. All equations are formulated in the form of RF harmonics and can be implemented in both the time and frequency domains.

For a nonlinear system, you can express the circuit equation as in Equation 1-19.

(1-19)

in this equation

Under weakly nonlinear conditions, the nonlinear part of Equation 1-19 is small compared to the linear part. This allows you to solve the equation iteratively using the Born approximation, as shown in Equation 1-20.

(1-20)

In Equation 1-20, u(n) is the nth approximation of v, and it is accurate to order n.

Since the evaluation of FNL requires only a full nonlinear device evaluation of F and its first derivative, no higher order derivatives are needed. This allows you to carry out higher order perturbation without modifying your current device models. Also, the dynamic range of the perturbation calculations covers only the RF signals. This gives the perturbation method advantages in terms of accuracy.
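The iteration in Equation 1-20 can be sketched on a scalar model (a toy illustration, not Spectre's formulation): solve G*v + eps*v**3 = s, where eps*v**3 plays the role of the small nonlinear part. Each pass performs only a linear solve, with the nonlinearity evaluated at the previous iterate, so no higher derivatives of the nonlinearity are ever needed.

```python
def born_solve(G, eps, s, passes):
    # u(0) is the linear solution; u(n+1) solves the linearized equation
    # G*u = s - f_nl(u(n)) with f_nl(v) = eps*v**3. Each extra pass adds
    # one order of accuracy under weakly nonlinear conditions.
    u = s / G
    for _ in range(passes):
        u = (s - eps * u ** 3) / G
    return u

# Weakly nonlinear case: the iteration converges to the true solution,
# and even a single pass is already close.
v = born_solve(1.0, 0.01, 1.0, 50)
```

Convergence relies on the nonlinear term being a contraction relative to the linear part, which is exactly the weakly nonlinear condition stated above.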

Frequently Asked Questions and Answers

How do I select the input power for perturbation-based measurements?

The perturbation method works best under weakly nonlinear conditions. Input power that is high enough to drive the circuit into strong nonlinearity produces unreasonable results, while input power that is too low blends with the numerical noise floor. A recommended input power range for general circuits is -50 dBm to -20 dBm.

Why is dB the unit used in the Compression Distortion Summary while Volts is the unit used in the IM2 Distortion Summary?

Compression distortion evaluates distortion as the ratio of total output (linear and nonlinear) to the linear output. IM2 Distortion Summary computes the contribution of each nonlinear device to the total IM2.

How do I measure the IP2 of a double balanced mixer?

It is well known that the IP2 of an ideal double-balanced mixer is infinite. In practice, IP2 is a function of the matching between two branches. To measure IP2, add a small amount of mismatch, such as 2%, to the transistors and resistors between the two branches. Then run a worst case simulation.

Why should I set the RF input source to DC?

The perturbation method is a nonlinear small signal analysis. The analysis treats the RF signal as the small signal which is defined by pacmag (or pacdbm). If you set up the RF signal to be sinusoidal (or some other type of large signal), the PSS and small signal results will be affected.

Why might the Rapid IP2 result seem inaccurate, inconsistent, or just plain wrong?

Rapid IP2 is recommended for measuring the IP2 of direct-conversion mixers. For ordinary LNAs or mixers, the IM2 frequency is outside the operating frequency band, and the Rapid IP2 measurement produces a less meaningful result.

Why do the shooting and flexible balance PSS simulation engines produce different results?

Usually, the shooting and flexible balance engines produce results that match well. You can ensure the results will match by setting enough harmonics when you use the flexible balance engine.

Why do I see the error “More than one frequency matches are found…?”

For perturbation-based measurement, incommensurate frequencies should be used for all tones. If multiple combinations of tone frequencies match either Frequency of Linear Output Signal or Frequency of IM Output Signal, you see this error message because the perturbation-based measurement cannot determine which frequency you want to use as IM1, IM2, or IM3. In other words, each IM1, IM2, or IM3 frequency should be determined by a unique set of coefficients (k1, k2, k3) in

k1 * flo + k2 * frf1 + k3 * frf2

To avoid this problem, adjust the values of frf1 and frf2 slightly.

References

[1]

Behzad Razavi. RF Microelectronics, Prentice Hall, c1998. Series: Prentice-Hall Communications Engineering & Emerging Technologies Series.

[2]

Thomas H. Lee. The Design of CMOS Radio-Frequency Integrated Circuits. Cambridge University Press, 1998.

[3]

Ken Kundert, “Accurate and Rapid Measurement of IP2 and IP3”,
www.designers-guide.org

Large Signal S-Parameters

Characterizing RF circuits with small-signal S-parameters is a well established practice. However, small-signal S-parameters are not sufficient to describe both strongly nonlinear circuits and circuits that exhibit frequency translation. This is especially true for designs that use power amplifiers and mixers.

As a natural extension of small-signal S-parameters, large-signal S-parameters (LSSPs) are defined as the ratio of reflected (or transmitted) waves to incident waves. Since small-signal S-parameters are based on the simulation of a linearized circuit, they are independent of input power. This section describes how to measure LSSPs using Spectre and SpectreRF in the Analog Design Environment (ADE).

LSSPs are based on large-signal steady state simulation techniques such as SpectreRF's PSS analysis with both its time domain shooting Newton (TD) engine and its harmonic balance (HB) engine. Unlike small-signal S-parameters, LSSPs are sensitive to input power levels.

The multi-port circuit representation is described as follows.

The reflected wave Bi at port i and frequency kω (harmonic k) is characterized by Equation 1-21.

(1-21)

The incident wave Ai at port i and frequency kω (harmonic k) is characterized by Equation 1-22.

(1-22)

The LSSPs between port i with harmonic m and port j with harmonic n are defined as in Equation 1-23.

(1-23)

where Ai(kω) = 0 for all ports other than j and for harmonics other than n.
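At the small-signal limit, the wave ratio in Equation 1-23 reduces to the familiar reflection coefficient. The sketch below uses the standard traveling-wave definitions with a real reference impedance (an assumption for illustration; the manual's Equations 1-21 and 1-22 define the waves per harmonic kω):

```python
import math

def waves(v, i, z0=50.0):
    # Incident and reflected waves from the port voltage and current
    # phasors (current flowing into the port):
    #   a = (V + Z0*I) / (2*sqrt(Z0)),  b = (V - Z0*I) / (2*sqrt(Z0))
    a = (v + z0 * i) / (2.0 * math.sqrt(z0))
    b = (v - z0 * i) / (2.0 * math.sqrt(z0))
    return a, b

def reflection(z_load, z0=50.0):
    # For a linear one-port, V = Z*I, so b/a = (Z - Z0)/(Z + Z0).
    i = 1.0
    a, b = waves(z_load * i, i, z0)
    return b / a

# A matched 50-ohm load reflects nothing; 100 ohms reflects 1/3.
```

For LSSPs, the same a and b ratios are evaluated per harmonic on the large-signal steady-state solution, so the result depends on the input power instead of being a constant of the linearized circuit.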

Large-Signal S-Parameters for a Two-Port Circuit

Consider calculating LSSPs for a two-port circuit. When you apply a source at port 1 and terminate port 2, as shown in Figure 1-10, you can measure S11 and S21 at a given input power P1.

Figure 1-10 Test Setup to Measure the Large-Signal S-Parameters S11 and S21

Similarly, when you apply a source at port 2 and terminate port 1, as shown in Figure 1-11, you can measure the two remaining LSSPs, S12 and S22.

Figure 1-11 Test Setup to Measure the Large-Signal S-Parameters S12 and S22

To be consistent with the input power level used for measuring S11 and S21, you can compute the power applied to port 2 in this stage using the relationship in Equation 1-24.

(1-24)

References

[1]

V. Rizzoli, A. Lipparini, F. Mastri, “Computation of Large-Signal S-parameters by Harmonic-Balance Techniques,” in Electronics Letters, Vol. 24, Issue 6, pp. 329-330, 17 Mar 1988.

[2]

Jan Verspecht, Frans Verbeyst, Marc Vanden Bossche, “Network Analysis Beyond S-parameters: Characterizing and Modeling Component Behavior under Modulated Large-Signal Operating Conditions,” in 56th ARFTG Conference Proceedings, Broomfield, Colorado, USA, December 2000.

Using the PSTB and STB Analyses with Linear Periodic Time-Varying Circuits

The Spectre stability analysis (STB) rapidly evaluates the stability of feedback circuits. As is true for all small signal analyses, the circuit under analysis must first be linearized about a DC operating point before the STB small signal analysis is performed. After the circuit is linearized, the STB analysis calculates the loop gain, gain margin, and phase margin for the circuit using a subset of the Nyquist criteria.

It is also important to evaluate the stability of periodic steady state regimes of nonlinear circuits such as power amplifiers, injection-locked oscillators, and dividers. The STB analysis fails to predict the behavior of periodic steady state regimes of nonlinear circuits because of the nonlinear effects these circuits produce.

Periodic stability analysis (PSTB) performs stability analysis for circuits with a periodically time-varying operating point, which must first be obtained using a PSS analysis. The small signal PSTB analysis calculates the loop gain, gain margin and phase margin for circuits with a periodically time-varying operating point.

A Linear Periodic Time-Varying (LPTV) Feedback Circuit

Figure 1-12 shows the topology of a general linear time-invariant (LTI) feedback circuit around a specific feedback loop.

Figure 1-12 Topology for a Linear Feedback Circuit

Equation 1-25 defines the closed-loop transfer function

(1-25)

where X(s) and Y(s) represent input and output respectively.

Equation 1-26 defines loop gain.

(1-26)

For the linear periodic time-varying (LPTV) feedback circuit shown in Figure 1-13, Equation 1-27 expresses the closed-loop harmonic transfer matrix (HTM).

(1-27)

where A(s) and F(s) are matrices and I is the identity matrix.

Evaluating the Stability of a LPTV Circuit Using PSTB Analysis

If you treat the feedback loop determined by Equation 1-27 as a multi-input multi-output (MIMO) system, the poles of any transfer function of the system are the same [1]. If you express a transfer function of the zero sideband in the feedback loop as in Equation 1-28, you can obtain an exact open-loop transfer function −μ(s) by applying the loop-based algorithm [2] to the zero sideband.

(1-28)

To evaluate stability, apply the Nyquist criteria to the loop gain μ(s). This requires three steps you can perform using PSTB analysis.

  1. Calculate μ(0 + jω) by sweeping ω
  2. Plot μ(0 + jω) in a Bode plot or a Nyquist plot
  3. Judge stability in one of the following two ways
    • By the magnitude and phase of the loop gain
      or
    • By the number of clockwise encirclements of the (1, 0) point on the loop gain curve

This stability check is restricted to local stability in the sense that the results are only valid under small perturbations of the steady state (that is, by small signal analysis). A large perturbation might induce the circuit to permanently jump to a different steady state. To account for the large perturbation requires a global stability analysis in the parameter space.
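The encirclement count in step 3 can be sketched numerically. The following Python fragment is an illustration only, not part of Spectre or the toolbox: it counts net clockwise encirclements of the (1, 0) point by a sampled closed loop-gain contour, and the sample curves are hypothetical.

```python
import cmath
import math

def encirclements_of_one(loopgain):
    """Count net clockwise encirclements of the (1, 0) point by a
    sampled closed loop-gain contour in the complex plane."""
    total = 0.0
    n = len(loopgain)
    for k in range(n):
        a = loopgain[k] - 1.0               # shift so (1, 0) becomes the origin
        b = loopgain[(k + 1) % n] - 1.0
        total += cmath.phase(b / a)         # phase step along the segment
    # total is 2*pi times the counterclockwise winding number
    return -round(total / (2 * math.pi))    # positive count = clockwise

# Hypothetical contours: a radius-2 circle encircles (1, 0), while a
# radius-0.5 circle does not.
big = [2.0 * cmath.exp(2j * math.pi * k / 256) for k in range(256)]
small = [0.5 * cmath.exp(2j * math.pi * k / 256) for k in range(256)]
```

A nonzero count of clockwise encirclements of (1, 0) flags instability by the criterion in step 3.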

(1-29)

(1-30)

Equation 1-29 is a PSS analysis equation. The parameter A, such as source power or amplitude, can dramatically affect the circuit’s steady state solution. The bifurcation curves are the solutions in the (Ω, A) space that satisfy Equation 1-30. This means that to determine global stability, you can use a two-dimensional (Ω, A) parameter sweep combining the PSS and PSTB analyses. This process can be time-consuming.

Example 1: Comparing the STB and PSTB Analyses

Figure 1-14, the oscHartley schematic, is an inherently nonlinear VCO. You can use the STB and PSTB analyses on the Hartley VCO to compare the nonlinearity effect on circuit stability.

Figure 1-14 Hartley Oscillator (oscHartley)

This VCO has the basic Hartley oscillator topology and is tunable between 720 MHz and 1.15 GHz. The oscillation frequency (f0) is determined by the resonant circuit made up of inductors L0, L1 and capacitor C1. In this particular VCO, the values of L0 and L1 are fixed, whereas C1 is a varactor diode. As a result, Cvar, the varactor diode's junction capacitance, is a function of the applied voltage, as expressed in Equation 1-31.

(1-31)

In Equation 1-31, the variables have the following values:

  Cj0    8 pF
  Φ      0.75 V
  γ      0.4

Because Cvar decreases as V increases, and f0 is inversely proportional to the square root of Cvar, the oscillation frequency rises with V. In other words, as you increase V, Cvar decreases and f0 increases.
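As an illustration of this tuning relation, the sketch below evaluates the standard graded-junction capacitance form with the values listed for Equation 1-31. The tank inductance L_TANK is a hypothetical value chosen only so the numbers land near the VCO's stated tuning range; it is not a value taken from the schematic.

```python
import math

# Values listed for Equation 1-31; the capacitance expression is the
# standard graded-junction model.
C_J0 = 8e-12      # zero-bias junction capacitance, 8 pF
PHI = 0.75        # built-in potential, 0.75 V
GAMMA = 0.4       # grading coefficient
L_TANK = 6.3e-9   # assumed total tank inductance (hypothetical value)

def c_var(v):
    """Reverse-biased junction capacitance at control voltage v."""
    return C_J0 / (1.0 + v / PHI) ** GAMMA

def f_osc(v):
    """Resonant frequency of the L-C tank at control voltage v."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L_TANK * c_var(v)))
```

With these numbers, f_osc(0) comes out roughly 0.7 GHz and f_osc(6) roughly 1.1 GHz, in the neighborhood of the stated 720 MHz to 1.15 GHz range: increasing V decreases Cvar and increases f0.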

Set up for the STB and PSTB Analyses

Open the oscHartley schematic from your writable copy of the rfExamples library.

Set the Initial Conditions for the STB Analysis

  1. Set up V0 as a dc voltage source (Source type = dc, dc = 5).
  2. Set the voltage on the Cvar to 6 V (V_cntl = 6). The oscillator will oscillate at about 1.1 GHz.
  3. Set reltol to 1e-4 (reltol = 1e-4)
  4. Insert a current probe in the feedback loop (IPRB0).

Set Up the STB Analysis

  1. Sweep range (Start = 1000M, Stop = 1200M)
  2. Sweep type=linear
  3. Number of steps=400
  4. probe = IPRB0

Run the simulation

In WaveScan, plot the dB20 (loop gain) and Phase (loop gain) of the STB results. (See Figure 1-15.)

Set Up the PSS and PSTB Analyses

Set the Initial Conditions

  1. Change V0 to a pulse source (val1=0, val2=5, rise=1n) to start the oscillator.
  2. Other parameters are the same as for the STB analysis.

Set up a PSS Analysis

  1. Beat Frequency =1.1G
  2. Number of Harmonics =30
  3. errpreset = moderate
  4. tstab =100n
  5. Enable the oscillator button
  6. Set Oscillator node = /Vout
  7. Set Reference node = /gnd!
  8. Set Engine=shooting

Set up a PSTB Analysis

  1. Sweep range (Start=1000M, Stop=1.2G)
  2. Sweep type=linear
  3. Number of steps=400
  4. probe = IPRB0

Run the Simulations

In WaveScan, plot the dB20 (loop gain) and Phase (loop gain). Append a Bode Plot of the PSTB results to the same window as the STB results. The plot in Figure 1-15 is displayed.

For an ideal oscillator, the loop gain at oscillating frequency, f 0, should be 1, so the dB20 (loopgain) and the Phase (loopgain) should both be zero at f 0 and the phase should change abruptly at f 0. In practice, both dB20 (loopgain) and the Phase (loopgain) should be close to zero.

For this autonomous circuit, the PSS analysis gives f0 = 1.1035 GHz. From Figure 1-15, the PSTB analysis gives dB20 (loopgain) = 0 and Phase (loopgain) = 0 at f0, whereas the STB analysis gives dB20 (loopgain) and Phase (loopgain) values far from zero. The results show that a PSTB analysis gives more accurate stability information for nonlinear circuits than an STB analysis does.

Figure 1-15 VCO Loop Gain

In Figure 1-15, lines with boxes are from the STB analysis and lines without boxes are from the PSTB analysis; solid lines represent magnitude and dotted lines represent phase.

Example 2: Local Stability of an Injection-Locked Oscillator

Figure 1-16 shows an injection-locked oscillator based on the VCO in Figure 1-14. The injection-locked oscillator also includes the following for injection: current source I0, resistor R4 and capacitor C7. The frequency and amplitude of I0 are the parameters inj_freq and inamp.

Figure 1-16 Schematic of the Injection-Locked Oscillator

Set the Initial Conditions for the PSS and PSTB Analysis

  1. Set inj_freq = 1000M.
  2. Set inamp = 13mA
  3. Other parameters are the same as for the previous PSTB analysis.

Set Up the PSS Analysis

  1. Beat Frequency =1G
  2. Number of Harmonics =30
  3. errpreset = moderate
  4. tstab =100n
  5. Engine=shooting

Set Up a PSTB Analysis

  1. Sweep range (Start=1000M, Stop=1.2G)
  2. Sweep type=linear
  3. Number of steps=400
  4. probe = IPRB0

Run the Simulation

In WaveScan, plot the real vs imag (loopgain) of the PSTB analysis.

Run the same simulation again but change inamp to 13.85mA.

Run the same simulation again but change inamp to 15mA.

Plot the results of the three simulations in the same window. The plot in Figure 1-17 is displayed.

You can see from Figure 1-17 that when inamp = 13.85 mA, the loopgain = 1. (From the Bode plot, the corresponding small signal frequency is 1.114 GHz.) This implies a potential free-running oscillation at 1.114 GHz.

Figure 1-17 Nyquist Plot of the Loopgain of the Injection-Locked Oscillator

In Figure 1-17, the dashed line represents inamp = 13 mA, the dotted line represents inamp = 13.85 mA, and the solid line represents inamp = 15 mA.

When inamp is greater than 13.85mA, the loop gain curve does not encircle the (1, 0) point, which implies that the circuit is stable. When inamp is less than 13.85mA, the loop gain curve encircles the (1, 0) point clockwise, which implies that the circuit is unstable. In fact, when you run a PSS analysis with inamp=13mA and increase tstab, such as tstab=40 μs, the PSS analysis does not converge. If you run a long transient analysis, such as when Stop=350 μs, the resulting waveform is not periodic as you can see in Figure 1-18.

Figure 1-18 The Output Waveform of the Injection-Locked Oscillator With Injection Current inamp = 13 mA

Example 3: Global Stability of Injection-Locked Oscillators

Equation 1-32 represents the locked oscillating condition.

(1-32)

Equation 1-33 represents the free-running oscillator condition.

(1-33)

In Equations 1-32 and 1-33

You can use the two curves determined by Equations 1-32 and 1-33 to evaluate the global stability of the circuit. Produce the curves using a PSTB analysis.

Set the Initial Conditions for the PSS and PSTB Analyses

  1. Set inj_freq = 1075M.
  2. Set inamp = 16mA
  3. Other parameters are the same as used for the previous PSTB analysis.

Set Up the PSS Analysis

  1. Beat Frequency =1075M
  2. Number of Harmonics =30
  3. errpreset = moderate
  4. tstab =100n
  5. Engine=shooting

Set Up the PSTB Analysis

  1. Sweep range (Start=1050M, Stop=1.2G)
  2. Sweep type=linear
  3. Number of steps=400
  4. probe = IPRB0

Set Up a Parametric Analysis to Evaluate Global Stability

  1. Variable name = inamp
  2. Range Type = From/To
  3. From=16mA
  4. To=20mA
  5. Step Control = Linear steps
  6. Step Size =0.5mA

Run the simulation

In WaveScan, plot the loop gain as a Bode Plot similar to Figure 1-19.

You can see that when inamp = 16.5 mA, loopgain = 1 at 1.114 GHz, indicating a potential free-running oscillation. When inamp is less than 16.5 mA, the circuit is unstable (you can confirm this in a corresponding Nyquist plot). As inamp increases, the free-running oscillation disappears and the circuit tends toward local stability. However, when inamp increases to 19.5 mA, another oscillation occurs at 1075 MHz, which implies that the circuit is locked to oscillate at the injection frequency. When inamp is greater than 19.5 mA, the circuit again tends to stabilize. From Figure 1-19, you can clearly observe the injection-locking process as you increase the injection current.

Not all injection frequencies can be locked; only injection frequencies near a free-running frequency can be locked. A complete global stability evaluation requires a two-dimensional sweep of (inj_freq, inamp). However, it is difficult to obtain the two curves accurately around the locked area because of the strong competition between the locked mode and the free-running mode. This unstable behavior can cause PSS analyses to fail to converge and PSTB analyses to fail to run.

Figure 1-19 Bode Plot of Loop Gain with inj_freq = 1075M

References

[1]

Ogata, K., Modern Control Engineering, Prentice-Hall, 1998.

[2]

Michael Tian, V. Visvanathan, Jeffrey Hantgan, and Kenneth Kundert, "Striving for Small-Signal Stability," IEEE Circuits & Devices Magazine, Jan. 2001, p. 31.

The Spectre/RF MATLAB Toolbox

The MATLAB® Toolbox provides an interface between both the Spectre and UltraSim® circuit simulation technologies and the MATLAB data manipulation environment. This section describes how to use the toolbox to read Spectre simulation results into MATLAB and to use MATLAB to perform some standard RF measurements.

The toolbox provides all Spectre and SpectreRF users with an alternative method for post-processing simulation data and making standard RF measurements in MATLAB. The toolbox allows experienced SpectreRF users to use Simulink® and MATLAB to make customized measurements on information extracted from Spectre simulation results. Experienced users can also perform high-level design tasks in Simulink and MATLAB.

MATLAB, a powerful mathematical and graphic tool, provides rich data processing and display functionality. Many users want to customize their measurements and displays. The toolbox supports these use models.

The MATLAB Toolbox includes a number of functions you can use to

Install the Toolbox Package

The MATLAB Toolbox is a standalone package shipped with MMSIM version 6.1 and higher. The home directory for the toolbox is

<instdir>/tools/spectre/matlab 

where <instdir> is the installation directory for the MMSIM simulation software. In the MATLAB Toolbox there are 3 MEX-files and 12 M-files.

Use the `cds_root spectre` command to determine the installation directory of your MMSIM simulation software.

Configure the Toolbox Package

The Spectre/RF Toolbox in MATLAB depends on the Spectre simulation run environment. It uses shared libraries located in the Spectre installation path.

  1. Make sure you are running Spectre 6.0 or a higher version.
  2. Verify the dynamic library path by checking the LD_LIBRARY_PATH environment variable. Make sure that both <instdir>/tools/dfII/lib and <instdir>/tools/lib are in the path. For C shell users, use the following command
setenv LD_LIBRARY_PATH `cds_root spectre`/tools/dfII/lib:`cds_root spectre`/tools/lib:${LD_LIBRARY_PATH}
  3. Solaris users also need to add the toolbox installation path to LD_LIBRARY_PATH. For C shell users, use the following command
setenv LD_LIBRARY_PATH `cds_root spectre`/tools/spectre/matlab:${LD_LIBRARY_PATH}
  4. The MATLAB script sets the MATLABPATH environment variable to include the MATLAB toolbox directories and the user-created directories. You need to add the toolbox installation path to the MATLABPATH environment variable. For C shell users, use the following command
    setenv MATLABPATH `cds_root spectre`/tools/spectre/matlab:${MATLABPATH}
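If you use a Bourne-type shell (sh or bash) instead of the C shell, the equivalent setup is sketched below. The paths come from `cds_root spectre` exactly as in the csh commands above; adjust for your own environment.

```shell
# Bourne-shell (sh/bash) equivalent of the csh setenv commands above.
# `cds_root spectre` prints the MMSIM installation root.
CDSROOT=`cds_root spectre`
LD_LIBRARY_PATH=$CDSROOT/tools/dfII/lib:$CDSROOT/tools/lib:$LD_LIBRARY_PATH
# Solaris only: also add the toolbox itself to LD_LIBRARY_PATH
LD_LIBRARY_PATH=$CDSROOT/tools/spectre/matlab:$LD_LIBRARY_PATH
MATLABPATH=$CDSROOT/tools/spectre/matlab:$MATLABPATH
export LD_LIBRARY_PATH MATLABPATH
```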

The Basic Toolbox Functions

Each toolbox function command has an associated help page which you can display by typing help <command_name> in MATLAB. This section introduces the basic toolbox functions, describes how to use each function and describes how to write measurement functions.

cds_srr

Lists the datasets in the result directory, lists signals in a dataset or reads a signal into MATLAB.

  1. To list the datasets in the result directory, use the command
    datalist = cds_srr('result_directory')
    All dataset names in the result directory are returned with datalist as a string vector.
  2. To list the signal names in a dataset, use the command
    signals = cds_srr('result_directory', 'dataset_name')
    All signal names and property names in the dataset are returned with signals as a string vector.
  3. To read a signal into MATLAB, give both a dataset name and a signal name and use the command
    signal = cds_srr('result_directory', 'dataset_name', 'signal_name')
    The value of the signal is returned with signal. The signal can be a single value or a structure containing a matrix.
    Normally, when you use cds_srr to read a signal into MATLAB, cds_srr returns a structure containing fields. The first field is info, a string vector of name, unit pairs. In each pair, the first value is the name of the field, the second value is the unit of the field. The first name, unit pair in the info field describes the final value, the other pairs describe the sweep information if the signal was swept. An inner sweep is always listed before an outer sweep in name, unit pairs in the info field. The innermost sweep can be a matrix with variable sizes in each column, the other sweeps have to be a vector of fixed size.

In MATLAB,

The cds_srr command supports both PSF and SST2 formats. cds_srr is an external function with cds_innersrr running in the background. The cds_srr and cds_innersrr commands have the same interface, but cds_srr adds some post processing to make the resulting matrix easier to read.

cds_evalsig

Filters the data from swept analyses. The command is

v = cds_evalsig(signal, expression)

The first parameter signal is the result of the cds_srr command. The second parameter expression is a string expression. You can use both relational operators (<, <=, ==, >=, >) and logic operators (&, |).

If the signal has the swept fields prf and time, you can write an expression like the following,

prf < -20 | prf == -10 & time >= 2e-8 

The logic operator (|) is only effective between expressions with the same field name. The following expression is meaningful.

prf==-10 | prf >= -20 

When the logic operator (|) is used between two different field names, it is equivalent to the logic operator (&). The following two expressions are equivalent.

prf==-10 | time <= 2e-8 

and

prf==-10 & time <= 2e-8 
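The operator semantics described above can be mimicked in a short Python sketch. This is an illustration of the documented behavior, not toolbox code: conditions on the same field are OR-ed together, while conditions on different fields are AND-ed.

```python
# Python mimic of the cds_evalsig operator semantics: `|` forms a union
# only between conditions on the same field; between different fields it
# behaves like `&`.

def eval_filter(points, clauses):
    """points: list of dicts, e.g. {'prf': -10, 'time': 1e-8}.
    clauses: list of (field, predicate) pairs, one entry per condition.
    Conditions on the same field are OR-ed; different fields are AND-ed."""
    selected = []
    for p in points:
        by_field = {}
        for field, pred in clauses:
            by_field.setdefault(field, []).append(pred(p[field]))
        # OR within a field, AND across fields
        if all(any(results) for results in by_field.values()):
            selected.append(p)
    return selected

points = [{'prf': -10, 'time': 1e-8},
          {'prf': -10, 'time': 3e-8},
          {'prf': -5,  'time': 1e-8}]

# "prf==-10 | time<=2e-8" behaves like "prf==-10 & time<=2e-8"
sel = eval_filter(points, [('prf', lambda v: v == -10),
                           ('time', lambda v: v <= 2e-8)])
```

Here sel keeps only the point with prf = -10 and time = 1e-8, matching the equivalence of the last two expressions above.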

cds_plotsig

Plots swept signals. The command is

cds_plotsig(signal, expression, sweep_name, type_id)

The first two parameters signal and expression have the same meaning as described for the cds_evalsig command. The third parameter sweep_name is the name of a sweep field. You can define a special x-axis with sweep_name when the result has multiple sweeps. The default value for sweep_name is the name of the innermost sweep. The fourth parameter type_id is a string type id that can be any of the following: mag, phase, real, imag, both, db10, db20 or dbm.

cds_harmonic

This command is specifically designed for RF results; it helps you select harmonics and sidebands. In the toolbox data structure created by the cds_srr command, all sweep field names are listed in the info field. The harmonic information is not an independent sweep; it always combines frequencies and tones, so it is not listed in the info field. Instead, the harmonic and harmUnit fields contain the harmonic information, and the cds_harmonic command uses these fields to select the harmonics of interest. The command is

hsig = cds_harmonic(signal, harms)

For the PSS analysis, harms is a single integer. For the QPSS analysis harms is a vector. For example, use the following command for a QPSS analysis.

ord3 = cds_harmonic(rfout, [2 -1])

cds_interpsig

The intervals between time points in Spectre results are not equal. We provide an interpolation command to distribute the time points evenly. The command is

v = cds_interpsig(signal, sweep_name, num, method)

Where the first parameter signal results from other commands such as cds_srr. The second parameter sweep_name is the name of the inner sweep—normally it is time. The third parameter num is the vector length after interpolation. The fourth parameter method specifies a method for interpolation—it is a string value which can be linear, cubic, nearest or spline. The default interpolation method is linear.

Only the first parameter is required.
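A minimal Python sketch of what cds_interpsig does for method='linear' follows. It is an illustration, not the toolbox implementation: it resamples an unevenly spaced sweep onto evenly spaced points.

```python
# Linear resampling of an unevenly spaced sweep onto `num` evenly
# spaced points, as cds_interpsig does for method='linear'.

def interp_uniform(t, v, num):
    """Linearly interpolate samples (t, v) onto num evenly spaced times."""
    t_new, v_new = [], []
    step = (t[-1] - t[0]) / (num - 1)
    j = 0
    for k in range(num):
        tk = t[0] + k * step
        # advance to the source interval containing tk
        while j < len(t) - 2 and t[j + 1] < tk:
            j += 1
        frac = (tk - t[j]) / (t[j + 1] - t[j])
        t_new.append(tk)
        v_new.append(v[j] + frac * (v[j + 1] - v[j]))
    return t_new, v_new

# Uneven input times 0, 1, 3, 4 become the uniform grid 0, 1, 2, 3, 4.
t_u, v_u = interp_uniform([0.0, 1.0, 3.0, 4.0], [0.0, 1.0, 3.0, 4.0], 5)
```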

The Measurement Commands

cds_fft

The FFT (Fast Discrete Fourier Transform) is a typical RF measurement. MATLAB provides an internal fft function. The command cds_fft calls the internal MATLAB fft function. It prepares the input signal, and calls fft for each outer sweep. The command is

fsig = cds_fft(tsig)

The input parameter tsig is a time-domain signal. The output parameter fsig contains frequency domain data.

The cds_fft command is a MATLAB script. It is located in the file cds_fft.m in the installation directory. If you want to write your own measurement, this script offers a good example.

In cds_fft.m, the script checks the input value at the beginning: tsig must exist and not be empty. As a time-sweep result of cds_srr, it must have the fields info and time. Because the intervals between time points in tsig vary, the script calls cds_interpsig to distribute the time points evenly. From the uniform time vector, the script derives the frequency vector. Because tsig can also be a multi-level sweep, the script performs an fft for each outer sweep. After the fft, it copies the additional sweep information from tsig to fsig and replaces the field time with the field freq.
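The flow of cds_fft.m can be illustrated with a small Python sketch. A naive DFT stands in for MATLAB's fft, and the sketch assumes the samples are already uniformly spaced, which is what the cds_interpsig step guarantees.

```python
import cmath
import math

def dft(samples, dt):
    """Naive DFT of uniformly spaced samples; returns (freqs, spectrum).
    The spectrum is normalized by n, so a unit-amplitude tone shows a
    bin magnitude of 0.5. The frequency vector is derived from the
    uniform spacing, as cds_fft derives it after cds_interpsig."""
    n = len(samples)
    freqs = [k / (n * dt) for k in range(n)]
    spec = [sum(samples[m] * cmath.exp(-2j * math.pi * k * m / n)
                for m in range(n)) / n
            for k in range(n)]
    return freqs, spec

# A 3-cycle sine sampled over a 1 s window peaks in bin 3 (3 Hz).
n, dt = 64, 1.0 / 64
sig = [math.sin(2 * math.pi * 3 * m * dt) for m in range(n)]
freqs, spec = dft(sig, dt)
peak_bin = max(range(1, n // 2), key=lambda k: abs(spec[k]))
```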

cds_compression

Returns the n-dB input referred compression point for the wave supplied. The command is

comp = cds_compression(vport, iport, harm, rport, gcomp, curve)

Where the first parameter vport is the voltage of the port, normally the return value of cds_srr. The second parameter iport is the current of the port; it can be calculated from the voltage and impedance. The third parameter harm is the first-order harmonic; the default value is 1. The parameter rport is the impedance of the port; the default value is 50. The parameter gcomp is the gain compression in dB; the default value is 1. The parameter curve is the control flag for curve display; it can be on or off, and the default value is on.
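The measurement itself can be sketched in Python. This is an illustration of the n-dB compression idea, not the toolbox code: find where the gain drops gcomp dB below its small-signal value and interpolate between sweep points. The amplifier model below is synthetic.

```python
# n-dB input-referred compression: the input power at which the gain has
# dropped `gcomp` dB below its small-signal value, interpolated between
# sweep points.

def compression_point(pin_db, pout_db, gcomp=1.0):
    g0 = pout_db[0] - pin_db[0]          # small-signal gain (dB)
    target = g0 - gcomp
    for k in range(1, len(pin_db)):
        g = pout_db[k] - pin_db[k]
        if g <= target:
            g_prev = pout_db[k - 1] - pin_db[k - 1]
            frac = (g_prev - target) / (g_prev - g)
            return pin_db[k - 1] + frac * (pin_db[k] - pin_db[k - 1])
    return None  # no compression within the sweep

# Synthetic amplifier: 20 dB gain, rolling off 0.5 dB per dB of input
# power above -10 dBm, so the 1 dB compression point is at -8 dBm.
pin = [p / 2.0 for p in range(-60, 1)]   # -30 .. 0 dBm in 0.5 dB steps
pout = [p + 20.0 - max(0.0, p + 10.0) * 0.5 for p in pin]
```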

cds_ipn

Returns the Intercept Point for the wave supplied. The command is:

[ipn_in, ipn_out] = cds_ipn(vport, harmspur, harmref, epoint, rport, ordspur, iport, epref, ordref, curve)

The parameters of cds_ipn are similar to the parameters of cds_compression. The first parameter vport is the voltage of the port. The second parameter harmspur is the spur-order harmonic value. The third parameter harmref is the reference-order (first-order) harmonic value. The parameter epoint is the input power extrapolation point. The parameter rport is the impedance of the port; the default value is 50.0. The parameter ordspur is the spur order number; the default value is 3. The parameter iport is the port current; the default value is vport/rport. The parameter epref is the input power reference point; epoint is used when this parameter is not specified. The parameter ordref is the reference order number; the default value is 1. The parameter curve is the control flag for curve display; it can be on or off, and the default value is on.

The return parameter ipn_in is the input reference IPn value and ipn_out is the output reference IPn value.
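The extrapolation behind an IPn measurement can be sketched as follows. This is a Python illustration, not the toolbox code: from a point epoint in the small-signal region, the reference harmonic is extended with slope ordref and the spur with slope ordspur, and the input power where the two lines meet is the input-referred IPn.

```python
# IPn by line extrapolation: reference harmonic grows with slope ordref
# (dB per dB of input power), the spur with slope ordspur; the lines
# cross at the intercept point.

def ipn(pref_at_ep, pspur_at_ep, epoint, ordspur=3, ordref=1):
    ipn_in = epoint + (pref_at_ep - pspur_at_ep) / (ordspur - ordref)
    ipn_out = pref_at_ep + ordref * (ipn_in - epoint)
    return ipn_in, ipn_out

# Ideal example: 10 dB gain, IM3 spur 40 dB below the fundamental at
# epoint = -20 dBm, so IIP3 = -20 + 40/2 = 0 dBm and OIP3 = 10 dBm.
iip3, oip3 = ipn(pref_at_ep=-10.0, pspur_at_ep=-50.0, epoint=-20.0)
```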

Example

This section shows how to use the toolbox to make RF measurements in MATLAB. The example circuit is an LNA design from the RF workshop which is available at

<instdir>/tools/spectre/examples/SpectreRFworkshop 

Please refer to Lab3: Gain Compression and Total harmonic Distortion (Swept PSS) in the workshop LNA Design Using SpectreRF.

After finishing the LNA Lab3, you get the results in

simulation/Diff_LNA_test/spectre/schematic/psf

To start MATLAB from the system prompt, type the following command

matlab

In MATLAB, get time domain data with the command

>> rdir = 'simulation/Diff_LNA_test/spectre/schematic/psf';
>> tdout = cds_srr(rdir, 'sweeppss_pss_td-sweep', 'RFout');

Figure 1-20 Gain Compression (1dB Compression)

To get 1 dB compression point, use the following command

>> cds_compression(fdout)

A window with the result will pop up, as Figure 1-20 shows.

tdout is a power sweep result. For different prf values, the numbers of time points differ, and for each prf, the intervals between time points vary. The field time is a 161x11 matrix, like the field V.

Execute command:
>> td161=cds_interpsig(tdout); 
>> td100=cds_interpsig(tdout, 'time', 100);
>> hold on
>> cds_plotsig(tdout)
>> cds_plotsig(td161)
>> cds_plotsig(td100)

The waveform shapes of td161 and td100 are the same as that of tdout. But in td161 and td100, the time points are the same for each prf and the intervals between time points are uniform, so the field time can be stored and handled as a vector.

To get the frequency domain data, execute command:

>> fdout = cds_srr(rdir, 'sweeppss_pss_fd-sweep', 'RFout');

fdout contains 11 harmonics; the field harmonic lists the harmonic numbers. To get the first-order harmonic signal, use the following command

>> fd1 = cds_harmonic(fdout, 1)

Figure 1-21 Total Harmonic Distortion

To get total harmonic distortion in percent, use the following command

>> thd = cds_thd(fdout); 
>> cds_plotsig(thd);

The result is shown in Figure 1-21.

To compare the frequency-domain data with the time-domain data, execute the following commands.

>> fd1 = cds_evalsig(fdout, 'prf = -30');
>> td2fd = cds_fft(tdout);
>> fd2 = cds_evalsig(td2fd, 'prf = -30 & freq<=2.5e10')

The result is shown in Figure 1-22.

Figure 1-22 Compare FD data before FFT and after

Compatibility with Aptivia MATLAB Functions

This topic is of interest to users who use acv measures in MATLAB. The documentation for these measures is in <cdsinst>/doc/matlabmeasug/, Chapters 3 and 4.

The Spectre/RF MATLAB Toolbox handles swept analyses and RF-related analyses, such as Monte Carlo and PSS/PAC, better than acv measures do. While acv measures provide more measurements for the transient and ac analyses, the MATLAB Toolbox can reuse acv measure functions through the cds2vsde command and the global variable CDS2ACV.

Set the global variable CDS2ACV to 1 in MATLAB with the following command.

>> global CDS2ACV; CDS2ACV = 1

When CDS2ACV=1, cds_srr prints vsde compatibility information after loading the dataset and automatically translates the signal to the acv data structure for acv measures.

The cds2vsde command is designed for translating signals to acv data structure. The command is

cds2vsde(sig, raw_file, variable, option)

The parameters raw_file and variable are string values that identify the signal. The parameter option can be add or replace, as in other acv functions.

The acv data structure cannot handle multi-level sweeps, and for each raw_file the sweep value should be the same. So cds2vsde will use different variables or different raw_files for multi-level sweep signals.

If you want to convert variable td2fd to acv format, use the following command

>> cds2vsde(td2fd, 'sweeppss_pss_fd-sweep', 'td2fd');

It will print out:

vsde compatibility information
raw_file: sweeppss_pss_fd-sweep
variable: td2fd(prf=-30)
variable: td2fd(prf=-27.5)
variable: td2fd(prf=-25)
variable: td2fd(prf=-22.5)
variable: td2fd(prf=-20)
variable: td2fd(prf=-17.5)
variable: td2fd(prf=-15)
variable: td2fd(prf=-12.5)
variable: td2fd(prf=-10)
variable: td2fd(prf=-7.5)
variable: td2fd(prf=-5)

Then use command meas_plot for plotting:

>> meas_plot('sweeppss_pss_fd-sweep', 'td2fd(prf=-20)', 'stem');

Reference

[1]

Simulation Results Reader User Guide, Product Version 5.0

[2]

MATLAB External Interfaces Reference available at

http://www.mathworks.com/access/helpdesk/help/techdoc/matlab.html

[3]

SpectreRF Workshop--LNA Design Using SpectreRF, MMSIM6.0USR2

MATLAB Support Matrix

Table 1-1 MATLAB Toolbox

MMSIM       MATLAB Release   Linux 32         Linux 64     Solaris 64

            R2007a           Supported        Supported    Supported
            R2007b           Supported        Supported    Supported
            R2008a           Supported        Supported    Supported
MMSIM13.1   R2008b           Supported        Supported    Supported
            R2009a           Supported        Supported    Supported
            R2009b           Supported        Supported    Supported
            R2010a           Supported        Supported    Not Supported 1
            R2010b           Supported        Supported    Not Supported 1
            R2011a           Supported        Supported    Not Supported 1
            R2011b           Supported        Supported    Not Supported 1
            R2012a           Supported        Supported    Not Supported 1
            R2012b           Not Supported 2  Supported    Not Supported 1
            R2013a           Not Supported 2  Supported    Not Supported 1
            R2013b           Not Supported 2  Supported 3  Not Supported 1
            R2014a           Not Supported 2  Supported 3  Not Supported 1

Table 1-2 MATLAB Cosimulation

MMSIM       MATLAB Release   Linux 32         Linux 64     Solaris 64

            R2007a           Supported        Supported    Supported
            R2007b           Supported        Supported    Supported
            R2008a           Supported        Supported    Supported
MMSIM13.1   R2008b           Supported        Supported    Supported
            R2009a           Supported        Supported    Supported
            R2009b           Supported        Supported    Supported
            R2010a           Supported        Supported    Not Supported 4
            R2010b           Supported        Supported    Not Supported 1
            R2011a           Supported        Supported    Not Supported 1
            R2011b           Supported        Supported    Not Supported 1
            R2012a           Supported        Supported    Not Supported 1
            R2012b           Not Supported 5  Supported    Not Supported 1
            R2013a           Not Supported 2  Supported    Not Supported 1
            R2013b           Not Supported 2  Supported 6  Not Supported 1
            R2014a           Not Supported 2  Supported 3  Not Supported 1

Starting with the MMSIM 14.1 release, the MATLAB toolbox supports only R2012a and later versions.

Noise Separation in Pnoise and Qpnoise Analysis

This section describes how to analyze RF circuits using the noise separation features in the Pnoise and Qpnoise analyses. SpectreRF users in the Analog Design Environment (ADE) will find noise separation information to be useful.

Principles of Noise Separation in RF Circuits

For the Pnoise and Qpnoise analyses, input noise can be either stationary or cyclostationary. A simple instance of stationary noise is the white noise in a resistor with constant resistance. Cyclostationary noise is generally due to the fact that in most RF circuits the operating point of the nonlinear devices, primarily transistors, is periodic and time-varying.

To better illustrate the difference between stationary noise and cyclostationary noise, consider the white thermal noise generated by the nonlinear drain-to-source (channel) resistance in a MOS transistor. The formula for white thermal noise is given as

(1-34)

In Equation 1-34, R(V) is the bias-dependent small signal drain-to-source resistance. Assuming the transistor is driven by a periodic large signal excitation, for example the clock in a switched capacitor filter, R(V) is time-varying and you can no longer model its associated thermal noise as a simple stationary noise source. You must treat the noise source as a cyclostationary random process.

When input noise passes through an RF circuit, the frequency translation intentionally performed by the linear periodic time-varying (LPTV) circuit introduces an aliasing, or noise folding, effect. You can summarize the noise transfer process as follows.

The Pnoise and Qpnoise analyses calculate the output noise transferred from the noise sources by transfer functions. Using the Direct Plot form, you can plot output noise, input noise, noise figure, transfer function, and so on. By using the Noise Summary Print form and selecting sort and truncation methods, you can print spot noise or integrated noise.

The noise separation feature reports how much of the output noise is due to the noise sources themselves and how much is due to the corresponding transfer functions. This extra information shows you how each noise source contributes noise to the output.

You can select which one of the two approaches to use to decrease the output noise

For some projects, you might want to separate the noise due to sources from the noise due to transfer functions in order to satisfy noise specifications. When you include the noise separation specifications in the design process you are using global optimization to help produce better designs.

When you take K sidebands and one noise source into account, Equation 1-35 illustrates the power spectrum density (PSD) of the output noise due to this single noise source.

(1-35)

Where NoiseSourcei represents the PSD of Sourcei sampled at a frequency shift of i times the fundamental.

When the noise is cyclostationary, Equation 1-36 represents the output PSD due to this source.

(1-36)

By introducing the noise gain as a parameter, the formulas in Equations 1-35 and 1-36 can be unified as they are in Equation 1-37.

(1-37)
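As a numeric illustration of the folding sum in Equation 1-35 for the stationary case (a sketch with made-up gains, assuming the form: output PSD equals the sum over sidebands of the squared transfer gain times the shifted source PSD):

```python
# Stationary-noise folding sum: the output PSD at frequency f is the sum
# over sidebands k of |H_k(f)|^2 times the source PSD sampled at f
# shifted by k times the fundamental f0. The gains and PSD level here
# are made-up illustration numbers.

def output_psd(f, f0, gains, source_psd):
    """gains: dict mapping sideband index k -> |H_k(f)|.
    source_psd: callable returning the source PSD at a frequency."""
    return sum(h * h * source_psd(f - k * f0) for k, h in gains.items())

# White (frequency-flat) source of 1e-18 V^2/Hz folded through three
# sidebands. For a white source the sum reduces to sum(|H_k|^2) * S.
gains = {-1: 0.2, 0: 1.0, 1: 0.5}
psd = output_psd(1e6, 1e9, gains, lambda f: 1e-18)
```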

Therefore, you know that

The Noise Separation GUI

This section describes how to use noise separation in the analog design environment (ADE).

Setting Up a Pnoise Analysis for Noise Separation

After you use the ADE GUI to set up both a PSS analysis and a Pnoise analysis, use the Noise Type section at the bottom of the Pnoise Choosing Analysis form to set up the noise separation analysis. When you set Noise Type to sources, the Noise Separation field becomes visible and you can select yes or no, as shown in Figure 1-35.

Notice the message below Noise Separation,

separate noise into source and gain

This message further describes noise separation in the current context.

Please notice the following information as it applies to this context.

When you run the PSS and Pnoise analyses, noise separation is measured during the simulation and the corresponding results are saved.

Setting Up A QPnoise Analysis for Noise Separation

After you use the ADE GUI to set up both a QPSS analysis and a QPnoise analysis, set up the noise separation analysis by selecting yes for Noise Separation as shown in Figure 1-36.

Notice the message below Noise Separation,

separate noise into source and gain

This message further describes noise separation in the current context.

Please notice the following information as it applies to this context.

When you run the QPSS and QPnoise analyses, noise separation is measured during the simulation and the corresponding results are saved.

Plotting Noise Sources, Gains, and Outputs

  1. After running the noise separation analyses from ADE, select Results — Direct Plot — Main Form in the ADE Simulation window.
    Aside from pss and pnoise (or qpss and qpnoise), another choice for Analysis, pnoise separation (or qpnoise separation), displays in the Direct Plot form.
  2. Select pnoise separation (or qpnoise separation) to create a noise separation plot.
    Six Function items are displayed: Sideband Output, Instance Output, Source Output, Instance Source, Primary Source, and Src. Noise Gain.
  3. Select one of the Functions.

    Sideband Output

    Plots the noise contribution made by multiple selected sidebands to the output noise

    Instance Output

    Plots the noise contribution of some instances (for example, MOS, or BJT) to the output at one selected sideband

    Source Output

    Plots the noise contribution made by one selected sideband to the output noise

    Instance Source

    Plots the noise sources of some instances at one selected sideband

    Primary Source

    Plots the primary noise sources (for example, re or rb in a BJT) at one selected sideband

    Src. Noise Gain

    Plots the noise gains of primary noise sources (for example, re or rb in a BJT) from source to output at one selected sideband.


    Notice the message below the Function area in the Direct Plot Form. For example, after selecting the function, Sideband Output, you see the following message.
    Noise contribution of sideband to output
    This message further describes the noise separation plot in the current context.

Selecting Other Information in the Direct Plot Form

You further define what RF simulation measurements to plot by entering and selecting items in the Direct Plot form; for example, functions, signal levels, modifiers, plots and so on.

  1. While making selections in the Direct Plot form, follow new messages as they display.
    Additional instructions and prompts might appear within the body of the Direct Plot form or at the bottom of the form. For example, after selecting Sideband Output for Function and V**2 / Hz for Signal Level, you can then select either Power or dB20 for Modifier. You might also see a message such as the following.
    Currently, only sideband data is available
    This message accompanies the Output Sidebands list box.
  2. After selecting Power for Modifier and all of the sidebands in the list box, the following message appears.
    Press plot button on this form...
    The completed Direct Plot form displays.
  3. As the last step in the Direct Plot form, click Plot to plot the noise contributed by each individual sideband.
    The Plot displays as shown in Figure 1-26.
    Figure 1-26 Output Sidebands Plot

When you press the plot button (or perform another specified action, such as Select Net in schematic), a simulation plot appears, by default, in the current Waveform Window. If the Waveform window or Schematic window is not open, selecting a direct plot function automatically opens both windows.

Brief instructions for all six noise separation plots follow.

Sideband Output

Plots the noise contribution of selected sidebands to the output.

Signal Level

Possible choices of measure units

Modifier

Possible measure scale according to the Signal Level

Output Sideband

A multiple-selection list box of all possible sidebands.

Instance Output

Plots the noise contribution of some instances; for example, MOS, BJT, and so on, to the output at one selected sideband.

Signal Level

Same as above

Modifier

Same as above

Output Sideband

Same as above, except single-selection

Filter

Frame

Include All Types — Selects all device types in the right box

Include None — Clears all selections in the right box

Include Inst — Includes only these instances. Note that the device types of these instances must be selected.

Exclude Inst — Excludes these instances from the selected device types.

Truncate

Frame — Limits the display to the top number of Instance Output contributors. The default is 3.

Instance Source

Plots the noise sources of some instances at one selected sideband.

Signal Level

Same as above

Modifier

Same as above

Output Sideband

Same as above

Filter

Same as above

Truncate

Same as above

Source Output

Plots the noise contribution of the primary noise source (for example, re and rb in a BJT) to the output at one selected sideband.

Signal Level

Same as above

Modifier

Same as above

Output Sideband

Same as above

Filter

Same as above

Truncate

Frame — Limits the display to the top number of Source Output contributors. The default is 3.

Primary Source

Plots the primary noise sources (for example, re and rb in a BJT) at one selected sideband.

Signal Level

Same as above

Modifier

Same as above

Output Sideband

Same as above

Filter

Frame — Same as above

Truncate

Frame — Same as above

Source Noise Gain (Src. Noise Gain)

Plots the noise gains of primary noise sources (for example, re and rb in a BJT) from source to output at one selected sideband.

Signal Level

Same as above

Modifier

Same as above

Output Sideband

Same as above

Filter

Frame — Same as above

Truncate

Frame — Same as above

Printing the Noise Source Summary Results

You can also view all noise source information in the Noise Summary Print form.

In the Noise Summary form, aside from the pnoise and qpnoise items, another item, pnoise_src (or qpnoise_src) is displayed. The field structure of the form, as well as the fill, filter and truncate methods, are the same as those in the pnoise summary (or qpnoise summary) Plot form.

In print form, the Noise Source Summary provides an overview of noise source distribution in the circuit under analysis. The organizational structure of the Noise Source Summary is similar to the Output Noise Summary, but the latter provides the contributions to the output of these noise sources.

Beyond providing an overview of the source distribution, the Noise Source Summary is also useful in other situations.

For example, because you can express Rp as a simple function of rb and Rs for a bipolar LNA, with the Noise Source Summary at hand you can easily estimate the NF by hand.

Displaying the Noise Source Information in WaveScan

You can also view all noise source information in WaveScan.

The Noise Separation Flow

The hierarchical function layout and the filter and truncation algorithms in the Direct Plot form provide a quick way to locate the noise sources that contribute the most to the output noise. The hierarchical layout is based on the hierarchical collection of output noise, which is briefly described as follows.

The Basic Noise Separation Flow

Direct Plot Form -> Pnoise Separation (or Qpnoise Separation) ->

Sideband Output[1] -> Instance Output[2] ->

Source Output[3] -> Instance Source/Primary Source/Src. Noise gain[4]

Note:

[1] Decide which sidebands contribute the most output noise

[2] Decide which instances contribute the most output noise to the selected sideband

[3] Decide which primary noise sources contribute the most output noise to the filtered and truncated instances

[4] Locate which primary noise sources or gains have the greatest effect on output noise

Noise Separation Example

This example shows how to locate the noise sources that contribute the most output noise. It uses the NE600 mixer in the rfExamples library. The schematic is shown in Figure 1-27. With an RF input signal at frf=900 MHz and an LO signal at flo=1 GHz, the expected IF is 100 MHz.

Figure 1-27 NE600 Mixer Schematic

Setting Up and Running a Pnoise Separation Analysis

Using the ADE GUI

Stimulus

Set up the following stimuli:

A large sinusoidal signal at the LO port (PORT2)

Set the RF port (PORT1) source type=dc

Parameters

Set up the following parameters:

flo=1G

prf=-30dBm

Simulation/Analyses

  1. Set up the following PSS analysis.
    Beat frequency: flo
    Output harmonics: Number of harmonics 10.
  2. Set up the following Pnoise analysis.
    Sweeptype: default
    Output Frequency Sweep Range (Hz):
    Start 50M
    Stop 150M
    Sweep Type: Logarithmic
    Points Per Decade: 100
    Sidebands: Maximum Sideband 5
    Output: probe
    Set up as the Output Probe Instance: /rif
    Input Source: port
    Set up as the Input Source Port /rf
    Reference side-band: Enter in field -1
    Noise Type: sources
    Noise Separation: yes
  3. Run the PSS and Pnoise simulations.

Using the Flow to Locate Noise Sources

  1. Plot the Sideband Output.
    Drag the mouse to select all sidebands (from -5 to 5). In this case, the result is shown in Figure 1-28.
    Figure 1-28 Sideband Output of the ne600 with Pnoise Separation
    From Figure 1-28, you can see that Sideband 0 contributes more output noise than any other sideband. Therefore, the next step is to check the instances in sideband 0.
  2. Plot the Instance Output.
    • Set output sideband to 0
    • Include all types
    • Top number is 5

    The result is shown in Figure 1-29.
    Figure 1-29 Instance Output of ne600 with Pnoise Separation
    From Figure 1-29 you can see that rif, rf, q56, q57 and rl1 (especially the rif) contribute more output noise than other instances.
  3. Plot the Source Output.
    • Set output sideband to 0, include all types, and set the top number to 5.

    The result is shown in Figure 1-30.
    Figure 1-30 Source Output of ne600 with Pnoise Separation
    In Figure 1-30, you can see that rif.rn, rf.rn, rl1.rn, r45.rn and q57.rb (especially rif.rn) contribute more output noise than other noise sources.
    The list order of instances in this plot Source Output (in Figure 1-30) is different from the list order of Instance Output in Figure 1-29.
    Up to now it is obvious that rif.rn is the main contributor to the output noise in this circuit. Since rif includes only one noise source, rif.rn, the Instance Source and the Primary Source in Figure 1-31 give the same plot. You can further check the Primary Source and Src. Noise Gain of rif.rn.
  4. Plot Primary Source and Src. Noise Gain.
    Set output sideband to 0, include all types, and set the top number to 1. The results are shown in Figures 1-31 and 1-32.
    Figure 1-31 Primary Source of ne600 with Pnoise Separation
    Figure 1-32 Source Noise Gain of ne600 with Pnoise Separation
    To improve the noise performance of this circuit, decreasing the output noise of rif.rn is an effective solution.
    There are two approaches:
    • One is to decrease the magnitude of the noise source rif.rn by adjusting the device geometry.
    • The other is to decrease the transfer function of rif.rn by adjusting the circuit architecture.

Other Notes

Simulating Noise and Jitter

Analyzing Time-varying Noise

RF circuits are usually driven by periodic inputs. The noise in RF circuits is generated by sources that can therefore typically be modeled as periodically time-varying. Noise that has periodically time-varying properties is said to be cyclostationary.

Characterizing Time-Domain Noise

Noise in a circuit that is periodically driven, say with period T, exhibits statistical properties that also vary periodically. To understand time-domain characterization of noise, consider the simple circuit shown in Figure 1-33.

Figure 1-33 Very Simple Mixer Schematic

The amplitude of the noise measured at the RF output shown in Figure 1-33 periodically varies depending on the magnitude of the modulating signal p(t) , as shown by the sample points in Figure 1-34.

Figure 1-34 Time-Varying Noise Process Analyzed at ξ1 and ξ2

In Figure 1-34

Spectre® circuit simulator RF analysis (SpectreRF) can calculate the time-varying noise power at any point in the fundamental period. In fact, SpectreRF can calculate the full autocorrelation function

R_ξ(p, q) = <x_ξ(p) x_ξ(p+q)> = R_ξ(q)

and its spectrum for the discrete-time processes x_ξ obtained by periodically sampling the time-domain noise process at the same point in phase.
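The sampled-phase autocorrelation described above can be illustrated with a small NumPy sketch (not part of the Spectre RF flow). It builds a toy cyclostationary process — white Gaussian noise multiplied by a periodic envelope, with an arbitrary envelope, period, and seed — samples it once per period at a fixed phase ξ, and estimates R_ξ(q):

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_periods = 100, 2000
m = np.abs(np.sin(2 * np.pi * np.arange(T) / T))   # periodic modulation m(t)
w = rng.standard_normal(T * n_periods)             # stationary white source
x = np.tile(m, n_periods) * w                      # cyclostationary process x(t)

def R_xi(x, T, xi, q):
    """Autocorrelation R_xi(q) of x sampled once per period at phase xi."""
    s = x[xi::T]                                   # x_xi(p) = x(p*T + xi)
    return np.mean(s[: len(s) - q] * s[q:]) if q else np.mean(s * s)

# The sampled-phase noise power tracks m(xi)^2, and, because the underlying
# source is white, samples one period apart are essentially uncorrelated.
print(R_xi(x, T, T // 4, 0))   # ≈ 1.0  (m ≈ 1 at this phase)
print(R_xi(x, T, 0, 0))        # 0.0    (m = 0 at this phase)
print(R_xi(x, T, T // 4, 1))   # ≈ 0.0
```

Each sampled process is stationary even though the underlying process is not, which is exactly the behavior shown for the phases ξ1 and ξ2 in Figures 1-35 and 1-36.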

Figures 1-35 and 1-36 show two such noise processes for two different phases in the periodic interval. Each process is stationary. Figure 1-35 shows the noise process for the phase marked ξ1 in Figure 1-34.

Figure 1-35 Noise Process for Phase ξ1

Figure 1-36 shows the noise process for the phase marked ξ2 in Figure 1-34.

Figure 1-36 Noise Process for Phase ξ2

See the “Reference Information on Time-Varying Noise” for a more detailed introduction to noise in periodically time-varying systems.

Calculating Time Domain Noise

The following steps tell you how to calculate time-domain noise using SpectreRF.

  1. In a terminal window, type icms (in IC5141) or virtuoso (in IC 6.1.4) to start the environment.
  2. In the ADE window, select Analyses – Choose.
    The Choosing Analyses form appears.
  3. In the Choosing Analyses form, highlight pss and perform the PSS analysis setup.
  4. In the Choosing Analyses form, highlight pnoise.
  5. The Choosing Analyses form changes to let you specify information for a Pnoise analysis.
  6. In the Choosing Analyses form, perform the following:
    1. Choose Noise Type timedomain.
    2. Specify an appropriate frequency range and sweep for the analysis.
      You might, for example, perform a linear sweep up to the fundamental frequency. Because each time point in the calculation is a separate frequency sweep, use the minimum number of frequency points possible to resolve the spectrum. This step minimizes computation time.
    3. Specify a noiseskipcount value or specify additional explicit time points with noisetimepoints.
    4. Specify an appropriate set of time points for the time-domain noise analysis.
    5. Use noiseskipcount to calculate time-domain noise for one of every noiseskipcount time points.
      If you set noiseskipcount to a value greater than or equal to zero, the simulator uses the noiseskipcount parameter value and ignores any numberofpoints parameter value. When noiseskipcount is less than zero, the simulator ignores the noiseskipcount parameter. The default is noiseskipcount = -1.
      You can add specific points by specifying a time relative to the start of the PSS simulation interval. noiseskipcount = 5 performs noise calculations for about 30 time points in the PSS interval.
      If you only need a few time points, add them explicitly with the noisetimepoints parameter and set noiseskipcount to a large value like 1000.
  7. In the ADE window, choose Simulation – Netlist and Run.
  8. The simulation runs.
  9. In the ADE window, choose Results – Direct Plot – PSS.
    The PSS Results form appears.
  10. To calculate time-varying noise power, perform the following steps in the PSS Results form:
    1. Click tdnoise and then select Integrated noise power.
    2. Type 0 as the start frequency and the PSS fundamental frequency as the stop frequency.
      For example, type 1G if the PSS period is 1ns.

    A periodic waveform appears that represents the expected noise power at each point in the fundamental period.
  11. To display the spectrum of the sampled processes, perform the following steps in the PSS Results form:
    1. Highlight Output Noise.
    2. Highlight Spectrum for the type of sweep.
    3. Click Plot.

    A set of curves appears, one for each sample phase in the fundamental period.
  12. To calculate the autocorrelation function for one of the sampled processes, perform the following steps:
    1. Display the spectrum using instructions from step 11.
    2. In the ADE window, choose Tools – Calculator.
    3. The calculator appears.
    4. Click wave in the calculator and then select the appropriate frequency-domain spectrum.
      One of the sample waveforms is brought into the calculator.
    5. Choose DFT from the list of special functions in the calculator. Then set 0 as the From and the PSS fundamental as the To value.
    6. Choose an appropriate window (for example, Cosine2) and a number of samples (around the number of frequency points in the interval [0, 1/T]).
    7. Apply the DFT and plot the results.

    Harmonic q of the DFT results gives the value of the discrete autocorrelation for this sample phase, R(q).
    Be sure the noise is in the correct units of power (for example, V^2/Hz, not V/sqrt(Hz)) before performing the DFT to obtain the autocorrelation.
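The calculator steps above amount to a discrete Wiener-Khinchin computation: transforming a sampled process's power spectrum back to the time domain yields its discrete autocorrelation R(q). The following NumPy sketch (outside the Spectre RF flow) illustrates the idea, using a first-order autoregressive sequence as a stand-in for a sampled noise process; the coefficient, length, and seed are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
a, N = 0.6, 1 << 16
e = rng.standard_normal(N)
y = np.zeros(N)
for p in range(1, N):                 # AR(1) stand-in for a sampled noise process
    y[p] = a * y[p - 1] + e[p]

# Periodogram-style power spectrum of the sampled process...
Y = np.fft.fft(y)
S = (Y * np.conj(Y)).real / N

# ...and its inverse DFT, which recovers the discrete autocorrelation R(q).
R = np.fft.ifft(S).real

# Theory for AR(1): R(q) = a**q / (1 - a**2)
print(R[0])          # ≈ 1 / (1 - 0.36) ≈ 1.56
print(R[1] / R[0])   # ≈ a = 0.6
```

Harmonic q of the transform plays the same role as in step 12: it gives R(q) for the chosen sample phase.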

Calculating Noise Correlation Coefficients

To characterize the noise in multi-input/multi-output systems, it is necessary to calculate both the noise power at each port and the correlation between the noise at various ports. The situation is complicated in RF systems because the ports may be at different frequencies. For example, in a mixer, the input port may be at the RF frequency and the output port at the IF frequency.

Denote the power spectrum of a signal x by S_XX(ω), that is

S_XX(ω) = X*(ω) X(ω)

where X(ω) is the Fourier transform of the signal x(t). For random signals like noise, calculate the expected value of the power spectrum S_XX(ω). To characterize the relationship between two separate signals x(t) and y(t), you also need the cross-power spectrum

S_XY(ω) = X*(ω) Y(ω)

For random signals, the degree to which x and y are related is given by the cross-power spectrum. You can define a correlation coefficient ρ_xy(ω) by

ρ_xy(ω) = S_XY(ω) / sqrt(S_XX(ω) S_YY(ω))

The cyclostationary cross-spectral density S_α(ω) expresses the correlation of a signal x at frequency ω with the signal y at frequency ω + α. For example, white Gaussian noise is completely uncorrelated with itself for α ≠ 0. Noise in an RF system generally has S_α(ω) non-zero when α is the fundamental frequency, for example, the LO frequency in a mixer.
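The correlation coefficient can be estimated numerically by averaging periodograms over many realizations. The NumPy sketch below (illustrative only, with arbitrary sizes and seed) assumes the conventional normalization ρ_xy(ω) = S_XY(ω)/sqrt(S_XX(ω) S_YY(ω)) and checks it against theory for two partially correlated white signals:

```python
import numpy as np

rng = np.random.default_rng(2)
N, trials = 256, 4000
Sxx = np.zeros(N)
Syy = np.zeros(N)
Sxy = np.zeros(N, dtype=complex)
for _ in range(trials):
    x = rng.standard_normal(N)               # first noise signal
    y = 0.5 * x + rng.standard_normal(N)     # y partially correlated with x
    X, Y = np.fft.fft(x), np.fft.fft(y)
    Sxx += (X.conj() * X).real
    Syy += (Y.conj() * Y).real
    Sxy += X.conj() * Y

# Correlation coefficient per frequency bin; the common normalization
# constant cancels in the ratio.
rho = np.abs(Sxy) / np.sqrt(Sxx * Syy)

# Theory: |rho| = 0.5 / sqrt(0.25 + 1) ≈ 0.447 at every frequency
print(rho.mean())
```

Note that the averaging over realizations is essential: for a single realization, |ρ| is identically 1 at every bin, so only the expected spectra carry the correlation information.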

After you have measured the noise properties of a circuit, you can represent the circuit as a noiseless multiport with equivalent noise sources. For example, in Figure 1-37, first you measure the noise voltage appearing at the excitation ports of the circuit on the left in Figure 1-37. Then, you can express the noise properties of the circuit as two equivalent frequency-dependent noise voltages V N1 and V N2, and a complex correlation coefficient ρ 12 .

Figure 1-37 Calculating Noise Correlations and Equivalent Noise Parameters

When you know the noise at each port and its correlation, you can obtain any of various sets of equivalent noise parameters. For example, you can express noise in an impedance representation as the equivalent correlated noise voltage sources shown in Figure 1-37, as equivalent noise resistances and the correlation parameters, and as F min, R N, G opt, and B opt.

Cyclostationary Noise Example

As an example which illustrates the various aspects of cyclostationary noise, consider the simple mixer circuit shown in Figure 1-38.

Figure 1-38 Simple Mixer Circuit

In this simple mixer circuit, white Gaussian noise passes through a high-order band-pass filter with center frequency ω0. It is then multiplied by two square waves that have a phase shift with respect to each other. Finally, the output of the ideal multipliers is put through a one-pole low-pass filter to produce the I and Q outputs.

The time-domain behavior of the noise is examined first. The most dramatic effect can be seen by looking directly at the mixer outputs in Figure 1-39. This figure shows the contributions to the time-varying noise power made by three separate source frequencies. Two of the source frequencies were selected around ω0; the third was selected away from ω0, slightly into the stop band of the band-pass filter. The sharp change in noise power over the simulation interval occurs because the mixers were driven with square-wave LO signals.

Figure 1-39 Time-Varying Noise Power Before Low-Pass Filter

Figure 1-40 shows the spectrum of a sampled noise process. Note the periodically replicated spectrum.

Figure 1-40 Spectrum of a Sampled Noise Process

The noise behavior at the output ports is examined next. The output spectra at the I and Q outputs are shown in Figures 1-41 and 1-42. The noise density at I is concentrated around zero because the noise at the RF input to the mixers (band-limited around ω0) is shifted down to zero and up to 2ω0, but components not around zero are eliminated by the low-pass filter.

Figure 1-41 Power Spectra With LO Tones 90deg Out of Phase

More interesting is the cross-correlation spectrum of the I and Q outputs, shown as the dashed line in Figures 1-41 and 1-42. When the signals applied to the mixers are 90 degrees out of phase (as in Figure 1-41), the cross-power spectral density of the noise at the separate I and Q outputs is small, indicating little noise correlation. If the tones are not quite 90 degrees out of phase (as in Figure 1-42), the correlation is much more pronounced, though in neither case is it completely zero.

In Figures 1-41 and 1-42, the solid and dashed lines represent the following

Figure 1-42 Power Spectra With LO Tones 72deg Out of Phase

A more interesting example comes from examining the correlation between the noise at the I output and the noise at the RF input. The density function as given by

is significant because it represents the correlation between the noise at the I output around the baseband frequency with the noise at the RF input, ω0 higher in frequency. The correlation is high because the noise at the RF input is centered around ω0 and converted to zero-centered noise by the mixer.

Figure 1-43 Noise Spectrum at the RF Input

In Figure 1-43, the noise spectrum at the RF input is given by the following function

Figure 1-44 Noise Spectrum at Various Power Densities

In Figure 1-44, the solid, dashed, and dashed-dot lines represent the following

Finally a detailed circuit example was considered. A transistor-level image-reject receiver with I and Q outputs was analyzed. The noise spectra at the I and Q outputs were found to be very similar, as shown in Figure 1-45.

Figure 1-45 Power Spectral Densities of an Image-Reject Receiver

In the image-reject receiver example shown in Figure 1-45, the power spectral densities are represented as follows

The IQ cross-power density was smaller, but not negligible, indicating that the noise at the two outputs is partially correlated. The correlation coefficient between noise at the I and Q outputs of the image-reject receiver is shown in Figure 1-46.

Figure 1-46 Correlation Coefficient for Output Noise of Image-Reject Receiver

Reference Information on Time-Varying Noise

The following sections provide background and reference information on the following noise-related topics:

Thermal Noise

The term noise is commonly used to refer to any unwanted signal. In the context of analog circuit simulation, noise is distinguished from such phenomena as distortion in the sense that it is non-deterministic, being generated from random events at a microscopic scale. For example, suppose a time-dependent current i(t) is driven through a linear resistor, as shown in Figure 1-47.

Figure 1-47 Deterministic Current Source Driving a Noisy Linear Resistor

The voltage that appears across the resistor is

The desired signal i(t)R, shown in Figure 1-48, is corrupted by an added noise voltage n(t) that is due to resistive thermal noise. The thermal noise of the resistor is modeled by a current source in parallel with the resistor.

Figure 1-48 The Desired Signal i(t)R

The total measured voltage is shown in Figure 1-49.

Figure 1-49 The Total Measured Voltage

The added noise process alone, n(t), is a random process and so must be characterized in ways that are different from those used for deterministic signals. That is, at a time t0 the voltage produced by the driven current can be exactly specified: it is i0 sin(t0) R. Just by inspecting Figure 1-48 we can predict this part of the measured signal.

On the other hand, the exact value of the noise signal cannot be predicted in advance, although it can be measured to be a particular value n(t 0 ) . However, if another measurement is performed, the noise signal n(t) we obtain is different and Figure 1-49 changes. Due to its innate randomness, we must use a statistical means to characterize n(t) .

Now consider the circuit in Figure 1-50, where we restrict attention to the noise source/resistor pair alone.

Figure 1-50 Resistor Modeled as a Noiseless Resistance with an Equivalent Noise Current Source

A typical measured noise current/voltage is shown in Figure 1-51.

Figure 1-51 Typical Measured Noise Current/Voltage

Because we cannot predict the specific value of n(t) at any point, we might instead try to predict what its value would be on average, or what we might expect the noise to be. For example, if we measure many noise voltage curves in the time domain, n(t), and average over many different curves, we obtain an approximation to the expected value of n(t), which we denote by E { n(t) }. For thermal noise, we find that E { n(t) } = 0. Therefore, let us instead compute E { n(t)^2 }, the expected noise power. An example of this sort of measurement is shown in Figure 1-52; 250 measurements were needed to compute this curve.
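The ensemble-averaging procedure just described can be sketched in a few lines of NumPy (illustrative only; the unit noise power and the counts of 250 measurements and 200 time points are assumptions for the demonstration):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2 = 1.0                      # assumed noise power of the source
n_meas, n_t = 250, 200            # 250 repeated measurements, 200 time points each
samples = rng.normal(0.0, np.sqrt(sigma2), size=(n_meas, n_t))

mean_est = samples.mean(axis=0)           # estimate of E{n(t)}   -> near 0
power_est = (samples ** 2).mean(axis=0)   # estimate of E{n(t)^2} -> near sigma2

print(np.abs(mean_est).max())    # small but nonzero: only 250 averages
print(power_est.mean())          # ≈ 1.0
```

With only 250 averages, the estimated power curve still wanders around sigma2, which is why the measured curve in Figure 1-52 is not perfectly flat.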

Figure 1-52 Expected Noise Power

Now suppose that we wish to tap the circuit at multiple points. Each point has its own noise characteristics, but they are not necessarily independent. Consider the circuit shown in Figure 1-53.

Figure 1-53 Circuit Illustrating Correlated Noise

The signals n 1 (t) and n 2 (t) are obtained by measuring the voltage across a single resistor ( n 1 (t) ), and across both resistors ( n 2 (t) ), respectively. Just measuring E { n 1 (t) 2 } and E { n 2 (t) 2 } is not enough to predict the behavior of this system, because n 1 (t) and n 2 (t) are not independent.

To see that n1(t) and n2(t) are not independent, consider Figures 1-54 and 1-55, in which samples of each of the processes are taken and plotted on an X-Y graph.

Figure 1-54 Samples of n1(t) Plotted Versus n2(t)

Because n1(t) forms part of n2(t), n1(t) and n2(t) are correlated, so in Figure 1-54 the X-Y plot has a characteristic skew along the X=Y line, relative to the n1(t), n3(t) plot in Figure 1-55.

Figure 1-55 Samples of n1(t) Plotted Versus n3(t)

The signals n 1 (t) and n 3 (t) are uncorrelated because they represent thermal noise from different sources. The additional measurement needed to describe the random processes is the measurement of the correlation between the two processes, E { n 1 (t)n 2 (t) }. We can also define a time-varying correlation coefficient ρ, with ρ∈ [ 0,1 ], as

A value of ρ = 0 indicates completely uncorrelated signals, and a value near one indicates a high degree of correlation. In this example, we would find that ρ(t) = 1/2, representing the fact that each of the two noise sources contributes half of the process n2(t).
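A quick Monte Carlo check of this two-resistor example is sketched below. The defining equation for ρ is not reproduced in this text, so the normalization is an assumption: the power-fraction form E{n1 n2}/E{n2^2}, which reproduces the stated value of 1/2; the conventional Pearson coefficient (which evaluates to 1/sqrt(2) here) is printed for comparison:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 200_000
n1 = rng.standard_normal(N)      # thermal noise measured across resistor 1
n3 = rng.standard_normal(N)      # independent thermal noise of resistor 2
n2 = n1 + n3                     # voltage measured across both resistors

cov = np.mean(n1 * n2)
frac = cov / np.mean(n2 ** 2)    # fraction of n2's power contributed by n1
pearson = cov / np.sqrt(np.mean(n1 ** 2) * np.mean(n2 ** 2))

print(frac)      # ≈ 0.5
print(pearson)   # ≈ 0.707
```

Either normalization captures the key point: n1 and n2 are correlated because n1 is physically part of n2, while n1 and n3 are not.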

When there are multiple variables of interest in the system, it is convenient to use matrix notation. We write all the random processes of interest in a vector, for example

and then we can write the correlations as the expected value of a vector outer product, E { x(t)x H (t) }, where the H superscript indicates Hermitian transpose.

For example, we might write a time-varying correlation matrix as

Linear Systems and Noise

The examples in the preceding sections describe how to characterize purely static systems. Now we need to add some elements with memory, such as inductors and capacitors.

As a first example, consider adding a capacitor in parallel to the simple resistor, as shown in Figure 1-56.

Figure 1-56 A Simple RC Circuit

A sample of the noise process is shown in Figure 1-57.

Figure 1-57 Noise Process for a Simple RC Circuit

The noise looks different than the noise of the resistor alone, because the low-pass filter action of the RC circuit eliminates very high frequencies in the noise. However, we cannot see this effect simply by measuring E { n(t) 2 } as shown in Figure 1-58.

Figure 1-58 Expected Noise Power for an RC Circuit

The measurement of E { n(t) 2 } is independent of time for an RC circuit, just as it was for the resistor circuit.

Spectral Densities in Two Simple Circuits

Instead of expected noise power, let us look at the expected power density in the frequency domain. Let n( ω ) denote the Fourier transform of one sample of n(t) . Then, E { n( ω )n( ω )* } is the expected power spectral density, which we denote by S n ( ω ) .

In the present case, the capacitor has a pronounced effect on the spectral density. The figure below shows a computed power spectral density for the resistor thermal noise previously considered. The spectrum is essentially flat (some deviations occur because a finite number of samples was taken to perform the calculation). The flat spectrum represents the fact that in the resistor's noise, all frequencies are, in some statistical sense, equally present. We call such a process white noise.

Power Spectral Density for Resistor Thermal Noise

Figure 1-59 shows the spectrum of the noise process after filtering by the resistor-capacitor system.

Figure 1-59 Resistor-Capacitor Filtered Spectral Noise Process

It is easy to rigorously account for the effect of the RC-filter on the power spectrum of the noise signal. Suppose a random signal x is passed through a time-invariant linear filter with frequency-domain transfer function h( ω ) . Then the output is y( ω )=h( ω )x( ω ) .

Because expectation is a linear operator, we can easily relate the power spectral density of y, S_y(ω), to S_x(ω), the power spectral density of x, by using the definitions of y and power density. Specifically,

S_y(ω) = |h(ω)|^2 S_x(ω)
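The relation S_y(ω) = |h(ω)|^2 S_x(ω) is easy to verify numerically. The NumPy sketch below (illustrative only; the two-tap filter, sizes, and seed are arbitrary assumptions) filters white noise and compares the averaged output periodogram with |h(ω)|^2:

```python
import numpy as np

rng = np.random.default_rng(5)
N, trials = 512, 2000
h = np.array([1.0, 0.5])            # an arbitrary FIR filter standing in for h(w)
H = np.fft.fft(h, N)                # its frequency response on the FFT grid

Sy = np.zeros(N)
for _ in range(trials):
    x = rng.standard_normal(N)      # white input, so S_x(w) = 1
    Y = np.fft.fft(x) * H           # filtering applied in the frequency domain
    Sy += (Y.conj() * Y).real / N
Sy /= trials

# S_y(w) / S_x(w) should match |h(w)|^2 at every frequency
print(np.max(np.abs(Sy / np.abs(H) ** 2 - 1.0)))
```

The residual printed at the end is purely estimation noise and shrinks as the number of averaged realizations grows.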

The noise from the resistor can be considered to be generated by a noise current source i, with power density

S_i(ω) = 4kT/R

placed in parallel with the resistor. With the capacitor in parallel, the transfer function from the current source to the resistor voltage is just the impedance Z(ω),

Z(ω) = R / (1 + jωRC)

and so the noise voltage power density is

S_n(ω) = |Z(ω)|^2 S_i(ω) = 4kTR / (1 + (ωRC)^2)
Clearly the spectrum is attenuated at high frequencies and reaches a maximum near zero.
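Integrating this RC-filtered density over all frequencies gives the classic kT/C result. The sketch below checks this numerically, assuming the familiar one-sided density 4kT/R for the resistor's noise current; the component values and temperature are arbitrary assumptions:

```python
import numpy as np

k = 1.380649e-23        # Boltzmann constant, J/K
T = 300.0               # assumed temperature, K
R = 1e3                 # assumed resistance, ohms
C = 1e-12               # assumed capacitance, F

# One-sided voltage noise density across the RC pair:
#   S_v(f) = 4kTR / (1 + (2*pi*f*R*C)^2)
f = np.logspace(0, 14, 200_001)       # 1 Hz .. 100 THz, log spaced
Sv = 4 * k * T * R / (1 + (2 * np.pi * f * R * C) ** 2)

# Trapezoidal integration of the density over frequency
total = np.sum(0.5 * (Sv[1:] + Sv[:-1]) * np.diff(f))

print(total, k * T / C)    # both ≈ 4.14e-9 V^2: the classic kT/C result
```

Note that R cancels out of the integrated result: a larger resistance raises the density but lowers the bandwidth, so the total noise power is kT/C regardless of R.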

For a vector process, we may define a matrix of power-spectral densities,

The diagonal terms are simple real-valued power densities, and the off-diagonal terms are generally complex-valued cross-power densities between two variables. The cross-power density gives a measure of the correlation between the noise in two separate signals at a specific frequency. We may define a correlation coefficient as

It is often more useful to examine the correlation coefficient because the cross-power density may be small. As an example, consider a noiseless amplifier. The noise at the input is simply a scaled version of the noise at the output leading to a ρ =1 , but the cross-power density is much smaller than the output total noise power density if the amplifier has small gain.

In a numerical simulation it is important to compute only the correlation coefficient when the diagonal spectral densities are sufficiently large. If one of the power densities in the denominator of the correlation-coefficient definition is very small, then a small numerical error could lead to large errors in the computed coefficient, because of division by a number close to zero.

In the vector case, the transfer function is also a matrix H(ω), such that y(ω) = H(ω) x(ω), and so the spectral densities at the input and output are related by

S_yy(ω) = H(ω) S_xx(ω) H^H(ω)
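The congruence relation S_yy = H S_xx H^H can be checked with a small Monte Carlo experiment. The sketch below (illustrative only; the 2x2 system, input density matrix, and seed are arbitrary assumptions) treats a single frequency point with a frequency-independent H:

```python
import numpy as np

rng = np.random.default_rng(7)
H = np.array([[1.0, 0.5],
              [0.2, 2.0]])           # a 2x2 system at one frequency point
Sxx = np.array([[1.0, 0.3],
                [0.3, 2.0]])         # assumed input spectral-density matrix

# Predicted output spectral-density matrix: S_yy = H S_xx H^H
Syy_pred = H @ Sxx @ H.conj().T

# Monte Carlo check: draw x with covariance Sxx, form y = H x,
# and estimate E{y y^H} from many samples.
L = np.linalg.cholesky(Sxx)
x = L @ rng.standard_normal((2, 200_000))
y = H @ x
Syy_est = (y @ y.conj().T) / x.shape[1]

print(np.max(np.abs(Syy_est - Syy_pred)))   # small estimation residual
```

The diagonal entries of Syy_pred are the output power densities; the off-diagonal entries are the cross-power densities between the two outputs.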

Time-Varying Systems and the Autocorrelation Function

If all the sources of noise in a system are resistors, and the circuit consists strictly of linear time-invariant elements, then the matrix of spectral densities S_xx(ω) is sufficient to describe the noise. However, most interesting RF circuits contain nonlinear elements driven by time-varying signals. This introduces time-varying noise sources as well as time-varying filtering. Because most noise sources are small and generate small perturbations to the circuit behavior, for purposes of noise analysis most RF circuits can be effectively modeled as linear time-varying systems. The simple matrix of power spectra is not sufficient to describe these systems.

To see this, return to the simple resistor example. Suppose that a switch is connected between the resistor and the voltage measuring device, as shown in Figure 1-60.

Figure 1-60 Simple Time-Varying Circuit with Switch

Further suppose that the switch is periodically opened and closed. When the switch is open, there is no noise measured. When the switch is closed, the thermal noise is seen at the voltage output. A typical noise waveform is shown on the bottom left in Figure 1-61.

Figure 1-61 Typical Waveforms for Noise, Noise Power and White Noise

The time-varying noise power E { n(t) 2 } can be computed and is shown in Figure 1-61 on the top left, above the time-varying noise waveform. The expected power periodically switches between zero and the value expected from the resistor noise. This is different than the resistor-only and resistor-capacitor systems considered previously. Indeed, no linear time-invariant system could create this behavior. However, if we examine the power spectrum on the right in Figure 1-61, we again find that it is flat, corresponding to white noise.
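The switched-resistor behavior is simple to reproduce numerically. The NumPy sketch below (illustrative only; unit noise power, a 50% duty cycle, and the sizes and seed are assumptions) gates white noise with a periodic switch, then estimates both the time-varying power and the averaged spectrum:

```python
import numpy as np

rng = np.random.default_rng(6)
T, n_periods, trials = 64, 32, 400
gate = (np.arange(T) < T // 2).astype(float)   # switch closed for half of each period
g = np.tile(gate, n_periods)
L = g.size

power = np.zeros(L)
spectrum = np.zeros(L)
for _ in range(trials):
    n = g * rng.standard_normal(L)             # resistor noise gated by the switch
    power += n ** 2
    Nf = np.fft.fft(n)
    spectrum += (Nf.conj() * Nf).real / L
power /= trials
spectrum /= trials

pw = power.reshape(n_periods, T).mean(axis=0)  # fold onto one switch period
print(pw[:T // 2].mean(), pw[T // 2:].mean())  # ≈ 1.0 (closed) and exactly 0.0 (open)
print(spectrum.mean(), spectrum.std())         # flat at ~0.5 (the duty cycle): white
```

This reproduces the apparent paradox in Figure 1-61: the expected power E{n(t)^2} switches periodically between zero and the resistor value, yet the averaged spectrum remains flat.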

The Autocorrelation Function

At this point it is clear that E { n(t) } and E { n(t) 2 } do not completely specify the random process n(t) , nor does the power spectral density. To obtain a complete characterization, consider measuring n(t) at two different timepoints, t 1 and t 2. n(t 1 ) and n(t 2 ) are two separate random variables. They may be independent of each other, but in general they have some correlation. Therefore, to completely specify the statistical characteristics of n(t 1 ) and n(t 2 ) together, we must specify not only the variances E { n(t 1 ) 2 } and E { n(t 2 ) 2 }, but also the covariance E { n(t 1 )n(t 2 ) }. In fact because n(t) has infinite dimension, an infinite number of these correlations must be specified to characterize the entire random process. The usual way of doing this is by defining the autocorrelation function R n (t,t+ τ ) = E { n(t)n(t+ τ ) }.

If x(t) is a vector process,

then we define the autocorrelation matrix as

where superscript H indicates Hermitian transpose.

The diagonal term gives the autocorrelation function for a single entry of the vector, e.g., E { x 1 (t)x 1 (t+ τ ) }. For τ = 0 , this is the time-varying power in the single process, e.g., E { x 1 (t) 2 }. If the process x(t) is Gaussian, it is completely characterized by its autocorrelation function R x (t, t+ τ ) because all the variances and covariances are then specified.

We can also now precisely define what it means for a process to be time-independent, or stationary: a stationary process is one whose autocorrelation function is a function of τ only, not of t . This means not only that the noise power E { n(t) 2 } is independent of t , but also that the correlation of the signal at one time point with the signal at another time point depends only on the difference between the time points, τ. The white noise generated by the resistor, and the RC-filtered noise, are both stationary processes.

Connecting Autocorrelation and Spectral Densities

At different points in the discussion above it was claimed that the expected time-varying power E { n(t) 2 } of the resistor voltage is constant in time, and also the power density S n ( ω ) is constant in frequency. At first this seems odd because a quantity that is broad in time should be concentrated in frequency, and vice versa.

The answer comes in the precise relation of the spectral density to the autocorrelation function. Indeed, it turns out that the spectral density is the Fourier transform of the autocorrelation function, but with respect to the variable τ, not with respect to t . In other words, the measured spectral density is related to the correlation of a random process with time-shifted versions of itself. Formally, for a stationary process ( R n (t, t+ τ ) = R n ( τ ) ) we write

For example, in the resistor-capacitor system considered above, we can calculate the autocorrelation function R n ( τ ) by an inverse Fourier transform of the power spectral density, with the result

Inspecting this expression shows that adding a capacitor to the system creates memory. The random current generated by the thermal noise of the resistor has no memory of itself, so the currents at separate time instants are not correlated. However, if the current source adds a small amount of charge to the capacitor, the charge takes a finite amount of time to discharge through the resistor, creating a voltage. The voltage at one time instant is therefore correlated with the voltage some time later, because part of the voltage at the two separated time instants is due to the same bit of added charge. From the autocorrelation function it is clear that the correlation effects last only as long as it takes any particular bit of charge to decay, in other words, a few times the RC time constant of the resistor-capacitor system.
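The exponential decay of the RC-filtered autocorrelation can be checked numerically. The sketch below discretizes the RC filter driven by white noise as a first-order AR(1) recursion (a standard approximation; the step size and time constant are illustrative) and compares the measured autocorrelation against exp(−|τ|/RC):

```python
import numpy as np

rng = np.random.default_rng(2)

# RC low-pass driven by white noise, discretized as an AR(1) recursion
#   v[k+1] = a*v[k] + w[k],  a = exp(-dt/RC).
# Its normalized autocorrelation decays as a**lag = exp(-lag*dt/RC).
dt, rc = 0.01, 0.1
a = np.exp(-dt / rc)
n_samples = 200_000

w = rng.standard_normal(n_samples)
v = np.empty(n_samples)
v[0] = 0.0
for k in range(n_samples - 1):
    v[k + 1] = a * v[k] + w[k]

# Sample autocorrelation at a few lags; a time average is valid here
# because the filtered process is stationary.
lags = np.array([0, 5, 10, 20])
R = np.array([np.mean(v[: n_samples - 50] * v[lag : n_samples - 50 + lag])
              for lag in lags])
R_norm = R / R[0]
expected = a ** lags          # i.e. exp(-lag*dt/RC)
```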

Note that the process is still stationary because this memory effect depends only on how long has elapsed since the bit of charge has been added, or rather how much time the bit of charge has had to dissipate, not the absolute time at which the charge is added. Charge added at separate times is not correlated because arbitrary independent amounts can be added at a given instant. In particular, the time-varying noise power,

Time-Varying Systems and Frequency Correlations

Now we have seen that the variation of the spectrum in frequency is related to the correlations of the process in time. We might logically expect that, conversely, variation of the process in time (that is, non-stationarity) might have something to do with correlations of the process in frequency. To see why this might be the case, suppose we could write a random process x as a sum of complex exponentials with random coefficients,

Noting that c -k =c k *, the time-varying power in the process is

and it is clear that E { x(t) 2 } is constant in time if and only if

In other words, the coefficients of expansion of sinusoids of different frequencies must be uncorrelated. In general, a stationary process is one whose frequency-domain representation contains no correlations across different frequencies.
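The equivalence between stationarity and uncorrelated frequency components can be illustrated with a two-tone toy process (the frequencies and trial counts are arbitrary choices): independent random phases give a time-constant expected power, while forcing the two phases to be equal makes the power beat in time.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-tone random process x(t) = cos(w1*t + phi1) + cos(w2*t + phi2).
# Independent random phases (uncorrelated frequency components) give a
# time-constant power E{x(t)^2}; locking phi2 = phi1 (perfect correlation)
# makes the power beat in time at the difference frequency w1 - w2.
trials = 10000
t = np.linspace(0.0, 20.0, 200)
w1, w2 = 3.0, 4.0

phi1 = rng.uniform(0, 2 * np.pi, size=(trials, 1))
phi2 = rng.uniform(0, 2 * np.pi, size=(trials, 1))

x_uncorr = np.cos(w1 * t + phi1) + np.cos(w2 * t + phi2)
x_corr = np.cos(w1 * t + phi1) + np.cos(w2 * t + phi1)   # phi2 = phi1

p_uncorr = np.mean(x_uncorr**2, axis=0)   # ~1 at every t: stationary
p_corr = np.mean(x_corr**2, axis=0)       # ~1 + cos((w1 - w2)*t): beats
```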

To see how frequency correlations might come about, let us return to the resistor-switch example. Let n(t) denote the voltage noise on the resistor, and h(t) the action of the switch, so that the measured voltage is given by v(t) = h(t)n(t) , where h(t) is periodic with period T and frequency

The time-domain multiplication of the switch becomes a convolution in the frequency domain,

where ⊗ denotes convolution.

Because h(t) is periodic, its frequency-domain representation is a series of Dirac deltas,

and so

and the spectral power density is simply

Because the process n is stationary, this reduces to

Because S n ( ω ) is constant in frequency, S v ( ω ) is also.

However, the process v is no longer stationary because frequencies separated by multiples of ω0 have been correlated by the action of the time-varying switch. We may see this effect in the time-variation of the noise power, as in Figure 1-61, or we may examine the correlations directly in the frequency domain.

To do this, we introduce the cycle spectra

that are defined by

and are a sort of cross-spectral density, taken between two separate frequencies. S 0 ( ω ) is just the power spectral density we have previously discussed. In fact we can define a frequency-correlation coefficient as

and if

then the process n has frequency content at ω and ω + α that is perfectly correlated.

Consider separating out a single frequency component of a random process and multiplying by a sinusoidal waveform of frequency α, as shown in Figure 1-62. The component at ω is shifted to re-appear at ω + α and ω − α. The new process's frequency components at ω − α and ω + α are deterministically related to the components of the old process located at ω. Therefore, they are correlated, and S 2 α ( ω ) is non-zero.

Figure 1-62 Time-Variation Introduces Frequency Correlation

Physically, to form a waveform with a defined shape in time, the different frequency components of the signal must add in a coherent, or correlated, fashion. In a process like thermal noise, the Fourier coefficients at different frequencies have phases that are randomly distributed with respect to each other, and the Fourier components can only add incoherently. Their powers add, rather than their amplitudes. Frequency correlation and time-variation of statistics are thus seen to be equivalent concepts.
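The frequency correlations introduced by the switch can be measured directly from an ensemble of FFTs. In the sketch below (the same idealized switched-noise model as before; all sizes are illustrative), bins separated by the switch frequency are strongly correlated (for a 50% duty square-wave switch, the correlation coefficient at the fundamental separation works out to 2/π), while other separations, and the underlying stationary noise, show none.

```python
import numpy as np

rng = np.random.default_rng(4)

# Gating white noise with a periodic switch correlates frequency
# components separated by multiples of the switch frequency.
trials, n_samples, period = 4000, 1024, 64
m0 = n_samples // period                 # switch frequency, in FFT bins

n = rng.standard_normal((trials, n_samples))
h = (np.arange(n_samples) % period) < period // 2
v = n * h                                # cyclostationary switched noise
V = np.fft.fft(v, axis=1)

def freq_corr(X, m, offset):
    """Normalized correlation between FFT bins m and m + offset."""
    s = np.mean(X[:, m] * np.conj(X[:, m + offset]))
    p1 = np.mean(np.abs(X[:, m])**2)
    p2 = np.mean(np.abs(X[:, m + offset])**2)
    return np.abs(s) / np.sqrt(p1 * p2)

m = 200
rho_fund = freq_corr(V, m, m0)       # separated by the switch frequency
rho_other = freq_corr(V, m, 7)       # arbitrary separation: ~0
rho_stationary = freq_corr(np.fft.fft(n, axis=1), m, m0)   # stationary: ~0
```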

Another way of viewing the cycle spectra is that they represent, in a sense, the two-dimensional Fourier transform of the autocorrelation function, and are therefore just another way of expressing the statistics of the process.

Time-Varying Noise Power and Sampled Systems

Again supposing the signal n to be cyclostationary with period T , for each sample phase ξ ∈ [0,T) , we may define the discrete-time autocorrelation function

to be

Because the autocorrelation function R n of the cyclostationary process is periodic, by inspection

is independent of p and thus stationary, that is

Note that

gives the expected noise power, R n ( ξ , ξ ) , for the signal at phase ξ. Plotting R ξ (0) versus ξ shows how the noise power varies periodically with time.

The discrete-time process

can be described in the frequency-domain by its discrete Fourier transform,

Note that the spectrum of the discrete (sampled) process

is periodic in frequency with period 1/ T .

All noise power is aliased into the Nyquist interval [ −1/ 2T, 1/ 2T] (or, equivalently, the interval [0, 1/ T] ). Generally, it is the noise spectrum that is available from the circuit simulator. To obtain the autocorrelation function or the time-varying noise power, an inverse Fourier integral must be calculated by

This is what the current SpectreRF pnoise analysis computes.

Summary

S xx ( ω ) is constant if and only if x is a white noise process, that is, if R xx ( τ ) = R δ ( τ ) and there are no correlations in time for the process.

The cross-power densities of two variables x i and x j are

If and only if the two variables have zero correlation at that frequency, then

A correlation coefficient may be defined as

and ρ ij (f) ∈ [0,1] .
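The cross-spectral correlation coefficient can be estimated the same way as the cycle spectra above. A minimal sketch (sizes arbitrary) for two processes that share a common white-noise component plus an independent one, for which the coefficient should land strictly between 0 and 1:

```python
import numpy as np

rng = np.random.default_rng(5)

# rho_ij(f) = |S_ij(f)| / sqrt(S_ii(f) * S_jj(f)) for x1 = a and
# x2 = a + b, where a and b are independent white noise processes of
# equal power. The shared component a gives rho = 1/sqrt(2).
trials, n_samples = 4000, 512
a = rng.standard_normal((trials, n_samples))
b = rng.standard_normal((trials, n_samples))

X1 = np.fft.rfft(a, axis=1)
X2 = np.fft.rfft(a + b, axis=1)

m = 100                               # an arbitrary frequency bin
s12 = np.abs(np.mean(X1[:, m] * np.conj(X2[:, m])))
s11 = np.mean(np.abs(X1[:, m])**2)
s22 = np.mean(np.abs(X2[:, m])**2)
rho = s12 / np.sqrt(s11 * s22)        # ~1/sqrt(2) for equal powers
```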

The cycle-spectra

represent correlations between frequencies separated by the cycle-frequency α.

For a single process x i, a correlation coefficient may be defined as

and

for all ω and all α ≠ 0 , that is, if there are no correlations in frequency for the process.

In other words,

for all α ≠ m ω0 for some ω0 and integer m . Frequencies separated by m ω0 are correlated. A stationary process passed through a periodically time-varying linear filter is, in general, cyclostationary, with ω0 the fundamental harmonic of the filter.

We might also compute correlations between different nodes at different frequencies, with the obvious interpretation and generalization of the correlation coefficients.

Oscillator Noise Analysis

In RF systems, local oscillator phase noise can limit the final system performance. Spectre® circuit simulator RF analysis (SpectreRF) lets you rigorously characterize the noise performance of oscillator elements. This appendix explains phase noise, tells how it occurs, and shows how to calculate phase noise using the SpectreRF simulator.

“Phase Noise Primer” discusses how phase noise occurs and provides a simple illustrative example.

“Models for Phase Noise” contains mathematical details about how the SpectreRF simulator calculates noise and how these calculations are related to other possible phase noise models. You can skip this section without any loss of continuity, but this section can help you better understand how SpectreRF calculates phase noise and better appreciate the drawbacks and pitfalls of other simple phase noise models. This section can also help in debugging difficult circuit simulations.

“Calculating Phase Noise” provides some suggestions for successful and efficient analysis of oscillators and discusses the limitations of the simulator.

“Troubleshooting Phase Noise Calculations” explains troubleshooting methods for difficult simulations.

“Frequently Asked Questions” answers some commonly asked questions about phase noise and the SpectreRF simulator.

“Further Reading” and “References” list additional sources of information on oscillator noise analysis.

The procedures included in this appendix are intended for SpectreRF users who analyze oscillator noise. You must have a working familiarity with SpectreRF simulation and its operating principles. In particular, you must understand the SpectreRF PSS and Pnoise analyses. For information, see Spectre Circuit Simulator RF Analysis Theory.

Phase Noise Primer

Consider the simple resonant circuit with a feedback amplifier shown in Figure 1-63, a parallel LC circuit with nonlinear transconductance. At small capacitor voltages, the transconductance is negative, and the amplifier is an active device that creates positive feedback to increase the voltage on the capacitor. At larger voltages, where the transconductance term goes into compression, the amplifier effectively acts as a positive resistor (with negative feedback) and limits the capacitor voltage.

Figure 1-63 A Simple Resonant Oscillator

A simple model for the nonlinear transconductance is a cubic polynomial. We hypothesize a nonlinear resistor with a current-voltage relation given by

The effect of the resistor in parallel with the inductor and the capacitor can be lumped into this transconductance term. The parameter is a measure of the strength of the nonlinearity in the transconductance relative to the linear part of the total transconductance. Because the signal amplitude grows until the nonlinearity becomes significant, the value of this parameter does not affect the qualitative operation of the circuit.

For simplicity, for the remainder of this appendix

After some renormalization of variables, where time is scaled by

with

and current is scaled by

You can write the differential equations describing the oscillator in the following form

and

In these equations, v and i represent the normalized capacitor voltage and inductor current, respectively, ξ ( t ) is a small-signal excitation such as white Gaussian noise, and Q = R / ( ω0 L ) is the quality factor of the RLC circuit made by replacing the nonlinear transconductance with a positive resistance R .

The equations just discussed describe the familiar Van der Pol oscillator system. This model includes many of the qualitative aspects of oscillator dynamics, yet it is simple enough to analyze in detail. Many more complicated oscillators that operate in a weakly nonlinear mode can be approximated with this model by using the first few terms in the Taylor series expansion of the relevant transconductances.
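The weakly nonlinear behavior described above can be verified by direct integration. The sketch below integrates one common normalized form of the Van der Pol equation, x'' − ε(1 − x²)x' + x = 0, with a fixed-step RK4 integrator (the value of ε, the step size, and the run length are illustrative choices); the oscillation grows from a small initial kick and settles to the amplitude of about 2 predicted by the weakly nonlinear analysis later in this appendix.

```python
import numpy as np

# Normalized Van der Pol equation x'' - eps*(1 - x^2)*x' + x = 0,
# integrated with fixed-step RK4. For small eps the steady oscillation
# has amplitude ~2 and frequency ~1.
eps = 0.1

def f(state):
    x, y = state
    return np.array([y, eps * (1.0 - x * x) * y - x])

def rk4_step(state, dt):
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt = 0.01
state = np.array([0.5, 0.0])         # small initial kick; oscillation grows
xs = []
for step in range(60000):            # t = 0 .. 600
    state = rk4_step(state, dt)
    if step >= 40000:                # keep only the settled portion
        xs.append(state[0])

amplitude = np.max(np.abs(xs))       # ~2 for small eps
```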

As a brute-force method of calculating the noise properties of this circuit, the nonlinear stochastic differential equations that describe the current and voltage processes were numerically integrated [1], and the noise power was obtained using a standard FFT-periodogram technique. This technique requires several hundred simulations of the oscillator over many thousands of periods. Consequently, it is not a feasible approach for practical circuits, but it is rigorously correct in its statistical description and requires no knowledge of the properties of oscillators, noise, periodicity, or signal amplitudes. Figure 1-64 shows the total time-averaged noise in the voltage variable.

Figure 1-64 Noise in a Simple Van der Pol System

Figure 1-64 plots the power spectral density against the normalized noise frequency offset for a Q = 5 system.

The left half of Figure 1-64 shows noise as a function of absolute frequency.

The right half of Figure 1-64 shows noise as a function of frequency offset from the oscillator fundamental frequency.

The dashed line is LC-filtered white noise, the dash-dot line is RLC-filtered white noise, the solid line is SpectreRF phase noise, and (x) marks are noise power from a full nonlinear stochastic differential equation solution.

The resulting noise power spectral density looks much like the voltage response of a parallel LC circuit driven by a current source. The oscillator in steady state, however, does not look like an LC circuit. As you see in the following paragraphs, this similarity in the noise characteristic occurs because both systems have an infinite number of steady-state solutions.

The characteristic shape of the small-signal response of an LC circuit results because an excitation at the precise resonant frequency can introduce a drift in the amplitude or phase of the oscillation. The magnitude of this drift grows with time and is potentially unbounded. In the frequency domain, this drift appears as a pole on the imaginary axis at the resonant frequency. The response is unbounded because no restoring force acts to return the amplitude or phase of the oscillation to any previous value, and perturbations can therefore accumulate indefinitely.

Similarly, phase noise exists in a nonlinear oscillator because an autonomous oscillator has no time reference. A solution to the oscillator equations that is shifted in time is still a solution. Noise can induce a time shift in the solution, and this time shift looks like a phase change in the signal (hence the term phase noise). Because there is no resistance to change in phase, applying a constant white noise source to the signal causes the phase to become increasingly uncertain relative to the original phase. In the frequency domain, this corresponds to the increase of the noise power around the fundamental frequency.

If the noise perturbs the signal in a direction that does not correspond to a time shift, the nonlinear transconductance works to put the oscillator back on the original trajectory. This is similar to AM noise. The signal uncertainty created by the amplitude noise remains bounded and small because of the action of the nonlinear amplifier that created the oscillation. The LC circuit operates differently. It lacks both a time (or phase) reference and an amplitude reference and therefore can exhibit large AM noise.

Another explanation of the similarity between the oscillator and the LC circuit is that both are linear systems that have poles on the imaginary axis at the fundamental frequency, ω0. That is, at the complex frequencies s = ± i ω0. However, the associated transfer functions are not the same. In fact, because of the time-varying nature of the oscillator circuit, multiple transfer functions must be considered in the linear time-varying analysis.

Understanding the qualitative behavior of linear and nonlinear oscillators is the first step towards a complete understanding of oscillator noise behavior. Further understanding requires more quantitative comparisons that are presented in Models for Phase Noise. If you are not interested in these mathematical details, you might skip ahead to “Calculating Phase Noise”.

Models for Phase Noise

This section considers several possible models for noise in oscillators. In the engineering literature, the most widespread model for phase noise is the Leeson model [2]. This heuristic model is based on qualitative arguments about the nature of noise processes in oscillators. It shares some properties with the LC circuit models presented in the previous section. These models fit well with an intuitive understanding of oscillators as resonant RLC circuits with a feedback amplifier. In the simplest treatment, the amplifier is considered to be a negative conductance whose value is chosen to cancel any positive real impedance in the resonant tank circuit. The resulting linear time-invariant noise model is easy to analyze.

Linear Time-Invariant (LTI) Models

To calculate the noise in a parallel RLC configuration, the noise of the resistor is modeled as a parallel current source of power density

where kB is Boltzmann’s constant. In general, if current noise excites a linear time-invariant system, then the noise power density produced in a voltage variable is given by [3] as follows

where H (ω) is the transfer function of the LTI transformation from the noise current source input to the voltage output. The transfer function is defined in the standard way to be

where i s is a (deterministic) current source and v 0 is the measured voltage between the nodes of interest.

It follows that the noise power spectral density of the capacitor voltage in the RLC circuit is, at noise frequency ω = ω0 + ω′ with ω′ << ω0.

where the quality factor of the circuit is

The parallel resistance is R (the source of the thermal noise), and ω0 is the resonant frequency.

If a noiseless negative conductance is added to precisely cancel the resistor loss, the noise power for small ω′ / ω0 becomes

This linear time-invariant viewpoint explains some qualitative aspects of phase noise, especially the (ω0 / Q ω′) 2 dependence. However, even for this simple system, a set of complicating arguments is needed to extract approximately correct noise from the LTI model. In particular, we must explain the 3 dB of excess amplitude noise inside the resonant bandwidth generated by an LC model but not by an oscillator (see “Amplitude Noise and Phase Noise in the Linear Model”). Furthermore, many oscillators, such as relaxation and ring oscillators, do not naturally fit this linear time-invariant model. Most oscillators are better described as linear time-varying (LTV) circuits because many phenomena, such as upconversion of 1 /f noise, can only be explained by time-varying models.
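The (ω0 / Q ω′)² shape follows directly from the tank impedance. The following numerical check (the element values are arbitrary illustrations) compares the exact impedance magnitude of the lossless tank, obtained after the resistor loss is canceled, with the near-resonance approximation R ω0 / (2Q ω′):

```python
import numpy as np

# With the resistor loss canceled, the impedance seen by the noise
# current in a parallel LC tank is Z(w) = 1 / (j*(w*C - 1/(w*L))).
# Near resonance, |Z(w0 + dw)| ~ R*w0/(2*Q*dw) with Q = R*w0*C, which
# gives the characteristic (w0/(2*Q*dw))^2 noise power shape.
L, C, R = 1e-6, 1e-9, 1e3                 # illustrative element values
w0 = 1.0 / np.sqrt(L * C)
Q = R * w0 * C

dws = w0 * np.array([1e-4, 1e-3, 1e-2])   # small offsets from resonance
w = w0 + dws
z_exact = 1.0 / np.abs(w * C - 1.0 / (w * L))
z_approx = R * w0 / (2.0 * Q * dws)
rel_err = np.abs(z_exact - z_approx) / z_approx   # shrinks with dw/w0
```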

Linear Time-Varying (LTV) Models

For linear time-invariant systems, the noise at a frequency ω is directly due to noise sources at that frequency. The relative amplitudes of the noise at the system outputs and the source noise are given by the transfer functions from noise sources to the observation point. Time-varying systems exhibit frequency conversion, however, and each harmonic k ω0 in the oscillation can transfer noise from a frequency ω ± k ω0 to the observation frequency ω. In general, for a stationary noise source ξ( t ), the total observed noise voltage is [3]

Each term in the series represents conversion of current power density at frequency ω + k ω0 to voltage power density at frequency ω with gain | H k ( ω )| 2. As an example, return again to the Van der Pol oscillator with α = 1/3 and notice how a simple time-varying linear analysis of noise proceeds.

The first analysis step for the Van der Pol oscillator is to obtain a large-signal solution, so you set ξ( t ) = 0 . In the large-Q limit, the oscillation is nearly sinusoidal and so it is a good approximation to assume the following

The amplitude, a, and oscillation frequency can be determined from the differential equations that describe the oscillator. Recognizing that

and substituting into the equation for dv/dt , a and ω0 are determined by the following

Substituting

and using the orthogonality of the sine and cosine functions, it follows that

and

(The sin(3 ω0 t ) term is relevant only when we consider higher-order harmonics of the oscillation.) Therefore, to the lowest order of approximation, a = 2 and ω0 = 1 .

The only nonlinear term in the Van der Pol equations is the current-voltage term, v 3 /3 . This term differentiates the Van der Pol oscillator from the LC circuit. The small-signal conductance is the derivative with respect to voltage of the nonlinear current

With

the small-signal conductance as a function of time is

Because there is a nonzero, time-varying, small-signal conductance, the Periodic Time Varying Linear (PTVL) model is different from the LTI LC circuit model. In fact, the time-average conductance is not even zero. However, the time-average power dissipated by the nonlinear current source is zero, a necessary condition for stable, sustained oscillation.
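These two claims about the Van der Pol conductance are easy to check numerically. Assuming the cubic law i(v) = −v + v³/3 and the large-signal solution v(t) = 2 sin t discussed in the text:

```python
import numpy as np

# For i(v) = -v + v**3/3 with v(t) = 2*sin(t), the small-signal
# conductance g(t) = di/dv = -1 + v(t)**2 = 1 - 2*cos(2t) has a nonzero
# time average, yet the average dissipated power <i(t)*v(t)> is zero,
# as required for stable, sustained oscillation.
t = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)  # one full period
v = 2.0 * np.sin(t)
i = -v + v**3 / 3.0

g = -1.0 + v**2                  # time-varying small-signal conductance
avg_g = g.mean()                 # -> 1, not 0
avg_power = (i * v).mean()       # -> 0
```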

Oscillators are intrinsically time-varying elements because they trade off excessive gain during the low-amplitude part of the cycle with compressive effects during the remainder of the cycle. This effect is therefore a generic property not unique to this example.

To complete the noise analysis, write the differential equations that the small-signal solution i s(t), v s(t) must satisfy,

and

From the large signal analysis, v ( t ) = 2sin t , and so

and

The time-varying conductance can mix voltages from a frequency ω to ω - 2 . For small ω´, if an excitation is applied at a frequency ω = 1 + ω´, i s and v s are expected to have components at 1 + ω´ and − 1 + ω´ for the equations to balance. (Higher-order terms are again presumed to be small.) Writing

and substituting into the small-signal equations with

leads to the following system of equations for i + and i −

Solving these equations gives the transfer function from an excitation at frequency 1 + ω´ to the small-signal at frequency 1 + ω´, which we call H 0(ω´). A similar analysis gives the other significant transfer function, from noise at frequency − 1 + ω´ of amplitude c − to the small-signal response at frequency 1 + ω´, which we call H -2(ω´). In the present case, for small ω´,

For a general Van der Pol circuit with a parallel resistor R that generates white current noise, ξ ( t ) , with S ξ(ω) = 4 k B T / R ,

Note that this is precisely one-half the noise predicted by the LC model.

You can gain additional insight about phase noise by analyzing the time-domain small-signal response. The small-signal current response is

Notice that c + and c - are complex random variables that represent the relative contribution of white noise at separate frequencies. As white noise has no frequency correlations, they have uncorrelated random phase, and thus zero amplitude expectation, and unit variance in amplitude. Because the large-signal current is i ( t ) = 2cos t , and the sine and cosine functions are orthogonal, the total noise for small ω´ that we computed is essentially all phase noise.

Amplitude Noise and Phase Noise in the Linear Model

Occasional claims are made that in oscillators, “Half the noise is phase noise and half the noise is amplitude noise.” However, as the simple time-varying analysis in the previous section shows, in a physical oscillator the noise process is mostly phase noise for frequencies near the fundamental. It is true that in an LC-circuit half the total noise power corresponds to AM-like modulation and the other half to phase modulation. In the literature, the AM part of the noise is sometimes disregarded when quoting the oscillator noise although this is not always the case. (The SpectreRF simulator computes the total noise generated by the circuit; see “Details of the SpectreRF Calculation”).

However, a linear oscillator does not really exist. Physical oscillators operate with a tradeoff of gain that causes growing signal strength and nonlinear compressive effects that act to limit the signal amplitude. For noise calculation, the oscillator cannot be considered a linear time-invariant system because there are intrinsic nonlinear effects that produce large phase noise but limited amplitude noise. Oscillators are time-varying, and they therefore require a time-varying small-signal analysis.

Arguments that start with stationary white noise and pass it through a linear model in a forward-analysis fashion produce incorrect answers. This is true because they neglect the time-variation of the conductances (and possibly the capacitances) in the circuit. In the simple cases considered here, the conductances vary in time in a special way so as to produce no amplitude noise, only phase noise.

They have that special variation because they result from linearization about an oscillator limit cycle. An oscillator in a limit cycle has a large response to phase perturbations, but not to amplitude perturbations. The amplitude perturbations are limited by the properties of the nonlinear amplifier, but the phase perturbations can persist. The SpectreRF simulator calculates the correct phase noise because it knows about the oscillator properties.

Similarly, arguments [13] that start with noise power and derive phase noise in a backwards fashion also usually produce incorrect results because they cannot correctly account for frequency correlations in the noise of the oscillator. These frequency correlations are introduced by the time-varying nature of the circuit.

Occasionally, a netlist appears in which a negative resistance precisely cancels a positive resistance to create a pure LC circuit. Because such a circuit has an infinite number of oscillation modes, the SpectreRF simulator cannot correctly calculate the noise because it assumes a unique oscillation. Such a circuit is not physically realizable because adding or subtracting a microscopically small amount of conductance makes the circuit either go into nonlinear operation (amplifier saturation) or become a damped LC circuit that has a unique final equilibrium point. This equilibrium point is the zero-state solution. Trying to create the negative resistance oscillator is like trying to bias a circuit on a metastable point. Any amplitude oscillation can exist, depending on the initial conditions, as long as the amplitude is less than the amplifier saturation point.

Details of the SpectreRF Calculation

This section contains the mathematical details of how the SpectreRF simulator computes noise in oscillators. Understanding the material in this section can help you troubleshoot and understand difficult oscillator problems.

The analysis the SpectreRF simulator performs is similar to the simple analysis in the section “Linear Time-Varying (LTV) Models”. During analysis, the SpectreRF simulator

  1. First finds the periodic steady state of the oscillator using the PSS analysis.
  2. Then linearizes around this trajectory.
  3. Uses the resulting time-varying linear system to calculate the noise power density.

The primary difference between the SpectreRF calculation and the previous analysis is that the basis functions used for the SpectreRF calculation are not just a few sinusoids, but rather a collection of many piecewise polynomials. The use of piecewise polynomials allows the SpectreRF simulator to solve circuits with arbitrary waveforms, including circuits with highly nonlinear behavior.

Noise computations are usually performed with a small-signal assumption, but a rigorous small-signal characterization of phase noise is complicated because the variance in the phase of the oscillation grows unbounded over time. From a mathematical viewpoint, an oscillator is an autonomous system of differential equations with a stable limit cycle. An oscillator has phase noise because it is neutrally stable with respect to noise perturbations that move the oscillator in the direction of the limit cycle. Such phase perturbations persist with time, whereas transverse fluctuations are damped with a characteristic time inversely proportional to the quality factor of the oscillator.

Further care is necessary because, in general, the two types of excitations (those that create phase slippage and those responsible for time-damped fluctuations) are not strictly those that are parallel or perpendicular, respectively, to the oscillator trajectory, as is sometimes claimed (for example, in [4]).

However, one must realize that the noise powers at frequencies near the fundamental frequency correspond to correlations between points that are widely separated on the oscillator envelope. In other words, they are long-time signal effects. In fact, asymptotically (at long times), the ratio of the variance of any state variable to its power at the fundamental frequency is unity for any magnitude of the noise excitation. Therefore, in practical cases, you can consider only small deviations in the state variables when describing the phase noise.

The first step in the noise analysis is to determine the oscillator steady-state solution. This is done in the time domain using Shooting methods [5]. After the periodic steady-state is obtained, the circuit equations are linearized around that waveform in order to perform the small-signal analysis.

The time-varying linear system describing the small-signal response vs( t ) of the oscillator to a signal w ( t ) can be written in general form as [6, 7]

where C( t ) and G( t ) represent the linear, small-signal, time-varying capacitance and conductance matrices, respectively. These matrices are obtained by linearization about the periodic steady-state solution (the limit cycle). To understand the nature of time-varying linear analysis, the concept of Floquet multipliers is introduced.

Suppose x ( t ) is a solution to the oscillator circuit equations that is periodic with period T . If x (0) is a point on the periodic solution x L ( t ) , then x ( T ) = x (0) . If x (0) is perturbed slightly off the periodic trajectory, x (0) = x L (0) + δ x , then x ( T ) is also perturbed, and in general for small δ x ,

The Jacobian matrix

is called the sensitivity matrix. The SpectreRF simulator uses an implicit representation of this matrix both in the Shooting method that calculates the steady-state and in the small-signal analyses. To see how the sensitivity matrix relates to oscillator noise analysis, consider the effect of a perturbation at time t = 0 several periods later, at t = nT . From the above equation,

so

where φi is an eigenvector of the sensitivity matrix. The C i are the expansion coefficients of δ x in the basis of φi. If ψi is a left eigenvector (an eigenvector of its transpose) of the sensitivity matrix, then

Let λ be an eigenvalue of the sensitivity matrix. In the context of linear time-varying systems, the eigenvalues λ are called Floquet multipliers. If all the λ have magnitude less than one (corresponding to left-half-plane poles), the perturbation decays with time and the periodic trajectory is stable. If any λ has a magnitude greater than one, the oscillation cannot be linearly stable because small perturbations soon force the system away from the periodic trajectory x L ( t ) .

A stable nonlinear physical oscillator, however, must be neutrally stable with respect to perturbations that move it along the orbit. These are not necessarily perturbations in the direction of the orbit because, in general,

This is true because a time-shifted version of the oscillator periodic trajectory still satisfies the oscillator equations. In other words, one of the Floquet multipliers must be equal to unity. This Floquet multiplier is responsible for phase noise in the oscillator. The associated eigenvector determines the nature of the noise.
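The unity multiplier can be observed numerically on a textbook oscillator. The following is a minimal, self-contained sketch in pure Python (not SpectreRF's algorithm): it settles a Van der Pol oscillator onto its limit cycle, measures the period from zero crossings, integrates the variational equations to build the monodromy (sensitivity) matrix, and reads off the Floquet multipliers as its eigenvalues. The largest multiplier comes out essentially equal to one.

```python
import cmath

MU = 1.0  # Van der Pol damping parameter (illustrative choice)

def f(s):
    # Oscillator state equations: x' = v, v' = MU*(1 - x^2)*v - x
    x, v = s
    return (v, MU * (1.0 - x * x) * v - x)

def rk4(s, h, deriv):
    # One classical Runge-Kutta step for a tuple-valued state
    k1 = deriv(s)
    k2 = deriv(tuple(si + 0.5 * h * ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + 0.5 * h * ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + h * ki for si, ki in zip(s, k3)))
    return tuple(si + h / 6.0 * (a + 2.0 * b + 2.0 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

h = 1.0e-3
s = (2.0, 0.0)
for _ in range(int(100.0 / h)):  # let transients decay onto the limit cycle
    s = rk4(s, h, f)

# Measure the period T from two successive upward zero crossings of x
times, t, prev, start = [], 0.0, s, None
while len(times) < 2:
    cur = rk4(prev, h, f)
    if prev[0] < 0.0 <= cur[0] and cur[1] > 0.0:
        frac = -prev[0] / (cur[0] - prev[0])   # linear interpolation
        times.append(t + frac * h)
        if start is None:
            start = cur                        # a point (nearly) on the cycle
    prev, t = cur, t + h
T = times[1] - times[0]

# Variational equations: M' = J(x(t)) * M, integrated alongside the state
def f_aug(a):
    x, v, m11, m12, m21, m22 = a
    j21 = -2.0 * MU * x * v - 1.0              # d(v')/dx
    j22 = MU * (1.0 - x * x)                   # d(v')/dv
    return (v, MU * (1.0 - x * x) * v - x,
            m21, m22,                          # J row (0, 1) times M
            j21 * m11 + j22 * m21, j21 * m12 + j22 * m22)

n = int(round(T / h))
a = (start[0], start[1], 1.0, 0.0, 0.0, 1.0)   # M starts as the identity
for _ in range(n):
    a = rk4(a, T / n, f_aug)

# Eigenvalues of the 2x2 monodromy matrix are the Floquet multipliers
m11, m12, m21, m22 = a[2:]
tr, det = m11 + m22, m11 * m22 - m12 * m21
disc = cmath.sqrt(tr * tr - 4.0 * det)
multipliers = sorted((abs((tr + disc) / 2.0), abs((tr - disc) / 2.0)))
print(multipliers)  # largest is ~1: the neutral direction along the orbit
```

The second multiplier lies far inside the unit circle, which is why the orbit itself is strongly stable while the phase direction is only neutrally stable.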

If λ = e^η is a Floquet multiplier, then η + ikω0 is a pole of the time-varying linear system for any integer k. Therefore, because of the unity Floquet multiplier, the time-varying linear system has poles on the imaginary axis at kω0. This is very similar to what occurs in a pure LC resonator, and it explains the identical shape of the noise profiles.

Because the operator L(t) has poles at the harmonics of the oscillation frequency, numerical calculations of the noise at nearby frequencies become inaccurate if treated naively [8, 9]. To correctly account for the phase noise, the SpectreRF simulator finds and extracts the eigenvector that corresponds to the unity Floquet multiplier. To correctly extract the phase noise component, both the right and left eigenvectors are required. After these vectors are obtained, the singular (phase noise) contribution to the noise can be extracted. The remaining part of the noise can be obtained using the usual iterative solution techniques [6] in a numerically well-conditioned operation.

In Figure 1-64, you can see that the SpectreRF PTVL analysis correctly predicts the total noise, including the onset of 3 dB amplitude noise outside the bandwidth of the resonator. Note that this simulation was conducted at

which represents a very high noise level that is several orders of magnitude higher than in actual circuits. The good match of the PTVL models to the full nonlinear simulation shows the validity of the PTVL approximation.

Calculating Phase Noise

The following sections suggest simulation parameters, give you tips for using these parameters, and advise you about checking for accuracy.

Setting Simulator Options

The SpectreRF time-varying small-signal analyses are more powerful than the standard large-signal analyses (DC, TRAN) but, like any precision instrument, they also have greater sensitivity to numerical errors. For many circuits, particularly oscillators, more simulator precision is needed to get good results from the PAC, PXF, and Pnoise calculations than is needed to get good DC or TRAN results.

The small-signal analyses operate by linearizing around the periodic steady state solution. Consequently, the oscillator noise analysis, and the periodic small-signal analyses in general, inherit most of their accuracy properties from the previous PSS simulation. You must be sure the PSS simulation generates a sufficiently accurate linearization. See “What Can Go Wrong” for a discussion.

Table 1-3 recommends simulator options for various classes of circuits.

Table 1-3 Recommended SpectreRF Parameter Values

Circuit     errpreset       reltol    vabstol    iabstol
Easy        moderate        1.0e-4    default    default
Hard-I      conservative    1.0e-5    10n        1p
Hard-II     conservative    1.0e-6    1n         0.1p
Hard-III    conservative    1.0e-7    0.1n       0.1p

Setting method=gear2only is usually recommended for the PSS simulation (but see “What Can Go Wrong”).

The parameters in Table 1-3 are used for error control in two places.

In the local truncation error (LTE) check at each transient integration step. The LTE control formula is

The default value for vabstol is 1.0e-6, and the default value for iabstol is 1.0e-12, which is accurate enough for most cases. Tighten iabstol when necessary. Refer to Section 4.3 in [17] for a detailed discussion of transient integration.
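The LTE control formula referenced above is elided; a standard Spectre-style acceptance criterion has roughly the following form (an assumed sketch using the parameters named in this section; the criterion for currents is analogous, with iabstol in place of vabstol):

```latex
% A transient step is accepted when the estimated local truncation error
% of each voltage satisfies
\[
  \mathrm{LTE} \;\le\; \texttt{lteratio}\cdot
  \bigl(\texttt{reltol}\cdot\max|v| \;+\; \texttt{vabstol}\bigr)
\]
```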

To speed up transient integration during the tstab stage, the defaults during that stage are reltol=1.0e-3, lteratio=3.5, relref=sigglobal, maxstep=T/25, and method=traponly. After the tstab stage, these parameters are reset according to errpreset.

For circuits that need very high accuracy, such as extremely high-Q oscillators, you can gain further accuracy by turning on high-order refinement (highorder=yes) with errpreset=conservative. This runs a multi-interval Chebyshev PSS refinement after the Shooting phase of the PSS analysis.

An effective and fast method to start the oscillator is to run the new autonomous envelope analysis and to save the simulation results. Use the results of the autonomous envelope analysis as the initial condition for the PSS analysis.

A longer tstab stage helps with PSS convergence. However, a longer tstab stage can slow the simulation. Using autonomous envelope analysis to establish tstab is considerably faster than using transient analysis for tstab. See Spectre Circuit Simulator RF Analysis Theory for information on the autonomous envelope analysis.

For releases before MMSIM60 USR1, an effective method to start the oscillation is to add a kicker to the circuit. The kicker can be either a voltage or current source. To effectively start the oscillation, the kicker must be placed at the most sensitive node, which is usually close to the oscillating transistor. The kicker can be either a PWL source or a damped sinusoidal source with the frequency set to the oscillation frequency. The kicker must die down and remain stable after oscillation is established to avoid affecting the PSS analysis.

Troubleshooting Phase Noise Calculations

The SpectreRF simulator calculates noise effectively for most oscillators. However, circuits that are very stiff, very nonlinear, or just poorly designed can occasionally cause problems for the simulator. Stiff circuits exhibit dynamics with two or more very different time scales; for example, a relaxation oscillator with a square-wave-like periodic oscillation. Over most of the cycle, the voltages change very slowly, but occasional rapid transitions are present. This section describes some of the reasons for the problems, what goes wrong, how to identify problems, and how to fix them.

See “Details of the SpectreRF Calculation” for help troubleshooting particularly difficult circuits.

Known Limitations of the Simulator

Any circuit that does not have a stable periodic steady-state cannot be analyzed by the SpectreRF simulator because oscillator noise analysis is performed by linearizing around a waveform that is assumed to be strictly periodic.

For example, oscillators based on IMPATT diodes generate strong sub-harmonic responses and cannot be properly analyzed with the SpectreRF simulator. As another example, Colpitts oscillators, properly constructed, can be made to exhibit chaotic as well as sub-harmonic behavior.

Similarly, any circuit with significant large-signal response at tones other than the fundamental and its harmonics might create problems for the simulator. Some types of varactor-diode circuits might fit this category. In addition, some types of AGC circuitry and bias circuitry can create these effects.

The SpectreRF simulator cannot simulate these circuits because simulation of an autonomous circuit with sub-harmonic or other aperiodic components in the large signal response essentially requires foreknowledge of which frequency components are important. Such foreknowledge requires Fourier analysis of very long transient simulations and cannot be easily automated. Such simulations can be very expensive.

What Can Go Wrong

The SpectreRF simulator can have problems in the following situations.

Generic PSS Simulation Problems

Any difficulties in the underlying PSS analysis affect the phase noise computation. For example, underestimating the oscillator period or failing to start the oscillator properly can cause PSS convergence problems that make running a subsequent Pnoise analysis impossible.

Hypersensitive Circuits

Occasionally, you might see circuits that are extremely sensitive to small parameter changes. One such circuit was a varactor-tuned VCO that had the varactor bias current, and therefore the oscillation frequency, set by a 1 TΩ resistor. Changing to a 2 TΩ resistor, which is a 1e-12 relative perturbation in the circuit matrixes, changed the oscillation frequency from 125 MHz to 101 MHz. Such extreme circuit sensitivity results in very imprecise PSS simulations. In particular, the calculated periods have relatively large variations. If precise PSS simulations are impossible, precise noise calculations are also impossible. In such a case, you must fix the circuit.

Subharmonics or Parametric Oscillator Modulation

Sometimes bias and AGC circuitry might create small-amplitude parasitic oscillations in the large signal waveform. You can identify these oscillations by performing a transient simulation to steady-state and then looking for modulation of the envelope of the oscillation waveform. For high-Q circuits and/or low-frequency parasitics, this transient simulation might be very long.

In this case, because the oscillator waveform is not actually periodic, the PSS simulation can only converge to within approximately the amplitude of the parasitic oscillation. If the waveform possesses a parasitic oscillation whose amplitude changes, over one period, by around 10^-5 relative to the oscillator envelope, then convergence with reltol < 10^-5 is probably not possible (assuming steadyratio is one or less).

These effects might also appear as a parametric sideband amplification phenomenon.

See “Frequently Asked Questions” for more information.

Small-Signal Frequency is Much Higher than the Fundamental Frequency

The same timesteps are used for both the small-signal analysis and the PSS analysis. If the small-signal frequency is much higher than the fundamental frequency, much smaller timesteps might be required to accurately resolve the small-signal than are needed for the large signal. To force the SpectreRF simulator to take sufficiently small timesteps in the PSS simulation, be sure the maxacfreq parameter is set correctly.

Wide Timestep Variation

Occasionally, in simulations that generate PSS waveforms with timesteps that vary over several orders of magnitude, the linear systems of equations that determine the small-signal response become ill-conditioned. As a result, the noise analysis is inaccurate. Usually this occurs because you have requested excessive simulator precision; for example, nine-digit precision. You can sometimes eliminate this problem using method = traponly in the PSS solution. You might also set maxstep to a very small value in the PSS analysis or you might specify a very large maxacfreq value.

Problems with Device Models

When the device models leave their physically meaningful operating range during the large-signal PSS solution, the noise calculations are usually inaccurate. Similarly, when the models are discontinuous, or have discontinuous derivatives, the small-signal analysis might be inaccurate.

Problems Resolving Floquet Multipliers in Stiff Relaxation Oscillators

Sometimes in very stiff relaxation oscillators, the PSS solution rapidly and easily converges, but the numerically calculated Floquet multiplier associated with the PSS solution is far from unity. Typically, this multiplier is real and has a magnitude much larger than unity. The SpectreRF simulator prints a warning (see “Message III”). Interestingly, the computed phase noise is sometimes quite accurate even with loose simulation tolerances. If you have this problem, perform a convergence study.

Problems Resolving Floquet Multipliers in High-Q Resonant Circuits

In a physical oscillator, there is one Floquet multiplier equal to unity. In an infinite-Q linear resonator, however, the multipliers occur in complex conjugate pairs. A very high-Q nonlinear oscillator has another Floquet multiplier on the real axis nearly equal to, but slightly less than, one. In the presence of numerical error, however, these two real Floquet multipliers can appear to the simulator as a complex-conjugate pair. The phase noise is computed using the Floquet vector associated with the unity Floquet multiplier. When the two multipliers appear as a complex pair, the relevant vector is undefined. When the SpectreRF simulator correctly identifies this situation, it prints a warning (see “Message III”). The solution is usually to simulate using the next higher accuracy step (see Table 1-3). Sometimes varying tstab can also help with this problem.

If the circuit is really an infinite-Q resonator (for example, a pure parallel LC circuit), the multipliers always appear as complex conjugate pairs and the noise computations are not accurate close to the fundamental frequency. Such circuits are not physical oscillators, and the SpectreRF simulator is not designed to deal with them; see “Amplitude Noise and Phase Noise in the Linear Model” and “Frequently Asked Questions”.

Phase Noise Error Messages

SpectreRF displays error messages when it encounters several types of known numerical difficulty. To interpret the error messages produced by the phase noise analysis, you must know the material in “Details of the SpectreRF Calculation”.

Message I

The Floquet eigenspace computed by spectre PSS analysis appears to be inaccurate. PNOISE computations may be inaccurate. Consider re-running the simulation with smaller reltol and method=gear2only.

The eigenvector responsible for phase noise was inaccurately computed and the PSS simulation tolerances might be too loose. Try simulating the circuit at the next higher accuracy setting (see Table 1-3) and then compare the calculated noise in the two simulations.

Message II

The Floquet eigenspace computed by spectre PSS analysis appears to be ill-defined. PNOISE computations may be inaccurate. Consider re-running the simulation with smaller reltol, different tstab(s), and method=gear2only. Check the circuit for unusual components.

This can be an accuracy problem, or it can result from an unusual circuit topology or sensitivity. Tighten the accuracy requirements as much as possible (see Table 1-3). If this message appears in all simulations, the noise might be incorrect even if the simulations agree.

Message III

The Floquet eigenspace computed by spectre PSS analysis appears to be inaccurate and/or the oscillator possesses more than one stable mode of oscillation. PNOISE computations may be inaccurate. Consider re-running the simulation with smaller reltol, different tstab(s), and method=gear2only.

All the real Floquet multipliers were well-separated from unity, suggesting that the PSS simulation tolerances might be too loose. Simulate the circuit at the next higher accuracy setting (see Table 1-3) and then compare the calculated noise in the two simulations. If the calculated noise does not change, it is probably correct even if this message appears in both simulations.

The tstab Parameter

Because SpectreRF performs the PSS calculation in the time domain by using a Shooting method, an infinite number of possible PSS solutions exist, depending on where the first timepoint of the PSS solution is placed relative to the oscillator phase.

The placement of the first timepoint is determined by the length of the initial transient simulation, which you can control using the tstab parameter. If the tstab value causes the edges of the periodic window to fall on a point where the periodic oscillator waveform is making very rapid transitions, it is very difficult for PSS to converge. Similarly, the results of the small-signal analyses are probably not very accurate. Avoid such situations. If the start of the PSS waveform falls on a very fast signal transition, you usually need to view the results of further small-signal analyses with some skepticism.

Although a poor choice of the tstab parameter value can degrade convergence and accuracy, appropriate use of tstab can help to identify problem circuits and to estimate the reliability of their noise computations.

If you perform several PSS and Pnoise computations that differ only in their tstab parameter values, the results should be fairly similar, within a relative deviation of the same order of magnitude as the simulator parameter reltol. If this is not the case, you might not have set the simulator accuracy parameters tightly enough to achieve an accurate solution, and you need to tighten one or more of the parameters reltol, vabstol, or iabstol. The circuit might also be poorly designed and very sensitive to perturbations in its parameters.

If the calculated fundamental period of the oscillator varies with tstab even when you set reltol, iabstol, and vabstol to very small (but not vanishingly small) values, the circuit is probably poorly designed, exhibiting anomalous behavior, or both. (see “Known Limitations of the Simulator”).

Frequently Asked Questions

The following questions are similar to those commonly asked about oscillator noise analysis with the SpectreRF simulator.

Does SpectreRF simulation calculate phase noise, amplitude noise, or both?

SpectreRF simulation computes the total noise of the circuit, both amplitude and phase noise. What the analog circuit design environment plots as phase noise is really the total noise scaled by the power in the fundamental oscillation mode. Close enough to the fundamental frequency, the noise is all phase noise, so the plotted quantity really is the phase noise as long as it is well above the noise floor.

Some discussions of oscillator noise based on a simple resonator/amplifier description describe the total noise, at small frequency offsets from the fundamental, as being half amplitude noise and half phase noise. In reality, for physical oscillators, near the fundamental nearly all the noise is phase noise. Therefore, these simple models overestimate the total noise by 3 dB. For a detailed explanation, see the phase noise theory described in “Details of the SpectreRF Calculation” and the detailed discussion of the Van der Pol oscillator “Linear Time-Varying (LTV) Models”.

I have a circuit that contains an oscillator. Can I simulate the oscillator separately and use the phase noise SpectreRF calculates as input for a second PSS/PNOISE simulation?

No. Oscillators generate noise with correlated spectral sidebands. Currently, SpectreRF simulation output represents only the time-average noise power, not the correlation information, so the noise cannot be input to a simulation that contains time-varying elements that might mix together noise from separate frequencies.

If the second circuit is a linear filter (purely lumped linear time-invariant elements, such as resistors, capacitors, inductors, or a linearization of a nonlinear circuit around a DC operating point) that generates no frequency mixing, then you can use the output of the SpectreRF Pnoise analysis as a noisefile for a subsequent NOISE (not Pnoise) analysis.

How accurate are the phase noise calculations? What affects the errors?

Initially, it is important to distinguish between modeling error and simulation (numerical) error. If the device models are only good to 10%, the simulation is only good to 10% (or worse). So, for the rest of this appendix, we discuss the numerical error introduced by the approximations in the algorithms.

You must also distinguish between absolute and relative signal frequencies in the noise analysis. When the noise frequency is plotted on an absolute scale, the error is primarily a function of the variance in the calculated fundamental period. This is true because of the singular behavior of the phase noise near a harmonic of the fundamental. To see this behavior, note that for a simple oscillator driven by white noise, the noise power depends sharply on the offset from the fundamental frequency,

If you make a small error in the calculation of ω0, the error ΔSv in the noise is proportional to ∂S/∂ω0

This error can be very large even if Δω0, the error in ω0, is small. However, because of the way SpectreRF simulation extracts out the phase noise, the calculated phase noise, as a function of offset from the fundamental frequency, can be quite accurate even for very small offsets.
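The relations elided above are consistent with the standard Lorentzian tail of a white-noise-driven oscillator; a sketch of the assumed forms:

```latex
% Noise power near the carrier falls off as the inverse square of the offset
\[
  S_v(\omega) \;\propto\; \frac{a}{(\omega - \omega_0)^2}
\]
% so an error \Delta\omega_0 in the computed fundamental produces a noise
% error
\[
  \Delta S_v \;\approx\;
  \frac{\partial S_v}{\partial\omega_0}\,\Delta\omega_0
  \;\propto\; \frac{2a}{(\omega - \omega_0)^3}\,\Delta\omega_0 ,
\]
% which grows without bound as the offset from the carrier shrinks.
```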

Now consider how much error is present in the calculated fundamental frequency. Because the numerical error is related to many simulation variables, it is difficult to quantify, without examination, how much is present. However, as a rough approximation, if we define the quantity

where max ( i ) and max ( v ) are the maximum values of current and voltage over the PSS period, then, under some assumptions, Δ ω0, the error in the fundamental ω0, probably satisfies

where M is the number of timesteps taken for the PSS solution. This analysis assumes that steadyratio is sufficiently tight, not much more than one, and also that iabstol and vabstol are sufficiently small.

If you require a good estimate of the accuracy in the fundamental, run the PSS simulation with many different accuracy settings, initial conditions, and tstab values (see “The tstab Parameter”). For example, to estimate how much numerical error remains in the calculated fundamental frequency for a given simulation, run the simulation; reduce reltol, iabstol, and vabstol by a factor of 10 to 100; rerun the simulation; and then compare the calculated fundamental frequencies. For the sorts of parameters we recommend for oscillator simulations, four to five digits of precision seems typical. Past that point, round-off error and anomalous effects introduced by vastly varying timesteps offset any gains from tightening the various accuracy parameters.

For phase noise calculations, again it is unrealistic to expect relative precision of better than the order of reltol. That is, if reltol is 10^-5 and the oscillator fundamental is about 1 GHz, the SpectreRF numerical fuzz for the calculated period is probably about 10 kHz. Therefore, when plotted on an absolute frequency scale, the phase noise calculation exhibits substantial variance within about 10 kHz of the fundamental.
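The back-of-envelope arithmetic above can be sketched directly; the reltol and fundamental values are the illustrative numbers from the text:

```python
# Illustrative check: absolute frequency "fuzz" is roughly reltol * fundamental
reltol = 1e-5      # tightened relative tolerance from the example above
f0 = 1e9           # 1 GHz oscillator fundamental
fuzz_hz = reltol * f0
print(fuzz_hz)     # about 10 kHz: below this offset from the carrier, phase
                   # noise plotted on an absolute frequency axis is unreliable
```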

However, when plotted on a frequency scale relative to the fundamental, the phase noise calculation might be more precise for many oscillators. If the circuit is strongly dissipative (that is, low-Q, such as ring oscillators and relaxation oscillators), the phase noise calculation is probably fairly accurate up to very close to the fundamental frequency, even with loose simulation tolerance settings. High-Q circuits are more demanding of the simulator and require more stringent simulation tolerances to produce good results. In particular, circuits that use varactor diodes as tuning elements in a high-Q tank circuit appear to cause occasional problems. Small modifications to the netlist (runs with different tstab values and minor topology changes) can usually tell you whether (and where) the simulator results are reliable.

Simulation accuracy is determined by how precisely SpectreRF simulation can solve the augmented nonlinear boundary value problem that determines the periodic steady-state. The accuracy of the BVP solution is controlled primarily by the simulation variables reltol, iabstol, vabstol, steadyratio, and lteratio. Typically, steadyratio and lteratio are fixed, so reltol is usually the variable of interest.

Occasionally accuracy might be somewhat affected by other variables such as relref, method, the number of timesteps, and tstab. Again, the physical properties of the circuit might limit the accuracy.

I have a circuit with an oscillator and a sinusoidal source. Can I simulate this circuit with SpectreRF simulation?

In general, SpectreRF simulation is not intended to analyze circuits that contain autonomous oscillators and independent periodic sources.

If the circuit contains components that could potentially oscillate autonomously and also independent large-signal sinusoidal sources, SpectreRF simulation works properly only if two conditions are fulfilled: the system must be treated as a driven system, and the coupling from the sinusoidal sources to the oscillator components must be strong enough to lock the oscillator to the independent source frequency. (In different contexts, this is known as oscillator entrainment or phase locking.) The normal (nonautonomous) PSS and small-signal analyses work correctly under these conditions.

If the autonomous and driven portions of the circuit are weakly coupled, the circuit waveform might be more complicated; for example, a two-tone (quasi-periodic) signal with incommensurate frequencies. (Incommensurate frequencies are those for which there is no period that is an integer multiple of the period of each frequency.) Even if PSS converges, further small-signal analyses (PAC, PXF, Pnoise) almost certainly give the wrong answers.

What is the significance of total noise power?

First, you must understand that SpectreRF simulation calculates and measures noise in voltages and currents. The total power in the phase process is unbounded, but the power in the actual state variables is bounded.

Oscillator phase noise is usually characterized by the quantity

where P1 is the power in the fundamental component of the steady-state solution and Sv(f) is the power spectral density of a state variable V.

For an oscillator with only white-noise sources, L ( f ) has a Lorentzian line shape,

where a is dependent on the circuit and noise sources, and thus the total phase noise power

Because

we are led to the uncomfortable, but correct, conclusion that the variance in any variable is 100 percent of the RMS value of the variable, irrespective of circuit properties or the amplitude of the noise sources.
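The elided expressions above correspond to the standard Lorentzian forms; a sketch, with the normalization assumed:

```latex
% Lorentzian line shape of the phase-noise spectrum
\[
  \mathcal{L}(f) \;=\; \frac{1}{\pi}\,\frac{a}{a^{2} + f^{2}}
\]
% Its integral over all offsets is unity,
\[
  \int_{-\infty}^{\infty} \mathcal{L}(f)\,\mathrm{d}f \;=\; 1 ,
\]
% so the total sideband power equals the carrier power P_1, independent of
% the circuit and of the noise-source amplitudes.
```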

Physically, this means that if a noise source has been active since t = −∞, then the voltage variable in question is randomly distributed over its whole trajectory. Therefore, the relative variance is one. Clearly, the variance is not a physically useful characterization of the noise, and the total noise power must be interpreted carefully. What is actually needed is the variance as a function of time, given a fixed reference for the signal in question; or, more often, the rate at which the variance increases from a zero point; or, sometimes, the increment in the variance from cycle to cycle. That is, we want to specify the phase of the oscillator signal at a given time point and to find a statistical characterization of the variances relative to that time. But because of the non-causal nature of the Fourier integral, quantities like the total noise power give us information about the statistical properties of the signal over all time.

What’s the story with pure linear oscillators (LC circuits)?

Oddly enough, SpectreRF simulation is not set up to do Pnoise analysis on pure LC circuits.

Pure LC circuits are not physically realizable oscillators, and the mathematics that describes them is different from the mathematics that describes physical oscillators. A special option must be added to the code in order for Pnoise to handle linear oscillators. See “Models for Phase Noise” and, in particular, “Amplitude Noise and Phase Noise in the Linear Model”. Because the normal NOISE analysis is satisfactory for these circuits and also much faster, it is unlikely that Pnoise will be modified.

Why doesn't the SpectreRF model match my linear model?

As is discussed in “Amplitude Noise and Phase Noise in the Linear Model”, the difference between the SpectreRF model (the correct answer) and the linear oscillator model is that in the linear oscillator, both the amplitude and the phase fluctuations can become large. However, in a nonlinear oscillator, the amplitude fluctuations are always bounded, so the noise is half as much, asymptotically.

We emphasize that computing the correct total noise power requires using the time-varying small signal analysis. An oscillator is, after all, a time-varying circuit by definition. Time-invariant analyses, like the linear oscillator model, can sometimes be useful, but they can also be misleading and should be avoided.

There are funny sidebands/spikes in the oscillator noise analysis. Is this a bug?

Very possibly this is parametric small-signal amplification, a real effect. This sometimes occurs when there is an AGC circuit with a very long time constant modulating the parameters of circuit elements in the oscillator loop. Sidebands in the noise power appear at frequencies offset from the oscillator fundamental by the AGC characteristic frequency.

Similarly, any elements that can create a low-frequency parasitic oscillation, such as a bias inductor resonating with a capacitor in the oscillator loop, can create these sorts of sidebands.

Further Reading

The best references on the subject of phase noise are by Alper Demir and Franz Kaertner. Alper Demir’s thesis [10], now a Kluwer book, is a collection of useful thinking about noise. Kaertner’s papers [11, 12, 9] contain a reasonably rigorous and fairly mathematical treatment of phase noise calculations.

The book by W. P. Robins [13] has a lot of engineering-oriented thinking. However, it makes heavy use of LTI models, and much of the discussion about noise cannot be strictly applied to oscillators. As a consequence, you must interpret the results in this book with care.

Hajimiri and Lee’s paper [4] is worth reading, but their analysis is superseded by Kaertner’s.

Other references include [8, 14, 15, 16].

References

[1]

P. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations. Springer-Verlag, 1995.

[2]

D. Leeson, “A simple model of feedback oscillator noise spectrum,” Proc. IEEE, vol. 54, pp. 329–330, 1966.

[3]

W. A. Gardner, Introduction to random processes. McGraw Hill, 1990.

[4]

A. Hajimiri and T. Lee, “A general theory of phase noise in electrical oscillators,” IEEE Journal of Solid-State Circuits, vol. 33, pp. 179–193, 1998.

[5]

R. Telichevesky, J. White, and K. Kundert, “Efficient steady-state analysis based on matrix-free Krylov-subspace methods,” in Proceedings of the 32nd Design Automation Conference, June 1995.

[6]

R. Telichevesky, J. White, and K. Kundert, “Efficient AC and noise analysis of two-tone RF circuits,” in Proceedings of 33rd Design Automation Conference, June 1996.

[7]

M. Okumura, T. Sugawara, and H. Tanimoto, “An efficient small-signal frequency analysis method of nonlinear circuits with two frequency excitations,” IEEE Transactions on Computer-Aided Design, vol. 9, pp. 225–235, 1990.

[8]

W. Anzill and P. Russer, “A general method to simulate noise in oscillators based on frequency domain techniques,” IEEE Transactions on Microwave Theory and Techniques, vol. 41, pp. 2256–2263, 1993.

[9]

F. X. Kärtner, “Noise in oscillating systems,” in Proceedings of the Integrated Nonlinear Microwave and Millimeter Wave Circuits Conference, 1992.

[10]

A. Demir, Analysis and simulation of noise in nonlinear electronic circuits and systems. PhD thesis, University of California, Berkeley, 1997.

[11]

F. X. Kaertner, “Determination of the correlation spectrum of oscillators with low noise,” IEEE Trans. Microwave Theory and Techniques, vol. 37, pp. 90–101, 1989.

[12]

F. X. Kaertner, “Analysis of white and f^−α noise in oscillators,” Int. J. Circuit Theory and Applications, vol. 18, pp. 485–519, 1990.

[13]

W. P. Robins, Phase Noise in Signal Sources. Institution of Electrical Engineers, 1982.

[14]

A. A. Abidi and R. G. Meyer, “Noise in relaxation oscillators,” IEEE J. Sol. State Circuits, vol. 18, pp. 794–802, 1983.

[15]

B. Razavi, “A study of phase noise in CMOS oscillators,” IEEE J. Sol. State Circuits, vol. 31, pp. 331–343, 1996.

[16]

K. Kurokawa, “Noise in synchronized oscillators,” IEEE Transactions on Microwave Theory and Techniques, vol. 16, pp. 234–240, 1968.

[17]

K. Kundert, The Designer’s Guide to SPICE & Spectre. Kluwer Academic Publishers, 1995.

Measuring AM, PM and FM Conversion

Derivation

Consider a sinusoid that is simultaneously amplitude and phase modulated, as in Equation 1-38.

vm(t) = Ac (1 + α(t)) cos(ωc t + φc + φ(t)) (1-38)

In Equation 1-38, Ac, φc, and ωc are the amplitude, phase, and angular frequency of the carrier, while α(t) and φ(t) are the amplitude and phase modulation.

When you assume that φ(t) is small for all t, the narrowband angle modulation approximation applies, as in Equation 1-39. See [Ziemer 76].

vm(t) ≈ Ac (1 + α(t)) [cos(ωc t + φc) − φ(t) sin(ωc t + φc)] (1-39)
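The approximation drops terms of second order and higher in φ(t). A quick numeric check (a sketch with arbitrarily chosen values, not values from this manual) confirms that the residual error is second order in the phase deviation:

```python
import math

# Check of the narrowband angle-modulation approximation:
# cos(theta + phi) ~= cos(theta) - phi*sin(theta) for small phi.
theta = 1.3   # arbitrary carrier phase, radians (assumed for illustration)
phi = 0.01    # small phase modulation, radians

exact = math.cos(theta + phi)
approx = math.cos(theta) - phi * math.sin(theta)

# The residual is second order in phi (bounded by phi**2 / 2).
error = abs(exact - approx)
```

For phi = 0.01 the residual is below 10⁻⁴, consistent with the phi²/2 bound.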

Converting to complex exponentials gives Equation 1-40.

vm(t) = (Ac/2)(1 + α(t)) [(1 + jφ(t)) e^{j(ωc t + φc)} + (1 − jφ(t)) e^{−j(ωc t + φc)}] (1-40)

Letting both the amplitude and phase modulation be complex exponentials with the same frequency, ωm, gives Equations 1-41, 1-42, 1-43, 1-44 and 1-45.

α(t) = Re{A e^{jωm t}} (1-41)

φ(t) = Re{Φ e^{jωm t}} (1-42)

Where

Re{x} = (x + x^*)/2 (1-43)

A = |A| e^{j∠A} (1-44)

Φ = |Φ| e^{j∠Φ} (1-45)

Here A and Φ are the complex AM and PM modulation coefficients, and x^* denotes the complex conjugate of x.

Assuming that both A and Φ are small and neglecting the cross-modulation terms (products of α(t) and φ(t)) gives Equation 1-46.

vm(t) ≈ (Ac/2) [(1 + α(t) + jφ(t)) e^{j(ωc t + φc)} + (1 + α(t) − jφ(t)) e^{−j(ωc t + φc)}] (1-46)

Simplifying gives Equation 1-47.

vm(t) = (Ac/2) [1 + (1/2)(A + jΦ) e^{jωm t} + (1/2)(A^* + jΦ^*) e^{−jωm t}] e^{j(ωc t + φc)} + c.c. (1-47)

In Equation 1-47, c.c. denotes the complex conjugate of the preceding term.

Because the left-side term vm(t) represents a real signal, only the real parts of the complex terms are of interest. Dropping the imaginary parts and rearranging Equation 1-47 produces Equation 1-48,

vm(t) = Re{Ac e^{j(ωc t + φc)} + (Ac/2)(A + jΦ) e^{j((ωc + ωm)t + φc)} + (Ac/2)(A^* + jΦ^*) e^{j((ωc − ωm)t + φc)}} (1-48)
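The sideband phasors in Equation 1-48 can be checked numerically by synthesizing the modulated carrier of Equation 1-38 and picking off each sideband with a single-bin DFT. This is only a sketch; Ac, φc, A, Φ, and the frequencies below are illustrative assumptions, not values taken from the manual's netlists:

```python
import cmath
import math

# Arbitrary illustrative values (assumptions for this sketch).
Ac, phic = 2.0, 0.3        # carrier amplitude and phase
A, Phi = 0.01, 0.005j      # small complex AM and PM coefficients
fc, fm, N = 50, 3, 4096    # integer cycles per record; N samples

def vm(n):
    """Modulated carrier of Equation 1-38, sampled at t = n/N."""
    t = n / N
    alpha = (A * cmath.exp(2j * math.pi * fm * t)).real
    phi = (Phi * cmath.exp(2j * math.pi * fm * t)).real
    return Ac * (1 + alpha) * math.cos(2 * math.pi * fc * t + phic + phi)

def phasor(f):
    """Single-bin DFT: returns P for a component Re{P e^(j 2 pi f t)}."""
    s = sum(vm(n) * cmath.exp(-2j * math.pi * f * n / N) for n in range(N))
    return 2 * s / N

upper = phasor(fc + fm)  # expect (Ac/2)(A + j Phi) e^(j phic)
lower = phasor(fc - fm)  # expect (Ac/2)(conj(A) + j conj(Phi)) e^(j phic)
```

The measured phasors agree with the coefficients in Equation 1-48 to second order in the modulation coefficients, which is the accuracy of the narrowband approximation itself.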

Because vm(t) is real, the component at ωc − ωm carries the same information as its complex conjugate at ωm − ωc. When you ignore the negative ωm term in Equation 1-48 and rewrite the lower sideband at ωm − ωc, you get

vm(t) = Re{Ac e^{j(ωc t + φc)}} + Re{(Ac/2)(A + jΦ) e^{jφc} e^{j(ωc + ωm)t}} + Re{(Ac/2)(A − jΦ) e^{−jφc} e^{j(ωm − ωc)t}}

Assume that you perform a PAC analysis, which applies a single complex exponential signal that generates responses at the upper and lower sidebands of the carrier at ωc. Let the transfer functions to these sidebands be L and U, so that the lower and upper sideband signals are given by Equations 1-49 and 1-50.

vl(t) = Re{L e^{j(ωm − ωc)t}} (1-49)

vu(t) = Re{U e^{j(ωm + ωc)t}} (1-50)

Where vl(t) and vu(t) are the lower and upper sideband components of the modulator output.

Matching common frequency terms between Equations 1-48, 1-49, and 1-50 gives Equations 1-51, 1-52, 1-53 and 1-54.

U = (Ac/2)(A + jΦ) e^{jφc} (1-51)

L = (Ac/2)(A − jΦ) e^{−jφc} (1-52)

A + jΦ = (2/Ac) U e^{−jφc} (1-53)

A − jΦ = (2/Ac) L e^{jφc} (1-54)

Solving for the modulation coefficients gives Equations 1-55 and 1-56.

A = (1/Ac)(U e^{−jφc} + L e^{jφc}) (1-55)

Φ = (j/Ac)(L e^{jφc} − U e^{−jφc}) (1-56)

Thus, Equation 1-55 gives the transfer function for amplitude modulation and Equation 1-56 gives the transfer function for phase modulation.
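Equations 1-55 and 1-56 are easy to apply as a post-processing step on simulated sideband data. The helper below is a minimal sketch; the function name is an assumption, and the example sideband values are the ideal pure-AM and pure-PM cases for a unit carrier with φc = 0:

```python
import cmath

def am_pm_coefficients(L, U, Ac=1.0, phi_c=0.0):
    """Equations 1-55 and 1-56: convert the lower (L) and upper (U)
    sideband transfer functions into the AM coefficient A and the
    PM coefficient Phi, given carrier amplitude Ac and phase phi_c."""
    rot = cmath.exp(1j * phi_c)
    A = (U / rot + L * rot) / Ac
    Phi = 1j * (L * rot - U / rot) / Ac
    return A, Phi

# Pure AM: sidebands equal and in phase with the carrier.
A_am, Phi_am = am_pm_coefficients(0.5, 0.5)     # A = 1, Phi = 0
# Pure PM: sidebands of equal magnitude, in quadrature with the carrier.
A_pm, Phi_pm = am_pm_coefficients(-0.5j, 0.5j)  # A = 0, Phi = 1
```

Equal in-phase sidebands therefore read as pure AM, while quadrature sidebands read as pure PM, which is the intuition behind the tables that follow.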

Positive Frequencies

Notice that L is defined in Equation 1-49 to be the transfer function from the input to the sideband at ωm − ωc, which is a negative frequency. This is usually a natural definition for use with the Spectre® circuit simulator RF analysis (SpectreRF) small-signal analyses (depending on the setting of the freqaxis parameter). It can be cumbersome, though, when the only data available is at positive frequencies. Thus, the transfer function to ωc − ωm is defined as L̄.

Then, as in Equation 1-57,

vl(t) = Re{L̄ e^{j(ωc − ωm)t}} (1-57)

Because the signals are real, L is the complex conjugate of L̄, and the reverse is also true, as in Equation 1-58,

L = L̄^* and L̄ = L^* (1-58)

Equations 1-59 and 1-60 are produced by rewriting Equation 1-55 and Equation 1-56 in terms of L̄.

A = (1/Ac)(U e^{−jφc} + L̄^* e^{jφc}) (1-59)

Φ = (j/Ac)(L̄^* e^{jφc} − U e^{−jφc}) (1-60)
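When only positive-frequency data is available, Equation 1-58 recovers L by conjugating L̄. A minimal sketch with illustrative pure-PM sideband values, assuming Ac = 1 and φc = 0:

```python
# Positive-frequency form of the conversion (Equations 1-58 to 1-60),
# assuming Ac = 1 and phi_c = 0 for brevity. Values are illustrative.
Lbar = 0.5j              # response measured at wc - wm (positive frequency)
U = 0.5j                 # response measured at wc + wm

L = Lbar.conjugate()     # Equation 1-58: L is the conjugate of Lbar
A = U + L                # Equation 1-59 with Ac = 1, phi_c = 0
Phi = 1j * (L - U)       # Equation 1-60 with Ac = 1, phi_c = 0
# For these sideband values, A = 0 and Phi = 1: pure phase modulation.
```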

FM Modulation

For FM modulation, the phase modulation φ(t) becomes the integral of the FM modulation signal ω(t), as shown in Equation 1-61.

φ(t) = ∫ ω(τ) dτ (1-61)

Where

ω(t) = Re{Ω e^{jωm t}} (1-62)

Recall from Equation 1-42 and Equation 1-56 that

φ(t) = Re{Φ e^{jωm t}}, where Φ = (j/Ac)(L e^{jφc} − U e^{−jφc}) (1-63)

Combining Equation 1-62 and Equation 1-63 with Equation 1-61 and then differentiating both sides results in Equations 1-64, 1-65, and 1-66.

dφ(t)/dt = Re{jωm Φ e^{jωm t}} = ω(t) = Re{Ω e^{jωm t}} (1-64)

Ω = jωm Φ (1-65)

Or

Ω = (ωm/Ac)(U e^{−jφc} − L e^{jφc}) (1-66)
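Equation 1-66 converts measured sidebands directly into the FM coefficient. A minimal numeric sketch (the function name and modulation frequency are illustrative assumptions):

```python
import cmath
import math

def fm_coefficient(L, U, omega_m, Ac=1.0, phi_c=0.0):
    """Equation 1-66: FM coefficient Omega from the lower (L) and
    upper (U) sideband transfer functions."""
    rot = cmath.exp(1j * phi_c)
    return omega_m * (U / rot - L * rot) / Ac

# Sidebands of an FM modulator whose frequency deviation equals the
# modulation frequency (illustrative values, Ac = 1, phi_c = 0).
omega_m = 2 * math.pi * 10e6
Omega = fm_coefficient(-0.5, 0.5, omega_m)
normalized = Omega / omega_m   # equals 1 for these sideband values
```

Dividing by ωm gives the normalized coefficient Ω/ωm, the quantity that is convenient to tabulate for a modulator whose deviation equals the modulation frequency.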

Simulation

The test circuit, represented by the two netlists shown below, was run with SpectreRF. The test circuit consists of three linear, periodically varying modulators that are driven with the same input. The input is constant valued in the large-signal PSS analysis and generates a single complex exponential signal during the PAC analysis. The idea is to compute the transfer functions from this input to the upper and lower sidebands at the output of the modulators, use the derivation just described to convert these transfer functions into transfer functions to the AM, PM, and FM modulations, and then check the simulation results against the expected results.

Notice that freqaxis=out. This is necessary to match the derivation. If you would rather use freqaxis=absout, you would have to use the complex conjugate of L as in Equation 1-59 and Equation 1-60.

Netlist for the AM, PM, and FM Conversion Test Circuit
// AM, PM, and FM modulation test circuit
simulator lang=spectre
ahdl_include "modulators.va"
parameters MOD_FREQ=10MHz
parameters CARRIER_FREQ=1GHz
Vin (in 0) vsource pacmag=1 pacphase=0
Mod0 (unmod in) AMmodulator freq=CARRIER_FREQ mod_index=0
Mod1 (am in) AMmodulator freq=CARRIER_FREQ mod_index=1
Mod2 (pm in) PMmodulator freq=CARRIER_FREQ kp=1
Mod3 (fm in) FMmodulator freq=CARRIER_FREQ fd=MOD_FREQ
waves pss fund=CARRIER_FREQ outputtype=all tstab=2ns harms=1
xfer pac start=MOD_FREQ maxsideband=4 freqaxis=out

The netlist for the modulator models, shown in the following example, has the filename modulators.va.

Netlist for the Modulator Models Written in Verilog-A
`include "discipline.h"
`include "constants.h"

module AMmodulator (out, in);
    input in;
    output out;
    electrical out, in;
    parameter real freq = 1 from (0:inf);
    parameter real mod_index = 1;

    analog begin
        V(out) <+ (1 + mod_index*V(in)) * cos(2*`M_PI*freq*$abstime);
        $bound_step(0.05 / freq);
    end
endmodule

module PMmodulator (out, in);
    input in;
    output out;
    electrical out, in;
    parameter real freq = 1 from (0:inf);
    parameter real kp = 1 from (0:inf);

    analog begin
        V(out) <+ cos(2*`M_PI*freq*$abstime + kp*V(in));
        $bound_step(0.05 / freq);
    end
endmodule

module FMmodulator (out, in);
    input in;
    output out;
    electrical out, in;
    parameter real freq = 1 from (0:inf);
    parameter real fd = 1 from (0:inf);

    analog begin
        V(out) <+ cos(2*`M_PI*(freq*$abstime + idtmod(fd*V(in), 0, 1, -0.5)));
        $bound_step(0.05 / freq);
    end
endmodule

Results

The simulations were run with various values for pacphase on Vin.

Table 1-4 shows results for the output of the AM modulator with vLO = cos(ωc t).

Table 1-5 shows results for the output of the PM modulator with vLO = cos(ωc t).

Table 1-4 Results for the AM Modulator Output

pacphase    L             U             A            Φ
0           1/2           1/2           1            0
45          e^{j45°}/2    e^{j45°}/2    e^{j45°}     0
90          j/2           j/2           j            0
180         −1/2          −1/2          −1           0

Table 1-5 Results for the PM Modulator Output

pacphase    L              U              A    Φ
0           −j/2           j/2            0    1
45          −je^{j45°}/2   je^{j45°}/2    0    e^{j45°}
90          1/2            −1/2           0    j
180         j/2            −j/2           0    −1

If you repeat the simulations but replace the cos function in the modulators with the sin function, which is equivalent to changing the LO to vLO = sin(ωc t) or setting φc = −90°, you achieve the results shown in Table 1-6 and Table 1-7, which follow from Equation 1-55 and Equation 1-56 with φc = −90°.

Table 1-6 Results for the AM Modulator Output with vLO = sin(ωc t)

pacphase    L              U              A            Φ
0           j/2            −j/2           1            0
45          je^{j45°}/2    −je^{j45°}/2   e^{j45°}     0
90          −1/2           1/2            j            0
180         −j/2           j/2            −1           0

Table 1-7 Results for the PM Modulator Output with vLO = sin(ωc t)

pacphase    L              U              A    Φ
0           1/2            1/2            0    1
45          e^{j45°}/2     e^{j45°}/2     0    e^{j45°}
90          j/2            j/2            0    j
180         −1/2           −1/2           0    −1

Finally, Table 1-8 shows the results for the FM modulator with vLO = cos(ωc t). The FM modulator has a modulation coefficient of ωm built in (fd = MOD_FREQ), which renormalizes the results; the last column lists the normalized coefficient Ω/ωm.

Table 1-8 Results for the FM Modulator Output

pacphase    L              U             Ω/ωm
0           −1/2           1/2           1
45          −e^{j45°}/2    e^{j45°}/2    e^{j45°}
90          −j/2           j/2           j
180         1/2            −1/2          −1

Conclusion

This appendix shows that the PAC analysis can be used to determine the level of AM, PM, or FM modulation that appears on a carrier. This is done by applying a small signal and using the phase of the carrier, along with the transfer functions to the upper and lower sidebands of the carrier, to compute an AM, PM, or FM transfer function.

References

[Ziemer 76]

R. Ziemer and W. Tranter, Principles of Communications: Systems, Modulation, and Noise. Houghton Mifflin, 1976.

[Robins 96]

W. Robins, Phase Noise in Signal Sources (Theory and Application). IEE Telecommunications Series, 1996.

[1]

J. Roychowdhury, D. Long, and P. Feldmann, “Cyclostationary noise analysis of large RF circuits with multitone excitations,” IEEE Journal of Solid-State Circuits, vol. 33, no. 3, 1998.
