To understand how signals are routed through plugins, see console.html. Now a word on plugin usage. Some plugins are self-explanatory and aren't explained here; others need quite a bit of explaining to get the most out of them.
BLUR
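Conceptually, a gaussian blur convolves the frame with a gaussian kernel, once along rows and once along columns. A minimal pure-Python sketch (not the plugin's actual code; function names are illustrative):

```python
import math

def gaussian_kernel(radius, sigma):
    """Sampled gaussian, normalized so the weights sum to 1."""
    k = [math.exp(-(x * x) / (2 * sigma * sigma)) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur_row(row, kernel):
    """Convolve one row (or column) with the kernel."""
    r = len(kernel) // 2
    out = []
    for i in range(len(row)):
        acc = 0.0
        for j, w in enumerate(kernel):
            # clamp indices at the edges so borders don't darken
            acc += w * row[min(max(i + j - r, 0), len(row) - 1)]
        out.append(acc)
    return out

def gaussian_blur(image, radius=2, sigma=1.0):
    """Separable blur: filter all rows, then all columns."""
    kernel = gaussian_kernel(radius, sigma)
    rows = [blur_row(row, kernel) for row in image]
    cols = [blur_row(list(col), kernel) for col in zip(*rows)]
    return [list(px) for px in zip(*cols)]
```

Because every row and column is filtered independently, the work divides naturally among many processors, which is how the plugin can use them in parallel.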
Blur performs a gaussian blur on the image. It is capable of using unlimited processors in parallel.

DENOISE
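As a minimal illustration of the transform / threshold / inverse-transform idea, here is a one-level Haar wavelet with soft thresholding. The plugin's actual wavelet and threshold rule are not shown here; this is only a sketch of the principle:

```python
def haar_forward(x):
    """One level of the Haar wavelet: pair averages (smooth part)
    and pair differences (detail part)."""
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    dif = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg, dif

def haar_inverse(avg, dif):
    """Reconstruct the signal from smooth and detail coefficients."""
    out = []
    for a, d in zip(avg, dif):
        out += [a + d, a - d]
    return out

def denoise(x, noise_level):
    """Soft-threshold the detail coefficients: small details are
    treated as noise and shrunk toward zero."""
    avg, dif = haar_forward(x)
    dif = [max(abs(d) - noise_level, 0.0) * (1 if d >= 0 else -1) for d in dif]
    return haar_inverse(avg, dif)
```

With a threshold of zero the signal passes through unchanged; raising the threshold removes progressively larger fluctuations, which is the role the Noise level control plays.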
Denoise performs a wavelet transform, subtracts the entropic part of the signal, and performs an inverse wavelet transform. The sensitivity of the noise reduction is set by Noise level. The Window size has no effect on the sound other than to speed up processing.

FREEZE FRAME
This is usually used as a transition and only reads the first frame. It then outputs the first frame for every frame after it. For this to work, Disable tracks when no edits should be disabled in preferences->playback and Play every frame should be selected in preferences->video.

PITCH SHIFT
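The bin-remapping idea can be sketched as follows. This is illustrative only, not the plugin's implementation; a production pitch shifter also needs phase correction across overlapping windows:

```python
import numpy as np

def pitch_shift_window(samples, factor):
    """Shift pitch by 'factor' by moving the energy in FFT bin k
    to bin round(k * factor), then inverse transforming.
    Crude: no phase correction, so real use needs overlap-add."""
    spec = np.fft.rfft(samples)
    shifted = np.zeros_like(spec)
    for k in range(len(spec)):
        j = int(round(k * factor))
        if 0 <= j < len(shifted):
            shifted[j] += spec[k]
    return np.fft.irfft(shifted, n=len(samples))
```

A sine wave occupying one bin comes out occupying the bin scaled by the factor. The window-size tradeoff in the text follows from this picture: larger windows give finer frequency resolution (needed to shift downward cleanly) at the cost of more temporal smearing.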
Pitch shift performs a fast Fourier transform, scales the array subscripts of the coefficients, and performs an inverse fast Fourier transform. Much of the outcome depends on the window size. For 44100 Hz audio a window size of 4096 seems to work best. Downward shifts require larger window sizes and upward shifts require smaller window sizes. Voices shift better than music.

REVERB
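A reverb of this kind can be sketched as a feedback delay line with a lowpass filter in the loop, so each successive reflection comes out both quieter and duller. The parameter names here are invented, not the plugin's:

```python
def reverb(signal, delay, decay, damp):
    """Feedback delay: every 'delay' samples the signal re-enters,
    scaled by 'decay' and filtered by a one-pole lowpass ('damp',
    0..1). Repeated passes through the lowpass make later
    reflections progressively duller."""
    out = list(signal)
    lp = 0.0
    for i in range(delay, len(out)):
        lp = lp + damp * (out[i - delay] - lp)  # one-pole lowpass
        out[i] += decay * lp
    return out
```

Feeding in an impulse produces the classic picture: the initial signal followed by a train of reflections, each one weaker than the last.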
Getting good reverb is an art form, even with the most automated signal processors. The output consists of an initial signal followed by many reflections. As the reflections move out in time, a lowpass filter is usually applied with decreasing cutoff frequency. If more than one track is attached to a single reverb plugin, the reflections for both tracks are swapped to simulate stereo. Simulated stereo is capable of using a separate CPU for each track.

RGB <-> 601
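The scaling itself is a simple linear remap between the two ranges. A sketch of both directions, using the luma-style 0-255 <-> 16-235 mapping (actual chroma handling in the plugin may differ):

```python
def rgb_to_601(v):
    """Map full-range 0-255 into studio-swing 16-235."""
    return 16 + v * (235 - 16) / 255

def rgb_601_to_rgb(v):
    """Map studio-swing 16-235 back to full-range 0-255.
    Values outside 16-235 (super-blacks/whites) will land
    outside 0-255 and need clamping for display."""
    return (v - 16) * 255 / (235 - 16)
```

Black at 0 maps to 16, white at 255 maps to 235, and the round trip is lossless apart from rounding.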
When recording video from certain devices you'll get black at either 0 or 16 and white at either 255 or 235. The 16-235 colorspace is called ITU-R BT.601 and the 0-255 colorspace is called RGB. The purpose here is to give professional TV gurus headroom and footroom for their whites and blacks. Alternatively you may be doing compositing and want the extra headroom for an intermediate step. Either way you'll want only RGB video on a computer monitor and only 601 video when you print to video. To convert between the two, Broadcast 2000 supplies the RGB <-> 601 plugin. By simply attaching it to your video tracks or routing all your video tracks through it you can convert between either colorspace.

STABILIZE
Image stabilization reads a window from one image and performs an exhaustive search of the next image to try to find where the window went. The search area is in the center of the image. Once it finds a new location it calculates a motion vector and offsets the image.

The Search radius determines how far out from the original window it should search. A higher search radius allows more jittery motion to be compensated at the expense of more clock cycles.
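A toy version of the exhaustive search, operating on grayscale values as nested lists (the plugin's internals may differ; sum of absolute differences is one common matching score):

```python
def sad(frame, cx, cy, ref_window, size):
    """Sum of absolute differences between ref_window and the
    same-size window of 'frame' with top-left corner (cx, cy)."""
    total = 0
    for y in range(size):
        for x in range(size):
            total += abs(frame[cy + y][cx + x] - ref_window[y][x])
    return total

def find_motion(prev, cur, wx, wy, size, radius):
    """Exhaustively search 'cur' around (wx, wy) for the window
    read from 'prev'; return the best-matching motion vector."""
    ref = [row[wx:wx + size] for row in prev[wy:wy + size]]
    best, best_vec = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = wx + dx, wy + dy
            if x < 0 or y < 0 or y + size > len(cur) or x + size > len(cur[0]):
                continue
            score = sad(cur, x, y, ref, size)
            if best is None or score < best:
                best, best_vec = score, (dx, dy)
    return best_vec
```

The cost structure is visible here: the outer loops grow with the square of the search radius and the inner loops with the area of the window, which is why both controls trade accuracy against clock cycles.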
The window size determines how many pixels the window should be per edge. A larger window size allows more accurate placement of the motion vector but increases computation with the square of the edge length.
The Acceleration determines the maximum distance to offset the new frame from the old frame. If the new location of a window is farther out than the acceleration value, the motion vector is shortened to the acceleration value. The automation on this setting is intended for turning image stabilization on and off: where you don't want image stabilization, set the automation to -1; where you do want it, set the automation to 0.
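The shortening step might look like this. This is only a sketch; the exact meaning of the -1/0 automation values belongs to the plugin, and here a negative limit simply zeroes the vector to stand in for "off":

```python
import math

def clamp_motion(dx, dy, acceleration):
    """Shorten the motion vector to 'acceleration' if it is longer.
    A negative limit is treated as 'stabilization off' (zero offset)."""
    if acceleration < 0:
        return 0, 0
    length = math.hypot(dx, dy)
    if length <= acceleration or length == 0:
        return dx, dy
    scale = acceleration / length
    return dx * scale, dy * scale
```

A vector shorter than the limit passes through unchanged; a longer one keeps its direction but is scaled down to the limit.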
The exhaustive search is capable of using unlimited numbers of processors and benefits enormously from doing so. One problem you'll encounter is that image stabilization can't recognize repeating patterns, so if the center is on a repeating pattern or a straight line it'll shake violently. The image stabilization also becomes erratic if the center is on a moving object.