75 – Creating Better Patches (Part 2)

In this second installment of Reason 101’s Guide to creating better patches, I’m going to focus on Performance, Velocity, and how the MBRS (Modulation Bus Routing Section) in Thor relates to both. The focus is to look at new creative ways you can improve how Thor reacts to your playing style and explain some of the reasons why Thor is such a powerhouse of flexibility in this area. Again, I’m not going to be approaching this as a complete guide to every possible performance technique you can accomplish inside Thor, but rather try to outline its flexibility and show you a few key aspects of performance that you should think about as you develop your own patches.

What is Performance?

Performance has less to do with the actual sound than it does with how the sound is played. If sound is the Motor that moves the car, Performance is the route it takes. It adds dynamism, movement, and modulation. And it is just as important as the actual sound you are hearing, or in our case, “creating” inside Thor. Both the sound and the performance of the sound are intrinsically interconnected. Without performance, sound would be very lifeless and dull, devoid of any movement or humanity. In terms of creating a patch in Thor, there are several performance parameters that you can use to determine how the sound is affected (changed or modulated) based on the way the patch is played by the musician. It is up to you, as a sound designer, to select what changes are made to the sound when a key is struck softly versus when the key is struck hard. It is up to you to determine what happens to the sound when the patch is played at different pitches along the keyboard, or when the Mod Wheel is used. And Thor offers an endless variety of ways you can harness the power of performance.

Performance Parameters

Performance parameters fall into the following categories (note the names in parentheses refer to the different names these performance parameters are given on the front panel of the Thor synth):

  • Velocity (or Vel): How soft/slow or hard/fast the keys on your keyboard are initially pressed.
  • Keyboard Scale (or “KBD,” “Key Sync,” or “KBD Follow”): The keyboard register/pitch, or where you play along the length of the keyboard (from C-2 to G8).
  • Aftertouch: Also called “Pressure Sensitivity,” Aftertouch responds to the pressure you place on the keys after they have initially been pressed down.
  • Mod (Modulation) Wheel: A unipolar (0 – 127) wheel that is generally (but not exclusively) used to control vibrato (pitch wobble), tremolo (amp wobble), or both.
  • Pitch Bend: A bipolar (-8,192 – 8,191) wheel that is generally (but not exclusively) used to bend the pitch of the sound upward or downward.
  • Breath: Used with a breath or wind controller. Breathing into the controller will usually cause the sound to be modulated in some way. And if you’re interested in how a breath controller can be used, check out http://www.ewireasonsounds.com/ and http://www.berniekenerson.com/
  • Expression: This parameter is usually tied to an Expression Pedal, like the ones found on an organ or piano.
  • Sustain Pedal: This parameter is usually tied to a Sustain Pedal, like the ones found on an organ or piano.

Note: While performance relates to how the physical instrument / MIDI Controller is played by the musician, any performance parameter can also be programmed or automated in the Main Sequencer in Reason.

The various Performance parameters that can be assigned in Thor's MBRS. Note the Sustain Pedal is located in the root folder, not within the Performance subfolder. Velocity is under the "MIDI Key" subfolder, and Keyboard Scale is found at the top of the root menu under the "Voice Key" subfolder.

While all these parameters can be “turned on” or “turned off” (“implemented” or “not implemented”) in a patch, generally you want to make use of most of these parameters in order to make your patches highly flexible and dynamic. However, I don’t use the Breath, Expression, or Sustain Pedal controls. To my mind, these three controls are very specific, and unless the Musician has a pedal or a wind controller (like a MIDI Flute), they won’t be able to make much use of them. If I were designing a ReFill specifically for a Wind Controller, however, then the Breath parameter would be extremely important and you would probably design most of your patches with this type of control in mind. But for the majority, these controls probably won’t need your attention. And I won’t be discussing them here.

Out of the remaining controls, you can break them down into two groups:

A: Keyboard controls: Velocity, Keyboard Scale, and Aftertouch. These are the Performance parameters that rely on how you play the keys on your MIDI keyboard. Velocity and Keyboard Scale are vital in my opinion. Aftertouch is not as vital, since not every MIDI Keyboard controller can utilize Aftertouch. But many CAN utilize it, and as a designer trying to make your patches stand out, this is one area that can separate your patches from others, making them shine. Note: If your keyboard is not equipped with Aftertouch, you can still test your patches by creating an aftertouch automation lane in the Main Sequencer in Reason, and drawing in your automation. This is true of any of the above Performance parameters. However, this kind of testing can be rather tedious. Better to try and purchase a controller that comes equipped with Aftertouch capability if you can spare the money.

B: Wheel controls: Pitch Bend and Mod Wheel. These are the Performance parameters that rely on how you play the two wheels on your MIDI controller. It’s rare to find a MIDI keyboard that doesn’t have these two wheels, so it’s always a good idea to design your patch with both of them assigned to modulate something. Furthermore, even if you don’t have a keyboard controller that has these wheels, you can still test the controls by turning the Thor wheels up or down on-screen with your mouse.

Let’s start with the Keyboard controls:

Velocity

Think of a sound that has no velocity sensitivity. You actually don’t need to travel too far to think about it. Load up a Redrum, set the Velocity switch to Medium, and enter a Kick drum that beats on every fourth step (typical four to the floor programming). Now play the pattern back. Sure, the drum sounds great, and it has a beat. But it has no change in level. It’s as lifeless as a bag of hammers.

Now put a high velocity on the second and a low velocity on the third drum beat. Listen to the difference. Obviously this is still pretty lifeless, but by introducing Velocity, you’ve introduced a small degree of movement to the pattern. It’s more dynamic “with” velocity than “without” velocity. It sounds less stilted and robotic. It starts to take a better shape. You’ve just added a performance characteristic by changing how the sound is played, albeit by programming the velocity instead of playing it on a keyboard.

Now instead of putting the Kick drum through Redrum, what if you built your own Kick drum in Thor and played it from your MIDI controller? Your keyboard transmits velocity values in a range from 0 to 127, so you have 127 different degrees of Velocity (or put another way, 127 different velocity levels) to work with. When you strike the keyboard to play your Kick drum, the “Velocity” at which you strike the keys can be used to determine the amplitude of your Kick Drum sound.
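
To make the velocity-to-amplitude relationship concrete, here is a minimal Python sketch (purely illustrative, and not anything that lives inside Thor) that maps a MIDI velocity to a linear amplitude:

```python
def velocity_to_amplitude(velocity, max_amplitude=1.0):
    """Map a MIDI velocity (0-127, where 0 is effectively a note-off) to a linear amplitude."""
    velocity = max(0, min(127, velocity))          # clamp to the MIDI range
    return max_amplitude * (velocity / 127.0)

# A soft strike produces a quiet kick, a hard strike a loud one.
print(velocity_to_amplitude(30))    # ~0.24 of full level
print(velocity_to_amplitude(120))   # ~0.94 of full level
```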

Velocity in Thor’s MBRS

Now here’s where things get interesting, and Modular / Semi-modular, in Thor terms. Thor offers both hard-wired (fixed) routings, and programmable (adjustable) routings. What you see on the front panel of Thor is what I would term as “Fixed,” while the Modulation Bus Routing Section (the green area below the front panel) offers you the ability to create your own custom routings; not just audio routings, but also performance routings. Using the MBRS, you can adjust what these performance characteristics will affect in an incredibly open-ended way. In other words, you can use any of these performance parameters to change any other Thor parameters you wish (within a few limitations).

Now let’s look at a fundamental use of Velocity in Thor.

Velocity = How soft or hard you play your keyboard. How the note is performed.
Amplitude = The amplitude or volume of a note. How soft or loud the note sounds.

By combining these two parameters together, you end up with the following:

Velocity Amplitude = A change in amplitude when you play your keyboard soft versus hard. Put another way, the “Velocity” is what is “performing the change” while the “amp” is “being changed.” Velocity is the “How” and Amplitude is the “What.” Velocity is the “Verb” and Amplitude is the “Subject.” Or put in Thor terms, Velocity is the “Source” and Amplitude Gain is the “Destination.”

I’m stressing this concept for a very good reason, because it’s the basis of all modulation concepts inside Thor (and any other really good modular synth for that matter). The main reason why people go cuckoo for Cocoa Puffs over the MBRS in Thor is because you can change the “Verbs” and “Subjects” around in any wacky way you like. So any of these “Performance Parameters” can be used to change any other “Thor Parameters.” And not just that, but you can have as many “Verbs” affecting as many “Subjects” as you like. Or have one “Verb” affecting many “Subjects” or have many “Verbs” affecting one “Subject.” The only limitation to how many routings you can create is the number of MBRS rows provided in Thor.
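
If it helps to see the “Verb/Subject” idea outside of Thor’s interface, here is a rough Python sketch of the Source–Amount–Destination model. The parameter names, the numbers, and the simple “normalize and sum” math are my own illustration, not Thor’s internals:

```python
# Each MBRS row: a source ("verb") modulates a destination ("subject") by an amount.
routings = [
    {"source": "MIDI Key Velocity", "amount": 100, "dest": "Amp Gain"},
    {"source": "Mod Wheel",         "amount":  40, "dest": "Filter 1 Freq"},
    {"source": "MIDI Key Velocity", "amount": -25, "dest": "Filter 1 Res"},  # one verb, many subjects
]

def apply_routings(sources, base_values):
    """Add every routing's contribution on top of the front-panel (base) values."""
    values = dict(base_values)
    for row in routings:
        # Sources are normalized to 0.0-1.0; Amount scales them from -100% to +100%.
        values[row["dest"]] += sources[row["source"]] * (row["amount"] / 100.0)
    return values

print(apply_routings({"MIDI Key Velocity": 0.8, "Mod Wheel": 0.5},
                     {"Amp Gain": 0.0, "Filter 1 Freq": 0.3, "Filter 1 Res": 0.2}))
```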

At this point, you might want to know the complete list of Verbs and Subjects right? No problem. In the MBRS, click on the first “Source” field. Those are your “Verbs.” Now click on the first “Destination” field. Those are your “Subjects.”

Typically, you want your Velocity to affect the amplitude in such a way that the softer you press the key, the lower the amplitude is, while the harder you press the key, the higher the amplitude is. But what if we want to reverse this relationship? What if we want softer key strikes to result in louder sounds, and harder key strikes to result in softer sounds? We can very easily accomplish this in Thor using the “Amount” field in the MBRS. Since you can set the amount anywhere from -100 to +100, you can make the Velocity affect the Amplitude by a “positive amount” or a “negative amount.” Here’s how both routings would look inside the Thor MBRS:

First, turn down the Velocity and Amp Gain knobs on Thor’s front panel, so they are fully left. Then add the following routing in the first line of Thor’s MBRS:

Positive Velocity Amplitude = MIDI Velocity (Source) modulates by +100 (Amount) to affect Amp Gain (Destination)

The Source (MIDI Key > Velocity) and Destination (Amp > Gain) settings in the MBRS row

Next, turn the Amp Gain knob up, fully right. Then change the amount in the MBRS line you previously created, as follows:

Negative Velocity Amplitude = MIDI Velocity (Source) modulates by –100 (Amount) to affect Amp Gain (Destination)

I’m sure by now you’ve noticed that the amount does not necessarily need to be exactly 100 in either direction. You can, of course, enter any amount between -100 and +100 as well. What happens if you lower the Positive Velocity Amplitude? You end up with Velocity affecting the Amp Gain to a lesser degree. In this respect, Amount is actually a way to scale how strongly Velocity affects the Amp Gain.

Now what if you want Velocity to affect Amp Gain some of the time, but not all the time? For example, you might want to give the performer a way to switch velocity sensitivity on and off. You can create an on/off switch for this very easily using the “Scale” parameter in the MBRS. Just add the following:

Positive Velocity Amplitude = MIDI Velocity (Source) modulates by +100 (Amount) to affect Amp Gain (Destination)

and this Positive Velocity Amplitude modulation is scaled by +100 (amount) from the Button 1 (Scale) control.

Put another way:

PVA = [MIDI Vel (Source) modulates +100 (Amount) to affect Amp Gain (Destination)] scaled by +100 (Amount) from the Button 1 (Scale) control.

In the grand scheme of things, Sources and Scales are the same. Anything that can be used as a Source can also be used to Scale a modulation. The only limitation is that you can’t have a “per voice” parameter scale a “global” modulation. For example, you can’t have the Modulation Envelope scale an LFO2 Source that is changing the Global Envelope Attack. The “per voice” parameters live in the “black area” on Thor’s front panel, while the “global” parameters live in the “brown area.” There’s also a line that separates the “per voice” parameters from the “global” parameters in the menu that opens when you click on the “Source,” “Destination,” and “Scale” fields in Thor. “Per voice” parameters are located above the separator, while “global” parameters are located below the separator. If you try to use a per voice parameter to scale a global modulation, a strikethrough line will appear over the text in the MBRS row, indicating that the routing is invalid.

Now, when Button 1 is turned on (lit up), the Positive Velocity Amplitude is active for the performer. When Button 1 is turned off, the Positive Velocity Amplitude is inactive. By now, I’m sure you have figured out that you can reverse this “Button 1 on/off behavior” by reversing the Scale amount to -100. This would mean the PVA is active when Button 1 is off, and inactive when Button 1 is on.

You might also want to provide “degrees” or “gradations” of change in the way the PVA is modulated. If this is the case, change “Button 1” to “Rotary 1” and then use the Rotary’s full 0–127 range to set how “active” the PVA modulation is. The further right you turn the Rotary, the stronger the effect of the PVA becomes. The further left you turn the Rotary, the less impact PVA will have on the performance. How you set this up is totally up to you, the sound designer.
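
Conceptually, the Scale column just multiplies a routing’s modulation by a second control. Here is a hedged sketch of one way to model the on/off and reversed behaviors described above; the exact curve Thor uses internally may differ:

```python
def scaled_modulation(source, amount, scale_value, scale_amount):
    """One MBRS row with a Scale control attached.

    source, scale_value: normalized 0.0-1.0 (e.g. velocity/127, Button 1, Rotary 1/127)
    amount, scale_amount: -100..+100, as in the MBRS Amount fields
    """
    modulation = source * (amount / 100.0)
    if scale_amount >= 0:
        scale = scale_value * (scale_amount / 100.0)            # follows the control
    else:
        scale = (1.0 - scale_value) * (-scale_amount / 100.0)   # negative amount inverts the control
    return modulation * scale

vel = 0.75
print(scaled_modulation(vel, 100, scale_value=1.0,    scale_amount=100))   # Button 1 on: full velocity modulation
print(scaled_modulation(vel, 100, scale_value=0.0,    scale_amount=100))   # Button 1 off: no modulation
print(scaled_modulation(vel, 100, scale_value=0.0,    scale_amount=-100))  # reversed: modulation active when off
print(scaled_modulation(vel, 100, scale_value=90/127, scale_amount=100))   # Rotary 1 at 90: partial modulation
```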

Important Point: Your settings in the MBRS work “in conjunction with” the fixed parameters in the Thor synth. This means that the position of your Amp Gain knob affects how the routing you’ve set up for it in the MBRS operates. If the Amp Gain knob is set to zero (0) on the front panel, and you’ve set up a Positive Velocity Amplitude as shown above, the knob contributes nothing, and the MBRS routing is doing all the work to control the Amp Gain. If, on the other hand, you turn up the Amp Gain knob, the knob’s gain setting is added “on top of” the amplitude increase you’ve set up in the MBRS. It is cumulative. This is why you need to adjust the “Amp Gain” knob in the above examples, even when you enter the MBRS settings. The fixed “Amp Gain” knob setting works in conjunction with the adjustable MBRS “Amp Gain” routing assignment.

In this setup, the Amp Gain is completely controlled via the MBRS routing, since both the Amp Gain and Velocity knobs are turned all the way down (fully left).

Now that you know a little bit about how the MBRS works, I’m going to completely throw all of the above away, because you don’t have to set any of this up in the MBRS at all. Notice the little “Vel” knob next to the Amp Gain knob? This is an example of one of those “fixed” elements of Thor. And since a “Positive Velocity Amplitude” is such a basic principle in most sounds or patches, The Propellerheads gave it a “fixed” position in Thor, next to the Amp Gain knob. By default, it is turned down or off, but you can raise it (turn it right) to achieve the same effect as if you created a line for it in the MBRS.

In this setup, the Amp Gain knob and Velocity knob are controlling the Velocity, not the MBRS. The velocity control result is exactly the same as in the previous image. It's just a different way to set it up within your patch.

Also keep in mind that since both the “fixed” parameter (the Velocity knob) and routing (the MBRS) work in tandem, if you have the Velocity knob set to 127 (fully right), and have a line in the MBRS set up for Positive Velocity Amplitude as outlined above, you are essentially doubling the degree to which your Velocity is affecting the Amp Gain (+200). Same goes if your Velocity knob is set to zero (0), and you create two lines in the MBRS that both have Velocity affecting the Amp Gain by +100. If you duplicate lines in the MBRS, you ARE going beyond a value of 100, and this is true if you go in a positive or a negative direction. Lastly, if you have the Velocity knob set to +127 and the MBRS is set to -100, then they cancel each other out, and Velocity DOES NOT affect Amp Gain at all.
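
Here is a small sketch of that cumulative behavior, under the simplifying assumption that the fixed Vel knob and each MBRS row just sum their contributions (Thor’s internal scaling may differ, but the doubling and the cancellation work out the same way):

```python
def velocity_gain_contribution(velocity, vel_knob, mbrs_amounts):
    """Total velocity-driven change to Amp Gain, as a fraction of full range.

    velocity:     0-127 MIDI velocity
    vel_knob:     0-127 position of the fixed 'Vel' knob next to Amp Gain
    mbrs_amounts: MBRS Amount values (-100..+100) routing velocity to Amp Gain
    """
    v = velocity / 127.0
    contribution = v * (vel_knob / 127.0)        # the fixed, front-panel routing
    for amount in mbrs_amounts:                  # each MBRS row adds on top of it
        contribution += v * (amount / 100.0)
    return contribution

print(velocity_gain_contribution(127, vel_knob=127, mbrs_amounts=[100]))        # ~2.0: doubled effect
print(velocity_gain_contribution(127, vel_knob=0,   mbrs_amounts=[100, 100]))   # ~2.0: two rows, same result
print(velocity_gain_contribution(127, vel_knob=127, mbrs_amounts=[-100]))       # ~0.0: knob and row cancel out
```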

It should be noted that there are actually three different Velocities that can be used as a Source or a Scale in Thor. Here’s how they differ:

  • Voice Key > Velocity: This sources velocity on a “per note” basis. In this respect, it’s the most granular of the Velocity settings in Thor. Each voice in a polyphonic performance receives its own Velocity value based on how soft or hard you play each key. Of course, if you use this setting, you probably also want to be using a polyphonic patch that has more than one voice. Otherwise, it will react the same way as the MIDI Key > Velocity setting.
  • Last Key > Velocity: This allows you to use the Step Sequencer or incoming MIDI key signal to source Velocity. This source is global, so it is “monophonic” by nature. The idea is that the last key played (from either the Step Sequencer or a MIDI key) determines how the velocity is sourced.
  • MIDI Key > Velocity: This sources the Velocity globally via the incoming MIDI key signal. It is different from the Voice Key Velocity setting because it is monophonic, and it is different from the Last Key Velocity because it does not react to incoming signals from the Step Sequencer; only incoming MIDI signals (i.e., a keyboard controller).

So before you start assigning Velocity settings, think about how your patch will be played by the musician. If your patch is programmed via Thor’s step sequencer, then you will need to use “Last Key Velocity.” If you want Velocity to be accessed via the MIDI Keyboard, all three settings will work, but you have the option to set up velocity on a per-note basis using “Voice Key Velocity” or on a global basis using “Last Key Velocity” or “MIDI Key Velocity.”
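
As a rough mental model (not Thor’s actual implementation), the difference comes down to whether each sounding voice keeps its own velocity value or whether one shared, global value is used:

```python
# Voice Key > Velocity: each held note remembers its own strike velocity.
voice_velocities = {"C3": 45, "E3": 90, "G3": 120}   # a hypothetical chord, one value per voice

# MIDI Key > Velocity and Last Key > Velocity: one global value shared by every voice.
# (Last Key also listens to Thor's Step Sequencer; MIDI Key only to incoming MIDI.)
global_velocity = 120                                 # velocity of the most recent note

for note, vel in voice_velocities.items():
    per_voice_gain = vel / 127.0            # differs from note to note
    global_gain = global_velocity / 127.0   # identical for every note
    print(note, round(per_voice_gain, 2), round(global_gain, 2))
```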

Beyond Typical Velocity Settings

Up to this point, all we’ve covered is how to create one simple performance routing in the MBRS, one which is used in most patches in one way or another: Positive Velocity Amplitude. And yet I can’t tell you how many times I’ve seen patches that don’t even go this far. No, I’m not going to name names. But my point is that if you do anything at all in your patches, at the very least turn up the “Vel” knob next to the Amp Gain at least a little bit. Or keep the Filter envelope and velocity settings at their defaults in order to create a little movement in your patches that is tied to Velocity. Sure, there are cases where Velocity does not affect Amp Gain, and even cases where Velocity is not used at all. There will always be exceptions. But if you do anything at all, use the velocity knobs that Thor gives you on the main panel. This will bring your patch designs from “Noob” to “Beginner” or “Good” as far as Velocity goes. Don’t forget to think about Velocity! It can be one of the most expressive qualities of your patch, and it adds yet another dimension that shouldn’t be overlooked.

Now if you want to make your patches go from “Good” to “Great,” might I suggest getting your feet wet in the MBRS and experimenting with the following ideas:

  1. Change the destination around. What if we have Velocity affect the Filter Cutoff, or the FM Frequency, or the Mix between Oscillator 1 and 2? The point is, try it out for yourself and see what creativity you can come up with. See if it enhances your sound or detracts from it.  Remember that you are not limited to tying volume to velocity.
  2. Test out the “Amount” setting when you are creating an MBRS routing. Sometimes a negative value will produce a better result than a positive one. If a velocity setting produces a very harsh jump in modulation from soft to hard key presses (or vice versa), you might need to scale back the amount to a more comfortable setting.
  3. Try having the Velocity affect more than just a single parameter. Have Velocity affecting both the Filter Cutoff and the Filter Resonance at the same time. Or perhaps, if two filters are used, have the Velocity setting open up one filter (positive amount) and close the other filter (negative amount). This creates something akin to a Filter Crossover.
  4. Try assigning different destinations to the “Voice Key > Velocity” and “MIDI Key > Velocity” sources. I haven’t tried doing this yet, but I would imagine it can create some very interesting Velocity-sensitive sounds, since one is “per voice” and the other is “global.”
  5. Something I’ve been experimenting with lately is having the Velocity affect the Rate of an LFO, and then having the LFO affect another parameter in Thor. This has the effect of creating a slow modulation on one end of the velocity spectrum and a faster modulation on the other end of the spectrum. Using positive amounts, when you press the key softly, the LFO is slow, and when you press the key hard, the LFO speeds up. Using negative amounts will reverse the process (see the sketch after this list).
  6. Velocity is independent of the Amp Envelope. Whereas the Velocity is a measurement of how soft or hard you press the key (a function of Weight+Speed on the keys), the Amp Envelope is a measurement of loudness over time. That being said, Velocity occurs before the Attack portion of the Amp Envelope, and therefore, it can be used as a source to control the Attack, Decay, or Release portion of the Amp Envelope (or any other envelope) in Thor. Try using Velocity to change these aspects of your patch. It can produce interesting results as well.
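
For idea 5 in the list above, here is a hedged sketch of velocity scaling an LFO’s rate, with that LFO then wobbling a filter cutoff; the rate range and modulation depth are arbitrary numbers of my own choosing:

```python
import math

def lfo_rate_from_velocity(velocity, min_hz=0.5, max_hz=8.0, amount=1.0):
    """Positive amount: soft notes give a slow LFO, hard notes a fast one.
    A negative amount reverses the relationship."""
    v = velocity / 127.0
    if amount < 0:
        v = 1.0 - v
    return min_hz + abs(amount) * v * (max_hz - min_hz)

def filter_cutoff(t, velocity, base_cutoff=0.5, depth=0.3):
    """The velocity-scaled LFO modulates a (normalized) filter cutoff at time t seconds."""
    rate = lfo_rate_from_velocity(velocity)
    return base_cutoff + depth * math.sin(2 * math.pi * rate * t)

print(lfo_rate_from_velocity(20))    # ~1.7 Hz: soft strike, slow wobble
print(lfo_rate_from_velocity(120))   # ~7.6 Hz: hard strike, fast wobble
print(filter_cutoff(0.25, 120))      # cutoff position a quarter second into a hard note
```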

So go make some killer patches and practice changing the destinations and the amounts, so that you hone in on just the performance quality you want out of your patch. And ensure that you keep testing using your Keyboard Controller. Play your patches at low velocities and high velocities as you create modulation routings so that you can hear the effect Velocity has on your sound.

Note: Most Keyboard Controllers have built-in velocity sensitivity and even come with specialized settings that allow you to select from different Velocity scales, depending on your playing style. But before you begin, ensure your keyboard IS velocity sensitive. In the rare case that it is not, you can press F4 (in Reason 6) to access the on-screen keyboard. Using the keyboard, you can switch between velocities. It’s time-consuming to test this way, but I would be remiss if I didn’t mention it as an option.

Fixed Velocities in Thor

In Thor, there are essentially two types of “Fixed” Velocities. I’ve already discussed the first fixed velocity, the “Positive Velocity Amplitude,” otherwise known as the “Vel” knob in the Amp section of Thor. So I won’t go into detail about that. But there’s also another kind of Velocity, located as a knob on all Filters in Thor. This is what I like to call the “Positive Filter Envelope Velocity” knob. It sets how much the velocity you play on your keyboard affects the envelope amount applied to the Filter. Think of it as having Velocity affecting the Envelope. If the Env amount is set to zero, the Velocity knob has no effect on the envelope. Nothing happens. If your Env amount is turned higher, and Velocity is turned up to 100, for example, the velocity you play will have a pretty significant effect on whether or not you hear the envelope affecting the filter. It sounds complicated, but test it out: create a very noticeable Filter envelope, turn up both the Env and Vel knobs, and then play your key controller softly and very hard. Notice the difference?
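
Here is how I think about that interaction in code form, as a deliberately simplified model rather than Thor’s exact math: the filter’s Vel knob decides how much of the envelope amount is handed over to velocity control.

```python
def filter_env_depth(env_amount, velocity, vel_knob):
    """Effective filter-envelope depth.

    env_amount: 0.0-1.0, the filter's Env knob
    velocity:   0-127 MIDI velocity
    vel_knob:   0.0-1.0, the filter's Vel knob
    With vel_knob at 0 the envelope depth ignores velocity entirely;
    at 1 the depth tracks velocity, from almost nothing (soft) to full (hard).
    """
    v = velocity / 127.0
    return env_amount * ((1.0 - vel_knob) + vel_knob * v)

print(filter_env_depth(0.8, 20, 0.0))    # 0.80: velocity ignored
print(filter_env_depth(0.8, 20, 1.0))    # ~0.13: soft strike, barely any envelope sweep
print(filter_env_depth(0.8, 120, 1.0))   # ~0.76: hard strike, nearly the full sweep
```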

The Fixed Velocity settings in Thor. Note that you can turn off these fixed Velocities very easily by turning all the Velocity knobs fully left. This frees you up to set up your own velocity routings in the MBRS, as you see fit.

So that does it for the second part of the series. I’ll continue with the other Performance parameters in part 3. As always, if you have any questions or want to contribute your thoughts and ideas, I encourage you to do so. I’m always interested in hearing new ways you’ve found to work with Reason. All my best until next time.

74 – Creating Better Patches (Part 1)

As a sound designer, your job is to create sounds for others to utilize in their own compositions. There’s immense satisfaction to be gained on both sides as the artist gains access to a myriad of new sounds, and you, as a sound designer, get to benefit from hearing how others are using your sounds (and remuneration for your time and effort creating them is always a plus). So in this article, I’m going to explore some ways you can A) Make your patches more usable by the artist, and B) Think more creatively about the art of patch design.

You can download the project files here: zero-initialized-patch-files. This zip file contains the Reason 6 setup for creating a Thor patch. It also includes a few fully initialized patches for the various Reason devices. These patches, as explained in the tutorial below, can be used as a starting point to create a basic patch in all three of the Reason synths. The patches are used to force every parameter to “zero” or “off” so that you can truly start the patch creation process from a completely blank slate. To this day, I still use this method when starting a new sound or patch design. It may take a little longer to set up. But it ensures I’m not colored by any settings that are already input into the device, and ensures I think about the whole audio signal as I go.

This article is not about every single patch design possibility. You could literally write several books on the subject and still not be finished. Instead I’m going to focus on a few key areas that are important and may open you up to some new sound design possibilities. For this series of tutorials, I’m going to focus on Thor, and show you some of its flexibility. Being the flagship synth of the Reason software, it deserves more attention than the other synths. It also has a steeper learning curve (not to say the other synths aren’t very powerful. They are! But Thor is just so much more flexible in a lot of ways, mainly due to its semi-modular design).

I’m also going to assume that you know a little bit about what Oscillators, Envelopes, and Filters do, and that you’ve worked a little bit at creating your own sounds. If you’re unfamiliar with these basic synth concepts, then start off reading the Reason Manual (start with the Subtractor section), or else go online to Wikipedia and get familiar with the basic building blocks of sound and sound modulation. Then come back and see if this makes a little more sense to you.

Also please take this series of articles with a grain of salt. Sound design is very subjective and encompasses a wide range of styles, approaches, likes and dislikes. This is not gospel. This is just some of the ways I go about creating my patches. You may have your own methods which are just as valid as mine. There’s no right or wrong (except that you don’t want any patches to clip or have a volume that’s too hot – this is pretty much golden no matter who you are).

The Checklist

Before jumping into some of the basics, here is a checklist you can use to ensure you make the most out of your patches and sound designs. This breaks everything down into a few important sections. When I’m designing a sound, I’m always conscious of the following:

  • What sound are you creating (optional): Sometimes you want to get in there and create a Bass; other times, you just want to experiment and see what you come up with, without having a preset notion of the end sound design. Both approaches should be explored, and which one you take is totally up to you.
  • Amplitude: Ensure the patch sits in the general range of -6 dB on the Peak scale when the Big Meter is set to VU+Peak mode. Never let the patch clip (go into the red).
  • Performance Assignments: Ensure that rotaries, buttons, and wheels have assignments. Brownie points for Velocity, Aftertouch, and Key Scaling (if needed). Provide the most functionality or flexibility you can for the user in each and every patch.
  • Labels: Ensure all the rotaries, buttons, patch names, and ReFill folder names and structure are logical, useful, and make sense. Bonus points if you can make the labels have meaning both on a functional level (what I call the “engineering” level) and on a descriptive level (what I call the “Plain English” or “Layman’s terms” level). For example, the label “LP Filter” is functional. The label “Brightness” is descriptive. So perhaps you can combine them and call the label “LP Brightness,” which can make it a more useful label. Remember when you are creating patches for others to use, the label is often the only documentation provided with a patch. So make it count.
  • Documentation: You may find that a patch you create requires some user documentation. Situations where this can come in handy include explaining the interaction between rotaries/buttons, explaining how to use the Mod Wheel or other performance controls during play, or noting that a patch is designed for a specific key range and what that range is. Providing extra documentation can help take you from a good patch designer to a great patch designer (even if the majority don’t read the documentation). That being said, in general you should always try to design your patches to have the most flexibility possible. And documentation is no substitute for intuitive patch design. The more intuitive and interesting you can make your patches, to the point where anyone who opens them can start playing and using them without documentation, the further along the road to “great” patch designer you are.
  • Creativity: No, there’s nothing under the sun that hasn’t been done before. True. But be creative. Try out new methods and think outside the box. How about applying the rate of an LFO to the Velocity, or adding FM between two oscillators to the Aftertouch setting? What about changing Delay Feedback at the same time you open up the Filter Frequency and attaching that to a Rotary? There’s a million different approaches, combinations, and ideas. Do something that sets your patches apart from everyone else.
  • Did you create the sound you wanted? Is there anything you would change to make it better? This is the final stage where you review the sound and ensure that everything makes sense, and that things sound the way you want them to sound. Often times, I’ll put down the patch or else create a series of patches and then leave them alone for a few days, and then come back and listen to them again. Often times this can shed light on a few areas that could be tweaked or rewritten to make the sounds better. I might change assignments from the Mod Wheel to aftertouch or velocity. I might find a sound is too harsh, so I’ll filter it a little more. This is where you put the finishing touches on your patches, just as you would perform final EQ adjustments to your songs.

Amplitude & The Big Meter

First, let’s start with the Level / Amplitude of your patch. While I know everyone has their own opinion on what the “correct” level of a patch should be, unless I’m designing something solely for myself, and solely for a specific reason, I generally open up the Big Meter in Reason’s Hardware Interface, set it to VU+PEAK mode, and ensure the levels of the patch do not exceed approximately -4 dBFS (VU) or -6 dBFS (Peak). This depends on a lot of factors of course, and the meters are only an aid to help you. They are not meant to replace your ears. If the patch sounds right, then it IS right. But one thing is for sure: you don’t want the meters going into the red. EVER! This means your signal is clipping, and should be avoided. Better to be too cautious than too aggressive with your levels.

Note that clipping occurs when you see the Meter go into the red. With Reason’s default settings, the red zone begins at 0 dB on both the VU and Peak scales. Ensure that your audio signal always stays consistently below these values, no matter what performance parameters are used in the patch, and no matter what combination of these performance parameters is used (more on this below).

If you’ve never worked with meters before, it helps to understand how they work and what they are telling you. Without getting bogged down in the theory behind them, here’s the bare bones of what you need to know. First, the video:

Next, the quick discussion: All 3 meters are explaining the same thing, but in different ways. The VU (Volume Unit) meter is averaging a period of time, and so provides you with an average volume level for what you’re monitoring. The PEAK meter, on the other hand, is providing you with an instantaneous Peak level for what you’re monitoring. It may seem from reading this that the VU meter isn’t accurate for measuring clipping, but this is not entirely true. While PEAK is a better indication of the actual highest point of your audio level, the VU meter is closer to what the human ear perceives as loudness. Therefore, it has value when creating your patches. For example, you may have a sound without a loud peak, but with a very loud average volume (for example, long sustained pad sounds). This can lead to a patch that seems too loud when you hear it, but still doesn’t clip. Or you may have a sound that has very high peaks, but seems too soft to your ears (vocals and transients can easily fall into this category). These two meters can help guide you to see that you’re on the right path. The following explores the way the Big Meter operates in Reason (the down and dirty “Coles Notes” version):
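
If you prefer to see the distinction in code form, here is a rough sketch (assuming floating-point samples in the -1.0 to +1.0 range, and using RMS as a stand-in for the VU meter’s averaging) of a peak reading versus an averaged reading, both expressed in dBFS:

```python
import math

def peak_dbfs(samples):
    """Instantaneous peak: the single loudest sample in the window, in dBFS."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def average_dbfs(samples):
    """VU-style reading: an RMS average over the window, in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

# A steady, sustained tone: modest peak, but the average level stays high.
pad = [0.5 * math.sin(2 * math.pi * 110 * n / 44100) for n in range(44100)]
print(round(peak_dbfs(pad), 1), round(average_dbfs(pad), 1))   # ~-6.0 dBFS peak, ~-9.0 dBFS average

# Anything that reaches 0 dBFS peak is clipping territory: keep peaks down around -6 dBFS.
```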

The Big Meter and its associated parameters.

Here’s a quick tutorial to set up the Big Meter to monitor your patch sound designs:

  1. First, click on the Audio I/O and Big Meter buttons in the Hardware Interface. This opens both, and expands the Big Meter below the Audio I/O.
  2. Next, click the little square below Audio Output 1/2 (or whichever audio output you want to monitor). Alternately, use the Big Meter Channel knob to turn the Big Meter’s focus on the proper output.
  3. Using the Big Meter Mode button, adjust until you have VU+PEAK mode selected. Now you can monitor your audio using both meters. VU is listed in light blue along the top of the Big Meter scale, and PEAK is listed in orange along the bottom. When viewing the audio output LEDs on the Big Meter, note that the VU LEDs are always to the left of the PEAK indicators, and the PEAK indicators are displayed as two LEDs farthest to the right. It takes a little getting used to, but it’s a valuable way to monitor your sound and ensure there is no clipping.
  4. Ensure the Peak Hold is set to Five Sec, so that the held PEAK reading resets every 5 seconds.

In addition, setting the amplitude of your patch revolves around several interrelated aspects of your patch: the big Volume knob, how many Oscillators are being used, the Polyphony setting, how they are routed, the Mix between them, the Decay / Sustain settings of the Amp Envelope, the Gain setting of the Amp section, the Filter Drive settings, and even the Filter Frequency and Resonance. All of these factor into how loud your patch sounds, and whether or not it’s clipping. In fact, everything you do in your patch will affect the amplitude one way or another. That’s just the nature of the beast.

Generally speaking, I adjust most of the amplitude levels during patch creation, as I go, since most changes you make to a patch will affect amplitude in some way. So I’m forever adjusting and tweaking the levels as I create the patch. However, I tend to work from the inside out. I’ll adjust patch settings inside the Voice and Global sections of Thor first, making sure those settings are correct. Then I may make an adjustment to the main level of the Thor patch via the big Volume knob after the patch settings are in place. In addition, I usually (but not always) work by following the signal flow. For instance, I’ll first adjust the drive setting of Filter 1 before adjusting the Gain setting of the Amp section inside Thor. This helps to get the levels correct, and also helps you understand the signal flow of the patch you’re creating.

Testing Amplitude During Performance

I categorize the Polyphony, Envelopes, Velocity, Aftertouch, and Key Scaling settings as part of this group of volume controls. I generally know in advance if I’m going to be creating a patch that is monophonic versus polyphonic. And if the patch is monophonic, I set the volume levels while playing a single note at maximum velocity. If the patch is polyphonic, I’ll adjust amplitude settings based on playing a 3- or 4-key chord at maximum velocity. It’s also important to play the patch up at high registers and down at lower registers to see if the volume is consistent across the entire keyboard. Oftentimes, a synth that has a proper volume in the C2 and C3 range can be overly bright at higher registers (which can increase the volume), or can overload the bottom end in terms of amplification. Without getting too deep into this conversation, just keep in mind that the number of voices (polyphony) affects amplitude, and should be tested accordingly.

The envelopes you set up for the Oscillators are also important. The sound can start off low, and then build up over time to peak much higher than your maximum. So if you’re creating a long drawn-out pad sound which develops over time, be sure to listen and monitor the sound played over the entire duration of the envelope and check that big meter to make sure you don’t go too high.

And not to make things even MORE complicated, but if you have set up your rotaries, buttons, mod wheel, aftertouch, and velocity settings to certain parameters, it’s very important that you test out all the possible combinations of these buttons and performance settings and see how they all interact together. Often times, when you create a patch, you’ll test out the sound while turning Rotary 1 front and back, but have you tested how Rotary 1 sounds when Rotary 2 is turned front to back as well? Or when button 1 is on, etc. You can easily end up in situations where Rotary 1 sounds perfect on its own, but introduce Rotary 2 and all of a sudden the patch is clipping. So be very cautious of that.

It’s important to understand that all of these settings inside Thor affect the amplitude and are interconnected, playing off each other. Balancing out the amplitude while getting the sound you want is one of the primary keys of good sound design. So when you’re first starting out, don’t be afraid to push the limits and see how turning up the filter drive affects the sound, or turning down the gain while turning up the Filter 3 drive changes the sound. But be cautious about good balance. Generally if you turn up one amplitude setting, this will mean you have to turn down another setting somewhere along the audio signal chain. This is where the Big Meter can help you understand how the various parameters you set affect the amplitude. After a while, you may find you don’t need it at all.

Volume Consistency

There’s also one other reason to pay close attention to the volume level of your patches: Consistency. You don’t want one patch to be extremely loud and the other patch to be extremely soft. You want to provide some degree of consistency across your patches, and indeed your Refill.

For me, a good consistency is when the patches are all somewhat close to each other in volume, both in average level and in peak level, without causing any peaks to clip. Of course, this will also depend on the type of instrument you’re creating, the frequency range of the instrument, the rise and fall of the Amp Envelope, etc. But you still want them all to be somewhat consistent relative to each other. Make it easy for the user to use your patches. Providing consistency will help do this.

Note: A good test to see how accurate your ears are is to turn off the big meter entirely, and hide all the metering (put a piece of tape over the metering displays on your monitor if you have to). Design a few different patches in Thor without the aid of any metering. Then go back through the patches and see how close you came to getting the patches all set to consistent levels.

In summary, it’s a good idea to think of volume and amplitude in your patches as a system of pulleys and levers that need to equalize at the outcome. If you pull one lever up, you generally need to pull another lever somewhere in the audio chain down. Kind of like balancing a budget. If you spend more in one area, you have to save in another. Thinking in these terms will help you come out with a better patch in the end, which is consistent with your other patches. When you produce a ReFill, getting the levels even across the board is very important, and it’s something that will make your ReFills better in the long run. So take a minute to explore how volume is affected when you change things around in your patch, and learn to compensate when you raise that drive on that filter.

Setting up your Thor patch

Now that we’ve got the basics out of the way, let’s fast forward to looking at how you can assign some parameters in Thor. What I tend to do when starting any new patch is to “zero out” all the parameters. This means I don’t start from the typical “Init patch” assigned to the device. Instead, I’ve created my own set of “True Init Patches” for each of the devices. This means all the sliders and levers are dropped to zero, all the buttons are turned off, and there are no assignments anywhere. I do tend to leave the polyphony setting at “1” voice for “polyphony” and “1” voice for “release polyphony” — I can always add more voices later. Furthermore, I remove all the labels for the rotaries and buttons. Starting from this blank slate is worthwhile for two reasons:

  1. Starting from “zero” does not influence the direction I am headed when creating a sound or patch. With the typical “init patch” setting in Thor, for example, I might be influenced to create a sound which uses an analog oscillator and low-pass filter, and keep the settings for the envelopes as they are. If, on the other hand, you start out with a completely blank slate, there is nothing that is previously set to color the direction of your patch creation. This may not be your style, but I find it very refreshing to start with a blank piece of paper when writing something, as opposed to starting somewhere midway in the story, where the introduction has already been written.
  2. Second, if you are new to synthesizers and sound design, starting from ground zero forces you to learn what each setting does. And it forces you to learn one step at a time. You’ll soon learn that without an Oscillator and Amp envelope turned on in Thor, you won’t hear a sound. Having to turn them on yourself forces your brain to make the connection by showing you what the Amp envelope is doing with your sound. The same goes for each and every parameter in Thor, or any other synth for that matter. Once you become more comfortable with all the settings, it’s perfectly fine to create your own “Init Patch” or use the one that Reason comes with. But for the sake of learning your synth from the ground up, there’s no better exercise than to turn everything off and start from nothing. You may stumble a bit at first, but you’ll learn much more in the process I think.

That’s all I have for you in this tutorial. In the next part, I’ll go into more depth on ways you can use Thor creatively to produce some interesting results in your patches. I’ll try to think along a few lines that you may not have thought about before, and give you a few ideas that can take your patches from “good” to “great.” Until next time, get familiar with these concepts of volume, metering, consistency across your patches, and the basics of starting from the ground up. Then continue this journey to better sound designs and better patches in Reason. Happy music-making!

And please, if you have any comments, ideas, or thoughts about these topics, please let me know. I’m happy to receive feedback and I’ll do my best to help answer any questions you might have.

73 – Pounding The Pulveriser (Pt. 2)

In this next installment of exploring the Pulveriser, I’ll go a little deeper and see how we can use it for more than just Parallel Compression. We’ll see how we can use it to warm up or destroy a sound, and explore some of the CV / audio routing possibilities to get much more out of the device. And while it’s great on drum sounds and good to beef things up, it can be downright scary when used in a glitch environment. So let’s take a deeper look.

You can download the project files here: pulveriser-ideas. This zip file contains a .reason file and separate Combinator files that outline some of the ideas you’ll find in this article.

First, a note about Pulveriser Levels.

If you bypass the filter, keep the Volume level at 100, and turn the Blend knob fully wet, you’ll notice that the audio coming out of the device is louder than it would be if you bypassed the Pulveriser entirely. If you then turn the Blend knob fully dry, you’ll notice that the signal increases even more. This had me confused, so I went about trying to figure out what Volume level the Pulveriser should be set to in order to match the level of the bypassed source. From my tests, I found that a Volume level of 64 when fully wet, and a Volume level of 44 when fully dry, made the device consistent with a bypassed audio source. How helpful is this information? Well, it could help in the event you want more consistency between the level going into the Pulveriser and the level coming out as you work the Volume and Blend knobs. I find myself usually turning the volume down quite a bit in my own patches. But again, it depends on other factors, such as the Squash, Dirt, and Filter settings. I just wanted to point out what I found while I was doing my own tests.
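
If you want to run the same kind of test on your own material, the idea is simply to compare the level of the bypassed signal with the processed one and note the offset. Here is a generic sketch of that measurement, with made-up sine waves standing in for the bypassed and processed audio (nothing here is specific to the Pulveriser):

```python
import math

def rms_db(samples):
    """Average (RMS) level of a block of samples, in dB."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms) if rms > 0 else float("-inf")

def gain_offset_db(bypassed, processed):
    """How many dB louder (positive) or quieter (negative) the processed signal is."""
    return rms_db(processed) - rms_db(bypassed)

# Hypothetical example: the processed path comes out roughly 3 dB hotter,
# so you would pull the device's Volume down until the offset reads about 0 dB.
bypassed  = [0.30 * math.sin(2 * math.pi * 220 * n / 44100) for n in range(4410)]
processed = [0.42 * math.sin(2 * math.pi * 220 * n / 44100) for n in range(4410)]
print(round(gain_offset_db(bypassed, processed), 1))   # ~ +2.9 dB
```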

In the project files, you’ll see a “Dry-Wet” Combinator that is used to affect an initialized Thor patch. The first Rotary, labeled “True Dry/Wet,” was set up to sweep the Dry/Wet Blend knob while keeping the volume consistent with the source audio. It’s just a test file, and whether or not this is of any use to anyone, I’ll let you decide. But it was a good test to find out those equivalent values.

Now let’s have some fun with FM and AM inputs on the back of the Pulveriser

There are two audio inputs on the back of the Pulveriser that are used to modulate the Filter Frequency with external audio (FM – Frequency Modulation) and the output Amplitude with external audio (AM – Amplitude Modulation). According to the Reason manual, these inputs respond to the rate of the incoming audio signal, so playing around with the rate of that audio also changes how the Filter Frequency and Amplitude are modulated.

So here’s an idea. Use two Thor Oscillators that are played via Step Sequencer to affect both the FM and AM in the Pulveriser.

Let’s give our Bass some Wobble (is that a Dubstep Bass Wobble I hear?)

The Pulveriser was not only built to provide Parallel Compression and dirty up your signal with some distortion. It is also quite capable of wobbling pretty much anything you can think of, and syncing or unsyncing the wobbled signal. Not only that, but you can spread the wobble across the stereo field if you like with the Spread button. This makes it pretty handy for creating Dubstep Bass wobbles without too much trouble. Here’s a nifty way you can create a unique Wobble effect for your Bass, or any other audio you care to throw at the Pulveriser.

Here’s a video to outline one of a thousand ways to creatively wobble your bass:

Triggering the Follower via External CV.

You can also “Trigger” the Follower to act via the CV on the back of the device. This means you can use a Matrix Curve CV or Thor Step Sequencer Curve CV to “Gate” the Follower. If you do this, the Follower connection is broken, meaning that the Audio input does not trigger the Follower anymore. Now, your CV connection is doing the work. For example, you could have the Tremor set up to control the Volume (Tremor to Volume knob), and have the Follower control the Filter Frequency (Follower to Frequency knob). In this scenario, the Pulveriser’s incoming volume has no effect on the Follower. Now, the Curve CV is sending both Gate (when the Follower is triggered) and Velocity (the strength of the signal sent to the Follower) into the Pulveriser’s Follower.

Note: The strength or velocity of the signal can be roughly gauged by the red lamp between the Threshold and Attack rotaries. While I love the lamp idea, I would have preferred a dynamic numerical gauge to accompany the lamp (or at least a tooltip readout of the CV velocity value). And for that matter, a numerical readout on the Matrix Curves and Gates. But I digress. . .

Here’s a video to show how to trigger the Follower from an external CV source. In this video, I’m using Thor’s global envelope to trigger the Pulveriser’s Follower:

Some other Assorted things that can be done

You can use the Follower to control any external CV destination. From the back, hook the CV output of the follower to any CV input in Reason. Or send it to a CV input on a Combinator to control any other parameter of any other device inside the same Combinator. This opens up the door to several possibilities because you can shape the Follower’s Threshold, Attack and Release settings to modify its effect on the CV destination.

If you like the Filters in the Pulveriser, you can use the Filter only, without the Squash, Dirt, Follower or Tremor. In this way, the Pulveriser is an advanced Filter device that can warm up your sounds, or carve out your sounds. The nice thing about using the Pulveriser in this way is that it’s one of the easiest filters to insert into your audio flow, and requires no routing knowledge whatsoever. Just insert and start filtering.


What other things are you doing with the Pulveriser that might be of interest to the community? As always, please share your ideas. I hope everyone has a safe and happy holiday season. I probably won’t write any more tutorials until the new year. But you can be sure I’ll be thinking up some new ideas for 2012. Until then, good luck in all your musical endeavors.

72 – Pounding The Pulveriser (Pt. 1)

Now there’s a word with some power behind it: PULVERISE! Let’s pulverise our sound. New in Reason 6 is this wonderful Distortion-Compression-Tremolo-Follower-Filter-Parallel Processor — And oh yeah, it’s got a Lag feature too! Suffice it to say this thing is vintage goodness, and it can do an awful lot to your sound, whether you just want to warm things up a bit, or set your sound to completely self-destruct. Let’s push it to the limit and see where it takes us.

Now there’s a word with some power behind it: PULVERISE! Let’s pulverise our sound. New in Reason 6 is this wonderful Distortion-Compression-Tremolo-Follower-Filter-Parallel Processor — And oh yeah, it’s got a Lag feature too! Suffice it to say this thing is vintage goodness, and it can do an awful lot to your sound, whether you just want to warm things up a bit, or set your sound to completely self-destruct. Let’s push it to the limit and see where it takes us.

This article is going to provide you with an introduction to the device, as well as show you how to use it as a parallel compressor. I’ll get into more tricks with it in part 2. But for right now, I thought we’d first like to get acquainted with all of its goodies. So let’s start with a look at all the parameters on the device.

Also don’t forget that my latest Refill, Pureffects, is available with 1,250 effect patches designed specifically with Reason 6 in mind. There’s 200 Pulveriser patches alone, and several Pulveriser-based Combinators with all kinds of interesting routing ideas. Read More about the ReFill.

Explaining the Interface

Here’s the front look at the Pulveriser and a brief explanation of what each feature does.

The front panel of the Pulveriser with an explanation of the interface.

Now let’s break down the Pulveriser into its Component Sections:

Compression Section

  • Squash: This is another fancy word for Compression. It compresses the audio signal coming into the Pulveriser. So in this respect, it’s kind of like the Compression Ratio of the M Class Compressor. Though it’s probably a different algorithm entirely if I’m reading the Props correctly (maybe Selig can provide charts and comparisons?).
  • Release: Controls the Release of the Squash (compression).

Distortion Section

  • Dirt: This is another fancy word for Distortion. It distorts the signal coming into the Pulveriser. Note that the Dirt algorithm is unlike any other one found in any other Reason device. I have this on very good authority from the Props themselves. So you can think of this as a brand new Scream algorithm if you like. And I must say it’s one of the cleanest distortions I’ve heard in Reason or any other virtual instrument. Definitely one of the highlights of Reason 6.
  • Tone: This controls a low pass filter that is applied to the Dirt. It is outside the realm of the Filter section, so this is not to be confused with any of the Filters found there. This is a separate filter altogether. Practically speaking, the further left you turn the knob, the more it smooths out the Dirt (Distortion), which may be exactly what you want.

Filter Section

  • Filter: Allows you to select from a variety of built-in Filters. Bypass will bypass the Filter section entirely. The other filters provided are Low Pass 24, Low Pass 12 + Notch, Band Pass, High Pass, and Comb.
  • Frequency: Controls the Filter Cutoff Frequency. Pretty standard for most filters.
  • Peak: This is just another fancy word for Filter Resonance. However, to my ears this Resonance is not as bright as other Resonance settings for other filters in Reason. I may be wrong, or my ears may be shot. But this Resonance algorithm sounds a little more “tame” than the others. But don’t get me wrong, if that is the case, I’m happy about this. It means that we have more choices in Reason on what kind of Filters and Resonance settings to use. More choices = more flexibility.

Tremor Section

The Tremor section can be thought of as an LFO which you can send to various parameters both inside and outside the Pulveriser.

  • Tremor Rate: This controls the Rate of the LFO.
  • Tremor Sync: This controls whether the Tremor is synced to Tempo (when the Sync button is lit) or free-running (when the Sync button is turned off).
  • Waveform: You can select between 9 different waveforms, as follows: Sine (0), Triangle (1), Square/Pulse (2), Sawtooth (3), Random Square (4), Downward Ramp (5), 3-Step Sawtooth (6), 4-Step Sawtooth (7), and 4-Step Triangle (8).
  • Tremor Spread: This is another fancy way of saying “Pan.” It pans the LFO across the stereo field in a ping-pong kind of way, based on the Rate and LFO waveform.
  • Lag: This is actually yet another filter inside the Pulveriser. However, it’s not so much an audible filter as a tool to smooth out the LFO curve. Obviously, if you select a sine wave it’s already smooth, so Lag has no effect. But if you select a square wave, for example, the more Lag you use (the further right you turn the knob), the smoother the corners of the LFO curve become; turned fully right, a square wave effectively becomes a sine wave (there’s a small illustrative sketch after this list).

Note: That’s also my biggest beef with the Alligator. For all the greatness it possesses, it doesn’t possess a “lag” knob for its LFO, which would reduce the pops and clicks I was speaking about in the Alligator tutorial I wrote. And as far as I can tell, there’s no way to “add” any lag to the LFO in the Alligator, because you can’t, for example, take an LFO from the Pulveriser and use it to affect the LFO of the Alligator.

Oh and by the way, did anyone catch that little note on the back of the Pulveriser? It points you in the direction of how to use the “Lag” feature. However, I don’t see a similar note on Alligator. Too bad.

  • Tremor to Frequency & Tremor to Volume knobs: Here’s how I think of it. When you use the “Tremor to Volume” knob, you are creating Tremolo (a change in amplitude over time, or more colloquially, a “Volume Wobble”). Depending on where you set the knob, the Tremor wobbles the volume lower (to the left) or higher (to the right). It’s more of a Vibrato effect when using the “Tremor to Frequency” knob, because the Tremor creates a “Filter Frequency Vibrato” (think Dubstep Bass Wobble). I honestly don’t know if musical terminology has an alternate name for “Filter Frequency Vibrato,” since strictly speaking, “Vibrato” refers to a change in Pitch, not in Filter Frequency.
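To make the Lag and “Tremor to Volume” ideas a little more concrete, here is a tiny generic Python sketch: a square-wave LFO gets its corners rounded off by a one-pole lag smoother and is then used to wobble the amplitude of a signal (tremolo). Again, this is only a conceptual sketch, not the Pulveriser’s internal code, and all the numbers are made up for illustration:

    import math

    SR = 44100  # sample rate (assumed for the example)

    def square_lfo(n, rate_hz=4.0):
        # Square-wave LFO in the range 0..1.
        return 1.0 if math.sin(2 * math.pi * rate_hz * n / SR) >= 0 else 0.0

    def lag(values, coeff=0.0005):
        # One-pole smoother: more Lag = a smaller coefficient = rounder
        # corners. Pushed far enough, a square wave approaches a sine.
        out, y = [], 0.0
        for v in values:
            y += coeff * (v - y)
            out.append(y)
        return out

    tone_in = [math.sin(2 * math.pi * 220 * n / SR) for n in range(SR)]
    lfo = lag([square_lfo(n) for n in range(SR)])

    # "Tremor to Volume": the smoothed LFO modulates amplitude (tremolo).
    depth = 0.8
    tremolo = [s * (1.0 - depth * l) for s, l in zip(tone_in, lfo)]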

Follower Section

The Follower is an Envelope Follower that takes the incoming Pulveriser audio and converts it to a CV signal that you can use to modulate all kinds of things both inside the Pulveriser and outside the Pulveriser (via the CV on the back of the Pulveriser – don’t worry, we’ll get to that shortly).

  • Trig: This allows you to manually trigger the Follower section. If you do this, you are no longer using the Audio alone to trigger the Follower. It becomes a manual process (for as long as you hold down “Trig”). Once you let go, if you have any audio going into the Pulveriser, it triggers the Follower to act. Of course, you can disconnect audio from being sent into the Pulveriser and use the Trig button to manually trigger the follower. Nothing is preventing you from doing that. In that case, the Follower is purely manual, with no audio controlling the Follower whatsoever.
  • Threshold: Controls the Threshold of the Follower.
  • Lamp: The Lamp is just a simple red light which turns on while the Follower is triggered. Note that the intensity of the red light shows the level of the Follower. So if the light is dull, the level is low. If the light is intensely red, then the Follower level is high.
  • Attack: Controls the Attack of the Follower.
  • Release: Controls the Release of the Follower.
  • Follower to Rate: The Follower affects the Tremor Rate. So if you aren’t using the Tremor to affect either the Filter Frequency or the Volume, this knob does nothing. If, however, you are using the Tremor for anything, this knob sends the Follower to the Tremor Rate. Depending on where the Rate knob is set, turning this knob left shifts the rate downward (slower rates), and turning it right shifts the rate upward (faster rates). So, for example, if the Tremor is affecting your volume and the Tremor Rate is set to 1/4, turning the knob right means the volume will be “wobbled” at a rate starting at 1/4, but the wobble then gets faster depending on your Follower settings. With the knob set left of center, the rate still starts at 1/4 but becomes slower over time. The Follower is kind of like a “rate envelope” (i.e., it changes the Tremor Rate over time). This is one of the most interesting knobs on the whole device, in my opinion.
  • Follower to Frequency: Similar to the “Tremor to Frequency” knob, this knob sends the envelope follower to control the wobble of the Filter Frequency. You can get some really interesting effects when sending both the Tremor and the Follower to the Frequency, so it’s important to play around with both knobs in tandem (though you don’t have to). The knob is bipolar: move it to the left and the Frequency wobbles below where the Frequency knob is set; move it to the right and the Frequency wobbles above it. (A rough sketch of a generic envelope follower appears after this list.)
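For anyone who prefers to see the concept rather than read about it, here is a bare-bones, generic envelope follower in Python. It tracks the rectified input level with separate attack and release smoothing and reports when the level is above a threshold, which is roughly the job the Follower section is doing. This is an illustration only, not Propellerhead’s algorithm, and every number is an arbitrary example:

    def envelope_follower(samples, attack=0.05, release=0.0005, threshold=0.1):
        # Track the rectified input level: rise quickly (attack) when the
        # signal gets louder, fall slowly (release) when it gets quieter.
        env, out = 0.0, []
        for x in samples:
            level = abs(x)
            coeff = attack if level > env else release
            env += coeff * (level - env)
            # (CV-style level, "lamp lit" flag while above the threshold)
            out.append((env, env > threshold))
        return out

    # Example: a short loud burst followed by silence.
    burst = [0.9] * 200 + [0.0] * 800
    cv = envelope_follower(burst)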

Mix Section

  • Volume: This is the master volume for the Pulveriser. Pretty standard.
  • Blend: This blends the original audio signal with the “Pulverised” audio signal. In this sense, it acts as a Dry/Wet knob. However, when you use it in conjunction with the Squash on your Kick drum, for example, it turns into a Parallel Processor. The nice thing about using the Pulveriser in this way is how easy it is to create Parallel Compression for your audio, because you only need one device and two knobs to achieve the effect. Of course, you can still use all the other features, such as Dirt, Filtering, Tremor, etc.

Ins and Outs of The Pulveriser: A Look at the Flipside

The back of the Pulveriser presents several ways you can modulate the Pulveriser with CV and output the Tremor and Follower to modulate other devices in Reason. Here’s the skinny. . .

An explanation of the connections on the back of the Pulveriser device.

I won’t go into explaining all the uses of the inputs / outputs and CV connections. Instead, the image pretty much says all you need to know. In the next article, I’ll go over some of the uses of these connections, and how you can use them in various Pulveriser techniques. Suffice it to say you have a lot of power on the rear of the device, as I hope you can see from the image above.

Basic Application: Parallel Compression

So after reading that somewhat elaborate explanation of all the parameters on the Pulveriser, where do you start?

A hint comes in the way I’ve explained things. Notice that the front of the Pulveriser is divided into sections. These sections can be used independently or in tandem, and they are also very interdependent; arguably more so than in Reason’s other effect devices. So, for example, if you want to Parallel Compress your kick drum, you would create your drum sound and then add the Pulveriser as an insert effect. Bypass the filter section, and use the following:

Squash and Release + Blend = Parallel Compression

Instant Parallel Compression using 3 knobs. It really can’t get much easier than this.

Let’s compare this to how you would traditionally set it up in Reason. To get this setup outside the Pulveriser, you would have to split the output of your drum module and send one split into an M Class Compressor and then out to Mix Channel “1” and then send another split directly to Mix Channel “2.” While you can say that this setup provides for more control over the compression, you could also say that it involves more routing, more time to setup, and more controls to worry about. In our Pulveriser setup, this process is much more efficient and easy to setup and control, at the expense of a slight loss in some of the finer aspects of control (you don’t have Compression Attack, for example, which you DO have with the M Class Compressor setup).

In the Pulveriser, the M Class Compressor would be akin to the Squash and Release knobs, while the two faders used to “blend” the two Mixer channels together would be akin to the (you guessed it) Blend knob.
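As a side note, the whole parallel compression trick boils down to a very small amount of maths. The Python sketch below uses a deliberately crude toy compressor (nothing like the real Squash algorithm) just to show how the Blend knob mixes the untouched signal with the squashed copy:

    def toy_compress(samples, threshold=0.3, ratio=4.0):
        # Crude compressor: anything above the threshold is reduced by
        # the ratio. Real compressors smooth this with attack/release.
        out = []
        for x in samples:
            level = abs(x)
            if level > threshold:
                level = threshold + (level - threshold) / ratio
            out.append(level if x >= 0 else -level)
        return out

    def parallel_compress(samples, blend=0.5):
        # Blend = 0.0 is the dry signal, Blend = 1.0 is fully squashed;
        # anything in between is classic parallel (New York) compression.
        wet = toy_compress(samples)
        return [(1 - blend) * d + blend * w for d, w in zip(samples, wet)]

    kick = [0.0, 0.9, 0.7, 0.4, 0.2, 0.1, 0.05, 0.0]  # a fake kick envelope
    print(parallel_compress(kick, blend=0.5))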

Here’s a video to show you the comparison:

So which setup do you select? That all depends on what you feel sounds better to you, and it’s important to note that both setups are equally valid. The Pulveriser simply provides you with a convenient way to set up Parallel Compression.


Well, that’s it for now. I’m a little worn out from writing all this stuff out. But I’ll come back and continue with the Pulveriser to look at some of the cool things you can do with it. Stay tuned. And write me if you think of some really wacky ways it can be used. From my own experiments, I’ve learned you can do everything from enhancing and warming up sounds to completely destroying them. Hours of fun, and you won’t hurt anything except maybe your hearing while you experiment. Let me know what you think. And thanks for reading this.

71 – Entering Song Contests

One of the best things you can do to learn, improve, contribute, and generally be creative is to enter your music in contests. Usually there are several contests through magazines, online music forums, bandcamp, soundcloud, and the like. I would strongly urge you to look up some of them and enter into the contests you feel are appropriate for your style of music.

First, some Suggestions and Tips when Contributing

There are a few suggestions I would make when looking at which contests to enter.

  • Look for a contest where you find like-minded people and musicians that will be accepting of your work. For example, if a contest is put on by a cinematic club, you wouldn’t post country music. So first, read up about the contest, listen to some of the tracks done for previous competitions of the same contest, try getting in touch and networking with individuals that are a part of the contest or group submitting their tracks to the contest. And then get up the courage to submit your work.
  • Read the rules! I can’t stress this enough. Read the rules and then follow those rules. The worst thing you could have happen is that you win a contest only to be disqualified because you didn’t follow the rules properly. Remember that unless YOU are the organizer, you can’t make the rules. Therefore, don’t argue about them. Just follow them. If you have a question, don’t hesitate to ask for clarification. And suggest ways to improve the competition in the future. But remember the goal is to get your music out there, connect with others, and learn more about music in general (others’ music and your own).
  • Generally, don’t submit a work longer than about 4 minutes, unless the competition specifically asks you to. Long drawn-out tracks are a surefire way to bore even the most die-hard music enthusiast, and will probably tire out the judges and those that will have to listen to your music. Make it interesting and get your point across in a reasonable length of time. Sure there are exceptions to this rule, but if it’s not specified, assume a length of 3-4 minutes is standard. And see the previous point: if you don’t know, ask!
  • Listen to the music of others and participate when you have a point and when and where you can. Discuss what you like about other people’s work. Provide “Constructive Criticism” and DO NOT give an opinion without any feedback. You know what I mean. Don’t say “this sounds good” or “love it.” That’s not helpful at all. And if you have nothing to provide, then just keep your mouth shut! A helpful comment always comes with honest feedback which helps the musician learn what is working and what is not, and always with zero negativity. For example, “The compression on that kick drum seems a little too high for my taste, and the melody line comes across too rigid. Perhaps try lowering the compression on the Kick drum and using an arpeggiator on the melody. That could be one approach to enhancing the track. But that bass you have going on is perfect. The frequency is just where it needs to be.” Notice here that you’ve not said anything negative and you’ve provided some helpful ideas. As an aside, it’s usually good form to tell the musician what you enjoy about the track and what you feel works in it if the majority of your comments are things you would do differently.
  • Don’t overdo it! If you are commenting 10 times on every song or in response to every single comment, people are going to get tired of listening to you, and you’re probably cluttering up the competition with a lot of crap. Tone it down. Think about your responses. And if you’re impulsive (hey, we’re all impulsive from time to time, especially about things that we feel passionate about), instead write your comment down in notepad and save it for one day. Then go back to it and see if you still want to post it. The one day rule gives you some time to ponder what you’re saying, maybe you want to rework what you previously wrote as well. And it gives you a break from the forum / competition. Most importantly, it gives the other people a break from YOU. And remember, a well-formed and well thought-out comment is usually greeted with much more acceptance than an impulsive one.
  • Most importantly, have fun submitting your track(s). Well-run competitions are fun places where everyone shares ideas. Make the most of them. Listen to what others have to say about your work. And try to grow from the experience. You will find many useful ideas that others suggest are often ideas you hadn’t thought about before, and it may be just what you need for some new inspiration.

Always remember that competitions are NOT about winning, despite what others might say. Competitions are about people getting together to share their passions. They are about learning how to take your music to the next level. They are about learning from the techniques of others. Winning is merely a by-product and a cherry on the cake. If you win, great. But if you lose, you still win. Those that enter competitions merely for the sake of winning are actually the ones that are losing, because they are stagnant. Those that get the most out of competitions are those that open themselves up to the process and learn from the experience. They are the ones that go home with new friends, new ideas, new techniques, and new ways to improve themselves. Ask yourself which one you want to be.

My Song Contribution to the Reason Song Challenge III

So with all of that said, here’s my own entry from the Reason Song Challenge III put on by Rob (FailedMuso). In order to use this file, you will need to download Patrick’s (Bitley’s) DeLight Fairlight Demo ReFill. The whole idea for this competition was to create a full song from the sounds found in that demo ReFill. Lots of entries were created. I think a total of 32 songs were submitted. Mine did not make the grade. But as I said, that’s totally ok with me. I’m just very happy to be listening, talking, and sharing with such a talented group of people. That’s why I wanted to share my track with everyone. That way you could take a look at how it was put together, and perhaps find a few ideas in there which may spark your own creative juices or inspire you in some way.

So here is the final entry I submitted to the contest (sadly, it did not come in first, second, or third place, nor did it get any honourable mentions). But that’s ok. As I said, even when you don’t win, there’s a lot you can take away from it, and a lot you can learn.

Enlightened v2 by Phi Sequence

And you can download the song here: Enlightened. The file is zipped up, and when you open it you’ll also need Reason 6 to inspect it (since it was done in Reason 6 and is a .reason format). Feel free to look it over, play around with it, remix it and send it back to me. Have at it. But remember it’s still copyrighted and I still own the rights to it. So no distributing it without letting me know first.

Finally, here’s a video I put together on YouTube to show you what’s inside the file. I think the hardest thing was coming up with Hi Hats (none were included in the demo ReFill), and making the song “my own.” It was also interesting to try to cull the proper sounds I wanted out of the demo. For instance, there were no sweeps inside the demo, so I had to make changes to the synth parameters to get some sweeps out of them. Little things like this add up to a lot of fun hours working inside the song document and making the demo sounds do what I wanted them to do. But in the end, I can honestly say it’s all worthwhile.

Also, limiting your song palette in this way forces you to work within boundaries. And that is sometimes a great source of inspiration. Especially when all the ReFills out there have massive libraries of sounds and effects. It’s very easy to get lost in a sea of sounds. Sometimes a competition like this comes along and it forces you to work in a limited way. Or else it forces you to find workarounds you might not have ever thought about before. In short, it’s a great way to stimulate your musical mind.

I hope you found some of this information useful. Drop me a comment and let me know what you think. I’d love to hear from you.

70 – ChipTune Sounds

There’s a wealth of great information out there on recreating the sounds of old computer chips, like the Commodore 64 or old SID chips and video console chips, and using these sounds to create tunes (Chiptune). I honestly knew very little about the subject until I, along with several other very talented folks, were asked to put together some fresh new sounds for the Reason 6 Factory Sound Bank (FSB). So here I’m going to explore and explain how I created a few of these sounds, and show you that you can definitely recreate some convincing Chiptune sounds using nothing but Reason and a little experimentation.

There are no additional project files for this tutorial because all the project files can already be found in the FSB (Factory Sound Bank) for Reason 6. So read the article, watch the videos, and I’ll point to where you can find these chiptune sounds and show you how they are put together.

The Process: It’s all in seeking out the Answers!

Before I delve into creating these wily little playful sounds, I want to shed a little light on how I approached the subject. Because hopefully that might give some insight into the process of sound creation and more importantly, might inspire you to seek out the knowledge you need to tackle any sound design project, even if you know very little about it. The whole process starts by asking yourself two questions:

  1. What is the sound I’m looking for? What does Chiptune sound like?
  2. How can I recreate those sounds inside Reason?

The answer to the second question depends upon the first, so your first step is to seek out Chiptune sounds. Try to find some sites on the internet that cater to that specific sound and immerse yourself in them. It also helps if you can download a few sounds in your genre of choice and then deconstruct them using a spectrum analyzer. The more you do this, the better you will be at instantly recognizing what type of Oscillator was used, envelope settings, and the like. And yes, this is another plea for the Props to introduce a fully-featured Spectrum Analyzer in Reason. Because, for a sound designer, this is a very important analysis tool.

Once you have an idea of how the genre or specific instruments sound, the next step is to seek out information on how those sounds are built. I went to my best virtual friend next to Google to get the answer: Wikipedia. Enter “Chiptune” into the search box, and you have all the information you need to figure out how to build chiptune sounds inside Reason (or at least a very good start). Look for keywords like which “Oscillators” and “Waves” and “Filters” were used. Then you can transfer this knowledge over to Reason and be well on your way to creating vintage Chiptune sounds.

So if you really want to know about Chiptune, go here: http://en.wikipedia.org/wiki/Chiptune. After you’ve read that (especially the section on “Technology” and “Style”), come back here and we’ll continue with taking the knowledge from that article and build our chiptune sounds in Reason.

Note: There are several sites dedicated to creating Chiptune sounds, software and hardware used to create chiptune sounds, discussion groups, sound share sites, and a plethora of everything to try to recreate these old video game consoles and chips. Simply searching on Google will give you a wide variety of information and I urge you to explore those avenues as well, if this is your thing.

Components of Chiptune Sounds

Now that we have an idea of what Chiptune sounds like, and we know a little about what elements of the synthesizers are used to recreate those sounds, we can jump into Reason. Of course since this is a Reason article and since we’re using Reason software, it makes perfect sense that we are taking the software route to recreate these sounds. But that doesn’t mean you can’t recreate them via hardware instead. You could also, if you’re lucky, have access to those older chips from the 80’s and build your own box if you are feeling inspired by your inner engineer spirit. There’s also a wide variety of other software dedicated to recreating these chips and chip sounds. But that’s a whole other article for a totally different kind of blog. Here, it’s all about software and Reason.

Most of the components of chiptune sounds use some or all of the features below, and are very simple to create in Reason. As a matter of fact, you can use any or all of Reason’s synths to mimic some very realistic chiptune sounds. Where the fun comes in is adding your own variation or style to the sound. Since the chips used to create the sounds of the 80’s arcade devices were very basic, they required very little CPU, and that holds true even today. So you can create whole songs out of them with very little strain on your processor.

  • Usage of Square Wave (also called Pulse Wave)
  • Usage of the Bandpass filter
  • Bitcrushing to add distortion and a low quality feel
  • White noise for drum sounds
  • Arpeggios were simulated with a fast-changing pitch (something easily simulated by an LFO affecting the Pitch; there’s a small sketch of this idea just after the list)
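If the “fast pitch changes fake a chord” trick sounds abstract, here is a tiny stand-alone Python sketch of the idea using nothing but a square wave and a note list. It has nothing to do with Reason’s internals; it just renders the classic arpeggio illusion to a WAV file so you can hear what we’ll be approximating with an LFO on the oscillator pitch (the chord, rate, and sample rate are all arbitrary example values):

    import math, struct, wave

    SR = 22050
    CHORD = [440.0, 554.37, 659.25]   # A major triad, cycled very fast

    def square(phase):
        return 1.0 if (phase % 1.0) < 0.5 else -1.0

    samples, phase = [], 0.0
    for n in range(SR * 2):                     # two seconds of audio
        freq = CHORD[(n // 735) % len(CHORD)]   # hop to the next note every ~33 ms
        phase += freq / SR
        samples.append(0.3 * square(phase))

    # Write it out so you can hear the "arpeggio pretending to be a chord" effect.
    with wave.open("chip_arp.wav", "w") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SR)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))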

That’s pretty much it in a nutshell. Now let’s see how we can set up our sounds using all three Reason synths.

Malstrom Chiptune Sounds

Here’s the video that shows you how to build some chiptune sounds using the Malstrom. Think of this as your own Malstrom chiptune emulator:

Here’s the process to create some interesting Chiptune sounds with the Malstrom (the short version):

  1. Create a Combinator and a 14:2 Mixer inside the Combinator. Then create a Malstrom, so it is auto-routed to the first channel of the Mixer.
  2. Turn off filters A & B in the Malstrom.
  3. Select Curve 26 for Modulator A, and Curve 3 (square wave) for Modulator B. You can play around with the Curves after you’ve created most of the patch. The key is to select sharp-edged (stepped or square) waves. This provides a distinctive chip sound for the patch.
  4. Turn the Rate knob for Modulator A to around 104, and the Rate knob for Modulator B to around 96. Also set the “Modulator A to Pitch” knob to 63 (all the way up). And then set the “Modulator B to Motion” knob to around 22.
  5. In Oscillator A, select “VSWaves” and turn the Motion down to -64. Then turn the Index up to 127. Finally, set the ADSR envelope of Oscillator A to 0 / 127 / 0 / 0.
  6. Set Polyphony to 1.

Now when you play the Malstrom you should hear some interesting chippy sounds coming out of it. You can now set up the Combinator Rotaries/Buttons to play around with the Rates of both Modulator A & B, as well as the Modulator A to Pitch, Modulator B to Motion, and Oscillator A Index. Here are the settings I entered for the Malstrom Emulator patch I created for the Reason 6 FSB. The idea is to keep things playful, flexible and usable.

The Combinator's Modulation Routing settings for the Malstrom, which were used in the Chiptune Emulator A patch from the Reason 6 FSB

Adding some “Crunch” or Bitcrushing into the Mix

To add a bit of bitcrushing to the sound, you can do the following:

  1. Select the 14:2 Mixer and create a Scream 4 device. This will auto-route to the first Auxiliary of the Mixer.
  2. Turn the “Return” knob of Aux 1 down to around 50 or so. Then turn Channel 1 Aux 1 all the way up to 127.
  3. Select the Scream device, and set the Damage Control to full (127), Damage Type to “Digital,” and P1 to 105.
  4. Select the “Scream 4” in the Combinator Mod Routing section, and enter the following settings: Rotary 4 > Parameter 2 : 0 / 127. Also set Button 4 > Damage On/Off : 0 / 1.

This way, Button 4 acts as the Bitcrush On/Off switch, and Rotary 4 acts as a “tone” knob for the bitcrusher. If the effect is too much for you, reduce the Return knob on the Mixer for Aux 1 down a bit more. If it’s not enough, turn it up.
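Conceptually, the “Digital” damage we’re borrowing from Scream 4 is doing bit and sample-rate reduction, i.e. a bitcrusher. The little Python sketch below is only a generic illustration of that idea; it makes no claim about Scream’s actual algorithm, and the parameter values are arbitrary:

    import math

    def bitcrush(samples, bits=6, downsample=4):
        # Reduce the bit depth (coarse quantisation) and hold each value
        # for a few samples (crude sample-rate reduction) for that lo-fi,
        # crunchy, old-video-game feel.
        levels = 2 ** bits
        out, held = [], 0.0
        for i, x in enumerate(samples):
            if i % downsample == 0:
                held = round(x * levels) / levels
            out.append(held)
        return out

    tone = [math.sin(2 * math.pi * 440 * n / 22050) for n in range(220)]
    crunchy = bitcrush(tone, bits=4, downsample=6)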

Thor Chiptune Sounds

Here’s the video that shows you how to build some chiptune sounds using the Thor synth. Think of this as your own Thor chiptune emulator:

Here’s the process to create some interesting Chiptune sounds with Thor (the short version):

  1. Create a Combinator and a 14:2 Mixer inside the Combinator. Then create a Thor synth, so it is auto-routed to the first channel of the Mixer.
  2. Ensure Oscillator 1 is Analog (by default, this should already be set). Then change the wave to a square (pulse) wave, and set the Oscillator 1 Mod knob to 64.
  3. Bypass the Filter 1 slot, but ensure the Oscillator is still going into Filter 1 (the red “1” button to the left of the Filter 1 slot).
  4. Set the LFO 2 to a square (pulse) wave, and the Rate to somewhere around 11.3 Hz. Things are more interesting if you don’t “sync” the LFO to the tempo, so leave that off for now.
  5. Set the Amp Envelope’s ADSR to 0 / 127 / 0 / 0
  6. Set Polyphony to “1” and Release Polyphony to 0 (zero).
  7. In the Modulation Bus Routing Section (MBRS) at the bottom of Thor, enter the following: LFO2 : 66 > Osc1 Pitch
  8. Setting up the “Bitcrusher” is exactly the same as previously described when creating the Malstrom patch, so you can set that up for your Thor chiptune emulator if you desire.

Now when you play the Thor synth you should hear a classic Chiptune sound. The next step is to set up the Combinator Rotaries/Buttons to play around with the Rate of the LFO2, as well as the PWM (Pulse Width Modulation) of Oscillator 1.

The interesting thing about using Thor is that you also have access to the built-in step sequencer. So another approach is to set up a random pattern, set the step sequencer to a “Pendulum” setting, unsync it, and trigger it from a button on the Combinator (see the video for more on this). Setting up the sequencer this way means that you have an “auto-sequenced” pattern triggered from the Combinator button. Handy for instant chiptune.

Here are my own Combinator settings for the Thor device:

The Combinator Modulation Routing settings for the Thor chiptune Emulator.

Now let’s have a look at some Chiptune sounds triggered via Kong.

Here is another little patch that I put together for the FSB, along with some other talented people who helped out by providing a few of their sounds as well. It’s a little Kong Kit that you can experiment with. Note that in order to create some of the drum sounds that are classic 80’s arcade, you will want to use “white noise.” This formed the basis for classic chiptune drums.

So there are a few outlines for creating classic 80’s arcade sounds. With a little tweaking, I’m sure you can come up with several authentic-sounding video game sounds using Reason. And I’d sure love to hear them. Furthermore, armed with this knowledge, why not try using the Subtractor to create chiptune sounds? And if you come up with some good ones, please be sure to send them my way and let me have a listen. Until next time, happy Reasoning.

69 – All about the Alligator (Part 2)

Let’s continue with the Alligator and find a few other tricks that it can perform. In the first part, I looked at how the Alligator works, and provided a few ideas for how to work with it. In this part, I’m going to get a little more practical and show a few new ideas you can incorporate into your tunes. Hopefully this will provide you with some new creative inspiration.

You can download the project files here: alligator-techniques-part2. There are some Combinators and a .reason file showcasing the examples found below. In the .reason file, I’ve used mutes to silence all the tracks. To listen to a track, unmute it. I hope you find some of these tricks useful.

Also don’t forget that my latest ReFill, Pureffects, is available with 1,250 effect patches designed specifically with Reason 6 in mind. There are 200 Alligator patches alone, and several Alligator-based Combinators with all kinds of interesting routing ideas. Read More about the ReFill.

So let’s continue where we left off last. . .

Technique #5: Using the Alligator Effects without the Gates or Filters

After I wrote the first tutorial, it occurred to me that you can use each part of the Alligator independently. For example, you can use the Gates only, without the Filter and Effects section. Or you can use the Filter section only, while keeping the Gates continually open. And lastly, you can use the Effects or Mixer sections only, if that’s all you want to use (Drive, Phaser, Delay, Pan, Volume). The way to achieve it is as follows:

  1. Start with “Technique #2: Keeping your Gates Open.” To recap, you set the pattern to #60, and ensure the Amp Envelope Decay knob is set to 127 (fully right). Then flip to the back of the Alligator, and send Gate CV Output 1 to Gate CV Input 3, Gate CV Output 2 to Gate CV Input 1, and Gate CV Output 3 to Gate CV Input 2. This way all your gates are Fully open.
  2. Fully turn down the volume of the High and Band pass filters. Note: Alternately, you can move the main left/right output cables to the Low Pass Channel left/right outputs on the back of the Alligator. This way, the audio input into the Alligator is only passing through the Low Pass section of the Alligator. Note also that this doesn’t have to be the Low Pass Channel. This idea works with any of the Channels in the Alligator, since the effects and mixer sections are the same for all 3 Channels. The idea is to pick one Channel, and mute the other two.
  3. Turn off the filter section (green light) on the low pass filter.
  4. You might also want to reduce the volume of the low pass filter (I found a volume setting of 64 for the low pass volume combined with a volume setting of 100 for the master volume is equivalent to bypassing the Alligator — at least to my ears).

Now your signal is passing through the low pass Channel only, and with the Gate always open and the Filter section turned off, you can use the effects & mixer sections independently to affect your sound.

The Alligator Front Panel, showing the areas of importance when trying to use a single channel for effects and Mixer only; in this case, using the Low Pass Channel

Technique #6: Using External Effects

As with all Reason devices, you can very easily set things up to use external Reason devices. In other words, you are not limited to using the Drive, Phaser, and Delay that’s already built into the Alligator device. If you wish to use an external effects device, such as an Echo, Scream, or even any of the Kong FX modules, you can easily set this up. Here’s how you go about it.

  1. Select the sound source device or Audio Channel and then add an Alligator so that it’s auto-routed.
  2. Determine if you want the external effect to be applied to all three Channels or just one of the Channels.
  3. If you want the effect applied to all three Channels, it’s super easy. Just select the Alligator and add your effect device (a Scream, for instance). The Scream is auto-routed after the Alligator and will be applied to the Gated signal from all three Channels.
  4. If, however, you want the effect applied to specific Channels, select the Alligator and add your effect device (a Scream, for instance). Then flip to the back of the rack (Tab), and move the main left/right Alligator audio output cables to one of the three Channel output pairs (High, Band, or Low Pass Channel).
  5. If you desire, you can do the same thing for the other two Channels by adding other effects and routing them in a similar way. This way, you could have a Chorus effect inserted into the High Pass Channel, a Scream inserted into the Band Pass Channel, and a Pulveriser inserted into the Low Pass Channel. Or you could have three different screams inserted into the three different Channels, and set the parameters/settings for all three Screams differently to get more variation.

The more you experiment with external devices, the more I’m sure you’ll find how versatile the Alligator is in processing your sounds. You might even wonder how you ever lived without this device.

The back of the rack, showing how to process the audio through single Alligator channels and external effects.

Here’s the video showing you the above two techniques:

Technique #7: Gating Three Different Audio Signals across Three Different Gate Channels

Here’s an interesting way to go about using the Alligator Effect device. How about taking three different sound sources and passing them through the three different Alligator Gate Channels, and then outputting them to three different Mixer Channels? Confused yet?

No, you can’t really do this with a single Alligator because the Alligator only has one pair of Inputs. However, you can create three Alligators, and then send your three audio sources into the three different Alligators (one audio source for each Alligator). Once this is done, you can send the separate channel outputs on to their own Mix Channel Inputs. The setup would look like this:

  • Audio Source 1 > Alligator “A” main left/right input > High Pass Channel left/right output > Mix Channel “A” left/right input
  • Audio Source 2 > Alligator “B” main left/right input > Band Pass Channel left/right output > Mix Channel “B” left/right input
  • Audio Source 3 > Alligator “C” main left/right input > Low Pass Channel left/right output > Mix Channel “C” left/right input

The nice thing about this setup is that you can choose to keep all three Alligators’ “Pattern” sections the same, in which case the patterns sync nicely together between all three Alligators, or you can spice things up by setting different parameters in each Alligator’s “Pattern” section. For example, you could set Alligator “A” to a resolution of 1/8, Alligator “B” to a resolution of 1/16, and Alligator “C” to a resolution of 1/32. And/or you can set all three Alligators to different patterns entirely. This way, you can get some very intriguing and unique gate patterns going, and all your friends will wonder how the hell you did it. Well, maybe not, but it’s worth a try!

Showing different audio sources processed by different alligator channels to achieve an interesting result.

Technique #8: Oh Hell, let’s just dive off the cliff already!

You want something completely off the wall, right? Well, here goes. Since the Alligator contains separate outputs, you could theoretically “Nest” the Alligators one after the other. This creates a Double-Gate-Filter-FX Channel for any audio source passed through it. Here’s how this little trick works at its simplest (note that you can create Triple-, Quadruple-, and so on, nested Channels up to the point where it probably just won’t sound good anymore and your CPU will have a heart attack and lie dead on the operating table). This technique goes down one nested level, and only highlights the “High Pass” Channel. Of course, there’s nothing stopping you from mixing and matching nested channels either. For example, a High Pass Channel goes into a Low Pass Channel, or a Low Pass Channel goes into a Band Pass Channel. Just be warned that you may need to write this out as a diagram on paper first before you lose yourself in cables.

  • Audio Source > Alligator “A” main left/right input > High Pass Channel left/right output. . .
  • . . . Alligator “A” High Pass Channel left/right output > Alligator “B” main left/right input > High Pass Channel left/right output > Mix Channel left/right input

So now that you’ve set it up in this way, if all your Alligators are set to the exact same parameters, adding on these additional “nested” Alligators won’t produce a different sound. Where things get interesting is when you start changing the parameters so each of the Alligators are set with unique parameters. See the Project Files at the top of this tutorial for a Combinator that shows this type of setup.

Of course, if you want to use all three channels, you’ll need to create more Alligators, and each nested level will require a new Alligator. You’ll also need a Spider Audio Merger/Splitter to both split the Audio Source into the three channels, and then use the Merged side to merge them back again before going on to the Mix Channel input. As I said, this could get very tricky very quickly, and will probably be quite CPU-intensive. My advice would be to start off with one Channel and two levels (as the example shows above), then start setting up the parameters on both Alligators. See how that works out for you. Then build it up by moving to the next Channel (if you like), and so on.

Processing your audio source through multiple Alligator Channels. This shows the High Pass Channel in series. But you can do the same with other Alligator channels.

Technique #9: Let’s Dial it Back a Bit. Adding Filter Movement

This technique is a little less freaky, and I’m going to end with it so you can give your mind a chance to wrap around that last little technique. In this one, we’ll do something easy. We’ll take some Malstrom Curves and send them into the Alligator’s Filter Frequency CV inputs to create some movement. Then, since there are only 2 Curves available on the Malstrom, we’ll steal the Alligator’s own LFO CV output and send it into the third Filter Frequency CV input (all within the same Alligator). It sounds more complicated than it actually is. Here’s an image that shows the setup:

Showing the Malstrom's 2 Mod A/B waves and the Alligator's LFO to provide movement to the Alligator's 3 filters.

Those are all the ideas I have for the Alligator at the moment. I’m sure there’s many other interesting ways it can be used. If you have any other ideas, I’d love to hear them, so please don’t be shy and comment to let me know what you’ve come up with. I’m all ears. Until next time, go make some killer music. 🙂

68 – All about the Alligator (Part 1)

In this tutorial I’m going to talk about the new Alligator device in Reason 6. I think no other device has mystified so many since the RPG-8, and a lot of people are reluctant to really dive into it, thinking it’s mainly built for electronic musicians. The truth is that it’s a very easy device to work with, and it has applications for all kinds of instruments and all kinds of genres. So don’t be intimidated by all the knobs and levers. It’s a veritable evil laboratory, but getting it under control is easier than you think, and that’s the focus here.

You can download the project files here: alligator-techniques. They contain a .reason file with all the techniques described below, as well as the separate combinators. You will of course need Reason 6 in order to load and use any of the files.

Introduction to the Alligator

The Alligator is billed as a “Triple Filtered Gate,” and that’s exactly what it is. However, it’s quite a bit more. It contains 64 patterns that can be manipulated, it has a few built-in effects (Drive, Phaser, and Delay), it has 9 LFO waveforms that can be used to affect the filters, and the Panning capabilities allow you to create some movement in the stereo field. And that’s not even taking a look at what you can do with the CV connections on the back of the device.

To start, let’s take a look at the various sections of the Alligator. When I’m starting off creating a patch for this device, I usually first load up a sound I want affected. So if it’s a Bass or a Synth or a Drum sound, all of these sounds will require a different approach to the Alligator. In other words, the sound I feed into it coupled with what I want to do to that sound in my head, will decide how I proceed with the device.

Following is an explanation of the parameters you will find on the front panel of the Alligator device.

The Alligator front panel with legend and explanation of the device functions.

And following is an explanation of the inputs and outputs found on the back panel of the Alligator device.

The back of the Alligator device with an explanation of the CV and Audio inputs and outputs.

And here’s the quick introduction video to show you the main components of the Alligator:

Technique #1: Creating a Dry / Wet Knob for the Alligator

Since the Alligator does not have a Dry/Wet knob, we have to get a little creative. This means wrapping the device inside a Combinator. Once there, you can use the Dry Level knob and program its direction to be the inverse of the individual Band level knobs. Set that up on a rotary in the Combinator and you have an instant Dry/Wet control for our Mister Alligator.
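The “inverse” programming is easier to see as plain arithmetic. Here is a hypothetical sketch of the mapping you would program in the Combinator, with a rotary (0 to 127) driving the band Level knobs directly and the Dry Level in the opposite direction; the function name is just for illustration:

    def dry_wet_from_rotary(rotary):
        # rotary: 0..127, like a Combinator rotary.
        # Wet: the band Level knobs follow the rotary directly.
        # Dry: the Dry Level runs the opposite way, so rotary = 0 is
        # fully dry and rotary = 127 is fully gated/filtered.
        wet_band_level = rotary
        dry_level = 127 - rotary
        return dry_level, wet_band_level

    print(dry_wet_from_rotary(0))    # (127, 0)   -> all dry
    print(dry_wet_from_rotary(64))   # (63, 64)   -> roughly half and half
    print(dry_wet_from_rotary(127))  # (0, 127)   -> all wet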

Technique #2: Keeping your Gates Open

You’ll notice that the Alligator by default uses a pattern to open/close the gates. You can turn the pattern on or off, but what if you want to keep the gates open all the time? The easy solution is to do the following:

  1. Set the Alligator pattern to #60
  2. Flip to the back of the Alligator and send the Gate 1 CV output to Gate 3 CV input (both on the same Alligator device)
  3. Send Gate 2 CV output to Gate 1 CV input (both on the same Alligator device)
  4. Send Gate 3 CV output to Gate 2 CV input (both on the same Alligator device)

If you flip back to the front of the Alligator you will see all the gates are permanently on. This means that you can still use the LFO, Frequency, Resonance, all the Effects (Drive, Phaser, Delay), and Mixer controls to affect the sound, but you bypass the Gate section of the Alligator. It’s always on.

Just note one thing when you do this: You want to keep the Amp Envelope Decay set to full (fully right). If you lower the Amp Envelope Decay knob, the gate will fade out (even though it is completely open). If this happens, you’ll have to first move the decay knob all the way right, and then reset the CV on the back of the Alligator (unplug all 3 CV connections, and plug them back in again).

Alternately, you can take a one-step tied curve pattern from a Matrix, split it 3 ways through a Spider CV, and send it to all 3 gate inputs, but this means creating additional devices when it can all be accomplished with a single Alligator.

This video will show you how to set up the above 2 Techniques:

Technique #3: Creating your own Patterns to Control the Gates

You’re not limited to the 64 patterns that are built into the Alligator (though you can definitely have a lot of fun with so many different patterns). You can easily use 3 Thors or 3 Matrixes (Curve/Gate CV) to control all 3 gates in a single Alligator. To do this, first turn off the pattern section in the Alligator (the big “ON” button at the top of the Pattern section). Once you do this, create your Thors or Matrixes and flip to the back of the rack. Send the CV from the Step Sequencers into the 3 Gate CV inputs and then start all of the pattern devices up (this is easier to do if everything is Combined in a Combinator; that way, when you press “Run all Pattern Devices” or press “Play” on the Transport, the Step Sequencers start gating the Alligator). Dead simple, my friends!

Best of all, this means you can create any kind of gate of any length you can imagine (See my “Matrix” series of tutorials #48-51 or Thor sequencing ideas #60-62 for ways in which you can extend the length of your patterns).

Technique #4: Stealing the Patterns to sequence other Reason Device Parameters

Forgetting about the Alligator’s intended purpose for a second, you can use its built-in patterns to affect any other parameter in any of Reason’s devices (just about). In this way I got pretty excited to see that you can use the Alligator as an “already pre-configured Matrix with double the amount of patterns” — yeah that’s pretty exciting for a nerdy nerd like me. It means I don’t have to tediously program two matrixes filled with patterns (though truth be told, if you’ve read article #3 in my 101 Creative Projects category, you already have a huge array of Matrixes from which you can copy/paste into any of your projects, right?).

In any event, to get the ball rolling, pick a pattern you like. Then flip to the back of the Alligator and disconnect the audio cables. You’re only using the pattern section here to trigger something else in Reason. And since you have 3 gates, this means you can modulate three other parameters from a single Alligator device (or how about using a spider to combine the three gates and sending the merged output to control a single parameter). I think you get the picture. This is a very quick and easy way to control things via CV.

One idea is to use the Gates in the Alligator to play the Kong drum designer. Send the three gates of an alligator into 3 drum pad CV inputs on the back of Kong, and then you can set up some pad groups in Kong so that you get even more variation. Finally, set up a Matrix curve to control the “Shift” knob via one of the CV inputs in a Combinator, and you have instant “Groove” for your drums, without ever using the ReGroove. It’s a nice alternate way to get some drums going quickly in your tracks. For the full feature on how this is done, see the video below.

Tip #1: Tuning your Filters

This concept was provided by Peff when he was doing his tutoring session in Las Vegas, and I really do hope he doesn’t mind me providing the tip here. In the interest of full disclosure, he gets full credit for this one.

Tuning filters is not a concept I was all that familiar with, but armed with this knowledge, it actually makes perfect sense, and has applications that reach out much farther than just the Alligator. But that could be a whole tutorial in and of itself. For our purposes, tuning the Alligator filters is a way to produce a more even sound coming out of the device. The idea is that you pick a frequency as your “Base” and then set the other filters up so that they are multiples of this “Base” frequency. So if you set up the LP Filter on the low end to be 200 Hz, then the BP Filter could be set up to 400 Hz and the HP Filter could be set to 800 Hz, which should produce a “cleaner” tone than if the filters were out of sync or out of tune.

Now while this is a handy technique, I should also say that going for a sound where the filters are more out of tune is perfectly valid. This is not a practice that should be set in stone. It’s more a technique that you should understand and get acquainted with and add into your arsenal of knowledge. But don’t be afraid to venture outside this technique.

You’ll also notice that it’s not always possible to get a precise multiple of a specific Filter Frequency. But generally, the closer you are to a multiple, the more “in tune” the filters should be with each other.
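If you want to precompute a few “tuned” settings before touching the knobs, the arithmetic is just whole-number multiples of whatever base frequency you pick (octaves in the article’s 200/400/800 Hz example). A small sketch, with a made-up function name just for illustration:

    def tuned_filter_frequencies(base_hz=200.0, spacing=2.0):
        # Pick a base for the LP filter, then set the BP and HP filters to
        # whole multiples of it (spacing=2.0 gives octaves: x2 and x4).
        # You won't always be able to dial in the exact value, so just get
        # as close to the multiple as the knob allows.
        lp = base_hz
        bp = base_hz * spacing
        hp = base_hz * spacing ** 2
        return lp, bp, hp

    print(tuned_filter_frequencies(200.0))  # (200.0, 400.0, 800.0)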

Tip #2: Taming the Dreaded Pops & Clicks

One thing that still bothers me to this day is how quirky the envelopes and LFO can be in the Alligator. Under certain settings, you can hear noticeable pops and clicks which are most definitely unwanted. Here are a few ways to deal with this if you find it happening to you.

First, it’s important to note what’s causing the pops and clicks in the first place. More often than not, it’s a result of a short Attack Time in the Amp Envelope coupled with a slow-running LFO with a sharp edge (think the Pulse or stepped Waveforms). The lack of a lag feature (which is available in the Pulveriser) means that you can’t smooth out the LFO. And when it’s running too slow, and the attack time is short, this is usually a recipe for disaster. Here are a few hints to get you out of this jam. Note that all of these methods will change the sound of the gated effect, but there’s really no way around this that I’ve found.

  • Use a smooth LFO, such as the Sine Wave or even the Triangle Wave. Stay away from the Stepped, Ramp, or Pulse waves.
  • Adjust the Amp Envelope’s Attack time to be slower (turn the knob more to the right). Times that are above 25 or 30 work well.
  • Don’t use the LFO at all. Ensure that all the LFO knobs for the bands you are using are all pointing due west! This means the LFO does not affect the bands whatsoever.

So there are a few tips and tricks for you to get acquainted with the Alligator. Give it a whirl on any kind of audio just to get a feel for it, and have some fun gating your audio. Until next time, happy Reasoning!

67 – 7 Songs in 7 Days

When you get a creative spurt, don’t dismiss it! Make sure you embrace it, accept it, harness it, and get it out of you immediately. Most importantly, learn to recognize that it’s happening (or when it’s about to happen), and let it ride for as long as you can. I can tell you this from experience: you will have dry spells, and they tend to last longer than the creative spells. So it’s important to read the signs and act on them.

You can download the project files here: 7-Songs-in-7-Days. There are 2 song files zipped up. Note that they are in the “.reason” format, which means you need Reason 6 in order to open them. But even if you don’t have Reason 6, you can follow along with the project below, watch the video, and adjust it for previous versions of Reason. You’ll probably also learn more by doing it this way. The first song is “7-songs,” which is outlined in the videos below, and the second is “Crease Kink,” which I threw in there in case you want to remix it or play around with it. Have some fun with it, but remember that I own the rights to it. So if you use it, please let me know and I’ll showcase it here (to date no one has done anything with one of my files, but there’s always hope). 🙂

Usually when I’m in a dry spell, I tend to notice it pretty quickly. And when I do, I immediately shift gears and work on a different project. For example, recently I was creating nothing. It happened about 3 months ago. So shifting focus, I started working on designing sounds — LOTS of sounds. Rather than sit at my computer with a blank canvas open or hacking away at nonsense, I shifted focus. This does two things:

  1. It gets your mind out of the clouds and away from the blank slate.
  2. It usually rejuvenates you and lets you recharge your batteries. Kind of like a holiday. We all need them from time to time to rest and clear the mind.

When I got back (and after producing my latest Pureffects ReFill), I was more creative than I’d ever been. And this is all just a long-winded way of saying I took on a new challenge: create 7 songs in 7 days. Not an easy challenge, to be sure, but a very fun one nonetheless. And though not every song is spot on, I’m happy with most of them, and with a few modifications, I could say they are finished.

The Main Sequencer after adding some of the elements you’ll learn in this creative project.

First: The Songs

So first, here’s the songs in order of creation (day by day). Skip all of this if you just want to get to some of the good tricks that I used inside them. You’ll find some of those tricks down below. Otherwise, please have a listen and comment. One can never have too much feedback.

Day 1: Friday October 14, 2011 – Crease Kink (Glitchcore)

Crease Kink by Phi Sequence

Day 2: Saturday October 15, 2011 – Redaction (Dark Ambient)

Redaction by Phi Sequence

Day 3: Sunday October 16, 2011 – 200 Years (Dark Ambient)

200 Years by Phi Sequence

Day 4: Monday October 17, 2011 – Lens Fold (Dark Experimental)

Lens Fold by Phi Sequence

Day 5: Tuesday October 18, 2011 – Vicious Viscous (Dance)

Vicious Viscous by Phi Sequence

Day 6: Wednesday October 19, 2011 – Severed (Electronica)

Severed by Phi Sequence

Day 7: Thursday October 20, 2011 – Palatinate (Electronica Dance)

Palatinate by Phi Sequence


Second: The Methods & Tricks

Now for a few tricks. I thought I would show a few ways I went about creating some of these songs. It’s not really a formula, but it’s a way I used to start off these songs and keep going in order to create so many in such a short time. There are, of course, a million and one ways to create songs. This hopefully will show you a few techniques that might inspire you to jump into a new direction with your own music. I hope so.

Before I start, let me first say that a lot of the time, I'll create all the sounds from scratch using the Reason synths. But for the sake of efficiency, and in order to create a song every day, I opted for the approach of using patches in the Factory Sound Bank (FSB) for all the instrumentation, and my Pureffects ReFill for some of the effects. Since I had already created thousands of effects patches (and some in the FSB as well), why not put them to good use? In this respect, it was a bit of a novel approach for me, but one that was very rewarding.

Let’s start by looking at the videos on how I started out a song. This gets you about 50% done in less than an hour.

Of course, after these tutorials, there is still work to be done. Mainly some EQ work on the various elements, continuing with the arrangement in the sequencer, throwing in additional elements to spruce it up a little, adding compression here and there, gluing the track together with a Reverb or two, adding a mastering suite to open up the track, and finally using the Master Bus Compressor from the main mixer (if you choose). But this is where I’m going to leave you for now. If you want to pursue these tracks, I have put two in the Download up at the top of the tutorial. Feel free to work on those and come up with your own mix. If you do, please share it with us so we can have a listen.

Starting off with a few Drums

Let's start at the beginning. Usually the first thing is to lay down your drums; that's a normal starting point for most tracks. So we'll start with a Redrum. Now normally, you would have an idea in mind before you start a track. But I seldom work that way. In truth, I usually lay down a few drum sounds in a pattern in the Redrum and wait until I hear it. I then start to get an idea of how to shape the sound and where it's going to go. In this respect, the song becomes a combination of what I hear first and the decisions I make about how to shape it as it's playing. This is usually backwards from the way most people compose, but it's worked for me in the past. So I'm going to do the same thing here.

Once you create a Redrum, browse the assorted kick drums in the FSB and load one into the first channel. I usually keep them playing as I'm selecting them so I can get a feel for what I'm going to use as a kick drum. Once that's done, flip to the back of the rack and send the Gate Out from channel 1 into the Gate In on channel 2. Do the same thing between channels 2 & 3. Then flip to the front of the rack and select drum channel 1. Lay down a pattern (I usually lay down a 64-step pattern to take full advantage of all the steps in the Redrum, and to have something that sounds like it has a lot of changes throughout). Since I've connected the Gate from channel 1 to channels 2 & 3, I can layer the kick with 2 other kick sounds, but I only have to enter 1 pattern in the Redrum. This speeds up the workflow a little (and is why you should get familiar with your CV connections!).
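
If it helps to picture what that Gate chaining is doing, here's a purely conceptual Python sketch (not anything Reason runs, and the synthesized "kicks" and tempo are just made-up stand-ins): one 64-step pattern fires its gates, and because the gate is chained from channel 1 down to channels 2 and 3, every hit triggers all three layered kick samples at once.

```python
import numpy as np

SR = 44100  # sample rate

def place_hits(pattern, step_len, sample):
    """Render a gate pattern by dropping a one-shot sample at every active step."""
    out = np.zeros(len(pattern) * step_len + len(sample))
    for step, gate in enumerate(pattern):
        if gate:
            start = step * step_len
            out[start:start + len(sample)] += sample
    return out

# Made-up stand-ins for three kick samples pulled from the FSB.
t = np.linspace(0, 0.25, int(SR * 0.25), endpoint=False)
kick_a = np.sin(2 * np.pi * 55 * t) * np.exp(-t * 18)   # low thump
kick_b = np.sin(2 * np.pi * 80 * t) * np.exp(-t * 25)   # mid punch
kick_c = np.sin(2 * np.pi * 120 * t) * np.exp(-t * 35)  # clicky top

# One 64-step pattern is entered on "channel 1" only...
pattern = np.zeros(64, dtype=bool)
pattern[::8] = True                      # a simple, steady kick placement
step_len = int(SR * 60 / 130 / 4)        # 16th-note steps at 130 BPM

# ...but because the Gate is chained 1 -> 2 -> 3, the same gates
# fire all three layered kicks, just like the CV trick above.
layered = sum(place_hits(pattern, step_len, k) for k in (kick_a, kick_b, kick_c))
```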

So now add two more kick drums in Channels 2 & 3. Once this is done, I shape the levels, length & pitch of all 3 drums together. It’s at this point that I start to get a feel for how things work together. If something isn’t working right, I’ll either change one of the samples, or adjust those parameters.

Next, I add 2 Snare drums in channels 4 & 5 and use the same gate CV trick between the two. This means I need to create a snare pattern on channel 4.

And finally I'll load 2 hi-hats on channels 8 & 9 and set the "Channel 8 & 9 Exclusive" button so the sounds on those two channels cut each other off. This way I can set up an open hi-hat on 8 and a closed hi-hat on 9. I don't always use open/closed hi-hats, but it's there if you want it.

Now create 2 additional Mix Channels and put them under the first Mix Channel for the Redrum. Label the three Mix Channels Kick, Snare, and Hi Hat. Create 3 Spider Audio merger/splitters and use them to merge the individual outputs of each drum group and route each merged signal into its respective Mix Channel. This way you end up with 3 channels for the 3 different drums. You'll see why we do this in a minute.

And now copy the Redrum pattern to the track. Go to the track and, using the Tools window (F8), use the "Explode" function to explode the drum clip onto separate lanes (you may want to delete the old lane and label the 3 different clips so you know what's what). I've done this enough times to know that the order is the hi-hats at the top, the snare drum in the middle, and the kick drum at the bottom. Easy enough.

So why separate Mix Channels? The reason is that you want to be able to process each drum differently, and this setup lets you do that with a minimum of fuss. So first off, select the Kick Mix Channel, right-click, and go to Effects > Pulveriser. The device is automatically routed inside the Mix Channel's "Insert FX" section. If you're new to Reason 6, this might be confusing at first, but think of each Mix Channel as having its own built-in Combinator where you can place effects and even route/program them as you would a normal Combinator. It keeps your rack clean and makes working with effects easier. Trust me on this.

With the Pulveriser, I'm going to set up a nice beefy kick (watch the video for how this is done). After that, I'm going to have a little bit of fun by varying the pitch somewhat on 2 out of the 3 kicks. Flip to the back and you'll see that the Pulveriser can not only beef up the kicks; we can also steal some of its modulation to affect the pitch of those kicks. So take the Tremor / Follower outputs and send them into the Pitch CV inputs of 2 of the kick drum channels on the Redrum. It's also a nice idea because it doesn't require any extra devices to set up.
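
Conceptually, what that CV routing does to the two modulated kick channels looks something like this rough Python sketch (just an illustration with invented values, not Reason's actual DSP): at each kick hit, the modulation source is sampled and turned into a small pitch offset for that hit.

```python
import numpy as np

SR = 44100

def tremor_lfo(time_s, rate_hz=0.5):
    """A slow sine wave standing in for the Pulveriser's Tremor output."""
    return np.sin(2 * np.pi * rate_hz * time_s)

def repitch(sample, semitones):
    """Crude resampling 'pitch' change, like nudging a Redrum channel's Pitch knob."""
    ratio = 2 ** (semitones / 12)
    idx = np.arange(0, len(sample), ratio)
    return np.interp(idx, np.arange(len(sample)), sample)

# Made-up kick sample and hit times (in seconds) taken from the pattern.
t = np.linspace(0, 0.25, int(SR * 0.25), endpoint=False)
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-t * 20)
hit_times = [0.0, 0.46, 0.92, 1.38]

depth_semitones = 3   # how hard the CV wiggles the Pitch input
for hit in hit_times:
    offset = tremor_lfo(hit) * depth_semitones
    detuned_kick = repitch(kick, offset)
    # ...mix detuned_kick in at `hit` seconds, alongside the unmodulated kick layer.
```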

Now I'm not going to go into processing for the snares and hi-hats, but I think you get the idea. Don't be afraid to add a few effects, and when you're done, you can also add an EQ to each drum individually to cut the low end and boost certain frequencies a bit. That's one method of working with the drums. Now on to the bass.

Working the Bass into the Mix

Now we move on to the bass. Again, this is just one of the methods I used for this song challenge. It's nothing fancy. Start by adding a bass instrument and make sure it's a bass you like: something with a long sustain so the notes can ring through. Don't worry if the notes sound too long; we'll adjust their lengths in a minute. For now, just find a bass sound in the FSB that works with the drums and gives you some degree of pleasure when you hear it.

Now go to the sequencer and draw in a 4-8 bar bassline. Again, don’t get too caught up in the composition. You can always change it later. Just something simple. This will be the driving sound for the song, for the most part. At least that’s what it is for me.

Now loop the clip. This lets you audition the effect we're going to add, and it also lets you switch out the bass for another bass instrument if you don't like it. I do both of these things. I also add another one or two basses and copy the clip to those other instruments. This way we have a nice thick layered bass, just as the drums are layered.

For the effect, my go-to device is an Alligator. This is because it really is a nice way to add some movement and also since I’ve created a bazillion Alligator patches, I can audition them as they play through. Once I find one I like, I keep it. And since I usually have at least 2 bass instruments, I have the lower bass line kept as the sub bass without any effects, and the higher basses utilize the Alligator. In this way, you end up with a nice lush line playing through your song.

Another Fun Audio/Alligator Trick: Introducing some Pad-like Elements.

Here's something fun that I've been toying with in most of these songs. It involves adding one or two audio channels and dropping a short melodic sample into the audio track. Then I'll take the sample and stretch it to the entire length of the song (or beyond, depending on how the waveform looks). The idea is to have the audio act as a pad sound for the entire length of the song. And since you're stretching it out beyond recognition, you end up with a real twist on the sample.

Once you do this, make sure to lower the level significantly. The idea is to have the sound running in the background so it’s audible, but doesn’t take over the drums or bass. A nice subtle sound.

To complete the trick, I add an Alligator to the audio channel. It's important to remember that effects work on audio tracks just as they do on your instrument devices. So adding an Alligator provides some nice movement to your pad sounds. I wouldn't go crazy with the Alligator in this situation; just a slight movement to the sound is all you need. So again, audition some of the Alligator patches in the FSB, or get my Pureffects ReFill for a selection of 200+ Alligators. 🙂

Finally: Some Dr. OctoRex accents to really Kick it up a Notch

Lastly, let's add some accents using a loop. You'd think a single loop can't do much, but wait until you see this fun little trick. There's no real mystery to it: while the song is playing, create a Dr. OctoRex and find a loop that goes along with the song. For this experiment I wanted something with a hard edge, so I opened up the "Hardcore" folder in the FSB and found a loop that I thought had some possibilities. I then set about filtering it, adjusting the Octave setting, and adding the LFO for some modulation. Once this is done, copy the loop to a portion of the track (4-8 bars long) and then turn off the "Enable Loop Playback" button so you don't get doubled notes.

Now in the sequencer you can go about altering the notes and creating some variation.

Finally, copy the Dr. OctoRex and its track. Make some adjustments to this copy. Then put the copied Dr. OctoRex inside a Combinator and create an RPG-8 Arpeggiator to play that Combinator. Go to the sequencer and move the notes from the second Dr. OctoRex track down to the Arpeggiator track. Then go back to the rack and copy the loop into all 8 slots. This way, if the notes driving the RPG-8 happen to switch the loop slot, the same loop is still playing.

From there, you can go to town switching things around in your loop. Changing the octave, updating some settings on the RPG-8, and so on will usually produce some interesting results. The nice thing is that both loops have the same groove and timing, so they will sound like they belong together. Indeed, this is where I had the most fun: playing around with settings until I found the right balance between the loops.

Don’t forget you can also edit the slices directly, or use some CV to modulate things further. Experiment and play until you make the loop your own. That’s really the key here.

 

The front of the rack after adding the Drums, Bass, Audio Track (Pad), and some Dr. OctoRex Loop accents.

So there you have it. One method I used to create seven songs in seven days. Now go out there and challenge yourself to create your own group of songs using your own methods, or incorporating some of the methods I outlined here. The sky is the limit. So reach for the sky!

66 – The ECHO Echo echo. . .

Today let’s take a nice little look into the world of The Echo. New to Reason 6, this is one of 3 new effects which were much anticipated by the User Community (Do we all remember posts with subject lines reading “We need better effects in Reason”? I do).

You can download the project files here: Echo-Techniques. There are some Combinators and a .reason file showcasing the examples found below. In the .reason file, I’ve used mutes to silence all the tracks. To listen to a track, unmute it. I hope you find some of these tricks useful.

Also don't forget that my latest ReFill, Pureffects, is available with 1,250 effect patches designed specifically with Reason 6 in mind. There are 200 Echo patches alone, and about 30 or so Echo Combinators with all kinds of interesting routing ideas. For example, ever thought about setting up Kong to "play" the various Delay Times via your pads? Since there are 16 synced times in The Echo's delay, they are perfect for mapping to the Kong pads. Yup. It's in there. Read More about the ReFill.

Overview of The Echo

The Echo brings some nice new features into Reason. It's an echo/delay device that can also flange things around, and it sports a new echo algorithm that you won't find inside the RV7000. As with all Reason devices, it doesn't replace anything; instead it adds something new to Reason. So it's not meant to be a replacement for the DDL-1 or the RV7000's echo algorithm. They all have their place inside Reason.

One of the things I really like about The Echo is the "Keep Pitch" button. Essentially, this keeps the pitch steady as you adjust the Delay Time, so you don't get those insane squelches you get when you change the Delay Time in Thor or on the DDL-1. This is unique to The Echo.

Another unique offering is the way in which you can control The Echo. There are 3 ways to control it:

  1. Normal: Allows you to use The Echo as an insert or send device in any situation. The Dry / Wet knob determines the amount of Echo produced. This mode is probably the one you will be most familiar with, as it’s the way most effects are set up in Reason.
  2. Triggered: Allows you to trigger The Echo via the Trigger button or by sending CV to the Trigger input on the back of the device. The only thing to remember with this mode is that any audio being processed through The Echo will not sound unless it is triggered, which can be a little confusing at first. Honestly, I would have preferred that the audio pass through the device dry when not triggered, so that you could still hear the unprocessed audio and then hear it processed through The Echo when triggered. But I'll show you a way around that later.
  3. Roll: This is a really nice option when you want to play a stuttering effect on the audio. The Roll feature boosts both the Feedback and the "Wet" signal as the Roll lever is moved to the right. When the lever is all the way right, both Feedback and the Wet signal are at 100%; fully left, both are at 0%. This makes it very useful as a "Freeze" or "Beat Repeat" function (see the conceptual sketch after this list).
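
To make the Roll behaviour concrete, here's a tiny conceptual Python sketch of how I read the lever from the description above (whether the ramp is exactly linear is my assumption; this is a mental model, not The Echo's actual code):

```python
def roll_amounts(lever):
    """Conceptual model of the Roll lever, per the description above:
    0.0 (full left)  -> Feedback 0%,   Wet 0%
    1.0 (full right) -> Feedback 100%, Wet 100%
    """
    lever = min(max(lever, 0.0), 1.0)   # clamp to the lever's travel
    return lever, lever                 # (feedback, wet)

# Push the lever three quarters of the way over for a heavy "beat repeat" feel.
fb, wet = roll_amounts(0.75)
print(f"feedback={fb:.0%}, wet={wet:.0%}")   # feedback=75%, wet=75%
```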

And of course you have the Color, Modulation, Ping-Pong mode, Offsets for Delay and Feedback, and Ducking. I’m not going to go into all of these features because the manual does a pretty good job of explaining them. Instead, I’m going to show you a few practical ways you can use them in your compositions.

Trick #1: Echoing Slices in a Loop via Parallel Processing

This idea uses 2 instances of the same loop to parallel process the drum beats. One will be the original unprocessed Loop, and the other will have a few slices of the same Loop run through The Echo. It’s a very simple process, but it can be a lot of fun when you’re spicing up your loops.

Start by creating a 6:2 Mixer inside a Combinator. Then create a Dr. OctoRex Loop player and initialize the device. Open up the lower section of the Dr. OctoRex and load up one of your favorite loops into slot 1. Now Duplicate the Dr. OctoRex Loop player. Flip around to the back panel (Tab), and connect the second Dr. OctoRex to Channel 2 (the first Dr. OctoRex should be connected to Channel 1). Flip back to the front, and select the Second Dr. OctoRex. Right-click and select Effects > The Echo. This will automatically connect The Echo as an insert device between the second Dr. OctoRex and the 6:2 Mixer. Perfect.

Now for the important part. Flip to the back of the Rack, and move the output of the second Dr. OctoRex from the Left / Right output to the 1/2 output instead.

Parallel Processing specific Slices in a Dr. OctoRex with The Echo (back of the Rack)

And now comes the fun part. Set both Dr. OctoRex devices to “Select Slice By Midi.” This allows you to see the slice focus as the loops are running. On the first Dr. OctoRex, select the “Slice Edit Mode” button and select the word “Level” in the Loop display area (not the “Level” knob). In the second Dr. OctoRex, select the “Slice Edit Mode” button and select the word “Out” in the Loop display area (not the “Out” knob).

Find a few key slices as the loop plays (the bass drum is a pretty good choice in most cases), and in the second Dr. OctoRex, pencil in a value about a quarter of the way up over these key slices. You should hear The Echo affecting those slices. In essence, the slices you set to be output from 1/2 (a quarter of the way up in the display) are processed through The Echo. The reason it's a parallel process is that the original loop is still playing simultaneously. If you don't want the slices processed in parallel, go to the first Dr. OctoRex and reduce the level completely on the same slices you just sent to The Echo. You can introduce a little of the original signal back in by raising this level on the first Dr. OctoRex's slices, or you can raise and lower the levels in different ways to add a little variation and groove. It's a very versatile method.
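
If the routing is hard to visualize, this little conceptual Python sketch models the idea (a made-up 8-slice loop and a bare-bones delay standing in for The Echo; it isn't Reason code): the full loop plays dry, only the slices flagged for outputs 1/2 feed the echo, and the two are summed in parallel.

```python
import numpy as np

SR = 44100

def simple_echo(x, delay_samples, feedback=0.4, repeats=4):
    """A bare-bones stand-in for The Echo: a handful of decaying repeats."""
    out = np.copy(x)
    tap = np.copy(x)
    for _ in range(repeats):
        tap = np.concatenate([np.zeros(delay_samples), tap])[:len(x)] * feedback
        out += tap
    return out

# A made-up loop of 8 equal slices; slices 0 and 4 are the "kicks" we picked.
slice_len = int(SR * 0.125)
rng = np.random.default_rng(0)
slices = [rng.standard_normal(slice_len) * 0.1 for _ in range(8)]
echo_slices = {0, 4}   # the slices penciled up to the 1/2 outputs

dry = np.concatenate(slices)                        # first Dr. OctoRex: untouched loop
wet_source = np.concatenate([
    s if i in echo_slices else np.zeros_like(s)     # second Dr. OctoRex: chosen slices only
    for i, s in enumerate(slices)
])
wet = simple_echo(wet_source, delay_samples=int(SR * 0.1875))

mix = dry + wet   # parallel: the original loop keeps playing under the echoed slices
```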

If you really want to start tweaking things, try transposing the second Dr. OctoRex loop by a few semitones or a whole Octave. If it sounds good, go with it. And when you’re done, save your creation in a Combinator.

Parallel Processing Specific Slices in a Dr. OctoRex Loop with The Echo (Slice Edit Mode)

Trick #2: Using a Matrix to Trigger Echoes

This idea is very basic but shows you how to use The Echo's Triggered mode. The idea is to use a Spider Audio to split the signal of a Dr. OctoRex loop, with one split going into a Mixer channel and the other split processed by The Echo and then sent to another Mixer channel. Then take a Matrix Pattern Sequencer and send the Curve CV from the Matrix into The Echo's Trigger CV input on the back of the device. On the front panel, set The Echo to "Triggered" mode and set the Matrix to "Curve" mode. Start the main sequencer by pressing the "Play" button and start drawing in some steps on the Matrix curve. The steps should be drawn in at full level, and you don't need a lot of them. Start by drawing in one or two on a few key beats. You'll start to hear how The Echo affects the loop, so draw in your steps by letting your ears decide where they should go.
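
Here's a rough conceptual Python sketch of what Triggered mode is doing with that Matrix curve (invented numbers, and certainly not how The Echo is implemented internally): audio only enters the delay line while the trigger signal is high, so you only hear echoes of the beats the curve grabbed.

```python
import numpy as np

SR = 44100

def triggered_echo(audio, trigger_curve, delay_samples, feedback=0.5):
    """Rough model of Triggered mode: audio only enters the delay line
    while the trigger curve (our stand-in for the Matrix Curve CV) is high."""
    buf = np.zeros(delay_samples)        # circular delay buffer
    out = np.zeros_like(audio)
    write = 0
    for n, x in enumerate(audio):
        delayed = buf[write]                         # the oldest sample in the buffer
        gated_in = x if trigger_curve[n] > 0.5 else 0.0
        buf[write] = gated_in + delayed * feedback   # only triggered audio feeds the echo
        out[n] = delayed                             # you hear echoes of the grabbed beats
        write = (write + 1) % delay_samples
    return out

# A made-up bar of loop audio, plus a curve with two full-height steps.
loop = np.random.default_rng(1).standard_normal(SR) * 0.1
curve = np.zeros(SR)
curve[:SR // 16] = 1.0                           # a step on beat 1
curve[SR // 2:SR // 2 + SR // 16] = 1.0          # a step on beat 3

echoes = triggered_echo(loop, curve, delay_samples=SR // 8)
```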

Here’s a video that showcases how to set up the above 2 tricks. Don’t worry, it’s really not as hard as it sounds. Once you understand the concepts, you can really have some fun with these methods.

Trick #3: The Echo’s Feedback Loop

Using the built-in feedback loop of The Echo is something that hasn't been talked about very much since Reason 6 was released a few days ago, but I'll bet this is one area that will get a lot of attention in the coming weeks and months. If not, it should. The feedback loop is one helluva tremendous feature: really simple to implement, with a lot of possibilities. The only thing I would advise when you start fiddling with the feedback loop is to reduce The Echo's Feedback down to zero before you start connecting devices to it. Make sure there is no Feedback Offset either; keep that knob centered at zero (0). Otherwise you can really destroy your ears if the Feedback is set very high and you accidentally connect a Scream to it, for instance.

No doubt someone will eventually do this. And for that poor soul I guarantee it will end up being the last time they do it.

So now that you know the rules of the road, here’s how you set it up:

Load up a sound device, like your favorite Thor, Malstrom, Subtractor, or any Audio Track if you prefer.

Create an Echo device underneath the sound source (or you can place it inside the Audio/Mix Channel's "Insert FX" section). Turn the Feedback knob down to zero. I also turn the Delay Time down to zero, but this isn't necessary.

Hold your Shift key down (so the new device isn't auto-routed) and, under The Echo device, create your favorite effect device from the "Effects" subfolder (to make it interesting, select a Pulveriser).

Flip to the back of the rack and connect the Pulveriser’s audio outputs to The Echo Feedback Loop audio inputs (Breakout Input). Then connect The Echo Feedback Loop audio outputs (Breakout Output) to the Pulveriser’s audio inputs.

The basic setup is complete. Now you can flip to the front of the rack and slowly raise The Echo’s feedback knob. I would probably set the knob to somewhere around 20. Then start to play with the parameters on the Pulveriser. Don’t go crazy. Just slowly introduce a few things such as some dirt, filtering and dry/wet blend. If you don’t hear anything too exciting, keep experimenting by turning up the feedback a little more, then increase the dirt and perhaps the squash settings. Try out the various filters.
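
If you like to think of it in signal-flow terms, here's a hedged conceptual Python sketch of the breakout idea (the saturation stand-in and all the numbers are invented; it's a mental model, not The Echo's internals): each repeat is sent out of the feedback loop, run through an external processor, and fed back in at a low feedback amount.

```python
import numpy as np

SR = 44100

def dirt(x, amount=0.6):
    """Stand-in for the Pulveriser in the breakout loop: a touch of saturation."""
    return np.tanh(x * (1 + amount * 5))

def echo_with_breakout(audio, delay_samples, feedback=0.2, process=dirt):
    """Rough model of the breakout: every repeat leaves through the Breakout Output,
    passes through `process`, and comes back in scaled by the (low!) feedback amount."""
    buf = np.zeros(delay_samples)
    out = np.zeros_like(audio)
    write = 0
    for n, x in enumerate(audio):
        delayed = buf[write]
        out[n] = x + delayed                          # dry signal plus the echoes
        buf[write] = x + process(delayed) * feedback  # processed feedback re-enters the delay
        write = (write + 1) % delay_samples
    return out

# A tiny click standing in for a note from your favorite Thor patch.
note = np.zeros(SR * 2)
note[:200] = np.hanning(200)
result = echo_with_breakout(note, delay_samples=SR // 4, feedback=0.2)  # "around 20"
```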

You’ll start to hear how it all affects the feedback inside The Echo. And believe me, there are hours of fun found just in creating weird feedback loops. I created a lot of different feedback loops inside my Pureffects refill for Reason 6. I even fed a second Echo device through the feedback loop in one patch, and an RV7000 in another patch. Though when you feed delay effects through the Feedback Loop, you really need to mind your manners. As you can imagine, piling delays upon delays can be a recipe for disaster. Though with the right settings, you can produce some phenomenal results. It goes without saying, but I’ll say it again and again: Experiment! Experiment! Experiment!

Here’s the Video for setting up a Feedback Loop in The Echo:

Trick #4: The Zipper Effect

Here's a little trick to create an off-the-wall sound with the LFO feature of The Echo. I personally like it for a very rapid delay that sounds somewhat like a zipper, so I'm calling it the zipper effect. It's also very simple to set up.

The "Zipper" effect in The Echo
The "Zipper" effect in The Echo

Here’s a short video on how this is set up:

Trick #5: Automating the Roll feature in The Echo with an LFO

This is another one of those simple ideas that can produce some nice results. Try automating the Roll in The Echo with an LFO. Any LFO will do. You can even use the Pulveriser’s Tremor feature (essentially an LFO) to control your Rolls.
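
As a rough illustration (same caveats as before: conceptual Python only, and the linear Roll mapping is my own assumption), sweeping the Roll lever with a slow LFO simply means the feedback and wet amounts rise and fall along with the LFO:

```python
import numpy as np

def roll_amounts(lever):
    """Same conceptual Roll mapping as in the earlier sketch: (feedback, wet)."""
    lever = min(max(lever, 0.0), 1.0)
    return lever, lever

# A slow LFO (a Subtractor LFO, or the Pulveriser's Tremor) sweeping the Roll lever.
lfo_rate_hz = 0.25
for beat in range(8):
    t = beat * 0.5                                            # seconds, at 120 BPM
    lever = 0.5 + 0.5 * np.sin(2 * np.pi * lfo_rate_hz * t)   # unipolar 0..1 sweep
    fb, wet = roll_amounts(lever)
    print(f"beat {beat}: roll={lever:.2f} -> feedback={fb:.0%}, wet={wet:.0%}")
```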

A Subtractor LFO connected to The Echo's Roll CV Input

Here’s a short video on how this is set up:

Bonus Trick #6: Processing a Kong Snare Drum through The Echo

Much like Trick #1, where the Slices are independently sent out to The Echo, you can also set up a Kong Drum Kit, and send specific drums out to The Echo for processing. This video shows how you can set up a Snare Drum to be processed by a Pulveriser and an Echo, while all the other drums are sent out Dry. Once you understand these concepts, there’s no end to how you can process things with The Echo, as well as other effects devices in Reason. Have fun with it!


And here’s a really nice tutorial that Selig put together on parallel processing and sidechaining with The Echo, which I just found the other day. Really nice method to parallel process using nothing more than a single Echo. Sweet.


I hope that opens up some new ideas for those of you using The Echo. Any other ideas, please share them here. Until next time, have a blast with your new Reason 6 upgrade.