
Tuesday, July 26, 2011

Synthodeon, MIDI and the Future... (Part 2)

So far I have not said anything remotely interesting, at least technically, but hopefully that will change now... 

There are in fact products just like I described in my last post: ipMIDI, a C# MIDI code set, a Cycling 74 patch, QmidiNet, and many others.  All of these products, though, treat MIDI as a simple point-to-point link and do nothing beyond the basic MIDI capabilities.

But my interest here goes beyond this basic ability.

First off, when you are working with software samplers, e.g., Kontakt, you have the ability to organize multiple samplers on the same MIDI channel or duplicate samplers across a set of MIDI channels.  This gives you much finer control over what is played for a given set of MIDI values and lets you do things that you otherwise could not.

With something like, say, guitar samples, it's often the case that there are a lot of articulations available for a given sound, e.g., strumming up versus strumming down, starting on one string, ending on another, mutes, harmonics, and so on.  Many software samplers today cram these sounds into chord sequences, e.g., I hold down "C" in one section of keys and use "A#" in another to strum a C chord.  Some samplers allow you to control which chords are being sounded (on the fly or with setups) and others do not.

What happens is that the complexity of what is being played (or needs to be played) quickly and exponentially gets out of hand - there are too many ways (inversions, where on the "guitar neck" your hand is, and so on) to play a "C" chord, too many questions about which form of "C" is appropriate for the piece I am playing, and so on.

At a low level MIDI gets the "note up & down" events to the sampler, but the "sound" the musician is asking for goes somewhat beyond this basic ability.

While things like MIDI CC values could be used to address this issue, they generally are not, because picking your own meanings for specific CC values is "non-standard" and would render any software or hardware that did so incompatible with a lot of existing MIDI equipment.

Even something somewhat standard, like breath control (velocity) MIDI values, is not always handled correctly.  For example, I've played an EWI 4000s live for several years using either Mr. Sax (from samplemodelling.com) or Roland XV-5050s with the Patchman sound banks (BTW, I cannot say enough good things about these sound banks).  Some out-of-the-box 5050 patches work with a breath controller, some do not (the Patchman ones always do, even on the XV-5050, as they were designed for this purpose).

Mr. Sax uses a variety of CC controls for special things while playing - which is inconvenient if you do not have a way to generate them.  This is particularly true live - I have enough to haul around as it is without yet more knobs and pedals to munge CC values.

For the more sophisticated guitar functionality you would like a controller to be able to do "more" for the player in terms of triggering samples in response to the player's hand movements.  There are many cases where one MIDI channel is simply not enough to do this.

Take, for example, note bends.  On a synth keyboard you typically bend the whole keyboard with the pitch wheel, i.e., all the notes change in response to the wheel.  On a faux sax or clarinet with the EWI you bend a single note (by biting the mouthpiece).  On a guitar you can bend multiple notes differently and simultaneously.  MIDI does not help you in this regard.  (Not to beat up on MIDI - it was developed before the things I am talking about were even possible.)
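One way around this limitation (my sketch, not something the MIDI spec gives you directly) is to give each "string" its own MIDI channel, so each can carry its own independent pitch-bend value - essentially the trick that per-string guitar synths use.  A minimal sketch in Python, building raw pitch-bend messages by hand:

```python
# Sketch: per-"string" bends by assigning each string its own MIDI channel,
# so each channel carries an independent 14-bit pitch-bend value.
# Assumes the receiver's bend range is +/- 2 semitones (a common default).

def pitch_bend(channel, semitones, bend_range=2.0):
    """Build a raw 3-byte MIDI pitch-bend message for one channel (0-15)."""
    # 14-bit bend value: 0x2000 is center (no bend)
    value = int(0x2000 + (semitones / bend_range) * 0x1FFF)
    value = max(0, min(0x3FFF, value))
    status = 0xE0 | (channel & 0x0F)  # 0xEn = pitch bend on channel n
    # Data bytes are 7-bit: LSB first, then MSB
    return bytes([status, value & 0x7F, (value >> 7) & 0x7F])

# Six "strings" on channels 0-5: bend string 2 up a semitone while
# string 3 bends down a quarter tone - impossible on one channel.
messages = [pitch_bend(ch, bend) for ch, bend in [(2, 1.0), (3, -0.5)]]
```

Each message targets a different channel, so a multi-channel-aware sampler can bend the two notes independently.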

Now throw into the mix your typical keyboard layering and/or splits.  Now I really need multiple MIDI channels per split to do anything beyond basic notes.

Then there are devices like the iOS ones from Apple (and, possibly, Android ones as well).  Playing them wirelessly in the context of a guitar or keyboard, where exact millisecond timing is required, is a show stopper - it's simply not reliable enough.

(Yes you can create a private network on your Mac, etc. but I've not had satisfactory results much less taken it into a paid live gig alone or with other musicians.)

You really need a wire to get this done.

(At least Macs work live - I've played a variety of samplers and Logic live without problems.  I guess the same is true for PCs but I personally would not rely on one...  BTW, I am no PC bigot and what I will be posting going forward will certainly apply to PCs as well.)

So the bottom line here is that we need more than just a single MIDI channel between a controller and modern samplers if we want to get anything done that gives us much closer control than we get from a single keyboard, etc.

My solution to this is very, very simple: combine collections of MIDI channels into (sub-)groups, which I will refer to simply as groups, and combine sets of MIDI virtual ports into what I will call Multis.

Now MIDI makes this easy on a computer because ports have names, e.g., Kontakt Virtual Output.  These names are derived from the software or, in the case of hardware, from the device itself as part of the USB standard for modern MIDI devices.

Devices are often, even in the wild, named sequentially with simple numbering, e.g., E-MU Xmidi 2x2 Midi Out 1, E-MU Xmidi 2x2 Midi Out 2, ..., so it makes sense to think of a set of sequentially numbered devices as a Multi.

But fortunately we don't have to worry about actual devices for this abstract model.  If we have software that simply creates groups of ports through the local OS we can create and name them any way we like, e.g., Port 1, Port 2, Port 3.

Secondly, because in the world I describe we can support any number of MIDI ports plus have a fast, reliable means to talk between them, we can have a relatively large number of MIDI ports in a Multi, say four, and still achieve very high performance.

So, for example, in the diagram below we have "A MIDI 1", a (single) logical OS-based MIDI port we've created on Node 1 that's linked to Node 2 (or to other Nodes).  We also have a set of MIDI ports "B MIDI 1", "B MIDI 2", and "B MIDI 3" that we want to think about as a Multi.  We name this Multi by dropping the space and number to get "B MIDI".  Any port that follows the template "B MIDI x", where x is a sequential number, is included.
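The naming convention is mechanical enough to sketch in a few lines of Python.  This is my own illustration of the template rule described above (the function name and the gap-free numbering check are my assumptions, not anything from an actual product):

```python
import re
from collections import defaultdict

# Sketch of the Multi naming rule: any set of ports named
# "<base> <sequential number>" collapses into one Multi keyed by <base>.

def find_multis(port_names):
    """Group port names like 'B MIDI 1', 'B MIDI 2' into Multis by base name."""
    candidates = defaultdict(list)
    for name in port_names:
        m = re.fullmatch(r"(.+?) (\d+)", name)
        if m:
            candidates[m.group(1)].append((int(m.group(2)), name))
    # Keep only bases whose numbers run 1, 2, 3, ... with no gaps
    return {base: [name for _, name in sorted(ports)]
            for base, ports in candidates.items()
            if sorted(num for num, _ in ports) == list(range(1, len(ports) + 1))}

ports = ["A MIDI 1", "B MIDI 1", "B MIDI 2", "B MIDI 3"]
# "A MIDI" becomes a single-port Multi; "B MIDI" a three-port Multi.
```

A lone "A MIDI 1" still qualifies as a (single-port) Multi, which matches the idea that a plain port is just the degenerate case.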

[Diagram: "A MIDI 1" on Node 1 linked to Node 2, alongside the "B MIDI" Multi made up of "B MIDI 1" through "B MIDI 3"]
We divide "A MIDI 1" up into groups of, for example, four MIDI channels - groups being named 1, 2, 3, and 4.  We also divide the Multi "B MIDI" into twelve groups of four MIDI channels.  Groups are numbered from one on the "first" MIDI port and consume units of n channels per group until you run out of channels (n is always less than 16 - at least for now - though I suppose there is no reason n couldn't be 32 or 18 or something else like that).
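The numbering scheme in the paragraph above is easy to make concrete.  A small sketch (my own helper, assuming n divides evenly into the 16 channels per port so groups never straddle a port boundary):

```python
# Sketch of the group numbering: groups of n channels are laid out starting
# from group 1 on the first port of a Multi until the channels run out.
# Assumes n divides 16 so a group never spans two ports.

def group_to_channels(group, n=4, channels_per_port=16):
    """Map a 1-based group number to (port_index, list of 0-based channels)."""
    start = (group - 1) * n
    port, first = divmod(start, channels_per_port)
    return port, list(range(first, first + n))

# A three-port Multi with n=4 holds 12 groups:
#   group 1 lands on the first port's lowest channels,
#   group 5 starts over at the bottom of the second port.
```

So for the twelve-group "B MIDI" example, group 1 is port 0, channels 0-3, and group 5 rolls over to port 1, channels 0-3.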

So what we have done is create logical groups of MIDI channels into a simple grouping and naming system.  There's no requirement as to how many or few channels fit into a group, or how many ports into a Multi - those are decisions based on the application.

So what?

You might love MIDI as it is and hate me for what I am saying - so be it.

But face it, as I said in the last post, MIDI is like a platypus - it's here, it works, we love it, but it's not the best we can do and it's certainly not the latest design...

For one thing, when we talk to a sampler it makes more sense to talk about a sound using a group rather than a channel.  (Certainly one sound can be one group which is one channel - which is what MIDI is today.)  But if I want to create a more complex controller that works like some of the guitar-based sampler keyboard inputs, I can (no pun intended) re-group into a set of, say, four or eight MIDI channels and use the extra bandwidth to do interesting things in the controller to diddle the sampler in interesting ways.

For another, we can send metadata along a given Multi between apps that speak the same MIDI protocol at the Multi level.  Because I have good hard bandwidth between, say, Node 1 and Node 2, I can encode things into MIDI and use it as a transport layer - either in a group or as a Multi.  (There are other, more serious reasons to want to do this which will be revealed as the '3i' gets closer to debut.)  This gives me a reason not to invent yet another MIDI-over-UDP/TCP protocol - though I may release tools that make this simple to do.
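To show what "MIDI as a transport layer" could look like - this is a generic sketch of a standard technique, not the 3i's actual protocol - arbitrary metadata bytes can ride inside a System Exclusive message, packed 7 bits at a time since MIDI data bytes must keep the high bit clear:

```python
# Sketch (not the 3i's actual protocol): carry arbitrary 8-bit metadata over
# a MIDI link inside a SysEx message, using the common 7-bit packing trick:
# one "high bits" byte precedes each run of up to 7 data bytes.

def pack7(data):
    """Pack 8-bit bytes into a 7-bit-safe stream."""
    out = bytearray()
    for i in range(0, len(data), 7):
        chunk = data[i:i + 7]
        highs = 0
        for j, b in enumerate(chunk):
            highs |= ((b >> 7) & 1) << j  # collect each byte's stripped high bit
        out.append(highs)
        out.extend(b & 0x7F for b in chunk)
    return bytes(out)

def sysex(payload, manufacturer_id=0x7D):  # 0x7D = non-commercial/test ID
    """Wrap a packed payload in SysEx framing (F0 ... F7)."""
    return bytes([0xF0, manufacturer_id]) + pack7(payload) + bytes([0xF7])

msg = sysex(b"\x81\x02\xff")
```

Every byte between the F0/F7 framing stays below 0x80, so any MIDI pipe - a group or a whole Multi - can carry it untouched.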

After all, this is basically kind of like what's going on with MIDI-controlled DAW control surfaces, so why not buy into the same concept for actual MIDI controllers?

In the reality of the 3i, what this all really means is that the 3i thinks about the world as groups of MIDI channels - not just one.  That makes effects beyond what one channel can do possible, if not downright easy.  The 3i sort of only cares about output groups, and its setup involves knowledge of the target sampler to some degree in order to take advantage of what the sampler can really do.
