Synesthesia is the transference of one sense to another, such as seeing sounds. I have a provisional theory, still being worked out, that during times of flux in the senses, such as when the possibilities of a new medium are first being fully explored, synesthesia becomes more prevalent. If you can see what I am saying, you have one version of it, though cogsci types try to separate metaphors from the actual “affliction.” For some reason, it usually maps sound onto vision, but I have instead mapped vision onto sound with PRISMM.
Here is another video of PRISMM, this time it is just me. If you missed out last time, this is a max/msp/jitter patch I wrote that makes music from motion. Up and down is pitch, left and right changes the mix of the two synths. Let me know what you think!
Synesthesia is a favorite topic of mine. It is the transference of one sense to another.
Arrived in Kolkata after 35 hrs of flying and waiting in airports…spent most of the day watching web pages load letter by letter on my thrilling 56k web connection. Aah, longing for the good old internet of the 90s. Monisha got back about half an hour ago with a 1mbps modem to replace the other one, so I am a bit cheerier.
As promised the other day, here is a very poor quality video of the people’s revolutionary insurrectionist synesthetic music machine, or PRISMM for short. Everyone at the Revolution Books show had a grand time with it. This section is set to a whole tone scale (a spacy thing with no tonal center that Claude Debussy favored) and uses the piano sound only. Moving the green blobs up makes higher notes, down for lower. The bigger the green blobs, the louder they are. Moving them left and right moves them in the stereo field. And here is a music clip that I made with it using a pentatonic scale and a more complex mix of synth sounds. If I get time, I’ll spring it on some folks in Kol, try to record the output, and post that.
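For anyone who wants the mapping spelled out, here is a rough sketch in Python of the kind of blob-to-note translation described above. This is not the actual Max/MSP/Jitter patch, just a toy restatement of the mapping; the frame size, note range, and blob-area ceiling are made-up numbers.

```python
# Toy restatement of the PRISMM mapping, not the real Max patch:
# vertical position -> pitch (quantized to a whole tone scale),
# blob size -> loudness, horizontal position -> stereo pan.
WHOLE_TONE = [0, 2, 4, 6, 8, 10]   # semitone offsets within one octave

def blob_to_note(x, y, area, frame_w=320, frame_h=240,
                 low_note=36, octaves=4, max_area=5000):
    """Map a tracked blob (center x, y in pixels, area in pixels^2)
    to a (midi note, velocity, pan) triple. All ranges are assumptions."""
    steps = int((1.0 - y / frame_h) * octaves * len(WHOLE_TONE))  # higher on screen = higher pitch
    note = low_note + 12 * (steps // len(WHOLE_TONE)) + WHOLE_TONE[steps % len(WHOLE_TONE)]
    velocity = min(127, int(127 * area / max_area))               # bigger blob = louder
    pan = int(127 * x / frame_w)                                  # left/right = stereo position
    return note, velocity, pan

print(blob_to_note(x=60, y=40, area=2500))   # a biggish blob, high up and to the left
```

Swapping the WHOLE_TONE list for a pentatonic one gives you the scale used in the second clip.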
I’ve been thinking about a name for it for a long time, thus:
Whereas it takes people moving, sometimes as little as an eyebrow, for it to work, and it works whether they want it to or not, thus preventing wallflowerism and related ways of non-participation, and
Whereas it got its public debut on the 12th at Revolution Books, and
Whereas adding “insurrectionist” to the name adds to the general gist of the thing, and
Whereas adding said word causes the acronym to be nearly meaningful, and
Whereas it is a vision-to-sound sensory translator (i.e. a machine that performs synesthesia), and
Whereas all the possible names with variants of synesthesia in them sound hopelessly lame, and
Whereas the results sound vaguely musical, and
Whereas computers and max patches are basically machines, and
Whereas the resulting acronym nearly makes sense in the context,
therefore
Be it resolved that henceforth the max patch I spent a semester working on instead of writing like a good historian be so named: the People’s Revolutionary Insurrectionist Synesthetic Music Machine, or PRISMM for short. I figured I’d better let it out of the bag because pretty soon the MS Kinect and the like will be all over this idea, but I’ve had this working since 2007, so remember where you saw it first!
PRISMM works on an object-oriented music and vision platform called Max/MSP/Jitter. The patch, as max constructions are called, is only slightly more complex than the Tokyo public transportation system.
I’d love to compile a version of PRISMM to distribute, and in theory it is easy, but the patch relies on some non-standard jitter parts that no one is maintaining anymore on the PC platform, so I have been unable to package it up as a standalone yet. The technical problems are surmountable, though, so if anyone knows how to compile (not write) a max extension in C++ so that it will run as a dll under Windows, please let me know in the comments or contact me.
The Revolution Books show went well. I gave the synesthesia machine its public debut. It was a big hit. I have a short video I’ll post once we get internet in Kolkata for those who don’t know what I’m talking about.
DJ Anthony Chang beatboxed to “The Revolution will not be on the internet.” He and his friend Matt were visiting from CA. I will post links to some of their music soon.
Yes, it is time to give thanks, not for pilgrims getting fed so they could survive another year to get the great American land ripoff going in New England, but for a brand new rreplay tune, “atmospheric phenomenon.”
Two new experimental tracks from rreplay are available. Pop the player out in the background here and come back to read while you give them a listen.
They are from the more experimental end of our stuff. I think I was having a hard drive problem and had to use a bunch of tools I don’t normally use for jamming, and we had no drum loops. The first, short piece is modem and dial tone sounds worked through a sampler that I played via my guitar. The second uses chipsounds and something called scanned synthesis. One of the big breakthroughs in the 1980s synth scene was wavetable synthesis, where you could construct one cycle of a sound wave of any shape you desired, load it into memory, and then play it back at any pitch. While it opened up new possibilities for synthesis, it also had its limits, the most telling being that the wavetable is static, while the timbre, and thus the waveshape, of acoustic instruments varies over time. Scanned synthesis (pdf) attacks this problem by letting the wavetable itself change over time, so that the sound evolves. The changes are slow vibrations modelled on struck, plucked, and bowed objects (a string, for example), and lately on multi-dimensional creations that exist in more than four dimensions. The result is a much livelier, more natural-sounding synth where the tone evolves with the playing. Strings are actually a lively area of math and theoretical physics work right now, and I think some of the concepts from string theory are working their way into the synthesis method (pdf).
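To make the idea concrete, here is a minimal sketch of the scanned synthesis concept in Python with numpy. This is my own toy version, not the algorithm behind any particular plugin: the wavetable is treated as a slowly vibrating mass-spring string that gets scanned at audio rate, so the timbre drifts as the string’s shape evolves. The constants (table size, stiffness, damping, update rate) are arbitrary.

```python
# Toy scanned synthesis: a wavetable modelled as a slow mass-spring "string",
# scanned at audio rate. Not any particular plugin's algorithm.
import numpy as np

SR = 44100        # sample rate
N = 128           # wavetable / string length
PITCH = 110.0     # scan frequency in Hz (the note you hear)

idx = np.arange(N)
table = np.sin(2 * np.pi * idx / N) + 0.3 * np.sin(6 * np.pi * idx / N)  # initial shape
vel = np.zeros(N)                                                        # string velocities

def step_string(table, vel, stiffness=0.02, damping=0.999):
    """Advance the slow string one tick: each point is pulled toward its
    neighbours (wrapping around), then lightly damped."""
    laplacian = np.roll(table, 1) - 2 * table + np.roll(table, -1)
    vel = (vel + stiffness * laplacian) * damping
    return table + vel, vel

out = []
phase = 0.0
for n in range(SR * 2):                    # two seconds of audio
    i = int(phase)
    frac = phase - i
    out.append(table[i] * (1 - frac) + table[(i + 1) % N] * frac)  # scan with interpolation
    phase = (phase + PITCH * N / SR) % N
    if n % 64 == 0:                        # update the string at a sub-audio rate
        table, vel = step_string(table, vel)

audio = np.array(out) / np.max(np.abs(out))  # normalized; write it out with any wav writer
```

Because the table keeps moving, the harmonic content shifts over the two seconds instead of staying frozen the way a static wavetable would.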
As usual, lots more to listen to at way.music. Please give us a listen.
Just an update of the Todos Somos Arizona July 29th civil disobedience soundscape. The original sounded fine on the computer speakers where I mixed it, but when I played it at H’s party, I was a bit horrified at how loud and rumbly the helicopters were on a pair of speakers that actually had some bass response. So I tweaked the mix a little: presenting the Todos Somos July 29th action soundscape remix, now with less helicopter [mp3]. If you play it on computer speakers or headphones, it won’t sound any different, but if you have a big stereo with good speakers or maybe a good subwoofer, it should be much less anxiety-producing now! The sound quality is still all iPhone though. Maybe I’ll get my hands on a good digital recorder for the next time.
Todos Somos Arizona (“We are all Arizona”) is a collective of like-minded people opposed to the racism of the new immigration laws being put into effect in Arizona and elsewhere. They are based in Los Angeles. You can find out more about them — and what you can do — from their Facebook page. 24 people are still facing charges stemming from the July 29th protest, and they could use your support.
This is an old thing I probably wrote 30 years ago. It is a bit orchestral in a post-trippy sort of way. It has words, and this is just a draft of it, but I was pleased with how it worked out. As usual, FB people might have to come to the waymusic page (http://way.net/waymusic) to hear it. I hope you enjoy it. It is not my usual fare. I’m not sure if that means it is more or less likely you will like it!
For plugin freaks, the strings are from Cakewalk’s Dimension synth. The acoustic guitar is the Godin xTSA played through Voxengo’s Perfect Space, but with the convolution set to the sound of ice defrosting in Lake Baikal instead of crossing it with the usual acoustic guitar body. The loop for the acoustic sound is made with Mobius. The electric guitar in the rhythm is played through a twangy setting on Voxengo’s great free Boogex amplifier, then piped through the no-longer-available Glaceverb reverb set to model the inside of a piano with the sustain pedal down and all the strings resonating. The electric guitar in the last half of the piece is again the Boogex, this time heavily distorted with the speaker modeling turned off and run through the same setting on the Glaceverb.
Hey everyone, I hope you will give a listen to the soundscape collage/remix I did of the Todos Somos Arizona action on July 29. Basically, it is the whole action (excluding the jail vigils) mixed down to four and a half minutes of killer beats of the drummers, chanters, cops, speakers, and a helicopter. Shout outs to Carlos and Mario for getting the drums to happen, Kiwa, the Bus Riders Union, and the two ninja drummers for providing the sound of the beat, Danae and the rest for keeping the chants going, the people who put their bodies on the line to get arrested, and everyone who made the action happen. It took me a crazy amount of time to put together, so I hope you will take a couple of minutes and give it a listen! If you like it, or even if you don’t, I hope you will also check out all the free music at Way Music and this blog.
In this section, I want to tackle two things about the Godin xSTA‘s synth section and then compare the Roland GI-20 synth controller to the Axon AX50 controller. I’ll show how the guitar and the controller together make up an expressive unit that affects the sound profoundly before it ever gets to the synth part of your setup. But first, I am going to complain about a flaw in this expressive unit, why no one addresses it, and a workaround. Keep in mind that although I am complaining, this is not, as Liz Lemon would say, a dealbreaker. It just means I had to settle :) The problem with an expressive unit made by two different companies is that when something is wrong, each can point the finger at the other and say the problem lies over there.
The first task is to map out how the guitar synth makes sound. The bridge of the xSTA holds a hexaphonic pickup that connects to a hardware synth controller via a thirteen-pin plug. What’s so hexaphonic, you ask? Instead of one output, it has a separate output for each of the 6 (= hex) strings. Instead of outputting a single weak audio voltage, it presents you with six, so each string sends its own signal, making chords much easier to pick out than with software, but also meaning you can set each string to its own synth if you want. The box you plug it into — either the Roland or the Axon — then transforms that pile of voltage signals into a stream of midi messages and sends it on via USB to your computer, and from there to whatever you are using to turn midi into sounds, the synth proper. Both Roland and Axon make controllers with built-in synths for double to quadruple the price, but I loves my vsts, so I saved the money and got the cheaper controllers that just transform the voltage to midi for me to mangle myself. For the most part, the pre-built synth sounds are cheesy, both Roland’s and Axon’s. That includes the resource-hog NI Kontakt Player softsynths included with the AX50. I’d rather roll my own and have more control over the tweaking and so forth.
The one big problem with the “expressive unit” arises from a ground loop hum when you use the USB midi/audio interface on either the Roland or the Axon controller. The details of the hum are in the original post I made, and it is a problem well known on the Roland and Axon forums, and to Godin (the latter via me, at least). Because the problem appears with both controllers, and it seems to be particular to Godin guitars, the problem would seem to lie in the xSTA. My theory is that guitar synth setups are still designed for a hardware rather than a software synthesizer producing the actual sound, and with the expectation that a guitarist will use not only an outboard synth but a separate hardware amp or two for the electric and acoustic signals. If you try to send the (non-synth) audio signals and the synth signals to the same device (to treat the signals on your laptop rather than through an amp and effects boxes, as discussed in a zillion other articles here), a shared ground sets up a ground loop, which in turn produces the hum. The problem is in the wiring of the thirteen pins. To get rid of the hum, send the acoustic and electric guitar signals through their own cables to the computer’s sound card inputs, skip the USB portion of the controller altogether, and send the controller’s output to a separate midi input device (I use an M-Audio Midisport 2×2, for example). No hum, but then you cannot use the software patch editors for either unit while you are playing, or the built-in internal midi ports they create. If you want to set up and store patches, you have to do it along with the hum, then unhook the USB to play. It is not a problem until you start running everything into the laptop, but now that doing so is an option, and a good one, Godin should really rewire the plugs to get rid of the ground loop. A response from someone at Godin would be especially welcome here. When I contacted tech support, they said to take it in for service, but judging from the forums this is a problem with all of them, a design flaw rather than a defect in my particular guitar. But because the blame is easy to push off, and there is a workaround, Godin has more or less ignored the problem.
The second problem with the setup from the guitar angle is that the thirteen-pin connection is spotty, even with a brand new cable (update: it was the cable. Another new cable sorted this out). Both the Roland and the Axon have an onboard tuner, and I recommend you use it each time you plug the cables in, because often one string won’t output any signal until you remove and reseat the cable plug in the guitar. Once it is working it seems to be pretty solid, but this is not a guitar for Pete Townshend acolytes (although I suppose Pete has mellowed enough to play it by now). This second problem just makes it one more thing to check and double-check before a live gig. Doing a live gig with a guitar and laptop setup is still a pretty brave move, because a dozen things, electronic or physical, can be unplugged, switched off, or in need of a jiggle.
That is the end of my complaints about the guitar, and as I said, all of this is something I’ve learned to live with, more than made up for by the sounds I can concoct on this setup.
Tracking
I’ve been interested in guitar synthesis for eons, but it was always priced out of my reach until the arrival of VSTi synths. Even then, the software-only solution I used had a major weakness. The biggest technical difficulty for guitar synthesis is something called tracking. That is the ability, or lack thereof, of software or hardware to turn an audio signal into a midi note.
I tried a couple of all-software ways to convert guitar notes to midi, with mixed results. That method uses a freestanding program like the discontinued g-tune or a vst plugin like widi to analyze the audio input and produce midi output. I’ve described the sound (and the method of getting it) elsewhere as being like playing with a drunken Thelonious Monk wannabe. The notes come out, but they are a little late and often a little askew, and all the velocity information is lost: the note is either on or off. This can actually be charming on occasion, and making synth sounds with slow attacks and long releases is a pretty good way of adapting to the all-software methods, as in this rreplay song. If you are holding off on going synth because of $$, this is a cheap way to get started and opens up lots of tonal vistas. Ultimately, whether it is hardware or software, the converter has to deal with the same thing, so I think that tracking will eventually move out of the controller box (i.e. the Roland or Axon) and back into software. Optimizing what comes out of the guitar will still help the process, though, so the specialized synth pickups on the Godin will probably stay. Software solutions to hexaphonic output are limited by soundcard inputs at this point, which do not have the right connectors and, except for semi-pro and pro sound cards, do not have enough channels to deal with all the sounds coming from something like the xSTA. For the foreseeable future the outboard controller box is still a necessity, but not for any really good reason other than that nobody has moved it onboard a sound card and developed the necessary software yet. Maybe Gibson’s firewire cabling scheme will catch on — it is a step in the right direction — but it remains to be seen, and at a little shy of 4 grand for the guitar, I won’t be seeing it!
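To show why the all-software route lags, here is a crude sketch in Python of what any audio-to-midi tracker has to do. This is not how widi or g-tune work internally; it is just the basic shape of the problem: buffer enough audio to see a few cycles, estimate the fundamental, and round to the nearest midi note. The buffering is where the latency comes from.

```python
# Crude audio-to-midi tracking sketch (illustration only, not widi's algorithm):
# estimate the fundamental of a buffered frame by autocorrelation, then round
# to the nearest midi note.
import numpy as np

SR = 44100

def estimate_pitch(frame, sr=SR, fmin=60.0, fmax=1000.0):
    """Autocorrelation pitch estimate over one frame of mono audio."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    ac = ac / (len(frame) - np.arange(len(frame)))   # compensate for the shrinking overlap
    min_lag = int(sr / fmax)
    max_lag = int(sr / fmin)
    lag = min_lag + np.argmax(ac[min_lag:max_lag])
    return sr / lag

def freq_to_midi(f):
    """Nearest midi note number for a frequency in Hz (A440 = note 69)."""
    return int(round(69 + 12 * np.log2(f / 440.0)))

# A synthetic low E string (82.4 Hz). A frame long enough to hold a few cycles of
# the lowest note is already tens of milliseconds of audio before we can even guess.
t = np.arange(int(SR * 0.05)) / SR
frame = np.sin(2 * np.pi * 82.4 * t)
print(freq_to_midi(estimate_pitch(frame)))   # -> 40, the guitar's low E (E2)
```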
I finally took the dive and got the xSTA (less than a grand) and started out with a Roland GI-20, then switched to an Axon AX50 synth controller interface. This requires a bit of explanation. The xSTA and other Godin guitars are known for having state of the art tracking. Not having the budget to go out and buy a bunch of synth ready guitars, I’ll have to take their word for it.
The biggest challenge for tracking is the bass register of the guitar. The lower the note, the more time between wave peaks in the signal. Most voltage-to-midi converters need at least one full wave cycle of the note, usually more, to determine what to output. To make things worse, the initial moment of striking a string with a pick or finger creates a noisy stretch, called a transient, before the note stabilizes. As a result, there is a slight delay before the note can be calculated from the signal, and the lower the note, the longer a single wavelength takes, so the worse the tracking gets. One solution is to just turn the octave setting down a few notches and play your bass lines using higher notes on the guitar. I just have a hard time getting the feel of a bass when I am playing on a wambly little G, B, or E string.
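To put rough numbers on that: a converter that needs one full cycle before it can even start guessing is stuck waiting at least 1/f seconds, and a few cycles plus the transient multiplies that. The quick sketch below just prints the per-cycle time for the standard-tuning open strings.

```python
# Lag floor for a one-cycle-at-minimum pitch detector, per open string (standard tuning).
open_strings = {"E2": 82.4, "A2": 110.0, "D3": 146.8, "G3": 196.0, "B3": 246.9, "E4": 329.6}
for name, freq in open_strings.items():
    print(f"{name}: one cycle = {1000.0 / freq:.1f} ms")
# E2 comes out around 12 ms per cycle; need two or three cycles plus the transient
# and the low strings are well into audible-delay territory, which is the whole problem.
```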
The xSTA’s job is to get the cleanest signal for each note to the midi controller, and from there the controller takes over, so we need to divide the issue up and separate the xSTA’s job from the midi box’s job. That said, the tracking from the xSTA is twenty times better than the software tracking solutions, which it damn well better be because it is also twenty times the price! There really is not much more to say about the guitar part of the synthesis chain, so with that, we need to turn from the guitar to the midi controller hardware and compare the Roland to the Axon.
I started with the Roland GI-20. It was a hundred or two $$ less than the Axon AX50 and offered some features the Axon did not have (see below). The tracking is much better than the software solutions, but not good enough to play a complex rhythm on a bass line and keep the nuances. You can hear it drift around a bit in this piece, which is the acoustic section played with the synth on and the midi transposed down an octave and sent to a bass synth vst plugin. The bass should be playing the same notes as the guitar part an octave down, but as you can hear, it doesn’t quite, giving the not unpleasant illusion in this case of two players playing slightly different things. Not drunk Monk, but forget funk or even punk. It’s still thunk thunk. Sorry, had to do that 🙂
Ultimately, two factors made me retire the Roland and go for the AX50. One is size (because as I mentioned, I am traveling a lot) and the other is the tracking. It has been a tradeoff though, as the Roland is better for some things, especially if you have some room to spread out (anyone want a good deal on a barely used GI-20? seriously, if you do, contact me!).
The Axon is pitched as a tracking monster. Instead of waiting for a whole wavelength, the folks at Axon studied the transient part of a guitar picker’s attack, the very first moments during and after hitting the string, before the signal has stabilized at whatever frequency it is going to produce. They figured out how to accurately predict the note to follow from the transient, in theory doing away with our tracking problems. It works really well, if not quite perfectly.
In the musical example, the first time through I play the electric guitar section only, on the left channel, so you can hear what I am trying to play. Next comes the guitar on the left with a bass synth on the right, powered by the xSTA and the AX50 run through Cakewalk’s Dimension with the “electric fingered 1” bass patch. The third time, just so you can hear the difference from software tracking, I used widi, an audio-to-midi vst plugin, fed into nuSofting’s marimka, with the attack emphasized so you can hear how slow the tracking is on the software solution. It’s not bad, but a little off.
If you look closely at the image files, you will see that the green lines, which mark the onsets of notes in the guitar-only signal, come a little before the bass notes in the left (lower) channel. In the closeup image, it becomes clear that this lag is right at the edge of perception, in the ten-to-twenty-millisecond range. Thus playing it doesn’t sound like a lag so much as it feels a little slow. (n.b.: the green lines are generated by a software onset detector that uses the whole signal, thus not real time, and not subject to the problems above, but also not useful for live playing)
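If you want to try the same sort of measurement on your own renders, here is one hedged way to do it with librosa’s stock onset detector. The file name is a placeholder, and this is not necessarily the detector that drew the green lines above; it just illustrates the idea of comparing onset times between the two channels.

```python
# Rough lag measurement: detect onsets in each channel of a stereo render
# (guitar on the left, tracked bass on the right) and compare the times.
# The file name is a placeholder.
import librosa
import numpy as np

y, sr = librosa.load("guitar_vs_tracked_bass.wav", sr=None, mono=False)
guitar, bass = y[0], y[1]

guitar_onsets = librosa.onset.onset_detect(y=guitar, sr=sr, units="time")
bass_onsets = librosa.onset.onset_detect(y=bass, sr=sr, units="time")

# For each guitar onset, find the nearest bass onset and report the offset in ms.
lags_ms = [1000.0 * (bass_onsets[np.argmin(np.abs(bass_onsets - t))] - t) for t in guitar_onsets]
print(f"median lag: {np.median(lags_ms):.1f} ms")   # the lag discussed above sits around 10-20 ms
```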
Once you set the AX50 up for your playing style, it tracks any cleanly played notes with very little mis-tracking (playing notes other than what you played). Some of the subtleties of a bass line will still get lost, as is obvious from the gaps in the first picture for the left channel, but the tracking is just shy of instant. I have found that you can get a pretty expressive bass sound by mixing some of the acoustic guitar signal into the octave-down synth mix, so the sound of the strings and the transients are in there with zero lag, and then plastering on whatever bass tone you choose so that the acoustic transients cover for the lag.
GI-20 vs. AX50
The GI-20 has some features I like that the AX50 lacks. It is massively flexible. You can program the switches on the xSTA to send either octave up and down messages, great if you want to switch quickly to a bass guitar register, or patch change messages (which, by the way, can also be programmed to set the octave, sort of rendering the first choice a bit redundant). The volume control for the synth can be set to any midi control change (cc) message rather than just controlling volume, so, for example, you could use it to sweep a filter instead (losing the volume control in the process, however). Where the GI-20 is most remarkable in this regard is when you hook up both of the optional foot controllers, one being a two-button footswitch and the other an expression pedal. The GI-20 gives you lots more options than just sending cc messages, though it does that too.
The AX50 has no inputs or dials, no footswitches or pedals, just a simple LED display, a power switch, and a little tuning button on the front. Everything gets done via the program banks controlled from the tail-most switch on the Godin, which you set up in software (with the USB hooked up). What is extraordinary about the setup is that because of the transient sensitivity, the unit can tell where on the guitar you are picking, whether closer to the neck or the bridge, with enough accuracy that you can assign a midi controller to it, so that you can, for example, run a filter sweep (like on a wahwah pedal) by changing the picking location. This maneuver without the synth is already part of any expressive guitar player’s repertoire — it changes the tone drastically — so hooking it up to midi is a natural-feeling extension of what you can do with the tone. This is a great feature and I love it. You can also divide the picking area and the fretboard into zones and assign each, by string, to a different sound, so that if you play an “A” on the seventh fret of the D string, it can play a different instrument than if you play it on the twelfth fret of the A string! I have not had the chance to experiment with this much yet, but I like the idea. What this does is keep the expressive potential on the guitar. No feet are involved and the hands never have to leave the guitar. Now if I can only get the tilt sensor from the wiimote hooked up and attached somewhere…oh wait a minute, maybe this not-so-free freeplayer thingie will work for another chunk of change….
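To give a flavor of what you can do downstream once a picking-position cc is coming out of the controller, here is a small hedged sketch in Python with mido. The port names and cc numbers are placeholders (yours will depend on how you set the unit up), and this is generic glue code, not anything that ships with the Axon.

```python
# Forward a picking-position cc from the controller to a softsynth's filter cutoff.
# Port names and cc numbers are placeholders; adjust to your own setup.
import mido

PICK_POSITION_CC = 16   # whichever cc you assigned to picking position in the editor
CUTOFF_CC = 74          # cc 74 is the conventional "brightness"/filter-cutoff control

with mido.open_input("AXON AX50 USB") as inport, mido.open_output("loopMIDI Port") as outport:
    for msg in inport:
        if msg.type == "control_change" and msg.control == PICK_POSITION_CC:
            # picking nearer the bridge = brighter: pass the value along, lightly scaled
            outport.send(mido.Message("control_change", channel=msg.channel,
                                      control=CUTOFF_CC,
                                      value=min(127, int(msg.value * 1.2))))
        else:
            outport.send(msg)   # pass everything else through untouched
```

In practice you might just assign the cc directly inside your host, but the sketch shows the shape of the mapping.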
Anyway, the bottom line is that you can get a tremendous variety of tonal wonderment out of the mixture of electric, acoustic, and synth sounds that the xSTA makes accessible, and the whole package of guitar and controller costs around the price of a nice mid-level guitar. I opted for the Axon for the tracking and having a simpler smaller setup, but the Roland has its neat features too. The whole setup is a blast most of the time, and I am still, after a year, finding new tones and new ways of making and mixing sounds every day, which is what it is about for me. I’ll keep writing about the software setup, but I think I’ve pretty much covered what I have to say about the hardware. Any comments?
Finally, if you made it this far, please take a little more time and listen to some of the great music on way.net if you haven’t already. Anything I’ve recorded in the past year and a half or so has been on the xSTA and either the Roland or the Axon.