“Studio”, from the Italian studium, a place for study; since about 1800, a workroom for a painter, sculptor or photographer. Today it can also be a place for creation, practice or instruction in music, dance, film or video. An ideal studio is private and large enough for whatever work needs to be done. My own studio has always been the place where I go to be alone and compose, and for nearly thirty years, to produce synthesized performances and recordings.
When I began studying music seriously in my last year of high school (1968), my parents bought a baby grand for the living room, and I began taking lessons. But I did very little composing "at the piano", preferring a table in a quiet corner of the basement, with a pile of sharp pencils. The following year, when I started studying with Thomas Briccetti, my father and I built a semi-soundproof room in the basement, furnished with a bed, table and chair, my portable stereo record-player and an old spinet piano from a downtown music store. That was my first studio, where I spent many hours alone, writing and studying.
Juilliard didn’t have a synthesizer studio when I was there, so I studied concurrently at an electronic music studio at New York University. The Buchla synthesizer and Ampex tape recorders probably cost $100,000 in 1970 dollars, and I soon realized that such a studio was beyond my reach. However, in the mid-’70s I studied every issue of Electronotes and The Musical Engineer’s Handbook, and from their schematics designed and built a basic analog synthesizer from scratch, with parts and copper-clad board from the local electronics surplus shop. It was a good learning experience, but the result wasn’t good enough for serious work.
Before xerography and laser printers made small-scale graphic printing feasible, I had a drafting table and pens for copying manuscripts onto special paper for printing. The goal was to show printed music to musicians in order to secure at least one performance, and hopefully a live recording as well. Repeated listening to what I had written was invaluable in making revisions, but the year or more that elapsed between creation and performance made for a slow and frustrating process. Real recordings were seldom an option because of the cost and logistics involved. In any case, what was needed most was a way to hear my work BEFORE it got to the performance and recording phase.
About 1977, feeling too much at the mercy of performing arts organizations and musicians with their own agendas, I decided to take more control of the music production process by founding a group to perform my music. I began writing for multiple keyboard instruments, and my studio soon accumulated instruments: a piano, Rhodes and Wurlitzer electric pianos, and an old electric organ.
But the old roles didn't change significantly. Although I owned the instruments and managed an organized ensemble, it still comprised musicians with other commitments, and even after the technical challenges of increasingly complex rhythms were overcome, concerts still had to be organized and produced, egos stroked, a fickle public satisfied. Nor did it solve my real studio problem: hearing what I was writing, getting feedback and making edits before releasing it to the public. Ultimately the project was a failure, and out of frustration I almost stopped composing.
And then came the day in 1981 when everything changed. I was lent an Apple II computer, specially equipped with a very advanced sound synthesizer, and I immediately realized that this technology could some day give me the control I was seeking. So for several years I became, to paraphrase Harry Partch, a music man seduced into computer programming. It’s been a long, frustrating and often expensive development, but I believe the results far outweigh all that.
When I began my foray into computer-assisted composition with the Mountain Computer Music System hardware and software, I still wrote everything down first in a traditional score format, in pencil (on Archive blank 18-stave spiral-bound orchestra paper). Then I used the MCMS software to enter small pieces of the score, one part at a time, until the Apple II’s 48K of memory was full. This usually yielded a minute or two of music, depending on the polyphony. The software showed a blank piano staff, on which notes were entered with the keyboard or a light pen. Then, after all the note data was entered, the music had to be “compiled” before playing, which could take up to five minutes. Finally, each piece was recorded to magnetic tape, and then all the pieces were spliced together.
Nevertheless, it was revolutionary, because for the first time I could hear a new idea almost immediately, over and over, with variations. There were no more surprises in first rehearsals. Every idea became clear in a moment, not a year later. Passport Designs offered the MCMS hardware with a music keyboard and their own software, which emulated a multi-track tape recorder. I wrote a note editor for this system, and used it to produce the original version of Drastic Measures.
San Francisco studio about 1983. Apple II on left is for writing code, on the right for music. Soundchaser keyboard far right. Rack includes very first JL Cooper MIDI IO.
Studio about 1986. Apple II controls a rack of MIDI synthesizers. Teac 4-track tape recorder and Nakamichi cassette recorder (on rack). Synths include Oberheim Xpander.
In 1983, the Musical Instrument Digital Interface (MIDI) specification was adopted by most of the musical instrument industry, and the floodgates were opened. It enables synthesizers to talk to each other, or to communicate with a master computer, in a simple language of performance gestures. I continued to develop MIDI software for Passport Designs until 1987, and used the last version to produce a live concert. Since then, I’ve used several newer and more sophisticated applications, in particular Cakewalk, which I liked because it was easy to write custom code to do odd things.
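That "simple language of performance gestures" is literally a stream of short byte messages. As an illustration (my own sketch, not part of any software mentioned here), a MIDI Note On is just three bytes: a status byte combining the message type with a channel number, then the note number and the key velocity:

```python
# Toy sketch of the MIDI wire format: a Note On message is three bytes.
# Status byte = 0x90 | channel (channels are 0-15 on the wire),
# then the note number (middle C = 60) and velocity (0-127).

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """Build a raw MIDI Note On message."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(channel: int, note: int) -> bytes:
    """Note Off is status 0x80 | channel; velocity 0 by convention here."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

# Middle C on channel 1 (wire channel 0), at a moderate velocity:
msg = note_on(0, 60, 64)
```

A synthesizer receiving `msg` starts sounding middle C; the matching `note_off` silences it. Everything the "master computer" sends is built from messages this small, which is why a 1983-vintage serial link could drive a whole rack of instruments.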
San Francisco studio, 1988 - a soundproof room inside my loft. Apple II has been replaced with early Macintosh, running Performer and Master Tracks Pro. Electro-Voice mixer and Peavey power amp on left.
Thailand studio, 2005. Tannoy passive monitors were driven by Alesis RA-100 power amp (not shown). Two (of at least four) cheap UPS's on far right.
Physically, this studio was in a small (3.4 x 3.6 m.) spare bedroom that received a do-it-yourself acoustic treatment (stacked insulation rolls behind the speakers for bass traps, home-made "Abflectors", etc.). It used state-of-the-art digital audio equipment, described in detail below.
Metaphorically, my workspace is two separate but interlocking organizations:
* The Lamluka Sinfonia, the virtual orchestra/ensemble/group that performs music I write;
* The recording studio and production environment that captures performances by the LLS and produces finished product for distribution.
Over time, they’ve become more integrated; composition, performance and recording are often nearly simultaneous activities. Nevertheless, each task requires special tools, which are discussed in detail below.
My original workflow (until 2004) was to write out an entire work by hand first, in pencil on orchestra paper, then use MIDI editor software to input the notes so the piece could be played back as MIDI performance instructions. This was done either in “step time” with a small 4-octave keyboard, or by drawing the notes on a grid with the mouse. Cakewalk was my favorite application for many years, mainly because it was possible to write custom commands to do weird things to the notes. Notes could be displayed either in basic music notation or in a “piano roll” format, which was always awkward to read but better for certain types of transformations. When the score was roughly done, I would export it to Pro Tools for finishing, before recording audio.
About 2004, I stopped using pencil and paper except to sketch ideas. Ideas went directly into Cakewalk. This process moved me in unexpected musical directions.
Since about 2006, when Pro Tools version 7 (and beyond) introduced much more sophisticated MIDI tools, I have begun the composing process directly in Pro Tools, though I sometimes still use SONAR for MIDI development because of its more advanced editing features.
The keyboard in my studio now is a Yamaha CP-33 stage piano, which serves as an instrument to play for fun or to develop new ideas, as a MIDI output device sending notes to Pro Tools, and as a MIDI input device joining the Lamluka Sinfonia.
I also have a couple of guitars. My favorite is a 1993 Gibson Nighthawk.
I seldom use microphones, but there's a basic selection for different situations:
M-Audio Pulsar matched pair for recording a live stereo mix;
M-Audio Solaris, large diaphragm condenser mic for vocals;
Audio-technica AT822 stereo mic for field recording with MicroTrak II;
Shure SM-57 for almost anything.
For capturing ambiances and sound effects, especially when I want to record discreetly, I use an M-Audio MicroTrack II recorder.
For generating unusual sounds, I use Reason, a software application from Propellerhead that emulates a more experimental electronic music studio. It has many types of synthesizers, mixers, and effects; I'm especially fond of the granular synthesizer.
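Granular synthesis, the technique behind that favorite device, builds new textures by cutting short windowed "grains" out of a source sound and scattering them, overlapped, into an output buffer. The toy sketch below is my own illustration of the idea (the names and parameters are mine, not Reason's):

```python
import numpy as np

def granulate(source, grain_len=512, n_grains=200, out_len=8000, seed=0):
    """Toy granular synthesis: scatter windowed grains of `source`
    into a new buffer, overlapping freely, then normalize."""
    rng = np.random.default_rng(seed)
    window = np.hanning(grain_len)          # fade each grain in and out
    out = np.zeros(out_len)
    for _ in range(n_grains):
        src = rng.integers(0, len(source) - grain_len)   # where to cut
        dst = rng.integers(0, out_len - grain_len)       # where to paste
        out[dst:dst + grain_len] += source[src:src + grain_len] * window
    return out / np.max(np.abs(out))        # normalize to full scale

# Source: one second of a 220 Hz sine at an 8 kHz sample rate
t = np.arange(8000) / 8000.0
cloud = granulate(np.sin(2 * np.pi * 220 * t))
```

Even from a plain sine wave, the random placement and windowing produce a shimmering "cloud" rather than a steady tone; with a richer source and per-grain pitch shifting, the results get far stranger, which is the appeal.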
Occasionally I use Avid Sibelius to write down ideas, though I generally use it to produce scores and parts from MIDI files exported from completed Pro Tools sessions. It’s a very sophisticated music notation editor that lets me write the way I used to by hand, but hear the music as I go. The score can then be imported as MIDI data into Pro Tools, so I can prep it for performance by the LLS. This task was made much easier in version 8, when a notation window was added.
The Lamluka Sinfonia
This is the name given to the assortment of hardware and software, running on three computers (so far), that produces sounds and "plays" my compositions. As I wrote in 1984: "I view the computer ... merely as another musical instrument, but one which will play my music exactly as I specify, and whenever I choose ... to circumvent the thorny problems that arise from attempts by instrumentalists to perform this music, both in terms of technical problems and the politics of procuring adequately-prepared performances through existing musical institutions."
It's the fulfillment of my goal of freeing myself from the tyranny of the performer, who usually has his own agenda. After too many years and too many performances by organizations whose only interest in playing "new music" was the favorable publicity it brought their charity work (much as someone donates money to a hospital to get their name on a building), I've become hopelessly cynical about anyone who claims to want to promote my music. The Lamluka Sinfonia is thus my standard: if someone can't play it much better, they don't get to play it at all.
From the adoption of MIDI in 1983, my “virtual orchestra” was a stack of hardware synthesizers connected to a computer through a MIDI interface. I began working this way as soon as there were MIDI synthesizers available. Rather than recording a piece of music track by track on a multitrack tape recorder, the entire composition was played “live” by the orchestra of synthesizers, and only recorded as a final stereo mix when all the editing was done. For many years after I began composing this way, my music was written specifically for synthesizers and never intended to be played by humans.
When I liquidated my studio in 1994, I purchased a Turtle Beach Maui sound card that could synthesize a basic sound palette using a protocol known as General MIDI, which dictated that sound #1 is an acoustic grand piano, #2 a bright acoustic piano, and so on. For a year I worked this way, with just a computer and a set of small multimedia speakers. Then I set up a fairly portable system, comprising a portable rack case with one General MIDI synthesizer (a Roland M-GS64, which I still have), a reverb unit, patch bay, power amp and speakers.
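Selecting one of those standardized sounds takes only a two-byte Program Change message. The sketch below is my own illustration (the name table is a small excerpt from the General MIDI sound set, not an API of any product mentioned here); note that GM numbers sounds 1-128 while the wire byte runs 0-127:

```python
# A few entries from the General MIDI Level 1 sound set (1 = grand piano).
GM_NAMES = {
    1: "Acoustic Grand Piano",
    2: "Bright Acoustic Piano",
    19: "Rock Organ",
    41: "Violin",
    57: "Trumpet",
}

def program_change(channel: int, gm_program: int) -> bytes:
    """Build the two-byte MIDI Program Change that selects a GM sound.
    GM programs are numbered 1-128; the wire byte is 0-127, hence -1."""
    return bytes([0xC0 | (channel & 0x0F), (gm_program - 1) & 0x7F])

msg = program_change(0, 1)   # select the Grand Piano on channel 1
```

This is why any General MIDI file sounded at least plausible on the Maui or the M-GS64: every compliant synthesizer maps the same program numbers to the same instrument families.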
A big change came in 2000, when I began working for Digidesign. A big fringe benefit was easy access to the best digital recording equipment available. Pro Tools version 5 had just been released, and it added the capability to record and play MIDI data as well as audio. It included a few “virtual synthesizers”, software plug-ins that appear in windows on the screen and closely resemble classic music hardware. This is mostly how I work today, with multiple virtual instrument applications, running either standalone or as Pro Tools plug-ins.
In 2003, I started to miss the experience of composing something that might theoretically get played some day. In particular I wanted to write a symphony scored for a fairly traditional symphony orchestra. So I turned to Gigastudio, a program that used some clever technology to play samples of real instruments, almost an entire orchestra at once.
Today, the "Lamluka Sinfonia" comprises applications running on three computers, plus some dedicated synthesizer hardware:
Computer 1: several applications share 80 MIDI channels of note data in via a Digidesign MIDI IO. An M-Audio ProFire Lightbridge provides 32 channels of digital audio (ADAT optical) out, and an RME 9632 I/O provides an additional 8 ADAT channels.
Gigastudio 3.0, a sample-player application. TASCAM discontinued development in 2008, but it still works well most of the time, and I have a significant investment of time and money in sample libraries, which I've customized. These include:
Vienna Symphony Orchestra
Dan Dean Woodwinds
John Rekovics Saxophones
Virtuoso Solo Strings
Quantum Leap Strat '56
Akai Pure Guitars
Christian Lane Vibes & Marimba
London Orchestra Percussion
New York Percussion Workshop
The main Gigastudio 3.0 interface
Structure, Digidesign's plug-in sample player for all Pro Tools platforms. I use it with M-Powered. Mostly I use two libraries, the East-West Goliath Edition and the East-West Orchestral Collection.
The Structure interface
Pro-52, a software emulation of the Sequential Circuits Prophet 5 synthesizer, from Native Instruments.
B4, a Hammond organ emulation application, from Native Instruments.
Computer 2: This one is mainly Pro Tools, currently version 8 (I had problems with MIDI throughput on v. 10 and rolled back to this version). I use several Pro Tools virtual instrument plug-ins regularly:
Digidesign Velvet, Rhodes and Wurlitzer electric pianos.
Digidesign DB-33, an excellent Hammond organ emulator.
Native Instruments Elektrik Piano, also Rhodes and Wurlitzer pianos, plus Hohner Clavinet.
Digidesign Strike, a drum machine. Mostly I use the banks of sound and write out the parts as MIDI tracks, rather than programming Strike.
Native Instruments Vokator, a high-quality vocoder I use to synthesize vocal parts.
Digidesign Xpand!, a simple sample player that uses very few system resources.
Digidesign Vacuum, a virtual monophonic analog synth.
Access Virus Indigo, a software emulation of their hardware synth.
Reason also runs on this computer. The audio output of this application is routed to Pro Tools via ReWire, so it looks to Pro Tools like another plug-in.
Computer 3: Acquired in 2014, it's much faster than the other two combined. Currently it's running Pro Tools LE (with an Mbox), and several new virtual instruments, plus Sibelius:
Vienna Symphonic Library Dimension Strings
Wallender Virtual Instrument Woodwinds
Garritan Personal Orchestra
AAS String Studio
Finally, there are also two dedicated hardware synthesizers:
A Roland M-GS64 General MIDI sample player with 32 MIDI channels in, 64-voice polyphony and two stereo analog outputs.
A Yamaha CP-33 stage piano, 16 MIDI channels in, one stereo analog output.
As tools have become ever more sophisticated, this setup has worked out very well for instrumental music. Vocals have remained problematic, however. My best success so far has been with a vocoder, a device that modulates one sound (the modulator) with another (the carrier). Typically, this has been used as a special effect, e.g., a talking guitar. But I found that with a very high-quality vocoder and a full-bandwidth carrier, more realistic speech could be obtained. Specifically, if I use the 1,024-band Vokator to modulate a very harmonic- and noise-rich sawtooth waveform with my spoken voice, it sounds very much like me singing. This technique was first used in Oh The Agony of My Heart, and later in a recording of Christmas Eve, Alone from Play the Piano Drunk. I’m still doing research in this area, but I suspect a 10,000-band vocoder might give a much more realistic vocal rendering.
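The channel-vocoder idea can be sketched in a few lines. This is my own toy version, not Vokator: frame by frame, the spectrum of the modulator (the voice) is measured in bands, and each band's energy scales the corresponding band of a harmonically rich sawtooth carrier. The band count here stands in for Vokator's 1,024:

```python
import numpy as np

def vocode(modulator, carrier, frame=1024, bands=64):
    """Toy channel vocoder: shape the carrier's spectrum, frame by
    frame, with the modulator's per-band energy."""
    n_frames = min(len(modulator), len(carrier)) // frame
    win = np.hanning(frame)
    out = np.zeros(n_frames * frame)
    edges = np.linspace(0, frame // 2 + 1, bands + 1, dtype=int)
    for i in range(n_frames):
        m = np.fft.rfft(modulator[i*frame:(i+1)*frame] * win)
        c = np.fft.rfft(carrier[i*frame:(i+1)*frame] * win)
        for b in range(bands):
            lo, hi = edges[b], edges[b + 1]
            if hi > lo:
                # scale this carrier band by the modulator's band energy
                energy = np.sqrt(np.mean(np.abs(m[lo:hi]) ** 2))
                c[lo:hi] *= energy
        out[i*frame:(i+1)*frame] = np.fft.irfft(c)
    return out / max(float(np.max(np.abs(out))), 1e-12)

sr = 8000
t = np.arange(sr) / sr
saw = 2 * (220 * t % 1.0) - 1.0                        # harmonic-rich carrier
voice = np.sin(2*np.pi*3*t) * np.sin(2*np.pi*300*t)    # stand-in "voice"
sung = vocode(voice, saw)
```

More bands mean the carrier tracks the voice's formants more finely, which is exactly the intuition behind hoping a 10,000-band vocoder would sound more realistic still. (A real implementation would also overlap the frames; this sketch omits that for brevity.)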
Real or virtual performances become digital files stored on hard drives, eventually to become audio CDs or MP3 files.
After wasting time with inadequate software, I moved to the industry-standard Pro Tools for recording as soon as I could afford it. This technology records the digital output of the LLS directly to a hard disk, where it can be edited and manipulated as much as one wants, with no loss of fidelity. Most tracks are recorded at 48 kHz, 24 bits, meaning 48 thousand audio samples per second, each with 24 bits of resolution. By comparison, standard audio compact discs are 44.1 kHz, 16 bits.
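The difference those numbers make is easy to quantify. As a back-of-the-envelope sketch (my own arithmetic, nothing product-specific), the raw data rate of one stereo PCM stream is just sample rate times bytes per sample times channels:

```python
def data_rate(sample_rate_hz: int, bits: int, channels: int = 2) -> int:
    """Raw PCM data rate in bytes per second for one stream."""
    return sample_rate_hz * (bits // 8) * channels

studio = data_rate(48_000, 24)   # 48 kHz x 3 bytes x 2 ch = 288,000 B/s
cd     = data_rate(44_100, 16)   # 44.1 kHz x 2 bytes x 2 ch = 176,400 B/s
```

So each studio-format stereo track carries about 63% more data per second than a CD, which adds up quickly across a large multitrack session.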
Pro Tools 8 windows
This system accepts 48 inputs:
Two 192 Digital IO's, each of which has 16 channels of optical digital (ADAT) input from the ProFire Lightbridge
One 192 IO:
8 optical inputs from the RME 9632 ADAT input
2 AES-EBU digital inputs from the POD Pro
2 analog channels from the Roland M-GS64
2 analog inputs from the Yamaha piano
1 (mono) input from the TrakMaster preamp
one analog input free
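The input list above can be tallied to confirm it accounts for all 48 channels (the labels in this sketch are my own shorthand for the devices listed):

```python
# Tally of the Pro Tools system's physical inputs, as listed above.
inputs = {
    "192 Digital IO #1 (ADAT from Lightbridge)": 16,
    "192 Digital IO #2 (ADAT from Lightbridge)": 16,
    "192 IO: RME 9632 ADAT":                      8,
    "192 IO: POD Pro AES-EBU":                    2,
    "192 IO: Roland M-GS64 analog":               2,
    "192 IO: Yamaha piano analog":                2,
    "192 IO: TrakMaster preamp (mono)":           1,
    "192 IO: free analog input":                  1,
}
total = sum(inputs.values())   # 48
```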
It's labelled on the diagram below as an HD3 Accel PCI system. It's actually a hybrid of an original HD 2, plus a third ACCEL-erated board.
Two Avid SCSI QuietDrives ("DigiDrives") are used for active projects, with several SATA, USB and FireWire drives online for storage and backup.
A Command 8 mixing control surface appears as a traditional mixer with automated faders, controlling a Pro Tools session. The only audio that passes through it is for the monitoring subsystem.
The monitoring scheme I'm using is a bit unusual: One pair of speakers, Digidesign RM2's, are connected directly to an AES-EBU out on the 192 IO. Room volume is controlled by adjusting a Master Fader in the Pro Tools session (via the Command 8). The other speakers, M-Audio DSM3's, are connected to an analog out on the 192, then routed to the Command 8's Control Room input. Room volume and headphone volume can be adjusted there. The Yamaha piano is connected to the Command 8 External Source input, so it can be played through the monitors directly.
The current software version can record up to 256 audio tracks, which has so far been adequate for even the biggest projects. Most tracks are from virtual synthesizers, but I occasionally record a live guitar or keyboard part, and I have a microphone preamp and a selection of decent microphones for vocals or other sounds.
Recordings are eventually mixed down to a stereo master, which is then rendered as a CD or MP3 file.
This computer also hosts Sibelius, which is used both as a input to and output from Pro Tools (by exporting and importing MIDI files).