Learn To Make Hip Hop

Learn to make hip hop music. Become a true beatmaker today.




iPhone app for making cover songs, a sign of a changing music world

Friday, August 28th, 2015


The music industry is fantastic at hindsight. We’ve obsessed over the spread of online piracy, the death of the CD, and then the impact of streaming. But every measure of the business model is somehow framed around acquiring records. And it’s about passive consumption.

We have to remember, though, that passive consumption is itself really the outlier. Until the dawn of recording, music only existed when you played it. Our current copyright and licensing system was first structured around sheet music. And that world never went away. Precise recordings can give you the experience of listening, but no technology can give you the feeling of singing.

So it’s time to start thinking about business models that involve active participation. We saw that earlier this month with label Ninja Tune embracing remixing in an app and Launchpad sound packs. Here’s a more conventional approach.

Wurrly is an app for recording covers of popular songs. It starts with a song store (and links to the originals on iTunes), but instead of tapping to download, you tap to sing. Choose a pre-made accompaniment (full band, piano, or guitar), set the key and tempo, and record. The cleverest part of the app is probably the interface for adding finishing touches: you get a simple fader for mixing and Instagram-style effects. (I’m sure we’ll keep hearing about an “Instagram for music” or “Instagram for sound” until someone really nails it.)

Of course, this is all paired with social sharing features and featured songs. I’m impressed: some of the recordings are pretty good — there are some talented singers here, not just karaoke fare. I find the arrangements themselves a little dry, though; I think the app would benefit from original stems coming from the artists.

And yes, theoretically, this sort of thing could be a revenue stream — though again rates set for statutory licensing are key. A spokesperson for the developer tells CDM:

We have deals with all of the majors and we have blanket licenses on their content. As you know, songs these days can have multiple co-publishers, so we go directly to the stakeholders to get permission. We pay them royalties based on the seconds of usage in the app, quarterly. We also have encryption within the app so that users cannot manipulate it.

Think of this as the pop song / singer analog to Native Instruments’ Stems, and you begin to see where the landscape might shift.

It’s tough to tell what will be a hit and what won’t, in apps as in music itself. But looking beyond just acquiring music directly is wise. The beauty of the shift from devices like the iPod or Walkman to those like the iPhone or tablet is that it’s far easier to engage the user in a creative, active experience. And just as the phone made people feel better about taking more photos by making them look better, there’s no question that making people happy with the way they sound is a key motivation for encouraging musicianship.

Of course, in the past I made this prediction about music games, and that trend lost some steam. But I think we’re still in early days. Watch this space.





The post iPhone app for making cover songs, a sign of a changing music world appeared first on Create Digital Music.


Here’s how to start making your own Stems for sale or DJing

Monday, August 24th, 2015

We’ve heard a lot about Stems, a distribution format providing four separate, DJ-ready parts. And we’ve already gotten to the point where you can buy a range of Stems music online. What you haven’t been able to do is try making your own Stems, unless you were one of the early label partners.

That changes today, with Native Instruments’ public release of the free Stem Creator Tool. This is officially a beta version, but NI reports the files are created correctly and you should find it stable.

This also means that, whether or not you’re sold on Stems yet, you’ll get a better picture of how it works for producers.

First, to the pack itself. You get:
1. A quick-start guide. (There’s also a video, included here.)
2. A guide on making your own Stems album cover (so it says ‘Stems’ on it, basically), accompanied by a template .psd file.
3. A software tool for Mac and Windows that handles metadata, dynamics processing, and file export. (Only 64-bit Windows is supported at the moment, but 32-bit support is coming.)

Probably your best bet is to watch the video. There are some interesting details you might easily have missed in previous discussions:

The Stem Creator Tool provides its own compression and limiting tools. “How do you master Stems?” was a frequent question. The answer is, basically, you don’t — not before you get to the creator tool. That tool has its own compressor/limiter for quickly making the mixed stems sound as loud as the master.

Here’s the workflow: first, go back to your project file, and make sure that when you mix together all four stems, the results don’t clip. In other words, you’re treating exporting stems the way you would sending individual track exports to a remix artist or mastering engineer — dry.

Then, from the creation tool, you apply internal compression and limiting to bring the mixed four stems up to the loudness level of your stereo master. That means you’re now feeding the individual stems through NI’s own processing rather than what you or your mastering engineer used on the stereo master signal chain.

The advantage here is ease and reliability. You can quickly get your track to the same overall loudness as the master. To get there, you sacrifice some control — though since you still distribute the stereo master, that’s perhaps not much of an issue.
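That "don't clip when summed" rule is easy to sanity-check outside your DAW. Here's a minimal sketch, using synthetic sine and noise placeholders in place of real stem audio (load your actual exported files with a library like soundfile instead):

```python
import numpy as np

# Four hypothetical mono stems, 2 seconds at 44.1 kHz. These are
# synthetic stand-ins for real exported stem audio.
sr = 44100
t = np.linspace(0, 2, 2 * sr, endpoint=False)
rng = np.random.default_rng(0)
stems = [
    0.4 * np.sin(2 * np.pi * 55 * t),           # "bass"
    0.3 * np.sin(2 * np.pi * 220 * t),          # "synth"
    0.2 * np.sign(np.sin(2 * np.pi * 2 * t)),   # "drums" placeholder
    0.05 * rng.standard_normal(len(t)),         # "vocal" placeholder
]

# The rule of thumb above: the four dry stems, summed, must not clip.
mix = np.sum(stems, axis=0)
peak = np.max(np.abs(mix))
peak_dbfs = 20 * np.log10(peak)

print(f"summed peak: {peak_dbfs:+.2f} dBFS")
if peak >= 1.0:
    print("Clipping: pull levels down in the project before exporting.")
else:
    print("Headroom is fine; export the stems dry.")
```

If the summed peak lands at or above 0 dBFS, the fix belongs back in the project file, not in the exported audio.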


The Stems playback tool has to reproduce those compression settings. Here’s the interesting bit. When you export, the tool doesn’t bounce the processing. Instead, it stores your settings in the metadata of the file. A playback tool — for now, this is just Traktor — has to reproduce the same processing.

NI tells CDM a bit about their motivation:

Those settings are then read from the file when the user loads it in TRAKTOR, thus the user hears what the producer heard when they have all the stem volumes at max. When the user starts changing the mix of the stems, the real-time compression/limiting then responds accordingly so the end results sound authentic and professional.

Any Stems-compatible tool will need this (free) DSP library. NI mentioned recently it would provide DSP in the SDK. Now we know why: you’ll need to add their compression/limiting library to your tool so the Stems play back dynamics settings from the file accurately.

NI confirms that to CDM. “Without the DSP library, the end results could be thin (lack of compression) or could result in clipping (if the stems add up to a level over 0dBFS and the Limiter isn’t used),” NI tells us. “Anyone who fails to implement the DSP library will not have fully implemented Stems support in their product.”

Stems will respond dynamically as you mix. Since compression/limiting is applied in real-time, rather than to individual stems, you can depend on your mixed files sounding loud enough even as you mix.
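To see why that shared playback DSP matters, here is a toy model of the playback side (not NI's actual library, whose algorithms and metadata fields aren't public): sum the stems at the user's fader gains, apply a makeup gain that would in the real format come from the file's metadata, and limit so the result can't exceed 0 dBFS.

```python
import numpy as np

def playback_mix(stems, gains, makeup_db=6.0, ceiling=1.0):
    """Toy stand-in for Stems playback: sum stems at user-set gains,
    apply the producer's stored loudness boost, then limit.
    The 6 dB default and the hard clip are illustrative guesses."""
    mix = sum(g * s for g, s in zip(gains, stems))
    mix *= 10 ** (makeup_db / 20)           # "metadata" makeup gain
    return np.clip(mix, -ceiling, ceiling)  # crude limiter at 0 dBFS

# Two toy stems: a louder low sine and a quieter high one.
t = np.linspace(0, 1, 44100, endpoint=False)
stems = [0.5 * np.sin(2 * np.pi * 110 * t),
         0.3 * np.sin(2 * np.pi * 440 * t)]

out = playback_mix(stems, gains=[1.0, 1.0])
print(f"peak after limiting: {np.max(np.abs(out)):.3f}")
```

With both faders up, the boosted sum would exceed full scale, so the limiter stage is what keeps the output legal; skip it, as a player without the DSP library would, and you get exactly the clipping NI describes.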

But you can bake in your own processing, if you choose. If you want to do some dynamics processing before exporting Stems, there’s nothing stopping you. However, your Stems may not sound loud enough when used elsewhere.

There isn’t a standard for color and order. In the creation tool, you drag and drop your Stems, then choose color and title. There isn’t a convention; any standardization comes down to paying attention to what other Stems producers are doing. This means you have a fair bit of choice, if your track is something other than “drums / bass / synth / vocal” — or if you’re picky about color. (Hey, some people really do have serious synesthesia that makes them compulsive about this! Or maybe your vocals were done by a Smurf, and so you want them to be blue. Whatever. I’m not sure what color a bassline is supposed to be.)

Reflections: Stems is still “a Traktor thing” until there are other software partners or at least an SDK — I still think the SDK could get small mobile developers onboard in a hurry, for instance. But the creation tool is important because it opens up the format beyond just labels and stores, directly to producers. It’s at least of appeal if you’re a producer who either uses Traktor or thinks your fans might want to use your tracks in their DJ sets.

The delicate balance NI has to walk is one between standardization and ease on one hand, and flexibility on the other. Will producers be happy with these internal compression tools? Will this be a mess of unpredictable colors and titles?

I will say, though, for most dance music producers, the Stems Creator Tool is easy enough that this should be a no-brainer to at least try.

For distribution, if you have music that lends itself to releasing stem files, it’s a solid option. Even if you’re not on a label or in one of these stores, there are various tools that let you sell zip files directly (including Bandcamp).

And what we know now is that making the files is a quick and painless process. Expect to see a lot of artists try this and see if it sticks. (In fact, Stems’ new problem is how to sustain interest past that initial, likely boom.)

For anyone doing hybrid live/DJ sets on a laptop, this could also be worth trying today. It means at last you can load individual, remix-able parts of your own songs into Traktor with a lot less effort, versus making individual Remix Decks and so on. (I will save my rant about Ableton Live “live” sets that do nothing other than trigger scenes for another article.)

Let us know if you end up using it. I still have questions about the approach to mastering — and I’m really interested to see that SDK.


Newsletter signup, which is where NI hopes you’ll go for the latest on the format.

Previously: our in-depth guide to the format

The post Here’s how to start making your own Stems for sale or DJing appeared first on Create Digital Music.


Making a light sculpture a musical instrument, played with Animoog on iPad

Saturday, August 15th, 2015

Light organs have been in use for generations. But this is the first generation that has grown up in a world of image and sound in which expression across electronic media might seem simply second nature.

And oddly, as screens have become more ubiquitous, so, too, has thinking beyond them.

What we see here, then, isn’t a projection. It isn’t a display. It’s a big bundle of lightbulbs, making rhythmic poetry in off and on once connected to a jumble of wires. Play the Moog app Animoog on an iPad, and that mountain of electronic junk winks back at you like lightning bugs.

Going from screen (iPad) to pre-digital expression (lightbulbs) seems to make perfect sense.

This project of course isn’t alone. I’m actually hopeful that we’ll see experiments like this become more commonplace — that connected interactive lighting will be as common as VJing to a projector has been in the past.

In this case, the project comes from Moog’s home of Asheville and artist Chas Llewellyn. Chas is working with a computer running Cycling ’74’s graphical creation tool Max/MSP to translate musical notes into lights. Max, then, is an interactive accompanist, containing the logic of how to convey the ideas of the musical performance in dances of light.

Full description:

Sculptor / programmer / interactive interface designer, Chas Llewellyn, explores the form and function of a large-scale light sculpture he designed using Moog Music’s Animoog app as the control source. From his former Wedge Studios workspace in Asheville’s River Arts District, Chas details the multiple electronic communication protocols that he integrated into the final working installation.

Thanks to the fine folks at Moog for sharing this, and their excellent iOS synthesizer.


Animoog | Interactive Interface

The post Making a light sculpture a musical instrument, played with Animoog on iPad appeared first on Create Digital Music.


J74 releases Progressive SE (Standalone Edition) – Desktop Application for Windows and Mac specialized in chord progression making

Saturday, August 1st, 2015

J74 has released J74 Progressive SE, a desktop application for Windows and Mac specialized in chord progression making. The application can be used with any Digital Audio Workstation (DAW) supporting [Read More]

MeldaProduction launches “MXXX Preset Making Action” – Get MXXX For Free

Tuesday, June 2nd, 2015

MeldaProduction has launched a “preset making action” event for its forthcoming MXXX modular multi-effect plugin. Here’s what they say about it: “Whether you want just to contribute or to get [Read More]

iPad Music Making Apps: Classics For a Reason – pt. 3 – iMPC Pro Review

Tuesday, April 14th, 2015


Akai released the much anticipated iMPC app for iPad at the end of 2012. It was fun but limited, a bit of a toy. Last summer iMPC Pro appeared with a new UI and better features.

First impressions

The opening page shows a series of 3.5” floppy graphics. Each floppy is a complete demo; tap one to load it, and although the styles might be a bit “cheesy corporate,” they do show off the main features. Load a floppy and you’ll reach the Main section and see the pads.


There are 16 of them, with three more pages adding up to 64, and a set of touch performance controls to the left: a two-finger tape-stop gesture, a low-pass/high-pass filter, and more. All quick and intuitive. Just playing through the pads, there are plenty of high-quality sounds available; the pads respond quickly, and when you hit record they record accurately, with good quantize functions to set your playing right. And it’s loud, louder than Native Instruments’ iMaschine, which enhances the overall impression of quality.

The whole app is split into five sections: Main, Program, Mixer, Timeline and Song.


The Program section is where you can access controls for individual pads: amplitude, filter settings, effects, and sample editing features. All this works pretty intuitively, and I only got stuck once or twice trying to find functions I needed. Editing samples is basic and straightforward, as is importing your own samples. iMPC is also a proper sampler: it will sample incoming audio from the internal microphone, a direct line in, or your iTunes library.


The Mixer section offers a decent mixer with 3-band EQ as well as reverb, delay, and chorus sends on each track, with an FX edit page, and master effects too. The FX are decent, but the track routing is one of the mysteries you need to unravel.
The Timeline section is a basic piano roll, which is assigned to a pad in the Song section.


Finally, the Song section contains a further 64 pads, each of which contains a piano roll, plus master settings like tempo/tap tempo, time signature, etc., and so represents a section of the song. Improvising song structure is really straightforward but, again, it took a while to understand programming the individual pads.

I found iMPC Pro mostly straightforward to use, and plenty of fun. The overall sound is very clean and loud, but I think it lacks options to degrade the sound in a classic MPC kind of way. Playability is good, as is the Song page for live work. In terms of connectivity, iMPC Pro is strangely limited. It supports Apple’s Inter-App Audio but not Audiobus. It will talk to some Akai MPCs/MPDs but not others. It does support co-developer Retronyms’ Tabletop app.
Sound design is excellent, with an extensive Richard Devine-designed library covering drums, instruments, and special FX, as well as a function to upload kits to and download kits from a user database. Much of the functionality of the MPC is in there, as well as some neat touch features. It is well laid out, and I believe complete tracks could be produced within it. The only issues I have are that navigation can sometimes be difficult (though it improves with practice), that the lack of Audiobus compatibility is a problem when working with other apps, and, as mentioned, that the sound is not quite an MPC.

Pros:
— Good value
— Pristine sound and good playability

Cons:
— Connectivity
— Lack of vintage MPC sound emulation

$12.99


Get Inspired with Excerpts of Ableton’s Making Music Book

Friday, April 10th, 2015


Following our interview with author Dennis DeSantis, we can start your weekend with some sage advice from his book Making Music. While published by Ableton, this isn’t an Ableton book. It lies at the boundary of software and music, at the contact points of creativity in the tool.

For a CDM exclusive excerpt, I wanted to highlight two chapters. One deals with the question of how to overcome default settings — this cries out as almost a public service announcement for people making 120 bpm 4/4 tunes because that’s what pops up when you start a new project in Live and many other DAWs. The other looks at programming drums by grounding patterns in the physical — it’s no accident that Dennis is himself a trained percussionist.

Even if you did land a copy of the printed edition already, this seems a perfect “book club” affair for us to share. Thanks to Dennis and Ableton for making them available; I hope it lights a spark for you and people you know. -Ed.

The Tyranny of the Default

Every time you’re inspired to start a new song, you open your DAW and are immediately terrified by the blank project. Maybe you have a simple melody, bass line, or drum part in your head. But in order to hear it, you first have to load the appropriate instruments, set the tempo, maybe connect a MIDI controller, etc. By the time you’ve gotten your DAW to a state where you can actually record, you’ve either lost the motivation or you’ve forgotten your original musical idea.

Because DAWs have to cater to a wide range of users, they are often designed to work out of the box with a collection of default options and a basic screen layout that will be offensive to no one but probably also not optimal for anyone. This inevitably leads to a phenomenon that software developers call “the tyranny of the default”: Since most users will never change their default software options, the seemingly small decisions made by developers may have a profound effect on the way users will experience the software every day.

Here’s how to overcome the tyranny of the default in your own studio.

Rather than allowing your DAW to dictate the environment in which you’ll start each track, take the time to build your own default or template project. People often think of templates as blank slates containing a bare minimum of elements, and most default templates provided by DAWs are exactly that; maybe one or two empty tracks, perhaps a return track containing a single effect. But if you regularly start work in a similar way (and even if you don’t), building a template that’s unique to your musical preferences and working style can save you lots of time when you start a new song, allowing you to more quickly capture an initial musical idea from your head into your DAW.

For example, many DAWs set a default tempo of 120 bpm for new projects. If you tend to work in a genre that is generally in a different range of tempos, save yourself time by saving your template with a more appropriate tempo. Additionally, your DAW’s default project likely makes a lot of assumptions about how many output channels you’ll be using (usually two), as well as more esoteric settings like sample rate, bit depth, and even the interface’s color scheme. If you prefer different settings, don’t change them every time you start a new song. Instead, make these changes once and save them in your own template.

Additionally, if you regularly use a particular collection of instruments and/or effects, try pre-loading them into tracks in your DAW and saving them into your template. If you have a go-to sound that you use for sketching out ideas (maybe a sampled piano or a particular preset), preload that preset in your template and even arm the track for recording. This way you can be ready to play and record as soon as the project is loaded.

Some DAWs even allow you to create templates for different types of tracks. For example, if you regularly use a particular combination of effects on each track (such as a compressor and EQ), you could preload these devices—and even customize their parameters—into your default tracks. Then each time you create a new track in any project, you’ll have these effects in place without needing to search through your library of devices.

If you regularly work in a variety of genres, you should consider making multiple templates, each one customized for the different sounds and working methods you prefer. Even if your DAW doesn’t natively support multiple templates, you can still create your own collection; you’ll just need to remember to Save As as soon as you load one, so you don’t accidentally overwrite it.

Some producers, recognizing the value of a highly customized template project, have even started selling templates containing nearly (or even completely) finished songs, with the stated goal that newer producers can use these to learn the production techniques of the pros. If that’s really how you intend to use them, then these are a potentially valuable learning resource. But be careful to avoid just using these as “construction kits” for your own music. This is potentially worse than working from an empty default and is a grey area between original music and paint-by-numbers copying (or worse, outright plagiarism).

Programming Beats 4: Top, Bottom, Left, Right

From listening to a lot of music, you have a general understanding of how to program beats that sound similar to those in the music that inspires you. But you don’t really have a sense of how the various drums in a drum kit relate to each other or the way human drummers think when they sit down at the drums and play. As a result, you’re concerned that your programmed beats are either too mechanical sounding or are simply the result of your own interpretation and guesswork about what you hear in other music.

Even if you have no intention of writing “human”-sounding drum parts, it can be helpful to understand some of the physical implications of playing a real drum kit. Here are some ways that drummers approach their instrument.

At a philosophical level, a drum kit can be thought of as divided into top and bottom halves. The top half includes all of the cymbals: the hi-hat, ride, crashes, and possibly more esoteric cymbals like splashes, Chinese cymbals, gongs, etc. These are the “top” half for two reasons: They’re both physically higher than the drums, and they also occupy a higher range in the frequency spectrum. In contrast, the bottom half is the drums themselves: the kick, snare, and toms. (The snare is a special case and can be thought of as somewhere in between the top and the bottom in frequency. But for our purposes, let’s consider it part of the bottom group).

Drummers tend to unconsciously approach beat making from either the “top down” or the “bottom up,” depending primarily on genre. Jazz drumming beats, for example, are generally built from the top down, with the ride cymbal pattern being the most important element, followed by the hi-hat (played by the foot). In this context, the kick and snare drum serve to accent or interrupt the pattern which is established by the cymbals. A typical jazz drumming pattern might look like this:


In contrast, rock, pop, or R&B drumming beats are built from the bottom up, with the interplay between the kick and the snare comprising the most important layer and the hi-hat or ride cymbal patterns serving as a secondary element. A typical rock drumming pattern might look like this:


Note that in both jazz and rock beats, the cymbals generally play simple, repeating patterns, while the kick and snare play gestures that are more asymmetrical. But in jazz, those simple cymbal patterns are fundamental signifiers of the genre. In rock, the cymbal patterns are secondary in importance, while the asymmetrical kick and snare gestures are what define the music.
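Since the book's notated examples don't reproduce here, a rough step-grid sketch of the two orientations (my own illustration, on a straight eighth-note grid that ignores jazz swing, not the book's actual figures):

```python
# One bar of 4/4 as eight eighth-note steps (1 = hit, 0 = rest).
rock = {                      # bottom-up: kick/snare define the beat
    "hihat": [1, 1, 1, 1, 1, 1, 1, 1],   # simple, repeating top layer
    "snare": [0, 0, 1, 0, 0, 0, 1, 0],   # backbeat on 2 and 4
    "kick":  [1, 0, 0, 0, 1, 0, 0, 1],   # asymmetrical bottom gesture
}
jazz = {                      # top-down: the ride carries the pattern
    "ride":  [1, 0, 1, 1, 1, 0, 1, 1],   # ride-pattern skeleton
    "hihat": [0, 0, 1, 0, 0, 0, 1, 0],   # foot hi-hat on 2 and 4
    "kick":  [0, 0, 0, 0, 0, 1, 0, 0],   # sparse accent
}

for name, kit in (("rock", rock), ("jazz", jazz)):
    print(name)
    for voice, steps in kit.items():
        print(f"  {voice:5s} " + "".join("x" if s else "." for s in steps))
```

Read the grids top to bottom and the point of the passage jumps out: in rock the steady row is the hi-hat and the interesting rows are kick and snare; in jazz the ride row carries the identity and the kick barely appears.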

An awareness of these drumming concepts might give you some things to think about when writing your own electronic drum parts. Are you thinking from the top (cymbals) down, or from the bottom (kick and snare) up? Is the genre you’re working in defined by repeating patterns (such as the steady four-on-the-floor kick drum of house and techno) or by asymmetrical gestures (such as the snare rolls used for buildups in trance)?

In addition to the top/bottom dichotomy, drummers also must make decisions along the left/right axis when determining how a particular pattern is divided between the left and right hands. On a drum kit, some of this is determined by the physical location of the instruments. But for an instrument like a hi-hat that can be reached by either hand, there is often a subtle difference in sound depending on how the pattern is played. For example, consider the following beat:


At slow-to-moderate tempos, most drummers would probably play the hi-hat part with one hand, leaving the other free for the snare drum. But once the tempo becomes too fast, it’s no longer possible to play a continuous stream of sixteenth notes with one hand. At this point, many drummers would switch to playing the hi-hat with alternating sticking, each stroke with the opposite hand. But this requires some compromises: Beats two and four require both hands to be playing together, so the player must either move one hand very quickly between the snare and hi-hat or play at least two consecutive hi-hat notes with the same hand. In both cases, there will likely be a slightly different resulting sound. Even the subtle physical differences between two drumsticks can result in a different sound versus when a pattern is played with a single hand.

Of course, none of these physical restrictions apply to the electronic domain by default. There’s no inherent physical speed limit and no need for any notion of “alternating stickings.” At any tempo, consecutive notes can sound completely identical if that’s your intent. But if you’d like to apply some of the sonic characteristics that come about as a result of these human restrictions, you can do so manually. For example, you could try creating a very small change in velocity for every other note in a repeating pattern. Or with a bit more work, you could actually use a slightly different sound for every other note. Some software samplers have a feature called “round robin” that automatically plays a different sample with each key press.
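A minimal sketch of that idea, assuming nothing about any particular sampler's API: a bar of sixteenth-note hi-hats where every other note is slightly softer (the "weak hand"), plus a by-hand round robin alternating between two hypothetical sample files. The specific numbers are guesses, not recommendations.

```python
import random

random.seed(1)

base_velocity = 96   # MIDI-style 0-127 scale
weak_hand_drop = 8   # how much softer the off-hand plays (a guess)
jitter = 3           # small humanizing randomness

# Sixteen 16th-note hi-hat hits with alternating accents.
hats = []
for step in range(16):
    vel = base_velocity - (weak_hand_drop if step % 2 else 0)
    vel += random.randint(-jitter, jitter)
    hats.append(max(1, min(127, vel)))

# "Round robin" by hand: alternate between two hypothetical samples.
samples = ["hat_right.wav", "hat_left.wav"]
events = [(step / 4, samples[step % 2], hats[step]) for step in range(16)]

print(hats)
print(events[0], events[1])
```

Each event is (beat position, sample, velocity); the even steps always land a touch louder than the odd ones, which is exactly the asymmetry alternating sticking produces on a real kit.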

Thinking like a drummer can be a useful exercise when writing beats for any genre—even ones that have no overt relationship to acoustic music at all.

The post Get Inspired with Excerpts of Ableton’s Making Music Book appeared first on Create Digital Music.


Non-Oblique Strategies: Author on the Discipline of Making Music

Thursday, April 9th, 2015


The blank screen. The half-finished project. The project that wants to be done.

We talk a lot about machines and plug-ins, dials and patch cords, tools and techniques. But the reality is, the most essential moments of the process go beyond that. They’re the moments when we switch on that central technology of our brain and creativity. And, very often, they crash and require a restart.

So it’s about time to start talking about the process of how we make music — even more so when that process is in some sense inseparable from the technology we use, whether the time-tested “technology” of music tradition or the latest Max for Live patch we’ve attempted to make work in a track.

Making Music is a book published, improbably, by Ableton. Sold out in its first paper run, with digital shortly on the way, it has already proven that there’s a hunger for creative tomes that harmonize with our tech-enabled world. Making the book Making Music is a story unto itself. Ableton’s Dennis DeSantis joins CDM to explain his own experience — and what happens when he gets stuck like the rest of us.


Do you still get stuck creatively sometimes? When? Did writing this help you find any new routes out of that?

I definitely do, and I also did while writing this book, which felt very meta. At this point, though, I’ve spent so much time thinking about the causes and solutions of my creative blocks that I can almost always blame pure procrastination or laziness if I’m stuck now. If I’m not getting work done, it’s probably because I’d simply rather be doing something else. Unless I’m faced with a deadline, I’ll usually just go do the other thing for a while.

I don’t think I uncovered anything new about my own process while writing this book. For the most part, it’s just a catalog of the kinds of things I think about anyway when making music. But I do think that the act of actually putting them down on paper forced me to think about them in a more streamlined, focused way.

You come, as I do, from a training in music composition and theory. To me, a lot of that clearly informs what you’re doing here. Where did that classical training inform what’s here? Is there a translation process for people who didn’t come from that technique and language — but who might benefit from the ideas?

The most obvious place is in the chapters that are heavily devoted to music theory. I mostly just thought about stripping away everything except the absolute most essential components. When I learned harmony it was via things like four-part chorale exercises, which I think is completely unnecessary for the way electronic musicians are working today, and probably unnecessary for learning how harmony works in general. In the context of the traditional conservatory model of harmony instruction, the chapters in my book probably look too stripped down. But I’ve heard from a number of early readers that they finally “get” the concept of functional harmony for the first time, and I’d like to think this validates my approach.

Besides the theory-heavy chapters, there are a number of more abstract concepts I learned from traditional composition training, about topics like motivic development, creating variations to expand a small amount of material into something larger, etc. These ideas are general purpose enough that I think they can be presented to people who are outside of the world of classical training, and without all of the baggage that comes from also having to learn 800 years of music to be able to cite relevant examples. It’s not actually necessary to study Mozart string quartets to figure out how good melodic writing works; you can find it in Daft Punk tracks.

You say you’ve now gone from classical music to house and techno. Why that transition, what’s new about working in those idioms versus classical work — and would you say the tech has played a role in that shift for you?

Well, I don’t think it’s an open door/shut door kind of situation. I haven’t turned my back on anything, and I’m always open to writing more concert music if the project feels right. But in general, I found that I was just way more interested in what was happening in electronic music, both at a musical level and at a cultural one. I gradually started to have more and more experiences with concert music where I felt like the little kid in the story “The Emperor’s New Clothes”—and this was even true when listening to my own concert music, which, with a few exceptions, is mostly not very good.

I don’t think tech played a role for me as much as my realization that I wasn’t particularly interested in the interpretation layer that’s implicit in any composer/performer relationship. When you’re writing for instruments played by other people, you’re subject to their own physical and artistic wishes and limitations. You provide notated music that conveys an extremely limited amount of information, and what you get back is one possible performance, filtered through those people and their instruments. In very special cases, you get back something that’s better than what you imagined. But the rest of the time, you don’t.

On the other hand, I can get exactly what I want when writing music for machines. If I want to tweak a kick drum sound for nine hours, I can do that without making someone else suffer. There’s no need for me to compromise anything, ever. For me, this is a much more appealing way to spend creative time. Right now, I’m just fundamentally much more interested in the kinds of musical things machines can do than the kinds of musical things people can do. That’s not to say I don’t like to hear a great band play great music sometimes. But I’m not so interested in them playing my music.


People may be surprised that Ableton Live really isn’t ever explicitly mentioned in the book, to the point of avoiding it. But then it’s there in the images. What was Live’s role in this book?

Live is the DAW I use for my own work, so it’s naturally what I gravitated to for the screenshots. The ideas really are meant to be equipment-agnostic, although I’ve been using Live for so long that it’s hard for me to have an outsider’s sense of how much its inherent workflows have influenced the way I think about music now. I do think anyone making music with machines can get something out of this book, whether they use Live or not.

Your examples here seem to oscillate between larger theoretical ideas and, sometimes, very specific practical ideas. Sometimes they’re recipes, sometimes broader concepts. How did you approach that? How did you organize these different levels? (Oblique Strategies appears to be an influence in some ways… Fux? Rameau? Cage?)

It’s pretty unorganized, actually, or at least it was while writing. I tried to sort and order things by general concept only after the writing was finished. What I did figure out fairly early on was that I wanted to segment things into the three big phases of actual work: beginning, progressing, and finishing. I think these phases have their own unique sets of problems and solutions, and it felt like they should be separated. Beyond that, things are loosely grouped by their discussion of broad musical concepts like melody, harmony, rhythm, and form.

Oblique Strategies, of course, looms large over any project like this. But what I really wanted to do was something not at all oblique. I wanted simple, concrete problems with simple, concrete solutions; direct strategies.

It’s funny, two of the people I know who have been really good at speaking to electronic musical practice and do that for Ableton have been you and Dave Hill, Jr. (Dave’s now at iZotope) — and you’re both percussionists. Does that experience playing rhythm physically inform what you do?

For me, I guess it does, and many of the chapters are specifically about drumming concepts. One of the things I like to do in my own music is to play little rhythmic games—uneven loops, un-synced automation gestures, polyrhythms, etc. These aren’t concepts that are inherently “about” drumming, but I guess drummers tend to think about patterns and pattern relationships more than some other musicians.

I can’t speak for Dave, although certainly some of his ability to speak eloquently comes from him being a genuinely smart, thoughtful guy. He’s also a great drummer, although I have no idea how much this plays into his thinking about electronic music. I should ask him, because it’s an interesting question.


Some of these challenges have to do with theory and musicianship, some with expressing ideas through the technology. Is there a distinction between those, areas where they pull apart? Or do they all blur?

At this point, it all blurs for me, although my overarching goal with this book was to write something that was decidedly not about technology. I wrote the book to address what I see as a huge imbalance between the amount of good resources available on the music side versus the technology side. There are so many tutorials about how to use tools, and so few about how to make music with them. This book is an attempt to remedy that, and I hope there will be more like it. I know there are people out there who can tackle this topic from different perspectives, and I’d love to read what they have to say.

Your day job, as it were, is support at Ableton. Tell us a little bit about what you do. And what’s your day like?

Not really support, but rather documentation. I’m not actually on the phones with customers. I write the manual and other tutorials for Live and Push, as well as helping out with some marketing writing. Sometimes I help on the product design side as well.

How did you find the time to make music while writing this book and working at Ableton?

Honestly, I barely did, which I guess calls into question whether I’m qualified to be writing a book like this at all! I did only a little bit of music in 2014. But now that the book is out, I hope to find more time for it this year.

Hope you do, too, Dennis — I want to hear it, apart from Ableton! Thanks! Check out the book here — and follow CDM on Facebook for news of when the digital edition becomes available.


The post Non-Oblique Strategies: Author on the Discipline of Making Music appeared first on Create Digital Music.


Kill Some Time Making Acid in a Browser, And Other Fun Tricks

Thursday, March 12th, 2015


Error 303, anyone?

Computation is everywhere — phones, tablets, watches (apparently), and yes, browsers in all of those places. And that computational power can be harnessed to completely distract you from doing real work in the office — um, I mean, make music.

“Acid Machine Beta” is a rather fun implementation of two synths and a drum machine, all running in your browser. The “Randomize” function alone should hook you for a bit. Beyond that, you get a decent complement of synth and percussion controls that could make a reasonable little groove. (Recording isn’t directly possible, but you could route audio from your browser to another app.)

I’ve tested the app in all the browsers I have here. Google Chrome/Chromium, as advertised, works best. Firefox works, too, though UI activity can make the sound skip. Safari doesn’t work at all. It’s a start — maybe not enough to justify buying that new Google Chromebook Pixel, but a nice proof of concept.

If you want other stupidly fun ways of accessing acid, we’ve got you covered.

There’s the Skinnerbox Sting for Ableton Live and Max for Live. It’s free, it’s fun, it’s totally addictive.

Sting will get you started. But in Max for Live, it’s tough to beat the sophisticated EvilFish 303. First introduced a couple of years ago, it’s been completely re-built and expanded since. It’s a 303, but with additional sound shaping and overdrive added — a bit like having all the mods for the original 303 and a lot of modernization to boot. And there’s a pattern generator, too.

It’s kind of ridiculous, actually, like the kind of thing you wouldn’t want to tell anyone else about. Or even that you might not want to know about yourself. But look at it this way: this is the cure to those dark moments when you feel uninspired and can’t start something. Let this EvilFish come to your aid, friend.

EvilFish 303

The EvilFish 303 is in a bundle with lots of other goodies, too:


Just remember, kids, when you’re playing with these tools, to play it safe. Make sure that you don’t “copy the defining funk of the cowbell accents” of any music you’ve ever heard before, or you could get in trouble with the po-po.

Thank you, David Abravanel.

The post Kill Some Time Making Acid in a Browser, And Other Fun Tricks appeared first on Create Digital Music.


For Apple, Music Making is For Everybody: New Holiday Ad

Monday, December 15th, 2014

We face a challenge in the music technology community. After a century in which music creation was the privilege of a few in the studio world, and mass music was about records and radio, people might claim music making is niche. It’s the domain of specialists, techies — a weird overlap of superstars and nerds.

But some of us believe that musical expression is as essential as singing, and the tools matter just as much.

You don’t see much music technology in Apple’s latest ad. I think it might be a new record, or near-record, for the absence of screen time for Apple’s products. But what you do see is unquestionably creation, not consumption. There are subtle nods to every aspect of musical practice — guitar songbooks, multitrack recording, sharing.

And the video, a follow-up to last year’s Creative Arts Emmy winner, goes beyond the technology. It’s about why we make music — reaching other people.

It’s meaningful that a multibillion-dollar company would see making music as a core part of its mission, as the essential value to some of the most successful consumer products in history. Recently, I noted via Twitter that Apple’s own Logic Pro climbed to the top of the paid charts on their App Store — notable not so much because it’s an Apple product as because it’s a music product.

Apple’s holiday campaign links to a variety of music apps, a nice Christmas present for the developers featured. They show GarageBand, of course, but also include Propellerhead’s innovative Take vocal app, a tool that remembers that, for many people, music is about singing or playing an instrument and not just editing beats on a timeline. There’s also a beautiful app called Chord! that presents scales and chords in a gorgeous, luxurious format. And there’s the fun Sing! Karaoke from Smule, the rare superstar breakout developer that found a way to take music technology prowess and bring it to a mass market.

Now, whether or not you own a single Apple product, there’s a lesson here, about how important music is to one of the world’s biggest companies — and, much more importantly, how to tell the story about what music is to the general public. It’s a reason for the season.

The post For Apple, Music Making is For Everybody: New Holiday Ad appeared first on Create Digital Music.