Learn To Make Hip Hop

Learn to make hip hop music. Become a true beatmaker today.

Now browsing by tag: sound

Sound Magic Releases AAX Version of Fazioli Rose and updates Vocalist for Windows to v1.5

Friday, April 24th, 2015

Sound Magic has released the AAX version of Fazioli Rose, a modeled Fazioli Grand piano. Now it can be used on all Pro Tools systems on both Windows and Mac OS X. Sound Magic also updates its [Read More]

PlugInGuru releases MegaMagic Dreams – sound design based sample library for Kontakt 5, EXS24, SERUM and Iris 2

Tuesday, April 21st, 2015

PlugInGuru.com has released MegaMagic Dreams, featuring 102 sample-based patches where the 12-second-long reverb and other sound design processing is recorded into the samples. This gives this [Read More]

Sound Dust releases “Cloud Viola” for Kontakt with 20% off introductory offer

Monday, April 20th, 2015

Sound Dust has announced the release of their new Kontakt hybrid instrument. Here’s what they say: Cloud Viola is a 1.7GB sample library with some special (and frankly impossible) articulations [Read More]

Wagsrfm releases Sound Libraries for Cakewalk Z3ta+2 and Sound Guru The Mangle

Saturday, April 18th, 2015

Wagsrfm has released a new sound library featuring 250 sounds for Cakewalk’s Z3ta+2 and another featuring 350 new sounds for Sound Guru’s The Mangle granulizer, a library that focuses on Soundscapes, [Read More]

Igor Vasiliev updates SoundScaper – Experimental Sound Mini Lab for iPad to v1.2

Saturday, April 18th, 2015

Igor Vasiliev has updated SoundScaper, his experimental sound mini lab for iPad, to version 1.2. SoundScaper is designed for creating unusual soundscapes, atmospheric textures, [Read More]

Particle Sound releases “DW8 I” for Kontakt with Introductory Offer

Thursday, April 9th, 2015

Particle Sound has announced DW8 I, the 3rd release in their Carbon series of Kontakt 5 formatted vintage synthesizer instruments. DW8 I features 1.7GB of 24-bit samples from the often overlooked [Read More]

Q+A: How the THX Deep Note Creator Remade His Iconic Sound

Wednesday, April 8th, 2015

How do you improve upon a sound that is already shorthand for noises that melt audiences’ faces off? And how do you revisit sound code decades after the machines that ran it are scrapped?

We get a chance to find out, as the man behind the THX “Deep Note” sound talks about its history and reissue. Dr. Andy Moorer, the character I called “the most interesting digital audio engineer in the world,” has already been terrifically open in talking about his sonic invention. He’s got more to say – and the audience is listening. (Sorry, I sort of had to do that.)

CDM: First, my big question is – how did you go about reconstructing something like this? Since the SoundDroid / Audio Signal Processor (ASP) is gone, that’s obviously out. Was it difficult to match the original? Was there any use of the original recordings?

Andy: I had two computer files from 1983. One was the original C program that generated the score, and the other was the “patch” file for the ASP that was written in Cleo, an audio-processing language that doesn’t exist anymore. The first thing I did was to resurrect the C program and make sure that it did generate the score properly. Then I wrote a set of special-purpose sound synthesis routines to interpret the score and produce the sound.

This wasn’t as difficult as it sounds. First off, the original program didn’t use a lot of different kinds of processing elements – maybe a total of 6 different elements. Next, this is the kind of software I have been writing for 45 years – I could write it in my sleep. It was not a problem. It took about a week to write the synthesis engine software and plug it into the original 30-year-old C program and get some sound out of it. There were also some calibration issues that I had to deal with, since the original ASP used some fixed-point scaling that I no longer recall. I had to experiment a bit to get the scaling right.

Then I was back at zero – I could start making the top-quality modern version. It took about another week to “tune” the new version and get it the way I wanted it. I spent more time in San Francisco with the THX folks making sure it met their needs as well. We then took the resulting sound files to Skywalker Sound to make the final mixes. It was a real thrill to finally hear the various versions in the Skywalker Stag theater. It was breathtaking.

I used the original only as an audio reference when I was bringing up the synthesis engine. Otherwise, the original sound was not used.
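
(Neither the original C program nor the Cleo patch has been published, but the two-stage structure Andy describes – a program that emits a “score” of parameter updates, and an engine that interprets them to drive a bank of oscillators – is easy to picture. Here is a minimal, purely hypothetical sketch in Python: the event format, sample rate, and bare sine oscillators are my own stand-ins, not anything from the ASP.)

```python
# Hypothetical illustration only: the real score format, Cleo patch, and ASP
# processing elements were never published. This just shows the two-stage idea
# described above: a "score" of parameter updates, interpreted by a small engine.
import math

SR = 48000  # sample rate (assumed)

# Stage 1: the score, standing in for statements like
# "set frequency of oscillator X to Y Hertz" at a given time.
score = [
    (0.0, 0, 220.0),
    (0.0, 1, 331.0),
    (1.5, 0, 150.0),   # a later event retunes oscillator 0
    (1.5, 1, 300.0),
]

class Osc:
    """A bare sine voice whose frequency is set by score events."""
    def __init__(self):
        self.freq = 0.0
        self.phase = 0.0

    def tick(self):
        self.phase += 2.0 * math.pi * self.freq / SR
        return math.sin(self.phase)

def render(score, n_osc, duration):
    """Stage 2: interpret the score in time order and mix the oscillators."""
    oscs = [Osc() for _ in range(n_osc)]
    events = sorted(score)
    out, ev = [], 0
    for n in range(int(duration * SR)):
        t = n / SR
        while ev < len(events) and events[ev][0] <= t:
            _, idx, freq = events[ev]
            oscs[idx].freq = freq      # apply the parameter update
            ev += 1
        out.append(sum(o.tick() for o in oscs) / n_osc)
    return out

samples = render(score, n_osc=2, duration=3.0)  # a plain list of floats
```

(The real engine, of course, used richer elements: the one-pole smoothers Andy mentions below, and a wavetable derived from a digitized cello tone rather than bare sines.)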

What’s it like making this sound with today’s tech? It seems one advantage, apart from the clear affordability and accessibility of hardware, is the ability to control sound in real-time. But how did you work with it now? What’s your current tool setup?

Boy, it is night and day. In 1983, it took a rack of digital hardware 6 feet [1.8m] tall to synthesize 30 voices in real time. Today I can do 70 voices on my laptop at close to real time. It is an absolute joy the power we have at our fingertips today. For this particular project, I just used a standard Dell W3 laptop. Nothing fancy. I have an M-Audio Fast Track Ultra 8R for the multi-channel D/A output for monitoring and testing. That gives me 8 channels of output, so I can do up to 7.1 but not Dolby Atmos. I didn’t actually hear the Atmos version I had synthesized until I got to Skywalker. I had a pretty good idea of what it would sound like, though.

A couple of people have already asked me about how you wound up with 20,000 lines of code in the original. I expect there was a fair bit of manual mucking about?

Actually I made a mistake with that 20,000 lines of code statement – that was just off the top of my head. I need to correct that if I can figure out how, but it also depends a bit on what lines of code you count. The original 30-year-old C program is 325 lines, and the “patch” file for the synthesizer was 298 more lines. I guess it just felt like 20,000 lines when I did it.

Given that it was written and debugged in 4 days, I can’t claim the programming chops to make 20,000 lines of working code that quickly. But, to synthesize it in real time, in 1983, took 2 years to design and build a 19” rack full of digital hardware and 200,000 lines of system code to run the synthesizer. All that was already done, so I was building on a large foundation of audio processing horsepower, both hardware and software. Consequently, a mere 325 lines of C code and 298 lines of audio patching setup for the 30 voices was enough to invoke the audio horsepower to make the piece.

What state is that code in, presently? Some people were curious to see it, but it seems efforts like Batuhan Bozkurt’s do a good job. I was actually unclear, from the other correspondence I read, about what you meant by musical notation.

I guess you are asking who owns the code. THX, Ltd is the owner of the code that I produced. As you note, Mr. Bozkurt and a number of other folks have done perfectly marvelous versions of the piece as well. Around 1984, folks at the trademark division of Lucasfilm asked me to make a “score” for the piece in musical notation – you know – treble clefs and staff lines and that stuff. I took out my Rapidograph india-ink pens and drafted something that kind of suggests how the piece works as well as it can be expressed in traditional music notation. That was apparently necessary to copyright the piece in 1984.

This time, THX wanted a whole lot of versions. How did you approach all of these different surround formats?

It was a bit tricky, since it was not just the additional formats; it was also the fact that we wanted it to work well in the living room as well as the modern cinema. In 1983, home theater systems were not even a gleam in the eye. In the living room, you are relatively close to the speakers, so to make it sound rich and “enveloping”, I wanted the sound in each speaker to be as different as possible. In the cinema, you have a different problem – if you sit to one side, you will be close to one speaker and a long ways away from the others, so that speaker will dominate. In this case, the sound in each speaker had to be as rich as possible so it could stand on its own.

My first couple of tries sounded OK in the living room, but when I listened to each channel separately, they sounded thin and cheesy. If I just ran up the number of voices to 70, 80, or 90, each speaker sounded fine, but the overall impression got “mushy” and diffused. What I ended up doing was to put about 8 distinct voices into each speaker, so the 5.1 has about 40 voices (plus the subsonic ones in the subwoofer), the 7.1 version about 56 voices, and the Atmos 9.1 bed has over 70 voices.

The different versions do sound different. The 5.1 has a lot of “clarity” – you can really hear the individual voices move around. The 7.1 is still pretty crisp (since the voices come from different directions, you can still make them out separately), but there are some new voices moving in different directions that aren’t in the 5.1 version. The Atmos 9.1 adds two more overhead. These are far enough away that you don’t hear them clearly, but they just add to the richness.

In short, it was a challenge, but I think we came up with a viable solution that provides an experience that scales with the setup but preserves a lot of clarity and precision.
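
(Just to make the arithmetic in that answer concrete: at roughly 8 distinct voices per full-range speaker, the totals Andy quotes fall out directly. The channel layouts in the little sketch below are the standard ones, assumed here for illustration; the actual THX allocations weren’t published.)

```python
# Back-of-the-envelope check of the voice counts quoted above. The channel
# layouts are standard ones assumed for illustration; the actual THX
# allocation details were not published.
VOICES_PER_SPEAKER = 8

full_range_channels = {
    "5.1": 5,            # L, C, R, Ls, Rs (the .1 LFE carries the extra subsonic voices)
    "7.1": 7,            # adds two rear surrounds
    "Atmos 9.1 bed": 9,  # adds two overhead channels
}

for layout, channels in full_range_channels.items():
    print(f"{layout}: ~{channels * VOICES_PER_SPEAKER} voices")
# -> 5.1: ~40, 7.1: ~56, Atmos 9.1 bed: ~72 – matching "about 40",
#    "about 56", and "over 70" in the interview.
```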

Are you involved in music and sound today? (Still playing the banjo?)

Is there air? Yes, of course. I play and make music every chance I get. I did some sax and banjo tracks for an album of spoken poetry a couple years ago and I play at the local watering holes from time to time.

Thanks, Andy! Really an inspiration to get to hear some of the nitty-gritty details – and to see a sound like this have such power. Now, up to us to work on our sound design coding chops – and our banjo licks, both, perhaps!

Shown: Andy Moorer at THX Ltd. San Francisco headquarters in 2014 during his visit to work on the regenerated THX Deep Note. (Photos: © THX Ltd.)

Previously: THX Just Remade the Deep Note Sound to be More Awesome

The post Q+A: How the THX Deep Note Creator Remade His Iconic Sound appeared first on Create Digital Music.



On Apple Watch, Sound Design Translates into Haptic Feel

Tuesday, April 7th, 2015

You already know sound is something you feel, physically – you know this from the sensation in your head on headphones, from your gut as a PA produces big bass, from the bodily experience of thunderstorms or the siren on an ambulance.

But we may soon live in a world where increasingly the role of sound design is wrapped up in interaction – where those sounds can produce physical sensations and haptic interactions. And whether or not the Apple Watch is used by musicians and DJs with new apps, it could add to possibilities for sound designers.

Wired has a fascinating report on the design process behind the Apple Watch. It’s worth reading and reflecting on even if you have no interest in buying Apple’s gadget, because the lessons here might apply widely.

And in particular, it notes that one of the most essential interactions of the Apple Watch depends on sound design as much as anything visual. Vibrations are pretty crude on your phone, but on the watch, they become an essential part of the design – and the success or failure of Apple’s introduction may depend on whether people like the way they convey information, since part of the idea is to avoid constantly digging your phone out of your pocket.

And here, the line between sound and feeling is basically nonexistent. Wired reports:

Because our bodies are enormously sensitive to taps and buzzes, the Watch can deliver rich information with only slight variations in pace, number, and force of vibrations. One sequence of taps means you’re getting a phone call; a subtly different one means you have a meeting in five minutes.

Apple tested many prototypes, each with a slightly different feel. “Some were too annoying,” Lynch says. “Some were too subtle; some felt like a bug on your wrist.” When they had the engine dialed in, they started experimenting with a Watch-specific synesthesia, translating specific digital experiences into taps and sounds. What does a tweet feel like? What about an important text? To answer these questions, designers and engineers sampled the sounds of everything from bell clappers and birds to lightsabers and then began to turn sounds into physical sensations.

They spent over a year on this project. Now I’m curious what a lightsaber feels like.

More:

iPhone Killer: The Secret History of the Apple Watch

This kind of thinking isn’t limited to Apple, either. Tactilu is a project out of Poland that I’ll be writing about soon (though its approach to haptics differs from the sound/vibration-based approach here). And that project, in turn, included tactile/haptic work by Warsaw’s panGenerator that featured in our hacklab at CTM. More on both of those shortly…

The post On Apple Watch, Sound Design Translates into Haptic Feel appeared first on Create Digital Music.



THX Just Remade the Deep Note Sound to be More Awesome

Monday, April 6th, 2015

It’s one of the best-known electronic sounds ever – perhaps the best electronic sound branding in history. It made its debut in 1983 – right before Star Wars: Episode VI – Return of the Jedi, no less.

But it seems the THX “Deep Note” was due for an upgrade. And that’s what it got last week. THX called upon the original creator of Deep Note, Dr. James ‘Andy’ Moorer, to remake his legendary sound design for modern theater audio technology.

Here’s a look at that history and how far it’s come.

In the meantime, you can watch the trailer here, though I think you’ll really want a THX-certified theater for this, obviously (through stereo headphones, and with whatever they’ve used to encode it here, it isn’t really distinguishable from the original):
http://www.thx.com/consumer/movies/120832135

This is actually the third major version of the Deep Note trailer. “Wings” was the first, heralding the arrival of Lucasfilm’s theater sound certification process. The one that probably springs to mind is “Broadway”, which features a blue frame on the screen. Less is more: that elegant rectangle plays against the holy-$#*(&-my-face-is-about-to-melt-off effect of the sound. See the brilliant authorized Simpsons parody:

Tiny Toons had fun with it, too (in the domain of parody, rather than an exact copy):

The sound itself is trademarked – and using it unaltered, without permission, can land you in hot water. (Dr. Dre lost a suit brought by Lucasfilm when he used it without permission on his album ‘2001’.)

But the history of the sound, and of Dr. Moorer, says a lot about the massive pace of creative technology in the past decades.

Dr. Moorer has four patents to his name and a series of lives in technology.

In the 70s, he co-founded and co-directed Stanford’s CCRMA research center, which continues to give birth to leading figures in music technology. (Today, superstars like doctoral student Holly Herndon go there, to study with teachers like Ge Wang who managed both to invent the ChucK programming language and reimagine the phone as an instrument with the hugely successful Smule.)

Dr. Moorer was also an advisor to Paris’ IRCAM, where he worked on speech analysis and synthesis – for a ballet company.

And he worked in research and development at Lucasfilm’s The Droid Works. There, he designed something called the Audio Signal Processor, the mainframe on which the Deep Note sound would be created – alongside pioneering sound design production techniques for Jedi, Temple of Doom, and more. That machine would eventually be sold for scrap, but its legacy lives on.

In fact, the ASP and the larger “SoundDroid” system around it read like a template for everything that would happen in audio production tools since. Listen to how it’s described on Wikipedia: “Complete with a trackball, touch-sensitive displays, moving faders, and a jog-shuttle wheel, the SoundDroid included programs for sound synthesis, digital reverberation, recording, editing and mixing.” Yes, touch displays, like the iPad. Hardware controls, like advanced studio controllers – years before they would become available for computers. Digital processing. Sure, we take this stuff for granted, but in the 80s, it had to be built from scratch.

And he worked with Steve Jobs at NeXT (which also would pioneer sound tech that would reach the masses later – the forerunner of today’s Max/MSP, for instance, ran exclusively on a NeXT machine).

Accordingly, Dr. Moorer has an Emmy Award, and an Oscar.

And now he’s Principal Scientist at Adobe Systems.

And he repairs old tube radios and plays banjo, says Music thing.

He is the most interesting digital audio engineer in the world.

He told Music thing the full story of the THX sound, built on a massive mainframe – no DSP chips could be had at the time.

As he tells it:

“I was asked by the producer of the logo piece to do the sound. He said he wanted “something that comes out of nowhere and gets really, really big!” I allowed as to how I figured I could do something like that.

“I set up some synthesis programs for the ASP that made it behave like a huge digital music synthesizer. I used the waveform from a digitized cello tone as the basis waveform for the oscillators. I recall that it had 12 harmonics. I could get about 30 oscillators running in real-time on the device. Then I wrote the “score” for the piece.

“The score consists of a C program of about 20,000 lines of code. The output of this program is not the sound itself, but is the sequence of parameters that drives the oscillators on the ASP. That 20,000 lines of code produce about 250,000 lines of statements of the form “set frequency of oscillator X to Y Hertz”.

“The oscillators were not simple – they had 1-pole smoothers on both amplitude and frequency. At the beginning, they form a cluster from 200 to 400 Hz. I randomly assigned and poked the frequencies so they drifted up and down in that range. At a certain time (where the producer assured me that the THX logo would start to come into view), I jammed the frequencies of the final chord into the smoothers and set the smoothing time for the time that I was told it would take for the logo to completely materialize on the screen. At the time the logo was supposed to be in full view, I set the smoothing times down to very low values so the frequencies would converge to the frequencies of the big chord (which had been typed in by hand – based on a 150-Hz root), but not converge so precisely that I would lose all the beats between oscillators. All followed by the fade-out. It took about 4 days to program and debug the thing. The sound was produced entirely in real-time on the ASP.”

For more, check out the 2005 Music thing story:
TINY MUSIC MAKERS: Pt 3: The THX Sound
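
If you want to poke at the basic recipe yourself, here is a rough sketch of the technique as described in that quote: a cluster of voices drifting between 200 and 400 Hz, their frequencies run through one-pole smoothers, then pulled toward a chord built on a 150 Hz root. To be clear, this is not Dr. Moorer’s code. The waveform (plain sines instead of the cello-derived wavetable), the timings, the smoothing coefficients, and the exact chord voicing are all guesses.

```python
# A rough sketch of the Deep Note *technique* as described above – not
# Moorer's code. Sines stand in for the cello-derived wavetable; timings,
# smoothing coefficients, and the chord voicing (octaves around the quoted
# 150 Hz root) are all guesses.
import numpy as np

SR = 44100
N_VOICES = 30                     # roughly what the ASP managed in real time
DUR = 12.0                        # guessed overall length, seconds
CONVERGE_AT = 6.0                 # when the chord is "jammed into the smoothers"

rng = np.random.default_rng()     # different runs give different "performances"
t = np.arange(int(DUR * SR)) / SR

# Target chord: octaves around the 150 Hz root, cycled across the voices.
chord = np.array([37.5, 75.0, 150.0, 300.0, 600.0, 1200.0])
targets = chord[np.arange(N_VOICES) % len(chord)]

def smooth(target, coeff):
    """One-pole smoother with a per-sample coefficient."""
    out = np.empty_like(target)
    acc = target[0]
    for i in range(target.size):
        acc += coeff[i] * (target[i] - acc)
        out[i] = acc
    return out

mix = np.zeros_like(t)
for v in range(N_VOICES):
    # Phase 1: the voice wanders randomly inside the 200-400 Hz cluster.
    drift = np.clip(300.0 + np.cumsum(rng.normal(0.0, 0.5, t.size)), 200.0, 400.0)
    # Phase 2: the chord frequency is jammed in and the smoothing shortened;
    # a slight random detune keeps the beats between voices the quote mentions.
    note = targets[v] * (1.0 + rng.normal(0.0, 0.002))
    target = np.where(t < CONVERGE_AT, drift, note)
    coeff = np.where(t < CONVERGE_AT, 2e-5, 8e-5)
    freq = smooth(target, coeff)
    phase = 2.0 * np.pi * np.cumsum(freq) / SR
    mix += np.sin(phase)

# Crude fade-in and fade-out, then normalise.
env = np.minimum(1.0, t / 2.0) * np.minimum(1.0, (DUR - t) / 2.0)
mix = mix * env / N_VOICES
# Optionally write it out, e.g.:
#   from scipy.io import wavfile; wavfile.write("deepnote_sketch.wav", SR, mix.astype(np.float32))
```

Run it a few times and you get at the point made below: because the drift is random, every render “performs” the piece a little differently before it lands on the chord.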

The other interesting thing about the story told to Music thing is that the piece is essentially a generative performance. Random numbers mean each time the code is run, it “performs” a different version. So some of the recognizable features of the THX recording are very much the outcome of a particular performance – so much so that, when the recording was temporarily lost, people complained.

I’m going to try to get hold of Dr. Moorer to find out how the new piece was created, as press materials (naturally) fail to go into detail. But part of the reason you’ll want to hear it in a theater is the mix: there are three different lengths (30 seconds, 45 seconds, and 60 seconds), each of them made with stereo, 5.1, 7.1, and Atmos mixes.

And yes, I definitely hear a similarity to Xenakis’ Metastasis. In fact, the technique described above in code is similar to the overlaid glissandi in the Xenakis score – and perception will do the rest.

Perception itself is interesting – particularly the fact that the design of the sound, not its actual amplitude, is what gives it its power. (Lesson to learn for all of us, there.) Even with the sound turned down, it sounds loud; sound designer Gary Rydstrom has said that this spectral saturation means it “just feels loud.”

It’s also been a model for recreation – a kind of perfect homework assignment for sound design coders. For instance:

Recreating THX’s Deep Note in JavaScript with the Web Audio API

Writing for his blog Earslap, Batuhan Bozkurt has a masterful recreation of the Deep Note sound in the coding environment SuperCollider. Whether it sounds exactly like that original recording I think isn’t so important – just working through the basic technique of reproducing it opens up a lot of techniques you could expand into other, more personal expressions.

This is a great article and well worth reading:

Recreating the THX Deep Note [Earslap]

It has Dr. Moorer’s seal of approval; he writes in comments: “Thanks for the trip down memory lane, and congratulations for a job well done. I really wish I could share the details with everyone. Maybe someday! Let 1024 blossoms bloom . . .”

And it’s also notable that the SuperCollider language runs comfortably on a $25 Raspberry Pi – no Lucas mainframes in sight. Coding is also something that has been opened up to countless young men and women around the world, even in typical music classes.

Think about that: what was once the domain of a tiny handful of people in Hollywood is now something you can run on a $25 piece of hardware, something you can learn with more ease than finding a violin teacher. Indeed, only education and literacy remain as the final, if significant, barriers. With that knowledge and basic technology access, the most advanced and unique computer music technique of my own childhood is now nearly as accessible worldwide as opening your mouth and singing. This says a lot about the power of access to ideas in the modern world – and it makes it even less excusable that there are still significant gaps in gender, in economic status, and in geography.

The post THX Just Remade the Deep Note Sound to be More Awesome appeared first on Create Digital Music.

