Gestures

...now browsing by tag

 
 

How Gestures and Ableton Live Can Make Anyone a Conductor of Mendelssohn [Behind the Scenes]

Friday, August 15th, 2014


Digital music can go way beyond just playback. But if performers and DJs can remix and remake today’s music, why should music from past centuries be static?

An interactive team collaborating on the newly reopened Museum im Mendelssohn-Haus wanted to bring those same powers to average listeners. Now, of course, there’s no substitute for a real orchestra. But hiring orchestral musicians and renting a hall is an epic expense, and the first thing most of those players will do when an average person gets in front of them and tries to conduct is, well – get angry. (They may do that with some so-called professional conductors.)

Instead, a museum installation takes the powers that allow on-the-fly performance and remixing of electronic music and applies them to the Classical realm. Touch and gestures let visitors listen actively to what’s happening in the orchestra, wander around the pit, compare different spaces and conductors, and even simulate conducting the music themselves. Rather than listening passively as the work of this giant flows into their ears, they’re encouraged to get directly involved.

We wanted to learn more about what that would mean for exploring the music – and just how the creators behind this installation pulled it off. Martin Backes of aconica, who led the recording and interaction design, gives us some insight into what it takes to turn an average museum visitor into a virtual conductor. He shares everything from the nuts and bolts of Leap Motion and Ableton Live to what happens when listeners get to run wild.

First, here’s a look at the results:

Mendelssohn Effektorium – Virtual orchestra for Mendelssohn-Bartholdy Museum Leipzig from WHITEvoid on Vimeo.

Creative Direction, GUI and Visuals by WHITEvoid
Interior Design by Bertron Schwarz Frey
Creative Direction, Sound, Supervision Recording, Production, Programming by aconica

CDM: What was your conception behind this piece? Why go this particular direction?

Martin: We wanted to communicate classical music in new ways, while keeping its essence and original quality. The music should be an object of investigation and an experimental playground for the audience.

The interactive media installation enables selective, partial access to the Mendelssohn compositions. The audience also has the opportunity to get in touch with conducting itself. They can easily conduct this virtual orchestra without any special knowledge — of course, in a more playful way.

It was all about getting in touch with Mendelssohn as a composer, while leaving his music untouched.

The idea for the Effektorium originated in the cooperation between Bertron Schwarz Frey Studio and WHITEvoid. WHITEvoid worked on the implementation.

What was your impression of the audience experience as they interacted with the work? Any surprises?

Oh yes, people really loved it.

The audience during the reopening was pretty mixed — from young to old. Most people were very surprised by what they were able to do with the interactive installation. I mean, everything works in real time, so you get direct feedback on whatever you’re doing. The audience could be an active part — I think that’s what people liked the most about it.


Visitors can also just move around within the “orchestra pit” in order to listen to the individual instruments through their respective speakers. This creates the illusion of walking through a real orchestra. Normally you are not allowed to do that in a real concert situation. So this was also a big plus, I would say.

I saw a lot of happy faces as people played around with the interactive system and as they walked around within the installation room.

Can you explain the relationship of tracking/vision and sound? How is the system set up?

There are basically two computers connected to each other and communicating via Ethernet to run the whole system. The first computer runs custom-made software, built in Java and OpenGL, for the touchscreen, Leap Motion control (via its Java SDK), and the whole room’s lighting and LED/loudspeaker sculptures. Participants can navigate through various songs of the composer and control them.

The second computer is equipped with Ableton Live and Max for Live. Ableton Live is the host for all the audio files we recorded at the MDR Studio in Leipzig, with some 70 people and lots of help. We had specific needs for that installation, for both the choir and the orchestra sounds. So everything had to be very isolated and very dry for the installation, which was very unusual for the MDR Studio and their engineers and conductor.

Within Live, we are using some EQs, the Convolution Reverb Pro, and some utility plug-ins. That’s it, more or less. Then there is a huge custom-made Max Patch/Max for Live Patch … or a couple of patches, to be exact.


We decided to work exclusively with the Arrange view within Live. This made it easy to lay out all the orchestral compositions on Live’s timeline, and to read out these files for control and visualisation.

Both computers need to know the exact position at the same time in order to control everything fluently via touchscreen and Leap. For the light visualisation, we also needed this data to control the LEDs in time with the music.

We read out the time of the audio files — essentially, we’re tracking the time and position within Ableton’s timeline.

How does the control mechanism work – how is it that visitors are able to become conductors, in effect – how do their gestures impact the sound?

The Leap Motion influences the tempo only (via OSC messages). One has to conduct in time to get the playback at the right tempo. There’s also a visualisation of the conducting for visitors, so they can see if they are too slow or too fast. You have two possibilities in the beginning when you enter the installation: playback audio with conducting, or playback audio without conducting. If you choose “playback audio with conducting”, you have to conduct all the time; otherwise the system will stop and kindly ask you to continue.

For the audio, we are working heavily with the warp function in Live to keep the right pitch. But we scaled it a bit to stay within a certain value range. The sound of the orchestra was very important, so we had to make sure that everything sounded normal to the ears of an orchestral musician. Extreme tempo changes, and of course very slow and very fast tempos, were a no-go.
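
To make that concrete, here is a minimal sketch (not the installation’s actual code) of how a conducted beat could be turned into a clamped tempo value before being handed on to Live’s warping; the reference tempo, limits, and function name are hypothetical.

    import time

    REFERENCE_BPM = 120.0            # hypothetical score tempo
    MIN_BPM, MAX_BPM = 96.0, 144.0   # keep warping inside a musically safe range

    _last_beat = None

    def on_conducting_beat():
        """Call this each time a conducting beat is detected from the Leap data."""
        global _last_beat
        now = time.time()
        if _last_beat is None:
            _last_beat = now
            return REFERENCE_BPM
        interval = now - _last_beat
        _last_beat = now
        bpm = 60.0 / interval
        # scale/clamp so extreme conducting never produces an unnatural-sounding orchestra
        return max(MIN_BPM, min(MAX_BPM, bpm))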


And you’ve given those visitors quite a lot of options for navigation, too, yes – not only conducting, but other exploration options, as well?

The touch screen serves as an interactive control center for the following parameters:

  • position within the score
  • volume for single orchestral or choral groups
  • selective mute to hear individual instruments
  • visualization of score and notes
  • the ability to compare a music piece as directed by five different conductors
  • room acoustics: dry, music salon, concert hall, church
  • tuning: historical instruments (pitch 430 Hz) and modern instruments (pitch 443 Hz)
  • visualization of timbre and volume (LEDs)
  • general lighting

So all these features can be triggered by the visitor. The two computers communicate via OSC. Every time someone hits one of the features on the touchscreen, Max for Live gets an OSC message to jump to a position within the score or to change the room acoustics (Convolution Reverb Pro) on the fly, for example.
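
As a rough illustration of that message flow (the addresses, port, and values here are invented for the example, not the installation’s actual protocol), the touchscreen machine could send OSC like this with the python-osc library:

    from pythonosc.udp_client import SimpleUDPClient

    # the Ableton Live / Max for Live computer on the local network (address assumed)
    live_machine = SimpleUDPClient("192.168.0.2", 9000)

    # jump to a new position within the score (bar number as the argument)
    live_machine.send_message("/effektorium/score/jump", 32)

    # switch the room-acoustics preset handled by the convolution reverbs
    live_machine.send_message("/effektorium/room", "concert_hall")

    # mute the choir group so individual instruments can be heard
    live_machine.send_message("/effektorium/group/choir/mute", 1)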


Right, so you’re actually letting visitors change the acoustic of the environment? How is that working within Live?

We had to come up with a properly working signal routing to be able to switch room acoustics fluently with the IR-based plug-in. The room acoustics in particular were a big problem in the beginning. We really wanted to use the built-in Convolution Reverb, but found that we couldn’t run ten or more instances of that plug-in at the same time without latency problems. So now we are basically using three of them at the same time for one setting (room acoustics: concert hall, for example). All the other reverbs are turned off or on automatically when switching settings. Everything runs very fluently now and without any noticeable latency. I would say that this wouldn’t have been possible five years ago. So we are happy that we made it :-)


How did you work with Leap Motion? Were there any challenges along the way; how happy were you with that system?

The Leap Motion is very limited when it comes to its radius: it’s only sensitive within about 60 cm, so we had to deal with that; it isn’t much. Some of the testers – conductors and orchestra musicians – were of course complaining about the limited radius. If you have a look at how conductors work, you can see that they use their whole body. So this kind of limitation was one of our challenges: satisfying everyone with the given technology.

We could’ve used Kinect, but we went for Leap, because it allowed us to monitor and track the conductor’s baton. The Kinect is not able to monitor and track such a tiny little thing (at least not the [first-generation] Kinect). This was much more important for us than to be able to monitor the whole body, and it was also part of the concept.

I would say that we were kind of happy with the Leap Motion, but the radius could be bigger. Maybe this will change with version 2, but I don’t know what they have planned for the future.

Another problem was the feature list. We had a lot more features in the beginning, but we found out very quickly that one feature would be enough for the Leap Motion tracking, especially when you think of the audience that will visit this kind of museum. It has to be easy to understand and more or less self-explanatory. So by gesturing up and down, one influences the tempo of Mendelssohn’s music only – that’s basically it. All the other features were given to the touchscreen, function-wise. So we basically have two interactive components for the installation setting.

Your studio was one of several involved. How did you collaborate with the other studios?

Bertron Schwarz Frey and WHITEvoid were basically the lead agencies. Bertron Schwarz Frey was responsible for the whole redesign of the Mendelssohn-Haus museum in Leipzig, while we and WHITEvoid worked on the centerpiece of the newly reopened Mendelssohn Museum – the interactive media installation “Effektorium”. So we worked directly with WHITEvoid from the very beginning, and our part was mainly the sound side of the project.

I am very happy that Christopher Bauder from WHITEvoid asked us to work with him on this project. We are actually good friends, but this was the first project we worked on together. So I am glad that it ended up very well. Then there were a lot of other people to deal with: the sound engineers from the MDR studio, technicians, the conductor David Timm, the orchestra and choir, and of course the people from the Mendelssohn-Haus museum itself.


Thanks, Martin! I was a Mendelssohn fan before ever even spotting a computer, so I have to say this tickles my interest in technology and Classical music alike. Time for a trip to Leipzig. Check out more:

“Mendelssohn Museum – Effektorium”

The post How Gestures and Ableton Live Can Make Anyone a Conductor of Mendelssohn [Behind the Scenes] appeared first on Create Digital Music.



A Case in Motion: AUUG Marries Wearable iPhone, Software, Cloud in Gestures

Thursday, November 21st, 2013


The next innovations in music and sound may come somewhere between fashion and instrument, between hardware, software, and service.

The AUUG Motion Synth represents one idea of how to do that. In terms of hardware, it’s just aluminum – albeit aluminum in a rather clever configuration. Worn on your wrist, it solves the problem of how to gesture with an iPhone or iPod touch without … well, without dropping it. There isn’t any additional sensor; it simply uses the sensing already in the device. Then again, with Apple’s iPhone 5S, that may be what you want, and the presence of the wearable accessory directs motion more specifically by controlling the orientation of your device. Besides gripping the phone, the case has windows that provide tactile feedback for the buttons of the synth.

On the software side, the AUUG app handles tracking and synthesis. Sharing is built in, too, with a “cloud” for exchanging presets and ideas.

“Great! A big bracelet that lets me use one app!” No, actually – you can send MIDI to any iOS app, or transmit MIDI to your computer. Any Core MIDI-compatible app or WiFi-MIDI-enabled computer will work. Since there’s CoreMIDI support, you can also use wired MIDI if you choose.

Whether or not this particular idea takes off, it represents a number of promising trends. Thinking about hardware and software (and online sharing) as they come together in one product is smart. And strange as it may seem now, I think wearable tech will continue to progress and become more natural – I can’t help but notice the similarity here to Onyx Ashanti’s (open source) prosthetics and handheld hardware. (He’s still working on that; I hope we can check in on it soon.)

Video:

AUUG Motion Synth Kickstarter video from AUUG on Vimeo.

The only bad news is, the Kickstarter project is looking for US$ 70,000, which is a bit steeper than similar music projects we’ve recently seen funded. But you get a discount for investing early: US$ 68 buys you the whole accessory, which the makers say will retail for $ 110.

Now, if only the top of the wristband worked like the communicators on Babylon 5, I’d buy ten. (Wait… was I the only person who made that connection? Nerd…)

The Motion Synth: Turn Movement into Music


The post A Case in Motion: AUUG Marries Wearable iPhone, Software, Cloud in Gestures appeared first on Create Digital Music.



Steinberg releases Cubase iC Air – Control Cubase with hand gestures

Tuesday, November 19th, 2013

Steinberg Media Technologies has announced the release of Cubase iC Air, allowing users of Cubase 7 or Cubase Artist 7, together with Leap Motion Controller or Intel systems powered by the Intel [Read More]

Cubase Goes Futuristic: Motion Hand Gestures Control Music in Free Add-on [Details]

Tuesday, November 19th, 2013

Cubase iC Air, erm… artists’ rendering. Just about got that mix right. (Hold on – red ball. This track is not going to be premeditated.)

When it comes to big, flagship audio tools, you don’t get a whole lot of sci-fi in your software. That makes Steinberg’s announcements this week more of a change of pace. They aren’t the first to talk about virtual studio sessions, or even gesturally-controlled music. But seeing this as an add-on to Cubase, not just an experimental hack, counts as news.

And Cubase users can add on those futuristic capabilities in the form of two new tools.

You can fly through Cubase sessions with gestural controls using depth cameras (on Windows) or LEAP Motion (on Windows and Mac). And you can cross time and space by connecting remotely to Cubase projects – soon, even through your mobile device.

Cubase iC Air: Gestural Control

iC Air is a new add-on, available free, that lets Cubase 7 users (in any edition) control various parameters without touching a controller, simply by using hand gestures in front of a camera or sensor.

Right now, the most likely way to do that will be via the LEAP Motion hardware, because it’s readily available and compatible with both OS X and Windows. But the Steinberg engineers also support a set of developer tools from Intel called the Intel Perceptual Computing SDK 2013. That promises support for future depth cameras, including, at the moment, one made by Creative Labs. That’s mostly future-proofing, though, as you need to actually fill out a developer form to even get hold of the one camera that currently works. (More on why that’s interesting in a moment.)

Regardless of input hardware, you get the same basic capabilities. People aren’t doing the “devil horns” sign in the video just for dramatic flair; that’s actually a discrete gesture.

The Minority Report comparison here is apt, as this is the mode of interaction depicted in the movie: particular hand positions and gestures like zooming work much as the now-aging sci fi film predicted. (That’s because interaction experts consulted on the movie.)

Specific positions of your hand will control starting and stopping the transport. (Three spread fingers start, five – think Stop in the Name of Love – bring things to a halt.)

Make circular motions, and you “spin” the transport position in forward or rewind. Swipe horizontally or vertically to switch between tracks. Move your hand left and right with a gesture to control the shuttle.

Most impressively, spreading your fingers apart or bringing them close together controls zoom.

And the combination of zoom and transport means this could be genuinely useful when recording while, say, holding a guitar.

Sadly, giving your computer the middle finger (or two fingers, British folk) will not trigger undo/delete. Yet.

Of course, it’d be a missed opportunity if you couldn’t control other parameters, as well. Using Cubase’s “AI Knob,” you can assign a gesture to freely control any parameter, for Theremin-style manipulation of things like effects levels. (It also disables the other gestures, so you don’t get overly confused.) On the LEAP, you hold your palm parallel to your desk and keep your hand above the LEAP sensor. When using a camera with the Intel tools, your palm faces your computer screen and moves up and down in the camera range. (It looks, therefore, like the LEAP has a slight edge, at least for now – but perhaps soon PC depth cameras will become more commonplace.)

Other than that, if you’ve got a licensed copy of Cubase 7, you can give this a go. We’ll be doing that soon in the studio with our LEAP.

CUBASE IC AIR [Downloads, documentation and info]

Now, as for the Intel dev tools, for now you’re probably out of luck – even if you’re a developer. Developer kits are out of stock, and I could find no sign yet of the commercial Creative product, dubbed the Senz3D.

But Intel’s long-range plans for “perceptual computing” look fascinating. They include the sort of thing you’ve seen on Microsoft’s Kinect, but aimed at general PC audiences. (Microsoft promises some of that, too, but we won’t see general-purpose PC SDKs for the new Kinect camera until next year. It’ll be interesting to see who delivers first, and most widely.)

And Intel, who helped popularize computer vision with their OpenCV toolkit years before the Kinect, have a lot of ideas in mind. They have speech recognition, individual finger tracking in 3D, facial analysis, augmented reality, and 3D point clouds in their sights.

See the SDK page for more info.

And if you do have Cubase and LEAP, let us know how this goes. Especially if you can do a good Tom Cruise impersonation.


The post Cubase Goes Futuristic: Motion Hand Gestures Control Music in Free Add-on [Details] appeared first on Create Digital Music.



Drumactica Augments Percussion with Gestures, Ice Cube Trays; Here’s How it Was Done

Friday, August 23rd, 2013


Bacon and eggs on a snare drum? Hands-through-the-air gestural control with Leap Motion? Water pianos in ice trays and a hacked Makey Makey, all talking to Ableton Live? Drumactica has a little bit of everything. London-based percussionist Dr. Enrico Bertelli shares with us how he “augmented” percussion for his latest project – with all the details – for a guest post on CDM. (Just make sure to give due respect to John Cage. -Ed.)

Drumactica 2.0 is a solo, augmented percussion setup created for Hack the Barbican, London. The piece is about the creative bond between the desire to improve the melodic and harmonic contours of the snare drum and the concept of an embodied music performance.

Drumactica 2.0 started as a composition for an array of augmented percussive tools. I started by modifying the Makey Makey code through the Arduino software, changing all of its outputs to letters. As a tutorial on SparkFun explains in very simple terms, you just need to add letters between inverted commas (e.g. ‘a’) and you have your new output. I tried to exploit all the keys that send notes to Ableton from the computer keyboard. I am rather certain that the Makey Makey can send MIDI messages, bypassing this 15-letter limitation, but that’s currently beyond my technical knowledge.

I augmented the snare drum by attaching a stripboard and sending six cables inside the shell, piercing the head to bring the signal to the surface of the instrument. Through a mixture of acrylic and conductive ink, my lovely girlfriend Oriana Lauria created a breakfast plate. When touched, the conductive outlines of the various shapes close the circuit and send the signal to the boards.

Even though the snare lost some of its acoustic charm, due to all these holes and the muffling material, I particularly enjoyed improvising with brushes; their metal tips and bottoms are conductive too, allowing the performer to either play acoustically or interact with the attached VSTs.


A few more commonplace objects further added to the setup. I pierced through the soft silicone bottom of an ice-cube tray, creating a water piano that could control my arpeggiators. I also wrapped two toy tennis rackets in tin foil and started using them as pedals.

The acoustic part of the composition was very important because the Makey Makey did not allow for any dynamic (velocity) control, which I had when using knock sensors (piezos) or Arduino setups. Furthermore, this new instrument allowed me to control the length of the notes. As a percussionist, this was just a dream! Most acoustic percussion instruments (except the vibraphone, glockenspiel and tubular bells) have no easy and precise control over the length of a note. Playing with rhythm and long pedals offered incredible potential for the improvisatory process at the origin of Drumactica 2.0.

Luckily enough, the Leap Motion was released just a week before my performance, and I integrated it into the setup through GecoMIDI. This app allows the user to send MIDI CC messages, which can be mapped to any continuous control within Ableton. The process was laborious but intuitive; in a few hours I had mapped my setup in full.

The Leap Motion boosted expressive control and humanized the digital sounds to a surprising extent. I found it quite difficult and risky to perform with it away from the computer, but with some practice one can handle the imaginary positions in space.

The MIDI data was handled in an Ableton Live 8 environment. Each sensor was isolated and sent to a different channel via IAC driver buses. I did this to control the pitch output. The key problem of this instrument is that each connection always produces the same note. Together with digital composer Mr. David Ibbett, I designed a patch that can modify the incoming note, filtering it through another source. If I had a constant C as input, I could play a series of notes on my other MIDI controller – a Launchpad on this occasion – and have the original signal cycle through this new series. I often put a chord generator after this patch (called the serialiser) to boost the polyphony. The chord parameters were assigned to CC messages, and by moving one hand (roll, yaw and all the other 3D movements) I could control the harmonic content.
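
The note-substitution idea is simple to sketch in code. Here is a minimal, hypothetical Python version of that serialiser behaviour (not the actual Max patch): notes recorded from the second controller fill a series, and every incoming trigger note is replaced by the next note in that series.

    class Serialiser:
        """Replace a constant trigger note with a cycling series of notes."""

        def __init__(self):
            self.series = []   # notes recorded from the second controller
            self.index = 0

        def record(self, note):
            """Store a note played on the other MIDI controller (e.g. a Launchpad)."""
            self.series.append(note)

        def trigger(self, incoming_note):
            """Called for each trigger note; returns the substituted pitch."""
            if not self.series:
                return incoming_note       # nothing recorded yet, pass through
            note = self.series[self.index]
            self.index = (self.index + 1) % len(self.series)
            return note

    # usage: record a short series, then every constant C (60) cycles through it
    s = Serialiser()
    for n in (62, 65, 69):
        s.record(n)
    print([s.trigger(60) for _ in range(5)])   # -> [62, 65, 69, 62, 65]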

The instrument I came up with reflected my interest in the embodiment of music. Since the early days of my percussion studies, it has been obvious how fundamental ancillary gestures are, both for good technique and sound production and for an engaging audio/visual performance. I incorporated more and more of these movements in my music theatre performances of pieces by Aperghis and Globokar, among others, even directing them towards imaginary instruments, as in Homework II by François Sarhan. Initially, I amplified these gestures by carefully positioning floor lights during the performance, creating an interesting shadow play. With Leap Motion, for the first time, I had the chance to exploit these movements to physically manipulate the sound.
This significant step towards the embodiment of sound was matched by the Makey Makey’s organizing principle, which uses one’s own body to convey the electric signal and produce data.

As a percussionist, I owe a lot to John Cage for his achievements in the emancipation of noise and the elevation of objects to musical instruments. With this branch of research I am trying to blur the even deeper distinction between the human performer and the sound controller. This ache for complete embodiment strives for a re-invention of the performative act: the creation of a continuum between body and object, for the production and control of a new imaginary landscape.

Thanks for this, Enrico! We look forward to seeing more – and readers, if you’ve found ways of augmenting percussion and hacking new instruments, do let us know. -PK

All images courtesy the artist.

The post Drumactica Augments Percussion with Gestures, Ice Cube Trays; Here’s How it Was Done appeared first on Create Digital Music.



From Gestures to MIDI: Geco Promises Music Applications for Leap Motion

Tuesday, April 23rd, 2013

These strange glyphs represent the dictionary of hand gestures Geco and LEAP can turn into music control.

Here we go again. Touchless hand gestures have been part of electronic musical performance ever since the Theremin first hummed to life almost 100 years ago. And those gestures embody the same challenges and promise. As humans, we can think spatially, in three dimensions, and our muscles give us a tight sense of control over gestures in space. We use gestures to communicate and to manipulate our world. Those same expectations can be disappointed in electronic systems, however, as they lack tangible physical feedback and may misinterpret our intentions.

It’s easier to play with these ideas and experiment with them than talk about them, though. And for everyone who’s turned off by the idea, someone else is enthused.

What the US$ 79.99 Leap Motion may do for gestures in music is lower the bar for entry – and raise the bar for performance. Leap is affordable hardware, there are already lots of developer units out in the world, and there’s an easy-to-program SDK. We’ve already seen Microsoft’s Kinect open up gestural control to lots of new music projects. Leap may do more: it’s cheaper, it’s faster and operates with vastly lower latency, and it’s more precise for individual hand gestures. It also offers a platform for developers to share their work, in an app store full of stuff you can use, so that the hardware theoretically won’t become a paperweight in the cubicles of the digerati.

Latency alone could make a big difference for musical applications. It’s not the only challenge in motion control, but it has been the showstopper, particularly with the hefty lag you get using something like Kinect. Leap is different, offering latencies low enough to satisfyingly control music applications.

The unobtrusive Leap Motion hardware. Courtesy the manufacturer.

That doesn’t mean you should run out and buy one. Healthy skepticism is always good practice. So, I actually agree with some of Chris Randall’s complaints about Leap, as discussed on Twitter. I think anyone experimenting with novel control schemes, though, may learn something from successes and failures alike.

If you’re ready for the adventure, though, Leap will make it immediately easy to start mucking about with music. Leading the charge is Geert Bevin and his Geco (originally Gesture Control) app. I’m testing it now, but here’s a quick look at what it does.

By making a virtual MIDI port, and using a library of gestures and mappings, Geco allows a wave of your hand to control any music tool that works with MIDI.

  • Using two hands, create up to 40 different streams of control messages.
  • 16 MIDI channels.
  • Mappings with MIDI Control Change or (with greater data precision) pitch bend.
  • Manipulate different streams using “open” and “close” gestures of your hands.
  • Low-latency control, with visual feedback on both MIDI and movement analysis.
  • Send MIDI on the Mac using a virtual MIDI port you can then connect to other applications – or, on either platform, to physical MIDI ports.
  • Graphical UI with color/graphical customization, information on gestures and so on.
  • Thin out your MIDI data to work with old gear that can’t respond to all those messages.
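
As a rough sketch of the virtual-port idea described above (this is not Geco’s code), here is how a hand value mapped to a Control Change message might look in Python with the mido library and its python-rtmidi backend; the port name, CC number, and “openness” value are assumptions for the example.

    import mido

    # create a virtual output port other apps can connect to (macOS/Linux with the
    # python-rtmidi backend; Windows needs a loopback MIDI driver instead)
    port = mido.open_output("Gesture Control", virtual=True)

    def send_hand_openness(openness):
        """Map a 0.0-1.0 'open hand' value to MIDI CC 1 on channel 1."""
        value = max(0, min(127, int(openness * 127)))
        port.send(mido.Message("control_change", channel=0, control=1, value=value))

    send_hand_openness(0.75)   # e.g. a mostly open hand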

The intro price will be US$ 9.99. It should launch with the Leap Motion app store – dubbed Airspace – when the controller launches on May 13.

MIDI is useful, but it’s too bad there’s no higher-precision control implementation here. (OSC would be one option; it seems apps that do that are a likely addition.) There is a whole lot of detail and thought that has gone into how the UI works, and Geert promises that the whole engine is low on system resources and approaches “zero latency” (at least, it’s very, very fast).

It’s worth taking a look at the draft documentation for more detail:

http://uwyn.com/geco/

Here’s another experiment showing VST and AU control:

Nor is Leap Motion the only game in town. On Create Digital Motion yesterday, I wrote about another project that is using crowd funding to launch an open source rival. I can imagine developer APIs that let you work across each. The advantage of open hardware would be that people can understand how the device works, and modify it for specific applications (both code and hardware form factor.)

DUO is a DIY 3D Sensor – Like Leap, But Open Source, From Gesture and Vision Veterans

I’m clarifying the details of their licensing plan. At least one member of this team has come under criticism in the past for their approach to open source releases and Kinect hacking – you can read the discussion in both directions, though I’m encouraged that developer AlexP was ultimately responsive to some of those concerns. We’ll see how this project is structured.

It does seem that people will continue to develop this thread in motion control. We’ll be watching.

As I do have a Leap, let me know if there’s anything you’d like tested or developed (summer project!), or questions you may have.

https://www.leapmotion.com



Hope: In Piano Gestures and Glitches, a Gorgeous Free Compilation from Japan

Monday, November 26th, 2012

kaiwa; from mitsuru shimizu on Vimeo.

Quietly melancholic piano gestures and reversed piano hammer strokes collide like waves against glitch-infused rhythms in hope3.0, the output of elementperspective. The “sound & design label” from Osaka weaves together a diverse group of promising Japanese artists, showing in many cases sonic maturity that belies their young average age.

The balance between minimal, glittering piano prettiness and raw, digital rhythms is perfectly in evidence in the music video at top, for Mitsuru Shimizu’s triumphant “kaiwa;” – a real highlight of the set. The photographer and self-described “sound proposer” produces visuals and sounds alike here.

“Hope” is a fitting title for this, the latest (and, sadly, “final”) of three in the free series. There’s a sense of hope emerging from reflection, of transcendent optimism. In delicate ambiences and sometimes glowingly-upbeat textures, the music is perpetually inclined to the positive.

hope3.0 by V.A

True to the design side of the label, you get a set of visual accompaniments in addition to the free music. There are extensive liner notes with details on the artists, plus lock screens and wallpapers for iOS gadgets and desktop computers. This includes the cover image, by Risa Ogawa, a 25-year-old image and textile artist who “expresses on the theme of infection what is not visible and feeling.” The ring of red you see is produced in cotton and nylon.

“Japan is full of hope,” reads the liner notes for the PDF. Listening to the mix, so am I.

More:
http://elementperspective.com/
Facebook page

Found via the wonderful, intelligent INQ Mag, which for me has become about the best way to find out what’s happening in the netlabel scene. (And yes, if you don’t believe there’s still a lively netlabel scene, start reading INQ. You might change your mind. Bandcamp forever.)

For good measure, another fragile, lovely video punctuated with piano and lo-fi film imagery, also by Mitsuru Shimizu.

h from mitsuru shimizu on Vimeo.



In a Swirl of Particles, luanna Uses Gestures to Touch Samples [iPad]

Monday, May 21st, 2012

luanna is a beautiful new application out of Tokyo-based visual/sound collective Phontwerp_. Amidst a wave of audiovisual iPad toys, luanna is notable for its elegance, connecting swirling flurries of particles with gestures for manipulation. I imagine I’m not alone when I say I have various sample manipulation patches lying around, many in Pd, lacking visualization, and wonder what I might use in place of a knob or fader to manipulate them. In the case of luanna, these developers find one way of “touching” the sound.


As the developers put it:

luanna is an audio-visual application designed for the iPad
that allows you to create and control music through the manipulation of moving images.

The luanna app has been designed to be visually simple and intuitive, whilst retaining a set of rich and comprehensive functions. Through hand gestures you can touch, tap and manipulate the image, as if you were touching the sound. The image changes dynamically with your hand movements, engaging you with the iPad’s environment.

The interface is multi-modal, with gestures activating different modes. This allows you to select samples, play in reverse, swap different playback options, mute, and add a rhythm track or crashing noises. It’s sort of half-instrument, half-generative.

Phontwerp_ themselves are an interesting shop, described as a “unit” that will “create tangible/intangible products all related to sound.” Cleverly named as chord symbols, ∆7, -7, add9, and +5 handle sound art, merch, music performance / composition / sound design, and code, respectively. That nexus of four dimensions sounds like a familiar one for our age.

Sadly, this particular creation is one of a growing number of applications that skip over the first-generation iPad and its lower-powered processor and less-ample RAM. Given Apple can make some hefty apps run on that hardware, though, I hope that if independent developers find success supporting the later models, they back-port some of their apps.

See the tutorial for more (including a reminder that Apple’s multitasking gestures are a no-no).

US$ 16.99 on the App Store. (Interested to see the higher price, as price points have been low for this sort of app – but I wonder if going higher will eventually be a trend, given that some of the audiovisual stuff we love has a more limited audience!)

Find it on our own directory, CDM Apps:
http://apps.createdigitalmusic.com/apps/luanna

http://phontwerp.jp/luanna/

See also, in a similar vein, Julien Bayle’s US$ 4.99 Digital Collisions:

http://julienbayle.net/2012/04/07/digital-collisions-1-1-new-features/

http://apps.createdigitalmusic.com/apps/digital-collisions-hd



From Beautiful Ambient Modern Dance to Dubstep, Gestures to Music in Kinect (Download the Tool)

Tuesday, March 6th, 2012

It started as a few compelling demos and proofs of concept, but it’s plenty real now: the tools for translating movement, gesture, and dance from the body into interactive music march forward. Empowered by Microsoft’s Kinect and an artist-friendly toolchain, even a single, clever developer can do a lot. Sound designer, music producer, and Max/MSP developer Chris Vik of Melbourne has been one of those busy early pioneers, with an incredible tool called Kinectar.

So, the tech is cool and shiny and impressive: what about the actual music? And, even more importantly, what if all the hand waving and moving about could be meaningful? That’s the next step. For his part, Chris is teaming up with a dancer and choreographer to combine his compositional ideas with someone who knows how to move. The Dubstep-y demos (all below) are impressive, true, but the early tests of the work with the choreographer are simply beautiful, and demonstrate that wobble bass isn’t the limit of what this can do. They also turn the arbitrary arm-waggling into a part of the art.

And as for you: the software is in alpha, but you can fire up a copy of Ableton Live, grab Kinectar for Mac or Windows, and try it yourself. So if you don’t like the results – be the gesture-controlled basslines too wobbly, be they not wobbly enough – you can put your music, and your movement, where your mouth is.

At top, Chris shows off an early test of the dance collab. (There’s more to come.) Below, a tutorial that shows how this works with Ableton. And read on for more from Chris on what the work with the dancer is about, and what the tool can do.

Chris writes:

Since April 2011 I’ve been working solidly with the Microsoft Kinect, developing my software, Kinectar, to enable its use as a MIDI controller for performing music live. I’ve done a number of performances around Australia since I started the project; however, it’s safe to say that, although I would consider myself an electronic musician, I’m certainly no dancer. Enter Paul…

Dancer Paul Walker and I have joined forces to bring the Kinect-controlled music concept into the world of contemporary dance. Recently we obtained a residency at PACT theatre (a centre for emerging artists), where we spent the week developing different ways of implementing my Kinect music control system in a dance context.

My system is developed in Max and uses OpenNI drivers, OSCeleton and Ableton Live.

via Chris’ blog

CDM will check back in with Chris soon, because:

I’ve got some more videos to release over the coming weeks from a range of my different Kinect music performance applications, including controlling/conducting the Melbourne Town Hall Organ and a 100+ speaker Kinect-controlled diffusion performance. I’ll keep you posted when they’re released!

More on the software:

Kinectar Performance Platform is a toolkit developed by music producer Chris Vik to allow the use of Microsoft’s Kinect motion tracking sensor in computer-based music. The software is designed for electronic musicians to expand the way they control their music in a futuristic and extremely expressive way, using only the waving of hands and a small amount of creativity. It can be used to control the simplest of parameters like a filter or LFO, play notes and chords on a sampler or synthesizer, or be programmed to control an entire live-set through nothing more than gesture.

Key Features:

Movement Tracking UI allows manipulation of the Kinect’s human tracking capabilities, displaying all relevant data extracted from the hands’ location in 3D space

Instrument Builder lets the user build virtual ‘instruments’ by outputting MIDI notes in three modes:

  • Static – Produces a single note value. Useful for drum triggers, turning effects on/off within a DAW, or feeding that trigger back into Kinectar to switch between presets with a gesture
  • Solo – Play sweeping solos by selecting from over 40 musical scale presets, or click notes in the UI to make your own (see the sketch below)
  • Chord – Create a progression of up to 8 chords per preset to play live

Global Flags lets you turn on/off Kinectar’s instruments using a MIDI note sent from your DAW, external MIDI controller or Kinectar itself

MIDI Preset Control lets you switch between Kinectar’s presets and instruments using a single MIDI note

Value Editor enables many more MIDI/OSC outputs, for controlling device values

Visual Metronome popout window sits on top of all programs to make it easy to see if you’re in-time when the music gets messy
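
To give a feel for the Solo-instrument idea above, here is a minimal, hypothetical sketch – not Kinectar itself – that quantises a normalised hand height to a scale preset; the scale, range, and function name are made up for the example.

    C_MINOR_PENTATONIC = [48, 51, 53, 55, 58, 60, 63, 65, 67, 70, 72]   # MIDI note numbers

    def hand_height_to_note(y, scale=C_MINOR_PENTATONIC):
        """Map a normalised hand height (0.0 = low, 1.0 = high) to a note in the scale."""
        y = max(0.0, min(1.0, y))
        index = int(round(y * (len(scale) - 1)))
        return scale[index]

    # sweeping the hand upward walks up the preset scale
    print([hand_height_to_note(h) for h in (0.0, 0.25, 0.5, 0.75, 1.0)])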

It’s labeled “rough alpha,” so don’t expect a finished tool here, but you can go download it and give it a try (or learn more about what’s possible):

http://kinectar.org/download

And now, the obligatory (but quite awesome, Chris) Dubstep demo videos:



Grabbing Invisible Sounds with Magical Gloves: Open Gestures, But with Sound and Feel Feedback

Wednesday, September 7th, 2011

You might imagine sound in space, or dream up gestures that traverse unexplored sonic territory. But actually building it is another matter. Kinect – following a long line of computer vision applications and spatial sensors – lets movement and gestures produce sound. The challenge of such instruments has long been that learning to play them is tough without tactile feedback. Thereminists learn their instrument through its extremely precise sensing and sonic feedback.

In AHNE (Audio-Haptic Navigation Environment), sonic feedback is essential, but so, too, is feel. Haptic vibration lets you know as you approach sounds — essential, as they’re invisible. The project is the work of Finland-based DJ/VJ Matti Niinimäki, aka MÅNSTERI (“Mons-te-ri”), and is part of research undertaken by the SOPI Research Group at Media Lab Helsinki. Like some sort of sound sorcerer, the user is entirely dependent on movement, feel, and sound as they move unseen sound sources through space. (More technical details below.)
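
As a rough illustration of that interaction – not the actual AHNE code, and with the OSC format, object position, and scaling all assumed from the OSCeleton details quoted below – the core loop might look something like this in Python with the python-osc library: skeleton messages report the hand position, and the glove’s vibration intensity grows as the hand approaches an invisible sound object.

    import math
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    SOUND_OBJECT = (0.2, 1.1, 1.8)   # hypothetical position of an invisible sound, in metres

    def handle_joint(address, name, user_id, x, y, z):
        """OSCeleton-style /joint messages carry a joint name, user id and 3D position."""
        if name != "r_hand":
            return
        distance = math.dist((x, y, z), SOUND_OBJECT)
        # closer hand -> stronger vibration: 0.0 beyond 50 cm, 1.0 when touching the object
        intensity = max(0.0, 1.0 - distance / 0.5)
        print(f"haptic intensity: {intensity:.2f}")   # would be sent to the glove's vibration motor

    dispatcher = Dispatcher()
    dispatcher.map("/joint", handle_joint)
    BlockingOSCUDPServer(("0.0.0.0", 7110), dispatcher).serve_forever()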

It’s labeled, as always, “proof of concept.” The creator promises more videos to come; we’ll be watching as this evolves, as it looks terribly promising.

Below, “Tension” is a fair bit simpler: users walk through a space and control synth parameters. (“You are the knob,” one might say, though I don’t suggest shouting that at someone you don’t know. They could take it the wrong way.)

More descriptions:

AHNE

This is a demonstration video of AHNE – Audio-Haptic Navigation Environment.

It is an audio-haptic user interface that allows the user to locate and manipulate sound objects in 3d space with the help of audio-haptic feedback.

The user is tracked with a Kinect sensor using the OpenNI framework and OSCeleton (github.com/​Sensebloom/​OSCeleton).

The user wears a glove that is embedded with sensors and a small vibration motor for the haptic feedback.

This is just the first proof-of-concept demo. More videos coming soon.

HEI Project 2011
SOPI Research Group
sopi.media.taik.fi/

Aalto University School of Art and Design

AHNE – Sound and Physical Interaction

Tension

A brief video showing Tension. An interactive spatial sound installation for multiple users.

A person enters the space and a generative sound is assigned to that person. The sound pans around in the 6-channel speaker system following the user in the space.

Up to 5 users can use the installation at the same time. Each person modifies the other sounds based on the distance to the other users. The closer you are to other people the more the tension in the sound increases.

Tension – Sound and Physical Interaction

Side note: watching these two videos makes me want to consult with someone on non-verbal expression, posture, and stage presence. That criticism is aimed at myself – I could use it. Perhaps we need an all-physical, unplugged music event for laptopists, controllerists, and electronic musicians. And I can at least say I’ve had some experience in this, having worked in the dance program at my undergraduate alma mater, Sarah Lawrence. Anyone game? (Sounds like something we could do while CDM is in Berlin in the fall.)

For their part, the Finnish research facility is working with dancers, along with Nokia Research Center. (Sadly, I can’t find documentation.) But I think interesting things happen when we non-dancers learn movement technique, too.

