
DISRUPT!ON installation & performance report

Apr 1, 2014

So we did it.

François Larini, the curator of DISRUPT!ON both in London (installation side) and in Monaco (performance side), and I kept saying it: WE DID IT.

First, in this report-like post, I want to massively and infinitely thank:

- François Larini: without him, this project wouldn’t have happened at all, and I wouldn’t have met such a great person and curator. I really look forward to continuing to work with him,
- Marie-Claude Beaud, NMNM’s Director and a very impressive and amazing person,
- Pierre & Gary from The Centre of Attention in London for hosting the installation in their space,
- GMEM, where I prepared and tested the project itself,
- the NMNM staff for their kindness, interest and precious help.

I spent a couple of days in London last week installing DISRUPT!ON. It was a really nice moment: the one when everything comes to life.

installation01

François Larini and I were struggling with wires

installation02

Setting up the 2 BIG LCD video projectors!

 

Then I tested and searched for the sound that best fit my intention. Pierre, the gallery’s owner, and François were there to help me and support my work. Those were nice moments!
Here are some pictures taken right after the moment I said: “OK, I’m OK now!”

installationPreOpeningLondon

disruption03

disruption01

openingDisruptionLondon04

openingDisruptionLondon03

openingDisruptionLondon05

Then, on Friday 28th March, I played the live performance in Monaco at the Nouveau Musée National de Monaco. While performing, I was focused on the visuals & sound in Monaco, tweaking the sound and trying to find the edge and the fragility of the right tension and moment. The nice thing, which was a bit hard to explain to the audience, was that each parameter I tweaked was also sent as numerical values over the internet to the installation in London. The latter reacted by changing, moving and evolving in a way I couldn’t even know from Monaco, as I had no view of what was happening there.
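For the curious, the principle is simple to sketch. Here is a minimal, hypothetical example in Python using the python-osc library; this is NOT the actual DISRUPT!ON code, and the host, port and addresses are made up for illustration:

```python
# Minimal sketch of the idea, NOT the actual DISRUPT!ON code:
# every parameter tweaked during the performance is also mirrored
# over the internet as an OSC message to the remote installation.
# Requires the python-osc package; host, port and addresses are hypothetical.
from pythonosc.udp_client import SimpleUDPClient

london = SimpleUDPClient("installation.example.net", 9000)

def tweak(name: str, value: float) -> None:
    # ...apply the parameter locally (sound & visuals in Monaco)...
    london.send_message(f"/disruption/{name}", value)  # mirror it to London

tweak("tension", 0.73)
```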

presentingDisruptionMonaco

I was presenting DISRUPT!ON to the museum’s audience

performanceDisruptionMonaco

I was about to play DISRUPT!ON in NMNM

Here are some pictures shot by Fiorella del Prato and Élodie Biancheri from NMNM, taken with mobile phones. I’m waiting for pictures from Monaco TV, who interviewed François Larini and me. Apparently they also shot parts of the performance.

live01

live02

live03

You can find more pictures on my FB page here.

Here are 3 videos captured by Élodie from NMNM:

Darwin Grosse & Julien Bayle discussing Art & Technology

Feb 3, 2014

I was invited by Darwin Grosse, one of the magic developers and the head of documentation for Max6 (formerly Max/MSP) at Cycling 74, to have a discussion about my art, my creative processes and my background.

It was a big 50-minute discussion, a bit messy at times, but rich and very interesting.

I now invite you to listen to the whole discussion here:

Here is the podcast intro, written by Darwin Grosse:

If you are involved in electronic/digital media creation, you’ve surely run across Julien Bayle. He is a performer, installation artist, programmer, writer and is heavily involved in almost every facet of media arts. He is also a public figure that spends a lot of time investigating systems – and sharing what he’s found with the community. I find Julien fascinating, because he seems to be incredibly productive, and seems to finish more work than most people can even conceive of starting. In this conversation, we talk about his background, how he became comfortable with the many technologies that he uses, and how he is able to accomplish so much.

As I suggest in the intro, this podcast was a bit of a scramble; both technical and scheduling issues pushed this right to the brink. But this conversation is a great way to peer into the mind of a self-described “obsessive”, and understand how Julien decides on which work to pursue as well as which decisions he’s decided to make. Great stuff. To learn more about Julien’s work, please check out julienbayle.net, where you can read his blog and follow him through some of his tech-exploits.

Ableton Live, Max6 and Max for Live training courses

Jan 20, 2014

ableton_certified_trainer_logo_white_bg Bandeau_Ableton-live9

As an Ableton Certified Trainer and professor of digital arts at the École Supérieure d’Arts et de Design Marseille-Méditerranée, I can teach you how to use Ableton Live 9 Suite and Max for Live!

I also give beginner, intermediate and advanced courses on Max6 (formerly named Max/MSP), Processing and Arduino.

My rate is €1500 for one week of individual training (in person or over Skype), or €2500 for companies.

NOTE => I am not the most expensive, and I am often notably cheaper than non-certified trainers…

Together we can cover ANY topic related to art & technology, such as audio-reactive visuals, custom plugins, MIDI controllers, etc.

 

How do you plan your training?

Send to julien@julienbayle.net:
- your contact details (name, address), so I can draw up an agreement and a quote,
- your training goals,
- the dates you would like to book.

What is the program?

There are two possible approaches:
- full course
- dedicated course

The full course covers ALL aspects of each technology. Since one week isn’t enough to go into every detail at the same depth, the goal is for you to be fully autonomous at the end of the training, able to manage completely on your own. Since I master these technologies, I know precisely which points to emphasize and which ones can be skimmed over a little more.

The dedicated course is built around your personal training goals. I can help you define them if you’re struggling to.
Over 5 days of Ableton Live, this could be, for example:
- 2 days of detailed, precise review of the arrangement/session views, clips, tracks, routing, warping, sync, and automation/modulation,
- 1 day on all the native devices (instruments, audio and MIDI effects),
- 1 day on your own way of working and composing, with support for all your specific questions (also covering other software),
- 1 day of help building a Max for Live device dedicated to your own studio use.

But also 3 days of Live and 2 of Max for Live, or any other combination that will benefit you!

What if it costs too much for you?

I am registered as a training center with the prefecture under number 93 131436613.
This means your employer can cover this training without you spending a cent.
All the information is on this page explaining the possible funding options.

 

What if you need training covering several topics?

Send all your needs to julien@julienbayle.net.
I am used to training artists, teachers, beginners and students, and I always build courses dedicated to the trainees.
So we can cover Max for Live, a bit of Arduino and a lot of Processing, for example.

What if a week is way too much?

We can do 1, 2 or a few days spread over several weeks. Booking the whole package in advance is essential, but we can include “floating” days in the agreement.

5 REASONS WHY PEOPLE CHOOSE ME

REAL USER = REAL EXPERT

I have been an artist and Ableton Live trainer for 8 years, and I use it every day for my own creations.

GUARANTEED TO LEARN EVERYTHING

I commit personally, in my own name, to you being able to do everything with Ableton Live: no anonymity or company name, just mine!

A SESSION DEDICATED TO YOU

I build our week together around YOUR needs, to get to what YOU want: no generic/one-size-fits-all sessions with me!

POST-TRAINING SUPPORT

If you need me after your training, send me your questions and I will answer them personally: no middleman!

REGISTERED TRAINING ORGANIZATION

All my training courses can be funded by your employer, because I represent an officially registered training organization. Read this to understand!

My page on the Ableton website is here: JulienBayle_AbletonCertifiedTrainer

Introducing ALPHA a/v live performance

Jan 7, 2014

Some of you follow me closely; you have already read about and know ALPHA, my new a/v live performance, exploring further uses of complex beat layering & real-time sequence generation at the same time as sound-reactive visualization.

The others have to know, now.

The official page is: http://julienbayle.net/works/creation/alpha

alpha is the code name of a series of live performances planned for 2014.

It involves sounds, visuals, and algorithmic & controlled sequencing to provide a unique experience in which neither sounds nor visuals are limited by the usual codes & borders, allowing the audience to be completely immersed in a tightly woven & dense matter.

From a physical approach to sound to algorithmic sequencing, the alpha a/v live performance explores the fields of sound & visuals as much as data processing.

Here are early pictures captured from the first real live performance test. Obviously, you have to see it while hearing the sounds; that is what will immerse you deeply inside alpha’s universe. Please subscribe to my youtube channel, as this is the best way to stay closely connected to news about it (twitter and fb too, for more texts and descriptions).

Here is the video playlist that will be updated throughout the live composition, prototyping and real-time tests.

Here is some visual artwork captured during the tests, too.

alpha

alpha2

alpha3

alpha4

alpha5

alpha6

alpha7

alpha8

Exploring a new live performance setup

Dec 23, 2013

For many years, like many musicians and performers, I have been constantly exploring new ways of performing on stage.

It seems I have reached another important step in my exploration and work these days.

My requirements are quite simple:

- ability to interact w/ sequences on stage

- triggers available for my visuals

This seems simple, indeed.

But this is a long process that includes exploration, tests, build/destroy steps and refinement.

Today I’m using Ableton Live exclusively, but often in unusual ways, using only its multi-track ability to route audio from external gear, internal stuff like VSTs, etc.

Disturbed determinism & framed chaos

On stage (and in my life), I like to build a process, a system, and let it evolve, fixing it sometimes, but letting it live its own life.

Creating a sequence is a usual process. A lot of electronic musicians create sequences. Often, a LOT of sequences.

From a nominal clip to many alterations of that clip, we all have several bases for our music, and we can then derive variations, breaks, or other stuff from them.

I like that way of working, but in my case, it would mean 1 nominal clip and A BUNCH of others. Why? Because on any given day I might want to play this or that one on stage.

I did that a lot and, actually, I’m still doing it for some parts of my live sets, but I need to go further.

Sequencers are exactly what I wanted to use. Live is a sequencer, actually. But it cannot produce sequences by itself. Of course, with Max for Live we can do that (even create a bunch of clips automatically, or things like that), but that is exactly my point: I have to use something else alongside Live.

Reaktor’s sequencers and environment are very interesting alongside Max for Live. Actually, I’m using both of them, even if I prefer Max because it is much faster during the programming steps. Native Instruments should really put a couple of programmers on improving the programming interface.

Anyway, my purpose can be drawn like this:

sequencers

 

I’m using both methods, but I’m moving more toward the right side, with live sequencers and more risk, but also more flexibility on stage, and more room for expression too.

Another requirement with any sequencer is the ability to use MIDI both for sound generation and for visuals triggering.

A short story of my 2013 ways of doing visuals

After a lot of exploration, experimentation and tests in real live performance conditions, I’m still using Max6 / Jitter as a standalone for any audio- or MIDI-reactive visuals generation.

I have taught some workshops about that, and in the end it is the only way for me to keep my setup flexible and powerful at the same time.

VDMX is also a massive and near-perfect tool, but I miss Max6 whenever I use Quartz Composer, even if there are a lot of similarities.

If one day I use footage on stage, I WILL go with VDMX for sure. But I don’t really like that approach, as I don’t really use audio samples anyway. That is not a philosophical choice, just a pragmatic one according to my own workflow, practice, etc. VDMX is THE tool for people who don’t want to program (and are on OSX…). It is really impressive.

You can check this page about one of my latest workshops. I talked about the basics and what everyone has to know if they want to build their own audio-visual setup for live performance. It is a bit oriented toward the programming framework Jitter (while also talking about Processing) rather than ready-made software like Resolume Arena and the like.

In particular, one core element of this workshop is the small list of necessary modules/concepts to consider when creating such a system:
- a flexible way to display/design any visuals
- the ability to rapidly save all the parameters of a “scene” as an easily reloadable preset
- easy connection to the audio/MIDI triggering system
- a rapid routing feature to map this to that

Actually, my system allows all of that. I should really say my systems: I have built more than one, and some are very specific and not totally reusable. This is closely related to my way of building or composing new stuff and a new setup every time.

But the core of the system is always:

workshop02

Of course, this would be clearer if you had followed my workshop, but the idea is:
- you have data providers (for instance, Ableton Live popping out MIDI data)
- you select them and filter what you need
- you route the data into your visuals system(s)
- you can save/restore parameters according to specific messages (included in the data flow from the providers)

Almost all my systems are based on that.

Of course, this can be duplicated with Openframeworks, Processing or Max6, by using OSC instead of MIDI to transport the data, for instance, or by using multiple computers to distribute the CPU load over a powerful network of machines.
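To make that concrete, here is a minimal, hypothetical sketch of the provider → filter → route idea over OSC, in Python with the python-osc library (the addresses, port and handlers are examples only, not my actual systems):

```python
# Hypothetical sketch of the data-provider -> filter -> route core over OSC.
# Requires the python-osc package; addresses and port are examples only.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_trigger(address, *args):
    # filter: keep only meaningful triggers from the data provider
    if not args or args[0] <= 0:
        return
    # route: forward the event to the visuals system(s)
    print(f"{address} -> update visuals with {args}")

def on_scene(address, scene_id):
    # save/restore: preset messages embedded in the data flow
    print(f"recall scene preset {scene_id}")

dispatcher = Dispatcher()
dispatcher.map("/live/trigger/*", on_trigger)  # e.g. Live popping out events
dispatcher.map("/live/scene", on_scene)

BlockingOSCUDPServer(("0.0.0.0", 9000), dispatcher).serve_forever()
```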

The basic idea involves some Reaktor ensemble hacking

Designing Reaktor ensembles can be a pain. I can’t start talking about Reaktor without saying that. Especially when you are used to patching with Max6 (and we can’t compare the two frameworks, because they just aren’t intended for the same purpose at all), you miss all the small useful things:
- copy/paste
- cord routing
… in short, easy patching features.

But you get powerful DSP (without script coding, btw) and, especially, the ONLY proven & reliable way to easily design a VST with a GUI, without knowing anything about C++ or any text-based coding framework.

I especially played with a couple of sequencers like:
- gobox
- newscool
- aerobic
- rythmaker
- krypt

These sequencers are really interesting. I’m not going to describe each of them (I suggest you explore them yourself).

gobox

newscoolFuse

aerobic

rythmaker

krypt

From random-based sequence builders to purely monophonic (but rapidly sound-switching) triggers like GoBox, I really (re)discovered these in 2013.

But the point here is: would I have been able to use them as they were?

Short answer: no!

Longer one, now.

While those sequencers often embed a lot of sound generators or sample-mangling tools, the output is often only 2 channels: Left & Right.
That sucks, especially if you need to use them inside a proper DAW.
Even if some Reaktor ensembles (apparently) provide multiple outputs, in fact I found bugs and problems, and in the end, not many really provide that.

So I had to customize them, building some new parts of the ensembles in order to provide proper, working outputs.

Indeed, the Reaktor VST natively provides 4 stereo outputs, which is already VERY good.

Let’s see how it works and how I use that in Ableton Live.

First, let’s load the Reaktor VST on a classic MIDI track in Live and load any ensemble you want to use.

Then create an audio track and set its input source to the track holding the Reaktor VST device.

Check the routing and set the audio track’s monitor selector to IN (so that the input is always passed through to the output, even when the track isn’t armed).

Then, with the Reaktor VST track selected as the audio source, look at what is inside that list:
- pre-FX
- post-FX
- post-Mixer
- Reaktor 2 – Reaktor5
- Reaktor 3 - Reaktor5
- Reaktor 4 - Reaktor5

pre-FX grabs the audio signal before all the audio FX in the source track,
post-FX does the same but after all the audio FX (which also means before the mixer),
post-Mixer grabs the audio signal after it has passed through all the audio FX and the mixer of the source track.

You have to know that the Reaktor VST device itself provides 4 stereo outputs (= 8 mono).
Each one is usable independently in the host (Live), and this list itself teaches you how to use them.
pre-FX, post-FX and post-Mixer all relate to the first 2 outputs (the first stereo pair);
then Reaktor 2 relates to the second stereo pair, Reaktor 3 to the third, etc.

Standard stuff, but sometimes it is good to be reminded of it.

But the fact that the VST device itself has 4 stereo outputs DOES NOT mean that the ensemble inside it can produce sound on all 4 stereo outputs. Actually, few ensembles provide that, and some are even buggy.

Here is a small schematic showing how it goes.

reaktorAudioOutputs

Does it make sense? I guess it does.

On the schematic, in the ensemble part, there are 4 elements. These are special Reaktor objects named Out Port.
You can find it in Reaktor’s contextual menu when you want to add an object to the playground:

outport

 

Any signal connected to one of these out ports becomes available in Live (or whatever VST host you use).

This is for audio signals.

Now, let’s check how it works for grabbing MIDI data from Reaktor ensembles.

But first, you have to know WELL how MIDI devices and audio devices work in Live.

I’d suggest you read the part of my Max for Live book about that.

So if you read it carefully, you know that the only devices able to pop MIDI information out of a chain are MIDI FX.

This is as true as it is false.

Actually, by using MIDI source routing, we can also grab MIDI data from any ensemble in the Reaktor VST, as long as the ensemble pops out MIDI, of course.

Let’s see how it goes, but first, here is the final big picture:

audioAndMidi

As I described for audio, you might suppose that it works the same way for MIDI. Bingo, you supposed correctly: on Live’s routing side, it does.

Inside the ensemble, however, MIDI ports don’t pass through the VST the way audio ports (the Out Port object) do.

We “only” have to create a Note Pitch/Gate object and connect it correctly inside the ensemble; the latter will then pop out MIDI notes that can be grabbed by MIDI-source-routing the VST in Live (see the previous schematic).

MIDINoteReaktor

If you are hacking ensembles that don’t provide MIDI output natively, you’ll probably have to use this Note Pitch/Gate object, and the Constant math object too.

In the schematic, you can see some objects with a number inside. These are Constant math objects. As soon as they are connected to something else, they pop out (set) the value they contain.

I’m not going to describe the newscool ensemble, but basically it can produce 6 different sounds, and these are triggered by 6 different outputs (on the right of the module at the top left of the ensemble’s playground); these triggers just change their value from 0 to 1 when they want a sound to be triggered… and I basically used that to trigger my MIDI notes.

Indeed, this system pops out a different MIDI note, with a pitch from 0 to 7 (eight different notes), one for each trigger.
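Reduced to its logic, the hack is just an index-to-pitch mapping. Here is a tiny, hypothetical Python sketch of that concept using the mido library (the real thing is wired inside Reaktor, and the port name here is made up):

```python
# Conceptual sketch only: each sequencer trigger index is mapped to a fixed
# MIDI pitch, mimicking the Constant -> Note Pitch/Gate wiring in Reaktor.
# Requires the mido package; the port name is hypothetical.
import mido

out = mido.open_output("to visuals")

def on_trigger(index: int, gate: int) -> None:
    # the trigger goes 0 -> 1 when the ensemble wants a sound to fire
    kind = "note_on" if gate else "note_off"
    out.send(mido.Message(kind, note=index, velocity=100))

on_trigger(0, 1)  # trigger #0 fires -> a MIDI note with pitch 0 pops out
```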

So now you know a bit more about how to enable and use multiple outputs in Ableton Live, and also about how to use the PRECIOUS MIDI data from those nice Reaktor sequencers…

Multiple audio outputs provide a very nice way to tweak things on stage (or in the studio). We can EQ them differently, use effects, etc.

MIDI notes coming from Reaktor are very useful for making music (using such sequencers with Operator or another VST synth is just… a purely inspiring pleasure!) but also for other kinds of triggering, like visual triggering, which is basically why I studied and hacked all this.

I hacked some of those I quoted at the beginning of this article, and some others too.

My new setups now embed some of these sequencers. But as soon as I played with my new systems, I noticed something absolutely stupid:

if I use the MIDI notes coming from these sequencers, and I also have all their separate audio channels (in order to mute or alter them), then if I mute all the channels except one, for instance, the MIDI notes are still popped out, even the ones corresponding to my muted audio channels.

That system imperfection was quickly fixed using Max for Live :)

Where Max for Live helped a lot

To summarize the problem clearly:
- my Reaktor sequencers pop out sounds + MIDI notes
- I can mute some audio tracks, and thus mute some of the sounds popped out by the Reaktor sequencers
- but the MIDI notes are always popped out

Basically, I had to design some Max for Live devices that:
- are dedicated to particular Reaktor sequencers (some pop out 4 tracks, some fewer, etc.)
- observe the tracks’ mute states
- react to a mute by closing or opening gates, letting specific MIDI notes through or blocking them

Here is the patch, basically.

MIDIfilter

I can’t give a Max for Live course here: my book is better placed for that.

But here are some tips about the patch.
It is basically a MIDI filter. All the data coming into the track’s input enters the device through the top-left object named [midiin]. This happens (the data coming into the track) because of the routing + the monitor state set to In.

Then the data is pre-filtered (here, I only want MIDI notes with a pitch of 0, 1… up to 7).

Then there are small gates that can be opened or closed, handled by subpatchers named [muteObserver]. Basically, the mute observing happens inside these.

The spaghetti wizardry on the right sets up each mute observer to watch the right track.

Then [midiformat] reformats the MIDI data and sends it to [midiout], which pops the notes out (or not) to the track’s output, which feeds my Max6 visual generator.
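To make the logic explicit, here is a rough, hypothetical Python transcription of what the device does (the real implementation is a Max for Live patch, and the pitch-to-track mapping below is only an example):

```python
# Rough Python transcription of the Max for Live device's logic; the real
# implementation is a Max patch. The pitch-to-track mapping is hypothetical.
ALLOWED_PITCHES = range(0, 8)  # the pre-filter: only notes 0..7 may pass
track_muted = {pitch: False for pitch in ALLOWED_PITCHES}

def on_mute_changed(pitch: int, muted: bool) -> None:
    # each [muteObserver] watches one Live track and flips its gate
    track_muted[pitch] = muted

def midi_filter(pitch: int):
    # the gate: pass a note only if its pitch is allowed AND the
    # corresponding audio track is not muted
    if pitch in ALLOWED_PITCHES and not track_muted[pitch]:
        return pitch  # on to [midiformat] -> [midiout] -> the visuals
    return None       # blocked: no note, no visual trigger

on_mute_changed(3, True)
assert midi_filter(3) is None  # muted channel: note blocked
assert midi_filter(2) == 2     # unmuted channel: note passes
```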

What does that mean?

If I have a kick sound produced by my Reaktor sequencer, we can hear the sound, the MIDI note generated by each triggered sound goes to my Max6 patch, and (for instance) it produces a color.

But, as usual when using sequencers on stage, if I want to mute (on the Live side) the audio track producing my kick sound, the Max for Live device detects that and closes a gate: the corresponding MIDI note won’t be popped out and the color won’t be triggered. The loop is closed :)

I hope you enjoyed these explanations; if you need more, leave me some comments.
I should add some videos about all that asap, but my schedule is tight.


