For those interested in my project myUniverse, here is my statement after week 2 of work/design.
Indeed, these weren't really 2 full weeks, but at least 1 full week was spent on it.
I hope you like this description of my journey.
One reason for writing it is that travelling without sharing things sucks a bit.
It also serves as my log file, and lets me look back at the path walked behind me.
Some architecture design mods
I decided to put all dynamic data, like object/cam distances, and their related calculations inside my objects themselves.
I mean, as near as possible to the entity that needs them.
That way, I free the JAVA Core from having to:
– calculate & store them (storing the current distance isn't really relevant!)
– message all objects with these dynamic values, which takes time & energy.
Now, I'm using only Max objects to <em>transmit</em> the current cam position & orientation to ALL my objects, using a broadcast bus (receive/send).
I'm also beginning to use jit.gen / GenExpr for fast calculation of distances & more, and yes, it seems TERRIFICALLY fast.
Indeed, pure matrix calculation seems really speedy.
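To make the idea concrete, here is a minimal Java sketch of the per-object calculation that jit.gen / GenExpr now handles on the Max side: plain Euclidean distance between the cam and an object. All names here are illustrative, not taken from the actual patch.

```java
// Sketch of the cam/object distance calculation that now runs in
// jit.gen / GenExpr on the Max side. Names are illustrative only.
public class DistanceSketch {

    // Euclidean distance between the cam and an object in 3D space
    static double distance(double camX, double camY, double camZ,
                           double objX, double objY, double objZ) {
        double dx = objX - camX;
        double dy = objY - camY;
        double dz = objZ - camZ;
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }

    public static void main(String[] args) {
        // cam at origin, object at (3, 4, 0) -> distance 5
        System.out.println(distance(0, 0, 0, 3, 4, 0));
    }
}
```

The same expression, written once in GenExpr, is evaluated per cell of a matrix, which is why the pure matrix version feels so fast compared to messaging each object from Java.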
And indeed, the JAVA Core will progressively be used only for:
– interfacing with presets (store/load)
– interfacing with the GUI
All the dynamic & current-value work would be done using Max objects directly (I mean native or externals, but .. Max objects, not JAVA or JS or code)
A big change in the sound part
I wanted to keep the parts separated.
During my prototyping step, I used some small poly~ directly inside my object abstractions.
It was only to avoid using the messaging system and having to switch between SuperCollider & Max6 during the prototyping phase.
And I liked it.
Indeed, it would be much simpler to make everything in Max.
I'd go to SC or another external sound generator only if things went bad on the performance side.
I tested the ICST Ambisonics pack.
It seems to work very well.
I'm not sure I'll use it yet.
Each sound part of my objects will be built like this:
– the sound generator itself
– the distance attenuation part (depending on the object itself + some global atmosphere parameters)
– the doppler effect part (depending on the relative speed between cam & object)
– the spectralization part (depending on the object's 3D position relative to the cam)
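As a rough illustration of the attenuation & doppler stages above, here is a Java sketch using an inverse-distance gain and the classic doppler shift formula. The formulas, parameter names, and constants are my own assumptions for illustration; the real chain lives inside the Max abstractions.

```java
// Sketch of the distance-attenuation and doppler stages of the sound
// chain. Formulas (inverse-distance gain, classic doppler shift) and
// all names are illustrative assumptions, not taken from the patch.
public class SoundChainSketch {

    static final double SPEED_OF_SOUND = 343.0; // m/s, in air

    // Inverse-distance attenuation; "atmosphere" scales how fast the
    // level drops with distance (a global parameter in the text).
    static double attenuation(double distance, double atmosphere) {
        return 1.0 / (1.0 + atmosphere * distance);
    }

    // Doppler-shifted frequency for a source moving at relativeSpeed
    // along the cam-object axis (positive = approaching the cam).
    static double doppler(double frequency, double relativeSpeed) {
        return frequency * SPEED_OF_SOUND / (SPEED_OF_SOUND - relativeSpeed);
    }

    public static void main(String[] args) {
        System.out.println(attenuation(10.0, 0.1)); // gain at 10 m
        System.out.println(doppler(440.0, 34.3));   // approaching source: pitch rises
        System.out.println(doppler(440.0, -34.3));  // receding source: pitch drops
    }
}
```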
With objects containing both a visual & a sound part, I have total coupling between those parts.
It makes me afraid of being limited; at first I wanted to be able to link a given object to this or that synth.
BUT no. At the very beginning, I wanted to create very specific object units, as if they were living… with their own characteristics.
Artistically speaking, I like that idea.
It would be like… spending some time on this creature, making it live progressively. If I needed something different, I would create a totally new one, and I'd be able to use both types in my scenes.
Not a lot of visuals yet
Currently, my system only displays basic primitive spheres.
I'm afraid that heavily textured or numerous objects would crash the whole system,
but I cannot go further by detailing everything at the same time.
I'll get to that part quite soon.
I already built a system that takes care of activating/deactivating objects depending on their position relative to the cam (far OB3D objects are deactivated for the current OpenGL context).
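A minimal Java sketch of that activation logic, assuming a simple distance cutoff (the threshold value & names are hypothetical, the real system toggles OB3D objects inside Max):

```java
// Sketch of the distance-based activation logic: far objects are
// disabled for the current OpenGL context. Threshold and names are
// hypothetical illustrations of the idea, not the real patch.
public class CullingSketch {

    static final double ACTIVE_RADIUS = 50.0; // hypothetical cutoff

    // Returns true when the object should stay active (near the cam),
    // false when it should be deactivated for the current GL context.
    static boolean isActive(double distanceToCam) {
        return distanceToCam <= ACTIVE_RADIUS;
    }

    public static void main(String[] args) {
        System.out.println(isActive(10.0));  // near object -> stays active
        System.out.println(isActive(120.0)); // far object -> deactivated
    }
}
```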
my current steps:
– integrate the sound chain into each poly~, or into each object outside of the poly~ (safer, maybe)
– really separate dynamic & static data (= the data that needs to be stored/retrieved from presets)
– calculate all the hardcore stuff the fastest way, using jit.gen or even some specific C externals
– draw some nice visuals using the whole system without killing the performance, which would force me to reconsider a great part of the architecture
15 December 2014 / 20:48