So, if you read my previous post, you'll be thinking: "Griff, how does any of this apply to droidViz? How are you going to redesign it?" To be honest, I'm not sure. Let's find out! This post may be pretty non-linear, as it's mostly a stream of consciousness. Bear with me!
What does droidViz have to do?
At its root, droidViz performs a fluid simulation and renders it using OpenGL ES 2.0. Great. What else does droidViz have to do? Well, we need to vary the visualization somewhat: we need the ability to create custom visuals with a high degree of customization. droidViz needs to be implemented in the NDK in order to get access to high performance visuals and processing. Its simulation parameters need to be changeable at any time while running, and visuals must be switchable on the fly as well. Developers will not want to work at the native level, so droidViz must have a simple and intuitive Java interface which allows OpenGL based rendering before or after droidViz renders. We need the ability to create changes in density and velocity from the Java level, based on one or more multitouch inputs. Most importantly, droidViz must consume as little CPU and memory as possible, so that it can be used in existing applications.
Wow. That was a lot. Let's make a bulleted list for easy reference:
- Perform a fluid simulation and render it
- Create custom visuals with a high degree of customization
- Have a full Java level interface which allows for control of simulation constants
- Allow for rendering before and after droidViz at both the Java and Native levels. (That'll be hard)
- Make droidViz have as small a footprint as possible.
Well, currently droidViz does the first bullet pretty well. Unfortunately, it doesn't do ANY of the others yet. In its current state, it's nothing more than a tech demo. This is something we want to change.
How can we create an OOP framework that supports all of these required specifications? Let's start with the first bullet in mind and work toward the final architecture.
Perform a fluid simulation and render it
Currently, fluid.c deals with creating the VBOs, updating them frame by frame, and rendering them using calls to GL. This is all stuff we'd like to encapsulate into an extensible OOP framework. solver.c provides the optimized algorithmic backend (thanks to the legendary Jos Stam!). We have two levels of rendering which currently take place: we can render the velocity field and the density field. Hell, we could even render other stuff too. (Temperature field? Particles?) What if we want to render multiple simulations on the same screen with the same context? Is this something we want to support? Is it even really feasible? I don't think so. I can't think of a good reason for it. We'll say no to this for now.
Should the developer have control over which layers render? Yes. Should the developer be able to create new render layers? Yes. Should he be able to create a new render layer from the Java level? I can't think of a good reason for this. We'll say no.
Thinking about it this way, it seems to make sense to create a framework of render layers, each rendering something unique, like the density or velocity fields. Let's take a step back though... Both the density and velocity fields are fundamental to the fluid simulation: they're both required for the simulation to work and are implemented inseparably in solver.c. Further render layers will be derived from the density and velocity values. Temperature will be mostly derived from the density values. Any sort of future particle engine will rely only on the velocity values...
Perhaps we create a render stack which has, at its base level, the current droidViz rendering, with switches to turn the density and velocity visualizations on and off. A generic render layer interface could be created which passes in velocity and density information and provides a "draw" function, allowing us to perform calculations each frame with updated simulation variables. How does this differ from the pre/post rendering I'll cover later? Well, these render layers rely on droidViz's simulation results, whereas drawing which doesn't need those values can be done elsewhere.
So, currently, I have a render stack in mind with a base fluid layer with switchable basic visualizations of velocity and density. OOH Wait. What if the base is just the "Invisible fluid simulation" and velocity and density are implemented as render layers and put into the render stack just like any other render layer? OOO I like this. This also allows us to separate the fluid solver and updating code from the actual render code. This works out to our advantage.
How is this implemented? We'll have to create two objects: a renderStack and a fluidSim object. Each frame, fluidSim generates density and velocity arrays and feeds them to the renderStack, which then performs the rendering sequentially, similar to the processing chain I described earlier. This allows for expansion and modularity. I LOVE IT.
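Just to pin the idea down, here's the shape I have in mind, sketched in Java for readability. The real thing will live in C inside the jni folder, and every name here is made up:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch only - the real implementation will be native code.
interface RenderLayer {
    // Called once per frame with the freshly simulated fields.
    void draw(float[] density, float[] velocityU, float[] velocityV);
}

class RenderStack {
    private final List<RenderLayer> layers = new ArrayList<RenderLayer>();

    void pushBack(RenderLayer layer) { layers.add(layer); }

    // fluidSim steps the solver, then hands its arrays to the stack;
    // the stack draws each layer in order, like a processing chain.
    void drawFrame(float[] density, float[] u, float[] v) {
        for (RenderLayer layer : layers) {
            layer.draw(density, u, v);
        }
    }
}
```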
Is there anything currently limiting or infeasible about this implementation? I can't think of anything. Let's move on to the next bullet.
Create custom visuals with a high degree of customization
Well, the obvious answer here - and what we've kind of been working on already - is to have the final visualization controlled by GLSL ES 2.0 shaders. This only came out of spec a year or two ago, so the developer is going to have some learning to do, but this allows for the HIGHEST degree of customization we can afford the developer. How is this implemented?
Initially, we were planning on one vertex/fragment shader for the whole shebang. It would receive velocity and density values as vertex attributes (just like the render layers) and perform coloration and positioning based on them. How does this change with our new "RenderLayer" architecture?
Well, quite a lot. First off, we now have the ability to swap out different shaders for each renderLayer. Is this something we want to support? I think so, yes. Under the old architecture, a particle renderLayer, for example, would receive vertex attributes that are irrelevant to it. That's not what we want. This way, each render layer can receive its own coloration algorithm via its shaders, and that work runs on the GPU, which is good for performance.
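For a taste of what that customization looks like, here's roughly what a minimal fragment shader for a density layer might be, embedded as a Java string the way GLES 2.0 apps usually carry their GLSL (the varying name is made up):

```java
// Illustrative only: a bare-bones density coloration shader.
static final String DENSITY_FRAGMENT_SHADER =
      "precision mediump float;\n"
    + "varying float vDensity; // interpolated from the density vertex attribute\n"
    + "void main() {\n"
    + "    gl_FragColor = vec4(vec3(vDensity), 1.0); // grayscale by density\n"
    + "}\n";
```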
There are a few important problems... A shader is, for our purposes, simply a string of shader code. It may come from strings.xml at the Java level, or from a text file on the SD card. At some level these must be parsed and linked at initialization so that glUseProgram can bind them during rendering. How can we ensure that a shader intended for the particle layer is bound at the particle layer and not at the density layer? Binding it at the wrong layer would cause a problem, because the shader would be looking for vertex attributes which might not be passed to it. Should this be done at the Java level? My initial response is no, because of how low level this is, but the fact that a shader may be stored as a string in strings.xml is an important feature.
How about we use a comment or some sort of GLSL preprocessor command to specify which droidViz renderLayer a shader targets? If the renderStack attempts to use it at a different layer, we can throw an error before the program crashes spectacularly. Is this a good solution? I believe so.
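Something as dumb as a magic comment on the first line would do. The tag syntax below is entirely made up, but the check could look like this at whatever level does the parsing:

```java
// Made-up tag convention: the first line of a shader names its target layer,
// e.g. "//@layer: particles". Checked before the shader reaches a layer.
static String shaderTarget(String shaderSource) {
    String firstLine = shaderSource.split("\n", 2)[0].trim();
    if (firstLine.startsWith("//@layer:")) {
        return firstLine.substring("//@layer:".length()).trim();
    }
    return null; // untagged shader
}

static void checkShaderTarget(String shaderSource, String layerName) {
    String target = shaderTarget(shaderSource);
    if (target != null && !target.equals(layerName)) {
        throw new IllegalArgumentException(
            "Shader targets layer '" + target + "', not '" + layerName + "'");
    }
}
```

This brings me to another important question: how are render layers created such that we have sufficient control of the visualization at the Java level?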
RenderLayers must be created at the native level because GL ES 2.0 functions aren't currently exposed at the Java level. Plus, we can't easily instantiate renderLayer objects from the Java interface... Well, we could, but that's something I'm just not interested in developing. RenderLayers can be added to a project by dropping them into some sort of "RenderLayer" folder inside the jni folder of the droidViz project. Doing so allows for easy compilation: "ndk-build" automatically recurses through directories in the jni folder, so as long as an "Android.mk" file is included in the renderLayer folder along with the source, it'll be automatically compiled into the project! People can then develop custom render layers and pass them around like candy. All it takes to add a new render layer to your droidViz based visualization is a download, a copy-paste, and a recompile. NO CODE CHANGES NECESSARY. This is a good solution. We can create a preprocessor macro which registers the render layer in an array of available render layers which can be queried at the Java level.

In fact, shaders can be included as string constants in header files in the renderLayer folder, which ensures that only that shader is used for that render layer. Come to think of it, strings.xml is no place for a shader anyway. Storing shaders on the SD card is a no-no as well, because the user could accidentally delete the shader and break the program.
Of course, there is a good reason for loading a shader from the SD card. What happens when we get into a situation like milkdrop's, where presets are indexed at startup from files the user has access to? The end user will not have any way to recompile the executable from their Droid phone. It needs to be possible to create a renderLayer, or at least some sort of generic render layer, which loads a shader from the SD card... How about this... The author of the renderLayer has the ability to expose the shader at the Java level... gross. Never mind. See, the problem boils down to this: either compile the shaders into the program and only allow the developer to add them individually, or expose the shaders at the Java level and risk all sorts of headaches, but allow end users to create shaders and use them in droidViz, similar to milkdrop and its community.
Ideally, I'd like to support a community like milkdrop's. Is there any good way to load a shader in and link it for a renderLayer? If we could somehow tell that the shader implements the necessary vertex attributes and doesn't ask for ones we don't provide at that render layer, there might be. Alright. After a bit of research, it appears we can tell whether a shader has the uniforms we're going to request from it, and gracefully error out if it doesn't. This means we can do the required error checking.
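The check itself is cheap. The real version would live down in the native code; it's shown here with the Java GLES20 bindings purely for readability, and which names each layer requires is up to that layer's author:

```java
import android.opengl.GLES20;

// Illustration: after linking, ask GL whether the inputs we plan to feed the
// shader actually exist. -1 means "not found" (or optimized out as unused),
// so we can refuse the shader before it crashes the render layer.
static void checkProgramInputs(int program, String[] attribs, String[] uniforms) {
    for (String name : attribs) {
        if (GLES20.glGetAttribLocation(program, name) == -1) {
            throw new RuntimeException("shader missing attribute: " + name);
        }
    }
    for (String name : uniforms) {
        if (GLES20.glGetUniformLocation(program, name) == -1) {
            throw new RuntimeException("shader missing uniform: " + name);
        }
    }
}
```

How do we expose this at the Java level, though...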
Perhaps we have a "GetRenderLayer" function which returns a handle to a renderLayer already instantiated in the renderStack. This handle can be used with a renderLayer member method called "SetShader". The SetShader function is implemented by the renderLayer's author and can either accept the input gracefully or report that it doesn't accept other shaders. I think this is a valid solution.
So, at the Java layer, the steps to create a droidViz based visualization are (there's a code sketch of this just after the list):
- Query the available render layer types using "GetRenderLayerTypes". This returns an array of indices and names of those render layers as strings.
- Call CreateRenderLayer( layerTypeIndex ) with the index of the type of render layer you would like to create, as given to you by "GetRenderLayerTypes". This returns a renderLayerHandle.
- Perhaps call the SetRenderLayerShader( int idx, string VProgram, string FProgram ) before adding the layer to the renderstack.
- Push the renderLayer to the stack by calling PushBackRenderStack( idx ) which pushes the render layer onto the back of the stack, or PushFrontRenderStack( idx ) which pushes the render layer onto the front of the stack. Find, Insert, and Delete could also be implemented...
- Start droidViz and it will lock and start rendering using the current renderStack.
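In code, assuming every one of these provisional names survives (none of this exists yet), the Java side might look like:

```java
public class DroidVizApi {
    static { System.loadLibrary("droidviz"); } // hypothetical library name

    public static native String[] GetRenderLayerTypes();
    public static native int CreateRenderLayer(int layerTypeIndex);
    public static native void SetRenderLayerShader(int handle, String vProgram, String fProgram);
    public static native void PushBackRenderStack(int handle);

    // The list above, in order:
    static void buildVisualization(String vsrc, String fsrc) {
        String[] types = GetRenderLayerTypes();   // step 1: e.g. { "density", "velocity" }
        int handle = CreateRenderLayer(0);        // step 2: first available type
        SetRenderLayerShader(handle, vsrc, fsrc); // step 3 (optional)
        PushBackRenderStack(handle);              // step 4
        // step 5: start droidViz; it locks the stack and begins rendering
    }
}
```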
Using this architecture, the shader can be changed on the fly by pausing the simulation, removing any render layers you no longer need, inserting any new render layers, and resuming the simulation.
Just thought of another problem... How do users interact with render layers? Let's say I have a particle render layer. How do we tell it to create a particle at a certain position? How can we make this interface generic enough that it applies to more than just particles?
How about this: we create a "SendMessage( idx, string )" function at the Java level which sends a string to the render layer corresponding to that index. In fact, we could probably even send "change shader" commands this way. We would ensure that the message is delivered to the renderLayer by the next frame. The renderLayer can do whatever it wants with this input (even nothing at all). We could also have a "ReceiveMessage( idx, string )" implemented at the Java level which allows a user to pick up a message sent from the renderLayer. Perhaps the renderLayer knows something we need to know, like the number of particles; we can just call "ReceiveMessage" and pick it up.
The only problem with the messaging system is that the GL renderer operates in a separate thread from the main Java thread. Thus, passing messages becomes a problem of mutex locking and buffering... I'd prefer not to buffer messages. If a renderLayer needs a response before it can send more data, so be it. I'll only store one message; if the renderLayer tries to send more before the main thread has picked up the last one, it'll just overwrite the last message.
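For what it's worth, the single-slot version is tiny. A sketch, assuming the messaging system survives at all - an AtomicReference stands in for the mutex here:

```java
import java.util.concurrent.atomic.AtomicReference;

// One message slot shared between the GL thread and the main Java thread.
// Newer messages clobber unread ones, exactly as described above.
class MessageSlot {
    private final AtomicReference<String> slot = new AtomicReference<String>();

    void send(String message) {      // called from the GL thread
        slot.set(message);
    }

    String receive() {               // called from the main thread
        return slot.getAndSet(null); // null means "nothing new"
    }
}
```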
This is starting to get a bit complex... Let's evaluate what we have so far.
Currently, we have a renderStack containing renderLayers. These renderLayers are created and put into the stack at the Java level. RenderLayers are all passed fluid simulation data such as density and velocity, but cannot change fluid simulation values. RenderLayer types are added to the droidViz engine at jni compile time. It is possible to create a renderLayer which can receive messages from, and send messages to, the Java layer using some sort of non-buffered, mutex-locked message system. RenderLayers will be used to render anything relating to the fluid simulation. If a visual can be rendered without velocity or density information, it can be drawn at the Java level using the same GL context we give droidViz.
Shortcomings: the current renderLayer system lets users send messages to renderLayers, which would allow rendering through droidViz without using the fluid data at all. This is outside the scope of droidViz.
Solution? Take out the messaging system. It never really pleased me in the first place. When we do this, we effectively say that anything needing more than the fluid simulation data to render should not be implemented as a renderLayer. With the removal of the SendMessage function, we get the addition of a LoadShader( idx ) function. Perhaps this is a good solution.
Perhaps we expose the fluid simulation values at the Java level and let users render what they want with them there. Hell, this even enables people to completely subvert the entire render layer process and just use droidViz as a fluid solver. Will people want to do that? Should the fluid solver be abstracted out and removed from droidViz as a whole? I like to keep options open to programmers, but this one I'll stick by: if they really want just the fluid solver, they can modify droidViz and remove all the rendering code. With the new modular architecture, it should be easy enough.
So how do we render particles? Theoretically, we could just create them at random positions every so often and have them die off after a certain amount of time. You might say, "But I want particles to be created where I touch the screen!", to which I respond: if your visual requires more information than what the fluid simulation provides, it is not within the scope of a renderLayer.
'Course, with an answer like that, and without exposing fluid simulation values at the Java level, it's impossible to create particles which react to the fluid based on touch input... Perhaps there's a better way. If you can think of one, post in the comments!
This brings us to the next point.
Have a full Java level interface which allows for control of simulation constants
Well, this seems easy enough. With this architecture, we can create a few JNI level functions such as FluidSimSetVisc( float ) or FluidSimSetDiff( float ). We can even create functions like FluidSimCreateDensity( float x, float y ) and FluidSimCreateVelocity( float x, float y, float dx, float dy ).
One important point I want to get across is that the library will have NO idea of any sort of cursor or XY touch input. It will only be told, via FluidSimCreateVelocity or FluidSimCreateDensity, which vertex to create the velocity/density at. At no point shall the library ever know what a cursor is, or whether the user has fingers. Anything feeding input to the fluid simulation must go through these functions. We can use the onTouchEvent method to get touch information and then translate it into input the fluid simulation will understand, but the point is to keep the input generic.
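As a sketch, here's what that translation layer might look like, assuming it lives in the GLSurfaceView descendant. DroidVizView, GRID_N, and the native call are all made-up names:

```java
import android.content.Context;
import android.opengl.GLSurfaceView;
import android.view.MotionEvent;

// Hypothetical: the view translates raw touch input into generic grid-space
// calls, so the library never learns what a finger is.
public class DroidVizView extends GLSurfaceView {
    private static final int GRID_N = 64; // assumed solver grid resolution

    public DroidVizView(Context context) { super(context); }

    private static native void FluidSimCreateDensity(float x, float y);

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        for (int i = 0; i < event.getPointerCount(); i++) {  // multitouch-aware
            float gx = event.getX(i) / getWidth()  * GRID_N; // screen -> grid
            float gy = event.getY(i) / getHeight() * GRID_N;
            FluidSimCreateDensity(gx, gy);
            // a velocity version would diff against the previous position
            // to get dx/dy before calling FluidSimCreateVelocity
        }
        return true;
    }
}
```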
These two (density and velocity) functions will be the only way to interact with the fluid simulation.
Next!
Allow for rendering before and after droidViz at both the Java and Native levels.
This is actually pretty easy now with the architecture we've derived so far. We can simply create a Java "interface" which is called whenever we draw a frame. droidViz will be implemented as a GLSurfaceView.Renderer, which means it has three functions it must override:
- onSurfaceCreated
- onSurfaceChanged
- onDrawFrame
Currently, droidVizRenderer just calls a native function in each of these callbacks. If we need to render before or after, we just inherit from droidVizRenderer in our own class and put draw calls before or after the super.onDrawFrame( gl ) call. This enables rendering before and after at the Java level. Rendering at the native level is taken care of by the new renderLayer interface.
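So a visualization that wants its own drawing around droidViz's would look something like this (droidVizRenderer's exact shape is assumed):

```java
import javax.microedition.khronos.opengles.GL10;

public class MyRenderer extends droidVizRenderer {
    @Override
    public void onDrawFrame(GL10 gl) {
        drawBackground();      // appears underneath the fluid
        super.onDrawFrame(gl); // droidViz runs its render stack
        drawOverlay();         // appears on top of the fluid
    }

    private void drawBackground() { /* your GL calls here */ }
    private void drawOverlay()    { /* your GL calls here */ }
}
```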
So, in order to use droidViz, we instantiate a droidViz object (which will still be a GLSurfaceView descendant that creates an OpenGL ES 2.0 context), instantiate a descendant of droidVizRenderer, set up the renderer's renderStack with all sorts of renderLayers, feed the renderer to droidViz, then set droidViz as our main content view. It's that easy! As mentioned before, render layers can be added at compile time extremely easily, and we have full control over the whole shebang at the native and Java levels.
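Inside an Activity's onCreate, the wiring might look like this (again, every droidViz-specific name is provisional):

```java
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    DroidVizView droidViz = new DroidVizView(this); // GLSurfaceView descendant
    droidViz.setEGLContextClientVersion(2);         // OpenGL ES 2.0 context

    MyRenderer renderer = new MyRenderer();
    // ...build the renderStack with renderLayers here (see the steps above)...

    droidViz.setRenderer(renderer);
    setContentView(droidViz);
}
```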
Next bullet.
Make droidViz have as small a footprint as possible
Optimizations aside, we have an important task here: droidViz must use less than 50% of the CPU. That number was arrived at arbitrarily, but for what droidViz does, I think it's a fair budget. Currently, droidViz uses 100% of the CPU. The reason is simple.
There is a dt variable set in the solver code. Essentially, every drawn frame steps the simulation by whatever value is in dt; if it's 0.001, we step forward by one millisecond each frame. Thing is, we're seeing around 30 frames per second... We should pass the actual elapsed dt into the fluid solver each frame so that it can accurately simulate the fluid in real time. And as long as we lock the framerate at 30 FPS (again arbitrary, but a good spec), we can use less than 100% of the CPU.
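A sketch of the fix, assuming this lives in the renderer and the native entry point grows a dt parameter (FluidSimStep is a made-up name); the frame cap here is the crudest possible sleep-based one:

```java
private static native void FluidSimStep(float dt); // proposed: solver takes real dt

private long lastFrameNanos = 0;

@Override
public void onDrawFrame(GL10 gl) {
    long now = System.nanoTime();
    if (lastFrameNanos != 0) {
        float dt = (now - lastFrameNanos) / 1.0e9f; // seconds since last frame
        FluidSimStep(dt); // simulate exactly the wall-clock time that passed
    }
    lastFrameNanos = now;

    // crude 30 FPS cap: sleep off the rest of the ~33 ms frame budget
    long spentMs = (System.nanoTime() - now) / 1000000;
    if (spentMs < 33) {
        try { Thread.sleep(33 - spentMs); } catch (InterruptedException ignored) {}
    }
}
```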
There's also the possibility that all of this abstraction and layering will slow everything down. I don't know how considerable the slowdown will be, but it's something we'll have to evaluate once the new structure is in place.
Optimization doesn't work the same on an embedded device. All of a sudden, floating point operations become the enemy. We can't just throw everything on the GPU either, because the GPU will quickly become overwhelmed and throw the workload back onto the CPU. Optimizations come further down in the development process; the worst thing you can do in software development is optimize too soon, because the code becomes unreadable and un-debuggable. As such, we'll discuss this part when we get to it.
So, what have we arrived at?
Well, it seems we've derived an acceptable architecture for droidViz given what we know now about the way the APIs work. This may change several times throughout development, but we have a good set of ideas to work from. The next logical step would be a class diagram, but I'm tired from all the thinking, so I'll lay off for a bit.
There is one remaining question with the architecture we've set up: how to deal with renderLayers and user input. How would a user tell a renderLayer to render something given their touch input? The more I think about it, the more it may be a good idea to give droidViz some sort of XY input awareness... The translation of XY input to a vertex is impossible at the Java layer due to the encapsulation of the rendering code... Maybe I'll have to revise what I said in the Java level control section.
Well, I've been thinking for a while now, it's late, and this post is epic enough. I've arrived at several good conclusions and I hope you agree. If there's anything you'd like to add, just write it in the comments. I'll start on the formal architecture change tomorrow.
Peace out.
-Griff