Friday, July 30, 2010

Update

It's been a while since the last blog post because we've been working hard on the framework redesign since restructuring the project.

Our goals are documented in detail on the Google Code wiki. We are aiming to develop a framework that allows easy access to the NDK while giving developers the flexibility to incorporate it in many new and exciting ways.

The code itself hasn't been updated yet because we are constantly redesigning the framework to keep the scope clear yet extensible. Right now, we are working towards an engine that generates "events" (visuals) when driven from the Java level.

We should add that, after many months of researching and programming with the NDK, we can confidently say that documentation on Android, OpenGL ES 2.0, and the NDK/JNI beast is sparse at best. Thanks to being able to work on this project through RCOS, our code and documentation on these topics form an easily searchable project with usable code, examples, documentation, and an overall picture of how this large-scale integration of technologies is done. To date, we have found very little help on such integration (especially with Android).

We'll be working hard to get the framework complete by summer's end.

Wednesday, July 14, 2010

Patent Update

So the current status of DroidViz is as follows.

The algorithm in solver.c and solver.h is patented by Autodesk, with Jos Stam as the inventor. We have never licensed solver.c and solver.h, though the files credit Jos Stam in their opening comments. We are going to cease distributing this code with DroidViz.

Until we have confirmation from Autodesk that solver.c and its patented algorithm can be distributed, the source will be removed from the SVN. Until the DroidViz API rewrite is complete, any interested parties will need to implement their own fluid solver in the framework.

Sunday, July 4, 2010

Finalizing New Spec

Hey guys,

I've posted a (very) rough draft of the new architecture I talked about in yesterday's post. I think it's getting close, but there are still a few shaky areas regarding input that need some thought yet. I've decided to backtrack on what I said yesterday about the library having no idea what input is. It turns out that it's not only very useful, but performing the Screenspace->Fluidspace calculations at the Java level is inefficient and messy. Hopefully the architecture can handle this small change gracefully!

The architecture can be found on the Google Code wiki, linked here:


Peace,

-Griff

Saturday, July 3, 2010

Redesigning droidViz

So, if you read my previous post, you'll be thinking: "Griff, how does any of this apply to droidViz? How are you going to redesign it?" To be honest, I'm not sure. Let's find out! This post may be pretty non-linear, as it's mostly a stream of consciousness. Bear with me!

What does droidViz have to do?

At its root, droidViz performs a fluid simulation and renders it using OpenGL ES 2.0. Great. What else does droidViz have to do? Well, we need to vary the visualization somewhat: we need the ability to create custom visuals with a high degree of customization. droidViz needs to be implemented in the NDK in order to get access to high-performance visuals and processing. droidViz needs to allow its simulation parameters to be changed at any time while running, and visuals must be switchable on the fly as well. Developers will not want to work at the native level, so droidViz must have a simple and intuitive Java interface which allows OpenGL-based rendering before or after droidViz renders. We need the ability to create changes in density and velocity from the Java level, based on one or multiple multitouch inputs. Most importantly, droidViz must consume as little CPU and memory as possible, so that it can be dropped into existing applications.

Wow. That was a lot. Let's make a bulleted list for easy reference:
  • Perform a fluid simulation and render it
  • Create custom visuals with a high degree of customization
  • Have a full Java level interface which allows for control of simulation constants
  • Allow for rendering before and after droidViz at both the Java and Native levels. (That'll be hard)
  • Make droidViz have as small a footprint as possible.
Well, droidViz currently does the first bullet pretty well. Unfortunately, it doesn't do ANY of the others yet. In its current state, it's nothing more than a tech demo. This is something we want to change.

How can we create an OOP framework that supports all of these required specifications? Let's start with the first bullet in mind and work toward the final architecture.

Perform a fluid simulation and render it

Currently, fluid.c deals with creating the VBOs, updating them frame by frame, and rendering them using calls to GL. This is all stuff we'd like to encapsulate in an extensible OOP framework. solver.c provides the optimized algorithmic backend (thanks to the legendary Jos Stam!). Two levels of rendering currently take place: we can render the velocity field and the density field. Hell, we could even render other stuff too (temperature field? particles?). What if we want to render multiple simulations on the same screen with the same context? Is this something we want to support? Is this something which is even really feasible? I don't think so. I can't think of a good reason for it. We'll say no to this for now.

Should the developer have control over which layers render? Yes. Should the developer be able to create new render layers? Yes. Should he be able to create a new render layer from the Java level? I can't think of a good reason for this. We'll say no.

Thinking about it this way, it seems to make sense to create a framework of render layers, each rendering something unique, like the density or velocity fields. Let's take a step back, though... Both the density and velocity fields are fundamental to the fluid simulation - they're both required for the simulation to work and are implemented inseparably in solver.c. Further render layers will be derived from the values in the density and velocity layers: temperature would be mostly derived from the density values, and any sort of future particle engine would rely only on the velocity values...

Perhaps we create a render stack which has, at its base level, the current droidViz rendering with switches to turn the density and velocity visualizations on and off. A generic render layer interface could be created which passes in velocity and density information and provides a "draw" function, allowing us to perform calculations each frame with updated simulation variables. How does this differ from the pre/post rendering discussed later? Well, these render layers rely on droidViz's simulation results, whereas drawing which doesn't need those values can be done elsewhere.

So, currently, I have a render stack in mind with a base fluid layer and switchable basic visualizations of velocity and density. OOH, wait. What if the base is just an "invisible fluid simulation", and velocity and density are implemented as render layers and put into the render stack just like any other render layer? OOO, I like this. It also lets us separate the fluid solver and update code from the actual render code. This works out to our advantage.

How is this implemented? We'll have to create two objects: a render stack and a fluidSim object. Each frame, fluidSim generates density and velocity arrays and feeds them to the renderStack, which then performs the rendering sequentially - similar to the processing chain I described earlier. This allows for expansion and modularity. I LOVE IT.
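To make that concrete, here's a rough sketch of the per-frame flow. I'm writing it in Java for readability, but the real thing will live at the native level, and every name here is a placeholder, not final API:

    // Hypothetical sketch of the per-frame flow; all names are placeholders.
    interface RenderLayer {
        // Called once per frame with the latest simulation results.
        void draw(float[] density, float[] velocityU, float[] velocityV);
    }

    class RenderStack {
        private final java.util.List<RenderLayer> layers =
                new java.util.ArrayList<RenderLayer>();

        void pushBack(RenderLayer layer) { layers.add(layer); }

        // Render each layer in order, feeding all of them the same frame data.
        void render(float[] density, float[] velocityU, float[] velocityV) {
            for (RenderLayer layer : layers) {
                layer.draw(density, velocityU, velocityV);
            }
        }
    }

    class FluidSim {
        void step(float dt, RenderStack stack) {
            float[] density = new float[0];              // placeholder: solver output
            float[] u = new float[0], v = new float[0];  // placeholder: velocity fields
            // ... advance the solver by dt seconds here ...
            stack.render(density, u, v);
        }
    }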

Is there anything currently limiting or infeasible about this implementation? I can't think of anything. Let's move on to the next bullet.

Create custom visuals with a high degree of customization

Well, the obvious answer here - and what we've kind of been working on already - is to have the final visualization controlled by OpenGL ES 2.0 shaders (GLSL ES). The spec is only a year or two old, so the developer is going to have some learning to do, but this allows for the HIGHEST degree of customization we can afford the developer. How is this implemented?

Initially, we were planning on one vertex/fragment shader pair for the whole shebang. It would receive velocity and density values as vertex attributes (just like the render layers) and perform coloration and positioning based on those values. How does this change with our new "RenderLayer" architecture?

Well, quite a lot. First off, we now have the ability to swap in different shaders for each renderLayer. Is this something we want to support? I think so, yes. Say, for example, we have a particle renderLayer which, under the old architecture, would be receiving irrelevant vertex attributes. That's not what we want. This way, each render layer can receive its own coloration algorithm via shaders, and it all runs on the GPU, which is good for performance.

There are a few important problems... A shader, for our purposes, is simply a string of shader code. It may come from strings.xml at the Java level, or from a text file on the SD card. At some level these must be compiled and linked at initialization so that glUseProgram can bind them during rendering. How can we ensure that a shader intended for the particle layer is bound at the particle layer and not at the density layer? Binding it at the wrong layer would cause a problem, because the shader would be looking for vertex attributes which might not be passed to it. Should this be done at the Java level? My initial response is no - because of how low-level this is - but the fact that a shader may be stored as a string in strings.xml is an important consideration.

How about we use a comment or some sort of GLSL preprocessor command to specify which droidViz renderLayer a shader targets? If the renderStack attempts to use it at a different layer, we can throw an error before the program crashes spectacularly. Is this a good solution? I believe so.
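Something like this, maybe (just a sketch - the tag format and the parsing are completely up for grabs):

    // Hypothetical: scan a shader source string for a layer tag like
    //   //#droidviz target:particles
    // and refuse to bind the shader anywhere else.
    static String getTargetLayer(String shaderSource) {
        for (String line : shaderSource.split("\n")) {
            line = line.trim();
            if (line.startsWith("//#droidviz target:")) {
                return line.substring("//#droidviz target:".length()).trim();
            }
        }
        return null; // untagged: refuse to bind, or treat as "fits anywhere"
    }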

This brings me to another important question: how are render layers created such that we have sufficient control of the visualization at the Java level?

RenderLayers must be created at the native level, because GL ES 2.0 functions aren't currently exposed at the Java level. Plus, we can't easily instantiate renderLayer objects from the Java interface... Well, we could, but that's just not something I'm interested in developing. RenderLayers can be added to the project by dropping them into some sort of "RenderLayer" folder inside the jni folder of the droidViz project. Doing so allows for easy compilation: "ndk-build" automatically recurses through the directories in the jni folder, so as long as an "Android.mk" file is included in the renderLayer folder along with the source, it'll be automatically compiled into the project! People can then develop custom render layers and pass them around like candy. Adding a new render layer to your droidViz-based visualization requires nothing more than a download, a copy-paste, and a recompile. NO CODE CHANGES NECESSARY. This is a good solution. We can create a preprocessor macro which adds each render layer to an array of available render layers that can be queried from the Java level. In fact, shaders can be included as header files in the renderLayer folder, which ensures that only that shader is used for that render layer. Come to think of it, strings.xml is no place for a shader anyway, and storing shaders on the SD card is a no-no as well, because the user could accidentally delete the shader and break the program.

Of course, there is a good reason for loading a shader from the SD card. What happens when we get into a situation like milkdrop's, where milkdrop indexes presets at startup from a folder the user has access to? The end user has no way to recompile the executable on their phone. It needs to be possible to create a renderLayer - or at least some sort of generic render layer - which loads a shader from the SD card... How about this: the author of the renderLayer has the ability to expose the shader at the Java level... gross. Never mind. See, the problem boils down to this: compile the shaders into the program and only allow the developer to add them individually, or expose the shaders at the Java level and risk all sorts of headaches, but allow end users to create shaders and use them in droidViz, similar to milkdrop and milkdrop's community.

Ideally, I'd like to support a community like milkdrop's. Is there any good way to load a shader in and link it for a renderLayer? If we could somehow tell that the shader implements the necessary vertex attributes and doesn't ask for ones we don't provide at that render layer, there may be a good way to do this. Alright - after a bit of research, it appears that we can query whether a linked shader program has the attributes and uniforms we're going to request from it. If it doesn't have them, we can gracefully error out. This means we can do the required error checking.
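Concretely, the check could be as simple as this (a sketch using the Java-side GLES20 bindings for illustration; the real check would happen at the native level):

    import android.opengl.GLES20;

    // Sketch: after linking, verify the program exposes every attribute
    // and uniform this render layer intends to feed it.
    class ShaderValidator {
        static boolean validateProgram(int program, String[] attribs, String[] uniforms) {
            for (String name : attribs) {
                if (GLES20.glGetAttribLocation(program, name) == -1) {
                    return false; // shader doesn't declare (or actually use) this attribute
                }
            }
            for (String name : uniforms) {
                if (GLES20.glGetUniformLocation(program, name) == -1) {
                    return false;
                }
            }
            return true; // safe to bind at this layer
        }
    }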

How do we expose this at the Java level, though? Perhaps we have a "GetRenderLayer" function which returns a handle to a renderLayer already instantiated in the renderStack. That handle can be used with a renderLayer member method called "SetShader". The SetShader function is implemented by the renderLayer's author and can either accept the input gracefully or report that the layer doesn't accept other shaders. I think this is a valid solution.
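In Java terms, that might look something like this (provisional signatures, nothing final):

    // Hypothetical JNI bindings; all names and signatures are provisional.
    class DroidViz {
        // Returns a handle to a renderLayer already sitting in the renderStack.
        public native int GetRenderLayer(int stackPosition);

        // Backed by the renderLayer author's native implementation;
        // returns false if this layer doesn't accept replacement shaders.
        public native boolean SetShader(int layerHandle,
                                        String vertexSource,
                                        String fragmentSource);
    }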

So, at the Java layer, the steps to create a droidViz-based visualization are as follows (see the sketch after this list):
  • Query the available render layer types using "GetRenderLayerTypes". This returns an array of indices and names of those render layers as strings.
  • Call CreateRenderLayer( layerTypeIndex ) with the index of the render layer type you'd like to create, as given to you by "GetRenderLayerTypes". This returns a renderLayerHandle.
  • Optionally, call SetRenderLayerShader( int idx, string VProgram, string FProgram ) before adding the layer to the renderStack.
  • Push the renderLayer to the stack by calling PushBackRenderStack( idx ) which pushes the render layer onto the back of the stack, or PushFrontRenderStack( idx ) which pushes the render layer onto the front of the stack. Find, Insert, and Delete could also be implemented...
  • Start droidViz and it will lock and start rendering using the current renderStack.
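In code, that whole sequence might look like this (again, a sketch - none of these signatures are final):

    // Hypothetical usage of the proposed Java API; nothing here is final.
    DroidViz viz = new DroidViz();

    // e.g. { "density", "velocity", "particles" }
    String[] layerTypes = viz.GetRenderLayerTypes();

    int densityIdx  = viz.CreateRenderLayer(0); // index into layerTypes
    int particleIdx = viz.CreateRenderLayer(2);

    // Optional: hand the particle layer a custom shader pair.
    String vProgram = "...", fProgram = "..."; // shader sources from wherever
    viz.SetRenderLayerShader(particleIdx, vProgram, fProgram);

    viz.PushBackRenderStack(densityIdx);
    viz.PushBackRenderStack(particleIdx);

    viz.Start(); // locks the stack and starts rendering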
Using this architecture, the visuals can be changed on the fly by pausing the simulation, removing any current render layers you no longer need, inserting new render layers, and resuming the simulation.

Just thought of another problem... How do users interact with render layers? Let's say I have a particle render layer. How do we tell it to create a particle at a certain position? And how can we make this interface generic enough that it applies to more than just particles?

How about this: we create a "SendMessage( idx, string )" function at the Java level which sends a string to the render layer corresponding to that index. In fact, we could probably even send "change shader" commands this way. We would ensure that the message is delivered to the renderLayer by the next frame, and the renderLayer can do whatever it wants with the input (even nothing at all). We could even have a "ReceiveMessage( idx, string )" implemented at the Java level which lets a user pick up a message sent from the renderLayer. Perhaps the renderLayer knows something we need to know, like the number of live particles - we just call "ReceiveMessage" and pick it up.

The only problem with the messaging system is that the GL renderer operates in a separate thread from the main Java thread, so passing messages becomes a problem of mutex locking and buffering... I'd prefer not to buffer messages. If a renderLayer needs a response before it can send more data, so be it - I'll only store one message. If the renderLayer tries to send another before the main thread has picked up the last one, it'll just overwrite the last message.
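A minimal sketch of that single-slot mailbox in Java (the real one would straddle the JNI fence, but the locking idea is the same):

    // Hypothetical single-slot mailbox: new messages overwrite unread ones.
    class MessageSlot {
        private String message;

        synchronized void send(String msg) {
            message = msg; // overwrite whatever wasn't picked up yet
        }

        synchronized String receive() {
            String msg = message;
            message = null;
            return msg; // null if nothing was waiting
        }
    }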

This is starting to get a bit complex... Let's evaluate what we have so far.

Currently, we have a renderStack containing renderLayers. These renderLayers are created and put into the stack from the Java level. RenderLayers are all passed fluid simulation data such as density and velocity; renderLayers cannot change fluid simulation values. RenderLayer types are added to the droidViz engine at JNI compile time. It is possible to create a renderLayer which can send and receive messages to and from the Java layer using some sort of non-buffered, mutex-locked message system. RenderLayers will be used to render anything relating to the fluid simulation; if a visual can be rendered without velocity or density information, it can be drawn at the Java level using the same GL context we give droidViz.

Shortcomings: the current renderLayer system allows users to send messages to renderLayers, which would allow for rendering through droidViz without using the fluid data at all. That is out of droidViz's scope.

Solution? Take out the messaging system. It never really pleased me in the first place. When we do this, we effectively say, "Anything that needs more than the fluid simulation data to render should not be implemented as a renderLayer." With the removal of the SendMessage function, we add a LoadShader( idx ) function instead. Perhaps this is a good solution.

Perhaps we expose the fluid simulation values at the Java level and let users render what they want with them there. Hell, this even enables people to completely subvert the entire render layer process and just use droidViz as a fluid solver. Perhaps people will want to do that? Should the fluid solver be abstracted out and removed from droidViz as a whole? I like to keep options open for programmers, but this is one decision I'll stick by: if they really want just the fluid solver, they can modify droidViz and remove all the rendering code. With the new modular architecture, it should be easy enough.

So how do we render particles? Theoretically, we could just create them at random positions every so often and have them die off after a certain amount of time. You might say, "But I want particles to be created where I touch the screen!", to which I respond: if your visual requires more information than what the fluid simulation provides as data, it is not within the scope of a renderLayer.

'Course, with an answer like that, and without exposing fluid simulation values at the Java level, it's impossible to create particles which react to the fluid based solely on touch input... Perhaps there's a better way. If you can think of one, post in the comments!

This brings us to the next point.

Have a full Java level interface which allows for control of simulation constants

Well, this seems easy enough. With this architecture, we can create a few JNI-level functions such as FluidSimSetVisc( float ) or FluidSimSetDiff( float ). We can even create functions like FluidSimCreateDensity( float x, float y ) and FluidSimCreateVelocity( float x, float y, float dx, float dy ).

One important point I want to get across is that the library will have NO idea of any sort of cursor or XY touch input. It will only see calls to FluidSimCreateVelocity or FluidSimCreateDensity with data about which vertex to create the velocity/density at. At no point shall the library ever know what a cursor is, or whether the user has fingers. Whatever drives the fluid simulation must call these functions. We can use the onTouchEvent method to get touch information and then translate it into input the fluid simulation will understand - my point is to keep the input generic.
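For example, inside our view class (a sketch - the native calls are the proposed ones above, and the [0,1] normalization is just illustrative):

    // Hypothetical: translate touches into generic fluid-space input.
    private native void FluidSimCreateDensity(float x, float y);
    private native void FluidSimCreateVelocity(float x, float y, float dx, float dy);

    private float lastX, lastY;

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // Normalize screen pixels to fluid-space coordinates;
        // the library never learns what a "touch" or a pixel is.
        float x = event.getX() / getWidth();
        float y = event.getY() / getHeight();

        if (event.getAction() == MotionEvent.ACTION_MOVE) {
            FluidSimCreateDensity(x, y);
            FluidSimCreateVelocity(x, y, x - lastX, y - lastY);
        }
        lastX = x;
        lastY = y;
        return true;
    }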

These two (density and velocity) functions will be the only way to interact with the fluid simulation.

Next!

Allow for rendering before and after droidViz at both the Java and Native levels.

This is actually pretty easy now with the architecture we've derived so far. We can simply create a Java "interface" which is called whenever we draw a frame. droidViz will be implemented as a GLSurfaceView.Renderer, which means it has three functions it must override -
  • onSurfaceCreated
  • onSurfaceChanged
  • onDrawFrame
Currently, droidVizRenderer just calls a native function in each of these callbacks. If we need to render before or after, we just inherit from droidVizRenderer in our own class and put draw calls before or after the "super.onDrawFrame( gl );" call. This enables rendering before and after at the Java level; rendering at the native level is taken care of with the new renderLayer interface.
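For example (MyRenderer and its draw helpers are placeholders):

    import javax.microedition.khronos.opengles.GL10;

    // Hypothetical subclass that renders before and after droidViz.
    class MyRenderer extends droidVizRenderer {
        @Override
        public void onDrawFrame(GL10 gl) {
            drawBackground(gl);    // placeholder: your pre-droidViz drawing
            super.onDrawFrame(gl); // droidViz renders its renderStack
            drawOverlay(gl);       // placeholder: your post-droidViz drawing
        }

        private void drawBackground(GL10 gl) { /* ... */ }
        private void drawOverlay(GL10 gl)    { /* ... */ }
    }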

So, in order to use droidViz, we instantiate a droidViz object (which will still be a GLSurfaceView descendant that creates an OpenGL ES 2.0 context), instantiate a descendant of droidVizRenderer, set up the renderer's renderStack with all sorts of renderLayers, feed the renderer to droidViz, then set the droidViz view as our main content view. It's that easy! As mentioned before, render layers can be added at compile time extremely easily, and we have full control over the whole shebang at both the native and Java levels!
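Put together, startup might look like this (a sketch - droidVizView is the GLSurfaceView descendant mentioned above, MyRenderer our renderer subclass, both placeholder names):

    import android.app.Activity;
    import android.os.Bundle;

    // Hypothetical activity wiring; nothing here is final API.
    public class VizActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);

            droidVizView viz = new droidVizView(this); // creates the ES 2.0 context
            MyRenderer renderer = new MyRenderer();
            // ... build the renderStack here, as in the earlier sketch ...
            viz.setRenderer(renderer);

            setContentView(viz); // droidViz becomes the main content view
        }
    }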

Next bullet.

Make droidViz have as small a footprint as possible

Optimizations aside, we have an important task here. droidViz must use less than 50% of the CPU. That number was arrived at arbitrarily, but for what droidViz does, I think it's a fair budget. Currently, droidViz uses 100% of the CPU, and the reason is simple.

There is a dt variable set in the solver code. Essentially, every draw frame, it steps the simulation by whatever value is in dt. If it's 0.001, we step forward by one millisecond each frame. The thing is, we're seeing around 30 frames per second... We should pass the real elapsed dt to the fluid solver each frame so that it can accurately simulate the fluid in real time. As long as we lock the framerate at 30 FPS (again arbitrary, but a good spec), we can use less than 100% CPU.
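Something along these lines (a sketch - nativeStep is a placeholder for whatever the solver entry point ends up being called):

    // Hypothetical frame pacing: measure the real dt, cap at ~30 FPS.
    private long lastFrameNanos = 0;

    private native void nativeStep(float dt); // placeholder solver call

    public void onDrawFrame(GL10 gl) {
        long now = System.nanoTime();
        float dt = (lastFrameNanos == 0) ? (1.0f / 30.0f)
                                         : (now - lastFrameNanos) * 1e-9f;
        lastFrameNanos = now;

        nativeStep(dt); // advance the solver by real elapsed time

        // Crude 30 FPS cap: sleep off whatever is left of the ~33 ms budget.
        long elapsedMs = (System.nanoTime() - now) / 1000000;
        if (elapsedMs < 33) {
            try { Thread.sleep(33 - elapsedMs); } catch (InterruptedException e) { }
        }
    }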

There's also the possibility that all of this abstraction and layering will slow everything down. I don't know how considerable the slowdown will be, but it's something we'll have to evaluate once the new structure is in place.

Optimization doesn't work the same way on an embedded device. All of a sudden, floating point operations become the enemy. We can't just throw everything at the GPU either, because the GPU will quickly become overwhelmed and throw the workload back onto the CPU. Optimization comes further down in the development process; the worst thing you can do in software development is optimize too soon, because code becomes unreadable and un-debuggable. As such, we'll discuss this part when we get to it.

So, what have we arrived at?

Well, it seems we've derived an acceptable architecture for droidViz given what we know about how the APIs work. This could change - the architecture may shift several times throughout development - but we have a good set of ideas to work from. The next logical step would be a class diagram, but I'm tired from all the thinking, so I'll lay off for a bit.

There is one remaining question with the architecture we've set up: how to deal with renderLayers and user input. How would a user tell a renderLayer to render something given their touch input? The more I think about it, the more it may make sense to give droidViz some sort of XY input awareness... The translation of XY input to a vertex is impossible at the Java layer due to the encapsulation of the rendering code... Maybe I'll have to revise what I said in the Java-level control section.

Well, I've been thinking for a while now; it's late, and this post is epic enough as it is. I've arrived at several good conclusions and I hope you agree. If there's anything you'd like to add, just write it in the comments. I'll start on the formal architecture change tomorrow.

Peace out.

-Griff

Software Design

Hey guys,

So, if you've taken a look at droidViz lately, it's a bit of a mess. "fluid.c" is all over the place, with functions for creating and rendering particles as well as allocating the density and velocity arrays... The file is too long and difficult to dive into.

I thought I'd do some thinking out loud "in blog form" and talk about good and extensible software design for a bit.

Currently, droidViz is written in straight C. This code isn't object oriented, but it doesn't really have to be - all it has to be is organized and easy to read. Unfortunately, I've not had much practice with non-object-oriented languages and software, so most of my experience is with OOP. I'll be beginning a transition from the current droidViz.c to C++. (Yeah, I know C++ isn't TRUE OOP, but that's really just nitpicking...)

So, what goes through my head when designing a software framework? Well, as far as I can tell, this isn't really taught in school; it's something you need to pick up on your own. I find that easily half of my programming time is spent thinking about how the next feature will best fit into a framework.

So, now that we're talking OOP, we have classes. What is a class for? A class is a definition for an object. A class may be instantiated elsewhere to manage something, or inherited by a subclass for further specialization. When I think about a program, I think of classes as building blocks. I start with a base class which performs one or two tasks well and transparently, then instantiate that class as an object inside another class, used only to perform its job. Doing so makes code more readable and easier to debug: once we confirm that the base class performs without error in one place, we can assume it does everywhere else and move on.

So what goes into a class and what doesn't? This is a tricky question, especially when dealing with the JNI. See, we can make our code as modular and object oriented as we want, but in the end it becomes non-object-oriented when we create the JNI interface. We package this JNI interface into a Java object, but at that transition layer, our nicely object-oriented code just becomes a static library which instantiates the main class and calls its member functions.

So, let's look at an example of a signal processing chain. I know it has little to do with droidViz, but just bear with me - it'll explain a lot later. Say I need to apply an IIR filter and a low-pass filter to my incoming data. How would I do this? It shouldn't be a plain function, because this code will likely be long and have little to do with the rest of the code in the current file. Well, I'd first make a class called "CustomFilter" with an initialize function and a process function. In initialize, we set up the variables and allocate space for the data; in process, we run the IIR filtering and low-pass filtering code and output the result. Later in my code, I can create an object of the "CustomFilter" class and use it.

Is this an intelligent way of setting up this code? It depends. If this is the only place this sort of code will be needed, sure, it's sufficient. If we'll need more filters down the line - even one more - and that filter needs a configuration different from the first, this is not a good way to structure your code.

Usually I only put code into classes when it logically falls into objects. If I have a generic signal processing chain, I'll declare a "Filter" class which has the methods all filters need - like Initialize() and Process(). I'll then derive several classes from this base "Filter" class - like IIRFilter, LPFilter, and FFTFilter. Polymorphism comes in handy here: when I create the "main" class, "FilterChain", I can keep an array of "Filter" objects which all share the same interface even though they perform different tasks. FilterChain passes the output of one filter to the input of the next, automatically. That way, I don't need to handle this explicitly in the code where I'm using the filters, which would add unnecessary complexity and unreadability at a higher level of code.

In the end, when I need to implement the processing chain from before, it's as easy as instantiating a "FilterChain" object, then an "IIRFilter" object. I can then use methods specific to the IIRFilter class to customize the filter to my needs, and add it to the processing chain with a simple "FilterChain.AddFilter( filter )" call. I can do the same to add an LPFilter object to the chain. Now, whenever I get data, I pass it into the "FilterChain.Process( data )" method and get an output.
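In Java-ish terms (droidViz itself will be C++, but the shape is identical; all the filter math here is stubbed out):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative filter-chain skeleton; the DSP math is stubbed out.
    abstract class Filter {
        abstract void initialize(int blockSize);
        abstract float[] process(float[] input);
    }

    class IIRFilter extends Filter {
        void initialize(int blockSize) { /* allocate filter state */ }
        float[] process(float[] input) { /* IIR math goes here */ return input; }
    }

    class LPFilter extends Filter {
        void initialize(int blockSize) { /* allocate filter state */ }
        float[] process(float[] input) { /* low-pass math goes here */ return input; }
    }

    class FilterChain {
        private final List<Filter> filters = new ArrayList<Filter>();

        void addFilter(Filter f) { filters.add(f); }

        // Feed each filter's output into the next, automatically.
        float[] process(float[] data) {
            for (Filter f : filters) {
                data = f.process(data);
            }
            return data;
        }
    }

    // Usage: build the chain once, then just feed it data.
    //   FilterChain chain = new FilterChain();
    //   chain.addFilter(new IIRFilter());
    //   chain.addFilter(new LPFilter());
    //   float[] out = chain.process(samples);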

Here's where the beauty and elegance of the design lies: I don't have to worry about the details of the processing at this point. I could hand this code off to someone with no DSP knowledge, and they'd be able to use it and understand what it's doing. As long as the processing code performs its function without crashing, they'll never even need to see the filtering code - they can just assume it's okay - allowing them to focus on matters they understand without being distracted or overwhelmed. Furthermore, as long as I know that IIRFilter performs its task well, that LPFilter successfully performs the low-pass filtering, and that FilterChain feeds the output of each filter in its stack to the input of the next, I know the whole thing works. Debugging becomes easier too, because I can isolate where the error is happening and work at that level.

Of course, there is a speed trade-off with levels of abstraction - especially when that abstraction hits the JNI layer. Not everything can be neatly packaged and still perform well, especially in this world of embedded devices and JNI. This presents an interesting problem for me, because I'm so entrenched in this modular design that works so well on our desktop computers.

droidViz is no signal processing chain, so a lot of what I said here doesn't apply directly to the project, but it's good advice to follow when designing your software. I'll be thinking aloud and figuring out the "base" and "main" classes of droidViz in the next post. Keep your eyes peeled.

-Griff

Sunday, June 20, 2010

Shader Compiling from SDCard

So the work continues on the Application and the Library itself.

Right now we can compile shader sources at run-time from files residing on the SD card!
We've also been working hard to make sure that the library works with NDK r4 - it does - and we're now writing new shaders to expand droidViz and begin getting particles emitted.
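For the curious, the gist of it looks something like this (a trimmed-down sketch, not our exact code):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import android.opengl.GLES20;

    // Sketch: slurp a shader file off the SD card and compile it with ES 2.0.
    class ShaderLoader {
        static int compileShaderFromFile(String path, int type) throws IOException {
            StringBuilder src = new StringBuilder();
            BufferedReader in = new BufferedReader(new FileReader(path));
            for (String line = in.readLine(); line != null; line = in.readLine()) {
                src.append(line).append('\n');
            }
            in.close();

            // type is GLES20.GL_VERTEX_SHADER or GLES20.GL_FRAGMENT_SHADER.
            int shader = GLES20.glCreateShader(type);
            GLES20.glShaderSource(shader, src.toString());
            GLES20.glCompileShader(shader);

            int[] status = new int[1];
            GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, status, 0);
            if (status[0] == 0) {
                throw new IOException("Shader compile failed: "
                        + GLES20.glGetShaderInfoLog(shader));
            }
            return shader;
        }
    }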

Sunday, June 13, 2010

Big Updates!

We've been hard at work on droidViz and have some big updates included in 1.2.1 on the marketplace.

First, we've incorporated the ability to hold the multitouch surface to produce continuous density rendering. We've removed the vector field lines in order to show the density rendering a bit more clearly.

Second, we've added a nice new icon to the application to make it look much better than the default one given!

Last, we've begun incorporating NDK r4 into the application so we can use the newest and greatest Native Development Kit to make droidViz even better!!