This page collects my notes about the crazy "pluggable renderers" idea I posted to the cytoscape-staff mailing list, which was debated during CytoscapeRetreat2008 and apparently got some people excited.

The main idea is to gut the current rendering and vizmapping code so that visual attributes (the fact that there is such a thing as borderWidth or node color) are not mentioned anywhere in them, and so that the rendering code doesn't know how to draw a node. Instead, provide an API so that these can be supplied by pluggable modules.
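To make the idea concrete, here is a minimal sketch of what such an API could look like. All names here (VisualProperty, NodeRenderer, EllipseNodeRenderer) are hypothetical, invented for illustration, not the actual Cytoscape API:

```java
import java.awt.*;
import java.awt.image.BufferedImage;
import java.util.*;
import java.util.List;

// Hypothetical descriptor: the renderer declares which visual attributes exist.
final class VisualProperty {
    final String name;          // e.g. "fillColor"
    final Object defaultValue;
    VisualProperty(String name, Object defaultValue) {
        this.name = name;
        this.defaultValue = defaultValue;
    }
}

// The core rendering code would see only this interface; it would not know
// what a border width or a node color is, or how to draw a node.
interface NodeRenderer {
    List<VisualProperty> getVisualProperties();
    void render(Graphics2D g, Map<String, Object> values, int w, int h);
}

// One pluggable module: an ellipse renderer supplied from outside the core.
class EllipseNodeRenderer implements NodeRenderer {
    public List<VisualProperty> getVisualProperties() {
        return Arrays.asList(
            new VisualProperty("fillColor", Color.BLUE),
            new VisualProperty("borderWidth", 1.0f));
    }
    public void render(Graphics2D g, Map<String, Object> values, int w, int h) {
        g.setColor((Color) values.getOrDefault("fillColor", Color.BLUE));
        g.fillOval(0, 0, w, h);
        g.setStroke(new BasicStroke((Float) values.getOrDefault("borderWidth", 1.0f)));
        g.setColor(Color.BLACK);
        g.drawOval(0, 0, w - 1, h - 1);
    }
}
```

The key point of the sketch is that the set of properties lives in the renderer, not in the core or the vizmapper.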

The current plan is to produce a proof-of-concept prototype by the Cytoscape retreat in Toronto (which means in one week).

The svn branch is at svn+ssh://grenache.ucsd.edu/cellar/common/svn/cytoscape3/branches/abeld-gsoc/dev/pluggable-renderers

Currently done

So far I have been working from the inside out (from the inner loop outwards):

Immediate TODO

Implementation plan

Do in following order:

  1. refactor GraphGraphics and GraphRenderer to call separate NodeRenderers for rendering. No vizmap changes yet, and only the original renderers/node shapes kept

  2. refactor the vizmapper to be able to use pluggable 'mappable visual attributes'. (Try to simplify the vizmapper API at this step; creating and applying custom visual styles are apparently not simple enough, see ["Visual Style Simplification"].) For the vizmapper GUI, take ideas from the Tunables API, as they are very similar: the visual properties defined by the renderer will have to include some description of how they can be edited in the vizmapper GUI; try to make this like Tunables.
  3. Lift some NodeRenderers from other code (ideally the following: pie nodes, custom bitmap images and custom vector graphic images)
  4. use OSGi for pluggability (this will be about simply figuring out OSGi and how to use it for renderers-as-service)
  5. do some benchmarks to show that all this didn't make rendering slow as molasses.
  6. create UndirectedEdgeRenderer from current edge rendering, which has only endpoints and not separate target and source visual properties.
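As a rough sketch of the dispatch change in step 1 (all names hypothetical, invented for illustration): the inner render loop would no longer know how to draw a node; it would only look a renderer up by key and delegate, with a fallback for unknown keys.

```java
import java.util.*;

// Simplified for illustration: render() returns a description instead of drawing.
interface NodeRenderer {
    String render(int nodeIndex);
}

// Hypothetical registry the refactored GraphRenderer would consult per node.
class RendererRegistry {
    private final Map<String, NodeRenderer> renderers = new HashMap<>();
    private final NodeRenderer fallback = i -> "default node " + i;

    void register(String key, NodeRenderer r) {
        renderers.put(key, r);
    }

    // The inner loop only dispatches; unknown keys fall back to a default renderer.
    NodeRenderer lookup(String key) {
        return renderers.getOrDefault(key, fallback);
    }
}
```

A pie-node plugin would then call register("pie", ...) at load time; the OSGi work in step 4 would replace the explicit register() call with a service that the registry discovers.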

Things to figure out

Notes about current rendering and vizmapping architecture

These are mostly notes and reminders for myself (but if I misunderstand something, feel free to correct it):

Pluggable renderers and pluggable vizmapper

Having truly modular renderers will require a more modular vizmapper: the VisualProperties that Attributes can be mapped to will have to be dynamic, defined by the Renderers. (Currently they are hard-coded in the vizmapper.)
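A minimal sketch of what such a dynamic vizmapper could look like (hypothetical names, not the real vizmapper API): instead of a hard-coded list, the vizmapper asks the renderer for its visual properties at runtime and applies whatever mappings have been registered for them.

```java
import java.util.*;
import java.util.function.Function;

// Hypothetical descriptor for a visual attribute defined by a renderer.
final class VisualProperty {
    final String name;
    final Object defaultValue;
    VisualProperty(String name, Object defaultValue) {
        this.name = name;
        this.defaultValue = defaultValue;
    }
}

interface Renderer {
    List<VisualProperty> getVisualProperties();
}

class VizMapper {
    // mapping functions keyed by visual property name: attribute value -> property value
    private final Map<String, Function<Object, Object>> mappings = new HashMap<>();

    void addMapping(String propertyName, Function<Object, Object> f) {
        mappings.put(propertyName, f);
    }

    // Compute the property values for one node from one of its attribute values;
    // properties without a mapping fall back to the renderer's defaults.
    Map<String, Object> apply(Renderer r, Object attributeValue) {
        Map<String, Object> out = new HashMap<>();
        for (VisualProperty p : r.getVisualProperties()) {
            Function<Object, Object> f = mappings.get(p.name);
            out.put(p.name, f != null ? f.apply(attributeValue) : p.defaultValue);
        }
        return out;
    }
}
```

Nothing in VizMapper mentions a concrete property like node color, which is the whole point: a Matrix view could feed it a different Renderer with different properties.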

Making the vizmapper pluggable would, however, be good not only for pluggable renderers but for other PresentationViews as well, since it could allow other types of views (like a Matrix view) to use the same vizmapper architecture.

In fact, having a pluggable vizmapper is sort-of-needed to allow plugin-developers to store all state in the ViewModel.

Possible impact on rendering speed

Since the plan is to make the rendering more modular, some things that are currently done with hard-coded method calls will have to be done with dynamic lookup. This means that rendering a node will most likely end up a bit slower. However, the overall speed of the renderer does not necessarily have to be slower, since the current implementation does not use some possible optimizations that it could:

(Here I use 'rendering' for 'turning visual attributes into bitmaps' and 'repainting' for copying the bitmap images of nodes to the right places to show the network. The point is that if repainting is fast, it doesn't matter much if rendering is slow.)

(I hope that I am not mis-interpreting the behaviour of the current code in the description below.)

Note that all of these could be implemented with the current rendering architecture, but the point is that with such optimizations, the responsiveness and repainting speed should not depend critically on the rendering speed of nodes, and thus having pluggable renderers should not mean a severe drop in rendering speed.
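One such optimization, the rendering-vs-repainting split described above, can be sketched as follows (hypothetical code, not the current implementation): a node's bitmap is rendered once per distinct style and cached, so repainting is just copying cached images, however slow the pluggable renderer is.

```java
import java.awt.*;
import java.awt.image.BufferedImage;
import java.util.*;

class NodeBitmapCache {
    private final Map<String, BufferedImage> cache = new HashMap<>();
    int renderCalls = 0;  // exposed only to demonstrate how rarely rendering runs

    // Repainting calls this; it renders only on a cache miss.
    BufferedImage get(String styleKey, int size) {
        return cache.computeIfAbsent(styleKey, k -> renderNode(k, size));
    }

    // The possibly-slow pluggable rendering happens only here.
    private BufferedImage renderNode(String styleKey, int size) {
        renderCalls++;
        BufferedImage img = new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = img.createGraphics();
        g.setColor("red".equals(styleKey) ? Color.RED : Color.BLUE);
        g.fillOval(0, 0, size, size);
        g.dispose();
        return img;
    }
}
```

Repainting 100 nodes that share two styles then costs two render calls and 100 cheap drawImage() blits; in the real renderer the cache key would have to be derived from the node's visual property values.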

Planned core NodeRenderers

Planned core EdgeRenderers

Pluggable edge renderers would be a less useful feature, but still, here are some ideas (see also the RichLineTypes RFC for ideas, etc.):

Things a NodeRenderer will have to provide

I.e., high-level requirements for the API

Also: to support several levels of detail, maybe one NodeRenderer will actually have several render() methods (or there will be a NodeStyle object that has several NodeRenderer objects, one for each level of detail). In that case, of the methods above, only render() would be in NodeRenderer; the rest would be in NodeStyle.
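The second variant could look something like this sketch (hypothetical names): a NodeStyle holds one renderer per detail level and picks one based on the current zoom.

```java
import java.util.*;

// Simplified for illustration: render() returns a description instead of drawing.
interface NodeRenderer {
    String render();
}

class NodeStyle {
    // minimum zoom at which a level becomes active -> its renderer
    private final NavigableMap<Double, NodeRenderer> levels = new TreeMap<>();

    void addLevel(double minZoom, NodeRenderer r) {
        levels.put(minZoom, r);
    }

    NodeRenderer rendererFor(double zoom) {
        Map.Entry<Double, NodeRenderer> e = levels.floorEntry(zoom);
        // below the coarsest threshold, fall back to the coarsest level
        return (e != null ? e : levels.firstEntry()).getValue();
    }
}
```

At low zoom the style would hand back a cheap plain-square renderer; only when zoomed in would the expensive detailed renderer run.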

PluggableRenderers vs. NodeDecorators

The original idea for PluggableRenderers assumed that each node would be rendered by a single renderer. (I.e., the graphic created by the NodeRenderer would replace the node's graphic, instead of adding to it.) Apparently the current custom graphics API adds to the graphic of the node. These two are a bit different, with different use cases; I am going to call the second one NodeDecorator. I don't know all the use cases that custom graphics is used for, but I think (hope) that NodeRenderers would be a better fit for those use cases, and that having them as decorators was more of a work-around.
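The distinction can be sketched as a wrapper (hypothetical names): a NodeRenderer produces the node's whole graphic, while a NodeDecorator adds to an existing one by wrapping another renderer.

```java
// Simplified for illustration: render() returns a description instead of drawing.
interface NodeRenderer {
    String render();
}

// A decorator does not replace the node's graphic; it wraps another renderer
// and draws its decoration on top of whatever that renderer produced.
class NodeDecorator implements NodeRenderer {
    private final NodeRenderer base;
    private final String overlay;

    NodeDecorator(NodeRenderer base, String overlay) {
        this.base = base;
        this.overlay = overlay;
    }

    public String render() {
        return base.render() + " + " + overlay;
    }
}
```

Since a decorator is itself a NodeRenderer, this composition would let the custom-graphics use cases coexist with single-renderer nodes in one API.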

Selection

requirements:

Note: selection is planned to be done with attributes in 3.0 (?)

Possibilities for visual representation:
