This page collects my notes about the crazy "pluggable renderers" idea I posted to the cytoscape-staff mailing list, which was debated during CytoscapeRetreat2008 and apparently got some people excited.

TableOfContents

The main idea is to gut the current rendering and vizmapping code so that visual attributes (the fact that there is such a thing as borderWidth or node color) are not mentioned in them anywhere and so that the rendering code doesn't know how to draw a node. Instead, provide an API so that these can be supplied in pluggable modules.
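A rough sketch of what such an API could look like (these names are illustrative only, not the actual Cytoscape API): the core hands a renderer a graphics context plus the mapped values of whatever visual properties that renderer declared, and never knows what "borderWidth" or "node color" mean.

```java
import java.awt.Graphics2D;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: the core rendering code knows only this interface,
// not what a node looks like or which visual attributes exist.
interface NodeRenderer {
    // Draw one node at (x, y); the values for the visual properties this
    // renderer declared are supplied by the vizmapper.
    void render(Graphics2D g, double x, double y, Map<String, Object> visualValues);

    // The visual properties (e.g. "size") this renderer understands;
    // the vizmapper UI would be generated from this list.
    List<String> visualPropertyNames();
}

// A trivial pluggable implementation that draws a filled circle.
class CircleNodeRenderer implements NodeRenderer {
    public void render(Graphics2D g, double x, double y, Map<String, Object> v) {
        double size = ((Number) v.getOrDefault("size", 20.0)).doubleValue();
        g.fillOval((int) (x - size / 2), (int) (y - size / 2), (int) size, (int) size);
    }

    public List<String> visualPropertyNames() {
        return List.of("size", "fillColor");
    }
}
```

The point of the sketch is only that the set of visual attributes is owned by the renderer module, so a plugin can ship a new NodeRenderer without touching core code.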

The current plan is to produce a proof-of-concept prototype by the Cytoscape retreat in Toronto (which means in one week).

The svn branch is at svn+ssh://grenache.ucsd.edu/cellar/common/svn/cytoscape3/branches/abeld-gsoc/dev/pluggable-renderers

Note that building that branch is currently non-trivial; I'll clean it up and make it easier to build sometime soon (definitely before the second mini-retreat).

Currently done

(note that this might be outdated, I'll refresh it this weekend) So far I have been working from the inside out (from the inner loop outwards):

Immediate TODO: next steps

(note that this might be outdated, I'll refresh it this weekend)

Implementation plan

Do in following order:

  1. refactor GraphGraphics and GraphRenderer to call separate NodeRenderers for rendering. No vizmap changes yet, and only the original renderers/node shapes kept

  2. refactor the vizmapper to be able to use pluggable 'mappable visual attributes'. (Try to simplify the vizmapper API at this step; creating and applying custom visual styles are apparently not simple enough, see ["Visual Style Simplification"].) For the vizmapper GUI, take ideas from the Tunables API, as they are very similar: the visual properties defined by the renderer will have to include some description of how they can be edited in the vizmapper GUI; try to make this like Tunables.
  3. Lift some noderenderers from other code (ideally the following: pie nodes, custom bitmap and custom vector graphic image)
  4. use OSGi for pluggability (this will be about simply figuring out OSGi and how to use it for renderers-as-service)
  5. fix editor: Currently when generating the palette, the editors (for example, DefaultCytoscapeEditor) set the VisualProperty values. (for example, node color, etc.) However, the editors should not reference VisualProperties at all. Instead, they should specify a VisualStyle and abstract attribute values for the shapes that are placed on the palette. Then, the editor framework should take care of generating the preview icons. (Note that even now it is done something like this, in DefaultCytoscapeEditor; but that code still references VisualProperties)

  6. do some benchmarks to show that all this didn't make rendering slow as molasses.
  7. create an UndirectedEdgeRenderer from the current edge rendering, which has only endpoints rather than separate source and target visual properties.
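For step 2, the 'mappable visual attributes' could be sketched roughly as follows (interface and class names are hypothetical, not the real Cytoscape API): a mapping turns a data attribute value into a value for some pluggable visual property, without knowing what the property means.

```java
// Hypothetical sketch of a pluggable mapping: the vizmapper only knows
// this generic shape, not the concrete visual attribute being mapped to.
interface VisualMapping<A, V> {
    V map(A attributeValue);
}

// Example: a linear ("continuous") mapping from a numeric data attribute
// to a numeric visual property such as node size.
class LinearMapping implements VisualMapping<Double, Double> {
    private final double loAttr, hiAttr, loVal, hiVal;

    LinearMapping(double loAttr, double hiAttr, double loVal, double hiVal) {
        this.loAttr = loAttr; this.hiAttr = hiAttr;
        this.loVal = loVal; this.hiVal = hiVal;
    }

    public Double map(Double a) {
        double t = (a - loAttr) / (hiAttr - loAttr);
        t = Math.max(0.0, Math.min(1.0, t));  // clamp values outside the attribute range
        return loVal + t * (hiVal - loVal);
    }
}
```

Discrete and passthrough mappings would implement the same interface, which is what lets the vizmapper GUI treat renderer-defined properties uniformly, much like Tunables.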

Things to figure out

(note that this might be outdated, I'll refresh it this weekend)

Notes about current rendering and vizmapping architecture

These are mostly notes and reminders for myself (but if I misunderstand something, feel free to correct it):

Pluggable renderers and pluggable vizmapper

Having truly modular renderers will require a more modular vizmapper: the VisualProperties that Attributes can be mapped to will have to be dynamic, defined by the Renderers. (Currently they are hard-coded in the vizmapper.)

Making the vizmapper pluggable would, however, be good not only for pluggable renderers but for other PresentationViews as well, since it could allow other types of views (like a Matrix view) to use the same vizmapper architecture.

In fact, having a pluggable vizmapper is sort-of-needed to allow plugin-developers to store all state in the ViewModel.

Dependent VisualProperties

We will want to allow the visibility of a VisualProperty to depend on the value of some other VisualProperty. This means that based on the result of vizmapping to VisualProperty A, VisualProperty B may or may not appear in the vizmapper ui.

Two usecases for this:

  1. Node size locking: in current (pre-refactoring) Cytoscape, there is a boolean isLocked parameter (which is not a VisualProperty) which regulates whether a node has a single size or separate height and width. The height, width and size VisualProperties appear or do not appear in the vizmapper UI based on the value of this parameter.

  2. Renderers: handling NodeRenderers and EdgeRenderers as VisualProperties has the advantage that the vizmapping UI can be used to map attributes to them. This is good. However, it would not make sense to show VisualProperties in the vizmapping UI that will have no effect: only those VisualProperties should appear that are used by the Renderers which will actually be used.

The 'dependent VisualProperties' feature will be able to handle these (and possibly other) usecases in a general way. After all, handling the first one without hardcoding the VisualProperties in question into the vizmapper ui will result in a mechanism that can also handle usecase 2.

Thus, locking the node size will be a (boolean) VisualProperty, and NodeRenderers will be a VisualProperty, just like everything else.
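A minimal sketch of how such dependent visibility could be expressed (the class and property names here are hypothetical): each property carries a predicate over the current property values, and the vizmapper UI shows the property only while the predicate holds.

```java
import java.util.Map;
import java.util.function.Predicate;

// Hypothetical sketch: a property descriptor that knows when it should
// be visible in the vizmapper UI, based on other properties' values.
class VisualPropertyDescriptor {
    final String name;
    final Predicate<Map<String, Object>> visibleWhen;

    VisualPropertyDescriptor(String name, Predicate<Map<String, Object>> visibleWhen) {
        this.name = name;
        this.visibleWhen = visibleWhen;
    }

    boolean isVisibleGiven(Map<String, Object> currentValues) {
        return visibleWhen.test(currentValues);
    }
}
```

Use case 1 would then be expressed without any hardcoding in the UI, e.g. `new VisualPropertyDescriptor("nodeHeight", vals -> !Boolean.TRUE.equals(vals.get("nodeSizeLocked")))`, and use case 2 the same way with a predicate on the currently selected renderer.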

Possible impact on rendering speed

Since the plan is to make the rendering more modular, some things will have to be done with dynamic lookup that are currently done with hard-coded method calls. This means that rendering a node will most likely end up a bit slower. However, the overall speed of the renderer does not necessarily have to be slower, since the current implementation does not use some possible optimizations that it could. See RenderingOptimizations. In particular, separation of rendering and repainting (see that page) would mean that if repainting is fast, it doesn't matter much if rendering is slow.

Note that the optimizations mentioned on RenderingOptimizations could be implemented with the current rendering architecture, but the point is that with such optimizations, responsiveness and repainting speed should not depend critically on the rendering speed of nodes, and thus having pluggable renderers should not mean a severe drop in rendering speed.
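The rendering/repainting separation boils down to something like the following sketch (hypothetical, assuming a Java2D canvas): the slow path (invoking the pluggable renderers) fills an offscreen buffer, and ordinary repaints just copy that buffer.

```java
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Sketch: render the (possibly slow) node graphics into an offscreen
// image once; repaint then just blits the image, so responsiveness does
// not depend on how slow an individual pluggable renderer is.
class BufferedCanvas {
    private BufferedImage buffer;
    private boolean dirty = true;
    int renderCount = 0;  // for illustration: how often the slow path ran

    void invalidate() { dirty = true; }  // call when the graph or style changed

    void paint(Graphics2D screen, int w, int h) {
        if (dirty || buffer == null || buffer.getWidth() != w || buffer.getHeight() != h) {
            buffer = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
            renderGraph(buffer.createGraphics());  // slow path
            dirty = false;
        }
        screen.drawImage(buffer, 0, 0, null);      // fast path on every repaint
    }

    void renderGraph(Graphics2D g) {
        renderCount++;
        // ...this is where the pluggable NodeRenderers would be invoked...
    }
}
```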

Possible impact on memory consumption

Since the renderers themselves are stateless, there will be only one renderer object of each kind, so their memory consumption should be minimal. (And as the framework will make it possible to implement them in plugins, it should be possible to load only the ones that are actually used, ensuring that unused renderers don't eat memory. Although this really shouldn't be needed.)

The memory needed by the viewmodel, on the other hand, might be larger, simply due to replacing the current statically bound implementation with dynamic lookup: as noted [#crosssection-pattern above], the current implementation uses a http://www.c2.com/cgi/wiki?CrossSection pattern to avoid Java's large memory overhead on small objects. A similar optimization might be more difficult (but hopefully still possible) with the dynamic, pluggable architecture.
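For reference, the cross-section idea amounts to something like this simplified sketch: instead of allocating one small object per node view (each paying Java's per-object header overhead), every visual attribute is stored in a flat array indexed by node index.

```java
// Simplified sketch of the cross-section pattern: one array per
// attribute across all nodes, instead of one object per node.
class NodeViewCrossSection {
    private final float[] x, y, width, height;

    NodeViewCrossSection(int nodeCount) {
        x = new float[nodeCount];
        y = new float[nodeCount];
        width = new float[nodeCount];
        height = new float[nodeCount];
    }

    void setBounds(int node, float px, float py, float w, float h) {
        x[node] = px; y[node] = py; width[node] = w; height[node] = h;
    }

    float x(int node) { return x[node]; }
    float y(int node) { return y[node]; }
    float width(int node) { return width[node]; }
    float height(int node) { return height[node]; }
}
```

The difficulty with the pluggable architecture is that the set of attributes is no longer fixed at compile time, so these arrays would have to be allocated dynamically per registered VisualProperty.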

Also, most of the RenderingOptimizations ideas are basically about trading memory for speed, thus optimizing for speed might increase the memory footprint.

Currently I prefer to make the pluggable idea work first, and then think about optimizing it.

Planned core NodeRenderers

maybe in plugin:

Planned core EdgeRenderers

pluggable edge renderers would be a less useful feature, but still, here are some ideas (see also the RichLineTypes rfc for ideas, etc.):

maybe in plugin:

Things a NodeRenderer will have to provide

I.e., high-level requirements for the API

Also: to support several levels of detail, maybe one NodeRenderer will actually have several render() methods (or there will be a NodeStyle object that has several NodeRenderer objects, one for each level of detail). In that case, of the above methods, only render() will be in NodeRenderer; the rest will be in NodeStyle.
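The second variant (a NodeStyle holding one renderer per level of detail) might look like this sketch (names hypothetical; the Renderer interface is reduced to a stand-in method):

```java
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Stand-in for the real renderer interface, reduced to one method
// so the sketch stays self-contained.
interface Renderer {
    String name();
}

// Hypothetical sketch: renderers keyed by the minimum zoom level at
// which they apply; the framework picks one for the current zoom.
class NodeStyle {
    private final NavigableMap<Double, Renderer> byMinZoom = new TreeMap<>();

    void addLevel(double minZoom, Renderer r) {
        byMinZoom.put(minZoom, r);
    }

    // The most detailed renderer whose minimum zoom is <= the current zoom.
    Renderer rendererFor(double zoom) {
        Map.Entry<Double, Renderer> e = byMinZoom.floorEntry(zoom);
        return e == null ? null : e.getValue();
    }
}
```

A style could register, say, a cheap dot renderer for zoomed-out views and the full renderer for close-up views, and the render loop would only ever call rendererFor(zoom).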

PluggableRenderers vs. NodeDecorators

The original idea for PluggableRenderers assumed that each node would be rendered by a single renderer. (I.e. the graphic created by the NodeRenderer would replace the node, instead of adding to it.) Apparently the current custom graphics API adds to the graphic of the node. These two are a bit different, with different usecases; I am going to call the second one NodeDecorator. I don't know all the usecases that custom graphics is used for, but I think (hope) that NodeRenderers would be a better fit for these usecases, and having them as decorators was more of a work-around.

BackgroundRenderers

Implementing background images might be done by adding BackgroundRenderers, which could use the node and edge positions to draw things. Thus the following Renderers might be possible:

Ideas about handling Selection

requirements:

Note: selection is planned to be done with attributes in 3.0 (?)

Possibilities for visual representation:

Funding for Cytoscape is provided by a federal grant from the U.S. National Institute of General Medical Sciences (NIGMS) of the National Institutes of Health (NIH) under award number GM070743-01. Corporate funding is provided through a contract from Unilever PLC.
