This page collects my notes about the crazy "pluggable renderers" idea I posted to the cytoscape-staff mailing list; it was debated during CytoscapeRetreat2008 and apparently got some people excited.
The main idea is to gut the current rendering and vizmapping code so that visual attributes (the fact that there is such a thing as borderWidth or node color) are not mentioned in them anywhere and so that the rendering code doesn't know how to draw a node. Instead, provide an API so that these can be supplied in pluggable modules.
The current plan is to produce a proof-of-concept prototype by the Cytoscape retreat in Toronto (which means in one week).
The svn branch is at svn+ssh://grenache.ucsd.edu/cellar/common/svn/cytoscape3/branches/abeld-gsoc/dev/pluggable-renderers
Currently done
So far I have been working from the inside out (from the inner loop outwards):
- node rendering is done in ShapeRenderer.java, separate from the inner rendering loop.
- visual attributes of the node are stored dynamically (in a DNodeView.visualAttributes HashMap), not in NodeDetails.
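As a rough illustration of the dynamic storage described above (class and method names are my own, not the actual Cytoscape classes), a node view can hold renderer-defined attributes in a plain HashMap instead of fixed NodeDetails fields:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: per-node-view dynamic storage of visual attributes,
// replacing fixed fields like "borderWidth" in NodeDetails.
class DynamicNodeView {
    private final Map<String, Object> visualAttributes = new HashMap<>();

    void setVisualAttribute(String name, Object value) {
        visualAttributes.put(name, value);
    }

    // Returns the stored value, or the supplied default when the
    // renderer-defined attribute has not been set on this node.
    Object getVisualAttribute(String name, Object defaultValue) {
        return visualAttributes.getOrDefault(name, defaultValue);
    }
}
```

The point is that neither the view nor the rendering loop needs to know that something called "borderWidth" exists; only the renderer does.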
Immediate TODO
- start working from the "outside in": refactor how Calculators are loaded so that the borderWidth calculator etc. are not loaded, since these will be supplied by the NodeRenderer instance. Instead, load NodeRenderers the way calculators are loaded now (from a .props file).
- Refactor vizmapper so that borderWidth, etc. calculators are only added to the list of calculators when the renderer has been selected in the vizmapper.
- study how vizmapper bypass is done now
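For the calculator-style loading mentioned in the first TODO item, a minimal sketch could look like this (the property key, the NodeRenderer interface and the class names are assumptions, not the real Cytoscape API):

```java
import java.io.StringReader;
import java.util.Properties;

// Hypothetical renderer interface and one implementation.
interface NodeRenderer {
    String name();
}

class ShapeRenderer implements NodeRenderer {
    public String name() { return "shape"; }
}

// Loads a renderer named in a .props file by reflection, the way
// calculators are loaded today.
class RendererLoader {
    static NodeRenderer load(Properties props, String key) throws Exception {
        String className = props.getProperty(key);
        return (NodeRenderer) Class.forName(className)
                                   .getDeclaredConstructor().newInstance();
    }
}
```

A .props line such as `nodeRenderer.default=ShapeRenderer` would then be enough to plug in a renderer without the core referencing it.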
Implementation plan
- only do nodes at first; don't care about node labels or edges. The idea is to have nodes handled in a pluggable way and compare the pluggable architecture with the previous one. (Later the node labels should be handled by the node renderers as well; once the general architecture is working, it should be pretty simple to copy it for the other visual attributes too.)
- don't care about benchmarking, at first
- don't care about OSGi framework, at first
- don't care about backward-compatibility for IO (for the saved visual styles) -- I'll leave IO last, and will worry about how to convert old-type saved visual styles to the new format then.
Do in the following order:
- refactor GraphGraphics and GraphRenderer to call separate NodeRenderers for rendering. No vizmap changes yet, and keep only the original renderers/node shapes.
- refactor the vizmapper to be able to use pluggable 'mappable visual attributes'. (Try to simplify the vizmapper API at this step; creating and applying custom visual styles are apparently not simple enough; see ["Visual Style Simplification"].) For the vizmapper GUI, take ideas from the Tunables API, as they are very similar (the visual properties defined by the renderer will have to include some description of how they can be edited in the vizmapper GUI; try to make this like Tunables).
- lift some node renderers from other code (ideally the following: pie nodes, custom bitmap and custom vector graphic images)
- use OSGi for pluggability (this will be about simply figuring out OSGi and how to use it for renderers-as-services)
- fix the editor: currently, when generating the palette, the editors (for example, DefaultCytoscapeEditor) set the VisualProperty values (node color, etc.). However, the editors should not reference VisualProperties at all. Instead, they should specify a VisualStyle and abstract attribute values for the shapes that are placed on the palette; the editor framework should then take care of generating the preview icons. (Note that even now it is done something like this in DefaultCytoscapeEditor, but that code still references VisualProperties.)
- do some benchmarks to show that all this didn't make rendering slow as molasses.
- create an UndirectedEdgeRenderer from the current edge rendering, which has only endpoint visual properties rather than separate target and source ones.
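To make the vizmapper step above more concrete, here is a hypothetical sketch of how a renderer could describe its mappable visual properties Tunables-style, so the vizmapper GUI can build editors without hard-coding borderWidth etc. (all class and property names here are made up):

```java
import java.awt.Color;
import java.util.List;

// A Tunables-like descriptor: name, value type and default are enough
// metadata for the vizmapper GUI to generate an editor for the property.
class VisualPropertyDescriptor<T> {
    final String name;
    final Class<T> type;
    final T defaultValue;

    VisualPropertyDescriptor(String name, Class<T> type, T defaultValue) {
        this.name = name;
        this.type = type;
        this.defaultValue = defaultValue;
    }
}

class SketchShapeRenderer {
    // The vizmapper would query this list instead of a hard-coded set.
    List<VisualPropertyDescriptor<?>> supportedVisualProperties() {
        return List.of(
            new VisualPropertyDescriptor<>("fillColor", Color.class, Color.RED),
            new VisualPropertyDescriptor<>("borderWidth", Float.class, 1.0f));
    }
}
```

With something like this, adding a new renderer automatically adds its properties to the vizmapper, which is the whole point of making the vizmapper pluggable.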
Things to figure out
- place in the model / viewmodel / view division (I'll have to study the current plan for the viewmodel API)
- are the currently used customGraphics a kind of decorator on nodes? (I.e., are they used to replace nodes, or to add to them?) If the latter, supporting such 'node decorators' might be needed.
- Stroke attributes, for example width and dash settings, are set separately and should be usable for mapping attributes to separately, but could actually be stored in a single Stroke object. How similar is this to setting colors, where an RGB triad is used, set in one dialog and stored as one java.awt.Color object? I think that to allow mapping attributes to width, it won't be possible to handle it as a Stroke; the visual attributes will have to be the parameters of the Stroke.
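A small sketch of the conclusion above: width and dash pattern stay separate, mappable parameters, and are only combined into a single java.awt.Stroke at render time (the factory class is hypothetical):

```java
import java.awt.BasicStroke;

// Combines separately-mapped stroke parameters into one Stroke object
// just before drawing; the vizmapper only ever sees width and dash.
class StrokeFactory {
    static BasicStroke build(float width, float[] dash) {
        if (dash == null) {
            return new BasicStroke(width); // solid line
        }
        return new BasicStroke(width, BasicStroke.CAP_BUTT,
                               BasicStroke.JOIN_MITER, 10.0f, dash, 0.0f);
    }
}
```

This mirrors the Color case: the user-facing attributes are the parameters, and the composite object is an internal rendering detail.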
Notes about current rendering and vizmapping architecture
These are mostly notes and reminders for myself (but if I misunderstand something, feel free to correct it):
- good description of visual mapping framework: ["Visual_Mapping_System_Guide"]
- eclipse's "Open Call Hierarchy" is pretty neat
render.immed is only GraphGraphics, which is only a collection of methods. It is only called from render.stateful's GraphRenderer.renderGraph(), which is basically the inner loop. These are the places where the large refactorings will be done.
visual style is applied to individual nodes (NodeViews), but is stored in a central NodeDetails object. I.e., DNodeView forwards all data to DNodeDetails, as if a NodeView were one 'row' in the table of NodeDetails. Then, when rendering, a row is read from NodeDetails, since nodes are rendered one by one.
This is basically an optimization pattern, to work around Java's big memory overhead for small objects. See for example: http://www.c2.com/cgi/wiki?CrossSection
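A stripped-down illustration of this cross-section pattern (class names are mine, not the actual DNodeView/DNodeDetails code): attributes live in parallel arrays inside one central object, and each view is just an index into them:

```java
// Central "table": one array per attribute, one slot per node.
class SketchNodeDetails {
    final float[] borderWidth;
    final int[] fillColorRGB;

    SketchNodeDetails(int nodeCount) {
        borderWidth = new float[nodeCount];
        fillColorRGB = new int[nodeCount];
    }
}

// A view is just a "row": it forwards reads and writes to the table,
// avoiding one small object per node attribute.
class SketchNodeView {
    private final SketchNodeDetails details;
    private final int index;

    SketchNodeView(SketchNodeDetails details, int index) {
        this.details = details;
        this.index = index;
    }

    void setBorderWidth(float w) { details.borderWidth[index] = w; }
    float getBorderWidth()       { return details.borderWidth[index]; }
}
```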
Pluggable renderers and pluggable vizmapper
Having truly modular renderers will require a more modular vizmapper: the VisualProperties that Attributes can be mapped to will have to be dynamic, defined by the Renderers. (Currently they are hard-coded in the vizmapper.)
Making the vizmapper pluggable would, however, be good not only for pluggable renderers, but for other PresentationViews as well, since it could allow other types of views (like a Matrix view) to use the same vizmapper architecture.
In fact, having a pluggable vizmapper is sort-of-needed to allow plugin-developers to store all state in the ViewModel.
Dependent VisualProperties
We will want to allow the visibility of a VisualProperty to depend on the value of some other VisualProperty. This means that based on the result of vizmapping to VisualProperty A, VisualProperty B may or may not appear in the vizmapper ui.
Two usecases for this:
- Node size locking: in current (pre-refactoring) cytoscape, there is a boolean isLocked parameter (which is not a VisualProperty) that regulates whether a node has a single size, or separate height and width. The height, width and size VisualProperties appear or do not appear in the vizmapper UI based on the value of this parameter.
- Renderers: handling NodeRenderers and EdgeRenderers as VisualProperties has the advantage that the vizmapping UI can be used to map attributes to them. This is good. However, it would not make sense to show VisualProperties in the vizmapping UI that will have no effect: only those VisualProperties should appear that are used by the Renderers actually in use.
The 'dependent VisualProperties' feature will be able to handle these (and possibly other) usecases in a general way. After all, handling the first one without hardcoding the VisualProperties in question into the vizmapper ui will result in a mechanism that can also handle usecase 2.
Thus, locking the node size will be a (boolean) VisualProperty, and NodeRenderers will be a VisualProperty, just like everything else.
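One possible way to express such dependent VisualProperties (purely a sketch; the names and the predicate-based design are assumptions): each property carries a visibility predicate over the current property values, and the vizmapper UI consults it before showing an editor:

```java
import java.util.Map;
import java.util.function.Predicate;

// A VisualProperty whose presence in the vizmapper UI depends on the
// values of other properties (e.g. a boolean "nodeSizeLocked" property).
class DependentProperty {
    final String name;
    final Predicate<Map<String, Object>> visibleWhen;

    DependentProperty(String name, Predicate<Map<String, Object>> visibleWhen) {
        this.name = name;
        this.visibleWhen = visibleWhen;
    }

    boolean isVisible(Map<String, Object> currentValues) {
        return visibleWhen.test(currentValues);
    }
}
```

The node-size use case then falls out directly: height and width declare themselves visible only when "nodeSizeLocked" is false, with no hard-coding in the vizmapper UI itself, and the renderer use case works the same way.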
Possible impact on rendering speed
Since the plan is to make the rendering more modular, some things that are currently done with hard-coded methods will have to be done with dynamic lookup. This means that rendering a node will most likely end up a bit slower. However, the overall speed of the renderer does not necessarily have to be lower, since the current implementation does not use some possible optimizations that it could. See RenderingOptimizations. In particular, separation of rendering and repainting (see that page) would mean that if repainting is fast, it doesn't matter much if rendering is slow.
Note that the optimizations mentioned on RenderingOptimizations could be implemented with the current rendering architecture, but the point is that with such optimizations, the responsiveness and repainting speed should not depend critically on the rendering speed of nodes, and thus having pluggable renderers should not mean a severe drop in rendering speed.
Planned core NodeRenderers
- NonRenderer -- don't actually draw the node -- might be useful for hiding the node (but how would boundingPolygon() be implemented for this? Or should boundingPolygon() be just strongly suggested, not mandatory, i.e. allowing that method to return null?)
- TrivialRenderer -- only circle shape, only one color, one size; basically no VisualProperties at all
- SimpleShapeRenderer -- basically the current node rendering: star-like shapes, with fill color and transparency and a border around them. This Renderer should make it possible to dynamically extend the available Shapes (i.e. plugins can define new polygons to use as shapes); this would be a good example of allowing extensible plugins, i.e. plugins that can themselves use services defined by other plugins.
- PieRenderer -- just copy over the implementation available elsewhere; will be an example of mapping a List to a VisualProperty.
- SVGRenderer -- should also be extensible with new SVG objects
- BitmapRenderer -- also usable for backward compatibility with the CustomGraphics API
maybe in a plugin:
- MiMRenderer (molecular interaction maps, biochemical networks): node decorations, binding regions, etc. For examples, see the "species" part of [http://www.systems-biology.org/cd/images/components40.png this png] and [http://www.cytoscape.org/cgi-bin/moin.cgi/Molecular_Interaction_Maps Molecular_Interaction_Maps]. This will be a good example of a node-decorator-like renderer, showing how the node decorator idea can be replaced with pluggable renderers.
Planned core EdgeRenderers
pluggable edge renderers would be a less useful feature, but still, here are some ideas (see also the RichLineTypes rfc for ideas, etc.):
- DirectedEdgeRenderer -- basically the current renderer
- UndirectedEdgeRenderer -- basically the current renderer, but instead of separate Target... and Source... VisualProperties, it has only EdgeEnd... VisualProperties, to enforce the constraint that both ends of the edge must look the same.
- RainbowEdge -- (?)
- FancyStrokeRenderer -- see http://www.java2s.com/Code/Java/2D-Graphics-GUI/CustomStrokes.htm for ideas
- NonRenderer -- see above
Things a NodeRenderer will have to provide
I.e., high-level requirements of the API:
- render() for the actual rendering
- renderPreview() -- used not only in the vizmapper GUI, but also in the editor GUI to provide the palette
- boundingPolygon() -- required so that layout methods can take node size into account, and also for handling mouse events (so that the Presentation object can figure out which GraphObject got the event)
- supportedVisualProperties()
Also: to support several levels of detail, maybe one NodeRenderer will actually have several render() methods (or there will be a NodeStyle object that has several NodeRenderer objects, one for each level of detail). In that case, of the above methods only render() will be in NodeRenderer; the rest will be in NodeStyle.
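The requirements above might translate into an interface roughly like the following (signatures are guesses, not the final design), shown with a trivial implementation:

```java
import java.awt.Graphics2D;
import java.awt.Polygon;
import java.util.Set;

// Hypothetical sketch of the NodeRenderer API listed above.
interface SketchNodeRenderer {
    // Actual rendering of one node at the given position.
    void render(Graphics2D g, double x, double y);

    // Small preview, for the vizmapper GUI and the editor palette.
    void renderPreview(Graphics2D g, int width, int height);

    // Outline used by layouts (node size) and for mouse-event hit testing;
    // a NonRenderer might return null here.
    Polygon boundingPolygon();

    // The VisualProperties this renderer understands.
    Set<String> supportedVisualProperties();
}

// Trivial implementation: a fixed-size square "node" with one color property.
class SketchTrivialRenderer implements SketchNodeRenderer {
    public void render(Graphics2D g, double x, double y) { /* draw shape */ }
    public void renderPreview(Graphics2D g, int width, int height) { /* icon */ }
    public Polygon boundingPolygon() {
        Polygon p = new Polygon();
        p.addPoint(-10, -10);
        p.addPoint(10, -10);
        p.addPoint(10, 10);
        p.addPoint(-10, 10);
        return p;
    }
    public Set<String> supportedVisualProperties() {
        return Set.of("fillColor");
    }
}
```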
PluggableRenderers vs. NodeDecorators
The original idea for PluggableRenderers assumed that each node would be rendered by a single renderer. (I.e., the graphic created by the NodeRenderer would replace the node, instead of adding to it.) Apparently the current custom graphics API adds to the graphic of the node. These two are a bit different, with different usecases; I am going to call the second one NodeDecorator. I don't know all the usecases that custom graphics are used for, but I think (hope) that NodeRenderers would be a better fit for those usecases, and that having them as decorators was more of a work-around.
Selection
requirements:
- somehow show selection, i.e. render selected nodes differently
- pre-3.0 cytoscape has a 'selection' (by default, yellow) and a 'reverse selection' (by default green, used for selection from the AttributeBrowser); I (DanielAbel) (and some others) think that an arbitrary number of selections should be possible. See MultipleSelections.
- note that selection is planned to be done with attributes in 3.0 (?)
Possibilities for visual representation:
- have the NodeRenderer decide how to show selected nodes differently
- mandate that selection is shown by a colored border around the node -- this border already exists, since the boundingPolygon must be known
- mandate that selection is shown by some other VisualProperty (like node color) -- but not all Renderers will have the same VisualProperties