This page collects my notes about the crazy "pluggable renderers" idea I posted to the cytoscape-staff mailing list, which was debated during CytoscapeRetreat2008 and apparently got some people excited.
Contents
- Current Status
- Notes, Ideas and Plans
- Notes about current rendering and vizmapping architecture
- Pluggable renderers and pluggable vizmapper
- Dependent VisualProperties
- Shared VisualProperties
- Possible impact on rendering speed
- Possible impact on memory consumption
- Implementing Serializing (IO)
- Planned core NodeRenderers
- Planned core EdgeRenderers
- Non-Graphics2D presentation
- Things a NodeRenderer will have to provide
- PluggableRenderers vs. NodeDecorators
- BackgroundRenderers
- Ideas about handling Selection
The main idea is to gut the current rendering and vizmapping code so that visual attributes (the fact that there is such a thing as borderWidth or node color) are not mentioned in them anywhere and so that the rendering code doesn't know how to draw a node. Instead, provide an API so that these can be supplied in pluggable modules.
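As a rough sketch of what "pluggable" means here (all class and method names below are made up for illustration, not the actual API), the core would only keep a registry of renderers and look them up at draw time:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: the core knows nothing about node shapes or
// colors; it just looks up whichever NodeRenderer was plugged in.
interface NodeRenderer {
    String getId();                                 // unique id used for lookup
    void render(Object graphics, Object nodeView);  // draw one node
}

class RendererRegistry {
    private final Map<String, NodeRenderer> renderers = new HashMap<>();

    void register(NodeRenderer r) {
        renderers.put(r.getId(), r);
    }

    NodeRenderer lookup(String id) {
        return renderers.get(id);  // null if no such renderer is plugged in
    }
}
```

The point is that `RendererRegistry` never mentions borderWidth, node color, or any other visual attribute; those only exist inside the plugged-in modules.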
The current plan is to finish the proof-of-concept prototype by the cytoscape mini-retreat in San Francisco.
The svn branch is at svn+ssh://grenache.ucsd.edu/cellar/common/svn/cytoscape3/branches/abeld-gsoc/dev/pluggable-renderers
Note that building that branch is currently non-trivial; I'll clean it up and make it easier to build sometime soon (definitely before the second mini-retreat).
Also note that the name 'Pluggable Renderers' is a bit misleading: the bigger change is the 'Pluggable VisualProperties' part.
Current Status
currently done
(all these done in the pluggable-renderers branch, as mentioned above)
So far I have been working from the inside out (from the inner loop outwards):
node rendering is done in ShapeRenderer.java, separate from the inner rendering loop.
- visual attributes about the node are stored dynamically (in a
DNodeView.visualAttributes HashMap), not in NodeDetails.
- Large parts of the viewmodel have been refactored:
The Appearance, AppearanceCalculator, and all their Node..., Edge..., Global... subclasses have been removed. (These were a totally unnecessary layer between the calculators and the NodeView/EdgeView objects.)
some hard-coded enums (NodeShape, ArrowShape, LineStyle, etc.) were replaced with runtime-extensible lists. (But these don't yet use OSGi for extension, and extending them has not actually been tested yet.)
Implement DependentVisualProperties.
- remove the idea that there is such a thing as a "global current
Visual Style", since that doesn't make sense: each NetworkView can have a separate VisualStyle, so methods like VisualMappingManager.getVisualStyle() and VisualMappingManager.setVisualStyle(VisualStyle vs) are meaningless.
make VizMapperMainPanel more modular: currently that single .java file is 3000 lines long and pretty hard to understand. (The
- propertyChange() method alone is almost 400 lines long and nigh impossible to follow.) It manages several widgets which should be split into separate .java files / classes. Also clean up event handling in that class: for example, currently
the VizMapperMainPanel.switchVS(String vsName, boolean redraw) method initiates the VisualStyle switch _and_ updates the vizmap gui, instead of just calling VisualMappingManager.setVisualStyleForView() and reacting to the resulting event in an event handler.
Immediate TODO: next steps
fix the vizmap gui to use pluggable VisualProperties properly: make VisualProperties appear and disappear when the NodeRenderer that defined them is enabled/disabled.
fix the view layer to use pluggable VisualProperties and renderers, and rethink its API
- re-check API, transition to using OSGi services where possible (i.e. use OSGi for handling the extension-points).
Implementation plan
- only do nodes at first, don't care about node labels or edges. The idea is to have nodes handled in a pluggable way and compare the pluggable architecture with the previous one. (Later, node labels should be handled by the node renderers as well; once the general architecture is working, it should be pretty simple to copy it for the other visual attributes.)
- don't care about benchmarking, at first
- don't care about OSGi framework, at first (If the framework is 'pluggable', using OSGi for the plugging will be trivial)
- don't care about backward-compatibility for IO (for the saved visual styles) -- I'll leave IO last, and will worry about how to convert old-type saved visual styles to the new format then.
Do in following order:
refactor GraphGraphics and GraphRenderer to call separate NodeRenderers for rendering. No vizmap changes yet, and only the original renderers / node shapes kept.
- refactor vizmapper to be able to use pluggable 'mappable visual attributes'.
(Try to simplify the vizmapper API at this step; creating and applying custom visual styles are apparently not simple enough, see Visual_Style_Simplification.) For the vizmapper GUI, take ideas from the Tunables API, as they are very similar (the VisualProperties defined by the renderer will have to include some description of how they can be edited in the vizmapper GUI; try to make this like Tunables).
- Lift some noderenderers from other code (ideally the following: pie nodes, custom bitmap and custom vector graphic image)
- use OSGi for pluggability (this will be about simply figuring out OSGi and how to use it for renderers-as-service)
fix the editor: currently, when generating the palette, the editors (for example, DefaultCytoscapeEditor) set the VisualProperty values (for example, node color, etc.). However, the editors should not reference VisualProperties at all. Instead, they should specify a VisualStyle and abstract attribute values for the shapes that are placed on the palette; the editor framework should then take care of generating the preview icons. (Note that even now it is done something like this, in DefaultCytoscapeEditor, but that code still references VisualProperties.)
- do some benchmarks to show that all this didn't make rendering slow as molasses.
create an UndirectedEdgeRenderer from the current edge rendering, with only endpoint visual properties instead of separate target and source ones.
Things to figure out / Think about
Think about 'Composite VisualProperties': stroke attributes, for example width and dash settings, are set separately and should be separately mappable, but could actually be stored in a single Stroke object. How similar is this to setting colors, where an RGB triad is set in one dialog and stored as one java.awt.Color object? I think that to allow mapping attributes to width, it won't be possible to handle the whole thing as a Stroke; the visual attributes will have to be the parameters of the Stroke.
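A minimal sketch of how this could look (hypothetical names; assumes width and dash pattern are the separately mappable parameters and the java.awt.Stroke is only assembled at draw time):

```java
import java.awt.BasicStroke;

// Sketch of the 'Composite VisualProperty' idea: width and dash pattern are
// separate mappable properties, combined into a single Stroke object only
// when the node/edge is actually drawn. Names are illustrative.
class StrokeProperties {
    float width = 1.0f;
    float[] dash = null;  // null means a solid line

    BasicStroke toStroke() {
        if (dash == null)
            return new BasicStroke(width);
        return new BasicStroke(width, BasicStroke.CAP_BUTT,
                               BasicStroke.JOIN_MITER, 10.0f, dash, 0.0f);
    }
}
```

This way an attribute can be mapped to width alone without the vizmapper ever having to understand Stroke objects.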
Notes, Ideas and Plans
Notes about current rendering and vizmapping architecture
These are mostly notes and reminders for myself (but if I misunderstand something, feel free to correct it):
good description of visual mapping framework: Visual_Mapping_System_Guide
- eclipse's "Open Call Hierarchy" is pretty neat
render.immed contains only GraphGraphics, which is just a collection of methods. It is only called from render.stateful's GraphRenderer.renderGraph(), which is basically the inner loop. These are the places where the large refactorings will be done.
visual style is applied to individual nodes (NodeViews), but is stored in a central NodeDetails object. I.e. DNodeView forwards all data to DNodeDetails, as if a NodeView were one 'row' in the table of NodeDetails. Then, when rendering, a row is read from NodeDetails, since nodes are rendered one-by-one.
This is basically an optimization pattern, to work around java's big memory overhead for small objects. See for example: http://www.c2.com/cgi/wiki?CrossSection
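A toy version of that pattern (illustrative names only): one map per attribute, keyed by node index, with unset entries falling back to a default, so memory is only spent on nodes that differ from the visual-style default:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the 'cross section' optimization: instead of one small object
// per node holding its visual attributes, a central details object stores
// one column (map) per attribute, keyed by node index.
class NodeDetailsTable {
    private final Map<Integer, Double> width = new HashMap<>();
    private final double defaultWidth;

    NodeDetailsTable(double defaultWidth) {
        this.defaultWidth = defaultWidth;
    }

    void setWidth(int nodeIndex, double w) {
        width.put(nodeIndex, w);
    }

    // Nodes without an explicit value fall back to the default.
    double getWidth(int nodeIndex) {
        Double w = width.get(nodeIndex);
        return (w == null) ? defaultWidth : w;
    }
}
```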
Pluggable renderers and pluggable vizmapper
Having truly modular renderers will require a more modular vizmapper: the VisualProperties that Attributes can be mapped to will have to be dynamic, defined by the Renderers. (Currently they are hard-coded in the vizmapper.)
Making the vizmapper pluggable would, however, be good not only for pluggable renderers, but for other PresentationViews as well, since it could allow other types of views (like a matrix view) to use the same vizmapper architecture.
In fact, having a pluggable vizmapper is sort-of-needed to allow plugin-developers to store all state in the ViewModel.
Dependent VisualProperties
We will want to allow the visibility of a VisualProperty to depend on the value of some other VisualProperty. This means that based on the result of vizmapping to VisualProperty A, VisualProperty B may or may not appear in the vizmapper ui.
Two use cases for this:
- Node size locking: in current (pre-refactoring) cytoscape, there
is a boolean isLocked parameter (which is not a VisualProperty) that regulates whether a node has a single size or separate height and width. The height, width and size VisualProperties appear or don't appear in the vizmapper UI based on the value of this parameter.
Renderers: handling NodeRenderers and EdgeRenderers as VisualProperties has the advantage that the vizmapping UI can be used to map attributes to them. This is good. However, it would not make sense to show VisualProperties in the vizmapping UI that will have no effect: only those VisualProperties should appear that are used by the Renderers actually in use.
The 'dependent VisualProperties' feature will be able to handle these (and possibly other) use cases in a general way. After all, handling the first one without hardcoding the VisualProperties in question into the vizmapper UI will result in a mechanism that can also handle use case 2.
Thus, locking the node size will be a (boolean) VisualProperty, and NodeRenderers will be a VisualProperty, just like everything else.
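A minimal sketch of the idea (all names here are hypothetical): each VisualProperty carries a visibility callback that the vizmapper UI consults against the current values of the other VisualProperties:

```java
import java.util.Map;
import java.util.function.Predicate;

// Sketch of a dependent VisualProperty: whether it appears in the vizmapper
// UI is decided by a callback inspecting other VisualProperty values.
class DependentVisualProperty {
    final String name;
    final Predicate<Map<String, Object>> visibleWhen;

    DependentVisualProperty(String name,
                            Predicate<Map<String, Object>> visibleWhen) {
        this.name = name;
        this.visibleWhen = visibleWhen;
    }

    boolean isVisible(Map<String, Object> currentValues) {
        return visibleWhen.test(currentValues);
    }
}
```

For the node-size-locking use case, NODE_WIDTH would be constructed with a callback that returns true only when the (hypothetical) NODE_SIZE_LOCKED value is false.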
Shared VisualProperties
Although each Renderer will define its own VisualProperties, there will be only one list of VisualProperties (i.e., not a two-level renderer -> VisualProperty tree). This will make it possible to share VisualProperties between renderers, i.e. to allow two different Renderers to use the same VisualProperty. This sharing is needed both to avoid bloating the list of VisualProperties and to make it possible to define consistent VisualStyles. For example, take a group view, where collapsed metanodes are shown as pienodes, which show the percentage of the nodes of different colors. In this case one wants to use the mapping defined for the NODE_COLOR of the normal nodes to color the slices of the pienode. The easiest way to implement this is to make the PieNodeRenderer use the NODE_COLOR VisualProperty too, but interpret it as "pie slice color".
This, of course, means that two Renderers might define the same VisualProperty (i.e. the same name) but with a different type. I think in this case the loading of the second plugin will have to be refused. I don't expect this to occur often, since the canonical set of VisualProperties (used by the core NodeRenderers) will be pretty extensive already, and if people define new ones, they will be able to think of unique names for them.
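A sketch of how such a registration check could work (hypothetical names): a second registration with an already-used name is accepted only if the type matches, otherwise it is refused:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of shared-VisualProperty registration: two renderers may register
// a property with the same name only if the value types agree.
class VisualPropertyRegistry {
    private final Map<String, Class<?>> types = new HashMap<>();

    // Returns true if registered (or already present with the same type),
    // false if the name is taken with a conflicting type.
    boolean register(String name, Class<?> type) {
        Class<?> existing = types.get(name);
        if (existing == null) {
            types.put(name, type);
            return true;
        }
        return existing.equals(type);
    }
}
```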
Possible impact on rendering speed
Since the plan is to make the rendering more modular, some things that are currently done with hard-coded methods will have to be done with dynamic lookup. This means that rendering a node will most likely end up a bit slower. However, the overall speed of the renderer does not necessarily have to be slower, since the current implementation does not use some possible optimizations that it could. See RenderingOptimizations. In particular, separation of rendering and repainting (see that page) would mean that if repainting is fast, it doesn't matter much if rendering is slow.
Note that the optimizations mentioned on RenderingOptimizations could be implemented with the current rendering architecture, but the point is that with such optimizations, responsiveness and repainting speed should not depend critically on the rendering speed of nodes, and thus having pluggable renderers should not mean a severe drop in rendering speed.
Possible impact on memory consumption
Since the renderers themselves are stateless, there will be only one renderer object of each kind, so their memory consumption should be minimal. (And as the framework will make it possible to implement them in plugins, it should be possible to load only the ones that are actually used, thus ensuring that unused renderers don't eat memory. Although this really shouldn't be needed.)
The memory needed by the viewmodel, on the other hand, might be larger, simply due to replacing the current statically bound implementation with dynamic lookup: as noted above, the current implementation uses a http://www.c2.com/cgi/wiki?CrossSection pattern to avoid java's large memory overhead on small objects. A similar optimization might be more difficult (but hopefully still possible) with the dynamic, pluggable architecture.
Also, most of the RenderingOptimizations ideas are basically about trading memory for speed, thus optimizing for speed might increase the memory footprint.
Currently I prefer to make the pluggable idea work first, and then think about optimizing it.
Implementing Serializing (IO)
Since the view layer is to be stateless, having pluggable renderers won't impact how serializing is done. Pluggable VisualProperties, however, will (since that is where the difference between this proposal and the currently used code is).
The viewmodel is very similar to CyAttributes: both can be imagined as a table which contains one row for each Node / Edge and one column for each Attribute / VisualProperty. The main difference is that the "native datatypes" are different in the two cases: numbers, strings, lists etc. for CyAttributes; colors, strings, etc. for the viewmodel. In addition, the viewmodel has a 'default value' framework built in: if a value is not defined (either with a mapping or with a ByPass) for a row, the visual-style default is used.
Thus serializing / deserializing (i.e. saving or loading) the state stored in the viewmodel will be basically the same as saving and loading CyAttributes: only the serialization of the 'native datatypes' has to be taken care of. (For example, when serializing a Renderer, simply the java classname of the Renderer can be used, since that is a simple unique identifier of the Renderer.)
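A sketch of the classname idea (illustrative only; real IO would of course go through the session file framework):

```java
// Sketch of serializing a Renderer reference: the fully qualified class
// name is a simple unique identifier, so writing it out and loading it
// back via reflection is enough.
class RendererIO {
    static String serialize(Object renderer) {
        return renderer.getClass().getName();
    }

    static Object deserialize(String className) throws Exception {
        // If the class (i.e. the plugin providing it) is missing, this
        // throws ClassNotFoundException and the framework can fall back
        // to a default visual style.
        return Class.forName(className).getDeclaredConstructor().newInstance();
    }
}
```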
Note that the above does not consider the issue of matching up whatever is read from the file with whatever can be used by the current renderers. I.e: there will be a set of VisualProperties that are present in the file being read, and a set that is defined by the currently loaded Renderers.
This might work somewhat like what happens when serializing data used by a plugin that is stored in CyAttributes: the data is saved and loaded by Cytoscape, but the code that actually uses it is in a plugin. If the plugin is not loaded when the data is loaded, it will simply be sitting among the Attributes, but nothing will use it. If the plugin is loaded but the data is not (i.e. the data is missing), the plugin has to do something (in our case, the framework will automatically supply a default value).
As an extra convenience, the framework should be able to handle the "plugin is missing" case: when a Renderer named in the file is not available, the framework can (in order of difficulty and user friendliness): fall back to a default visual style / warn the user / tell the user which plugin to load / try to load the plugin automatically from the web.
This will work seamlessly if the plugin containing the Renderer is available and complications are only caused by the possibility that the plugin (and thus the renderer) is not available. An alternative would be to pack the Renderer or the plugin that provides the Renderer into the xgmml file, which, however, would mean massive bloat in the most common case and thus would be a pretty bad tradeoff in my (DanielAbel) opinion.
Planned core NodeRenderers
NonRenderer -- don't actually draw the node -- might be useful for hiding the node (but how would boundingPolygon be implemented for this? or make boundingPolygon just strongly suggested, not mandatory, i.e. allow that method to return null?)
TrivialRenderer -- only circle shape, only one color, one size; basically no VisualProperty at all
SimpleShapeRenderer -- basically the current node rendering: star-like shape, with fill color and transparency and a border around it. This Renderer should make it possible to dynamically extend the available Shapes (i.e. plugins can define new polygons to use as shapes); this would be a good example of allowing extensible plugins, i.e. plugins that themselves can use services defined by other plugins.
PieRenderer -- just copy over the implementation available elsewhere; Will be example for mapping a List to a VisualProperty.
- SVGRenderer -- should also be extensible with new svg objects
BitmapRenderer -- also usable for backward-compatibility with CustomGraphics API
maybe in plugin:
- MiMRenderer (molecular interaction maps, biochemical networks): node decorations, binding regions, etc. For examples, see the "species" part of this png and Molecular_Interaction_Maps. This will be a good example of a node-decorator-like renderer, to show how the node decorator idea can be replaced with pluggable renderers.
Planned core EdgeRenderers
pluggable edge renderers would be a less useful feature, but still, here are some ideas: (see also the RichLineTypes rfc for ideas, etc.)
DirectedEdgeRenderer -- basically current renderer
UndirectedEdgeRenderer -- basically the current renderer, but instead of separate Target... and Source... VisualProperties, have only EdgeEnd... VisualProperties to enforce the constraint that both ends of the edge must look the same.
RainbowEdge -- somewhat like pienodes, but for edges: the edge is made of parallel bands of different color, the thickness of which might show some percentage.
FancyStrokeRenderer -- see http://www.java2s.com/Code/Java/2D-Graphics-GUI/CustomStrokes.htm for ideas
NonRenderer -- like for nodes: just don't draw edge
maybe in plugin:
GradientEdgeRenderer -- set color and opacity for the two ends of the edge based on NodeAttributes, and render the edge as a gradient between these.
This will be nice as an example of setting Edge VisualProperties based on NodeAttributes. Note that implementing this nicely might be impossible, in which case the implementation would simply be to transform EdgeAttributes with names like source_edge_color etc.
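A sketch of the rendering part, assuming the endpoint colors have already been computed from the NodeAttributes (names here are hypothetical):

```java
import java.awt.Color;
import java.awt.GradientPaint;

// Sketch of the GradientEdgeRenderer idea: build a java.awt.GradientPaint
// from the colors at the two edge endpoints. Coordinates and colors would
// come from the node views.
class GradientEdgeSketch {
    static GradientPaint edgePaint(float x1, float y1, Color sourceColor,
                                   float x2, float y2, Color targetColor) {
        return new GradientPaint(x1, y1, sourceColor, x2, y2, targetColor);
    }
}
```

The renderer would then set this paint on the Graphics2D before stroking the edge path.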
Non-Graphics2D presentation
To allow easy implementation of different NetworkViews, like 3D or matrix views, the use of Graphics2D and similar APIs should be limited to the presentation layer. NodeRenderer and EdgeRenderer (both the interface and the actual implementations) thus belong in the presentation layer. A common Renderer could, in theory, be placed in the viewmodel layer, but there is no need for that: the viewmodel layer and the viewmodel GUIs don't need to know which VisualProperty corresponds to Renderers.
The viewmodel and vizmap layers must not reference Graphics2D (and don't need to).
Things a NodeRenderer will have to provide
note: this is an initial list of ideas, and not a specification of the NodeRenderer API. I'll revise it as I finalise the NodeRenderer API during the initial prototyping work. Also note that this is Graphics2D-specific, since it is for the stand-alone cytoscape GUI program. Other presentation implementations (web interface, headless, etc.) will have different requirements from their Renderers.
I.e. high-level requirements of the API:
render() for actual rendering
boundingShape() -- required so that layout methods can take exact node size and shape into account; also needed for handling mouse events (so that the Presentation object can figure out which GraphObject got the event). Will return a java.awt.Shape or something similar.
supportedVisualProperties() -- used by the DependentVisualProperty callback of the NODE_RENDERER VisualProperty
And possibly:
renderPreview() -- used not only in the vizmapper GUI, but also in the editor GUI to provide the palette. Note that this will most likely not be needed; instead, previews will be rendered the way default values are shown in the 2.6.x vizmapper: by making a small network and rendering that. Thus special methods in NodeRenderer won't be needed, and ensuring that the preview is accurate will be easier.
Also: to support several levels of detail, maybe one NodeRenderer will actually have several render() methods (or there will be a NodeStyle object that has several NodeRenderer objects, one for each level of detail). In that case, of the above methods, only render() will be in NodeRenderer; the rest will be in NodeStyle.
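Putting the above requirements together as a tentative Java interface (provisional names, not a specification), with TrivialRenderer as the simplest possible implementation:

```java
import java.awt.Graphics2D;
import java.awt.Shape;
import java.awt.geom.Ellipse2D;
import java.util.Collections;
import java.util.Map;
import java.util.Set;

// Tentative shape of the NodeRenderer API; everything here is provisional.
interface NodeRendererSketch {
    // Draw one node from its current visual-property values.
    void render(Graphics2D g, Map<String, Object> visualProperties);

    // Exact outline, for layouts and for hit-testing mouse events.
    Shape boundingShape(Map<String, Object> visualProperties);

    // Names of the VisualProperties this renderer understands; used by the
    // dependent-VisualProperty callback of the NODE_RENDERER property.
    Set<String> supportedVisualProperties();
}

// TrivialRenderer: one circle shape, one size, no VisualProperties at all.
class TrivialRendererSketch implements NodeRendererSketch {
    public void render(Graphics2D g, Map<String, Object> vp) {
        g.fill(boundingShape(vp));
    }

    public Shape boundingShape(Map<String, Object> vp) {
        return new Ellipse2D.Double(0, 0, 20, 20);  // fixed-size circle
    }

    public Set<String> supportedVisualProperties() {
        return Collections.emptySet();
    }
}
```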
PluggableRenderers vs. NodeDecorators
The original idea for PluggableRenderers assumed that each node would be rendered by a single renderer. (I.e. the graphic created by the NodeRenderer would replace the node, instead of adding to it.) Apparently the current custom graphics API adds to the graphic of the node. These two are a bit different, with different use cases; I am going to call the second one NodeDecorator. I don't know all the use cases that custom graphics is used for, but I think (hope) that NodeRenderers would be a better fit for these use cases, and that having them as decorators was more of a work-around.
BackgroundRenderers
Implementing background images might be done by adding BackgroundRenderers, which could use the node and edge positions to draw things. Thus the following Renderers might be possible:
TrivialBackgroundRenderer -- just a background color (basically providing current functionality)
ImageBackgroundRenderer -- paint a fixed png or svg image, scaled to the size of the network but otherwise not caring about the placement of the nodes and edges
DiffusiveColorRenderer -- place colored spots where the nodes are, and then blur that (with, say, gaussian blur) This will create a sort of 'halo' around nodes. This might be useful if node color correlates with node position, i.e. if the groups of the network can be mostly embedded into the two-dimensional layout of the graph.
GroupRenderer -- it might make sense to draw 'rubberbands' around the nodes (somewhat like the rectangles the bubblerouter plugin uses), although I am not sure that a BackgroundRenderer is the best way to implement this.
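A tentative shape for the BackgroundRenderer interface (hypothetical names), with the trivial background-color case as an example; the node positions are passed in so that renderers like DiffusiveColorRenderer can use them:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.geom.Point2D;
import java.util.List;

// Sketch: the background is drawn first, and may use node positions
// (e.g. for the 'halo' effect described above).
interface BackgroundRendererSketch {
    void renderBackground(Graphics2D g, int width, int height,
                          List<Point2D> nodePositions);
}

// Corresponds to TrivialBackgroundRenderer: just a background color.
class TrivialBackgroundSketch implements BackgroundRendererSketch {
    private final Color color;

    TrivialBackgroundSketch(Color color) {
        this.color = color;
    }

    public void renderBackground(Graphics2D g, int width, int height,
                                 List<Point2D> nodePositions) {
        g.setColor(color);
        g.fillRect(0, 0, width, height);  // node positions ignored here
    }
}
```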
Ideas about handling Selection
requirements:
- somehow show selection, i.e. show selected nodes differently
pre-3.0 cytoscape has a 'selection' (by default, yellow) and a 'reverse selection' (by default green, used for selection from the AttributeBrowser); I (DanielAbel) (and some others) think that an arbitrary number of selections should be possible. see MultipleSelections.
note selection is planned to be done with attributes in 3.0 (?)
Possibilities for visual representation:
have NodeRenderer decide how to show selected nodes differently
- mandate that selection is shown by a colored border around the node -- because this border already exists, since the boundingPolygon must be known.
mandate that selection is shown by some other VisualProperty (like node color) -- but not all Renderers will have the same VisualProperties