
This page collects my notes about the crazy "pluggable renderers" idea I posted to the cytoscape-staff mailing list.

The main idea is to gut the current rendering and vizmapping code so that visual attributes (the fact that there is such a thing as borderWidth or node color) are not mentioned anywhere in them, and so that the rendering code doesn't know how to draw a node. Instead, provide an API so that these can be supplied in pluggable modules.
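
As a rough illustration of what such an API could look like, here is a minimal sketch (the interface and method names are hypothetical, not the actual API in the branch):

{{{
import java.awt.Graphics2D;
import java.util.Map;
import java.util.Set;

/*
 * Minimal sketch of a pluggable node renderer (hypothetical names).
 * The core rendering loop would only see this interface: it neither
 * knows that borderWidth exists nor how a node shape is drawn.
 */
public interface NodeRenderer {
    /** The visual attribute names this renderer understands, e.g. "borderWidth". */
    Set<String> getSupportedVisualAttributes();

    /** Draw one node centered at (x, y) from a bag of attribute values. */
    void renderNode(Graphics2D g, double x, double y,
                    Map<String, Object> visualAttributes);
}
}}}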

The current plan is to produce a proof-of-concept prototype by the Cytoscape retreat in Toronto (which means in one week).

The svn branch is at svn+ssh://grenache.ucsd.edu/cellar/common/svn/cytoscape3/branches/abeld-gsoc/dev/pluggable-renderers

Currently done

So far I have been working from the inside out (from the inner loop outwards):

  • node rendering is done in ShapeRenderer.java, separate from the inner rendering loop.

  • visual attributes of the node are stored dynamically (in a DNodeView.visualAttributes HashMap), not in NodeDetails.
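
Roughly, this dynamic storage amounts to the following (a simplified sketch; the accessor names are made up and the real code differs in detail):

{{{
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of per-node dynamic attribute storage; accessor
// names are hypothetical.
public class DNodeView {
    // attribute name -> value, e.g. "borderWidth" -> Float
    private final Map<String, Object> visualAttributes =
        new HashMap<String, Object>();

    public void setVisualAttribute(String name, Object value) {
        visualAttributes.put(name, value);
    }

    public Object getVisualAttribute(String name) {
        return visualAttributes.get(name);
    }
}
}}}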

Immediate TODO

  • start doing from the "outside in": refactor how Calculators are loaded so that the borderWidth calculator etc. is not loaded, since these will be supplied by the NodeRenderer instance. Instead, load the NodeRenderer the way calculators are loaded now, from a .props file (see the sketch after this list).

  • Refactor vizmapper so that borderWidth, etc. calculators are only added to the list of calculators when the renderer has been selected in the vizmapper.
  • study how vizmapper bypass is done now
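
The .props-based loading could look roughly like this (a sketch only; the property key, file name and class names are made up, and the calculator-loading code it imitates is more involved):

{{{
import java.io.FileInputStream;
import java.util.Properties;

// Sketch: instantiate the NodeRenderer named in a .props file, the
// way calculators are loaded now. Key and file name are hypothetical.
public class RendererLoader {
    public static NodeRenderer loadFromProps(String path) throws Exception {
        Properties props = new Properties();
        props.load(new FileInputStream(path));
        String className = props.getProperty("nodeRenderer.class");
        return (NodeRenderer) Class.forName(className).newInstance();
    }
}
}}}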

Implementation plan

  • only do nodes at first; don't care about node labels or edges. The idea is to have nodes handled in a pluggable way and compare the pluggable architecture with the previous one. (Later the node labels should be handled by the node renderers as well; once the general architecture is working, it should be pretty simple to copy it for the other visual attributes.)
  • don't care about benchmarking, at first
  • don't care about OSGi framework, at first

Do in the following order:

  1. refactor GraphGraphics and GraphRenderer to call separate NodeRenderers for rendering. No vizmap changes yet, and only the original renderers/nodeshapes kept (see the sketch after this list).

  2. refactor the vizmapper to be able to use pluggable 'mappable visual attributes'. (Try to simplify the vizmapper API at this step; creating and applying a custom visual style are apparently not simple enough; see ["Visual Style Simplification"].)
  3. Lift some node renderers from other code (ideally the following: pie nodes, custom bitmap and custom vector graphic images)
  4. use OSGi for pluggability (this will be about simply figuring out OSGi and how to use it for renderers-as-service)
  5. do some benchmarks to show that all this didn't make rendering slow as molasses.
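
For step 1, the refactored inner loop would, very roughly, dispatch like this (a sketch under made-up names; the lookup methods are left abstract because they are hypothetical, and the real renderGraph() is far more involved):

{{{
import java.awt.Graphics2D;
import java.util.Map;

// Sketch of the refactored inner loop: the renderer core no longer
// draws shapes itself but dispatches to each node's NodeRenderer.
abstract class PluggableGraphRenderer {
    abstract NodeRenderer rendererFor(int node);
    abstract Map<String, Object> attributesFor(int node);
    abstract double xPosition(int node);
    abstract double yPosition(int node);

    void renderVisibleNodes(Graphics2D g, int[] visibleNodes) {
        for (int i = 0; i < visibleNodes.length; i++) {
            int node = visibleNodes[i];
            rendererFor(node).renderNode(g, xPosition(node),
                                         yPosition(node),
                                         attributesFor(node));
        }
    }
}
}}}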

Things to figure out

  • place in the model / viewmodel / view division (I'll have to study the current plan for the viewmodel API)
  • are the currently used customGraphics a kind of decorator on nodes? (I.e., are they used to replace or to add to nodes?) If so, supporting such 'node decorators' might be needed.

  • Stroke attributes, for example width and dash settings, are set separately and should be separately usable as mapping targets, but could actually be stored in a single Stroke object. How similar is this to setting colors, where an RGB triad is set in one dialog and stored as one java.awt.Color object? I think that to allow mapping attributes to width, it won't be possible to handle it as a Stroke; the visual attributes will have to be the parameters of the Stroke.
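
To illustrate that last point: width and dash pattern would be stored (and mapped) as separate attributes, and the Stroke would only be assembled at render time, roughly like this (the attribute names are made up):

{{{
import java.awt.BasicStroke;
import java.awt.Stroke;
import java.util.Map;

// Sketch: the mappable visual attributes are the stroke parameters;
// the java.awt.Stroke object is only built when rendering.
class StrokeBuilder {
    static Stroke buildStroke(Map<String, Object> attrs) {
        float width = ((Float) attrs.get("borderWidth")).floatValue();
        float[] dash = (float[]) attrs.get("borderDash"); // null = solid
        if (dash == null)
            return new BasicStroke(width);
        return new BasicStroke(width, BasicStroke.CAP_BUTT,
                               BasicStroke.JOIN_MITER, 10f, dash, 0f);
    }
}
}}}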

Notes about current rendering and vizmapping architecture

These are mostly notes and reminders for myself (but if I misunderstand something, feel free to correct it):

  • good description of visual mapping framework: ["Visual_Mapping_System_Guide"]
  • Eclipse's "Open Call Hierarchy" is pretty neat
  • render.immed is only GraphGraphics, which is only a collection of methods. It is only called from render.stateful's GraphRenderer.renderGraph(), which is basically the inner loop. Basically, these are the places where the large refactorings will be done.

  • visual style is applied to individual nodes (NodeViews), but is stored in a central NodeDetails object. I.e., DNodeView forwards all data to DNodeDetails, as if a NodeView were one 'row' in the table of NodeDetails. Then, when rendering, a row is read from NodeDetails, since nodes are rendered one by one.
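
In code, that pattern looks roughly like this (a much-simplified sketch showing a single attribute; the real method names and bodies differ):

{{{
import java.util.HashMap;
import java.util.Map;

// Sketch of the current pattern: per-node data lives in one central
// "table" (DNodeDetails) keyed by node index; each DNodeView is just
// a view onto one row.
class DNodeDetails {
    private final Map<Integer, Float> borderWidths =
        new HashMap<Integer, Float>();

    void setBorderWidth(int node, float w) { borderWidths.put(node, w); }

    float borderWidth(int node) {
        Float w = borderWidths.get(node);
        return w == null ? 1.0f : w.floatValue(); // fall back to a default
    }
}

class DNodeView {
    private final int nodeIndex;
    private final DNodeDetails details;

    DNodeView(int nodeIndex, DNodeDetails details) {
        this.nodeIndex = nodeIndex;
        this.details = details;
    }

    // forwards to the central table -- this view is one 'row'
    void setBorderWidth(float w) { details.setBorderWidth(nodeIndex, w); }
    float getBorderWidth() { return details.borderWidth(nodeIndex); }
}
}}}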

Possible impact on rendering speed

Since the plan is to make the rendering more modular, some things that are currently done with hard-coded methods will have to be done with dynamic lookup. This means that rendering a node will most likely end up a bit slower. However, the overall speed of the renderer does not necessarily have to be slower, since the current implementation does not use some possible optimizations that it could:

(Here I use 'rendering' for 'turning visual attributes into bitmaps' and 'repainting' for copying the bitmap images of nodes in the right place to show the network. The point is that if repainting is fast, it doesn't matter much if rendering is slow.)

(I hope that I am not misinterpreting the behaviour of the current code in the description below.)

  • caching of rendered nodes: rendering a node, especially if it has a text label, is much slower than copying the image of the node. Thus, (for a given zoom level) rendering should be done only once, to a cached image, and when drawing the graph that cached image simply has to be copied to the right place (see the sketch after this list).
  • when zooming the view, instead of re-rendering everything to redraw the graph, simply scale the existing view and then re-render it in an idle callback. (This is what evince does, for example.) This would mean that when doing a zoom, first the view would look blocky (since a bitmap image was scaled as fast as possible), then it would clear up. In effect this allows sacrificing (temporary) visual correctness for responsiveness.
  • There is no need to re-render the graph when panning it: the previously rendered image (of the visible part of the graph) simply has to be re-painted in a different place, and only the new part (which is now visible but wasn't previously) has to be rendered. For figuring out what to render in this second step, the same code that currently renders only the visible part can be used.
  • When moving a node, the nodes and edges that are not moved should not be re-rendered: the stationary nodes and edges can simply be treated as background, rendered once to an image when the move begins, and only repainted from this image afterwards. Currently the repaint speed, and thus the responsiveness of the GUI, is slower when a node is moved in a large network than in a small network, and there is no intrinsic reason for that.
  • LOD could also depend on interaction: if re-rendering is needed often despite the tricks mentioned above (for example, re-rendering of some edges when their endpoint node is moved), the LOD could be set lower (possibly only for those objects, since for the rest, which don't get re-rendered, this is not needed).
  • rendering should be done in the background, not in the main thread (if possible): see also Matthias Reimann's mail to the cyto-staff list on July 2, 2008.
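
A sketch of the node-image cache mentioned in the first bullet above (all names are hypothetical, and cache invalidation on attribute change is omitted):

{{{
import java.awt.image.BufferedImage;
import java.util.HashMap;
import java.util.Map;

// Sketch of the caching idea: render each node to an offscreen image
// once per (node, zoom level); repainting then just copies the image.
class NodeImageCache {
    private final Map<String, BufferedImage> cache =
        new HashMap<String, BufferedImage>();

    BufferedImage getRendered(int node, double zoom, NodeRenderer renderer,
                              Map<String, Object> attrs) {
        String key = node + "@" + zoom;
        BufferedImage img = cache.get(key);
        if (img == null) {
            int size = (int) Math.ceil(64 * zoom); // made-up base node size
            img = new BufferedImage(size, size, BufferedImage.TYPE_INT_ARGB);
            renderer.renderNode(img.createGraphics(),
                                size / 2.0, size / 2.0, attrs);
            cache.put(key, img);
        }
        return img;
    }
}
}}}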

Note that all of these could be implemented with the current rendering architecture, but the point is that with such optimizations, the responsiveness and repainting speed should not depend critically on the rendering speed of nodes, and thus having pluggable renderers should not mean a severe drop in rendering speed.
