Mini-Retreat Two
- November 14-15, 2008
Genentech Hall, 4th floor, [http://pub.ucsf.edu/missionbay/ UCSF Mission Bay], 600 16th St. San Francisco, CA 94158
- Alex's cell: 415-328-3967
Logistics
- Flights: arrive Thursday night, depart Saturday evening
- The Official Hotel!
The Queen Anne is particularly recommended [http://www.queenanne.com/] ($125; it's not close by, however, so we would have to organize pick-ups/drop-offs). [http://channels.isp.netscape.com/whatsnew/package.jsp?name=fte/bedbugs/bedbugs&floc=wn-nx Check for bed bugs]! Ask for the UCSF group rate!
Transportation: You can take [http://www.bart.gov/guide/airport/inbound_sfo.aspx BART] to and from the SFO airport. And there is the [http://transit.511.org/schedules/index.aspx#m1=S&m2=rail&routeid=26606&cid=SF K/T-line train] that connects downtown SF to the campus (takes ~20 min). If you are staying at the Queen Anne, check out [http://www.queenanne.com/resources/transport.html this shuttle].
Schedule: THUR - nothing planned; FRI - all day meeting, breakfast and lunch on campus, dinner reservations for 6:30PM at [http://www.kohsamuiandthemonkey.com/dinner_menu.htm Kohsamui & The Monkey (thai)] ; SAT - pastries and coffee provided in the morning, lunch on campus.
- We'll be on skype... and email, of course.
Topics for Discussion
ViewModel
- How does this relate to widely diverse presentation use cases?
- lightweight web rendering
- use of Cytoscape model components in other systems/products without view
- Daniel's pluggable renderers
- Questions for Daniel:
- What are the possible memory costs for having pluggable renderers (not detailed -- approximate)?
- What are the performance implications?
- How do we serialize these?
My (DanielAbel) notes are at DanielAbel/PluggableRenderers, which discuss these issues a bit.
- Begin Presentation API
- UI Events and how they interact with Cytoscape Events
- Work Layer (Command, Tunables)
Some of my questions and ideas: ["Cytoscape 3.0/MiniRetreatTwo/DanielsQuestions"] (DanielAbel)
Agenda
Friday
9:15AM - 6:00PM
(Note: the ISB group can't stay for Saturday, so we need the view model nailed down, plus some talk about the web stuff, on Friday)
- View Model
- Discussion of web-capability
- Presentation
Saturday
9:30AM - 3:00PM
- Renderers
- UI Events
- CyDataTables higher order semantics / operations (JOIN, etc)
Goal
Finish the ViewModel
Pre-meeting Comments
Should the VizMapper and editor be covered at this meeting, or is this for a later meeting? -> Later.
Meeting Notes
Review of architecture: (photo will be attached)
- UI - Swing, headless
- Application - will be thin layers with a bunch of Swing code to handle GUI or cmd line input
- Work -
- I/O -
- View Model - State (color), Selection/Highlighting, Hide/Restore, Layout coordinates, supports serialization
- Presentation -
- Model -
Examples:
- Layouts/Tunables - UI is inferred (Swing or cmd line)
- MCode - can switch to Tunables? Pros and cons.
- Selection:view - UI selection of graphics with visual feedback
- Selection:subsets - selection gets passed to model
- Presentation-based selection - presentation fires selection event, application listens, application tells View Model to define subset (a list that can be serialized), then Presentation layers can respond to subsets
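A minimal Java sketch of this presentation-based selection flow; every name here (SelectionEvent, defineSubset, respondToSubset, and so on) is a hypothetical placeholder, not an agreed API:
{{{
// Hypothetical sketch of the "presentation fires, application listens,
// View Model stores the subset" flow. All names are placeholders.
import java.util.List;

interface SelectionEvent {
    List<Long> getSelectedSuids();          // SUIDs picked in the UI
}

interface SelectionListener {
    void handleSelection(SelectionEvent e);
}

interface ViewModel {
    // A subset is just a serializable list of view identifiers.
    void defineSubset(String name, List<Long> suids);
}

interface Presentation {
    void addSelectionListener(SelectionListener l);
    void respondToSubset(String name);      // e.g. highlight the subset
}

// The Application layer wires the two together.
class SelectionCoordinator implements SelectionListener {
    private final ViewModel viewModel;
    private final Presentation presentation;

    SelectionCoordinator(ViewModel vm, Presentation p) {
        this.viewModel = vm;
        this.presentation = p;
        p.addSelectionListener(this);
    }

    public void handleSelection(SelectionEvent e) {
        // Presentation fired the event; the application interprets it, the
        // View Model records the subset, and presentations can respond to it.
        viewModel.defineSubset("current-selection", e.getSelectedSuids());
        presentation.respondToSubset("current-selection");
    }
}
}}}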
Questions:
- Will the plugin dev have to write multiple UIs for each display type (Swing, web, cmd)? Will Tunables suffice for plugins? Or should we have an abstract UI layer that handles a broader set of UI elements for the diverse front-ends?
- What exactly is the definition of the web front-end? This will influence how we design the back-end.
- Does selection happen in Model?
- Could the View Model be absorbed by the Model as namespaces?
- Do we need to be more generic than "subset", to support other attributes at the View Model layer?
VizMap:
- Visual Style - define Visual Property mappings, serializable, sits in VizMap
- Mapping Calculator - given a View Model and Attributes, it tells you how to map; one calculator per attribute, provided when created; get/set attrName, Visual Property, apply(view) (see the sketch at the end of this section)
- Can you get at a discrete or continuous mapper from cmd line? Too much of the control is in the VizMap UI. This needs to be made public in order to support headless Visual Style creation.
- Start up the renderer; the renderer provides its list of Visual Properties to the VizMap UI so the UI can restrict the choices appropriately
- Does Renderer handle x,y,z? Are they Visual Properties?
- User space: arbitrary coord system, +/- infinity
- Device space:
- Even headless mode has a renderer, e.g., for pdf export. A basic renderer can handle x,y,z for headless.
- Layout sets coords by setting Visual Properties. The default renderer will provide the basic, core set of Visual Properties.
- Where are these canonical VPs stored and how does the end user access these?
- Should be able to query renderer on what is supported AND should be able to access canonical set of VPs.
- Store immutable status of a VP in View Model so it is not overwritten by VizMap.
- Node renderers are mapped by VizMap via attributes per node; the node renderer will determine which VPs are available in the UI.
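A rough Java sketch of the Visual Property / Mapping Calculator idea above (one calculator per attribute, get/set attrName and Visual Property, apply(view)); the types and signatures are illustrative assumptions, not a committed API:
{{{
// Illustrative sketch only; none of these types is a committed API.
interface VisualProperty<T> {
    String getName();          // e.g. "NODE_FILL_COLOR", "NODE_X_LOCATION"
    T getDefaultValue();
}

interface View {
    <T> void setVisualProperty(VisualProperty<T> vp, T value);
    Object getAttribute(String attrName);   // attribute lookup for this node/edge
}

// One calculator per attribute, given its attribute when created.
// Layouts (and headless code) could set x,y,z the same way: by applying
// coordinate Visual Properties, no VizMap UI required.
abstract class MappingCalculator<T> {
    private String attrName;
    private final VisualProperty<T> visualProperty;

    MappingCalculator(String attrName, VisualProperty<T> vp) {
        this.attrName = attrName;
        this.visualProperty = vp;
    }

    String getAttrName() { return attrName; }
    void setAttrName(String attrName) { this.attrName = attrName; }
    VisualProperty<T> getVisualProperty() { return visualProperty; }

    // Given a view and its attributes, compute the mapped value and apply it.
    void apply(View view) {
        Object attrValue = view.getAttribute(attrName);
        view.setVisualProperty(visualProperty, map(attrValue));
    }

    // Discrete and continuous mappers differ only in how they implement this,
    // so both could also be constructed headlessly from the cmd line.
    protected abstract T map(Object attrValue);
}
}}}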
Web presentation strategy:
- Functionality could be handled by JSON, passed back and forth between a web client and a Work layer. JSON (JavaScript Object Notation) provides a way to pass data structures as objects, like Perl's structures, e.g., hashes of hashes of arrays. (A small example message is sketched below.)
- Node context menu, search, add node
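As a purely hypothetical illustration of the kind of message that could travel between a web client and the Work layer, a node-selection command might be encoded roughly like this (field names invented for illustration):
{{{
// Hypothetical JSON message for the Work layer; no format has been agreed.
public class JsonMessageExample {
    public static void main(String[] args) {
        String selectNodesRequest =
            "{\n" +
            "  \"command\": \"selectNodes\",\n" +
            "  \"networkId\": 42,\n" +
            "  \"nodeSuids\": [101, 102, 103]\n" +
            "}";
        // The web client would send this, the Work layer would parse it, run
        // the command, and reply with another JSON object (e.g. the resulting
        // subset) -- much like passing Perl-style nested hashes/arrays around.
        System.out.println(selectNodesRequest);
    }
}
}}}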
Presentation Layer:
- Current methods in Graph View need to be split into View Model, Presentation, Renderers:
- background, zoom, getCenter - VM
- layers?
- get/set node/edge views; ditch iterators? - VM
- selection; ditch enable/disable in interface? - VM
- hide/show view objects - VM
- fitContent - P
- updateView - P
- fitSelected - P
- setGraphLOD - R
- getComponent, getContainer -> drawViewGraph - P, then events are interpreted by the Application. When/how does the presentation get its context? At instantiation? Or via a subclass? The Swing subclass impl could have a more complex drawViewGraph, or take in the context (a JComponent) at instantiation.
- printing - P. It's really just another Presentation derived from View Model.
- Presentation: getNetworkView, draw, update, fitContent, fitSubset (see the interface sketch below)
- Should Presentation listen for View Model changes? Or, should the View Model be listening and then call Presentation.update?
- Should consider having special cases for setting x,y,z Visual Properties to improve performance on move events. We don't want to open the door to having a bunch of special sets, but this case is isolated. Ultimately, we should benchmark this option and see if the performance gains are significant.
- UI events (click, double-click, etc). Presentation needs to fire those events. Application listens, interprets, and calls methods in the View Model. These are Cytoscape "clicks", not low-level Swing events.
- EventConsumers must be resolved for one-to-one mapping with functionality. If two plugins both consume double-click, the user will have to pick one. EventListeners, on the other hand, can co-exist, no problem. How do we want to share clicks? How do we want users to choose? (A sketch of the consumer/listener distinction follows the Presentation sketch below.)
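A minimal sketch, assuming the method list above, of what a Presentation interface could look like; this is a discussion aid, not a decided API:
{{{
// Sketch of the proposed Presentation interface; names mirror the bullets
// above but nothing here is final.
interface CyNetworkView { /* placeholder for the View Model of one network */ }

interface Presentation {
    CyNetworkView getNetworkView();    // the View Model this presentation draws
    void draw();                       // full (re)draw from the View Model
    void update();                     // refresh after View Model changes
    void fitContent();                 // fit the whole network in the viewport
    void fitSubset(String subsetName); // fit a named subset (e.g. the selection)
}

// Open question from the notes: does the Presentation register itself as a
// listener on the View Model, or does the View Model call update() on its
// presentations when it changes? Either way, printing and PDF export are
// "just another Presentation" derived from the same View Model.
}}}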
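To make the consumer-versus-listener distinction concrete, here is a hedged sketch under the assumption that a consumer claims an interaction exclusively while any number of listeners may observe it; the types are hypothetical:
{{{
// Hypothetical sketch of the EventConsumer / EventListener distinction.
import java.util.ArrayList;
import java.util.List;

interface NodeDoubleClickEvent { long getNodeSuid(); }

interface EventListener { void handle(NodeDoubleClickEvent e); }   // many may co-exist
interface EventConsumer { void consume(NodeDoubleClickEvent e); }  // exactly one owner

class DoubleClickDispatcher {
    private final List<EventListener> listeners = new ArrayList<EventListener>();
    private EventConsumer consumer;   // one-to-one mapping with the functionality

    void addListener(EventListener l) { listeners.add(l); }

    void setConsumer(EventConsumer c) {
        if (consumer != null)
            // Two plugins want double-click: the user (or some policy) must pick one.
            throw new IllegalStateException("double-click is already consumed");
        consumer = c;
    }

    void dispatch(NodeDoubleClickEvent e) {
        for (EventListener l : listeners) l.handle(e);   // everyone can observe
        if (consumer != null) consumer.consume(e);       // only the owner acts
    }
}
}}}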
Layers & Annotations:
- We need z-order in the View Model.
- Do we still need layers?
- Do we allow describing node shape by SVG in the View Model?
- Can we shoe-horn annotations into nodes?
- Should they be put into a subnetwork separate from CyNetwork?
- Should we super class node to Shape and have nodes and annotations? Create an Annotation class? Or just have annotations be nodes?
- EdgeViews, NodeViews, and Identifiables are basically the same. Maybe CyDecoration should extend Identifiable.
- Should there be a getViews() that gets Nodes, Edges, and Annotations? Then getSUID will also retrieve Annotations. And you can get View Events and figure out which object changed.
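A small hypothetical sketch of the getViews() idea in the last bullet, treating node views, edge views, and annotations uniformly as identifiable views; nothing here is settled:
{{{
// Hypothetical sketch only: a common supertype so nodes, edges, and
// annotations can all be retrieved (and identified by SUID) the same way.
import java.util.List;

interface Identifiable { long getSUID(); }

interface View extends Identifiable { }        // common supertype for everything drawn

interface NodeView extends View { }
interface EdgeView extends View { }
interface AnnotationView extends View { }      // or: CyDecoration extends Identifiable

interface CyNetworkView {
    List<View> getViews();   // nodes, edges, and annotations together
    // A view event then only needs to carry a SUID; a listener can look the
    // object up via getViews() and figure out which object changed.
}
}}}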