Posted to batik-users@xmlgraphics.apache.org by Justin Couch <ju...@vlc.com.au> on 2002/03/15 17:15:45 UTC

SVG Toolkit requirements (from a 3D graphics programmer's perspective)

Well it looks like things started out the wrong way, so I want to 
document exactly what I/we are looking for in an SVG toolkit. The 
perspective we bring is that of 3D graphics programmers wanting to 
include SVG content in their world. First the unadulterated 
requirements, then our position statement.

1. Works in a mixed-content document environment
1.1 SVG will be only one of many different XML-based content types we 
need to render.
1.1.1 Guaranteed SVG will not be the primary document type.
1.1.2 Other XML document types will be X3D, XHTML and MathML as a 
minimum (others, such as HumanML and VoiceML, may also follow)

1.2 Controllable by external scripting. Engine & language unspecified
1.2.1 Ideally, the ability to plug in our own interpreters. There may be 
custom scripting language types
1.2.2 Use of Rhino in a single shared context is a high priority, as our 
other file formats already use Rhino for ECMAScript support (a sketch of 
the shared-scope idea follows this list).
1.2.3 Scripting may be external to the complete process space 
controlling the rendering space - up to, and including, remote processes 
on physically separate machines.
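
To make 1.2.2 concrete, here is roughly what we mean by a single shared 
Rhino scope. This is just an illustrative sketch against the stock Rhino 
embedding API; the SharedScriptHost class is our own invention, not 
anything that exists in Batik today.

import org.mozilla.javascript.Context;
import org.mozilla.javascript.Scriptable;

public class SharedScriptHost {
    // One scope shared by every document type (X3D, SVG, XHTML...) so
    // scripts in the different content types see the same globals.
    private final Scriptable sharedScope;

    public SharedScriptHost() {
        Context cx = Context.enter();
        try {
            sharedScope = cx.initStandardObjects();
        } finally {
            Context.exit();
        }
    }

    /** Evaluate a script fragment taken from any of the loaded documents. */
    public Object run(String source, String docName) {
        Context cx = Context.enter();
        try {
            return cx.evaluateString(sharedScope, source, docName, 1, null);
        } finally {
            Context.exit();
        }
    }
}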

1.3 User input will be externally driven and filtered before it gets to 
the toolkit
1.3.1 Mouse and keyboard
1.3.2 Other 3D devices such as gloves/wands, with appropriate device 
coordinate system transformation. They will not drive events into the 
system using the standard AWT listener model.

1.4 Timing and animation control will be externally driven.
1.4.1 All clocking will be determined externally. Any runtime engine 
driving internally defined scripting must take time ticks only from the 
externally defined clock (see the sketch after 1.5).
1.4.2 Render page flipping (double or triple buffered) will be 
externally managed. Rendering may involve multipass and/or compositing 
processes with appropriate filters defined at each pass (think 
Blinn/DOT3 bump mapping here) or SVG-over-video.

1.5 Render to a common rendering surface, clipping bounds to be 
externally specified. Surface to be externally specified, but will be at 
least BufferedImage and VolatileImage.
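
To give a feel for what 1.4 and 1.5 add up to, here is a minimal sketch. 
Only the java.awt.image surface handling is standard API; the 
ExternallyClockedRenderer interface is a hypothetical hook, not 
something Batik currently exposes.

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

/** Hypothetical hook: something the toolkit draws when it is told to. */
interface ExternallyClockedRenderer {
    void tick(long externalTimeMs);          // the clock comes from outside
    void render(Graphics2D g, int w, int h); // draw into a caller-owned surface
}

class SurfaceDriver {
    private final BufferedImage surface =
        new BufferedImage(512, 512, BufferedImage.TYPE_INT_ARGB);

    /** One frame, driven entirely by the caller's clock and surface. */
    BufferedImage frame(ExternallyClockedRenderer r, long now) {
        r.tick(now);
        Graphics2D g = surface.createGraphics();
        try {
            r.render(g, surface.getWidth(), surface.getHeight());
        } finally {
            g.dispose();
        }
        return surface; // the caller composites, flips pages, feeds video, etc.
    }
}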

2. Componentised architecture
2.1 Pluggable loading system
2.1.1 Ability to use any DOM parser to provide content. Minimum: the 
stock JDK system/JAXP 1.1 (see the sketch at the end of this list).
2.1.2 URL content resolution is to be separate. For example, X3D uses 
URNs, and we don't use java.net.URL to load external content.
2.1.3 Externally defined images to accept both ImageProducer output and 
BufferedImage from the loader. Ideally, animated images such as MPEG/MNG 
would be supported as well (probably using JMF for the internal 
rendering surface).
2.1.4 Cache management to be pluggable or removable so that external 
caching systems may be used in its place.
2.1.5 Ability to create internal scenegraph structure from a non-XML 
source. For example, a database may drive the input, which is then 
rendered out as SVG (think Arc/Info Shape files or a similar GIS system 
as input)
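
The sketch promised in 2.1.1/2.1.2, using nothing but plain JAXP 1.1. 
The PluggableLoader class is our own illustration; the point is only 
that the parser and the entity resolution are handed in from outside, so 
a URN-aware resolver never has to go near java.net.URL. (Resolution of 
non-entity content such as images would still need a separate hook.)

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.EntityResolver;
import org.xml.sax.InputSource;

class PluggableLoader {
    /** Parse with whatever JAXP parser is installed, resolving external
        references through a caller-supplied (e.g. URN-aware) resolver. */
    Document load(InputSource content, EntityResolver resolver)
            throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true);
        DocumentBuilder db = dbf.newDocumentBuilder();
        db.setEntityResolver(resolver);
        return db.parse(content);
    }
}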

2.2 Renderer ability to operate standalone
2.2.1 Renderer system must be in fire & forget mode. Once the DOM is 
loaded into the rendering system, the DOM must be removed from memory. 
Rendering engine to operate on converted content only.
2.2.2 Changes in the rendering core may be driven by non-DOM input. For 
example, a scripting engine may access the core directly rather than 
going through the DOM layer.
2.2.3 Minimal memory footprint. Ability to control what is loaded and 
when, and when to discard bits.

2.3 Renderer to operate on many different output device types
2.3.1 Straight single image
2.3.2 Direct to a video capture device (JMF output)
2.3.3 The null output device - i.e. no rendering, just the scenegraph, 
which is still animated in real time.
2.3.4 Paged output rendering. SVG content may be running over extremely 
large coordinate system areas, so the ability to either clip or render 
to a sub-image size is required (think map-of-the-earth visualisations 
where we are only loading small parts of the tile model at any one time)
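
A minimal sketch of what 2.3.4 means in Graphics2D terms. The 
SceneDrawer callback is hypothetical; the transform/clip handling is 
just standard java.awt, shown here to make the "render a sub-image of a 
huge coordinate space" idea concrete.

import java.awt.Graphics2D;
import java.awt.geom.Rectangle2D;
import java.awt.image.BufferedImage;

class TiledRenderer {
    /** Hypothetical callback: draws the full scene in its own user space. */
    interface SceneDrawer { void draw(Graphics2D g); }

    /** Render only the user-space rectangle 'view' into a tileW x tileH image. */
    BufferedImage renderTile(SceneDrawer scene, Rectangle2D view,
                             int tileW, int tileH) {
        BufferedImage tile =
            new BufferedImage(tileW, tileH, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = tile.createGraphics();
        try {
            // Map the requested world-space window onto the tile, then clip,
            // so the drawer can be handed the whole scene graph untouched.
            g.scale(tileW / view.getWidth(), tileH / view.getHeight());
            g.translate(-view.getX(), -view.getY());
            g.setClip(view);
            scene.draw(g);
        } finally {
            g.dispose();
        }
        return tile;
    }
}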

3. Scenegraph management
3.1 If SVG is the parent document and contains an embedded non-SVG child 
content type, the toolkit must be able to invoke an external renderer 
and composite the output back to a single surface.
3.2 If SVG is not the parent document, must be able to take all 
directions from parent rendering engine.
3.3 Time-zero loader. Load the scene graph, render it once, throw it 
away. All dynamic aspects are ignored to the point of not even starting 
the appropriate scripting engines etc.
3.4 Mark parts of the subgraph as "not traversable" or only render 
subgraphs of the entire scenegraph.
3.5 Link management to be externally managed. If the user "clicks" on a 
link in the SVG content, that results in a message being passed out of 
the SVG context for an external piece of code to decide what to do next.
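
For 3.5, all we are after is something like the following; the 
LinkHandler interface is purely hypothetical, sketched here only to show 
how thin the hook needs to be.

/** Hypothetical hook: the toolkit reports the activation, the host decides. */
interface LinkHandler {
    /** href exactly as authored in the SVG content. */
    void linkActivated(String href);
}

/** Example host-side handler: defer the navigation decision elsewhere. */
class DeferringLinkHandler implements LinkHandler {
    public void linkActivated(String href) {
        // Hand the decision to the surrounding application - an X3D
        // browser, an XHTML user agent, or even a remote process.
        System.out.println("link activated, deferring to host: " + href);
    }
}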


Our assessment of Batik:

Summary: A toolkit with a narrow focus, designed primarily for 2D GUI 
work and file format conversion.
Various bullet points I made as I was working my way through the code:
- Evidence within the toolkit suggests that it was built to confirm that 
the spec was implementable, but wasn't really expected to be used in 
production systems.
- Documentation, while initially promising (nice high-level arch docs), 
quickly turns dismal. Almost zero javadoc. Nothing explaining how each 
component works, either internally or in terms of interface 
expectations. No documentation of external dependencies and/or system 
requirements/limitations.
- Inflexible APIs. Not built using standard design patterns. Ignores the 
most basic MVC principles. Actually, most of the "design" looks like it 
grew organically as the developers decided to add in more capabilities 
to do bigger and better demos (see SVGCanvas and how many different 
"awt/swing" package trees there are).
- NIH rendering systems - particularly for image blending and cropping. 
Could be due to pre-1.3 requirements of the original code, but not sure.
- SVGGraphics2D idea is brilliant - steal the concept for Xj3D.
- Can it be printed? Not sure, but gut feel says it wouldn't work
- The GVTBuilder/BridgeContext concept is overly verbose. Why do I need 
all this setup crap? All I want to do is pass it my F@*($% DOM and get a 
GraphicsNode back (a rough sketch of that call sequence follows this 
list).
- BridgeContext requires the SVG DOM interfaces; I wonder how much 
effort it would take to cut my own that takes any DOM. Probably a week 
or so, but it would require modifying the core GVT system too, to be 
standalone.
- Too many threads for a useful application. Every panel has its own 
thread. A reasonable-sized application with 20 screens would fall over 
under the thread context-switch load. There is no way of 
stopping/restarting the threads when the canvas becomes non-visible. 
Animated content would still keep going!
- Own caching and content loading system. Doesn't want to know about the 
standard core Java APIs, or that a user might want to do it themselves.
- Wasteful of resources. Lots of creating temporary objects and then 
throwing them away. This hurts performance - particularly the image 
transcoding. Two copies of every image if using alpha backgrounds!
- Not robust. Crashes on erroneous input (see point about using external 
DOMs)
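
For anyone following along, the GVTBuilder/BridgeContext call sequence 
complained about above looks roughly like this. This is a sketch from 
memory against the current Batik bridge API (package and class names may 
drift between releases), not a recommended recipe:

import org.apache.batik.bridge.BridgeContext;
import org.apache.batik.bridge.GVTBuilder;
import org.apache.batik.bridge.UserAgentAdapter;
import org.apache.batik.dom.svg.SAXSVGDocumentFactory;
import org.apache.batik.gvt.GraphicsNode;
import org.apache.batik.util.XMLResourceDescriptor;
import org.w3c.dom.Document;

class DomToGvt {
    GraphicsNode build(String uri) throws Exception {
        // Batik insists on its own SVG DOM implementation for the bridge,
        // which is the crux of the "takes any DOM?" question above.
        String parser = XMLResourceDescriptor.getXMLParserClassName();
        Document doc = new SAXSVGDocumentFactory(parser).createDocument(uri);

        GVTBuilder builder = new GVTBuilder();
        BridgeContext ctx = new BridgeContext(new UserAgentAdapter());
        return builder.build(ctx, doc); // the GraphicsNode we actually want
    }
}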

So those are our requirements. Why do we have them?

A few others and I are in the position of working on building a number 
of different open-source toolkits, as well as defining specifications. 
Our core interest is in 3D graphics and virtual reality. As maintainers 
of the test implementation of the X3D specification (Xj3D), we are at 
the bleeding edge of the development of an incomplete spec. One of our 
goals is to test stuff out before it goes into the spec
(the other is to make money, of course :0 ). We get directed by others, 
in particular the Web3d Consortium and a number of our paying clients, 
to "go try this out and tell us how it works". As an ISO specification, 
we have to take input from many different sources. In particular, the 
X3D spec is also getting a lot of push/pull in the W3C world. For 
example, the requirements of X3D are driving a lot of the new 
functionality in DOM Level 3. At the same time, W3C is coming back to us 
and saying "you should be using this" - the big-ticket item currently is 
CSS (hence the note in my other email).

Within W3C, there is also a lot of concern that a bunch of the XML 
specifications are all marching to their own tune. That is, SVG has 
its world, MathML its own, and HumanML another. There are moves afoot 
for a working group that will oversee "specification integration". That 
is - all these different document types must be able to play together in 
the same page-space. For example, I should be able to define a page with 
XHTML and embed in that a 3D model that uses 2D overlays (think callouts 
reaching into the 3D space) and is driven by a MathML physics model. The 
Web3d folks are starting to lead the charge there, pushed along by W3C, 
effectively making us, the Xj3D project, one of the front-runners and 
testers of this content integration project. It is expected that either 
my business partner or I will be the X3D liaison/member of that working 
group (W3C and Web3d have reciprocal membership status).


So there you have it. There's not really much more to say about our 
goals. Batik, as the largest of the open source SVG projects, will 
eventually need to conform to these requirements and specifications. I'm 
just getting in ahead of the game somewhat, driven by the desire to have 
answers as soon as possible. There is also considerable interest from 
the Java gaming community for SVG+3D mixing. As I said in a previous 
email, our personal goal is to show a precursor of the mixed-content 
rendering system by Siggraph this year (July 23-26 IIRC). We have the 
resources and knowledge to go it alone and write our own SVG renderer if 
we have to. We would prefer not to, though. Getting modifications made 
to Batik would be to our mutual benefit. We can throw programming 
resources at it if required; however, there appears to be quite an 
entrenched set of developers already, so our role is probably going to 
be that of sniping from the sidelines :(

A bunch of useful links:

The Xj3D project homepage at the Web3D Consortium
http://www.web3d.org/TaskGroups/source/xj3d.html

The Xj3D documentation
http://www.xj3d.org/javadoc/

The X3D spec homepage. In particular, look about halfway down at the 
"X3D Architecture Diagram", as that describes the X3D abstract model, 
which, as you will notice, is very, very similar to SVG.
http://www.web3d.org/x3d.html

The Java3D code repository that I also maintain. The lowest-level SVG 
integration, as textures into Xj3D, would come through this codebase:
http://code.j3d.org/


-- 
Justin Couch                         http://www.vlc.com.au/~justin/
Java Architect & Bit Twiddler              http://www.yumetech.com/
Author, Java 3D FAQ Maintainer                  http://www.j3d.org/
-------------------------------------------------------------------
"Humanism is dead. Animals think, feel; so do machines now.
Neither man nor woman is the measure of all things. Every organism
processes data according to its domain, its environment; you, with
all your brains, would be useless in a mouse's universe..."
                                               - Greg Bear, Slant
-------------------------------------------------------------------


