MODELLING AND SIMULATION, WEB ENGINEERING, USER INTERFACES
June 6th, 2010

Google Summer of Code 2010, Project Update 1

I’m two weeks into my Google Summer of Code project, and decided it was time to write the first update describing the work I’ve done, and the work I will do.

Project Overview

First, a quick overview of what my project is, what it does, and why one might care about it. The SCXML Code Generation Framework, JavaScript Edition (SCXMLcgf/js) project centers on the development of a particular tool, the purpose of which is to accelerate the development of rich Web-based user interfaces. The idea behind it is that there is a modelling language called Statecharts which is very good at describing the dynamic behaviour of objects, and which can be used for describing rich UI behaviour as well. The tool I’m developing, then, is a Statechart-to-JavaScript compiler: it takes Statechart models, written as SCXML documents, as input, and compiles them to executable JavaScript code, which can then be used in the development of complex Web UIs.

I’m currently developing this tool under the auspices of the Apache Foundation during this year’s Google Summer of Code. For more information on it, you could read my GSoC project proposal here, or even check out the code here.

Week 1 Overview

As I said above, I’m now two weeks into the project. I had already done some work on this last semester, so I’ve been adding in support for additional modules described in the SCXML specification. In Week 1, I added basic support for the Script Module. I wrote some tests for this, and it seemed to work well, so I checked it in.

Difficulties with E4X

I had originally written SCXMLcgf/js entirely in JavaScript, targeting the Mozilla Rhino JavaScript implementation. One feature that Rhino offers is the E4X language extension to JavaScript. E4X was fantastic for rapidly developing my project. It was particularly useful over standard JavaScript in that it provides an elegant syntax for templating (multiline strings with embedded parameters, and regular JavaScript scoping rules), for queries against the XML document structure (very similar to XPath), and for easy manipulation of that structure.

These language features allowed me to write my compiler in a very declarative style: I would execute transformations on the input SCXML document, then query the resulting structure and pass it into templates which generated code in a top-down fashion. I leveraged E4X’s language features heavily throughout my project, and was very productive.
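To give a flavour of what this looked like, here is a small illustrative E4X snippet (it runs under Rhino only). The SCXML fragment is standard, but the template output and the name statechart.addTransition are invented for this example and are not the compiler’s actual output:

default xml namespace = "http://www.w3.org/2005/07/scxml";

// an inline XML literal standing in for a parsed SCXML document
var scxml = <scxml>
                <state id="idle">
                    <transition event="start" target="running"/>
                </state>
                <state id="running"/>
            </scxml>;

// XPath-like query: every transition element, at any depth
var transitions = scxml..transition;

// multiline template with embedded parameters and ordinary JS scoping
function transitionTemplate(t) {
    return <x>
        statechart.addTransition("{t.@event}", "{t.@target}");
    </x>.toString();
}

for each (var t in transitions) {
    print(transitionTemplate(t));
}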

Unfortunately, during Week 1, I ran into some difficulties with E4X. There was some weirdness involving namespaces, and some involving scoping. This wasn’t entirely surprising, as the Rhino implementation of E4X has not always felt very robust to me. Right out of the box, there is a bug that prevents one from parsing XML files with XML declarations, and I have encountered other problems as well. In any case, I lost an afternoon to this problem, and decided that I needed to begin to remove SCXMLcgf/js’s E4X dependencies sooner rather than later.

I had known that it would eventually be necessary to move away from E4X for portability reasons, as it would be desirable to be able to run SCXMLcgf/js in the browser environment, including non-Mozilla browsers. There are a number of reasons for this, including the possibility of using the compiler as a JIT compiler, and the possibility of providing a browser-based environment for Statechart development. Given the problems I had had with E4X in Week 1, I decided to move this task up in my schedule and deal with it immediately.

So, for Week 2, I’ve been porting most of my code to XSLT.

Justification for Targeting XSLT

At the beginning of Week 2, I knew I needed to migrate away from E4X, but it wasn’t clear what the replacement technology should be. So, I spent a lot of time thinking about SCXMLcgf/js, its architecture, and the requirements that this imposes on the technology.

The architecture of SCXMLcgf/js can be broken into three main components:

  • Front End: Takes in arguments, possibly passed in from the command-line, and passes these in as options to the IR Compiler and the Code Generator.
  • IR Compiler: Analyzes the given SCXML document, and creates an Intermediate Representation (IR) that is easy to generate code from.
  • Code Generator: Generates code from a given SCXML IR. May have multiple backend modules that target different programming languages (it currently only targets JavaScript), and different Statechart implementation techniques (it currently targets three different techniques).

My goal for Week 2 was just to eliminate E4X dependencies in the Code Generator component. The idea behind this component is that its modules should only be used for templating. The primary goal of these template modules is that they should be easy to read, understand, and maintain. In my opinion, this means that templates should not contain procedural programming logic.

Moreover, I came up with other precise feature requirements for a templating system, based on my experience from the first implementation of SCXMLcgf/js:

  • Must be able to run under Rhino or in the browser
  • Multiline text
  • Variable substitution
  • Iteration (loops)
  • If/else blocks
  • Mechanisms to facilitate Don’t Repeat Yourself (DRY):
    • Something like function modularity, where you separate templates into named regions.
    • Something like inheritance, where a template can import other templates and override functionality in the parent template.

Because I’m very JavaScript-oriented, I first looked into templating systems implemented in JavaScript. JavaScript templating systems are more plentiful than I had expected. Unfortunately, I did not find any that fulfilled all of the above requirements. I won’t link to any, as I ultimately chose not to go down this route.

A quick survey of XSLT, however, indicated that it did support all of the above functionality. So this left me to consider XSLT: besides JavaScript, it is the other programming language that enjoys good cross-browser support.

I was pretty enthusiastic about this, as I had never used XSLT before, but had wanted to learn it for some time. Nevertheless, I had several serious concerns about targeting XSLT:

  1. How good is the cross-browser support for XSLT?
  2. I’m a complete XSLT novice. How much overhead will be required before I can begin to be productive using it?
  3. Is XSLT going to be ridiculously verbose (do I have to wrap all literal output text in an <xsl:text/> element)?
  4. Is there good free tooling for XSLT?
  5. Another low-priority concern was that I wanted to keep down dependencies on different languages; it would be nice to focus on only one. I’m not sure about XSLT’s expressive power. Would it be possible to port the IR-Compiler component to XSLT?

To address each of these concerns in turn:

  1. There are some nice js libs that abstract out the browser differences: Sarissa, Google’s AJAXSLT.
  2. I did an initial review of XSLT. I found parts of it to be confusing (like how and when the context node changes; the difference between apply-templates with and without the select attribute; etc.), but decided the risk was low enough that I could dive in and begin experimenting with it. As it turned out, it didn’t take long before I was able to be productive with it.
  3. Text node children of an <xsl:template/> are echoed to the output. This turns out to be perfectly legal XSLT (literal text in a template becomes text in the result tree). Anyhow, it works well, and looks good.
  4. This was pretty bad. The best graphical debugger I found was KXSLdbg for KDE 3. I also tried the XSLT debugger for Eclipse Web Tools, and found it to be really lacking. In the end, though, I mostly just used <xsl:message/> elements as printfs during development, which was really slow and awkward. This part of XSLT development could definitely use some improvement.

I’ll come back to concern 5 in a moment.

XSLT Port of Code Generator and IR-Compiler Components

I started to work on the XSLT port of the Code Generator component last Saturday, and had it completed by Tuesday or Wednesday. This actually turned out not to be very difficult, as I had already written my E4X templates in a very XSLT-like style: top-down, primarily using recursion and iteration. There was some procedural logic in there which needed to be broken out, so there was some refactoring to do, but this wasn’t too difficult.

When hooking everything up, though, I found another problem with E4X, which was that putting the Xalan XSLT library on the classpath caused E4X’s XML serialization to stop working correctly. Specifically, namespaced attributes would no longer be serialized correctly. This was something I used often when creating the IR, so it became evident that it would be necessary to port the IR Compiler component in this development cycle as well.

Again, I had to weigh my technology choices. This component performs some analysis of the given SCXML document, and transforms it to include the extra information derived from that analysis. For example, for every transition, the Least Common Ancestor (LCA) state is computed, as well as the sets of states exited and entered when that transition is taken.
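As a rough illustration of the kind of analysis involved, here is a sketch of how the LCA of a transition’s source and target states might be computed by walking up the DOM. This is just a sketch, not the project’s actual code:

// collect the ancestors of a state element, nearest first
function getAncestors(node) {
    var ancestors = [];
    for (var n = node.parentNode; n; n = n.parentNode) {
        ancestors.push(n);
    }
    return ancestors;
}

// the LCA is the first ancestor of the source that is also an ancestor of the
// target; the states exited lie between the source and the LCA, and the
// states entered lie between the LCA and the target
function getLCA(source, target) {
    var targetAncestors = getAncestors(target);
    var sourceAncestors = getAncestors(source);
    for (var i = 0; i < sourceAncestors.length; i++) {
        if (targetAncestors.indexOf(sourceAncestors[i]) !== -1) {
            return sourceAncestors[i];
        }
    }
    return null; // no common ancestor (e.g. nodes from different documents)
}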

I was doubtful that XSLT would be able to do this work, or that I would have sufficient skill to program it in XSLT, so I initially began porting this component to use plain DOM for transformation and XPath for querying. However, this quickly proved not to be a productive approach, and I decided to try XSLT instead. I don’t have too much to say about this, except to observe that, even though development was often painful due to the lack of a good graphical debugger, it was ultimately successful, and the resulting code doesn’t look too bad. In most cases, I think it’s quite readable and elegant, and I don’t think it will be difficult to maintain.

Updating the Front End

The last thing I needed to do, then, was update the Front End to match these changes. At this point, I was in the interesting situation of having all of my business logic implemented in XSLT. I really enjoyed the idea of having a very thin front-end, so something like:

xsltproc xslt/normalizeInitialStates.xsl $1 | \
xsltproc xslt/generateUniqueStateIds.xsl - | \
xsltproc xslt/splitTransitionTargets.xsl - | \
xsltproc xslt/changeTransitionsPointingToCompoundStatesToPointToInitialStates.xsl - | \
xsltproc xslt/computeLCA.xsl - | \
xsltproc xslt/transformIf.xsl - | \
xsltproc xslt/appendStateInformation.xsl - | \
xsltproc xslt/appendBasicStateInformation.xsl - | \
xsltproc xslt/appendTransitionInformation.xsl - | \
xsltproc xslt/StatePatternStatechartGenerator.xsl - | \
xmlindent > out.js

There would be a bit more to it than that, as there would need to be some logic for command-line parsing, but this would also mostly eliminate the Rhino dependency in my project (“mostly” because the code still uses js_beautify as a JavaScript code beautifier, and the build and performance-analysis systems are still written in JavaScript). This approach also makes it very clear where the main programming logic now lives.

In the interest of saving time, however, I decided to continue to use Rhino for the front end, and to use the SAX Java APIs for processing the XSLT transformations. I’m not terribly happy with these APIs, and I think Rhino may be making the system perceptibly slower, so I’ll probably move to the thin front end at some point. But right now this approach works and passes all the unit tests, so I’m fairly happy with it.
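For the curious, invoking a single transformation from Rhino looks roughly like the following. This is a minimal sketch using the standard javax.xml.transform (JAXP) classes, with made-up file names; the real front end chains several stylesheets together and uses the SAX-based APIs, so it differs in detail:

importPackage(javax.xml.transform);
importPackage(javax.xml.transform.stream);

// apply one stylesheet to one input document
var factory = TransformerFactory.newInstance();
var transformer = factory.newTransformer(
        new StreamSource(new java.io.File("xslt/computeLCA.xsl")));
transformer.transform(
        new StreamSource(new java.io.File("test.scxml")),
        new StreamResult(new java.io.File("out.xml")));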

Future Work

I’m not planning to check this work into the Apache SVN repository until I finish porting the other backends, clean things up, and re-figure out the project structure. I’ve been using git and git-svn for version control, though, which has been useful and interesting (this may be the subject of another blog post). After that, I’ll be back onto the regular schedule of implementing modules described in the SCXML specification.

April 29th, 2010

Update

Courses have finished, and I’ve been accepted into Google Summer of Code 2010. Lots of interesting things to come.

January 31st, 2010

DocBook Customization From a User’s Perspective

Today I was working on a project proposal for my course on Software Architecture. There was a strict limit of 5 pages on the document, including diagrams, and so it was necessary to be creative in how we formatted the document, to fit in the maximum possible content. I think that DocBook XSL, together with Apache FOP, generates really great-looking documents out of the box. Unfortunately, however, it does tend to devote quite a lot of space to the formatting, so today I learned a few tips for styling DocBook documents. These techniques turned out to be non-trivial to discover, so I thought I’d share them with others.

Background Information

Some customization can be done very simply, by passing a parameter at build-time to your xslt processor. For many of these customizations, however, DocBook does not insulate the user from XSLT. Specifically, it is necessary to implement what DocBook XSL refers to as a “customization layer”. This technique is actually fairly simple, once you know about it.

In short, when compiling your DocBook document to, for example, HTML or FO, you would normally point your XSLT processor at html/docbook.xsl or fo/docbook.xsl in your DocBook XSL directory. To allow for customizations, however, you need a way to inject your own logic, and to do this, you create a new XSL document (e.g. custom-docbook-fo.xsl) which imports the docbook.xsl stylesheet you would have originally imported. Because this new document is where you inject your customization logic, it is called a “customization layer”. This is not difficult in practice, but, as I said, it does not insulate the user from XSLT, which, for me, was a bit shocking, as I’m not used to seeing and working with XSLT.

Easy Customizations

Two customizations I wanted to do were:

  • Remove the Table of Contents
  • Resize the body text

Both of these customizations require the user to simply add a parameter when calling their XSLT processor. In ant, this looks like the following:

        <xslt style="custom-fo.xsl" extension=".fo" 
            basedir="src" destdir="${doc.dir}" includes="*.xml">
            <classpath refid="xalan.classpath" />
			<param name="body.font.master" expression="11"/>
			<param name="generate.toc" expression="article/appendix  nop"/>
        </xslt>

The above params remove the Table of Contents, and set the body font to 11pt. Additionally, all other heading sizes are computed in terms of the “body.font.master” property, so they will all be resized when this property is set.

That’s pretty much all there is to it.

Harder Customizations

Two other customizations I wanted to do were:

  • Reduce the size of section titles.
  • Remove the indent on paragraph text.

To do this, I had to create a customization layer document in the manner I described above. It looks like the following:

<?xml version='1.0'?> 
<xsl:stylesheet  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"  
      version="1.0"> 

<xsl:import href="docbook-xsl/docbook-xsl-1.75.2/fo/docbook.xsl"/> 

<!-- set sect1 and sect2 title text size-->
<xsl:attribute-set name="section.title.level1.properties">
  <xsl:attribute name="font-size">
    <xsl:value-of select="$body.font.master * 1.3"/>
    <xsl:text>pt</xsl:text>
  </xsl:attribute>
</xsl:attribute-set>

<xsl:attribute-set name="section.title.level2.properties">
  <xsl:attribute name="font-size">
    <xsl:value-of select="$body.font.master * 1.1"/>
    <xsl:text>pt</xsl:text>
  </xsl:attribute>
</xsl:attribute-set>

<!-- remove the indent on para text -->
<xsl:param name="body.start.indent">
  <xsl:choose>
    <xsl:when test="$fop.extensions != 0">0pt</xsl:when>
    <xsl:when test="$passivetex.extensions != 0">0pt</xsl:when>
    <xsl:otherwise>0pc</xsl:otherwise>
  </xsl:choose>
</xsl:param>
</xsl:stylesheet>  

Note that the content of the above document was mostly copy-pasted from various sections of Part 3 of DocBook XSL: The Complete Guide. All I had to do was guess at what it was doing, and substitute my desired values; I wouldn’t have been able to program this myself.

A very useful resource for these sorts of customizations is the FO Parameter Reference.

Customizations You Need a Degree in Computer Science to Understand

One of the first customizations I wanted to make was to reduce the font sizes used in the title of the document. Even with detailed instructions, it took me about two hours to figure out how to do this, just because the method of accomplishing this task was so unexpected.

In general, what you’re doing is the following:

  1. Copying a template that describes how to customize the title.
  2. Customizing that template with things like the font size.
  3. Using an XSL stylesheet to compile that customized copy into another XSL stylesheet. Yes, you use an XSL stylesheet to create an XSL stylesheet.
  4. Including the compiled XSL stylesheet in your customization layer.
  5. Optionally, automating this task by making it part of your build process.

Holy smokes! Let’s run through a concrete example of this.

First, make a copy of fo/titlepage.templates.xml. I put it in the root of my project and called it mytitlepage.spec.xml. I then messed with the entities in mytitlepage.spec.xml to change the title font size. This was pretty self-explanatory. I then skipped a few steps, and integrated it with my ant script.

	<target name="build-title-page">
		<xslt style="${docbook.xsl.dir}/template/titlepage.xsl" extension=".xsl" 
            basedir="." destdir="." includes="mytitlepage.spec.xml">
            <classpath refid="xalan.classpath" />
        </xslt>
	</target>

And made my build-fo task depend on this new task:

    <target name="build-fo" depends="depends,build-title-page" 
        description="Generates HTML files from DocBook XML">
		...
    </target>

Now, whenever I build-fo, mytitlepage.spec.xml will be processed by template/titlepage.xsl in my DocBook XSL directory, producing the document mytitlepage.spec.xsl. I then import mytitlepage.spec.xsl into my customization layer:

<?xml version='1.0'?> 
<xsl:stylesheet  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"  
      version="1.0"> 

<xsl:import href="docbook-xsl/docbook-xsl-1.75.2/fo/docbook.xsl"/> 

<xsl:import href="mytitlepage.spec.xsl"/>
...
</xsl:stylesheet>  

And that’s it. It’s really not that difficult once you know how to do it, and you only have to wire it all together once, but it took a long time to see how all of the pieces fit together.

Conclusion

My advisor knows all sorts of tricks for Latex in order to, among other things, compress documents down to sizes to get them into conferences with strict page limits. I think this is pretty standard practice. You can do the same thing with DocBook, but expect a high learning curve, especially if you’ve never seen XSLT or are unfamiliar with build systems. I think DocBook is pretty consistent in this respect. But, in all fairness, I was ultimately successful: all of the resources were there to allow me to figure this out myself.

January 28th, 2010

docbook-ant-quickstart-project

I was forced to learn DocBook for the SVG Open 2009 conference, which asks all of its authors to submit in DocBook format. I found it cumbersome and confusing to set up, but, once I had put all of my tool support in place, I actually found it to be a very productive format for authoring structured documents.

DocBook is similar in concept to LaTeX, and I now prefer to use it for all of my technical writing. I like it because it’s XML (this is a matter of personal taste, but I like XML as a markup format), because it is environment-agnostic (I prefer to edit in Vim, but Eclipse includes great XML tooling and integration with version-control systems, and thus is also an excellent choice for a DocBook-editing environment), and because, thanks to the Apache FOP and Batik projects, it’s very easy to create PDF documents which include SVG images.

Still, I could never forget the initial pain involved in setting up Docbook, and so I’ve created docbook-ant-quickstart-project, a project to reduce this initial overhead for new users. From the project description:

Docbook is a great technology for producing beautiful, structured documents. However, learning to use it and its associated tools can involve a steep learning curve. This project aims to solve that problem by packaging everything needed to begin producing rich documents with Docbook. Specifically, it packages the Docbook schemas and XSL stylesheets, and the Apache FOP library and related dependencies. It also provides an Ant script for compilation, and includes sample Docbook files. Thus, the project assembles all of the components required to allow the user to begin creating PDF documents from Docbook XML sources quickly and easily.

I spent a long time looking for a similar project, and, surprisingly, didn’t find too much in this space. I did find one project which has precisely the same goals, but it relies on make and other command-line tools typically found on Unix platforms. Right now, I’m on Windows, and Cygwin has been problematic since Vista, so Ant and Java are a preferred solution. Also, by using Ant and Java, it is very easy to begin using this project in Eclipse.

I hope Docbook enhances your productivity as much as it has mine :)

December 31st, 2009

2010 New Year’s Resolution

I have a backlog of draft posts, and I’m going to start working my way through them. I had a very interesting semester, and the next should be even more interesting. I’m keen to resume sharing. Look for more posts to come.

November 5th, 2009

Leveraging the Java Servlet API with Rhino

As I’ve previously stated, I’m rather enamored of the JavaScript language, and I enjoy exploring its use in various contexts outside of the web browser. There’s currently a large contingent of developers bent on exactly the same thing, particularly with respect to server-side web development. There are many projects, both old and new, which attempt to use JavaScript productively on the server.

I’ve lately had the opportunity to explore this a bit myself. For my course in compilers, we’re writing a compiler for a Domain Specific Language called WIG. I won’t go into the specifics of what WIG is or what it does, but suffice it to say that my group has chosen to target Rhino, Mozilla’s implementation of JavaScript on the Java platform. In this post, I’ll attempt to sketch out how you might get started using Rhino to develop server-side web applications. I won’t talk about WHY you might want to do this, as opposed to using, for example, pure Java. I think it’s enough to say that JavaScript may be a very productive language, and the JVM may be a very productive environment, and so the union of the two is very intriguing.

To begin, it’s important to note that there are roughly two ways to leverage Rhino on the server: via CGI, and via the Java Servlet API.

CGI

I don’t have too much to say about this. The main thing you need to know in order to run Rhino as CGI is how to set it up to run with a shebang.

Here’s an example of a minimal Rhino CGI script:

#!/usr/bin/env rhino
print("Hello world!");

After that, it’s mostly a matter of setting up your web server to run .js files as CGI.

Servlets

Using Rhino to leverage the Java Servlet API was much more interesting to me. When I initially looked into this, I found an article that talked about using Rhino with servlets, but it worked only by using the context of a host Java application. I wanted to use pure JavaScript and stay completely away from Java, and I wasn’t able to find too much information on how this might be done.

First, here’s a tarball of the project in case you’re interested in exploring my implementation: RhinoServlet.tar.gz

It’s an Eclipse project, but it’s driven by an ant build.xml script. Creating the build.xml script was a nontrivial part of the project, and so it’s worth briefly examining. The build.xml script is responsible for setting up the classpath, compiling any Java code (there is none), compiling any JavaScript code (more on what this means in a moment), creating a WAR archive, and potentially deploying the WAR to a Tomcat server.

There are two JavaScript files in the project, TinyServlet.js, and TestServlet.js. TestServlet is very minimal, and TinyServlet aims to be a bit more complex. Both implement the Java Servlet API, and in fact, extend javax.servlet.http.HttpServlet. This is possible, thanks to the jsc tool bundled with Rhino, which compiles JavaScript to Java .class files. Each .js file will map to one top-level .class file, and potentially several other auxiliary classes or subclasses. jsc may be told that the generated top-level class should inherit from some other Java class, via the “-extends” argument. Likewise, the class generated from the .js file may implement one or more interfaces through jsc’s “-implements” command-line argument. The best resource I found on extending JavaScript objects from existing Java classes in general may be found here. The best resource I found on using jsc to extend the top-level Java classes generated from the JavaScript may be found here.

For a .js file to inherit the servlet API by extending the javax.servlet.http.HttpServlet class, then, it must be compiled with a “-extends javax.servlet.http.HttpServlet” command-line argument, and javax.servlet.http.HttpServlet must be on the classpath. Ant does all of the heavy lifting, then, both setting up the classpath, and using jsc to compile with all appropriate command-line arguments.

Here’s the relevant ant task that does this work:

<target name="compile-js" >
                <mkdir dir="${js-classdir}"/>
                <echo>Compiling ${targetjs}</echo>
                <java classname="org.mozilla.javascript.tools.jsc.Main" classpathref="project.class.path" >
                        <arg value="-extends"/>
                        <arg value="javax.servlet.http.HttpServlet"/>
                        <arg value="-g"/>
                        <arg value="-opt"/>
                        <arg value="-1"/>
                        <arg value="${targetjs}"/>
                </java>
                <move todir="${js-classdir}">
                        <fileset dir=".">
                                <include name="*.class"/>
                        </fileset>
                </move>
        </target>

TinyServlet.js is then able to implement the servlet API functions in the global namespace. In this way, doGet, doPost, and the other familiar servlet API methods actually override those of the HttpServlet class. In effect, TinyServlet.js becomes a real subclass of HttpServlet, with no Java-language host context required.
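For illustration, the body of such a servlet script might look like the following (the content here is hypothetical, not the actual TestServlet.js or TinyServlet.js). Because jsc is invoked with “-extends javax.servlet.http.HttpServlet”, these top-level functions override the corresponding HttpServlet methods:

// responds to GET requests; request and response are the usual
// HttpServletRequest and HttpServletResponse objects
function doGet(request, response) {
    response.setContentType("text/html");
    var out = response.getWriter();
    out.println("<html><body>Hello from Rhino!</body></html>");
}

// delegate POST requests to the same handler
function doPost(request, response) {
    doGet(request, response);
}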

TinyServlet.js then compiles to two class files, one of which is called TinyServlet.class and may be imported and used by other JVM classes. These classes may be put into a WAR using the standard ant war task, and then deployed to a server. There is nothing to indicate to the servlet container that the original language used was JavaScript and not Java.

All-in-all, I think this is pretty slick. There is one caveat which must be taken into account, however, which is that jsc will not compile JavaScript code that uses continuations. This limitation is not very well-documented, and certainly confused me when I first encountered it. This isn’t a huge limitation, however, as closures still work very nicely.

Anyhow, I found this to be a very interesting exercise. At this moment, the project is ongoing. Now that the technical part is out of the way, we’ll actually be able to focus on generating JavaScript code from a high-level DSL – I feel like the most exciting part is yet to come.

October 27th, 2009

Ubuntu to Vista and Back Again

Happy Halloween everyone! I’ve been writing this blog post for about a week now, continually adding to it as I acquire new information. At the moment, I have some work to do, but am waiting for Oracle XE to finish downloading, and so I’m going to try to finish this blog post before I feel obliged to resume being productive.

I’ll split this post up into a few parts. I’ve been having operating system trouble, which has been ongoing. This may be interesting to others, so I thought I’d share it.

Leaving Ubuntu 9.04

I’ve been using Ubuntu as my primary OS since 2006, when I switched from Windows XP. In that time, I’ve had the opportunity to install Ubuntu on a lot of different hardware, but I’ve used a Dell Inspiron 1300 laptop as my main machine. In August, the laptop finally died a slow, lingering death due to hardware failure, and so I bought a new laptop, an HP Touchsmart tx2-1000. In addition to having pretty good specs, this machine has the distinction of being what HP calls the first consumer laptop with a multi-touch display. I chose this particular machine because it was on sale and offered extraordinary value for the price, and because I believe that the multi-touch display will prove very useful and interesting for my research into UI.

I put Ubuntu 9.04 on the new computer the very first night I received it. I was pretty impatient, and didn’t even attempt to create Windows Vista restore disks. I intended to make a dual-boot, but didn’t defragment the hard drive before installing, and so the Ubuntu installer failed to resize the Vista partition, and ended up hosing it. The restore partition was still intact, though.

Unfortunately, Ubuntu 9.04 did not provide a very good experience on this hardware. Audio playback worked all right, even if it was somewhat suboptimal. Headphone jack sensing didn’t work, and so it was necessary to manually mute the front speakers through the mixer. Most importantly, I could never get the microphone to work. Linux is such a mess right now that it’s hard to say where the fault lies when something isn’t working. For example, when Ekiga fails to make a voice call, it could be Ekiga failing to communicate with PulseAudio via the PulseAudio ALSA plugin, PulseAudio failing to talk to ALSA, or ALSA failing to properly communicate with my hardware. VoIP is critical to everything that I do on a day-to-day basis. I need it to work, and so I tried many different things in order to make it work: I tried different VoIP clients (Skype 2.0, Skype 2.1, and Ekiga), I tried stripping out PulseAudio and using ALSA directly, I messed around with ALSA, and then I swapped out the stack entirely and used OSSv4. This was as painful as it sounds, and I was unable to converge on a reasonable result.

The screen worked well as a tablet (using the pen, not fingers) out of the box, which was nice, but the requirements for getting the touchscreen working were nontrivial. When presented with clear instructions, I’m very comfortable patching and compiling my own kernel; unfortunately, the instructions are still evolving. The end result was that I spent a few hours working on this, broke tablet support, and then gave up. I might have tried again, but I’ve been exceptionally busy.

I had some trouble with suspend/resume, in that it would occasionally suspend and then be unable to resume. The screen would simply be black; no X, no backlight, nothing to do but reboot.

Finally, while the open-source radeon driver worked very well with my graphics card, and provided a very solid experience, I really wanted to use Compiz, and the proprietary driver, which enabled 3D graphics on my hardware, turned out to be rather sketchy. Once again, I tried many permutations, but was unable to converge on something that I felt was solid and reliable.

After all this, for the first time in 3 years, I decided it might be better for me to switch back to Windows. If this were Windows XP, this might have been a good decision. Unfortunately, Windows Vista was far worse than I had anticipated.

Reinstallation of Windows Vista

Reinstallation of Windows Vista was nontrivial, and I’ll only say a few words about it, as the procedure was not very difficult, but was nontrivial to discover. I had not created Vista restore disks, and I didn’t have a true Windows repair disk. Fortunately, when installing Ubuntu, the installer is clever enough to detect the restore partition as a separate Vista install, and so I was able to boot into the restore partition using GRUB. Unfortunately, the HP restore tools were unable to restore Vista with Ubuntu on that partition. The solution was to use the Windows cmd shell provided by the restore partition to:

  1. restore the MBR to use the Windows NT bootloader,
  2. delete the Ubuntu partition, and
  3. initiate system restore

I discovered the details of how to do this by reading this post on Ubuntu forums, which proved to be a critical resource in this process.

Trying to Construct a Linux-flavored Userland in Vista x64

I was fairly optimistic about transitioning to Windows. I know that there are a lot of FLOSS projects that would help ease the transition. There are some basic tools that I need readily at hand in my OS in order to be comfortable there: GNU screen, bash, vim, a unix-like shell environment, and X11.

At the top of my list was Portable Ubuntu. Portable Ubuntu looks like quite a nice piece of work: it uses coLinux to run the Linux kernel as a process inside of Windows; it then uses the Windows port of PulseAudio, and Xming, an X server port for Windows. The effect of this is that you get the full Ubuntu Linux userland, running at full speed, with similar memory consumption, and excellent integration into the Windows shell. The Windows kernel, with all of its hardware support, combined with the Ubuntu userland, sounds like a pretty attractive combination.

Unfortunately, this didn’t work, for two reasons. First, because coLinux doesn’t work on 64-bit versions of Windows in general. Second, because Windows Vista 64-bit does not allow the installation of drivers that are not signed by Microsoft. This basically means that coLinux is ruled out for me.

I next tried Ubuntu running in a virtual machine inside of VirtualBox. This is pretty wasteful for just an X server and a shell, but whatever; my machine has a nice fast processor and lots of RAM. Unfortunately, this did not provide very good integration with the Windows shell, even with seamless mode, and soon proved annoying to use. I may revisit it at some point, but I decided to look into a Windows-native solution that would provide better integration.

I then tried Cygwin, which attempts to create a unix subsystem in windows. Cygwin would give me X11, Xterm, bash, screen, vim, and pretty much everything else I require.

Unfortunately, Cygwin has its own problems. Specifically, Cygwin attempts to be POSIX-compliant, and the way it encodes Unix filesystem permissions on NTFS, while totally innocuous in Windows XP, seems to conflict with Windows Vista’s User Account Control. This is not something that the Cygwin developers seem to have any interest in fixing. The result is that you get files that are extremely difficult to move or copy, and very difficult to delete, using the Windows shell. So Cygwin was not an effective solution for me.

I finally tried one last thing, a combination of tools: Xming, MSYS, MinGW, and GnuWin32. MSYS and MinGW appear to be mostly intended to make it easier to port software written for a Unix environment to Windows; however, MSYS provides a very productive Unix-flavored shell environment inside of Windows. GnuWin32 ports many familiar GNU tools to Windows, so I have a fairly rich userland: rxvt as a terminal emulator, vim, bash, and a Unix-flavored environment. This is not ideal, as it is not easily extensible and doesn’t support any concept of packages, but it seems to be the best I can do on Windows Vista x64.

A Very Late Review of Windows Vista

Let me start with the things that I like about Vista.

When I develop software, I primarily target the web as a platform, and so I like the fact that I can install a very wide range of browsers for testing: IE 6, 7, and 8 (Microsoft publishes free Virtual PC images for testing different versions of IE), Chrome, Safari, Firefox and Opera. It’s very convenient not to have to fire up a VM for testing.

Hardware support is top-notch. The audio and video stack feel polished and mature. I’ve never had an instance of them failing. And, all of my special hardware works, including the multi-touch touchscreen, and pressure-sensitive pen.

Now for the bad stuff. I want to keep this very brief, because it’s no longer interesting to complain about how bad Vista is… But it is so bad, it is virtually unusable, and I want to make it clear why:

  1. I seem to get an endless stream of popups from the OS asking if I really want to do the things I ask it to do. This transition is visually jarring, and very annoying.
  2. It maintains the behaviour that it had back in Windows 95, where, if a file is opened by some application and you attempt to move it, the move will fail without meaningful feedback. This can be overcome with File Unlocker, but it’s crazy that this simple usability issue has never been addressed.
  3. File operations are so slow as to be unusable.
  4. Before moving a file, Windows Explorer counts every single file you’re going to move. This makes no sense to me at all, because moving a file in NTFS, I believe, is just a matter of changing a pointer in the parent. If you use the Windows cmd shell with the “move” operation, or Cygwin/MSYS’s “mv” operation, the move takes place instantaneously; it does not attempt to count every file before moving the parent directory. So this really is just a Windows shell issue; it has nothing to do with the underlying filesystem. And, as bad a user experience as moving files with Windows Explorer is, it’s much, much worse when you discover that it’s completely unnecessary.
  5. Out of the box, my disk would thrash constantly, even when I wasn’t doing anything. I eventually turned off Windows Defender, Windows Search, and the Indexer service, and things have gotten better.
  6. It takes about 2 minutes to boot, and then another 5 minutes before it is at all usable, as it loads all of the crapware at boot. I’ve gone through msconfig and disabled a lot of the crapware preinstalled by HP, and this has gotten somewhat better, but out of the box it was just atrocious.
  7. Windows Explorer will sometimes go berserk and start pegging my CPU.
  8. Overall it just feels incredibly, horribly slow. I feel like it cannot keep up with the flow of my thoughts, or my simple needs for performance and responsiveness. It does not offer a good user experience.
  9. Only drivers signed by Microsoft are allowed on 64-bit Vista. This is a huge WTF.

All in all, Vista sucks and I hate it. Maybe Windows 7 will be better. Right now, though, a real alternative is necessary, because Vista offers such a poor experience that it is simply not usable for me. I had forgotten what it was like to want to do physical violence to my computer. No longer.

Really, at this point I feel like I should have gotten a Mac.

Last Word: the Karmic Koala

Ubuntu 9.10 Karmic Koala came out this past Thursday, and I just tried it out using a live USB. I’m happy to say that it sucks significantly less on my hardware than 9.04! In particular, audio now seems to work flawlessly: playback through speakers, headphones, and headphone jack sensing all work fine; recording through the mic jack works out of the box. I didn’t try Skype, but the new messaging application shipped with Ubuntu, Empathy, is able to do voice and video chat with Google Chat clients using the XMPP protocol.

I had mixed success with Empathy. It wouldn’t work at all with video chat; I think this had to do with an issue involving my webcam, as Cheese and Ekiga also had trouble using it. With regard to pure audio chat, it worked fine in one case, but in another it crashed the other user’s Google Chat client. Yikes. So, clearly there are still some bugs that need to be worked out with respect to the client software.

I now feel much more optimistic about the state of the Linux audio stack. I wasn’t really sure that the ALSA/Pulseaudio stack was converging on something that would eventually be stable and functional enough to rival the proprietary stacks on Windows and Mac OS X. The improvements I have seen on my hardware, though, are very encouraging, and so I think I may go back to Ubuntu after all. At the very least, I’m going to hook up a dual boot.

Wow, that was a long post! I hope parts of it might be generally interesting to others who may be in a similar situation. In the future, though, I’m going to try to focus more on software development issues.

October 8th, 2009

JavaScript 1.7 Difficulties

For my course in compilers, we have a semester-long project in which we build a compiler for a DSL called WIG. We can target whatever language and platform we want, and there are certain language features of JavaScript, specifically the Rhino implementation, that I thought could be leveraged very productively. I was excited to have the opportunity to shed the burden of browser incompatibilities, and to drill down into the more advanced features of the JavaScript language. Unfortunately, I’ve also encountered some initial challenges, some of which are irreconcilable.

E4X

One thing that I was excited about was E4X. In WIG, you’re able to define small chunks of parameterizable HTML code, which maps almost 1-1 to E4X syntax. Unfortunately, Rhino E4X support is broken on Ubuntu Intrepid and Jaunty. Adding the missing libraries to the classpath has not resolved the issue for me. On the other hand, the workaround of getting Rhino 1.7R2 from upstream, which comes with out-of-the-box E4X support, is unacceptable, as this Rhino version seems to introduce a regression, in which it throws a NoMethodFoundException when calling methods on the HTTP servlet Request and Response objects. I’ll file a bug report about this later, but the immediate effect is that I’m stuck with the Ubuntu version, and without E4X support.

Language Incompatibilities

Destructuring assignments were first introduced in JavaScript 1.7. While array destructuring assignments have worked fine for me, I haven’t been able to get object destructuring assignments to work under any implementation but Spidermonkey 1.8. Rhino 1.7R1 and 1.7R2, as well as Spidermonkey 1.7.0, all fail to interpret the example in Mozilla’s documentation: https://developer.mozilla.org/en/New_in_JavaScript_1.7#section_25
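For reference, the two forms look like this; the comments reflect my testing described above:

// array destructuring: worked fine in every implementation I tried
var [first, second] = [1, 2];

// object destructuring: only Spidermonkey 1.8 would accept this
var { x: px, y: py } = { x: 3, y: 4 };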

This is disappointing, as it would have provided an elegant solution to several problems presented by WIG.

October 7th, 2009

SVG Open 2009 Results and Other Things

It’s been awhile since I’ve posted here because I’ve been very busy doing interesting work! First, I had to prepare for the SVG Open 2009 conference, where I presented a paper on modelling the reactive behaviour of user interfaces with class diagrams and statecharts. The paper and presentation can be found online here.

I have to say, the conference went really well! My feeling was that many developers are already using state machines to describe the behaviour of their objects, and many saw the techniques I presented as a more developed version of the techniques they were already using. All in all, my experience at the conference convinced me that people are ready to begin using these techniques and incorporating them into their workflows. What is lacking is tooling, in the form of a good Statechart editor and a Statechart-to-JavaScript compiler. These tools need to be high-quality, free and open source, and have a clean code base that is hacker-friendly. It has always been my intention to fill this gap, but I now feel highly motivated to renew my efforts.

In order to write the SVG Open paper, I had to learn to use Docbook. Getting set up in an environment that was conducive to being creative with this format turned out to be nontrivial, and I hope to make this the subject of a future post. Suffice it to say, I now quite like it, and I’ve found it to be a very productive format. I’m considering using it to write my master’s thesis, as opposed to LaTeX.

I’m doing very interesting work for my courses this year as well, especially my course in Distributed Systems. The Prof has granted me permission to do my own project, and so I’m focusing on distributed user interfaces. Of course, I’m targeting the browser as the preferred client. On the server, I’m running Batik inside a servlet, with SVG documents and objects exposed via a RESTful API that I rolled myself. The project is going to focus on issues of performance and concurrency. This is really great stuff, and I hope to write more about it as it develops.

Finally, Google Chrome for Linux is just amazing. Where Firefox always feels sluggish, even on my new 64-bit AMD Turion X2 Dual Core laptop, Chrome is always lightning fast. Unfortunately, I need Firefox for 3 reasons: plugins, plugins, plugins. Actually, I need it for Zotero, Firebug, and Xmarks. Once this gap is filled, once developers can begin writing extensions for Chrome, that may be the endgame for Firefox.

August 17th, 2009

GSoC 2009 Final Report

Today is the last day of GSoC, and so I’ve put together a rather long post talking about several different things.

A brief recap of the project

    The original project goals were to port GMF to the web, which is to say, to create a graphical, web-based diagram editor frontend that would interface with an EMF model living on the server as the backend. I had related experience in this domain prior to this project, from my work as a researcher for the McGill University Modelling, Simulation, and Design Lab. My research explored the development of modelled, web-based diagram editors, and included the production of a prototype editor. My hope was that Google Summer of Code would allow me to extend this work, such that it would be possible to build a web-based diagram editor that would interact with a full meta-modelling kernel (Ecore) hosted on a server. You may see my original project proposal here.

    The project proposal was informed by the fact that GMF was built on top of GEF (a generic diagram editor library), and that GEF was built on top of Draw2D (a graphical drawing library).

    My project was mentored by e4 committer and Architexa employee Vineet Sinha. Vineet has had experience porting the GEF stack to the web via Flash, but limitations in the capabilities of Flash support made us consider a non-Flash-based solution for this project.

    Looking back, I would say that this project has been divided up into about three phases:

  1. Trying to get code already checked into e4 to work. In this phase, we attempted to leverage an existing body of code checked into the e4 repositories. This code attempted to port the SWT API to GWT, and thus would have made an appropriate foundation for implementing SWT/GC, SWT’s low-level, immediate-mode graphics API, on top of the HTML5 Canvas API. Unfortunately, the result was that we spent 1.5 months simply trying to compile the existing code, without success. After this, we focused on starting from scratch in bringing Draw2D into web browsers.
  2. Trying to implement Draw2d on top of SWT/GC by using Java2Script. This was done because Java2Script provided good support for SWT, and was an alternative to GWT, which we had had trouble with in Phase 1. The result was that we found bugs in the Java2Script compiler, and had to return to GWT.
  3. Trying to implement Draw2d on top of SVG by using GWT. This was done because we wanted to use GWT, but decided it would be more productive to start a level higher in the SWT/Draw2d/GEF stack.

    As you can see, we ended up trying many different strategies throughout this project, and therefore the work that I am doing now is the third time I’ve started over from scratch. This may be understandable, given the experimental nature of the project and the methods by which we were attempting to achieve the project’s goals (using a Java-to-JavaScript cross-compiler, etc.).

Overview of implementation details of Phase 3

    We use an adapter pattern: each org.eclipse.draw2d.Figure class composes a handle object native to the environment, which in this case is an org.w3c.dom.svg.SVGElement instance. Internally, the Figure’s API is then implemented in terms of this native DOM object. Here’s a snippet that should clarify what this means:

public class Figure<T extends SVGElement> implements IFigure<T> {
    
    //in this implementation, Figure is no longer lightweight
    protected T handle;
    
    public Figure(){
        //create the handle
        handle = (T) DOM.getDocument().createElementNS(SVGConstants.SVG_NAMESPACE_URI, SVGConstants.SVG_G_TAG);
    }

}

There are three interesting things to note in the above snippet:

  1. Figure composes a handle of type <T extends SVGElement>. SVGElement is a subclass of org.w3c.dom.Node and the parent class of all SVG elements.
  2. The type of the handle can be further specified using Java 5 generics. This is useful because a Draw2d Rect shape may want to compose an SVGRectElement rather than a generic SVGElement. Adding a generic parameter to Figure is thus useful, and has the additional advantage of extending the API without breaking compatibility with existing code.
  3. Figure is not abstract, and may be instantiated to contain other Draw2d elements. It is therefore roughly analogous to the SVGGElement, and this is what is instantiated in the constructor, using the statically exposed method DOM.getDocument() and the standard SVG DOM APIs.

    Implementing Draw2d in terms of SVG is theoretically achievable because the Draw2d API is attempting to achieve roughly the same thing as the SVG DOM API, namely, providing a retained-mode graphics API.
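To make “retained-mode” concrete, here is a tiny plain-JavaScript sketch of how a shape is added via the SVG DOM in the browser; nothing here is project-specific, and it assumes an svg element is already present on the page:

var SVG_NS = "http://www.w3.org/2000/svg";
var svgRoot = document.getElementsByTagName("svg")[0]; // assumes an existing <svg> element

// build a node, set its properties, and attach it to the scene graph;
// there is no explicit paint call: the browser repaints the retained scene graph itself
var rect = document.createElementNS(SVG_NS, "rect");
rect.setAttribute("x", "10");
rect.setAttribute("y", "10");
rect.setAttribute("width", "50");
rect.setAttribute("height", "30");
svgRoot.appendChild(rect);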

Nevertheless, there are architectural and conceptual differences between the two. Here are a few that I’ve noticed:

  • SVG lacks a concept of connectors and layout, which Draw2d has.
  • Draw2d provides its Figures access to an immediate-mode API through the Graphics object. SVG does not provide access to such an API.
  • In many Draw2d examples, it is common to see a class inheriting from Figure. While it might be sometimes possible to do the same thing in SVG, it is more common to see composition used, rather than inheritance.
  • SVG hides paint events from its user. In Draw2d, you can force a manual refresh of the scene graph.
  • Draw2d allows fine control over updates in the scene graph, while SVG will in general always update its scene graph synchronously, whenever you change a value in DOM.

    It’s also worth noting that, by implementing Draw2d in terms of SVG, the org.eclipse.draw2d.LightweightSystem class is no longer really a Lightweight System, as it’s composing a System-native handle, which, among other things, can handle its own event dispatching. This means that, rather than having events be dispatched through a single source, the LightweightSystem, inner DOM node handles should instead be connected to the proper interfaces on their host Figure when the Figures are instantiated.

Figures will also have to handle tearing down the DOM node when they are destroyed.

What has been implemented

    Everything required to get org.eclipse.draw2d.HelloWorld to work. Here’s a snippet that should illustrate this:
 public static void main(String[] args) {

    Display d = new Display();
    Shell shell = new Shell(d);
    shell.setLayout(new FillLayout());
    
    FigureCanvas canvas = new FigureCanvas(shell);
    canvas.setContents(new Label("Hello World"));

    shell.setText("draw2d");
    shell.open();
    while (!shell.isDisposed())
        while (!d.readAndDispatch())
            d.sleep();
}

  • GWT-compatible classes have been created for Display, Shell, FigureCanvas, and Label.
  • Instantiation of SWT objects, with parents passed into the constructor, should work in general, as it does with the Shell and FigureCanvas classes. The rest of the SWT API has been stubbed out.
  • The Figure class and some subclasses, including Label and Rectangle have been created. The API has been completely stubbed out and partially implemented.
  • The compiled output is JavaScript code which, when included in an XHTML document, will create a new HTMLDivElement, SVGSVGElement, and SVGTextElement, which together display “Hello World” on the page.

What has not been implemented

Everything else, notably:

  • Most subclasses of Figure lack implementations.
  • Most methods of the Figure superclass lack implementations.
  • Connectors
  • Layout
  • Colors
  • Fonts
  • Event Handling
  • There are still holes in the gwt-svg library, the library that exposes native SVG and HTML DOM to GWT:
    • not every SVGElement has an implementation.
    • even for those that do, not every element is properly wrapped in SVGElementImpl.wrapElement. So if you’re getting ClassCastExceptions, check to make sure that your element is properly handled in SVGElementImpl.wrapElement.
    • The whole business of wrapping Elements should probably be cleaned up a bit. It’s currently quite spread out and a bit confusing, and it was already a bit crufty when I started using gwt-dom.

Most recent dev experience

General Approach

    So the goal of Phase 3 was to implement the Draw2d API in terms of the SVG DOM API by way of GWT.

I worked very conservatively, only merging in code that I felt I understood quite well, and would not break the compiler. In that way, I was able to avoid most of the mysterious compiler errors that had occurred for me in Phase 1 of the project.

Problems with SVG Embedding and the SWT API

I did run into a few interesting problems that are worth talking about. Let me set up the problem like this:

  1. Since GWT 1.4, GWT out of the box does not support XHTML or SVG (XML) documents. It only supports HTML4 in quirks mode and standards mode.
  2. SVG can be viewed by a web browser in the following ways:
    1. As a plain SVG document (image/svg+xml mimetype, usually with a .svg extension).
    2. Included in an (X)HTML document in the following ways:
      1. Inline in an XHTML document, in which the SVGSVGElement root element is loaded synchronously with the rest of the page.
      2. Embedded via the object, embed, or iframe tags in an XHTML or XML document, in which case the SVGSVGElement in the embedded SVGDocument is loaded asynchronously, independently of the rest of the page. Basically, to get the root SVGSVGElement, you need to set a LoadListener; otherwise, the internal contentDocument will simply be null. In general, listening to load events like this is quite common in web programming, and usually not problematic, but you will see that it caused a problem of competing requirements here (a minimal sketch of this asynchronous loading follows this list).
  3. The SWT API requires widgets to be instantiated synchronously. The reason for this is simply that the method calls are synchronous, so, for example, new FigureCanvas(shell) does not take a callback.
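To make the asynchronous case (2.2.2) concrete, here is a minimal plain-JavaScript sketch; the element id is hypothetical:

// "svgObject" is a hypothetical id for an <object> tag embedding an SVG document
var obj = document.getElementById("svgObject");
obj.addEventListener("load", function () {
    var svgDoc = obj.contentDocument;      // null until the load event has fired
    var svgRoot = svgDoc.documentElement;  // the root SVGSVGElement
    // only at this point could widgets backed by this document be constructed
}, false);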

    This system of constraints cannot be satisfied: requirement 1 rules out 2.1 and 2.2.1, while requirement 3 rules out 2.2.2. I had actually been using option 2.2.2, with an object tag and the SVG document encoded in a data URI, and I had a first implementation of basic SWT support that used it, doing some tricky things involving managing widgets’ internal state and setting callbacks in order to fake some kind of synchronicity; but it clearly was not going to scale, and I felt that this was not the place to spend my effort. So, basically, I had to change one of the assumptions, and the one I decided to change was GWT. This meant going into the GWT core and figuring out what it was doing to break XHTML support. I found most of the answers here and here: it basically comes down to the fact that GWT uses document.write and document.body in its module loading code, neither of which is supported in the XHTML DOM. Rather than change the GWT core itself, I just fixed the generated loader once by hand, and then wrote a little patch which I ran each time I compiled. Here’s the patch, which as you can see is not very much:



45c45,47
< $doc_0.write('<script id="' + markerId + '"><\/script>');
---
> var scriptElement = document.createElement("script");
> scriptElement.setAttribute("id",markerId);
> document.getElementsByTagName('head')[0].appendChild(scriptElement);
48c50
< while (thisScript && thisScript.tagName != 'SCRIPT') {
---
> while (thisScript && thisScript.tagName.toUpperCase() != 'SCRIPT') {
167c169
< $doc_0.body.appendChild(iframe);
---
> document.getElementsByTagName('body')[0].appendChild(iframe);
286c288,291
< $doc_0.write('<script defer="defer">org_eclipse_draw2d_e4_examples.onInjectionDone(\'org_eclipse_draw2d_e4_examples\')<\/script>');
---
> var scriptElement = document.createElement("script");
> scriptElement.setAttribute("defer","defer");
> scriptElement.text = "org_eclipse_draw2d_e4_examples.onInjectionDone('org_eclipse_draw2d_e4_examples')";
> document.getElementsByTagName('head')[0].appendChild(scriptElement);

    Now, I strongly suspect there would be problems using GWT's widget library in an XHTML document context, as it probably relies on innerHTML. But for the purposes of getting GWT's basic module loading and DOM API up and running, this small patch was perfectly sufficient. I would be very happy to see it integrated into the GWT core and pushed upstream, and I imagine a lot of SVG developers would be as well.

Hacking on SWT APIs confuses GWT

    There was another issue involving GWT, namely that hacking on APIs in the SWT namespace seems to confuse it a lot. When I attempted to launch Hosted mode, it complained about methods that were missing from my emulated SWT classes. In any case, this meant that I couldn't use GWT Hosted mode, and so I did all of my debugging on the generated JavaScript code in Firefox and Firebug. This was challenging at first, but became easier as I grew better acquainted with the kind of code GWT produces and the most common errors I could run into.

Zero-Argument Constructors on Figures

    In my implementation of Draw2d, every Figure is supposed to wrap a <T extends SVGElement>, and the only way to create new SVG elements is through the Document factory. What I would have preferred to do was use dependency injection and pass a handle to a new DOM node into each new Figure via its constructor. Unfortunately, the Figure API only has a zero-argument constructor, so this was not possible without changing the API. My solution was somewhat evil: simply use a “global variable”, namely the statically exposed DOM.getDocument() method, to obtain the document factory inside the constructor. This is similar to what you would see in pure JavaScript, though (document is a global variable there), so I think it’s not so bad.
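
    As a concrete illustration of that workaround — with invented names, not the actual e4 code — the constructor simply reaches for the statically exposed document. Here, Figure, Element, Document and DOM stand in for the project's gwt-dom-based wrappers, and the sketch assumes the wrapped Document exposes a DOM-style createElementNS factory.

// Sketch only: a Figure subclass obtaining its wrapped SVG element through the
// statically exposed document, since the zero-argument constructor leaves no
// way to inject the DOM node from outside.
public class SVGRectFigure extends Figure {

    private final Element svgElement; // stands in for <T extends SVGElement>

    public SVGRectFigure() {
        // "Global variable" access, analogous to the global `document` in
        // plain JavaScript: ask the statically exposed factory for the document...
        Document doc = DOM.getDocument();
        // ...and create the element this figure wraps (assuming a DOM-style
        // createElementNS factory method on the wrapped Document).
        this.svgElement = doc.createElementNS("http://www.w3.org/2000/svg", "rect");
    }
}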

Considerations about future work

GWT vs. Java2Script

    My experiences with GWT in Phase 1 were not very favorable. After spending 1.5 months, I was still not able to get the code already checked into e4 to compile.

    After that experience, I found it much easier to get set up with Java2Script. It compiled all of my Java code to JavaScript transparently and without complaint, and it had excellent integration with Eclipse, especially with regard to building my code (it actually hooks into the incremental compiler that comes with JDT!). This spared me the constant edit-compile-debug cycle one experiences with GWT, which was very refreshing.

    However, while compiling a large body of Java code to JavaScript was very easy with Java2Script, I ran headlong into bugs in the Java2Script compiler. It would throw runtime errors in the core library that were highly time-consuming for me to debug.

    I also wasn’t very favorably disposed to the way Java2Script handles native JavaScript embedding, compared to GWT’s JSNI. Java2Script uses ScriptDoc annotations placed before empty braces, with the JavaScript written inside the comment. To be fair, this was very easy to set up and use, and, while not perfect, I initially felt it was much easier to read than JSNI.

    Unfortunately, there are two problems with Java2Script’s method of native JavaScript injection compared to JSNI. First, I feel it encourages poor coding practices: rather than having the native JS separated out into its own method, where it is clearly marked as native and encapsulated, you instead find JavaScript code mixed intermittently with the Java code. For an example, see the Java2Script implementation of org.eclipse.swt.widgets.Display. I find this style of programming very difficult to understand and not very maintainable. The second reason I came to prefer JSNI is that the awkward, ugly constructions JSNI uses to preserve type information actually serve a useful purpose: the compiler can do more useful checks at compile time to prevent run-time bugs, and that information also matters for the way GWT optimizes the generated JavaScript code.
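
    To make the comparison concrete, here are the two embedding styles side by side for a trivial native call. The method names are invented, and the @j2sNative block reflects my understanding of the Java2Script pattern rather than code from either project.

public final class NativeEmbeddingExamples {

    // GWT JSNI: the JavaScript lives in a special comment on a method that is
    // explicitly declared native, so it stays clearly marked and encapsulated.
    public static native String windowNameViaJsni() /*-{
        return $wnd.name;
    }-*/;

    // Java2Script: the JavaScript sits in a ScriptDoc-style @j2sNative comment
    // placed directly before a block, mixed in with ordinary Java code.
    public static String windowNameViaJ2s() {
        /**
         * @j2sNative
         * return window.name;
         */
        {
            return null; // plain-Java fallback; the compiler substitutes the comment's body
        }
    }
}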

    My mentor and I decided we needed a rock-solid cross-compiler, and for that reason elected to revisit GWT, this time moving one layer up the stack and focusing directly on Draw2d rather than SWT. As for my initial difficulties: once I adopted a more conservative approach in Phase 3, I did not have any trouble compiling a fairly complex project that leveraged an existing body of Java code, and I have yet to experience any compiler bugs in GWT.

    Also, GWT should theoretically produce code that loads and runs faster than Java2Script’s. However, with this gain in speed you lose some flexibility: dynamic class loading (Class.forName) is impossible in GWT, while it works perfectly well in Java2Script. Other forms of reflection should be possible in both.
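
    For example (with an invented class name), the following compiles and runs under Java2Script or on the JVM, but has no equivalent under the GWT compiler, which needs to see every reachable class at compile time:

public final class DynamicLoadingExample {
    // Resolve and instantiate a class by its name at run time.
    public static Object createByName(String className) throws Exception {
        return Class.forName(className).newInstance();
    }
}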

    An optimal middle ground might be to take Java2Script’s SWT implementation and port it to GWT. This would be very challenging, though, primarily because of the inlined native JavaScript I mentioned above.

SVG vs. Canvas

    The approach we took in Phase 2 was to implement one immediate-mode graphics API in terms of another: SWT GC on top of HTML Canvas. As we had suspected, reconciling these APIs was not very difficult, and I had some success with it, as you can see in the demo here.
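
    As a rough illustration of what that layering looks like (invented names, not the Phase 2 code; it assumes GWT's JavaScriptObject and JSNI), a single GC-style call maps quite directly onto the Canvas 2D rendering context:

import com.google.gwt.core.client.JavaScriptObject;

// Sketch only: an SWT-GC-style drawLine realized on top of the HTML Canvas
// 2D rendering context.
public class CanvasGC {

    private final JavaScriptObject context; // a CanvasRenderingContext2D

    public CanvasGC(JavaScriptObject context2d) {
        this.context = context2d;
    }

    public void drawLine(int x1, int y1, int x2, int y2) {
        nativeDrawLine(context, x1, y1, x2, y2);
    }

    private static native void nativeDrawLine(JavaScriptObject ctx,
            int x1, int y1, int x2, int y2) /*-{
        ctx.beginPath();
        ctx.moveTo(x1, y1);
        ctx.lineTo(x2, y2);
        ctx.stroke();
    }-*/;
}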

    One difficulty with this approach, however, was that a common pattern in SWT is to attach a PaintListener to a Drawable (usually a Canvas) and then put your drawing logic there. HTML Canvas does not give you native paint events, so these would need to be emulated somehow. I moved on to Phase 3 before I resolved this.
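
    One way such emulation might look, sketched with invented names and building on the CanvasGC sketch above, is to keep the registered listeners in the widget and fire them from an explicit redraw() call, which stands in for the paint event the browser never delivers:

import java.util.ArrayList;
import java.util.List;

// Sketch only: SWT-style paint listeners on top of an HTML Canvas, where
// redraw() plays the role of the missing native paint event.
public class EmulatedPaintCanvas {

    public interface PaintCallback {
        void paint(CanvasGC gc);
    }

    private final List<PaintCallback> listeners = new ArrayList<PaintCallback>();
    private final CanvasGC gc;

    public EmulatedPaintCanvas(CanvasGC gc) {
        this.gc = gc;
    }

    public void addPaintCallback(PaintCallback callback) {
        listeners.add(callback);
    }

    // Called whenever the widget is invalidated (resize, model change, ...);
    // this is where the drawing logic attached by clients actually runs.
    public void redraw() {
        for (PaintCallback callback : listeners) {
            callback.paint(gc);
        }
    }
}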

Draw2d and SVG, on the other hand, both have much bigger APIs, and are conceptually different from one another in many ways. It is significantly more challenging to implement one API in terms of the other and ensure that the two have identical semantics.

    Still, a retained-mode API is a necessary part of the stack we are trying to build, and the only question is what the best way to get there is. One consideration that works in SVG’s favor is speed: all things being equal, an implementation of a retained-mode API in C++ (i.e. the browser’s native SVG support) seems likely to be faster than one in pure JavaScript, even JavaScript highly optimized by GWT. Perhaps not, though… perhaps a lightweight system with a single event handler and dispatcher (like org.eclipse.draw2d.LightweightSystem) would be faster than the slow DOM with all of its event listeners. This is worth investigating.
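
    A lightweight system in that sense might look roughly like this (invented names, and nothing to do with the real LightweightSystem internals): a single listener on the root element, with hit testing and dispatch done in plain compiled Java rather than one DOM listener per node.

import java.util.ArrayList;
import java.util.List;

// Sketch only: a single root event handler dispatching to lightweight figures,
// instead of every node in the scene carrying its own DOM event listeners.
public class LightweightDispatcher {

    public interface LightFigure {
        boolean contains(int x, int y); // hit test in root coordinates
        void onClick(int x, int y);
    }

    private final List<LightFigure> figures = new ArrayList<LightFigure>();

    public void addFigure(LightFigure figure) {
        figures.add(figure); // later figures are painted (and hit) on top
    }

    // Wired up once to a click listener on the root canvas or SVG element.
    public void dispatchClick(int x, int y) {
        for (int i = figures.size() - 1; i >= 0; i--) { // topmost figure wins
            LightFigure figure = figures.get(i);
            if (figure.contains(x, y)) {
                figure.onClick(x, y);
                return;
            }
        }
    }
}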

Where is this being hosted?

    Here: https://eclipse-gwf.svn.sourceforge.net/svnroot/eclipse-gwf/p3/

    Right now you need a few libraries that are not included in that repo. I have a releng project, almost done, which I’m planning to commit soon, and I will also post explicit build instructions later.

    I’m going to put some compiled examples up on my personal page as well.

It has been a good project, and I hope I have the opportunity to do more with it in the future.
