June 28th, 2010

Google Summer of Code, Update 3: More Live Demos

Just a quick update this time. scxml-js is moving right along, as I’ve been adding support for new features at a rate of roughly one per day. Today I reached an interesting milestone: scxml-js is now as featureful as the old SCCJS compiler I had previously been using in my research. This means that I can now begin porting the demos and prototypes I constructed using SCCJS to scxml-js, as well as begin creating new ones.

New Demos

Here are two new, simple demos that illustrate how scxml-js may be used to describe and implement the behaviour of web user interfaces (tested in recent Firefox and Chromium; they will definitely not work in IE due to their use of XHTML):

Both examples use state machines to describe and implement drag-and-drop behaviour of SVG elements. The first example is interesting, because it illustrates how HTML, SVG, and SCXML can be used together in a single compound document to declaratively describe UI structure and behaviour. The second example illustrates how one may create state machines and DOM elements dynamically and procedurally using JavaScript, as opposed to declaratively using XML markup. In this example, each dynamically-created element will have its own state machine, hence its own state.
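The second demo's idea can be sketched by hand. The following is a minimal, hand-rolled sketch of the drag-and-drop statechart, not the scxml-js API: each dynamically created element gets its own machine instance, hence its own independent state.

```javascript
// Hand-rolled sketch of the idle <-> dragging statechart from the demos.
// This is NOT the scxml-js API, just the same idea expressed directly.
function makeDragMachine() {
  var state = "idle";
  return {
    getState: function () { return state; },
    // Feed the machine DOM-style event names to drive transitions.
    dispatch: function (eventName) {
      if (state === "idle" && eventName === "mousedown") {
        state = "dragging";
      } else if (state === "dragging" && eventName === "mouseup") {
        state = "idle";
      }
      // A "mousemove" while dragging would update the SVG element's
      // position; omitted here since there is no DOM in this sketch.
    }
  };
}

// Two machines, two independent states:
var a = makeDragMachine();
var b = makeDragMachine();
a.dispatch("mousedown");
// a is now "dragging" while b is still "idle"
```

In the real demos, scxml-js generates an equivalent machine from SCXML markup, and DOM event listeners simply forward events into each element's machine.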

I think the code in these examples is fairly clean and instructive, and should give a good sense regarding how scxml-js may ultimately be used as a finished product.

June 23rd, 2010

Google Summer of Code 2010, Project Update 2

Here’s another quick update on the status of my Google Summer of Code project.

Finished porting IR-compiler and Code Generation Components to XSLT

As described in the previous post, I finished porting the IR-compiler and Code Generation components from E4X to XSLT.

Once I had this working with the Java XML transformation APIs under Rhino, I followed up with the completion of two related subtasks:

  1. Get the XSL transformations working in-browser, and across all major browsers (IE8, Firefox 3.5, Safari 5, Chrome 5 — Opera still to come).
  2. Create a single consolidated compiler front-end, written in JavaScript, that works in both the browser and in Rhino.

Cross-Browser XSL Transformation

Getting all XSL transformations to work reliably across browsers was something I expressed serious concerns about in my previous post. Indeed, this task posed some interesting challenges, and motivated certain design decisions.

The main issue I encountered was that browser support for xsl:import in XSL stylesheets, when transformations are invoked from JavaScript, is not very good. xsl:import works well in Firefox, but is currently broken in Webkit and Webkit-based browsers (see here for the Chrome bug report, and here for the Webkit bug report). I also had limited success with it in IE 8.
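Part of the difficulty is that the JavaScript entry points to XSLT differ per engine in the first place. The sketch below shows the kind of capability check a cross-browser front-end ends up doing; the global object is passed in as a parameter purely so the logic can be exercised outside a browser. The API names are the real ones (XSLTProcessor in Gecko/WebKit; MSXML via ActiveXObject in IE), but the function itself is illustrative, not scxml-js code.

```javascript
// Decide which XSLT strategy is available in the hosting environment.
// "global" is injected so this sketch is testable without a browser.
function pickXsltStrategy(global) {
  if (typeof global.XSLTProcessor !== "undefined") {
    return "xsltprocessor"; // Firefox, Safari, Chrome: importStylesheet + transformToDocument
  }
  if (typeof global.ActiveXObject !== "undefined") {
    return "msxml";         // IE: MSXML documents and transformNode
  }
  return "none";            // no in-browser XSLT available
}
```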

I considered several possible solutions to work around this bug.

First, I looked into a pure-JavaScript solution. In my previous post, I linked to the Sarissa and AJAXSLT libraries. A common task of JavaScript libraries is to abstract away browser differences, so the existence of several libraries that appeared to do just that for XSLT gave me a degree of confidence when I initially chose XSLT as a primary technology for implementing scxml-js. Unfortunately, on closer inspection during this development cycle, I found that Sarissa, AJAXSLT, and the other libraries designed to abstract away cross-browser XSLT differences (including Javeline and the jQuery XSL transform plugin) are not actively maintained. As web browsers are rapidly moving targets, maintenance is a major concern when selecting a library dependency, so a pure-JavaScript solution did not appear feasible. This left me to get the XSL transformations working using just the “bare metal” of the browser.

My next attempt was to use some clever DOM manipulation to work around the Webkit bug, in which xsl:import fails because frameless resources cannot load other resources. This meant that loading the SCXML document on its own in Chrome, with an xml-stylesheet processing instruction pointing to the code generation stylesheet, did generate code correctly. My idea, then, was to use the DOM to create an invisible iframe, load into it the SCXML document to transform along with the requisite processing instruction, and read the transformed JavaScript back out. I had some success with this, but it proved brittle: I was able to get it to work, but not reliably, and it was difficult to know when and how to read the transformed JavaScript out of the iframe. In any case, my attempts can be found in this branch here.

My final, and ultimately successful, attempt was to use XSL to preprocess the stylesheets that used xsl:import, combining the stylesheet contents while still respecting the semantics of xsl:import. This was not too difficult, and only took a bit of effort to debug. You can see the results here. Note that there may be some corner cases of XSLT that this script does not handle, but it works well for the existing scxml-js code generation backends. This is the solution on which I ultimately settled.
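To make the flattening idea concrete, here is a toy sketch of the preprocessing semantics. Stylesheets are modeled as plain JavaScript objects rather than real XML, and the actual scxml-js preprocessor does this with XSLT itself, so this is only an illustration of the precedence rule: imported templates are placed *before* the importer's own templates, and since XSLT 1.0 resolves same-priority conflicts in favor of the last matching template, the importing stylesheet still wins, as xsl:import requires.

```javascript
// Flatten a tree of stylesheets that use xsl:import into one stylesheet.
// Imported templates go first; the importer's own templates go last and
// therefore take precedence under XSLT's conflict-resolution rules.
function flatten(stylesheet) {
  var templates = [];
  (stylesheet.imports || []).forEach(function (imported) {
    templates = templates.concat(flatten(imported).templates);
  });
  return { templates: templates.concat(stylesheet.templates) };
}

// Example: "main" imports "base"; both define a template matching "state".
var base = { templates: [{ match: "state", body: "base-version" }] };
var main = { imports: [base],
             templates: [{ match: "state", body: "main-version" }] };
// In the flattened sheet, main's template comes last, so it wins.
```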

One thing that must still be done, given this solution, is to incorporate this stylesheet preprocessing into the build step. For the moment, I have done the quick and dirty thing, which is to check the preprocessed stylesheets into SVN.

It’s interesting to note that IE 8 was the easiest browser to work with in this cycle, as it provided useful and meaningful error messages when XSL transformations failed. By contrast, Firefox would return a cryptic error message without much useful information, and Safari/Chrome would not provide any error message at all, instead failing silently in the XSLT processor and returning undefined.

Consolidated Compiler Front-end

As I described in my previous post, a thin front-end to the XSL stylesheets was needed. For the purposes of running inside of the browser, the front-end would need to be written in JavaScript. It would have been possible, however, to write a separate front-end in a different language (bash, Java, or anything else), for the purposes of running outside of the browser. A design decision needed to be made, then, regarding how the front-end should be implemented:

  • Implement one unified front-end, written in JavaScript, which relies on modules that provide portable APIs, with implementations of those APIs that vary between environments.
  • Implement multiple front-ends, for browser and server environments.

I decided that, with respect to maintainability, it would be easier to maintain one front-end, written in one language, than two front-ends in different languages, so I chose the first option. This worked well, but I’m not yet completely happy with the result, as code for Rhino and code for the browser are mixed together in the same module. This means that code for Rhino is downloaded to the browser even though it is never called (see Transformer.js for an example of this). The same is true for code that targets IE versus other browsers. I believe I’ve thought of a way to use RequireJS to selectively download platform-specific modules, and this is an optimization that I’ll make in the near future.
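The planned split can be sketched as follows. The detection predicates are standard (a `window` global in browsers; a `java` global under Rhino via LiveConnect), but the module names are illustrative, not the real scxml-js layout:

```javascript
// Detect the hosting environment; "global" is injected so the logic is
// testable anywhere. In a real front-end this would run once at startup.
function detectEnvironment(global) {
  if (typeof global.window !== "undefined") return "browser";
  if (typeof global.java !== "undefined") return "rhino";
  return "unknown";
}

// Map the environment to a platform-specific module id. With RequireJS,
// this choice would drive which module actually gets loaded, so
// Rhino-only code never reaches the browser. Module names hypothetical.
function selectTransformerModule(env) {
  return { browser: "transformer-browser",
           rhino:   "transformer-rhino" }[env] || null;
}
```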

In-Browser Demo

The result of this work can be seen in this demo site I threw together:

This demo provides a very crude illustration of what a browser-based Graphical User Interface to the compiler might look like. It takes SCXML as input (top-most textarea), compiles it to JavaScript code (lower-left textarea, read-only), and then allows simulation from the console (bottom-right textarea and text input). For convenience, the demo populates the SCXML input textarea with the KitchenSink executable content example. I’ve tested it in IE8, Safari 5, Chrome 5, Firefox 3.5. It works best in Chrome and Firefox. I haven’t been testing in Opera, but I’m going to start soon.

Future Work

The past three weeks were spent porting and refactoring, which was necessary to facilitate future progress, and now there’s lots to do going forward. My feeling is that it’s now time to get back to the main work, which is adding important features to the compiler, starting with functionality still missing from the current implementation of the core module.

I’m going to be presenting this work at the SVG Open 2010 conference at the end of August, so I’m also keen to prepare some new, compelling demos that will really illustrate the power of Statecharts on the web.

May 20th, 2009

Convergence 1

Warning: Extremely Boring, Project-Specific Post

I worked really hard today, and I have lots to report. I started at 6:30am and worked straight through on GSoC until about 7:30pm. I also spent some time investigating alternate tooling, and I’ll talk a bit about this in another post.

So, to recap, we have this stack: Demos/GEF/Draw2d/SWT/JRE. The goal of this week is to get Draw2d to compile under GWT. After that, I’ll have to try to get it to run, but for now I’m just concerned with getting it to compile. This, of course, means figuring out what Draw2d’s dependencies are in SWT and the JRE, and, where they are not satisfied, writing stub code and small TODO comments. I will also attempt, where possible, to reuse the existing, non-functional code from the previous project that used GWT to compile SWT to JavaScript.

To summarize my experiences of today: JRE == SUCCESS, SWT == FAIL.

First, some background: GWT already provides some JRE emulation, but it is not sufficient to compile the SWT codebase. To get SWT to compile, one must at least implement certain stub interfaces that GWT does not provide. There are additional issues, such as reflection, that may be more serious; I’ll talk about those in a later post. The extensions to the GWT emulation classes live in the project org.eclipse.swt.e4.jcl. Left over from the previous project were three GWT modules which I was never able to compile. Additionally, it was unclear to me how these modules were supposed to compile, as they used the GWT module “source” tag rather than the “super-source” tag, which, according to all of the documentation I have read, is what one is supposed to use when implementing JRE classes.

Today, I successfully compiled the previous project’s JCL classes. To accomplish this, I applied Technique 1 from my previous post: I added a prefix ("e4") to the JCL packages, where before there was no prefix, and I used <super-source path="e4"/> in my GWT modules. I had to futz a bit to discover in which order I should import the modules, but after that, they compiled without complaint. w00t!
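For reference, a module file using this technique looks roughly like the following. This is an illustrative sketch, not the exact e4 module (the inherited module and paths are assumptions); the key point is that super-source tells GWT to treat files under the "e4" prefix as if they were rooted at the packages they emulate, which is how JRE classes get overridden.

```xml
<!-- Hypothetical GWT module for the JCL emulation classes.
     Files under src/e4/java/... are compiled as if they were
     the real java.* packages. -->
<module>
  <inherits name="com.google.gwt.core.Core"/>
  <super-source path="e4"/>
</module>
```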

//TODO: insert complete project structure here

Note that, since yesterday, I have been developing this prototype outside of the Eclipse environment. My toolset has consisted of GNU Screen, Vim, a Vim plugin called VimExplorer, and Ant. This has allowed me to ignore all of the IDE-specific confusion that I have so far encountered. My next post will be about my experiences using some of these tools.

I was feeling very optimistic after succeeding in getting JCL to compile, so I set it as my goal to get the necessary SWT libraries to compile as well. Unfortunately, I failed in this task because, I believe, I took the wrong approach. The approach I took was to attempt to directly reuse the SWT emulation classes left over from the previous project, just as I had reused the previous project’s JCL classes. Unfortunately, these SWT emulation classes had many missing dependencies. My strategy was to just keep hacking on it (adding missing classes, stub methods, etc.) until the thing worked. I spent most of the afternoon working on this, probably about 4 hours. Ultimately, I ended up pulling in most of the code in “org.eclipse.swt/Eclipse SWT/common”. I didn’t get to the end of the dependency chain, but I think I got far enough to recognize a trend that I wasn’t happy with, namely, pulling in dependencies that in turn rely on other dependencies, and so on. It should be a major priority to limit dependencies: the more deps I cut out, the less complex the project will be. Given that this project is still at the prototyping stage, reducing complexity is of critical importance.

Note that I also tried compiling “org.eclipse.swt/Eclipse SWT/common”, but this pulled in some native code which GWT was unhappy with. So, I really do need to be selective.

The alternative to today’s approach is to start at the top of the stack (work top-down) and discover what the dependencies are for the Draw2d demos; then, find out what the dependencies are for Draw2d; then, very carefully, attempt to pull in those deps from org.eclipse.swt, or stub them out. I intend to use Doxygen to figure out the complete dependency chain. This will be tomorrow’s work.
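For the Doxygen pass, a configuration along these lines should be enough to get call and caller graphs for the dependency analysis. The input paths are illustrative guesses at the repository layout, and HAVE_DOT requires Graphviz to be installed:

```
# Sketch of a Doxyfile for mapping Draw2d's dependency chain.
INPUT            = org.eclipse.draw2d/src
RECURSIVE        = YES
EXTRACT_ALL      = YES
HAVE_DOT         = YES
CALL_GRAPH       = YES
CALLER_GRAPH     = YES
```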

This work is licensed under the GPL.