MODELLING AND SIMULATION, WEB ENGINEERING, USER INTERFACES
October 7th, 2012

New projects: scxml-viz, scion-shell, and scion-web-simulation-environment

I have released three new projects under an Apache 2.0 license:
  • scxml-viz: A library for visualizing SCXML documents.
  • scion-shell: A simple shell environment for the SCION SCXML interpreter. It accepts SCXML events via stdin, and thus can be used to integrate SCXML with Unix shell programming. It also integrates scxml-viz, so it can be used for graphical simulation of SCXML models.
  • scion-web-simulation-environment: A simple proof-of-concept web sandbox environment for developing SCXML. Code can be entered on the left, and visualized on the right. Furthermore, SCION is integrated, so code can be simulated and graphically animated. A demo can be found here: http://goo.gl/wG5cq
August 4th, 2012

Thinkpad W520 Multi-Monitor nVidia Optimus with Bumblebee on Ubuntu 12.04

Last night I decided to upgrade from Ubuntu 11.10 to 12.04 on my Thinkpad W520. The main reason for this was that my current setup was making suboptimal use of the hardware, and due to recent advances which I found documented in several blog posts, it seemed I could improve this situation. The goal of this post, then, is to document what I hoped to achieve, and how I arrived there, so that in the future I’ll be able to remember what the heck I did to set this all up.

Project Goals

I purchased the Thinkpad W520 back in November, because my fanless Mini Inspiron netbook kept overheating when I left it to run performance benchmarks related to my research. Ubuntu 11.10 worked pretty well on the W520 out of the box, but there were two major outstanding compatibility issues.

First, the W520 comes with nVidia Optimus graphics. In this setup, the laptop has a discrete nVidia card and an on-board Intel graphics card, and the operating system is able to enable and disable the nVidia card in software in order to save power. nVidia has explicitly stated that they will not support Optimus in Linux, which six months ago meant that there were only two options for Linux users: enable only the discrete nVidia graphics in the BIOS, or only the integrated Intel graphics.

When the Intel graphics were enabled in the BIOS, the open source Intel integrated graphics drivers worked like a dream – 3D acceleration, flawless suspend/resume support, and everything was just a superb, rock-solid experience. The battery life was also excellent. For someone like me who mostly uses the laptop to write software and does not care about 3D acceleration, this would have been an ideal choice, except for one major flaw, which is that the external display ports on the W520 (VGA and DisplayPort) are hardwired to the nVidia card, so using an external monitor is impossible when only Intel graphics are enabled in the BIOS. I use an external monitor at home, and so this meant Intel graphics were a nonstarter for me.

As it was not possible to use Optimus or Intel graphics, this left me with only one choice, which was to use the nVidia graphics. This process went something like this:

  1. Tried the nouveau driver. This worked pretty well, but would hang X on suspend/resume. Solid suspend/resume support is a must-have, so I eliminated this option.
  2. Tried to install the nVidia binary driver in Ubuntu using the nice graphical interface (jockey-gtk). Ultimately, this did not work. Uninstalled the binary driver using jockey-gtk.
  3. Tried to install the nVidia binary driver by running the Linux installer shell script from nVidia. This felt evil, because you have no idea what the script is doing to your system, but everything installed correctly, and after a reboot, the laptop finally had working graphics.

The binary nVidia drivers were pretty solid: 3D acceleration, multi-monitor support, suspend/resume, and VDPAU video acceleration all worked great. My laptop had an uptime of several months under this configuration. Unfortunately, however, battery life was pretty poor, clocking in at about 3 hours.

Furthermore, and more seriously, the laptop firmware has a bug where Linux would hang at boot when both VT-x (Intel hardware virtualization technology) and nVidia graphics were enabled in the BIOS. This was pretty annoying, as I tend to run Windows in a VM in Virtualbox on Linux for testing compatibility with different versions of Internet Explorer. I believe this bug is now being tracked by Linux kernel developers, who are working around this issue by disabling X2APIC on boot, but Lenovo has refused to fix this bug, or acknowledge its existence. Not cool, Lenovo.

This meant that it was not possible to have working multi-head support together with reasonable battery life and VT-x support. Not optimal.

Bumblebee

Bumblebee is a project to bring support for nVidia Optimus to Linux. It basically renders a virtual X server on the nVidia card, and then passes the buffer to the Intel card which dumps it to the screen. Apparently, this is pretty much how Optimus works on Windows as well.

The advantage to using Bumblebee is that, theoretically, you would be able to have the excellent battery life of the Intel graphics, but also have 3D acceleration and multi-monitor support from the nVidia graphics.

I tried Bumblebee 6 months ago, but was unable to get it to work. The project had also been forked around that time, and it wasn’t clear which fork to follow.

However, the following blog posts led me to believe that the situation had changed, and a multi-monitor setup could be achieved using Ubuntu 12.04 and Bumblebee:

I decided to see if I could get this to work myself, and ultimately I was successful. My current setup is now as follows:

  • Ubuntu 12.04 x64
  • Optimus Graphics and VT-x enabled in BIOS
  • External monitor, which can be enabled or disabled on-demand, and works consistently after suspend/resume
  • Bumblebee set to use automatic power switching, so the nVidia card is disabled when not in use.
  • Xmonad and Unity2D desktop environment

The remainder of the blog post documents the process I went through in order to obtain this optimal setup.

Multi-Monitor Support with Optimus and Bumblebee on Ubuntu 12.04

I primarily followed the process described on Sagar Karandikar’s blog, up to, but not including, his changes to /etc/bumblebee/bumblebee.conf.

Sagar says to set the following parameters in /etc/bumblebee/bumblebee.conf:
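In rough form (reconstructed from the description below; the exact section placement within bumblebee.conf may differ), the settings amount to:

Driver=nvidia
PMMethod=none
KeepUnusedXServer=true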

And then enable the external monitor as follows:
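In outline, the commands were along these lines (the output names and resolution here are placeholders; the exact xrandr invocation depends on the particular setup):

optirun true
xrandr --output VIRTUAL1 --mode 1920x1080 --right-of LVDS1
screenclone -d :8 -x 1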

As far as I understand it, these parameters set in /etc/bumblebee/bumblebee.conf tell Bumblebee to use the nVidia proprietary driver (Driver=nvidia), keep the nVidia card turned on (PMMethod=none disables bumblebee power management), and perpetually run an X server (KeepUnusedXServer=true). Clearly this setup would have negative implications for battery life, as the nVidia card is kept on and active.

Running optirun true should then turn on the nVidia card and output to the external monitor. xrandr tells the X server where to put the virtual display, and screenclone clones the X server running on display :8 (the X server being run by Bumblebee on the nVidia card) to the Intel virtual display.

I found that this technique was really finicky. optirun true would enable the external display right after rebooting, but would often not enable the display in other situations, such as after a suspend/resume cycle. It wasn’t clear how to bring the nVidia card back into a good state where it could output to an external monitor.

At this point, I read a comment by Gordin on Sagar’s blog, as well as Gordin’s own blog post. In that post, he describes using bbswitch for power management in bumblebee.conf, and running the second display using optirun screenclone -d :8 -x 1. This has two advantages: a) power management is enabled on the nVidia card, so it is turned off when not in use; and b) seemingly increased reliability, as the nVidia card will be enabled when screenclone is run, and disabled when the screenclone process is terminated. Based on these instructions, I came up with the following adapted solution.

Set the following settings in /etc/bumblebee/bumblebee.conf:
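Roughly speaking, the changed settings look like this (again, the exact section placement within bumblebee.conf may vary):

Driver=nvidia
PMMethod=bbswitch
# KeepUnusedXServer stays at its default of false, so the card powers down when idle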

The following shell script will enable the external monitor. ^C will disable the external monitor:
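Here is the script in rough form; the output names and resolution are placeholders for my particular setup:

#!/bin/sh
# position the Intel driver's virtual output to the right of the laptop panel
xrandr --output VIRTUAL1 --mode 1920x1080 --right-of LVDS1
# clone the nVidia X server on display :8 onto the virtual output;
# optirun powers the nVidia card up for the lifetime of screenclone
optirun screenclone -d :8 -x 1
# screenclone has been interrupted (^C); turn the virtual output off again
xrandr --output VIRTUAL1 --off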

This setup now works great, although screenclone does behave a bit strangely sometimes. For example, when switching workspaces, screenclone may require you to click in the workspace on the second desktop before it updates its graphics there. There were a few other minor quirks I found, but ultimately it seems like a solid and reliable solution.

Desktop Environment: Xmonad and Unity2D

I like Unity mostly because it provides a good global menu, but I find most other parts of it, including window management and the launcher, to be clunky or not very useful. Furthermore, certain compiz plugins that would improve window management, such as the Put plugin, seem to be completely broken on Ubuntu out of the box.

I therefore set up my desktop environment to use the Unity2D panel and the Xmonad window manager. I primarily followed this guide to set this up: http://www.elonflegenheimer.com/2012/06/22/xmonad-in-ubuntu-12.04-with-unity-2d.html

The only change I made was to /usr/bin/gnome-session-xmonad. I’m not sure why, but xmonad was not getting started with the desktop session. I therefore started it in the background in the /usr/bin/gnome-session-xmonad script, along with xcompmgr, a program which provides compositing when running non-compositing window managers like Xmonad. xcompmgr allows things like notification windows to appear translucent.

For a launcher, I’m now trying out synapse, which can be set to run when the gnome session is started.

July 29th, 2012

Syracuse Student Sandbox Hackathon Recap

Yesterday I participated in a hackathon at the Syracuse Student Sandbox. This blog post is meant to provide a quick recap of the interesting technical contributions that came out of this event.

All source code mentioned in this article is available on Github.

What I Did

My project idea was to develop a voice menu interface to the Archive.org live music archive using Twilio. The idea was that you would call a particular phone number, and be presented with a voice menu interface. There would be options to listen to the Archive.org top music pick, or to perform a search.

Core Technology

Archive.org

Archive.org exposes a very nice, hacker-friendly API. It is fairly well-documented here. I only encountered a few gotchas, which are that the API to the main page does not return valid JSON, and so it must be parsed using JavaScript’s eval; and, the query API is based on Lucene query syntax, which I did not find documented anywhere.
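To give a flavour of the API, here is a minimal sketch of a search query from Node.js; the endpoint and query fields shown are my own illustration, not necessarily the exact calls the application uses:

var http = require('http');

// Lucene-style query against the live music archive (the "etree" collection)
var query = encodeURIComponent("collection:(etree) AND creator:(Grateful Dead)");
var searchUrl = 'http://archive.org/advancedsearch.php?q=' + query +
                '&fl[]=identifier&rows=10&output=json';

http.get(searchUrl, function (res) {
    var body = '';
    res.setEncoding('utf8');
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () {
        // the search API returns JSON; only the main page API needs eval
        var results = JSON.parse(body);
        console.log(results.response.docs);
    });
});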

Twilio

Developing a Twilio telephony application is just like developing a regular web application. When you register with Twilio, they assign you a phone number, which you can then point to a web server URL. When someone calls the number, Twilio performs an HTTP request (either GET or POST, depending on how you have it configured) to the server which you specified.

Instead of returning HTML, you return TwiML. Each tag in a TwiML document is a verb which tells Twilio what to do. TwiML documents can be modelled as state machines, in that there’s a particular flow between elements. For certain tags, Twilio will simply flow to the next tag after performing the action associated with that tag; however, for other tags, Twilio will perform a request (again, either GET or POST) to a URL specified by the tag’s “action” attribute, and will execute the TwiML document returned by that request. This is analogous to submitting a form in HTML.

Each HTTP request performed by Twilio will submit some data, like the caller’s phone number and location, as well as a variable which allows the server to track the session.

There were a few instances of undocumented behaviour that I encountered, but overall developing a TwiML application was as easy as it sounds. After I had my node.js hosting set up, I had an initial demo working in less than an hour, in which the user could call in, and would be able to hear the archive.org live music pick. This was simply a matter of using Archive.org’s API to retrieve the URL to the file of the top live music pick, and passing this URL to Twilio in a <Play> element. Twilio was then able to stream the MP3 file directly from Archive.org.
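For illustration, the TwiML returned for that first demo would look something like this (the MP3 URL is a placeholder):

<?xml version="1.0" encoding="UTF-8"?>
<Response>
    <Say>Playing the archive dot org live music pick.</Say>
    <Play>http://archive.org/download/some-show/some-track.mp3</Play>
</Response>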

Main Technical Contribution: Using SCXML and SCION to Model Navigation in a Node.js Web Application

I developed the application using Node.js and SCION, an SCXML/Statecharts interpreter library I’ve been working on. In addition to providing a very small module for querying the archive.org API using Node.js, I feel the main technical contribution of this project was using SCXML to model web navigation, and I will elaborate on that contribution in this section.

Using Statecharts to model web navigation is not a new idea (see StateWebCharts, for example), however, I believe this is the first time this technique has been used in conjunction with Node.js.

From a high level, SCXML can be used to describe the possible flows between pages in a Web application. SCXML allows one to model these flows explicitly, so that every possible session state and the transitions between session states are well-defined. Another way to describe this is that SCXML can be used to implement routing which changes depending on session state.

A web server accepts an HTTP request as input and asynchronously returns an HTTP response as output. Each HTTP request can contain parameters, encoded as query parameters on the URL in the case of a GET request, or as POST data for a POST request. These parameters can contain data that allows the server to map the HTTP request to a particular session, as well as other data submitted by the user.

These inputs to the web server were mapped to SCXML in the following way. First, an SCXML session was created for each HTTP session, such that subsequent HTTP requests would be dispatched to this one SCXML session, and this SCXML session would maintain all of the session state.

Each HTTP request was turned into an SCXML event and dispatched as input to the SCXML session corresponding to the session of that HTTP request. An SCXML event has “name” and “data” properties. The url of the request was used as the event name, and the parsed query parameters were used as the event data. Furthermore, the Node.js HTTP request and response objects were also included as event data.
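A minimal sketch of this mapping is shown below; the session lookup is a hypothetical helper, and the gen() call assumes SCION's event-dispatch API:

var http = require('http');
var url = require('url');

http.createServer(function (req, res) {
    var parsed = url.parse(req.url, true);

    // hypothetical helper: one SCION interpreter per HTTP session
    var interpreter = getInterpreterForSession(req);

    // event name = request path; event data = parsed query parameters,
    // plus the node.js request and response objects
    interpreter.gen({
        name: parsed.pathname,
        data: { params: parsed.query, request: req, response: res }
    });
}).listen(1337);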

In this implementation, SCXML states were mapped to individual web pages, which were returned to the user on the HTTP response.

The SCXML document modelling navigation can be found here. Here is a graphical rendering of it (automatically generated using scxmlgui):

Statecharts Diagram

<?xml version="1.0" encoding="UTF-8"?>
<scxml 
	xmlns="http://www.w3.org/2005/07/scxml"
	version="1.0"
	profile="ecmascript">

    <datamodel>
        <data id="serverUrl" expr="'http://jacobbeard.net:1337'"/>
        <data id="api"/>
    </datamodel>

    <script src="./playPick.js"/>
    <script src="./performSearch.js"/>

    <state id="initial_default">
        <transition event="init" target="waiting_for_initial_request">
            <assign location="api" expr="_event.data"/>
        </transition>
    </state>

    <state id="waiting_for_initial_request">
        <transition target="root_menu" event="/"/>
    </state>

    <state id="root_menu">
        <onentry>
            <log label="entering root_menu" expr="_events"/>

            <!-- we want to send this as a response. hack SCION so we can do that somehow -->
            <Response>
                <Gather numDigits="1" action="number_received" method="GET">
                    <Say>Root Menu</Say>
                    <Say>Press 1 to listen to the archive dot org live music pick. Press 2 to search the archive dot org live music archive.</Say>
                </Gather>
            </Response>
        </onentry>

        <transition target="playing_pick" event="/number_received" cond="_event.data.params.Digits === '1'"/>
        <transition target="searching" event="/number_received" cond="_event.data.params.Digits === '2'"/>

        <!-- anything else - catchall error condition -->
        <transition target="root_menu" event="*">
            <Response>
                <Gather numDigits="1" action="number_received" method="GET">
                    <Say>I did not understand your response.</Say>
                    <Say>Press 1 to listen to the archive dot org live music pick. Press 2 to search the archive dot org live music archive.</Say>
                </Gather>
            </Response>
        </transition>
    </state>

    <state id="playing_pick">
        <!-- TODO: move the logic in playPack into SCXML -->
        <onentry>
            <log label="entering playing_pick"/>
            <script>
                playPick(_event.data.response,api);
            </script>
        </onentry>

        <!-- whatever we do, just return -->
        <transition target="root_menu" event="*"/>
    </state>

    <state id="searching">
        <datamodel>
            <data id="searchNumber"/>
            <data id="searchTerm"/>
        </datamodel>

        <onentry>
            <log label="entering searching"/>
            <Response>
                <Gather numDigits="1" action="number_received" finishOnKey="*"  method="GET">
                    <Say>Press 1 to search for an artist. Press 2 to search for a title.</Say>
                </Gather>
                <Redirect method="GET">/</Redirect>
            </Response>

        </onentry>

        <transition target="receiving_search_input" event="/number_received" cond="_event.data.params.Digits === '1' || _event.data.params.Digits === '2'"> 
            <assign location="searchNumber" expr="_event.data.params.Digits"/>
        </transition>
        <transition target="root_menu" event="/"/> 
        <transition target="bad_search_number" event="*"/> 
    </state>

    <state id="receiving_search_input">
        <onentry>
            <Response>
                <Gather numDigits="3" action="number_received" method="GET">
                    <Say>Press the first three digits of the name to search for.</Say>
                </Gather>
                <Redirect method="GET">/</Redirect>
            </Response>

        </onentry>

        <transition target="performing_search" event="/number_received" cond="_event.data.params.Digits"> 
            <assign location="searchTerm" expr="_event.data.params.Digits"/>
        </transition>
        <transition target="bad_search_number" event="/number_received"/> 
        <transition target="root_menu" event="*"/> 
        
    </state>

    <state id="performing_search">
        <onentry>
            <script>
                performSearch(searchNumber,searchTerm,_event.data.response,api);
            </script>
        </onentry>
        
        <transition target="searching" event="/search-complete" />
        <transition target="searching" event="/artist-not-found" />
        <transition target="root_menu" event="*" />
    </state>

    <state id="bad_search_number">
        <onentry>
            <Response>
                <Say>I didn't understand the number you entered.</Say>
                <Redirect method="GET">/</Redirect>
            </Response>

        </onentry>

        <transition target="searching" event="/"/> 
    
    </state>

</scxml>

Note that the transition conditions do not appear in the above diagram, so I would recommend reading the SCXML document as well as the diagram.

In this model, the statechart starts in an initial_default state in which it waits for an init event. The init event is used to pass platform-specific API’s into the state machine. After receiving the init event, the statechart will transition to state waiting_for_initial_request, where it will wait for an initial request to url “/”. After receiving this request, it will transition to state root_menu. Of particular interest here are the actions in the <onentry> tag. The TwiML document to be returned to the user is inlined directly as a custom action within <onentry>, and is executed by the interpreter by writing that document to the node.js response object’s output stream. This document will tell Twilio to wait for the user to press a single digit, and to submit a GET request to URL “/number_received” when the request completes.

There are three transitions originating from root_menu. The first targets state playing_pick, the second targets state searching, and the third loops back to state root_menu. The first two transitions have a cond attribute, which is used to inspect the data sent with the request. So, for example, if the user presses “1″, Twilio would submit a GET request to URL “/number_received?Digits=1″ (along with other URL parameters, which I have omitted for simplicity). This would be transformed into the SCXML event {name : '/number_received', data : { params : { Digits : '1' } }}, which would then activate the transition to playing_pick. The system would then enter playing_pick, which would call a JavaScript function to query the Archive.org API to retrieve the URL to Archive.org’s top song pick, and would output a TwiML document on the HTTP response object containing the URL to that song.

If the user pressed a “2″ instead of a “1″, then the cond attribute would cause the statechart to activate the transition to state searching instead of playing_pick. If the user pressed anything else, or attempted to navigate to any other URL, then the wildcard “*” event on the third transition would simply cause the statechart to loop back to root_menu.

The rest of the application is implemented in a similar fashion.

Comments and Critiques

While overall, I feel this effort was successful, and demonstrates a technique that could be used to develop larger and more complex applications, there are ways I would like to improve it.

First, while I feel that being able to inline the response as custom action code in the entry action of a state is a rather elegant approach, it would be useful to make the inline XML templated so that it can use data from the system’s datamodel.

Second, there’s a disconnect between the action specified in the returned document (the url to which the document will be submitted), and the transitions originating from the state corresponding to that document. For example, it would be possible to return a document with a form with action attribute “foo”, and have a transition originating from that state with event /bar. This may not be a desirable behaviour, as there’s no legal way for the returned web page to submit to URL “/bar”. The action attribute on the returned form can be understood as specifying the SCXML events that that page will be able to generate, or the possible flow between pages within the web application, and so it might be better to somehow model the connection between returned form actions and transition events more explicitly.

Third, there are several features of SCXML that this demo did not make use of, including state hierarchy, parallel and history states. Uses for these features may emerge in the development of a more complex web application.

Fourth, there is currently quite a lot of logic in the action code called by the SCXML document. This includes multiple levels of asynchronous callbacks. This is not an ideal approach, as it means that even after an SCXML macrostep has ended, a callback within the action code executed within that macrostep may be asynchronously called by the environment. I feel this breaks SCION’s interpretation of Statecharts semantics, and may lead to unexpected behaviour. A better approach would be to feed the result of each asynchronous callback back into the state machine as an event, and use a sequence of intermediate states to model the flow between callbacks.

Fifth, and finally, I had a few technical difficulties with SCION, in that node.js’s require function was not working correctly in embedded action code. I worked around this by passing the required API’s around as a single object between functions in action code. I fixed this issue in SCION today.

Conclusion

The finished application can be demoed by calling (315) 254-2188. I’m going to leave it up until the account runs out of money, so feel free to try it.

I had a great time at the Hackathon, and I feel my participation was productive on multiple levels. I’m looking forward to further researching how SCXML and SCION can be applied to web application development.

November 30th, 2011

Master Thesis Mini-Update: Initial Release of SCION

I just wanted to quickly announce the release of SCION, a project to develop an SCXML interpreter/compiler framework suitable for use on the Web, and the successor to SCXML-JS.

The project page is here: https://github.com/jbeard4/SCION
Documentation, including demos, may be found here: http://jbeard4.github.com/SCION/

I welcome your feedback.

June 8th, 2011

Master’s Thesis Update 2: New Statecharts Project

I’m currently working on a chapter of my master’s thesis, which basically fleshes out and elaborates on the paper I wrote for the SVG Open 2010 conference. The goal is pretty much the same:

  1. write an optimizing Statechart-to-ECMAScript compiler
  2. describe optimization goals: execution speed, compiled code size and memory usage
  3. describe the set of optimizations I intend to perform
  4. write a comprehensive suite of benchmarks
  5. run benchmarks against as wide a set of ECMAScript implementations as possible
  6. analyze the results

In order to fulfill the first step, I wrote SCXML-JS, which started as a course project, and which I continued to develop during Google Summer of Code 2010. There were a number of things which SCXML-JS did well; for example, its design was flexible enough to support multiple transition selection algorithms, which made it possible to do performance testing of these different strategies across different JavaScript interpreters.

Unfortunately, SCXML-JS also had a number of shortcomings, so I’ve decided to start over and develop a new Statechart interpreter/compiler. This new interpreter would have the following improvements over SCXML-JS.

Separate out Core Interpreter Runtime

SCXML-JS generated a large chunk of boilerplate code that varied only in very small ways between optimizations. The other Statechart compiler I have worked on, SCC, worked the same way. One of the repercussions of this design choice is that it made it difficult to meaningfully compare the size of the code payload, as the points of variation would be intermixed with the boilerplate.

The goal of the new project, then, is to move that boilerplate out into its own module, which will effectively function as a standalone interpreter for Statecharts. Certain methods (e.g. selecting transitions, updating the current configuration) would then be parameterized, so that optimized methods that use fast data structures that have been compiled ahead-of-time (e.g. a state-transition table) could then be injected at runtime when the interpreter class is instantiated. This would allow the methods we would like to optimize to be neatly separated out from the core interpreter runtime, thus allowing these methods to be directly compared for performance and payload size.
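As a rough sketch of what I have in mind (the names here, including model and transitionTable, are illustrative rather than the final API), the interpreter might accept an optimized transition-selection function at instantiation time:

function Statechart(model, opts) {
    opts = opts || {};
    this._configuration = [model.initial];

    // generic default: scan the outgoing transitions of each active state
    this._selectTransitions = opts.selectTransitions || function (state, event) {
        return state.transitions.filter(function (t) { return t.event === event.name; });
    };
}

Statechart.prototype.gen = function (event) {
    var self = this, enabled = [];
    this._configuration.forEach(function (state) {
        enabled = enabled.concat(self._selectTransitions(state, event));
    });
    // ... compute the new configuration from the enabled transitions ...
};

// an ahead-of-time-compiled, table-driven selector injected at runtime
var sc = new Statechart(model, {
    selectTransitions: function (state, event) {
        return (transitionTable[state.id] || {})[event.name] || [];
    }
});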

I think that moving the generated boilerplate code out into a class or set of classes also aids hackability, as this is a more object-oriented approach, and overall is easier to read and comprehend.

Less code generation also means there is less motivation to use XSLT from top to bottom (SCXML-JS was about 90% implemented in XSLT). I have a lot of positive things to say about XSLT, but I think this will also be an advantage in terms of overall developer friendliness.

Focus on Semantics

There are many possible semantics that can be applied to Statecharts, and I decided with my advisor that it would be useful to orient the semantics of the new compiler to a set of possible semantic choices described in Big-Step Semantics, by Shahram Esmaeilsabzali, Nancy A. Day, Joanne M. Atlee, and Jianwei Niu at the University of Waterloo. I first considered using the Algorithm for SCXML Interpretation described in the SCXML specification, but after finding a bug in the algorithm, it seemed like a better approach would be to write my own step algorithm, and base it on a clear set of semantic choices outlined in Big-Step Semantics.

The semantics of the new project will be similar to SCXML semantics, but not identical. I will write a blog post in the future describing in detail the points of variation, and how they affect various edge cases.

Having made precise decisions regarding the semantics that would be used, I have endeavored to write the interpreter using a test-first methodology. Before writing the interpreter, I attempted to write a comprehensive test suite that would cover both the basic cases, as well as complex examples (e.g. n-nested parallel states with deep and shallow history, transition interrupts, and executable behaviour) and edge cases. I have also written a simple test harness to run these tests.

Focus on Threading

In SCXML-JS, I didn’t cleanly take into account whether it would be executing in a single- or multi-threaded environment, and so the approach used for event handling was not well-realized for either scenario. SCXML-JS used a queue for external events, the window.setTimeout method to poll the queue without blocking the browser thread, and a global lock on the statechart object to prevent events from being taken while another event was being processed. In the browser environment, which is single-threaded (Web Workers aside), this “busy-wait” and locking approach is simply unnecessary, as while the Statechart is processing an event, it has possession of the thread, and thus cannot be interrupted with another event before that first event has finished. There is no need to queue events, because each call to send an event to the statechart would be handled synchronously, and would return before the next event would be sent. Likewise, in a multi-threaded environment, there are often better approaches than busy-waiting, such as using a blocking queue, in which the thread sleeps when it finds the queue is empty, and gets awoken when another thread notifies it that an element has entered the queue.

In the new project, I would like to accommodate both single- and multi-threaded environments, and so the implementation will take this into account, and will provide multiple concrete implementations that leverage the best event-handling strategy available for the environment in which it is executing. Furthermore, all concrete implementations will conform to the chosen Statechart semantics for handling events.

This will also make performance and unit testing easier. The single-threaded implementation provides an easy programming model: an event is sent into the statechart, a big-step is then performed by the statechart, and the synchronous call returns the thread of execution to the caller. It is then possible to check the configuration of the statechart to ensure that it conforms to an expected configuration. Likewise, it is easy to send many events into a statechart, and measure the amount of time the statechart takes to process all the events. This would be more complicated to implement (although certainly not impossible) in a multi-threaded implementation based on a blocking queue.
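For example, a unit or performance test against the single-threaded implementation might look roughly like the following sketch; the gen/getConfiguration API is illustrative, and statechart and events are assumed to come from the test harness:

function assertConfiguration(expected) {
    var actual = statechart.getConfiguration().sort();
    if (actual.join(',') !== expected.sort().join(',')) {
        throw new Error('expected [' + expected + '] but was [' + actual + ']');
    }
}

statechart.gen({ name: 't1' });      // the big-step runs synchronously before gen returns
assertConfiguration(['b']);          // so the configuration can be checked immediately

// performance: time how long it takes to process a batch of events
var start = new Date().getTime();
events.forEach(function (e) { statechart.gen(e); });
print('processed ' + events.length + ' events in ' + (new Date().getTime() - start) + 'ms');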

Where it will Live

I’ve posted the code to github, but I’m not going to post a link here until I work out some issues with the in-browser unit test harness, make sure it works across a range of browsers, and write some documentation.

I could commit it to Apache Commons Sandbox SVN as a new branch to SCXML-JS, but I’m going to hold off on that, for several reasons. First, I want to release alpha builds of this software, and Apache Commons has a policy that a project must graduate from Commons Sandbox to Commons Proper in order to perform releases. This requires a vote by the community regarding the project’s suitability for inclusion in Commons, and, unfortunately for my project, this is in part dependent on whether the project uses Maven as its build system. I think Maven is probably the best choice, if one is developing an application in Java, and all of its library dependencies can be found in Maven repositories. But for SCXML-JS at least, the Maven dependency proved to be extremely cumbersome. I spent at least a month after GSoC 2010 was done trying to fix up the SCXML-JS build system so that it would be acceptable to Commons, and this was not a great experience. The end result wasn’t great either. I don’t yet know what build system I will use for this project, but my main motivation is to not waste a lot of time on it, not let it be an obstacle, and to ship early.

Another factor is that I’d like to use git to version-control it. Even when I was working on SCXML-JS, I was using git-svn. git-svn is great, but it would occasionally break, and then it was tricky to recover. Ultimately, the requirement to use SVN is just another small distraction, when I would rather be spending my time coding.

I also haven’t yet decided how I would like to license it.

Project Status

What currently works:

  • Everything that would be considered a part of SCXML core.
  • send, script, assign, and log tags.
  • Fairly thorough test suite written (currently 84 unique tests).
  • Python and JavaScript test harnesses. The JavaScript test harness is working in Rhino, and kind of working in the browser (works in Firefox, but is slow; freezes Chromium).
  • Implementations written in Python and CoffeeScript. The Python implementation supports scripting in both Python and JavaScript via the excellent python-spidermonkey language bindings.

Regarding what is to come, once I have the test harness working, have tested across browser environments, and written some documentation and examples, I will announce the project here, as well as on relevant mailing lists, and publish an alpha release.

After that, I will be working on implementing optimizations, testing performance, and analyzing the results. Optimistically, this should take about 2 weeks.

Then I will write my thesis.

October 16th, 2010

Master’s Thesis Update 1

Last week, I had my first meeting with my academic adviser in about 8 months. Everything seems on track to make scxml-js a core part of my Master’s thesis, which I am very happy about.

To celebrate, here is an SVG heart for you:

I’ve also been working on other small tasks: how to obtain a nice animated rainbow radial gradient, like the one above; finding the most RSS-friendly way to post SVG images to this WordPress blog; and playing around with Gtk and embedded Webkit. I’ve also been exploring Antwerp, including their new hackerspace. Hopefully, I’ll be able to write in more detail about these topics soon.

September 29th, 2010

scxml-js Build Adventures

I presented my work on scxml-js at the SVG Open 2010 conference at the end of August. My hope was that I would be able to have a release prepared by this time, to encourage adoption among interested developers. However, I soon discovered that there was some process overhead involved in preparing releases in Apache Commons. Currently, scxml-js is a Commons Sandbox project, and Sandbox projects are not allowed under any circumstances to publish releases. In order to publish a release, scxml-js would need to be “promoted” to Commons Proper, which would require a vote on the Commons mailing list. In order to pass a vote, it seemed likely that the scxml-js build system would need to be overhauled to use Maven, so as to be able to reuse the Maven parent pom, and thus inherit all of the regulated, well-designed build infrastructure shared by all projects at Apache Commons.

I had originally allocated two weeks to this task, from the end of Google Summer of Code, to the start of the SVG Open conference, but in fact I ended up spending over a month working on just the build infrastructure. I think some interesting new techniques emerged out of this work.

First, a description of what I was migrating from: a single custom build file written in JavaScript and designed to be run under Rhino. The reasoning behind this technique was that JavaScript is quite a nice scripting language, and very useful for many tasks, including writing build scripts, and due to its ability to use the RequireJS module system and dojo.doh unit testing framework natively, writing a custom build script in Rhino seemed to be the fastest, easiest way to perform automated tasks related to unit and performance testing of scxml-js. What it was not useful for, however, was setting up a Java classpath and calling java or javac (it also seemed like too much of an investment to perform a proper topological sort of dependencies between build targets). In the beginning of the project, using java and javac was not needed as scxml-js would always be run in interpreted mode on the command-line. As time went on, however, I wanted to use Rhino’s jsc utility to compile scxml-js to optimized Java bytecode, in order to improve performance as well as provide a standalone executable JAR for easy deployment. In order to solve this problem, I began to use Ant, which of course has very good integration with tasks relating to Java compilation.

Compiling JavaScript to Java Bytecode with Ant

Invoking jsc using Ant is actually pretty easy. The only complication arises if you have dependencies between your scripts (e.g. using Rhino’s built-in load() function), as jsc will not catch these. What is required is to preprocess your scripts so that all js dependencies are included in a single file, and then to run jsc on that built file. If you’re just using load() to import script dependencies, this can be difficult to accomplish. If you’re using RequireJS, however, then you can make use of its included build script, which does precisely what I described, in that it seeks out module dependencies and includes them in one giant file. It can also include the RequireJS library itself in the file, as well as substitute text (or XML) file dependencies as inline strings, so the end result is that all dependencies are included in this single file. Compilation of scxml-js to Java bytecode is then a two-step process: calling the RequireJS build script to create a single large file that includes all dependencies, and calling jsc on the built file to compile it to bytecode. This will produce a single executable class file. Here’s a snippet to illustrate how this works:


<!-- this is the path to a front-end module that accepts command-line arguments and passes them into the main module -->
<property name="build-js-main-rhino-frontend-module" value="${src}/javascript/scxml/cgf/build/rhino"/>

<!-- RequireJS build script stuff -->
<property name="js-build-script" location="${lib-js}/requirejs/build/build.js"/>
<property name="js-build-dir" location="${lib-js}/requirejs/build"/>

<!-- include a reference to the closure library bundled with the RequireJS distribution -->
<path id="closure-classpath" location="${lib-js}/requirejs/build/lib/closure/compiler.jar"/>

<!-- jsc stuff -->
<property name="build-js-main" location="${build-js}/main-built.js"/>
<property name="build-class-main-name" value="SCXMLCompiler"/>
<property name="build-class-main" location="${build-class}/${build-class-main-name}.class"/>

<target name="compile-single-js">
	<mkdir dir="${build-js}"/>

	<java classname="org.mozilla.javascript.tools.shell.Main">
		<classpath>
			<path refid="rhino-classpath"/>
			<path refid="closure-classpath"/>
		</classpath>
		<arg value="${js-build-script}"/>
		<arg value="${js-build-dir}"/>
		<arg value="name=${build-js-main-rhino-frontend-module}"/>
		<arg value="out=${build-js-main}"/>
		<arg value="baseUrl=."/>
		<arg value="includeRequire=true"/>
		<arg value="inlineText=true"/>
		<arg value="optimize=none"/>
	</java>
</target>

<target name="compile-single-class" depends="compile-single-js">
	<mkdir dir="${build-class}"/>

	<!-- TODO: parameterize optimization level -->
	<java classname="org.mozilla.javascript.tools.jsc.Main">
		<classpath>
			<path refid="maven.plugin.classpath"/>
		</classpath>
		<arg value="-opt"/>
		<arg value="9"/>
		<arg value="-o"/>
		<arg value="${build-class-main-name}.class"/>
		<arg value="${build-js-main}"/>
	</java>
	<move file="${build-js}/${build-class-main-name}.class" todir="${build-class}"/>
</target>
// This is the module referenced by property "build-js-main-rhino-frontend-module".
// It accepts command-line arguments and passes them into the main module
(function(args){
	require(
		["src/javascript/scxml/cgf/main"],
		function(main){
			main(args);
		}
	);
})(Array.prototype.slice.call(arguments));

All of this was not too difficult to set up, and allowed me to accomplish my goal of building a single class file for the scxml-js project.

Importing JavaScript Modules with RequireJS in Ant

Rather than maintain two build scripts, it was desirable to move the unit testing functionality in the Rhino build script into Ant. As I had already put a significant amount of time into developing the Rhino build script, I wanted to directly reuse this code in Ant. This seemed possible, as Ant already provides good integration with Rhino and other scripting languages via its script tag and either the JSR-223 or Bean Scripting Framework APIs. Unfortunately, however, when Rhino is run under Ant it does not expose any properties on the global object; this means that by default load() and readFile() are not available, making it virtually impossible to import code from other files, and thus making it impossible to directly import RequireJS modules in an Ant script. However, I sought help on the Rhino mailing list, and a convenient workaround was developed. The following code should be included at the beginning of every script tag, and the manager attribute on the script tag should be set to “bsf”:

var shell = org.mozilla.javascript.tools.shell.Main;
var args = ["-e","var a='STRING';"];
shell.exec(args);

var shellGlobal = shell.global;

//grab functions from shell global and place in current global
var load=shellGlobal.load;
var print=shellGlobal.print;
var defineClass=shellGlobal.defineClass;
var deserialize=shellGlobal.deserialize;
var doctest=shellGlobal.doctest;
var gc=shellGlobal.gc;
var help=shellGlobal.help;
var loadClass=shellGlobal.loadClass;
var quit=shellGlobal.quit;
var readFile=shellGlobal.readFile;
var readUrl=shellGlobal.readUrl;
var runCommand=shellGlobal.runCommand;
var seal=shellGlobal.seal;
var serialize=shellGlobal.serialize;
var spawn=shellGlobal.spawn;
var sync=shellGlobal.sync;
var toint32=shellGlobal.toint32;
var version=shellGlobal.version;
var environment=shellGlobal.environment;

Although, now that I’m reading this again, this also isn’t quite right, as, while these variables are being defined in the global namespace, technically they are not being added to the global object… but, in any case, this has not proven to be problematic.

Because this is a verbose declaration, and we like code reuse, I defined a macro called “rhinoscript” to abstract it out:


<macrodef name="rhinoscript">
	<text name="text"/>

	<sequential>
		<script language="javascript" manager="bsf">
			<classpath>
				<path refid="maven.plugin.classpath"/>
			</classpath><![CDATA[
				var shell = org.mozilla.javascript.tools.shell.Main;
				var args = ["-e","var a='STRING';"];
				shell.exec(args);

				var shellGlobal = shell.global;

				//grab functions from shell global and place in current global
				var load=shellGlobal.load;
				//import everything else...

				@{text}
		]]></script>
	</sequential>
</macrodef>

<!-- example call -->
<target name="test-call">
	<rhinoscript><![CDATA[
		load("foo.js");
		print("Hello World!");
	]]></rhinoscript>
</target>

This then allowed RequireJS modules to be imported and reused directly, as in the original Rhino build script. The only caveat is that, rather than using nice JavaScript data structures (Arrays, Objects, etc.) to store build-related properties, it was necessary to use Ant data structures (properties, paths, etc.) instead. Here’s the final result of the Ant task that uses RequireJS and dojo.doh to run unit tests:

<target name="run-unit-tests-with-rhino" depends="setup-properties">
	<rhinoscript><![CDATA[
		//load requirejs
		Array.prototype.slice.call(requirejs_bootstrap_paths.list()).forEach(function(requireJsPath){
			load(requireJsPath);
		});

		//this is a bit weird, but we define this here in case we need to load dojo later using the RequireJS loader
		djConfig = {
			"baseUrl" : path_to_dojo_base+"/"
		}

		function tailRecurse(list,stepCallback,baseCaseCallback){
			var target = list.pop();

			if(target){
				stepCallback(target,
					function(){tailRecurse(list,stepCallback,baseCaseCallback)});
			}else{
				if(baseCaseCallback) baseCaseCallback();
			}
		}

		var isComplete = false;

		require(
			{baseUrl:basedir},
			[path_to_dojo,
				"lib/test-js/env.js",
				"test/testHelpers.js"],
			function(){

				dojo.require("doh.runner");

				var forIE = "is-for-ie";
				var scxmlXmlTestPathList = Array.prototype.slice.call(scxml_tests_xml.list());
				var backendsList = backends.split(",");

				print("backendsList : " + backendsList);
				print("backendsList.length : " + backendsList.length);

				var oldDohOnEnd = doh._onEnd;
				doh._onEnd = function() { isComplete = true; oldDohOnEnd.apply(doh); };

				//we use tailRecurse function because of asynchronous RequireJS call used to load the unit test module
				tailRecurse(scxmlXmlTestPathList,
					function(scxmlXmlTestPath,step){
						var jsUnitTestPathPropertyName = scxmlXmlTestPath + "-" + "unit-test-js-module";
						var jsUnitTestPath = project.getProperty(jsUnitTestPathPropertyName);

						require([jsUnitTestPath],
							function(unitTestModule){

								backendsList.forEach(function(backend){
									var jsTargetTestPathPropertyName =
										forIE + "-" + backend + "-" + scxmlXmlTestPath + "-" + "target-test-path";

									var jsTargetTestPath = project.getProperty(jsTargetTestPathPropertyName);

									print("jsTargetTestPathPropertyName : " + jsTargetTestPathPropertyName);
									print("jsTargetTestPath  : " + jsTargetTestPath);

									//load and register
									load(jsTargetTestPath);

									unitTestModule.register(StatechartExecutionContext)
									delete StatechartExecutionContext;
								});

								step();
							});
					},
					function(){
						//run with dojo
						doh.run();
					}
				);

			}
		);

		//hold up execution until doh completes
		while(!isComplete){
			java.lang.Thread.sleep(20);
		}

	]]></rhinoscript>
</target>

I think this is kind of nice, because, if you look at newer build tools like Rake or Jake, the big advantage that they give you is the ability to use a real programming language (as opposed to a build Domain Specific Language, like Ant), while at the same time providing facilities for defining build targets with dependencies, and topologically sorting them when they are invoked. Ant still has many advantages, however, including great support in existing continuous integration systems. The approach I have described seems to marry the advantages of using Ant with those of using your preferred scripting language.

Integrating Ant with Maven

At this point, I had brought over most of the existing functionality from the Rhino build script into Ant, and I was beginning to look at ways to then hook into Maven. While I had some previous experience working with Ant, I had never before worked with Maven, and so there was a learning curve. The goal was to hook into the existing Apache Commons Maven-based build infrastructure, while at the same time trying to reuse existing code.

While this part was non-trivial to develop, it is actually the least interesting part of the process to me, and I think the least relevant to this blog (it doesn’t have much to do with JavaScript or Open Web technologies), so I’m only going to briefly describe it. The build architecture is currently as follows:

I felt it was important to maintain both an Ant front-end and Maven front-end to the build, as each has advantages for certain tasks. Common functionality is imported from build-common.xml. Both the Maven (pom.xml) and Ant (build.xml) front-ends delegate to mvn-ant-build.xml, which contains most of the core tasks without the dependencies between targets.

Based on my experience on the Maven mailing list, if you are a “Maven person” (a person who has “drunk the Maven kool-aid” – not my words, Maven people seem to like to use this phrase), then this architecture built around delegation to Ant will likely make you cry. It will seem needlessly complex, when the alternative of creating a set of custom Maven plugins will seem much better. This might be the case, and I proposed investigating this option. The problem, however, seems to be that relying on custom Maven plugins for building is a no-go for Commons projects (with the exception of the Maven Commons plugin), as it is uncertain where these plugins will be hosted. However, building a Maven plugin for the process of compiling JavaScript to Java bytecode using the RequireJS framework, as outlined above, is I think something that has value, and which I would like to pursue at some point.

Future Work

I still have not put scxml-js forward for a vote, and even though the refactoring of the build system is more or less complete, I still may not do so. I have just arrived in Belgium where I will be working on my Master’s thesis for three months, and so I may need to deprioritize my work on scxml-js while I prioritize researching the theoretical aspects of my thesis. Also, now that SVG Open has passed, there seems to be less incentive to publish an alpha release. It may be better to give scxml-js more time to mature, and then release later on.

August 16th, 2010

Google Summer of Code 2010, Final Update

The pencils-down date for Google Summer of Code 2010 is right now. Here’s a quick overview of what I feel I have contributed to scxml-js thus far, and what I feel should be done in the future.

Tests and Testing Framework

Critical to the development of scxml-js was the creation of a robust testing framework. scxml-js was written using a test-first development style, which is to say that before adding any new feature, I would attempt to map out the implications of that feature, including all possible edge cases, and would then write tests for success, failure, and sanity. By automating these tests, it was possible to avoid regressions when new features were added, and thus maintain robustness as the codebase became more complex.

Automated testing of scxml-js presented an interesting challenge, as it was necessary to test both the generated target code (using ahead-of-time compilation) and the compiler itself (using just-in-time compilation), running in all the major web browsers, as well as on the JVM under Rhino. This represented many usage contexts, and so a great deal of complexity was bundled into the resulting build script.

The tests written usually conformed to a general format: a given SCXML input file would be compiled and instantiated, and a script would send events into the compiled statechart while asserting that the state had updated correctly. A custom build script, written in JavaScript, automated the process of compiling and running test cases, starting and stopping web browsers, and harvesting results. dojo.doh and Selenium RC were used in the testing framework.
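In rough outline, a typical test looked something like the following; compileAndInstantiate is a stand-in for the harness plumbing, and the exact instance API and doh assertion calls may have differed:

doh.register("basic-transition", [
    {
        name: "event t moves the statechart from state a to state b",
        runTest: function () {
            // hypothetical helper that compiles the SCXML file and returns an instance
            var sc = compileAndInstantiate("test/basic/basic1.scxml");
            sc.gen({ name: "t" });                       // send an event into the statechart
            doh.assertEqual(["b"], sc.getConfiguration()); // assert the state updated correctly
        }
    }
]);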

Going Forward

It would be useful to phase out the custom JavaScript build script for a more standard build tool, such as Maven or Ant. This may be challenging, however, given the number of usage contexts of the scxml-js compiler, as well as the fact that the API it exposes is asynchronous.

Another task I’d like to perform is to take the tests written for Commons SCXML and port them so that they can be used in scxml-js.

Finally, I have often noticed strange behaviour with Selenium. At this moment, when run under Selenium, tests are broken for in-browser compilation under Internet Explorer; however, when run manually, they always pass. I’ve traced where the tests are failing, and it’s a strange and intermittent failure involving parsing an XML document. I think it may be caused by the way that Selenium instruments the code in the page. I feel it may be worthwhile to investigate alternatives to Selenium.

scxml-js Compiler

This page provides an overview of which features work right now, and which do not.

In general, I think scxml-js is probably stable enough to use in many contexts. Unfortunately, scxml-js has had only one user, and that has been me. I’m certain that when other developers do begin using it, they will break it and find lots of bugs.

I’m hoping to prepare a pre-alpha release to coincide with the SVG Open 2010 conference at the end of the month, and in preparation for this, I’m reaching out to people I know to ask them to attempt to use scxml-js in a non-trivial project. This will help me find bugs before I attempt to release scxml-js for general consumption.

Going Forward

There are still edge cases which I have in mind that need to be tested. For example, I haven’t done much testing of nested parallel states.

I also have further performance optimizations which I’d like to implement. For example, I’ve been using JavaScript 1.6 functional Array prototype extensions (e.g. map, filter, and forEach) in the generated code, and augmenting Array.prototype for compatibility with Internet Explorer. However, these methods are often slower than using a regular for loop, especially in IE, and so it would be good to swap them out for regular for loops in the target code.
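For example, a transition-selection loop in the generated code might change from the Array-extras form to a plain loop, roughly as follows (the transitions and eventName names are illustrative):

// current generated code, using the JavaScript 1.6 Array extras
var enabledTransitions = transitions.filter(function (t) {
    return t.event === eventName;
});

// equivalent generated code using a plain for loop, which tends to be faster, especially in IE
var enabledTransitionsFast = [];
for (var i = 0; i < transitions.length; i++) {
    if (transitions[i].event === eventName) {
        enabledTransitionsFast.push(transitions[i]);
    }
}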

Another performance enhancement would be to encode the statechart’s current configuration as a single scalar state variable, rather than encoding it as an array of basic state variables, for statecharts that do not contain parallel states. This would reduce the time required to dispatch events for these types of statecharts, as the statechart instance would no longer need to iterate through each state of the current configuration, thus removing the overhead of the for loop.
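A sketch of the idea (the variable and table names are illustrative):

// array-of-basic-states encoding: needed when parallel states may be active
var configuration = ["s1"];
function dispatchEvent(event) {
    for (var i = 0; i < configuration.length; i++) {
        // ...select and take transitions for configuration[i]...
    }
}

// scalar encoding for statecharts without parallel states: a single lookup, no loop
var currentState = "s1";
function dispatchEventScalar(event) {
    var t = transitionTable[currentState][event.name];   // ahead-of-time compiled table
    if (t) { currentState = t.target; }
}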

I’m sure that once outside developers begin to look at the code, they will have lots of ideas on how to improve performance as well.

There are other interesting parts of the project that still need to be investigated, including exploring the best way to integrate scxml-js with existing JavaScript toolkits, such as jQuery UI and Dojo.

Graph Layout, Visualization, and Listener API

As I stated in the initial project proposal, one of my goals for GSoC was to create a tool that would take an SCXML document, and generate a graphical representation of that document. By targeting SVG, this graphical representation could then be scripted. By attaching a listener to a statechart instance, the SVG document could then be animated in response to state changes.

I was able to accomplish this by porting several graph layout algorithms written by Denis Dube for his Master’s thesis at the McGill University Modelling, Simulation and Design Lab. Denis was kind enough to license his implementations for release in ASF projects under the Apache License. You can see a demo of some of this work here.

Going Forward

The intention behind this work was to create a tool that would facilitate graphical debugging of statecharts in the web browser. While this is currently possible, it still requires “glue code” to be manually written to generate a graphical representation from an SCXML document, and then hook up the listener. I would like to make this process easier and more automatic. I feel it should operate similarly to other compilers, in that the compiler should optionally include debugging symbols in the generated code which allow it to map to a “concrete syntax” (textual or graphical) representation.

Another issue that needs to be resolved is cross-browser compatibility. It’s currently possible to generate SVG in Firefox and Batik, but there are known issues in Chromium and Opera.

Also, there are several more graph layout algorithms implemented by Denis which I have not yet ported. I’d really like to see this happen.

Finally, my initial inquiries on the svg-developers mailing list indicated that this work would be useful for other projects. I therefore feel that these JavaScript graph layout implementations should be moved into a portable library. Also, rather than generating a graphical representation directly from SCXML, it should be possible to generate a graphical representation from a more neutral markup format for describing graphs, such as GraphML.

Demos

I have written some nice demos that illustrate the various aspects of scxml-js, including how it may be used in the development of rich, Web-based user interfaces. The most interesting and complex examples are the Drawing Tool Demos, which implement a subset of Inkscape’s UI behaviour. The first demo uses scxml-js with a just-in-time compilation technique; the second uses ahead-of-time compilation; and the third uses just-in-time compilation, and generates a graphical representation on the fly, which it then animates in response to UI events. This last demo only works well in Firefox right now, but shows what should be possible going forward.

I have several other ideas for demos, which I will attempt to implement before the SVG Open conference.

Documentation

The main sources of documentation now are the User Guide, the source code for the demos, and Section 5 of my SVG Open paper submission on scxml-js.

Conclusion

This has been an exciting and engaging project to work on, and I’m extremely grateful to Google, the Apache Software Foundation, and my mentor Rahul for facilitating this experience.

June 28th, 2010

Google Summer of Code, Update 3: More Live Demos

Just a quick update this time. scxml-js is moving right along, as I’ve been adding support for new features at a rate of about one per day, on average. Today I reached an interesting milestone: scxml-js is now as featureful as the old SCCJS compiler which I had previously been using in my research. This means that I can now begin porting the demos and prototypes I constructed using SCCJS to scxml-js, as well as begin creating new ones.

New Demos

Here are two new, simple demos that illustrate how scxml-js may be used to describe and implement the behaviour of web user interfaces (tested in recent Firefox and Chromium; they will definitely not work in IE due to their use of XHTML):

Both examples use state machines to describe and implement drag-and-drop behaviour of SVG elements. The first example is interesting, because it illustrates how HTML, SVG, and SCXML can be used together in a single compound document to declaratively describe UI structure and behaviour. The second example illustrates how one may create state machines and DOM elements dynamically and procedurally using JavaScript, as opposed to declaratively using XML markup. In this example, each dynamically-created element will have its own state machine, hence its own state.
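
To give a flavour of the second, procedural approach, here is a rough sketch; the constructor and method names are assumptions, not the demo’s actual code:

    var SVG_NS = "http://www.w3.org/2000/svg";

    function makeDraggableRect(svgRoot) {
        var rect = document.createElementNS(SVG_NS, "rect");
        rect.setAttribute("width", "50");
        rect.setAttribute("height", "50");
        svgRoot.appendChild(rect);

        // Each dynamically-created element gets its own statechart instance,
        // and therefore its own drag-and-drop state.
        var sc = new DragAndDropStatechart({ node : rect });
        sc.initialize();

        rect.addEventListener("mousedown", function (e) { sc.gen("mousedown", e); }, false);
        svgRoot.addEventListener("mousemove", function (e) { sc.gen("mousemove", e); }, false);
        svgRoot.addEventListener("mouseup", function (e) { sc.gen("mouseup", e); }, false);

        return rect;
    }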

I think the code in these examples is fairly clean and instructive, and should give a good sense regarding how scxml-js may ultimately be used as a finished product.

June 23rd, 2010

Google Summer of Code 2010, Project Update 2

Here’s another quick update on the status of my Google Summer of Code project.

Finished porting IR-compiler and Code Generation Components to XSLT

As described in the previous post, I finished porting the IR-compiler and Code Generation components from E4X to XSLT.

Once I had this working with the Java XML transformation APIs under Rhino, I followed up with the completion of two related subtasks:

  1. Get the XSL transformations working in-browser, and across all major browsers (IE8, Firefox 3.5, Safari 5, Chrome 5 — Opera still to come).
  2. Create a single consolidated compiler front-end, written in JavaScript, that works in both the browser and in Rhino.

Cross-Browser XSL Transformation

Getting all XSL transformations to work reliably across browsers was something I expressed serious concerns about in my previous post. Indeed, this task posed some interesting challenges, and motivated certain design decisions.

The main issue I encountered in getting these XSL transformations to work was that support for xsl:import in xsl stylesheets, when called from JavaScript, is not very good in most browsers. xsl:import works well in Firefox, but is currently distinctly broken in Webkit and Webkit-based browsers (see here for the Chrome bug report, and here for the Webkit bug report). I also had limited success with it in IE 8.
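
For context, the in-browser transformation path that exposes the problem looks like this; fetching and parsing of the two documents is elided:

    // codeGenStylesheetDoc and scxmlDoc are assumed to be XML Documents that
    // have already been loaded and parsed (e.g. via XMLHttpRequest).
    var processor = new XSLTProcessor();
    processor.importStylesheet(codeGenStylesheetDoc);
    var result = processor.transformToFragment(scxmlDoc, document);
    // In Firefox the templates pulled in via xsl:import are applied; in
    // WebKit-based browsers they are silently ignored, so the generated
    // output is incomplete.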

I considered several possible solutions to work around this bug.

First, I looked into a pure JavaScript solution. In my previous post, I linked to the Sarissa and AJAXSLT libraries. A common task of JavaScript libraries is to abstract away browser differences, so the fact that several libraries existed which appeared to do just that for XSLT gave me a degree of confidence when I initially chose XSLT as a primary technology with which to implement scxml-js. Unfortunately, on closer inspection during this development cycle, I found that Sarissa, AJAXSLT, and all of the other libraries designed to abstract away cross-browser XSLT differences (including Javeline and the jQuery XSL transform plugin) are not actively maintained. As web browsers are rapidly moving targets, maintenance is a major concern when selecting a library dependency. A pure JavaScript solution therefore did not appear feasible, which left me to get the XSL transformations working using just the “bare metal” of the browser.

My next attempt was to try to use some clever DOM manipulation to work around the Webkit bug. In the Webkit bug, xsl:import does not work because frameless resources cannot load other resources. This meant that loading the SCXML document on its own in Chrome, with an xml-stylesheet processing instruction pointing to the code generation stylesheet, did generate code correctly. My idea, then, was to use DOM to create an invisible iframe, and load into it the SCXML document to transform, along with the requisite processing instruction, and read out the transformed JavaScript. I actually had some success with this, but it seemed to be a brittle solution. I was able to get it to work, but not reliably, and it was difficult to know when and how to read the transformed JavaScript out of the iframe. In any case my attempts at this can be found in this branch here.
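
Roughly, the iframe workaround looked like the following; this is a reconstruction for illustration, not the exact code from that branch:

    // The SCXML file is assumed to begin with a processing instruction like
    // <?xml-stylesheet type="text/xsl" href="codeGen.xsl"?>.
    var iframe = document.createElement("iframe");
    iframe.style.display = "none";
    iframe.src = "example.scxml";    // hypothetical URL
    iframe.onload = function () {
        // By the time load fires, the browser has (hopefully) applied the
        // stylesheet, and the generated JavaScript can be read back out of
        // the frame. Knowing reliably when and where the result was available
        // was the brittle part.
        var generatedJs = iframe.contentDocument.documentElement.textContent;
        // ... consume generatedJs ...
    };
    document.body.appendChild(iframe);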

My final, and ultimately successful attempt was to use XSL to preprocess the stylesheets that used xsl:import, so as to combine the stylesheet contents, while still respecting the semantics of xsl:import. This was not too difficult, and only took a bit of effort to debug. You can see the results here. Note that there may be some corner cases of XSLT that are not handled by this script, but it works well for the existing scxml-js code generation backends. This is the solution upon which I ultimately settled.

One thing that must still be done, given this solution, is to incorporate this stylesheet preprocessing into the build step. For the moment, I have done the quick and dirty thing, which is to check the preprocessed stylesheets into SVN.

It’s interesting to note that IE 8 was the easiest browser to work with in this cycle, as it provided useful and meaningful error messages when XSL transformations failed. By contrast, Firefox would return a cryptic error message without much useful information, and Safari/Chrome would not provide any error message at all, instead failing silently in the XSLT processor and returning undefined.

Consolidated Compiler Front-end

As I described in my previous post, a thin front-end to the XSL stylesheets was needed. For the purposes of running inside of the browser, the front-end would need to be written in JavaScript. It would have been possible, however, to write a separate front-end in a different language (bash, Java, or anything else), for the purposes of running outside of the browser. A design decision needed to be made, then, regarding how the front-end should be implemented:

  • Implement one unified front-end, written in JavaScript, which relies on modules that expose portable APIs, with implementations of those APIs that vary between environments.
  • Implement multiple front-ends, for browser and server environments.

I decided that, with respect to maintainability, it would be easier to maintain one front-end, written in one language, rather than two front-ends in different languages, and so I chose the first option. This worked well, but I’m not yet completely happy with the result, as I have code for Rhino and code for the browser mixed together in the same module. This means that code for Rhino is downloaded to the browser, even though it is never called (see Transformer.js for an example of this). The same is true for code that targets IE versus other browsers. I believe I’ve thought of a way to use RequireJS to selectively download platform-specific modules, and this is an optimization that I’ll make in the near future.
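
Sketched with invented module names, the RequireJS approach would look something like this:

    // transformer-browser.js -- the only transformer module the browser downloads.
    define(function () {
        return function transform(xmlDoc, xslDoc) {
            var p = new XSLTProcessor();
            p.importStylesheet(xslDoc);
            return p.transformToFragment(xmlDoc, document);
        };
    });

    // main.js -- choose the platform-specific module at load time; a Rhino
    // build would resolve "transformer-rhino" instead, so neither environment
    // ever downloads the other's code.
    var transformerModule = (typeof window === "undefined") ?
            "transformer-rhino" : "transformer-browser";
    require([transformerModule], function (transform) {
        // transform(scxmlDoc, stylesheetDoc) ...
    });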

In-Browser Demo

The result of this work can be seen in this demo site I threw together:

http://live.echo-flow.com/scxml-js/demo/sandbox/sandbox.html

This demo provides a very crude illustration of what a browser-based Graphical User Interface to the compiler might look like. It takes SCXML as input (top-most textarea), compiles it to JavaScript code (lower-left textarea, read-only), and then allows simulation from the console (bottom-right textarea and text input). For convenience, the demo populates the SCXML input textarea with the KitchenSink executable content example. I’ve tested it in IE8, Safari 5, Chrome 5, Firefox 3.5. It works best in Chrome and Firefox. I haven’t been testing in Opera, but I’m going to start soon.

Future Work

The past three weeks were spent porting and refactoring, which was necessary to facilitate future progress, and now there’s lots to do going forward. My feeling is that it’s now time to get back to the main work, which is adding important features to the compiler, starting with functionality still missing from the current implementation of the core module:

https://issues.apache.org/jira/browse/SCXML-137

I’m going to be presenting this work at the SVG Open 2010 conference at the end of August, so I’m also keen to prepare some new, compelling demos that will really illustrate the power of Statecharts on the web.
