MODELLING AND SIMULATION, WEB ENGINEERING, USER INTERFACES
October 7th, 2012

New projects: scxml-viz, scion-shell, and scion-web-simulation-environment

I have released three new projects under an Apache 2.0 license:
  • scxml-viz: A library for visualizing SCXML documents.
  • scion-shell: A simple shell environment for the SCION SCXML interpreter. It accepts SCXML events via stdin, and thus can be used to integrate SCXML with Unix shell programming. It also integrates scxml-viz, so it can provide graphical simulation of SCXML models.
  • scion-web-simulation-environment: A simple proof-of-concept web sandbox environment for developing SCXML. Code can be entered on the left, and visualized on the right. Furthermore, SCION is integrated, so code can be simulated and graphically animated. A demo can be found here: http://goo.gl/wG5cq
November 30th, 2011

Master Thesis Mini-Update: Initial Release of SCION

I just wanted to quickly announce the release of SCION, a project to develop an SCXML interpreter/compiler framework suitable for use on the Web, and the successor to SCXML-JS.

The project page is here: https://github.com/jbeard4/SCION
Documentation, including demos, may be found here: http://jbeard4.github.com/SCION/

I welcome your feedback.

June 8th, 2011

Master’s Thesis Update 2: New Statecharts Project

I’m currently working on a chapter of my master’s thesis, which basically fleshes out and elaborates on the paper I wrote for the SVG Open 2010 conference. The goal is pretty much the same:

  1. write an optimizing Statechart-to-ECMAScript compiler
  2. describe optimization goals: execution speed, compiled code size and memory usage
  3. describe the set of optimizations I intend to perform
  4. write a comprehensive suite of benchmarks
  5. run benchmarks against as wide a set of ECMAScript implementations as possible
  6. analyze the results

In order to fulfill the first step, I wrote SCXML-JS, which started as a course project, and which I continued to develop during Google Summer of Code 2010. SCXML-JS did a number of things well: its design was flexible enough to support multiple transition selection algorithms, which made it possible to performance-test these different strategies across different JavaScript interpreters.

Unfortunately, SCXML-JS also had a number of shortcomings, so I’ve decided to start over and develop a new Statechart interpreter/compiler. This new interpreter would have the following improvements over SCXML-JS.

Separate out Core Interpreter Runtime

SCXML-JS generated a large chunk of boilerplate code that varied only in very small ways between optimizations. The other Statechart compiler I have worked on, SCC, worked the same way. One repercussion of this design choice is that it made it difficult to meaningfully compare the size of the code payload, as the points of variation were intermixed with the boilerplate.

The goal of the new project, then, is to move that boilerplate out into its own module, which will effectively function as a standalone interpreter for Statecharts. Certain methods (e.g. selecting transitions, updating the current configuration) would then be parameterized, so that optimized methods backed by fast, ahead-of-time-compiled data structures (e.g. a state-transition table) could be injected at runtime when the interpreter class is instantiated. This would neatly separate the methods we would like to optimize from the core interpreter runtime, allowing those methods to be directly compared for performance and payload size.
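
To make the idea concrete, here is a minimal sketch in JavaScript of what that parameterization might look like. All of the names below are hypothetical, invented for illustration; they are not the actual SCION API.

// Core interpreter whose hot methods can be injected at construction time.
function Interpreter(model, methods) {
    this.model = model;
    this.configuration = model.initialConfiguration.slice();

    // Default, unoptimized transition selection: scan every active state.
    this.selectTransitions = (methods && methods.selectTransitions) ||
        function (eventName, configuration) {
            var enabled = [];
            configuration.forEach(function (state) {
                (state.transitions || []).forEach(function (t) {
                    if (t.event === eventName) enabled.push(t);
                });
            });
            return enabled;
        };
}

// A compiler backend could emit a state-transition table ahead of time,
// and inject a table-driven selector in place of the default.
function makeTableSelector(table) {
    return function (eventName, configuration) {
        return configuration
            .map(function (state) {
                return table[state.id] && table[state.id][eventName];
            })
            .filter(Boolean);
    };
}

// Usage, with a toy model and a toy table:
var model = {
    initialConfiguration: [{ id: 'a', transitions: [{ event: 't', target: 'b' }] }]
};
var table = { a: { t: { event: 't', target: 'b' } } };
var interpreter = new Interpreter(model, { selectTransitions: makeTableSelector(table) });
var enabled = interpreter.selectTransitions('t', interpreter.configuration);   // -> [{ event: 't', target: 'b' }]

Both the default selector and the table-driven one implement the same interface, so they could be benchmarked against each other without touching the surrounding runtime.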

I think that moving the generated boilerplate code out into a class or set of classes also aids hackability, as this is a more object-oriented approach, and is overall easier to read and comprehend.

Less code generation also means there is less motivation to use XSLT from top to bottom (SCXML-JS was about 90% implemented in XSLT). I have a lot of positive things to say about XSLT, but I think that, overall, this will also be an advantage in terms of developer friendliness.

Focus on Semantics

There are many possible semantics that can be applied to Statecharts, and I decided with my advisor that it would be useful to orient the semantics of the new compiler to a set of possible semantic choices described in Big-Step Semantics, by Shahram Esmaeilsabzali, Nancy A. Day, Joanne M. Atlee, and Jianwei Niu at the University of Waterloo. I first considered using the Algorithm for SCXML Interpretation described in the SCXML specification, but after finding a bug in the algorithm, it seemed like a better approach would be to write my own step algorithm, and base it on a clear set of semantic choices outlined in Big-Step Semantics.

The semantics of the new project will be similar to SCXML semantics, but not identical. I will write a blog post in the future describing in detail the points of variation, and how they affect various edge cases.

Having made precise decisions regarding the semantics that would be used, I have endeavored to write the interpreter using a test-first methodology. Before writing the interpreter, I attempted to write a comprehensive test suite that would cover both the basic cases, as well as complex examples (e.g. n-nested parallel states with deep and shallow history, transition interrupts, and executable behaviour) and edge cases. I have also written a simple test harness to run these tests.

Focus on Threading

In SCXML-JS, I didn’t cleanly take into account whether it would be executing in a single- or multi-threaded environment, and so the approach used for event handling was not well-realized for either scenario. SCXML-JS used a queue for external events, the window.setTimeout method to poll the queue without blocking the browser thread, and a global lock on the statechart object to prevent events from being taken while another event was being processed. In the browser environment, which is single-threaded (Web Workers aside), this “busy-wait” and locking approach is simply unnecessary, as while the Statechart is processing an event, it has possession of the thread, and thus cannot be interrupted with another event before that first event has finished. There is no need to queue events, because each call to send an event to the statechart would be handled synchronously, and would return before the next event would be sent. Likewise, in a multi-threaded environment, there are often better approaches than busy-waiting, such as using a blocking queue, in which the thread sleeps when it finds the queue is empty, and gets awoken when another thread notifies it that an element as entered the queue.

In the new project, I would like to accommodate both single- and multi-threaded environments, and so the implementation will take this into account, and will provide multiple concrete implementations that leverage the best event-handling strategy available for the environment in which it is executing. Furthermore, all concrete implementations will conform to the chosen Statechart semantics for handling events.

This will also make performance and unit testing easier. The single-threaded implementation provides an easy programming model: an event is sent into the statechart, the statechart performs a big-step, and the synchronous call returns control to the caller. It is then possible to check the configuration of the statechart to ensure that it matches an expected configuration. Likewise, it is easy to send many events into a statechart and measure the amount of time the statechart takes to process them all. This would be more complicated (although certainly not impossible) to implement in a multi-threaded implementation based on a blocking queue.
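
As a sketch of this testing model (again with hypothetical method names such as gen and getConfiguration, which may not match the eventual API):

function runTest(statechart, test) {
    test.events.forEach(function (event) {
        statechart.gen(event);                  // synchronous big-step
    });
    var actual = statechart.getConfiguration().sort().join(',');
    var expected = test.expectedConfiguration.sort().join(',');
    if (actual !== expected) {
        throw new Error('expected [' + expected + '] but got [' + actual + ']');
    }
}

function benchmark(statechart, events) {
    var start = new Date().getTime();
    events.forEach(function (event) { statechart.gen(event); });
    return new Date().getTime() - start;        // milliseconds for all events
}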

Where it will Live

I’ve posted the code to github, but I’m not going to post a link here until I work out some issues with the in-browser unit test harness, make sure it works across a range of browsers, and write some documentation.

I could commit it to the Apache Commons Sandbox SVN as a new branch of SCXML-JS, but I’m going to hold off on that, for several reasons. First, I want to release alpha builds of this software, and Apache Commons has a policy that a project must graduate from Commons Sandbox to Commons Proper in order to perform releases. This requires a vote by the community regarding the project’s suitability for inclusion in Commons, and, unfortunately for my project, this is in part dependent on whether the project uses Maven as its build system. I think Maven is probably the best choice if one is developing an application in Java and all of its library dependencies can be found in Maven repositories. But for SCXML-JS at least, the Maven dependency proved to be extremely cumbersome. I spent at least a month after GSoC 2010 was done trying to fix up the SCXML-JS build system so that it would be acceptable to Commons, and this was not a great experience; the end result wasn’t great either. I don’t yet know what build system I will use for this project, but my main goals are to not waste a lot of time on it, to not let it become an obstacle, and to ship early.

Another factor is that I’d like to use git to version-control it. Even when I was working on SCXML-JS, I was using git-svn. git-svn is great, but it would occasionally break, and then it was tricky to recover. Ultimately, the requirement to use SVN is just another small distraction, when I would rather be spending my time coding.

I also haven’t yet decided how I would like to license it.

Project Status

What currently works:

  • Everything that would be considered a part of SCXML core.
  • send, script, assign, and log tags.
  • Fairly thorough test suite written (currently 84 unique tests).
  • Python and JavaScript test harnesses. The JavaScript test harness works in Rhino, and kind of works in the browser (it works in Firefox, but is slow; it freezes Chromium).
  • Implementations written in Python and CoffeeScript. The Python implementation supports scripting in both Python and JavaScript via the excellent python-spidermonkey language bindings.

Regarding what is to come, once I have the test harness working, have tested across browser environments, and written some documentation and examples, I will announce the project here, as well as on relevant mailing lists, and publish an alpha release.

After that, I will be working on implementing optimizations, testing performance, and analyzing the results. Optimistically, this should take about 2 weeks.

Then I will write my thesis.

June 6th, 2010

Google Summer of Code 2010, Project Update 1

I’m two weeks into my Google Summer of Code project, and decided it was time to write the first update describing the work I’ve done, and the work I will do.

Project Overview

First, a quick overview of what my project is, what it does, and why one might care about it. The SCXML Code Generation Framework, JavaScript Edition project (SCXMLcgf/js) centers on the development of a particular tool, the purpose of which is to accelerate the development of rich Web-based User Interfaces. The idea behind it is that there is a modelling language, called Statecharts, which is very good at describing the dynamic behaviour of objects, and can be used for describing rich UI behaviour as well. The tool I’m developing, then, is a Statechart-to-JavaScript compiler, which takes as input Statechart models as SCXML documents, and compiles them to executable JavaScript code, which can then be used in the development of complex Web UIs.
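
To give a feel for the idea (and only as a hypothetical illustration, not the actual SCXMLcgf/js API): the compiler might turn an SCXML model of, say, a drag-and-drop behaviour into a JavaScript object, and DOM events would then simply be forwarded to it as statechart events.

// CompiledDragAndDropStatechart and dispatchEvent are invented names.
var statechart = new CompiledDragAndDropStatechart();

var widget = document.getElementById('widget');
widget.addEventListener('mousedown', function (e) {
    statechart.dispatchEvent({ name: 'mousedown', data: e });
}, false);
document.addEventListener('mousemove', function (e) {
    statechart.dispatchEvent({ name: 'mousemove', data: e });
}, false);
document.addEventListener('mouseup', function (e) {
    statechart.dispatchEvent({ name: 'mouseup', data: e });
}, false);

// The statechart's current configuration, rather than ad hoc boolean flags,
// then determines how each event is handled.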

I’m currently developing this tool under the auspices of the Apache Foundation during this year’s Google Summer of Code. For more information on it, you could read my GSoC project proposal here, or even check out the code here.

Week 1 Overview

As I said above, I’m now two weeks into the project. I had already done some work on this last semester, so I’ve been adding in support for additional modules described in the SCXML specification. In Week 1, I added basic support for the Script Module. I wrote some tests for this, and it seemed to work well, so I checked it in.

Difficulties with E4X

I had originally written SCXMLcgf/js entirely in JavaScript, targeting the Mozilla Rhino JavaScript implementation. One feature that Rhino offers is the E4X language extension to JavaScript. E4X was fantastic for rapidly developing my project. It was particularly useful over standard JavaScript in terms of providing an elegant syntax for: templating (multiline strings with embedded parameters, and regular JavaScript scoping rules), queries against the XML document structure (very similar to XPath), and easy manipulation of that structure.

These language features allowed me to write my compiler in a very declarative style: I would execute transformations on the input SCXML document, then query the resulting structure and pass it into templates which generated code in a top-down fashion. I leveraged E4X’s language features heavily throughout my project, and was very productive.
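
For readers unfamiliar with E4X, a small sketch of the style (illustrative only, not code from SCXML-JS; it runs only on E4X-capable engines such as Rhino):

// XML literals double as multiline templates with embedded {} parameters
// and ordinary JavaScript scoping.
function stateTemplate(id) {
    return <state id={id}>
               <onentry/>
           </state>;
}

var scxml = <scxml>
                <state id="a">
                    <transition event="t" target="b"/>
                </state>
                <state id="b"/>
            </scxml>;

// XPath-like queries run directly on XML objects:
// all states with at least one outgoing transition.
var statesWithTransitions = scxml..state.(transition.length() > 0);

// Easy in-place manipulation of the structure.
scxml.appendChild(stateTemplate("c"));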

Unfortunately, during Week 1, I ran into some difficulties with E4X. There was some weirdness involving namespaces, and some involving scoping. This wasn’t entirely surprising, as the Rhino implementation of E4X has not always felt very robust to me. Right out of the box, there is a bug that prevents one from parsing XML files with XML declarations, and I have encountered other problems as well. In any case, I lost an afternoon to this problem, and decided that I needed to begin to remove SCXMLcgf/js’s E4X dependencies sooner rather than later.

I had known that it would eventually be necessary to move away from E4X for portability reasons, as it would be desirable to be able to run SCXMLcgf/js in the browser environment, including non-Mozilla browsers. There are a number of reasons for this, including the possibility of using the compiler as a JIT compiler, and the possibility of providing a browser-based environment for Statechart development. Given the problems I had had with E4X in Week 1, I decided to move this task up in my schedule and deal with it immediately.

So, for Week 2, I’ve been porting most of my code to XSLT.

Justification for Targeting XSLT

At the beginning of Week 2, I knew I needed to migrate away from E4X, but it wasn’t clear what the replacement technology should be. So, I spent a lot of time thinking about SCXMLcgf/js, its architecture, and the requirements that this imposes on the technology.

The architecture of SCXMLcgf/js can be broken into three main components:

  • Front End: Takes in arguments, possibly passed in from the command-line, and passes these in as options to the IR Compiler and the Code Generator.
  • IR Compiler: Analyzes the given SCXML document, and creates an Intermediate Representation (IR) that is easy to generate code from.
  • Code Generator: Generates code from a given SCXML IR. May have multiple backend modules that target different programming languages (it currently only targets JavaScript), and different Statechart implementation techniques (it currently targets three different techniques).

My goal for Week 2 was just to eliminate E4X dependencies in the Code Generator component. The idea behind this component is that its modules should only be used for templating. The primary goal of these template modules is that they should be easy to read, understand, and maintain. In my opinion, this means that templates should not contain procedural programming logic.

Moreover, I came up with other precise feature requirements for a templating system, based on my experience from the first implementation of SCXMLcgf/js:

  • must be able to run under Rhino or the browser
  • multiline text
  • variable substitution
  • iteration (loops)
  • if/else blocks
  • Mechanisms to facilitate Don’t Repeat Yourself (DRY)
    • Something like function modularity, where you separate templates into named regions.
    • Something like inheritance, where a template can import other templates, and override functionality in the parent template.

Because I’m very JavaScript-oriented, I first looked into templating systems implemented in JavaScript. JavaScript templating systems are more plentiful than I had expected. Unfortunately, I did not find any that fulfilled all of the above requirements. I won’t link to any, as I ultimately chose not to go down this route.

A quick survey of XSLT, however, indicated to me that it did support all of the above functionality. So, this left me to consider XSLT, the other programming language which enjoys good cross-browser support.

I was pretty enthusiastic about this, as I had never used XSLT before, but had wanted to learn it for some time. Nevertheless, I had several serious concerns about targeting XSLT:

  1. How good is the cross-browser support for XSLT?
  2. I’m a complete XSLT novice. How much overhead will be required before I can begin to be productive using it?
  3. Is XSLT going to be ridiculously verbose (do I have to wrap all non-XML text in a <text/> node)?
  4. Is there good free tooling for XSLT?
  5. Another low-priority concern was that I wanted to keep down dependencies on different languages; it would be nice to focus on only one. I’m not sure about XSLT’s expressive power. Would it be possible to port the IR-Compiler component to XSLT?

To address each of these concerns in turn:

  1. There are some nice js libs that abstract out the browser differences: Sarissa, Google’s AJAXSLT.
  2. I did an initial review of XSLT. I found parts of it to be confusing (like how and when the context node changes; the difference between apply-templates with and without the select attribute; etc.), but decided the risk was low enough that I could dive in and begin experimenting with it. As it turned out, it didn’t take long before I was able to be productive with it.
  3. Text node children of an <xsl:template/> are echoed out. This is well-formed XML, but I’m not sure if it’s strictly legal XSLT. Anyhow, it works well, and looks good.
  4. This was pretty bad. The best graphical debugger I found was: KXSLdbg for KDE 3. I also tried the XSLT debugger for Eclipse Web Tools, and found it to be really lacking. In the end, though, I mostly just used <xsl:message/> nodes as printfs in development, which was really slow and awkward. This part of XSLT development could definitely use some improvement.

I’ll talk more about 5. in a second.

XSLT Port of Code Generator and IR-Compiler Components

I started to work on the XSLT port of the Code Generator component last Saturday, and had it completed by Tuesday or Wednesday. This actually turned out not to be very difficult, as I had already written my E4X templates in a very XSLT-like style: top-down, primarily using recursion and iteration. There was some procedural logic in there which needed to be broken out, so there was some refactoring to do, but this wasn’t too difficult.

When hooking everything up, though, I found another problem with E4X, which was that putting the Xalan XSLT library on the classpath caused E4X’s XML serialization to stop working correctly. Specifically, namespaced attributes would no longer be serialized correctly. This was something I used often when creating the IR, so it became evident that it would be necessary to port the IR Compiler component in this development cycle as well.

Again, I had to weigh my technology choices. This component involved some analysis, and transformation of the given SCXML document to include this extra information. For example, for every transition, the Least Common Ancestor state is computed, as well as the states exited and the states entered for that transition.
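
As a sketch of the kind of analysis involved (plain JavaScript, for illustration only; the actual port does this work in XSLT):

// Compute the Least Common Ancestor of a transition's source and target,
// given parent pointers on each state object.
function ancestors(state) {
    var result = [];
    for (var s = state.parent; s; s = s.parent) {
        result.push(s);
    }
    return result;
}

function leastCommonAncestor(source, target) {
    var targetAncestors = ancestors(target);
    // The first ancestor of the source that is also an ancestor of the target.
    var common = ancestors(source).filter(function (a) {
        return targetAncestors.indexOf(a) !== -1;
    });
    return common.length ? common[0] : null;
}

// The states exited by the transition are those between the source and the
// LCA; the states entered are those between the LCA and the target.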

I was doubtful that XSLT would be able to do this work, or that I would have sufficient skill to program it, so I initially began porting this component to just use DOM for transformation and XPath for querying. However, this quickly proved not to be a productive approach, and I decided to try XSLT instead. I don’t have too much to say about this, except to observe that, even though development was often painful due to the lack of a good graphical debugger, it was ultimately successful, and the resulting code doesn’t look too bad. In most cases, I think it’s quite readable and elegant, and I think it will not be difficult to maintain.

Updating the Front End

The last thing I needed to do, then, was update the Front End to match these changes. At this point, I was in the interesting situation of having all of my business logic implemented in XSLT. I really enjoyed the idea of having a very thin front-end, so something like:

xsltproc xslt/normalizeInitialStates.xsl $1 | \
xsltproc xslt/generateUniqueStateIds.xsl - | \
xsltproc xslt/splitTransitionTargets.xsl - | \
xsltproc xslt/changeTransitionsPointingToCompoundStatesToPointToInitialStates.xsl - | \
xsltproc xslt/computeLCA.xsl - | \
xsltproc xslt/transformIf.xsl - | \
xsltproc xslt/appendStateInformation.xsl - | \
xsltproc xslt/appendBasicStateInformation.xsl - | \
xsltproc xslt/appendTransitionInformation.xsl - | \
xsltproc xslt/StatePatternStatechartGenerator.xsl - | \
xmlindent > out.js

There would be a bit more to it than that, as there would need to be some logic for command-line parsing, but this would also mostly eliminate the Rhino dependency in my project (only mostly, because the code still uses js_beautify as a JavaScript code beautifier, and the build and performance analysis systems are still written in JavaScript). This approach also makes it very clear where the main programming logic is now located.

In the interest of saving time, however, I decided to continue to use Rhino for the front end, and to use the Java SAX APIs for processing the XSLT transformations. I’m not terribly happy with these APIs, and I think Rhino may be making the system perceptibly slower, so I’ll probably move to the thin front end at some point. But right now this approach works and passes all unit tests, so I’m fairly happy with it.

Future Work

I’m not planning to check this work into the Apache SVN repository until I finish porting the other backends, clean things up, and re-figure out the project structure. I’ve been using git and git-svn for version control, though, which has been useful and interesting (this may be the subject of another blog post). After that, I’ll be back onto the regular schedule of implementing modules described in the SCXML specification.

April 29th, 2010

Update

Courses have finished, and I’ve been accepted into Google Summer of Code 2010. Lots of interesting things to come.

January 31st, 2010

DocBook Customization From a User’s Perspective

Today I was working on a project proposal for my course on Software Architecture. There was a strict limit of 5 pages on the document, including diagrams, and so it was necessary to be creative in how we formatted the document, to fit in the maximum possible content. I think that DocBook XSL, together with Apache FOP, generates really great-looking documents out of the box. Unfortunately, however, it does tend to devote quite a lot of space to the formatting, so today I learned a few tips for styling DocBook documents. These techniques turned out to be non-trivial to discover, so I thought I’d share them with others.

Background Information

Some customization can be done very simply, by passing a parameter at build-time to your xslt processor. For many of these customizations, however, DocBook does not insulate the user from XSLT. Specifically, it is necessary to implement what DocBook XSL refers to as a “customization layer”. This technique is actually fairly simple, once you know about it.

In short, when compiling your DocBook document to, for example, HTML or FO, you would normally point your XSLT processor to html/docbook.xsl or fo/docbook.xsl in your DocBook XSL directory. To allow for some customizations, however, you need a way to inject your own logic, and to do this, you create a new XSL document (e.g. custom-docbook-fo.xsl), which imports the docbook.xsl stylesheet you would have originally used. Because this new document is where you inject your customization logic, it is called a “customization layer”. This is not difficult in practice, but, as I said, it does not insulate the user from XSLT, which for me was a bit shocking, as I’m not used to seeing and working with XSLT.

Easy Customizations

Two customizations I wanted to do were:

  • Remove the Table of Contents
  • Resize the body text

Both of these customizations require the user to simply add a parameter when calling their XSLT processor. In ant, this looks like the following:

        <xslt style="custom-fo.xsl" extension=".fo" 
            basedir="src" destdir="${doc.dir}" includes="*.xml">
            <classpath refid="xalan.classpath" />
			<param name="body.font.master" expression="11"/>
			<param name="generate.toc" expression="article/appendix  nop"/>
        </xslt>

The above params remove the Table of Contents, and set the body font to 11pt. Additionally, all other heading sizes are computed in terms of the “body.font.master” property, so they will all be resized when this property is set.

That’s pretty much all there is to it.

Harder Customizations

Two other customizations I wanted to do were:

  • Reduce the size of section titles.
  • Remove the indent on paragraph text.

To do this, I had to create a customization layer document in the manner I described above. It looks like the following:

<?xml version='1.0'?> 
<xsl:stylesheet  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"  
      version="1.0"> 

<xsl:import href="docbook-xsl/docbook-xsl-1.75.2/fo/docbook.xsl"/> 

<!-- set sect1 and sect2 title text size-->
<xsl:attribute-set name="section.title.level1.properties">
  <xsl:attribute name="font-size">
    <xsl:value-of select="$body.font.master * 1.3"/>
    <xsl:text>pt</xsl:text>
  </xsl:attribute>
</xsl:attribute-set>

<xsl:attribute-set name="section.title.level2.properties">
  <xsl:attribute name="font-size">
    <xsl:value-of select="$body.font.master * 1.1"/>
    <xsl:text>pt</xsl:text>
  </xsl:attribute>
</xsl:attribute-set>

<!-- remove the indent on para text -->
<xsl:param name="body.start.indent">
  <xsl:choose>
    <xsl:when test="$fop.extensions != 0">0pt</xsl:when>
    <xsl:when test="$passivetex.extensions != 0">0pt</xsl:when>
    <xsl:otherwise>0pc</xsl:otherwise>
  </xsl:choose>
</xsl:param>
</xsl:stylesheet>  

Note that the content of the above document was mostly copy-pasted from various sections of Part 3 of DocBook XSL: The Complete Guide. All I had to do was guess at what it was doing, and substitute my desired values; I wouldn’t have been able to program this myself.

A very useful resource for these sorts of customizations is FO Parameter Reference.

Customizations You Need a Degree in Computer Science to Understand

One of the first customizations I wanted to make was to reduce the font sizes used in the title of the document. Even with detailed instructions, it took me about two hours to figure out how to do this, just because the method of accomplishing this task was so unexpected.

In general, what you’re doing is the following:

  1. Copying a template that describes how to customize the title.
  2. Customizing that template with things like the Font size.
  3. Using an XSL stylesheet to compile that customized copy into an XSL stylesheet. Yes, you use an XSL stylesheet to create an XSL stylesheet.
  4. Include the compiled XSL stylesheet in your customization layer.
  5. Optionally, automate this task by making it a part of your build process.

Holy smokes! Let’s run through a concrete example of this.

First, make a copy of fo/titlepage.templates.xml. I put it in the root of my project and called it mytitlepage.spec.xml. I then messed with the entities in mytitlepage.spec.xml to change the title font size. This was pretty self-explanatory. I then skipped a few steps, and integrated it with my ant script.

	<target name="build-title-page">
		<xslt style="${docbook.xsl.dir}/template/titlepage.xsl" extension=".xsl" 
            basedir="." destdir="." includes="mytitlepage.spec.xml">
            <classpath refid="xalan.classpath" />
        </xslt>
	</target>

And made my build-fo task depend on this new task:

    <target name="build-fo" depends="depends,build-title-page" 
        description="Generates HTML files from DocBook XML">
		...
    </target>

Now, whenever I build-fo, mytitlepage.spec.xml will be processed by template/titlepage.xsl in my DocBook XSL directory, producing the document mytitlepage.spec.xsl. I then import mytitlepage.spec.xsl into my customization layer:

<?xml version='1.0'?> 
<xsl:stylesheet  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"  
      version="1.0"> 

<xsl:import href="docbook-xsl/docbook-xsl-1.75.2/fo/docbook.xsl"/> 

<xsl:import href="mytitlepage.spec.xsl"/>
...
</xsl:stylesheet>  

And that’s it. It’s really not that difficult once you know how to do it, and you only have to wire it all together once, but it took a long time to see how all of the pieces fit together.

Conclusion

My advisor knows all sorts of tricks for LaTeX in order to, among other things, compress documents down to size so that they fit into conferences with strict page limits. I think this is pretty standard practice. You can do the same thing with DocBook, but expect a high learning curve, especially if you’ve never seen XSLT or are unfamiliar with build systems. I think DocBook is pretty consistent in this respect. But, in all fairness, I was ultimately successful: all of the resources were there to allow me to figure this out myself.

January 28th, 2010

docbook-ant-quickstart-project

I was forced to learn Docbook for the SVG Open 2009 conference, which asks all of its authors to submit in Docbook format. I found it cumbersome and confusing to set up, but, once I had put all of my tool support in place, I actually found it to be a very productive format for authoring structured documents.

Similar in concept to Latex, I now prefer to use Docbook for all of my technical writing. I like it because it’s XML (this is a matter of personal taste, but I like XML as a markup format), because it is environment-agnostic (I prefer to edit in Vim, but Eclipse includes great XML tooling and integration with version-control systems, and thus is also an excellent choice for a Docbook-editing environment), and because, thanks to the Apache FOP and Batik projects, it’s very easy to create PDF documents which include SVG images.

Still, I could never forget the initial pain involved in setting up Docbook, and so I’ve created docbook-ant-quickstart-project, a project to reduce this initial overhead for new users. From the project description:

Docbook is a great technology for producing beautiful, structured documents. However, learning to use it and its associated tools can involve a steep learning curve. This project aims to solve that problem by packaging everything needed to begin producing rich documents with Docbook. Specifically, it packages the Docbook schemas and XSL stylesheets, and the Apache FOP library and related dependencies. It also provides an Ant script for compilation, and includes sample Docbook files. Thus, the project assembles all of the components required to allow the user to begin creating PDF documents from Docbook XML sources quickly and easily.

I spent a long time looking for a similar project, and, surprisingly, didn’t find too much in this space. I did find one project which has precisely the same goals, but it relies on make and other command-line tools typically found on Unix platforms. Right now, I’m on Windows, and Cygwin has been problematic since Vista, so Ant and Java are a preferred solution. Also, by using Ant and Java, it is very easy to begin using this project in Eclipse.

I hope Docbook enhances your productivity as much as it has mine :)

December 31st, 2009

2010 New Year’s Resolution

I have a backlog of draft posts, and I’m going to start working my way through them. I had a very interesting semester, and the next should be even more interesting. I’m keen to resume sharing. Look for more posts to come.

October 27th, 2009

Ubuntu to Vista and Back Again

Happy Halloween everyone! I’ve been writing this blog post for about a week now, continually adding to it as I acquire new information. At the moment, I have some work to do, but am waiting for Oracle XE to finish downloading, and so I’m going to try to finish this blog post before I feel obliged to resume being productive.

I’ll split this post up into a few parts. I’ve been having operating system trouble, which has been ongoing. This may be interesting to others, so I thought I’d share it.

Leaving Ubuntu 9.04

I’ve been using Ubuntu as my primary OS since 2006, when I switched from Windows XP. In that time, I’ve had the opportunity to install Ubuntu on a lot of different hardware, but I’ve used a Dell Inspiron 1300 laptop as my main machine. In August, the laptop finally died a slow, lingering death due to hardware failure, and so I bought a new laptop, an HP Touchsmart tx2-1000. In addition to having pretty good specs, this machine has the distinction of being what HP calls the first consumer laptop with a multi-touch display. I chose this particular machine because it was on sale and offered extraordinary value for the price, and because I believe that the multi-touch display will prove very useful and interesting for my research into UI.

I put Ubuntu 9.04 on the new computer the very first night I received it. I was pretty impatient, and didn’t even attempt to create Windows Vista restore disks. I intended to make a dual-boot, but didn’t defragment the hard drive before installing, and so the Ubuntu installer failed to resize the Vista partition, and ended up hosing it. The restore partition was still intact, though.

Unfortunately, Ubuntu 9.04 did not provide a very good experience on this hardware. Audio playback worked alright, even if it was somewhat suboptimal. Headphone jack sensing didn’t work, and so it was necessary to manually mute the front speakers through the mixer. Most importantly, I could never get the microphone to work. Linux is such a mess right now that it’s hard to say where the fault lies when something isn’t working. For example, when Ekiga fails to make a voice call, it could be Ekiga failing to communicate with Pulseaudio via the Pulseaudio ALSA plugin, Pulseaudio failing to talk to ALSA, or ALSA failing to properly communicate with my hardware. VOIP is critical to everything that I do on a day-to-day basis. I need it to work, and so I tried many different things in order to make it work: I tried different VOIP clients (Skype 2.0, Skype 2.1, and Ekiga), I tried stripping out Pulseaudio and using ALSA directly, I messed around with ALSA, and then I swapped out the stack entirely and used OSSv4. This was as painful as it sounds, and I was unable to converge to a reasonable result.

The screen worked well as a tablet (using the pen, not fingers) out of the box, which was nice, but the requirements to get the touchscreen working were nontrivial. When presented with clear instructions, I’m very comfortable patching and compiling my own kernel. Unfortunately, the instructions are still evolving. The end result was that I spent a few hours working on this, broke tablet support, and then gave up. I might have tried again, but I’ve been exceptionally busy.

I had some trouble with suspend/resume, in that it would occasionally suspend and then be unable to resume. The screen would simply be black; no X, no backlight, nothing to do but reboot.

Finally, while the open source radeon driver worked very well with my graphics card, and provided a very solid experience, I really wanted to use Compiz, and the proprietary driver, which enabled 3D graphics on my hardware, turned out to be rather sketchy. Once again, I tried many permutations, but was unable to converge on something that I felt was solid and reliable.

After all this, for the first time in 3 years, I decided it might be better for me to switch back to Windows. If this were Windows XP, this might have been a good decision. Unfortunately, Windows Vista was far worse than I had anticipated.

Reinstallation of Windows Vista

Reinstallation of Windows Vista was nontrivial, and I’ll only say a few words about it, as the procedure was not very difficult, but was nontrivial to discover. I had not created Vista restore disks, and I didn’t have a true Windows repair disk. Fortunately, when installing Ubuntu, the installer is clever enough to detect the restore partition as a separate Vista install. I was then able to boot into the restore partition using GRUB. Unfortunately, the HP restore tools were unable to restore Vista with Ubuntu on that partition. The solution was to use the Windows cmd shell provided by the restore partition to:

  1. restore the MBR to use the Windows NT bootloader,
  2. delete the Ubuntu partition, and
  3. initiate system restore

I discovered the details of how to do this by reading this post on Ubuntu forums, which proved to be a critical resource in this process.

Trying to Construct a Linux-flavored Userland in Vista x64

I was fairly optimistic about transitioning to Windows. I know that there are a lot of FLOSS projects that would help ease the transition. There are some basic tools that I need readily at hand in my OS in order to be comfortable there: GNU screen, bash, vim, a unix-like shell environment, and X11.

At the top of my list was Portable Ubuntu. Portable Ubuntu looks like quite a nice piece of work: it uses coLinux to run the Linux kernel as a process inside of Windows, together with the Windows port of Pulseaudio, and Xming, an X server port for Windows. The effect of this is that you get the full Ubuntu Linux userland, running at full speed, with similar memory consumption, and excellent integration into the Windows shell. A Windows kernel, with all of its hardware support, plus an Ubuntu userland sounds like a pretty attractive combination.

Unfortunately, this didn’t work, for two reasons. First, because coLinux doesn’t work on 64-bit versions of Windows in general. Second, because Windows Vista 64-bit does not allow the installation of drivers that are not signed by Microsoft. This basically means that coLinux is full-out for me.

I next tried Ubuntu running in a virtual machine inside of VirtualBox. This is pretty wasteful for just an X server and a shell, but whatever, my machine has a nice fast processor and lots of RAM. Unfortunately, this did not provide very good integration with the Windows shell, even with seamless mode, and soon proved annoying to use. I may revisit it at some point, but I decided to look into a Windows-native solution that would provide better integration.

I then tried Cygwin, which attempts to create a unix subsystem in windows. Cygwin would give me X11, Xterm, bash, screen, vim, and pretty much everything else I require.

Unfortunately, Cygwin has its own problems. Specifically, Cygwin attempts to be POSIX-compliant, and the way it encodes Unix filesystem permissions on NTFS, while totally innocuous in Windows XP, seems to conflict with Windows Vista’s User Account Control. This is not something that the Cygwin developers seem to have any interest in fixing. The result is that you get files that are extremely difficult to move and copy, and very difficult to delete, using the Windows shell. So Cygwin was not an effective solution for me.

I finally tried one last thing, a combination of tools: Xming, MSYS, MinGW, and GNUwin32. MSYS and MinGW appear to be mostly intended to allow easier porting of software written for a unix environment to Windows, but MSYS provides a very productive unix-flavored shell environment inside of Windows. GNUwin32 ports many familiar GNU tools to Windows, so I have a fairly rich userland: rxvt as a terminal emulator, vim, bash, and a unix-flavored environment. This is not ideal, as it is not easily extensible, and doesn’t support any concept of packages, but it seems it’s the best I can do on Windows Vista x64.

A Very Late Review of Windows Vista

Let me start with the things that I like about Vista.

When I develop software, I primarily target the web as a platform, and so I like the fact that I can install a very wide range of browsers for testing: IE 6, 7, and 8 (Microsoft publishes free Virtual PC images for testing different versions of IE), Chrome, Safari, Firefox and Opera. It’s very convenient not to have to fire up a VM for testing.

Hardware support is top-notch. The audio and video stack feel polished and mature. I’ve never had an instance of them failing. And, all of my special hardware works, including the multi-touch touchscreen, and pressure-sensitive pen.

Now for the bad stuff. I want to keep this very brief, because it’s no longer interesting to complain about how bad Vista is… But it is so bad, it is virtually unusable, and I want to make it clear why:

  1. I seem to get an endless stream of popups from the OS asking if I really want to do the things I ask it to do. This transition is visually jarring, and very annoying.
  2. It maintains the behaviour that it had back in Windows 95, where if a file is opened by some application and you attempt to move it, the move will fail without meaningful feedback. This can be overcome with File Unlocker, but it’s crazy that this simple usability issue has never been addressed.
  3. File operations are so slow as to be unusable.
  4. Before moving a file, Windows Explorer counts every single file you’re going to move. This makes no sense to me at all, because moving a file in NTFS is, I believe, just a matter of changing a pointer in the parent directory. If you use the Windows cmd shell with the “move” operation, or Cygwin/MSYS’s “mv” operation, the move takes place instantaneously; there is no attempt to count every file before moving the parent directory. So this really is just a Windows shell issue; it has nothing to do with the underlying filesystem. And as bad a user experience as moving files using Windows Explorer is, it’s much, much worse when you discover that it’s completely unnecessary.
  5. Out of the box, my disk would thrash constantly, even when I wasn’t doing anything. I eventually turned off Windows Defender, Windows Search, and the Indexer service, and things have gotten better.
  6. It takes about 2 minutes to boot, and then another 5 minutes before it is at all usable, as it loads all of the crapware at boot. I’ve gone through msconfig and disabled a lot of the crapware preinstalled by HP, and this has gotten somewhat better, but out of the box it was just atrocious.
  7. Windows Explorer will sometimes go berserk and start pegging my CPU.
  8. Overall it just feels incredibly, horribly slow. I feel like it cannot keep up with the flow of my thoughts, or my simple needs for performance and responsiveness. It does not offer a good user experience.
  9. Only drivers signed by Microsoft are allowed on 64-bit Vista. This is a huge WTF.

All in all, Vista sucks and I hate it. Maybe Windows 7 will be better. Right now, though, a real alternative is necessary, because Vista offers such a poor experience that it is simply not usable for me. I had forgotten what it was like to want to do physical violence to my computer. No longer.

Really, at this point I feel like I should have gotten a Mac.

Last Word: the Karmic Koala

Ubuntu 9.10 Karmic Koala came out this past Thursday, and I just tried it out using a live USB. I’m happy to say that it sucks significantly less on my hardware than 9.04! In particular, audio now seems to work flawlessly: playback through speakers, headphones, and headphone jack sensing all work fine; recording through the mic jack works out of the box. I didn’t try Skype, but the new messaging application shipped with Ubuntu, Empathy, is able to do voice and video chat with Google Chat clients using the XMPP protocol.

I had mixed success with Empathy. It wouldn’t work at all with video chat; I think this had to do with an issue involving my webcam, as Cheese and Ekiga also had trouble using it. With regard to pure audio chat, it worked fine in one case, but in another it crashed the other user’s Google Chat client. Yikes. So, clearly there are still some bugs that need to be worked out with respect to the client software.

I now feel much more optimistic about the state of the Linux audio stack. I wasn’t really sure that the ALSA/Pulseaudio stack was converging on something that would eventually be stable and functional enough to rival the proprietary stacks on Windows and Mac OS X. The improvements I have seen on my hardware, though, are very encouraging, and so I think I may go back to Ubuntu after all. At the very least, I’m going to hook up a dual boot.

Wow, that was a long post! I hope parts of it might be generally interesting to others who may be in a similar situation. In the future, though, I’m going to try to focus more on software development issues.

October 8th, 2009

JavaScript 1.7 Difficulties

For my course in compilers, we have a semester-long project in which we build a compiler for a DSL called WIG. We can target whatever language and platform we want, and there are certain language features of JavaScript, specifically the Rhino implementation, that I thought could be leveraged very productively. I was excited to have the opportunity to shed the burden of browser incompatibilities, and to drill down into the more advanced features of the JavaScript language. Unfortunately, I’ve also encountered some initial challenges, some of which are irreconcilable.

E4X

One thing that I was excited about was E4X. In WIG, you’re able to define small chunks of parameterizable HTML code, which maps almost 1-1 to E4X syntax. Unfortunately, Rhino E4X support is broken on Ubuntu Intrepid and Jaunty. Adding the missing libraries to the classpath has not resolved the issue for me. On the other hand, the workaround of getting Rhino 1.7R2 from upstream, which comes with out-of-the-box E4X support, is unacceptable, as this Rhino version seems to introduce a regression, in which it throws a NoMethodFoundException when calling methods on HTTP servlet Request and Response objects. I’ll file a bug report about this later, but the immediate effect is that I’m stuck with the Ubuntu version, and without E4X support.

Language Incompatibilities

Destructuring assignments were first introduced in JavaScript 1.7. While array destructuring assignments have worked fine for me, I haven’t been able to get object destructuring assignments to work under any implementation but Spidermonkey 1.8. Rhino 1.7R1 and 1.7R2, as well as Spidermonkey 1.7.0, all fail to interpret the example in Mozilla’s documentation: https://developer.mozilla.org/en/New_in_JavaScript_1.7#section_25
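
For reference, the two flavours look like this (a minimal illustration, not the Mozilla example itself):

// Array destructuring: this has worked for me across implementations.
var [first, second] = ["a", "b"];

// Object destructuring: in my testing this only works under Spidermonkey 1.8.
var point = { x: 1, y: 2 };
var { x: px, y: py } = point;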

This is disappointing, as it would have provided an elegant solution to several problems presented by WIG.
