I was approached two years ago by a colleague from the Queensland Department of Education and Training (DET) to help set up a web application for GUS Gives – a charity portal that collects payments from members and provides detailed analytics for charity organisations. My role was to provide data management support and the first task on the list was creating sample data sets in order to test the report generation functions of the website.
I had done similar rule-based generation of staff and student data sets while working at DET. Using the same technique, I developed an application that generates random people/members as a CSV file (the file format was chosen by the application developers). The reference data – first and last names, locations, phone number prefixes, salaries, etc. – was collated from public sources: the US Census Bureau, the Australian Bureau of Statistics, OpenStreetMap, and Wikipedia.
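A minimal sketch of this kind of rule-based generation, in Python with tiny hypothetical reference lists (the real application collates much larger reference data sets from the sources above), might look like:

```python
import csv
import random

# Hypothetical reference data for illustration; the actual project draws
# these from the US Census Bureau, ABS, OpenStreetMap, and Wikipedia.
FIRST_NAMES = ["Alice", "Rahul", "Mei", "Tom"]
LAST_NAMES = ["Smith", "Nair", "Chen", "Brown"]
LOCATIONS = ["Brisbane", "Cairns", "Townsville"]
PHONE_PREFIXES = ["07", "04"]

FIELDS = ["first_name", "last_name", "location", "phone", "salary"]

def random_member(rng):
    """Build one member row by sampling from the reference data."""
    return {
        "first_name": rng.choice(FIRST_NAMES),
        "last_name": rng.choice(LAST_NAMES),
        "location": rng.choice(LOCATIONS),
        # Prefix plus eight random digits, a simple rule for plausible numbers.
        "phone": rng.choice(PHONE_PREFIXES)
                 + "".join(str(rng.randint(0, 9)) for _ in range(8)),
        "salary": rng.randrange(40_000, 120_000, 1_000),
    }

def write_members(path, count, seed=42):
    """Write `count` generated members to a CSV file.

    Seeded so the same sample data set can be regenerated for tests.
    """
    rng = random.Random(seed)
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        for _ in range(count):
            writer.writerow(random_member(rng))
```

Seeding the generator is the key design choice for test data: report-generation tests can assert against a known, reproducible data set rather than fresh random output each run.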
Now that GUS Gives looks like a non-starter, I have uploaded the source to GitHub. The project also contains the data requirements and the collated reference data, so the application can run without further dependencies.
My 85-year-old grandma has been getting into reading various bible translations on her iPad. To improve her Hindi reading skills, she was looking for a side-by-side Malayalam and Hindi bible. While we could find Malayalam-English and Hindi-English versions, we couldn’t find a Malayalam-Hindi one that she could access on her computer or iPad.
I decided to help her out, and the result is Parallel Bible. I have also uploaded the source code to GitHub.
The application is a static site generator using Razor templates – I took this approach as I couldn’t host a Ruby on Rails or .NET website with my current hosting provider. It uses jQuery Mobile, as my grandma does most of her browsing on her iPad and I wanted the site to be reasonably usable on a mobile device. Before running the application, the English bible data must be downloaded, while the Indian translations are screen scraped. The translations are then merged during template execution.
I did find it interesting that the number of chapters and verses varied between translations. The mismatches were most frequent between the English and Malayalam versions, with quite a few instances of a ±1 verse difference.
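Handling those ±1 mismatches during the merge comes down to padding the shorter side of a chapter. A small Python sketch of a hypothetical merge step (the function name and placeholder are my own, not from the project), pairing verses side by side:

```python
from itertools import zip_longest

def merge_chapter(verses_a, verses_b, placeholder=""):
    """Pair two translations' verses for one chapter, side by side.

    zip_longest pads the shorter translation with `placeholder`,
    so a chapter with an extra verse in one translation still renders
    as aligned rows rather than truncating the longer side.
    """
    return list(zip_longest(verses_a, verses_b, fillvalue=placeholder))
```

For example, merging a two-verse chapter against a three-verse one yields three pairs, the last of which has an empty cell on the shorter side – the template can then render each pair as one row of the side-by-side view.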
Rails asset precompile times on JRuby are considerably slower than on MRI. I came across this post, which provided suggestions for speeding up the asset precompile task.
Using the following options cut my asset precompile time from 4 mins 37 secs to 2 mins 8 secs: using Node.js instead of therubyrhino for JS compilation, forcing the JVM into 32-bit mode (this can be omitted on a 32-bit JVM), and disabling JIT compilation. The switch to Node.js contributed the majority of that improvement, since I’m already running a 32-bit VM.
EXECJS_RUNTIME='Node' JRUBY_OPTS="-J-d32 -X-C" rake assets:precompile