Offline web apps

I was doing some work with a mobile web application that had to continue to operate in a disconnected scenario, as users may intermittently go in and out of areas with mobile network reception.

Adding offline support was reasonably straightforward in web browsers that support the necessary APIs.

Example code is available on GitHub.

Querying ArcGIS attribute subtypes and domains with SQL

ArcGIS geodatabases have the concept of domains, where an attribute can be restricted to a set of values, similar to a drop-down list. Geodatabases also allow a field to be designated as a subtype, where the subtype determines which set of domains applies to the other attributes. This can be used when setting up cascading drop-down lists.

In an Enterprise Geodatabase, this subtype and domain metadata is stored in the GDB_ITEMS table as an XML definition. This ESRI technical article provides code for extracting domain values via SQL:

SELECT
  items.Name,
  codedValue.value('Code[1]', 'nvarchar(max)') Code,
  codedValue.value('Name[1]', 'nvarchar(max)') Value
FROM SDE.GDB_ITEMS AS items
INNER JOIN SDE.GDB_ITEMTYPES AS itemtypes ON items.Type = itemtypes.UUID
CROSS APPLY items.Definition.nodes('/GPCodedValueDomain2/CodedValues/CodedValue') AS CodedValues(codedValue)

This works well if there are no subtypes, as you can just join your table or view against the Code column and display the Value column. To go one step further and incorporate subtypes, we need another query that extracts the domains used by a subtype (change MyTable and MyAttribute to appropriate values):

SELECT
  SubtypeChildren.value('SubtypeCode[1]', 'varchar(50)') SubtypeCode,
  DomainName.value('.', 'varchar(50)') DomainName
FROM GDB_ITEMS i
CROSS APPLY i.Definition.nodes('//Subtype') AS Subtype(SubtypeChildren)
CROSS APPLY SubtypeChildren.nodes('FieldInfos/SubtypeFieldInfo/FieldName[text()="MyAttribute"]/../DomainName') AS SubtypeFieldInfo(DomainName)
WHERE Name = 'MyTable'

This returns the subtype code as well as the domain name. The subtype code can then be joined against your table or view, while the domain name can be joined against items.Name from the first query. To return the cascaded domain, we need to join against the first query again. For example, using CTEs:

WITH domains AS (
  SELECT
    items.Name,
    codedValue.value('Code[1]', 'nvarchar(max)') Code,
    codedValue.value('Name[1]', 'nvarchar(max)') Value
  FROM SDE.GDB_ITEMS AS items
  INNER JOIN SDE.GDB_ITEMTYPES AS itemtypes ON items.Type = itemtypes.UUID
  CROSS APPLY items.Definition.nodes('/GPCodedValueDomain2/CodedValues/CodedValue') AS CodedValues(codedValue)
),
subtypes AS (
  SELECT
    SubtypeChildren.value('SubtypeCode[1]', 'varchar(50)') SubtypeCode,
    DomainName.value('.', 'varchar(50)') DomainName
  FROM GDB_ITEMS i
  CROSS APPLY i.Definition.nodes('//Subtype') AS Subtype(SubtypeChildren)
  CROSS APPLY SubtypeChildren.nodes('FieldInfos/SubtypeFieldInfo/FieldName[text()="MyAttribute"]/../DomainName') AS SubtypeFieldInfo(DomainName)
  WHERE Name = 'MyTable'
)
SELECT
  m.Category CategoryCode,
  d1.Value CategoryDescription,
  m.MyAttribute TypeCode,
  d2.Value TypeDescription
FROM MyTable m
INNER JOIN domains d1 ON d1.Code = m.Category
INNER JOIN subtypes s ON s.SubtypeCode = m.Category
INNER JOIN domains d2 ON d2.Name = s.DomainName AND d2.Code = m.MyAttribute

The subtypes query above is limited to a single attribute as this was the use case I needed to support. To generalise it and return all fields, modify it to return the SubtypeCode, FieldName and DomainName.
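
A sketch of that generalised query (untested; same MyTable placeholder as above) could look like:

-- One row per subtype and field, with that field's domain name (NULL where no domain applies)
SELECT
  SubtypeChildren.value('SubtypeCode[1]', 'varchar(50)') SubtypeCode,
  FieldInfo.value('FieldName[1]', 'varchar(50)') FieldName,
  FieldInfo.value('DomainName[1]', 'varchar(50)') DomainName
FROM GDB_ITEMS i
CROSS APPLY i.Definition.nodes('//Subtype') AS Subtype(SubtypeChildren)
CROSS APPLY SubtypeChildren.nodes('FieldInfos/SubtypeFieldInfo') AS SubtypeFieldInfo(FieldInfo)
WHERE Name = 'MyTable'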

web.config for React and Node.js on Azure App Service

I’ve been creating a web application using React and Node.js, and decided to host it on Azure App Service. I had a lot of difficulty finding appropriate web.config rewrite rules that would serve static assets, redirect deep React Router links to index.html and send API calls to Node.js via iisnode. I finally came across this post, which got me most of the way there; I just had to adapt it to use Node.js.
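
A minimal sketch of that kind of web.config, assuming the Node.js entry point is server.js and that API routes live under /api (adjust to suit):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <!-- Hand requests for the Node.js entry point (assumed to be server.js) to iisnode -->
      <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
    </handlers>
    <rewrite>
      <rules>
        <!-- 1. Send API calls to the Node.js entry point -->
        <rule name="api" stopProcessing="true">
          <match url="^api/.*" />
          <action type="Rewrite" url="server.js" />
        </rule>
        <!-- 2. Serve static assets that exist on disk as-is -->
        <rule name="static" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <add input="{REQUEST_FILENAME}" matchType="IsFile" />
          </conditions>
          <action type="None" />
        </rule>
        <!-- 3. Everything else (deep React Router links) falls back to index.html -->
        <rule name="spa" stopProcessing="true">
          <match url=".*" />
          <action type="Rewrite" url="index.html" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>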

WebdriverIO with ChromeDriver headless without Selenium

I was excited to discover that WebdriverIO supports the use of ChromeDriver running in headless mode without the use of Selenium (because ChromeDriver supports the WebDriver protocol). Here’s a starter guide (assuming Node.js is installed):

  1. Install Chrome Web Browser
  2. Download ChromeDriver
  3. Run ChromeDriver
  4. Install WebdriverIO by running npm i webdriverio
  5. Run the following file (based on the WebdriverIO example)

const webdriverio = require('webdriverio')
const client = webdriverio.remote({
  host: "localhost",
  port: 9515,
  path: "/",
  desiredCapabilities: {
    browserName: "chrome"
  }
})
client
  .init()
  .url('https://duckduckgo.com/')
  .setValue('#search_form_input_homepage', 'WebdriverIO')
  .click('#search_button_homepage')
  .getTitle().then(function(title) {
    console.log('Title is: ' + title);
  })
  .end()

To run Chrome in headless mode, use the following config:

const client = webdriverio.remote({
  host: "localhost",
  port: 9515,
  path: "/",
  desiredCapabilities: {
    browserName: "chrome",
    chromeOptions: {
      args: ["headless", "disable-gpu"]
    }
  }
})

Creating an ArcGIS map service raster layer in PyQGIS

It’s relatively straightforward to create layers programmatically in QGIS using PyQGIS. However, the documentation isn’t clear on how to create a raster layer from an ArcGIS map service.

The list of raster providers can be found by searching for addRasterProviderDialog in qgsdatasourcemanagerdialog.cpp.

There are two mandatory parts to the data source string when specifying an ArcGIS map service: the url and the layer, which need to be space-separated. For example:

iface.addRasterLayer("url='http://gis.infrastructure.gov.au/infrastructure/rest/services/KeyFreightRoute/KFR/MapServer' layer='5'", "Key road freight routes", "arcgismapserver")

or using the constructor:

layer = QgsRasterLayer("url='http://gis.infrastructure.gov.au/infrastructure/rest/services/KeyFreightRoute/KFR/MapServer' layer='5'", "Key road freight routes", "arcgismapserver")
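
If using the constructor, the layer also needs to be added to the project before it will be displayed (a QGIS 3 sketch, run from the QGIS Python console):

from qgis.core import QgsProject

# Register the layer with the current project so it appears in the layers panel and on the map
QgsProject.instance().addMapLayer(layer)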

To understand what data source parameters are supported by the ArcGIS map service provider (AMSProvider), search for dataSource.Param in qgsamsprovider.cpp.

WebSockets are easy

I was interested in getting a minimal WebSockets example working, and it was surprisingly easy to get a demo going between Node.js and a browser.

First install the ws library:

npm install ws

Create an index.js file with the contents:

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', function(ws) {
  console.log('Starting connection');
  ws.on('message', function(message) {
    console.log('Received message: ' + message);
    ws.send('You sent ' + message);
  });
});

Start the Node.js process:

node index.js

In a browser console, send a message to the Node.js process:

var ws = new WebSocket("ws://localhost:8080");
ws.onmessage = function(e) {
    console.log(e.data);
};
// Wait until the connection is open before sending
ws.onopen = function() {
    ws.send('Hello World');
};

Re-rendering map layers

I was recently optimising the performance of a Leaflet-based map that rendered TopoJSON layers via Omnivore. The layers were a visualisation based on the ABS’s Postal Areas, and while there was only a single TopoJSON file, it resulted in a number of feature layers being displayed, each bound to its own data. The data for each feature layer could change depending on what filters were set (these filters were displayed in a left panel).

In the original implementation, whenever a change was made to a filter, the entire TopoJSON layer was removed, and then re-joined to the data set and rendered:

var dataLayer;
function renderDataLayer() {
    var prevLayer = dataLayer;
    dataLayer = L.geoJson(null, {
        filter: filter,
        style: style,
        onEachFeature: onEachFeature
    });
    omnivore
        .topojson('postal_areas.topojson', null, dataLayer)
        .on('ready', function() {
            // Remove previous layer when current layer is ready to avoid flickering
            if (prevLayer) {
                prevLayer.remove();
            }
        })
        .addTo(map);
}

Chrome’s “Record JavaScript CPU Profile” clearly showed that the code invoking Omnivore was the problem.

Changing the render function to (1) iterate over the existing layers and (2) change the style of each layer made the map feel a lot more responsive:

// Load the TopoJSON once up front; renderDataLayer() then only restyles the existing layers
var dataLayer = omnivore.topojson('postal_areas.topojson').addTo(map);
function renderDataLayer() {
    dataLayer.eachLayer(function(featureLayer) {
        featureLayer.setStyle(style(featureLayer.feature));
    });
}

One of the disadvantages is that we can no longer use the filter function on the geoJson object, since for a style change to apply the feature layer must be present (although it can be hidden). This can potentially lead to slowness when dragging or zooming the map.

A related disadvantage is that layers now have to be hidden both visually and from mouse events. Setting the layer’s opacity and fillOpacity to 0 takes care of the first (see the snippet after the CSS below), while this CSS prevents the mouse cursor from changing when hovering over a hidden layer:

/* Don't show pointer on hidden layers */
.leaflet-pane > svg path.leaflet-interactive[stroke-opacity="0"][fill-opacity="0"] {
    pointer-events: none;
}
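
Hiding a feature layer visually can then be done with setStyle (here featureLayer is one of the feature layers from dataLayer, as in the render function above):

// Hide the layer from view; the CSS above matches the resulting stroke-opacity/fill-opacity
// attributes and stops the pointer cursor appearing over it
featureLayer.setStyle({ opacity: 0, fillOpacity: 0 });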

Templated SQL

I’ve recently been working on a data migration SQL script that performs the same operations against numerous tables. While I could use dynamic SQL – where SQL statements are built as a string and then executed – I don’t particularly like the downsides:

  • It cannot be checked at compile time
  • It is difficult to read, debug and reason about
  • I’d have to use a cursor to iterate through each of the table names

Since this was a one-off migration task, I decided to render the SQL ahead of time, and wrote a little utility in Node.js to do it (a sketch appears at the end of this post). The template contained the SQL statements I wanted to execute, while the data contained a single column with the table names. For example:

data.csv

table
tblA
tblB
tblC

template.mst

ALTER TABLE {{table}} ADD col VARCHAR(50);

Output

ALTER TABLE tblA ADD col VARCHAR(50);
ALTER TABLE tblB ADD col VARCHAR(50);
ALTER TABLE tblC ADD col VARCHAR(50);

Wrap the output in a BEGIN/COMMIT TRANSACTION and we’re done.
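
For reference, a minimal sketch of the kind of Node.js utility described above, assuming the mustache package (npm install mustache) and the data.csv/template.mst file names from the example:

// render.js - reads a single-column CSV and renders the template once per row
const fs = require('fs');
const Mustache = require('mustache');

const template = fs.readFileSync('template.mst', 'utf8').trim();
const lines = fs.readFileSync('data.csv', 'utf8').trim().split(/\r?\n/);
const column = lines.shift(); // header row, e.g. "table"

const sql = lines
  .map(line => Mustache.render(template, { [column]: line.trim() }))
  .join('\n');

console.log(sql);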