Our user experience team was looking to refresh the symbology we use for the map services we host on ArcGIS Server. They wanted to see all symbols used in a map service so they could pass them on to our graphic designer to redesign the icons. While you can see each symbol by adding
/legend to the map service URL, there was no way to see all symbols on a single page. There isn't any native way in ArcGIS Server or Desktop to easily extract them either.
Instead, I wrote a Node.js script that uses Puppeteer to scrape the symbols from ArcGIS Server. The script handles map services with both single-symbol and grouped-symbol layers, and outputs only the unique symbols.
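The actual script drives Puppeteer against the legend page, but the deduplication idea can be illustrated without a browser. ArcGIS Server also exposes the legend as JSON (append `/legend?f=json` to the map service URL), returning layers whose legend entries carry base64 `imageData`; a minimal sketch of collecting the unique symbols from that shape:

```javascript
// Sketch only: assumes the legend JSON shape
// { layers: [{ legend: [{ label, imageData }] }] } returned by
// <map-service-url>/legend?f=json. The real script scrapes the HTML
// legend page with Puppeteer instead.
function uniqueSymbols(legendJson) {
  const seen = new Set();
  const unique = [];
  for (const layer of legendJson.layers || []) {
    for (const entry of layer.legend || []) {
      // Use the base64 image data as the symbol's identity, so the same
      // icon reused across layers is emitted only once.
      if (!seen.has(entry.imageData)) {
        seen.add(entry.imageData);
        unique.push({ label: entry.label, imageData: entry.imageData });
      }
    }
  }
  return unique;
}
```

The resulting array can then be rendered to a single page or written to disk for the designer.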
One of the problems we often face when diffing Analysis Services Tabular models in a collaborative development environment is that tables, columns, and relationships are often reordered. This happens because Visual Studio caches data to disk: when the Model.bim is reloaded, the order from disk is used, and any new items are appended to the end of the item array.
For example, if Alice loads Bob’s changes in Visual Studio, Alice’s cached data is used to determine the order, and then Bob’s changes are appended to the end of the array. So if Alice adds a single column, the diff will appear to contain all of Bob’s changes (reordering) as well as Alice’s changes.
To provide more accurate diffs, we can sort the item arrays deterministically (e.g. by name). When run as a git post-commit hook, this ensures that diffs reviewed in code review show only genuine changes rather than reordering noise.
I created a tiny Node.js script to perform the reordering: Tabular Normalizer.
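The normalization step can be sketched as a pure function over the Model.bim JSON. This assumes a 1200+ compatibility-level model, where `model.tables`, each table's `columns` and `measures`, and `model.relationships` are plain JSON arrays; the sort is by `name` so the output is deterministic regardless of load order:

```javascript
// Sketch of deterministic reordering for a Model.bim JSON document
// (assumption: 1200+ compatibility level, where model items are arrays).
function normalizeModel(bim) {
  const byName = (a, b) => (a.name || '').localeCompare(b.name || '');
  const model = bim.model || {};
  if (model.tables) {
    model.tables.sort(byName);
    for (const table of model.tables) {
      if (table.columns) table.columns.sort(byName);
      if (table.measures) table.measures.sort(byName);
    }
  }
  // Relationship names are auto-generated GUIDs, so sorting by name is
  // arbitrary but still deterministic, which is all the diff needs.
  if (model.relationships) model.relationships.sort(byName);
  return bim;
}
```

Wired into a post-commit hook, the file would be read with `JSON.parse`, passed through this function, and written back before amending the commit.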
Our current setup with Power BI is to host Analysis Services databases and point Power BI reports at them. Because the data model is decoupled from the report, there is currently no mechanism to identify which fields and tables in Analysis Services are used by which Power BI reports.
I’ve written a PowerShell script that will help in understanding this relationship: https://github.com/avinmathew/extract-powerbi-fields
Point the script to a directory containing Power BI .pbix or .pbit files, and the script will iterate through each of the reports, open up the
Layout file, and extract all used fields to a CSV file with three columns: Field, Table, File.
Due to the complexities of the Layout structure when filters are applied to visuals, I've taken a shortcut and used regex rather than JSON navigation to mop up any missing properties not found in the first two passes. This results in the Table column of the CSV file being blank in those cases.
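The actual script is PowerShell, but the two-tier idea can be sketched in Node.js. The property names below (`queryRef`, `Property`) are assumptions about the Layout JSON, which embeds visual configs as JSON strings: a first pass pulls fully qualified `Table.Field` references, and a regex mop-up pass catches bare field names whose owning table cannot be recovered, leaving the table blank:

```javascript
// Sketch only: assumes field references appear in the Layout text as
// "queryRef":"Table.Field" and as bare "Property":"Field" entries.
function extractFields(layoutText) {
  const rows = [];
  // First pass: fully qualified references, e.g. "queryRef":"Sales.Amount".
  for (const m of layoutText.matchAll(/"queryRef"\s*:\s*"([^".]+)\.([^"]+)"/g)) {
    if (!rows.some(r => r.table === m[1] && r.field === m[2])) {
      rows.push({ field: m[2], table: m[1] });
    }
  }
  // Mop-up pass: bare "Property":"Field" entries; the owning table is
  // unknown, so the Table column is left blank for these rows.
  for (const m of layoutText.matchAll(/"Property"\s*:\s*"([^"]+)"/g)) {
    if (!rows.some(r => r.field === m[1])) {
      rows.push({ field: m[1], table: '' });
    }
  }
  return rows;
}
```

Each row maps directly onto the Field and Table columns of the CSV, with the source filename supplying the third.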
For some of the spatial visualisations I've been working with in Power BI, I've had to create Custom Visuals, as the out-of-the-box visuals and those in AppSource don't quite hit the mark. I'm quite fond of Leaflet for map rendering. Here's how I got it working with a Power BI Custom Visual.
Create a new Custom Visual via the Command Line:
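A sketch of that command-line step, using the standard `powerbi-visuals-tools` package (the visual name here is illustrative):

```shell
# Install the Power BI Custom Visuals CLI, then scaffold a new visual.
npm install -g powerbi-visuals-tools
pbiviz new leafletMapVisual
```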