Jason Bonello

Jason's blog

Why Delegation has a role even in PowerApps: Delegation Explained

In the first blog of this series: http://tiny.cc/40p67y we looked at some tips to keep in mind when doing application development with particular focus on PowerApps. This blog will focus on an important point, more specific to PowerApps, which is worth keeping in mind during the development stage of a PowerApps application.


What is Delegation for?

Delegation pushes data processing down to the datasource and returns only the required records to the application, instead of pulling all the data into the application itself and processing it locally. However, not all PowerApps functions can be delegated, and delegation is only supported for certain tabular datasources – including SQL. If the application being developed is foreseen to work with small datasets that do not exceed 2000 rows (now or in the future), this will not be a concern. In those instances, any datasource or function can be utilized without delegation risks, because even if a function cannot be delegated, the data can still be processed locally within the application.

Difference between a delegatable and a non-delegatable function

In short, what are the implications if a function cannot be delegated? The data brought back from the datasource will be much less (a maximum of 2000 rows) than what actually exists in the database, producing incomplete results (especially for lists or galleries) or inaccurate results (especially for calculations).

Let's go through an example. Architecturally it might not be a feasible option for a realistic application, but it will help us understand the concept better. Say a developer is building a Form in PowerApps that will take the details of the end user and submit them to a SQL database. To reduce the chance of typos and allow better analysis of the gathered data, most of the dropdowns and lists within the Form are loaded from another SQL database containing master data. One of these lists is populated from the City table, allowing the user to choose from a pre-defined list of cities – and the City table in the master database has thousands of records. Without debating whether this is a good approach or not, say the developer first populates a collection directly from the master data table, so the list can be hooked to it for better performance. The SELECT query passed to the datasource as part of the Collect() function will include a TOP 2000 clause (beyond the developer's control), so only 2000 records of the full table will come back. Likewise, a filter or search run directly against the database will only consider the first 2000 records of the entire table in the datasource.
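As a minimal sketch of the scenario above (the table name '[dbo].[City]' and the collection name CityColl are illustrative assumptions, not taken from an actual application):

```
// Load the list of cities from the master data table into a collection.
ClearCollect(CityColl, '[dbo].[City]');

// Because Collect()/ClearCollect() cannot be delegated, the query sent to
// SQL is effectively a SELECT TOP 2000 – so CityColl will hold at most
// 2000 of the thousands of rows that exist in the table.
```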

Data row limit for non-delegatable functions

It is important to note that the data row limit for non-delegatable queries is 500 by default (on creation of a new application). However, this can be increased to a maximum of 2000 (at the point of writing this blog) from the App Settings:


Delegation Warnings

Whilst developing the application, you might encounter warnings highlighting that a particular action or behaviour might not work properly due to delegation. This applies where the command turns into a non-delegatable function. It is normally underlined with a light blue line, similar to the below. Here we are trying to get a dynamic list of all the products that have been inserted within this month and year:


The same will happen if we try to get the products inserted in the last 7 days:
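The two filters being attempted look along these lines (the InsertedDate column name is an assumption for this example – the point is that applying date calculations to a SQL column inside the Filter cannot be delegated):

```
// Products inserted within the current month and year – raises a
// delegation warning, as Month()/Year() on a column cannot be delegated
Filter('[dbo].[Ori Product List]',
    Month(InsertedDate) = Month(Today()) && Year(InsertedDate) = Year(Today()))

// Products inserted in the last 7 days – also flags a delegation warning
Filter('[dbo].[Ori Product List]', InsertedDate >= DateAdd(Today(), -7))
```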


The relevant control will also have a warning triangle displayed in Edit mode:


Proceeding with publishing this application will mean that whenever the data row limit is exceeded in the source, not all of the dataset will be considered for this command (and hence the result will not be exact).

Possible workarounds?

In certain cases, like the above delegation warnings, a different approach to the same command can do the trick. If, upon starting the action, we get the Start Date once, put it in a variable, and then filter against the range between that variable and today, no delegation warnings will be given:
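A sketch of this workaround (the variable, collection and column names here are assumptions):

```
// Calculate the start date once and store it in a variable
Set(varStartDate, DateAdd(Today(), -7));

// Comparing a column directly against a variable and Today() can be
// delegated, so no delegation warning is raised here
ClearCollect(RecentProductsColl,
    Filter('[dbo].[Ori Product List]',
        InsertedDate >= varStartDate && InsertedDate <= Today()))
```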


A similar solution would be to refer to a Label on the canvas that calls DateAdd(Today(), -7), and reference it in the Filter instead of the variable.

One other option to get around delegation is to load a collection at the start of the application from a large table (as long as it is controllably large – not millions of rows), looping through it a number of times (the maximum key in the datasource divided by the default data row limit of 500) to populate the collection with the entire dataset.

This can be done either directly in PowerApps or by using a SQL function similar to the following. In this table-valued function we are aiming to output a table with 3 columns. The first column is the PointerID, holding the list of times we are going to re-loop over the database to get up to the maximum row. For each PointerID we set the RangeFrom and RangeTo, taking into account the data row limit in PowerApps (which is accepted as a parameter).

Following is the SQL code for the function:

CREATE FUNCTION [dbo].[GenerateReRunTable] (@Limit INT)
RETURNS @ReRuns TABLE (
	PointerID INT
	,RangeFrom INT
	,RangeTo INT
	)
AS
BEGIN
	--Replace [dbo].[LargeTable] with your table name and the [Id] column with your key
	--Getting the minimum and maximum from the table to set the entire range
	DECLARE @RangeFrom INT = (
			SELECT MIN(L.[Id])
			FROM [dbo].[LargeTable] L
			);

	DECLARE @RangeTo INT = (
			SELECT MAX(L.[Id])
			FROM [dbo].[LargeTable] L
			);

	--Calculating the number of reruns that need to be done by dividing the range of records with the data row limit in PowerApps
	WITH MaxReRuns
	AS (
		SELECT ((@RangeTo - @RangeFrom) / @Limit) + 1 AS [MaxReRuns]
		)
	--Generating the runs including the unique PointerID for each and their start and end range to then be picked by PowerApps for filtering
	,ReRuns
	AS (
		SELECT 1 AS [PointerID]
			,@RangeFrom AS [RangeFrom]
			,CASE 
				WHEN @Limit < @RangeFrom
					THEN @RangeFrom
				ELSE @Limit
				END AS [RangeTo]

		UNION ALL

		SELECT PointerID + 1
			,[RangeFrom] + @Limit
			,[RangeTo] + @Limit
		FROM ReRuns
		WHERE PointerID + 1 <= (
				SELECT [MaxReRuns]
				FROM MaxReRuns
				)
		)
	INSERT INTO @ReRuns (PointerID, RangeFrom, RangeTo)
	SELECT PointerID
		,RangeFrom
		,RangeTo
	FROM ReRuns;

	RETURN;
END

A SQL View can then be created to get the results from the function. That way, whenever we load the view, we will get the latest result based on the updated data, for example:

CREATE VIEW [dbo].[ReRunsView]
AS
SELECT *
FROM [dbo].[GenerateReRunTable](500)

In this case, the [dbo].[LargeTable] had about 5203 rows. Hence the view would show the following output:


In PowerApps, we then make use of this code (either in the OnVisible property of the screen or in the OnSelect of a button):



ForAll('[dbo].[ReRunsView]',
    Collect(ProductListColl,
        Filter('[dbo].[Ori Product List]', ProductID >= RangeFrom && ProductID <= RangeTo)))

So, for each row in the SQL view, we collect into a PowerApps collection named ProductListColl the contents of the SQL table [dbo].[Ori Product List] in batches, filtering on the specific range for that run – bypassing the delegation row limit, which would otherwise disregard the rows above it. A plain Collect(ProductListColl, '[dbo].[Ori Product List]') outside the loop would only have populated the collection up to the maximum data row limit of 500 – an incomplete dataset. This fully populated collection also lets us refer to ProductListColl locally (as a replica of the entire [dbo].[Ori Product List] SQL table), improving the performance of data retrieval from this particular dataset.

Keep in mind, though, that in PowerApps we are still using a ForAll() loop to read the data, and hence making multiple calls to the database when dealing with large datasets – increasing the chances of encountering issues which I will be explaining in later releases of this blog series. This will also have an effect on performance (until the loop has completed and the collection is populated).

I also intend to publish a more practical example to better highlight how delegation works in PowerApps in the upcoming blog post within this series – so please stay tuned for the next release! In the meantime, feel free to exchange any ideas or comments on this topic!

Considering agility and flexibility when developing with PowerApps

Even if PowerApps truly is a low-code option for building applications, it is still important to approach application development with the best practices that are normally applied in other development technologies. For anyone who has developed applications in the past through any programming language, such tips might already be known, or may by now even seem obvious. However, in this blog series, I will be collating some guidelines which will be useful when developing a PowerApps (and probably any other) application.

Development methodology

Even if PowerApps provides a more controlled way of designing and developing an application, and an easy approach to adding more features as you go along, care should be taken when developing extensive applications composed of many screens, features and functions (in the same way as with any other application development technology). It is hard to generalize and suggest a one-size-fits-all approach for all PowerApps use cases. However, I personally found it helpful to start with a draft UX design of all screens, set up and run past the end-users/stakeholders, before applying the logic itself.

Although PowerApps is a low-code technology, some level of coding and logic is still required. In cases where this logic needs to do more than just a Submit(Form) or a simple Patch() command, thought needs to be applied in almost the same way as when developing an application with other technologies. For instance, there are cases where variables need to be reset through the OnVisible property of a screen (so that when a screen loads, the variables are fresh to be re-used). There are cases where a Refresh() needs to be done – for example, right after an update to a SQL database, if the latest added record needs to be shown in a Gallery or other list. Such commands need to be implemented based on the design and user journey that exists in the application.
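As a small sketch of both cases (the variable, table and control names here are illustrative assumptions):

```
// OnVisible of the screen – reset variables so each visit starts fresh
Set(varSelectedCity, Blank());
Set(varFormSubmitted, false);

// OnSelect of a save button – patch the new record, then refresh the
// datasource so a Gallery bound to it shows the newly added record
Patch('[dbo].[Orders]', Defaults('[dbo].[Orders]'), {OrderName: TextInput1.Text});
Refresh('[dbo].[Orders]')
```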

Being too agile in developing a PowerApps application might increase the risk of technical debt. One additional step in the middle of the user journey, requested when the basis of the application and its logic have been applied, might mean that what has been properly developed and worked, is now to obsolete from a logic perspective. Hence rework needs to be applied, increasing risks of mistakes, bugs or flaws.

Likewise, what has been tested and approved before, now needs to be retested from a functional, UX state and unit testing point of view. This means that one slight addition of a feature (maybe another update or just a change of a background colour of a label/ button depending on a validation/ boolean check), could trigger far more indirect effort to ensure the application still satisfies scope and quality.


Avoiding hardcoding

Like in any type of coding, hardcoding when building a PowerApps application is not best practice and is ideally avoided. Through hardcoding, the assumption is made during the development stage that the application code will remain static and that any values used or referred to within it are not going to change. For example, a filter is done on a hardcoded value of an organization name directly within the code, like Filter(OrganizationTable, OrganizationName = “CurrentName”). A change in the name will require a revision of the code, whereas if the check were done using the record's key in the database or a populated variable (rather than the value itself), the change of value would only be required within the database.
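The contrast might look like this (the OrganizationID column and varCurrentOrgID variable are assumed names for illustration):

```
// Hardcoded – breaks as soon as the organization is renamed
Filter(OrganizationTable, OrganizationName = "CurrentName")

// Key/variable driven – a rename in the database needs no code change
Filter(OrganizationTable, OrganizationID = varCurrentOrgID)
```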

In a similar manner it is important to adapt this logic approach to any areas where potential additions or changes in the future might occur. So for instance, let’s say an If condition is going to be done within the program to only enable a button when a value is positive. And let’s assume the positive and negative value are being picked from a junk dimension or any other table. It might be that at the time of developing the application, there are only two values present: a positive and a negative value – i.e. “Yes” and “No”. If the button needs to be only visible when the value is positive, there are two ways of writing the If condition:

· If(Variable <> “No”, DisplayMode.Edit, DisplayMode.Disabled)

· If(Variable = “Yes”, DisplayMode.Edit, DisplayMode.Disabled)

At the point of development both commands return the same result. However, in the event that a neutral value – let's say “Maybe” – is added in the future, and some conditional checks might hence include any of “Yes”, “Maybe” and “No”, the results of the conditional check might be skewed if we had applied the first command. This is because the button will be enabled both when the value is “Yes” and when it is “Maybe”, whereas the button was required to be enabled only on “Yes”. Now, when the same developer maintains both the application and the database, and the number of screens is minimal or the coding logic limited, one might argue that the command can easily be changed within the application and a new version published. However, as best practice, it is always advisable to develop the application with the scope of making minimal changes to the code, as a simple change might have knock-on effects on other functionality, in addition to the potential need to re-test the entire application (especially if we are dealing with a sizeable application).

Commenting code

As with any other development tool and language, it is good practice to comment the code applied in the different screens.

With a PowerApps application (like with other technologies) having a high chance of being developed by one person and maintained (now or in the future) by others, troubleshooting issues or even judging the effect of a future change becomes harder without commented code. PowerApps comments start with //, and a simple comment like “//Refreshing datasource to get latest data in Gallery GalleryName post-patch to its datasource DataSourceName” might be enough for a different developer to realise why that command should or should not be changed (and for the tester to know what to focus on if it is changed after all).

Testing, testing, testing

A developed application is going to be used by different users, and where the application is more than a two-screen application, different user journeys might exist. The logic path through the application may also differ depending on user behaviour, especially where If…Else conditions are used. In such scenarios, it is even more important to ensure the application has been tested over and over, preferably by different persons (to ensure different perspectives and behaviours), increasing the chances of covering as many user journeys and logic paths as possible. As with any other technology, the use of testing logs is suggested to keep track of the flaws, bugs or improvements noted, recording where possible the pattern for reproducing each error.

In cases where different navigation options exist (like back and forward from each page in addition to a normal menu), one should not underestimate the importance of re-testing following successful runs, trying different combinations or patterns, especially if variables are being passed from one screen to the other.

In a following blog within this series, to be released soon, we will continue to cover more points related to PowerApps and application development – also highlighting PowerApps-specific pitfalls worth avoiding! As always, feel free to reach out with any feedback or clarification on any of the above points, either through our blog itself or on Twitter: @bonello_jason

Getting user list from Office 365 in PowerApps


In the previous blog, we have gone through an overview of where PowerApps could be used and also some ideas on APIs that can be connected to it: https://bit.ly/2GPjghA

In this blog, we are going to go in some detail on the connection of PowerApps to Office 365 – and some examples of how this integration could be beneficial in application development through PowerApps.

Office 365 and PowerApps

One way through which PowerApps can be accessed is through an Office 365 subscription. Once logged in, you will see PowerApps available as one of the apps within this platform.


Yet the relationship between Office 365 and PowerApps does not stop there. When building an application in PowerApps – which is going to be accessed in an Office 365 platform – there are various commands that may be used to add features to the developed application.

For instance, presenting a dynamic list of users in a dropdown based on the Office 365 directory is easy – adding “Office 365 Users” from within the data sources list enables this.

Once the connection has been added, two elements and a screen in PowerApps should be enough to provide the option to choose one profile from Office 365 directory.


A ComboBox can be added from the Controls and, in this case, we added a Label (which, although it does not affect the functionality itself, gives a description). A ComboBox is like a normal Dropdown (which is classified as a separate control in PowerApps) but, amongst other slight differences, allows the selection of more than one item from the item list. It is worth saying that the allow-multiple-selection option can still be switched off. I still prefer it over a Dropdown because it allows the search option (which again can be enabled or disabled), applying an interactive filter to narrow down the list of dropdown items.

“User:” has been typed in the Text of Label1, and the ComboBox placed just beneath it on the screen. What the user inputs as a search text within the ComboBox when using the application can be referenced through its SearchText property. In the Items of ComboBox1 – which determines what the user is presented with when the dropdown is clicked – the connector is called through the Office365Users command. The SearchUser operation is then added and, as the searchTerm (which can be used as part of the SearchUser operation to filter the list), the ComboBox's SearchText is used. The complete and proper syntax can be seen in the next screenshot.
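In other words, the Items property of the ComboBox ends up along these lines:

```
// Items of ComboBox1 – let Office 365 filter the directory on the search text
Office365Users.SearchUser({searchTerm: ComboBox1.SearchText})
```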


It is also important to specify which Office 365 fields are going to be used in the list and for the search – in this example it was the DisplayName.


That is basically enough to present the PowerApps user with a dropdown to filter and select a profile from. When running the application, the PowerApps user will start typing the display name that s/he intends to find, and the list is narrowed down.


The user can then select the appropriate display name.


What is also useful, is the ability to add other Labels, in order to get more information from the selected Office 365 profile.

And this can be done with a simple syntax like the following:
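For example, the Text property of Label2 could be set to:

```
// Show the mail nickname of the profile selected in the ComboBox
ComboBox1.Selected.MailNickname
```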


This way, every time the selected profile is changed, the value of Label2 will change automatically depending on the selected value.

Instead of MailNickname, there is a whole list of attributes that can be extracted from the Office 365 profile, like City, Country, Id, Mail, contact numbers, etc.

Like the SearchUser() operation, there are many other operations that can be used in conjunction with the Office365Users connector.
One other notable operation is Office365Users.UserProfileV2(), which enables retrieval of data by passing the profile Id or User Principal Name, returning a whole list of information including skills, past projects, responsibilities, schools, interests and other such fields available in the Office 365 user profile:


This is an example of the command that can be used for this:
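A sketch of such a call – remembering that ProfileId is a label, and picking jobTitle out of the returned profile purely as an illustrative field:

```
// Pass the Id held in the ProfileId label to get the extended profile,
// then read a single field from the result
Office365Users.UserProfileV2(ProfileId.Text).jobTitle
```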


ProfileId, in the above, is a label that uses a command to get the Id – which we will explain in the next section.

User identification

Handling user logins and presenting different screens to different users within the same application is also easy.

In order to identify a user that has logged in and display her/his details, another 3 elements within a screen should be enough.

One of the elements could be an Image, added through the Media tab, with the User() function used to get back the image of the user (as updated in her/his profile).


Two Labels can then be added and placed under the image, one displaying the name and the other the email with this syntax for each label (in Text property):
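The syntax for the two Labels (and, for completeness, the Image control above them) is simply:

```
// Image property of the Image control – the logged-in user's photo
User().Image

// Text properties of the two Labels – name and email of the logged-in user
User().FullName
User().Email
```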



Once the PowerApps user logs in, the details will show up on this screen. And similarly, the email could be obtained and used to filter out/ give access to certain controls or screens within the application.

It is also worth noting that other functions like Office365Users.MyProfileV2() can be used to get a whole list of other details for the logged user – if the user has logged in through the Office 365 account.

Getting back to the ProfileId label mentioned in the end of the previous section, the following command is being used to get the Id:
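That command is along these lines:

```
// Id of the currently logged-in user, retrieved through their own profile
Office365Users.MyProfileV2().id
```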


It is important to keep in mind that even if there are various ways of retrieving such information, ensuring the use of a command that guarantees the retrieval of a record using a unique key is imperative. For instance, this command also returns an Id:
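That alternative command is likely along these lines:

```
// Searches the directory for the logged-in user's full name and takes
// the first match – NOT guaranteed to be the right profile
First(Office365Users.SearchUser({searchTerm: User().FullName})).Id
```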


However, there can be more than one user in an active directory with the same full name. With this command we are getting the first profile returned and taking its Id – which does not always guarantee the proper profile Id, especially if another user exists with the same name as the logged-in user.


This shows how easy it is to develop such a capability in a PowerApps solution. As pinpointed already, Office 365 and PowerApps offer other features that can be maximized through this connection and integration. In a similar manner, PowerApps can produce other positive results when connected and integrated with other relevant data sources and utilized as a user interface through application development!

Why PowerApps?

Having had the opportunity to work with the development and delivery of PowerApps applications on different projects, I feel that this technology provides a great modern solution to fill in business technology gaps, be a companion to software-as-a-service models and serve as an integrated frontend to various enterprise architectures.

Uses of PowerApps

Through a PowerApps application, a user or business can be presented with a simple, user-friendly interface to input or maintain data in different modes and for various reasons. Such applications can be designed to be used on mobile, tablet or desktop.

PowerApps is another tool in the powerful Microsoft Power Platform. Most of us have already had the chance to experience the benefits of Power BI within the reporting, dashboard and data visualization spectrum. When building reports through Power BI, it is normally essential to use Power Query to cleanse, standardize and prepare the data to a reportable state prior to modelling it and starting the development of the report. The effort required in this stage varies depending on the state and consistency of the dataset or database used to build the reports – and this is where another benefit of PowerApps can be clearly experienced.

PowerApps can help ensure that data is kept in a consistent, organized and meaningful way, as opposed to leaving data entry validation in the hands of the user and applying endless data cleansing and preparation phases after its commitment. This is hence also beneficial to any other data architecture, such as Data Warehouses. To highlight another benefit: building an application with this technology takes much less time than coding such an application through managed coding languages. Likewise, I would say it also replaces the need to develop through VBA in Excel – presenting a more scalable, modern and intuitive alternative.

In this blog I am going to particularly focus on connections to data sources within PowerApps from a general perspective.

Data sources in PowerApps

PowerApps allows the connection to multiple data sources and APIs. It can hence serve as a common handler of the data in different systems being used simultaneously within an organization.

In order to include a data source in your application, the following few clicks are needed:

1) Just go to View on the top ribbon

2) Then click Data sources

3) A column will show on the right, where the New Connection button can be used to add the required data source.


4) And voila! There’s a whole list of connections to choose from, which range from other Microsoft products like Azure and Dynamics 365 to others like Google Calendar; social media favourites like Twitter, Instagram and Facebook; Marketing automation platforms like MailChimp; CRMs like Salesforce; Zendesk and so on.

Once the login details for the connection are added, that data source can be used in the PowerApps solution in many ways – as a data source, data processor or data destination.

It is worth keeping in mind that some API connectors might only be available as Premium (which thus depends on the price tier/plan that you are on). Such connectors are clearly identifiable by a gem icon beside them in the list.

Some other connectors have a bulb icon beside them. This shows that the feature is experimental or in preview. It serves as a warning to developers, when designing and developing their solutions, that such connectors are still subject to change and that they should be cautious when basing their system on them.

Some connectors might also require some other details during development to activate their usage within the application, ranging from login details to Azure Keys etc. Yet once the application is published these connectors are treated as part of the application and any user granted access to the application will inherit the right to use these connectors as part of the application itself (without the need for her/him to have a relevant account for each and every connector). Upon loading the application the first time, the users will be prompted by a one-time message to have access to the connectors within:


Once “Allow” is clicked, the users will be able to use the connectors as part of the application itself (with no other access outside the application being affected).

Azure services with PowerApps

When acting as a frontend to Azure services, PowerApps can provide a cloud-hosted, scalable solution, facilitating operations which might otherwise have required technical expertise.

As just one example of this, Cognitive Services within Azure offers different AI solutions that can be put into practice at a business level. Face API is one of these services; it can be used for face verification and detection whilst integrating emotion recognition. Thus, by adding Face API as a data source in PowerApps (along with some other technologies like Microsoft Flow), one can easily provide a front end to take a person's face image, with buttons that trigger the analysis of those images through Face API and Microsoft Flow. Additionally, the results can be retained immediately and displayed in the same PowerApps solution. Azure Storage could also be added as part of the flow processes in order to retain a copy of the analyzed pictures – if required.

The same applies to the utilization of other Azure services in conjunction with PowerApps.

The fact that PowerApps allows ease of input through the Camera, Barcode Scanner and Microphone makes this technology versatile in its data inputting methods and user interface adaptations. It hence also provides the opportunity to utilize the wide breadth of Azure Services that correspond with these data input methods.

To sum up, this gives some ideas of PowerApps usage and examples of applications that can be developed through it. It is also worth noting that PowerApps also allows connection to custom APIs that have been produced through tailor-made development. In these cases, it can serve as a potential front end, especially where expertise in the technical languages used for UI development is lacking.

ArcGIS Maps for Power BI – Further capabilities and other information

The ArcGIS Maps for Power BI is also useful when used in conjunction with other visualizations, through Interactions. When selecting any location feature on the map, the filters will be communicated to the other visualizations which will in turn consider the same filtering.

The perfect map blends art and science into an effective tool of visual communication.

In the previous blog posts (which can be accessed from the following links: https://bit.ly/2PnIgR3 and https://bit.ly/2PnegWk), we have gone through the ArcGIS Maps for Power BI tool and had an overview of its use.

To wrap up this series, we will build on the description of most of the features in the previous posts, available when editing the ArcGIS Maps for Power BI.


The remaining features currently available in this tool will be shortly described to wrap up the features section:

  • Pins – These provide the option of adding locations that are important to you on the map. For instance, if you are analyzing supply chain and distribution, the warehouse/s might be useful as pin/s, since these do not change no matter the changing dataset and need to be considered in most analysis.
  • Drive time – Once pins are set-up, drive time tool could be used to highlight distances within specified radius or drive time. For instance, if you are analyzing orders that are being delivered late in a particular area, the pin and drive time tools could help in analyzing the possibility of opening a new store in a nearby area (in comparison to the coverage it will offer to the client sites that are having the orders delivered late).
  • Reference layers – These can be either demographic layers or ArcGIS reference layers. The out-of-the-box demographic layers are US-based and 10 of them are provided freely, giving data such as average household incomes. ArcGIS layers are data published by other authoritative partners, customers and users of ArcGIS Online, such as weather-service-related data. Such data can be loaded and used in Power BI without the need to spend time collecting regional data, cleaning and modelling it. For instance, an insurance company using Power BI that needs to analyze the impact of a storm on the insurance policies within an area might utilize ArcGIS storm surge information alongside its own dataset to predict the effect on policies.
  • Infographics – This tool provides information such as population and age that could also be used alongside the data. So, taking the same insurance use case above, this information could help identify how many people (irrelevant whether customers or not) are likely to be affected. This might hence be a good indication for selling and marketing departments.

Different selection options

There are different select methods available when using ArcGIS Maps for Power BI. These can be accessed through the map visual beneath the zoom in and out buttons (+ and -), as highlighted below:


  • Select individual location – This enables the selection of only a single location at a time. Once another location is clicked, the previously selected one is automatically unselected and the last selection retained, with the reporting data refreshed and re-filtered accordingly.
  • Select multiple location – This enables the selection of multiple locations by drawing a box over multiple locations that need to be selected simultaneously. This provide a good way of comparing the data of various locations.
  • Select locations using reference layer – This enables selection by area on a reference layer, or within a defined radius/drive time from a specific location. This selection method is only available if the map contains a reference layer or a drive-time area layer; if the map contains both, both options will be available to choose from. The reference-layer selection tool selects all map features within the reference polygon – for instance, if a US state is selected, all location-based features in that state on the data layer will be selected on the map. If Drive-time areas is chosen, data features are selected within the defined area.
  • Find similar – This recently added feature helps the user swiftly pinpoint locations whose attributes are similar to those of the selected locations on the map (through the data layer only). For instance, in a map showing GDP per country, Find similar allows easy identification of the countries/locations with the same or closest GDP. The numerical field to be used as the basis for comparison (GDP in this case) needs to be dragged into the Find Similar data well.
    Then, after choosing the location to compare against – in this case the US – the GDP of the other locations is compared and the most similar GDPs are ranked. In this case, Canada was ranked closest for that year when filtering on the North and South American regions, with the Dominican Republic marked as the 10th most similar.
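Conceptually, Find similar is a ranking by closeness of the chosen numerical field. A minimal Python sketch, using rough illustrative GDP figures (in billions USD, not the dataset behind the map above):

```python
# Illustrative GDP figures for a handful of countries.
gdp = {
    "US": 20500,
    "Canada": 1710,
    "Brazil": 1870,
    "Mexico": 1220,
    "Dominican Republic": 86,
}

def find_similar(target, values, top_n=3):
    """Rank the other locations by how close their value is to the target's."""
    base = values[target]
    others = {k: v for k, v in values.items() if k != target}
    return sorted(others, key=lambda k: abs(others[k] - base))[:top_n]

print(find_similar("US", gdp))  # ['Brazil', 'Canada', 'Mexico']
```

With these particular figures, Brazil edges out Canada; the ranking naturally depends on the dataset and year being analyzed.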

Basic vs Plus subscriber/ ArcGIS account

By default, the ArcGIS Maps for Power BI tool is available in its Basic version. This brings some limitations when compared to the same tool accessed through the Plus subscription.

To log in or sign up for a Plus subscription, or to connect to ArcGIS directly, use the yellow circle with the plus sign on the ArcGIS Map for Power BI – as per below:


But what exactly differs between the Basic and Plus subscriptions? Following is a summarized comparison of the main features:

ArcGIS Maps for Power BI features – Free (incorporated within Power BI at no cost) vs Plus subscription (monthly fee per user – currently $5 pp):

  • Demographics – Free: US demographics only. Plus: global and richer demographics, including age, behaviours, healthcare etc.
  • Maps and data – Free: public maps. Plus: validated, ready-to-use data, organized and collected from reliable and trustworthy sources.
  • Map and visualize locations – Free: 1.5K features (geocoded locations) per map and 100K features per month. Plus: 5K features per map and 1 million features per month.
  • Spatial analysis – Free: heatmaps, drive time etc. Plus: the same spatial analysis as the Basic version.
  • Basemaps – Free: 4 basic basemaps. Plus: the 4 basic basemaps + 8 others, including satellite imagery, oceans and terrain.

One point that remains evident from the above is the limit that still exists on the features that can be included in a map. Hence, when geocoding locations or street addresses, only the first 1,500 will be considered per map in Basic, while 5,000 are taken into account with the Plus subscription. This cap mainly exists to retain a good user experience for both visualization and performance, yet it is something to keep in mind, especially when targeting large datasets.
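As a rough sanity check before publishing, one can compare a dataset's row count against these caps. The helper below is hypothetical; only the 1,500 and 5,000 per-map figures come from the comparison above:

```python
# Documented per-map feature caps for ArcGIS Maps for Power BI.
CAPS = {"basic": 1500, "plus": 5000}

def check_feature_cap(row_count, tier="basic"):
    """Return (rows that would be mapped, rows that would be dropped)."""
    cap = CAPS[tier]
    mapped = min(row_count, cap)
    return mapped, row_count - mapped

print(check_feature_cap(4200, "basic"))  # (1500, 2700)
print(check_feature_cap(4200, "plus"))   # (4200, 0)
```

If the second number is non-zero, some locations will silently be left off the map, so either aggregate the data first or consider a different mapping tool.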

However, this is not a restriction for Power BI overall, since it offers other mapping tools that can be used with large datasets, such as Mapbox. The different mapping tools present different capabilities and features that may appeal to different users depending on the use case, scope, complexity and functionality sought.

Performance enhancement and other improvements

In one of the latest updates (September 2018), the performance of ArcGIS Maps for Power BI was improved drastically and is cited to be about 50% better. This is most visible when navigating the map or interacting with it, and it makes live presentations of such maps less nerve-wracking. The improvement does not only apply to new maps or new reports; it is also experienced by existing reports already using an ArcGIS Map for Power BI.

In December 2018, further improvements were released for ArcGIS Maps for Power BI. In addition to the Find Similar feature referred to above, other features were enhanced covering both the Basic and Plus subscriptions. One improvement worth highlighting is the boundary data in the Location Type (described in the previous blog in this series): in this release, more boundary data was made available to help increase accuracy in data analysis. Likewise, Plus subscribers gained access to a lot more curated infographic data for various demographics.

Concluding remarks

All of this shows that GIS and mapping tools are being given the required attention by Power BI and Microsoft in general. With their increased popularity in recent years, they have become essential in business decision making, and through proper data visualization the data becomes much more meaningful. This ties in with what Dr. Keith Harries suggests: “The perfect map blends art and science into an effective tool of visual communication.” And thanks to Power BI mapping tools, this is being enabled!

ArcGIS Maps for Power BI – Editing and available features

This blog post builds up on the previous post that covered an introduction to ArcGIS Maps for Power BI – which can be re-accessed through the following link: https://bit.ly/2PnIgR3

We will start by looking at the editing features that are available within ArcGIS Maps for Power BI.

Jack Dangermond said, “the application of GIS is limited only by the imagination of those who use it.” So, let’s try to get a glimpse of the range of features, with the hope of being inspired and applying them in an imaginative way.

Editing an ArcGIS map in PowerBI

Clicking Edit on the map visualization allows you to change various map contents through different tabs and tools – as shown below:


Following is some information on each of the tools:

  • Basemap – This lets you choose from four basic basemaps (unless you are a Plus subscriber or have an ArcGIS account, which gives access to more basemap options). Through such changes, one can ensure that the focus remains on the data rather than on visual distractions within the map. If different basemaps use different projections, you may need to save, close and re-open the report (or switch between report pages) for the map features to reflect the new projection. In such cases, the Attention icon will likely show, stating that the basemap will be updated the next time the visual loads.


  • Location Type – This allows the locations to be shown either as points or as boundaries, as can be seen below. It can be specified whether the data pertains to one country or many. For boundaries, the location type can be changed from a dropdown list including countries, districts, postcode areas, regions etc. (the available list changes depending on the country selected). The method used for matching locations can be either:
    • Closest match – This can be used when the exact spelling cannot be guaranteed (with errors like Frence), when there might be a mismatch with the spelling in Esri's data services, or when an exact-match search would not add all your data to the map. It can obviously lead to inaccuracy and wrong assumptions, so it is not the best option where data accuracy matters.
    • Exact match – This is the most accurate method and should be used when boundaries are defined by codes or abbreviations, or when one is sure that the dataset spelling is correct and matches Esri's data services.

      The Location Type might be a good place to start if a “Failed to locate [x] features” error is encountered, as follows:

      This might be a result of incorrect settings in the Location Type – for example, adding a list of Brazilian states without setting Locations are in one country, choosing Brazil and changing the Location Type to States. The error might also result from an improper match between a value in the dataset and Esri’s data services (or a spelling mistake).
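The difference between the two matching methods can be illustrated with a small Python sketch, using difflib as a stand-in for whatever fuzzy matching Esri's services actually apply (the reference list here is hypothetical):

```python
import difflib

# Hypothetical reference list, standing in for Esri's data services.
known_countries = ["France", "Germany", "Greece", "Brazil", "Ireland"]

def match_location(value, mode="exact"):
    """Return the matched name, or None if nothing qualifies."""
    if mode == "exact":
        return value if value in known_countries else None
    # "Closest match": tolerate small spelling errors such as "Frence".
    hits = difflib.get_close_matches(value, known_countries, n=1, cutoff=0.8)
    return hits[0] if hits else None

print(match_location("Frence", mode="exact"))    # None - would fail to locate
print(match_location("Frence", mode="closest"))  # France
```

Exact match rejects the misspelled value outright (the "failed to locate" scenario), while closest match maps it to a plausible candidate at the cost of occasionally guessing wrong.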

  • Map themes – This allows a change in the style of the map; one can choose from location only, heatmaps or clustering (the last two are only available for point layers, i.e. when Points is selected in the Location Type). Through the clustering option, one can group individual location points into larger circular clusters that fall within a cluster radius – giving a high-level view and then the ability to drill down into each region. If heatmaps are chosen, any values in the Size or Color field wells are ignored and tooltips are not available.
    If numerical data exists in the Size and/or Color field wells, three further map themes become available: Size (showing bubbles of different sizes based on the measure’s value), Color, and Size & Color:
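The cluster-radius grouping described above can be sketched with a naive greedy algorithm; ArcGIS's actual clustering is more sophisticated, so this only illustrates the concept (points and radius are made up):

```python
import math

def cluster_points(points, radius):
    """Greedy radius clustering: each point joins the first cluster
    whose centre is within `radius`, otherwise it starts a new cluster."""
    clusters = []  # list of (centre, members)
    for p in points:
        for centre, members in clusters:
            if math.dist(centre, p) <= radius:
                members.append(p)
                break
        else:
            clusters.append((p, [p]))
    return clusters

pts = [(0, 0), (0.5, 0.5), (10, 10), (10.2, 9.9), (25, 25)]
clusters = cluster_points(pts, radius=2)
print([len(m) for _, m in clusters])  # [2, 2, 1]
```

Nearby points collapse into one circle with a count, which is exactly the high-level view the clustering theme gives before drilling down.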

  • Symbol style – This provides options to make changes in appearance that are immediately reflected in the map, including symbol shape and size, colours (and the colour ramp), transparency level and classification types. The available options in the Symbol Style screen depend on the selected map theme and the nature of the data being analyzed. For example, for heatmaps, formatting options like Transparency and Area of Influence will be available.

    Classification types provide different options to classify the data and define the way ArcGIS creates the classes from your data, namely:
    • Natural Breaks (sometimes referred to as Jenks) – Data values are clustered into naturally occurring categories, with class breaks placed where gaps between clusters exist. This method is suitable for unevenly distributed data. For instance, streets are classified based on their length (short vs long), or cities based on their size (small vs large).
    • Equal Intervals – Value ranges are equal in size for every category: the entire range of data values (max − min) is divided equally into the chosen number of categories. One specifies the number of classes, and ArcGIS Maps for Power BI automatically determines how to divide the data.
    • Quantile – Data is classified into a certain number of categories with an equal number of units in each. This might be suitable for evenly distributed data.
    • Standard Deviation – Shows how much a feature’s attribute value varies from the mean, whether above or below it. This can be a good way of pointing out the extremes.
    • Manual Breaks – Enables one to define custom classes, class breaks and ranges.
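Three of these classification types are simple enough to reproduce in a few lines of Python, which can help in sanity-checking which class a value should fall into (the sample data is made up):

```python
import statistics

data = [2, 3, 4, 5, 8, 9, 15, 16, 18, 40]

def equal_interval_breaks(values, n_classes):
    """Split the full value range (max - min) into equal-width classes."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_classes
    return [lo + width * i for i in range(1, n_classes)]

def quantile_breaks(values, n_classes):
    """Place breaks so each class holds (roughly) the same number of values."""
    s = sorted(values)
    step = len(s) / n_classes
    return [s[int(step * i)] for i in range(1, n_classes)]

def stddev_breaks(values):
    """Place breaks at one standard deviation either side of the mean."""
    m, sd = statistics.mean(values), statistics.pstdev(values)
    return [m - sd, m, m + sd]

print(equal_interval_breaks(data, 4))  # [11.5, 21.0, 30.5]
print(quantile_breaks(data, 4))        # [4, 9, 16]
print(stddev_breaks(data))
```

Note how the outlier (40) stretches the equal-interval classes while barely moving the quantile breaks; that is exactly why the choice of classification type matters for skewed data (Natural Breaks needs an optimization step and is omitted here).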

As one can see, such features and functions not only offer the creator different options to give context to the geographic data visualization, but also help the end user understand the data better and enhance comparison. I hope the information in this post was useful in some way or another. In the next blog, we will continue building further on this. Stay mapped!

What is ArcGIS Maps for Power BI – and why you want to use this

In this and the following blog posts over the coming weeks, I’ll be compiling information gathered from research into a Power BI capability that has grown in popularity and usage: ArcGIS Maps.

As Jimmy Buffett said, “Without geography you're nowhere” – so hopefully this capability will help us get somewhere.

So… what is ArcGIS?

In a nutshell, ArcGIS is a software-as-a-service platform that allows the creation, discovery, management, sharing, analysis and visualization of location-based and geographic data through maps, applications and reports. Esri (Environmental Systems Research Institute), which developed, hosts and administers ArcGIS, has partnered with Microsoft, and as of 2016 ArcGIS Maps for Power BI is included as a built-in feature in Power BI to provide enhanced mapping capabilities. GIS (Geographic Information Systems) analysis is beneficial to many domains and industries including, but not limited to, business, urban planning, crime mapping, resource optimization, transport planning/management (public and private), education, assessment of natural disasters/hazards, healthcare etc.

What is possible with ArcGIS?

The following is just a glimpse of some simple map-related visuals that may be acquired through ArcGIS Maps for Power BI.


How does ArcGIS differ from the previous map visualizations?

ArcGIS has various features and capabilities, which will be summarized and explained further in upcoming blogs. Prior to the introduction of ArcGIS Maps for Power BI, Power BI had limited mapping capabilities through its own map visual, especially when compared to other platforms such as Tableau. Apart from the enhanced visual appearance (such as theme changes, the ability to add layers of demographic data etc.), this tool allows location-based data to be compared, mainly through the selection methods for points on the map (the different selection methods are also explained further on). It helps provide context to geographic data and supports accurate spatial analysis. It’s worth keeping in mind that ArcGIS is intended to display and interact with data that is managed by Power BI; it is not intended to provide data enrichment in the form of adding data from ArcGIS Maps for Power BI itself to your own business data.

You excited? Let’s do this…

Enabling ArcGIS in Power BI service

If you’re using Power BI through app.powerbi.com, you might need to enable ArcGIS Maps for Power BI. This can be done by clicking the cog icon in the top-right section of the menu bar and opening Settings. Under the General tab, select the ArcGIS Maps for Power BI checkbox.


Power BI will have to be restarted after that, and the ArcGIS Maps for Power BI icon will then be available in the Visualizations pane when opening a report in Editing view.

When using Power BI Desktop, ArcGIS Maps is normally enabled by default; however, an option to enable or disable this capability is still available under the Security tab in Options & Settings > Options, as follows:


ArcGIS Maps is unavailable in Power BI Embedded edition and Power BI service embedding (PowerBI.com).

Creating an ArcGIS map in PowerBI

To add a map to the report, use the ArcGIS Map for Power BI Visualization from the visualization pane – icon shown below:


This will add a new ArcGIS Maps for Power BI visualization on the report page.


The dataset in the Fields pane can then be used to populate the field wells and add data to the report.

The following are the location field wells that can be used:

  • Location – Values dragged into this field well should be location-based, such as a single-field address, postcode or country name, to display the features on the map.
  • Latitude – Values dragged here define part of a map coordinate (the north-south position). If this is used, the corresponding longitude field must also be used. In the Modeling tab, a data category with the same name also exists to mark the column in the dataset.
  • Longitude – Values dragged here define part of a map coordinate (the east-west position). If this is used, the corresponding latitude field must also be used. The Longitude data category also exists.
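Before mapping, it can be worth validating that coordinate columns are within range, since out-of-range (or accidentally swapped) values will not plot where expected. A minimal sketch with hypothetical rows:

```python
def valid_coordinate(lat, lon):
    """Latitude must be in [-90, 90], longitude in [-180, 180]."""
    return -90 <= lat <= 90 and -180 <= lon <= 180

# Hypothetical rows: (city, latitude, longitude).
rows = [
    ("Valletta", 35.90, 14.51),
    ("Sydney", -33.87, 151.21),
    ("Bad row", 95.0, 14.51),    # latitude out of range
    ("Swapped?", 14.51, 35.90),  # in range, but possibly swapped values
]

bad = [city for city, lat, lon in rows if not valid_coordinate(lat, lon)]
print(bad)  # ['Bad row']
```

Note that swapped latitude/longitude pairs often still pass a range check, so a spot check of a few plotted points on the map remains a good habit.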

The same data categorization can also be applied to City, Country, Province/State and other data to be treated as location-based. When done, a globe icon shows next to the field's name, indicating that Power BI has marked the field as containing geographic information – as follows:


For accurate plotting of data on the map, it is suggested to use latitude and longitude values if they are available. If the Location, Latitude and Longitude field wells are all filled, ArcGIS Maps for Power BI will only consider Latitude and Longitude to map the data, and the Location field will be ignored. The country will, however, still be considered for the information shown when hovering over a mapped point – as per below:


In addition to these, the Size, Color, Time and Tooltips field wells can be used as non-location fields that affect how features look on the map. As best practice, it is suggested to add the location-based data first, so the points render on the map, before adding these.

  • Size – Numerical values can be added to this field well to render map features with proportional sizes. For instance, when analyzing earthquake data, dragging the magnitude value here will reflect the scale of each event. Similarly, by dragging a population field, the location symbols on the map will automatically be rendered in sizes proportional to the population value.
  • Color – This field well can be used for both numerical and categorical values to differentiate the map features according to a colour ramp. For instance, a light-to-dark colour ramp helps identify low-to-high numerical values like sold quantity, whilst different colours can be used for categorical data like brand preference. Size and Color can be used together in the same map, for either the same or different attributes.
  • Time – This field well can be used to animate the features on the map for temporal data fields, according to date, time, or date and time. It helps in understanding how measure values evolved over time, and it prompts the time slider.
  • Tooltips – As for other visualizations, this can be used to define the information that appears in tooltips, and multiple fields can be used here.
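The light-to-dark ramp for numerical values in the Color field well amounts to binning each value by where it sits in the data's range. A small illustrative sketch (the shade names are hypothetical, not ArcGIS terminology):

```python
def ramp_shade(value, lo, hi,
               shades=("lightest", "light", "medium", "dark", "darkest")):
    """Map a numeric value onto a light-to-dark colour ramp bin."""
    if hi == lo:
        return shades[0]
    frac = (value - lo) / (hi - lo)
    idx = min(int(frac * len(shades)), len(shades) - 1)
    return shades[idx]

# Hypothetical sold quantities per location.
sold_qty = [5, 40, 75, 120, 200]
lo, hi = min(sold_qty), max(sold_qty)
print([ramp_shade(q, lo, hi) for q in sold_qty])
# ['lightest', 'lightest', 'light', 'medium', 'darkest']
```

This equal-width binning corresponds to the Equal Intervals classification described in the Symbol style section; the other classification types would place the bin boundaries differently.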

It seems quite straightforward, doesn’t it? Well, in the next blog post we will go deeper into how to edit the ArcGIS map and how to utilize some of the beneficial functions this tool offers!