
JUL/AUG 2020
codemag.com - THE LEADING INDEPENDENT DEVELOPER MAGAZINE - US $8.95 Can $11.95

ASP.NET Core, Vue.js, Amazon Web Services, Power Query

Transform Your ASP.NET Core API into AWS Lambda Functions

Exploring What's New in Vue 3 / Building WPF Apps with Custom Scripts / Working with Managed Identity
TABLE OF CONTENTS

Features

8 Managed Identity
These days, you can hardly use any device without encountering a demand for a password. Sahil looks at the need for them, the problems with them, and a possible solution for them using managed identity.
Sahil Malik

16 Use the MVVM Design Pattern in MVC Core: Part 2
When you need your app to be testable, maintainable, and reusable, you need MVVM in MVC. Paul shows you how to add a feature that restricts how much data is shown per page and how caching the product data improves performance.
Paul D. Sheriff

24 Transform Your ASP.NET Core API into AWS Lambda Functions
Julie continues building her .NET app in the AWS ecosystem and starts to look at the serverless offering of AWS Lambda functions.
Julie Lerman

32 Power Query: Excel's Hidden Weapon
Helen changes everything by showing you a tool in Excel that helps automate populating Excel cells with data.
Helen Wall

40 Vue 3: The Changes
You already love Vue if you've been developing for the Web. Well, it's about to get better, and Shawn shows you how.
Shawn Wildermuth

46 Stages of Data: A Playbook for Analytic Reporting Using COVID-19 Data
When it's time to crunch the data warehousing numbers, you need the right tools. Kevin uses Power BI to track a constantly changing array of data about the pandemic.
Kevin S. Goff

72 Using a Scripting Language to Develop Native Windows WPF GUI Apps
There are some surprising benefits to creating a GUI via scripting language for your WPF application, and Vassili tells you how.
Vassili Kaplan

Columns

66 Talk to an RD: Tiberiu Covaci and Markus Egger
Tiberiu and Markus talk about what's new and what's coming with Azure.
Markus Egger

80 CODA: On First Principles
John talks about the four basic principles for building great software.
John V. Petersen

Departments

6 Black Lives Matter
20 Advertisers Index
79 Code Compilers

US subscriptions are US $29.99 for one year. Subscriptions outside the US pay $49.99 USD. Payments should be made in US dollars drawn on a US bank. American Express,
MasterCard, Visa, and Discover credit cards are accepted. Bill Me option is available only for US subscriptions. Back issues are available. For subscription information,
send e-mail to [email protected] or contact Customer Service at 832-717-4445 ext. 9.
Subscribe online at www.codemag.com
CODE Component Developer Magazine (ISSN # 1547-5166) is published bimonthly by EPS Software Corporation, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.
POSTMASTER: Send address changes to CODE Component Developer Magazine, 6605 Cypresswood Drive, Suite 425, Spring, TX 77379 U.S.A.

4 Table of Contents codemag.com


EDITORIAL

From the Publishers, Editors, and Staff of CODE Magazine

A Moment of Silence

#blacklivesmatter

Nevron Open Vision (NOV)
The future of .NET UI development is here!

NOV lets you build applications for Windows, Mac and soon Blazor using a single codebase. The suite features industry-leading components including Chart, Diagram, Gauge, Grid, Scheduler, Rich Text Editor, Ribbon, and others. Nevron also provides advanced solutions for:

ONLINE QUICK ID 2008021

Managed Identity

Let me introduce you to something that we all use and we all hate: passwords. You'd think this problem is recent, but it isn't. In World War 2, the Germans had a tool called the Enigma machine, the main purpose of which was encrypted communication. The military version had a switchboard in the front. The two communicating parties had to agree ahead of time on the pattern that they'd use. Pattern, password: same thing, different smell. In reality, and in the daily struggle of war, which, frankly, our jobs feel like sometimes, they got sick of changing passwords, and they just stuck with one easy-to-guess password. Guess how they ended each message. With "Heil Hitler." Those two bits of information made it possible for the Allies to decrypt their messages. Imagine how much damage they sustained because they were too lazy to change a password.

Sahil Malik
www.winsmarts.com
@sahilmalik

Sahil Malik is a Microsoft MVP, INETA speaker, a .NET author, consultant, and trainer. Sahil loves interacting with fellow geeks in real time. His talks and trainings are full of humor and practical nuggets. His areas of expertise are cross-platform mobile app development, Microsoft anything, and security and identity.

Fast forward to today. You still have organizations insisting on super complex passwords that must be changed every two months. Some of my utility companies force me to do that. It's mind boggling. My only recourse is to write it down somewhere or I'll simply forget it. So where do we write it down? A password manager! Yes, what a great idea: let's collect all of our valuable passwords, protected by yet another password.

I fully realize that attempts are being made to get around this password drama, but let's be honest, we are far from it. There may be technical solutions available, but have you entered a password anywhere earlier today?

Passwords suck! They are hard to remember, they are hard to secure, they expire, rotating them is a pain, etc., etc.

But then, a big part of our architecture includes headless processes, or programs that need to access resources securely. For instance, something trying to access a password manager of sorts. Let's say that in Azure there's something called a Key Vault, which lets you manage secrets and certificates, and your code wishes to access Key Vault securely.

You could, of course, access it as an interactive user. But how would that code work with DevOps? You'd have to put the password in some config file, I guess. Or maybe a refresh token, which is, well, almost like a password. And then you'd have to somehow make sure that you don't accidentally check it into source control.

Looks like we are back to square one.

Wouldn't it be nice if you had something that gave all the advantages and possibilities of having an identity for your running code, without the headache of credential management?

That very thing is managed identity.

What Is Managed Identity?
Managed identity is a feature of Azure Active Directory that lets you assign an identity to various Azure resources, without the headache of managing the identity's credential. You can use this identity to authenticate to any service that supports Azure AD authentication, such as Microsoft Graph, Key Vault, custom APIs, etc.

As you may know, when you protect an API using Azure AD, you're doing so by validating an incoming access token. For instance, when you make a call to Microsoft Graph, Microsoft Graph, being yet another API, expects you to include an access token in the authorization header, and this token must be for the audience Microsoft Graph. The same holds when making such a call using managed identity: the token must be for an audience of Microsoft Graph (or any other API you wish to target).

There is one other key point. When making calls to protected APIs, you have the ability to make such a call using delegated permissions or application permissions. Delegated permissions, as you may know, require the user identity to be present. Application permissions, on the other hand, require only the application's identity to be present. Managed identities can only make calls that use application permissions.

There are two kinds of managed identities available in Azure AD: a system-assigned managed identity and a user-assigned managed identity.

A system-assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that's trusted by the subscription of the instance. After the identity is created, the credentials are provisioned onto the instance. The lifecycle of a system-assigned identity is directly tied to the Azure service instance that it's enabled on. If the instance is deleted, Azure automatically cleans up the credentials and the identity in Azure AD.

A user-assigned managed identity is created as a standalone Azure resource. Through a create process, Azure creates an identity in the Azure AD tenant that's trusted by the subscription in use. After the identity is created, the identity can be assigned to one or more Azure service instances. The lifecycle of a user-assigned identity is managed separately from the lifecycle of the Azure service instances to which it's assigned.

If you read between the lines, there's a key advantage to using a user-assigned managed identity: you can manage the lifecycle of the managed identity separately from the Azure resources that the identity is assigned to. In other words, not only can you share that identity across multiple Azure resources, but perhaps a bigger advantage is that you know the identity involved ahead of time, since it wasn't auto-provisioned for you. This means that you can set up the permissions, RBAC (role-based access control) assignments, etc., ahead of time. Now, this may be anecdotal, or just one person's opinion, but I find system-assigned managed identity is great for demos and user-assigned managed identity is great for real-world scenarios.
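The audience requirement mentioned earlier is easy to see for yourself: an access token is a JWT, and its payload carries an aud claim naming the API it was issued for. Here is a minimal, illustrative sketch of peeking at that claim; it does not validate the token's signature, which a real API must do with a proper JWT library.

```javascript
// Decode the payload of a JWT to inspect its audience (aud) claim.
// Illustrative only: this does NOT validate the token's signature.
function getAudience(jwt) {
  // The payload is the second dot-separated, base64url-encoded segment
  const payload = jwt.split('.')[1]
    .replace(/-/g, '+')
    .replace(/_/g, '/');
  return JSON.parse(Buffer.from(payload, 'base64').toString('utf8')).aud;
}

// Example with a fabricated (unsigned) token:
const fakePayload = Buffer.from(
  JSON.stringify({ aud: 'https://graph.microsoft.com' })
).toString('base64');
const fakeJwt = `header.${fakePayload}.signature`;
console.log(getAudience(fakeJwt)); // https://graph.microsoft.com
```

If the aud claim doesn't match the API you're calling, the call is rejected, which is exactly why the token must be requested for the right target.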



Another very key player in the managed identity world is the Azure Instance Metadata Service, or IMDS for short. IMDS is a REST endpoint accessible to all IaaS VMs created via the Azure Resource Manager. The endpoint is available at a well-known non-routable IP address (169.254.169.254) that can be accessed only from within the VM. Whenever any code running on a virtual machine with a managed identity needs to ask for an access token for the managed identity, instead of asking the Azure AD endpoints, it asks for the token from this IMDS endpoint. Although this reduces the workload on Azure AD endpoints, it has two significant downsides.

• IMDS gives you a cached view of what it thinks Azure AD knows about this identity. This cached view may be a little out of date. This cache, at the time of writing this article, is up to eight hours old, although efforts are underway to reduce this latency.
• Any program running on the virtual machine has equal rights to this IMDS endpoint. In other words, you can't say within one virtual machine that you have two managed identities, one for each running program. If you need two identities, you will need two virtual machines.

Now, I use the term virtual machine very loosely here. In fact, managed identities in Azure apply to way more than just virtual machines. You can assign a managed identity to a Docker image, to an Azure function, to app services, or to a virtual machine, etc. Additionally, a number of services in Azure understand incoming managed identity. You can find a list of the current services here: https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/services-support-managed-identities. Undoubtedly, this list will continue to increase as time moves forward. Managed identity is a very key investment for Azure AD.

How Do Managed Identities Work?
Managed identity is a service principal of a special type reserved for use with Azure resources. The key difference between a managed identity service principal and a normal service principal is that when a managed identity is created, the managed identity resource provider issues a certificate internally to the identity. Now your code can freely use the managed identity to request access tokens for services that support Azure AD authentication. You never have to worry about rolling the credentials, because that's the responsibility of Azure. The default period for credential rotation is 46 days, but the individual resource provider can choose to wait longer if necessary. At this point, when your code wishes to get an access token using the managed identity, it simply requests it from the IMDS endpoint at http://169.254.169.254/metadata/identity/oauth2/token.

Managed Identity Calling Microsoft Graph
Next, let's wrap up your understanding with a real-world example. Frequently in our applications, we end up writing services, also known as cron jobs. These are applications that don't have a user interface. They're like services that run behind the scenes doing important stuff, stuff that cannot be done synchronously and must be handled by a service.

Usually, when you'd like to have a service call an API securely for you, you get an access token using something called a client credential flow. Client credential flow, by definition, requires you to present a credential. This credential can either be a password or it can be a certificate. And with that, you're back to the headache of managing that credential.

Managed identities give you the advantages of a real identity without the headache of managing a credential. In this example, I'm going to show you how you can use managed identity, working as a daemon, accessing Microsoft Graph.

Microsoft Graph has a huge API surface. I'm going to show you a very simple example of how to call the endpoint that gives the details of all users in the tenant. In order to do so, I'll have to go through some steps. I'll have to create a managed identity. I'll have to assign that managed identity to something that understands that managed identity; in this case, I'll use a function. I'll have to write some code that's able to get an access token for Microsoft Graph on behalf of this managed identity. And, of course, I'll need to make sure that my managed identity has the permissions to call Microsoft Graph.

As you can see, it doesn't matter what I'm calling in Microsoft Graph, or, for that matter, what I'm calling on any API protected by Azure Active Directory. The steps are identical. From what I'm about to present here in this article, you should be able to extrapolate to calling any Azure Active Directory protected API.

The Main Steps
In order for my example to work, let's outline the main steps I need to accomplish. There are five main steps.

1. Create the function app in Azure.
2. Create a user-assigned managed identity.
3. Assign the managed identity to the function app.
4. Grant permissions to the managed identity to call Microsoft Graph.
5. Author the function app that runs on a timer, gets an access token, and calls Microsoft Graph.

It's worth mentioning that everything I'm about to show here will work with a system-assigned managed identity as well. However, I've chosen to go with a user-assigned managed identity because I feel that it's a little bit more practical, a bit more real world.

Additionally, I chose to show you this code example in NodeJS, although I must emphasize that these concepts are platform-agnostic. Everything I'm showing here will work in Python, .NET Core, or any other language you prefer. And because I'm going to use functions as my managed identity host, you need to make sure that you work with something that functions can support.

Finally, a function app can be triggered by a number of possible triggers. The easiest possible implementation is the HTTP trigger, because all the project templates are designed to give you an HTTP trigger very easily. I'll keep it a little more real-world by using a timer trigger. This is because I intend to use this managed identity as a serverless process, and a timer job makes the most sense here.

Enough background. Let's get our hands dirty with some code.



Figure 1: The Function App template

Create the Function App in Azure
This part is easy. Just go to Azure and create a new function app. To do so, simply search for function app and click the Create button. This can be seen in Figure 1.

The Azure portal then shows you a user interface like that shown in Figure 2. Go ahead and provide the details. As you can see, my function app is called "sahiltimerapp" and it runs on NodeJS version 12.

Figure 2: Function App details

Create a User-Assigned Managed Identity
This is also pretty simple. Just search for "user-assigned managed identity" in the Azure Portal, as can be seen in Figure 3. Of course, you may also choose to create this from the Azure CLI or PowerShell or Microsoft Graph. There are many ways to achieve this goal.

Figure 3: Creating a user-assigned managed identity

When creating a user-assigned managed identity, you will be asked to provide a name for it. I called my managed identity sahiltimerfunctionidentity.

Once you provide all the details and create the managed identity, go to its properties in the Azure Portal and get its Client ID and Object ID. You'll need both in a moment. Mine looks like Figure 4.

Assign the Managed Identity to the Function App
You have a function app and you have a managed identity. It's now time to assign this newly created managed identity to your function app. This is quite simple to do. In the Azure portal, navigate to your function app and, under its properties, go to platform features. Over there, look for Identity, as shown in Figure 5.

Under the identity section of the function app, choose to assign the user-assigned managed identity to the function app, as shown in Figure 6.

Grant Permissions to the Managed Identity to Call Microsoft Graph
Unfortunately, at the time of writing this article, there's no easy user interface built inside of the Azure portal to grant permissions to a managed identity. Note that you can view the permissions, but you can't grant the permissions through the user interface at this time. Additionally, there's no direct commandlet in the Azure CLI to achieve this either.

Currently, there are two ways to grant permissions to a managed identity to any arbitrary API. One is using PowerShell, and the other is using Microsoft Graph. Because Microsoft Graph is a cross-platform approach and I'm on a Mac, I chose to show you how to do this using Microsoft Graph.

To work with Microsoft Graph, I'm going to need an access token. There are many ways to get access tokens, but by far the simplest is the Azure CLI. I showed this trick and many other Azure CLI tricks in a previous CODE Magazine article at https://codemag.com/Article/2001021/Azure-CLI.



Figure 4: Newly created managed identity

On a computer with the Azure CLI installed, first log in as an administrator to your Azure AD and issue the following command:

az account get-access-token \
  --resource https://graph.microsoft.com \
  | jq .accessToken \
  | pbcopy

This grabs an access token for you, with the necessary permissions, targeted for Microsoft Graph, and puts it in your clipboard.

With this token, I can grant permissions to my managed identity. This is simply a curl command that issues a POST to Microsoft Graph, as shown in Listing 1.

Listing 1: Grant permissions to my managed identity

curl --location --request POST \
  'https://graph.microsoft.com/beta/servicePrincipals/9654473a-512a-4c6a-8525-02cc112c5b08/appRoleAssignments' \
  --header 'Authorization: Bearer <token_you_got_from_above>' \
  --header 'Content-Type: application/json' \
  --data-raw '{
    "principalId": "8f5f9081-66af-4ff0-89dc-800b738efd6a",
    "resourceId": "9654473a-512a-4c6a-8525-02cc112c5b08",
    "appRoleId": "df021288-bdef-4463-88db-98f22de89214"
  }'

Note that I am targeting a beta API, which is subject to change as it moves to a v1 API.

Easy, right? Uhm, not so much! There are so many magic GUIDs in Listing 1 that I can't possibly move on without explaining each one of them. Let's slice and dice the command you see in Listing 1, bit by bit.

You're making a call to Microsoft Graph's service principal and making the necessary appRoleAssignments. 8f5f9081-66af-4ff0-89dc-800b738efd6a is the ObjectID of your managed identity service principal. 9654473a-512a-4c6a-8525-02cc112c5b08 is the GUID for Graph. And df021288-bdef-4463-88db-98f22de89214 is the GUID that represents User.Read.All.

Oh my! But how do you remember all those GUIDs? I don't! I look them up. Here is how.

The ObjectID of the managed identity is from Figure 4.

Getting the GUID for the service principal associated with Microsoft Graph is a bit more complex, but not too bad. And you can use these steps for any API. First, register an app in Azure AD and ensure that it has some kind of access to Microsoft Graph. Then, visit that app's service principal, and under there look for the Permissions menu item in the blade. There, choose Microsoft Graph, and you should see the service principal ID for Microsoft Graph. This can be seen in Figure 7.

Finally, to find the GUID that represents the application permission User.Read.All, grant this admin permission to any app, and open its manifest. You should be able to grab that GUID from the requiredResourceAccess section.

Once you execute the command in Listing 1, you should get an output as shown in Figure 8. You can verify the permissions by issuing an authenticated GET request to the same URL you sent a POST to.

Figure 5: The identity section of a function app
Figure 6: Assign the managed identity to the function app
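Rather than fishing GUIDs out of a manifest, you can also look them up with Microsoft Graph itself. Graph's application has the well-known appId 00000003-0000-0000-c000-000000000000; querying GET /v1.0/servicePrincipals?$filter=appId eq '00000003-0000-0000-c000-000000000000' returns its service principal (whose id is the resourceId you need), and that object's appRoles collection contains User.Read.All along with its id. A small helper, assuming you've already fetched the service principal JSON (the ids below are the same ones used in Listing 1):

```javascript
// Given a service principal object as returned by Microsoft Graph,
// find the appRole id for an application permission such as "User.Read.All".
function findAppRoleId(servicePrincipal, permissionValue) {
  const role = (servicePrincipal.appRoles || [])
    .find(r => r.value === permissionValue);
  return role ? role.id : null;
}

// Shape of the data as Graph returns it (trimmed to the relevant fields):
const graphSp = {
  id: '9654473a-512a-4c6a-8525-02cc112c5b08',
  appRoles: [
    { id: 'df021288-bdef-4463-88db-98f22de89214', value: 'User.Read.All' }
  ]
};
console.log(findAppRoleId(graphSp, 'User.Read.All'));
// df021288-bdef-4463-88db-98f22de89214
```

This is handy when scripting the grant for many identities, since nothing has to be copied out of the portal.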



Alternatively, you can verify the permissions by going to enterprise applications and searching for your managed identity using its client ID, as shown in Figure 9. Note that you can grab the client ID of this managed identity from Figure 4.

Once you open the app that represents the managed identity, you can look for its permissions, as shown in Figure 10.

One important note: Just like any service principal or headless process that has no user identity, managed identity permissions must be application permissions and granted ahead of time.

Author the Function App
All the plumbing is in place and all that's left to do is to author the function app. This part is perhaps the easiest. If you've installed the Azure Functions CLI on your computer, just type "func new" in a terminal and it'll guide you through. Ensure that you choose to create a function app that accepts a timer trigger. The code for my function app using a timer trigger can be seen in Listing 2.

This line from Listing 2 is especially interesting!

const managedIdentityAppID =
  "f8973019-6953-433b-b935-adc31f5646d5";

That's the AppID of my user-assigned managed identity.

For this code to work, I've had to take a dependency on a specific node package. My package.json looks like this:

{
  "dependencies": {
    "request": "2.88.2",
    "@azure/identity": "^1.1.0"
  }
}

Figure 7: The service principal ID of Microsoft Graph



Figure 8: Assign permission success

Figure 9: Finding your managed identity

Figure 10: The managed identity permission

Running the Code Example
At this point, all my code changes are done. Now simply go ahead and deploy the function app and connect to its streaming logs. You should see an output like that in Figure 11.

As can be seen from Figure 11, you're able to get an access token for the managed identity with the necessary permissions and call Microsoft Graph with it.

This is incredibly powerful. You have just authored a timer function, which is a headless process for which you'd never need to manage any credentials. No more key rotation headache. No more headache of securing that credential. Everything is contained neatly inside Azure.



Listing 2: The function app code

const identity = require('@azure/identity');
const request = require('request');

module.exports = async function (context, myTimer) {
    var timeStamp = new Date().toISOString();
    context.log('Timer function called', timeStamp);
    var promise = new Promise((resolve, reject) => {
        const managedIdentityAppID =
            "f8973019-6953-433b-b935-adc31f5646d5";
        const credential =
            new identity.ManagedIdentityCredential(
                managedIdentityAppID);
        context.log('getting access token');
        credential.getToken(
            "https://graph.microsoft.com/.default")
            .then(response => {
                context.log(response);
                request.get(
                    'https://graph.microsoft.com/v1.0/users',
                    { 'auth': { 'bearer': response.token } },
                    function (error, response, body) {
                        if (error) {
                            reject(error);
                        }
                        context.log(timeStamp + body);
                        resolve(body);
                    });
            }, err => {
                context.log(err);
                reject(err);
            });
    });
    await promise;
};
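For readers who prefer async/await over the explicit Promise wrapper (and note that the request package has since been deprecated), the same token-then-call flow can be sketched with the credential and the HTTP call injected, so the shape is runnable without Azure. The function and parameter names here are illustrative, not part of the article's sample:

```javascript
// Same flow as Listing 2, reworked with async/await.
// credential: anything with getToken(scope) -> { token }
//   (e.g., new identity.ManagedIdentityCredential(managedIdentityAppID))
// fetchFn: anything with (url, options) -> response with .json()
//   (e.g., Node's built-in fetch)
async function listUsers(credential, fetchFn) {
  const { token } = await credential.getToken(
    'https://graph.microsoft.com/.default');
  const res = await fetchFn('https://graph.microsoft.com/v1.0/users', {
    headers: { Authorization: `Bearer ${token}` }
  });
  return res.json();
}
```

Injecting the two dependencies also makes the flow trivially unit-testable with stubs, which is hard to do with the inline Promise version.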

Figure 11: Our call as a managed identity successfully calling Microsoft Graph

Summary
A platform such as Azure is full of golden nuggets everywhere. I've been working with Azure for many years now and yet I feel I've only scratched the surface of it. Every day I learn so many more new and interesting ways of solving customers' problems.

Managed identity is a very powerful feature of Azure. It allows you to write secure applications because you have no credentials to manage. There's no danger of polluting environment variables. There's no danger of accidentally checking credentials into source code for the world to see. It promotes better architecture. It promotes better patterns. It reduces stupid mistakes.

Although this article showed you one possible usage of managed identity, using it as a headless process to call protected APIs, the possibilities are truly endless. I'm sure you'll use your own ingenuity and creativity to come up with some amazing examples of managed identity put to real use.

I look forward to seeing them. Stay in touch and happy coding.

Sahil Malik



INDUSTRY'S FASTEST DATA CONNECTORS
ONLINE QUICK ID 2008031

Use the MVVM Design Pattern in MVC Core: Part 2

In Part 1 of this article series (called "Use the MVVM Design Pattern in MVC Core: Part 1," and located at https://bit.ly/3gA4IkL), you started using the Model-View-View-Model (MVVM) design pattern in MVC Core applications. In that article, you created an MVC Core application using VS Code and a few class library projects in which to put your entity, repository, and view model classes.

Using the AdventureWorksLT database, you created a page to property. Open the ViewModelBase.cs file and add a new
display product data in an HTML table. In addition, you wrote method named SetSortDirection() to perform this logic.
the appropriate code to search for products based on user input.
protected virtual void SetSortDirection() {
In this article, you’re going to add on to the sample from if (SortExpression ==
the last article to sort the database when the user clicks on PreviousSortExpression) {
any of the column headers in the HTML table. You’re going // Toggle the sort direction if
to learn how to add a pager to your HTML table so only a // the field name is the same
specified number of rows are displayed on the page. Finally, SortDirection = (SortDirection == "asc" ?
Paul D. Sheriff you learn to cache the product data in the Session object to "desc" : "asc");
http://www.pdsa.com improve performance. }
else {
Paul has been in the IT SortDirection = "asc";
industry over 33 years. Sort the Product Table }
In that time, he has suc- In most Web applications, when you have an HTML table of
cessfully assisted hundreds
data, the user can click on the header above each column and sort the data within that column. If the user clicks on the same column header twice in a row, it first sorts the data in ascending, then in descending order. If the user clicks on a different column header, the sort direction should go back to ascending and the table should be sorted on the new column's data.

Add Sort Properties to ViewModel Base Class
The ViewModelBase class you created in the previous article is designed to be the base class for any view models you add to your project. As many pages you design might need the sorting functionality, add three new properties to the ViewModelBase class, as shown in the following code snippet:

public string SortDirection { get; set; }
public string SortExpression { get; set; }
public string PreviousSortExpression
  { get; set; }

Initialize the three properties you just added by adding code into the constructor, as shown below.

public ViewModelBase() {
  EventCommand = string.Empty;
  SortDirection = "asc";
  SortExpression = string.Empty;
  PreviousSortExpression = string.Empty;
}

You put either “asc” or “desc” to specify the sort direction into the SortDirection property. The SortExpression property holds the name of the column the user just clicked upon. The PreviousSortExpression property holds the name of the last column the user clicked upon. If the SortExpression property is equal to the PreviousSortExpression, change the SortDirection to “desc” if its current value is “asc”, or to “asc” if its value is “desc”. After you change the SortDirection, put the SortExpression value into the PreviousSortExpression property:

// Set previous sort expression to new column
PreviousSortExpression = SortExpression;
}

Create Hidden Input Fields for New Properties
In the last article, you added a partial page with hidden input fields to hold the values of each property added to the ViewModelBase class. The EventCommand property is the only hidden input field in this partial page at this point. Open the _StandardViewModelHidden.cshtml file and add three more hidden input fields to hold the values of each of the three new properties you added to the ViewModelBase class. Modify the partial page to look like the following code snippet:

@model MVVMViewModelLayer.ViewModelBase

<input type="hidden" asp-for="EventCommand" />
<input type="hidden" asp-for="SortDirection" />
<input type="hidden" asp-for="SortExpression" />
<input type="hidden"
  asp-for="PreviousSortExpression" />

Create Clickable Column Headers
You need to change each column header to a hyperlink so the user can click on it. To do this, use an anchor tag with a couple of “data-” attributes added to pass data to the view model. Add an attribute named “data-custom-cmd” and set its value equal to “sort”. This tells the view model what operation to perform. Add another attribute named “data-custom-arg” and set its value to the property name to sort upon. This value will be different for each column the user clicks on. Open the partial page named _ProductList.cshtml and add the links to each of the column headers, as shown in Listing 1.

of companies to architect software applications to solve their toughest business problems. Paul has been a teacher and mentor through various mediums such as video courses, blogs, articles, and speaking engagements at user groups and conferences around the world. Paul has 23 courses in the www.pluralsight.com library (http://www.pluralsight.com/author/paul-sheriff) on topics ranging from JavaScript, Angular, MVC, WPF, XML, jQuery, and Bootstrap. Contact Paul at [email protected].

Set the SortExpression Property Using jQuery
In the last article, you added a $(document).ready() function to connect up a click event to any HTML element that
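Only the tail end of the SetSortDirection() method appears on this page, but the prose above spells out its full logic. Here is that toggle sketched as a standalone function, written in JavaScript purely for illustration; the article's actual method is C# in the ViewModelBase class, and its exact body may differ:

```javascript
// Sketch of the sort-direction logic described above (illustrative only).
// "state" stands in for the three view model properties added in this section.
function setSortDirection(state) {
  if (state.sortExpression === state.previousSortExpression) {
    // Same column clicked again: flip between "asc" and "desc"
    state.sortDirection = state.sortDirection === "asc" ? "desc" : "asc";
  } else {
    // A different column was clicked: start over in ascending order
    state.sortDirection = "asc";
  }
  // Set previous sort expression to new column
  state.previousSortExpression = state.sortExpression;
}
```

Clicking the same header twice yields ascending then descending order; clicking a different header resets the direction to ascending, exactly as the description above requires.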
16 Use the MVVM Design Pattern in MVC Core: Part 2 codemag.com


has a “data-custom-cmd” attribute. Within that code, you retrieve the value in the “data-custom-cmd” attribute and put it into the hidden input field that is bound to the EventCommand property in your view model. If the EventCommand property is set to “sort”, retrieve the value in the “data-custom-arg” attribute and put that value into the hidden input field bound to the SortExpression property. Open the Products.cshtml and locate the $(document).ready() function at the bottom of the file. After the line of code that updates the $(“#EventCommand”) hidden input field, add an if statement and the code within it as shown below.

// $(document).ready()

// Fill in command to post back to view model
$("#EventCommand").val($(this)
  .data("custom-cmd"));

// Only set sort variables if command was "sort"
if ($("#EventCommand").val() == "sort") {
  // Get the new sort expression
  $("#SortExpression").val(
    $(this).data("custom-arg"));
}

Add SortProducts Method
Now that you have the EventCommand and SortExpression properties filled in with the proper values, and you have code written in the SetSortDirection() method to set the SortDirection and PreviousSortExpression properties, it's time to do the actual sorting. Open the ProductViewModel.cs file and add a SortProducts() method, as shown in Listing 2.

The first thing the SortProducts() method does is to retrieve the product data from the data repository by calling the SearchProducts() method. Once the Products collection has been filled in with the product data, call the SetSortDirection() method. Check the SortDirection property to see if it is set to “asc” or “desc”. Set a Boolean variable appropriately. In the switch() expression, compare the value in the SortExpression property against each case statement. Once you have a match on the column name, check the Boolean variable to see if you should apply the OrderBy() or OrderByDescending() methods to the Products collection.

Add Sort Command to HandleRequest() Method
The HandleRequest() method is the public method called from the controller. This method is where you check the EventCommand property to decide what method(s) to call in the view model to set the view model properties for displaying on the screen. You just passed in a new command with a value of “sort”, so you need to add a new case statement just above the “search” case. Modify the method called from SearchProducts() to SortProducts().

Listing 1: Change the table headers to hyperlinks for sorting
<th>
  <a href="#"
     data-custom-cmd="sort"
     data-custom-arg="Name">
    Product Name
  </a>
</th>
<th>
  <a href="#"
     data-custom-cmd="sort"
     data-custom-arg="ProductNumber">
    Product Number
  </a>
</th>
<th class="text-right">
  <a href="#"
     data-custom-cmd="sort"
     data-custom-arg="StandardCost">
    Cost
  </a>
</th>
<th class="text-right">
  <a href="#"
     data-custom-cmd="sort"
     data-custom-arg="ListPrice">
    Price
  </a>
</th>
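The jQuery flow described above (copy “data-custom-cmd” into the EventCommand hidden field, and copy “data-custom-arg” into SortExpression only when the command is “sort”) boils down to one small decision. Here it is distilled into a standalone function — a JavaScript sketch for illustration only; the article's actual code runs inside $(document).ready() with jQuery, and the camel-cased dataset property names below follow the browser's standard data-attribute mapping:

```javascript
// Given the clicked element's data- attributes, build the hidden-field
// values to post back to the view model (sketch, not the article's code).
// In the DOM, data-custom-cmd appears as dataset.customCmd.
function buildPostbackFields(dataset) {
  const fields = { EventCommand: dataset.customCmd };
  // Only set the sort expression if the command was "sort"
  if (dataset.customCmd === "sort") {
    fields.SortExpression = dataset.customArg;
  }
  return fields;
}
```

A “sort” click produces both field values, while any other command (such as the paging commands added later in the article) leaves SortExpression untouched.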

Listing 2: For each column in your table you need code to sort data based on that field.
protected virtual void SortProducts() {
  // Search for Products
  SearchProducts();

  if (EventCommand == "sort") {
    // Set sort direction
    SetSortDirection();
  }

  // Determine sort direction
  bool isAscending = SortDirection == "asc";

  // What field should we sort on?
  switch (SortExpression.ToLower()) {
    case "name":
      if (isAscending) {
        Products = Products.OrderBy(
          p => p.Name).ToList();
      } else {
        Products = Products.OrderByDescending(
          p => p.Name).ToList();
      }
      break;
    case "productnumber":
      if (isAscending) {
        Products = Products.OrderBy(
          p => p.ProductNumber).ToList();
      } else {
        Products = Products.OrderByDescending(
          p => p.ProductNumber).ToList();
      }
      break;
    case "standardcost":
      if (isAscending) {
        Products = Products.OrderBy(
          p => p.StandardCost).ToList();
      } else {
        Products = Products.OrderByDescending(
          p => p.StandardCost).ToList();
      }
      break;
    case "listprice":
      if (isAscending) {
        Products = Products.OrderBy(
          p => p.ListPrice).ToList();
      } else {
        Products = Products.OrderByDescending(
          p => p.ListPrice).ToList();
      }
      break;
  }
}
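Listing 2 repeats the identical OrderBy()/OrderByDescending() pattern for every column. As a side note, the same idea can be written once with a lookup of key selectors. The sketch below is JavaScript for illustration only — the column names and properties come from the listing, everything else is assumed; in C# you could achieve the equivalent with a dictionary of Func&lt;Product, object&gt; delegates:

```javascript
// Generic version of Listing 2's switch: map each sort expression to a
// key selector, sort a copy of the array, and reverse it for descending.
function sortProducts(products, sortExpression, isAscending) {
  const keySelectors = {
    name: (p) => p.name,
    productnumber: (p) => p.productNumber,
    standardcost: (p) => p.standardCost,
    listprice: (p) => p.listPrice,
  };
  const key = keySelectors[sortExpression.toLowerCase()];
  if (!key) return products; // unknown column: leave the order unchanged
  const sorted = [...products].sort((a, b) =>
    key(a) < key(b) ? -1 : key(a) > key(b) ? 1 : 0);
  return isAscending ? sorted : sorted.reverse();
}
```

The switch in Listing 2 is arguably clearer for four columns; the lookup approach pays off as the number of sortable columns grows.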


case "sort":
case "search":
  SortProducts();
  break;

Set Sort Order in Controller
When the user first enters your product list page, show them the products sorted by the Name property. Open the ProductsController.cs file and add two lines of code to the Products() method, as shown in the code snippet below.

public IActionResult Products()
{
  // Load products
  _viewModel.SortExpression = "Name";
  _viewModel.EventCommand = "sort";
  _viewModel.HandleRequest();

  return View(_viewModel);
}

The first line of code sets the SortExpression property to “Name”. The second line of code sets the EventCommand property to “sort”. When the HandleRequest() method is called, the SortProducts() method is called because of the value in the EventCommand property. Because the value in the PreviousSortExpression property is blank and the value in the SortExpression property is “Name”, the two values are not equal. This causes the SortDirection property to be set to “asc”. Because of the SortExpression and SortDirection, the switch statement causes the Products collection to be ordered by the Name property in ascending order.

Reset Hidden Fields After Postback
When the user clicks on another column header and forces a post-back with new values to sort by, the [HttpPost] Products() method is called. After the HandleRequest() method is called, there are new values in the SortExpression, PreviousSortExpression and SortDirection properties. To update the hidden input fields, call the ModelState.Clear() method as shown in the code snippet below.

[HttpPost]
public IActionResult Products(
  ProductViewModel vm)
{
  vm.Repository = _repo;
  vm.HandleRequest();

  ModelState.Clear();

  return View(vm);
}

Try It Out
Now that you have code written to perform sorting, it is time to try it out. Run the Web application and click on the different column headers. Try clicking on different headers to see the data sort in ascending order for each column. Then try clicking on the same column header to see the data swap between ascending and descending order for that column of data.

Getting the Sample Code
You can download the sample code for this article by visiting www.CODEMag.com under the issue and article, or by visiting www.pdsa.com/downloads. Select “Fairway/PDSA Articles” from the Category drop-down. Then select “Use the MVVM Design Pattern in MVC Core - Part 2” from the Item drop-down.

Create a Common Library Project
The next step is to add paging to your table. In order to do this, you need some classes to help with the paging. These classes are in the download for this article under a folder called PagerClasses. Please download the samples now so you can continue following along.

These pager classes you downloaded belong in another class library that you are naming CommonLibrary. This class library will be for any generic classes that can be used in any type of application. To create the CommonLibrary project, open a terminal window by clicking on the Terminal > New Terminal menu. Go to your development directory (in my case, that was D:\Samples). Type in the following commands to create a new folder named CommonLibrary.

MD CommonLibrary
CD CommonLibrary

dotnet new classlib

The dotnet new classlib command creates a class library project with the minimum number of references to .NET Core libraries. Now that you have created this new project, add it to your Visual Studio Code workspace by clicking the File > Add Folder to Workspace… menu. Select the CommonLibrary folder and click on the Add button. You should see the CommonLibrary project added to your VS Code workspace. Delete the Class1.cs file, as you're not going to need that. Copy the \PagerClasses folder into the CommonLibrary folder so it now appears in your VS Code workspace.

Add References to the Common Library
This CommonLibrary needs to be referenced from both the MVVMViewModelLayer and the MVVMSample projects. Click on the Terminal > New Terminal menu and select the MVVMSample folder. Set a reference to the CommonLibrary project using the following command.

dotnet add . reference
  ../CommonLibrary/CommonLibrary.csproj

Change the directory to the MVVMViewModelLayer folder and execute the following command to reference the CommonLibrary in this project as well.

dotnet add . reference
  ../CommonLibrary/CommonLibrary.csproj

Try It Out
To ensure that you've typed in everything correctly, run a build task to compile the projects. Because you have a reference from the MVVMSample to the CommonLibrary, if you run a build task on the MVVMSample project, it builds all of the other projects, including the CommonLibrary. Select the Terminal > Run Build Task… menu and select the MVVMSample project. Watch the output in the terminal window and you should see that it compiles all five of your projects.

Paging
Instead of displaying hundreds of product rows in a table and forcing the user to scroll down on your Web page, add a pager to your table like the one shown at the bottom of Figure 1. There are two pieces to adding a pager to your table: the first is the actual pager UI, and the second is the calculation and selection of which rows are on each page. All the code to do the calculations and create a set of pager items used to display the pager are contained in the classes in the



\PagerClasses folder you just added to the CommonLibrary project. I'm not going to cover how these classes work, as that is beyond the scope of this article. I am going to show you how to use them to create the pager shown in Figure 1.

To create a pager that looks like what you see in Figure 1, use Bootstrap and the HTML shown in Listing 3. Add the “data-custom-cmd” and “data-custom-arg” attributes to each anchor tag to set properties in the view model class. The “data-custom-arg” attribute is going to be used to set a property named EventArgument in the ViewModelBase class. In the previous section on sorting, you used the “data-custom-arg” attribute to set the SortExpression property. For paging, you are going to set the EventArgument property from the “data-custom-arg” attribute and use that to tell the pager how to page through the data.

Figure 1: Add a pager below your table to keep the number of records displayed at one time to a minimum.

Add Paging Properties to View Model Base Class
Open the ViewModelBase.cs file and add a few new properties to help with paging. The first one, EventArgument, receives the action to perform when paging. For example, this can be set to “next”, “previous”, “first”, “last”, or a specific page number, as you can see by looking at the “data-custom-arg” attributes in Listing 3. The next property, Pager, is of the type Pager, which is one of the classes you downloaded and added to the CommonLibrary project from the PagerClasses folder. This class contains properties such as PageSize, VisiblePagesToDisplay, PageIndex, StartingRow, TotalPages, and TotalRecords. The last property to add to the view model, Pages, is a collection of PagerItem objects. A PagerItem object represents a single visible anchor tag in the pager displayed at the bottom of the table. Each PagerItem object contains properties such as Text, Tooltip, Argument, and CssClass.

public string EventArgument { get; set; }
public Pager Pager { get; set; }
public PagerItemCollection Pages { get; set; }

After adding these new properties, initialize the EventArgument and Pager properties in the constructor of the ViewModelBase class using the code shown in the following code snippet:

public ViewModelBase() {
  EventCommand = string.Empty;
  EventArgument = string.Empty;
  Pager = new Pager();
  SortDirection = "asc";
  SortExpression = string.Empty;
  PreviousSortExpression = string.Empty;
}

Add a new method named SetPagerObject() to the ViewModelBase class. This method accepts the total amount of records in the Product table. This parameter is passed to the Pager.TotalRecords property. The TotalPages property in the Pager object is calculated from the TotalRecords and the PageSize properties. PageSize has a default size of ten. If the TotalRecords is set to 50, the total pager objects to show on the UI is five.
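The TotalPages arithmetic just described is simply the record count divided by the page size, rounded up — sketched here in JavaScript for illustration (the article's Pager class is C#, and its internals aren't shown):

```javascript
// Total pager buttons = records / page size, rounded up.
function totalPages(totalRecords, pageSize) {
  return Math.ceil(totalRecords / pageSize);
}
```

With the default page size of ten, 50 records yields five pages, while 51 records would round up to six.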

Listing 3: Create a pager using HTML and Bootstrap
<ul class="pagination">
  <li class="page-item disabled">
    <a class="page-link" href="#"
       data-custom-cmd="page"
       data-custom-arg="first">
      &laquo;
    </a>
  </li>
  <li class="page-item disabled">
    <a class="page-link" href="#"
       data-custom-cmd="page"
       data-custom-arg="prev">
      &lsaquo;
    </a>
  </li>
  <li class="page-item active">
    <a class="page-link" href="#"
       data-custom-cmd="page"
       data-custom-arg="0">
      1
    </a>
  </li>
  <li class="page-item ">
    <a class="page-link" href="#"
       data-custom-cmd="page"
       data-custom-arg="1">
      2
    </a>
  </li>
  <li class="page-item ">
    <a class="page-link" href="#"
       data-custom-cmd="page"
       data-custom-arg="10">
      ...
    </a>
  </li>
  <li class="page-item ">
    <a class="page-link" href="#"
       data-custom-cmd="page"
       data-custom-arg="next">
      &rsaquo;
    </a>
  </li>
  <li class="page-item ">
    <a class="page-link" href="#"
       data-custom-cmd="page"
       data-custom-arg="last">
      &raquo;
    </a>
  </li>
</ul>


protected virtual void
  SetPagerObject(int totalRecords) {
  // Set Pager Information
  Pager.TotalRecords = totalRecords;

  // Set Pager Properties
  Pager.SetPagerProperties(EventArgument);

  // Build paging collection
  Pages = new PagerItemCollection(Pager);
}

Next, the SetPagerProperties() method is called and passed the EventArgument property. Remember the EventArgument is set to “next”, “previous”, or similar commands depending on which anchor tag the user clicked upon. From this command, the PageIndex property in the Pager object is set.

Now that you have the properties in the Pager object calculated, pass this Pager object to the PagerItemCollection object. By reading the TotalRecords, the TotalPages and the PageIndex properties in the Pager object, the PagerItemCollection builds the collection of pages used to display the anchor tags.

Create Pager UI on the Product List Page
To build the pager that the user sees on the list page, open the _ProductList.cshtml file and add a using statement at the top of the file to reference the CommonLibrary.PagerClasses namespace. It's in this namespace that the Pager classes are located.

@using CommonLibrary.PagerClasses

Move to the bottom of this file and add the code in this next snippet. This code is what takes the Pages property you just created in the SetPagerObject() method and builds each hyperlink you see in the pager at the bottom of Figure 1. Feel free to step through the code in the PagerItemCollection object to see how this pager is built.

<ul class="pagination">
@foreach (PagerItem item in Model.Pages) {
  <li class="page-item @item.CssClass">
    <a class="page-link" href="#"
       data-custom-cmd="page"
       data-custom-arg="@item.Argument"
       title="@item.Tooltip">
      @Html.Raw(item.Text)
    </a>
  </li>
}
</ul>

Get One Page of Data from the Products Collection
Before you can try out the paging, you have a few more pieces of code to write. First, open the ProductViewModel.cs file and add a using statement at the top of the file.

using CommonLibrary.PagerClasses;

Within the ProductViewModel class, add a new property to hold onto the total amount of product records you read from the table. The TotalProducts property is used to display how many records the user selected and display it on the page.

public int TotalProducts { get; set; }

Add a new method named PageProducts() that sets the TotalProducts property, then calls the SetPagerObject() method in the base class. Once the Pager object has been set up, you now need to get just the data for the current page of data. The starting row in the Products collection can be calculated by taking the current PageIndex property and multiplying that number by the PageSize property. Pass this result to the LINQ Skip() method on the Products collection. Then take the next PageSize number of records from the collection.

protected virtual void PageProducts() {
  TotalProducts = Products.Count;

  base.SetPagerObject(TotalProducts);

  Products = Products.Skip(
    base.Pager.PageIndex *
    base.Pager.PageSize)
    .Take(base.Pager.PageSize).ToList();
}

Locate the HandleRequest() method and modify the switch statement to handle the different commands. Modify the “search” command to reset the PageIndex property. Add a “page” command for when the user clicks on one of the pag-



er anchor tags. Call the PageProducts() method after the call to the SortProducts() method in both the “search” and “page” case statements.

case "search":
  Pager.PageIndex = 0;
  SortProducts();
  PageProducts();
  break;
case "sort":
case "page":
  SortProducts();
  PageProducts();
  break;

Add Hidden Input Fields for Paging
Open the _StandardViewModelHidden.cshtml file and add two new hidden input fields. You need to post back both the EventArgument and the current PageIndex values. The EventArgument is going to have the “data-custom-arg” value from the pager and the PageIndex is the page the user is currently viewing.

<input type="hidden" asp-for="EventArgument" />
<input type="hidden"
  asp-for="Pager.PageIndex" />

Add Total Records to Products Search Page
You added the TotalProducts property to the ProductViewModel class. It's now time to display that on the page. Put the following code within the card-footer in the _ProductSearch.cshtml file.

<div class="card-footer bg-primary text-light">
  <div class="row">
    <div class="col-8">
      <button type="button"
        data-custom-cmd="search"
        class="btn btn-success">Search
      </button>
    </div>
    <div class="col-4">
      <p class="text-right">
        Total Records: @Model.TotalProducts
      </p>
    </div>
  </div>
</div>

Set the EventArgument Property Using jQuery
The last thing you need to do before trying out the paging is to take the “data-custom-arg” attribute value and place it into the new hidden input field you just added. Open the Products.cshtml file and locate the $(document).ready() function and, after the line of code that updates the $(“#EventCommand”) hidden input field, add a new line of code to set the EventArgument as shown in the code snippet below:

// The $(document).ready() function

// Fill in command to post back to view model
$("#EventCommand").val($(this)
  .data("custom-cmd"));

// Fill in arguments to post back to view model
$("#EventArgument").val($(this)
  .data("custom-arg"));

Try It Out
You're now ready to try out the paging to ensure that you typed everything in correctly. Run the application and try clicking on the various anchor tags on the pager UI.

Changing the Page Size
By default, the PageSize property in the Pager object is set to 10. If you wish to adjust the size, modify the PageSize property in the controller. Open the ProductsController.cs file and, in the Products() method, add the following line of code just before the call to the HandleRequest() method.

_viewModel.Pager.PageSize = 5;

Also, in the [HttpPost] Products() method, add the following line of code just before the call to the HandleRequest() method.

vm.Pager.PageSize = 5;

Try It Out
Run the application again and you should see only five rows of data appear on each page in the table.

Cache the Products List
As you may have noticed, the Products are read from the database each time you sort or page through the data. Having to retrieve the data from the database every time is very inefficient, and this process can be fixed easily. If your product data does not change that often, cache the data in your Web server's memory. There are many methods you may employ for caching; for this article, let's use the Session object.

Add Property to Hold all the Product Data
In the Product view model class, there is a Products collection to hold the data for the one page of data to display. To avoid a round-trip to your SQL Server to get all your product data, add another property to hold the original list of product data. The list of all product data is placed into your cache after retrieving the data the first time. The data is retrieved from the cache on each post-back to populate the Products collection when the user requests the next page of data to display. Open the ProductViewModel.cs file and add a new property named AllProducts.

public List<Product> AllProducts { get; set; }

Each time the user posts back to the controller to either search, sort, or page, the SearchProducts() method is called to retrieve the data from the Repository class. In this method, modify the code to use the data in the AllProducts collection if that collection has been retrieved from the cache.

Locate the SearchProducts() method and modify it to look like Listing 4. In SearchProducts(), check if this is the first time the user has hit the product list page. If the AllProducts collection is null, then call the Search() method on the Repository class to get all of the product data from the database server and store that data into the AllProducts property.

In the Products() method in the ProductsController class, you're going to store the AllProducts collection into the



Session object. You're then going to retrieve the data from the Session object in the [HttpPost] Products() method and put that data back into the AllProducts collection. When the SearchProducts() method is called the second time, the AllProducts collection is not null and thus the Products collection is built by querying the AllProducts collection instead of going to the database server.

Listing 4: Modify the SearchProducts() method to use the cached data
public virtual void SearchProducts() {
  if (AllProducts == null) {
    if (Repository == null) {
      throw new ApplicationException(
        "Must set the Repository property.");
    } else {
      // Get data from Repository
      AllProducts = Repository
        .Search(SearchEntity)
        .OrderBy(p => p.Name).ToList();
    }
  }

  // Get data for the Products collection
  Products = AllProducts.Where(p =>
    (SearchEntity.Name == null ||
     p.Name.StartsWith(SearchEntity.Name)) &&
    p.ListPrice >= SearchEntity.ListPrice)
    .ToList();
}

Add Session State to MVC Core
As mentioned, you are going to use the Session state object that's built-in to MVC Core. In order to use Session, you must configure it in the Startup class. Open the Startup.cs file and locate the InjectAppServices() method you created in the last article. After the line of code services.AddDbContext(), add the following two lines of code to add services to support session state.

// The next two lines are for session state
services.AddDistributedMemoryCache();
services.AddSession();

Next, locate the Configure() method and find the line of code app.UseRouting(). After this line of code, add the following line of code to turn on session state.

app.UseSession();

Add Newtonsoft Package
The Session object in MVC Core can only store strings or integers. However, you need to put in a collection of Product objects. In order to do this, you need to serialize the collection into JSON. This can be done using the Newtonsoft package. Open a terminal window in the MVVMSample folder and execute the following command to bring in the Newtonsoft library.

dotnet add package Newtonsoft.Json

Store AllProducts Collection into Session
It's now time to cache the data you placed into the AllProducts collection. Open the ProductsController.cs file and add some using statements to the top of the file.

using System.Collections.Generic;
using Microsoft.AspNetCore.Http;
using MVVMEntityLayer;
using Newtonsoft.Json;

Locate the Products() method and, after the call to HandleRequest() has been called, get the data from the AllProducts collection, serialize it, and place it into the Session object using the SetString() method as shown in the code below.

HttpContext.Session.SetString("Products",
  JsonConvert.SerializeObject(
    _viewModel.AllProducts));

When the user clicks on any of the links that post back to the controller, the [HttpPost] Products() method is called. Place the following line of code just before the call to the HandleRequest() method to populate the AllProducts collection with the data you stored into the Session object.

vm.AllProducts =
  JsonConvert.DeserializeObject<List<Product>>
  (HttpContext.Session.GetString("Products"));

When the HandleRequest() method is called, the AllProducts collection has data in it, and thus in the SearchProducts() method, the Products collection is built from the data in this collection instead of calling the database server.

Try It Out
Run the application and try out the various features of the page. Try searching for data, try sorting the data, and try paging through the data. If you set a breakpoint in the SearchProducts() method, you can see that it only fetches the data from the Repository class the first time into the page. After that, the data is always coming from the cached data.

Summary
In this article, you learned how to add hyperlinks to each column header in a table to allow sorting of product data. Next, you added paging capabilities to the table, so the user only sees a limited amount of data at a time. This is a better user experience than having a large scrollable table. Finally, you learned how to use the Session object in order to avoid a round-trip to the database server each time the user clicks on something on the page. This saves time and resources and makes updating your page much quicker. The great thing about using the MVVM design pattern is the code in your controller is still very simple, and your view model class need only expose one public method. In the next article, you'll learn to add, edit, and delete product data using the MVVM design pattern.

Paul D. Sheriff



ONLINE QUICK ID 2008041

Transform Your ASP.NET Core API
into AWS Lambda Functions

Julie Lerman
thedatafarm.com/blog
twitter.com/julielerman

Julie Lerman is a Microsoft Regional Director, Docker Captain and a long-time Microsoft MVP who now counts her years as a coder in decades. She makes her living as a coach and consultant to software teams around the world. You can find Julie presenting on Entity Framework, Domain Driven Design and other topics at user groups and conferences around the world. Julie blogs at thedatafarm.com/blog, is the author of the highly acclaimed “Programming Entity Framework” books, the MSDN Magazine Data Points column and popular videos on Pluralsight.com. Follow Julie on twitter at julielerman.

In a recent CODE Magazine article (Discovering AWS for .NET Developers, May/June 2020 https://www.codemag.com/Article/2005041), you read about my first foray into using .NET in the Amazon Web Services (AWS) ecosystem. I explored the AWS Relational Database Service (RDS) and created an ASP.NET Core API using Entity Framework Core (EF Core) to connect to a SQL Server Express database hosted in RDS. In the end, I deployed my API to run on AWS Elastic Beanstalk with my database credentials stored securely in Amazon's Parameter Store to continue interacting with that same database.

Interacting with the database was a great first step for me and hopefully for readers as well. And it gave me enough comfort with AWS to set my sights on their serverless offering, AWS Lambda Functions. Some of the most critical differences between hosting a full application in the cloud and rendering your logic as functions are:

• Rather than paying for an application that's constantly running and available, you only pay for individual requests to a function. In fact, the first one million requests each month are free along with a generous amount of compute time. (Details at aws.amazon.com/Lambda/pricing)
• Serverless functions are also stateless, meaning that you can run many instances of the same function without worrying about conflicting state across those instances.
• Most of the management of serverless functions is taken care of by the function host, leaving you to focus on the logic you care about.

In this article, I'll evolve the ASP.NET Core API from the previous article to a Serverless Application Model (SAM) application which is a form of Lambda function.

Moving an Existing ASP.NET Core API to a Serverless App
This was such an interesting journey. And an educational one. Amazon has created what I'll refer to as a lot of “shims” to seamlessly host an ASP.NET Core API behind a Lambda function. The beauty of this is that you can write an ASP.NET Core API using the skills you already have and AWS's logic will provide a bridge that runs each controller method as needed.

ing. Although I do think it's important to have some understanding about how your tools work, there's a point where it's okay to say “okay, it just works.”

Let's walk through the steps that I performed to transform my API. While this article is lengthy, most of the details are here to provide a deeper understanding of the choices I've made and how things are working. But the actual steps are not that many. If you want to follow along, I've included the previous solution in the downloads for this article.

Creating the New Project
Start by creating a new project and in the template finder, filter on AWS and C#. This gives you four templates and the one to choose is AWS Serverless Application (.NET Core—C#). After naming the new project, you'll get a chance to choose a “Blueprint”, i.e., a sample template for a particular type of app. From the available blueprint options, choose ASP.NET Core Web API. This is the template that includes the plumbing to ensure that your controller methods can be run behind a Lambda function. The project that's generated (shown in Figure 1) looks similar to the one created by the ASP.NET Core Web API template with a few exceptions.

• One exception is the introduction of the S3ProxyController. This is just a sample controller that I'll remove from my project. I'm keeping the values controller so that I can validate my API if needed.
• Another is the aws-Lambda-tools-defaults.json file. This file holds settings used for publishing whether
That way, you get the benefits of serverless functions such as
the on-demand billing but continue to build APIs the way you
already know how. It took a bit of time (and some repeated
explanations and reading) to wrap my head around this. I
hope this article provides a quicker learning path for you.

If you installed the AWS Toolkit for Visual Studio as per the
previous article, then you already have the project template
needed to create the basis for the new API. I’ll start by cre-
ating a new project using the template and then copy the
classes and some code from the existing API into the new
project. The project template contains part of the “bridge”
I just referred to, and it also has logic that calls into some
additional tooling in AWS that provides more of the bridg- Figure 1: The project generated from the selected template

24 Transform Your ASP.NET Core API into AWS Lambda Functions codemag.com
Figure 2: The EF Core and SystemsManager package references added to the project file

you are doing so via the tooling or at the command line using AWS' dotnet CLI extensions.
• LambdaEntryPoint.cs replaces program.cs for the deployed application.
• LocalEntryPoint.cs replaces program.cs for running or debugging locally.
• The serverless.template contains configuration information for deploying the application. Specifically, this uses the AWS SAM specification, which is an AWS CloudFormation extension used for serverless applications.

Copying Assets from the Original API
Before looking at the Lambda-specific files, let's pull in the logic from the original API. In the downloads that accompany this article, you'll find a BEFORE folder that contains the solution from the previous article.

First, you'll need to add the NuGet references for the EF Core packages (SqlServer and Tools for migrations) as well as the SystemsManager extension you used for the deployed API to read the secured parameters stored in AWS. You can see the packages in the csproj file shown in Figure 2.

Next, I'll copy files from the previous application into the project and remove the S3ProxyController file. The files I copied in, highlighted in Figure 3, are the AuthorsController, the BookContext, the Author and Book classes, and the contents of the Migrations folder. As a reminder, the AuthorsController was originally created using the controller template that generates actions using Entity Framework.

Figure 3: Files copied into the project from my original API

Note that in the previous article, I created a SQL Server database instance in Amazon's RDS, let EF Core migrations create the database and tables, and then manually added some data via SQL Server Object Explorer. The "Before" solution that comes with this article has two changes related to the data. Its BookContext class now includes HasData methods to seed some data into the Authors and Books tables. Also, there is a second migration file, seeddata, that has the logic to insert the seed data defined in the BookContext. If you don't have the database yet, you'll be able to use the update-database migrations command to create the database and its seed data in your database instance. But you will have to create the database instance in advance.

Let Visual Studio Access Your Database
Note that if you're using the database you created from the previous article, you may need to re-grant access from the IP address of your PC. I hit this snag myself. The easiest way I found to do this was to view the RDS instances in the AWS Explorer, right-click on the desired instance and then choose "Add to Server Explorer." If your IP address doesn't have access, a window will open with your IP address listed and a prompt to grant access.

The new Startup class has some extra logic to interact with an AWS S3 Proxy, which is then used by the S3ProxyController that you just deleted. Because you don't need that, it's safe to completely replace this startup.cs file with the one from the original solution instead of copying various pieces of that file. Most important in there is the logic to build a connection string by combining details you'll add in shortly. The final bit of that logic attaches the concatenated connection string to the DbContext dependency injection configuration with this code:

services.AddDbContext<BookContext>
    (options =>
        options.UseSqlServer(connection));

My startup class (and the others) from the earlier project has the namespace, CodeMagEFCoreAPI. My new project's namespace is different. Therefore, I needed to add

using CodeMagEFCoreAPI;

Add this using statement to both the LambdaEntryPoint and LocalEntryPoint classes. The compiler will remind you about this.

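To make the connection-string concatenation described above concrete, here's a minimal, self-contained C# sketch. The helper name BuildConnectionString and the exact formatting are my own illustration and an assumption, not code from the article's solution; the real Startup reads the base string from configuration and appends the DbUser and DbPassword values that you'll add shortly.

```csharp
using System;

public static class ConnectionStringHelper
{
    // Hypothetical illustration of the Startup logic: the base connection
    // string comes from appsettings.json and the credentials come from
    // user secrets locally (or AWS Parameter Store when deployed).
    public static string BuildConnectionString(
        string baseConnection, string user, string password)
    {
        return $"{baseConnection};User Id={user};Password={password}";
    }

    public static void Main()
    {
        var connection = BuildConnectionString(
            "Server=myserver.rds.amazonaws.com,1433;Database=BookDatabase",
            "myusername",
            "myfancypassword");
        Console.WriteLine(connection);
    }
}
```

The resulting string is what gets handed to options.UseSqlServer(connection) in the DbContext registration.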
The next assets you need from the earlier solution are the connection string and its credentials.

The connection string to the database goes into appsettings.json, to be read by the startup class code that builds the connection string. My setting looks like this, although I've hidden my server name:

"ConnectionStrings": {
    "BooksConnection":
        "Server=codemagmicro.***.us-east-2.
         rds.amazonaws.com,1433;
         Database=BookDatabase"
}

Keep in mind that JSON doesn't like wrapped lines. They are only wrapped here for the sake of this article's formatting. Also don't forget that very important comma to separate the Logging section from the ConnectionStrings section. I can attest to how easy it is to make that mistake.

The ASP.NET Core Secret Manager will supply the DbPassword and DbUser values for the connection string at design time but won't get stored into the project, which means that you don't have to worry about accidentally deploying them. As a reminder, right-click on the project in Solution Explorer, choose Manage User Secrets, which will open a json file for the secrets. Add your secret credentials into this file, for example:

{
    "DbPassword": "myfancypassword",
    "DbUser": "myusername"
}

These values will be available to the Configuration API. With all of this in place, I'm now able to run the new API locally on my computer—hosted by .NET's Kestrel server—by choosing the project name in the Debug Toolbar. The apps read the password and user ID from the ASP.NET secrets and with those, are able to interact with my AWS hosted database.

Figure 4: Targeting the project to run locally using .NET's Kestrel server

Watch Out for Publicly Available Production Databases
Keep in mind that when originally creating the database instance (in the earlier article), I specified that it should be publicly available which, combined with setting accessibility to my development computer's IP address, allows me to debug the API in Visual Studio while connecting to the database on AWS. I can also connect through Visual Studio's database tools, SSMS, Azure Data Studio or other tools. However, for a production database that's being accessed by another AWS service (e.g., this Lambda function app, or my API deployed to AWS Elastic Beanstalk) you should disable the public availability for the instance. For testing, you could have one test database in its own publicly available instance (still limited to select IP addresses) and leave the production database locked down.

Running Locally Isn't Using Any Lambda Logic
At this point, Visual Studio isn't doing anything more than running the API in the same way as it would for an ASP.NET Core API, ignoring all of the Lambda-specific logic added by the template. The new API runs locally and the puzzle pieces are in place for this application to run as a Lambda function, but they aren't being used yet.

So far, you're seeing that your existing skills for building ASP.NET Core apps remain 100% relevant, even for testing and debugging your apps. You don't have to worry about issues related to the Lambda function getting in the way of building and debugging the app. All of that logic stays out of your way for this part of the application building. That's because the project knows to run locally from the LocalEntryPoint class, which avoids all of the Lambda infrastructure. Other than the class name, LocalEntryPoint.cs is exactly the same as program.cs in a typical ASP.NET Core API project. And by default, the debugger will start by calling its Main method.

Understanding How Your API Will Transform into a Lambda Function
With my API now running successfully, it's time to trigger the special logic included in the template to run all of this as a Lambda function.

I think it's important to understand some of the "magic" that is happening for this scenario. Of course, it's not magic. It's some very clever architecture on the part of the AWS Lambda team. Keep in mind that the biggest difference between running the API as a regular Web application and running it as a serverless application is that the Web application is always running and consuming resources, whereas the serverless application is a Lambda function that acts as

Figure 5: How your hosted API works after being transformed during deployment

a wrapper to your controller methods. And by running on demand, that means you are only paying for the resources used in the moments that function is running, not while it's sitting around waiting for a request to come in.

When the app is deployed (using some of the special assets added by the template) it doesn't just push your application to the cloud, it builds a full Lambda function infrastructure. Because this function is meant to be accessed through HTTP, it's shielded by an API Gateway—the default—but you have the option to switch to an Application Load Balancer instead. Unlike a regular ASP.NET Core API, the controller methods aren't exposed directly through URIs (or routing). A Lambda function wraps your controllers and runs only on demand when something calls your API. If nothing calls, nothing is running.

What's in between the gateway and your controller is the Amazon.Lambda.AspNetCoreServer, which contains its own Lambda function that translates the API Gateway request into an ASP.NET Core request. The requests to that Lambda function are all that you pay for in this setup, not the controller activity; that is, after the monthly free allocation. There's more to how this works but for the purposes of this article, this should be enough to have a high-level understanding of what appears to be magic.

Engaging the SAM and Lambda Logic in Your API
So now let's take a look at some of the assets shown in Figure 1 that were created by the template. Your friend here is the Readme markdown file included in the project. It gives some insight into the assets and I will highlight some of the relevant descriptions here for you:

• serverless.template: an AWS CloudFormation Serverless Application Model template file for declaring your Serverless functions and other AWS resources
• aws-Lambda-tools-defaults.json: default argument settings for use with Visual Studio and command line deployment tools for AWS
• LambdaEntryPoint.cs: class that derives from Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction. The code in this file bootstraps the ASP.NET Core hosting framework. The Lambda function is defined in the base class. When you ran the app locally, it started with the LocalEntryPoint class. When you run this within the Lambda service in the cloud, this LambdaEntryPoint is what will be used.

In addition to the Using statement mentioned above, there is one more change to make in the LambdaEntryPoint class. If you read the earlier article, you may recall that there was also a lesson in there on storing the database UserId and password as secured parameters in AWS. I was able to leverage the SystemsManager extension to read from AWS Systems Manager where the parameters are stored. You'll need to ensure that the deployed app can do that by adding the following builder.ConfigureAppConfiguration code into the Init method of the LambdaEntryPoint class. This will also require a Using statement for Microsoft.Extensions.Configuration.

protected override void Init
    (IWebHostBuilder builder)
{
    builder.ConfigureAppConfiguration(
        (c, b) =>
            b.AddSystemsManager("/codemagapi")
    );
    builder.UseStartup<Startup>();
}

In the serverless.template file, you also need to make a simple change to the policies controlling what the function can access, so that it can read the parameters.

By default, AWS' AWSLambdaFullAccess policy is defined directly in the serverless.template without using roles. You can see this in the Properties section of the AspNetCoreFunction resource in the file:

"Role": null,
"Policies": [
    "AWSLambdaFullAccess"
],

You just need to add two more policies, AmazonSSMReadOnlyAccess and AWSLambdaVPCAccessExecutionRole. The Role property is not needed at all, so I removed it.

"Policies": [
    "AWSLambdaFullAccess",
    "AmazonSSMReadOnlyAccess",
    "AWSLambdaVPCAccessExecutionRole"
],

The SSM policy gives the deployed function permission to access the parameters in the Systems Manager. The VPCAccess policy gives the function permission to wire up a connection to the VPC that's hosting the database. I'll point out when this comes into play after deploying the function.

The Relationship Between the Function and the Database's VPC
At first, I thought that specifying a VPC for the application meant that I would be pushing the app into the same VPC as the database. This misunderstanding led me on a wild goose chase. I started by creating a separate VPC and could never get it to communicate with the database. It's important to understand that you aren't creating a VPC for hosting the application, but identifying the existing VPC with the subnet(s) that allow access to the RDS instance. It's like you're making an introduction between the VPC and the Systems Manager.

There is some more cleanup you can do in the serverless.template. Many settings in there are related to the S3Proxy controller that you deleted. You can delete the related sections.

The sections you can delete, starting from the top are:

• Parameters
• Conditions
• The Environment section within Resources: AspNetCoreFunction
• Bucket within Resources
• S3ProxyBucket within Outputs

Take care to get correct start and end points when deleting sections from this JSON file, including commas.

A related setting in appsettings.json is the AppS3Bucket property. You can delete that as well.

Deploying the Serverless Application
Although you can run the non-Lambda version of the app locally as I did earlier, you can't just install the Lambda service on your computer to check out how it works with the infrastructure. You need to deploy the application to AWS. This is a simple task, thanks again to the toolkit. Let's walk through that process.

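Before walking through the deployment, it can help to see where the edits described above live in the file as a whole. The following is a sketch of a cleaned-up serverless.template based on the AWS SAM specification and the blueprint's general shape, not the article's exact file; the Handler string, runtime version, memory size, and timeout are placeholder assumptions that will differ in your project.

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Description": "An ASP.NET Core API running behind a Lambda function.",
  "Resources": {
    "AspNetCoreFunction": {
      "Type": "AWS::Serverless::Function",
      "Properties": {
        "Handler": "MyProject::MyProject.LambdaEntryPoint::FunctionHandlerAsync",
        "Runtime": "dotnetcore2.1",
        "CodeUri": "",
        "MemorySize": 256,
        "Timeout": 30,
        "Policies": [
          "AWSLambdaFullAccess",
          "AmazonSSMReadOnlyAccess",
          "AWSLambdaVPCAccessExecutionRole"
        ],
        "Events": {
          "ProxyResource": {
            "Type": "Api",
            "Properties": { "Path": "/{proxy+}", "Method": "ANY" }
          },
          "RootResource": {
            "Type": "Api",
            "Properties": { "Path": "/", "Method": "ANY" }
          }
        }
      }
    }
  },
  "Outputs": {
    "ApiURL": {
      "Description": "API endpoint URL for the Prod environment",
      "Value": { "Fn::Sub": "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/" }
    }
  }
}
```

The two Api events are what wire the API Gateway's proxy and root routes to the single function that hosts all of your controllers.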
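Similarly, for orientation, an aws-Lambda-tools-defaults.json file of the kind used here generally looks something like the following once the "template-parameters" property is removed. The values shown are placeholders of my own, not the article's actual settings.

```json
{
  "Information": [
    "This file provides default values for the deployment wizard inside Visual Studio and the AWS Lambda commands added to the .NET Core CLI."
  ],
  "profile": "default",
  "region": "us-east-2",
  "configuration": "Release",
  "framework": "netcoreapp2.1",
  "s3-prefix": "MyProject/",
  "template": "serverless.template",
  "s3-bucket": "my-deployment-bucket",
  "stack-name": "MyStack"
}
```

The s3-bucket and stack-name values are the ones the publish wizard prompts for, and they're saved back to this file so later deployments (from the wizard or the dotnet CLI extensions) can reuse them.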
The aws-Lambda-tools-defaults.json file contains configuration information for publishing the function. In fact, the file also has configuration information for creating the S3 Proxy used by the controller which we have now deleted. Deleting the "template-parameters" property from the file will clear that extraneous information.

The context menu for the serverless project has the option: "Publish to AWS Lambda…". This triggers a form to open where you can specify settings for your deployed application. The profile and region are pre-populated using your AWS Explorer settings.

You'll need to name the CloudFormation stack and bucket for the deployment. All of the resources for your application will be bundled up into a single unit and managed by CloudFormation. This is what the stack refers to. The S3 bucket (different from the S3 Proxy used by the deleted controller) will store the application's compiled code for your function. Any existing buckets in your account are listed in the drop-down, and you can create a new one with the New button.

Note that bucket names must be all lower case. Also note that if you selected an existing bucket, it needs to be one that's in the same region as the one where you're deploying the Lambda function. My settings are shown in Figure 6.

Figure 6: Publishing the serverless application

Now you're ready to publish the application, so just click Publish. You'll immediately start to see log information about the steps being taken to build and push the application to the cloud. After that, a log is displayed showing what's happening in the cloud to create all of the infrastructure to run the application. When it's all done, the status shows CREATE_COMPLETE and the final logs indicate the same (Figure 7).

Figure 7: The final logs when the deployment has completed

The URL of the application is shown on the form. Mine is https://hfsw7u3sk5.execute-api.us-east-2.amazonaws.com/Prod.

Sending a request to the values controller will succeed, in my case at https://hfsw7u3sk5.execute-api.us-east-2.amazonaws.com/Prod/api/values, but the authors controller will fail with a timeout. That's because you have a bit more security configuration to perform on the newly deployed function.

Let's first look in the portal to see what was created and then address the last bits of security for the authors controller in the AWS cloud to access the authors data in the database.

Examining What Got Created in the Cloud
If you refresh the AWS Lambda node in the AWS Explorer, you should see your new function app listed. Its name will start with the CloudFormation stack you specified in the publish wizard, concatenated with "AspNetCoreFunction" and a randomly generated string. You can update some of the function's configuration, look at logs, and more. You might notice the Mock Lambda Test Tool in the toolkit. But this is not for debugging the cloud-based Lambda from Visual Studio. It's for performing an advanced form of testing to debug problems in the deployed function. You can learn more about that tool here: https://aws.amazon.com/blogs/developer/debugging-net-core-aws-Lambda-functions-using-the-aws-net-mock-Lambda-test-tool/. I'll come back to the configuration page shortly.

When learning, I like to also see the function in the portal. It feels more real and more interesting to me. Here's how to do that.

Log into the portal and be sure to set your view to the region where you published the function. Select Lambda by dropping down the Services menu at the top. From the AWS Lambda dashboard, select the Functions view. Here, you'll see the same list of Lambda functions in your account, filtered by whatever region is selected at the top of the browser page.

Click on the function to open its configuration page. At the top, there's a note that the function belongs to an application with a link to see some information about the application: the API endpoint and a view of the various resources (your Lambda Function, an IAM role associated with the function, and the API gateway). There are other details to explore in the application view, such as a log of deployments and monitoring.

Back in the function's overview page, the first section shows a visual representation of the function with an API gateway block and the function itself. Click on the API gateway to see the two REST endpoints that were created: one with a proxy and one without. Next, click on the block for the function and you'll notice that the display below changes. If your app was using a scripting language, there would be a code editor available. Because you're using a language that requires a compiler, uploading a zip file is the only option—and that's what the Publish wizard did for you—so the code editor is hidden. Keep scrolling down to see more sections: Environment variables, Tags, and a few others.

The block of interest is the currently empty VPC area. VPC is an acronym for Virtual Private Cloud, a logically isolated section of the AWS cloud. The VPC settings are the key to giving the function permission to access the database instance. Currently, the lack of that access is why the authors controller is failing.

Understanding and Affecting What Permissions the Function Has
Thanks to the AmazonSSMReadOnlyAccess policy you added to the function in the serverless.template file, the function is able to access the UserId and Password parameters you stored in the Systems Manager as part of the previous article. However, even though it can read the connection string credentials for the database, it isn't able to connect to the VPC where the database lives. Everything is secure by default here. Even from other services attached to the same IAM account.

The database instance is inside the default VPC in my AWS account. That's most likely the case for you if you followed the demo in the earlier article. The function itself isn't inside a VPC. As I explained earlier, it was deployed to a CloudFormation stack, the one you named in the Publish wizard. What you need to do next is tie the function to the VPC that contains the database instance. You can do that through the portal or using the Function configuration page of the Toolkit in Visual Studio. I'll show you how to do this back in Visual Studio.

The Toolkit's function configuration page has a VPC section and in there, a drop-down to select one or more VPC Subnets to which you can tie the function and a drop-down for security groups. The latter is disabled because it only shows security groups for the VPC(s) you've selected.

A subnet is essentially a part of an IP range exposed through the AWS cloud. A VPC can have one or more subnets associated with it. By default, the default VPC has three subnets and each of those is a public subnet, meaning that it's allowed to receive inbound requests from the Internet (and can make outbound calls), e.g., a Web server. To connect the Lambda function to this VPC, you can select any one of the subnets from the default VPC. If the VPC has private subnets, connecting to one of those will work as well. Based on all of my experiments (and guidance specifically for this article from experts at AWS), you can randomly choose any subnet attached to the VPC as I've done in Figure 8. Note that the "Map Public IP" column isn't an indication of whether the subnet is public or private.

Figure 8: Selecting a subnet within the database's VPC

Having selected that subnet, the Security Groups drop-down now gives me two security group options—these are the

only groups tied to that VPC. If you have more, they'll all be available in the drop-down. There will always be a default security group, so you can select that one.

Once you've specified the subnet and security group, save the settings by clicking the "Apply Changes" icon at the top of the Function page. Unfortunately, the toolkit doesn't provide status. So, I flipped back to the portal view, refreshed the Web page and waited for the message "Updating the function" to change to "Updated". The message is displayed right at the top of the page in a blue banner so it shouldn't be hard to find. This took about one minute.

Remember adding the AWSLambdaVPCAccessExecutionRole to the serverless.template policies earlier? That policy is what gave the Lambda function permission to perform this action of attaching to the VPC.

One Last Hook: VPC, Meet Systems Manager
Now, if you test the api/values again or the api/authors, you may think you've broken everything! Both controllers time out. But you haven't broken the function. The function itself is able to access the parameters, but the VPC is not. Therefore, now that the function has been configured to run attached to my VPC, it can't reach back to Parameter Store over the Internet. Recall the logic you added to the LambdaEntryPoint class:

builder.ConfigureAppConfiguration((c, b)
    => b.AddSystemsManager("/codemagapi"));

That's the very first line of code executed by the application, and since the VPC isn't allowed to reach the Systems Manager by default, that code fails before even trying to run any controller code.

The final piece of the puzzle is to allow the VPC access to the Systems Manager. There are two options. One is to configure the VPC to allow the Lambda function to go out to the Internet and then to the service for the Parameter Store. The other is to configure a channel (called an endpoint) on the VPC that allows the function to call the Systems Manager without ever leaving the AWS network. The latter is the simplest path and the one I chose.

I'll create an endpoint on the default VPC, giving the endpoint permissions to call the Systems Manager. Endpoints aren't available in the toolkit, so you'll do that in the portal, and luckily, it's just a few steps where you can rely mostly on default settings. It's not a bad idea to get a little more experience with interacting with the portal. Alternatively, you could do this using the AWS CLI or AWS' PowerShell tools as well.

In the portal, start by selecting VPC from the AWS Services list. From the VPC menu on the left, select Endpoints, then Create Endpoint. Filter the available service names by typing ssm into the search box, then select com.amazonaws.[region].ssm.

From the VPC drop-down, select the relevant VPC. It's handy to know the ID of your VPC, or its name, if you've assigned one in the console. Once selected, all of that VPC's public subnets are preselected, which is fine. In fact, all of the rest of the defaults on this page are correct, so you can scroll to the bottom of the page and click the Create endpoint button.

That's it! The endpoint should be ready right away. The application now has access to the parameters, and it's able to use those parameters to build the connection string and access the database. Finally, the api/values and api/authors should successfully return their expected output.

SPONSORED SIDEBAR: Need Cloud Help? CODE Can Help!
Take advantage of a FREE hour-long, remote CODE Consulting session (yes, FREE!) to jumpstart your organization's plans to develop solutions in the cloud. Got questions? We'll do our best to answer them! No strings. No commitment. For more information, visit www.codemag.com/consulting or email us at [email protected].

A Journey, But in the End, Not a Lot of Work
Although this article has been a long one, so much of what you read was to be sure you understood how things work and why you were performing certain steps. In fact, the journey to modernize your ASP.NET Core API to AWS Lambda functions doesn't entail a lot of work and the value can be significant.

You created a project from a template, copied over files from the original API and made a few small changes to a handful of files. With this, the API was already able to run locally in Visual Studio.

Then you used a wizard to publish the API to AWS as a Lambda function and because the API interacts with a SQL Server database in Amazon RDS (using Entity Framework Core), you needed to enable a few more permissions. That was only two steps: Connect the database's VPC to the function and create an endpoint so that VPC was able to access the credentials that are stored as AWS parameters.

Although this exercise was focused on an existing API, you can also just create a function app from scratch with this template and, using the new project, build an ASP.NET Core API from scratch using all of the knowledge you already have for doing that without having to learn how to build Lambda functions. Surely, like me, once you've whetted your appetite with this, you'll probably be curious and ready to explore building straight up Lambda functions next.

Julie Lerman

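A closing note on the endpoint step: if you'd prefer infrastructure-as-code over portal clicks for the VPC endpoint, a CloudFormation resource along these lines describes the same thing. This is a hedged sketch — the IDs and region are placeholders, and the article itself created the endpoint through the portal.

```json
{
  "Resources": {
    "SsmVpcEndpoint": {
      "Type": "AWS::EC2::VPCEndpoint",
      "Properties": {
        "ServiceName": "com.amazonaws.us-east-2.ssm",
        "VpcEndpointType": "Interface",
        "VpcId": "vpc-0123456789abcdef0",
        "SubnetIds": [ "subnet-0123456789abcdef0" ],
        "SecurityGroupIds": [ "sg-0123456789abcdef0" ],
        "PrivateDnsEnabled": true
      }
    }
  }
}
```

With PrivateDnsEnabled set, the SystemsManager extension's calls to the regional ssm service resolve to the endpoint inside the VPC, which is what lets the function read the parameters without Internet access.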
ADVERTORIAL

Break into the Tech Industry: Enroll in an Online Tech Academy Coding Boot Camp today!
The Tech Academy is a technology school that delivers online training, with students all over the world. In fact, they just received 2020's Best Online Coding Bootcamp Award.

Learn Coding Anywhere
The Tech Academy programs require no technical background or coding experience. Our classes are designed for absolute beginners. We specialize in coding boot camps that train students in a wide range of technology subjects, including:

• Website development
• Computer programming
• Design
• Data science
• And more...

These programs can be completed in as little as 8 weeks and prepare graduates for working in the tech industry.

The Tech Industry is Still Hiring
Despite recent global events, we are placing a high number of graduates in remote, technical positions. While many companies have been struggling, the tech industry has been impacted less than others.

During these uncertain times, one thing is for sure: the need for technology isn't going anywhere – in fact, it's increasing.

Prepare for your future today by completing a Tech Academy Boot Camp from the comfort and safety of your own home.

What Sets the Tech Academy Apart
There are other online schools, so what makes The Tech Academy special?

• No tech background or coding experience is required
• Online and in-person training
• Open enrollment—start anytime
• Flexible scheduling options
• Self-paced program
• Outstanding job placement assistance
• Multiple financing options

Start Your Career Path Today
The Tech Academy is enrolling now!

Contact us at: [email protected]
Or call: (503) 206-6915

FIND OUT MORE AT LEARNCODINGANYWHERE.COM


ONLINE QUICK ID 2008051

Power Query: Excel’s Hidden Weapon


You've certainly worked with Excel extensively. You know its ubiquity across the modern business world. Excel examples range from debt forecasting and pricing models that enable decision-making to ad hoc accounting reports. I absolutely embrace its versatility, its incredible calculation capabilities, and its stable presence in an ever-changing business world. However, it still can instill fear and anxiety of running into potential problems like finding undetectable mistakes in large files or unknowingly using an outdated data set. For those reasons, Excel often becomes the target of ire in the analysis process rather than the users who perhaps incubated the problem to begin with.

Helen Wall
www.linkedin.com/in/helenrmwall/
www.helendatadesign.com

Helen Wall is a power user of Microsoft Power BI, Excel, and Tableau. The primary driver behind working in these tools is finding the point where data analytics meets design principles, thus making data visualization platforms both an art and a science. She considers herself both a lifelong teacher and learner. She is a LinkedIn Learning instructor for Power BI courses that focus on all aspects of using the application, including data methods, dashboard design, and programming in DAX and M formula language. Her work background includes an array of industries and numerous functional groups, including actuarial, financial reporting, forecasting, IT, and management consulting. She has a double bachelor's degree from the University of Washington, where she studied math and economics, and also was a Division I varsity rower. On a note about brushing with history, the real-life characters from the book The Boys in the Boat were also Husky rowers that came before her. She also has a master's degree in financial management from Durham University (in the United Kingdom).

You might ask what better options exist out there to manage these large data sets? The answer comes by utilizing Power Query. I'm going to show you how it alleviates many of these concerns by making the entire ETL framework much more efficient and scalable. It's an Excel game-changer.

What Is Power Query?

What exactly is Power Query? It's a tool that sits inside Excel (as well as Power BI) that enables you to automate the ETL process of bringing data into Excel. This query editor extracts data from different data sources, transforms this data, then finally loads it into Excel where you can create additional calculations and modeling. Perhaps more importantly, if you want to repeat this process of bringing in the same data every month, you can refresh the entire ETL framework with the click of a button. Power Query provides the ideal bridge between organization segments because the business will find that it's often an easy process to set up and maintain, and developers in turn can reduce their scheduling obligations to maintain these types of reports.

Where can you get your data from? Power Query probably has at least one connection that works for you (plus many more) including:

• SQL Server
• SAP
• Azure applications
• Amazon Web Services
• Web API queries

Once you connect to the data source, you can then transform the data connection into a useful data set by employing a plethora of built-in query editor interface functions like:

• Trimming spaces
• Splitting a column into several columns
• Creating new calculations
• Grouping fields
• Pivoting to reshape data

After applying these transformation steps, you then load this clean, well-structured data table directly into Excel. Concerned about a data update later today that could change all your numbers? Simply refresh the entire ETL framework with the click of a button.

Power Query allows business users to develop reports with current technology that uses no code, and it allows developers to create something tangible for the business side. It's easy to access, requires very little coding, and is easy to refresh. It's a scalable solution that can keep up with the demands of your organization without requiring a massive investment of time and energy to set up the process that you would face with a larger solution like setting up the data in an enterprise-wide database.

Our Challenge: Converting Currencies

Let's say you work for a company that does business in both the US and the European Union. In addition to worrying about differing rules and laws for business in multiple countries, you also need to worry about the conversion rates between these two monetary regions. Let's say, for the sake of this project, that you want to convert these currencies into US Dollars for your financial reports.

You need to obtain the currency data first, and you can get it from Federal Reserve Economic Data (FRED) from St. Louis, a free resource that allows you to not only view the trends for key economic and finance data, but also download it or connect to its API. Here's the link to the FRED webpage (Figure 1): https://fred.stlouisfed.org/series/DEXUSEU

Remember that you want to make the process of getting data more efficient. Downloading the data before connecting to it adds steps to the process rather than reducing them, which in turn increases completion time and the potential for making mistakes. There's not typically one right answer to solving a problem, but using the FRED API is both easy to set up and refresh in future months.

Before you jump into working in Excel, you need to first get access on the FRED website:

1. Navigate to the FRED account sign-up page to get your own account if you don't already have one. Remember your username and password; you will need them later to configure your sign-in credentials in Power Query.
2. Because you'll be using the API query connection to the FRED data, you'll need to sign up for your own unique API key through the API menu and requesting a new key (Figure 2). Once you add this API key, you'll see it appear on this page.

Accessing Power Query

For those familiar with Microsoft's Power BI Desktop, you know that initially opening the application immediately prompts you to select your data source. After selecting a data connection type, it takes you into the Power Query Editor to begin the transformation process and add any additional queries. Excel offers similar ETL capabilities in its own Power Query Editor. Unfortunately, when you open Excel, it doesn't prompt you to set up a data connection, and thus few business users even know this incredible tool exists.

To access Power Query in Excel:

1. From the Data menu, choose the Get Data menu item.
2. You will next see the prompt to choose your data source

32 Power Query: Excel’s Hidden Weapon codemag.com


Figure 1: FRED website data

Figure 2: API key



Figure 3: API query setup

from a large array of data connectors, such as SQL Server, another Excel file, or a Web connection. You can also select Launch Power Query Editor.

Note that you can only access Power Query in the later Excel versions, so for those of you working with earlier Excel versions, you must update to a later version! Once you create a new query and load it to Excel, you can open Power Query again by going back into the query editor. You can also refresh the data directly in the Excel interface without opening the query editor.

Making Power Query Work

You want to create an elegant ETL process in Power Query to first bring in the data, then later refresh it with a single button. You'll see just a few key concepts within the Power Query capabilities in this example. When you create your own queries, you can leverage many more existing functionalities.

Step 1: Extract the Data

For this project, you're going to obtain the FRED US-Euro exchange rate data by querying the FRED API. This API connection works with a lot of data series, and it's also quite straightforward to configure. You set up the FRED API query as a single URL string comprised of three components: the API endpoint, the query parameters, and your own unique API key. I typically test out API connections and queries in a resource like Swagger Inspector (Figure 3). Although Power Query is an incredibly useful tool for working with data, it's unfortunately not a compiler and doesn't give much feedback on errors, unlike tools like the Swagger Inspector. Remember to replace the sample API key from the documentation with your own unique API key. From the FRED

Figure 4: XML table for the API query
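To see concretely how those three pieces (endpoint, query parameters, and API key) combine into one URL, and what an observations response roughly contains, here is a hedged Python sketch. The XML shape and sample values are assumptions for illustration only, and "yourkey" is a placeholder rather than a real key:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# The observations endpoint from the article; "yourkey" is a placeholder.
ENDPOINT = "https://api.stlouisfed.org/fred/series/observations"

def build_fred_url(series_id, api_key):
    # One URL string from the three components: endpoint, query
    # parameters, and your unique API key.
    return ENDPOINT + "?" + urlencode(
        {"series_id": series_id, "api_key": api_key})

url = build_fred_url("DEXUSEU", "yourkey")

# A FRED-style XML snippet; the exact shape is assumed for illustration.
sample = """<observations>
  <observation date="2020-06-01" value="1.1116"/>
  <observation date="2020-06-02" value="1.1173"/>
</observations>"""

# Each observation becomes a (date, value) pair, which is roughly what
# the table object in Power Query exposes after the connection is made.
rows = [(o.get("date"), o.get("value"))
        for o in ET.fromstring(sample).iter("observation")]
```

Power Query does this URL assembly for you through the Web connector dialog; the sketch only makes the moving parts visible.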



website, you can learn more about how to set up these queries in the API documentation webpage. Here's a link to try your own API key: https://api.stlouisfed.org/fred/series/observations?series_id=DEXUSEU&api_key=yours

You may recognize the format of the API query results as the XML data structure. Power Query can handle API query results returned in many other formats as well, such as JSON data structures. There are limitations for connecting to Web data like this in Power Query, but it's an efficient tool to create a simple API connection like this.

To configure the API query connection in Power Query:

1. Select the Web connection option.
2. In the dialog box that opens, paste the URL string for the API connection with your own API token into the URL text box (Figure 4).
3. Notice that you can also select advanced options for the Web connection, like splitting up the query into several pieces (splitting up the parameters, for example) or using the API key or token as a header.
4. Confirm these selections, and you'll see the query editor process the connection by importing it.

As part of the extract process, you can set up log-in credentials specific to the FRED website because the API query only works with an active login. Passing these credentials into Power Query enables you to refresh the entire query without logging into the FRED website. Here's how:

1. Select the Source Settings option, which opens a new window.
2. Select the FRED API query to edit the query permissions.
3. In the Basic authorization selection, input the username and password for your FRED account (Figure 5).

Figure 5: Source settings for FRED log-in

After setting up the connection, you see the query results in the middle of the screen. On the left side, you see a query list where you can add additional queries. On the right side of the screen, you see the applied steps, which will tell you the functions you perform on the query during the ETL process. You see a Source step automatically added to this applied steps list. Double-click on the gear wheel next to the step name, which opens the source connection details you see in Figure 5. Power Query uses a bit of AI capability to determine on its own to read this Web connection in the XML data structure. Why, then, don't you see the results returned in a structured table format? You'll see in the next step how to transform this initial connection into a useful data table.

Step 2: Transform the Data Connection

Power Query returns the API query results in the XML structure format with a single record or row of data consisting of several columns providing the connection metadata, as you see in Figure 6. Although the metadata provides key information about the data, like time stamps, you want to focus in on the Table hyperlink in the observation column, which represents a Power Query table object. Power Query objects consist of a combination of variables, functions, and data structures. They enable transformation capabilities such as drilling into or expanding data tables. In this example API query, the table object represents the data returned from the API query in the XML format.

Figure 6: Initial table object

If you work in Power Query a lot, you'll encounter table objects frequently and will find them immensely useful. You can access the data in table objects in a few different ways:

• Click into the Table hyperlink, which enables you to drill directly into the table. Notice that you no longer see the metadata columns in this view and you instead see the rows and columns within the table object you just drilled into.
• You will also notice a diverging arrow button next to the column name. Choosing this button opens a selection menu to select the columns to expand within the table object. Because you're not drilling into a single table object, you can still see the metadata in the view. For queries returning multiple records with table objects, expanding the table objects retains all the data in each table object, and not just the data in a single table object.

Drilling into the table object transforms the returned API query data into a readable data format, with four columns in this new view. You see a new applied step on the right side of the screen. Of these four columns, I only want to keep the attribute col-


umns for the date and value because the other columns serve as metadata rather than useful data. You can select the first two columns at the same time and delete them together. You can then rename the columns to date and monthly exchange rate respectively to make them more readable. Power Query also gives you the capability to make changes to a previous step that ultimately update the rest of the query. If you select the step before the step to change the data type, you see that these rows throwing errors use a period in place of an empty field to represent closed markets on holidays. When you attempt calculations with this data later, you'll run into problems. To remove the errors from this field, select Remove Errors from the menu in the field name, which gets you to the updated data set (Figure 7).

In this query, each applied step takes the result from the applied step before it and applies the current transformation step on top of it. If you open the Advanced Editor, you can see how Power Query applies these transformation steps through the functional M language by automatically writing out the steps. You can create some impressive custom queries with a little background knowledge of M. Start by first slightly tweaking the existing applied steps in the Advanced Editor, and then transition to creating more complex M code.

Figure 7: Transformed data table

Step 3: Aggregate the Calculations

You can change the shape and orientation of a data table by leveraging several existing functionalities in the query editor. The grouping functionality lets you determine a field in the existing table to configure as a dimension in the new table shape. You can aggregate a numeric field by summing, averaging, or counting it over the grouping dimension. Let's say that you want to calculate the average exchange rate for the entire month rather than the daily exchange rate. Before applying

Figure 8: Month End column



the grouping functionality, you first create a new column for the last day of the month for each existing date. Select to add a new column to the data table, then enter the formula referencing the existing date field that you see in Figure 8.

Now you can group the data with the new date field. Select the Group By functionality from the top Transformation menu. In the dialog box group, use the Month End Date as the dimension in the dropdown menu, then create a new column named Average Exchange Rate by choosing the Average Operation from the dropdown menu to calculate using the Column of Daily Exchange Rate. In setting up this simple grouping, the query editor returns a table in a consolidated shape with two columns: the last day of the month column and the new average currency rate for the entire month (Figure 9).
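The month-end column plus the Group By average amount to a simple aggregation. As a hedged illustration in Python (sample rates invented; in Excel this is all done through the Power Query dialogs, not code):

```python
import calendar
from collections import defaultdict
from datetime import date

# Invented sample of daily rates; the article's data comes from FRED.
daily = [
    (date(2020, 5, 28), 1.10),
    (date(2020, 5, 29), 1.11),
    (date(2020, 6, 1), 1.12),
]

def month_end(d):
    # The new column: the last day of the month for each date.
    return date(d.year, d.month, calendar.monthrange(d.year, d.month)[1])

# The Group By step: average the daily rate over the month-end dimension.
groups = defaultdict(list)
for d, rate in daily:
    groups[month_end(d)].append(rate)

monthly = {end: sum(rates) / len(rates) for end, rates in groups.items()}
```

The resulting dictionary has one entry per month end, matching the two-column consolidated table the query editor returns.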

You notice that this data grouping loses much of the initial data granularity. What if you wanted to calculate the average monthly exchange rate, but keep the daily exchange rates as part of the data table? To achieve this desired outcome, you will need to leverage the capabilities supported in the advanced grouping options.

Notice that the Grouping applied step in the query editor has a gear wheel icon next to the step name. Double-clicking on this icon takes you back into the grouping configuration. This time, you want to select the advanced options, which enables you to set up more complex grouping configurations. Keep the date field from the existing configurations, as well as the aggregation for the average exchange rate. Since you want to return both the average monthly rates and daily exchange rates in the same view, you want to add a table object to each month that contains the daily rates. To do so, select to group All Rows, name this field Data, and notice that it doesn't allow you to choose an aggregation because you're returning all the rows (Figure 10).
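As a rough Python illustration of what this advanced grouping produces, an aggregate plus an attached table of all rows per group (sample data invented; Power Query builds this through the dialog, not code):

```python
from collections import defaultdict
from datetime import date

# Invented sample data; in the article this is the FRED daily series.
daily = [
    (date(2020, 5, 28), 1.10),
    (date(2020, 5, 29), 1.11),
    (date(2020, 6, 1), 1.12),
]

# Advanced grouping: for each month keep BOTH the aggregate and an
# attached "All Rows" table (the Data column) holding the daily rows.
groups = defaultdict(list)
for d, rate in daily:
    groups[(d.year, d.month)].append((d, rate))

grouped = {
    key: {
        "average": sum(r for _, r in rows) / len(rows),
        "data": rows,  # the nested table object
    }
    for key, rows in groups.items()
}

# Expanding the Data column pairs each daily row with its month's
# average, which is what gives two rates for each date.
expanded = [
    (d, rate, grouped[(d.year, d.month)]["average"])
    for d, rate in daily
]
```

The "data" entry plays the role of the table object attached to each month end, and the final list is the expanded view described next.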

After confirming this selection, you'll notice that the resulting data table (Figure 11) looks like the monthly average rates, except you now see that the last column contains rows consisting entirely of table objects associated with each month's end date. Think of this as a table of data attached to each of the month end dates and their respective monthly average rates. When you expand out this new Data column containing table objects, you combine the two columns of monthly aggregated data with each of the rows and columns contained in the table objects connected to the same date range, which gives you two rates for each monthly date.

Figure 9: Returned table from grouping

Figure 10: Advanced grouping configuration



By selecting the diverging arrows icon at the top of this new field, this expands to give you two rates for each date: the daily rate, and the monthly rate.

As an added challenge, you can add another calculation as a new grouping, this time for the year's end. You need to first create a new field for the year-end date using the same formula as Figure 8, except changing the Month piece to Year in the Date function. You then add another grouping to the transformation steps that uses this year field as the grouping dimension, calculates the yearly average rate instead of the monthly average rate, and finally creates new table objects for each row that contain both the average monthly rates and daily rates (Figure 12). You then expand out the Data field for the new grouped table.

Step 4: Load Data to Excel

After creating this data table, you can remove all the date fields except the daily date (Figure 13). Lastly, you load the data to Excel (you can also load this same query into Power BI). Although you can theoretically load just over a million rows to Excel, this wouldn't be ideal because loading that much data severely impacts performance, even with fewer than 60,000 live calculations.

The loaded data appears as a separate tab within the Excel file that you can then rename to something like "US-Euro XR." You can also delete the other unneeded tabs in the Excel file, if you'd like. You can see that this data set is small, but if you're loading large data sets, you can select to only load the connections. Although this means you now can't see all the data in the same views, you can still build impactful data summaries such as pivot tables within the file.

Step 5: Enable the Users to Refresh Data

Finally, you want to make your Power Query work accessible to others. You don't want to update this file yourself every month, especially if you can avoid it. How, then, can you configure it as a user-centered process with the end business user in mind? You passed your FRED credentials into Power Query with the username and password. By sharing your file with others, they won't see your private log-in information, but they can refresh the query on their own computer.

To refresh the queries, you can either go back into Power Query to manually refresh the data, or you can manually refresh it with the refresh buttons within the Excel file. Although you or I may view this as the perfect bridge between the Excel view and the data in the API query, remember that business users may not have the same comfort with updating queries this way.

You can use a VBA macro to create a refresh button that the user can select to automatically update the queries directly in Excel.

Figure 11: Returned table from advanced grouping

Figure 12: New grouping by year-end date



Figure 13: Finished refreshable Excel file

I'll be the first to admit I'm not a huge fan of VBA. One of my pain points with VBA is that it still doesn't mitigate the risk of making easy mistakes that are difficult to correct. I do, however, like adding a button to the screen if it means that the business users will continue to reference my work in the long run.

This is done in two parts: First create the macro, then add a button to automatically refresh the data. Of course, you'll find a few more specifics in the process of setting it up, but you get the idea. Start from the Excel file view with a single sheet of the exchange rate data.

1. In the top ribbon, go to the Developer tab and open the Macros menu on the left.
2. In the VBA window now open in the view, select your file name, which should have the prefix VBA. Right-click and select Insert Module from the drop-down menu.
3. A new window opens where you can copy and paste the code below into the window to enable you to run a macro to update the Power Query data. Closing out the window saves the VBA code.

Public Sub Refresh()
    ' Macro to update my Power Query script(s)
    Dim lTest As Long, cn As WorkbookConnection
    On Error Resume Next
    For Each cn In ThisWorkbook.Connections
        lTest = InStr(1, cn.OLEDBConnection.Connection, _
            "Provider=Microsoft.Mashup.OleDb.1", vbTextCompare)
        If Err.Number <> 0 Then
            Err.Clear
            Exit For
        End If
        If lTest > 0 Then cn.Refresh
    Next cn
End Sub

Once you set up the macro to update the Power Query data, you can now turn your focus to adding the elusive update button next to the data, easily within view.

1. Select the Insert drop-down menu from the Developer tab of the ribbon and select the rectangle shape from this drop-down menu.
2. Drag this shape to the data table sheet where you want to drop it. I put the macro update button to the right of the data table to make it easy for the user to see immediately when they open the Excel file.
3. When you let go, Excel prompts you to assign a macro to this button, so select the Refresh macro you just created.
4. Select the text on the button to rename it something meaningful, like "Refresh Data". To double check that the button works, hit the button and make sure that it now includes any new records. Of course, if you create this entire project in the span of a single day, you won't see any new records on the same day, but you can return to the file the next day to check.

There you have it! In Figure 13, you have a smartly built, yet simple to set up and understand business solution that makes everyone's life easier. If you're interested in learning more about how Power Query works, you can check out my Power BI Data Methods course in the LinkedIn Learning library. Yes, it's a subscription service, but it does offer a lot to the customers!

 Helen Wall

Transformation Options

The transformation options in Power Query are far more numerous than simply removing and renaming fields. There are three categories of transformation functions.

Cleaning data includes removing unneeded spaces and changing capitalization options. Shaping data includes grouping or unpivoting the entire table into a new table shape. Adding new fields includes standard mathematical functions, but also a plethora of unique Power Query formulas, such as dates and reading data from different file types.



ONLINE QUICK ID 2008061

Vue 3: The Changes


In the last couple of years, the Vue framework has etched its place into the heart of many a Web developer. The team has been working on some major improvements that culminate in a beta of Vue 3 that has recently started shipping. In this article, I'll walk you through the pertinent changes that are coming to your beloved Vue. Although many of the changes are in the underlying plumbing, there are some major changes that I'll detail in this article. The overarching changes include:

• Conversion to TypeScript to improve type inference for TypeScript users
• A rewritten Virtual DOM for overall performance improvements
• Improved tree-shaking for Vue users using WebPack (or similar solutions)

Overall, you should notice a much-improved experience without sacrificing compatibility with Vue 2. Although there are breaking changes, they've been kept to a minimum and allow new features to be opted into instead of forced upon existing code. Let's get to the changes!

Shawn Wildermuth
[email protected]
wildermuth.com
twitter.com/shawnwildermut

Shawn Wildermuth has been tinkering with computers and software since he got a Vic-20 back in the early '80s. As a Microsoft MVP since 2003, he's also involved with Microsoft as an ASP.NET Insider and ClientDev Insider. He's the author of over twenty Pluralsight courses, has written eight books, and is an international conference speaker and one of the Wilder Minds. You can reach him at his blog at http://wildermuth.com. He's also making his first, feature-length documentary about software developers today called "Hello World: The Film." You can see more about it at http://helloworldfilm.com.

The Current State of Vue 3

As of the writing of this article, Vue 3 is in an early beta cycle. The cadence of betas is pretty quick. You can build from the source to get the absolute latest version of Vue, but in any case, it's likely not time to start building or converting your projects. Some parts of Vue 3 (especially the big changes) are available as plug-ins to Vue 2, in case you want to start experimenting with them (I'll mention them in the article below as I talk about specific features). The team's goals are to make it highly compatible with Vue 2 so that you shouldn't need to change code to move to the new version, but instead opt into new features.

Although the core library of Vue is now in a beta of version 3, that's not true for the entire ecosystem. Most of these libraries are in alpha or preview states. From public commitments from the team, these libraries should be at version 3 by the time the core library is released. These include:

• vue-router
• vuex
• eslint-plugin-vue
• vue-test-utils
• vue-devtools

Additionally, the Vue CLI will be updated to use the Vue 3 libraries once it ships. There is currently a CLI plug-in for upgrading your projects to Vue 3 to see how they work. This add-in is only for experimental use, as of the writing of this article. To use it, just open an existing Vue CLI project and type:

# in an existing Vue CLI project
vue add vue-next

This will upgrade the project to the latest beta versions.

To look at the source or get the latest versions for all things related to Vue 3, please visit GitHub at: https://github.com/vuejs/vue-next.

New Features

In the years since Vue 2, the landscape of client-side Web development has changed quite a bit. Coming from a small upstart to a fully-fledged SPA library with a lot of community support, Vue has really grown up. Now with version 3 on the horizon, the team wanted to support a number of features to augment the library, simplify coding on Vue, and adopt modern techniques of Web development.

Below, I've dug into the major new features, but this list isn't exhaustive. It should cover the big items that will impact how you develop Vue applications going forward.

Global Mounting

One of the first changes you'll see is that mounting your root Vue object should be clearer than it was in version 2. The Vue object now supports a createApp function that allows you to mount it directly onto an element.

Vue.createApp(App)
   .mount("#theApp");

The fluent syntax creates an application by passing in a component and allows you to mount it to a specific element (via a CSS Selector). This doesn't change how the applications are created, but it does make it clearer what is happening. In Version 2, there were just too many ways to kick off your project; I really like this change.

In this same way, createApp returns an object in which to do configuration at the app level (instead of version 2's global-only option). For example:

createApp(App)
  .mixin(...)
  .use(...)
  .component(...)
  .mount('#app')

Back in version 2, you'd add mix-ins, use plug-ins, and add components (etc.) to the global Vue object. This allows you to scope them to your application and should cause fewer conflicts between libraries.

Composition API

This feature is the one that caused the most controversy with the Vue community. The Composition API is simply a new way of developing components that's more obvious but is a stark change from the Options API (the Vue 2 default way to build a component).

The motivation behind the Composition API is to improve the quality of the code. It does this by allowing you to decouple features and improve the sharing of that logic. In Vue 2, developers were forced to rely on extending the this object that was passed around to extend and share logic. This approach caused problems in that it was more difficult to see features

40 Vue 3: The Changes codemag.com


as they were added. This was especially exacerbated when using TypeScript (e.g., lack of typing). The upgrade also allows you to more obviously share features via standard JavaScript/TypeScript patterns instead of inventing new ones.

Another benefit of the Composition API is that types are easier to infer, which means better support for TypeScript. That was a big motivation for version 3. Let's see how it works in practice.

The Composition API works by changing from an options object to composing a Vue object. For example, the default Vue 2 way to create a component looks like this:

export default {
  data: () => {
    return {
      name: "Shawn"
    }
  },
  methods: {
    addCharacter() {
      this.name = this.name + "-";
    }
  },
  computed: {
    allCaps() {
      return this.name.toUpperCase();
    }
  },
  mounted() {
    console.log("Mounted");
  }
}

In contrast, the Composition API simplifies all that into a method called 'setup' that has you compose the same thing:

export default {
  setup() {
    const name = ref("Shawn");
    const allCaps = computed(() => name.value.toUpperCase());
    const address = reactive({
      address: "123 Main Street",
      city: "Atlanta",
      state: "GA",
      postalCode: "12345"
    });
    const favoriteColors = reactive(["blue", "red"]);

    function addCharacter() {
      name.value = name.value + "-";
    }

    onMounted(() => {
      console.log("Mounted");
    });

    return {
      name,
      address,
      favoriteColors,
      allCaps,
      addCharacter
    }
  },
}

The difference here is that you're generating an object that you return with the interface for the component. Because it all uses the same scope, you can easily use closures to share data instead of depending on the magic of the this property (the computed value is just getting the name via a closure). This should also allow you to decompose a component into several files, if necessary, and to manage large components, which was difficult with the Options API.

Reactivity

The way that reactivity worked in version 2 of Vue wasn't exposed to the developer. It was trying to hide the details, which caused more confusion than it should have. To remedy this, Vue 3 supports several ways of wrapping objects to ensure that they are reactive.

For scalar types, you can wrap them with a ref function. This makes them mutable and you read or write the property directly (if necessary) with .value:

const name = ref("Shawn");
const allCaps = computed(() =>
  name.value.toUpperCase());

For templates, objects that are wrapped as ref don't need to use .value, as those are unwrapped by default:

<template>
  <div>
    <label>Name: </label>
    <input v-model="name">
    <div>{{ name }}</div>
    <div>{{ allCaps }}</div>
    <button @click="addCharacter">Add</button>
  </div>
</template>

For objects, you would wrap them in a reactive wrapper. This wrapper makes a deep proxy for the object so that the entire object is reactive:

const address = reactive({
  address: "123 Main Street",
  city: "Atlanta",
  state: "GA",
  postalCode: "12345"
});

The object returned in the setup function (from the Composition API) is automatically wrapped in a reactive object, so you don't need to do it unless you have your own reactive objects.

For arrays, reactive also works. You just need to wrap it in the same way inside of setup():

const favoriteColors =
  reactive(["blue", "red"]);

Wrapping arrays with reactive makes it more obvious when you modify the array. It may seem like this is a lot like RxJs because it is. My understanding is that you can opt to use RxJs, too. The reactivity is the requirement for how Vue 3 works. Although my example shows reactivity in a lot of

codemag.com Vue 3: The Changes 41


places, in most cases, libraries can hide some of these details (e.g., Vuex).

Composition Components
Along with the changes coming to the composition APIs, many of the utility components that were hung on the Vue object (e.g., $watch, computed) are now separate components to make that code more readable.

For example, if you import watch, you can then watch one or more refs or reactive properties:

import { ref, reactive, computed, watch }
  from "vue";

// Watch a scalar ref
watch(name, (name, prevName) => {
  console.log(`Name changed to: ${name}`);
});

// watch a property of a reactive object
watch(
  () => address.postalCode,
  (curr, prev) => {
    let msg = `Postal Code: ${prev} to ${curr}`;
    return console.log(msg);
  }
);

Similarly, computed is now an importable component:

import { ref, reactive, computed, watch }
  from "vue";

export default {
  setup() {
    const name = ref("Shawn");
    const allCaps = computed(() => {
      return name.value.toUpperCase();
    });
    return {
      name,
      allCaps
    }
  },
}

This should make the composition of your components a lot clearer and the code that much cleaner (instead of relying on a heavily overloaded this property).

Filters
One of the big surprises in Vue 3 is that the concept of filters is going away. There are some justifications for this, but the main one is that Vue 3 wants the inside of a binding to be simply executable JavaScript. Let me show you what I mean. Here is a simple filter usage in Vue 2:

<div>{{ name | uppercase }}</div>

This syntax was an exception to most of the ways that binding worked, because the syntax using the pipe (|) character was confusing and not valid JavaScript. The team decided that we should do the same thing without resorting to some special syntax. Because the binding can be any valid JavaScript, you can accomplish the same thing with computed properties or just simple functions. In this example, you can see that the uppercase filter is changed to a function that returns the uppercased value:

export default {
  setup() {
    const name = ref("Shawn");

    function uppercase(val) {
      return val.toUpperCase();
    }

    return {
      name,
      uppercase
    };
  }
};

In Vue 2, I'm used to registering global filters. Instead, I think an approach is to just create your filters as an importable object and then just use the spread operator to add them to the binding object. For example, here's a small version of a library that could hold one or more filters:

// filters.js
export default {
  uppercase(val) {
    return val.toUpperCase();
  }
};

Then I could just import it and apply it:

import filters from "./filters";

export default {
  setup() {
    const name = ref("Shawn");

    return {
      name,
      ...filters
    };
  }
};

Then you can just use it as a function in the binding scope:

<div>{{ uppercase(name) }}</div>

Because the binding scope is just JavaScript, if you need to specify additional parameters, you can. Go nuts with it. It's more obvious and reduces complexity. After being scared at first, I like it.

Suspense
The Vue 3 team is making an effort to learn from other ecosystems. An example of this is the new support for Suspense (like React's common pattern). The problem that Suspense is trying to solve is to allow you to specify some template to be shown in case your components need to do any asynchronous work (e.g., calling the server) before being ready. In this case, Suspense will do this for you.

Let's start with an example. Say I have a simple component that has to do something; it makes its set-up use async and



await so it can handle the call. In this example, I’m doing a
simple timeout:
<template>
  <div>
    <div>See this works now...</div>
  </div>
</template>
<script>
export default {
  name: "ShowSomething",
  async setup() {
    await new Promise(res => {
      return setTimeout(res, 2000);
    });

    return {
    };
  }
}
</script>

Then, on a parent component that uses this component, there normally wouldn't be any mechanism to know it's asynchronous. So you bring in the component to the parent component:

import ShowSomething
  from "./components/showSomething";

export default {
  setup() { ... },
  components: { ShowSomething }
};

Instead of writing a simple template that uses the component, you use a suspense block to specify two separate templates (marked default and fallback):

<template>
  <Suspense>
    <template #default>
      <div>
        <ShowSomething />
      </div>
    </template>
    <template #fallback>
      <div>
        <div>Please wait...</div>
      </div>
    </template>
  </Suspense>
</template>

In this case, you can see that what will happen is that until the ShowSomething component is done with its setup, the fallback template will be used. When it's complete, it switches to the default template. Neat!

Teleport
Another interesting feature that started its life in React is the idea of Teleport (or Portals, as React calls it). The basic idea of teleport is to be able to render some part of a component in a DOM element that exists outside the current scope. For example, you can add a Teleport element with some content that you want to render:



<div>
  <Teleport to="#appTitle">
    <h3>{{ name }}</h3>
  </Teleport>
  ...

And on your webpage, you might have a div with the ID of appTitle:

<p>
  Teleport Should Appear here,
  not inside the component:
</p>
<div id="appTitle"></div>

When rendered, the contents will be "teleported" to the appTitle div and shown in that part of the UI—it doesn't need to be inside your application at all. Ta-da!

Routing
Although Vue Routing has been upgraded to Vue 3, there were some changes to the API to make it more consistent—overall, the goal was to make sure that it was more compatible with TypeScript. Routing is, on the whole, backward compatible with Vue Routing version 3 (which was used in Vue 2). There are some breaking changes, but they are relatively minor, including:

• Specifying the routing mode is now done via a new history property instead of a mode property
• The base of routing is now specified when you specify history
• Catch-all routes have changed format
• Transitions must now wait until the router is ready

You can review the breaking changes on the repository: https://github.com/vuejs/vue-router-next.

Vuex
The overall API for Vuex has been kept for this new version of Vuex for Vue 3. But some of the ways you wire up the store are different. For example, although you can still create a Vuex Store by calling new Vuex.Store, they're suggesting that you use the createStore exported function to better align with the way that Vue Router works. So, in version 2, you'd create a new store like this:

import Vuex from 'vuex'

export default new Vuex.Store({
  state: {
  },
  mutations: {
  },
  actions: {
  },
  modules: {
  }
});

In contrast, in version 3, they suggest that you create it like so:

import { createStore } from 'vuex'

export default createStore({
  state: {
  },
  mutations: {
  },
  actions: {
  },
  modules: {
  }
});

With the changes to global mounting, this changes how you'd wire up the store as well:

import { createApp } from 'vue';
import App from './App.vue'
import store from './store'

createApp(App)
  .use(store)
  .mount('#app')

Except for these minor changes, your Vuex code should just continue to work.

Where Are We?
Let me be honest. I'm sure that since we're not at release yet, I'm missing a couple of breaking changes. The purpose of this article isn't to be completely exhaustive but instead to prepare you for the major changes coming to Vue. On the whole, I really like the new changes and think they are a definite improvement in the library. If you don't agree, don't worry. You can stay with Vue 2 as long as you like. They are back-porting some of the major changes to allow you to continue using Vue 2 if you like, although the back-porting will happen sometime after Vue 3 releases.

I hope this article has helped gird you for the upcoming changes and, at best, gets you excited to try the new version of Vue.

Shawn Wildermuth


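To wrap up the reactivity discussion, here is a framework-free sketch of how a ref/computed pair can track dependencies. This is a hypothetical mini-implementation for illustration only; it is not Vue's actual code, and the ref and computed names merely mirror the Vue API used in this article.

```javascript
// Hypothetical mini-implementation of ref/computed dependency
// tracking -- an illustration of the idea, NOT Vue's real code.
let activeEffect = null;

function ref(initial) {
  const subscribers = new Set();
  return {
    get value() {
      // Track whichever computation is currently running.
      if (activeEffect) subscribers.add(activeEffect);
      return initial;
    },
    set value(next) {
      initial = next;
      // Re-run every computation that read this ref.
      subscribers.forEach(effect => effect());
    }
  };
}

function computed(getter) {
  const result = { value: undefined };
  const effect = () => { result.value = getter(); };
  activeEffect = effect;
  effect(); // the first run registers this ref's dependencies
  activeEffect = null;
  return result;
}

// Mirrors the article's example: allCaps follows name via a closure.
const name = ref("Shawn");
const allCaps = computed(() => name.value.toUpperCase());
console.log(allCaps.value); // "SHAWN"
name.value = "Shawn-";
console.log(allCaps.value); // "SHAWN-"
```

The key design point is the closure: the computed getter reads name.value, which registers the computation as a subscriber, so assigning to name.value later re-runs it with no this involved.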

ONLINE QUICK ID 2008071

Stages of Data: A Playbook for Analytic Reporting Using COVID-19 Data
Over the last few years, I’ve written articles about ETL and back-end processes in data warehousing applications. A few months
ago, I started writing a similar article on a “playbook” for analytic reporting. I’ve built many business intelligence applications for
several industries. Although the specifics differ, there are common key elements. Just like Visual Studio and .NET have properties,

events, and methods, BI applications have flashpoints, activities, and strategies that occur along the life cycle of the application. I began to write an article that organized these "PEMs" into a playbook. I initially took an actual business application and mocked up the data for the example. Then in February/March of 2020, news of COVID-19 began to affect us all. I changed my example application to use COVID-19 data and will present it here.

Disclaimer
Before I talk about my goals for this article, I realize that writing about COVID-19 and presenting a reporting application can seem odd and even morbid. Every number is a human being, and as I type this, the United States and world death counts are roughly 87K and 312K respectively and growing every day. These are victims with family and friends.

We have health care workers who are dealing directly with the crisis of their lifetimes. Millions of individuals have lost their jobs and/or have experienced major economic hardships. Parents and children are dealing with the massive shift in the education model and the challenges of kids staying at home. For all the items I just mentioned, there are countless others. I struggle to find the words to convey how serious this is.

Over the years, I've collected data for personal projects, ranging from my daily health numbers to election counts to sports statistics. I started collecting data on COVID because I wanted to track the trends—not just in the big cities, but in smaller areas where I'd heard news stories of outbreaks. Along the way, several people alerted me to some very attractive public dashboards on COVID data. They were visually impressive, but I found that several lacked the types of analytic paths I wanted to track. (I'll cover those below in the "Goals for This Application" section.)

We see coverage of these numbers every day, with many positions/opinions related to policy decisions to deal with this crisis. I'm writing this without injecting any personal view or bias. Because I can never resist a pop culture analogy, many will recall that in all the years of the Johnny Carson "Tonight Show," he talked about all sorts of news events, but you never knew his opinions on the content. Likewise, I'm not offering any opinion or looking to make any statement about potential policy decisions to deal with COVID, other than to state the obvious: There are no easy answers for this.

In this article, I'm going to use some of the mapping features in Power BI, and to keep things simple, I'm restricting the scope of this article and COVID data to the "lower-48 states" in the United States. I acknowledge that CODE has many readers outside the U.S., as well as the U.S. states of Alaska and Hawai'i. I'm in no way minimizing their shared experiences through this horrible crisis. Writing an article and presenting a case application requires some economizing in a fixed window of time, and for that reason I've restricted this to the 48 states.

Also, many in the news have discussed the accuracy of the case/death counts and the context surrounding the data collection. I'm going to assume, for purposes of this article, that the numbers are valid. At the end of the article, I've listed all of my data sources.

I understand if readers react that this is too dark a topic. But to the degree that people compare current events to past history of pandemics, discussions today potentially become the history of tomorrow. The books we read today on pandemics from 50-100 years ago (filled with statistics) were at one time "current events."

Kevin S. Goff
kgoff@kevinsgoff.net
www.KevinSGoff.net
@StagesOfData

Kevin S. Goff is a database architect/developer/speaker/author, and has been writing for CODE Magazine since 2004. He was a member of the Microsoft MVP program from 2005 through 2019, when he spoke frequently at community events in the Mid-Atlantic region and also spoke regularly for the VS Live/Live 360 Conference brand from 2012 through 2015.

Goals for This Application
I've looked at several public COVID-19 dashboard reporting applications. Some have eye-popping displays, but don't always have as many drill-down features. Some show more detailed information but spread across several pages in such a way that putting it together to form a picture is difficult.

Building the proverbial Holy Grail of reporting applications takes a long cycle of continuous improvements. There's an old variation of a Murphy's Law joke that the perfect application does not exist, and if it did, few would likely use it. Even so, I wanted to put together a reporting piece that would cover some of my questions about numbers across the country with respect to certain metrics, such as population densities. Therefore, I decided to re-work my article to cover a COVID data and reporting application. I'll present the application and then talk about the "plays" in my report application playbook to see how many plays I actually covered.

First, the granularity of the data would be daily case counts and deaths by United States county.

• The county population (from 2019)
• County square miles
• Population per square mile
• Daily count of new CV cases and CV deaths
• CV deaths as a % of cases
• CV cases and CV deaths as a percent of population, square mile, and population/square mile

Here are some of the things I'd like to include:

• Summaries of this information by week, because there can be dramatic day-to-day changes

46 Stages of Data: A Playbook for Analytic Reporting Using COVID-19 Data codemag.com
• Ranking the counties and states in the U.S. by any of the listed metrics. This will allow me to find hotspots that some might not initially believe would be a hotspot.
• Assign a key performance indicator (KPI) to see which counties are trending downward enough to start re-opening. For instance, a state might declare that a county can re-open when the number of cases over the last two weeks is less than X cases per Y-thousand people.
• Rendering this data in bar charts, tables, and basic state/county maps filled based on a metric.
• Easy drill-down from a state into the counties.
• When looking at cumulative numbers by county, easily see the daily and weekly progression of these metrics.

If you've been following the news on this general subject, you know that there's been discussion of hospital counts, hospital resource usage, nursing home statistics, and breakouts by age group. I wanted to include statistics in these areas, but for the following reasons couldn't:

• I tried to pull statistics on hospital case counts. I found some at local levels but not across enough of the country.
• I also tried to pull statistics on counts in long-term care facilities, as well as breakouts by age group. Confidentiality laws and regulations make this data hard to attain. As of early May 2020, roughly 35 of the 50 states provide some/all of these statistics. Currently the NY Times reports that as many as 33% of COVID-19 deaths in the United States are in nursing homes, with several states well above 50%. My home state of Pennsylvania reports nearly 70% of COVID deaths are in nursing homes, with some counties reporting as high as 80%.

Based on the Data, Find Out the Following
There are nine questions I'd like to use this application to answer. I've put them into three categories:

General statistics:

• On any given day, how did the states rank for a particular metric?
• For the state with the highest deaths, what was the county breakout?
• For a specific county and a rate-based metric (deaths per square mile, deaths per population), how easily can I see, all at once, the rate for the county, the related state, and the U.S.? (Note: at the risk of giving the plays away too soon, this is an important topic: the ability to show percentages for an entity, the parent entity, the grandparent entity, etc.)
• Even if I'm looking at cumulative numbers as of a certain point in time, can I see the daily/weekly progression?
• Can I sort counties across the country by one number (population, or population per square mile), and then compare another metric (deaths or cases per population)?

Mapping questions:

• Can I render this information in maps and show a geographic county map to illustrate which counties in a state have the worst metrics?
• I know that major cities like NYC have been devastated. However, there are smaller counties in the continental U.S. with very high death/population rates. Is there any easy way to see them?

State comparison question:

• We know that New York has had the highest state numbers. What about the next five highest states? How have they trended over the last month? And when I look at their county populations, is there direct correlation?

Assessment of readiness (Key Performance Index) question:

• As of this writing (early May 2020), some states have set guidelines for partial re-openings by county. For instance, in Pennsylvania, one major guideline is that, for the last 14 days, a county must have less than 50 total new cases in those fourteen days per 100,000 residents. For counties in PA (and for that matter, across the nation), which counties are in the best relative position to meet this, and which are the worst?

The Tools I've Chosen
I'm using SQL Server Integration Services to pull down daily CSV case/death counts. I used SQL Server to hold the data and Microsoft Power BI for the visualizations. Could I have done this in SQL Server Reporting Services? Yes. However, Power BI's dashboard offerings have improved to the point where it's a legitimate tool for this type of work. Yes, there are still SSRS features that should be in Power BI, but Power BI is still a strong tool for this type of work.

I deployed this report to my Power BI Pro site in the cloud, and I published a public version. My website (www.kevinsgoff.net) has a link to the application (the URL is much too long to include here and could change).

This article will contain some T-SQL code to deal with some allocation of data, and some Power BI DAX code to deal with dynamic measures/calculations. My website (www.kevinsgoff.net) has information on the public version of this Power BI application, along with the Power BI Desktop (PBIX) file and other necessary files.

Begin with the End in Mind: The Data
Here is the core data I need to collect. First, I need a table of counties in the continental United States, along with a recent population count and square mile measurement (Figure 1).

Figure 1: County/State Master table with FIPS code, population, and square mileage

The population count comes from the July 1, 2019 United States Census Bureau. The FIPS code (Federal Information Processing Standard) is a combination of a zero-filled two-digit state code and a zero-filled three-digit county code. From Figure 1, 42045 uniquely indicates Delaware County in Pennsylvania. (There are six counties named "Delaware" in the United States.) As it turns out, the COVID-19 daily count source I've been using has the FIPS code as the key.

In the Data Sources section at the end of this article, I've included the websites where I pulled the county FIPS codes

and population. I was determined to pull square miles by land, as I wanted to include population density as a metric. Given the early statistics on infection rates, population per square mile can be a major correlating factor.

This was quite a challenge: the only source I could find was a PDF that I had to convert to Excel, and then parse the columns carefully. The values had spaces instead of commas for any four-digit values, which made it tricky. Knowing this would likely be a one-and-done effort (county land area doesn't change much, and certainly not to any degree of analytic significance), I didn't mind doing a one-time pattern hack, so long as I could manually verify small and large counties in each state.

Next, I needed to find a daily count of confirmed cases and deaths by county. I found a great one from GitHub (listed in the Data Sources section at the end of this article). This daily feed is available for download as a CSV file and contains cumulative counts of cases and deaths by day/county (Figure 2).

As it turns out, this was all I needed. I wrote a basic SSIS package that parsed the columns into a table called CovidHistory (Figure 3).

I want to report on daily numbers, even though the feed only contains cumulative counts for every county. Therefore, I need to write something to parse the difference in cases/deaths from one day going backwards. I want the result set like Figure 4:
Figure 2: CSV from daily web feed from GitHub (NY Times)

I can use a little bit of T-SQL, specifically the LAG function, to line up cases for the current day and the prior day.

Note: I am taking significant advantage of the fact that once a data feed identifies a county case for a day, the data feed continues to include that county on each subsequent day, even if the cumulative counts have not changed.

Figure 3: Cumulative counts stored into a table

truncate table covid19DailyCounts

insert into covid19DailyCounts
    (FipsCode, CaseDate, Cases, Deaths)
select fipscode, casedate,
       cases - casesyesterday as Cases,
       deaths - deathsyesterday as Deaths
from (select fipscode, casedate, cases, deaths,
             lag(cases,1,0) over
               (partition by fipscode order by casedate) as CasesYesterday,
             lag(deaths,1,0) over
               (partition by fipscode order by casedate) as DeathsYesterday
      from CovidHistory) temp
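The same backward-looking difference that the LAG query computes can be sketched in plain JavaScript. This toDailyCounts helper is hypothetical (it is not part of the article's SSIS/T-SQL pipeline) and assumes the cumulative rows arrive sorted by county and then date, mirroring the PARTITION BY/ORDER BY in the query above.

```javascript
// Plain JavaScript sketch of the backward-difference idea behind
// the T-SQL LAG query. Hypothetical helper, for illustration only.
function toDailyCounts(history) {
  // history: cumulative rows sorted by fipsCode, then caseDate.
  const prior = new Map(); // last cumulative row seen per county
  return history.map(row => {
    const prev = prior.get(row.fipsCode) || { cases: 0, deaths: 0 };
    prior.set(row.fipsCode, row);
    return {
      fipsCode: row.fipsCode,
      caseDate: row.caseDate,
      cases: row.cases - prev.cases,    // new cases that day
      deaths: row.deaths - prev.deaths  // new deaths that day
    };
  });
}

const history = [
  { fipsCode: "42045", caseDate: "2020-05-01", cases: 100, deaths: 5 },
  { fipsCode: "42045", caseDate: "2020-05-02", cases: 130, deaths: 7 },
  { fipsCode: "42045", caseDate: "2020-05-03", cases: 130, deaths: 7 }
];
console.log(toDailyCounts(history));
// day 2 -> 30 new cases and 2 new deaths; day 3 -> 0 and 0
```

Like the SQL version, the first row for a county is differenced against zero, so the first day's "new" counts equal the cumulative counts.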

Finally, I want to create a daily table with the cumulative number of new deaths and cases in the last 14 days, relative to each day (Figure 5).

Figure 4: Daily counts derived from cumulative counts

At this point, I'm reminded of the old joke about getting 10 published economists in the room who give 10 different an-

swers to a question. (There's an alternate joke about getting 10 answers!) Similarly, database people might look at my approach and say something like, "Why don't you just store those two columns in the snapshot history table and not create a new table?" or even, "Why are you materializing this data at all? Just let the reporting tool create it on the fly."

All valid questions. So long as I'm not talking about an astronomical amount of data, and so long as the ETL processes deal with this in an automated fashion, I don't mind materializing this into a separate table.

I've created a third table called Covid19Last14Days, using the CROSS APPLY function to marry up each county/date with the sum of cases and deaths going back the last 14 days inclusive. When Microsoft released the CROSS APPLY function, many writers (myself included) praised it as a way to apply (thus the name) the results of a table-valued function to a query. CROSS APPLY also has a nice ability to act as a COBOL-like "Perform Varying" to self-join a table based on a sliding condition (yes, I'm sure many readers just fainted that I used a COBOL analogy!).

Figure 5: Daily county record of cumulative cases and deaths over the last 14 days

truncate table Covid19Last14Days

insert into Covid19Last14Days
select casedate, fipscode,
       caseslast14Days, DeathsLast14Days
from covid19DailyCounts outside
cross apply
    (select sum(cases) as CasesLast14days,
            sum(deaths) as DeathsLast14Days
     from covid19DailyCounts inside
     where inside.fipscode = outside.fipscode
       and inside.casedate between
           dateadd(d,-13,outside.casedate)
           and outside.casedate) temp

As far as the data goes, that's it! Table 1 contains the calculated measures I'd like to create in the Power BI report.

Variances over the Average (sidebar)
You might have thousands of customers and hundreds of product options. When comparing shipments and revenue and costs from the most recent quarter to the quarter before, there might have been three specific combinations that represented the single highest percentage of increase for a metric. Being able to bring this information to the surface EASILY can help the business to spot trends.

Some Source Data Challenges
I encountered two data source challenges with the daily count feed. First, the data source summarized NY City as a whole, instead of breaking it out by the five counties/boroughs:

• New York County (Manhattan)
• Kings County (Brooklyn)
• Bronx County
• Richmond County (Staten Island)
• Queens County

I took the New York City tally as a whole and spread it across the five counties based on population. Yes, I could have

Calculated Measure: Notes
Population across all 48 states
Square miles across all 48 states
Population Per Square Mile (current slice of data)
Population Per Square Mile across all 48 states
Cases Per Population
Cases Per Population across all 48 states
Cases Per Square Mile
Deaths Per Case
Deaths Per Population
Deaths Per Square Mile
CasesLast14DaysGoal: For the current slice, the Population divided by 100K, then multiplied by 50. This is for the metric to compare the cumulative cases over the last 14 days.
CasesLast14DaysGoalPerformanceIndex: The performance index of the cumulative cases over the last 14 days against the goal. A county with a population of 200K would have a goal of no more than 100 new cases in the last 14 days. If they had 90 new cases in 14 days, the index would be .9 (which is good). If they had 110 new cases in the last 14 days, the index would be 1.1 (not as good).

Table 1: Calculated Measures
Figure 6: First page of general cumulative statistics and overall tabs

Figure 7: Comparing NY state measures to U.S. measures and ranking NY counties

searched for a second website source. Given the significant population density of these counties and the issue of county reporting, I can understand the challenges in trying to get a 100% accurate count down to the county level.

Second, the data feed contained some state tallies with a county of **Unknown**. To address this, I spread those counts across the counties in the state based on population.

The Power BI Application Pages
Here's the first page of the Power BI application (Figure 6), along with some tabs at the bottom for different pages. I'll address the questions above across the different pages.

General Statistics
Let's take the first five questions and walk through a few scenarios. I'll start with Figure 7. First, notice the drop-down for the Display Option. Based on that selection, the page ranks the 48 states on that metric. I've selected "Deaths," although I could have picked any of the aggregated counts or any of the metrics from Table 1. (I'll talk about the drop-down for "Display Daily Chart Option" in a minute.)

Note the filters caption, with the date of 5/15. I've selected 5/15 from a date list drop-down (from back in Figure 6, but not shown here, simply to conserve screen real estate). The output ranks the states by cumulative death counts, with NY first at 27,755, New Jersey a distant second at 10,138 deaths, and so on. Again, this is "as of 5/15." I could pick a prior day to view a snapshot of cumulative counts as of that date.

I've selected New York with the mouse, which filters the list of counties on the right. In that list of counties, I've sorted on the metric for Deaths Per Square Mile.

Note the chart in the lower right that currently shows daily case and death counts. If I want to show weekly counts instead (Figure 8), I can take advantage of a dynamic measure based on the user selection:

DynamicDailyDeathsMeasure =
switch( values ( DailyTrendOption[Option]),
    "Show Daily Counts",
        sum(covid19DailyCounts[Deaths]),
    "Show Cumulative Counts",
        sum(CovidHistory[deaths]),
    "Show Weekly Counts",
        sum(covidweeklycounts[Deaths]))

With very little navigation, I'm able to see the following:

• The ranking of states by cumulative death count
• By selecting New York, I'm able to see (from the two pink rectangular panels in the upper right) that NY's Cases and Deaths Per Population and Per Square Mile are many times greater than the national average.
• The five counties that make up NYC (New York/Manhattan, Kings, Bronx, Queens, and Richmond) have

Figure 8: Same as Figure 7, but showing weekly counts instead of daily counts

Figure 9: Comparing NJ state measures to U.S. measures and ranking NJ counties

Figure 10: Sort NY counties by Deaths/Square Mile and then select first five

Figure 11: Counties with the largest populations

even higher rates than the NY state figures as a whole. These are very densely populated counties with rates per square mile that reflect the severe crisis situation they have faced.

Now let's select New Jersey from the list of states on the left and continue to sort the counties on the right by Deaths/Square Mile. You can see in Figure 9 that although Hudson County is third in cases with 819, it's first in NJ with 17.43 deaths per square mile. Also note that NJ's 1.02 deaths per square mile is roughly two-and-a-half times greater than NY's 0.39 deaths per square mile (from back in Figure 7).

Let's go back to Figure 7 (New York) and, on the right, CTRL-click on the top five counties (Figure 10). Notice that the national average stays the same, but the second recap

Figure 12: Map of 48 states filled with a rule based on the Dynamic Measure selected

Figure 13: Breakout of Pennsylvania by county

line is filtered based on the five counties. This is exponentially sobering: The five counties that make up New York City collectively cover just under 400 square miles, yet account for about 25% of the nation's deaths.

Also notice that at the bottom of Figure 10, there's a daily chart of cases and deaths.

OK, let's stop and talk about a few things in the chart back in Figure 7, specifically the bar chart that ranked states by a dynamic measure I selected in a drop-down. In Power BI, you can use a DAX formula to determine which measure to use, based on the Measure option the user selects.

DynamicMeasure =
    switch( values( ShowMeasure[OptionName] ),
        "Show Cases", sum(CovidHistory[Cases]),
        "Show Deaths", sum(CovidHistory[deaths]),
        "Show Case % of Pop", [Cases Per Population],
        "Show Death % of Pop", [Deaths Per Population],
        "Show Death Rate", [Deaths per case],
        "Show Case % of Sq Miles",
            [Cases Per Square Mile], etc. )

Also, note the "filter recap" label, which is also a DAX formula that reads the current filter selections:

SelectedFilter = "Filters: " &
    if(isfiltered( DateList[casedate] ),
        "Date: " &
        CONCATENATEX(filters(DateList[casedate]),
            DateList[casedate], ", ") & ", ", "") &
    if(isfiltered( statemaster[statename] ),
        "State(s): " &
        CONCATENATEX(filters(statemaster[statename]),
            statemaster[statename], ", ") & ", ", "")

Finally, for general population and death statistics, Figure 11 sorts the counties in the U.S. by population, so that you can see case/death rates for these counties.

Some observations:

• Cook County in Illinois has twice the population of either Kings County or Queens County in NY, yet the two NY counties are far more densely populated (36K and 20K per square mile versus 5.4K per square mile) and the difference in deaths per square mile is exponentially sobering.
• Additionally, the cumulative deaths in either Kings or Queens County NY is "roughly" near the sum of the other displayed counties combined.
• There are large counties in Texas, Arizona, and California with death rates/population below the national average.
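For readers who don't work in DAX, the dynamic-measure pattern can be sketched in plain Python (a hypothetical illustration, not the article's actual code; the case figures in the sample rows are made up, while the death counts come from the text above): a dictionary maps the selected option name to an aggregation function, much like SWITCH maps option values to measures.

```python
# Hypothetical sketch of the DynamicMeasure idea outside Power BI:
# dispatch the user's drop-down selection to an aggregation function.
rows = [
    {"state": "NY", "cases": 350_000, "deaths": 27_755},
    {"state": "NJ", "cases": 145_000, "deaths": 10_138},
]

measures = {
    "Show Cases":  lambda data: sum(r["cases"] for r in data),
    "Show Deaths": lambda data: sum(r["deaths"] for r in data),
}

def dynamic_measure(option, data):
    # Unknown options fall back to 0, like SWITCH's default branch.
    return measures.get(option, lambda _: 0)(data)

print(dynamic_measure("Show Deaths", rows))   # 37893
```

Adding another metric is a single dictionary entry, which mirrors the appeal of the DAX approach: one chart, many measures.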

Figure 14: Scatter chart for counties in PA. Notice the timeline axis.

Mapping
Figure 12 shows a map, based on cumulative numbers on 5/1. I've chosen "Death as a % of Population." The country map fills states based on the measurement I selected in the drop-down.

Note that New York is the "reddest," with the highest death rate per population in the U.S.

However, note that Pennsylvania, Louisiana, Georgia, and Michigan have a slightly darker shade than other states. I'll take a look at those states one at a time.

Because I live in Pennsylvania, I'll click on that state, and Power BI will plot the counties by the metric I've chosen ("Death as a % of Population") in Figure 13.

This has largely been the story in Pennsylvania for over a month. The southeastern part of the state (closest to New Jersey and NY) has the highest percentages. Inside each of the counties are incredibly sad and tragic stories. Lehigh County (Allentown) has reported that over 70% of deaths have occurred in long-term care facilities.

Note that I've placed the mouse over Delaware County, which is just west of Philadelphia County. I've got a tooltip with all the available information. Later in the article, in the Playbook section, I talk about the value of tooltips. Delaware County has percentages that are generally worse than the overall national averages.

In a moment, I'll look at a few other states (Louisiana, Georgia, and Michigan). They have counties and scenarios that

Figure 15: State of Louisiana, tooltip for St John the Baptist Parish

Figure 16: Georgia’s Dougherty County and neighboring Randolph and Terrell Counties

Figure 17: Michigan, with focus on Wayne County (Detroit)

you might or might not have heard about on the national news, but are still far above the national averages.

But before then, you might want to know something. How have deaths progressed in Pennsylvania? I can provide a right-click drill-through option to show a scatter graph that plots population and deaths by county, along with a daily timeline (Figure 14).

At the top of Figure 14, note that on April 7th, the Y-axis for death count was nearly the same for most counties. But then as I slide the date axis at the bottom forward

Figure 18: Sort the grid on the first dashboard page by Death Rate per Population

Figure 19: State-by-State comparison of a metric (Deaths) for specific states

by day, I interactively see that the death count in Philadelphia has risen sharply. And even though Allegheny County is the second most populated county in PA, the deaths in smaller counties like Bucks, Delaware, and Lancaster are higher.

Let's go back to the national map and then drill into Louisiana. Note: Louisiana is broken out by Parish, which is roughly the equivalent of a county. Louisiana was originally part of Spain and then France, and those countries divided their lands according to church parish boundaries. Even after Louisiana became part of the U.S., they continue to refer to regions as parishes and not counties.

In Figure 15, you can see that St John the Baptist Parish has the highest death rate (red) in the state. Even though it isn't a large region (about 43K residents), the case and death rates are very high: as you'll see in a minute, some of the highest in the nation.

In Figure 16, I'll take a look at Georgia. I lived in Atlanta in the 1990s and travelled to nearly every county health department as part of a state-wide automation project. I was surprised and saddened to learn that Dougherty County, a few hours south of Atlanta with a population of about 88K, had been severely hit. The neighboring county of Randolph, with just 6.7K residents, has also been severely hit, with one of the highest cases and death/population percentages in the nation. Several news sources reported that many of the deaths occurred in a nursing home that received essential services from neighboring Dougherty County. Additionally, Terrell County, just east of Randolph County, has death per population figures that are much higher than the national average.

In Figure 17, I'll look at Michigan. I've spent time in Michigan the last 15 years, both in Traverse City and also in Dearborn. Right now, Wayne County (both Detroit and Dearborn) has also faced very high case and death rates. You can see from Figure 17 that Wayne County has nearly half the deaths in Michigan (2,192 vs. 4,765) despite occupying only 614 square miles.

Now that you've seen some county scenarios for a handful of states, let's see how many of them show up when I sort all counties in the U.S. by death/population. Let's go back to the grid on the first page of the dashboard, make sure all states on the left are selected, and sort the counties on the right by Death/Population (Figure 18). Although you can see several counties in NY in the list, you also see several counties in Georgia at the top of the list, as well as St John the Baptist Parish in Louisiana. And although not pictured, Wayne County in Michigan is also high on the list if you scroll down a few more counties.

State Comparisons
You know that New York has, by far and away, the highest number of cases and deaths. But for the next five states based on deaths, how much do they account for the nation's total and how do they compare to each other?

In Figure 19, I've gone to the tab for state comparisons and I've selected states two through six (New Jersey, Michigan,

Figure 20: Massachusetts counties, first by population and then by Death Rate

Figure 21: Alternate view of Hampden county: a scatter chart plotting population vs. death rate

Figure 22: Counties with the worst performance index for re-opening based on a 14-day case count

Massachusetts, Pennsylvania, and Illinois). You can see that in the last few weeks, deaths have spiked in New Jersey at a higher rate than in other states. Also of interest, Massachusetts is close to Michigan in cumulative deaths, even though Michigan has roughly three million more residents. Deaths in Massachusetts have risen sharply in the last month.

Here's a question: Is the rise in Massachusetts directly correlated to population? As it turns out, not always. In Figure 20, on the far left, Hampden County in Massachusetts is the eighth largest county in Massachusetts in terms of population but has the highest death per population statistic.

It occurred to me that another visualization could be a scatter chart where the user could select the option for one axis (Population, or Population Per Square Mile, etc.) and then the other axis (Deaths, Deaths per Population, etc.).

The scatter chart in Figure 21 is just another way to show that Hampden County is far from the most populated county but has the highest death rate per population. In the scatter chart, Hampden represents the furthest "negative" distance from the regression line on the relationship between population and deaths per population. Scatter charts help us to see distribution of values and relationships between two variables.

Assessment of Readiness
Finally, if there were a nationwide goal of fewer than 50 new total cases per county over the last 14 days for every 100K residents in that county, which counties would be in the best (or worst) position for that? (Note: I realize that many will have opinions on the parameters for such a goal. My objective here is simply to show the mechanics of a KPI.)

Figure 22 plots that measure and finds that Sussex County in Delaware has had 1,273 cases over the last 14 days. They have a population of 234,225. If you applied that metric goal above, you'd want no more than 117 total cases over the last 14 days.

Okay, so that's 1,273 divided by 117, which is an index of roughly 10.87. Basically, according to the rules of this KPI goal, cases over the last 14 days are roughly 10.87 times greater than they should be in order to start re-opening. That represents one of the higher indexes in the country in terms of readiness. In other words, the lower the index, the better.

If you go back to the grid on the opening page and add in the 14-day case total and the performance index, you can see that, of the largest counties in the U.S., some are close to (or under) the threshold and some are not (Figure 23).

For instance, Harris County in Texas has had 2,499 cases in the last 14 days (relative to May 15). They have a population of 4.7 million people. Using the rule of no more than 50 cases per 100K residents, that means a possible indicator for reopening would be no more than ((4.7 million /

Figure 23: All counties, sorted by 14-day Case per Population Performance Index

100,000) multiplied by 50), or roughly 2,350 new cases. As a result, they are slightly above 1 for the index.
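The index arithmetic just described is easy to sanity-check outside of Power BI. Here's a minimal Python sketch (the helper function name is my own; the populations and 14-day case counts are the ones quoted above):

```python
# Sketch of the re-opening readiness index described in the text:
# goal = no more than 50 new cases per 100K residents over 14 days.
def readiness_index(population, cases_last_14_days, goal_per_100k=50):
    """Return (case threshold, index); an index above 1 means the
    county exceeded the 14-day case goal."""
    threshold = population / 100_000 * goal_per_100k
    return threshold, cases_last_14_days / threshold

# Sussex County, DE: population 234,225 and 1,273 cases in 14 days
threshold, index = readiness_index(234_225, 1_273)
print(round(threshold), round(index, 2))   # 117 10.87

# Harris County, TX: population 4.7 million and 2,499 cases in 14 days
threshold, index = readiness_index(4_700_000, 2_499)
print(round(threshold), round(index, 2))   # 2350 1.06
```

As in the article, lower is better: Sussex County lands near 10.87 and Harris County only slightly above 1.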
Playbook: Presentation Layer
Now that I've gone through the application, I'll cover some of the items I've borrowed from my "playbook," both on the Presentation and Non-Presentation layer. I'll start with items I often consider on the presentation end, whether talking about COVID statistics or inventory/order book dashboards or financial dashboards:

Shared Filters
Recently, I needed to filter on data from a website and found that I couldn't. The data element I needed was a pretty common one for the business context and had a fairly short list of discrete values. Unfortunately, the website didn't permit me to filter. To make matters worse, the site made it difficult for me to pull back raw data and filter it myself.

I tend to be a pretty grumpy user when I can't get what I think shouldn't be realistically difficult, and that's precisely why I try to be very sensitive to user filtering needs. I also get cranky when I set a filter on one page and there's no option to persist that filter on subsequent pages where the specific filter would hold the same context.

There weren't many filters in this COVID dashboard report, but I did try to make as much available as possible and persisted common filters (like current snapshot date) across pages.

Keep the Door Open for Exporting Raw Detailed Data
Granted, applications can only drill down so far. Having said that, don't assume that users will be satisfied with summary information. Users will want details, and they'll often want to export those details to Excel to do their own analysis.

Fortunately, Power BI provides the native ability for an end user to export the results of a visualization (i.e., a chart, a table, etc.) to a CSV file. Every Power BI visualization has a runtime option in the upper-right hover menu to export data.

Content Reward
Just like the old TV show where a contestant won by "naming that tune in four notes," a successful reporting application provides meaningful content reward with a minimal number of mouse-clicks. In my career, I've built what I thought were great reporting applications, only to learn that users found them too difficult to use. Users are busy and don't have time to learn all sorts of system intricacies. This is profoundly seen in Apple products. The more information you provide to users with a minimal amount of effort, the more they'll want to use your applications.

I've tried to demonstrate this theme in Figures 7 through 9, where someone could see country and specific state/county metrics in one glance. I've also tried to demonstrate this in tooltips to show as much supporting data as possible.

Dynamic Measures
If you have a reporting application with many key metrics, it's tempting to create a page or page segment for each measure. In some cases, you can alternatively display all the possible measures in a drop-down and then plot the selected measure dynamically in a chart.

In this application, Figure 7 showed where you can rank states by a variable metric and Figure 8 shows where you can show daily counts or cumulative counts by day. I definitely recommend considering this approach to avoid repeating the same visualization across many pages.

Again, Power BI doesn't currently seem to have any elegant way of specifying dynamic attributes (such as allowing a user to show geographies or products or GL codes, etc.). Some have suggested a workaround of exploding/normalizing the result set and joining to a bridge table of attribute definitions. If the result set isn't large, this can work, but it isn't practical for larger applications. Hopefully Microsoft will provide the ability to set dynamic attributes via a simple DAX formula, in the same way we can do it now with measures.

Maps
Some developers love maps and want to show them to users (whether users ask for them or not). Other developers will avoid them at all costs. You can use them to create great visualizations, but anyone who's worked with mapping components knows that they can be challenging. I've made moderate but not heavy use of mapping here. I think they provide a nice touch and serve a particularly great role in presentations. However, you need to learn the rules and any nuances for what geography elements (names or geospatial/latitude-longitude points) the mapping visualization expects. You also need to be careful about overloading a map, which can take much longer than a few seconds.

This application has only used maps as a visual launching point to talk about certain states and to help the user easily identify counties. Other applications use maps for more intricate geographic definitions than what I've provided here, such as distances between grocery stores. (Ironically, the recent discussion of contact tracing opens discussions about far more elaborate mapping than what I've provided here.)

Sidebar: Some Business Buzzwords are Worth Remembering
Some business buzzwords make me want to reach for a bottle of Pepto; however, others are important. Here's one from the latter category: SO THAT. We do this SO THAT we can accomplish something. In this case, we are trying to take large amounts of data (whether organized neatly or not) and present a story of what happened to the business.

Tooltips
A long time ago, I had a boss who was adamant about providing as much tooltip content as possible. He felt that any calculated field should also immediately show all the values involved in the calculation. Periodically he'd go through my applications, print out any screen shots where the tooltip didn't sufficiently provide all the background information for a calculation, and leave the printouts on my desk (and also email everyone in the company). He was brutal. I didn't care for the tactics, but he was almost always correct.

Figures 13, 15-17, and 22 all show examples of tooltips. Perhaps I went overboard, but it's usually a better situation to remove unnecessary content than to add it.

Final note: Microsoft also has a feature in Power BI called Report Tooltips. You can define a separate report page and make it available as a tooltip for a plotted page. I've seen applications that have gone overboard with this concept, but it provides a powerful way to give context to a plotted point on a chart, far more than a regular tooltip list of values. In my next article, I'll cover this in detail.

Cross-Filtering
Power BI provides some fantastic capabilities for cross-filtering. If you're not familiar with the term, imagine if you have a chart on the left of Sales by Product and a chart on the right of Sales by Week. If you click on a product on the left, the chart on the right auto-refreshes based on the product(s) you selected. Conversely, if you click on specific weeks on the right, the chart on the left is auto-refreshed based on the weeks you selected.

This is a great feature that most users expect. The only catch is that you want to test all the possible behaviors and selection options. If you've applied visual-level filters for things like Top 25 Products, the cross-filtering on the right might not behave exactly the way you expect.

A few years ago, I wrote here in CODE Magazine about how to mimic this feature in SQL Server Reporting Services, using invisible hot parameters. You can find the solution in the May/June 2016 issue of CODE Magazine, tip #3 (https://www.codemag.com/Article/1605111). Although I still use that feature today in SSRS, there's no question that Power BI's native feature for cross-filtering is much snappier.

It's difficult to see the impact of cross-filtering in a printed article, but certainly the ability to click a state back in Figures 7 and 8 and immediately see the relevant counties on the right is an example of cross-filtering.

Easy Way to See "Best/Worst 25"
Going back to the theme of "I can name that tune in X mouse-clicks," it's very important for a user to be able to see a list of top/bottom values with little effort. The "best" or "worst" of a metric is often key to any analysis: Users will appreciate seeing this information easily.

My reporting applications have all sorts of "Top/Bottom 25" lists of worst orders by margin, best sales reps by margin, etc. Often business users will be quite vocal up front about wanting to see this, and never hesitate to raise it as a possible feature.

In this COVID reporting application, the list of counties in the figures for general statistics showed how you could easily get the top N or bottom N based on a metric. Power BI makes this particularly easy by allowing native sorting on a column in ascending or descending order.

Recap of Main Numbers: Your Version of "The Magic Wall"
I have saved the most esoteric topic for last. If you watch news shows on television, some stations have analysts who show interactive maps of voting trends, voting demographics, etc. I frequently watch these shows, but not for the reason you might think. More than anything, I'm utterly fascinated with the use of technology to tell a story. In particular, CNN's John King is a master in using the multi-touch collaboration wall (nicknamed "The Magic Wall") to tell a story.

When you create dashboard pages to show company financial performance, company costs, or sales trends by product/market, essentially you have a story to tell regarding the data. Maybe margin rates are down this month because you had to offer lower prices and could not lower the costs as well. Maybe margin rates are down because you wound up selling more of lower margin products than higher margin products. Bottom line, some aspect of overall performance is up or down because of this and this and this. The factors could be simple, or more complicated.

A business intelligence application is the means by which you tell that story. This means three things:

• The data in the story needs to be accurate.
• The presentation of the data needs to be clear and representative of how you label/annotate it on the screen.
• Regardless of whether you use a bar chart, scatter chart, or just a sorted table, the visualization needs to be simple enough to be obvious, but not so simple that it prevents the user from being able to get more information.

On that last note, there can be instances where some think a vertical bar chart is better than a horizontal bar chart. Sometimes people use bar charts when line charts might arguably be better. (Pro tip: Don't use a pie chart if you'll have more than a few entries!)

Every once in a while, maybe before a major release, pretend like you're a news analyst using your tool to explain why some aspect of the business is up or down. Here's an even crazier suggestion: Capture a five-minute video and watch it. In the same way you do code reviews and design reviews, a presentation review can flush out quirks.

Playbook: Non-Presentation Layer
Now that I've talked about some of the presentation plays, let's talk about the playbook for the non-presentation layer. I mentioned at the beginning of this article that I've written past articles about ETL components in data warehousing. That's one aspect (and a BIG one) of all the work that the user doesn't directly see. However, there are others. Let's look.

Trends and Variances, Yes! But What Type?
Trend analysis and variance analysis can mean different things to different groups. Sometimes people just want to see the biggest increases or decreases from month to month. Other times people want to see when a product suddenly sells far more (or less) than the average over the last six months.

I have an application that shows the standard shipping rate for a city, and the different carrier rates, ranked by the highest carrier variance over the average. This shows which carriers are charging the most, relative to the average. I have another application that shows, month to month, the highest margin rate increase/decrease by geography/product. Out of dozens of products and hundreds of customers, for those with more than X dollars of sales to begin with, what single customer/product had the biggest increase or decrease in margin from month to month?

The KPI at the end is an example of a trend-based metric, as I'm evaluating if counties are ready to re-open based on 14-day trends.

Spotlight: Combining Metrics to Form a Picture
This is one of the most important elements of valuable analytic reporting applications: bringing together different elements to help form a narrative. I cannot stress this enough: Reporting on cases and deaths from COVID-19 gives you one picture. But aligning those measures with population density will frame a better picture. Similarly, a dashboard of shipment trends gives you one picture, but integrating the upcoming order booking and inventory turnover trends gives a much more insightful and compelling picture.

Where this can get tricky is when the elements don't share the same level of granularity. Metrics won't align properly if they don't share a common business context. Sometimes it's obvious and sometimes it's subtle. I've seen applications that mixed aggregations (both summary and averages) at different levels and gave a misleading picture. Ralph Kimball wrote profoundly about this in his data warehouse books. Even though he has retired, you can still find links to his books and design tips at https://www.kimballgroup.com/. I highly recommend that all data warehouse and business intelligence professionals read Kimball's works, even if you only implement a small portion of his ideas in your work. In particular, the book "The Data Warehouse Toolkit: The Definitive Guide to Dimensional Modeling" truly deserves a rating of "5 out of 5 stars, a must-read."

Scheduled Refreshes
Another theme to stress in any reporting application is defining and annotating the data refresh schedule. Although there are a growing number of "real-time" analytic applications, many still work on a scheduled refresh that could be hourly, daily, twice a day, etc. In this instance, the COVID-19 reporting application is simple: The data is updated once a day from the GitHub source (typically by noon, Eastern Standard Time) for the prior business day.

Materializing Data from a Model: Table or View?
Opinions vary on this topic, but I try to keep views materialized and fairly simple. Nearly any database feature can be misused, and I've seen views misused more than any other database object. When effectively possible, I'd rather generate a flat report structure table for a set of visualizations.
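To make the flat report table idea concrete, here's a hedged Python sketch (the names and sample values are mine; in practice this would be a scheduled SQL step): the fact counts and county attributes are pre-joined into one wide row per county, with rates computed once at load time so the visuals never join at query time.

```python
# Hypothetical sketch: materialize a flat reporting row per county by
# pre-joining fact counts with county attributes.
counts = {"36061": {"cases": 100, "deaths": 10}}                # keyed by FIPS
counties = {"36061": {"name": "New York", "population": 1_600_000}}

def build_report_rows(counts, counties):
    rows = []
    for fips, fact in counts.items():
        attr = counties[fips]
        rows.append({
            "fips": fips,
            "county": attr["name"],
            "population": attr["population"],
            "cases": fact["cases"],
            "deaths": fact["deaths"],
            # Pre-compute per-population rates once, at load time.
            "deaths_per_100k": fact["deaths"] / attr["population"] * 100_000,
        })
    return rows

print(round(build_report_rows(counts, counties)[0]["deaths_per_100k"], 3))
```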

ETL, Data Lineage, and Data Munging
I've written on this topic in prior CODE articles (particularly back in 2017) and so I don't want to repeat large amounts of text. If you go to my CODE Magazine bio page (https://codemag.com/People/Bio/Kevin.Goff), there are multiple articles on Data Warehouse ETL techniques. Once again, I pay a lifetime of homage to the great works of Ralph Kimball, who wrote extensively on ETL and related activities.

In this reporting exercise, most of the ETL was basic. The only real issues I encountered with the data were that some state death counts had a county of "Unknown" and a few counties in the data feed did not have a FIPS code. I dealt with the former by spreading out the "unknown" case/death counts across the counties in that state by population. Admittedly, I took a shortcut with a hard-wired case statement to deal with the few counties that did not have a FIPS code. In an actual application, I'd want to use a lookup table.

Know the gaps and quirks in your source data. Here, I had to spread out NY cases by county based on population. I also had to spread out state cases with unknown counties by population. Recognize and account for it, document it, and make your key users aware of it.

There are great data profiling tools to spot data anomalies. No one should ever overlook their value, just like no one should overlook a tried-and-true data profiling approach: the good old-fashioned pair of eyes (and extra pair of eyes).

Sidebar: Get Your Popcorn Ready
No joke: Two to three times a year I'll take some of the more advanced or powerful dashboard applications and cut some 3-5 minute videos that walk through scenarios in the same way the major TV news analysts walk through voting results on election night. It's amazing how many things (little and not so little) I find. Then I'll put those videos into the hands of key business users. It's a technique that can help get users comfortable with your solutions.

Validation with Other Systems
I validate numbers constantly. Maybe it's part of my obsessive nature. When I started producing output for this reporting application, I checked other websites to make sure my numbers were the same or at least reasonably close.

Within an organization, rarely will an application exist in isolation of others. People will compare your output to other systems, general ledger numbers, etc. I highly encourage people to stay on top of data validation, as difficult as it can be.

Here's an example: Over the last year, I've been working on a costing application. Users of this application compare the results against data in their Great Plains accounting/financial modules. Because they'd been using Great Plains for years and this new costing application was the new kid on the block, it was my responsibility to research any differences in the numbers. As a result, I had to quickly learn how to manually query Great Plains tables. Fortunately, I found a fantastic website with valuable information on GP tables: https://victoriayudin.com/gp-tables/gl-tables/.

Version History (Set Preach On)
What color shirt were you wearing on the night of January 16th?

OK, that sounds like an odd question to ask. But here's a different one.

I see from your product price table that product XYZ has a price of $405, and was last updated on 1/16/2020. What was the price before that? And what was the price a year ago?

If the answer is something along the lines of the following, that's sad:

"Well, we'd have to restore a backup"
"We had a developer who suggested that, but that's just too much storage"
"Why? None of our users have ever asked for that"

That last one is particularly sad (and a response I've heard before). If you are waiting for users to suggest the end result of a practice you should know about, you might have made some deeper errors.

As recently as in the last five years, I've seen key data in regulated industries that doesn't store version history. Databases have had triggers for decades. Change Data Capture has been in systems for years. And if all else fails, an application can have its own home-grown process for storing version history. I try not to get preachy, but there is just no excuse for this. So repeat after me: store version history. Store Version History. STORE VERSION HISTORY!

This application makes use of version history: The user can view cumulative counts for any day since this crisis began.

Final Thoughts
This concludes my first version of this application, with more to come. Here are some random thoughts to close:

Our World Has Changed, but Good Hasn't
I'm 55 years old, which means I can remember the end of the Vietnam war, the handful of rough economic times, and certainly most CODE Magazine readers are old enough to recall the tragedy of 9/11/2001. I also remember the brief fear in spring 1979 when I lived about 30 minutes from the Three Mile Island nuclear power plant in Pennsylvania.

This situation is far different. Our world has changed dramatically. In another year, will our lives go back to the way they were? I have no way of predicting and won't offer any guesses. The old cliché of "Hope for the best, prepare for and expect the worst" seems as good advice as any.

I've re-learned some personal lessons from this and I'm sure others have as well. One is that small acts of kindness can still go a long way these days. Rod Paddock (CODE editor-in-chief) and I talked on the phone about a month ago. At the time I was quite down. If you've ever talked to Rod, you know that he can lift your spirits. Then about a week ago, he texted me and shared some good personal news he had, which made me feel happy. Again, small acts of kindness…

64 Stages of Data: A Playbook for Analytic Reporting Using COVID-19 Data codemag.com
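The "store version history" advice above can be sketched concretely. Here is a minimal, hypothetical Python example: the sample rows are invented, but they follow the column layout of the NYT us-counties.csv feed (date, county, state, fips, cases, deaths, with cumulative counts). By keeping every daily snapshot instead of overwriting the latest numbers, the counts for any past day remain queryable.

```python
import csv
import io

# Invented sample rows in the shape of the NYT us-counties.csv feed
# (columns: date, county, state, fips, cases, deaths; counts are cumulative).
SAMPLE = """date,county,state,fips,cases,deaths
2020-03-01,Westchester,New York,36119,1,0
2020-03-02,Westchester,New York,36119,4,0
2020-03-02,King,Washington,53033,14,5
"""

def load_history(text):
    """Store every daily snapshot instead of keeping only the latest counts."""
    history = {}
    for row in csv.DictReader(io.StringIO(text)):
        history[(row["date"], row["state"], row["county"])] = {
            "cases": int(row["cases"]),
            "deaths": int(row["deaths"]),
        }
    return history

def counts_on(history, date, state, county):
    """Answer 'what were the cumulative counts on day X?' from stored history."""
    return history.get((date, state, county))

history = load_history(SAMPLE)
print(counts_on(history, "2020-03-02", "New York", "Westchester"))
# → {'cases': 4, 'deaths': 0}
```

With history stored this way, a report can show cumulative counts for any day since tracking began, not just for the most recent data load.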
Beware of Third-Party Tools

If you've read my prior articles in CODE Magazine or you've attended my presentations, you might remember that I caution people about diving into third-party tools. Sometimes they are fantastic. Sometimes they are frustrating or worse. Sometimes you have to do some research to discover that they offer great functionality over the core product, but they are either missing a few key seemingly standard features or suffer from performance problems. I've evaluated several Power BI third-party visualization components over the last year with mixed results. There are some that offer great user experiences that I'd never be able to do quite the same with the regular Power BI visualizations, but also come with some nasty side-effects.

Microsoft offers some nice (and free) add-on visualizations for Power BI on their Marketplace website. I use some for commercial work but have chosen to keep this application free of third-party visualizations, at least for now.

How Can I Run This App? Where Can I Get the Necessary Files?

Because this application contains data from the public domain (i.e., nothing proprietary), Microsoft Power BI Pro allows me to publish this web app to an open public URL.

The URL is a long one, and because of the chance it could change, you can find it on the opening page of my website, www.KevinSGoff.net. My site also contains the necessary code to embed the application in an iFrame.

As of this writing, I'm manually republishing the data daily. In my next article, when I cover enhancements, I'll talk about setting up a data source and ETL process in Azure to do it automatically.

That future article will contain T-SQL code to deal with some allocation of data, and Power BI DAX code to deal with dynamic measures/calculations. My website (www.kevinsgoff.net) has information on the public version of this Power BI application, along with the Power BI Desktop (PBIX) file and other necessary files.

Data Sources

I used the following location on GitHub to pull down daily COVID-19 data. I'm not associated with this data source in any way: I've simply been using it to populate my sets of data for this application: https://github.com/nytimes/covid-19-data

Here are two links that talk further about the source of the data:

https://www.nytimes.com/interactive/2020/us/coronavirus-us-cases.html
https://raw.githubusercontent.com/nytimes/covid-19-data/master/us-counties.csv

Where Do We Go from Here?

I wrote this article to serve as a model (playbook) for creating analytic reporting applications. It's certainly not a complete list. Football teams put new plays in the playbook when they see a need or an opportunity. Additionally, teams win some games without using every play in the playbook, just like specific reporting applications use some of the plays but not others.

In the last year, I've made major changes in my life. I "retired" from all conference speaking and community speaking and from the Microsoft MVP program. My main area of focus is my family, my health, and my full-time job as a database specialist. However, I still plan to continue writing articles here and there, and will look to expand this project. I want to devote an article to incorporating Power BI content into a .NET application, and deploying this project to the cloud. As public sites continue to publish more and more county-wide data, I'll look to incorporate it.

Also, as I look back over this application, there are some features I'd like to add/change, particularly in the trending area and expanding the use of report tooltips. If I could add one additional report page for this article, it would be ranking states and counties by % of increase/decrease in cases/deaths, from the current week going back a week. Although I've included population density, I'd like to add some correlation coefficients to provide mathematical context to how strong a factor density plays in these numbers.

I'm sure others will look at this application and say, "I can think of a better way to do XYZ." Sometimes I'll look at how I'm presenting data and I'll say to myself, "How I'm doing this is okay, but I know there's a shorter and more elegant path." We all live in the world of Continuous Improvement. Until next time!

Kevin S. Goff
Talk to an RD
Tiberiu Covaci and Markus Egger

To continue our "Talk to an RD" column, Markus Egger and Tiberiu Covaci were supposed to get together at this year's RD and MVP Summit at Microsoft in Redmond. Those plans had to be changed in a hurry due to the current Coronavirus Crisis, but that didn't deter these two RDs from chatting about Azure and other topics. Markus and Tiberiu have been acquainted—even friends—for quite some time. Tibi (as Markus likes to call him) has long been involved with various communities and has organized various events, such as DevSum in Sweden. He even presented some of CODE Magazine's State of .NET Events. (Side note: We've moved our State of .NET events online for the time being. Live events will return once that's possible, but for now, there's a free monthly State of .NET event online, each focusing on a specific topic. Find out more at codemag.com/stateofdotnet.) It's therefore no surprise that these two RDs have a lot of interests in common, especially when it comes to cloud computing. Let's listen in!

The Microsoft Regional Director Program

The Regional Director Program, or RDs for short, is a program that allows Microsoft to identify influential individuals in an effort to give the community-at-large access to these individuals, and also to provide a point of communication and feedback into Microsoft. Regional Directors do NOT work for Microsoft (and they aren't paid for being part of the RD program), but they have a formal relationship with Microsoft that provides them with great insights and connections within Microsoft.

The Microsoft Regional Director website (https://rd.microsoft.com) defines the RD program in the following words:

"The Regional Director Program provides Microsoft leaders with the customer insights and real-world voices it needs to continue empowering developers and IT professionals with the world's most innovative and impactful tools, services, and solutions. Established in 1993, the program consists of 160 of the world's top technology visionaries chosen specifically for their proven cross-platform expertise, community leadership, and commitment to business results. You will typically find Regional Directors keynoting at top industry events, leading community groups and local initiatives, running technology-focused companies, or consulting on and implementing the latest breakthrough within a multinational corporation."

Regional Directors are expected to have deep technical understanding of many of the Microsoft technologies. Not just that, but RDs are expected to have an understanding of, and experience with, competing technologies. RDs are also expected to go beyond technical expertise and have considerable business expertise. Many RDs present at corporate events, advise governments and NGOs, and many similar scenarios.

Feel free to contact any of the Regional Directors to get access to an RD's expertise and a well-informed opinion that isn't shaped or influenced by having to go along with Microsoft's marketing speak. You can contact RDs to get advice and opinions on your projects regarding technical needs. You can contact them to help analyze how technology will influence your business. You can invite RDs to speak to your project stakeholders, board of directors, or corporate event. And you can contact your RD for many more scenarios.

Markus: Hello Tibi, good to see you!

Tiberiu: Good to see you too!

Markus: I know you usually travel the world, and you've worked and lived in all kinds of places. Where are you right now?

Tiberiu: I'm in Sweden. But yes, for the past 10 years, we've moved all over the world. Now we're back in Sweden, mainly because we have a baby daughter now.

Markus: Congratulations! That's very cool.

Tiberiu: Thank you.

Markus: I even met her last time we got together. It must have been a little over a year ago, I think.

Tiberiu: Yes. She is a year and three months and she keeps us on our tippy-toes. She started to run around the house. So she's the boss for sure. She's clearly making all the decisions in the house at this point.

Markus: That's good. (laughs)

Tiberiu: Yeah, exactly. (laughs)

Markus: We wanted to talk about Azure. You're very involved in Azure. You are a Regional Director, obviously, which is why we're talking, although I'm catching you on the tail end of being a Regional Director, as you're going to move on and join the "mothership" by going to work for Microsoft. From what I understand, you're moving into a role in the Azure team. Tell me about your involvement with Azure. I know you've been passionate about that for a long time.

Tiberiu: I've known about Azure ever since Microsoft started talking about it the very first time. Even before the PDC when they showed the very first data center in a container. I don't know if you remember. Now the show is called BUILD. It was in Los Angeles. I think it was 2009 or 2010 or something like that.

Markus: Yes, over 10 years ago now.

Tiberiu: Yes. A lot of years ago. And they were showing how they wanted to go to the cloud. And my very first thought about Azure was "nah... it's just Microsoft going into the data center business. I don't think they have a chance." I thought they would kill all their partners. You know, all the ISVs and all the other partners that are offering data centers. But then, I actually started to look at what they were offering. And I liked their approach, because, while other companies were starting from the infrastructure-as-a-service (IaaS) offering, Microsoft was thinking, "how can we solve the distributed computing problem?" and "How can we solve what kind of applications our customers are using?"

And now Microsoft, being the enterprise-friendly company that they are, I thought they'd have enough data to understand what the enterprise customer wants. They weren't so much looking into startups or into private people. But they were looking at how they can offer a cloud that is sustainable for an enterprise. So the evolution of it I really liked, because they had Web applications, databases, inter-process communication, that sort of thing. There were problems, but they had good solutions for that. And then we started giving feedback. In a way, we were the unhappy ones all the time. "Oh, but we need access to the machine


that runs my application." So there were always these kinds of things and Microsoft said, "okay, we're opening RDP for you" or "we're opening SSH for you." Then, "we're giving you a virtual machine so you can put whatever you want." So things progressed in a very iterative fashion.

And then they realized that those are good findings, and, based on that, they decided to rewrite the whole thing. And I think that was one of the best decisions ever made by a software company to say, "whatever we've done, we will scrap, and decide to start over with a clean slate." Nobody else does that. You know, I always advocated for something like that because the first version of every software application should be your playground. That's how you learn when you start writing something completely different. In that case, for the second version, you shouldn't patch the first one. You should just start with a clean slate. And they had the guts to do it, you know? And that coming from a company that wanted to keep compatibility with Windows 3.11 for a very long time. (laughs) That says a lot about how much Microsoft changed and how many things they did. So I came to actually like Azure a lot.

These days, one of the things I'm looking at is how developers are doing [on Azure], because I used to be a developer 100% of my time. I'm still doing it.

Markus: As of now, we'll still count you as a developer. Maybe next month when you're officially a Microsoft employee, we'll change our tune. (laughs)

Tiberiu: Yes. (laughs) Just this weekend I was in a hackathon for two straight days and I was working with a few people. We just created a peer-to-peer video communication platform. We had a central server to start the process and then once I started to talk with you, there's no server involved. So there's no bottleneck. There's no problem with infrastructure. It took me about, oh, I'd say, half an hour to deploy that to Azure. So the guy that did the application was running it on Heroku and I just said "okay, let's see how it works." And then my whole weekend went toward doing the whole Web pack and trying to do automatic deployments. Now, whenever I'm committing something, I automatically get the back-end application deployed. And that's a Node.js application. And the front-end deployed on a storage account, with a CDN [content delivery network]. And all that now happens by pushing to a Git branch.

Markus: What did you use as the underlying tech for the actual video stream?

Tiberiu: Web sockets. WebRTC actually. That's what one of the people involved in the hackathon chose to build the application. He's the creator of XSockets. He did that as a technology to be able to use sockets when Web sockets wasn't available everywhere. It's very nice. And that's the cool part. We wanted to have screen-sharing. You just open a new media stream. And then we said, "oh, we want to do file sharing," just on that media stream, and you send the file. It's very cool. That's the power of these technologies, platforms, and services we all now have available.

Markus: It sounds like a magazine article right there actually.

Tiberiu: That would be a good CODE Magazine article, yes.

Markus: You and I have worked together a lot as RDs. We don't have a formal relationship as such, but we're working together on various projects. You've even done a number of State of .NET events for us over in Europe. [State of .NET events are a series of free events sponsored by CODE Magazine. Recently, State of .NET has moved online with monthly free events. Find out more at codemag.com/stateofdotnet.] I know you've always done a lot of Azure content in those State of .NET events as well as other presentations. For a while, you've done a "five things everyone should know about Azure" series that evolved over time. Tell me a little bit about that. What are the latest five things everybody should know about Azure?

In this feature, we eavesdrop on a conversation between Tiberiu Covaci and Markus Egger, both seasoned Regional Directors.

Talk to Tiberiu about: Cloud Computing, Azure, Software Development, IoT, DevOps, and many other topics. You can contact him at [email protected]

Talk to Markus about: machine learning and AI, ASP.NET, HTML apps (including Angular, Vue.js, etc.), the cloud, Azure, Microservices, Windows applications, software strategy, and much more.

For over 20 years, Tiberiu has dedicated his life to improving the quality of the software written by other people by helping them to understand the technological choices they are presented with. His efforts include training, mentoring, advising, motivating, and working side-by-side with their development teams. He helped many of his partners and customers choose the right technological solution for the right problem. For his passion and dedication, Microsoft has presented Tibi with the Most Valuable Professional and the Microsoft Regional Director awards several years in a row. Since this conversation, he has joined Microsoft as a full-time employee.

Markus Egger is not just an RD, but as the founder and publisher of CODE Magazine, he's also directly associated with this publication. In his main job, he is the President and Chief Software Architect of EPS Software Corp. (a company better known for its CODE brands such as CODE Consulting, CODE Training, CODE Staffing, and CODE Magazine). He is also the founder of other business ventures, such as Tower 48, Wikinome, and others. In his own words, he spends his time "like most software developers, writing production code" both for consulting and custom app-dev customers, and also for his own companies. He has worked for some of the largest companies, including many Fortune 500 companies, such as Microsoft. Markus often takes on the role of a "CTO for hire."




Tiberiu: For the first two or three, it's relatively easy and little changes. You need to know about Web applications, databases, and storage. That's a very good starting point. And you know, when I start talking to people, the first thing they always want to know is "what would be the advantage?" That's always where it starts. When you move to Platform as a Service [PaaS], you shift the responsibility away from taking care of the underlying platform and you can concentrate on your application. There is some—let's call it "hygiene"—that's involved. You need to do some things a little bit differently, too. Like you don't have 100% control over all the things, but I would say in 99% of the cases, you don't really need that. There are very few edge cases where you really need to have control over those small things. Otherwise, there shouldn't be any need for that.

You start with an application and then you get benefits such as scaling. You get custom domains automatically. You get backup as part of it. Everything in your application is taken care of for you without you ever having to worry about it again, and that's a pretty big benefit, even for something as simple as a Web app. You can concentrate on deploying your application and that's it. And as you then start scaling your app, you get benefits, such as affinity to a certain machine being built in. If you scale to ten machines, or to two for that matter, one of the biggest challenges is always to make sure you can still have communications happening, especially if you use session cookies and stuff like that. Because you want to make sure that you talk with the same machine in those situations. Azure solves that problem for you and you never even have to worry about it in the first place. In a sense, it's as if this problem never existed.

Then, the second part is a database. As a developer, I just need to save the data somewhere. Why would I worry about creating the database? Why would I worry about issues such as a back-up? And moving the tapes of my back-up and all those headaches. In Azure, I can just say that I need the database and everything just works out of the box. And it doesn't cost a lot of money. The cheapest production database you can buy in Azure is about $15 per month, which is enough for a medium-sized application. And if you need more, you can just go and say, "okay, I want to pay more for a bigger and more powerful database because I have more customers." It's a simple turn of a dial.

The third part was the storage. Most of the assets that you're using in your Web application are static files. Why would you put the pressure on your web application when you can leverage a ready-made storage solution that does exactly that? Then, there are a number of things you can configure on top of a storage account. For instance, you can add a CDN on top of a storage account. You can even do things such as making sure storage happens in the right place geographically, either for legal reasons, or because you want to store close to the user for best performance. Those are all benefits that would be very difficult to reproduce without a large cloud infrastructure.

Markus: Subscribers of CODE Magazine see this in action, whether they know it or not, because digital CODE Magazine subscription files are served up by Azure storage. This hugely simplified things for us when we moved to that setup.

Tiberiu: Yes, that's a perfect example!

Those are three of the five services that I was talking about. Then things change a bit more as Azure evolves. We have stuff like Azure Functions. Functions are essentially a piece of code you need to run when something happens.

Markus: Like something that runs on a schedule, or something that runs as a response to an HTTP request, which could even be a different way to implement a REST API call.

Tiberiu: Exactly! Why would you have to worry about creating the virtual machine and having to pay for a virtual machine, when the only thing you should have to care about is your code running when X happens? You deploy your code, and you say to Azure "please do that for me." Whenever you have these events, Azure spins up the virtual machine or a container. It doesn't really matter what it spins up. All you care about is that your code responds and does its thing.

Markus: The whole point is that you don't have to worry what the server is. We refer to it as "serverless." It's a little bit of a misnomer in that sense. There certainly is a server. It's just that you don't care.

Tiberiu: Yeah, exactly. It should be called "I couldn't care less about the server." I think that would have been a better name than "serverless," but they never ask us about such things. (laughs) In the end, you have your code running as you would expect. And scalability happens automatically. You just pay for the time your code is actually running. The more requests, the more machines might be spun up for you. That's the price of doing the business. And at the end of the day, it's still cheaper than running a virtual machine all the time just in case something needs to run or something needs to be done.

Markus: Not to mention that Azure Functions provide a lot of extra features that would be difficult to create on your own. For instance, if your code is supposed to run reliably in response to an incoming email or text message, that, in itself, would otherwise be a serious development and quality control task.

Tiberiu: The last but not the least service that I'm talking about is Azure Key Vault. Because most of the time you need secrets. Think about it like this: If you want to configure your Web application to work with your SQL Server database, you need a connection string to get to it. You need to store a password somewhere. Often, that's done in clear text where, at the very least, it's visible to the administrator. That administrator can put it in the Web application, but those are credentials that the administrator should never get access to. The only one who needs access to those credentials should be your application. With Azure Key Vault, I can configure my Web application as a security principal and that Web application accesses the Key Vault. I don't have any administrator involved who needs to see a database. Of course, I can use the Key Vault for other things, but this is a perfect example and a very common usage scenario.

Now, doing the presentation, which I've done many times, there's one question that keeps coming up: "Do you really have to do everything manually?" That's because in doing the demonstration, I have to show every step and do it manually. I can, of course, automate things, but then I don't think it has the same effect for demonstration purposes. At that point, the overview of five services becomes six, because now you have Azure DevOps you can use, and now also GitHub Actions. I really love the simplicity of these services. In an automated fashion, you can create the template that deploys everything that you need. You can have an Azure DevOps pipeline created. It can take the code and it can do the build. It can copy the deployment. You can do whatever transformations you need, such as minification. Whatever you need to have done, it's done in a scripted, automated fashion.

Using Azure DevOps, you can even use features such as deployment slots. I can say my dev branch will always do a deployment on my dev environment. And the dev environment is more or less the same as my production environment. Then I have the possibility to run my tests on there.
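The push-to-deploy flow described here (commit to a branch, build, deploy to a slot) could look something like the following hypothetical GitHub Actions workflow. The app name, slot name, and secret name are invented placeholders, not details from the conversation:

```yaml
# Hypothetical workflow: every push to the dev branch builds a Node.js app
# and deploys it to a staging slot of an Azure Web App.
name: deploy-dev
on:
  push:
    branches: [dev]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: npm ci && npm run build
      - uses: azure/webapps-deploy@v2
        with:
          app-name: my-web-app            # placeholder app name
          slot-name: staging              # deployment slot, as discussed
          publish-profile: ${{ secrets.AZURE_PUBLISH_PROFILE }}
          package: .
```

Azure DevOps pipelines express the same idea in an azure-pipelines.yml file; in both cases the pipeline, not a person, performs the build and deployment on every commit.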




I have the possibility to do some integration tests and if I see that everything works, I can put it on staging. From staging, I can get the people to try it out. And then I use a slot and my live application is upgraded to a newer version right away.

There are a lot of things like that that will keep developers relevant. Because if you just ignore them, at some point you will have to do the task anyway. If you want your application to really use the power of the cloud, you need to start using at least some of those services.

Markus: It's a very interesting point to think about. How do you get started? Or perhaps "started" is the wrong word, but how do you move to that? We just had, as you know, a very interesting scenario that we'll be publishing articles about in the future here in the magazine, because we had a major ransomware attack. There's a lot of interesting stuff going on, starting with the fact that everything that we had on Azure was unaffected. It was a reversal of what people usually worry about. Normally they think "oh, I gotta be careful with security in the cloud." It was totally the opposite for us, because none of our cloud infrastructure was affected at all. And neither were the people working from home or off-site offices. As a result of all that, we decided to remove the other half of our infrastructure that we still had in our local data center. We decided to recover by moving it all to Azure. It was a wild two weeks because it wasn't a well-planned move in that sense. But it did work out pretty well in the end. I was surprised.

I know you were doing a lot of stuff with getting people to start moving into Azure and move existing infrastructure, right?

Tiberiu: Yes. I don't know if you've seen that, but especially now, with the current situation [the COVID-19 crisis]. People always ask, "what triggered you to move to the cloud?" You know, like the CIO or the CTO who's worried about COVID-19. In the past, people always thought you needed to be in the office. Now, all of that has changed. We see a surge of people who say, "what I need is for my employees to be able to open their laptop in a Starbucks and start working from anywhere." That's the new normal. More and more people have started thinking along those lines. Perhaps they aren't thinking about the Starbucks scenario, but they are thinking about working from home. Either way, if you do the one, the other will just follow. I don't think there'll be any difference because if you know it's safe to work from off-site, then it's as safe or as unsafe at someone's home as it is at Starbucks. I don't think there is too much of a difference.

Markus: I agree. I really think that more acceptance of working off-site will be a lasting change. When we had the 2008 financial crisis, we squeezed out a lot of efficiency from our business processes. Perhaps we've relapsed on some of those, but all in all, what will be the optimizations we implement to overcome the current crisis? It gets harder and harder to think about inefficiencies. One of the things I can think of is to eliminate the overhead people run into that is related to work, but inefficient. Things like hours of commuting to and from work are a huge waste. I see a lot of potential for becoming more competitive by having people dedicate more productive time to work while at the same time reducing the overall hours they spend on work and work-related tasks.

We always think of people working 40 hours a week. But when you count the commute to and from work, and you count that they were somewhere for lunch that was really just because they were at work in the office, and that they had to drive to and from day-care because they can't take care of their own children, and so on, then we probably get to something more like 60 hours a week. I think it would be better to reduce that to 50 hours but spend those 50 hours more productively.

Tiberiu: That's exactly the idea! How can you enable people to work with the things that they need to do? Microsoft did a lot of work around that lately.

Markus: How do businesses move toward that? What do you tell businesses that want to take their first step toward the cloud?

Tiberiu: Microsoft now has something called the Cloud Adoption Framework. You can think of it as a DevOps approach to moving to the cloud. If you want to start that process, Microsoft has a DevOps generator that will give you a starting point. You'll get all the activities with all the features and user stories and so on. You can have your Kanban board that includes things such as "you need to convince stakeholders," "you need to address those business people in order to make certain decisions," and so on. One of the things I love about that is that most companies think their needs are unique. And they are all different, of course. But when you move infrastructure to the cloud, there are certain steps almost every company has to take. This framework really helps with that.

I see a lot of the same patterns in many cases. A lot of customers say "we want to start with Software as a Service and if we can't find anything, we want to write our own using Platform as a Service. If we don't find that, then let's use Infrastructure as a Service." Guess what? 80% of people moving to the cloud start out moving to Infrastructure as a Service in spite of what the CEO and CTO decided. It happens on its own. Most people are in a rush, so they start moving the machines they have to the cloud as they are. The explanation is pretty simple. Most companies just don't have the time or means to do a full analysis to see if they can move to Platform as a Service.

Markus: We've had the same scenario. The hardest part for us to move after the ransomware attack we suffered was to move the database. We had a very, very large database, which wasn't really designed for the cloud but needed to get up and running. We had two really tough challenges. One was how to get this much data to the cloud. For us, there was little choice, because we were dead in the water. We uploaded to the cloud and however long it took, it took. Then, the second aspect was that it just wasn't designed for the cloud. We had to change some of our data structures. Not much, as it turned out. In the end, it was relatively easy. As a first step, we just spun up a VM and put it in there, as if it were its own SQL server. Then we changed data structures a little bit, and then broke the database apart a little bit, and cleaned it up a little bit while we were at it. And now it's fully on SQL Azure as a SaaS platform.

Tiberiu: Cool.

Markus: What do you see as the hardest part? Is it usually data? Or do you just see old apps that need to change?

Tiberiu: It's people. Still people. That's the biggest challenge I see with every company that wants to move to the cloud. Regardless of where the decision comes from, it's the people that resist the most. They make it so much harder in most situations. They don't want to change. I think that's part of human nature. What I've also seen is that a lot of them are very scared of change because they don't know what to expect. It's completely unknown territory and it's something completely different.

I think for me, at the beginning, it was pretty hard as well because the terminology is completely different. You need to get used to that. But then, when you start explaining, they get the hang of it pretty quickly. In Azure, unlike in conventional data centers, it's not about hardware anymore. Everything is defined by software. At the end of the day, things
Talk to an RD: Tiberiu Covaci and Markus Egger 69



are still the same, at least from an operational standpoint. Of course, if you talk about monitoring, if you talk about networking, and so on, there are other tools that you need to put in place. They're still the same kinds of ideas. Then, of course, you talk with people, and they still think, "oh, I need my DMZ." And I'm like, "are you sure that's what you need?" because I'm not really sure.

Markus: You're applying your old concepts to a new setup. And some of those concepts may just not apply anymore.

Tiberiu: They used to work with those concepts and that's why they want to keep it like that. That's the biggest challenge. I don't think it's a technological one. It's people.

Another problem that I see quite often is that they're afraid that if they move a machine, they don't know what all the dependencies are. Most big organizations have a mess of IT. I think there are very few where that isn't the case. There are very few companies that have a complete list of not just all their servers and what's on them, but also what accesses them and what might break if they're moved. There are tools out there that can help with that. They can detect how the machines are working. They can give you a fair picture about what you have. But still, companies are often in a situation where they don't really have everything they need. In most cases, you need to make sure that when you move something, you're able to move back quickly when you discover that you broke something.

Markus: Which is pretty much what the case was for us. I mean, obviously, because our ransomware attack wiped out every single server we had, and because we had to preserve that for forensic analysis, we had no choice. We had to move forward. It was challenging. Getting the major pieces up and running was not so difficult. We had our external facing things, like websites and so on, up and running relatively quickly. But then it was the little internal details that you just didn't even remember that were the hard parts. Although we had some pretty good docs overall, we didn't have all those details well-documented. Like the little WinForms tool that somebody wrote themselves 15 years ago that was still in use and had access to the database in some way. And you just forgot about that and now it didn't work anymore, and there was really only one person in the company that even still used that.

But I also have to say, when we looked at this maybe two or three years ago, when we started our move to Azure, we identified that about 50% of our infrastructure was easy to move and we moved it relatively quickly. The other 50% was a different story. And it isn't even that we knew it was going to be hard to move. It's more like we didn't know how hard it was going to be. We had a hard time assessing what pieces there are that we don't really know about or that you forget about or that aren't really worth moving. Due to our attack, we were forced to make those changes and take the plunge. And I'm glad, in a way, that we did because we're probably better off today. But we went through a few rough weeks to make that all work.

You know, one of the biggest drivers today is that people are at the end of their contract cycle with their current data center. Or maybe they need to buy new hardware and decide to move to the cloud instead. So three or five years have gone by and they need to switch to new machines, or they push to the cloud instead.

The very first cloud customer I had I actually managed to scare away. (laughs) They decided to move to another data center, not to Azure. I think that that was my mistake. I went to them and showed them as much of Azure as possible in all its beauty. I think to them, it was overwhelming. They just wanted to move some servers and some websites. And I'm like "wait a second. If you do that, you can use the service directly," and then all of a sudden, it was too much for them. So they decided to just move to a local data center. That was a hard lesson to learn, because it was a project on the order of 20 million pounds by the time it was all said and done. So that was a bit of an eye opener for me, because I thought that if I like it, everybody will like it, especially if you talked nicely about it. But people were still hung up on their own technology and their own way of thinking. And it's easy to overwhelm them with everything Azure can do.

Tiberiu: I hear that.

Markus: "Eye opening" is a good term. It was really eye opening for us to have a scenario where we needed to do this for ourselves. Usually, we do it for customers and help them; when it's their pain, that doesn't seem so bad, right? But when all of a sudden it's your own organization, it truly is an eye opener. And that's good, because it really helped us in supporting our customers better.

Hopefully you will still be able to do that. I know in your new role at Microsoft you're not going to be working directly with customers in a consulting fashion anymore, but you're still doing the same thing for Microsoft essentially when you join them, right?

Tiberiu: One of the things that I will have to do is not only excite developers but more of a general audience of stakeholders. It might involve me talking to the customer. Maybe not doing the groundwork but advising them on how to move. At least having initial discussions and trying to understand what they need and maybe explaining to them how I can help. It will still be an interesting position. So I'll see.

Markus: Excellent. I wish you the best of luck with all that. It sounds pretty exciting.

It's getting late where you are. I'm just getting started with my day here. This is definitely the most geographically remote RD Talk I've ever done. We are pretty much exactly on opposite ends of the planet, with a 12-hour time difference. That in itself is very cool and shows how far we've come. It's been great talking to you.

Tiberiu: Likewise.

Markus: I'm sure the times will come again when we can all meet in person. Maybe at one of the conferences you were involved with, or at Microsoft.

Tiberiu: I'll still do my conference. [Tibi organizes DevSum in Sweden] We are going to do it digitally this year. There are about 15 to 17 speakers from Sweden. Most of them are from Stockholm. We also have some international speakers. Tim Huckaby, our fellow RD [and former participant in this column], is going to do our keynote remotely.

Markus: It's great that you're still doing it. Keep pushing forward!

Tiberiu: Yeah, exactly. We have to.

Markus: So is Microsoft with BUILD online and other online events. There will be different kinds of events, but I think there will also be interesting ideas coming out of those.

Tiberiu: That's right. Microsoft has announced they're moving all their events online between now and the summer of 2021.

Markus: I hope I will see you sooner than that. And best of luck with your new role within Microsoft.

Tiberiu: Thank you.

Markus Egger



ONLINE QUICK ID 2008081

Using a Scripting Language to Develop Native Windows WPF GUI Apps
In this article, I'm going to talk about developing a WPF (Windows Presentation Foundation) GUI using a scripting language. Why would we use a scripting language for making a graphical interface? There are several benefits:

• The GUI functionality can be changed at runtime, without recompiling the binary.
• The GUI can have a different customization for the same binary.
• Programming can be done much quicker: C# implementation that may require many lines of code can be embedded in just one or two scripting language statements.

Vassili Kaplan
[email protected]

Vassili is a former Microsoft Lync developer. He's been studying and working in a few countries, such as Russia, Mexico, the USA, and Switzerland. He has a Masters in Applied Mathematics with Specialization in Computational Sciences from Purdue University, West Lafayette, Indiana and a Bachelor in Applied Mathematics from ITAM, Mexico City. In his spare time, Vassili works on the CSCS scripting language. His other hobbies are traveling, biking, badminton, and enjoying a glass of a good red wine. You can contact him through his website: http://www.iLanguage.ch or e-mail: [email protected]

As a scripting language, I'm going to use CSCS (Customized Scripting in C#). This is an easy and lightweight open-source language that has been described in previous CODE Magazine articles: https://www.codemag.com/article/1607081 introduced it, https://www.codemag.com/article/1711081 showed how you can use it on top of Xamarin to create cross-platform native mobile apps, and https://www.codemag.com/article/1903081 showed how you can use it for Unity programming. CSCS resembles JavaScript with a few differences, e.g., the variable and function names are case-insensitive.

"A picture is worth a thousand words. An interface is worth a thousand pictures." (Ben Shneiderman)

You're going to see how to integrate scripting into a WPF App and use it for different tasks:

• Creating new GUI widgets
• Moving/showing/hiding existing widgets
• Dynamically modifying the GUI
• Providing some non-GUI functionality, e.g., working with SQL Server
• Creating new windows from XAML files on the fly
• Automatic binding of the Window and Widget events

Using a scripting approach significantly decreases development time. What could've taken many lines of code in C# takes just one command in CSCS, or even nothing at all. For example, there's no need to bind a widget or a window event to the event handler. This binding is created by the CSCS Framework behind the scenes. The developer just needs to fill out the corresponding function, which will have an empty body by default. You'll see some examples of that later on.

Also, it takes much less code to access the SQL Server, to get data from it, and to update existing SQL tables using CSCS than C#. The CSCS Framework is open source and you're free to use and modify it as you wish. The framework GitHub link is provided in the sidebar. You can add some missing events or widgets there in the same way it's done with other widgets. Similarly, you can add other databases, like Oracle, or add any missing database functionality. Let's start with setting up a sample project.

Setting Up Scripting in a WPF Project

Note that I used Visual Studio 2019 Express Edition for all of the examples in this article.

To set up CSCS scripting in a WPF project, first open a new WPF project from Visual Studio [by selecting File > New Project… > WPF App (.NET Framework)]. After that, you can add the CSCS framework manually to the project. The CSCS framework is open source and can be downloaded from https://github.com/vassilych/cscs.

Alternatively, you can download an existing WPF sample project from https://github.com/vassilych/cscs_wpf with everything already set up, and start playing with it by changing the GUI, some parameters, etc. It also has a few sample scripts.

After that, your Visual Studio main view will look similar to the one shown in Figure 1.

How Scripting is Triggered in a WPF Project

The code that initializes the CSCS scripting engine and starts it up is in the MainWindow.xaml.cs file, in the MainView_Loaded() method. As its name suggests, this method is triggered after the main view is loaded; then you can do some GUI adjustments, subscribe to the GUI events, etc. Here is this method:

void MainView_Loaded(object sender,
    RoutedEventArgs e) {
  CSCS_SQL.Init();
  CSCS_GUI.MainWindow = this;

  var res = this.Resources;
  var cscsScript = (string)res["CSCS"];

  Console.WriteLine("Running CSCS script: " +
    cscsScript);
  CSCS_GUI.RunScript(cscsScript);
}

The name of the CSCS scripting file to run is taken from the resources. Specifically, it's defined in the MainWindow.xaml file. Here's an example of how it can be defined:
72 Using a Scripting Language to Develop Native Windows WPF GUI Apps codemag.com
Figure 1: Microsoft Visual Studio 2019 WPF Project with integrated CSCS Scripting

<Window.Resources>
  <sys:String x:Key="CSCS">
    ../../scripts/start.cscs</sys:String>
</Window.Resources>

CSCS scripts are first-class citizens of the WPF projects and are included in the scripts folder on the top level (see the right side of Figure 1 for details).

The contents of the start.cscs script is rather straightforward: it just says what script to run:

StartDebugger();
include("wpfgui.cscs");

The StartDebugger() statement is optional. It starts the scripting debugger so that you can connect and debug scripting from Visual Studio Code. A previous CODE Magazine article explains its usage: https://www.codemag.com/Article/1809051. Basically, you need to download the Visual Studio Code CSCS Debugger and REPL extension from the marketplace.

The real action happens in the wpfgui.cscs file. This way, you can change what file to include in the start.cscs file if you work with different projects.

General Structure of a WPF Project with Scripting

The general structure of a WPF project with scripting is similar to a plain vanilla WPF project with this exception: You create the GUI in MainWindow.xaml as usual, but you don't create any event handling. All this will be handled by the CSCS GUI module. This GUI module is part of the CSCS project and it's called CSCS_GUI.cs.

The GUI widgets can also be created purely in CSCS and you'll see an example of this later on.

It's a good idea to take a look at the CSCS_GUI.cs file at the GitHub location mentioned in the sidebar. You will see how the C# widget events are hooked to the CSCS methods.

It works as follows: First you define a widget in MainWindow.xaml. The relevant parameter is DataContext in the widget definition. For instance, if a new button is defined as follows:

<Button Content="Open File" HorizontalAlignment="Left"
  Width="75" Name="button1" DataContext="myButton"/>

Then a few event handlers in CSCS are created automatically by the CSCS GUI module. Some of these event handlers are the following:
• myButton@click: This function will be triggered when the user clicks on the button.
• myButton@preclick: This function will be triggered when the user presses the mouse on the button.
• myButton@postclick: This function will be triggered when the user releases the mouse on the button.
• myButton@mouseHover: This function will be triggered when the user's mouse is over the button.

The event handlers will be created automatically, e.g., if you define a myButton widget, myButton@Click will be triggered as soon as the user clicks on myButton.

For a text field, there are different event handlers. For instance, if you have this definition of a text field:

<TextBox HorizontalAlignment="Left"
  Width="75" Name="textfield1"
  DataContext="myTextField"/>

Some of the event handlers created by the CSCS GUI module are going to be:

• myTextField@textChange: This function will be triggered when the user types anything in the text field.
• myTextField@keyDown: This function will be triggered when the user presses any key.
• myTextField@keyUp: This function will be triggered when the user releases any key.

For a combo-box (a drop-down widget), a typical automatically created event handler is:

• myCombobox@selectionChanged: This function will be triggered when the user selects an entry.

Even though the event handlers will be created by the CSCS framework, the function bodies, by default, will be empty. In the next section, you're going to see how these event-handling functions can be filled with some useful stuff in CSCS code.

Hello World in WPF with Scripting

Let's take a look at a relatively simple GUI example. Let's create a GUI in Visual Studio by dragging and dropping different widgets, as shown in Figure 2.

What you're going to do in CSCS is to add some life to the GUI created in Figure 2. First, initialize the global data and populate both combo boxes with data as follows:

comboItems = {"white", "green", "red", "yellow",
    "black", "pink", "violet",
    "brown", "blue", "cyan"};
AddWidgetData(comboBGColors, comboItems);
AddWidgetData(comboFGColors, comboItems);
count = 0;

Next, add the event handlers:

function comboBGColors@SelectionChanged(
    sender, arg) {
  SetBackgroundColor("buttonUpdater", arg);
}
function comboFGColors@SelectionChanged(
    sender, arg) {
  SetForegroundColor("buttonUpdater", arg);
}
function buttonUpdater@Clicked(
    sender, load) {
  SetText("textArea", "Hello, world " +
    (++count));
}
function buttonQuestion@Clicked(
    sender, load) {
  result = MessageBox("Do you like my app?",
    "My Great App", "YesNoCancel", "question");
  SetText("labelAnswer", result);
}

CSCS is case-insensitive. Therefore, the method signature buttonUpdater@Clicked is the same as Buttonupdater@clicked.

The first two functions are triggered when the combo-box values change. They update the colors of a button. The third function updates the value of a textbox with a global variable counter, being incremented on each click. The fourth function shows a modal message box and waits until the user clicks on one of its buttons. The very moment of this message box being shown to the user is shown in Figure 2. The name of the button that the user clicked on is shown in the labelAnswer widget.

Note that you only need to implement the event handlers; binding them with the actual events will be taken care of by the CSCS Framework.

Figure 2: WPF Hello, World! With Scripting
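The text-field events listed earlier work exactly the same way. Here is a small sketch; the widget names myTextField and labelEcho are hypothetical (they assume a TextBox and a Label with those DataContext values in MainWindow.xaml), and the exact content of the arg parameter may differ, so check CSCS_GUI.cs before relying on it:

```cscs
// Assumes MainWindow.xaml defines a TextBox with
// DataContext="myTextField" and a Label with
// DataContext="labelEcho" (both names are made up here).
function myTextField@textChange(sender, arg) {
  // Mirror the text box contents into the label.
  SetText("labelEcho", GetText("myTextField"));
}

function myTextField@keyUp(sender, arg) {
  // Log every released key to the Output Window.
  print("Key released: " + arg);
}
```

As with the button handlers, you only write the function bodies; the CSCS GUI module wires them to the actual WPF events.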
SQLConnectionString(connString)
  Initializes the database and sets it to be used with subsequent DB statements. The connString is in a standard SQL initialization form.
  Example: SQLConnectionString("Server=myPC\\SQLEXPRESS;Database=CSCS;User Id=sa;Password=sqlpassword;");

SQLTableColumns(tableName)
  Returns a list of all column names of a given table.
  Example: cols = SQLTableColumns("Users");

SQLQuery(query)
  Performs a select statement and returns an array with the results.
  Example: results = SQLQuery("SELECT TOP 5 * FROM Users where id <= 100");

SQLNonQuery(sqlStatement)
  Performs a non-query statement, like insert, delete, etc., and returns the number of records affected.
  Example: SQLNonQuery("Delete from Users where id=10");

SQLInsert(tableName, colNames, data)
  Inserts data from the data CSCS array into the table.
  Example: data = {"John", 5000, "[email protected]"};
  result = SQLInsert("Users", "Name,Salary,Email", data);

BindSQL(widgetName, tableName)
  Binds a WPF widget to a given SQL table. After this statement, the widget is going to have the data from the passed table.
  Example: BindSQL("myGrid", "Users"); // myGrid is a WPF DataGrid

Table 1: CSCS SQL Server functions
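Taken together, the functions from Table 1 can be chained in a few lines of CSCS. The sketch below just combines the table's own examples; the server name, credentials, and table are placeholders you'll need to adjust:

```cscs
// Placeholder connection string; adjust for your server.
SQLConnectionString("Server=myPC\\SQLEXPRESS;" +
  "Database=CSCS;User Id=sa;Password=sqlpassword;");

cols = SQLTableColumns("Users");    // list of column names
print(cols);

results = SQLQuery("SELECT TOP 5 * FROM Users");
print(results);                     // header row first, then data
```

Note that SQLConnectionString() must run before any other SQL function, because the other calls reuse the stored connection string.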

That's it—as you can see, there would be much more coding if the same functionality were implemented directly in C#. The whole source code for this example can be consulted on GitHub (see the links in the sidebar).

Using SQL Server in Scripting

In this section, you're going to see how to use SQL Server together with WPF and scripting. You don't need any extra include statements in the scripting. Do make sure that the System.Data.dll is included in the project references (it's already included in the sample project).

Table 1 contains the available SQL functions with corresponding examples. You'll also see more examples below.

Let's see an example of using the SQL functions described above. For the example, create the following table in your SQL database:

CREATE TABLE [dbo].[Users](
  [Id] [int] IDENTITY(1,1) NOT NULL,
  [Name] [varchar](255) NULL,
  [Salary] [real] NULL,
  [Email] [varchar](255) NULL,
  [Created] [datetime] NOT NULL,
  PRIMARY KEY CLUSTERED ([Id] ASC)
  WITH (PAD_INDEX = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Users] ADD DEFAULT
  (getdate()) FOR [Created]

Using the SQL statement above, you're making sure that the Id field is autogenerated by incrementing the previous Id and the Created field is autogenerated from the current date.

Then create the GUI in Visual Studio, as shown in Figure 3. The data table in the middle is a DataGrid. The XAML file that contains this example is available in the sample WPF project on GitHub.

Figure 3 shows the moment after the user clicked on the Add Data button, but before the data grid has been auto-populated. After adding an entry from the GUI, the SQL table Users will look like Figure 4.

Now let's see how this is implemented in CSCS. Let's start with the implementation of the function when the user clicks on the Refresh button, which fills out the table in the GUI in Figure 3:

function buttonRefresh@Clicked(sender, load) {
  BindSQL("myGrid", "Users");

  query = "SELECT TOP 15 * FROM Users " +
    "where id <= 100";
  print(SQLQuery(query));
}

As you can see, it's pretty short. But actually it's even shorter than it looks; to fill out the table, you need just one statement: BindSQL("myGrid", "Users"). It will populate the DataGrid with the contents of the Users table, setting the header row with the column names from the SQL Server database defined earlier with the SQLConnectionString() CSCS function.

The other two statements are there just for illustrative purposes—they show what happens if you want to run a SQL Server select query. The result of that query is printed in the Output Window in Visual Studio. This result is a list consisting of the following entries:

{{Id, Name, Salary, Email, Created},
 {1, John, 20000, [email protected], 9/10/19 8:11:43 PM},
 {2, Juan, 15600, [email protected], 9/10/19 8:11:43 PM},
 {3, Ivan, 14900, [email protected], 9/10/19 8:11:43 PM},
 {41, Johannes, 12345, [email protected], 3/19/20 2:05:59 AM}}

The first list entry is the header (column names in the corresponding database table) and the rest are the table values (corresponding to the actual values, as shown in Figure 4).
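Because SQLQuery() returns an ordinary CSCS list with the header row as its first element, the script can post-process results directly. The sketch below mirrors the indexing (data[0]) and Size (selected.Size) syntax used elsewhere in this article, but the exact list API is an assumption; verify it against the CSCS sources:

```cscs
results = SQLQuery("SELECT TOP 15 * FROM Users where id <= 100");
print("Columns: " + results[0]);    // header row

// Print the Name column (index 1) of each data row.
for (i = 1; i < results.Size; i++) {
  row = results[i];
  print("Name: " + row[1]);
}
```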
Figure 3: WPF with scripting and SQL Server example

Figure 4: The contents of the Users table in the SQL Server Database

When the user clicks on the Add Data button, the following CSCS function is triggered:

function buttonAddData@Clicked(sender, load) {
  data = {GetText("textName").Trim(),
    GetText("textSalary").Trim(),
    GetText("textEmail").Trim()};
  if (data[0] == "" || data[1] == "" ||
      data[2] == "") {
    MessageBox("Please fill out all fields",
      title, "OK", "warning");
    return;
  }
  try {
    result = SQLInsert("Users",
      "Name,Salary,Email", data);
    MessageBox("Inserted " + result +
      " row(s).", title, "OK", "info");
    buttonRefresh@Clicked(sender, "");
  } catch (exc) {
    MessageBox(exc, title, "Cancel", "error");
  }
}

This function explains why Figure 3 has the message box saying that a new entry was inserted even though the DataGrid in the GUI hasn't been updated; this is because the buttonRefresh@Clicked() method is called after that. If you want to change this functionality and update the DataGrid first, simply put the buttonRefresh@Clicked() method before triggering the MessageBox.

Note that the data array containing the rows to be inserted has only Name, Salary, and Email columns, even though the Users table has two more columns (see Figure 4). This is because of the way the table was created: the first column, ID, will be auto-incremented with each insert, and the last column, Created, will automatically contain the current timestamp.

Finally, here is the implementation of the CSCS function triggered when the user clicks on the Delete Data button:

function buttonDeleteData@Clicked(sender, load)
{
  selected = GetSelected("myGrid");
  if (selected.Size == 0) {
    MessageBox("Nothing has been selected.",
      title, "OK", "warning");
    return;
  }

  deleted = 0;
  for (row : selected) {
    deleted += SQLNonQuery(
      "Delete from Users where id=" + row);
  }
  MessageBox("Deleted " + deleted + " row(s).",
    title, "OK", "info");
  buttonRefresh@Clicked(sender, "");
}

GetSelected(widget) is a CSCS function that returns a list of all rows being selected. In this example, you iterate over all selected DataGrid rows and run a Delete statement for each of them.

If there are many rows selected, it would make sense to run the Delete statement just once for all of them. In this case, you can slightly modify the code above in the Delete statement by including all the selected rows and then concatenating them with an "or".

Implementation of SQL Functions in C#

Let's see how the SQL functions shown in the previous section are implemented in C#. The shortest one is the SQL non-query function:

class SQLNonQueryFunction : ParserFunction {
  protected override Variable Evaluate(
      ParsingScript script) {
    var args = script.GetFunctionArgs();
    var query = Utils.GetSafeString(args, 0);

    using (SqlConnection con = new
        SqlConnection(CSCS_SQL.ConnectionString)) {
      using (SqlCommand cmd =
          new SqlCommand(query, con)) {
        con.Open();
        // Return the number of affected records,
        // as documented in Table 1.
        return new Variable(cmd.ExecuteNonQuery());
      }
    }
  }
}

To bind it with the Parser, register this function in the initialization phase, as follows:

ParserFunction.RegisterFunction("SQLNonQuery",
  new SQLNonQueryFunction());

That's it. As soon as the parser encounters the "SQLNonQuery" token, it triggers the execution of the Evaluate() method on the SQLNonQueryFunction shown above. The script.GetFunctionArgs() method extracts all of the parameters passed to the CSCS "SQLNonQuery" method.

The implementations of other functions, like SQLQuery(), SQLInsert(), etc., are similar. I encourage you to consult the source code and take a look at other implementations as well.

Creating, Showing, Hiding, and Moving Widgets with Scripting

Using the CSCS Scripting language, it's easy to implement most of the GUI related functions. Here's an example of showing and hiding a widget when the user clicks on a button:

function myButton@Clicked(sender, load) {
  if (showCounter % 2 == 1) {
    ShowWidget("gridTable");
  } else {
    HideWidget("gridTable");
  }
  showCounter++;
};

Similarly, here is a function to add a new button:

function addButtons@Clicked(sender, load) {
  name = "newButton" + newWidgets;
  text = "New Button" + (newWidgets++);
  AddWidget(name, "button", text, x, y,
    width, height);
  x += 100;
}

The C# implementation of all of these GUI functions is similar. Here is an implementation of a function to show or to hide a widget:

class ShowHideWidgetFunction : ParserFunction
{
  bool m_showWidget;
public ShowHideWidgetFunction(bool showWidget)   <Label Content = "Window1" 
{ HorizontalAlignment="Left" 
m_showWidget = showWidget; Margin="48,10,0,0" 
} VerticalAlignment="Top"
protected override Variable Evaluate(   Height="44" Width="76"/>
ParsingScript script) {   </Grid>
var args = script.GetFunctionArgs(); </Window>
var name = Utils.GetSafeString(args, 0);
Control widget = CSCS_GUI.GetWidget(name); This window can be added either as a modal window (i.e.,
having a parent window) or a stand-alone window. To add
widget.Visibility = m_showWidget ? this window as a modal window, execute this CSCS state-
Visibility.Visible : Visibility.Hidden; ment:
return new Variable(true);
    }
}

To register the implementation above with the parser, the following statements must be executed at start-up time:

ParserFunction.RegisterFunction("ShowWidget",
    new ShowHideWidgetFunction(true));
ParserFunction.RegisterFunction("HideWidget",
    new ShowHideWidgetFunction(false));

The implementation of other GUI functionality, such as removing or moving widgets, is very similar and can be found by consulting the source code.

Creating New Windows Dynamically from XAML Files

Another useful feature of using a scripting language for GUI development is the possibility of creating new GUI windows at runtime with just a few scripting commands.

For example, you can add any new window to an existing application if you have this window sourced as a standard WPF XAML file. This XAML file can be created using Visual Studio or any text editor. Note that the XAML file doesn't have to be compiled into the executing program but can be added dynamically on the fly.

Here is an example of such a XAML file containing a window definition:

<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:local="clr-namespace:WpfCSCS"
        Title="Window1" Height="350" Width="500">
  <Grid Margin="0,0,230.6,172">
    <Button Content="Button1"
            HorizontalAlignment="Left"
            Margin="48,68,0,0"
            VerticalAlignment="Top" Width="75"
            DataContext="Window1Button"/>
  </Grid>
</Window>

To show a modal window based on this XAML file, execute this CSCS statement:

win = ModalWindow(pathToWindowXAMLFile);

Optionally, you can provide a parameter indicating the window's parent (by default, it's the main window). To create a stand-alone window from a XAML file, execute this CSCS statement:

win = CreateWindow(pathToWindowXAMLFile);

Both of these functions, ModalWindow() and CreateWindow(), compile the XAML file on the fly and show the user the corresponding window GUI.

Once a new window is created, there are a few event handlers bound to the window creation and destruction events. These event handlers are created automatically for each new window created from a XAML file.

For instance, if you want some code executed when the user closes a window, implement a function called WindowName_OnClosing() (by default, it has an empty body). Here is an example of such a function implementation:

function Window1_OnClosing(sender, load)
{
  result = MessageBox(
    "Do you want to close this window?",
    title, "YesNo", "info");
  return result != "Yes";
}

If this function returns true, the window closing is canceled.

Similarly, the following functions can be implemented in CSCS:

• WindowName_SourceInitialized (corresponds to the window SourceInitialized event)
• WindowName_Activated (corresponds to the window Activated event)
• WindowName_Loaded (corresponds to the window Loaded event)

References

CSCS Language E-book: https://www.syncfusion.com/ebooks/implementing-a-custom-language
CSCS WPF Sample Project: https://github.com/vassilych/cscs_wpf
CSCS Repository: https://github.com/vassilych/cscs
CSCS Visual Studio Code Debugger Extension: https://marketplace.visualstudio.com/items?itemName=vassilik.cscs-debugger
Windows Lifetime GUI Events: https://docs.microsoft.com/en-us/dotnet/framework/wpf/app-development/wpf-windows-overview#window-lifetime-events
78 Using a Scripting Language to Develop Native Windows WPF GUI Apps codemag.com
• WindowName_ContentRendered (corresponds to the window ContentRendered event)
• WindowName_Closing (corresponds to the window Closing event)
• WindowName_Deactivated (corresponds to the window Deactivated event)
• WindowName_Closed (corresponds to the window Closed event)

A good explanation and overview of the Windows lifetime events mentioned above can be found here: https://docs.microsoft.com/en-us/dotnet/framework/wpf/app-development/wpf-windows-overview#window-lifetime-events.

Wrapping Up

You need just two steps to make any C# functionality available in CSCS. First, implement a class deriving from the ParserFunction class and override its Evaluate() method. Second, register the new class with the parser using the ParserFunction.RegisterFunction() method.

In this article, you saw how to use a scripting language to change a WPF GUI at runtime and also how to add muscle to a XAML file created with Microsoft Visual Studio.

Basically, CSCS is a functional language; with just a few CSCS statements, you can achieve something that would have taken you tens or even hundreds of lines of C# code.

The code shown in this article is an invitation to explore. After reading it, I hope you can extend the GUI functions shown in this article and create new ones. If something isn't available yet in the sample project, you can implement any GUI-related functionality in CSCS similar to what I did in this article.

I'm looking forward to your feedback: Tell me what you create with CSCS and WPF, or what other features in CSCS you would like to see.

Vassili Kaplan

CODE COMPILERS

Jul/Aug 2020
Volume 21 Issue 4

Group Publisher: Markus Egger
Associate Publisher: Rick Strahl
Editor-in-Chief: Rod Paddock
Managing Editor: Ellen Whitney
Content Editor: Melanie Spiller
Editorial Contributors: Otto Dobretsberger, Jim Duffy, Jeff Etter, Mike Yeager
Writers In This Issue: Markus Egger, Kevin S. Goff, Vassili Kaplan, Julie Lerman, Sahil Malik, John V. Petersen, Paul D. Sheriff, Helen Wall, Shawn Wildermuth
Technical Reviewers: Markus Egger, Rod Paddock
Production: Franz Wimmer, King Laurin GmbH, 39057 St. Michael/Eppan, Italy
Printing: Fry Communications, Inc., 800 West Church Rd., Mechanicsburg, PA 17055
Advertising Sales: Tammy Ferguson, 832-717-4445 ext 26, [email protected]
Circulation & Distribution: General Circulation: EPS Software Corp.; Newsstand: The NEWS Group (TNG); Media Solutions
Subscriptions: Subscription Manager Colleen Cade, [email protected]

US subscriptions are US $29.99 for one year. Subscriptions outside the US are US $49.99. Payments should be made in US dollars drawn on a US bank. American Express, MasterCard, Visa, and Discover credit cards accepted. Bill me option is available only for US subscriptions. Back issues are available. For subscription information, e-mail [email protected].

Subscribe online at www.codemag.com

CODE Developer Magazine
6605 Cypresswood Drive, Ste 425, Spring, Texas 77379
Phone: 832-717-4445

(Continued from page 80)

4. Build Competent Software

This is a phrase I have coined: Competent Software. Competency is about having the necessary things to be successful. For example, to be a competent lawyer, among other things, you need to graduate from law school, qualify for, sit for, and pass the bar exam, fulfill your annual continuing education requirements, abide by the rules of professional responsibility, and not be suspended/disbarred from the practice of law. That is a nice checklist of items. I have the following checklist of items for Competent Software:

• Delivery: Delivery is a feature. Unless people can use and interact with your software, it's useless and of no value.
• Capable: Your software must be built for a specific purpose. It must be able to do what is required of it by the business. If it can't, it's useless and of no value.
• Reliable: Your software must work. It must be able to handle exceptions and, at the very least, degrade gracefully. If a dependency isn't available, it's okay that your software doesn't work. The question is, "How does your software respond?" Or does it just react with a cryptic error message? If your software can't be depended upon to work, it's useless and of no value.
• Scalable: If your software needs to be available to thousands of users, then it must be designed, built, and deployed with that capability. If it isn't, although any single user may find value, to the business that sponsored the software, it's useless and of no value.
• Verifiable: It's not enough that your software works. You must be able to demonstrate that it works under friendly and adverse conditions. This is what testing is all about. We also verify through metrics and instrumentation. If you don't test completely and correctly, and log, instrument, etc., you're throwing caution to the wind. Your software might work. Or you may have a Galloping Gertie on your hands. In many cases, SOX, SOC, FDA, and other regulations require a complete, working, and documented testing regime. You may have delivered capable, reliable, and scalable software. But if you can't verify it, it will be useless and of no value because it won't see the light of day.

John V. Petersen


CODA

On First Principles
Two of my favorite technical books are Andy Hunt's and Dave Thomas' book The Pragmatic Programmer and Bob Martin's Clean Code: A Handbook of Agile Software Craftsmanship. These books are borne of what I call Actionable Theory. We know that person, the "enterprise architect" type who graces us with their presence to offer up a cookbook commentary. I say "commentary" as opposed to "answer" because all too often, the observation is devoid of actionable context. And all too often, the sentences begin with "you should…." Once they provide their "advice," like a seagull, they fly away, only to return at some later time to repeat the cycle.

For the record, the above-referenced books were published in 1999 and 2008, respectively; they've stuck around and are at least as relevant today as they were decades ago. Recently, I've given much thought to theory and its role in software development. What we do is supposed to be utilitarian. Businesses use our work product. In general, people want to hear less about theory and more about "just getting the job done."

Imagine building a house, brick by brick, without regard to theory. What would we end up with? It might all work out. That, however, is more about faith and hope. Faith and hope are not strategies. Consider the case of the first Tacoma Narrows Bridge, which opened in July 1940. It collapsed in December 1940. There were plans and engineering principles applied. But a lot of shortcuts were taken in building the bridge. Even during construction, when winds picked up, the workers could feel the movement. That is why the bridge was nicknamed "Galloping Gertie." Once things got bad enough, there was no issue with problem recognition. There was an issue with problem resolution. Long story short, five days after a course of action was decided upon, the bridge collapsed. The real issue was not the bridge's sensitivity to aeroelastic flutter. The real issue was not adhering to first principles.

What principles do you apply to building software? Here are mine:

1. Theory Matters

Whether it's building a bridge, a building, an airplane, or software, those who put practice before theory do so at their own peril. We have a very recent example with the Boeing 737 Max 8. That's especially relevant because that tragedy represents a concrete example of the intersection between a physically engineered thing and software. There's much more to learn, but so far, it seems that not much was done correctly on that project. And like the Tacoma Narrows Bridge, it appears that there's an aggressive attempt at after-the-fact remedial steps. Can that work? Sure, anything is possible. Is it likely to work? Absent good tests, there is no quantifiable way to know whether something is more likely than not to work.

In software development, history has taught us that there are certain things which, if applied, tend to coincide with success. Can there still be failure? Of course, there can be failure because other variables matter. For example, my code may be exceptionally beautifully written and do what it's supposed to do on my development rig. But if it's put on crap hardware in production, it won't matter much. Therefore, we test, whether it be unit-, load-, or performance-based.

Design patterns are another example of theory. Another good book to consider is Christopher Alexander's A Pattern Language: Towns, Buildings, Construction. If you've ever used a wiki, you can thank Ward Cunningham for that. Ward's inspiration for the wiki was born out of the Portland Pattern Repository, which was a practical result of a 1987 OOPSLA paper titled Using Pattern Languages for Object-Oriented Programs, and which drew heavily from the 1977 A Pattern Language book. Ward Cunningham collaborated with his wife Karen, a mathematician, and Kent Beck. For the record, Ward and Kent are part of the crew that created the Agile Manifesto and are cited heavily in the Clean Code book.

Although we shouldn't sacrifice delivery for purity, at the same time, we shouldn't throw things over the wall for the sake of speed, just to get something delivered. If you're thinking of the classic project management troika of Good, Fast, and Cheap: you can only choose two—you get the point. We don't build bridges to withstand 500-year floods, as that isn't practical. But we do build bridges to withstand floods. Software should be no different.

Theory matters. And while there are many reasons software can fail, without exception, one of the root causes will almost always be a failure to heed theoretical principles.

2. People, Process, and Tools—In That Order

Ultimately, people must build things. People are the ones to develop and govern processes and to develop and employ tools. Two of my favorite words to describe what's necessary for good process are Discipline and Rigor. What we do is difficult. There must be discipline to "stick to it." At the same time, we must be flexible to change when needed. That's where rigor comes in. We should be as thorough as practicably possible—but no more so. I often go back to the Agile Manifesto and the associated principles, which are often misinterpreted as meaning "no documentation." Nothing could be further from the truth. We simply value working software over comprehensive documentation. In other words, we strive to avoid the "bike-shedding" problem.

With people who can employ a good process, tools, whether they are developed or used, stand a chance. On tools, without good people and process, we end up with the "leaky abstraction": where software is supposed to buffer you from certain details, it can't quite do so completely. The result is a tool that straddles the line and thus gets in the way and isn't as useful. You know these tools. The same holds true for throwing tools at a problem. Without good people and process, A) you don't know what the right tools are, and B) even if you did, the chances that you can rigorously evaluate and use them are low.

People and process can make up for poor tooling. Good tooling is wasted on poor people and process.

3. Do the Right Thing, in the Right Way, for the Right Reason

If you're going to cut corners, why are you doing it? Can you achieve the same end? If not, you may have to scale back so that you can do it the right way. This is just another flavor of the good-fast-cheap trilogy. It's just a different way of articulating that constraint where you can only pick two of the three items.

(Continued on page 79)



